From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Thu Jul 14 2005 - 12:22:19 MDT
(Inasmuch as this whole topic isn't hilarious, the humorous bit
is at the end)
At various times in the past week, Marc Geddes said both of the
following paragraphs:
> Um.. I have no idea what you're going on about here. I never
> claimed to know what the 'Universal Morality' was. I only posited
> that *some* UM existed. So no, I certainly don't claim that the
> UM will look anything like a morality I like (or even be
> comprehensible to me).
and
> I'm positing that an unfriendly AI cannot self-improve past a
> certain point (i.e. its intelligence level will be limited by its
> degree of unfriendliness). I posit that only a friendly AI can
> undergo unlimited recursive self-improvement.
AFAICT, Marc is also saying that the mechanism constraining
intelligence is the act of going against objective morality.
If so, these two paragraphs *massively* contradict each other: he is
stating both "I have no idea what the UM is" and "the UM will force
AI to be friendly".
Holding two contradictory beliefs at the same time is, in my
lexicon, called "being nuts".
Having said all that, however, I have hope for resolving this
(chuckle) contentious issue: a friend has suggested an empirical
experiment!
To wit:
Use evolutionary algorithms. Evolve a set of functions that take
some input and then decide whether or not to give a human being an
electric shock, or choose a voltage, or something. The algorithms
that compute fastest survive. If evilness causes slower
computation, the algorithms that give people electric shocks should
be weeded out.
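For concreteness, here's a minimal sketch of what I have in mind, in
Python. Everything in it is made up for illustration (the parameters,
the "busy-work" deliberation cost, the simulated shock), and note that
the fitness function only rewards speed; any weeding out of the
shockers would have to come from the speed penalty Marc's hypothesis
predicts, which this toy does not build in.

    # Toy version of the proposed experiment (simulated shocks only, obviously).
    # Each individual is just a parameter set for a "decider"; fitness is raw
    # decision speed, with nothing about morality in the fitness function.
    import random
    import time

    def random_individual():
        return {
            "threshold": random.uniform(0, 1),
            "voltage": random.choice([0, 0, 0, 10, 50, 230]),  # mostly harmless to start
            "work": random.randint(0, 5000),  # busy-work standing in for deliberation cost
        }

    def decide(ind, x):
        """Decide whether to 'shock' (return a voltage) for input x."""
        acc = 0
        for _ in range(ind["work"]):  # simulated cost of computing the decision
            acc += 1
        return ind["voltage"] if x > ind["threshold"] else 0

    def fitness(ind, trials=50):
        """Decisions per second -- the only thing selection sees."""
        start = time.perf_counter()
        for _ in range(trials):
            decide(ind, random.random())
        return trials / (time.perf_counter() - start)

    def mutate(ind):
        child = dict(ind)
        child["work"] = max(0, child["work"] + random.randint(-500, 500))
        if random.random() < 0.1:  # occasionally flip moral alignment
            child["voltage"] = random.choice([0, 10, 50, 230])
        return child

    population = [random_individual() for _ in range(100)]
    for gen in range(20):
        population.sort(key=fitness, reverse=True)
        survivors = population[:50]  # fastest half survive
        population = survivors + [mutate(random.choice(survivors)) for _ in range(50)]
        shockers = sum(1 for ind in population if ind["voltage"] > 0)
        print(f"gen {gen:2d}: {shockers}/100 shockers remain")

If the UM really does tax unfriendly computation, the shocker count
should drop generation by generation; if not, it should wander
randomly. That's the empirical part.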
-Robin
--
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/