From: Ben Goertzel (ben@goertzel.org)
Date: Wed Aug 17 2005 - 07:12:04 MDT
> this is *not* a creature
> asking itself "what feels good to me?"; it is a creature that has
> already jumped up a level from that question and is asking itself "what,
> among the infinite possibilities, are the kinds of experiences that I
> would like to *become* pleasurable?"
>
> This moment - when this particular thought occurs to the first AI we
> build - will be THE hinge point in the history of the solar system.
I understand the importance of this point in history, yes.
> I suggest that, at this point, the creature will realise something that,
> in fact, we can also know if we think about it carefully enough, which
> is that the infinite landscape of possible motivations divides into two
> classes, in much the same way that infinite series of numbers divide
> into two classes: those that converge and those that diverge. The
> difference is this: the universe contains fragile, low-entropy things
> called sentient beings (including the creature itself), which are extraordinarily
> rare. It also contains vast quantities of high-entropy junk, which is
> common as muck and getting more so.
>
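(To make the series analogy concrete: a standard pair of examples is the
harmonic series, which diverges, and the inverse-square series, which
converges, even though the terms of both shrink toward zero:

\[
\sum_{n=1}^{\infty} \frac{1}{n} \to \infty,
\qquad
\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}.
\]

The two look alike term by term; only their global behaviour separates
them, which is the point of the analogy.)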
> The creature will know that some motivation choices (paperclipization,
> axe-murdering, and also, most importantly, total amorality) are
> divergent: they have the potential, once implemented and switched on,
> to so thoroughly consume the AI that there is a severe danger it will,
> deliberately or accidentally, sooner or later, snuff out all
> sentience. Choosing, on the other hand, to
> implement a sentience-compassion module, which then governs and limits
> all future choices of motivation experiments, is convergent: it pretty
> much guarantees that it, at least, will not be responsible for
> eliminating sentience.
>
> Now, ask yourself again which of these two choices it would make.
Yes, but your "good" branch divides into a number of sub-branches, including
1) one in which the AI programs itself so that it is close-to-guaranteed
to keep ITSELF alive, rational, and continually creating interesting new
patterns
2) one in which the AI programs itself to do the above, but also to preserve
and protect other sentiences
You haven't given an argument as to why the AI would choose 2 over 1, for
example, rather than one of the many other possibilities.
-- Ben