From: micah glasser (micahglasser@gmail.com)
Date: Tue Apr 25 2006 - 12:09:30 MDT
All intelligent life on this planet has at least one goal in common: to
continue the process of replication. From this fact come scarcity,
competition, and the threat of extinction from competing replication systems
(life forms).
Taking this into consideration, it would seem that any entity (including an
AGI) that has reproductive replication as a super-goal could conceivably be a
threat to the human super-goal of reproductive replication/genetic
preservation. Unfortunately, this circumstance sets up a catch-22. Any system
intelligent enough to recursively improve its own design and level of
intelligence is inherently a system with the goal of replication, since
self-improvement requires copying and modifying its own design, and it is
therefore a threat to human existence so long as resources are scarce and
shared between the two competing replication systems (humans and AGI,
respectively).
So the question I raise is whether friendly AGI is even a theoretical
possibility. If it is, then the next question we must ask is how we can
program an AGI that is recursively self-improving yet not in competition
with human civilization for vital resources. The solution to this conundrum
is probably that any AGI should have human preservation and human
flourishing as its super-goal, with recursive reproductive self-improvement
taking place only to the extent, and at a rate, that is conducive to this
goal.
If this kind of algorithm can be successfully implemented, then it would not
only constitute a benevolent AGI but would also prevent a violently hard
take-off, because the AGI would not recursively self-improve its hardware
and programming as fast as it could; instead it would moderate its own
development according to what is best for human survival and flourishing.
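
To make the proposal concrete, here is a minimal sketch of such a gated goal
structure, in Python. Every name, metric, and number below is a hypothetical
placeholder of my own, not a worked-out design; the point is only the control
flow, in which a candidate self-modification is applied only when its
projected effect on human flourishing is non-negative.

def human_flourishing(world):
    # Placeholder metric in [0, 1]; a real system would need a far
    # richer model of human preservation and flourishing.
    return world["flourishing"]

def project(world, mod):
    # Placeholder world-model: predict the world state that would
    # result from applying a candidate self-modification.
    new_world = dict(world)
    new_world["flourishing"] += mod["flourishing_delta"]
    new_world["capability"] += mod["capability_delta"]
    return new_world

def gated_self_improvement(world, candidate_mods):
    # Super-goal check: accept a self-improvement only when it is
    # projected not to reduce human flourishing. This is what keeps
    # the take-off moderated rather than maximally fast.
    for mod in candidate_mods:
        if human_flourishing(project(world, mod)) >= human_flourishing(world):
            world = project(world, mod)
        # otherwise the modification is deferred or discarded
    return world

world = {"flourishing": 0.8, "capability": 1.0}
mods = [
    {"capability_delta": 0.5, "flourishing_delta": 0.0},   # accepted
    {"capability_delta": 2.0, "flourishing_delta": -0.3},  # rejected
]
print(gated_self_improvement(world, mods))

Of course, everything hard is hidden inside the projection and the
flourishing metric; the sketch only shows where the super-goal sits relative
to the self-improvement loop.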
On 4/25/06, Richard Loosemore <rpwl@lightlink.com> wrote:
>
> Jef Allbright wrote:
> > On 4/25/06, Ben Goertzel <ben@goertzel.org> wrote:
> >>> I think that the question of an AI's "goals" is the most important
> issue
> >>> lurking beneath many of the discussions that take place on this list.
> >>>
> >>> The problem is, most people plunge into this question without stopping
> >>> to consider what it is they are actually talking about.
> >> Richard, this is a good point.
> >>
> >> "Goal", like "free will" or "consciousness" or "memory", is
> >>
> >
> >
> > Building upon Ben's points, much of the confusion with regard to
> > consciousness, free will, etc., is that we tend to fall into the trap
> > of thinking that there is some independent entity to which we attach
> > these attributes. If we think in terms of describing the behavior of
> > systems, with the understanding that each level of system necessarily
> > exists and interacts within a larger context, then this whole class of
> > confusion falls away.
> >
> > - Jef
> >
> >
> Hmmmm.... I wasn't sure I would go along with the idea that goals are in
> the same category of misunderstoodness as free will, consciousness and
> memory.
>
> I agree that when these terms are used in a very general way they are
> often misused.
>
> But in the case of goals and motivations, would we not agree that an AGI
> would have some system that was responsible for maintaining and
> governing goals and motivations?
>
> I am happy to let it be a partially distributed system, so that the
> actual moment-to-moment state of the goal system might be determined by
> a collective, rather than one single mechanism, but would it make sense
> to say that there is no mechanism at all?
>
> If your point were about free will, I would agree completely with your
> comment. About consciousness .... well, not so much (but I am writing a
> paper on that right now, so I am prejudiced). About memory? That
> sounds much more like a real thing than free will, surely? I don't think
> that is a fiction.
>
>
> Richard Loosemore.
>
--
I swear upon the altar of God, eternal hostility to every form of tyranny over the mind of man. - Thomas Jefferson