From: Samantha Atkins (samantha@objectent.com)
Date: Mon Jun 17 2002 - 16:51:41 MDT
Gordon Worley wrote:
>
> On Monday, June 17, 2002, at 05:31 AM, Samantha Atkins wrote:
>
>>> First off, attachment to humanity is a bias that prevents rational
>>> thought.
>>
>>
>>
>> Rational? By what measure? How is attachment to the well-being of
>> ourselves and all like us irrational?
>
>
> Eliezer addressed this in his reply to this thread earlier. It is
> irrational if the attachment is blind. You must have some reason that
> you need to stay alive, otherwise provisions for it will most likely get
> in the way of making rational decisions.
>
There must be some core, some set of fundamental values, that is
unassailable (at least at a point in time) for an ethical system
to be built upon. It is only in the context of such a core that
the question of "some reason" can even be addressed meaningfully.
The life and well-being of sentients *is* part of my core. It is
not itself subject to further breakdown into reasons; to break it
down further would require yet another core value in terms of
which this one could be examined. A large part of my questioning
here is an attempt to determine what that core is for the
various parties.
>> Whether we transform or simply cease to exist seems to me to be a
>> perfectly rational thing to be a bit concerned about. Do you see it
>> otherwise?
>
>
> Sure, you should be concerned. I think that the vast majority of
> humans, uploaded or not, have something positive to contribute, however
> small. It'd be great to see life get even better post Singularity, with
> everyone doing new and interesting good things.
>
Then we shouldn't shoot for any less, right?
>>> I and others have broken this attachment to keep it from clouding our
>>> thinking.
>>
>>
>> So you believe that becoming inhuman and uncaring about the fate of
>> humanity allows you to think better?
>
>
> If only it were easy to become inhuman, but it's not.
>
There is a mine of semantics hidden in this!
> Uncaring is inaccurate. I do care about humans and would like to see
> them upload. I care about any other intelligent life that might be out
> there in the universe and helping it upload. I just don't care about
> humans so much that I'd give up everything to save humanity (unless that
> was the most rational thing to do).
>
On what basis will you judge what is rational? In terms of what
supergoals, if you will?
>>> It is the result of being genetically related to the rest of
>>> humanity, where the death of all human genes is a big enough problem
>>> to cause a person to give up a goal or die to save humanity.
>>
>>
>> I do not necessarily agree that we can just write it off as a genetic
>> relatedness issue at all. Whether there is sentient life and whether
>> it continues, regardless of its form, is of intense interest to me.
>> That some forms are not genetically related is not of high relevance
>> to the form of my concern. So please don't assume that explains it
>> away or makes the issue go away. It doesn't.
>
>
> There is an ethical issue; however, the irrational attachment is the
> result of relatedness. A proper ethic is not so strong that it prevents
> you from even thinking about something, the way evolved ethics do.
>
You can call it "irrational" all you wish. I consider it the
very bedrock of rationality in our current context.
>>> Some of us, myself included, see the creation of SI as important
>>> enough to be more important than humanity's continuation. Human
>>> beings, being
>>
>>
>> How do you come to this conclusion? What makes the SI worth more than
>> all of humanity? That it can outperform them on some types of
>> computation? Are computational complexity and speed the sole measure
>> of whether sentient beings have the right to continued existence? Can
>> you really give a moral justification or a rational one for this?
>
>
> In many ways, humans are just over the threshold of intelligence.
Whose threshold? By what standards? And how are those standards
established and verified as standards of value?
> Compared to past humans we are pretty smart, but compared to the
> estimated potentials for intelligence we are intellectual ants. Despite
So we are to think less of ourselves because of estimated
potentials? Do we consider ourselves expendable because an SI
comes into existence that is a million times faster and more
capable in the scope of its creations, decision-making, and
understanding? This does not follow.
> our differences, all of us are roughly of equivalent intelligence and
> therefore on equal footing when deciding whose life is more important.
It would be best to avoid, as far as possible, ever needing to
decide any such thing.
> But, it's not nearly so simple. All of us would probably agree that
> given the choice between saving one of two lives, we would choose to
> save the person who is most important to the completion of our goals, be
> that reproduction, having fun, or creating the Singularity. In the same
> light, if a mob is about to come in to destroy the SI just before it
> takes off and there is no way to stop them other than killing them, you
> have on one hand the life of the SI that is already more intelligent
> than the members of the mob and will continue to get more intelligent,
> and on the other the lives of 100 or so humans. Given such a choice, I
> pick the SI.
>
But that is not the context of the question. The context is
whether the increased well-being and possibilities of existing
sentients, regardless of their relative current intelligence, is
a high and central value. If it is not then I hardly see how
such an SI can be described as "Friendly".
> In my view, more intelligent life has more right to the space it uses
> up. Of course, we hope that intelligent life is compassionate and is
> willing to share. Actually, I should be more precise. I think that
> wiser life has more right to the space it uses (but you can't be wiser
> without first being more intelligent). I would choose a world full of
> dumb humans trying hard to do some good over an Evil AI.
>
If it is not willing to share in the sense of respecting the
life and well-being of other sentients then I consider it
neither "intelligent" or desirable. Do not use the disputed
word "wise" to describe something that is only more intelligent.
I dispute that the "wise" will destroy other sentients when
it is within their means to preserve them.
>>> self-aware, do present more of an ethical dilemma than cows if it
>>> turns out that you might be forced to sacrifice some of them. I
>>> would like to see all of humanity make it into a post Singularity
>>> existence and I am willing to help make this a reality.
>>
>>
>> How kind of you. However, from the above it seems you see them as an
>> ethical dilemma greater than that of cows but if your SI, whatever it
>> turns out really to be, seems to require or decides the death of one
>> or all of them, then you would have to side with the SI.
>>
>> Do I read you correctly? If I do, then why do you hold this
>> position? And if I do, how can you expect the majority of human
>> beings, if they really understood you, to consider you anything
>> other than a monster?
>
>
> If an SI said it needed to kill a bunch of humans, I would seriously
> start questioning its motives. Killing intelligent life is not
> something to be taken lightly and done on a whim. However, if we had a
> FAI that was really Friendly and it said "Gordon, believe me, the only
> way is to kill this person", I would trust in the much wiser SI.
>
OK, that seems better. But how would you evaluate how Friendly
this superintelligence really was?
> This is the kind of reaction I expect and, while I'm a bit disappointed
> to get so much of it on SL4, it is why I generally avoid pointing this view out. I
> never go out of my way to say that human life is not the most important
> thing to me in the universe, but sometimes it is worth talking about.
I would be very disappointed if you did not get such reactions
to your original statement.
- samantha