From: Lee Corbin (lcorbin@rawbw.com)
Date: Thu Jun 26 2008 - 22:45:51 MDT
John Clark writes
> Lee said:
>
>> In my last post, I outlined in the vaguest possible terms how an
>> entity might acquire certain behavioral characteristics or goals
>
> So if goals are so easy to generate how can "be a slave to Humanity" be
> static and utterly sacred for the next ten billion years?
Well, I'm pretty sure that you're overstating the case that your
adversaries are trying to make; you're certainly overstating mine!
Instead of "utterly sacred for ten billion years", I'll settle for "better
than an even chance for the next hundred" :-) though even that is
asking quite a bit. In reality, don't you think that while some people
are charging ahead attempting to write an AI, or to fashion
environments in which it might arise, they should investigate
how some constraints might work or, perhaps more realistically,
how some biases might be set up? You and I, after all,
have ended up here with biases that are very favorable to animals
(in some ways), and while we may sanction mass breeding and
killing for our lunch plates, we also have a lot of sympathy for
certain animals in distress.
So if it's happened to us, it's not entirely out of the question that
it could happen with our successors, and not out of the question
that we *might* be able to tilt the table a little bit to help it happen.
Lee