From: Philip Goetz (philgoetz@gmail.com)
Date: Mon Apr 24 2006 - 21:10:35 MDT
On 4/24/06, Richard Loosemore <rpwl@lightlink.com> wrote:
>
> I could pick up on several different examples, but let me grab the most
> important one. You seem to be saying (correct me if I am wrong) that an
> AGI will go around taking the same sort of risks with its technological
> experiments that we take with ours, and that because its experiments
> could hold existential risks for us, we should be very afraid. But
> there is no reason to suppose that it would take such risks, and many,
> many reasons why it would specifically not do the kind of risky stuff
> that we do: if you look at all the societal and other pressures that
> cause humans to engage in engineering ventures that might have serious
> side effects, you find that all the drivers behind those pressures come
> from internal psychological factors that would not be present in the
> AGI.
The problem is that the AI's goals might not be our goals. At all.
Most people would say we haven't taken many outrageous risks in
settling America, and yet the wolf population has declined by a
factor of as much as a thousand. How did that happen? Not because we
set out to take risks with wolves, but because we were pursuing our
own goals and the wolves simply didn't figure into them.
(Aside: It is worth asking whether a free AI or an AI controlled by
humans would be more dangerous.)