From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Wed Jun 02 2004 - 10:44:46 MDT
Ben Goertzel wrote:
>
> Even once AI science is far more advanced, there may still be many
> different ways of creating AIs, and many different ways of creating
> Friendly AIs. AI science, at its present primitive stage, certainly
> doesn't rule this out.
>
> If we're lucky, there will be a unified theory of AI that tells us which
> ways of creating AI will be successful at creating intelligence, which
> ways will be successful at creating FAI, etc.
>
> I see no evidence that Eliezer or anyone else possesses a rigorous,
> demonstrated unified theory of this nature, at the present time.
Just don't forget that this theory is *needed*, and that philosophy is not
an acceptable substitute for it. I now know enough to realize this, though
I don't yet have as much theory as I need; in retrospect it was obvious
enough. *Of course* you can't build a Friendly AI without knowing what
you're doing! It would be like Greek philosophers trying to build an
airplane. Why was I ever foolish enough to think otherwise? Because I did
not know enough of the rules to know that knowing the rules was necessary.
Well, now I know.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence