From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon May 06 2002 - 00:18:55 MDT
Ben Goertzel wrote:
>
> You are very right that none of the component technologies of Novamente
> will, on their own, work for big problems. Our belief is that by *piecing
> together unscalable technologies in the right global architecture*, one can
> arrive at a system that *will* work for big problems.
>
> I understand that you don't share this belief. But you should understand
> that the Novamente design is in no way refuted by the observation that one
> or another of the component technologies, on its own, is not scalable or is
> not suitable as an AGI.
Of course. Heck, my own belief is that an AI is built out of components
which are not only "not scalable" but are in fact "not problem-solving" if
you isolate them independently. I agree that if you take several multistep
generic problem-solving algorithms and give them a common representation,
then the resulting tractable problem space will be the product, rather than
the sum, of the individually tractable problem spaces. The
problem is that this *still* carves out only a tiny corner of the overall
problem space. Giving several generic stepwise problem-solving algorithms a
common representation is certainly not a trivial design problem, but it is
still a design problem which is much easier than the functional
decomposition of general intelligence into interdependent, internally
specialized subsystems.
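(To make the product-versus-sum point concrete, here is a minimal toy sketch;
the solver names and subproblem sets are invented purely for illustration and
have nothing to do with Novamente or any actual system:)

    # Illustrative sketch only: a toy model of the "product rather than sum"
    # point. Each hypothetical solver covers a set of primitive subproblems;
    # with a shared representation, solvers can be chained, so the space of
    # two-step composite problems they jointly cover scales with the product
    # of their individual coverages rather than the sum.
    from itertools import product

    solver_a_covers = {"parse", "classify", "cluster"}          # toy sets
    solver_b_covers = {"plan", "schedule", "optimize", "rank"}

    # Without a common representation, each solver handles only its own
    # problems.
    isolated = len(solver_a_covers) + len(solver_b_covers)      # 3 + 4 = 7

    # With a common representation, any A-step can feed any B-step, so the
    # solvable two-step composites form the Cartesian product.
    composed = len(list(product(solver_a_covers, solver_b_covers)))  # 3 * 4 = 12

    print(isolated, composed)  # 7 12 -- the product beats the sum, yet both
    # remain a tiny corner of the full space of possible composite problems.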
> What baffles me is not the fact that you hold this opinion, but the
> incredible air of certainty with which you make such a statement. You *may*
> possibly be right, but there's just no way you can *know* this!!
> Goodness....
Optimists are so strange... isn't it obvious that "Idea X is right" should
be used sparingly and tentatively and "Idea X is wrong" should be used often
and firmly, whether the ideas are your own or someone else's? I can always
be wrong; that doesn't mean the other guy's notions have a better chance of
being right.
> We ran some simple "experiential interactive learning" experiments, but they
> were more at the level of a "digital roach" than a "digital doggie"... and
> then we ran into nasty performance problems with the Webmind software
> architecture (remedied with the Novamente architecture, and not connected
> with scalability problems in the underlying AI algorithms).
>
> Whether the various experiences we've had experimenting with our AI systems
> have been of any value for AGI or not, I suppose time will tell. So far our
> feeling is that they HAVE been valuable.
And my continuing project is to get you to disgorge these experiences, or
just some of the highlights, in a sufficiently concrete form that they are
useful to a next generation of cognitive scientists who are looking at them
through the lens of other theories. Or if the experiences are strong enough
evidence to force the viewer to adopt your perspective and beliefs about
them, that's fine too. But in the meanwhile, knowing that you've had
experiences that you think are valuable, or that your "experiential
interactive learning" experiments seemed to you like a "digital roach"
rather than a "digital doggie", or that you ran into "nasty performance
problems" which were "remedied with the Novamente architecture", is not
sufficiently concrete; it provides information about your beliefs about your
experiences but does not provide the experiences themselves in enough detail
for others to form beliefs about them.
> > It looks to me like another AI-go-splat debacle in the making.
>
> Yeah, I think you've probably repeated that often enough, Eliezer.
>
> I know that is your opinion, and everyone else on this list (if they have
> bothered to read all these messages) does as well.
Perhaps my memory is tricking me, but I think that if you look through the
list archives of SL4, you will find that I have never once said, on SL4 or
in any other public forum, that Novamente was an AI-go-splat debacle in the
making, until after I'd read the Novamente manuscript. My memory says that I
was very careful not to say this until I'd posted a review of your system and
given you a chance to defend it. Just FYI.
> I'll tell you one thing though. It's sure pretty easy to talk to people
> trying to do ambitious things and tell them "That won't work! You've
> underestimated the difficulty of the problem!"
>
> If you say that to 10 people trying to actually build AGI right now, you're
> bound to be right in at least, say, 8 or 9 of the cases. In which case
> you'll come out looking like you're pretty clever -- 80% or 90% prediction
> accuracy!!
Yes, I know, this is what makes it an admitted cheap shot for me to, say,
ask you what your system does to make you think it can exhibit general
intelligence. And naturally, predicting that a design will not turn into a
successful seed AI is an ABSOLUTELY safe bet at any odds - the classic
"Singularity Sucker Bet".
However, do bear in mind that I have not just predicted a generic splat of
some undefined type, but also said something about the nature of the splat,
its origins, and what its consequences will look like.
> One more thing. I have spent waaaaaay too much time e-mailing to this list
> over the last few days.
I noticed. <smile>.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence