From: Ben Goertzel (ben@goertzel.org)
Date: Wed Sep 07 2005 - 20:36:30 MDT
Eli:
> >> I bet that if you name three subtleties, I can
> >> describe how Bayes plus expected utility plus Solomonoff
> >> (= AIXI) would do it given infinite computing power.
Of course, but who cares what can be done using infinite computing power?
The question is whether probabilistic inference plus an Occam's Razor
assumption is a good approach to achieving intelligence given *very finite*
computing power, which is what our seed AIs will most likely actually
have...
> First off, you didn't answer my challenge. Name three subtleties, heck,
> name one subtlety, and see what I make of it.
I am curious how you propose to handle Hempel's paradox of confirmation
using probabilistic semantics alone.
In PTL (Novamente's probabilistic inference component) we handle this sort
of thing by augmenting probability theory with other mathematics.
There are many solutions to this problem, but I'm curious which one you
advocate...
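For concreteness, here is a minimal Python sketch of one standard Bayesian
treatment (roughly I.J. Good's) -- my own illustration, not PTL's mechanism
and not a guess at Eli's answer; the counts (100 ravens, a million non-black
objects) are arbitrary assumptions. It shows that observing a non-black
non-raven does confirm "all ravens are black," but negligibly compared to
observing a black raven:

    # Toy Bayesian take on Hempel's raven paradox (illustration only).
    # H1 = "all ravens are black"; H2 = "exactly one raven is white".
    ravens = 100            # assumed number of ravens
    nonblack = 1_000_000    # assumed number of non-black objects (under H2)
    prior = 0.5             # equal priors on H1 and H2

    def posterior_h1(like_h1, like_h2):
        # P(H1 | evidence) from the two likelihoods and equal priors
        return like_h1 * prior / (like_h1 * prior + like_h2 * prior)

    # Evidence A: sample a random raven; it turns out to be black.
    # P(black | H1) = 1, P(black | H2) = 99/100.
    print(posterior_h1(1.0, (ravens - 1) / ravens))        # ~0.5025

    # Evidence B: sample a random non-black object; it is not a raven.
    # P(non-raven | H1) = 1, P(non-raven | H2) = (nonblack - 1)/nonblack.
    print(posterior_h1(1.0, (nonblack - 1) / nonblack))    # ~0.50000025

Both observations confirm H1, as Hempel's equivalence argument demands, but
the "white shoe" moves the posterior by only about one part in four million.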
> Give me one good, solid, predictive equation applying to cognitive
> systems that stems from CAS.
Well, Wolfram's division of cellular automata into
-- stable
-- periodic
-- chaotic
-- complex
led to the notion of "complexity at the edge of chaos": the idea that, in
the "parameter space" of CAs, complex CAs are generally found in the region
between the regions corresponding to periodic CAs and to chaotic CAs. This
rule often seems to hold for other complex systems besides CAs as well.
This observation is predictive in the sense that, if one knows a bunch of
systems of a certain type (e.g. CAs), together with their parameters and
their dynamical categories, then when one is given a new example system,
one can guess its dynamical category with accuracy much better than random.
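As a toy illustration of that kind of better-than-random guessing (my own
sketch, not Wolfram's method), one can take Langton's lambda -- the fraction
of 1s in an elementary CA's rule table -- as the parameter, hand-label a few
rules with their commonly cited Wolfram classes, and predict a new rule's
class by nearest neighbor in parameter space:

    # Crude dynamical-class prediction for elementary CAs (illustration only).
    def langton_lambda(rule):
        # fraction of the 8 rule-table outputs that are 1
        return bin(rule).count("1") / 8.0

    # hand-labeled examples, using commonly cited Wolfram classes
    labeled = {
        0: "stable", 32: "stable", 128: "stable",
        4: "periodic", 108: "periodic",
        30: "chaotic", 45: "chaotic", 90: "chaotic",
        54: "complex", 110: "complex",
    }

    def predict_class(rule):
        # guess the class of an unseen rule from its nearest labeled neighbor
        lam = langton_lambda(rule)
        nearest = min(labeled, key=lambda r: abs(langton_lambda(r) - lam))
        return labeled[nearest]

    print(predict_class(106))  # a crude, better-than-random guess

Nothing guarantees the guess: lambda compresses the whole rule table into
one coordinate, and for elementary CAs it is very crude (several classes
share lambda = 1/2), but the scheme illustrates the idea.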
Another example is Michel Baranger's principle of Maximum Entropy
Production, which states that in many complex systems, dynamics occurs in
such a way as to maximize the rate of entropy production. Note that this
goes beyond the second law of thermodynamics, and is NOT universal.
Baranger showed that it occurred in a simple model system (the Benard cell)
and gave some math showing why it will often hold. I recently read an
article in Nature which did not cite Baranger but which proposed the same
principle and showed that it applied to climate change in many cases. This
is predictive because it allows one to predict that, given a complex system
one knows little about, the odds are decent that it will approximately
follow a path along which the entropy production rate is near its maximum.
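As a toy illustration (my own sketch, in the spirit of Paltridge-style
two-box climate models; the numbers are arbitrary assumptions, not from
Baranger or the Nature article): let a hot box and a cold box exchange heat
at rate F, where larger F pulls their temperatures together. Entropy
production sigma(F) = F*(1/Tc - 1/Th) vanishes both at F = 0 and when the
temperatures equalize, so MEP predicts the system settles near the interior
maximum:

    # Toy maximum-entropy-production (MEP) selection in a two-box model.
    import numpy as np

    Th0, Tc0 = 300.0, 250.0   # forced boundary temperatures (arbitrary)
    k = 0.05                  # coupling: how strongly flow equilibrates boxes

    def entropy_production(F):
        Th = Th0 - k * F      # hot box cools as it exports heat
        Tc = Tc0 + k * F      # cold box warms as it imports heat
        return F * (1.0 / Tc - 1.0 / Th)

    F = np.linspace(0.0, 500.0, 5001)  # flows from zero up to equalization
    sigma = entropy_production(F)
    F_mep = F[np.argmax(sigma)]
    print(F_mep)              # MEP's prediction for the observed heat flow

And the principle is exactly the kind of soft generalization described here:
the prediction is that real systems of this sort tend to sit near F_mep,
not that they must.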
Neither of these is a definite, hard and fast, universal rule -- they are
probabilistic generalizations that seem to hold across "most" complex
systems. If one is given a complex system to study, and isn't given much
data about it, these principles definitely allow one to make crude judgments
about that system -- "better than random." But they are no substitute for
detailed analysis of particular systems; they only provide general guidance.
This may not be the kind of science that you want for AGI design or FAI
theory, but it's still real science and it's interesting. I think there are
probably ideas in complexity science that are useful for pointing the way
to more rigorous conclusions about particular classes of AGI systems.
-- Ben G