From: Krekoski Ross (rosskrekoski@gmail.com)
Date: Mon Jun 16 2008 - 21:16:06 MDT
You would run into a complexity ceiling if it were machines improving
machines without external input.
Ross
On Tue, Jun 17, 2008 at 1:19 AM, Mark Nuzzolilo <nuzz604@gmail.com> wrote:
> I'll take a swing at this.
>
> Let's start with the assumption that a machine cannot output a machine of
> greater algorithmic complexity.
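> (In Kolmogorov-complexity terms: if machine M' is the output of machine M
> run with no input, then, loosely, K(M') <= K(M) + O(1), since any
> description of M is also, up to a constant amount of extra instruction, a
> description of M'. External input is exactly what lets that bound grow.)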
> Now, for a thought experiment, put humans in that same category: a single
> human would not be able to produce something "greater" than itself. The
> details of this are unimportant. The point is that when you take a larger
> group of humans, the complexity increases, and the group can now produce a
> machine potentially greater than any single human. This machine could then
> improve the intelligence or abilities of individual humans, one at a time,
> and those enhanced humans could in turn create a still greater machine.
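> (Loosely, for algorithmically independent individuals x and y, K(xy) can
> approach K(x) + K(y), so the joint complexity of a group can exceed that
> of any one member; this is what gives the group its headroom.)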
>
> This is obviously not a "typical" RSI scenario, but if my reasoning is
> correct (correct me if I am wrong), then RSI would in theory be possible
> by abstracting this concept: apply it to specific (and properly designed)
> AGI components rather than to the components of a group of humans (the
> humans themselves).
>
> Mark Nuzzolilo
>
>
>
> On Sun, Jun 15, 2008 at 1:18 PM, Matt Mahoney <matmahoney@yahoo.com>
> wrote:
>
>> Is there a model of recursive self-improvement? A model would be a
>> simulated environment in which agents improve themselves in terms of
>> intelligence or some appropriate measure. This would not include genetic
>> algorithms, i.e., agents making random changes to themselves or to copies,
>> followed by selection by an external fitness function not of the agents'
>> choosing (a sketch of this excluded pattern follows below). It would also
>> not include simulations in which agents receive external information on
>> how to improve themselves. They have to figure it out for themselves.
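>> For concreteness, here is a minimal Python sketch (all names hypothetical)
>> of the excluded pattern: selection is driven by a fitness function that
>> the agents neither compute nor choose.
>>
>>   import random
>>
>>   def external_fitness(genome):
>>       # Supplied by the experimenter, not by the agents; this external
>>       # choice is what disqualifies the setup as a model of RSI.
>>       return -abs(sum(genome) - 42)
>>
>>   population = [[random.randint(0, 9) for _ in range(8)] for _ in range(20)]
>>   for generation in range(100):
>>       # Rank by the imposed fitness, keep the best, mutate copies.
>>       population.sort(key=external_fitness, reverse=True)
>>       survivors = population[:10]
>>       mutants = [[g + random.choice([-1, 0, 1]) for g in s] for s in survivors]
>>       population = survivors + mutants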
>>
>> The premise of the singularity is that humans will soon reach the point
>> where we can enhance our own intelligence or make machines that are more
>> intelligent than us. For example, we could genetically engineer humans for
>> bigger brains, faster neurons, more synapses, etc. Alternatively, we could
>> upload to computers, then upgrade them with more memory, more and faster
>> processors, more I/O, more efficient software, etc. Or we could simply build
>> intelligent machines or robots that would do the same.
>>
>> Arguments in favor of RSI:
>> - Humans can improve themselves by going to school, practicing skills,
>> reading, etc. (arguably not RSI).
>> - Moore's Law predicts computers will have as much computing power as
>> human brains in a few decades, or sooner if we figure out more efficient
>> algorithms for AI.
>> - Increasing machine intelligence should be a straightforward hardware
>> upgrade.
>> - Evolution produced human brains capable of learning 10^9 bits of
>> knowledge (stored using 10^15 synapses) from only 10^7 bits of genetic
>> information. Therefore we are not cognitively barred from understanding
>> our own code.
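>> (For scale: 10^7 bits is roughly a megabyte, so the learned knowledge
>> exceeds the genetic "program" by a factor of about 100, while the program
>> itself stays small enough for a human to read.)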
>>
>> Arguments against RSI:
>> - A Turing machine cannot output a machine of greater algorithmic
>> complexity.
>> - If an agent could reliably produce or test a more intelligent agent, it
>> would already be that smart.
>> - We do not know how to test for IQs above 200.
>> - There are currently no non-evolutionary models of RSI in humans,
>> animals, machines, or software (AFAIK; hence my question).
>>
>> If RSI is possible, then we should be able to model simple environments
>> with agents (of less than human intelligence) that could self-improve (up
>> to the computational limits of the model) without relying on an external
>> intelligence test or fitness function. The agents must figure out for
>> themselves how to improve their intelligence. How could this be done? We
>> already have genetic algorithms in simulated environments that are much
>> simpler than biology. Perhaps agents could modify their own code in some
>> simplified or abstract language of the designer's choosing; a sketch of
>> what such a model might look like follows below. If no such model exists,
>> then why should we believe that humans are on the threshold of RSI?
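>> To make that concrete, here is a minimal Python sketch (all names
>> hypothetical, and only a sketch of the shape such a model might take):
>> the agent's "code" is data it can inspect and rewrite, and the test for
>> improvement is a procedure the agent itself carries rather than one
>> imposed from outside. Where a trustworthy self-test could come from
>> without a designer is, of course, exactly the open question.
>>
>>   import random
>>
>>   def mutate(code):
>>       # Perturb one instruction of the agent's program-as-data.
>>       c = list(code)
>>       i = random.randrange(len(c))
>>       c[i] += random.choice([-1, 1])
>>       return c
>>
>>   class Agent:
>>       def __init__(self, code):
>>           self.code = code  # the agent's own "program", held as data
>>
>>       def self_test(self, code):
>>           # The agent's own measure of improvement: how well the code
>>           # predicts a stream the agent samples for itself.
>>           stream = [i % 7 for i in range(50)]
>>           return -sum(abs(code[i % len(code)] - x)
>>                       for i, x in enumerate(stream))
>>
>>       def improve(self):
>>           candidate = mutate(self.code)
>>           if self.self_test(candidate) > self.self_test(self.code):
>>               self.code = candidate  # adopt only self-approved rewrites
>>
>>   agent = Agent([0] * 4)
>>   for _ in range(1000):
>>       agent.improve()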
>>
>>
>> -- Matt Mahoney, matmahoney@yahoo.com
>>
>
>