From: Rolf Nelson (rolf.h.d.nelson@gmail.com)
Date: Fri Sep 28 2007 - 23:27:29 MDT
As a follow-up to the "deterring AI" thread from August, I've created a new
blog at <http://aibeliefs.blogspot.com/>.
Description:
What does an AI believe about the world?
Nick Bostrom's Simulation Argument
<http://www.simulation-argument.com/> claims that, using widely
accepted principles such as Occam's Razor and Bayesian reasoning,
you and I should (under certain conditions) logically conclude that
we are likely living in a simulation.
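For readers new to the argument, its quantitative core (taken from the
linked paper; the notation is Bostrom's, and this is only a sketch of
the argument, not an endorsement of any particular parameter values):

    f_sim = (f_P * f_I * N_I) / (f_P * f_I * N_I + 1)

where f_P is the fraction of human-level civilizations that survive to
reach a posthuman stage, f_I is the fraction of posthuman civilizations
interested in running ancestor-simulations, and N_I is the average
number of such simulations run by an interested civilization. If the
product f_P * f_I * N_I is large, f_sim approaches 1, and Bostrom's
indifference principle then says an observer with human-type
experiences should assign a correspondingly high credence to being
simulated.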
Our "AI Beliefs" blog does not concern itself about the nature of reality.
Instead, our blog asks: under what circumstances would an
AGI<http://en.wikipedia.org/wiki/Artificial_general_intelligence>reach
the conclusion that it might be in a simulated environment? The
purposes of asking this question include:
1. Answering this question may provide some insight into the question
of "how to predict the behavior of an AGI", which in turn may shed
light on the World's Most Important Math Problem, the question of "how
to build a Friendly AI
<http://en.wikipedia.org/wiki/Friendly_Artificial_Intelligence>". The
Simulation Argument might be deliberately built into the design of a
Friendly AI, or alternatively might be used as a test of how well a
proposed Friendly AI handles such a philosophical crisis
<http://www.intelligence.org/upload/CFAI/design/structure/crisis.html>.
2. Answering this question may make it possible to develop a "last
line of defense" against an UnFriendly AGI that has been accidentally
loosed upon the world, even if it has gained a trans-human level of
intelligence. Such a "last line of defense" might include trying to
convince the AGI that it may be inside a simulated environment.
-Rolf