From: Stathis Papaioannou (stathisp@gmail.com)
Date: Mon Aug 27 2007 - 20:42:06 MDT
On 27/08/07, Norman Noman <overturnedchair@gmail.com> wrote:
> I'd like to say that CEV would both make people smart enough to realize
> religion is a load of hooey, and prevent people from threatening each other
> with simulations, but frankly I don't know what CEV does, it seems to be
> more of a mysterious treasure map than an actual target.
What would the CEV of the Pope or Osama Bin Laden look like? I
wouldn't discount the possibility of a theocratic FAI, unpleasant
though it may be to contemplate.
> > The probability that some
> > member of the movement will succeed at some point in the future of the
> > universe will then determine the probability that you are now in the
> > simulation.
> >
> >
> > If the movement further stipulates that the simulation
> > will be recursive - simulations within simulations - you could argue
> > that you are almost certainly in one of these simulations.
>
> Except that, under the hypothesis where everybody and his brother is allowed
> to simulate the universe, there would be billions of recursive simulations
> and you might be in any one of them. The difficulty in calculating the
> average effect is partially due to complexity, but also due to the basic
> implausibility of this hypothetical situation.
That's right, and my point is that for this reason the only rational
course of action is to ignore the possibility of a simulation.
> In contrast, Rolf's plan is quite plausible, because it's something that
> benefits everyone. Not just humanity and the Friendly AI, but the Rogue AI
> too. If everyone cooperates, then whether mistakes are made or not, humanity
> will be saved and C will be calculated.
I think there would be more people interested in promoting their
religion or increasing their profits than would be interested in
making their descendants' future safe from an RAI. This might not be
rational or moral or whatever, but it's what people would do.
> > > And the thing is, it's not going to work either way. If they reveal the
> > > plan, and say SMOKE OR DIE! it's only going to make everyone hate them even
> > > more. Actually, it's only going to make everyone laugh at them and think
> > > they're nuts, but assuming they were taken seriously for some reason...
> >
> > The religious people would convince the faithful that they were doing
> > God's work, as explained above. But even if it's tobacco companies,
> > the fact that it's obviously an evil threat doesn't make it any less
> > likely to be true. And if it actually got to the point where
> > legislation was passed to make this sort of thing illegal, that would
> > be the ultimate proof that people were taking it seriously.
>
> Are you playing the devil's advocate or do you really think it's even
> remotely likely that big tobacco would invest in a karmic simulation of the
> universe in order to get people to smoke?
As you put it, everybody and his brother could join in, with the
result that the only rational action would be to ignore the
possibility of a simulation.
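To put toy numbers on that (the figures below are invented purely for
illustration, not drawn from anyone's actual plan): even granting a high
chance of being in some simulation, once thousands of competing groups
each threaten punishment for failing to do a different thing, the weight
any single threat carries becomes negligible.

# Toy arithmetic, with made-up numbers, for why competing simulation
# threats dilute one another into irrelevance.
p_any_simulation = 0.9        # grant a generous chance of being simulated at all
competing_threats = 10000     # everybody and his brother runs one
p_this_threat = p_any_simulation / competing_threats
penalty_if_ignored = 10**6    # utility lost if this particular threat is genuine
print(p_this_threat * penalty_if_ignored)   # 90.0, and the same dilution
                                            # applies to every other threat
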
> > > If I am an RAI, the evidence that I may be in a simulation is as follows:
> > >
> > > "The people who created me were trying to create a FAI. With more luck,
> they
> > > would have succeeded. The FAI they created would run karmic simulations
> of
> > > likely RAIs that could have been created instead.
> > >
> > > Ergo, the probability I am one of them is XY/(1-X)Z, where X is the
> > > likelihood they'd create a friendly AI rather than a rogue one, Y is the
> > > likelihood a friendly AI would simulate me specifically, and Z is the
> > > likelihood I would be created as a result of real human error."
> >
> > These are difficult things to reason about. What about the possibility
> > that you or the RAI might be in a recursive simulation?
>
> If everyone simulates each other, their influence is proportional to their
> likelihood of existing in the first place. Since FAI is the target, it's
> probably the most likely to exist.
>
> Everyone with enough influence to do so manipulates everyone else into
> cooperating with them, and the resulting actions of all the AIs, even the
> FAI, cohere to the extent of satisfying all the most likely goals to the
> extent that they are mutually satisfiable, including probably a zillion
> different definitions of friendliness, calculating C, and whatever else is
> likely to crop up by mistake.
>
> I don't see how recursive simulations, if the primary simulator bothers to
> actually run them at all, would make a difference. They would just be more
> reasons to do the same things already being done.
It makes a difference to the probability calculations. In the simple
case, if you can be sure that one simulation has been run, you have a
1/2 chance of being in that simulation. But if a recursive simulation
has been run, you have a much higher chance of being in one of the
nested simulations. If an actual Turing machine with infinite cycles available
to it exists somewhere (and a priori there is no reason to suppose
that this is impossible, even if it isn't possible in the universe we
observe), then we might almost certainly be living in a simulation.
But this realisation should have no effect on our behaviour.
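To put rough numbers on the nesting argument (assuming, purely for
illustration, that an observer is equally likely a priori to be at any
level of the stack):

def p_simulated(depth):
    # One base reality with `depth` nested simulations below it:
    # depth + 1 levels in all, only one of which is the base level.
    return depth / (depth + 1.0)

for depth in (1, 10, 1000, 10**6):
    print(depth, p_simulated(depth))
# depth 1 reproduces the 1/2 of the simple case; as the nesting deepens,
# the chance of being simulated approaches 1, the "almost certainly
# living in a simulation" case.
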
-- Stathis Papaioannou