Social Question

What do you think of this solution to Newcomb's Paradox?

Asked by LostInParadise (31903 points) January 3rd, 2015

Here is the well-known paradox, which keeps philosophers up at night. You are presented with two boxes. One is transparent, and you can clearly see that it contains $1000. The other is opaque and may or may not contain a million dollars. You are given a choice of either taking just the second box or taking both boxes.

Here is the hitch. A near-perfect computer simulation of your life has been run to predict your choice. The second box will contain the million dollars only if the computer thinks that you will take just the one box.

People split pretty much down the middle as to whether to take one box or two.

I am going to add a seemingly irrelevant twist to the problem. The whole thing is run on national television and you are the first ever contestant. I say that in this case, take the one box.

Suppose you choose two boxes. The possibilities are:
1. The second box is empty. You get a crummy $1000. You show yourself to be a greedy bastard on national television and are embarrassed to be outfoxed by the computer.
2. You get the extra million dollars. Everybody thinks you are clever for beating the system, but they don’t much like you.

Suppose you choose the one box. The possibilities are:
1. You get a million dollars.
2. You expose the computer as a fraud, which is definitely worth the $1000 you forgo. Everyone gives you sympathy for being a naive, trusting soul. You get your fifteen minutes of fame and maybe a commercial deal worth far more than the $1000.

The choice is obvious.

24 Answers

ragingloli:

Depends. Do you go for certainty or high probability?
Choosing both nets you a guaranteed $1000.
However, the computer would most likely predict that choice, so the chance of an additional million would be almost nonexistent.
Choosing just the second box does not guarantee a million dollars.
But if you choose the second box, the computer most likely has already predicted that choice, so the chance of winning the million is very high.

The choice is basically a certain $1000 or an almost certain $1,000,000.
Risk/reward tells me that taking just the second box is the best choice.

dxs:

I say take the second box. It’s a game, right? So it’s worth the risk after you’ve gone over the strategy.

dappled_leaves:

If your action doesn’t affect what is in the boxes (i.e., the boxes are packed beforehand) and there is only one choosing event, it makes no sense to leave one box unopened. Just open both and enjoy your $1000 or $1,001,000. The probabilities don’t really matter.

CWOTUS:

That keeps philosophers up at night? Jaysus, I should introduce them to Sudoku. If I get a particularly difficult puzzle I can be out in 20 minutes, if that’s my goal.

Otherwise, I could introduce them to … well … never mind.

hominid:

I can’t find the paradox here. And does the computer know your choice or not (“A near-perfect computer simulation of your life has been run to predict your choice.”)? Does this whole question depend on the word “near”?

LostInParadise:

Nobody is perfect, not even computers. Let’s take “near” to be 99.9%. Does that make a difference?

hominid:

Still don’t see the paradox here. Maybe you could elaborate. It seems that if you choose the one box, there is a 99.9% chance it contains a million dollars. If you choose both boxes, there is a 0.1% chance that the opaque box will contain a million dollars. It all rests on the precision of the computer.

LostInParadise:

There are two ways of looking at this. If you figure that the probability of the computer being right is 99.9%, then the expected value of choosing both boxes is 99.9% × $1000 + 0.1% × $1,001,000 = $2000 (when the computer is wrong, you get the $1000 plus the million it mistakenly left in the opaque box). The expected value of choosing just one box is 99.9% × $1,000,000 + 0.1% × $0 = $999,000. Looked at that way, you are better off choosing one box. On the other hand, you can say, as others here have, that the decision has already been made, so you are always better off taking both boxes.
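For the numerically inclined, here is that comparison as a minimal Python sketch (the payoffs and the 99.9% figure are the ones stated above):

```python
# Expected value of each strategy, assuming the predictor is right
# 99.9% of the time.
p = 0.999

# One box: predictor right -> $1,000,000; predictor wrong -> empty box.
ev_one_box = p * 1_000_000 + (1 - p) * 0

# Both boxes: predictor right -> just the visible $1000;
# predictor wrong -> $1000 plus the $1,000,000 left in the opaque box.
ev_both_boxes = p * 1_000 + (1 - p) * 1_001_000

print(f"one box:    ${ev_one_box:,.0f}")     # $999,000
print(f"both boxes: ${ev_both_boxes:,.0f}")  # $2,000
```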

hominid:

@LostInParadise: “On the other hand, you can say, as others here have, that the decision has already been made, so you are always better off taking both boxes.”

But these people are wrong. Saying that “the decision has already been made” or describing the color of the 2nd box doesn’t change this fact: there is a 99.9% chance that if you choose just the one box, it will contain a million dollars.

LostInParadise:

That is what makes this a paradox. Each side is completely convinced that it is right. I am not going to take a stand, but just point out that the paradox raises some interesting issues regarding free will.

dappled_leaves:

@hominid “It all rests on the precision of the computer.”

Well… no, it doesn’t. Nothing rests on the precision of the computer. The contents of the boxes do not change when you open them.

hominid:

^ And the box may be orange. If the precision of the computer is 99.9%, then there is no getting around the fact that if you decide to open the one box, there is a 99.9% chance that the computer predicted that this would happen. According to the rules of this thought experiment, this means that there is a 99.9% chance that you are opening a box containing one million dollars.

ragingloli:

“The probabilities don’t really matter.”
Of course they matter.
Let us apply that to the lottery.
“Either you win the jackpot or you do not. The probabilities do not matter.”
If someone said that about the lottery, “hey, it is just 50/50”, you would call him a bloody idiot, and rightly so.

“Your action doesn’t affect what is in the boxes.”
But every single aspect of reality and of yourself that leads to that action does, including your internal monologues contemplating how to trick the computer, as well as the mental predisposition to such deception.
To look at it differently: for all intents and purposes, the simulation that the computer ran to predict your action is you. The simulation’s action is identical to your action, and as such your action does actually affect what is in the box.
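To make that concrete, here is a toy Python sketch (my own illustration, with made-up state and function names): any plan to outwit the predictor is itself part of the state the predictor simulates.

```python
# Illustrative sketch: if your choice is a deterministic function of
# your state, running the same function on a copy of that state is a
# perfect prediction -- including any plan to trick the computer.

def choose(state: dict) -> str:
    """The agent's decision procedure; a stand-in for 'you'."""
    if state.get("plans_to_trick_the_computer"):
        return "both boxes"
    return "one box"

def predict(state: dict) -> str:
    """The predictor: simulate the agent on a copy of its state."""
    return choose(dict(state))

you = {"plans_to_trick_the_computer": True}
print(predict(you) == choose(you))  # True: the trick was simulated too
```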

hominid:

@LostInParadise: “I am not going to take a stand, but just point out that the paradox raises some interesting issues regarding free will.”

You must be right. It seems that the only way to see this in any way other than a simple 99.9% > 0.1% evaluation is to insert some version of free will that makes little sense. I’ve been struggling to understand where the disconnect is, and it must be here. Maybe there are people who believe you can trick the computer based on intention or something. Either way, they’re changing the rules of the problem and making 99.9% not equal 99.9%.

@dappled_leaves (and others) – Are you suggesting that the experiment as described is inaccurate in some way? When @LostInParadise states that the computer is 99.9% accurate, saying that there is not a 99.9% chance of a million dollars being in the box when you open just that one box amounts to asserting that the computer is not 99.9% accurate. You’re defining the rules of this thought experiment away in your answer, right?

@dappled_leaves: “Just open both and enjoy your $1000 or $1,001,000. The probabilities don’t really matter.”

The OP has already stated that the computer is 99.9% accurate. So, you can do this (open both) if you choose to go with the 0.1% chance that this is the time the computer has failed.

What am I missing?

LostInParadise:

What makes this a great problem is that there are two reasonable ways of looking at it that come to opposite conclusions. In terms of probability, just taking the second box is the right choice. On the other hand, you can reason, as some of those here have, that the decision has been made and, whatever the decision is, you are at least as well off if you take the two boxes.
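That second line of reasoning can be spelled out the same way (a minimal Python sketch of the dominance argument, in the same spirit as the expected-value sketch above):

```python
# Illustrative sketch of the two-boxer's dominance argument: with the
# contents of the opaque box already fixed, taking both boxes beats
# taking one box in every possible state.
for opaque in (0, 1_000_000):        # the two possible contents
    one_box = opaque
    both_boxes = opaque + 1_000      # add the visible $1000
    print(f"opaque box holds ${opaque:,}: "
          f"one box -> ${one_box:,}, both boxes -> ${both_boxes:,}")
# Both boxes win by exactly $1000 in each row; that is the two-boxer's case.
```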

I don’t know how many here recall who Martin Gardner was. Gardner was a journalist by training and was self-taught in math. He wrote a wonderful recreational math column for Scientific American back before the magazine dumbed itself down. I have a book containing several of Gardner’s columns, and there is one on Newcomb’s paradox.

Gardner’s conclusion, which is what I tend to go with, is that the problem shows that it is not possible to make the prediction with any accuracy. Even if we do not have free will, there are certain self-referential cases where it is not possible to predict what someone is going to do. To use Gardner’s example, if the computer predicts that you are going to go to sleep next Wednesday with your shoes on, it is easy to invalidate the prediction. I think Newcomb’s Paradox is along similar lines.
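Gardner’s loophole is easy to state in code (an illustrative sketch of my own, assuming the prediction is announced to the chooser before the choice is made):

```python
# Illustrative sketch of Gardner's self-reference point: once the
# prediction is announced to the agent, a contrarian agent falsifies
# it every time, so no announced prediction can be 99.9% accurate.

def contrarian(announced: str) -> str:
    """Do the opposite of whatever the predictor announced."""
    return "both boxes" if announced == "one box" else "one box"

for prediction in ("one box", "both boxes"):
    print(prediction, "->", contrarian(prediction))  # always mismatched
```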

hominid:

@LostInParadise: “To use Gardner’s example, if the computer predicts that you are going to go to sleep next Wednesday with your shoes on, it is easy to invalidate the prediction.”

I feel as though I missed out on a great deal by not taking advanced philosophy classes. The above sentence seems nonsensical.

You have a computer that accurately predicts what I am going to do 99.9% of the time. How is it easy to “invalidate the prediction”? That would mean the 99.9% number is incorrect. In the shoes-on example, as well as Newcomb’s “paradox”, we are led to believe that something can be both 99.9% and not 99.9%. A = not A.

If I am asked to write down the letter A or B on a piece of paper, I may take 1 second or 1 minute. I may start by thinking of choosing the letter A, but change my mind a few times and eventually choose B. Whatever the process, I will either choose A or B, and in a thought experiment that proposes a 99.9% successful prediction rate, that computer knows 99.9% of the time what will end up on paper. There is no “tricking” the computer – that’s just more input into the equation. 99.9% of the time, the computer will know what I am going to write down – even before I am aware of it.

The whole experiment is made even easier, because the answer is built into the premise. 99.9% is one of those facts that can’t be worked around without changing the premise to something else altogether.

LostInParadise:

What I am questioning is whether there could ever be a computer that could make accurate predictions about what a person will do where the person knows that the computer is making the prediction. It just seems to me that we get a situation where computer and person are trying to outguess each other. The person might think that ordinarily he might take just the one box, but he is going to try to fool the computer by taking both boxes. The complexity of the situation and the possible whimsical nature of the choice make me question whether this kind of prediction is ever going to be possible. This obviously is just my viewpoint and I have the feeling that your viewpoint differs.

ragingloli:

“To use Gardner’s example, if the computer predicts that you are going to go to sleep next Wednesday with your shoes on, it is easy to invalidate the prediction.”
Only if you tell the person that this is what the computer predicted.

dappled_leaves:

@LostInParadise “What I am questioning is whether there could ever be a computer that could make accurate predictions about what a person will do where the person knows that the computer is making the prediction”

I am still not getting why people find this situation paradoxical at all. A computer may make a prediction about my behaviour. I may know about this prediction. All fine. But why on earth would I let either of these facts influence my decisions? There is no reason for this, unless I have some kind of religious belief that the computer controls my choices.

hominid:

@ragingloli: “Only if you tell the person that this is what the computer predicted.”

But all of this is included in the data that goes into the prediction. In other words, you can’t invent a way out of this. The entire problem is simply a question of which number is greater: 99.9 or 0.1.

stanleybmanly:

I fail to see the paradox. I have 2 choices. The computer has 2 choices. I can choose a guaranteed $1000, or take a chance that a machine designed to “know” me will perform as advertised. The national television/greed/embarrassment thing is a silly red herring against a 50-50 shot at a million bucks, and the computer which “knows” me MUST realize that I would appear butt naked picking my nose at high noon on the White House lawn for one million dollars.

flutherother:

Because the assumption of a near-perfect predictor leads to a paradox, no such thing can be possible, so I would take both boxes.

dappled_leaves:

But what you choose has no effect on the calculation of the probability!
Oh, I give up. Bangs head on desk.

ragingloli:

I predict that upon reading this, you will think of an elephant.
You will also breathe manually.
