General Question


Deterministic paradox?

Asked by Myndecho (948 points) March 23rd, 2009

Imagine you have created nanobots whose core functions are to replicate themselves and to collect the position of every subatomic particle, along with all other information about them. Having become effectively omniscient, they have the fundamental knowledge to predict every single event from that point onward: when and where earthquakes will happen, what random number I will choose, etc. I meet up with your omniscient nanobots and ask them a simple question: which hand will I put up? The regulations are, you have to tell me before I make my decision, and whatever you say I will choose the opposite.


13 Answers

Myndecho:

Of course, this ignores the Heisenberg uncertainty principle.

fireside:

They will tell you the opposite of the hand they know you will raise.

Myndecho:

@fireside
The computer has no reason to fool us; this is supposed to be a complex calculator, not a machine designed to lure us into a trap, and you know that still wouldn’t work. I already know what would happen; I just want people to think. Hint: Kurt Gödel!

fireside:

The computers wouldn’t be fooling you.
They know that if they tell you left, you will raise right, and if they tell you right, you will raise left.

So if they know that you will raise your right hand, then they will tell you left.
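fireside’s loop can be sketched as a fixed-point search: a correct announcement would have to equal the hand actually raised, but a contrarian subject makes that impossible. A minimal illustration (hypothetical code, not part of the original thread; the names `subject` and `fixed_points` are mine):

```python
# Sketch: a contrarian subject makes every announced prediction self-defeating.
# The predictor must announce "left" or "right" before the choice;
# the subject then raises the opposite hand.

def subject(announced: str) -> str:
    """Always do the opposite of what the predictor announces."""
    return "right" if announced == "left" else "left"

# A correct announcement would be a fixed point: announced == subject(announced).
fixed_points = [hand for hand in ("left", "right") if subject(hand) == hand]
print(fixed_points)  # -> [] : no announcement can be correct
```

The point is that the contradiction lives in the announcement, not in the knowledge: the predictor may know perfectly well what you will do, but no spoken prediction can survive being fed back to a contrarian.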

Myndecho:

@fireside
The computer hasn’t been designed to do this. And even then it has guessed incorrectly, even if you say it knew; just remember, I could always put up the hand it says if I wanted to.

Jayne:

The outcome in this case depends entirely on the robot, and the robot depends on that outcome to determine its actions. This is not some special pocket outside of reality, but it is somewhat bizarre, because you have assigned the robot no default behavior, no personality to obey, even though it has an influence on reality, which of course it must have. It might sit there, it might sing old Tom Lehrer songs, it might throw its hands in the air as if it did not care, all of which the robot would be able to predict and thereby foresee the future. If the robot does not have such behavior, then outcomes involving itself, which are quite common, would have no input from which to be determined, and the entire premise on which it is conceived would break down. Assuming it just sits there, perhaps making predictions about unrelated phenomena, then it would be able to predict that the contest ends in stalemate, and it would be correct; thus, no paradox.

Myndecho:

@Jayne
Ok, I’ll tell you: this is connected to Gödel’s incompleteness theorems (link).
It’s just not possible to know with 100% certainty, because the system works from its own axioms.
You can read more on it if you like, and if you want, I’ll upload a video to YouTube that will explain.

Also, paradox would be the correct term: as Wiki would put it, a paradox is a statement (or set of statements) that leads to a contradiction.

fireside:

Here’s the link

But to answer the question, I will need to see the technical specs of said hypothetical robots in order to know whether their predictions are based on internal knowledge, or whether their predictions are only valid once spoken.

What if they printed out a piece of paper that said, “I’m going to say right and you will raise your left hand,” and then told you “Right”? Then, once you raised your left hand, they would show you the paper.
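fireside’s paper trick amounts to a two-level prediction: the spoken statement is deliberately wrong, while a sealed written prediction of the whole exchange is right. A toy sketch of that idea (all names here are hypothetical, not from the thread):

```python
# Sketch of the sealed-paper trick: the robot's spoken statement is
# deliberately self-defeating, but its written prediction of the whole
# exchange (what it will say, and what the subject will then do) is correct.

spoken = "right"                       # what the robot tells the subject
written = ("say right", "raise left")  # sealed prediction of the exchange

def subject(announced: str) -> str:
    """A contrarian subject: raise the opposite of whatever is announced."""
    return "left" if announced == "right" else "right"

raised = subject(spoken)
assert (f"say {spoken}", f"raise {raised}") == written  # the paper was right
```

The trick relocates the correct prediction to a channel the subject cannot contradict, which is why Myndecho objects below that it changes the rules of the original game.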

Myndecho:

@fireside
As I said before, you could always then do what it says; just because I said I would put up the opposite hand doesn’t mean I will.
Also, what you said is different from mine; let’s stay on track.

cwilbur:

Your machine is logically impossible to begin with. To accurately model its own behavior, it would need to be at least twice as complex as it is, even after taking that extra complexity into account. That’s where your paradox lies.

(The incompleteness theorem, which is very interesting, is a total red herring here.)
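cwilbur’s point echoes the standard diagonal argument: from any predictor you can build an agent that defeats it, so no machine can correctly predict every agent that can consult its predictions. A minimal sketch, with a deliberately toy predictor (`make_contrarian` and `naive_predict` are hypothetical names of mine, not anything from the thread):

```python
# Diagonalization sketch: given ANY predictor, construct an agent
# that consults the predictor about itself and does the opposite.

def make_contrarian(predict):
    """Given a predictor, build an agent the predictor must misjudge."""
    def agent():
        # Ask the predictor what this very agent will do, then defy it.
        return "left" if predict(agent) == "right" else "right"
    return agent

# A toy predictor; any predictor whatsoever would fare the same.
def naive_predict(agent):
    return "right"

agent = make_contrarian(naive_predict)
assert agent() != naive_predict(agent)  # the prediction is always wrong
```

This is the same self-reference that powers the halting-problem proof: the impossibility comes from feeding the predictor’s answer back into the system it is predicting, not from any shortage of computing power.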

fireside:

Oh, sorry, I got off the hypothetical track, lol.

Myndecho:

@cwilbur
I agree, this machine is not possible on a mathematical level, which is a pretty big gap to get around.

fireside:

Also, you can’t change your variables to refute my possibility:

“The regulations are, you have to tell me before I make my decision and whatever you say I will choose the opposite.”
