The Grauniad has just resurrected Newcomb’s problem. I have a slightly special interest since the problem was popularized by one of my bêtes noires, Robert Nozick. So, in asserting that there’s a trivial solution, I have something of a bias.
Having read about the problem, it’s natural to consider the question: how likely is it that such a Superbeing exists? If the probability is high, say 1 in 10, my response is obvious. I should cultivate the kind of satisficing disposition that would lead me to pick only Box B. If the Being appears, and correctly predicts my action, I’m rewarded with a million currency units. My expected gain (ignoring the possibility that the Being makes an error) is 0.1 × 999,000, or 99,900.
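Spelling that out (a quick sketch, using the standard million/thousand payoffs of the problem): the disposition pays off only if the Being appears, and the gain in that case is the million from Box B less the forgone thousand from Box A:

\[
E[\text{gain}] = 0.1 \times (1{,}000{,}000 - 1{,}000) = 0.1 \times 999{,}000 = 99{,}900 .
\]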
There are some wrinkles in decision theory here. A risk-averse expected utility maximizer would value a 0.001 chance of a million units at less than the certainty of a thousand units. On the other hand, my academic fame, such as it is, rests on the idea that decision-makers will overweight low-probability extreme events like this.
But this isn’t really a problem. The probability of such a Being is tiny (I’d happily take a million to one against), so overweighting doesn’t matter. And (as with Pascal’s wager) it’s just as likely that any Being who might present me with a problem is a Trump/Gekko believer in “Greed is Good”. So, I won’t bother cultivating a disposition. Should Newcomb’s Being appear, I’ll admit I bet wrong, take both boxes, and hope for the best.
An amusing Xmas ditty… a bit like the old Reader’s Digest favorite about the cannibals and the missionary, who distracted them at first by answering “You will stew me” to their statement that they would stew him if his next statement was untrue, and otherwise roast him. According to RD they were so impressed by his intelligence that they let him go. But the true story is that confusion didn’t overcome their hunger for Long Pig. So they thought laterally, not being constrained by western philosophical bull@#$%, and said “bugger this, we’ll eat him raw and discuss economic game theory over dessert.”
Along these lines I also offer the following responses to the problem posed:
– A superintelligent being (assuming it is constrained by time – if not, time paradoxes would allow it to fudge past, present and future, and the result would have nothing to do with any utility-maximising choice of mine) would have better things to do than conduct stupid tests on flatworm equivalents to satisfy very silly philosophers, especially when their finding patterns in randomness was sufficient to keep them happy and believing s/he was something they weren’t.
– If I knew about this superintelligent being, I would realize that I effectively had no free will, and so, being a contrarian, would lash out by introducing randomness into the decision (like Trump and Brexit voters). Not trusting the being not to load the dice or the two-up coin, I would base my selection on the output of a perfect quantum-mechanical random number generator – probably the same kind used on Schrödinger’s Cat, or apparently on lava lamp bubbles – so gaining pleasure from pissing off this superintelligent being, who could not actually predict the choice thanks, possibly, to the rules they made up, or to the Reader’s Digest cannibals.
The answer of course in this case would be to take both, since with randomisation the being’s prediction is right only with probability 0.5, I think (see the sketch after this list). The supreme being might be able to predict my choice in the absence of the random number generator, but the latter’s introduction would stump them, like the Irishman who was offered two shovels and told to take his pick.
– I live in the Matrix so why bother?
– I would be too distracted to make a decision by the problem raised of how to reconcile this super-intelligent being, who supposedly loves us, with all the evil and pointless pain in the world (like someone after a stroke, or dying from secondary prostate cancer). Does superintelligence equate to the mentality of a child who enjoys pulling the wings off flies, or of a being who says “being good may be in your job description; mine is to be worshipped for eternity”, while being boring as hell? Which makes one wonder whether Trump actually is the true image of God/the supreme being?
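A minimal sketch of that coin-flip expected-value point (the $1,000 / $1,000,000 payoffs and the 50% figure are illustrative assumptions, not spelled out above):

```python
# If a quantum coin makes the Being's prediction no better than chance,
# the contents of Box B are independent of the actual choice, and
# two-boxing dominates. Payoffs assumed: $1,000 (Box A, certain) and
# $1,000,000 (Box B, present only if one-boxing was predicted).
P_MILLION_IN_B = 0.5  # an unpredictable chooser gets one-boxed half the time

ev_one_box = P_MILLION_IN_B * 1_000_000
ev_two_box = 1_000 + P_MILLION_IN_B * 1_000_000  # Box A adds a sure $1,000

print(f"One box:   {ev_one_box:>9,.0f}")   # 500,000
print(f"Two boxes: {ev_two_box:>9,.0f}")   # 501,000
```

However the probability shakes out, as long as Box B’s contents are independent of the actual choice, two boxes beat one by exactly the visible $1,000.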
Expected utility theory – to the best of my knowledge – does not involve a magical ‘super-intelligent’ being (effectively characterised as a being which makes a decision exactly at the time the player has to decide; alternatively, the game is allowed to be changed). I am with David Edmonds; take both boxes.
I have always been puzzled by this claimed paradox. It has always seemed uninteresting to me because of the assumption that such a superintelligent being exists. That weird assumption is responsible for the difficulty and for the paradox. If you make weird assumptions, paradoxes seem inevitable. I’d just take both boxes.
I am puzzled as to why this puzzle would be a question about the existence of a super-being.
Those who believe in God would not stop even a nanosecond to think about such a question, because they already believe – and that is the majority of people. So I am puzzled by such reasoning.
This question is about the greed of individual people, and also about people who are in financial difficulties. Both of those groups are prone to taking more risks and would choose the riskier option that supposedly brings more benefit.
A third segment that would choose only one box is the religious righteous, who believe that their righteousness should be rewarded with money from God.
So this would be a question that provides an answer on the state of mind of a test group or a society. Given that “only one box” has won, it says that the society is more prone to taking risks, and also that it is in financial difficulties, which make it prone to taking more risks.
In times of economic collapse there is a clear increase in casino visits, churchgoing and alcohol sales. Depending on personality, a person will choose one or two of those.
So this Newcomb’s problem is getting another look due to the economic problems of the present.
@Newtownian
In what sense would you not have ‘free will’ if you can decide to use a random generator?
I thought the evidence of the existence of this superbeing would be the existence of the test, i.e. the table, boxes and question. What is debatable is the degree of intelligence and powers that this superbeing has; if its smarts are based only on its self-assessment then we could have placed too much faith in a dud superbeing.
It’s been done in the past and is happening right now – Brexit, Trump and our very own PM.
I’m inclined to take box A and run.
The existence or non-existence of the supreme being is irrelevant, as it’s axiomatic in the world of the problem.
In the problem as put, you should take both boxes, as explained by Dr Edmonds in the Guardian article.
The problem is framed as ‘the supreme being has made a prediction about what you will do … the supreme being’s past predictions have always been right.’ Big deal. This time She may be wrong. It’s a very different scenario from ‘You know that the supreme being is always right’ as interpreted by Dr Ahmed for the ‘take only box B’ case.
The version of the problem on Wikipedia does frame it as ‘the supreme being is infallible’, which is a different matter. If the supreme being in some magical way had infallible prior knowledge of what you would do, and filled the boxes accordingly, then you would take box B only.
Given that you have free will, saying ‘the supreme being by magical means had infallible prior knowledge of what you will do’ is as plausible as (and may be equivalent to?) saying ‘the supreme being by magical means can invisibly, remotely fill or empty Box B in response to your decision in the moment before you open the box.’
@rog That should be both boxes, i.e. A and B.
Take both because nobody trusts people who don’t believe in the supreme being
Assigning probabilities to the fallibility (or existence) of the super being makes this a simple maths/game theory problem. Crunch the numbers and get your answer. But the contrivance in the puzzle is the (almost?) superhuman infallibility of the super being. And of course such contrivances are so often necessary to create these kinds of paradoxes.
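A minimal number-crunching sketch along those lines (the $1,000 / $1,000,000 payoffs and the accuracy grid are illustrative assumptions):

```python
# Expected value of each choice when the Being predicts correctly
# with probability p. Payoffs assumed: $1,000 in Box A, $1,000,000
# in Box B only if one-boxing was predicted.
def one_box(p: float) -> float:
    return p * 1_000_000                 # million present only if predicted

def two_box(p: float) -> float:
    return 1_000 + (1 - p) * 1_000_000   # million present only if mispredicted

for p in (0.5, 0.5005, 0.9, 0.99, 1.0):
    print(f"p = {p}: one box {one_box(p):>12,.0f}  two boxes {two_box(p):>12,.0f}")
# The lines cross at p = 0.5005: once the Being does even marginally
# better than a coin flip, one-boxing has the higher expected value.
```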
JQ gets to the core of the problem when he says “I should cultivate the kind of satisficing disposition that would lead me to pick only Box B”. Extrapolating from this, then, there is no correct answer. There are simply people whose disposition leads them to walk away with $1k and people whose disposition leads them to walk away with $1m.
(BTW the Wikipedia statement of the problem precludes random choices: the super being is capricious, and will withhold the $1m from people with a disposition to randomise. You can’t game the gamer.)
But really, you don’t need a super being to conduct this test; just an fMRI and a laptop should do the trick.
I find this directly analogous to Russell’s paradox of the Barber who shaves all those and only those who do not shave themselves. This is correctly resolved by noting that such a barber cannot exist.
Similarly, the “paradox” is created here by the assumption of this super-being’s existence. If such a being exists, you have no free will, and asking you to “decide” is meaningless.
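One way to spell out the resolution: writing \(S(x,y)\) for “x shaves y”, the barber \(b\) would have to satisfy

\[
\forall x \,\bigl(S(b,x) \leftrightarrow \lnot S(x,x)\bigr),
\]

and taking \(x = b\) gives \(S(b,b) \leftrightarrow \lnot S(b,b)\), a contradiction – so no such \(b\) can exist.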
@totaram
Well there you go. I thought it was correctly resolved by noting that the barber had very long hair and beard. But then he was going to move to that island where everybody makes a living taking in each other’s washing just as soon as the ferry came in.
@hc
” If you make weird assumptions, paradoxes seem inevitable.”
Like the set of all sets that are not members of themselves which must be, but can’t be.
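In symbols: \(R = \{\,x \mid x \notin x\,\}\) would have to satisfy \(R \in R \leftrightarrow R \notin R\), which is the “must be, but can’t be”.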
@GrueBleen
Indeed, I am surprised that the philosophers have decided to spend time and energy on this “conundrum”. Clearly, they have time on their hands – an enviable state of affairs.
@totaram
Well yes, people are generally aware of Bertie as a philosopher (though more as a philosophy commentator, IMHO), but he (and Zermelo) were being full-on mathematicians when they used that paradox to confound Gottlob Frege.
“Indeed, I am surprised that the philosophers have decided to spend time and energy on this ‘conundrum’. Clearly, they have time on their hands – an enviable state of affairs.”
Bertrand Russell ponders all philosophers who do not ponder upon themselves.
The problem is phrased to imply that the superintelligent being transcends time (in some spooky way), so that a causal arrow might run backwards from your decision. This is physics nonsense, however believable it may be.
There is no evidence for time working like that. It is quite interesting – to me, at least – that we find this reversed causality believable, even seductive. On the one hand we feel we make free decisions; on the other, that our decisions are in principle predictable. For this paradox to work we need to believe both at the same time.
Leaving aside the bona fides of the superbeing, the fact is that a large proportion of people are willing to forgo an obvious benefit for a chance at a bigger one. The chance that they will end up losing everything does not seem sufficient to deter them; the momentary thrill of the chase has triumphed over logic.
OK, I never got the puzzle as a morality play.