The Grauniad has just resurrected Newcomb’s problem. I have a slightly special interest since the problem was popularized by one of my bêtes noires, Robert Nozick. So, in asserting that there’s a trivial solution, I have something of a bias.
Having read about the problem, it’s natural to consider the question: How likely is it that such a Superbeing exists? If the probability is high, say 1 in 10, my response is obvious. I should cultivate the kind of satisficing disposition that would lead me to pick only Box B. If the Being appears, and correctly predicts my action, I’m rewarded with a million currency units. My expected gain (ignoring the possibility that the Being makes an error) is 0.1 × 999,000, or 99,900.
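The arithmetic above can be sketched in a few lines. This is a minimal illustration under my own simplifying assumptions: the Being predicts perfectly, and the gain from a one-boxing disposition is measured against the certain thousand a two-boxer keeps.

```python
# Expected gain from cultivating a one-boxing disposition (a sketch,
# assuming a perfectly accurate Being -- prediction error is ignored,
# as in the text).
p_being = 0.1        # assumed probability that the Superbeing exists
one_box = 1_000_000  # payoff in Box B if the Being predicts one-boxing
two_box = 1_000      # the certain payoff a two-boxer locks in

# Gain relative to two-boxing, realized only if the Being appears
expected_gain = p_being * (one_box - two_box)
print(expected_gain)  # 99900.0
```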
There are some wrinkles in decision theory here. A risk averse expected utility maximizer would value a .001 chance of a million units at less than the certainty of a thousand units. On the other hand, my academic fame, such as it is, rests on the idea that decisionmakers will overweight low-probability extreme events like this.
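The risk-aversion point can be made concrete with a toy calculation. The square-root utility function here is my choice for illustration, not anything from the original problem: under it, a .001 chance of a million is worth far less than a certain thousand.

```python
import math

# A minimal sketch of risk aversion, assuming a square-root utility
# function (an illustrative choice, not part of the original problem).
def u(x):
    return math.sqrt(x)

lottery = 0.001 * u(1_000_000)  # expected utility of a .001 shot at a million
certain = u(1_000)              # utility of a thousand for sure

print(lottery, certain)  # 1.0 vs roughly 31.6
```

A risk-averse agent prefers the certainty; an agent who overweights low-probability extreme events may not.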
But this isn’t really a problem. The probability of such a Being is tiny (I’d happily take a million to one against), so overweighting doesn’t matter. And (as with Pascal’s wager) it’s just as likely that any Being who might present me with a problem is a Trump/Gekko believer in “Greed is Good”. So, I won’t bother cultivating a disposition. Should Newcomb’s Being appear, I’ll admit I bet wrong, take both boxes, and hope for the best.