My first post on the Stern review started with the observation that
the apocalyptic numbers that have dominated early reporting represent the worst-case outcomes for 2100 under business-as-usual policies.
Unfortunately, a lot of responses to the review have been characterised by a failure to understand this point. On the one hand, quite a lot of the popular response has reflected an assumption that these worst-case outcomes are certain (at least in the absence of radical changes in lifestyles and the economy) and that they are going to happen Real Soon Now. On the other hand, quite a few critics of the Review have argued that, since these are low-probability worst cases, we should ignore them.*
But with nonlinear (more precisely, strongly convex) damage functions, low-probability events can make a big difference to benefit-cost calculations. Suppose, as an illustration, that under BAU there is a 5 per cent probability of outcomes with damage equal to 20 per cent of GDP or more, and that with stabilisation of CO2 emissions this probability falls to zero. Then this component of the probability distribution alone gives a lower bound for the benefits of stabilisation of at least 1 per cent of GDP (more when risk aversion is taken into account). That exceeds Stern’s cost estimates, without even looking at the other 95 per cent of the distribution.
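For anyone who wants to check the arithmetic, here is the back-of-the-envelope version in Python. The 5 per cent and 20 per cent figures are the illustrative ones above, not Stern’s actual numbers:

```python
# Back-of-the-envelope check of the illustrative figures above (not Stern's numbers)
p_tail = 0.05        # probability under BAU of damages of 20 per cent of GDP or more
damage_floor = 0.20  # the smallest damage in that tail, as a share of GDP

# If stabilisation removes this tail entirely, the expected-value benefit is at
# least p * damage, ignoring risk aversion (which would raise it further).
lower_bound = p_tail * damage_floor
print(f"Lower bound on benefits of stabilisation: {lower_bound:.1%} of GDP")  # 1.0% of GDP
```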
An important implication is that any reasoning based on picking the most likely projection, and ignoring the uncertainty around it, is likely to be badly wrong, and to understate the likely costs of climate change. Since the distributions are intractable, the best approach, adopted by the Stern review, is to take an average over a large number of randomly generated draws from the distribution (this is called the Monte Carlo approach).
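To see how this works, here is a toy Monte Carlo sketch in Python. The lognormal damage distribution and its parameters are assumptions made for this sketch, not anything drawn from the Stern review; the point is just what happens when damages are skewed towards bad outcomes:

```python
import numpy as np

rng = np.random.default_rng(0)

# A purely hypothetical damage distribution for 2100 under BAU (share of GDP lost).
# The lognormal shape and its parameters are assumptions for this sketch, not the
# distributions actually used in the Stern review.
draws = rng.lognormal(mean=np.log(0.05), sigma=1.0, size=100_000)

print(f"Median ('most likely') damage: {np.median(draws):.1%} of GDP")
print(f"Monte Carlo mean damage:       {draws.mean():.1%} of GDP")
print(f"P(damage >= 20% of GDP):       {(draws >= 0.20).mean():.1%}")
```

With a right-skewed distribution like this, the mean sits well above the median, which is exactly why basing the analysis on a single central projection understates the expected costs.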
To sum up, the suggestion that we should ignore bad outcomes because they are improbable is wrong. If it were right, insurance companies would be out of business (not coincidentally, insurance companies were the first sector of big business to get behind Kyoto and other climate change initiatives).
A slightly more substantive, but still ultimately irrelevant, objection is that Stern adopts projections of BAU emissions that are too high (in this case, the IPCC A2 projection) and that the mean of the distribution is therefore too high. As Stern observes, any plausible BAU projection involves large increases in atmospheric CO2 levels. If we use a lower projection for BAU, the costs of doing nothing decline, but so do the costs of stabilisation, and the optimal policy, defined as a target trajectory for emissions, can go either way. In particular, there is unlikely to be any change in the short-run policy implication of the analysis, which is that we should get started on mitigation as soon as possible.
A more reasonable objection is that the probability distributions used by Stern allow for too much uncertainty, implying excessively high probabilities for extreme outcomes. A recent Bayesian analysis by Annan and Hargreaves argues that, by combining multiple lines of evidence, we can get much tighter bounds on the equilibrium climate change associated with any given increase in atmospheric CO2 levels, such as a doubling (the shorthand for this is ‘sensitivity’). This means that the probability of either very slow or very rapid global warming is less than is usually thought. Annan has criticised the Stern report here.
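The mechanics are easy to illustrate with a toy Bayesian updating sketch. The prior, the three ‘lines of evidence’ and their widths below are invented for the example, not Annan and Hargreaves’ actual numbers, but the qualitative point carries over: combining several independent constraints shrinks the tails of the sensitivity distribution dramatically.

```python
import numpy as np

S = np.linspace(0.0, 10.0, 1001)  # candidate climate sensitivities (deg C per doubling)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def prob_above(density, grid, threshold):
    density = density / np.trapz(density, grid)  # normalise to a proper density
    mask = grid >= threshold
    return np.trapz(density[mask], grid[mask])

# A deliberately broad prior over sensitivity
prior = gaussian(S, 3.0, 3.0)

# Hypothetical independent lines of evidence (means and widths invented for the sketch)
evidence = [(3.0, 2.0), (2.5, 1.5), (3.5, 1.8)]

posterior = prior.copy()
for mu, sigma in evidence:
    posterior = posterior * gaussian(S, mu, sigma)  # Bayes: multiply in each likelihood

print(f"P(S > 6 C) under the broad prior alone:  {prob_above(prior, S, 6.0):.4f}")
print(f"P(S > 6 C) after combining the evidence: {prob_above(posterior, S, 6.0):.4f}")
```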
Interestingly, some denialists have jumped on this result, although the same reasoning leads Annan to be one of the most confident supporters of the AGW hypothesis. He challenged Lindzen and others to bet on their beliefs, but Lindzen declined, though some Russian physicists have taken him on.
A big problem with using Annan’s work to discuss Stern is that the two are talking about different things. Stern is looking at projections of global temperature in the year 2100 under BAU. This is not the same as ‘sensitivity’, for four main reasons:
(i) ‘Sensitivity’ is normally measured as the equilibrium change for a doubling of CO2 levels. But a doubling relative to the pre-industrial level is what would happen if we adopted the stabilisation policies proposed by Stern. BAU means a much larger change.
(ii) ‘Sensitivity’ is a long-run equilibrium change, but the climate won’t have fully adjusted to equilibrium by 2100.
(iii) ‘Sensitivity’ is a property of the atmosphere, as represented in a GCM, in response to given forcings. It is defined to exclude a range of non-atmospheric feedback effects that are of obvious importance in thinking about the likelihood of extreme outcomes from a given scenario, such as large releases of methane from thawing tundra or (going in the opposite direction) the collapse of the thermohaline circulation.
Points (i) and (ii) go in opposite directions, and mainly affect the mean value of the projections for 2100. Their joint impact can be taken into account in making projections, but it means that it’s necessary to take care in reading Annan and Hargreaves’ work. When they say, for example, that the probability of a climate sensitivity of more than 6 degrees C is small, this does not mean “It’s very unlikely that global average temperatures will rise by 6 degrees C”. On the contrary, Annan and Hargreaves’ analysis implies that, given enough forcing (roughly speaking, a quadrupling of CO2 relative to pre-industrial levels), an eventual increase of 6 degrees C is virtually inevitable.
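A back-of-the-envelope calculation shows how (i) and (ii) interact. Assuming the standard logarithmic forcing relationship, equilibrium warming is sensitivity times the base-2 log of the CO2 ratio; the sensitivity of 3 degrees and the transient fraction used below are illustrative assumptions, not numbers from Stern or from Annan and Hargreaves:

```python
import math

def equilibrium_warming(sensitivity, co2_ratio):
    """Equilibrium warming (deg C) for a given ratio of atmospheric CO2 to the
    pre-industrial level, assuming the standard logarithmic relationship."""
    return sensitivity * math.log2(co2_ratio)

S = 3.0  # deg C per doubling: an illustrative central value

# Point (i): doubling (roughly a stabilisation target) vs quadrupling (BAU-scale forcing)
print(equilibrium_warming(S, 2.0))  # 3.0 C at equilibrium
print(equilibrium_warming(S, 4.0))  # 6.0 C at equilibrium

# Point (ii): by 2100 only part of the equilibrium response has been realised.
# The 0.6 fraction here is a placeholder, not an estimate from the literature.
realised_fraction_2100 = 0.6
print(realised_fraction_2100 * equilibrium_warming(S, 4.0))  # about 3.6 C by 2100
```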
Point (iii) means that the actual range of uncertainty about changes in average temperatures under any given scenario is greater than the range of uncertainty about a single model parameter such as sensitivity. We also have to take account of uncertainty about the relationship between human activity and the forcings that are defined on the input side of the model.
This brings us to point (iv): uncertainty about the model itself. GCMs have improved a lot in recent years, but there is still plenty of uncertainty about the details. Some of this makes it into calculations like those of Annan and Hargreaves, but some does not. For example, sensitivity appears as part of (as I read it) a linear relationship between equilibrium temperature and the log of atmospheric CO2 levels, but there must be some positive probability that the true relationship is nonlinear, with unmodelled effects coming into play as we move outside the range of temperatures on which the models have been estimated. I don’t see how work like that of Annan and Hargreaves captures the uncertainty associated with this possibility.
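To make the point concrete, here is a rough sketch of how structural uncertainty of this kind fattens the tail. Everything in it is invented for illustration: the posterior for sensitivity, the size of the hypothetical unmodelled feedback, and the 20 per cent weight placed on it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical (tight) posterior for sensitivity, deg C per doubling
S = np.clip(rng.normal(3.0, 0.9, n), 0.01, None)

co2_ratio = 4.0  # BAU-scale forcing relative to pre-industrial levels

# Model A: the usual log-linear relationship
warming_a = S * np.log2(co2_ratio)

# Model B: the same relationship plus an unmodelled feedback that only operates
# above 4 C of warming. Both the threshold and the 0.5 slope are made up.
warming_b = warming_a + np.where(warming_a > 4.0, 0.5 * (warming_a - 4.0), 0.0)

# Put a 20 per cent subjective weight on the nonlinear variant
use_b = rng.random(n) < 0.2
warming_mixed = np.where(use_b, warming_b, warming_a)

print(f"P(warming > 8 C), log-linear only:        {(warming_a > 8.0).mean():.3f}")
print(f"P(warming > 8 C), with model uncertainty: {(warming_mixed > 8.0).mean():.3f}")
```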
Point (iv) also raises a more general problem. Bayesian reasoning is powerful, and a big improvement on older frequentist ways of thinking about probabilities, but it tends to produce overconfidence. If a given assumption is built into the prior probability distribution for a Bayesian model, it can’t be changed by contrary evidence. All you can do is scrap the model and start over. This is why there has been a lot of interest in more general models of uncertainty with unknown probabilities, some of which are referred to in Stern. This is the main theme of my current research on uncertainty, some of which you can read here.
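The problem is easy to demonstrate in a toy example. Suppose the prior rules out sensitivities above 6 degrees C; then no evidence, however strong, can put any posterior weight back there (the numbers below are, again, purely illustrative):

```python
import numpy as np

S = np.linspace(0.0, 10.0, 1001)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# A prior that builds in the assumption "sensitivity cannot exceed 6 C"
prior = gaussian(S, 3.0, 1.5)
prior[S > 6.0] = 0.0

# Hypothetical new evidence pointing strongly at 7 C
likelihood = gaussian(S, 7.0, 0.5)

posterior = prior * likelihood
posterior = posterior / np.trapz(posterior, S)

# The posterior probability of S > 6 C is still exactly zero: no amount of
# contrary evidence can revive a region the prior ruled out.
mask = S > 6.0
print(np.trapz(posterior[mask], S[mask]))  # 0.0
```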
The upshot is twofold:
(i) low-probability events in the tail of the distribution are important; and
(ii) when applied to the variable of interest, the likely rate of warming over the next century, the range of uncertainty in the Stern report does not appear excessive.
* See, for example, this piece by Ron Bailey, who also confuses changes in levels with changes in growth rates, and plugs the Copenhagen Consensus, which even participants eventually realised was rigged.