The Stern Review and the long tail

My first post on the Stern review started with the observation that

the apocalyptic numbers that have dominated early reporting represent the worst-case outcomes for 2100 under business-as-usual policies.

Unfortunately, a lot of responses to the review have been characterized by a failure to grasp this point. On the one hand, quite a lot of the popular response has reflected an assumption that these worst-case outcomes are certain (at least in the absence of radical changes in lifestyles and the economy) and that they are going to happen Real Soon Now. On the other hand, quite a few critics of the Review have argued that, since these are low-probability worst cases, we should ignore them.*

But with nonlinear (more precisely, strongly convex) damage functions, low-probability events can make a big difference to benefit-cost calculations. Suppose, as an illustration, that under BAU there is a 5 per cent probability of outcomes with damage equal to 20 per cent of GDP or more, and that with stabilisation of CO2 emissions this probability falls to zero. Then this component of the probability distribution alone gives a lower bound for the benefits of stabilisation of at least 1 per cent of GDP (more when risk aversion is taken into account). That exceeds Stern’s cost estimates, without even looking at the other 95 per cent of the distribution.
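To make the arithmetic explicit, here is the bound as a worked inequality, using only the illustrative numbers above and assuming the rest of the distribution is no worse under stabilisation:

```latex
\text{Benefit of stabilisation} \;\ge\; \Pr(\text{damage} \ge 0.20\,\text{GDP}) \times 0.20\,\text{GDP}
\;=\; 0.05 \times 0.20\,\text{GDP} \;=\; 0.01\,\text{GDP}
```

that is, at least 1 per cent of GDP from the eliminated tail alone, before adding avoided damages elsewhere in the distribution or any allowance for risk aversion.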

An important implication is that any reasoning based on picking a most likely projection and ignoring uncertainty around that prediction is likely to be badly wrong, and to understate the likely costs of climate change. Since the distributions are analytically intractable, the best approach, adopted by the Stern review, is to take an average over a large number of randomly generated draws from the distribution (this is called the Monte Carlo approach).
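Here is a minimal sketch of the Monte Carlo logic. It is not the Stern Review’s model: the lognormal warming distribution and the cubic damage function are stand-ins invented purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Stand-in distribution for warming by 2100 under BAU (degrees C).
# A right-skewed lognormal gives a long upper tail; the parameters are
# purely illustrative, not taken from Stern or the IPCC.
warming = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=N)

# Stand-in convex damage function: damages (as a share of GDP) rise with the
# cube of warming, capped at 100% of GDP. Again purely illustrative.
damage = np.minimum(0.003 * warming**3, 1.0)

mean_damage = damage.mean()  # Monte Carlo estimate of expected damage
central_damage = np.minimum(0.003 * np.median(warming)**3, 1.0)  # damage at the central projection

print(f"Damage at the central projection: {central_damage:.1%} of GDP")
print(f"Expected damage over the whole distribution: {mean_damage:.1%} of GDP")
# Because the damage function is convex, the mean exceeds the damage at the
# central projection (Jensen's inequality): ignoring the tail understates costs.
```

The point of the sketch is simply that, with a convex damage function, the average over the whole distribution comes out well above the damage at the “most likely” outcome.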

To sum up, the suggestion that, because bad outcomes are improbable, we should ignore them is wrong. If it were right, insurance companies would be out of business (not coincidentally, insurance companies were the first sector of big business to get behind Kyoto and other climate change initiatives).

A slightly more substantive, but still ultimately irrelevant, objection is that Stern adopts projections of BAU emissions that are too high (in this case, the IPCC A2 projection) and that the mean of the distribution is therefore too high. As Stern observes, any plausible BAU projection involves large increases in atmospheric CO2 levels. If we use a lower projection for BAU, the costs of doing nothing decline, but so do the costs of stabilisation, and the optimal policy, defined as a target trajectory for emissions, can go either way. In particular, there is unlikely to be any change in the short-run policy implication of the analysis, which is that we should get started on mitigation as soon as possible.

A more reasonable objection is that the probability distributions used by Stern allow for too much uncertainty, implying excessively high probabilities for extreme outcomes. A recent Bayesian analysis by Annan and Hargreaves argues that, by combining multiple lines of evidence, we can get much tighter bounds on the equilibrium climate change associated with any given increase in atmospheric CO2 levels, such as a doubling (the shorthand for this is ‘sensitivity’). This means that the probability of either very slow or very rapid global warming is less than is usually thought. Annan has criticised the Stern report here.
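The basic mechanism at work can be illustrated with a much simpler calculation than Annan and Hargreaves’ own. The numbers below are invented, and the Gaussian treatment is a simplification rather than their method; the sketch only shows why combining independent constraints narrows the range.

```python
import numpy as np

# Three hypothetical, independent estimates of climate sensitivity (deg C per
# doubling of CO2), each expressed as (mean, standard deviation). The values
# are made up for illustration; they are not Annan and Hargreaves' constraints.
estimates = [(3.0, 1.5),   # e.g. 20th-century warming
             (2.7, 1.2),   # e.g. response to volcanic eruptions
             (3.2, 1.0)]   # e.g. the Last Glacial Maximum

# With independent Gaussian constraints, the combined precision is the sum of
# the individual precisions, and the combined mean is the precision-weighted
# average of the individual means.
precisions = np.array([1.0 / s**2 for _, s in estimates])
means = np.array([m for m, _ in estimates])

combined_precision = precisions.sum()
combined_mean = (precisions * means).sum() / combined_precision
combined_sd = combined_precision**-0.5

print(f"Combined estimate: {combined_mean:.2f} +/- {combined_sd:.2f} deg C")
# The combined standard deviation (~0.68 here) is tighter than the tightest
# single estimate (1.0), so the probability of very high or very low
# sensitivity shrinks once the lines of evidence are combined.
```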

Interestingly, some denialists have jumped on this, although the same reasoning leads Annan to be one of the most confident supporters of the AGW hypothesis. He has challenged Lindzen and others to bet on their beliefs; Lindzen declined, though some Russian physicists have taken him on.

A big problem with using Annan’s work to discuss Stern is that the two are talking about different things. Stern is looking at projections of global temperature in the year 2100 under BAU. This is not the same as ‘sensitivity’, for four main reasons:
(i) ‘Sensitivity’ is normally measured as the equilibrium change for a doubling of CO2 levels. But a doubling relative to the pre-industrial level is what would happen if we adopted the stabilisation policies proposed by Stern. BAU means a much larger change.
(ii) ‘Sensitivity’ is a long-run equilibrium change, but the climate won’t have fully adjusted to equilibrium by 2100.
(iii) ‘Sensitivity’ is a property of the atmosphere, as represented in a GCM model, in response to given forcings. It is defined to exclude a range of non-atmospheric feedback effects that are of obvious importance in thinking about the likelihood of extreme outcomes from a given scenario, such as large releases of methane from thawing tundra or (going in the opposite direction) the collapse of the thermohaline circulation.

Points (i) and (ii) go in opposite directions, and mainly affect the mean value of the projections for 2100. Their joint impact can be calculated in making projections, but it means that it’s necessary to take care in reading Annan and Hargreaves’ work. When they say, for example, that the probability of a climate sensitivity of more than 6 degrees C is small, this does not mean “It’s very unlikely that global average temperatures will rise by 6 degrees C”. On the contrary, Annan and Hargreaves’ analysis implies that, given enough forcing (roughly speaking, a quadrupling of CO2 relative to pre-industrial levels), an eventual increase of 6 degrees C is virtually inevitable.
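To spell out the arithmetic behind that last claim: on the standard approximation that equilibrium warming is linear in the logarithm of the CO2 concentration, and taking S = 3 degrees C per doubling as a purely representative central value of sensitivity (C is the CO2 concentration, C0 the pre-industrial level),

```latex
\Delta T_{\mathrm{eq}} \approx S \,\log_2\!\left(\frac{C}{C_0}\right),
\qquad \frac{C}{C_0} = 4 \;\Rightarrow\; \Delta T_{\mathrm{eq}} \approx 2S \approx 6\,^{\circ}\mathrm{C}
```

so even a tightly constrained sensitivity near 3 degrees C implies an eventual warming of around 6 degrees once the forcing amounts to two doublings.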

Point (iii) means that the actual range of uncertainty about changes in average temperatures under any given scenario is greater than the range of uncertainty about a single model parameter such as sensitivity. We also have to take account of uncertainty about the relationship between human activity and the forcings that are defined on the input side of the model.

This brings us to point (iv): uncertainty about the model itself. GCM models have improved a lot in recent years, but there is still plenty of uncertainty about the details. Some of this makes it into calculations like those of Annan and Hargreaves, but some does not. For example, sensitivity appears as part of (as I read it) a linear relationship between equilibrium climate and the log of CO2 concentrations, but there must be some positive probability that the true relationship is nonlinear, with unmodelled effects coming into play as we move outside the range of temperatures on which the models have been estimated. I don’t see how work like that of Annan and Hargreaves captures the uncertainty associated with this possibility.
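A stylised example of the worry (the functional forms, threshold and numbers are invented; the point is only that two responses which agree over the range used to estimate sensitivity can diverge badly outside it):

```python
import numpy as np

S = 3.0  # assumed sensitivity, deg C per doubling (illustrative)

def log_linear(ratio):
    """Fitted relationship: warming linear in log2 of the CO2 ratio."""
    return S * np.log2(ratio)

def with_extra_feedback(ratio, threshold=2.5, extra=1.5):
    """Hypothetical true relationship: identical to the log-linear fit up to
    'threshold', with an additional unmodelled feedback term beyond it."""
    base = S * np.log2(ratio)
    return base + np.where(ratio > threshold, extra * (ratio - threshold), 0.0)

for ratio in [1.5, 2.0, 3.0, 4.0]:   # CO2 relative to pre-industrial
    print(f"{ratio:.1f}x CO2: fitted {log_linear(ratio):.1f} C, "
          f"with extra feedback {with_extra_feedback(ratio):.1f} C")
# The two curves agree over the range the fit was based on (up to ~2.5x CO2),
# so no amount of data from that range can distinguish them; the divergence
# only matters in the high-emissions tail, which is exactly where convex
# damages bite hardest.
```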

It also points to a more general problem. Bayesian reasoning is powerful, and a big improvement on older frequentist ways of thinking about probabilities, but it tends to produce overconfidence. If a given assumption is built into the prior probability distribution for a Bayesian model, it can’t be changed by contrary evidence. All you can do is scrap the model and start over. This is why there has been a lot of interest in more general models of uncertainty with unknown probabilities, some of which is referred to in Stern. This is the main theme of my current research on uncertainty, some of which you can read here.
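A toy example of the overconfidence problem (the probabilities and likelihood ratio are arbitrary; the point is purely about the mechanics of Bayesian updating):

```python
# Two analysts assess the probability that warming by 2100 exceeds 6 degrees C.
# Analyst A's prior rules it out entirely; Analyst B merely regards it as unlikely.
prior_A = 0.0
prior_B = 0.05

# Suppose new evidence arrives that is 20 times more likely if the hypothesis
# is true than if it is false (a likelihood ratio of 20, chosen arbitrarily).
likelihood_ratio = 20.0

def update(prior, lr):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    if prior == 0.0:
        return 0.0  # a zero prior probability can never be revised upward
    odds = prior / (1.0 - prior) * lr
    return odds / (1.0 + odds)

print(f"Analyst A: prior {prior_A:.0%} -> posterior {update(prior_A, likelihood_ratio):.0%}")
print(f"Analyst B: prior {prior_B:.0%} -> posterior {update(prior_B, likelihood_ratio):.0%}")
# Analyst A stays at 0% no matter how strong the evidence; the only remedy is
# to scrap the model and start again. Assumptions embedded in the prior (or in
# the model structure) behave in exactly the same way.
```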

In summary,
(i) low-probability events at the tail of the distribution are important;
(ii) when applied to the variable of interest, the likely rate of warming over the next century, the range of uncertainty in the Stern report does not appear excessive.

* See, for example, this piece by Ron Bailey, who also confuses changes in levels with changes in growth rates, and plugs the Copenhagen Consensus, which even participants eventually realised was rigged.

25 thoughts on “The Stern Review and the long tail”

  1. Discounting of worst-case outcomes seems to be the Howard position. A further complication is that a chosen path may change with short-term outcomes. For example, if 2007 is a mild weather year with modest prices for food, electricity and water, then the public may accept carbon constraints. If 2007 is a bad year in all respects, the public may say ‘we can’t afford any more costs right now’, knowing that things could get worse. I don’t think the public cares what will happen in 50 years’ time, whatever the long-run best path should be. Also, business-as-usual as a baseline is awkward because of inevitable rainfall decline, oil depletion and demographic change.

  2. This brings us to point (iv): uncertainty about the model itself. GCM models have improved a lot in recent years, but there is still plenty of uncertainty about the details. Some of this makes it into calculations like those of Annan and Hargreaves, but some does not. For example, sensitivity appears as part of (as I read it) a linear relationship between equilibrium climate and the log of CO2 concentrations, but there must be some positive probability that the true relationship is nonlinear, with unmodelled effects coming into play as we move outside the range of temperatures on which the models have been estimated.

    When models (eg economic models) depend on models (eg climate models) that depend on inferred relationships and curve fitting, as well as cost-benefit analysis based on other models (eg ecosystem models) and a few value judgements about what, if any, of this matters, not to mention some good guesses about what is physically relevant and what is not so relevant, doesn’t the whole exercise become quite contrived? And at the end of the day the level of action taken also depends on political inputs. If you believe in rational government intervention there is probably little alternative, but it all seems pretty shaky to me and quite dependent on a lot of masking tape and excessive quantities of glue. I can see that we are going to be discussing this issue and making revisions for another 50-100 years.

  3. I am certain that in 10 years we will be speaking about global cooling. We have evidence from our Russian friends that we are on the way. Further, the hype surrounding global warming is deafening. No real facts, just models that do not have methods of validation any non-climate scientist could accept.

    Please, the facts …
    – less than 1 degree in the last 100 years (satellite measurements over the past 40 years show temperature trending down)
    – CO2 is not the major global warming gas … it’s one of the smallest
    – CO2 is not the major driver of temperature change
    – the Arctic is experiencing a cooling trend
    – the proposed reductions will not materially alter temperature

    The Stern report will go down in history as an example where facts were overcome by hysteria. It will serve as an example where it was not religion that twisted men’s ability to reason.

  4. As noted in the post, Marcus, James Annan stands ready to take your money. Please tell us how you get on laying your bet.

    But before betting, I’d suggest you check your facts, starting with the satellite data.

  5. But before betting, I’d suggest you check your facts, starting with the satellite data.

    Sure. Quote from the Wiki link:

    Discussion of the satellite temperature records

    …However, the same panel then concluded that

    “the warming trend in global-mean surface temperature observations during the past 20 years is undoubtedly real and is substantially greater than the average rate of warming during the twentieth century. The disparity between surface and upper air trends in no way invalidates the conclusion that surface temperature has been rising.”[17][18]

    I wish marcus had left a link to where he got his facts.

  6. Actually, Darren, it’s worse than that. Although there was a discrepancy in the late 90s, improved measurement and new data have eliminated the disparity and shown that the surface temperature data (showing a strong warming trend) was right all along.

  7. JQ said: “the suggestion that because bad outcomes are improbable, we should ignore them is wrong. If it were right, insurance companies would be out of business”

    They stay in business because the bad outcomes ARE improbable (ie mostly never happen). Perhaps you overlooked my post re your equations in your submission to Stern? If I am right, there is a problem that needs fixing. All the same, your equity risk premium papers with Grant are impressive.

  8. I congratulate Prof. Quiggin on deciding to concentrate henceforth on the economic issues raised by anthropogenic global warming (AGW) and to waste less time arguing with denialists.

    There are several elephants in the AGW debating room; perverse subsidies and conservation are two of them. Prof. Quiggin has started with the Stern Review. That enormous report estimates the worldwide value of perverse subsidies at $250b. annually, of which over $80b. are in OECD countries and over $160b. are in developing countries. Stern says: “These transfers are on broadly the same scale as the average incremental costs of an investment programme required for the world to embark on a substantial policy of climate change mitigation over the next twenty years…” (Part 3, Sec. 12.5, p. 278 of the download).

    New subsidy arrangements are still being instituted, as witness the 2005 US Energy Policy Act, which freely distributes subsidies to both nuclear and fossil power generation. Australian subsidies to fossil fuel generation have been estimated at $6.5b. annually, an amount which the authors of the linked report consider ample for funding of the switch to their Clean Energy Scenario for Australia. (People interested in subsidies might like to read the 2001 book by Myers and Kent: “Perverse Subsidies: How Tax Dollars Can Undercut the Environment and the Economy”, which sounds like a useful publication from IISD. The November 2006 issue of “Subsidy Watch”, a regular newsletter also published by IISD, features a coverage of energy subsidies).

    Stern splits conservation into demand and supply-side, though I haven’t yet found any particular use made of the distinction in his report – which could easily arise from the size of the report and my slow reading pace, I hasten to add! But for several years now, the IEA’s annual World Energy Outlook reports have included a World Alternative Policy Scenario which (2004 – the latest free download) does make use of this distinction. This Alternative Scenario “… analyses the effect on global energy markets of energy efficiency and environmental policies beyond those considered in the Reference Scenario…” (p. 368). Under this scenario, energy-related CO2 emissions are reduced by 16% below the reference scenario by 2030, mostly through efficiency gains. In regard to cost, the IEA concludes that (p.380) “In aggregate, [ie. globally] larger capital needs on the demand side are entirely offset by lower needs on the supply side”. To my mind, this indicates that the net (global) cost of these CO2 reductions should therefore be low or zero.

    One might interpret this as a complicated way of saying what the Rocky Mountain Institute has been saying for years – efficiency pays for itself. So why aren’t we already more efficient than we are? In Chapter 13 (“Making Markets Work”) of the RMI’s book “Natural Capitalism”, a number of reasons are given, including (in my own words) bad management accounting, bad industrial systems and procedures, perverse subsidies, badly designed regulations and plain ignorance. Imposition of a carbon tax, for example, won’t make these problems vanish overnight.

    Before we plunge into detailed arguments about the pros and cons of carbon taxes, nuclear generation and carbon trading schemes, therefore, we really should think, first, what are we to do about subsidies, and second, whether we can make major gains by addressing the inefficiency-causing problems in the existing energy system.

  9. The insurance companies stay in business, Tam, partly because they have better science on their side, which allows them to understand the real risks better than their policyholders do. You might say that policyholders are more alarmist than insurance companies. It’s not the whole story, of course, because there’s also the larger entity’s ability to balance its risks in various ways across and within portfolios, and the fact that it makes sense for the policyholder to pay over the mathematical odds today for the sake of ongoing peace of mind and a feeling of security for the future.

    You decide whether there’s any metaphorical value or not here (in the context of the Stern report) Tam.

    On Prof Q’s equations, might I suggest that a little more specificity about the problem you see would be helpful?

  10. frankis: it is difficult to paste or transcribe equations into this format, but I’ll try.

    Welfare loss as proportion of initial expenditure = D = ((-1 + m.exp.1/(k-1))/((k – 1)(1-m.exp.(1/k)) so long as k does not = 1 (if it does =1 the equation collapses).
    where m = the desired proportion reduction in emissions, and k = the elasticity of demand.

    Substituting for k = 1.1 and m = .5, it would seem we get D = -21.37, whereas JQ reports -0.7, and for k = 2, I get -1.707 and JQ reports -0.3. With energy at 6% of GDP, JQ’s figures translate to welfare loss of between 1.8 and 4.2 per cent of GDP. No doubt there are either typos in the syntax of the Quiggin/Einstein equation or neither Excel nor I can do math.

  11. Tam I’m sorry but I don’t have a link to a copy of JQ’s submission to Stern from where (I think) you’ve taken the equation. On what you’ve reproduced above though I can’t think of any way to read the exp.1/(k-1) term other than as exp(1/k-1) which really does behave badly for values of k near 1. The technical term would be “pathological” I think 🙂 so it’s difficult to see that any expression of that form would be giving sensible answers for elasticities near k = 1. Something is amiss indeed.

  12. Gordon, aren’t you just as likely to run into the Jevons paradox? Let’s say we get awfully clever, introduce a bunch of efficiency-creating technologies, and thus save carbon emissions and increase economic growth. After patting ourselves on the back, we notice that the price of fossil fuels has dropped, and because we’ve increased economic growth we’ve got even more money to spend on them than we otherwise would have. Voila, a few million more SUVs appear on the road, more people decide to holiday in London rather than in Surfers Paradise, and more Melburnians decide they really can afford air conditioning, and we’re back where we started.

    Back to the topic: while I’ll take his word on the details of the mathematics, Prof. Q’s basic observation is quite right; a small probability of a very negative outcome justifies, both from a mathematical and a common-sense perspective, the incurring of considerable costs to prevent that outcome.

  13. … and I might just try a little harder to make myself clear by writing what Tam’s term appears to me to mean as exp(1/(k-1)).

  14. An important implication is that any reasoning based on picking a most likely projection and ignoring uncertainty around that prediction is likely to be badly wrong, and to understate the likely costs of climate change.

    Agreed. But uncertainty is a two way street, and risky outcomes can be both more positive and more negative than the expected or average outcome. Hence I can rephrase that comment:

    An important implication is that any reasoning based on picking a most likely projection and ignoring uncertainty around that prediction is likely to be badly wrong, and to overstate the likely costs of climate change.

  15. I don’t think so, Robert Merkel, because the policies listed by the IEA in the linked report which are to drive increased efficiency are in the main regulatory in nature. However, the report does indicate that price falls in oil, coal and natural gas would occur. If the efficiency increases were to be entirely driven by market forces, then I suppose Jevons could occur, though it leaves out changes in individual preferences away from CO2-producing activities and changes in lifestyles and habits. Maybe this is another argument for preferring regulatory over market-based strategies to increase efficiency.

  16. That’s a possibility, Gordon, but after reading Prof. Q for this long I still reckon the best policy is to simply gradually price carbon emissions out of the market.

    Of course, a possible result of that might be that non-emitting sources of energy (nuclear, sequestered fossil fuels, and improved renewables technology) take over, rather than the energy conservation outcomes that the RMI advocates. But, frankly, I don’t see that as a major problem. Energy use is not a major environmental problem in and of itself. Greenhouse gas emissions, not to mention all the other pollutants coal-fired power stations dump into the biosphere, are.

  17. Of course, a possible result of that might be that non-emitting sources of energy (nuclear, sequestered fossil fuels, and improved renewables technology) take over, rather than the energy conservation outcomes that the RMI advocates. But, frankly, I don’t see that as a major problem.

    Why would that outcome be a problem at all?

  18. frankis Said:
    November 20th, 2006 at 12:57 pm
    “On what you’ve reproduced above though I can’t think of any way to read the exp.1/(k-1) term other than as exp(1/k-1) which really does behave badly for values of k near 1. The technical term would be “pathological” I think so it’s difficult to see that any expression of that form would be giving sensible answers for elasticities near k = 1. Something is amiss indeed.”

    It’s worse than that. There appears to be a mismatch also between consumer surplus theory and JQ’s equation. Using a plausible demand function for petrol of x = 10*sqrt((50-p)/p), which has a broadly constant elasticity of demand of -0.55 (more realistic than JQ’s 1.1-2 in my experience since 1999), the loss of income from a near quadrupling of the price per litre from A$0.8 in 2000 to A$3.00 (which on that function would be needed to drive my demand down by 50%) reduces my other disposable income (or GDP) by “only” 2.8 per cent. That is in line with JQ’s version, but my loss of consumer’s surplus from this is roughly 80%. I do hope JQ converts the ALP to promising petrol at A$3/litre at the next elections!

  19. I think I’m with Tom Davies on the “non-emitting sources of energy”, but that leaves out the non-energy uses of fossil hydrocarbons. A price fall of fossil hydrocarbons could lead to cheaper chemicals, artificial fibres and plastics.

  20. “A big problem with using Annan’s work to discuss Stern is that the two are talking about different things.” – this is true but wrong. Annan’s work means that the sensitivity estimates Stern is using are very likely too high. Since his damage function is non-linear in T (increasing strongly with T), this inflates his damage estimates. You can’t get away from this problem by distinguishing 2100 and equilibrium.

  21. William, my main points here are (iii) and (iv) (feedbacks and model uncertainty). The points about 2100 and equilibrium are true but, as I say in the post, not that important. On the other hand, points (iii) and (iv) suggest that the uncertainty attached to model sensitivity is only a lower bound on the uncertainty about sensitivity in the ordinary sense of the term.
