
Monochrome swans

September 15th, 2014

I have a request[^1] for help from scientifically literate readers. A lot of my research work is focused on the problem of unforeseen contingencies, popularly, if ethnocentrically, described as “black swans”. In particular, I’m interested in the question of how you can prepare for such contingencies given that, by definition, you can’t foresee exactly what they will be. One example, with which I’m very pleased, is that of the precautionary principle. It seems reasonable to say that we can distinguish well-understood choices involving hazards from those that are poorly understood, and avoid the latter, precisely because the loss from hazard cannot be bounded in advance.

Anyway, I was thinking about this in relation to the actual case of black swans (or, from my own perspective, white swans). The question is: what principles would help you to avoid making, and acting on, the assumption “all swans are white (or, in my own case, black)”. It seems to me that the crucial fact here is that the shift from black to white, or vice versa, is, in evolutionary terms, a small one. So, if you used something like cladistics, you would avoid choosing feather color as a defining feature of swans, and birds in general. As I understand it, a phylogenetic approach starts with features that are very strongly conserved (body plans) and proceeds from there. But, rather than assume that my own understanding is correct, it seemed simpler to ask.
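
To make the idea concrete, here is a toy sketch (my own illustration, with made-up characters, not a real cladistics method): rank candidate characters by how conserved they are across a sample of taxa, and treat only the strongly conserved ones as defining features.

```python
# Toy illustration: given a character matrix for some waterfowl taxa,
# prefer characters that are conserved across the sample as "defining"
# features, and distrust labile ones like feather colour.
taxa = {
    "mute swan":    {"feathered": True, "toothed_beak": False, "colour": "white"},
    "black swan":   {"feathered": True, "toothed_beak": False, "colour": "black"},
    "canada goose": {"feathered": True, "toothed_beak": False, "colour": "mixed"},
    "mallard":      {"feathered": True, "toothed_beak": False, "colour": "mixed"},
}

def conservation(character):
    """Fraction of taxa sharing the modal state of a character."""
    states = [traits[character] for traits in taxa.values()]
    return max(states.count(s) for s in set(states)) / len(states)

for character in ["feathered", "toothed_beak", "colour"]:
    print(f"{character}: conserved in {conservation(character):.0%} of sample")
# "feathered" and "toothed_beak" are fixed across the sample (safe as
# defining characters); "colour" varies even among close relatives (unsafe).
```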

[^1]: There’s a blog-specific word for this, but I refuse to use it

Categories: Science
  1. Neil
    September 15th, 2014 at 21:01 | #1

    How’s this for a black swan? Turns out the bauplan is not always conserved.

    http://entomologytoday.org/2014/09/12/newly-discovered-planthopper-appendage-doesnt-fit-the-bauplan/

  2. andrewt
    September 15th, 2014 at 21:05 | #2

    Haven’t you answered your question yourself – if you observe that vertebrates with a recent common ancestor sometimes have different coloration, then why would you assume that close relatives of a given species have the same coloration?

    But you haven’t addressed the question of what a swan is – you might say a member of the genus Cygnus, which, it’s true, currently includes black and pied species – but genus is an arbitrary concept and placement into genera is somewhat subjective. If you construe swans as a more restrictive phylogenetic grouping around the species first given the name, which on some current phylogenies would exclude the southern hemisphere species, then all swans would be white.

    Also beware that names can be polyphyletic – zebras are defined by their coloration, not their phylogeny. This could have been the case (but wasn’t) for swans, and is at least partly the case for other birds, e.g. magpies.

  3. Ernestine Gross
    September 15th, 2014 at 23:27 | #3

    I am not sure whether I am on the desired track regarding “scientifically literate” with my 2 cents worth on decision making under uncertainty.

    One way to think of unforeseen contingencies is to assign (1-x) probability weights to the “well-understood choices involving hazards” and x to the unknown contingencies. I am sure this is not a novel idea. The question is, who assigns the probability weights? It seems to me, people who know a lot, relative to others, on scientific matters would assign a much larger numerical value to x than those who know much less or nothing at all. Without x being strictly positive, motivation for a life spent on basic research (now ‘blue sky’ research) would be difficult to comprehend. I am thinking of climate scientists versus the proverbial Moncktons of this world or researchers in theoretical economics versus a religious person who made some money as chairman of various organisations and maintains ‘it is always the same’.
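
    To make the weighting concrete, a minimal sketch (hypothetical numbers; x is simply the weight a given person puts on the unknown contingencies):

    ```python
    # A minimal sketch of the (1-x)/x weighting described above.
    def weighted_value(eu_known, value_if_unknown, x):
        """Blend the expected utility of the well-understood choices with
        a stand-in value for the unknown contingencies, weighted by x."""
        return (1 - x) * eu_known + x * value_if_unknown

    eu_known = 100.0           # expected utility over well-understood hazards
    value_if_unknown = -500.0  # a (necessarily arbitrary) stand-in for the unknown

    for who, x in [("knows little", 0.01), ("knows a lot", 0.2)]:
        print(who, weighted_value(eu_known, value_if_unknown, x))
    # The more weight x a person gives the unknown, the more a bounded-loss
    # (precautionary) choice dominates their ranking.
    ```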

    Is it not possible that there would be no unforeseen (unanticipated) as distinct from unknown (prior observations) consequences in the universe of all people’s beliefs about future possible events to which they assign a collective probability weight of x? Who decides whose beliefs should be ignored? Is expected utility theory justifiable iff every member of society has the same weight of influence, given the institutional environment of society? (eg the same financial means in the current system). Or should the influence be weighted by knowledge rather than money?

    I am not sure whether you are working on a general characterisation of the problem and then deriving a solution to the specified problem, or whether you have specific practical problems in mind.

  4. James Wimberley
    September 15th, 2014 at 23:43 | #4

    JQ: “The question is: what principles would help you to avoid making, and acting on, the assumption “all swans are white (or, in my own case, black)”.

    None. All you can do is start out and keep learning, as recommended by Solon and the Reverend Bayes. That’s what children do: white swan, white swan, white swan … looks like all swans are white. But see, large bird in all respects like earlier swans except that it’s black. Oops, drop “white” from nominal essence of swan.
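
    A toy version of that sequential updating (a Beta-Bernoulli sketch of my own, not anything the thread specifies):

    ```python
    # Start with an open mind about the proportion of white swans and
    # update on each sighting: a uniform Beta(1, 1) prior on P(white).
    alpha, beta = 1, 1

    sightings = ["white"] * 20 + ["black"]  # white swan, white swan, ... oops
    for s in sightings:
        if s == "white":
            alpha += 1
        else:
            beta += 1

    print(f"P(next swan is white) ~ {alpha / (alpha + beta):.3f}")
    # ~0.91 here; the predictive probability never reaches 1, so Bayes
    # never lets you drop to certainty: the black swan is a surprise,
    # but not a contradiction.
    ```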

    I do not claim to be scientifically literate but did a little philosophy once.

  5. September 16th, 2014 at 00:18 | #5

    Is it not the case that black swans were discovered (and surprising) before evolution was fully understood? And had evolution been better understood at the time of the discovery of black swans, people would have been all “meh. Swans can be black. Whatevs” and kept going about their lives? Also, as you say, the feather colour is largely irrelevant. And probably the discovery of black swans helped scientists better understand evolution (were they a core data point for Darwin? I don’t really know).

    Also, as an aside, the dude who wrote the black swan book has a big chip on his shoulder about how standard statisticians assume data is normally distributed. This isn’t what standard statisticians do; what we know is that the sum of many independent random variables is approximately normally distributed (the central limit theorem). These are completely different things, and anyone who markets a whole book on the claim that people do the former is probably being deliberately disingenuous.
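
    A quick check of the distinction (a stdlib-only sketch of the central limit theorem: the raw draws are wildly non-normal, but their sums are close to normal):

    ```python
    import random
    import statistics

    def draw():
        return random.expovariate(1.0)  # skewed, decidedly non-normal

    # 10,000 sums of 100 independent draws each.
    sums = [sum(draw() for _ in range(100)) for _ in range(10_000)]
    print("mean of sums:", round(statistics.mean(sums), 1))    # ~100
    print("stdev of sums:", round(statistics.stdev(sums), 1))  # ~10
    # A histogram of `sums` looks bell-shaped even though each draw() is
    # exponential; the CLT is about sums and averages, not the raw data.
    ```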

  6. plaasmatron
    September 16th, 2014 at 01:20 | #6

    I agree with James Wimberley.

    The principle is “carefully define what is fact and what is assumption”. As my old man used to say, “assumption is the mother of all cock-ups”.

    And I also agree it is probably more of a philosophy question than a science question. Many of us scientists take assumptions as facts, depending on the accepted dogma of the day.

  7. Paul Norton
    September 16th, 2014 at 07:57 | #7

    The taxonomy of Anatid birds (the family that includes ducks, geese and swans) is still a work in progress, with Muscovy ducks proving particularly difficult to classify. One consequence of this is that there is an unresolved dispute within Judaism over whether Muscovy ducks are kosher or whether they fall under the prohibition in Leviticus against the eating of swans.

  8. calyptorhynchus
    September 16th, 2014 at 07:58 | #8

    The first non-completely-white swan observed by Europeans was in South America. It is black and white.

  9. Paul Norton
    September 16th, 2014 at 08:09 | #9

    On the serious question: when I studied biology and ecology at university we used cladistic tables that proceeded via something like a “20 questions” game. We asked a series of questions about the characteristics of the organism we were interested in, with the answers being binary alternatives. The answers to the first question or two told us what domain it belonged to, then what kingdom, phylum, class, order, family, genus, species and sub-species. No doubt the procedure could be extended further to distinguish between different breeds or strains of domesticated animals or plants. The point is that the question that distinguishes members of class Mammalia from class Aves occurs sufficiently far up the table for us to know that a white swan and a black swan are far more closely related than a black swan and a black cat.
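
    A stripped-down version of such a key (hypothetical characters, far coarser than a real key) might look like this in code: each node asks one binary question, and the leaves are classifications.

    ```python
    # Each internal node is (question, yes_branch, no_branch); leaves are strings.
    key = ("has a backbone?",
           ("has feathers?",
            ("all-dark plumage?", "black swan (Cygnus atratus)", "mute swan (Cygnus olor)"),
            ("retractable claws?", "black cat", "some other mammal")),
           "some invertebrate")

    def classify(node, answers):
        if isinstance(node, str):  # reached a leaf
            return node
        question, yes_branch, no_branch = node
        return classify(yes_branch if answers[question] else no_branch, answers)

    print(classify(key, {"has a backbone?": True, "has feathers?": True,
                         "all-dark plumage?": True}))
    # -> black swan: the feather-colour question only appears far down the
    # tree, well after the questions separating Aves from Mammalia.
    ```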

  10. Ken Fabian
    September 16th, 2014 at 08:59 | #10

    Broad support for Research; some of those unforeseens may be converted into foreseeables with a continuing commitment to building our knowledge base.

    Disaster preparation; having something in reserve – people, organisations and resources to fall back on.

  11. Moz of Yarramulla
    September 16th, 2014 at 09:33 | #11

    Thanks Ken for trying to answer the underlying (actual?) question 🙂 Also, ProfQ thanks for bringing up the point about the white swan effect being very hemicentric.

    I think that most of the unexpected disasters can be dealt with using our preparations for the disasters we already know about and expect. Or to put it the other way, why bother preparing for unexpected disasters when you’re not willing to prepare for entirely predictable disasters? It makes more sense to spend more on the disasters we know about and perhaps a little on research into things we suspect might become disasters in the future. But that’s very much a side issue in the general research budget; there’s no point doing blue-sky research into “things that might kill us all” when there’s a huge list of those that we already know about.

    For example, first work on preventing catastrophic climate change, and use the research from that to address the more general problem of getting shaved monkeys to worry about threats they can’t see that will kill them in the impossibly far distant future (viz, later than next fiscal quarter). And work on making our cities more resilient in the face of bushfires or terrorist attacks will also help if there’s an earthquake or big meteorite strike. It’s likely that those preparations will also help if there’s an alien landing or an invasion of angry Loch Ness monsters.

    There’s also a list of disasters that there is no point worrying about because we can’t yet imagine a solution to them. A black hole heading towards our solar system? Prepare to die… in a few million years. A nearby supernova? We might, if we’re lucky, get a few minutes warning. Neither of those can be prevented or ameliorated so why worry?

    But the general principle of not keeping all our “eggs in one basket” might, eventually, solve both of those problems. As well as many others, some of which will only be revealed by attempting the solution. I mean, work on antimatter space drives will naturally lead to wanting to prevent terrorists getting antimatter weapons. A whole new disaster!

  12. Gaius X
    September 16th, 2014 at 10:18 | #12

    The question is: what principles would help you to avoid making, and acting on, the assumption “all swans are white (or, in my own case, black)”.

    As Paul suggests, traditionally taxonomists produce keys with binary yes/no answers that allow punters such as myself to categorise a specimen.

    Usually just one “character” (to use the lingo) of presumed evolutionary significance will separate related taxa, for instance a duck from a goose (eg three toes instead of four).

    Provided a specimen meets the single criterion in the key, it is a duck or whatever, even if it is morphologically different from all other known ducks. Hence there is no need for a principle such as the one you suggest.

    I’m not sure this is relevant to your question, but taxonomy, which is an attempt to place all living things in a neat pigeonhole, turns out to be very difficult in many cases, for various reasons.

    Moreover taxonomists have never been able to produce a neat, commonly accepted definition of the cornerstone concept, species, and even the best operational definition offers only general guidance. Accordingly there are always disputes in taxonomy; for instance Tasmania recently lumped callistemon in with melaleuca but other states haven’t followed suit.

    Interestingly even politics impacts taxonomy in several ways but that is OT.

  13. ZM
    September 16th, 2014 at 10:25 | #13

    John Quiggin,

    What sort of thing are you wanting to use this in?

    I had a guest lecturer in a class on the problems with the precautionary principle from a sustainability perspective. He mentioned the black swan book by Taleb as part of a discussion on the difficulty of understanding and governing complexity and complex systems. He saw the precautionary principle as enabling the status quo by convincing administrators that complexity is more manageable than it is – from the lecture:

    “we convince ourselves that complex systems [eg a forest or society] can be treated like complicated ones [eg complicated engineering of a machine] and when our management fails, we respond by calling for a little more precaution. We respond by refocusing the discussion away from ‘stopping’ our behaviors towards questions like ‘how much of this can we get away with?’ This ‘bait & switch’ allows us to substitute what we know (climate, pollution, poverty) with uncertainty, which allows us to carry on doing what we like”

  14. Newtownian
    September 16th, 2014 at 10:55 | #14

    Where would you like to start, John? (That’s rhetorical.) Here are a few thoughts you might or might not find useful. My perspective is environmental science/engineering, working on quantitative risk management models, with parallels to the issues faced by economists.

    What are Black Swans?
    1. The black swan is the unexpected – to varying degrees and over varying timeframes. For engineers the biggest problems are the unpleasant black swans, which are accepted as inevitable – unlike in economics until recently.
    2. The discipline that covers this is Risk Assessment and Management. In 2009 a new standard emerged, ISO 31000; most useful is the practical guide IEC/ISO 31010 (Risk management – Risk assessment techniques, Edition 1.0, 2009-11). There are 31 different tools, developed in different contexts, each conveniently outlined in a two-page summary. (Look at the standard to avoid any wheel reinvention; the list is not exhaustive.)
    3. Several ISO tools discuss concepts like hazards, hazardous events, fault trees, event trees, scenario analysis etc. – the nasty black swans, if you like – with a view to managing them. They address how to systematically collect the knowledge and how to use it for management. Many are qualitative. Some are quantitative tools (more below). All are useful.
    4. To my surprise, a few years ago I discovered that consideration of hazardous events is not as routine in business/economics risk assessment as might be expected. This puzzled me until I discovered there is this scholastic belief in equilibration on the part of most economists. Daft. Yes, equilibration occurs most of the time, but it’s when non-equilibrium times occur that the fun/black swans appear. So 2008/Zombie Economics has been fascinating to watch.
    5. In respect of the Precautionary Principle, there is one well developed area not yet incorporated in the ISO standard: flood management. Being in Brisbane you are no doubt aware of the strife caused by misapplication of this discipline a couple of years back – the Brisbane floods. Remember how the lawyers and politicians tried to shoot the messengers (the engineers who did the calculations and gave their best expert judgement) rather than the developers who wilfully misunderstood how flooding works – a bit like climate change deniers who see short term profit in a gamble and bugger the future.
    6. Central to engineering/management control of flood impacts are the “1 in 100 year” style flood levels, flows and maps which provide the benchmarks for sizing engineering controls (stormwater drains, housing block locations etc.). These are often misunderstood as ‘a flood will occur every 100 years to a particular height, above which we can build and get insured’ as against ‘there is a 1% probability of a flood reaching or exceeding the estimated level every year’ (see Australian Rainfall and Runoff for details – the latter is referred to as an ‘exceedance probability’). The difference is subtle – see the Met Bureau explanation on rainfall, and the arithmetic sketch after this list.
    7. Flood management is a perfect example of being precautionary in the event of this kind of black swan and lots of associated but measurable uncertainty. In response to knowledge and modelling of flooding you can put in a big dam with a certain capacity, stop people building in what is a flood plain, you can set an arbitrary ‘tolerable risk’ level, you can watch rainfall and issue timely evacuation warnings etc.
    8. Risk benchmarks are necessarily compromises between affordability and likelihood. Their calculation may be entirely rational but the cut-off point is an institutional decision.
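    The arithmetic behind the ‘exceedance probability’ point in 6 above is worth a sketch (my own illustration of the standard formula, not anything from the ISO documents):

    ```python
    # If the annual exceedance probability (AEP) of the design flood is p,
    # the chance it is reached or exceeded at least once in n years is
    # 1 - (1 - p)**n, assuming independent years.
    def prob_at_least_one_exceedance(aep, years):
        return 1 - (1 - aep) ** years

    for years in (1, 30, 70, 100):
        p = prob_at_least_one_exceedance(0.01, years)
        print(f"{years:3d} years: {p:.0%}")
    # Over a 70-year house lifetime the "1 in 100 year" flood has a ~50%
    # chance of turning up at least once, hardly a once-a-century event.
    ```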
    Black swan management
    1. So how do you avoid or minimise the impact of black swans generically?
    2. You first have to understand the system well, preferably quantitatively and as objectively as possible. Then you have to put the pieces together coherently while recognizing data gaps and other uncertainties.
    3. Systems are complex and humans are lousy at doing probability calculations which combine posterior and prior knowledge. This means you need to do systematic and credible modelling (shades of climate change) bearing in mind:
    a. Our penchant for delusion and prejudgment (“It seems to me a crucial fact”) as against letting the model and analysis reveal structures we never dreamed of.
    b. All models are wrong but some are useful.
    c. Models too often take on a black box format which mystifies outsiders.
    d. Models are only as good as the input data – remember GIGO.
    e. We are inhabitants of Plato’s cave, and all that we see has associated uncertainty and risk, which we must ignore or discount when making decisions, else we might never make a decision.
    A Black Swan Oracle?
    1. Thirty-one assessment tools and counting is a lot, which raises the question of how to integrate them all.
    2. For my money an at least partial solution appears to be Bayesian Belief Nets, which provide a generic qualitative and quantitative framework for systems description and analysis.
    3. Two first-stop shops, if this interests you, are the Monty Hall problem (see Wikipedia; a quick simulation follows below) and Netica (inexpensive commercial software with a fully functional freemium version, sufficient to learn with).
    4. Ironically, such Bayes applications have been driven by economists.
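
    For the curious, a quick Monty Hall simulation (a standard toy for conditioning on evidence; nothing here is Netica-specific):

    ```python
    import random

    def play(switch, doors=3):
        car = random.randrange(doors)
        pick = random.randrange(doors)
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(doors) if d != pick and d != car)
        if switch:
            # Switch to the remaining unopened door.
            pick = next(d for d in range(doors) if d != pick and d != opened)
        return pick == car

    trials = 100_000
    for switch in (False, True):
        wins = sum(play(switch) for _ in range(trials))
        print(f"switch={switch}: win rate {wins / trials:.3f}")
    # Sticking wins ~1/3, switching ~2/3: the host's choice is informative,
    # which is exactly the kind of updating a Bayes net encodes.
    ```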

  15. John Brookes
    September 16th, 2014 at 12:36 | #15

    @Newtownian

    Ah, but a real black swan event in your flooding example is an earthquake destroying the dam when it is full due to heavy rainfall.

  16. conrad
    September 16th, 2014 at 12:49 | #16

    There are really two questions here — what defines things, and how to predict things that are undefined or of very low probability.

    In terms of the first one, there’s a large psychology literature on what people see as the defining features of certain things (unfortunately, most of the examples are on things like swans in the physical sense, and presumably not the type of thing you are really interested in). As for the second, not that I know much about it, but I used to work with a stats guy into Extreme Value Theory, and I imagine you could combine aspects of the two to get predictions about what people would predict, and potentially how that might deviate from more optimal solutions (i.e., categorization theory with very low probability outcomes).
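
    For illustration, a minimal block-maxima sketch of the EVT approach (assuming SciPy is available; the data are simulated, and the GEV fit is just the textbook starting point):

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(0)
    # 50 "years" of 365 daily observations from a heavy-ish tailed process.
    daily = rng.lognormal(mean=0.0, sigma=0.75, size=(50, 365))
    annual_maxima = daily.max(axis=1)

    # Fit a generalized extreme value distribution to the annual maxima.
    shape, loc, scale = genextreme.fit(annual_maxima)

    # Level exceeded with probability 1% in any year: the "1 in 100 year" level.
    level_100 = genextreme.isf(0.01, shape, loc=loc, scale=scale)
    print(f"estimated 1-in-100-year level: {level_100:.2f}")
    # The fitted tail, not the bulk of the daily data, drives this estimate;
    # that is EVT's answer to the very-low-probability prediction problem.
    ```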

  17. Rob Banks
    September 16th, 2014 at 14:52 | #17

    John

    Your initial question seems to be “how can you prepare for unforeseen contingencies?” I think the answer developed and used by evolution is to generate and retain redundancy and diversity. Redundancy provides protection against events that take out some members of a class (of species, methods, individuals within a species). Diversity enhances that by adding capacity to not be taken out due to some functional attribute(s).

    So redundancy is protection by numbers, diversity is protection via variation. Evolution usually utilises and maintains both.

    Hope this is not off the track of your question.

  18. faustusnotes
    September 16th, 2014 at 14:53 | #18

    Bayesian stats cannot fix the “Black swan” problem because it is not a statistical problem, but an epistemological one. At some point you cannot predict what is going to happen – all you can do is prepare. And discussions about the cost-effectiveness of preparation can easily handle “black swan” events simply by incorporating them into the probability distribution.

  19. Royton De’Ath
    September 16th, 2014 at 16:07 | #19

    Having read the posts and associated comments here and at CT, I’m a bit confused as to what is being asked of the ‘scientifically literate’ commenter (a state of confusion being almost permanent on my part, I must admit). Judging from the questions being posed, and the paper linked to (thanks, very interesting stuff), this is an exercise in decision rules? For another paper?

    My confusion rests in the fact that the real world has (albeit messily) engaged with aspects of PrQ’s enquiry, so I’m wondering if there’s something else that’s being tilted at. Or. More likely – personal bewilderment reigns.

    To my certain knowledge, case-law on PP has ‘operationalised’ rules for decision making for planning decisions. Jacqueline Peel’s book (Fed. Press) is a very useful overview of the state-of-play on PP in Oz and elsewhere. Judgements applying PP in planning law are well worth reading as they reflect (types of) decision-making in real-world situations.

    C.S. (Buzz) Holling and Lance Gunderson were publishing interesting work on complexity and scientific responses thereto from the early 70s onwards (in particular developing adaptive management as a tool for evidence-based, but conditional, decision-making for managers). They and others are worth reading (try “ecological resilience”).

    But all this is, at rock bottom, the old, old issue that all environmental/ecological managers face: the issue mentioned in passing in PrQ’s paper, making difficult decisions wrt “developer vs public risks”. One of the better overviews of these issues (that I read) is Shrader-Frechette & McCoy’s “Method in Ecology”.

    Having recently retired, having spent about thirty years in environmental management (for a chunk of that reporting directly to decision-makers) all I can say is that it was a b….y difficult, time- and resource-constrained “decision-making environment”. Uncertainty was just an everyday working “thing” that had to be addressed with the most appropriate tools available – honesty being one of them.

  20. Newtownian
    September 16th, 2014 at 16:50 | #20

    @John Brookes

    Wrong on several counts. Sorry John, but you don’t seem to get how this game is played.

    1. I said all models are wrong but some are useful, so your quasi known unknown has been considered to the extent the data allow, which isn’t that bad IMHO. The Queensland floods were, by contrast, foreseeable, and therein lies the tragedy: the management response of the dam was used to excuse stupid building on a flood plain.

    2. Earthquake impacts are predictable in the same way as extreme floods. It’s just that current practice uses the 100-year recurrence as a compromise with tolerable risk, and earthquakes, at least around Sydney, appear to be very unlikely/outside that range. Of course earthquakes are accounted for. How good the stats are, though, is always worth having a look at.

    This doesn’t mean you can’t adapt if you find a flaw in an existing design. For example at Warragamba a couple of decades back they realized a dam-fracturing rainfall event was more likely than expected (a revision of the stats brought it down from a 10,000-year recurrence to a 1,000-year recurrence, I vaguely remember). In response a huge spillway was constructed to take the 750-year recurrence event in the catchment.

    3. The black swan is a pretty rubbery concept – it might be predictable but unlikely, and yet occur next year. It might be completely unexpected, or discounted as irrelevant by planners – the Japan earthquake and tidal wave were predictable. They just thought it was sufficiently unlikely that they could get away with below-ideal protection. The trouble is that it’s a big world, and every now and then the 100-year or even 1,000-year event will be exceeded.

    4. Regarding earthquakes destroying dams, thankfully it doesn’t happen too often, though of course this kind of event can and will happen from time to time. If you are worrying about that, I would not move to San Francisco.

    5. Earthquakes aren’t the real black swan. For Sydney, Brisbane and Melbourne the real black swan would be the Probable Maximum Flood, which makes an earthquake look like nothing. The PMF for Western Sydney is 28 m where there is a bathtub effect!!! – probably reflecting Indian monsoon conditions.

  21. Ikonoclast
    September 16th, 2014 at 17:11 | #21

    For me the black swan event was and is neoliberalism taking such firm hold and ruling western society for 40 years and counting. Along with this, the rise of anti-science denialism has also been a black swan event IMO. The two are linked. Neoliberalism equals intellectual and scientific ignorance. Despite all our learning and progress we now regress and enter the New Age of Endarkenment. The black swan wings of neoliberalism spread over the world and smother it in dark ignorance. Is it a black swan or the predictable black vulture of Corporate-Oligarchic dictatorship?

    The scientific humanist enlightenment is being destroyed by late stage capitalism. Not entirely a black swan I guess. Karl Marx did warn us about this. Socialism or barbarism, there is no third way.

  22. Newtownian
    September 16th, 2014 at 17:16 | #22

    @faustusnotes
    Wrong also on several matters.

    1. Bayesian Belief Nets are not designed to fix anything. They are a decision support tool designed to assist you in getting the best possible estimates given a certain set of beliefs and quantitative data, which are like Plato’s Cave – uncertain on a probability scale reflecting available knowledge plus speculation/scenario construction, ranging from 0.999999 (close enough to equate to certainty) to <0.00000000000000001 (so unlikely it would scarcely bear thinking about).

    2. You can never predict anything with certainty – not just the Black Swan. But some things are more likely than others, so Bayes nets etc. can still be useful.

    3. You can’t handle Black Swan events optimally if you don’t have some idea how a system functions in the first place – possibly only empirically, but preferably based on identified mechanisms. The task is complicated because getting good input data can be damn hard and expensive, though it is always possible to speculate at an order-of-magnitude scale, and that qualitative approach can still be useful.

    Another complication with Bayes is that in practice different people often have different ideas about causes and effects, which means it’s hard to reach agreement.

    One thing about economics that intrigues me is that there appear to be many claims that the discipline understands mechanisms, but closer analysis indicates things are a lot more rubbery. Useful examples are the concepts of ‘value’ and freedom. Discounting in effect says very little value is placed on the future beyond 10 years, yet people keep having children in part because they are looking to the future. But this is of course fudged/ignored most of the time.

    Free choice seems to assume we understand the concept of free will v. determinism, when this just isn’t so.

    I’m not suggesting these aren’t interesting starting points for analysis, but rather that to develop a real Bayes net for identifying Black Swans you need to go a lot deeper than the current approach does. I include Steve Keen in this, though to his credit he is certainly trying to expand the way economic theory is framed.

    The potential significance of current economic concepts not being well grounded is that if we don’t really understand economics, we are very vulnerable to economic hazardous events – such as the production of dangerous bubbles.

  23. Newtownian
    September 16th, 2014 at 17:22 | #23

    @Ikonoclast
    Is Neoliberalism really a black swan? Ever since it emerged with Thatcher’s 1980s nonsense mantras I can’t say I’ve given it much credence. And from what I’ve read of your posts, IK, I doubt you ever gave it much credence either.

    I’d cast it rather as a hideous known known whose agenda is just being worked through at a slow but steady pace.

  24. Donald Oats
    September 16th, 2014 at 19:25 | #24

    Engineer: I see a black swan; therefore, all swans are black.
    Physicist: I see a black swan, therefore that swan is black.
    Mathematician: I see a black swan, therefore that side of that swan is black.

    In other words, it depends upon your powers of reasoning and the level of precision you demand in focussing upon something as being in a particular category for the purposes of treating the relevant collection of events as a stochastic variable. My understanding is that the black swan is meant to be the category-breaker, the deal-breaker, the thing that couldn’t happen in a zillion years, and yet it did. It isn’t just something inside the 5% tail, or the 1% tail; it is that event which is so impossibly unlikely that we have no good tools for apprehending it. We just don’t even consider incorporating it into our collection of possible events.

    In the context of gambling, say we bet on red at roulette, and just as the ball is about to settle on a red number, a fist-sized meteorite zaps through the space inhabited by the roulette ball – well, that would be a black swan event. When calculating the odds of a win at a known roulette table, we can take all the usual variables into account, but who is going to think of a meteorite cratering its way through the roulette table? No one (until now, anyway). That’s a black swan for you…

  25. Andrew Dodds
    September 17th, 2014 at 00:08 | #25

    I can think of two related approaches to dealing with Black Swan style events – overcapacity, and making sure you can do the same thing in more than one way (redundancy).

    Overcapacity could mean something like making sure you could always produce 50% more electricity than required; redundancy might mean that you could keep your grid functioning even with the entire loss of one source of electricity – e.g. all your dams are empty due to a freak continent-scale drought.

    And you need overcapacity and redundancy in all essential areas – food, energy, housing, transport, water, communications – because the effect of a black swan is unknowable in advance.

    Of course, if you leave these things to the market then, by definition, overcapacity and redundancy get stripped out. Which is fine if you are making TVs, or cars, or websites (computer operating systems? Hmmm). But for essentials, it does seem a bit dangerous.

  26. sunshine
    September 17th, 2014 at 12:30 | #26

    Classical logic has it that there are two ways of assigning probability: empirical (from observation or experience) and a priori (from rules and deduction). They are not totally separable from each other – but an empirical method would not predict a black swan, whereas an a priori one could.

    The peer review system may not be good at finding black swans, as it usually ensures research unfolds methodically and cautiously in predictable directions. The big game-changing revolutions often come as black swans. So many of the big scientific advances came from accidents or mistakes – even from dreams. As for useful black swan principles – I’d say ‘avoid hubris’, embrace doubt.

  27. TerjeP
    September 17th, 2014 at 14:17 | #27

    For example, first work on preventing catastrophic climate change, and use the research from that to address the more general problem of getting shaved monkeys to worry about threats they can’t see that will kill them in the impossibly far distant future (viz, later than next fiscal quarter).

    I was watching a video of a talk by David Friedman the other day. He was talking about global warming and the associated temperature risk. He talked about the range of temperature increases predicted by the IPCC and then pointed out that there was a small risk that things could be much worse than predicted. He referred to this as a low probability, high cost scenario. However he then pointed out that the opposite scenario also exists. There is a low probability that without the extra CO2 we would be entering a new glacial period and that global warming is slowing or even stopping that from occurring. Under this second low probability scenario policies that stop or slow global warming are very high cost.

    Whilst you may choose to reject any notion of Swan symmetry in the particular case of global warming it would seem likely that there is in general some difficulty in knowing the symmetry of Swan occurrence in any given problem space. So in many circumstances Swans can turn up that lead you to regret not having done more of X, just as much as Swans can turn up that lead you to regret not having done less of X.

  28. David Rohde
    September 17th, 2014 at 16:33 | #28

    Dear Prof Quiggin,

    I can probably provide some pointers to some interesting references. Apologies, if I only state the obvious…

    There are some interesting comments on Taleb’s books by Christian Robert, Larry Wasserman, Andrew Gelman, Dennis Lindley and David Aldous if you want to search for them. Obviously Taleb’s style annoys lots of people. I like Aldous’ commentary the best.

    In part Taleb’s argument is anti-formalist and anti-modeling; that is, he argues there is not much you can do, which is a bit defeatist. Although he argues that fat-tailed distributions help with “grey swans”.

    The most comprehensive account of decision making under uncertainty is usually considered to be due to Bruno de Finetti. Especially Prévision: ses lois logiques, ses sources subjectives, and Theory of probability. There is an excellent and very underrated book by Frank Lad, the book by Peter Walley might also be relevant.

    The most important aspects of the theory are the de Finetti representation theorem and the fundamental theorem of prevision. The de Finetti representation provides bridges between classical and Bayesian statistics. The fundamental theorem of prevision (FTP) is less widely known; importantly, it allows uncertainties to be specified imprecisely. Bayesian statistics is currently being driven forward by computational methods that perform conditioning (or, more accurately, marginalization), usually using MCMC. Imprecise inference based upon the FTP is not being pursued as much, because the action is elsewhere at the moment, but an imprecise probabilistic specification seems relevant to this type of ‘black swan’ problem.
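
    As a toy instance of the representation theorem (my own illustration): with a Beta mixing prior over the unknown frequency, the predictive probability for exchangeable 0/1 ‘sightings’ has a closed form, and an FTP-style imprecise specification can be mimicked by reporting bounds over a set of priors.

    ```python
    # Exchangeable 0/1 swan sightings as a mixture of i.i.d. Bernoullis
    # with a Beta(a, b) prior over the unknown frequency (Laplace's rule
    # of succession when a = b = 1).
    def predictive_prob_white(n_white, n_total, a=1.0, b=1.0):
        """P(next swan is white | n_white of n_total so far), Beta-Binomial."""
        return (n_white + a) / (n_total + a + b)

    print(predictive_prob_white(1000, 1000))  # ~0.999, but never exactly 1

    # Imprecision in the FTP spirit: bounds over a set of priors rather
    # than a single number.
    lo = predictive_prob_white(1000, 1000, a=1, b=10)
    hi = predictive_prob_white(1000, 1000, a=10, b=1)
    print(f"interval over priors: [{lo:.4f}, {hi:.4f}]")
    ```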

    The theory importantly separates unknown future outcomes, utility, and decisions. I interpret the black swan idea as you being unable to assess the probabilities for (some) unknown future outcomes, i.e. you place wide intervals on these probabilities. This in turn makes it hard to determine the expected utility of some, or possibly all, decisions. I expect you can motivate the precautionary principle from this sort of perspective. I would quibble a little with how you introduce the precautionary principle, in that it is the expected loss of a decision that you cannot bound – usually you have a pretty good idea of the loss of a particular decision for a particular known future outcome.

  29. Blissex
    September 18th, 2014 at 10:26 | #29

    «unforeseen contingencies, popularly, if ethnocentrically, described as “black swans”»

    If I understand Nassim Taleb correctly, that’s not how he characterizes “black swan” events, and it is indeed very far from his conception of “black swans”.

    «In particular, I’m interested in the question of how you can prepare for such contingencies»

    I dearly hope that our poster has read the somewhat fuzzily written “Antifragile” book by Nassim Taleb. That is his reply both to “black swans” and to what you are asking, except that the question is now ambiguous:

    «such contingencies given that, by definition, you can’t foresee exactly what they will be.»

    OOPS, major bait-and-switch here: you have talked of “black swans”, “unforeseen contingencies” and “can’t foresee”. As to the latter two, it seems to me there is a confusion between “unforeseen” and “unforeseeable”, where the difference between the two is pretty gigantic.

    «what principles would help you to avoid making, and acting on, the assumption “all swans are white (or, in my own case, black)”.»

    That’s a different topic again. That’s the ancient problem of extrapolation, which is based on the alternative between making sharp predictions and accepting that they may be wrong, or making vague predictions that are never wrong.

    «One example, with which I’m very pleased, is that of the precautionary principle. It seems reasonable to say that we can distinguish well-understood choices involving hazards from those that are poorly understood, and avoid the latter, precisely because the loss from hazard cannot be bounded in advance.»

    “Antifragile” by Nassim Taleb has extensive discussions of this and much more.

  30. Andrew Dodds
    September 18th, 2014 at 18:23 | #30

    @TerjeP

    We have a very strong idea that human activity has stopped an ice age – paleo analogues suggest that we should have been gradually moving back to glacial conditions since the Holocene climatic optimum 6,000 years ago – but the small additions of CO2 from deforestation and agriculture stopped it.

    The key is in the timescales – natural ice age onset takes thousands of years, whereas the timescale for AGW is decades to centuries.

  31. September 18th, 2014 at 22:20 | #31

    @Gaius X
    Yes. JQ’s question assumes there is a connection between the problems of taxonomy of living creatures, and those of labelling events in human affairs.

    In biology, you have two problems. The one normally encountered – and the only one encountered by non-specialists – is identifying the species of an individual. Species undoubtedly exist in nature, even if there are hard cases. Jared Diamond, a card-carrying ornithologist, found that New Guinean tribesmen had the same list of bird species that he did. Only their higher classifications were different. Identifying this creature as an x, you have a lot of characters to play with. The difficulty only arises if there are very similar species. In some cases, a five-year-old can’t go wrong: giraffes, polar bears, toucans, condors. In others, you need a microscope: ants, beetles, small brown birds. But there isn’t an epistemological problem; you look it up.

    The second problem, for experts only, is fitting the species into a higher taxonomic tree. Before genetics, this was a somewhat arbitrary procedure. The higher orders were human creations, and taxonomy was not right or wrong. Cladistics tries, as I understand it, to correct taxonomy to fit a true evolutionary descent. There are, I suppose, difficulties for expert judgement, when different genes seem to tell different stories, or stories that don’t fit the phenotypical differences. But again, where is the epistemological problem?

    I’m guessing, but I feel that John’s economic black swan problems are closer to species identification than cladistics. That’s simple enough. Take as many characteristics as you can. Then look for a unique characteristic or two, like the toucan’s bill. (These may be environmental. A very large all-white animal in the Arctic is a polar bear). Alternatively (for small brown birds, etc), assemble a very short list of species that fit obvious characteristics, look up the differentiating characteristic, and check that.

  32. Stockingrate
    September 20th, 2014 at 16:06 | #32

    Monotremes

  33. Stockingrate
    September 20th, 2014 at 19:00 | #33

    Monotremes were a surprise – egg-laying and milk-producing, i.e. two features not previously (ethnocentrically) known to exist in the same animal, and where the animal groups that had one of the features had many phenotypic differences from those with the other. So even using a larger genetic difference would not preclude black swans. Still, the principle seems sound.

Comments are closed.