
A trolley problem

July 26th, 2017

I’ve generally been dubious about trolley problems and similar thought experiments in ethics. However, it’s just occurred to me that an idea I’ve tried to express in the economistic terms of opportunity cost, without convincing anybody, might be more persuasive as a trolley problem. So, let’s start with the standard problem where the train is about to kill ten people, but can be diverted onto a side track where it will kill only one.

In my version, however, there is a second train, loaded with vital medical supplies, which is about to crash. The loss of the supplies will lead to hundreds of deaths. You can prevent the crash, and save the supplies, by diverting the train to an alternative route (not killing anybody), but you don’t have time to deal with both trains. Do you divert the first train, the second train, or neither?

Hopefully, most respondents will choose the second train.

Now suppose that the first train has been hijacked by an evil gangster and his henchmen, who will be killed if you divert it, but will otherwise get away with the crime. As well as the gangsters, the single innocent person will die, but the ten people the gangster was going to kill will live.

The impending crash of the second train isn’t caused by anybody in particular. The region it serves is poor and no one paid for track maintenance. If the train doesn’t get through, hundreds of sick people will die, as sick poor people always have, and nobody much will notice.

Does that change your decision?

As I hope at least some readers will have realised, this version of the trolley problem is a metaphor for humanitarian military intervention. The moral intuition supporting such intervention is the same one that would lead us to choose stopping the gangster over saving the lives of people who would otherwise die as a result of poverty and disease.

As I’ll argue at length if needed, the numbers in the example are stacked in favour of humanitarian intervention. Many such interventions kill more people than they save. Even where they are successful in their own terms, the cost is massively more than that of civilian aid, for a fraction of the benefit.

One final point is that, in reality, the ‘henchmen’ are often conscripted, by force or economic necessity, from the same population as the people whose lives are supposed to be saved by intervention. On any reasonable account, their deaths ought to be weighed in the ledger against any lives saved.

  1. Gregory J. McKenzie
    July 26th, 2017 at 14:59 | #1

    Divert the first train. My choice was made because the underlying assumption, that saving a supply train is more important than saving people’s lives now, is absurd. Lives matter and death is final. I would save the ten lives at the cost of the driver of the first train. The opportunity cost of losing ten lives is too high and it is a present cost. If you were to divert the second train, you would save one life today with a future benefit of hundreds of sick people getting essential medication. Future cost is not the same as present cost. There is a certainty of death for ten people now, put up against the probability of hundreds of deaths in the future. As an economist I may pick the second train, or even do nothing, but as a human being I can only save people’s lives in the present. Economists must be moral philosophers first and mathematicians last.

  2. Newtownian
    July 26th, 2017 at 15:23 | #2

    John – Good to see you venturing into the real world of risk assessment and management, where game theory might fear to wander lest it spoil the ideal oversimplified models that are more analogous to straw men.

    In the interest of being constructive, here is a possible solution.

    What you have here is a problem of reasoning under uncertainty. For example, you talk about hundreds of people: is it 200 or 900? Are they old and gaga with not much life left, or is it a train loaded with young children on their way to holiday camp? How much is a baddy worth? And compared to the original problem it’s got lots of scenarios and variables… so:

    1. For starters it’s not a problem for economics but one for health risk assessment, which, if you are a utilitarian, happens to have the ideal metric these days: the Disability Adjusted Life Year (DALY), which allows us to value human life in quantitative terms and allocate resources and decisions in the best Pareto tradition. Being old, I’m a little dubious about the system, as it would value a child’s life at 3-4 times that of mine, but we’ll let that slide for the moment and just use averages, at least for the innocents (the baddies are a different matter).

    2. This is a classic problem in risk assessment based on causal reasoning as expounded by Judea Pearl. For beginners I recommend his lecture “Reasoning with Cause and Effect” (International Joint Conference on Artificial Intelligence 1999 Research Excellence Award Lecture), http://singapore.cs.ucla.edu/IJCAI99/index.html . The presentation, which is on his site, explains nicely the different causal reasoning approaches.

    In short, what I am suggesting is: set up a Bayes Net of your problem and use the results for decision support, rather than futz about with game theory.

    3. But how do you combine Bayes Nets and DALYs? Well, happily, the Bayes Net programmers have taken a leaf out of the Von Neumann–Morgenstern playbook and developed the utility + decision node system. Using these you can look at the DALY benefits or otherwise of your various options, and if you like you can reduce the value of the lives of the baddies (in any case, if they are locked up for life they won’t be worth much). You can of course also add the benefit of the medicines, depending on what they are for, how much of them there is, and who they would save. Experimental Alzheimer’s medicines would be pretty worthless, I suspect. (A minimal numerical sketch follows this list.)

    4. If you really must do an economic analysis you can add some other utility + decision nodes which focus on money, like the cost of replacing a derailed train which might otherwise be used for bringing in food and medicines in the future. Jeremy Bentham would be really proud of you and smile out at you from his glass box.

    5. All this will generate a range of scenarios and decisions, with the costs/benefits in metric form in a nice table. Finally, you use this as decision support to judge which is the best of a number of devilish alternatives. Playing god, as it were, which is what politicians overseeing finance portfolios effectively do.

    6. If you have a supervisor who says your decision must be judged as credible based on economic rationality, point him/her to Rebonato, R., & Denev, A. (2014). Portfolio Management under Stress: A Bayesian-Net Approach to Coherent Asset Allocation. Cambridge: Cambridge University Press; Rebonato, R. (2010). Coherent Stress Testing: A Bayesian Approach to the Analysis of Financial Stress. John Wiley & Sons; and of course Blaschke, W., Peria, M. S. M., Majnoni, G., & Jones, M. T. (2001). Stress Testing of Financial Systems: An Overview of Issues, Methodologies, and FSAP Experiences (Vol. 1). International Monetary Fund.

    Being a good bureaucrat, s/he should appreciate that such gravitas means they can cover their rear end in court even though they don’t have a clue what these works are about.

    7. If on the other hand s/he is more anti-rationality, show them this semi-humorous, semi-deadly-serious survey of the geography of health decision making and ask them to choose the final decision-making basis, as ‘they are the expert’, i.e. pass the buck: Dowie, J. (2006). A New Map of the World of Judgment and Decision Making in Health. Unpublished work, accessed 1/6/2013, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.120.6824&rep=rep1&type=pdf
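
    To make point 3 concrete, here is a minimal sketch of the bookkeeping such a net’s utility nodes would do, in plain Python rather than a real Bayes Net package. Every number in it is a made-up assumption for illustration (the outcome probabilities, the 30-DALY figure per death, the choice to leave the baddies out of the ledger), not an estimate of anything.

        # Back-of-the-envelope decision support for the trolley variant.
        # All numbers are illustrative assumptions, not estimates.

        DALYS_PER_DEATH = 30.0  # assumed average healthy-life-years lost per death

        # The post only says "hundreds" die if the supplies are lost, so we
        # spread probability over a hypothetical range of outcomes.
        SUPPLY_LOSS_DEATHS = [(0.25, 200), (0.50, 400), (0.25, 800)]

        def expected(dist):
            """Expectation of a (probability, value) distribution."""
            return sum(p * v for p, v in dist)

        def expected_deaths(decision):
            """Expected deaths under each of the three available decisions."""
            supply_deaths = expected(SUPPLY_LOSS_DEATHS)
            if decision == "divert_first":
                # One person on the side track dies, ten are saved, but the
                # supply train still crashes. (Gangster and henchmen omitted.)
                return 1 + supply_deaths
            if decision == "divert_second":
                # The supplies get through; the first train kills its ten victims.
                return 10
            if decision == "do_nothing":
                return 10 + supply_deaths
            raise ValueError(decision)

        for d in ("divert_first", "divert_second", "do_nothing"):
            deaths = expected_deaths(d)
            print(f"{d:>14}: {deaths:6.1f} expected deaths, "
                  f"{deaths * DALYS_PER_DEATH:8.1f} expected DALYs lost")

    Swap in your own distributions, and a discounted DALY weight for the baddies if you insist, and the resulting ranking is the decision support.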

  3. Newtownian
    July 26th, 2017 at 15:48 | #3

    PS: It wasn’t an oversight that I didn’t make a decision, and instead only passed the buck and indicated how you might solve the problem. That’s because I wouldn’t decide without doing a proper quantitative analysis of the scenarios. Without that, my decision would be based on my not-so-expert opinion and IMO would be about as useful as flipping a coin.

    To say ‘expert opinions’ are unreliable is an understatement, as ably demonstrated, to my mind, by Slembeck, T., & Tyran, J.-R. (2004). Do Institutions Promote Rationality? An Experimental Study of the Three-Door Anomaly. Journal of Economic Behavior & Organization, 54, 337-350.

    If people can’t make a decision on the Monty Hall problem, how far could you really trust them to work through your substantially more complex brain teaser? That said, there are ways around these problems, but they must be systematic and quantitative. You’ll find a nice refereed discussion of the challenge in various works by Dowie, such as this one, which includes a couple of local health economics specialists: Dowie, J., Kaltoft, M. K., Salkeld, G., & Cunich, M. (2013). Towards Generic Online Multicriteria Decision Support in Patient-Centred Health Care. Health Expectations, doi: 10.1111/hex.12111.
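
    For anyone who hasn’t met the three-door anomaly: a short simulation (plain Python, making no claims about Slembeck and Tyran’s experimental design) shows why ‘always switch’ wins twice as often as ‘always stay’.

        import random

        def play(switch):
            """One round of Monty Hall; True if the player wins the car."""
            doors = [0, 1, 2]
            car = random.choice(doors)
            pick = random.choice(doors)
            # The host opens a door that is neither the pick nor the car.
            host = random.choice([d for d in doors if d != pick and d != car])
            if switch:
                pick = next(d for d in doors if d != pick and d != host)
            return pick == car

        n = 100_000
        for switch in (False, True):
            wins = sum(play(switch) for _ in range(n))
            print(f"switch={switch}: win rate {wins / n:.3f}")  # ~0.333 vs ~0.667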

  5. alex watt
    July 27th, 2017 at 12:49 | #5

    What would Jesus do?

  6. Jim Birch
    July 27th, 2017 at 13:12 | #6

    The trolley problem is interesting to me because it shows that we make irrational decisions based on a notion of agency: the typical result is that it’s OK to let many people die rather than be the agent of the death of a few.

    In the case of humanitarian intervention, there’s often a weird flip to the opposite disposition. Those in power feel compelled to take some “moral” action without actually evaluating the consequences. The Iraq invasion – saving the free world from Saddam Hussein – would be an extreme case. You might argue that this was undertaken cynically; I would (mostly) disagree – it was monstrously and culpably incompetent. It would be practically useful to clear this up, so that the language required for the justifying mythology is demystified and less believable.

  7. Blissex
    July 27th, 2017 at 20:39 | #7

    Your ethical quandaries cannot be resolved without a value system, because some involve a situation which is not very well recognized in “trolley problems” about the lesser evil:

    «we make irrational decisions based on a notion of agency: the typical result is that its ok to let many people die rather than be the agent of the death of a few.»

    The situation is one where one has to choose between a lesser evil committed by oneself and a greater evil committed by someone else or by “fate”. In religious terms, it is the choice of committing a great sin in order to prevent a greater sin being committed by someone else, or a greater loss by “fate”.

    Consider this “trolley problem”:

    * You are on the second floor of a mall.
    * Next to you there is a woman who seems on the verge of throwing a bomb onto the crowd below.
    * You are sure that you are not personally at risk, so it cannot be self-defense.
    * You have a chance of killing her before she can do that, and it looks like killing her is the only option you have.

  8. Ronald
    July 28th, 2017 at 15:26 | #8

    As presented, the best solution is to save the train carrying medical supplies so hundreds of lives will be saved.

    However, I happen to know my good friend, Randy Ann, would do nothing, as any action could lead to her being sued, while if she did nothing she could claim the healthy smoke from her life-giving cigarettes got in her eyes and prevented her from being aware of the situation in the first place.

    Personally, I would save the medical supplies. But I would only do this to set a good example to others, as I would be certain the whole situation was a set-up, funded by the money saved from disbanding the ethics committee and given to the trolley problem gang.

  9. Urbie
    July 29th, 2017 at 12:35 | #9

    There is a sleight-of-hand in trolley thought experiments. They presume absolute certainty about the consequences of your decisions; they are a logical device to ignore uncertainty so it can be hidden from the ethics calculus. That is their purpose, and it is their weakness.
    Choosing to save the second train is the more rational choice as presented, but it will be countered with ‘who are you to be so certain of the alternative outcome?’
