This is a followup to my earlier posts on consequentialism/utilitarianism. A notable debate in the literature on this topic is whether, from a consequentialist or utilitarian perspective, it is best always to try to choose the action with the best consequences (act-consequentialism), or to try to find those general rules which, on average, yield the best consequences, and follow those rules even when, in particular cases, they yield bad consequences (rule-consequentialism).
In the consideration of consequentialism as an ethical philosophy for individuals, rule-consequentialism is often used as a device to get around ‘hard cases’ for utilitarianism, like the ‘organ transplant’ example I discussed (and debunked) a few days ago.
This gambit clearly fails. Any proposed general rule is dominated by the rule “Do whatever action yields the best consequences” and any specific rule yielding bad consequences in some particular situation can always be modified to specify a better action in that situation.
But the situation is totally different when we consider utilitarianism/consequentialism as a public philosophy (which I’ve argued is the only sensible role for utilitarianism). The example of speeding, which has been debated in recent posts, is an ideal illustration. Given the complexity of road situations, the perfect policy would be for every driver to drive in a way that yielded the socially optimal tradeoff between travel time and safety.
But as the discussion has revealed
(a) Many people are going to put more weight on their own convenience than on the safety of others
(b) Most people overestimate their own competence
(c) Safety is enhanced if everyone travels at the same speed
What this means is that, with some exceptions (medical emergencies etc.), it is better to settle on a fixed speed limit and enforce it than to allow drivers to make their own judgements. These problems (self-interest, cognitive biases and co-ordination problems) are ubiquitous in public policy, and imply that there is frequently a strong case for rules rather than discretion, in Milton Friedman’s phrase.
Does this argument force consequentialists into the position of accepting Kant’s categorical imperative “Act only according to that maxim by which you can at the same time will that it should become a universal law”? Only in the trivial sense that utilitarianism and related forms of consequentialism treat all people equally, “each to count for one, and none to count for more than one” in Bentham’s phrase. Hence, considered as a general rule, utilitarianism is consistent with the categorical imperative.
At the level of specifics, though, it is sometimes desirable to have rules and co-ordination, and sometimes desirable to have discretion and diversity. The only sensible way to decide which case is which is on the basis of consequences.
Consider the proposed rules
“Drive on the left-hand side of the road”
and
“Go to work at precisely 8:00 AM”
In the first case, we’d better hope that everyone follows the same rule (there’s a nice joke about a country -insert your favorite dumb country here- converting to right-hand drive on a staged basis, trucks first). In the second case, any rule of this kind would be a recipe for chaos.
I should mention that there’s more on this over at Catallaxy. Jason Soon, a fellow-economist, supports rule-consequentialism as a basis for public policy. Meanwhile Jack Strocchi’s Machiavellian/Hobbesian position is an extreme form of act-consequentialism.
I think that you’re being a bit harsh on rule-consequentialism as a system of private morality. Surely it is useful to make a distinction within consequentialist doctrines between (at one end of a continuum) those which appear to presuppose something like the von Neumann/Morgenstern axioms with respect to the desirability of moral outcomes, and those which (at the other end) believe in fundamental incommensurability between individuals and recognise only rules of thumb for the evaluation of outcomes? And if we’re going to make that distinction, then it strikes me that the rule/act terminology fits it quite well.
I think that a very common problem in ethics is the difficulty of disentangling statements about rationality from statements about morality (and indeed it’s extremely debatable whether this distinction can actually be made in anything like its plain meaning). Consequentialist doctrines tend to have this problem in a more obvious and serious form because, through the shared interest in problems of constrained optimisation, they’ve imported a lot of the problems of the economic concept of rationality along with its intuitive appeal.
If someone is, say, a utilitarian, but one who in fact acts in a Simonian “satisficing” kind of way and follows loads of heuristic rules of thumb in evaluating outcomes, then are you really going to say that there is no difference between this and classic Benthamite hedonic calculus? Perhaps so if you think that the important thing is that he would break his “rules” in specially constructed hard cases, but it strikes me that to do so is to a) skate quite fast over some very troublesome implicit claims about rationality and b) use an absurdly strong version of what it is to follow a moral rule, in claiming that a rule can’t be what’s “really” important unless it has no exceptions.
I agree with most of what you say, and in fact plan to have something to say about rationality in a later post. I’ve got no problem with a satisficing version of act-consequentialism, e.g. ‘Do whatever appears to have the best expected consequences, but don’t spend too much time worrying about minor benefits’.
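The contrast can be caricatured in a few lines of code (an illustrative sketch of my own, with entirely made-up options and values): the classic maximiser evaluates every option and picks the best, while the Simon-style satisficer takes the first option that clears an aspiration level.

```python
# Illustrative sketch only (made-up options and values): a maximiser versus
# a Simon-style satisficer. The satisficer stops at the first option that is
# "good enough", rather than searching exhaustively for the best.

def maximise(options, value):
    """Classic calculus: evaluate every option, pick the best."""
    return max(options, key=value)

def satisfice(options, value, aspiration):
    """Return the first option clearing the aspiration level;
    fall back to the best option seen if none clears it."""
    best = None
    for option in options:
        if value(option) >= aspiration:
            return option
        if best is None or value(option) > value(best):
            best = option
    return best

options = ["walk", "bus", "drive"]
benefit = {"walk": 5, "bus": 7, "drive": 8}.get

assert maximise(options, benefit) == "drive"
assert satisfice(options, benefit, aspiration=6) == "bus"  # stops before "drive"
```

The point of the sketch is that the two agents can disagree even when they share the same ranking of outcomes: the satisficer never learns that “drive” exists, because “bus” was good enough.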
I think, though, that it’s precisely the ‘hard cases’ that interest most advocates of personal rule-utilitarianism.
On a technical note, though, for act-consequentialism to be coherent at all, there has to be some sense in which the expectation of the value of future states is well-defined, which appears to drag some quite specific statements of measure theory into a context in which they don’t necessarily belong. My guess is that the satisficing version of act-consequentialism is all there is.
This attempted distinction between utilitarianism as public philosophy and ethical philosophy for individuals is a cop-out, which falls over with the first push.
There is no more distinguished or eloquent utilitarianism-as-public-philosophy exponent than Peter Singer. One of the policies he advocates is that old and decrepit people should be left to die, rather than have lavish resources spent keeping their lives going a bit longer, because the money could achieve a better objective for society as a whole.
Recently, it was revealed that he was spending lavish resources on his old and decrepit mother. His response was “mothers are different”.
I’ll bet he wouldn’t support a policy which said, “for the sake of society as a whole, we are going to force you to stop acting out your personal ethics and so you must stop spending your money on your old and sick mother”.
What good is a public policy, or a philosophy on which it is based, which people might support in the abstract, but which should never be applied to them personally?
We don’t need an expectation wrt a probability measure here. A certainty equivalent (which just requires complete transitive preferences) is all that is needed. I’ll try to post some links.
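A toy illustration of this point (my own construction, not from the thread or the linked paper): the Wald maxmin criterion ranks acts by their worst-case outcome, which yields a well-defined certainty equivalent using nothing more than the complete, transitive ordering of outcomes, with no probability measure over states at all.

```python
# Toy illustration: a certainty equivalent without any probability measure.
# An "act" maps states of the world to monetary outcomes. The Wald maxmin
# criterion ranks acts by their worst outcome, relying only on the complete,
# transitive ordering of the reals -- no likelihoods attached to the states.

def maxmin_value(act):
    """Rank an act by its worst-case outcome across states."""
    return min(act.values())

def certainty_equivalent(act):
    """The sure amount a maxmin agent regards as exactly as good as the act."""
    return maxmin_value(act)

# Two acts over three states; note that no probabilities appear anywhere.
safe  = {"boom": 50, "slump": 50, "crash": 50}
risky = {"boom": 200, "slump": 80, "crash": 10}

assert certainty_equivalent(safe) == 50
assert certainty_equivalent(risky) == 10
# The maxmin agent prefers the safe act, despite risky's upside in a boom.
assert maxmin_value(safe) > maxmin_value(risky)
```

Maxmin is just one admissible ordering; the point is only that a certainty equivalent falls out of any complete transitive ranking of acts, expected or not.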
Dave, your Singer example goes precisely the wrong way for you (unless you are trying to argue by example that all consequentialists are hypocrites).
Singer claims that utilitarianism is an appropriate personal morality but finds in practice that the requirement to be neutral between people is impossibly demanding, which is precisely the point I made. Singer’s experience says nothing about the appropriateness of utilitarian public policy, for example the allocation of publicly-funded health care resources.
Your implication that public-policy consequentialists must favor a policy of total control over personal actions to ensure that each action is in line with a utilitarian calculus is equally off-beam. To the extent that suggestions like this have been implemented, as in extreme forms of Communism, they have always produced bad consequences. So, as a consequentialist, I naturally reject such policies.
What I am saying is that it may be difficult, and in some cases impossible, for people to accept as good public policy what they themselves would not do as individuals. Therefore, any philosophy which, for the sake of society, forces outcomes onto people which they would not choose individually is, ultimately, authoritarian.
Maybe that is what is required to have a functioning modern society, but if so it should be admitted up front. You can’t escape this dilemma by saying that you’d avoid any application of utilitarianism that has bad consequences. That just defines the problem away.
Dave
The whole point of public policy is that people behaving as individuals don’t take into account longer term social interests. Most people *are* going to weight the interests of friends and family above those of others. In such a world it’s a war of all against all. That is precisely why the public philosophy should be different from personal ethics. By your logic, on the other hand, you are committed to saying either
1) all people are angels, or
2) nepotism in public life, or ‘Tikritism’/‘Suhartoism’, is a good thing
Dave
By your definition all public policy is authoritarian. You can define things any way you want but I’d suggest some definitions are more useful than others.
John
To be fair to Jack, he is an act utilitarian in foreign policy, not domestic policy. Though I’m not as comfortable with this distinction as Jack ‘Machiavelli’ Strocchi, I can understand it.
Jason, I am committed to neither of the propositions that you are trying to pin on me. What I am saying is that a public policy philosophy which puts the benefits of policy to society (however defined) as the numero uno consideration is authoritarian, and honest utilitarians should admit this.
Personally, I don’t have a problem with this in many cases, as my posts on the speeding issue over the past couple of days have shown. I also don’t mind having to pass through metal detectors at airports, even though I am not a terrorist, and would be affronted if somebody accused me of being one.
But you can take utilitarianism too far, such as in Singapore.
Before the “battle” begins, does anyone think he has a philosophy of ethics which, ultimately, doesn’t rely upon God’s advice, a vague “intuition”, or some other purely psychological basis?
If we’re to dwell on ‘hard’ cases, what about the allocation of relatively scarce resources (kidney dialysis, organ transplants) to the prisoner on death row in Oregon?
Is it consistent to seek to apply the maxims of some public philosophy, say utilitarianism, within the framework of an anti-utilitarian policy, say the death penalty? Or should all available resources be channelled into policies that would eliminate the death penalty or maximize medical outcomes (research, organ donor programs etc.)?
The application of which of the philosophical systems discussed would best resolve this?
Very interested to see your refs; I must confess to being somewhat out of my depth here. But on reflection, I might even venture a stronger version of my claim; that without potentially too restrictive assumptions about the way the world is, there might not even exist a probability measure over future events at all (specifically, I don’t have any strong intuitions about whether you can put all possible future events in correspondence with real numbers; I might use the word “Borel” at this point but that would give me away as a bluffer). I don’t see how one could possibly have a certainty equivalent in the absence of any probability measure at all.
I agree that the existence of well-defined probabilities is a special case – only carefully trained intuition can make them seem natural. But this is not a fundamental problem. The difficulty is that, even though non-probabilistic beliefs are simpler, descriptions of them are more complicated.
This paper (Warning PDF file) with Simon Grant is my own latest thought on the subject.
That’s certainly an impressive paper, and I wouldn’t pretend to be in a position to comment on it. Outside the context of portfolio selection, though (where the assumption of an objectively given and real-valued state space seems unproblematic), I worry about the amount of knowledge about the outcome space an agent needs to have in order to have a well-defined certainty equivalent. I’ve been reading Derek Parfit books again and am getting worried about the moral significance of very improbable but very bad outcomes.
I guess I’m only raking the cold embers of this discussion, but I’m still mulling over the statement that
‘..consideration of consequentialism as an ethical philosophy for individuals clearly…fails. Any proposed general rule is dominated by the rule “Do whatever action yields the best consequences” and any specific rule yielding bad consequences in some particular situation can always be modified to specify a better action in that situation.’
Why is this true of the following behaviour?
When I travel in third-world countries, I want to give money to every beggar I meet, but worry about helping to create an industry. So I have a rule to give only to beggars who are elderly or who have severe handicaps. There is no question of my moral judgement being entangled with any personal loyalties: giving to beggars is an impersonal moral action. At the same time, there is no question of my actions being guided or compelled by any law or public policy. It is an individual ethical choice.
I hope it helps that this is a real and non-violent example.
Blogging is ephemeral, James, but this thread is definitely still alive.
I think your example is consistent with what I’m arguing. An individual rule-consequentialist might want to argue in favor of the personal rule ‘Don’t give money to beggars’ because of the bad consequences you allude to.
But, if you can identify cases where these consequences are likely to be small and the benefits large, you can always propose the modified rule you suggest: “Only give to elderly/disabled beggars”. And if you found evidence that this rule produced bad consequences in some cases, you could modify the rule further.
To restate my basic point, any conflict between rule-consequentialism and act-consequentialism in personal ethics can and should be resolved by modifying the rules.
Daniel, I am actively working on the issues you raise, but it will be an eternity in blog time before I have an adequate response.
Do you have a reference for Parfit? I know the name but I haven’t read his work on this topic.
Ross, I missed the story to which you refer. Can you post a link?
A rabid talk-radio host’s dream topic?
http://online.statesmanjournal.com/sp_section_article.cfm?i=59756&s=2242
Derek Parfit’s book is called “Reasons and Persons” and bloody good it is too. Thanks for the WP, which I am progressing through at snail’s pace. It’s helping me a lot with the probability textbook that I am also currently progressing through at snail’s pace.