
Rationality and utility

September 17th, 2007

Over at Cosmic Variance, physicist Sean Carroll offers some admittedly uninformed speculation about utility theory and economics, saying

Anyone who actually knows something about economics is welcome to chime in to explain why all this is crazy (very possible), or perfectly well-known to all working economists (more likely), or good stuff that they will steal for their next paper (least likely). The freedom to speculate is what blogs are all about.

I didn’t notice anything crazy but there’s a fair bit that’s well-known. For example, Carroll observes that utility is generally not additive across commodities, and that some goods are likely to be more closely related than others. That’s textbook stuff, covered by the basic concepts of complementarity and substitutability.

This is a more interesting and significant point:

But I’d like to argue something a bit different — not simply that people don’t behave rationally, but that “rational” and “irrational” aren’t necessarily useful terms in which to think about behavior. After all, any kind of deterministic* behavior — faced with equivalent circumstances, a certain person will always act the same way — can be modeled as the maximization of some function. But it might not be helpful to think of that function as utility, or as the act of maximizing it as the manifestation of rationality.

I can only agree. But economists and (even more, I think) political scientists in the “rational choice” tradition regularly get themselves tied up in all sorts of knots about this, switching between the trivial notion of maximising a function and substantive claims in which rationality is frequently equated with egoism. Joseph Butler demolished this kind of reasoning nearly 300 years ago, but it keeps on popping up.

* This qualification isn’t necessary, and Carroll notes later on that choices are often stochastic. The resulting probability distributions still maximise an appropriately defined function.
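
As a minimal formal gloss on this footnote (my notation, not Carroll’s): given a deterministic choice rule $c$ mapping circumstances $s$ to actions, define

\[
u_c(s,a) = \begin{cases} 1 & \text{if } a = c(s) \\ 0 & \text{otherwise} \end{cases}
\qquad\text{so that}\qquad
c(s) = \arg\max_a u_c(s,a)
\]

by construction. For a stochastic rule with choice probabilities $p(\cdot\mid s)$, the distribution $p(\cdot\mid s)$ is the unique maximiser, over distributions $q$, of $F_s(q) = -\,\mathrm{KL}\bigl(q \,\Vert\, p(\cdot\mid s)\bigr)$, so “maximising an appropriately defined function” is again satisfied, trivially.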

  1. Ernestine Gross
    September 17th, 2007 at 22:43 | #1

    Sean Carroll’s piece is quite fun to read.

    In general I am not fond of picking a sentence from a text and commenting on it. But I don’t think I am creating a strawman by picking a sentence from Carroll’s piece, since it is included in the quote in the post.

    “But I’d like to argue something a bit different — not simply that people don’t behave rationally, but that “rational” and “irrational” aren’t necessarily useful terms in which to think about behavior.”

    IMHO, the crucial words in the sentence are “not necessarily”.

    I propose that the term ‘rational’ is not necessarily useful to ‘think about behaviour’. We don’t have to ‘think about’ behaviour. We can observe behaviour. However, it may be useful to think about the inferences people draw from observed behaviour. In this context, is it possible for an observer to draw any inference from observed human behaviour without the observer presupposing that he or she is ‘rational’ in some sense?

    PS: Carroll’s wine example is a nice one – it brings out one of the many unresolved theoretical problems of the corporate form of enterprise in relation to microeconomics.

  2. September 18th, 2007 at 00:52 | #2

    “But I’d like to argue something a bit different — not simply that people don’t behave rationally, but that “rational” and “irrational” aren’t necessarily useful terms in which to think about behavior.”

    Insert the word “governments” instead of “people” and it still makes sense. Amazing! Or perhaps not given what governments are comprised of.

    It is a good point however. And a fun article also by the way. Personally I’m okay with the idea of people being free to make dopey irrational decisions about their own lives. The function thus maximised would be the freedom function. It takes a pretty compelling utility function to convince me that we should deviate from the maxima of the freedom function.

  3. mugwump
    September 18th, 2007 at 01:14 | #3

    Determinism is also required for stochastic actors, but it is their probability distributions that must be deterministic in Carroll’s sense, i.e., faced with equivalent circumstances, a certain person will always choose actions according to the same probability distribution.

    But back to the question of utility and rationality, I still think they’re useful concepts. Carroll’s two key examples showing the nonutility of utility are that of drinking wine in a restaurant (why pay 4x retail?) and receiving gifts instead of cash (you’re more likely to get what you want if you’re given cash). He considers but dismisses solving this by expanding the space of goods to include qualifiers – so that wine drunk in a restaurant, or goods received as gifts, are not the same as wine drunk at home or goods purchased personally. But I think this is precisely the way to reintroduce rationality.

    Certainly from the perspective of the producer, wine is wine whether drunk at home or with a restaurant meal. But for the consumer the two are different, so why not model them as different goods? The restaurant is then adding value to the wine by packaging it with the restaurant meal; they are producers of experience, with wine and whole foods as their inputs.

    This supports purchasing wine with a meal as rational utility maximization by the consumer.
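
    A toy numerical sketch of that reframing (all prices and utilities below are made up for illustration, not taken from Carroll): treat restaurant wine and retail wine as distinct goods and let the consumer pick the best affordable bundle.

    # Toy model: context-qualified goods, exhaustive 0/1 bundle choice.
    from itertools import product

    PRICES = {"wine_home": 10, "wine_restaurant": 40, "meal_home": 15, "meal_out": 50}
    UTILS  = {"wine_home": 12, "wine_restaurant": 55, "meal_home": 20, "meal_out": 60}
    BUDGET = 90

    def best_bundle(prices, utils, budget):
        """Return the affordable 0/1 bundle with the highest total utility."""
        goods, best, best_u = list(prices), None, float("-inf")
        for qty in product([0, 1], repeat=len(goods)):
            cost = sum(q * prices[g] for g, q in zip(goods, qty))
            util = sum(q * utils[g] for g, q in zip(goods, qty))
            if cost <= budget and util > best_u:
                best, best_u = dict(zip(goods, qty)), util
        return best, best_u

    print(best_bundle(PRICES, UTILS, BUDGET))
    # -> the chosen bundle includes the restaurant wine at four times retail,
    #    because "wine with a meal" carries the packaged experience.

    Within this expanded space of goods, paying the restaurant markup is ordinary maximisation rather than an anomaly.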

  4. gerard
    September 18th, 2007 at 12:25 | #4

    I don’t know anything about economics but the whole ‘utility’ thing always seemed to be a truism of limited explanatory value for anything. Can anybody give an example of a decision somebody might make that is not an example of “rational utility maximization”? It’s impossible because any decision voluntarily made can be thought of as the satisfaction of a want, and utility is also the result of satisfying a want. The real question is where do these anterior wants come from?

  5. conrad
    September 18th, 2007 at 14:16 | #5

    Gerard: I’ll have a guess, though it depends on what you mean by function maximization.

    Some people would argue that some aspects of altruistic behavior are not governed by satisfying any particular want — you simply do them because you are evolutionarily programmed to do them. The idea behind this is that people have evolved to live in groups and some aspects of altruistic behavior occur because of this. Thus the action you perform may be independent of the consequence, hence it isn’t maximizing anything. It’s just something you do.

    More abstractly, I think there is probably a whole class of decisions you can make that have nothing to do with maximizing any external outcome, if by maximizing an outcome you mean picking the best behavior to achieve it. These are decisions made on the basis of the frequency of common co-occurrences of information — people are susceptible to learning correlated structure in an environment, and thus make decisions based on this information even if there is no meaningful outcome — you simply pick whatever occurs most often. In this case, the behavior is just something that you do, rather than something you do because of an outcome.

  6. gerard
    September 18th, 2007 at 17:01 | #6

    Utility is what we get from doing something we want to do – and we do what we want to do because it gives us utility. This is a meaningless circle and it doesn’t tell us where the wants come from in the first place. An altruist gets ‘satisfaction’ from helping others, a masochist from experiencing pain, a conformist from doing what everyone else is doing. If we are evolutionarily programmed to do something, does this mean we get utility from going with our programming, rather than going to the trouble of resisting it (that is, presuming we have free will at all)? If we do what we do out of habit rather than rational calculation, does this mean that following habit gives us more utility because it saves us the trouble of rationally calculating everything?

  7. conrad
    September 18th, 2007 at 17:40 | #7

    There’s no need to assume that you get satisfaction from helping others if you assume altruism has some sort of evolutionary basis — it may well provide you with satisfaction, but that isn’t why you would perform the act. I think the idea of the evolutionary basis was explicitly to test the idea that people act based on outcomes specifically relevant to themselves, rather than outcomes which take millions of years to achieve (i.e., evolution).

  8. gerard
    September 18th, 2007 at 19:11 | #8

    Doesn’t eating food have an evolutionary basis? Do we therefore assume that although it provides us with satisfaction, the satisfaction is not why we perform the act? Works for me. The satisfaction is a result of a want being met, it is not the cause of the want. But beyond base wants like eating and such, people may want to do or have things because they think doing or having them will give them a feeling of satisfaction. But how they come to believe that this or that thing will give them a feeling of satisfaction is the real question. Why is such a huge portion of economic output devoted to advertising and marketing? Why do wants differ between cultures, generations and individuals?

    Regarding those acts which are not based on outcomes specifically relevant to yourself: think of the last altruistic act you performed – what motivated you to do it? The feeling that to do otherwise was wrong? This might have had a lot to do with your personal upbringing, and the fact that your cerebral cortex endowed you with emotions such as empathy and guilt, which are powerful incentives.

    Evolution is the source of our emotional and social needs in the same way as it is of our animal needs. But so long as choice is involved, whatever ‘utility’ is can be defined to fit that choice. To use another example of people not acting on outcomes specifically relevant to themselves, think why so many people care more about the success of their local sports team than about the government of their country.

  9. conrad
    September 18th, 2007 at 20:48 | #9

    Perhaps a different way to look at it is to use an analogy with reflexes (you can choose any you want). If a bit of dust moves towards you, you will blink so it doesn’t get in your eye. Sometimes you will also blink for no apparent reason. Now we know why people blink in general (it protects your eyes), but it isn’t clear whether any particular blink you make is fitting any function — it’s just a motor program that gets triggered now and then thanks to a billion years of evolution. In addition, you do it all the time, and it doesn’t make you feel good, bad or indifferent — it’s just something you do because that’s the way eyes work. Thus we blink because we do, not because it makes us feel good, and it has the side benefit of keeping our eyes working. The extent to which we do this for higher-level cognitive functions is unclear.

    Also, I don’t care about introspection — I think people’s self-report about the way they feel etc. is often poorly correlated with behavior. Thus I can’t tell you the real reason I performed my last altruistic act — perhaps that information isn’t privy to my consciousness, or perhaps my perception of why I did it is incorrect.

  10. gerard
    September 19th, 2007 at 13:00 | #10

    Sounds like we’re getting into the discussion about free-will and determinism. Are our choices real or just electro-chemical reactions in our nervous system…

  11. gerard
    September 19th, 2007 at 13:41 | #11

    bludging at work and hit on this article

    Is ‘Do Unto Others’ Written Into Our Genes?
    http://www.nytimes.com/2007/09/18/science/18mora.html?em&ex=1190260800&en=2fa7877883a71b12&ei=5087

  12. September 19th, 2007 at 14:05 | #12

    The evolutionary basis for altruism is fully explained by Richard Dawkins in his book “The Selfish Gene”. The essential insight is that it is our genes that are amoral and notionally selfish, and it is their survival (not ours) that influences evolution.

    If the “risk your life to save your brother” gene is in both me and my brother, then that gene benefits when I risk my neck to save my brother’s life. Even if I die and my brother is saved, the gene has made a successful wager and will propagate to the next generation (via my brother). Richard Dawkins also demonstrates a lot of this mathematically by looking at issues of relatedness within different animal groups and the corresponding acts of altruism. “The Selfish Gene” remains one of the most insightful books I have ever read.
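
    The standard compact form of that calculation is Hamilton’s rule (stated here from memory, not quoted from the book): a gene for an altruistic act is favoured when

    \[ rB > C, \]

    where C is the reproductive cost to the actor, B the benefit to the recipient, and r their coefficient of relatedness (about 1/2 for full siblings). A modest risk to deliver a large benefit to a brother is then a good wager from the gene’s point of view.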

  13. gerard
    September 19th, 2007 at 17:04 | #13

    The point is that I can’t imagine a scientific definition of a ‘rational’ act. A hypothesis (that such and such an act is ‘rational’ or ‘irrational’) should be falsifiable and testable, otherwise it is not scientific, and it is curious that such a truism as passes for ‘utility maximization’ should be a foundation of a ‘science’.

  14. BilB
    September 19th, 2007 at 21:50 | #14

    Gerard,

    Try structured and random in place of rational and irrational, to achieve definability.

  15. gerard
    September 20th, 2007 at 14:13 | #15

    So you define ‘rational’ as ‘structured’ and ‘irrational’ as ‘random’?

  16. BilB
    September 22nd, 2007 at 13:25 | #16

    Rational and irrational imply the application of more or less emotion, emotion being perceived as unquantifiable. A rational viewpoint, however, suggests structure, and irrational judgement can be perceived externally as having random expression. Both structured and random are mathematically definable concepts.

  17. gerard
    September 22nd, 2007 at 23:15 | #17

    So a rational decision is one which doesn’t involve the ‘application’ of emotion?

    In that case I think it’s pretty safe to say that the hypothesis that ‘people behave rationally’ is obviously contradicted by the pervasive and obvious existence of emotion (although I think that the word ‘application’ is inappropriate, since emotion is not something that is consciously applied, as it by definition arises spontaneously without conscious effort).

    Emotional and physical factors both influence our conscious decisions. But you can’t say that the presence of emotion makes a decision less ‘rational’ any more than the presence of a physical feeling such as hunger makes an act like eating food less rational.

    I guess there’s the question of, for instance, saving food against future uncertainty rather than eating all of your food at once; the former would seem more ‘rational’ than the latter. But obviously whatever decision was made could be attributed to some ‘utility’ function of the individual in question, and there would be no way of proving whether this were true or not. Therefore the connection between ‘rationality’ and ‘utility’ is not scientific.

    At any rate, it is still very poorly understood how emotions and decision making actually take place. It seems that as soon as you scratch the surface of this ancient cornerstone of economics you find yourself lost in a deep ocean of unresolved contemporary questions in cognitive science, psychology and other fields.

  18. Ernestine Gross
    September 23rd, 2007 at 20:23 | #18

    I am responding to gerard.

    The terms ‘utility’ and ‘utility function’ may well be labelled an ‘ancient cornerstone of economics’. However, in contemporary economic theory a ‘utility function’ is a derived notion. The primitive is known as ‘preferences’, a pre-ordering on a space of measurable quantities. Under certain conditions on the preference pre-ordering, a function, as specified in the Sean Carroll piece, can be shown to exist (in the logic of mathematics). These conditions do not necessarily correspond to a notion of ‘rational behaviour’ and they do not correspond to all interpretations of ‘rationality’ (a sample of interpretations has been provided by JQ).

    In this theoretical framework, the term ‘rational behaviour’ means, in general, that individuals act in a manner that is (logically) consistent with their preferences. Thus, ‘utility maximisation’ in theoretical models is a consequence (implication) of assumptions on the preferences such that a ‘utility function’ exists and the assumption that individuals act in a manner that is consistent with their preferences.

    Yes, it is possible to have preference pre-orderings such that the (rational) behaviour is labelled ‘masochist’ or ‘conformist’ in some branches of the social sciences. I can’t see anything wrong with this. Surely any general theoretical framework is useful only if it can cope with a large variety of human behaviour without requiring new primitives. Sure, the outcome of ‘an economy’ (either a theoretical model or empirical) differs according to the preferences of the people, including their distribution. Masochistic behaviour may not be dominant or evolutionarily sustainable. However, behaviour labelled as masochistic does apparently exist empirically, otherwise the term would be empty.

    Yes, various groups of people have developed different social habits, including some decision-making processes. And, yes, economic theory, which takes preferences as given, does not answer any of the questions raised (eg whether people are influenced by what others do or whether all or some of the behaviour is biologically determined). However, this is how it should be for that part of economics which may be thought of as a branch of philosophy that takes as given that people, for whatever reasons (including ‘emotional’ ones), have the right to be the best judge of what is good for them.

    Empirical tests of individual rationality would require soliciting an individual’s preferences and comparing the stated preferences to observed behaviour under controlled conditions (regarding new information and constraints on behaviour). What is unscientific about that?

    Preferences (and associated rational behaviour) may not be ‘reasonable’ in the eyes of an observer but this is not the same as saying the individual is not rational.

    JQ has provided a list of interpretations of the term ‘rational’. I tend to agree with it except that it might be useful to remove the word ‘should’ from the last statement.
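
    As a bare-bones sketch of such a test (options and data purely hypothetical, and ignoring the experimental-control issues): record the stated strict preferences as ordered pairs, record the observed choices as (menu, choice) pairs, and flag any choice made while an option stated to be strictly better was available.

    # Stated strict preferences, elicited as ordered pairs (better, worse).
    STATED = {("opera", "football"), ("football", "tv"), ("opera", "tv")}

    # Observed choices: (menu of available options, option actually chosen).
    OBSERVED = [({"opera", "tv"}, "opera"),
                ({"football", "tv"}, "football"),
                ({"opera", "football"}, "football")]

    def violations(stated, observed):
        """Choices made while a stated-better alternative was on the menu."""
        return [(menu, chosen) for menu, chosen in observed
                if any((alt, chosen) in stated for alt in menu if alt != chosen)]

    print(violations(STATED, OBSERVED))
    # -> flags the third choice: football was picked although opera,
    #    stated to be strictly preferred, was available.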

  19. gerard
    September 24th, 2007 at 00:27 | #19

    Thank you Ernestine Gross for your very informative post.

    I never got past high school economics so I hope you can forgive my confusion over the basics. How do we arrive at the pre-ordered field of measurable quantities – the preferences? I vaguely remember something about ‘indifference curves’ between two commodities – a banana x-axis and a widget y-axis, for example. Are the ‘preferences’ a set of relationships between every possible option open to the individual (some very complex field that would wind around in as many dimensions as there are choices available)?

    It would seem a daunting task to come up with a ‘preference’ field for an individual. Where would the information come from? I can think of two ways the information could be gathered. One way would be to infer an individual’s preferences from their prior actions. In this case I don’t think you could meaningfully contrast the actions with preferences, because we infer preferences from actions.

    The other way would be to solicit the preferences from the individual – I suppose just by asking the individual how they would hypothetically choose between some options. I imagine that this would be impractical for more than a very narrow set of options. Then one could measure the individual’s actual behavior and see if it matches up. To the extent that it does, I think that just means that the individual is able to correctly describe/predict their own behavior. In that case the statement that people behave rationally might just mean that people are aware of what their preferences are.

    Is there an available example of a preference field that quantifies the preferences of an actual individual?
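
    For concreteness, here is the kind of textbook object I have in mind (a made-up two-good example, not data for a real individual): a preference ordering over banana/widget bundles represented by a Cobb-Douglas utility, with indifference curves as its level sets.

    def u(bananas, widgets):
        """A hypothetical Cobb-Douglas utility representing the preferences."""
        return bananas ** 0.5 * widgets ** 0.5

    def at_least_as_good(x, y):
        """The preference pre-ordering induced by the representation u."""
        return u(*x) >= u(*y)

    # Bundles on the indifference curve u = 4: widgets = 16 / bananas.
    curve = [(b, 16.0 / b) for b in (1, 2, 4, 8, 16)]
    assert all(abs(u(b, w) - 4.0) < 1e-9 for b, w in curve)  # all indifferent

    print(at_least_as_good((4, 4), (1, 9)))  # True: u(4,4) = 4 beats u(1,9) = 3

    With more goods the same construction runs in as many dimensions as there are goods, which is exactly why eliciting such an object for a real individual looks so daunting.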

  20. BilB
    September 24th, 2007 at 07:27 | #20

    Ernestine,

    The utility function preferences that you speak of are, I would suggest, entirely rational (structured), predictable, and quantifiable, as they relate very directly to the levels of security theory. This theory describes 5 levels (I’m guessing here because it is 40 years since I read of this) from the basic need to eat to total freedom of choice of activity. This information is easily extractable from a demographic matrix. By matrix I mean all aspects of age, education, stage of life, income, relative security level, etc. Commercial activity can be very precisely predicted if the full demographic matrix is brought into the economic equation. If you then bring that into a general equation which takes into account the basic economic building blocks of (ideas | what you get from nature for free | opportunities to perform)(people | establishment | infrastructure | machinery)(economic accelerators)(economic decelerators), then you can be very precise in predicting the outcome of political decisions, even with environmental complications. Any confusion over rational and irrational behaviour is more to do with inadequate evaluation.

    “I did once try to convince Bob Hall at a restaurant in Palo Alto not to order wine: the fact that the wine would cost four times retail would, I said, depress me and lower my utility. Even though I wasn’t paying for it, I would still feel as though I was being cheated, and as I drank the wine that would depress me more than the wine would please me.”

    If you think about it this statement is about 2 people with different levels of perceived personal security.
