A Crooked Timber comment on my last post, about Chapter 2 of my book-in-progress, Economics in Two Lessons, convinced me that I needed to include something about bounded rationality. I shouldn’t have needed convincing, since this is my main area of theoretical research, but I hadn’t been able to work out where to fit it into the book. I’m still not sure, but at least I’ve written something I’m reasonably happy with. Comments, praise and criticism welcome as usual.
Human beings are incredibly clever at processing and responding to information. We have a general capacity for reasoning that far exceeds that of other animals. In addition, genetic and cultural evolution has equipped us with a variety of cognitive ‘modules’ enabling us to perform specific tasks rapidly and efficiently. For example, we can naturally throw and catch objects much better than any other animal.
We can improve our ability to catch thrown objects in a couple of ways. One, based on general reasoning, involves estimating the speed and trajectory of the object, and running to the point where we expect it to fall within our reach. Going further, we can use mathematics and physics to make incredibly accurate predictions, enabling humanity to send spaceships to the edge of the solar system and beyond with exact knowledge of the course they will follow.
Such rational optimization takes a lot of time and mental effort however. To catch a ball flung in the air, a much simpler solution is the ‘gaze heuristic’. A catcher using the gaze heuristic observes the initial angle of the ball and runs towards it in such a way as to keep this angle constant. This heuristic works well in practice. It is therefore described, by Gerd Gigerenzer, as ‘ecologically rational’ for the environment in question. Heuristics are examples of cognitive modules.
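To see how little computation the heuristic needs, here is a toy two-dimensional sketch (my own illustration, not anything from Gigerenzer: the launch speed, angles, starting position and running speed are made-up numbers, and air resistance is ignored). The fielder waits until the ball begins to descend, notes the gaze angle at that moment, and then simply runs so as to keep that angle constant:

```python
import math

def simulate_catch(speed=30.0, launch_deg=75.0, fielder_x=35.0,
                   run_speed=8.0, dt=0.01, g=9.81):
    """Toy 2-D simulation of the gaze heuristic (no air resistance).

    The fielder stands still until the ball starts to descend, fixes
    the gaze angle at that moment, and then runs (at up to run_speed)
    so as to hold that angle constant.
    Returns (ball_landing_x, fielder_x_at_landing)."""
    vx = speed * math.cos(math.radians(launch_deg))
    vz = speed * math.sin(math.radians(launch_deg))
    xb, zb = 0.0, 1.0          # ball released 1 m above the ground
    gaze = None
    while zb > 0:
        xb += vx * dt          # simple Euler integration of the ball
        zb += vz * dt
        vz -= g * dt
        if gaze is None and vz <= 0:
            # ball has peaked: fix the gaze angle to hold from now on
            gaze = math.atan2(zb, fielder_x - xb)
        if gaze is not None and zb > 0:
            # the spot from which the gaze angle keeps its fixed value
            target = xb + zb / math.tan(gaze)
            step = max(-run_speed * dt,
                       min(run_speed * dt, target - fielder_x))
            fielder_x += step
    return xb, fielder_x
```

The fielder never estimates speed or trajectory, yet in this setup finishes within half a metre of the landing point. Note the heuristic's ecological limits, though: with a flatter launch angle or a slower runner, the required speed exceeds what the fielder can manage and the constant-angle rule fails.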
Heuristics work well in the environments in which they evolve. However, they may fail in other environments. Beginning with the work of Daniel Kahneman and Amos Tversky, and taken further by Richard Thaler, researchers have examined ways in which heuristics may lead to good decisions in some contexts and bad ones in others. Given our cognitive limits, good decision making requires a mixture of heuristics and rational calculation.
One Lesson economics ignores this. In the standard One Lesson model of decision making, human beings are replaced by ‘rational agents’ who are assumed to be members of the species Homo Economicus. Rational agents have an infinite capacity to calculate the consequences of their actions under every possible contingency. Not only that, but they can use their reasoning capacity to model the actions of other agents, taking account of the fact that the other agents are modelling them, and so on, ad infinitum. In economics jargon, this assumption is referred to as ‘common knowledge of rationality’.
The problem of making decisions under uncertainty is an important case where bounded rationality plays a crucial role. The efficient markets hypothesis rests on the assumption that market participants are rational agents making decisions to maximize their ‘expected utility’.
It has long been known, however, that real-life choices aren’t consistent with the theory of expected utility and that more general and flexible models are needed. Much of my career as an academic economist has been devoted to this task.
One aspect of the problem is that people tend to place more weight than they should (at least according to expected utility theory) on low probability extreme events, like winning the lottery or dying in a plane crash. It’s possible to develop models of this behavior involving weighted probabilities, but these aren’t necessarily consistent with the rationality required for the efficient markets hypothesis.
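For a concrete illustration (a sketch only, using the probability weighting function and the median parameter estimate for gains from Tversky and Kahneman’s 1992 cumulative prospect theory paper, with utility assumed linear for simplicity):

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function;
    gamma = 0.61 is their median estimate for gains."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# A one-in-a-million chance of a $1,000,000 prize
p, prize = 1e-6, 1_000_000
expected_value = p * prize             # $1: the unweighted valuation
weighted_value = tk_weight(p) * prize  # ~ $219: a ~200-fold overweighting
```

The same function underweights moderate and high probabilities (w(0.5) is about 0.42), which is how such models can accommodate both lottery tickets and insurance in a single framework.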
Another, more fundamental, difficulty is that we can’t possibly be aware of all contingencies relevant to a decision. Contingencies of which we suddenly (and often painfully) become aware have been described as ‘unknown unknowns’ and ‘Black Swans’. When participants in financial markets, unaware of their own unawareness, attempt to apply rational optimization to an incomplete model of the world, the results can be disastrous. Financial crises typically involve a rapid spread of awareness about a possibility (such as a simultaneous default by a number of borrowers) that had previously been disregarded.
Bounded rationality is most significant in choices involving time and uncertainty, but it can arise in ‘spot’ markets where these factors are not important. Dominant firms in a market (for example the market for phone and Internet service) sometimes offer a vast and confusing range of options. The idea is that consumers with the time and ability to pick out the best offer will do so, rather than defecting to a competitor, while more loyal customers will stick with bad deals, on the mistaken assumption that there is nothing better on offer.
More generally, given our bounded rationality, it is possible to have too many choices, most of which differ from each other only marginally. This point has been stressed by psychologists such as Barry Schwartz, who argues that too much choice can lead to depression.
The fact that our reasoning capacity is bounded is another instance of Lesson 2. Prices give us information about opportunity costs, but only if we have the capacity to process that information. We will consider some responses to this problem in Part 4.
* The sole exception to this model of unbounded rationality is the case of economic policymakers and, in particular, central planners, whose cognitive limitations are taken for granted.
* Baseball fielders learn the gaze heuristic through trial and error, or through ‘cultural transmission’ (that is, advice from coaches or fellow players). But it can also be derived as the solution to an optimization problem.