
A snippet on representative agents

October 23rd, 2009

In response to some comments, I’ve written a little bit about the representative agent assumption in Dynamic Stochastic General Equilibrium Models. I argue that, given the underlying DSGE assumptions, you won’t get very much extra by including heterogeneous agents.

But, I intend to say in the “Where next” section, it seems likely that heterogeneous and boundedly rational individuals, interacting in imperfect and incomplete markets, will generate ‘emergent’ macro outcomes that are not obvious from the micro foundations. Of course, this is going to be a prospectus for a theory, not the theory itself.

In the meantime, comments on my snippet would be much appreciated.

Commonly, though not invariably [in DSGE models], it was assumed that everyone in the economy had the same preferences, and the same relative endowments of capital, labour skills and so on, with the implication that it was sufficient to model the decisions of a single ‘representative agent’. This assumption has attracted a lot of criticism, and quite a few critics have suggested that models that take account of individual differences in tastes and endowments would yield more realistic results.

Such criticisms are somewhat off the point in relation to micro-based macro models. As long as the agents in such a model are rational optimizers in the standard sense of neoclassical microeconomics, any initial differences in tastes or endowments will be evened out by trade in competitive markets. In a standard market equilibrium, everyone will have the same preferences ‘at the margin’, namely the preferences given by market prices. If, for example, the market price of oranges is twice the market price of apples, then market equilibrium requires that everyone who trades in that market should be willing, at the margin, to trade two apples for an orange, regardless of whether they prefer to consume a lot of apples and a few oranges, or vice versa. In standard micro-based macro models, this means that not much is lost by aggregating all the participants in a market, and then working with an average or ‘representative’ agent.
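The marginal-preference claim in the snippet can be checked numerically. A minimal sketch, assuming Cobb-Douglas (log) utilities; the taste parameters and incomes are made-up illustrations:

```python
def cobb_douglas_demand(alpha, income, p_apple, p_orange):
    """Optimal bundle for utility u = alpha*ln(apples) + (1-alpha)*ln(oranges)."""
    apples = alpha * income / p_apple
    oranges = (1 - alpha) * income / p_orange
    return apples, oranges

def mrs(alpha, apples, oranges):
    """Marginal rate of substitution MU_apples / MU_oranges at a bundle."""
    return (alpha / apples) / ((1 - alpha) / oranges)

p_apple, p_orange = 1.0, 2.0  # oranges cost twice as much as apples

# Three consumers with very different tastes and incomes
for alpha, income in [(0.9, 10), (0.5, 30), (0.2, 100)]:
    a, o = cobb_douglas_demand(alpha, income, p_apple, p_orange)
    # Bundles differ wildly, but every consumer's MRS equals
    # p_apple/p_orange = 0.5: at the margin, each will trade
    # two apples for one orange.
    print(alpha, income, round(a, 2), round(o, 2), round(mrs(alpha, a, o), 3))
```

Whatever the initial differences in tastes and endowments, optimization against common prices equalizes everyone's willingness to trade at the margin, which is what licenses the aggregation.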

Categories: Dead Ideas book
  1. Ernestine Gross
    October 23rd, 2009 at 16:21 | #1

    So the adjective ‘representative’ stands for ‘handwaving’ regarding the conditions under which an unspecified general equilibrium exists.

    Thank you very much for making it so clear.

    “Where next” ? Perhaps to the systematic research program which has its origin in the Arrow-Debreu general equilibrium model, as distinct from the micro-economics described above. This is the research program which resulted in a general theory of incomplete markets in the mid-1980s. This is the research program which led to insights on the difference between commodity markets and financial markets in the mid-1970s, which in turn led to the construction of ‘agent models’ and alternative equilibrium concepts. This is the research program which did not ‘fight’ Keynes but rather developed temporary equilibrium models. Related are the research programs in game theory.

  2. Kien Choong
    October 23rd, 2009 at 16:50 | #2

    I recall Joe Stiglitz writing somewhere that a single representative agent model could not take account of information asymmetries. There is therefore no place for moral hazard and adverse selection, which both seem central to every financial crisis.

  3. Vivianne
    October 23rd, 2009 at 17:30 | #3

    “If, for example, the market price of oranges is twice the market price of apples, then market equilibrium requires that everyone who trades in that market should be willing, at the margin, to trade two apples for an orange, regardless of whether they prefer to consume a lot of apples and a few oranges, or vice versa. In standard micro-based macro models, this means that not much is lost by aggregating all the participants in a market, and then working with an average or ‘representative’ agent.”

    Then what is the point of micro-foundations? If, in the end, you are just aggregating to an average that does not correspond to what any actual consumer does, then you are not micro-founding at all.

  4. Ernestine Gross
    October 23rd, 2009 at 18:41 | #4

    Well, we should remember the title of JQ’s book.

  5. PeterS
    October 23rd, 2009 at 20:11 | #5

    I have had some experience with models with too many knobs to twiddle. They turn into mush unless one can find a way to calibrate the setting for each knob.

    Good luck with finding a way to calibrate the range and distribution of “individual differences in tastes and endowments”.

  6. October 24th, 2009 at 02:50 | #6

    I’m willing to believe in Rational Expectations and Representative Agents–just not at the same time. (There is good reason I am not working in a micro-based macro department right now.)

    What is missing in that paragraph is what is missing in all DSGE models–that you sacrifice liquidity for the Rule of One Price. Simple example:

    Assume there are 3,000 people with apples and 3,000 with oranges. Each breaks into fifteen groups of 20 each. The 8th group (median or “representative”) on each side agrees that the price should be 1 orange = 2 apples.

    Those who think the price should be higher for oranges (“I want 2.5 apples for each orange”) or lower for oranges (“I’ll give you 3 apples for 2 oranges”) will stay out of the market, while those who would have taken less (e.g., orange sellers at 1 for 1 or apple sellers at 3 for 1) happily take more than they expected to get.

    But you don’t have 600 market participants; you have 320. And when those 320 sell at the One Price, there will either (1) be a discontinuity in the market or (2) be enough people on both sides who don’t have full capital access that they will take a lower price.

    If you’re also using a DSGE model that assumes full credit market access–most of the labor market models don’t, but strangely most of the capital market models seem to–the “rational” thing to do is withhold stock, making the market only half (+/-) as liquid as it would be (since those who were willing to sell for prices that map to those who are out of the market are taking the _higher_ “market” price).

    What the Representative Agent model does is provide the market equivalent of “consumer surplus” to those who would have sold for lower-than-market while keeping a large portion of the market from making truly _voluntary_ transactions.

    Which is fine until the _de facto_ lack of liquidity does more harm to the market than attempts to withhold product (or decisions to change production, i.e., find another line of work or another crop to grow, which will shift the equilibrium).

  7. October 24th, 2009 at 02:55 | #7

    Oops. Make the above 300 each instead of 3000. Or add an order of magnitude to 20, 600, and 320.

    I feel like Roger Pielke.
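The count in the example above (taking the correction in #7: 300 a side, in fifteen groups of 20) reduces to simple integer arithmetic. A minimal sketch:

```python
GROUPS, SIZE = 15, 20   # 15 groups of 20 per side: 300 + 300 traders in all
MEDIAN = 8              # the 8th group on each side agrees: 1 orange = 2 apples

# Orange sellers in groups 1..8 would accept 2 apples (or fewer) per orange,
# so they trade at the One Price; groups 9..15 hold out for more and stay out.
orange_sellers_in = MEDIAN * SIZE

# Apple holders in groups 8..15 would pay 2 apples (or more) per orange;
# groups 1..7 find the price too dear and stay out.
apple_holders_in = (GROUPS - MEDIAN + 1) * SIZE

participants = orange_sellers_in + apple_holders_in
print(participants)                      # 320, not 600
print(2 * GROUPS * SIZE - participants)  # 280 traders priced out of the market
```

Nearly half the would-be traders never transact at the One Price, which is the liquidity loss the comment describes.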

  8. Ikonocalst
    October 24th, 2009 at 09:15 | #8

    Representative agent theory is entirely misconstrued. There is no need for representative agent theory in any form in macro-economics. Not only is there no need for it but any attempt to deploy it is counter-productive and clouds empirical observation in any field with essentialist assumptions about agents and causation. Strictly speaking, neither discrete agents nor discrete causation(s) exist. A more fruitful approach is to seek dependable laws of relation.

    “Agents” and “causation” are a useful and indeed necessary shorthand for social, moral and legal life. Indeed, we might say imputing agencies and causations involves the application of a heuristic set to problems which do not have (known) algorithmic solutions.

    Agency and causation certainly have no place in macro-economics.

  9. Ikonocalst
    October 24th, 2009 at 09:17 | #9

    Sorry, I made a typo on my name. I’ll change it back to Ikonoclast next time.

  10. Donald Oats
    October 24th, 2009 at 09:50 | #10

    Ken Houghton makes a good point – and I hope he doesn’t feel Pielke-ish for too long.

    Something else that strikes me as being worth exploring is agents with different entrance and exit times into the market(s) of interest. For example, in the case of a stock market, one individual may buy shares in stock A at time t1, with a sell criterion that determines at what time they sell A, given the state of their portfolio and the market. For another individual with a different portfolio constructed over a different time frame, even if they buy stock A at time t1 they may behave quite differently to the first individual. Furthermore, for one individual to sell shares in A we need at least one other willing to buy, and that may determine how many transactions are required to sell down shares in A into the market. Of course some effects may be quite amenable to averaging without loss of essential information. On the other hand, I think the time displacements and the individual’s rules for buying and selling are likely to result in quite different model outcomes if treated as a distribution of behaviours rather than as an averaged variable.

    Hmm, not a very clear explanation, hope it makes sense to someone.

  11. ssendam
    October 25th, 2009 at 09:01 | #11

    “In standard micro-based macro models, this means that not much is lost by aggregating all the participants in a market, and then working with an average or ‘representative’ agent.”

    Not true – there are equilibria that can exist with many agents that cannot exist in any representative-agent model. For instance: the valuation of fiat money must be zero in a representative-agent model (which is why monetary models must directly impose constraints that force agents to value money, such as cash-in-advance or putting money in the utility function). On the other hand, in an overlapping-generations (OLG) model, fiat money can have value with all agents being rational.
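The OLG point can be illustrated with a stationary two-period sketch; the log utility and the numbers e and M below are illustrative assumptions, not part of the standard model:

```python
import math

# Stationary OLG sketch (illustrative numbers): each agent lives two periods,
# is endowed with e goods when young and none when old, and a fixed stock M
# of intrinsically worthless fiat money circulates between generations.
e, M = 10.0, 100.0

# With constant prices in a stationary equilibrium, a young agent who sells
# x goods for money can buy x goods back when old: maximise ln(e - x) + ln(x).
def lifetime_utility(x):
    return math.log(e - x) + math.log(x)

# crude grid search over feasible real money holdings 0 < x < e
xs = [i * e / 10000 for i in range(1, 10000)]
x_star = max(xs, key=lifetime_utility)

price_level = M / x_star  # goods price implied by money-market clearing
print(x_star)        # 5.0: the young voluntarily hold half the endowment as money
print(price_level)   # finite price level, so fiat money has positive value
```

Because each generation rationally expects the next to accept money, real balances are positive; with a single finitely-lived representative agent, backward induction would drive money's value to zero.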

  12. Freelander
    October 26th, 2009 at 08:55 | #12

    Krugman recycles ‘zombie’ usage:

    ” Whenever exchange rates enter into discussion, certain zombie fallacies — ideas that you kill repeatedly, but refuse to die — inevitably make their appearance. What I’m hearing a lot now is the old line that exchange rates have nothing to do with international imbalances: the trade deficit is the difference between investment spending and savings, and that’s all there is to it. It’s a fallacy that John Williamson of the Institute for International Economics calls the doctrine of immaculate transfer. So let me try killing the zombie once again. ”

    http://krugman.blogs.nytimes.com/2009/10/24/adjustment-and-the-dollar/

  13. James
    October 26th, 2009 at 09:39 | #13

    This seems like a case of what Farjoun and Machover call the thermodynamic fallacy; the assumption that you can treat each particle/agent in a heterogeneous set as possessing the average qualities of that set. All too often this leads to averaging away the phenomenon you are trying to model or explain. For example, in thermodynamics, if all the particles of an ideal gas had the same values of energy, velocity, momentum, etc., as the average of the ensemble, the whole would behave quite differently than if their qualities were randomly distributed (as is normally the case).
    “This last example merely highlights a simple mathematical fact, ignorance of which is a common source of fallacy. The fact is this: a mathematical relation that holds among variable quantities does not, in general, hold between their respective averages.
    The moral of this is that one must exercise extreme care in considering an ideal ‘average’ state as though it were a real functioning state, with the usual relations between various quantities. Without such care, one can fall into the same error as the poor statistician who drowned in a lake whose average depth was six inches.” (Laws of Chaos, page 30)
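The point that a relation among variables need not hold among their averages is easy to demonstrate; the unit-elastic demand curve below is a made-up illustration:

```python
import random
random.seed(1)

# An exact relation for every individual transaction, revenue = price * quantity,
# fails to hold between the averages once price and quantity co-vary.
prices = [random.uniform(1, 3) for _ in range(100_000)]
quantities = [10 / p for p in prices]   # quantity demanded falls as price rises

revenues = [p * q for p, q in zip(prices, quantities)]

def mean(xs):
    return sum(xs) / len(xs)

print(round(mean(revenues), 3))                   # 10.0: true for every transaction
print(round(mean(prices) * mean(quantities), 3))  # roughly 11, not 10
```

The "representative" agent with average price and average quantity earns a revenue no actual agent earns, which is exactly the statistician drowning in the six-inch lake.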
