Home > Economics - General > Bad models or bad modellers

Bad models or bad modellers

November 6th, 2008

The idea that bad mathematical models used to evaluate investments are at least partially to blame for the financial crisis has plenty of appeal, and perhaps some validity, but it doesn’t justify a lot of the anti-intellectual responses we are seeing. That includes this NY Times headline, In Modeling Risk, the Human Factor Was Left Out. What becomes clear from the story is that a model that left human factors out would have worked quite well. The elements of the required model are:
(i) in the long run, house prices move in line with employment, incomes and migration patterns
(ii) if prices move more than 20 per cent out of line with long run value they will in due course fall at least 20 per cent
(iii) when this happens, large classes of financial assets will go into default, either directly or because they are derived from assets that will default in the event of a 20 per cent decline in house prices
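Stripped to essentials, these three elements amount to a decision rule simple enough to fit in a few lines. The sketch below is purely illustrative (the function name, the way the threshold is applied, and the example figures are my assumptions, not anyone’s actual model):

```python
def bubble_warning(price, fundamental_value, threshold=0.20):
    """Flag a market where prices sit more than `threshold` (20%)
    above the long-run value implied by employment, incomes and
    migration patterns (element (i)). Elements (ii) and (iii): such
    a deviation implies an eventual fall of at least `threshold`,
    and with it defaults on assets derived from house prices."""
    deviation = price / fundamental_value - 1.0
    if deviation > threshold:
        return f"overvalued by {deviation:.0%}: expect a fall of at least {threshold:.0%}"
    return "within long-run range"


print(bubble_warning(300_000, 220_000))  # ~36% above fundamentals
print(bubble_warning(230_000, 220_000))
```

Anyone applying element (i) seriously would of course estimate the fundamental value from employment, income and migration data; the point is only that nothing in the rule requires second-guessing human behaviour.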

It was the attempt to second-guess human behavioral responses to a period of rising prices, so as to reproduce the behavior of housing markets in the bubble period, that led many to disaster. A more naive version of the same error is to assume that particular observed behavior (say, not defaulting on home loans) will be sustained even when the conditions that made that behavior sensible no longer apply.

But at least this is criticism of specific models. What is really silly, on a par with saying “evolution is just a theory” is the currently popular talking point “this shows you shouldn’t trust models, so I can consult my own prejudices on topic X (most commonly, climate change)“. Any attempt to predict the future behavior of a system requires a model of that system, whether it’s explicit or implicit, complex or simple, solved with a computer or by assertion.

In the case of the bubble, the crucial determinant of model failure was not complexity or simplicity. It was the presence (or, for those who predicted disaster, absence) of the assumption “house prices always go up”. Of course, this assumption was much easier to detect from talking to an amateur speculator than in analyzing a synthetic CDO, but it had the same effect in either case.

More generally, the headline result from a large and complex model can usually be reproduced with a much simpler model embodying the same key assumptions. If those assumptions are right (or wrong), the model results will be the same either way. The extra detail usually serves to produce more detailed results rather than significant changes in the headline results.

Categories: Economics - General Tags:
  1. TerjeP (say tay-a)
    November 6th, 2008 at 17:19 | #1

    I think our monetary models are a big part of the problem. We should be using better models that do away with most of this exchange rate chaos. The models that say devaluation will improve trade imbalances or fix some set of significant economic concerns need to be ditched. We should go back to the simpler models of currency stability advocated by Karl Marx and Adam Smith.

    The leaders of the UK, Germany and France, along with the head of the ECB, are all talking about a version II of Bretton Woods. And not before time. Although hopefully we can do better than Bretton Woods.

    http://alsblog.wordpress.com/2008/11/06/eu-leaders-call-for-bretton-wood-ii/

  2. Ernestine Gross
    November 6th, 2008 at 19:43 | #2

    “A more naive version of the same error is to assume that particular observed behavior (say, not defaulting on home loans) will be sustained even when the conditions that made that behavior sensible no longer apply.”

    Professor John Quiggin, you are speaking what is music to my ears: naive empiricism is as disastrous as not recognising that the results of economic and financial models are conditional.

    Against the background of MH experience, I have come to form a hypothesis: There is an inverse relationship between the interest of students and practitioners of Finance in studying the conditions of theoretical models and the amount of money that is earned by practitioners in financial markets.

    Except possibly for a handful of honours students at one or two locally distinguished Finance Schools, who would be interested in understanding (interpreting the mathematical conditions in terms of how the world would have to look) Merton’s and later authors’ models of ‘continuous time trading’? But, to the best of my knowledge, Merton’s model is THE model which deals with portfolios of risky debt. Yes, exactly THE problem encountered with sub-prime mortgages. How or why any of the rating agencies ever assigned a AAA (investment grade) rating to these synthetic debt securities requires an explanation. The continuous time trading condition is, in my interpretation, exactly the opposite of the notion of an investment grade security, because I associate an investment grade security with the idea that it can be held to maturity. The advice that should have been attached is: keep on selling the stuff, as fast as possible, 24/7 for 360 days a year, and hold your breath when there is a holiday almost everywhere in the global economy.

  3. Tom
    November 6th, 2008 at 19:45 | #3

    It seems to me that the central problem with all of these models is that they ignored correlations in the underlying assets. Since large-scale defaults are correlated events, the mortgages in a bundle will not be uncorrelated with one another. Therefore a portfolio of different CDOs will have a risk profile that is much higher than one might guess on the basis of their individual ratings. My guess is that this correlated risk was substantially ignored by the market.
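    The point can be made concrete with a toy one-factor default model (a sketch of the mechanism only; all parameter values are invented, and this is not any rating agency’s method). Each loan defaults when a latent normal variable, part common factor and part idiosyncratic shock, falls below a threshold; the per-loan default probability is identical in both runs, but the pooled tail risk is not:

```python
import math
import random


def pool_loss_tail(n_loans=100, p_default=0.02, rho=0.0,
                   n_sims=10_000, tail=0.10, seed=1):
    """Monte Carlo estimate of P(pool loss fraction > tail) in a
    one-factor model: loan i defaults when
    sqrt(rho)*common + sqrt(1-rho)*idiosyncratic < threshold,
    with the threshold chosen so each loan defaults with
    probability p_default. Illustrative parameters only."""
    rng = random.Random(seed)

    # Invert the standard normal CDF by bisection (stdlib only).
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p_default:
            lo = mid
        else:
            hi = mid
    thresh = (lo + hi) / 2

    a, b = math.sqrt(rho), math.sqrt(1 - rho)
    exceed = 0
    for _ in range(n_sims):
        common = rng.gauss(0, 1)
        defaults = sum(1 for _ in range(n_loans)
                       if a * common + b * rng.gauss(0, 1) < thresh)
        if defaults / n_loans > tail:
            exceed += 1
    return exceed / n_sims


print("independent loans:", pool_loss_tail(rho=0.0))
print("correlated loans: ", pool_loss_tail(rho=0.3))
```

    With independent loans a 10 per cent pool loss essentially never happens; with modest correlation it becomes a live risk, which is roughly what the ratings missed.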

  4. Ernestine Gross
    November 6th, 2008 at 20:33 | #4

    “Therefore a portfolio of different CDOs will have a risk profile that is much higher than one might guess on the basis of their individual ratings.”

    Each CDO is a portfolio of debt securities.

  5. Alanna
    November 6th, 2008 at 21:02 | #5

    Someone out there completed a study of Amsterdam house prices since the 1600s, ie 400-plus years. In some hundred-year periods house prices rose for half a century; in others they fell for half a century. Average house price growth over those 400-plus years? 1 per cent per annum.
    Are we really surprised when we fail to achieve an 8 per cent-plus return each and every year on the value of a financial asset, or are we just dreaming? Models sometimes can’t see the long run for all the short-run noise. History is littered with seemingly perfect models that fail to predict a reasonable return in either the short or long run. Perhaps they don’t take into account a long enough long run and end up confusing both outcomes!

  6. sean
    November 6th, 2008 at 21:38 | #6

    I think you are confusing models with patterns.
    Your example is just that, a pattern: between now and next Sunday afternoon there could be a shift in these assumptions. Unlikely? Yes, very, but the maths (butterfly effect) says it’s possible. It’s once again the myth of the framework.

    Donny boy was laughed at loudly, but he is yet to be falsified.

    “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.”

  7. Donald Oats
    November 6th, 2008 at 23:38 | #7

    All models are based upon assumptions. Good modellers bring the assumptions to the foreground. They give the model user an appreciation of the realm of validity of the model, and perhaps methods to explore beyond the “normal parameters,” so to speak. Model complexity is the result of the compromises made in terms of assumptions and the “artistic freedom” that the modeller has. For example, APRA (Australian Prudential Regulatory Authority) has mandated that the big four banks in Australia operate under the Basel II regulatory framework for risk. This has major implications for the banks, and of course, for the modeller too.

    Take the Basel II operational risk class, for example. A particular risk distribution consists of both a severity distribution (ie how big is the monetary loss, given the event occurred), and a frequency distribution (ie how often do the events occur \emph{per year}, loosely speaking). The parameter(s) for the severity distribution and the frequency distribution must be determined by some means. The Basel II operational risk class further refines op risk by event class and by business line – about 56 separate operational risk distributions, each with its own severity and frequency distributions, each of which has its own parameters to fit/determine. To model a bank’s complete set of operational risk exposures in accordance with the Basel II operational risk class is clearly something of an undertaking. Ponder how we might evaluate such a model, \emph{even with the assumption that the risks are uncorrelated}! Some operational risks may not have occurred once during the bank’s organisational life!
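    The severity-times-frequency structure described above can be sketched for a single risk cell as a small Monte Carlo. Poisson frequency and lognormal severity are common textbook choices; the parameter values below are invented for illustration, not calibrated to any bank:

```python
import math
import random


def annual_op_loss(freq_mean=0.5, sev_mu=10.0, sev_sigma=2.0, rng=random):
    """One simulated year for a single operational risk cell:
    a Poisson number of loss events (the frequency distribution),
    each with a lognormal monetary size (the severity distribution).
    Parameter values are illustrative, not calibrated."""
    # Poisson draw via Knuth's multiplication method (fine for small means).
    n, prod, limit = 0, rng.random(), math.exp(-freq_mean)
    while prod > limit:
        n += 1
        prod *= rng.random()
    return sum(rng.lognormvariate(sev_mu, sev_sigma) for _ in range(n))


rng = random.Random(42)
years = sorted(annual_op_loss(rng=rng) for _ in range(10_000))
# Basel II-style capital sits at a very high quantile of this
# annual-loss distribution, e.g. the 99.9th percentile.
print(f"99.9% annual loss quantile: {years[int(0.999 * len(years))]:,.0f}")
```

    The rarity of events is exactly the problem identified above: with a mean of one loss every two years, most of the simulated years show zero loss, and the high quantile rests on a handful of extremes.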

    This means that any bank wishing to gain a more mature appreciation of their exposure to operational risk, for example, is going to have to expand the model in some manner – they could, for example, allow the risk parameters to depend upon economic factors – there are many approaches that may be taken from here on.

    But step back for a second, and consider what an operational risk is: it is typically something that doesn’t happen very often, but when it does happen is typically very costly. Even with the simplest economic assumptions, such as constant growth, constant employment rate, etc, we probably don’t have sufficient observations to fit a risk distribution. Or even to determine what class of random variable (probability distribution function) is appropriate.

    How then, can a model including the credit risk class, operational risk class, etc ever hope to remotely match reality? It is obviously asking too much. Perhaps the best that we can do in practice is to apply “stress tests” in which a change in economic conditions is assumed to have a particular impact upon risk parameterisations. The model, under the new parameters, is evaluated. If the impact to capital requirements is within acceptable limits, under all of our chosen stress tests, then all is well (we hope). If not, an examination into why not is essential.

    In any case, it is \emph{up to the board} to accept final responsibility for risk exposure. The reports from models, such as may come from Basel II or internally, are only one input into appreciating risk exposure. As humans with broad and deep experience in economic factors \emph{and human frailties}, the board members set the broad parameters for how the bank is to run, as an institution.

    During an asset bubble a bank is operating in a competitive environment, one in which many of its competitors are greedily increasing current profits (ie those made in the current reporting period) by increasing future risk exposure (ie risk exposure in future reporting periods). This race to the bottom must be vigorously resisted by banks as a whole, if there is to be any hope of avoiding situations such as the current one confronting us. A bank board has the duty to explicitly state what its strategy for growth is, in terms of how it intends to balance future risk exposure against current profits.

    PS: Institutional behaviour must pass the ethics test; this is why it is so important to pursue the AWB wheat-for-guns kickbacks and who had knowledge of it.

  8. jquiggin
    November 6th, 2008 at 23:52 | #8
  9. November 7th, 2008 at 01:07 | #9

    “But the larger failure, they say, was human”
    On this blog, I have been putting forward what I call a Human Agency explanation of this crisis versus what I have called a Mechanistic explanation of this crisis. My view focuses on:

    1) Incentives ( e.g. ) Implicit and explicit government guarantees
    2) Unethical behavior ( e.g. ) Fraud, negligence, fiduciary malpractice
    3) Human weakness ( e.g. ) wishful thinking, lack of due diligence

    As opposed to, what I would call, conditions that are acted upon or taken advantage of:

    1) Low interest rates
    2) Sloshing pool of money
    3) Complex investments
    4) Complex analysis

    Today, an excellent article by Steve Lohr in the NY Times entitled “In Modeling Risk, the Human Factor Was Left Out”. This is a version of everything that I have been saying.

    “Today’s economic turmoil, it seems, is an implicit indictment of the arcane field of financial engineering — a blend of mathematics, statistics and computing. Its practitioners devised not only the exotic, mortgage-backed securities that proved so troublesome, but also the mathematical models of risk that suggested these securities were safe.

    What happened?

    The models, according to finance experts and economists, did fail to keep pace with the explosive growth in complex securities, the resulting intricate web of risk and the dimensions of the danger.
    But the larger failure, they say, was human — in how the risk models were applied, understood and managed. Some respected quantitative finance analysts, or quants, as financial engineers are known, had begun pointing to warning signs years ago. But while markets were booming, the incentives on Wall Street were to keep chasing profits by trading more and more sophisticated securities, piling on more debt and making larger and larger bets.”

    Absolutely. The models are merely a condition of the crisis, but not the cause.

    “Complexity, transparency, liquidity and leverage have all played a huge role in this crisis,” said Leslie Rahl, president of Capital Market Risk Advisors, a risk-management consulting firm. “And these are things that are not generally modeled as a quantifiable risk.”

    Math, statistics and computer modeling, it seems, also fell short in calibrating the lending risk on individual mortgage loans. In recent years, the securitization of the mortgage market, with loans sold off and mixed into large pools of mortgage securities, has prompted lenders to move increasingly to automated underwriting systems, relying mainly on computerized credit-scoring models instead of human judgment.
    So lenders had scant incentive to spend much time scrutinizing the creditworthiness of individual borrowers. “If the incentives and the systems change, the hard data can mean less than it did or something else than it did,” said Raghuram G. Rajan, a professor at the University of Chicago. “The danger is that the modeling becomes too mechanical.”

    Rather, the view of the market and investing becomes too mechanical, forgetting that human agency is behind all of these decisions. I dispute that the investments were too complex to understand their true nature. Rather, the complexity was used to obfuscate risk. Take any of these investments, for example, credit default swaps, and they are not impossible to understand. On the contrary, the risk is apparent.

    “Besides, the formation of a housing bubble was well under way. Until 2003, prices moved in line with employment, incomes and migration patterns, but then they departed from the economic fundamentals.”

    See, human agency is not factored into this mechanistic view. It appears to be a mathematical relationship between various figures.

    “The Wall Street models, said Paul S. Willen, an economist at the Federal Reserve in Boston, included a lot of wishful thinking about house prices. But, he added, it is also true that asset price trends are difficult to predict. “The price of an asset, like a house or a stock, reflects not only your beliefs about the future, but you’re also betting on other people’s beliefs,” he observed. “It’s these hierarchies of beliefs — these behavioral factors — that are so hard to model.”

    Indeed, the behavioral uncertainty added to the escalating complexity of financial markets help explain the failure in risk management. The quantitative models typically have their origins in academia and often the physical sciences. In academia, the focus is on problems that can be solved, proved and published — not messy, intractable challenges. In science, the models derive from particle flows in a liquid or a gas, which conform to the neat, crisp laws of physics.

    Not so in financial modeling. Emanuel Derman is a physicist who became a managing director at Goldman Sachs, a quant whose name is on a few financial models and the author of “My Life as a Quant — Reflections on Physics and Finance” (Wiley, 2004). In a paper that will be published next year in a professional journal, Mr. Derman writes, “To confuse the model with the world is to embrace a future disaster driven by the belief that humans obey mathematical rules.”

    That last line is exactly what I’ve been saying.

    “Yet blaming the models for their shortcomings, he said in an interview, seems misguided. “The models were more a tool of enthusiasm than a cause of the crisis,â€? said Mr. Derman, who is a professor at Columbia University.”

    Bingo. The models were a condition, but not a cause.

    “Better modeling, more wisely applied, would have helped, Mr. Lindsey said, but so would have common sense in senior management.”

    I know that we want some big scary villain, but it really does just come down to human agency and things like common sense.

    “The dismissive response, Mr. Lo said, was not really surprising because Wall Street was going to chase profits in the good times. The path to sensible restraint, he said, will include not only better risk models, but also more regulation. Like others, Mr. Lo recommends higher capital requirements for banks and the use of exchanges or clearinghouses for the trade of exotic securities, so that prices and risks are more visible. Any hedge fund with more than $1 billion in assets, he added, should be compelled to report its holdings to regulators.

    Financial regulation, Mr. Lo said, should be seen as similar to fire safety rules in building codes. The chances of any building burning down are slight, but ceiling sprinklers, fire extinguishers and fire escapes are mandated by law.
    “We’ve learned the hard way that the consequences can be catastrophic, even if statistically improbable,” he said.”

    The recommendations made are fine:
    1) higher capital
    2) a clearinghouse
    3) better regulation

    I’ve already agreed with and talked about all of them. However, the regulations need to be based on human agency and not simply measurable factors. That’s why I believe only simple and broad principles can work in this forthcoming regulation. Otherwise:

    “Innovation can be a dangerous game,” said Andrew W. Lo, an economist and professor of finance at the Sloan School of Management of the Massachusetts Institute of Technology. “The technology got ahead of our ability to use it in responsible ways.”

    This is just another way of saying that if the regulations are too specific, investors will find a way around them. You need a set of principles that can funnel riskier investments into a screening process where they can be examined. The actual amount of government regulation should be based on that assessment. Unfortunately, that will be done by human agency, and has its own set of conditions and risks. There’s no simple answer.

  10. sean
    November 7th, 2008 at 02:18 | #10

    8, JQ

    This blog is improving all the time, just like Burkean and Hayekian philosophy: through trial and error you can achieve ever greater truth :0) We will make a conservative of you yet; there are too few real ones around.

    I’m not too bothered about anyone going to war without a plan to win, even if it’s a rubbish one.
    I suppose I should condemn the Pentagon war-game models as well, just for consistency?

  11. gerard
    November 7th, 2008 at 10:11 | #11

    I’m sure quants these days don’t use it anymore, but the original Black-Scholes is based on a normal distribution of price movements, when actual price movements are leptokurtic: large movements occur with a frequency that Black-Scholes severely underestimates. According to these models, crashes of the type that seem to occur about once a decade should only occur about once every few centuries. A Monte Carlo simulation of market prices can be distinguished from a graph of genuine movements by noting the frequency of large movements.
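    The point is easy to demonstrate numerically. The sketch below uses a Student-t distribution with 3 degrees of freedom as a stand-in for leptokurtic returns (a modelling assumption, not an empirical fit) and counts moves beyond four standard deviations under each distribution:

```python
import math
import random


def big_move_freq(draws, k=4.0):
    """Fraction of draws lying more than k sample standard
    deviations from the sample mean."""
    mean = sum(draws) / len(draws)
    sd = math.sqrt(sum((x - mean) ** 2 for x in draws) / len(draws))
    return sum(1 for x in draws if abs(x - mean) > k * sd) / len(draws)


rng = random.Random(0)
n = 200_000
normal = [rng.gauss(0, 1) for _ in range(n)]
# Student-t with 3 degrees of freedom: Z / sqrt(chi2_3 / 3), where
# chi2_3 / 3 has the same distribution as Gamma(shape=1.5, scale=1) / 1.5.
fat = [rng.gauss(0, 1) / math.sqrt(rng.gammavariate(1.5, 1) / 1.5)
       for _ in range(n)]

print(f"P(|move| > 4 sd), normal:     {big_move_freq(normal):.2e}")
print(f"P(|move| > 4 sd), fat-tailed: {big_move_freq(fat):.2e}")
```

    The normal run produces only a handful of four-sigma days in 200,000 draws; the fat-tailed run produces roughly a hundred times more.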

  12. November 7th, 2008 at 12:14 | #12

    PrQ,
    Just a small correction. None of the models you were referring to (at least the ones I have seen) assumed that prices never fall. Their error was effectively to put too high a lower bound on price drops – i.e. they assumed that prices never fall by more than x% in a year, or by a total of more than 2x% over several years.
    All of the CDOs issued with high ratings benefitted from a “waterfall” structure, with the highly rated paper protected by “equity” and mezzanine tranches that were meant to cover all possible losses.
    gerard,
    A good Monte Carlo simulation will have the correct frequency of large losses in it. Such simulations are much better than a Black-Scholes model.

  13. observa
    November 7th, 2008 at 13:26 | #13

    But at least this is criticism of specific models. What is really silly, on a par with saying “evolution is just a theory”, is the currently popular talking point “this shows you shouldn’t trust models, so I can consult my own prejudices on topic X (most commonly, climate change)”.

    And what is really silly too is to assume that those who are critical of adopting a particular CC amelioration model with inherent, proven systemic risk, not to mention failure to ameliorate to date when applied, are just consulting their own prejudices on the topic overall. Some may well be but still have a valid point that the model cure proposed is as bad as the disease.

    There is no necessary correlation between developing/recognising a valuable model on AGW and developing/recognising a valid amelioration model. Just as Henry Ford developed a good car model, there was no reason to presume he had a similar mortgage on traffic control. Indeed Henry might have had some very blind prejudices and loopy ideas about the latter.

  14. observa
    November 7th, 2008 at 14:16 | #14

    But at least this is criticism of specific models.

    Wise words indeed and we’d all do well to remember them. Having said that, JQ is right to point out that those who now dread the repeat of global ETS, and all the derivatives and regulatory failure that can so obviously sail with it, are somewhat silent on alternatives. Generally speaking, I’d agree with him that coherent market green policies are a pretty barren landscape at present, which may have reduced such ETS skeptics to the periphery of the knockers and obfuscators. It’s a fair point and one that needs to be addressed urgently, before global ETS rushes in to fill the vacuum and unleashes the Gordon Gekkos and Louis Leeches upon us all over again.

    I’m a market green man and have been in favour of level playing field carbon taxing since long before it became fashionable. Taking AGW on board is small beer in the priorities of devising a sensible constitutional marketplace, where the total natural environment has true countervailing market power against its perpetual destruction. ETS does none of that terrestrially. In fact quite the opposite, with its counterproductive offset policies (ever more global derivatives trading). No, it will need more than that papering over the cracks, and one benefit of the current world economic depression (and that it is) is that we might have the time and motivation now to look at some real answers. Market based green ones.

  15. November 7th, 2008 at 14:45 | #15

    A different take on the same topic, for those interested in different viewpoints on economics:

    http://blog.mises.org/archives/008908.asp

  16. TerjeP (say tay-a)
    November 7th, 2008 at 16:57 | #16

    Having said that JQ is right to point out that those who now dread the repeat of global ETS and all the derivatives and regulatory failure that can so obviously sail with it are somewhat silent on alternatives.

    I hate the idea of a national or international ETS scheme, in spite of once being favourably inclined. A carbon tax with revenue used to reduce other taxes is the logical alternative. Of course, you could implement a carbon tax by just selling the ETS permits at a set fixed price rather than at auction.

  17. observa
    November 7th, 2008 at 20:28 | #17

    For me the problem of AGW is fairly analogous to the problem of the MDB. We now have a fairly reliable model of the overall problem. Although the problem is glaringly obvious to all at a glance, we have the scientific and technical capability to quantify it, ie measure and predict basin-wide inflows vs current usage. Also we can model some minimal and more desirable environmental flows, and the overall solution is fairly clear. Basically only so much human usage can continue. That’s the quants so to speak, but then there’s the human factor to deal with, and it’s farewell to models and hello common sense, and pronto.

    Where’s the common sense? Well, let me put that question and answer tangentially and let’s see if the obvious springs to the fore. Consider the following two statements and pick agree or disagree as quickly as you can:
    1. Generally Australians do not suffer shortages of petrol
    2. Generally Australians do not suffer shortages of water
    Therein lies the common sense, because we do suffer water shortages because of the current and historical prices charged for them. There is generally no shortage of petrol anywhere in Australia, even as oil prices went from US$35/bbl to $145 and now back down to $60. Not so with water, when water rights went from around $300/ML up to $1200 and now around $400. ETS fans take particular note of the latter, although we’re not comparing apples with apples. One is an input cost of oil and the other a capital cost of mud more often than not.

    Now notice that while we don’t go short of petrol we seem to always be short of water although the world is 80% water and the supermarkets are always full of bottles of the stuff. It’s almost trite to point out that it’s all about price. The same price mechanism we use to allocate food, clothing, shelter and even seaside real estate. The truth is that unlike seaside real estate, we can have as much water as we like anywhere in Australia whenever we want to pay the opportunity cost of doing so.

    Simple, inexorable price that works 24 hrs a day, 7 days a week, 365 days a year, rather than only when silly SA Libs reckon public service water ‘advisors’, working daylight flexi-hours only and not on weekends, can do a whole lot better. Food, clothing, shelter, petrol, plasmas to RE: we all understand the price mechanism intuitively, but suddenly, like SA Libs with water, environmentalists want to abandon price for public service quantity controls in one of the most important arenas affecting it all. That is not common sense, whatever their quant models are telling them.

  18. observa
    November 8th, 2008 at 09:53 | #18

    So as far as the environment is concerned, it’s common sense to rely on the price mechanism and be critical of quantity controls, and yet market men haven’t a lot to say about how and what prices we should all face. Hmmm… thanks for all your help there, Observa and Co, but we’ve got a planet to save. Basically, for market men it’s time to put up or shut up. In that sense, while Austrian analysis has something very useful to say about the need for real prices, rather than destructive funny money ones, it doesn’t go into much detail about the ‘construction’ of actual prices, other than to say interfere as little as possible and see what nature throws up. Whilst Govt has around a 40% stake in the economy now, it’s naive not to address that impact on prices and incentives in the current marketplace.

    Leaving aside the debate over the size of Govt, that current need for revenue sees a massive intervention in market pricing, largely via the taxation mechanism, although to a lesser extent by legislative fiat. You only have to think of the way in which everything from stamp duty to progressive income tax, capital gains exemption and negative gearing impacts housing affordability to understand that. Govt taxation largely sets the current constitution of prices in the marketplace we face now, ie the constitutional marketplace (CM). On environmental grounds we have to question whether that inherited CM and its consequent price structure is relevant to today’s realities. Clearly it’s not, and there’s no time like the present to propose serious and far-reaching change to that CM, rather than more add-ons or tinkering around at the margins.

    What I’m suggesting is not that we discount jumping on board with a quantity based ETS, but we put it back in the drawer for a later date and concentrate on sensible price reform at home. We can after all bring it out again and overlay it if it looks like the world has made a goer of it. In that sense a cautious approach on quantity controls is called for, because like MDB water licences it could be extremely problematic to unwind if it all goes the same way. In any case ETS is somewhat limited in its approach and has some nasty offset incentives. This stance is of course predicated on the assumption that reliance on domestic price reform can produce equal or better positive directional change in the interim. I believe it can, but it’s just that we haven’t really addressed the broader issues properly and to some extent AGW has got us all sidetracked and too narrowly focussed.

    Enough said and in the absence of any coherent, market based CM, I’ll outline a blueprint for one, bearing in mind how overall taxation largely impacts price and our behaviour. Apart from say excise on booze and fags, we’re about to ditch all other forms of taxation to rely on a mix of the following-
    1. We tax carbon at the mine or well head(really CO2E taxing but no exemption for use in plastics manufacture, etc)
    2. We tax other resources from mining, quarrying, forestry, fishing and water use.
    3. The private use of land is a resource and is taxed based upon land use, from zero for natural state to a maximum for full artificial cover (ie buildings, bitumen and even dam cover or mine and quarrying disturbance).
    4. We rely on an ANWT (life cycle adjusted) for top end redistribution, with a specific exemption and franking credit. That exemption would be all wealth held as natural environment, and a franking credit given for any expenditure on remediation (ie mines, quarries or opening dams, etc) or for creation of natural environment (ie Walmsley’s Earth Sanctuaries and the like).

    I put it to ETS fans and quantity control fans in general, that rapid movement toward that price based CM, would be far more practical and productive environmentally in the medium term than what they are proposing. Furthermore it takes a more holistic approach to our natural environment and gives true countervailing market power to that environment against the current price incentive structure to knock it over. Without inexorable price and that countervailing market power on its side, protecting natural environment with rearguard quantity measures is like pissing on a bushfire. That’s why I’m a market green and prepared to nail my colours to the mast for any critique, whatever its colour. No fancy modelling you’ll note, just common sense and some history and philosophical thought on my side I reckon. This third way stuff isn’t that hard when you think about it a bit and if not us who then? Obama?

Comments are closed.