
Cracks in the foundations

January 23rd, 2008

The decision of the US Federal Reserve to cut interest rates by 0.75 percentage points is as clear a sign of panic on the part of the monetary authorities as we’ve seen since the 1987 stock market crash. It’s not entirely coincidental that it followed a dreadful week on Wall Street, and a couple of awful days on world stock markets while the US was closed for the long weekend.

Still, stock markets have fluctuated quite a bit in the last 20 years without producing this kind of reaction. The really alarming events have been happening in bond markets and, in retrospect, the most alarming happened just over a month ago.*

That’s when Standard & Poor’s cut the credit rating of ACA Financial Guaranty Corp from A (strong investment grade) to CCC (just about the worst kind of junk) in one move. This event showed the weakness of two of the most important defences against the kind of credit derivative meltdown that market bears have been worrying about for years.

First up, it’s yet more evidence that, when it comes to systemic risk, credit rating agencies like S&P are either asleep on the job or, worse, incapable of performing it. They are fine at the day-to-day job of comparing different assets of the same kind, for example, estimating which companies are more or less likely than others to default on their corporate bonds. But when it comes to assessing the risks of whole asset classes, particularly new and ‘innovative’ asset classes, they’ve proved themselves to be hopeless.

They were caught napping by the Asian financial crisis. In the dotcom boom, they failed to detect the bogus financial structures of firms like Enron and many of the big telecoms. And they have been centrally implicated in the crisis that began with the repackaging of subprime loans into bundles of securities, many of which were given AAA ratings on the basis of dubious projections of default rates.

But the failure to detect problems with bond insurers like ACA is far more serious. As I said in the 2002 post I linked above,

The starting point [for a possible financial meltdown] is a crisis in derivatives markets arising when ‘counterparties’ (those owing money on the transaction) … refuse to pay up.

To protect themselves against such a risk, parties to these deals have insured themselves against the possibility of default with bond insurers like ACA. But what happens if the insurers themselves go broke? This looks very likely to happen. ACA is relatively small, and has just staved off liquidation for a month or so. But the bigger insurers like MBIA and Ambac are also in trouble – Fitch just downgraded Ambac to AA and if the other agencies follow suit, the company will be largely unable to write new policies. It’s hard to believe they can make it through the coming year without being rescued either by the big banks (themselves looking pretty sick now) or the US government.

Suddenly, there are a couple of trillions of dollars of bonds that are less secure, maybe much less secure, than their holders thought they were. If, as seems entirely possible, derivative contracts have been written with these bonds as part of the underlying assets, the amounts at stake could be much larger.

The subprime mortgage crisis, in isolation, seems likely to produce losses of a couple of hundred billion, possibly enough to generate a mild recession in the USA. But the possibility of large-scale failure in bond and credit derivative markets, now all too real, could bring an end to the long period of global economic expansion that began with the end of the last big global recession in the early 1990s.

Note: I’ve made various updates in response to helpful comments.

* Just when I began my Christmas break, as it happens.

Categories: Economics - General
  1. January 27th, 2008 at 08:51 | #1

    “At root, any model of human behaviour must include within itself perception of the predicted effects of that very model of human behaviour. In other words, the model undermines itself as a predictive tool.”

    I forget the name of it (maybe somebody can remind me?), but there’s even a law of economics that states something like “when an economic measure is used in policy, it ceases to be reliable”.

  2. Ernestine Gross
    January 27th, 2008 at 10:30 | #2

    Will,

    I agree with your agreement with JQ regarding the uselessness of rating agencies. Are you going to write to Mr Costa, NSW State Government, telling him what you think of rating agencies, or are you blaming academics for not doing it? I mention this because the only argument Mr Costa has in defence of the privatisation policy of the electricity industry is the triple-A rating!

    I may even agree with some of your statements about academic papers in ‘Finance’, if you happen to have in mind that part of the literature I am thinking about. The literature I have in mind is that which is driven by hypothetical practitioners who respond to both ‘industry demand for applied work’ and publication targets and consultancy fees. But this is not the professional economics literature where finance is but one element of interest. If ‘industry’ is not satisfied with the outcome of its demands on the education system, then I suggest you address your complaints to ‘industry’.

  3. Ernestine Gross
    January 27th, 2008 at 10:35 | #3

    PML, Milton Friedman and his students held a view similar to the one you present. I wouldn’t call it a law. However, I would be prepared to say that those who advocated ‘economic rationalism’, based on data preceding their policies on institutional change, have failed to recognise that the data on which they base their predictions is irrelevant.

  4. Ernestine Gross
    January 27th, 2008 at 12:26 | #4

    Re 50: Will,

    I wrote my post before reading your post, item 50.

    There are no axioms in the finance models you refer to. These models are characterisations of solutions to general equilibrium models on which specific ‘restrictions’ (convenient assumptions) have been imposed to allow the derivation of equations into which numbers can be put. So, they are at best special cases. This is what ‘industry’ wants because it is (looks like) applied.

    If you want to know about the axioms on which these Finance models rest, you need to study the theoretical models from which they are derived.

    General equilibrium theory does not, to the best of my knowledge, aim to satisfy the demands of managers of financial corporations but rather to investigate a particular philosophy. I should think this activity is in the proper domain of academia.

    The theoretical models from which the Finance models can be derived as special cases assume that individuals make probability assessments about future events when making their own decisions.

    There is indeed a jump to conclusions involved when applying special case solutions of theoretical models, which do not contain ‘corporate managers’ at all, to risk management by so-called ‘professional’ finance managers.

    As far as I know, most Finance programs are now taught separately from Economics. I am not convinced this is a good idea.

    You discussed the importance of the independence of financial variables in one of your earlier posts. It seems to me, the independence of financial variables depends, among other factors, on independent minds of people with independent wealth in their pockets. But independent minds are not obviously compatible with the corporate form of business.

    Your term ‘invalid empiricism’ makes sense to me.

  5. Will
    January 27th, 2008 at 17:37 | #5

    Ernestine, thanks for your thoughtful response, which makes much the same excellent points about the proper use of models as Hayek did in his 1974 Nobel prize lecture. I do think, however, that the ability to numerically specify the distribution of outcomes of complex financial systems (namely firms) is an axiom of these models.

    For convenience, I’ll take the Wiki definition: “In traditional logic, an axiom or postulate is a proposition that is not proved or demonstrated but considered to be self-evident. Therefore, its truth is taken for granted, and serves as a starting point for deducing and inferring other (theory dependent) truths.”

    Now consider, as an example of an important and public-domain model, the Basel II Internal Ratings-Based (IRB) formula. For banking it doesn’t get more important than this. (For a very convenient description see http://www.bis.org/bcbs/irbriskweight.pdf.) This formula calculates the amount of capital a regulated bank must hold against each of its exposures. Its major inputs, to be provided by the bank itself, are annual probability of default (PD) and loss given default (LGD). Logically, both are derived from the same distribution of future asset values of the borrowing firm (although economists might note that PD and LGD also have to be specified for sovereign loans as well).
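
    [For concreteness, the core of the corporate-exposure risk-weight function in that BIS note can be sketched in a few lines of Python. This follows the published formula (asset correlation interpolated between 12% and 24%, a 99.9th-percentile systematic stress, a maturity adjustment); it is an illustrative sketch, not a definitive implementation, and the example inputs are made up.]

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal: N.cdf, N.inv_cdf

def irb_capital(pd, lgd, maturity=2.5):
    """Capital requirement K per unit of exposure for a corporate
    asset under the Basel II IRB formula (as in the BIS explanatory
    note). pd and lgd are the bank-supplied inputs discussed above."""
    # Asset correlation: blends 24% (low-PD) and 12% (high-PD) weights
    w = (1 - exp(-50 * pd)) / (1 - exp(-50))
    r = 0.12 * w + 0.24 * (1 - w)
    # Default probability conditional on a 99.9th-percentile systematic shock
    cond_pd = N.cdf((N.inv_cdf(pd) + sqrt(r) * N.inv_cdf(0.999)) / sqrt(1 - r))
    # Maturity adjustment
    b = (0.11852 - 0.05478 * log(pd)) ** 2
    adj = (1 + (maturity - 2.5) * b) / (1 - 1.5 * b)
    # Expected loss (pd * lgd) is provisioned separately; capital
    # covers unexpected loss only
    return lgd * (cond_pd - pd) * adj

# e.g. a hypothetical 1% PD, 45% LGD exposure requires roughly 7% capital
k = irb_capital(0.01, 0.45)
```

    [Note how everything downstream of the bank-supplied PD and LGD is mechanical: the whole calculation stands or falls on whether those two inputs can actually be measured.]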

    I think the assumption that these are measurable is axiomatic to the approach.

    What is particularly damning is that a bank is only allowed to use the IRB approach if it can demonstrate somehow that it can measure these things, but no specific guidance is provided as to how. The major criterion is the amount of data a bank can gather, explicitly pushing an empirical approach.

    Now I have seen the type of stuff that banks are doing to try to claim that they can measure PD/LGD, and that is being accepted by their regulators, and it would make your hair turn white. It is absolutely the worst kind of pseudoscience you have ever seen. As long as something, anything, can be specified to a 95% confidence level, then it is OK. Its relevance is irrelevant.

    I have tried hard to limit my comments to finance academia and not economics or academia generally (but may have forgotten occasionally) because that is where the problem lies to me. Just open any finance textbook. I think it would be great if the rest of the economics discipline took seriously what was being done in their name.

  6. January 27th, 2008 at 19:48 | #6

    “PML, Milton Friedman and his students held a view similar to the one you present. I wouldn’t call it a law.”

    I actually bumped into it on wikipedia once, under the name of “…’s law”, only I forget just whose. I’ll try searching.

  7. Ian Gould
    January 28th, 2008 at 10:17 | #7

    So when we’re dealing with thinly traded commercial instruments, are mark-to-market requirements actually part of the problem?

    Is there an alternative?

  8. Ernestine Gross
    January 29th, 2008 at 07:47 | #8

    Re 55:
    Will, the link you have given doesn’t work (page cannot be found).

    I have been familiar with Finance textbooks since the mid-1980s. I can’t quite agree with the generality of your statement about Finance textbooks. See for example texts by Cox and Rubinstein and in particular texts by D. Duffie. I’d be very surprised if Markowitz or Merton would treat a ‘restriction’ on the solution of a theoretical model as anything other than a hypothesis to be tested empirically. However, I have to agree that there are a few undergraduate Finance texts, even of recent vintage, which could be read by the uninitiated in the manner you describe. One I have in mind has explicit ‘industry endorsement’ comments. (Good marketing but bad scholarship as far as I am concerned.)

    Incidentally, my earlier comments were not about the proper use of models in general but merely about the specific relationship between the Finance models you indicated and post-1950s general equilibrium theory.

    You mention v. Hayek. He is known to me as an advocate of ‘laissez faire’ but not as a contributor to analytical models which examine the beliefs in ‘laissez faire’ within an axiomatic approach (e.g. post-1950s, starting with Arrow-Debreu-Koopmans). I find it interesting that v. Hayek was resurrected, so to speak, in the political sphere at just about the time when the analytical research program showed up problems with the 19th century beliefs that may well be labeled “Cracks in the foundation”.

  9. Will
    January 29th, 2008 at 18:30 | #9

    Ernestine, thanks for your response. I will take a look at the textbooks you mention, and you are correct to pick me up for generalizing. The link to the BIS article works if you remove the full stop at the end, or else google “explanatory note irb risk weight functions”. I really recommend Hayek’s speech, which I stumbled across one day. It is at http://nobelprize.org/nobel_prizes/economics/laureates/1974/hayek-lecture.html

    With respect to Merton, his 1974 paper is specifically “On the Pricing of Corporate Debt” and pretty much offers what it promises: a function, or functions, for the price (yield) of debt in terms of, inter alia, “the variance (or volatility) of the firm’s operations”.
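
    [As a rough illustration of what the 1974 result delivers, here is a minimal Python sketch of the Merton model: equity is treated as a Black-Scholes call on firm value, with the face value of the firm’s zero-coupon debt as strike, so risky debt is firm value minus equity. The numerical inputs are invented, and sigma, the volatility of firm value, is precisely the contested quantity.]

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal

def merton_debt_yield(firm_value, face, rate, t, sigma):
    """Merton (1974) sketch: equity is a Black-Scholes call on firm
    value V with the face value of zero-coupon debt as strike; risky
    debt is then V minus equity. sigma is the volatility of firm
    value, the input whose observability is at issue."""
    d1 = (log(firm_value / face) + (rate + sigma**2 / 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    equity = firm_value * N.cdf(d1) - face * exp(-rate * t) * N.cdf(d2)
    debt = firm_value - equity
    return -log(debt / face) / t  # continuously compounded promised yield

# A more volatile firm must promise a higher yield on its debt, and
# risky debt always yields at least the riskless rate.
```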

    Now that is still consistent with what you say about it being a theoretical model, but in the introduction he cites Black and Scholes and says of their option pricing model that they “present a complete general equilibrium theory of option pricing which is particularly attractive because the final formula is a function of ‘observable’ variables. Therefore the model is subject to direct empirical tests …”. (Download available from http://www.pims.math.ca/science/2006/06ssfme/merton.pdf)

    The issue is that the major variable in the Black/Scholes and Merton models, variance, is actually not observable, which is where my criticism of invalid empiricism comes in.

    In fact that is why we have the concept of implied volatility, where the function is assumed to be correct and the variance is then reversed out from the observed pricing. This is obviously not a new criticism, but I think that it is the acceptance and continued teaching of these models that has led to wider acceptance of the presumption that uncertainty can actually be quantified, which has now led to things like Basel II.
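
    [The “reversing out” can be made concrete with a short sketch: price a call with the Black-Scholes formula, then recover the volatility from an observed price by bisection, taking the formula itself on faith. The inputs are illustrative only.]

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal

def bs_call(spot, strike, rate, t, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(spot / strike) + (rate + sigma**2 / 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return spot * N.cdf(d1) - strike * exp(-rate * t) * N.cdf(d2)

def implied_vol(price, spot, strike, rate, t, lo=1e-6, hi=5.0):
    """Reverse the unobservable volatility out of an observed price by
    bisection -- which only works if the formula is assumed correct.
    Valid because the call price is increasing in sigma."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if bs_call(spot, strike, rate, t, mid) < price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

    [The circularity is visible in the code: `implied_vol` has no information about volatility other than the model it inverts.]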

  10. Ernestine Gross
    January 31st, 2008 at 17:18 | #10

    Will, thank you for your replies and in particular for the Basel II reference. It looks more concise than any of the material I’ve pulled down on this topic and this is a big help for someone like me who doesn’t like reading about the procedural matters.

    V. Hayek is known to be a persuasive writer (he started in law). The lecture you referenced was given in 1974. I developed an aversion to this kind of literature during my undergraduate days. But I can see why the title is relevant to your interests.

    As for Merton’s 1974 paper, I see no need to revise my earlier comments on Finance models (characterizations of special cases of the solutions to general equilibrium models) only because Merton writes that the Black and Scholes options pricing model is a ‘complete general equilibrium model’. It is not. However, I must mention that my comments are written from the perspective of about 1990 rather than 1974.

    Perhaps physicists should be consulted on the possibility of humans being able to observe a potentially large set of instantaneous variances in an economy where trading of financial securities (called assets in the Merton model) takes place in discrete but varying time intervals. I observe that computing systems can’t handle the volume of trade in financial securities resulting from variations in individuals’ risk assessment. However, it is exactly at such times when the special assumption in Merton’s model should be tested (within the narrow framework of Finance) – no? And this is only one problem on the path from finance theory models to finance in practice.

    Nevertheless, Merton’s continuous time model is a particularly interesting one because ‘continuous time models’ are still the only ones which deal with risky debt – as far as I know. I don’t buy the argument that the continuous time trading assumption is ‘only an approximation’. It is crucial.

    ‘Quantification of uncertainty’ is, I believe, a separate and much broader question. Perhaps JQ is opening up another related thread in the near future and, assuming I have read the Basel II reference by then, we might be able to continue our conversation.
