First up, sincere apologies to the organizers and attendees of the Milken Global Forum, in Los Angeles, where I was due to appear this afternoon at a session about economic models of risk. I was looking forward to engaging the other panelists, who included Nobel laureate Myron Scholes, of “Black-Scholes” fame; Colin Camerer, a Caltech behavioral economist I’ve written about in the past; and Aaron Brown, a former Wall Street risk modeler. Unfortunately, my early-morning flight from Ottawa, Canada, where I had another speaking engagement last night, was cancelled because of mechanical problems. The best alternative Air Canada could offer was a connecting flight scheduled to touch down at LAX after the panel session had begun. After hanging around Ottawa airport all morning vainly trying to find a more direct route, I gave up and flew back to New York. People say this sort of thing happens all the time; in my experience it doesn’t. But on this occasion it did.
Anyway, here is roughly what I would have said had I made it to L.A. Adhering to the old maxim that no audience can be expected to soak up more than three points per panelist, I was going to make the following points. (Also, thanks to Glenn Yago, the head of the Milken Institute’s economic program, for issuing the invitation to me.)
1) The risk models that were commonly used on Wall Street failed abysmally. Not only did they fail to protect their users from a bad outcome, they made such an outcome far more likely. In short, the risk models added to systemic risk.
2) In part, this was a failure of statistical modeling. The techniques that the risk modelers used weren’t up to the task they set for themselves. But it was also a problem of how the models were used. Rather than looking on them as a useful but limited tool, banks and other institutions used them as a substitute for proper risk management, and as a justification for taking on more leverage and more risk. This explains how the risk models made the entire system more risky.
3) At its root, the problem is conceptual. The financial market isn’t a deterministic system underpinned by laws of nature, and attempts to treat it like one—such as contemporary risk-management techniques—are destined to backfire.
I won’t spend long on the model failures, which have been well documented in my own book, “How Markets Fail,” and in many others. In the past ten or fifteen years, many banks and investment banks came to rely heavily on Value-at-Risk models, which supposedly gave them a daily dollar figure of the amount of risk they were taking on. Model-based risk management seduced the regulators, too. Under the Basel system of international banking regulation, big financial institutions were allowed to use their own risk models in setting their capital reserves. Alan Greenspan and many other policymakers insisted that the development of “scientific risk management” had made the system a lot safer.
What went wrong? It is now commonly said that the reason the models, especially the Value-at-Risk models, came a cropper is that they didn’t account for the possibility of “fat tails.” This is Nassim Taleb’s “black swan” critique, which goes back to Benoit Mandelbrot’s work in the early nineteen-sixties. Movements in financial markets don’t follow the normal (or Gaussian) distribution, and any risk-management strategy that relies on such an assumption is destined to greatly underestimate the real risks of any given trading strategy.
Taleb is basically right, but the failure wasn’t as simple as it appears. In the risk-management community, the common response to the fat-tails critique is to say: come on, we never believed returns were normally distributed: we all knew they were leptokurtic. (A leptokurtic curve has a higher peak and fatter tails than a normal curve.) We were also perfectly aware that volatility tends to run in waves—this is the property of “volatility clustering”—and our risk models took due account of that.
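The gap matters more than that response suggests. Here is a toy calculation in Python, with every number invented for illustration (a hypothetical hundred-million-dollar book, an assumed two-per-cent daily volatility): a Gaussian model and a fat-tailed Student’s-t model with exactly the same variance imply very different once-in-a-thousand-days losses.

```python
# Toy comparison (all numbers invented): 99.9% one-day VaR under a normal
# model versus a fat-tailed Student's t rescaled to the same variance.
import random
from statistics import NormalDist

random.seed(7)
position = 100_000_000   # hypothetical $100m book
sigma = 0.02             # assumed 2% daily volatility

# Gaussian 99.9% VaR has a closed form: about 3.09 standard deviations.
var_normal = -NormalDist(0, sigma).inv_cdf(0.001) * position

# Student's t with 4 degrees of freedom, rescaled so its variance is sigma^2.
# A t variate is a standard normal divided by sqrt(chi-squared(df) / df).
df = 4
scale = sigma / (df / (df - 2)) ** 0.5
draws = sorted(
    scale * random.gauss(0, 1) / (random.gammavariate(df / 2, 2) / df) ** 0.5
    for _ in range(200_000)
)
var_t = -draws[int(0.001 * len(draws))] * position  # empirical 0.1% quantile

print(f"99.9% one-day VaR, thin-tailed normal: ${var_normal:,.0f}")
print(f"99.9% one-day VaR, fat-tailed t(4):    ${var_t:,.0f}")
```

Same average volatility, same “leptokurtosis-aware” variance, yet the fat-tailed model’s worst-case figure comes out roughly sixty per cent larger; and real return series are arguably worse behaved than a t distribution.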
On one level, the risk modelers have a point. Following the pioneering work of N.Y.U.’s Robert Engle during the nineteen-eighties, a new econometric technique was invented to deal with this problem: GARCH (if you must ask, the acronym stands for Generalized Autoregressive Conditional Heteroskedasticity). When the risk modelers plugged various financial time series into their computers to spit out a figure for Value-at-Risk, they generally used GARCH or one of its variants (of which there are now many) rather than ordinary least squares (O.L.S.).
Unfortunately, GARCH doesn’t work very well either, especially when it is run on relatively short time series—up to two or three years, say—which was the standard practice on Wall Street. GARCH-based Value-at-Risk estimates are generally a bit higher than O.L.S.-based estimates, but the difference isn’t very great. Consequently, risk-management strategies that relied on the fancy new time-series techniques made the same error that the less sophisticated techniques did: they greatly underestimated the possibility of extreme outcomes.
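For the curious, the machinery is simple. A minimal GARCH(1,1) filter, with parameter values I have made up to be of textbook magnitude rather than estimated from any real series, looks like this: tomorrow’s variance is a weighted blend of a long-run level, yesterday’s squared return, and yesterday’s variance.

```python
# A minimal GARCH(1,1) volatility filter (parameters and returns invented).
from statistics import NormalDist

omega, alpha, beta = 0.000002, 0.10, 0.88   # textbook-magnitude parameters
returns = [0.001, -0.002, 0.015, -0.030, 0.025, -0.020]  # made-up daily returns

var_t = omega / (1 - alpha - beta)   # start at the long-run variance
for r in returns:
    var_t = omega + alpha * r**2 + beta * var_t   # the GARCH(1,1) recursion

sigma = var_t ** 0.5
z99 = -NormalDist().inv_cdf(0.01)    # about 2.33 sigma for a 99% one-day VaR
print(f"conditional daily vol: {sigma:.2%}, 99% one-day VaR: {z99 * sigma:.2%} of the book")
```

Note the residual problem: the shocks feeding the recursion are still usually assumed to be normal, so the conditional tails remain thin; and fitting the parameters to two or three calm years produces a complacently low long-run level.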
If that was the only drawback with these risk models, I wouldn’t be so critical of them. After all, on a day-to-day basis, they do provide their users with some information that isn’t without value. Effectively, the models say this: if today is roughly the same as yesterday (or the average yesterday over the past few months or year), you are highly unlikely to lose more than X in today’s trading session. As I said, that’s worthwhile information to have at hand, especially for a trading desk.
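In its simplest “historical simulation” form, the calculation behind that statement is almost embarrassingly direct. A sketch, with a made-up year of desk P&L:

```python
# Historical-simulation VaR in miniature (the P&L series is invented).
import random

random.seed(42)
# A pretend trailing year of daily P&L for one desk, in dollars.
pnl = sorted(random.gauss(50_000, 400_000) for _ in range(250))

var_99 = -pnl[int(0.01 * len(pnl))]   # the roughly 1st-percentile loss
print(f"99% one-day historical VaR: ${var_99:,.0f}")
```

Read literally, the number says only this: if tomorrow is drawn from the same distribution as the past two hundred and fifty days, you will lose more than this figure about one day in a hundred. Everything hangs on that “if.”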
To repeat myself, the problem wasn’t so much with the models themselves, but with how they were utilized. Rather than being used to discipline individual traders and trading desks, they were used to justify bigger and bigger speculative positions, and more and more leverage. A couple of years ago, when I was doing some reporting about the causes of the subprime collapse, I repeatedly heard the same lament from senior executives at Wall Street firms: nobody in the market-risk or credit department ever warned us that we were getting dangerously exposed to subprime. To the contrary, the risk guys told us it was a low-risk business.
If you go to the S.E.C. Web site and examine the historical financial filings of big firms like Lehman Brothers and Morgan Stanley and Goldman Sachs and Merrill Lynch, you will see two remarkable features over the period from 1995, say, to 2007: a tripling or quadrupling of balance sheets, and a similar increase in leverage (total assets/total equity). Now, the increasing popularity of Value-at-Risk models wasn’t the only factor driving these developments, but it was one of them, surely. Senior managers were comfortable taking on more risk because their risk modelers told them it was safe to do so! Just like many policymakers, the Wall Street C.E.O.s had effectively joined a cult of economic modelers, and whatever the modelers told them they believed.
Finally, to return to my third point, the problem isn’t merely a technical, statistical issue. It is also a philosophical one, and it goes to the heart of what we can expect from orthodox financial-risk modeling, which isn’t very much.
If you think about it for a moment, the risk models aren’t really indicators of risk at all; in fact, they often act as reverse indicators. When they say the outlook is benign, that is the time to get worried. Why do I say that? Well, let’s go back to history and think about the leading indicators of financial crises and what they are. In almost every case, major crises are preceded by periods of euphoria, during which people become overconfident, and markets behave very benignly. The evolution of the subprime market in the period from 1999 to 2007 fits this pattern perfectly. If you run a Value-at-Risk model on a sample taken from such a period, it will always tell you that the risks of a big setback are falling rather than increasing. In effect, if not intention, the model converts “disaster myopia”—the human tendency to discount previous highly negative experiences—into a dollar figure, thereby giving it a ring of authenticity. Armed with this false knowledge, individuals and firms act more and more recklessly.
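The perversity is easy to demonstrate on simulated data. In the sketch below, where every number is invented, volatility drifts downward for three years, as it did during the subprime boom, and a trailing-window historical VaR dutifully reports ever-lower risk:

```python
# Stylized demonstration of the reverse-indicator problem (data is simulated).
import random

random.seed(1)
returns = []
for day in range(750):                  # three simulated "years" of daily returns
    vol = 0.02 * (1 - 0.6 * day / 750)  # volatility drifts down as euphoria builds
    returns.append(random.gauss(0.0005, vol))

def hist_var(window):                   # 99% historical VaR of a return window
    w = sorted(window)
    return -w[int(0.01 * len(w))]

var_year1 = hist_var(returns[0:250])
var_year3 = hist_var(returns[500:750])
print(f"99% VaR from the year-1 window: {var_year1:.2%}")
print(f"99% VaR from the year-3 window: {var_year3:.2%}")
```

The model is working exactly as designed; it is simply summarizing the recent past at the moment when the recent past is least informative.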
It is here that the allusion to classical physics, which underpins much of modern finance, breaks down. In a Newtonian system, such as a gas in a heated room, the molecules don’t suddenly line up and march around in lockstep. But on Wall Street, the molecules—the individual traders and investors—sometimes do exactly that, with market prices acting as the coördination mechanism. Once such a system moves away from equilibrium, the (privately) rational choices of individual decision makers tend to accentuate disturbances rather than dampen them. And the result, all too often, is boom and bust.
So what’s the solution? That is a topic for another panel (or post). But it must ultimately come down to exercising human judgment rather than relying blindly on statistical models. Such contraptions have their place, but they should never again be elevated to the position they occupied during the past decade. In this area, as in others, it is time to put an end to the cult of economic modeling.