Wall Street's Random Walk Doesn't Exist

Traditionally, risk is expressed as the statistical probability of losing money. Since there is no way to tell ahead of time what will happen, modern finance uses the immediate past to describe that risk. The common tools are measures like standard deviation.

Conventional thinking says that if standard deviation is high, describing a "volatile" market or stock, then the chance of losing money is also high. If a stock's movements are large relative to its average, it follows that the stock could easily suffer a significant drop. Conversely, low measures of volatility are thought to signify less risk.
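
A minimal sketch of that conventional calculation, using hypothetical weekly returns and a helper name of my own choosing (none of this comes from the article itself), shows how a higher standard deviation translates directly into a higher risk score:

```python
import numpy as np

def trailing_volatility(weekly_returns):
    """Conventional risk score: the standard deviation of past weekly
    returns, annualized by the square root of 52 weeks."""
    return np.std(weekly_returns, ddof=1) * np.sqrt(52)

# Hypothetical weekly percentage returns for two stocks.
quiet_stock = np.array([0.2, -0.1, 0.3, 0.1, -0.2, 0.15, 0.05, -0.1])
wild_stock = np.array([4.0, -6.5, 5.2, -3.8, 7.1, -5.5, 2.9, -4.4])

print(trailing_volatility(quiet_stock))  # low score -> judged "low risk"
print(trailing_volatility(wild_stock))   # high score -> judged "high risk"
```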

Is this really the case?

We can compare two very different market periods to see if the common measures of risk/volatility are sufficient. Using the S&P 500 as a market proxy, in Time Period 1 the one-year standard deviation of weekly returns is 54 index points and the VIX index (a measure of implied volatility derived from stock options) is 16.67.

Time Period 2 shows a much more volatile market: the one-year standard deviation of weekly returns is 234 index points and the VIX is far higher at 49.68.

By the traditional measures of risk, Time Period 1 is far less risky than Time Period 2. The problem is that Time Period 1 is October 8, 2007, and Time Period 2 is March 9, 2009.
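
For readers who want to see how such figures are produced, here is a minimal sketch, assuming a pandas Series of weekly S&P 500 closing levels indexed by date (the function name is my own, and the exact output depends on the data source, so it will not reproduce the figures above to the decimal):

```python
import pandas as pd

def one_year_point_stdev(prices: pd.Series, as_of: str) -> float:
    """Standard deviation of weekly index-point changes over the 52
    weeks ending at `as_of` (a purely backward-looking window)."""
    window = prices.loc[:as_of].tail(53)  # 53 closes -> 52 weekly changes
    return window.diff().dropna().std()

# Hypothetical usage, assuming `sp500` holds weekly closes:
# print(one_year_point_stdev(sp500, "2007-10-08"))  # calm trailing year
# print(one_year_point_stdev(sp500, "2009-03-09"))  # panicked trailing year
```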

Traditional risk measures are backward-looking and therefore assume that the future will look like the past. In the most common statistical expressions, the problem is far worse: it is assumed that the near future will look like the recent past. In both time periods above, the near future looked nothing like the recent past.

It is safe to conclude that statistical measures of probability have a major problem describing actual risk. Volatility is too static to account for dynamic changes in individual securities or markets. Even more advanced measures of risk, like skew or correlations, still exhibit recency bias. This kind of static forecasting leads to straight-line extrapolation, missing the key inflection points that describe true risk. In fact, statistical measures of risk do not show true risk until after it occurs (thus the high risk measures at the bottom of the market in March 2009).
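
That lag can be demonstrated even on made-up data. The simulation below is my own construction, not anything from the article: it splices a quiet year onto a sudden crash and asks when a trailing 52-week standard deviation finally registers the danger. The answer is only after the decline has already happened.

```python
import numpy as np

rng = np.random.default_rng(0)

# 52 quiet weeks, then a 12-week crash, then 52 weeks of aftermath.
quiet = rng.normal(0.1, 1.0, 52)   # small weekly % moves
crash = rng.normal(-4.0, 5.0, 12)  # large, violent weekly % moves
after = rng.normal(0.0, 2.0, 52)
returns = np.concatenate([quiet, crash, after])

# Trailing 52-week standard deviation at the end of each week
# (starting at week 2, since a stdev needs at least two points).
trailing = np.array([returns[max(0, t - 52):t].std(ddof=1)
                     for t in range(2, len(returns) + 1)])

print("crash begins at week 53")
print("trailing stdev peaks at week", trailing.argmax() + 2)
# The measured risk peaks well after week 53: the statistic only
# describes the crash once it is already in the rearview mirror.
```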

Every statistical measure is a creature of the time series of data that precedes the point of measurement. Even the most complex market and economic models embed this fatal flaw. The results of the most elegant and complex equations will always be behind the curve as long as they are based on time series.

The Federal Reserve's operation of monetary policy is a perfect example. In our February 2011 Special Report I described it this way:

"Current models are based on observations of the past fifty years and are therefore limited to the experiences of the last fifty years. In truth, they are heavily weighted toward experiences during the bubble periods after 1980. When events unfold similar to those of the 1930's, the models judge them as extremely unlikely, even impossible (as in "housing prices will never decline because they never have"). We called this shortcoming a lack of imagination since policy that is wedded to time series will never be able to make the unscientific leap to actual human interactions, such as panic selling in the repo market. That would be extremely difficult to model using time series data since the repo market's marginal impact on credit production has grown exponentially in just the past six years."

The Fed itself agrees with these conclusions. In a January 2011 paper titled "Have We Underestimated the Likelihood and Severity of Zero Lower Bound Events?", the authors pick apart the blatant mistakes of the Fed's intricate math. They conclude that they did, surprise, underestimate the likelihood and severity of the events that were unfolding, particularly in 2008. The reason: the time series on which all of their models were based showed nothing like what was unfolding, so it was judged to be impossible.

I encourage everyone to read this paper, especially the charts in Table 2, page 38, to see just how far the Fed's expectations were from reality (they were five to seven standard deviations off). When Chairman Bernanke said in May 2008 that the worst was behind us, he actually and tragically believed it because his models told him so. If the scale of these defects were more commonly known, there would be far less confidence in the Fed's designs and implementations of various monetary policies.
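
To put five to seven standard deviations in perspective: under the normal-distribution assumptions such models typically rest on, the one-sided tail probabilities are vanishingly small. A quick check (a sketch of mine, not a calculation from the paper):

```python
from scipy.stats import norm

for sigma in (3, 5, 7):
    p = norm.sf(sigma)  # one-sided tail probability beyond `sigma`
    print(f"{sigma}-sigma event: p = {p:.1e}, roughly 1 in {1 / p:,.0f}")

# 5-sigma: p ~ 2.9e-07 (about 1 in 3.5 million)
# 7-sigma: p ~ 1.3e-12 (about 1 in 780 billion)
# Under these assumptions, 2008 simply "could not" happen.
```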

Even though the Fed now has a more diverse time series to incorporate into its models, there has been little improvement in forecasting ability. That same January 2011 paper recalculated the monetary and economic assumptions of 2008 using the new data subsequently obtained during and after the panic. Their "improved" models, for the most part, still predicted only a "nontrivial" (less than 1%) chance that monetary policy would hit the lower bound (zero interest rates) and stay there for eight quarters. The "best" statistical model "improved" from a 1% probability to a 3% to 5% chance of staying at the zero lower bound for two years. The best these improved models could do was assign a 1 in 20 chance to something that had actually happened.

Where does this leave us in 2011?

Despite the woeful track record of statistical modeling, it still forms the basis of most people's investment and economic risk assessments. Market standard deviations and risk measures are again relatively low and investor complacency has returned. The economic recovery is still assumed to be in progress and inflation is "transitory" because the math says monetary policy works. There is no chance that the dollar gets replaced as a reserve currency because it has never happened before. The global financial and trade system will not experience a dramatic re-adjustment because it is not a part of any time series data.

In the end, statistics breaks down because it assumes that what happened yesterday has no bearing on today. Random walk statistical constructs, such as standard deviation, assume that the probability of seeing eleven straight down days for stocks is extremely low because daily returns, like the flips of a coin, are unrelated to each other, with any streak expected to revert toward the mean. The reality is that ten straight down days actually make the eleventh more likely; psychology does not exist in an academic vacuum. Commodity prices do breed inflation because they become embedded in expectations that turn into action, such as inventory overproduction. There is no mathematical expression for emotion.
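
The coin-flip arithmetic behind that claim is worth making explicit. If daily directions really were independent with even odds, the chance of eleven straight down days would be 0.5^11, about one in two thousand:

```python
# Probability of 11 consecutive down days if each day were an
# independent fair coin flip, as random-walk constructs assume.
p_run = 0.5 ** 11
print(p_run)  # 0.00048828125, about 1 in 2,048
# The article's point: after ten straight down days, real markets are
# not flipping a fresh coin; fear feeds on itself.
```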

The random walk on Wall Street does not exist. Despite all the sophisticated measures of risk and forecasting, far too many investors are worse off after the decade of the 2000s, with stocks at the same level they were in 1999. Even the best hedge funds, the fullest manifestation of quantification, were completely surprised by the 2008 debacle. The true expressions of risk are the statistically invisible inflection points that populate every stock chart and economic series alike.

As the economic recovery slows again, we are again told by the Fed and mainstream economists not to worry. Behind those reassurances lies the embedded weakness of adherence to modern arithmetic and the ironic, illogical hubris it breeds. As long as mathematics occupies the core functions of risk estimation, model users will always be several steps behind. Inflections are functions of emotion and confidence, well beyond the grasp of the best, most advanced statistical creations.

Jeffrey Snider is the Chief Investment Strategist of Alhambra Investment Partners, a registered investment advisor. 
