An intellectual virus, a bubble, or both?

The recent phase of market volatility precipitated by the February 2018 ‘flash crash’ has morphed into fears of a growth slowdown, phoney Trumpian trade wars, and real wars. But the initial crash itself may be more significant. There have been only three phases in the last 25 years when the S&P 500 has moved this rapidly in this short a period of time, a fact drawn to my attention by my perceptive colleague at M&G, Marc Beckenstrater (see chart). Rapid moves of this magnitude have historically coincided with genuine events: the Asian crisis, the tech bust, and the GFC.

The initiation of this phase in February is interesting because it occurred in the absence of obvious news. Intriguingly, it also comes after a similar ‘newsless’ panic in August 2015, attributed at the time to China’s trivial 2% devaluation. So what is going on?

The volatility virus

I often get asked what the biggest bubble out there is, or the most significant risk. I usually point to ‘volatility’. Not low volatility or high volatility. But the very concept of volatility. VAR analysis, volatility-based measures of active risk, volatility targeting, risk-parity – these fads have hijacked the collective consciousness.

Thirty years ago, only a small minority of professional investors or commentators spoke about volatility (the Vix index itself was launched in 1993). To the extent that the majority had much interest in the subject, it was akin to Warren Buffett’s famous maxim about the dangers of following any investment advice that involves the use of the Greek alphabet.

Despite what was happening in finance departments throughout academia, practitioners thought very differently about risk. Almost all the great investors highlighted swings in market prices as sources of opportunity. Ben Graham famously described the manic-depressive Mr Market as a servant to the intelligent investor with a long time horizon. Risk was definitively not defined by the variance of prices, or even extreme moves, but by permanent or sustained losses. Keynes described optimal portfolio construction in these terms in the 1930s: the key to good investing was to avoid what he termed a ‘stumer’ – a stock whose value was permanently destroyed.

Today, volatility-based frameworks are omnipresent. Portfolio risk is measured using Value-at-Risk (VAR) models, which are attempts to capture the volatility of a portfolio, and specify probable losses with varying degrees of statistical confidence, subject to assumptions about correlations and return distributions. Investors everywhere are being encouraged to define their ‘risk profile’. This translates into constructing portfolios which attempt to target levels of volatility – the risk-averse elderly are encouraged to invest in less volatile funds, the risk-taking young professionals are encouraged to move higher up the volatility spectrum.
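The mechanics of such volatility targeting are simple enough to sketch in a few lines. This is an illustrative rule only – the function name and parameters are hypothetical, not any fund’s actual implementation:

```python
def target_weight(target_vol, realised_vol, max_leverage=1.0):
    """Volatility targeting: hold target/realised units of the risky asset,
    capped at max_leverage, so measured portfolio volatility tracks the target."""
    if realised_vol <= 0:
        return max_leverage
    return min(target_vol / realised_vol, max_leverage)

# When measured volatility doubles from 10% to 20%, the rule halves the
# equity weight; every fund running the same rule sells at the same time.
print(target_weight(0.10, 0.20))   # 0.5
print(target_weight(0.10, 0.05))   # 1.0 (capped)
```

The rule mechanically sells risk when measured volatility rises and buys it back when volatility falls, whatever the underlying fundamentals.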

Every corner of the professional investment industry globally is obsessed with volatility. Risk managers base their processes around volatility, institutional investors want volatility targets, and the private sector and the regulator alike use volatility as the lens through which everything is viewed.

Robin Wigglesworth, at the FT, has written a fascinating history of volatility and its use in financial markets. The reasons this virus has propagated are understandable. Volatility is easy to measure: it is simply the standard deviation of a security’s returns. And measurement is the holy grail. If you can measure, you can compute. If you can compute, there is limitless scope for ‘sophistication’ and complexity. Pinning risk down to a simple, measurable mathematical formula has obvious appeal.
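To see just how trivial the measurement is, here is a sketch – assuming daily closing prices and the conventional annualisation by the square root of 252 trading days (an illustrative function, not a reference implementation):

```python
import math

def trailing_volatility(prices, periods_per_year=252):
    """Annualised volatility: the standard deviation of daily log returns,
    scaled by the square root of the number of trading periods per year."""
    returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(returns) / len(returns)
    variance = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return math.sqrt(variance * periods_per_year)

# It is the returns, not the price level, that are measured:
# a flat series at any price level has zero volatility.
print(trailing_volatility([100, 100, 100, 100]))          # 0.0
print(trailing_volatility([100, 101, 99, 102, 100, 103]))
```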

The specific history of VAR analysis is also revealing. Value-at-risk models were designed by banks in the 1990s to statistically proxy how much they might lose across their trading books from one day to the next. For a large bank like JP Morgan, which takes risk to facilitate client trading and to express the short-term views of individual traders, attempting to model this exposure statistically makes sense. Some statistical properties of volatility are relatively robust. Volatility tends to cluster: yesterday’s volatility is the best estimate of tomorrow’s. And in statistics, the more data the better. There is lots of daily data. So for a risk manager at a bank with a one-day or at most one-month time horizon and thousands of market exposures, relying on a three-month sample period of daily volatility to proxy the probability of short-term loss is reasonable. Non-normal shocks will still occur, but the exercise is a valid starting point. Furthermore, investment bank risk managers, who are subject to capital requirements and mark-to-market accounting, have very short time horizons when it comes to mark-to-market loss tolerance.
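A stripped-down parametric version of the calculation, with hypothetical numbers, shows how completely the estimate leans on the trailing volatility measure:

```python
import statistics

def one_day_var(daily_returns, portfolio_value, confidence=0.99):
    """Parametric one-day Value-at-Risk: the loss expected to be exceeded
    only (1 - confidence) of the time, assuming normally distributed returns."""
    z = {0.95: 1.645, 0.99: 2.326}[confidence]   # standard normal quantiles
    sigma = statistics.stdev(daily_returns)      # sample volatility of daily returns
    return z * sigma * portfolio_value

# Roughly three months (63 trading days) of returns with ~1% daily volatility:
returns = [0.01, -0.01] * 31 + [0.0]
print(round(one_day_var(returns, 1_000_000)))    # roughly 23,000: the 99% one-day VaR
```

Everything hangs on sigma: double the trailing volatility and the stated ‘risk’ doubles, whatever has happened to the underlying businesses.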

The major problems occur when this model is transferred to users with totally different objectives. Mutual fund investors don’t have capital requirements and are rarely applying leverage to their portfolios. Daily price fluctuations are noise in the context of their objectives – which should be focused on long-term capital accumulation, not high-frequency price moves. Why on earth should most savers care about daily price moves in the stock market? And yet most VAR models being deployed across the industry effortlessly make this analytical misrepresentation. Statistical samples can be expanded by using higher frequencies, but at the expense of relevance.

The virus self-propagates

Volatility targeting will beget volatility. The first of three significant problems with this intellectual virus typically goes unrecognised: investor behaviour is becoming correlated. That’s jargon for saying more people are behaving in exactly the same way. The correlation of beliefs and behaviour is one of the most compelling explanations of why asset prices frequently move by far more than is warranted by changes in underlying fundamental news. This has been formalised in the work of Stanford University’s Mordecai Kurz, perhaps the most underrated innovator in financial theory of the last 50 years.

This gets close to the heart of newsless flash crashes. If volatility itself can force many, even most, investors to behave in the same way, trivial news can trigger rapid and exaggerated price responses.

Think of it like this: if investors are different – with different objectives, preferences, and beliefs about the future – there is a higher probability of orderly trade; buyers will easily find sellers and vice versa. But if behaviour is correlated, and everyone attempts to move the same way, it will require large price moves for the market to clear. Correlated behaviour accentuates price moves.

Measurement and vast quantities of data are also a recipe for pseudo-science and over-confidence. Technology has played an important role as a carrier of this virus. Technology creates an incentive to quantify. That is the fundamental appeal of a statistical, price-based measure of risk. We have vast quantities of data, we can compare all portfolios, and we can apply limitless computing power. So it is not a surprise that the volatility virus has spread to infect all corners of investment markets and is a global phenomenon. Technology amplifies our hardwiring – we can copy and compare ourselves to everyone else. The greatest statisticians in finance, from Keynes, to Markowitz, to Fama, to Taleb, emphasise how little we know. Statisticians in finance should be extremely humble.


The final and most important point is that volatility is not risk. Certainly, measuring volatility using daily prices and three-month trailing sample periods – which underpins the famous Vix index – should be no one’s measure of risk, other than a leveraged Vix trader’s. Investors are not supposed to have daily time horizons, and three months is a spurious sample period for anyone with a three- to five-year investment horizon. Most investors should be thinking at least in terms of five-year horizons, if not decades. The emphasis on volatility as a proxy for risk is significantly based on the history of academic research and the practice of banks monitoring complex portfolios of exposure across many markets and geographies, as Wigglesworth outlines in detail. But the original sin may have been committed by the great Chicago economist Frank Knight, writing in the 1920s. Knight famously made a distinction between ‘risk’ and ‘uncertainty’: ‘risk’ referred to measurable probabilities, while uncertainty was unmeasurable. Economists have taken this distinction as given and definitive, but in reality it is eccentric. Some phenomena have measurable probabilities, some don’t. But this is hardly the end of the discussion of risk – it is not even the beginning. The semantics of ‘risk’ as used in our language and discourse suggest far richer concerns. Far richer than measuring standard deviations of security prices.

Bubbles

Is this a bubble? It is a bubble of sorts. It may in part explain why cash rates globally are so low. Cash has zero nominal volatility. It also has a near-guaranteed real loss across the entire developed world. Is something safe if it has no volatility but always loses money? It can be in a VAR model.
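A toy calculation, with made-up numbers, makes the point:

```python
def parametric_var(volatility, value, z=2.326):
    # 99% one-day parametric VaR scales with measured volatility alone.
    return z * volatility * value

print(parametric_var(0.00, 1_000_000))                # 0.0 for cash: 'riskless'
print(parametric_var(0.15 / 252 ** 0.5, 1_000_000))   # equities look 'dangerous'
# Yet at 2% inflation the 'riskless' cash loses about 20,000 of real value
# a year with near certainty; that loss is invisible to the volatility lens.
```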

The February flash crash may indicate the size of this bubble. Gavin Jackson of the Financial Times reminds us that it is right to be sceptical that anything genuinely novel is at play when markets crash, as does Clifford Asness. The stock market’s propensity to crash is as old as the stock market.

But my hunch, and it can be little more than that, is that we are observing something more pernicious. Endogenous instability is rising, and volatility is at the core. Volatility has virus-like properties. It started as the domain of a small specialist group of quants, and it has spread to infect everyone. It propagates because we can compare the volatility of diverse funds, we can measure it objectively, and – as is frequently pointed out to me – what is the alternative? (There are many, but that’s another story.) Technology is not constraining behavioural biases, it is amplifying them. Wild swings in markets have always been defined by myopia. It explains why markets trend higher, and often move up exponentially. It also explains crashes. The most well-established observation from behavioural finance is the concept of myopic loss aversion. This is jargon for what thoughtful market participants have always observed – that humans have a very strong propensity to sacrifice long-term returns to avoid the pain of short-term loss. All sound investment advice, at least since the onset of capitalism, if not prior, recognises the returns to patience.

It is a great irony that the volatility bubble is now creating volatility. How will this bubble burst, if it ever does?

Note: this is a revised version; the original was posted here.

About The Author

Eric Lonergan is a macro hedge fund manager, economist, and writer. His most recent book is Supercharge Me, co-authored with Corinne Sawers. He is also the author of the international bestseller Angrynomics, co-written with Mark Blyth and published by Agenda, which was listed among the Financial Times must-reads for Summer 2020. Prior to Angrynomics, he wrote Money (2nd ed), published by Routledge. He has written for Foreign Affairs, the Financial Times, and The Economist, and advises governments and policymakers. He first advocated expanding the tools of central banks to include cash transfers to households in the Financial Times in 2002. In December 2008, he advocated the policy as the most efficient way out of recession post-financial crisis, contributing to a growing debate over the need for ‘helicopter money’.
