Damian Handzy: Three ways to model liquidity risk


Source: Damian Handzy in Hedge Funds Review – April 11, 2016

Deriving synthetic bid/ask spreads offers a new approach to a thorny problem

Liquidity risk has received a fair amount of attention recently, with much discussion of how it is defined and thought about, but little revealed about how it can be effectively measured. Portfolio and risk managers describe officially sanctioned procedures for the priority treatment of assets during liquidity events, such as rankings based on unrealised profit or on estimated time to liquidity. But most of the liquidity risk measures discussed suffer from a serious flaw.


Liquidity paradox

Traditional liquidity risk measures are based on data or events during normal liquidity periods: when liquidity is not only present but also robust. For example, volume-based liquidity estimates can be made for individual stocks when they trade in normal conditions. If I own 1% of the outstanding issuance of a stock and 10% of the outstanding shares are traded on an average day, then I should be able to liquidate my holding in one-tenth of a trading day. Similarly, if I own 10% of the outstanding shares and only 1% of them trade on a typical day, it should take me 10 days to liquidate. That is, if things continue to behave the way they have in the recent past.

But this analysis says nothing about how long it might take me to liquidate a book if volumes decrease, the likelihood of any such decrease, or the amount by which a trader might move the market by selling shares. Worse still, it can only be done for securities that have a reported volume – typically, stocks. Bonds and other non-exchange traded securities can’t be analysed this way.

Basing liquidity measures on quantities that only hold during normal market conditions misses the point of risk management: understanding what may happen during a crisis.

Fortunately, several new approaches have been announced recently and we may well be witnessing the emergence of useful liquidity measures.


Survey-based measures

Some firms have taken to using surveys of market participants to produce an estimate of trader sentiment about liquidity issues. One benefit of this approach is that it draws on evolving market expertise, estimating liquidity as expressed by the people who actually trade the securities in question. If a firm has sufficient access to market participants, the coverage can be quite wide – even extending to thinly traded issues.

One issue with a survey-based approach, though, is that participants may report biased numbers: depending on which side of a potential transaction a participant sits, they may quote numbers advantageous to themselves. Furthermore, there are only so many market participants for each security, resulting in a diminishing number of data points for the most thinly traded instruments. Finally, firms pursuing this approach will likely find themselves exerting significant manual effort to collect and clean the data – effort that will lead to higher costs for consumers of this type of liquidity analysis.


Machine learning

Data providers have at their disposal massive amounts of information that might – or might not – be related to the liquidity risk of a given issuance. Rather than forming an explicit model of liquidity risk, some have chosen to employ machine-learning algorithms to exploit computers' ability to find patterns in the mass of market data they possess. Machine learning grew out of the field of artificial intelligence as a practical approach to identifying patterns as part of the so-called big data phenomenon.

While this approach may ultimately prove more practical and accurate than a survey-based approach, its main drawback is its black-box nature. The learning algorithm might find patterns between unrelated variables that have no causal connection even though they exhibit a statistical relationship.

Similar to the infamous over-fitting regression problem entertainingly demonstrated by the fact that 75% of the movement in the S&P 500 is 'explained' by the production of butter in Bangladesh,1 machine learning is prone to finding patterns that are spurious. David Leinweber, the author of the over-fitting paper, takes care to point out that the fit can be significantly improved by also including US cheese production and the sheep population in both Bangladesh and the US, resulting in an R-squared of 99%.

Because machine-learning models are inherently non-transparent, users might find them hard to trust.


Bid/ask spreads of the underlying drivers

Another approach that has merit is to focus on the bid/ask spreads of a security's underlying drivers. Rather than looking at the security's own bid/ask spread, which is often unavailable for thinly traded or illiquid securities, this method examines the spreads of widely available, liquid quantities and uses them to derive model spreads for the security of interest.

This model attempts to predict the amount of money a trader could lose to the difference between the bid and ask of each security. To compute this range of possible prices, it mimics what a market-maker would quote for a security by considering the trades they would need to put in place to hedge it.

By carefully recreating the market-maker's basket of trades, an analyst can use the bids and asks of the underlying liquid markets used for hedging to establish an estimate of the bid/ask spread of any security, liquid or illiquid. For example, the price of a fixed-coupon bond can be replicated using a Libor curve built from the interest-rate swap (IRS) curve, the credit default swap (CDS) curve of the issuer and the basis between the CDS and bond credit spreads.

Because this approach uses the bids and asks of the liquid drivers of risk, it can be employed for any security for which a theoretical pricing function is available regardless of the security’s volume or frequency of trading. Furthermore, those bid and ask values can be taken from any timeframe: to simulate losses in a crisis, one could use the bids and asks of IRS, credit and stocks from the global financial crisis.

Another refinement is to then apply heuristic haircuts to the value of the security, taking into account issuance size: a $500,000 issuance should carry a higher liquidity risk premium than a $5 million issuance from the same issuer, just as a micro-cap stock should carry a higher risk premium than a mid-cap stock.


Case study: Volkswagen

StatPro used the above bid/ask model to study Volkswagen bonds before and after the car company's admission of having doctored emissions testing results. The figure below shows the model's estimate of the liquidity risk loss for several bonds through this period. Three different sets of bid/ask spreads were used: recent markets, dubbed 'normal', along with bid/ask spreads from 'stressed' and 'highly stressed' timeframes during the financial crisis. The highly stressed scenario picks up signs of increased liquidity risk as VW begins to admit participation in the emissions scandal, while the current markets show no signal until VW's chief executive resigns.

[Figure: modelled liquidity risk loss for several Volkswagen bonds under the 'normal', 'stressed' and 'highly stressed' bid/ask spread scenarios]

Surprisingly, and somewhat disappointingly, it has taken a full nine years after the global financial crisis for the first serious and robust measures of liquidity risk to emerge. Whichever of these three approaches, or some yet-to-be-revealed method, becomes the standard, the industry has finally begun to incorporate measures of liquidity risk into risk management.


1 Leinweber, D J, 'Stupid Data Miner Tricks: Overfitting the S&P 500', The Journal of Investing, 2007

Damian Handzy is StatPro’s Global Head of Risk.

