Liquidity as a measurable risk factor
A comprehensive liquidity risk measurement is incomplete without accounting for current market conditions and the prevailing market capacity they reflect. Capturing these conditions, however, requires a deliberate approach to data analysis that assesses both current and forward-looking investor appetite. A liquidity risk manager should be able to customize input parameters and analyze market scenarios within models that incorporate these nuances.
Balancing methodology and accuracy
These capabilities rely on calibrating models to complex real-world data from different asset markets. The model must:
- Reflect changes in market conditions, which may be driven by geopolitical shocks or tax stimulus, for example.
- Remain stable from day to day in order to maintain the credibility of its results.
After all, without accuracy the model is impractical.
However, risk managers must balance transparency with reliability. While transparent models are easier to document and explain, they risk being less accurate. More reliable models are typically more accurate, but their methodology can be more difficult to convey.
Linear statistical models, for example, can be explained easily: they look for formalized relationships between variables to predict an outcome, and thereby require the user to have some knowledge of these relationships. They are not, however, as dynamic as a machine learning (ML) model. An ML-based model, in comparison, often yields superior estimates, but its methodology is complex and challenging to explain.
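To make the transparency point concrete, the stylized sketch below fits a linear model to hypothetical liquidity data. The chosen drivers (quoted bid-ask spread and trade size), their coefficients, and the data itself are all illustrative assumptions, not part of the report; the point is only that a linear model's fitted coefficients map directly onto assumed relationships, which is what makes it easy to explain.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical liquidity drivers: quoted bid-ask spread (bps) and log trade size.
spread = rng.uniform(1, 50, n)
log_size = rng.uniform(10, 16, n)

# Synthetic "true" transaction cost with noise (illustrative relationship only).
cost = 0.6 * spread + 2.0 * (log_size - 10) + rng.normal(0, 2, n)

# Linear model: cost ~ b0 + b1*spread + b2*log_size, fit by least squares.
X = np.column_stack([np.ones(n), spread, log_size])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)

# Each coefficient is directly interpretable: cost sensitivity per unit of driver.
print(beta)
```

The transparency comes from the last line: each entry of `beta` answers a plain question ("how much does cost rise per basis point of spread?"), whereas an ML model's internal parameters admit no such direct reading.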
Fortifying a modeling framework
Measures can be taken to increase transparency by adding elements that test for accuracy and enhance granularity.
Out-of-sample backtesting, for example, can be employed to determine how well the model predicts real-world outcomes.
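The essence of an out-of-sample backtest is a chronological split: calibrate on an earlier window, then score predictions on data the model never saw. The sketch below illustrates this on synthetic data; the single driver, the sample sizes, and the error metric are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily observations: one liquidity driver x and the realized cost y.
x = rng.uniform(0, 1, 300)
y = 1.5 * x + rng.normal(0, 0.1, 300)

# Out-of-sample split: calibrate on the first 250 days, hold out the last 50.
train, test = slice(0, 250), slice(250, 300)
X_train = np.column_stack([np.ones(250), x[train]])
beta, *_ = np.linalg.lstsq(X_train, y[train], rcond=None)

# Score the model only on the held-out period it never saw during calibration.
pred = beta[0] + beta[1] * x[test]
rmse = np.sqrt(np.mean((pred - y[test]) ** 2))
print(f"out-of-sample RMSE: {rmse:.3f}")
```

An out-of-sample error close to the in-sample error indicates the model generalizes rather than merely memorizing its calibration window.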
Global coverage is also important, as reliable results do not come from applying a one-size-fits-all approach to asset sub-classes. The framework needs sufficient granularity to reflect trading dynamics across different asset classes and markets.
For example, trade execution behavior in U.S. municipal bond markets differs from behavior observed in euro corporate bonds.
Three pillars emerge: out-of-sample backtesting, machine learning techniques, and global coverage that respects regional and asset-class-specific trading styles. Together they provide a solid foundation for confidently assessing the accuracy of liquidity risk models in current market conditions. Bloomberg’s solution for liquidity assessment, LQA, incorporates all three across multiple asset classes to paint a robust picture of market liquidity.
Opportunities in big data
A robust, big data-driven framework for examining liquidity can be advantageous in various ways. Imagine the insight possible from having several hundred firm-based sources of execution data, gleaned from exchanges, clearing houses, and order and portfolio management systems over several years. Incorporating such a substantial volume of data into a data-driven methodology gives risk management a tactical advantage, because the resulting framework always reflects prevailing market conditions. Access to large quantities of data ensures the relevant factors influencing liquidity are considered. More importantly, meaningful liquidity estimates become possible even for instruments with limited trading activity.
For instance, if a bond trades sparsely or has not traded in several months, the large data pool still allows a robust risk model to be built to assess its liquidity.
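One common way such pooling works (sketched here as an illustration, not as LQA's actual methodology) is to fit a single model across trades in many comparable bonds, then price the untraded bond from its own characteristics. The features below (duration and a rating bucket) and all coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000

# Hypothetical pooled trade records from many comparable bonds.
duration = rng.uniform(1, 10, n)        # years
rating = rng.integers(1, 5, n)          # bucket: 1 = AAA ... 4 = BBB
cost = 2.0 + 1.1 * duration + 3.0 * rating + rng.normal(0, 2, n)

# Fit one model across the whole pool of traded bonds.
X = np.column_stack([np.ones(n), duration, rating])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)

# Estimate liquidity cost for a bond with no recent trades, from its features
# alone: duration 7.5 years, rating bucket 3.
est = beta @ np.array([1.0, 7.5, 3.0])
print(f"estimated cost: {est:.1f} bps")
```

The untraded bond borrows strength from the thousands of observed trades in similar instruments, which is exactly the benefit the large data pool provides.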
The improved ability to classify securities or portfolios by liquidation horizon can enhance security selection when constructing portfolios or devising trading strategies.
Understanding the numbers
Having a measure of liquidity is only half the story. Knowing how the liquidity measure was determined, or indeed having some measure of its uncertainty or reliability, is essential to obtaining the appropriate context for confident decision-making.
Two securities, for example, can have the same expected liquidity cost, but one may have a larger dispersion of likely transaction costs — a factor likely to have some bearing on security selection. From a regulatory perspective, most regulators now require the production of detailed liquidity metrics, both in normal conditions and in times of stress. Performing this exercise in an efficient and timely manner is important to ensure compliance.
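The point about dispersion can be shown with two hypothetical simulated cost distributions: identical expected cost, very different uncertainty around it. The distributions and parameters below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical simulated liquidation-cost distributions (bps) for two bonds
# with the same expected cost but different dispersion.
cost_a = rng.normal(25, 3, 10_000)   # tight dispersion
cost_b = rng.normal(25, 12, 10_000)  # wide dispersion

for name, c in [("A", cost_a), ("B", cost_b)]:
    print(f"bond {name}: mean={c.mean():.1f} bps, "
          f"std={c.std():.1f} bps, 95th pct={np.percentile(c, 95):.1f} bps")
```

The means match, but bond B's 95th-percentile cost is far higher: a point estimate alone would hide the risk that matters for both security selection and stressed-condition reporting.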
Internal and external governance also has an increasing focus on cost transparency and scenario analysis. More reliable stress testing and early warning indicators can become a reality through data-driven analysis.
Source: 2019 Bloomberg Global Liquidity Report