The fallout from Knight Capital’s software glitch has created a global crisis of confidence, as it has yet again revealed how far trading algorithms have proliferated. Automated and algorithmic trading have grown significantly across the industry, swelling the ranks of competitors and creating surging data volumes. Firms are ever fearful of the competition as they strive to keep pace with rapidly advancing technology, tackle thinner margins and manage trading costs. Together these pressures act like a dragging anchor on what matters most – what drives alpha.
Knight Capital’s software mishap and the ensuing jitters have caught the attention of regulators across the globe, from the U.S. and Europe to Asia, producing proposals and guidelines for algo-testing. The European Securities and Markets Authority (ESMA) has issued guidelines for investment firms that operate algos to “… Develop testing methodologies for new algorithms that might include performance simulations/back testing or offline testing within a trading platform testing environment”. The Hong Kong Securities and Futures Commission (SFC) has also proposed that “trading algorithms will operate as designed… taking into account foreseeable extreme circumstances and the characteristics of different trading sessions… Deployment of the algorithmic trading system and trading algorithms would not interfere with the operation of a fair and orderly market”. Regulators across the globe are lecturing on algo-testing with a firm grasp of the obvious, but completely miss the nuances.
Investment firms, from quant trading shops to long-only asset managers, understand the importance of strategy testing only too well. Algorithms, essential to the sustainability of the business, are born of the mathematical ingenuity of quants and developers, and their complexity is accelerating under increasing competition. Testing them focuses on two main areas: robustness and profitability.
Testing progresses from unit testing through integration testing to acceptance testing – robustness testing attempts to measure the stability of an algorithm. It is determined by replaying historical data through the algorithm, which trades against a fill simulator, a scaled-down replica of an exchange’s matching engine. The historical data can represent normal market activity, highly volatile conditions or even crash periods. In each scenario the algo’s logic should behave gracefully: turn a profit, avoid a loss or exit the market. Analyzing test results is a hunt for regressions and a validation step before a production roll-out.
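As a rough illustration of that replay loop, the sketch below feeds a tick stream through a toy strategy trading against a minimal fill simulator. Everything here – `Tick`, `FillSimulator`, `simple_strategy`, the prices and thresholds – is hypothetical and drastically simplified; a real harness would replay recorded market data through a far more faithful matching-engine replica.

```python
# A minimal sketch of replaying ticks through a strategy against a toy
# fill simulator. All names and numbers are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Tick:
    bid: float  # best bid
    ask: float  # best ask

class FillSimulator:
    """Scaled-down stand-in for an exchange matching engine:
    marketable orders fill immediately at the touch."""
    def __init__(self):
        self.position = 0
        self.cash = 0.0

    def execute(self, side, qty, tick):
        # Buys lift the ask, sells hit the bid.
        price = tick.ask if side == "buy" else tick.bid
        self.position += qty if side == "buy" else -qty
        self.cash -= price * qty if side == "buy" else -price * qty
        return price

def simple_strategy(tick, position):
    """Toy mean-reversion rule around an arbitrary 100.0 level."""
    mid = (tick.bid + tick.ask) / 2
    if mid < 99.5 and position <= 0:
        return ("buy", 100)
    if mid > 100.5 and position >= 0:
        return ("sell", 100)
    return None

# Replay a (here, synthetic) historical tick stream through the simulator.
ticks = [Tick(99.0, 99.2), Tick(100.8, 101.0), Tick(99.1, 99.3)]
sim = FillSimulator()
for t in ticks:
    order = simple_strategy(t, sim.position)
    if order:
        sim.execute(*order, t)

print(sim.position, round(sim.cash, 2))  # → 100 -9770.0
```

Running the same loop over recorded volatile or crash-period data, then diffing fills and P&L against a previous release, is what the regression hunt described above amounts to in practice.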
Back-testing measures estimates of profitability as strategies are optimized – targets might include profit factor, max drawdown or the shape of the equity curve, all aimed at improving the quality of execution decisions. Historical data sets provide “what if” market conditions as strategy parameters are varied. Back-testing for profitability is a challenging task involving prediction, that elusive Holy Grail, seeking to validate the efficiency of an algorithm and to understand market movement. It requires modeling market impact – predicting what other market participants will do when your orders hit the book. Such a determination requires inferring human behavior programmatically: judging what might happen to the liquidity in the book as your order activity, whether posting or taking, acts as an influencer. Sadly, regulators simply gloss over this in their simplistic notion of testing.
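The profitability metrics mentioned above fall out of a back-test’s trade list. The sketch below, using made-up per-trade P&L figures, shows one common way to derive an equity curve, profit factor and max drawdown.

```python
# Hedged sketch: common back-test statistics from a list of per-trade
# P&Ls. The trade figures are invented purely for illustration.
from itertools import accumulate

trades = [120.0, -45.0, 80.0, -200.0, 150.0, 60.0]  # hypothetical P&L per trade

# Equity curve: cumulative P&L after each trade.
equity = list(accumulate(trades))

# Profit factor: gross profits divided by gross losses.
gross_profit = sum(t for t in trades if t > 0)
gross_loss = -sum(t for t in trades if t < 0)
profit_factor = gross_profit / gross_loss if gross_loss else float("inf")

# Max drawdown: largest peak-to-trough decline along the equity curve.
peak, max_dd = float("-inf"), 0.0
for e in equity:
    peak = max(peak, e)
    max_dd = max(max_dd, peak - e)

print(equity)                    # [120.0, 75.0, 155.0, -45.0, 105.0, 165.0]
print(round(profit_factor, 2))   # 1.67
print(max_dd)                    # 200.0
```

Computing the metrics is the easy half; the hard half, as the paragraph above notes, is that the historical fills behind those P&L numbers assume away market impact unless the simulator models how your own orders would have moved the book.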
Quantitative trading looks to exploit perceived market inefficiencies created by human behavior, geo-political events and market structure. Simulating these for algo-testing is complex and difficult. In that quest for profitability, firms understand the importance of testing, devoting time, attention and dollars to the endeavor to stay ahead of the competition.
Once again thanks for reading.
For an occasional opinion or commentary on technology in Capital Markets you can follow me on Twitter, here.