This article first appeared on the Markets Media Experts page on February 2, 2012.
The salient fact is that Big Data is messy. A financial practitioner's worst fear is spending more time processing and cleaning data than analyzing it. These vast hoards of data demand a devotion to collecting and managing data from a multiplicity of diverse markets across asset classes and continents.
Chris Pickles, BT's Head of Industry Initiatives for Global Banking & Financial Markets, recently commented at an industry round table on Big Data: "… market data is probably the biggest dump of data going into a bank's dealing rooms."
Yet Big Data extends well beyond infrastructure. A more competitive trading environment, tighter spreads, thinner margins and a lower risk appetite are pushing quantitative traders to explore more cross-asset trading models and cross-asset hedging. The side effect is an increasing demand for deep data over longer time periods across a multiplicity of markets: Equities, Futures, Options and, of course, cross-border currencies.
The challenge of this data dump is dealing with the vagaries of multiple data sources: mapping ticker symbols across a global universe, tying indices to their constituents, handling tick-level granularity, ingesting cancellations and corrections, inserting corporate-action price and symbol changes, and detecting gaps in history. Any and all of these factors are vital to the science of quantitative trade modeling. Big Data is about linking disparate data sets under some common thread to tease out an intelligible answer, a diamond from a mountain of coal. It is about finding cross-asset correlations or understanding how best to hedge a position to offset risk; these are the underpinnings of trade models and portfolio management.
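As a minimal sketch of two of the cleaning chores above, consider gap detection against a trading calendar and back-adjusting prices for a stock split. The function names, dates and the 3-for-1 split ratio below are illustrative assumptions, not anything from a real feed:

```python
from datetime import date

def find_gaps(history_dates, trading_calendar):
    """Return expected trading days that are missing from a price history."""
    have = set(history_dates)
    return [d for d in trading_calendar if d not in have]

def adjust_for_split(prices, split_date, ratio):
    """Back-adjust prices before a split so the series is continuous.

    For example, a 3-for-1 split divides all pre-split prices by 3.
    """
    return {d: (p / ratio if d < split_date else p) for d, p in prices.items()}

# Hypothetical one-week calendar and a history that is missing Jan 5
calendar = [date(2012, 1, d) for d in (3, 4, 5, 6, 9)]
history = [date(2012, 1, 3), date(2012, 1, 4), date(2012, 1, 6), date(2012, 1, 9)]
print(find_gaps(history, calendar))  # [datetime.date(2012, 1, 5)]

# Hypothetical 3-for-1 split effective Jan 4: the 30.00 close becomes 10.00
prices = {date(2012, 1, 3): 30.0, date(2012, 1, 4): 10.2}
print(adjust_for_split(prices, date(2012, 1, 4), 3))
```

In practice these checks run against full exchange calendars and corporate-action feeds; each cleaning rule is individually simple, but there are many of them, applied across every symbol in every market.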
It is vital to recognize that data volumes can quickly overwhelm the capacity to consume them, and the resulting problems are manifold. On one hand is the technology: poorly designed or legacy systems can easily translate into inordinate amounts of time and resources spent processing data, outfitting new storage and scrambling to deploy new systems to meet the rising velocity. On the other is the demand placed on the human intellect. The 'search and (alpha) discovery' of Big Data is to artfully leverage the mathematical and statistical sciences in a timely manner.
Big Data IT is about managing scale. With 4.55 billion contracts traded in 2011, the Options markets' volume and velocity place enormous demands on IT infrastructure. As more and more firms engage in cross-asset trading, the need for Big Data technology capable of swallowing the information flood in big gulps is evident. That means consuming not just Options strikes but the underlying Equities, the Futures and the Foreign Exchange markets as well; the last, at last count, was the world's largest and most liquid market, with average daily turnover exceeding 4 trillion dollars. But data cannot be seen as just a tech issue.
Big Data is a daunting task placed on human capital. Finding useful information in oceans of data is an increasingly complex problem. Big Data is about big analytics. There is a clear and present danger of the inherent wisdom being lost in darkness and noise as data accumulates in floods. A clear thread has to be teased from the veritable sea of information to focus, direct and ultimately give meaning to what has been amassed. Whether it is research to discover new models or backtesting to prove profitability, the skill to recognize the influencers of market impact and its inherent volatility needs to be factored in. The quest to devise smarter models is what analysis is for. Yet ultimately, to make sense of the analysis, deciding what to prune and knowing what to embrace is crucial to making timely business decisions. It is a matter of survival in the fiercely competitive trading landscape.
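One concrete instance of the hedging analytics mentioned above is the classic minimum-variance hedge ratio, estimated from historical returns. The sketch below is illustrative only; the return series are made up, and a real desk would estimate this over long tick-level histories, which is exactly where the data burden comes from:

```python
def covariance(xs, ys):
    """Sample covariance of two equal-length return series."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def hedge_ratio(asset_returns, hedge_returns):
    """Minimum-variance hedge ratio: Cov(asset, hedge) / Var(hedge).

    The result h is how many units of the hedge instrument to short
    per unit of the asset to minimize the variance of the combined position.
    """
    return covariance(asset_returns, hedge_returns) / covariance(hedge_returns, hedge_returns)

# Illustrative daily returns: this asset happens to move exactly 2x the hedge
hedge = [0.010, -0.020, 0.005, 0.015]
asset = [0.020, -0.040, 0.010, 0.030]
print(hedge_ratio(asset, hedge))  # 2.0
```

Multiply this single calculation across thousands of symbol pairs, assets and venues, and the link between the analytics and the infrastructure problem becomes obvious.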
Once again thanks for reading.
For an occasional opinion or commentary on technology in Capital Markets, you can follow me on Twitter.