The evolution of OneTickCloud

OneTickCloud is the industry’s premier solution for on-demand access to global tick data and analytics. Deployed in a secure data center, OneTickCloud provides firms the ability to aggregate, normalize and analyze large volumes of data, including Morningstar global tick data, using OneMarketData’s market-leading enterprise data management software, OneTick.

OneTickCloud is a securely hosted, managed service of normalized and cleansed exchange and OTC data and analytics across global equities and futures tick history. It supports backtesting, algo development, transaction cost analysis, technical studies and charting applications, and includes on-demand analytics tools for creating custom datasets.

OneTickCloud provides 10 years of normalized tick data for US markets, 45 years of end-of-day data for US equities, 8 years of corporate actions data, and a trove of real-time and historical data from over 120 global markets.

OneTickCloud is accessible over the internet with JSON- or CSV-formatted extracts. Access the data via FTP, the Web API or the OneTick GUI. It offers a subset of the most powerful data analytics available in our OneTick software platform.
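This post doesn't document the Web API itself, but to make the access model concrete, here is a hedged sketch of pulling a CSV extract over HTTP in Python. The endpoint URL, parameters and credentials below are hypothetical placeholders, not the documented OneTickCloud API:

```python
# Hypothetical endpoint, parameters and credentials -- an illustration of
# CSV-over-HTTP access, not the documented OneTickCloud Web API.
import csv
import io

import requests

resp = requests.get(
    "https://onetickcloud.example.com/api/query",  # placeholder URL
    params={"symbol": "IBM", "date": "20160929",
            "type": "trades", "format": "csv"},
    auth=("user", "password"),                     # placeholder credentials
    timeout=60,
)
resp.raise_for_status()
for row in csv.DictReader(io.StringIO(resp.text)):
    print(row)  # one dict per tick record
```

Swapping `"format": "csv"` for `"json"` and reading `resp.json()` would cover the JSON-formatted extracts mentioned above.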

A year after the service’s 2015 release, join us for our September 29th webinar, “The Evolution of OneTickCloud,” to discuss how it has grown and matured.


REGISTER HERE

In this webinar, OneMarketData’s Louis Lovas and Jeff Banker will provide an overview of their company and its powerful hosted solutions. DASH Financial’s Ben Locke will then provide insight into the OneTickCloud customer experience, and how his team has utilized the product.

The OneTickCloud architecture includes Web Queries, Web On-Demand, Web Scheduled Queries and Desktop OneTick. Join our webinar on September 29 for more details on what OneTickCloud can do, and what it can do for you!

Finding Alpha in Transaction Cost Analysis

Technology is making a sweeping transformation in trading styles as the accelerating use of algorithms creates a more competitive environment for all market participants. Tighter spreads, diminished liquidity and increased volatility are re-defining global markets and thinning margins.

This translates to an increased awareness of trading costs as participants look to squeeze alpha out of a diminishing pot. Whether you’re an asset manager, an institutional investor, a quant researcher or an executing broker, sought-after cost controls create the incentive to invest in advanced technology for Transaction Cost Analysis, or ‘TCA’. TCA is essentially a collection of comparisons between various market benchmarks and traded prices, determining whether the spread between them is high or low at the time of order; it can generate alpha by exposing, and ideally lowering, the cost at which you buy and sell. Results from the analysis are used to fine-tune the trading process, compare venues, and provide clients desired reports and dashboards.

Benchmarks provide the formal metrics of market conditions to quantify costs. They offer baseline measurements throughout market history, across the trading day and at the time of execution. They include price metrics such as open, high, low, close and volume-weighted-average-price as well as liquidity metrics of the order book.
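To make one such benchmark concrete, here is a minimal pandas sketch of VWAP over made-up trade ticks; this illustrates the metric itself, not OneTick TCA's implementation:

```python
# Minimal VWAP computation over made-up trade ticks; illustrative only,
# not OneTick TCA's implementation.
import pandas as pd

trades = pd.DataFrame({
    "price": [50.00, 50.05, 49.95, 50.10],
    "size":  [100, 300, 200, 400],
})

# VWAP = total traded notional / total traded volume over the interval.
vwap = (trades["price"] * trades["size"]).sum() / trades["size"].sum()
print(f"VWAP: {vwap:.3f}")  # 50.045
```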

OneTick TCA, an enterprise platform, offers a set of market benchmarks across global equity and futures markets. This starts with tick-by-tick trade, quote and order book history and boasts over 20 price and volume metrics. Along with global benchmarks, OneTick TCA also includes a collection of best-practice methodologies to determine the effectiveness of your trading.

OneTick TCA provides the tools to …

  • Spot outliers in your trades by measuring slippage against price benchmarks such as VWAP, bars and beta (see the sketch after this list)
  • Measure participation rates by venue, and across venues, against stated goals
  • Compare performance against volume benchmarks to determine the effectiveness of capturing the visible liquidity at each price level
  • Measure market impact, the cost of demanding liquidity, by understanding quote fade across venues
  • Quantify the opportunity cost of not trading as a justification to tune algo aggressiveness
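For the first bullet, here is a hedged sketch of outlier-spotting against a VWAP benchmark (made-up fills and a hypothetical z-score cutoff, not OneTick TCA's methodology): compute signed slippage in basis points per order and flag the extremes.

```python
# Made-up fills and thresholds; not OneTick TCA's methodology. Slippage
# in basis points vs. an interval VWAP benchmark, with z-score flagging.
import pandas as pd

fills = pd.DataFrame({
    "order":      ["A", "B", "C", "D", "E"],
    "side":       [1, 1, -1, 1, -1],          # 1 = buy, -1 = sell
    "avg_px":     [50.06, 50.02, 49.90, 50.55, 50.00],
    "bench_vwap": [50.045] * 5,
})

# Positive slippage = executed worse than the benchmark, on either side.
fills["slip_bps"] = (fills["side"]
                     * (fills["avg_px"] - fills["bench_vwap"])
                     / fills["bench_vwap"] * 1e4)
z = (fills["slip_bps"] - fills["slip_bps"].mean()) / fills["slip_bps"].std()
print(fills[z.abs() > 1.5])  # hypothetical outlier cutoff
```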

OneTick TCA is a hosted platform built upon OneTick’s time-series database, analytics and market history to service the demanding and varied needs of both buy- and sell-side institutions. To learn more about OneTick TCA, download a product sheet or request a demo today.

Once again thanks for reading.
Louis Lovas

For an occasional opinion or commentary on technology in Capital Markets you can follow me on Twitter, here.


OneTick Webinar on leveraging Hadoop MapReduce with OneTick Analytics

OneTick Map-Reduce is a Hadoop-based solution combining OneTick’s analytical engine with the MapReduce computational model, which can be used to perform distributed computations over large volumes of financial tick data. As a distributed tick data management system, the OneTick internal architecture provides support for databases that are spread across multiple physical machines. This architecture, designed for distributed parallel processing, improves query performance: the typical OneTick query is easily parallelizable at logical boundaries (e.g. running the same query analytics across a large symbol universe) and can be processed on separate physical machines.
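To illustrate the “parallelizable at logical boundaries” point, here is a plain-Python sketch of farming the same analytic out across a symbol universe with a process pool. `run_query` is a stand-in for the real per-symbol query; none of this is OneTick's own dispatcher:

```python
# Plain-Python sketch of symbol-level parallelism; run_query is a
# stand-in for the per-symbol query analytics, not OneTick's dispatcher.
from concurrent.futures import ProcessPoolExecutor

SYMBOLS = ["IBM", "MSFT", "AAPL", "GE", "XOM", "JPM"]

def run_query(symbol):
    # Placeholder analytic: a real version would read the tick archive
    # for `symbol` and compute, say, daily volume or VWAP.
    return symbol, len(symbol) * 1000

if __name__ == "__main__":
    # Each symbol partition is independent, so the work distributes
    # cleanly across processes (or, at larger scale, machines).
    with ProcessPoolExecutor() as pool:
        for symbol, result in pool.map(run_query, SYMBOLS):
            print(symbol, result)
```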

On April 26, 2016 we held a very successful webinar on how OneTick’s large collection of built-in analytical functions and query design can easily leverage the Hadoop middleware framework for large-scale parallel processing. You can watch the recording at this link.


OneTick Map-Reduce dynamically distributes data (stored in OneTick historical archives) and computation across the nodes using a combination of the distributed file system (HDFS) and the MapReduce computational framework.

  • OneTick archives are stored on a distributed file system (e.g. HDFS with Amazon S3 as a backup). The distributed file system serves as an abstraction layer providing shared access — physically the data resides on different nodes of the cluster. The distributed file system is also responsible for balancing disk utilization and minimizing the network bandwidth.
  • Hadoop’s MapReduce daemons are responsible for distributing the query across the nodes of the cluster, by taking into account the locality of the queried data.
  • The distributed OneTick query is an analytical process that semantically defines a user’s business function. OneTick query analytics are designed specifically for that purpose.
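To ground the MapReduce role in something runnable, here is a Hadoop Streaming-style mapper/reducer pair in Python that sums traded volume per symbol. The CSV layout is assumed, and this is purely illustrative of the computational model, not how OneTick Map-Reduce dispatches its queries:

```python
#!/usr/bin/env python
# mapper.py -- illustrative Hadoop Streaming mapper (assumed CSV layout:
# timestamp,symbol,price,size); emits "symbol<TAB>size" per trade tick.
import sys

for line in sys.stdin:
    fields = line.strip().split(",")
    if len(fields) == 4:
        _, symbol, _, size = fields
        print("%s\t%s" % (symbol, size))
```

```python
#!/usr/bin/env python
# reducer.py -- sums volume per symbol; Hadoop Streaming delivers the
# mapper output sorted by key, so a single pass suffices.
import sys

current, total = None, 0
for line in sys.stdin:
    symbol, size = line.strip().split("\t")
    if symbol != current:
        if current is not None:
            print("%s\t%d" % (current, total))
        current, total = symbol, 0
    total += int(size)
if current is not None:
    print("%s\t%d" % (current, total))
```

Such a pair would be launched with the standard streaming jar (paths hypothetical): `hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py -input /ticks -output /volumes`.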

OneTick Analytics

OneTick provides a large collection of built-in analytical functions which are applied to streams of historical or real-time data. These functions, referred to as Event Processors (EPs), are a set of business and generic processors that are semantically assembled in a query and ultimately define its logical, time-series result set. Event Processors include aggregations, filters, transformers, joins and unions, statistical and finance-specific functions, order book management, sorting and ranking, and input and output functions.
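EPs are assembled graphically inside OneTick queries rather than written as code; purely as an analogy, here is a pandas sketch of the filter → transform → aggregate shape such a query graph takes, over made-up ticks:

```python
# A rough analogy in pandas, not OneTick's EP syntax: chained operations
# playing the roles of filter, transformer and aggregation EPs.
import pandas as pd

ticks = pd.DataFrame(
    {"price": [100.0, 100.2, 99.9, 100.1],
     "size":  [200, 50, 300, 120]},
    index=pd.to_datetime(
        ["2016-04-26 09:30:01", "2016-04-26 09:30:40",
         "2016-04-26 09:31:05", "2016-04-26 09:31:30"]),
)

filtered = ticks[ticks["size"] >= 100]                 # "filter EP"
filtered = filtered.assign(                            # "transformer EP"
    notional=filtered["price"] * filtered["size"])
bars = filtered.resample("1min").agg(                  # "aggregation EP"
    {"size": "sum", "notional": "sum"})
bars["vwap"] = bars["notional"] / bars["size"]         # 1-minute VWAP bars
print(bars)
```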

The OneTick Map-Reduce design makes it easy to switch between different data representation/job dispatching models, affording support for both an internal model and external models. Users define their “map” and “reduce” operations in this restricted computational model and the framework takes care of the parallelization.

Once again thanks for reading.
Louis Lovas

For an occasional opinion or commentary on technology in Capital Markets you can follow me on Twitter, here.


Computational Models in OneTick and Hadoop

Cloud computing means a lot of different things to different people. There are public, private and hybrid models, and the variations are endless. A key characteristic of the cloud is rapid elasticity, which offers compute power unheard of in prior infrastructures. Such parallelized scalability allows previously intractable problems to become tractable. There are two key underlying components behind this: a computational model and distributed data.

OneTick Map-Reduce is a Hadoop-based solution combining OneTick’s analytical engine with the MapReduce computational model, which can be used to perform distributed computations over large volumes of financial tick data. As a distributed tick data management system, the OneTick internal architecture provides support for databases that are spread across multiple physical machines. This architecture, designed for distributed parallel processing, improves query performance, as the typical OneTick query is easily parallelizable at logical boundaries (e.g. running the same query analytics across a large symbol universe) and can be processed on separate physical machines.

OneTick Map-Reduce offers a way to leverage elastic computation by dynamically distributing both data (stored in OneTick historical archives) and analytics across a Hadoop cluster, using a combination of a distributed file system (HDFS) and the MapReduce computational framework.

  • OneTick archives are stored on a distributed file system (e.g. HDFS with Amazon S3 as a backup). The distributed file system serves as an abstraction layer providing shared access — physically the data resides on different nodes of the cluster. The distributed file system is also responsible for balancing disk utilization and minimizing the network bandwidth.
  • Hadoop’s MapReduce daemons are responsible for distributing the query across the nodes of the cluster, by taking into account the locality of the queried data.
  • The distributed OneTick query is an analytical process that semantically defines a user’s business function. OneTick query analytics are designed specifically for that purpose.

OneTick Analytics

OneTick provides a large collection of built-in analytical functions which are applied to streams of historical or real-time data. These functions, referred to as Event Processors (EPs), are a set of business and generic processors that are semantically assembled and ultimately define the logical, time-series result set of a query. Event Processors include aggregations, filters, transformers, joins and unions, statistical and finance-specific functions, order book management, sorting and ranking, and input and output functions. Also included is a reference data architecture for managing security identifiers, holiday calendars and corporate action information. Together these allow time-series tick streams originating from any of the OneTick storage sources (archive, in-memory or real time) to be filtered, reduced and/or enriched into the business logic supporting a wide variety of use cases:

  • Quantitative Research
  • Algorithmic, low-touch and program trading
  • Firm-wide profit / loss monitoring
  • Real-time transaction cost analysis
  • Statistical arbitrage and market making
  • Regulatory compliance and surveillance

OneTick, Hadoop and Spark

Spark and Hadoop are middleware frameworks that facilitate parallel processing of data, whereas MapReduce is a computational model. These components provide a platform for distributed computation and, combined with HDFS, offer distributed data access as well. HDFS is (by definition) the file system part of Hadoop, and Spark can use HDFS as an input data source. Yet neither Hadoop nor Spark provides targeted, business-oriented functions to support the above-mentioned use-case solutions. Furthermore, those trade-related solutions depend on the cleansed, normalized, high-quality data available in OneTick data management, either by itself or integrated into Hadoop.

OneTick has its own very efficient mechanisms for parallelization of computations (e.g. concurrent processing of symbol sets across a load-balanced group of tick servers, client-side and server-side database partitioning with concurrent partition access, and splitting queries locally into multiple execution threads). OneTick also supports Hadoop as an alternative mechanism for parallelizing computations. The OneTick Map-Reduce design makes it easy to switch between different data representation/job dispatching models, affording support for an internal model and external models (Hadoop, Spark, etc.).

The idea is that you start with a collection of data items and apply map and reduce operations on this collection (as in functional programming). Map operations transform existing items into new items, and reduce operations group multiple items into a single aggregated item. The computation must be stateless so that it is easier to parallelize; this means that each transformation creates a new collection, rather than manipulating the existing one. Users define their “map” and “reduce” operations in this restricted computational model and the framework takes care of the parallelization.

How does this translate to OneTick’s data model?

  • Data items <-> OneTick time series
  • Map operations <-> OneTick transformer EPs
  • Reduce operations <-> OneTick merge/join EPs
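To make the correspondence concrete, here is a tiny plain-Python sketch (illustrative only, not OneTick's API): time series are lists of (timestamp, price) events, the map step transforms each event independently, and the reduce step merges partial series in time order.

```python
# Illustrative only -- plain Python, not OneTick's API. Series are
# time-ordered (timestamp, price) events.
from functools import reduce
from heapq import merge

series_a = [(1, 100.00), (3, 100.20)]   # e.g. partition on node A
series_b = [(2, 99.90), (4, 100.10)]    # e.g. partition on node B

# "Map": transform each event independently (the transformer EP's role),
# here converting prices to cents. Stateless, so trivially parallel.
mapped = [[(t, p * 100) for t, p in s] for s in (series_a, series_b)]

# "Reduce": merge partial series preserving time order (the merge/join
# EP's role). Each pairwise merge yields a new collection, as the
# stateless model described above requires.
merged = reduce(lambda x, y: list(merge(x, y)), mapped)
print(merged)  # [(1, 10000.0), (2, 9990.0), (3, 10020.0), (4, 10010.0)]
```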


Spark is similar to Hadoop, yet it overcomes Hadoop’s long job-startup times. Like OneTick’s own dispatching model, Spark appears to be more suitable for interactive data processing. Nonetheless, both are suitable for large batch-processing tasks, which is the reason for OneTick’s integration with them as complementary technologies.

[Figure: OneTick architecture]
Once again thanks for reading.
Louis Lovas

For an occasional opinion or commentary on technology in Capital Markets you can follow me on Twitter, here.


Not Your Granddad’s Spoof

A recent job posting by a major investment bank reads: “Basic qualifications: PhD in Computer Science, specializing in … machine learning … with extensive knowledge of big data technologies … and experience with predictive modeling, natural language processing, and simulation.” Quantitative trading, right? Or global risk modeling? Maybe electricity market forecasting? No, Compliance. Specifically, surveillance analytics.

Twenty years ago, even ten, the Quants landing on Wall Street with their freshly minted PhDs from Stanford and MIT would have laughed. Twenty years ago, even ten, surveillance was a dreary back office affair, something that somebody in a cheap labor state did on a mainframe, if they hadn’t been laid off yet. Since Dodd-Frank, since the Flash Crash, since MiFID and MAD, since that darned book, compliance surveillance is front office with a capital ‘F’. On the trading floors of New York and Chicago, and on quieter desks from Greenwich to Boston, trading supervisors are reviewing surveillance reports and consulting real-time surveillance monitors as though their bonus checks depend on it—because they do.
Here’s why. The regulators have grown more aggressive, and grown sharper teeth—they’re now empowered to prosecute on the basis of ‘disruptive practice’ rather than ‘intent’. Staffs are larger and regulatory actions more frequent. Most importantly, money penalties have grown dramatically. Enforcement groups seem to vie with each other for bragging rights.

The regulators are better equipped now, too. They have to be, in order to analyze an ocean of market data. The CFTC’s trade surveillance system, for example, had gathered over 160 Terabytes as of June 2014, and that has likely passed 200 TB by now. Regulators are turning to sophisticated analysis to ferret out patterns of misconduct and detect market stability risks. In 2015, Scott Bauguess, Deputy Chief Economist at the SEC, wrote that several SEC departments use machine learning techniques to identify likely misconduct.

The old spoofs and other tricks are now easily spotted. New ones are appearing, but they’re being learned by smart analytics. Manipulate on the CME in one contract, and take on ICE in a correlated one? Nope, a regulator’s cross-market surveillance can see it. Collude in setting a fix? Software that learns social media relationships may detect it.

So what should the compliance chief who wants to protect her firm from a business-busting fine do? The first thing is to wake up to how dramatically the surveillance landscape has changed—it’s not your granddad’s spoof anymore.
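No specific detection algorithm is described here; purely as an illustration of the genre, here is a toy heuristic of the kind a surveillance system might start from, flagging traders whose buy-side cancellations are heavy while their executions skew to the sell side (one classic spoofing footprint; the mirror pattern is symmetric). The data layout and threshold are hypothetical.

```python
# Toy spoofing heuristic -- hypothetical data layout and threshold, not
# any regulator's or vendor's actual model. Flags traders with a high
# buy-side cancel rate combined with sell-heavy executions.
import pandas as pd

orders = pd.DataFrame({
    "trader": ["T1", "T1", "T1", "T1", "T2", "T2"],
    "side":   ["buy", "buy", "buy", "sell", "buy", "sell"],
    "event":  ["cancel", "cancel", "cancel", "fill", "fill", "fill"],
})

def spoof_score(g):
    buy_cancels = ((g["side"] == "buy") & (g["event"] == "cancel")).sum()
    buy_orders = (g["side"] == "buy").sum()
    sell_fills = ((g["side"] == "sell") & (g["event"] == "fill")).sum()
    fills = (g["event"] == "fill").sum()
    # Buy-side cancel rate weighted by sell-heaviness of executions.
    return (buy_cancels / max(buy_orders, 1)) * (sell_fills / max(fills, 1))

scores = orders.groupby("trader").apply(spoof_score)
print(scores[scores > 0.5])  # hypothetical alert threshold; flags T1
```

Real surveillance analytics layer order-book context, timing and cross-market data on top of signals like this, which is exactly where the machine learning mentioned above comes in.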

[Chart: CFTC enforcement actions and monetary penalties]


References:
  • Job posting: http://www.goldmansachs.com/a/data/jobs/27931.html
  • CFTC enforcements and penalties: CFTC Enforcement Division annual results, archived on the CFTC Press Room website
  • CFTC surveillance collects 160 TB: https://fcw.com/articles/2014/06/03/cftc-mulls-retooling-market-surveillance.aspx
  • Scott Bauguess comment: http://cfe.columbia.edu/files/seasieor/center-financial-engineering/presentations/MachineLearningSECRiskAssessment030615public.pdf

Thanks for reading.
Dermot Harriss


OneMarketData enhances OneTick Cloud global content platform with Tick Data, Inc.

OneMarketData has acquired Virginia-based Tick Data Inc., a trusted and leading provider of historical intraday exchange time-series data. Tick Data Inc. will be a wholly owned subsidiary of OneMarketData and will continue to operate from its Virginia offices under the current management team.

Data is the resource for better trade decisions, better cost controls and improving compliance and surveillance. As a consequence, key factors emerge for large-scale data management.  Those include infrastructure, data quality and timely access.  This acquisition is focused on enhancing the OneTick Cloud platform:

  • Tick Data enables OneMarketData to expand and expedite our Cloud Solutions platform, which provides the market with a broad set of data and analytics
  • The Tick Data, Inc. acquisition allows OneMarketData to expand the content platform to address the $1.5B market opportunity
  • OneMarketData has seen significant interest in our OneTickCloud platform
  • The OneTickCloud platform enables customers of Tick Data to greatly expand their ability to analyze exchange content for backtesting, algo development, research and compliance

Learn more about this acquisition and how OneMarketData’s OneTickCloud and Tick Data’s content look to address those challenges, providing the tools and services for managing and analyzing data more effectively. Click here.

OneTickCloud leverages the capabilities of OneTick to offer the analysis of cleansed, normalized history across domestic and international markets.  OneTick is a leading solution for managing market data – its capture, storage and analysis for use in quantitative research, back-testing, TCA, trade surveillance and many other areas in the trade life cycle.

Once again thanks for reading.
Louis Lovas

For an occasional opinion or commentary on technology in Capital Markets you can follow me on Twitter, here.


OneTickCloud: Simplifying Access to Normalized Tick Data for Sophisticated Analysis

As I’m sure most of you have experienced, there is an ever-increasing need for more data… from domestic markets, international markets, capturing your own order flow, news, and recently sentiment from social media. Hype around big data aside, improving your understanding of markets, their microstructure and the impact of your own trading can be a game changer.

Data is the resource for better trade decisions, better cost controls and improving compliance and surveillance. As a consequence, key factors emerge for large-scale data management.  Those include infrastructure, data quality and timely access.

Learn how OneMarketData’s new OneTickCloud looks to address those challenges, providing the tools and services for managing and analyzing data more effectively, in this webinar recorded from an earlier broadcast.

OneTickCloud leverages the capabilities of OneTick to offer the analysis of cleansed, normalized history across domestic and international markets.  OneTick is a leading solution for managing market data – its capture, storage and analysis for use in quantitative research, back-testing, TCA, trade surveillance and many other areas in the trade life cycle.

OneMarketData offers the tools for quant researchers to extract alpha, better manage risk and achieve confidence in model design.

OneTickCloud is a securely hosted service providing managed data and analytics across global equities and futures markets. It offers deep history of tick-by-tick and end-of-day prices. It includes reference data, split and dividend adjustment factors, name changes, earnings announcements and calendars.   You can think of OneTickCloud as providing …

  • Global content – normalized and cleansed across markets and geographies
  • A range of analytical query tools, from a self-service web application for assembling and organizing the content as you need, to more sophisticated desktop tooling for custom analysis
  • And lastly, easy access from your own application, whether through immediate on-demand access or, more traditionally, file downloads
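As a small worked example of the split and dividend adjustment factors mentioned above (a hedged pandas sketch with a made-up 2:1 split; the data layout is not OneTickCloud's actual schema), back-adjustment multiplies every observation before an event by the cumulative factor:

```python
# Hedged sketch: back-adjusting prices with a cumulative split factor.
# Data layout is made up; not OneTickCloud's schema.
import pandas as pd

idx = pd.to_datetime(["2016-06-01", "2016-06-02", "2016-06-03", "2016-06-06"])
prices = pd.Series([40.0, 41.0, 20.6, 21.0], index=idx)

# Per-day factor: 0.5 on the 2:1 split's ex-date (June 3), else 1.0.
factors = pd.Series([1.0, 1.0, 0.5, 1.0], index=idx)

# Cumulative factor applied to all observations *before* each event:
# reverse cumulative product, shifted so the ex-date itself is unadjusted.
cum = factors[::-1].cumprod()[::-1].shift(-1, fill_value=1.0)
adjusted = prices * cum
print(adjusted)  # June 1-2 halved to 20.0 and 20.5; later days unchanged
```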

And you only pay for what you need. To learn more, watch the webinar recording.

Once again thanks for reading.
Louis Lovas

For an occasional opinion or commentary on technology in Capital Markets you can follow me on Twitter, here.
