Updated on April 4, 2017 by Mike Baukes
There are two constants in the world of High Frequency Trading (HFT): massive volumes of data, and the need for programs that process that data and act on it at blisteringly fast speeds. These systems change frequently as the needs of the companies using them evolve and as the rules and regulations of market organizations and governments shift. The potential for market instability is a major concern for both companies and regulatory bodies, and major incidents caused by nothing more than algorithm errors have put a sharp focus on the quality and performance of HFT software. The DevOps philosophy can provide serious advantages to HFT companies; this article takes a look at some of the main issues and concerns of the business and then examines how DevOps can help.
Automation Issues in High Frequency Trading
No program has ever worked perfectly for every moment of its existence, and the ones used by HFT companies are no different. The most prevalent and controversial issue with HFT is the massive impact these systems can have on the markets. Even the people who build HFT capabilities for trading groups are aware of this risk; Michael Durbin, who built Citadel Investment Group's HFT operation, has stated that the concern about market instability due to HFT is "reasonable".
These flash crashes are not something out of science fiction, either. The first such incident occurred on October 19, 1987, also known as Black Monday, when a mixture of automated trading and other market issues caused a nearly 23% drop in the Dow Jones Industrial Average. Then, in May 2010, computers with far greater capabilities and much closer to modern HFT brought the issue to the forefront again, prompting several legislative attempts to prevent a recurrence.
A less globally impactful event, though just as important to be concerned with in development, is the potential for a system failure that causes drastic, nearly immediate losses. Knight Capital is the most vivid illustration of this: in 2012 the firm lost over $400 million in under an hour after a botched software deployment left obsolete code running in production. This shows how the automation process of HFT can be detrimental for the companies that use it on a "small scale" (the term being relative to the flash crashes above).
Regulation Compliance's Effects on Development and Production
Due to the volatile nature of HFT and the power it gives individual companies to drastically alter the market, several countries are passing legislation for markets and companies within their respective jurisdictions. These regulations tend to go through small changes over time, with more drastic jumps in response to highly prominent events such as the crashes and glitches described above.
Regulations already exist in the United States, the European Union and its member countries, and Asian markets. They tend to mandate certain checks or procedures in HFT algorithms, but there is no clear evidence that current regulations are having much impact, and there are cases (such as the May 2010 crash) that point towards their ineffectiveness. The regulating bodies are aware of this, and for this reason it is almost guaranteed that regulations will keep changing globally in the near future.
Secondary Costs of Delayed Build Deployment
For the HFT company itself, the time it takes to get new software from the planning board to deployment can mean millions, if not billions, in lost profits. For example, consider that Company A is a market trading corporation that deals in HFT and that Company B is one of its main competitors. Both companies begin working on a new software suite that cuts trading times in half. For simplicity, we will say that the current speed is one one-millionth of a second. Because of poor testing and development procedures, Company A's software is deployed two months after Company B's. Company A now spends nearly a full quarter at a significant speed disadvantage to its competitor.
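The scale of that disadvantage can be made concrete with a back-of-the-envelope sketch. All figures below are hypothetical assumptions from the scenario above, not real market data:

```python
# Hypothetical back-of-the-envelope comparison of the two firms.
# All numbers are illustrative assumptions, not real market figures.

old_latency_s = 1e-6                # current round trip: one microsecond
new_latency_s = old_latency_s / 2   # the new software halves it

# Suppose each firm can attempt one order per latency window during a
# contested opportunity. Company B deploys two months (~60 days) earlier.
head_start_days = 60
trading_seconds_per_day = 6.5 * 3600  # one 6.5-hour trading session

extra_windows = head_start_days * trading_seconds_per_day * (
    1 / new_latency_s - 1 / old_latency_s
)
print(f"Extra order windows for Company B: {extra_windows:.3e}")
```

Even under these toy assumptions, the head start compounds into trillions of additional order windows, which is why deployment delays translate so directly into lost opportunity.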
Program Specifications for High Frequency Trading
After regulation and standards compliance is handled, the trading companies expect just as much from their software, demanding high functionality and insanely fast speeds. While occasional errors do occur, HFT companies want their software to work as well as it can; they want to avoid repeating Knight Capital's mistake and have a vested interest in maintaining a certain level of market stability. Other specifications can include the ability to handle a certain number of orders per second, interaction with specific exchanges, executive fail safes, alerts and notifications for specific people in the company, and the option to inject specific high-volume trades for the software to process and execute.
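To make one of those specifications concrete, here is a minimal sketch of an "executive fail safe": a kill switch that halts order flow once losses or order rates exceed configured limits. The class, method names, and thresholds are hypothetical illustrations, not any real trading system's API:

```python
# Minimal sketch of a trading kill switch. All names and thresholds
# are hypothetical; a production system would be far more involved.
class KillSwitch:
    def __init__(self, max_loss, max_orders_per_sec):
        self.max_loss = max_loss
        self.max_orders_per_sec = max_orders_per_sec
        self.realized_loss = 0.0
        self.halted = False

    def record_fill(self, pnl):
        # Negative pnl increases realized loss.
        if pnl < 0:
            self.realized_loss += -pnl
        if self.realized_loss >= self.max_loss:
            self.halted = True  # stop all further order flow

    def allow_order(self, current_rate):
        # Reject orders when halted or when the order rate spikes.
        return not self.halted and current_rate <= self.max_orders_per_sec

switch = KillSwitch(max_loss=1_000_000, max_orders_per_sec=5_000)
switch.record_fill(-250_000)
print(switch.allow_order(current_rate=4_000))   # True: still trading
switch.record_fill(-800_000)
print(switch.allow_order(current_rate=4_000))   # False: loss limit hit
```

The design point is that the check sits outside the trading algorithm itself, so a runaway strategy can be stopped regardless of its internal state.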
The Future of HFT
There is an undeniable amount of controversy in equity trading, mostly concerning whether or not HFT creates an unequal playing field and destabilizes the markets that allow it more than everyday events do. The panic and fear caused by these concerns can themselves contribute to market crashes, as investors pull out in anticipation of future downturns. Even the regulations implemented to try to compensate for the power of HFT have resulted in further issues: the May 2010 crash was caused largely by the interaction of HFT with "circuit breaker" mechanisms designed to slow down certain trading in certain scenarios.
Also consider that if HFT continues as it stands today, the markets of the future could become a cluttered wasteland where only the very fastest computers with the best algorithms, backed by the most capital, will have any hope of operating. This creates an interesting scenario in which even the different companies that participate in HFT might have different views on how regulations on the technology should be implemented.
Where Does DevOps Come Into Play in HFT?
With all this in mind, it becomes immediately apparent that a number of factors need to be at the forefront of the minds of developers of HFT software. Below is a quick recap of the five major points of this article, along with a look at how DevOps can improve the process.
First: strenuous and complete testing has to be done. This pivotal testing requires the development of reliable, reusable automated testing procedures capable of pushing the software through a large number of scenarios in order to detect any glitches.
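A minimal sketch of what such scenario-driven testing might look like, using a toy strategy and invented price series (the strategy, thresholds, and scenarios are all hypothetical):

```python
# Sketch of automated scenario testing for a toy trading strategy.
# The strategy and the replayed price series are hypothetical examples.

def should_trade(prices, max_move=0.05):
    """Toy strategy guard: refuse to trade after an abnormal price move."""
    if len(prices) < 2:
        return False
    move = abs(prices[-1] - prices[-2]) / prices[-2]
    return move <= max_move

def run_scenarios():
    # Each scenario replays a price series the software must survive,
    # paired with the behavior we expect from the guard.
    scenarios = {
        "calm_market": ([100.0, 100.1, 100.2], True),
        "flash_crash": ([100.0, 100.1, 60.0], False),  # ~40% drop
        "gap_open":    ([100.0, 112.0], False),        # 12% jump
    }
    return {name: should_trade(prices) == expected
            for name, (prices, expected) in scenarios.items()}

print(run_scenarios())   # every scenario should report True (pass)
```

Running a battery like this on every build, automatically, is exactly the kind of repeatable check a DevOps pipeline makes routine.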
Second: the regulations and standards imposed on HFT by the Securities and Exchange Commission and other regulatory bodies are subject to change, albeit slowly. Each implementation of a regulation is effectively a test run where the rules work until they don't. The May 2010 crash is an example of such regulatory backfire, so developers should expect that a program may have to adapt to regulatory changes that deviate sharply from the norm.
Third: a company will want to produce this quality software without increasing development time by a considerable factor. Even for companies that have already worked on previous versions of HFT software, new builds will be required, and DevOps is needed to mitigate issues, like configuration drift, that can cause a previously working program to run into unforeseen errors. The automated testing procedures mentioned in the first point come into play here as well.
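Configuration drift can be caught mechanically by comparing each server's live settings against a declared baseline. A minimal sketch of the idea, with hypothetical node names, keys, and values:

```python
# Minimal sketch of configuration drift detection: compare each node's
# live settings against a declared baseline. All keys, values, and
# host names are hypothetical examples.

baseline = {"order_rate_limit": 5000, "risk_checks": "on", "version": "2.4.1"}

live_nodes = {
    "trade-server-1": {"order_rate_limit": 5000, "risk_checks": "on",
                       "version": "2.4.1"},
    "trade-server-2": {"order_rate_limit": 5000, "risk_checks": "off",
                       "version": "2.4.0"},
}

def find_drift(baseline, nodes):
    drift = {}
    for node, config in nodes.items():
        diffs = {key: (baseline.get(key), config.get(key))
                 for key in baseline if config.get(key) != baseline.get(key)}
        if diffs:
            drift[node] = diffs   # record (expected, actual) pairs
    return drift

print(find_drift(baseline, live_nodes))
# trade-server-2 has drifted: risk checks disabled and an old version.
```

Run continuously, a check like this flags the drifted node before a stale build or disabled safeguard can cause a Knight Capital-style incident.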
Fourth: a comprehensive development approach is the only sure way to include all the functions and capabilities a company wants in these highly complicated programs. Given that a desired-features list may include far more than the sample selection given in the specifications section above, systematically ensuring the software meets its requirements is necessary.
Finally: considering future changes in advance can make responding to them much easier when they arrive. A simple example would be including a method to easily modify the software to perform fewer trades at longer intervals, in anticipation of legislation that limits the speed of HFT. Speculation and discussion with regulatory organizations can help companies prepare for changes, but even then there will always be an element of surprise that gives an edge to fast and precise development methods.
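The "fewer trades at longer intervals" example above amounts to making trade frequency a tunable setting rather than a hard-coded behavior. A minimal sketch of such a throttle (the class name and parameters are hypothetical):

```python
import time

# Sketch of a configurable trade throttle: the minimum interval between
# orders lives in one tunable setting, so adapting to a hypothetical
# speed-limit regulation becomes a config change, not a rewrite.
class Throttle:
    def __init__(self, min_interval_s):
        self.min_interval_s = min_interval_s
        self._last_sent = float("-inf")

    def try_send(self, now=None):
        # `now` can be injected for testing; defaults to a monotonic clock.
        now = time.monotonic() if now is None else now
        if now - self._last_sent >= self.min_interval_s:
            self._last_sent = now
            return True    # order allowed
        return False       # too soon: drop or queue the order

throttle = Throttle(min_interval_s=0.001)  # at most ~1,000 orders/sec
print(throttle.try_send(now=0.0))      # True: first order goes out
print(throttle.try_send(now=0.0005))   # False: interval too short
print(throttle.try_send(now=0.0015))   # True: allowed again
```

If a regulator later mandates slower trading, only `min_interval_s` changes, which is exactly the kind of forward-looking flexibility this point argues for.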
With these reasons in mind, it's highly recommended that any company involved with, or seeking to venture into, HFT employ a DevOps methodology in its policies, both for its own sake and for the sake of the market.