Intelligent Algorithmic Trading Systems
Algorithmic trading is the use of computer algorithms to automatically make trading decisions, submit orders, and manage those orders after submission. Algorithmic trading systems are best understood through a simple conceptual architecture consisting of three components, each handling a different aspect of the system: the data handler, the strategy handler, and the trade execution handler. These components map one-for-one onto the aforementioned definition of algorithmic trading. In this article we extend this architecture to describe how one might go about constructing more intelligent algorithmic trading systems.
What does it mean for a system to be more intelligent? In the context of algorithmic trading we will measure intelligence by the degree to which the system is both self-adapting and self-aware. But before we get to that let's elaborate on the three components in the conceptual architecture of the algorithmic trading system.
Algorithmic trading systems can use structured data, unstructured data, or both. Data is structured if it is organized according to some pre-determined structure. Examples include spreadsheets, CSV files, JSON files, XML, databases, and data structures. Market-related data such as intra-day prices, end-of-day prices, and traded volumes are usually available in a structured format. Economic and company financial data is also available in a structured format. Two good sources of structured financial data are Quandl and Morningstar.
Data is unstructured if it is not organized according to any pre-determined structures. Examples include news, social media, videos, and audio. This type of data is inherently more complex to process and often requires data analytics and data mining techniques to analyse it. Mainstream use of news and data from social networks such as Twitter and Facebook in trading has given rise to more powerful tools that are able to make sense of unstructured data. Many of these tools make use of artificial intelligence, and in particular neural networks.
A model is the representation of the outside world as it is seen by the algorithmic trading system. Financial models usually represent how the algorithmic trading system believes the markets work. The ultimate goal of any model is to use it to make inferences about the world, or in this case the markets. The most important thing to remember here is George E. P. Box's quote: "Essentially, all models are wrong, but some are useful".
Models can be constructed using a number of different methodologies and techniques but fundamentally they are all essentially doing one thing: reducing a complex system into a tractable and quantifiable set of rules which describe the behaviour of that system under different scenarios. Some approaches include, but are not limited to, mathematical models, symbolic and fuzzy logic systems, decision trees, induction rule sets, and neural networks.
The use of mathematical models to describe the behaviour of markets is called quantitative finance. Most quantitative finance models work off the inherent assumption that market prices (and returns) evolve over time according to a stochastic process; in other words, markets are random. This has been a very useful assumption which is at the heart of almost all derivatives pricing models and some other security valuation models.
Essentially most quantitative models argue that the returns of any given security are driven by one or more random market risk factors. The degree to which the returns are affected by those risk factors is called sensitivity. For example, a well-diversified portfolio's returns may be driven by the movement of short-term interest rates, various foreign exchange rates, and the returns in the overall stock market. These factors can be measured historically and used to calibrate a model which simulates what those risk factors could do and, by extension, what the returns on the portfolio might be. For more information please see A Random Walk Down Wall Street.
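The factor-based simulation described above can be sketched in a few lines. This is a minimal Monte Carlo illustration, not a production model: the factor loadings (sensitivities) and factor volatilities below are hypothetical, and each risk factor is assumed to be an independent, zero-mean normal shock.

```python
import random

def simulate_portfolio_returns(sensitivities, factor_vols,
                               n_scenarios=10000, seed=42):
    """Monte Carlo sketch of factor-driven portfolio returns.

    Each scenario draws one normal shock per risk factor and sums
    them, weighted by the portfolio's sensitivity to each factor.
    """
    rng = random.Random(seed)
    scenarios = []
    for _ in range(n_scenarios):
        r = sum(beta * rng.gauss(0.0, vol)
                for beta, vol in zip(sensitivities, factor_vols))
        scenarios.append(r)
    return scenarios

# Hypothetical loadings on rates, FX, and equity-market factors
returns = simulate_portfolio_returns(sensitivities=[0.2, 0.3, 0.8],
                                     factor_vols=[0.01, 0.02, 0.05])
mean_return = sum(returns) / len(returns)
```

In practice the sensitivities would be estimated by regressing historical portfolio returns against historical factor returns, and the shocks would be drawn from a correlated, calibrated distribution rather than independent normals.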
Symbolic and Fuzzy Logic Models
Symbolic logic is a form of reasoning which essentially involves the evaluation of predicates (logical statements constructed from logical operators such as AND, OR, and XOR) to either true or false. Fuzzy logic relaxes the binary true-or-false constraint and allows any given predicate to belong to the set of true predicates and the set of false predicates to different degrees. These degrees are defined in terms of set membership functions.
In the context of financial markets the inputs into these systems may include indicators which are expected to correlate with the returns of any given security. These indicators may be quantitative, technical, fundamental, or otherwise in nature. For example, a fuzzy logic system might infer from historical data that if the five day exponentially weighted moving average is greater than or equal to the ten day exponentially weighted moving average then there is a sixty-five percent probability that the stock will rise in price over the next five days.
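The moving-average rule above can be made concrete with a small sketch. The `ewma` helper and the `bullish_membership` function below are illustrative inventions: the membership function simply maps the fast/slow average spread onto a degree of "bullishness" between 0 and 1, which is the fuzzy-logic idea of partial set membership.

```python
def ewma(prices, span):
    """Exponentially weighted moving average with smoothing 2/(span+1)."""
    alpha = 2.0 / (span + 1.0)
    avg = prices[0]
    for p in prices[1:]:
        avg = alpha * p + (1.0 - alpha) * avg
    return avg

def bullish_membership(fast, slow, scale=0.01):
    """Degree to which the crossover signal belongs to the 'bullish' set:
    0 when the fast average is well below the slow one, 1 when well
    above, and linear in between (a simple triangular-style membership)."""
    x = (fast - slow) / (slow * scale)
    return max(0.0, min(1.0, 0.5 + 0.5 * x))

prices = [100, 101, 102, 103, 105, 104, 106, 107, 108, 110]
fast = ewma(prices, span=5)
slow = ewma(prices, span=10)
signal = bullish_membership(fast, slow)  # 0.0 = bearish, 1.0 = bullish
```

A crisp rule would fire only when `fast >= slow`; the fuzzy version instead grades how strongly the condition holds, which downstream logic can combine with other graded indicators.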
A data-mining approach to identifying these rules from a given data set is called rule induction. This is very similar to the induction of a decision tree except that the results are often more human readable.
Decision Tree Models
Decision trees are similar to induction rules except that the rules are structured in the form of a (usually binary) tree. In computer science, a binary tree is a tree data structure in which each node has at most two children, which are referred to as the left child and the right child. In this case each node represents a decision rule (or decision boundary) and each child node is either another decision boundary or a terminal node which indicates an output.
There are two types of decision trees: classification trees and regression trees. Classification trees contain classes in their outputs (e.g. buy, hold, or sell) whilst regression trees contain outcome values for a particular variable (e.g. -2.5%, 0%, +2.5%, etc.). The nature of the data used to train the decision tree will determine what type of decision tree is produced. Algorithms used for producing decision trees include C4.5 and Genetic Programming.
As with rule induction, the inputs into a decision tree model may include quantities for a given set of fundamental, technical, or statistical factors which are believed to drive the returns of securities.
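A classification tree of the kind described above can be written out by hand as nested conditionals. The factors (momentum and earnings yield) and the boundary values below are hypothetical; in practice an algorithm such as C4.5 would induce the boundaries from training data rather than having them hard-coded.

```python
def classify(momentum, earnings_yield):
    """Walk a tiny hand-built classification tree to a buy/hold/sell leaf.

    Each `if` is a decision boundary; each `return` is a terminal node.
    """
    if momentum > 0.02:            # root decision boundary
        if earnings_yield > 0.05:  # second-level decision boundary
            return "buy"
        return "hold"
    if momentum < -0.02:
        return "sell"
    return "hold"

decision = classify(momentum=0.03, earnings_yield=0.06)
```

A regression tree would look identical except that the terminal nodes would return numeric outcome values (e.g. expected returns) instead of class labels.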
Neural Network Models
Neural networks are almost certainly the most popular machine learning model available to algorithmic traders. Neural networks consist of layers of interconnected nodes between inputs and outputs. Individual nodes are called perceptrons and resemble a multiple linear regression except that they feed into an activation function, which is typically non-linear. In non-recurrent neural networks perceptrons are arranged into layers, and the layers are connected to one another. There are three types of layers: the input layer, the hidden layer(s), and the output layer. The input layer receives the normalized inputs, which would be the factors expected to drive the returns of the security, and the output layer could contain either buy, hold, or sell classifications, or real-valued probable outcomes such as binned returns. The hidden layers essentially adjust the weightings on those inputs until the error of the neural network (how it performs in a backtest) is minimized. One interpretation of this is that the hidden layers extract salient features in the data which have predictive power with respect to the outputs. For a much more detailed explanation of neural networks please see this article.
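The forward pass of such a network can be sketched in plain Python. The weights below are arbitrary illustrative numbers, not trained values; in a real system they would be learned by backpropagation against the chosen error measure. Each perceptron is a weighted sum of its inputs plus a bias, pushed through a sigmoid activation.

```python
import math

def sigmoid(x):
    """Classic logistic activation, squashing any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    """One forward pass through a 3-input, 2-hidden, 1-output network.

    Each weight vector's last element is the perceptron's bias term.
    """
    hidden = [sigmoid(sum(w * x for w, x in zip(ws[:-1], inputs)) + ws[-1])
              for ws in hidden_weights]
    ws = output_weights
    return sigmoid(sum(w * h for w, h in zip(ws[:-1], hidden)) + ws[-1])

# Hypothetical normalized factor inputs and illustrative (untrained) weights
inputs = [0.5, -0.2, 0.1]
hidden_weights = [[0.4, -0.6, 0.2, 0.0],
                  [0.1, 0.3, -0.5, 0.1]]
output_weights = [0.7, -0.4, 0.05]
p_up = forward(inputs, hidden_weights, output_weights)
```

The single output here could be read as a probability-like score that the security rises; a classification variant would instead use one output node per class (buy, hold, sell).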
In addition to these models there are a number of other decision making models which can be used in the context of algorithmic trading (and markets in general) to make predictions regarding the direction of security prices or, for quantitative readers, to make predictions regarding the probability of any given move in a security's price.
The choice of model has a direct effect on the performance of the algorithmic trading system. Using multiple models (ensembles) has been shown to improve prediction accuracy, but does so at the cost of increased implementation complexity. The model is the brain of the algorithmic trading system. In order to make the algorithmic trading system more intelligent, the system should store data regarding any and all mistakes it has made historically, and it should adapt its internal models according to those mistakes. In some sense this would constitute self-awareness (of mistakes) and self-adaptation (continuous model calibration). That said, this is certainly not a Terminator!
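The simplest form of ensemble is a majority vote over the signals of several independent models. This is a minimal sketch of that idea; the tie-breaking rule of falling back to "hold" is my own illustrative choice, and real ensembles often use weighted or probability-averaged votes instead.

```python
from collections import Counter

def ensemble_vote(signals):
    """Majority vote over model signals; ties fall back to 'hold'.

    `signals` is a list of class labels, one per model in the ensemble.
    """
    top = Counter(signals).most_common()
    if len(top) > 1 and top[0][1] == top[1][1]:
        return "hold"  # no clear majority: take the conservative action
    return top[0][0]

combined = ensemble_vote(["buy", "buy", "sell"])
```

Combining a fuzzy-logic system, a decision tree, and a neural network this way means no single model's failure mode dominates the final trading decision.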
The execution component is responsible for putting through the trades that the model identifies. This component needs to meet the functional and non-functional requirements of Algorithmic Trading systems. For example, the speed of the execution, the frequency at which trades are made, the period for which trades are held, and the method by which trade orders are routed to the exchange need to be sufficient. Any implementation of the algorithmic trading system should be able to satisfy those requirements. In this article I propose an open architecture for algorithmic trading systems which I believe satisfies many of the requirements.
Artificial intelligence learns by optimizing objective functions. Objective functions are usually mathematical functions which quantify the performance of the algorithmic trading system. In the context of finance, measures of risk-adjusted return include the Treynor ratio, the Sharpe ratio, and the Sortino ratio. The model component of the algorithmic trading system would be "asked" to maximize one or more of these quantities. The challenge with this is that markets are dynamic; in other words, the models, logic, or neural networks which worked before may stop working over time. To combat this, the algorithmic trading system should train its models with information about the models themselves. This kind of self-awareness allows the models to adapt to changing environments. I think of this self-adaptation as a form of continuous model calibration for combating market regime changes.
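As one concrete objective function, the Sharpe ratio divides a strategy's mean excess return by the volatility of those returns. The sketch below uses per-period returns and omits annualization for simplicity; the sample returns are invented for illustration.

```python
import math

def sharpe_ratio(returns, risk_free=0.0):
    """Per-period Sharpe ratio: mean excess return over its sample
    standard deviation (Bessel-corrected). Annualization omitted."""
    excess = [r - risk_free for r in returns]
    mean = sum(excess) / len(excess)
    var = sum((r - mean) ** 2 for r in excess) / (len(excess) - 1)
    return mean / math.sqrt(var)

# Hypothetical per-period strategy returns
sr = sharpe_ratio([0.01, 0.02, -0.01, 0.03, 0.00])
```

A self-adapting system would track this quantity over a rolling window and trigger recalibration when it degrades, which is one simple way to operationalize the "continuous model calibration" described above.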
Algorithmic trading has become very popular over the past decade. It now accounts for the majority of trades put through exchanges globally, and it has contributed to the success of some of the world's best-performing hedge funds, most notably that of Renaissance Technologies. That having been said, there is still a great deal of confusion and there are many misconceptions regarding what algorithmic trading is and how it affects people in the real world. To some extent, the same can be said for artificial intelligence.
Too often, research into these topics is focussed purely on performance, and we forget that it is equally important for researchers and practitioners to build stronger and more rigorous conceptual and theoretical models upon which we can further the field in years to come. Whether we like it or not, algorithms shape our modern-day world, and our reliance on them gives us the moral obligation to continuously seek to understand them and improve upon them. I leave you with a video entitled "How Algorithms Shape our World" by Kevin Slavin.
How Algorithms Shape our World