Testing the Random Walk Hypothesis with R, Part One
Whilst working on some code for my Masters I kept thinking, "it would be really awesome if there was an R package which just consumed a price series and produced a data.frame of results from multiple randomness tests at multiple frequencies". So I decided to write one and it's named emh after the Efficient Market Hypothesis.
The emh package is extremely simple. You download a price series zoo object from somewhere (e.g. Quandl.com) and then pass the zoo object into the is_random() function in emh. This function will return an R data.frame containing the results of many randomness tests applied to your price series at different frequencies / lags:
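For example, a minimal sketch of that workflow looks something like this (the Quandl dataset code below is only a placeholder - substitute any code for a price series you like - and the exact columns of the returned data.frame may differ between versions):

```r
library(Quandl)
library(emh)

# Download a price series as a zoo object from Quandl.com
# (the dataset code is a placeholder, not a recommendation)
prices <- Quandl("YAHOO/INDEX_GSPC", type = "zoo")

# Run the suite of randomness tests at multiple frequencies / lags
results <- is_random(prices)
head(results)
```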
This is my first open source R package, so I invite you to use it and, if you encounter any issues or missing must-have features, please let me know about them on the GitHub repository. I will also be giving an R/Finance talk about market efficiency this Thursday at Barclay's Think Rise in Cape Town, so please come through.
With that said, here is the outline for the rest of this article,
- Background Information and Context
- Introduction to the emh R Package
- The Six Statistical Tests in emh v0.1.0
- Conclusions and Future Plans
This article is part one of at least three parts. In the parts that follow I will continue going through various statistical tests of randomness and explain if and how they relate to the ones we have already covered.
Background Information and Context
I've written about market efficiency and randomness for a while now (since April of 2015) and over that time my understanding of the topic has grown exponentially. I recommend taking a look at my previous articles - which can be found here, here, here, and here - but if you don't have the time, the subsections below are designed to provide just the right amount of background information and context to understand the point of the package.
What is the Efficient Market Hypothesis?
The Efficient Market Hypothesis (EMH) is an economic theory which proposes that financial markets accurately and instantaneously incorporate all available information about any given security into the current price of that security. The Efficient Market Hypothesis was developed by Professor Eugene Fama in work published between 1965 and 1970. If true, actively trading securities based on historical information cannot generate abnormal returns. Abnormal returns are defined as consistent returns over-and-above those produced by the market which were obtained whilst taking on less risk than that of the market.
The Efficient Market Hypothesis does not say you can't beat the market in terms of cumulative return. Theoretically that's "easy", you can just buy a leveraged index ETF and hold on to your pants [1]. What the Efficient Market Hypothesis says is that there is no free lunch. If you want higher returns, you need to take on higher risk.
The Efficient Market Hypothesis distinguishes between weak, semi-strong, and strong form efficient markets according to the subset of information which the market takes into account in current security prices.
Weak form efficient markets take into account all historical price data in current security prices. Semi-strong form efficient markets take into account all relevant publicly available information in current security prices. Strong form efficient markets take into account all relevant (even insider) information in current security prices.
Market efficiency is a by-product of market participation by information arbitrageurs. Information arbitrageurs are economic agents which buy undervalued assets and sell overvalued assets based on new information as it comes out. In so doing these information arbitrageurs reflect the new information into security prices.
If markets were perfectly efficient the expected return of being an information arbitrageur is zero therefore markets cannot be perfectly efficient (see Grossman and Stiglitz 1980). Economic rents earned by information arbitrageurs are therefore earned because of the "inefficient" actions of noise traders (see Black 1986).
Noise traders are economic agents which buy and sell assets for reasons other than new information. An example is a large insurance company which liquidates some of its holdings to pay out a large insurance claim. Efficient markets cannot exist without both information arbitrageurs and noise traders. Passive index investors are, in my opinion, just another form of noise trader ... feel free to disagree in the comment section below ;-).
What is the Random Walk Hypothesis?
The Random Walk Hypothesis is a theory about the behaviour of security prices which argues that they are well described by random walks, specifically sub-martingale stochastic processes. The Random Walk Hypothesis predates the Efficient Market Hypothesis by some 70 years, but it is actually a consequence of the Efficient Market Hypothesis, not a precursor to it.
If a market is weak-form efficient then the change in a security's price, with respect to the security's historical price changes, is approximately random because the historical price changes are already reflected in the current price. This is why randomness tests are typically used to test the weak-form efficient market hypothesis.
I say "approximately" random because even if the market is efficient you should - in theory at least - be compensated for taking on the risk of holding assets. This is called the market risk premium and it is the reason buy-and-hold investing and index investing don't have expected returns equal to zero in the long run.
Consider the graph below. The log price of the Dow Jones Industrial Average from 1896 to 2016 is shown in black. If the market were truly random, this line would not consistently increase the way it does.
The market goes up because investors deserve to be compensated for the risk they took when they invested in the stock market over some other investment e.g. cash. This return is called the equity risk premium and it has been approximated by a compounded 126-day rolling average return in the graph [2] (the grey line). The red line represents the compounded excess / residual return of the market over our approximation of the equity risk premium.
Assuming our approximation of the market risk premium is correct - which it isn't - the grey line represents the market and it is what you should expect to have made. It is the signal. The red line should just be noise or a Martingale process. Thinking along these lines we soon realise that there are a few ways to test the random walk hypothesis:
- Predict or find statistically significant patterns in the equity risk premium (market timing),
- Predict or find statistically significant patterns in the residual returns (not prices),
- Predict or find statistically significant patterns in the sign or rank of the residual returns, or
- Use non-parametric statistical tests of randomness which factor in the equity risk premium a.k.a drift.
Most statistical tests of randomness boil down to approaches (2), (3), or (4). The purpose of the emh R package is to make correctly running all of these statistical tests on financial price time series as easy as possible :-).
Note that approaches (1), (2), and (3) are also essentially what active investors try to do on a daily basis! Therefore, it shouldn't come as a surprise that converting any test of the Random Walk Hypothesis to a test of the Efficient Market Hypothesis essentially involves testing whether the identified patterns are also economically significant. An economically significant pattern is one which can be exploited to generate abnormal returns.
Why should you care about randomness?
A lot of people ask me why I am so obsessed with randomness. I am obsessed with randomness because all forms of investing can be easily understood in the context of market efficiency and randomness testing. This is an over-simplification, but here's how I see the world of investing through the lens of randomness and market efficiency,
Classical Mean Variance Portfolio Optimization (MVO) assumes that security prices are stationary random walks that are fully described by their first two moments ... hence the name mean variance optimization.
Quantitative asset pricing models argue that security prices are random and can be effectively modelled by stochastic processes. These stochastic processes are often used in Monte Carlo simulations to price assets.
High Frequency Traders and Arbitrageurs argue that security prices are not random at high frequencies either because they exhibit patterns or because the law of one price is violated across geographic regions.
Fundamental Analysis argues that security prices are not random at low frequencies with respect to the set of information, $\mathcal{I}$, which contains fundamental information about the company which underlies the security.
Macroeconomic investors argue that security prices are not random at low frequencies with respect to the set of information, $\mathcal{I}$, which contains macroeconomic indicators such as business and credit cycles.
Technical Analysis argues that security prices and volume data are not random at any frequency because they exhibit economically significant patterns which are identifiable and exploitable using deterministic technical indicators.
Quantitative investing is a combination of all the above. Quantitative traders believe that with respect to any set of information, $\mathcal{I}$, and some set of models, $\mathcal{M}$, security prices consist of a signal component and a noise component. In other words, identifying the signal within the noise requires both data, $\mathcal{I}$, and powerful models, $\mathcal{M}$ [3].
That said, no matter how powerful your models are, if security prices are random with respect to your dataset you will never be able to produce abnormal returns using it. This is the reason why I think you should care about randomness tests. They can help identify inefficient securities / markets, useful frequencies, and even useful datasets.
Comments on the Above
[1] I actually have some serious concerns about leveraged ETPs, so do not interpret this statement as financial advice. It is most definitely not. I might write a blog post about this sometime soon.
[2] Most randomness tests actually work on the residuals between the data and a linear regression fitted to that data. The emh package allows the user to decide how they want to calculate residual returns ... but personally I think that computing the residual returns by subtracting a moving average is more accurate because, firstly, it does not assume that the risk premium is constant and, secondly, using a linear regression makes the implicit assumption that you know what the parameter values of the linear regression are upfront, which is basically a form of look-ahead bias.
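To make the moving-average approach in [2] concrete, here is a minimal sketch using zoo; the simulated price series and the 126-day window are stand-ins, not the exact defaults used inside emh:

```r
library(zoo)

set.seed(42)
prices <- zoo(cumprod(1 + rnorm(1000, 0.0003, 0.01)))  # stand-in price series

# Residual returns: log returns minus a trailing 126-day rolling average,
# a simple approximation of a time-varying risk premium with no look-ahead
log_returns <- diff(log(prices))
premium     <- rollmeanr(log_returns, k = 126)  # right-aligned rolling mean
residual    <- log_returns - premium            # zoo aligns on the index
```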
[3] I usually like to draw a distinction between quantitative and computational investors. The difference comes down to the power of the models, $\mathcal{M}$, used by these two groups of investors. Quantitative investors use less powerful models with few parameters whereas computational investors use more powerful models with many, many parameters. Tools used by computational investors include Machine Learning and Graphical Models. There are pros and cons to either approach which will have to be covered in another unrelated blog post :-).
Introduction to the emh R Package
There are many randomness tests out there and many of them have been used to test the efficiency of markets. However, most randomness tests have biases and gaps. What may be random according to one test may be non-random according to another. This is the reason why industries outside of finance which rely on secure random number generation (such as the information security industry) typically make use of large batteries of randomness test suites to conclude anything about the randomness of a particular sequence.
Quantitative finance should aim to do the same and that is where the emh package comes in. emh aims to provide a simple interface to a suite of randomness tests commonly used for testing market efficiency.
How to install emh in R from GitHub
I am only planning on uploading the emh package to CRAN once I have added about 15 randomness tests and 5 or 6 stochastic process models, so for now the package can only be installed from GitHub via devtools,
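```r
# install.packages("devtools")  # if you don't have devtools yet
devtools::install_github(repo = "stuartgordonreid/emh")
library(emh)
```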
Example Application to the S&P SMA Index
Originally I was planning on demonstrating the package in this blog post, however, Jupyter notebooks are far better suited to the task than this website. As such you can view an example of the package using the Jupyter Notebook Viewer or directly in the GitHub repository in the examples directory. You can also clone the repository and open up the notebook on your own Jupyter notebook server. Please let me know if you have any problems.
The Six Statistical Tests in emh v0.1.0
In emh v0.1.0 I have included six simple randomness tests which are used all the time when studying market efficiency. Generally speaking there are five types of randomness tests - runs tests, serial correlation tests, unit root tests, variance ratio tests, and complexity tests. In version 0.1.0 there is one runs test, three serial correlation tests, and two variance ratio tests. In subsequent versions there will be many more tests added in each category.
The Independent Runs Test
I wrote about runs tests before on this blog in my second randomness article, Hacking The Random Walk Hypothesis with Python, you can read what I had to say here and here. To put it very simply, the runs test is a non-parametric test (meaning that it does not assume much about the underlying distribution of the data) which works on binarized returns. Binarized returns are returns which have been converted to binary i.e. 1 or 0 depending on whether they were positive returns (+) or negative returns (-). A run is any consecutive sequence of either 1's (+) or 0's (-); for example, the binarized sequence 1 1 0 0 0 1 contains three runs.
Abraham Wald and Jacob Wolfowitz, two mathematicians who wrote a lot about exact and non-parametric randomness tests in the 1940's, were the first to prove that when the number of bits in a sequence, $n$, gets large, the conditional distribution of the number of runs, $R$, given the number of ones, $n_1$, and the number of zeroes, $n_0$, is approximately normal with,

$$\mu = \frac{2 n_1 n_0}{n} + 1$$

and

$$\sigma^2 = \frac{2 n_1 n_0 (2 n_1 n_0 - n)}{n^2 (n - 1)}$$
Note that this randomness test is conditional on the number of 1's and the number of 0's. Therefore drift, the general tendency of markets to go up over time rather than down, does NOT impact the results. The proportion of 1's in the sequence could be 90% and the above statement would still hold true. Furthermore, because the runs test only deals with the sign of the return and not its magnitude, it is not affected by stochastic volatility.
That having been said, if patterns exist in the magnitude or size of returns in either direction over time, such as would be the case in a mean-reverting or momentum-driven market, the runs test will not be able to identify these.
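As a quick illustration, here is how one might run a Wald-Wolfowitz runs test on binarized returns using the runs.test function from the tseries package (emh ships its own implementation; the simulated returns below are just a stand-in for real data):

```r
library(tseries)

set.seed(42)
r <- rnorm(1000)                  # stand-in for a series of log returns

# Binarize the returns: TRUE (+) for up moves, FALSE (-) for down moves,
# dropping zero returns which have no sign
binary <- factor(sign(r[r != 0]) > 0)

# Wald-Wolfowitz runs test: rejects randomness when the observed number
# of runs is far from its conditional expectation
runs.test(binary)
```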
The Durbin-Watson Test
The Durbin-Watson test is named after James Durbin and Geoffrey Watson. Their test looks for the presence of autocorrelation, also known as serial correlation, in time series. Autocorrelation is the correlation between a time series and itself lagged by some amount. If a time series exhibits statistically significant autocorrelation it is considered non-random because it means that historical information can be used to predict future events.
To be more specific, the Durbin-Watson test looks for autocorrelation in the residuals, $e_t$, from a regression analysis done between the returns and the returns lagged by some amount. The test statistic, $d$, is calculated as follows,

$$d = \frac{\sum_{t=2}^{T} (e_t - e_{t-1})^2}{\sum_{t=1}^{T} e_t^2}$$

where $T$ is the length of the residuals. To test for either positive autocorrelation (momentum) or negative autocorrelation (mean reversion) at some significance level, $\alpha$, the test statistic $d$ is compared to the lower and upper critical values, $d_L$ and $d_U$, at $\alpha$. These values have been derived and are available from Stanford's website.
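For illustration, a minimal sketch using lmtest::dwtest on simulated returns (again, emh has its own wiring for this test):

```r
library(lmtest)

set.seed(42)
r <- rnorm(1000)                  # stand-in for a series of log returns

# Regress the returns on their first lag; dwtest then computes d on the
# residuals of this regression (d near 2 suggests no autocorrelation)
lagged <- embed(r, 2)             # column 1 = r_t, column 2 = r_{t-1}
dwtest(lagged[, 1] ~ lagged[, 2])
```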
The Ljung-Box Test
The Ljung-Box Test is named after Greta Ljung and George Box, source of the famous quote - "all models are wrong, but some are useful". The Ljung-Box test checks whether any of a group of autocorrelations of a time series are significantly different from zero, which is what we would expect when the financial time series being tested exhibits either momentum or mean-reversion. The Ljung-Box test statistic is calculated as follows,

$$Q = T (T + 2) \sum_{k=1}^{h} \frac{\hat{\rho}_k^2}{T - k}$$

where $T$ is the length of the time series and $\hat{\rho}_k^2$ is the squared autocorrelation calculated at lag $k$. The test statistic, $Q$, is distributed according to the Chi-Squared distribution with $h$ degrees of freedom ... although this depends on some assumptions about the data and may not always be true. This is the single biggest criticism against this test.
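Base R ships this test as stats::Box.test; as a quick illustration on simulated returns:

```r
set.seed(42)
r <- rnorm(1000)                  # stand-in for a series of log returns

# Ljung-Box test on the first h = 10 autocorrelations; a small p-value
# suggests at least one autocorrelation differs significantly from zero
Box.test(r, lag = 10, type = "Ljung-Box")
```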
The Breusch-Godfrey Test
The Durbin-Watson Test and the Ljung-Box Test are used very often, however some studies indicate that they are biased toward the null hypothesis. In other words, they are more likely to say that a time series is random than non-random (see chapter 6). Such biases are actually the topic of a journal article I am working on :-).
The Breusch-Godfrey Test was developed by Trevor S. Breusch and Leslie G. Godfrey and is considered a more powerful test for autocorrelation than either the Durbin-Watson or the Ljung-Box test. The Breusch-Godfrey test also tests for statistically significant autocorrelation in the residuals, $e_t$, from a regression analysis. Breusch and Godfrey proved that if you fit an auxiliary regression to the original data and the lagged residuals from a linear regression, then the statistic, $T R^2$, where $T$ is the length of the series and $R^2$ is the coefficient of determination of the auxiliary regression, is asymptotically distributed according to the Chi-Squared distribution,

$$T R^2 \sim \chi^2_p$$

where $p$ is the number of lagged residuals included in the auxiliary regression.
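For illustration, lmtest::bgtest runs this test; here is a minimal sketch on simulated returns, testing the residuals of a simple constant-mean model:

```r
library(lmtest)

set.seed(42)
r <- rnorm(1000)                  # stand-in for a series of log returns

# Breusch-Godfrey test for autocorrelation up to lag 5 in the residuals
# of a constant-mean model of the returns
bgtest(r ~ 1, order = 5)
```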
For more information on the above statistical tests for autocorrelation I recommend listening to Professor Ben Lambert's videos on testing for autocorrelation. I found them to be very helpful and relatively easy to understand.
The Bartels Rank-based Variance Ratio Test
In 1941 John Von Neumann, a hero of early classical computing and mathematics in general, introduced a test of randomness based on the ratios of variances computed at different sampling intervals. This test, known as the Von Neumann Ratio Test, is a very good test of randomness under the assumption of normality.
In 1982 Robert Bartels created a nonparametric, rank-based version of the Von Neumann Ratio Test which doesn't assume that the data is normally distributed. The test statistic, $\text{RVN}$, is computed as follows,

$$\text{RVN} = \frac{\sum_{t=1}^{T-1} (R_t - R_{t+1})^2}{\sum_{t=1}^{T} (R_t - \bar{R})^2}$$

where $R_t$ is the rank of the logarithmic return $r_t$ and $T$ is the length of the time series. Bartels proved that the standardized statistic, $(\text{RVN} - 2) / \sigma_{\text{RVN}}$, is asymptotically standard normal (i.e. with mean 0 and variance 1) with,

$$\sigma_{\text{RVN}}^2 \approx \frac{4}{T}$$
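For illustration, the randtests package provides an implementation of this test as bartels.rank.test; a minimal sketch on simulated returns:

```r
library(randtests)

set.seed(42)
r <- rnorm(1000)                  # stand-in for a series of log returns

# Rank-based von Neumann ratio test (Bartels, 1982); because it works on
# ranks it does not assume the returns are normally distributed
bartels.rank.test(r)
```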
The Lo-MacKinlay Variance Ratio Test
The Heteroscedasticity-consistent Variance Ratio test developed by Andrew Lo and Jonathan MacKinlay in 1987 is perhaps the most interesting and complex randomness test I have encountered. I wrote about this test in my third randomness article, Stock Market Prices Do Not Follow Random Walks - named after Lo and MacKinlay's paper of the same name. I highly recommend reading the above article as I will not be recapping the test here ...
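If you want to experiment outside of emh, my understanding is that the vrtest package exposes this test as Lo.Mac; treat the sketch below as an assumption about that package's interface rather than a definitive recipe:

```r
library(vrtest)

set.seed(42)
r <- rnorm(1000)                  # stand-in for a series of log returns

# Lo-MacKinlay variance ratio tests at holding periods of 2, 5 and 10;
# M2 is the heteroscedasticity-consistent statistic
Lo.Mac(r, kvec = c(2, 5, 10))
```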
I will, however, make one comment about this test. The test is only valid if security price changes have finite variances. In this context a security with infinite variance is one where the estimate of variance does not converge according to the central limit theorem. A number of people, upon reading this, argued that security price changes have infinite variances. This is essentially a throwback to Mandelbrot's Stable Paretian Hypothesis from the 1960's.
Those people are wrong for one simple reason: if daily security prices have infinite variance then so must weekly, monthly, quarterly, and yearly price changes. Why? Because the characteristic exponent of any stable distribution is invariant to the sampling interval. However, what we observe in reality is that lower frequency returns do have finite variances. Therefore daily returns cannot be distributed according to any stable distribution.
Mandelbrot himself admitted in later years that the "infinite variances" (variances which do not converge to a true estimate) observed in daily returns are likely to be a symptom of conditional heteroscedasticity ... which is what we generally assume when modelling security prices using Autoregressive Conditional Heteroscedasticity (ARCH) models and this is also, to some extent, what Lo and MacKinlay were controlling for in their test.
Conclusions and Future Plans
All investment methodologies and techniques can be understood and reasoned about within the context of market efficiency, and ultimately all quantitative investing boils down to the belief that: with respect to any set of information, $\mathcal{I}$, and some set of models, $\mathcal{M}$, security prices consist of a signal component and a noise component. In other words, identifying the signal within the noise requires both data, $\mathcal{I}$, and powerful models, $\mathcal{M}$.
Randomness testing - and the emh package by extension - can help to identify inefficient markets, inefficient frequencies, and information-rich datasets. The emh R package is still very new, and I will be contributing to it considerably during the December holidays and in 2017. I plan to add many more randomness tests in each of the five categories: runs tests, serial correlation tests, unit root tests, variance ratio tests, and complexity tests.
In the meantime, try it out! Let me know what you think and whether you find any issues with the tests. Unlike the NIST suite which I coded up in Python back in 2015, there are no "unit tests" against which to check my and others' implementations, so bugs are probably an inevitability. Lastly, if you are in Cape Town on Thursday evening and you find this stuff interesting, then please try to come through to the R/Finance workshop.
Comments
-
Really nice work. Just a quick, and perhaps stupid, question - if these tests were applied to an equity curve generated by a trading strategy, would it be theoretically justified to say that the underlying trading strategy is non-random?
-
Thank you for making this package, it looks like you have expended tremendous effort in producing it. I am very excited to try 'emh' as well as learn from your code.
I cloned your git repo and attempted to build the package in RStudio. Unfortunately, I am on Mac OS X and it looks like the package is coded for Linux. I'll try to make a port to Mac OS X.
But serious thanks for creating this package. Very impressive.
-
Stuart,
Please ignore my previous. I had a bad value in my ~/.R/Makevars which was the culprit ... works fine! Stupid User Error ... sorry
Randy
-
Hi,
Nice overview and nice package.
One point that you make above: "...if daily security prices have infinite variance then so must weekly, monthly, quarterly, and yearly price changes... However, what we observe in reality is that lower frequency returns do have finite variances. Therefore daily returns cannot be distributed according to any stable distribution."
I suggest you read the paper below for a simple but very effective counter-argument to the aggregational Gaussianity hypothesis of lower frequency returns:
(http://www.sciencedirect.com/science/article/pii/S2212567115007510) -
The best blog in quantitative finance! In R studio, I cannot install the package, I get this:
*** arch - i386
Error in inDL(x, as.logical(local), as.logical(now), ...) :
unable to load shared object 'F:/R-3.3.2/library/emh/libs/i386/emh.dll':
LoadLibrary failure: %1 is not a valid Win32 application.
when running
devtools::install_github(repo="stuartgordonreid/emh")
Please help, I have tried both 32/64 bit r versions, and much more....