

Johansen Test in Mathematica

Posted 7 years ago

A post from five years ago, How to run Johansen Test in Mathematica, requested code for the Johansen test in Mathematica. However, the verbeia.com code offered there had problems (incorrectly normalized eigenvectors and computational errors). As a better alternative, I'd like to post my own Johansen test code here, which I believe is correct: I've compared its output with that of the Matlab Johansen code in the Spatial Econometrics library, and they agree. I've added my Mathematica code as an attachment to this post, "JohansenTest.nb".

The code includes a few subroutines that allow the output from the Johansen test to be displayed in a nice tabular form, such as:

Johansen Test Output

This table shows the results for a cointegrated portfolio of three Exchange Traded Funds (ETFs), having two cointegrating relationships (r <= 0 and r <= 1) for both the trace and eigen statistics (at > 99% confidence, except for the eigen statistic for r <= 0, which is > 95% confidence).

I use this code to generate the weights for a cointegrated portfolio of ETFs which I've been trading profitably for several months now. I usually set order = 2 and detrend = 1; that seems to give the best results for the portfolios I've looked at. As in Ernie Chan's Algorithmic Trading: Winning Strategies and Their Rationale, I apply a Kalman filter to the ETF data and Johansen weights to improve the trading algorithm's performance. If there is interest, I can discuss that in future posts as well. (Chan's Kalman filter discussion is very incomplete, in my opinion.)

I've left a few optional "debug" statements in the code to allow you to check that the matrices are properly normalized; these lines can be deleted. Note that the Johansen weights are the rows of the eigenvector matrix, not the columns (as in the Spatial Econometrics code). I feel this is more consistent with the way Mathematica handles vectors and matrices.

For details on the equations this code is based on, see this 2005 article by B.E. Sorenson: Cointegration.

I welcome any feedback.

Attachments:
POSTED BY: Amanda Gerrish
17 Replies
Posted 6 years ago

Amanda - Your help regarding the implementation of the Kalman filter would be greatly appreciated and I fully understand that you don't want to publish your code - I wouldn't either! I'm just about to finalize the index arbitrage backtesting and I'll let you know whether there is any value to be gained. Then I'll start working on your Kalman idea, expecting to get stuck rather soon(!). So if you don't mind, I'll contact you again once I'm on the move with that.

Per

POSTED BY: Per Ravn
Posted 6 years ago

Per,

Thanks for your post. I suppose that large excursions from equilibrium are a risk with mean-reversion strategies. (The underlying distribution is not strictly normal, so "fat tails" imply that large excursions occur more frequently than one might expect.)

In reply to your points:

  1. I'm using the same ETF triplet for my trading because it has a high Johansen score (>> 99%), a modest half-life for mean-reversion, and the three ETFs are very liquid (which is also very important!). I can imagine that trading multiple portfolios (or perhaps larger portfolios of more than three ETFs) would likely reduce risk, but it would also increase transaction costs. My funds are limited (< $100,000), so I haven't pursued that option.

  2. As to arbitrage between an ETF and its components, I would imagine that there would be only limited arbitrage opportunities (because the ETFs track their components pretty closely) which would limit profits, as you suggest. However, I would certainly expect high cointegration. It's just the small excursions from equilibrium that would limit profitability.

  3. Combining strategies is probably a good idea. I hear that's what the large quant hedge funds do. They have multiple quants pursuing different strategies, and when one strategy is not working, others are. I actually have a trend-following algorithm that I've been using with cryptocurrencies over the past seven months, so I suppose I am "combining strategies" -- even though my total investment in cryptocurrencies is small ($5,000). Unfortunately, the cryptocurrencies had a horrendous sell-off last year. Nevertheless, my algo limited my maximum drawdown to around 20% by mostly keeping me out of the market. I'm hoping that the period of relative stability in the cryptocurrencies in the past few months is a prelude to stronger prices. I'm actually starting to make a small amount of money in the cryptos.

I agree with your comments about Chan. I'm grateful that he's illuminated the basic concepts and strategies. His book inspired me to study how to best implement the Kalman filter when trading a cointegrated portfolio, which I decided to share with others. If you have difficulty implementing the Kalman filter strategy, let me know. I can help with explanations, but I won't post my Kalman filter code on a public forum. I put too much effort into that to just give it away. I'm sure you understand.

Amanda

POSTED BY: Amanda Gerrish
Posted 3 years ago

Hi Amanda,

This post is several years old now, so I don't know if you still follow it, but I'm curious how your strategy performed and whether you've made any modifications or changes to your methodology.

Also, could you implement your iterative weight estimation procedure with Kalman Filter using Mathematica's built-in KalmanEstimator function?

Thank you, Reid

POSTED BY: Reid Frasier
Posted 6 years ago
POSTED BY: Per Ravn
Posted 6 years ago

Hi Amanda - not sure whether you still monitor this thread, but I'm curious whether your algorithm is still performing. I think your findings are remarkable, to say the least. I stumbled on this post because I'm trying to do something very similar, using another idea of Ernie's. My model trades the spread between an index and a basket of its constituents, where the basket is reconstructed periodically using the Johansen procedure. It suffers from the same out-of-sample stationarity issue that Kinlay describes, and I will certainly try to apply your model if I can interpret the details correctly.
If you could share any details on the production performance of your model, it would be very interesting.

POSTED BY: Per Ravn
Posted 6 years ago
POSTED BY: Amanda Gerrish

Before we delve into the Kalman filter model, it's worth pointing out that the problem with the nonstationarity of the out-of-sample estimated portfolio values is not mitigated by adding more in-sample data points and re-estimating the cointegrating vector(s):

IterateCointegrationTest[data_, n_] := 
  Module[{isdata, osdata, JT, isportprice, osportprice},
   isdata = Take[data, All, n];   (* first n observations: in-sample *)
   osdata = Drop[data, 0, n];     (* remaining observations: out-of-sample *)
   JT = JohansenTest[isdata, 2, 1];   (* order = 2, detrend = 1 *)
   isportprice = JT[[2, 1]].isdata;   (* first eigenvector row as portfolio weights *)
   osportprice = JT[[2, 1]].osdata;
   {UnitRootTest[isportprice], UnitRootTest[osportprice]}];

We continue to add more in-sample data points, reducing the size of the out-of-sample dataset correspondingly. But none of the tests for any of the out-of-sample datasets is able to reject the null hypothesis of a unit root in the portfolio price process:

ListLinePlot[
 Transpose@
  Table[IterateCointegrationTest[stockprices, 52*14 + i], {i, 1, 50}],
  PlotLegends -> {"In-Sample", "Out-of-Sample"}]


POSTED BY: Jonathan Kinlay
Posted 7 years ago
POSTED BY: Amanda Gerrish
POSTED BY: Jonathan Kinlay
POSTED BY: Jonathan Kinlay
Posted 7 years ago

A problem with out-of-sample testing is that market structure can shift so that relationships (such as cointegration) may start to break down. One way to try to minimize this effect is to update your Johansen coefficients more frequently. In backtesting, I update the Johansen coefficients weekly, being careful to use only past data to calculate the current portfolio weights at any time point. (I think this is called "walk forward".) This reflects how I actually use the function in practice. In effect, my out-of-sample period is always one time step. This gives better backtest results, but because I'm avoiding look-ahead bias, it's valid. That's what I did in the backtest I described in a previous reply. You can even track the trace/eigen-statistics over time to make sure that the cointegration is not falling apart.
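That weekly walk-forward loop can be sketched as follows (a minimal Python sketch; `estimate_weights` stands in for any Johansen-based weight calculation and is an assumption, not the actual code):

```python
import numpy as np

def walk_forward_weights(prices, window, step, estimate_weights):
    """Walk-forward portfolio pricing: at each step, estimate weights from
    the trailing `window` of PAST observations only, then apply them to the
    NEXT `step` observations (the out-of-sample slice).
    `prices` has shape (n_assets, n_obs); `estimate_weights` is any
    weight-estimation routine (e.g. the first Johansen eigenvector)."""
    n_obs = prices.shape[1]
    out_of_sample = []
    for t in range(window, n_obs, step):
        w = estimate_weights(prices[:, t - window:t])    # past data only
        out_of_sample.append(w @ prices[:, t:t + step])  # next slice
    return np.concatenate(out_of_sample)
```

Because each weight vector is computed only from data before the slice it is applied to, the resulting series is free of look-ahead bias, which is what makes the better backtest results valid.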

Also, the Kalman filter dynamically adjusts the Johansen weights so that the weighted price series is more stationary.

POSTED BY: Amanda Gerrish

Amanda, I think you may have hit on something very important. As you point out, the determination of the variance/covariances is critical and the adaptive tuning procedure you recommend appears very successful in stabilizing the portfolio, making it suitable for a stat-arb strategy.

As you saw, I did not use MMA in my own implementation because I felt that Wolfram's approach was somewhat unsympathetic to the needs of the economic researcher (vs. say the requirements of an engineer), compared to the available alternatives. I see that I am not entirely alone in that assessment: here, for instance. So I am delighted that you have successfully implemented this in MMA, presumably using KalmanEstimator(?). Or did you build the model from scratch?

I will run a few tests on your Johansen code and attempt to build a KF model in MMA using some of the ETF pairs/triplets Ernie discusses in his book and compare the results.

Meanwhile, I wondered if you could comment on the following:

1) While the initial trading performance appears very encouraging, what kind of performance results did the backtest produce out of sample?

2) You mention that you update the model using weekly data and then trade it intraday during the following week. So presumably you are getting real-time market data into MMA somehow: via the Finance Platform, perhaps? And do you trade the signals via that platform, or some other way (manually)?

3) One extension that I found quite useful in my own research was to fit a GARCH model to the residuals and use this to determine the trade entry/exit points. But that procedure was probably only useful because of the nonstationarity in the portfolio returns process. If you have succeeded in dealing with that key issue at a more fundamental level, a GARCH extension is probably superfluous.

POSTED BY: Jonathan Kinlay
Posted 7 years ago

Jonathan,

I wrote my Kalman Filter routine in Mathematica, from scratch. This way I know exactly what it does.

Regarding your questions:

1) My backtesting showed average yearly returns (AYRs) in the 30% - 40% range (over a 6-year period), with a maximum drawdown under 10%. This was with fixed entry/exit limits, and 100% of my cash in and out. However, in live trading, what I do is put on 50% of my position when I cross one limit, another 25% when I cross another limit, etc., so that I reduce my drawdown if I get a large excursion in the statistic (say, 2 or 3 standard deviations), while capturing some returns on the smaller excursions (1 standard deviation). I really feel that I need to see how my track record goes with live trading. That's what counts.

2) I wrote a small routine to download real-time ETF data from nasdaq.com. Basically, I use the Mathematica URLRead function and screen-scrape for the real-time quote. I use the Mathematica Dynamic function to update the plot and recommended positions automatically once per minute. Real-time 1-minute data is good enough for my purposes. I enter the orders manually on a multi-order trading screen; I've got a system that keeps the lag to a few seconds. Again, good enough for my purposes.

3) Yes, GARCH can be useful to show changes in volatility. I haven't implemented that. However, I've recently applied the Mathematica functions HiddenMarkovProcess and FindHiddenMarkovStates to detect and display a shift from a low volatility state to a high volatility state (and vice-versa) in my statistic. It's mainly for informational purposes. (I basically highlight areas of the plot with white or light-gray background, depending on whether I'm in a low-volatility state or a high volatility state.) It may affect when I place my trades. Too early to say yet. A big issue for me is how best to display the information so that I can easily and quickly react and trade when needed.
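The staged entry described in point 1 could be sketched like this (a hedged Python illustration; the limits and fractions shown are illustrative placeholders, not the actual values used):

```python
def scaled_position(z, limits=(1.0, 2.0, 3.0), fractions=(0.5, 0.25, 0.25)):
    """Fraction of the full position to hold at z-score z, scaling in at
    successive limits: 50% at the first limit crossed, 25% more at the
    next, and so on. Positive z means short the spread, negative means
    long, so smaller excursions still capture some of the move while a
    2-3 standard deviation excursion doesn't commit all capital at once."""
    size = sum(f for lim, f in zip(limits, fractions) if abs(z) >= lim)
    return -size if z > 0 else size
```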

POSTED BY: Amanda Gerrish
POSTED BY: Jonathan Kinlay
Posted 7 years ago

Of course, since you are only updating the model weekly you wouldn't need to use MMA at all during the week.

There's a subtlety here. I update my Kalman filter parameters (noise variances/covariances, initial values, etc.) once per week. However, I calculate the Kalman filter weights (using these parameters) for the latest real-time data point in real-time. Basically, I append the latest real-time data point to the weekly data series and run a single iteration of the Kalman filter. This gives me optimal weights for the current prices.
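In outline, that single real-time iteration might look like the following (a generic Kalman predict/update step for a random-walk weight vector, written as a Python sketch; the variable names are assumptions, not Amanda's code):

```python
import numpy as np

def kalman_step(beta, P, x, y, Q, R):
    """One predict/update iteration for a random-walk state (weight vector).
    beta: current weight estimate, P: state covariance,
    x: observation vector (latest prices), y: scalar observation,
    Q: state noise covariance, R: observation noise variance."""
    # Predict: a random-walk transition leaves beta unchanged, inflates P
    P_pred = P + Q
    # Innovation (measurement prediction error) and its variance
    e = y - x @ beta
    S = x @ P_pred @ x + R
    # Kalman gain and update
    K = P_pred @ x / S
    beta_new = beta + K * e
    P_new = P_pred - np.outer(K, x) @ P_pred
    return beta_new, P_new
```

Appending the latest real-time point and running one such step against the weekly-estimated parameters is cheap, which is what makes the intraday update practical.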

As to order entry, I've actually written code to automate order entry and I've done some simulated trading which looked good. However, trusting the code with my funds makes me a bit nervous. I prefer to enter the orders manually for now, but I may experiment with automated order entry in the future. The ETFs I'm trading are pretty liquid, and it's important that all of the legs of the trade get executed simultaneously (otherwise you risk significant losses if only one or two legs of the trade execute), so I use market orders. I've been watching the fills that I get and they seem reasonable.

Another question is how to treat open positions held over a weekend when models get updated.

Yes, this is a tricky issue. The problem is that the Kalman filter parameters change with the update, and so the statistic I'm using shifts a little between Friday close and Monday open. Therefore, if I have a decent profit near the close on Friday, I'll often sell my positions even if I haven't quite hit my "sell" limit. If I do hold the positions over the weekend, it's not catastrophic. I just sell when the limit is reached with the new statistic, although it may mean my profit is less than what I estimated it would be the previous week. On one occasion, I even had a loss as the statistic moved past my sell limit after the Monday open before the positions had turned profitable. One solution would be to update my Kalman filter parameters less often, say, once per month. However, that makes the weights more out-of-sample (as I get further into the month), which might reduce profitability. For now, once-per-week seems to be working OK.

Finally, I'm using prices, not log(prices), because my backtesting has indicated that using log(prices) is less profitable.

Thanks for bringing up these important practical considerations! I've thought about them, but I'm still getting a handle on all these issues.

POSTED BY: Amanda Gerrish

Nice work, Amanda.

Hopefully Wolfram will include more of these standard statistical tests in future releases, to bring MMA to parity with comparable math/stats software packages.

I have written a few posts about using the Kalman filter approach in pairs trading (stat arb), for instance here and here.

I would certainly be interested to get your own take on the subject and any trading results you might care to share.

POSTED BY: Jonathan Kinlay
Posted 7 years ago

Thanks, Jonathan.

I read through your two Kalman filter papers and I found them interesting. Good analysis. Your approach is similar to Ernie Chan's chapter on Kalman filter in the book I mentioned. I believe his "measurement prediction error", e(t), is the same as your alpha(t).

You've hit on a major challenge in applying the Kalman filter, namely, how to determine the noise variances/covariances, R and Q. Most coders seem to use values determined by trial-and-error. However, if you're interested, I've come up with a derivation based on the observed measurement errors for calculating R (what I call ve), and the observed variation in beta(t) for calculating Q (what I call vw). This necessitates an iterative approach -- using initial estimates for Q, R, and the initial state and state covariance, implementing the Kalman filter, calculating new estimates, and so on, until the estimates converge to stable values.
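The shape of that iteration can be sketched as follows (a hedged Python outline of the converge-on-the-noise-estimates loop; `run_filter` stands in for a full Kalman filter pass and is an assumption, not the posted derivation):

```python
import numpy as np

def estimate_noise_iteratively(run_filter, y, X, ve0, vw0,
                               tol=1e-6, max_iter=100):
    """Iteratively estimate the observation noise variance ve and state
    noise variance vw: run a Kalman filter pass with the current estimates,
    re-estimate ve from the realized measurement errors and vw from the
    step-to-step variation in the filtered weights, and repeat until the
    estimates converge. `run_filter(y, X, ve, vw)` must return
    (betas, errors) for a full pass over the data."""
    ve, vw = ve0, vw0
    for _ in range(max_iter):
        betas, errors = run_filter(y, X, ve, vw)
        ve_new = float(np.var(errors))                  # measurement-error variance
        vw_new = float(np.var(np.diff(betas, axis=0)))  # variation in beta
        converged = abs(ve_new - ve) < tol and abs(vw_new - vw) < tol
        ve, vw = ve_new, vw_new
        if converged:
            break
    return ve, vw
```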

This approach is more math intensive, but it allows generalization beyond pairs to trading cointegrated portfolios of 3 or more financial instruments. (I prefer trading a portfolio of ETFs.) I've posted my method here on the Quantitative Finance area of the StackExchange website.

I've been trading a portfolio of 3 ETFs using this algo for three months and nearly all of my trades have been profitable. I use weekly data to calculate the z-score and I usually get one or two "buy" signals a week with a holding time that seems to vary from a few hours to two weeks, partly depending upon where I choose my "sell" level. After a couple dozen trades, profit per trade has been in the range 0.5% to 3%, except for one trade where I had a loss of about 1%. This should give me a good return over the next year, if I can maintain that performance. I'm still fine tuning the algo, especially with regard to where to set "buy" and "sell" limits.

Here's a plot of the z-score that my code produces:

This plot is only showing the last two years of weekly data but I use anywhere from 5 - 10 years of data in my algo. I'm displaying weekly data because I update my Kalman filter parameters every weekend. However, I actually calculate and plot the z-score in real-time during trading hours (using weights from the Kalman filter). When I hit a z-score of say, 1 (-1), I put on a short (long) portfolio position. I close the position when the z-score returns back to zero (or perhaps a little beyond zero). It's too early to say how profitable this will be over the long term. When I have more data perhaps I'll post my total returns.
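The entry/exit rule described above reduces to a simple state machine on the z-score (a Python sketch of the logic, not the actual trading code; entry and exit thresholds are parameters):

```python
def zscore_signals(z_series, entry=1.0, exit_level=0.0):
    """Position signals from a z-score series: short the portfolio when
    the z-score reaches +entry, long when it reaches -entry, and flatten
    when it reverts back through the exit level (zero by default)."""
    pos, positions = 0, []
    for z in z_series:
        if pos == 0:
            if z >= entry:
                pos = -1        # spread rich: short the portfolio
            elif z <= -entry:
                pos = 1         # spread cheap: long the portfolio
        elif pos == -1 and z <= exit_level:
            pos = 0             # reverted: close the short
        elif pos == 1 and z >= -exit_level:
            pos = 0             # reverted: close the long
        positions.append(pos)
    return positions
```

Setting `exit_level` slightly past zero corresponds to closing "a little beyond zero" as described above.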

I hope this is helpful.

POSTED BY: Amanda Gerrish