
Johansen Test in Mathematica

Posted 6 years ago

A post from five years ago, How to run Johansen Test in Mathematica, requested the code for the Johansen test in Mathematica. However, the code that was offered had problems (incorrectly normalized eigenvectors, computational errors). As a better alternative, I'd like to post my Johansen test code here which I believe is correct. I've compared the output of this code with the output of the Matlab Johansen code in the Spatial Econometrics library and they agree. I've added my Mathematica code as an attachment to this post, "JohansenTest.nb".

The code includes a few subroutines that allow the output from the Johansen test to be displayed in a nice tabular form, such as:

Johansen Test Output

This table shows the results for a cointegrated portfolio of three Exchange Traded Funds (ETFs), having two cointegrating relationships (r <= 0 and r <= 1) for both the trace and eigen statistics (at > 99% confidence, except for the eigen statistic for r <= 0, which is > 95% confidence).
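For readers new to the test, the two statistics in the table are the standard Johansen trace and maximum-eigenvalue statistics:

```latex
\lambda_{\text{trace}}(r) = -T \sum_{i=r+1}^{n} \ln\left(1 - \hat{\lambda}_i\right),
\qquad
\lambda_{\max}(r) = -T \ln\left(1 - \hat{\lambda}_{r+1}\right)
```

Both test the null hypothesis of at most r cointegrating relations, where T is the sample size, n the number of price series, and the λ̂ᵢ are the estimated eigenvalues sorted in decreasing order.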

I use this code to generate the weights for a cointegrated portfolio of ETFs which I've been trading profitably for several months now. I usually set order = 2 and detrend = 1. That seems to give the best results for the portfolios I've looked at. As in Ernie Chan's Algorithmic Trading: Winning Strategies and Their Rationale, I apply a Kalman filter to the ETF data and Johansen weights to improve the trading algorithm performance. If there is interest, I can discuss that in future posts, as well. (Chan's Kalman filter discussion is very incomplete, in my opinion.)

I've left a few optional "debug" statements in the code to allow you to check that the matrices are properly normalized. These lines can be deleted. Note that the Johansen weights are the rows of the eigenvector matrix, not the columns (as in the Spatial Econometrics code). I feel this is more consistent with the way that Mathematica handles vectors and matrices.

For details on the equations on which this code is based, see this 2005 article by B.E. Sorenson: Cointegration.

I welcome any feedback.

POSTED BY: Amanda Gerrish
17 Replies
Posted 6 years ago

Thanks, Jonathan.

I read through your two Kalman filter papers and I found them interesting. Good analysis. Your approach is similar to Ernie Chan's chapter on Kalman filter in the book I mentioned. I believe his "measurement prediction error", e(t), is the same as your alpha(t).

You've hit on a major challenge in applying the Kalman filter, namely, how to determine the noise variances/covariances, R and Q. Most coders seem to use values determined by trial-and-error. However, if you're interested, I've come up with a derivation based on the observed measurement errors for calculating R (what I call ve), and the observed variation in beta(t) for calculating Q (what I call vw). This necessitates an iterative approach -- using initial estimates for Q, R, and the initial state and state covariance, implementing the Kalman filter, calculating new estimates, and so on, until the estimates converge to stable values.
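The iterative idea can be sketched in a few lines. This is a deliberately simplified toy in Python: a scalar local-level model, with R re-estimated from the observed measurement errors and Q from the observed state increments, iterated to a fixed point. It illustrates the convergence scheme only; it is not the actual multi-asset implementation, and the model and variable names here are illustrative assumptions.

```python
import numpy as np

def kalman_1d(y, R, Q, x0=0.0, P0=1.0):
    """Scalar local-level Kalman filter: x_t = x_{t-1} + w_t, y_t = x_t + e_t."""
    n = len(y)
    x, P = x0, P0
    xs = np.empty(n)
    es = np.empty(n)
    for t in range(n):
        P = P + Q                 # predict: state variance grows by Q
        e = y[t] - x              # innovation (measurement prediction error)
        K = P / (P + R)           # Kalman gain
        x = x + K * e             # update state estimate
        P = (1 - K) * P           # update state variance
        xs[t] = x
        es[t] = e
    return xs, es

def iterate_RQ(y, R=1.0, Q=1.0, iters=50):
    """Re-estimate R (measurement noise, 've') from residuals and
    Q (state noise, 'vw') from state increments, until stable."""
    for _ in range(iters):
        xs, _ = kalman_1d(y, R, Q)
        R_new = np.var(y - xs)        # observed measurement-error variance
        Q_new = np.var(np.diff(xs))   # observed state-increment variance
        if abs(R_new - R) < 1e-10 and abs(Q_new - Q) < 1e-10:
            break
        R, Q = R_new, Q_new
    return R, Q

# synthetic data: latent random walk plus observation noise
rng = np.random.default_rng(0)
x_true = np.cumsum(rng.normal(0, 0.1, 500))
y = x_true + rng.normal(0, 0.5, 500)
R_hat, Q_hat = iterate_RQ(y)
```

The same loop generalizes to the vector case (portfolio hedge ratios as the state), which is where the approach pays off relative to trial-and-error tuning.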

This approach is more math intensive, but it allows generalization beyond pairs to trading cointegrated portfolios of 3 or more financial instruments. (I prefer trading a portfolio of ETFs.) I've posted my method here on the Quantitative Finance area of the StackExchange website.

I've been trading a portfolio of 3 ETFs using this algo for three months and nearly all of my trades have been profitable. I use weekly data to calculate the z-score and I usually get one or two "buy" signals a week with a holding time that seems to vary from a few hours to two weeks, partly depending upon where I choose my "sell" level. After a couple dozen trades, profit per trade has been in the range 0.5% to 3%, except for one trade where I had a loss of about 1%. This should give me a good return over the next year, if I can maintain that performance. I'm still fine tuning the algo, especially with regard to where to set "buy" and "sell" limits.

Here's a plot of the z-score that my code produces:

[z-score plot]

This plot is only showing the last two years of weekly data but I use anywhere from 5 - 10 years of data in my algo. I'm displaying weekly data because I update my Kalman filter parameters every weekend. However, I actually calculate and plot the z-score in real-time during trading hours (using weights from the Kalman filter). When I hit a z-score of say, 1 (-1), I put on a short (long) portfolio position. I close the position when the z-score returns back to zero (or perhaps a little beyond zero). It's too early to say how profitable this will be over the long term. When I have more data perhaps I'll post my total returns.
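The entry/exit mechanics described above can be sketched as follows. This is a toy Python illustration using a full-sample z-score and made-up AR(1) data standing in for the portfolio value; the real weights come from the Kalman filter, and the entry/exit levels are parameters to tune.

```python
import numpy as np

def zscore_signals(port, entry=1.0, exit_=0.0):
    """Mean-reversion signals from a portfolio value series.
    Short when z >= entry, long when z <= -entry,
    close when z reverts through the exit level."""
    z = (port - port.mean()) / port.std()
    pos = np.zeros(len(port), dtype=int)
    for t in range(1, len(port)):
        if pos[t - 1] == 0:
            if z[t] >= entry:
                pos[t] = -1          # short the (rich) portfolio
            elif z[t] <= -entry:
                pos[t] = 1           # long the (cheap) portfolio
        elif pos[t - 1] == -1:
            pos[t] = 0 if z[t] <= exit_ else -1   # cover on reversion
        else:
            pos[t] = 0 if z[t] >= -exit_ else 1   # sell on reversion
    return z, pos

# toy mean-reverting series standing in for the portfolio value
rng = np.random.default_rng(1)
port = np.zeros(500)
for t in range(1, 500):
    port[t] = 0.9 * port[t - 1] + rng.normal()
z, pos = zscore_signals(port)
```

Scaling in (50% at the first limit, 25% at the next, and so on) amounts to replacing the ±1 position with fractional sizes at successive entry levels.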

I hope this is helpful.

POSTED BY: Amanda Gerrish

Nice work, Amanda.

Hopefully Wolfram will include more of these standard statistical tests in future releases, to bring MMA to parity with comparable math/stats software packages.

I have written a few posts about using the Kalman filter approach in pairs trading (stat arb), for instance here and here.

I would certainly be interested to get your own take on the subject and any trading results you might care to share.

POSTED BY: Jonathan Kinlay
Posted 6 years ago


I wrote my Kalman Filter routine in Mathematica, from scratch. This way I know exactly what it does.

Regarding your questions:

1) My backtesting showed average yearly returns (AYRs) in the 30% - 40% range (over a 6-year period), with a maximum drawdown under 10%. This was with fixed entry/exit limits, and 100% of my cash in and out. However, in live trading, what I do is put on 50% of my position when I cross one limit, another 25% when I cross another limit, etc., so that I reduce my drawdown if I get a large excursion in the statistic (say, 2 or 3 standard deviations), while capturing some returns on the smaller excursions (1 standard deviation). I really feel that I need to see how my track record goes with live trading. That's what counts.

2) I wrote a small routine to download real-time ETF data. Basically, I use the Mathematica URLRead function and screen-scrape for the real-time quote. I use the Mathematica Dynamic function to do this, and update the plot and recommended positions automatically once per minute. Real-time 1-minute data is good enough for my purposes. I enter the orders manually on a multi-order trading screen. I've got a system that keeps the lag to a few seconds. Again, good enough for my purposes.

3) Yes, GARCH can be useful to show changes in volatility. I haven't implemented that. However, I've recently applied the Mathematica functions HiddenMarkovProcess and FindHiddenMarkovStates to detect and display a shift from a low volatility state to a high volatility state (and vice-versa) in my statistic. It's mainly for informational purposes. (I basically highlight areas of the plot with white or light-gray background, depending on whether I'm in a low-volatility state or a high volatility state.) It may affect when I place my trades. Too early to say yet. A big issue for me is how best to display the information so that I can easily and quickly react and trade when needed.

POSTED BY: Amanda Gerrish
Posted 6 years ago

> Of course, since you are only updating the model weekly you wouldn't need to use MMA at all during the week.

There's a subtlety here. I update my Kalman filter parameters (noise variances/covariances, initial values, etc.) once per week. However, I calculate the Kalman filter weights (using these parameters) for the latest real-time data point in real-time. Basically, I append the latest real-time data point to the weekly data series and run a single iteration of the Kalman filter. This gives me optimal weights for the current prices.

As to order entry, I've actually written code to automate order entry and I've done some simulated trading which looked good. However, trusting the code with my funds makes me a bit nervous. I prefer to enter the orders manually for now, but I may experiment with automated order entry in the future. The ETFs I'm trading are pretty liquid, and it's important that all of the legs of the trade get executed simultaneously (otherwise you risk significant losses if only one or two legs of the trade execute), so I use market orders. I've been watching the fills that I get and they seem reasonable.

> Another question is how to treat open positions held over a w/e when models get updated.

Yes, this is a tricky issue. The problem is that the Kalman filter parameters change with the update, and so the statistic I'm using shifts a little between Friday close and Monday open. Therefore, if I have a decent profit near the close on Friday, I'll often sell my positions even if I haven't quite hit my "sell" limit. If I do hold the positions over the weekend, it's not catastrophic. I just sell when the limit is reached with the new statistic, although it may mean my profit is less than what I estimated it would be the previous week. On one occasion, I even had a loss as the statistic moved past my sell limit after the Monday open before the positions had turned profitable. One solution would be to update my Kalman filter parameters less often, say, once per month. However, that makes the weights more out-of-sample (as I get further into the month), which might reduce profitability. For now, once-per-week seems to be working OK.

Finally, I'm using prices, not log(prices), because my backtesting has indicated that using log(prices) is less profitable.

Thanks for bringing up these important practical considerations! I've thought about them, but I'm still getting a handle on all these issues.

POSTED BY: Amanda Gerrish

Amanda, I think you may have hit on something very important. As you point out, the determination of the variance/covariances is critical and the adaptive tuning procedure you recommend appears very successful in stabilizing the portfolio, making it suitable for a stat-arb strategy.

As you saw, I did not use MMA in my own implementation because I felt that Wolfram's approach was somewhat unsympathetic to the needs of the economic researcher (vs. say the requirements of an engineer), compared to the available alternatives. I see that I am not entirely alone in that assessment: here, for instance. So I am delighted that you have successfully implemented this in MMA, presumably using KalmanEstimator(?). Or did you build the model from scratch?

I will run a few tests on your Johansen code and attempt to build a KF model in MMA using some of the ETF pairs/triplets Ernie discusses in his book and compare the results.

Meanwhile, I wondered if you could comment on the following:

1) While the initial trading performance appears very encouraging, what kind of performance results did the backtest produce, out of sample?

2) You mention that you update the model using weekly data and then trade it intraday during the following week. So presumably you are getting real-time market data into MMA somehow: via the Finance Platform, perhaps? And do you trade the signals via that platform, or some other way (manually)?

3) One extension that I found quite useful in my own research was to fit a GARCH model to the residuals and use this to determine the trade entry/exit points. But that procedure was probably only useful because of the nonstationarity in the portfolio returns process. If you have succeeded in dealing with that key issue at a more fundamental level, a GARCH extension is probably superfluous.

POSTED BY: Jonathan Kinlay

So I thought it might be useful to work through an example, to try to make the mechanics clear. I'll try to do this in stages so that others can jump in along the way, if they want to.

Start with some weekly data for an ETF triplet analyzed in Ernie Chan's book:

    tickers = {"EWA", "EWC", "IGE"};
    period = "Week";
    nperiods = 52*15;
    finaldate = DatePlus[Today, {-1, "BusinessDay"}];

After downloading the weekly close prices for the three ETFs we divide the data into 14 years of in-sample data and 1 year out of sample:

    stockdata = FinancialData[#, "Close",
        {DatePlus[finaldate, {-nperiods, period}], finaldate, period},
        "DateList"] & /@ tickers;
    stockprices = stockdata[[All, All, 2]];
    isprices = Take[stockprices, All, 52*14];
    osprices = Drop[stockprices, 0, 52*14];

We then apply Amanda's JohansenTest function:

    JT = JohansenTest[isprices, 2, 1]

We find evidence of up to three cointegrating vectors at the 95% confidence level:

[Johansen test output]

Let's take a look at the vector coefficients (laid out in rows, in Amanda's function):

[eigenvector coefficients table]

We now calculate the in-sample and out-of-sample portfolio values using the first cointegrating vector:

    isportprice = (JT[[2, 1]]*100).isprices;
    osportprice = (JT[[2, 1]]*100).osprices;

The portfolio does indeed appear to be stationary, in-sample, and this is confirmed by the unit root test, which rejects the null hypothesis of a unit root:

[in-sample portfolio price plot and unit root test result]
Unfortunately (and this is typically the case) the same is not true for the out-of-sample period:

[out-of-sample portfolio price plot and unit root test result]
We fail to reject the null hypothesis of a unit root in the portfolio process, out of sample.

I'll press pause here before we go on to the next stage, which is Kalman Filtering.

POSTED BY: Jonathan Kinlay
Posted 6 years ago

A problem with out-of-sample testing is that market structure can shift so that relationships (such as cointegration) may start to break down. One way to try to minimize this effect is to update your Johansen coefficients more frequently. In backtesting, I update the Johansen coefficients weekly, being careful to use only past data to calculate the current portfolio weights at any time point. (I think this is called "walk forward".) This reflects how I actually use the function in practice. In effect, my out-of-sample period is always one time step. This gives better backtest results, but because I'm avoiding look-ahead bias, it's valid. That's what I did in the backtest I described in a previous reply. You can even track the trace/eigen-statistics over time to make sure that the cointegration is not falling apart.
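The walk-forward scheme can be sketched like this (a Python illustration; `estimate_weights` is a stand-in for the Johansen first-eigenvector calculation, and the equal-weight function below is a trivial placeholder just to make the sketch runnable):

```python
import numpy as np

def walk_forward_portfolio(prices, window, estimate_weights):
    """At every step t, re-estimate the portfolio weights using only data
    strictly before t, then apply them to observation t. The effective
    out-of-sample period is therefore always one time step."""
    n_assets, n_obs = prices.shape
    port = np.empty(n_obs - window)
    for t in range(window, n_obs):
        w = estimate_weights(prices[:, t - window:t])  # past data only
        port[t - window] = w @ prices[:, t]            # out-of-sample point
    return port

# trivial placeholder for the Johansen weight estimate (equal weights)
equal_w = lambda X: np.ones(X.shape[0]) / X.shape[0]

rng = np.random.default_rng(2)
prices = np.cumsum(rng.normal(size=(3, 120)), axis=1) + 100.0
port = walk_forward_portfolio(prices, window=52, estimate_weights=equal_w)
```

Because the weight estimate at time t never sees prices at or after t, the backtest is free of look-ahead bias even though the weights are refreshed every step.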

Also, the Kalman filter dynamically adjusts the Johansen weights so that the weighted price series is more stationary.

POSTED BY: Amanda Gerrish

Hi Amanda,

Your approach seems very promising.

On point 2: I made the assumption that you had to be getting (quasi) real-time data into MMA somehow and indeed this turns out to be the case - a creative solution to the problem.

Of course, since you are only updating the model weekly you wouldn't need to use MMA at all during the week. Some trading platforms will allow you to place bids and offers for a synthetic contract according to a simple formula, where the betas are fixed (for the week). In other cases a simple api interface is provided to something like Excel. That would enable you to recalculate the entry/exit prices automatically tick-by-tick, if you wanted to, and would also eliminate the need for manual trading as the orders could be fired into the trading platform via the api.

There are the usual practical considerations that apply to any stat arb strategy. For instance, do you try to enter passively, posting orders on the bid and ask prices of the portfolio (treating it as a single synthetic security)? Another approach is to post resting orders for the individual ETF components at appropriate price levels then cross the spread on the other ETFs if you get filled on one of them. These execution strategies tend to apply more in the case of pairs trading. For more complex strategies involving multiple securities like yours they can be very tricky to implement and traders typically cross the spread on entry and exit, which is what you are doing, I would guess.

Another question is how to treat open positions held over a w/e when models get updated. The original exit points will likely change. So you have some options there too: exit all positions by the end of the week; maintain the original exit prices (profit target and stop loss); or recalculate exit prices for existing positions once the models get updated.

Finally, one other important issue is whether to use prices or (log) returns in your cointegration model. I suspect you are using the former, as I did in my toy illustration. But the resulting portfolios are rarely dollar neutral and hence consume margin capital. On the other hand, if you use returns and create a dollar-neutral portfolio, rebalancing becomes more of an issue. In that case I suspect you would want to rebalance the portfolio at least once a day, or according to some more sophisticated rebalancing algorithm.
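The margin point can be illustrated numerically (the weights and prices below are invented for the example):

```python
import numpy as np

w = np.array([1.0, -0.76, -0.23])    # hypothetical cointegrating weights (price space)
p = np.array([25.10, 30.40, 41.80])  # hypothetical ETF prices
exposure = w * p                     # dollar exposure of each leg
net = exposure.sum()                 # net dollar exposure; nonzero -> consumes margin capital
```

A price-space cointegrating vector fixes relative share quantities, not dollar amounts, so the net exposure is generically nonzero and drifts as prices move.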

POSTED BY: Jonathan Kinlay

Before we delve into the Kalman Filter model, it's worth pointing out that the problem with the nonstationarity of the out-of-sample estimated portfolio values is not mitigated by adding more in-sample data points and re-estimating the cointegrating vector(s):

IterateCointegrationTest[data_, n_] := 
  Module[{isdata, osdata, JT, isportprice, osportprice},
   isdata = Take[data, All, n];
   osdata = Drop[data, 0, n];
   JT = JohansenTest[isdata, 2, 1];
   isportprice = JT[[2, 1]].isdata;
   osportprice = JT[[2, 1]].osdata;
   {UnitRootTest[isportprice], UnitRootTest[osportprice]}];

We continue to add more in-sample data points, reducing the size of the out-of-sample dataset correspondingly. But none of the tests for any of the out-of-sample datasets is able to reject the null hypothesis of a unit root in the portfolio price process:

    ListLinePlot[
     Transpose@Table[IterateCointegrationTest[stockprices, 52*14 + i], {i, 1, 50}],
     PlotLegends -> {"In-Sample", "Out-of-Sample"}]

[plot: in-sample vs out-of-sample unit root test p-values]

POSTED BY: Jonathan Kinlay
Posted 6 years ago

You're correct. I was thinking about the dynamic weights from the Kalman filter. However, when we use the static weights from the Johansen test, we lose the stationarity for out-of-sample data. So, for example, when I apply the unit root test to my weighted portfolio, using the Johansen (static) weights, I get:

In-sample data length = 289, Johansen weights
p = 0.0718

However, when I calculate the Johansen coefficients using only the first 189 data points, and then look at unit root test, I get:

In-sample data length = 189, Johansen weights
p = 0.109
Out-of-sample data length = 100, Johansen weights
p = 0.587

Clearly, the out-of-sample period cannot be considered stationary. The situation is not helped by going to a larger in-sample (smaller out-of-sample) period, as you point out.

Now, however, let's look at the same situation except using the dynamic weights from the Kalman filter. For the full sample length:

In-sample data length = 289, Kalman weights
p = 5.5 x 10^-11

Much higher confidence of stationarity! Now, for in-sample/out-of-sample:

In-sample data length = 189, Kalman weights
p = 1.76 x 10^-9
Out-of-sample data length = 100, Kalman weights
p = 3.722 x 10^-7

Still, very good. However, it may be argued that I'm cheating here because I used the entire array of data to calculate the Kalman filter parameters (transition matrix, noise variance/covariances, initial state covariance). So I re-calculated the in-sample/out-of-sample weights using only in-sample data to calculate these parameters:

In-sample data length = 189, Kalman weights, revised Kalman parameter calculations
p = 0.000211

Out-of-sample data length = 100, Kalman weights, revised Kalman parameter calculations
p = 0.0871

The out-of-sample p-value for the unit root test is not as good, but still what I would consider stationary. Furthermore, let's look at a smaller out-of-sample (larger in-sample) period:

In-sample data length = 239, Kalman weights, revised Kalman parameter calculations
p = 7.1 x 10^-10
Out-of-sample data length = 50, Kalman weights, revised Kalman parameter calculations
p = 0.0000548

Using the Kalman filter weights, the stationarity of the out-of-sample period appears to be dependent on the size of the in-sample/out-of-sample periods. A shorter out-of-sample period gives a much smaller p-value for the unit root test. Now, considering that I update my Kalman filter parameters once per week, my out-of-sample period is only 1 time step. Therefore, the loss in stationarity should be very small.

POSTED BY: Amanda Gerrish

Amanda has correctly anticipated the direction I was headed in i.e to show that regardless of how small the size of the OOS period relative to the IS period, the Johansen procedure by itself is unable to produce a cointegrating vector capable of yielding a portfolio price process that is stationary out of sample. But her iterative Kalman Filter approach is able to cure the problem.

I don't want to gloss over this finding, because it is very important. In our toy problem we know the out-of-sample prices of the constituent ETFs, and can therefore test the stationarity of the portfolio process out of sample. In a real world application, that discovery could only be made in real time, when the unknown, future ETFs prices are formed. In that scenario, all the researcher has to go on are the results of in-sample cointegration analysis, which demonstrate that the first cointegrating vector consistently yields a portfolio price process that is very likely stationary in sample (with high probability).

The researcher might understandably be persuaded, wrongly, that the same is likely to hold true in future. Only when the assumed cointegration relationship falls apart in real time will the researcher then discover that it's not true, incurring significant losses in the process, assuming the research has been translated into some kind of trading strategy.

A great many researchers have been down exactly this path, learning this important lesson the hard way. Nor do additional "safety checks" such as, for example, also requiring high levels of correlation between the constituent processes add much value. They might offer the researcher comfort that a "belt and braces" approach is more likely to succeed, but in my experience it is not the case: the problem of non-stationarity in the out of sample price process persists.

For a more detailed discussion of the problem see this post: Why Statistical Arbitrage Breaks Down

I was hitherto unaware of any methodology for tackling this problem, which is why Amanda's discovery is so important. As she demonstrates in her latest post, the iterative Kalman Filter approach is capable of producing a stationary out of sample process, based on the initial estimates of the cointegrating vector derived from the Johansen procedure.

In fact, Amanda's discovery is important in two fields of econometric research: cointegration theory and the theory of Kalman Filters in modeling inter-asset relationships where, as with the Johansen procedure, KF models have traditionally suffered from difficulties associated with nonstationarity in the out of sample period.

It's a tremendous achievement.

So, despite the fact that Amanda has leapt ahead to the finish line, I shall continue to plod along because, firstly, only by implementing the methodology can I be sure that I have properly and fully understood it and, secondly, as one discovers as one progresses in the field of quantitative research, fine details are often very important. So I am hoping that Amanda will provide additional guidance if I stray too far off piste in the forthcoming exposition.

POSTED BY: Jonathan Kinlay
Posted 5 years ago

Hi Amanda - not sure whether you still monitor this thread, but I'm curious if your algorithm is still performing? I think your findings are remarkable, to say the least. I stumbled on this post because I'm trying to do something very similar, using another idea of Ernie's. My model trades the spread between an index and a basket of its constituents, where the basket is reconstructed periodically using the Johansen procedure. It suffers from the same OOS stationarity issue as Kinlay describes and I will certainly try to apply your model if I can interpret the details correctly.
If you could share any details on the production performance of your model, it would be very interesting.

Posted 5 years ago

Hi Per. I got an email today notifying me of your post to this discussion. I've been trading this algorithm during this entire time, as well as having a lot of email exchanges with Jonathan Kinlay regarding how to implement both the Johansen code and the Kalman filter in a manner similar to Chan. The algorithm was working well for me until about 2 months ago when one of the components of my triplet started rising in price in a way that appears to violate the cointegration. The result was that I waited 9 weeks for mean reversion. The portfolio finally mean-reverted, but by that time I had taken a significant hit on my profits. As a result my total trading profit over this time is much smaller than it was before this event -- around 10% or so over the past 9 months.

The past 2 months are clearly an outlier as compared to past algorithm performance. I can tell that simply by looking at the variation of the z-score over the past 10 years. The average half-life for mean-reversion of the z-score was 1.17 weeks. On a few occasions it took as long as 4 or 5 weeks to mean revert. (Remember, I don't make a profit until the portfolio value -- and thus the z-score -- mean-reverts.) Over the past 2 months or so, it took 9 weeks for my portfolio to mean revert. Is the cointegration breaking down, or was this just a one-time statistical fluke?

The problem is that when I re-calculate the Kalman filter parameters each weekend, the z-score often shifts in such a way that losses can accumulate if the mean reversion is delayed for too long. For example, on the close Friday, I may show a z-score of around 1.0. Over the weekend I re-calculate the Johansen test + Kalman filter parameters, and then when I run the algo on Monday morning, the z-score is significantly lower (in the range 0.5 - 0.8), even if the market is essentially unchanged. Thus, even if mean reversion occurs that day, I don't make as much profit as I'd hoped. This isn't a big deal if the portfolio mean reverts within 3 weeks -- I still make a profit. But if mean reversion doesn't happen for more than 3 or 4 weeks, I may end up with a loss. Waiting 9 weeks for mean reversion accumulated a 12% loss, which was more than half my profit over the past 9 months.

I'm still trading with this algorithm, but I'm waiting for a higher z-score before I "pull the trigger" on my trades, in order to increase the probability of a quicker mean-reversion. This means I may miss some trades, but that's OK. Also, I scale in my buys. (Partial buy at, say, z-score = 1.0, and another partial buy at z-score = 1.5, etc.)

If you're interested in how I implement the Kalman filter -- which is significantly different than Chan -- I wrote a detailed post on StackExchange - Quantitative Finance:

Does Chan use the wrong state transition model in his Kalman filter code?

Using a careful analysis, I argued in my post that Chan uses the wrong state transition matrix, i.e., the identity matrix, in his Kalman filter. I showed how to calculate the correct state transition matrix for a cointegrated portfolio, as well as how to initialize the Kalman filter using an adaptive tuning method. I received positive feedback from the readers of the forum, and one reader emailed Ernie Chan. This precipitated an email exchange between Ernie and me. Ernie basically said that his treatment was meant to be more general and so he didn't assume cointegration. I didn't want to argue, so I let it go. (My method actually works when there's no cointegration -- you basically get the identity matrix solution that Chan uses in that case.) Chan replied that "a stationary [cointegrated] portfolio can be more profitable", and that "your analysis may be correct". I didn't press him any further. I know what I did is correct, and none of the readers of my StackExchange post criticized my analysis. I received a fair number of up-votes.
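In generic terms, the point is easiest to see in the filter recursion itself, where the state transition matrix F is an explicit parameter and Chan's choice is simply the special case F = I. Below is a textbook predict/update step in Python (not my production code; the two-state example values at the bottom are made up for illustration):

```python
import numpy as np

def kalman_step(x, P, y, H, F, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    F is the state transition matrix; F = I gives random-walk betas (Chan)."""
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update
    e = y - H @ x_pred                       # measurement prediction error
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ e
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, e

# two hedge-ratio states, one scalar price observation (illustrative values)
x = np.zeros(2)
P = np.eye(2)
H = np.array([[1.2, 0.8]])       # observation map (e.g. current asset prices)
F = np.eye(2)                    # Chan's identity-matrix choice
Q = 1e-4 * np.eye(2)
R = np.array([[0.25]])
x, P, e = kalman_step(x, P, np.array([1.0]), H, F, Q, R)
```

The argument in my StackExchange post is that for a cointegrated portfolio one can derive a non-identity F from the mean-reversion dynamics, which changes the state prediction and hence the weights; see the linked post for that derivation.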

I hope this is helpful.

POSTED BY: Amanda Gerrish
Posted 5 years ago

Amanda - thanks for taking the time to write such an elaborate answer. To me this is extremely interesting, and I had a similar experience at the beginning of this year in one of my cointegration baskets with European stock index futures, where the basket wandered off on a really long adverse excursion before eventually reverting at a loss. I realize this became a quite lengthy post, so I apologize for that in advance.

There are a few core concepts associated with this type of trading that I'm constantly working on, in addition to refining the mathematical procedure of constructing a stationary portfolio. It would be interesting to hear your view on these as well:

  1. Selecting the ETFs? Are you using the same ETFs or do you continuously screen for new combinations with potentially better cointegration statistics? I have relied on a basket with the same set of stock index futures, reasoning that the European economies are fundamentally interlinked at some level, and indeed this can be validated statistically for extended periods. But not always… and there's the problem.

  2. Arbitrage between the ETF and its constituents? This is also briefly described by Chan, but of course any practical implementation comes with a heap of issues not covered in the book. I alluded to this in my first post and I think it is a quite interesting approach. The point here is that the ETF is perfectly cointegrated with its portfolio of weighted constituents by construction, and not by a hidden set of underlying factors etc. The task and the challenge here is to find a subset of constituents with high enough cointegration properties in combination with sufficient variance to overcome the transaction cost. I'm exploring this approach again for stock indices and their constituents, where I periodically reconstruct the constituent subset basket. I would imagine it to be quite straightforward to apply your existing model to this approach as well?

  3. Combining strategies? Whether they are statistical flukes or cointegration breakdowns, one of the cures for painful or even catastrophic drawdowns is to maintain a portfolio of different strategies with limited correlation. I tend to focus more on methods to construct a portfolio of cointegrating baskets than going all in on one or a few of them. What are your thoughts on this?

Regarding Chan's book, I think the depth of your research is on a completely different level. You are rebuilding the concepts from scratch, finding and solving issues not even mentioned in the book. There are clearly shortcuts and maybe even mistakes in the book, but in all fairness I think Ernie's doing a great job explaining the basic concepts and setting the scene for further investigation.

Your post on StackExchange is what led me to the Wolfram site in the first place. I'm already working on incorporating your procedure in Python.

Posted 5 years ago


Thanks for your post. I suppose that large excursions from equilibrium are a risk with mean-reversion strategies. (The underlying statistics are not strictly a normal distribution, and so "fat tails" imply that large excursions occur more frequently than one might expect.)

In reply to your points:

  1. I'm using the same ETF triplet for my trading because it has a high Johansen score (>> 99%), a modest half-life for mean-reversion, and the three ETFs are very liquid (which is also very important!). I can imagine that trading multiple portfolios (or perhaps larger portfolios of more than three ETFs) would likely reduce risk, but it would also increase transaction costs. My funds are limited (< $100,000), so I haven't pursued that option.

  2. As to arbitrage between an ETF and its components, I would imagine that there would be only limited arbitrage opportunities (because the ETFs track their components pretty closely) which would limit profits, as you suggest. However, I would certainly expect high cointegration. It's just the small excursions from equilibrium that would limit profitability.

  3. Combining strategies is probably a good idea. I hear that that's what the large quant hedge funds do. They have multiple quants pursuing different strategies, and when one strategy is not working, others are. I actually have a trend-following algorithm that I've been using with cryptocurrencies over the past seven months, so I suppose I am "combining strategies" -- even though my total investment in cryptocurrencies is small ($5,000). Unfortunately, the cryptocurrencies had a horrendous sell-off last year. Nevertheless, my algo limited my maximum drawdown to around 20% by mostly keeping me out of the market. I'm hoping that the period of relative stability in the cryptocurrencies in the past few months is a prelude to stronger prices. I'm actually starting to make a small amount of money in the cryptos.

I agree with your comments about Chan. I'm grateful that he's illuminated the basic concepts and strategies. His book inspired me to study how to best implement the Kalman filter when trading a cointegrated portfolio, which I decided to share with others. If you have difficulty implementing the Kalman filter strategy, let me know. I can help with explanations, but I won't post my Kalman filter code on a public forum. I put too much effort into that to just give it away. I'm sure you understand.


POSTED BY: Amanda Gerrish
Posted 5 years ago

Amanda - Your help regarding the implementation of the Kalman filter would be greatly appreciated and I fully understand that you don't want to publish your code - I wouldn't either! I'm just about to finalize the index arbitrage backtesting and I'll let you know whether there is any value to be gained. Then I'll start working on your Kalman idea, expecting to get stuck rather soon(!). So if you don't mind, I'll contact you again once I'm on the move with that.


Posted 2 years ago

Hi Amanda,

This post is several years old now and so I don't know if you still follow it but I'm curious how your strategy performed and if you've made any modifications or changes to your methodology.

Also, could you implement your iterative weight estimation procedure with Kalman Filter using Mathematica's built-in KalmanEstimator function?

Thank you, Reid

POSTED BY: Reid Frasier