Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Mathematics sorted by active

Plot in OblateSpheroidal coordinate system?
http://community.wolfram.com/groups/-/m/t/1132374
I wanted to produce a plot in the OblateSpheroidal coordinate system. Since there is no such function in Mathematica, I tried a coordinate transformation to spherical coordinates followed by a plot using SphericalPlot3D. I want to plot an ellipse and hyperbola of revolution as in the image below, but the code just plots a sphere.
![enter image description here][1]
fromOblatetoSpherical =
CoordinateTransformData[{{"OblateSpheroidal", 1}, 3} -> "Spherical",
"Mapping"];
CoordinateChartData[{{"OblateSpheroidal", {\[FormalA]}}, "Euclidean",
3}, "StandardCoordinateNames"]
sph = fromOblatetoSpherical@%
sph2 = Simplify[sph /. x_String :> ToExpression[x]]
SphericalPlot3D[
sph2[[1]] = #/5, {\[Eta], 0, 3 Pi/4}, {\[CurlyPhi], 0, 2 Pi},
PlotStyle ->
Directive[Orange, Opacity[0.7], Specularity[White, 10]],
PlotRange -> All, ImageSize -> Small, Mesh -> None,
PlotPoints -> 50] & /@ {-1, 3, 6, 8, 12}
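As a point of reference for the transformation, here is a small check (in Python, with my own symbol names and one common convention; Mathematica's {"OblateSpheroidal", 1} chart may differ in ordering or signs) that a surface of constant first coordinate is an ellipsoid of revolution, which is why the intended plot should show ellipses rather than spheres:

```python
import math

def oblate_to_cartesian(a, xi, eta, phi):
    # One common oblate-spheroidal -> Cartesian convention (assumption:
    # Mathematica's "OblateSpheroidal" chart may name/order these differently)
    x = a * math.cosh(xi) * math.cos(eta) * math.cos(phi)
    y = a * math.cosh(xi) * math.cos(eta) * math.sin(phi)
    z = a * math.sinh(xi) * math.sin(eta)
    return x, y, z

# Fixing xi and sweeping eta, phi traces an oblate ellipsoid of revolution:
# (x^2 + y^2)/(a cosh xi)^2 + z^2/(a sinh xi)^2 == 1
a, xi = 1.0, 0.8
residuals = []
for eta in (0.3, 1.0, 1.4):
    for phi in (0.0, 2.0, 4.0):
        x, y, z = oblate_to_cartesian(a, xi, eta, phi)
        residuals.append((x * x + y * y) / (a * math.cosh(xi)) ** 2
                         + z * z / (a * math.sinh(xi)) ** 2 - 1.0)
```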
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ObaltedSpheroid.png&userId=120134
Jose Calderon 2017-07-01T20:26:26Z

The Kolakoski Sequence
http://community.wolfram.com/groups/-/m/t/1154374
The Kolakoski Sequence was recently in the news. It starts out as
12211212212211211221211212211211212212211212212112112212211212212211211212212112212211212212211211221211212212211211212
n = 10; ko = Prepend[Nest[Flatten[Partition[#, 2] /.
{{2, 2} -> {2, 2, 1, 1}, {2, 1} -> {2, 2, 1}, {1, 2} -> {2, 1, 1}, {1, 1} -> {2, 1}}] &, {2, 2}, n], 1];
Try doing Length /@ Split[ko] on that, and you get the same sequence: it is self-descriptive. I bumped the code up to n = 32 to get 1058436 terms. What is the behavior of the 1s and 2s over that range?
ListPlot[FoldList[Plus, 0, 2 (ko - 3/2)], Joined -> True, AspectRatio -> 1/7]
![Kolakowski sequence][1]
Seems pretty chaotic.
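As a language-neutral cross-check of the self-descriptive property, here is a short Python sketch; the generator below is the standard run-length construction, not the substitution rule used in the code above:

```python
from itertools import groupby

def kolakoski(n):
    """First n terms of the Kolakoski sequence 1, 2, 2, 1, 1, 2, 1, 2, 2, ..."""
    seq = [1, 2, 2]
    i = 2  # seq[i] is the length of the next run to append
    while len(seq) < n:
        seq.extend([2 if seq[-1] == 1 else 1] * seq[i])
        i += 1
    return seq[:n]

ko = kolakoski(1000)
run_lengths = [len(list(g)) for _, g in groupby(ko)]
# Self-description: the run lengths reproduce the sequence itself
# (the final run may be truncated, so compare all but the last entry).
```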
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=kolakowski.jpg&userId=21530
Ed Pegg 2017-07-27T22:13:10Z

Z-Transform of sequence = causal sequence + anticausal sequence
http://community.wolfram.com/groups/-/m/t/1153032
Hello everyone. How can I get the Z-Transform of sequence=causal sequence+anticausal sequence like this:
x=a^n HeavisideTheta[n]-b^n HeavisideTheta[-n-1]
Z[x]=(1/(1-a z^-1) + 1/(1-b z^-1))
I know that the command ZTransform works only for causal sequences, because ZTransform implements only the unilateral Z-transform.
Thank you very much.
Gennaro Arguzzi 2017-07-26T10:26:32Z

How can I rerun this recurrence function multiple times?
http://community.wolfram.com/groups/-/m/t/1152029
Hey,
First and foremost, I'm sorry if this is in the wrong thread; it is my first time posting here.
I'm currently trying to forecast interest rates with the Vasicek model. My approach is to use a recursive equation (the mean reversion rate, mean reversion level and volatility are already calculated). It goes as follows:
r_{i+1} = r_{i} + \kappa (\theta - r_{i}) dt + \sigma \epsilon_{i} dt
where \epsilon_{i} is a standard normally distributed random variable. For simplicity I set dt = 1, because I wanted a daily forecast.
Currently I'm using RecurrenceTable to solve it, but I get the error message "The expression 'xy' cannot be used as a part specification" (though the program gives me an output nevertheless).
![enter image description here][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=vasicekforecast.PNG&userId=1151771
It seems like the program runs as expected, but the error message is bugging me a little. Is there a better way to implement it? I tried it in a For loop, but it didn't work.
My next step would be to rerun this multiple times to get different forecasts, but I don't know how I should do this. Manually it would be too much effort, and my last try at programming it was a failure. My idea was to have two For loops that give me an output in matrix form, with the days written in the rows and the runs written in the columns.
If you have a question regarding the idea, feel free to ask. I'm thankful for all the help.
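For the "matrix of runs" idea (days in rows, runs in columns), here is a minimal Python sketch of the same recursion with dt = 1 as in the post; all parameter values below are placeholders, not the poster's calibrated values:

```python
import random

def vasicek_paths(r0, kappa, theta, sigma, days, runs, seed=0):
    """Simulate the Euler-discretised Vasicek recursion (dt = 1) for
    several independent runs; paths[day][run], as the poster describes."""
    rng = random.Random(seed)
    paths = [[r0] * runs]
    for _ in range(days):
        prev = paths[-1]
        paths.append([r + kappa * (theta - r) + sigma * rng.gauss(0, 1)
                      for r in prev])
    return paths  # (days + 1) rows x runs columns

# With sigma = 0 the recursion is deterministic mean reversion towards theta,
# which makes the first step easy to verify by hand.
paths = vasicek_paths(r0=0.03, kappa=0.1, theta=0.05, sigma=0.0, days=10, runs=3)
```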
Regards, Tim
Tim 2017-07-24T15:18:25Z

Polynomial Term Order?
http://community.wolfram.com/groups/-/m/t/1152222
I would like polynomials in TraditionalForm to be ordered in descending order of degree. All the polynomials that I am dealing with are in one variable. For instance, I have -5x+3; TraditionalForm outputs 3-5x. I found an article [HERE][1] that describes a solution, but I can't get it to work. It says to do this:
poly=-5x+3;
MonomialList[poly, x];
poly // TraditionalForm
But the output I'm getting is still 3-5x. What am I doing wrong?
Thanks in advance,
Mark Greenberg
[1]: http://reference.wolfram.com/language/howto/RearrangeTheTermsOfAPolynomial.html.en
Mark Greenberg 2017-07-24T23:06:34Z

How do I model electrodiffusion in Wolfram Mathematica?
http://community.wolfram.com/groups/-/m/t/1152471
I am an undergraduate researcher. I have been working with Mathematica for roughly a month. I have been assigned a modeling task. My objective is to model the diffusion of ions in solution under the effect of an electric field and ignoring concentration gradient.
Part 1: Equations
-----------------
The model is to be one dimensional for the time being. In the future it will be translated to 2D and ultimately 3D.
The equation used is as follows:
∂C/∂t=∂/∂x(D ∂C/∂x-zμC ∂Φ/∂x)
Where:
C : Concentration of ions
D: Diffusion coefficient
z: Charge per molecule
μ: Ion mobility
Φ: Electric potential
*C is a function of time and space. “t” and “x”.*
*Φ is a function of space. “x”.*
*All remaining terms are assumed constant. (D,z, μ)*
The model I am making ignores diffusion caused by concentration gradient (The highlighted term).
∂C/∂t=∂/∂x(**D ∂C/∂x**-zμC ∂Φ/∂x)
The following assumptions are made:
D ∂C/∂x=0
z = -1
μ = 1
This leaves us with:
∂C/∂t=∂/∂x(C ∂Φ/∂x)
Finally, performing the product rule gives the final equation.
∂C/∂t=C (∂^2 Φ)/(∂x^2 )+∂C/∂x ∂Φ/∂x
Part 2: Wolfram
---------------
Mathematica’s DSolve function presents a general solution. But despite multiple attempts and combinations of boundary and initial conditions of both the concentration and potential, I can’t get DSolve to present a particular solution. The code is as follows:
1D Electrodiffusion
The purpose of this program is to model the diffusion of ions under the influence of an electric field ONLY.
The first attempts will use "DSolve"
Later attempts will use "NDSolve" if no prior attempt is successful
The following cell works to describe the partial differential equation to be solved
edeqn = D[u[t, x], t] ==
u[t, x]*D[\[CapitalPhi][x], {x, 2}] +
D[u[t, x], x]*D[\[CapitalPhi][x], x]
The following cell attempts to discern a solution to the differential equation
sol = DSolve[edeqn, u, {t, x}]
Simplify[edeqn /. sol]
This returns the general solution. I have provided a few samples of my attempts to attain a particular solution below.
The following cell shows the effect of adding an initial condition and a boundary condition for the left AND right sides of the channel.
bc = {u[t, 0] == 10, u[t, 2] == 0}
ic = u[0, x] == 8
sol = DSolve[{edeqn, bc, ic}, u, {t, x}]
The following cell attempts to discern a solution to the differential equation.
The solver has been told to solve for "u" and "\[CapitalPhi]".
Both "u" and "\[CapitalPhi]" are given as functions of "t" and "x".
edeqn = D[u[t, x], t] ==
u[t, x]*D[\[CapitalPhi][t, x], {x, 2}] +
D[u[t, x], x]*D[\[CapitalPhi][t, x], x]
sol = DSolve[edeqn, {u, \[CapitalPhi]}, {t, x}]
Boundary condition. "u". Left side.
Boundary condition. "u". Right side.
Initial condition. "u".
Boundary condition "\[CapitalPhi]". Left side.
Boundary condition "\[CapitalPhi]". Right side.
Initial condition. "\[CapitalPhi]".
bc = {u[t, 0] == 10,
u[t, 2] == 0, \[CapitalPhi][t, 0] == 5, \[CapitalPhi][t, 2] == 0}
ic = {u[0, x] == 8, \[CapitalPhi][0, x] == 0}
edeqn = D[u[t, x], t] ==
u[t, x]*D[\[CapitalPhi][t, x], {x, 2}] +
D[u[t, x], x]*D[\[CapitalPhi][t, x], x]
sol = DSolve[{edeqn, bc, ic}, {u, \[CapitalPhi]}, {t, x}]
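For intuition about the reduced equation ∂C/∂t = ∂/∂x(C ∂Φ/∂x): with a prescribed linear potential it collapses to constant-speed advection, which even a first-order upwind scheme integrates stably. The Python sketch below is illustrative only; the field strength, grid, pulse and periodic boundary are my assumptions, not part of the original problem:

```python
# With Phi(x) = -E*x (so dPhi/dx = -E), dC/dt = d/dx(C dPhi/dx)
# reduces to the advection equation dC/dt + E dC/dx = 0.
E, L, nx, dt, steps = 1.0, 2.0, 100, 0.005, 200
dx = L / nx
C = [1.0 if 0.4 < i * dx < 0.8 else 0.0 for i in range(nx)]  # initial pulse
mass0 = sum(C) * dx
for _ in range(steps):
    # first-order upwind update on a periodic grid (stable: E*dt/dx = 0.25)
    C = [C[i] - E * dt / dx * (C[i] - C[(i - 1) % nx]) for i in range(nx)]
mass = sum(C) * dx  # total ion content should be conserved by the transport
```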
I apologize for the length of the post. I've never used a forum before.
I do hope you all will consider my dilemma, and are willing and able to provide a solution. I have attached the full code of my attempts. Thank you for your time.
Kali Ellison 2017-07-25T15:09:20Z

How can I solve and plot this differential equation numerically?
http://community.wolfram.com/groups/-/m/t/1150253
e1 = 0.001;
e^0.4 = {{2.37/de^0.06} - 1.6}/10 {1 - Log[de]/13.8}^0.7
The initial value is e1 = 0.001 at t0 = 1, and the time period is {t, 1, 18000}.
dimitriss mal 2017-07-20T20:00:18Z

Plot special functions' real and imaginary part?
http://community.wolfram.com/groups/-/m/t/1092218
Consider the following code:
0F1(; 1; j*π/2*x) * e^(j*2π*x)
x ∈ [-π/2, +π/2]
The task is to visualize the real and imaginary part. Here is how I tried it; what has to be different? Besides, I need the first three derivatives; it didn't work like that.
Grid[
Partition[
Table[
Plot[
Evaluate[{Re[
D[Hypergeometric0F1[
1, (\[ImaginaryJ]*\[Pi]/2*x)*E^j2\[Pi]x], {x, i}]],
Im[D[
Hypergeometric0F1[
1, (\[ImaginaryJ]*\[Pi]/2*x)*E^j2\[Pi]x], {x, i}]]}],
{x, -2/\[Pi], 2/\[Pi]},
PlotRange -> Automatic,
Frame -> True,
GridLines -> Automatic,
AspectRatio -> 1,
FrameLabel -> {"x",
StringForm[
"\!\(\*SubscriptBox[\(\[InvisiblePrefixScriptBase]\), \(0\)]\)\
\!\(\*SubscriptBox[OverscriptBox[\(F\), \(~\)], \(1\)]\)^(``)(\
\[ImaginaryJ]*\[Pi]/2*x)*\!\(\*SuperscriptBox[\(\[ExponentialE]\), \
\(j2\[Pi]x\)]\)"]},
PlotLegends -> Placed[{"Re", "Im"}, {Center, Top}],
ImageSize -> 300], {i, 0, 3}], 2], Frame -> All]
Azad Kaygun 2017-05-12T15:36:28Z

Function evaluation
http://community.wolfram.com/groups/-/m/t/1152248
Hello Everyone,
I would be grateful for any help you can offer.
Given a function of 2 variables and a parameter f(x,y,a), can we find for what values of the variables the function is less than zero?
Thanks a lot for your time.
Deepa M 2017-07-25T06:35:30Z

Infinite expression encountered problem
http://community.wolfram.com/groups/-/m/t/1151487
Hi All,
I would like to solve two coupled ODEs using NDSolve; please see my code below. Basically, I have two variables, y'[x] and g'[x] (not g''[x]). The reason I formulate my ODE using g''[x] is that I have an integral boundary condition on g'[x], so I use the trick from [here][1] to reformulate my equation. I will be happy to provide the original form of the equation if there is a question related to this part.
My problem is that Mathematica complains about "Infinite expression encountered" for my second equation (the g''[x] part). My guess is that the denominator of this equation has a y[x] term, which equals 0 at the left boundary. Even though I tried to avoid this by setting y[-1 + eps] == eps, it appears to still have this issue.
ClearAll["Global`*"];
w = 0.7;
q = 0.5*w*(1 - x^2);
kappa = 1.469;
sol = With[{eps = 10^-5}, NDSolve[{
y'[x] == (560*y[x]^0.5*q - 64*y[x]^4 + 6*w^(7/8)*(72*kappa - 77)*x*q*(-w*y[x]))/(3*w^(7/8)*(96*kappa - 77)*q^2),
g''[x] == (2695*y[x]^0.5*q - (864*kappa - 385)*y[x]^4 - 693*w^(7/8)*kappa*y[x]*q*(-w*x))/(9*(96*kappa - 77)*y[x]^3),
y[-1 + eps] == eps,
g[1 - eps] - g[-1 + eps] == 0,
g'[0] == 0}, {y[x], g'[x]},
{x, -1 + 0.001, 1 - 0.001}]];
One method that I have tried is to give initial conditions to Mathematica as below (as suggested [here][2]). However, I encountered the error "Initial conditions should be specified at a single point."
ClearAll["Global`*"];
w = 0.7;
q = 0.5*w*(1 - x^2);
kappa = 1.469;
sol = With[{eps = 10^-5}, NDSolve[{
y'[x] == (560*y[x]^0.5*q - 64*y[x]^4 + 6*w^(7/8)*(72*kappa - 77)*x*q*(-w*y[x]))/(3*w^(7/8)*(96*kappa - 77)*q^2),
g''[x] == (2695*y[x]^0.5*q - (864*kappa - 385)*y[x]^4 - 693*w^(7/8)*kappa*y[x]*q*(-w*x))/(9*(96*kappa - 77)*y[x]^3),
y[-1 + eps] == eps, g[1 - eps] - g[-1 + eps] == 0,
g'[0] == 0}, {y[x], g'[x]},
{x, -1 + 0.001, 1 - 0.001},
Method -> {"Shooting",
"StartingInitialConditions" -> {y[-1 + eps] == eps,
g'[eps] == 0}}]];
Any help will be greatly appreciated! Also, I have a related question: right now I have 2 ODEs, but I am planning to add a transient term d/dt for each variable I am solving here. Assuming Mathematica can solve these ODEs, is it possible to solve the transient PDEs? It will be a transient 1D problem, and from the NDSolve documentation it seems that Mathematica should have the capability to solve it.
------------------------Update for the original equation-----------------
My x range is from -1 to 1. Here are equations that I would like to solve
![My equation][3]
I also noticed [here][4] that this problem can be formulated as an optimization problem. However, I am having difficulty with this line of code from that link:
sol2[bc2_, {xmin_, xmax_}] :=
NDSolveValue[{y''[x] - y[x] == (x^2), y'[0] == bc2, y[0] == 1}, y, {x, xmin, xmax}];
int[bc2_?NumericQ] := NIntegrate[sol2[bc2, {0, 2}][x], {x, 0, 2}];
**y2 = sol2[NMinimize[(int[bc2V] - 5)^2, bc2V][[-1, -1, -1]], {-3, 3}]**
Plot[{y1[x], y2[x]}, {x, -3, 3}]
I don't know what **[[-1, -1, -1]]** has to do with the original problem or formulation.
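For what it's worth, the idea behind that StackExchange snippet (shoot on the unknown initial slope until the integral constraint is met) can be reproduced language-agnostically. Here is a plain-Python sketch of the same toy problem y'' - y = x^2, y(0) = 1, with the constraint Integrate[y, {x, 0, 2}] == 5 enforced by bisection on y'(0); the RK4/trapezoid details are my own choices:

```python
def integral_of_y(bc2, n=2000):
    """RK4-solve y'' - y = x^2, y(0) = 1, y'(0) = bc2 on [0, 2] and
    return the trapezoid estimate of the integral of y over [0, 2]."""
    h = 2.0 / n
    x, y, v = 0.0, 1.0, bc2                  # v = y'
    f = lambda x, y, v: (v, y + x * x)       # (y', v')
    area = 0.0
    for _ in range(n):
        y0 = y
        k1 = f(x, y, v)
        k2 = f(x + h / 2, y + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(x + h / 2, y + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(x + h, y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += h
        area += h * (y0 + y) / 2
    return area

# The integral grows monotonically with bc2 (linear ODE), so bisect on bc2
# until the constraint integral == 5 is met.
lo, hi = -10.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if integral_of_y(mid) < 5 else (lo, mid)
bc2_star = (lo + hi) / 2
```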
[1]: https://mathematica.stackexchange.com/questions/86403/solve-differential-equation-using-a-integral-form-boundary-condition
[2]: https://mathematica.stackexchange.com/questions/24312/infinite-expression-error-from-ndsolve
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=CodeCogsEqn.png&userId=1150579
[4]: https://mathematica.stackexchange.com/questions/86403/solve-differential-equation-using-a-integral-form-boundary-condition
Kai D 2017-07-23T17:32:53Z

Using MinimalPolynomial[x^(1/x) - 1, b] == (1 + b)^x - x
http://community.wolfram.com/groups/-/m/t/1152134
The MRB constant is defined at [http://mathworld.wolfram.com/MRBConstant.html][1].
In looking for a faster method of calculating digits of the MRB constant, Sum[(-1)^x (x^(1/x) - 1)],
by the seemingly difficult method of solving minimal polynomials, I came across the following,
where it seems Table[Expand[(1 + b)^x - x], {x, 1, 145}] = Table[MinimalPolynomial[x^(1/x) - 1, b], {x, 1, 145}] for all x
except for "numbers of the form (kp)^p for prime p and k = 1, 2, 3, ...," OEIS [A097764][2]:
(real = Table[MinimalPolynomial[-1 + x^(1/x), b], {x, 1, 145}]);
(guess = Table[Expand[(1 + b)^x]-x, {x, 1, 145}]);
real - guess // TableForm (*shows the equality for all but OEIS [A097764][3] *)
This equality could become very useful because as x gets large the minimal polynomial of x^(1/x)-1 becomes exceedingly difficult to compute!
Here is how this equality can be used:
Partial sum(s) of the MRB constant can be found through a sum of NSolves,
y = 1000; N[-y - Sum[b /. (NSolve[(1 + b)^n == n, b, Reals][[1]]), {n, 1, y}]]
giving a more correct result than
NSum[(-1)^n (n^(1/n) - 1), {n, 1, 1000}]
because it removes the imaginary part given by E^(I*Pi x) i.e. (-1)^x.
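The root-based partial sum can be sanity-checked numerically. The real solutions of (1 + b)^n = n are b = n^(1/n) - 1 for all n, plus b = -n^(1/n) - 1 when n is even; assuming NSolve's first listed real root is the smaller one, the posted expression reduces to the alternating partial sum, as this short Python check confirms:

```python
# Reproduce "-y - (sum of first real root of (1+b)^n == n)" and compare
# it with the direct alternating partial sum of the MRB series (y even).
y = 1000
first_roots = []
for n in range(1, y + 1):
    r = n ** (1.0 / n)
    # smaller real root: -r - 1 when n is even (two real roots), r - 1 when odd
    first_roots.append(-r - 1.0 if n % 2 == 0 else r - 1.0)
root_based = -y - sum(first_roots)
direct = sum((-1) ** n * (n ** (1.0 / n) - 1.0) for n in range(1, y + 1))
```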
[1]: http://mathworld.wolfram.com/MRBConstant.html
[2]: https://oeis.org/A097764
[3]: https://oeis.org/A097764
Marvin Ray Burns 2017-07-24T15:11:14Z

[✓] Inverse Z-Transform of z/(z - a) with different region of convergence?
http://community.wolfram.com/groups/-/m/t/1147972
Hello everyone. I tried to get the inverse Z transform of z/(z - a) with different ROC.
InverseZTransform[z/(z - a), z, n, Assumptions -> Abs[z] > Abs[a]]
InverseZTransform[z/(z - a), z, n, Assumptions -> Abs[z] < Abs[a]]
Both cases give me the same output, a^n. Actually, the inverse Z-transform is
(a^n) HeavisideTheta[n] when ROC is Abs[z] > Abs[a],
and is
(-a^n) HeavisideTheta[-n-1] when ROC is Abs[z] < Abs[a].
How can I get these outputs?
Thank you very much.
Gennaro Arguzzi 2017-07-17T19:42:48Z

Pairs Trading with Copulas
http://community.wolfram.com/groups/-/m/t/1111149
**Introduction**
In a previous post, [Copulas in Risk Management][1], I covered the theory and applications of copulas in the area of risk management, pointing out the potential benefits of the approach and how it could be used to improve estimates of Value-at-Risk by incorporating important empirical features of asset processes, such as asymmetric correlation and heavy tails.
In this post I take a different tack, to show how copula models can be applied in pairs trading and statistical arbitrage strategies.
This is not a new concept - it dates from when copulas began to be widely adopted in financial engineering, risk management and credit derivatives modeling. But it remains relatively under-explored compared to more traditional techniques in this field. Recent research suggests that it may be a useful adjunct to the more common methods applied in pairs trading, and may even be a more robust methodology altogether, as we shall see.
**Traditional Approaches to Pairs Trading**
Researchers often use simple linear correlation or distance metrics as the basis for their statistical arbitrage strategies. The problem is that statistical relationships may be nonlinear or nonstationary. Correlations (and betas) that have fluctuated in a defined range over a considerable period of time may suddenly break down, producing substantial losses.
A more sophisticated technique is the Kalman Filter, which can be used as a means of dynamically updating the estimated correlation or relative beta between pairs (or portfolios) of stocks, a technique I have written about in the post Statistical Arbitrage with the Kalman Filter.
Another commonly employed econometric technique relies on cointegration relationships between pairs or small portfolios of stocks, as described in my post on Developing Statistical Arbitrage Strategies Using Cointegration. The central idea is that, in theory, cointegration is a more stable and reliable basis for assessing the relationship between stocks than correlation.
Researchers often use a combination of methods, for example by requiring stocks to be both cointegrated and with stable, high correlation throughout the in-sample formation period in which betas are estimated.
In all these cases, however, the challenge is that, no matter how they are derived or estimated, statistical relationships have a tendency towards instability. Even a combination of several of these methods often fails to detect signs of a breakdown in statistical relationships. There is even evidence that cointegration models are no more robust or reliable than simple correlations. For example, in his paper On the Persistence of Cointegration in Pairs Trading, Matthew Clegg assesses the persistence of cointegration among U.S. equities in the calendar years 2002-2012, comprising over 860,000 pairs in total. He concludes that “the evidence does not support the hypothesis that cointegration is a persistent property”.
**Pairs Trading in the S&P500 and Nasdaq Indices**
To illustrate the copula methodology I will use an equity pair comprising the S&P 500 and Nasdaq indices. These are not tradable assets, but the approach is the same regardless and will serve for the purposes of demonstrating the technique.
We begin by gathering daily data on the indices and calculating the log returns series. We will use the data from 2010 to 2015 as the in-sample “formation” period, and test the strategy out of sample on data from Jan 2016-Feb 2017.
![enter image description here][2]
![enter image description here][3]
![enter image description here][4]
![enter image description here][5]
![enter image description here][6]
The chart below shows a scatter plot of daily percentage log returns on the SP500 and NASDAQ indices.
![enter image description here][7]
![enter image description here][8]
**MODELING**
**Marginal Distribution Fitting**
In the post Copulas in Risk Management it was shown that the returns series for the two indices were well-represented by Student T distributions. I replicate that analysis here, estimating the parameters by maximum likelihood and proceed from there to test each distribution for goodness of fit. In each case, the Student T distribution appears to provide an adequate fit for both series.
![enter image description here][9]
![enter image description here][10]
![enter image description here][11]
![enter image description here][12]
**Copula Calibration**
We next calibrate the parameters for the Gaussian copula by maximum likelihood, from which we derive the joint distribution for returns in the two indices via Sklar’s decomposition. This will be used directly in the pairs trading algorithm. As pointed out previously, there are several alternatives to MLE, including the Method of Moments, for example, and these are listed in the Mathematica documentation for the EstimatedDistribution function.
![enter image description here][13]
![enter image description here][14]
![enter image description here][15]
![enter image description here][16]
![enter image description here][17]
![enter image description here][18]
**Pairs Trading with the Copula Model**
Once we have successfully fitted marginal distributions for the two series and a copula distribution to describe their relationship, we are able to derive the joint distribution. This means that we can directly calculate the joint probability of each pair of data observations. So, for instance, we find that the probability of a return in the S&P500 of 5% or more, together with a return in the Nasdaq of 1% or higher, is approximately 0.2%:
![enter image description here][19]
![enter image description here][20]
So the way we test our model is to calculate the daily returns for the two indices during the out-of-sample period from Jan 2016 to Feb 2017 and compute the probability of each pair of daily observations. On days where we see observation pairs with abnormally low estimated probabilities, we trade the pair accordingly over the following day.
Naturally, there are multiple issues with this simplistic approach. To begin with, the indices are not tradable and if they were we would have to account for transaction costs including the bid-offer spread. Then there is the issue of determining where to set the probability threshold for initiating a trade. We also need to decide on criteria to try to optimize the trade holding period or trade exit rules. And, finally, we need to think about trade expression: for example, we might attempt to trade both legs passively, perhaps crossing the spread to fill the remaining leg when an order for one of the pairs is filled.
But none of these issues are specific to the copula approach - they apply equally to all of the methods discussed previously. So, for the sake of clarity, I am going to ignore them. In this analysis I pick a threshold probability level of 15% and assume we hold the trade for one day only, opening and closing the trade at the start and end of the day after we receive a signal. In computing the returns for each trade I ignore any transaction costs.
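The modeling pipeline described above (Student-t marginals → Gaussian copula → joint probability → threshold signal) can be sketched end-to-end. The Python below is a hedged re-sketch on synthetic data, not the author's Mathematica code (which appears only as images); in particular, scoring days by the joint lower-tail probability is my simplification of the post's probability measure:

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for the SP500 / Nasdaq daily log returns, used
# purely to exercise the pipeline on correlated data.
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.9], [0.9, 1.0]], size=500)
spx, ndx = 0.010 * z[:, 0], 0.012 * z[:, 1]

# 1) fit Student-t marginals by maximum likelihood
df1, loc1, sc1 = stats.t.fit(spx)
df2, loc2, sc2 = stats.t.fit(ndx)

# 2) Gaussian copula: map each series to uniforms, then to normal
#    scores, and estimate the copula correlation
u = stats.t.cdf(spx, df1, loc1, sc1)
v = stats.t.cdf(ndx, df2, loc2, sc2)
zu, zv = stats.norm.ppf(u), stats.norm.ppf(v)
rho = np.corrcoef(zu, zv)[0, 1]

# 3) joint lower-tail probability of every daily observation pair
mvn = stats.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
p = mvn.cdf(np.column_stack([zu, zv]))

# 4) flag abnormally unlikely pairs (15% threshold, as in the post)
signals = p < 0.15
```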
First, we gather data for the test period:
![enter image description here][21]
Next, we use the estimated joint distribution to compute the probability of each daily observation of index returns. We gather the daily returns series and associated probability series into a single temporal variable:
![enter image description here][22]
![enter image description here][23]
We plot the time series of index returns and associated probabilities as follows:
![enter image description here][24]
![enter image description here][25]
![enter image description here][26]
![enter image description here][27]
**Trade Signal Generation**
The table below lists the index returns and joint probabilities over the first several days of the series. The sequence of trade signals is as follows:
After a very low probability reading for 2016/1/4, we take equally weighted positions short the S&P500 Index and long the Nasdaq index on 2016/1/5. We close the position at the end of the day, producing a total return of 0.44%. Similar signals are generated on 2016/1/6, 2016/1/7, 2016/1/8, 2016/1/13, 2016/1/15 and 2016/1/20 (assuming a 15% probability threshold). We take the reverse trade (buy the S&P500, sell the Nasdaq) on only one occasion in the initial part of the sample, on 2016/1/14.
![enter image description here][28]
![enter image description here][29]
**Pairs Trading Strategy Results**
We are now ready to apply the trading algorithm to the entire sample and chart the resulting P&L.
![enter image description here][30]
![enter image description here][31]
![enter image description here][32]
![enter image description here][33]
**Comment on Strategy Performance**
The performance of the strategy over the out-of-sample period, at just under 4%, can hardly be described as stellar. But this is largely due to the dampening of volatility seen in both indices over the last year, which is reflected in the progressively lower volatility of joint probabilities over the course of the test period. Such variations in signal frequency and trading strategy performance are commonplace in any statistical arbitrage strategy, regardless of the methodology used to generate the signals.
The obvious remedy is to create similar trading algorithms for a large number of pairs and combine them together in an overall portfolio that will produce a sufficient number of signals and trading opportunities to make the performance sufficiently attractive. One of the benefits of statistical arbitrage strategies developed in this way is their highly efficient use of capital, since the combination of long and short positions minimizes the margin requirement for each trade and for the portfolio as a whole.
Finally, it is worth noting here that, in principle, one could easily create similar copula-based arbitrage strategies for triplets, quadruplets, or any (reasonably small) number of assets. The principal restriction lies in the increasing difficulty of estimating the copulas and joint densities, given the slow convergence of the MLE method.
**Recent Research**
In the last few years several researchers have begun exploring the application of copulas as a basis for statistical arbitrage. In their paper “Nonlinear dependence modeling with bivariate copulas: Statistical arbitrage pairs trading on the S&P 100”, Krauss and Stubinger apply the copula approach to pairs drawn from the universe of S&P 100 index constituents, with promising results. They conclude that their “findings pose a severe challenge to the semi-strong form of market efficiency and demonstrate a sophisticated yet profitable alternative to classical pairs trading”.
In the paper by Rad, et al., cited below, the researchers compare several different methods for pairs trading strategies. They find that all of the tested methods produce economically significant returns, but only the performance of the copula-based approach remains consistent after 2009. Further, the copula method shows better performance for its unconverged trades compared to those of the other methods.
**Conclusion**
The application of copulas to statistical arbitrage strategies is an interesting and relatively under-explored alternative to the usual distance and correlation based methods. In addition to its sound theoretical underpinnings, the copula approach appears to offer greater consistency in performance compared to traditional techniques, whose efficacy has declined since the financial crisis of 2008/09. The benefits of the approach must be weighed against its greater computational complexity, although with the growth in the power of modeling software in recent years this represents less of an obstacle than it has previously.
**References**
Clegg, M., On the Persistence of Cointegration in Pairs Trading, Jan. 2014.
Krauss, C. and Stubinger, J., Nonlinear dependence modeling with bivariate copulas: Statistical arbitrage pairs trading on the S&P 100, Institut für Wirtschaftspolitik und Quantitative Wirtschaftsforschung, No. 15/2015.
Rad, H., Low, R. K. Y. and Faff, R., The profitability of pairs trading strategies: distance, cointegration, and copula methods, Quantitative Finance, DOI: 10.1080/14697688.2016.1164337, 2016.
[1]: http://jonathankinlay.com/2017/01/copulas-risk-management/
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_1.gif&userId=773999
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_2.png&userId=773999
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7037Fig1.png&userId=773999
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_3.gif&userId=773999
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_4.gif&userId=773999
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_5.png&userId=773999
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_6.gif&userId=773999
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_7.gif&userId=773999
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_8.png&userId=773999
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_10.gif&userId=773999
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4282Fig2.png&userId=773999
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_16.gif&userId=773999
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_17.png&userId=773999
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_18.png&userId=773999
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_19.gif&userId=773999
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_20.png&userId=773999
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_21.gif&userId=773999
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_22.png&userId=773999
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_23.png&userId=773999
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_24.gif&userId=773999
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_25.gif&userId=773999
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_26.gif&userId=773999
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_27.png&userId=773999
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_28.gif&userId=773999
[26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_29.png&userId=773999
[27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_30.gif&userId=773999
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_31.gif&userId=773999
[29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2430Fig3.png&userId=773999
[30]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_32.gif&userId=773999
[31]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_33.gif&userId=773999
[32]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_34.png&userId=773999
[33]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_35.gif&userId=773999
Jonathan Kinlay 2017-05-30T17:41:07Z