Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Mathematics sorted by active
How to use the Wolfram|Alpha time dilation calculator on Black Holes?
http://community.wolfram.com/groups/-/m/t/772569
Hello,
I’m a graphic artist working for an education company in Arizona. I was given the task of writing a twenty page reader about black holes for middle school students, and all of my research has gone well. But I’ve reached a dead end regarding time dilation. I’m writing a scenario where the reader visits a ten-solar mass black hole while his or her friend stays at a safe distance. I would like to write the following:
> If you could stay just in front of the event horizon, you could watch
> your ten year old friend turn 100 years old in just [*xxx* *amount of
> time*].
Unfortunately, I can’t get an adequate answer to this. I was directed to the time dilation calculator here http://www.wolframalpha.com/input/?i=time+dilation+calculator , but I don’t know how to use it. Just playing with it I’ve gotten negative numbers, *i*, and “exceeds the speed of light”. I have no idea what any of this means. What’s the gravitational acceleration? What’s the rest frame? What’s the radius of what?
I know the time should be very short, but “a blink of an eye” isn’t useful. Would some kind-hearted soul be willing to walk me through this in layman’s language? Or better yet, give me an accurate (but not necessarily precise) number. Any help is greatly appreciated.
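For what it's worth, the static (hovering) case reduces to the Schwarzschild formula dtau = dt·sqrt(1 - r_s/r) with r_s = 2GM/c², so no calculator is strictly needed. A back-of-envelope Python sketch; the choice of hovering 1% outside the horizon is arbitrary, and the friend's own motion is ignored:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M = 10 * 1.989e30    # ten solar masses, kg

r_s = 2 * G * M / c**2              # Schwarzschild radius: about 29.5 km
r = 1.01 * r_s                      # hovering 1% outside the horizon (arbitrary choice)
factor = math.sqrt(1 - r_s / r)     # dtau/dt for a static observer

far_years = 90.0                    # the distant friend ages from 10 to 100
near_years = far_years * factor     # proper time for the hovering observer
print(r_s / 1000, factor, near_years)   # ~29.5 km, ~0.0995, ~8.96 years
```

The closer r gets to r_s, the smaller the factor, so in principle any "watch 90 years pass in X" scenario can be met by choosing the hover radius.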
Thank you and best regards,
Jack
Jack M2016-01-13T02:39:29Z
[✓] Calculate this integral with one/two unknown variables?
http://community.wolfram.com/groups/-/m/t/1290048
In this double integral, "a" and "e" are assumed to be unknown parameters. I want to calculate this integral when "a" or "e" is set to a constant. And if possible, I want to draw a 3D plot of this integral over the unknown "a" and "e". Thanks very much.
![enter image description here][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=%E6%96%B0%E5%BB%BA%E4%BD%8D%E5%9B%BE%E5%9B%BE%E5%83%8F6.jpg&userId=1290032
zhaojx842018-02-22T14:04:43Z
[✓] Plot an Archimedean Spiral with equidistant points?
http://community.wolfram.com/groups/-/m/t/1180645
Hi,
does anybody have experience with the Archimedean spiral?
In general it is quite simple to create an Archimedean spiral, e.g. by a line function. I would rather like to create a spiral with equidistant points.
The number of "sampling" points should be adjustable.
I saw some examples, but I could not follow them, for example:
[Equation to place points equidistantly on an Archimedian Spiral using arc-length][1]
I created following code so far:
a = 0.75;
K = 3;
L1 = 90;
rp = 0.0;
alpha = L1*Pi/180;
Sample = 80; (* sampling per azimuth direction *)
M = 1; (* number of spirals *)
x1[t_, m_] := (rp*Cos[alpha]) + a*t*Cos[t + (m*2*Pi/M)];
y1[t_, m_] := (rp*Sin[alpha]) + a*t*Sin[t + (m*2*Pi/M)];
data1 = Table [{x1[t, m], y1[t, m]}, {t, 0, K*2 \[Pi],
2 \[Pi]/Sample}, {m, 1, M}];
dataflat1 = Flatten[data1, 1];
Graphics[{Thick, Blue, PointSize[0.0075], Point[dataflat1]}, Axes -> True, AxesLabel -> {X, Y}]
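For reference, the approach in the linked Stack Exchange answer can be sketched numerically: for r = a·t the arc length is s(t) = (a/2)(t·sqrt(1+t²) + asinh t), which is monotone and can be inverted by bisection to place points at equal arc-length spacing. A Python sketch of that idea (parameter values chosen to mirror the code above):

```python
import math

a = 0.75                      # same spiral coefficient as in the question

def arclen(t):
    # arc length of (a t cos t, a t sin t) from 0 to t: (a/2)(t*sqrt(1+t^2) + asinh t)
    return 0.5 * a * (t * math.sqrt(1 + t * t) + math.asinh(t))

t_max = 6 * math.pi           # three turns (K = 3)
n = 100                       # number of equal arc-length steps (adjustable)
total = arclen(t_max)
ds = total / n

ts = [0.0]
for k in range(1, n + 1):
    target = k * ds
    lo, hi = ts[-1], t_max    # arclen is monotone, so bisect for arclen(t) == target
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if arclen(mid) < target:
            lo = mid
        else:
            hi = mid
    ts.append(0.5 * (lo + hi))

points = [(a * t * math.cos(t), a * t * math.sin(t)) for t in ts]
```

The same inversion can be done in Mathematica with FindRoot on the arc-length function; the points are then equidistant along the curve (chord lengths still vary slightly near the center, where the curvature is high).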
![enter image description here][2]
[1]: https://math.stackexchange.com/questions/1371668/equation-to-place-points-equidistantly-on-an-archimedian-spiral-using-arc-length/2216736#2216736
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3425trgfa.png&userId=11733
Nikki Peter2017-09-10T15:18:29Z
[✓] Keep x/y (Divide[x,y]) from turning into Times[x, Power[y,-1]]?
http://community.wolfram.com/groups/-/m/t/1290158
I would like to have a way to keep x/y in the Divide[x,y] notation, but it seems
to go immediately to the standard form
Times[x, Power[y,-1]]
There does not seem to be any attribute that controls this, as there is
for properties such as commutativity and associativity, etc.
Similarly, you have
FullForm[2/(3x)]
as
Times[2, Times[Rational[1, 3], Power[x, -1]]]
whereas I might want to distinguish between 2/(3x) and (2/3)(1/x), etc.
I know this is normally a feature, not a bug, but there are times when I'd like to have
more control over the conversion to standard form. Is there any way to do this?
Much thanks. - Elaine
Elaine Kant2018-02-22T23:32:14Z
MATLink (MATLAB ==> Mathematica)
http://community.wolfram.com/groups/-/m/t/1290731
I am Hamed. I have MATLAB R2015a and Mathematica 11.1.1.0, and I now want to install MATLink to allow a quick and easy transition from MATLAB to Mathematica. Version 1.1 of MATLink does not work on my computer; can someone help me, please?
Hamed BOUARE2018-02-23T12:47:50Z
[✓] Integrate a piecewise function?
http://community.wolfram.com/groups/-/m/t/1290148
I am trying to integrate a step function given in a book, but it doesn't work.
Phi[x_]:= Piecewise[{{-Hg x, -g/2 <= x <= g/2}, {-g Hg/2, x >= g/2}, {g Hg/2, x < -g/2}}]
I can plot it, and it gives the same result as in the book, but the integration is not the same. My integration code is:
Integrate[y1/ \[Pi] \[CapitalPhi][x]/((x1-x)^2+y1^2), {x, -Infinity, Infinity}, Assumptions -> x1 > 0 && y1 > 0](*here y1=y in book*)
The book only says y > 0, so I assume x > 0 also, because with just y > 0 Mathematica doesn't integrate. Thanks for your help.
My code and the book page are attached.
![enter image description here][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=IMG_4799.JPG&userId=1289160
Arm Mo2018-02-22T23:29:57Z
[✓] Plot a graph for viscosity vs. molecular weight?
http://community.wolfram.com/groups/-/m/t/1289179
Hi,
I just started learning Mathematica.
I'm trying to plot the graph of viscosity vs. molecular weight (attached pictures: formula [1] and the book's plot [2], from Cosgrove's book on colloid science).
I first tried to sum the functions (linear part EE1(M) and nonlinear part EE2(M)), but I don't think it works like that (line 3).
So I used the "If" command, and it seems that is also wrong (line 5). I think I have a problem with defining functions and parameters in Mathematica.
My .nb file is attached and
I appreciate your help.
P.S.: Also look at line 7 - why doesn't Mathematica plot when I use the function's name?
EE1[M1_] := a M1;
EE2[M1_] := a M1^3.4;
Plot[{a M1 + a M1^3.4}, {M1, 1, 100}, PlotRange -> Automatic]
Mc = 54;
ee[M_] := If[M < Mc, M, M^3.4](*ee is viscosity [\eta]*)
Plot[If[M < Mc, M, M^3.4], {M, 1, 100}, PlotRange -> Automatic]
Also, why doesn't this code produce a Manipulate plot when I use the functions' names?
Manipulate[Plot[{EE1 + EE2}, {M1, 1, 100}, PlotRange -> Automatic], {a, 1, 3}]
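A note on the model itself: the textbook melt-viscosity law is usually written so that the two branches meet at Mc - viscosity proportional to M below Mc and to M^3.4 above, with the prefactor fixed by continuity. A quick Python sketch of the continuous form (the prefactor k is arbitrary, not Cosgrove's):

```python
Mc = 54.0     # critical (entanglement) molecular weight, as in the question

def eta(M, k=1.0):
    # linear below Mc; the prefactor Mc**-2.4 makes the two branches meet at Mc
    return k * M if M < Mc else k * Mc ** (-2.4) * M ** 3.4

print(eta(Mc), eta(2 * Mc) / eta(Mc))   # 54.0 and 2**3.4 (slope 3.4 on a log-log plot)
```

Separately, the Manipulate problem in the last line is most likely just that EE1 and EE2 are used without arguments; Plot[EE1[M1] + EE2[M1], ...] inside the Manipulate should evaluate.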
![enter image description here][1]
![`enter image description here`][2]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=formula.png&userId=1289160
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=plotofviscosity.png&userId=1289160
Arm Mo2018-02-21T18:38:12Z
Consistent foreign exchange options
http://community.wolfram.com/groups/-/m/t/1278848
Consistency in foreign exchange derivatives is discussed in the note below, where we look at the problem from the probability-measure perspective. We review option valuation from both sides of the FX contract and conclude that investors' preferences are subject to different probability measures when the FX rate inverts. Following this, we prove the validity of Siegel's paradox.
![enter image description here][1]
**Introduction**
------------
Foreign exchange options are the oldest options in the market, with a long history of trading. As such, they have been deeply researched and are well understood. Nevertheless, we return to this topic to look at product consistency, since this may still not be entirely clear. We review this consistency from both the domestic and the foreign perspective and show what adjustments are required to ensure the options are arbitrage-free when the investor's position changes.
**Foreign exchange options - 1st currency measure**
-----------------------------------------------
Foreign exchange options are financial contracts on an FX rate, i.e. the rate of exchange of currency 1 for currency 2. GBP/USD or EUR/USD are examples of such currency pairs. Options are essentially contracts on the future spot FX rate. We will demonstrate the exposition of this subject using the EUR/USD exchange rate. This is the rate that sets the exchange equation $X = \[Euro]1.
A reader familiar with the equity derivatives market will immediately spot the similarity between these two products. If the equity growth rate under the risk-neutral measure is the risk-free rate r, the equity pays a continuous dividend yield q and the price process is assumed log-normal, this is identical to the FX case when we interpret r as the USD risk-free rate and q as the equivalent EUR risk-free rate.
Looking at this from the USD perspective, we can express the EUR/USD FX process as:
$$dF = F (r-q) dt + σ F dW$$
This is a well-known log-normal process for the exchange rate where F represents the EUR/USD rate, \[Sigma] is the FX rate volatility and W represents a Wiener process under the USD-measure.
Pricing an option on this future rate is trivial: it is an option to buy \[Euro]1 for K USD at time T. Therefore, from the USD perspective, the option pays Max[0, F - K], where K is the strike exchange rate. Pricing this option in Mathematica is easy: we build the standard Ito process with initial value F0.
ipUSD = Refine[
ItoProcess[{(r - q)*F, \[Sigma]*F}, {F, F0}, t], {\[Sigma] > 0,
F[t] > 0, t > 0}];
{Mean[ipUSD[t]], Variance[ipUSD[t]]}
{E^((-q + r) t) F0, E^(-2 q t + 2 r t) (-1 + E^(t \[Sigma]^2)) F0^2}
The option premium from the USD-perspective is an expectation of the above Ito Process.
usdOpt = Exp[-r*t]*
Expectation[Max[F[t] - K, 0], F \[Distributed] ipUSD,
Assumptions ->
F0 > 0 && K > 0 && \[Sigma] > 0 && t > 0 && r > 0 && q > 0] //
Simplify
-(1/2) E^(-r t) (-2 E^((-q + r) t) F0 +
E^((-q + r) t)
F0 Erfc[(t (-2 q + 2 r + \[Sigma]^2) + 2 Log[F0] - 2 Log[K])/(
2 Sqrt[2] Sqrt[t] \[Sigma])] +
K Erfc[(t (2 q - 2 r + \[Sigma]^2) - 2 Log[F0] + 2 Log[K])/(
2 Sqrt[2] Sqrt[t] \[Sigma])])
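The closed form above is the Garman-Kohlhagen (two-rate Black-Scholes) call formula in disguise; as a sanity check, here is a small Python implementation of that formula evaluated at the numbers used in the numerical example later in the note:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gk_call(F0, K, t, sig, r, q):
    # Garman-Kohlhagen: e^{-rt} E[max(F_t - K, 0)] with drift (r - q) on F
    d1 = (math.log(F0 / K) + (r - q + 0.5 * sig ** 2) * t) / (sig * math.sqrt(t))
    d2 = d1 - sig * math.sqrt(t)
    return F0 * math.exp(-q * t) * norm_cdf(d1) - K * math.exp(-r * t) * norm_cdf(d2)

px = gk_call(1.35, 1.36, 0.5, 0.2, 0.01, 0.012)
print(px)   # ~ 0.070452, matching the usdNum value computed later in the note
```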
**Foreign exchange options - 2nd currency measure**
-----------------------------------------------
Now we touch upon a part that is less clear: what if the option buyer (seller) thinks from the EUR perspective? This is quite legitimate, as option buyers and sellers can have different preferences when entering into the option contract. How do we ensure that the option contract is consistent from each side's perspective?
Let's spell out the EUR investor's position by replicating the USD investor's side:
- The EUR riskless process is dP = P q dt, rather than dB = B r dt representing the USD process
- The exchange rate is now 1/F and not F
- If the SDE for the exchange rate from the USD point of view is the one above, then for the process 1/F it becomes, using Ito's lemma:
f = 1/F;
ip02 = Refine[
ItoProcess[{(r - q)*F, \[Sigma]*F, f}, {F, F0}, t], {\[Sigma] > 0,
F[t] > 0, t > 0, r > 0, q > 0}];
ipEUR = ItoProcess[ip02] // Simplify
ItoProcess[{{(-q + r) F[t], (q - r + \[Sigma]^2)/
F[t]}, {{\[Sigma] F[t]}, {-(\[Sigma]/F[t])}}, \[FormalX]1[
t]}, {{F, \[FormalX]1}, {F0, 1/F0}}, {t, 0}]
The inverted FX rate (USD/EUR) produces a different Ito process from the one observed on the USD side. This is clear from the definition below:
$$d(1/F) = (1/F) (q-r+σ^2) dt -σ (1/F) d W$$
Our objective is to find the probability measure under which the FX option priced in the first section from the USD perspective is identical to the one priced from the EUR perspective. Let's take all tradable components of the trade: (i) the USD risk-free discount factor B, (ii) the EUR/USD FX rate F and (iii) the EUR discount factor P. Based on this we define:
- USD-risk-free process converted to EUR: B/F
- Discounted value of the above : B/ (F P)
So, we need a multi-dimensional Ito process to model B/(F P)
ip03 = Refine[
ItoProcess[{{0, r B, q P}, {F \[Sigma], 0, 0},
B/(P F)}, {{F, B, P}, {F0, B0, P0}}, t], {\[Sigma] > 0, r > 0,
q > 0, t > 0}] // Simplify;
ipEUR2 = ItoProcess[ip03]
ItoProcess[{{0, r B[t], q P[t], (-q B[t] + r B[t] + \[Sigma]^2 B[t])/(
F[t] P[t])}, {{\[Sigma] F[t]}, {0}, {0}, {-((\[Sigma] B[t])/(
F[t] P[t]))}}, \[FormalX]1[t]}, {{F, B, P, \[FormalX]1}, {F0, B0,
P0, B0/(F0 P0)}}, {t, 0}]
From the above Ito formula, we extract two coefficients - the drift and volatility of B/(F P) - and create a new ItoProcess that reflects the changes when the FX inversion occurs.
Flatten[ipEUR2[[1]]];
dr = %[[4]] /. {F[t] -> 1, B[t] -> 1, P[t] -> 1}
vl = %%[[8]] /. {F[t] -> 1, B[t] -> 1, P[t] -> 1}
ItoProcess[{dr F, vl F}, {F, F0}, t];
ipEUR3 = ItoProcess[%]
-q + r + \[Sigma]^2
-\[Sigma]
ItoProcess[{{(-q + r + \[Sigma]^2) F[t]}, {{-\[Sigma] F[t]}},
F[t]}, {{F}, {F0}}, {t, 0}]
It is quite clear that the inverted FX rate process USD/EUR is indeed different from the one observed in the EUR/USD case.
In order to prove this consistency, we need to show that the FX call option on EUR/USD from the EUR point of view is identical to the one priced from the USD perspective. So, we need to prove that:
$$e^{-r t}\, E_{USD}\left[\max(F_t - K, 0)\right] = e^{-q t}\, F_0\, E_{EUR}\left[\frac{1}{F_t}\max(F_t - K, 0)\right]$$
This is because the USD payoff is first converted into EUR at the future rate (the 1/F_t factor) and the resulting EUR value is converted back into USD at today's rate F_0. All we need to price this option is the following expectation:
eurOpt = F0 Exp[-q t] Expectation[Max[F[t] - k, 0]/F[t],
F \[Distributed] ipEUR3,
Assumptions ->
F0 > 0 && k > 0 && \[Sigma] > 0 && t > 0 && r > 0 && q > 0] //
Simplify
1/2 E^(-(q + r) t) (E^(r t) F0 - E^(q t) k +
E^(r t) F0 Erf[(
t (-2 q + 2 r + \[Sigma]^2) + 2 Log[F0] - 2 Log[k])/(
2 Sqrt[2] Sqrt[t] \[Sigma])] +
E^(q t) k Erf[(t (2 q - 2 r + \[Sigma]^2) - 2 Log[F0] + 2 Log[k])/(
2 Sqrt[2] Sqrt[t] \[Sigma])])
To finalise this exercise, we compute both option premiums:
usdNum = usdOpt /. {F0 -> 1.35, t -> 0.5, K -> 1.36, \[Sigma] -> 0.2,
r -> 0.01, q -> 0.012}
eurNum = eurOpt /. {F0 -> 1.35, t -> 0.5, k -> 1.36, \[Sigma] -> 0.2,
r -> 0.01, q -> 0.012}
usdNum - eurNum // Chop
0.070452
0.070452
0
Both option premiums are the same. This proves they are ***consistent***.
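The same equality can be checked by brute force: simulate F under the EUR measure (drift r - q + \[Sigma]^2, per ipEUR3 above), average the converted payoff Max[F - K, 0]/F, and scale by F0 E^(-q t). A Monte-Carlo sketch in Python (sample count and seed are arbitrary):

```python
import math
import random

random.seed(7)
F0, K, t, sig, r, q = 1.35, 1.36, 0.5, 0.2, 0.01, 0.012
mu = r - q + sig ** 2   # drift of F under the EUR measure; the vol sign does not affect the law

n = 200_000
acc = 0.0
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    Ft = F0 * math.exp((mu - 0.5 * sig ** 2) * t + sig * math.sqrt(t) * z)
    acc += max(Ft - K, 0.0) / Ft      # payoff converted into EUR at time t

eur_px = F0 * math.exp(-q * t) * acc / n
print(eur_px)   # ~ 0.0705, agreeing with the USD-measure price up to Monte-Carlo noise
```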
**Siegel's paradox**
----------------
In the context of the above discussion, it is worth mentioning ***Siegel's paradox***, as it directly links the FX processes to probability measures. Let's start again with the definition of the FX evolution from the USD perspective. Under the USD probability measure (the USD risk-neutral process), we showed earlier that this was:
$$dF = F (r-q) dt + σ F dW$$
The expected future FX rate - the ***FX Forward*** at time t - is the expectation of $F_t$ under the USD measure:
usdExp = Expectation[F[t], F \[Distributed] ipUSD,
Assumptions -> F0 > 0 && \[Sigma] > 0 && t > 0 && r > 0 && q > 0] //
Simplify
E^((-q + r) t) F0
Let's now look at the EUR investor's point of view. (S)he can do a similar calculation, and under her/his risk-neutral measure the USD/EUR process follows:
$$d(1/F) = (1/F) (q-r) dt + (1/F) σ dW$$
So, the forward rate of 1/F (EUR per USD) is:
eurExp2 =
Expectation[1/F[t], F \[Distributed] ipEUR3,
Assumptions -> F0 > 0 && \[Sigma] > 0 && t > 0 && r > 0 && q > 0] //
Simplify
E^((q - r) t)/F0
This seems logical, since the inverted FX rate is simply 1/F. Here lies the problem: since 1/F is a convex function, by Jensen's inequality
$$(E[F])^{-1} < E[F^{-1}]$$
when both expectations are taken w.r.t. the same probability measure, i.e. calculated with the same distribution, and F is non-constant. This runs contrary to our assertion above, where we outlined the condition for consistency, i.e. different probability measures.
Siegel's paradox is simply the statement that spot-rate inversion does not extrapolate to the forward space: the forward FX rate in general ***cannot*** be an unbiased estimate of the future spot FX rate, at least not *simultaneously* for both sides of the contract, due to the *'convexity'* effect in the inverted FX function, i.e. the Jensen's inequality statement above. If the property holds for the USD investor, it cannot hold for the EUR investor and vice versa, since their forward expectations are subject to ***different probability measures***.
We prove this in a simple case: define the standard Ito process and then take the expectations of F and 1/F.
ip05 = Refine[
ItoProcess[{(r - q)*F, \[Sigma]*F}, {F, F0}, t], {\[Sigma] > 0,
F[t] > 0, t > 0, r > 0, q > 0}];
usdFwrd =
Expectation[F[t], F \[Distributed] ip05,
Assumptions -> F0 > 0 && \[Sigma] > 0 && t > 0 && r > 0 && q > 0] //
Simplify
eurFwrd =
Expectation[1/F[t], F \[Distributed] ip05,
Assumptions -> F0 > 0 && \[Sigma] > 0 && t > 0 && r > 0 && q > 0] //
Simplify
E^((-q + r) t) F0
E^(t (q - r + \[Sigma]^2))/F0
We see that the FX forwards are different, as they are taken under different probability measures (with different mean and variance). The forward of 1/F also depends on the volatility, whereas that of F does not. Let's show the validity of Jensen's inequality for $1/F_{USD}$ and $F_{EUR}$:
fxMeanDiff = 1/usdFwrd - eurFwrd // Simplify
-((E^((q - r) t) (-1 + E^(t \[Sigma]^2)))/F0)
Since the above quantity is negative, this shows that indeed
$$(E[F_t])^{-1} < E[F_t^{-1}]$$
Plot[fxMeanDiff /. {F0 -> 1.35, r -> 0.01, q -> 0.0045,
t -> 0.5}, {\[Sigma], 0.1, 0.3},
PlotLabel ->
Style["Jensen's inequality and FX forward rates", {15, Bold}, Blue],
PlotStyle -> {Thick, Red}]
![enter image description here][2]
The Jensen's inequality effect increases with volatility. On the other hand, the only instance when both forwards are consistent w.r.t. the same probability measure occurs when \[Sigma] = 0. Since this is never the case, we conclude that Siegel's paradox holds.
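The inequality can also be seen directly by simulation under a single measure, drawing F from one lognormal law and comparing 1/E[F] against E[1/F]; a Python sketch using the parameters of the plot above:

```python
import math
import random

random.seed(1)
F0, r, q, t, sig = 1.35, 0.01, 0.0045, 0.5, 0.2   # parameters from the plot above

n = 200_000
m1 = m2 = 0.0
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    F = F0 * math.exp((r - q - 0.5 * sig ** 2) * t + sig * math.sqrt(t) * z)
    m1 += F
    m2 += 1.0 / F
m1 /= n
m2 /= n

print(1.0 / m1 < m2)                                 # True: (E[F])^-1 < E[F^-1]
print(m2, math.exp((q - r + sig ** 2) * t) / F0)     # MC estimate vs the closed form above
```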
**Conclusion**
----------
The objective of this note was to present FX derivatives - forwards and options - from different perspectives. Whilst the FX spot market is reasonably simple, derivatives are more complicated, especially when we start looking at them from each contractual perspective. A change of probability measure, and hence different probabilities, is required to ensure consistency. The existence of Siegel's paradox proves this.
The change of probability measure is handled implicitly by Mathematica once the FX process is correctly defined. The same applies to the proof of Siegel's paradox.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Jensinequality.png&userId=387433
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Jensinequality.png&userId=387433
Igor Hlivka2018-02-04T20:12:16Z
Factor out known multipliers in algebraic expressions?
http://community.wolfram.com/groups/-/m/t/1285533
Hi there,
I have this function a[t]: a long function of other time-dependent variables and some constant parameters (Ixx, Iyy, Izz), which I really want to write explicitly as a sum over the possible combinations of those constant parameters.
I've tried Simplify and then Apart, but Mathematica doesn't group similar coefficients. I wonder if I could get this result by applying some kind of mapping (...)?
Well, below is the function a[t]:
a[t_] := -((Cos[ϕ[t]]*
Izz[t]*(Ty[t] -
Cos[ϕ[t]]*Iyy[t]*Derivative[1][ϕ][t]*
Derivative[1][ψ][t] +
Cos[ϕ[t]]*(-Ixx[t] + Izz[t])*
(Sin[ψ[t]]*Derivative[1][θ][t] +
Derivative[1][ϕ][t])*((-Cos[ψ[t]])*
Tan[ϕ[t]]*Derivative[1][θ][t] +
Derivative[1][ψ][t]) +
Iyy[t]*Derivative[1][θ][
t]*(Cos[ψ[t]]*Sin[ϕ[t]]*
Derivative[1][ϕ][t] +
Cos[ϕ[t]]*Sin[ψ[t]]*
Derivative[1][ψ][t])) -
Iyy[t]*Sin[ϕ[t]]*(Tz[t] +
Izz[t]*Sin[ϕ[t]]*Derivative[1][ϕ][t]*
Derivative[1][ψ][
t] + (Ixx[t] -
Iyy[t])*(Sin[ψ[t]]*Derivative[1][θ][t] +
Derivative[1][ϕ][t])*
(Cos[ϕ[t]]*Cos[ψ[t]]*
Derivative[1][θ][t] +
Sin[ϕ[t]]*Derivative[1][ψ][t]) +
Izz[t]*Derivative[1][θ][
t]*(Cos[ϕ[t]]*Cos[ψ[t]]*
Derivative[1][ϕ][t] -
Sin[ϕ[t]]*Sin[ψ[t]]*
Derivative[1][ψ][t])))/((-Cos[ϕ[t]]^2)*
Cos[ψ[t]]*Iyy[t]*Izz[t] -
Cos[ψ[t]]*Iyy[t]*Izz[t]*Sin[ϕ[t]]^2));
Summarizing, what I want is a way to say: **rewrite a[t], factorizing it into a sum over these terms (Ixx, Iyy, Izz) in any combination.**
(Put another way, could I ask the same as *rewriting a[t] factorized into terms that don't vary in time*? Would this be possible/easier?)
Can anybody suggest a good thing to try?
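In case an analogy helps make the goal concrete: what is being asked for is "collect by the constant symbols". A toy SymPy (Python) sketch with stand-in symbols - Ixx, Iyy for the constant inertia parameters, f, g, h for time-varying factors:

```python
from sympy import symbols, collect, expand

# Stand-in symbols: Ixx, Iyy play the constant inertia terms; f, g, h play time-varying factors
Ixx, Iyy, f, g, h = symbols('Ixx Iyy f g h')

expr = Ixx * f + Ixx * g + Iyy * h + Ixx * Iyy * f * g
grouped = collect(expand(expr), [Ixx, Iyy])
print(grouped)   # -> Ixx*(f + g + Iyy*f*g) + Iyy*h (up to term ordering)
```

The analogous Wolfram Language call would presumably be Collect[a[t], {Ixx[t], Iyy[t], Izz[t]}, Simplify], which groups terms by the inertia symbols and post-processes each coefficient.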
Thanks.
André Barbosa2018-02-15T10:31:20Z
[✓] Find values of a function's variable that satisfy a certain condition?
http://community.wolfram.com/groups/-/m/t/1289212
Given a function
F[x,y]=x+y
I want to find all values of **y** for which
Abs[F[x,y]]<= 5
when
0 <= x <= 1
The answer should be
0 <= y <= 4
Is there built-in functionality that can handle a problem of this type? I'm looking for a shortcut to help avoid implementing a full-blown algorithm on my own.
Piotr Pawlowicz2018-02-21T01:14:36Z
[✓] Control the precision of the result with FindMinimum?
http://community.wolfram.com/groups/-/m/t/1287379
Dear all,
I have an issue with a constrained minimization and FindMinimum.
Consider two cylinders of length L and radius R: the center of the first is located at the origin and its axis is parallel to the z axis; the center of the second is located at {xT, 0, xz}, and its axis is oriented according to the polar angles \[CurlyTheta] and \[Phi].
Given two vectors {r[1],..,r[3]} and {s[1],..,s[3]}, I want to find the minimum of the square distance between them
f(r,s) = (r[1] - s[1])^2 + (r[2] - s[2])^2 + (r[3] - s[3])^2,
with the constraint that the vector {r[1],r[2],r[3]} lies within the first cylinder, and {s[1],..,s[3]} lies within the second cylinder:
In[1]:= R = 1/2;
L = 4;
xT = 78/100;
xz = -4/10;
\[CurlyTheta] = 8/10;
\[Phi] = 3;
In[7]:= FindMinimum[{(r[1] - s[1])^2 + (r[2] - s[2])^2 + (r[3] -
s[3])^2, r[1]^2 + r[2]^2 <= R^2, L + 2 r[3] >= 0,
2 r[3] <=
L, (Sin[\[Phi]] (xT - s[1]) +
Cos[\[Phi]] s[
2])^2 + (Cos[\[CurlyTheta]] (Cos[\[Phi]] (-xT + s[1]) +
Sin[\[Phi]] s[2]) + Sin[\[CurlyTheta]] (xz - s[3]))^2 <= R^2,
2 Cos[\[Phi]] Sin[\[CurlyTheta]] (xT - s[1]) +
2 Cos[\[CurlyTheta]] (xz - s[3]) <=
L + 2 Sin[\[CurlyTheta]] Sin[\[Phi]] s[2],
2 Sin[\[CurlyTheta]] Sin[\[Phi]] s[2] <=
L + 2 Cos[\[Phi]] Sin[\[CurlyTheta]] (xT - s[1]) +
2 Cos[\[CurlyTheta]] (xz - s[3])}, {{r[1], 0}, {s[1], xT}, {r[2],
0}, {s[2], 0}, {r[3], 0}, {s[3], xz}}]
Out[7]= {1.7518957310232823*10^-15, {r[1] -> 0.24200719173126448,
s[1] -> 0.24200723086912757, r[2] -> 0.0173827341902361,
s[2] -> 0.01738274867534208, r[3] -> -0.0952795259719789,
s[3] -> -0.0952795227618218}}
In this example the two cylinders overlap, the minimum is r = s, and the value of the objective function at the minimum must be zero. However, FindMinimum returns a small but nonzero value ~ 1e-15.
Is there a way to make sure that, if the minimum is r = s, then the minimum of the objective function is exactly zero, i.e. `0.`?
Thank you.
Joao Porto2018-02-18T14:37:20Z
Little pieces of code for graph and networks theory
http://community.wolfram.com/groups/-/m/t/98022
I thought it would be nice to have a discussion of little pieces of Mathematica code that can help when working with graphs and networks. Here for example, one to undirect graphs when needed:
ToUndirectedGraph[dirGraph_] :=
 Graph[VertexList@dirGraph, #[[1]] \[UndirectedEdge] #[[2]] & /@
   Union[Sort[{#[[1]], #[[2]]}] & /@ EdgeList@dirGraph]]
or what about a graph planarity check function (i.e. adding any edge would destroy its planarity):
MaxPlanarQ[graph_] :=
 PlanarGraphQ[graph] &&
  With[{pos =
     Select[Position[Normal@AdjacencyMatrix@graph, 0], #[[1]] < #[[2]] &],
    vertex = VertexList[graph],
    edges = EdgeList[graph]},
   val = True;
   Do[If[PlanarGraphQ[
      Graph[Append[edges,
        vertex[[i[[1]]]] \[UndirectedEdge] vertex[[i[[2]]]]]]],
     val = False; Break[]], {i, pos}]; Return[val]]
And one to produce random permutations of a given graph, with the indicated number n of samples:
PermuteGraph[g_, n_] :=
 Table[AdjacencyMatrix@
   Graph[RandomSample[VertexList@g], EdgeList@g], {n}]
What about code for counting the sizes of graph automorphism groups? I have some, but it uses Saucy, an open-source program that has been tested to be (surprisingly) very fast in practice, despite the NP question underlying this task (it is unknown whether graph automorphism has a polynomial-time algorithm or is NP-complete). There is a function in Combinatorica, but you can read about its drawbacks on the [graph automorphism](http://mathworld.wolfram.com/GraphAutomorphism.html) page at MathWorld.
Hector Zenil2013-08-15T18:51:44Z
[✓] Create a network of 300 nodes that can be clustered into 100-node chunks
http://community.wolfram.com/groups/-/m/t/1288902
I have a rather basic problem where I am asked to create a network of 300 nodes that can be clustered into 100-node chunks. Essentially, the nodes have many connections within a chunk and comparatively few between different 100-node chunks.
For example, if I make the connections within a chunk occur with 100% probability and those between chunks with 0%, I get the following:
edges = {};
For[l = 1, l <= 3 , l++, h = 100*l;
For[n = 100 (l - 1) + 1, n < 100*l + 1, n++,
For[m = n + 1, m <= 300, m++,
If[m <= h,
If[RandomReal[{0, 1}] > 0.0, AppendTo[edges, n <-> m]],
If[RandomReal[{0, 1}] > 0.99, AppendTo[edges, n <-> m]]]]]]
![enter image description here][1]
which is expected and gives a corresponding MatrixPlot of the Adjacency Matrix as:
![enter image description here][2]
Which is also all well and good. Now, something very strange seems to happen when I try to make the connections between the 100-node chunks nonzero. For example, if I make the probability 1%, the following is obtained:
![enter image description here][3]
This looks OK, but it becomes very evident that there is a problem when one looks at the corresponding MatrixPlot of the adjacency matrix:
![enter image description here][4]
Evidently, there is a very big problem somewhere. I did check this in a bit more detail, and it seems that everything is working properly with MatrixPlot itself. It seems that there is a problem somewhere in my code - but where? The code, in my opinion, is rather simple and straightforward. I am really confused about what the issue is and could use assistance; it doesn't seem possible that Mathematica would have a problem with something so basic.
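A likely explanation (a guess, since only the images show the symptom): AdjacencyMatrix orders rows and columns by VertexList, and for a graph built purely from an edge list that is the order of first appearance. Once cross-chunk edges exist, a vertex such as 143 can appear before 101, so the rows are no longer in numeric order even though the graph itself is correct, and MatrixPlot looks scrambled. The effect is language independent; a small stdlib-Python sketch:

```python
# Two "chunks" {1,2,3} and {4,...,7} plus one cross edge that is met early:
edges = [(1, 2), (2, 3), (1, 7), (4, 5), (5, 6), (6, 7)]

seen = []                      # vertex order of first appearance (VertexList-style)
for u, v in edges:
    for w in (u, v):
        if w not in seen:
            seen.append(w)

def adjacency(vertices):
    idx = {v: i for i, v in enumerate(vertices)}
    n = len(vertices)
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[idx[u]][idx[v]] = A[idx[v]][idx[u]] = 1
    return A

print(seen)                                           # [1, 2, 3, 7, 4, 5, 6] -- not sorted!
print(adjacency(seen) == adjacency(sorted(seen)))     # False: same graph, scrambled matrix
```

If this is the cause, constructing the graph with an explicit sorted vertex list - e.g. Graph[Range[300], edges] - before calling AdjacencyMatrix should restore the block structure.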
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture1.PNG&userId=1288488
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture2.PNG&userId=1288488
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture3.PNG&userId=1288488
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture4.PNG&userId=1288488
Boris Barron2018-02-20T16:41:48Z
Solve PDEs in cylindrical coordinates? (Infinity error due to 1/r)
http://community.wolfram.com/groups/-/m/t/1288801
I have been trying to solve the following equations (Eq. 1) in cylindrical coordinates, and I want my solution domain to be r >= 0. Because of the 1/r and 1/r^2 terms in Eq. 1, I ran into the Infinity error problem when using NDSolve. I then rewrote my equations by multiplying them through by r or r^2 to remove the 1/r and 1/r^2 terms and got Eq. 2, but I still hit the Infinity problem with NDSolve. So is it possible to get solutions on the domain r >= 0? I believe this solution is physically real, but I do not know how to get it using NDSolve. Any help would be great.
![enter image description here][1]
![enter image description here][2]
And my code for Eq. 2 is:
TMax = 1.615; S = 1/Pi^2/2; rMin = 0; rMax = 2;
{usol, hsol} =
NDSolveValue[{D[u[t, r], t]*r^2 == -D[u[t, r], r]*u[t, r]*r^2 +
3*1/h[t, r]^4*D[h[t, r], r]*r^2 +
3*S*(D[h[t, r], r, r, r]*r^2 - D[h[t, r], r] +
r*D[h[t, r], r, r]) +
4/h[t, r]*(h[t, r]*r^2*D[u[t, r], r, r] +
D[u[t, r], r]*D[h[t, r], r]*r^2 + h[t, r]*r*D[u[t, r], r] -
h[t, r]*u[t, r] - u[t, r]*r/2*D[h[t, r], r]),
D[h[t, r], t]*r == -h[t, r]*u[t, r] - u[t, r]*r*D[h[t, r], r] -
h[t, r]*r*D[u[t, r], r], u[0, r] == 0,
h[0, r] == 1 - 0.2*Cos[Pi*r], h[t, rMin] == h[t, rMax]}, {u,
h}, {t, 0, TMax}, {r, rMin, rMax}, PrecisionGoal -> Infinity,
AccuracyGoal -> 10, MaxSteps -> 10^6,
Method -> {"MethodOfLines",
"SpatialDiscretization" -> {"TensorProductGrid",
"MaxPoints" -> 5000, "MinPoints" -> 5000,
"DifferenceOrder" -> 4}}]
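A general remark on the 1/r problem, independent of this particular system: rather than multiplying through by r, the usual trick is to use the symmetry of the axis, imposing u_r(0) = 0 and replacing u_rr + (1/r) u_r by 2 u_rr at r = 0 (L'Hopital), so that no division by zero ever occurs on the grid. A stdlib-Python method-of-lines sketch on the axisymmetric heat equation (a stand-in model, not the poster's Eq. 2):

```python
import math

def j0(x):
    # Bessel J0 via its power series (adequate for |x| < ~10)
    q, term, total = x * x / 4.0, 1.0, 1.0
    for k in range(1, 30):
        term *= -q / (k * k)
        total += term
    return total

ALPHA = 3.8317059702   # first zero of J1, so J0(ALPHA r) has zero slope at r = 0 and r = 1

N = 50
dr = 1.0 / N
r = [i * dr for i in range(N + 1)]
u = [j0(ALPHA * ri) for ri in r]      # initial profile; exact decay rate is exp(-ALPHA^2 t)

dt, steps = 2.5e-5, 4000              # explicit Euler to t = 0.1, dt below the stability limit
for _ in range(steps):
    du = [0.0] * (N + 1)
    du[0] = 4.0 * (u[1] - u[0]) / dr**2       # axis: u_t = 2 u_rr by L'Hopital, ghost u[-1] = u[1]
    for i in range(1, N):
        du[i] = (u[i + 1] - 2 * u[i] + u[i - 1]) / dr**2 \
                + (u[i + 1] - u[i - 1]) / (2 * dr * r[i])
    du[N] = 2.0 * (u[N - 1] - u[N]) / dr**2   # outer Neumann wall: ghost u[N+1] = u[N-1]
    u = [ui + dt * dui for ui, dui in zip(u, du)]

print(u[0])   # ~ exp(-ALPHA**2 * 0.1) = 0.2303
```

In NDSolve the same idea can be expressed by keeping the 1/r terms but starting the spatial domain at r = 0 with the symmetry boundary condition Derivative[0, 1][u][t, 0] == 0 (or by shifting rMin to a small positive value); which variant works best here would need to be tested on the actual equations.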
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=20180220110517.jpg&userId=1266560
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=20180220104746.jpg&userId=1266560
Yixin Zhang2018-02-20T11:03:08Z
Interest rate derivatives in a multi-curve framework
http://community.wolfram.com/groups/-/m/t/1273892
We discuss the changes to interest rate processes when we move from a mono-curve setting to a multi-curve framework. This is characterised by the presence of several curves - a dedicated discount curve and a set of estimation curves, one for each specific Libor rate. The first is generally assumed to be the OIS curve, whilst the rest are 'tenor' curves for a given Libor tenor.
The changes in the forward Libor estimation are due to the 'loss' of the martingale property when the mono-curve world is replaced by the multi-curve framework. We show how the multiplicative adjustment works in this new setting and how interest rate derivatives are affected. Various modelling assumptions are used to show derivatives pricing in this new setting.
![enter image description here][23]
#Introduction#
We review the setting of interest rate derivatives in the post-crisis era, characterised by a multi-curve environment where dedicated yield curves are defined for forward rate estimation and cashflow discounting. The multi-curve framework is a direct consequence of the financial crisis of 2007-2008, when the so-called 'Libor market' - represented by a single yield curve - stopped being seen as risk-free, and new curves started to emerge to better reflect the counterparty risk in the financial markets.
The current interest rate framework exists in its simplest form in the dual-curve setting: (i) a discounting curve, usually built with OIS instruments, and (ii) an estimation curve, generally used to build the 'main' estimation curve in a given currency. This is the 3-month Libor curve for USD or the 6-month Euribor curve for EUR.
The existence of a dual-curve environment does change the interest rate mathematics. Martingales defined in the single-curve framework do not hold, and the process has to be adapted to account for the curves' duality. We demonstrate how this process works and show how interest rate derivatives - both linear and optional - behave when we move from the single to the dual framework.
#Interest rate derivatives in a single curve framework#
We first look at the single-curve framework of the pre-2007 era. When only one yield curve exists, derivatives pricing is simple and tidy. The forward rate defined on the single curve using two deposits coincides with the forward rate agreement (FRA) rate:
Subscript[F, S] = 1/(Subscript[T, 2] - Subscript[T, 1]) * P[0, Subscript[T, 1]]/P[0, Subscript[T, 2]] - 1;
where Subscript[F, S] is the forward rate in a single-curve setting, Subscript[T, 1] and Subscript[T, 2] are two maturity dates with Subscript[T, 2] > Subscript[T, 1], and P[0, Subscript[T, 1]], P[0, Subscript[T, 2]] are two discount factors at time 0 with maturities Subscript[T, 1] and Subscript[T, 2].
The FRA rate is then defined as Subscript[F, FRA] = K such that the payoff L[Subscript[T, 1], Subscript[T, 2]] - K of the contract has value 0 at time 0, with L[Subscript[T, 1], Subscript[T, 2]] defined as the forward term rate.
## Interest rate swaps in single-curve setting ##
An interest rate swap, together with the FRA, is one of the simplest linear interest rate derivatives. It usually involves the exchange of a fixed rate for a series of floating forward rates up to the final maturity:
fixedLeg = S Sum[\[Delta][i] P[0, i], {i, 1, m}]
floatLeg = Sum[\[Delta][i] P[0, i] Subscript[L, S][i], {i, 1, n}]
![enter image description here][2]
where $L_S$ is the forward Libor rate in a single curve framework. This is identical to the FRA rate defined above:
Subscript[L, S][i] = 1/\[Delta][i]*(P[i - 1]/P[i]-1)
floatLeg =
Sum[\[Delta][i]*
P[0, i]*(1/\[Delta][i])*(P[0, i - 1]/P[0, i] - 1), {i, 1, n}]
> P[0, 0] - P[0, n]
The swap rate *S* is then simply the solution of the equation:
Solve[fixedLeg == floatLeg, S]
![enter image description here][3]
This shows that in a single-curve framework the swap rate is simply the difference of two discount factors normalised by the $annuity= \sum_{i = 1}^n\delta[i]\ P[0, i]$. The same curve produces the discount factors used for (i) discounting and (ii) forward Libor estimation.
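The telescoping of the float leg can also be verified numerically outside Mathematica (a Python sketch with an invented discount curve):

```python
# Single-curve swap: the float leg telescopes to P(0,T_0) - P(0,T_n),
# so the par rate is S = (P[0] - P[-1]) / annuity.
# The discount curve below is invented for illustration.
P = [1.00, 0.99, 0.975, 0.955, 0.93]   # P(0, T_i) for T_i = 0, 1, ..., 4 years
delta = [1.0] * 4                       # year fractions delta[i]

annuity = sum(d * Pi for d, Pi in zip(delta, P[1:]))
float_leg = sum(d * P[i + 1] * (1 / d) * (P[i] / P[i + 1] - 1)
                for i, d in enumerate(delta))
S = float_leg / annuity                 # equals (P[0] - P[-1]) / annuity
```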
#Multi-curve framework for Interest rate derivatives#
When we move to a multi-curve setting, we assume:
- a separate discounting curve - generally the OIS curve
- a separate estimation curve for the 'main' index - 3M or 6M
- separate estimation curves for 'minor' indices - say 1M, 12M or 6M (in a 3M setting)
When the discounting curve is set to the OIS curve, we define the OIS forward rate with tenor h similarly to the forward rate in the mono-currency setting:
![enter image description here][4]
for i = 1...n, where $\delta[i]$ is the year fraction for the interval $[T_{i-1}, T_i]$ and $P_{OIS}[t,T_i]$ is the discount factor from the OIS curve at time t maturing at time $T_i$.
In the multi-curve framework, the single-curve Libor definition no longer holds: L[Subscript[T, 1], Subscript[T, 2]] != Subscript[F, S][t; Subscript[T, 1], Subscript[T, 2]] != Subscript[F, OIS][t, Subscript[T, 1], Subscript[T, 2]], since the discount factors in the definition of Libor when only one curve is used are not the same as in the dual-curve case. Libor in the dual-curve setting is calculated from the estimation curve, with its own unique set of discount factors.
The expectation of forward Libor in the dual-curve setting can be expressed as

$E_t^{Q_{OIS}^{T_2}}\left[F_D[T_1; T_1, T_2]\right] = E_t^{Q_{OIS}^{T_2}}\left[L[T_1, T_2] \mid \mathcal{F}_t\right]$

The valuation of a forward rate agreement on the Libor forward rate is therefore defined as:

$FRA[t, T_1, T_2] = P_{OIS}[t, T_2]\,\delta[T_1, T_2]\,E_t^{Q_{OIS}}\left[L[t, T_1, T_2] - K\right]$
Current market practice takes a shortcut and simply values the FRA as FRA[t, Subscript[T, 1], Subscript[T, 2]] = Subscript[P, OIS][t, Subscript[T, 2]] \[Delta][Subscript[T, 1], Subscript[T, 2]] (L[t, Subscript[T, 1], Subscript[T, 2]] - K), where the discount factor Subscript[P, OIS][t, Subscript[T, 2]] comes from the OIS curve and the forward Libor L[t, Subscript[T, 1], Subscript[T, 2]] is taken from the estimation curve. This is clearly inconsistent, since the forward Libor is NOT a martingale under the OIS forward measure.
##Libor adjustment in multi-curve framework##
To restore the no-arbitrage relationship, the forward Libor rate has to be adjusted. We refer to this adjustment as the forward basis: it restores the equilibrium between Subscript[F, OIS] and Subscript[F, E]. Assuming a multiplicative basis Aj, we get:
Fd = (1/\[Delta]) (Pd[T1]/Pd[T2] - 1);
Fe = (1/\[Gamma]) (Pe[T1]/Pe[T2] - 1);
Solve[Fe \[Gamma] == Fd \[Delta] Aj, Aj]
![enter image description here][5]
where Subscript[F, d] represents the forward rate from the OIS curve, Subscript[F, e] is the forward Libor from the estimation curve, Subscript[P, d] is a discount factor from the OIS curve and Subscript[P, e] is a discount factor from the estimation curve.
The forward basis is therefore a ratio of discount factors from both curves and can be recovered ex post once both curves have been calibrated to the market data.
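A small numerical sketch of the multiplicative basis (Python; the four discount factors are invented for illustration):

```python
# Multiplicative forward basis Aj solving Fe*gamma == Fd*delta*Aj.
# All curve inputs below are illustrative placeholders.
Pd1, Pd2 = 0.995, 0.990    # OIS curve discount factors at T1, T2
Pe1, Pe2 = 0.993, 0.986    # estimation curve discount factors at T1, T2
delta = gamma = 0.5        # year fractions

Fd = (Pd1 / Pd2 - 1) / delta
Fe = (Pe1 / Pe2 - 1) / gamma
Aj = (Fe * gamma) / (Fd * delta)   # basis recovered ex post from both curves
```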
From a modelling perspective, however, it is desirable to express the forward rate in terms of a single curve. We introduce a new discount factor adjustment Subscript[B, j] and re-calculate the forward adjustment spread:
Fd = (1/\[Delta]) (PdT1/PdT2 - 1);
Fe = (1/\[Gamma]) (PeT1/PeT2 - 1);
PeT1 = BjT1 PdT1;
PeT2 = BjT2 PdT2;
Solve[Fd == Fe Aj, Aj] // Simplify
![enter image description here][6]
and get the forward Libor $F_e$
Fe
![enter image description here][7]
This shows that the Libor is a function of (i) the OIS curve and (ii) the discount factor adjuster. To proceed, we assume that the forward rate follows LogNormal martingale dynamics under the forward measure Subscript[Q, e]:
Fe = GeometricBrownianMotionProcess[0, \[Sigma], x0];
Fe[t]
![enter image description here][8]
We apply a similar process to the forward adjuster, defined under the forward measure $Q_d$:
Bj = GeometricBrownianMotionProcess[0, \[Eta], y0];
Bj[t]
![enter image description here][9]
To change the measure from Subscript[Q, e] to Subscript[Q, d], we use the change-of-measure technique, which says:
Subscript[E, OIS][Fe] = Subscript[E, e][Fd Bj]. For this, we need the joint expectation of the OIS forward and the forward adjuster. We apply the **Binormal Copula** with LogNormal marginals:
cDist = CopulaDistribution[{"Binormal", ρ}, {Fe[t], Bj[t]}];
cdrift = Expectation[x*y, {x, y} \[Distributed] cDist,
Assumptions ->
t > 0 && η > 0 && σ > 0 && -1 <= ρ <= 1]
![enter image description here][10]
The joint expectation of the forward OIS rate and the forward adjuster, on a relative basis, provides the adjustment for the process where the change of measure occurs. Since the martingale process for the forward Libor has to be driftless, we adjust the forward rate by the negative of this quantity:
Aj = cdrift/(x0 y0) /. t -> -t
![enter image description here][11]
Returning to our original Libor adjustment formula, we observe:
![enter image description here][12]
FeAdj = Expectation[x, x \[Distributed] Fe[t],
    Assumptions -> t > 0 && σ > 0 && x0 > 0]*Aj /. x0 -> L[0]
![enter image description here][13]
This is the forward Libor rate under the OIS forward discounting measure. The adjustment is a function of (i) time, (ii) the volatility of Libor and (iii) the volatility of the OIS rate. A reader familiar with the exposition above can recognise here the parallel with the process drift adjustment in the foreign currency market known as the **'quanto adjustment'**. The similarity is obvious - we work with two curves and two sources of randomness, and we switch the measure similarly to what we do in foreign currency markets.
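The identity behind this adjustment - for two correlated driftless lognormal factors, E[X_t Y_t] = x0 y0 e^(ρσηt) - can be sanity-checked by simulation (a Python sketch with illustrative parameters):

```python
import math, random

# For two driftless geometric Brownian motions with vols sigma, eta and
# correlation rho, E[X_t * Y_t] = x0 * y0 * exp(rho*sigma*eta*t) -- the
# quantity behind the quanto-style drift adjustment.
# Parameter values are illustrative only.
random.seed(7)
x0, y0, sigma, eta, rho, t = 1.0, 1.0, 0.2, 0.2, 0.75, 1.0

n, acc = 200_000, 0.0
for _ in range(n):
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho**2) * random.gauss(0, 1)
    X = x0 * math.exp(-0.5 * sigma**2 * t + sigma * math.sqrt(t) * z1)
    Y = y0 * math.exp(-0.5 * eta**2 * t + eta * math.sqrt(t) * z2)
    acc += X * Y

mc = acc / n
exact = x0 * y0 * math.exp(rho * sigma * eta * t)
```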
##Forward rate agreement - FRA##
This is a simple contract that pays the difference between the forward Libor and a fixed rate:
![enter image description here][14]
where \[DoubleStruckCapitalC] is the nominal and $\delta[\tau]$ is the year fraction, under the day count convention, between the Libor dates $T_1$ and $T_2$.
How much does the adjustment affect the FRA valuation? We look first at the **market volatilities** - (i) for the Libor rate and (ii) for the adjuster.
Assume: $\delta=0.25$, C = 1 mil, $P_{OIS}[t,T_2] = 0.98$, t = 1, L = 0.0125, K = 0.0125, $\rho = 0.75$
fra = C*Pois*δ*(L*E^(-t*ρ*σ*η) - K) /. {C ->
1000000, L -> 0.0125, K -> 0.0125, t -> 1, δ -> 0.25,
Pois -> 0.98, ρ -> 0.75}
Plot3D[fra, {σ, 0.1, 0.3}, {η, 0.1, 0.3},
AxesLabel -> Automatic, PlotTheme -> "Marketing",
PlotLabel ->
Style["FRA valuation impact by market volatilities", Blue, 15]]
![enter image description here][15]
![enter image description here][16]
As we can see from the graph above, higher volatilities push the value of the forward rate lower, making the value of a long forward contract more negative. The opposite applies to a short FRA contract.
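The point value underneath the surface can be reproduced in a few lines (a Python sketch; σ = η = 0.2 picks one point of the plotted range):

```python
import math

# Adjusted FRA value: C * P_OIS * delta * (L*exp(-t*rho*sigma*eta) - K).
# Inputs follow the example in the text; sigma = eta = 0.2 is one point
# of the plotted volatility surface.
C, P_ois, delta = 1_000_000, 0.98, 0.25
L, K, t, rho = 0.0125, 0.0125, 1.0, 0.75
sigma = eta = 0.2

fra = C * P_ois * delta * (L * math.exp(-t * rho * sigma * eta) - K)
# With L == K the unadjusted FRA is worth zero; the adjustment alone
# drives the value slightly negative (about -90 currency units here).
```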
We can now look at correlation impact:
fra2 = C*Pois*δ*(L*E^(-t*ρ*σ*η) - K) /. {C ->
1000000, L -> 0.0125, K -> 0.0125, t -> 1, δ -> 0.25,
Pois -> 0.98, σ -> 0.2, η -> 0.2};
Plot[fra2, {ρ, -0.75, 0.75}, PlotStyle -> Red,
PlotLabel -> Style["Correlation impact on FRA valuation", Blue, 15]]
![enter image description here][17]
Negative correlation increases the long FRA value, since the adjusted Libor will be higher. Positive correlation drives the valuation into negative territory.
##Interest rate swap - IRS##
The payer IRS rate is determined from the same equation: fixed leg = float leg
fixedLeg = K Sum[δ[i] Subscript[P, OIS][i], {i, 1, m}];
floatLeg =
Sum[δ[i] Subscript[P, OIS][i] L[
i] Exp[-Subscript[t, i] ρ σ η], {i, 1, n}];
swapR = Solve[fixedLeg == floatLeg, K] // Simplify
![enter image description here][18]
This is the equilibrium swap rate that makes the present value at inception zero. The new formula differs from the swap rate formula in the single-curve framework in two respects:
- The discount factor P comes from a special discounting curve - the OIS
  curve - and becomes Subscript[P, OIS][t, Subscript[T, i]]
- The numerator does not reduce to a simple difference of two discount
  factors, since the adjusted Libor rate L[t, Subscript[T, 1], Subscript[T, 2]] E^(-t \[Rho] \[Eta] \[Sigma]) is now estimated on a
  different curve, the so-called estimation curve
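A numerical sketch of this adjusted par swap rate (Python; flat, invented inputs):

```python
import math

# Multi-curve par swap rate:
# K = sum(delta*P_OIS_i*L_i*exp(-t_i*rho*sigma*eta)) / sum(delta*P_OIS_i).
# All inputs are flat, illustrative values.
delta, rho, sigma, eta = 0.25, 0.75, 0.2, 0.2
P_ois = [0.995, 0.99, 0.985, 0.98]   # OIS discount factors
L = [0.0125] * 4                      # estimation-curve forward Libors
t = [0.25, 0.5, 0.75, 1.0]            # fixing times

num = sum(delta * P * Li * math.exp(-ti * rho * sigma * eta)
          for P, Li, ti in zip(P_ois, L, t))
den = sum(delta * P for P in P_ois)
K = num / den   # slightly below the flat 1.25% Libor, due to the adjustment
```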
##Caps and Floors##
Consider first a caplet paying out at time Subscript[T, k]. A caplet is essentially a call option on the forward Libor rate L[t; Subscript[T, k-1], Subscript[T, k]]: \[Delta][\[Tau]] (L[t; Subscript[T, k-1], Subscript[T, k]] - K)^+, where \[Delta][\[Tau]] is the year fraction between Subscript[T, k-1] and Subscript[T, k] and K is a fixed strike rate. The pricing formula is simply a discounted conditional expectation of the positive part of the payoff under certain distributional assumptions. So, to price a caplet in the multi-curve framework, we proceed as in the mono-currency case, with the replacement: Libor mono-curve -> Libor multi-curve:
![enter image description here][19]
The pricing formula will differ depending on the choice of distributional assumptions for the forward Libor rate. For calculation purposes, we set the adjusted Libor rate Subscript[L, e][t; Subscript[T, 1], Subscript[T, 2]] E^(-t \[Rho] \[Eta] \[Sigma]) = x0. We choose the three processes that have become the most common in the market - i.e. (i) the Normal process, (ii) the LogNormal process and (iii) the mean-reverting Normal process.
- **Normal process:**
nProc = OrnsteinUhlenbeckProcess[0, σ, 0, x0];
nCplt = Subscript[P, OIS][t, i] δ[i] Expectation[Max[x - k, 0],
x \[Distributed] nProc[t],
Assumptions -> σ > 0 && t > 0] // FullSimplify
![enter image description here][20]
We can now investigate the behaviour of the caplet with respect to the Libor volatility and the strike:
Plot3D[nCplt /. {Subscript[P, OIS][t, i] -> 0.98, δ[i] -> 0.25,
x0 -> 0.0125, t -> 1}, {σ, 0.005, 0.015}, {k, 0.01,
0.0135}, PlotLabel ->
Style["Caplet Normal premium", Blue, {15, Bold}],
PlotLegends -> Automatic, AxesLabel -> Automatic,
ColorFunction -> "Rainbow"]
![enter image description here][21]
The premium increases as volatility goes up and the strike declines. However, volatility is the more dominant factor.
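The Normal-model premium is the Bachelier formula; here is a cross-check of the closed form against brute-force integration (a Python sketch with illustrative parameters near the plotted range):

```python
import math

# Normal-model caplet: premium = P_OIS * delta * E[(x - K)+] with
# x ~ N(x0, sigma*sqrt(t)).  Closed form = Bachelier formula; we verify it
# against direct numerical integration.  Parameter values are illustrative.

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def bachelier_caplet(P, delta, x0, K, sigma, t):
    s = sigma * math.sqrt(t)
    d = (x0 - K) / s
    return P * delta * ((x0 - K) * norm_cdf(d) + s * norm_pdf(d))

P, delta, x0, K, sigma, t = 0.98, 0.25, 0.0125, 0.012, 0.01, 1.0
closed = bachelier_caplet(P, delta, x0, K, sigma, t)

# brute-force check: midpoint-rule integration of the payoff expectation
n, lo, hi = 40_000, -8.0, 8.0
h = (hi - lo) / n
s = sigma * math.sqrt(t)
num = P * delta * sum(
    max(x0 + s * (lo + (j + 0.5) * h) - K, 0.0) * norm_pdf(lo + (j + 0.5) * h)
    for j in range(n)) * h
```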
- **Lognormal process**
lProc = GeometricBrownianMotionProcess[0, σ, x0];
lCplt = Subscript[P, OIS][t, i] δ[i] Expectation[Max[x - k, 0],
x \[Distributed] lProc[t],
Assumptions -> σ > 0 && t > 0 && k > 0 && x0 > 0] //
FullSimplify
![enter image description here][22]
A similar premium pattern is observed for the other processes, such as the LogNormal:
Plot3D[lCplt /. {Subscript[P, OIS][t, i] -> 0.98, δ[i] -> 0.25,
x0 -> 0.0125, t -> 1}, {σ, 0.15, 0.5}, {k, 0.01, 0.0135},
PlotLabel -> Style["Caplet LogNormal premium", Blue, {15, Bold}],
PlotLegends -> Automatic, AxesLabel -> Automatic,
ColorFunction -> "TemperatureMap"]
![enter image description here][23]
- **Mean-reverting normal process**
mProc = OrnsteinUhlenbeckProcess[μ, σ, θ, x0];
mCplt = Subscript[P, OIS][t, i] δ[i] Expectation[Max[x - k, 0],
x \[Distributed] NormalDistribution[a, b],
Assumptions -> b > 0 && t > 0];
mCplt = mCplt /. {a -> mProc[t][[1]], b -> mProc[t][[2]]} // FullSimplify
![enter image description here][24]
###Floors###
Interest rate floors are essentially put options on the forward Libor rate, with payoff function:
![enter image description here][25]
- **Normal process**
nProc = OrnsteinUhlenbeckProcess[0, σ, 0, x0];
nFlrt = Subscript[P, OIS][t, i] δ[i] Expectation[Max[k - x, 0],
x \[Distributed] nProc[t],
Assumptions -> σ > 0 && t > 0] // FullSimplify
![enter image description here][26]
- **LogNormal process**
lFlrt = Subscript[P, OIS][t, i] δ[i] Expectation[Max[k - x, 0],
x \[Distributed] lProc[t],
Assumptions -> σ > 0 && t > 0 && k > 0 && x0 > 0] //
FullSimplify
![enter image description here][27]
Plot3D[lFlrt /. {Subscript[P, OIS][t, i] -> 0.98, δ[i] -> 0.25,
x0 -> 0.0125, t -> 1}, {σ, 0.15, 0.5}, {k, 0.01, 0.0135},
PlotLabel -> Style["Floorlet LogNormal premium", Blue, {15, Bold}],
PlotLegends -> Automatic, AxesLabel -> Automatic,
ColorFunction -> "Pastel"]
![enter image description here][28]
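Under the lognormal assumption, the caplet/floorlet pair obeys put-call parity, call - put = P_OIS δ (x0 - K), since the driftless lognormal has mean x0. A Python sketch of the Black-style closed forms with a parity check (parameters are illustrative):

```python
import math

# Black-style lognormal caplet (call) and floorlet (put) on a driftless
# lognormal forward with mean x0.  Parity: call - put = P*delta*(x0 - K).
# Parameter values are illustrative only.

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def black_optionlet(P, delta, x0, K, sigma, t, call=True):
    s = sigma * math.sqrt(t)
    d1 = (math.log(x0 / K) + 0.5 * s * s) / s
    d2 = d1 - s
    if call:
        return P * delta * (x0 * norm_cdf(d1) - K * norm_cdf(d2))
    return P * delta * (K * norm_cdf(-d2) - x0 * norm_cdf(-d1))

P, delta, x0, K, sigma, t = 0.98, 0.25, 0.0125, 0.012, 0.25, 1.0
cap = black_optionlet(P, delta, x0, K, sigma, t, call=True)
flr = black_optionlet(P, delta, x0, K, sigma, t, call=False)
```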
##Swaptions##
Swaptions are options on the swap rate defined above. They exist in two formats: (i) payer swaption = put option on the swap rate and (ii) receiver swaption = call option on the swap rate. When we operate in the multi-curve framework, we face the same problem as in the Libor case - i.e. the swap rate requires an adjustment.
We develop the swap rate adjustment in the same way as for Libor. When LogNormal dynamics for the swap rate is envisaged, we arrive at the adjustment quantity through a joint expectation process:
![enter image description here][29]
The volatilities in the exponent are now swaption volatilities and correlation coefficient $\rho$ is the correlation between the swap rate and the adjuster.
###Receiver swaption###
This is the call option on the swap rate, with payoff Rec_OSWP = Subscript[AF, OIS] (Subscript[S, ADJ][t; Subscript[T, 0], Subscript[T, n]] - K)^+, where:
Subscript[AF, OIS] = Sum[γ[i] Subscript[P, OIS][i], {i, 1, n}]
![enter image description here][30]
The option premium will depend on the modelling choice for the underlying swap rate. We look again at (i) Normal process, (ii) LogNormal process and (iii) Mean-reverting Normal process:
- **Normal process**
nSwpn = Subscript[AF, OIS]
Expectation[Max[x - k, 0], x \[Distributed] nProc[t],
Assumptions -> σ > 0 && t > 0] // FullSimplify
![enter image description here][31]
- **LogNormal process**
lSwpn = Subscript[AF, OIS]
Expectation[Max[x - k, 0], x \[Distributed] lProc[t],
Assumptions -> σ > 0 && t > 0 && k > 0 && x0 > 0] //
FullSimplify
![enter image description here][32]
- **Normal mean-reverting process**
mSwpn = Subscript[AF, OIS]
   Expectation[Max[x - k, 0],
    x \[Distributed] NormalDistribution[a, b],
    Assumptions -> b > 0 && t > 0];
mSwpn = mSwpn /. {a -> mProc[t][[1]], b -> mProc[t][[2]]} // FullSimplify
![enter image description here][33]
###Payer swaption###
These are *put options* on the swap rate, which in the multi-curve environment is drift-adjusted. For example, if we assume a normal distribution for the swap rate, we get:
- **Normal process**
nSwpn2 = Subscript[AF, OIS]
Expectation[Max[k - x, 0], x \[Distributed] nProc[t],
Assumptions -> σ > 0 && t > 0] // FullSimplify
![enter image description here][34]
#Conclusion#
Multi-curve framework in case of interest rate derivatives brings new paradigm that requires certain adjustment to the underlying rates. This is due to a measure change when the expectation of the rates stops being martingale. Introduction of separate discounting curve - OIS requires adjustment to the forward rate drift in order to preserve non-arbitrage condition. Quarto-style adjustment known in foreign currency market is being used to derive neat formula.
Pricing and valuation adjustment with Mathematica, as the above demonstration shows, is easy. Availability of stochastic routines and probabilistic functions leads to quick and elegant solution.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Caplet.jpg&userId=387433
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at11.02.28.png&userId=20103
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at11.18.04.png&userId=20103
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at11.23.08.png&userId=20103
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at11.27.49.png&userId=20103
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at11.29.20.png&userId=20103
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at11.30.27.png&userId=20103
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at11.31.55.png&userId=20103
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=15141.png&userId=20103
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=36372.png&userId=20103
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=57483.png&userId=20103
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at11.38.59.png&userId=20103
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10334.png&userId=20103
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=52615.png&userId=20103
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=64676.png&userId=20103
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=71507.png&userId=20103
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=103448.png&userId=20103
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=31589.png&userId=20103
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at12.05.36.png&userId=20103
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1092710.png&userId=20103
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=291011.png&userId=20103
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=446612.png&userId=20103
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=732913.png&userId=20103
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=570614.png&userId=20103
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at12.14.26.png&userId=20103
[26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=329415.png&userId=20103
[27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=259916.png&userId=20103
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=810317.png&userId=20103
[29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at12.19.37.png&userId=20103
[30]: http://community.wolfram.com//c/portal/getImageAttachment?filename=240818.png&userId=20103
[31]: http://community.wolfram.com//c/portal/getImageAttachment?filename=934319.png&userId=20103
[32]: http://community.wolfram.com//c/portal/getImageAttachment?filename=797420.png&userId=20103
[33]: http://community.wolfram.com//c/portal/getImageAttachment?filename=881621.png&userId=20103
[34]: http://community.wolfram.com//c/portal/getImageAttachment?filename=498322.png&userId=20103

Igor Hlivka 2018-01-29T12:58:41Z

Rendering of RegionIntersection in 3D?
http://community.wolfram.com/groups/-/m/t/1283669
I am trying to visualize some region intersections in 3D.
## Example 1:
ra = 10;
ri = 5;
R1 = RegionDifference[Ball[{0, 0, 0}, ra], Ball[{0, 0, ri - 1/2}, ri]];
Show[R1 // Region, Axes -> True]
![rendered result][1]
The resulting rendered region has a hole, while it should not have one. Does anyone know a way to improve on this?
Another example.
## Example 2:
ra = 10;
ri = 5;
R1 = RegionDifference[Ball[{0, 0, 0}, ra], Ball[{0, 0, 0}, ri]];
R2 = Cylinder[{{-100, 0, 0}, {100, 0, 0}}, 5];
R = RegionIntersection[R1, R2] // Region
The resulting region is rendered with jagged edges.
![The rendered result of Example2][2]
How can this rendering be improved? I know that the rendered edges can not be infinitely sharp like in the mathematical world, but I think some improvement should be possible. Does anyone know how to achieve this? I am using Mathematica 11.1 on Windows.
Thanks for your help.
Maarten
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2018-02-1310_05_39-RegionIntersectionrenderingnotgood.nb_-WolframMathematica11.1.png&userId=307930
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2018-02-1310_06_27-RegionIntersectionrenderingnotgood.nb_-WolframMathematica11.1.png&userId=307930

Maarten van der Burgt 2018-02-13T09:17:38Z

[✓] Get a TransitiveClosureGraph[] with loops?
http://community.wolfram.com/groups/-/m/t/1287985
Here is a simple example:
In[1]:= TransitiveClosureGraph[{1->2, 2->3, 3->1}]//AdjacencyMatrix//MatrixForm
Out[1]//MatrixForm= 0 1 1
1 0 1
1 1 0
This produces the graph with the adjacency matrix of {{0,1,1},{1,0,1},{1,1,0}}, but I expected the diagonal elements to be 1 as well, i.e. a loop for each vertex. At least for a binary relation that would be the case, so I don't understand why the transitive closure for a graph is ignoring the loops. By the definition of transitive closure on [MathWorld definition][1] it is a graph which contains an edge {u,v} whenever there is a directed path from u to v. Well, in our case there is a directed path from 1 to 1, namely: 1->2, 2->3, 3->1. And likewise for the nodes 2 and 3. What am I missing here? Thank you.
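For comparison, the closure of the underlying binary relation can be computed directly with Warshall's algorithm (a quick sketch in Python), and it does contain the diagonal pairs, exactly as the question expects:

```python
# Transitive closure of the relation {1->2, 2->3, 3->1} via Warshall's
# algorithm (vertices 0-indexed here).  Because every vertex lies on a
# directed cycle, the closed relation contains all pairs, including the
# diagonal (i, i) pairs.
n = 3
adj = [[False] * n for _ in range(n)]
for u, v in [(0, 1), (1, 2), (2, 0)]:
    adj[u][v] = True

for k in range(n):
    for i in range(n):
        for j in range(n):
            adj[i][j] = adj[i][j] or (adj[i][k] and adj[k][j])
```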
[1]: http://mathworld.wolfram.com/TransitiveClosure.html

Tigran Aivazian 2018-02-19T12:06:21Z

Narayana Cow Triangle Fractal
http://community.wolfram.com/groups/-/m/t/1286708
In 1356, Narayana posed a question in his book *Gaṇita Kaumudi*: "A cow gives birth to a calf every year. In turn, the calf gives birth to another calf when it is three years old. What is the number of progeny produced during twenty years by one cow?" This is now known as Narayana's cows sequence. The Narayana's cows sequence constant, **cow**=1.4655712318767680266567312252199391080255775684723, is the limit ratio between neighboring terms.
LinearRecurrence[{1, 0, 1}, {2, 3, 4}, 21]
NestList[Round[# Root[-1 - #1^2 + #1^3 &, 1]] &, 2, 20]
Either gives {2, 3, 4, 6, 9, 13, 19, 28, 41, 60, 88, 129, 189, 277, 406, 595, 872, 1278, 1873, 2745, 4023}. This turns out to be a good constant to use for a Rauzy fractal. The outer fractal triangle can be divided into copies of itself
r = Root[-1 - #1^2 + #1^3 &, 3]; iterations = 6;
cowed[comp_] := First /@ Split[Flatten[RootReduce[#[[1]] + (#[[2]] - #[[1]]) {0, -r^5, r^5 + 1, 1}] & /@ Partition[comp, 2, 1, 1], 1]];
poly = ReIm[Nest[cowed[#] &, #, iterations]] & /@ Table[N[RootReduce[r^({4, 1, 3, 5} + n) {1, 1, -1, 1}], 50], {n, 1,14}];
Graphics[{EdgeForm[{Black}], Gray, Disk[{0, 0}, .1], MapIndexed[{Hue[#2[[1]]/12], Polygon[#1]} &, poly]}]
![fractal Narayana Cow spiral ][1]
The ratio of areas for the triangles turns out to be **cow**. Try Area[Polygon[poly[[1]]]]/Area[Polygon[poly[[2]]]] and you'll see.
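The sequence and its limit ratio are easy to cross-check outside Mathematica (a quick Python sketch):

```python
# Narayana's cows: a(n) = a(n-1) + a(n-3), starting 2, 3, 4.
# Neighbour ratios approach the "cow" constant, the real root of
# x^3 = x^2 + 1 (~1.4655712...).
seq = [2, 3, 4]
for _ in range(18):
    seq.append(seq[-1] + seq[-3])

ratio = seq[-1] / seq[-2]   # ~ 1.46557
```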
If you want to laser cut that, it's handy to get a single path.
cowpath[comp_] := First /@ Split[Flatten[RootReduce[#[[1]] + (#[[2]] - #[[1]]) {0, -r^5, r^5 + 1, 1}] & /@ Partition[comp, 2, 1], 1]];
path = ReIm[Nest[cowpath[#] &, N[Drop[Flatten[Table[r^({4, 1, 3} + n) {1, 1, -1}, {n, 1, 16}]], -1], 50], iterations]]; Graphics[{Line[path]}]
What else can be done with **cow**? With some trickier code I put together the pieces this way. Notice how order 5 spokes appear.
![Narayana cow fractal egg][2]
The opening gave an order 3 infinite spiral. Is there an order 5 infinite spiral? It turns out there is. Behold the **cow-nautilus**!
![cow-nautilus][3]
It can be made with the following code:
r=Root[-1-#1^2+#1^3&,3]; iterate=3;
cowed[comp_]:= First/@Split[Flatten[RootReduce[#[[1]]+(#[[2]]-#[[1]]){0,-r^5,r^5+1,1}]&/@Partition[comp,2,1,1],1]];
base={{r^10,r^7,-r^9,r^11},{-r^12,-r^9,r^11,-r^13},{r^8,r^5,-r^7,r^9},{-r^7,-r^4,r^6,-r^8}}+{-r^10,r^11,-r^6,r^4+r^8};
naut=RootReduce[Join[Table[base[[1]] (-r)^n,{n,0,-4,-1}],Flatten[Table[Drop[base,1](-r)^n,{n,-8,0}],1]]];
poly=ReIm[Nest[cowed[#]&,#,iterate]]&/@N[naut,50];
Graphics[{EdgeForm[{Black}],MapIndexed[{ColorData["BrightBands"][N[Norm[Mean[#1]]/2]],Polygon[#1]}&,poly]},ImageSize-> 800]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fractalcowspiral.jpg&userId=21530
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=cowegg.jpg&userId=21530
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=cownautilus.jpg&userId=21530

Ed Pegg 2018-02-16T22:52:01Z

Improve this relatively simple FindRoot code?
http://community.wolfram.com/groups/-/m/t/1286896
I suspect the answer to this question is likely to be embarrassingly simple, but right now I'm at a loss as to why the following two examples behave so differently:
I'm trying to replicate the FindRoot function behavior in Mathematica for solving a system of two transcendental equations. I've set up two sample problems and I'm using the FindRoot function to check the output of my code.
This is the first example and it works as expected, matching exactly FindRoot for every iteration cycle:
U[x_, y_] := x^3 - 3 x y^2 + 1;
V[x_, y_] := 3 x^2 y - y^3;
Ux[x_, y_] = D[U[x, y], x];
Uy[x_, y_] = D[U[x, y], y];
Vx[x_, y_] = D[V[x, y], x];
Vy[x_, y_] = D[V[x, y], y];
J[x_, y_] = Ux[x, y] Vy[x, y] - Uy[x, y] Vx[x, y] // Simplify;
Iterate[x_, y_] := {x - (U[x, y] Vy[x, y] - V[x, y] Uy[x, y])/J[x, y],y - (Ux[x, y] V[x, y] - U[x, y] Vx[x, y])/J[x, y]}
NestList[Iterate[#[[1]], #[[2]]] &, {10., 10.}, 10] // TableForm
FindRoot[{U[x, y] == 0, V[x, y] == 0}, {{x, 10.}, {y, 10.}},StepMonitor :> Print[x, " ", y],Method -> {"Newton", "UpdateJacobian" -> 1}]
This produces the following output:
10. 10.
6.66667 6.66833
4.44445 4.4493
2.96297 2.97463
1.97539 2.002
1.31749 1.3768
0.882366 1.00957
0.613064 0.85679
0.505639 0.855438
0.499855 0.866033
0.5 0.866025
6.66667 6.66833
4.44445 4.4493
2.96297 2.97463
1.97539 2.002
1.31749 1.3768
0.882366 1.00957
0.613064 0.85679
0.505639 0.855438
0.499855 0.866033
0.5 0.866025
0.5 0.866025
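One way to see why this first example is so well behaved: since U + iV = (x + iy)^3 + 1, the 2D Newton iteration above is exactly complex Newton on f(z) = z^3 + 1 (a quick Python check, converging to the same root as the table):

```python
# For this example U + i V = (x + i y)^3 + 1, so the 2x2 real Newton
# scheme is equivalent to complex Newton on f(z) = z^3 + 1, f'(z) = 3 z^2.
z = complex(10.0, 10.0)
for _ in range(50):
    z -= (z**3 + 1) / (3 * z**2)
# lands on the cube root of -1 nearest the start: 0.5 + 0.866025 i
```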
Now testing the same code with a different set of equations:
SampleParams = {x1 -> 2., y1 -> 5., x2 -> 4., y2 -> 2, x3 -> 8, y3 -> 7};
U[x_, y_] := y1 - y3 - x Cosh[(-y + x1)/x] + x Cosh[(-y + x3)/x] /. SampleParams
V[x_, y_] := y2 - y3 - x Cosh[(-y + x2)/x] + x Cosh[(-y + x3)/x] /. SampleParams
Ux[x_, y_] = D[U[x, y], x];
Uy[x_, y_] = D[U[x, y], y];
Vx[x_, y_] = D[V[x, y], x];
Vy[x_, y_] = D[V[x, y], y];
J[x_, y_] = Ux[x, y] Vy[x, y] - Uy[x, y] Vx[x, y] // Simplify;
Iterate[x_, y_] := {x - (U[x, y] Vy[x, y] - V[x, y] Uy[x, y])/J[x, y],y - (Ux[x, y] V[x, y] - U[x, y] Vx[x, y])/J[x, y]}
NestList[Iterate[#[[1]], #[[2]]] &, {10., 10.}, 3] // TableForm
FindRoot[{U[x, y] == 0, V[x, y] == 0}, {{x, 10.}, {y, 10.}},StepMonitor :> Print[x, " ", y], Method -> {"Newton", "UpdateJacobian" -> 1}]
This time I'm not getting the desired results:
10. 10.
-61.2969 -34.1643
-2953.96 -1832.23
-6.67248*10^6 -4.14752*10^6
7.00846 8.14691
5.76637 7.37079
5.04881 6.91975
4.58046 6.62407
1.31254 4.55446
1.46434 4.67168
1.51128 4.71681
1.51456 4.72029
1.51457 4.72031
1.51457 4.72031
{x->1.51457,y->4.72031}
As you can see, my code simply does not work in this case, even though a root clearly exists at {x -> 1.51457, y -> 4.72031}.
Any thoughts?

Todor Latev 2018-02-18T02:28:03Z