Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Wolfram Language sorted by active

Plot in OblateSpheroidal coordinate system?
http://community.wolfram.com/groups/-/m/t/1132374
I wanted to produce a plot in the OblateSpheroidal coordinate system. Since there is no such built-in function in Mathematica, I tried a coordinate transformation to spherical coordinates followed by a plot using SphericalPlot3D. I want to plot an ellipse and a hyperbola of revolution as in the image below, but the code just plots a sphere.
![enter image description here][1]
fromOblatetoSpherical =
CoordinateTransformData[{{"OblateSpheroidal", 1}, 3} -> "Spherical",
"Mapping"];
CoordinateChartData[{{"OblateSpheroidal", {\[FormalA]}}, "Euclidean",
3}, "StandardCoordinateNames"]
sph = fromOblatetoSpherical@%
sph2 = Simplify[sph /. x_String :> ToExpression[x]]
SphericalPlot3D[
sph2[[1]] = #/5, {\[Eta], 0, 3 Pi/4}, {\[CurlyPhi], 0, 2 Pi},
PlotStyle ->
Directive[Orange, Opacity[0.7], Specularity[White, 10]],
PlotRange -> All, ImageSize -> Small, Mesh -> None,
PlotPoints -> 50] & /@ {-1, 3, 6, 8, 12}
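For what it's worth, a hedged alternative that bypasses the coordinate-transformation machinery entirely is to feed one standard oblate spheroidal parametrization straight into ParametricPlot3D. Conventions vary between references; this sketch assumes the scale parameter a = 1 and the mapping x = Cosh[ξ] Cos[η] Cos[φ], y = Cosh[ξ] Cos[η] Sin[φ], z = Sinh[ξ] Sin[η]:

```mathematica
(* one common oblate spheroidal parametrization, with a = 1 *)
obl[xi_, eta_, phi_] := {Cosh[xi] Cos[eta] Cos[phi],
   Cosh[xi] Cos[eta] Sin[phi], Sinh[xi] Sin[eta]};

Show[
 (* surfaces of constant xi are oblate ellipsoids of revolution *)
 ParametricPlot3D[obl[1/2, eta, phi], {eta, -Pi/2, Pi/2}, {phi, 0, 2 Pi},
  PlotStyle -> Directive[Orange, Opacity[0.5]], Mesh -> None],
 (* surfaces of constant eta are sheets of a hyperboloid of revolution *)
 ParametricPlot3D[obl[xi, Pi/6, phi], {xi, 0, 3/2}, {phi, 0, 2 Pi},
  PlotStyle -> Directive[Blue, Opacity[0.5]], Mesh -> None],
 PlotRange -> All]
```

Holding ξ fixed sweeps out the ellipsoid and holding η fixed sweeps out the hyperboloid, which are the two families shown in the attached image.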
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ObaltedSpheroid.png&userId=120134
Jose Calderon, 2017-07-01T20:26:26Z

ResourceSearch[] Error about CreateDirectory[]
http://community.wolfram.com/groups/-/m/t/1154600
Hi. I was using ResourceSearch[], but encountered this error:
In[1]:= resources = ResourceSearch["Machine Learning"]
Out[1]=
CreateDirectory::privv: Privilege violation for file or directory C:\Users\ed\[PlusMinus]ӜAppData.
Put::noopen: Cannot open C:\Users\ed\[PlusMinus]ӜAppData\Roaming\Wolfram\Objects\Resources\69f\69f1e629-81e6-4eaa-998f-f6734fcd2cb3\metadata\object.wl.
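The garbled characters in the middle of the path suggest the user-profile environment variable itself may be corrupted, rather than anything in ResourceSearch. A hedged first diagnostic is to inspect the directories Mathematica derives from it:

```mathematica
(* if either of these shows the same stray characters, the problem lies in the
   Windows environment / user profile rather than in ResourceSearch itself *)
Environment["APPDATA"]
$UserBaseDirectory
```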
What is causing this problem? Thank you,
Michael Jang, 2017-07-28T01:09:28Z

Ways to access elements of a list or an array
http://community.wolfram.com/groups/-/m/t/1142765
Dear All,
For your information only.
A code to access elements of a list or an array.
Grid[{
{"Description\n\nUsage with:\nmatx={{2,3,5},{7,11,13},{17,19,23}},\n\
b={6,1,8,-4}", "Mathematica function", "Expression", "Output"}
, {"Select an element", " - ", "b[[3]]\nmatx[[2,3]]", "8\n13"}
, {"Select a row", " - ", "matx[[2]]", "{7,11,13}"}
, {"Select a column", " - ", "matx[[All,1]]", "{2,7,17}"}
, {"Select a submatrix", " - ", "matx[[2;;3,1;;2]]",
"{{7,11},{17,19}}"}
, {"Select the first element", "First[list]", "First[b]", "6"}
, {"Select the last element", "Last[list]", "Last[b]", "-4"}
, {"Select the first row", "First[mat]", "First[matx]", "{2,3,5}"}
, {"Select the last row", "Last[mat]", "Last[matx]", "{17,19,23}"}
, {"Take the first n elements of a list", "Take[list,n]",
"Take[b,2]", "{6,1}"}
, {"Take the last n elements of a list", "Take[list,-n]",
"Take[b,-2]", "{8,-4}"}
, {"Take the nth to kth elements of a list", "Take[list,{n,k}]",
"Take[b,{2,4}]", "{1,8,-4}"}
}
, Alignment -> Left
, Frame -> {{Red}, {Red}}
, Background -> {{Lighter[Yellow, .9]}, {Lighter[Yellow, .9]}, None}
, Spacings -> {3, 2}
, Dividers -> All
, ItemStyle -> Directive[FontSize -> 16, Bold]
, FrameStyle -> Thick]
Print[Style[
"mat = array of mXm elements,m > 1\nlist = list of m elements\nn, k \
= integer", 18, Red, Bold]]
![enter image description here][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Select-Take.jpg&userId=185016
Cheers,.....Jos
Jos Klaps, 2017-07-09T21:54:08Z

[✓] Access an image in a System.Windows.Media.Imaging.CachedBitmap ?
http://community.wolfram.com/groups/-/m/t/1148854
I am trying to use a camera with a .NET interface. The code below seems like it may be capturing an image into a System.Windows.Media.Imaging.CachedBitmap object.
Any ideas how to access this from within Mathematica? Eventually, I will need a solution that runs quickly. Saving to disk using .NET and then reading the file into Mathematica would functionally give me what I need, but will be too slow.
Needs["NETLink`"];
InstallNET[];
LoadNETAssembly["C:\\XIMEA\\API\\x64"];
LoadNETType["xiApi.NET.xiCam"];
myCam = NETNew["xiApi.NET.xiCam"];
myCam@OpenDevice[0];
(*Start acquisition *)
myCam@StartAcquisition[];
timeouts = 1000;
myCam@GetImage[myImage, timeouts]
myCam@StopAcquisition[];
myImage
Output of last statement is:
« NETObject[System.Windows.Media.Imaging.CachedBitmap] »
Jeff Burns, 2017-07-18T22:49:33Z

[Conference] "Maths 4 Everyday Life" powered by the Wolfram Language
http://community.wolfram.com/groups/-/m/t/1150809
We will be organising a Wolfram Language Event in Aberdeen from 14th-19th August this year in close collaboration with Wolfram Europe. It is going to be an exciting event as we try to cover a huge range of applications of the Wolfram Language. We reach out to people from all walks of life: we ask pupils, teachers, academic and industrial researchers, people from various administrative areas and the city council, and laypersons to attend. We want to show how broadly applicable the Wolfram Language is and how relevant computational thinking is for all of us. We will cover typical areas where computational thinking is key, but also look at applications to arts, music and literature.
There will be two streams of events all week. One will be for absolute beginners, with no experience in programming or maths. The second one will be for users with either a maths/science or programming background. Experienced programmers both from Wolfram Europe and the University of Aberdeen will guide you though the week's program.
There will be roughly three parts to the event: the first two days (Monday/Tuesday) will introduce attendees to the Wolfram Language. The next two days (Wednesday, Thursday) will feature interesting applications of the Wolfram Language; we will cover statistics, image and sound analysis, connected devices and much more. Conrad Wolfram will speak via video link. The final two days will be project-oriented. Small groups of attendees will work on projects of their choice, with the support of experienced programmers and scientists.
Please check out the following link
[https://www.maths4everydaylife.org/conference][1]
for more information.
We look forward to seeing you at Aberdeen,
Marco
![enter image description here][2]
[1]: https://www.maths4everydaylife.org/conference
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-07-21at15.49.09.png&userId=48754
Marco Thiel, 2017-07-21T15:03:25Z

Examples of Query[]
http://community.wolfram.com/groups/-/m/t/1152954
I routinely work with nested Associations that might be efficiently queried and manipulated with the Query[] command, but I have trouble making the command do what I want. A trivial example:
The following dataset describes a couple of grades at a hypothetical secondary school.
sampleStructure = {<|"grade" -> 11,
"students" -> {<|"name" -> "bill", "age" -> 15|>, <|
"name" -> "susan", "age" -> 16|>}|>, <|"grade" -> 12,
"students" -> {<|"name" -> "manuel", "age" -> 16|>, <|
"name" -> "morris", "age" -> 17|>, <|"name" -> "jackie",
"age" -> 16|>}|>};
Problem 1 (solved): We want to pass over the entire school, assigning a grade to every student. The process is designed to be perfectly objective and remove any chance of favoritism by the teacher -- if the student's age is an even number, he or she gets an A-, otherwise a B+.
Solution: `Query[All, All, All, <|#, "score" -> If[EvenQ@#["age"], "A-", "B+"]|> &]@sampleStructure`
Problem 2 (unsolved): How might one assign grades on the basis of each student's age and grade, rather than just his or her age?
Problem 3 (unsolved): How might one add an additional Key ("teacher"→) at the top level of any grade that has more than two students, leaving other grades unchanged?
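Hedged sketches for Problems 2 and 3 follow (the grading rule combining age and grade, and the placeholder teacher value, are invented purely for illustration). The trick for Problem 2 is to work at the grade level so both fields are in scope at once:

```mathematica
(* Problem 2: score from both grade and age -- here, even (age + grade) gets an A- *)
Map[Function[g,
   <|g, "students" ->
     Map[<|#, "score" -> If[EvenQ[#age + g["grade"]], "A-", "B+"]|> &,
      g["students"]]|>],
  sampleStructure]

(* Problem 3: add a "teacher" key only to grades with more than two students *)
Map[If[Length[#["students"]] > 2, <|#, "teacher" -> "t.b.d."|>, #] &,
  sampleStructure]
```

Both sketches return modified copies; sampleStructure itself is left untouched.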
Thanks for any help.
Michael Stern, 2017-07-26T01:24:27Z

Z-Transform of sequence = causal sequence + anticausal sequence
http://community.wolfram.com/groups/-/m/t/1153032
Hello everyone. How can I get the Z-Transform of sequence=causal sequence+anticausal sequence like this:
x=a^n HeavisideTheta[n]-b^n HeavisideTheta[-n-1]
Z[x]=(1/(1-a z^-1) + 1/(1-b z^-1))
I know that the command ZTransform works only for causal sequences, because it implements only the unilateral Z-transform.
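One hedged workaround is to assemble the bilateral transform by hand: let ZTransform handle the causal part, and sum the anticausal series directly under its own convergence assumption.

```mathematica
(* causal part: a^n HeavisideTheta[n]; unilateral ZTransform handles this,
   and the result should simplify to z/(z - a), ROC |z| > |a| *)
causal = ZTransform[a^n, n, z]

(* anticausal part: -b^n HeavisideTheta[-n-1] contributes
   -Sum over n <= -1 of b^n z^-n, i.e. -(z/b)^m summed over m >= 1 *)
anticausal = -Sum[(z/b)^m, {m, 1, Infinity},
    Assumptions -> Abs[z/b] < 1] // Simplify
(* z/(z - b), valid on the ROC |z| < |b| *)

(* bilateral transform of the full sequence *)
bilateral = Simplify[causal + anticausal]
```

Rewriting z/(z - a) + z/(z - b) in terms of z^-1 recovers the quoted form 1/(1 - a z^-1) + 1/(1 - b z^-1).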
Thank you very much.
Gennaro Arguzzi, 2017-07-26T10:26:32Z

[GIF] Hex (Rotating Hexagons)
http://community.wolfram.com/groups/-/m/t/1152974
![Rotating hexagons][1]
**Hex**
A couple of years ago I set out to recreate Saskia Freeke's piece [_Geometric Animations / 150224_][2]. Once I had gotten a reasonable approximation, I quickly switched from hexagons to squares and played around with it for a bit, but eventually that notebook just kind of languished. A couple of weeks ago I opened it up again and played around some more; eventually [_Just Squares_][3] was the result.
When I posted it, [Harold Hausman suggested][4] a version with hexagons, which I thought was funny since it originally _was_ hexagons. That gave me motivation to try again to make something with hexagons, and this is the result.
Here's the code:
smootheststep[t_] := -20 t^7 + 70 t^6 - 84 t^5 + 35 t^4;
DynamicModule[{colorArray, t,
cols = RGBColor /@ {"#f54123", "#0098d8", "#fbfbfb", "#46454b"}},
colorArray = Table[RandomChoice[cols[[;; 2]]],
{j, -1, 14, 1/2}, {i, -1, 13 - If[OddQ[2 j], 1/2, 0]}];
Manipulate[
t = smootheststep[s];
Graphics[
{Thickness[.005],
Table[
{Blend[{Lighter[colorArray[[2 j + 3, i + 2]], .2], cols[[-2]]}, 2 Abs[t - 1/2]],
Line[
Table[
{ Cos[θ + 2 π/6 t] + 2 i + If[OddQ[2 j], 1, 0], Sin[θ + 2 π/6 t] + 2/Sqrt[3] j},
{θ, 0, 2 π, π/3}]]},
{j, -1, 14, 1/2}, {i, -1, 13 - If[OddQ[2 j], 1/2, 0]}]},
ImageSize -> 540, PlotRange -> {{0, 12}, {0, 12}}, Background -> Darker[cols[[-1]]]],
{s, 0, 1}]
]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=hex9c.gif&userId=610054
[2]: http://sasj.tumblr.com/post/111975871965/geometric-animations-150224
[3]: http://community.wolfram.com/groups/-/m/t/1148062
[4]: https://ello.co/shonk/post/ga9eklc3reri3ykry5dvyq
Clayton Shonkwiler, 2017-07-26T04:51:55Z

Avoid control variable in Manipulate displaying unexpected values?
http://community.wolfram.com/groups/-/m/t/1135555
I have a Manipulate command in which the control variable runs over an interval which includes zero and with a specific increment. When the slider value is near zero, the value displayed can be an exact zero or an approximate zero depending upon how it was approached. Try the following and note that "sliding" onto zero is no problem, but incrementing to zero from either direction gives an inexact zero representation. This can be overcome with a Chop command, but I can't seem to change the actual display of the value that way. Here's a simplified command to try:
Manipulate[N[c, 20], {{c, 0, "(-0.5,2)"}, -0.5, 2.0, 0.01, Appearance -> {"Labeled", "Open"}}]
Is there any way to "manipulate" the control variable to avoid this behavior?
William Vaughn, 2017-07-05T15:26:59Z

Can I plot it in different ways...
http://community.wolfram.com/groups/-/m/t/1152390
After generating the numbers, I can only plot them in this way, but there are interesting numbers coming out of the equation. Can someone find me a nicer plot for it? It would come out as a nice plot of little dots from a progression of very high numbers.
a=RandomInteger[20,{40}]
For t[n=0->n^y=1/y->n=a;n==(y)^x/n+(1);y==n*x^n,n<x,n++,x==n+n*-Sqrt[n^Sqrt[n]]
Do[y=n*x^n,{n,x},n];[x=n+n*-Sqrt[n^n
Sqrt[n]],n]If -1<y>1 -> y=n *n-1+n/x While Print[x=y]]
d=x/a
Table[x/a,{x,40}]
ListPlot[x]
luis felipe massena misiec, 2017-07-25T21:21:12Z

How can I rerun this recurrence function multiple times?
http://community.wolfram.com/groups/-/m/t/1152029
Hey,
first and foremost I'm sorry if this is in the wrong thread, it is my first time posting here.
So I'm currently trying to forecast interest rates with the Vasicek model. My approach is to use a recursive equation (mean-reversion rate, mean-reversion level and volatility are already calculated). It goes as follows:
r_{i+1} = r_{i} + \kappa (\theta - r_{i}) dt + \sigma \epsilon_{i} dt
where \epsilon_{i} is a standard normally distributed random variable. For simplicity I set dt to 1, because I wanted to have a daily forecast.
Currently I'm using RecurrenceTable to solve it, but I get the error message "the expression 'xy' cannot be used as a part specification" (though the program gives me an output nevertheless).
![enter image description here][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=vasicekforecast.PNG&userId=1151771
It seems like the program runs as expected, but the error message is bugging me a little. Is there a better way to implement it? I tried it in a For loop but it didn't work...
My next step would be to rerun this multiple times to get different forecasts, but I don't know how I should do this. Manually it would be too much effort, and my last try at programming it was a failure. My idea was to have two for-loops that give me an output in matrix form, with the days written in the rows and the runs written in the columns.
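A hedged sketch of both steps (all parameter values below are placeholders): NestList runs one path without RecurrenceTable's part-specification quirks, and Table stacks independent runs into the day-by-run matrix described above.

```mathematica
(* placeholder parameters: mean-reversion speed, level, volatility, start rate *)
kappa = 0.2; theta = 0.03; sigma = 0.002; r0 = 0.025; days = 250; runs = 100;

(* one path: r[i+1] = r[i] + kappa (theta - r[i]) + sigma eps[i], with dt = 1;
   SetDelayed so every call draws fresh random shocks *)
onePath := NestList[
   # + kappa (theta - #) + sigma RandomVariate[NormalDistribution[]] &,
   r0, days];

(* rows = days, columns = runs *)
forecasts = Transpose[Table[onePath, {runs}]];
Dimensions[forecasts]   (* {days + 1, runs} *)
```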
If you have a question regarding the idea feel free to ask. I'm thankful for all the help.
Regards, Tim
Tim, 2017-07-24T15:18:25Z

Polynomial Term Order?
http://community.wolfram.com/groups/-/m/t/1152222
I would like polynomials in TraditionalForm to be ordered in descending order of degrees. All the polynomials that I am dealing with are in one variable. For instance, I have -5x+3. TraditionalForm outputs 3-5x. I found an article [HERE][1] that talks about a solution, but I can't get that solution to work. It says to do this:
poly=-5x+3;
MonomialList[poly, x];
poly // TraditionalForm
But the output I'm getting is still 3-5x. What am I doing wrong?
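For what it's worth, MonomialList on its own only returns a list, and its ordering is discarded as soon as the terms are summed back into an ordinary expression. A hedged guess at the intended idiom is to re-apply Plus inside HoldForm, so the descending order survives display:

```mathematica
poly = -5 x + 3;
(* MonomialList gives {-5 x, 3}; HoldForm stops Plus from re-sorting the terms *)
HoldForm[Plus[##]] & @@ MonomialList[poly, x] // TraditionalForm
```

This should display the polynomial with the degree-1 term first, i.e. -5x + 3.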
Thanks in advance,
Mark Greenberg
[1]: http://reference.wolfram.com/language/howto/RearrangeTheTermsOfAPolynomial.html.en
Mark Greenberg, 2017-07-24T23:06:34Z

Visual Interface to the Wolfram Language
http://community.wolfram.com/groups/-/m/t/1124967
Hi everyone.
I'm working on a visual interface to the Wolfram Language called [visX][1], and I'd like to ask what you all think of it.
Wolfram Language code can often be thought of as a set of blocks, each of which takes some inputs, does something, and produces an output. VisX lets you write WL code exactly this way - you draw a diagram, connecting blocks with links. For example, say you want to count how many times each digit (0 to 9) occurs in the first 30 digits of Pi. With text-based WL code, you'd write
digits = RealDigits[N[Pi, 30]][[1]]
Count[digits, #] & /@ Range[0, 9]
In visX, you'd draw this:
![digits of Pi][2]
I guess it's pretty self-explanatory. In addition to using built-in WL blocks, you can write your own, like the CountInList block. Normally, blocks just transform inputs to output, but the CountInList block is mapped over its input which is indicated by the little brackets on the outside of its connection ports. (That's basically visual syntactic sugar for "/@" or Map.) The 4 in the upper-right corner indicates that results inside this block are showing results from the 4-th time through the map. The block with "digits" in it sets a variable, which is then referenced in the CountInList block.
You define blocks (which are basically functions) by just making an empty rectangle, dragging contents in, and wiring them together; then you can use copies of the block wherever you want. A change in any copy of the block will be reflected in all other copies. There's no real difference between defining a block and using it. Recursion can be specified by just including a copy of the block inside itself. Blocks can call other blocks in the same manner.
Just like regular WL code, visX blocks can be nested deeply, but with the visual interface, it's easy to zoom in and out. At any point, the UI will show you the right amount of detail for each block - sometimes no detail at all, sometimes its name and labels on its inputs, sometimes its actual contents (which can then be edited or further zoomed...).
visX is stand-alone software that runs locally on your machine, evaluates the diagram using your local Mathematica kernel, and receives the results and puts them back in the diagram. You can load data files using Import as usual.
One of the problems that I've seen with visual languages in the past is that while simple things are easy to do, the code quickly gets too complex to manage and the visual interface starts to get in the way. With the Wolfram Language in theory everything is an expression, and this can lead you to write functional-style programs which are easily thought of as a diagram, but that's not always the most natural way to express a computation. Sometimes you just need a little for loop. Consider calculating Fibonacci numbers. Start the sequence with 1, 1, ... then each element of the sequence is the sum of the previous two. Yes, you can write a recursive algorithm to do this, but most people just want to write a little for loop. In visX, you can do this (calculates the 6th Fibonacci number):
![embedded code][4]
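For reference, the plain-WL loop that the diagram above encodes might look like this (my own rendering of the idea, not visX output):

```mathematica
(* iterative Fibonacci: start with 1, 1, then repeatedly sum the previous two *)
fib[n_] := Module[{a = 1, b = 1},
   Do[{a, b} = {b, a + b}, {n - 2}];
   b];
fib[6]   (* 8 *)
```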
I've tried to let you use blocks-and-links when that's the most natural thing (which is usually), and text-based code when that's better. Of course, you can mix them together however you want.
A second problem I've found with visual programming languages is that they can actually be much slower to use than writing out text, because you have to laboriously drag and drop every single block. Even simple algebraic expressions like
Sin[x]^2 + Cos[x]^2
2x^2 + 4x*y + 8y^2
would involve a lot of blocks because of all the Plus, Times, and Power blocks, as well as all the constants and symbols. With visX, you can enter Wolfram Language code snippets like those, and it will parse them and transform them into blocks which you can then insert into your diagram all at once and edit at will. This makes it much faster to get your idea onto the screen so that you can start evaluating it and developing it. I'm also working on the ability to take a visX block and give you back the Wolfram Language code that it represents.
The examples given here are simple, but of course you can use this interface for putting together a complex piece of code as well. I find it especially handy when building up a calculation with lots of intermediate results along the way, or to rapidly prototype an algorithm where I want to be able to easily switch the data flows around.
Does this project seem useful to anyone? I'd like to get some feedback - what do you think of it? Would you use it? For what?
If there's interest, I could do a small-scale alpha test in about a month from now.
More info at [visx.io][5].
-Nicholas Hoff
*edited to clarify block definitions and recursion*
[1]: http://visx.io
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=pi_digits_without_chrome.png&userId=1124239
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=pi_digits.png&userId=1124239
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=embeded_wl.png&userId=1124239
[5]: http://visx.io
Nicholas Hoff, 2017-06-20T10:02:52Z

How do I model electrodiffusion in Wolfram Mathematica?
http://community.wolfram.com/groups/-/m/t/1152480
I am an undergraduate researcher. I have been working with Mathematica for roughly a month. I have been assigned a modeling task. My objective is to model the diffusion of ions in solution under the effect of an electric field and ignoring concentration gradient.
Part 1: Equations
-----------------
The model is to be one dimensional for the time being. In the future it will be translated to 2D and ultimately 3D.
The equation used is as follows:
∂C/∂t=∂/∂x(D ∂C/∂x-zμC ∂Φ/∂x)
Where:
C : Concentration of ions
D: Diffusion coefficient
z: Charge per molecule
μ: Ion mobillity
Φ: Electric potential
*C is a function of time and space. “t” and “x”.*
*Φ is a function of space. “x”.*
*All remaining terms are assumed constant. (D,z, μ)*
The model I am making ignores diffusion caused by the concentration gradient (the highlighted term):
∂C/∂t=∂/∂x(**D ∂C/∂x**-zμC ∂Φ/∂x)
The following assumptions are made:
D ∂C/∂x=0
z = -1
μ = 1
This leaves us with:
∂C/∂t=∂/∂x(C ∂Φ/∂x)
Finally, performing the product rule gives the final equation.
∂C/∂t = C (∂^2Φ/∂x^2) + (∂C/∂x)(∂Φ/∂x)
Part 2: Wolfram
---------------
Mathematica’s DSolve function presents a general solution. But despite multiple attempts and combinations of boundary and initial conditions of both the concentration and potential, I can’t get DSolve to present a particular solution. The code is as follows:
1D Electrodiffusion
The purpose of this program is to model the diffusion of ions under the influence of an electric field ONLY.
The first attempts will use "DSolve"
Later attempts will use "NDSolve" if no prior attempt is successful
The following cell works to describe the partial differential equation to be solved
edeqn = D[u[t, x], t] ==
u[t, x]*D[\[CapitalPhi][x], {x, 2}] +
D[u[t, x], x]*D[\[CapitalPhi][x], x]
The following cell attempts to discern a solution to the differential equation
sol = DSolve[edeqn, u, {t, x}]
Simplify[edeqn /. sol]
This returns the general solution. I have provided a few sample of my attempts to attain a particular solution below.
The following cell shows the effect of adding an initial condition and a boundary condition for the left AND right sides of the channel.
bc = {u[t, 0] == 10, u[t, 2] == 0}
ic = u[0, x] == 8
sol = DSolve[{edeqn, bc, ic}, u, {t, x}]
The following cell attempts to discern a solution to the differential equation.
The solver has been told to solve for "u" and "\[CapitalPhi]".
Both "u" and "\[CapitalPhi]" are given as functions of "t" and "x".
edeqn = D[u[t, x], t] ==
u[t, x]*D[\[CapitalPhi][t, x], {x, 2}] +
D[u[t, x], x]*D[\[CapitalPhi][t, x], x]
sol = DSolve[edeqn, {u, \[CapitalPhi]}, {t, x}]
Boundary condition. "u". Left side.
Boundary condition. "u". Right side.
Initial condition. "u".
Boundary condition "\[CapitalPhi]". Left side.
Boundary condition "\[CapitalPhi]". Right side.
Initial condition. "\[CapitalPhi]".
bc = {u[t, 0] == 10,
u[t, 2] == 0, \[CapitalPhi][t, 0] == 5, \[CapitalPhi][t, 2] == 0}
ic = {u[0, x] == 8, \[CapitalPhi][0, x] == 0}
edeqn = D[u[t, x], t] ==
u[t, x]*D[\[CapitalPhi][t, x], {x, 2}] +
D[u[t, x], x]*D[\[CapitalPhi][t, x], x]
sol = DSolve[{edeqn, bc, ic}, {u, \[CapitalPhi]}, {t, x}]
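Since DSolve is unlikely to produce a closed form here, a hedged NDSolve sketch may be more practical: prescribe a definite potential (below, a linear one matching the boundary values above, so Φ'' = 0 and the equation reduces to pure advection) and solve numerically. Note that for a pure advection equation the boundary conditions above are over-specified, and u[0,x] == 8 is inconsistent with u[t,0] == 10 at the corner, so NDSolve will likely warn, but it typically still returns an interpolating function.

```mathematica
(* prescribed potential consistent with phi(0) = 5 and phi(2) = 0 *)
phi[x_] := 5 (1 - x/2);

nsol = NDSolve[{
    D[u[t, x], t] == u[t, x] phi''[x] + D[u[t, x], x] phi'[x],
    u[0, x] == 8, u[t, 0] == 10, u[t, 2] == 0},
   u, {t, 0, 1}, {x, 0, 2}];

(* visualize the concentration over time and space *)
Plot3D[Evaluate[u[t, x] /. First[nsol]], {t, 0, 1}, {x, 0, 2},
 AxesLabel -> {"t", "x", "C"}]
```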
I apologize for the length of the post. I've never used a forum before.
I do hope you all will consider my dilemma, and are willing and able to provide a solution. I have attached the full code of my attempts. Thank you for your time.
Kali Ellison, 2017-07-25T15:11:17Z

Plot special functions real and imaginary part?
http://community.wolfram.com/groups/-/m/t/1092218
Consider the following code:
0F1[; 1; j*Pi/2*x] * e^(j*2*Pi*x)
x ∈ [-Pi/2, +Pi/2]
The task is to visualize the real and imaginary parts. Here is how I tried it; what has to be different? Besides, I need the first three derivatives, and it didn't work like that.
Grid[
Partition[
Table[
Plot[
Evaluate[{Re[
D[Hypergeometric0F1[
1, (\[ImaginaryJ]*\[Pi]/2*x)*E^j2\[Pi]x], {x, i}]],
Im[D[
Hypergeometric0F1[
1, (\[ImaginaryJ]*\[Pi]/2*x)*E^j2\[Pi]x], {x, i}]]}],
{x, -2/\[Pi], 2/\[Pi]},
PlotRange -> Automatic,
Frame -> True,
GridLines -> Automatic,
AspectRatio -> 1,
FrameLabel -> {"x",
StringForm[
"\!\(\*SubscriptBox[\(\[InvisiblePrefixScriptBase]\), \(0\)]\)\
\!\(\*SubscriptBox[OverscriptBox[\(F\), \(~\)], \(1\)]\)^(``)(\
\[ImaginaryJ]*\[Pi]/2*x)*\!\(\*SuperscriptBox[\(\[ExponentialE]\), \
\(j2\[Pi]x\)]\)"]},
PlotLegends -> Placed[{"Re", "Im"}, {Center, Top}],
ImageSize -> 300], {i, 0, 3}], 2], Frame -> All]
Azad Kaygun, 2017-05-12T15:36:28Z

Compiling a notebook directly into LaTeX
http://community.wolfram.com/groups/-/m/t/1114655
Hi everyone,
Recently I've been writing some work up in LaTeX. As I usually work in Mathematica, I found myself going back and forth between Mathematica and my Tex editor as I adjusted figures I was rendering with Mathematica code. I began to wonder if I could set up my notebook to turn itself into LaTeX, cutting out the middleman (me!).
To achieve this I've done two things. First, I wrote a notebook that can select sections of itself and convert them to plaintext. I do this using various bits of Mathematica's front end interface. Notably, the notebook saves and deletes output cells; this means one can write an input "hello " <> "world" and trust that it will be converted to output, without having to view that output, which might be repetitive or contain a lot of superfluous scaffolding. This notebook, "Notebook to plaintext", is attached.
The second step was augmenting the above notebook to save the generated plaintext as a .tex file and automatically run pdflatex.exe on that file to produce the final output. By writing Mathematica functions that export Mathematica plots and return LaTeX that imports said plots, one can write a LaTeX document in Mathematica, the same environment in which we specify our graphics. This notebook, "Notebook to LaTeX", is also attached, but it requires pdflatex.exe, as included in a MiKTeX installation for example. The PDF output of this example is also attached.
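The export-and-compile step reduces to something like this sketch (the file name and the texString variable are hypothetical stand-ins for the notebook's generated plaintext, and pdflatex must be on the system path):

```mathematica
(* write the generated plaintext to a .tex file, then compile it *)
texString =
  "\\documentclass{article}\\begin{document}Hello\\end{document}";
Export["out.tex", texString, "Text"];

(* -interaction=nonstopmode keeps pdflatex from stopping at error prompts;
   Run returns the process exit code *)
Run["pdflatex -interaction=nonstopmode out.tex"]
```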
I had not previously worked with front end controls. As write time and run time are so close in Mathematica, we have fairly unique capacity to muck about with the structure of the work environment; automatically updating and evaluating cells. I hope this proves interesting.
David
David Gathercole, 2017-06-05T16:19:53Z

Accessing camera using .NET?
http://community.wolfram.com/groups/-/m/t/1131472
I am trying to read images from a camera using NETLink. The documentation and example code for the camera API are located at [this link](https://www.ximea.com/support/wiki/apis/XiAPINET). The code seems to work until I try to read out a parameter from the camera.
Needs["NETLink`"];
InstallNET[];
LoadNETAssembly["C:\\XIMEA\\API\\x64"];
LoadNETType["xiApi.NET.xiCam"];
myCam = NETNew["xiApi.NET.xiCam"];
out = 0;
myCam@GetParam["PRM.EXPOSURE", out];
The last line of code produces this error:
> NET::netexcptn: A .NET exception occurred: xiApi.NET.xiExc: GetParam PRM.EXPOSURE: Parameter is not supported
at xiApi.NET.xiCam.GetParam(String prm, Int32& val).
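A hedged guess at the cause: in the vendor's .NET examples, PRM.EXPOSURE looks like a constant on the xiApi.NET.PRM class rather than the literal string "PRM.EXPOSURE". Something along these lines might work (untested; the static-field access follows NETLink's ClassName`Member convention, and the class name is an assumption from the error text):

```mathematica
(* load the PRM constants class and pass the actual constant,
   not its name spelled out as a string *)
LoadNETType["xiApi.NET.PRM"];
exposure = 0;
myCam@GetParam[PRM`EXPOSURE, exposure];
exposure
```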
Any ideas what I am doing wrong?
Jeff Burns, 2017-06-30T18:02:37Z

[GIF] Septafoil (Stereographic Projection of a Septafoil Knot)
http://community.wolfram.com/groups/-/m/t/1152144
![Stereographic projection of spheres along a septafoil knot][1]
**Septafoil**
This is the same idea as [_Intertwine_][2], but with the septafoil knot $7_1$ rather than the trefoil knot. Read [the post on _Intertwine_][2] for more details on the code, which is mostly identical. The one notable change is that I've added noise to each of the frames in the animation. This is accomplished by creating a single noisy image of the right size:
noise = ImageEffect[Graphics[Background -> None, ImageSize -> 540], {"PoissonNoise", .4}];
and then combining it with each frame using `ImageMultiply`. Also, after exporting I reduced the color palette to 40 colors and dithered the frames using [gifsicle][3].
Here's the rest of the code (as with _Intertwine_, I used `MaxRecursion -> 5` for the final export, but this is too computationally intensive to use inside `Manipulate`):
Stereo3D[{x1_, y1_, x2_, y2_}] := {x1/(1 - y2), y1/(1 - y2), x2/(1 - y2)};
pqtorus[t_, θ_, p_, q_] := 1/Sqrt[2] Flatten[ReIm /@ {E^(p I (t + θ/p)), E^(q I t)}];
DynamicModule[{p = 7, q = 2, n = 50,
viewpoint = 10. {Cos[-4 π/5], Sin[-4 π/5], 0}, point, basis,
sphere, cols = RGBColor /@ {"#7AC7C4", "#F73859", "#384259"}},
point[t_, θ_] := pqtorus[t + θ, 0, p, q];
basis[t_, θ_] := NullSpace[{point[t, θ]}];
sphere[t_, θ_, ψ_, ϕ_] :=
Cos[.1] point[t, θ] + Sin[.1] Total[{Cos[ψ] Sin[ϕ], Sin[ψ] Sin[ϕ], Cos[ϕ]}*basis[t, θ]];
Manipulate[
ImageMultiply[
ParametricPlot3D[
Evaluate@
Table[Stereo3D[sphere[t, θ, ψ, ϕ]], {t, 0., 2 π - 2 π/(q n), 2 π/(q n)}],
{ψ, 0, π}, {ϕ, 0, 2 π},
Mesh -> None, PlotRange -> 4, ViewPoint -> viewpoint,
ViewAngle -> π/80, ImageSize -> 540, Boxed -> False,
PlotStyle -> White, ViewVertical -> {0, 0, 1}, Axes -> None,
Background -> cols[[-1]],
Lighting -> {{"Point", cols[[1]], 2 {Cos[7 π/10], Sin[7 π/10], 0}},
{"Point", cols[[2]], 2 {Cos[17 π/10], Sin[17 π/10], 0}} ,
{"Ambient", cols[[-1]], viewpoint}}],
noise],
{θ, 0, 2 π/(q n)}]
]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=knots65a.gif&userId=610054
[2]: http://community.wolfram.com/groups/-/m/t/1144941
[3]: http://www.lcdf.org/gifsicle/
Clayton Shonkwiler, 2017-07-24T15:30:57Z

Using (MinimalPolynomial[x^(1/x) - 1, b] == (1 + b)^x - x)
http://community.wolfram.com/groups/-/m/t/1152134
The MRB constant is defined at [http://mathworld.wolfram.com/MRBConstant.html][1].
In looking for a faster method of calculating digits of the MRB constant, Sum[(-1)^x (x^(1/x) - 1)],
by the seemingly difficult method of solving minimal polynomials, I came across the following
where it seems Table[Expand[(1 + b)^x-x], {x, 1, 145}] = Table[MinimalPolynomial[x^(1/x)-1, b], {x, 1, 145}] for all x
except for "numbers of the form (kp)^p for prime p and k=1,2,3,...," OEIS [A097764][2] :
(real = Table[MinimalPolynomial[-1 + x^(1/x), b], {x, 1, 145}]);
(guess = Table[Expand[(1 + b)^x]-x, {x, 1, 145}]);
real - guess // TableForm (*shows the equality for all but OEIS [A097764][3] *)
This equality could become very useful because as x gets large the minimal polynomial of x^(1/x)-1 becomes exceedingly difficult to compute!
Here is how this equality can be used:
Partial sum(s) of the MRB constant can be found through a sum of NSolves,
y = 1000; N[-y - Sum[b /. (NSolve[(1 + b)^n == n, b, Reals][[1]]), {n, 1, y}]]
giving a more correct result than
NSum[(-1)^n (n^(1/n) - 1), {n, 1, 1000}]
because it removes the imaginary part given by E^(I*Pi x) i.e. (-1)^x.
[1]: http://mathworld.wolfram.com/MRBConstant.html
[2]: https://oeis.org/A097764
[3]: https://oeis.org/A097764
Marvin Ray Burns, 2017-07-24T15:11:14Z

[✓] Inverse Z-Transform of z/(z - a) with different region of convergence?
http://community.wolfram.com/groups/-/m/t/1147972
Hello everyone. I tried to get the inverse Z transform of z/(z - a) with different ROC.
InverseZTransform[z/(z - a), z, n, Assumptions -> Abs[z] > Abs[a]]
InverseZTransform[z/(z - a), z, n, Assumptions -> Abs[z] < Abs[a]]
Both cases give me the same output: a^n. Actually the inverse Z-transform is
(a^n) HeavisideTheta[n] when ROC is Abs[z] > Abs[a],
and is
(-a^n) HeavisideTheta[-n-1] when ROC is Abs[z] < Abs[a].
How can I get these outputs?
Thank you very much.
Gennaro Arguzzi, 2017-07-17T19:42:48Z

Pairs Trading with Copulas
http://community.wolfram.com/groups/-/m/t/1111149
**Introduction**
In a previous post, [Copulas in Risk Management][1], I covered the theory and applications of copulas in the area of risk management, pointing out the potential benefits of the approach and how it could be used to improve estimates of Value-at-Risk by incorporating important empirical features of asset processes, such as asymmetric correlation and heavy tails.
In this post I take a different tack, to show how copula models can be applied in pairs trading and statistical arbitrage strategies.
This is not a new concept - it stems from when copulas began to be widely adopted in financial engineering, risk management and credit derivatives modeling. But it remains relatively under-explored compared to more traditional techniques in this field. Fresh research suggests that it may be a useful adjunct to the more common methods applied in pairs trading, and may even be a more robust methodology altogether, as we shall see.
**Traditional Approaches to Pairs Trading**
Researchers often use simple linear correlation or distance metrics as the basis for their statistical arbitrage strategies. The problem is that statistical relationships may be nonlinear or nonstationary. Correlations (and betas) that have fluctuated in a defined range over a considerable period of time may suddenly break down, producing substantial losses.
A more sophisticated technique is the Kalman Filter, which can be used as a means of dynamically updating the estimated correlation or relative beta between pairs (or portfolios) of stocks, a technique I have written about in the post Statistical Arbitrage with the Kalman Filter.
Another commonly employed econometric technique relies on cointegration relationships between pairs or small portfolios of stocks, as described in my post on Developing Statistical Arbitrage Strategies Using Cointegration. The central idea is that, in theory, cointegration is a more stable and reliable basis for assessing the relationship between stocks than correlation.
Researchers often use a combination of methods, for example by requiring stocks to be both cointegrated and with stable, high correlation throughout the in-sample formation period in which betas are estimated.
In all these cases, however, the challenge is that, no matter how they are derived or estimated, statistical relationships have a tendency towards instability. Even a combination of several of these methods often fails to detect signs of a breakdown in statistical relationships. There is even evidence that cointegration models are no more robust or reliable than simple correlations. For example, in his paper On the Persistence of Cointegration in Pairs Trading, Matthew Clegg assesses the persistence of cointegration among U.S. equities over the calendar years 2002-2012, comprising over 860,000 pairs in total. He concludes that “the evidence does not support the hypothesis that cointegration is a persistent property”.
**Pairs Trading in the S&P500 and Nasdaq Indices**
To illustrate the copula methodology I will use an equity pair comprising the S&P 500 and Nasdaq indices. These are not tradable assets, but the approach is the same regardless and will serve for the purposes of demonstrating the technique.
We begin by gathering daily data on the indices and calculating the log returns series. We will use the data from 2010 to 2015 as the in-sample “formation” period, and test the strategy out of sample on data from Jan 2016-Feb 2017.
![enter image description here][2]
![enter image description here][3]
![enter image description here][4]
![enter image description here][5]
![enter image description here][6]
The chart below shows a scatter plot of daily percentage log returns on the SP500 and NASDAQ indices.
![enter image description here][7]
![enter image description here][8]
**MODELING**
**Marginal Distribution Fitting**
In the post Copulas in Risk Management it was shown that the returns series for the two indices were well-represented by Student T distributions. I replicate that analysis here, estimating the parameters by maximum likelihood and then testing each distribution for goodness of fit. The Student T distribution appears to provide an adequate fit for both series.
![enter image description here][9]
![enter image description here][10]
![enter image description here][11]
![enter image description here][12]
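The original analysis above is carried out in Mathematica (the code is shown in the images). For readers who want to reproduce the marginal-fitting step elsewhere, here is a rough Python/SciPy sketch of the same idea: fit a Student T distribution to each returns series by maximum likelihood and run a goodness-of-fit test. The return series below are synthetic stand-ins, not the actual index data.

```python
# Illustrative sketch only: fit Student-t marginals by MLE and test fit.
# The "index returns" here are simulated placeholders for SP500/NASDAQ data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sp500 = stats.t.rvs(df=4, scale=0.010, size=1500, random_state=rng)
nasdaq = stats.t.rvs(df=4, scale=0.012, size=1500, random_state=rng)

for name, r in [("SP500", sp500), ("NASDAQ", nasdaq)]:
    df, loc, scale = stats.t.fit(r)                  # maximum-likelihood fit
    ks = stats.kstest(r, "t", args=(df, loc, scale)) # Kolmogorov-Smirnov test
    print(f"{name}: df={df:.2f}, loc={loc:.5f}, scale={scale:.5f}, "
          f"KS p-value={ks.pvalue:.3f}")
```

A high KS p-value is consistent with (though does not prove) an adequate fit, mirroring the conclusion drawn from the Mathematica output above.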
**Copula Calibration**
We next calibrate the parameters for the Gaussian copula by maximum likelihood, from which we derive the joint distribution for returns in the two indices via Sklar’s decomposition. This will be used directly in the pairs trading algorithm. As pointed out previously, there are several alternatives to MLE, including the Method of Moments, for example, and these are listed in the Mathematica documentation for the EstimatedDistribution function.
![enter image description here][13]
![enter image description here][14]
![enter image description here][15]
![enter image description here][16]
![enter image description here][17]
![enter image description here][18]
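As a language-agnostic companion to the Mathematica calibration shown above, the sketch below estimates a Gaussian copula by the common two-step (pseudo-MLE) route: fit each marginal, map the data to uniforms via the fitted CDFs, convert to normal scores, and estimate the copula correlation. The data are simulated stand-ins, and the two-step estimator is a standard approximation to full MLE, not necessarily the exact procedure used in the post.

```python
# Illustrative two-step Gaussian-copula calibration on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
z = rng.multivariate_normal([0, 0], [[1, 0.9], [0.9, 1]], size=1500)
sp500, nasdaq = 0.010 * z[:, 0], 0.012 * z[:, 1]   # stand-in return series

def normal_scores(r):
    params = stats.t.fit(r)              # step 1: marginal MLE fit
    u = stats.t.cdf(r, *params)          # probability-integral transform
    return stats.norm.ppf(u), params     # normal scores for the copula

s1, params1 = normal_scores(sp500)
s2, params2 = normal_scores(nasdaq)
rho = np.corrcoef(s1, s2)[0, 1]          # step 2: copula correlation estimate
print(f"estimated Gaussian-copula correlation: {rho:.3f}")
```

With the marginals and the copula correlation in hand, Sklar’s theorem gives the joint distribution used in the trading rule.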
**Pairs Trading with the Copula Model**
Once we have successfully fitted marginal distributions for the two series and a copula distribution to describe their relationship, we are able to derive the joint distribution. This means that we can directly calculate the joint probability of each pair of data observations. So, for instance, we find that the probability of a return in the S&P500 of 5% or more, together with a return in the Nasdaq of 1% or higher, is approximately 0.2%:
![enter image description here][19]
![enter image description here][20]
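The joint upper-tail probability quoted above can be computed from the fitted model by inclusion-exclusion: P(X ≥ x, Y ≥ y) = 1 − F₁(x) − F₂(y) + C(F₁(x), F₂(y)), where C is the copula CDF. The sketch below does this in Python with hypothetical marginal and copula parameters (the post's actual fitted values appear only in the images), so its numerical result is illustrative, not the 0.2% figure above.

```python
# Joint exceedance probability under a Gaussian copula with t marginals.
# All parameter values below are hypothetical placeholders.
import numpy as np
from scipy import stats

t_sp = (4.0, 0.0005, 0.007)   # assumed (df, loc, scale) for SP500 returns
t_nq = (4.0, 0.0006, 0.009)   # assumed (df, loc, scale) for NASDAQ returns
rho = 0.9                     # assumed copula correlation
cop = stats.multivariate_normal(cov=[[1, rho], [rho, 1]])

def joint_exceedance(x, y):
    """P(SP500 >= x and NASDAQ >= y) via inclusion-exclusion."""
    u, v = stats.t.cdf(x, *t_sp), stats.t.cdf(y, *t_nq)
    c_uv = cop.cdf([stats.norm.ppf(u), stats.norm.ppf(v)])  # C(u, v)
    return 1 - u - v + c_uv

p = joint_exceedance(0.05, 0.01)
print(f"P(SP500 >= 5% and NASDAQ >= 1%) = {p:.4%}")
```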
To test the model, we calculate the daily returns for the two indices during the out-of-sample period from Jan 2016 to Feb 2017 and compute the probability of each pair of daily observations. On days where we see observation pairs with abnormally low estimated probabilities, we trade the pair accordingly over the following day.
Naturally, there are multiple issues with this simplistic approach. To begin with, the indices are not tradable and if they were we would have to account for transaction costs including the bid-offer spread. Then there is the issue of determining where to set the probability threshold for initiating a trade. We also need to decide on criteria to try to optimize the trade holding period or trade exit rules. And, finally, we need to think about trade expression: for example, we might attempt to trade both legs passively, perhaps crossing the spread to fill the remaining leg when an order for one of the pairs is filled.
But none of these issues are specific to the copula approach - they apply equally to all of the methods discussed previously. So, for the sake of clarity, I am going to ignore them. In this analysis I pick a threshold probability level of 15% and assume we hold the trade for one day only, opening and closing the trade at the start and end of the day after we receive a signal. In computing the returns for each trade I ignore any transaction costs.
First, we gather data for the test period:
![enter image description here][21]
Next, we use the estimated joint distribution to compute the probability of each daily observation of index returns. We gather the daily returns series and associated probability series into a single temporal variable:
![enter image description here][22]
![enter image description here][23]
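One plausible reading of the probability measure described above is the chance, under the fitted joint model, of seeing a pair of returns at least as extreme as the observed pair in the same joint direction; divergent moves between two highly correlated indices then score low and trigger signals. The sketch below implements that reading on synthetic data with assumed parameters; the post's exact definition is in the Mathematica code images, so treat this purely as an approximation.

```python
# Hedged sketch of the daily probability series and 15% signal threshold,
# on simulated data with assumed marginal/copula parameters.
import numpy as np
from scipy import stats

rho = 0.9
t_sp = (4.0, 0.0, 0.007)     # assumed t-marginal parameters
t_nq = (4.0, 0.0, 0.009)
cop = stats.multivariate_normal(cov=[[1, rho], [rho, 1]])

def pair_probability(x, y):
    """P(a pair at least as extreme as (x, y), lower or upper joint tail)."""
    u, v = stats.t.cdf(x, *t_sp), stats.t.cdf(y, *t_nq)
    lower = cop.cdf([stats.norm.ppf(u), stats.norm.ppf(v)])  # P(X<=x, Y<=y)
    upper = 1 - u - v + lower                                # P(X>=x, Y>=y)
    return min(lower, upper)

rng = np.random.default_rng(2)
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=250)
rets = np.column_stack([0.007 * z[:, 0], 0.009 * z[:, 1]])   # stand-in data

probs = np.array([pair_probability(x, y) for x, y in rets])
signal_days = np.where(probs < 0.15)[0]   # days that would trigger a trade
print(f"{len(signal_days)} signal days out of {len(rets)}")
```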
We plot the time series of index returns and associated probabilities as follows:
![enter image description here][24]
![enter image description here][25]
![enter image description here][26]
![enter image description here][27]
**Trade Signal Generation**
The table below lists the index returns and joint probabilities over the first several days of the series. The sequence of trade signals is as follows:
After a very low probability reading for 2016/1/4, we take equally weighted positions short the S&P500 Index and long the Nasdaq index on 2016/1/5. We close the position at the end of the day, producing a total return of 0.44%. Similar signals are generated on 2016/1/6, 2016/1/7, 2016/1/8, 2016/1/13, 2016/1/15 and 2016/1/20 (assuming a 15% probability threshold). We take the reverse trade (Buy the S&P500, Sell the Nasdaq) on only one occasion in the initial part of the sample, on 2016/1/14.
![enter image description here][28]
![enter image description here][29]
**Pairs Trading Strategy Results**
We are now ready to apply the trading algorithm to the entire sample and chart the resulting P&L.
![enter image description here][30]
![enter image description here][31]
![enter image description here][32]
![enter image description here][33]
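The backtest mechanics described in the text (15% threshold, equal weights, one-day holding period, short the outperformer and long the underperformer, no transaction costs) can be sketched as follows. Everything here runs on simulated stand-in data, so the resulting number bears no relation to the strategy's actual P&L shown above.

```python
# Minimal sketch of the one-day-holding pairs backtest on synthetic inputs.
import numpy as np

rng = np.random.default_rng(3)
n = 250
z = rng.multivariate_normal([0, 0], [[1, 0.9], [0.9, 1]], size=n)
sp, nq = 0.007 * z[:, 0], 0.009 * z[:, 1]   # stand-in daily returns
probs = rng.uniform(0, 1, size=n)           # stand-in joint-probability series

pnl = np.zeros(n)
for t in range(n - 1):
    if probs[t] < 0.15:                     # 15% threshold from the text
        # Short the index that outperformed today, long the other,
        # equal weights, held for the following day only.
        sign = 1.0 if sp[t] > nq[t] else -1.0
        pnl[t + 1] = 0.5 * (-sign * sp[t + 1] + sign * nq[t + 1])

print(f"total strategy return: {pnl.sum():.2%}")
```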
**Comment on Strategy Performance**
The performance of the strategy over the out-of-sample period, at just under 4%, can hardly be described as stellar. But this is largely due to the dampening of volatility seen in both indices over the last year, which is reflected in the progressively lower volatility of joint probabilities over the course of the test period. Such variations in signal frequency and trading strategy performance are commonplace in any statistical arbitrage strategy, regardless of the methodology used to generate the signals.
The obvious remedy is to create similar trading algorithms for a large number of pairs and combine them together in an overall portfolio that will produce a sufficient number of signals and trading opportunities to make the performance sufficiently attractive. One of the benefits of statistical arbitrage strategies developed in this way is their highly efficient use of capital, since the combination of long and short positions minimizes the margin requirement for each trade and for the portfolio as a whole.
Finally, it is worth noting here that, in principle, one could easily create similar copula-based arbitrage strategies for triplets, quadruplets, or any (reasonably small) number of assets. The principle restriction lies in the increasing difficulty of estimating the copulas and joint densities, given the slow convergence of the MLE method.
**Recent Research**
In the last few years several researchers have begun exploring the application of copulas as a basis for statistical arbitrage. In their paper “Nonlinear dependence modeling with bivariate copulas: Statistical arbitrage pairs trading on the S&P 100”, Krauss and Stubinger apply the copula approach to pairs drawn from the universe of S&P 100 index constituents, with promising results. They conclude that their “findings pose a severe challenge to the semi-strong form of market efficiency and demonstrate a sophisticated yet profitable alternative to classical pairs trading”.
In the paper by Rad, et al., cited below, the researchers compare several different methods for pairs trading strategies. They find that all of the tested methods produce economically significant returns, but only the performance of the copula-based approach remains consistent after 2009. Further, the copula method shows better performance for its unconverged trades compared to those of the other methods.
**Conclusion**
The application of copulas to statistical arbitrage strategies is an interesting and relatively under-explored alternative to the usual distance and correlation based methods. In addition to its sound theoretical underpinnings, the copula approach appears to offer greater consistency in performance compared to traditional techniques, whose efficacy has declined since the financial crisis of 2008/09. The benefits of the approach must be weighed against its greater computational complexity, although with the growth in the power of modeling software in recent years this represents less of an obstacle than it did previously.
**References**
Clegg, M., On the Persistence of Cointegration in Pairs Trading, Jan. 2014
Krauss, C. and Stubinger, J., Nonlinear dependence modeling with bivariate copulas: Statistical arbitrage pairs trading on the S&P 100, Institut für Wirtschaftspolitik und Quantitative Wirtschaftsforschung, No 15/2015.
Rad, H., Low, R. K. Y. and Faff, R., The profitability of pairs trading strategies: distance, cointegration, and copula methods, Quantitative Finance, DOI: 10.1080/14697688.2016.1164337, 2016
[1]: http://jonathankinlay.com/2017/01/copulas-risk-management/
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_1.gif&userId=773999
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_2.png&userId=773999
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7037Fig1.png&userId=773999
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_3.gif&userId=773999
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_4.gif&userId=773999
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_5.png&userId=773999
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_6.gif&userId=773999
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_7.gif&userId=773999
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_8.png&userId=773999
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_10.gif&userId=773999
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4282Fig2.png&userId=773999
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_16.gif&userId=773999
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_17.png&userId=773999
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_18.png&userId=773999
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_19.gif&userId=773999
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_20.png&userId=773999
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_21.gif&userId=773999
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_22.png&userId=773999
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_23.png&userId=773999
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_24.gif&userId=773999
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_25.gif&userId=773999
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_26.gif&userId=773999
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_27.png&userId=773999
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_28.gif&userId=773999
[26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_29.png&userId=773999
[27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_30.gif&userId=773999
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_31.gif&userId=773999
[29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2430Fig3.png&userId=773999
[30]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_32.gif&userId=773999
[31]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_33.gif&userId=773999
[32]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_34.png&userId=773999
[33]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_35.gif&userId=773999
Jonathan Kinlay, 2017-05-30T17:41:07Z