Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Wolfram Science, sorted by most active.

Need help figuring out how to find c1 and c2 for differential equation
http://community.wolfram.com/groups//m/t/1190358
So the problem says, "Verify that the indicated function is a solution of the given DE and find c1 and c2: ...
The solution: y = c1 e^x + c2 e^(-x) ...
The DE and conditions: y'' - y = 0, y(1) = 5, y'(1) = -5." Now I used DSolve to verify that the indicated function is a solution of the DE, but I'm not sure how to find c1 and c2. I've tried DSolveValue and couldn't get it to work, so maybe I'm using the wrong function or I just entered it incorrectly. I'm new to Mathematica, so I have no idea what to use. I've tried searching other places but couldn't find anything.
Brendan Isaac, 2017-09-22T21:11:16Z

Account for this nonlinearity in FourierTransform?
http://community.wolfram.com/groups//m/t/1189490
Define two functions:
functionWithConditional[t_] := If[t < 0, 0, t]
functionWithSign[t_] := (Sign[t] + 1)*t/2
Mathematica considers them equal in this sense:
In[3]:= FullSimplify[functionWithConditional[t] - functionWithSign[t]]
Out[3]= 0
But the difference between their transforms is nonzero:
In[5]:= FourierTransform[functionWithConditional[t], t, s] -
 FourierTransform[functionWithSign[t], t, s]
Out[5]= I Sqrt[\[Pi]/2] Derivative[1][DiracDelta][s]
And the inverse transform of the difference is nonzero:
In[6]:= InverseFourierTransform[%, s, t]
Out[6]= -(t/2)
Which of these results is correct, and which is incorrect?
In[7]:= FourierTransform[functionWithConditional[t], t, s]
Out[7]= -(1/(Sqrt[2 \[Pi]] s^2))
In[8]:= FourierTransform[functionWithSign[t], t, s]
Out[8]= -(1/(Sqrt[2 \[Pi]] s^2)) - I Sqrt[\[Pi]/2] Derivative[1][DiracDelta][s]
Or are they somehow both correct, despite Mathematica claiming the functions are equal? Why should two equal functions have unequal transforms?
Joe Donaldson, 2017-09-22T04:09:18Z

Get time series data from FitBit service connection?
http://community.wolfram.com/groups//m/t/1189146
Working through the basic examples in the Wolfram documentation related to the FitBit service connection. I'm able to get a proper ServiceObject from:
**fitbit = ServiceConnect["Fitbit", "New"]**
Running ServiceExecute for the "ActivityData" requests as shown in the example returns results as expected. However, running a similar ServiceExecute on "StepsPlot" does not return any results. I've gone through several other available requests (e.g., "SleepDate", "UserDate", "CaloriesTimeSeries", etc.) and what I'm finding is that requests that take "StartDate" and "EndDate" parameters don't seem to work for me.
I'm running Mathematica "11.2.0 for Mac OS X x86 (64-bit) (September 11, 2017)". Again, I'm not really doing anything fancy other than copy-and-pasting the examples from the documentation and slightly adjusting date ranges to reflect "real" data for my case. BTW, leaving the exact dates from the example unchanged seems to make no difference in behavior (e.g., I still get back a symbolic result).
Any ideas on what I'm doing wrong or missing here?
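For concreteness, the failing call has this shape (the request and parameter names are the ones from the documentation examples mentioned above; the dates are placeholders, and whether strings or DateObject expressions are expected is an assumption on my part):

```mathematica
(* connect as above; "Fitbit" and "New" are from the documentation example *)
fitbit = ServiceConnect["Fitbit", "New"];

(* a date-ranged request of the kind that comes back symbolic for me *)
ServiceExecute[fitbit, "StepsPlot",
 {"StartDate" -> DateObject[{2017, 9, 1}],
  "EndDate" -> DateObject[{2017, 9, 14}]}]
```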
Below is a screen shot of my notebook for reference.
![FitBit Example Failing][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fitbit_fail.png&userId=1189127
macrod, 2017-09-21T03:42:42Z

Basic Mathematica Help?
http://community.wolfram.com/groups//m/t/1190527
resolved!
David Joseph, 2017-09-22T19:20:29Z

Picking the elements of a matrix in increasing numerical values for loop op
http://community.wolfram.com/groups//m/t/1190072
Dear Friends,
I have a programming problem which seems to be a case for If/For functions, but I am not able to formulate it properly. The attached file contains a simplified script which was my first thought, but it can work only if I somehow find a way to pick the elements of a matrix in increasing numerical value. I could not think of a way out and will appreciate any suggestions. Following is a short description of the problem. Thanks.
I have two 6 x 6 matrices G and A whose elements are functions of an integer m taking values 0 to 5. At the beginning of the loop, m = 0.
G[m_] = {{g11, g12, g13, g14, g15, g16}, ..., {g61, g62, g63, ..., g66}}
A[m_] = {{a11, a12, a13, a14, a15, a16}, ..., {a61, a62, a63, ..., a66}}
There is another six-element row matrix H[m_] = {h1, h2, h3, h4, h5, h6}.
Now I want to perform a loop operation on the system such that the elements of matrix G, taken in order of increasing numerical value, are compared with the corresponding elements of A, so that for the element [i, j]:
If g[i, j] < A[i, j], then the next higher-magnitude g[i, j] is selected.
Or if
g[i, j] > A[i, j], then m increases by 1 (m++) and hence H[m] becomes H[m+1], where
H[m++] = ReplacePart[H[m], {j -> hj + hi, i -> 0}]
Now for the new value of m, G and A will have new elements, and the cycle will continue till m = 5; at that point one element of the H[5] matrix will be h1+h2+h3+h4+h5+h6, with all others zero.
S G, 2017-09-22T18:43:24Z

Elevation Data Import?
http://community.wolfram.com/groups//m/t/1188722
I'm trying to produce 3D models in the Wolfram Language of various mountains around the world. The GeoElevationData[] function does not have enough resolution for a good looking model at the scale of a mountain, so I'm searching for online data sets that have more resolution and can be imported into Wolfram. For mountains in the United States, the United States Geological Survey offers ArcGRID files that Wolfram can import. This has worked fabulously, producing models such as this:
![enter image description here][1]
(Can you tell what mountain it is?)
But when I try to find data for mountains outside the US, like Mt. Fuji or Mt. Everest, I get overwhelmed by the abundance of file formats that, as far as I can tell, won't work in Wolfram. I sense that there is a way to do this. Has anyone solved this problem before? I could sure use some help here.
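For reference, this is the kind of built-in call I've been working from (a sketch only; the entity, radius, and GeoZoomLevel values here are just what I happened to try):

```mathematica
(* elevation over a disk around the summit; GeoZoomLevel controls resolution *)
elev = QuantityMagnitude[
   GeoElevationData[
    GeoDisk[Entity["Mountain", "MountEverest"], Quantity[20, "Kilometers"]],
    GeoZoomLevel -> 10]];

(* quick surface rendering of the sampled grid *)
ListPlot3D[Reverse[elev], Mesh -> None, PlotRange -> All]
```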
Thanks in advance,
Mark Greenberg
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170920at8.14.21AM.png&userId=788861
Mark Greenberg, 2017-09-20T15:35:58Z

Computing with Dataset[]: A reductionist approach
http://community.wolfram.com/groups//m/t/1189385
Dataset[] is an amazingly powerful function with excellent memory use and speed. Dataset[] obtains these virtues at the expense of an idiosyncratic programming language. Once understood, the language is very useful. However, understanding this language requires effort and a reductionist approach.
The text and examples in the attached file are intended for a programmer who has worked through the examples in Mathematica's Dataset[] documentation. They are intended to convey an approach to Dataset[] programming that can be successfully applied to most or all Dataset[] problems. The material is intended to be useful to all programmers, including business and financial analysts, who frequently find their data sources to be rectangular arrays in .csv or Excel-family formats. The material contains a section on converting such lists to Dataset[]. The general approach is not restricted to rectangular arrays, but only rectangular arrays are considered below.
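As a minimal sketch of the rectangular-array case just described (the column names and rows here are invented for illustration), a header row plus data rows can be converted with AssociationThread:

```mathematica
(* a rectangular array as it might arrive from Import["file.csv"] *)
rows = {{"name", "qty", "price"},
   {"apple", 3, 0.50}, {"pear", 2, 0.75}};

(* thread the header over each data row, then wrap in Dataset *)
ds = Dataset[AssociationThread[First[rows], #] & /@ Rest[rows]];

ds[Total, "qty"]  (* query the "qty" column; gives 5 *)
```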
Corrections from Dataset[] implementers are welcome, as is identification of typographic or programming errors.
Bill Lewis, 2017-09-21T20:16:18Z

Using Manipulate with ListLinePlot data
http://community.wolfram.com/groups//m/t/1190455
I am attempting to plot the frequency-amplitude response for a simple Duffing problem. I have a range defined for gamma, which means that a, the amplitude, is known. Sigma is the frequency of the system and is a function of the amplitude and gamma; since gamma is defined as a range, the amplitude is known and thus we can find sigma. I plotted the amplitude vs. the frequency. I now want to wrap the plot with Manipulate so that I can vary the magnitude of the force F, the damping parameter c, and the nonlinearity mu. This is the code I have so far. The plot generates with the slider bars, but when I press play the plot remains static. Please help. Thank you.![enter image description here][1]
![enter image description here][2]
\[Omega] = 3.15;
\[Gamma] = Range[0.001, 3.1415, 0.001];
c = 0.01;
F = 0.1;
\[Mu] = 5;
a = (F*Sin[\[Gamma]])/(2*c*\[Omega]);
\[Sigma] = (3*\[Mu]*a^2)/(8*\[Omega]) - (F*Cos[\[Gamma]])/(
   2*a*\[Omega]);
data = Transpose@{\[Sigma], a};
Manipulate[
 ListLinePlot[data, PlotRange -> {{-1, 1}, {0, 1}},
  AxesLabel -> {sigma, amplitude},
  PlotLabel -> "MMS Duffing Frequency Response"], {{\[Mu], 1,
   "Nonlinearity"}, 1, 5}, {{F, 0.1, "Force"}, 0.1,
  1}, {{c, 0.01, "Damping"}, 0.01, 1}]
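If the problem is what it looks like, namely that data is computed once outside Manipulate so the sliders never feed back into it, then recomputing the curve inside the Manipulate body should make the controls take effect. A sketch under that assumption, reusing \[Omega] and \[Gamma] as defined above:

```mathematica
Manipulate[
 Module[{a, \[Sigma]},
  (* recompute amplitude and frequency from the current slider values *)
  a = (F*Sin[\[Gamma]])/(2*c*\[Omega]);
  \[Sigma] = (3*\[Mu]*a^2)/(8*\[Omega]) - (F*Cos[\[Gamma]])/(2*a*\[Omega]);
  ListLinePlot[Transpose@{\[Sigma], a},
   PlotRange -> {{-1, 1}, {0, 1}},
   AxesLabel -> {"sigma", "amplitude"},
   PlotLabel -> "MMS Duffing Frequency Response"]],
 {{\[Mu], 1, "Nonlinearity"}, 1, 5},
 {{F, 0.1, "Force"}, 0.1, 1},
 {{c, 0.01, "Damping"}, 0.01, 1}]
```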
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture1.JPG&userId=1190441
Christopher Reyes, 2017-09-22T16:18:12Z

Software Developers Needed
http://community.wolfram.com/groups//m/t/1190055
Designing a Learn to Read App
Adaptable to the student's learning style and level of education (K-12).
Two learning modes: a Game Style Mode of learning and a Regular Style Mode of learning, with
each mode having settings that can modify the appearance and speed of the learning courses.
App Will also have Self Testing features.
App will also have two versions.
A Free Version for any of the millions of children who have no access to schools or
education, and a Paid Professional Version for schools, educators, and home schools,
which will have 24/7 support and fully adaptable interactive functions. Eventually the Learn
to Read App will be translated into other languages, with several different female and male
voices to choose from (ESL).
App will also operate on multiple operating systems, online and offline.
http://www.basicknowledge101.com
Sample Learn to Read App
https://play.google.com/store/apps/details?id=com.earlystart.android.monkeyjunior&hl=en
Howard Polley, 2017-09-22T17:09:31Z

Is a list of 40000 english words too big for Wolfram Dev. Plat. Free Plan?
http://community.wolfram.com/groups//m/t/1189919
I just started learning the Wolfram Language on the Wolfram Development Platform Free Plan.
I tried to count the number of syllables in English words.
Fortunately, WordList[] gives 40127 common English words.
What I want to do in one line is
Counts[Map[WordData[#, "Hyphenation"] &, WordList[]] // Flatten] // WordCloud
It didn't work.
The same task, written step by step, is
WordData["syllabicate", "Hyphenation"]
words = WordList[]
syllables = Map[ WordData[#, "Hyphenation" ]&, words ]//Flatten
counts = Counts[syllables] //Sort//Reverse
WordCloud[counts]
The Map function (3rd line) didn't return any result. I don't think 40000 words is too big for the Wolfram Language.
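One way to test whether size is the issue (a sketch; the sample size of 1000 is arbitrary, and DeleteMissing is there because WordData can return Missing values for some words, which is an assumption on my part):

```mathematica
(* run the same pipeline on a random subsample of the word list *)
sample = RandomSample[WordList[], 1000];
syllables = DeleteMissing[Flatten[Map[WordData[#, "Hyphenation"] &, sample]]];
WordCloud[ReverseSort[Counts[syllables]]]
```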
Is there something wrong with my code? Or is the free plan not enough to do this?
Roman SUZUKI, 2017-09-22T06:09:32Z

Isoperiodic time helices and 3D models
http://community.wolfram.com/groups//m/t/1139696
## Introduction ##
In general, the period of an oscillation can depend on a variable parameter such as energy, here $\alpha$. Two distinct oscillators of variable energy are then called *isoperiodic* whenever their periods are equal for all values of $\alpha$: $T_{1}(\alpha)=T_{2}(\alpha)$. Even in the simple setting of Hamiltonian oscillation, there is a wide range of possibilities, as explored in the following Demonstration:
http://demonstrations.wolfram.com/IsoperiodicPotentialsViaSeriesExpansion
[![enter image description here][1]](http://demonstrations.wolfram.com/IsoperiodicPotentialsViaSeriesExpansion)
This Demonstration focuses solely on the construction of isoperiodic potentials, which are curves in two dimensions. Exact solutions of the time-dependent oscillation make for beautiful geometry in a three-dimensional space where time runs along the vertical axis, above the horizontal phase-space plane. Using the Morse and Pöschl–Teller example, as in the Demonstration above, we construct the following models and export them to Shapeways:
https://www.shapeways.com/product/GXBPX8F24/morseoscillatortimehelices?optionId=62983286
[![enter image description here][2]](https://www.shapeways.com/product/GXBPX8F24/morseoscillatortimehelices?optionId=62983286)
https://www.shapeways.com/product/BB74864YX/poschltelleroscillatortimehelices?optionId=62983299
[![enter image description here][3]](https://www.shapeways.com/product/BB74864YX/poschltelleroscillatortimehelices?optionId=62983299)
When the printed models stand side by side on a flat surface, the uppermost level sets are at equal heights, a stunning and tangible demonstration of isoperiodicity. Contrasting the Demonstration with the models, it's rewarding to imagine that the continuous deformation of potentials implies a continuous deformation between the two models. Expect to hear more about symmetry breaking in the near future, but for now we will discuss an example where symmetry is added rather than broken!
## Simple Pendulum ( Again ) ##
First we construct the phase-space trajectory by applying series inversion as in numerous previous references (cf. [W.C. 1][4], [W.C. 2][5]). We then run a couple of validation tests to make sure the general primitives look roughly okay:
vSet[n_, m_] :=
 Map[Range[n] /. Append[Rule @@ # & /@ Tally[#], x_Integer :> 0] &,
  DeleteCases[
   Select[IntegerPartitions[n + m], Length[#] > m - 1 &], {n + 1,
    1 ...}]]
GExp[n_] := y*Total[y^# g[#] & /@ Range[0, n - 1]]
gCalc[0, _] = 1;
gCalc[n_, m_] := With[{vs = vSet[n, m]},
  Total@ReplaceAll[Times[1/m, Multinomial @@ #, c[Total[#] - m],
      Times @@ Power[gSet[#] & /@ Range[0, n - 1], #]] & /@ vs,
    {c[0] -> 1}]]
MultinomialExpand[n_, m_] := Module[{},
  Clear@gSet; Set[gSet[#], Expand@gCalc[#, m]] & /@ Range[0, n - 1];
  Expand[GExp[n + 1] /. g[n2_] :> gCalc[n2, m]]]
\[Psi]Test = MultinomialExpand[10, 2] /. c[x_] :> c[x] Q^(x + 2);
TrigReduce[Normal@Series[
   (p^2 + q^2 + Total[c[#] q^(# + 2) & /@ Range[5]]
     ) /. {p -> \[Psi]Test Sin[\[Phi]], q -> \[Psi]Test Cos[\[Phi]]
     } /. Q -> Cos[\[Phi]], {y, 0, 5}]]
SameQ[
 With[{exp = TrigReduce[Normal@Series[
       Q D[\[Psi]Test, Q]/\[Psi]Test - 1,
       {y, 0, 2}] /. Q -> Cos[\[Phi]]]},
  Expand[Coefficient[exp, y, #] & /@ Range[0, 2]]],
 With[{exp = TrigReduce[Normal@Series[
       D[
         Expand[(1/2)*\[Psi]Test^2 /.
            y -> (2 \[Alpha])^(1/2)], \[Alpha]] /. \[Alpha] -> (1/2) y^2,
       {y, 0, 2}] /. Q -> Cos[\[Phi]]]},
  Expand[Coefficient[exp, y, #] & /@ Range[0, 2]]]]
The first prints $y^2$, suggesting the correct series inversion. The second prints "True", suggesting that the time dependence is calculated correctly.
All that remains is to fill in values for the expansion coefficients, and then to plot. Here we limit ourselves to half the total energy range:
\[Psi]Pendulum = Sqrt[2 \[Alpha]] Expand[
     (MultinomialExpand[20, 2] /. c[x_] :> c[x] Q^(x + 2) /.
         c[x_ /; OddQ[x]] :> 0 /. {c[2] -> -(1/12), c[4] -> 1/360,
          c[6] -> -(1/20160)} /. y -> Sqrt[4 \[Alpha]])/
      Sqrt[4 \[Alpha]]] /. \[Alpha]^n_ /; n > 3 :> 0
dt = TrigReduce[
    D[Expand[(1/2)*\[Psi]Pendulum^2], \[Alpha]] /.
     Q -> Cos[\[Phi]]] /. \[Alpha]^n_ /; n > 5 :> 0;
t[\[Phi]1_, \[Phi]2_] :=
 Expand[(1/2/Pi) Integrate[dt, {\[Phi], \[Phi]1, \[Phi]2}]]
tA\[Phi] = t[Pi/2, Pi/2 + \[Phi]];
tB\[Phi] = t[3 Pi/2, 3 Pi/2 + \[Phi]];
tC\[Phi] = t[Pi/2 + \[Phi], \[Phi] + Pi + Pi/2];
tB\[Phi] + tA\[Phi] /. {\[Phi] -> Pi}
PST = (\[Psi]Pendulum /. Q -> Cos[\[Phi] + Pi/2]) {Cos[\[Phi] + Pi/2],
    Sin[\[Phi] + Pi/2], 0};
PST2 = (\[Psi]Pendulum /.
     Q -> Cos[\[Phi] + Pi/2 + Pi]) {Cos[\[Phi] + Pi/2 + Pi],
    Sin[\[Phi] + Pi/2 + Pi], 0};
DoubleHelixPendulum = Show[
  ParametricPlot3D[Evaluate[
    Plus[PST, {0, 0, 2 tA\[Phi]}] /. \[Alpha] -> .5 #/5 & /@
     Range[5]],
   {\[Phi], 0, 2 Pi}, PlotStyle -> Tube[1/32], PlotPoints -> 100],
  ParametricPlot3D[Evaluate[
    Plus[PST2, {0, 0, 2 tB\[Phi]}] /. \[Alpha] -> .5 #/5 & /@
     Range[5]],
   {\[Phi], 0, 2 Pi}, PlotStyle -> Tube[1/32], PlotPoints -> 100],
  ParametricPlot3D[Evaluate[
    Plus[PST, {0, 0, 2 tC\[Phi]}] /. \[Alpha] -> .5 #/5 & /@ Range[5]],
   {\[Phi], 0, 1.1 2 Pi}, PlotStyle -> Tube[1/32], PlotPoints -> 100],
  ParametricPlot3D[Evaluate[
    Plus[PST2, {0, 0,
       2 tB\[Phi] /. \[Phi] -> 2 Pi}] /. \[Alpha] -> .5 #/5 & /@
     Range[5]],
   {\[Phi], 0, 1.1 2 Pi}, PlotStyle -> Tube[1/32], PlotPoints -> 100],
  ParametricPlot3D[Evaluate[
    Plus[PST2, {0, 0,
       2 tB\[Phi] /. \[Phi] -> 0}] /. \[Alpha] -> .5 #/5 & /@
     Range[5]],
   {\[Phi], 0, 1.1 2 Pi}, PlotStyle -> Tube[1/32], PlotPoints -> 100],
  ParametricPlot3D[Evaluate[
    Plus[PST2, {0, 0, 2 tB\[Phi] /. \[Phi] -> 0}] /. \[Phi] ->
      2 Pi #/6 & /@ Range[6]],
   {\[Alpha], .1, 0.5}, PlotStyle -> Tube[1/32], PlotPoints -> 100],
  ParametricPlot3D[Evaluate[
    Plus[PST, {0, 0, 2 tC\[Phi]}] /. \[Phi] -> 2 Pi #/6 & /@ Range[6]],
   {\[Alpha], .1, 0.5}, PlotStyle -> Tube[1/32], PlotPoints -> 100],
  ParametricPlot3D[Evaluate[
    Plus[PST2, {0, 0, 2 tB\[Phi] /. \[Phi] -> 2 Pi}] /. \[Phi] ->
      2 Pi #/6 & /@ Range[6]],
   {\[Alpha], .1, 0.5}, PlotStyle -> Tube[1/32], PlotPoints -> 100],
  Boxed -> False, Axes -> False, PlotRange -> All, ImageSize -> 300
  ]
![ time helices ][6]
This model can be saved as an ".stl" file and exported directly to shapeways.
[Shapeways: Simple Pendulum Time Helices][7]
## Edwards Curve ##
As recently [announced on seqfans][8], it's relatively easy to apply polar coordinates in an analysis of the Edwards curve (cf. [Edwards][9] & [Bernstein & Lange][10]), and this approach readily yields a simple exact form for the "time dependence" of the addition rules. First we calculate a radius function for the genus-one solution:
Edwards = x^2 + y^2 - (1 + d x^2 y^2);
r[\[Phi]_] := Sqrt[(1 - Sqrt[1 - d Sin[2 \[Phi]]^2]) 2 Csc[2 \[Phi]]^2/d]
r[\[Phi]]
![r expression][11]
And check this:
TrigReduce[Edwards /. {x -> r[\[Phi]] Cos[\[Phi]], y -> r[\[Phi]] Sin[\[Phi]]}]
Yields zero, as necessary. TrigReduce can be replaced by a set of replacement rules. Next we write the addition rule in the form of a tangent function,
Tan\[Phi]3[\[Phi]1_, 0] := Tan[\[Phi]1]
Tan\[Phi]3[\[Phi]1_, \[Phi]2_] := Tan[\[Phi]1 + \[Phi]2] (1 - d z)/(1 + d z) /.
  z -> r[\[Phi]1]^2 r[\[Phi]2]^2 Cos[\[Phi]1] Sin[\[Phi]1] Cos[\[Phi]2] Sin[\[Phi]2]
And calculate the derivative
\[Phi]dot = Times[
  Normal@Series[Tan\[Phi]3[\[Phi], \[Omega]dt], {\[Omega]dt, 0, 1}] - Tan\[Phi]3[\[Phi], 0],
  Cos[\[Phi]]^2/\[Omega]dt]
d\[Phi]/TrigReduce[Expand[\[Phi]dot]]
The closed-form result is
$$\omega \; dt = \frac{d\phi}{\sqrt{1-d \sin^2(2\phi)}} = \frac{d\phi}{\sqrt{1-4 \;d \big( \sin(\phi)\cos(\phi)\big)^2}} .$$
Clearly the complete elliptic integral of the first kind is given by any integral of the form
$$K(d) \propto \frac{1}{2\pi} \int_{0}^{2\pi} \frac{d\phi}{\sqrt{1-d \sin^2(n\;\phi)}}, $$
with integer $n$. As is [well known][12], the pendulum has $n=1$, and we see here that the Edwards curve has $n=2$, proving the two systems are isoperiodic. Let's now exploit the square symmetry of the Edwards curve by making a quadruple-helix, 3D-printable model.
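Before building the model, isoperiodicity can be sanity-checked numerically: over a full period the integral above does not depend on the integer $n$. A quick check at one arbitrary value of $d$:

```mathematica
With[{d = 3/10},
 {NIntegrate[1/Sqrt[1 - d Sin[\[Phi]]^2], {\[Phi], 0, 2 Pi}],   (* pendulum, n = 1 *)
  NIntegrate[1/Sqrt[1 - d Sin[2 \[Phi]]^2], {\[Phi], 0, 2 Pi}]} (* Edwards curve, n = 2 *)
 ]
(* the two values agree; both equal 4 EllipticK[3/10] *)
```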
First we expand the radius to avoid singular points and integrate the time dependence (this is done naively and could be optimized with a little more effort),
rEdwards = Normal[Series[Sqrt[2 d] r[\[Phi]0], {d, 0, 20}]];
dt = 1/Sqrt[Expand[1 - 4 d x^2]] d\[Phi]
t = 1/2/Pi Integrate[Evaluate[Expand[TrigReduce[Normal@Series[dt, {d, 0, 20}] /.
        x -> Cos[\[Phi]] Sin[\[Phi]]]/d\[Phi]]], {\[Phi], Pi, \[Phi]0}];
The extra factor of $\sqrt{d}$ corresponds to a coordinate change where the Edwards equation takes the form $d = x^2+y^2-x^2y^2$, which allows us to plot the time spirals,
EdwardsQuadrupleHelix = Show[
  Function[{a},
    ParametricPlot3D[
     Evaluate[{rEdwards Cos[\[Phi]0 + Pi/2 a],
         rEdwards Sin[\[Phi]0 + Pi/2 a], 2 t} /. d -> #/10 & /@
       Range[5]], {\[Phi]0, Pi, 3 Pi}, PlotStyle -> Tube[1/32],
     PlotPoints -> 100]
    ] /@ Range[0, 3],
  Function[{a},
    ParametricPlot3D[
     Evaluate[{rEdwards Cos[\[Phi]0 + Pi/2 a],
        rEdwards Sin[\[Phi]0 + Pi/2 a], 2 t} /. \[Phi]0 -> Pi], {d,
      0.1, .5}, PlotStyle -> Tube[1/32], PlotPoints -> 100]
    ] /@ Range[0, 3],
  Function[{a},
    ParametricPlot3D[
     Evaluate[{rEdwards Cos[\[Phi]0 + Pi/2 a],
        rEdwards Sin[\[Phi]0 + Pi/2 a], 2 t} /. \[Phi]0 -> 2 Pi], {d,
      0.1, .5}, PlotStyle -> Tube[1/32], PlotPoints -> 100]
    ] /@ Range[0, 3],
  Function[{a},
    ParametricPlot3D[
     Evaluate[{rEdwards Cos[\[Phi]0 + Pi/2 a],
        rEdwards Sin[\[Phi]0 + Pi/2 a], 2 t} /. \[Phi]0 -> 3 Pi], {d,
      0.1, .5}, PlotStyle -> Tube[1/32], PlotPoints -> 100]
    ] /@ Range[0, 3],
  ParametricPlot3D[
   Evaluate[{rEdwards Cos[\[Phi]0 + Pi], rEdwards Sin[\[Phi]0 + Pi],
       2 t /. \[Phi]0 -> Pi} /. d -> #/10 & /@ Range[5]], {\[Phi]0,
    Pi, 3 Pi}, PlotStyle -> Tube[1/32], PlotPoints -> 100],
  ParametricPlot3D[
   Evaluate[{rEdwards Cos[\[Phi]0 + Pi], rEdwards Sin[\[Phi]0 + Pi],
       2 t /. \[Phi]0 -> 2 Pi} /. d -> #/10 & /@
     Range[5]], {\[Phi]0, Pi, 3 Pi}, PlotStyle -> Tube[1/32],
   PlotPoints -> 100],
  ParametricPlot3D[
   Evaluate[{rEdwards Cos[\[Phi]0 + Pi], rEdwards Sin[\[Phi]0 + Pi],
       2 t /. \[Phi]0 -> 3 Pi} /. d -> #/10 & /@
     Range[5]], {\[Phi]0, Pi, 3 Pi}, PlotStyle -> Tube[1/32],
   PlotPoints -> 100]
  , PlotRange -> All, Boxed -> False, Axes -> False, ImageSize -> 800
  ]
![ Edward's isoperiodicity ][13]
Again, this can be exported directly to [Shapeways][14]. Finally, let's take a closer look at isoperiodicity by comparing the uppermost level sets,
Show[Function[{a},
   ParametricPlot3D[
    Evaluate[{rEdwards Cos[\[Phi]0 + Pi/2 a],
       rEdwards Sin[\[Phi]0 + Pi/2 a], 2 t} /. \[Phi]0 -> 3 Pi], {d,
     0.1, .5}, PlotStyle -> Tube[1/32], PlotPoints -> 100]
   ] /@ Range[0, 3],
 ParametricPlot3D[
  Evaluate[{rEdwards Cos[\[Phi]0 + Pi], rEdwards Sin[\[Phi]0 + Pi],
      2 t /. \[Phi]0 -> 3 Pi} /. d -> #/10 & /@ Range[5]], {\[Phi]0,
   Pi, 3 Pi}, PlotStyle -> Tube[1/32], PlotPoints -> 100],
 Show[ParametricPlot3D[Evaluate[
    Plus[PST2, {0, 0,
       2 tB\[Phi] /. \[Phi] -> 2 Pi}] /. \[Alpha] -> .5 #/5 & /@
     Range[5]],
   {\[Phi], 0, 1.1 2 Pi}, PlotStyle -> Tube[1/32], PlotPoints -> 100],
  ParametricPlot3D[Evaluate[
    Plus[PST2, {0, 0, 2 tB\[Phi] /. \[Phi] -> 2 Pi}] /. \[Phi] ->
      2 Pi #/6 & /@ Range[6]],
   {\[Alpha], .1, 0.5}, PlotStyle -> Tube[1/32], PlotPoints -> 100]],
 PlotRange -> All, Boxed -> False, Axes -> False, ImageSize -> 800
 ]
![ comparison ][15]
If you look closely, on the outermost trajectory, the effects of overly liberal series truncation are narrowly observable. However, the physical scale of the error is about 1/100 of an inch, around the precision limit of the printer, so why worry?
## Conclusion ##
We have shown that, with a little more work, computer-based calculus could be exported to a formal pen-and-paper proof of isoperiodicity between the simple pendulum and the Edwards curve with addition rules. This is well demonstrated by the level sets of the 3D-printed models. Work is ongoing regardless of anything else...
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1888popup_3.jpg&userId=11733
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=710x528_19193841_11183854_1497642604.png&userId=11733
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=710x528_19193841_11183854_1497642604.png&userId=11733
[4]: http://community.wolfram.com/groups//m/t/984488
[5]: http://community.wolfram.com/groups//m/t/1023763
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=timeHelices.jpg&userId=234448
[7]: https://www.shapeways.com/product/Q6W2L8EY3/simplependulumtimehelices?optionId=62983347
[8]: http://list.seqfan.eu/pipermail/seqfan/2017July/017783.html
[9]: http://www.ams.org/journals/bull/20074403/S0273097907011536/S0273097907011536.pdf
[10]: http://cr.yp.to/newelliptic/newelliptic20070906.pdf
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=rExpression.jpg&userId=234448
[12]: https://en.wikipedia.org/wiki/Pendulum_%28mathematics%29
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=TimeHelices2.jpg&userId=234448
[14]: https://www.shapeways.com/product/Z7RBVMQET/edwardscurvetimehelices?optionId=63105767
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Comparison.jpg&userId=234448
Brad Klee, 2017-07-06T22:14:52Z

GraphImage2List for multicolored graph-image-data
http://community.wolfram.com/groups//m/t/1102290
In my previous post, [GraphImage2List][1], I created a tool that converts graph-image-data to numeric data in order to do Machine Learning using graph-image-data. However, it can't deal with multicolored graph-image-data. So I created a new tool using the **ImageGraphics** function. ImageGraphics is one of my favorite new functions in [Wolfram Language][2] 11.1.
**Goal**

The left is the original graph (image data), like CPU utilization of five virtual machines.
The right is the graph (ListPlot) of the yellow-green line picked up by the new tool.
![enter image description here][3]
The tool comes in three steps.
**Step 1**

In the first step, my MakeMasking function masks unnecessary areas around the original image data, such as the plot label, frame, and ticks. It outputs "maskedimg".
MakeMasking[img_] := Module[{},
  size = {sizex, sizey} = ImageDimensions[img];
  backimg = Image[Table[1, {sizey}, {sizex}]];
  Manipulate[
   Grid[{{"input image", "masked image"},
     {Show[img, ImageSize -> size],
      Show[
       maskedimg =
        ImageCompose[backimg,
         ImageTrim[
          img, {{left, bottom}, {right, top}}], {left + right,
           bottom + top}/2], ImageSize -> size]}}],
   Row[{Control[{left, 0, sizex, 1}],
     Control[{{right, sizex}, 0, sizex, 1}]}, " "],
   Row[{Control[{bottom, 0, sizey, 1}],
     Control[{{top, sizey}, 0, sizey, 1}]}, " "]
   ]
  ]
The left is the original. The right is masked image.
![enter image description here][4]
**Step 2**

In this step, my SelectColors function selects the areas of two chosen colors in the ImageGraphics output of "maskedimg".
The ImageGraphics function returns the content of the image. The colors are below.
maskedimggraphics = ImageGraphics[maskedimg, PlotRangePadding -> None];
maskedimggraphics[[1, 2, #, 1]] & /@ Range[Length[maskedimggraphics[[1, 2]]]]
![enter image description here][5]
My SelectColors function uses the content and outputs the selected area as "selectpts".
SelectColors[img_, maskedimggraphics_] := Module[{},
  {sizex, sizey} = size = ImageDimensions[img] // N;
  frame = {{0., 0.}, {0., sizey}, {sizex, sizey}, {sizex, 0.}};
  l = Length[maskedimggraphics[[1, 1]]];
  colors =
   maskedimggraphics[[1, 2, #, 1]] & /@
    Range[Length[maskedimggraphics[[1, 2]]]];
  Manipulate[
   Grid[{{"input image", "selected image"},
     {Show[img, ImageSize -> ImageDimensions[img]],
      Graphics[
       GraphicsComplex[Join[maskedimggraphics[[1, 1]], frame],
        Join[{LABColor[
           0.9988949153893414, 3.6790844387894895`*^-6,
           0.00042430735605277474`],
          FilledCurve[{{Line[{l + 1, l + 2, l + 3, l + 4}]}}]},
         selectpts =
          FirstCase[maskedimggraphics[[1, 2]], {#, ___},
             Nothing] & /@ {color1, color2}]],
       ImageSize -> size]}}], {{color1, colors[[2]]}, colors,
    ControlType -> RadioButton,
    Method -> "Queued"}, {{color2, colors[[2]]}, colors,
    ControlType -> RadioButton, Method -> "Queued"},
   SynchronousUpdating -> False]
  ]
The left is the original. The right is selected image.
![enter image description here][6]
**Step 3**

You can make the list in the next process by using my GetList2 function.
1. Add/del points with alt+click (Windows) / cmd+click (Mac) if necessary
2. Set the x and y values (Min, Max, Accuracy) of the red points
3. Click the Calculate button
GetList2[img_, imggraphics_, selectpts_] := Module[{},
  ClearAll[list]; list = {};
  Manipulate[
   Grid[{{"Selected Points", "Sample List"},
     {Show[img, Graphics[{Point[u]}],
       ImageSize -> ImageDimensions[img]],
      Dynamic[If[(ValueQ[list] == False) || (list == {}),
        "1. add/del points if necessary (alt+click/cmd+click)\n
        2. set x and y values of red points\n
        3. click Calculate button",
        Sort[RandomSample[list, UpTo[10]]] // TableForm]]}},
    Alignment -> Top],
   Row[{Control[{xMin, {0}, InputField, ImageSize -> 100}],
     Control[{xMax, {100}, InputField, ImageSize -> 100}],
     Control[{{xAccuracy, 1}, InputField, ImageSize -> 50}]}, " "],
   Row[{Control[{yMin, {0}, InputField, ImageSize -> 100}],
     Control[{yMax, {100}, InputField, ImageSize -> 100}],
     Control[{{yAccuracy, 1}, InputField, ImageSize -> 50}]}, " "],
   Row[{Button["Calculate",
      list = locator2coordinate2[u, xMin, xMax, xAccuracy, yMin, yMax,
         yAccuracy];, ImageSize -> 120, Method -> "Queued"]}, " "],
   {{u, Sort[GetPointsfromImageGraphics[imggraphics, selectpts]]},
    Locator, LocatorAutoCreate -> True,
    Appearance -> Style["\[FilledCircle]", Red, 3]},
   ControlPlacement -> {Bottom, Bottom, Bottom},
   SynchronousUpdating -> False]
  ]
locator2coordinate2[points_, xMin_, xMax_, xAccuracy_, yMin_, yMax_,
yAccuracy_] :=
Module[{solvex, solvey, pointsx, pointsy, points2, coordinatesL,
coordinatesH, nearx, nearxpos, tmp},
solvex =
Solve[{a*#[[1]] + b == xMin, a*#[[2]] + b == xMax}, {a, b}] &@
MinMax[points[[All, 1]]];
pointsx = Flatten[({a, b} /. solvex).{#, 1} & /@ points[[All, 1]]];
solvey =
Solve[{c*#[[1]] + d == yMin, c*#[[2]] + d == yMax}, {c, d}] &@
MinMax[points[[All, 2]]];
pointsy = Flatten[({c, d} /. solvey).{#, 1} & /@ points[[All, 2]]];
points2 = Sort[Thread[{pointsx, pointsy}]];
coordinatesL = (points2 //. {s___, {u_, v_}, {u_, w_},
      t___} -> {s, {u, v}, t});
coordinatesH = (points2 //. {s___, {u_, v_}, {u_, w_},
      t___} -> {s, {u, w}, t});
(* High value *)
nearx = (Nearest[coordinatesH[[All, 1]], #, 1] & /@
Range[xMin, xMax, xAccuracy] // Flatten);
nearxpos =
Position[coordinatesH[[All, 1]], #, 1, 1] & /@ nearx // Flatten;
nearyH = Round[#, yAccuracy] & /@ coordinatesH[[All, 2]][[nearxpos]];
(* Low value *)
nearx = (Nearest[coordinatesL[[All, 1]], #, 1] & /@
Range[xMin, xMax, xAccuracy] // Flatten);
nearxpos =
Position[coordinatesL[[All, 1]], #, 1, 1] & /@ nearx // Flatten;
nearyL = Round[#, yAccuracy] & /@ coordinatesL[[All, 2]][[nearxpos]];
(* Middle value *)
nearyM = (nearyH + nearyL)/2;
(* Combination value *)
tmp = ((#[[1]] + #[[3]])/2) & /@ Partition[nearyM, 3, 1];
nearyC = Table[Which[
nearyM[[i + 1]] > tmp[[i]], nearyH[[i + 1]],
nearyM[[i + 1]] < tmp[[i]], nearyL[[i + 1]],
True, Round[nearyM[[i + 1]], yAccuracy]], {i, Length[tmp]}];
PrependTo[nearyC, Round[nearyM[[1]], yAccuracy]];
AppendTo[nearyC, Round[nearyM[[1]], yAccuracy]];
Thread[{Range[xMin, xMax, xAccuracy], nearyC}]
]
![enter image description here][7]
Set the x and y values (Min, Max, Accuracy) of the red points and click Calculate.
![enter image description here][8]
Then it outputs "list".
list
![enter image description here][9]
ListPlot of the list is below.
ListPlot[Style[list, RGBColor[204/255, 204/255, 0]], Joined -> True]
![enter image description here][10]
**Differences**

When GetList2 converts the area selected in Step 2 to coordinates, there are some points with the same x coordinate. So my locator2coordinate2 function outputs 4 lists of y coordinates, high, low, middle, and combination, as nearyH, nearyL, nearyM, and nearyC. As I show below, nearyC seems to be better than the others.
I created the test image data below.
data = {{6, 5, 33, 36, 9, 11, 23, 29, 34, 26, 3, 6, 26, 35, 21, 6, 26, 33, 20, 16, 30, 6, 1, 6},
{41, 34, 43, 60, 33, 38, 54, 43, 29, 59, 45, 34, 42, 55, 42, 26, 59, 20, 20, 41, 41, 47, 28, 52},
{58, 55, 61, 56, 40, 47, 50, 72, 72, 66, 69, 69, 78, 75, 70, 66, 56, 76, 66, 43, 47, 79, 56, 49},
{88, 96, 84, 62, 69, 67, 61, 60, 94, 76, 75, 70, 69, 86, 68, 61, 72, 91, 89, 71, 69, 83, 88, 75},
{17, 9, 23, 19, 23, 47, 45, 30, 82, 88, 58, 24, 59, 61, 17, 82, 95, 83, 40, 81, 68, 5, 40, 7}};
graph =
 DateListPlot[data, {2017, 5, 20, 0}, Frame -> True,
  FrameLabel -> {"Time", "CPU Utilization %"}, PlotStyle -> 96];
img = Rasterize[graph]
The mean and variance of the difference between the true list (data) and the 4 calculated lists (nearyH, nearyL, nearyM, and nearyC) are below; nearyC is the best of all.
Grid[
 Join[{{"", "nearyH", "nearyM", "nearyL", "nearyC"}},
   Join[{{"Mean", "Variance"}},
     {Mean[#], Variance[#]} & /@ {nearyH - #, nearyM - #, nearyL - #,
         nearyC - #} &@data[[5]] // N] // Transpose] // Transpose,
 Frame -> All]
![enter image description here][11]
ListPlot[{Legended[Style[data[[5]], RGBColor[204/255, 204/255, 0]],
   "data[[5]]"], Legended[Style[nearyC, Red], "nearyC"]},
 Joined -> {True, False}, PlotRange -> {0, 100}]
![enter image description here][12]
[1]: http://community.wolfram.com/groups//m/t/1079933?p_p_auth=tIi9evRT
[2]: http://reference.wolfram.com/language/ref/ImageGraphics.html
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=screenshot.0.jpg&userId=1013863
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9670screenshot.2.jpg&userId=1013863
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1885screenshot.8.jpg&userId=1013863
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10670screenshot.3.jpg&userId=1013863
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1651screenshot.4.jpg&userId=1013863
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9510screenshot.5.jpg&userId=1013863
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6445screenshot.12.jpg&userId=1013863
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2322screenshot.7.jpg&userId=1013863
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1844screenshot.9.jpg&userId=1013863
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6267screenshot.6.jpg&userId=1013863

Kotaro Okazaki, 2017-05-22T20:20:01Z

GraphImage2List
http://community.wolfram.com/groups//m/t/1079933
I'm trying to do machine learning using old data. However, the data are not numeric data but graph-image data. So I created a tool that calculates numeric data from graph images.
**Goal**

The left is the original graph-image data. The right is the graph made from the tool's output (list data).
![enter image description here][1]
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50
11.82 12.04 12.09 11.88 12.42 12.48 12.61 12.75 12.53 12.53 12.62 12.68 12.6 12.39 12.24 12.21 12.01 11.91 11.92 11.89 11.91 12.24 12.35 12.11 12.35 12.39 12.53 12.54 13.26 13.3 13.20 13.11 13.17 13.69 13.9 13.54 13.20 13.44 13.35 13.41 13.47 13.24 13.23 13.49 12.87 12.65 12.57 13.04 13.09 12.87
The tool works in two steps.
**Step 1**

In the first step, the tool roughly selects points on the graph using the ImageKeypoints function.
However, it also uses the Masking option of ImageKeypoints, because ImageKeypoints sometimes selects unnecessary points on the graph. I referred to [@Vitaliy Kaurov][2]'s [post][3] about how to make a masking area.
My GetPoints function selects the key points of the graph.
GetPoints[i_] := Manipulate[
Grid[{
{"Mask (add/del alt+click/cmd+click)", "Selected Points"},
{Show[i, ImageSize -> ImageDimensions[i]],
mask =
Graphics[Disk[#, 10] & /@ p,
PlotRange -> Thread[{{1, 1}, ImageDimensions[i]}],
ImageSize -> ImageDimensions[i]];
HighlightImage[i,
points =
ImageKeypoints[i, MaxFeatures -> n, Method -> method,
Masking -> mask], ImageSize -> ImageDimensions[i]]}
}],
{{p, {ImageDimensions[i]/2}}, Locator, LocatorAutoCreate -> True,
Appearance -> Style["\[EmptyCircle]", Red, 30]}, {{n, 100,
"number of points"}, 10, 300,
10}, {{method, "FAST"}, {"AGAST", "AKAZE", "BRISK", "FAST", "KAZE",
"ORB", "SURF"}, ControlType -> RadioButton},
ControlPlacement -> {Top}]
Here is the original data, a graph image of a financial chart.
![enter image description here][4]
The red points in the right figure are selected points.
![enter image description here][5]
Some ticks may be selected, but they are unnecessary.
You can mask them by moving the red circle in the left figure.
![enter image description here][6]
You can add more masking areas with alt+click(WINDOWS)/cmd+click(MAC).
![enter image description here][7]
ImageKeypoints has many methods. In this case, "AKAZE" method is the best.
![enter image description here][8]
Forty points are selected. They are stored in "points".
![enter image description here][9]
**Step 2**

In this step, my GetList function converts the selected points into a list.
GetList[i_, points_] := Module[{}, ClearAll[list]; list = {};
Row[{Manipulate[Grid[{{"Selected Points", "Sample List"},
{Show[i, Graphics[{Point[u]}],
ImageSize -> ImageDimensions[i]],
Dynamic[If[(ValueQ[list] == False) || (list == {}),
"1. move bottom-left and upper-right red points\n2. set \
each coordinate\n3. add/del points if necessary (alt+click/cmd+click)\n\
4. click Calculate button", list = Round[#, accuracy] & /@ list;
Sort[RandomSample[list, UpTo[10]]] // TableForm]]}}],
Row[{Dynamic[u[[1]]], "->",
Control[{coordinate1, {{0, 0}}, InputField, ImageSize -> 80}],
Dynamic[u[[2]]], "->",
Control[{coordinate2, {{1, 1}}, InputField, ImageSize -> 80}],
Control[{{accuracy, 0.01}, InputField, ImageSize -> 50}]},
" "],
Row[{Button["Calculate",
list = locator2coordinate[u, {coordinate1, coordinate2}];,
ImageSize -> 120]}, " "],
Row[{Button["Clear points", u = Take[u, 2]; Put[u, "locator"],
ImageSize -> 120]}, " "],
{{u, Join[{{1, 1}, ImageDimensions[i] - {1, 1}}, Sort[points]]},
Locator, LocatorAutoCreate -> True,
Appearance -> Style["\[FilledCircle]", Red, 8]},
ControlPlacement -> {Bottom, Bottom}]
}, " "]
]
locator2coordinate[list_, sample_] :=
Module[{a, b, c, d, mat, cnst, solve, matx, cnstx},
mat = {{a, 0}, {0, d}}; cnst = {b, c};
solve =
Solve[mat.list[[1]] + cnst == sample[[1]] &&
mat.list[[2]] + cnst == sample[[2]], {a, b, c, d}];
matx = mat /. solve; cnstx = cnst /. solve;
Partition[Flatten[(matx.# + cnstx) & /@ list], 2] // Sort
]
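To see what locator2coordinate does, here is a quick hand check (the pixel and data coordinates below are made up for illustration): two reference points pin down a diagonal affine map, and every other point is carried along by it.

```mathematica
(* hypothetical check: pixel points {10, 20} and {200, 150} are declared
   to correspond to data coordinates {0, 0} and {50, 15}; the third pixel
   point {105, 85} is their midpoint, so it should map to the data midpoint *)
locator2coordinate[{{10, 20}, {200, 150}, {105, 85}}, {{0, 0}, {50, 15}}]
(* -> {{0, 0}, {25, 15/2}, {50, 15}} *)
```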
You can make the list with the following process.
1. Move the bottom-left and upper-right red points to positions whose coordinates you know
![enter image description here][10]
2. Set each coordinate
3. Add points with alt+click(WINDOWS)/cmd+click(MAC)
50 points are selected in the figure below.
![enter image description here][11]
4. Click Calculate button
The summary is displayed on the right.
![enter image description here][12]
And the selected points are stored in "list".
Transpose[{Round[#[[1]], 1], #[[2]]} & /@ list] // TableForm
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50
11.82 12.04 12.09 11.88 12.42 12.48 12.61 12.75 12.53 12.53 12.62 12.68 12.6 12.39 12.24 12.21 12.01 11.91 11.92 11.89 11.91 12.24 12.35 12.11 12.35 12.39 12.53 12.54 13.26 13.3 13.20 13.11 13.17 13.69 13.9 13.54 13.20 13.44 13.35 13.41 13.47 13.24 13.23 13.49 12.87 12.65 12.57 13.04 13.09 12.87
Compare the original graph image and the calculated graph (ListPlot).
![enter image description here][13]
**Example 1**

Here is a graph image resembling a sine curve.
![enter image description here][14]
Step 1: GetPoints
In this case, "FAST" method is the best.
![enter image description here][15]
Step 2: GetList
![enter image description here][16]
Compare the original graph image and the calculated graph (ListPlot).
![enter image description here][17]
**Example 2**

Here is a graph image resembling a bar chart.
![enter image description here][18]
Step 1: GetPoints
In this case, "AGAST" method is the best.
![enter image description here][19]
Step 2: GetList
![enter image description here][20]
Compare the original graph image and the calculated graph (BarChart).
Now that I have numeric data, I can set a bar style.
![enter image description here][21]
**Finally**

In this approach, some manual operations are necessary. When there are a lot of image data, this work becomes very tedious.
There are many, many functions in the Wolfram Language. Using ImageGraphics or ImageCorners, it may be possible to improve the accuracy of point selection in Step 1. Using TextRecognize, it may become unnecessary to set the coordinates manually in Step 2.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5699screenshot.1.jpg&userId=1013863
[2]: http://community.wolfram.com/web/vitaliyk
[3]: http://community.wolfram.com/groups//m/t/121733
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=screenshot.2.jpg&userId=1013863
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=screenshot.3.jpg&userId=1013863
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=screenshot.4.jpg&userId=1013863
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=screenshot.5.jpg&userId=1013863
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=screenshot.6.jpg&userId=1013863
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=screenshot.7.jpg&userId=1013863
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=8195screenshot.8.jpg&userId=1013863
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5620screenshot.9.jpg&userId=1013863
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1680screenshot.10.jpg&userId=1013863
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=screenshot.11.jpg&userId=1013863
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9234screenshot.12.jpg&userId=1013863
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7385screenshot.13.jpg&userId=1013863
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3334screenshot.14.jpg&userId=1013863
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2452screenshot.15.jpg&userId=1013863
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=screenshot.16.jpg&userId=1013863
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=screenshot.17.jpg&userId=1013863
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=screenshot.18.jpg&userId=1013863
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=screenshot.19.jpg&userId=1013863

Kotaro Okazaki, 2017-05-02T16:30:22Z

[✓] Calculate a double integration of two functions?
http://community.wolfram.com/groups//m/t/1189862
Hi, all!
So basically I'm trying to do a double integration of two functions, but in one case Mathematica outputs "Null" in the answer, while in the other case it just echoes the input and doesn't really do anything.
Here below are snapshots of code and the output given by Mathematica.
1. https://imgur.com/a/XtBu0
2. https://imgur.com/a/KAwep
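Since the screenshots aren't reproduced here, this is roughly how a double integral of a product of two functions is set up in Mathematica (the integrand and limits below are made up for illustration). Note also that a trailing semicolon suppresses output, which can make a result appear as "Null" when combined with other expressions.

```mathematica
(* hypothetical example: integrate f[x] g[y] over a rectangle *)
f[x_] := x^2;
g[y_] := Sin[y];
Integrate[f[x] g[y], {x, 0, 1}, {y, 0, Pi}]   (* exact result: 2/3 *)
NIntegrate[f[x] g[y], {x, 0, 1}, {y, 0, Pi}]  (* numeric fallback if no closed form *)
```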
Thanks in advance :)

Raivat Shah, 2017-09-22T06:16:31Z

[GIF] Bertrand Pairs (Bertrand pairs of a helix)
http://community.wolfram.com/groups//m/t/1186568
![Bertrand pairs of a helix][1]
**Bertrand Pairs**
I'm teaching an undergraduate differential geometry course this semester, and was reminded of [Bertrand pairs][2], which are pairs of parametrized curves for which corresponding points have the same normal line.
Most curves don't have a Bertrand pair, and most of the ones that do have exactly one. However, there is an exception: any circular helix has infinitely many Bertrand pairs, given by just traveling any fixed distance in the principal normal direction at every point. The animation shows (some of) the family of Bertrand pairs for the helix $(\cos t, \sin t, t/4)$, as the distance $r$ along the normal line varies from 0 to 2. At distance 2 the Bertrand pair is a congruent helix, $180^\circ$ out of phase, so I'm rotating the family to make the $r=2$ pair exactly in phase and the family is periodic.
To get the family of Bertrand pairs, I first use the arclength parametrization $\alpha(s) = (\cos 4s/\sqrt{17}, \sin 4s/\sqrt{17}, s/\sqrt{17})$, then plug this into `FrenetSerretSystem[]` to get the principal normal $N(s) = -(\cos 4s/\sqrt{17}, \sin 4s/\sqrt{17}, 0)$. The Bertrand pairs are then $\alpha(s) + r N(s)$ for any choice of $r$.
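As a sketch, the principal normal can be read straight off the built-in Frenet–Serret machinery (`FrenetSerretSystem` returns the generalized curvatures and the frame vectors in the order tangent, normal, binormal):

```mathematica
(* Frenet-Serret data of the arc-length-parametrized helix *)
{curvatures, frame} =
  FrenetSerretSystem[{Cos[4 s/Sqrt[17]], Sin[4 s/Sqrt[17]], s/Sqrt[17]}, s];
normal = Simplify[frame[[2]]]  (* the principal normal N(s) *)
```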
For the actual animation I used a complicated combination of `Blend[]` and `ImageCompose[]` to get all the curves to be visually on the same "level", but that was very slow. Here's the code for a simplified version which is fast enough for a `Manipulate[]` but which will have some artifacts if exported to a GIF:
With[{n = 24},
Manipulate[
ParametricPlot[
Evaluate@
Table[{{0, 1, 0}, {0, 0, 1}}.RotationMatrix[θ + i π/n, {0, 0, 1}].
({Cos[4 t/Sqrt[17]], Sin[4 t/Sqrt[17]], t/Sqrt[17]} -
2 Haversine[θ + i π/n] {Cos[(4 t)/Sqrt[17]], Sin[(4 t)/Sqrt[17]], 0}),
{i, 0, n - 1}],
{t, 0, 6 π},
PlotRange -> {{-1.1, 1.1}, 1/Sqrt[17] {3/2 π, 9/2 π}}, Axes -> None,
PlotStyle -> Table[Directive[CapForm[None], Thickness[.006], Opacity[.6],
Hue[Haversine[θ + i π/n]]], {i, 0, n - 1}],
ImageSize -> {540, 540}, Background -> Black],
{θ, 0, π/n}]
]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=pairs10Br.gif&userId=610054
[2]: http://mathworld.wolfram.com/BertrandCurves.html

Clayton Shonkwiler, 2017-09-18T03:55:28Z

Is GeoHistogram allowing bspec defined as a GeoVariants entities list?
http://community.wolfram.com/groups//m/t/1189895
Hi everyone,
I wanted to plot a GeoHistogram at world range with a different color for every country. Everything works fine as long as I set the entity "Country" as the bspec (bin specification).
The problem is, for instance, that the "UnitedStates" entity does not include Alaska and Hawaii, but I know that you can define an entity that includes them like this:
GeoVariant[CountryData["UnitedStates"],"PrincipalArea"]
So I tried to define a bspec list of these GeoVariant entities for plotting, but it doesn't work.
Does anyone know a workaround to solve this problem?

Gianluca Teza, 2017-09-22T08:50:19Z

Get a good visual solution of PDE?
http://community.wolfram.com/groups//m/t/1189271
With this program and version 11.0.1 of Mathematica I get this good graphical solution:
ClearAll["Global`*"]
Needs["NDSolve`FEM`"]
parisradius = 0.3;
parisdiffusionvalue = 150;
carto = DiscretizeGraphics[
CountryData["France", {"Polygon", "Mercator"}] /.
Polygon[x : {{{_, _} ..} ..}] :> Polygon /@ x];
paris = First@
GeoGridPosition[
GeoPosition[
CityData[Entity["City", {"Paris", "IleDeFrance", "France"}]][
"Coordinates"]], "Mercator"];
bmesh = ToBoundaryMesh[carto, AccuracyGoal -> 1,
"IncludePoints" -> CirclePoints[paris, parisradius, 50]];
mesh = ToElementMesh[bmesh, MaxCellMeasure -> 5];
mesh["Wireframe"];
op = Laplacian[u[x, y], {x, y}] - 20;
usol = NDSolveValue[{op == 1,
DirichletCondition[u[x, y] == 0, Norm[{x, y} - paris] > .6],
DirichletCondition[u[x, y] == parisdiffusionvalue,
Norm[{x, y} - paris] < .6]}, u, {x, y} \[Element] mesh];
Show[ContourPlot[usol[x, y], {x, y} \[Element] mesh,
PlotPoints -> 100, ColorFunction -> "Temperature"],
bmesh["Wireframe"]] // Quiet
Plot3D[usol[x, y], {x, y} \[Element] mesh, PlotTheme -> "Detailed",
ColorFunction -> "Rainbow", PlotPoints -> 50]
![enter image description here][1]
But with version 11.2 I obtain the solution below. Why, and how can I obtain the first solution?
![enter image description here][2]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Captured%E2%80%99%C3%A9cran20170921%C3%A014.09.53.png&userId=77503
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Captured%E2%80%99%C3%A9cran20170921%C3%A014.13.31.png&userId=77503

André Dauphiné, 2017-09-21T12:15:44Z

[✓] Use AMS Euler fonts in Mathematica?
http://community.wolfram.com/groups//m/t/1189722
Hi, I would like to use AMS Euler fonts in Mathematica. Is there a way I can do this?
Jeffrey

Jeffrey Denison, 2017-09-22T02:20:39Z

Goodbye to the CDF plugin for web browsers?
http://community.wolfram.com/groups//m/t/1052516
Embedded CDF documents don't work: neither private ones nor those included in the Demonstrations Project page.
There is no information or error message. After a little searching I found a note in Wolfram Support:
> Quick Answer Which browsers support the CDF plugin?
>
> Depending on the browser you use, you may notice that the CDF plugin
> is not available. This is because web browsers are reducing support
> for plugins in favor of HTML-based technologies.
>
> The CDF plugin is currently compatible with 32-bit Internet Explorer
> and Opera. The plugin is not compatible with the latest versions of
> Chrome, Firefox or Safari. Firefox provides an Extended Support
> Release, which is expected to support the CDF plugin through early
> 2018.
>
> As web browsers continue to reduce plugin support, we recommend the
> Wolfram Cloud for deploying CDFs on the web.
But Wolfram Cloud is not a good solution, because the most attractive characteristic of CDF (you can see things change continuously in response to controls) disappears.
And downloading documents to use in Mathematica or Wolfram Player is not the same.
Can we expect some solution, or is this a lost feature (and lost work in some projects)?

Javier Puertolas, 2017-04-04T09:55:51Z

[WSC17] Creating A Regular Polyhedron Classifier
http://community.wolfram.com/groups//m/t/1189734
[![enter image description here][5]][4]
# Introduction
The aim of this project is to train a neural network to classify and identify different types of polyhedra. A neural network originally trained on MNIST handwriting data to identify handwritten numbers was retrained on new data to identify polyhedra. The user can upload any regular polyhedron, preferably one rendered in Mathematica, and the classifier will identify it and display the probability of the polyhedron being each of the other regular polyhedra.
## Creating the Data Set
I made the dataset using the graphing functionality of Mathematica. A function was created to graph a polyhedron given its name and rotate it randomly about all three axes.
rotate[{rx_, ry_, rz_}, {x_, y_, z_}] :=
{x Cos[ry] Cos[rz] + z Sin[ry] -
y Cos[ry] Sin[rz], -z Cos[ry] Sin[rx] +
x (Cos[rz] Sin[rx] Sin[ry] + Cos[rx] Sin[rz]) +
y (Cos[rx] Cos[rz] - Sin[rx] Sin[ry] Sin[rz]),
z Cos[rx] Cos[ry] + x (-Cos[rx] Cos[rz] Sin[ry] + Sin[rx] Sin[rz]) +
y (Cos[rz] Sin[rx] + Cos[rx] Sin[ry] Sin[rz])}
This defines the rotate function that rotates the graphed polyhedron.
randomPoly[name_] :=
ImageCrop[
ImageResize[
Image[With[{graphics = PolyhedronData[name, "Graphics3D"],
rotation = RandomReal[{-Pi, Pi}, 3]},
Graphics3D[
GraphicsComplex[rotate[rotation, #] & /@ graphics[[1, 1]],
graphics[[1, 2]]], Boxed -> False]
]], 75], {75, 75}];
randomPoly[] :=
randomPoly@
RandomChoice@{"Cube", "Dodecahedron", "Icosahedron", "Octahedron",
"Tetrahedron"};
This function graphs the polyhedron and randomly rotates it about all three axes. Here is an example of a graphed polyhedron.
![enter image description here][1]
createExample[name_] := randomPoly[name] -> name;
trainingData =
Table[createExample@
RandomChoice@{"Cube", "Dodecahedron", "Icosahedron", "Octahedron",
"Tetrahedron"}, 1000];
This function was then used to create the rules for the training set. As shown in the code, 1000 randomly chosen polyhedron examples were generated to create the training data. This is what a rule looks like.
![enter image description here][2]
## Training the Neural Network
The polyhedron classifier reused the MNIST neural network, retraining it on the training set generated previously.
![enter image description here][3]
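The training step is shown above only as a screenshot; here is a minimal sketch of what such a retraining setup might look like in the Wolfram Language. The LeNet-style layer sizes, the 75x75 grayscale input encoder, and the variable names are assumptions, not the author's exact code:

```mathematica
(* hypothetical LeNet-style network trained on the polyhedron data *)
classes = {"Cube", "Dodecahedron", "Icosahedron", "Octahedron", "Tetrahedron"};
net = NetChain[{
    ConvolutionLayer[20, 5], Ramp, PoolingLayer[2, 2],
    ConvolutionLayer[50, 5], Ramp, PoolingLayer[2, 2],
    FlattenLayer[], LinearLayer[500], Ramp,
    LinearLayer[5], SoftmaxLayer[]},
   "Input" -> NetEncoder[{"Image", {75, 75}, "Grayscale"}],
   "Output" -> NetDecoder[{"Class", classes}]];
trained = NetTrain[net, trainingData];
```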
## Creating the Microsite
After the training of the neural network was finished, the final step was to create a microsite. The microsite allows users to upload an image of a polyhedron, and the program identifies the polyhedron.
Export["net.wlnet", trained]
CopyFile["net.wlnet", CloudObject["net.wlnet"]]
form = FormPage[{"image" -> "Image"},
"<h2 style='margin-bottom:40px' class='section form-title'>" <>
ToString[Import["net.wlnet"][#image]] <> "</h2>" <>
ExportString[
Grid[{Keys[
Normal[Import["net.wlnet"][#image, "Probabilities"]]],
Values[Normal[
Import["net.wlnet"][#image, "Probabilities"]]]}],
"HTMLFragment"] &,
AppearanceRules -> <|"Title" -> "<b>Polyhedron</b> Classifier",
"Description" ->
"<style>.wolfram-branding>.wolfram-branding-cloud:after{\
background-image: url(http://i.imgur.com/hFuh1YT.png)}</style>Upload \
a regular polyhedron to have it classified",
"SubmitLabel" -> "Classify"|>
, PageTheme -> "Red"];
CloudDeploy[form, "test",
Permissions -> "Public"
]
This is the link to the microsite you can see at the top:
https://www.wolframcloud.com/objects/user6de45e754ef44882960cddb2c07bd5b5/test
## Conclusion
Although the classifier is currently very accurate in correctly identifying the polyhedra generated by Wolfram and Mathematica, its success rate when using Google images or outside images is a lot lower. I hope to keep training the neural network in order to allow for higher accuracy with outside images as well.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=screenshotrandompoly.PNG&userId=1139975
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=screenshotcreateexample.PNG&userId=1139975
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShotNeuralNetworkTraining.PNG&userId=1139975
[4]: https://www.wolframcloud.com/objects/user6de45e754ef44882960cddb2c07bd5b5/test
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170922at3.59.28AM.png&userId=11733

Andre Wang, 2017-09-22T05:43:01Z

Solve a system of linear equations?
http://community.wolfram.com/groups//m/t/1189184
Hi!
I have a pretty easy problem with 6 equations and 6 unknowns of the form m.x = 0, where m is a 6x6 matrix and x is a vector of the 6 unknown variables. I'm looking for a nontrivial solution to this problem, so I use the NullSpace command.
My code is the following
m = {{E^(I*x*b/2), E^(I*x*b/2), E^(I*y*b/2), E^(I*y*b/2), 0, 0},
{I*x*E^(I*x*b/2), I*x*E^(I*x*b/2), I*y*E^(I*y*b/2),
I*y*E^(I*y*b/2), 0, 0},
{0, 0, E^(I*y*b/2), E^(I*y*b/2), E^(I*x*b/2), E^(I*x*b/2)},
{0, 0, I*y*E^(I*y*b/2), I*y*E^(I*y*b/2),
I*x*E^(I*x*b/2), I*x*E^(I*x*b/2)},
{E^(I*x*(a + b/2)), E^(I*x*(a + b/2)), 0, 0, 0, 0},
{0, 0, 0, 0, E^(I*x*(a + b/2)), E^(I*x*(a + b/2))}}
ns = NullSpace[m]
This gives me the output { }. What does this result mean?
All the coefficients x, y, a and b are real and greater than zero. Is that something I should specify in the code? I wonder what is wrong with my code?
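For context, an empty result from NullSpace on a symbolic matrix means that, for generic parameter values, the matrix is invertible, so only the trivial solution exists. Nontrivial solutions exist only where the determinant vanishes, which is a condition on the parameters. A sketch of how one might look for that condition (this can run for a long time on a symbolic 6x6 matrix):

```mathematica
(* nontrivial solutions require Det[m] == 0; search for the
   parameter condition under the stated positivity assumptions *)
det = FullSimplify[Det[m], Assumptions -> x > 0 && y > 0 && a > 0 && b > 0];
Reduce[det == 0 && x > 0 && y > 0 && a > 0 && b > 0, {x, y}, Reals]
```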
I've attached my code to this post.
Thanks in advance!

Pontus Vikstål, 2017-09-21T10:07:02Z

[NEW11.2] Torn edge paper effect with semi-transparent drop-down shadow
http://community.wolfram.com/groups//m/t/1188397
Some time ago [Vitaliy Kaurov][1] asked an interesting [question][2] about implementing in *Mathematica* the so-called "torn edge" effect in the context of conveying a message perfectly [formulated][3] by [Sjoerd C. de Vries][4]: "There's more of this, but that's not important." As an answer [Heike][5] (a [Mathematica.StackExchange][6] community member) published an ingenious [implementation][7] which deservedly received a huge amount of upvotes. She not only implemented a "torn edge" effect but also provided an option to add a drop-down shadow in order to dramatize the effect. Here is what her function named `torn` generates from the `"Mandrill"` example image in *Mathematica* 8.0.4 (it doesn't work correctly in the recent versions due to an [incompatible change][8] in image processing functionality):
img = ExampleData[{"TestImage", "Mandrill"}];
torn[img, {{0, 1}, {1, 0}}, "offset" -> {20, 20}, "gaussianBlur" -> 10]
> ![image][9]
As Vitaliy recently [pointed out][10], with the release of *Mathematica* 11.2 we've got a `"TornFrame"` image effect immediately available via `ImageEffect` function:
ImageEffect[img, {"TornFrame", Scaled[1/15], {Right, Bottom}, .05}]
> ![image][11]
But people at StackExchange quickly noticed that this effect doesn't produce a [sufficiently irregular][12] ripped-out edge, which means that the message that the image is truncated and "there is more of this, but that's not important" isn't exactly obvious (as opposed to the implementation provided by [Heike][13]). By spelunking the evaluation of `ImageEffect` with `Trace`, I've found a simple hack that generates a "Heike-style" torn edge effect:
tornEdge = Block[{Accumulate = RandomReal[1, Length[#]] &},
ImageEffect[img, {"TornFrame", Scaled[1/15], {Right, Bottom}, .08}]]
> ![image][14]
Further investigation showed that using new `"Frame"` effect we can produce even more irregular torn edge using only documented functionality, but at the cost of sufficiently more lengthy code:
tornEdge2 = Module[{step = 10, if, n = 2 Total[ImageDimensions[img]], k = 0},
if = Interpolation[
Transpose[{Accumulate[Prepend[RandomInteger[{step, 2 step}, n], 0]],
RandomReal[1, n + 1]}],
InterpolationOrder -> 1];
ImageEffect[img, {"Frame", if[++k] &, 15, {Right, Bottom}}]]
> ![image][15]
Here is a simple approach to obtaining a semi-transparent shadow based on the new functionality. In addition, I've added an option to add a *darkened* one-pixel-wide boundary to the torn image:
shadowOffset = 10;
shadowBlur = 5;
shadowTone = .5;
boundaryLightness = .4;
ImageCompose[
SetAlphaChannel[ColorNegate@#, #] &@
Blur[ImageMultiply[
ImagePad[AlphaChannel[
tornEdge], {{shadowOffset, shadowBlur/2}, {shadowBlur/2, shadowOffset}}],
shadowTone], shadowBlur],
ImagePad[ImageMultiply[tornEdge,
ColorNegate@
ImageMultiply[MorphologicalPerimeter@AlphaChannel[tornEdge], boundaryLightness]], {{0,
shadowOffset + shadowBlur/2}, {shadowOffset + shadowBlur/2, 0}}]]
> ![image][16]
Any suggestions and comments are welcome!
[1]: http://community.wolfram.com/web/vitaliyk
[2]: https://mathematica.stackexchange.com/q/4148/280
[3]: https://mathematica.stackexchange.com/questions/3897/plottingerrorbarsonalogscale#comment10606_3899
[4]: https://mathematica.stackexchange.com/users/57/sjoerdcdevries
[5]: https://mathematica.stackexchange.com/users/46/heike
[6]: https://mathematica.stackexchange.com/
[7]: https://mathematica.stackexchange.com/a/4155/280
[8]: https://mathematica.stackexchange.com/q/156210/280
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1496i.png&userId=88857
[10]: https://mathematica.stackexchange.com/a/155774/280
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2163i.png&userId=88857
[12]: https://mathematica.stackexchange.com/questions/4148/tornedgepapereffectforimages/155801?noredirect=1#comment415808_155774
[13]: https://mathematica.stackexchange.com/users/46/heike
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5186i.png&userId=88857
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=8094i.png&userId=88857
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9680i.png&userId=88857

Alexey Popkov, 2017-09-20T14:04:22Z

Move the model named "Generator" to class "ElectricalModels" in SM?
http://community.wolfram.com/groups//m/t/1189085
Hi,
I need/want to move the model named "Generator" from the class "GenericModels" to the class "ElectricalModels". Of course, I want all my models to pick up the move so that I do not have to recreate *every* model I ever created that contains the "Generator" model.
I suspect there is no way of doing this, but if so, creating a new library is damn near impossible unless you know beforehand which categories will be most practical for model grouping.
Thanks,
Martin

Martin Nilsson, 2017-09-21T07:34:28Z

[✓] Show gridlines on top of image displayed using Texture in Graphics?
http://community.wolfram.com/groups//m/t/1189398
When I display an image as a Texture in Graphics, I'm unable to show gridlines (in this case the image is a bitmap representing a geological map extract that's georeferenced using British National Grid coordinates). I suspect that the problem is to do with the order in which the graphics objects are rendered (i.e. if the gridlines are rendered before the texture, they'll be overwritten). I've searched the documentation and experimented with a few ideas, but haven't managed to resolve the problem. I'd be grateful to anyone who could point me to a solution. A notebook containing code extracts is attached.
Thanks in anticipation.
Ian

Ian Williams, 2017-09-21T20:26:53Z

Market Sentiment Analysis
http://community.wolfram.com/groups//m/t/970999
Text and sentiment analysis has become a very popular topic in quantitative research over the last decade, with applications ranging from market research and political science to e-commerce. In this post I am going to outline an approach to the subject, together with some core techniques, that have applications in investment strategy.
In the early days of the developing field of market sentiment analysis, the supply of machine readable content was limited mainly to mainstream providers of financial news such as Reuters or Bloomberg. Over time this has changed with the entry of new competitors in the provision of machine readable news, including, for example, Ravenpack or more recent arrivals like Accern. Providers often seek to sell not only the raw news feed service, but also their own proprietary sentiment indicators that are claimed to provide additional insight into how individual stocks, market sectors, or the overall market are likely to react to news. There is now what appears to be a cottage industry producing white papers seeking to demonstrate the value of these services, often accompanied by some impressive proforma performance statistics for the accompanying strategies, which include longonly, long/short, market neutral and statistical arbitrage.
For the purpose of demonstration I intend to forgo the blandishments of these services, although many are no doubt excellent, since the reader is perhaps unlikely to have access to them. Instead, in what follows I will focus on a single news source, albeit a highly regarded one: the Wall Street Journal. This is, of course, a simplification intended for illustrative purposes only – in practice one would need to use a wide variety of news sources and perhaps subscribe to a machine readable news feed service. But similar principles and techniques can be applied to any number of news feeds or online sites.
**The WSJ News Archive**
We are going to access the Journal’s online archive, which presents daily news items in a convenient summary format, an example of which is shown below. The archive runs from the beginning of 2012 through to the current day, providing ample data for analysis. In what follows, I am going to make two important assumptions, neither of which is likely to be 100% accurate – but which will not detract too much from the validity of the research, I hope. The first assumption is that the news items shown in each daily archive were reported prior to the market open at 9:30 AM. This is likely to be true for the great majority of the stories, but there are no doubt important exceptions. Since we intend to treat the news content of each archive as antecedent to the market action during the corresponding trading session, exceptions are likely to introduce an element of look-ahead bias. The second assumption is that the archive for each day is shown in the form in which it would have appeared on the day in question. In reality, there are likely to have been revisions to some of the stories made subsequent to their initial publication. So, here too, we must allow for the possibility of look-ahead bias in the ensuing analysis.
![enter image description here][1]
With those caveats out of the way, let’s proceed. We are going to be using broad market data for the S&P 500 index in the analysis to follow, so the first step is to download daily price series for the index. Note that we begin with daily opening prices, since we intend to illustrate the application of news sentiment analysis with a theoretical daytrading strategy that takes positions at the start of each trading session, exiting at market close.
![enter image description here][2]
From there we calculate the intraday return in the index, from market open to close, as follows:
![enter image description here][3]
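The download and return calculation are shown above only as screenshots; in Wolfram Language terms this is roughly what they do (a sketch — the ticker symbol, date range, and variable names are assumptions):

```mathematica
(* hypothetical sketch: S&P 500 daily open and close, and the
   open-to-close (intraday) return for each session *)
range = {{2012, 1, 1}, {2016, 12, 31}};
open = FinancialData["^GSPC", "Open", range];   (* {{date, value}, ...} *)
close = FinancialData["^GSPC", "Close", range];
intradayReturns = (close[[All, 2]] - open[[All, 2]])/open[[All, 2]];
```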
**Text Analysis & Classification**
Next we turn to the task of reading the news archive and categorizing its content. Mathematica makes the importation of HTML pages very straightforward, and we can easily crop the raw text string to exclude page headers and footers. The approach I am going to take is to derive a sentiment indicator based on an analysis of the sentiment of each word in the daily archive. Before we can do that we must first convert the text into individual words, stripping out standard stopwords such as "the" and "in" and converting all the text to lower case. Naturally one can take this preprocessing a great deal further, by identifying and separating out proper nouns, for example. Once the text processing stage is complete we can quickly summarize the content, for example by looking at the most common words, or by representing the entire archive in the form of a word cloud. Given that we are using the archive for the first business day of 2012, it is perhaps unsurprising that we find that "2012", "new" and "year" feature so prominently!
archive =
  Import[StringJoin["http://www.wsj.com/public/page/archive-",
    DateString[
     datelist[[1]], {"Year", "-", "MonthShort", "-", "DayShort"}],
    ".html"]];
archive =
  StringDrop[archive,
   StringPosition[archive,
     DateString[
      datelist[[1]], {"MonthName", " ", "DayShort", ", ",
       "Year"}]][[1, 2]]];
archive =
  StringTake[
   archive, -1 + StringPosition[archive, "ARCHIVE FILTER"][[1, 1]]];
archivewords = ToLowerCase[DeleteStopwords[TextWords[archive]]];
TakeLargest[Counts[archivewords], 20]
WordCloud[archivewords]
<"2012" > 45, "new" > 41, "year" > 39, "u.s." > 33, "2011" > 21,
"asia" > 16, "iowa" > 16, "crisis" > 15, "biggest" > 14,
"news" > 14, "markets" > 14, "india" > 13, "said" > 13,
"economic" > 13, "market" > 13, "oil" > 13, "europe's" > 13,
"look" > 12, "day" > 12, "price" > 12>
![enter image description here][4]
The subject of sentiment analysis is a complex one and I only touch on it here. For those interested in the subject I can recommend The Text Mining Handbook, by Feldman and Sanger, which is a standard work on the topic. Here I am going to employ a machine learning classifier provided with Mathematica 11. It is not terribly sophisticated (or, at least, has not been developed with financial applications especially in mind), but will serve for the purposes of this article. For those unfamiliar with the functionality, the operation of the sentiment classification algorithm is straightforward enough. For instance:
![enter image description here][5]
We apply the algorithm to classify each word in the daily news archive and arrive at a sentiment indicator based on the proportion of words that are classified as "positive". The sentiment reading for the archive for Jan 3, 2012, for example, turns out to be 67.4%:
![enter image description here][6]
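The indicator is simply the share of classified words labelled positive, ignoring neutral words. A hypothetical Python sketch, with a toy lexicon standing in for Mathematica's built-in Classify["Sentiment", ...] (the lexicon and word list are made up for illustration):

```python
def sentiment_indicator(words, classify):
    """Fraction of classified words labelled 'positive';
    words labelled anything other than positive/negative are ignored."""
    labels = [classify(w) for w in words]
    pos = labels.count("positive")
    neg = labels.count("negative")
    return pos / (pos + neg)

# Toy lexicon standing in for a trained sentiment classifier (hypothetical)
LEXICON = {"gain": "positive", "rally": "positive", "strong": "positive",
           "crisis": "negative", "loss": "negative"}
words = ["gain", "rally", "crisis", "strong", "loss", "gain"]
score = sentiment_indicator(words, lambda w: LEXICON.get(w, "neutral"))
# 4 positive vs 2 negative words -> 4/6
```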
**Sentiment Index Analytics**
We can automate the process of classifying the entire WSJ archive with just a few lines of code, producing a time series for the daily sentiment indicator, which has an average daily value of 68.5%; the WSJ crowd tends to be bullish, clearly! Note how the 60-day moving average of the indicator rises steadily over the period from 2012 through Q1 2015, then abruptly reverses direction, declining steadily, even somewhat precipitously, towards the end of 2016.
WSJSentimentIndicator[date_] :=
 Module[{d = date, archive, archivewords, WSJSI},
  archive =
   Import[StringJoin["http://www.wsj.com/public/page/archive-",
     DateString[d, {"Year", "-", "MonthShort", "-", "DayShort"}],
     ".html"]];
  archive =
   StringDrop[archive,
    StringPosition[archive,
      DateString[d, {"MonthName", " ", "DayShort", ", ", "Year"}]][[1,
      2]]];
  archive =
   StringTake[
    archive, -1 + StringPosition[archive, "ARCHIVE FILTER"][[1, 1]]];
  archivewords = ToLowerCase[DeleteStopwords[TextWords[archive]]];
  WSJSI = #Positive/(#Negative + #Positive) &@
     Counts[Classify["Sentiment", archivewords]] // N;
  {WSJSI, archivewords, archive}]
![enter image description here][7]
As with most data series in investment research, we are less interested in the level of a variable, such as a stock price, than we are in the changes in level. So the next step is to calculate the daily percentage change in the sentiment indicator and examine the correlation with the corresponding intraday return in the S&P 500 Index. At first glance our sentiment indicator appears to have very little predictive power: the correlation between indicator changes and market returns is negligibly small overall. But we shall later see that this is not the last word.
![enter image description here][8]
**Conditional Distributions**
Thus far the results appear discouraging; but as is often the case with this type of analysis we need to look more closely at the conditional distribution of returns. Specifically, we will examine the conditional distribution of S&P 500 Index returns when changes in the sentiment index are in the upper and lower quantiles of the distribution. This will enable us to isolate the impact of changes in market sentiment at times when the swings in sentiment are strongest. In the analysis below, we begin by examining the upper and lower third of the distribution of changes in sentiment:
![enter image description here][9]
The analysis makes clear that the distribution of S&P 500 Index returns is very different on days when the change in market sentiment is large and positive vs. large and negative. The difference is not just limited to the first moment of the conditional distribution, where the difference in the mean return is large and statistically significant, but also in the third moment. The much larger, negative skewness means that there is a greater likelihood of a large decline in the market on days in which there is a sizable drop in market sentiment, than on days in which sentiment significantly improves. In other words, the influence of market sentiment changes is manifest chiefly through the mean and skewness of the conditional distributions of market returns.
**A News Trading Algorithm**
We can capitalize on these effects using a simple trading strategy in which we increase the capital allocated to a long-SPX position on days when market sentiment improves, while reducing exposure on days when market sentiment falls. We increase the allocation by a factor (designated the leverage factor) on days when the change in the sentiment indicator is in the upper 1/3 of the distribution, and reduce the allocation by 1/leveragefactor on days when the change in the sentiment indicator falls in the lower 1/3 of the distribution. The allocation on other days is 100%. The analysis runs as follows:
period = QuantityMagnitude@
   DateDifference[First@datelist, Last@datelist, "Year"];
AnnStd = Sqrt[252]*
   StandardDeviation[
    Transpose@{tsSPXreturns["Values"], strategyreturns}];
cf = {tsVTDSPX[Last@datelist]/1000 - 1,
   tsVTDstrategy[Last@datelist]/1000 - 1};
CAGR = -1 + (1 + cf)^(1/period);
IR = CAGR/AnnStd;
Print[Style["News Sentiment Strategy", "Subsection"]];
P1 = Style[
   NumberForm[
    TableForm[{CAGR, AnnStd, IR},
     TableHeadings -> {{"CAGR", "Ann. StDev.",
        "IR"}, {Style["SP500 Index", Bold],
        Style["Strategy", Bold]}}], {6, 2}], FontSize -> 14];
P2 = DateListPlot[{tsVTDSPX, tsVTDstrategy}, Filling -> Axis,
   PlotLegends -> {"S&P500 Index", "Strategy"},
   PlotLabel -> Style["Value of $1,000", Bold], ImageSize -> Medium];
Print[P1];
Print[P2];
![enter image description here][10]
It turns out that, using a leverage factor of 2.0, we can increase the CAGR from 10% to 21% over the period 2012-2016 using the conditional distribution approach. This performance enhancement comes at a cost, since the annual volatility of the news sentiment strategy is 17% compared to only 12% for the long-only strategy. However, the overall net result is positive, since the risk-adjusted rate of return increases from 0.82 to 1.28.
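The performance statistics quoted here (CAGR, annualized volatility and their ratio, the "IR") can be reproduced from any daily return series. A minimal Python sketch under the same sqrt(252) annualization convention used in the code above (the sample returns are illustrative):

```python
import math
import statistics

def performance(daily_returns, years):
    """CAGR, annualized volatility (sqrt(252) scaling) and their
    ratio, the risk-adjusted return ('IR') used in the article."""
    growth = 1.0
    for r in daily_returns:
        growth *= 1.0 + r
    cagr = growth ** (1.0 / years) - 1.0
    ann_vol = math.sqrt(252) * statistics.stdev(daily_returns)
    return cagr, ann_vol, cagr / ann_vol

# Illustrative three-day return series over a one-year horizon
cagr, vol, ir = performance([0.01, -0.005, 0.02], 1.0)
```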
We can explore the robustness of the result, comparing different quantile selections and leverage factors using Mathematica's interactive Manipulate function:
Manipulate[
 percentiles =
  Quantile[tsSIchange, {quantilefrac, 1 - quantilefrac}];
 bottompercentile =
  Flatten[Position[tsSIchange["Values"],
    x_ /; x < percentiles[[1]]]];
 toppercentile =
  Flatten[Position[tsSIchange["Values"],
    x_ /; x > percentiles[[2]]]];
 strategyreturns = tsSPXreturns["Values"];
 strategyreturns[[bottompercentile]] = (1/leveragefactor)*
   strategyreturns[[bottompercentile]];
 strategyreturns[[toppercentile]] =
  leveragefactor*strategyreturns[[toppercentile]];
 tsVTDstrategy =
  TimeSeries[
   Transpose[{datelist,
     1000*FoldList[Times, 1, 1 + strategyreturns]}]];
 DateListPlot[{tsVTDSPX, tsVTDstrategy}, Filling -> Axis,
  PlotLegends -> {"S&P500 Index", "Strategy"},
  PlotLabel -> Style["Value of $1,000", Bold],
  ImageSize -> Medium], {leveragefactor, 1, 3}, {quantilefrac, 0.1,
  0.5}]
![enter image description here][11]
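The allocation rule driving the Manipulate above can be sketched in Python as follows; statistics.quantiles stands in for Mathematica's Quantile, and the constant 1% index return and 0..8 sentiment changes are purely illustrative:

```python
import statistics

def strategy_returns(index_returns, sentiment_changes, leveragefactor=2.0):
    """Scale each day's index return by the sentiment-driven allocation:
    leveragefactor in the top third of sentiment changes,
    1/leveragefactor in the bottom third, unchanged otherwise."""
    lo, hi = statistics.quantiles(sentiment_changes, n=3)  # tercile cut points
    scaled = []
    for r, ds in zip(index_returns, sentiment_changes):
        if ds > hi:
            scaled.append(leveragefactor * r)
        elif ds < lo:
            scaled.append(r / leveragefactor)
        else:
            scaled.append(r)
    return scaled

# Illustrative data: constant 1% daily index return, sentiment changes 0..8
rets = strategy_returns([0.01] * 9, list(range(9)), 2.0)
```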
**Conclusion**
A simple market sentiment indicator can be created quite easily from publicly available news archives, using a standard machine learning sentiment classification algorithm. A market sentiment indicator constructed by methods as straightforward as this appears to provide the capability to differentiate the conditional distribution of market returns on days when changes in market sentiment are significantly positive or negative. The differences in the higher moments of the conditional distribution appear to be as significant as the differences in the mean. In principle, we can use the insight provided by the sentiment indicator to enhance a long-only day-trading strategy, increasing leverage and allocation on days when changes in market sentiment are positive and reducing them on days when sentiment declines. The performance enhancements resulting from this approach appear to be significant.
Several caveats apply. The S&P 500 index is not tradable, of course, and it is not uncommon to find trading strategies that produce interesting theoretical results. In practice one would be obliged to implement the strategy using a tradable market proxy, such as a broad market ETF or futures contract. The strategy described here, which enters and exits positions daily, would incur substantial trading costs, which would be further exacerbated by the use of leverage.
Of course there are many other uses one can make of news data, in particular with firm-specific news and sentiment analytics, that fall outside the scope of this article. Hopefully, however, the methodology described here will provide a signpost towards further, more practically useful research.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig1.png&userId=773999
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig2.png&userId=773999
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig3.png&userId=773999
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig4a.png&userId=773999
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig5.png&userId=773999
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig6.png&userId=773999
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig8.png&userId=773999
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig9.png&userId=773999
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig10.png&userId=773999
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig10a.png&userId=773999
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig15.png&userId=773999
Jonathan Kinlay 2016-11-28T20:41:07Z
Get right results while solving equations?
http://community.wolfram.com/groups//m/t/1189338
Good morning,
I have a problem: when I recalculate old equations, I get a completely different result than before. What could be the cause?
I have already restarted the kernel, but that did not help. The same happens with new equations.
I attach screenshots: the old notebook as opened, and the output after recalculation.
I do not know what else to do ...
![enter image description here][1]
The result is transmittance {....}
![enter image description here][2]
The result here is V2
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1.JPG&userId=1189321
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2.JPG&userId=1189321
Kris SD 2017-09-21T13:29:08Z
Find the region after using Minimize?
http://community.wolfram.com/groups//m/t/1189244
I use the command below to calculate a minimum:
Minimize[Sum[Abs[x - (3*k - 55)], {k, 1, 28}], x]
The answer is {588, {x -> -13}}.
The minimum is right, but x should be a whole region. How can I find the right region?
Thanks.
dirac fang 2017-09-21T09:43:35Z
Incremental Risk Charge (IRC) under alternative distributional models
http://community.wolfram.com/groups//m/t/535080
We present alternative methods for Incremental Risk Calculation, which is one of the key pillars of the new capital regime for financial institutions under the comprehensive review of the Trading Book, known as Basel 2.5. The primary analytical tool is the ***Copula method***, which enables efficient mixing of marginals into full-scale multivariate distributions. We test various distributional assumptions and find significant variations in the model measure when different assumptions about risk dynamics are used.
![IRC Image  SD model ][1]
#Objectives#
We will review alternative approaches to IRC modelling, exploring the full power and flexibility of Wolfram Finance Platform in both the symbolic and numerical domains. Our modelling basis rests on the *hazard rate process*, Bernoulli default events and their extension to the multivariate setting in the portfolio context, where we employ copulas.
We show that this approach is quick and can be extended in various dimensions to accommodate institution-specific constraints.
#IRC  Background#
IRC is an integral component of the banks' capital regime and therefore attracts due attention from both regulators and financial professionals, who require accurate and robust methods for calculating this measure.
- IRC is the new risk measure (since 2009), introduced as part of the comprehensive review of the Trading Book under Basel 2.5.
- The general consensus relies on the notion of undercapitalisation and, to some extent, unpreparedness to withstand severe financial shocks. Banks have to enhance their capital position.
#Why IRC?#
One of the most critical weaknesses of the pre-crisis capital framework was the inability of Value-at-Risk (VaR) models to capture asset value deterioration due to credit *risk migration*.
- Historical observations => massive losses occurred not due to actual issuers' defaults, but due to a credit risk explosion in financial assets (primarily bonds and credit default swaps) held on trading books and required to be marked-to-market
- In essence, the losses were generated by a significant widening of credit spreads and by rating deterioration
- In this respect IRC is a regulatory response to the above problem, and its primary objective is to address the VaR limitations in the spread migration domain
#Capital regime and IRC#
- Capital requirements under Basel II:
> $RC = (MM+F) VaR + VaR(S)$
where:
MM = model multiplier >= 3, 0 <= F < 1 is the backtesting excess factor,
VaR = the 10-day 99% loss, VaR(S) = specific VaR
- The risk formula has been significantly enhanced in the Basel 2.5 framework to include:
> $RC (New) = (MM_c+F) VaR + (MM_s+F) StVaR + IRC + Max[CRM, Floor] + SC$
where: StVaR = stressed VaR with its own multiplier, IRC = Incremental Risk Charge, CRM = incremental charge for correlation trading books subject to a minimum prescribed floor level, SC = specific charge for securitised positions
- As we can see, the new risk capital calculation is more advanced and comprehensive:
- It addresses each individual risk factor separately
- Financial institutions subject to regulatory control are required to calculate the IRC at least weekly at 99.9% confidence with a one-year capital horizon
- IRC goes beyond VaR in two respects:
 - (i) the confidence interval extends deeper into the tail of the loss distribution, with a higher confidence limit (99.9%)
 - (ii) the capital horizon is set to 1 year, as opposed to the 10-day market-risk VaR calculation
- Given the length of the capital horizon, the regulator allows shorter "liquidity" time frames (minimum of 3 months) for portfolio rebalancing; however, the "constant level of risk" must be preserved
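Plugging illustrative numbers into the Basel 2.5 formula above shows how the components combine. All inputs in this Python sketch are hypothetical, not regulatory figures:

```python
def risk_capital_basel25(var, stvar, irc, crm, floor, sc,
                         mm_c=3.0, mm_s=3.0, f=0.0):
    """RC(New) = (MM_c + F)*VaR + (MM_s + F)*StVaR + IRC
                 + max(CRM, Floor) + SC  (all inputs illustrative)."""
    return (mm_c + f) * var + (mm_s + f) * stvar + irc + max(crm, floor) + sc

# Hypothetical charges, e.g. in millions
rc = risk_capital_basel25(var=10.0, stvar=15.0, irc=8.0,
                          crm=4.0, floor=5.0, sc=2.0)
# (3*10) + (3*15) + 8 + max(4, 5) + 2 = 90
```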
#IRC Methodologies#
- Given the scope and definition, the measure can be calculated in a number of ways. The most frequently used are:
- Rating-based approach with a rating migration profile
- Credit spread modelling approach with evolutionary default probabilities
- Mixture methods, where both approaches are combined into a coherent modelling framework
Our **preferred** choice:
- A direct approach to the credit spread, which contains all relevant information about the creditworthiness of each issuer.
- With a defined recovery rate, the spread determines the default probability for each issuer in the portfolio, which will drive and affect the loss distribution
- We will employ standard credit spread modelling approaches for clarity and consistency purposes, and demonstrate the ease of implementation of the proposed solution.
#Defining components of the IRC methods#
When dealing with a large portfolio of credit-sensitive instruments, it may be advisable to break the entire set of assets down into smaller "clusters" with some common features.
- Ratings - external or internal - have long been a preferred choice for grouping assets of similar credit quality
- This is because rating clusters enable treatment of the subset with a consistent set of parameters
We assume constant rates, and also that credit spread and recovery information exists for each issuer in the cluster. We define the following measures:
- Hazard rate: $h = Spread/((1-R)T)$
- Survival probability: $Q = Exp[-h*T]$
- Cumulative default probability: $F = 1 - Q = 1 - Exp[-h*T]$
- If we assume that default, represented through the hazard rate, is a Bernoulli event with parameter h, we obtain the volatility of the hazard rate as:
- $vol = Sqrt[h(1-h)]$
Knowing the market-implied hazard rate and its volatility enables us to build distributional models for the *hazard rate process* that will drive the future losses in the portfolio due to credit quality deterioration
- This is an alternative to the more traditional rating migration approach: here we simply assume that the hazard rate evolution through time is a stochastic process with a given stationary distribution *H(T)*, from which we can compute quantiles at the prescribed confidence level of 99.9%
- Risk-adjusted maturity: $T_r = T*Q$
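The per-issuer measures defined above are straightforward to compute. A Python sketch (the 180 bp spread, 35% recovery and 5-year maturity are illustrative inputs, matching the conventions used later in the article):

```python
import math

def credit_measures(spread, recovery, maturity):
    """Hazard rate, survival probability, cumulative default probability,
    Bernoulli volatility and risk-adjusted maturity, following the
    definitions above (flat hazard rate over [0, T])."""
    h = spread / ((1.0 - recovery) * maturity)
    q = math.exp(-h * maturity)        # survival probability Q
    f = 1.0 - q                        # cumulative default probability F
    vol = math.sqrt(h * (1.0 - h))     # Bernoulli volatility of h
    t_risk = maturity * q              # risk-adjusted maturity T_r
    return h, q, f, vol, t_risk

# 180 bp CDS spread, 35% recovery, 5-year maturity (illustrative)
h, q, f, vol, t_risk = credit_measures(0.018, 0.35, 5.0)
```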
#Portfolio credit risk #
What is special about a portfolio of credit-sensitive assets?
- It is similar to individual assets; however, besides credit exposure we need to include the dependency structure, i.e. the tendency of individual assets to react together. Dependency is critically important as it affects the likelihood of joint events such as a portfolio loss
- The higher the probability of joint deterioration of creditworthiness, the bigger the portfolio loss
- Probabilistically, the credit portfolio is a collection of individual assets (random variables) $\Pi = \{X_1, X_2, \ldots, X_n\}$
- Our primary tool for the analysis is multivariate distributions
- Our preferred choice for modelling the dependency structure is the *copula approach*
- Correlation is probably the best-known measure of dependency in finance.
- Although not necessarily true, the association of correlation with dependency is strongly embedded in the mindset of financial professionals
- For ease of exposition, we will select copulas that explicitly take a correlation matrix as input
- In the credit portfolio context we need to be careful about which correlation to use. Since our building block is the hazard rate, we essentially need the correlation of hazard rates, which, however, is not directly observable in the market
- The hazard rate correlation is defined as ![enter image description here][2]
- The only difficulty in this formula is the calculation of the joint expectation. A binormal copula can easily be applied in this setting, and the correlation can be obtained from the joint distribution. This is available in Wolfram Finance Platform (WFP), so the hazard rate correlation above can be obtained directly from the binormal copula with Bernoulli marginals ${h_1, h_2}$ and asset correlation $\rho$.
BerCop = CopulaDistribution[{"Binormal", \[Rho]}, {BernoulliDistribution[h1], BernoulliDistribution[h2]}];
Correlation[BerCop, 1, 2]
![enter image description here][3]
% /. {\[Rho] -> 0.3, h1 -> 0.03, h2 -> 0.05}
> 0.0867005
- Note that the hazard rate correlation (about 8.7%) is a much lower number than the asset correlation (30%) used in the binormal copula. This is supported by empirical evidence
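The closed-form copula correlation above can be cross-checked by brute force. A Monte Carlo Python sketch of the same binormal-copula construction (NormalDist.inv_cdf supplies the Gaussian quantiles; the sample estimate should land near the 0.0867 value for rho = 0.3, h1 = 0.03, h2 = 0.05):

```python
import random
from statistics import NormalDist

def bernoulli_copula_corr(rho, h1, h2, n=300_000, seed=7):
    """Estimate the correlation of two Bernoulli default indicators
    joined by a Gaussian (binormal) copula with asset correlation rho.
    Default occurs when the latent normal falls below the h-quantile."""
    rng = random.Random(seed)
    t1 = NormalDist().inv_cdf(h1)
    t2 = NormalDist().inv_cdf(h2)
    k = (1.0 - rho * rho) ** 0.5
    n1 = n2 = n12 = 0
    for _ in range(n):
        g1 = rng.gauss(0.0, 1.0)
        g2 = rho * g1 + k * rng.gauss(0.0, 1.0)
        d1 = g1 < t1
        d2 = g2 < t2
        n1 += d1
        n2 += d2
        n12 += d1 and d2
    p1, p2, p12 = n1 / n, n2 / n, n12 / n
    cov = p12 - p1 * p2
    return cov / (p1 * (1 - p1) * p2 * (1 - p2)) ** 0.5

est = bernoulli_copula_corr(0.3, 0.03, 0.05)
```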
To complete the credit portfolio setup, we further define:
- Weighted-average risky duration:
![enter image description here][4] where w = normalised weight vector of each asset in the cluster
- Weighted-average hazard rate:
![enter image description here][5]
- Clustered portfolio volatility:
![enter image description here][6]
#Hazard rate process#
Reducing the clustered portfolio dimensionality to single-variable moments is a practical step for multi-cluster modelling. We can model the hazard rate evolution through some well-known Ito stochastic processes, calculate their moments and then parametrise the distributional assumptions. The following models can be applied here:
- ![enter image description here][7]
- Each process represents a 'single' cluster risk evolution. We combine clusters through copulas, define their joint distributions and obtain their quantiles. The quantiles then lead to the 'cluster loss' measure
- IRC is finally the sum of all individual cluster losses.
#WFP / Mathematica implementation#
- WFP is ideal for this task:
 - It offers computational efficiency
 - It provides the required functional interfaces and great visualisation tools
- We assume:
 - 5 rating categories in the portfolio: A, B, C, D, E
 - The same amount per cluster = 100 mil
- We demonstrate the implementation with portfolio A:
 - 10 positions with maturities from 1 to 10 years
 - CDS spreads vary between 100~250 bp
 - Each position is represented by its weight in the range [0,1]
mat = RandomSample[Range[0.75, 10, 0.1], 10]
cds = RandomSample[Range[0.01, 0.025, 0.001], 10]
wr = RandomSample[Range[0, 1, 0.01], 10]; w = Normalize[wr, Total]
> {3.65, 1.75, 6.35, 8.55, 2.95, 6.15, 4.95, 3.85, 5.15, 1.15}
> {0.018, 0.015, 0.02, 0.016, 0.023, 0.022, 0.021, 0.019, 0.014, 0.01}
> {0.191083, 0.106157, 0.11465, 0.201699, 0.0127389, 0.00424628, 0.0339703, 0.161359, 0.0997877, 0.07431}
- We assume 35% recovery for each asset in the cluster, and from there we calculate the hazard rate, cumulative default probability and hazard rate volatility
h = cds/((1 - 0.35) mat)
F = 1 - Exp[-h*mat]
rdur = mat*Exp[-h*mat]
vol = Sqrt[h (1 - h)]
> {0.00758693, 0.0131868, 0.00484555, 0.00287899, 0.0119948, 0.00550344,
> 0.00652681, 0.00759241, 0.00418223, 0.0133779}
>
> {0.0273124, 0.0228127, 0.0303007, 0.0243149, 0.0347659, 0.0332798,
> 0.0317914, 0.0288077, 0.0213082, 0.0152669}
>
> {3.55031, 1.71008, 6.15759, 8.34211, 2.84744, 5.94533, 4.79263,
> 3.73909, 5.04026, 1.13244}
>
> {0.086772, 0.114074, 0.0694411, 0.0535789, 0.108862, 0.0739808,
> 0.0805246, 0.086803, 0.0645348, 0.114887}
- We build the following (low-level) correlation structure:
CM[n_] :=
Module[{tb, ms},
tb = Table[
If[i == j, 1/2, If[j > i, RandomReal[{0, 0.15}], 0]], {i, n}, {j,
n}];
ms = tb + Transpose[tb]]
CM[10] // MatrixForm
![enter image description here][8]
#Obtaining cluster characteristics#
We compute the following measures:
- Risk-adjusted duration
- Weighted-average hazard rate
- Weighted hazard rate volatility
wvol = w*vol
cdur = Total[w*rdur]
chmean = Total[w*rdur*h]/cdur
cvol = Sqrt[wvol.CM[10].wvol]
> {0.0165806, 0.0121098, 0.0079614, 0.0108068, 0.00138678, 0.000314143,
> 0.00273544, 0.0140064, 0.00643977, 0.00853723}
> 4.66327
> 0.0054152
> 0.0367442
This shows that our Rating A cluster with 100 mil of assets has a risky duration of **4.66 years**, a risk-adjusted hazard rate of **54 bp** and a hazard rate volatility of **3.67%**.
- We skip repeating the above for each cluster and simply assume the following measures:
- Duration per cluster:
tdur = Table[
If[i < 1, cdur, cdur*Exp[RandomReal[{0.1, 0.35}]]], {i, 0, 4}]
> {4.66327, 5.83364, 5.59038, 6.52077, 5.42679}
- Hazard rate:
thmean = Table[
If[i < 1, chmean, chmean*Exp[RandomReal[{0.1, 0.25}]]], {i, 0, 4}]
> {0.0054152, 0.00625678, 0.00641638, 0.00653013, 0.00671054}
- Volatility:
tvol = Table[If[i < 1, cvol, cvol*Exp[RandomReal[{0.05, 0.15}]]], {i, 0, 4}]
> {0.0367442, 0.0415985, 0.0397301, 0.0419299, 0.0388844}
- Inter-cluster correlation matrix:
clcorrel = ({
{1, 0.21, 0.2, 0.23, 0.16},
{0.21, 1, 0.18, 0.16, 0.25},
{0.2, 0.18, 1, 0.19, 0.29},
{0.23, 0.16, 0.19, 1, 0.17},
{0.16, 0.25, 0.29, 0.17, 1}
} );
- from which we build the covariance matrix:
CovarMatrix[cm_, vols_] :=
Module[{n, fms}, n = Length[vols];
fms = Table[cm[[i, j]]*vols[[i]]*vols[[j]], {i, n}, {j, n}]]
clcovar = CovarMatrix[clcorrel, tvol];
CovarMatrix[clcorrel, tvol] // MatrixForm
![enter image description here][9]
#Defining Copulae#
Depending on the hazard rate process specification, we get the following calibration output for the forward hazard rate setting, in terms of the **mean** and **volatility** of the stationary process:
- Normal process:
ito = ItoProcess[{\[Mu], \[Sigma]}, {x, x0}, t];
{Mean[ito[t]], StandardDeviation[ito[t]]}
![enter image description here][10]
- Mean-reverting normal process:
ito = ItoProcess[{\[CurlyTheta] (\[Mu] - x), \[Sigma]}, {x, x0}, t];
{Mean[ito[t]], StandardDeviation[ito[t]]} // Simplify
![enter image description here][11]
- LogNormal process:
ito = ItoProcess[{\[Mu]*x, \[Sigma]*x}, {x, x0}, t];
{Mean[ito[t]], StandardDeviation[ito[t]]} // Simplify
![enter image description here][12]
- Square-root diffusion process:
ito = ItoProcess[{\[CurlyTheta] (\[Mu] - x[t]), \[Sigma]*Sqrt[x[t]]}, {x, x0}, t];
{Mean[ito[t]], StandardDeviation[ito[t]]} // Simplify
![enter image description here][13]
If we know the drift and diffusion parameters, the terminal forward rates can easily be obtained from the formulas above
- Consider the simplest case, the Normal process, and assume that t=1 and the process drift $\mu=0$. This leads to the following model calibration: mean = cluster's weighted-average hazard rate and vol = cluster's volatility.
ito = ItoProcess[{\[Mu], \[Sigma]}, {x, x0}, t];
{Mean[ito[t]], StandardDeviation[ito[t]]} /. {t -> 1, \[Mu] -> 0} // TraditionalForm
![enter image description here][14]
- If the aim is to preserve the full correlation structure of all rating classes, we have two copula choices: (i) Gaussian and (ii) Student-T:
- Gaussian: ![enter image description here][15]
- Student-T: ![enter image description here][16]
#Gaussian Copulae model#
Let's recall the basic parameters:
thmean
tvol
> {0.0054152, 0.00625678, 0.00641638, 0.00653013, 0.00671054}
>
> {0.0367442, 0.0415985, 0.0397301, 0.0419299, 0.0388844}
- GC with Normal marginals: 99.9% quantiles
cdNorm = CopulaDistribution[{"Multinormal", clcovar},
Table[NormalDistribution[thmean[[i]], tvol[[i]]], {i, 1, 5}]];
rsample = RandomVariate[cdNorm, 10^4];
ircNormhr = Quantile[rsample, 0.999]
> {0.115322, 0.128352, 0.126666, 0.13725, 0.131207}
- Graphical representation
SmoothHistogram[{rsample[[All, 1]], rsample[[All, 2]],
  rsample[[All, 3]], rsample[[All, 4]],
  rsample[[All, 5]]}, Automatic, "PDF",
 PlotLabel -> Style["Copula distribution with Normal marginals", 18],
 PlotTheme -> "Business",
 PlotLegends -> {"Rating A", "Rating B", "Rating C", "Rating D", "Rating E"}]
![enter image description here][17]
- We can also easily visualise each cluster with histograms
{Histogram[Style[rsample[[All, 1]], Red], PlotLabel -> "Rating A"],
 Histogram[Style[rsample[[All, 2]], Green], PlotLabel -> "Rating B"],
 Histogram[Style[rsample[[All, 3]], Blue], PlotLabel -> "Rating C"],
 Histogram[Style[rsample[[All, 4]], Purple], PlotLabel -> "Rating D"],
 Histogram[Style[rsample[[All, 5]], Orange], PlotLabel -> "Rating E"]}
![enter image description here][18]
#Mixture copulae models#
We use the Gaussian copula with different marginals to create 'mixture' dynamics
- Logistic marginals model:
The Logistic model is one of the alternatives to the Normal copula model, where we draw the marginals from Logistic probability distributions. Parametrisation is done through moment matching, where Mean[XYZ] == $\eta$ && Variance[XYZ] == $\sigma^2$
Recall: the PDF for value x in a logistic model is proportional to $e^{-(x-\mu)/\beta}/(1+e^{-(x-\mu)/\beta})^2$
dsoln1 = Refine[Solve[{Mean[LogisticDistribution[a, b]] == \[Eta],
     Variance[LogisticDistribution[a, b]] == \[Sigma]^2}, {a, b}, Reals], {\[Sigma] > 0}];
calcmean = Table[dsoln1[[1, 1, 2]] /. {\[Eta] -> thmean[[i]], \[Sigma] -> tvol[[i]]}, {i, 1, 5}];
calcvol = Table[dsoln1[[2, 2, 2]] /. {\[Eta] -> thmean[[i]], \[Sigma] -> tvol[[i]]}, {i, 1, 5}];
cdLogistic = CopulaDistribution[{"Multinormal", clcovar}, Table[LogisticDistribution[calcmean[[i]], calcvol[[i]]], {i, 1, 5}]];
rsample = RandomVariate[cdLogistic, 10^4];
ircLogistichr = Quantile[rsample, 0.999]
> {0.137919, 0.173553, 0.172077, 0.166075, 0.155967}
- Visualising the model output:
SmoothHistogram[{rsample[[All, 1]], rsample[[All, 2]], rsample[[All, 3]], rsample[[All, 4]], rsample[[All, 5]]},
 Automatic, "PDF",
 PlotLabel -> Style["Copula distribution with Logistic marginals", 18],
 PlotLegends -> {"Rating A", "Rating B", "Rating C", "Rating D", "Rating E"}]
![enter image description here][19]
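The Solve step in the logistic model has a closed form: for LogisticDistribution[a, b], mean = a and variance = pi^2 b^2 / 3, so a = eta and b = sigma*sqrt(3)/pi. A quick Python check of the round-trip (the eta and sigma inputs are the Rating A cluster values, used here purely for illustration):

```python
import math

def logistic_params(eta, sigma):
    """Moment-match LogisticDistribution[a, b] to a target mean eta and
    standard deviation sigma: mean = a, variance = pi^2 * b^2 / 3."""
    return eta, sigma * math.sqrt(3.0) / math.pi

a, b = logistic_params(0.0054, 0.0367)
var = math.pi ** 2 * b ** 2 / 3.0   # round-trips to sigma^2
```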
- Laplace marginals model
The Laplace distribution gives the distribution of the difference between two independent random variables with identical exponential distributions, with PDF:
PDF[LaplaceDistribution[\[Mu], \[Beta]], x]
![enter image description here][20]
dsoln = Refine[
   Solve[{Mean[LaplaceDistribution[a, b]] == \[Eta],
     Variance[LaplaceDistribution[a, b]] == \[Sigma]^2}, {a, b},
    Reals], {\[Sigma] > 0}];
calcmean =
  Table[dsoln[[1, 1,
      2]] /. {\[Eta] -> thmean[[i]], \[Sigma] -> tvol[[i]]}, {i, 1, 5}];
calcvol =
  Table[dsoln[[2, 2,
      2]] /. {\[Eta] -> thmean[[i]], \[Sigma] -> tvol[[i]]}, {i, 1, 5}];
cdLaplace =
CopulaDistribution[{"Multinormal", clcovar},
Table[LaplaceDistribution[calcmean[[i]], calcvol[[i]]], {i, 1, 5}]];
rsample = RandomVariate[cdLaplace, 10^4];
ircLaplacehr = Quantile[rsample, 0.999]
> {0.147783, 0.19349, 0.171785, 0.171115, 0.172867}
SmoothHistogram[{rsample[[All, 1]], rsample[[All, 2]],
  rsample[[All, 3]], rsample[[All, 4]],
  rsample[[All, 5]]}, Automatic, "PDF",
 PlotLabel -> Style["Copula distribution with Laplace marginals", 18],
 PlotRange -> All,
 PlotLegends -> {"Rating A", "Rating B", "Rating C", "Rating D", "Rating E"}]
![enter image description here][21]
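For the Laplace marginals the moment matching is equally simple: the variance of LaplaceDistribution[a, b] is 2b^2, so a = eta and b = sigma/sqrt(2). A Python check with the same illustrative cluster inputs:

```python
import math

def laplace_params(eta, sigma):
    """Moment-match LaplaceDistribution[a, b] to mean eta and standard
    deviation sigma: mean = a, variance = 2 * b^2."""
    return eta, sigma / math.sqrt(2.0)

a, b = laplace_params(0.0054, 0.0367)
var = 2.0 * b ** 2   # round-trips to sigma^2
```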
#GC with Stable family marginals#
- Stable marginals model
A stable distribution is defined in terms of its characteristic function $\phi(t)$, which satisfies a functional equation: for any a and b there exist c and h such that $\phi(at)\phi(bt)=\phi(ct)\exp(iht)$
dsoln = Refine[Solve[{Mean[StableDistribution[1, 2, 1, a, b]] == \[Eta],
     Variance[StableDistribution[1, 2, 1, a, b]] == \[Sigma]^2}, {a, b}, Reals], {\[Sigma] > 0}];
calcmean = Table[dsoln[[1, 1, 2]] /. {\[Eta] -> thmean[[i]], \[Sigma] -> tvol[[i]]}, {i, 1, 5}];
calcvol = Table[dsoln[[2, 2, 2]] /. {\[Eta] -> thmean[[i]], \[Sigma] -> tvol[[i]]}, {i, 1, 5}];
cdStable = CopulaDistribution[{"Multinormal", clcovar},
Table[StableDistribution[1, 2, 1, calcmean[[i]], calcvol[[i]]], {i,1, 5}]];
rsample = RandomVariate[cdStable, 10^4];
ircStablehr = Quantile[rsample, 0.999]
> {0.117744, 0.131838, 0.130584, 0.132754, 0.127134}
{SmoothHistogram[{rsample[[All, 1]], rsample[[All, 2]],
   rsample[[All, 3]], rsample[[All, 4]], rsample[[All, 5]]},
  Automatic, "PDF",
  PlotLabel -> Style["Copula distribution with Stable marginals", 18],
  PlotLegends -> {"Rating A", "Rating B", "Rating C", "Rating D",
    "Rating E"}, PlotTheme -> "Business", ImageSize -> 300],
 SmoothHistogram[{rsample[[All, 1]], rsample[[All, 2]],
   rsample[[All, 3]], rsample[[All, 4]], rsample[[All, 5]]},
  Automatic, "PDF",
  PlotLabel -> Style["Tail focus: GC with Stable marginals", 16],
  PlotRange -> {{0.08, 0.15}, All},
  PlotLegends -> {"Rating A", "Rating B", "Rating C", "Rating D",
    "Rating E"}, ImageSize -> 300]}
![enter image description here][22]
- Max Stable marginals model
The probability density for value x in a generalized maximum extreme value distribution is proportional to ![enter image description here][23] for $\xi(x-\mu)/\sigma+1>0$ and zero otherwise
dsoln = Refine[
   Solve[{Mean[MaxStableDistribution[a, b, 0]] == \[Eta],
     Variance[MaxStableDistribution[a, b, 0]] == \[Sigma]^2}, {a, b},
    Reals], {\[Sigma] > 0}];
calcmean =
  Table[dsoln[[2, 1,
      2]] /. {\[Eta] -> thmean[[i]], \[Sigma] -> tvol[[i]]}, {i, 1, 5}];
calcvol =
  Table[dsoln[[1, 2,
      2]] /. {\[Eta] -> thmean[[i]], \[Sigma] -> tvol[[i]]}, {i, 1, 5}];
cdMaxStable =
CopulaDistribution[{"Multinormal", clcovar},
Table[MaxStableDistribution[calcmean[[i]], calcvol[[i]], 0], {i, 1,
5}]];
rsample = RandomVariate[cdMaxStable, 10^4];
ircMaxStablehr = Quantile[rsample, 0.999]
> {0.223284, 0.253294, 0.236271, 0.250557, 0.217929}
{SmoothHistogram[{rsample[[All, 1]], rsample[[All, 2]],
rsample[[All, 3]], rsample[[All, 4]], rsample[[All, 5]]},
Automatic, "PDF",
PlotLabel ->
Style["Copula distribution with MaxStable marginals", 15],
PlotLegends -> {"Rating A", "Rating B", "Rating C", "Rating D",
"Rating E"}, ImageSize -> 330],
SmoothHistogram[{rsample[[All, 1]], rsample[[All, 2]],
rsample[[All, 3]], rsample[[All, 4]], rsample[[All, 5]]},
Automatic, "PDF",
PlotLabel -> Style["Tail focus: GC with MaxStable marginals", 16],
PlotRange -> {{0.2, 0.3}, All},
PlotLegends -> {"Rating A", "Rating B", "Rating C", "Rating D",
"Rating E"}, ImageSize -> 330]}
![enter image description here][24]
##MinStable marginals model##
The probability density for value x in a generalized minimum extreme value distribution is proportional to ![enter image description here][25] for $\xi (\mu-x)/\sigma+1>0$ and zero otherwise.
dsoln = Refine[
Solve[{Mean[MinStableDistribution[a, b, 0]] == \[Eta],
Variance[MinStableDistribution[a, b, 0]] == \[Sigma]^2}, {a, b},
Reals], {\[Sigma] > 0}];
calcmean =
Table[dsoln[[2, 1,
2]] /. {\[Eta] -> thmean[[i]], \[Sigma] -> tvol[[i]]}, {i, 1, 5}];
calcvol =
Table[dsoln[[1, 2, 2]] /. {\[Eta] -> thmean[[i]], \[Sigma] ->
tvol[[i]]}, {i, 1, 5}];
cdMinStable =
CopulaDistribution[{"Multinormal", clcovar},
Table[MinStableDistribution[calcmean[[i]], calcvol[[i]], 0], {i, 1,
5}]];
rsample = RandomVariate[cdMinStable, 10^4];
ircMinStablehr = Quantile[rsample, 0.999]
> {0.0764216, 0.0882136, 0.0843226, 0.0862706, 0.0832078}
SmoothHistogram[{rsample[[All, 1]], rsample[[All, 2]],
rsample[[All, 3]], rsample[[All, 4]],
rsample[[All, 5]]}, Automatic, "PDF",
PlotLabel ->
Style["Copula distribution with MinStable marginals", 16],
PlotLegends -> {"Rating A", "Rating B", "Rating C", "Rating D",
"Rating E"}]
![enter image description here][26]
#Extreme value copulae models#
##Extreme value marginals model##
The probability density for value x in an extreme value distribution is proportional to ![enter image description here][27]
dsoln = Refine[
Solve[{Mean[ExtremeValueDistribution[a, b]] == \[Eta],
Variance[ExtremeValueDistribution[a, b]] == \[Sigma]^2}, {a, b},
Reals], {\[Sigma] > 0}];
calcmean =
Table[dsoln[[2, 1,
2]] /. {\[Eta] -> thmean[[i]], \[Sigma] -> tvol[[i]]}, {i, 1, 5}];
calcvol =
Table[dsoln[[1, 2,
2]] /. {\[Eta] -> thmean[[i]], \[Sigma] -> tvol[[i]]}, {i, 1, 5}];
cdExtVal =
CopulaDistribution[{"Multinormal", clcovar},
Table[ExtremeValueDistribution[calcmean[[i]], calcvol[[i]]], {i, 1,
5}]];
rsample = RandomVariate[cdExtVal, 10^4];
ircExtValhr = Quantile[rsample, 0.999]
> {0.223903, 0.250818, 0.228019, 0.255494, 0.249027}
{SmoothHistogram[{rsample[[All, 1]], rsample[[All, 2]],
rsample[[All, 3]], rsample[[All, 4]], rsample[[All, 5]]},
Automatic, "PDF",
PlotLabel ->
Style["Copula distribution with Extreme Value marginals", 16],
PlotLegends -> {"Rating A", "Rating B", "Rating C", "Rating D",
"Rating E"}, ImageSize -> 350],
SmoothHistogram[{rsample[[All, 1]], rsample[[All, 2]],
rsample[[All, 3]], rsample[[All, 4]], rsample[[All, 5]]},
Automatic, "PDF",
PlotLabel ->
Style["Copula distribution with Extreme Value marginals", 16],
PlotRange -> {{0.15, 0.25}, All},
PlotLegends -> {"Rating A", "Rating B", "Rating C", "Rating D",
"Rating E"}, ImageSize -> 350]}
![enter image description here][28]
##StudentT marginals model##
The probability density for value x in a Student t distribution with $\nu$ degrees of freedom is proportional to ![enter image description here][29]
dsoln = Refine[
Solve[{Mean[StudentTDistribution[a, b, 10]] == \[Eta],
Variance[StudentTDistribution[a, b, 10]] == \[Sigma]^2}, {a, b},
Reals], {\[Sigma] > 0}];
calcmean =
Table[dsoln[[1, 1,
2]] /. {\[Eta] -> thmean[[i]], \[Sigma] -> tvol[[i]]}, {i, 1, 5}];
calcvol =
Table[dsoln[[2, 2,
2]] /. {\[Eta] -> thmean[[i]], \[Sigma] -> tvol[[i]]}, {i, 1, 5}];
cdStudent =
CopulaDistribution[{"Multinormal", clcovar},
Table[StudentTDistribution[calcmean[[i]], calcvol[[i]], 10], {i, 1,
5}]];
rsample = RandomVariate[cdStudent, 10^4];
ircStudenthr = Quantile[rsample, 0.999]
> {0.135742, 0.180349, 0.142938, 0.158109, 0.152853}
SmoothHistogram[{rsample[[All, 1]], rsample[[All, 2]],
rsample[[All, 3]], rsample[[All, 4]],
rsample[[All, 5]]}, Automatic, "PDF",
PlotLabel ->
Style["Copula distribution with StudentT marginals", 18],
PlotLegends -> {"Rating A", "Rating B", "Rating C", "Rating D",
"Rating E"}]
![enter image description here][30]
#Summarising results#
This is a summary of the IRC models discussed above:
ircrates = {ircNormhr, ircLogistichr, ircLaplacehr, ircStablehr,
ircMaxStablehr, ircMinStablehr, ircExtValhr, ircStudenthr};
TableForm[ircrates,
TableHeadings -> {{"Normal", "Logistic", "Laplace", "Stable",
"MaxStable", "MinStable", "ExtVal", "StudntT"}, {"Rtng A",
"Rtng B", "Rtng C", "Rtng D", "Rtng E"}}] // TraditionalForm
![enter image description here][31]
We observe:
- Variation in the quantile values across the specified models
- The values are quite different, especially for the heavy-tailed distributions (MaxStable, MinStable and Extreme Value)
- This confirms that model selection can have a significant impact on the IRC calculation.
#IRC Calculation#
To obtain the final IRC metric, we need to revalue each cluster portfolio with its quantiled hazard rate. We first simulate a cash flow vector for each rating cluster.
CF = Table[{i, RandomReal[{0.03, 0.08}]}, {i, 0.5, 6, 0.5}];
CFT = Table[
Table[{i, RandomReal[{0.03, 0.08}]}, {i, 0.5, 6, 0.5}], {5}];
BarChart[CF[[All, 2]], ChartStyle -> "Rainbow",
PlotLabel -> Style["Cash flow schedule in cluster A", 20],
ChartLabels -> CF[[All, 1]], ImageSize -> 500]
![enter image description here][32]
To measure the P&L impact, we evaluate each cluster's future cash flows with the "base case" and "quantiled" hazard rates. The difference between these two measures is the portfolio loss due to the credit spread widening, or the **incremental risk change**.
The IRC is driven by the hazard rate, and we can visualise its term structure in the chart below:
il[k_] := {{1, Sqrt[(1/tdur[[k]])]*thmean[[k]]}, {3,
Sqrt[(3/tdur[[k]])]*thmean[[k]]}, {tdur[[k]],
Sqrt[(tdur[[k]]/tdur[[k]])]*thmean[[k]]}}
ilirc[j_,
k_] := {{1, Sqrt[(1/tdur[[k]])]*ircrates[[j, k]]}, {3,
Sqrt[(3/tdur[[k]])]*ircrates[[j, k]]}, {tdur[[k]],
Sqrt[(tdur[[k]]/tdur[[k]])]*ircrates[[j, k]]}}
InterpBase[k_] := Interpolation[il[k]]
InterpIrc[j_, k_] := Interpolation[ilirc[j, k]]
Plot[{InterpBase[2][x], InterpIrc[4, 2][x]}, {x, 0, 6},
PlotLabel ->
Style["Term structure of Base and Quantiled Hazard Rates", 16],
PlotTheme -> "Web", PlotLegends -> {"Base", "Quantiled"}] // Quiet
![enter image description here][33]
We calculate the IRC as the difference between the cash flows per cluster risk-adjusted by the "base case" hazard rate and those adjusted by the quantiled one.
An example of the base flows:
BaseFlow[n_] :=
Sum[CFT[[n, i, 2]]*
Exp[CFT[[n, i, 1]]*InterpBase[n][CFT[[n, i, 1]]]], {i, 1,
Length[CF]}] // Quiet
IrcFlow[j_, n_] :=
Sum[CFT[[n, i, 2]]*
Exp[CFT[[n, i, 1]]*InterpIrc[j, n][CFT[[n, i, 1]]]], {i, 1,
Length[CF]}] // Quiet
Table[BaseFlow[i], {i, 1, 5}]
> {0.727658, 0.724374, 0.611214, 0.674302, 0.580502}
And these are the corresponding results for the quantiled hazard rates under each copula model:
TableForm[Table[IrcFlow[i, j], {i, 1, 8}, {j, 1, 5}],
TableHeadings -> {{"Normal", "Logistic", "Laplace", "Stable",
"MaxStable", "MinStable", "ExtVal", "StudntT"}, {"Rtng A",
"Rtng B", "Rtng C", "Rtng D", "Rtng E"}}] // TraditionalForm
![enter image description here][34]
Finally, we obtain the total IRC for the entire portfolio as the sum of losses across all clusters. As the graph below shows, the losses are highly dependent on the model choice, with the MaxStable and Extreme Value models displaying the highest losses, essentially multiples of the lighter-tail alternatives.
ModelIrc =
Table[Total[
Table[(IrcFlow[j, i] - BaseFlow[i]), {i, 1, 5}]*10^7], {j, 1,
8}];
BarChart[ModelIrc, PlotLabel -> Style["IRC per model", 20],
ChartLabels -> {"Norm", "Logs", "Lapl", "Stbl", "MxS", "MnS", "EV",
"StT"}, ImageSize -> 500, LabelStyle -> Directive[Blue, 16],
PlotRange -> All,
ChartElementFunction ->
ChartElementDataFunction["SegmentScaleRectangle", "Segments" -> 10,
"ColorScheme" -> "SolarColors"]]
![enter image description here][35]
#Final words.....#
- Copulas are practical and tractable methods for constructing multivariate distributions and are therefore ideally suited to portfolio risk modelling
- We have demonstrated the enormous flexibility and power of Wolfram Finance Platform for this task
- It provides a robust and consistent framework for every aspect of risk modelling
- It can handle large amounts of data, utilizes optimised protocols, and has the speed to obtain results quickly and efficiently
- The presented approach can be extended in many directions:
  - Volume: adding additional assets / clusters is trivial
  - Definition of the hazard rate process: extensions to jump-diffusion / mixtures are easy
  - Use of different liquidity horizons: this is easily achievable through calibration of the HR stochastic process
  - Copula choices: many alternatives can be offered to test the copula model. StudentT, Morgenstern, Frank, Clayton, Gumbel and other copulae are readily available on the Platform; additional variations can be created through the WFP symbolic engine
- Although the modelling approach relies on the hazard rate distribution and bypasses rating migration, the latter can easily be included in the framework:
  - One can translate the migration matrix into distributional moments, or use WFP's extensive linear algebra routines to calculate the cumulative terminal probabilities in time
  - Another suitable option is modelling rating transitions through a Brownian Bridge process, which is specifically designed for this purpose.
[1]: /c/portal/getImageAttachment?filename=IRCimage.png&userId=387433
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170920at14.15.26.png&userId=20103
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=26242.png&userId=20103
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170920at14.21.30.png&userId=20103
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170920at14.25.03.png&userId=20103
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170920at14.25.14.png&userId=20103
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170920at14.27.23.png&userId=20103
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170920at14.42.27.png&userId=20103
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170920at14.49.48.png&userId=20103
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170921at10.05.10.png&userId=20103
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170921at10.07.00.png&userId=20103
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170921at10.08.10.png&userId=20103
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170921at10.10.34.png&userId=20103
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170921at10.12.45.png&userId=20103
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170921at10.14.37.png&userId=20103
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170921at10.14.45.png&userId=20103
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=78023.png&userId=20103
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=92494.png&userId=20103
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=16985.png&userId=20103
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=20336.png&userId=20103
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=81337.png&userId=20103
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=37608.png&userId=20103
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170921at11.02.58.png&userId=20103
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=315211.png&userId=20103
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170921at11.05.55.png&userId=20103
[26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=487410.png&userId=20103
[27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170921at11.11.19.png&userId=20103
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=397811.png&userId=20103
[29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170921at11.19.51.png&userId=20103
[30]: http://community.wolfram.com//c/portal/getImageAttachment?filename=476212.png&userId=20103
[31]: http://community.wolfram.com//c/portal/getImageAttachment?filename=783013.png&userId=20103
[32]: http://community.wolfram.com//c/portal/getImageAttachment?filename=632614.png&userId=20103
[33]: http://community.wolfram.com//c/portal/getImageAttachment?filename=238615.png&userId=20103
[34]: http://community.wolfram.com//c/portal/getImageAttachment?filename=841416.png&userId=20103
[35]: http://community.wolfram.com//c/portal/getImageAttachment?filename=263417.png&userId=20103

Igor Hlivka, 2015-07-23T23:14:37Z

Solve differential equation to describe the motion of simple pendulum
http://community.wolfram.com/groups//m/t/99823
[img=width: 228px; height: 236px;]/c/portal/getImageAttachment?filename=pendulum.jpg&userId=99808[/img]
Differential equation is given by
[mcode]y''[t]==-(g/L)*Sin[y[t]][/mcode]
where y is the offset angle and t is time. I used the following Mathematica code to solve for y[t]
[mcode]DSolve[{y''[t] == -(g/L)*Sin[y[t]], y[0]==Pi/2, y'[0]==0}, y, t][/mcode]
and get the following errors
[code]Solve::ifun: Inverse functions are being used by Solve, so some solutions may not be found; use Reduce for complete solution information. >>[/code][code]DSolve::bvfail: For some branches of the general solution, unable to solve the conditions. >>[/code]
The problem seems to be Sin[y[t]], but I don't know how to work around this. Thanks for any help.

Julian Lovlie, 2013-08-19T07:25:08Z

How to compile efficiently
http://community.wolfram.com/groups//m/t/1189525
*NOTE: Please see the original version of this post [HERE][1]. Cross-posted here per suggestion of [@Vitaliy Kaurov][at0].*

I'll just throw in a few random thoughts in no particular order, but this will be a rather high-level view of things. This is necessarily a subjective exposition, so treat it as such.
###Typical use cases
In my opinion, `Compile` as an efficiency-boosting device is effective in two kinds of situations (and their mixes):
- The problem is solved most efficiently with a procedural style, because, for example, an efficient algorithm for it is formulated procedurally and does not have a simple / efficient functional counterpart (note also that functional programming in Mathematica is peculiar in many respects, reflecting the fact that the functional layer is a thin one on top of the rule-based engine. So, some algorithms which are efficient in other functional languages may be inefficient in Mathematica). A very clear sign of this is when you have to do array indexing in a loop.
- The problem can be solved by joining several `Compile`-able built-in functions together, but there are (perhaps several) "joints" where you face a performance hit if using top-level code, because it stays general and can not use specialized versions of these functions, and for a few other reasons. In such cases, `Compile` merely makes the code more efficient by effectively type-specializing to numerical arguments and not using the main evaluator. One example that comes to mind is when we compile `Select` with a custom (compilable) predicate and can get a substantial performance boost ([here][2] is one example).
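To illustrate the second situation (a minimal sketch; the predicate and data here are arbitrary), compiling `Select` with a compilable predicate specializes it to packed real vectors:

    selectPositive = Compile[{{l, _Real, 1}}, Select[l, # > 0. &]];
    selectPositive[{-1., 2., -3., 4.}]
    (* {2., 4.} *)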
I use this rule of thumb when determining whether or not I will benefit from `Compile`: the more my code inside `Compile` looks like C code I'd write otherwise, the more I benefit from it (strictly speaking, this is only true for the compilation to C, not MVM).
It may happen that some portions of top-level code are the major bottleneck and can not be recast into a more efficient form, for a given approach to the problem. In such a case, `Compile` may not really help, and it is best to rethink the whole approach and try to find another formulation for the problem. In other words, it often saves time and effort to do some profiling and get a good idea about the *real* places where the bottlenecks are, *before* turning to `Compile`.
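A minimal profiling sketch (the functions timed here are arbitrary stand-ins): timing candidate pieces separately with `AbsoluteTiming` quickly shows which one is the real bottleneck before any compilation is attempted:

    data = RandomReal[1, 10^6];
    f[x_] := x^2;
    AbsoluteTiming[Total[data^2];]     (* vectorized, stays fast *)
    AbsoluteTiming[Total[f /@ data];]  (* pattern-defined f defeats auto-compilation *)

Only pieces like the second one are worth recasting for `Compile`; the first would gain nothing.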
###Limitations of `Compile`
Here is an (incomplete) list of limitations, most of which you mentioned yourself:
- Can only accept regular arrays (tensors) of numerical or boolean types. This excludes ragged arrays and more general Mathematica expressions.
- In most cases, can only return a single tensor of some type
- Only machine-precision arithmetic
- Of the user-defined functions, only pure functions are compilable, plus one can inline other compiled functions. Rules and "functions" defined with rules are inherently not compilable.
- No way to create functions with memory (a la static variables in C)
- Only a small subset of built-in functions can be compiled to byte code (or C)
- Possibilities for writing recursive compiled functions seem to be very limited, and most interesting cases seem to be ruled out
- No decent pass-by-reference semantics, which is a big deal (to me anyways)
- You can not really use indexed variables in `Compile`, although it may *appear* that you can.
- ...
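One practical way to check whether code runs into these limitations (a sketch; `f` here is a deliberately uncompilable pattern-defined function) is to inspect the generated instructions with `CompilePrint` from the standard ``CompiledFunctionTools` `` package and look for `MainEvaluate` calls:

    Needs["CompiledFunctionTools`"]
    f[x_] := x^2;
    cf = Compile[{{x, _Real}}, f[x]];
    CompilePrint[cf]  (* the printout contains a MainEvaluate instruction *)

Any `MainEvaluate` line means a callback to the main evaluator, i.e. that part of the body did not really compile.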
###Whether or not to compile to C?
I think this depends on the circumstances. Compilation to C is expensive, so it makes sense only for performance-critical code to be used many times. There are also many cases when compilation to MVM will give similar performance while being much faster to perform. One such example can be found in [this answer][3], where just-in-time compilation to the MVM target led to a major speedup, while compilation to C would likely have destroyed the purpose of it, in that particular case.
Another class of situations where compiling to C may not be the best option is when you want to "serialize" the `CompiledFunction` object and distribute it to others, for example in a package, and you don't want to count on a C compiler being installed on the user's machine. As far as I know, there is no automatic mechanism yet to grab the generated shared library and package it together with the `CompiledFunction`; also, one would have to cross-compile for all platforms and automatically dispatch to the right library to load. All this is possible but complicated, so, unless the speed gain can justify such complications for a given problem, it may not be worth it, while compilation to the MVM target creates a top-level `CompiledFunction` object, which is automatically cross-platform and does not require anything (except Mathematica) to be installed.
So, it really depends, although more often than not compilation to C will lead to faster execution and, if you decide to use `Compile` at all, will be justified.
###What to include in `Compile`
I share the opinion that, unless you have some specific requirements, it is best to apply `Compile` only to the minimal code fragments which would benefit from it the most, rather than have one big `Compile`. This is good because:
- It allows you to better understand where the real bottlenecks are
- It makes your compiled code more testable and composable
- If you really need it, you can then combine these pieces and use the `"InlineCompiledFunctions" -> True` option setting, to get all the benefits that one large `Compile` would give you
- Since `Compile` is limited in what it can take, you will have fewer headaches over how to include some uncompilable pieces, plus less chance of overlooking a callback to the main evaluator
That said, you may benefit from one large `Compile` in some situations, including:
- Cases when you want to grab the resulting C code and use it standalone (linked against the Wolfram RTL)
- Cases when you want to run your compiled code in parallel on several kernels and don't want to think about possible definition-distribution issues etc. (this was noted by @halirutan)
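As a sketch of the inlining mentioned above (the function bodies are arbitrary), small compiled pieces can be combined into one larger compiled function, without callbacks, via the `CompilationOptions` setting:

    inner = Compile[{{x, _Real}}, x^2 + 1.];
    outer = Compile[{{x, _Real}}, inner[x] + inner[2. x],
       CompilationOptions -> {"InlineCompiledFunctions" -> True}];
    outer[1.]
    (* 7. *)

With inlining enabled, the body of `inner` is compiled directly into `outer`, so no `MainEvaluate` callback is needed to call it.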
###Listable functions
When you can, it may be a good idea to use the `RuntimeAttributes -> {Listable}` option, so that your code can be executed on (all or some of the) available cores in parallel. I will give one example which I think is rather interesting, because it represents a problem which may not initially look like one directly amenable to this (although it is surely not at all hard to realize that parallelization may work here): computation of `Pi` as a partial sum of a well-known infinite series. Here is a single-core function:
Clear[numpi1];
numpi1 =
Compile[{{nterms, _Integer}},
4*Sum[(-1)^k/(2 k + 1), {k, 0, nterms}],
CompilationTarget -> "C", RuntimeOptions -> "Speed"];
Here is a parallel version:
numpiParallelC =
Compile[{{start, _Integer}, {end, _Integer}},
4*Sum[(-1)^k/(2 k + 1), {k, start, end}], CompilationTarget -> "C",
RuntimeAttributes -> {Listable}, RuntimeOptions -> "Speed"];
Clear[numpiParallel];
numpiParallel[nterms_, nkernels_] :=
Total@Apply[numpiParallelC,
MapAt[# + 1 &, {Most[#], Rest[#] - 1}, {2, -1}] &@
IntegerPart[Range[0, nterms, nterms/nkernels]]];
Now, some benchmarks (on a 6-core machine):
(res0=numpiParallel[10000000,1])//AbsoluteTiming
(res1=numpiParallel[10000000,6])//AbsoluteTiming
(res2=numpi1[10000000])//AbsoluteTiming
Chop[{res0 - res2, res0 - res1, res2 - res1}]
(*
==>
{0.0722656,3.14159}
{0.0175781,3.14159}
{0.0566406,3.14159}
{0,0,0}
*)
A few points to note here:
- The `Listable` attribute for `Compile` works only one level deep, unlike the usual `Listable` attribute, which threads down to all levels.
- It may happen that the time it takes to prepare the data to be fed into a `Listable` compiled function will be much more than the time the function runs (e.g. when we use `Transpose` or `Partition` etc. on huge lists), which then sort of destroys the purpose. So, it is good to estimate in advance whether or not that will be the case.
- A more "coarse-grained" alternative to this is to run a single-threaded compiled function in parallel on several Mathematica kernels, using the built-in parallel functionality (`ParallelEvaluate`, `ParallelMap`, etc.). These two possibilities are useful in different situations.
###Autocompilation
While this is not directly related to the explicit use of `Compile`, this topic logically belongs here. There are a number of built-in (higher-order) functions, such as `Map`, which can *auto-compile*. What this means is that when we execute
Map[f, list]
the function `f` is analyzed by `Map`, which attempts to automatically call `Compile` on it (this is not done at the top level, so using `Trace` won't show an explicit call to `Compile`). To benefit from this, the function `f` must be compilable. As a rule of thumb, it has to be a pure function for that (which is not by itself a sufficient condition), and generally the question of whether or not a function is compilable is answered here in the same way as for explicit `Compile`. In particular, functions defined by patterns will *not* benefit from auto-compilation, which is something to keep in mind.
Here is a little contrived but simple example to illustrate the point:
sumThousandNumbers[n_] :=
Module[{sum = 0}, Do[sum += i, {i, n, n + 1000}]; sum]
sumThousandNumbersPF =
Module[{sum = 0}, Do[sum += i, {i, #, # + 1000}]; sum] &
Now, we try:
Map[sumThousandNumbers, Range[3000]]//Short//Timing
Map[sumThousandNumbersPF, Range[3000]]//Short//Timing
(*
==> {3.797,{501501,502502,503503,504504,505505,<<2990>>,3499496,
3500497,3501498,3502499,3503500}}
{0.094,{501501,502502,503503,504504,505505,<<2990>>,3499496,
3500497,3501498,3502499,3503500}}
*)
which shows a roughly 40-fold speedup in this particular case, due to auto-compilation.
There are in fact *many* cases where this is important, and not all of them are as obvious as the above example. One such case was considered in a [recent answer][4] to a question about extracting the numbers from a sorted list that belong to some window. The solution is short, and I will reproduce it here:
window[list_, {xmin_, xmax_}] :=
Pick[list, Boole[xmin <= # <= xmax] & /@ list, 1]
What may look like a not particularly efficient solution is actually *quite* fast, due to the auto-compilation of the predicate `Boole[...]` inside `Map`, plus `Pick` being optimized on packed arrays. See the aforementioned question for more context and discussion.
This shows us another benefit of autocompilation: not only does it often make the code run *much* faster, but it also does not unpack, allowing surrounding functions to also benefit from packed arrays when they can.
Which functions can autocompile? One way to find out is to inspect `SystemOptions["CompileOptions"]`:
Cases["CompileOptions" /. SystemOptions["CompileOptions"],
opt : (s_String -> _) /; StringMatchQ[s, __ ~~ "Length"]]
{"ApplyCompileLength" -> \[Infinity], "ArrayCompileLength" -> 250,
"FoldCompileLength" -> 100, "ListableFunctionCompileLength" -> 250,
"MapCompileLength" -> 100, "NestCompileLength" -> 100,
"ProductCompileLength" -> 250, "SumCompileLength" -> 250,
"TableCompileLength" -> 250}
This also tells you the threshold lengths of the lists beyond which auto-compilation is turned on. You can also change these values. Setting the value of `...CompileLength` to `Infinity` effectively disables auto-compilation. You can see that "ApplyCompileLength" has this value. This is because `Apply` can only compile 3 heads: `Times`, `Plus`, and `List`. If you have one of those in your code, however, you can reset this value to benefit from auto-compilation. Generally, the default values are pretty meaningful, so it is rarely necessary to change these defaults.
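These thresholds can be adjusted at runtime with `SetSystemOptions` (a minimal sketch; remember to restore the default afterwards):

    (* disable auto-compilation in Map entirely: *)
    SetSystemOptions["CompileOptions" -> {"MapCompileLength" -> Infinity}];
    (* ...and restore the default threshold: *)
    SetSystemOptions["CompileOptions" -> {"MapCompileLength" -> 100}];

Temporarily disabling auto-compilation this way is also a handy trick for measuring how much of a speedup it is responsible for in a given computation.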
###A few more techniques
There are a number of techniques involving `Compile`, which are perhaps somewhat more advanced, but which sometimes allow one to solve problems for which plain `Compile` is not flexible enough. Some which I am aware of:
- Sometimes you can trade memory for speed and, having a nested ragged list, pad it with zeros to form a tensor, and pass that to `Compile`.
- Sometimes your list is general and you can not directly process it in `Compile` to do what you want; however, you may be able to reformulate the problem such that you can instead process a list of element *positions*, which are integers. I call it "element-position duality". One example of this technique in action is [here][5]; for a larger application of this idea see my last post in [this thread][6] (I hesitated to include this reference because my first several posts there are incorrect solutions. Note that for that particular problem, a far more elegant and short, but somewhat less efficient, solution was given at the end of that thread).
- Sometimes you may need some structural operations to prepare the input data for `Compile`, and the data contains lists (or, generally, tensors) of different types (say, integer positions and real values). To keep the list packed, it may make sense to convert the integers to reals (in this example), converting them back to integers with `IntegerPart` inside `Compile`. One such example is [here][7].
- Runtime generation of compiled functions, where certain runtime parameters get embedded. This may be combined with memoization. One example is [here][8]; another very good example is [here][9].
- One can emulate pass-by-reference and have a way of composing larger compiled functions out of smaller ones *with parameters* (well, sort of), without a loss of efficiency. This technique is showcased for example [here][10].
- A common wisdom is that since neither linked lists nor `Sow`/`Reap` are compilable, one has to preallocate large arrays most of the time to store intermediate results. There are at least two other options:
  - Use ``Internal`Bag``, which is compilable (the problem, however, is that it can not be returned as a result of `Compile` as of now, AFAIK).
  - It is quite easy to implement an analog of a dynamic array inside your compiled code, by setting up a variable which gives the current size limit, and copying your array to a new larger array once more space is needed. In this way, you only allocate (at the end) as much space as is really needed, for the price of *some* overhead, which is often negligible.
- One may often be able to use vectorized operations like `UnitStep`, `Clip`, `Unitize` etc. to replace if-else control flow in inner loops, also inside `Compile`. This *may* give a huge speedup, particularly when compiling to the MVM target. Some examples are in my comments on [this][11] and [this][12] blog post, and one other pretty illustrative example of a vectorized binary search is in my answer in [this thread][13].
- Using an additional list of integers as "pointers" to some lists you may have. Here, I will make an exception for this post and give an explicit example illustrating the point. The following is a fairly efficient function to find the longest increasing subsequence of a list of numbers. It was developed jointly by DrMajorBob, Fred Simons and myself, in an on- and offline MathGroup discussion (so this final form is not available publicly AFAIK, thus I include it here)
Here is the code
Clear[btComp];
btComp =
Compile[{{lst, _Integer, 1}},
Module[{refs, result, endrefs = {1}, ends = {First@lst},
len = Length@lst, n0 = 1, n1 = 1, i = 1, n, e},
refs = result = 0 lst;
For[i = 2, i <= len, i++,
Which[
lst[[i]] < First@ends,
(ends[[1]] = lst[[i]]; endrefs[[1]] = i; refs[[i]] = 0),
lst[[i]] > Last@ends,
(refs[[i]] = Last@endrefs;AppendTo[ends, lst[[i]]]; AppendTo[endrefs, i]),
First@ends < lst[[i]] < Last@ends,
(n0 = 1; n1 = Length@ends;
While[n1 - n0 > 1,
n = Floor[(n0 + n1)/2];
If[ends[[n]] < lst[[i]], n0 = n, n1 = n]];
ends[[n1]] = lst[[i]];
endrefs[[n1]] = i;
refs[[i]] = endrefs[[n1 - 1]])
]];
For[i = 1; e = Last@endrefs, e != 0, (i++; e = refs[[e]]),
result[[i]] = lst[[e]]];
Reverse@Take[result, i - 1]], CompilationTarget -> "C"];
Here is an example of use (list should not contain duplicates):
test = RandomSample[#, Length[#]] &@ Union@RandomInteger[{1, 1000000}, 1000000];
btComp[test] // Length // Timing
The fastest solution based on built-ins, which is indeed *very* fast, is still about 6 times slower for this size of list:
LongestCommonSequence[test, Sort@test] // Short // Timing
Anyways, the point here is that this was possible because of the extra variables `refs` and `endrefs`, the use of which allowed us to manipulate only single integers (representing positions of sublists in a larger list) instead of large integer lists.
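The vectorized-replacement idea from the technique list above can be sketched like this (a toy ramp function; the data is arbitrary): `UnitStep` replaces an explicit if-else over each element:

    rampIf  = Compile[{{v, _Real, 1}}, Table[If[x > 0., x, 0.], {x, v}]];
    rampVec = Compile[{{v, _Real, 1}}, v UnitStep[v]];
    rampVec[{-2., 3., -0.5, 1.}]
    (* {0., 3., 0., 1.} *)

Both give the same result, but the vectorized form avoids the per-element branch, which tends to matter most for the MVM target.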
###A few assorted remarks
- *Things to watch out for*: see [this discussion][14] for some tips. Basically, you should avoid:
  - Callbacks to the main evaluator
  - Excessive copying of tensors (the `CopyTensor` instruction)
  - Accidental unpacking happening in top-level functions preparing input for `Compile` or processing its output. This is not related to `Compile` proper, but it can happen that `Compile` does not help at all, because the bottleneck is in the top-level code.
- *Type conversion*: I would not worry about the performance hit, but sometimes wrong types may lead to runtime errors, or unanticipated callbacks to `MainEvaluate` in the compiled code.
- Certain functions (e.g. `Sort` with the default comparison function, but not only) don't benefit from compilation much or *at all*.
- It is not clear how `Compile` handles `Hold` attributes in compiled code, but [there are indications][15] that it does not fully preserve the standard semantics we are used to at the top level.
- *How to see whether or not you can effectively use `Compile` for a given problem*: my experience is that with `Compile` in Mathematica you have to be "proactive" (with all my dislike for the word, I know of nothing better here). What I mean is that to use it effectively, you have to search the structure of your problem / program for places where you could transform (parts of) the data into a form which can be used in `Compile`. In most cases (at least in my experience), except obvious ones where you already have a procedural algorithm in pseudocode, you have to reformulate the problem, so you have to actively ask: *what should I do to use `Compile` here?*
[1]: https://mathematica.stackexchange.com/questions/1803
[2]: https://mathematica.stackexchange.com/questions/2/whatbestpracticesorperformanceconsiderationsarethereforchoosingbetween/42#42
[3]: https://mathematica.stackexchange.com/questions/3345/howtomakeaninkblot/3346#3346
[4]: https://mathematica.stackexchange.com/questions/2369/findingelementsinasortedlist/2374#2374
[5]: https://mathematica.stackexchange.com/questions/1665/efficientconditionalmeanonalargedataset/1671#1671
[6]: http://groups.google.com/group/comp.softsys.math.mathematica/browse_thread/thread/a8d5b98706eaaef7
[7]: https://mathematica.stackexchange.com/questions/1665/efficientconditionalmeanonalargedataset/1671#1671
[8]: https://stackoverflow.com/questions/4973424/inmathematicahowdoicompilethefunctionouterforanarbitrarynumberof/4973603#4973603
[9]: https://stackoverflow.com/questions/8204784/howtocompileafunctionthatcomputesthehessian/8210224#8210224
[10]: https://stackoverflow.com/questions/5246330/deleterepeatinglistelementspreservingorderofappearance/5251034#5251034
[11]: http://smalltalkthoughts.blogspot.com/2009/08/smallmathematicavscbenchmark.html
[12]: http://fsharpnews.blogspot.com/2010/07/fvsmathematicafastpricerfor.html
[13]: http://groups.google.com/group/comp.softsys.math.mathematica/browse_thread/thread/f8685f194db18175
[14]: https://mathematica.stackexchange.com/questions/821/howwelldoesmathematicacodeexportedtoccomparetocodedirectlywrittenfo/
[15]: http://groups.google.com/group/comp.softsys.math.mathematica/browse_thread/thread/f7b6836791fe1d47
[at0]: http://community.wolfram.com/web/vitaliykLeonid Shifrin20170921T16:07:00ZNote for RaspberryPi Zero I/O programming: project#2: "Rolling Words"
http://community.wolfram.com/groups//m/t/1189221
This note shows a sample of I/O programming on the Raspberry Pi with Mathematica. Mathematica's I/O is not as fast as Python's and the like; however, we can find many strong points through the projects. This sample project shows the following I/O programming examples:
1. I2C-connected SSD1306 OLED display module handling
2. SG90 servo motor driving method through GPIO pin
3. GPIO shutdown switch setup
![enter image description here][1]
You can see the [movie here][2].
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=RaspberryPiZero.jpg&userId=897049
[2]: https://www.youtube.com/watch?v=vDtnr4WgC4c&feature=youtu.beHirokazu Kobayashi20170921T07:42:51ZCreate this protein network analysis?
http://community.wolfram.com/groups//m/t/1188892
Hello everyone, I am working with protein networks, and I would like to use Mathematica to do the analysis. But there are some things that I do not know how to do; I hope you can help me.
My first question is about constructing a network. I am trying to connect the first element of the first list with the first element of the second list, the second element of the first list with the second element of the second list, and so on, in order to work with long lists. But with this command all nodes get connected:
RelationGraph[Unequal, {"BTBD3", "NR3C1", "CDH13"}, {"BTBD1", "BTBD6", "DAT1"}]
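A hedged sketch of the element-wise pairing (assuming the goal is exactly first-with-first, second-with-second): `Thread` can build the edge list directly, which also scales to long lists:

```mathematica
(* connect list1[[i]] with list2[[i]] for each i *)
list1 = {"BTBD3", "NR3C1", "CDH13"};
list2 = {"BTBD1", "BTBD6", "DAT1"};
Graph[Thread[UndirectedEdge[list1, list2]], VertexLabels -> "Name"]
```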
The second issue is that I am working with GraphML files from Cytoscape, but when I try to do things like naming the edges, Mathematica does not let me do it.
I was unable to add the GraphML fileOscar Rodriguez20170921T01:10:40ZCan RSolve handle 4 simultaneous 1st-order nonlinear difference equations?
http://community.wolfram.com/groups//m/t/1187983
Hello everyone
I couldn't find much in the documentation about limitations of the RSolve functionality. In particular, I have a set of 4 simultaneous first-order nonlinear difference equations and 4 initial conditions, but I keep getting the message that one of my functions "appears with no arguments". Even when I take that equation and the corresponding IC away, it just returns the same code I typed, with no solution.
Here's the code:
eqn1 = V1[t + 1] ==
(1/2)*(((E1*(-1)^(t + 1) - E1 - 2)*(beta1*gamma2 + beta2*gamma1)*
Y1[t] - (1 + (-1)^(t + 1))*(1 + E1)*(beta1*gamma2 + beta2*gamma1)*
V1[t] - (1 + (-1)^(t + 1))*(beta1*gamma2 + beta2*gamma1)*
V2[t] + beta2*(E1*(-1)^(t + 1) - E1 - 2)*
YT*((1 + b)*gamma1 - beta1))*
X1[t] + (gamma1 + beta1*(1 + a))*
XT*(E1*(-1)^(t + 1) - E1 - 2)*Y1 (t)*gamma2)*
E1/((1 + E1)*(beta2*gamma1*X1[t] + gamma2*XT*(gamma1 + beta1*(1 + a))));
eqn2 = V1[0] == 0;
eqn3 = V2[t + 1] ==
(1/2)*(((1 + (-1)^(t + 1))*(1 + E2)*(beta1*gamma2 + beta2*gamma1)*
V2[t] - (E2*(-1)^(t + 1) + E2 + 2)*(beta1*gamma2 + beta2*gamma1)*
Y1[t] + (1 + (-1)^(t + 1))*(beta1*gamma2 + beta2*gamma1)*
V1[t] + YT*beta2*(E2*(-1)^(t + 1) + E2 + 2)*(b*gamma1 - beta1))*X1[t] +
XT*((1 + (-1)^(t + 1))*(1 + E2)*(beta1*gamma2 + beta2*gamma1)*
V2[t] + gamma2*(E2*(-1)^(t + 1) + E2 + 2)*(a*beta1 - gamma1)*
Y1[t] - (1 + (-1)^(t + 1))*(beta1*gamma2 + beta2*gamma1)*
V1[t] - ((gamma1 + beta1*(1 + a))*gamma2 + beta2*((1 + b)*gamma1 - beta1))*(E2*(-1)^(t + 1) + E2 + 2)*YT))*
E2/((1 + E2)*(beta2*gamma1*X1[t] + gamma2*XT*(gamma1 + beta1*(1 + a))));
eqn4 = V2[0] == 0;
eqn5 = X1[t + 1] ==
(1/2)*(1 + (-1)^(2 + t))*beta1*(((beta1*gamma2 + beta2*gamma1)*X1[t] -
gamma2*XT*(gamma1 + beta1*(1 + a)))*Y1[t] -
X1[t]*(beta1*gamma2 - beta2*gamma1)*V1[t] -
beta2*(gamma1*V2[t + 1] + ((1 + b)*gamma1 - beta1)*YT)*X1[t] +
gamma2*XT*(gamma1 + beta1*(1 + a))*
V2[t + 1])/((beta1*gamma2*
Y1[t] + (beta1*gamma2 - beta2*gamma1)*V1[t] + beta2*((1 + b)*gamma1 - beta1)*YT)*(beta1 + gamma1)) +
(1/2)*(((beta1*gamma2 - beta2*gamma1)*X1[t] +
gamma2*XT*(gamma1 + beta1*(1 + a)))*Y1[t] -
X1[t]*(beta1*gamma2 - beta2*gamma1)*V2[t] +
beta2*(gamma1*V1[t + 1] + ((1 + b)*gamma1 - beta1)*YT)*X1[t] +
XT*(gamma1 + beta1*(1 + a))*V1[t + 1]*gamma2)*
beta1*((-1)^(t + 1) + 1)/((beta1*gamma2*Y1[t] + (beta1*gamma2 + beta2*gamma1)*
V2[t] + beta2*((1 + b)*gamma1 - beta1)*YT)*(beta1 + gamma1));
eqn6 = X1[0] == a*XT;
eqn7 = Y1[t + 1] ==
(1/2)*(((2*beta1*gamma2 + 2*beta2*gamma1)*
Y1[t] + ((-1)^(t + 1) - 1)*(beta1*gamma2 - beta2*gamma1)*
V1[t] + ((-1)^(t + 1) + 1)*(beta1*gamma2 - beta2*gamma1)*
V2[t] - (gamma1*(V1[t + 1] + V2[t + 1])*(-1)^(t + 1) + gamma1*V1[t + 1] -
gamma1*V2[t + 1] + (2*((1 + b)*gamma1 - beta1))*YT)*beta2)*X1[t] + gamma2*(gamma1 +
beta1*(1 + a))*(2*Y1[t] + (V1[t + 1] + V2[t + 1])*(-1)^(t + 1) + V1[t + 1] - V2[t + 1])*XT)*
gamma1/((beta2*gamma1*X1 (t) + gamma2*XT*(gamma1 + beta1*(1 + a)))*(beta1 + gamma1));
eqn8 = Y1[0] == b*YT;
Followed by:
RSolve[{eqn1, eqn2, eqn3, eqn4, eqn5, eqn6, eqn7, eqn8}, {V1, V2, X1,Y1}, t]
To which I get "RSolve::dvnoarg: The function Y1 appears with no arguments."
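A hedged observation: this particular message is often caused by writing a head followed by a space and parentheses, and the equations above do contain `Y1 (t)` (in eqn1 and eqn7) and `X1 (t)` (in eqn7), which Mathematica parses as the product `Y1*t` rather than the function application `Y1[t]`, so `Y1` really does appear with no arguments. A minimal sketch of the difference:

```mathematica
(* Y1 (t) is multiplication, not application -- the likely source of RSolve::dvnoarg *)
RSolve[{Y1[t + 1] == 2*Y1 (t), Y1[0] == 1}, Y1, t]
(* with brackets the system is well-formed *)
RSolve[{Y1[t + 1] == 2*Y1[t], Y1[0] == 1}, Y1, t]
```

Even with that fixed, a system of 4 coupled nonlinear difference equations may simply be beyond what `RSolve` can solve symbolically; `RecurrenceTable` can still iterate such a system numerically.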
Any thoughts are appreciated. Many thanks!Rafael Rossi Silveira20170920T04:09:48ZWhat is the correct ContourPlot here? Where should I assign my parameters?
http://community.wolfram.com/groups//m/t/1187584
I am currently trying to make a contour plot of a transmission function. It is a very badly behaved function: some parts tend to infinity and others to zero; however, as a whole it will (in the allowed cases at least) stay between 0 and 1, as it should. This function depends on five parameters, and these parameters are used to define expressions that simplify the function, because it is huge. Just as an example, this is what my code would look like:
V = 5; U = 4; h = 1.6; v = 0.6; (*assigning a numerical value to the parameters*)
K = U + V +hv(a + b) (*defining an expression where 'a' and 'b' are variables.*)
T[a_, b_] = K + hv (*defining a function*)
ContourPlot[T[a,b], {a, 0.0, 1.0}, {b, 0.0, 1.0}]
Now the problem comes when I change the order in which I assign values to my parameters. If I do it at the beginning of my program I'll get one ContourPlot (1); however, if I assign values to my parameters just before I call the plot, for example:
K = U + V +hv(a + b)
T[a_, b_] = K + hv
V = 5; U = 4; h = 1.6; v = 0.6;
ContourPlot[T[a,b], {a, 0.0, 1.0}, {b, 0.0, 1.0}]
I'll get something very different (2). So I tried simplifying this last expression before the plot and after assigning the values:
K = U + V +hv(a + b)
T[a_, b_] = K + hv
V = 5; U = 4; h = 1.6; v = 0.6;
T[a_, b_] = Simplify[T[a, b]]
ContourPlot[T[a,b], {a, 0.0, 1.0}, {b, 0.0, 1.0}]
and the result was again (1).
![Contour plot of my function of transmission when assigning values to my parameters at the beginning of my program][1](1)
![Contour plot of my function of transmission when assigning values to my parameters after defining my function and before plotting][2](2)
I read the evaluation procedure documentation, and it seems logical that Mathematica was simplifying the expressions as much as it could with the parameters given. When the parameters had no numerical values, Mathematica did not simplify and ended up with a more complex function; given its bad behavior, I assume some information may change in the process.
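A hedged sketch of one way to sidestep the ordering question entirely: use a delayed definition (`:=`), so the body is only evaluated when the function is called, after the parameters have their numerical values. Here `K` is inlined, assuming it is only a shorthand, and the products are written explicitly as `h v` (assuming that is what `hv` denotes):

```mathematica
Clear[T, V, U, h, v];
T[a_, b_] := U + V + h v (a + b) + h v;  (* delayed: body evaluated at call time *)
V = 5; U = 4; h = 1.6; v = 0.6;
ContourPlot[T[a, b], {a, 0.0, 1.0}, {b, 0.0, 1.0}]
```

With this, the parameter assignments can appear before or after the definition; the plot is the same either way.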
The question is: which is the correct plot/process? How can I identify the correct procedure in these cases?
Thank you in advance.
Edit: I posted two notebooks where this same thing happens.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1.png&userId=1187568
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2.png&userId=1187568Emmanuel Guilleminot20170919T14:40:58ZUse mouse for vertex editing or mesh bending ?
http://community.wolfram.com/groups//m/t/1188872
Is there any CAD-style GUI vertex editing in MMA whatsoever?
I.e., can the mouse be used for vertex editing or mesh bending in 2D or 3D within the MMA front end? Is there any add-on that does simple editing (note: not a $2000 add-on I can't fund)?
System Modeler (which I can't afford) might, but I cannot tell from the product write-up whether it includes a CAD interface.
(There are big suites like SketchUp, but it seems MMA's Import/Export from these is barely supported (I might be wrong), and they definitely don't export well: a whole Plot3D with multiple objects cannot be exported; maybe one Graphics3D object at best. I'm unsure what use those are if one could do the more basic CAD editing in the MMA front end. Something like SketchUp is big to install and use even if free, so I'm avoiding it.)
thank youJohn Hendrickson20170920T20:14:13Z[✓] Calculate area of a maple leaf image?
http://community.wolfram.com/groups//m/t/1183671
Hi,
I want to calculate the area of the maple leaf in cm^2.
![enter image description here][1]
I have tried this
img = Binarize@
    Import["http://i37.ltalk.ru/22/85/298522/92/9427492/leave2.png"] ~Erosion~ 1;
(m = MorphologicalComponents[img]) // Colorize
linecount =
  2 /. (Binarize@
       Import["http://i37.ltalk.ru/22/85/298522/92/9427492/leave2.png"] //
      ColorNegate // Thinning //
     ComponentMeasurements[#, "PerimeterCount"] &);
areacount =
  2 /. ComponentMeasurements[MorphologicalComponents[img], "Count"];
areacount/linecount^2 // N
but I'm not sure which region is calculated and the unit given.
Can somebody help me? Thank you.
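A hedged note on the unit question: `ComponentMeasurements[..., "Count"]` returns areas in pixels, so getting cm^2 requires a pixels-per-centimeter calibration. The value `pxPerCm` below is hypothetical; in practice it would come from a known length in the image (e.g. a scale bar or the known paper size):

```mathematica
img = Binarize@Import["http://i37.ltalk.ru/22/85/298522/92/9427492/leave2.png"];
pxPerCm = 40.;  (* hypothetical calibration: pixels per centimeter *)
leafPixels = Max[ComponentMeasurements[MorphologicalComponents[ColorNegate@img], "Count"][[All, 2]]];
leafPixels/pxPerCm^2  (* leaf area in cm^2 under the assumed calibration *)
```

`Max` picks the largest component, on the assumption that the leaf is the biggest foreground region after `ColorNegate`.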
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=maple.png&userId=862195Nurul Ainina20170913T07:22:20ZSolve a fraction when the denominator approaches zero?
http://community.wolfram.com/groups//m/t/1186771
I have a function (1+b)^2/Sqrt[1-b^2] that I need to evaluate as b -> 1, where b is real. The calculation exceeds the precision that Mathematica is using, and I am not sure how to increase the precision if I use Solve... or even whether this is the best way to get a solution. Is there another way of approaching this function's behavior from a purely analytical standpoint, or using complex variables?
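One hedged alternative to pushing numerical precision: treat this as a one-sided limit, which stays analytic throughout. Since the numerator tends to 4 while the denominator vanishes, the quotient diverges:

```mathematica
(* approach b = 1 from below, where Sqrt[1 - b^2] is real *)
Limit[(1 + b)^2/Sqrt[1 - b^2], b -> 1, Direction -> 1]  (* the one-sided limit is Infinity *)
(* a series expansion shows the divergence rate near b = 1 *)
Series[(1 + b)^2/Sqrt[1 - b^2], {b, 1, 0}]
```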
Regards,
LutherLuther Nayhm20170918T15:26:02ZReduce the time of calculations in my program (Table&Sum&function)?
http://community.wolfram.com/groups//m/t/1187156
Hello everybody. I want to evaluate the numerical data from my program, but it takes about 2 hours (for n = 1 to 5 in sigma). I have one `Sum` command (with a `Piecewise` function) inside a `Table` command. I need to reduce the calculation time; does anyone have an idea?
Thanks a lot.saeid M20170918T23:40:06ZStandardize or normalize a function in a 3dplot?
http://community.wolfram.com/groups//m/t/1187100
Hi all,
I have started to build a simulation of a diffractive axicon and wanted to compare the near field of the axicon with a Bessel beam. Now my problem is that the Bessel beam has values between 0 and 1, while the near field is between 0 and an intensity value that depends on the sampling. How can I standardize the 3D plot? I have tried `Rescale`, `Standardize` and `Normalize`, but I may have been doing something wrong. Anybody here who can show me the way? Thank you very much!Robert Schröder20170918T18:48:46Z[GOLF] Compact WL for lengthy TeX fractal Sierpinski Triangle
http://community.wolfram.com/groups//m/t/1187128
[Code Golf][1] is a fun coding competition with participants aiming for the shortest possible code that implements an algorithm. But please feel free to post ANY code that solves the problem  just to have fun!

**GOAL: write compact Wolfram Language (WL) code that generates TeX code given below to render fractal known as Sierpinski Triangle.**

I've recently seen a tremendously lengthy TeX code that produced a nicely rendered [Sierpinski triangle][2], posted by [@Szabolcs Horvát][at0] on social media. Because Wolfram Community has a built-in MathML plugin (based on LaTeX), we can see the result right here!
$${{{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}^{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}_{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}}^{{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}^{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}_{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}}_{{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}^{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}_{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}}}^{{{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}^{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}_{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}}^{{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}^{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}_{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}}_{{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}^{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}_{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}}}_{{{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}^{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}_{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}}^{{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}^{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}_{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}}_{{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}^{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}_{{{x}^{x}_{x}}^{{x}^{x}_{x}}_{{x}^{x}_{x}}}}}$$
with the TeX code being:
![enter image description here][3]
# Update:
BTW see a similar discussion [**here**][4].
[at0]: http://community.wolfram.com/web/szhorvat
[1]: https://en.wikipedia.org/wiki/Code_golf
[2]: http://mathworld.wolfram.com/SierpinskiSieve.html
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot20170918at5.14.33PM.png&userId=11733
[4]: https://codegolf.stackexchange.com/q/138245/4997Vitaliy Kaurov20170918T21:58:05ZFalling back to Mathematica 11.1. Imports of CSV files are 100 times slower
http://community.wolfram.com/groups//m/t/1186345
Hi,
I originally posted a problem I was encountering using Mathematica 11.2 under:
[Mathematica 11.2 Import Issues][1]
After implementing the recommended workaround that eliminated the Import CSV row truncation issue,
I had to give up, and fall back to Mathematica 11.1.
Working with over 100,000 rows of downloaded Wolfram StarData, I had a minimum of 11 files, with 10,000 rows per file.
The import times under Mathematica 11.1 were around 2 seconds for each CSV file.
Under Mathematica 11.2, imports of the same files take over 250 seconds per file, making Mathematica 11.2 unworkable for analyzing StarData. Falling back to Mathematica 11.1 is my only solution until this problem is fixed.
See attached Import image file.
I've also included the sample CSV file.
![enter image description here][2]
[1]: http://community.wolfram.com/groups//m/t/1186112
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fallback.jpeg&userId=885550Joseph Karpinski20170917T16:37:27Z[✓] Avoid example of NetChain running in MXNet to cause Error?
http://community.wolfram.com/groups//m/t/1187520
**Problem remains in *Mathematica 11.2***
I have asked a question in [Stack Exchange][1] and got no answer.
First defining the net
SeedRandom[1234];
net = NetInitialize@NetChain[{5, 3}, "Input" -> 2];
{net[{1, 2}], net[{0, 0}], net[{0.1, 0.3}]}
>{{1.27735, 1.21455, 1.02647}, {0., 0.,
0.}, {0.141054, 0.145882, 0.099945}}
Export it in the *MXNet* format:
Export["simple model-symbol.json", net, "MXNet"]
The JSON file (it defines the net structure) is:
[![enter image description here][2]][3]
Via the code we can see that the parameter names are the same between the `params` and the `JSON` file.
import mxnet as mx
mx.nd.load('simple model-0000.params')
[![enter image description here][4]][5]
Importing the net raises the error `ValueError: need more than 1 value to unpack`.
import mxnet as mx
sym, arg_params, aux_params = mx.model.load_checkpoint('simple model', 0)
[![enter image description here][6]][7]
The `data_names` and `label_names` used below can be read off from the JSON file.
Complete code to make a prediction:
import mxnet as mx
import numpy as np
sym, arg_params, aux_params = mx.model.load_checkpoint('simple model', 0)
mod = mx.mod.Module(symbol=sym,data_names=['Input'],label_names=['2'])
mod.bind(for_training=False, data_shapes=[('Input', (1,2))])
mod.set_params(arg_params, aux_params)
input_data=np.array([[1,2]])
array = mx.nd.array(input_data)
from collections import namedtuple
Batch = namedtuple('Batch', ['Input'])
mod.forward(Batch([array]))
prob = mod.get_outputs()[0].asnumpy()
prob = np.squeeze(prob)
print prob
**Version**:
Mathematica `"11.2.0 for Microsoft Windows (64bit) (September 11, 2017)"`
MXNet 0.11.1 (the latest version 20170905_mxnet_x64_vc14_cpu.7z)
How to fix it?
[1]: https://mathematica.stackexchange.com/questions/155069/exampleofnetchainrunninginmxnetcauseerror
[2]: https://i.stack.imgur.com/RE75P.png
[3]: https://i.stack.imgur.com/RE75P.png
[4]: https://i.stack.imgur.com/8RycG.png
[5]: https://i.stack.imgur.com/8RycG.png
[6]: https://i.stack.imgur.com/dxZtJ.png
[7]: https://i.stack.imgur.com/dxZtJ.pngzhou xiao20170919T02:39:44ZDiscussion of a puzzle in Newtonian physics requiring Mathematica
http://community.wolfram.com/groups//m/t/1187747
I am not sure the moderator will approve of this thread. It starts with a statement from Newton and then builds a model to test that statement; the model requires numerical integration, for which I employed Mathematica. The results were somewhat unexpected and led to further parametric modeling using different coordinate systems to build and evaluate the resulting integrals. I constructed spheres and cylinders to test Newton's statement. The results were not fully consistent, in that the numerical evaluations of the integrals disagreed at a certain level of precision. The results using spheres proved most variable under certain circumstances. The results using cylinders showed no differences, to very high levels of precision, that would have indicated the two coordinate systems give different results. The issue, then, is when or how the numerical results obtained with Mathematica are noise rather than true variations in the calculated values.
This thread starts with a statement from Newton and proceeds to a mathematical proof using Mathematica, which leads to the question of how reliable the results of the numerical integration are. In the spirit of full disclosure, I have been working with these models for several years, and the comments from various members of the community have been very useful. In none of those threads was the real underlying issue discussed, but now that my analysis is finished, it is useful to broaden the discussion.
Any interest??
LutherLuther Nayhm20170919T17:32:29ZSet ZeroMQ subscribe channel in Mathematica?
http://community.wolfram.com/groups//m/t/1188328
Mathematica 11.2 can do ZMQ socket programming now! But I cannot find out how to set the subscribe channel from searching the docs. When using the C language there is a function [`zsocket_set_subscribe`](http://zguide.zeromq.org/c:espresso) to set the subscribe channel; there seems to be no corresponding function in the [Mathematica Socket API](http://reference.wolfram.com/language/ref/SocketConnect.html).
(* To create socket *)
client = SocketConnect[{"Pub IP", Pub Port}, "ZMQ_SUB"]
(* Then how to set channel to receive message? *)

I also asked on [Stack Exchange](https://mathematica.stackexchange.com/questions/156161/howtosetzeromqsubscribechannelinmathematica) and will sync if there is an answer.hui liu20170920T08:48:17ZSolve bus routing problem with a single destination and multiple depots?
http://community.wolfram.com/groups//m/t/1186558
I'm trying to optimize the transportation of a fleet of buses that: (1) start from several depots, (2) pick up students from several stops, and (3) drop off all students at a single destination (the school). Travel times are not symmetric: the time to go from Point A to Point B is not the same as from Point B to Point A.
The goal is to minimize the number of buses and finding their routes while satisfying the constraint that no student should travel for longer than `k = 1.5` times the duration of the direct route from her bus stop to school.
To simplify the problem, let's assume that buses have infinite capacity, ie. each bus can accommodate all students at all stops on its route.
I am aware that this is a well-studied, hard problem; see e.g. a recent presentation in:
https://www.acsu.buffalo.edu/~qinghe/thesis/201701%20Caceres%20PhD%20School%20Bus%20Routing.pdf
I have tried without much luck to trick `FindShortestTour` using a cost matrix with "phantom", duplicated destinations, infinite distances, etc.
Any thoughts? Toy data below.
nstops = 50;
nbus = 10;
ndepots = 4;
destination = {0, 0};
depots = Table[
   ReIm[(1 + RandomReal[.1*{-1, 1}])*
     Exp[I*2*(j*Pi/ndepots + RandomReal[.1*{-1, 1}])]], {j, 0,
    ndepots - 1}];
stops = RandomReal[{-1, 1}, {nstops, 2}];
ListPlot[stops,
 Epilog -> {PointSize[Large], Red, Point[destination], Black,
   Point[depots]}, PlotRange -> {{-1.5, 1.5}, {-1.5, 1.5}},
 AspectRatio -> 1]
distStops =
Table[If[i == j, 0,
EuclideanDistance[stops[[i]], stops[[j]]]*RandomReal[{0.9, 1.2}]], {i,
nstops}, {j, nstops}];
distDepotsStops =
Table[EuclideanDistance[depots[[i]], stops[[j]]]*RandomReal[{0.9, 1.2}], {i,
ndepots}, {j, nstops}];
distStopsDestination =
Table[EuclideanDistance[stops[[i]], destination]*RandomReal[{0.9, 1.2}], {i,
nstops}];
Distances from stops to depots and from depots to destination are infinite.Matthias Odisio20170918T02:56:12Z[✓] Reduce this vertical space in Grid?
http://community.wolfram.com/groups//m/t/1188054
The results I want are somewhere between two results I'm able to achieve. My first attempt:
testPlot[] := ListPlot[{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}]
Grid[{{"Label1", testPlot[], testPlot[]}, {"Label2", testPlot[], testPlot[]}}, Frame -> All]
![enter image description here][1]
I like the vertical spacing, but I'd prefer larger images, making use of the unused space on the right side of the window.
For my second attempt, I change an option:
testPlot[] := ListPlot[{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}, ImageSize -> Full]
![enter image description here][2]
Now the horizontal spacing is exactly what I want, but there's major unused vertical space between plots. How can I reduce that vertical space? Thank you.
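A hedged sketch of one possible compromise, assuming the extra vertical space comes from the default aspect ratio being applied at full width: keep `ImageSize -> Full` but flatten each plot with an explicit `AspectRatio` (the value 1/4 here is just a starting point to tune):

```mathematica
testPlot[] := ListPlot[{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}, ImageSize -> Full, AspectRatio -> 1/4]
Grid[{{"Label1", testPlot[], testPlot[]}, {"Label2", testPlot[], testPlot[]}}, Frame -> All]
```

Alternatively, the `ItemSize` option of `Grid` can fix the cell heights directly.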
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Oeb4e3X.png&userId=1009289
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5pVxCkv.png&userId=1009289Joe Donaldson20170920T00:02:13ZA Circle, Triangle, Square and Pentagon Walk Into a Pentagon
http://community.wolfram.com/groups//m/t/1187756
![enter image description here][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=circle_triangle_square_pentagon_in_pentagon.jpg&userId=29126
See attached notebook. The minimization is done by doing a global random search with ParametricIPOPTMinimizeFrank Kampas20170919T20:24:01ZAvoid MMA 11.2 Import Statement Produces Errors & Truncates CSV Data?
http://community.wolfram.com/groups//m/t/1186112
https://mathematica.stackexchange.com/questions/155908/mathematica112importstatementproduceserrorstruncatescsvdatacreatedi
In Mathematica 11.1 I've created many CSV files from StarData:
sunMass = StarData["Sun", "Mass"];
sunLuminosity = StarData["Sun", "Luminosity"];
sunTemperature = StarData["Sun", "EffectiveTemperature"];
sunGravity = StarData["Sun", "Gravity"];
sunDensity = StarData["Sun", "Density"];
sunVolume = StarData["Sun", "Volume"];
sunDiameter = StarData["Sun", "Diameter"];
SetDirectory[$UserDocumentsDirectory]
listData1 = Take[StarData[EntityClass["Star", All]], {1, 10000}];
CloseKernels[]; LaunchKernels[4]
AbsoluteTiming[
Length[
data =
Transpose[
ParallelMap[
StarData[listData1, #] &,
{"Name", "Metallicity", "SpectralClass", "BVColorIndex",
"EffectiveTemperature",
"Mass", "Luminosity", "AbsoluteMagnitude", "Gravity", "Density",
"Diameter",
"DistanceFromEarth", "MainSequenceLifetime", "Parallax",
"RadialVelocity", "Radius", "StarEndState", "StarType",
"SurfaceArea",
"VariablePeriod", "Volume", "HDName"}]]]]
zeroData = data /. {Missing["NotAvailable"] -> 0};
noUnitsData =
zeroData /. {c1_, c2_, c3_, c4_, c5_, c6_, c7_, c8_, c9_, c10_,
c11_, c12_, c13_, c14_, c15_, c16_, c17_, c18_, c19_, c20_, c21_,
c22_} -> {c1, c2, c3, c4, QuantityMagnitude[c5],
QuantityMagnitude[c6/sunMass],
QuantityMagnitude[c7/sunLuminosity], c8,
QuantityMagnitude[c9/sunGravity], QuantityMagnitude[c10],
QuantityMagnitude[c11/sunDiameter], QuantityMagnitude[c12],
QuantityMagnitude[c13], QuantityMagnitude[c14]
, QuantityMagnitude[c15], QuantityMagnitude[c16], c17, c18,
QuantityMagnitude[c19], QuantityMagnitude[c20],
QuantityMagnitude[c21/sunVolume], c22};
Length[noUnitsData]
prePendData =
Prepend[noUnitsData, {"Name", "Metallicity", "SpectralClass",
"BVColorIndex", "EffectiveTemperature",
"Mass", "Luminosity", "AbsoluteMagnitude", "Gravity", "Density",
"Diameter",
"DistanceFromEarth", "MainSequenceLifetime", "Parallax",
"RadialVelocity", "Radius", "StarEndState", "StarType",
"SurfaceArea",
"VariablePeriod", "Volume", "HDName"}];
TableForm[Take[prePendData, 5]]
Export["allStars1.csv", prePendData, "CSV"]
This creates a CSV file with 10,000 rows of comma separated data. Works great for all 108,939 rows of StarData, by creating 11 CSV files.
Importing each CSV file in a new notebook is pretty straightforward:
Length[data1 =
Import["allStarData1.csv", {"Data", {All}},
"HeaderLines" -> 1] /. {c1_, c2_, c3_, c4_, c5_, c6_, c7_, c8_,
c9_, c10_, c11_, c12_, c13_, c14_, c15_, c16_, c17_, c18_, c19_,
c20_, c21_, c22_} -> {c1, c2, c3, c4, c5, c6, c7, c8, c9, c10,
c11, c12, c13, c14
, c15, c16, c17, c18, c19, c20, c21, c22}
]
This worked fine across different releases of Mathematica 11 for a large number of CSV files.
Upgraded today to Mathematica 11.2.
None of the existing notebooks work.
Import statements now take forever, generating error messages, and truncating large numbers of rows from existing CSV files.
One workaround that I'm currently testing is running the StarData extract code listed above under Mathematica 11.2, creating new CSV files, and then importing the new files.
This worked for the first 10,000 row StarData extract. No errors, no truncation. But still runs very slow. Will have to run the other 10 extracts and create new 10,000 row CSV files for each.
Feels like this is a bug in the Mathematica 11.2 Import statement's internal code, where new internal data-verification checks are incompatible with previously created CSV files.
Anyone else run into this issue?
Also I turned off the error messages to try to get through existing code, but don't know how to turn the error messages back on, so that I can include them in this post. Anyone know how?
Thanks
Including JPEG of Import errors:
![enter image description here][1]
Mathematica 11.2 documentation points to updates in CSV Import & Export functions:
![enter image description here][2]
Sample 10,000-record CSV file created with Export in Mathematica 11.1 that is truncated when Imported under Mathematica 11.2.
The same files created with Export under Mathematica 11.2 and Imported under Mathematica 11.2 are not truncated, and have the full 10,000 records per file.
![enter image description here][3]
`"TextDelimiters" -> ""` fixed the problem and eliminated the row truncation. Import is still horribly slow under Mathematica 11.2 for a 10,000-row CSV file, taking over 250 seconds. The workaround I tested was to create all the files with Export under Mathematica 11.2 and then Import them: no truncation. The Part::partw warning messages are new under 11.2; they did not show up under Mathematica 11.1. Turning them off via Off[Part::partw] did not improve the elapsed time of the Import.
![enter image description here][4]
[TextDelimiters Fixed the Problem][5]
Fixed
Falling back to Mathematica 11.1. Importing CSV files under Mathematica 11.2 is 100 times slower.
![enter image description here][6]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=error_snapshot.jpeg&userId=885550
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=IMG_0104.PNG&userId=885550
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=full_file_on_import.jpeg&userId=885550
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fixed.jpeg&userId=885550
[5]: https://mathematica.stackexchange.com/questions/155908/mathematica112importstatementproduceserrorstruncatescsvdatacreatedi
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fallback.jpeg&userId=885550Joseph Karpinski20170917T03:53:04ZIGraph/M: igraph interface for Mathematica
http://community.wolfram.com/groups//m/t/560469
The post below was written for the original release of IGraph/M. The current release is 0.3.0. See http://szhorvat.net/mathematica/IGraphM for more details.
It is compatible with 64-bit Windows, macOS 10.9 or later, 64-bit Linux, and Raspbian Jessie (on the Raspberry Pi computer). It requires Mathematica 10.0.2 or later.

I would like to announce IGraph/M, a new igraph interface for Mathematica: http://szhorvat.net/mathematica/IGraphM
[igraph](http://igraph.org/) is a graph manipulation and analysis package. IGraph/M makes its functionality available from Mathematica.
This initial release, version 0.1, covers only some igraph functions, as I focused on the things that I need personally. However the main framework is complete, and new functions can be added quickly. If anyone would like to contribute, please contact me.
Binary packages for OS X (10.9 or later) and Linux can be downloaded [from GitHub](https://github.com/szhorvat/IGraphM/releases). Unfortunately I was unable to compile the development version of igraph for Windows, so I cannot provide a Windows version. If you can help with compiling igraph itself (not IGraph/M) on Windows, please let me know!
Functionality in this release that is not built into Mathematica:
* Vertex betweenness centrality for weighted graphs
* Estimates of vertex betweenness, edge betweenness and closeness centrality, for large graphs
* Minimum feedback arc set for weighted and unweighted graphs
* Find all cliques (not just maximal ones)
* Count 3- and 4-motifs
* Rewire edges, keeping either the density or the degree sequence
* Alternative algorithms for isomorphism testing: Bliss, VF2
* Subgraph isomorphism
* Test if a degree sequence is graphical
* Alternative algorithms for generating random graphs with given degree sequence
* Layout algorithms that take weights into account
Note that IGraph/M is *not a replacement* for Mathematica's graphs and networks functionality. It is meant to complement what is already available in Mathematica, thus it primarily focuses on adding functionality that is not already present.
Why did I release the package before covering most of the igraph functionality? I do not have time to work on things I do not personally need or use, so I am unlikely to extend it further unless the need comes up. I do think that the functions that are included in v0.1 can already be useful to others too. I would also like to give the opportunity for people to contribute to the project if they wish to. The groundwork has been laid, so further extensions should be quick and relatively easy.
Also check out a related project, [IGraphR](https://github.com/szhorvat/IGraphR), which makes igraph available for Mathematica users through RLink. I wrote IGraph/M because I needed higher performance and greater reliability (especially for parallel computing) than what RLink could provide.

**A request:** If any of you have used IGraphR in the past to access igraph from Mathematica, please post a response to this thread and let me know which specific functions you were using.

Szabolcs Horvát 2015-09-06T12:55:14Z

Find which Locator is active in a Manipulate or LocatorPane?
http://community.wolfram.com/groups//m/t/1186782
I am working on something where I need to know which locator out of many is currently being changed.
(* Edit: I cross-posted here: https://mathematica.stackexchange.com/questions/156022/
Subsequent edit: there are two nice answers posted at this URL. *)
Here is some working code that uses the second argument of Dynamic to identify which Locator is active, for a small set of hardcoded Locators. I'd like to generate the Locators and the Manipulate controls programmatically, so that this extends to many Locators. I've tried various combinations of Hold* functions without really knowing what I am doing.
*Does anyone know how I can achieve my goal of finding which is the active locator?*
See some additional comments below the working code about an attempt to do this with LocatorPane.
DynamicModule[{active = 0, p},
 Manipulate[
  Column[{
    active,
    Deploy@Graphics[{
       {
        Locator@Dynamic[p[1, 1, 1],
          {((p[1, 1, 1] = {-1, Last[#]}; active = {1, 1, 1}) &),
           ((active = 0) &)}],
        Locator@Dynamic[p[1, 1, 2],
          {((p[1, 1, 2] = #; active = {1, 1, 2}) &),
           ((active = 0) &)}],
        Locator@Dynamic[p[1, 1, 3],
          {((p[1, 1, 3] = {1, Last[#]}; active = {1, 1, 3}) &),
           ((active = 0) &)}]
        },
       Line[{p[1, 1, 1], p[1, 1, 2], p[1, 1, 3]}]
       },
      PlotRange -> 2]
    }],
  {{p[1, 1, 1], {-1, 1}}, ControlType -> None},
  {{p[1, 1, 2], {0, 0}}, ControlType -> None},
  {{p[1, 1, 3], {1, 1}}, ControlType -> None}]
 ]
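One way to generate such Locators programmatically (a sketch, not from the original post) is to inject each index into the second-argument functions of Dynamic using With, so that every pure function closes over its own index value instead of a shared iterator variable:

    DynamicModule[{active = 0, p},
     p[1] = {-1, 1}; p[2] = {0, 0}; p[3] = {1, -1};
     Column[{
       Dynamic[active],
       Deploy@Graphics[{
          Table[
           With[{i = i},  (* force i to its current value inside the pure functions *)
            Locator@Dynamic[p[i],
              {((p[i] = #; active = i) &), ((active = 0) &)}]],
           {i, 3}],
          Dynamic@Line[{p[1], p[2], p[3]}]
          },
         PlotRange -> 2]
       }]
     ]

Without the `With[{i = i}, ...]` wrapper, every generated pure function would refer to the Table iterator symbol itself, and all Locators would report the same index.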
Here is an attempt to do the same thing with a LocatorPane and Nearest. It is not clear to me that the two-argument form of Dynamic is working here:
DynamicModule[{active = False, pt1 = {1, 1}/2, pt2 = {1, 1}/2, pt3 = {1, 1}/2},
 Row[{
   active,
   LocatorPane[{
     Dynamic[pt1,
      {(((active = Nearest[pt1, #, 1][[1]]); pt1 = {0, 1} #) &),
       ((active = 0) &)}],
     Dynamic[pt2,
      {(((active = Nearest[pt2, #, 1][[1]]); pt2 = #) &),
       ((active = 0) &)}]
     },
    Graphics[{Yellow, Disk[{0, 0}, 2]},
     PlotRange -> 2]]
   }]
 ]

W. Craig Carter 2017-09-18T16:59:49Z
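A possible variant of the LocatorPane attempt (a sketch, not from the thread): keep all positions in a single list, attach the two-argument Dynamic to that list, and determine which locator moved by comparing the old and new position lists directly, instead of using Nearest:

    DynamicModule[{active = 0, pts = {{-1, 0}, {0, 0}, {1, 0}}},
     Row[{
       Dynamic[active],
       LocatorPane[
        Dynamic[pts,
         {(* while dragging: the index where old and new lists differ *)
          ((active = FirstPosition[MapThread[SameQ, {pts, #}], False, {0}][[1]];
            pts = #) &),
          (* on release *)
          ((active = 0) &)}],
        Graphics[{Yellow, Disk[{0, 0}, 2]}, PlotRange -> 2]]
       }]
     ]

The comparison works because the update function receives the new list as `#` before `pts` is reassigned, so exactly one element differs between `pts` and `#` during a drag.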