Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Graphics and Visualization sorted by active

Create a plot of vectors using a Do-Loop?
http://community.wolfram.com/groups/-/m/t/1036796
I am trying to create a plot of vectors using a Do-loop and
"Graphics [{Arrow[{....". Within the Do loop I calculate
the beginning (X,Y) and ending coordinates (X1,Y1) for each vector.
Can I use the "Graphics [{Arrow[{" command within a Do-Loop
or do I first have to create a Table? Below is my code.
Terry
A = {
{.79, 1.36}, {.93, 1.38}, {.58, .38}, {.87, .87}, {.83, .79},
{.31, .99}, {.60, .48}, {.60, .87}, {1.64, .15}, {1.11, 1.30},
{.53, .97}, {1.26, .39}, {2.37, .00}, {1.17, 1.76}, {.96, 1.26},
{.56, .46}, {1.17, .20}, {.63, .26}, {1.01, .47}, {.81, .77}};
DI = {-.90, -1.20, 1., -.97, -1.08, -1.53, -.61, -.60,
1.24, -.69, -1.31, .92, 2.39, -.06, -.48, -.82, -.82,
1.11, .66, -.15, -1.08}
Do[MDISC = Sqrt[A[[i, 1]]^2 + A[[i, 2]]^2];
COSX = A[[i, 1]]/MDISC;
COSY = A[[i, 2]]/MDISC;
BIGD = -DI[[i]]/MDISC;
X = BIGD*COSX;
Y = BIGD*COSY;
X1 = (MDISC + BIGD)*COSX;
Y1 = (MDISC + BIGD)*COSY;
Graphics[{Arrow[{{X, Y}, {X1, Y1}}]}], {i, 20}]

Terry Ackerman
2017-03-21T11:42:18Z

Issue with label backgrounds in Graphics3D, Mathematica 11.1
http://community.wolfram.com/groups/-/m/t/1035985
A new, undesired "feature" in MMA 11.1 is that labels in Graphics3D seem to always be rendered against a white background. The screenshots below are from a recent [Demonstration by Izidor Hafner][1]
MMA 10.4
![enter image description here][2]
MMA 11.1
![enter image description here][3]
[1]: http://demonstrations.wolfram.com/FourTheoremsOnSphericalTriangles/
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=MMA10.4.png&userId=93385
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=MMA11.1.png&userId=93385

Hans Milton
2017-03-20T19:20:57Z

Alternative framework for stress testing routines - Value at Risk focus
http://community.wolfram.com/groups/-/m/t/458805
Stress testing in the Value-at-Risk context provides a complementary view of the firm's risk profile when the market becomes extremely volatile and unstable. In this respect, stress testing generally replicates crisis scenarios in the current market setting. The traditional approach to stress testing emphasises qualitative factors. We propose an alternative arrangement - a purely quantitative approach in a probabilistic setting, where a particular stress test is modelled through the quantiles of a calibrated probability distribution.
![StVar image][1]
#Introduction
Value-at-Risk (VaR) as a standard risk measure is usually defined in terms of a time horizon, a confidence level and the parameter settings that drive the calculation of the measure. There are a few theoretical assumptions underlying VaR, and one of them defines VaR as a measure under normal market conditions. This fundamental principle, by definition, ignores extreme market scenarios where market parameters - and volatility in particular - undergo periods of excessive variation.
Stress testing plays a complementary role to VaR - it identifies the hidden vulnerability of the risk measure that results from hidden assumptions, and thus provides risk and senior managers with a clear view of the potential loss when the market runs into crisis. Therefore, financial institutions these days are required to run stress tests of their VaR on a regular basis to quantify the additional loss based on stress-affected parameters.
To ease and streamline the stress testing exercise, we propose a new approach to stress testing based on a probabilistic representation of VaR variables, where a particular scenario is modelled through the quantile behaviour of a calibrated density. This approach is flexible, easy to implement and leads to an elegant and tractable solution that can be applied in various risk quantification settings.
#Stress definition
Stress testing is generally carried out in three settings:
- Historical scenario of particular crisis
- Stylised definition of particular scenarios
- Hypothetical events
The recent regulatory review strongly prefers the historical scenario approach, where a particular period of 12 consecutive months of observed crisis has to be included in the stress testing exercise.
The objective of this paper is to propose a new parametric approach to stress testing where the underlying market parameters are defined probabilistically and the stress scenario is then expressed as a quantile of a certain probability distribution. The solution brings a number of benefits - the primary ones being simplicity of implementation, ease of maintenance and ease of use.
#Problem setting
Consider the following case - we have built a portfolio of 5 UK stocks - Lloyds, Vodafone, Barclays, BP and HSBC - with £1 million invested in each of them. We want to compute **VaR** and **stressed VaR** for our portfolio with £5 million value.
##Getting market data
We get the market data for the 5 stocks with a long history, starting from January 2008:
stocks = {"LLOY.L", "VOD.L", "BARC.L", "BP.L", "HSBA.L"};
data = TimeSeries[FinancialData[#, "January 1, 2008"]] & /@ stocks;
The price history of each stock looks as follows:
DateListPlot[data, PlotLegends -> stocks]
![enter image description here][2]
## Processing market data
We first compute the daily log-returns of each stock:
dret = TimeSeriesResample[MovingMap[Log[Last[#]] - Log[First[#]] &, data, {2}]];
![enter image description here][3]
In the same way we calculate each stock's historical daily volatility:
qvol = MovingMap[StandardDeviation, dret, {60, "Day"}];
Evaluate@DateListPlot[qvol, PlotRange -> All, PlotLegends -> stocks]
![enter image description here][4]
Both the returns and their volatility reveal high values for the financial stocks - Lloyds and Barclays in particular - with excessive movements in 2009, 2010 and partially in 2012.
![enter image description here][5]
These are the periods that represent the peak of the financial crisis and the stress to the market.
#Probabilistic definition of volatility
The VaR metric deals primarily with return volatility, and therefore the volatility behaviour is the key object of the stress testing process. As the graph above suggests, the volatility evolution over time is not constant and we observe periods of high and low values. If we take an abstract view of the volatility data, we can treat it as a 'random variable' with some probability distribution. The presence of high and low values strongly supports this argument.
We propose the Johnson distribution - unbounded type - to define the distribution of the volatility density over the observed period of time. Let's recall that the Johnson distributions form a family of the form Y=\[Sigma] g((X-\[Gamma])/\[Delta])+\[Mu] where X~Normal[]. In the case of the 'unbounded' distribution, g=sinh. The Johnson distributions are well-behaved functions and, with four parameters to fit, are ideally suited for fitting data with long and patchy tails. In this respect they can easily be used to model the volatility distribution over time.
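As a quick consistency check - a sketch, assuming WL's `JohnsonDistribution["SU", \[Gamma], \[Delta], \[Mu], \[Sigma]]` uses the same parameterization as the transformation above - one can rebuild the unbounded Johnson distribution from a standard normal via `TransformedDistribution`:

```mathematica
(* Sketch: the "SU" Johnson distribution as a sinh-transform of a standard
   normal, Y = \[Mu] + \[Sigma] Sinh[(X - \[Gamma])/\[Delta]], X ~ Normal[0,1].
   Assumes WL's built-in "SU" parameterization matches this form. *)
su = TransformedDistribution[
   \[Mu] + \[Sigma] Sinh[(x - \[Gamma])/\[Delta]],
   x \[Distributed] NormalDistribution[0, 1]];

(* if the parameterizations agree, this should simplify to True *)
FullSimplify[
 PDF[su, y] == PDF[JohnsonDistribution["SU", \[Gamma], \[Delta], \[Mu], \[Sigma]], y],
 Assumptions -> {\[Delta] > 0, \[Sigma] > 0, y \[Element] Reals}]
```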
The probability density of the Johnson unbounded distribution is:
PDF[JohnsonDistribution["SU", \[Gamma], \[Delta], \[Mu], \[Sigma]], x]
![enter image description here][6]
and the density plot with varying shape factor \[Gamma] looks as follows:
Plot[Evaluate@
Table[PDF[JohnsonDistribution["SU", \[Gamma], 1.25, 0.007, 0.00341],
x], {\[Gamma], {-3, -4, -5}}], {x, 0, 0.15}, Filling -> Axis,
PlotRange -> All,
PlotLegends -> {"\[Gamma]=-3", "\[Gamma]=-4", "\[Gamma]=-5"}]
![enter image description here][7]
Johnson-type distributions are very flexible in terms of tail control.
## Historical distribution of volatility
We can visualise the volatility distribution by looking at each stock's histogram:
size = Length[stocks];
Table[Histogram[qvol[[i, All, 2]], 30, "PDF",
PlotLabel -> stocks[[i]],
ColorFunction ->
Function[{height}, ColorData["Rainbow"][height]]], {i, size}]
![enter image description here][8]
We can use the historical data to fit the volatility to the Johnson distribution:
edist = Table[
EstimatedDistribution[qvol[[i, All, 2]],
JohnsonDistribution [
"SU", \[Gamma], \[Delta], \[Mu], \[Sigma]]], {i, size}]
How good is the fit? We can observe this in the charts:
Table[Show[
Histogram[qvol[[i, All, 2]], 20, "PDF", PlotLabel -> stocks[[i]]],
Plot[PDF[edist[[i]], x], {x, 0, 0.1}, PlotRange -> All,
PlotStyle -> {Blue, Thick}]], {i, size}]
![enter image description here][9]
## Defining correlation matrix
Portfolio VaR requires a correlation structure amongst the VaR components. We can create it from the historical return data defined above. The only problem is to define the time window from which we select the market data. For the standard VaR we can choose the one-year history:
tswin = {{2014, 3, 1}, {2015, 3, 1}};
retwin = TimeSeriesWindow[dret, tswin];
volwin = Table[
Mean[MovingMap[StandardDeviation, retwin[[i]], {60, "Day"}][[All,
2]]], {i, size}]
Table[retwin[[i, All, 2]], {i, size}];
wincorr = Correlation[Transpose[%]];
wincorr // MatrixForm
> {0.0117992, 0.0127976, 0.0155371, 0.011966, 0.00891589}
![enter image description here][10]
#Standard VaR calculation
Having defined the volatility and the correlation matrix based on the past year's data, we can calculate the parametric VaR easily. Assuming a normal distribution for the stock returns and a 1 day VaR horizon, this can be defined as follows:
##Individual stock VaR
indVaR = -Sqrt[2] \[Pi] \[Sigma] InverseErfc[2 \[Alpha]]
where \[Pi] = value of the investment (£1 million), \[Sigma] = stock return volatility and \[Alpha] = confidence level.
ndinv = Refine[InverseCDF[NormalDistribution[0, \[Sigma]], \[Alpha]],
0 < \[Alpha] < 1]
> -Sqrt[2] \[Sigma] InverseErfc[2 \[Alpha]]
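As a quick sanity check on `ndinv`: the standard normal 99% quantile is ≈ 2.33, so evaluating the expression with unit volatility should reproduce it:

```mathematica
(* one-sided 99% multiplier for unit volatility *)
ndinv /. {\[Alpha] -> 0.99, \[Sigma] -> 1.}
(* 2.32635 *)
```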
With 99% confidence and each stock volatility calculated above
indvar = Table[
10^6 ndinv /. {\[Alpha] -> 0.99, \[Sigma] -> volwin[[i]]}, {i, size}]
Total[%]
BarChart[indvar, ChartStyle -> "Rainbow",
PlotLabel -> Style["Individual stock 1 day VaR", 16],
ChartLegends -> stocks]
> {27449.1, 29771.8, 36144.6, 27837.1, 20741.5}
> 141944.
![enter image description here][11]
We can see the highest individual VaR for Barclays and the lowest for HSBC. This is consistent with the individual stock volatilities observed above.
##Portfolio VaR
Formula-wise this is equivalent to:
portVaR=Sqrt[indVar^T.\[CapitalSigma].indVar]
baseportVaR = Sqrt[indvar.wincorr.indvar]
> 104226.
One can see that the portfolio VaR is less than the sum of the individual VaRs due to the diversification effect. Diversification reduces the sum of individual VaRs by almost £38,000.
The 1 day total portfolio VaR is £104k, or about 2.1% of the portfolio's value.
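The size of the diversification benefit can be computed directly from the quantities already defined (`indvar` and `baseportVaR` above):

```mathematica
(* sum of standalone VaRs minus the diversified portfolio VaR *)
Total[indvar] - baseportVaR
(* ≈ 37718, i.e. close to £38k of diversification benefit *)
```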
#Stressed VaR
VaR is driven primarily by the volatility of returns. In a portfolio context there is another factor - correlation. When stressing VaR, one has to think about extending the stress to both parameters.
## Historical scenario of past crisis
This is the most frequently used method of stress testing. Having the data available for the entire period makes this selection simple.
Looking at the historical volatility graph, it is obvious that the volatility peaked in 2009 - 2010. We therefore select this period for our stress testing.
tswin = {{2009, 3, 1}, {2010, 3, 1}};
retwin = TimeSeriesWindow[dret, tswin];
volwin = Table[
Mean[MovingMap[StandardDeviation, retwin[[i]], {60, "Day"}][[All,
2]]], {i, size}]
Table[retwin[[i, All, 2]], {i, size}];
wincorr = Correlation[Transpose[%]];
wincorr // MatrixForm
> {0.0412017, 0.013961, 0.0342192, 0.0138671, 0.0187612}
![enter image description here][12]
We can now see very different values for both the volatilities and the correlation matrix. To compute the stressed VaR, all we need to do is replace the standard VaR data set with the stress period data:
indSTvar =
Table[10^6 ndinv /. {\[Alpha] -> 0.99, \[Sigma] -> volwin[[i]]}, {i,
size}]
Total[%]
BarChart[indSTvar, ChartStyle -> "Rainbow",
PlotLabel -> Style["Individual stock 1 day Stress VaR", 16],
ChartLegends -> stocks]
> {95849.5, 32478.1, 79605.9, 32259.8, 43645.}
>
> 283838.
![enter image description here][13]
Consequently, the individual stock stressed VaRs are very different from the standard VaRs. For example, the Lloyds stressed VaR is about 3.5 times higher than the standard VaR measure.
Portfolio-level stressed VaR:
stressPortVaR = Sqrt[indSTvar.wincorr.indSTvar]
> 218471.
BarChart[{baseportVaR, stressPortVaR}, ChartStyle -> {Blue, Red},
ChartLegends -> {"Std VaR", "Stress VaR"}]
![enter image description here][14]
The portfolio 1 day stressed VaR has **doubled** under the stress scenario and represents a £218k loss.
We can of course choose any other period to see how the VaR behaves under a different set of market parameters.
##Inverse CDF method for stress scenarios - individual case
The calibration of historical volatility to the Johnson distribution enables us to explore an alternative route for stressed VaR generation. This can be described as the **Inverse CDF** method.
If we assume that the return distribution is normal and the volatility of that return is calibrated to the Johnson unbounded distribution, we can obtain the stressed VaR metrics as a quantile of both distributions.
- Stressed volatility:
We are interested in the quantile value of the volatility, to capture the stressed market sentiment. This can easily be achieved through the `InverseCDF` function:
invJD = Refine[
InverseCDF[
JohnsonDistribution [
"SU", \[Gamma], \[Beta], \[Kappa], \[Nu]], \[Lambda]],
0 < \[Lambda] < 1] // Simplify
![enter image description here][15]
- Combined stressed VaR:
The VaR is then the composite of the VaR formula and the quantiled volatility measure:
stVaRNd = ndinv /. \[Sigma] -> invJD // Simplify
> ndinv
The above formula is the **parametric definition** of the stressed VaR. The function operates on two quantile parameters:
- VaR confidence level \[Alpha]
- Volatility 'stress' factor \[Lambda]
The stressed VaR parametric model will behave as follows:
Plot3D[stVaRNd /. {\[Kappa] -> 0.006583, \[Nu] ->
0.0002419, \[Gamma] -> -4.978, \[Beta] -> 1.07132}, {\[Alpha],
0.5, 0.9}, {\[Lambda], 0.3, 0.75},
ColorFunction -> "TemperatureMap", PlotLegends -> Automatic]
![enter image description here][16]
It is worth noting that the model is quite sensitive to the volatility stress factor \[Lambda].
##Probabilistic approach to stress VaR - portfolio context
Apart from stressing the volatility parameter, we need to define a stressed correlation matrix. We propose a simple multiplicative-factor approach, where the original matrix is multiplied by a positive number that increases the correlation coefficients in the matrix. This is consistent with market practice - in periods of crisis there is a strong tendency for financial assets in the same class to move together. The function below does exactly this:
stressCM[cm_, f_] :=
Table[If[cm[[i, j]] == 1, 1, Min[0.99, cm[[i, j]]*(1 + f)]], {i,
size}, {j, size}]
Applying a 20% increase to the correlations of the standard VaR produces the following correlation matrix:
stressCM[wincorr, 0.2] // MatrixForm
![enter image description here][17]
The matrix values are in line with the stressed matrix from the 2009 - 2010 historical scenario.
To execute the computation we first generate the individual stressed VaR metrics using the Inverse CDF method:
indstressvar =
Table[10^6 ndinv /. {\[Alpha] -> 0.99, \[Sigma] ->
Mean[InverseCDF[edist[[i]], {0.5, 0.7, 0.9}]]}, {i, size}]
Total[%]
> {86608., 37365.8, 82986.8, 41072.8, 41535.8}
> 289569.
And then obtain the portfolio VaR in the same way as in the standard case but with the stressed correlation matrix:
portstvar = Sqrt[indstressvar.stressCM[wincorr, 0.2].indstressvar]
> 232010.
The parametric stressed VaR number is similar to what we obtained when we applied the historical scenario method. This shows that the alternative probabilistic stressed VaR approach works well and can easily be applied in a practical setting.
#Extension of the stressed VaR concept
##VaR with Student-T distribution
When we opt for the generalised Student-T distribution for stock returns, the standard VaR can be defined as:
StVaR = Refine[
InverseCDF[StudentTDistribution[0, \[Sigma], \[Nu]], \[Alpha]],
1/2 < \[Alpha] < 1] // Simplify
![enter image description here][18]
The extension to the stressed VaR is trivial:
StVaR /. \[Sigma] -> invJD
![enter image description here][19]
The formula is essentially the Student-t stressed VaR expression.
We need to calibrate the distribution to obtain the degrees-of-freedom value for each stock:
edist2 = Table[
EstimatedDistribution[dret[[i, All, 2]],
StudentTDistribution[0, \[Sigma], \[Nu]]], {i, size}]
> {StudentTDistribution[0, 0.0155062, 1.90416],
StudentTDistribution[0, 0.0093817, 2.82662],
StudentTDistribution[0, 0.0154996, 1.92631],
StudentTDistribution[0, 0.00850414, 2.12496],
StudentTDistribution[0, 0.00925811, 2.35383]}
stparam = List @@@ edist2
> {{0, 0.0155062, 1.90416}, {0, 0.0093817, 2.82662}, {0, 0.0154996,
1.92631}, {0, 0.00850414, 2.12496}, {0, 0.00925811, 2.35383}}
The individual stock VaRs are then:
ststressind =
Table[10^6 StVaR /. {\[Alpha] -> 0.99, \[Nu] ->
stparam[[i, 3]], \[Sigma] ->
Mean[InverseCDF[edist[[i]], {0.5, 0.7, 0.9}]]}, {i, size}]
Total[%]
> {98371.2, 33623.8, 96266.1, 44665.5, 41708.4}
>
> 314635.
and the total portfolio VaR equals:
Sqrt[ststressind.stressCM[wincorr, 0.2].ststressind]
> 253701.
The Student-t stressed VaR is higher than in the normal case, which is in line with expectations, especially when the degrees of freedom are < 4.
#Conclusion
Parametric stressed VaR represents an elegant and practical extension to the existing methods for stress testing. The main features of this approach are ease of use and simplicity of application once the calibration dataset is available.
The parametric method can be applied to other probability distributions if one wants to test the parametric stressed VaR under different distributional assumptions. The Student-t approach is explicitly presented to demonstrate this. Extension to other distributions is straightforward, since the stressed volatility via the Inverse CDF can easily be applied in an arbitrary setting.
[1]: /c/portal/getImageAttachment?filename=StressVarImange.jpg&userId=387433
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1.png&userId=95400
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2.png&userId=95400
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3.png&userId=95400
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4.png&userId=95400
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5.png&userId=95400
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6.png&userId=95400
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7.png&userId=95400
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9.png&userId=95400
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10.png&userId=95400
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=11.png&userId=95400
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=12.png&userId=95400
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=13.png&userId=95400
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=14.png&userId=95400
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=15.png&userId=95400
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=16.png&userId=95400
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=17.png&userId=95400
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=18.png&userId=95400
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=19.png&userId=95400

Igor Hlivka
2015-03-13T18:06:46Z

[GIF] Elaborating on Arrival's Alien Language, Part I., II. & III.
http://community.wolfram.com/groups/-/m/t/1034626
I recently watched "Arrival", and thought that some of the dialogue sounded Wolfram-esque. Later, I saw the following blog post:
[Quick, How Might the Alien Spacecraft Work?][1]
Along with many others, I enjoyed the movie. The underlying artistic concept for the alien language reminded me of decade-old memories of a book by Stephen Addiss, [Art of Zen][2]. Asian-influenced symbolism is an interesting place to start building a sci-fi concept, even for western audiences.
I also found Christopher Wolfram's broadcast and the associated files:
[Youtube Broadcast][3]
[Github Files ( with image files ) ][4]
Thanks for sharing! More science fiction, yes!
I think the constraint of circular logograms could be loosened. This leads to interesting connections with the theory of functions, which I think the aliens would probably know about.
The following code takes an alien logogram as input and outputs a deformation according to a do-it-yourself formulation of the pendulum elliptic functions:
![Human Animation][5]
## $m=2$ Inversion Coefficients ##
MultiFactorial[n_, nDim_] := Times[n, If[n - nDim > 1, MultiFactorial[n - nDim, nDim], 1]]
GeneralT[n_, m_] := Table[(-m)^(-j) MultiFactorial[i + m (j - 1) + 1, m]/ MultiFactorial[i + 1, m], {i, 1, n}, {j, 1, i}]
a[n_] := With[{gt = GeneralT[2 n, 2]}, gt[[2 #, Range[#]]] & /@ Range[n] ]
## Pendulum Values : $2(1-\cos(x))$ Expansion Coefficients ##
c[n_ /; OddQ[n]] := c[n] = 0;
c[n_ /; EvenQ[n]] := c[n] = 2 (n!) (-2)^(n/2)/(n + 2)!;
## Partial Bell Polynomials ##
Note: These polynomials are essentially the same as "**BellY**" (hilarious naming convention), but recursion-optimized. See the timing tests below.
B2[0, 0] = 1;
B2[n_ /; n > 0, 0] := 0;
B2[0, k_ /; k > 0] := 0;
B2[n_ /; n > 0, k_ /; k > 0] := B2[n, k] = Total[
Binomial[n - 1, # - 1] c[#] B2[n - #, k - 1] & /@
Range[1, n - k + 1] ];
## Function Construction ##
BasisT[n_] := Table[B2[i, j]/(i!) Q^(i + 2 j), {i, 2, 2 n, 2}, {j, 1, i/2}]
PhaseSpaceExpansion[n_] := Times[Sqrt[2 \[Alpha]], 1 + Dot[MapThread[Dot, {BasisT[n], a[n]}], (2 \[Alpha])^Range[n]]];
AbsoluteTiming[CES50 = PhaseSpaceExpansion[50];] (* faster than 2(s) *)
Fast50 = Compile[{{\[Alpha], _Real}, {Q, _Real}}, Evaluate@CES50];
## Image Processing ##
Note: This method is a hack from ".jpg" to a sort-of vector drawing. I haven't tested the V11.1 vectorization functionality, but it seems like this could be a means to process all jpgs and output a file of vector polygons. Anyone?
LogogramData = Import["Human1.jpg"];
Logogram01 = ImageData[ColorNegate@Binarize[LogogramData, .9]];
ArrayPlot@Logogram01;
Positions1 =
Position[Logogram01[[5 Range[3300/5], 5 Range[3300/5]]], 1];
Graphics[{Disk[#, 1.5] & /@ Positions1, Red,
Disk[{3300/5/2, 3300/5/2}, 10]}];
onePosCentered =
N[With[{cent = {3300/5/2, 3300/5/2} }, # - cent & /@ Positions1]];
radii = Norm /@ onePosCentered;
maxR = Max@radii;
normRadii = radii/maxR;
angles = ArcTan[#[[2]], #[[1]]] & /@ onePosCentered;
Qs = Cos /@ angles;
## Constructing and Printing Image Frames ##
AlienWavefunction[R_, pixel_, normRad_, Qs_, angles_] := Module[{
deformedRadii = MapThread[Fast50, {R normRad, Qs}],
deformedVectors = Map[N[{Cos[#], Sin[#]}] &, angles],
deformedCoords
},
deformedCoords =
MapThread[Times, {deformedRadii, deformedVectors}];
Show[ PolarPlot[ Evaluate[
CES50 /. {Q -> Cos[\[Phi]], \[Alpha] -> #/10} & /@
Range[9]], {\[Phi], 0, 2 Pi}, Axes -> False,
PlotStyle -> Gray],
Graphics[Disk[#, pixel] & /@ deformedCoords], ImageSize -> 500]]
AbsoluteTiming[ OneFrame =
AlienWavefunction[1, (1 + 1)* 1.5/maxR, normRadii, Qs, angles]
](* about 2.5 (s)*)
![Alien Pendulum][6]
## Validation and Timing ##
In this code, we're using the magic algorithm to get up to about order $100$ in the half energy, $50$ in the energy. I did prove $m=1$ is equivalent to other published forms, but haven't found anything in the literature about $m=2$, and think that the proof will take more time, effort, and insight (?). For applications, we just race ahead without worrying too much, but do check against standard, known expansions:
EK50 = Normal@ Series[D[ Expand[CES50^2/2] /. Q^n_ :> (1/2)^n Binomial[n, n/2], \[Alpha]], {\[Alpha], 0, 50}];
SameQ[Normal@ Series[(2/Pi) EllipticK[\[Alpha]], {\[Alpha], 0, 50}], EK50]
Plot[{(2/Pi) EllipticK[\[Alpha]], EK50}, {\[Alpha], .9, 1}, ImageSize -> 500]
Out[]:= True
![Approximation Validity][7]
This plot gives an idea of approximation validity via the time integral over $2\pi$ radians in phase space. Essentially, even the time converges up to, say, $\alpha = 0.92$. Most of the divergence is tied up in the critical point, which is difficult to notice in the phase space drawings above.
Also compare the time of function evaluation:
tDIY = Mean[ AbsoluteTiming[Fast50[.9, RandomReal[{0, 1}]] ][[1]] & /@ Range[10000]];
tMma = Mean[AbsoluteTiming[JacobiSN[.9, RandomReal[{0, 1}]] ][[1]] & /@ Range[10000]];
tMma/tDIY
In the region of sufficient convergence, the Mathematica function **JacobiSN** is almost 20 times slower. The CES radius also requires a function call to **JacobiCN**, so an output-equivalent **AlienWavefunction** algorithm using built-in Mathematica functions would probably take at least 20 times as long to produce. When computing hundreds of images this is a noticeable slowdown, something to avoid!
Also compare time to evaluate the functional basis via the Bell Polynomials:
BasisT2[n_] := Table[BellY[i, j, c /@ Range[2 n]]/(i!) Q^(i + 2 j), {i, 2, 2 n, 2}, {j, 1, i/2}];
SameQ[BasisT2[20], BasisT[20]]
t1 = AbsoluteTiming[BasisT[#];][[1]] & /@ Range[100];
t2 = AbsoluteTiming[BasisT2[#];][[1]] & /@ Range[25];
ListLinePlot[{t1, t2}, ImageSize -> 500]
![Series Inverse][8]
The graph shows quite clearly that careful evaluation via the recursion relations changes the complexity of the inversion algorithm to polynomial time, $O(n^2)$, in one special example where the forward series expansion coefficients have known, numeric values.
## Conclusion ##
We show a proof of concept that alien logograms admit deformations that preserve the cycle topology. Furthermore, we provide an example calculation where the "human" logogram couples to a surface. Deformation corresponds to a scale transformation of the logogram along the surface. Each deformation is associated with an energy.
Invoking the pendulum analogy gives the energy a physical meaning in terms of gravity, but we are not limited to classical examples alone. The idea extends to arbitrary surfaces in two, three or four dimensions, as long as the surfaces have local extrema. Around the extrema there will exist cycle contours, which we can inscribe with the alien logograms. This procedure readily leads to large-form compositions, especially if the surface has many extrema. Beyond Fourier methods, we might also apply spherical harmonics and hyperspherical harmonics to get around the limitation of planarity.
The missing proof... Maybe later. LOL! ~ ~ ~ ~ Brad
And in the Fanfiction Voice:
Physicist : "It should be no surprise that heptapod speech mechanism involves an arbitrary deformation of the spacetime manifold."
Linguist : "Space-traveling aliens, yes, of course they know math and physics, but Buddhist symbology, where'd they learn that?"
[1]: http://blog.stephenwolfram.com/2016/11/quick-how-might-the-alien-spacecraft-work/
[2]: https://books.google.com/books/about/Art_of_Zen.html?id=4jGEQgAACAAJ
[3]: https://www.youtube.com/watch?v=8N6HT8hzUCA&t=4992s
[4]: https://github.com/WolframResearch/Arrival-Movie-Live-Coding
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Deformation.gif&userId=234448
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=AlienPendulum.png&userId=234448
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=EllipticK.png&userId=234448
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=BellPolynomial.png&userId=234448

Brad Klee
2017-03-18T20:23:59Z

Use FindShortestTour to create dog walking route?
http://community.wolfram.com/groups/-/m/t/1036964
Hi:
I was wondering if anyone has a good .nb I could use to find the most efficient route to walk a dog in any city with stops at a pet supply store, a coffee shop, and a park from a starting point (home) that is returned to. Or if you have a .nb with an example from a specific city, that would be useful. Any help is appreciated!

Swede White
2017-03-21T16:51:03Z

[GIF] Drug overdose trends in USA counties 1999 - 2014
http://community.wolfram.com/groups/-/m/t/837574
The data [Drug Poisoning Mortality: United States, 1999–2014][1] are published by the US government. In a few recent blogs ([1][2], [2][3], [3][4]) **static visualizations** of the data were performed. **Here we show how to animate maps of the geographical drug overdose spread in the USA**. Below you can see 4 images, each reflecting ***Age-adjusted death rates for drug poisoning per 100,000 population by county and year***:
1. First static frame 1999
2. Last static frame 2014
3. Animated .GIF of the whole period with 1 frame per year
4. Range of rates versus time, USA average
Quoting NPR news [Obama Asks Congress For More Money To Fight Opioid Drug Abuse][5]:
> Every day in America more than 50 people die from an overdose of prescription pain medication. Some people who start out abusing pain pills later turn to heroin, which claims another 29 lives each day.
----------
**1999: Age-adjusted death rates for drug poisoning per 100,000 population by county and year**
![enter image description here][6]
![enter image description here][7]
----------
**2014: Age-adjusted death rates for drug poisoning per 100,000 population by county and year**
![enter image description here][8]
![enter image description here][9]
----------
**1999 - 2014 Animation: Age-adjusted death rates for drug poisoning per 100,000 population by county and year**
![enter image description here][10]
![enter image description here][11]
----------
**Range of rates versus time: Age-adjusted death rates for drug poisoning per 100,000 for USA average over counties**
![enter image description here][12]
Getting the data
----------------
We can download the data in .CSV format from the [CDC web site][13]. I keep the data file in the same directory as the notebook to shorten file-path strings.
SetDirectory[NotebookDirectory[]]
raw = SemanticImport["ops.csv"]
![enter image description here][14]
Making "interpreted" dataset
----------------------------
In the [Wolfram Language][15] (WL), many kinds of built-in data allow for interpretation of imported data. For example, the USA counties can be interpreted as entities:
![enter image description here][16]
But I did not use `SemanticImport` to interpret automatically on import, because I would like to do this efficiently. The table has 50247 entries
Normal[raw[All, "County"]] // Length
> 50247
while there are only 3141 actual counties listed:
Normal[raw[All, "County"]] // Union // Length
> 3141
So instead of making 50247 calls to `Interpreter` we will make just 3141, and then use the efficient `Dispatch` to distribute the replacement rules over all 50247 entries. It took only about 100 seconds to build the `Dispatch`:
countyRULEs = Dispatch[
Thread[# -> Interpreter["USCounty"][#]] &@
Union[Normal[raw[All, "County"]]]]; // AbsoluteTiming
> {108.124, Null}
And almost no time on interpreting dataset:
data = raw /. countyRULEs; // AbsoluteTiming
data
> {0.441731, Null}
![enter image description here][17]
Bounds of death-rates for future rescaling
------------------------------------------
Note the `StringReplace` trick used before applying `ToExpression`, here and throughout the rest of the post:
MinMax[ToExpression[StringReplace[Normal[
data[All, "Estimated Age-adjusted Death Rate, 11 Categories (in ranges)"]], {"-" -> "+", ">" -> "2*"}]]/2]
> {1, 20}
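To see how the trick works, here is a toy example (the strings are hypothetical category labels of the same shape as in the data): a range like "2-4" becomes the sum "2+4", and an open category like ">20" becomes "2*20"; dividing by 2 then yields the range midpoint, while ">20" maps to the bound value 20 itself.

```mathematica
(* midpoints of range strings via text surgery *)
ToExpression[StringReplace[{"2-4", ">20"}, {"-" -> "+", ">" -> "2*"}]]/2
(* {3, 20} *)
```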
Testing color scheme
--------------------
Color schemes are important to blend properly with the native colors of maps and also to express the data. These are some tests with the [Color Schemes][18] available in the Wolfram Language.
tmp = GeoNearest["City",
Entity["City", {"Atlanta", "Georgia", "UnitedStates"}], {All, Quantity[50, "Kilometers"]}];
Multicolumn[Table[
GeoRegionValuePlot[tmp -> "PopulationDensity", PlotLegends -> False,
ColorFunction -> (ColorData[{clmap, "Reverse"}][#] &), ImageSize -> 400]
, {clmap, {"CherryTones", "SolarColors", "SunsetColors",
"RustTones", "WatermelonColors", "Rainbow", "RoseColors",
"ThermometerColors", "BrownCyanTones"}}], 3]
![enter image description here][19]
Year 1999: a single-year GIS plot
---------
GeoRegionValuePlot[
Thread[Normal[data[Select[#Year == 1999 &], "County"]] ->
ToExpression[StringReplace[Normal[data[Select[#Year == 1999 &]][All,
"Estimated Age-adjusted Death Rate, 11 Categories (in ranges)"]], {"-" -> "+", ">" -> "2*"}]]/2],
GeoRange -> {{24, 50}, {-125, -66}},
GeoProjection -> "Mercator",
ColorFunctionScaling -> False,
ColorFunction -> (ColorData[{"CherryTones", "Reverse"}][
Rescale[#, {1, 20}]] &),
PlotLegends -> False,
ImageSize -> 1000] // Rasterize
Making animation
----------------
frames = ParallelTable[
GeoRegionValuePlot[
Thread[
Normal[data[Select[#Year == year &], "County"]] ->
ToExpression[StringReplace[Normal[data[Select[#Year == year &],
"Estimated Age-adjusted Death Rate, 11 Categories (in ranges)"]], {"-" -> "+", ">" -> "2*"}]]/2],
GeoRange -> {{24, 50}, {-125, -66}},
GeoProjection -> "Mercator",
ColorFunctionScaling -> False,
ColorFunction -> (ColorData[{"CherryTones", "Reverse"}][
Rescale[#, {1, 20}]] &),
PlotLegends -> False,
ImageSize -> 800],
{year, Range[1999, 2014]}];
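The frames list can then be assembled into an animated GIF (the file name and frame duration here are my own choices, not from the original post):

    Export["drug-poisoning-rates.gif", frames,
     "DisplayDurations" -> 1, "AnimationRepetitions" -> Infinity]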
Making legend
-------------
Panel@Grid[Transpose[{#, ColorData[{"CherryTones", "Reverse"}][Rescale[#, {1, 20}]]} & /@Range[1, 20]]]
Growth of death-rate ranges vs. time
------------------------------------
bandGrowth = Transpose[Table[N[Mean[ToExpression[
     StringReplace[Normal[data[Select[#Year == y &]][All,
        "Estimated Age-adjusted Death Rate, 11 Categories (in ranges)"]],
      {"-" -> "~List~", ">" -> "{#,#}&@"}]]]], {y, Range[1999, 2014]}]]
BarChart[{#[[1]], #[[2]] - #[[1]]} & /@ Transpose[bandGrowth],
PlotTheme -> "Marketing", ChartLayout -> "Stacked",
ChartLabels -> {Range[1999, 2014], None}, ImageSize -> 850,
AspectRatio -> 1/3, ChartStyle -> {Yellow, Red}]
Another color scheme sample
------------------
In this color scheme, where low values are dark, a few white spots stand out more clearly. Those are the very few counties where data are missing.
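The post does not show the code for this dark variant; a plausible reconstruction (the choice of scheme without the "Reverse" option is my guess) simply maps low values to dark colors, here for 1999:

    GeoRegionValuePlot[
     Thread[Normal[data[Select[#Year == 1999 &], "County"]] ->
       ToExpression[StringReplace[Normal[data[Select[#Year == 1999 &]][All,
           "Estimated Age-adjusted Death Rate, 11 Categories (in ranges)"]],
         {"-" -> "+", ">" -> "2*"}]]/2],
     GeoRange -> {{24, 50}, {-125, -66}}, GeoProjection -> "Mercator",
     ColorFunctionScaling -> False,
     ColorFunction -> (ColorData["CherryTones"][Rescale[#, {1, 20}]] &),
     PlotLegends -> False, ImageSize -> 800]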
----------
1999
----
![enter image description here][20]
----------
2014
----
![enter image description here][21]
[1]: https://data.cdc.gov/NCHS/NCHS-Drug-Poisoning-Mortality-County-Trends-United/pbkm-d27e?category=NCHS&view_name=NCHS-Drug-Poisoning-Mortality-County-Trends-United
[2]: http://blogs.cdc.gov/nchs-data-visualization/drug-poisoning-mortality/
[3]: https://evergreen.data.socrata.com/stories/s/b5gk-7v6a/
[4]: http://www.nytimes.com/interactive/2016/01/07/us/drug-overdose-deaths-in-the-us.html
[5]: http://www.npr.org/sections/thetwo-way/2016/02/02/465348441/obama-asks-congress-for-more-money-to-fight-opioid-drug-abuse
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=legend3245regfas.png&userId=11733
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10097figure1.png&userId=11733
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=legend3245regfas.png&userId=11733
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6105figure2.png&userId=11733
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=legend3245regfas.png&userId=11733
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ezgif-41464821.gif&userId=11733
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=opoid.png&userId=11733
[13]: https://data.cdc.gov/NCHS/NCHS-Drug-Poisoning-Mortality-County-Trends-United/pbkm-d27e?category=NCHS&view_name=NCHS-Drug-Poisoning-Mortality-County-Trends-United
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sdfregdavscr345y5t.png&userId=11733
[15]: https://www.wolfram.com/language/
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2016-04-11_05-01-09.png&userId=11733
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2016-04-11_05-05-57.png&userId=11733
[18]: http://reference.wolfram.com/language/guide/ColorSchemes.html
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sdf45yhfhgsdy5uejtyhsgdf.png&userId=11733
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sdafsfqegr.png&userId=11733
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=dfghu6r7euyrtea.png&userId=11733Vitaliy Kaurov2016-04-11T10:21:01Z[✓] Manipulate of a 2-D plot?
http://community.wolfram.com/groups/-/m/t/1033865
I am trying to create a plot of a logistic function with parameters AHAT and MDIF. The values of these parameters depend upon underlying parameters A and DIF and on an underlying ability distribution with covariance matrix omega and mean vector mu. I have to go through several steps to determine AHAT and MDIF. I would like to create a plot where I can manipulate sigma, mu1, and mu2 to see how they affect the logistic curve. I can't figure out how to create the plot: simply leaving the variables sigma, mu1, and mu2 symbolic in the expressions does not work. I have attached an MS Word document with the code. Does anyone have suggestions?
A={{1.2,.4},{.8,.5},{.9,1.0},{1.3,.5},{.6,.9}};u={{μ1,μ2}};
DIF={{.5},{.7},{1.0},{-1.},{2.0}};
Ω={{σ,1},{1,σ}}
L=CholeskyDecomposition[Ω]
W=Eigenvectors[Transpose[L].Transpose[A].A.L]
AM=List[A[[1]]]
W1M=List[W[[1]]]
W2M=List[W[[2]]]
AHAT=AM.Transpose[W1M]/Sqrt[2.89+AM.Transpose[W2M].W2M.Transpose[AM]]
MDIF=(DIF[[1]]-AM.Transpose[u])/AM.Transpose[W1M]
Manipulate[Plot[1/(1+Exp[-1.7*AHAT*(θ-MDIF)]),{θ,-3,3},AxesLabel->{θ,p}],{σ,1,3},{μ1,-2,2},{μ2,-2,2}]Terry Ackerman2017-03-17T19:28:15ZExport a 3D graphics in dxf or stl not converted in triangles?
http://community.wolfram.com/groups/-/m/t/1036461
I need to export some 3D geometrical constructions I made in Mathematica to DXF or STL format. But when the exported files are examined in a CAD viewer, I see only figures made of triangles, and sometimes triangles appear in unexpected places. Here is an example.
tube = {CapForm[None], Tube[{{0, 0, 0}, {0, 0, 10}}, 5]};
cube = Cuboid[{10, 10, 0}, {20, 20, 10}];
G3D = Graphics3D[{tube, cube}];
Export["G3D.dxf", G3D];
The tube's circular cross-section is decomposed into a polygon, its surface into rectangles, and the rectangles into triangles. The cube looks better, but I do not want it made of triangles; I want rectangles.Petre Logofatu2017-03-21T11:46:39Z[✓] Avoid problems with Autorun in Manipulates, MMA 11.1?
http://community.wolfram.com/groups/-/m/t/1035529
Help! My own Manipulate graphs that worked perfectly with Autorun no longer work in 11.1. Autorun fails despite the fact that I have ContinuousAction -> True. Is there perhaps a new version of MathPlayer?
I do not know what happened, but it works now !!!Christos Papahristodoulou2017-03-20T11:10:50Z[✓] Locator constrained by polygon and NSolve
http://community.wolfram.com/groups/-/m/t/1035196
Dear Community,
I have a polygon with a constrained locator in it. I would like to draw a horizontal line from the locator to the left boundary of the polygon, which is given by a stepwise function called qgHSZ. I try to achieve this with NSolve, but I get some strange warnings, like
"Part 2 of {0.5,10.} does not exist" Why??
"NSolve was unable to solve the system with inexact coefficients, etc." Not clear either. If I test NSolve below the plot, it works fine.
What do I do wrong? Notebook attached.
Tx for the kind help in advance,
regards, AndrasAndras Gilicz2017-03-19T21:32:01Z[✓] Solve the highlighted area?
http://community.wolfram.com/groups/-/m/t/1033774
How can I solve for the highlighted area?
![enter image description here][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=area.PNG&userId=977975Bayasgalan Bayasgalan2017-03-18T05:33:46ZTurning anaglyph images into stereo pairs
http://community.wolfram.com/groups/-/m/t/1035318
A colleague sent this link to an article about the strangest solar system object I've ever seen, Saturn's moon Pan: https://saturn.jpl.nasa.gov/resources/7616. The article contains an anaglyph stereo image:
![Pan anaglyph][1]
I love stereo images and hate anaglyph images. I prefer free-viewing side-by-side stereo pairs. The images are cleaner, and I don't have to hunt for the stereo glasses that I can never find when I need them. It occurred to me that all the information I need to turn the anaglyph image into a stereo pair is in the image. And the code turns out to be practically a one-liner.
A little poking around with the Get Pixel Color tool (in the right-click menu) revealed that apparently all of the information for the red image is in the red channel and all of the information for the cyan image is in the blue and green channels (surprise!). So producing a left-right pair is just a matter of picking out the red and blue (or green) channels, which is just what ColorSeparate does. For cross-eyed viewing, you want the blue channel on the left and red on the right, so I had to use Reverse to change the channel order. This is the complete code:
UnAnaglyph[image_] :=
Row[Reverse[Image[#, ImageSize -> 250] & /@ Most[ColorSeparate[image, "RGB"]]], " "]
It works beautifully:
UnAnaglyph[ImageTake[image, All, 450]]
![enter image description here][2]
UnAnaglyph[ImageTake[image, All, -450]]
![enter image description here][3]
I had fun applying UnAnaglyph to random anaglyph images I found in a Google image search.
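As a round-trip sanity check (my own addition, not in the original post), the two channel images can be recombined into an approximate anaglyph with `ColorCombine`, reusing the cyan-eye channel for both green and blue since the original blue channel was discarded:

    (* rebuild an anaglyph from the red and cyan-eye channels;
       the result should closely resemble the input image *)
    ReAnaglyph[image_] := With[{channels = Most[ColorSeparate[image, "RGB"]]},
      ColorCombine[{First[channels], Last[channels], Last[channels]}, "RGB"]]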
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5849PanAnaglyph.jpg&userId=32882
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=LeftStereoPair.png&userId=32882
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=RightStereoPair.png&userId=32882Christopher Carlson2017-03-19T23:53:42ZVisualize links of Facebook pages, as leaves of a tree?
http://community.wolfram.com/groups/-/m/t/1033964
How would someone take a list of Facebook pages and represent them as leaves of a tree? I would like it to be interactive in Mathematica / Wolfram Language (with the leaves as links), and also printable so the leaves can be cut out as simple bulletin-board cutouts for, say, a classroom exercise. Most importantly, though not necessarily at first, it should serve as the foundation of a more advanced program to be written later that will eventually become an interactive digital sign, which I can explain once the simple pages-as-leaves problem is solved.Brian Woytovich2017-03-18T13:19:50Z[✓] Create a constrained locator inside a polygon?
http://community.wolfram.com/groups/-/m/t/1034164
Dear Community,
I have a closed polygon, and I would like to place a locator into it, so it remains constrained to the interior of the polygon, i.e. it never leaves the polygon when I drag it. How could I do it? My polygon is attached.
Tx for the kind help in advance,
AndrasAndras Gilicz2017-03-18T17:07:06Z[✓] Use RegionMember properly?
http://community.wolfram.com/groups/-/m/t/1034606
Dear Community,
I tried to apply the RegionMember function to my previous post (constrained locator). This function should give True if a point is inside a region, and False otherwise. Applying it to my polygon gives True, although the investigated point is clearly outside the region :-/ Does anybody have an explanation?
Tx in advance,
AndrasAndras Gilicz2017-03-18T19:47:05Z