Community RSS Feed
http://community.wolfram.com
RSS feed for Wolfram Community showing discussions tagged "Graphics and Visualization", sorted by activity.

Solver for unsteady flow with the use of Mathematica FEM
http://community.wolfram.com/groups/-/m/t/1433064
![fig7][331]
I started the discussion [here][1], but I also want to repeat it on this forum.
There are many commercial and open-source codes for solving unsteady flow problems.
We are interested in the possibility of solving these problems using Mathematica FEM. Solvers for stationary incompressible isothermal flows were proposed previously:
Solving 2D Incompressible Flows using Finite Elements:
http://community.wolfram.com/groups/-/m/t/610335
FEM Solver for Navier-Stokes equations in 2D:
http://community.wolfram.com/groups/-/m/t/611304
Nonlinear FEM Solver for Navier-Stokes equations in 2D:
https://mathematica.stackexchange.com/questions/94914/nonlinear-fem-solver-for-navier-stokes-equations-in-2d/96579#96579
We give several examples of the successful application of the finite element method to unsteady problems, including nonisothermal and compressible flows. We begin with two standard tests for this class of problems proposed by
M. Schäfer and S. Turek, Benchmark computations of laminar ﬂow around a cylinder (With support by F. Durst, E. Krause and R. Rannacher). In E. Hirschel, editor, Flow Simulation with High-Performance Computers II. DFG priority research program results 1993-1995, number 52 in Notes Numer. Fluid Mech., pp.547–566. Vieweg, Weisbaden, 1996. https://www.uio.no/studier/emner/matnat/math/MEK4300/v14/undervisningsmateriale/schaeferturek1996.pdf
![fig8][332]
Let us consider the flow in a flat channel around a cylinder at Reynolds number 100, when self-oscillations occur, leading to the detachment of vortices in the aft part of the cylinder. In this problem it is necessary to calculate the drag coefficient, lift coefficient and pressure difference between the frontal and aft points of the cylinder as functions of time, as well as the maximum drag coefficient, maximum lift coefficient, Strouhal number and pressure difference $\Delta P(t)$ at $t = t_0 + 1/(2f)$. The frequency $f$ is determined from the period of oscillation of the lift coefficient, $f = f(c_L)$. The data for this test, the code and the results are shown below.
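For reference, the benchmark quantities (following Schäfer and Turek) are defined via the drag and lift forces $F_D$, $F_L$ on the cylinder, with cylinder diameter $D = 0.1$ and mean inflow velocity $\bar{U} = 2U_m/3 = 1$, so that $Re = \bar{U}D/\nu = 100$:

$$c_D = \frac{2F_D}{\rho\,\bar{U}^2 D},\qquad c_L = \frac{2F_L}{\rho\,\bar{U}^2 D},\qquad St = \frac{D\,f}{\bar{U}},\qquad \Delta P(t) = P(0.15,\,0.2) - P(0.25,\,0.2)$$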
H = .41; L = 2.2; {x0, y0, r0} = {1/5, 1/5, 1/20};
Ω = RegionDifference[Rectangle[{0, 0}, {L, H}], Disk[{x0, y0}, r0]];
RegionPlot[Ω, AspectRatio -> Automatic]
K = 2000; Um = 1.5; ν = 10^-3; t0 = .004;
U0[y_, t_] := 4*Um*y/H*(1 - y/H)
UX[0][x_, y_] := 0;
VY[0][x_, y_] := 0;
P0[0][x_, y_] := 0;
Do[
 {UX[i], VY[i], P0[i]} =
  NDSolveValue[{{Inactive[Div][{{-μ, 0}, {0, -μ}}.Inactive[Grad][u[x, y], {x, y}], {x, y}] +
       D[p[x, y], x] + (u[x, y] - UX[i - 1][x, y])/t0 +
       UX[i - 1][x, y]*D[u[x, y], x] +
       VY[i - 1][x, y]*D[u[x, y], y],
      Inactive[Div][{{-μ, 0}, {0, -μ}}.Inactive[Grad][v[x, y], {x, y}], {x, y}] +
       D[p[x, y], y] + (v[x, y] - VY[i - 1][x, y])/t0 +
       UX[i - 1][x, y]*D[v[x, y], x] +
       VY[i - 1][x, y]*D[v[x, y], y],
      D[u[x, y], x] + D[v[x, y], y]} == {0, 0, 0} /. μ -> ν,
    {DirichletCondition[{u[x, y] == U0[y, i*t0], v[x, y] == 0},
      x == 0.],
     DirichletCondition[{u[x, y] == 0., v[x, y] == 0.},
      0 <= x <= L && y == 0 || y == H],
     DirichletCondition[{u[x, y] == 0,
       v[x, y] == 0}, (x - x0)^2 + (y - y0)^2 == r0^2],
     DirichletCondition[p[x, y] == P0[i - 1][x, y], x == L]}}, {u, v,
    p}, {x, y} ∈ Ω,
   Method -> {"FiniteElement",
     "InterpolationOrder" -> {u -> 2, v -> 2, p -> 1},
     "MeshOptions" -> {"MaxCellMeasure" -> 0.001}}], {i, 1, K}];
{ContourPlot[UX[K/2][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, Contours -> 20,
PlotPoints -> 25, PlotLabel -> u, MaxRecursion -> 2],
ContourPlot[VY[K/2][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, Contours -> 20,
PlotPoints -> 25, PlotLabel -> v, MaxRecursion -> 2,
PlotRange -> All]} // Quiet
{DensityPlot[UX[K][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, PlotPoints -> 25,
PlotLabel -> u, MaxRecursion -> 2],
DensityPlot[VY[K][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, PlotPoints -> 25,
PlotLabel -> v, MaxRecursion -> 2, PlotRange -> All]} // Quiet
dPl = Interpolation[
Table[{i*t0, (P0[i][.15, .2] - P0[i][.25, .2])}, {i, 0, K, 1}]];
cD = Table[{t0*i,
     NIntegrate[(-ν*(-Sin[θ] (Sin[θ] Derivative[0, 1][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
              Cos[θ] Derivative[1, 0][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]]) +
            Cos[θ] (Sin[θ] Derivative[0, 1][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
              Cos[θ] Derivative[1, 0][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]]))*Sin[θ] -
         P0[i][x0 + r Cos[θ], y0 + r Sin[θ]]*
          Cos[θ]) /. {r -> r0}, {θ, 0, 2*Pi}]}, {i,
     1000, 2000}]; // Quiet
cL = Table[{t0*i,
     -NIntegrate[(-ν*(-Sin[θ] (Sin[θ] Derivative[0, 1][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
               Cos[θ] Derivative[1, 0][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]]) +
             Cos[θ] (Sin[θ] Derivative[0, 1][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
               Cos[θ] Derivative[1, 0][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]]))*Cos[θ] +
          P0[i][x0 + r Cos[θ], y0 + r Sin[θ]]*
           Sin[θ]) /. {r -> r0}, {θ, 0, 2*Pi}]}, {i,
     1000, 2000}]; // Quiet
{ListLinePlot[cD,
AxesLabel -> {"t", "\!\(\*SubscriptBox[\(c\), \(D\)]\)"}],
ListLinePlot[cL,
AxesLabel -> {"t", "\!\(\*SubscriptBox[\(c\), \(L\)]\)"}],
Plot[dPl[x], {x, 0, 8}, AxesLabel -> {"t", "ΔP"}]}
f002 = FindFit[cL, a*.5 + b*.8*Sin[k*16*t + c*1.], {a, b, k, c}, t]
Plot[Evaluate[a*.5 + b*.8*Sin[k*16*t + c*1.] /. f002], {t, 4, 8},
Epilog -> Map[Point, cL]]
k0=k/.f002;
Struhalnumber = .1*16*k0/2/Pi
cLm = MaximalBy[cL, Last]
sol = {Max[cD[[All, 2]]], Max[cL[[All, 2]]], Struhalnumber,
dPl[cLm[[1, 1]] + Pi/(16*k0)]}
Fig. 1 shows the components of the flow velocity and the required coefficients. Our solution of the problem, followed by the bounds required by the test:
{3.17805, 1.03297, 0.266606, 2.60427}
lowerbound= { 3.2200, 0.9900, 0.2950, 2.4600};
upperbound = {3.2400, 1.0100, 0.3050, 2.5000};
![Fig1][2]
Note that our results differ from the allowable values by several percent, but if you look at all the results in Table 4 of the cited article, the agreement is quite acceptable. The worst prediction is for the Strouhal number. Note that we use the explicit Euler method, which underestimates the Strouhal number, as follows from the data in Table 4.
The next test differs from the previous one in that the inlet velocity varies in time as `U0[y_, t_] := 4*Um*y/H*(1 - y/H)*Sin[Pi*t/8]`. It is necessary to determine the time dependence of the drag and lift coefficients over a half-period of oscillation, as well as the pressure drop at the final moment of time. Fig. 2 shows the components of the flow velocity and the required coefficients. Our solution of the problem, followed by the bounds required by the test:
sol = {3.0438934441256595`,
0.5073345082785012`, -0.11152933279750943`};
lowerbound = {2.9300, 0.4700, -0.1150};
upperbound = {2.9700, 0.4900, -0.1050};
![Fig2][3]
For this test, the agreement with the data in Table 5 is good. Consequently, the two tests are almost completely passed.
I wrote and debugged this code using Mathematica 11.0.1. But when I ran it under Mathematica 11.3, I got strange pictures; for example, the disk is rendered as a hexagon and the size of the region is changed.
![Fig3][4]
In addition, the numerical solution of the problem has changed; for example, for test 2D2:
{3.17805, 1.03297, 0.266606, 2.60427} (v11.0.1)
{3.15711, 1.11377, 0.266043, 2.54356} (v11.3)
The attached file contains the working code for test 2D3 describing the flow around the cylinder in a flat channel with a change in the flow velocity.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D2.png&userId=1218692
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D2.png&userId=1218692
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D3.png&userId=1218692
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Math11.3.png&userId=1218692
[331]: http://community.wolfram.com//c/portal/getImageAttachment?filename=CylinderRe100test2D2.gif&userId=1218692
[332]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2D2test.png&userId=1218692

Alexander Trounev, 2018-08-31T11:44:04Z

Music Generation with GAN MidiNet
http://community.wolfram.com/groups/-/m/t/1435251
I generated music with reference to [MidiNet][1]. Most neural network models for music generation use recurrent neural networks; MidiNet, however, uses convolutional neural networks.
There are three models in MidiNet. Model 1 is a melody generator with no chord condition; Models 2 and 3 are melody generators with a chord condition. I try Model 1, because it is the most interesting of the three models compared in the paper.
**Get MIDI data**
-----------------------
My favorite Jazz bassist is [Jaco Pastorius][2]. I get MIDI data from [here][3]. For example, I get MIDI data of "The Chicken".
url = "http://www.midiworld.com/download/1366";
notes = Select[Import[url, {"SoundNotes"}], Length[#] > 0 &];
The notes contain several instrument styles. I take the bass part from them.
notes[[All, 3, 3]]
Sound[notes[[1]]]
![enter image description here][4]
![enter image description here][5]
I convert the MIDI data to image data. I fix the smallest note unit to be the sixteenth note: I divide the MIDI data into sixteenth-note periods and select the sound found at the beginning of each period. The pitch of the SoundNote function runs from 1 to 128, so I convert each bar to a grayscale image (h = 128, w = 16).
First, I create the rule to change each note pitch(C-1,...,G9) to number(1,...,128), C4 -> 61.
codebase = {"C", "C#", "D", "D#", "E" , "F", "F#", "G", "G#" , "A",
"A#", "B"};
num = ToString /@ Range[-1, 9];
pitch2numberrule =
Take[Thread[
StringJoin /@ Reverse /@ Tuples[{num, codebase}] ->
Range[0, 131] + 1], 128]
![enter image description here][6]
Next, I change each bar to image (h = 128*w = 16).
tempo = 108;
note16 = 60/(4*tempo); (* length in seconds of the sixteenth note *)
select16[snlist_, t_] :=
Select[snlist, (t <= #[[2, 1]] <= t + note16) || (t <= #[[2, 2]] <=
t + note16) || (#[[2, 1]] < t && #[[2, 2]] > t + note16) &, 1]
selectbar[snlist_, str_] :=
select16[snlist, #] & /@ Most@Range[str, str + note16*16, note16]
selectpitch[x_] := If[x === {}, 0, x[[1, 1]]] /. pitch2numberrule
pixelbar[snlist_, t_] := Module[{bar, x, y},
bar = selectbar[snlist, t];
x = selectpitch /@ bar;
y = Range[16];
Transpose[{x, y}]
]
imagebar[snlist_, t_] := Module[{image},
image = ConstantArray[0, {128, 16}];
Quiet[(image[[129 - #[[1]], #[[2]]]] = 1) & /@ pixelbar[snlist, t]];
Image[image]
]
soundnote2image[soundnotelist_] := Module[{min, max, data2},
{min, max} = MinMax[#[[2]] & /@ soundnotelist // Flatten];
data2 = {#[[1]], #[[2]] - min} & /@ soundnotelist;
Table[imagebar[data2, t], {t, 0, max - min, note16*16}]
]
(images1 = soundnote2image[notes[[1]]])[[;; 16]]
![enter image description here][7]
**Create the training data**
-----------------------
First, I truncate images1 to an integer multiple of the batch size. With a batch size of 16, its length is 128 bars, about 284 seconds.
batchsize = 16;
getbatchsizeimages[i_] := i[[;; batchsize*Floor[Length[i]/batchsize]]]
imagesall = Flatten[Join[getbatchsizeimages /@ {images1}]];
{Length[imagesall], Length[imagesall]*note16*16 // N}
![enter image description here][8]
MidiNet proposes a novel conditional mechanism that uses music from the previous bar to condition the generation of the present bar, taking into account temporal dependencies across bars. So each training example of MidiNet (Model 1: melody generator, no chord condition) consists of three parts: "noise", "prev" and "Input". "noise" is a 100-dimensional random vector. "prev" is the image data (1*128*16) of the previous bar. "Input" is the image data (1*128*16) of the present bar. The first "prev" of each batch is all zeros.
I generate training data with a batch size of 16 as follows.
randomDim = 100;
n = Floor[Length@imagesall/batchsize];
noise = Table[RandomReal[NormalDistribution[0, 1], {randomDim}],
batchsize*n];
input = ArrayReshape[ImageData[#], {1, 128, 16}] & /@
imagesall[[;; batchsize*n]];
prev = Flatten[
Join[Table[{{ConstantArray[0, {1, 128, 16}]},
input[[batchsize*(i - 1) + 1 ;; batchsize*i - 1]]}, {i, 1, n}]],
2];
trainingData =
AssociationThread[{"noise", "prev",
"Input"} -> {#[[1]], #[[2]], #[[3]]}] & /@
Transpose[{noise, prev, input}];
**Create GAN**
-----------------------
I create generator with reference to MidiNet.
generator = NetGraph[{
1024, BatchNormalizationLayer[], Ramp, 256,
BatchNormalizationLayer[], Ramp, ReshapeLayer[{128, 1, 2}],
DeconvolutionLayer[64, {1, 2}, "Stride" -> {2, 2}],
BatchNormalizationLayer[], Ramp,
DeconvolutionLayer[64, {1, 2}, "Stride" -> {2, 2}],
BatchNormalizationLayer[], Ramp,
DeconvolutionLayer[64, {1, 2}, "Stride" -> {2, 2}],
BatchNormalizationLayer[], Ramp,
DeconvolutionLayer[1, {128, 1}, "Stride" -> {2, 1}],
LogisticSigmoid,
ConvolutionLayer[16, {128, 1}, "Stride" -> {2, 1}],
BatchNormalizationLayer[], Ramp,
ConvolutionLayer[16, {1, 2}, "Stride" -> {1, 2}],
BatchNormalizationLayer[], Ramp,
ConvolutionLayer[16, {1, 2}, "Stride" -> {1, 2}],
BatchNormalizationLayer[], Ramp,
ConvolutionLayer[16, {1, 2}, "Stride" -> {1, 2}],
BatchNormalizationLayer[], Ramp, CatenateLayer[],
CatenateLayer[], CatenateLayer[],
CatenateLayer[]}, {NetPort["noise"] ->
1, NetPort["prev"] -> 19,
19 -> 20 ->
21 -> 22 -> 23 -> 24 -> 25 -> 26 -> 27 -> 28 -> 29 -> 30,
1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7, {7, 30} -> 31,
31 -> 8 -> 9 -> 10, {10, 27} -> 32,
32 -> 11 -> 12 -> 13, {13, 24} -> 33,
33 -> 14 -> 15 -> 16, {16, 21} -> 34, 34 -> 17 -> 18},
"noise" -> {100}, "prev" -> {1, 128, 16}
]
![enter image description here][9]
I create a discriminator without BatchNormalizationLayer and LogisticSigmoid, because I use a [Wasserstein GAN][10], which is easier to stabilize during training.
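For reference, the Wasserstein GAN objective trains the critic $D_w$ to maximize the difference of its average scores on real and generated samples, keeping $D_w$ approximately Lipschitz by clipping its weights to a small interval $[-c, c]$:

$$\min_{G}\;\max_{w \in [-c,\,c]}\;\mathbb{E}_{x\sim p_{\text{data}}}\!\left[D_w(x)\right]-\mathbb{E}_{z\sim p(z)}\!\left[D_w(G(z))\right]$$

This is what the "WeightClipping" setting in the NetTrain call below enforces.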
discriminator = NetGraph[{
ConvolutionLayer[64, {89, 4}, "Stride" -> {1, 1}], Ramp,
ConvolutionLayer[64, {1, 4}, "Stride" -> {1, 1}], Ramp,
ConvolutionLayer[16, {1, 4}, "Stride" -> {1, 1}], Ramp,
1},
{1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7}, "Input" -> {1, 128, 16}
]
![enter image description here][11]
I create Wasserstein GAN network.
ganNet = NetInitialize[NetGraph[<|"gen" -> generator,
"discrimop" -> NetMapOperator[discriminator],
"cat" -> CatenateLayer[],
"reshape" -> ReshapeLayer[{2, 1, 128, 16}],
"flat" -> ReshapeLayer[{2}],
"scale" -> ConstantTimesLayer["Scaling" -> {-1, 1}],
"total" -> SummationLayer[]|>,
{{NetPort["noise"], NetPort["prev"]} -> "gen" -> "cat",
NetPort["Input"] -> "cat",
"cat" ->
"reshape" -> "discrimop" -> "flat" -> "scale" -> "total"},
"Input" -> {1, 128, 16}]]
![enter image description here][12]
**NetTrain**
-----------------------
I train using the training data created above. I use RMSProp as the NetTrain method, following the Wasserstein GAN paper. It takes about one hour using a GPU.
net = NetTrain[ganNet, trainingData, All, LossFunction -> "Output",
Method -> {"RMSProp", "LearningRate" -> 0.00005,
"WeightClipping" -> {"discrimop" -> 0.01}},
LearningRateMultipliers -> {"scale" -> 0, "gen" -> -0.2},
TargetDevice -> "GPU", BatchSize -> batchsize,
MaxTrainingRounds -> 50000]
![enter image description here][13]
**Create MIDI**
-----------------------
I create image data of 16 bars by using generator of trained network.
bars = {};
newbar = Image[ConstantArray[0, {1, 128, 16}]];
For[i = 1, i < 17, i++,
noise1 = RandomReal[NormalDistribution[0, 1], {randomDim}];
prev1 = {ImageData[newbar]};
newbar =
NetDecoder[{"Image", "Grayscale"}][
NetExtract[net["TrainedNet"], "gen"][<|"noise" -> noise1,
"prev" -> prev1|>]];
AppendTo[bars, newbar]
]
bars
![enter image description here][14]
Because images generated by a Wasserstein GAN tend to be blurred, I keep only the pixel with the maximum value in each column of the image. I clean up the images as follows.
clearbar[bar_, threshold_] := Module[{i, barx, col, max},
barx = ConstantArray[0, {128, 16}];
col = Transpose[bar // ImageData];
For[i = 1, i < 17, i++,
max = Max[col[[i]]];
If[max >= threshold,
barx[[First@Position[col[[i]], max, 1], i]] = 1]
];
Image[barx]
]
bars2 = clearbar[#, 0.1] & /@ bars
![enter image description here][15]
I convert the images back to SoundNote. I merge consecutive notes with the same pitch.
number2pitchrule = Reverse /@ pitch2numberrule;
images2soundnote[img_, start_] :=
SoundNote[(129 - #[[2]]) /.
number2pitchrule, {(#[[1]] - 1)*note16, #[[1]]*note16} + start,
"ElectricBass", SoundVolume -> 1] & /@
Sort@(Reverse /@ Position[(img // ImageData) /. (1 -> 1.), 1.])
snjoinrule = {x___, SoundNote[s_, {t_, u_}, v_, w_],
SoundNote[s_, {u_, z_}, v_, w_], y___} -> {x,
SoundNote[s, {t, z}, v, w], y};
I generate music and attach its mp3 file.
Sound[Flatten@
MapIndexed[(images2soundnote[#1, note16*16*(First[#2] - 1)] //.
snjoinrule) &, bars2]]
![enter image description here][16]
**Conclusion**
-----------------------
I tried music generation with a GAN. I am not satisfied with the result. I think the causes are various: poor training data, insufficient training time, etc.
Jaco is gone. I hope neural networks will someday be able to express Jaco's bass.
[1]: https://arxiv.org/abs/1703.10847
[2]: https://en.wikipedia.org/wiki/Jaco_Pastorius
[3]: http://www.bock-for-pastorius.de/midi.htm
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=317901.jpg&userId=1013863
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=567502.jpg&userId=1013863
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=476803.jpg&userId=1013863
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=744004.jpg&userId=1013863
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=586405.jpg&userId=1013863
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=707106.jpg&userId=1013863
[10]: https://arxiv.org/abs/1701.07875
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=435507.jpg&userId=1013863
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=170508.jpg&userId=1013863
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=324809.jpg&userId=1013863
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=965210.jpg&userId=1013863
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=706311.jpg&userId=1013863
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=177112.jpg&userId=1013863

Kotaro Okazaki, 2018-09-02T02:30:04Z

Reverse the axes of a plot?
http://community.wolfram.com/groups/-/m/t/1459957
Hello and thanks for your help.
I am trying to invert the axes produced by the Plot[] command: flip the Y axis (vertical) while keeping the x axis (horizontal) as it is. I tried to find an answer in the program itself, but I did not find one.
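One possible approach (a sketch, not tested against this particular setup) is the ScalingFunctions option, which in recent Mathematica versions accepts "Reverse" to flip one axis while leaving the other untouched:

```mathematica
(* "Reverse" in the second slot flips the vertical axis only;
   the x axis keeps its normal orientation *)
Plot[Sin[x], {x, 0, 2 Pi}, ScalingFunctions -> {None, "Reverse"}]
```

In versions where Plot does not support ScalingFunctions, plotting -f[x] and relabeling the ticks achieves the same visual effect.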
Thank you very much for any help you can give me.

Miguel Saldias, 2018-09-15T19:20:21Z

Scatter plotting satellites?
http://community.wolfram.com/groups/-/m/t/1454309
I'm dealing with a table of GEO satellites, and I have generated a table with AZ and EL values relative to my location. Too bad they can't be found with a SatelliteData[] query...
This is a snippet of the data.
dataSatTable = {
{"SAT NAME", "EL", "AZ"},
{"NSS-806", 3.26, 99.47},
{"Galaxy-17-19", 5.69, 258.52},
{"Eutelsat-113", 10.4, 254.51}
}
I need to plot each satellite as a dot with a text tag on a scatter plot; this will form an arc. I then need to add another table of data containing obstructions to the plot.
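A minimal sketch of one way to do this, assuming the dataSatTable shown above (Callout attaches the name tag to each point):

```mathematica
sats = Rest[dataSatTable];  (* drop the header row *)
ListPlot[
 Callout[{#[[3]], #[[2]]}, #[[1]]] & /@ sats,  (* {AZ, EL} points, labeled *)
 AxesLabel -> {"AZ (deg)", "EL (deg)"},
 PlotStyle -> PointSize[Medium], PlotRange -> All]
```

An obstruction table could be supplied as a second dataset in the same ListPlot call.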
Any pointers?

Mathison Ott, 2018-09-13T07:18:01Z

psfrag for Mathematica 10
http://community.wolfram.com/groups/-/m/t/474155
Hello,
as far as I understand, psfrag no longer works with Mathematica 10. Does anyone have a solution for this problem, or know whether there will be one in the near future? Or is there an alternative?
What I want to do is export EPS files from Mathematica and include them in LaTeX with nice labels.

a b, 2015-04-05T14:34:56Z

Create a "Great Circle" on a globe through two given points?
http://community.wolfram.com/groups/-/m/t/1460856
A colleague of mine is on holiday from Amsterdam to Miami. Just for fun, I would like to plot the great circle through the center of the earth going through Amsterdam and Miami.
I tried to do that with GeoGraphics/GeoPath, but in both cases I got the error message "GeoGraphics/GeoPath is not a graphics primitive". I tried several things but I cannot get it right. How can I create it?
The code is from the Wolfram help with some adaption. See att.
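For what it's worth, the usual cause of that message is evaluating GeoPath on its own: GeoPath is a graphics primitive that must be wrapped in GeoGraphics. A sketch (the exact Entity specifications here are assumptions):

```mathematica
(* GeoPath with "GreatCircle" draws the great-circle arc between the
   two cities; GeoGraphics renders it on the map *)
GeoGraphics[{Red, Thick,
  GeoPath[{Entity["City", {"Amsterdam", "NorthHolland", "Netherlands"}],
    Entity["City", {"Miami", "Florida", "UnitedStates"}]}, "GreatCircle"]},
 GeoRange -> "World"]
```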
Thank you

Chiel Geeraert, 2018-09-16T17:35:52Z

Fitting An Ellipse Inside a Non-Convex Curve
http://community.wolfram.com/groups/-/m/t/1453823
The goal is to find the largest ellipse (with a given ratio of axes), centered at a given point and with a given orientation, that fits inside a specified non-convex oval.
equation of oval
In[1]:= oval[{x_, y_}] = ((x - 1)^2 + y^2) ((x + 1)^2 + y^2) - (21/20)^4;
derive equation of an ellipse with axes "a" and "b", centered at { xc, yc }
with major axis making angle \[Theta] with x-axis.
In[2]:= Thread[{xel, yel} =
DiagonalMatrix[{a, b}^-1].RotationMatrix[-\[Theta]].{x - xc, y - yc}];
In[3]:= eleq[{{a_, b_}, xc_, yc_, \[Theta]_}, {x_, y_}] = xel^2 + yel^2 - 1;
symbolically find largest ellipse with axes "a" and "a/2",
centered at { 1, (1/5) }
oriented at \[Pi]/3
RegionWithin takes about 1 minute.
In[4]:= AbsoluteTiming[
RegionWithin[ImplicitRegion[oval[{x, y}] <= 0, {x, y}],
ImplicitRegion[eleq[{{a, a/2}, 1, 1/5, \[Pi]/3}, {x, y}] <= 0, {x, y}],
GenerateConditions -> True] // N]
Out[4]= {63.4787, 0. < a <= 0.315686 || -0.315686 <= a < 0.}
Calculating it numerically with a Lagrange multiplier
and NSolve takes about 1 second.
The desired answer is the one with the smallest value of a.
In[5]:= AbsoluteTiming[
sln = NSolve[{a >= 0, oval[{x, y}] == 0,
eleq[{{a, a/2}, 1, 1/5, \[Pi]/3}, {x, y}] == 0,
Sequence @@
D[oval[{x,
y}] == \[Lambda] eleq[{{a, a/2}, 1, 1/5, \[Pi]/3}, {x, y}], {{x,
y}}]}, {a, x, y, \[Lambda]}, Reals]]
Out[5]= {0.808361, {{a -> 0.315686, \[Lambda] -> 0.869176, x -> 1.15549,
y -> 0.474695}, {a -> 4.34436, \[Lambda] -> 7.03937, x -> -1.41308,
y -> 0.191823}, {a -> 0.817698, \[Lambda] -> 1.46331, x -> 0.654269,
y -> -0.531984}, {a -> 1.14366, \[Lambda] -> 1.77874, x -> 1.34316,
y -> -0.315728}}}
eliminating the Lagrange multiplier before solving speeds up the calculation.
In[6]:= AbsoluteTiming[
sln = NSolve[{a >= 0, oval[{x, y}] == 0,
eleq[{{a, a/2}, 1, 1/5, \[Pi]/3}, {x, y}] == 0,
Eliminate[
D[oval[{x,
y}] == \[Lambda] eleq[{{a, a/2}, 1, 1/5, \[Pi]/3}, {x, y}], {{x,
y}}], \[Lambda]]}, {a, x, y}, Reals]]
Out[6]= {0.0641214, {{x -> 1.15549, y -> 0.474695, a -> 0.315686}, {x -> -1.41308,
y -> 0.191823, a -> 4.34436}, {x -> 1.34316, y -> -0.315728,
a -> 1.14366}, {x -> 0.654269, y -> -0.531984, a -> 0.817698}}}
Plotting all the results show that the curves are tangent at the intersection point.
![enter image description here][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ellipse_in_oval.jpg&userId=29126

Frank Kampas, 2018-09-13T00:45:28Z

[GIF] Thoughtform
http://community.wolfram.com/groups/-/m/t/1453464
![enter image description here][1]
Same principle as a previous [post][2], but added some visual aids to make it more intuitive. Drastically resized due to filesize limits, download full-size GIF [here][3] .
Also had some fun with the colors and had an art print made.
![enter image description here][4]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=framesforward30.053.gif&userId=167076
[2]: http://community.wolfram.com/groups/-/m/t/947494
[3]: https://www.dropbox.com/s/rt7cwewf81a0lfy/Thoughtform%200.053.gif?dl=0
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=IMG_1335copy.JPG&userId=167076

Bryan Lettner, 2018-09-12T23:36:22Z

[Wolfram Media] A Numerical Approach to Real Algebraic Curves
http://community.wolfram.com/groups/-/m/t/1452588
[![enter image description here][1]][2]
Wolfram Media has released a new book, [*A Numerical Approach to Real Algebraic Curves with the Wolfram Language*][2], by Barry H. Dayton. [Dayton][3] is a [mathematician][4] and long-time Mathematica user.
Bridging the gap between the sophisticated topic of real algebraic curve theory and on-the-spot computation and visualization of real algebraic curves, Dayton uses the Wolfram Language to explore and analyze real curves that often do not have rational points on them. In classical texts, analysis of these types of real curves was only really possible in the theoretical sense, but the Wolfram Language's ability to work with machine numbers, both in calculations and in detailed plots, enables accurate analysis of extremely complicated curves. This book is intended for those with some understanding of calculus and partial derivatives and with basic knowledge of the Wolfram Language.
One thing that makes this [Wolfram Media][5] publication unique is that not only is the book available for purchase on Amazon as a Kindle file, the entire text of the book with all of the code used to make the plots is available for free as downloadable Wolfram Notebooks. This book's unique style includes a large function appendix that evaluates independently of the chapter interface and activates the functions used in the text itself.
Read this month's [article of *The Mathematica Journal* for a summary][6]. Below are a few beautiful images from the article.
![enter image description here][7]
![enter image description here][8]
We're excited for this release as it is the first book by a non-Wolfram author that we've published, and we have several additional titles under consideration for 2019. Please check back on the Publishing and Authoring Group discussion over the next few months for updates!
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-09-12at4.30.29PM.png&userId=20103
[2]: http://www.wolfram-media.com/products/dayton-algebraic-curves.html
[3]: http://barryhdayton.space
[4]: https://scholar.google.com/citations?user=hHz85rIAAAAJ&hl=en
[5]: http://www.wolfram-media.com
[6]: http://www.mathematica-journal.com/2018/08/a-wolfram-language-approach-to-real-numerical-algebraic-plane-curves
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Dayton_PlacedGraphics_1.gif&userId=20103
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Dayton_PlacedGraphics_7.gif&userId=20103

Jeremy Sykes, 2018-09-12T19:30:25Z

Numerical anomalies in a minimax algorithm
http://community.wolfram.com/groups/-/m/t/1449482
I am trying to compute error bounds for polynomial estimates to $\sin(t\theta)/\sin(\theta)$ for $t \in [0,1]$. The polynomials are of the form $p(x,t)$, where $x = \cos(\theta)$. The polynomials have a constant $u \in [0,1]$ that I want to choose to minimize the maximum error. The mathematical derivation is irrelevant, so I have skipped those details. I wrote code (Mathematica 11.3) to do this and plotted the minimax result as a function of $u$. (I omitted the NMinimize call for h[u] in this code sample.)
a[i_?NumericQ, t_?NumericQ] := If[i >= 1, a[i - 1, t]*(t^2 - i^2)/(i*(2*i + 1)), t]
p[n_?NumericQ, y_?NumericQ, t_?NumericQ] := (sum = a[n, t]; For[i = n - 1, i >= 0, i--, sum = a[i, t] + sum*y]; sum)
f[n_?NumericQ, x_?NumericQ, t_?NumericQ] := Sin[t*ArcCos[x]]/Sin[ArcCos[x]] - p[n, x - 1, t]
g[n_?NumericQ, u_?NumericQ, x_?NumericQ, t_?NumericQ] := Abs[f[n, x, t] - u*a[n, t]*(x - 1)^n]
h[n_?NumericQ, u_?NumericQ] := (result = NMaximize[{g[n, u, x, t], 0 <= x <= 1 && 0 <= t <= 1}, {x, t}]; result[[1]])
Plot[h[8, u], {u, 0.7, 0.9}]
The output of Plot has some numerical anomalies.
![Output of Plot function, default method for NMaximize][1]
When I program this in C++ using double precision, the function h(u) is smooth. Evaluating h[8,0.75], Mathematica produces 0.000058529. Evaluating h[8,0.751], Mathematica produces 9.13505e-06. I did not expect the sawtooth-like behavior of the graph. The valleys do not show up in my C++ computations, which shows effectively a V-shaped graph with vertex near (0.85352, 1.91558e-05). I tried to change the working precision, but the sawtooth behavior persisted.
I switched the method to "Simulated Annealing." The output of the Plot function also has some anomalies.
![Output of Plot, simulated annealing][2]
The outputs at the two aforementioned locations are h[8,0.75] = 0.0000583938 and h[8,0.751] = 0.0000580131, but now the anomalies are in a different region of the graph.
Finally, I tried using "Differential Evolution" as the method. The output looks like what I expected.
![Output of Plot, differential evolution][3]
I know how to debug numerical issues in C++ code using a debugger, but I am a novice at Mathematica and wish to know whether there is some standard approach or set of tools that allows me to diagnose such issues. Also, is there some general advice on choosing the method for minimizing or maximizing? Or is this simply something one has to use trial-and-error to determine? Thank you.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=HGraph8Anomalies.png&userId=1449429
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=HGraph8SimulatedAnnealing.png&userId=1449429
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=HGraph8DifferentialEvolution.png&userId=1449429

David Eberly, 2018-09-11T04:30:56Z

Generate a mesh in order to do a heat transfer analysis?
http://community.wolfram.com/groups/-/m/t/1450254
Hi All,
I'm trying to generate a mesh in order to do a heat transfer analysis and have come up with the code in the attached file based on the Wolfram documentation and other posts in the community.
The mesh is broken up into 9 regions to which I would like to assign material properties. I've had a go at assigning point markers to nodes which I then intended to use to assign material properties. However, I've come unstuck because the number of incidents I've created doesn't match the number of nodes (i.e. entries in mesh["Coordinates"]) due to each region being meshed separately and there being two incident IDs per node at each interface between regions. I'm new to Mathematica so I'd be very grateful if anyone could shed some light on how best to go about this. Also, is there a way to show the IDs of all nodes (PointElements?) rather than just the ones on the boundary? I've written my own code to solve for heat transfer in a separate notebook. Many thanks, ArchieArchie Watts-Farmer2018-09-11T14:56:59Z[WSS18] Reinforcement Q-Learning for Atari Games
http://community.wolfram.com/groups/-/m/t/1380007
## Introduction ##
This project aims to create a neural network agent that plays Atari games. The agent is trained using Q-learning and has no a priori knowledge of the game: it learns by playing the game and only being told when it loses.
##What is reinforcement learning? ##
Reinforcement learning is an area of machine learning inspired by behavioral psychology. The agent learns what to do, given a situation and a set of possible actions to choose from, in order to maximize a reward. To cast a problem as a reinforcement learning problem, the game should have a set of states, a set of actions that transform one state into another, and a reward associated with each state. The mathematical formulation of a reinforcement learning problem is called a Markov decision process (MDP).
![A visual representation of the reinforcement learning problem][1]
Image From:https://medium.freecodecamp.org/diving-deeper-into-reinforcement-learning-with-q-learning-c18d0db58efe
## Markov Decision Process ##
Before applying a Markov decision process to the problem, we need to make sure the problem satisfies the Markov property: the current state completely represents the state of the environment. In short, the future depends only on the present.
An MDP can be defined by **(S,A,R,P,γ)** where:
- S — set of possible states
- A — set of possible actions
- R — probability distribution of reward given (state, action) pair
- P — probability distribution over the next state given a (state, action) pair; also known as the transition probability
- γ — reward discount factor
At the initial state $S_{0}$, the agent chooses action $A_{0}$. The environment then returns reward $R_{0}\sim R(\,\cdot\,|S_{0}, A_{0})$ and next state $S_{1}\sim P(\,\cdot\,|S_{0},A_{0})$. This repeats until the episode ends.
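For readers coming from other languages, the interaction loop above can be sketched as follows. This is a minimal Python illustration; the `env` and `policy` interfaces are hypothetical stand-ins, not the Wolfram RLEnvironment API used later in this post:

```python
def run_episode(env, policy, gamma=0.95):
    """Roll out one MDP episode and return the discounted return.
    `env` must provide reset() -> state and step(action) -> (state, reward, done);
    `policy` maps a state to an action. Both interfaces are illustrative."""
    state = env.reset()
    total, discount = 0.0, 1.0
    done = False
    while not done:
        action = policy(state)                  # agent picks A_t from S_t
        state, reward, done = env.step(action)  # environment returns R_t and S_{t+1}
        total += discount * reward              # accumulate gamma^t * R_t
        discount *= gamma
    return total
```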
##Value Network##
In value-based RL, the input is the current state (or a combination of a few recent states), and the output is the estimated future reward of every possible action in this state. The goal is to optimize the value function so that the predicted value is close to the actual reward. In the following graph, each number in a box represents the distance from that box to the goal.
![Value network example][2]
Image From:https://medium.freecodecamp.org/diving-deeper-into-reinforcement-learning-with-q-learning-c18d0db58efe
## Deep Q-Learning ##
Deep Q-learning is the algorithm I used to construct my agent. The basic idea of the Q function is to take a state and an action and output the corresponding sum of rewards until the end of the game. In deep Q-learning, we use a neural network as the Q function, so we can feed in one state and let the network generate predictions for all possible actions.
The Q function is stated as follows:
$Q(S_{t},A) = R_{t+1}+\gamma \max_{A} Q(S_{t+1},A)$
where:
- $Q(S_{t},A)$ — the predicted sum of rewards given the current state and selected action
- $R_{t+1}$ — the reward received after taking the action
- $\gamma$ — the discount factor
- $\max_{A} Q(S_{t+1},A)$ — the best prediction for the next state
As we can see, given the current state and action, the Q function outputs the current reward plus the discounted maximum of the predictions for the next state. This function iteratively predicts the reward until the end of the game, where Q[S,A] = R. Therefore we can compute the loss by subtracting the prediction for the current state from the sum of the reward and the prediction for the next state. When the loss equals 0, the function perfectly predicts the reward of all actions. In a sense, the Q function is predicting the future value of its own prediction. One might ask how such a function could ever converge. Indeed, it is usually hard to converge, but when it does converge, the performance is very good. There are many techniques that can speed up the convergence of the Q function; I will describe a few that I used in this project.
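The recursion is easiest to see in its tabular form. The sketch below is a plain tabular Q-learning step in Python, not the neural-network version used in this project; the dictionary table, the default two-action space, and the learning rate `alpha` are illustrative assumptions:

```python
def q_update(q, state, action, reward, next_state, done,
             actions=(0, 1), alpha=0.1, gamma=0.95):
    """One tabular Q-learning step: move Q(s,a) toward the target
    R_{t+1} + gamma * max_a' Q(s',a'). `q` maps (state, action) -> value."""
    best_next = 0.0 if done else max(q.get((next_state, a), 0.0) for a in actions)
    target = reward + gamma * best_next                # right-hand side of the Q equation
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (target - old)  # shrink the loss toward 0
    return q
```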
## Experience Replay ##
Experience replay means that the agent remembers the states it has experienced and learns from those experiences during training. It makes more efficient use of the generated data by learning from it multiple times, which is important when gaining experience is expensive for the agent. Since the Q function usually doesn't converge quickly, many outcomes from the experience are similar, so multiple passes over the same data are useful.
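A replay buffer can be sketched as a bounded queue sampled uniformly at random. The capacity and tuple layout here are illustrative, not the `processed` association used in the notebook code later in this post:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of (state, action, reward, next_state, done) tuples."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest experience is dropped first
    def push(self, transition):
        self.buffer.append(transition)
    def sample(self, batch_size):
        # Uniform sampling lets one transition contribute to many updates.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))
```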
## Decaying Random Factor ##
The random factor is the probability that the agent chooses a random action instead of the best predicted action. It allows the agent to start as a random player, which increases the diversity of the samples. The random factor decreases as more games are played, so the agent is increasingly reinforced on its own action pattern.
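This is the standard epsilon-greedy schedule. Here is a Python sketch; the exponential decay and the 0.1 floor mirror the Power[randomDiscount, #AbsoluteBatch] and Max[rand, 0.1] expressions in the notebook code, but the function itself is illustrative:

```python
import random

def epsilon_greedy(q_values, batch, decay=0.95, floor=0.1):
    """Choose a random action with probability max(decay**batch, floor),
    otherwise the action with the highest predicted value."""
    epsilon = max(decay ** batch, floor)  # decays toward the floor as training proceeds
    if random.random() <= epsilon:
        return random.randrange(len(q_values))                  # explore
    return max(range(len(q_values)), key=q_values.__getitem__)  # exploit
```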
## Combine Multiple Observations As Input ##
The following image shows a single frame taken from the Atari game Breakout. From this image the agent can extract the location of the ball, the location of the paddle, and so on. But some important information is missing. If you were the agent and this image were shown to you, what action would you choose? Feel like something is missing? Is the ball going right or left? Is it going up or down?
![breakout frame1][3]
Generated Using openAI Gym
The following images are two consecutive frames taken from Breakout. From these two images the agent can extract the direction and speed of the ball. Many people tend to forget this, since processing recent memories while playing a game comes naturally to us, but not to a reinforcement learning agent.
![breakout frame1][4]![frame 2][5]
Generated Using openAI Gym
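In code, combining frames is just a concatenation of observation vectors, as the Join calls in the game function below do for CartPole. This Python sketch with plain lists is illustrative:

```python
def stack_observations(current, previous):
    """Concatenate the current observation with the previous one so the network
    can infer velocity; on the first step, duplicate the current frame."""
    if previous is None:
        return current + current  # no history yet: repeat the first frame
    return current + previous
```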
## Agent Play in CartPole environment ##
The main environment in which the agent is trained and tested is the CartPole environment. It consists of two movable parts. One is the cart, which is controlled by the agent and has two possible actions at every step: move left or right. The other is the pole. The environment simulates the effect of gravity on the pole, which makes it fall to the left or right depending on its angle to the horizon. The environment is considered solved when the average number of steps the agent survives over 100 games exceeds 195. The following graph is a visual representation of the environment: the blue rectangle represents the pole, the black box is the cart, and the black line is the horizon.
![cart pole sample][6]
First, let's create an environment
$env = RLEnvironmentCreate["WLCartPole"]
Then, initialize a network for this environment and a generator
policyNet =
NetInitialize@
NetChain[{LinearLayer[128], Tanh, LinearLayer[128], Tanh,
LinearLayer[2]}, "Input" -> 8,
"Output" -> NetDecoder[{"Class", {0, 1}}]];
generator := creatGenerator[$env, 20, 10000, False, 0.98, 1000, 0.95, False]
The generator function plays the game and generates input-output pairs to train the network.
Inside the generator, the replay buffer (processed) is initialized; rewardList records the performance, and best records the peak performance.
If[#AbsoluteBatch == 0,
processed = <|"action"->{},"observation"->{},"next"->{},"reward"->{}|>;
$rewardList = {};
$env=env;
best = 0;
];
Then the environment data are generated by the game function and preprocessed. At the start of training, the generator produces more data to fill the replay buffer.
If[#AbsoluteBatch == 0,
experience = preprocess[game[start,maxEp,#Net, render, Power[randomDiscount,#AbsoluteBatch], $env], nor]
,
experience = preprocess[game[1,maxEp,#Net, render, Power[randomDiscount,#AbsoluteBatch],$env], nor]
];
The game function is below; it joins the current observation with the previous observation to form the input to the network.
game[ep_Integer,st_Integer,net_NetChain,render_, rand_, $env_, end_:Function[False]]:= Module[{
states, list,next,observation, punish,choiceSpace,
state,ob,ac,re,action
},
choiceSpace = NetExtract[net,"Output"][["Labels"]];
states = <|"observation"->{},"action"->{},"reward"->{},"next"->{}|>;
Do[
state["Observation"] = RLEnvironmentReset[$env]; (* reset every episode *)
ob = {};
ac = {};
re = {};
next = {};
Do[
observation = {};
observation = Join[observation,state["Observation"]];
If[ob=={},
observation = Join[observation,state["Observation"]]
,
observation = Join[observation, Last[ob][[;;Length[state["Observation"]]]]]
];
action = If[RandomReal[]<=Max[rand,0.1],
RandomChoice[choiceSpace]
,
net[observation]
];
(*Print[action];*)
AppendTo[ob, observation];
AppendTo[ac, action];
state = RLEnvironmentStep[$env, action, render];
If[Or[state["Done"], end[state]],
punish = - Max[Values[net[observation,"Probabilities"]]] - 1;
AppendTo[re, punish];
AppendTo[next, observation];
Break[]
,
AppendTo[re, state["Reward"]];
observation = state["Observation"];
observation = Join[observation, ob[[-1]][[;;Length[state["Observation"]]]]];
AppendTo[next, observation];
];
,
{step, st}];
AppendTo[states["observation"], ob];
AppendTo[states["action"], ac];
AppendTo[states["reward"], re];
AppendTo[states["next"], next];
,
{episode,ep}
];
(* close the $environment when done *)
states
]
The preprocess function flattens the input and has an option to normalize the observations:
preprocess[x_, nor_:False] := Module[{result},(
result = <||>;
result["action"] = Flatten[x["action"]];
If[nor,
result["observation"] = N[Normalize/@Flatten[x["observation"],1]];
result["next"] = N[Normalize/@Flatten[x["next"],1]];
,
result["observation"] = Flatten[x["observation"],1];
result["next"] = Flatten[x["next"],1];
];
result["reward"] = Flatten[x["reward"]];
result
)]
Let's continue with the generator: after getting the data from the game, the generator measures the performance and records it.
NotebookDelete[temp];
reward = Length[experience["action"]];
AppendTo[$rewardList,reward];
temp=PrintTemporary[reward];
Record the net with the best performance:
If[reward>best,best = reward;bestNet = #Net];
Add this experience to the replay buffer:
AppendTo[processed["action"],#]&/@experience["action"];
AppendTo[processed["observation"],#]&/@experience["observation"];
AppendTo[processed["next"],#]&/@experience["next"];
AppendTo[processed["reward"],#]&/@experience["reward"];
Make sure the total size of the replay buffer does not exceed the limit:
len = Length[processed["action"]] - replaySize;
If[len > 0,
processed["action"] = processed["action"][[len;;]];
processed["observation"] = processed["observation"][[len;;]];
processed["next"] = processed["next"][[len;;]];
processed["reward"] = processed["reward"][[len;;]];
];
Add the input of the network to the result:
pos = RandomInteger[{1,Length[processed["action"]]},#BatchSize];
result = <||>;
result["Input"] = processed["observation"][[pos]];
Calculate the output based on the next state and reward and add it to the result:
predictionsOfCurrentObservation = Values[#Net[processed["observation"][[pos]],"Probabilities"]];
rewardsOfAction = processed["reward"][[pos]];
maxPredictionsOfNextObservation = gamma*Max[Values[#]]&/@#Net[processed["next"][[pos]],"Probabilities"];
temp = rewardsOfAction + maxPredictionsOfNextObservation;
MapIndexed[
(predictionsOfCurrentObservation[[First@#2,(#1+1)]]=temp[[First@#2]])&,(processed["action"][[pos]]-First[NetExtract[#Net,"Output"][["Labels"]]])
];
result["Output"] = out;
result
In the end, we can start training
trained =
NetTrain[policyNet, generator,
LossFunction -> MeanSquaredLossLayer[], BatchSize -> 32,
MaxTrainingRounds -> 2000]
## Performance of the agent ##
![enter image description here][7]
The graph above shows the performance of the agent over 1000 games in the CartPole environment. The agent starts with random play, surviving only a small number of steps per game. The performance stays low until about 800 games, then starts to increase rapidly. At the end of training, the performance jumps from 3k to 10k steps (the maximum number of steps per game) within 4 games. This illustrates that although the Q function is hard to converge, when it does converge, the performance is very good.
##Future Directions##
The current agent uses the classical DQN as its major structure. Other techniques like Noisy Net, DDQN, Prioritized Replay, etc. can help the Q function converge in a shorter time. Other algorithms, like the Rainbow algorithm, which is based on Q-learning, will be the next step of this project.
The code can be found on [GitHub][8].
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=rl.png&userId=1363029
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=vn.png&userId=1363029
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=breakout1.png&userId=1363029
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=breakout1.png&userId=1363029
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=breakout2.png&userId=1363029
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=cp.png&userId=1363029
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=performance.png&userId=1363029
[8]: https://github.com/ianfanx/wss2018Project
Ian Fan, 2018-07-11T20:52:09Z