Community RSS Feed
https://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Wolfram Language sorted by active

Invert a 2x2 matrix with 2x2 matrix elements?
https://community.wolfram.com/groups/-/m/t/1651695
Suppose my matrix a is as follows:
a00 a01
a10 a11
Element a00 would itself be a 2x2 matrix b, as follows:
b00 b01
b10 b11
And so on ...
I want to invert my matrix a using a Mathematica algorithm.
Thanks for your help.
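One way to do this, sketched with made-up numeric blocks (all values below are illustrative only): flatten the block matrix into an ordinary 4x4 matrix with ArrayFlatten, invert it, and re-partition the result into 2x2 blocks.

    a00 = {{1, 2}, {3, 4}}; a01 = {{0, 1}, {1, 0}};   (* illustrative blocks *)
    a10 = {{2, 0}, {0, 2}}; a11 = {{1, 1}, {0, 1}};
    big = ArrayFlatten[{{a00, a01}, {a10, a11}}];     (* assemble the 4x4 matrix *)
    invBlocks = Partition[Inverse[big], {2, 2}]       (* the inverse, as 2x2 blocks *)

This treats the block matrix as an ordinary matrix over the reals; if instead the blocks should multiply as non-commuting objects, the block-inverse (Schur complement) formulas would be needed.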
Stanstan gianzero, 2019-04-07T16:35:03Z

Add contour lines on a map?
https://community.wolfram.com/groups/-/m/t/1668023
Hi,
I would like to know how I can add contour lines to the map.
X = {1, 2, 4, 5, 7, 9};
Y = {3, 8, 9, 2, 6, 3};
Z = {100, 110, 120, 60, 90, 70};
T = Transpose[{X, Y, Z}];
ListContourPlot[T, ColorFunction -> "BlueGreenYellow",
PlotLegends -> Automatic]
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=7752123.png&userId=943918
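For reference, a sketch of one way to draw labeled contour lines over the colored regions (the option values are illustrative; the data variable defined above is T):

    ListContourPlot[T, ColorFunction -> "BlueGreenYellow",
     Contours -> 10, ContourStyle -> Directive[Black, Thin],
     ContourLabels -> True, PlotLegends -> Automatic]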
Thanks for your help.
M.A. Ghorbani, 2019-04-24T13:22:23Z

Use ParallelTable to improve calculation speed?
https://community.wolfram.com/groups/-/m/t/1667335
**Mathematica V12**:
The snapshot is based on a notebook. I want to improve the calculation speed of my notebook by changing the function **Table** to **ParallelTable**. Unfortunately, ParallelTable does not finish in a reasonable time.
![enter image description here][1]
Mathematica told me that 4 Kernels have been launched, but after more than 30 minutes I aborted the calculation.
Why does it take so much time to create the table with several kernels?
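Without seeing the notebook this is only a guess, but a common cause is that the definitions used inside the table were never distributed to the subkernels, so every evaluation is sent back to the main kernel. A minimal sketch (f is a hypothetical stand-in for the actual function):

    f[x_] := N[Sin[x]^2];        (* hypothetical stand-in for the real computation *)
    DistributeDefinitions[f];    (* make f known to all subkernels *)
    ParallelTable[f[i], {i, 1, 10^5}]; // AbsoluteTiming

If each table entry is cheap, communication overhead can also make ParallelTable slower than Table; parallelization only pays off when the per-entry work dominates.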
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=52542019-04-2314_23_39-Window.png&userId=1369267
Jürgen Kanz, 2019-04-23T12:37:59Z

Add units to variables inside a Manipulate expression?
https://community.wolfram.com/groups/-/m/t/1667761
I've created a Manipulate expression with many variables. I would like each of these variables to have associated units, as in the standard Quantity[value, "unit"], so that I can easily enter the values I want and get the answer in the correct units. Unfortunately, Manipulate doesn't seem to tolerate this regardless of where I try to insert it. I'd also like to apply UnitConvert to the answer so that I can see it in "mL/min".
Manipulate[
(\[Pi] (d/2)^2 c \[Epsilon]^3 Rb^2 ((3 (1 - \[Epsilon]) 0.0728 Cos[\[Theta]])/(\[Epsilon] L Rb) +
9800 Sin[\[Phi]]))/((1 - \[Epsilon])^2 0.00089),
{{L, 0.07, "wick length"}, 0.01,0.1},
{{d, 0.0068, "wick diameter"}, 0.001, 0.008},
{c, 1/60, 1/30},
{{\[Epsilon], 0.5, "porosity"}, 0.3,0.9},
{{Rb, 0.0001, "fiber radius"}, 0.00001,0.0002},
{{\[Theta], (70 \[Pi])/180, "contact angle"}, 0, \[Pi]/2},
{{\[Phi], \[Pi]/2, "wick orientation"}, 0, \[Pi]/2}]
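One workaround sketch (not the only approach): keep the controls as plain SI numbers and attach units only when displaying the result. Here I assume the expression evaluates to a volumetric flow rate in m^3/s; that assumption and the unit labels in the control names are illustrative.

    Manipulate[
     UnitConvert[
      Quantity[(\[Pi] (d/2)^2 c \[Epsilon]^3 Rb^2 ((3 (1 - \[Epsilon]) 0.0728 Cos[\[Theta]])/(\[Epsilon] L Rb) +
            9800 Sin[\[Phi]]))/((1 - \[Epsilon])^2 0.00089),
       "Meters"^3/"Seconds"], "Milliliters"/"Minutes"],
     {{L, 0.07, "wick length (m)"}, 0.01, 0.1},
     {{d, 0.0068, "wick diameter (m)"}, 0.001, 0.008},
     {c, 1/60, 1/30},
     {{\[Epsilon], 0.5, "porosity"}, 0.3, 0.9},
     {{Rb, 0.0001, "fiber radius (m)"}, 0.00001, 0.0002},
     {{\[Theta], (70 \[Pi])/180, "contact angle (rad)"}, 0, \[Pi]/2},
     {{\[Phi], \[Pi]/2, "wick orientation (rad)"}, 0, \[Pi]/2}]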
I've already tried replacing the bounds on each variable with quantities. And I've tried defining the symbols as quantities before entering the Manipulate expression. I've also tried adding Quantity[] to each variable within the math expression. None of these works. Please help, there must be a way to do this, right?!
William Connacher, 2019-04-24T00:30:20Z

Solver for unsteady flow with the use of Mathematica FEM
https://community.wolfram.com/groups/-/m/t/1433064
![fig7][331]
I started the discussion [here][1], but I also want to repeat it on this forum.
There are many commercial and open-source codes for solving problems of unsteady flow.
We are interested in the possibility of solving these problems using Mathematica FEM. Previously proposed solvers for stationary incompressible isothermal flows:
Solving 2D Incompressible Flows using Finite Elements:
http://community.wolfram.com/groups/-/m/t/610335
FEM Solver for Navier-Stokes equations in 2D:
http://community.wolfram.com/groups/-/m/t/611304
Nonlinear FEM Solver for Navier-Stokes equations in 2D:
https://mathematica.stackexchange.com/questions/94914/nonlinear-fem-solver-for-navier-stokes-equations-in-2d/96579#96579
We give several examples of the successful application of the finite element method to unsteady problems, including nonisothermal and compressible flows. We begin with two standard tests for this class of problems proposed by
M. Schäfer and S. Turek, Benchmark computations of laminar flow around a cylinder (with support by F. Durst, E. Krause and R. Rannacher). In E. Hirschel, editor, Flow Simulation with High-Performance Computers II. DFG Priority Research Programme Results 1993-1995, number 52 in Notes on Numerical Fluid Mechanics, pp. 547-566. Vieweg, Wiesbaden, 1996. https://www.uio.no/studier/emner/matnat/math/MEK4300/v14/undervisningsmateriale/schaeferturek1996.pdf
![fig8][332]
Let us consider the flow in a flat channel around a cylinder at Reynolds number 100, when self-oscillations occur, leading to vortex shedding behind the cylinder. In this problem it is necessary to compute the drag coefficient, lift coefficient and the pressure difference between the front and rear of the cylinder as functions of time, the maximum drag coefficient, the maximum lift coefficient, the Strouhal number, and the pressure difference $\Delta P(t)$ at $t = t_0 + 1/(2f)$. The frequency f is determined by the period of oscillation of the lift coefficient, f = f(c_L). The data for this test, the code and the results are shown below.
H = .41; L = 2.2; {x0, y0, r0} = {1/5, 1/5, 1/20};
Ω = RegionDifference[Rectangle[{0, 0}, {L, H}], Disk[{x0, y0}, r0]];
RegionPlot[Ω, AspectRatio -> Automatic]
K = 2000; Um = 1.5; ν = 10^-3; t0 = .004;
U0[y_, t_] := 4*Um*y/H*(1 - y/H)
UX[0][x_, y_] := 0;
VY[0][x_, y_] := 0;
P0[0][x_, y_] := 0;
Do[
 {UX[i], VY[i], P0[i]} =
  NDSolveValue[{{Inactive[Div][({{-μ, 0}, {0, -μ}}.Inactive[Grad][u[x, y], {x, y}]), {x, y}] +
      D[p[x, y], x] + (u[x, y] - UX[i - 1][x, y])/t0 +
      UX[i - 1][x, y]*D[u[x, y], x] + VY[i - 1][x, y]*D[u[x, y], y],
     Inactive[Div][({{-μ, 0}, {0, -μ}}.Inactive[Grad][v[x, y], {x, y}]), {x, y}] +
      D[p[x, y], y] + (v[x, y] - VY[i - 1][x, y])/t0 +
      UX[i - 1][x, y]*D[v[x, y], x] + VY[i - 1][x, y]*D[v[x, y], y],
     D[u[x, y], x] + D[v[x, y], y]} == {0, 0, 0} /. μ -> ν,
    {DirichletCondition[{u[x, y] == U0[y, i*t0], v[x, y] == 0}, x == 0.],
     DirichletCondition[{u[x, y] == 0., v[x, y] == 0.}, 0 <= x <= L && y == 0 || y == H],
     DirichletCondition[{u[x, y] == 0, v[x, y] == 0}, (x - x0)^2 + (y - y0)^2 == r0^2],
     DirichletCondition[p[x, y] == P0[i - 1][x, y], x == L]}}, {u, v, p}, {x, y} ∈ Ω,
   Method -> {"FiniteElement",
     "InterpolationOrder" -> {u -> 2, v -> 2, p -> 1},
     "MeshOptions" -> {"MaxCellMeasure" -> 0.001}}], {i, 1, K}];
{ContourPlot[UX[K/2][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, Contours -> 20,
PlotPoints -> 25, PlotLabel -> u, MaxRecursion -> 2],
ContourPlot[VY[K/2][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, Contours -> 20,
PlotPoints -> 25, PlotLabel -> v, MaxRecursion -> 2,
PlotRange -> All]} // Quiet
{DensityPlot[UX[K][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, PlotPoints -> 25,
PlotLabel -> u, MaxRecursion -> 2],
DensityPlot[VY[K][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, PlotPoints -> 25,
PlotLabel -> v, MaxRecursion -> 2, PlotRange -> All]} // Quiet
dPl = Interpolation[
Table[{i*t0, (P0[i][.15, .2] - P0[i][.25, .2])}, {i, 0, K, 1}]];
cD = Table[{t0*i,
    NIntegrate[(-ν*(-Sin[θ]*(Sin[θ]*Derivative[0, 1][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
             Cos[θ]*Derivative[1, 0][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]]) +
           Cos[θ]*(Sin[θ]*Derivative[0, 1][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
             Cos[θ]*Derivative[1, 0][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]]))*Sin[θ] -
        P0[i][x0 + r Cos[θ], y0 + r Sin[θ]]*Cos[θ]) /. {r -> r0},
     {θ, 0, 2*Pi}]}, {i, 1000, 2000}]; // Quiet
cL = Table[{t0*i,
    -NIntegrate[(-ν*(-Sin[θ]*(Sin[θ]*Derivative[0, 1][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
              Cos[θ]*Derivative[1, 0][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]]) +
            Cos[θ]*(Sin[θ]*Derivative[0, 1][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
              Cos[θ]*Derivative[1, 0][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]]))*Cos[θ] +
         P0[i][x0 + r Cos[θ], y0 + r Sin[θ]]*Sin[θ]) /. {r -> r0},
      {θ, 0, 2*Pi}]}, {i, 1000, 2000}]; // Quiet
{ListLinePlot[cD,
AxesLabel -> {"t", "\!\(\*SubscriptBox[\(c\), \(D\)]\)"}],
ListLinePlot[cL,
AxesLabel -> {"t", "\!\(\*SubscriptBox[\(c\), \(L\)]\)"}],
Plot[dPl[x], {x, 0, 8}, AxesLabel -> {"t", "ΔP"}]}
f002 = FindFit[cL, a*.5 + b*.8*Sin[k*16*t + c*1.], {a, b, k, c}, t]
Plot[Evaluate[a*.5 + b*.8*Sin[k*16*t + c*1.] /. f002], {t, 4, 8},
Epilog -> Map[Point, cL]]
k0=k/.f002;
Struhalnumber = .1*16*k0/2/Pi
cLm = MaximalBy[cL, Last]
sol = {Max[cD[[All, 2]]], Max[cL[[All, 2]]], Struhalnumber,
dPl[cLm[[1, 1]] + Pi/(16*k0)]}
Fig. 1 shows the components of the flow velocity and the required coefficients. Our solution of the problem, followed by the bounds required by the test:
{3.17805, 1.03297, 0.266606, 2.60427}
lowerbound= { 3.2200, 0.9900, 0.2950, 2.4600};
upperbound = {3.2400, 1.0100, 0.3050, 2.5000};
![Fig1][2]
Note that our results differ from the allowable bounds by several percent, but if you look at all the results in Table 4 of the cited article, the agreement is quite acceptable. The worst prediction is the Strouhal number. We use the explicit Euler method, which underestimates the Strouhal number, as the data in Table 4 also show.
The next test differs from the previous one in that the inlet velocity varies according to `U0[y_, t_] := 4*Um*y/H*(1 - y/H)*Sin[Pi*t/8]`. It is necessary to determine the time dependence of the drag and lift coefficients over a half-period of oscillation, as well as the pressure drop at the final moment of time. Fig. 2 shows the components of the flow velocity and the required coefficients. Our solution of the problem, followed by the bounds required by the test:
sol = {3.0438934441256595`,
0.5073345082785012`, -0.11152933279750943`};
lowerbound = {2.9300, 0.4700, -0.1150};
upperbound = {2.9700, 0.4900, -0.1050};
![Fig2][3]
For this test, the agreement with the data in Table 5 is good. Consequently, the two tests are almost completely passed.
I wrote and debugged this code using Mathematica 11.01. But when I ran it using Mathematica 11.3, I got strange pictures: for example, the disk is rendered as a hexagon, and the size of the region has changed.
![Fig3][4]
In addition, the numerical solution of the problem has changed; for example, for test 2D2:
{3.17805, 1.03297, 0.266606, 2.60427} v11.01
{3.15711, 1.11377, 0.266043, 2.54356} v11.3
The attached file contains the working code for test 2D3 describing the flow around the cylinder in a flat channel with a change in the flow velocity.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D2.png&userId=1218692
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D2.png&userId=1218692
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D3.png&userId=1218692
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Math11.3.png&userId=1218692
[331]: http://community.wolfram.com//c/portal/getImageAttachment?filename=CylinderRe100test2D2.gif&userId=1218692
[332]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2D2test.png&userId=1218692
Alexander Trounev, 2018-08-31T11:44:04Z

Sequence-to-sequence regression with recurrent neural networks
https://community.wolfram.com/groups/-/m/t/1647625
I recently wanted to learn more about recurrent neural networks (RNNs) and how to use them for regression problems. In particular, I wanted to try and combine RNNs with the ideas for Bayesian error bar estimation I discussed in my blog/community posts a while back:
https://blog.wolfram.com/2018/05/31/how-optimistic-do-you-want-to-be-bayesian-neural-network-regression-with-prediction-errors/
https://community.wolfram.com/groups/-/m/t/1319745
I also found the following answer on StackExchange about sequence learning with `SequenceAttentionLayer`, which helped me quite a bit to understand how to use these layers:
https://mathematica.stackexchange.com/a/143672/43522
With those resources in hand, I wanted to give time-series regression a shot.
First, let's use `FinancialData` as an easy source of a test dataset. Here I download 5 years of data from 5 different indices and turn them into a regularly sampled matrix of vectors and then split the dataset into 80/20% training/test data:
indices = {"GOOGL", "AS:ASML", "MSFT", "AAPL", "GE"};
data = FinancialData[indices, {Today - 5 Quantity[1, "Years"], Today}];
DateListPlot[data]
With[{
matrix = N@Normal[
TimeSeriesRescale[
TimeSeriesResample[TimeSeriesThread[Identity, TimeSeries /@ data]], {0, Max[Length /@ data] - 1}]
][[All, 2]]
},
{fe, data} = FeatureExtraction[matrix, "StandardizedVector", {"ExtractorFunction", "ExtractedFeatures"}]
];
{trainingData, testData} = TakeDrop[data, Round[0.8*Length[data]]];
Dimensions /@ {trainingData, testData}
![enter image description here][1]
Next, we define a function that can generate training data for the network from the data matrix we just made:
Clear[genFun];
genFun[data_, { seqLengthIn_, seqLengthout_, offset_}] := With[{
dim = Dimensions[data][[2]],
max = Max[1, Length[data] - seqLengthout - offset - seqLengthIn - 2]
},
Function[
With[{ri = RandomInteger[{1, max}, #BatchSize]},
<|
"Input" -> Function[data[[# ;; # + seqLengthIn - 1]]] /@ ri,
"Target" -> Join[
ConstantArray[{ConstantArray[0., dim]}, #BatchSize],
Function[With[{j = # + offset + seqLengthIn - 1}, data[[j ;; j + seqLengthout - 1]]]] /@ ri,
2
]
|>]]];
Each time the function is called, it generates a batch of sequences that will be used as training input and target sequences that the network should try to learn and reproduce, but the targets are prepended with a blank start-of-sequence vector:
Dimensions /@ genFun[data, {10, 6, 1}][<|"BatchSize" -> 3|>]
Out[72]= <|"Input" -> {3, 10, 5}, "Target" -> {3, 7, 5}|>
Next we put together our neural network:
Clear[encoderNetAttend, decoderNet, lossLayer, trainingNetAttention];
encoderNetAttend[n_, dropout_] := NetGraph[<|
"lstm" -> LongShortTermMemoryLayer[n, "Dropout" -> dropout],
"last" -> SequenceLastLayer[],
"post" ->
NetChain[{LinearLayer[n], ElementwiseLayer["SELU"],
DropoutLayer[], LinearLayer[]}]
|>,
{"lstm" -> "last" -> "post" -> NetPort["StateInit"], "lstm" -> NetPort["States"]}
];
decoderNet[n_, dropout_] := NetGraph[
<|
"part1" -> PartLayer[{All, 1}], "part2" -> PartLayer[{All, 2}],
"lstm" -> LongShortTermMemoryLayer[n, "Dropout" -> dropout],
"shape1" -> ReshapeLayer[{n, 1}], "shape2" -> ReshapeLayer[{n, 1}], "catState" -> CatenateLayer[2]
|>,
{
NetPort["Input"] -> NetPort["lstm", "Input"],
NetPort["lstm", "Output"] -> NetPort["Output"],
NetPort["StateIn"] -> "part1" -> NetPort["lstm", "State"],
NetPort["StateIn"] -> "part2" -> NetPort["lstm", "CellState"],
NetPort["lstm", "State"] -> "shape1", NetPort["lstm", "CellState"] -> "shape2",
{"shape1", "shape2"} -> "catState" -> NetPort["StateOut"]
},
"StateIn" -> {n, 2}
];
lossLayer["LogPrecision"] = ThreadingLayer@Function[{yObserved, yPredicted, logPrecision},
(yPredicted - yObserved)^2*Exp[logPrecision] - logPrecision
];
regressionNet[n_, dimIn_, dropout : _?NumericQ : 0.5] :=
NetInitialize@NetGraph[
<|
"encoder" -> encoderNetAttend[n, dropout],
"decoder" -> decoderNet[n, dropout],
"attention" -> SequenceAttentionLayer["Bilinear"],
"catenate" -> CatenateLayer[2],
"regression" -> NetMapOperator[LinearLayer[{2, dimIn}]],
"part1" -> PartLayer[{All, 1}], "part2" -> PartLayer[{All, 2}]
|>,
{
NetPort["Input"] -> "encoder",
NetPort["encoder", "StateInit"] -> NetPort["decoder", "StateIn"],
NetPort["encoder", "States"] -> NetPort["attention", "Input"],
NetPort["Prediction"] -> NetPort["decoder", "Input"],
NetPort[{"decoder", "Output"}] ->
NetPort["attention", "Query"], {"decoder", "attention"} ->
"catenate" -> "regression",
"regression" -> "part1" -> NetPort["mu"],
"regression" -> "part2" -> NetPort["logTau"]
},
"Input" -> {"Varying", dimIn},
"Prediction" -> {"Varying", dimIn}
];
trainingNetAttention[n_, dimIn_, dropout : _?NumericQ : 0.5] := NetInitialize@NetGraph[
<|
"regression" -> regressionNet[n, dimIn, dropout],
"loss" -> lossLayer["LogPrecision"],
"total" -> SummationLayer[], "most" -> SequenceMostLayer[], "rest" -> SequenceRestLayer[]
|>,
{
NetPort["Input"] -> NetPort["regression", "Input"],
NetPort["Target"] -> "most" -> NetPort["regression", "Prediction"],
NetPort["Target"] -> "rest",
{"rest", NetPort["regression", "mu"], NetPort["regression", "logTau"]} -> "loss" -> "total" -> NetPort["Loss"]
},
"Input" -> {"Varying", dimIn},
"Target" -> {"Varying", dimIn}
];
nHidden = 250;
pDrop = 0.25;
NetInformation[encoderNetAttend[nHidden, pDrop], "SummaryGraphic"]
NetInformation[decoderNet[nHidden, pDrop], "SummaryGraphic"]
NetInformation[regressionNet[nHidden, 2, 0.25], "SummaryGraphic"]
NetInformation[ trainingNetAttention[nHidden, 2, 0.25], "SummaryGraphic"]
![enter image description here][2]
The idea here is that the encoder produces a state vector that the decoder uses to start making predictions. The decoder, in turn, accepts the network's own predictions to iteratively crank out the next one. During training, though, we don't have the actual predictions from the network, so instead we feed in the sequence of actual target values and offset the target sequence at the loss layer by one with a `SequenceRestLayer`.
Time to train the network:
seqConfig = {15, 7, 1};
trainedTrainObject = NetTrain[
trainingNetAttention[nHidden, Length[indices], pDrop],
genFun[trainingData, seqConfig],
All,
TargetDevice -> "GPU", LossFunction -> "Loss",
TimeGoal -> 30*60, BatchSize -> 200,
ValidationSet -> genFun[testData, seqConfig][<|"BatchSize" -> 200|>]
];
trainedNet = trainedTrainObject["TrainedNet"];
trainedTrainObject["EvolutionPlots"]
![enter image description here][3]
To make predictions, the network needs to eat its own results iteratively, which we can do with `NestList`:
makePredictorFunction[trainedNet_, evaluationMode : ("Test" | "Train") : "Test", device : ("CPU" | "GPU") : "CPU"] := With[{
dim = Last@NetExtract[trainedNet, {"Input"}],
encoder = NetExtract[trainedNet, {"regression", "encoder"}],
regressionNet =
NetDelete[NetExtract[trainedNet, "regression"], "encoder"]
},
Function[{input, n},
Module[{
encodedInput = encoder[input, NetEvaluationMode -> evaluationMode, TargetDevice -> device],
vectorInputQ = Depth[input] === 4,
length = Length[input],
reshape
},
reshape = If[vectorInputQ, ConstantArray[#, Length[input]] &, Identity];
GeneralUtilities`AssociationTranspose@NestList[
regressionNet[
<|"Input" -> encodedInput["States"], "Prediction" -> #mu, "StateIn" -> #StateOut|>,
NetEvaluationMode -> evaluationMode,
TargetDevice -> device] &,
<|
"mu" -> reshape@{ConstantArray[0, dim]},
"StateOut" -> encodedInput["StateInit"]
|>,
n
][[2 ;;, {"mu", "logTau"}, If[vectorInputQ, All, Unevaluated[Sequence[]]], 1]]
]
]
];
For example, we can predict 7 steps ahead based on 30 points of historical data:
predictor = makePredictorFunction[trainedNet, "Train", "CPU"];
Dimensions /@ predictor[data[[;; 30]], 7]
Out[299]= <|"mu" -> {7, 5}, "logTau" -> {7, 5}|>
Here's a manipulate to visualize the predictions against the actual data. We sample the network several times to get a sense of the uncertainty from the training weights and combine that uncertainty with the variance predicted by the network:
Manipulate[
Manipulate[
Quiet@With[{
predictions = Dot[
{{1., 1.}, {1., 0.}, {1., -1.}},
{Mean[#mu],
nstdev*Sqrt[Variance[#mu] + Mean[Exp[-#logTau]]]} &@Map[
Transpose,
predictor[Table[dat[[Max[1, n - t] ;; UpTo[n]]], s], l]
][[{"mu", "logTau"}, All, All, index]]
],
trueData = ListPlot[
MapIndexed[{First[#2] + Max[1, n - t] - 1, #1} &,
Flatten[dat[[Max[1, n - t] ;; UpTo[n + l], index]]]
],
Joined -> True,
PlotStyle -> Black
]
},
Show[
ListPlot[
Thread@Legended[
MapIndexed[
{First[#2] + Max[1, n - t] - 1, #1} &,
Flatten[{
dat[[Max[1, n - t] ;; UpTo[n], index]],
#
}]
] & /@ predictions,
{StringForm["\[Mu] + `1` \[Sigma]", Dynamic[nstdev]],
"\[Mu]", StringForm["\[Mu] - `1` \[Sigma]", Dynamic[nstdev]]}
],
Joined -> True,
PlotStyle -> (Directive[#, Dashed] & /@ {Blue, Red, Blue}),
ImageSize -> 500,
Filling -> {1 -> {2}, 3 -> {2}}
],
trueData,
ImageSize -> Large,
PlotRange -> All
]
],
{{index, 1, "Index"}, Thread[Range[Length[indices]] -> indices]},
{{n, Round[Length[dat]/10], "Position"}, 1, Length[dat], 1},
{{t, 20, "History length"}, 1, 30, 1},
{{l, 30, "Extrapolation"}, 1, 50, 1},
{{s, 5, "Number of samples"}, 1, 20, 1},
{{nstdev, 2, "Standard deviations"}, 0, 4, 0.1},
Paneled -> False,
ContinuousAction -> True,
TrackedSymbols :> {index, n, t, l, s, nstdev},
SynchronousInitialization -> False
],
{{dat, trainingData, "Dataset"}, {trainingData -> "Training data",
testData -> "Test data"}},
TrackedSymbols :> {dat},
SynchronousInitialization -> False
]
![enter image description here][4]
The results on the validation set do not look great, but the comparison between the first 4 years of the data and the last one may not be fair. Or it may simply be necessary to involve many more indices before it becomes feasible to do extrapolations of this type. At any rate, I hope that this code is useful to other users here! I will probably tweak bits of code here and there over the next few days and maybe add more commentary, so be sure to check back occasionally if you're interested.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=8936datelistplot.png&userId=839741
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1613networks.png&userId=839741
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=progressplot.png&userId=839741
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=manipulate.png&userId=839741
Sjoerd Smit, 2019-04-05T15:43:41Z

Get Non-negative ODE solution?
https://community.wolfram.com/groups/-/m/t/1667480
Is there any Mathematica option equivalent to “Nonnegative” in “odeset” in Matlab?
Here is the documentation about the function I’m talking about:
https://www.mathworks.com/help/matlab/math/nonnegative-ode-solution.html
I think it’s a very useful function and it’d be nice to have something similar in Mathematica.
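To my knowledge there is no direct equivalent of odeset's NonNegative option, but one workaround sketch uses WhenEvent with a discrete switch variable that freezes a component once it hits zero. The equation below is illustrative (modeled on the "knee" problem from the linked MATLAB page); all numbers are assumptions:

    eps = 1/50;  (* without a constraint, y would dip below zero near t = 1 *)
    sol = NDSolveValue[{eps*y'[t] == s[t]*((1 - t)*y[t] - y[t]^2),
        y[0] == 1, s[0] == 1,
        WhenEvent[y[t] < 0, {y[t] -> 0, s[t] -> 0}]},
       y, {t, 0, 2}, DiscreteVariables -> {s}];
    Plot[sol[t], {t, 0, 2}, PlotRange -> All]

Setting s[t] to 0 switches the right-hand side off, so y stays at zero for the rest of the integration.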
Thanks in advance!
David Aragonés, 2019-04-23T17:50:09Z

[✓] Place a point out of a region using GeometricScene?
https://community.wolfram.com/groups/-/m/t/1667606
In a GeometricScene, I want a point to lie outside a region: say, a triangle and a point B outside it. How can I do it?
Vasileios Micholos, 2019-04-23T19:15:48Z

[✓] Evaluate complex function Cos[Sqrt[s]]
https://community.wolfram.com/groups/-/m/t/1665868
Dear Sir/Madam,
I have evaluated the complex function Cos[Sqrt[s]] (s is a complex variable) in Mathematica version 10. The results are presented in the attached file. However, Mathematica cannot evaluate the function along the negative direction of the x-axis. This problem did not occur in MATLAB. How can I resolve it in Mathematica?
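Note that Cos[Sqrt[s]] is an even function of Sqrt[s], so the square-root branch cut cancels and the function is entire; on the negative real axis it equals Cosh[Sqrt[-s]]. A quick check:

    Cos[Sqrt[-4.]]   (* evaluates through the principal branch as Cos[2. I] *)
    Cosh[2.]         (* the same value, up to a vanishing imaginary part *)
    Plot[Re[Cos[Sqrt[x + 0.*I]]], {x, -20., 5.}]

If only real inputs are fed in, adding a small explicit imaginary part (or using Cosh[Sqrt[-s]] for s < 0) avoids the evaluation problems on the cut.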
Best regards
M Abadi, 2019-04-22T06:45:17Z

Test the significance of the result from NonlinearModelFit?
https://community.wolfram.com/groups/-/m/t/1665378
I have some data, and I have done a NonlinearModelFit on it, actually fitting it to a sine curve. I can get the "RSquared" and "AdjustedRSquared", e.g. with nlm["AdjustedRSquared"] where nlm is the output of the NonlinearModelFit. I now want to test the significance of the result. I would like to end up with a single number p, so that I could say, "the probability of getting such a fit by chance is p".
NonlinearModelFit has properties like "ParameterPValues" and "ParameterTStatistics". However, I have looked in the StatisticalModelAnalysis tutorial, and there is no real explanation of how they might be used or generally how to do significance testing.
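One common approach (a sketch, not the only valid test) is to use the built-in per-parameter p-values together with the ANOVA decomposition, from which an overall F-ratio against a constant-mean model can be formed. The data and model below are illustrative:

    data = Table[{t, Sin[2 t + 1] + RandomReal[{-0.3, 0.3}]}, {t, 0., 10., 0.1}];
    nlm = NonlinearModelFit[data, a Sin[w t + p], {{a, 1}, {w, 2}, {p, 0}}, t];
    nlm["ParameterPValues"]   (* significance of each fitted parameter *)
    nlm["ANOVATable"]         (* sums of squares for building an overall F-test *)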
Does NonlinearModelFit have built-in ways to get significance (the probability of the fit being due to chance)? Or is there a good tutorial on using the output of Mathematica's NonlinearModelFit to do significance testing?
Marc Widdowson, 2019-04-21T13:46:43Z

Create 3DGraphics with SurfaceBuilder and BoxPlot in Mathematica 12?
https://community.wolfram.com/groups/-/m/t/1667819
In the old versions of Mathematica we generated cubic minimal surfaces using the commands:
surface = SurfaceBuilder[initpoints, symmdata, shift, edge];
box = BoxPlot[edge];
Show[box, surface, Axes -> False, PlotRange -> All, Boxed -> False]
In the last few versions the message "cannot combine graphical objects" appears. What has changed? Can anyone help, please?
Jacek Klinowski, 2019-04-23T21:58:14Z

Get Feynman diagram for two point function in Phi4 Theory?
https://community.wolfram.com/groups/-/m/t/1667232
I wrote code that generates the Feynman diagrams for the two-loop two-point function in Phi^4 theory. The code became too long because I tried to make all possible connections explicitly; I tried to shorten it but was unable to do so. I am attaching my code here.
[I don't know how to attach the code, so I'm sharing the file here][1]
[1]: https://www.dropbox.com/s/ruebfw9056c3v4w/2-loop_1%28completed%29.nb?dl=0
Bunny Roy, 2019-04-23T16:18:40Z

Inconsistency in molar mass of Oxygen?
https://community.wolfram.com/groups/-/m/t/887789
ChemicalData for "O" returns the values for water:
In[16]:= ChemicalData["O", "MolarMass"]
Out[16]= Quantity[18.0153, ("Grams")/("Moles")]
The calculation of the molar mass of silver bromate (AgBrO3) is also inconsistent:
In[17]:= ChemicalData["AgBrO3", "MolarMass"]
Out[17]= Quantity[236.778, ("Grams")/("Moles")]
Instead of the actual value of 235.771.
rafael ibanez, 2016-07-14T15:49:12Z

Unexpected Behaviour with GeoBubbleChart and GeoRange
https://community.wolfram.com/groups/-/m/t/1660255
I have a `GeoBubbleChart` and I am trying to "zoom in" into a specific region with `GeoRange`. The result is unexpected and potentially reflects a bug.
Here is an example. First let's see the expected behaviour in `GeoListPlot`. Consider this graph:
SeedRandom[1];
plotDat = EntityValue[RandomEntity["Country", 100], EntityProperty["Country", "Position"]];
GeoListPlot[plotDat]
which gives
![global GeoListPlot][1]
Now let's zoom into Europe:
GeoListPlot[plotDat, GeoRange -> {{35, 60}, {-10, 40}}]
which works nicely:
![geoListPlot_zoomed][2]
Now let's try the same with GeoBubbleChart:
SeedRandom[2];
plotDat2 = Table[p -> RandomReal[], {p, plotDat}];
GeoBubbleChart[plotDat2]
which gives
![geoBubbleChart][3]
and let's try to zoom in:
GeoBubbleChart[plotDat2, GeoRange -> {{35, 60}, {-10, 40}}]
which runs weirdly long and unexpectedly gives:
![geoBubbleChart_zoomed][4]
This is definitely unexpected and probably a bug, right?
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=geoListPlot.png&userId=1652017
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=geoListPlot_Eur.png&userId=1652017
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=geoBubbleChart.png&userId=1652017
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=geoBubbleChart_Eur.png&userId=1652017
Stephan Schiffels, 2019-04-15T07:53:30Z

Launch WolframScripts with subkernels?
https://community.wolfram.com/groups/-/m/t/1667047
Dear Wolfram users,
I am currently using a Python script that calls WolframScript for a given task. However, since my Python script can work on several cores at the same time, each WolframScript invocation opens a new kernel. On the other hand, the number of available Mathematica kernels is limited, and therefore I would like to use subkernels to run the WolframScript tasks.
So far, the simplest possibility I can come up with is something like:
- Launch a main.wls
- Start n subkernels "sub1", "sub2", "sub3"... n being the number of cores
- Launch the python script main.py (maybe with specific arguments)
- Each time main.py calls "./script.wls" (which does some computation), pass as an argument the name of a specific subkernel "subN" for it to run on.
I think this should do the job. However, I cannot find how to launch ./script.wls on a given subkernel.
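I have not tested this against an external Python driver, but within a single main.wls session ParallelEvaluate can target an individual subkernel by its kernel object, which may serve as a starting point (the script name is a placeholder):

    kernels = LaunchKernels[4];   (* e.g. one per core *)
    ParallelEvaluate[Get["script.wl"], kernels[[2]]]   (* run the script's code on subkernel 2 *)

The remaining piece would be mapping the "subN" name passed by main.py to an index into kernels.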
Any ideas, hints or documentation would be very much appreciated !
Many thanks for your help,
Jordan
Jordan Bernigaud, 2019-04-23T08:54:02Z

[✓] Solve the following equation?
https://community.wolfram.com/groups/-/m/t/1665736
Consider the following code:
TC = L + (1 - q) + SC + Ca +
Cr + (Exp[1 - RealAbs[1 - \[Mu]/Subscript[y, 0]]])*
MC + ((Exp[(Subscript[\[Sigma], 1] - \[Sigma])/(
Subscript[\[Sigma], 1] - Subscript[\[Sigma], 2])]) - 1)*DC
n = D[TC, {{\[Sigma], \[Mu]}}]
Solve[n == 0 // Rationalize, {\[Sigma], \[Mu]}]
I need to solve this equation, but it runs for a very long time.
maghfira devi, 2019-04-22T06:51:44Z

Specify meshes and boundaries in NDSolveValue?
https://community.wolfram.com/groups/-/m/t/1665050
I am trying to solve Laplace's equation in two dimensions in order to obtain the voltage field with specified regions and boundaries. One of the regions in the problem is a half-disk, and I wish to specify a voltage on its boundary. I am not sure what my problem is, but I suspect that the boundary I am generating for that half-disk is wrong.
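Since the original code is not shown, here is only a minimal sketch of how a voltage can be imposed on a half-disk boundary inside a larger region, for comparison; all geometry and voltage values below are assumptions:

    halfDisk = RegionDifference[Disk[{0, 0}, 1], Rectangle[{-1, -1}, {1, 0}]];  (* upper half-disk *)
    region = RegionDifference[Rectangle[{-4, -4}, {4, 4}], halfDisk];
    sol = NDSolveValue[{Laplacian[v[x, y], {x, y}] == 0,
        DirichletCondition[v[x, y] == 1, x^2 + y^2 <= 1.001 && y >= 0],  (* half-disk boundary *)
        DirichletCondition[v[x, y] == 0, x == -4 || x == 4 || y == -4 || y == 4]},
       v, {x, y} ∈ region];
    ContourPlot[sol[x, y], {x, y} ∈ region, PlotLegends -> Automatic]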
Ultimately, NDSolveValue appears to work but does not give a plausible answer. This problem does not require much code. It would be very helpful if someone could examine it and let me know what I am doing wrong.
Robert Curl, 2019-04-20T21:53:54Z

Analysis of crop health using NDVI (Red and Near-infrared capturing camera)
https://community.wolfram.com/groups/-/m/t/1663138
Hello Wolfram Community,
I want to take pictures of crops using a drone and better highlight the contrast between regions of healthy vegetation vs. areas with stressed/diseased or no vegetation (in future, extract more information about the crops).
I have recently removed an infrared blocking filter from a camera of a DJI Phantom 3 and instead mounted a filter that blocks every wavelength except Red and Near-infrared. I want red and near-infrared light because healthy vegetation will absorb red light and strongly reflect near-infrared light.
Incorporation of this filter allows reflected red light (660 nm) to be captured in the camera sensor's red channel and reflected near-infrared light (850 nm) to be captured in the sensor's blue channel (the green channel is not used). NDVI can be determined by performing the calculation: NDVI = (blue - red) / (blue + red).
Could you please tell me how to make a heat map (maybe using DensityPlot) or any other way to better highlight the contrast instead of the Colorize function? Or if anything comes to your mind, I would be glad to hear!
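One possibility (a sketch; img stands for the red/NIR photo, with near-infrared in the blue channel as described): compute the NDVI per pixel with ImageApply and render it with a color map, either via Colorize with an explicit ColorFunction or as a density-style plot of the pixel values:

    {r, b} = ColorSeparate[img, {"R", "B"}];   (* img: placeholder for the drone photo *)
    ndvi = ImageApply[(#1 - #2)/(#1 + #2 + 1.*^-6) &, {b, r}];  (* NDVI-like values; the small offset avoids 0/0 *)
    Colorize[ImageAdjust[ndvi], ColorFunction -> "ThermometerColors"]
    ArrayPlot[ImageData[ndvi], ColorFunction -> "TemperatureMap", PlotLegends -> Automatic]

ImageAdjust rescales the values into [0, 1] before colorizing, so the color range spans the NDVI range actually present in the image.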
![enter image description here][1]
![enter image description here][2]
This is my first attempt in post-processing.
An approximate regular camera picture of the grass spot:
![enter image description here][3]
ColorSeparate[![enter image description here][4], {"R", "B"}]
Out[1] = { ![enter image description here][5], ![enter image description here][6]}
f[pixel1_, pixel2_] := (((pixel1 - pixel2)/(pixel1 + pixel2)) + 1)/2;
ImageApply[f, {![enter image description here][7], ![enter image description here][8]}]
Out[3] = ![enter image description here][9]
The darker spots are supposed to be less healthy vegetation.
Colorize[![enter image description here][10]]
Out[4] = ![enter image description here][11]
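One possible heat-map approach, following the post's own channel convention (a sketch: `img` stands for the red/near-infrared photo, and the small constant is only there to avoid division by zero):

```
{red, nir} = ColorSeparate[img, {"R", "B"}]; (* near-infrared is stored in the blue channel *)
ndvi = ImageApply[(#1 - #2)/(#1 + #2 + 10.^-6) &, {nir, red}];
Colorize[ImageAdjust[ndvi], ColorFunction -> "TemperatureMap"]
```

ImageAdjust rescales the NDVI values to the full [0, 1] range before the colour map is applied, so low-vegetation areas stand out more than with plain Colorize.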
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-04-17at15.25.21.png&userId=1343397
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-04-17at15.26.08.png&userId=1343397
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=IMG_7343.jpg&userId=1343397
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1.jpeg&userId=1343397
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2.jpeg&userId=1343397
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3.jpeg&userId=1343397
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3.jpeg&userId=1343397
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2.jpeg&userId=1343397
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=45.jpeg&userId=1343397
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=45.jpeg&userId=1343397
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=6.jpeg&userId=1343397Zhamilya Bilyalova2019-04-17T19:02:58ZGet ImageRestyle neural network runtime in Mathematica 12?
https://community.wolfram.com/groups/-/m/t/1663030
Hi,
I was playing with Mathematica 12 and ImageRestyle, and the system generated an error message:
The neural network is unavailable. With Mathematica 11.3 it always worked.
Stacktrace shows:
ImageRestyle::nnlibunavail
Running Windows 10.
As a comment: before ImageRestyle started, it first downloaded the GPU libraries. That was 900 MB!
After the download, Mathematica showed a message that a restart was needed. After the restart the message above was generated.l van Veen2019-04-17T19:49:31ZComputable Famous Theorems of Geometry
https://community.wolfram.com/groups/-/m/t/1664846
[GeometricScene][1] and [FindGeometricConjectures][2] are two of my favorite new functions in Wolfram Language V12. V12 provides innovative automated capabilities to draw and reason about abstractly described scenes in the plane.
I also remember proving famous geometry theorems over many days when I was a junior high school student. Below I show nine theorems, including ones from the Documentation Center and the [WOLFRAM blog][3].
## Thales's Theorem ##
If A, B, and C are distinct points on a circle where the line AC is a diameter, then the angle \[Angle]ABC is a right angle.
gs = GeometricScene[{"A", "B", "C", "O"},
{Triangle[{"A", "B", "C"}],
CircleThrough[{"A", "B", "C"}, "O"],
"O" == Midpoint[{"A", "C"}],
Style[Line[{"A", "B"}], Orange],
Style[Line[{"B", "C"}], Orange]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][4]
![enter image description here][5]
## Napoleon's Theorem ##
If regular triangles are constructed on the sides of any triangle, either all outward or all inward, the lines connecting the centres of those regular triangles themselves form a regular triangle.
gs = GeometricScene[{"C", "B", "A", "C'", "B'", "A'", "Oc", "Ob",
"Oa"},
{Triangle[{"C", "B", "A"}],
TC == Triangle[{"A", "B", "C'"}],
TB == Triangle[{"C", "A", "B'"}],
TA == Triangle[{"B", "C", "A'"}],
GeometricAssertion[{TC, TB, TA}, "Regular"],
"Oc" == TriangleCenter[TC, "Centroid"],
"Ob" == TriangleCenter[TB, "Centroid"],
"Oa" == TriangleCenter[TA, "Centroid"],
Style[Triangle[{"Oc", "Ob", "Oa"}], Orange]}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][6]
![enter image description here][7]
## Finsler-Hadwiger Theorem ##
ABCD and AB'C'D' are two squares with common vertex A (the primed points are written BB, CC, DD in the code below). Let Q and S be the midpoints of B'D and BD' respectively, and let R and T be the centers of the two squares. Then the quadrilateral QRST is a square as well.
gs = GeometricScene[{"A", "B", "C", "D", "BB", "CC", "DD", "Q", "R", "S", "T"},
{GeometricAssertion[{Polygon[{"A", "B", "C", "D"}],
Polygon[{"A", "BB", "CC", "DD"}]}, "Regular", "Counterclockwise"],
"Q" == Midpoint[{"BB", "D"}],
"R" == Midpoint[{"A", "C"}],
"S" == Midpoint[{"B", "DD"}],
"T" == Midpoint[{"A", "CC"}],
Style[Polygon[{"Q", "R", "S", "T"}], Orange]}];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][8]
![enter image description here][9]
## Echols' Theorem ##
Given two equilateral triangles ABC and DEF, the midpoints of AD, BE, and CF form a regular triangle.
gs = GeometricScene[{"A", "B", "C", "D", "E", "F", "L", "M", "N"},
{T1 == Triangle[{"A", "B", "C"}],
T2 == Triangle[{"D", "E", "F"}],
GeometricAssertion[{T1, T2}, "Regular"],
"L" == Midpoint[{"A", "D"}],
"M" == Midpoint[{"B", "E"}],
"N" == Midpoint[{"C", "F"}],
Style[Triangle[{"L", "M", "N"}], Orange]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][10]
![enter image description here][11]
## Simson's Theorem & Steiner's Theorem ##
Simson's Theorem states that, given a triangle ABC and a point P on its circumcircle, the feet of the perpendiculars from P to the lines AB, AC, and BC are collinear. Steiner's Theorem states that if H is the orthocenter of triangle ABC, then the Simson line of P passes through the midpoint of PH.
gs = GeometricScene[{"A", "B", "C", "P", "L", "M", "N", "H", "S"},
{CircleThrough[{"P", "A", "B", "C"}],
"L" \[Element] InfiniteLine[{"B", "C"}],
"M" \[Element] InfiniteLine[{"C", "A"}],
"N" \[Element] InfiniteLine[{"A", "B"}],
PlanarAngle[{"P", "L", "B"}] == 90 \[Degree],
PlanarAngle[{"P", "M", "C"}] == 90 \[Degree],
PlanarAngle[{"P", "N", "A"}] == 90 \[Degree],
Style[InfiniteLine[{"L", "M"}], Orange],
GeometricAssertion[{InfiniteLine[{"A", "H"}], Line[{"B", "C"}]},
"Perpendicular"],
GeometricAssertion[{InfiniteLine[{"B", "H"}], Line[{"A", "C"}]},
"Perpendicular"],
Style[Line[{"P", "H"}], Orange],
Line[{"P", "S", "H"}], Line[{"L", "S", "M"}]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][12]
![enter image description here][13]
## Van Aubel's Theorem ##
Starting with a given quadrilateral (a polygon having four sides), construct a square on each side. The two line segments between the centers of opposite squares are of equal length and are at right angles to one another.
gs = GeometricScene[{"A", "B", "C", "D", "A'", "A''", "B'",
"B''", "C'", "C''", "D'", "D''", "Oa", "Ob", "Oc", "Od"},
{GeometricAssertion[Polygon[{"A", "B", "C", "D"}], "Convex"],
GeometricAssertion[{pa = Polygon[{"A", "B", "A'", "A''"}],
pb = Polygon[{"B", "C", "B'", "B''"}],
pc = Polygon[{"C", "D", "C'", "C''"}],
pd = Polygon[{"D", "A", "D'", "D''"}]}, "Regular",
"Counterclockwise"],
"Oa" == Midpoint[{"A", "A'"}],
"Ob" == Midpoint[{"B", "B'"}],
"Oc" == Midpoint[{"C", "C'"}],
"Od" == Midpoint[{"D", "D'"}],
Style[Line[{"Oa", "Oc"}], Orange],
Style[Line[{"Ob", "Od"}], Orange]}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][14]
![enter image description here][15]
## Brahmagupta's Theorem ##
If a cyclic quadrilateral has perpendicular diagonals, then the perpendicular to a side from the point of intersection of the diagonals always bisects the opposite side.
gs = GeometricScene[{"A", "B", "C", "D", "E", "M"},
{Polygon[{"A", "B", "C", "D"}],
CircleThrough[{"A", "B", "C", "D"}],
GeometricAssertion[{Line[{"A", "C"}], Line[{"B", "D"}]},
"Perpendicular"],
Line[{"A", "E", "C"}], Line[{"B", "E", "D"}],
"M" == Midpoint[{"A", "B"}],
Style[InfiniteLine[{"M", "E"}], Orange],
Style[Line[{"C", "D"}], Orange]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][16]
![enter image description here][17]
## Morley's Theorem ##
In any triangle, the three points of intersection of the adjacent angle trisectors form a regular triangle.
gs = GeometricScene[{"A", "B", "C", "D", "E", "F"},
{Triangle[{"A", "B", "C"}],
PlanarAngle[{"A", "B", "F"}] == PlanarAngle[{"A", "B", "C"}]/3,
PlanarAngle[{"F", "A", "B"}] == PlanarAngle[{"C", "A", "B"}]/3,
PlanarAngle[{"C", "B", "D"}] == PlanarAngle[{"C", "B", "A"}]/3,
PlanarAngle[{"B", "C", "D"}] == PlanarAngle[{"B", "C", "A"}]/3,
PlanarAngle[{"A", "C", "E"}] == PlanarAngle[{"A", "C", "B"}]/3,
PlanarAngle[{"C", "A", "E"}] == PlanarAngle[{"C", "A", "B"}]/3,
"D" \[Element] Triangle[{"A", "B", "C"}],
"E" \[Element] Triangle[{"A", "B", "C"}],
"F" \[Element] Triangle[{"A", "B", "C"}],
Style[Triangle[{"D", "E", "F"}], Orange]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][18]
![enter image description here][19]
[1]: https://reference.wolfram.com/language/ref/GeometricScene.html
[2]: https://reference.wolfram.com/language/ref/FindGeometricConjectures.html
[3]: https://blog.wolfram.com/2019/04/16/version-12-launches-today-big-jump-for-wolfram-language-and-mathematica/
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=122401.jpg&userId=1013863
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=801502.jpg&userId=1013863
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=508803.jpg&userId=1013863
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=222204.jpg&userId=1013863
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=289105.jpg&userId=1013863
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=932006.jpg&userId=1013863
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=876507.jpg&userId=1013863
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=184108.jpg&userId=1013863
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=350909.jpg&userId=1013863
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=500710.jpg&userId=1013863
[14]: https://community.wolfram.com//c/portal/getImageAttachment?filename=318711.jpg&userId=1013863
[15]: https://community.wolfram.com//c/portal/getImageAttachment?filename=254812.jpg&userId=1013863
[16]: https://community.wolfram.com//c/portal/getImageAttachment?filename=308813.jpg&userId=1013863
[17]: https://community.wolfram.com//c/portal/getImageAttachment?filename=934714.jpg&userId=1013863
[18]: https://community.wolfram.com//c/portal/getImageAttachment?filename=367015.jpg&userId=1013863
[19]: https://community.wolfram.com//c/portal/getImageAttachment?filename=16.jpg&userId=1013863Kotaro Okazaki2019-04-20T14:42:58ZDecorating Easter Eggs with the Planets
https://community.wolfram.com/groups/-/m/t/1665923
![The planets as Easter eggs][1]
Just thought I'd share this fun little exercise. First, we need to get the textures to use.
textures =
ImageReflect[#, Right] & /@
EntityValue["Planet", "CylindricalEquidistantTexture"];
Then, we need to plot parametric surfaces that look like eggs and apply the textures to them.
GraphicsGrid[
 Partition[
  With[{l = .75, a = 1, b = 1},
    ParametricPlot3D[
      Evaluate[
       RotationMatrix[Pi/2, {0, 1, 0}].{l Cos[t] + (a + b Cos[t]) Cos[t],
         (a + b Cos[t]) Sin[t] Cos[p],
         (a + b Cos[t]) Sin[t] Sin[p]}],
      {p, 0, 2 Pi}, {t, 0, Pi},
      Mesh -> None, Boxed -> False, Axes -> False,
      Lighting -> "Neutral", PlotStyle -> Texture[#],
      ViewPoint -> Left, PlotPoints -> 50, Background -> Black,
      SphericalRegion -> True, ViewAngle -> Pi/6]] & /@ textures,
  4],
 Spacings -> {0, 0}, ImageSize -> 800]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=PlanetEaster.png&userId=25355Jeff Bryant2019-04-21T20:44:15Z[✓] Use bubble points in Plot?
https://community.wolfram.com/groups/-/m/t/1666411
Hi
How do I replace the normal points with bubble points as in the plot below:
m = {{0.676168282`, 0.65776549947501`}, {0.671137774`, 0.659222459526278`},
   {0.671010714`, 0.659990068731665`}, {0.671762604`, 0.664138272302896`},
   {0.67218621`, 0.66552936163932`}, {0.673216749`, 0.668311540479097`},
   {0.67799459`, 0.678049166167921`}, {0.679546306`, 0.680831345007698`},
   {0.686881695`, 0.693351149536299`}, {0.687708356`, 0.694742238872723`},
   {0.668473616`, 0.65776549947501`}, {0.656997668`, 0.65776549947501`},
   {0.653419063`, 0.662747182883008`}};
ListPlot[m, Mesh -> All, ImageSize -> 500, AspectRatio -> Automatic,
TicksStyle -> Directive[Black, 15],
AxesStyle -> Directive[Black, 12], Ticks -> Automatic,
GridLines -> Automatic, Axes -> True,
PlotRange -> {{0.64, 0.70}, {0.64, 0.70}},
PlotStyle -> {PointSize[.02], Orange},
Epilog -> Line[{{0, 0}, {1, 1}}]]
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Untitled11.png&userId=943918M.A. Ghorbani2019-04-22T13:21:03Z[WSC18] Analysing Image Dithering
https://community.wolfram.com/groups/-/m/t/1383824
![image dithering cover][1]
This summer, as part of the Wolfram High School Summer Camp program, I decided to do a project analysing image dithering, and the various ways to do it. Over the course of two weeks, I learnt and understood how the algorithms work. Since resources about this process are sparse on the internet, *in this post, I not only implement the algorithms, but additionally describe in detail what image dithering is and how it works.*
*The second part of my project was to use machine learning to classify images that originate from different image dithering algorithms.*
After failing to get high accuracy with a large number of machine learning techniques, I finally came up with one that has an **accuracy of about 90%**.
Note that this community post is an adapted version of my computational essay. *My computational essay is attached at the bottom of the post.*
# What is image dithering?
![image-dithering-FS-example][2]
*Can you tell the difference between the two images above?*
The image on the right uses **just 24 colours**.
Yet, using Floyd-Steinberg dithering, it manages to look as detailed and beautiful as the one on the left, which uses more than 16 million colours!
Let's formally define the aim of image dithering:
Given an RGB image with $256^3$ colors, reduce the colour space of the image to only belong to a certain color palette.
![chromaticity-plot comparison][3]
*A comparison of the chromaticity plots of the above images*
# Color Palette
First, let us tackle the easier problem at hand. Say that we are given the option to choose our color palette, the only restriction being the number of colours in the palette. What would be the best way to obtain a color palette that is the most appropriate for an image?
Thankfully, the Wolfram Language makes this task very easy for us, as we can simply use the inbuilt function `DominantColors[image, n]`. For example, regarding the image above:
![dominant-colors of the image][4]
would be the most appropriate color palette with 12 colours.
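For the dithering code later in this post, the palette is needed as raw 0-255 RGB triples; that conversion is a one-liner (a sketch; `img` stands for the image above):

```
colorAvailable = Round[List @@@ ColorConvert[DominantColors[img, 12], "RGB"]*255]
```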
Here are some visualisations of the process of choosing the color palette in 3D RGB space.
The color palette of the original image:
![original color palette][5]
The color palette of the dithered image:
![final color palette][6]
*Notice how the final color palette is quite close to points on the diagonal of the cube. I go into a lot more detail about this in my computational essay.*
**Now, let us try to solve the main part of the problem, actually figuring out the mapping of a pixel colour to its new colour.**
# Naive idea
```
colorPalette = {0, 32, 64, 96, 128, 159, 191, 223};
pixel = {{34, 100, 222},
         {200, 50, 150}};
result = pixel;
Do[
  result[[x, y]] = First[Nearest[colorPalette, pixel[[x, y]]]],
{x, Length[pixel]},
{y, Length[First[pixel]]}
];
Grid[result, Frame -> All, Spacings -> {1, 1}]
```
![grid naive idea][7]
## Extra!
I had to implement my own (but slower) version of the Wolfram function `Nearest` for the final functions, since pre-compiling Wolfram code to C code does not support `Nearest` natively.
However, at the time of writing, I have heard that there is an internal project at Wolfram to enable compiling all inbuilt Wolfram functions to C natively.
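A hand-rolled nearest-colour helper along those lines might look like this (a sketch, not the exact compiled code used later in this post):

```
nearestColor[palette_, v_] := First[MinimalBy[palette, Total[(# - v)^2] &]]

nearestColor[{{0., 0., 0.}, {255., 255., 255.}}, {10., 20., 30.}]  (* {0., 0., 0.} *)
```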
# Better idea
As one can guess, this idea can be improved a lot. One of the important ideas is that of "smoothing": we want the transition between two objects/colours to look smoother. One way to do that would be to make a gradient as the transition occurs.
However, how do you formalise a "gradient"? And how do you make a smooth one when all you have are 24 colours?
Dithering basically attempts to solve these problems.
To solve our questions, let's think about a pixel: what's the best way to make it look closest to its original colour?
In the naive idea, we created some error by rounding to some nearby values.
For each pixel, let's formalise the error as:
![error in pixel[1, 1] screenshot][8]
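In code, the error is just the difference between a value and its nearest palette entry (a sketch reusing the palette from the naive idea; `pixelError` is a made-up helper):

```
colorPalette = {0, 32, 64, 96, 128, 159, 191, 223};
pixelError[v_] := v - First[Nearest[colorPalette, v]]

pixelError[100]  (* 100 rounds to 96, so the error is 4 *)
```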
## Error diffusion
It is clear that the error should somehow be transmitted to the neighbouring elements so we can account for the error in a pixel in its neighbours. To maintain an even order of processing, let us assume that we will traverse the 2D array of pixels from the top-left corner, row-wise until we reach the bottom-right corner.
Therefore, it never makes sense to "push" the effects of an error to a cell we've already processed. Finally, let us see some ways to actually diffuse the error across the image.
# Floyd - Steinberg Dithering
In 1976, Robert Floyd and Louis Steinberg published the most popular dithering algorithm<sup>1</sup>. The pattern for error diffusion can be described as:
```
diffusionFormula = {{0, 0, 7},
{3, 5, 1}} / 16;
diffusionPosition = {1, 2};
```
## What does this mean?
`diffusionFormula` is simply a way to encode the diffusion from a pixel.
`diffusionPosition` refers to the position of the pixel, relative to the `diffusionFormula` encoding.
So, for example, an error of `2` at pixel `{1, 1}` translates to the following additions:
```
pixel[[1, 2]] += error*(7/16);
pixel[[2, 1]] += error*(3/16);
pixel[[2, 2]] += error*(5/16);
pixel[[2, 3]] += error*(1/16);
```
![grid floyd steinberg error diffusion][9]
![floyd Steinberg dithering][10]
## How does one come up with these weird constants?
Notice how the numerator constants are the first 4 odd numbers.
The pattern is chosen specifically to create an even checkerboard pattern for perfect grey images using black and white.
![grayscale floyd steinberg dithering example][11]
*Example Grayscale image dithered with Floyd Steinberg*
![grayscale picture in picture thing][12]
*Note the checkerboard pattern in the image above.*
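The checkerboard behaviour can be reproduced with a minimal grayscale version of the algorithm (a sketch; `fsGray` is a made-up helper with the palette fixed to {0, 1}, not the compiled implementation shown later):

```
fsGray[grid_] := Module[{a = N[grid], res = N[grid], m, n, old, new, err},
  {m, n} = Dimensions[a];
  Do[
   old = a[[x, y]];
   new = Clip[Round[old], {0, 1}]; (* nearest of 0 and 1 *)
   err = old - new;
   res[[x, y]] = new;
   If[y < n, a[[x, y + 1]] += err 7/16];
   If[x < m,
    If[y > 1, a[[x + 1, y - 1]] += err 3/16];
    a[[x + 1, y]] += err 5/16;
    If[y < n, a[[x + 1, y + 1]] += err 1/16]],
   {x, m}, {y, n}];
  res]

fsGray[ConstantArray[0.5, {4, 4}]] (* try it on a perfect 50% gray patch *)
```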
# Atkinson Dithering
Relative to the other dithering algorithms here, Atkinson's algorithm diffuses a lot less of the error to its surroundings. It tends to preserve detail well, but very continuous sections of colours appear blown out.
This was made by Bill Atkinson<sup>2</sup>, an Apple employee. The pattern for error diffusion is as below:
```
diffusionFormula = {{0, 0, 1, 1},
{1, 1, 1, 0},
{0, 1, 0, 0}} / 8;
diffusionPosition = {1, 2};
```
![atkinson dithering example][13]
# Jarvis, Judice, and Ninke Dithering
This algorithm<sup>3</sup> spreads the error over more rows and columns; therefore, images should be softer (in theory).
The pattern for error diffusion is as below:
```
diffusionFormula = { {0, 0, 0, 7, 5},
{3, 5, 7, 5, 3},
{1, 3, 5, 3, 1}} / 48;
diffusionPosition = {1, 3};
```
![JJN algorithm example][14]
#### The final 2 dithering algorithms come from Frankie Sierra, who published the Sierra and Sierra Lite matrices<sup>4</sup> in 1989 and 1990 respectively.
# Sierra Dithering
Sierra dithering is based on Jarvis dithering, so it has similar results, but it's negligibly faster.
```
diffusionFormula = { {0, 0, 0, 5, 3},
{2, 4, 5, 4, 2},
{0, 2, 3, 2, 0}} / 32 ;
diffusionPosition = {1, 3};
```
![sierra dithering example][15]
# Sierra Lite Dithering
This yields results similar to Floyd-Steinberg dithering, but is faster.
```
diffusionFormula = {{0, 0, 2},
{0, 1, 1}} / 4;
diffusionPosition = {1, 2};
```
![Sierra Lite dithering][16]
# Comparison
Here's an interactive comparison of the algorithms on different images:
```
Manipulate[
Dither[im, c,
StringDelete[StringDelete[StringDelete[algo, " "], ","],
"-"]], {{im, image, "Image"}, {im1, im2, im3}}, {{c, 12,
"Number of colours"}, 2, 1024, 1},
{{algo, "Floyd-Steinberg", "Algorithm"}, {"Floyd-Steinberg",
"Jarvis, Judice, Ninke", "Atkinson", "Sierra", "Sierra Lite"}}]
```
Download my computational essay to see it in action. Alternatively, use the functions in the "Implementation" section to evaluate the code. `im1`, `im2`, `im3` can be replaced by images. I have submitted the comparison to [Wolfram Demonstrations][17] as well, so it should be available online soon.
# Side-note
Notice how the denominator in the `diffusionFormula` of a number of algorithms is a power of $2$?
This is because division by a power of 2 is equivalent to [bit-shifting][18] the number to the right by $\log_{2}(divisor)$ bits, making it much faster than division by any other number.
Given the improvements in computer hardware, this is not a major concern anymore.
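For instance, dividing by 16 is the same as shifting right by 4 bits, which is easy to check:

```
BitShiftRight[112, 4] == Quotient[112, 16]  (* True: both give 7 *)
```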
## Come up with your own dithering algorithm!
I noticed that the dithering algorithms are almost the same as each other (especially the `diffusionPosition`).
However, I have made my functions so that you can just tweak the input arguments `diffusionFormula` and `diffusionPosition`, and test out your own functions!
Here's one I tried:
![my own dithering algorithm][19]
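Using the list form of `Dither` from the Implementation section, a custom pattern can be tried like this (a sketch; `img` is any image, and the formula here is invented purely for illustration):

```
myFormula = {{0, 0, 4},
             {1, 2, 1}}/8;
myPosition = {1, 2};
Dither[img, 24, myFormula, myPosition]
```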
# Implementation
In this section, I will discuss the Wolfram implementation, and some of the features and functions I used in my code.
## applyDithering
Even though it's probably the least interesting part, actually applying the dithering is the most important piece of the algorithm, and the one that gave me the most trouble.
I started off by writing it in the functional paradigm. With little knowledge of Wolfram, I stumbled through the docs to assemble pieces of the code. Finally, I had a "working" version of the algorithm, but there was a major problem: A $512\cdot512$ RGB image took 700 seconds for processing!
This number is far too large for an algorithm whose time complexity is linear in the size of the input.
### Fixes
Some of the trivial fixes involved making more use of inbuilt functions (for example, `Nearest`).
The larger problem is that the Wolfram notebook is an interpreter, not a compiler: it interprets code every time it's run.
So the obvious step to optimising performance was using the `Compile` function in Wolfram.
But, there's a catch!
```
T (R1) 18 = MainEvaluate[ Hold[List][ I27, I28, T (R1) 16, R6]]
95 Element[ T (R2) 17, I20] = T (R1) 18
```
If you see something like the above in your machine code, your code is likely to be slow.
`MainEvaluate` basically means that the compiled function is calling back into the kernel, which is also a slow process.
To view the human readable form of your compiled Wolfram function, you can use:
```
Needs["CompiledFunctionTools`"];
CompilePrint[yourFunction]
```
To fix this, you need to basically write everything in a procedural form using loops and similar constructs.
The final step was `RuntimeOptions -> "Speed"`, which trades off some integer overflow checks etc. for a faster runtime.
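As a toy illustration of `Compile` with these options (a sketch; it assumes a C compiler is installed so that `CompilationTarget -> "C"` works):

```
square = Compile[{{x, _Real}}, x*x,
   CompilationTarget -> "C", RuntimeOptions -> "Speed"];

square[3.]  (* 9. *)
```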
Find the complete code for the function below:
```
applyDithering =
Compile[{{data, _Real, 3}, {diffusionList, _Real,
2}, {diffusionPos, _Integer, 1}, {colors, _Real, 2}},
Module[{lenx, leny, lenz, lenxdiff, lenydiff, error, val,
realLoc, closestColor, closestColordiff, res, a, diff = data,
diffusionFormula = diffusionList, xx, yy, curxx, curyy,
colorAvailable = colors, tmp, diffusionPosition = diffusionPos,
idx},
{lenx, leny, lenz} = Dimensions[data];
{lenxdiff, lenydiff} = Dimensions[diffusionFormula];
a = data;
res = data;
Do[
val = a[[x, y]];
realLoc = {x - diffusionPos[[1]] + 1,
y - diffusionPos[[2]] + 1};
closestColor = {1000000000., 1000000000., 1000000000.};
closestColordiff = 1000000000.;
Do[
tmp = N[Total[Table[(i[[idx]] - val[[idx]])^2, {idx, 3}]]];
If[tmp < closestColordiff,
closestColordiff = tmp;
closestColor = i;
];,
{i, colorAvailable}
];
error = val - closestColor;
res[[x, y]] = closestColor;
Do[
curxx = realLoc[[1]] + xx - 1;
curyy = realLoc[[2]] + yy - 1;
If[curxx > 0 && curxx <= lenx && curyy > 0 && curyy <= leny,
a[[curxx, curyy, z]] += error[[z]]*diffusionFormula[[xx, yy]]];,
{xx, lenxdiff},
{yy, lenydiff},
{z, 3}
];,
{x, lenx},
{y, leny}
];
Round[res]
],
CompilationTarget -> "C",
RuntimeOptions -> "Speed"
];
```
## Dither
This is the main function that uses `applyDithering`. There are multiple definitions of the function: one that takes a named algorithm with hardcoded constants, and another that lets you easily implement your own dithering algorithm.
```
(* This is the implementation that takes the algorithm name and applies it *)
Dither[img_Image, colorCount_Integer, algorithm_String: ("FloydSteinberg" | "JarvisJudiceNinke" |
"Atkinson" | "Sierra" | "SierraLite")] :=
Module[{diffusionFormulaFS, diffusionPositionFS, diffusionFormulaJJN, diffusionPositionJJN, diffusionFormulaA,
diffusionPositionA, diffusionFormulaS, diffusionPositionS, diffusionFormulaSL, diffusionPositionSL},
(* Floyd Steinberg algorithm constants *)
diffusionFormulaFS = {{0, 0, 7},
{3, 5, 1}} / 16;
diffusionPositionFS = {1, 2};
(* Jarvis, Judice, and Ninke algorithm constants *)
diffusionFormulaJJN = {{0, 0, 0, 7, 5},
{3, 5, 7, 5, 3},
{1, 3, 5, 3, 1}} / 48;
diffusionPositionJJN = {1, 3};
(* Atkinson algorithm constants *)
diffusionFormulaA = {{0, 0, 1, 1},
{1, 1, 1, 0},
{0, 1, 0, 0}} / 8 ;
diffusionPositionA = {1, 2};
(* Sierra algorithm constants *)
diffusionFormulaS = {{0, 0, 0, 5, 3},
{2, 4, 5, 4, 2},
{0, 2, 3, 2, 0}} / 32 ;
diffusionPositionS = {1, 3};
(* Sierra Lite algorithm constants*)
diffusionFormulaSL = {{0, 0, 2},
{0, 1, 1}} / 4;
diffusionPositionSL = {1, 2};
colorAvailable =
Round[List @@@ ColorConvert[DominantColors[img, colorCount], "RGB"] * 255];
Switch[algorithm,
"FloydSteinberg",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaFS, diffusionPositionFS, colorAvailable],
"Byte"],
"JarvisJudiceNinke",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaJJN, diffusionPositionJJN, colorAvailable],
"Byte"],
"Atkinson",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaA, diffusionPositionA, colorAvailable],
"Byte"],
"Sierra",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaS, diffusionPositionS, colorAvailable],
"Byte"],
"SierraLite",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaSL, diffusionPositionSL, colorAvailable],
"Byte"]
]
];
(* This is the function that makes it easy to make your own dithering algorithm *)
Dither[img_Image, colorCount_Integer, diffusionFormula_List, diffusionPosition_List] := Module[{},
colorAvailable = Round[List @@@ ColorConvert[DominantColors[img, colorCount], "RGB"] * 255];
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormula, diffusionPosition, colorAvailable],
"Byte"]
];
```
# **Classifying images**
The second part of my project involves classifying dithered images and mapping them to the algorithm they were obtained from.
This sounds like a relatively easy task for machine learning, but it turned out to be much harder. Moreover, I could not find any similar prior research on this kind of image "metadata", which made the task more rewarding.
I ended up creating a model which has an **accuracy of more than 90%**, which is reasonably good for machine learning.
If you are uninterested in the failures encountered and the details of the dataset used, please skip ahead to the section on "ResNet-50 with preprocessing".
# Dataset
To obtain data, I did a web search for images with a keyword chosen randomly from a dictionary of common words.
The images obtained are then run through the five algorithms I implemented and stored as the training data. This is to ensure that the images aren't distinguished much in terms of their actual contents, since that would interfere with learning about the dithering algorithms used in the image.
The images were allowed to use up to 24 colours which are auto-selected, as described in the section on "Color Palette".
Here is the code for downloading the images, applying the dithering, and storing the results. Note that it is not designed with re-usability in mind; these are just snippets coded at the speed of thought:
```
(* This function scrapes random images from the internet and stores them to my computer *)
getImages[out_Integer, folderTo_String] := Module[{},
res = {};
Do [
Echo[x];
l = RemoveAlphaChannel /@
Map[ImageResize[#, {512, 512}] &,
Select[WebImageSearch[RandomChoice[WordList["CommonWords"]],
"Images"],
Min[ImageDimensions[#]] >= 512 &]];
AppendTo[res, Take[RandomSample[l], Min[Length[l], 2]]];
Pause[10];,
{x, out}
];
MapIndexed[Export[
StringJoin[folderTo, ToString[97 + #2[[1]]], "-",
ToString[#2[[2]]], ".png"], #1] &, res, {2}]
]
(* This function applies the dithering and stores the image *)
applyAndStore[folderFrom_String, folderTo_String] := Module[{},
images = FileNames["*.png", folderFrom];
origImages = Map[{RemoveAlphaChannel[Import[#]], #} &, images];
Map[Export[StringJoin[folderTo, StringTake[#[[2]], {66, -1}]],
Dither[#[[1]], 24, "Sierra"]] &, origImages]
];
```
Here are some more variable definitions and metadata about the dataset that is referenced in the following sections.
![dataset metadata][20]
# Plain ResNet-50
My first attempt was to use a popular neural net named "ResNet-50 Trained on ImageNet Competition Data"<sup>5</sup> and retrain it on my training data.
One of the major reasons for choosing this architecture was that it identifies the main object in an image very accurately, and is quite deep. Both these properties seemed very suitable for my use case.
However, the results turned out to be very poor. When I noticed this during the training session, I stopped the process early on. It can be speculated that the poor results were because it was trying to infer relations between the colours in the image.
# Border classification
Since the borders of an image are least affected by the dithering algorithm, being simply rounded to the closest colours, it should be easier to learn the constants of the `diffusionFormula` from them.
Therefore, we can pre-process an image and only use its border pixels for classification.
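Extracting just the border pixels could be sketched as follows (`borderPixels` is a hypothetical helper, not the code used for the nets below):

```
borderPixels[img_] := Module[{d = ImageData[img]},
  Join[First[d], Last[d], d[[All, 1]], d[[All, -1]]]]
```

For a $512 \cdot 512$ image this yields $512 \cdot 4$ pixel values, matching the data sizes discussed below.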
## borderNet
Observing the aforementioned fact, I implemented a neural net which tried to work with just the borders of the image. This decreased the size of the data to $512 \cdot 4$ per image.
## borderNet with just left and top border
Since my implementation of the dithering algorithms starts by applying the algorithm from the top-left corner, the pattern in the left and top borders should be even easier for the net to learn. However, this decreased the size of the data even more to $512 \cdot 2$ per image.
Both the nets failed to work very well, and had **accuracies of around 20%**. This was probably the case because of the lack of data for the net to actually train well enough.
Wolfram code for the net follows:
```
borderNet = NetChain[{
LongShortTermMemoryLayer[100],
SequenceLastLayer[],
DropoutLayer[0.3],
LinearLayer[Length@classes],
SoftmaxLayer[]
},
"Input" -> {"Varying", 3},
"Output" -> NetDecoder[{"Class", classes}]
]
```
# Row and column specific classification
The aim with this approach was to first make the neural net infer patterns in the columns of the image, then combine that information and observe patterns in the rows of the image.
This didn't work very well either. The major reason for the failure was probably that the diffusion is not really as independent as the net might assume it to be.
# Row and column combined classification
Building on the results of Section 2.5, the right method to do the processing seemed to be to use two separate chains, and a `CatenateLayer` to combine the results.
For understanding the architecture, observe the `NetGraph` object below:
![netgraph-lstmNet branchy thingy][21]
The Wolfram language code for the net is as follows:
```
lstmNet = NetGraph[{
TransposeLayer[1 <-> 2],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 3}], "Input" -> {512, 512, 3}],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 50}], "Input" -> {512, 512, 50}],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 3}], "Input" -> {512, 512, 3}],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 50}], "Input" -> {512, 512, 50}],
TransposeLayer[1 <-> 2],
SequenceLastLayer[],
SequenceLastLayer[],
LongShortTermMemoryLayer[25],
LongShortTermMemoryLayer[25],
SequenceLastLayer[],
SequenceLastLayer[],
CatenateLayer[],
DropoutLayer[0.3],
LinearLayer[Length@classes],
SoftmaxLayer[]
}, {
NetPort["Input"] -> 2 -> 3 -> 7 -> 9 -> 11,
NetPort["Input"] -> 1 -> 4 -> 5 -> 6 -> 8 -> 10 -> 12,
{11, 12} -> 13 -> 14 -> 15 -> 16
},
"Input" -> {512, 512, 3},
"Output" -> NetDecoder[{"Class", classes}]
];
```
However, this net didn't work very well either.
The architecture is somewhat unconventional, and its excessive parameter count crashed the Wolfram kernel, so the layer sizes had to be cut down.
Ultimately, it only managed an **accuracy of around 25-30%**.
# ResNet-50 with preprocessing
The final idea was to use pre-processing to our advantage. Dithering, in its essence, shifts the error downward and to the right. Therefore, one way to filter the image is to pad it with one row of pixels at the top and one column at the left, and subtract the padded image from the original one.
Here's an example of what that looks like:
![FS image preprocessing for net][22]
The code for doing this to an image is as simple as:
```
img - ImageTake[ImagePad[img, 1], {1, 512}, {1, 512}]
```
*Notice how the processed image (right side) resembles the parts with the "checkerboard" pattern described in the section "How does one come up with these weird constants?" under "Floyd-Steinberg Dithering".*
The main reason this net works well is that, even with the same color palette, the gradients of images produced by different dithering algorithms are quite different. Because of these differences in error diffusion, subtracting the padded image from the original yields a filtered version of the dithering patterns, making them easy for the neural net to spot.
The net was trained on AWS for more than 7 hours, on a larger dataset of 1500 images.
The results exceeded my expectations: on a test set of more than 700 images, 300 of which were part of the original training data, the net showed an **accuracy of nearly 91%**.
![classifier measurement object][23]
Here is the code for the net:
```
baseModel =
NetTake[NetModel["ResNet-50 Trained on ImageNet Competition Data",
"UninitializedEvaluationNet"], 23]
net = NetChain[{
NetReplacePart[baseModel,
"Input" -> NetEncoder[{"Image", {512, 512}}]],
LinearLayer[Length@classes],
SoftmaxLayer[]},
"Input" -> NetEncoder[{"Image", {512, 512}, ColorSpace -> "RGB"}],
"Output" -> NetDecoder[{"Class", classes}]
]
```
So, it's just **ResNet-50**, modified to work with $512 \cdot 512$ images.
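The post doesn't include the training call itself; a minimal sketch of how such a net might be trained follows. The `trainingData` variable, the option values, and the use of a GPU are all assumptions, not taken from the post:

```
(* sketch only: "trainingData" is assumed to be a list of image -> class rules *)
trained = NetTrain[net, trainingData,
  ValidationSet -> Scaled[0.1],  (* hold out 10% for validation *)
  MaxTrainingRounds -> 50,       (* assumed; tune to taste *)
  TargetDevice -> "GPU"          (* assumed; CPU training would be far slower *)
 ]
```

The result can then be evaluated on held-out images with `ClassifierMeasurements`-style checks, as in the accuracy figures quoted above.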
# Future Work
- Look into using machine learning for un-dithering an image.
- Look into creating new dithering algorithms that perform faster or better than the existing ones.
# Notes
All images and visualisations in this post were generated in Wolfram. Their code may be seen in the computational essay attached below.
I would like to thank all the mentors, especially Greg "Chip" Hurst, Michael Kaminsky, Christian Pasquel and Matteo Salvarezza, for their help throughout the project.
Further, I would like to thank Pyokyeong Son and Colin Weller for their help during the project, and refining the essay.
The original, high resolution copies of the images are credited to [Robert Lukeman][24], [Teddy Kelley][25], and [Sebastian Unrau][26] on [Unsplash][27].
# References
[1] : R.W. Floyd, L. Steinberg, An adaptive algorithm for spatial grey scale. Proceedings of the Society of Information Display 17, 75-77 (1976).
[2] : Bill Atkinson, private correspondence with John Balestrieri, January 2003 (unpublished)
[3] : J. F. Jarvis, C. N. Judice and W. H. Ninke, A Survey of Techniques for the Display of Continuous Tone Pictures on Bi-level Displays. Computer Graphics and Image Processing, 5 13-40, 1976
[4] : Frankie Sierra, in LIB 17 (Developer's Den), CIS Graphics Support Forum (unpublished)
[5] : K. He, X. Zhang, S. Ren, J. Sun, "Deep Residual Learning for Image Recognition," arXiv:1512.03385 (2015)
# [Link to my computational essay][28]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=image-dithering-cover.gif&userId=1371661
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at3.50.44PM.png&userId=1371661
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.11.46PM.png&userId=1371661
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.15.27PM.png&userId=1371661
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=color-palette-og.gif&userId=1371661
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=color-palette-final.gif&userId=1371661
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.19.57PM.png&userId=1371661
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.23.38PM.png&userId=1371661
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.28.38PM.png&userId=1371661
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.29.39PM.png&userId=1371661
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.31.40PM.png&userId=1371661
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=image-1.png&userId=1371661
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.38.01PM.png&userId=1371661
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.39.37PM.png&userId=1371661
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.41.39PM.png&userId=1371661
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.43.33PM.png&userId=1371661
[17]: http://demonstrations.wolfram.com
[18]: https://stackoverflow.com/a/141873
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at5.03.37PM.png&userId=1371661
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.11.09AM.png&userId=1371661
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.21.55AM.png&userId=1371661
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.24.56AM.png&userId=1371661
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.28.45AM.png&userId=1371661
[24]: https://unsplash.com/photos/zNN6ubHmruI?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[25]: https://unsplash.com/photos/_4Ib-a8g9aA?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[26]: https://unsplash.com/photos/CoD2Q92UaEg?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[27]: https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[28]: https://www.dropbox.com/s/kmtzq6x4xkdn9y8/computational-essay.cdf?dl=0Nalin Bhardwaj2018-07-14T15:17:17ZTry to beat these MRB constant records!
https://community.wolfram.com/groups/-/m/t/366628
Map:
- First we have these record number of digits of the MRB constant
computations.
- Then we have some hints for anyone serious about breaking my record.
- Next, we have speed records.
- Then we have a program Richard Crandall wrote to check his code for computing record number of digits.
- Then there is a conversation about whether Mathematica uses the same algorithm for computing MRB by a couple of different methods.
- Then, for a few replies, we compute the MRB constant from Crandall's
eta derivative formulas and see records there.
- Then there are three replies about "NEW RECORD ATTEMPTS OF 4,000,000 DIGITS!", and that computation is now complete!
- Finally, we see where I am on a 5,000,000 digits calculation.
POSTED BY: Marvin Ray Burns.
**MKB constant calculations,**
![enter image description here][1] ,
**have been moved to their own discussion at**
[Calculating the digits of the MKB constant][2].
I think the following important point got buried near the end.
When it comes to the passion that I and a few other people have for calculating many digits of constants, and the dislike a few more people have for it, it all goes to show that minds work differently!
The MRB constant is defined below. See http://mathworld.wolfram.com/MRBConstant.html.
$$\text{MRB}=\sum _{n=1}^{\infty } (-1)^n \left(n^{1/n}-1\right).$$
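For reference, the leading digits can be reproduced in Mathematica with a small-precision version of the NSum command used for the record computations below:

    NSum[(-1)^n*(n^(1/n) - 1), {n, Infinity},
     Method -> "AlternatingSigns", WorkingPrecision -> 50]
    (* 0.187859... *)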
Here are some record computations. If you know of any others let me know.
1. On or about Dec 31, 1998 I computed 1 digit of the (additive inverse of the) MRB constant with my TI-92s, by adding 1-sqrt(2)+3^(1/3)-4^(1/4) as far as I could and then by using the sum feature to compute $\sum _{n=1}^{1000 } (-1)^n \left(n^{1/n}\right).$ That first digit, by the way, is just 0.
2. On Jan 11, 1999 I computed 3 digits of the MRB constant with the Inverse Symbolic Calculator.
3. In Jan of 1999 I computed 4 correct digits of the MRB constant using Mathcad 3.1 on a 50 MHz 80486 IBM 486 personal computer operating on Windows 95.
4. Shortly afterwards I computed 9 correct digits of the MRB constant using Mathcad 7 professional on the Pentium II mentioned below.
5. On Jan 23, 1999 I computed 500 digits of the MRB constant with the online tool called Sigma.
6. In September of 1999, I computed the first 5,000 digits of the MRB Constant on a 350 MHz Pentium II with 64 MB of RAM, using the simple PARI commands \p 5000;sumalt(n=1,((-1)^n*(n^(1/n)-1))), after allocating enough memory.
7. On June 10-11, 2003, over a period of 10 hours, on a 450 MHz P3 with 512 MB of available RAM, I computed 6,995 accurate digits of the MRB constant.
8. Using a Sony Vaio P4 2.66 GHz laptop computer with 960 MB of available RAM, at 2:04 PM on 3/25/2004, I finished computing 8000 digits of the MRB constant.
9. On March 01, 2006, with a 3 GHz Pentium D with 2 GB of RAM available, I computed the first 11,000 digits of the MRB Constant.
10. On Nov 24, 2006, I computed 40,000 digits of the MRB Constant in 33 hours and 26 minutes via my own program written in Mathematica 5.2. The computation was run on a 32-bit Windows 3 GHz Pentium D desktop computer using 3.25 GB of RAM.
11. Finishing on July 29, 2007 at 11:57 PM EST, I computed 60,000 digits of the MRB Constant, in 50.51 hours on a 2.6 GHz AMD Athlon with 64-bit Windows XP. Max memory used was 4.0 GB of RAM.
12. Finishing on Aug 3, 2007 at 12:40 AM EST, I computed 65,000 digits of the MRB Constant, in only 50.50 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 5.0 GB of RAM.
13. Finishing on Aug 12, 2007 at 8:00 PM EST, I computed 100,000 digits of the MRB Constant. They were computed in 170 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 11.3 GB of RAM. Median (typical) daily record of memory used was 8.5 GB of RAM.
14. Finishing on Sep 23, 2007 at 11:00 AM EST, I computed 150,000 digits of the MRB Constant. They were computed in 330 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 22 GB of RAM. Median (typical) daily record of memory used was 17 GB of RAM.
15. Finishing on March 16, 2008 at 3:00 PM EST, I computed 200,000 digits of the MRB Constant using Mathematica 5.2. They were computed in 845 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 47 GB of RAM. Median (typical) daily record of memory used was 28 GB of RAM.
16. Washed away by Hurricane Ike: on September 13, 2008, sometime between 2:00 PM and 8:00 PM EST, an almost-complete computation of 300,000 digits of the MRB Constant was destroyed. It had computed for a long 4015 hours (23.899 weeks, or 1.4454*10^7 seconds) on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 91 GB of RAM. The Mathematica 6.0 code used follows:
Block[{$MaxExtraPrecision = 300000 + 8, a, b = -1, c = -1 - d,
d = (3 + Sqrt[8])^n, n = 131 Ceiling[300000/100], s = 0}, a[0] = 1;
d = (d + 1/d)/2; For[m = 1, m < n, a[m] = (1 + m)^(1/(1 + m)); m++];
For[k = 0, k < n, c = b - c;
b = b (k + n) (k - n)/((k + 1/2) (k + 1)); s = s + c*a[k]; k++];
N[1/2 - s/d, 300000]]
17. On September 18, 2008, a computation of 225,000 digits of the MRB Constant was started with a 2.66 GHz Core2Duo using 64-bit Windows XP. It was completed in 1072 hours. Memory usage is recorded in the attachment pt 225000.xls, near the bottom of this post.
18. A 250,000 digit computation was attempted but failed to complete, due to a serious internal error which restarted the machine. The error occurred sometime on December 24, 2008 between 9:00 AM and 9:00 PM. The computation had begun on November 16, 2008 at 10:03 PM EST. Like the 300,000 digit computation, this one was almost complete when it failed. The max memory used was 60.5 GB.
19. On Jan 29, 2009, 1:26:19 pm (UTC-0500) EST, I finished computing 250,000 digits of the MRB constant with a multiple-step Mathematica command, running on a dedicated 64-bit XP machine with 4 GB DDR2 RAM on board and 36 GB virtual. The computation took only 333.102 hours. The digits are at http://marvinrayburns.com/250KMRB.txt . The computation is completely documented in the attached 250000.pd at the bottom of this post.
20. On Sun 28 Mar 2010 21:44:50 (UTC-0500) EST, I started a computation of 300,000 digits of the MRB constant using an i7 with 8.0 GB of DDR3 RAM on board, but it failed due to hardware problems.
21. I computed 299,998 digits of the MRB constant. The computation began Fri 13 Aug 2010 10:16:20 pm EDT and ended 2.23199*10^6 seconds later, on Wednesday, September 8, 2010. I used Mathematica 6.0 for Microsoft Windows (64-bit) (June 19, 2007). That is an average of 7.44 seconds per digit. I used my Dell Studio XPS 8100 i7 860 @ 2.80 GHz with 8 GB physical DDR3 RAM. Windows 7 reserved an additional 48.929 GB of virtual RAM.
22. I computed exactly 300,000 digits to the right of the decimal point of the MRB constant, from Sat 8 Oct 2011 23:50:40 to Sat 5 Nov 2011 19:53:42 (2.405*10^6 seconds later). This run was 0.5766 seconds per digit slower than the 299,998 digit computation, even though it used 16 GB physical DDR3 RAM on the same machine. The working precision and accuracy goal combination were maximized for exactly 300,000 digits, and the result was automatically saved as a file instead of just being displayed on the front end. Windows reserved a total of 63 GB of working memory, of which 52 GB were recorded being used. The 300,000 digits came from the Mathematica 7.0 command
Quit; DateString[]
digits = 300000; str = OpenWrite[]; SetOptions[str,
PageWidth -> 1000]; time = SessionTime[]; Write[str,
NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
WorkingPrecision -> digits + 3, AccuracyGoal -> digits,
Method -> "AlternatingSigns"]]; timeused =
SessionTime[] - time; here = Close[str]
DateString[]
23. 314159 digits of the constant took 3 tries, due to hardware failure. Finishing on September 18, 2012, I computed 314159 digits, taking 59 GB of RAM. The digits came from the Mathematica 8.0.4 code
DateString[]
NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
WorkingPrecision -> 314169, Method -> "AlternatingSigns"] // Timing
DateString[]
24. Sam Noble of Apple computed 1,000,000 digits of the MRB constant in 18 days 9 hours 11 minutes 34.253417 seconds.
25. Finishing on Dec 11, 2012, Richard Crandall, an Apple scientist, computed 1,048,576 digits in a lightning-fast 76.4 hours (probably processor time), on a 2.93 GHz 8-core Nehalem. **It took until the use of DDR4 to compute nearly that many digits in an absolute time that quick! In Aug of 2018 I computed 1,004,993 digits of the MRB constant in 53.5 hours with 10 processor cores! Search this post for "53.5" for documentation. On Sept 21, 2018, I computed 1,004,993 digits of the MRB constant in 50.37 hours of absolute time (35.4 hours processor time) with 18 processor cores! Search this post for "50.37 hours" for documentation.**
26. Previously, I computed a little over 1,200,000 digits of the MRB constant in 11 days, 21 hours, 17 minutes, and 41 seconds (finishing on March 31, 2013). I used a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz.
27. On May 17, 2013, I finished a 2,000,000-or-more digit computation of the MRB constant, using only around 10 GB of RAM. It took 37 days 5 hours 6 minutes 47.1870579 seconds. I used my six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz.
28. A previous world-record computation of the MRB constant was finished on Sun 21 Sep 2014 18:35:06. It took 1 month 27 days 2 hours 45 minutes 15 seconds. The processor time of the 3,000,000+ digit computation was 22 days. I computed the 3,014,991 digits of the MRB constant with Mathematica 10.0. I used my new version of Richard Crandall's code in the attached 3M.nb, optimized for my platform and large computations. I also used a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz with 64 GB of RAM, of which only 16 GB was used. Can you beat it (in number of digits, memory used, or time taken)? This confirms that my previous "2,000,000 or more digit computation" was actually accurate to 2,009,993 digits; they were used to check the first several digits of this computation. See the attached 3M.nb for the full code and digits.
29. Finished on Wed 16 Jan 2019 19:55:20, I computed over 4 million digits of the MRB constant.
It took 4 years of continuous tries. This successful run took 65.13 days of computation time, with a processor time of 25.17 days, on a 3.7 GHz Intel 6-core computer overclocked up to 4.7 GHz on all cores, with 3000 MHz RAM. According to this computation, the previous record, the 3,000,000+ digit computation, was accurate to 3,014,871 decimals. This computation used my own algorithm for computing n^(1/n), as found in chapter 3 of the paper at
https://www.sciencedirect.com/science/article/pii/0898122189900242
while the 3 million+ computation used Crandall's algorithm. Both algorithms outperform Newton's method per calculation and iteration.
See attached [notebook][3].
M R Burns' algorithm:
x = SetPrecision[x, pr];
y = x^n; z = (n - y)/y;
t = 2 n - 1; t2 = t^2;
x =
x*(1 + SetPrecision[4.5, pr] (n - 1)/t2 + (n + 1) z/(2 n t) -
SetPrecision[13.5, pr] n (n - 1) 1/(3 n t2 + t^3 z));
(*N[Exp[Log[n]/n],pr]*)
Example:
ClearSystemCache[]; n = 123456789;
(*n is the n in n^(1/n)*)
x = N[n^(1/n),100];
(*x starts out as a relatively small precision approximation to n^(1/n)*)
pc = Precision[x]; pr = 10000000;
(*pr is the desired precision of your n^(1/n)*)
Print[t0 = Timing[While[pc < pr, pc = Min[4 pc, pr];
x = SetPrecision[x, pc];
y = x^n; z = (n - y)/y;
t = 2 n - 1; t2 = t^2;
x = x*(1 + SetPrecision[4.5, pc] (n - 1)/t2 + (n + 1) z/(2 n t)
- SetPrecision[13.5, pc] n (n - 1)/(3 n t2 + t^3 z))];
(*You get a much faster version of N[n^(1/n),pr]*)
N[n - x^n, 10]](*The error*)];
ClearSystemCache[]; n = 123456789; Print[t1 = Timing[N[n - N[n^(1/n), pr]^n, 10]]]
Gives
{25.5469,0.*10^-9999984}
{101.359,0.*10^-9999984}
R Crandall's algorithm:
While[pc < pr, pc = Min[3 pc, pr];
x = SetPrecision[x, pc];
y = x^n - n;
x = x (1 - 2 y/((n + 1) y + 2 n n));];
(*N[Exp[Log[n]/ n],pr]*)
Example:
ClearSystemCache[]; n = 123456789;
(*n is the n in n^(1/n)*)
x = N[n^(1/n)];
(*x starts out as a machine precision approximation to n^(1/n)*)
pc = Precision[x]; pr = 10000000;
(*pr is the desired precision of your n^(1/n)*)
Print[t0 = Timing[While[pc < pr, pc = Min[3 pc, pr];
x = SetPrecision[x, pc];
y = x^n - n;
x = x (1 - 2 y/((n + 1) y + 2 n n));];
(*N[Exp[Log[n]/n],pr]*)
N[n - x^n, 10]](* The error*)]; Print[
t1 = Timing[N[n - N[n^(1/n), pr]^n, 10]]]
Gives
{32.1406,0.*10^-9999984}
{104.516,0.*10^-9999984}
More information available upon request.
Here is my mini cluster of the fastest 3 computers mentioned below:
The one on the left is my custom-built extreme edition 6 core, later upgraded with an 8 core Xeon processor.
The one in the center is my fast little 4 core Asus with 2400 MHz RAM.
Then the one on the right is my fastest -- a Digital Storm 6 core overclocked to 4.7 GHz on all cores and with 3000 MHz RAM.
![enter image description here][4]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5860Capturemkb.JPG&userId=366611
[2]: http://community.wolfram.com/groups/-/m/t/1323951?p_p_auth=W3TxvEwH
[3]: https://community.wolfram.com/groups?p_auth=zWk1Qjoj&p_p_auth=r1gPncLu&p_p_id=19&p_p_lifecycle=1&p_p_state=exclusive&p_p_mode=view&p_p_col_id=column-1&p_p_col_count=6&_19_struts_action=/message_boards/get_message_attachment&_19_messageId=1593151&_19_attachment=4%20million%2011%202018.nb
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ezgif.com-video-to-gif.gif&userId=366611Marvin Ray Burns2014-10-09T18:08:49ZGet detailed step by step operations of a Factoring?
https://community.wolfram.com/groups/-/m/t/1666130
In the below example:
In[41]:= Factor[((1/x - 1) (1/x + 1))/((x - 1) (1/x + 1))]
Out[41]= -(1/x)
I need Mathematica to provide a trace of each operation taken to get to the result (possibly with the associated rule but not mandatory).
In another software it is possible to do that with the command:
explain(factorExpr(((1/x - 1) (1/x + 1))/((x - 1) (1/x + 1))))
How can I do the same thing with Mathematica?LV Vß2019-04-22T08:37:25ZEasy Websites with EasyIDE
https://community.wolfram.com/groups/-/m/t/1666112
#Cross-Post [Here](https://b3m2a1.github.io/making-a-website-with-easyide.html)
---
A common thing people want to do once they have a nice bit of code is share it with the world. That was me ~2 years ago, and so I went out, found a [nice static website builder](https://docs.getpelican.com/en/stable/) , wrote a notebook-to-Markdown generator that would handle all the most common cell types I used, and then figured out how to deploy this to the [Wolfram Cloud](https://www.wolframcloud.com/) . It worked nicely for a while, but after a bit I wanted more control over how the site was built, so I sat down, wrote a Markdown-to-XML parser, wrote a website generation framework, and worked up what eventually became [Ems](https://github.com/b3m2a1/Ems) , and used this to build a [Mathematica tutorial](https://mresources.github.io/tutorial/) from some stuff I wrote for some chemists I used to work with.
These days few of my sites are on the Wolfram Cloud anymore, as it's slow, prone to losing CSS, the URLs look like a scam website's, and so on and so forth. But I still use the site builder. All the time. For example, the [Mathematica paclet server](https://paclets.github.io/PacletServer/) I built to host Mathematica packages and make it possible to install them in a single line is built with this site generator.
And so when I built out EasyIDE one of the first things I wanted to do was integrate my [site builder](https://github.com/b3m2a1/Ems) (via Ems) into the thing. In fact, this blog post itself is being written with EasyIDE:
![post29-966299916507913566](https://b3m2a1.github.io/img/post29-966299916507913566.png)
And now that I have this stuff built out, let's start to make a blog.
# Getting Started
## Getting EasyIDE
If you need to get EasyIDE you can get it off the [paclet server](https://paclets.github.io/PacletServer/easyide.html) . From there you can follow the video here to choose a stylesheet and get it set up:
[![start](https://i.stack.imgur.com/0Vqra.png)](https://www.youtube.com/watch?v=TMHiN9Ov2fQ)
## Creating the Site Root
This is probably the most involved part of the process—and don't worry it's really not very involved at all. What we do is first create a directory for our websites to save to, go to `Plugins ▸ Site Builder ▸ Initialize` and follow the dialog there. After we have a new site, we reassign our project directory to that of the site we want to build and then we're good to go. Here's a video for that:
[![site root](https://i.stack.imgur.com/o8Gv9.png)](https://www.youtube.com/watch?v=P5-_-PhWqGs)
# Creating Content
## Writing Content
Content is divided into two types: pages and posts. Posts are articles; they will be managed and listed on the site index. Pages are more for site info and will not end up on the site index. There should already be a sample page and sample post. We can open up Post #1 using the file browser:
![post29-5467238420810729513](https://b3m2a1.github.io/img/post29-5467238420810729513.png)
You can write pretty much whatever you want here and it will get saved to Markdown, which is the format used to build the site. To maximize similarity between the appearance of the post as written and the post as deployed, it makes sense to use mostly formats that can easily be exported to the web. That means using simple cell styles like `"Section"` , `"Subsection"` , `"Code"` , `"Text"` , `"Item"` , etc. and putting images in their own cells (here I'd recommend putting them in `"Text"` cells as otherwise the exporter thinks they're input).
## Making New Content
To make a new post or new page you can use the entries under `Plugins ▸ Site Builder ▸ New` . These will open up a new page or post or whatnot that you can put whatever you want in.
## Content Metadata
At the top of the content notebook you'll see a metadata section that looks something like this:
![post29-7449085702041565717](https://b3m2a1.github.io/img/post29-7449085702041565717.png)
It's here where we set all the page- or post-specific parameters. You can see there's a space for the list of authors, list of tags, and list of categories. You can also choose the title for your article as well as its slug, which is the name that is generally given to the portion of the URL coming after the root. E.g. in [https://b3m2a1.github.io/making-docs-with-easyide.html](https://b3m2a1.github.io/making-docs-with-easyide.html) the slug is making-docs-with-easyide. This slug will generally be automatically constructed from the title, but if you want a different one you can have it.
You can also customize higher-level things, like change how the Markdown is exported via the `"ExportOptions"` metadata tag. For example, if you want the Markdown to export to a different directory, you can do this like:
"ExportOptions"->{"Directory"->".."}
Which will export the files to the directory one up from the current notebook directory. You can also specify that you'd like all your code to export as copyable images as in the docs, what types of cells should be exported, whether to export math cells via MathJAX or not, whether to use HTML to format more complicated format types, whether to include invisible anchor HTML elements before each section for ease of jumping, what kind of indentation to use for code cells, etc. It's not worth discussing all the possible customizations right now, so if you want to know if a certain customization is supported or to request new ones, it is probably easiest to [ask that here](https://github.com/b3m2a1/Ems/issues) .
# Building The Site
Now that we have content, we can build our site. This is pretty easy to do generally via `Plugins ▸ Site Builder ▸ Build Site` . Here's an example of how this works:
[![build](https://i.stack.imgur.com/2qra6.png)](https://www.youtube.com/watch?v=4Z7lGg6bNbs)
## Build Options
As you can see in the video, when you request a build there are a number of options you can toggle which will change how your site gets built (and how fast it builds).
* If `Generate Content` is turned off, no pages or posts will be built.
* If `Generate Aggregations` is off, no new tag/author/category pages will be built—as the site size grows this can sometimes yield huge savings.
* If `Generate Index` is off, no index pages are built.
* If `Generate Search` is off, no search page (using [tipue search](http://www.tipue.com/search/) ) will be built.
* If `Use Cache` is off, the content cache that is stored to speed up builds will be ignored.
Finally, the other three options seem self-explanatory, but if not let me know and I'll write up what they do.
## Site Metadata
Every site has a file `SiteConfig.wl` that specifies a bunch of more serious options that get fed into the build process. Here's part of what that looks like:
![post29-7608927685363373399](https://b3m2a1.github.io/img/post29-7608927685363373399.png)
The specific options that are here will differ according to the theme used for the website, but these are the standard ones for a regular blog-type website.
## Site Themes
All sites have a theme that comes with them, which is basically just a pile of XML templates and CSS and JS and such that the extracted content from each page/post gets fed into. My themes work generally the same as pelican themes, so you can find some info on the layout [here](https://docs.getpelican.com/en/stable/themes.html) . The major difference is that my themes use the [```XMLTemplate```](https://reference.wolfram.com/language/ref/XMLTemplate.html) framework and so writing them is a bit more annoying than writing pelican themes would be. I've sunk a good amount of time and effort into making the [themes I've already written](https://github.com/b3m2a1/Ems/tree/master/Resources/Themes) pretty full featured, though. Given that, if you want a site-specific theme, you just need to make a directory called `theme` in the site root, copy the content of one of my existing themes in and modify that. If you want to write a new theme from scratch, I can find the time to write a tutorial about how to do that, too.
# Deploying The Site
Finally we have a site that looks and feels how we want it to, so it's time to deploy. The way I would do this is:
* Create a git repository in the current directory (via the menu)
* Create an empty repository on GitHub
![post29-6276657249628752117](https://b3m2a1.github.io/img/post29-6276657249628752117.png)
* Set the current repository remote to that of the GitHub repo
* Change the `output` folder name to `docs`
* Push to GitHub
* Turn on GitHub sites for your repository
![post29-9018686530683978441](https://b3m2a1.github.io/img/post29-9018686530683978441.png)
Here's an example of that in action:
[![deploy](https://i.stack.imgur.com/M2dWH.png)](https://www.youtube.com/watch?v=L_r1yidV5F0?autoplay)
# WLSites.GitHub.IO
At this point, I don't have plans to radically revamp this, but one thing I could imagine being useful would be a centralized place for hosting Mathematica-generated websites (it doesn't have to be made with EasyIDE/Ems).
To make this easier I created a GitHub organization called WLSites where people can contribute sites if they want. All they'll need to do is [let me know here](https://github.com/wlsites/wlsites.github.io/issues) what they'd like their site to be called and provide a quick description for it. Once I have that I'll make a new repo for them, and give them complete access to that.
Here's an example of the kind of thing that I'm looking for:
> New Site: b3m2a1
>
> <br>
> A personal page for b3m2a1
The benefit of this is that many sites can be hosted under the same roof. I'll also make a nice index page that updates every time a new site is added so that people can find your site if they simply go to [wlsites.github.io](https://wlsites.github.io/) .
In fact, if you want to add a site and want to host the repo yourself that's also fine by me. I'll just add your site and info to the repo and your site will appear on the index.b3m2a1 2019-04-22T07:40:24ZInteger area search, SEPP triangles
https://community.wolfram.com/groups/-/m/t/1665973
# Description:
This work is a search for scalene SEPP (Square Even-Prime-Prime) triangles with integer area. They are fairly rare. There are no square odd-prime-prime triangles with integer area, since no triangle with three odd sides has integer area. The first representative is the triangle with sides 3-4-5. Below is a representative image of a SEPP:
![enter image description here][1]
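The claim above that no triangle with three odd sides has integer area follows from Heron's formula (a quick check, writing $a,b,c$ for the sides and $A$ for the area):

$$16A^2=(a+b+c)(-a+b+c)(a-b+c)(a+b-c)$$

If $a$, $b$ and $c$ are all odd, each of the four factors is odd, so $16A^2$ is odd; but if $A$ were an integer, $16A^2$ would be divisible by 16. Hence $A$ cannot be an integer.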
I wrote the simple code below to find triangles with these properties. I opted for parallel computing (8 kernels) and measured the computing time of each evaluation. The side lengths run up to just below the limit "n" (< n). The answer has the form: "quantity used" {"side a", "side b", "side c"} {"area"} "graphics", with the absolute time just below.
# Objective and Coding:
The main objective here is to find how many of these exist for sides varying up to 10,100, 1000… in amounts of powers of 10 (10^x). Following just one example with the sides up to 50 to test the code; only 1 triangle found, the 3-4-5:
Parallelize[n = 50;
 p = PrimePi[n];
 Do[a = (2*i)^2; b = Prime[j]; c = Prime[k];
  If[c < a + b \[And] a < b + c \[And] b < a + c \[And]
    Area[SSSTriangle[a, b, c]] \[Element] Integers,
   Print[n, {a, b, c}, {Area[SSSTriangle[a, b, c]]},
    Graphics[SSSTriangle[a, b, c]]]],
  {i, 1, IntegerPart[Sqrt[n - 1]/2]}, {j, 2, p - 1},
  {k, j + 1, p}]] // AbsoluteTiming
![enter image description here][2]
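As a possible speed-up (a sketch only, not benchmarked here): the area can be computed directly from Heron's formula, which avoids constructing `SSSTriangle` regions inside the loop:

```mathematica
(* Sketch: the same search using Heron's formula directly *)
heronAreaSq[a_, b_, c_] := (a + b + c) (-a + b + c) (a - b + c) (a + b - c)/16;
n = 50; p = PrimePi[n];
Do[
 With[{a = (2 i)^2, b = Prime[j], c = Prime[k]},
  If[c < a + b && a < b + c && b < a + c &&
    IntegerQ[Sqrt[heronAreaSq[a, b, c]]],
   Print[{a, b, c}, " area = ", Sqrt[heronAreaSq[a, b, c]]]]],
 {i, 1, IntegerPart[Sqrt[n - 1]/2]}, {j, 2, p - 1}, {k, j + 1, p}]
```

For n = 50 this should print only the 3-4-5 triangle with area 6, matching the result above.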
Now a modified version of the code that returns multiple processing-time results in a single evaluation. The example below is set up to calculate from 10 to 100 in steps of 10, {m,10,100,10}:
Do[Print[Parallelize[n = m;
   p = PrimePi[n];
   Do[a = (2*i)^2; b = Prime[j]; c = Prime[k];
    If[c < a + b \[And] a < b + c \[And] b < a + c \[And]
      Area[SSSTriangle[a, b, c]] \[Element] Integers, Null],
    {i, 1, IntegerPart[Sqrt[n - 1]/2]}, {j, 2, p - 1},
    {k, j + 1, p}]] // AbsoluteTiming], {m, 10, 100, 10}]
![enter image description here][3]
In the code above you can replace the part {m,10,100,10} with {m,{100,140,210}} to get results for specific quantities such as 100, 140, 210, etc. You can also replace the Null part with Print[n,{a,b,c},{Area[SSSTriangle[a,b,c]]},Graphics[SSSTriangle[a,b,c]]] to get multiple outputs showing the triangles found.
# Calculation and Results:
To carry out the evaluations in this work I used the following machine (listed just to give an idea of the processing power involved):
Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz, 3600 Mhz, 8 Core(s), 8 Logic Processor(s) (run with 8 Kernels), RAM 16.0 GB, BaseBoard B360M AORUS Gaming 3, X64, NVIDIA GeForce GTX 1060 6GB.
The following table was assembled from the quantities (maximum side values) "n" and the absolute times (in seconds) spent on the parallel computation:
![enter image description here][4]
Now the results found using values with powers of 10:
- From values up to 10 (<10) and up to 100 (<100):
![enter image description here][5]
- The result for sides up to 1000 (<1000):
![enter image description here][6]
- And finally the result for sides up to 10000 (<10000):
![enter image description here][7]
# Time Prediction (Fitting Model):
I also made an attempt to predict the time required to find triangles with sides larger than 10000, using FindFit with two models ("a.x^b" and "a.x^b.c^x"). I chose to reduce the x-axis by a factor of 10 to make the fit (I don't know how, or whether, this affected the accuracy). Below is the example of the first fit (fitted to the data from 1000 to 10000, with the prediction for 10^5):
y = a*x^b // StandardForm
data = {{10, 0.171556}, {20, 0.529822}, {30, 1.05338}, {40,
1.86622}, {50, 3.123}, {60, 6.25655}, {70, 9.20718}, {80,
12.59}, {90, 15.356}, {100, 18.9604}, {110, 23.6784}, {120,
21.1587}, {130, 25.256}, {140, 28.0271}, {150, 34.3804}, {160,
38.2134}, {170, 45.9896}, {180, 51.1624}, {190, 56.4499}, {200,
64.8474}, {210, 71.6892}, {220, 115.029}, {230, 124.75}, {240,
139.067}, {250, 146.954}, {260, 117.413}, {300, 167.935}, {340,
236.836}, {380, 305.191}, {400, 345.106}, {440, 431.134}, {460,
475.442}, {500, 566.853}, {520, 623.334}, {550, 695.491}, {560,
716.899}, {580, 794.285}, {590, 833.365}, {600, 852.083}, {630,
982.572}, {640, 1021.09}, {660, 1148.21}, {690, 1265.83}, {700,
1322.47}, {710, 1324.94}, {750, 1428.75}, {780, 1560.02}, {800,
1676.75}, {820, 1819.18}, {860, 2024.3}, {900, 2290.71}, {950,
2629}, {1000, 2972.96}};
FindFit[%, a*x^b, {a, b}, x]
Table[a*x^b /. %, {x, 100, 1000, 100}] \[And]
Table[a*x^b /. %, {x, {10000}}]
![enter image description here][8]
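A variant worth trying (a sketch, reusing the `data` list above): `NonlinearModelFit` returns a model object with goodness-of-fit diagnostics that plain `FindFit` does not expose, which may help compare the two candidate models:

```mathematica
(* Sketch: fit the same power-law model but keep the full model object *)
nlm = NonlinearModelFit[data, a x^b, {a, b}, x];
nlm["BestFitParameters"]   (* fitted values of a and b *)
nlm["AdjustedRSquared"]    (* goodness of fit, comparable across models *)
nlm[1000]                  (* prediction at the scaled point x = 1000, i.e. n = 10000 *)
```

The same call with `a x^b c^x` and parameters `{a, b, c}` would give the diagnostics for the second model.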
This chart was created using the real absolute time data as well as the two curves generated by FindFit that I tested:
![enter image description here][9]
# Conclusion:
There are only 13 scalene SEPP triangles with integer area and sides varying up to 10000 (10^4).
The curves used in FindFit gave very divergent predictions of the time required to evaluate sides up to 100000 (10^5): the curve fit "a.x^b" (fit 1) was more optimistic and estimated 8 days of parallel computation, while the curve fit "a.x^b.c^x" (fit 2) estimated 171.5 days! In any case, these are very long computation times for checking all possible sides up to 10^5.
To decide which fit curve to trust for longer predictions, I evaluated sides up to 15000 to obtain a real time point and see which curve comes closer. The real time for sides up to 15000 was 7428.57 seconds. The “fit 1” curve came closest, with a prediction of 7679.62 seconds, while the “fit 2” curve estimated 8452.19 seconds. So “fit 1” differed by 4 minutes and 11 seconds, approximately 3.4% of the real value.
# A Few Questions (that I have):
- Is there any way to make this code faster or more efficient? Is there another way to find this kind of triangle using code?
- Is there a better way to use FindFit in this case to have a more accurate prediction? Maybe another function or more/less data? How to know the correct function model?
- How many of these triangles will there be if we search for sides up to 10^5 or even 10^6? Would anyone have any idea to help me find it?
Thank you very much to everyone in the community.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=i1.png&userId=1316061
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=i2.png&userId=1316061
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=i3.png&userId=1316061
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=i4.png&userId=1316061
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=i5.png&userId=1316061
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=i6.png&userId=1316061
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=i7.png&userId=1316061
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=i8.png&userId=1316061
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=i9.png&userId=1316061Claudio Chaib2019-04-22T03:51:46ZPlot two lines for max/min error on a plot?
https://community.wolfram.com/groups/-/m/t/1665533
Dear all,
How do I plot two parallel lines for maximum and minimum values of relative error percentage as below:
In[124]:= m = {2.2027036725996783`,
5.280362701954329`, -1.0124449729925813`, 1.698979465419002`,
3.141591718306451`, -2.745060440009399`, -1.6113207086717107`,
0.16638391106163822`, -2.258695482994957`, -0.3482598627250328`,
4.925577851518227`, 2.6750207005122877`, -4.02288009868453`,
2.714952029120464`, 3.034760679525427`,
3.729591243576016`, -3.3014545762020924`, -0.22188437300968553`, \
-5.337065768483884`, -4.1220754806842725`,
0.5332742419880744`, -2.8743222182737225`, -2.693012697508946`,
0.32865798833744514`,
4.027192028466991`, -2.8987838601887117`, -0.16628212916201005`,
3.484024392679025`, -3.2865991892298494`, -1.4074929511476675`, \
-0.1376591620771673`, -1.340169784223353`, -0.18874557070863454`, \
-0.7158258869527857`, -1.6221568977272842`, 2.220428106758959`,
0.18756079584372606`, 2.877020526594872`, 1.6279535082847358`,
0.615660929310434`,
0.8731989288504256`, -2.4430509995815504`, -3.8101555754900276`, \
-0.41354676208662355`, -3.1882480501258503`,
2.797772540468918`, -4.423074431837073`, -0.15315482194415617`, \
-0.9330704276794881`, -2.6279243117503595`, 2.4286072216474692`,
3.303587280307156`,
3.8085267078688805`, -1.9518727734300068`, -4.213412770426206`, \
-2.357017551327045`,
3.4733691167400265`, -0.11673331538713343`, -0.40688016301067764`,
1.6698198640343067`, 0.14831028096303508`, 0.24317430279547272`,
3.011211821071481`, -0.48218554546387205`, 0.09689880402434317`,
1.191374402522961`, -2.7963094788933693`, -1.7336554568628828`,
0.7339703452354714`, -0.8739780822047689`, -3.63379798637767`,
1.0591084727706641`, 5.324198415974884`,
0.0845454122567755`, -3.991208337015724`, -3.793925049867107`,
1.8074800549551044`, 1.1480036635543738`, -2.7476419011691475`,
1.00023361017206`, 0.028334302480905814`, -0.008048998604257977`}
In[129]:= {Max[m], Min[m]}
Out[129]= {5.3242, -5.33707}
In[130]:= ListLinePlot[m, LabelStyle -> {14, GrayLevel[0], Bold},
ImageSize -> 500, PlotStyle -> Blue,
AxesLabel -> {"Sample", "Relative Error (%)"}, Filling -> Axis,
PlotRange -> All]
![enter image description here][1]
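A minimal sketch of one possible approach, overlaying the extrema as dashed grid lines (reusing the list `m` defined above):

```mathematica
(* Sketch: draw Max[m] and Min[m] as horizontal grid lines over the data *)
ListLinePlot[m, Filling -> Axis, PlotRange -> All, PlotStyle -> Blue,
 LabelStyle -> {14, GrayLevel[0], Bold}, ImageSize -> 500,
 AxesLabel -> {"Sample", "Relative Error (%)"},
 GridLines -> {None, {Max[m], Min[m]}},
 GridLinesStyle -> Directive[Red, Dashed]]
```

An `Epilog` with two `Line` primitives at those heights would work as well.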
Many thanks!
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=4924Untitled.png&userId=943918M.A. Ghorbani2019-04-21T14:29:26Z[✓] Replace "I" with "0"?
https://community.wolfram.com/groups/-/m/t/1665703
So I wanted to replace `I` with `0` in my equation (please don't ask why). I tried several options but none of them gave the result I wanted. What should I do?![showcase of the problem][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=wolfWTF.png&userId=1308552Daniel Voloshin2019-04-21T19:59:34Z[✓] Include "Authorization: Bearer ..." in a URL execute?
https://community.wolfram.com/groups/-/m/t/1664088
I am trying to access the Asana API. In order to do that I need to send HTTP requests of the form
https://api end point "Authorization: Bearer 0/123456789abcdef"
I have tried setting Authentication -> <|"Authorization" -> Bearer... |>, but that just brings up the Asana sign-in box. I also tried "Headers" -> "Authorization: Bearer..." but that throws the error
HTTPRequest::nvldheaders: -- Message text not found -- (Join[{user-agent->Wolfram HTTPClient 12.},Authorization: Bearer ...
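Looking at the HTTPRequest documentation, I think "Headers" wants a list of rules rather than one "key: value" string. A sketch of what I mean (the endpoint and token here are placeholders):

```mathematica
(* Sketch: pass the header as a key -> value rule inside the association *)
req = HTTPRequest["https://app.asana.com/api/1.0/users/me",
   <|"Headers" -> {"Authorization" -> "Bearer 0/123456789abcdef"}|>];
URLRead[req]
```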
What is the correct way to include the Authorization header request in the message?Andrew Burnett2019-04-19T17:44:06ZVisualizing simultaneous iteration methods for polynomial roots
https://community.wolfram.com/groups/-/m/t/1646447
One of my longstanding research interests in numerical analysis has been the family of "simultaneous iteration" methods for finding polynomial roots. (See e.g. [McNamee's book][1] for a comprehensive survey.) Briefly put, these are modified Newton-Raphson methods that allow one to find the roots of a polynomial all at once, as opposed to finding them one at a time and "deflating".
I had the idea to visualize how these algorithms gradually proceed from initial approximations to the roots, up to convergence. After a number of experiments, I settled on using [domain coloring][2] for visualization. I have found that the logarithmic derivatives of polynomials gave particularly striking plots.
For this post, I have used the scaled [exponential sum][3] of degree 20:
$$\frac{20!}{20^{20}}\sum_{k=0}^{20}\frac{(20z)^k}{k!}$$
as the example polynomial whose roots we want to see. It [is][4] [known][5] that the zeroes of this polynomial asymptotically approach the so-called Szegő curve as the polynomial degree goes to infinity.
![Szegő curve][6]
expPoly[x_] = With[{n = 20}, Sum[(n! (n x)^k)/(k! n^n), {k, 0, n}]]
---
I will now look at two of the most popular simultaneous iteration methods. The first one is the [(Weierstrass-)Durand-Kerner method][7],
$$x_i^{(k+1)}=x_i^{(k)}-\frac{p(x_i^{(k)})}{\prod\limits_{j\neq i} (x_i^{(k)}-x_j^{(k)})},\qquad i=1\dots n;\; k=0,1,\dots$$
which is (typically) quadratically convergent. (Note that in simultaneous iteration methods, the polynomials are always assumed to be monic (i.e., unit leading coefficient).)
Implementing the iteration is easy using `FixedPointList[]`. As is customary with these methods, we use as a starting approximation points equispaced around the unit circle, and slightly rotated:
ptsdk = FixedPointList[# - expPoly[#]/Table[Apply[Times, #[[k]] - Delete[#, k]],
{k, Length[#]}] &,
N[Exp[2 π I Range[0, 19]/20 - I π/40]], 40,
SameTest -> (EuclideanDistance[##] < 1.*^-6 &)];
I use a loose convergence criterion that is good enough for visualization purposes.
For the domain coloring plot, I will use a slightly modified version of the [DLMF color scheme][8], based on an idea of [Quilez][9].
DLMFPhaseColor[u_, s_:1, b_:1] := Module[{rgb},
rgb = Clip[{1, -1, -1} Abs[{8, 4, 8} Mod[u/(2 π), 1] -
{9, 3, 11}/2] + {-3, 3, 5}/2, {0, 1}];
rgb = (3 - 2 rgb) rgb^2;
Apply[RGBColor, b (1 + s (rgb - 1))]]
I then use a simplified version of [code originally written by user Heike][10] on Mathematica Stack Exchange:
dcdk = RegionPlot[True, {x, -9/8, 9/8}, {y, -9/8, 9/8},
ColorFunction ->
Function[{x, y}, DLMFPhaseColor[Arg[Total[1/(x + I y - #)]]]],
ColorFunctionScaling -> False, Frame -> False,
PlotPoints -> 405] & /@ ptsdk;
(This takes some time, due to the high `PlotPoints` setting.)
We can now see an animation:
ListAnimate[dcdk]
![Durand-Kerner][11]
The other method I will be looking at in this post is the (typically) cubically convergent [Ehrlich-Aberth(-Maehly) method][12],
$$x_i^{(k+1)}=x_i^{(k)}-\frac{\tfrac{p(x_i^{(k)})}{p^\prime(x_i^{(k)})}}{1-\tfrac{p(x_i^{(k)})}{p^\prime(x_i^{(k)})}\sum\limits_{j\neq i} \tfrac1{x_i^{(k)}-x_j^{(k)}}},\qquad i=1\dots n;\; k=0,1,\dots$$
which is also one of the methods available in *Mathematica*'s `NSolve[]`/`NRoots[]`.
Unfortunately, I have no way to get the iterates generated by `NSolve[]`, so I had to reimplement the method myself. We can use essentially the same code as was used for Durand-Kerner, with a few changes:
ptsea = FixedPointList[With[{ld = expPoly[#]/expPoly'[#]},
# - ld/(1 - ld Table[Tr[1/(#[[k]] - Delete[#, k])],
{k, Length[#]}])] &,
N[Exp[2 π I Range[0, 19]/20 - I π/40]], 40,
SameTest -> (EuclideanDistance[##] < 1.*^-6 &)];
dcea = RegionPlot[True, {x, -9/8, 9/8}, {y, -9/8, 9/8},
ColorFunction ->
Function[{x, y}, DLMFPhaseColor[Arg[Total[1/(x + I y - #)]]]],
ColorFunctionScaling -> False, Frame -> False,
PlotPoints -> 405] & /@ ptsea;
ListAnimate[dcea]
![Ehrlich-Aberth][13]
It would be interesting to use this visualization technique on other polynomials with interesting root structure, as well as other simultaneous iteration methods.
[1]: https://books.google.com/books?hl=en&id=4PMqxwG-eqQC&pg=PA67
[2]: https://en.wikipedia.org/wiki/Domain_coloring
[3]: http://mathworld.wolfram.com/ExponentialSumFunction.html
[4]: https://doi.org/10.1216/rmjm/1181072998
[5]: https://doi.org/10.1007/BFb0087909
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=vpliw.gif&userId=520181
[7]: https://en.wikipedia.org/wiki/Durand%E2%80%93Kerner_method
[8]: https://dlmf.nist.gov/help/vrml/aboutcolor#S2
[9]: https://www.shadertoy.com/view/MsS3Wc
[10]: https://mathematica.stackexchange.com/a/7293/
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=dcdk.gif&userId=520181
[12]: https://en.wikipedia.org/wiki/Aberth_method
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=dcea.gif&userId=520181J. M.2019-04-04T04:38:26ZThoughts on a Python interface, and why ExternalEvaluate is just not enough
https://community.wolfram.com/groups/-/m/t/1185247
`ExternalEvaluate`, introduced in M11.2, is a nice initiative. It enables limited communication with multiple languages, including Python, and appears to be designed to be relatively easily extensible (see ``ExternalEvaluate`AddHeuristic`` if you want to investigate, though I wouldn't invest in this until it becomes documented).
**My great fear, however, is that with `ExternalEvaluate` Wolfram will consider the question of a Python interface settled.**
This would be a big mistake. A *general* framework, like `ExternalEvaluate`, that aims to work with *any* language and relies on passing code (contained in a string) to an evaluator and getting JSON back, will never be fast enough or flexible enough for *practical scientific computing*.
Consider a task as simple as computing the inverse of a $100\times100$ Mathematica matrix using Python (using [`numpy.linalg.inv`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.inv.html)).
I challenge people to implement this with `ExternalEvaluate`. It's not possible to do it *in a practically useful way*. The matrix has to be sent *as code*, and piecing together code from strings just can't replace structured communication. The result will need to be received as something encodable to JSON. This has terrible performance due to multiple conversions, and even risks losing numerical precision.
Just sending and receiving a tiny list of 10000 integers takes half a second (!)
In[6]:= ExternalEvaluate[py, "range(10000)"]; // AbsoluteTiming
Out[6]= {0.52292, Null}
Since I am primarily interested in scientific and numerical computing (as I believe most M users are), I simply won't use `ExternalEvaluate` much, as it's not suitable for this purpose. What if we need to do a [mesh transformation](https://mathematica.stackexchange.com/q/155484/12) that Mathematica can't currently handle, but there's a Python package for it? It's exactly the kind of problem I am looking to apply Python for. I have in fact done mesh transformations using MATLAB toolboxes directly from within Mathematica, using [MATLink][1], while doing the rest of the processing in Mathematica. But I couldn't do this with ExternalEvaluate/Python in a reasonable way.
In 2017, any scientific computing system *needs* to have a Python interface to be taken seriously. [MATLAB has one][2], and it *is* practically usable for numerical/scientific problems.
----
## A Python interface
I envision a Python interface which works like this:
- The MathLink/WSTP API is exposed to Python, and serves as the basis of the system. MathLink is good at transferring large numerical arrays efficiently.
- Fundamental data types (lists, dictionaries, bignums, etc.) as well as datatypes critical for numerical computing (numpy arrays) can be transferred *efficiently* and *bidirectionally*. Numpy arrays in particular must translate to/from packed arrays in Mathematica with the lowest possible overhead.
- Python functions can be set up to be called from within Mathematica, with automatic argument translation and return type translation. E.g.,
PyFun["myfun"][ (* myfun is a function defined in Python *)
{1,2,3} (* a list *),
PyNum[{1,2,3}] (* cast to numpy array, since the interpretation of {1,2,3} is ambiguous *),
PySet[{1,2,3}] (* cast to a set *)
]
- The system should be user-extensible to add translations for new datatypes, e.g. a Python class that is needed frequently for some application.
- The primary mode of operation should be that Python is run as a slave (subprocess) of Mathematica. But there should be a second mode of operation where both Mathematica and Python are being used interactively, and they are able to send/receive structured data to/from each other on demand.
- As a bonus: Python can also call back to Mathematica, so e.g. we can use a numerical optimizer available in Python to find the minimum of a function defined in Mathematica
- An interface whose primary purpose is to call Mathematica from Python is a different topic, but can be built on the same data translation framework described above.
The development of such an interface should be driven by real use cases. Ideally, Wolfram should talk to users who use Mathematica for more than fun and games, and do scientific computing as part of their daily work, with multiple tools (not just M). Start with a number of realistic problems, and make sure the interface can help in solving them. As a non-trivial test case for the datatype-extension framework, make sure people can set up auto-translation for [SymPy objects][3], or a [Pandas dataframe][4], or a [networkx graph][5]. Run `FindMinimum` on a Python function and make sure it performs well. (In a practical scenario this could be a function implementing a physics simulation rather than a simple formula.) As a performance stress test, run `Plot3D` (which triggers a very high number of evaluations) on a Python function. Performance and usability problems will be exposed by such testing early, and then the interface can be *designed* in such a way as to make these problems at least solvable (if not immediately solved in the first version). I do not believe that they are solvable with the `ExternalEvaluate` design.
Of course, this is not the only possible design for an interface. J/Link works differently: it has handles to Java-side objects. But it also has a different goal. Based on my experience with MATLink and RLink, I believe that *for practical scientific/numerical computing*, the right approach is what I outlined above, and that the performance of data structure translation is critical.
----
## ExternalEvaluate
Don't get me wrong, I do think that the `ExternalEvaluate` framework is a very useful initiative, and it has its place. I am saying this because I looked at its source code and it appears to be easily extensible. R has zeromq and JSON capabilities, and it looks like one could set it up to work with `ExternalEvaluate` in a day or so. So does Perl, anyone want to give it a try? `ExternalEvaluate` is great because it is simple to use and works (or can be made to work) with just about any interpreted language that speaks JSON and zeromq. But it is also, in essence, a quick and dirty hack (that's extensible in a quick and dirty way), and won't be able to scale to the types of problems I mentioned above.
----
## MathLink/WSTP
Let me finally say a few words about why MathLink/WSTP are critical for Mathematica, and what should be improved about them.
I believe that any serious interface should be built on top of MathLink. Since Mathematica already has a good interface capable of inter-process communication, that is designed to work well with Mathematica, and designed to handle numerical and symbolic data efficiently, use it!!
Two things are missing:
- Better documentation and example programs, so more people will learn MathLink
- If the MathLink library (not Mathematica!) were open source, people would be able to use it to link to libraries [which are licensed under the GPL][6]. Even a separate open source implementation that only supports shared memory passing would be sufficient—no need to publish the currently used code in full. Many scientific libraries are licensed under the GPL, often without their authors even realizing that they are practically preventing them from being used from closed source systems like Mathematica (due to the need to link to the MathLink libraries). To be precise, GPL licensed code *can* be linked with Mathematica, but the result cannot be shared with anyone. I have personally requested the author of a certain library to grant an exception for linking to Mathematica, and they did not grant it. Even worse, I am not sure they understood the issue. The authors of other libraries *cannot* grant such a permission because they themselves are using yet other GPL'd libraries.
[MathLink already has a more permissive license than Mathematica.][7] Why not go all the way and publish an open source implementation?
I am hoping that Wolfram will fix these two problems, and encourage people to create MathLink-based interfaces to other systems. (However, I also hope that Wolfram will create a high-quality Python link themselves instead of relying on the community.)
I have talked about the potential of Mathematica as a glue-language at some Wolfram events in France, and I believe that the capability to interface external libraries/systems easily is critical for Mathematica's future, and so is a healthy third-party package ecosystem.
[1]: http://matlink.org/
[2]: https://www.mathworks.com/help/matlab/matlab-engine-for-python.html
[3]: http://www.sympy.org/
[4]: http://pandas.pydata.org/
[5]: https://networkx.github.io/
[6]: https://en.wikipedia.org/wiki/Copyleft
[7]: https://www.wolfram.com/legal/agreements/mathlink.htmlSzabolcs Horvát2017-09-15T12:33:04Z96_6 Configuration
https://community.wolfram.com/groups/-/m/t/1664705
Every node has six lines.
Every line has six nodes.
If you like these designs
You can try out my codes.
![96_6 configuration][1]
base={{0,-1},{0,Root[1+2 #1+2 #1^3+#1^4&,2]},{1/2,Root[-11-8 #1+24 #1^2-32 #1^3+16 #1^4&,1]},{1/2,Root[-11+8 #1+24 #1^2+32 #1^3+16 #1^4&,2]},{1/2 (2-Sqrt[3]),-(1/2)},{Root[-1+4 #1-8 #1^3+8 #1^4&,2],Root[3-24 #1+48 #1^2-24 #1^3+8 #1^4&,1]},{Root[-1+4 #1-8 #1^3+8 #1^4&,2],Root[3+24 #1+48 #1^2+24 #1^3+8 #1^4&,2]},{Root[1+4 #1+16 #1^3+16 #1^4&,2],Root[1-20 #1+48 #1^2+16 #1^3+16 #1^4&,1]}};
nodes = Flatten[Table[RootReduce[#.RotationMatrix[n Pi/6 ]], {n, 0, 11}] & /@ base,1];
lines={{1,9,60,59,26,38},{1,8,49,57,83,64},{1,6,54,50,76,69},{1,5,52,51,30,42},{1,24,37,41,84,73},{1,14,27,31,67,68},{12,8,59,58,25,37},{12,7,60,56,82,63},{12,5,49,53,75,68},{12,4,51,50,29,41},{12,13,26,30,66,67},{12,23,48,40,83,84},{11,7,58,57,36,48},{11,6,59,55,81,62},{11,4,60,52,74,67},{11,3,49,50,28,40},{11,24,25,29,66,65},{11,22,47,39,82,83},{10,6,57,56,35,47},{10,5,58,54,80,61},{10,3,59,51,73,66},{10,2,49,60,27,39},{10,23,36,28,64,65},{10,21,46,38,81,82},{9,5,56,55,34,46},{9,4,57,53,79,72},{9,2,58,50,84,65},{9,22,35,27,63,64},{9,20,45,37,80,81},{8,4,55,54,33,45},{8,3,56,52,78,71},{8,21,34,26,62,63},{8,19,44,48,80,79},{7,3,54,53,32,44},{7,2,55,51,77,70},{7,20,33,25,61,62},{7,18,43,47,78,79},{6,2,53,52,31,43},{6,19,32,36,72,61},{6,17,42,46,77,78},{5,18,35,31,71,72},{5,16,45,41,76,77},{4,17,34,30,70,71},{4,15,44,40,75,76},{3,16,33,29,69,70},{3,14,43,39,74,75},{2,13,42,38,73,74},{2,15,32,28,68,69},{49,92,48,41,82,74},{49,94,26,31,69,65},{60,93,25,30,68,64},{60,91,47,40,81,73},{59,92,36,29,67,63},{59,90,46,39,80,84},{58,91,35,28,66,62},{58,89,45,38,83,79},{57,90,34,27,61,65},{57,88,44,37,82,78},{56,89,33,26,72,64},{56,87,43,48,81,77},{55,88,32,25,71,63},{55,86,42,47,80,76},{54,87,36,31,70,62},{54,85,46,41,75,79},{53,86,35,30,69,61},{53,96,45,40,74,78},{52,85,34,29,68,72},{52,95,44,39,73,77},{51,96,33,28,67,71},{51,94,43,38,84,76},{50,93,42,37,83,75},{50,95,32,27,66,70},{13,21,92,91,27,37},{13,20,93,89,82,65},{13,18,86,94,75,70},{13,17,96,95,31,41},{24,20,91,90,26,48},{24,19,92,88,81,64},{24,17,93,85,74,69},{24,16,95,94,30,40},{23,19,90,89,25,47},{23,18,91,87,80,63},{23,16,92,96,73,68},{23,15,93,94,29,39},{22,18,89,88,36,46},{22,17,90,86,79,62},{22,15,91,95,84,67},{22,14,93,92,28,38},{21,17,88,87,35,45},{21,16,89,85,78,61},{21,14,90,94,83,66},{20,16,87,86,34,44},{20,15,88,96,77,72},{19,15,86,85,33,43},{19,14,87,95,76,71},{18,14,85,96,32,42}};
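To render the configuration from the data above, something like this should work (a sketch; each line is drawn as a segment between the two extreme nodes among its six collinear points):

```mathematica
(* Sketch: draw every line of the configuration plus all 96 nodes *)
npts = N[nodes];
seg[idx_] := Line[{First[#], Last[#]} &[Sort[npts[[idx]]]]];
Graphics[{Thin, seg /@ lines, AbsolutePointSize[4], Point[npts]}]
```

`Sort` on the coordinate pairs orders collinear points along the line (by x, then y), so the first and last entries are the segment's endpoints.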
From L. W. Berman, "Geometric Constructions for Symmetric 6-Configurations," Rigidity and Symmetry, Springer, 2014, p. 83.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=96_6Config.jpg&userId=21530Ed Pegg2019-04-19T14:49:09Z3DPlot with axes at the center?
https://community.wolfram.com/groups/-/m/t/1664278
Hi guys, I'm trying to make a 3D plot with the axes at the center, but when I do it I can see through the plot. Does anyone know how to change this?
a = 3; (* "a" was not defined in the snippet; set it here so the code runs *)
Plot3D[{2 (x - y)/(1 + x^2 + y^2)}, {x, -a, a}, {y, -a, a},
AxesOrigin -> {0, 0, 0}, AxesStyle -> Opacity[1],
AxesEdge -> {None, Automatic, None}, ColorFunction -> "Rainbow",
Boxed -> False, Background -> Black, MeshFunctions -> {#3 &},
Mesh -> 6, MeshStyle -> {Blue}]
Thanks in advance
:DDavid Urbaez2019-04-19T01:27:59ZSolve a wave equation with boundary conditions using NDSolve?
https://community.wolfram.com/groups/-/m/t/1664540
I want to study the time evolution of a small perturbation around the static solution of the following Wave Equation
$ -\partial_t^2 v(t,r) + \partial_r^2v(t,r) + \frac{2}{r}\partial_r v(t,r) = \frac{\partial V(v)}{\partial v}(t,r) $
for some expression of the potential $V(v)$ that is written in the code below. The coordinates $t,r$ run over $[0,+\infty)$.
By definition, the static solution $\hat{v}(r)$ is time-independent and I require the following initial/boundary conditions
$ \partial_r \hat{v}(r)|_{r=0} =0\,,\qquad \hat{v}(r \rightarrow +\infty) = 0. $
Obviously, to perform numerical computations the limit $r\rightarrow+\infty$ is replaced by $r=M$ where $M\gg \ell$ where $\ell$ is the characteristic length of the problem; it turns out to be $\ell \sim 2$ for the static solution.
I want to perturb this solution at $t=0$ and see how it evolves with time. So, now I am interested in the time-dependent solution which satisfies
$ v(t=0,r) = \hat{v}(r)\,,\qquad \partial_t v(t,r)|_{t=0}=\delta \cdot 10^{-2}\,,\\
\partial_r v(t,r)|_{r=0}=0\,, \qquad v(t,r=M) = 0. $
where $\delta\ll 1$.
- While the numerical static solution satisfies $\hat{v}'(r=0)=0$, the time-dependent solution I got does not; I don't understand why. For a specific example with $\delta = 0.001$, I get $\partial_r v(t,r)|_{r=0} \sim -0.000701892$ irrespective of the value of the time variable `t`. In particular, it looks like the initial condition $v(t=0,r) = \hat{v}(r)$ is not satisfied. Is this normal?
- Moreover, I get the error [![enter image description here][1]][1], why? Are my boundary conditions really inconsistent?
This is my code.
V[v_] = (-1 + (1/8 (-9 + Sqrt[145]) - v)^2)^2 + 3 (1/8 (-9 + Sqrt[145]) - v)^3;
sol[rmax_, \[Delta]_] := Last@Last@ Last@NDSolve[{+D[v[r], {r, 2}] + 2/r D[v[r], {r, 1}] - (D[V[v], v] /. v -> v[r]) == 0, (D[v[r], r] /. r -> SetPrecision[10^-10, 100]) == 0, v[SetPrecision[10^-10, 100]] == SetPrecision[\[Delta], 100]}, v, {r, 10^-10, rmax}, WorkingPrecision -> 50, Method -> "Extrapolation"]
iTf = sol[30, 1.506400187591933106770472351];
Plot[{iTf[r]}, {r, 0, 30}, PlotRange -> All, Frame -> True]
iTfTime = v /. ParametricNDSolve[{-D[v[t, r], {t, 2}] + D[v[t, r], {r, 2}] + 2/r D[v[t, r], {r, 1}] - (D[V[v], v] /. v -> v[t, r]) == 0, v[0, r] == iTf[r], ((D[v[t, r], t]) /. t -> 0) == +\[Delta] 10^-2, (D[v[t, r], r] /. r -> 10^-10) == 0}, v, {t, 0, 40}, {r, 10^-10, 30}, {\[Delta]}, WorkingPrecision -> MachinePrecision, Method -> {"MethodOfLines", "TemporalVariable" -> t, "SpatialDiscretization" -> {"TensorProductGrid", "MinPoints" -> 200}}, PrecisionGoal -> 15]
iTfTimeToPlot0 = iTfTime[0.001];
(*Checking boundary conditions in generic points*)
((D[iTfTimeToPlot0[t, r], t] /. t -> 0) /. r -> RandomReal[]) == +0.001 10^-2
(*Output: True*)
((D[iTfTimeToPlot0[t, r], r] /. r -> 10^-10) /. t -> RandomReal[]) == 0
(*Output: False*)
[1]: https://i.stack.imgur.com/SPDnN.pngmathPhys User2019-04-19T11:01:29ZQMRITools: processing and visualization tools for quantitative MRI data
https://community.wolfram.com/groups/-/m/t/1661539
# QMRITools [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.2530801.svg)](https://doi.org/10.5281/zenodo.2530801) [![contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/dwyl/esta/issues) [![status](http://joss.theoj.org/papers/ef8bfb6c31499845d353b6a5af0d6300/status.svg)](http://joss.theoj.org/papers/ef8bfb6c31499845d353b6a5af0d6300)
***
![wolfram language][1] ![wolfram workbench][2] ![eclipse][3] ![wolfram mathematica][4]
***
![eye candy][13]
I have been working on this toolbox for years and recently started making the code available online, which took considerable extra effort: putting everything into the proper formats and generating documentation and usage demonstrations. I thought it would be nice to also show it to the Wolfram Community, so [here][5] it is.
## Contents
* [Summary](#summary)
* [Information](#information)
* [Documentation](#documentation)
* [Install toolbox](#install-toolbox)
* [Demonstrations](#demonstrations)
* [Using the toolbox](#using-the-toolbox)
* [Functionality](#functionality)
## Summary
``QMRITools`` is written in Mathematica using Wolfram Workbench and Eclipse and contains a collection of tools and functions for processing quantitative MRI data. The toolbox does not provide a GUI and its primary goal is to allow for fast and batch data processing, and facilitate development and prototyping of new functions. The core of the toolbox contains various functions for data manipulation and restructuring.
The toolbox was developed mostly in the context of quantitative muscle, nerve and cardiac magnetic resonance imaging. The library of functions grows along with the research it is used for and started as a toolbox to analyze DWI data of muscle. Although many different software packages and code repositories cover much of the functionality in this toolbox, I needed one that did it all. Furthermore, most diffusion packages are optimized for brain analysis and provide limited flexibility, and I just like writing code myself.
## Information
QMRITools is developed for [Mathematica](https://www.wolfram.com/mathematica/).
It contains the following toolboxes:
- CardiacTools
- CoilTools
- DenoiseTools
- DixonTools
- ElastixTools
- GeneralTools
- GradientTools
- ImportTools
- IVIMTools
- JcouplingTools
- MaskingTools
- NiftiTools
- PhysiologyTools
- PlottingTools
- ProcessingTools
- RelaxometryTools
- SimulationTools
- VisteTools
## Documentation
The website with more information about the toolbox can be found [here](https://mfroeling.github.io/QMRITools/). Documentation of all functions and their options is fully integrated in the Mathematica documentation. The toolbox always works within the latest version of Mathematica and does not support any backward compatibility.
After the toolbox is installed correctly it should show up as a package in the Mathematica add-ons (which was a pain and is badly documented).
![QMRITools package][6]
All code and documentation is maintained and uploaded to github using [Workbench](https://www.wolfram.com/workbench/). An online HTML version of the full documentation can be found [here](https://mfroeling.github.io/QMRITools/htmldoc/guide/QMRITools.html). To fix the HTML compilation I had to write some custom scripts based on code I found [online][7] (again a pain and badly documented).
![Guides QMRITools][8]
## Install toolbox
The latest release can be found [here](https://github.com/mfroeling/QMRITools/releases/download/2.1.2/QMRITools.zip).
Install the toolbox in the Mathematica UserBaseDirectory > Applications.
FileNameJoin[{$UserBaseDirectory, "Applications"}]
Ideally I would like to distribute the toolbox as a paclet, but again this has proven to be a pain and is badly documented.
Some functions of QMRITools call on external executables and software.
These executables need to be present in "QMRITools\Applications" and are included in the release.
If for any reason you want to use other (older/newer) versions you can replace them but functionality is not guaranteed.
For the latest version of these tools and their user license please visit their website.
* [dcm2niix](https://github.com/rordenlab/dcm2niix/)
* dcm2niix.exe
* [Elastix](http://elastix.isi.uu.nl/)
* elastix.exe
* transformix.exe
All functionality is tested under Windows 10 with the latest Mathematica version.
The Mathematica code is cross platform compatible with the exception of the external tools which are compiled for each OS.
The toolbox provides compiled versions for each OS but their functionality is not guaranteed.
The Elastix version used is 4.9 with OpenCL support. Additionally, Elastix needs to be compiled with the PCA metrics, all DTI-related parameters and all affine-related parameters.
Although cross-platform compatibility is provided, I have only limited options for testing, so if any issues arise please let me know.
## Demonstrations
The release contains a zip file [DemoAndTest.zip](https://github.com/mfroeling/QMRITools/releases/download/2.1.2/DemoAndTest.zip) in which there is a file ``demo.nb``, a folder ``DemoData`` and a folder ``Testing``.
To have a global overview of the functionality of the toolbox you can download this folder and run the ``demo.nb``.
By default the ``demo.nb`` looks for the folders ``DemoData`` and ``Testing`` in the same folder as the notebook.
![QMRITools demonstration][9]
In the first section of the demo notebook the toolbox is loaded and two tests are performed. The first test checks if all files needed to run the toolbox are present. The second test runs after the toolbox is loaded and checks if all defined functions and their options are correct.
![enter image description here][10]
## Using the toolbox
The toolbox can be loaded by using: `` <<QMRITools` ``
If you want to monitor the package loading you can use: `` QMRITools`$Verbose = True; <<QMRITools` ``
A list of all QMRITools packages is generated by
QMRIToolsPackages[]
A list of all QMRITools functions, or of the functions per toolbox, is generated by
QMRIToolsFunctions[]
QMRIToolsFunctions["toolboxname"]
To print the documentation of all functions use
QMRIToolsFuncPrint[]
QMRIToolsFuncPrint["toolboxname"]
A list of all functions and their help can be found in ``All-Functions.nb``, which is also available as a [pdf file](https://github.com/mfroeling/QMRITools/releases/download/2.0/All-Functions.pdf).
## Functionality
The toolbox contains over 250 functions and options for processing and analyzing data.
A summary of the core functionality is listed below.
![QMRITools overview][11]
* **Diffusion Analysis**
* Signal drift correction
* LLS, WLLS and iWLLS methods
* REKINDLE outlier detection
* IVIM fitting (fixed parameters, back-projection and Bayesian fitting)
* Parameter fitting using histogram analysis
* Joining and sorting of multiple series of the same volume
* Joining multiple stacks with slice overlap into one stack
* **Diffusion Gradients optimization**
* Single and multi shell
* Rotating and correcting Bmatrix
* Actual b-value estimation by gradient sequence integration
* Gradient visualization
* **Noise suppression**
* LMMSE noise suppression
* PCA noise suppression based on random matrix theory
* Anisotropic tensor smoothing using a diffusion filter
* **Importing and Exporting**
* Dicom data (classic and enhanced file format)
* Nifti data (.nii and .img .hdr, supports .gz files)
* Compatible with ExploreDTI and Viste for fiber tractography
* **Data visualization**
* 2D, 3D and 4D viewer
* Multiple images: transparent, difference and checkerboard overlays
* Legend bars and image labels
* Saving to pdf, jpg, animated gif and movie
![Plot 3D and 4D datasets][12]
* **Masking**
* Automated and threshold masking
* Extracting parameters from masks
* Smoothing masks
* Smoothing muscle segmentation
* **Motion and distortion correction (Registration using elastix)**
* Rigid, affine, b-spline and cyclic registration
* nD to nD registration
* Automated series processing
* Slice to slice motion correction of 3D and 4D data
* **Dixon Reconstruction**
* B0 phase unwrapping
* DIXON iDEAL reconstruction with T2*
* **Relaxometry fitting**
* T2 fitting
* T1rho fitting
* Tri Exponential T2 fitting
* EPG based T2 fitting with slice profile
* **Simulation Framework**
* Diffusion tensor simulation and analysis
* Bloch and EPG simulations
* Cardiac DTI models (fiber architecture)
* **Cardiac Diffusion analysis**
* Breathing motion correction
* Corrupted slice rejection
* Local myocardial coordinate system calculation
* Helix angle and fiber architecture matrix
* AHA 17 parameter description
* Transmural parameter description
## License
https://opensource.org/licenses/BSD-3-Clause
Note that restrictions imposed by these patents (and possibly others)
exist independently of and may be in conflict with the freedoms granted
in BSD-3-Clause license, which refers to copyright of the program, not patents
for any methods that it implements. Both copyright and patent law must
be obeyed to legally use and redistribute this program and it is not the
purpose of this license to induce you to infringe any patents or other
property right claims or to contest validity of any such claims. If you
redistribute or use the program, then this license merely protects you
from committing copyright infringement. It does not protect you from
committing patent infringement. So, before you do anything with this
program, make sure that you have permission to do so not merely in terms
of copyright, but also in terms of patent law.
Some code in the NiiTools packages was based on https://github.com/tomdelahaije/nifti-converter
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=wolfram_language.png&userId=1332602
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=wolfram_workbench.jpg&userId=1332602
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=eclipse.png&userId=1332602
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=wolfram_mathematica.png&userId=1332602
[5]: https://github.com/mfroeling/QMRITools
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=AddOns.PNG&userId=1332602
[7]: https://mathematica.stackexchange.com/questions/134069/workbench-eclipse-version-html-documentation-build-html-issues
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Guide.PNG&userId=1332602
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=demo.PNG&userId=1332602
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=testing.PNG&userId=1332602
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=OverView.png&userId=1332602
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=visualization.PNG&userId=1332602
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=animation-small.gif&userId=1332602
Martijn Froeling, 2019-04-16T08:11:08Z

Rubiks Cubes and OOP in Mathematica
https://community.wolfram.com/groups/-/m/t/1659434
I recently got [nerd sniped](https://xkcd.com/356/) by a [question on Stack Exchange](https://mathematica.stackexchange.com/a/195065/38205).
Originally I just wanted to show off my immutable OOP framework, which uses a bunch of Mathematica's built-in features to make something with a nice interface like `CloudExpression`, but then that devolved into redoing the entirety of [Roman Maeder's Rubik's Cube Demo](http://demonstrations.wolfram.com/RubiksCube/) as an object.
To make this work you need my `InterfaceObjects` package, which implements Mathematica OOP. You can single-line install that [here](https://paclets.github.io/PacletServer/interfaceobjects.html).
Then load the Rubik's cube package off GitHub [here](https://github.com/b3m2a1/mathematica-tools/blob/master/RubiksCube.wl) (you can also go there to see how this OOP package works):
Get["https://github.com/b3m2a1/mathematica-tools/raw/master/RubiksCube.wl"]
Now we can make an object:
new = RubiksCube[]
Visualize it:
new@"Show"[]
[![enter image description here][4]][4]
Make a new one of a different size and change its colors:
r1 = RubiksCube["Size" -> 4];
r1@"Colors" = ColorData["Atoms"] /@ {6, 7, 8, 9, 11, 13, 18};
r1@"Show"[Method -> {"ShrinkWrap" -> True}]
[![enter image description here][1]][1]
Or make two and plot them side by side with some twists:
r2 = RubiksCube["Origin" -> {10, 0, 0}, "Size" -> 10];
Show[
r1@"Twist"[.5, {"Y", 2}]@"Twist"[.5, {"Y", 4}]@"Show"[],
r2@"Show"[],
PlotRange -> All
]
You can also `Manipulate` the twisting, if you want:
Manipulate[
Fold[
#@"Twist"[#2[[1]], #2[[2]]] &,
new,
Thread[
{
{b, f, l, r, d, u},
{"Back", "Front", "Left", "Right", "Down", "Up"}
}
]
]@"Show"[],
{b, 0, 2 π, .01},
{f, 0, 2 π, .01},
{l, 0, 2 π, .01},
{r, 0, 2 π, .01},
{d, 0, 2 π, .01},
{u, 0, 2 π, .01},
DisplayAllSteps -> True
]
[![enter image description here][5]][5]
The OOP interface itself is actually very nice, as it requires providing and documenting only a single symbol plus some `"Methods"`, which are discoverable by name, along with the `"Properties"`:
r1@"Methods"
{"Show", "Twist"}
r1@"Properties"
{"Size", "Origin", "Colors", "Cuboids", "Version", "Properties", "Methods"}
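The objects above are immutable: `"Twist"` returns a new cube rather than mutating in place, which is why the calls chain. The pattern itself is language-agnostic; a minimal sketch in plain Python (hypothetical `Cube` class, not the actual InterfaceObjects implementation):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Cube:
    """Toy immutable object: every 'method' returns a new instance."""
    size: int = 3
    twists: tuple = ()

    def twist(self, angle, axis):
        # Chainable: cube.twist(...).twist(...) leaves the original intact.
        return replace(self, twists=self.twists + ((angle, axis),))

c0 = Cube()
c1 = c0.twist(0.5, ("Y", 2)).twist(0.5, ("Y", 4))
print(c0.twists)  # () -- the original is untouched
print(c1.twists)  # ((0.5, ('Y', 2)), (0.5, ('Y', 4)))
```

The same trade-off applies in both languages: copies are cheap to reason about, and chaining reads like the sequence of moves itself.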
[1]: https://i.stack.imgur.com/tPUcO.png
[2]: https://i.stack.imgur.com/CqJvT.png
[3]: https://i.stack.imgur.com/KKhN6.png
[4]: https://i.stack.imgur.com/pYwi3.png
[5]: https://i.stack.imgur.com/23f6J.png
[6]: https://i.stack.imgur.com/fbJR4.png
[7]: https://i.stack.imgur.com/bFcJo.png
[8]: https://i.stack.imgur.com/3PRrr.png
[9]: https://i.stack.imgur.com/QilkK.png
[10]: https://i.stack.imgur.com/b8lDe.png
b3m2a1, 2019-04-13T00:38:14Z

[GIF] Five Easy Pieces (Rotating truncation of the tetrahedron)
https://community.wolfram.com/groups/-/m/t/1660413
![Rotating truncation of the tetrahedron][1]
**Five Easy Pieces**
Practically the same idea (and code) as [_Give Me Some Space_][2], just truncating the tetrahedron rather than rectifying it.
The code for the `Manipulate` is below; when exporting to a GIF I used `"DisplayDurations" -> Prepend[Table[1/50, {199}], 1/2]` inside `Export` to get the animation to pause on the original tetrahedron for half a second.
DynamicModule[{viewpoint = {Cos[2π/3], Sin[2π/3], 1/Sqrt[2]},
g = .6, d = .2, n = 4,
v = PolyhedronData["Tetrahedron", "VertexCoordinates"],
e = {{2, 3, 4}, {1, 4, 3}, {4, 1, 2}, {3, 2, 1}},
tt = PolyhedronData["TruncatedTetrahedron", "VertexCoordinates"],
te = PolyhedronData["TruncatedTetrahedron", "Faces"][[2]],
cols = RGBColor /@ {"#e43a19", "#f2f4f7", "#111f4d"},
s, r},
Manipulate[
s = Haversine[π t];
r = Haversine[2 π t];
Graphics3D[{Thickness[.004], EdgeForm[None],
Table[
{GraphicsComplex[
(1/2 + r/2) v[[i]] + RotationTransform[2 π/3 s, v[[i]]][#] & /@ (v/4),
{cols[[1]], Polygon[e[[i]]], cols[[2]], Polygon[e[[Drop[Range[4], {i}]]]]}]},
{i, 1, Length[v]}],
GraphicsComplex[
RotationTransform[0, {0, 0, 1}][RotationTransform[π s, {-Sqrt[3], -3, Sqrt[6]}][1/4 tt]],
{cols[[1]], Polygon[te[[1, ;; 4]]], cols[[2]], Polygon[te[[1, 5 ;;]]]}]},
Boxed -> False, ImageSize -> {540, 540}, PlotRange -> 2.5,
Background -> cols[[-1]], ViewPoint -> 10 viewpoint,
ViewAngle -> π/125, ViewVertical -> {0, 0, 1},
SphericalRegion -> True,
Lighting -> {{"Ambient", GrayLevel[d]},
{"Directional", GrayLevel[g], ImageScaled[{2, 0, 2}]},
{"Directional", GrayLevel[g], ImageScaled[{-2, 2, 2}]},
{"Directional", GrayLevel[g], ImageScaled[{0, -2, 2}]}}],
{t, 0, 1}]
]
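As an aside, `Haversine[π t]` is just the smooth easing sin²(π t/2), which starts and ends with zero angular velocity, while `Haversine[2π t]` = sin²(π t) rises to 1 and returns to 0, which is what separates and then reassembles the pieces. A quick numeric check (plain Python transcription, not part of the original code):

```python
import math

def haversine(x):
    # Mathematica's Haversine[x] == Sin[x/2]^2
    return math.sin(x / 2) ** 2

# Easing used for the rotation parameter: s = Haversine[pi t], t in [0, 1].
s = [haversine(math.pi * t / 10) for t in range(11)]
print(s[0], s[-1])  # 0.0 at t=0, 1.0 at t=1
```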
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=tt8r.gif&userId=610054
[2]: https://community.wolfram.com/groups/-/m/t/962240
Clayton Shonkwiler, 2019-04-14T21:23:52Z

Anamorphosis and reflection between a Conical Mirror and a Cylinder
https://community.wolfram.com/groups/-/m/t/1660442
![intro][1]
Catoptric or mirror anamorphoses are deformed images that can only be seen undeformed with the help of a mirror.
Here, we experiment with a conical mirror surrounded by a vertical cylindrical surface.
We want to compute points of a deformed (anamorphic) image on the cylinder's inner surface such that it is perceived by the viewer as an undeformed image when looking down in the cone shaped mirror.
![enter image description here][2]
The above drawing shows the anamorphic setup: a conical mirror (radius r=1, height=h), surrounded by a cylindrical surface (radius R>r).
The viewpoint V is along the vertical axis of the cylinder (at infinity relative to the size of the cone).
A point S (xa,ya,za) on the cylinder's inner surface is reflected by the mirror at Q to the viewer's eye at V. The viewer perceives the point at I (xi,yi,0). The lines VQ and SQ form equal angles with the normal to the cone at Q.
![enter image description here][3]
The above animation demonstrates the relation between the point I traveling along a straight line while its anamorphic map follows a curve on the inner surface of the cylinder.
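Working in the vertical half-plane through the axis, the reflection law pins down the map explicitly (a sketch under the stated setup: unit cone base radius, apex height h, viewer at infinity):

```latex
% Vertical ray through I = (x_i, y_i, 0), with \rho = \sqrt{x_i^2 + y_i^2},
% meets the cone z = h(1 - r) at
Q = \bigl(x_i,\; y_i,\; h(1-\rho)\bigr).

% In (r, z) coordinates the cone's outward unit normal is
n = (\cos\theta,\, \sin\theta), \qquad \tan\theta = 1/h ,

% so the downward viewing direction d = (0, -1) reflects to
d' = d - 2 (d \cdot n)\, n = (\sin 2\theta,\; -\cos 2\theta),

% a ray of slope -\cot 2\theta. Following it out to the cylinder r = R:
z_a = h(1-\rho) - (R-\rho)\cot\bigl(2\arctan(1/h)\bigr), \qquad
(x_a, y_a) = \tfrac{R}{\rho}\,(x_i, y_i).
```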
We now write a function that expresses this geometric relationship:
anaCone2Cylinder[imagePoint : {xi_, yi_}, coneHeight : h_, cylinderRadius : R_] :=
 {(R xi)/Sqrt[xi^2 + yi^2],
  (R yi)/Sqrt[xi^2 + yi^2],
  h - h Sqrt[xi^2 + yi^2] + (-R + Sqrt[xi^2 + yi^2]) Cot[2 ArcTan[1/h]]}
This function maps an image point to an anamorphic point.
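For readers outside Mathematica, the same mapping transcribes directly (a sketch in plain Python; the function name `ana_cone_to_cylinder` and the numbers in the check are mine):

```python
import math

def ana_cone_to_cylinder(xi, yi, cone_height, cylinder_radius):
    """Map an image point (xi, yi) in the base plane to its anamorphic
    point on a cylinder of radius R around a unit-radius cone of height h.
    Points near the center (rho -> 0) blow up, as the post notes."""
    h, R = cone_height, cylinder_radius
    rho = math.hypot(xi, yi)
    cot = 1 / math.tan(2 * math.atan(1 / h))
    return (R * xi / rho,
            R * yi / rho,
            h - h * rho + (rho - R) * cot)

x, y, z = ana_cone_to_cylinder(0.5, 0.0, 1.57, 1.15)
print(round(math.hypot(x, y), 6))  # 1.15 -- the point lies on the cylinder
```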
To test our function, we [again][4] use one of the logos generated by the Wolfram Demonstration "[Character Rotation Patterns][5]" by Chris Carlson, which, after conversion to a GraphicsComplex, looks like this:
ig = ImageGraphics[sun, 2, Method -> "Exact"];
lines = Normal[ig][[1, -1]] /. FilledCurve -> Identity;
scaledLines = Map[#/948 - .5 &, lines, {6}]
Graphics[{Thick, scaledLines}]
![enter image description here][6]
We now map the point coordinates of the lines in the GraphicsComplex to their anamorphic counterparts {xa,ya,za} using the function anaCone2Cylinder.
anaLines = Map[anaCone2Cylinder[#, 1.57, 1.15] &, scaledLines, {5}];
Graphics3D[{{Opacity[.2], White,
Cylinder[{{0, 0, .3}, {0, 0, 1.2}}, 1.25]},
AbsoluteThickness[3], %}, Boxed -> False]
![enter image description here][7]
We then convert the anamorphic 3D drawing to the two-dimensional developed interior face of the cylinder as {ArcTan[xa, ya], za}. This GIF illustrates the unfolding of the cylindrical image:
![enter image description here][8]
developLineCoordinates =
  Flatten[Map[{ArcTan @@ Most[#], Last[#]} &, anaLines, {5}][[-1]], 1][[All, 1]];
lstPP = Partition[#, 2, 1] & /@ developLineCoordinates;
DeleteCases[#, _?(EuclideanDistance @@ # > 1 &)] & /@ lstPP;
Graphics[{AbsoluteThickness[2], Line /@ %}, FrameTicks -> None,
 Frame -> True, ImageSize -> 600]
develop = Image[%];
![enter image description here][9]
After printing the cylinder development to the right size (52 cm by 14 cm), it is glued around a cardboard cylinder (radius 8 cm). A home-made conical mirror (base radius 7 cm, height 12 cm) is put inside the cylinder at the center. The anamorphic image on the cylinder wall is reflected as the undeformed original by the conical mirror. Here is the result (the center is hidden by a coin resting on top of the cone, since anamorphic maps of points close to the cone center lie at infinite height on the cylinder wall):
![enter image description here][10]
**Another application** of the function is to use one of the many popular curves ("...-like curve") that can be extracted using Interpreter:
Interpreter["PopularCurve"]["bunny-like curve"];
bugsbunnyPrimitives =
  First@Cases[
    First[ParametricPlot[
        Entity["PopularCurve", "BunnyCurve"]["ParametricEquations"][t],
        {t, 0, 30 \[Pi]}]] /.
      {x_?NumericQ, y_?NumericQ} :> {x - 85, y - 50}/800,
    _Line, \[Infinity]];
![enter image description here][11]
The anamorphic map is created by applying anaCone2Cylinder to the point coordinates:
anaBunny = Map[anaCone2Cylinder[#, 1.755, 1.25] &, bugsbunnyPrimitives, {2}];
Animate[
 Graphics3D[
  Rotate[{{Opacity[.2], White, Cylinder[{{0, 0, .25}, {0, 0, 1}}, 1.25]},
    AbsoluteThickness[3], Red, anaBunny}, \[Phi], {0, 0, 1}],
  Boxed -> False],
 {\[Phi], 0, 2 \[Pi]}]
![enter image description here][12]
This is the developed cylinder:
developRules =
  {x_?NumericQ, y_?NumericQ, z_?NumericQ} :> {ArcTan[x, y], z};
developed = anaBunny /. developRules;
DeleteCases[Partition[developed[[1]], 2, 1], _?(EuclideanDistance @@ # > 1 &)];
Graphics[{Red, AbsoluteThickness[3], Line /@ %}, FrameTicks -> None,
 Frame -> True]
![enter image description here][13]
And the result, printed, glued inside a cylinder and using the same setup as in the previous example:
![enter image description here][14]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=logojointpics.png&userId=68637
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3379geometrycone.png&userId=68637
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=animationskipeedframes.gif&userId=68637
[4]: https://community.wolfram.com/groups/-/m/t/1646795?p_p_auth=1iKz6YW8
[5]: http://demonstrations.wolfram.com/CharacterRotationPatterns/
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=S-logooutlinecopy.png&userId=68637
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=9897logocylinder.png&userId=68637
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=unfolding.gif&userId=68637
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=4680logodeveloped.png&userId=68637
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=logofinalcombi.png&userId=68637
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=9385bunnyoriginal.png&userId=68637
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=cylinderanimation.gif&userId=68637
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=8734bunnydeveloped.png&userId=68637
[14]: https://community.wolfram.com//c/portal/getImageAttachment?filename=6598bunnyfinal.jpg&userId=68637
Erik Mahieu, 2019-04-15T07:18:15Z

Avoid "Unable to read from network interfaces. Requires read permissions"?
https://community.wolfram.com/groups/-/m/t/1626011
Hello Wolfram Community,
I'm trying to use the WL NetworkPacketCapture functions in a Mathematica notebook, and in this particular case I'm getting an error telling me to "elevate permissions" on my Mac. The problem is I can't find any documentation on how to do so. Any ideas?
Unlike, say Cloud permissions, this is low level system permissions. Any links to documentation would be highly appreciated.
"Unable to read from network interfaces. Requires read permissions for devices /dev/bpf"
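The message points at OS-level read permission on the BPF devices that packet capture uses on macOS, not at anything inside Mathematica. A quick way to check what the current user can actually read (plain Python sketch; the function name is mine):

```python
import glob
import os

def readable_bpf_devices():
    """List the /dev/bpf* capture devices the current user can read.
    Packet capture needs read access to at least one of them."""
    return [d for d in glob.glob("/dev/bpf*") if os.access(d, os.R_OK)]

print(readable_bpf_devices())
```

If the list is empty, the usual remedies (assumptions to verify for your own setup) are running the kernel with elevated rights or widening the device permissions, e.g. `sudo chmod o+r /dev/bpf*`; Wireshark's ChmodBPF helper automates this persistently, while a plain chmod does not survive a reboot.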
Thanks
Jose M., 2019-03-05T17:12:57Z