Community RSS Feed
https://community.wolfram.com
RSS feed for Wolfram Community, showing discussions tagged Staff Picks, sorted by activity.

Solver for unsteady flow with the use of Mathematica FEM
https://community.wolfram.com/groups/-/m/t/1433064
![fig7][331]
I started the discussion [here][1], but I also want to repeat it on this forum.
There are many commercial and open-source codes for solving unsteady flow problems.
We are interested in the possibility of solving these problems using Mathematica FEM. Solvers for stationary incompressible isothermal flows have been proposed previously:
Solving 2D Incompressible Flows using Finite Elements:
http://community.wolfram.com/groups/-/m/t/610335
FEM Solver for Navier-Stokes equations in 2D:
http://community.wolfram.com/groups/-/m/t/611304
Nonlinear FEM Solver for Navier-Stokes equations in 2D:
https://mathematica.stackexchange.com/questions/94914/nonlinear-fem-solver-for-navier-stokes-equations-in-2d/96579#96579
We give several examples of the successful application of the finite element method to unsteady problems, including nonisothermal and compressible flows. We will begin with two standard tests for this class of problems proposed by
M. Schäfer and S. Turek, Benchmark computations of laminar flow around a cylinder (with support by F. Durst, E. Krause and R. Rannacher). In E. Hirschel, editor, Flow Simulation with High-Performance Computers II. DFG priority research program results 1993-1995, number 52 in Notes Numer. Fluid Mech., pp. 547-566. Vieweg, Wiesbaden, 1996. https://www.uio.no/studier/emner/matnat/math/MEK4300/v14/undervisningsmateriale/schaeferturek1996.pdf
![fig8][332]
Let us consider the flow in a flat channel around a cylinder at Reynolds number 100, when self-oscillations occur, leading to the detachment of vortices in the aft part of the cylinder. In this problem it is necessary to calculate the drag coefficient, lift coefficient and the pressure difference between the frontal and aft parts of the cylinder as functions of time, the maximum drag coefficient, the maximum lift coefficient, the Strouhal number, and the pressure difference $\Delta P(t)$ at $t = t_0 + 1/2f$. The frequency $f$ is determined from the period of oscillation of the lift coefficient, $f = f(c_L)$. The data for this test, the code and the results are shown below.
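For reference, the benchmark quantities are defined as follows (a restatement of the standard Schäfer-Turek definitions, added here for clarity; it is not part of the original code):

$$\mathrm{Re}=\frac{\bar{U} D}{\nu},\qquad c_D=\frac{2 F_D}{\rho\, \bar{U}^2 D},\qquad c_L=\frac{2 F_L}{\rho\, \bar{U}^2 D},\qquad \mathrm{St}=\frac{D f}{\bar{U}},$$

where $D = 2 r_0 = 0.1$ is the cylinder diameter and $\bar{U} = 2 U_m/3 = 1$ is the mean inflow velocity of the parabolic profile, so that $\nu = 10^{-3}$ indeed gives $\mathrm{Re} = 100$. This is also why the code computes the Strouhal number as `.1*16*k0/2/Pi`: the fit uses $\sin(16 k t + c)$, so $f = 16 k_0/2\pi$.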
H = .41; L = 2.2; {x0, y0, r0} = {1/5, 1/5, 1/20};
Ω = RegionDifference[Rectangle[{0, 0}, {L, H}], Disk[{x0, y0}, r0]];
RegionPlot[Ω, AspectRatio -> Automatic]
K = 2000; Um = 1.5; ν = 10^-3; t0 = .004;
U0[y_, t_] := 4*Um*y/H*(1 - y/H)
UX[0][x_, y_] := 0;
VY[0][x_, y_] := 0;
P0[0][x_, y_] := 0;
Do[
 {UX[i], VY[i], P0[i]} =
  NDSolveValue[{{Inactive[Div][{{-μ, 0}, {0, -μ}}.Inactive[Grad][u[x, y], {x, y}], {x, y}] +
       D[p[x, y], x] + (u[x, y] - UX[i - 1][x, y])/t0 +
       UX[i - 1][x, y]*D[u[x, y], x] +
       VY[i - 1][x, y]*D[u[x, y], y],
      Inactive[Div][{{-μ, 0}, {0, -μ}}.Inactive[Grad][v[x, y], {x, y}], {x, y}] +
       D[p[x, y], y] + (v[x, y] - VY[i - 1][x, y])/t0 +
       UX[i - 1][x, y]*D[v[x, y], x] +
       VY[i - 1][x, y]*D[v[x, y], y],
      D[u[x, y], x] + D[v[x, y], y]} == {0, 0, 0} /. μ -> ν,
    {DirichletCondition[{u[x, y] == U0[y, i*t0], v[x, y] == 0},
      x == 0.],
     DirichletCondition[{u[x, y] == 0., v[x, y] == 0.},
      0 <= x <= L && y == 0 || y == H],
     DirichletCondition[{u[x, y] == 0,
       v[x, y] == 0}, (x - x0)^2 + (y - y0)^2 == r0^2],
     DirichletCondition[p[x, y] == P0[i - 1][x, y], x == L]}}, {u, v,
    p}, {x, y} ∈ Ω,
   Method -> {"FiniteElement",
     "InterpolationOrder" -> {u -> 2, v -> 2, p -> 1},
     "MeshOptions" -> {"MaxCellMeasure" -> 0.001}}], {i, 1, K}];
{ContourPlot[UX[K/2][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, Contours -> 20,
PlotPoints -> 25, PlotLabel -> u, MaxRecursion -> 2],
ContourPlot[VY[K/2][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, Contours -> 20,
PlotPoints -> 25, PlotLabel -> v, MaxRecursion -> 2,
PlotRange -> All]} // Quiet
{DensityPlot[UX[K][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, PlotPoints -> 25,
PlotLabel -> u, MaxRecursion -> 2],
DensityPlot[VY[K][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, PlotPoints -> 25,
PlotLabel -> v, MaxRecursion -> 2, PlotRange -> All]} // Quiet
dPl = Interpolation[
Table[{i*t0, (P0[i][.15, .2] - P0[i][.25, .2])}, {i, 0, K, 1}]];
cD = Table[{t0*i,
     NIntegrate[(-ν*(-Sin[θ]*(Sin[θ]*
              Derivative[0, 1][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
             Cos[θ]*
              Derivative[1, 0][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]]) +
           Cos[θ]*(Sin[θ]*
              Derivative[0, 1][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
             Cos[θ]*
              Derivative[1, 0][VY[i]][x0 + r Cos[θ],
               y0 + r Sin[θ]]))*Sin[θ] -
        P0[i][x0 + r Cos[θ], y0 + r Sin[θ]]*Cos[θ]) /. {r -> r0},
      {θ, 0, 2*Pi}]}, {i, 1000, 2000}]; // Quiet
cL = Table[{t0*i,
     -NIntegrate[(-ν*(-Sin[θ]*(Sin[θ]*
               Derivative[0, 1][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
              Cos[θ]*
               Derivative[1, 0][UX[i]][x0 + r Cos[θ],
                y0 + r Sin[θ]]) +
            Cos[θ]*(Sin[θ]*
               Derivative[0, 1][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
              Cos[θ]*
               Derivative[1, 0][VY[i]][x0 + r Cos[θ],
                y0 + r Sin[θ]]))*Cos[θ] +
         P0[i][x0 + r Cos[θ], y0 + r Sin[θ]]*Sin[θ]) /. {r -> r0},
       {θ, 0, 2*Pi}]}, {i, 1000, 2000}]; // Quiet
{ListLinePlot[cD,
AxesLabel -> {"t", "\!\(\*SubscriptBox[\(c\), \(D\)]\)"}],
ListLinePlot[cL,
AxesLabel -> {"t", "\!\(\*SubscriptBox[\(c\), \(L\)]\)"}],
Plot[dPl[x], {x, 0, 8}, AxesLabel -> {"t", "ΔP"}]}
f002 = FindFit[cL, a*.5 + b*.8*Sin[k*16*t + c*1.], {a, b, k, c}, t]
Plot[Evaluate[a*.5 + b*.8*Sin[k*16*t + c*1.] /. f002], {t, 4, 8},
Epilog -> Map[Point, cL]]
k0=k/.f002;
Struhalnumber = .1*16*k0/2/Pi
cLm = MaximalBy[cL, Last]
sol = {Max[cD[[All, 2]]], Max[cL[[All, 2]]], Struhalnumber,
dPl[cLm[[1, 1]] + Pi/(16*k0)]}
Fig. 1 shows the components of the flow velocity and the required coefficients. Our solution of the problem and the bounds required by the test:
{3.17805, 1.03297, 0.266606, 2.60427}
lowerbound= { 3.2200, 0.9900, 0.2950, 2.4600};
upperbound = {3.2400, 1.0100, 0.3050, 2.5000};
![Fig1][2]
Note that our results differ from the allowable range by several percent, but if you look at all the results in Table 4 of the cited article, the agreement is quite acceptable. The worst prediction is for the Strouhal number. We note that we use the explicit Euler method, which underestimates the Strouhal number, as follows from the data in Table 4.
The next test differs from the previous one in that the inlet velocity varies in time according to `U0[y_, t_] := 4*Um*y/H*(1 - y/H)*Sin[Pi*t/8]`. It is necessary to determine the time dependence of the drag and lift coefficients over a half-period of oscillation, as well as the pressure drop at the final moment of time. Fig. 2 shows the components of the flow velocity and the required coefficients. Our solution of the problem and the bounds required by the test:
sol = {3.0438934441256595`,
0.5073345082785012`, -0.11152933279750943`};
lowerbound = {2.9300, 0.4700, -0.1150};
upperbound = {2.9700, 0.4900, -0.1050};
![Fig2][3]
For this test, the agreement with the data in Table 5 is good. Consequently, the two tests are almost completely passed.
I wrote and debugged this code using Mathematica 11.0.1. But when I ran it in Mathematica 11.3, I got strange pictures: for example, the disk is rendered as a hexagon, and the size of the region changes.
![Fig3][4]
In addition, the numerical solution of the problem has changed; for example, for test 2D2:
{3.17805, 1.03297, 0.266606, 2.60427} v11.0.1
{3.15711, 1.11377, 0.266043, 2.54356} v11.3
The attached file contains the working code for test 2D3 describing the flow around the cylinder in a flat channel with a change in the flow velocity.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D2.png&userId=1218692
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D2.png&userId=1218692
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D3.png&userId=1218692
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Math11.3.png&userId=1218692
[331]: http://community.wolfram.com//c/portal/getImageAttachment?filename=CylinderRe100test2D2.gif&userId=1218692
[332]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2D2test.png&userId=1218692

Alexander Trounev, 2018-08-31

Sequence-to-sequence regression with recurrent neural networks
https://community.wolfram.com/groups/-/m/t/1647625
I recently wanted to learn more about recurrent neural networks (RNNs) and how to use them for regression problems. In particular, I wanted to try and combine RNNs with the ideas for Bayesian error bar estimation I discussed in my blog/community posts a while back:
https://blog.wolfram.com/2018/05/31/how-optimistic-do-you-want-to-be-bayesian-neural-network-regression-with-prediction-errors/
https://community.wolfram.com/groups/-/m/t/1319745
I also found the following answer on StackExchange about sequence learning with `SequenceAttentionLayer`, which helped me quite a bit to understand how to use these layers:
https://mathematica.stackexchange.com/a/143672/43522
With those resources in hand, I wanted to give time-series regression a shot.
First, let's use `FinancialData` as an easy source of a test dataset. Here I download 5 years of data from 5 different indices and turn them into a regularly sampled matrix of vectors and then split the dataset into 80/20% training/test data:
indices = {"GOOGL", "AS:ASML", "MSFT", "AAPL", "GE"};
data = FinancialData[indices, {Today - 5 Quantity[1, "Years"], Today}];
DateListPlot[data]
With[{
matrix = N@Normal[
TimeSeriesRescale[
TimeSeriesResample[TimeSeriesThread[Identity, TimeSeries /@ data]], {0, Max[Length /@ data] - 1}]
][[All, 2]]
},
{fe, data} = FeatureExtraction[matrix, "StandardizedVector", {"ExtractorFunction", "ExtractedFeatures"}]
];
{trainingData, testData} = TakeDrop[data, Round[0.8*Length[data]]];
Dimensions /@ {trainingData, testData}
![enter image description here][1]
Next, we define a function that can generate training data for the network from the data matrix we just made:
Clear[genFun];
genFun[data_, { seqLengthIn_, seqLengthout_, offset_}] := With[{
dim = Dimensions[data][[2]],
max = Max[1, Length[data] - seqLengthout - offset - seqLengthIn - 2]
},
Function[
With[{ri = RandomInteger[{1, max}, #BatchSize]},
<|
"Input" -> Function[data[[# ;; # + seqLengthIn - 1]]] /@ ri,
"Target" -> Join[
ConstantArray[{ConstantArray[0., dim]}, #BatchSize],
Function[With[{j = # + offset + seqLengthIn - 1}, data[[j ;; j + seqLengthout - 1]]]] /@ ri,
2
]
|>]]];
Each time the function is called, it generates a batch of sequences that will be used as training input and target sequences that the network should try to learn and reproduce, but the targets are prepended with a blank start-of-sequence vector:
Dimensions /@ genFun[data, {10, 6, 1}][<|"BatchSize" -> 3|>]
Out[72]= <|"Input" -> {3, 10, 5}, "Target" -> {3, 7, 5}|>
Next we put together our neural network:
Clear[encoderNetAttend, decoderNet, lossLayer, trainingNetAttention];
encoderNetAttend[n_, dropout_] := NetGraph[<|
"lstm" -> LongShortTermMemoryLayer[n, "Dropout" -> dropout],
"last" -> SequenceLastLayer[],
"post" ->
NetChain[{LinearLayer[n], ElementwiseLayer["SELU"],
DropoutLayer[], LinearLayer[]}]
|>,
{"lstm" -> "last" -> "post" -> NetPort["StateInit"], "lstm" -> NetPort["States"]}
];
decoderNet[n_, dropout_] := NetGraph[
<|
"part1" -> PartLayer[{All, 1}], "part2" -> PartLayer[{All, 2}],
"lstm" -> LongShortTermMemoryLayer[n, "Dropout" -> dropout],
"shape1" -> ReshapeLayer[{n, 1}], "shape2" -> ReshapeLayer[{n, 1}], "catState" -> CatenateLayer[2]
|>,
{
NetPort["Input"] -> NetPort["lstm", "Input"],
NetPort["lstm", "Output"] -> NetPort["Output"],
NetPort["StateIn"] -> "part1" -> NetPort["lstm", "State"],
NetPort["StateIn"] -> "part2" -> NetPort["lstm", "CellState"],
NetPort["lstm", "State"] -> "shape1", NetPort["lstm", "CellState"] -> "shape2",
{"shape1", "shape2"} -> "catState" -> NetPort["StateOut"]
},
"StateIn" -> {n, 2}
];
lossLayer["LogPrecision"] = ThreadingLayer@Function[{yObserved, yPredicted, logPrecision},
(yPredicted - yObserved)^2*Exp[logPrecision] - logPrecision
];
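This loss is, up to constants, the negative log-likelihood of a Gaussian whose log-precision is predicted alongside the mean (a restatement added here for clarity):

$$-\log p(y \mid \mu, \lambda) = \tfrac{1}{2} e^{\lambda} (y - \mu)^2 - \tfrac{1}{2}\lambda + \tfrac{1}{2}\log 2\pi, \qquad \tau = e^{\lambda},$$

so minimizing $e^{\lambda}(y-\mu)^2 - \lambda$ (the layer above, with the overall factor $\tfrac{1}{2}$ and the additive constant dropped) maximizes the likelihood. This is also why `Exp[-#logTau]` is used as a variance when visualizing the predictions later on.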
regressionNet[n_, dimIn_, dropout : _?NumericQ : 0.5] :=
NetInitialize@NetGraph[
<|
"encoder" -> encoderNetAttend[n, dropout],
"decoder" -> decoderNet[n, dropout],
"attention" -> SequenceAttentionLayer["Bilinear"],
"catenate" -> CatenateLayer[2],
"regression" -> NetMapOperator[LinearLayer[{2, dimIn}]],
"part1" -> PartLayer[{All, 1}], "part2" -> PartLayer[{All, 2}]
|>,
{
NetPort["Input"] -> "encoder",
NetPort["encoder", "StateInit"] -> NetPort["decoder", "StateIn"],
NetPort["encoder", "States"] -> NetPort["attention", "Input"],
NetPort["Prediction"] -> NetPort["decoder", "Input"],
NetPort[{"decoder", "Output"}] ->
NetPort["attention", "Query"], {"decoder", "attention"} ->
"catenate" -> "regression",
"regression" -> "part1" -> NetPort["mu"],
"regression" -> "part2" -> NetPort["logTau"]
},
"Input" -> {"Varying", dimIn},
"Prediction" -> {"Varying", dimIn}
];
trainingNetAttention[n_, dimIn_, dropout : _?NumericQ : 0.5] := NetInitialize@NetGraph[
<|
"regression" -> regressionNet[n, dimIn, dropout],
"loss" -> lossLayer["LogPrecision"],
"total" -> SummationLayer[], "most" -> SequenceMostLayer[], "rest" -> SequenceRestLayer[]
|>,
{
NetPort["Input"] -> NetPort["regression", "Input"],
NetPort["Target"] -> "most" -> NetPort["regression", "Prediction"],
NetPort["Target"] -> "rest",
{"rest", NetPort["regression", "mu"], NetPort["regression", "logTau"]} -> "loss" -> "total" -> NetPort["Loss"]
},
"Input" -> {"Varying", dimIn},
"Target" -> {"Varying", dimIn}
];
nHidden = 250;
pDrop = 0.25;
NetInformation[encoderNetAttend[nHidden, pDrop], "SummaryGraphic"]
NetInformation[decoderNet[nHidden, pDrop], "SummaryGraphic"]
NetInformation[regressionNet[nHidden, 2, 0.25], "SummaryGraphic"]
NetInformation[ trainingNetAttention[nHidden, 2, 0.25], "SummaryGraphic"]
![enter image description here][2]
The idea here is that the encoder produces a state vector that the decoder uses to start making predictions. The decoder, in turn, accepts the network's own predictions to iteratively crank out the next one. During training, though, we don't yet have the network's actual predictions, so instead we feed in the sequence of actual target values and offset the target sequence at the loss layer by one with a `SequenceRestLayer`.
Time to train the network:
seqConfig = {15, 7, 1};
trainedTrainObject = NetTrain[
trainingNetAttention[nHidden, Length[indices], pDrop],
genFun[trainingData, seqConfig],
All,
TargetDevice -> "GPU", LossFunction -> "Loss",
TimeGoal -> 30*60, BatchSize -> 200,
ValidationSet -> genFun[testData, seqConfig][<|"BatchSize" -> 200|>]
];
trainedNet = trainedTrainObject["TrainedNet"];
trainedTrainObject["EvolutionPlots"]
![enter image description here][3]
To make predictions, the network needs to eat its own results iteratively, which we can do with `NestList`:
makePredictorFunction[trainedNet_, evaluationMode : ("Test" | "Train") : "Test", device : ("CPU" | "GPU") : "CPU"] := With[{
dim = Last@NetExtract[trainedNet, {"Input"}],
encoder = NetExtract[trainedNet, {"regression", "encoder"}],
regressionNet =
NetDelete[NetExtract[trainedNet, "regression"], "encoder"]
},
Function[{input, n},
Module[{
encodedInput = encoder[input, NetEvaluationMode -> evaluationMode, TargetDevice -> device],
vectorInputQ = Depth[input] === 4,
length = Length[input],
reshape
},
reshape = If[vectorInputQ, ConstantArray[#, Length[input]] &, Identity];
GeneralUtilities`AssociationTranspose@NestList[
regressionNet[
<|"Input" -> encodedInput["States"], "Prediction" -> #mu, "StateIn" -> #StateOut|>,
NetEvaluationMode -> evaluationMode,
TargetDevice -> device] &,
<|
"mu" -> reshape@{ConstantArray[0, dim]},
"StateOut" -> encodedInput["StateInit"]
|>,
n
][[2 ;;, {"mu", "logTau"}, If[vectorInputQ, All, Unevaluated[Sequence[]]], 1]]
]
]
];
For example, we can predict 7 steps ahead based on the first 30 points of historical data:
predictor = makePredictorFunction[trainedNet, "Train", "CPU"];
Dimensions /@ predictor[data[[;; 30]], 7]
Out[299]= <|"mu" -> {7, 5}, "logTau" -> {7, 5}|>
Here's a manipulate to visualize the predictions against the actual data. We sample the network several times to get a sense of the uncertainty from the training weights and combine that uncertainty with the variance predicted by the network:
Manipulate[
Manipulate[
Quiet@With[{
predictions = Dot[
{{1., 1.}, {1., 0.}, {1., -1.}},
{Mean[#mu],
nstdev*Sqrt[Variance[#mu] + Mean[Exp[-#logTau]]]} &@Map[
Transpose,
predictor[Table[dat[[Max[1, n - t] ;; UpTo[n]]], s], l]
][[{"mu", "logTau"}, All, All, index]]
],
trueData = ListPlot[
MapIndexed[{First[#2] + Max[1, n - t] - 1, #1} &,
Flatten[dat[[Max[1, n - t] ;; UpTo[n + l], index]]]
],
Joined -> True,
PlotStyle -> Black
]
},
Show[
ListPlot[
Thread@Legended[
MapIndexed[
{First[#2] + Max[1, n - t] - 1, #1} &,
Flatten[{
dat[[Max[1, n - t] ;; UpTo[n], index]],
#
}]
] & /@ predictions,
{StringForm["\[Mu] + `1` \[Sigma]", Dynamic[nstdev]],
"\[Mu]", StringForm["\[Mu] - `1` \[Sigma]", Dynamic[nstdev]]}
],
Joined -> True,
PlotStyle -> (Directive[#, Dashed] & /@ {Blue, Red, Blue}),
ImageSize -> 500,
Filling -> {1 -> {2}, 3 -> {2}}
],
trueData,
ImageSize -> Large,
PlotRange -> All
]
],
{{index, 1, "Index"}, Thread[Range[Length[indices]] -> indices]},
{{n, Round[Length[dat]/10], "Position"}, 1, Length[dat], 1},
{{t, 20, "History length"}, 1, 30, 1},
{{l, 30, "Extrapolation"}, 1, 50, 1},
{{s, 5, "Number of samples"}, 1, 20, 1},
{{nstdev, 2, "Standard deviations"}, 0, 4, 0.1},
Paneled -> False,
ContinuousAction -> True,
TrackedSymbols :> {index, n, t, l, s, nstdev},
SynchronousInitialization -> False
],
{{dat, trainingData, "Dataset"}, {trainingData -> "Training data",
testData -> "Test data"}},
TrackedSymbols :> {dat},
SynchronousInitialization -> False
]
![enter image description here][4]
The results on the validation set do not look great, but the comparison between the first 4 years of the data and the last one may not be fair. Or it may simply be necessary to involve many more indices before it becomes feasible to do extrapolations of this type. At any rate, I hope that this code is useful to other users here! I will probably tweak bits of code here and there over the next few days and maybe add more commentary, so be sure to check back occasionally if you're interested.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=8936datelistplot.png&userId=839741
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1613networks.png&userId=839741
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=progressplot.png&userId=839741
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=manipulate.png&userId=839741

Sjoerd Smit, 2019-04-05

Computable famous theorems of geometry
https://community.wolfram.com/groups/-/m/t/1664846
[GeometricScene][1] and [FindGeometricConjectures][2] are two of my favorite new functions in Wolfram Language V12. V12 provides innovative automated capabilities to draw and reason about abstractly described scenes in the plane.
I also remember that I'd proved famous theorems of geometry over many days when I was a junior high school student. I will show nine theorems, including those in the Documentation Center and [WOLFRAM blog][3].
## Thales's Theorem ##
If A, B, and C are distinct points on a circle where the line AC is a diameter, then the angle \[Angle]ABC is a right angle.
gs = GeometricScene[{"A", "B", "C", "O"},
{Triangle[{"A", "B", "C"}],
CircleThrough[{"A", "B", "C"}, "O"],
"O" == Midpoint[{"A", "C"}],
Style[Line[{"A", "B"}], Orange],
Style[Line[{"B", "C"}], Orange]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][4]
![enter image description here][5]
## Napoleon's Theorem ##
If regular triangles are constructed on the sides of any triangle, either all outward or all inward, the lines connecting the centres of those regular triangles themselves form a regular triangle.
gs = GeometricScene[{"C", "B", "A", "C'", "B'", "A'", "Oc", "Ob",
"Oa"},
{Triangle[{"C", "B", "A"}],
TC == Triangle[{"A", "B", "C'"}],
TB == Triangle[{"C", "A", "B'"}],
TA == Triangle[{"B", "C", "A'"}],
GeometricAssertion[{TC, TB, TA}, "Regular"],
"Oc" == TriangleCenter[TC, "Centroid"],
"Ob" == TriangleCenter[TB, "Centroid"],
"Oa" == TriangleCenter[TA, "Centroid"],
Style[Triangle[{"Oc", "Ob", "Oa"}], Orange]}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][6]
![enter image description here][7]
## Finsler-Hadwiger Theorem ##
ABCD and AB'C'D' are two squares with a common vertex A (the primed vertices are written BB, CC, DD in the code below). Let Q and S be the midpoints of B'D and D'B respectively, and let R and T be the centers of the two squares. Then the quadrilateral QRST is a square as well.
gs = GeometricScene[{"A", "B", "C", "D", "BB", "CC", "DD", "Q", "R", "S", "T"},
{GeometricAssertion[{Polygon[{"A", "B", "C", "D"}],
Polygon[{"A", "BB", "CC", "DD"}]}, "Regular", "Counterclockwise"],
"Q" == Midpoint[{"BB", "D"}],
"R" == Midpoint[{"A", "C"}],
"S" == Midpoint[{"B", "DD"}],
"T" == Midpoint[{"A", "CC"}],
Style[Polygon[{"Q", "R", "S", "T"}], Orange]}];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][8]
![enter image description here][9]
## Echols' Theorem ##
Given two equilateral triangles ABC and DEF, the midpoints of AD, BE, and CF form a regular triangle.
gs = GeometricScene[{"A", "B", "C", "D", "E", "F", "L", "M", "N"},
{T1 == Triangle[{"A", "B", "C"}],
T2 == Triangle[{"D", "E", "F"}],
GeometricAssertion[{T1, T2}, "Regular"],
"L" == Midpoint[{"A", "D"}],
"M" == Midpoint[{"B", "E"}],
"N" == Midpoint[{"C", "F"}],
Style[Triangle[{"L", "M", "N"}], Orange]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][10]
![enter image description here][11]
## Simson's Theorem & Steiner's Theorem ##
Simson's Theorem states that for a triangle ABC and a point P on its circumcircle, the three points closest to P on the lines AB, AC, and BC are collinear. Steiner's Theorem states that if H is the orthocenter of triangle ABC, the Simson line passes through the midpoint of PH.
gs = GeometricScene[{"A", "B", "C", "P", "L", "M", "N", "H", "S"},
{CircleThrough[{"P", "A", "B", "C"}],
"L" \[Element] InfiniteLine[{"B", "C"}],
"M" \[Element] InfiniteLine[{"C", "A"}],
"N" \[Element] InfiniteLine[{"A", "B"}],
PlanarAngle[{"P", "L", "B"}] == 90 \[Degree],
PlanarAngle[{"P", "M", "C"}] == 90 \[Degree],
PlanarAngle[{"P", "N", "A"}] == 90 \[Degree],
Style[InfiniteLine[{"L", "M"}], Orange],
GeometricAssertion[{InfiniteLine[{"A", "H"}], Line[{"B", "C"}]},
"Perpendicular"],
GeometricAssertion[{InfiniteLine[{"B", "H"}], Line[{"A", "C"}]},
"Perpendicular"],
Style[Line[{"P", "H"}], Orange],
Line[{"P", "S", "H"}], Line[{"L", "S", "M"}]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][12]
![enter image description here][13]
## Van Aubel's Theorem ##
Starting with a given quadrilateral (a polygon having four sides), construct a square on each side. The two line segments between the centers of opposite squares are of equal length and are at right angles to one another.
gs = GeometricScene[{"A", "B", "C", "D", "A'", "A''", "B'",
"B''", "C'", "C''", "D'", "D''", "Oa", "Ob", "Oc", "Od"},
{GeometricAssertion[Polygon[{"A", "B", "C", "D"}], "Convex"],
GeometricAssertion[{pa = Polygon[{"A", "B", "A'", "A''"}],
pb = Polygon[{"B", "C", "B'", "B''"}],
pc = Polygon[{"C", "D", "C'", "C''"}],
pd = Polygon[{"D", "A", "D'", "D''"}]}, "Regular",
"Counterclockwise"],
"Oa" == Midpoint[{"A", "A'"}],
"Ob" == Midpoint[{"B", "B'"}],
"Oc" == Midpoint[{"C", "C'"}],
"Od" == Midpoint[{"D", "D'"}],
Style[Line[{"Oa", "Oc"}], Orange],
Style[Line[{"Ob", "Od"}], Orange]}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][14]
![enter image description here][15]
## Brahmagupta's Theorem ##
If a cyclic quadrilateral has perpendicular diagonals, then the perpendicular to a side from the point of intersection of the diagonals always bisects the opposite side.
gs = GeometricScene[{"A", "B", "C", "D", "E", "M"},
{Polygon[{"A", "B", "C", "D"}],
CircleThrough[{"A", "B", "C", "D"}],
GeometricAssertion[{Line[{"A", "C"}], Line[{"B", "D"}]},
"Perpendicular"],
Line[{"A", "E", "C"}], Line[{"B", "E", "D"}],
"M" == Midpoint[{"A", "B"}],
Style[InfiniteLine[{"M", "E"}], Orange],
Style[Line[{"C", "D"}], Orange]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][16]
![enter image description here][17]
## Morley's Theorem ##
In any triangle, the three points of intersection of the adjacent angle trisectors form a regular triangle.
gs = GeometricScene[{"A", "B", "C", "D", "E", "F"},
{Triangle[{"A", "B", "C"}],
PlanarAngle[{"A", "B", "F"}] == PlanarAngle[{"A", "B", "C"}]/3,
PlanarAngle[{"F", "A", "B"}] == PlanarAngle[{"C", "A", "B"}]/3,
PlanarAngle[{"C", "B", "D"}] == PlanarAngle[{"C", "B", "A"}]/3,
PlanarAngle[{"B", "C", "D"}] == PlanarAngle[{"B", "C", "A"}]/3,
PlanarAngle[{"A", "C", "E"}] == PlanarAngle[{"A", "C", "B"}]/3,
PlanarAngle[{"C", "A", "E"}] == PlanarAngle[{"C", "A", "B"}]/3,
"D" \[Element] Triangle[{"A", "B", "C"}],
"E" \[Element] Triangle[{"A", "B", "C"}],
"F" \[Element] Triangle[{"A", "B", "C"}],
Style[Triangle[{"D", "E", "F"}], Orange]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][18]
![enter image description here][19]
[1]: https://reference.wolfram.com/language/ref/GeometricScene.html
[2]: https://reference.wolfram.com/language/ref/FindGeometricConjectures.html
[3]: https://blog.wolfram.com/2019/04/16/version-12-launches-today-big-jump-for-wolfram-language-and-mathematica/
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=122401.jpg&userId=1013863
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=801502.jpg&userId=1013863
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=508803.jpg&userId=1013863
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=222204.jpg&userId=1013863
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=289105.jpg&userId=1013863
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=932006.jpg&userId=1013863
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=876507.jpg&userId=1013863
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=184108.jpg&userId=1013863
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=350909.jpg&userId=1013863
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=500710.jpg&userId=1013863
[14]: https://community.wolfram.com//c/portal/getImageAttachment?filename=318711.jpg&userId=1013863
[15]: https://community.wolfram.com//c/portal/getImageAttachment?filename=254812.jpg&userId=1013863
[16]: https://community.wolfram.com//c/portal/getImageAttachment?filename=308813.jpg&userId=1013863
[17]: https://community.wolfram.com//c/portal/getImageAttachment?filename=934714.jpg&userId=1013863
[18]: https://community.wolfram.com//c/portal/getImageAttachment?filename=367015.jpg&userId=1013863
[19]: https://community.wolfram.com//c/portal/getImageAttachment?filename=16.jpg&userId=1013863

Kotaro Okazaki, 2019-04-20

Decorating Easter Eggs with the Planets
https://community.wolfram.com/groups/-/m/t/1665923
![The planets as Easter eggs][1]
Just thought I'd share this fun little exercise. First, we need to get the textures to use.
textures =
ImageReflect[#, Right] & /@
EntityValue["Planet", "CylindricalEquidistantTexture"];
Then, we need to plot parametric surfaces that look like eggs and apply the textures to them.
GraphicsGrid[
 Partition[
  With[{l = .75, a = 1, b = 1},
     ParametricPlot3D[
      Evaluate[
       RotationMatrix[Pi/2, {0, 1, 0}].{l Cos[t] + (a + b Cos[t]) Cos[t],
         (a + b Cos[t]) Sin[t] Cos[p], (a + b Cos[t]) Sin[t] Sin[p]}],
      {p, 0, 2 Pi}, {t, 0, Pi}, Mesh -> None, Boxed -> False,
      Axes -> False, Lighting -> "Neutral", PlotStyle -> Texture[#],
      ViewPoint -> Left, PlotPoints -> 50, Background -> Black,
      SphericalRegion -> True, ViewAngle -> Pi/6]] & /@ textures, 4],
 Spacings -> {0, 0}, ImageSize -> 800]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=PlanetEaster.png&userId=25355

Jeff Bryant, 2019-04-21

[WSC18] Analysing Image Dithering
https://community.wolfram.com/groups/-/m/t/1383824
![image dithering cover][1]
This summer, as part of the Wolfram High School Summer Camp program, I decided to do a project analysing image dithering, and the various ways to do it. Over the course of two weeks, I learnt and understood how the algorithms work. Since resources about this process are sparse on the internet, *in this post, I not only implement the algorithms, but additionally describe in detail what image dithering is and how it works.*
*The second part of my project was to use machine learning to classify images that originate from different image dithering algorithms.*
After failing to get high accuracy with a large number of machine learning techniques, I finally came up with one that has an **accuracy of about 90%**.
Note that this community post is an adapted version of my computational essay. *My computational essay is attached at the bottom of the post.*
# What is image dithering?
![image-dithering-FS-example][2]
*Can you tell the difference between the two images above?*
The image on the right uses **just 24 colours**.
Yet, using Floyd-Steinberg dithering, it manages to look as detailed and beautiful as the one on the left, which uses more than 16 million colours!
Let's formally define the aim of image dithering:
Given an RGB image with $256^3$ colours, reduce the colour space of the image so that it uses only colours from a given palette.
![chromaticity-plot comparison][3]
*A comparison of the chromaticity plots of the above images*
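One way to state the goal more precisely (a sketch; preserving local averages is one common way to formalise "looks the same", not the only one): find a mapping

$$\hat{I}:\ \text{pixels} \to P, \qquad P \subset \{0,\dots,255\}^3,\ |P| = n,$$

chosen so that, over each small neighbourhood $N(x,y)$,

$$\frac{1}{|N(x,y)|}\sum_{(x',y') \in N(x,y)} \hat{I}(x',y') \;\approx\; \frac{1}{|N(x,y)|}\sum_{(x',y') \in N(x,y)} I(x',y').$$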
# Color Palette
First, let us tackle the easier problem at hand. Say that we are given the option to choose our color palette, the only restriction being the number of colours in the palette. What would be the best way to obtain a color palette that is the most appropriate for an image?
Thankfully, the Wolfram Language makes this task very easy for us, as we can simply use the inbuilt function `DominantColors[image, n]`. For example, regarding the image above:
![dominant-colors of the image][4]
would be the most appropriate color palette with 12 colours.
Here are some visualisations of the process of choosing the color palette in 3D RGB space.
The color palette of the original image:
![original color palette][5]
The color palette of the dithered image:
![final color palette][6]
*Notice how the final color palette is quite close to points on the diagonal of the cube. I go into a lot more detail about this in my computational essay.*
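For readers outside the Wolfram ecosystem, the clustering idea behind palette selection can be sketched in a few lines of Python. This is only a hypothetical stand-in (plain k-means in RGB space), not the actual algorithm behind `DominantColors`:

```python
import numpy as np

def dominant_colors(pixels, n, iters=20, seed=0):
    """Rough stand-in for DominantColors: k-means in RGB space.

    `pixels` is an (N, 3) float array; returns n cluster centres.
    An illustration of the idea only, not Wolfram's algorithm.
    """
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), n, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest centre
        d = ((pixels[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(n):
            if (labels == k).any():
                centres[k] = pixels[labels == k].mean(0)
    return centres

# two well-separated colour blobs should yield two centres
palette = dominant_colors(
    np.vstack([np.zeros((50, 3)), np.ones((50, 3))]), 2)
```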
**Now, let us try to solve the main part of the problem, actually figuring out the mapping of a pixel colour to its new colour.**
# Naive idea
```
colorPalette = {0, 32, 64, 96, 128, 159, 191, 223};
pixel = {{34, 100, 222},
         {200, 50, 150}};
result = pixel;
Do[
 result[[x, y]] = First[Nearest[colorPalette, pixel[[x, y]]]];,
 {x, Length[pixel]},
 {y, Length[First[pixel]]}
];
Grid[result, Frame -> All, Spacings -> {1, 1}]
```
![grid naive idea][7]
## Extra!
I had to implement my own (slower) version of the Wolfram function `Nearest` for the final functions, since pre-compiling Wolfram code to C does not support `Nearest` natively.
However, at the time of writing, I have heard that there is an internal project at Wolfram to enable compiling all inbuilt Wolfram functions to C natively.
# Better idea
As one can guess, this idea can be improved a lot. One of the important ideas is that of "smoothing": we want the transition between two objects/colours to look smoother. One way to do that would be to make a gradient as the transition occurs.
However, how do you formalise a "gradient"? And how do you make a smooth one when all you have are 24 colours?
Dithering attempts to solve exactly these problems.
To solve our questions, let's think about a pixel: what's the best way to make it look closest to its original colour?
In the naive idea, we created some error by rounding to some nearby values.
For each pixel, let's formalise the error as:
![error in pixel[1, 1] screenshot][8]
## Error diffusion
It is clear that the error should somehow be transmitted to the neighbouring elements so we can account for the error in a pixel in its neighbours. To maintain an even order of processing, let us assume that we will traverse the 2D array of pixels from the top-left corner, row-wise until we reach the bottom-right corner.
Therefore, it never makes sense to "push" the effects of an error to a cell we've already processed. Finally, let us see some ways to actually diffuse the error across the image.
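Before looking at the specific algorithms, here is the generic error-diffusion loop sketched in Python. This is an illustration, not the post's Wolfram implementation: the weight matrix here is a toy rule that pushes all of the error to the right, and positions are 0-based rather than the 1-based `diffusionPosition` used below.

```python
import numpy as np

def error_diffuse(gray, palette, formula, pos):
    """Generic error diffusion on a 2-D grayscale array (NumPy sketch).

    `formula` is the diffusion weight matrix and `pos` the 0-based
    (row, col) of the current pixel within it -- the Python analogue
    of the 1-based diffusionFormula/diffusionPosition below.
    """
    a = gray.astype(float).copy()
    out = np.empty_like(a)
    h, w = a.shape
    for x in range(h):
        for y in range(w):
            # round to the nearest palette value
            new = min(palette, key=lambda c: abs(c - a[x, y]))
            out[x, y] = new
            err = a[x, y] - new
            # push the error to not-yet-processed neighbours
            for dx in range(formula.shape[0]):
                for dy in range(formula.shape[1]):
                    cx, cy = x + dx - pos[0], y + dy - pos[1]
                    if 0 <= cx < h and 0 <= cy < w:
                        a[cx, cy] += err * formula[dx, dy]
    return out

# toy rule: push all of the error to the pixel on the right
toy = np.array([[0.0, 1.0]])
result = error_diffuse(np.full((8, 8), 0.5), [0.0, 1.0], toy, (0, 0))
```

Even this toy rule already shows the key behaviour: a flat 50% grey quantised to black and white comes out as an alternating pattern whose average stays near 0.5.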
# Floyd-Steinberg Dithering
In 1976, Robert Floyd and Louis Steinberg published the most popular dithering algorithm<sup>1</sup>. The pattern for error diffusion can be described as:
```
diffusionFormula = {{0, 0, 7},
{3, 5, 1}} / 16;
diffusionPosition = {1, 2};
```
## What does this mean?
`diffusionFormula` is simply a way to encode the diffusion from a pixel.
`diffusionPosition` refers to the position of the pixel, relative to the `diffusionFormula` encoding.
So, for example, an error of `2` at pixel `{1, 1}` translates to the following additions:
```
pixel[[1, 2]] += error*(7/16);
pixel[[2, 1]] += error*(3/16);
pixel[[2, 2]] += error*(5/16);
pixel[[2, 3]] += error*(1/16);
```
![grid floyd steinberg error diffusion][9]
![floyd Steinberg dithering][10]
## How does one come up with these weird constants?
Notice how the numerator constants are the first 4 odd numbers.
The pattern is chosen specifically to create an even checkerboard pattern for perfect grey images using black and white.
![grayscale floyd steinberg dithering example][11]
*Example Grayscale image dithered with Floyd Steinberg*
![grayscale picture in picture thing][12]
*Note the checkerboard pattern in the image above.*
# Atkinson Dithering
Relative to the other dithering algorithms here, Atkinson's algorithm diffuses a lot less of the error to its surroundings. It tends to preserve detail well, but very continuous sections of colours appear blown out.
This was devised by Bill Atkinson<sup>2</sup>, an Apple employee. The pattern for error diffusion is as below:
```
diffusionFormula = {{0, 0, 1, 1},
{1, 1, 1, 0},
{0, 1, 0, 0}} / 8;
diffusionPosition = {1, 2};
```
![atkinson dithering example][13]
# Jarvis, Judice, and Ninke Dithering
This algorithm<sup>3</sup> spreads the error over more rows and columns; therefore, images should (in theory) be softer.
The pattern for error diffusion is as below:
```
diffusionFormula = { {0, 0, 0, 7, 5},
{3, 5, 7, 5, 3},
{1, 3, 5, 3, 1}} / 48;
diffusionPosition = {1, 3};
```
![JJN algorithm example][14]
#### The final 2 dithering algorithms come from Frankie Sierra, who published the Sierra and Sierra Lite matrices<sup>4</sup> in 1989 and 1990 respectively.
# Sierra Dithering
Sierra dithering is based on Jarvis dithering, so it has similar results, but it is marginally faster.
```
diffusionFormula = { {0, 0, 0, 5, 3},
{2, 4, 5, 4, 2},
{0, 2, 3, 2, 0}} / 32 ;
diffusionPosition = {1, 3};
```
![sierra dithering example][15]
# Sierra Lite Dithering
This yields results similar to Floyd-Steinberg dithering, but is faster.
```
diffusionFormula = {{0, 0, 2},
{0, 1, 1}} / 4;
diffusionPosition = {1, 2};
```
![Sierra Lite dithering][16]
# Comparison
Here's an interactive comparison of the algorithms on different images:
```
Manipulate[
Dither[im, c,
StringDelete[StringDelete[StringDelete[algo, " "], ","],
"-"]], {{im, image, "Image"}, {im1, im2, im3}}, {{c, 12,
"Number of colours"}, 2, 1024, 1},
{{algo, "Floyd-Steinberg", "Algorithm"}, {"Floyd-Steinberg",
"Jarvis, Judice, Ninke", "Atkinson", "Sierra", "Sierra Lite"}}]
```
Download my computational essay to see it in action. Alternatively, use the functions in the "Implementation" section to evaluate the code. `im1`, `im2`, `im3` can be replaced by images. I have submitted the comparison to [Wolfram Demonstrations][17] as well, so it should be available online soon.
# Side-note
Notice how the denominator in the `diffusionFormula` of a number of algorithms is a power of $2$?
This is because division by a power of 2 is equivalent to [bit-shifting][18] the number to the right by $\log_2(\text{divisor})$ bits, making it much faster than division by any other number.
Given the improvements in computer hardware, this is not a major concern anymore.
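For non-negative integers this is easy to check in any language; a quick Python illustration:

```python
# For non-negative integers, dividing by 2**k is the same as
# shifting right by k bits; here the divisor is 16 = 2**4.
x = 1234
k = 4
assert x >> k == x // 16
```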
## Come up with your own dithering algorithm!
I noticed that the dithering algorithms are quite similar to each other (especially in their `diffusionPosition`).
However, I have written my functions so that you can simply tweak the input arguments `diffusionFormula` and `diffusionPosition` and test out your own algorithms!
Here's one I tried:
![my own dithering algorithm][19]
# Implementation
In this section, I will discuss the Wolfram implementation, and some of the features and functions I used in my code.
## applyDithering
Even though it's probably the least interesting part conceptually, actually applying the dithering is the most important piece of the implementation, and the one that gave me the most trouble.
I started off by writing it in the functional paradigm. With little knowledge of Wolfram, I stumbled through the docs to assemble pieces of the code. Finally, I had a "working" version of the algorithm, but there was a major problem: A $512\cdot512$ RGB image took 700 seconds for processing!
This number is way too large for an algorithm with linear time complexity in the size of input.
### Fixes
Some of the trivial fixes involved making more use of inbuilt functions (for example, `Nearest`).
The bigger problem is that the Wolfram notebook is an interpreter, not a compiler: it interprets the code every time it is run.
So the obvious step to optimising performance was using the `Compile` function in Wolfram.
But, there's a catch!
```
T (R1) 18 = MainEvaluate[ Hold[List][ I27, I28, T (R1) 16, R6]]
95 Element[ T (R2) 17, I20] = T (R1) 18
```
If you see something like the above in the intermediate code of your compiled function, your code is likely to be slow.
`MainEvaluate` means that the compiled function is calling back into the kernel, which is itself a slow process.
To view the human readable form of your compiled Wolfram function, you can use:
```
Needs["CompiledFunctionTools`"];
CompilePrint[yourFunction]
```
To fix this, you essentially need to write everything in a procedural form, using loops and similar constructs.
The final step was `RuntimeOptions -> "Speed"`, which trades away some safety checks (integer overflow checks, etc.) for a faster runtime.
Find the complete code for the function below:
```
applyDithering =
Compile[{{data, _Real, 3}, {diffusionList, _Real,
2}, {diffusionPos, _Integer, 1}, {colors, _Real, 2}},
Module[{lenx, leny, lenz, lenxdiff, lenydiff, error, val,
realLoc, closestColor, closestColordiff, res, a, diff = data,
diffusionFormula = diffusionList, xx, yy, curxx, curyy,
colorAvailable = colors, tmp, diffusionPosition = diffusionPos,
idx},
{lenx, leny, lenz} = Dimensions[data];
{lenxdiff, lenydiff} = Dimensions[diffusionFormula];
a = data;
res = data;
Do[
val = a[[x, y]];
realLoc = {x - diffusionPos[[1]] + 1,
y - diffusionPos[[2]] + 1};
closestColor = {1000000000., 1000000000., 1000000000.};
closestColordiff = 1000000000.;
Do[
tmp = N[Total[Table[(i[[idx]] - val[[idx]])^2, {idx, 3}]]];
If[tmp < closestColordiff,
closestColordiff = tmp;
closestColor = i;
];,
{i, colorAvailable}
];
error = val - closestColor;
res[[x, y]] = closestColor;
Do[
curxx = realLoc[[1]] + xx - 1;
curyy = realLoc[[2]] + yy - 1;
If[curxx > 0 && curxx <= lenx && curyy > 0 && curyy <= leny,
a[[curxx, curyy, z]] += error[[z]]*diffusionFormula[[xx, yy]]];,
{xx, lenxdiff},
{yy, lenydiff},
{z, 3}
];,
{x, lenx},
{y, leny}
];
Round[res]
],
CompilationTarget -> "C",
RuntimeOptions -> "Speed"
];
```
## Dither
This is the main function that uses `applyDithering`. There are multiple definitions of the function: one takes the hardcoded algorithm names, and the other lets you easily implement your own dithering algorithm.
```
(* This is the implementation that takes the algorithm name and applies it *)
Dither[img_Image, colorCount_Integer, algorithm : ("FloydSteinberg" | "JarvisJudiceNinke" |
   "Atkinson" | "Sierra" | "SierraLite")] :=
Module[{diffusionFormulaFS, diffusionPositionFS, diffusionFormulaJJN, diffusionPositionJJN, diffusionFormulaA,
diffusionPositionA, diffusionFormulaS, diffusionPositionS, diffusionFormulaSL, diffusionPositionSL},
(* Floyd Steinberg algorithm constants *)
diffusionFormulaFS = {{0, 0, 7},
{3, 5, 1}} / 16;
diffusionPositionFS = {1, 2};
(* Jarvis, Judice, and Ninke algorithm constants *)
diffusionFormulaJJN = {{0, 0, 0, 7, 5},
{3, 5, 7, 5, 3},
{1, 3, 5, 3, 1}} / 48;
diffusionPositionJJN = {1, 3};
(* Atkinson algorithm constants *)
diffusionFormulaA = {{0, 0, 1, 1},
{1, 1, 1, 0},
{0, 1, 0, 0}} / 8 ;
diffusionPositionA = {1, 2};
(* Sierra algorithm constants *)
diffusionFormulaS = {{0, 0, 0, 5, 3},
{2, 4, 5, 4, 2},
{0, 2, 3, 2, 0}} / 32 ;
diffusionPositionS = {1, 3};
(* Sierra Lite algorithm constants*)
diffusionFormulaSL = {{0, 0, 2},
{0, 1, 1}} / 4;
diffusionPositionSL = {1, 2};
colorAvailable =
Round[List @@@ ColorConvert[DominantColors[img, colorCount], "RGB"] * 255];
Switch[algorithm,
"FloydSteinberg",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaFS, diffusionPositionFS, colorAvailable],
"Byte"],
"JarvisJudiceNinke",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaJJN, diffusionPositionJJN, colorAvailable],
"Byte"],
"Atkinson",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaA, diffusionPositionA, colorAvailable],
"Byte"],
"Sierra",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaS, diffusionPositionS, colorAvailable],
"Byte"],
"SierraLite",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaSL, diffusionPositionSL, colorAvailable],
"Byte"]
]
];
(* This is the function that makes it easy to make your own dithering
algorithm *)
Dither[img_Image, colorCount_Integer, diffusionFormula_List, diffusionPosition_List] := Module[{},
colorAvailable = Round[List @@@ ColorConvert[DominantColors[img, colorCount], "RGB"] * 255];
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormula, diffusionPosition, colorAvailable],
"Byte"]
];
```
# **Classifying images**
The second part of my project involves classifying dithered images and mapping them to the algorithm they were obtained from.
This sounds like a relatively easy task for machine learning, but it turned out to be much harder. Moreover, I could not find any similar research on classifying images by this kind of "metadata", which made the task more rewarding.
I ended up creating a model which has an **accuracy of more than 90%**, which is reasonably good for machine learning.
If you are uninterested in the failures encountered and the details of the dataset used, please skip ahead to the section on "ResNet-50 with preprocessing".
# Dataset
To obtain data, I did a web search for images with a keyword chosen randomly from a dictionary of common words.
The images obtained are then run through the five algorithms I implemented and stored as the training data. This ensures that the images do not differ systematically in their actual contents, since that would interfere with learning about the dithering algorithms used in the image.
The images were allowed to use up to 24 colours which are auto-selected, as described in the section on "Color Palette".
Here is the code for downloading, applying the dithering, and re-storing the images. Note that it is not designed with re-usability in mind, these are just snippets coded at the speed of thought:
```
(* This function scrapes random images from the internet and stores \
them to my computer *)
getImages[out_Integer, folderTo_String] := Module[{},
res = {};
Do [
Echo[x];
l = RemoveAlphaChannel /@
Map[ImageResize[#, {512, 512}] &,
Select[WebImageSearch[RandomChoice[WordList["CommonWords"]],
"Images"],
Min[ImageDimensions[#]] >= 512 &]];
AppendTo[res, Take[RandomSample[l], Min[Length[l], 2]]];
Pause[10];,
{x, out}
];
MapIndexed[Export[
StringJoin[folderTo, ToString[97 + #2[[1]]], "-",
ToString[#2[[2]]], ".png"], #1] &, res, {2}]
]
(* This function applies the dithering and stores the image *)
applyAndStore[folderFrom_String, folderTo_String] := Module[{},
images = FileNames["*.png", folderFrom];
origImages = Map[{RemoveAlphaChannel[Import[#]], #} &, images];
Map[Export[StringJoin[folderTo, StringTake[#[[2]], {66, -1}]],
Dither[#[[1]], 24, "Sierra"]] &, origImages]
];
```
Here are some more variable definitions and metadata about the dataset that is referenced in the following sections.
![dataset metadata][20]
# Plain ResNet-50
My first attempt was to use a popular neural net named "ResNet-50 Trained on ImageNet Competition Data"<sup>5</sup> and retrain it on my training data.
One of the major reasons for choosing this architecture was that it identifies the main object in an image very accurately, and is quite deep. Both these properties seemed very suitable for my use case.
However, the results turned out to be very poor, so when I noticed this during the training session, I stopped the process early. My guess is that the net performed poorly because it was trying to infer relations between the colours in the image.
# Border classification
Since the borders of an image are least affected by the dithering algorithm, being simply rounded to the closest colours, it should be easier to learn the constants of the `diffusionFormula` from them.
Therefore, we can pre-process an image and use only its border pixels for classification.
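This border-extraction step can be sketched in NumPy (a hypothetical illustration; for simplicity the corner pixels are counted twice here):

```python
import numpy as np

def border_pixels(img):
    """Keep only the four edges of an (H, W, 3) image array.

    For a 512x512 image this reduces the data to roughly 512*4
    pixels, matching the data size mentioned in the post.
    """
    return np.concatenate([img[0], img[-1], img[:, 0], img[:, -1]])

borders = border_pixels(np.zeros((512, 512, 3)))
```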
## borderNet
Observing the aforementioned fact, I implemented a neural net which tried to work with just the borders of the image. This decreased the size of the data to $512 \cdot 4$ per image.
## borderNet with just left and top border
Since my implementation of the dithering algorithms starts by applying the algorithm from the top-left corner, the pattern in the left and top borders should be even easier for the net to learn. However, this decreased the size of the data even more to $512 \cdot 2$ per image.
Both nets failed to work very well, with **accuracies of around 20%**. This was probably because there was not enough data for the nets to train well.
Wolfram code for the net follows:
```
borderNet = NetChain[{
LongShortTermMemoryLayer[100],
SequenceLastLayer[],
DropoutLayer[0.3],
LinearLayer[Length@classes],
SoftmaxLayer[]
},
"Input" -> {"Varying", 3},
"Output" -> NetDecoder[{"Class", classes}]
]
```
# Row and column specific classification
The aim with this approach was to first make the neural net infer patterns in the columns of the image, then combine that information and observe patterns in the rows of the image.
This didn't work very well either. The major reason for the failure was probably that the diffusion is not really as independent as the net might assume it to be.
# Row and column combined classification
Building on the results of the previous approach, the right method seemed to be to use two separate chains and a `CatenateLayer` to combine the results.
For understanding the architecture, observe the `NetGraph` object below:
![netgraph-lstmNet branchy thingy][21]
The Wolfram language code for the net is as follows:
```
lstmNet = NetGraph[{
TransposeLayer[1 <-> 2],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 3}], "Input" -> {512, 512, 3}],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 50}], "Input" -> {512, 512, 50}],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 3}], "Input" -> {512, 512, 3}],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 50}], "Input" -> {512, 512, 50}],
TransposeLayer[1 <-> 2],
SequenceLastLayer[],
SequenceLastLayer[],
LongShortTermMemoryLayer[25],
LongShortTermMemoryLayer[25],
SequenceLastLayer[],
SequenceLastLayer[],
CatenateLayer[],
DropoutLayer[0.3],
LinearLayer[Length@classes],
SoftmaxLayer[]
}, {
NetPort["Input"] -> 2 -> 3 -> 7 -> 9 -> 11,
NetPort["Input"] -> 1 -> 4 -> 5 -> 6 -> 8 -> 10 -> 12,
{11, 12} -> 13 -> 14 -> 15 -> 16
},
"Input" -> {512, 512, 3},
"Output" -> NetDecoder[{"Class", classes}]
];
```
However, this net didn't work very well either.
The net had a somewhat unconventional architecture, and the excessive parameter count crashed the Wolfram kernel, so the parameters had to be cut down.
Ultimately, it only managed an **accuracy of around 25-30%**.
# ResNet-50 with preprocessing
The final idea was to use pre-processing to our advantage. Dithering, in its essence, shifts the error downward and towards the right. Therefore, one way to filter the image would be to pad the image with one row of pixels at the top and one column at the left, and subtract the padded image from the original one.
Here's an example of what that looks like:
![FS image preprocessing for net][22]
The code for doing this to an image is as simple as:
```
img - ImageTake[ImagePad[img, 1], {1, 512}, {1, 512}]
```
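Roughly the same operation on a single channel in NumPy (a sketch, assuming one 2-D array per channel):

```python
import numpy as np

def shift_diff(channel):
    """Pad with one row at the top and one column at the left, then
    subtract the shifted copy from the original -- a per-channel
    NumPy sketch of the Wolfram one-liner above."""
    h, w = channel.shape
    shifted = np.pad(channel, ((1, 0), (1, 0)))[:h, :w]
    return channel - shifted
```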
*Notice how the image (right side, after processing) resembles the parts with the "checkerboard" pattern described in the section "How does one come up with these weird constants?" under "Floyd-Steinberg Dithering".*
The main reason this net works well is that, even with the same colour palette, the gradients of images coming from different dithering algorithms are quite different. This is because of the differences in the error diffusion, and by subtracting the padded image from the original image, we obtain a filtered version of the dithering patterns, making it easy for the neural net to spot them.
The net was trained on AWS for more than 7 hours, on a larger dataset of 1500 images.
The results outperformed my expectations, and on a test-set of more than 700 images, 300 of which were part of the original training data, it showed an **accuracy rate of nearly 91%**.
![classifier measurement object][23]
Here is the code for the net, with details:
```
baseModel =
NetTake[NetModel["ResNet-50 Trained on ImageNet Competition Data",
"UninitializedEvaluationNet"], 23]
net = NetChain[{
NetReplacePart[baseModel,
"Input" -> NetEncoder[{"Image", {512, 512}}]],
LinearLayer[Length@classes],
SoftmaxLayer[]},
"Input" -> NetEncoder[{"Image", {512, 512}, ColorSpace -> "RGB"}],
"Output" -> NetDecoder[{"Class", classes}]
]
```
So, it's just **ResNet-50**, modified to work with $512 \cdot 512$ images.
# Future Work
- Look into using machine learning for un-dithering an image.
- Look into creating new dithering algorithms that perform faster or better than the existing ones.
# Notes
All images and visualisations in this post were generated in Wolfram. Their code may be seen in the computational essay attached below.
I would like to thank all the mentors, especially Greg "Chip" Hurst, Michael Kaminsky, Christian Pasquel and Matteo Salvarezza, for their help throughout the project.
Further, I would like to thank Pyokyeong Son and Colin Weller for their help during the project, and refining the essay.
The original, high resolution copies of the images are credited to [Robert Lukeman][24], [Teddy Kelley][25], and [Sebastian Unrau][26] on [Unsplash][27].
# References
[1] : R.W. Floyd, L. Steinberg, An adaptive algorithm for spatial grey scale. Proceedings of the Society of Information Display 17, 75-77 (1976).
[2] : Bill Atkinson, private correspondence with John Balestrieri, January 2003 (unpublished)
[3] : J. F. Jarvis, C. N. Judice and W. H. Ninke, A Survey of Techniques for the Display of Continuous Tone Pictures on Bi-level Displays. Computer Graphics and Image Processing, 5 13-40, 1976
[4] : Frankie Sierra, in LIB 17 (Developer's Den), CIS Graphics Support Forum (unpublished)
[5] : K. He, X. Zhang, S. Ren, J. Sun, "Deep Residual Learning for Image Recognition," arXiv:1512.03385 (2015)
# [Link to my computational essay][28]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=image-dithering-cover.gif&userId=1371661
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at3.50.44PM.png&userId=1371661
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.11.46PM.png&userId=1371661
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.15.27PM.png&userId=1371661
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=color-palette-og.gif&userId=1371661
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=color-palette-final.gif&userId=1371661
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.19.57PM.png&userId=1371661
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.23.38PM.png&userId=1371661
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.28.38PM.png&userId=1371661
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.29.39PM.png&userId=1371661
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.31.40PM.png&userId=1371661
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=image-1.png&userId=1371661
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.38.01PM.png&userId=1371661
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.39.37PM.png&userId=1371661
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.41.39PM.png&userId=1371661
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.43.33PM.png&userId=1371661
[17]: http://demonstrations.wolfram.com
[18]: https://stackoverflow.com/a/141873
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at5.03.37PM.png&userId=1371661
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.11.09AM.png&userId=1371661
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.21.55AM.png&userId=1371661
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.24.56AM.png&userId=1371661
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.28.45AM.png&userId=1371661
[24]: https://unsplash.com/photos/zNN6ubHmruI?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[25]: https://unsplash.com/photos/_4Ib-a8g9aA?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[26]: https://unsplash.com/photos/CoD2Q92UaEg?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[27]: https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[28]: https://www.dropbox.com/s/kmtzq6x4xkdn9y8/computational-essay.cdf?dl=0
Nalin Bhardwaj 2018-07-14T15:17:17Z
Visualizing simultaneous iteration methods for polynomial roots
https://community.wolfram.com/groups/-/m/t/1646447
One of my longstanding research interests in numerical analysis has been the family of "simultaneous iteration" methods for finding polynomial roots. (See e.g. [McNamee's book][1] for a comprehensive survey.) Briefly put, these are modified Newton-Raphson methods that allow one to find the roots of a polynomial all at once, as opposed to finding them one at a time and "deflating".
I had the idea to visualize how these algorithms gradually proceed from initial approximations to the roots, up to convergence. After a number of experiments, I settled on using [domain coloring][2] for visualization. I have found that the logarithmic derivatives of polynomials gave particularly striking plots.
For this post, I have used the scaled [exponential sum][3] of degree 20:
$$\frac{20!}{20^{20}}\sum_{k=0}^{20}\frac{(20z)^k}{k!}$$
as the example polynomial whose roots we want to see. It [is][4] [known][5] that the zeroes of this polynomial asymptotically approach the so-called Szegő curve as the polynomial degree goes to infinity.
![Szegő curve][6]
expPoly[x_] = With[{n = 20}, Sum[(n! (n x)^k)/(k! n^n), {k, 0, n}]]
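As an independent numerical check (a NumPy sketch, not part of the original post's code), one can build the coefficient list directly and verify that the 20 zeros cluster near the Szegő curve, inside (or very near) the unit circle:

```python
import math
import numpy as np

# Coefficients of (n!/n^n) * Sum[(n z)^k / k!, {k, 0, n}],
# highest degree first; note the polynomial is monic.
n = 20
coeffs = [math.factorial(n) * n**k / (math.factorial(k) * n**n)
          for k in range(n, -1, -1)]
zeros = np.roots(coeffs)
```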
---
I will now look at two of the most popular simultaneous iteration methods. The first one is the [(Weierstrass-)Durand-Kerner method][7],
$$x_i^{(k+1)}=x_i^{(k)}-\frac{p(x_i^{(k)})}{\prod\limits_{j\neq i} (x_i^{(k)}-x_j^{(k)})},\qquad i=1\dots n;\; k=0,1,\dots$$
which is (typically) quadratically convergent. (Note that in simultaneous iteration methods, the polynomials are always assumed to be monic (i.e., unit leading coefficient).)
Implementing the iteration is easy using `FixedPointList[]`. As is customary with these methods, we use as a starting approximation points equispaced around the unit circle, and slightly rotated:
ptsdk = FixedPointList[# - expPoly[#]/Table[Apply[Times, #[[k]] - Delete[#, k]],
{k, Length[#]}] &,
N[Exp[2 π I Range[0, 19]/20 - I π/40]], 40,
SameTest -> (EuclideanDistance[##] < 1.*^-6 &)];
I use a loose convergence criterion that is good enough for visualization purposes.
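For readers who want to experiment outside Mathematica, here is a hypothetical NumPy transcription of the same Durand-Kerner step, with a fixed iteration count in place of the convergence test and the same equispaced, slightly rotated starting points:

```python
import numpy as np

def durand_kerner(coeffs, n_iter=100):
    """Weierstrass-Durand-Kerner iteration for a monic polynomial.

    `coeffs` lists the coefficients, highest degree first; starting
    points are equispaced on the unit circle with a small rotation,
    generalising the post's -I Pi/40 offset for degree 20.
    """
    m = len(coeffs) - 1
    x = np.exp(2j * np.pi * np.arange(m) / m - 1j * np.pi / (2 * m))
    for _ in range(n_iter):
        x_new = x.copy()
        for i in range(m):
            others = np.delete(x, i)
            x_new[i] = x[i] - np.polyval(coeffs, x[i]) / np.prod(x[i] - others)
        x = x_new
    return x

# z^2 - 1 has roots at +1 and -1
roots = durand_kerner([1, 0, -1])
```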
For the domain coloring plot, I will use a slightly modified version of the [DLMF color scheme][8], based on an idea of [Quilez][9].
DLMFPhaseColor[u_, s_:1, b_:1] := Module[{rgb},
rgb = Clip[{1, -1, -1} Abs[{8, 4, 8} Mod[u/(2 π), 1] -
{9, 3, 11}/2] + {-3, 3, 5}/2, {0, 1}];
rgb = (3 - 2 rgb) rgb^2;
Apply[RGBColor, b (1 + s (rgb - 1))]]
I then use a simplified version of [code originally written by user Heike][10] on Mathematica Stack Exchange:
dcdk = RegionPlot[True, {x, -9/8, 9/8}, {y, -9/8, 9/8},
ColorFunction ->
Function[{x, y}, DLMFPhaseColor[Arg[Total[1/(x + I y - #)]]]],
ColorFunctionScaling -> False, Frame -> False,
PlotPoints -> 405] & /@ ptsdk;
(This takes some time, due to the high `PlotPoints` setting.)
We can now see an animation:
ListAnimate[dcdk]
![Durand-Kerner][11]
The other method I will be looking at in this post is the (typically) cubically convergent [Ehrlich-Aberth(-Maehly) method][12],
$$x_i^{(k+1)}=x_i^{(k)}-\frac{\tfrac{p(x_i^{(k)})}{p^\prime(x_i^{(k)})}}{1-\tfrac{p(x_i^{(k)})}{p^\prime(x_i^{(k)})}\sum\limits_{j\neq i} \tfrac1{x_i^{(k)}-x_j^{(k)}}},\qquad i=1\dots n;\; k=0,1,\dots$$
which is also one of the methods available in *Mathematica*'s `NSolve[]`/`NRoots[]`.
Unfortunately, I have no way to get the iterates generated by `NSolve[]`, so I had to reimplement the method myself. We can use essentially the same code as was used for Durand-Kerner, with a few changes:
ptsea = FixedPointList[With[{ld = expPoly[#]/expPoly'[#]},
# - ld/(1 - ld Table[Tr[1/(#[[k]] - Delete[#, k])],
{k, Length[#]}])] &,
N[Exp[2 π I Range[0, 19]/20 - I π/40]], 40,
SameTest -> (EuclideanDistance[##] < 1.*^-6 &)];
dcea = RegionPlot[True, {x, -9/8, 9/8}, {y, -9/8, 9/8},
ColorFunction ->
Function[{x, y}, DLMFPhaseColor[Arg[Total[1/(x + I y - #)]]]],
ColorFunctionScaling -> False, Frame -> False,
PlotPoints -> 405] & /@ ptsea;
ListAnimate[dcea]
![Ehrlich-Aberth][13]
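The Ehrlich-Aberth step translates just as directly into NumPy (again a hypothetical transcription, not the code used for the plots above):

```python
import numpy as np

def ehrlich_aberth(coeffs, n_iter=50):
    """Ehrlich-Aberth iteration for a monic polynomial (NumPy sketch).

    Same starting points as in the Durand-Kerner experiment."""
    m = len(coeffs) - 1
    dcoeffs = np.polyder(coeffs)  # derivative coefficients
    x = np.exp(2j * np.pi * np.arange(m) / m - 1j * np.pi / (2 * m))
    for _ in range(n_iter):
        x_new = x.copy()
        for i in range(m):
            ld = np.polyval(coeffs, x[i]) / np.polyval(dcoeffs, x[i])
            s = np.sum(1.0 / (x[i] - np.delete(x, i)))
            x_new[i] = x[i] - ld / (1 - ld * s)
        x = x_new
    return x

# z^2 - 1 again: the cubically convergent method finds +1 and -1
roots = ehrlich_aberth([1, 0, -1])
```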
It would be interesting to use this visualization technique on other polynomials with interesting root structure, as well as other simultaneous iteration methods.
[1]: https://books.google.com/books?hl=en&id=4PMqxwG-eqQC&pg=PA67
[2]: https://en.wikipedia.org/wiki/Domain_coloring
[3]: http://mathworld.wolfram.com/ExponentialSumFunction.html
[4]: https://doi.org/10.1216/rmjm/1181072998
[5]: https://doi.org/10.1007/BFb0087909
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=vpliw.gif&userId=520181
[7]: https://en.wikipedia.org/wiki/Durand%E2%80%93Kerner_method
[8]: https://dlmf.nist.gov/help/vrml/aboutcolor#S2
[9]: https://www.shadertoy.com/view/MsS3Wc
[10]: https://mathematica.stackexchange.com/a/7293/
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=dcdk.gif&userId=520181
[12]: https://en.wikipedia.org/wiki/Aberth_method
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=dcea.gif&userId=520181
J. M. 2019-04-04T04:38:26Z
QMRITools: processing and visualization tools for quantitative MRI data
https://community.wolfram.com/groups/-/m/t/1661539
# QMRITools [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.2530801.svg)](https://doi.org/10.5281/zenodo.2530801) [![contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/dwyl/esta/issues) [![status](http://joss.theoj.org/papers/ef8bfb6c31499845d353b6a5af0d6300/status.svg)](http://joss.theoj.org/papers/ef8bfb6c31499845d353b6a5af0d6300)
***
![wolfram language][1] ![wolfram workbench][2] ![eclipse][3] ![wolfram mathematica][4]
***
![eye candy][13]
I have been working on this toolbox for years and recently started making the code available online, which took considerable extra effort: putting everything into the proper formats and generating documentation and usage demonstrations. I thought it would be nice to also show this to the Wolfram Community, so [here][5] it is.
## Contents
* [Summary](#summary)
* [Information](#information)
* [Documentation](#documentation)
* [Install toolbox](#install-toolbox)
* [Demonstrations](#demonstrations)
* [Using the toolbox](#using-the-toolbox)
* [Functionality](#functionality)
## Summary
``QMRITools`` is written in Mathematica using Wolfram Workbench and Eclipse and contains a collection of tools and functions for processing quantitative MRI data. The toolbox does not provide a GUI and its primary goal is to allow for fast and batch data processing, and facilitate development and prototyping of new functions. The core of the toolbox contains various functions for data manipulation and restructuring.
The toolbox was developed mostly in the context of quantitative muscle, nerve and cardiac magnetic resonance imaging. The library of functions grows along with the research it is used for; it started as a toolbox to analyze DWI data of muscle. Although many different software packages and code repositories exist for much of the functionality in this toolbox, I was in need of one that did it all. Furthermore, most diffusion packages are optimized for brain analysis and provide limited flexibility, and I just like writing code myself.
## Information
QMRITools is developed for [Mathematica](https://www.wolfram.com/mathematica/).
It contains the following toolboxes:
- CardiacTools
- CoilTools
- DenoiseTools
- DixonTools
- ElastixTools
- GeneralTools
- GradientTools
- ImportTools
- IVIMTools
- JcouplingTools
- MaskingTools
- NiftiTools
- PhysiologyTools
- PlottingTools
- ProcessingTools
- RelaxometryTools
- SimulationTools
- VisteTools
## Documentation
The website with more information about the toolbox can be found [here](https://mfroeling.github.io/QMRITools/). Documentation of all functions and their options is fully integrated in the Mathematica documentation. The toolbox always targets the latest version of Mathematica and does not support backward compatibility.
After the toolbox is installed correctly it should show up as a package in the Mathematica add-ons (which was a pain and is badly documented).
![QMRITools package][6]
All code and documentation is maintained and uploaded to GitHub using [Workbench](https://www.wolfram.com/workbench/). An online HTML version of the full documentation can be found [here](https://mfroeling.github.io/QMRITools/htmldoc/guide/QMRITools.html). To fix the HTML compilation I had to write some custom scripts based on code I found [online][7] (again a pain and badly documented).
![Guides QMRITools][8]
## Install toolbox
The latest release can be found [here](https://github.com/mfroeling/QMRITools/releases/download/2.1.2/QMRITools.zip).
Install the toolbox in the Mathematica UserBaseDirectory > Applications.

    FileNameJoin[{$UserBaseDirectory, "Applications"}]
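As a minimal sketch of an automated install, assuming the release zip was downloaded to the default Downloads folder (adjust the path to wherever you saved it), `ExtractArchive` can unpack it straight into that Applications directory:

    (* assumed download location; change as needed *)
    zip = FileNameJoin[{$HomeDirectory, "Downloads", "QMRITools.zip"}];
    ExtractArchive[zip, FileNameJoin[{$UserBaseDirectory, "Applications"}]]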
Ideally I would like to distribute the toolbox as a paclet, but again this has proven to be a pain and is badly documented.
Some functions of QMRITools call on external executables and software.
These executables need to be present in "QMRITools\Applications" and are included in the release.
If for any reason you want to use other (older/newer) versions you can replace them but functionality is not guaranteed.
For the latest version of these tools and their user license please visit their website.
* [dcm2niix](https://github.com/rordenlab/dcm2niix/)
* dcm2niix.exe
* [Elastix](http://elastix.isi.uu.nl/)
* elastix.exe
* transformix.exe
All functionality is tested under Windows 10 with the latest Mathematica version.
The Mathematica code is cross platform compatible with the exception of the external tools which are compiled for each OS.
The toolbox provides compiled versions for each OS but their functionality is not guaranteed.
The Elastix version used is 4.9 with OpenCL support. Additionally, Elastix needs to be compiled with the PCA metrics, all DTI-related parameters and all affine-related parameters.
Although cross platform compatibility is provided I have only limited options for testing so if any issues arise please let me know.
## Demonstrations
The release contains a zip file [DemoAndTest.zip](https://github.com/mfroeling/QMRITools/releases/download/2.1.2/DemoAndTest.zip) in which there is a file ``demo.nb``, a folder ``DemoData`` and a folder ``Testing``.
To have a global overview of the functionality of the toolbox you can download this folder and run the ``demo.nb``.
By default the ``demo.nb`` looks for the folders ``DemoData`` and ``Testing`` in the same folder as the notebook.
![QMRITools demonstration][9]
In the first section of the demo notebook the toolbox is loaded and two tests are performed. The first test checks if all files that are needed to run the toolbox are present. The second test runs after the toolbox is loaded and checks whether all defined functions and their options are correct.
![enter image description here][10]
## Using the toolbox
The toolbox can be loaded by using: `` <<QMRITools` ``
If you want to monitor the package loading you can use: `` QMRITools`$Verbose = True; <<QMRITools` ``
A list of all QMRITools packages is generated by

    QMRIToolsPackages[]

A list of all QMRITools functions, or of the functions per toolbox, is generated by

    QMRIToolsFunctions[]
    QMRIToolsFunctions["toolboxname"]

To print the documentation of all functions use

    QMRIToolsFuncPrint[]
    QMRIToolsFuncPrint["toolboxname"]
A list of all functions and their help can be found in ``All-Functions.nb``, which is also available as a [pdf file](https://github.com/mfroeling/QMRITools/releases/download/2.0/All-Functions.pdf).
## Functionality
The toolbox contains over 250 functions and options for processing and analyzing data.
A summary of the core functionality is listed below.
![QMRITools overview][11]
* **Diffusion Analysis**
* Signal drift correction
* LLS, WLLS and iWLLS methods
* REKINDLE outlier detection
* IVIM fitting (fixed parameters, back-projection and Bayesian fitting)
* Parameter fitting using histogram analysis
* Joining and sorting of multiple series of the same volume
* Joining multiple stacks with slice overlap into one stack
* **Diffusion Gradients optimization**
* Single and multi shell
* Rotating and correcting Bmatrix
* Actual b-value estimation by gradient sequence integration
* Gradient visualization
* **Noise suppression**
* LMMSE noise suppression
* PCA noise suppression based on random matrix theory
* Anisotropic tensor smoothing using a diffusion filter
* **Importing and Exporting**
* Dicom data (classic and enhanced file formats)
* Nifti data (.nii and .img/.hdr, supports .gz files)
* Compatible with ExploreDTI and Viste for fiber tractography
* **Data visualization**
* 2D, 3D and 4D viewer
* Multiple images: transparent, difference and checkerboard overlays
* Legend bars and image labels
* Saving to pdf, jpg, animated gif and movie
![Plot 3D and 4D datasets][12]
* **Masking**
* Automated and threshold masking
* Extracting parameters from masks
* Smoothing masks
* Smoothing muscle segmentation
* **Motion and distortion correction (Registration using elastix)**
* Rigid, affine, b-spline and cyclic registration
* nD to nD registration
* Automated series processing
* Slice to slice motion correction of 3D and 4D data
* **Dixon Reconstruction**
* B0 phase unwrapping
* DIXON iDEAL reconstruction with T2*
* **Relaxometry fitting**
* T2 fitting
* T1rho fitting
* Tri Exponential T2 fitting
* EPG based T2 fitting with slice profile
* **Simulation Framework**
* Diffusion tensor simulation and analysis
* Bloch and EPG simulations
* Cardiac DTI models (fiber architecture)
* **Cardiac Diffusion analysis**
* Breathing motion correction
* Corrupted slice rejection
* Local myocardial coordinate system calculation
* Helix angle and fiber architecture matrix
* AHA 17 parameter description
* Transmural parameter description
## License
https://opensource.org/licenses/BSD-3-Clause
Note that restrictions imposed by patents (and possibly others)
exist independently of, and may be in conflict with, the freedoms granted
in the BSD-3-Clause license, which refers to copyright of the program, not patents
for any methods that it implements. Both copyright and patent law must
be obeyed to legally use and redistribute this program and it is not the
purpose of this license to induce you to infringe any patents or other
property right claims or to contest validity of any such claims. If you
redistribute or use the program, then this license merely protects you
from committing copyright infringement. It does not protect you from
committing patent infringement. So, before you do anything with this
program, make sure that you have permission to do so not merely in terms
of copyright, but also in terms of patent law.
Some code in the NiftiTools package was based on https://github.com/tomdelahaije/nifti-converter
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=wolfram_language.png&userId=1332602
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=wolfram_workbench.jpg&userId=1332602
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=eclipse.png&userId=1332602
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=wolfram_mathematica.png&userId=1332602
[5]: https://github.com/mfroeling/QMRITools
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=AddOns.PNG&userId=1332602
[7]: https://mathematica.stackexchange.com/questions/134069/workbench-eclipse-version-html-documentation-build-html-issues
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Guide.PNG&userId=1332602
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=demo.PNG&userId=1332602
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=testing.PNG&userId=1332602
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=OverView.png&userId=1332602
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=visualization.PNG&userId=1332602
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=animation-small.gif&userId=1332602

Martijn Froeling, 2019-04-16T08:11:08Z

# Rubiks Cubes and OOP in Mathematica
https://community.wolfram.com/groups/-/m/t/1659434
I recently got [nerd sniped](https://xkcd.com/356/) on a [question on Stack Exchange](https://mathematica.stackexchange.com/a/195065/38205)
Originally I just wanted to show off my immutable OOP framework, which uses a bunch of Mathematica's built-in features to provide a nice interface like `CloudExpression`, but then that devolved into redoing the entirety of [Roman Maeder's Rubik's Cube Demo](http://demonstrations.wolfram.com/RubiksCube/) as an object.
To make this work you need my `InterfaceObjects` package, which implements Mathematica OOP. You can single-line install that [here](https://paclets.github.io/PacletServer/interfaceobjects.html).
Then load the Rubik's cube package off GitHub [here](https://github.com/b3m2a1/mathematica-tools/blob/master/RubiksCube.wl) (you can also go there to see how this OOP package works):
    Get["https://github.com/b3m2a1/mathematica-tools/raw/master/RubiksCube.wl"]
Now we can make an object:
    new = RubiksCube[]
Visualize it:
    new@"Show"[]
[![enter image description here][4]][4]
Make a new one of a different size and change its colors:
    r1 = RubiksCube["Size" -> 4];
    r1@"Colors" = ColorData["Atoms"] /@ {6, 7, 8, 9, 11, 13, 18};
    r1@"Show"[Method -> {"ShrinkWrap" -> True}]
[![enter image description here][1]][1]
Or make two and plot them side by side with some twists:
    r2 = RubiksCube["Origin" -> {10, 0, 0}, "Size" -> 10];
    Show[
     r1@"Twist"[.5, {"Y", 2}]@"Twist"[.5, {"Y", 4}]@"Show"[],
     r2@"Show"[],
     PlotRange -> All
    ]
You can also `Manipulate` the twisting, if you want:
    Manipulate[
     Fold[
       #@"Twist"[#2[[1]], #2[[2]]] &,
       new,
       Thread[
        {
         {b, f, l, r, d, u},
         {"Back", "Front", "Left", "Right", "Down", "Up"}
        }
       ]
      ]@"Show"[],
     {b, 0, 2 π, .01},
     {f, 0, 2 π, .01},
     {l, 0, 2 π, .01},
     {r, 0, 2 π, .01},
     {d, 0, 2 π, .01},
     {u, 0, 2 π, .01},
     DisplayAllSteps -> True
    ]
[![enter image description here][5]][5]
The OOP interface itself is very nice, as it lets us provide and document a single symbol plus some `"Methods"`, which are discoverable by name, along with the `"Properties"`:
    r1@"Methods"

    {"Show", "Twist"}

    r1@"Properties"

    {"Size", "Origin", "Colors", "Cuboids", "Version", "Properties", "Methods"}
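For those curious how such an `object@"Method"[…]` interface can work at all, here is a minimal, hypothetical sketch of the dispatch pattern (this is not the actual `InterfaceObjects` implementation, just the general idea): an immutable object is a head wrapping an `Association`, and "methods" return a new wrapped value instead of mutating the old one.

    (* hypothetical sketch, not the InterfaceObjects code *)
    point[a_Association]["Properties"] := Keys[a]
    point[a_Association][key_String] := a[key]
    point[a_Association]["Shift"[d_]] :=
      point[Append[a, "Pos" -> a["Pos"] + d]] (* returns a new object *)

    p = point[<|"Pos" -> {0, 0}|>];
    p["Shift"[{1, 2}]]["Pos"] (* {1, 2} *)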
[1]: https://i.stack.imgur.com/tPUcO.png
[2]: https://i.stack.imgur.com/CqJvT.png
[3]: https://i.stack.imgur.com/KKhN6.png
[4]: https://i.stack.imgur.com/pYwi3.png
[5]: https://i.stack.imgur.com/23f6J.png
[6]: https://i.stack.imgur.com/fbJR4.png
[7]: https://i.stack.imgur.com/bFcJo.png
[8]: https://i.stack.imgur.com/3PRrr.png
[9]: https://i.stack.imgur.com/QilkK.png
[10]: https://i.stack.imgur.com/b8lDe.png

b3m2a1, 2019-04-13T00:38:14Z

# [GIF] Five Easy Pieces (Rotating truncation of the tetrahedron)
https://community.wolfram.com/groups/-/m/t/1660413
![Rotating truncation of the tetrahedron][1]
**Five Easy Pieces**
Practically the same idea (and code) as [_Give Me Some Space_][2], just truncating the tetrahedron rather than rectifying it.
The code for the `Manipulate` is below; when exporting to a GIF I used `"DisplayDurations" -> Prepend[Table[1/50, {199}], 1/2]` inside `Export` to get the animation to pause on the original tetrahedron for half a second.
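For reference, that `Export` call looks roughly like this (assuming `frames` holds the 200 rasterized frames of the animation):

    Export["tt8r.gif", frames,
     "DisplayDurations" -> Prepend[Table[1/50, {199}], 1/2]]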
    DynamicModule[{viewpoint = {Cos[2π/3], Sin[2π/3], 1/Sqrt[2]},
      g = .6, d = .2, n = 4,
      v = PolyhedronData["Tetrahedron", "VertexCoordinates"],
      e = {{2, 3, 4}, {1, 4, 3}, {4, 1, 2}, {3, 2, 1}},
      tt = PolyhedronData["TruncatedTetrahedron", "VertexCoordinates"],
      te = PolyhedronData["TruncatedTetrahedron", "Faces"][[2]],
      cols = RGBColor /@ {"#e43a19", "#f2f4f7", "#111f4d"},
      s, r},
     Manipulate[
      s = Haversine[π t];
      r = Haversine[2 π t];
      Graphics3D[{Thickness[.004], EdgeForm[None],
        Table[
         {GraphicsComplex[
           (1/2 + r/2) v[[i]] + RotationTransform[2 π/3 s, v[[i]]][#] & /@ (v/4),
           {cols[[1]], Polygon[e[[i]]], cols[[2]], Polygon[e[[Drop[Range[4], {i}]]]]}]},
         {i, 1, Length[v]}],
        GraphicsComplex[
         RotationTransform[0, {0, 0, 1}][RotationTransform[π s, {-Sqrt[3], -3, Sqrt[6]}][1/4 tt]],
         {cols[[1]], Polygon[te[[1, ;; 4]]], cols[[2]], Polygon[te[[1, 5 ;;]]]}]},
       Boxed -> False, ImageSize -> {540, 540}, PlotRange -> 2.5,
       Background -> cols[[-1]], ViewPoint -> 10 viewpoint,
       ViewAngle -> π/125, ViewVertical -> {0, 0, 1},
       SphericalRegion -> True,
       Lighting -> {{"Ambient", GrayLevel[d]},
         {"Directional", GrayLevel[g], ImageScaled[{2, 0, 2}]},
         {"Directional", GrayLevel[g], ImageScaled[{-2, 2, 2}]},
         {"Directional", GrayLevel[g], ImageScaled[{0, -2, 2}]}}],
      {t, 0, 1}]
     ]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=tt8r.gif&userId=610054
https://community.wolfram.com/groups/-/m/t/962240

Clayton Shonkwiler, 2019-04-14T21:23:52Z

# Anamorphosis and reflection between a Conical Mirror and a Cylinder
https://community.wolfram.com/groups/-/m/t/1660442
![intro][1]
Catoptric or mirror anamorphoses are deformed images that can only be seen undeformed with the help of a mirror.
Here, we experiment with a conical mirror surrounded by a vertical cylindrical surface.
We want to compute points of a deformed (anamorphic) image on the cylinder's inner surface such that it is perceived by the viewer as an undeformed image when looking down in the cone shaped mirror.
![enter image description here][2]
The above drawing shows the anamorphic setup: a conical mirror (radius r=1, height=h), surrounded by a cylindrical surface (radius R>r).
The viewpoint V is along the vertical axis of the cylinder (at infinity relative to the size of the cone).
A point S (xa,ya,za) on the cylinder's inner surface is reflected by the mirror at Q to the viewer's eye at V. The viewer perceives the point at I (xi,yi,0). The lines VQ and SQ form equal angles with the normal to the cone at Q.
![enter image description here][3]
The above animation demonstrates the relation between the point I traveling along a straight line while its anamorphic map follows a curve on the inner surface of the cylinder.
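The geometry above can be summarized in a short derivation (with the viewer at infinity, so incoming sight lines are vertical, and cone base radius $r=1$). Writing $\rho=\sqrt{x_i^2+y_i^2}$ for the radius of the perceived point I, the mirror point Q lies on the cone at height $z_Q=h(1-\rho)$. The cone generator makes an angle $\arctan(1/h)$ with the vertical axis, so a vertical ray reflects into a direction at angle $2\arctan(1/h)$ from the vertical and meets the cylinder of radius $R$ at height

$$z_a = h(1-\rho)-(R-\rho)\,\cot\bigl(2\arctan(1/h)\bigr),$$

which is exactly the third coordinate computed by the function defined next.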
We now write a function that expresses this geometric relationship:
    anaCone2Cylinder[imagePoint : {xi_, yi_}, coneHeight : h_, cylinderRadius : R_] :=
     {(R xi)/Sqrt[xi^2 + yi^2],
      (R yi)/Sqrt[xi^2 + yi^2],
      h - h Sqrt[xi^2 + yi^2] + (-R + Sqrt[xi^2 + yi^2]) Cot[2 ArcTan[1/h]]}
This function maps an image point to an anamorphic point.
To test our function, we use [again][4] one of the logos generated by the Wolfram Demonstration "[Character Rotation Patterns][5]" by Chris Carlson.
Which, after converting to a GraphicsComplex, looks like this:
    ig = ImageGraphics[sun, 2, Method -> "Exact"]; (* sun is the imported logo image *)
    lines = Normal[ig][[1, -1]] /. FilledCurve -> Identity;
    scaledLines = Map[#/948 - .5 &, lines, {6}];
    Graphics[{Thick, scaledLines}]
![enter image description here][6]
We now map the point coordinates of the lines in the GraphicsComplex to their anamorphic counterparts {xa,ya,za} using the function anaCone2Cylinder.
    anaLines = Map[anaCone2Cylinder[#, 1.57, 1.15] &, scaledLines, {5}];
    Graphics3D[{{Opacity[.2], White,
       Cylinder[{{0, 0, .3}, {0, 0, 1.2}}, 1.25]},
      AbsoluteThickness[3], anaLines}, Boxed -> False]
![enter image description here][7]
We then convert the anamorphic 3D drawing to the two-dimensional developed interior face of the cylinder as {ArcTan[xa, ya], za}. This GIF illustrates the unfolding of the cylindrical image:
![enter image description here][8]
    developLineCoordinates =
      Flatten[Map[{ArcTan @@ Most[#], Last[#]} &, anaLines, {5}][[-1]], 1][[All, 1]];
    lstPP = Partition[#, 2, 1] & /@ developLineCoordinates;
    cleanedLines = DeleteCases[#, _?(EuclideanDistance @@ # > 1 &)] & /@ lstPP;
    developedGraphics =
      Graphics[{AbsoluteThickness[2], Line /@ cleanedLines}, FrameTicks -> None,
       Frame -> True, ImageSize -> 600]
    develop = Image[developedGraphics];
![enter image description here][9]
After printing the cylinder development at the right size (52 cm by 14 cm), it is glued around a cardboard cylinder (radius 8 cm). A home-made conical mirror (base radius 7 cm, height 12 cm) is put inside the cylinder at the center. The anamorphic image on the cylinder wall is reflected as the undeformed original by the conical mirror. Here is the result (the center is hidden by a coin resting on top of the cone, since anamorphic maps of points close to the cone center end up at infinite height on the cylinder wall):
![enter image description here][10]
**Another application** of the function is to use one of the many popular curves ("…-like curves") that can be extracted using `Interpreter`:
    Interpreter["PopularCurve"]["bunny-like curve"];
    bugsbunnyPrimitives =
      First@Cases[
        First[ParametricPlot[
           Entity["PopularCurve", "BunnyCurve"]["ParametricEquations"][t],
           {t, 0, 30 \[Pi]}]] /.
         {x_?NumericQ, y_?NumericQ} :> {x - 85, y - 50}/800,
        _Line, \[Infinity]];
![enter image description here][11]
The anamorphic map is created by applying anaCone2Cylinder to the point coordinates:
    anaBunny =
      Map[anaCone2Cylinder[#, 1.755, 1.25] &, bugsbunnyPrimitives, {2}];
    Animate[
     Graphics3D[
      Rotate[{{Opacity[.2], White,
         Cylinder[{{0, 0, .25}, {0, 0, 1}}, 1.25]}, AbsoluteThickness[3],
        Red, anaBunny}, \[Phi], {0, 0, 1}], Boxed -> False],
     {\[Phi], 0, 2 \[Pi]}]
![enter image description here][12]
This is the developed cylinder:
    developRules = {x_?NumericQ, y_?NumericQ, z_?NumericQ} :> {ArcTan[x, y], z};
    developed = anaBunny /. developRules;
    segments = DeleteCases[
       Partition[developed[[1]], 2, 1], _?(EuclideanDistance @@ # > 1 &)];
    Graphics[{Red, AbsoluteThickness[3], Line /@ segments}, FrameTicks -> None,
     Frame -> True]
![enter image description here][13]
And the result, printed, glued inside a cylinder and using the same setup as in the previous example:
![enter image description here][14]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=logojointpics.png&userId=68637
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3379geometrycone.png&userId=68637
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=animationskipeedframes.gif&userId=68637
[4]: https://community.wolfram.com/groups/-/m/t/1646795?p_p_auth=1iKz6YW8
[5]: http://demonstrations.wolfram.com/CharacterRotationPatterns/
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=S-logooutlinecopy.png&userId=68637
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=9897logocylinder.png&userId=68637
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=unfolding.gif&userId=68637
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=4680logodeveloped.png&userId=68637
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=logofinalcombi.png&userId=68637
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=9385bunnyoriginal.png&userId=68637
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=cylinderanimation.gif&userId=68637
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=8734bunnydeveloped.png&userId=68637
[14]: https://community.wolfram.com//c/portal/getImageAttachment?filename=6598bunnyfinal.jpg&userId=68637

Erik Mahieu, 2019-04-15T07:18:15Z