Community RSS Feed
https://community.wolfram.com
RSS feed for Wolfram Community, showing discussions sorted by recent activity.

Sequence-to-sequence regression with recurrent neural networks
https://community.wolfram.com/groups/-/m/t/1647625
I recently wanted to learn more about recurrent neural networks (RNNs) and how to use them for regression problems. In particular, I wanted to try and combine RNNs with the ideas for Bayesian error bar estimation I discussed in my blog/community posts a while back:
https://blog.wolfram.com/2018/05/31/how-optimistic-do-you-want-to-be-bayesian-neural-network-regression-with-prediction-errors/
https://community.wolfram.com/groups/-/m/t/1319745
I also found the following answer on StackExchange about sequence learning with `SequenceAttentionLayer`, which helped me quite a bit to understand how to use these layers:
https://mathematica.stackexchange.com/a/143672/43522
With those resources in hand, I wanted to give time-series regression a shot.
First, let's use `FinancialData` as an easy source of a test dataset. Here I download 5 years of data from 5 different tickers, turn them into a regularly sampled matrix of vectors, and then split the dataset 80/20 into training/test data:
indices = {"GOOGL", "AS:ASML", "MSFT", "AAPL", "GE"};
data = FinancialData[indices, {Today - 5 Quantity[1, "Years"], Today}];
DateListPlot[data]
With[{
matrix = N@Normal[
TimeSeriesRescale[
TimeSeriesResample[TimeSeriesThread[Identity, TimeSeries /@ data]], {0, Max[Length /@ data] - 1}]
][[All, 2]]
},
{fe, data} = FeatureExtraction[matrix, "StandardizedVector", {"ExtractorFunction", "ExtractedFeatures"}]
];
{trainingData, testData} = TakeDrop[data, Round[0.8*Length[data]]];
Dimensions /@ {trainingData, testData}
![enter image description here][1]
Next, we define a function that can generate training data for the network from the data matrix we just made:
Clear[genFun];
genFun[data_, {seqLengthIn_, seqLengthOut_, offset_}] := With[{
    dim = Dimensions[data][[2]],
    max = Max[1, Length[data] - seqLengthOut - offset - seqLengthIn - 2]
   },
   Function[
    With[{ri = RandomInteger[{1, max}, #BatchSize]},
     <|
      "Input" -> Function[data[[# ;; # + seqLengthIn - 1]]] /@ ri,
      "Target" -> Join[
        ConstantArray[{ConstantArray[0., dim]}, #BatchSize],
        Function[With[{j = # + offset + seqLengthIn - 1}, data[[j ;; j + seqLengthOut - 1]]]] /@ ri,
        2
       ]
     |>]]];
Each time the function is called, it generates a batch of input sequences for training, together with the target sequences that the network should learn to reproduce; each target is prepended with a blank start-of-sequence vector:
Dimensions /@ genFun[data, {10, 6, 1}][<|"BatchSize" -> 3|>]
Out[72]= <|"Input" -> {3, 10, 5}, "Target" -> {3, 7, 5}|>
Next we put together our neural network:
Clear[encoderNetAttend, decoderNet, lossLayer, regressionNet, trainingNetAttention];
encoderNetAttend[n_, dropout_] := NetGraph[<|
"lstm" -> LongShortTermMemoryLayer[n, "Dropout" -> dropout],
"last" -> SequenceLastLayer[],
"post" ->
NetChain[{LinearLayer[n], ElementwiseLayer["SELU"],
DropoutLayer[], LinearLayer[]}]
|>,
{"lstm" -> "last" -> "post" -> NetPort["StateInit"], "lstm" -> NetPort["States"]}
];
decoderNet[n_, dropout_] := NetGraph[
<|
"part1" -> PartLayer[{All, 1}], "part2" -> PartLayer[{All, 2}],
"lstm" -> LongShortTermMemoryLayer[n, "Dropout" -> dropout],
"shape1" -> ReshapeLayer[{n, 1}], "shape2" -> ReshapeLayer[{n, 1}], "catState" -> CatenateLayer[2]
|>,
{
NetPort["Input"] -> NetPort["lstm", "Input"],
NetPort["lstm", "Output"] -> NetPort["Output"],
NetPort["StateIn"] -> "part1" -> NetPort["lstm", "State"],
NetPort["StateIn"] -> "part2" -> NetPort["lstm", "CellState"],
NetPort["lstm", "State"] -> "shape1", NetPort["lstm", "CellState"] -> "shape2",
{"shape1", "shape2"} -> "catState" -> NetPort["StateOut"]
},
"StateIn" -> {n, 2}
];
lossLayer["LogPrecision"] = ThreadingLayer@Function[{yObserved, yPredicted, logPrecision},
(yPredicted - yObserved)^2*Exp[logPrecision] - logPrecision
];
regressionNet[n_, dimIn_, dropout : _?NumericQ : 0.5] :=
NetInitialize@NetGraph[
<|
"encoder" -> encoderNetAttend[n, dropout],
"decoder" -> decoderNet[n, dropout],
"attention" -> SequenceAttentionLayer["Bilinear"],
"catenate" -> CatenateLayer[2],
"regression" -> NetMapOperator[LinearLayer[{2, dimIn}]],
"part1" -> PartLayer[{All, 1}], "part2" -> PartLayer[{All, 2}]
|>,
{
NetPort["Input"] -> "encoder",
NetPort["encoder", "StateInit"] -> NetPort["decoder", "StateIn"],
NetPort["encoder", "States"] -> NetPort["attention", "Input"],
NetPort["Prediction"] -> NetPort["decoder", "Input"],
NetPort[{"decoder", "Output"}] ->
NetPort["attention", "Query"], {"decoder", "attention"} ->
"catenate" -> "regression",
"regression" -> "part1" -> NetPort["mu"],
"regression" -> "part2" -> NetPort["logTau"]
},
"Input" -> {"Varying", dimIn},
"Prediction" -> {"Varying", dimIn}
];
trainingNetAttention[n_, dimIn_, dropout : _?NumericQ : 0.5] := NetInitialize@NetGraph[
<|
"regression" -> regressionNet[n, dimIn, dropout],
"loss" -> lossLayer["LogPrecision"],
"total" -> SummationLayer[], "most" -> SequenceMostLayer[], "rest" -> SequenceRestLayer[]
|>,
{
NetPort["Input"] -> NetPort["regression", "Input"],
NetPort["Target"] -> "most" -> NetPort["regression", "Prediction"],
NetPort["Target"] -> "rest",
{"rest", NetPort["regression", "mu"], NetPort["regression", "logTau"]} -> "loss" -> "total" -> NetPort["Loss"]
},
"Input" -> {"Varying", dimIn},
"Target" -> {"Varying", dimIn}
];
nHidden = 250;
pDrop = 0.25;
NetInformation[encoderNetAttend[nHidden, pDrop], "SummaryGraphic"]
NetInformation[decoderNet[nHidden, pDrop], "SummaryGraphic"]
NetInformation[regressionNet[nHidden, 2, 0.25], "SummaryGraphic"]
NetInformation[ trainingNetAttention[nHidden, 2, 0.25], "SummaryGraphic"]
![enter image description here][2]
The idea here is that the encoder produces a state vector that the decoder uses to start making predictions. The decoder, in turn, accepts the network's own predictions to iteratively crank out the next one. During training, though, we don't yet have actual predictions from the network, so instead we feed in the sequence of actual target values and offset the target sequence at the loss layer by one with a `SequenceRestLayer`.
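To make the offsetting concrete, here is a toy illustration (my own example values, not part of the network code): the decoder is fed the target sequence minus its last element, while the loss compares the decoder output against the target sequence minus its first element (the start-of-sequence vector):

    (* toy target batch element: start vector followed by three true values *)
    target = {{0., 0.}, {1., 2.}, {3., 4.}, {5., 6.}};
    Most[target] (* what SequenceMostLayer feeds the decoder: {{0., 0.}, {1., 2.}, {3., 4.}} *)
    Rest[target] (* what SequenceRestLayer sends to the loss: {{1., 2.}, {3., 4.}, {5., 6.}} *)

So at each position the decoder sees the true previous value and is scored on predicting the next one ("teacher forcing").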
Time to train the network:
seqConfig = {15, 7, 1};
trainedTrainObject = NetTrain[
trainingNetAttention[nHidden, Length[indices], pDrop],
genFun[trainingData, seqConfig],
All,
TargetDevice -> "GPU", LossFunction -> "Loss",
TimeGoal -> 30*60, BatchSize -> 200,
ValidationSet -> genFun[testData, seqConfig][<|"BatchSize" -> 200|>]
];
trainedNet = trainedTrainObject["TrainedNet"];
trainedTrainObject["EvolutionPlots"]
![enter image description here][3]
To make predictions, the network needs to eat its own results iteratively, which we can do with `NestList`:
makePredictorFunction[trainedNet_, evaluationMode : ("Test" | "Train") : "Test", device : ("CPU" | "GPU") : "CPU"] := With[{
dim = Last@NetExtract[trainedNet, {"Input"}],
encoder = NetExtract[trainedNet, {"regression", "encoder"}],
regressionNet =
NetDelete[NetExtract[trainedNet, "regression"], "encoder"]
},
Function[{input, n},
Module[{
encodedInput = encoder[input, NetEvaluationMode -> evaluationMode, TargetDevice -> device],
vectorInputQ = Depth[input] === 4,
length = Length[input],
reshape
},
reshape = If[vectorInputQ, ConstantArray[#, Length[input]] &, Identity];
GeneralUtilities`AssociationTranspose@NestList[
regressionNet[
<|"Input" -> encodedInput["States"], "Prediction" -> #mu, "StateIn" -> #StateOut|>,
NetEvaluationMode -> evaluationMode,
TargetDevice -> device] &,
<|
"mu" -> reshape@{ConstantArray[0, dim]},
"StateOut" -> encodedInput["StateInit"]
|>,
n
][[2 ;;, {"mu", "logTau"}, If[vectorInputQ, All, Unevaluated[Sequence[]]], 1]]
]
]
];
For example, we can predict 7 steps ahead based on 30 points of historical data:
predictor = makePredictorFunction[trainedNet, "Train", "CPU"];
Dimensions /@ predictor[data[[;; 30]], 7]
Out[299]= <|"mu" -> {7, 5}, "logTau" -> {7, 5}|>
Here's a `Manipulate` to visualize the predictions against the actual data. We sample the network several times to get a sense of the uncertainty from the training weights and combine that uncertainty with the variance predicted by the network:
Manipulate[
Manipulate[
Quiet@With[{
predictions = Dot[
{{1., 1.}, {1., 0.}, {1., -1.}},
{Mean[#mu],
nstdev*Sqrt[Variance[#mu] + Mean[Exp[-#logTau]]]} &@Map[
Transpose,
predictor[Table[dat[[Max[1, n - t] ;; UpTo[n]]], s], l]
][[{"mu", "logTau"}, All, All, index]]
],
trueData = ListPlot[
MapIndexed[{First[#2] + Max[1, n - t] - 1, #1} &,
Flatten[dat[[Max[1, n - t] ;; UpTo[n + l], index]]]
],
Joined -> True,
PlotStyle -> Black
]
},
Show[
ListPlot[
Thread@Legended[
MapIndexed[
{First[#2] + Max[1, n - t] - 1, #1} &,
Flatten[{
dat[[Max[1, n - t] ;; UpTo[n], index]],
#
}]
] & /@ predictions,
{StringForm["\[Mu] + `1` \[Sigma]", Dynamic[nstdev]],
"\[Mu]", StringForm["\[Mu] - `1` \[Sigma]", Dynamic[nstdev]]}
],
Joined -> True,
PlotStyle -> (Directive[#, Dashed] & /@ {Blue, Red, Blue}),
ImageSize -> 500,
Filling -> {1 -> {2}, 3 -> {2}}
],
trueData,
ImageSize -> Large,
PlotRange -> All
]
],
{{index, 1, "Index"}, Thread[Range[Length[indices]] -> indices]},
{{n, Round[Length[dat]/10], "Position"}, 1, Length[dat], 1},
{{t, 20, "History length"}, 1, 30, 1},
{{l, 30, "Extrapolation"}, 1, 50, 1},
{{s, 5, "Number of samples"}, 1, 20, 1},
{{nstdev, 2, "Standard deviations"}, 0, 4, 0.1},
Paneled -> False,
ContinuousAction -> True,
TrackedSymbols :> {index, n, t, l, s, nstdev},
SynchronousInitialization -> False
],
{{dat, trainingData, "Dataset"}, {trainingData -> "Training data",
testData -> "Test data"}},
TrackedSymbols :> {dat},
SynchronousInitialization -> False
]
![enter image description here][4]
The results on the validation set do not look great, but the comparison between the first 4 years of the data and the last one may not be fair. Or it may simply be necessary to involve many more indices before it becomes feasible to do extrapolations of this type. At any rate, I hope that this code is useful to other users here! I will probably tweak bits of code here and there over the next few days and maybe add more commentary, so be sure to check back occasionally if you're interested.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=8936datelistplot.png&userId=839741
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1613networks.png&userId=839741
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=progressplot.png&userId=839741
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=manipulate.png&userId=839741
— Sjoerd Smit, 2019-04-05T15:43:41Z

How should I use this DateHistogram function correctly?
https://community.wolfram.com/groups/-/m/t/1665479
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=%E5%BE%AE%E4%BF%A1%E5%9B%BE%E7%89%87_20190421225933.png&userId=1665464
I want to draw a histogram using the data as in the pic I uploaded, but errors occurred. Can someone help me?
— Siyao Liu, 2019-04-21T15:09:37Z

Plotting two lines for max/min error on a plot
https://community.wolfram.com/groups/-/m/t/1665533
Dear all,
How do I plot two parallel lines for maximum and minimum values of relative error percentage as below:
In[124]:= m = {2.2027036725996783`,
5.280362701954329`, -1.0124449729925813`, 1.698979465419002`,
3.141591718306451`, -2.745060440009399`, -1.6113207086717107`,
0.16638391106163822`, -2.258695482994957`, -0.3482598627250328`,
4.925577851518227`, 2.6750207005122877`, -4.02288009868453`,
2.714952029120464`, 3.034760679525427`,
3.729591243576016`, -3.3014545762020924`, -0.22188437300968553`, \
-5.337065768483884`, -4.1220754806842725`,
0.5332742419880744`, -2.8743222182737225`, -2.693012697508946`,
0.32865798833744514`,
4.027192028466991`, -2.8987838601887117`, -0.16628212916201005`,
3.484024392679025`, -3.2865991892298494`, -1.4074929511476675`, \
-0.1376591620771673`, -1.340169784223353`, -0.18874557070863454`, \
-0.7158258869527857`, -1.6221568977272842`, 2.220428106758959`,
0.18756079584372606`, 2.877020526594872`, 1.6279535082847358`,
0.615660929310434`,
0.8731989288504256`, -2.4430509995815504`, -3.8101555754900276`, \
-0.41354676208662355`, -3.1882480501258503`,
2.797772540468918`, -4.423074431837073`, -0.15315482194415617`, \
-0.9330704276794881`, -2.6279243117503595`, 2.4286072216474692`,
3.303587280307156`,
3.8085267078688805`, -1.9518727734300068`, -4.213412770426206`, \
-2.357017551327045`,
3.4733691167400265`, -0.11673331538713343`, -0.40688016301067764`,
1.6698198640343067`, 0.14831028096303508`, 0.24317430279547272`,
3.011211821071481`, -0.48218554546387205`, 0.09689880402434317`,
1.191374402522961`, -2.7963094788933693`, -1.7336554568628828`,
0.7339703452354714`, -0.8739780822047689`, -3.63379798637767`,
1.0591084727706641`, 5.324198415974884`,
0.0845454122567755`, -3.991208337015724`, -3.793925049867107`,
1.8074800549551044`, 1.1480036635543738`, -2.7476419011691475`,
1.00023361017206`, 0.028334302480905814`, -0.008048998604257977`}
In[129]:= {Max[m], Min[m]}
Out[129]= {5.3242, -5.33707}
In[130]:= ListLinePlot[m, LabelStyle -> {14, GrayLevel[0], Bold},
ImageSize -> 500, PlotStyle -> Blue,
AxesLabel -> {"Sample", "Relative Error (%)"}, Filling -> Axis,
PlotRange -> All]
![enter image description here][1]
Many thank!
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=4924Untitled.png&userId=943918
— M.A. Ghorbani, 2019-04-21T14:29:26Z

Why doesn't Mathematica replace i with anything I want?
https://community.wolfram.com/groups/-/m/t/1665703
So I wanted to replace i with 0 in my equation (please don't ask why). I tried several options, but none of them gave the result I wanted. What should I do?

![showcase of the problem][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=wolfWTF.png&userId=1308552
— Daniel Voloshin, 2019-04-21T19:59:34Z

Solve running forever
https://community.wolfram.com/groups/-/m/t/1666013
Hi,
I own Mathematica Student Edition 11.3 and am trying to solve a quadratic equation (not even a hard one), but the program runs forever without ever returning a solution. Meanwhile, I tried the exact same piece of code in Wolfram|Alpha (directly from the website) and it returned a solution right away. Is there such a limitation with the Student Edition, or is there something wrong with my version? Is there any way to avoid this issue?
PS: the code is extremely simple,
Solve[1 - w + (a^2 w^2)/r^2 == 0, w]
— Max Mayca, 2019-04-21T20:53:30Z

Get surface volume?
https://community.wolfram.com/groups/-/m/t/1658725
I would like to get a mathematical expression for the volume of the figure. The x, y, z coordinate system is known, and I receive an .stl model.
![enter image description here][1]
![enter image description here][2]
The surface area was already calculated: using the coordinate equations, the surface was obtained by integration and a mathematical expression was found. How do I calculate the volume enclosed by the surface?
![enter image description here][3]
Please help me obtain an expression for the volume of the figure, similar to the calculation of the surface area, under the specified conditions.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=4REiV.jpg&userId=1658711
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=EXQfO.png&userId=1658711
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Ku1G0.png&userId=1658711
— alex m., 2019-04-12T00:12:45Z

Decorating Easter Eggs with the Planets
https://community.wolfram.com/groups/-/m/t/1665923
![The planets as Easter eggs][1]
Just thought I'd share this fun little exercise. First, we need to get the textures to use.
textures =
ImageReflect[#, Right] & /@
EntityValue["Planet", "CylindricalEquidistantTexture"];
Then, we need to plot parametric surfaces that look like eggs and apply the textures to them.
GraphicsGrid[
 Partition[
  With[{l = .75, a = 1, b = 1},
     ParametricPlot3D[
      Evaluate[
       RotationMatrix[Pi/2, {0, 1, 0}].{l Cos[t] + (a + b Cos[t]) Cos[t],
        (a + b Cos[t]) Sin[t] Cos[p], (a + b Cos[t]) Sin[t] Sin[p]}],
      {p, 0, 2 Pi}, {t, 0, Pi}, Mesh -> None, Boxed -> False, Axes -> False,
      Lighting -> "Neutral", PlotStyle -> Texture[#], ViewPoint -> Left,
      PlotPoints -> 50, Background -> Black, SphericalRegion -> True,
      ViewAngle -> Pi/6]] & /@ textures, 4],
 Spacings -> {0, 0}, ImageSize -> 800]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=PlanetEaster.png&userId=25355
— Jeff Bryant, 2019-04-21T20:44:15Z

Invert a 2x2 matrix with 2x2 matrix elements?
https://community.wolfram.com/groups/-/m/t/1651695
If this is my matrix a as follows:
a00 a01
a10 a11
Element a00 would consist of a matrix b as follows:
b00 b01
b10 b11
And so on ...
I want to invert my matrix a using a Mathematica algorithm.
Thanks for your help.
Stan
— stan gianzero, 2019-04-07T16:35:03Z

Roadmap to Mathematica 12 on Raspberry Pi?
https://community.wolfram.com/groups/-/m/t/1664758
Great to see Mathematica 12 released and I can't wait to try the new functionality!
What is the timeline and roadmap to get Mathematica 12 to the Pi? Thanks!
— Michael Byczkowski, 2019-04-20T08:40:16Z

How can I significance test a NonlinearModelFit?
https://community.wolfram.com/groups/-/m/t/1665378
I have some data, and I have done a NonlinearModelFit on it, actually fitting it to a sine curve. I can get the "RSquared" and "AdjustedRSquared", e.g. with nlm["AdjustedRSquared"] where nlm is the output of the NonlinearModelFit. I now want to test the significance of the result. I would like to end up with a single number p, so that I could say, "the probability of getting such a fit by chance is p".
NonlinearModelFit has properties like "ParameterPValues" and "ParameterTStatistics". However, I have looked in the StatisticalModelAnalysis tutorial, and there is no real explanation of how they might be used or generally how to do significance testing.
Does NonlinearModelFit have built-in ways to get significance (the probability of the fit being due to chance)? Or is there a good tutorial on using the output of Mathematica's NonlinearModelFit to do significance testing?
— Marc Widdowson, 2019-04-21T13:46:43Z

Specifying meshes and boundaries in NDSolveValue
https://community.wolfram.com/groups/-/m/t/1665050
I am trying to solve Laplace's equation in two dimensions in order to obtain the voltage field with specified regions and boundaries. One of the regions in the problem is a half-disk, and I wish to specify a voltage on the boundary of that half-disk. I do not know exactly what my problem is, but I suspect that the boundary I am generating for that half-disk is wrong.
Ultimately NDSolveValue appears to work, but does not give a plausible answer. This problem does not require much code. It would be very helpful if someone could examine it and let me know what I am doing wrong.
— Robert Curl, 2019-04-20T21:53:54Z

How do I include "Authorization: Bearer ..." in a URL execute
https://community.wolfram.com/groups/-/m/t/1664088
I am trying to access the Asana API. In order to do that, I need to send HTTP requests of the form
https://api end point "Authorization: Bearer 0/123456789abcdef"
I have tried setting Authentication -> <|"Authorization" -> Bearer... |>, but that just brings up the Asana sign-in box. I also tried "Headers" -> "Authorization: Bearer..." |>, but that throws the error
HTTPRequest::nvldheaders: -- Message text not found -- (Join[{user-agent->Wolfram HTTPClient 12.},Authorization: Bearer ...
What is the correct way to include the Authorization header in the request?
— Andrew Burnett, 2019-04-19T17:44:06Z

Computable famous theorems of geometry
https://community.wolfram.com/groups/-/m/t/1664846
[GeometricScene][1] and [FindGeometricConjectures][2] are two of my favorite new functions in Wolfram Language V12. V12 provides innovative automated capabilities to draw and reason about abstractly described scenes in the plane.
I also remember spending many days proving famous theorems of geometry when I was a junior high school student. I will show eight theorems, including some from the Documentation Center and the [Wolfram blog][3].
## Thales's Theorem ##
If A, B, and C are distinct points on a circle where the line AC is a diameter, then the angle \[Angle]ABC is a right angle.
gs = GeometricScene[{"A", "B", "C", "O"},
{Triangle[{"A", "B", "C"}],
CircleThrough[{"A", "B", "C"}, "O"],
"O" == Midpoint[{"A", "C"}],
Style[Line[{"A", "B"}], Orange],
Style[Line[{"B", "C"}], Orange]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][4]
![enter image description here][5]
## Napoleon's Theorem ##
If regular triangles are constructed on the sides of any triangle, either all outward or all inward, the lines connecting the centres of those regular triangles themselves form a regular triangle.
gs = GeometricScene[{"C", "B", "A", "C'", "B'", "A'", "Oc", "Ob",
"Oa"},
{Triangle[{"C", "B", "A"}],
TC == Triangle[{"A", "B", "C'"}],
TB == Triangle[{"C", "A", "B'"}],
TA == Triangle[{"B", "C", "A'"}],
GeometricAssertion[{TC, TB, TA}, "Regular"],
"Oc" == TriangleCenter[TC, "Centroid"],
"Ob" == TriangleCenter[TB, "Centroid"],
"Oa" == TriangleCenter[TA, "Centroid"],
Style[Triangle[{"Oc", "Ob", "Oa"}], Orange]}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][6]
![enter image description here][7]
## Finsler-Hadwiger Theorem ##
ABCD and A BB CC DD are two squares with a common vertex A. Let Q and S be the midpoints of BB–D and B–DD respectively, and let R and T be the centers of the two squares. Then the quadrilateral QRST is a square as well.
gs = GeometricScene[{"A", "B", "C", "D", "BB", "CC", "DD", "Q", "R", "S", "T"},
{GeometricAssertion[{Polygon[{"A", "B", "C", "D"}],
Polygon[{"A", "BB", "CC", "DD"}]}, "Regular", "Counterclockwise"],
"Q" == Midpoint[{"BB", "D"}],
"R" == Midpoint[{"A", "C"}],
"S" == Midpoint[{"B", "DD"}],
"T" == Midpoint[{"A", "CC"}],
Style[Polygon[{"Q", "R", "S", "T"}], Orange]}];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][8]
![enter image description here][9]
## Echols' Theorem ##
The midpoints of AD, BE, and CF in two equilateral triangles ABC and DEF form a regular triangle.
gs = GeometricScene[{"A", "B", "C", "D", "E", "F", "L", "M", "N"},
{T1 == Triangle[{"A", "B", "C"}],
T2 == Triangle[{"D", "E", "F"}],
GeometricAssertion[{T1, T2}, "Regular"],
"L" == Midpoint[{"A", "D"}],
"M" == Midpoint[{"B", "E"}],
"N" == Midpoint[{"C", "F"}],
Style[Triangle[{"L", "M", "N"}], Orange]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][10]
![enter image description here][11]
## Simson Theorem & Steiner Theorem ##
Simson's Theorem states that given a triangle ABC and a point P on its circumcircle, the points on lines AB, AC, and BC closest to P (the feet of the perpendiculars from P) are collinear. Steiner's Theorem states that if the orthocenter of triangle ABC is H, the Simson line passes through the midpoint of PH.
gs = GeometricScene[{"A", "B", "C", "P", "L", "M", "N", "H", "S"},
{CircleThrough[{"P", "A", "B", "C"}],
"L" \[Element] InfiniteLine[{"B", "C"}],
"M" \[Element] InfiniteLine[{"C", "A"}],
"N" \[Element] InfiniteLine[{"A", "B"}],
PlanarAngle[{"P", "L", "B"}] == 90 \[Degree],
PlanarAngle[{"P", "M", "C"}] == 90 \[Degree],
PlanarAngle[{"P", "N", "A"}] == 90 \[Degree],
Style[InfiniteLine[{"L", "M"}], Orange],
GeometricAssertion[{InfiniteLine[{"A", "H"}], Line[{"B", "C"}]},
"Perpendicular"],
GeometricAssertion[{InfiniteLine[{"B", "H"}], Line[{"A", "C"}]},
"Perpendicular"],
Style[Line[{"P", "H"}], Orange],
Line[{"P", "S", "H"}], Line[{"L", "S", "M"}]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][12]
![enter image description here][13]
## Van Aubel's Theorem ##
Starting with a given quadrilateral (a polygon having four sides), construct a square on each side. The two line segments between the centers of opposite squares are of equal length and are at right angles to one another.
gs = GeometricScene[{"A", "B", "C", "D", "A'", "A''", "B'",
"B''", "C'", "C''", "D'", "D''", "Oa", "Ob", "Oc", "Od"},
{GeometricAssertion[Polygon[{"A", "B", "C", "D"}], "Convex"],
GeometricAssertion[{pa = Polygon[{"A", "B", "A'", "A''"}],
pb = Polygon[{"B", "C", "B'", "B''"}],
pc = Polygon[{"C", "D", "C'", "C''"}],
pd = Polygon[{"D", "A", "D'", "D''"}]}, "Regular",
"Counterclockwise"],
"Oa" == Midpoint[{"A", "A'"}],
"Ob" == Midpoint[{"B", "B'"}],
"Oc" == Midpoint[{"C", "C'"}],
"Od" == Midpoint[{"D", "D'"}],
Style[Line[{"Oa", "Oc"}], Orange],
Style[Line[{"Ob", "Od"}], Orange]}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][14]
![enter image description here][15]
## Brahmagupta Theorem ##
If a cyclic quadrilateral has perpendicular diagonals, then the perpendicular to a side from the point of intersection of the diagonals always bisects the opposite side.
gs = GeometricScene[{"A", "B", "C", "D", "E", "M"},
{Polygon[{"A", "B", "C", "D"}],
CircleThrough[{"A", "B", "C", "D"}],
GeometricAssertion[{Line[{"A", "C"}], Line[{"B", "D"}]},
"Perpendicular"],
Line[{"A", "E", "C"}], Line[{"B", "E", "D"}],
"M" == Midpoint[{"A", "B"}],
Style[InfiniteLine[{"M", "E"}], Orange],
Style[Line[{"C", "D"}], Orange]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][16]
![enter image description here][17]
## Morley Theorem ##
In any triangle, the three points of intersection of the adjacent angle trisectors form a regular triangle.
gs = GeometricScene[{"A", "B", "C", "D", "E", "F"},
{Triangle[{"A", "B", "C"}],
PlanarAngle[{"A", "B", "F"}] == PlanarAngle[{"A", "B", "C"}]/3,
PlanarAngle[{"F", "A", "B"}] == PlanarAngle[{"C", "A", "B"}]/3,
PlanarAngle[{"C", "B", "D"}] == PlanarAngle[{"C", "B", "A"}]/3,
PlanarAngle[{"B", "C", "D"}] == PlanarAngle[{"B", "C", "A"}]/3,
PlanarAngle[{"A", "C", "E"}] == PlanarAngle[{"A", "C", "B"}]/3,
PlanarAngle[{"C", "A", "E"}] == PlanarAngle[{"C", "A", "B"}]/3,
"D" \[Element] Triangle[{"A", "B", "C"}],
"E" \[Element] Triangle[{"A", "B", "C"}],
"F" \[Element] Triangle[{"A", "B", "C"}],
Style[Triangle[{"D", "E", "F"}], Orange]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][18]
![enter image description here][19]
[1]: https://reference.wolfram.com/language/ref/GeometricScene.html
[2]: https://reference.wolfram.com/language/ref/FindGeometricConjectures.html
[3]: https://blog.wolfram.com/2019/04/16/version-12-launches-today-big-jump-for-wolfram-language-and-mathematica/
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=122401.jpg&userId=1013863
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=801502.jpg&userId=1013863
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=508803.jpg&userId=1013863
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=222204.jpg&userId=1013863
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=289105.jpg&userId=1013863
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=932006.jpg&userId=1013863
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=876507.jpg&userId=1013863
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=184108.jpg&userId=1013863
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=350909.jpg&userId=1013863
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=500710.jpg&userId=1013863
[14]: https://community.wolfram.com//c/portal/getImageAttachment?filename=318711.jpg&userId=1013863
[15]: https://community.wolfram.com//c/portal/getImageAttachment?filename=254812.jpg&userId=1013863
[16]: https://community.wolfram.com//c/portal/getImageAttachment?filename=308813.jpg&userId=1013863
[17]: https://community.wolfram.com//c/portal/getImageAttachment?filename=934714.jpg&userId=1013863
[18]: https://community.wolfram.com//c/portal/getImageAttachment?filename=367015.jpg&userId=1013863
[19]: https://community.wolfram.com//c/portal/getImageAttachment?filename=16.jpg&userId=1013863
— Kotaro Okazaki, 2019-04-20T14:42:58Z

Visualizing simultaneous iteration methods for polynomial roots
https://community.wolfram.com/groups/-/m/t/1646447
One of my longstanding research interests in numerical analysis has been the family of "simultaneous iteration" methods for finding polynomial roots. (See e.g. [McNamee's book][1] for a comprehensive survey.) Briefly put, these are modified Newton-Raphson methods that allow one to find the roots of a polynomial all at once, as opposed to finding them one at a time and "deflating".
I had the idea to visualize how these algorithms gradually proceed from initial approximations to the roots, up to convergence. After a number of experiments, I settled on using [domain coloring][2] for visualization. I have found that the logarithmic derivatives of polynomials gave particularly striking plots.
For this post, I have used the scaled [exponential sum][3] of degree 20:
$$\frac{20!}{20^{20}}\sum_{k=0}^{20}\frac{(20z)^k}{k!}$$
as the example polynomial whose roots we want to see. It [is][4] [known][5] that the zeroes of this polynomial asymptotically approach the so-called Szegő curve as the polynomial degree goes to infinity.
![Szegő curve][6]
expPoly[x_] = With[{n = 20}, Sum[(n! (n x)^k)/(k! n^n), {k, 0, n}]]
---
I will now look at two of the most popular simultaneous iteration methods. The first one is the [(Weierstrass-)Durand-Kerner method][7],
$$x_i^{(k+1)}=x_i^{(k)}-\frac{p(x_i^{(k)})}{\prod\limits_{j\neq i} (x_i^{(k)}-x_j^{(k)})},\qquad i=1\dots n;\; k=0,1,\dots$$
which is (typically) quadratically convergent. (Note that in simultaneous iteration methods, the polynomials are always assumed to be monic (i.e., unit leading coefficient).)
Implementing the iteration is easy using `FixedPointList[]`. As is customary with these methods, we use as a starting approximation points equispaced around the unit circle, and slightly rotated:
ptsdk = FixedPointList[# - expPoly[#]/Table[Apply[Times, #[[k]] - Delete[#, k]],
{k, Length[#]}] &,
N[Exp[2 π I Range[0, 19]/20 - I π/40]], 40,
SameTest -> (EuclideanDistance[##] < 1.*^-6 &)];
I use a loose convergence criterion that is good enough for visualization purposes.
For the domain coloring plot, I will use a slightly modified version of the [DLMF color scheme][8], based on an idea of [Quilez][9].
DLMFPhaseColor[u_, s_:1, b_:1] := Module[{rgb},
rgb = Clip[{1, -1, -1} Abs[{8, 4, 8} Mod[u/(2 π), 1] -
{9, 3, 11}/2] + {-3, 3, 5}/2, {0, 1}];
rgb = (3 - 2 rgb) rgb^2;
Apply[RGBColor, b (1 + s (rgb - 1))]]
I then use a simplified version of [code originally written by user Heike][10] on Mathematica Stack Exchange:
dcdk = RegionPlot[True, {x, -9/8, 9/8}, {y, -9/8, 9/8},
ColorFunction ->
Function[{x, y}, DLMFPhaseColor[Arg[Total[1/(x + I y - #)]]]],
ColorFunctionScaling -> False, Frame -> False,
PlotPoints -> 405] & /@ ptsdk;
(This takes some time, due to the high `PlotPoints` setting.)
We can now see an animation:
ListAnimate[dcdk]
![Durand-Kerner][11]
The other method I will be looking at in this post is the (typically) cubically convergent [Ehrlich-Aberth(-Maehly) method][12],
$$x_i^{(k+1)}=x_i^{(k)}-\frac{\tfrac{p(x_i^{(k)})}{p^\prime(x_i^{(k)})}}{1-\tfrac{p(x_i^{(k)})}{p^\prime(x_i^{(k)})}\sum\limits_{j\neq i} \tfrac1{x_i^{(k)}-x_j^{(k)}}},\qquad i=1\dots n;\; k=0,1,\dots$$
which is also one of the methods available in *Mathematica*'s `NSolve[]`/`NRoots[]`.
Unfortunately, I have no way to get the iterates generated by `NSolve[]`, so I had to reimplement the method myself. We can use essentially the same code as was used for Durand-Kerner, with a few changes:
ptsea = FixedPointList[With[{ld = expPoly[#]/expPoly'[#]},
# - ld/(1 - ld Table[Tr[1/(#[[k]] - Delete[#, k])],
{k, Length[#]}])] &,
N[Exp[2 π I Range[0, 19]/20 - I π/40]], 40,
SameTest -> (EuclideanDistance[##] < 1.*^-6 &)];
dcea = RegionPlot[True, {x, -9/8, 9/8}, {y, -9/8, 9/8},
ColorFunction ->
Function[{x, y}, DLMFPhaseColor[Arg[Total[1/(x + I y - #)]]]],
ColorFunctionScaling -> False, Frame -> False,
PlotPoints -> 405] & /@ ptsea;
ListAnimate[dcea]
![Ehrlich-Aberth][13]
It would be interesting to use this visualization technique on other polynomials with interesting root structure, as well as other simultaneous iteration methods.
[1]: https://books.google.com/books?hl=en&id=4PMqxwG-eqQC&pg=PA67
[2]: https://en.wikipedia.org/wiki/Domain_coloring
[3]: http://mathworld.wolfram.com/ExponentialSumFunction.html
[4]: https://doi.org/10.1216/rmjm/1181072998
[5]: https://doi.org/10.1007/BFb0087909
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=vpliw.gif&userId=520181
[7]: https://en.wikipedia.org/wiki/Durand%E2%80%93Kerner_method
[8]: https://dlmf.nist.gov/help/vrml/aboutcolor#S2
[9]: https://www.shadertoy.com/view/MsS3Wc
[10]: https://mathematica.stackexchange.com/a/7293/
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=dcdk.gif&userId=520181
[12]: https://en.wikipedia.org/wiki/Aberth_method
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=dcea.gif&userId=520181J. M.2019-04-04T04:38:26ZAnalysis of crop health using NDVI (Red and Near-infrared capturing camera)
https://community.wolfram.com/groups/-/m/t/1663138
Hello Wolfram Community,
I want to take pictures of crops using a drone and better highlight the contrast between regions of healthy vegetation vs. areas with stressed/diseased or no vegetation (in future, extract more information about the crops).
I have recently removed an infrared blocking filter from a camera of a DJI Phantom 3 and instead mounted a filter that blocks every wavelength except Red and Near-infrared. I want red and near-infrared light because healthy vegetation will absorb red light and strongly reflect near-infrared light.
Incorporation of this filter allows reflected red light (660nm) to be captured in the camera sensor’s red channel and reflected near-infrared light (850nm) to be captured in the sensor’s blue channel (the green channel is not used). NDVI can be determined by performing the calculation: NDVI = (blue – red) / (blue + red).
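For comparison with the processing below, the per-pixel arithmetic is easy to sketch in Python/NumPy (illustrative; `ndvi_image` is my own naming). Note the `(index + 1)/2` shift, which maps NDVI from $[-1, 1]$ into $[0, 1]$ so the result can be stored as a grayscale image:

```python
import numpy as np

def ndvi_image(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red), rescaled from [-1, 1] to [0, 1]."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        index = (nir - red) / (nir + red)
    index = np.nan_to_num(index)  # 0/0 pixels (no signal at all) -> 0
    return (index + 1.0) / 2.0
```

A healthy-vegetation pixel (low red, high NIR) maps near 1, while bare soil (red roughly equal to NIR) maps near 0.5.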
Could you please tell me how to make a heat map (maybe using DensityPlot), or suggest any other way to better highlight the contrast than the Colorize function? Or if anything comes to your mind, I would be glad to hear!
![enter image description here][1]
![enter image description here][2]
This is my first attempt at post-processing.
An approximate regular camera picture of the grass spot:
![enter image description here][3]
ColorSeparate[![enter image description here][4], {"R", "B"}]
Out[1] = { ![enter image description here][5], ![enter image description here][6]}
f[pixel1_, pixel2_] := (((pixel1 - pixel2)/(pixel1 + pixel2)) + 1)/2;
ImageApply[f, {![enter image description here][7], ![enter image description here][8]}]
Out[3] = ![enter image description here][9]
The darker spots are supposed to be less healthy vegetation.
Colorize[![enter image description here][10]]
Out[4] = ![enter image description here][11]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-04-17at15.25.21.png&userId=1343397
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-04-17at15.26.08.png&userId=1343397
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=IMG_7343.jpg&userId=1343397
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1.jpeg&userId=1343397
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2.jpeg&userId=1343397
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3.jpeg&userId=1343397
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3.jpeg&userId=1343397
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2.jpeg&userId=1343397
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=45.jpeg&userId=1343397
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=45.jpeg&userId=1343397
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=6.jpeg&userId=1343397Zhamilya Bilyalova2019-04-17T19:02:58ZThoughts on a Python interface, and why ExternalEvaluate is just not enough
https://community.wolfram.com/groups/-/m/t/1185247
`ExternalEvaluate`, introduced in M11.2, is a nice initiative. It enables limited communication with multiple languages, including Python, and appears to be designed to be relatively easily extensible (see ``ExternalEvaluate`AddHeuristic`` if you want to investigate, though I wouldn't invest in this until it becomes documented).
**My great fear, however, is that with `ExternalEvaluate` Wolfram will consider the question of a Python interface settled.**
This would be a big mistake. A *general* framework, like `ExternalEvaluate`, that aims to work with *any* language and relies on passing code (contained in a string) to an evaluator and getting JSON back, will never be fast enough or flexible enough for *practical scientific computing*.
Consider a task as simple as computing the inverse of a $100\times100$ Mathematica matrix using Python (using [`numpy.linalg.inv`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.inv.html)).
I challenge people to implement this with `ExternalEvaluate`. It's not possible to do it *in a practically useful way*. The matrix has to be sent *as code*, and piecing together code from strings just can't replace structured communication. The result will need to be received as something encodable to JSON. This has terrible performance due to multiple conversions, and even risks losing numerical precision.
Just sending and receiving a tiny list of 10000 integers takes half a second (!)
In[6]:= ExternalEvaluate[py, "range(10000)"]; // AbsoluteTiming
Out[6]= {0.52292, Null}
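The gap between text-based and binary transfer is easy to illustrate in pure Python (this is an analogy for the serialization cost only, not a measurement of `ExternalEvaluate` itself; the zeromq transport adds its own overhead on top):

```python
import json
import time

import numpy as np

data = list(range(10000))

# text roundtrip: serialize to a JSON string and parse it back
t0 = time.perf_counter()
decoded = json.loads(json.dumps(data))
t_json = time.perf_counter() - t0

# binary roundtrip: the same values as a packed int64 buffer
arr = np.arange(10000, dtype=np.int64)
t0 = time.perf_counter()
restored = np.frombuffer(arr.tobytes(), dtype=np.int64)
t_bin = time.perf_counter() - t0

print(f"json: {t_json:.6f} s   binary: {t_bin:.6f} s")
```

On typical hardware the binary path is orders of magnitude faster, which is the core of the argument for a packed-array-aware transport.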
Since I am primarily interested in scientific and numerical computing (as I believe most M users are), I simply won't use `ExternalEvaluate` much, as it's not suitable for this purpose. What if we need to do a [mesh transformation](https://mathematica.stackexchange.com/q/155484/12) that Mathematica can't currently handle, but there's a Python package for it? It's exactly the kind of problem I am looking to apply Python for. I have in fact done mesh transformations using MATLAB toolboxes directly from within Mathematica, using [MATLink][1], while doing the rest of the processing in Mathematica. But I couldn't do this with ExternalEvaluate/Python in a reasonable way.
In 2017, any scientific computing system *needs* to have a Python interface to be taken seriously. [MATLAB has one][2], and it *is* practically usable for numerical/scientific problems.
----
## A Python interface
I envision a Python interface which works like this:
- The MathLink/WSTP API is exposed to Python, and serves as the basis of the system. MathLink is good at transferring large numerical arrays efficiently.
- Fundamental data types (lists, dictionaries, bignums, etc.) as well as datatypes critical for numerical computing (numpy arrays) can be transferred *efficiently* and *bidirectionally*. Numpy arrays in particular must translate to/from packed arrays in Mathematica with the lowest possible overhead.
- Python functions can be set up to be called from within Mathematica, with automatic argument translation and return type translation. E.g.,
PyFun["myfun"][ (* myfun is a function defined in Python *)
{1,2,3} (* a list *),
PyNum[{1,2,3}] (* cast to numpy array, since the interpretation of {1,2,3} is ambiguous *),
PySet[{1,2,3}] (* cast to a set *)
]
- The system should be user-extensible to add translations for new datatypes, e.g. a Python class that is needed frequently for some application.
- The primary mode of operation should be that Python is run as a slave (subprocess) of Mathematica. But there should be a second mode of operation where both Mathematica and Python are being used interactively, and they are able to send/receive structured data to/from each other on demand.
- As a bonus: Python can also call back to Mathematica, so e.g. we can use a numerical optimizer available in Python to find the minimum of a function defined in Mathematica
- An interface whose primary purpose is to call Mathematica from Python is a different topic, but can be built on the same data translation framework described above.
The development of such an interface should be driven by real use cases. Ideally, Wolfram should talk to users who use Mathematica for more than fun and games, and do scientific computing as part of their daily work, with multiple tools (not just M). Start with a number of realistic problems, and make sure the interface can help in solving them. As a non-trivial test case for the datatype-extension framework, make sure people can set up auto-translation for [SymPy objects][3], or a [Pandas dataframe][4], or a [networkx graph][5]. Run `FindMinimum` on a Python function and make sure it performs well. (In a practical scenario this could be a function implementing a physics simulation rather than a simple formula.) As a performance stress test, run `Plot3D` (which triggers a very high number of evaluations) on a Python function. Performance and usability problems will be exposed by such testing early, and then the interface can be *designed* in such a way as to make these problems at least solvable (if not immediately solved in the first version). I do not believe that they are solvable with the `ExternalEvaluate` design.
Of course, this is not the only possible design for an interface. J/Link works differently: it has handles to Java-side objects. But it also has a different goal. Based on my experience with MATLink and RLink, I believe that *for practical scientific/numerical computing*, the right approach is what I outlined above, and that the performance of data structure translation is critical.
----
## ExternalEvaluate
Don't get me wrong, I do think that the `ExternalEvaluate` framework is a very useful initiative, and it has its place. I am saying this because I looked at its source code and it appears to be easily extensible. R has zeromq and JSON capabilities, and it looks like one could set it up to work with `ExternalEvaluate` in a day or so. So does Perl, anyone want to give it a try? `ExternalEvaluate` is great because it is simple to use and works (or can be made to work) with just about any interpreted language that speaks JSON and zeromq. But it is also, in essence, a quick and dirty hack (that's extensible in a quick and dirty way), and won't be able to scale to the types of problems I mentioned above.
----
## MathLink/WSTP
Let me finally say a few words about why MathLink/WSTP are critical for Mathematica, and what should be improved about them.
I believe that any serious interface should be built on top of MathLink. Since Mathematica already has a good interface capable of inter-process communication, that is designed to work well with Mathematica, and designed to handle numerical and symbolic data efficiently, use it!!
Two things are missing:
- Better documentation and example programs, so more people will learn MathLink
- If the MathLink library (not Mathematica!) were open source, people would be able to use it to link to libraries [which are licensed under the GPL][6]. Even a separate open source implementation that only supports shared memory passing would be sufficient—no need to publish the currently used code in full. Many scientific libraries are licensed under the GPL, often without their authors even realizing that they are practically preventing them from being used from closed source systems like Mathematica (due to the need to link to the MathLink libraries). To be precise, GPL licensed code *can* be linked with Mathematica, but the result cannot be shared with anyone. I have personally requested the author of a certain library to grant an exception for linking to Mathematica, and they did not grant it. Even worse, I am not sure they understood the issue. The authors of other libraries *cannot* grant such a permission because they themselves are using yet other GPL'd libraries.
[MathLink already has a more permissive license than Mathematica.][7] Why not go all the way and publish an open source implementation?
I am hoping that Wolfram will fix these two problems, and encourage people to create MathLink-based interfaces to other systems. (However, I also hope that Wolfram will create a high-quality Python link themselves instead of relying on the community.)
I have talked about the potential of Mathematica as a glue-language at some Wolfram events in France, and I believe that the capability to interface external libraries/systems easily is critical for Mathematica's future, and so is a healthy third-party package ecosystem.
[1]: http://matlink.org/
[2]: https://www.mathworks.com/help/matlab/matlab-engine-for-python.html
[3]: http://www.sympy.org/
[4]: http://pandas.pydata.org/
[5]: https://networkx.github.io/
[6]: https://en.wikipedia.org/wiki/Copyleft
[7]: https://www.wolfram.com/legal/agreements/mathlink.htmlSzabolcs Horvát2017-09-15T12:33:04Z96_6 Configuration
https://community.wolfram.com/groups/-/m/t/1664705
Every node has six lines.
Every line has six nodes.
If you like these designs
You can try out my codes.
![96_6 configuration][1]
base={{0,-1},{0,Root[1+2 #1+2 #1^3+#1^4&,2]},{1/2,Root[-11-8 #1+24 #1^2-32 #1^3+16 #1^4&,1]},{1/2,Root[-11+8 #1+24 #1^2+32 #1^3+16 #1^4&,2]},{1/2 (2-Sqrt[3]),-(1/2)},{Root[-1+4 #1-8 #1^3+8 #1^4&,2],Root[3-24 #1+48 #1^2-24 #1^3+8 #1^4&,1]},{Root[-1+4 #1-8 #1^3+8 #1^4&,2],Root[3+24 #1+48 #1^2+24 #1^3+8 #1^4&,2]},{Root[1+4 #1+16 #1^3+16 #1^4&,2],Root[1-20 #1+48 #1^2+16 #1^3+16 #1^4&,1]}};
nodes = Flatten[Table[RootReduce[#.RotationMatrix[n Pi/6 ]], {n, 0, 11}] & /@ base,1];
lines={{1,9,60,59,26,38},{1,8,49,57,83,64},{1,6,54,50,76,69},{1,5,52,51,30,42},{1,24,37,41,84,73},{1,14,27,31,67,68},{12,8,59,58,25,37},{12,7,60,56,82,63},{12,5,49,53,75,68},{12,4,51,50,29,41},{12,13,26,30,66,67},{12,23,48,40,83,84},{11,7,58,57,36,48},{11,6,59,55,81,62},{11,4,60,52,74,67},{11,3,49,50,28,40},{11,24,25,29,66,65},{11,22,47,39,82,83},{10,6,57,56,35,47},{10,5,58,54,80,61},{10,3,59,51,73,66},{10,2,49,60,27,39},{10,23,36,28,64,65},{10,21,46,38,81,82},{9,5,56,55,34,46},{9,4,57,53,79,72},{9,2,58,50,84,65},{9,22,35,27,63,64},{9,20,45,37,80,81},{8,4,55,54,33,45},{8,3,56,52,78,71},{8,21,34,26,62,63},{8,19,44,48,80,79},{7,3,54,53,32,44},{7,2,55,51,77,70},{7,20,33,25,61,62},{7,18,43,47,78,79},{6,2,53,52,31,43},{6,19,32,36,72,61},{6,17,42,46,77,78},{5,18,35,31,71,72},{5,16,45,41,76,77},{4,17,34,30,70,71},{4,15,44,40,75,76},{3,16,33,29,69,70},{3,14,43,39,74,75},{2,13,42,38,73,74},{2,15,32,28,68,69},{49,92,48,41,82,74},{49,94,26,31,69,65},{60,93,25,30,68,64},{60,91,47,40,81,73},{59,92,36,29,67,63},{59,90,46,39,80,84},{58,91,35,28,66,62},{58,89,45,38,83,79},{57,90,34,27,61,65},{57,88,44,37,82,78},{56,89,33,26,72,64},{56,87,43,48,81,77},{55,88,32,25,71,63},{55,86,42,47,80,76},{54,87,36,31,70,62},{54,85,46,41,75,79},{53,86,35,30,69,61},{53,96,45,40,74,78},{52,85,34,29,68,72},{52,95,44,39,73,77},{51,96,33,28,67,71},{51,94,43,38,84,76},{50,93,42,37,83,75},{50,95,32,27,66,70},{13,21,92,91,27,37},{13,20,93,89,82,65},{13,18,86,94,75,70},{13,17,96,95,31,41},{24,20,91,90,26,48},{24,19,92,88,81,64},{24,17,93,85,74,69},{24,16,95,94,30,40},{23,19,90,89,25,47},{23,18,91,87,80,63},{23,16,92,96,73,68},{23,15,93,94,29,39},{22,18,89,88,36,46},{22,17,90,86,79,62},{22,15,91,95,84,67},{22,14,93,92,28,38},{21,17,88,87,35,45},{21,16,89,85,78,61},{21,14,90,94,83,66},{20,16,87,86,34,44},{20,15,88,96,77,72},{19,15,86,85,33,43},{19,14,87,95,76,71},{18,14,85,96,32,42}};
From L. W. Berman, "Geometric Constructions for Symmetric 6-Configurations," Rigidity and Symmetry, Springer, 2014, p. 83.
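The claim in the verse (every line has six nodes, every node lies on six lines) can be checked mechanically. Here is a hedged Python sketch of a generic $(n_k)$-configuration test (my own naming; the 1-indexed `lines` data above could be fed in with `n_nodes = 96`, `k = 6`), demonstrated on the small $(7_3)$ Fano plane:

```python
from collections import Counter

def is_configuration(lines, n_nodes, k):
    """True iff every line has k distinct nodes and every one of the
    n_nodes nodes (1-indexed) lies on exactly k lines."""
    if not all(len(set(line)) == k for line in lines):
        return False
    counts = Counter(node for line in lines for node in line)
    return (set(counts) == set(range(1, n_nodes + 1))
            and all(c == k for c in counts.values()))

# sanity check: the Fano plane, 7 points on 7 lines of 3
fano = [(1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 4, 6),
        (2, 5, 7), (3, 4, 7), (3, 5, 6)]
```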
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=96_6Config.jpg&userId=21530Ed Pegg2019-04-19T14:49:09Z3DPlot with axes at the center?
https://community.wolfram.com/groups/-/m/t/1664278
Hi guys, I'm trying to make a 3D plot with the axes at the center, but when I do it I can see through the plot. Does anyone know how to change it?
a = 2; (* example value; `a` was left undefined in the original snippet *)
Plot3D[{2 (x - y)/(1 + x^2 + y^2)}, {x, -a, a}, {y, -a, a},
 AxesOrigin -> {0, 0, 0}, AxesStyle -> Opacity[1],
 AxesEdge -> {None, Automatic, None}, ColorFunction -> "Rainbow",
 Boxed -> False, Background -> Black, MeshFunctions -> {#3 &},
 Mesh -> 6, MeshStyle -> {Blue}]
Thanks in advance
:DDavid Urbaez2019-04-19T01:27:59ZMathematica speedup from 2019 iMac over late 2015 iMac?
https://community.wolfram.com/groups/-/m/t/1639163
My current computer, a late 2015 iMac 27” has a 4 GHz Core i7 processor, 32 GB RAM, 3 TB Fusion drive, and Radeon R9 M395X with 4 GB VRAM, Retina display.
What kind of speedup with Mathematica, if any, might I expect from a new, 2019, iMac 27” with the following configuration?
- 3.6 GHz Core i9 processor (8-core)
- 64GB RAM
- 3 TB Fusion drive
- Radeon Pro Vega 48 with 8GB HBM2 memory
Although this new CPU has Turbo Boost up to 5 GHz, I’m concerned that the “default” base clock of only 3.6 GHz, being below my current 4 GHz, might impede performance.
Presumably the graphics rendering will be faster, right?
Or do the additional cores make up for that difference? In fact, can Mathematica take advantage of those additional cores (without my explicitly coding for parallel kernels)?Murray Eisenberg2019-03-24T16:24:32ZBoundary conditions are not satisfied with NDSolve. Why?
https://community.wolfram.com/groups/-/m/t/1664540
I want to study the time evolution of a small perturbation around the static solution of the following Wave Equation
$ -\partial_t^2 v(t,r) + \partial_r^2v(t,r) + \frac{2}{r}\partial_r v(t,r) = \frac{\partial V(v)}{\partial v}(t,r) $
for some expression of the potential $V(v)$ that is written in the code below. The coordinates $t,r$ run over $[0,+\infty]$.
By definition, the static solution $\hat{v}(r)$ is time-independent and I require the following initial/boundary conditions
$ \partial_r \hat{v}(r)|_{r=0} =0\,,\qquad \hat{v}(r \rightarrow +\infty) = 0. $
Obviously, to perform numerical computations the limit $r\rightarrow+\infty$ is replaced by $r=M$ with $M\gg \ell$, where $\ell$ is the characteristic length of the problem; it turns out to be $\ell \sim 2$ for the static solution.
I want to perturb this solution at $t=0$ and see how it evolves with time. So, now I am interested in the time-dependent solution which satisfies
$ v(t=0,r) = \hat{v}(r)\,\qquad \partial_t v(r,t)|_{t=0}=\delta \cdot 10^{-2}\,,\\
\partial_r v(t,r)|_{r=0}=0\,, \qquad v(r=M) = 0. $
where $\delta\ll 1$.
- While the numerical static solution satisfies $\hat{v}'(r=0)=0$, the time-dependent solution I got does not. I don't understand why. For a specific example with $\delta = 0.001$, I get $\partial_r v(t,r) \sim -0.000701892$ irrespective of the value of the time variable `t`. In particular, it looks like the initial condition $v(t=0,r) = \hat{v}(r)$ is not satisfied. Is this normal?
- Moreover, I get the error [![enter image description here][1]][1], why? Are my boundary conditions really inconsistent?
This is my code.
V[v_] = (-1 + (1/8 (-9 + Sqrt[145]) - v)^2)^2 + 3 (1/8 (-9 + Sqrt[145]) - v)^3;
sol[rmax_, \[Delta]_] := Last@Last@Last@NDSolve[
    {D[v[r], {r, 2}] + 2/r D[v[r], {r, 1}] - (D[V[v], v] /. v -> v[r]) == 0,
     (D[v[r], r] /. r -> SetPrecision[10^-10, 100]) == 0,
     v[SetPrecision[10^-10, 100]] == SetPrecision[\[Delta], 100]},
    v, {r, 10^-10, rmax},
    WorkingPrecision -> 50, Method -> "Extrapolation"]
iTf = sol[30, 1.506400187591933106770472351];
Plot[{iTf[r]}, {r, 0, 30}, PlotRange -> All, Frame -> True]
iTfTime = v /. ParametricNDSolve[
    {-D[v[t, r], {t, 2}] + D[v[t, r], {r, 2}] + 2/r D[v[t, r], {r, 1}] - (D[V[v], v] /. v -> v[t, r]) == 0,
     v[0, r] == iTf[r],
     ((D[v[t, r], t]) /. t -> 0) == +\[Delta] 10^-2,
     (D[v[t, r], r] /. r -> 10^-10) == 0},
    v, {t, 0, 40}, {r, 10^-10, 30}, {\[Delta]},
    WorkingPrecision -> MachinePrecision,
    Method -> {"MethodOfLines", "TemporalVariable" -> t,
      "SpatialDiscretization" -> {"TensorProductGrid", "MinPoints" -> 200}},
    PrecisionGoal -> 15]
iTfTimeToPlot0 = iTfTime[0.001];
(*Checking boundary conditions in generic points*)
((D[iTfTimeToPlot0[t, r], t] /. t -> 0) /. r -> RandomReal[]) == +0.001 10^-2
(*Output: True*)
((D[iTfTimeToPlot0[t, r], r] /. r -> 10^-10) /. t -> RandomReal[]) == 0
(*Output: False*)
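One possible explanation for the nonzero derivative is that the method-of-lines discretization imposes the Neumann condition only through a one-sided difference at $r=10^{-10}$, so it holds to truncation accuracy rather than exactly. Purely as an illustration of a scheme in which $\partial_r v|_{r=0}=0$ holds by construction, here is a minimal Python finite-difference sketch (my own construction, not a fix for the `NDSolve` code; the nonlinear potential term is deliberately omitted) that enforces the condition with a ghost point $v_{-1}=v_{1}$:

```python
import numpy as np

def leapfrog_radial_wave(v0, dr, dt, n_steps):
    """v_tt = v_rr + (2/r) v_r with v_r(0) = 0 (ghost point) and v(R) = 0.
    Zero initial velocity; the potential term of the original problem is omitted."""
    v_prev = v0.copy()
    v = v0.copy()
    r = np.arange(len(v0)) * dr
    for _ in range(n_steps):
        lap = np.zeros_like(v)
        # interior: centered second difference plus the 2/r first-derivative term
        lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dr**2 \
            + (2.0 / r[1:-1]) * (v[2:] - v[:-2]) / (2 * dr)
        # r = 0: ghost point v[-1] = v[1] makes v_r(0) = 0 exactly;
        # by L'Hopital the operator there is 3 v_rr = 6 (v[1] - v[0]) / dr^2
        lap[0] = 6.0 * (v[1] - v[0]) / dr**2
        v_next = 2 * v - v_prev + dt**2 * lap
        v_next[-1] = 0.0  # Dirichlet far boundary, standing in for r -> infinity
        v_prev, v = v, v_next
    return v

r = np.arange(0, 10.0, 0.1)
v_final = leapfrog_radial_wave(np.exp(-r**2), dr=0.1, dt=0.04, n_steps=100)
```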
[1]: https://i.stack.imgur.com/SPDnN.pngmathPhys User2019-04-19T11:01:29ZQMRITools: processing and visualization tools for quantitative MRI data
https://community.wolfram.com/groups/-/m/t/1661539
# QMRITools [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.2530801.svg)](https://doi.org/10.5281/zenodo.2530801) [![contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/dwyl/esta/issues) [![status](http://joss.theoj.org/papers/ef8bfb6c31499845d353b6a5af0d6300/status.svg)](http://joss.theoj.org/papers/ef8bfb6c31499845d353b6a5af0d6300)
***
![wolfram language][1] ![wolfram workbench][2] ![eclipse][3] ![wolfram mathematica][4]
***
![eye candy][13]
I have been working on this toolbox for years and recently started making the code available online, which took considerable extra effort: putting everything into the proper formats and generating documentation and usage demonstrations. I thought it would be nice to also show this to the Wolfram Community, so [here][5] it is.
## Contents
* [Summary](#summary)
* [Information](#information)
* [Documentation](#documentation)
* [Install toolbox](#install-toolbox)
* [Demonstrations](#demonstrations)
* [Using the toolbox](#using-the-toolbox)
* [Functionality](#functionality)
## Summary
``QMRITools`` is written in Mathematica using Wolfram Workbench and Eclipse and contains a collection of tools and functions for processing quantitative MRI data. The toolbox does not provide a GUI and its primary goal is to allow for fast and batch data processing, and facilitate development and prototyping of new functions. The core of the toolbox contains various functions for data manipulation and restructuring.
The toolbox was developed mostly in the context of quantitative muscle, nerve and cardiac magnetic resonance imaging. The library of functions grows along with the research it is used for and started as a toolbox to analyze DWI data of muscle. Although there exist many different software packages and code repositories for much of the functionality in this toolbox, I was in need of one that did it all. Furthermore, most diffusion packages are optimized for brain analysis and provide limited flexibility, and I just like writing code myself.
## Information
QMRITools is developed for [Mathematica](https://www.wolfram.com/mathematica/).
It contains the following toolboxes:
- CardiacTools
- CoilTools
- DenoiseTools
- DixonTools
- ElastixTools
- GeneralTools
- GradientTools
- ImportTools
- IVIMTools
- JcouplingTools
- MaskingTools
- NiftiTools
- PhysiologyTools
- PlottingTools
- ProcessingTools
- RelaxometryTools
- SimulationTools
- VisteTools
## Documentation
The website with more information about the toolbox can be found [here](https://mfroeling.github.io/QMRITools/). Documentation of all functions and their options is fully integrated in the Mathematica documentation. The toolbox always works within the latest version of Mathematica and does not support any backward compatibility.
After the toolbox is installed correctly it should show up as a package in the Mathematica add-ons (which was a pain and is badly documented).
![QMRITools package][6]
All code and documentation is maintained and uploaded to GitHub using [Workbench](https://www.wolfram.com/workbench/). An online HTML version of the full documentation can be found [here](https://mfroeling.github.io/QMRITools/htmldoc/guide/QMRITools.html). To fix the HTML compilation I had to write some custom scripts based on code I found [online][7] (again a pain and badly documented).
![Guides QMRITools][8]
## Install toolbox
The latest release can be found [here](https://github.com/mfroeling/QMRITools/releases/download/2.1.2/QMRITools.zip).
Install the toolbox in the Mathematica UserBaseDirectory > Applications.
FileNameJoin[{$UserBaseDirectory, "Applications"}]
Ideally I would like to distribute the toolbox as a paclet, but again this has proven to be a pain and is badly documented.
Some functions of QMRITools call on external executables and software.
These executables need to be present in "QMRITools\Applications" and are included in the release.
If for any reason you want to use other (older/newer) versions you can replace them but functionality is not guaranteed.
For the latest version of these tools and their user license please visit their website.
* [dcm2niix](https://github.com/rordenlab/dcm2niix/)
* dcm2niix.exe
* [Elastix](http://elastix.isi.uu.nl/)
* elastix.exe
* transformix.exe
All functionality is tested under Windows 10 with the latest Mathematica version.
The Mathematica code is cross platform compatible with the exception of the external tools which are compiled for each OS.
The toolbox provides compiled versions for each OS but their functionality is not guaranteed.
The Elastix version used is 4.9 with OpenCL support. Additionally, Elastix needs to be compiled with the PCA metrics, all DTI-related parameters and all affine-related parameters.
Although cross platform compatibility is provided I have only limited options for testing so if any issues arise please let me know.
## Demonstrations
The release contains a zip file [DemoAndTest.zip](https://github.com/mfroeling/QMRITools/releases/download/2.1.2/DemoAndTest.zip) in which there is a file ``demo.nb``, a folder ``DemoData`` and a folder ``Testing``.
To have a global overview of the functionality of the toolbox you can download this folder and run the ``demo.nb``.
By default the ``demo.nb`` looks for the folders ``DemoData`` and ``Testing`` in the same folder as the notebook.
![QMRITools demonstration][9]
In the first section of the demo notebook the toolbox is loaded and two tests are performed. The first test checks if all files that are needed to run the toolbox are present. The second test runs after the toolbox is loaded and checks if all the functions and their options are defined correctly.
![enter image description here][10]
## Using the toolbox
The toolbox can be loaded by using: `` <<QMRITools` ``
If you want to monitor the package loading you can use: `` QMRITools`$Verbose = True; <<QMRITools` ``
A list of all QMRITools packages is generated by
QMRIToolsPackages[]
A list of all QMRITools functions or functions per toolbox is generated by
QMRIToolsFunctions[]
QMRIToolsFunctions["toolboxname"]
To print the documentation of all functions use
QMRIToolsFuncPrint[]
QMRIToolsFuncPrint["toolboxname"]
A list of all functions and their help can be found in ``All-Functions.nb``, which is also available as a [pdf file](https://github.com/mfroeling/QMRITools/releases/download/2.0/All-Functions.pdf).
## Functionality
The toolbox contains over 250 functions and options for processing and analyzing data.
A summary of the core functionality is listed below.
![QMRITools overview][11]
* **Diffusion Analysis**
* Signal drift correction
* LLS, WLLS and iWLLS methods
* REKINDLE outlier detection
* IVIM fitting (fixed parameters, back-projection and Bayesian fitting)
* Parameter fitting using histogram analysis
* Joining and sorting of multiple series of the same volume
* Joining multiple stacks with slice overlap into one stack
* **Diffusion Gradients optimization**
* Single and multi shell
* Rotating and correcting Bmatrix
* Actual b-value estimation by gradient sequence integration
* Gradient visualization
* **Noise suppression**
* LMMSE noise suppression
* PCA noise suppression based on random matrix theory
* Anisotropic tensor smoothing using a diffusion filter
* **Importing and Exporting**
* DICOM data (classic and enhanced file formats)
* Nifti data (.nii and .img .hdr, supports .gz files)
* Compatible with ExplorDTI and Viste for fiber tractography
* **Data visualization**
* 2D 3D and 4D viewer
* Multiple images: transparent, difference, and checkerboard overlays
* Legend bars and image labels
* Saving to pdf, jpg, animated gif and movie
![Plot 3D and 4D datasets][12]
* **Masking**
* Automate and threshold masking
* Extracting parameters from masks
* Smoothing masks
* Smoothing muscle segmentation
* **Motion and distortion correction (Registration using elastix)**
* Rigid, affine, b-spline and cyclic registration
* nD to nD registration
* Automated series processing
* Slice to slice motion correction of 3D and 4D data
* **Dixon Reconstruction**
* B0 phase unwrapping
* DIXON iDEAL reconstruction with T2star
* **Relaxometry fitting**
* T2 fitting
* T1rho fitting
* Tri Exponential T2 fitting
* EPG based T2 fitting with slice profile
* **Simulation Framework**
* Diffusion tensor simulation and analysis
* Bloch and EPG simulations
* Cardiac DTI models (fiber architecture)
* **Cardiac Diffusion analysis**
* Breathing motion correction
* Corrupted slice rejection
* Local myocardial coordinate system calculation
* helix angle and fiber architecture matrix
* AHA 17 parameter description
* Transmural parameter description
## License
https://opensource.org/licenses/BSD-3-Clause
Note that restrictions imposed by these patents (and possibly others)
exist independently of and may be in conflict with the freedoms granted
in BSD-3-Clause license, which refers to copyright of the program, not patents
for any methods that it implements. Both copyright and patent law must
be obeyed to legally use and redistribute this program and it is not the
purpose of this license to induce you to infringe any patents or other
property right claims or to contest validity of any such claims. If you
redistribute or use the program, then this license merely protects you
from committing copyright infringement. It does not protect you from
committing patent infringement. So, before you do anything with this
program, make sure that you have permission to do so not merely in terms
of copyright, but also in terms of patent law.
Some code in the NiiTools packages was based on https://github.com/tomdelahaije/nifti-converter
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=wolfram_language.png&userId=1332602
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=wolfram_workbench.jpg&userId=1332602
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=eclipse.png&userId=1332602
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=wolfram_mathematica.png&userId=1332602
[5]: https://github.com/mfroeling/QMRITools
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=AddOns.PNG&userId=1332602
[7]: https://mathematica.stackexchange.com/questions/134069/workbench-eclipse-version-html-documentation-build-html-issues
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Guide.PNG&userId=1332602
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=demo.PNG&userId=1332602
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=testing.PNG&userId=1332602
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=OverView.png&userId=1332602
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=visualization.PNG&userId=1332602
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=animation-small.gif&userId=1332602Martijn Froeling2019-04-16T08:11:08ZRubiks Cubes and OOP in Mathematica
https://community.wolfram.com/groups/-/m/t/1659434
I recently got [nerd sniped](https://xkcd.com/356/) on a [question on Stack Exchange](https://mathematica.stackexchange.com/a/195065/38205)
Originally I just wanted to show off my immutable OOP framework that uses a bunch of Mathematica's built-in features to make something with a nice interface like `CloudExpression`, but then that devolved into redoing the entirety of [Roman Maeder's Rubiks Cube Demo](http://demonstrations.wolfram.com/RubiksCube/) as an object.
To make this work you need my `InterfaceObjects` package, which implements Mathematica OOP. You can single-line install that [here](https://paclets.github.io/PacletServer/interfaceobjects.html).
Then load the Rubik's cube package off GitHub [here](https://github.com/b3m2a1/mathematica-tools/blob/master/RubiksCube.wl) (you can also go there to see how this OOP package works):
Get["https://github.com/b3m2a1/mathematica-tools/raw/master/RubiksCube.wl"]
Now we can make an object:
new = RubiksCube[]
Visualize it:
new@"Show"[]
[![enter image description here][4]][4]
Make a new one of a different size and change its colors:
r1 = RubiksCube["Size" -> 4];
r1@"Colors" = ColorData["Atoms"] /@ {6, 7, 8, 9, 11, 13, 18};
r1@"Show"[Method -> {"ShrinkWrap" -> True}]
[![enter image description here][1]][1]
Or make two and plot them side by side with some twists:
r2 = RubiksCube["Origin" -> {10, 0, 0}, "Size" -> 10];
Show[
r1@"Twist"[.5, {"Y", 2}]@"Twist"[.5, {"Y", 4}]@"Show"[],
r2@"Show"[],
PlotRange -> All
]
You can also `Manipulate` the twisting, if you want:
Manipulate[
Fold[
#@"Twist"[#2[[1]], #2[[2]]] &,
new,
Thread[
{
{b, f, l, r, d, u},
{"Back", "Front", "Left", "Right", "Down", "Up"}
}
]
]@"Show"[],
{b, 0, 2 π, .01},
{f, 0, 2 π, .01},
{l, 0, 2 π, .01},
{r, 0, 2 π, .01},
{d, 0, 2 π, .01},
{u, 0, 2 π, .01},
DisplayAllSteps -> True
]
[![enter image description here][5]][5]
The OOP interface itself is actually very nice: we only have to provide and document a single symbol, and its `"Methods"` are then discoverable by name, along with its `"Properties"`:
r1@"Methods"
{"Show", "Twist"}
r1@"Properties"
{"Size", "Origin", "Colors", "Cuboids", "Version", "Properties", "Methods"}
[1]: https://i.stack.imgur.com/tPUcO.png
[2]: https://i.stack.imgur.com/CqJvT.png
[3]: https://i.stack.imgur.com/KKhN6.png
[4]: https://i.stack.imgur.com/pYwi3.png
[5]: https://i.stack.imgur.com/23f6J.png
[6]: https://i.stack.imgur.com/fbJR4.png
[7]: https://i.stack.imgur.com/bFcJo.png
[8]: https://i.stack.imgur.com/3PRrr.png
[9]: https://i.stack.imgur.com/QilkK.png
[10]: https://i.stack.imgur.com/b8lDe.pngb3m2a1 2019-04-13T00:38:14Z[GIF] Five Easy Pieces (Rotating truncation of the tetrahedron)
https://community.wolfram.com/groups/-/m/t/1660413
![Rotating truncation of the tetrahedron][1]
**Five Easy Pieces**
Practically the same idea (and code) as [_Give Me Some Space_][2], just truncating the tetrahedron rather than rectifying it.
The code for the `Manipulate` is below; when exporting to a GIF I used `"DisplayDurations" -> Prepend[Table[1/50, {199}], 1/2]` inside `Export` to get the animation to pause on the original tetrahedron for half a second.
DynamicModule[{viewpoint = {Cos[2π/3], Sin[2π/3], 1/Sqrt[2]},
g = .6, d = .2, n = 4,
v = PolyhedronData["Tetrahedron", "VertexCoordinates"],
e = {{2, 3, 4}, {1, 4, 3}, {4, 1, 2}, {3, 2, 1}},
tt = PolyhedronData["TruncatedTetrahedron", "VertexCoordinates"],
te = PolyhedronData["TruncatedTetrahedron", "Faces"][[2]],
cols = RGBColor /@ {"#e43a19", "#f2f4f7", "#111f4d"},
s, r},
Manipulate[
s = Haversine[π t];
r = Haversine[2 π t];
Graphics3D[{Thickness[.004], EdgeForm[None],
Table[
{GraphicsComplex[
(1/2 + r/2) v[[i]] + RotationTransform[2 π/3 s, v[[i]]][#] & /@ (v/4),
{cols[[1]], Polygon[e[[i]]], cols[[2]], Polygon[e[[Drop[Range[4], {i}]]]]}]},
{i, 1, Length[v]}],
GraphicsComplex[
RotationTransform[0, {0, 0, 1}][RotationTransform[π s, {-Sqrt[3], -3, Sqrt[6]}][1/4 tt]],
{cols[[1]], Polygon[te[[1, ;; 4]]], cols[[2]], Polygon[te[[1, 5 ;;]]]}]},
Boxed -> False, ImageSize -> {540, 540}, PlotRange -> 2.5,
Background -> cols[[-1]], ViewPoint -> 10 viewpoint,
ViewAngle -> π/125, ViewVertical -> {0, 0, 1},
SphericalRegion -> True,
Lighting -> {{"Ambient", GrayLevel[d]},
{"Directional", GrayLevel[g], ImageScaled[{2, 0, 2}]},
{"Directional", GrayLevel[g], ImageScaled[{-2, 2, 2}]},
{"Directional", GrayLevel[g], ImageScaled[{0, -2, 2}]}}],
{t, 0, 1}]
]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=tt8r.gif&userId=610054
[2]: https://community.wolfram.com/groups/-/m/t/962240Clayton Shonkwiler2019-04-14T21:23:52ZAnamorphosis and reflection between a Conical Mirror and a Cylinder
https://community.wolfram.com/groups/-/m/t/1660442
![intro][1]
Catoptric or mirror anamorphoses are deformed images that can only be seen undeformed with the help of a mirror.
Here, we experiment with a conical mirror surrounded by a vertical cylindrical surface.
We want to compute points of a deformed (anamorphic) image on the cylinder's inner surface such that it is perceived by the viewer as an undeformed image when looking down in the cone shaped mirror.
![enter image description here][2]
The above drawing shows the anamorphic setup: a conical mirror (radius r=1, height=h), surrounded by a cylindrical surface (radius R>r).
The viewpoint V is along the vertical axis of the cylinder (at infinity relative to the size of the cone).
A point S (xa, ya, za) on the cylinder's inner surface is reflected by the mirror at Q to the viewer's eye at V. The viewer perceives the point at I (xi, yi, 0). The lines VQ and SQ form equal angles with the normal to the cone at Q.
![enter image description here][3]
The above animation demonstrates the relation between the point I traveling along a straight line while its anamorphic map follows a curve on the inner surface of the cylinder.
We now write a function that expresses this geometric relationship:
    anaCone2Cylinder[imagePoint : {xi_, yi_}, coneHeight : h_, cylinderRadius : R_] :=
     {(R xi)/Sqrt[xi^2 + yi^2],
      (R yi)/Sqrt[xi^2 + yi^2],
      h - h Sqrt[xi^2 + yi^2] + (-R + Sqrt[xi^2 + yi^2]) Cot[2 ArcTan[1/h]]}
This function maps an image point to an anamorphic point.
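As a sanity check of the geometry, this mapping (together with the development step used later, {ArcTan[xa, ya], za}) can be transcribed into plain Python; the function names below are mine, not part of the original notebook:

```python
import math

def ana_cone_to_cylinder(xi, yi, cone_height, cylinder_radius):
    """Transcription of the Wolfram Language mapping above: image point
    (xi, yi) -> anamorphic point (xa, ya, za) on the cylinder wall.
    Undefined on the axis (xi = yi = 0), where the division blows up."""
    h, R = cone_height, cylinder_radius
    rho = math.hypot(xi, yi)  # distance of the image point from the axis
    cot = 1.0 / math.tan(2 * math.atan(1.0 / h))
    return R * xi / rho, R * yi / rho, h - h * rho + (rho - R) * cot

def develop(xa, ya, za):
    """Unroll a cylinder point to the flat development {ArcTan[xa, ya], za}."""
    return math.atan2(ya, xa), za
```

Every anamorphic point satisfies xa² + ya² = R², i.e. it lies on the cylinder wall, which makes a convenient check.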
To test our function, we use [again][4] one of the logos generated by the Wolfram Demonstration "[Character Rotation Patterns][5]" by Chris Carlson.
After converting it to a GraphicsComplex, it looks like this:
ig = ImageGraphics[sun, 2, Method -> "Exact"];
lines = Normal[ig][[1, -1]] /. FilledCurve -> Identity;
scaledLines = Map[#/948 - .5 &, lines, {6}]
Graphics[{Thick, scaledLines}]
![enter image description here][6]
We now map the point coordinates of the lines in the GraphicsComplex to their anamorphic counterparts {xa, ya, za} using the function anaCone2Cylinder.
anaLines = Map[anaCone2Cylinder[#, 1.57, 1.15] &, scaledLines, {5}];
Graphics3D[{{Opacity[.2], White,
Cylinder[{{0, 0, .3}, {0, 0, 1.2}}, 1.25]},
AbsoluteThickness[3], %}, Boxed -> False]
![enter image description here][7]
We then convert the anamorphic 3D drawing to the two-dimensional developed interior face of the cylinder as {ArcTan[xa, ya], za}. This GIF illustrates the unfolding of the cylindrical image:
![enter image description here][8]
developLineCoordinates =
Flatten[Map[{ArcTan @@ Most[#], Last[#]} &, anaLines, {5}][[-1]],
1][[All, 1]];
lstPP = Partition[#, 2, 1] & /@ developLineCoordinates;
DeleteCases[#, _?(EuclideanDistance @@ # > 1 &)] & /@ lstPP;
Graphics[{AbsoluteThickness[2], Line /@ %}, FrameTicks -> None,
Frame -> True, ImageSize -> 600]
develop = Image[%];
![enter image description here][9]
After printing the cylinder development at the right size (52 cm by 14 cm), it is glued around a cardboard cylinder (radius 8 cm). A home-made conical mirror (base radius 7 cm, height 12 cm) is put inside the cylinder at the center. The anamorphic image on the cylinder wall is reflected as the undeformed original by the conical mirror. Here is the result (the center is hidden by a coin resting on top of the cone, since anamorphic maps of points close to the cone's center end up at infinite height on the cylinder wall):
![enter image description here][10]
**Another application** of the function is to use one of the many popular curves ("...-like curves") that can be extracted using `Interpreter`:
Interpreter["PopularCurve"]["bunny-like curve"];
bugsbunnyPrimitives =
First@Cases[
First[ParametricPlot[
Entity["PopularCurve", "BunnyCurve"]["ParametricEquations"][
t], {t, 0, 30 \[Pi]}]] /. {x_?NumericQ,
y_?NumericQ} :> {x - 85, y - 50}/800, _Line, \[Infinity]];
![enter image description here][11]
The anamorphic map is created by applying anaCone2Cylinder to the point coordinates:
anaBunny =
Map[anaCone2Cylinder[#, 1.755, 1.25] &, bugsbunnyPrimitives, {2}];
Animate[Graphics3D[
Rotate[{{Opacity[.2], White,
Cylinder[{{0, 0, .25}, {0, 0, 1}}, 1.25]}, AbsoluteThickness[3],
Red, anaBunny}, \[Phi], {0, 0, 1}], Boxed -> False], {\[Phi], 0,
2 \[Pi]}]
![enter image description here][12]
This is the developed cylinder:
developRules = {x_?NumericQ, y_?NumericQ,
z_?NumericQ} :> {ArcTan[x, y], z};
developed = anaBunny /. developRules;
DeleteCases[
Partition[developed[[1]], 2, 1], _?(EuclideanDistance @@ # > 1 &)];
Graphics[{Red, AbsoluteThickness[3], Line /@ %}, FrameTicks -> None,
Frame -> True]
![enter image description here][13]
And the result, printed, glued inside a cylinder and using the same setup as in the previous example:
![enter image description here][14]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=logojointpics.png&userId=68637
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3379geometrycone.png&userId=68637
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=animationskipeedframes.gif&userId=68637
[4]: https://community.wolfram.com/groups/-/m/t/1646795?p_p_auth=1iKz6YW8
[5]: http://demonstrations.wolfram.com/CharacterRotationPatterns/
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=S-logooutlinecopy.png&userId=68637
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=9897logocylinder.png&userId=68637
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=unfolding.gif&userId=68637
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=4680logodeveloped.png&userId=68637
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=logofinalcombi.png&userId=68637
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=9385bunnyoriginal.png&userId=68637
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=cylinderanimation.gif&userId=68637
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=8734bunnydeveloped.png&userId=68637
[14]: https://community.wolfram.com//c/portal/getImageAttachment?filename=6598bunnyfinal.jpg&userId=68637Erik Mahieu2019-04-15T07:18:15ZAvoid "Unable to read from network interfaces. Requires read permissions"?
https://community.wolfram.com/groups/-/m/t/1626011
Hello Wolfram Community,
I'm trying to use the WL NetworkPacketCapture functions in a Mathematica notebook, and in this particular case I get an error telling me to "elevate permissions" on my Mac. The problem is I can't find any documentation on how to do so. Any ideas?
Unlike, say, Cloud permissions, these are low-level system permissions. Any links to documentation would be highly appreciated.
"Unable to read from network interfaces. Requires read permissions for devices /dev/bpf"
ThanksJose M.2019-03-05T17:12:57ZInclude special (reserved) symbols in variable names?
https://community.wolfram.com/groups/-/m/t/1663842
I'm trying to translate some gnarly physics equations over to Mathematica for easier exploration / manipulation / etc, and I'm running into a problem when it comes to variable names like ![enter image description here][1] .
In particular, these sorts of variable names are enormously convenient when writing equations and doing derivations by hand, but Mathematica seems to insist on treating the <parallel> symbol as an infix operator, causing it to produce errors.
Is there any way to override that behavior and tell Mathematica to treat the whole thing as a single, atomic symbol, or am I stuck with the undesirable and much less readable ![enter image description here][2] ?
The latter becomes especially cumbersome when I've got complicated equations with many expressions involving subscripts like "parallel", "perpendicular", etc. Doubly so when I've already got those expressions, using those symbols, in offline form in pen-and-paper notes.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=v_parallel.gif&userId=1663808
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=v_parallel2.gif&userId=1663808Michael W.2019-04-18T19:19:37ZSolver for unsteady flow with the use of Mathematica FEM
https://community.wolfram.com/groups/-/m/t/1433064
![fig7][331]
I started the discussion [here][1], but I also want to repeat it on this forum.
There are many commercial and open-source codes for solving problems of unsteady flow.
We are interested in the possibility of solving these problems using Mathematica FEM. Solvers for stationary incompressible isothermal flows were proposed previously:
Solving 2D Incompressible Flows using Finite Elements:
http://community.wolfram.com/groups/-/m/t/610335
FEM Solver for Navier-Stokes equations in 2D:
http://community.wolfram.com/groups/-/m/t/611304
Nonlinear FEM Solver for Navier-Stokes equations in 2D:
https://mathematica.stackexchange.com/questions/94914/nonlinear-fem-solver-for-navier-stokes-equations-in-2d/96579#96579
We give several examples of the successful application of the finite element method to unsteady problems, including nonisothermal and compressible flows. We begin with two standard tests proposed for this class of problems by
M. Schäfer and S. Turek, Benchmark computations of laminar flow around a cylinder (with support by F. Durst, E. Krause and R. Rannacher). In E. Hirschel, editor, Flow Simulation with High-Performance Computers II. DFG priority research program results 1993-1995, number 52 in Notes Numer. Fluid Mech., pp. 547-566. Vieweg, Weisbaden, 1996. https://www.uio.no/studier/emner/matnat/math/MEK4300/v14/undervisningsmateriale/schaeferturek1996.pdf
![fig8][332]
Let us consider the flow in a flat channel around a cylinder at Reynolds number 100, when self-oscillations occur, leading to vortex shedding in the aft part of the cylinder. In this problem it is necessary to calculate the drag coefficient, the lift coefficient, and the pressure difference between the frontal and aft parts of the cylinder as functions of time, as well as the maximum drag coefficient, maximum lift coefficient, Strouhal number, and the pressure difference $\Delta P(t)$ at $t = t_0 + 1/(2f)$. The frequency f is determined from the period of oscillation of the lift coefficient. The data for this test, the code, and the results are shown below.
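The post-processing step — extracting the oscillation frequency f of the lift coefficient and from it the Strouhal number St = f·D/U — can be sketched independently of the FEM solver. The plain-Python sketch below counts upward mean-crossings instead of fitting a sine as the notebook code does; function names and test values are illustrative only:

```python
import math

def oscillation_frequency(times, values):
    """Dominant frequency of an oscillating signal, estimated from the
    average spacing of upward mean-crossings (a crude stand-in for the
    FindFit sine fit used in the post)."""
    mean = sum(values) / len(values)
    v = [x - mean for x in values]
    ups = [times[i] for i in range(1, len(v)) if v[i - 1] < 0 <= v[i]]
    if len(ups) < 2:
        return None
    return (len(ups) - 1) / (ups[-1] - ups[0])  # crossings per unit time

def strouhal(frequency, diameter, mean_velocity):
    """St = f * D / U."""
    return frequency * diameter / mean_velocity
```

For this benchmark D = 0.1 is the cylinder diameter and U = 2·Um/3 = 1 the mean inflow velocity.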
H = .41; L = 2.2; {x0, y0, r0} = {1/5, 1/5, 1/20};
Ω = RegionDifference[Rectangle[{0, 0}, {L, H}], Disk[{x0, y0}, r0]];
RegionPlot[Ω, AspectRatio -> Automatic]
K = 2000; Um = 1.5; ν = 10^-3; t0 = .004;
U0[y_, t_] := 4*Um*y/H*(1 - y/H)
UX[0][x_, y_] := 0;
VY[0][x_, y_] := 0;
P0[0][x_, y_] := 0;
    Do[
     {UX[i], VY[i], P0[i]} =
       NDSolveValue[{
         {Inactive[Div][({{-μ, 0}, {0, -μ}}.Inactive[Grad][u[x, y], {x, y}]), {x, y}] +
            Derivative[1, 0][p][x, y] + (u[x, y] - UX[i - 1][x, y])/t0 +
            UX[i - 1][x, y]*D[u[x, y], x] + VY[i - 1][x, y]*D[u[x, y], y],
          Inactive[Div][({{-μ, 0}, {0, -μ}}.Inactive[Grad][v[x, y], {x, y}]), {x, y}] +
            Derivative[0, 1][p][x, y] + (v[x, y] - VY[i - 1][x, y])/t0 +
            UX[i - 1][x, y]*D[v[x, y], x] + VY[i - 1][x, y]*D[v[x, y], y],
          Derivative[1, 0][u][x, y] + Derivative[0, 1][v][x, y]} == {0, 0, 0} /. μ -> ν,
         {DirichletCondition[{u[x, y] == U0[y, i*t0], v[x, y] == 0}, x == 0.],
          DirichletCondition[{u[x, y] == 0., v[x, y] == 0.},
           0 <= x <= L && y == 0 || y == H],
          DirichletCondition[{u[x, y] == 0, v[x, y] == 0},
           (x - x0)^2 + (y - y0)^2 == r0^2],
          DirichletCondition[p[x, y] == P0[i - 1][x, y], x == L]}},
        {u, v, p}, {x, y} ∈ Ω,
        Method -> {"FiniteElement",
          "InterpolationOrder" -> {u -> 2, v -> 2, p -> 1},
          "MeshOptions" -> {"MaxCellMeasure" -> 0.001}}], {i, 1, K}];
{ContourPlot[UX[K/2][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, Contours -> 20,
PlotPoints -> 25, PlotLabel -> u, MaxRecursion -> 2],
ContourPlot[VY[K/2][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, Contours -> 20,
PlotPoints -> 25, PlotLabel -> v, MaxRecursion -> 2,
PlotRange -> All]} // Quiet
{DensityPlot[UX[K][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, PlotPoints -> 25,
PlotLabel -> u, MaxRecursion -> 2],
DensityPlot[VY[K][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, PlotPoints -> 25,
PlotLabel -> v, MaxRecursion -> 2, PlotRange -> All]} // Quiet
dPl = Interpolation[
Table[{i*t0, (P0[i][.15, .2] - P0[i][.25, .2])}, {i, 0, K, 1}]];
    cD = Table[{t0*i,
         NIntegrate[
          (-ν*(-Sin[θ]*(Sin[θ]*Derivative[0, 1][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
                 Cos[θ]*Derivative[1, 0][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]]) +
               Cos[θ]*(Sin[θ]*Derivative[0, 1][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
                 Cos[θ]*Derivative[1, 0][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]]))*Sin[θ] -
             P0[i][x0 + r Cos[θ], y0 + r Sin[θ]]*Cos[θ]) /. {r -> r0},
          {θ, 0, 2*Pi}]}, {i, 1000, 2000}]; // Quiet
    cL = Table[{t0*i,
         -NIntegrate[
           (-ν*(-Sin[θ]*(Sin[θ]*Derivative[0, 1][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
                  Cos[θ]*Derivative[1, 0][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]]) +
                Cos[θ]*(Sin[θ]*Derivative[0, 1][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
                  Cos[θ]*Derivative[1, 0][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]]))*Cos[θ] +
              P0[i][x0 + r Cos[θ], y0 + r Sin[θ]]*Sin[θ]) /. {r -> r0},
           {θ, 0, 2*Pi}]}, {i, 1000, 2000}]; // Quiet
{ListLinePlot[cD,
AxesLabel -> {"t", "\!\(\*SubscriptBox[\(c\), \(D\)]\)"}],
ListLinePlot[cL,
AxesLabel -> {"t", "\!\(\*SubscriptBox[\(c\), \(L\)]\)"}],
Plot[dPl[x], {x, 0, 8}, AxesLabel -> {"t", "ΔP"}]}
f002 = FindFit[cL, a*.5 + b*.8*Sin[k*16*t + c*1.], {a, b, k, c}, t]
Plot[Evaluate[a*.5 + b*.8*Sin[k*16*t + c*1.] /. f002], {t, 4, 8},
Epilog -> Map[Point, cL]]
k0=k/.f002;
Struhalnumber = .1*16*k0/2/Pi
cLm = MaximalBy[cL, Last]
sol = {Max[cD[[All, 2]]], Max[cL[[All, 2]]], Struhalnumber,
dPl[cLm[[1, 1]] + Pi/(16*k0)]}
Fig. 1 shows the components of the flow velocity and the required coefficients. Our solution of the problem, together with the bounds required by the test:
{3.17805, 1.03297, 0.266606, 2.60427}
lowerbound= { 3.2200, 0.9900, 0.2950, 2.4600};
upperbound = {3.2400, 1.0100, 0.3050, 2.5000};
![Fig1][2]
Note that our results differ from the allowable bounds by a few percent, but if you look at all the results in Table 4 of the cited article, the agreement is quite acceptable. The worst prediction is for the Strouhal number. Note that we use the explicit Euler method, which underestimates the Strouhal number, as follows from the data in Table 4.
The next test differs from the previous one in that the inflow velocity varies according to `U0[y_, t_] := 4*Um*y/H*(1 - y/H)*Sin[Pi*t/8]`. It is necessary to determine the time dependence of the drag and lift coefficients over a half-period of oscillation, as well as the pressure drop at the final moment of time. Fig. 2 shows the components of the flow velocity and the required coefficients. Our solution of the problem, together with the bounds required by the test:
sol = {3.0438934441256595`,
0.5073345082785012`, -0.11152933279750943`};
lowerbound = {2.9300, 0.4700, -0.1150};
upperbound = {2.9700, 0.4900, -0.1050};
![Fig2][3]
For this test, the agreement with the data in Table 5 is good. Consequently, both tests are passed almost completely.
I wrote and debugged this code using Mathematica 11.0.1. But when I ran it in Mathematica 11.3, I got strange pictures; for example, the disk is rendered as a hexagon, and the size of the region has changed.
![Fig3][4]
In addition, the numerical solution of the problem has changed; for example, for test 2D2:
{3.17805, 1.03297, 0.266606, 2.60427} v11.0.1
{3.15711, 1.11377, 0.266043, 2.54356} v11.3
The attached file contains the working code for test 2D3 describing the flow around the cylinder in a flat channel with a change in the flow velocity.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D2.png&userId=1218692
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D2.png&userId=1218692
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D3.png&userId=1218692
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Math11.3.png&userId=1218692
[331]: http://community.wolfram.com//c/portal/getImageAttachment?filename=CylinderRe100test2D2.gif&userId=1218692
[332]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2D2test.png&userId=1218692Alexander Trounev2018-08-31T11:44:04ZGet Wolfram Blog post "Version 12 Launches Today!" as notebook?
https://community.wolfram.com/groups/-/m/t/1663937
Is there some way to obtain Stephen Wolfram's post "Version 12 Launches Today!" (https://blog.wolfram.com/2019/04/16/version-12-launches-today-big-jump-for-wolfram-language-and-mathematica/) as an actual Mathematica `.nb` notebook?
I know one can click on images of code there and have them copied to the clipboard for pasting into one's own notebook, but that's far less convenient than being able to read text and evaluate code directly in a notebook.Murray Eisenberg2019-04-18T20:58:08ZPacking arbitrary shapes with WordCloud
https://community.wolfram.com/groups/-/m/t/1659824
![enter image description here][1]
We can extract information from `WordCloud` in order to translate a collection of regions so they pack nicely. First I'll create some `BoundaryMeshRegions` similar to how the glyphs were created by OP:
$letters = Table[BoundaryDiscretizeGraphics[
Text[Style[c, Italic, FontFamily -> "Times"]], _Text], {c, Alphabet[]}];
n = 30;
BlockRandom[
glyphs = RandomChoice[$letters, n];
scales = RandomReal[5, n],
RandomSeeding -> 1234
];
Plot the word cloud using random orientations:
wc = WordCloud[AssociationThread[glyphs, scales], WordSpacings -> 0,
WordOrientation -> "Random", RandomSeeding -> 1234]
<img src="https://i.stack.imgur.com/cquRW.png" width="360"/>
Notice that the objects aren't quite touching. Luckily, when we convert this scene back to a collection of regions, they will appear to be touching. I think this has to do with padding within `Inset`. Using regions from the beginning rather than just graphics makes it easier to convert the insets into explicit coordinates and avoid padding.
    insets = Cases[wc, _Inset, ∞];
insetToReg[mr_, c_, p_, s_] :=
BoundaryMeshRegion[TransformedRegion[#,
TranslationTransform[c - RegionCentroid[BoundingRegion[#]]]],
MeshCellStyle -> {1 -> Black, 2 -> RandomColor[Hue[_]]}]& @ RegionResize[mr[[1]], s]
BlockRandom[Show[insetToReg @@@ insets], RandomSeeding -> 1234]
<img src="https://i.stack.imgur.com/1k8w7.png" width="360"/>
Or if you prefer a region instead of just a visualization:
RegionUnion[insetToReg @@@ insets]
<img src="https://i.stack.imgur.com/JByJo.png" width="360"/>
We can do this for polygons too:
BlockRandom[
polys =
Table[BoundaryMeshRegion[#[[FindShortestTour[#][[2]]]],
Line[Mod[Range[16], 15, 1]]] &[RandomReal[{0, 1}, {15, 2}]], n];
scales = RandomReal[{0, 1}, n],
RandomSeeding -> 1234
];
wc = WordCloud[AssociationThread[polys, scales], WordSpacings -> 0,
WordOrientation -> "Random", RandomSeeding -> 1234];
BlockRandom[Show[insetToReg @@@ Cases[wc, _Inset, ∞]],
RandomSeeding -> 1234]
<img src="https://i.stack.imgur.com/HMvfF.png" width="360"/>
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-04-15at11.14.52AM.png&userId=11733Chip Hurst2019-04-14T08:47:18ZCalculate the magnetic dipole due to a current circular loop?
https://community.wolfram.com/groups/-/m/t/1663559
Hey guys, I'm very new to Mathematica and am not a programmer. The last programming class I took was in 2005, for C++! But I'm taking a physics course to get into grad school, and this is a project we're working on.
I can't seem to get actual values for my integrals. I understand that along the x-axis I should see an output of 0, but I get a message that doesn't mean anything to me. The code is posted below, and I've attached both the .nb file and our rubric so that you can see what we're trying to do.
The rubric says:
(a) Use the Biot-Savart law to calculate the magnetic field at any point in the yz-plane (i.e., at (0, y, z)) of the current loop shown in the figure below. The source point is (R cos φ, R sin φ, 0), and φ runs from 0 to 2π. Set up the integrals from which you could calculate Bx, By, and Bz.
(b) Evaluate the integral for Bx(y,z) and show that it is equal to zero.
(c) Use the Mathematica functions NIntegrate[] and StreamPlot[] to display the magnetic field lines By(y, z) and Bz(y, z) due to a current circular loop in the yz-plane. Take the current on the loop of radius 15 cm to be 1.5 A.
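As a side note on part (b): the Bx integrand carries a factor cos φ while its denominator depends only on sin φ, so the substitution φ → π − φ flips the sign of the integrand and the integral over a full period vanishes. This can be checked numerically with a quick sketch in plain Python (names illustrative, constant prefactor omitted):

```python
import math

def bx_unscaled(y, z, R=0.15, n=2000):
    """Bx integral over phi in [0, 2*pi), uniform-grid sum (for a periodic
    integrand this coincides with the trapezoid rule). The prefactor
    mu0*I/(4*pi) is omitted; a constant factor cannot change zero."""
    dphi = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        phi = i * dphi
        # Bx integrand from part (a): odd in cos(phi), even in sin(phi)
        total += z * R * math.cos(phi) / (R**2 + y**2 - 2*y*R*math.sin(phi) + z**2) ** 1.5
    return total * dphi
```

The symmetric grid makes terms at φ and π − φ cancel pairwise, so the result is zero to floating-point accuracy for any (y, z).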
My code:
    Clear["Global`*"]
    u0 = 1.257*10^(-6); (*Permeability of free space*)
Ic = 1.5; (*Current in amps*)
R = .15; (*Radius in meters*)
ymax = 2.5*R;
zmax = 2.5*R;
\[Mu] = 4*\[Pi]*10^(-7);
B = \[Mu]*Ic/(2*R);
Bz[0, 0];
Bx[y_,z_]:=\[Mu]*Ic/(4*\[Pi]) *NIntegrate[((z*R*Cos[\[Phi]])/(R^2+y^2-2*y*R*Sin[\[Phi]]+z^2)^(3/2)),{\[Phi],0,2\[Pi]}] (*Integral for Biot-Savart Law along the x-axis. This will equal 0. Prove it*)
By[y_, z_] := \[Mu]*Ic/(4*\[Pi]) *NIntegrate[((z*R*Sin[\[Phi]])/(R^2 + y^2 - 2*y*R*Sin[\[Phi]] + z^2)^(3/2)), {\[Phi], 0, 2 \[Pi]}] (*Integral for Biot-Savart Law along the y-axis*)
Bz[y_, z_] := \[Mu]*Ic/(4*\[Pi]) *NIntegrate[((R^2 - y*R*Sin[\[Phi]])/(R^2 + y^2 - 2*y*R*Sin[\[Phi]] + z^2)^(3/2)), {\[Phi], 0, 2 \[Pi]}] (*Integral for Biot-Savart Law along the z-axis*)
p1 = StreamPlot[{By[y, z], Bz[y, z]}, {y, -ymax, ymax}, {z, -zmax, zmax}]Ryan Schmidt2019-04-18T14:21:49ZMathematica 12.0 on Linux issue with CUDA
https://community.wolfram.com/groups/-/m/t/1661787
I installed Mathematica 12.0 on Ubuntu Linux 18.10 and activated the CUDA paclet 12.0.287.
This simple code:
Needs["CUDALink`"]
array = Table[i + j, {i, 1000}, {j, 1000}];
Timing[CUDADot[array, array]]
gives the error:
/usr/local/Wolfram/Mathematica/12.0/SystemFiles/Kernel/Binaries/Linux-x86-64/WolframKernel: symbol lookup error: /home/bert/.Mathematica/Paclets/Repository/CUDAResources-Lin64-12.0.287/LibraryResources/Linux-x86-64/libCUDALink_Double.so: undefined symbol: cublasCreate_v2
I have nVIDIA driver 418.56 supporting CUDA 10.1 (while the paclet is based on 10.0).
I start Mathematica with following script:
export NVIDIA_DRIVER_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/libnvidia-tls.so.418.56
export CUDA_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/libcuda.so
mathematica &
Am I missing something?Bert Aerts2019-04-16T20:22:53Z[WSC18] Analysing Image Dithering
https://community.wolfram.com/groups/-/m/t/1383824
![image dithering cover][1]
This summer, as part of the Wolfram High School Summer Camp program, I decided to do a project analysing image dithering, and the various ways to do it. Over the course of two weeks, I learnt and understood how the algorithms work. Since resources about this process are sparse on the internet, *in this post, I not only implement the algorithms, but additionally describe in detail what image dithering is and how it works.*
*The second part of my project was to use machine learning to classify images that originate from different image dithering algorithms.*
After failing to get high accuracy with a large number of machine learning techniques, I finally came up with one that has an **accuracy of about 90%**.
Note that this community post is an adapted version of my computational essay. *My computational essay is attached at the bottom of the post.*
# What is image dithering?
![image-dithering-FS-example][2]
*Can you tell the difference between the two images above?*
The image on the right uses **just 24 colours**.
Yet, using Floyd-Steinberg dithering, it manages to look as detailed and beautiful as the one on the left, which is using more than 16 million colours!
Let's formally define the aim of image dithering:
Given an RGB image with $256^3$ possible colours, reduce the colour space of the image so that every pixel colour belongs to a given colour palette.
![chromaticity-plot comparison][3]
*A comparison of the chromaticity plots of the above images*
# Color Palette
First, let us tackle the easier problem at hand. Say that we are given the option to choose our color palette, the only restriction being the number of colours in the palette. What would be the best way to obtain a color palette that is the most appropriate for an image?
Thankfully, the Wolfram Language makes this task very easy for us, as we can simply use the inbuilt function `DominantColors[image, n]`. For example, regarding the image above:
![dominant-colors of the image][4]
would be the most appropriate color palette with 12 colours.
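The exact method behind `DominantColors` is internal to the Wolfram Language, but conceptually it clusters the pixel colours and returns the cluster centres. A toy k-means sketch in Python conveys the idea (this is an assumption about the approach, not the actual implementation):

```python
def dominant_colors(pixels, k, iters=10):
    """Toy k-means in RGB space -- a conceptual stand-in for DominantColors.
    `pixels` is a list of (r, g, b) tuples; initial centres are the first
    k distinct pixel values, so the result is deterministic."""
    centers = []
    for p in pixels:  # deterministic initialisation
        if p not in centers:
            centers.append(p)
        if len(centers) == k:
            break
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in pixels:  # assign each pixel to its nearest centre
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        # move each centre to its cluster's mean (keep it if the cluster is empty)
        centers = [tuple(sum(ch) / len(cl) for ch in zip(*cl)) if cl else c
                   for c, cl in zip(centers, clusters)]
    return centers
```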
Here are some visualisations of the process of choosing the color palette in 3D RGB space.
The color palette of the original image:
![original color palette][5]
The color palette of the dithered image:
![final color palette][6]
*Notice how the final color palette is quite close to points on the diagonal of the cube. I go into a lot more detail about this in my computational essay.*
**Now, let us try to solve the main part of the problem, actually figuring out the mapping of a pixel colour to its new colour.**
# Naive idea
```
colorPallete = {0, 32, 64, 96, 128, 159, 191, 223};
pixel = {{34, 100, 222},
{200, 50, 150}};
result = pixel;
Do [
result[[x, y]] = First[Nearest[colorPallete, pixel[[x, y]]]];,
{x, Length[pixel]},
{y, Length[First[pixel]]}
];
Grid[result, Frame -> All, Spacings -> {1, 1}]
```
![grid naive idea][7]
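For illustration, the same naive rounding, applied channel-wise, can be sketched outside the Wolfram Language; `nearest` below plays the role of `Nearest`, and the palette and pixel values are copied from the snippet above:

```python
def nearest(palette, value):
    """Closest palette entry to a single channel value (the role played
    by Nearest in the Wolfram code above)."""
    return min(palette, key=lambda p: abs(p - value))

color_palette = [0, 32, 64, 96, 128, 159, 191, 223]
pixels = [[(34, 100, 222)],
          [(200, 50, 150)]]
result = [[tuple(nearest(color_palette, c) for c in px) for px in row]
          for row in pixels]
# result -> [[(32, 96, 223)], [(191, 64, 159)]]
```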
## Extra!
I had to implement my own (slower) version of the Wolfram function `Nearest` for the final functions, since pre-compiling Wolfram code to C does not support `Nearest` natively.
However, at the time of writing, I have heard that there is an internal project at Wolfram to enable compiling all built-in Wolfram functions to C natively.
# Better idea
As one can guess, this idea can be improved a lot. One of the important ideas is that of "smoothing": we want the transition between two objects/colours to look smoother. One way to do that would be to make a gradient as the transition occurs.
However, how do you formalise a "gradient"? And how do you make a smooth one when all you have are 24 colours?
Dithering basically attempts to solve these problems.
To answer these questions, let's think about a single pixel: what's the best way to make it look as close as possible to its original colour?
In the naive idea, we created some error by rounding to some nearby values.
For each pixel, let's formalise the error as:
![error in pixel[1, 1] screenshot][8]
## Error diffusion
It is clear that the error should somehow be transmitted to the neighbouring elements, so that the error at a pixel can be accounted for in its neighbours. To maintain a consistent order of processing, let us assume that we traverse the 2D array of pixels from the top-left corner, row-wise, until we reach the bottom-right corner.
Therefore, it never makes sense to "push" the effects of an error to a cell we've already processed. Finally, let us see some ways to actually diffuse the error across the image.
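In outline, every error-diffusion algorithm below is the same loop; only the weight matrix and the anchor position change. Here is a minimal grayscale sketch of that loop (`diffuseErrors` is my own name; the optimised, compiled version appears later in the "Implementation" section):

```
diffuseErrors[data_, weights_, pos_, palette_] :=
 Module[{a = N[data], res = data, c, err, nx, ny},
  Do[
   c = First[Nearest[palette, a[[x, y]]]]; (* round to the nearest palette value *)
   err = a[[x, y]] - c;                    (* quantisation error at this pixel *)
   res[[x, y]] = c;
   Do[                                     (* push the error onto unprocessed neighbours *)
    {nx, ny} = {x, y} + {i, j} - pos;
    If[1 <= nx <= Length[a] && 1 <= ny <= Length[First[a]],
     a[[nx, ny]] += err*weights[[i, j]]],
    {i, Length[weights]}, {j, Length[First[weights]]}],
   {x, Length[a]}, {y, Length[First[a]]}];
  res]
```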
# Floyd-Steinberg Dithering
In 1976, Robert Floyd and Louis Steinberg published the most popular dithering algorithm<sup>1</sup>. The pattern for error diffusion can be described as:
```
diffusionFormula = {{0, 0, 7},
{3, 5, 1}} / 16;
diffusionPosition = {1, 2};
```
## What does this mean?
`diffusionFormula` is simply a way to encode the diffusion from a pixel.
`diffusionPosition` refers to the position of the pixel, relative to the `diffusionFormula` encoding.
So, for example, an error of `2` at pixel `{1, 1}` translates to the following additions:
```
pixel[[1, 2]] += error*(7/16);
pixel[[2, 1]] += error*(3/16);
pixel[[2, 2]] += error*(5/16);
pixel[[2, 3]] += error*(1/16);
```
![grid floyd steinberg error diffusion][9]
![floyd Steinberg dithering][10]
## How does one come up with these weird constants?
Notice how the numerator constants are the first 4 odd numbers.
The pattern is chosen specifically to create an even checkerboard pattern for perfect grey images using black and white.
![grayscale floyd steinberg dithering example][11]
*Example Grayscale image dithered with Floyd Steinberg*
![grayscale picture in picture thing][12]
*Note the checkerboard pattern in the image above.*
# Atkinson Dithering
Relative to the other dithering algorithms here, Atkinson's algorithm diffuses a lot less of the error to its surroundings (only 6/8 of it). It tends to preserve detail well, but large sections of continuous colour can appear blown out.
It was devised by Bill Atkinson<sup>2</sup>, an Apple employee. The pattern for error diffusion is as below:
```
diffusionFormula = {{0, 0, 1, 1},
{1, 1, 1, 0},
{0, 1, 0, 0}} / 8;
diffusionPosition = {1, 2};
```
![atkinson dithering example][13]
# Jarvis, Judice, and Ninke Dithering
This algorithm<sup>3</sup> spreads the error over more rows and columns; therefore, images should (in theory) come out softer.
The pattern for error diffusion is as below:
```
diffusionFormula = { {0, 0, 0, 7, 5},
{3, 5, 7, 5, 3},
{1, 3, 5, 3, 1}} / 48;
diffusionPosition = {1, 3};
```
![JJN algorithm example][14]
#### The final 2 dithering algorithms come from Frankie Sierra, who published the Sierra and Sierra Lite matrices<sup>4</sup> in 1989 and 1990 respectively.
# Sierra Dithering
Sierra dithering is based on Jarvis dithering, so it has similar results, but it is marginally faster.
```
diffusionFormula = { {0, 0, 0, 5, 3},
{2, 4, 5, 4, 2},
{0, 2, 3, 2, 0}} / 32 ;
diffusionPosition = {1, 3};
```
![sierra dithering example][15]
# Sierra Lite Dithering
This yields results similar to Floyd-Steinberg dithering, but is faster.
```
diffusionFormula = {{0, 0, 2},
{0, 1, 1}} / 4;
diffusionPosition = {1, 2};
```
![Sierra Lite dithering][16]
# Comparison
Here's an interactive comparison of the algorithms on different images:
```
Manipulate[
 Dither[im, c, StringDelete[algo, {" ", ",", "-"}]],
 {{im, image, "Image"}, {im1, im2, im3}},
 {{c, 12, "Number of colours"}, 2, 1024, 1},
 {{algo, "Floyd-Steinberg", "Algorithm"}, {"Floyd-Steinberg",
   "Jarvis, Judice, Ninke", "Atkinson", "Sierra", "Sierra Lite"}}]
```
Download my computational essay to see it in action. Alternatively, use the functions in the "Implementation" section to evaluate the code. `im1`, `im2`, and `im3` can be replaced by your own images. I have also submitted the comparison to [Wolfram Demonstrations][17], so it should be available online soon.
# Side-note
Notice how the denominator in the `diffusionFormula` of a number of algorithms is a power of $2$?
This is because division by a power of 2 is equivalent to [bit-shifting][18] the number to the right by $\log_2(\mathrm{divisor})$ bits, making it much faster than division by any other number.
Given the improvements in computer hardware, this is not a major concern anymore.
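The equivalence is easy to check in the Wolfram Language: for a non-negative integer, dividing by $2^4 = 16$ is the same as shifting right by 4 bits:

```
n = 713;
BitShiftRight[n, 4] == Quotient[n, 16]  (* True: both give 44 *)
```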
## Come up with your own dithering algorithm!
I noticed that the dithering algorithms are almost the same as each other (especially the `diffusionPosition`).
However, I have written my functions so that you can simply tweak the input arguments `diffusionFormula` and `diffusionPosition` and test out your own algorithms!
Here's one I tried:
![my own dithering algorithm][19]
# Implementation
In this section, I will discuss the Wolfram implementation, and some of the features and functions I used in my code.
## applyDithering
Even though it's probably the least interesting part conceptually, actually applying the dithering is the most important part of the algorithm, and the one that gave me the most trouble.
I started off by writing it in the functional paradigm. With little knowledge of Wolfram, I stumbled through the docs to assemble pieces of the code. Finally, I had a "working" version of the algorithm, but there was a major problem: a $512\cdot512$ RGB image took 700 seconds to process!
That is far too slow for an algorithm whose time complexity is linear in the size of the input.
### Fixes
Some of the trivial fixes involved making more use of built-in functions (for example, `Nearest`).
The larger problem is that Wolfram notebook code is interpreted, not compiled: it is re-interpreted every time it is run.
So the obvious step to optimising performance was using the `Compile` function in Wolfram.
But, there's a catch!
```
T (R1) 18 = MainEvaluate[ Hold[List][ I27, I28, T (R1) 16, R6]]
95 Element[ T (R2) 17, I20] = T (R1) 18
```
If you see something like the above in your compiled function's code, it is likely to be slow.
`MainEvaluate` essentially means that the compiled function is calling back into the kernel, which is itself a slow process.
To view the human readable form of your compiled Wolfram function, you can use:
```
Needs["CompiledFunctionTools`"];
CompilePrint[yourFunction]
```
To fix this, you essentially need to write everything in a procedural form, using loops and similar constructs.
The final step was `RuntimeOptions -> "Speed"`, which trades off some integer overflow checks etc. for a faster runtime.
Find the complete code for the function below:
```
applyDithering =
Compile[{{data, _Real, 3}, {diffusionList, _Real,
2}, {diffusionPos, _Integer, 1}, {colors, _Real, 2}},
Module[{lenx, leny, lenz, lenxdiff, lenydiff, error, val,
realLoc, closestColor, closestColordiff, res, a, diff = data,
diffusionFormula = diffusionList, xx, yy, curxx, curyy,
colorAvailable = colors, tmp, diffusionPosition = diffusionPos,
idx},
{lenx, leny, lenz} = Dimensions[data];
{lenxdiff, lenydiff} = Dimensions[diffusionFormula];
a = data;
res = data;
Do[
val = a[[x, y]];
realLoc = {x - diffusionPos[[1]] + 1,
y - diffusionPos[[2]] + 1};
closestColor = {1000000000., 1000000000., 1000000000.};
closestColordiff = 1000000000.;
Do[
tmp = N[Total[Table[(i[[idx]] - val[[idx]])^2, {idx, 3}]]];
If[tmp < closestColordiff,
closestColordiff = tmp;
closestColor = i;
];,
{i, colorAvailable}
];
error = val - closestColor;
res[[x, y]] = closestColor;
Do[
curxx = realLoc[[1]] + xx - 1;
curyy = realLoc[[2]] + yy - 1;
If[curxx > 0 && curxx <= lenx && curyy > 0 && curyy <= leny,
a[[curxx, curyy, z]] += error[[z]]*diffusionFormula[[xx, yy]]];,
{xx, lenxdiff},
{yy, lenydiff},
{z, 3}
];,
{x, lenx},
{y, leny}
];
Round[res]
],
CompilationTarget -> "C",
RuntimeOptions -> "Speed"
];
```
## Dither
This is the main function that uses `applyDithering`. There are multiple definitions of the function: one with the hardcoded values, and another that allows one to easily implement their own dithering algorithm.
```
(* This is the implementation that takes the algorithm name and applies it *)
Dither[img_Image, colorCount_Integer, algorithm : ("FloydSteinberg" | "JarvisJudiceNinke" |
     "Atkinson" | "Sierra" | "SierraLite")] :=
Module[{diffusionFormulaFS, diffusionPositionFS, diffusionFormulaJJN, diffusionPositionJJN, diffusionFormulaA,
   diffusionPositionA, diffusionFormulaS, diffusionPositionS, diffusionFormulaSL, diffusionPositionSL, colorAvailable},
(* Floyd Steinberg algorithm constants *)
diffusionFormulaFS = {{0, 0, 7},
{3, 5, 1}} / 16;
diffusionPositionFS = {1, 2};
(* Jarvis, Judice, and Ninke algorithm constants *)
diffusionFormulaJJN = {{0, 0, 0, 7, 5},
{3, 5, 7, 5, 3},
{1, 3, 5, 3, 1}} / 48;
diffusionPositionJJN = {1, 3};
(* Atkinson algorithm constants *)
diffusionFormulaA = {{0, 0, 1, 1},
{1, 1, 1, 0},
{0, 1, 0, 0}} / 8 ;
diffusionPositionA = {1, 2};
(* Sierra algorithm constants *)
diffusionFormulaS = {{0, 0, 0, 5, 3},
{2, 4, 5, 4, 2},
{0, 2, 3, 2, 0}} / 32 ;
diffusionPositionS = {1, 3};
(* Sierra Lite algorithm constants*)
diffusionFormulaSL = {{0, 0, 2},
{0, 1, 1}} / 4;
diffusionPositionSL = {1, 2};
colorAvailable =
Round[List @@@ ColorConvert[DominantColors[img, colorCount], "RGB"] * 255];
Switch[algorithm,
"FloydSteinberg",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaFS, diffusionPositionFS, colorAvailable],
"Byte"],
"JarvisJudiceNinke",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaJJN, diffusionPositionJJN, colorAvailable],
"Byte"],
"Atkinson",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaA, diffusionPositionA, colorAvailable],
"Byte"],
"Sierra",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaS, diffusionPositionS, colorAvailable],
"Byte"],
"SierraLite",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaSL, diffusionPositionSL, colorAvailable],
"Byte"]
]
];
(* This is the function that makes it easy to make your own dithering
algorithm *)
Dither[img_Image, colorCount_Integer, diffusionFormula_List, diffusionPosition_List] := Module[{},
colorAvailable = Round[List @@@ ColorConvert[DominantColors[img, colorCount], "RGB"] * 255];
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormula, diffusionPosition, colorAvailable],
"Byte"]
];
```
# **Classifying images**
The second part of my project involves classifying dithered images and mapping them to the algorithm they were obtained from.
This sounds like a relatively easy task for machine learning, but it turned out to be much harder. Moreover, I could not find any similar research on image "metadata", which made the task more rewarding.
I ended up creating a model which has an **accuracy of more than 90%**, which is reasonably good for machine learning.
If you are uninterested in the failures encountered and the details of the dataset used, please skip ahead to the section on "ResNet-50 with preprocessing".
# Dataset
To obtain data, I did a web search for images, with the keyword chosen randomly from a dictionary of common words.
The images obtained are then run through the five algorithms I implemented and stored as the training data. This ensures that the images aren't distinguished much in terms of their actual contents, since that would interfere with learning about the dithering algorithms used on them.
The images were allowed to use up to 24 colours, auto-selected as described in the section on "Color Palette".
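For an image `img`, the 24-colour palette itself comes from `DominantColors`, in the same way as inside the `Dither` function:

```
colorAvailable = Round[List @@@ ColorConvert[DominantColors[img, 24], "RGB"]*255]
```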
Here is the code for downloading, applying the dithering, and re-storing the images. Note that it is not designed with re-usability in mind; these are just snippets coded at the speed of thought:
```
(* This function scrapes random images from the internet and stores \
them to my computer *)
getImages[out_Integer, folderTo_String] := Module[{},
res = {};
Do [
Echo[x];
l = RemoveAlphaChannel /@
Map[ImageResize[#, {512, 512}] &,
Select[WebImageSearch[RandomChoice[WordList["CommonWords"]],
"Images"],
Min[ImageDimensions[#]] >= 512 &]];
AppendTo[res, Take[RandomSample[l], Min[Length[l], 2]]];
Pause[10];,
{x, out}
];
MapIndexed[Export[
StringJoin[folderTo, ToString[97 + #2[[1]]], "-",
ToString[#2[[2]]], ".png"], #1] &, res, {2}]
]
(* This function applies the dithering and stores the image *)
applyAndStore[folderFrom_String, folderTo_String] := Module[{},
images = FileNames["*.png", folderFrom];
origImages = Map[{RemoveAlphaChannel[Import[#]], #} &, images];
Map[Export[StringJoin[folderTo, StringTake[#[[2]], {66, -1}]],
Dither[#[[1]], 24, "Sierra"]] &, origImages]
];
```
Here are some more variable definitions and metadata about the dataset that is referenced in the following sections.
![dataset metadata][20]
# Plain ResNet-50
My first attempt was to use a popular neural net, "ResNet-50 Trained on ImageNet Competition Data"<sup>5</sup>, and retrain it on my training data.
One of the major reasons for choosing this architecture was that it identifies the main object in an image very accurately, and is quite deep. Both these properties seemed very suitable for my use case.
However, the results turned out to be very poor. When I noticed this during the training session, I stopped the process early on. It can be speculated that the poor results were because it was trying to infer relations between the colours in the image.
# Border classification
Since the borders of an image are least affected by the dithering algorithm, and are simply rounded to the closest colours, it should be easier to learn the constants of the `diffusionFormula` from them.
Therefore, we can pre-process an image and only use its border pixels for classification.
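A minimal sketch of that preprocessing (`borderPixels` is my own name, not necessarily the exact code used for the nets below): it keeps only the outermost rows and columns of the image data, which for a $512 \cdot 512$ image leaves $512 \cdot 4$ RGB triples:

```
borderPixels[img_Image] :=
 With[{d = ImageData[img, "Byte"]},
  Join[First[d], Last[d], d[[All, 1]], d[[All, -1]]]] (* top, bottom, left, right *)
```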
## borderNet
Observing the aforementioned fact, I implemented a neural net which tried to work with just the borders of the image. This decreased the size of the data to $512 \cdot 4$ per image.
## borderNet with just left and top border
Since my implementation of the dithering algorithms starts by applying the algorithm from the top-left corner, the pattern in the left and top borders should be even easier for the net to learn. However, this decreased the size of the data even more to $512 \cdot 2$ per image.
Both nets failed to work well, with **accuracies of around 20%**, barely better than chance for five classes. This was probably because there was too little data per image for the nets to train well.
Wolfram code for the net follows:
```
borderNet = NetChain[{
LongShortTermMemoryLayer[100],
SequenceLastLayer[],
DropoutLayer[0.3],
LinearLayer[Length@classes],
SoftmaxLayer[]
},
"Input" -> {"Varying", 3},
"Output" -> NetDecoder[{"Class", classes}]
]
```
# Row and column specific classification
The aim with this approach was to first make the neural net infer patterns in the columns of the image, then combine that information and observe patterns in the rows of the image.
This didn't work very well either. The major reason for the failure was probably that the diffusion is not really as independent as the net might assume it to be.
# Row and column combined classification
Building on the results of the previous approaches, the right method seemed to be to use two separate chains and a `CatenateLayer` to combine their results.
For understanding the architecture, observe the `NetGraph` object below:
![netgraph-lstmNet branchy thingy][21]
The Wolfram language code for the net is as follows:
```
lstmNet = NetGraph[{
TransposeLayer[1 <-> 2],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 3}], "Input" -> {512, 512, 3}],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 50}], "Input" -> {512, 512, 50}],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 3}], "Input" -> {512, 512, 3}],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 50}], "Input" -> {512, 512, 50}],
TransposeLayer[1 <-> 2],
SequenceLastLayer[],
SequenceLastLayer[],
LongShortTermMemoryLayer[25],
LongShortTermMemoryLayer[25],
SequenceLastLayer[],
SequenceLastLayer[],
CatenateLayer[],
DropoutLayer[0.3],
LinearLayer[Length@classes],
SoftmaxLayer[]
}, {
NetPort["Input"] -> 2 -> 3 -> 7 -> 9 -> 11,
NetPort["Input"] -> 1 -> 4 -> 5 -> 6 -> 8 -> 10 -> 12,
{11, 12} -> 13 -> 14 -> 15 -> 16
},
"Input" -> {512, 512, 3},
"Output" -> NetDecoder[{"Class", classes}]
];
```
However, this net didn't work very well either.
The net had a somewhat unconventional architecture, and the excessive parameter count crashed the Wolfram kernel, so they had to be cut down.
Ultimately, it only managed to get an **accuracy rate of around 25-30%**.
# ResNet-50 with preprocessing
The final idea was to use pre-processing to our advantage. Dithering, in its essence, shifts the error downward and towards the right. Therefore, one way to filter the image is to pad it with one row of pixels at the top and one column at the left, and subtract the padded image from the original one.
Here's an example of what that looks like:
![FS image preprocessing for net][22]
The code for doing this to an image is as simple as:
```
img - ImageTake[ImagePad[img, 1], {1, 512}, {1, 512}]
```
*Notice how the image (right side, after processing) resembles the parts with the "checkerboard" pattern described in the section "How does one come up with these weird constants?" under "Floyd-Steinberg Dithering".*
The main reason this net works well is that, even with the same colour palette, the gradient of images produced by different dithering algorithms is quite different. This is because of the differences in error diffusion; by subtracting the padded image from the original image, we obtain a filtered version of the dithering patterns, making it easy for the neural net to spot them.
The net was trained on AWS for more than 7 hours, on a larger dataset of 1500 images.
The results outperformed my expectations, and on a test-set of more than 700 images, 300 of which were part of the original training data, it showed an **accuracy rate of nearly 91%**.
![classifier measurement object][23]
Here is a code of the net with details:
```
baseModel =
NetTake[NetModel["ResNet-50 Trained on ImageNet Competition Data",
"UninitializedEvaluationNet"], 23]
net = NetChain[{
NetReplacePart[baseModel,
"Input" -> NetEncoder[{"Image", {512, 512}}]],
LinearLayer[Length@classes],
SoftmaxLayer[]},
"Input" -> NetEncoder[{"Image", {512, 512}, ColorSpace -> "RGB"}],
"Output" -> NetDecoder[{"Class", classes}]
]
```
So, it's just **ResNet-50** modified to work with $512 \cdot 512$ images.
# Future Work
- Look into using machine learning for un-dithering an image.
- Look into creating new dithering algorithms that perform faster or better than the existing ones.
# Notes
All images and visualisations in this post were generated in Wolfram. Their code may be seen in the computational essay attached below.
I would like to thank all the mentors, especially Greg "Chip" Hurst, Michael Kaminsky, Christian Pasquel and Matteo Salvarezza, for their help throughout the project.
Further, I would like to thank Pyokyeong Son and Colin Weller for their help during the project, and refining the essay.
The original, high resolution copies of the images are credited to [Robert Lukeman][24], [Teddy Kelley][25], and [Sebastian Unrau][26] on [Unsplash][27].
# References
[1] : R.W. Floyd, L. Steinberg, An adaptive algorithm for spatial grey scale. Proceedings of the Society of Information Display 17, 75-77 (1976).
[2] : Bill Atkinson, private correspondence with John Balestrieri, January 2003 (unpublished)
[3] : J. F. Jarvis, C. N. Judice and W. H. Ninke, A Survey of Techniques for the Display of Continuous Tone Pictures on Bi-level Displays. Computer Graphics and Image Processing, 5 13-40, 1976
[4] : Frankie Sierra, in LIB 17 (Developer's Den), CIS Graphics Support Forum (unpublished)
[5] : K. He, X. Zhang, S. Ren, J. Sun, "Deep Residual Learning for Image Recognition," arXiv:1512.03385 (2015)
# [Link to my computational essay][28]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=image-dithering-cover.gif&userId=1371661
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at3.50.44PM.png&userId=1371661
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.11.46PM.png&userId=1371661
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.15.27PM.png&userId=1371661
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=color-palette-og.gif&userId=1371661
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=color-palette-final.gif&userId=1371661
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.19.57PM.png&userId=1371661
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.23.38PM.png&userId=1371661
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.28.38PM.png&userId=1371661
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.29.39PM.png&userId=1371661
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.31.40PM.png&userId=1371661
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=image-1.png&userId=1371661
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.38.01PM.png&userId=1371661
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.39.37PM.png&userId=1371661
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.41.39PM.png&userId=1371661
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.43.33PM.png&userId=1371661
[17]: http://demonstrations.wolfram.com
[18]: https://stackoverflow.com/a/141873
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at5.03.37PM.png&userId=1371661
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.11.09AM.png&userId=1371661
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.21.55AM.png&userId=1371661
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.24.56AM.png&userId=1371661
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.28.45AM.png&userId=1371661
[24]: https://unsplash.com/photos/zNN6ubHmruI?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[25]: https://unsplash.com/photos/_4Ib-a8g9aA?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[26]: https://unsplash.com/photos/CoD2Q92UaEg?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[27]: https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[28]: https://www.dropbox.com/s/kmtzq6x4xkdn9y8/computational-essay.cdf?dl=0

Nalin Bhardwaj, 2018-07-14T15:17:17Z

[✓] Find area of polygons in a geometric scene?
https://community.wolfram.com/groups/-/m/t/1663665
Is it possible to find a polygon area in geometric scenes?
For example, I want to find the area of a triangle kln:
scene = GeometricScene[{k, l, n, m}, {EuclideanDistance[k, n] == 3,
PlanarAngle[{l, m, n}] == 120 \[Degree],
Triangle[{k, l, n}],
GeometricAssertion[{CircleThrough[{k, l, n}], Line[{m, n}]},
"PairwiseTangent"],
GeometricAssertion[{CircleThrough[{k, l, n}], Line[{l, m}]},
"PairwiseTangent"], Polygon[{k, l, m, n}],
GeometricAssertion[{Line[{k, n}], Line[{l, m}]}, "Parallel"]}];
RandomInstance[scene, RandomSeeding -> 1]

Nikita Tokarev, 2019-04-18T18:08:02Z

Get ImageRestyle neural network runtime in Mathematica 12?
https://community.wolfram.com/groups/-/m/t/1663030
Hi,
Playing with Mathematica 12 and ImageRestyle, the system generates an error message:
"The neural network is unavailable." With MM 11.3 it has always worked.
The stack trace shows:
ImageRestyle::nnlibunavail
Running Windows 10.
As a comment: before ImageRestyle started, it first downloaded the GPU libraries. That was 900 MB!
After the download Mathematica showed a message that a restart was needed. After the restart the message above was generated.

l van Veen, 2019-04-17T19:49:31Z

Perform queries remotely to reduce data size?
https://community.wolfram.com/groups/-/m/t/1351872
Hello,
I'd like to produce a list of the most (un)common words in French. There is WordList and, for a list of words, I can get the frequency of each word via WordFrequencyData. But it is highly inefficient to fetch the total list of words to my computer and then get their frequency, only to sort them by frequency and throw away almost all the data except the top/bottom ten items.
Indeed, a naive approach times out every time:
words = WordList["CommonWords", Language -> "French"]
wordFreq = Take[WordFrequencyData[words, Language -> "French"], 10]
... WordFrequencyData::timeout: A network operation for WordFrequencyData timed out. Please try again later.
Dividing the queries into smaller partitions,
wordsPartitions=Partition[words,100];
Join[Map[WordFrequencyData[#,Language->"French"]&,wordsPartitions]]
takes many hours, with many failures along the way, so retry logic also needs to be added. **Hence, back to the title, is there a better way?** If this were a remote SQL database, I would be able to select words, order by frequency and set a limit of 10. Not much data would have to travel over the API. (Btw, it's not even much data in the first place; it's incredibly slow for a commercial product.)

Daniel Janzon, 2018-06-05T20:07:36Z

Create a 2 dimensional random walk with a do loop?
https://community.wolfram.com/groups/-/m/t/1663702
I am trying to create a 2-dimensional random walk that has an equal chance to go up, down, right, or left, using a `Do` loop and a `Which` statement, but I am not sure I have the correct code.
M = Table[0, {1000}, {2}];
Do[x = RandomReal[];
Which[0 <= x <= 0.25, y = 1,
0.25 <= x <= 0.50, y = -1,
0.50 <= x <= 0.75, y = 1,
0.75 <= x <= 1.0, y = -1];
M[[i + 1, j]] = y + M[[i, j]],
{i, -999, 999}, {j, -1, 2}];
Grid[M]

Jennifer Shaw, 2019-04-18T14:42:59Z

Plot the change of variable for a Gaussian distribution?
https://community.wolfram.com/groups/-/m/t/1663517
I am going through the [Pattern Recognition and Machine Learning][1] book by Bishop and have gotten stuck on a few things in Mathematica. In the [solution][2] for exercise 1.4 (page 7/101), there is a graph showing a probability distribution transformed by a function.
Here are the questions that I have regarding this:
* How do I plot the green and the magenta curves on the y axis?
* Why are the green and the magenta curves near each other in size in the book, but not in my own plots?
* Why can't I integrate the transformed curves so as to get the CDF?
* How do I get the mode of the transformed curves?
Here is the code for what I've done so far. I have been trying out various things, to no avail.
```
dist = PDF[NormalDistribution[6, 1]]
g[y_] := Log[y] - Log[1.0 - y] + 5.0
Plot[{dist[x], dist[g[x]]*Abs[g'[x]], dist[g[x]]}, {x, 0, 10},
PlotRange -> Full, PlotLegends -> "Expressions"]
```
[1]: https://www.microsoft.com/en-us/research/people/cmbishop/#!prml-book
[2]: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/05/prml-web-sol-2009-09-08.pdf

Marko Grdinić, 2019-04-18T12:55:38Z

Comparing the classic Compile with new FunctionCompile
https://community.wolfram.com/groups/-/m/t/1663261
Am I wrong to expect **FunctionCompile** to be faster than **Compile**? What can be done to increase the speed of this function?
With the arrival of version 12.0, I dug up an old function I have been trying to speed up for years.
The function transforms the Real vectors {xi, zi} and {yv, zv} to a new Real vector {x, y}.
fun[{xi_, zi_}, {yv_, zv_}] :=
Module[{t},
t = Sqrt[-(-1 +
yv^2)^3 (1 + (-1 + xi^2) yv^2)]; {(xi (-2 yv^5 t zi +
2 yv t (zi - 2 zv) + zv + 2 yv^3 t (xi^2 (zi - 2 zv) + 2 zv) +
yv^2 (-4 zi + 2 xi^2 zv) +
yv^8 (-4 (-1 + xi^2) zi + (-3 + 2 xi^2) zv) +
yv^4 (-4 (-3 + xi^2) zi + (-6 - 2 xi^2 + xi^4) zv) +
2 yv^6 ((-6 + 4 xi^2) zi - (-4 + xi^2 + xi^4) zv))),
((-1 + 2 xi^2) yv^12 zi - 2 yv t (zi - zv) - zv -
2 yv^3 t ((-3 + xi^2) zi + 3 zv) -
2 yv^7 t ((-1 + 2 xi^2) zi + zv - xi^2 zv) +
yv^2 (zi + (5 - 2 xi^2) zv) +
5 yv^8 ((-2 + xi^4) zi + (-1 + 4 xi^2 - 2 xi^4) zv) -
yv^4 ((5 + 2 xi^2) zi + (10 - 12 xi^2 + xi^4) zv) -
2 yv^5 t (-3 (-1 + xi^2) zi + (-3 + xi^2 + xi^4) zv) +
yv^10 ((5 - 4 xi^2 - 2 xi^4) zi + (1 - 6 xi^2 + 4 xi^4) zv) +
yv^6 ((10 + 4 xi^2 - 3 xi^4) zi + (10 - 24 xi^2 +
7 xi^4) zv))/(yv (-1 + yv^2) )}/((1 + (-2 + xi^2) yv^2 +
yv^4)^2 (zi - zv))]
I made a table to time the function over 100,000 runs. The uncompiled version takes about 6.5 seconds on my machine: a Mac with a 3.5 GHz Intel Core i7, macOS Mojave, and 16 GB of 1600 MHz DDR3 memory.
tF = Timing[Table[fun[{.5, zi}, {5., 3.}], {zi, .5, 1.5, .00001}];] //
First
tF=6.54502 seconds
This is the same function compiled using the old Compile, giving a considerable speed gain (approx. 35 times faster):
funCF = Compile[{{iPt, _Real, 1}, {vPt, _Real, 1}},
Module[{xi, zi, yv, zv, t},
{xi, zi} = iPt; {yv, zv} = vPt;
t = Sqrt[-(-1 +
yv^2)^3 (1 + (-1 + xi^2) yv^2)]; {(xi (-2 yv^5 t zi +
2 yv t (zi - 2 zv) + zv +
2 yv^3 t (xi^2 (zi - 2 zv) + 2 zv) +
yv^2 (-4 zi + 2 xi^2 zv) +
yv^8 (-4 (-1 + xi^2) zi + (-3 + 2 xi^2) zv) +
yv^4 (-4 (-3 + xi^2) zi + (-6 - 2 xi^2 + xi^4) zv) +
2 yv^6 ((-6 + 4 xi^2) zi - (-4 + xi^2 + xi^4) zv))),
((-1 + 2 xi^2) yv^12 zi - 2 yv t (zi - zv) - zv -
2 yv^3 t ((-3 + xi^2) zi + 3 zv) -
2 yv^7 t ((-1 + 2 xi^2) zi + zv - xi^2 zv) +
yv^2 (zi + (5 - 2 xi^2) zv) +
5 yv^8 ((-2 + xi^4) zi + (-1 + 4 xi^2 - 2 xi^4) zv) -
yv^4 ((5 + 2 xi^2) zi + (10 - 12 xi^2 + xi^4) zv) -
2 yv^5 t (-3 (-1 + xi^2) zi + (-3 + xi^2 + xi^4) zv) +
yv^10 ((5 - 4 xi^2 - 2 xi^4) zi + (1 - 6 xi^2 + 4 xi^4) zv) +
yv^6 ((10 + 4 xi^2 - 3 xi^4) zi + (10 - 24 xi^2 +
7 xi^4) zv))/(yv (-1 + yv^2) )}/((1 + (-2 + xi^2) yv^2 +
yv^4)^2 (zi - zv))]]
tCF = Timing[
Table[funCF[{.5, zi}, {5., 3.}], {zi, .5, 1.5, .00001}];] // First
tCF=.182818 seconds
This is the function compiled with the new FunctionCompile (which seems to work only with Real64 numbers?).
It gives less of a speed gain than the classic Compile (approx. 23 times faster):
funCCF = FunctionCompile[
Function[{Typed[iPt, TypeSpecifier["PackedArray"]["Real64", 1]],
Typed[vPt, TypeSpecifier["PackedArray"]["Real64", 1]]},
Module[{xi, zi, yv, zv, t},
{xi, zi} = iPt; {yv, zv} = vPt;
t = Sqrt[-(-1 +
yv^2)^3 (1 + (-1 + xi^2) yv^2)]; {(xi (-2 yv^5 t zi +
2 yv t (zi - 2 zv) + zv +
2 yv^3 t (xi^2 (zi - 2 zv) + 2 zv) +
yv^2 (-4 zi + 2 xi^2 zv) +
yv^8 (-4 (-1 + xi^2) zi + (-3 + 2 xi^2) zv) +
yv^4 (-4 (-3 + xi^2) zi + (-6 - 2 xi^2 + xi^4) zv) +
2 yv^6 ((-6 + 4 xi^2) zi - (-4 + xi^2 + xi^4) zv))),
((-1 + 2 xi^2) yv^12 zi - 2 yv t (zi - zv) - zv -
2 yv^3 t ((-3 + xi^2) zi + 3 zv) -
2 yv^7 t ((-1 + 2 xi^2) zi + zv - xi^2 zv) +
yv^2 (zi + (5 - 2 xi^2) zv) +
5 yv^8 ((-2 + xi^4) zi + (-1 + 4 xi^2 - 2 xi^4) zv) -
yv^4 ((5 + 2 xi^2) zi + (10 - 12 xi^2 + xi^4) zv) -
2 yv^5 t (-3 (-1 + xi^2) zi + (-3 + xi^2 + xi^4) zv) +
yv^10 ((5 - 4 xi^2 - 2 xi^4) zi + (1 - 6 xi^2 + 4 xi^4) zv) +
yv^6 ((10 + 4 xi^2 - 3 xi^4) zi + (10 - 24 xi^2 +
7 xi^4) zv))/(yv (-1 +
yv^2) )}/((1 + (-2 + xi^2) yv^2 + yv^4)^2 (zi - zv))]]]
tCCF = Timing[
Table[funCCF[{.5, zi}, {5., 3.}], {zi, .5, 1.5, .00001}];] // First
tCCF=.288228 seconds

Erik Mahieu, 2019-04-18T13:38:42Z

Define a new output format based on an existing output format, e.g. Fortran
https://community.wolfram.com/groups/-/m/t/1663445
Hello *.*!
I would like to define a new output format based on an existing output format (e.g. FortranForm), and I'll call it "PascalForm".
My first idea ...
1.
Format[x_, PascalForm] := Format[x, FortranForm]
2.
Format[x_ ^ y_, PascalForm] := StringForm["power(\`\`,\`\`)", x, y]
fails twice.
to 1. with ...
Format::nosym: x_ does not contain a symbol to attach a rule to. >>
to 2. with ...
SetDelayed::write: Tag Power in y_
(x_) is Protected. >>
Do you have an idea that could work?
Or does someone know of suitable documentation for my task?
Or is there a solution somewhere in the depths of the internet that I simply haven't found yet?
Thank you for your help.
Greetings
Bernd

Bernd Rueffer, 2019-04-18T10:18:12Z

Extract data from table & use it to perform arithmetic operation?
https://community.wolfram.com/groups/-/m/t/1663074
I am not sure about how I can explain this problem clearly. Nevertheless, let me try it. And it's a bit lengthy:
I need to solve a sixth order equation given by the expression:
$$1 + {\frac{2\kappa-1}{2\kappa-3}\frac{Nezero}{k^2}} + {\frac{2\kappa-1}{2\kappa-3}}{\frac{Nbzero}{k^2 Tbe}} - \frac{(2\kappa-1)(2\kappa+1)}{(2\kappa-3)^2}\Bigg[{\frac{w^2 Nezero}{k^4 mpe}}+{\frac{(w-kU_b)^2Nbzero}{k^4Tbe^2mpe}}\Bigg]\\-{\frac{Npzero}{w^2}\bigg(1+\frac{3k^2Tpe}{w^2}\bigg)} - {\frac{z^2Nizero\times mpi}{w^2}\bigg(1+\frac{3k^2Tpe}{w^2}\bigg)}$$
Here there are some constant values; I vary ‘k’ and find the values of ‘w’. For a single value of k, I will get 6 w’s. Some of them are complex. I separate them into real and imaginary parts and write them to ‘**solnofequation**’. For simplicity, one can think of **solnofequation** as a matrix or table, which in the present case has dimensions 10 x 13 (10 rows and 13 columns; **the number of rows will change as one changes ‘kinterval’**). The first column gives values of k, the second column gives the **real values** of the **first root** for each k, the third column gives the **imaginary part** of the **first root**, etc.
Now I have to take each real part in a row, subtract ‘k’ times ub, and divide by mpe. Then the whole of this should be multiplied by kappa2, as given in the program.
E.g. `solnofequation[[2]]` gives `{0.2, 0.161643, 0, 1.06185, 0, -0.144433, -0.109161, -0.144433, 0.109161, 0.0116811, -0.117174, 0.0116811, 0.117174}`.
Here, the first element is k, the second element is the real part of the first root, the third element is the imaginary part of the first root, the fourth element is the real part of the second root, and so on. Here, what I have to do is $\frac{(0.161643-0.2*ub)}{mpe}*kappa2$, then $\frac{(1.06185-0.2*ub)}{mpe}*kappa2$ and so on.
Further, `solnofequation[[8]]` gives `{0.8, 0.609319, 0, 4.42253, 0, -0.636614, -0.409612, -0.636614,0.409612, 0.0366778, -0.453399, 0.0366778, 0.453399}`
Here, effectively what I have to do is $\frac{(0.609319-0.8*ub)}{mpe}*kappa2$, then $\frac{(4.42253-0.8*ub)}{mpe}*kappa2$ and so on.
This has to be done on each row. Then I need to export into a dat file, where **first column** gives values of k, **second column** gives real values of **first root** for each k, **third column** gives **imaginary** part of **first root** for each k, **fourth column** gives the **value of the real part of first root after above calculation for each k, fifth column** gives real values of second root for each k, **sixth column** gives **imaginary** part of **second root** for each k, **seventh column** gives **the value of the real part of second root after above calculation for each k** and so on.
All I could muster was:
1. solve the equation
2. write it on to a table with real and imaginary part separated
3. export the file, but without the calculation that I require.
Ideally, what I am expecting is something like this:
![expected][1]
The code that I have written is attached with this question.
Any help will be deeply appreciated. I know that this question is very lengthy, I could not help it. Thanks at least for those who took time to read this lengthy question.
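A sketch of the row operation described above, assuming `ub`, `mpe`, and `kappa2` are defined as in the attached code (`processRow` is an illustrative helper name, not from the original notebook):

```mathematica
(* Each row is {k, re1, im1, re2, im2, ...}; after every {re, im} pair
   insert kappa2*(re - k*ub)/mpe, then export the result as a table. *)
processRow[row_] := Module[{k = First[row], pairs},
   pairs = Partition[Rest[row], 2];
   Join[{k},
    Flatten[{#[[1]], #[[2]], kappa2 (#[[1]] - k ub)/mpe} & /@ pairs]]];

Export["solution.dat", processRow /@ solnofequation, "Table"]
```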
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=XMNkC.png&userId=895827
Sreeraj T 2019-04-18T06:57:48Z

EasyIDE: An IDE for Mathematica
https://community.wolfram.com/groups/-/m/t/1655203
This is a cross-post from [here](https://b3m2a1.github.io/easyide-an-ide-for-mathematica.html)
---
Contrary to what would probably be best practice, I do all of my Mathematica development inside Mathematica itself. To support this I built out a suite of [application development tools](https://paclets.github.io/PacletServer/btools.html) , a [web site builder](https://paclets.github.io/PacletServer/ems.html) , a [bug tracker](https://github.com/b3m2a1/BugTracker) , and a [documentation writing system](https://paclets.github.io/PacletServer/simpledocs.html) . Each of these worked nicely for me separately, but each of these required a palette and each one ran on notebooks, which meant that my screen filled with too many notebooks to keep track of. And then for each of these palettes and systems I had to write new resource finding code based off the palette or some arbitrarily imposed root directory or else provide some other way to specify where things would be found.
In short, it got messy.
Then, in a very relaxing hiatus from Mathematica I did some python development, writing a package for linking [Mathematica to python](https://paclets.github.io/PacletServer/pjlink.html) as well as some stuff for coordinate transforms and finite differencing and other little utilities. In doing this I noticed that everything was just...better. Partly this is because python is much nicer to write significant amounts of code in, being a language that actually supports developers and with actual object orientation and modularity. But another significant part of it was in the tools available to me. In particular I had the python plugin to [IntelliJ](https://www.jetbrains.com/idea/) , which is also repackaged as [PyCharm](https://www.jetbrains.com/pycharm/) . The fact that I had tabbing, plugins (e.g. for [Git](https://git-scm.com/) ), a file browser inside my dev environment, etc. was at once so entirely normal (I used to be a python programmer before switching over to mostly using Mathematica) and at the same time so nice. I then tried to use the *very nice and well constructed* [IntelliJ plugin for Mathematica](https://wlplugin.halirutan.de/) but it was just too much of a hurdle to lose everything I was used to and liked about writing my code directly in Mathematica.
And that long, unnecessary background is why today we're gonna look at a Mathematica IDE written and operating entirely within Mathematica.
# EasyIDE
Mostly for the rhyme, I called this thing [EasyIDE](https://paclets.github.io/PacletServer/easyide.html) but it is pretty easy to use, too.
## Basics
### Installation
Install it off the [Paclet Server](https://paclets.github.io/PacletServer/) :
<< https://paclets.github.io/PacletServer/Install.wl
PublicPacletInstall["EasyIDE"]
(*Out:*)
![post27-5677831900721146064](https://b3m2a1.github.io/img/post27-5677831900721146064.png)
### Making a New IDE Notebook
This IDE system is also basically just a package and a stylesheet, so it's pretty easy to get started. Simply go to `Format ▸ Stylesheet ▸ EasyIDE ▸ LightMode` . It'll prompt you for a directory to use as the root directory. Here's a video as an example:
[![intro1](https://i.stack.imgur.com/YokQs.png)](https://www.youtube.com/watch?v=TMHiN9Ov2fQ)
You can play around with the file browser now or the plugin menus in the top right.
### Notebooks, Packages, and Text Files
As things currently stand, the IDE recognizes three types of files to handle in different ways. The first, of course, are plain notebooks. These can be manipulated like normal. Here's an example of making and editing a notebook file in the IDE:
[![intro 2](https://i.stack.imgur.com/76Gcd.png)](https://www.youtube.com/watch?v=I1MnB7duneA)
Text and package files can be made in the same way--just assign the appropriate file extension.
Each of these files will work basically as a regular file would, except their contents will be saved to their original file on the disk rather than the current `NotebookFileName[]` .
### The File Browser
One of the most useful and intuitive features of this IDE is the file browser it has built in. This allows you to quickly find files inside the active directory. Here's a screen shot of what that can look like:
![post27-1442079965320612466](https://b3m2a1.github.io/img/post27-1442079965320612466.png)
Each entry in this has a `ContextMenu` that allows for some file- or directory-specific actions.
### Stylesheet / Extension Based Behavior
EasyIDE is built to be extensible. It provides a way to get different behavior depending on what would be useful for the specific type of notebook or file is being fed in. These are controlled in the EasyIDE settings, in particular at `EasyIDE ▸ Resources ▸ Settings ▸ Mappings` where there are many files that control how these should map. This directory may also be created in `$UserBaseDirectory/ApplicationData` and the settings there will take precedence over those in the paclet folder itself.
These customizations include stylesheets, toolbars, and what to do when the file browser is active.
### Plugins and Toolbars
Probably the best feature of having something like EasyIDE is the ability to hook external code into the IDE and have it give new, more powerful capabilities. To make this easy to work with I added both a plugin system and a toolbar system (although the latter is really just a special case of the first). Plugins appear as either menus--such as the `File` and `Project` menus which are themselves just plugins--or as commands under the plugins menu. Currently I already have a decent number of these:
![post27-6483075564890356321](https://b3m2a1.github.io/img/post27-6483075564890356321.png)
All of these add new functionality to the IDE based on code I'd written before. In that screenshot you can also see a toolbar, which exists right below the tabs. This can be stylesheet specific and thus adds an even more targeted way to add functionality to the system. Here's an example of the four different toolbars I've implemented as well as the different stylesheets they go with:
[![intro 3](https://i.stack.imgur.com/QHCwH.png)](https://www.youtube.com/watch?v=fFiDiO-oM7w)
In that you can also see the major downside of putting everything into an IDE: when the files get big (as the one I'm using to write this post is), things can get slower. On the other hand, as long as one is only writing code, this is never an issue. And even with a ~12MB file like this, things are still more than fast enough not to be frustrating to work with.
## Extensions
### Styles
EasyIDE was built to be customizable. This holds first and foremost for the stylesheets it works with. Even though currently there is only a set of LightMode styles, a DarkMode style set could be constructed without too much more difficulty. To do this, one would merely have to take the existing LightMode stylesheet, copy it, and make the necessary cosmetic changes. These changes should then propagate reasonably naturally to the extension styles if the inheritance is changed. This is on the TODO list, but if there is a quality existing DarkMode stylesheet to work off of, that would also make life much easier.
### Plugins and Toolbars
These may be hooked in by adding things to `EasyIDE ▸ Resources ▸ Settings ▸ Plugins` and `EasyIDE ▸ Resources ▸ Settings ▸ Toolbars` . There are a number of good examples there already.
### Miscellaneous Extensions
I had already implemented stuff for creating nice docs, Markdown notebooks, websites, bug tracking, paclet creation, etc. and some of this has made it in as plugins already. More is forthcoming, but for now one can always play with what's in the Plugins menu. In particular the Git plugin is useful for me as I write and develop.
### The EasyIDE API
EasyIDE is just a collection of functions wrapped into a single unit. These were designed to (hopefully) be modular and clean to work with. Eventually all core functionality will also make its way to being attached to a single object, the `IDENotebookObject` . The API for this is based off of my [InterfaceObjects](https://paclets.github.io/PacletServer/interfaceobjects.html) package and is object-oriented. This will be documented in due time, but as a taste here's what it can look like:
ide = IDENotebookObject[]
(*Out:*)
![post27-2774621243698091573](https://b3m2a1.github.io/img/post27-2774621243698091573.png)
ide@"Methods"
(*Out:*)
{"Open","Save","Close","SwitchTab","Path","Data","SetData","ToggleFileViewer","AddToolbar","RemoveToolbar","AddStyles","RemoveStyles","GetStylesheet","SetStylesheet","SetProjectDirectory","CreateMessage","CreateDialog"}
These `"Methods"` are all operations that the IDE notebook referenced by `EvaluationNotebook[]` can perform. Here's an example of creating a message:
ide@"CreateMessage"["Hello!"]
(*Out:*)
![post27-6469651990570110283](https://b3m2a1.github.io/img/post27-6469651990570110283.png)
![post27-316085028871083490](https://b3m2a1.github.io/img/post27-316085028871083490.png)
As the IDE grows in sophistication so will the methods the API supports. For now, though, these provide the most direct control that is possible to get with the IDE.
b3m2a1 2019-04-09T10:47:07Z

Replicate Dave Giles' permutation test using WL?
https://community.wolfram.com/groups/-/m/t/1661603
Dear community members,
I'm trying to replicate Dave Giles' permutation test [examples][1] using Mathematica but I'm having difficulties randomly selecting 50,000 samples from all the possible permutations of a list of 20 prices.
Mathematica runs out of memory on my laptop (8 GB RAM).
prices = {5.0, 4.8, 4.7, 4.0, 5.3, 4.1, 5.5, 4.7, 3.3, 4.0, 4.0, 4.6,
5.3, 3.0, 3.5, 3.9, 4.7, 5.0, 5.2, 4.6};
In[4]:= RandomSample[Permutations[prices], 50000]
During evaluation of In[4]:= General::nomem: The current computation was aborted because there was insufficient memory available to complete the computation.
During evaluation of In[4]:= Throw::sysexc: Uncaught SystemException returned to top level. Can be caught with Catch[\[Ellipsis], _SystemException].
Out[4]= SystemException["MemoryAllocationFailure"]
Is there an alternative way that I could use to generate those samples?
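For scale, the number of distinct permutations of these 20 prices is roughly 10^16, so materializing `Permutations[prices]` cannot fit in memory. A sketch of one workaround is to draw random permutations directly (this samples with replacement, so repeats are possible, though vanishingly unlikely here):

```mathematica
(* Each RandomSample[prices] is one uniformly random permutation,
   so the full permutation list is never built. *)
samples = Table[RandomSample[prices], {50000}];
Dimensions[samples]  (* {50000, 20} *)
```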
[1]: https://davegiles.blogspot.com/2019/04/what-is-permutation-test.html
Ruben Garcia Berasategui 2019-04-16T02:25:05Z

Alter the default line length when using PutAppend? i.e. >>>
https://community.wolfram.com/groups/-/m/t/1662915
Can anyone shed any light on this problem? I am saving to a file the results of various length calculations, i.e. numbers separated by commas, and it seems that when a line hits 17 numbers the remainder are put on a separate line. This is causing problems when reading back in using Import with the Lines element, as data is missing for those lines that have been truncated. For example
{299999434999992, 2, 3, 3, 1, 7, 1, 13, 1, 19, 1, 29, 1, 31, 1, 1601, 1,
5023, 1}
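A sketch of one way to avoid the wrapping: `PutAppend` writes with the stream's default `PageWidth` (78 characters), but opening the stream explicitly lets you set `PageWidth -> Infinity` so long lines are never broken:

```mathematica
(* Write to an explicitly opened stream with line wrapping disabled. *)
stream = OpenAppend["results.txt", PageWidth -> Infinity];
Write[stream, {299999434999992, 2, 3, 3, 1, 7, 1, 13, 1, 19, 1,
   29, 1, 31, 1, 1601, 1, 5023, 1}];
Close[stream];
```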
The 5023, 1 is on a separate line, and when imported back the two lines are treated separately. Any help would be appreciated.
Paul Cleary 2019-04-17T16:06:58Z

How to stop a WSM simulation, change variables, and then continue?
https://community.wolfram.com/groups/-/m/t/1660482
Hi everybody,
Coming from [System Dynamics][1], I am wondering how stopping a simulation, changing variables (including state reinitialization), and continuing the simulation can be achieved from the outside (e.g. using Mathematica's integration with WSM)?
Use cases for this would first of all be interactive games (e.g. ["Applying Modelica Tools to System Dynamics Based Learning Games: Project Management Game"][2] by Miettinen et al. (2016)), but it may also be needed to control Bayesian identification procedures that require state reinitialization when observed state data become available.
How should one best go about this using WSM and Mathematica? One would need to stop and continue/restart the simulation at arbitrary points in time with the following requirements:
- all states (even "hidden" states within some components) must be at the exact state they were in, when the simulation was interrupted
- some variables (e.g. discrete vars) may be modified by the user and the simulation has to use that new value when it is continued
- If noise is present in the model, the stochastic processes must be continued using the appropriate seed
- ( optionally: selected states may also be "reinitialized" )
Thanks in advance for some ideas!
Regards,
Guido
PS: A related question can be found on [StackOverflow][3] -- unfortunately the answers only relate to Dymola as far as I can see.
[1]: https://en.wikipedia.org/wiki/System_dynamics
[2]: http://Applying%20Modelica%20Tools%20to%20System%20Dynamics%20Based%20Learning%20Games:%20Project%20Management%20Game
[3]: https://stackoverflow.com/q/34661475/5363743
Guido Wolf Reichert 2019-04-15T10:17:01Z

Knitting images
https://community.wolfram.com/groups/-/m/t/1659553
Dear all, inspired by another [great post][1] of [@Anton Antonov][at0], and in particular by a remark there of [@Vitaliy Kaurov][at1] pointing to [the art of knitting images][2], I could not resist trying this with Mathematica. Clearly, this problem is crying out for the **Radon transform**!
![enter image description here][3]
I start by choosing some example image, converting it to inverted grayscale, and performing the Radon transform.
ClearAll["Global`*"]
img0 = RemoveBackground[
ImageTrim[
ExampleData[{"TestImage", "Girl3"}], {{80, 30}, {250, 240}}], {"Background", {"Uniform", .29}}];
img1 = ImageAdjust[ColorNegate@ColorConvert[RemoveAlphaChannel[img0], "Grayscale"]];
{xDim, yDim} = {180, 400}; (* i.e. angles between 1\[Degree] and 180\[Degree] *)
rd0 = Radon[img1, {xDim, yDim}];
ImageCollage[{img0, ImageAdjust@rd0}, Method -> "Rows",
Background -> None, ImagePadding -> 10]
![enter image description here][4]
Every column of the Radon image represents a different angle of projection. So next I separate these columns into (here 180) single Radon images and do an inverse Radon transform on each:
maskLine[a_] := Table[If[a == n, 1, 0], {n, 1, xDim}];
maskImg = Table[Image[ConstantArray[maskLine[c], yDim]], {c, 1, xDim}];
rdImgs = rd0 maskImg;
ProgressIndicator[Dynamic[n], {1, xDim}]
invRadImgs =
Table[{ImageApply[If[# > 0, #, 0] &,
InverseRadon[rdImgs[[n]]]], -(n - 91) \[Degree]}, {n, 1, xDim}];
These data already represent the angle-dependent intensities for backprojection. Now one just has *somehow* to translate these intensities into discretely spaced lines (because this is the actual task, in analogy to the above-mentioned knitting). Here is my simple attempt, which e.g. for 69° gives the following result (I am not really happy with this - there is definitely room for improvement!):
![enter image description here][5]
valsAngle[invRads_] := Module[{img, angle, data, l2},
angle = Last@invRads;
data = Max /@ (Transpose@*ImageData@*ImageRotate @@ invRads);
l2 = Round[Length[data]/2];
data = MapIndexed[{First[#2] - l2, #1} &, data];
{Select[
Times @@@ ({#1,
If[#2 > .0003, 1, 0]} & @@@ ((Mean /@ # &)@*Transpose /@
Partition[data, 5])), # != 0 &], angle} (*
limiting value of 0.0003 is just empirical! *)
];
va = valsAngle /@ invRadImgs;
graphicsData[va_] := Module[{u, angle},
{u, angle} = va;
InfiniteLine[# {Cos[angle], -Sin[angle]}, {Sin[angle],
Cos[angle]}] & /@ u];
gd = graphicsData /@ va;
Graphics[{Thickness[.0003], gd}, ImageSize -> 600,
PlotRange -> {{-170, 170}, {-220, 220}}]
... and the result is a bunch of lines:
![enter image description here][6]
[at0]: https://community.wolfram.com/web/antononcube
[at1]: https://community.wolfram.com/web/vitaliyk
[1]: https://community.wolfram.com/groups/-/m/t/1555648?p_p_auth=T7A50bYl
[2]: http://artof01.com/vrellis/works/knit.html
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ImageOfLines.gif&userId=32203
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=img0rd0.jpg&userId=32203
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=linesample.png&userId=32203
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ImageOfLines.png&userId=32203
Henrik Schachner 2019-04-13T20:01:08Z

Simplify an expression by substitution of a known constant?
https://community.wolfram.com/groups/-/m/t/1662796
Hello,
I'm trying the following command to simplify an expression by substitution of a known constant, but it's not performing the substitution. Any ideas why?
FullSimplify[((krRA + aRA) (kfRL L aRL + aR (krRL + aRL)))/(kfRA aRA (krRL + aRL)), Assumptions -> kmL == (aR (krRL + aRL))/(kfRL aRL), Assumptions -> {kfRL > 0, aRL > 0, kfRL aRL > 0}]
Abed Alnaif 2019-04-17T14:57:24Z

Neural Network on Cortex-M ARM processors with Mathematica
https://community.wolfram.com/groups/-/m/t/1661629
## Introduction ##
Disclaimer: I am an ARM employee, but this is personal work.
I wanted to understand how to convert a neural network from Mathematica to use it on a Cortex-M with the CMSIS-NN library. CMSIS-NN is a free ARM library containing a few optimized functions for neural networks on embedded systems (convolutional and fully connected layers).
There are a few demos (CIFAR and keyword spotting) running on Cortex-M. They were generated either from the Caffe framework or with TensorFlow Lite.
I wanted to do the same from Mathematica and also understand a bit more the CMSIS-NN library. So, I attempted to reproduce a keyword spotting example.
## The Network ##
The network is quite simple but for an embedded system we cannot use something too complex.
First the audio is going through a MFCC step:
audioEnc =
NetEncoder[{"AudioMFCC", "WindowSize" -> 4*160, "Offset" -> 2*160,
"NumberOfCoefficients" -> 10, "TargetLength" -> 49,
SampleRate -> 16000}]
The network is standard: a few convolutional layers followed by a few fully connected layers:
kwsModel = NetChain[{
ReplicateLayer[1],
ConvolutionLayer[channels[[1]], {10, 4}],
ElementwiseLayer[Ramp],
ConvolutionLayer[channels[[2]], {10, 4},
"Input" -> {channels[[1]], 40, 7}, "Stride" -> {2, 1}],
ElementwiseLayer[Ramp],
LinearLayer[58],
ElementwiseLayer[Ramp],
LinearLayer[128],
ElementwiseLayer[Ramp],
LinearLayer[Length[wantedClass]],
SoftmaxLayer[]
}, "Input" -> audioEnc, "Output" -> classDec]
At the output, there is a NetDecoder which converts the output into 3 classes. I am trying to detect the words "backward", "yes", and "no".
The test patterns are coming from the TensorFlow keyword spotting example (link in attached notebook). But my network is different.
## Problems to solve ##
There are 2 problems to solve to be able to convert this network for CMSIS-NN.
First problem: the library uses a different convention for the tensors, which means the weights have to be reordered before being used by CMSIS-NN. Since this is not too difficult to do with Mathematica, I won't detail it here.
Second problem: CMSIS-NN uses fixed-point arithmetic (Q15 or Q7), but Mathematica uses floats.
This exposes two limitations of Mathematica. First, there are no quantization operators, so we cannot train the network with quantization effects included. This is not a major issue, but we can expect the quantized network to perform worse than one trained directly with quantization effects.
Second, during training Mathematica does not keep track of the statistics of the intermediate values (inputs and outputs of layers). But to convert the floats to fixed point, we need statistics about the dynamic range of those values. Once the dynamic range is known, the quantization is controlled through parameters of the CMSIS-NN layers: shift values for the weights and biases.
So, to get those statistics, I simply apply each layer of the trained network one after another, keeping track of the inputs and outputs. I do this on all training patterns. Luckily, embedded networks are small, so even though this is slow, it is not too slow.
I get beautiful histograms (log scale) which serve as a basis for choosing how to quantize the values. A simple strategy is to just use the min and max values.
![Histograms][1]
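As an illustration of the min/max strategy, here is a sketch of Q7 quantization (the helper names are my own assumptions, not part of the CMSIS-NN API):

```mathematica
(* Choose the number of integer bits from the observed max magnitude,
   then scale and clip into the signed 8-bit Q7 range. *)
shiftFor[maxAbs_] := Max[0, Ceiling[Log2[maxAbs]]];
toQ7[x_, shift_] := Clip[Round[x 2^(7 - shift)], {-128, 127}];

toQ7[1.0, shiftFor[2.5]]  (* 32, i.e. 1.0 with 5 fractional bits *)
```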
## Code generation ##
Once I have statistics for the dynamics of the values, I can generate C code for CMSIS-NN and C arrays containing the quantized values.
Since quantization has an effect on the performance of the network, I want to be able to test the result easily. So, I have customized CMSIS-NN so that it can be run from Mathematica. The C code generated by the notebook can be compiled and used with MathLink.
That way I can compare the behavior of the original network and the CMSIS-NN quantized one.
Here is an example:
![UI to play with both networks (Mathematica and CMSIS-NN)][2]
## To use the notebook ##
The steps to convert a network are:
- Train a network
- Compute statistics on the network intermediate values
netStats = ComputeAllFiles[result, audioEnc, trainingFiles, SumStat] ;
result is the trained network.
audioEnc is the MFCC encoder.
trainingFiles are the training files.
SumStat is the strategy used for the statistics. Here we just get summary statistics: min/max.
- Quantize the network and generate C code
mfcc = audioEnc[AudioResample[SpeechSynthesize["backward"], 16000]];
quantizedNetwork1 = CorrectedFormats[result, netStats, 15, 0];
quantizedNetwork = <| "w" -> quantizedNetwork1["w"],
"net" -> Drop[quantizedNetwork1["net"], 1]|>;
TestPatterns[NetDrop[result, 1],
NetExtract[result, 1][mfcc], quantizedNetwork];
CompileNetwork[
NetDrop[result, 1], NetExtract[result, 1][mfcc],
result[mfcc, None], quantizedNetwork]
TestCode[NetDrop[result, 1], quantizedNetwork];
In this example NetDrop is just dropping the first ReplicateLayer. It does not exist in CMSIS-NN and it is used here just to adapt the tensor shape at the input of the network.
The second and third arguments of CompileNetwork are input and output of the network on one test pattern. It is used only when debugging the network.
mfcc is the input pattern (mfcc of some audio pattern).
- Compile the generated code in ctests using the provided Makefile
- Link the executable and start using it:
link = Install[
FileNameJoin[{NotebookDirectory[], "ctests", "cmsisnn.exe"}]];
cmsiskws[s_] :=
classDec[CMSISNN[
QuantizeData[15, quantizedNetwork["net"][[1, 2]],
Transpose[audioEnc[s] // ReplicateLayer[1] , {3, 2, 1}] //
Flatten]]];
cmsiskws is a convenience function. It computes the MFCC of the audio, reorders the data (Transpose) to use the same convention as CMSIS-NN, and quantizes the input data using the format computed during quantization of the network. Then the CMSISNN function is called on the result.
cmsiskws[AudioResample[SpeechSynthesize["yes"], 16000]]
We can now test that this C code can recognize the word "yes".
The same notebook and same principles were used on CIFAR example.
I can't attach the zip containing the C sources to this post; it is not accepted. Without those C sources you won't be able to reproduce the results of this post. Any idea how I could share them?
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Pict2.png&userId=89693
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Pict1.png&userId=89693
Christophe Favergeon 2019-04-16T08:59:54Z

Does Windows Desktop Search still index notebooks?
https://community.wolfram.com/groups/-/m/t/1662264
At some point in the past years I was able to find notebook content via Windows Desktop Search (WDS). Recently I've had trouble with this (null results searching for content and "ext:nb"). I checked WDS Indexing Options, and found a problem with file extensions "cdf", "nb", "nbp": "Registered IFilter is not found". Extensions "m", "ma", and "wl" appear to not have such a problem, with WDS reporting "Plain Text Filter". I notice this after a clean install of both v11.3 and v12.
If the relevant IFilters are indeed installed on my system, perhaps my company's IT security policies are preventing their proper registration. I ran the installers as admin, but perhaps that wasn't enough.
Vincent Virgilio 2019-04-17T02:59:03Z

Restore Mathematica 11.3 vanished Front End?
https://community.wolfram.com/groups/-/m/t/1659421
This is on Windows 10. Upon startup, Spikey appears, and the last thing he says is: "Initializing Default Notebooks." When I add "-cleanstart" to the command line (properties of the Mathematica shortcut), he says "Starting with clean preferences . . . " but still there's no front end. The kernel remains available. Much obliged.
Christopher Lamb 2019-04-13T00:00:33Z

[✓] Import FITS images with identical results in MMA 11.3 and 12.0?
https://community.wolfram.com/groups/-/m/t/1662062
I have been using Mathematica 11.3 to analyze experimental data in the form of FITS format images. In 11.3, Import[filename] evaluated to an image. In 12.0, it seems to evaluate to an association:
![enter image description here][1]
So this has broken my previous notebooks, and also means that my collaborators will all need to upgrade to V12 if we are to conveniently share code.
Am I right about this? Does anyone have an insight as to why this has happened?
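A possible version-independent workaround is to request the image element explicitly (a sketch; the element name should be checked against the FITS Import documentation):

```mathematica
(* Ask Import for the "Image" element of the FITS file directly,
   instead of relying on the default element, which changed in 12.0. *)
img = Import["data.fits", {"FITS", "Image"}]
```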
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=fits.PNG&userId=98179
David Keith 2019-04-16T21:38:39Z