Community RSS Feed
https://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions from all groups sorted by active

Difficulty recognizing gcc compiler
https://community.wolfram.com/groups/-/m/t/1669257
I installed gcc but Mathematica (12) cannot seem to find it.
I installed gcc two ways: (i) via MinGW and (ii) via CodeBlocks. A Windows command-line check confirms the compilers are installed.
I updated my Windows environment variables. GetEnvironment[] now shows Path -> ..... C:\\Program Files (x86)\\CodeBlocks\\MinGW\\bin;C:\\MinGW\\ ....
Still Mathematica does not find it. Executing
Needs["CCompilerDriver`"]
CCompilers[]
CCompilers[Full]
yields
{}
{{"Name" -> "Intel Compiler",
"Compiler" -> CCompilerDriver`IntelCompiler`IntelCompiler,
"CompilerInstallation" -> None,
"CompilerName" -> Automatic}, {"Name" -> "Generic C Compiler",
"Compiler" -> CCompilerDriver`GenericCCompiler`GenericCCompiler,
"CompilerInstallation" -> None, "CompilerName" -> Automatic}}
Attempting to Compile using CompilationTarget -> "C" produces the errors
- CreateLibrary::nocomp: A C compiler cannot be found on your system. Please consult the documentation to learn how to set up suitable compilers.
- Compile::nogen: A library could not be generated from the compiled function.
I consulted the documentation but don't get anywhere.
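For reference, CCompilerDriver can also be pointed at a compiler explicitly via $CCompiler; a minimal sketch (the MinGW path below is an assumption; adjust it to wherever your gcc.exe actually lives):

```mathematica
Needs["CCompilerDriver`"]
(* register gcc as a generic C compiler; the installation path is an assumption *)
$CCompiler = {
   "Compiler" -> CCompilerDriver`GenericCCompiler`GenericCCompiler,
   "CompilerInstallation" -> "C:\\MinGW\\bin",
   "CompilerName" -> "gcc"};
(* if the compiler is picked up, this should now compile to C without errors *)
cf = Compile[{{x, _Real}}, x^2 + Sin[x], CompilationTarget -> "C"]
```

If CCompilers[] still returns {}, it is worth checking that the directory given as "CompilerInstallation" contains gcc.exe directly rather than in a subdirectory.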
PS: I tried compiling using the new FunctionCompile, but this produces much slower code than Compile without CompilationTarget -> "C".
Thanks,
Eric
Eric Michielssen, 2019-04-25T17:18:09Z

Use ParallelTable to improve calculation speed?
https://community.wolfram.com/groups/-/m/t/1667335
**Mathematica V12**:
The snapshot is based on a notebook. I want to improve the calculation speed of my notebook by changing **Table** to **ParallelTable**. Unfortunately, ParallelTable does not finish in a reasonable time.
![enter image description here][1]
Mathematica told me that 4 kernels had been launched, but after more than 30 minutes I aborted the calculation.
Why does it take so much time to create the table with several kernels?
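Two things are worth checking before blaming ParallelTable itself: any definitions used inside the table must be distributed to the subkernels, and very cheap per-element work can make the parallel overhead dominate. A minimal sketch (f is a placeholder for the actual table body):

```mathematica
LaunchKernels[];  (* start the subkernels explicitly *)
f[i_] := N[Sum[1/k^2, {k, 1, i}]];  (* placeholder for the real computation *)
DistributeDefinitions[f];  (* without this, the subkernels do not know f *)
ParallelTable[f[i], {i, 1, 5000}, Method -> "CoarsestGrained"]
```

Method -> "CoarsestGrained" reduces scheduling overhead when all iterations take roughly the same time.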
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=52542019-04-2314_23_39-Window.png&userId=1369267
Jürgen Kanz, 2019-04-23T12:37:59Z

How to make a Wolfram blog?
https://community.wolfram.com/groups/-/m/t/1669176
Hi,
I would like to create a blog, mainly to demo small data explorations with WL. I would like to use my personal website for this purpose (not cloud notebooks, which are of course a simple solution). Does anybody know what is needed to achieve something like https://blog.wolfram.com with posts generated from WL notebooks?
Currently I use generated PDFs in WordPress, but that's not very useful for readers who want to try the code.
Any new features in WL12 I could use?
Fabian Wenger, 2019-04-25T19:30:07Z

Why doesn't Im[] return the imaginary part of the equation?
https://community.wolfram.com/groups/-/m/t/1669504
So I have an expression called "y" of the form X + i*Y. I need to get X and Y as separate expressions. Obviously, Re[y] = X and Im[y] = Y. But for some reason the functions Re[] and Im[] do not return what I expect, even though I explicitly defined my variables "a" and "w" as real numbers. Why is that?
![theproblem][1]
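One likely explanation: Re and Im stay unevaluated on symbolic input because Mathematica assumes symbols are complex unless told otherwise. ComplexExpand, which assumes all free variables are real, usually does what is wanted here; a sketch, with a hypothetical y standing in for the expression in the screenshot:

```mathematica
y = (a + I w)/(a - I w);  (* hypothetical expression of the form X + I Y *)
(* ComplexExpand treats a and w as real, so Re and Im evaluate fully *)
{ComplexExpand[Re[y]], ComplexExpand[Im[y]]} // Simplify
```

Simplify[Re[y], Assumptions -> (a | w) \[Element] Reals] achieves the same with explicit assumptions.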
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-04-25at10.22.14PM.png&userId=1308552
Daniel Voloshin, 2019-04-25T19:29:56Z

Solve a system of 7 nonlinear equations with 7 variables
https://community.wolfram.com/groups/-/m/t/1668537
Hello, everyone. I am a learner of Mathematica. I want to solve a CSTR system of 7 nonlinear equations, consisting of 6 mass balance equations and one energy balance, with 7 variables. But NSolve can't solve it and FindRoot doesn't give me realistic solutions. Can someone help me out? Thank you.
Jake Chen, 2019-04-24T21:05:43Z

What is wrong with these statements?
https://community.wolfram.com/groups/-/m/t/1668579
What is wrong with the following statement?
A="sedf"
B="eret"
Pressing Shift+Enter echoed each statement.
Perfect.
But when I then typed
A
B
A is no longer assigned, but B is. Why?
See the attached screen.
AI Algorithm, 2019-04-25T00:04:27Z

Integer area search, SEPP triangles
https://community.wolfram.com/groups/-/m/t/1665973
# Description:
This work is a search for scalene SEPP (Square Even-Prime-Prime) triangles with integer area. They are fairly rare. There are no square odd-prime-prime triangles with integer area, since no triangle with three odd sides has an integer area. The first SEPP representative is the triangle with sides 3-4-5. Below is a representative image of a SEPP:
![enter image description here][1]
I wrote the simple code below to find triangles with these properties. I opted to use parallel computing (8 kernels) and measured the computing time of each evaluation. The sides range up to just below the value given in "n" (< n). The answer is in the form: "Quantity used" {"side a", "side b", "side c"} {"area"} "graphics", with the absolute time just below.
# Objective and Coding:
The main objective here is to find how many of these exist for sides varying up to 10, 100, 1000, … in powers of 10 (10^x). Here is one example with sides up to 50 to test the code; only 1 triangle is found, the 3-4-5:
Parallelize[n = 50;
p = PrimePi[n];
Do[a = (2*i)^2; b = Prime[j]; c = Prime[k];
If[c < a + b \[And] a < b + c \[And] b < a + c \[And]
Area[SSSTriangle[a, b, c]] \[Element] Integers,
Print[n, {a, b, c}, {Area[SSSTriangle[a, b, c]]},
Graphics[SSSTriangle[a, b, c]]]], {i, 1,
IntegerPart[Sqrt[n - 1]/2]}, {j, 2, p - 1}, {k, j + 1,
p}]] // AbsoluteTiming
![enter image description here][2]
Now a code modification to find multiple results of the processing time in just one evaluation. The example below is programmed to calculate from 10 to 100 with steps of 10 {m,10,100,10}:
Do[Print[Parallelize[n = m;
p = PrimePi[n];
Do[a = (2*i)^2; b = Prime[j]; c = Prime[k];
If[c < a + b \[And] a < b + c \[And] b < a + c \[And]
Area[SSSTriangle[a, b, c]] \[Element] Integers, Null], {i, 1,
IntegerPart[Sqrt[n - 1]/2]}, {j, 2, p - 1}, {k, j + 1, p}]] //
AbsoluteTiming], {m, 10, 100, 10}]
![enter image description here][3]
In the code above you can replace {m,10,100,10} with {m,{100,140,210}} to get results for specific values such as 100, 140, and 210. You can also replace the Null in the code with Print[n,{a,b,c},{Area[SSSTriangle[a,b,c]]},Graphics[SSSTriangle[a,b,c]]] to print the triangles found along the way.
# Calculation and Results:
To carry out the evaluations in this work I used the following machine (listed just to give an idea of the processing power used):
Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz, 3600 Mhz, 8 Core(s), 8 Logic Processor(s) (run with 8 Kernels), RAM 16.0 GB, BaseBoard B360M AORUS Gaming 3, X64, NVIDIA GeForce GTX 1060 6GB.
The following table was assembled with the data of the quantities (maximum values for the side) in "n" and the absolute times (seconds) spent on parallel computing:
![enter image description here][4]
Now the results found using values with powers of 10:
- From values up to 10 (<10) and up to 100 (<100):
![enter image description here][5]
- The result for sides up to 1000 (<1000):
![enter image description here][6]
- And finally the result for sides up to 10000 (<10000):
![enter image description here][7]
# Time Prediction (Fitting Model):
I also attempted to predict the time required to find triangles with sides larger than 10000, using FindFit with two models ("a.x^b" and "a.x^b.c^x"). I chose to reduce the x-axis by a factor of 10 before fitting (I do not know whether or how much this affected the accuracy). Below is the first fit as an example (fitted to the data from 1000 to 10000 in steps of 1000, with the prediction for 10^5):
y = a*x^b // StandardForm
data = {{10, 0.171556}, {20, 0.529822}, {30, 1.05338}, {40,
1.86622}, {50, 3.123}, {60, 6.25655}, {70, 9.20718}, {80,
12.59}, {90, 15.356}, {100, 18.9604}, {110, 23.6784}, {120,
21.1587}, {130, 25.256}, {140, 28.0271}, {150, 34.3804}, {160,
38.2134}, {170, 45.9896}, {180, 51.1624}, {190, 56.4499}, {200,
64.8474}, {210, 71.6892}, {220, 115.029}, {230, 124.75}, {240,
139.067}, {250, 146.954}, {260, 117.413}, {300, 167.935}, {340,
236.836}, {380, 305.191}, {400, 345.106}, {440, 431.134}, {460,
475.442}, {500, 566.853}, {520, 623.334}, {550, 695.491}, {560,
716.899}, {580, 794.285}, {590, 833.365}, {600, 852.083}, {630,
982.572}, {640, 1021.09}, {660, 1148.21}, {690, 1265.83}, {700,
1322.47}, {710, 1324.94}, {750, 1428.75}, {780, 1560.02}, {800,
1676.75}, {820, 1819.18}, {860, 2024.3}, {900, 2290.71}, {950,
2629}, {1000, 2972.96}};
FindFit[%, a*x^b, {a, b}, x]
Table[a*x^b /. %, {x, 100, 1000, 100}] \[And]
Table[a*x^b /. %, {x, {10000}}]
![enter image description here][8]
This chart was created using the real absolute time data as well as the two curves generated by FindFit that I tested:
![enter image description here][9]
# Conclusion:
There are only 13 scalene SEPP triangles with integer area and sides varying up to 10000 (10^4).
The two FindFit curves gave very divergent predictions of the time required to evaluate sides up to 100000 (10^5): the fit "a.x^b" (fit 1) was more optimistic and estimated 8 days of parallel computation, while the fit "a.x^b.c^x" (fit 2) estimated 171.5 days! Either way, these are very long computing times to check all possible sides up to 10^5.
To choose the better fit curve for predicting longer runs, I evaluated sides up to 15000 to obtain a real time point and see which curve comes closer. The real time for sides up to 15000 was 7428.57 seconds. The "fit 1" curve came closest with a prediction of 7679.62 seconds, while the "fit 2" curve estimated 8452.19 seconds. The "fit 1" prediction differed by 4 minutes and 11 seconds, or approximately 3.4% of the real value.
# A Few Questions (that I have):
- Is there any way to make these codes faster or more efficient? Any other way to find that kind of triangle using codes?
- Is there a better way to use FindFit in this case to have a more accurate prediction? Maybe another function or more/less data? How to know the correct function model?
- How many of these triangles will there be if we search for sides up to 10^5 or even 10^6? Would anyone have any idea to help me find it?
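On the first question: one possible speedup is to test for integer area directly with Heron's formula instead of constructing SSSTriangle objects and computing Area. The quantity 16*Area^2 = (a+b+c)(-a+b+c)(a-b+c)(a+b-c) also encodes the triangle inequality, since the product is positive exactly when a valid triangle exists. A sketch, assuming the same search ranges as the n = 50 example above:

```mathematica
(* integer-area test via Heron's formula: Area = Sqrt[q]/4 *)
integerAreaQ[a_, b_, c_] :=
  With[{q = (a + b + c) (-a + b + c) (a - b + c) (a + b - c)},
   q > 0 && IntegerQ[Sqrt[q]/4]];
n = 50; p = PrimePi[n];
triples = Flatten[
   Table[{(2 i)^2, Prime[j], Prime[k]},
    {i, 1, IntegerPart[Sqrt[n - 1]/2]}, {j, 2, p - 1}, {k, j + 1, p}], 2];
Select[triples, integerAreaQ @@ # &]
```

This keeps everything in exact integer arithmetic (no SSSTriangle overhead), and the Select step is straightforward to parallelize with ParallelMap.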
Thank you very much to everyone in the community.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=i1.png&userId=1316061
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=i2.png&userId=1316061
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=i3.png&userId=1316061
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=i4.png&userId=1316061
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=i5.png&userId=1316061
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=i6.png&userId=1316061
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=i7.png&userId=1316061
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=i8.png&userId=1316061
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=i9.png&userId=1316061
Claudio Chaib, 2019-04-22T03:51:46Z

Refuses to complete final “For” command
https://community.wolfram.com/groups/-/m/t/1669091
Found this code in the literature that evidently worked in Version 7 of Mathematica, as evidenced by the published output. It now seems to ignore the final “test” in the “For” statement. Any ideas as to why? Attached is the code.
Carroll Stroh, 2019-04-25T11:16:48Z

Can't specify first derivative initial condition using DSolve
https://community.wolfram.com/groups/-/m/t/1669367
I am getting an error message when I try to include an initial condition for a second-order ODE using DSolve:
Clear[y]
DSolve[{y''[x] + 7 y'[x] + 12 y[x] == 0, y[0] == -1, y'[0] == 0}, y[x], x]
DSolve::deqn: Equation or list of equations expected instead of True in the first argument {12 y[x]+7 (y^\[Prime])[x]+(y^\[Prime]\[Prime])[x]==0,y[0]==-1,True}.
This happened even after cutting and pasting an example from the documentation:
Clear[y]
DSolve[{y''[x] + y[x] == 0, y[0] == 1, y'[0] == 1/3}, y, x]
Bruce Hugo, 2019-04-25T17:19:48Z

Add units to variables inside a Manipulate expression?
https://community.wolfram.com/groups/-/m/t/1667761
I've created a Manipulate expression with many variables. I would like each of these variables to have associated units, as in the standard Quantity[Value,"Unit"], so that I can easily enter the values I want and get the answer in correct units. Unfortunately, Manipulate doesn't seem to tolerate this regardless of where I try to insert it. I'd also like to add UnitConvert to the answer so that I can see it in "mL/min".
Manipulate[
((\[Pi] (d/2)^2 c \[Epsilon]^3 Rb^2) ((3 (1 - \[Epsilon]) 0.0728 Cos[\[Theta]])/(\[Epsilon] L Rb) +
9800 Sin[\[Phi]]))/((1 - \[Epsilon])^2 0.00089),
{{L, 0.07, "wick length"}, 0.01,0.1},
{{d, 0.0068, "wick diameter"}, 0.001, 0.008},
{c, 1/60, 1/30},
{{\[Epsilon], 0.5, "porosity"}, 0.3,0.9},
{{Rb, 0.0001, "fiber radius"}, 0.00001,0.0002},
{{\[Theta], (70 \[Pi])/180, "contact angle"}, 0, \[Pi]/2},
{{\[Phi], \[Pi]/2, "wick orientation"}, 0, \[Pi]/2}]
I've already tried replacing the bounds on each variable with quantities. I've tried defining the symbols as quantities before entering the Manipulate expression. I've also tried wrapping each variable in Quantity[] within the math expression. None of these works. Please help; there must be a way to do this, right?!
William Connacher, 2019-04-24T00:30:20Z

Sequence-to-sequence regression with recurrent neural networks
https://community.wolfram.com/groups/-/m/t/1647625
I recently wanted to learn more about recurrent neural networks (RNNs) and how to use them for regression problems. In particular, I wanted to try and combine RNNs with the ideas for Bayesian error bar estimation I discussed in my blog/community posts a while back:
https://blog.wolfram.com/2018/05/31/how-optimistic-do-you-want-to-be-bayesian-neural-network-regression-with-prediction-errors/
https://community.wolfram.com/groups/-/m/t/1319745
I also found the following answer on StackExchange about sequence learning with `SequenceAttentionLayer`, which helped me quite a bit to understand how to use these layers:
https://mathematica.stackexchange.com/a/143672/43522
With those resources in hand, I wanted to give time-series regression a shot.
First, let's use `FinancialData` as an easy source of a test dataset. Here I download 5 years of data from 5 different indices and turn them into a regularly sampled matrix of vectors and then split the dataset into 80/20% training/test data:
indices = {"GOOGL", "AS:ASML", "MSFT", "AAPL", "GE"};
data = FinancialData[indices, {Today - 5 Quantity[1, "Years"], Today}];
DateListPlot[data]
With[{
matrix = N@Normal[
TimeSeriesRescale[
TimeSeriesResample[TimeSeriesThread[Identity, TimeSeries /@ data]], {0, Max[Length /@ data] - 1}]
][[All, 2]]
},
{fe, data} = FeatureExtraction[matrix, "StandardizedVector", {"ExtractorFunction", "ExtractedFeatures"}]
];
{trainingData, testData} = TakeDrop[data, Round[0.8*Length[data]]];
Dimensions /@ {trainingData, testData}
![enter image description here][1]
Next, we define a function that can generate training data for the network from the data matrix we just made:
Clear[genFun];
genFun[data_, { seqLengthIn_, seqLengthout_, offset_}] := With[{
dim = Dimensions[data][[2]],
max = Max[1, Length[data] - seqLengthout - offset - seqLengthIn - 2]
},
Function[
With[{ri = RandomInteger[{1, max}, #BatchSize]},
<|
"Input" -> Function[data[[# ;; # + seqLengthIn - 1]]] /@ ri,
"Target" -> Join[
ConstantArray[{ConstantArray[0., dim]}, #BatchSize],
Function[With[{j = # + offset + seqLengthIn - 1}, data[[j ;; j + seqLengthout - 1]]]] /@ ri,
2
]
|>]]];
Each time the function is called, it generates a batch of sequences that will be used as training input and target sequences that the network should try to learn and reproduce, but the targets are prepended with a blank start-of-sequence vector:
Dimensions /@ genFun[data, {10, 6, 1}][<|"BatchSize" -> 3|>]
Out[72]= <|"Input" -> {3, 10, 5}, "Target" -> {3, 7, 5}|>
Next we put together our neural network:
Clear[encoderNetAttend, decoderNet, lossLayer, trainingNetAttention];
encoderNetAttend[n_, dropout_] := NetGraph[<|
"lstm" -> LongShortTermMemoryLayer[n, "Dropout" -> dropout],
"last" -> SequenceLastLayer[],
"post" ->
NetChain[{LinearLayer[n], ElementwiseLayer["SELU"],
DropoutLayer[], LinearLayer[]}]
|>,
{"lstm" -> "last" -> "post" -> NetPort["StateInit"], "lstm" -> NetPort["States"]}
];
decoderNet[n_, dropout_] := NetGraph[
<|
"part1" -> PartLayer[{All, 1}], "part2" -> PartLayer[{All, 2}],
"lstm" -> LongShortTermMemoryLayer[n, "Dropout" -> dropout],
"shape1" -> ReshapeLayer[{n, 1}], "shape2" -> ReshapeLayer[{n, 1}], "catState" -> CatenateLayer[2]
|>,
{
NetPort["Input"] -> NetPort["lstm", "Input"],
NetPort["lstm", "Output"] -> NetPort["Output"],
NetPort["StateIn"] -> "part1" -> NetPort["lstm", "State"],
NetPort["StateIn"] -> "part2" -> NetPort["lstm", "CellState"],
NetPort["lstm", "State"] -> "shape1", NetPort["lstm", "CellState"] -> "shape2",
{"shape1", "shape2"} -> "catState" -> NetPort["StateOut"]
},
"StateIn" -> {n, 2}
];
lossLayer["LogPrecision"] = ThreadingLayer@Function[{yObserved, yPredicted, logPrecision},
(yPredicted - yObserved)^2*Exp[logPrecision] - logPrecision
];
regressionNet[n_, dimIn_, dropout : _?NumericQ : 0.5] :=
NetInitialize@NetGraph[
<|
"encoder" -> encoderNetAttend[n, dropout],
"decoder" -> decoderNet[n, dropout],
"attention" -> SequenceAttentionLayer["Bilinear"],
"catenate" -> CatenateLayer[2],
"regression" -> NetMapOperator[LinearLayer[{2, dimIn}]],
"part1" -> PartLayer[{All, 1}], "part2" -> PartLayer[{All, 2}]
|>,
{
NetPort["Input"] -> "encoder",
NetPort["encoder", "StateInit"] -> NetPort["decoder", "StateIn"],
NetPort["encoder", "States"] -> NetPort["attention", "Input"],
NetPort["Prediction"] -> NetPort["decoder", "Input"],
NetPort[{"decoder", "Output"}] ->
NetPort["attention", "Query"], {"decoder", "attention"} ->
"catenate" -> "regression",
"regression" -> "part1" -> NetPort["mu"],
"regression" -> "part2" -> NetPort["logTau"]
},
"Input" -> {"Varying", dimIn},
"Prediction" -> {"Varying", dimIn}
];
trainingNetAttention[n_, dimIn_, dropout : _?NumericQ : 0.5] := NetInitialize@NetGraph[
<|
"regression" -> regressionNet[n, dimIn, dropout],
"loss" -> lossLayer["LogPrecision"],
"total" -> SummationLayer[], "most" -> SequenceMostLayer[], "rest" -> SequenceRestLayer[]
|>,
{
NetPort["Input"] -> NetPort["regression", "Input"],
NetPort["Target"] -> "most" -> NetPort["regression", "Prediction"],
NetPort["Target"] -> "rest",
{"rest", NetPort["regression", "mu"], NetPort["regression", "logTau"]} -> "loss" -> "total" -> NetPort["Loss"]
},
"Input" -> {"Varying", dimIn},
"Target" -> {"Varying", dimIn}
];
nHidden = 250;
pDrop = 0.25;
NetInformation[encoderNetAttend[nHidden, pDrop], "SummaryGraphic"]
NetInformation[decoderNet[nHidden, pDrop], "SummaryGraphic"]
NetInformation[regressionNet[nHidden, 2, 0.25], "SummaryGraphic"]
NetInformation[ trainingNetAttention[nHidden, 2, 0.25], "SummaryGraphic"]
![enter image description here][2]
The idea here is that the encoder produces a state vector that the decoder uses to start making predictions. The decoder, in turn, accepts the network's own predictions to iteratively crank out the next one. During training, though, we don't have the actual predictions from the network, so instead we feed in the sequence of actual target values and offset the target sequence at the loss layer by one with a `SequenceRestLayer`.
Time to train the network:
seqConfig = {15, 7, 1};
trainedTrainObject = NetTrain[
trainingNetAttention[nHidden, Length[indices], pDrop],
genFun[trainingData, seqConfig],
All,
TargetDevice -> "GPU", LossFunction -> "Loss",
TimeGoal -> 30*60, BatchSize -> 200,
ValidationSet -> genFun[testData, seqConfig][<|"BatchSize" -> 200|>]
];
trainedNet = trainedTrainObject["TrainedNet"];
trainedTrainObject["EvolutionPlots"]
![enter image description here][3]
To make predictions, the network needs to eat its own results iteratively, which we can do with `NestList`:
makePredictorFunction[trainedNet_, evaluationMode : ("Test" | "Train") : "Test", device : ("CPU" | "GPU") : "CPU"] := With[{
dim = Last@NetExtract[trainedNet, {"Input"}],
encoder = NetExtract[trainedNet, {"regression", "encoder"}],
regressionNet =
NetDelete[NetExtract[trainedNet, "regression"], "encoder"]
},
Function[{input, n},
Module[{
encodedInput = encoder[input, NetEvaluationMode -> evaluationMode, TargetDevice -> device],
vectorInputQ = Depth[input] === 4,
length = Length[input],
reshape
},
reshape = If[vectorInputQ, ConstantArray[#, Length[input]] &, Identity];
GeneralUtilities`AssociationTranspose@NestList[
regressionNet[
<|"Input" -> encodedInput["States"], "Prediction" -> #mu, "StateIn" -> #StateOut|>,
NetEvaluationMode -> evaluationMode,
TargetDevice -> device] &,
<|
"mu" -> reshape@{ConstantArray[0, dim]},
"StateOut" -> encodedInput["StateInit"]
|>,
n
][[2 ;;, {"mu", "logTau"}, If[vectorInputQ, All, Unevaluated[Sequence[]]], 1]]
]
]
];
For example, we can predict 5 steps ahead based on 30 points of historical data:
predictor = makePredictorFunction[trainedNet, "Train", "CPU"];
Dimensions /@ predictor[data[[;; 30]], 7]
Out[299]= <|"mu" -> {7, 5}, "logTau" -> {7, 5}|>
Here's a manipulate to visualize the predictions against the actual data. We sample the network several times to get a sense of the uncertainty from the training weights and combine that uncertainty with the variance predicted by the network:
Manipulate[
Manipulate[
Quiet@With[{
predictions = Dot[
{{1., 1.}, {1., 0.}, {1., -1.}},
{Mean[#mu],
nstdev*Sqrt[Variance[#mu] + Mean[Exp[-#logTau]]]} &@Map[
Transpose,
predictor[Table[dat[[Max[1, n - t] ;; UpTo[n]]], s], l]
][[{"mu", "logTau"}, All, All, index]]
],
trueData = ListPlot[
MapIndexed[{First[#2] + Max[1, n - t] - 1, #1} &,
Flatten[dat[[Max[1, n - t] ;; UpTo[n + l], index]]]
],
Joined -> True,
PlotStyle -> Black
]
},
Show[
ListPlot[
Thread@Legended[
MapIndexed[
{First[#2] + Max[1, n - t] - 1, #1} &,
Flatten[{
dat[[Max[1, n - t] ;; UpTo[n], index]],
#
}]
] & /@ predictions,
{StringForm["\[Mu] + `1` \[Sigma]", Dynamic[nstdev]],
"\[Mu]", StringForm["\[Mu] - `1` \[Sigma]", Dynamic[nstdev]]}
],
Joined -> True,
PlotStyle -> (Directive[#, Dashed] & /@ {Blue, Red, Blue}),
ImageSize -> 500,
Filling -> {1 -> {2}, 3 -> {2}}
],
trueData,
ImageSize -> Large,
PlotRange -> All
]
],
{{index, 1, "Index"}, Thread[Range[Length[indices]] -> indices]},
{{n, Round[Length[dat]/10], "Position"}, 1, Length[dat], 1},
{{t, 20, "History length"}, 1, 30, 1},
{{l, 30, "Extrapolation"}, 1, 50, 1},
{{s, 5, "Number of samples"}, 1, 20, 1},
{{nstdev, 2, "Standard deviations"}, 0, 4, 0.1},
Paneled -> False,
ContinuousAction -> True,
TrackedSymbols :> {index, n, t, l, s, nstdev},
SynchronousInitialization -> False
],
{{dat, trainingData, "Dataset"}, {trainingData -> "Training data",
testData -> "Test data"}},
TrackedSymbols :> {dat},
SynchronousInitialization -> False
]
![enter image description here][4]
The results on the validation set do not look great, but the comparison between the first 4 years of the data and the last one may not be fair. Or it may simply be necessary to involve many more indices before it becomes feasible to do extrapolations of this type. At any rate, I hope that this code is useful to other users here! I will probably tweak bits of code here and there over the next few days and maybe add more commentary, so be sure to check back occasionally if you're interested.
**Update**
Some of the code needed to be updated slightly to work in the recently released Mathematica V12. The `SequenceAttentionLayer` is being phased out, but still works (see the "Properties & Relations" documentation for `AttentionLayer` for information about how to replace `SequenceAttentionLayer` with `AttentionLayer`). The financial data also needed a few more processing steps in V12 before it can be used. Please see the attached notebook for the V12 version of the code.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=8936datelistplot.png&userId=839741
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1613networks.png&userId=839741
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=progressplot.png&userId=839741
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=manipulate.png&userId=839741
Sjoerd Smit, 2019-04-05T15:43:41Z

Apr 23, 2019 - Operator Notation and Application: Data Extraction notebook
https://community.wolfram.com/groups/-/m/t/1669320
Can Wolfram provide a link to download the notebook for this live coding session? The content was excellent!
John McCallon, 2019-04-25T15:40:48Z

High Performance NLP with ELMo and BERT
https://community.wolfram.com/groups/-/m/t/1669242
For anyone interested in understanding ELMo, BERT and MT-DNN for NLP in simple terms, check out my article :slightly_smiling_face: https://medium.com/@laura.mitchell1604/achieving-state-of-the-art-results-in-natural-language-processing-d6fd25954a90
laura mitchell, 2019-04-25T15:33:05Z

Roadmap to Mathematica 12 on Raspberry Pi?
https://community.wolfram.com/groups/-/m/t/1664758
Great to see Mathematica 12 released and I can't wait to try the new functionality!
What is the timeline and roadmap to get Mathematica 12 to the Pi? Thanks!
Michael Byczkowski, 2019-04-20T08:40:16Z

How to use the GraphData to draw graphs
https://community.wolfram.com/groups/-/m/t/1669102
Hello everyone;
Executing GraphData[3] outputs the results shown below:
GraphData[3]
{{"Empty", 3}, "P2+K1", {"Path", 3}, "TriangleGraph"}
How can I display the graphs in the above list?
Just like the following graphs:
![enter image description here][1]
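Since GraphData[3] returns a list of graph names, and GraphData applied to a name returns the corresponding Graph object, the list can be rendered directly; a minimal sketch:

```mathematica
names = GraphData[3];  (* standard names of the graphs on 3 vertices *)
GraphicsRow[GraphData /@ names, Frame -> All]  (* draw them side by side *)
```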
Thanks in advance for your help !
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=graphofoerder3.png&userId=1618716
Licheng Zhang, 2019-04-25T12:09:55Z

Import a downloaded Cactus Graph program from a web?
https://community.wolfram.com/groups/-/m/t/1618730
Hello everyone,
I downloaded a notebook about Cactus Graphs from *http://mathworld.wolfram.com/CactusGraph.html.* **I want to get all cactus graphs of order 8.**
But I don't know how to **use** the program from it. What should I do?
Thanks
Licheng Zhang, 2019-02-23T05:36:44Z

Wish to automate perturbation solution to a 4th order differential equation
https://community.wolfram.com/groups/-/m/t/1668740
In the perturbation solution, each subsequent addition to the series is based on the previous one. I have manually written the solution terms up to 4th order but wish to automate the calculation up to 10th order, and I do not see how to label each subsequent term so it can be fed into the next one iteratively. Anyone have any ideas? Attached is my present code.
Carroll Stroh, 2019-04-25T11:03:21Z

Solving Four Trigonometric Equations Simultaneously
https://community.wolfram.com/groups/-/m/t/1668728
I am very new to the Wolfram Language and am trying to solve the below four equations simultaneously in Mathematica, but I receive the following error message:
"Solve was unable to solve the system with inexact coefficients or the system obtained by direct rationalization of inexact numbers present in the system. Since many of the methods used by Solve require exact input, providing Solve with an exact version of the system may help."
I tried rounding these long numbers and converting them to fraction form, but I receive another error message:
"This system cannot be solved with the methods available to Solve."
Thanks in advance for any help.
Solve[{a*Cos[b (88 + c)] + d == 5.981937616*88 - 527.770953984, -a*b*Sin[b (88 + c)] == 5.981937616, a*Cos[b (120 + c)] + d == -20, -a*b*Sin[b (120 + c)] == 0}, {a, b, c, d}]
Joseph Ninio, 2019-04-25T07:04:45Z

GeoStyling["ReliefMap"] doesn't work!
https://community.wolfram.com/groups/-/m/t/1668928
In "Version : 12.0.0 for Mac OS X x86 (64-bit) (April 7, 2019)", when I input the following code on the notebook,
GeoGraphics[{EdgeForm[Black],
Polygon[EntityClass["Country", "Korea"]]},
GeoRange -> {{33, 39}, {125, 130}},
GeoProjection -> "Equirectangular",
GeoBackground -> GeoStyling["ReliefMap"],
GeoGridLines -> Quantity[15, "AngularDegrees"], Frame -> True]
I got an error message : GeoElevationData::etiles: Unable to download 9 tiles of geo elevation data.
Actually, this code had worked very well just a few days ago. Why did it suddenly stop working?
If someone knows the reason, would you please reply?
Thanks a lot!
HwaSung Cheon, 2019-04-25T09:53:48Z

Add contour lines on a map?
https://community.wolfram.com/groups/-/m/t/1668023
Hi,
I would like to know how I can add Contour lines on the map?
X = {1, 2, 4, 5, 7, 9};
Y = {3, 8, 9, 2, 6, 3};
Z = {100, 110, 120, 60, 90, 70};
T = Transpose[{X, Y, Z}];
ListContourPlot[T, ColorFunction -> "BlueGreenYellow",
PlotLegends -> Automatic]
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=7752123.png&userId=943918
Thanks for your help.
M.A. Ghorbani, 2019-04-24T13:22:23Z

Solve equation with unknown numeric operators
https://community.wolfram.com/groups/-/m/t/1668677
How to solve an equation with unknown numeric operators in Wolfram Alpha?
75 ? 4 ? 10 ? 6 ? 2 ? 2 = 286
The ? stand for the basic numeric operators + - * /.
With a bash shell, I can use the brace expansion to generate all combinations and use the basic calculator (bc) to evaluate it:
n=0
for i in 75{+,-,*,/}4{+,-,*,/}10{+,-,*,/}6{+,-,*,/}2{+,-,*,/}2; do
n=$((n+1))
j=$(echo $i | bc)
if [ $j -eq 286 ]; then
echo "Result is ${i}=${j} (${n} iterations)" | sed -r 's/([=+*\/-])/ \1 /g'
break
fi
done
The answer is
75 * 4 + 10 - 6 * 2 * 2 = 286
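The same brute-force search can be sketched in the Wolfram Language (outside of Wolfram Alpha) by generating all operator tuples with Tuples and evaluating the resulting strings; note that, unlike bc's integer division, Mathematica's "/" is exact rational division, which does not change the answer here:

```mathematica
nums = {75, 4, 10, 6, 2, 2};
ops = Tuples[{"+", "-", "*", "/"}, Length[nums] - 1];  (* 4^5 = 1024 combinations *)
(* keep operator tuples whose interleaved expression evaluates to 286 *)
hits = Select[ops,
   ToExpression[StringJoin[Riffle[ToString /@ nums, #]]] == 286 &];
StringJoin[Riffle[ToString /@ nums, #]] & /@ hits
```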
I understand this could be tricky to implement if you also permute all the numbers.
Guy Baconniere, 2019-04-25T08:20:03Z

Test the significance of the result from NonlinearModelFit?
https://community.wolfram.com/groups/-/m/t/1665378
I have some data, and I have done a NonlinearModelFit on it, actually fitting it to a sine curve. I can get the "RSquared" and "AdjustedRSquared", e.g. with nlm["AdjustedRSquared"] where nlm is the output of the NonlinearModelFit. I now want to test the significance of the result. I would like to end up with a single number p, so that I could say, "the probability of getting such a fit by chance is p".
NonlinearModelFit has properties like "ParameterPValues" and "ParameterTStatistics". However, I have looked in the StatisticalModelAnalysis tutorial, and there is no real explanation of how they might be used or generally how to do significance testing.
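For reference, the parameter-level tests are easy to inspect; a sketch on synthetic data (the model and data below are placeholders, not the original ones):

```mathematica
data = Table[{x, Sin[x] + RandomReal[{-0.2, 0.2}]}, {x, 0., 10., 0.1}];
nlm = NonlinearModelFit[data, a Sin[b x + c] + d,
   {{a, 1}, {b, 1}, {c, 0}, {d, 0}}, x];
nlm["ParameterTable"]    (* estimates, standard errors, t-statistics, p-values *)
nlm["ParameterPValues"]  (* p-values for the null that each parameter is zero *)
```

A single whole-model p-value of the kind described (the probability of such a fit arising by chance) is not a built-in property; one common workaround is a permutation test: refit after randomly shuffling the y values many times and count how often the shuffled AdjustedRSquared reaches the observed one.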
Does NonlinearModelFit have built-in ways to get significance (the probability of the fit being due to chance)? Or is there a good tutorial on using the output of Mathematica's NonlinearModelFit to do significance testing?
Marc Widdowson, 2019-04-21T13:46:43Z

Contour plot implicit equation with conditions
https://community.wolfram.com/groups/-/m/t/1668472
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-04-24at6.01.26PM.png&userId=1668504
I want to contour plot x versus T but don't know how to plot it. Appreciate your help.
Jake Chen, 2019-04-24T23:03:28Z

Running Streak Counter
https://community.wolfram.com/groups/-/m/t/1668269
Hello everyone!
After I finished reading An Elementary Introduction to Wolfram Language earlier this year, I was so excited to try out something that would be useful in my own life. Before I get into what I did, I'll just give a little background.
I have been running for several years now, and in September of 2017 I decided that I would start a "run streak." A running streak is defined as running at least 1 mile (1.61km) every calendar day consecutively. I did this mostly because I found that I couldn't stick to only running 3 or 4 days a week - I would suddenly find myself not having run for several weeks or months. A little while before I started my running streak, I began using RunKeeper to track my runs. I would go to the RunKeeper website and copy all my running data to an excel spreadsheet to see how I was doing by hand.
I knew there had to be a better way, and once I was introduced to Wolfram Language I was sure I could do much better!
First, I used ServiceConnect and retrieved my running data from my last 1250 days (The service returns running data in chunks of 25 once given a date) using NestList
runkeeper=ServiceConnect["RunKeeper","New"]
runkeeper["UserData"]["Name"] (*Display Username after logging in to RunKeeper, just to make sure*)
data=runkeeper["FitnessActivities","NoLaterThan"->#]&/@NestList[DayPlus[#,-25]&,Today,50]
Great! But to do cool things with the data it needs to be cleaned up a bit since there are empty entries which will return errors, and the data is a list of associations which isn't nice to deal with.
cleandata[d_]:=ArrayFlatten[d//.{}->Sequence[],1]
dataset=Dataset[cleandata[data]]
People often record other activities using RunKeeper like swimming, biking etc. But only running should be included in any calculations so that needs to be defined.
runningdataset=dataset[Select[#Type\[Equal]"Running"&]]
Here's a sample of what my running data looks like today:
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=RunningData.png&userId=1633713
Now, dealing with time is a pain. I spent the last two summers in Germany, and most people have run in a different time zone at least a few times. Dealing with this programmatically gave me a bit of trouble, but I settled on taking the AbsoluteTime of the datetime (AbsoluteTime returns the total number of seconds since the beginning of January 1, 1900). Then, I wanted to find the current running streak with the criterion that each run must have occurred within 2 days (172,000 s) of each other to count.
absolutesdata[ds_]:=AbsoluteTime/@Query[All,#StartTime&]@ds
differencesdata[ds_]:=#*-1&/@Differences[ds]
removesduplicates[ds_]:=Select[ds,#>3600&] (*Remove activities that were started within 1 hour of each other*)
currentstreaks[ds_]:=LengthWhile[removesduplicates[differencesdata[absolutesdata[ds]]],#<172000&]+1
(Plus one because we are counting differences between runs, which is one fewer than the number of runs.)
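As a toy illustration of the streak logic (with made-up gap values), the LengthWhile criterion behaves like this:

```mathematica
(* Hypothetical gaps, in seconds, between consecutive runs *)
gaps = {90000, 85000, 86400, 200000, 90000};
LengthWhile[gaps, # < 172000 &] + 1  (* gives 4: three short gaps before the long one, plus one *)
```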
Finally! The moment I spent so long trying to get to - a variable that contains my current streak number and a dataset of runs in my current streak:
currentstreak=currentstreaks[runningdataset]
currentstreakdataset=Query[1;;currentstreak,All]@runningdataset
My currentstreak=590 days!
Right now this code doesn't test for the >1 mile criterion, but that's something that I can easily include in a later version. I did a couple of other things with my data, like visually representing the distance of each run on a plot and returning a dataset of runs that are longer than a certain distance, but I think I'll save that for another post. For something so simple I'm really happy with it!
I would like to deploy this code by making a little web app that anyone can use. Unfortunately I immediately hit a stumbling block! I can't figure out how to make ServiceConnect work using ExportForm/FormFunction. I tried making a little button that you can click to use ServiceConnect but it didn't seem to work. Any help on how to deploy this code would be so great! I think it would be really cool to share this with the running community.
Thanks for reading! I'm going to be attending the Wolfram Summer School this year, and I'm looking forward to meeting some people who read this post in person!Stephen Schroeder2019-04-24T23:15:33ZResult of Equal with multiple arguments
https://community.wolfram.com/groups/-/m/t/1668066
In the documentation it is written: "Approximate numbers with machine precision or higher are considered equal if they differ in at most their last seven binary digits (roughly their last two decimal digits)." and "e1==e2==e3 gives True if all the ei are equal."
This works fine with a comparison of two arguments in all combinations, but it fails with a comparison of three values. Why?
a = 0.999999999999991;
b = 1.000000000000000;
c = 1.000000000000001;
Print[a == b, " : a==b is True"]
Print[b == c, " : b==c is True"]
Print[a == c, " : a==c is also True"]
Print[a == b == c, " : a==b==c should be True as well?"]
> True : a==b is True
> True : b==c is True
> True : a==c is also True
> False : a==b==c should be True as well?
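A possible workaround, given that the pairwise comparisons above all return True, is to test the pairs explicitly (a sketch; this does not explain the behavior of the chained form):

```mathematica
(* All three pairs: {a == b, a == c, b == c}, combined with And *)
And @@ (Equal @@@ Subsets[{a, b, c}, {2}])  (* True, since every pair compares equal *)
```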
I use Mathematica 10.0.2.0 on Windows 7 (x64)Martin Guttmann2019-04-24T18:48:35ZInvert a 2x2 matrix with 2x2 matrix elements?
https://community.wolfram.com/groups/-/m/t/1651695
If this is my matrix a as follows:
a00 a01
a10 a11
Element a00 would be comprised of a matrix b as follows:
b00 b01
b10 b11
And so on ...
I want to invert my A matrix using a Mathematica algorithm.
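In case it helps, one possible approach (a sketch with made-up block values) is to flatten the block matrix with ArrayFlatten, invert it, and re-partition the result into 2x2 blocks:

```mathematica
b00 = {{1, 2}, {3, 4}}; b01 = {{0, 1}, {1, 0}};
b10 = {{2, 0}, {0, 2}}; b11 = {{1, 1}, {0, 1}};
a = ArrayFlatten[{{b00, b01}, {b10, b11}}];   (* 4x4 matrix built from 2x2 blocks *)
invBlocks = Partition[Inverse[a], {2, 2}]     (* the inverse, re-partitioned into 2x2 blocks *)
```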
Thanks for your help.
Stanstan gianzero2019-04-07T16:35:03ZDetrimental changes to ChoiceDialog in v12
https://community.wolfram.com/groups/-/m/t/1668433
There is a change to the rendering ChoiceDialog, if there are more than 2 buttons, from v11 to v12. The examples below are from evaluating the input cell of the 2nd of the two Basic Examples in the [documentation][1]
## V11.3
![enter image description here][2]
## V12
![enter image description here][3]
Note that v12 has an uncalled-for difference in the widths of the buttons.
[1]: https://reference.wolfram.com/language/ref/ChoiceDialog.html?q=ChoiceDialog
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=v11.3.png&userId=93385
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=v12.png&userId=93385Hans Milton2019-04-24T18:53:18ZProgram speed up
https://community.wolfram.com/groups/-/m/t/1668320
I have tried all the tips and built-in functions, but the program still runs a little slowly. Any advice on further improving the speed would be appreciated.
ThanksNabeel Butt2019-04-24T15:31:03ZSolver for unsteady flow with the use of Mathematica FEM
https://community.wolfram.com/groups/-/m/t/1433064
![fig7][331]
I started the discussion [here][1], but I also want to repeat it on this forum.
There are many commercial and open-source codes for solving unsteady flow problems.
We are interested in the possibility of solving these problems using Mathematica FEM. Previously proposed solvers for stationary incompressible isothermal flows:
Solving 2D Incompressible Flows using Finite Elements:
http://community.wolfram.com/groups/-/m/t/610335
FEM Solver for Navier-Stokes equations in 2D:
http://community.wolfram.com/groups/-/m/t/611304
Nonlinear FEM Solver for Navier-Stokes equations in 2D:
https://mathematica.stackexchange.com/questions/94914/nonlinear-fem-solver-for-navier-stokes-equations-in-2d/96579#96579
We give several examples of the successful application of the finite element method to unsteady problems, including nonisothermal and compressible flows. We will begin with two standard tests that were proposed for this class of problems by
M. Schäfer and S. Turek, Benchmark computations of laminar flow around a cylinder (with support by F. Durst, E. Krause and R. Rannacher). In E. Hirschel, editor, Flow Simulation with High-Performance Computers II. DFG priority research program results 1993-1995, number 52 in Notes Numer. Fluid Mech., pp. 547-566. Vieweg, Weisbaden, 1996. https://www.uio.no/studier/emner/matnat/math/MEK4300/v14/undervisningsmateriale/schaeferturek1996.pdf
![fig8][332]
Let us consider the flow in a flat channel around a cylinder at Reynolds number = 100, when self-oscillations occur leading to the detachment of vortices in the aft part of the cylinder. In this problem it is necessary to calculate the drag coefficient, lift coefficient and pressure difference between the frontal and aft parts of the cylinder as functions of time, the maximum drag coefficient, maximum lift coefficient, Strouhal number, and pressure difference $\Delta P(t)$ at $t = t_0 + 1/(2f)$. The frequency f is determined by the period of oscillation of the lift coefficient, f = f(c_L). The data for this test, the code and the results are shown below.
H = .41; L = 2.2; {x0, y0, r0} = {1/5, 1/5, 1/20};
Ω = RegionDifference[Rectangle[{0, 0}, {L, H}], Disk[{x0, y0}, r0]];
RegionPlot[Ω, AspectRatio -> Automatic]
K = 2000; Um = 1.5; ν = 10^-3; t0 = .004;
U0[y_, t_] := 4*Um*y/H*(1 - y/H)
UX[0][x_, y_] := 0;
VY[0][x_, y_] := 0;
P0[0][x_, y_] := 0;
Do[
{UX[i], VY[i], P0[i]} =
 NDSolveValue[{{Inactive[
       Div][({{-μ, 0}, {0, -μ}}.Inactive[Grad][u[x, y], {x, y}]), {x, y}] +
      D[p[x, y], x] + (u[x, y] - UX[i - 1][x, y])/t0 +
      UX[i - 1][x, y]*D[u[x, y], x] +
      VY[i - 1][x, y]*D[u[x, y], y],
     Inactive[
       Div][({{-μ, 0}, {0, -μ}}.Inactive[Grad][v[x, y], {x, y}]), {x, y}] +
      D[p[x, y], y] + (v[x, y] - VY[i - 1][x, y])/t0 +
      UX[i - 1][x, y]*D[v[x, y], x] +
      VY[i - 1][x, y]*D[v[x, y], y],
     D[u[x, y], x] + D[v[x, y], y]} == {0, 0, 0} /. μ -> ν, {
    DirichletCondition[{u[x, y] == U0[y, i*t0], v[x, y] == 0},
     x == 0.],
    DirichletCondition[{u[x, y] == 0., v[x, y] == 0.},
     0 <= x <= L && y == 0 || y == H],
    DirichletCondition[{u[x, y] == 0,
      v[x, y] == 0}, (x - x0)^2 + (y - y0)^2 == r0^2],
    DirichletCondition[p[x, y] == P0[i - 1][x, y], x == L]}}, {u, v,
   p}, {x, y} ∈ Ω,
  Method -> {"FiniteElement",
    "InterpolationOrder" -> {u -> 2, v -> 2, p -> 1},
    "MeshOptions" -> {"MaxCellMeasure" -> 0.001}}], {i, 1, K}];
{ContourPlot[UX[K/2][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, Contours -> 20,
PlotPoints -> 25, PlotLabel -> u, MaxRecursion -> 2],
ContourPlot[VY[K/2][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, Contours -> 20,
PlotPoints -> 25, PlotLabel -> v, MaxRecursion -> 2,
PlotRange -> All]} // Quiet
{DensityPlot[UX[K][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, PlotPoints -> 25,
PlotLabel -> u, MaxRecursion -> 2],
DensityPlot[VY[K][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, PlotPoints -> 25,
PlotLabel -> v, MaxRecursion -> 2, PlotRange -> All]} // Quiet
dPl = Interpolation[
Table[{i*t0, (P0[i][.15, .2] - P0[i][.25, .2])}, {i, 0, K, 1}]];
cD = Table[{t0*i,
     NIntegrate[(-ν*(-Sin[θ]*(Sin[θ]*
              Derivative[0, 1][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
             Cos[θ]*
              Derivative[1, 0][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]]) +
          Cos[θ]*(Sin[θ]*
              Derivative[0, 1][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
             Cos[θ]*
              Derivative[1, 0][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]]))*
         Sin[θ] -
        P0[i][x0 + r Cos[θ], y0 + r Sin[θ]]*Cos[θ]) /. {r -> r0},
      {θ, 0, 2*Pi}]}, {i, 1000, 2000}]; // Quiet
cL = Table[{t0*i,
     -NIntegrate[(-ν*(-Sin[θ]*(Sin[θ]*
               Derivative[0, 1][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
              Cos[θ]*
               Derivative[1, 0][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]]) +
           Cos[θ]*(Sin[θ]*
               Derivative[0, 1][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
              Cos[θ]*
               Derivative[1, 0][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]]))*
          Cos[θ] +
         P0[i][x0 + r Cos[θ], y0 + r Sin[θ]]*Sin[θ]) /. {r -> r0},
       {θ, 0, 2*Pi}]}, {i, 1000, 2000}]; // Quiet
{ListLinePlot[cD,
AxesLabel -> {"t", "\!\(\*SubscriptBox[\(c\), \(D\)]\)"}],
ListLinePlot[cL,
AxesLabel -> {"t", "\!\(\*SubscriptBox[\(c\), \(L\)]\)"}],
Plot[dPl[x], {x, 0, 8}, AxesLabel -> {"t", "ΔP"}]}
f002 = FindFit[cL, a*.5 + b*.8*Sin[k*16*t + c*1.], {a, b, k, c}, t]
Plot[Evaluate[a*.5 + b*.8*Sin[k*16*t + c*1.] /. f002], {t, 4, 8},
Epilog -> Map[Point, cL]]
k0=k/.f002;
Struhalnumber = .1*16*k0/2/Pi
cLm = MaximalBy[cL, Last]
sol = {Max[cD[[All, 2]]], Max[cL[[All, 2]]], Struhalnumber,
dPl[cLm[[1, 1]] + Pi/(16*k0)]}
Fig. 1 shows the components of the flow velocity and the required coefficients. Our solution of the problem, followed by the bounds required by the test:
{3.17805, 1.03297, 0.266606, 2.60427}
lowerbound= { 3.2200, 0.9900, 0.2950, 2.4600};
upperbound = {3.2400, 1.0100, 0.3050, 2.5000};
![Fig1][2]
Note that our results differ from the allowable values by several percent, but if you look at all the results in Table 4 of the cited article, the agreement is quite acceptable. The worst prediction is for the Strouhal number. We note that we use the explicit Euler method, which gives an underestimate of the Strouhal number, as follows from the data in Table 4.
The next test differs from the previous one in that the input velocity varies according to `U0[y_, t_] := 4*Um*y/H*(1 - y/H)*Sin[Pi*t/8]`. It is necessary to determine the time dependence of the drag and lift coefficients over a half-period of oscillation, as well as the pressure drop at the final moment of time. Fig. 2 shows the components of the flow velocity and the required coefficients. Our solution of the problem, followed by the bounds required by the test:
sol = {3.0438934441256595`,
0.5073345082785012`, -0.11152933279750943`};
lowerbound = {2.9300, 0.4700, -0.1150};
upperbound = {2.9700, 0.4900, -0.1050};
![Fig2][3]
For this test, the agreement with the data in Table 5 is good. Consequently, the two tests are almost completely passed.
I wrote and debugged this code using Mathematica 11.0.1. But when I ran this code in Mathematica 11.3, I got strange pictures; for example, the disk is rendered as a hexagon and the size of the region is changed.
![Fig3][4]
In addition, the numerical solution of the problem has changed; for example, for test 2D2:
{3.17805, 1.03297, 0.266606, 2.60427} (v11.0.1)
{3.15711, 1.11377, 0.266043, 2.54356} (v11.3)
The attached file contains the working code for test 2D3 describing the flow around the cylinder in a flat channel with a change in the flow velocity.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D2.png&userId=1218692
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D2.png&userId=1218692
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D3.png&userId=1218692
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Math11.3.png&userId=1218692
[331]: http://community.wolfram.com//c/portal/getImageAttachment?filename=CylinderRe100test2D2.gif&userId=1218692
[332]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2D2test.png&userId=1218692Alexander Trounev2018-08-31T11:44:04ZImproving speed of Binomial and Multinomial Random Draws
https://community.wolfram.com/groups/-/m/t/1668032
The Binomial and Multinomial random number generators in Mathematica are fast if multiple draws are needed from the same distribution, i.e., when the distribution parameters do not change across the draws. This can be seen by generating, for example, 500,000 draws from the Binomial distribution.
In[30]:= AbsoluteTiming[ RandomVariate[BinomialDistribution[100, 0.6], 500000];]
Out[30]= {0.017365, Null}
However, the speed is slower, relative to some other systems, when the parameters change across draws. Such a situation arises often when performing certain Monte Carlo simulations.
For example, suppose we have a vector nvec that contains the number of binomial trials for each draw and a vector pvec that contains the corresponding probabilities of success.
nvec = RandomInteger[{5, 125}, 500000];
pvec = RandomReal[{0, 1}, 500000];
Then we have
In[28]:= AbsoluteTiming[
Mean[Table[
RandomVariate[BinomialDistribution[nvec[[i]], pvec[[i]]]], {i, 1,
Length@nvec}]] // N
]
Out[28]= {36.2144, 32.5283}
This can be addressed via an implementation of fast random number generators for the Binomial as described in:
Kachitvichyanukul, V.; Schmeiser, B. W. "Binomial random variate generation." Comm. ACM 31 (1988), no. 2, 216-222.
A Mathematica implementation based on the above paper involves the following three functions.
When the number of trials is small, the geometric method from
Devroye, L. "Generating the maximum of independent identically distributed random variables." Computers and Mathematics with Applications 6, 1960, 305-315, can be used, as in the following function.
ablRanBinGeom = Compile[{{n, _Integer}, {p, _Real}},
Module[
{y = 0, x = 0, comp = 0, er, v, scale = 0.0},
If[p >= 1.0, Return[n]];
If[p <= 0.5, comp = 0; scale = -1.0/Internal`Log1p[-p], comp = 1;
scale = -1.0/Log[p]
];
While[True,
er = -Log[RandomReal[]];
v = er*scale;
If[v > n, Break[]];
y = y + Ceiling[v];
If[y > n, Break[]];
x = x + 1;
];
If[comp == 1, n - x, x]
],
CompilationTarget -> "C", RuntimeAttributes -> {Listable}
];
For larger n, we can use another function, based on the Communications of the ACM paper referred to above.
ablRanBinBtpe = Compile[{{n, _Integer}, {p, _Real}},
Module[
{comp = 0, r, q, nrq, fM, M, Mi, p1, xM, xL, xR, c, a, lamL,
lamR, p2, p3, p4,
y, u, v, x, k, S, F, t, A, x1, f1, z, w, rho},
If[p >= 1.0, Return[n]];
If[p <= 0.5,
comp = 0; r = p; q = 1.0 - p,
comp = 1; r = 1.0 - p; q = p
];
nrq = n*r*q;
fM = (n + 1)*r;
M = Floor[fM];
Mi = Floor[M];
p1 = Floor[2.195*Sqrt[nrq] - 4.6*q] + 0.5;
xM = M + 0.5;
xL = xM - p1;
xR = xM + p1;
c = 0.134 + 20.5/(15.3 + M);
a = (fM - xL)/(fM - xL*r);
lamL = a*(1.0 + 0.5*a);
a = (xR - fM)/(xR*q);
lamR = a*(1.0 + 0.5*a);
p2 = p1*(1.0 + 2.0*c);
p3 = p2 + c/lamL;
p4 = p3 + c/lamR;
y = 0;
While[True, (* Step 1 *)
u = p4*RandomReal[];
v = RandomReal[];
Which[
u <= p1,
y = Floor[xM - p1*v + u];
Break[],
u <= p2, (* Step 2 *)
x = xL + (u - p1)/c;
v = v*c + 1.0 - Abs[M - x + 0.5]/p1;
If[v > 1, Continue[]];
y = Floor[x],
u <= p3 ,(* Step 3 *)
y = Floor[xL + Log[v]/lamL];
If[y < 0, Continue[]];
v = v*(u - p2)*lamL,
True, (* Step 4 *)
y = Floor[xR - Log[v]/lamR];
If[y > n, Continue[]];
v = v*(u - p3)*lamR
];
A = Log[v];
If[A > (LogGamma[Mi + 1] + LogGamma[n - Mi + 1] +
LogGamma[y + 1] + LogGamma[n - y + 1] + (y - Mi)*Log[r/q]),
Continue[]
];
Break[];
];
If[comp == 1, n - y, y]
],
CompilationTarget -> "C",
CompilationOptions -> {"InlineExternalDefinitions" -> True},
RuntimeAttributes -> {Listable}
];
We can automatically choose the appropriate method for different values of the parameters using the function
ablRanBinomial = Compile[{{n, _Integer}, {p, _Real}},
Module[
{q = 1.0 - p},
If[n*Min[p, q] > 20, ablRanBinBtpe[n, p], ablRanBinGeom[n, p]]
],
CompilationTarget -> "C",
CompilationOptions -> {"InlineExternalDefinitions" -> True},
RuntimeAttributes -> {Listable}
];
The above functions result in a considerable improvement in the speed of random number generation for the Binomial.
In[32]:= AbsoluteTiming[
Mean@Table[
ablRanBinomial[nvec[[i]], pvec[[i]]], {i, 1, Length@nvec}]] // N
Out[32]= {0.413019, 32.5307}
This can be further improved by leveraging the listability property of the functions as follows.
In[33]:= AbsoluteTiming[Mean@ablRanBinomial[nvec, pvec] // N]
Out[33]= {0.156881, 32.5337}
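As a quick sanity check on the compiled sampler (a sketch, not part of the benchmark above), the sample mean for fixed parameters can be compared with the built-in generator:

```mathematica
SeedRandom[1234];
(* Listable compiled function threads over the parameter vectors *)
custom  = ablRanBinomial[ConstantArray[100, 10^5], ConstantArray[0.6, 10^5]];
builtin = RandomVariate[BinomialDistribution[100, 0.6], 10^5];
{N@Mean[custom], N@Mean[builtin]}  (* both should be close to the exact mean n*p = 60. *)
```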
The above functions can then be used for generating multinomial draws. The multinomial distribution is specified via two parameters. n is the number of trials, and p is a vector of probabilities that sum to 1.
Multinomial variates can be generated using the following function:
ablRanMultinomial=Compile[{{n, _Integer},{p, _Real, 1}},
Module[
{k=Length[p], rp,i, km1,cn,pi,xi,x},
rp=1.0;cn=n;
i=0;
km1=k-1;
x=Table[0, {k}];
While[i<km1 && cn >0,
i+=1;
pi=p[[i]];
If[pi < rp,
xi=ablRanBinomial[cn, pi/rp];
x[[i]]=xi;
cn-=xi;
rp-=pi;,
x[[i]]=cn;
cn=0
];
];
If[i==km1, x[[k]]=cn,
Do[x[[j]]=0,{j,i+1, k}]
];
x
],
CompilationTarget->"C",
CompilationOptions->{"InlineExternalDefinitions" -> True},
RuntimeAttributes->{Listable}
];
This can be used as follows:
In[36]:= ablRanMultinomial[20, {0.5, 0.3, 0.2}]
Out[36]= {12, 7, 1}Asim Ansari2019-04-24T14:20:43ZGet Non-negative ODE solution?
https://community.wolfram.com/groups/-/m/t/1667480
Is there any Mathematica option equivalent to "NonNegative" in "odeset" in MATLAB?
Here is the documentation about the function I’m talking about:
https://www.mathworks.com/help/matlab/math/nonnegative-ode-solution.html
I think it’s a very useful function and it’d be nice to have something similar in Mathematica.
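Not a full equivalent of MATLAB's NonNegative option, but one workaround sketch uses WhenEvent to stop (or otherwise act on) the integration when a component reaches zero:

```mathematica
(* Toy example: halt the integration when y[t] would cross zero *)
sol = NDSolveValue[{y'[t] == -1/2, y[0] == 1,
    WhenEvent[y[t] == 0, "StopIntegration"]}, y, {t, 0, 10}];
sol["Domain"]  (* the returned InterpolatingFunction ends near t = 2 *)
```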
Thanks in advance!David Aragonés2019-04-23T17:50:09ZHow to stop a WSM simulation change variables and then continue?
https://community.wolfram.com/groups/-/m/t/1660482
Hi everybody,
coming from [System Dynamics][1], I am wondering how stopping a simulation, changing variables (including state reinitialization), and continuing the simulation can be achieved from the outside (e.g. using Mathematica's integration with the WSM)?
Use cases for this would first of all be interactive games (e.g. ["Applying Modelica Tools to System Dynamics Based Learning Games: Project Management Gameo"][2] by Miettinen et al. (2016)), but also it may be needed to control Bayesian identification procedures that require state reinitialization when observed state data are available.
How should one best go about this using WSM and Mathematica? One would need to stop and continue/restart the simulation at arbitrary points in time with the following requirements:
- all states (even "hidden" states within some components) must be at the exact state they were in, when the simulation was interrupted
- some variables (e.g. discrete vars) may be modified by the user and the simulation has to use that new value when it is continued
- If noise is present in the model, the stochastic processes must be continued using the appropriate seed
- ( optionally: selected states may also be "reinitialized" )
Thanks in advance for some ideas!
Regards,
Guido
PS: A related question can be found on [StackOverflow][3] -- unfortunately the answers only relate to Dymola as far as I can see.
[1]: https://en.wikipedia.org/wiki/System_dynamics
[2]: http://Applying%20Modelica%20Tools%20to%20System%20Dynamics%20Based%20Learning%20Games:%20Project%20Management%20Game
[3]: https://stackoverflow.com/q/34661475/5363743Guido Wolf Reichert2019-04-15T10:17:01Z[✓] Place a point out of a region using GeometricScene?
https://community.wolfram.com/groups/-/m/t/1667606
In a GeometricScene, I want a point to lie outside a region: say, a triangle and a point B outside it. How can I do this?Vasileios Micholos2019-04-23T19:15:48Z[✓] Evaluate complex function Cos[Sqrt[s]]
https://community.wolfram.com/groups/-/m/t/1665868
Dear Sir/Madam,
I have evaluated the complex function Cos[Sqrt[s]] (s is a complex variable) in Mathematica version 10. The results are presented in the attached file. However, Mathematica cannot evaluate the function along the negative direction of the x-axis. There was no such problem in MATLAB. How can I resolve this problem in Mathematica?
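A mathematical observation that may help (not a verified diagnosis of the version-10 behavior): since cosine is even, Cos[Sqrt[s]] is actually an entire function of s, and on the negative real axis it equals Cosh[Sqrt[-s]], which can be the numerically safer form there:

```mathematica
(* The two forms agree on the negative real axis *)
With[{s = -4.}, {Cos[Sqrt[s]], Cosh[Sqrt[-s]]}]
```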
Best regardsM Abadi2019-04-22T06:45:17ZCreate 3DGraphics with SurfaceBuilder and BoxPlot in Mathematica 12?
https://community.wolfram.com/groups/-/m/t/1667819
In the old versions of Mathematica we generated cubic minimal surfaces using the commands:
surface = SurfaceBuilder[initpoints, symmdata, shift, edge];
box = BoxPlot[edge];
Show[box, surface, Axes -> False, PlotRange -> All, Boxed -> False]
In the last few versions the message appears “cannot combine graphical objects”. What has changed? Can anyone help, please?Jacek Klinowski2019-04-23T21:58:14ZGet Feynman diagram for two point function in Phi4 Theory?
https://community.wolfram.com/groups/-/m/t/1667232
I wrote code that generates the Feynman diagrams for the two-loop two-point function in Phi^4 theory, but the code became too long because I was trying to make all possible connections. I tried to shorten it, but I'm unable to do so. I'm attaching my code here.
[I don't know how to attach the code, so I'm sharing the file here][1]
[1]: https://www.dropbox.com/s/ruebfw9056c3v4w/2-loop_1%28completed%29.nb?dl=0Bunny Roy2019-04-23T16:18:40ZAvoid excessive times required for "Formatting Notebook Contents" in M12?
https://community.wolfram.com/groups/-/m/t/1666529
When evaluating the new Mathematica 12 functionality, I came across the following issues with this simple expression:
![enter image description here][1]
**Excessive times required for "Formatting Notebook Contents"**
- The "Formatting Notebook Contents" when opening a notebook with the above mentioned expression will take about 25 sec on my machine.
- Evaluating the expression will take another 30 sec of formatting contents.
**Mathematica 12 crash**
The following expressions will consistently crash Mathematica 12 on my machine:
data // TableForm
Clicking Uniconize on the expression
Iconize[data, "Country Data"]
will crash Mathematica too.
This will crash Mathematica 12
data[[All, {2, 3}]]
However, this will not:
ListLogLogPlot@data[[All, {2, 3}]]
Mathematica 12 was installed after I uninstalled version 11.3 on my Windows 10 Pro 17763.437 machine.
Anyone else experience something (or exactly the same) issues? Are there any work-arounds?
Cheers,
Dave
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Annotation2019-04-22075610.jpg&userId=893648Dave Middleton2019-04-22T13:14:18Z[✓] Set frame ticks on GeoListPlot?
https://community.wolfram.com/groups/-/m/t/1666873
When I use a frame on a GeoListPlot, the longitude labels are correct but the latitude labels are not.
GeoListPlot[GeoPosition@{{32, -122}, {33, -123}, {34, -124}},
Frame -> True]
![GeoListPlot with incorrect latitude labels][1]
Can anyone tell me what's going on here?
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=GeoListPlot.png&userId=133182Paul McGill2019-04-23T00:28:54ZInconsistency in molar mass of Oxygen?
https://community.wolfram.com/groups/-/m/t/887789
ChemicalData for "O" returns the values for water...
In[16]:= ChemicalData["O", "MolarMass"]
Out[16]= Quantity[18.0153, ("Grams")/("Moles")]
The calculated molar mass of silver bromate (AgBrO3) is also inconsistent:
In[17]:= ChemicalData["AgBrO3", "MolarMass"]
Out[17]= Quantity[236.778, ("Grams")/("Moles")]
Instead of the actual value of 235.771.rafael ibanez2016-07-14T15:49:12ZUnexpected Behaviour with GeoBubbleChart and GeoRange
https://community.wolfram.com/groups/-/m/t/1660255
I have a `GeoBubbleChart` and I am trying to "zoom in" into a specific region with `GeoRange`. The result is unexpected and potentially reflects a bug.
Here is an example. First let's see the expected behaviour in `GeoListPlot`. Consider this graph:
SeedRandom[1];
plotDat = EntityValue[RandomEntity["Country", 100], EntityProperty["Country", "Position"]];
GeoListPlot[plotDat]
which gives
![global GeoListPlot][1]
Now let's zoom into Europe:
GeoListPlot[plotDat, GeoRange -> {{35, 60}, {-10, 40}}]
which works nicely:
![geoListPlot_zoomed][2]
Now let's try the same with GeoBubbleChart:
SeedRandom[2];
plotDat2 = Table[p -> RandomReal[], {p, plotDat}];
GeoBubbleChart[plotDat2]
which gives
![geoBubbleChart][3]
and let's try to zoom in:
GeoBubbleChart[plotDat2, GeoRange -> {{35, 60}, {-10, 40}}]
which runs weirdly long and unexpectedly gives:
![geoBubbleChart_zoomed][4]
This is definitely unexpected and probably a bug, right?
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=geoListPlot.png&userId=1652017
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=geoListPlot_Eur.png&userId=1652017
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=geoBubbleChart.png&userId=1652017
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=geoBubbleChart_Eur.png&userId=1652017Stephan Schiffels2019-04-15T07:53:30ZLoad a notebook into the Postgresql database?
https://community.wolfram.com/groups/-/m/t/1667074
I'm running Mathematica 12 on Windows 10 Pro (64-bit processor) together with PostgreSQL and DBeaver. A professor wrote a tool to save prepared Mathematica notebooks into the Postgres database.
I had everything running, but when I rebooted the computer to make it run smoothly again and tried to reinstall everything, it did not work anymore.
![The database is connected; stylesheets I create via mathematica work][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Postgres.PNG&userId=1667059
![But then as soon as I try to load a notebook into the database I get the following error:][2]
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=error.PNG&userId=1667059
I reinstalled the latest versions of Mathematica, PostgreSQL, and DBeaver.
I renamed the folders and double-checked that there was no stray suffix or mistake.
And also tried other libraries and books.
It somehow seems to be a problem with Mathematica, but I really don't know what to do.
Does anyone have an idea or did anyone ever experience a similar problem?
Best
PhilipPhilip Kaiser2019-04-23T09:45:04ZLaunch WolframScripts with subkernels?
https://community.wolfram.com/groups/-/m/t/1667047
Dear Wolfram users,
I am currently using a Python script which calls WolframScript for a given task. However, since my Python script can work with different cores at the same time, each time WolframScript starts it opens a new kernel. On the other hand, the number of available Mathematica kernels is limited, and therefore I would like to use subkernels to launch WolframScript.
So far, the simplest possibility I can come up with is something like:
- Launch a main.wls
- Start n subkernels "sub1", "sub2", "sub3"... n being the number of cores
- Launch the python script main.py (maybe with specific arguments)
- Each time main.py calls "./script.wls" (which do some computation), pass as an argument the name of a specific subkernel "subN" to be run on.
I think this should do the job. However, I cannot find how to launch ./script.wls on a given subkernel.
Any ideas, hints or documentation would be very much appreciated !
Many thanks for your help,
JordanJordan Bernigaud2019-04-23T08:54:02Z[✓] Solve the following equation?
https://community.wolfram.com/groups/-/m/t/1665736
Consider the following code:
TC = L + (1 - q) + SC + Ca +
Cr + (Exp[1 - RealAbs[1 - \[Mu]/Subscript[y, 0]]])*
MC + ((Exp[(Subscript[\[Sigma], 1] - \[Sigma])/(
Subscript[\[Sigma], 1] - Subscript[\[Sigma], 2])]) - 1)*DC
n = D[TC, {{\[Sigma], \[Mu]}}]
Solve[n == 0 // Rationalize, {\[Sigma], \[Mu]}]
I need to solve the equation, but running so long.maghfira devi2019-04-22T06:51:44ZSpecify meshes and boundaries in NDSolveValue?
https://community.wolfram.com/groups/-/m/t/1665050
I am trying to solve Laplace's equation in two dimensions in order to obtain the voltage field with specified regions and boundaries. One of the regions in the problem is a half-disk, and I wish to specify a voltage on its boundary. I am not sure what my problem is, but I suspect that the boundary I am generating for that half-disk is wrong.
Ultimately NDSolveValue appears to work, but does not give a plausible answer. This problem does not require much code. It would be very helpful if someone could examine it and let me know what I am doing wrong.Robert Curl2019-04-20T21:53:54ZAnalysis of crop health using NDVI (Red and Near-infrared capturing camera)
https://community.wolfram.com/groups/-/m/t/1663138
Hello Wolfram Community,
I want to take pictures of crops using a drone and better highlight the contrast between regions of healthy vegetation vs. areas with stressed/diseased or no vegetation (in future, extract more information about the crops).
I have recently removed an infrared blocking filter from a camera of a DJI Phantom 3 and instead mounted a filter that blocks every wavelength except Red and Near-infrared. I want red and near-infrared light because healthy vegetation will absorb red light and strongly reflect near-infrared light.
Incorporation of this filter allows reflected red light (660nm) to be captured in the camera sensor’s red channel and reflected near-infrared light (850nm) to be captured in the sensor’s blue channel (the green channel is not used). NDVI can be determined by performing the calculation: NDVI = (blue – red) / (blue + red).
Could you please tell me how to make a heat map (maybe using DensityPlot) or any other way to better highlight the contrast instead of the Colorize function? Or if anything comes to your mind, I would be glad to hear!
![enter image description here][1]
![enter image description here][2]
This is my first attempt in post-processing.
An approximate regular camera picture of the grass spot:
![enter image description here][3]
ColorSeparate[![enter image description here][4], {"R", "B"}]
Out[1] = { ![enter image description here][5], ![enter image description here][6]}
f[pixel1_, pixel2_] := (((pixel1 - pixel2)/(pixel1 + pixel2)) + 1)/2;
ImageApply[f, {![enter image description here][7], ![enter image description here][8]}]
Out[3] = ![enter image description here][9]
The darker spots are supposed to be less healthy vegetation.
Colorize[![enter image description here][10]]
Out[4] = ![enter image description here][11]
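One direction I am considering (a sketch only; `img` stands for the red/near-infrared photo above): compute the NDVI image with the same `f` as above, then render it through a diverging colour function rather than the default `Colorize` index colouring.

```
(* sketch: img is a placeholder for the red/near-infrared photo *)
f[pixel1_, pixel2_] := (((pixel1 - pixel2)/(pixel1 + pixel2)) + 1)/2;
{r, b} = ColorSeparate[img, {"R", "B"}];
ndvi = ImageApply[f, {b, r}];                         (* NDVI rescaled to [0, 1] *)
Colorize[ndvi, ColorFunction -> "ThermometerColors"]  (* heat-map rendering *)
```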
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-04-17at15.25.21.png&userId=1343397
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-04-17at15.26.08.png&userId=1343397
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=IMG_7343.jpg&userId=1343397
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1.jpeg&userId=1343397
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2.jpeg&userId=1343397
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3.jpeg&userId=1343397
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3.jpeg&userId=1343397
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2.jpeg&userId=1343397
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=45.jpeg&userId=1343397
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=45.jpeg&userId=1343397
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=6.jpeg&userId=1343397Zhamilya Bilyalova2019-04-17T19:02:58ZGet ImageRestyle neural network runtime in Mathematica 12?
https://community.wolfram.com/groups/-/m/t/1663030
Hi,
Playing with Mathematica 12 and ImageRestyle and the system generates an error message:
The neural network is unavailable. With Mathematica 11.3 it always worked.
Stacktrace shows:
ImageRestyle::nnlibunavail
Running Windows 10
As a comment, before ImageRestyle started, it first downloaded the GPU libraries. That was 900 MB!
After the download Mathematica showed a message that a restart was needed. After the restart the message above was generated.l van Veen2019-04-17T19:49:31Zcomputable famous theorems of geometry
https://community.wolfram.com/groups/-/m/t/1664846
[GeometricScene][1] and [FindGeometricConjectures][2] are two of my favorite new functions in Wolfram Language V12. V12 provides innovative automated capabilities to draw and reason about abstractly described scenes in the plane.
I also remember that I'd proved famous theorems of geometry over many days when I was a junior high school student. I will show nine theorems, including those in the Documentation Center and [WOLFRAM blog][3].
## Thales's Theorem ##
If A, B, and C are distinct points on a circle where the line AC is a diameter, then the angle \[Angle]ABC is a right angle.
gs = GeometricScene[{"A", "B", "C", "O"},
{Triangle[{"A", "B", "C"}],
CircleThrough[{"A", "B", "C"}, "O"],
"O" == Midpoint[{"A", "C"}],
Style[Line[{"A", "B"}], Orange],
Style[Line[{"B", "C"}], Orange]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][4]
![enter image description here][5]
## Napoleon's Theorem ##
If regular triangles are constructed on the sides of any triangle, either all outward or all inward, the lines connecting the centres of those regular triangles themselves form a regular triangle.
gs = GeometricScene[{"C", "B", "A", "C'", "B'", "A'", "Oc", "Ob",
"Oa"},
{Triangle[{"C", "B", "A"}],
TC == Triangle[{"A", "B", "C'"}],
TB == Triangle[{"C", "A", "B'"}],
TA == Triangle[{"B", "C", "A'"}],
GeometricAssertion[{TC, TB, TA}, "Regular"],
"Oc" == TriangleCenter[TC, "Centroid"],
"Ob" == TriangleCenter[TB, "Centroid"],
"Oa" == TriangleCenter[TA, "Centroid"],
Style[Triangle[{"Oc", "Ob", "Oa"}], Orange]}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][6]
![enter image description here][7]
## Finsler-Hadwiger Theorem ##
ABCD and AB'C'D' are two squares with common vertex A (the primed points are written BB, CC, DD in the code below). Let Q and S be the midpoints of B'D and BD' respectively, and let R and T be the centers of the two squares. Then the quadrilateral QRST is a square as well.
gs = GeometricScene[{"A", "B", "C", "D", "BB", "CC", "DD", "Q", "R", "S", "T"},
{GeometricAssertion[{Polygon[{"A", "B", "C", "D"}],
Polygon[{"A", "BB", "CC", "DD"}]}, "Regular", "Counterclockwise"],
"Q" == Midpoint[{"BB", "D"}],
"R" == Midpoint[{"A", "C"}],
"S" == Midpoint[{"B", "DD"}],
"T" == Midpoint[{"A", "CC"}],
Style[Polygon[{"Q", "R", "S", "T"}], Orange]}];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][8]
![enter image description here][9]
## Echols' Theorem ##
Given two equilateral triangles ABC and DEF, the midpoints of AD, BE, and CF form a regular triangle.
gs = GeometricScene[{"A", "B", "C", "D", "E", "F", "L", "M", "N"},
{T1 == Triangle[{"A", "B", "C"}],
T2 == Triangle[{"D", "E", "F"}],
GeometricAssertion[{T1, T2}, "Regular"],
"L" == Midpoint[{"A", "D"}],
"M" == Midpoint[{"B", "E"}],
"N" == Midpoint[{"C", "F"}],
Style[Triangle[{"L", "M", "N"}], Orange]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][10]
![enter image description here][11]
## Simson Theorem & Steiner Theorem ##
Simson's Theorem states that, given a triangle ABC and a point P on its circumcircle, the feet of the perpendiculars from P to the lines AB, AC, and BC are collinear. Steiner's Theorem states that if the orthocenter of triangle ABC is H, then the Simson line passes through the midpoint of PH.
gs = GeometricScene[{"A", "B", "C", "P", "L", "M", "N", "H", "S"},
{CircleThrough[{"P", "A", "B", "C"}],
"L" \[Element] InfiniteLine[{"B", "C"}],
"M" \[Element] InfiniteLine[{"C", "A"}],
"N" \[Element] InfiniteLine[{"A", "B"}],
PlanarAngle[{"P", "L", "B"}] == 90 \[Degree],
PlanarAngle[{"P", "M", "C"}] == 90 \[Degree],
PlanarAngle[{"P", "N", "A"}] == 90 \[Degree],
Style[InfiniteLine[{"L", "M"}], Orange],
GeometricAssertion[{InfiniteLine[{"A", "H"}], Line[{"B", "C"}]},
"Perpendicular"],
GeometricAssertion[{InfiniteLine[{"B", "H"}], Line[{"A", "C"}]},
"Perpendicular"],
Style[Line[{"P", "H"}], Orange],
Line[{"P", "S", "H"}], Line[{"L", "S", "M"}]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][12]
![enter image description here][13]
## Van Aubel's Theorem ##
Starting with a given quadrilateral (a polygon having four sides), construct a square on each side. The two line segments between the centers of opposite squares are of equal length and at right angles to one another.
gs = GeometricScene[{"A", "B", "C", "D", "A'", "A''", "B'",
"B''", "C'", "C''", "D'", "D''", "Oa", "Ob", "Oc", "Od"},
{GeometricAssertion[Polygon[{"A", "B", "C", "D"}], "Convex"],
GeometricAssertion[{pa = Polygon[{"A", "B", "A'", "A''"}],
pb = Polygon[{"B", "C", "B'", "B''"}],
pc = Polygon[{"C", "D", "C'", "C''"}],
pd = Polygon[{"D", "A", "D'", "D''"}]}, "Regular",
"Counterclockwise"],
"Oa" == Midpoint[{"A", "A'"}],
"Ob" == Midpoint[{"B", "B'"}],
"Oc" == Midpoint[{"C", "C'"}],
"Od" == Midpoint[{"D", "D'"}],
Style[Line[{"Oa", "Oc"}], Orange],
Style[Line[{"Ob", "Od"}], Orange]}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][14]
![enter image description here][15]
## Brahmagupta Theorem ##
If a cyclic quadrilateral has perpendicular diagonals, then the perpendicular to a side from the point of intersection of the diagonals always bisects the opposite side.
gs = GeometricScene[{"A", "B", "C", "D", "E", "M"},
{Polygon[{"A", "B", "C", "D"}],
CircleThrough[{"A", "B", "C", "D"}],
GeometricAssertion[{Line[{"A", "C"}], Line[{"B", "D"}]},
"Perpendicular"],
Line[{"A", "E", "C"}], Line[{"B", "E", "D"}],
"M" == Midpoint[{"A", "B"}],
Style[InfiniteLine[{"M", "E"}], Orange],
Style[Line[{"C", "D"}], Orange]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][16]
![enter image description here][17]
## Morley's Theorem ##
In any triangle, the three points of intersection of the adjacent angle trisectors form a regular triangle.
gs = GeometricScene[{"A", "B", "C", "D", "E", "F"},
{Triangle[{"A", "B", "C"}],
PlanarAngle[{"A", "B", "F"}] == PlanarAngle[{"A", "B", "C"}]/3,
PlanarAngle[{"F", "A", "B"}] == PlanarAngle[{"C", "A", "B"}]/3,
PlanarAngle[{"C", "B", "D"}] == PlanarAngle[{"C", "B", "A"}]/3,
PlanarAngle[{"B", "C", "D"}] == PlanarAngle[{"B", "C", "A"}]/3,
PlanarAngle[{"A", "C", "E"}] == PlanarAngle[{"A", "C", "B"}]/3,
PlanarAngle[{"C", "A", "E"}] == PlanarAngle[{"C", "A", "B"}]/3,
"D" \[Element] Triangle[{"A", "B", "C"}],
"E" \[Element] Triangle[{"A", "B", "C"}],
"F" \[Element] Triangle[{"A", "B", "C"}],
Style[Triangle[{"D", "E", "F"}], Orange]
}
];
RandomInstance[gs]
FindGeometricConjectures[gs]["Conclusions"]
![enter image description here][18]
![enter image description here][19]
[1]: https://reference.wolfram.com/language/ref/GeometricScene.html
[2]: https://reference.wolfram.com/language/ref/FindGeometricConjectures.html
[3]: https://blog.wolfram.com/2019/04/16/version-12-launches-today-big-jump-for-wolfram-language-and-mathematica/
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=122401.jpg&userId=1013863
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=801502.jpg&userId=1013863
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=508803.jpg&userId=1013863
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=222204.jpg&userId=1013863
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=289105.jpg&userId=1013863
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=932006.jpg&userId=1013863
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=876507.jpg&userId=1013863
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=184108.jpg&userId=1013863
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=350909.jpg&userId=1013863
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=500710.jpg&userId=1013863
[14]: https://community.wolfram.com//c/portal/getImageAttachment?filename=318711.jpg&userId=1013863
[15]: https://community.wolfram.com//c/portal/getImageAttachment?filename=254812.jpg&userId=1013863
[16]: https://community.wolfram.com//c/portal/getImageAttachment?filename=308813.jpg&userId=1013863
[17]: https://community.wolfram.com//c/portal/getImageAttachment?filename=934714.jpg&userId=1013863
[18]: https://community.wolfram.com//c/portal/getImageAttachment?filename=367015.jpg&userId=1013863
[19]: https://community.wolfram.com//c/portal/getImageAttachment?filename=16.jpg&userId=1013863Kotaro Okazaki2019-04-20T14:42:58ZDecorating Easter Eggs with the Planets
https://community.wolfram.com/groups/-/m/t/1665923
![The planets as Easter eggs][1]
Just thought I'd share this fun little exercise. First, we need to get the textures to use.
textures =
ImageReflect[#, Right] & /@
EntityValue["Planet", "CylindricalEquidistantTexture"];
Then, we need to plot parametric surfaces that look like eggs and apply the textures to them.
GraphicsGrid[
 Partition[
  With[{l = .75, a = 1, b = 1},
    ParametricPlot3D[
      Evaluate[
       RotationMatrix[Pi/2, {0, 1, 0}].{l Cos[t] + (a + b Cos[t]) Cos[t],
         (a + b Cos[t]) Sin[t] Cos[p],
         (a + b Cos[t]) Sin[t] Sin[p]}],
      {p, 0, 2 Pi}, {t, 0, Pi},
      Mesh -> None, Boxed -> False, Axes -> False,
      Lighting -> "Neutral", PlotStyle -> Texture[#],
      ViewPoint -> Left, PlotPoints -> 50, Background -> Black,
      SphericalRegion -> True, ViewAngle -> Pi/6]] & /@ textures, 4],
 Spacings -> {0, 0}, ImageSize -> 800]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=PlanetEaster.png&userId=25355Jeff Bryant2019-04-21T20:44:15Z[✓] Use bubble points in Plot?
https://community.wolfram.com/groups/-/m/t/1666411
Hi
How do I replace all the normal points with bubble-style points, as in the plot below:
m = {{0.676168282`, 0.65776549947501`}, {0.671137774`,
0.659222459526278`}, {0.671010714`,
0.659990068731665`}, {0.671762604`,
0.664138272302896`}, {0.67218621`,
0.66552936163932`}, {0.673216749`,
0.668311540479097`}, {0.67799459`,
0.678049166167921`}, {0.679546306`,
0.680831345007698`}, {0.686881695`,
0.693351149536299`}, {0.687708356`,
0.694742238872723`}, {0.668473616`,
0.65776549947501`}, {0.656997668`,
0.65776549947501`}, {0.653419063`, 0.662747182883008`}};
ListPlot[m, Mesh -> All, ImageSize -> 500, AspectRatio -> Automatic,
TicksStyle -> Directive[Black, 15],
AxesStyle -> Directive[Black, 12], Ticks -> Automatic,
GridLines -> Automatic, Axes -> True,
PlotRange -> {{0.64, 0.70}, {0.64, 0.70}},
PlotStyle -> {PointSize[.02], Orange},
Epilog -> Line[{{0, 0}, {1, 1}}]]
![enter image description here][1]
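A possible direction (a sketch only, assuming the goal is white-filled circles with a coloured edge): use `PlotMarkers` with a custom `Graphics` disk instead of `PlotStyle -> PointSize`.

```
(* sketch: white-filled disks with an orange edge as plot markers *)
ListPlot[m,
 PlotMarkers -> {Graphics[{EdgeForm[{Thick, Orange}], White, Disk[]}], 0.03},
 ImageSize -> 500, AspectRatio -> Automatic,
 GridLines -> Automatic,
 PlotRange -> {{0.64, 0.70}, {0.64, 0.70}},
 Epilog -> Line[{{0, 0}, {1, 1}}]]
```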
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Untitled11.png&userId=943918M.A. Ghorbani2019-04-22T13:21:03Z[WSC18] Analysing Image Dithering
https://community.wolfram.com/groups/-/m/t/1383824
![image dithering cover][1]
This summer, as part of the Wolfram High School Summer Camp program, I decided to do a project analysing image dithering, and the various ways to do it. Over the course of two weeks, I learnt and understood how the algorithms work. Since resources about this process are sparse on the internet, *in this post, I not only implement the algorithms, but additionally describe in detail what image dithering is and how it works.*
*The second part of my project was to use machine learning to classify images that originate from different image dithering algorithms.*
After failing to get high accuracy with a large number of machine learning techniques, I finally came up with one that has an **accuracy of about 90%**.
Note that this community post is an adapted version of my computational essay. *My computational essay is attached at the bottom of the post.*
# What is image dithering?
![image-dithering-FS-example][2]
*Can you tell the difference between the two images above?*
The image on the right uses **just 24 colours**.
Yet, using Floyd-Steinberg dithering, it manages to look as detailed and beautiful as the one on the left, which is using more than 16 million colours!
Let's formally define the aim of image dithering:
Given an RGB image with $256^3$ colors, reduce the colour space of the image to only belong to a certain color palette.
![chromaticity-plot comparison][3]
*A comparison of the chromaticity plots of the above images*
# Color Palette
First, let us tackle the easier problem at hand. Say that we are given the option to choose our color palette, the only restriction being the number of colours in the palette. What would be the best way to obtain a color palette that is the most appropriate for an image?
Thankfully, the Wolfram Language makes this task very easy for us, as we can simply use the inbuilt function `DominantColors[image, n]`. For example, regarding the image above:
![dominant-colors of the image][4]
would be the most appropriate color palette with 12 colours.
Here are some visualisations of the process of choosing the color palette in 3D RGB space.
The color palette of the original image:
![original color palette][5]
The color palette of the dithered image:
![final color palette][6]
*Notice how the final color palette is quite close to points on the diagonal of the cube. I go into a lot more detail about this in my computational essay.*
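A plot like the ones above can be sketched as follows (`img` is a placeholder for the image being analysed):

```
(* sketch: scatter a random sample of an image's pixels in RGB space *)
pts = RandomChoice[Flatten[ImageData[img], 1], 2000];
Graphics3D[{PointSize[0.005], Point[pts, VertexColors -> (RGBColor @@@ pts)]},
 Axes -> True, AxesLabel -> {"R", "G", "B"}, BoxRatios -> {1, 1, 1}]
```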
**Now, let us try to solve the main part of the problem, actually figuring out the mapping of a pixel colour to its new colour.**
# Naive idea
```
colorPalette = {0, 32, 64, 96, 128, 159, 191, 223};
pixel = {{34, 100, 222},
{200, 50, 150}};
result = pixel;
Do [
result[[x, y]] = First[Nearest[colorPalette, pixel[[x, y]]]];,
{x, Length[pixel]},
{y, Length[First[pixel]]}
];
Grid[result, Frame -> All, Spacings -> {1, 1}]
```
![grid naive idea][7]
## Extra!
I had to implement my own (but slower) version of the Wolfram function `Nearest` for the final functions, since pre-compiling Wolfram code to C code does not support `Nearest` natively.
However, at the time of writing, I have heard that there is an internal project going on at Wolfram to enable support for compiling all inbuilt Wolfram functions to C natively.
# Better idea
As one can guess, this idea can be improved a lot. One of the important ideas is that of "smoothing". We want the transition between two objects/colours to look smoother. One way to do that would be make a gradient as the transition occurs.
However, how do you formalise a "gradient"? And how do you make a smooth one when all you have are 24 colours?
Dithering basically attempts to solve these problems.
To solve our questions, let's think about a pixel: what's the best way to make it look closest to its original colour?
In the naive idea, we created some error by rounding to some nearby values.
For each pixel, let's formalise the error as:
![error in pixel[1, 1] screenshot][8]
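In terms of the naive-idea snippet above, the error at pixel {1, 1} is simply:

```
(* the rounding error at pixel {1, 1}: original value minus palette value *)
error = pixel[[1, 1]] - result[[1, 1]]  (* 34 - 32 = 2 *)
```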
## Error diffusion
It is clear that the error should somehow be transmitted to the neighbouring elements so we can account for the error in a pixel in its neighbours. To maintain an even order of processing, let us assume that we will traverse the 2D array of pixels from the top-left corner, row-wise until we reach the bottom-right corner.
Therefore, it never makes sense to "push" the effects of an error to a cell we've already processed. Finally, let us see some ways to actually diffuse the error across the image.
# Floyd - Steinberg Dithering
In 1976, Robert Floyd and Louis Steinberg published the most popular dithering algorithm<sup>1</sup>. The pattern for error diffusion can be described as:
```
diffusionFormula = {{0, 0, 7},
{3, 5, 1}} / 16;
diffusionPosition = {1, 2};
```
## What does this mean?
`diffusionFormula` is simply a way to encode the diffusion from a pixel.
`diffusionPosition` refers to the position of the pixel, relative to the `diffusionFormula` encoding.
So, for example, an error of `2` at pixel `{1, 1}` translates to the following additions:
```
pixel[[1, 2]] += error*(7/16);
pixel[[2, 1]] += error*(3/16);
pixel[[2, 2]] += error*(5/16);
pixel[[2, 3]] += error*(1/16);
```
![grid floyd steinberg error diffusion][9]
![floyd Steinberg dithering][10]
## How does one come up with these weird constants?
Notice how the numerator constants are the first 4 odd numbers.
The pattern is chosen specifically to create an even checkerboard pattern for perfect grey images using black and white.
![grayscale floyd steinberg dithering example][11]
*Example Grayscale image dithered with Floyd Steinberg*
![grayscale picture in picture thing][12]
*Note the checkerboard pattern in the image above.*
# Atkinson Dithering
Relative to the other dithering algorithms here, Atkinson's algorithm diffuses a lot less of the error to its surroundings: the formula entries below sum to only 6/8, so a quarter of the error is simply dropped. It tends to preserve detail well, but very continuous sections of colour appear blown out.
This was made by Bill Atkinson<sup>2</sup>, an Apple employee. The pattern for error diffusion is as below:
```
diffusionFormula = {{0, 0, 1, 1},
{1, 1, 1, 0},
{0, 1, 0, 0}} / 8;
diffusionPosition = {1, 2};
```
![atkinson dithering example][13]
# Jarvis, Judice, and Ninke Dithering
This algorithm<sup>3</sup> spreads the error over more rows and columns; therefore, images should be softer (in theory).
The pattern for error diffusion is as below:
```
diffusionFormula = { {0, 0, 0, 7, 5},
{3, 5, 7, 5, 3},
{1, 3, 5, 3, 1}} / 48;
diffusionPosition = {1, 3};
```
![JJN algorithm example][14]
#### The final 2 dithering algorithms come from Frankie Sierra, who published the Sierra and Sierra Lite matrices<sup>4</sup> in 1989 and 1990 respectively.
# Sierra Dithering
Sierra dithering is based on Jarvis dithering, so it has similar results, but it's negligibly faster.
```
diffusionFormula = { {0, 0, 0, 5, 3},
{2, 4, 5, 4, 2},
{0, 2, 3, 2, 0}} / 32 ;
diffusionPosition = {1, 3};
```
![sierra dithering example][15]
# Sierra Lite Dithering
This yields results similar to Floyd-Steinberg dithering, but is faster.
```
diffusionFormula = {{0, 0, 2},
{0, 1, 1}} / 4;
diffusionPosition = {1, 2};
```
![Sierra Lite dithering][16]
# Comparison
Here's an interactive comparison of the algorithms on different images:
```
Manipulate[
Dither[im, c,
StringDelete[StringDelete[StringDelete[algo, " "], ","],
"-"]], {{im, image, "Image"}, {im1, im2, im3}}, {{c, 12,
"Number of colours"}, 2, 1024, 1},
{{algo, "Floyd-Steinberg", "Algorithm"}, {"Floyd-Steinberg",
"Jarvis, Judice, Ninke", "Atkinson", "Sierra", "Sierra Lite"}}]
```
Download my computational essay to see it in action. Alternatively, use the functions in the "Implementation" section to evaluate the code. `im1`, `im2`, `im3` can be replaced by images. I have submitted the comparison to [Wolfram Demonstrations][17] as well, so it should be available online soon.
# Side-note
Notice how the denominator in the `diffusionFormula` of a number of algorithms is a power of $2$?
This is because division by a power of 2 is equivalent to [bit-shifting][18] the number to the right by $\log_{2}(divisor)$ bits, making it much faster than division by any other number.
Given the improvements in computer hardware, this is not a major concern anymore.
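A quick numeric check of this equivalence:

```
(* dividing by 16 = 2^4 is the same as shifting right by 4 bits *)
BitShiftRight[112, 4] == Quotient[112, 16]  (* True *)
```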
## Come up with your own dithering algorithm!
I noticed that the dithering algorithms are almost the same as each other (especially the `diffusionPosition`).
However, I have made my functions so that you can just tweak the input arguments `diffusionFormula` and `diffusionPosition`, and test out your own functions!
Here's one I tried:
![my own dithering algorithm][19]
# Implementation
In this section, I will discuss the Wolfram implementation, and some of the features and functions I used in my code.
## applyDithering
Even though it's probably the least interesting part of the algorithm, actually applying the dithering is the most important step, and the one that gave me the most trouble.
I started off by writing it in the functional paradigm. With little knowledge of Wolfram, I stumbled through the docs to assemble pieces of the code. Finally, I had a "working" version of the algorithm, but there was a major problem: A $512\cdot512$ RGB image took 700 seconds for processing!
This number is way too large for an algorithm with linear time complexity in the size of input.
### Fixes
Some of the trivial fixes involved making more use of inbuilt functions (for example, `Nearest`).
The largest problem is that the Wolfram kernel is an interpreter, not a compiler: it interprets code every time it's run.
So the obvious step to optimising performance was using the `Compile` function in Wolfram.
But, there's a catch!
```
T (R1) 18 = MainEvaluate[ Hold[List][ I27, I28, T (R1) 16, R6]]
95 Element[ T (R2) 17, I20] = T (R1) 18
```
If you see something like the above in your machine code, your code is likely to be slow.
`MainEvaluate` basically means that the compiled function is calling back the kernel, which too, is a slow process.
To view the human readable form of your compiled Wolfram function, you can use:
```
Needs["CompiledFunctionTools`"];
CompilePrint[yourFunction]
```
To fix this, you basically need to write everything in a procedural form using loops and similar constructs.
The final step was `RuntimeOptions -> "Speed"`, which trades off some integer overflow checks etc. for a faster runtime.
Find the complete code for the function below:
```
applyDithering =
Compile[{{data, _Real, 3}, {diffusionList, _Real,
2}, {diffusionPos, _Integer, 1}, {colors, _Real, 2}},
Module[{lenx, leny, lenz, lenxdiff, lenydiff, error, val,
realLoc, closestColor, closestColordiff, res, a, diff = data,
diffusionFormula = diffusionList, xx, yy, curxx, curyy,
colorAvailable = colors, tmp, diffusionPosition = diffusionPos,
idx},
{lenx, leny, lenz} = Dimensions[data];
{lenxdiff, lenydiff} = Dimensions[diffusionFormula];
a = data;
res = data;
Do[
val = a[[x, y]];
realLoc = {x - diffusionPos[[1]] + 1,
y - diffusionPos[[2]] + 1};
closestColor = {1000000000., 1000000000., 1000000000.};
closestColordiff = 1000000000.;
Do[
tmp = N[Total[Table[(i[[idx]] - val[[idx]])^2, {idx, 3}]]];
If[tmp < closestColordiff,
closestColordiff = tmp;
closestColor = i;
];,
{i, colorAvailable}
];
error = val - closestColor;
res[[x, y]] = closestColor;
Do[
curxx = realLoc[[1]] + xx - 1;
curyy = realLoc[[2]] + yy - 1;
If[curxx > 0 && curxx <= lenx && curyy > 0 && curyy <= leny,
a[[curxx, curyy, z]] += error[[z]]*diffusionFormula[[xx, yy]]];,
{xx, lenxdiff},
{yy, lenydiff},
{z, 3}
];,
{x, lenx},
{y, leny}
];
Round[res]
],
CompilationTarget -> "C",
RuntimeOptions -> "Speed"
];
```
## Dither
This is the main function that uses `applyDithering`. There are multiple definitions of the function: one with the hardcoded values, and another to allow one to easily implement their own dithering algorithm.
```
(* This is the implementation that takes the algorithm name and applies it *)
Dither[img_Image, colorCount_Integer, algorithm_String: ("FloydSteinberg" | "JarvisJudiceNinke" |
"Atkinson" | "Sierra" | "SierraLite")] :=
Module[{diffusionFormulaFS, diffusionPositionFS, diffusionFormulaJJN, diffusionPositionJJN, diffusionFormulaA,
diffusionPositionA, diffusionFormulaS, diffusionPositionS, diffusionFormulaSL, diffusionPositionSL},
(* Floyd Steinberg algorithm constants *)
diffusionFormulaFS = {{0, 0, 7},
{3, 5, 1}} / 16;
diffusionPositionFS = {1, 2};
(* Jarvis, Judice, and Ninke algorithm constants *)
diffusionFormulaJJN = {{0, 0, 0, 7, 5},
{3, 5, 7, 5, 3},
{1, 3, 5, 3, 1}} / 48;
diffusionPositionJJN = {1, 3};
(* Atkinson algorithm constants *)
diffusionFormulaA = {{0, 0, 1, 1},
{1, 1, 1, 0},
{0, 1, 0, 0}} / 8 ;
diffusionPositionA = {1, 2};
(* Sierra algorithm constants *)
diffusionFormulaS = {{0, 0, 0, 5, 3},
{2, 4, 5, 4, 2},
{0, 2, 3, 2, 0}} / 32 ;
diffusionPositionS = {1, 3};
(* Sierra Lite algorithm constants*)
diffusionFormulaSL = {{0, 0, 2},
{0, 1, 1}} / 4;
diffusionPositionSL = {1, 2};
colorAvailable =
Round[List @@@ ColorConvert[DominantColors[img, colorCount], "RGB"] * 255];
Switch[algorithm,
"FloydSteinberg",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaFS, diffusionPositionFS, colorAvailable],
"Byte"],
"JarvisJudiceNinke",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaJJN, diffusionPositionJJN, colorAvailable],
"Byte"],
"Atkinson",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaA, diffusionPositionA, colorAvailable],
"Byte"],
"Sierra",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaS, diffusionPositionS, colorAvailable],
"Byte"],
"SierraLite",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaSL, diffusionPositionSL, colorAvailable],
"Byte"]
]
];
(* This is the function that makes it easy to make your own dithering
algorithm *)
Dither[img_Image, colorCount_Integer, diffusionFormula_List, diffusionPosition_List] := Module[{},
colorAvailable = Round[List @@@ ColorConvert[DominantColors[img, colorCount], "RGB"] * 255];
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormula, diffusionPosition, colorAvailable],
"Byte"]
];
```
# **Classifying images**
The second part of my project involves classifying dithered images and mapping them to the algorithm they were obtained from.
This sounds like a relatively easy task for machine learning, but it turned out to be much harder. Moreover, I could not find any similar prior research on this kind of image "metadata", which made the task more rewarding.
I ended up creating a model which has an **accuracy of more than 90%**, which is reasonably good for machine learning.
If you are uninterested in the failures encountered and the details of the dataset used, please skip ahead to the section on "ResNet-50 with preprocessing".
# Dataset
To obtain data, I did a web search for images with a keyword chosen randomly from a dictionary of common words.
The images obtained are then run through the five algorithms I implemented and are stored as the training data. This ensures that the images aren't distinguished much in terms of their actual contents, since that would interfere with learning about the dithering algorithms used in the image.
The images were allowed to use up to 24 colours which are auto-selected, as described in the section on "Color Palette".
Here is the code for downloading, applying the dithering, and re-storing the images. Note that it is not designed with re-usability in mind, these are just snippets coded at the speed of thought:
```
(* This function scrapes random images from the internet and stores \
them to my computer *)
getImages[out_Integer, folderTo_String] := Module[{},
res = {};
Do [
Echo[x];
l = RemoveAlphaChannel /@
Map[ImageResize[#, {512, 512}] &,
Select[WebImageSearch[RandomChoice[WordList["CommonWords"]],
"Images"],
Min[ImageDimensions[#]] >= 512 &]];
AppendTo[res, Take[RandomSample[l], Min[Length[l], 2]]];
Pause[10];,
{x, out}
];
MapIndexed[Export[
StringJoin[folderTo, ToString[97 + #2[[1]]], "-",
ToString[#2[[2]]], ".png"], #1] &, res, {2}]
]
(* This function applies the dithering and stores the image *)
applyAndStore[folderFrom_String, folderTo_String] := Module[{},
images = FileNames["*.png", folderFrom];
origImages = Map[{RemoveAlphaChannel[Import[#]], #} &, images];
Map[Export[StringJoin[folderTo, StringTake[#[[2]], {66, -1}]],
Dither[#[[1]], 24, "Sierra"]] &, origImages]
];
```
Here are some more variable definitions and metadata about the dataset that is referenced in the following sections.
![dataset metadata][20]
# Plain ResNet-50
My first attempt was to use a popular neural net named "ResNet-50 Trained on ImageNet Competition Data",<sup>5</sup> and retrain it on my training data.
One of the major reasons for choosing this architecture was that it identifies the main object in an image very accurately, and is quite deep. Both these properties seemed very suitable for my use case.
However, the results turned out to be very poor. When I noticed this during the training session, I stopped the process early on. It can be speculated that the poor results were because it was trying to infer relations between the colours in the image.
# Border classification
Since the borders in an image are least affected by the image dithering algorithm, and simply rounded to the closest colours, it should be easier to learn the constants of the diffusionFormula from it.
Therefore, we can pre-process an image and only use its border pixels for classification.
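This preprocessing can be sketched as follows (`borderPixels` is a hypothetical helper, not part of the nets below):

```
(* sketch: concatenate the four border rows/columns of an image's pixel array *)
borderPixels[img_Image] := Module[{d = ImageData[img]},
  Join[First[d], Last[d], d[[All, 1]], d[[All, -1]]]]
(* for a 512x512 RGB image this yields a {2048, 3} array, i.e. 512*4 pixels *)
```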
## borderNet
Observing the aforementioned fact, I implemented a neural net which tried to work with just the borders of the image. This decreased the size of the data to $512 \cdot 4$ per image.
## borderNet with just left and top border
Since my implementation of the dithering algorithms starts by applying the algorithm from the top-left corner, the pattern in the left and top borders should be even easier for the net to learn. However, this decreased the size of the data even more to $512 \cdot 2$ per image.
Both nets performed poorly, with **accuracies of around 20%**, probably because the border pixels alone did not provide enough data for the nets to train well.
Wolfram code for the net follows:
```
borderNet = NetChain[{
LongShortTermMemoryLayer[100],
SequenceLastLayer[],
DropoutLayer[0.3],
LinearLayer[Length@classes],
SoftmaxLayer[]
},
"Input" -> {"Varying", 3},
"Output" -> NetDecoder[{"Class", classes}]
]
```
# Row and column specific classification
The aim of this approach was to first have the neural net infer patterns in the columns of the image, and then combine that information to find patterns across the rows.
This didn't work very well either. The most likely reason is that the error diffusion is not as independent across rows and columns as the net assumes.
# Row and column combined classification
Building on the results of the previous approach, the right method seemed to be to use two separate chains and a `CatenateLayer` to combine their results.
To understand the architecture, observe the `NetGraph` object below:
![netgraph-lstmNet branchy thingy][21]
The Wolfram language code for the net is as follows:
```
lstmNet = NetGraph[{
TransposeLayer[1 <-> 2],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 3}], "Input" -> {512, 512, 3}],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 50}], "Input" -> {512, 512, 50}],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 3}], "Input" -> {512, 512, 3}],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 50}], "Input" -> {512, 512, 50}],
TransposeLayer[1 <-> 2],
SequenceLastLayer[],
SequenceLastLayer[],
LongShortTermMemoryLayer[25],
LongShortTermMemoryLayer[25],
SequenceLastLayer[],
SequenceLastLayer[],
CatenateLayer[],
DropoutLayer[0.3],
LinearLayer[Length@classes],
SoftmaxLayer[]
}, {
NetPort["Input"] -> 2 -> 3 -> 7 -> 9 -> 11,
NetPort["Input"] -> 1 -> 4 -> 5 -> 6 -> 8 -> 10 -> 12,
{11, 12} -> 13 -> 14 -> 15 -> 16
},
"Input" -> {512, 512, 3},
"Output" -> NetDecoder[{"Class", classes}]
];
```
However, this net didn't work very well either.
It had a somewhat unconventional architecture, and its excessive parameter count crashed the Wolfram kernel, so the parameters had to be cut down.
Ultimately, it only managed an **accuracy rate of around 25-30%**.
# ResNet-50 with preprocessing
The final idea was to use pre-processing to our advantage. Dithering, in essence, shifts the quantisation error downward and towards the right. Therefore, one way to filter the image is to pad it with one row of pixels at the top and one column at the left (shifting it down and to the right), and then subtract the shifted image from the original.
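The claim that the error moves down and to the right can be made concrete with a minimal grayscale Floyd-Steinberg sketch. This is illustrative Python, not the Wolfram implementation used in the essay, and `fs_dither_gray` is a hypothetical helper name.

```python
import numpy as np

def fs_dither_gray(img, levels=2):
    """Minimal grayscale Floyd-Steinberg dither on a [0, 1] image: each
    pixel's quantisation error is pushed to its right and downward
    neighbours with weights 7/16, 3/16, 5/16, 1/16."""
    img = img.astype(float).copy()
    h, w = img.shape
    step = 1.0 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = round(old / step) * step   # quantise to the nearest level
            img[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16       # right
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16  # down-left
                img[y + 1, x] += err * 5 / 16          # down
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16  # down-right
    return img

# A flat 50% gray image dithers to a roughly half-on, half-off pattern.
out = fs_dither_gray(np.full((32, 32), 0.5))
```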
Here's an example of what that looks like:
![FS image preprocessing for net][22]
The code for doing this to an image is as simple as:
```
img - ImageTake[ImagePad[img, 1], {1, 512}, {1, 512}]
```
*Notice how the image (right side, after processing) resembles the parts with the "checkerboard" pattern described in the section "How does one come up with these weird constants?" under "Floyd-Steinberg Dithering".*
The main reason this net works well is that, even with the same color palette, the gradients of images produced by different dithering algorithms are quite different. This is because of the differences in the error diffusion, and by subtracting the shifted image from the original, we obtain a filtered version of the dithering pattern, making it easy for the neural net to spot.
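For readers who want to experiment outside the Wolfram Language, the same shift-and-subtract filter can be sketched with NumPy. This is an illustrative equivalent of the one-liner above, not the essay's original code:

```python
import numpy as np

def dither_residue(img):
    """Shift the image down and right by one pixel, zero-padding the new
    top row and left column, then subtract the shift from the original.
    Mirrors img - ImageTake[ImagePad[img, 1], {1, 512}, {1, 512}]."""
    shifted = np.zeros_like(img)
    shifted[1:, 1:] = img[:-1, :-1]
    return img - shifted

# Works on (H, W) grayscale or (H, W, 3) RGB arrays alike.
res = dither_residue(np.arange(9, dtype=float).reshape(3, 3))
```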
The net was trained on AWS for more than 7 hours, on a larger dataset of 1500 images.
The results exceeded my expectations: on a test set of more than 700 images, 300 of which were part of the original training data, the net showed an **accuracy rate of nearly 91%**.
![classifier measurement object][23]
Here is the code for the net:
```
baseModel =
NetTake[NetModel["ResNet-50 Trained on ImageNet Competition Data",
"UninitializedEvaluationNet"], 23]
net = NetChain[{
NetReplacePart[baseModel,
"Input" -> NetEncoder[{"Image", {512, 512}}]],
LinearLayer[Length@classes],
SoftmaxLayer[]},
"Input" -> NetEncoder[{"Image", {512, 512}, ColorSpace -> "RGB"}],
"Output" -> NetDecoder[{"Class", classes}]
]
```
So, it's just **ResNet-50**, modified to work with $512 \cdot 512$ images.
# Future Work
- Look into using machine learning for un-dithering an image.
- Look into creating new dithering algorithms that perform faster or better than the existing ones.
# Notes
All images and visualisations in this post were generated in the Wolfram Language. Their code can be seen in the computational essay attached below.
I would like to thank all the mentors, especially Greg "Chip" Hurst, Michael Kaminsky, Christian Pasquel and Matteo Salvarezza, for their help throughout the project.
Further, I would like to thank Pyokyeong Son and Colin Weller for their help during the project and for refining the essay.
The original, high resolution copies of the images are credited to [Robert Lukeman][24], [Teddy Kelley][25], and [Sebastian Unrau][26] on [Unsplash][27].
# References
[1] : R. W. Floyd and L. Steinberg, "An Adaptive Algorithm for Spatial Grey Scale," Proceedings of the Society for Information Display 17, 75-77 (1976).
[2] : Bill Atkinson, private correspondence with John Balestrieri, January 2003 (unpublished).
[3] : J. F. Jarvis, C. N. Judice and W. H. Ninke, "A Survey of Techniques for the Display of Continuous Tone Pictures on Bi-level Displays," Computer Graphics and Image Processing 5, 13-40 (1976).
[4] : Frankie Sierra, in LIB 17 (Developer's Den), CIS Graphics Support Forum (unpublished).
[5] : K. He, X. Zhang, S. Ren and J. Sun, "Deep Residual Learning for Image Recognition," arXiv:1512.03385 (2015).
# [Link to my computational essay][28]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=image-dithering-cover.gif&userId=1371661
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at3.50.44PM.png&userId=1371661
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.11.46PM.png&userId=1371661
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.15.27PM.png&userId=1371661
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=color-palette-og.gif&userId=1371661
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=color-palette-final.gif&userId=1371661
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.19.57PM.png&userId=1371661
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.23.38PM.png&userId=1371661
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.28.38PM.png&userId=1371661
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.29.39PM.png&userId=1371661
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.31.40PM.png&userId=1371661
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=image-1.png&userId=1371661
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.38.01PM.png&userId=1371661
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.39.37PM.png&userId=1371661
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.41.39PM.png&userId=1371661
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.43.33PM.png&userId=1371661
[17]: http://demonstrations.wolfram.com
[18]: https://stackoverflow.com/a/141873
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at5.03.37PM.png&userId=1371661
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.11.09AM.png&userId=1371661
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.21.55AM.png&userId=1371661
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.24.56AM.png&userId=1371661
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.28.45AM.png&userId=1371661
[24]: https://unsplash.com/photos/zNN6ubHmruI?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[25]: https://unsplash.com/photos/_4Ib-a8g9aA?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[26]: https://unsplash.com/photos/CoD2Q92UaEg?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[27]: https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[28]: https://www.dropbox.com/s/kmtzq6x4xkdn9y8/computational-essay.cdf?dl=0
*Nalin Bhardwaj, 2018-07-14*