Community RSS Feed
http://community.wolfram.com
RSS feed for Wolfram Community, showing discussions in the tag Wolfram Science, sorted by activity.
Is there a way to plot data as ListContourPlot inside a circle/disk?
http://community.wolfram.com/groups/-/m/t/1226511
I have data points in cylindrical coordinates that I need to display as a contour plot inside a circle. The plot is currently generated as an x-y plot, but I need it in cylindrical coordinates, so in 2D it would be a disk or a circle. Is there any way to generate this?
Omar Habib 2017-11-22T12:47:01Z
Problem with dynamic ListStepPlot
http://community.wolfram.com/groups/-/m/t/1225961
Dear Community,
I'm trying to construct a dynamic ListStepPlot via locators. The three locators (pt1, pt2, pt3) are confined inside a polygon. Upon running the code, however, the ListStepPlot constructed from the dynamic points does not show up. Could someone please have a look and help? The notebook is attached.
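For reference, a minimal working pattern for this kind of construction looks like the following. This is a sketch with placeholder points and a rectangular confinement region, not the attached notebook's code; a frequent cause of the symptom described is that the plot itself is not wrapped in Dynamic, so it never rebuilds when the locators move.

```mathematica
DynamicModule[{pts = {{1, 1}, {2, 2}, {3, 1.5}}},
 Column[{
   (* locators confined to a rectangular region; replace with the polygon as needed *)
   LocatorPane[Dynamic[pts],
    Graphics[{EdgeForm[Black], FaceForm[None], Rectangle[{0, 0}, {4, 3}]},
     PlotRange -> {{0, 4}, {0, 3}}]],
   (* the plot must be wrapped in Dynamic so it rebuilds when pts changes *)
   Dynamic[ListStepPlot[Sort[pts], PlotRange -> {{0, 4}, {0, 3}},
     ImageSize -> 300]]
   }]]
```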
Tx in advance,
best regards
Andras
Andras Gilicz 2017-11-21T14:11:59Z
Use of Symbolic Vectors for Physics
http://community.wolfram.com/groups/-/m/t/1226443
Dear all
Is it possible to use symbolic vectors to perform the typical vector operations we use in physics (for example, electromagnetism)? I mean, for example, applying vector identities to vector fields, like
rot(rot(A)) = grad(div(A)) - laplacian(A)
or, more interestingly for me, to obtain dispersion relations from differential equations. I have seen the documentation for symbolic vectors and tensors, but I assume that there should be more detailed information somewhere. There is already a thread on a closely related subject:
http://community.wolfram.com/groups/-/m/t/1127218
but I am not able to obtain a practical conclusion.
Is there an example or tutorial for this type of calculation in Mathematica?
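For what it's worth, the identity above can at least be verified componentwise in Cartesian coordinates using explicit (but generic) field components rather than fully symbolic vectors. A sketch, with placeholder component functions ax, ay, az:

```mathematica
A = {ax[x, y, z], ay[x, y, z], az[x, y, z]};

(* curl(curl A) == grad(div A) - laplacian A, checked componentwise *)
Simplify[
 Curl[Curl[A, {x, y, z}], {x, y, z}] ==
  Grad[Div[A, {x, y, z}], {x, y, z}] - Laplacian[A, {x, y, z}]]
(* True *)
```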
Thank you. Best wishes.
Carlos Soria-Hoyo
Sevilla
SPAIN
Carlos Soria-Hoyo 2017-11-22T09:23:01Z
Make a proper classic Klein bottle in version 11.2 of Mathematica?
http://community.wolfram.com/groups/-/m/t/1224031
Everything works fine in version 10, but in version 11.2 there is a slice taken out of the Klein bottle image. It does this no matter which version of the code I use in version 11.2. It also happens while making the stylized Klein bottle described here:
http://members.wolfram.com/jeffb/visualization/klein.shtml
Here is an example of code that produces the faulty output. Can someone come up with code that makes a proper Klein bottle, without a slice taken out of it, in version 11.2?
klein[u_, v_] := Module[{
bx = 6 Cos[u] (1 + Sin[u]),
by = 16 Sin[u],
rad = 4 (1 - Cos[u]/2),
X, Y, Z},
X = If[Pi < u <= 2 Pi, bx + rad Cos[v + Pi], bx + rad Cos[u] Cos[v]];
Y = If[Pi < u <= 2 Pi, by, by + rad Sin[u] Cos[v]];
Z = rad Sin[v];
{X, Y, Z}
]
ParametricPlot3D[klein[u, v], {u, 0, 2 Pi}, {v, 0, 2 Pi},
Axes -> False, Boxed -> False, ViewPoint -> {1.4, -2.6, -1.7}]
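One likely culprit is that newer versions detect the piecewise If boundary at u = Pi as a discontinuity and automatically exclude it from the surface. If so, turning off exclusion detection should close the slice; this is a guess based on the symptom, not a confirmed fix:

```mathematica
(* same plot, but with automatic discontinuity exclusion disabled *)
ParametricPlot3D[klein[u, v], {u, 0, 2 Pi}, {v, 0, 2 Pi},
 Axes -> False, Boxed -> False, Exclusions -> None,
 ViewPoint -> {1.4, -2.6, -1.7}]
```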
![enter image description here][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=werrwrrw.png&userId=11733
Roy Scott 2017-11-18T01:58:23Z
Predicting Winning Odds of Italian Serie A Soccer 1934-2017
http://community.wolfram.com/groups/-/m/t/1218215
Dear Community! Although gambling is not my hobby, validating the predictor engine inside Wolfram Mathematica 11.2 is. Sooner or later someone may get huge benefits in gambling using predictors as they exist already today. I got the La Liga "Outcomes" 100% correct, but my example here will be an analysis of the Italian Serie A, years 1934-2017. The data is downloadable from
https://github.com/jalapic/engsoccerdata
Note that the source was found by googling. To avoid being too lengthy, only a minimal amount of output is shown. For the prediction, only **goals** and **dates** are needed. I have always wondered whether it is useful to feed the prediction as much statistical data as possible; it turns out that this should be avoided, i.e. the right data should be selected for the best results. The CSV file should first be downloaded and slightly modified in order to use it the way Mher Davtyan did in his original example:
http://community.wolfram.com/groups/-/m/t/908804
then the path may be seen from the import function below:
oddsData =
SemanticImport[
"C:\\Users\\user\\Documents\\engsoccerdata-master\\data-raw\\italy3.\
csv", <|1 -> "Integer", 2 -> "String", 3 -> "Integer", 4 -> "String",
5 -> "String", 7 -> Automatic, 8 -> Automatic|>]
giving an output:
![Input data][1]
Yes, it consists of 25784 games. The additions to the Dataset are listed below and explained later:
scoreAdded =
oddsData[All,
Append[#,
"Score" ->
ToString[Slot["hgoal"]] <> ":" <> ToString[Slot["vgoal"]]] &]
homeAndAway =
scoreAdded[
Association["Home" -> GroupBy[#home &],
"Away" -> GroupBy[#visitor &]]];
data01 = (Flatten[(Transpose[{homeAndAway["Home",
scoreAdded[#]["home"], All, "Nr"] // Normal,
Thread[Rule[
homeAndAway["Home", scoreAdded[#]["home"], All,
"home"] // Normal,
homeAndAway["Home", scoreAdded[#]["home"], All,
"hgoal"] // Normal // Accumulate]]}] & /@
Range[1, 25784, 1]) // Union, 1] // Sort)[[All, 2]][[All,
2]];
data02 = (Flatten[(Transpose[{homeAndAway["Away",
scoreAdded[#]["visitor"], All, "Nr"] // Normal,
Thread[Rule[
homeAndAway["Away", scoreAdded[#]["visitor"], All,
"visitor"] // Normal,
homeAndAway["Away", scoreAdded[#]["visitor"], All,
"vgoal"] // Normal // Accumulate]]}] & /@
Range[1, 25784, 1]) // Union, 1] // Sort)[[All, 2]][[All,
2]];
data03 = (Flatten[(Transpose[{homeAndAway["Away",
scoreAdded[#]["visitor"], All, "Nr"] // Normal,
Thread[Rule[
homeAndAway["Away", scoreAdded[#]["visitor"], All,
"visitor"] // Normal,
homeAndAway["Away", scoreAdded[#]["visitor"], All,
"hgoal"] // Normal // Accumulate]]}] & /@
Range[1, 25784, 1]) // Union, 1] // Sort)[[All, 2]][[All,
2]];
data04 = (Flatten[(Transpose[{homeAndAway["Home",
scoreAdded[#]["home"], All, "Nr"] // Normal,
Thread[Rule[
homeAndAway["Home", scoreAdded[#]["home"], All,
"home"] // Normal,
homeAndAway["Home", scoreAdded[#]["home"], All,
"vgoal"] // Normal // Accumulate]]}] & /@
Range[1, 25784, 1]) // Union, 1] // Sort)[[All, 2]][[All,
2]];
addIndividualGoalsCumul[d_] :=
Join[d, Association["homeTeamHomeRating" -> (data01[[d["Nr"]]]),
"VisitorAwayRating" -> (data02[[d["Nr"]]]),
"homeTeamHomeConceded" -> (data04[[d["Nr"]]]),
"VisitorAwayConceded" -> (data03[[d["Nr"]]])]];
ratingsAdded = Map[addIndividualGoalsCumul[#] &, scoreAdded]
extractDate[a_] :=
Append[a, <|
"DateSuccessive" ->
FromDigits[StringSplit[a["Date"], "-"][[1]]] 100 - 190000 +
FromDigits[StringSplit[a["Date"], "-"][[2]]]|>]
dateNew = Map[extractDate[#] &, ratingsAdded]
addOutcome[a_Association] :=
If[a["hgoal"] == a["vgoal"], Append[a, Association["Outcome" -> 0]],
If[a["hgoal"] > a["vgoal"], Append[a, Association["Outcome" -> 1]],
Append[a, Association["Outcome" -> -1]]]]
outcomeAdded = Map[addOutcome[#] &, dateNew]
where data01, data02, data03 and data04 compute the cumulative scores and conceded scores for every team; "DateSuccessive" encodes the "Date" as year and month, mapping 1934-06-23 to 3406 and 2017-05-28 to 11705; and "Outcome" is defined as in the original post. The resulting table is:
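The "DateSuccessive" encoding can be sanity-checked on the two example dates above, by pulling the formula out of extractDate into a standalone helper (enc is a name introduced here for illustration):

```mathematica
(* year*100 - 190000 + month, as in extractDate above *)
enc[date_String] := With[{p = StringSplit[date, "-"]},
  FromDigits[p[[1]]] 100 - 190000 + FromDigits[p[[2]]]]

{enc["1934-06-23"], enc["2017-05-28"]}
(* {3406, 11705} *)
```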
![Final Dataset 1][2]
![Final Dataset 2][3]
The test data starts from index:
In[96]:= Round[0.99218 Length[outcomeAdded]]
Out[96]= 25582
and consists of all 202 games played in 2017, which form the test set for the classifier:
![Testset][4]
Above we see the test set and the needed data, obtained using the code:
n = Round[0.99218 Length[outcomeAdded]];
{trainingSet, testSet} =
TakeDrop[outcomeAdded[
All, {"DateSuccessive", "homeTeamHomeRating", "VisitorAwayRating",
"homeTeamHomeConceded", "VisitorAwayConceded", "hgoal", "vgoal",
"Outcome"}], n];
c = Classify[trainingSet -> "Outcome", Method -> "LogisticRegression"]
cm = ClassifierMeasurements[c, testSet -> "Outcome"]
with following results:
![enter image description here][5]
Visualisation:
In[138]:= actaulValue = testSet[All, "Outcome"] // Normal
Out[138]= {1, 1, -1, -1, 1, 1, 1, 0, -1, -1, 1, 1, 1, 1, 1, 0, 1, -1, \
0, -1, -1, 1, 1, 1, 0, 1, -1, -1, 1, 1, -1, 0, 1, 0, 0, 1, -1, 0, 1, \
-1, -1, 1, 0, 0, -1, 1, -1, 1, -1, 1, -1, -1, 1, 1, -1, -1, 1, -1, 1, \
-1, 1, 0, 1, 1, -1, -1, -1, 1, 1, 1, 0, -1, 1, -1, 1, -1, 0, -1, 1, \
0, -1, 0, 1, -1, 1, 0, -1, -1, 0, -1, 1, 0, 1, -1, 1, 1, 1, 1, -1, \
-1, -1, 1, 1, 0, 1, 1, 0, -1, -1, 1, -1, 1, 1, -1, -1, 1, -1, 0, -1, \
0, 0, -1, 0, 0, 1, -1, -1, 1, -1, 1, 0, 1, 1, -1, 0, 0, 1, 0, -1, 0, \
1, 0, 1, 1, -1, 1, 1, -1, -1, 0, 1, -1, 0, 0, 1, 1, 0, -1, -1, -1, 1, \
-1, 0, 1, 0, 1, 1, 1, -1, -1, 0, 0, 0, 1, 1, 1, 1, -1, 1, 1, 0, -1, \
-1, 1, -1, 1, 1, -1, 1, 1, 0, 1, 1, -1, 1, 1, 0, 1, 1, 1, -1, 1}
In[139]:= predictedValue =
c[Normal[testSet[[All, Values]]][[All, ;; 7]]]
Out[139]= {1, 1, -1, -1, 1, 1, 1, 0, -1, -1, 1, 1, 1, 1, 1, 0, 1, -1, \
0, -1, -1, 1, 1, 1, 0, 1, -1, -1, 1, 1, -1, 0, 1, 0, 0, 1, -1, 0, 1, \
-1, -1, 1, 0, 0, -1, 1, -1, 1, -1, 1, -1, -1, 1, 1, -1, -1, 1, -1, 1, \
-1, 1, 0, 1, 1, -1, -1, -1, 1, 1, 1, 0, -1, 1, -1, 1, -1, 0, -1, 1, \
0, -1, 0, 1, -1, 1, 0, -1, -1, 0, -1, 1, 0, 1, -1, 1, 1, 1, 1, -1, \
-1, -1, 1, 1, 0, 1, 1, 0, -1, -1, 1, -1, 1, 1, -1, -1, 1, -1, 0, -1, \
0, 0, -1, 0, 0, 1, -1, -1, 1, -1, 1, 0, 1, 1, -1, 0, 0, 1, 0, -1, 0, \
1, 0, 1, 1, -1, 1, 1, -1, -1, 0, 1, -1, 0, 0, 1, 1, 0, -1, -1, -1, 1, \
-1, 0, 1, 0, 1, 1, 1, -1, -1, 0, 0, 0, 1, 1, 1, 1, -1, 1, 1, 0, -1, \
-1, 1, -1, 1, 1, -1, 1, 1, 0, 1, 1, -1, 1, 1, 0, 1, 1, 1, -1, 1}
In[140]:= difference = actaulValue - predictedValue
Out[140]= {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
In[141]:= N@Normalize[Counts[Abs[difference]], Total]
Out[141]= <|0 -> 1.|>
We conclude that all 2017 game outcomes were correctly classified.
Now, prediction with a slightly larger set:
In[143]:= n = Round[0.99 Length[outcomeAdded]]
Out[143]= 25526
In[144]:= {trainingSetFull, testSetFull} = TakeDrop[outcomeAdded, n];
{trainingSet1,
testSet1} = {trainingSetFull[
All, {"DateSuccessive", "homeTeamHomeRating", "VisitorAwayRating",
"homeTeamHomeConceded", "VisitorAwayConceded", "hgoal", "vgoal",
"Outcome"}],
testSetFull[
All, {"DateSuccessive", "homeTeamHomeRating", "VisitorAwayRating",
"homeTeamHomeConceded", "VisitorAwayConceded", "hgoal", "vgoal",
"Outcome"}]};
![enter image description here][6]
In[156]:= predictedValue =
Round[p[Normal[testSet1[[All, Values]]][[All, ;; 7]]], 1]
Out[156]= {1, 1, -1, -1, 1, 0, 1, -1, 1, -1, 1, 0, 1, -1, 1, 1, 0, 1, \
1, 1, -1, 1, 0, 1, 1, 0, 1, 1, -1, -1, 0, -1, 1, -1, -1, 1, 1, 1, 1, \
1, 0, 1, 0, 1, 1, -1, -1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, -1, -1, 1, \
1, 1, 0, -1, -1, 1, 1, 1, 1, 1, 0, 1, -1, 0, -1, -1, 1, 1, 1, 0, 1, \
0, -1, 1, 1, -1, 0, 1, 0, 0, 0, -1, 0, 1, -1, -1, 1, 0, 0, -1, 1, -1, \
1, -1, 1, -1, -1, 1, 1, -1, -1, 1, -1, 1, -1, 1, 0, 1, 1, -1, -1, -1, \
1, 1, 1, 0, -1, 1, -1, 1, -1, 0, -1, 1, 0, -1, 0, 1, -1, 1, 0, -1, \
-1, 0, -1, 1, 0, 1, -1, 1, 1, 1, 1, -1, -1, -1, 1, 1, 0, 1, 1, 0, -1, \
-1, 1, -1, 1, 1, -1, -1, 1, -1, 0, -1, 0, 0, -1, 0, 0, 1, -1, -1, 1, \
-1, 1, 0, 1, 1, -1, 0, 0, 1, 0, -1, 0, 1, 0, 1, 0, -1, 1, 1, -1, -1, \
0, 1, -1, 0, 0, 1, 1, 0, -1, -1, -1, 1, -1, 0, 1, 0, 1, 1, 1, -1, -1, \
0, 0, 0, 1, 1, 1, 1, -1, 1, 1, 0, -1, -1, 1, -1, 1, 1, -1, 1, 1, 0, \
1, 1, -1, 0, 1, 0, 1, 1, 1, -1, 1}
In[157]:= actaulValue = testSet1[All, "Outcome"] // Normal
Out[157]= {1, 1, -1, -1, 1, 0, 1, -1, 1, -1, 1, 0, 1, -1, 1, 1, 0, 1, \
1, 1, -1, 1, 0, 1, 1, 0, 1, 1, -1, -1, 0, -1, 1, -1, -1, 1, 1, 1, 1, \
1, 0, 1, -1, 1, 1, -1, -1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, -1, -1, \
1, 1, 1, 0, -1, -1, 1, 1, 1, 1, 1, 0, 1, -1, 0, -1, -1, 1, 1, 1, 0, \
1, -1, -1, 1, 1, -1, 0, 1, 0, 0, 1, -1, 0, 1, -1, -1, 1, 0, 0, -1, 1, \
-1, 1, -1, 1, -1, -1, 1, 1, -1, -1, 1, -1, 1, -1, 1, 0, 1, 1, -1, -1, \
-1, 1, 1, 1, 0, -1, 1, -1, 1, -1, 0, -1, 1, 0, -1, 0, 1, -1, 1, 0, \
-1, -1, 0, -1, 1, 0, 1, -1, 1, 1, 1, 1, -1, -1, -1, 1, 1, 0, 1, 1, 0, \
-1, -1, 1, -1, 1, 1, -1, -1, 1, -1, 0, -1, 0, 0, -1, 0, 0, 1, -1, -1, \
1, -1, 1, 0, 1, 1, -1, 0, 0, 1, 0, -1, 0, 1, 0, 1, 1, -1, 1, 1, -1, \
-1, 0, 1, -1, 0, 0, 1, 1, 0, -1, -1, -1, 1, -1, 0, 1, 0, 1, 1, 1, -1, \
-1, 0, 0, 0, 1, 1, 1, 1, -1, 1, 1, 0, -1, -1, 1, -1, 1, 1, -1, 1, 1, \
0, 1, 1, -1, 1, 1, 0, 1, 1, 1, -1, 1}
In[158]:= difference = Round[actaulValue - predictedValue, 0.01]
Out[158]= {0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., -1., 0., 0., 0., 0., 0., \
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \
-1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., \
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., \
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., \
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., \
0., 0., 0., 0., 0., 0.}
In[159]:= N@Normalize[Counts[Abs[difference]], Total]
Out[159]= <|0. -> 0.98062, 1. -> 0.0193798|>
showing 98 % accuracy.
## 10 Random Samples ##
randomMatches = RandomSample[testSetFull, 10]
![enter image description here][7]
Goal prediction is not as exact:
![enter image description here][8]
and for visiting team:
![enter image description here][9]
giving about 60% accuracy.
With the "Outcome" correct, I will just show two tables of predicted and actual scores:
![enter image description here][10]
## Conclusions ##
Clearly, game scores are harder to predict: almost 60% are correct and 38% differ by one goal. The predictor seems to work better when a time scale is added. A team's performance changes over time, and this must be reflected in the data.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture1.JPG&userId=94243
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture2.JPG&userId=94243
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture3.JPG&userId=94243
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture4.JPG&userId=94243
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture5.JPG&userId=94243
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture6.JPG&userId=94243
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture7.JPG&userId=94243
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture8.JPG&userId=94243
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture9.JPG&userId=94243
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture11.JPG&userId=94243
Tanel Telliskivi 2017-11-08T01:18:39Z
Pairs Trading with Copulas
http://community.wolfram.com/groups/-/m/t/1111149
**Introduction**
In a previous post, [Copulas in Risk Management][1], I covered the theory and applications of copulas in the area of risk management, pointing out the potential benefits of the approach and how it could be used to improve estimates of Value-at-Risk by incorporating important empirical features of asset processes, such as asymmetric correlation and heavy tails.
In this post I take a different tack, to show how copula models can be applied in pairs trading and statistical arbitrage strategies.
This is not a new concept - it stems from when copulas began to be widely adopted in financial engineering, risk management and credit derivatives modeling. But it remains relatively under-explored compared to more traditional techniques in this field. Fresh research suggests that it may be a useful adjunct to the more common methods applied in pairs trading, and may even be a more robust methodology altogether, as we shall see.
**Traditional Approaches to Pairs Trading**
Researchers often use simple linear correlation or distance metrics as the basis for their statistical arbitrage strategies. The problem is that statistical relationships may be nonlinear or nonstationary. Correlations (and betas) that have fluctuated in a defined range over a considerable period of time may suddenly break down, producing substantial losses.
A more sophisticated technique is the Kalman Filter, which can be used as a means of dynamically updating the estimated correlation or relative beta between pairs (or portfolios) of stocks, a technique I have written about in the post Statistical Arbitrage with the Kalman Filter.
Another commonly employed econometric technique relies on cointegration relationships between pairs or small portfolios of stocks, as described in my post on Developing Statistical Arbitrage Strategies Using Cointegration. The central idea is that, in theory, cointegration is a more stable and reliable basis for assessing the relationship between stocks than correlation.
Researchers often use a combination of methods, for example by requiring stocks to be both cointegrated and with stable, high correlation throughout the in-sample formation period in which betas are estimated.
In all these cases, however, the challenge is that, no matter how they are derived or estimated, statistical relationships have a tendency towards instability. Even a combination of several of these methods often fails to detect signs of a breakdown in statistical relationships. There is even evidence that cointegration models are no more robust or reliable than simple correlations. For example, in his paper On the Persistence of Cointegration in Pairs Trading, Matthew Clegg assesses the persistence of cointegration among U.S. equities in the calendar years 2002-2012, comprising over 860,000 pairs in total. He concludes that “the evidence does not support the hypothesis that cointegration is a persistent property”.
**Pairs Trading in the S&P500 and Nasdaq Indices**
To illustrate the copula methodology I will use an equity pair comprising the S&P 500 and Nasdaq indices. These are not tradable assets, but the approach is the same regardless and will serve for the purposes of demonstrating the technique.
We begin by gathering daily data on the indices and calculating the log returns series. We will use the data from 2010 to 2015 as the in-sample “formation” period, and test the strategy out of sample on data from Jan 2016-Feb 2017.
![enter image description here][2]
![enter image description here][3]
![enter image description here][4]
![enter image description here][5]
![enter image description here][6]
The chart below shows a scatter plot of daily percentage log returns on the SP500 and NASDAQ indices.
![enter image description here][7]
![enter image description here][8]
**MODELING**
**Marginal Distribution Fitting**
In the post Copulas in Risk Management it was shown that the returns series for the two indices were well-represented by Student T distributions. I replicate that analysis here, estimating the parameters by maximum likelihood and proceed from there to test each distribution for goodness of fit. In each case, the Student T distribution appears to provide an adequate fit for both series.
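In outline, the fitting and goodness-of-fit steps shown in the images below follow this pattern. This is a sketch: returnsSP and returnsNQ stand for the two log-returns lists, which are not reproduced here.

```mathematica
(* fit location-scale Student T marginals by maximum likelihood *)
tSP = EstimatedDistribution[returnsSP, StudentTDistribution[mu, sigma, nu]];
tNQ = EstimatedDistribution[returnsNQ, StudentTDistribution[mu, sigma, nu]];

(* goodness of fit: p-values above 0.05 mean the fit is not rejected *)
DistributionFitTest[returnsSP, tSP]
DistributionFitTest[returnsNQ, tNQ]
```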
![enter image description here][9]
![enter image description here][10]
![enter image description here][11]
![enter image description here][12]
**Copula Calibration**
We next calibrate the parameters for the Gaussian copula by maximum likelihood, from which we derive the joint distribution for returns in the two indices via Sklar's decomposition. This will be used directly in the pairs trading algorithm. As pointed out previously, there are several alternatives to MLE, including the Method of Moments, for example, and these are listed in the Mathematica documentation for the EstimatedDistribution function.
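The calibration step has roughly the following shape. A sketch: tSP and tNQ stand for the fitted Student T marginals from the previous step, and returnsSP/returnsNQ for the paired log-returns lists.

```mathematica
(* estimate the Gaussian ("Binormal") copula correlation by maximum likelihood;
   the result is the full joint distribution via Sklar's decomposition *)
joint = EstimatedDistribution[
   Transpose[{returnsSP, returnsNQ}],
   CopulaDistribution[{"Binormal", rho}, {tSP, tNQ}]];
```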
![enter image description here][13]
![enter image description here][14]
![enter image description here][15]
![enter image description here][16]
![enter image description here][17]
![enter image description here][18]
**Pairs Trading with the Copula Model**
Once we have successfully fitted marginal distributions for the two series and a copula distribution to describe their relationship, we are able to derive the joint distribution. This means that we can directly calculate the joint probability of each pair of data observations. So, for instance, we find that the probability of a return in the S&P 500 of 5% or more, together with a return in the Nasdaq of 1% or higher, is approximately 0.2%:
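In code, a joint tail probability of this kind is a one-liner. A sketch, assuming joint is the fitted copula distribution from the calibration step and returns are measured in percent:

```mathematica
(* P(S&P 500 return >= 5% and Nasdaq return >= 1%) under the fitted joint distribution *)
NProbability[x >= 5 && y >= 1, {x, y} \[Distributed] joint]
```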
![enter image description here][19]
![enter image description here][20]
So the way we test our model is to calculate the daily returns for the two indices during the out-of-sample period from Jan 2016 to Feb 2017 and compute the probability of each pair of daily observations. On days where we see observation pairs with abnormally low estimated probabilities, we trade the pair accordingly over the following day.
Naturally, there are multiple issues with this simplistic approach. To begin with, the indices are not tradable and if they were we would have to account for transaction costs including the bid-offer spread. Then there is the issue of determining where to set the probability threshold for initiating a trade. We also need to decide on criteria to try to optimize the trade holding period or trade exit rules. And, finally, we need to think about trade expression: for example, we might attempt to trade both legs passively, perhaps crossing the spread to fill the remaining leg when an order for one of the pairs is filled.
But none of these issues are specific to the copula approach - they apply equally to all of the methods discussed previously. So, for the sake of clarity, I am going to ignore them. In this analysis I pick a threshold probability level of 15% and assume we hold the trade for one day only, opening and closing the trade at the start and end of the day after we receive a signal. In computing the returns for each trade I ignore any transaction costs.
First, we gather data for the test period:
![enter image description here][21]
Next, we use the estimated joint distribution to compute the probability of each daily observation of index returns. We gather the daily returns series and associated probability series into a single temporal variable:
![enter image description here][22]
![enter image description here][23]
We plot the time series of index returns and associated probabilities as follows:
![enter image description here][24]
![enter image description here][25]
![enter image description here][26]
![enter image description here][27]
**Trade Signal Generation**
The table below lists the index returns and joint probabilities over the first several days of the series. The sequence of trade signals is as follows:
After a very low probability reading for 2016/1/4, we take equally weighted positions short the S&P500 Index and long the Nasdaq index on 2016/1/5. We close the position at the end of the day, producing a total return of 0.44%. Similar signals are generated on 2016/1/6, 2016/1/7, 2016/1/8, 2016/1/13 , 2016/1/15 and 2016/1/20 (assuming a 15% probability threshold). We take the reverse trade (Buy the S&P500, Sell the Nasdaq) on only one occasion in the initial part of the sample, on 2016/1/14.
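The thresholding part of the signal rule amounts to flagging days whose joint probability falls below 15%. A sketch; probs stands for the list of daily joint-probability estimates, which is not reproduced here, and the direction of each trade is determined separately from the sign of the divergence between the two indices:

```mathematica
threshold = 0.15;

(* indices of days that trigger a trade on the following day *)
signalDays = Flatten@Position[probs, p_ /; p < threshold];
```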
![enter image description here][28]
![enter image description here][29]
**Pairs Trading Strategy Results**
We are now ready to apply the trading algorithm to the entire sample and chart the resulting P&L.
![enter image description here][30]
![enter image description here][31]
![enter image description here][32]
![enter image description here][33]
**Comment on Strategy Performance**
The performance of the strategy over the out-of-sample period, at just under 4%, can hardly be described as stellar. But this is largely due to the dampening of volatility seen in both indices over the last year, which is reflected in the progressively lower volatility of joint probabilities over the course of the test period. Such variations in signal frequency and trading strategy performance are commonplace in any statistical arbitrage strategy, regardless of the methodology used to generate the signals.
The obvious remedy is to create similar trading algorithms for a large number of pairs and combine them together in an overall portfolio that will produce a sufficient number of signals and trading opportunities to make the performance sufficiently attractive. One of the benefits of statistical arbitrage strategies developed in this way is their highly efficient use of capital, since the combination of long and short positions minimizes the margin requirement for each trade and for the portfolio as a whole.
Finally, it is worth noting here that, in principle, one could easily create similar copula-based arbitrage strategies for triplets, quadruplets, or any (reasonably small) number of assets. The principal restriction lies in the increasing difficulty of estimating the copulas and joint densities, given the slow convergence of the MLE method.
**Recent Research**
In the last few years several researchers have begun exploring the application of copulas as a basis for statistical arbitrage. In their paper “Nonlinear dependence modeling with bivariate copulas: Statistical arbitrage pairs trading on the S&P 100”, Krauss and Stubinger apply the copula approach to pairs drawn from the universe of S&P 100 index constituents, with promising results. They conclude that their “findings pose a severe challenge to the semi-strong form of market efficiency and demonstrate a sophisticated yet profitable alternative to classical pairs trading”.
In the paper by Rad, et al., cited below, the researchers compare several different methods for pairs trading strategies. They find that all of the tested methods produce economically significant returns, but only the performance of the copula-based approach remains consistent after 2009. Further, the copula method shows better performance for its unconverged trades compared to those of the other methods.
**Conclusion**
The application of copulas to statistical arbitrage strategies is an interesting and relatively under-explored alternative to the usual distance and correlation based methods. In addition to its sound theoretical underpinnings, the copula approach appears to offer greater consistency in performance compared to traditional techniques, whose efficacy has declined since the financial crisis of 2008/09. The benefits of the approach must be weighed against its greater computational complexity, although with the growth in the power of modeling software in recent years this represents less of an obstacle than previously.
**References**
Clegg, M., On the Persistence of Cointegration in Pairs Trading, Jan. 2014.
Krauss, C. and Stubinger, J., Nonlinear dependence modeling with bivariate copulas: Statistical arbitrage pairs trading on the S&P 100, Institut für Wirtschaftspolitik und Quantitative Wirtschaftsforschung, No. 15/2015.
Rad, H., Low, R. K. Y. and Faff, R., The profitability of pairs trading strategies: distance, cointegration, and copula methods, Quantitative Finance, DOI: 10.1080/14697688.2016.1164337, 2015.
[1]: http://jonathankinlay.com/2017/01/copulas-risk-management/
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_1.gif&userId=773999
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_2.png&userId=773999
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7037Fig1.png&userId=773999
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_3.gif&userId=773999
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_4.gif&userId=773999
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_5.png&userId=773999
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_6.gif&userId=773999
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_7.gif&userId=773999
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_8.png&userId=773999
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_10.gif&userId=773999
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4282Fig2.png&userId=773999
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_16.gif&userId=773999
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_17.png&userId=773999
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_18.png&userId=773999
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_19.gif&userId=773999
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_20.png&userId=773999
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_21.gif&userId=773999
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_22.png&userId=773999
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_23.png&userId=773999
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_24.gif&userId=773999
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_25.gif&userId=773999
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_26.gif&userId=773999
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_27.png&userId=773999
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_28.gif&userId=773999
[26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_29.png&userId=773999
[27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_30.gif&userId=773999
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_31.gif&userId=773999
[29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2430Fig3.png&userId=773999
[30]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_32.gif&userId=773999
[31]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_33.gif&userId=773999
[32]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_34.png&userId=773999
[33]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PairsTradingwithCopulas_35.gif&userId=773999
Jonathan Kinlay 2017-05-30T17:41:07Z
Professional deployment limitations of Enterprise CDF?
http://community.wolfram.com/groups/-/m/t/1225644
Hello,
I have been developing CDF applications to be deployed (sold); some will be in mass quantities (hundreds), but some will be tailored to a single application. However, I am faced with a rather important limitation when it comes to protecting my IP and controlling the use of my code. (One side complaint: even though it is not "nice", I accept the fact that the user has to download the CDF Player, which is a huge file.) My main concerns are:
1- Encrypting the code. I followed http://www.wolfram.com/broadcast/video.php?c=88&v=596, but it looks like it predates the Enterprise edition. I could not find any official documentation about the security of Encode and SetAttributes. This is unsettling. My question is: is there a real encryption option for CDF deployment that people agree upon?
2- Control after deployment. There is no option to control how long the CDF can run, or whether it can run on only one computer. Of course, you can manually set a time limitation in every code and ask your client for a MAC address, etc. But that is unproductive and unprofessional. I am surprised that Wolfram came up with a product that is advertised as commercial deployment and doesn't provide the necessary tools for it (to my knowledge). My question is: how do I deploy and keep control/licensing of my solvers?
I am aware that there is a cloud option, which is by itself really confusing: Mathematica Online, Wolfram Development Platform, Wolfram One. I read many posts about those and it looks like even Wolfram Research does not know how they are going to differentiate between the products.
In my view, one of the weakest points of Wolfram Research is the lack of advanced tutorials. They like to show concepts in a really basic environment, which tends to fail for real-world applications.
Thank you for the responses.Erdem Uguz2017-11-21T10:16:01Z[✓] Plot rational functions?
http://community.wolfram.com/groups/-/m/t/1225057
Please, could anybody help me? I want to plot the graphs of elementary rational functions like f(x)=1/x or g(x)=(x^2+1)/(x^2-4), but what I get is not correct. I have tried to indicate the intervals of the domain of the function with inequalities, but it does not work; for instance, for g(x): -6<x<-2, -2<x<2, 2<x<6. I understand it is a problem with the x values that make the denominator zero. I attach a file with the examples. Thank you very much.
----------
Plot[f[x] = 1/x, {x, -6, 6}, PlotStyle -> Directive[RGBColor[1, 0, 0], AbsoluteThickness[2.25],
Arrowheads[{-.05, .05}]], GridLines -> {Range[-6, 6, 1], Range[-6, 6, 1]},
Ticks -> {Range[-6, 6, 1], Range[-6, 6, 1]}] /. Line -> Arrow
![enter image description here][1]
Plot[f[x] = (1 + x^2)/(-4 + x^2), {x, -6, 6}, PlotStyle -> Directive[RGBColor[1, 0, 0],
AbsoluteThickness[2.25], Arrowheads[{-.05, .05}]], GridLines -> {Range[-6, 6, 1], Range[-6, 6, 1]},
Ticks -> {Range[-6, 6, 1], Range[-6, 6, 1]}] /. Line -> Arrow
![enter image description here][2]
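If the unwanted artifact is the near-vertical segments at the poles x = ±2, Plot's Exclusions option usually removes them; a sketch (the function definition is pulled out of the Plot call, which is generally safer than assigning inside Plot):

    g[x_] := (x^2 + 1)/(x^2 - 4);
    Plot[g[x], {x, -6, 6}, Exclusions -> {x == -2, x == 2},
      PlotStyle -> Directive[Red, AbsoluteThickness[2.25],
        Arrowheads[{-.05, .05}]]] /. Line -> Arrow

With Exclusions, each continuous branch becomes its own Line primitive, so after the Line -> Arrow replacement each branch gets its own arrowheads.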
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=asdf43qtwrhgs.png&userId=11733
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4356yehtgfsd.png&userId=11733Salvador Benjamin P. R.2017-11-20T08:20:42ZIs Dr. Chung's symbolic package access available?
http://community.wolfram.com/groups/-/m/t/1225366
Dr. Chung recently received an award from Wolfram for his symbolic package. Does anyone know how to gain access?
here is his book:
[book][1]
defunct link:
http://symbcomp.gist.ac.kr/
defunct 2011 sample:
http://library.wolfram.com/infocenter/Conferences/8063/
[1]: https://link.springer.com/chapter/10.1007/978-3-662-44199-2_4Kay Herbert2017-11-21T04:30:43Z[✓] Color Function not correctly mapping to range?
http://community.wolfram.com/groups/-/m/t/1224856
Observe the following code:
nd[y_, a_, b_, n_] := (Binomial[b - a + 1, y]*Sum[((-1)^i)*Binomial[y, i]*((y - i)/(b - a + 1))^n, {i, 0, y}]);
mycolorfun = Function[Blend[{Black, Purple, Blue, Green, Yellow, Orange, Red, White}, Rescale[#, {0, 1}]]]
Manipulate[
ArrayPlot[Table[nd[y, 1, K, n], {K, 1, 25}, {n, 1, 25}],
ColorFunction -> mycolorfun, DataReversed -> True, Frame -> True,
FrameTicks -> Automatic,
FrameLabel -> {Rotate["K", -90 Degree], "N"}], {y, 1, 15, 1}]
Notice how when the slider is moved up to y=15, there is a white square that appears at N = 25 and K = 21:
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/8Qutc.png
My goal is for the color function to map the individual value of the output of my function to a color as dependent on its closeness to 0 or 1 (since my function is a probability density function, this makes sense).
However, look what happens when you check the actual value of the function for N = 25 and b=K=21:
nd[15, 1, 21, 25.0]
(out) 0.268495
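A likely explanation: ArrayPlot uses ColorFunctionScaling -> True by default, which rescales the data to its observed minimum and maximum before calling the color function, so the largest value in the table (about 0.27 here) gets mapped to 1, i.e. white. A sketch of the fix, reusing the definitions above:

    ArrayPlot[Table[nd[y, 1, K, n], {K, 1, 25}, {n, 1, 25}],
      ColorFunction -> mycolorfun, ColorFunctionScaling -> False,
      DataReversed -> True, Frame -> True,
      FrameLabel -> {Rotate["K", -90 Degree], "N"}]

With ColorFunctionScaling -> False the raw probabilities in [0, 1] are passed to the color function unchanged.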
I get 26%! 26% should be mapping to a color somewhere around Purple/Blue, but it's getting mapped all the way up to the color white! White is what's supposed to be mapped to when the output of my function is really close to 1! Clearly something is amiss here, and I cannot figure out what it is. Can somebody please explain what is going on and what I need to do to get the desired color output?John Travolski2017-11-19T17:59:19Z[✓] Combine plots made in a Do loop?
http://community.wolfram.com/groups/-/m/t/1225263
Is there a way to use Show, e.g. Show[ Do[Graphics3D[ .. ],{i,1,n}]]?David Torney2017-11-20T19:45:22ZTurn this system of differential equations into a single equation?
http://community.wolfram.com/groups/-/m/t/1224952
Consider the following code:
{
0 == -voltageC0[t] - 2*voltageC0''[t] + 2*voltageC1''[t],
0 == -voltageC1[t] - 2*voltageC1''[t] + 2*voltageC0''[t]
}
I am able to solve this system using DSolve, but now I want to do something different. I want to convert it into a single differential equation, expressed in terms of voltageC0 without reference to voltageC1, in this form:
0 == a0*voltageC0[t] + a1*voltageC0'[t] + a2*voltageC0''[t] + a3*voltageC0'''[t]
I show a 3rd-order form as an example, but that is just to illustrate the target form; the actual order may turn out to be more or less than 3 for all I know.
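For this particular symmetric pair, the elimination can be sketched directly: adding the two equations makes the second-derivative terms cancel, giving 0 == -(voltageC0[t] + voltageC1[t]), so every solution satisfies voltageC1 == -voltageC0; substituting back yields a single equation. A sketch (this shortcut relies on the symmetry of this system, not a general method):

    eqs = {0 == -voltageC0[t] - 2 voltageC0''[t] + 2 voltageC1''[t],
           0 == -voltageC1[t] - 2 voltageC1''[t] + 2 voltageC0''[t]};
    eqs[[1]] /. voltageC1 -> (-voltageC0[#] &)
    (* should reduce to 0 == -voltageC0[t] - 4 voltageC0''[t] *)

That would correspond to a0 = -1, a2 = -4 and a1 = a3 = 0 in the target form.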
I want to find what the values of a0, a1, etc. are. Does Mathematica have facilities to help with this? I believe the conversion should be possible, but I don't understand the process for converting.Joe Donaldson2017-11-20T01:58:40ZUsing the GPIO with the Wolfram Language + Raspberry Pi
http://community.wolfram.com/groups/-/m/t/157473
This post shows how to use the GPIO with the Wolfram Language on a Raspberry Pi.
To recreate this experiment you will need the following hardware (in addition to the Raspberry Pi itself):
[list]
[*][url=http://www.adafruit.com/products/301]Super Bright Blue 5mm LED[/url]
[*][url=http://www.adafruit.com/products/1105]Pi T-Cobbler Breakout Kit[/url]
[*][url=http://www.adafruit.com/products/239]Full sized breadboard[/url]
[*][url=http://www.adafruit.com/products/758]Male/male jumper wires[/url]
[*]A 400 ohm resistor (to prevent the LEDs from burning out).
[/list]
Set up the breadboard as shown: Plug the T-Cobbler into the breadboard with 13 pins in the E column and 13 pins in the G column. Use the jumper wires
to connect pins 4, 17, 27, 22, 18, 23, 24 and 25 to evenly spaced free rows lower on the breadboard. Connect 8 blue LEDs from each jumper wire row to
the blue - column, with the flattened cathode side on the blue - column. Complete the circuit by connecting the resistor from the blue - column to the GND pin.
Connect the ribbon cable to the T-Cobbler and the Raspberry Pi correctly, and turn on your Raspberry Pi.
[img=width: 512px; height: 683px;]/c/portal/getImageAttachment?filename=2-5344gpio_community.jpg&userId=11733[/img]
The GPIO interface requires root privilege for access so the Wolfram Language or Mathematica needs to be started as root for this experiment.
In a terminal start the Wolfram Language using the following command (as root):
[code]> sudo wolfram
Wolfram Language (Raspberry Pi Pilot Release)
Copyright 1988-2013 Wolfram Research
Information & help: wolfram.com/raspi
In[1]:=
[/code]
First we define the pins that correspond to connected LEDs:
[mcode]pins = {4,17,27,22,18,23,24,25}
[/mcode]
Next we can turn on individual LEDs by writing the value '1' to it:
[mcode]DeviceWrite[ "GPIO", First[pins] -> 1 ]
[/mcode]
And of course turn it back off, by writing the value '0':
[mcode]DeviceWrite[ "GPIO", First[pins] -> 0 ]
[/mcode]
Or turn the LEDs on and off one at a time:
[mcode]Do[
DeviceWrite[ "GPIO", pins[[i]]->1 ];
Pause[.2];
DeviceWrite[ "GPIO", pins[[i]]->0 ];
,{i,8}]
[/mcode]Arnoud Buzing2013-11-21T17:13:19ZGet FinancialData price history?
http://community.wolfram.com/groups/-/m/t/1097940
Since 16 May 2017, why am I not able to get the price history of US stock market tickers? Please see the output below! Has anything changed?
In[9]:= FinancialData["GE", {{2017, 1, 3}, {2017, 5, 15}}]
Out[9]= Missing["NotAvailable"]
In[10]:= FinancialData["IBM", {{2017, 1, 3}, {2017, 5, 15}}]
Out[10]= Missing["NotAvailable"]sridev ramaswamy2017-05-18T12:09:48ZWhy does the WolframKernel exit on OS X without a trace during Integrate?
http://community.wolfram.com/groups/-/m/t/1222503
I am trying to calculate a quantity from its density in 3D over a rectangular 2D window frame in the following way:
mflxx = Integrate[
mi3ret[[1, 1]], {y, -w/2,
w/2}, {z, -(c/2 + zf + d/2), -(c/2 + zf + d/2 + h)}]
where `mi3ret[[1,1]]` is a fairly long expression containing only elementary functions of the `x,y` and `z` variables, and it is the x-component of a 3D vector function. It also contains some constant parameters, like `a,b,c`. `zf,d,h` are additional constants for the definite integral. After a long calculation the `WolframKernel` is killed by the OS without any message or CrashReport. So, after rebooting the machine and running just Mathematica and the Activity Monitor, I made some screenshots to see how the memory is used, and put the memory-related values into lists. See the notebook attached, where the 0 minute is really about one and a half hours into the run. The machine is an Early 2011 Mac Pro with 64GB memory and a 500GB flash drive partitioned into two 250GB partitions. So, the questions are:
1. Why is the `WolframKernel` killed without a trace?
2. Can the OS be tuned in such a way that `Integrate` can finish its run?
3. Is there a command or function in Mathematica that would alert to the coming doom and allow the examination of the status of the machine, including the status of the `WolframKernel`?
4. Is it possible to configure Mathematica to use, for disk-based intermediate storage, the partition that has more free space (the boot drive itself has 150GB, the other partition has 230GB free)?
Thanks ahead,
JánosJanos Lobb2017-11-15T04:29:13ZWhy this simple case of watershed segmentation fails?
http://community.wolfram.com/groups/-/m/t/1225443
The question has been posted on StackExchange. https://mathematica.stackexchange.com/questions/160304/why-this-simple-case-of-watershed-segmentation-fails.
I tried a simple experiment today with `WatershedComponents`. I took this image from: https://fr.mathworks.com/help/images/ref/watershed.html **(readers should use the image from the link above and crop the image themselves, rather than using the image directly below)**
![enter image description here][1]
then I binarized the image
bin = Binarize[img];
the distance transform of the image yields dist:
dist = DistanceTransform[bin]//ImageAdjust;
![enter image description here][2]
Now to determine the seeds:
seeds = MaxDetect[dist]; (* notice two nice seeds in the image *)
![enter image description here][3]
Finally, using WatershedComponents I get this:
WatershedComponents[bin, seeds] // Colorize
![enter image description here][4]
**Note:** This is clearly how they should not be segmented. In contrast, please see the segmentation performed by MATLAB (link mentioned above). I tried using different methods but could not get a proper segmentation. What am I doing wrong?
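One thing worth trying (a sketch): WatershedComponents floods from minima, so run it on the negated distance transform rather than on the binary image, keeping the same markers:

    WatershedComponents[ColorNegate[dist], seeds] // Colorize

This is the same trick the MATLAB example uses (it watersheds the complement of the distance transform), so the basins grow out from the two seed maxima.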
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mask.png&userId=942204
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=dist.png&userId=942204
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=seeds.png&userId=942204
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=watershed.png&userId=942204Ali Hashmi2017-11-20T17:51:46Z"ImageEffect: EdgeStylization is not a known image effect." Suggestions?
http://community.wolfram.com/groups/-/m/t/1225304
Hi.
I'm trying to get someone else's functional mathematica notebook (worked on PC) to function on a mac. I'm using 11.0.1.0 and he built it in a somewhat more updated version.
Most of the code runs until I get the following error: "ImageEffect::effect: EdgeStylization is not a known image effect" and then everything goes downhill from there.
As far as I can see in Mathematica documentation, EdgeStylization is an effect part of ImageEffect. Has this been added only since 11.0.1.0? Any other ideas for why this is happening?
Thanks!
This is the command giving the error:
Table[Module[{cellNum, img0, img1, img2, img3, blobs, res, img4},
cellNum = 7;
img0 = ColorConvert[
ImageTake[
Import[pathUI359x041017p1, {"QuickTime", "Frames", {i}}],
3684 - # & /@ (cells[[cellNum, 1]]), cells[[cellNum, 2]]],
"Grayscale"];
img1 = EntropyFilter[ImageAdjust[CurvatureFlowFilter[img0, 20]], 1];
img2 = ImageEffect[MeanFilter[img1, 3] // ImageAdjust,
"EdgeStylization"];
img3 = DeleteSmallComponents[
Erosion[Binarize[
FillingTransform[
Dilation[Threshold[Erosion[img1 + img2, 1], .65], 1], .75]],
5], 1000];
blobs = SelectComponents[img3,
20000 > #Count > 1000 && #"Rectangularity" > .55 &];
res = ComponentMeasurements[ {MorphologicalComponents[blobs],
img0} , {"Centroid", "TotalIntensity", "Area"}][[All, 2]];
img4 = HighlightImage[
img0 // ImageAdjust, {{Opacity[.07], Green, blobs}, {Opacity[1],
Magenta, Style[Text[{
{ToString[Round[#[[2]]]]},
{ToString[Round[#[[3]]]]}
} , #[[1]]], 14] & /@ res}}];Adria LeBoeuf2017-11-20T12:46:00Z[✓] Solve/Reduce systems with an array of variables?
http://community.wolfram.com/groups/-/m/t/1224718
As a simple example I want to reduce the equation d1+2*d2==2, where d1 and d2 are either 0 or 1. This is easy:
In[147]:= Reduce[{d1 + 2 d2 == 2, d1 == 0 || d1 == 1,
d2 == 0 || d2 == 1}, {d1, d2}]
Out[147]= d1 == 0 && d2 == 1
But if I use an array d[i] instead of d1, d2 for the unknown variables, I cannot get it to work. The following seems to be syntactically correct, but it does not work:
In[152]:= Reduce[{d[1] + 2 d[2] == 2,
ForAll[i, d[i] == 0 || d[i] == 1]}, d]
During evaluation of In[152]:= Reduce::nsmet: This system cannot be solved with the methods available to Reduce.
Out[152]= Reduce[{d[1] + 2 d[2] == 2, \!\(
\*SubscriptBox[\(\[ForAll]\), \(i\)]\((d[i] == 0 || d[i] == 1)\)\)},
d]
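One formulation that does work is to give Reduce an explicit list of indexed variables and to encode the 0/1 constraint as integer bounds rather than with ForAll; a sketch:

    vars = Array[d, 2];
    Reduce[Join[{d[1] + 2 d[2] == 2}, Thread[0 <= vars <= 1]],
      vars, Integers]
    (* d[1] == 0 && d[2] == 1 *)

This scales to a large array: `vars = Array[d, 100]` works the same way, and the bounds together with the Integers domain restrict each d[i] to {0, 1}.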
Of course this example is extremely simplified. I need some solution for a large d-array of variables and some equation containing them.Werner Geiger2017-11-19T13:54:34ZWhy do grid marks appear when exporting SVG but not EPS?
http://community.wolfram.com/groups/-/m/t/1225125
Observe the following Mathematica Code:
nd[y_, a_, b_, n_] := (Binomial[b - a + 1, y]*Sum[((-1)^i)*Binomial[y, i]*((y - i)/(b - a + 1))^n, {i, 0, y}]);
Unprotect[ColorData];
ColorData["My_Rainbow"] = Function[x, Blend[{Black, Purple, Blue, Cyan, Green, Yellow, Orange, Red}, x]];
Protect[ColorData]
Manipulate[
ArrayPlot[Table[nd[y, 1, K, n], {K, 1, 50}, {n, 1, 50}],
ColorFunction -> Function[{y}, ColorData["My_Rainbow"][y]],
ColorFunctionScaling -> False, DataReversed -> True, Frame -> True,
FrameTicks -> Automatic,
FrameLabel -> {Rotate["K", -90 Degree], "N"}], {y, 1, 50, 1}]
I've noticed some very unusual results attempting to export the graphics as both an SVG and EPS vector image. When I export it as EPS, it behaves exactly as anticipated; it looks just like it does in Mathematica, and nothing gets rasterized, which is fine. But I'd like an SVG image so I can display it on the web. However, when I export as SVG, I get some really strange grid lines that appear! Take a look below:
[![enter image description here][1]][1]
[![enter image description here][2]][2]
As you can see, there are some very noticeable grid lines that appear in the SVG version, which change inconsistently with the zoom level, making them rather annoying. Obviously, I'd prefer to not have the grid lines, so the reasonable solution seems to be to convert the EPS to an SVG, right? Unfortunately, whenever I do that, I get this blurry mess!
[![enter image description here][3]][3]
[1]: https://i.stack.imgur.com/ipFu5.png
[2]: https://i.stack.imgur.com/oeJe4.png
[3]: https://i.stack.imgur.com/fRmwT.png
It's very frustrating, because no matter what I try I can't get what I want. I just don't understand what's going on. If somebody could please shed some light on the situation, I would greatly appreciate it. Thank you very much.John Travolski2017-11-20T05:31:36Z[✓] Solve a cubic equation?
http://community.wolfram.com/groups/-/m/t/1222449
Please look at this outcome:
![enter image description here][1]
As you see, all 3 roots are complex numbers. But this equation has at least one real root.
This is 0.1736 ... (exactly sin 10 degrees).
![enter image description here][2]
Where is my fault?
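Assuming the cubic is the triple-angle identity for sin 10°, i.e. 8x³ − 6x + 1 = 0, this is the casus irreducibilis: all three roots are real, but the radical formulas write them with complex cube roots whose imaginary parts cancel exactly, so they are not "complex" in value. Numerical or Root-based forms make this visible; a sketch:

    NSolve[8 x^3 - 6 x + 1 == 0, x, Reals]

This returns three real roots, one of them 0.173648... == Sin[10 Degree]. Applying N or FullSimplify to the exact radical expressions should likewise show the imaginary parts vanishing.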
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ucuncuDD_forumSoru1.PNG&userId=1222434
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ucuncuDD_forumSoru2.PNG&userId=1222434Metin Guney2017-11-15T09:21:51Z[GIF] Microcosm (Stereographic projection of cube grid)
http://community.wolfram.com/groups/-/m/t/1225032
![Stereographic projection of cube grid][1]
**Microcosm**
This is conceptually very simple: take a $7 \times 7$ grid of unit cubes in space, normalize to the unit sphere, stereographically project to the plane, then apply a rotation to the original grid of cubes. Here's the code:
Stereo[p_] := 1/(1 - p[[-1]]) p[[;; 2]];
With[{n = 3, cols = RGBColor /@ {"#4EEAF6", "#291F71"}},
Manipulate[
Graphics[
{Opacity[.5], CapForm[None], cols[[1]], Thickness[.006],
Line[
Flatten[
Transpose[
Table[
Stereo[Normalize[#]] & /@
{{t,
y Cos[θ] - z Sin[θ],
z Cos[θ] + y Sin[θ]},
{y, t Cos[θ] - z Sin[θ],
z Cos[θ] + t Sin[θ]},
{y, z Cos[θ] - t Sin[θ],
t Cos[θ] + z Sin[θ]}},
{z, -n - 1/2, n + 1/2}, {y, -n - 1/2, n + 1/2}, {t, -n - 1/2.,
n + 1/2, 1/20}],
{2, 3, 4, 1, 5}],
2]
]
},
PlotRange -> 1, Axes -> False, ImageSize -> 540, Background -> cols[[-1]]],
{θ, 0, π/2}]
]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=rotateStereo12c.gif&userId=610054Clayton Shonkwiler2017-11-20T05:51:30Z[✓] Avoid crooked line in $\log(\frac{a+1}{c+1})$ contour plot?
http://community.wolfram.com/groups/-/m/t/1224136
I plotted a contour plot of this formula with Wolfram|Alpha
$$\log(\frac{a+1}{c+1}), 0 < a < 1000, 0 < c < 1000$$
![enter image description here][1]
In the above figure, there is a crooked line in the bottom part. If I plot this, there is no crooked line.
$$\log(\frac{a}{c}), 0 < a < 1000, 0 < c < 1000$$
Why is there a crooked line in the first case?
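A guess at the cause: the crooked segment looks like a sampling artifact where the contour levels crowd together near the corner, not a feature of the function itself. In Mathematica this can be tested by raising the sampling density; a sketch:

    ContourPlot[Log[(a + 1)/(c + 1)], {a, 0, 1000}, {c, 0, 1000},
      PlotPoints -> 100]

If the kink smooths out as PlotPoints increases, it was a resolution artifact.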
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=a7xLc.gif&userId=11733Jayong Kim2017-11-18T08:36:17ZAvoid syntax problems in the following functions?
http://community.wolfram.com/groups/-/m/t/1224968
Good evening! I am working on a quantum mechanics problem using Mathematica, and when I type in the functions I need, I keep getting a syntax error saying "Expression incomplete, more input needed". I have no idea what is going on. Will anyone help me with it? Thank you so much!
Input:
\!\(\*SuperscriptBox[\(J\), \('\)]\)[D_] :=
N[E, 10]^2/(
4*\[Pi]*Quantity[1, "ElectricConstant"]*
Quantity["BohrRadius"]) (1/D -
Exp[-2*D] (1/D + 11/8 + (3 D)/4 + D^2/6))
Getting:Syntax::sntxi
Input:
(J^')[D_]:=N[E,10]^2/(4*\[Pi]*Quantity[1, "ElectricConstant"]*Quantity["BohrRadius"]) (1/D-Exp[-2D]*(1/D+11/8+(3D)/4+D^2/6))
\!\(\*SuperscriptBox[\(K\), \('\)]\)[D_] :=
N[E, 10]^2/(
20*\[Pi]*Quantity[1, "ElectricConstant"]*
Quantity[
"BohrRadius"]) (-Exp[-2*D] (-25/8 + 23/4 D + 3 D^2 + 1/3 D^3) +
6/D (\[CapitalDelta]^2 (N[EulerGamma, 10] + Log[D]) +
\!\(\*SuperscriptBox[\(\[CapitalDelta]\), \('\)]\)^2*
ExpIntegralEi[-4 D] - 2*\[CapitalDelta]*
\!\(\*SuperscriptBox[\(\[CapitalDelta]\), \('\)]\)*
ExpIntegralEi[-2 D]))
Getting:Syntax::sntxiMinghui Wang2017-11-20T03:29:49ZEvaluate this integral?
http://community.wolfram.com/groups/-/m/t/1224875
Is there a way to help Mathematica evaluate the following integral?
Integrate[((4*Log[((p1 - p3)^2 + \[Omega]^2)/((p1 + p3)^2 + \[Omega]^2)]*
Log[((p1 - p4)^2 + \[Omega]^2)/((p1 + p4)^2 + \[Omega]^2)])/(p3*p4))*(Sin[p1]/p1),
{p1, 0, Infinity}, Assumptions -> Element[p1 | p3 | p4 | \[Omega], Reals],
Assumptions -> p1 > 0 && p3 > 0 && p4 > 0]
or alternatively
Integrate[Limit[Integrate[((4*Log[(p1 - p3 + \[Omega])/(p1 + p3 + \[Omega])]*
Log[(p1 - p4 + \[Omega])/(p1 + p4 + \[Omega])])/(p3*p4))*(Sin[p1]/p1), {p1, 0, Infinity},
Assumptions -> Element[p1 | p3 | p4, Reals]], \[Omega] -> 0], {p1, 0, Infinity},
Assumptions -> Element[p1 | p3 | p4 | \[Omega], Reals]]Arny Toynbee2017-11-19T23:42:57Z[✓] Implement an external function to a discretionary notebook?
http://community.wolfram.com/groups/-/m/t/1224634
Hey guys,
I've got a function which computes the propagation of uncertainty for a given function. It is called "fehler" and an example would be:
fehler[Sin[2*a], a]
which gives me
2 Sqrt[\[CapitalDelta]a^2 Cos[2 a]^2]
So basically the syntax is fehler[function, variable1, variable2, ...]. The whole file is attached to the post; the credit goes to someone from my university, though unfortunately I could not figure out who made this function.
Now I'd like to make this function usable inside every notebook, because right now I always use it externally which is not really nice. So for example if i have a random notebook it should work like this:
(*random notebook*)
fehler= (*this is the code i need*)
a = 1; b=2; const=9;
fehler[a*const+b*2, a, b]
(*the output should like this*) Sqrt[const^2 \[CapitalDelta]a^2 + 4 \[CapitalDelta]b^2]
So that in every notebook I can simply use the "fehler" function to calculate the propagation of uncertainty without having to use it externally. I have tried to make this possible for a long time, but the main problem is that the existing fehler function requires clearing all variables with ClearAll["Global`*"]; before executing fehler. It is important that the output can be further used as a function and is not simply a (numerical) result.
Do you guys have an idea how I can solve this problem? It would help me (and other students) a lot if we could use the function inside our notebooks and not externally.
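One standard approach (a sketch; the file and context names here are made up): wrap the definition in a package saved under $UserBaseDirectory/Applications, then load it with Needs from any notebook:

    (* save as FileNameJoin[{$UserBaseDirectory, "Applications", "Fehler.m"}] *)
    BeginPackage["Fehler`"]
    fehler::usage = "fehler[expr, vars] computes the propagation of uncertainty of expr in vars.";
    Begin["`Private`"]
    (* paste the existing fehler definition from the attached file here *)
    End[]
    EndPackage[]

    (* then, in any notebook: *)
    Needs["Fehler`"]

Keeping the implementation in its own context avoids collisions with Global` symbols. The ClearAll["Global`*"] requirement is a separate issue; giving fehler the HoldAll attribute may help there, but that depends on how the attached implementation works.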
Thank you and best regards,
TobiasTobias Mitterdorfer2017-11-19T00:27:13ZReplace some terms in an expression by an abbreviation?
http://community.wolfram.com/groups/-/m/t/1224451
I have a symbolic expression (don't ask why, that doesn't matter), say:
exp1 = 2 n (r + h) Sin[\[Pi]/n]
Since I know that n*Sin[\[Pi]/n] is something special (namely \[Pi] for large n), I want to replace those terms by an abbreviation pn:
pn = n Sin[\[Pi]/n]
and get something like exp2 from exp1 using that abbreviation. I.e.:
exp2 = 2 (r+h) pn
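One way to get exactly this (a sketch): rewrite Sin[\[Pi]/n] in terms of pn and let the factors of n cancel automatically:

    exp1 = 2 n (r + h) Sin[\[Pi]/n];
    exp2 = exp1 /. Sin[\[Pi]/n] -> pn/n
    (* 2 pn (h + r) *)

Note that pn must be left unassigned for this to stick; if pn is defined as n Sin[\[Pi]/n] beforehand, the abbreviation immediately evaluates back to the original expression.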
I tried several things like Replace, Reduce, Evaluate and the like, but could get no reasonable result. Does anybody know how to do this kind of thing?Werner Geiger2017-11-18T19:22:14Z[✓] Convert DiscretePlot3D to 2D plot with a Color Function?
http://community.wolfram.com/groups/-/m/t/1224586
Observe the following wolfram Mathematica code:
nd[y_, a_, b_, n_] := (Binomial[b - a + 1, y]*Sum[((-1)^i)*Binomial[y, i]*((y - i)/(b - a + 1))^n, {i, 0, y}]);
Animate[DiscretePlot3D[nd[i, 1, n, j], {j, 1, 15}, {i, 1, 15}, ExtentSize -> Full, ColorFunction -> "DarkRainbow"], {n, 1, 50, 1}]
nd is just the name of a discrete probability density function of 4 variables that I have defined earlier on. What I would really like to do is make this a 2 dimensional grid of squares where the value of nd is shown only by the color as opposed to both the color and the height; in other words, I don't want it to be 3D. Of course, I can rotate what I have to get a top down view, but this isn't exactly ideal:
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/W83BS.png
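A sketch of one way to drop the third dimension entirely, using ArrayPlot on the same table (MatrixPlot works similarly), assuming the nd definition above:

    Animate[ArrayPlot[Table[nd[i, 1, n, j], {j, 1, 15}, {i, 1, 15}],
      ColorFunction -> "DarkRainbow", ColorFunctionScaling -> False,
      DataReversed -> True, Frame -> True], {n, 1, 50, 1}]

ColorFunctionScaling -> False keeps the colors comparable across animation frames, since the probabilities are already in [0, 1].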
I would much prefer to eliminate the third dimension altogether and have a simple grid where the output of nd is shown by a change in color of the squares in the 2D grid. Is this possible in Mathematica? If so, how would I do this? Thank you.John Travolski2017-11-19T04:41:20ZDynamical Systems with Applications using Mathematica, 2nd Edition
http://community.wolfram.com/groups/-/m/t/1224393
The book "Dynamical Systems with Applications using Mathematica, 2nd Edition" has been published:
http://www.springer.com/us/book/9783319614847
The notebook (and other files) to accompany the book can be downloaded here:
http://library.wolfram.com/infocenter/Books/9563/
Best wishes
StephenStephen Lynch2017-11-19T08:52:45ZReplace an "Except" element of a matrix under a Table?
http://community.wolfram.com/groups/-/m/t/1224574
Dear Friends,
In a "For" loop, I have a table defined by indices i & j. Any (i, j)th element of this table consists of the elements of the matrix f[m] (a 6×1 matrix), where m is the integer over which the "For" loop is running.
During the "For" loop I want to replace the terms f[m][[k]] by f[m-1][[k]], where k is a number other than the indices i & j for the (i, j)th element.
When I tried to use
`/. f[m][[x_]] -> f[m - 1][[x]]`,
I am getting the following error message --
Part::pkspec1: The expression x$_ cannot be used as a part specification.
Could there be an alternative way to achieve this?
Thanks for the help in advance!S G2017-11-19T01:59:09ZIs Mathematica 11.0.1.0 still compatible with OS X 10.13.1 ?
http://community.wolfram.com/groups/-/m/t/1224565
Hi,
Recently I upgraded to OS X 10.13.1 and since I have been having strange freezes of the machine caused by Mathematica. For example, I have a fairly large `List`, about 65000 points, for `ListDensityPlot3D`. The command to show the List is this:
ListDensityPlot3D[Partition[Flatten[tblmi3retx], 4],
PlotLabel -> "delta = " <> ToString[N[Pi/500/7]], PlotRange -> All,
PlotLegends -> Automatic]
, so after the `Partition` I have as elements the `x,y,z` coordinates and the density of the quantity I want to display, one after another. `ListDensityPlot3D` actually does draw the list as it should. The problems start when I want to enlarge the plot or rotate it into another position. Then Mathematica freezes and gives me a spinning beachball. In earlier OS X versions I could kill Mathematica, restart the machine and start over. Not now. Despite every claim by Apple that no user program can crash the OS, Mathematica is able to do it. The machine is locked up. Interestingly, the cursor is still available, but nothing responds from the OS for any attempt to select a Finder window or to bring up the Force Quit window. I also cannot ssh into the machine from another computer. If I try to save the desktop with <cmd><shift><3>, I can hear the snap, but it produces only an empty grayish picture
![grayish picture][1]
Ultimately I have to do a hard reset via the power button. After restart, the OS brings up Mathematica automagically and the strangeness continues, because Mathematica shows a few empty windows on top of each other, moderately scattered, with the topmost having a spinning cursor in it.
![spinning cursor in empty window][2]
So I have to quit this automatically booted Mathematica and launch it from the Finder to bring things back to normal.
This episode shows me that the earlier compatibility with OS X is gone with OS X 10.13.1. I'm wondering if Mathematica 10.2 also has this problem, but I do not want to find out by shelling out my hard-earned tax dollars.
Thanks ahead,
János
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-11-18at19.37.12.png&userId=277053
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-11-18at19.52.10.png&userId=277053Janos Lobb2017-11-19T01:42:29ZProper use of PredictorMeasurements[]?
http://community.wolfram.com/groups/-/m/t/1208606
I've been playing around with `Predict[]` with multi-dimensional datasets and, for small training sets anyway, things seem to work correctly. For example,
trainingset = {<|"age" -> 47, "sex" -> "M", "height" -> 100,
"weight" -> 60|>, <|"age" -> 22, "sex" -> "M", "height" -> 90,
"weight" -> 55|>, <|"age" -> 43, "sex" -> "M", "height" -> 110,
"weight" -> 61|>, <|"age" -> 23, "sex" -> "F", "height" -> 100,
"weight" -> 41|>, <|"age" -> 33, "sex" -> "F", "height" -> 80,
"weight" -> 50|>, <|"age" -> 43, "sex" -> "F", "height" -> 70,
"weight" -> 51|>};
testset = {<|"age" -> 37, "sex" -> "M", "height" -> 100|>, <|
"age" -> 22, "sex" -> "M", "height" -> 90|>, <|"age" -> 43,
"sex" -> "F", "height" -> 80|>, <|"age" -> 33, "sex" -> "F",
"height" -> 70|>};
p1 = Predict[trainingset -> "weight", PerformanceGoal -> "Quality",
Method -> "RandomForest"];
We can get predictions from the p1 `PredictorFunction` with
Map[Append[#, "prediction" -> p1[#]] &, testset] (* this works *)
I can then compute residuals, etc., myself.
Since version 10, Wolfram Language has included the function `PredictorMeasurements[]`, and the documentation suggests that I should be able to get the predictions above, plus residual reports and other information, with
PredictorMeasurements[p1, testset]
But this does not work. I get the following error: `PredictorMeasurements::bdfmt: Argument {<|age->37,sex->M,height->100,weight->60|>,<|age->22,sex->M,height->90|>,<|age->43,sex->F,height->80|>,<|age->33,sex->F,height->70|>} should be a rule or a list of rules.`
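As the error hints, the second argument needs labeled data, in the same input -> output rule form as the training data, so the test examples must include the target values. A hypothetical sketch (the weight values here are invented purely for illustration):

    labeledTest = {<|"age" -> 37, "sex" -> "M", "height" -> 100|> -> 58,
       <|"age" -> 22, "sex" -> "M", "height" -> 90|> -> 54,
       <|"age" -> 43, "sex" -> "F", "height" -> 80|> -> 51,
       <|"age" -> 33, "sex" -> "F", "height" -> 70|> -> 49};
    pm = PredictorMeasurements[p1, labeledTest];
    pm["StandardDeviation"]

Equivalently, if the test associations contain a "weight" key, the `testset -> "weight"` form from the training call should also be accepted.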
What am I missing?Michael Stern2017-10-25T16:44:44ZSimplify of -Sqrt[r] >= 0 where r is real
http://community.wolfram.com/groups/-/m/t/1224276
Am I missing something? Assuming r is real, the only r that can satisfy -Sqrt[r]>=0 is zero.
$Assumptions = {r \[Element] Reals};
FullSimplify[-Sqrt[r] >= 0]
yields `Sqrt[r] <= 0`
rather than `r==0`Paul Reiser2017-11-18T18:56:53Z[✓] Simplify powers, (x^n)^(1/n) to x?
http://community.wolfram.com/groups/-/m/t/1224078
Mathematica fails to simplify the following, even with the Assumptions I give it:
In[59]:= FullSimplify[(xe^n)^(1/n),
Assumptions -> {xe \[Element] Reals, n \[Element] integers, n > 0}]
Out[59]= (xe^n)^(1/n)
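Two things to check here: the assumption uses lowercase `integers` (the built-in domain is `Integers`), and (x^n)^(1/n) == x only holds for nonnegative x — for real x and even n the result is Abs[x]. With both fixed, Simplify succeeds; a sketch:

    Simplify[(xe^n)^(1/n),
      Assumptions -> xe > 0 && Element[n, Integers] && n > 0]
    (* xe *)

If the positivity assumption is not appropriate, `PowerExpand[(xe^n)^(1/n)]` also gives xe, but it does so by ignoring branch cuts, which may or may not be what you want.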
Does anyone know how to get Mathematica to simplify (x^n)^(1/n) to x?Kenneth Miller2017-11-18T07:01:16ZRandom variables transformation?
http://community.wolfram.com/groups/-/m/t/1224436
Hi,
I am trying to transform a PDF function of x to a PDF function of r using Mathematica, noting that r=x^2 and the PDF of x is as below:
f[x_] := (( (4 Sqrt[π]) (μ^(μ +
0.5)) (h^(μ)) (x^(2 μ)))/((Gamma[μ]) (H^(μ -
0.5)) ((Ω)^(μ +
0.5)))) (E^((-2 (μ) (h) (x^2))/(Ω))) *
BesselI[μ - 0.5, 2 h x^2 μ/Ω]
Can anyone help with how I could do this?
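One route worth trying (a sketch, assuming f is properly normalized on x > 0): wrap the density in ProbabilityDistribution and let TransformedDistribution do the change of variables:

    distX = ProbabilityDistribution[f[x], {x, 0, Infinity}];
    distR = TransformedDistribution[u^2, u \[Distributed] distX];
    PDF[distR, r]

The exact-parameter version may simplify better if the 0.5 exponents in f are written as exact 1/2.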
BR,Wisam Abbasi2017-11-18T18:23:34ZRepair a corrupted notebook?
http://community.wolfram.com/groups/-/m/t/1224314
Just for everybody's information: I had some problems with corrupted notebooks which I could not really solve. Now I have found a Wolfram tool and procedure to repair such a notebook. It does the job:
See: [How do I repair corrupted notebooks using the AuthorTools package?][1]
Like any files, Wolfram Language notebooks can be corrupted, making them unreadable. In general, we recommend backing up important work on different media. In case notebooks become corrupted, Mathematica provides a tool to help restore them using the AuthorTools package. Restoring a notebook removes any corrupted sections of the notebook, allowing it to be opened and read. Follow these steps to repair a corrupted notebook:
1) In a new notebook, load the AuthorTools package and open the NotebookRestore dialog by executing:
Quiet[<<AuthorTools`]
NotebookRestore[]
Please note that the tick mark after AuthorTools is a back tick character (`), not an apostrophe (‘). The back tick character is available underneath the tilde key on most keyboards.
2) Select the notebook you want to restore in the NotebookRestore dialog. Mathematica will open a new notebook with what could be restored from the original one. We recommend saving the new notebook with a new name.
[1]: http://support.wolfram.com/kb/12423Werner Geiger2017-11-18T17:13:13ZThe 27_4 Configuration in Barycentrics
http://community.wolfram.com/groups/-/m/t/1224304
A configuration is a set of points and lines such that $n$ points are on each line and $n$ lines go through each point. Here are a few 4-configurations:
![4-configurations][1]
In [barycentric coordinates](http://community.wolfram.com/groups/-/m/t/947771), both lines and points can be represented as triples. If point $P$ is on line $L$, then the dot product $P.L=0$. For a while now I've wondered whether a self-dual barycentric configuration existed where the point set and line set used the same set of triples. The answer is YES, they exist. Here's a $24_3$ configuration. I've found about 120 of these.
set243 = {{-1, 1, 2}, {-1, 2, 1}, {-1, 2, 3}, {-1, 2, 4}, {-1, 3, 2}, {-1, 4, 2}, {0, 1, 2}, {0, 2, 1}, {1, 2, -1}, {2, 1, -1}, {2, 3, -1}, {2, 4, -1}, {3, 2, -1}, {4, 2, -1}, {1, 2, 0}, {2, 1, 0}, {2, -1, 1}, {1, -1, 2}, {3, -1, 2}, {4, -1, 2}, {2, -1, 3}, {2, -1, 4}, {2, 0, 1}, {1, 0, 2}};
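Mirroring the $27_4$ check given further below, the incidence property can be verified directly; if the configuration is valid, each of the 24 triples is orthogonal to exactly 3 others:

    Length /@ Table[Select[set243, set243[[n]].# == 0 &], {n, 1, 24}]
    (* a list of 24 threes confirms the 24_3 incidences *)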
tri=Reverse[{{Sqrt[3]/2,-(1/2)},{0,1},{-(Sqrt[3]/2),-(1/2)}}];
FromBarycentrics[{m_,n_,o_},{{x1_,y1_},{x2_,y2_},{x3_,y3_}}]:={m*x1+n*x2+(1-m-n)*x3,m*y1+n*y2+(1-m-n)*y3};With[{full=Union[Flatten[{#, RotateRight[#,1],RotateLeft[#,1]}&/@{{-1,1,2},{-1,2,1},{-1,2,3},{-1,2,4},{-1,3,2},{-1,4,2},{0,1,2},{0,2,1}},1]]},
Graphics[{EdgeForm[Black],Tooltip[Line[#[[2]]],Style[Row[Switch[Sign[#],-1,Style[ToString[Abs[#]],Red],0,Style[ToString[Abs[#]],Darker[Green]],1,Style[ToString[Abs[#]],Blue]]&/@#[[1]]],16,Bold]]&/@Table[{full[[k]],Sort[FromBarycentrics[#/Total[#],tri]&/@Select[full,full[[k]].#==0&]]},{k,1,Length[full]}],White,{Disk[FromBarycentrics[#/Total[#],tri],.075],Black, Style[Text[Row[Switch[Sign[#],-1,Style[ToString[Abs[#]],Red],0,Style[ToString[Abs[#]],Darker[Green]],1,Style[ToString[Abs[#]],Blue]]&/@#],FromBarycentrics[#/Total[#],tri]],16,Bold]}&/@full}]]
![Config 24_3][2]
I also found a single $27_4$ configuration.
config274={
{-2,-1,4},{-2,1,3},{-1,1,1},{-1,2,0},{-1,2,1},{-1,3,2},{-1,4,2},{0,1,2},{1,1,2},
{-1,4,-2},{1,3,-2},{1,1,-1},{2,0,-1},{2,1,-1},{3,2,-1},{4,2,-1},{1,2,0},{1,2,1},
{4,-2,-1},{3,-2,1},{1,-1,1},{0,-1,2},{1,-1,2},{2,-1,3},{2,-1,4},{2,0,1},{2,1,1}}
With[{full=Union[Flatten[{#, RotateRight[#,1],RotateLeft[#,1]}&/@{{-2,-1,4},{-2,1,3},{-1,1,1},{-1,2,0},{-1,2,1},{-1,3,2},{-1,4,2},{0,1,2},{1,1,2}},1]]},
Graphics[{EdgeForm[Black],Tooltip[Line[#[[2]]],Style[Row[Switch[Sign[#],-1,Style[ToString[Abs[#]],Red],0,Style[ToString[Abs[#]],Darker[Green]],1,Style[ToString[Abs[#]],Blue]]&/@#[[1]]],16,Bold]]&/@Table[{full[[k]],Sort[FromBarycentrics[#/Total[#],tri]&/@Select[full,full[[k]].#==0&]]},{k,1,Length[full]}],White,{Disk[FromBarycentrics[#/Total[#],tri],.07],Black, Style[Text[Row[Switch[Sign[#],-1,Style[ToString[Abs[#]],Red],0,Style[ToString[Abs[#]],Darker[Green]],1,Style[ToString[Abs[#]],Blue]]&/@#],FromBarycentrics[#/Total[#],tri]],7]}&/@full}]]
![27_4][3]
It's easy to check. The following code gives a list of 27 4's.
Length /@ Table[Select[config274, config274[[n]].# == 0 &], {n, 1, 27}]
We can make a graph of these triples.
Graph[#[[1]] <-> #[[2]] & /@ Select[Subsets[Sort[config274], {2}], #[[1]].#[[2]] == 0 &], VertexLabels -> "Name"]
![graph 27_4][4]
Whether a 5-configuration can be found with this method is a search I'm still running.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=config-4.jpg&userId=21530
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=config24_3.jpg&userId=21530
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=27_4.jpg&userId=21530
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=graph274.jpg&userId=21530Ed Pegg2017-11-18T15:03:30ZSimplify a complicated solution by a simpler way?
http://community.wolfram.com/groups/-/m/t/1176517
eq50 gives a complicated solution, which should simplify into a sum of several proper-fraction-like parts.
I can get a simpler form in eq55, but it takes 5 abstruse steps and still isn't the simplest.
Can you suggest an easier, more direct way?
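For reference, one standard route to proper-fraction-like parts is Apart, which performs a partial-fraction decomposition with respect to a chosen variable (a sketch only; whether it is fast or fruitful on an expression as large as eq50 is untested):

    (* decompose the solution from eq50 over the Laplace variable s *)
    Apart[q[x, s] /. eq50[[1, 1]], s]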
In[159]:= eq12 = (cinf \[Theta]c \[Lambda])/(s + \[Theta]c) - (
E^((lh - x) /Sqrt[Dc ] Sqrt[s + \[Theta]c]) F1 Sqrt[
Dc ] \[Lambda])/
Sqrt[ (s + \[Theta]c)] /. {Sqrt[Dc] F1 \[Lambda] -> A1,
cinf \[Theta]c \[Lambda] -> A2}
Out[159]= A2/(s + \[Theta]c) - (
A1 E^(((lh - x) Sqrt[s + \[Theta]c])/Sqrt[Dc]))/Sqrt[s + \[Theta]c]
In[160]:= eq13 = s *q[x, s] - Dp*D[q[x, s], {x, 2}] - pinf + eq12 == 0
Out[160]= -pinf + A2/(s + \[Theta]c) - (
A1 E^(((lh - x) Sqrt[s + \[Theta]c])/Sqrt[Dc]))/Sqrt[
s + \[Theta]c] + s q[x, s] - Dp
\!\(\*SuperscriptBox[\(q\), \*
TagBox[
RowBox[{"(",
RowBox[{"2", ",", "0"}], ")"}],
Derivative],
MultilineFunction->None]\)[x, s] == 0
In[161]:= eq14 = \[Alpha]1*(q[x, s] /. x -> lh) - (D[q[x, s], x] /.
x -> lh) == 0
Out[161]= \[Alpha]1 q[lh, s] -
\!\(\*SuperscriptBox[\(q\), \*
TagBox[
RowBox[{"(",
RowBox[{"1", ",", "0"}], ")"}],
Derivative],
MultilineFunction->None]\)[lh, s] == 0
In[162]:= eq15 = (D[q[x, s], x] /. x -> +\[Infinity]) == 0
Out[162]=
\!\(\*SuperscriptBox[\(q\), \*
TagBox[
RowBox[{"(",
RowBox[{"1", ",", "0"}], ")"}],
Derivative],
MultilineFunction->None]\)[\[Infinity], s] == 0
In[163]:= eq50 =
Assuming[(lh - x) < 0 && s > 0 && (s + \[Theta]c) > 0 && Dp > 0 &&
Dc > 0 && \[Alpha]1 > 0,
DSolve[{eq13, eq14, eq15}, q[x, s], {x, s},
GeneratedParameters -> B4]] // Simplify // Normal
Out[163]= {{q[x,
s] -> (E^(-(Sqrt[s]/Sqrt[Dp] + Sqrt[s/
Dp]) x) (-A2 (-Dc s +
Dp (s + \[Theta]c)) (-4 E^(Sqrt[s/Dp] (lh + x)) Sqrt[Dp/s]
s Sqrt[s/Dp] Sqrt[s + \[Theta]c] Sqrt[
Dp s (s + \[Theta]c)] -
Dp (E^((2 Sqrt[s] x)/Sqrt[Dp])
s (Sqrt[s/Dp] + \[Alpha]1) (s + \[Theta]c) +
E^(2 Sqrt[s/Dp] x)
s (Sqrt[s/Dp] + \[Alpha]1) (s + \[Theta]c) -
2 E^(Sqrt[s/Dp] (lh + x)) Sqrt[s/
Dp] (2 s^2 +
2 s \[Theta]c + \[Alpha]1 Sqrt[s + \[Theta]c] Sqrt[
Dp s (s + \[Theta]c)]))) +
Sqrt[s/
Dp] (1/(Sqrt[s/Dp] Sqrt[s + \[Theta]c])
Dp pinf (-2 E^(Sqrt[s/Dp] (lh + x)) Sqrt[s/
Dp] (Sqrt[Dp/s] s^4 Sqrt[Dp s (s + \[Theta]c)] +
5 Sqrt[Dp/s] s^3 \[Theta]c Sqrt[
Dp s (s + \[Theta]c)] +
6 Sqrt[Dp/s] s^2 \[Theta]c^2 Sqrt[
Dp s (s + \[Theta]c)] +
2 Sqrt[Dp/s] s \[Theta]c^3 Sqrt[
Dp s (s + \[Theta]c)] +
Sqrt[Dp/s] s^(5/2) Sqrt[s + \[Theta]c] Sqrt[
s (s + \[Theta]c)] Sqrt[Dp s (s + \[Theta]c)] -
s \[Alpha]1 Sqrt[
Dp^3 s^5 (s + \[Theta]c)] - \[Alpha]1 \[Theta]c \
Sqrt[Dp^3 s^5 (s + \[Theta]c)]) -
Dp (s + \[Theta]c) (E^((2 Sqrt[s] x)/Sqrt[Dp])
s (Sqrt[s/Dp] + \[Alpha]1) (s + \[Theta]c)^(5/2) +
E^(2 Sqrt[s/Dp] x)
s (Sqrt[s/Dp] + \[Alpha]1) (s + \[Theta]c)^(5/2) -
2 E^(Sqrt[s/Dp] (lh + x)) Sqrt[s/
Dp] (2 s^3 Sqrt[s + \[Theta]c] +
4 s^2 \[Theta]c Sqrt[
s + \[Theta]c] + \[Alpha]1 \[Theta]c^2 Sqrt[
Dp s (s + \[Theta]c)] +
2 s \[Theta]c (\[Theta]c Sqrt[
s + \[Theta]c] + \[Alpha]1 Sqrt[
Dp s (s + \[Theta]c)])))) +
1/(Sqrt[Dp s])
Dc (2 E^(Sqrt[s/Dp] (lh + x)) Sqrt[Dp/s] s Sqrt[Dp s]
Sqrt[Dp s (s + \[Theta]c)] (pinf s (s Sqrt[
s + \[Theta]c] + 2 \[Theta]c Sqrt[s + \[Theta]c] +
Sqrt[s] Sqrt[s (s + \[Theta]c)]) -
A1 (s + \[Theta]c) (-2 s + Sqrt[(
Dp s (s + \[Theta]c))/Dc])) +
Dp^2 s (-2 A1 E^(Sqrt[s/Dp] (lh + x))
s \[Alpha]1 \[Theta]c Sqrt[s + \[Theta]c] +
E^((2 Sqrt[s] x)/Sqrt[Dp]) pinf Sqrt[s/Dp] Sqrt[
Dp s] \[Alpha]1 (s + \[Theta]c)^2 +
E^(2 Sqrt[s/Dp] x) pinf Sqrt[s/Dp] Sqrt[
Dp s] \[Alpha]1 (s + \[Theta]c)^2 -
A1 E^((2 Sqrt[s] x)/Sqrt[
Dp] + (lh - x) Sqrt[(s + \[Theta]c)/
Dc]) (s + \[Theta]c) Sqrt[
Dp s (s + \[Theta]c)] (Sqrt[s/Dp] Sqrt[(
s + \[Theta]c)/
Dc] + \[Alpha]1 (-Sqrt[(s/Dp)] + Sqrt[(
s + \[Theta]c)/Dc])) +
A1 E^(2 Sqrt[s/Dp] x + (lh - x) Sqrt[(s + \[Theta]c)/
Dc]) (s + \[Theta]c) Sqrt[
Dp s (s + \[Theta]c)] (Sqrt[s/Dp] Sqrt[(
s + \[Theta]c)/
Dc] + \[Alpha]1 (Sqrt[s/Dp] + Sqrt[(
s + \[Theta]c)/Dc]))) +
Dp (E^((2 Sqrt[s] x)/Sqrt[Dp]) pinf s^2 Sqrt[
Dp s] (s + \[Theta]c)^2 +
E^(2 Sqrt[s/Dp] x) pinf s^2 Sqrt[
Dp s] (s + \[Theta]c)^2 + (
A1 E^((2 Sqrt[s] x)/Sqrt[
Dp] + (lh - x) Sqrt[(s + \[Theta]c)/Dc])
s (Dp s (s + \[Theta]c))^(3/2))/Dp + (
A1 E^(2 Sqrt[s/Dp] x + (lh - x) Sqrt[(s + \[Theta]c)/
Dc]) s (Dp s (s + \[Theta]c))^(3/2))/Dp -
2 E^(Sqrt[s/
Dp] (lh +
x)) (A1 (Sqrt[Dp s] Sqrt[Dp s^5] \[Alpha]1 Sqrt[
s + \[Theta]c] + (
2 s (Dp s (s + \[Theta]c))^(3/2))/Dp) +
pinf Sqrt[
Dp s] (2 s^4 + 4 s^3 \[Theta]c +
2 s^2 \[Theta]c^2 +
s \[Alpha]1 \[Theta]c Sqrt[s + \[Theta]c] Sqrt[
Dp s (s + \[Theta]c)] + \[Alpha]1 Sqrt[
s + \[Theta]c] Sqrt[
Dp s^5 (s + \[Theta]c)])))))))/(2 Dc Dp^3 (s/Dp)^(
3/2) (Sqrt[s/Dp] + \[Alpha]1) (s + \[Theta]c)^(3/2) Sqrt[
Dp s (s + \[Theta]c)] (Sqrt[s/Dp] - Sqrt[(s + \[Theta]c)/
Dc]) (Sqrt[s/Dp] + Sqrt[(s + \[Theta]c)/Dc]))}}
In[164]:= eq51 =
Assuming[(lh - x) < 0 && s > 0 && (s + \[Theta]c) > 0 && Dp > 0 &&
Dc > 0 && \[Alpha]1 > 0, FullSimplify[q[x, s] /. eq50[[1, 1]]]]
Out[164]= -(E^(-2 Sqrt[s/Dp]
x) (2 A2 s (-Dp E^(Sqrt[s/Dp] (lh + x)) \[Alpha]1 +
E^(2 Sqrt[s/Dp]
x) (Sqrt[Dp s] + Dp \[Alpha]1)) (s + \[Theta]c) (-Dc s +
Dp (s + \[Theta]c)) +
Sqrt[s/Dp] (-1/(Sqrt[(s (s + \[Theta]c))/Dp])2 Dp E^(
Sqrt[s/Dp] x)
pinf (-Dp (E^(lh Sqrt[s/Dp]) - E^(
Sqrt[s/Dp] x)) s \[Alpha]1 (s + \[Theta]c)^(7/2) +
E^(Sqrt[s/Dp]
x) (\[Theta]c^3 Sqrt[Dp s^3 (s + \[Theta]c)] +
3 \[Theta]c^2 Sqrt[Dp s^5 (s + \[Theta]c)] +
3 \[Theta]c Sqrt[Dp s^7 (s + \[Theta]c)] + Sqrt[
Dp s^9 (s + \[Theta]c)])) +
2 Sqrt[Dc]
Dp s (s + \[Theta]c) (pinf (Sqrt[Dc] E^(2 Sqrt[s/Dp] x)
s + (E^(2 Sqrt[s/Dp] x) - E^(
Sqrt[s/Dp] (lh + x))) Sqrt[
Dc Dp s] \[Alpha]1) (s + \[Theta]c) +
A1 (-Sqrt[Dp] E^(Sqrt[s/Dp] (lh + x)) s^(3/2) -
E^(Sqrt[s/
Dp] (lh + x)) (Sqrt[Dp s] \[Theta]c + \[Alpha]1 Sqrt[
Dc Dp s (s + \[Theta]c)]) +
E^(2 Sqrt[s/Dp] x + (lh - x) Sqrt[(s + \[Theta]c)/
Dc]) (s Sqrt[Dc (s + \[Theta]c)] + \[Alpha]1 Sqrt[
Dc Dp s (s + \[Theta]c)]))))))/(2 Dp s^2 (Sqrt[s/
Dp] + \[Alpha]1) (s + \[Theta]c)^2 (-Dc s + Dp (s + \[Theta]c)))
In[165]:= eq52 =
ExpandAll[eq51,
x] /. {(Sqrt[s/Dp] + \[Alpha]1) -> ((
Sqrt[s] + Sqrt[ Dp ] \[Alpha]1)/Sqrt[ Dp ]),
Sqrt[s/Dp] -> Sqrt[s]/Sqrt[ Dp ]} // Normal
Out[165]= (
A2 Sqrt[Dp]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[Dp]) \[Alpha]1)/(
s (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c)) - (
A2 (Sqrt[Dp s] + Dp \[Alpha]1))/(
Sqrt[Dp] s (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c)) - (
Dc pinf Sqrt[
s])/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) (-Dc s +
Dp (s + \[Theta]c))) - (Sqrt[Dc] pinf Sqrt[Dc Dp s] \[Alpha]1)/(
Sqrt[s] (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (-Dc s +
Dp (s + \[Theta]c))) + (
Sqrt[Dc] E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[Dp]) pinf Sqrt[
Dc Dp s] \[Alpha]1)/(
Sqrt[s] (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (-Dc s +
Dp (s + \[Theta]c))) + (
A1 Sqrt[Dc] Sqrt[Dp]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) s)/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c) (-Dc s +
Dp (s + \[Theta]c))) + (Dp pinf \[Alpha]1 (s + \[Theta]c)^(3/2))/(
Sqrt[s] (Sqrt[s] + Sqrt[Dp] \[Alpha]1) Sqrt[(s (s + \[Theta]c))/
Dp] (-Dc s + Dp (s + \[Theta]c))) - (
Dp E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[Dp])
pinf \[Alpha]1 (s + \[Theta]c)^(3/2))/(
Sqrt[s] (Sqrt[s] + Sqrt[Dp] \[Alpha]1) Sqrt[(s (s + \[Theta]c))/
Dp] (-Dc s + Dp (s + \[Theta]c))) + (
A1 Sqrt[Dc]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) (Sqrt[Dp s] \[Theta]c + \[Alpha]1 Sqrt[
Dc Dp s (s + \[Theta]c)]))/(
Sqrt[s] (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c) (-Dc s +
Dp (s + \[Theta]c))) - (
A1 Sqrt[Dc] E^(
lh Sqrt[(s + \[Theta]c)/Dc] -
x Sqrt[(s + \[Theta]c)/
Dc]) (s Sqrt[Dc (s + \[Theta]c)] + \[Alpha]1 Sqrt[
Dc Dp s (s + \[Theta]c)]))/(
Sqrt[s] (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c) (-Dc s +
Dp (s + \[Theta]c))) + (
pinf (\[Theta]c^3 Sqrt[Dp s^3 (s + \[Theta]c)] +
3 \[Theta]c^2 Sqrt[Dp s^5 (s + \[Theta]c)] +
3 \[Theta]c Sqrt[Dp s^7 (s + \[Theta]c)] + Sqrt[
Dp s^9 (s + \[Theta]c)]))/(
s^(3/2) (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c)^2 Sqrt[(
s (s + \[Theta]c))/Dp] (-Dc s + Dp (s + \[Theta]c)))
In[166]:= eq53 = Map[FullSimplify, eq52]
Out[166]= (A2 Sqrt[Dp] E^((Sqrt[s] (lh - x))/Sqrt[Dp]) \[Alpha]1)/(
s (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c)) - (
A2 (Sqrt[Dp s] + Dp \[Alpha]1))/(
Sqrt[Dp] s (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c)) - (
Dc pinf Sqrt[
s])/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) (-Dc s +
Dp (s + \[Theta]c))) - (Sqrt[Dc] pinf Sqrt[Dc Dp s] \[Alpha]1)/(
Sqrt[s] (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (-Dc s +
Dp (s + \[Theta]c))) + (
Sqrt[Dc] E^((Sqrt[s] (lh - x))/Sqrt[Dp]) pinf Sqrt[
Dc Dp s] \[Alpha]1)/(
Sqrt[s] (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (-Dc s +
Dp (s + \[Theta]c))) + (
A1 Sqrt[Dc] Sqrt[Dp] E^((Sqrt[s] (lh - x))/Sqrt[
Dp]) s)/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c) (-Dc s +
Dp (s + \[Theta]c))) + (
pinf Sqrt[s] \[Alpha]1 (s + \[Theta]c)^(
5/2))/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) ((s (s + \[Theta]c))/Dp)^(
3/2) (-Dc s + Dp (s + \[Theta]c))) - (
E^((Sqrt[s] (lh - x))/Sqrt[Dp]) pinf Sqrt[
s] \[Alpha]1 (s + \[Theta]c)^(
5/2))/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) ((s (s + \[Theta]c))/Dp)^(
3/2) (-Dc s + Dp (s + \[Theta]c))) + (
A1 Sqrt[Dc] E^((Sqrt[s] (lh - x))/Sqrt[
Dp]) (Sqrt[Dp s] \[Theta]c + \[Alpha]1 Sqrt[
Dc Dp s (s + \[Theta]c)]))/(
Sqrt[s] (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c) (-Dc s +
Dp (s + \[Theta]c))) - (
A1 Sqrt[Dc]
E^((lh - x) Sqrt[(s + \[Theta]c)/
Dc]) (s Sqrt[Dc (s + \[Theta]c)] + \[Alpha]1 Sqrt[
Dc Dp s (s + \[Theta]c)]))/(
Sqrt[s] (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c) (-Dc s +
Dp (s + \[Theta]c))) + (
pinf (\[Theta]c^3 Sqrt[Dp s^3 (s + \[Theta]c)] +
3 \[Theta]c^2 Sqrt[Dp s^5 (s + \[Theta]c)] +
3 \[Theta]c Sqrt[Dp s^7 (s + \[Theta]c)] + Sqrt[
Dp s^9 (s + \[Theta]c)]))/(
s^(3/2) (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c)^2 Sqrt[(
s (s + \[Theta]c))/Dp] (-Dc s + Dp (s + \[Theta]c)))
In[167]:= eq54 =
Collect[eq52, { E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[Dp]), E^(
lh Sqrt[(s + \[Theta]c)/Dc] - x Sqrt[(s + \[Theta]c)/Dc])},
Simplify]
Out[167]= -((
A1 Sqrt[Dc] E^(
lh Sqrt[(s + \[Theta]c)/Dc] -
x Sqrt[(s + \[Theta]c)/
Dc]) (s Sqrt[Dc (s + \[Theta]c)] + \[Alpha]1 Sqrt[
Dc Dp s (s + \[Theta]c)]))/(
Sqrt[s] (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c) (-Dc s +
Dp (s + \[Theta]c)))) + (E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/
Sqrt[Dp]) (A1 Sqrt[Dc]
s (Sqrt[Dp] s^(3/2) +
Sqrt[Dp s] \[Theta]c + \[Alpha]1 Sqrt[
Dc Dp s (s + \[Theta]c)]) + \[Alpha]1 (A2 Sqrt[Dp] Sqrt[
s] (-Dc s + Dp (s + \[Theta]c)) +
pinf (s + \[Theta]c) (Sqrt[Dc] s Sqrt[Dc Dp s] -
Dp^2 Sqrt[s + \[Theta]c] Sqrt[(s (s + \[Theta]c))/
Dp]))))/(s^(
3/2) (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c) (-Dc s +
Dp (s + \[Theta]c))) + (-Sqrt[Dc] Sqrt[Dp] pinf s^2 Sqrt[
Dc Dp s] \[Alpha]1 (s + \[Theta]c)^3 +
Dc s^(5/2) (s + \[Theta]c)^2 (A2 Sqrt[Dp s] + A2 Dp \[Alpha]1 -
Sqrt[Dp] pinf Sqrt[s] (s + \[Theta]c)) +
1/(Sqrt[s + \[Theta]c])
Dp (-A2 s^(3/2) (Sqrt[Dp s] + Dp \[Alpha]1) (s + \[Theta]c)^(
7/2) + Sqrt[Dp] pinf Sqrt[(s (s + \[Theta]c))/
Dp] (Dp s \[Alpha]1 (s + \[Theta]c)^4 +
Sqrt[s + \[Theta]c] (\[Theta]c^3 Sqrt[
Dp s^3 (s + \[Theta]c)] +
3 \[Theta]c^2 Sqrt[Dp s^5 (s + \[Theta]c)] +
3 \[Theta]c Sqrt[Dp s^7 (s + \[Theta]c)] + Sqrt[
Dp s^9 (s + \[Theta]c)]))))/(Sqrt[Dp] s^(
5/2) (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c)^3 (-Dc s +
Dp (s + \[Theta]c)))
In[168]:= eq55 =
Assuming[s > 0 && (s + \[Theta]c) > 0 && Dp > 0 && Dc > 0,
FullSimplify[eq54[[1]]] + FullSimplify[eq54[[2]]] +
FullSimplify[eq54[[3]]]] // Normal
Out[168]= (-A2 + pinf (s + \[Theta]c))/(s (s + \[Theta]c)) - (
A1 E^((lh - x) Sqrt[(s + \[Theta]c)/Dc]) Sqrt[Dc/
s] (s Sqrt[Dc (s + \[Theta]c)] + \[Alpha]1 Sqrt[
Dc Dp s (s + \[Theta]c)]))/((Sqrt[s] +
Sqrt[Dp] \[Alpha]1) (s + \[Theta]c) (-Dc s +
Dp (s + \[Theta]c))) + (E^((s (lh - x))/Sqrt[
Dp s]) (-Sqrt[
Dp s] \[Alpha]1 (-Zhonghui Ou2017-09-04T09:25:34ZAvoid SelectionMove inconsistency?
http://community.wolfram.com/groups/-/m/t/1214368
I believe there is a bug in `SelectionMove` when it is used to select a `CellGroup`. The bug is two-fold: not only does `SelectionMove` fail to perform the operation, it also doesn't return `$Failed` as it should according to the Documentation:
> `SelectionMove` returns `$Failed` if it cannot move the selection in the way you request.
Suppose we have a Notebook containing a `CellGroup`. Then we can paste the following cell after this `CellGroup` and evaluate it in order to select the previous `CellGroup`:
SelectionMove[PreviousCell[], All, CellGroup]
This works in the latest version of *Mathematica* if `PreviousCell[]` is indeed a member of a `CellGroup`. But what if it is not? In that case this piece of code just does nothing AND nothing is returned as output; the latter directly contradicts the cited Documentation statement.
The described behavior is also inconsistent with other cases, for example
SelectionMove[EvaluationCell[], Previous, CellGroup]
selects the previous cell (which isn't a member of a `CellGroup`). Why then does the same approach fail to work with `PreviousCell[]`?
The fact that `SelectionMove` doesn't return `$Failed` when it is unable to perform the requested operation is very unfortunate because it makes programming much more difficult: we must check whether the selection has moved after EVERY call to `SelectionMove`, but there is no universal way to do this. In the best case it complicates code a lot and is very inefficient. In the worst case it makes it impossible to perform some simple tasks; as a vivid example, consider this StackExchange question:
- [Stop notebook from auto-scrolling upon printing](https://mathematica.stackexchange.com/q/157183/280)
The problem raised in this question is so simple that it is hard to believe it can't be solved reliably: we need just to move the selection after the input cell OR after the `CellGroup` containing output generated by this cell, and print new cell without scrolling using `NotebookWrite` (`CellPrint` always scrolls, hence it cannot be used in this case). But careful investigation showed that it is currently impossible to do this in an efficient way mainly due to the described bug.
My main question is:
<br>
**Is it possible to check in an efficient and reliable way, whether `SelectionMove` has failed or not?**
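One imperfect workaround (a sketch only: it cannot distinguish a failed move from a move that lands on the same selection, and `SelectedCells` returns an empty list for between-cell selections) is to compare the selection before and after the call:

    (* hypothetical helper emulating the documented $Failed behavior *)
    selectionMoveChecked[target_, dir_, unit_] :=
     Module[{before = SelectedCells[InputNotebook[]]},
      SelectionMove[target, dir, unit];
      If[SelectedCells[InputNotebook[]] === before, $Failed, Null]]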
<sub>Reported to the support as [CASE:3968507].</sub>Alexey Popkov2017-11-04T17:05:57ZSelect CUDA C compiler in Mathematica 11.2 with CUDA paclet 11.2 ?
http://community.wolfram.com/groups/-/m/t/1193573
Dear,
I am using Mathematica 11.2 with the CUDA paclet 11.2.22 on Windows 10 64-bit.
I have Microsoft Visual Studio 2015 Community.
Although CUDACCompilers[] gives an empty list (with only VS 2015 installed), CUDAFunctionLoad works fine in Mathematica.
But I have just installed Visual Studio 2017 Community to program in C++ with the just released nVIDIA CUDA Toolkit 9.0.
The CUDA paclet 11.2 is still based on nVIDIA CUDA Toolkit 8.0 which does not support VS 2017.
CUDACCompilers[] is still an empty list, although VS 2015 and VS 2017 are both installed.
But CUDAFunctionLoad selects VS 2017 to compile, which fails for the reason I just mentioned.
How can I overrule Mathematica's choice for VS 2017? It should select VS 2015 for now...
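One possible way to override the choice (a sketch; the kernel source `src`, the function name, and the VS 2015 path are placeholders for a default installation) is to point CUDAFunctionLoad at the VS 2015 compiler explicitly via its "CompilerInstallation" option:

    Needs["CUDALink`"]
    fun = CUDAFunctionLoad[src, "myKernel",
      {{_Real, "Input"}, {_Real, "Output"}, _Integer}, 256,
      "CompilerInstallation" ->
       "C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC"]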
Kind regards,
BertBert Aerts2017-09-28T11:02:40ZMachine learning with weighted data
http://community.wolfram.com/groups/-/m/t/1223968
It does not appear easy within the current Classify, Predict or NetTrain functions to accommodate weighted data. If I am wrong about this, can anyone tell me how to do so? If anyone has a workaround, could you please make a suggestion?
Motivation: I have several projects that would benefit from weighted data, the latest being to explain this really interesting paper using Mathematica: [https://arxiv.org/pdf/1602.04938.pdf][1] . The idea would basically be to emulate one machine learning method -- the complex one such as a neural net -- with a less opaque one (such as a decision tree) but with the emulator only being responsible for producing similar answers within a neighborhood of certain points. This way one could use something close to human language to explain what a more complex and more opaque classifier was doing.
I'm attaching a draft notebook that shows some preliminary work in this field and indicates where having a weighted classifier would be an improvement.
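Not a true weighted loss, but one common approximation (a sketch; `examples` and `weights` are placeholder names for a list of input -> class rules and matching positive reals) is to resample the training data with probability proportional to the weights before calling Classify:

    resampled = RandomChoice[weights -> examples, 10 Length[examples]];
    c = Classify[resampled]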
[1]: https://arxiv.org/pdf/1602.04938.pdfSeth Chandler2017-11-17T22:10:07ZFinding yoga-poses constellations in the night sky
http://community.wolfram.com/groups/-/m/t/1207400
![found constellations][9]
## Story Time ##
At its first meeting in 1922, the International Astronomical Union (IAU), officially adopted the list of 88 constellations that we use today.
These include 14 men and women, 9 birds, two insects, 19 land animals, 10 water creatures, two centaurs, one head of hair, a serpent, a dragon, a flying horse, a river and 29 inanimate objects. As many of us have (frustratingly) witnessed first hand while star-gazing - most of these bear little resemblance to their supposed figures. Instead, it is more likely that the ancient constellation-makers meant them to be symbolic, a kind of celestial "Hall of Fame" for their favorite animals or heroes.
This raises two questions that I sought to answer with this project:
1. Can we 'do better' now with the WL's StarData[] curated data and Machine Learning functionality?
2. What if the ancient constellation-makers were slightly more creative? Say they looked up at the sky, and only saw yoga-poses!
Some examples of the found yoga-pose constellations, projected on images of the night sky are shown here, with a walk-through of the code below:
[![example yoga-pose constellations][1]][2]
## Yoga Poses ##
First things first, finding images for yoga poses.
Turns out, the WL has a built in YogaPose Entity Class with 216(!) available entities and their schematics.
A lot of these are very similar (e.g. palms facing up/down) and we therefore only select a subset of them, which differ substantially from each other:
yogaposes = EntityClass["YogaPose", All] // EntityList;
chosenPoses =
List /@ {4, 6, 7, 8, 11, 14, 23, 25, 28, 35, 38, 43, 51, 54, 56, 59,
63, 65, 71, 72, 76, 77, 78, 84, 88, 90, 98, 100, 102, 103, 110,
111, 112, 113, 116, 118, 119, 125, 126, 133, 139, 142, 149, 152,
153, 154, 155, 156, 160, 164, 165, 167, 172, 177, 178, 180, 182,
183, 184, 190, 193, 194, 195, 197, 202, 204, 206, 207, 209, 210,
212, 215, 216};
Shallow[Extract[yogaposes, chosenPoses]]
We write a wrapper to ensure the schematics are padded to give a square image and visualize our 73 constellations:
makeSquare[gr_, size_, bool_: False] :=
Block[{range, magnitudes, order, padding, newRanges, res},
range = AbsoluteOptions[gr, PlotRange][[1, 2]];
magnitudes = Abs[Subtract @@@ range];
order = Ordering[magnitudes, 2, Greater];
padding = Subtract @@ magnitudes[[order]]/2;
newRanges = {{-1, 1} padding + Last[range[[order]]],
First[range[[order]]]};
res = Show[gr, PlotRange -> Reverse@newRanges[[order]],
ImageSize -> size];
If[bool, Rasterize[res], res]]
Multicolumn[
makeSquare[#["SimplifiedSchematic"], 64] & /@
Most[Extract[yogaposes, chosenPoses]], 9, Frame -> All]
![constellations graphic][3]
## Neural Network ##
Our problem can now be worded as a classification one: *given 5-10 stars, classify them as one of the 73 yoga-pose constellations*
The problem with that formulation, of course, is that 5-10 randomly selected points inside the constellation region aren't specific enough to differentiate between constellations. The image below shows that although 1000 sets of 10 points will define the shape, 10 points alone fall short:
![pointsInMesh][4]
Instead, we compute the Voronoi diagram of the points, using DistanceTransform to leverage already-optimized convolutional neural nets used for classifying images. Note that even though the results are naturally less clear with fewer points (left to right), the result with only 10 points at the far right is still quite recognizable to the human eye:
![distanceTransforms][5]
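The panels above can be reproduced in miniature (a sketch using random points rather than points sampled from the constellation meshes used below):

    pts = RandomInteger[{1, 64}, {10, 2}];
    ImageAdjust@DistanceTransform[Image[SparseArray[pts -> 0, {64, 64}, 1]]]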
With this in-mind, we create a neural network similar to the [VGG16][6] neural network, with successive 5x5 convolutions, ReLu activation functions and Max pooling layers. Finally note that we use an average pooling instead of fully connected layers to reduce the number of parameters and the use of dropout layers.
conv[output_] := ConvolutionLayer[output, {5, 5}, "PaddingSize" -> 2]
pool[size_] := PoolingLayer[{size, size}, "Stride" -> size]
lenet = NetChain[
{conv[32], Ramp, conv[32], Ramp, pool[2], conv[64], Ramp, conv[64],
Ramp, pool[2], conv[128], Ramp, conv[128], Ramp, conv[128], Ramp,
PoolingLayer[{9, 9}, "Stride" -> 9, "Function" -> Mean],
FlattenLayer[], DropoutLayer[], 73, SoftmaxLayer[]},
"Output" -> NetDecoder[{"Class", Extract[yogaposes, chosenPoses]}],
"Input" -> NetEncoder[{"Image", {64, 64}, ColorSpace -> "Grayscale"}]
]
The training set then consists of generating 10-15 points inside the constellation region (resized and padded to allow for rotations later), and taking their DistanceTransform:
meshes[n_] :=
meshes[n] =
ImagePad[ColorNegate@
makeSquare[
yogaposes[[chosenPoses[[n, 1]]]]["SimplifiedSchematic"], 44], 10]
trainingSet[n_, m_, iter_] :=
With[{reg =
ImageMesh[meshes[n], CornerNeighbors -> False,
Method -> "MarchingSquares"]},
Thread[ColorConvert[
ImageAdjust[
DistanceTransform[Image[SparseArray[# -> 0, {64, 64}, 1]]]],
"Grayscale"] & /@
Clip[Round@RandomPoint[reg, {iter, m}], {1, 64}] ->
yogaposes[[chosenPoses[[n, 1]]]]]]
exampleTrainingSet =
Flatten[trainingSet[#, 10, 1] & /@ RandomInteger[{1, 73}, 10], 1]
The net took a couple of hours to train on ~100,000 examples on my CPU.
Here we import the trained net:
lenetTrained = Import["constellationsNet.wlnet"]
## Night Sky ##
We're almost ready to classify the night sky. We first need the location of the 10,000 brightest stars along with their Right Ascension and Declination:
brightest =
StarData[EntityClass[
"Star", {EntityProperty["Star", "ApparentMagnitude"] ->
TakeSmallest[10000]}], {"RightAscension", "Declination",
"ApparentMagnitude"}]
We can plot these on the night sky using no projection (i.e. RA vs Dec), using the sinusoidal projection (taken from Kuba's excellent [answer][7] on SE), or on the celestial sphere, given by the following transformations respectively:
$$\text{sinusoidal}: \{(\alpha - \pi) \cos(\delta),\ \delta\}$$
$$\text{mapTo3D}: \{\cos(\alpha) \cos(\delta),\ \sin(\alpha) \cos(\delta),\ \sin(\delta)\}$$
![brightestStarsProjections][8]
## Classification ##
Finally, we use these 10,000 brightest stars to compute a multivariate smooth kernel distribution to sample from, and a Nearest function to compute neighboring stars. We need, of course, to use our own distance function on the celestial sphere:
wrap[list_] := Block[{xs, ys},
{xs, ys} = Transpose[list];
Thread[{Mod[xs, 2 \[Pi]], Mod[ys, \[Pi], -\[Pi]/2]}]]
mapTo3D[\[Alpha]_, \[Delta]_] = {Cos[\[Alpha]] Cos[\[Delta]],
Cos[\[Delta]] Sin[\[Alpha]], Sin[\[Delta]]};
dist[{u_, v_}, {x_,
y_}] := (#1[[1]] - #2[[1]])^2 + (#1[[2]] - #2[[2]])^2 + (#1[[
3]] - #2[[3]])^2 & @@ mapTo3D @@@ {{u, v}, {x, y}}
pts = wrap@QuantityMagnitude[UnitConvert[brightest[[6 ;;, ;; 2]]]];
nf = Nearest[pts, DistanceFunction -> dist]
sm = SmoothKernelDistribution[pts]
Our search algorithm is therefore defined as follows:
1. Pick a random position from the night sky distribution
2. Compute its 5-10 nearest neighbors
3. Classify those stars and their rotations by $\frac{2 \pi }{15}$
4. Select the rotation which gives the highest accuracy
5. Associate constellation to running association and repeat
rescale[list_] := Block[{xs, ys},
{xs, ys} = Thread[list];
Thread[Rescale[#, MinMax[#], {11, 54}] & /@ {xs, ys}]]
rotate[\[Alpha]_, pts_] :=
ImageAdjust@
DistanceTransform[
Image@SparseArray[
Round@RotationTransform[\[Alpha] , {65/2, 65/2}][rescale[pts]] ->
0, {64, 64}, 1]]
sky = <||>;
accumulate[] :=
Block[{pts =
nf[Mod[RandomVariate[sm] + {0, \[Pi]}, 2 \[Pi]] - {0, \[Pi]},
RandomInteger[{5, 10}]], \[Alpha], pred},
{\[Alpha], pred} =
Last[SortBy[
First /@
Table[Thread[{\[Alpha],
lenetTrained[rotate[\[Alpha], pts],
"TopProbabilities"]}], {\[Alpha], 0, 2 \[Pi], (2 \[Pi])/
15}], Last]];
If[Not[KeyExistsQ[sky, pred[[1]]]] ||
TrueQ[sky[pred[[1]], "Accuracy"] < pred[[2]]],
AssociateTo[sky,
pred[[1]] -> <|"Accuracy" -> pred[[2]],
"Image" ->
HighlightImage[
makeSquare[pred[[1]]["SimplifiedSchematic"],
64], {PointSize[Medium], Red,
Round[RotationTransform[\[Alpha], 65/2 {1, 1}][rescale@pts],
0.5]}], "Points" -> pts, "Angle" -> \[Alpha]|>], Nothing];]
We can import a precomputed association with 20 such constellations:
selectedAsc = Import["skyAsc.m.gz"]
## Results ##
Finally, we orient the schematic based on the optimally found angle and (manually) connect the dots.
orient[pts_, \[Alpha]_] :=
Round[RotationTransform[\[Alpha], 65/2 {1, 1}][rescale@pts], 0.5]
lines = {{{10, 7, 3, 1, 4, 9, 2, 5, 8}, {2, 6}}, {{7, 3, 1, 9,
6}, {10, 2, 1, 8}, {4, 3, 5}}, {{7, 8, 9, 4, 3, 2, 10}, {1, 2, 6,
5}}, {{10, 2, 1, 3, 9, 5}, {7, 1, 4, 6, 8}}, {{5, 4, 1, 2,
3}}, {{10, 4, 5, 7, 9, 3, 8}, {6, 1, 2, 3}}, {{9, 5, 6, 3, 1, 2,
7}, {7, 4, 8}}, {{7, 6, 2, 3, 5, 8}, {9, 5, 4}, {3, 1}}, {{6, 2,
1, 3, 4}, {2, 5}}, {{5, 6, 4, 1, 2, 8, 3, 7}}, {{10, 6, 3, 2,
7}, {5, 2, 1, 4, 9, 8}}, {{9, 5, 1, 2, 4, 8, 7}, {6, 3, 8}}, {{6,
10, 4, 2, 1, 3, 9, 7, 5, 8}}, {{6, 7, 3, 2, 1, 4, 5}}, {{10, 8,
3, 6, 7, 2, 5, 9}, {5, 1, 4, 3}}, {{6, 5, 2, 1, 3}, {1, 4}}, {{5,
1, 2, 3}, {1, 4}}, {{6, 3, 2, 1, 4}, {3, 5}, {2, 7}}, {{3, 6, 8,
2, 4, 5, 1, 7}}, {{3, 1, 2, 6, 5}}};
makeSquareAndScale[gr_, \[Alpha]_, size_, opacity_] :=
Block[{range, magnitudes, order, padding, newRanges, s},
range = AbsoluteOptions[gr, PlotRange][[1, 2]];
magnitudes = Abs[Subtract @@@ range];
order = Ordering[magnitudes, 2, Greater];
padding = Subtract @@ magnitudes[[order]]/2;
s = size/First[magnitudes[[order]]];
newRanges = {{-1, 1} padding + Last[range[[order]]],
First[range[[order]]]};
Graphics[{Opacity[opacity],
GeometricTransformation[
GeometricTransformation[
gr[[1]], {ScalingMatrix[{s,
s}], -1 First /@ (s Reverse@newRanges[[order]])}],
RotationTransform[\[Alpha], (size + 1)/2 {1, 1}]]},
ImageSize -> size]]
With[{l = Map[Line, lines, {2}]},
Multicolumn[
Table[Show[
makeSquareAndScale[
Keys[selectedAsc][[i]][
"SimplifiedSchematic"], -selectedAsc[[i, -1]], 64, 0.25],
Graphics[
GraphicsComplex[
rescale[selectedAsc[[i, -2]]], {l[[i]], Red, PointSize[Large],
Point /@ Sort /@ Flatten /@ lines[[i]]}]],
ImageSize -> 150], {i, 20}], 5, Frame -> All,
Appearance -> "Horizontal"]]
![found constellations][9]
These can also be superimposed on the full night-sky:
Graphics[{With[{l = Map[Line, lines, {2}]},
Table[GraphicsComplex[
selectedAsc[[i, -2]], {Orange, Thickness[Large], l[[i]], Red,
PointSize[Medium], Point /@ Sort /@ Flatten /@ lines[[i]]}], {i,
20}]], Opacity[0.25], PointSize[Small],
Point[Complement[pts, Flatten[Values@selectedAsc[[All, -2]], 1]]]},
ImageSize -> 750]
![found overlay][10]
sinusoidal[\[Alpha]_, \[Delta]_] = {(\[Alpha] - \[Pi]) Cos[\[Delta]], \
\[Delta]}
Graphics[{With[{l = Map[Line, lines, {2}]},
Table[GraphicsComplex[
sinusoidal @@@ selectedAsc[[i, -2]], {Orange, Thickness[Large],
l[[i]], Red, PointSize[Medium],
Point /@ Sort /@ Flatten /@ lines[[i]]}], {i, 20}]],
Opacity[0.25], PointSize[Small],
Point[sinusoidal @@@
Complement[pts, Flatten[Values@selectedAsc[[All, -2]], 1]]]},
ImageSize -> 750]
![found sinusoidal][11]
It is then a matter of Overlaying these found constellations to existing images of night skies to produce the images at the beginning of the post:
overlay[{img_, constellation_}, {size_, loc_, opac_}] :=
ImageCompose[imgs[[img]],
ImageResize[
Show[makeSquareAndScale[
Keys[selectedAsc][[constellation]][
"SimplifiedSchematic"], -selectedAsc[[constellation, -1]], 64,
opac], Graphics[
GraphicsComplex[
rescale[selectedAsc[[constellation, -2]]], {White,
Thickness[0.0075], Line /@ lines[[constellation]], White,
PointSize[.025],
Point /@ Sort /@ Flatten /@ lines[[constellation]]}]],
ImageSize -> 1000], size], Scaled[loc]]
## Conclusions / Lessons Learned ##
- It IS possible to find collections of stars in the night sky matching all sorts of shapes. Other built-in entities to try are Pokémon, dinosaurs, etc.
- Not all of the found constellations work 'perfectly'
- Machine learning is a powerful tool, and the WL implementation makes it easy to get started
- Reformulating the problem to an easier, already-solved one (e.g. points -> 2D image input) can improve classification accuracy
- Perhaps the most interesting aspect of neural networks now is diversifying their applications - so be creative
This work was presented at the 2017 WTC (as part of a larger talk entitled "(De)Generative Art"). <br>
I look forward to any comments/suggestions.
<br>
<br>George
<br>
<br> PS The editor doesn't seem to let me attach the .wlnet and .m.gz files.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=exampleImages.png&userId=11733
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=exampleImages.png&userId=616023
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4214constellationsGraphic.png&userId=616023
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=randomPointsInmesh.png&userId=616023
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=distanceTransforms.png&userId=616023
[6]: https://arxiv.org/pdf/1409.1556v6.pdf
[7]: https://mathematica.stackexchange.com/questions/89668/geoprojection-for-astronomical-data-wrong-ticks/89792#89792
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=maps.png&userId=616023
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=foundConstellations.png&userId=616023
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=overlaidConstellations.png&userId=616023
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=overlaidConstellationsSinusoidal.png&userId=616023
George Varnavides 2017-10-24T04:17:44Z
Read result file, avoid missing data matrix, the file might be corrupt?
http://community.wolfram.com/groups/-/m/t/1223831
I'm trying to run a Modelica code in SystemModeler:
model code
//constants
constant Real pi = 2 * Modelica.Math.asin(1.0);
parameter Real mukpc = 0.2;
parameter Real muspc = 0.0;
parameter Real mukps = 0.0;
parameter Real musps = 0.0;
parameter Real mukcs = 0.2;
parameter Real muscs = 0.0;
parameter Real mp = 1.0;
parameter Real js = 2.0;
parameter Real b1 = 0.01;
parameter Real b2 = 0.01;
parameter Real b3 = 0.015;
parameter Real r1 = 0.01;
parameter Real r2 = 0.003;
parameter Real r3 = 0.003;
parameter Real Ts = 1;
parameter Real Tw = 0.9;
parameter Real absTol = 0.01;
//variables
Real X1, X2, X3, Y1, Y2, Y3, tets, V1, V2, V3, omgs, A1, A2, A3, alps, Fp1, Fp2, Fp3, Fncp1, Fncp2, Fncp3, Ffcp1, Ffcp2, Ffcp3, Tcp1, Tcp2, Tcp3, Ffsp1, Ffsp2, Ffsp3, Fnsp1, Fnsp2, Fnsp3, Tfsp1, Tfsp2, Tfsp3, Text, Tfsc, Fscx, Fscy, T1, T2, T3;
initial equation
tets = pi / 3;
omgs = 0.1;
equation
//inpute forces
Fp1 = if mod(time, 3 * Ts) < Tw then 1000 else 0;
Fp2 = if mod(time, 3 * Ts) < Tw + Ts and mod(time, 3 * Ts) > Ts then 1000 else 0;
Fp3 = if mod(time, 3 * Ts) < Tw + 2 * Ts and mod(time, 3 * Ts) > 2 * Ts then 1000 else 0;
Text = 0;
//derivatives
V1 = der(X1);
V2 = der(X2);
V3 = der(X3);
A1 = der(V1);
A2 = der(V2);
A3 = der(V3);
omgs = der(tets);
alps = der(omgs);
//kinematics
X1 = r1 * cos(tets) + r2;
Y1 = r1 * sin(tets);
X2 = r1 * cos(tets + 2 * pi / 3) + r2;
Y2 = r1 * sin(tets + 2 * pi / 3);
X3 = r1 * cos(tets + 4 * pi / 3) + r2;
Y3 = r1 * sin(tets + 4 * pi / 3);
//dynamic equations
//shaft
js * alps = T1 + T2 + T3 - Text - Tfsc;
Fscx = Fnsp1 + Fnsp2 + Fnsp3;
Fscy = Ffsp1 + Ffsp2 + Ffsp3;
//Fsc =sqrt(Fscx^2+Fscx^2);
T1 = Fnsp1 * Y1 - Ffsp1 * X1 - Tfsp1;
T2 = Fnsp2 * Y2 - Ffsp2 * X2 - Tfsp2;
T3 = Fnsp3 * Y3 - Ffsp3 * X3 - Tfsp3;
//p1
mp * A1 = Fp1 - Ffcp1 - Fnsp1;
Ffsp1 = Fncp1;
Tfsp1 + Fnsp1 * (b2 / 2 - Y1) = Ffsp1 * b1 - Fp1 * b2 / 2 + Tcp1;
//p2
mp * A2 = Fp2 - Ffcp2 - Fnsp2;
Ffsp2 = Fncp2;
Tfsp2 + Fnsp2 * (b2 / 2 - Y2) = Ffsp2 * b1 - Fp2 * b2 / 2 + Tcp2;
//p3
mp * A3 = Fp3 - Ffcp3 - Fnsp3;
Ffsp3 = Fncp3;
Tfsp3 + Fnsp3 * (b2 / 2 - Y3) = Ffsp3 * b1 - Fp3 * b2 / 2 + Tcp3;
//friction
Ffcp1 = -mukpc * (abs(Fncp1) + abs(Tcp1 / b3)) * sign(V1);
Ffcp2 = -mukpc * (abs(Fncp2) + abs(Tcp2 / b3)) * sign(V2);
Ffcp3 = -mukpc * (abs(Fncp3) + abs(Tcp3 / b3)) * sign(V3);
Tfsp1 = 0;
Tfsp2 = 0;
Tfsp3 = 0;
Ffsp1 = 0;
Ffsp2 = 0;
Ffsp3 = 0;
Tfsc = mukcs * sqrt(Fscx ^ 2 + Fscy ^ 2) * sign(omgs);
end code;
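As an aside, for readers tracing the input logic: the three pressure pulses fire in sequence, each active for `Tw = 0.9` s within its own 1 s slot of a 3 s cycle. A Python sketch of the same if-expressions (the function name `pulse` is mine, not part of the model):

```python
import math

TS, TW = 1.0, 0.9  # Ts and Tw from the model's parameters

def pulse(t, phase):
    """Force on piston `phase` (0, 1 or 2), mirroring the model's if-expressions."""
    m = math.fmod(t, 3 * TS)           # position within the 3*Ts cycle
    if phase == 0:
        return 1000.0 if m < TW else 0.0
    lo = phase * TS                    # the slot opens at Ts (or 2*Ts)
    return 1000.0 if lo < m < lo + TW else 0.0
```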
But I get the error:
> Error
> Simulation of foo exited with an error(-1073741819), see log window for details
![enter image description here][2]
and then:
> Error: Could not read result file "C:/Users/foo/AppData/Local/Temp/WolframSystemModeler-5.0.0/sme.5.0.0_1510933977_23281.mat". Missing data_2 matrix, the file might be corrupt.
![enter image description here][3]
**I would appreciate it if you could help me understand what the problem is and how I can solve it.**
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5892Untitled.png&userId=1014408
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4053Untitled.png&userId=1014408
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10469Untitled.png2.png&userId=1014408
Foad Sojoodi Farimani 2017-11-17T16:05:50Z
[✓] Can this expression be written in closed form?
http://community.wolfram.com/groups/-/m/t/1222861
Observe the following Wolfram Mathematica expression which results in a table of integer sequences:
b[n_, k_] := b[n, k] =
  If[n == 0, 1,
   If[n == 1, If[k > 0, 1, 0],
    Sum[Binomial[n - 1, r - 1]*b[r - 1, k - 1]*b[n - r, k - 1], {r, 1, n}]]];
t[n_, k_] := b[n, k] - If[k > 0, b[n, k - 1], 0];
Table[Table[t[n, k], {k, 0, n}], {n, 0, 10}]
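For reference, the same recurrence reads more easily with the memoisation made explicit. A Python sketch, derived directly from the code above (not from the OEIS entry):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def b(n, k):
    if n == 0:
        return 1
    if n == 1:
        return 1 if k > 0 else 0
    # binomial convolution of two smaller instances, both at depth k - 1
    return sum(comb(n - 1, r - 1) * b(r - 1, k - 1) * b(n - r, k - 1)
               for r in range(1, n + 1))

def t(n, k):
    # difference of consecutive columns of b
    return b(n, k) - (b(n, k - 1) if k > 0 else 0)

rows = [[t(n, k) for k in range(n + 1)] for n in range(6)]
```

So `b[n, k]` is a binomial-weighted convolution over the split point `r`, and `t[n, k]` isolates what first appears at level `k`; the nested `If` statements are just the base cases `n == 0` and `n == 1`.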
I'm struggling to interpret what this is actually doing. Clearly it's quite a complicated mess of if statements, but I'm not entirely sure if they are necessary or not. I obtained the expression from here:
https://oeis.org/A195581
Because I wanted to determine an expression (dependent on the two variables n and k) which would result in the table of integers seen there. Would somebody please write the Mathematica code above in closed form so that it's easier to understand? If this is not possible, can you at least try to explain what's going on, and perhaps give an expression which models the above in *most* situations?
John Travolski 2017-11-16T03:22:25Z
Correction to Documentation for Cepstrogram & Cepstrogram
http://community.wolfram.com/groups/-/m/t/1218929
Just as an FYI, the default partition size for these two routines is
2^(Round[Log[2,Length[list]]]+1)
rather than
2^Round[Log[2,Length[list]]]+1
as stated in the docs. The default step size is any number between `Round[partition/3]` and the length of the list.
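The difference between the two expressions is just operator precedence: whether the `+1` sits inside or outside the exponent. A Python sketch of the two readings (function names are mine):

```python
import math

def partition_actual(n):
    # 2^(Round[Log[2, n]] + 1): the +1 is part of the exponent
    return 2 ** (round(math.log2(n)) + 1)

def partition_as_documented(n):
    # 2^Round[Log[2, n]] + 1: the +1 is added after exponentiation
    return 2 ** round(math.log2(n)) + 1
```

For a list of length 1000, these give 2048 versus 1025.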
Cheers, Scott
Scott Guthery 2017-11-09T01:11:25Z
My Mathematica keeps asking for Activation Key on startup?
http://community.wolfram.com/groups/-/m/t/1222950
Hi all,
Upon startup, every now and then, my Mathematica program will ask for an activation key. **It is not expired.** I go to the Wolfram User Portal to find my activation key and enter it, but it does not work. I then have to uninstall and re-download Mathematica.
Does anybody have any suggestions or know what may be causing this issue?
Best,
N
Nish Paul 2017-11-16T01:31:52Z
[✓] Pasting numerical expressions into Input Cells?
http://community.wolfram.com/groups/-/m/t/1220439
I would like to create a PasteButton that will paste the following expression into an existing Input Cell and display it in the Input Cell with 6-place precision while keeping the full underlying machine precision. From my efforts I suspect that this is not possible, because the only way to obtain the shortened display is to paste an entire Output Cell.
expr = N[x/3 + 2 y/3]
giving
0.333333 x + 0.666667 y
The following is just the simplest approach, which does not work. I haven't found any other approach that works either.
PasteButton["1/3", expr]
The resulting button pastes the following where the shorter form would be preferable.
0.3333333333333333` x+0.6666666666666666` y
Does anyone know a way to specify a PasteButton that will paste the 6 place Output display into an Input Cell?
David J M Park Jr 2017-11-12T06:12:37Z
Sort a table with respect to other?
http://community.wolfram.com/groups/-/m/t/1221539
Hi,
I have two 36 x 3 tables. The first column in each contains independent terms/numbers, while the second and third columns contain the same indices (numbers) from 1 - 6, but in different orders. I want to arrange the rows of table B in the same order as in A, with respect to their second and third elements.
For example if
A = {{X, 2, 1}, {Y, 2, 3}, {Z, 2, 4}...}
and
B = {{x, 2, 3}, {y, 2, 4}, {z, 2, 1}....}
I want B to be rearranged to
B_new = {{z, 2, 1}, {x, 2, 3}, {y, 2, 4}....}
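One approach is to build a lookup from A's key columns to their row positions and sort B by it. In the Wolfram Language something like `SortBy[B, FirstPosition[A[[All, 2 ;; 3]], #[[2 ;; 3]]] &]` should work; the same idea as a Python sketch:

```python
def reorder(a, b):
    """Reorder rows of b so its (2nd, 3rd)-column key order matches a's."""
    position = {tuple(row[1:3]): i for i, row in enumerate(a)}
    return sorted(b, key=lambda row: position[tuple(row[1:3])])

A = [["X", 2, 1], ["Y", 2, 3], ["Z", 2, 4]]
B = [["x", 2, 3], ["y", 2, 4], ["z", 2, 1]]
B_new = reorder(A, B)  # rows of B now follow A's (2nd, 3rd)-column order
```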
Will be thankful for any suggestion. Attached is the *.nb file having these tables.
S G 2017-11-14T10:28:55Z
[✓] Trace path of a point from a body using MultiBody library in SM?
http://community.wolfram.com/groups/-/m/t/1222597
In many CAE software (SolidWorks, RecurDyn...) it is possible to trace the curve that a certain point follows during a simulation. This is useful to show the motion of the point and compare the results with the expected ones or even with the output of another software.
Is there a way to do this in SystemModeler? I am working with multibody mechanical systems using the MultiBody library.
For example, in the following simulation:
![enter image description here][1]
The goal is to show the arc followed by the sphere during the simulation. The curve is to be drawn as the motion occurs and export the data to a plain text file (if possible).
Best regards,
William
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SMMotion.gif&userId=1190887
JOSE OLIVER 2017-11-15T11:03:39Z
Get all entries of a Databin?
http://community.wolfram.com/groups/-/m/t/1222161
Hi, I recently started to use Data Drop to store sensor data for analysis and visualization.
Creating the bin was easy, and my Python program on my Raspberry Pi writes data to it via the Web API.
When I refer to it in Mathematica on my Mac (as Databin["psyLf0X2"]) it tells me it has 6 entries, and every time I evaluate it again that does not change.
However, when I look in My Databins it actually has 14 entries... Why is it not updated?
What am I missing?
Ted vanderTogt 2017-11-15T15:53:18Z
The voters have spoken - but what did they say? (Part 3)
http://community.wolfram.com/groups/-/m/t/1222959
Introduction
----------
Elections are fundamental to our democracies and way of life. The electorate bestows power on the lawmakers and public officials whose legitimisation is derived solely from the fact that they are elected. In recent times, many elections, such as the presidential election in the United States and the Brexit referendum and general elections in the United Kingdom, have seen a very pronounced polarisation of the electorate. Can mathematics and the Wolfram Language help us better understand this most fundamental decision-making process in our societies? The main question I want to discuss in a sequence of posts is what can be learnt from elections. So this post is about computational politics.
It is important to notice that democracies rely on a general acceptance of the result of elections. Without that consensus a peaceful co-existence is at risk. Societies agree on certain electoral procedures, and the results have to be understood to be final. There are, however, more fundamental questions, such as how we derive the "voters' will" from ballots. That is a highly non-trivial problem. Also, elected politicians often claim that they represent the "will of the people". But is that true, and what is the "will of the people"? Hence my question: the voters have spoken - but what did they say?
This is the third of a series of posts:
1. [Electoral systems - interpreting the will of the voters][1]
2. Gerrymandering - shaping the voters' will to your liking
3. **Analysis of the Brexit vote - how do we know what did the voters really want?**
This part of the sequence of posts will analyse the results of the Brexit vote in the United Kingdom. The question asked was basically a yes/no question, and with a small majority the social choice was to conduct Brexit. The rules of this process have been followed and the decision is binding. There is, however, a related question: did the outcome in this particular case actually give sufficient evidence that this is the "will of the people", as many politicians state? If we want to evaluate the strength of evidence, there are mathematically established procedures; similar procedures are applied, for example, in medical trials to evaluate the evidence that data provide in support of a particular hypothesis. We will apply this type of procedure to the Brexit referendum.
All posts were developed with [Bjoern Schelter][2], who is also a member of this community. He is indeed the expert on these methods.
**Disclaimer:** this post might be modified slightly over the next couple of days.
Data download and preparation
----------
We will start by obtaining some of the data required for this post. The [central dataset is provided by the electoral commission][3]. It is attached to this post. We download and then import the data like so:
SetDirectory[NotebookDirectory[]]
votingData = Import["EU-referendum-result-data.csv"];
Here is a short extract of the data:
votingData[[1 ;; 5]] // Transpose // TableForm
![enter image description here][4]
Next, we will need some data on the local authorities/electoral districts from [here][5]. We then import:
localauthorities = Import["Local_Authority_Districts_December_2016_Generalised_Clipped_Boundaries_in_Great_Britain.shp"];
explanationShape = Import["Local_Authority_Districts_December_2016_Full_Clipped_Boundaries_in_Great_Britain.csv"];
These data sets contain information about the shape and location of the local authority boundaries and data on how to match that with the voting outcome in the first file. To get a first glance at the data we re-arrange and organise the data like so:
Monitor[areas =
Table[QuantityMagnitude[UnitConvert[If[Length[#] > 1, Total[GeoArea /@ #], GeoArea[#]]], "SI"] & @(localauthorities[[1, 2, 1, 2, 1, 2, 1, 1, 1]][[k]]), {k, 1, Length[localauthorities[[1, 2, 1, 2, 1, 2, 1, 1, 1]]]}];, k]
and
translateshpvote = SortBy[(Flatten /@ Transpose[{SortBy[Transpose[{Range[380], areas}], Last], SortBy[explanationShape[[2 ;;]], #[[9]] &]}])[[All, {1, 4}]], First];
The first representation is not very telling. It shows the outcome of the election.
Graphics[Flatten[
Table[{ColorDataFunction["TemperatureMap", "Gradients", {0, 1}, Blend["TemperatureMap", #]& ][(Select[
votingData[[2 ;;]], #[[4]] == translateshpvote[[k, 2]] &][[1]])[[-3]]/100.], localauthorities[[1, 2, 1, 2, 1, 2, 1, 1, 1]][[k]]}, {k, 1, 380}],1], AspectRatio -> GoldenRatio]
![enter image description here][6]
The bluer the colour, the higher the percentage of Leave votes; yellow indicates a preference for Remain. We will need to clean the data further and will then plot it with a threshold to see which regions decided for Leave and which for Remain:
substitute = ({#[[1]][[{1, 4}]] -> #[[2]][[{1, 4}]], #[[2]][[{1,
4}]] -> #[[1]][[{1, 4}]]} &@((Flatten /@
Transpose[{SortBy[Transpose[{Range[380], areas}], Last],
SortBy[explanationShape[[2 ;;]], #[[9]] &]}])[[205 ;; 206]]))
translateshpvoteclean = translateshpvote /. substitute;
Graphics[Flatten[
Table[{ColorDataFunction["TemperatureMap", "Gradients", {0, 1}, Blend["TemperatureMap", #]& ][
If[# > 0.5, 1, 0] & @((Select[Select[votingData[[2 ;;]], #[[1]] < 381 &], #[[4]] == translateshpvoteclean[[k, 2]] &][[1]])[[-3]]/100.)],
localauthorities[[1, 2, 1, 2, 1, 2, 1, 1, 1]][[k]]}, {k, 1, 380}],1], AspectRatio -> GoldenRatio]
![enter image description here][7]
The strong divide between England and Scotland becomes apparent from that figure. We will use the following style of the image in much of the remainder of the post:
Graphics[Flatten[
Table[{ColorData[
"RedGreenSplit"][((Select[
Select[votingData[[2 ;;]], #[[1]] < 381 &], #[[4]] ==
translateshpvoteclean[[k, 2]] &][[1]])[[-3]]/100.)],
localauthorities[[1, 2, 1, 2, 1, 2, 1, 1, 1]][[k]]}, {k, 1, 380}],
1], Background -> Black, AspectRatio -> GoldenRatio]
![enter image description here][8]
What does the Brexit election outcome tell us? What did voters want?
----------
UK-wide, the percentage of Leave votes (with respect to all valid votes) is:
N[Total[votingData[[2 ;;, 13]]]/Total[votingData[[2 ;;, 11]]]]
51.89%. This is the percentage of all valid votes. The percentage of Leave votes with respect to the entire electorate is:
N[Total[votingData[[2 ;;, 13]]]/Total[votingData[[2 ;;, 6]]]]
37.4%. The percentage of the entire electorate who voted Remain is:
N[Total[votingData[[2 ;;, 12]]]/Total[votingData[[2 ;;, 6]]]]
34.7%. The percentage of non-voters is:
N[(Total[votingData[[2 ;;, 6]]] - Total[Flatten@votingData[[2 ;;, 12 ;; 13]]])/Total[votingData[[2 ;;, 6]]]]
27.8%. This means that we do not know the will of about 28% of the electorate. Following the predefined social-choice procedure, the vote was narrowly won by Leave. That does not necessarily imply that this is "the will of the majority of the people".
What would the result be with indirect representation?
----------
If we had a representational vote, we would have to count the districts for Remain and Leave:
{Count[#, "Remain"], Count[#, "Leave"]} &@((#1 > #2) & @@@ votingData[[2 ;;, {12, 13}]] //. {False -> "Leave", True -> "Remain"})
{119, 263}. This result is even more extreme in favour of Leave. This is because the populations of the local authorities are very different.
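The counting logic is simply a majority decision per district, ignoring district size; a Python sketch with hypothetical per-district totals shows how district counts and vote totals can disagree:

```python
# hypothetical (remain, leave) vote totals for three districts
results = [(12000, 9000), (5000, 6000), (7000, 7100)]

remain_districts = sum(1 for remain, leave in results if remain > leave)
leave_districts = sum(1 for remain, leave in results if leave > remain)
# 1 district for Remain, 2 for Leave, although Remain has more votes overall
```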
Histogram[votingData[[2 ;;, 6]], PlotRange -> All, PlotTheme -> "Marketing", ImageSize -> Large,
FrameLabel -> {"population", "number of authorities"}, LabelStyle -> Directive[Bold, 15]]
![enter image description here][9]
This shows that there are large fluctuations in the population of the areas. Let's represent that slightly differently. We first collect data on Leave votes, valid votes, turnout and electorate.
electionEUresolved =
Transpose@({#[[1]], #[[2]], #[[3]], #[[4]]} &@Transpose@((({#[[13]], #[[11]], #[[9]], #[[6]]} & /@ (Reverse@SortBy[votingData[[2 ;;]], #[[9]] &])))));
BubbleChart[{#[[3]], N@#[[1]]/#[[2]], #[[4]]} & /@ electionEUresolved,
BubbleSizes -> Small,
ChartLabels ->
Placed[(Reverse@SortBy[votingData[[2 ;;]], #[[9]] &])[[
All, {5, 3}]], Tooltip],
Epilog -> {Red, Line[{{50, 0.5}, {100, 0.5}}]},
PlotTheme -> "Marketing", BubbleScale -> "Area", ImageSize -> Large,
FrameLabel -> {"turnout", "percentage leave votes"},
LabelStyle -> Directive[Bold, 15]]
![enter image description here][10]
The bubble sizes (areas) correspond to the size of the electorate. Let's see whether there is a correlation with turnout:
lm = LinearModelFit[{#[[3]], N@#[[1]]/#[[2]]} & /@ electionEUresolved,x, x, Weights -> electionEUresolved[[All, 4]]]
![enter image description here][11]
The following table shows that the positive slope is highly significant.
lm["ParameterTable"]
![enter image description here][12]
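`LinearModelFit` with the `Weights` option performs weighted least squares. For the single-regressor case used here, the closed form can be sketched in Python (synthetic data, pure standard library):

```python
def weighted_linear_fit(xs, ys, ws):
    """Weighted least squares for y ~ a + b*x; returns (a, b)."""
    sw = sum(ws)
    xbar = sum(w * x for w, x in zip(ws, xs)) / sw   # weighted means
    ybar = sum(w * y for w, y in zip(ws, ys)) / sw
    slope = (sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs)))
    return ybar - slope * xbar, slope

# exactly linear synthetic data y = 1 + 2x, with arbitrary weights
a, b = weighted_linear_fit([0, 1, 2, 3], [1, 3, 5, 7], [1, 2, 3, 4])
```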
We can add that to the figure:
Show[BubbleChart[{#[[3]], N@#[[1]]/#[[2]], #[[4]]} & /@
electionEUresolved, BubbleSizes -> Small,
ChartLabels ->
Placed[(Reverse@SortBy[votingData[[2 ;;]], #[[9]] &])[[
All, {5, 3}]], Tooltip],
Epilog -> {Red, Line[{{50, 0.5}, {100, 0.5}}]},
PlotTheme -> "Marketing", BubbleScale -> "Area", ImageSize -> Large,
FrameLabel -> {"turnout", "percentage leave votes"},
LabelStyle -> Directive[Bold, 15]],
Plot[lm[x], {x, 50, 85}, PlotStyle -> White]]
![enter image description here][13]
Seeing such a linear dependence raises the natural question of to what extent the result is biased by such correlations in the data. Linear models are a powerful tool to account for various potential factors influencing the final outcome of the election.
Linear Modelling
----------
Linear models are a generalisation of linear regression; they allow us to model a Gaussian-distributed outcome by various inputs that may or may not be continuous. To demonstrate this we use the simple voting data; the dependent (outcome) variable is the percentage of people who voted to leave the EU.
percentleave = N[votingData[[2 ;;, 13]]/votingData[[2 ;;, 11]]];
collatedData = (Flatten /@
Transpose[{Table[
Select[votingData, explanationShape[[k, 2]] === #[[4]] &], {k,
2, Length[explanationShape[[1 ;;]]]}],
explanationShape[[2 ;;]]}]);
We can then prepare the data for the linear model:
inputLinearModel =
Flatten /@ Transpose[{# - Mean[#] &@ collatedData[[All, 9]], collatedData[[All, 3]], # - Mean[#] &@ (collatedData[[All, 6]]/collatedData[[All, 30]]),
N[collatedData[[All, 13]]/collatedData[[All, 11]]]}];
The steps above generate the table that defines the input to the linear model. All continuous input variables have been centred so that their mean is zero. This is not required but simplifies the interpretability of the results. The input variables to the linear model are turnout, i.e. the fraction of people who voted in the given local administrative area; region, i.e. the rough geographical region (a non-continuous, nominal variable); and a proxy for the population density.
lmodel = LinearModelFit[inputLinearModel, {turnout, region, popdensity}, {turnout, region, popdensity}, NominalVariables -> region];
lmodel["ParameterTable"]
![enter image description here][14]
The model output shows the estimates with respect to the intercept (1) in the table. The missing region is Yorkshire and The Humber, so Mathematica relates all results to this region. Note that the estimate of 0.548879 means that Yorkshire and The Humber would have voted to leave the EU with 54.89%; this is the model output and does not necessarily match the actual result of the election. Since all continuous variables have mean zero, these 54.89% would be the result for Yorkshire and The Humber under the assumption that it has the UK-average population density and turnout. The model can obviously be used to approximate the result for Yorkshire and The Humber better by evaluating the function with the correct values for the area. All other influencing factors are estimated with respect to this result. In other words, to obtain the result for Scotland, the Scotland estimate needs to be added to the Yorkshire and The Humber result; so the best model guess for Scotland, if otherwise indistinguishable from the whole UK, would have been -23.93%+54.89%=30.96%. This obviously deviates from the actual result for Scotland, which is likely caused by the other covariates; the turnout and population density in Scotland are different from the UK averages.
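To make the arithmetic of the nominal variable concrete: a region's model estimate is the intercept plus its regional offset. A sketch using the figures quoted above, on the percent scale:

```python
intercept = 54.89          # reference region: Yorkshire and The Humber
scotland_offset = -23.93   # estimate of the Scotland dummy, relative to the reference

# model estimate for Scotland at UK-average turnout and population density
scotland_estimate = intercept + scotland_offset
```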
Next we want to analyse the same model but with data for Gibraltar and Northern Ireland (NI) added. We first need the respective areas of these regions:
fixNIGI =
Join[#[[1 ;; -2]],
ConstantArray[Missing[],
8], {#[[-1]]}, {Missing[]}] & /@ (Flatten /@
Transpose[{Select[
votingData, #[[1]] > 380 &], {QuantityMagnitude[
UnitConvert[
Entity["AdministrativeDivision", {"NorthernIreland",
"UnitedKingdom"}]["Area"], Quantity[1, (("Meters")^2)]]],
QuantityMagnitude[
UnitConvert[Entity["Country", "Gibraltar"]["Area"],
Quantity[1, (("Meters")^2)]]]}}]);
Then we can add the data to the table, create the input the linear model and estimate it:
collatedDatacomplete = Join[collatedData, fixNIGI];
inputLinearModelcomplete =
Flatten /@ Transpose[{# - Mean[#] &@ collatedDatacomplete[[All, 9]],
collatedDatacomplete[[All, 3]], # - Mean[#] &@ (collatedDatacomplete[[All, 6]]/collatedDatacomplete[[All, 30]]),
N[collatedDatacomplete[[All, 13]]/collatedDatacomplete[[All, 11]]]}];
lmodelcomplete = LinearModelFit[inputLinearModelcomplete, {turnout, region, popdensity}, {turnout, region, popdensity}, NominalVariables -> region]
lmodelcomplete["ParameterTable"]
![enter image description here][15]
The values match the results above quite well; obviously there is now a result for NI as well. The key feature of linear models is that one can combine the various areas to get an area-weighted estimate of the election result. To achieve this we need to obtain the parameters from the table above.
parametermatrix = Delete[lmodelcomplete["ParameterTableEntries"], {{2}, {-1}}]
To achieve an area-weighted result, we need to obtain the weight for each region, defined as the number of entries for that region divided by the total number of entries:
weights = N[#/Total[#]] &@RotateRight[SortBy[Tally[inputLinearModelcomplete[[All, 2]]], First]][[All, 2]]
As all parameters are given with respect to one region and represent differences from it, we need to add the estimates to the intercept (see the example for Scotland above). This is achieved through linear algebra, by defining a so-called contrast matrix.
contrast = IdentityMatrix[Length[weights]];
contrast[[1]] = ConstantArray[1, Length[weights]];
contrast = Transpose[contrast]
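In plain terms, this contrast matrix adds the intercept to every regional offset, turning relative estimates into absolute per-region predictions, which are then averaged with the region weights. A Python sketch with hypothetical numbers:

```python
def response_functions(params):
    """params[0] is the intercept; the rest are offsets relative to it."""
    intercept, offsets = params[0], params[1:]
    return [intercept] + [intercept + d for d in offsets]

params = [0.55, -0.24, 0.03]   # hypothetical intercept plus two region offsets
weights = [0.5, 0.3, 0.2]      # hypothetical region weights (sum to 1)

resp = response_functions(params)                     # per-region predictions
overall = sum(w * r for w, r in zip(weights, resp))   # weighted national mean
```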
Multiplying this matrix by the parameter estimates gives the response functions, i.e. the model results for each region.
responsefunctions = contrast.parametermatrix[[All, 1]]
Multiplying these by the weights for each region yields a weighted mean for the UK. It is worth noting that this result is no longer influenced by turnout or average population density, as these are 0 for the UK because we subtracted their overall means at the beginning.
weights.responsefunctions
So the model's referendum result would have been 52.99% in favour of Leave, higher than the actual result. Importantly, linear models provide errors for each parameter estimate, so we can estimate the overall uncertainty of this result. As standard errors are not additive but variances are, we obtain the estimated standard error as follows, assuming independence of the estimates:
seresponsefunctions = contrast.(parametermatrix[[All, 2]]^2)
The final standard error is
Sqrt[weights.seresponsefunctions]
0.0255668. This means that the best model guess is that 52.99% ± 2.56% are in favour of Leave. Using a Gaussian approximation, this implies that there is a
Probability[x < -(weights.responsefunctions - 0.5)/Sqrt[weights.seresponsefunctions], x \[Distributed] NormalDistribution[0, 1]]
12.11% chance of obtaining this result even though the majority of the UK electorate does not actually want to leave the EU. We emphasise that we are by no means challenging the result of the referendum; we are just arguing that, given the model assumptions, the observed results are compatible with a population split in which the majority does not want to leave the EU. In many fields, such as science and medicine, a 12.11% 'risk' that the result is a false positive, i.e. that the conclusions drawn and actions taken are wrong, is considered too high; 5% or 1% are typical thresholds, and in particle physics the threshold is much lower. So it remains to be investigated whether statements that the election result reflects the true opinion of the population are justified. Again, we do not challenge the outcome of the election, as there are clear rules known in advance, such as that it is everybody's free choice to vote or not and how the final result is obtained. There is just a potential non-causal link to people's real opinions, which is often assumed, but not necessarily correctly so.
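The tail probability quoted above follows from the point estimate and its standard error alone, via the Gaussian approximation. A Python sketch using the numbers from the text:

```python
from math import erfc, sqrt

estimate, stderr = 0.5299, 0.0255668   # model estimate and standard error from above

z = (estimate - 0.5) / stderr          # distance of the estimate from the 50% line
p = 0.5 * erfc(z / sqrt(2))            # upper-tail normal probability: P(true Leave share < 50%)
```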
Getting more data...
----------
An interesting thing to check is whether we can learn something from adding additional data on the various voting districts. The data is from [here][16]. We import the file with the data
additionaldata = Import["EU-referendum-results-and-characteristics-data.xlsx"];
We also use the electoral region id data from above:
ids = collatedDatacomplete[[All, 4]];
Then we need to deal with incomplete data/clean the data up:
fillers = ConstantArray[Missing[], #] & /@ Length /@ additionaldata[[2 ;; 11, 4]];
We now can generate a fixed data table:
fulldatatable =
Table[Join[collatedDatacomplete[[k]], Flatten@Table[If[Length[#] > 0, #, fillers[[i - 1]]] &@
Select[additionaldata[[i]], MemberQ[#, ids[[k]]] &], {i, 2, 11}]] /. "" -> Missing[], {k, 1, Length[ids]}];
To better work with the data we then generate a look-up table for the various columns:
datadescriptors =
Rule @@@ (Transpose[{Range[Length[#]], #}] & @
Join[votingData[[1]], explanationShape[[1]],
additionaldata[[2, 9]], additionaldata[[3, 1]],
additionaldata[[4, 8]], additionaldata[[5, 9]],
additionaldata[[6, 8]], additionaldata[[7, 4]],
additionaldata[[8, 8]], additionaldata[[9, 4]],
additionaldata[[10, 3]], additionaldata[[11, 3]]])
![enter image description here][17]
The following shows again a part of the table of all the data we can use now:
![enter image description here][18]
We can then generate the first part of the input for the linear model.
inputLinearModel =
Flatten /@ Transpose[{# - Mean[#] &@ collatedData[[All, 9]], collatedData[[All, 3]], # - Mean[#] &@ (collatedData[[All, 6]]/collatedData[[All, 30]]),
N[collatedData[[All, 13]]/collatedData[[All, 11]]]}];
Again we need to clean the data
fulldatatable[[All, 198]] /. "#" -> Missing[]
subMissing[x_] := If[StringContainsQ[ToString[FullForm[x]], "Missing"], Missing[], x ]
SetAttributes[subMissing, Listable]
This generates the full input for the linear model:
inputLinearModelcomplete =
Flatten /@
Transpose[{fulldatatable[[All, 3]], (# - Mean[#]) &@
fulldatatable[[All,
9]], (# - Mean[#]) &@ (fulldatatable[[All, 6]]/
fulldatatable[[All, 30]]), (# -
Mean[DeleteMissing[#]]) &@ (subMissing /@ (fulldatatable[[
All, 198]]/fulldatatable[[All, 6]])), (# -
Mean[DeleteMissing[#]]) &@ (fulldatatable[[All, 243]] /.
"#" -> Missing[]), (# -
Mean[DeleteMissing[#]]) &@ (subMissing /@ (fulldatatable[[
All, 227]]/fulldatatable[[All, 6]])),
N[fulldatatable[[All, 13]]/fulldatatable[[All, 11]]]}];
Here are a couple of lines of the input data we choose for our linear model:
inputLinearModelcomplete[[1 ;; 20]] // TableForm
![enter image description here][19]
Some more cleaning:
inputLinearModelcomplete = Select[subMissing /@ inputLinearModelcomplete, ! MemberQ[#, Missing[]] &];
Next we fit the linear model:
lmodelfulldata =
LinearModelFit[inputLinearModelcomplete, {region, turnout, popdensity, highqual, earning, nchildren}, {region, turnout, popdensity, highqual,
earning, nchildren}, NominalVariables -> {region}];
lmodelfulldata["ParameterTable"]
![enter image description here][20]
It is easy to see that income and the number of children in the household significantly influence the preference for/against Brexit. We now use the parameter matrix to compute the weights and contrasts as before:
parametermatrix = lmodelfulldata["ParameterTableEntries"][[1 ;; 12]];
weights = N[#/Total[#]] &@RotateRight[SortBy[Tally[inputLinearModelcomplete[[All, 2]]], First]][[All, 2]];
contrast = IdentityMatrix[Length[weights]];
contrast[[1]] = ConstantArray[1, Length[weights]]; contrast = Transpose[contrast];
With the contrasts we can calculate the response functions:
responsefunctions = contrast[[1 ;; 12, 1 ;; 12]].parametermatrix[[1 ;; 12, 1]]
What is the best estimate for the "will of the voters"?
----------
We can now also use the linear model to estimate the most likely scenario (based on the model) for the "underlying will of the voters", given the data from above and the result of the election. We first prepare the data we need, which this time include the outcome of the election.
inputLinearModelcompletewRcode =
Flatten /@
Transpose[{fulldatatable[[All, 4]],
fulldatatable[[All, 3]], (# - Mean[#]) &@
fulldatatable[[All,
9]], (# - Mean[#]) &@ (fulldatatable[[All, 6]]/
fulldatatable[[All, 30]]), (# -
Mean[DeleteMissing[#]]) &@ (subMissing /@ (fulldatatable[[
All, 198]]/fulldatatable[[All, 6]])), (# -
Mean[DeleteMissing[#]]) &@ (fulldatatable[[All, 243]] /.
"#" -> Missing[]), (# -
Mean[DeleteMissing[#]]) &@ (subMissing /@ (fulldatatable[[
All, 227]]/fulldatatable[[All, 6]])),
N[fulldatatable[[All, 13]]/fulldatatable[[All, 11]]]}];
This is how the data looks:
inputLinearModelcompletewRcode[[1 ;; 20]] // TableForm
![enter image description here][21]
Again we clean the data a bit:
inputLinearModelcompletewRcode = Select[subMissing /@ inputLinearModelcompletewRcode, ! MemberQ[#, Missing[]] &];
Now we can use the linear model to estimate the actual percentage of yes/no votes in the different voting areas:
codepercent = Transpose[{inputLinearModelcompletewRcode[[All, 1]], MapThread[lmodelfulldata, Transpose[inputLinearModelcompletewRcode[[All, 2 ;; -2]]]]}]
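The `MapThread` pattern above applies the fitted model row-wise over the predictor columns. A toy version with a made-up two-predictor model (not the post's `lmodelfulldata`) illustrates the mechanics:

```mathematica
(* fit z = 2 x + y exactly from three made-up points, then predict row-wise *)
lmToy = LinearModelFit[{{1, 1, 3.}, {2, 1, 5.}, {3, 2, 8.}}, {x, y}, {x, y}];
MapThread[lmToy, {{1, 2, 3}, {1, 1, 2}}]
(* recovers the fitted values {3., 5., 8.} up to numerical precision *)
```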
First we represent the actual election outcome. Green corresponds to "Remain" and red to "Leave" votes.
Graphics[Flatten[
Table[{ColorData[
"RedGreenSplit"][((Select[
Select[votingData[[2 ;;]], #[[1]] < 381 &], #[[4]] ==
translateshpvoteclean[[k, 2]] &][[1]])[[-3]]/100.)],
localauthorities[[1, 2, 1, 2, 1, 2, 1, 1, 1]][[k]]}, {k, 1, 380}],
1], Background -> Black, AspectRatio -> GoldenRatio]
![enter image description here][22]
We can now calculate the best estimate of the Leave/Remain split based on our model:
polygonscoloured =
Table[{ColorData["RedGreenSplit"][1. - ((Select[codepercent, #[[1]] == translateshpvoteclean[[k, 2]] &][[1]])[[-1]]) /. Missing[] -> 0.],
localauthorities[[1, 2, 1, 2, 1, 2, 1, 1, 1]][[k]]}, {k, 1, Length[translateshpvote]}];
Graphics[polygonscoloured, AspectRatio -> GoldenRatio, Background -> Black]
![enter image description here][23]
The split between England and Scotland has become even more pronounced. Note that we have not dealt very cleanly with the few missing-data cases. Next, we compare the outcome of the election with the best model estimate of the actual Remain/Leave distribution:
GraphicsRow[{
Graphics[
Flatten[Table[{ColorData[
"RedGreenSplit"][((Select[
Select[votingData[[2 ;;]], #[[1]] < 381 &], #[[4]] ==
translateshpvoteclean[[k, 2]] &][[1]])[[-3]]/100.)],
localauthorities[[1, 2, 1, 2, 1, 2, 1, 1, 1]][[k]]}, {k, 1,
380}], 1], Background -> Black, AspectRatio -> GoldenRatio],
Graphics[Table[{ColorData["RedGreenSplit"][
1 - ((Select[
codepercent, #[[1]] ==
translateshpvoteclean[[k, 2]] &][[1]])[[-1]]) /.
Missing[] -> 0.],
localauthorities[[1, 2, 1, 2, 1, 2, 1, 1, 1]][[k]]}, {k, 1,
Length[translateshpvote]}], Background -> Black,
AspectRatio -> GoldenRatio],
Graphics[Flatten[
Table[{ColorData[
"RedGreenSplit"][(0.5 +
3.*((1 - ((Select[
codepercent, #[[1]] ==
translateshpvoteclean[[k, 2]] &][[1]])[[-1]] /.
Missing[] ->
0.)) - ((Select[
Select[votingData[[2 ;;]], #[[1]] <
381 &], #[[4]] ==
translateshpvoteclean[[k, 2]] &][[1]])[[-3]]/
100.)))],
localauthorities[[1, 2, 1, 2, 1, 2, 1, 1, 1]][[k]]}, {k, 1,
380}], 1], Background -> Black, AspectRatio -> GoldenRatio]},
ImageSize -> Full]
![enter image description here][24]
On the left we have the outcome of the election. In the middle we show the estimate of the Remain/Leave ratio based on our linear model. The right figure shows the size of the "correction". The model shows a strong separation between Scotland and most regions in England; the exception in England is the region around London.
Area weighted model
----------
Next we will pool the different regions to better understand regional differences.
codepercent = Transpose[{inputLinearModelcompletewRcode[[All, 1]], MapThread[lmodelfulldata, Transpose[inputLinearModelcompletewRcode[[All, 2 ;; -2]]]]}]
We essentially set all the inputs to zero apart from the region data:
codepercentregion =
Transpose[{inputLinearModelcompletewRcode[[All, 1]],
MapThread[lmodelfulldata,
Transpose[(Flatten /@
Transpose[{inputLinearModelcompletewRcode[[All, 1 ;; 2]],
ConstantArray[0, {382, 6}]}])[[All, 2 ;; -2]]]]}];
GraphicsRow[{Graphics[
Flatten[Table[{ColorData[
"RedGreenSplit"][((Select[
Select[votingData[[2 ;;]], #[[1]] < 381 &], #[[4]] ==
translateshpvoteclean[[k, 2]] &][[1]])[[-3]]/100.)],
localauthorities[[1, 2, 1, 2, 1, 2, 1, 1, 1]][[k]]}, {k, 1,
380}], 1], Background -> Black, AspectRatio -> GoldenRatio],
Graphics[Table[{ColorData["RedGreenSplit"][
1 - (((If[# == {}, {Select[
votingData, #[[4]] ==
translateshpvoteclean[[k, 2]] &][[1, {4, -2}]]/{1,
100.}}, #] &@
Select[codepercentregion, #[[1]] ==
translateshpvoteclean[[k, 2]] &])[[1]])[[-1]])],
localauthorities[[1, 2, 1, 2, 1, 2, 1, 1, 1]][[k]]}, {k, 1,
Length[translateshpvote]}], Background -> Black,
AspectRatio -> GoldenRatio],
Graphics[Flatten[
Table[{ColorData[
"RedGreenSplit"][(0.5 +
3.*((1 - (((If[# == {}, {Select[
votingData, #[[4]] ==
translateshpvoteclean[[k, 2]] &][[1, {4, -2}]]/{1,
100.}}, #] &@
Select[
codepercentregion, #[[1]] ==
translateshpvoteclean[[k, 2]] &])[[
1]])[[-1]])) - (((If[# == {}, {Select[
votingData, #[[4]] ==
translateshpvoteclean[[k, 2]] &][[1, {4, -2}]]/{1,
100.}}, #] &@
Select[
Select[
votingData[[2 ;;]], #[[1]] < 381 &], #[[4]] ==
translateshpvoteclean[[k, 2]] &])[[1]])[[-3]]/
100.)))],
localauthorities[[1, 2, 1, 2, 1, 2, 1, 1, 1]][[k]]}, {k, 1,
380}], 1], Background -> Black, AspectRatio -> GoldenRatio]},
ImageSize -> Full]
![enter image description here][25]
This provides a smoothed picture, as regions are pooled and weighted according to their representation in the UK. As before, the leftmost image represents the outcome of the election, the middle one is the best region-weighted estimate, and the right figure shows the differences. The strong divide between the different regions is enhanced in this representation.
Population weighted representation
----------
The representations above have the critical drawback that equal area sizes do not necessarily correspond to equally sized electorates. We can represent everything such that area corresponds to population size. This will decrease the size of the regions with low population densities such as Scotland. The following file (attached) contains a hexagonal lattice that describes the areas in the UK.
hexagonshape = Import["GB_Hex_Cartogram_LAs.shp"]
![enter image description here][26]
The following file contains a lookup table of regions vs area codes.
hexagonlegend = Import["GB_Hex_Cartogram_LAs.dbf"];
hexagonlegend[[1 ;; 10, {3, 2}]] // TableForm
![enter image description here][27]
We now need to translate between the hexagonal structure and the voting areas from our model.
tranlatehexagonvoteclean =
Table[Select[hexagonlegend, #[[2]] == translateshpvoteclean[[i, 2]] &], {i, 1,Length[translateshpvoteclean]}][[All, 1, 1]];
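As an aside, the repeated `Select` scans used for such lookups could be replaced by a one-off `Association`; here is a hypothetical sketch with two toy legend rows (the real data live in `hexagonlegend`):

```mathematica
(* toy legend rows: {hexagon index, area code, name}; made up for illustration *)
legendToy = {{1, "E06000001", "Hartlepool"}, {2, "E06000002", "Middlesbrough"}};
lookupToy = Association[#[[2]] -> #[[1]] & /@ legendToy];
lookupToy["E06000002"]  (* -> 2 *)
```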
We can then represent the estimate from our model in a population/area-adjusted form.
Graphics[Table[{ColorData["RedGreenSplit"][1 - (( If[# == {}, {Select[votingData, #[[4]] ==
translateshpvoteclean[[k, 2]] &][[1, {4, -2}]]/{1, 100.}}, #] &@
Select[codepercent, #[[1]] == translateshpvoteclean[[k, 2]] &])[[1]][[-1]]) /. Missing[] -> 0.],
hexagonshape[[1, 2, 1, 2]][[tranlatehexagonvoteclean[[k]]]]}, {k, 1, Length[tranlatehexagonvoteclean]}], Background -> Black]
![enter image description here][28]
Preliminary summary
----------
These calculations show that we might have to perform some sort of estimation procedure to better understand the "will of the people". The method we have used is quite standard in, for example, medical studies. It is, however, not a suitable way of making decisions in societies. As said many times before, the decision is based on an agreed-upon procedure. It is not based on a statistical analysis of the data, which, as in our case, is model dependent.
Bootstrapping
----------
Note that there are different approaches to mathematically guesstimate what the people who did not vote think about Brexit. As an alternative to the method described above, we could pool the data from all local authorities, scale up to the entire electorate (strictly speaking unnecessary), and bootstrap on that.
The argument of many politicians would be that the non-voters essentially behave like the voters. In that case we basically scale up the results from the voters to the entire electorate. The bootstrapping in this case would look like this:
Monitor[fullpoolresults =
Table[Count[RandomChoice[Total /@ Transpose[{N@#[[13]]/#[[11]]*#[[6]], #[[6]] - N@#[[13]]/#[[11]]*#[[6]]} & /@ (Reverse@
SortBy[votingData[[2 ;;]], #[[9]] &])] -> {"Leave", "Remain"}, 46500001], "Remain"]/46500001., {i, 1, 100}];, i]
We can fit a smooth kernel distribution to the outcomes of the bootstrapping procedure:
distfullpool = SmoothKernelDistribution[fullpoolresults]
As expected, the probability that more than 50% of the electorate would have wanted to Remain is (numerically) zero.
Probability[x > 0.50, x \[Distributed] distfullpool]
(*0*)
So, under this model assumption, it is (numerically) certain that the people really wanted Brexit.
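The kernel-density-plus-Probability pipeline used here can be sketched in a self-contained way on synthetic bootstrap-like samples (the distribution parameters below are made up for illustration):

```mathematica
SeedRandom[1];  (* reproducible synthetic samples *)
samplesToy = RandomVariate[NormalDistribution[0.494, 0.005], 1000];
distToy = SmoothKernelDistribution[samplesToy];
Probability[x > 0.50, x \[Distributed] distToy]  (* tail mass above the 50% mark *)
```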
The assumption that the electorate is that homogeneous is quite questionable. We might therefore choose to bootstrap with respect to the local authorities. In other words, we assume that it is on the authority level that voters are similar.
Monitor[localauthresults =
Table[N@(Total[#[[All, 13]]/#[[All, 11]]*#[[All, 6]]]/
Total[#[[All, 6]]]) &@
RandomChoice[(Reverse@SortBy[votingData[[2 ;;]], #[[9]] &])[[All,
6]] -> (Reverse@SortBy[votingData[[2 ;;]], #[[9]] &]),
Length[votingData[[2 ;;]]]], {i, 1, 1000}], i];
This bootstraps for the percentage of Remain votes. We can then again fit a distribution:
distlocalauthorities = SmoothKernelDistribution[localauthresults]
and plot the result:
Plot[PDF[distlocalauthorities, x], {x, 0.45, 0.55}, PlotRange -> All,
FrameLabel -> {"percent leave", "probability density"},
LabelStyle -> Directive[Bold, 18], ImageSize -> Large,
PlotTheme -> "Marketing"]
![enter image description here][29]
The average of the bootstrap results for Remain votes is below 50%:
Mean[distlocalauthorities]
0.494498. This is, however, quite close to 50%. There is a substantial area above the 50% level:
Probability[x > 0.50, x \[Distributed] distlocalauthorities]
0.17122. This suggests that there is a 17% probability that the voters actually wanted to tell us that they want to remain. That is a substantial probability. Under these assumptions it is by no means clear that the electorate wanted to leave. Once again, this is not how we take democratic decisions: the decision-making process is determined by the outcome of the election, not by an estimate of the voters' will. The point is that it is questionable whether politicians can claim that the voters' will is clear.
The last approach is to bootstrap over the local authorities according to the number of ballots cast:
localvotersresults =
Table[N@(Total[#[[All, 13]]]/Total[#[[All, 11]]]) &@RandomChoice[(Reverse@SortBy[votingData[[2 ;;]], #[[9]] &])[[All, 11]] -> (Reverse@SortBy[votingData[[2 ;;]], #[[9]] &]), Length[votingData[[2 ;;]]]], {10000}];
Once more we estimate the distribution:
localvotersdist = SmoothKernelDistribution[localvotersresults]
and plot the resulting probability density function:
Plot[PDF[localvotersdist, x], {x, 0.45, 0.55}, PlotRange -> All,
FrameLabel -> {"percent leave", "probability density"},
LabelStyle -> Directive[Bold, 18], ImageSize -> Large,
PlotTheme -> "Marketing"]
![enter image description here][30]
Probability[x < 0.50, x \[Distributed] localvotersdist]
0.487381. So under this assumption we would interpret the voters' will as essentially undecided: the split is close to 50/50, i.e. it is practically equally likely that the voters wanted to remain as that they wanted to leave.
Conclusion
----------
"The will of the voters" is difficult to determine in close elections. It is by no means clear that the winning alternative represents the "will of the people" when an election is very close. Politicians often like to stress that they represent the will of the people after winning an election; mathematically speaking, this is not necessarily true. Note that this is irrelevant for the decision-making process. As a society we have decided to abide by fixed electoral rules, and only people who actually vote make their voices heard. We can mathematically try to figure out what the entire electorate wanted, or, to be precise, make probability statements about what they wanted. These statements are clearly model dependent.
[1]: http://community.wolfram.com/groups/-/m/t/1221950
[2]: http://community.wolfram.com/web/bschelter
[3]: https://www.electoralcommission.org.uk/find-information-by-subject/elections-and-referendums/past-elections-and-referendums/eu-referendum/electorate-and-count-information
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1600.47.30.png&userId=48754
[5]: http://esriuktechnicalsupportopendata-techsupportuk.opendata.arcgis.com/datasets/686603e943f948acaa13fb5d2b0f1275_0
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1600.55.48.png&userId=48754
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1600.59.21.png&userId=48754
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.01.56.png&userId=48754
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.09.35.png&userId=48754
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.09.59.png&userId=48754
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.12.00.png&userId=48754
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.12.45.png&userId=48754
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.13.40.png&userId=48754
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.17.37.png&userId=48754
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.20.53.png&userId=48754
[16]: https://www.google.co.uk/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0ahUKEwihifGz0cHXAhXHKcAKHdtTDU0QFggmMAA&url=http://researchbriefings.files.parliament.uk/documents/CBP-7639/EU-referendum-results-and-characteristics-data.xlsx&usg=AOvVaw3RFJU1dCOJM-tukE0VZGpx
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.28.13.png&userId=48754
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.29.24.png&userId=48754
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.32.00.png&userId=48754
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.33.34.png&userId=48754
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.37.36.png&userId=48754
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.39.19.png&userId=48754
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.40.41.png&userId=48754
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.42.25.png&userId=48754
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.44.22.png&userId=48754
[26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.47.25.png&userId=48754
[27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.49.14.png&userId=48754
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.51.29.png&userId=48754
[29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.55.45.png&userId=48754
[30]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-11-1601.58.42.png&userId=48754

Marco Thiel, 2017-11-16T02:01:31Z