Community RSS Feed
https://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag sorted by active

How do you plot a blank Cartesian plane?
https://community.wolfram.com/groups/-/m/t/1711580
Hi everyone,
A very simple question, please. I need to plot a blank Cartesian coordinate plane for students to use to plot graphs.
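A minimal sketch of one way to do this (the range and tick spacing here are arbitrary choices, not from the original question): an empty `Graphics` with axes and grid lines.

```mathematica
(* a blank Cartesian plane: empty graphics with axes and grid lines;
   the -10..10 range and step-2 ticks are arbitrary choices *)
Graphics[{}, Axes -> True, AxesOrigin -> {0, 0},
 GridLines -> Automatic, PlotRange -> {{-10, 10}, {-10, 10}},
 Ticks -> {Range[-10, 10, 2], Range[-10, 10, 2]}]
```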
Thank you.
Stanley

Stanley Max, 2019-06-27T02:12:02Z

Removed support for Singapore Exchange in FinancialData?
https://community.wolfram.com/groups/-/m/t/1711740
I am concerned about the removal of curated data that were previously available.
TableForm[FinancialData["Exchanges"]]
does not list the Singapore Exchange. If I am not mistaken, data from the Singapore Exchange used to be available. What happened to the data/data source? Is the removal documented somewhere?
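As a quick check, one can search the current exchange list for Singapore-related entries. This sketch assumes `FinancialData["Exchanges"]` returns a list of exchange name strings; an empty result would confirm the exchange is no longer listed.

```mathematica
(* look for any Singapore-related entry in the current exchange list *)
Select[FinancialData["Exchanges"],
 StringContainsQ[ToString[#], "Singapore" | "SGX" | "SES"] &]
```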
Thank you in advance.

CW Foo, 2019-06-27T03:39:51Z

Need help with Market Sentiment Analysis
https://community.wolfram.com/groups/-/m/t/1710931
Dear Wolfram Users,
I am new to Mathematica and wanted to replicate this study in order to learn how to do sentiment analysis on website data.
https://community.wolfram.com/groups/-/m/t/970999
However, it seems to me that the code is not working. Maybe it was written for an older version of Mathematica?
For example:
archive =
Import[StringJoin["http://www.wsj.com/public/page/archive-",
DateString[
datelist[[1]], {"Year", "-", "MonthShort", "-", "DayShort"}],
".html"]];
archive =
StringDrop[archive,
StringPosition[archive,
DateString[
datelist[[1]], {"MonthName", " ", "DayShort", ", ",
"Year"}]][[1, 2]]];
archive =
StringTake[
archive, -1 + StringPosition[archive, "ARCHIVE FILTER"][[1, 1]]];
archivewords = ToLowerCase[DeleteStopwords[TextWords[archive]]];
TakeLargest[Counts[archivewords], 20]
WordCloud[archivewords]
This command should download news from the WSJ archive and format it for machine-learning purposes. But somehow it is not working. Can somebody help me replicate this code in a new way?
I also think that one of the problems is that this code is no longer able to switch archive dates.
Or maybe there is a newer version of the article?
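For reference, a hedged sketch of a more defensive variant of the import step; the WSJ archive URL pattern and the sample date are assumptions and may have changed since the original post was written.

```mathematica
(* import the archive page as plain text for one assumed date and count words;
   guards against a failed download instead of string-slicing raw HTML *)
date = DateObject[{2016, 1, 4}];  (* hypothetical date *)
url = StringJoin["https://www.wsj.com/public/page/archive-",
   DateString[date, {"Year", "-", "MonthShort", "-", "DayShort"}], ".html"];
text = Quiet@Import[url, "Plaintext"];
If[StringQ[text],
 TakeLargest[Counts[ToLowerCase[DeleteStopwords[TextWords[text]]]], 20],
 "download failed"]
```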
Thank you very much.

Roman Ubaydullaev, 2019-06-24T21:40:59Z

Adafruit and the Wolfram Language Microcontroller Kit
https://community.wolfram.com/groups/-/m/t/1711627
![enter image description here][1]
The Wolfram Language [Microcontroller Kit][2], which is part of the recently released version 12 of Mathematica, automates the generation and deployment of code to microcontrollers from the Wolfram Language. It supports several [Adafruit][3] development boards. So now you can take controllers, filters, or other system models developed using the Wolfram Language and see them in action in the real world.
To showcase the Microcontroller Kit with an Adafruit development board, I will outline a basic analysis I did with a force-sensitive resistor (FSR). Since the functionality is designed with a unified interface, the workflow for another board, input, or output channel follows essentially the same paradigm.
![enter image description here][4]
For the complete post please see this [cloud notebook][5].
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=FSRtoLCD-3160x2261.png&userId=29473
[2]: http://www.wolfram.com/language/12/microcontroller-kit/
[3]: https://www.adafruit.com/
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ForceSensorLCD_bb.png&userId=29473
[5]: https://www.wolframcloud.com/obj/microcontroller.user/Published/FSR_Adafruit_LCD.nb

Suba Thomas, 2019-06-26T21:45:09Z

Do service credits come with personal license service?
https://community.wolfram.com/groups/-/m/t/1711559
I have a home use license with personal license service and just noticed that I have some service credits. But I do not recall ever purchasing any credits. I also think I read that these credits expire a year after purchase, so it is unlikely that I purchased them long ago and just forgot about it. Is it the case that purchase of the personal license service includes a number of service credits and, if so, how many credits are included each year?

Mike Luntz, 2019-06-26T19:42:54Z

Roadmap to Mathematica 12 on Raspberry Pi?
https://community.wolfram.com/groups/-/m/t/1664758
Great to see Mathematica 12 released and I can't wait to try the new functionality!
What is the timeline and roadmap to get Mathematica 12 to the Pi? – Thanks!

Michael Byczkowski, 2019-04-20T08:40:16Z

How to use the output of the Normal function as input to a function def?
https://community.wolfram.com/groups/-/m/t/1711314
The Normal function is supposed to truncate the output of the Series function to a normal expression. But when I try to use that expression to define another function, it fails: it seems to keep some series attributes. So I have to copy the expression and paste it into the right-hand side of the new function definition. Here is an example from a Mathematica notebook:
In[242]:= (*Let's try the Series function with variable exponent k.
Want to truncate with the Normal function, then use the resulting \
expression as input to define a new function. But the output of \
Normal does not behave like a normal expression. Have to copy and \
paste instead.*)
In[209]:= xpand[k_, r_, n_] := Series[(1 + r)^(1/k), {r, 0, n}]
In[223]:= xpand[k, r, 5]
Out[223]= SeriesData[r, 0, {
1, k^(-1),
Rational[1, 2] ((-1 + k^(-1))/k), ((Rational[1, 6] (-1 + k))
k^(-3)) (-1 + 2 k), (((Rational[-1, 24] (-1 + k))
k^(-4)) (-1 + 2 k)) (-1 + 3 k), ((((Rational[1, 120] (-1 + k))
k^(-5)) (-1 + 2 k)) (-1 + 3 k)) (-1 + 4 k)}, 0, 6, 1]
In[227]:= xpandN5[k_, r_] := Normal[xpand[k, r, 5]]
In[230]:= xpandN5[2, r]
Out[230]= 1 + r/2 - r^2/8 + r^3/16 - (5 r^4)/128 + (7 r^5)/256
In[234]:= pand[r_] := xpandN5[2, r]
In[235]:= pand[r]
Out[235]= 1 + r/2 - r^2/8 + r^3/16 - (5 r^4)/128 + (7 r^5)/256
In[236]:= pand[.2]
During evaluation of In[236]:= General::ivar: 0.2` is not a valid variable.
During evaluation of In[236]:= General::ivar: 0.2` is not a valid variable.
Out[236]= Series[1.09545, {0.2, 0, 5}]
(*Failed. Can't get rid of some of the series attributes, so that the \
alleged normal expression could be used as the righthand side in \
function definitions. So try it with copy and paste instead.*)
In[238]:=
pand1[r_] := 1 + r/2 - r^2/8 + r^3/16 - (5 r^4)/128 + (7 r^5)/256
In[239]:= pand1[r]
Out[239]= 1 + r/2 - r^2/8 + r^3/16 - (5 r^4)/128 + (7 r^5)/256
In[240]:= pand1[.2]
Out[240]= 1.09545
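A common fix, sketched here: use `Set` (`=`) instead of `SetDelayed` (`:=`), so that `Normal` evaluates once, with symbolic arguments, when the definition is made (`xpandSet` is a new name introduced just for this illustration).

```mathematica
(* with Set, the right-hand side is evaluated immediately with symbolic k and r,
   so the stored definition is an ordinary polynomial, not a SeriesData object *)
xpandSet[k_, r_] = Normal[Series[(1 + r)^(1/k), {r, 0, 5}]];
xpandSet[2, .2]  (* 1.09545 *)
```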
In[241]:= (*So after I copy and paste the displayed output of Normal, \
I can use it to define a new function. But this is cumbersome when \
the expression is very long.*)

Karl Petersen, 2019-06-25T18:14:15Z

Calculate this expression with Bessel functions?
https://community.wolfram.com/groups/-/m/t/1710955
Consider the following code:
eq20 = (BesselI[0, k/\[Lambda] r] BesselK[1, k/\[Lambda] \[Rho]] +
     BesselK[0, k/\[Lambda] r] BesselI[1, k/\[Lambda] \[Rho]])/
    (BesselI[1, k/\[Lambda]] BesselK[1, k/\[Lambda] \[Rho]] -
     BesselI[1, k/\[Lambda] \[Rho]] BesselK[1, k/\[Lambda]]) // Simplify // Normal
where \[Lambda]=800, \[Rho]=10, and k is an arbitrary integer.
The object of calculation is to estimate the order of k or the asympt

Zhonghui Ou, 2019-06-25T07:56:06Z

Equilateral into 13 Isosceles triangles
https://community.wolfram.com/groups/-/m/t/1711154
7 years ago William Somsky showed me how to divide an equilateral triangle into 13 strictly acute isosceles triangles.
When I noticed the image, I realized I could now solve it exactly. It seems to require roots of degree-10 polynomials.
![Somsky 13][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=somsky13.jpg&userId=21530

Ed Pegg, 2019-06-25T19:24:26Z

[GIF] Off the End (Stereographic projections of rotating regular polygons)
https://community.wolfram.com/groups/-/m/t/1710318
![Stereographic projections of rotating regular polygons][1]
**Off the End**
Each horizontal row shows the stereographic projection to the line of (the vertices of) a rotating regular polygon. The middle row shows the image of the vertices of an equilateral triangle, the rows above and below show a square, the rows above and below that a regular pentagon, etc. The speeds are all chosen so that one vertex passes through the south pole of the unit circle every $\frac{4}{5}$ of a second, which obviously means the triangle is spinning much faster than the regular 60-gon that shows up in the first and last rows. Here's an unoptimized animation showing all the rotating polygons to get a sense of their relative speeds:
![Rotating regular polygons][2]
Of course, this uses stereographic projection:
Stereo[p_] := p[[;; -2]]/(1 + p[[-1]]);
The actual image is set up as a `GraphicsGrid`; note that `offset` is just a small irrational number to ensure that none of the polygon vertices pass through the north pole (where stereographic projection is undefined) at any of the time steps.
With[{bg = GrayLevel[.2], max = 60, offset = Sqrt[2.]/1000000},
Manipulate[
GraphicsGrid[
Table[
{Graphics[
{White, PointSize[.005],
Table[
Point[Append[Stereo[ReIm[Exp[I (-2 π t/n + π/2 + 2 π i/n)]]], 0]],
{i, 1, n}]},
Background -> bg, PlotRange -> {{-2, 2}, {-.0125, .0125}}]},
{n, Join[Reverse[Range[4, max]], Range[3, max]]}],
ImageSize -> 540, Background -> bg, Spacings -> 1,
PlotRangePadding -> None],
{t, offset, 1 + offset - #, #}] &[1/40]
]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=proj11r.gif&userId=610054
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=rot1.gif&userId=610054

Clayton Shonkwiler, 2019-06-24T16:14:52Z

Use machine learning, neural networks to predict tennis matches?
https://community.wolfram.com/groups/-/m/t/1711030
I use Wolfram Mathematica (WM) for my calculations, but I'm not very familiar with neural networks, machine learning, etc. On the other hand, I know that WM includes such tools. I would like to learn this stuff just enough to operate on this level. I read some tutorials and I would like to try a practical problem.
I would like to try to apply these tools to predict tennis matches. I read something about functions (I write function names with the first letter capitalized): Predict, Classify, TimeSeriesForecast, etc. But practically I need some combination of these functions. I have enough data: the results of many players. I suggested input of learning data like (vector_player1),(vector_player2) -> (result) or something like that, and then for prediction, input (vector_player1),(vector_player2) and output (result). I would like to predict "the future", learning from previous results, not just classify data.
a) This input is definitely not good, because if I always give the winner as "player1", I assume that the network will learn to evaluate the first input as the winner. Sorting the players randomly doesn't seem like a good idea either. Is there a good approach or function for such a case?
b) Is such a problem reasonably solvable by WM?
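One standard remedy, sketched here with hypothetical toy feature vectors (everything in this block is invented for illustration): include each match twice, once in each player order with the label flipped, so the classifier cannot learn that the first slot always wins.

```mathematica
(* toy data: each match is {winnerFeatures, loserFeatures} *)
games = {{{0.7, 0.2}, {0.5, 0.4}}, {{0.3, 0.9}, {0.6, 0.1}},
   {{0.8, 0.8}, {0.2, 0.3}}};
(* symmetrize: both orderings of every match, with the label flipped *)
trainingData = Join[
   (Join[#[[1]], #[[2]]] -> "FirstWins") & /@ games,
   (Join[#[[2]], #[[1]]] -> "SecondWins") & /@ games];
c = Classify[trainingData];
c[Join[{0.7, 0.2}, {0.5, 0.4}]]
```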
Thank you for all comments or suggestions.

Jarek Vrba, 2019-06-25T06:31:21Z

Find the closer point to the observed point in the Taylor Plot?
https://community.wolfram.com/groups/-/m/t/1710945
Hi,
Is there any numerical way to find the closer model (Model1 or Model2) to the blue point (Observed) in the Taylor Plot?
Observed = {8.241666666666665`, 9.533333333333333`, 10.629166666666666`,
11.8125`, 13.429166666666669`, 13.583333333333332`, 12.454166666666666`,
11.166666666666666`, 11.024999999999999`, 11.074999999999998`};
Model1 = {7.388576416143433`, 8.611162282190872`, 9.708615798586239`,
10.676244526223503`, 11.756032351657968`, 13.282924395953437`,
13.431274460380749`, 12.355429878281225`, 11.162417014336363`,
11.03356816155934`};
Model2 = {7.388576416143433`, 8.611162282190872`, 9.708615798586239`,
10.676244526223503`, 11.756032351657968`, 13.282924395953437`,
13.431274460380749`, 12.355429878281225`, 11.162417014336363`,
11.03356816155934`};
Mean:
{mObserved, mModel1, mModel2} = Mean /@ {Observed, Model1, Model2}
{11.295, 10.9406, 10.9406}
Standard deviations:
{sdObserved, sdModel1, sdModel2} =
StandardDeviation /@ {Observed, Model1, Model2}
{1.64688, 1.94494, 1.94494}
Correlation coefficients:
{ccObserved, ccModel1, ccModel2} =
Correlation[Observed, #] & /@ {Observed, Model1, Model2}
{1., 0.879641, 0.879641}
rmse[obs_, model_] :=
Sqrt[Length[obs]/(Length[obs] - 1)]
RootMeanSquare[(obs - Mean[obs]) - (model - Mean[model])]
{rmseModel1, rmseModel2} = rmse[Observed, #] & /@ {Model1, Model2}
{0.927299, 0.927299}
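Given the quantities computed above, one numerical answer, offered as a sketch: in a Taylor diagram the distance from a model's point to the observed point is the centered RMS difference, so the closer model is simply the one with the smaller `rmse` value.

```mathematica
(* pick the model with the smaller centered RMS difference; here the two
   models happen to be identical, so both distances coincide *)
First@MinimalBy[{{"Model1", rmseModel1}, {"Model2", rmseModel2}}, Last]
```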
Taylor Diagram
taylorDiagram[obs_, models_, labels_, maxSD_, stepRMSE_] :=
Module[{ticksize = 0.02, frameticksoffset = 0.014,
label = "Standard deviation", corrlabel = "Correlation",
radialcolor = Darker@Cyan, observedcolor = Blue, rmsarccolor = Brown,
RadialPosition, RadialLine, arcs, radial, tickmarks, frameticks,
frameticklabels, axesticks, framelabels, origin = StandardDeviation[obs],
datapoints, meanarc, CreateRMSarcs, rmsarcs, sdtext, rmsetext, cctext,
obstext, sdo = StandardDeviation[obs],
sdm = StandardDeviation[#] & /@ models, rmses, points},
rmses = Sqrt[
Length[obs]/(Length[obs] - 1)] RootMeanSquare[(obs - Mean[obs]) - (# -
Mean[#])] & /@ models;
points = MapThread[
Select[{x, y} /.
NSolve[{x^2 + y^2 == #1^2, (x - sdo)^2 + y^2 == #2^2}, {x,
y}], #[[1]] > 0 && #[[2]] > 0 &] &, {sdm, rmses}];
arcs = Circle[{0, 0}, #, {0, Pi/2}] & /@ Range[maxSD, 0, -stepRMSE];
arcs[[1]] = {Thick, arcs[[1]]};
RadialPosition[corr_?NumericQ] := AngleVector[{maxSD, ArcCos[corr]}];
RadialLine[corr_?NumericQ] := {{0, 0}, RadialPosition[corr]};
radial = {{Black, Line[RadialLine /@ {0, 1}]}, {radialcolor, Dashed,
Line[RadialLine /@ Join[Range[0.1, 0.9, 0.1], {0.95, 0.99}]]}};
tickmarks =
RadialPosition /@ Join[Range[0.05, 0.85, 0.1], Range[0.91, 0.99, 0.01]];
tickmarks = {#, (1 - ticksize) #} & /@ tickmarks;
tickmarks = {radialcolor, Line[tickmarks]};
frameticks = Range[0, 1, 0.1]~Join~{0.95, 0.99};
frameticklabels =
If[Round[#, 0.1] == #, NumberForm[#, {\[Infinity], 1}], #] & /@ frameticks;
frameticks =
MapThread[
Text[#1, (1 + frameticksoffset) #2, {-1, 0},
AngleVector[#3]] &, {frameticklabels, RadialPosition /@ frameticks,
ArcCos[frameticks]}];
axesticks = Range[maxSD, stepRMSE, -stepRMSE];
axesticks = Join[
Text[If[Round[#] == #, Round[#], #], {#, -maxSD frameticksoffset}, {0,
1}] & /@ axesticks,
Text[If[Round[#] == #, Round[#], #], {-maxSD frameticksoffset, #}, {1,
0}] & /@ axesticks];
framelabels = {
Text[Style[label, 17], {maxSD/2, -4 maxSD frameticksoffset}, {0, 1}],
Text[Style[label, 17], {-4 maxSD frameticksoffset, maxSD/2}, {0, -1}, {0,
1}],
Text[Style[corrlabel, 17],
AngleVector[{(1 + 6 frameticksoffset) maxSD, 45 \[Degree]}], {1, 0},
AngleVector[-45 \[Degree]]]};
datapoints = {{observedcolor, PointSize[0.038], Point[{origin, 0}]}};
meanarc = {Black, Thick, Dashed, Circle[{0, 0}, origin, {0, Pi/2}]};
CreateRMSarcs[origin_, arcsize_, maxsize_] := Module[{start, stop},
start = If[origin + arcsize > maxsize,
Pi - ArcCos[(maxsize^2 - arcsize^2 - origin^2)/(-2 arcsize origin)], 0];
stop = If[origin - arcsize >= 0, Pi, Pi - ArcCos[origin/arcsize]];
{Dashed, AbsoluteThickness[1.5], rmsarccolor,
Circle[{origin, 0}, arcsize, {start, stop}],
Text[arcsize,
AngleVector[{origin, 0}, {arcsize - 0.025 maxsize, (stop + start)/2}]]}];
rmsarcs =
CreateRMSarcs[origin, #, maxSD] & /@ Range[stepRMSE, maxSD, stepRMSE];
cctext =
Text[Style["Cyan lines: contours of constant correlation coefficient", 12],
Scaled[{1, 0.97}], {1, 1}];
sdtext =
Text[Style["Black circular arcs: contours of constant standard deviation",
12], Scaled[{1, 0.945}], {1, 1}];
rmsetext =
Text[Style["Brown circular arcs: contours of constant RMSE", 12],
Scaled[{1, 0.92}], {1, 1}];
obstext =
Text[Style["Blue point: observed standard deviation", 12],
Scaled[{1, 0.895}], {1, 1}];
Show[Graphics[{radial, tickmarks, arcs, frameticks, axesticks, framelabels,
meanarc, datapoints, rmsarcs, Darker@Cyan, cctext, Black, sdtext, Brown,
rmsetext, Blue, obstext}, ImageSize -> 600],
ListPlot[points, PlotMarkers -> {Automatic, 40}, PlotLegends -> labels],
BaseStyle -> 16]]
Taylor Plot
pT = taylorDiagram[Observed, {Model1, Model2}, {"Model1", "Model2"}, 4, .75]
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=111.png&userId=943918

M.A. Ghorbani, 2019-06-25T05:18:33Z

[✓] Avoid error with NSum : it returns NSum::nsnum: Summand f[n]?
https://community.wolfram.com/groups/-/m/t/1709961
Consider the following example (I had a lot of trouble finding a minimal working example; I think it is compact enough now).
Omega0 = 1.
nAvg = 10.
t=2.
Omegan[n_] := Omega0*Sqrt[n + 1]
f[n_] := Piecewise[{{Cos[Omegan[n]*t/2]^2*
Abs[Exp[-nAvg/2]*Sqrt[nAvg]^n/Sqrt[Factorial[n]]],
0 <= n <= 20}}, 0]
NSum[f[n], {n, 1, 100}]
If you run this short script, it should return you :
> NSum::nsnum: Summand (or its derivative) f[n] is not numerical at
> point n = 17.
The problem I am facing occurs only with certain functions. It occurs with this complicated-looking function, but if you try a simpler one, the script may work correctly.
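A workaround sketch, for reference: since the summand vanishes for n > 20, the sum is effectively finite, and a plain `Sum` or `Total` over an explicit table sidesteps NSum's symbolic preprocessing entirely; restricting the definition to numeric arguments with `f[n_?NumericQ]` is another common remedy.

```mathematica
(* the summand is zero beyond n = 20, so an exact finite sum works *)
Total[Table[f[n], {n, 1, 100}]]
```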
My questions:
First: I would like to understand why I get this error.
Second: How do I solve it?
In short: how do I do numeric summation in general with Mathematica? In my specific case I have functions that may be piecewise defined. In all generality my function can be a product/sum of piecewise functions, so it is not obvious at first view to know the boundary of the sum without looking more carefully, which I would like to avoid.

M F, 2019-06-23T19:18:51Z

Change font size of input search box on Mathematica Documentation page?
https://community.wolfram.com/groups/-/m/t/1709488
The input search box on the Documentation Page has a very small font (my notebook fonts are ok). How can I change this?

Doug Telford, 2019-06-22T15:23:51Z

US Climate Change at the County Level
https://community.wolfram.com/groups/-/m/t/1671437
![US County century-scale climate change][3]
(Edited 4/29/2019) Continuing my efforts to visualize climate change in the US, I found that the US National Oceanic and Atmospheric Administration (NOAA) maintains a dataset of monthly average temperatures for every county in the 48 contiguous US States from 1895 to the present. To avoid burdening the server during testing, I downloaded the file to my computer, but it is available [here][1]. On the NOAA ftp site the file will have the name climdiv-tmpccy-v1.0.0-YYYYMMDD, where YYYY, MM and DD correspond to the year, month and date when the file was updated. When I downloaded the file its name was climdiv-tmpccy-v1.0.0-20190408.
It takes some time to read in all 388,375 lines of this file:
cdata = SemanticImport["climdiv-tmpccy-v1.0.0-20190408.txt",
PadRight[{"String"}, 13, "Real"]];
The dataset consists of 13 columns: the first is a string of digits encoding the year and something that is almost -- but not quite -- the FIPS code for the county. The next 12 are the monthly average temperatures in degrees Fahrenheit in that county for January through December of that year. For clarity, I like to add column headers first:
cdata = cdata[All,
AssociationThread[
Prepend[Table[
DateString[d, "MonthNameShort"], {d,
DateRange[{0, 1}, {0, 12}, "Month"]}], "Code"], Range[13]]];
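As an aside, a sketch of how the Code column decodes, using a hypothetical row (the layout follows the string-slicing used later in this post: two digits of state code, three of county code, a two-digit element code, and the four-digit year).

```mathematica
code = "01001021895";  (* hypothetical: state 01, county 001, element 02, year 1895 *)
{StringTake[code, 2], StringTake[code, {3, 5}], StringTake[code, -4]}
(* {"01", "001", "1895"} *)
```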
To correct the FIPS codes, we require the FIPS codes for the 48 contiguous States:
stateFIPS =
EntityValue[
EntityList[
EntityClass["AdministrativeDivision", "ContinentalUSStates"]],
"FIPSCode"];
Now we can add a column containing the correct FIPS code for each county. While we're at it we add a column for the year:
cdata = cdata[
All, <|"FIPS" ->
stateFIPS[[ToExpression[StringTake[#Code, 2]]]] <>
StringTake[#Code, {3, 5}],
"Year" -> ToExpression[StringTake[#Code, -4]], #|> &];
Then I build an association of FIPS codes and entity values for the corresponding counties:
counties =
EntityList[
EntityClass["AdministrativeDivision", "USCountiesAllCounties"]];
massoc = AssociationThread[EntityValue[counties, "FIPSCode"],
counties];
With this association, we can add a column to the dataset corresponding to the entity value of each county. In the same statement I also add a column for the average annual temperature:
cdata = cdata[
All, <|"County" -> massoc[#FIPS], #,
"Tavg" ->
Mean[WeightedData[{#Jan, #Feb, #Mar, #Apr, #May, #Jun, #Jul, \
#Aug, #Sep, #Oct, #Nov, #Dec}, {31, 28 + Boole[LeapYearQ[#Year]], 31,
30, 31, 30, 31, 31, 30, 31, 30, 31}]]|> &];
The file includes data for the first couple of months of 2019, so I strip that out:
cdata = cdata[Select[#Year <= 2018 &]];
I tried several different ways of smoothing the temperatures and decided on a moving average of the past 10 years:
s = cdata[GroupBy["County"],
With[{ma = MovingAverage[#, 10]},
Last[ma] - ma[[-100]]] &, "Tavg"];
I experimented with the color functions available in Mathematica, but decided to build my own using a color palette from the [ColorBrewer][2] website.
col1 = RGBColor[{255, 245, 240}/255];
col2 = RGBColor[{254, 224, 210}/255];
col3 = RGBColor[{252, 187, 161}/255];
col4 = RGBColor[{252, 146, 114}/255];
col5 = RGBColor[{251, 106, 74}/255];
col6 = RGBColor[{239, 59, 44}/255];
col7 = RGBColor[{203, 24, 29}/255];
col8 = RGBColor[{165, 15, 21}/255];
col9 = RGBColor[{103, 0, 13}/255];
qq = Values[s];
zz = Table[Min[qq] + i*(Max[qq] - Min[qq])/9, {i, 9}];
cfunc[x_?NumericQ] := Which[
x <= zz[[1]], col1,
x <= zz[[2]], col2,
x <= zz[[3]], col3,
x <= zz[[4]], col4,
x <= zz[[5]], col5,
x <= zz[[6]], col6,
x <= zz[[7]], col7,
x <= zz[[8]], col8,
x <= zz[[9]], col9]
Then it's just a matter of building the legend and displaying the map:
legend = SwatchLegend[{col1, col2, col3, col4, col5, col6, col7, col8,
col9}, {"-0.30 - 0.28", " 0.28 - 0.87", " 0.87 - 1.45",
" 1.45 - 2.04", " 2.04 - 2.62", " 2.62 - 3.21", " 3.21 - 3.79",
" 3.79 - 4.38", " 4.38 - 4.96"},
LegendMarkers ->
Graphics[{EdgeForm[Black], Opacity[1], Rectangle[]}],
LegendLabel -> "\[CapitalDelta]T(\[Degree]F)",
LegendFunction -> (Framed[#, RoundingRadius -> 5] &),
LegendMargins -> 5];
climvis =
GeoRegionValuePlot[s, Frame -> True, FrameTicks -> None,
FrameLabel -> {"Change in Mean Annual Temperature for US Counties, \
1919-2018"}, LabelStyle -> Larger,
PlotLegends -> Placed[legend, Right], ColorFunction -> cfunc,
ColorFunctionScaling -> False,
PlotStyle -> Directive[EdgeForm[{Thin, White}]],
GeoBackground -> None,
GeoProjection -> {"LambertAzimuthal",
"Centering" -> GeoPosition[{30, -195/2}]},
PlotRange -> {{-0.37, 0.38}, {-0.13, 0.38}}, ImageSize -> 800,
PlotLegends -> Placed[legend, Right]]
And finally here is the result. This is of course just a "proof of concept" exercise. There are any number of ways to analyze this dataset and visualize the results.
![US County century-scale climate change][3]
[1]: ftp://ftp.ncdc.noaa.gov/pub/data/cirs/climdiv/
[2]: http://colorbrewer2.org
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=figmean.jpg&userId=66744

John Shonder, 2019-04-28T15:26:25Z

Connect Mathematica 12 frontend to old kernel, like Mathematica 6? (macOS)
https://community.wolfram.com/groups/-/m/t/1710121
Hello,
I have zillions of notebooks collected over many years that I occasionally like to use, written for old versions of Mathematica, like 6 or 7. Newer versions are not necessarily compatible (and sometimes are considerably slower), and since I don't want to recode hundreds of megabytes of programs, I used to connect the newer frontends to those old kernels. This worked fine until Mathematica 12 came out, for which it turned out that the frontend no longer connects to the old kernels (64-bit, of course).
I am not sure whether the connection protocol has changed, or whether some macOS security feature steps in.
Is there a workaround? The dilemma is that from macOS 10.15 onwards, the 32-bit frontends won't work any more, which means that a large amount of proven old code will become unusable.
Thanks!
W.

wolfgang.lerche, 2019-06-24T13:18:51Z

psfrag for Mathematica 10
https://community.wolfram.com/groups/-/m/t/474155
Hello,
As far as I understand, psfrag no longer works with Mathematica 10. Does anyone have a solution for this problem, or know whether there will be a solution in the near future? Or is there an alternative?
What I want to do is export EPS files from Mathematica and include them in LaTeX with nice labels.

a b, 2015-04-05T14:34:56Z

Open .c file in Mathematica?
https://community.wolfram.com/groups/-/m/t/1710258
I was working on a Mathematica file and then saved it, but it was saved as a .c file (C file) instead of .nb. I tried to open it again but it does not work: it gives me a blank file with "failed" written on it. How can I open it again in Mathematica?

Mirza Farrukh Baig, 2019-06-24T14:54:01Z

Get tabulated results from a W|A output plot?
https://community.wolfram.com/groups/-/m/t/1710041
Wolfram Alpha has produced a graph of a function which I am integrating. Is it possible to obtain a table of results? If so, how is it done?

Rob Louw, 2019-06-24T13:04:32Z

Does the InverseLaplaceTransform work correctly?
https://community.wolfram.com/groups/-/m/t/1708984
Hi, I notice a strange thing:
InverseLaplaceTransform[(g+s)/Sqrt[s],s,t]
yields a result proportional to (2*g*t-1)/t^(3/2), with a coefficient involving the HeavisideTheta function. Is this a correct result? In principle, the term 1/t^(3/2) does not have a Laplace transform, as it is not integrable at t->0. I do not see how the HeavisideTheta function might change this fact. In addition, if we take g=0, then (g+s)/Sqrt[s] reduces to Sqrt[s], for which Mathematica currently says there is no inverse transform, although older versions returned expressions proportional to 1/t^(3/2). So, does InverseLaplaceTransform operate correctly? It would be nice to trust Mathematica, but after such experiences I am no longer sure that I can.
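One sanity check, sketched here: apply LaplaceTransform to the result and see whether the original expression is recovered. Whether the round trip simplifies back to zero is not guaranteed for distribution-like results.

```mathematica
res = InverseLaplaceTransform[(g + s)/Sqrt[s], s, t];
(* transform the result back and compare with the original expression;
   zero would indicate a consistent pair *)
Simplify[LaplaceTransform[res, t, s] - (g + s)/Sqrt[s], s > 0]
```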
Leslaw

Leslaw Bieniasz, 2019-06-21T08:43:30Z

Training NN for 2D regression with f(s,g(P),h(P))
https://community.wolfram.com/groups/-/m/t/1710236
6/24:
I think I may have misunderstood the point of the NetEncoder[] and NetDecoder[] functions. I've gotten by on this simpler fit by using a ThreadingLayer[] that won't be stripped off from either the input of the loss function or the output of the network. But I don't think that solves my overall problem: I need an NIntegrate[] and a compiled function to make the final problem work correctly. I'm sure those things can be built from the standard layers, but that still sounds like hell and really long training times.
Here's what I was trying to stuff into the NetDecoder[], because its custom function doesn't have restrictions like ThreadingLayer[]:
CustomFunction = NetGraph[
  <|"s" -> PartLayer[1], "A" -> PartLayer[2], "\[Gamma]" -> PartLayer[3],
   "MJ\[CapitalPsi]" -> PartLayer[4],
   "Function" -> ThreadingLayer[#2*#3/(\[Pi]*((#1 - #4^2)^2 + #3^2)) &, "Inputs" -> 4]|>,
  {NetPort["Input"] -> "s" -> NetPort["Function", "1"],
   NetPort["Input"] -> "A" -> NetPort["Function", "2"],
   NetPort["Input"] -> "\[Gamma]" -> NetPort["Function", "3"],
   NetPort["Input"] -> "MJ\[CapitalPsi]" -> NetPort["Function", "4"]}]
At this point, something seems wrong and I could use some advice. The NN that does A(P), gamma(P), MJPsi(P), trained off the functions of the parameters, converged very quickly to good fits. I don't have functions of the parameters to train off in the final problem; I only have f(s,P). Training off f(s,P) is 1) going slowly (5000 rounds in "5 hours" instead of 10 minutes), and 2) not converging (no progress in 300 rounds instead of instant improvement). Does anyone know how to make it converge, and in a timely manner? Parameter Function Fit.nb is the first case and is there to show that the NN can hold a fit of the 3 functions. Function Fit.nb is the one that is a step closer to the final problem.
6/23:
I'm trying to train a NN that will fit a given f(s,P) as f(s,A(P),gamma(P),M(P)). The reason for this is that this is the simpler version of the problem: I know that f(s,P) has a certain form as a function of s, and its parameters are functions of P.
At this point, I think I'm ready to train it, but I'm generating the error NetTrain::invindim3. I searched the forum and found this related post (https://community.wolfram.com/groups/-/m/t/1262168). In the replies, the OP comes across a similar problem and is told he needs a SoftmaxLayer before the output decoder so NetTrain doesn't munch something. My NetDecoder is a custom function representing the functional form I know is there. I suppose I need to make a training network to prevent NetTrain from munching my output layer, but if there are warnings, hazards, other solutions, or advice I should be aware of, please tell me. I trained the NN to reproduce the parameters as a function of P. I previously trained it directly off the parameters. This is cheating for the final problem, but I can get it into the network and reproduce the final function correctly, so I know the NN is working correctly. I just need to train the whole thing against my data before I try the final problem.
To get your notebook ready to replicate the error (I understand that the custom decoder is a version 12 feature):
A[P_]:=.5/Sqrt[1.2^2+P^2]+.2
\[Gamma][P_]:=.05+(.6Log[2.1 .194 5])/Log[(P^2+(2.1 .194)^2)25]
MJ\[CapitalPsi][P_]:=3.04-.5Exp[-((P-5)^2/5)]
\[Sigma][s_,P_]:=(A[P]\[Gamma][P])/(\[Pi]((s-MJ\[CapitalPsi][P]^2)^2+\[Gamma][P]^2))
Data=Flatten[RandomSample[Table[{{s,P},\[Sigma][s,P]},{s,0,20,.1},{P,0,30,.1}]],1];
TrainingData=<|"Input"->Data[[;;50000,1]],"Output"->Data[[;;50000,2]]|>;
ValidationData=<|"Input"->Data[[50001;;,1]],"Output"->Data[[50001;;,2]]|>;
BlackBox=NetChain[{150, Tanh, 150, Tanh, 3}]
Decoder=NetDecoder[{"Function",(#[[2]]#[[3]])/(\[Pi]((#[[1]]-#[[4]]^2)^2+#[[3]]^2))&}]
EmbededBlackBox = NetGraph[
  <|"BlackBox" -> BlackBox, "Join" -> PrependLayer[],
   "s" -> PartLayer[1], "P" -> PartLayer[2]|>,
  {NetPort["Input"] -> "P" -> "BlackBox" -> NetPort["Join", "Input"],
   NetPort["Input"] -> "s" -> NetPort["Join", "Element"]},
  "Input" -> 2, "Output" -> Decoder]
This is the line where everything falls apart:
result=NetTrain[EmbededBlackBox,TrainingData,All,ValidationSet->ValidationData,MaxTrainingRounds->5000,TrainingProgressMeasurements->"MeanSquare"]
NetTrain::invindim3: Data provided to port "Output" should be a non-empty list of length-4 vectors of real numbers, but was a length-50000 vector of real numbers.
I understand the custom decoder layer may be unusual, but it can't be replaced with standard layers without a great deal of pain, as it will become a function that includes an NIntegrate or a related custom GPU kernel in the next step.

Isaac Sarver, 2019-06-24T05:01:09Z

Alternating terms of two lists, as function.
https://community.wolfram.com/groups/-/m/t/1703087
Hello community. I have created a function and would like to know whether it is worth submitting to the Function Repository, or whether there is already something simpler that does the same job. If anyone can give an opinion on this, I will be very grateful.
I have modestly created a function that can interleave two lists by alternating their terms (unlike Riffle, which only inserts terms into gaps, this function keeps the same number of terms, taking them from both lists alternately).
- It works like this:
If the third argument of the function ("c_") is {}, the function automatically uses a 1-to-1 pattern from each list:
Alternate[a_, b_] := Alternate[a, b, {}]
Alternate[a_, b_, c_] :=
PadRight[a*
PadRight[
Take[Flatten@Table[If[c == {}, {1, 0}, c], Count[a, _]],
Min[Count[a, _], Count[b, _]]], Count[a, _], 1],
Max[Count[a, _], Count[b, _]]] +
PadRight[b*
PadRight[
Take[Flatten@Table[Abs[If[c == {}, {1, 0}, c] - 1], Count[b, _]],
Min[Count[a, _], Count[b, _]]], Count[b, _], 1],
Max[Count[a, _], Count[b, _]]]
r = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20};
s = {3, 5, 7, 9, 11, 13, 15, 17, 19, 21};
Alternate[r, s]
Alternate[s, r]
![ie1][1]
The function works even with lists of different sizes, keeping the excess terms unchanged:
p = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20};
q = {3, 5, 7, 9, 11, 13, 15, 17};
Alternate[p, q]
Alternate[q, p]
![ie2][2]
Or you can change the third argument ("c") of the function to any pattern (e.g. {0,1,1,1}), where "1" takes the term from the first list ("a") and "0" takes the term from the second list ("b"):
t = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20};
u = {3, 5, 7, 9, 11, 13, 15, 17};
Alternate[t, u, {0, 1, 1, 1}]
Alternate[u, t, {0, 1, 1, 1}]
![ie3][3]
I would like to know: is this a good idea, or is there a simpler way to do this? Is it worth submitting to the repository?
Thank you.
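For comparison, a possibly simpler equivalent (only a sketch, lightly checked against the examples above, and not a drop-in replacement): the third argument is the selection pattern, cycled over the positions, with excess terms of the longer list passing through unchanged:

    alternate[a_List, b_List, c_List : {1, 0}] :=
     Module[{n = Max[Length[a], Length[b]], mask},
      mask = PadRight[{}, n, c];      (* cycle the pattern to full length *)
      Table[Which[
        i > Length[b], a[[i]],        (* excess terms of a unchanged *)
        i > Length[a], b[[i]],        (* excess terms of b unchanged *)
        mask[[i]] == 1, a[[i]],
        True, b[[i]]], {i, n}]]

With the lists r and s above, alternate[r, s] should alternate starting from r, as in the original function.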
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ie1.png&userId=1316061
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ie2.png&userId=1316061
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ie3.png&userId=1316061Claudio Chaib2019-06-12T19:48:51ZDoes SystemModeler have a spark gap among its analog electrical components?
https://community.wolfram.com/groups/-/m/t/1709816
I'm crafting a circuit to simulate, etc, and I can't find any spark gap component in its library.
How could it be custom designed, or imported, if it's missing from the library?
In the alternative, could a voltage sensor tied to a switch act as a substitute, since the only purpose I'm using the spark gap for is to periodically dissipate a build-up of voltage?Vinyasi .2019-06-22T19:22:17ZEnumerate all possible high order combinations of functions?
https://community.wolfram.com/groups/-/m/t/1709922
Hi,
I am thinking about exploring the space of high-order functions by constructing (combining) complex functions from small and simple functions.
For example, I have a set of simple functions:
$f1(x, y) = x + y$
$f2(x, y) = x*y$
$f3(x) = log(x)$
$f4(x) = exp(x)$
Given 2 variables, x and y, I could create a fairly complex function
$f_{agg}(x, y) = f1( f2(f4(x) ,f3(y)) , f4(y) )$
If we plot the structure of these functions, in this specific case, it is a tree:
f1
f2 f4
f4 f3
We all know that tree is a special case of a graph.
Now here comes the problem: What if I do not know the structure of $f_{agg}$ in the first place, and I want to enumerate all possible combinations?
In other words, we have a few nodes, and we want to find all possible graphs that can be created by connecting these nodes.
An obvious constraint is the arity of each function. A unary function takes 1 argument, so it has only 1 inbound edge in the graph, while binary functions have 2 and ternary functions have 3.
Another constraint is the domain (I am not sure if this is the right word): a function $f_{n} (x) = - |x| $ will only give non-positive results, therefore we cannot combine it with $f_{p} (x) = log(x) $.
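One systematic way to sketch the enumeration (ignoring the domain constraint for now, and using hypothetical symbols f1..f4 for the heads) is to build all expression trees up to a given depth from a table of {head, arity} pairs:

    fns = {{f1, 2}, {f2, 2}, {f3, 1}, {f4, 1}};
    vars = {x, y};
    trees[0] = vars;
    trees[d_] := trees[d] = DeleteDuplicates@Join[vars,
       Flatten@Table[First[f] @@ args,
         {f, fns}, {args, Tuples[trees[d - 1], Last[f]]}]]

Here trees[d] is memoized, and Flatten only flattens the List structure from Table, never the f1[...] expressions themselves. The domain constraint could then be imposed by filtering trees[d] with a predicate, but note that trees[2] is already very large: the enumeration explodes combinatorially with depth.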
Can anyone give me some clue how to combine functions in a systematic way? Please leave comments if I haven't explained myself clearly.Felix Chern2019-06-23T09:29:45Z[✓] Solve an equation with a constraint?
https://community.wolfram.com/groups/-/m/t/1709692
Consider the following code:
Clear["Global`*"]
h[e_, u_] := (e + u) Log[
u/(e + u)] + (1 - e - u) Log[(1 - u)/(1 - e - u)];
Y = 10000000;
t = {0.10, 0.15, 0.16};
a = 0.05;
n = 56;
NSolve[Exp[n*h[u - Total[t]/n, 1 - u]] == a/2 && 1 >= u >= 0, u]
NSolve[Exp[n*h[u - Total[t]/n, 1 - u]] == a/2 && 1 >= u >= 0.01, u]
Hello. The first one, with NSolve, works and gives this answer:
{{u -> 3.34543*10^-7}, {u -> 0.0879319}}
But the second doesn't:
{}
What is the problem with the second one if I want to get one root that satisfies 1 >= u >= 0.01?Alex Graham2019-06-22T21:37:00ZGet list all the binary functions?
https://community.wolfram.com/groups/-/m/t/1709678
Hello,
I found this webpage very helpful. It lists all the **unary** functions, i.e., f(x).
[math functions][1]
I am curious whether Wolfram has a similar webpage that shows all the **binary** functions, i.e., f(x, y). What I can think of are sum, subtract, multiply and divide. Are there any other binary functions?
[1]: https://reference.wolfram.com/language/guide/MathematicalFunctions.htmlFelix Chern2019-06-22T16:30:13ZHas this person lived for two millennia Entity["Person", "Zhou::6dj36"]?
https://community.wolfram.com/groups/-/m/t/1709636
Consider the following code:
Entity["Person", "Zhou::6dj36"]["Dataset"]
King Zhou of Shang, Shang Zhou, Zhou Xin, Di Xin, King Dixin of Shang
EntityProperty[Person, DeathDate] -> DateObject[{1045}, Year, Gregorian, -5.],tom kirkman2019-06-22T16:13:37ZIs there a miss-spelling in {Entity["Person", "ValDemmings::4rmmb"]?
https://community.wolfram.com/groups/-/m/t/1709627
FL rep Val Demingstom kirkman2019-06-22T15:12:34ZA new form for the MRB constant
https://community.wolfram.com/groups/-/m/t/1709127
$$MRB=\sum_{n=1}^\infty (-1)^n( n^{1 \over n}-1)$$
$$=\sum_{m=1}^\infty (-1)^m\bigg( \exp\bigg( {\log(m) \over m}\bigg)-1\bigg)$$
$$\begin{array} {rclll}
-\exp\bigg( {\log(1) \over 1}\bigg)+1 & = & -{\log(1) \over 1}
&-{\log(1)^2 \over 1^2 2!}
&-{\log(1)^3 \over 1^3 3!} & - \cdots \\
+\exp\bigg( {\log(2) \over 2}\bigg)-1 & = &+{\log(2) \over 2}
&+{\log(2)^2 \over 2^2 2!}
&+{\log(2)^3 \over 2^3 3!} & + \cdots \\
-\exp\bigg( {\log(3) \over 3}\bigg)+1 & = &-{\log(3) \over 3}
&-{\log(3)^2 \over 3^2 2!}
&-{\log(3)^3 \over 3^3 3!} & - \cdots \\
\vdots \qquad & \vdots & \quad\vdots & \quad\vdots& \quad\vdots
& \ddots \\
\hline \\
MRB \qquad & = & {\eta^{(1)}(1) \over 1!}
&- {\eta^{(2)}(2) \over 2!}
&+ {\eta^{(3)}(3) \over 3!} & - \cdots
\end{array}.$$
So,
![MRB sum table][1]
I don't think I should continue to say nul $= \sum _{m=2}^{\infty } (-1)^m \left( \eta ^{(m)}(m)\right).$
That is why nul means "No."
But here is the work with just the given value of nul:
sum = 1/2 Log[2] (-2 EulerGamma + Log[2]);
nul = -0.07608671642673194446;
mb = -(sum + nul/E)
> 0.18785964246206712025
MRB =
NSum[(-1)^n (n^(1/n) - 1), {n, 1, Infinity},
Method -> "AlternatingSigns", WorkingPrecision -> 30]
> 0.18785964246206712024857897184
MRB - mb
> 0.*10^-21
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=81351.JPG&userId=366611Marvin Ray Burns2019-06-21T14:37:59ZCompare two "equivalent" graphs?
https://community.wolfram.com/groups/-/m/t/1703001
Hi everyone,
I have seemingly easy problem that I really can't wrap my head around. I am not that experienced with graphs, so I apologize in advance if I am using incorrect terminology.
I have some graphs where I am not sure in which order the nodes have been specified, and I want to see if they are "equal" (not sure about the terminology). Say I have:
g1 = Graph[{1 <-> 2}];
g2 = Graph[{2 <-> 1}];
If I do
g1 == g2
Then the output is false, even though as far as I understand it they are describing the same thing. I know I can use **IsomorphicGraphQ** but that doesn't seem to work if the vertex names convey some meaning. Say I have:
g3 = Graph[{1 <-> 2, 2 <-> 3}];
g4 = Graph[{3 <-> 2, 2 <-> 1}];
g5 = Graph[{1 <-> 3, 3 <-> 2}];
Then both of these return **True**:
IsomorphicGraphQ[g3, g4]
IsomorphicGraphQ[g3, g5]
But in g3 and g4, vertex 2 is the one that has two connections. In my application, that is not equivalent to the graph where vertex 3 has the two connections. My current implementation is to extract the edges, sort them twice, and then compare. It works, but it seems super inelegant.
In[]: Sort[Map[Sort, EdgeList[g3]]] === Sort[Map[Sort, EdgeList[g4]]]
Out[]: True
In[]: Sort[Map[Sort, EdgeList[g3]]] === Sort[Map[Sort, EdgeList[g5]]]
Out[]: False
It seems like there must be a better way. Does anyone know of one?
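For what it's worth, a sketch of the same idea wrapped in a single predicate (only a restatement of the sort-twice approach, plus a vertex-set check so isolated vertices also count):

    graphSameQ[a_Graph, b_Graph] :=
     Sort[VertexList[a]] === Sort[VertexList[b]] &&
      Sort[Sort /@ (List @@@ EdgeList[a])] ===
       Sort[Sort /@ (List @@@ EdgeList[b])]

With the graphs above, this should give True for g3/g4 and False for g3/g5.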
Thanks ever so much!Patrik Ekenberg2019-06-11T21:11:34ZImport data from an Excel file?
https://community.wolfram.com/groups/-/m/t/1708859
Hi,
I have an Excel file like this:
![enter image description here][1]
How do I import the data to *Mathematica* so that I have a list like below:
Ukraine={x1,x2,x3,x4,x5,x6,x7,x8,x9}
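Without seeing the file I can only guess its layout, but assuming the first sheet has one row per country, with the country name in column 1 and the values after it (file name hypothetical), something like this might work:

    (* "data.xlsx" is a hypothetical file name; {"Data", 1} imports sheet 1 *)
    raw = Import["data.xlsx", {"Data", 1}];
    ukraine = Rest[First[Select[raw, First[#] === "Ukraine" &]]]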
Best regards,
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=5327Untitled.png&userId=943918M.A. Ghorbani2019-06-20T21:01:52ZCalculate the following integration (unable to do it after running it 4hr)?
https://community.wolfram.com/groups/-/m/t/1707171
![enter image description here][1]
Help me to integrate it.
After running for 4 hours, Mathematica was unable to do this integration.
Then I reduced the constants and variables and tried a similar type of integration with the variable 'r' in 1D.
![enter image description here][2]
Still not getting the result.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=required.png&userId=1706636
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=what_to_do.png&userId=1706636Soumyaranjan Jhankar2019-06-19T12:32:36ZGet a "flights overhead map" for an arbitrary location?
https://community.wolfram.com/groups/-/m/t/1709306
If I query "flights overhead map" I get something that looks like this:
![Flights Map][1]
But I've been completely unable to get Alpha to give me that same map for an arbitrary location like "Chicago". "Flights over Chicago" gives me a list and a sky map which is not the same thing. "Flights over Chicago map" doesn't do any better. "Flights over Chicago local map" gives me a map of Chicago without any flights.
Seems like I'm missing something fundamental! How can I get that map for an arbitrary location?
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=flights.png&userId=1709173Robert Withrow2019-06-22T00:11:26ZCompile to find digits ratio of integers composed by sequential numbers?
https://community.wolfram.com/groups/-/m/t/1701517
Hello community, I'm trying to optimize the speed of a code using Parallelize and Compile, but I'm not getting very significant results (I'm still inexperienced with these commands). I believe that I am using these commands in an inefficient way (my attempt is at the end of the post). Below I explain in detail where the problem is for my case: first the context of my question, and then what I want to do.
Taking the table of the following numbers for example:
z = 5;
FromDigits[Table[10^x - 1, {x, {Range[z]}}]]
![ima1][1]
I then transformed them into integers composed of sequential numbers as follows:
z = 2;
FromDigits[Table[10^x - 1, {x, {Range[z]}}]]
g[k_] := (ToExpression[
StringDelete[
ToString[Table[i, {i, Range[k]}]], {"{", "}", " ", ","}]]);
Table[g[j1], {j1, FromDigits[Table[10^x - 1, {x, {Range[z]}}]]}]
![ima2][2]
To count the different digits of each of these numbers:
z = 6;
g[k_] := (DigitCount[
ToExpression[
StringDelete[
ToString[Table[i, {i, Range[k]}]], {"{", "}", " ", ","}]]]);
Table[Print[g[j1]], {j1,
FromDigits[Table[10^x - 1, {x, {Range[z]}}]]}]
![ima3][3]
And finally, compute a relationship (ratio) between the count of the digit ´0´ (which occurs less often than any other digit) and another digit (e.g. ´2´):
z = 6;
g[j_, h_] := (Take[
DigitCount[
ToExpression[
StringDelete[
ToString[Table[i, {i, 1, j}]], {"{", "}", " ", ","}]]], {h}]);
Table[g[k1, 10], {k1, FromDigits[Table[10^x - 1, {x, {Range[z]}}]]}]
Table[g[k2, 2], {k2, FromDigits[Table[10^x - 1, {x, {Range[z]}}]]}]
Table[g[k3, 10]/g[k3, 2], {k3,
FromDigits[Table[10^x - 1, {x, {Range[z]}}]]}]
Table[N[g[k4, 10]/g[k4, 2], 11], {k4,
FromDigits[Table[10^x - 1, {x, {Range[z]}}]]}]
![ima4][4]
The largest number of terms I can calculate is 8 before the program crashes (?!). Below is the result obtained for 8 terms **without** using Parallelize or Compile:
EvaluationData[z = 8;
a = FromDigits[Table[10^x - 1, {x, {Range[z]}}]];
g[j_, h_] := (Take[
DigitCount[
ToExpression[
StringDelete[
ToString[Table[i, {i, 1, j}]], {"{", "}", " ", ","}]]], {h}]);
b = Flatten[Table[N[g[k4, 10]/g[k4, 2], 11], {k4, a}]];
{ListLinePlot[b,
PlotRange -> {{1, Automatic}, {Automatic, Automatic}},
LabelingFunction -> (#1 &), ImageSize -> Large], b}]
![ima5][5]
Using the resulting data I can make an estimate of the equation governing these fractions experimentally (obtained only by observing the first 8 terms; by guessing), and also test the value of the guessed equation in the infinite limit:
u: {1,2,3,4,5,6,7,8}
num = numerator: {0,9,189,2889,38889,488889,5888889,68888889}
dem = denominator: {1,20,300,4000,50000,600000,7000000,80000000}
z = 8;
num = (10 - 10^(1 + u) + 9*u*10^u)/90
dem = u*10^(u - 1)
sim = Simplify[num/dem]
Table[N[(-10 + 10^(1 - u) + 9*u)/(9*u), 11], {u, Range[z]}]
Limit[sim, u -> \[Infinity]]
![ima6][6]
- My attempt using Compile and Parallelize:
Now comes my attempt to improve the initial code using Parallelize and Compile to obtain the (experimental) points with better timing. I would like to calculate more points, faster, using the experimental code to check the data against the guessed equation. But I did not get a much better result, due to my lack of experience using these commands. My attempt:
z = 8;
g = Compile[{{x, _Integer}}, Range[10^x - 1]]
EvaluationData[
Parallelize[
h[x1_] := (DigitCount[
ToExpression[
StringDelete[ToString[g[x1]], {"{", "}", " ", ","}]]])];
b = Flatten[
Table[N[Take[h[i], {10}]/Take[h[i], {2}], 11], {i, Range[z]}]];
{ListLinePlot[b,
PlotRange -> {{1, Automatic}, {Automatic, Automatic}},
LabelingFunction -> (#1 &), ImageSize -> Large], b}]
![ima7][7]
- My question:
There was no significant improvement in computing time (approx. 10%), and I believe that small improvement was due to Parallelize and not to Compile. How do I use these commands to optimize this code and make it faster (or lighter)? I believe I did not use them properly.
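Most of the time in the original code seems to go into the ToString/StringDelete/ToExpression round trip rather than into anything Compile can speed up. A string-free sketch of the same ratio (untested at the largest sizes, and still memory-hungry as n approaches 10^8):

    (* concatenate the digit lists of 1..n directly, then tally the digits *)
    digitRatio[n_Integer] :=
     Module[{counts = Counts[Flatten[IntegerDigits /@ Range[n]]]},
      N[counts[0]/counts[2], 11]]   (* ratio of 0s to 2s among all digits *)
    digitRatio[10^6 - 1]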
Thanks.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=4721ima1.png&userId=1316061
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=7984ima2.png&userId=1316061
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ima3.png&userId=1316061
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ima4.png&userId=1316061
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ima5.png&userId=1316061
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ima6.png&userId=1316061
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ima7.png&userId=1316061Claudio Chaib2019-06-09T04:27:48ZInconsistency in evaluating a definite integral?
https://community.wolfram.com/groups/-/m/t/887867
Consider the integral $ \int_1^{\infty } \log
\left(1-\frac{\alpha
}{x}\right) \, dx$ where $0<\alpha <1$
This integral is divergent as can be verified by evaluating the indefinite integral: $x \log \left(1-\frac{\alpha
}{x}\right)-\alpha \log
(x-\alpha )$ between the two limits.
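For context, the divergence is visible directly from the large-$x$ expansion of the integrand:
$$\log \left(1-\frac{\alpha }{x}\right)=-\frac{\alpha }{x}+O\!\left(\frac{1}{x^{2}}\right),$$
so the tail of the integral behaves like $-\alpha \log x$, which diverges to $-\infty$ as $x\to \infty$.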
However, the *Mathematica* input: $\text{Assuming}\left[0<\alpha
<1,\int_1^{\infty } \log
\left(1-\frac{\alpha
}{x}\right) \, dx\right]$
produces the output: $(\alpha -1) \log (1-\alpha
)-\alpha$Joel Storch2016-07-14T06:04:47ZPrevent SystemException["MemoryAllocationFailure"] ?
https://community.wolfram.com/groups/-/m/t/1709142
I am running code that on occasion shows this message
"General: Overflow occurred in computation". It then proceeds to display "Further output of General::ovfl will be suppressed during this calculation."
and carries on, however on occasion it also displays this message
"SystemException["MemoryAllocationFailure"]" and aborts the calculation. There is a message displayed after this namely "Uncaught SystemException returned to top level. Can be caught with Catch[...._SystemException]".
I have tried implementing the Catch but it is unclear how it should be done and I have had no success. So, is there a method to prevent the program from stopping in the event of the memory allocation failure message? I would be grateful for any pointers.Paul Cleary2019-06-21T15:43:13ZUse NestList, ListLinePlot and matrices for a population growth model?
https://community.wolfram.com/groups/-/m/t/1708506
Hi,
I'm trying to work on a population growth model, but I don't know what is wrong with this simple code. Can I use ListLinePlot on the matrices' entries and plot the curves? The NestList works fine, but ListLinePlot does not plot it. Can you help me with this simple question?
ListLinePlot[NestList[#^2 &, ( {{0.95},{1.1}} ), 4]]
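A guess at the issue: NestList here produces a list of 2×1 matrices, a shape ListLinePlot does not interpret as curves. Starting from a plain vector and transposing (a sketch) gives one curve per component:

    data = NestList[#^2 &, {0.95, 1.1}, 4];   (* #^2 squares elementwise *)
    ListLinePlot[Transpose[data]]             (* one curve per component *)

If the intent was to iterate a projection matrix on a population vector instead, use Dot, e.g. NestList[m.# &, v0, 4], rather than elementwise powers.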
Thank you.
Andreandreusp62019-06-20T22:30:40ZImplementing WolframAlpha's Cryptograms Functionality In Mathematica
https://community.wolfram.com/groups/-/m/t/1707649
Summary
-------
WolframAlpha has a useful function called **cryptograms** which, given a word enciphered in a simple substitution alphabet, returns all possible English words that the enciphered word could represent. I thought it would be even more useful to be able to enter a list of ciphertext words (assumed to be enciphered in the same substitution alphabet), and get a list of all possible English words that could be represented by those words. An example might be "cryptograms UMVUYM QRZMSFRN". This would give a list like {{*people, asterisk*}, {*people, asterism*}, ..., {*proper, snarling*},..., {*proper, unarming*}, ...}. The word pairs not only match the patterns of the enciphered words, but are also consistent with an enciphering alphabet. Since WolframAlpha's **cryptograms** function does not have this functionality, I decided to implement it in Mathematica.
Note that this program is not intended to solve cryptograms automatically (the way [this program][1] from the Wolfram Demonstrations Project does). I think of it more as a tool to aid in the solution of more difficult problems.
Introduction
------------
In a simple substitution cipher, plaintext letters are replaced by ciphertext letters according to an enciphering rule. For example given the rule {p $\rightarrow$ U, e $\rightarrow$ M, o $\rightarrow$ V, l $\rightarrow$ Y} the word *people* becomes UMVUYM (here I use the convention that plaintext letters are lowercase and ciphertext letters are uppercase). The resulting ciphertext, presented without the enciphering rule, is sometimes called a cryptogram. The objective is to reconstruct the enciphering rule and read the original plaintext. An example cryptogram is:
UWJPQTHB ZVCWFUOPQGN FEHPBNOGDX RGTHQNZVPF JWQMYZS TWQPNZFV
The Wolfram Alpha function **cryptograms** is a useful aid in solving short cryptograms like this. For example, given ciphertext word UMVUYM, Wolfram Alpha returns:
WolframAlpha["cryptograms UMVUYM"]
![UMVUYM][2]
Basically the function gives a list of all six-letter English words that have the first and fourth letters the same, the second and sixth the same, and no other repeats. Experienced solvers will recognize *people* as the most likely word on the list. If that doesn't lead to the solution of a given cryptogram, it would not be difficult to try each of the remaining 8 possibilities.
Unfortunately, in many cases the list of words produced is much longer. For example:
WolframAlpha["cryptograms QRZMSFRN"]
![QRZMSFRN][3]
Here we get a list of eight-letter words in which the second and seventh letters are the same and there are no other repeats. There are 409 such words -- far more than can be checked by hand. But it occurred to me that if it were possible to cross-index the lists generated for two or more ciphertext words, the number of possible solutions could be shortened considerably.
What I mean by cross-index is this: Assume that UMVUYM and QRZMSFRN both come from the same cryptogram. Given the two lists of possible words: {*aerate, balboa, briber, esters, laelia, people, proper, tantra, thatch, triter*} and {*acrylics, adenoids, anderons, ..., zirconia*}, there are 9x409 = 3,681 word pairs representing possible solutions. The first pair is {*aerate, acrylics*}. Are these two words compatible with UMVUYM and QRZMSFRN? No, because the first implies U $\rightarrow$ a and the second implies Q $\rightarrow$ a. The second pair, {*aerate, anderons*} leads to a similar contradiction.
Automating the Process
----------------------
Unfortunately, I can't get the Wolfram Language to disclose the code behind **cryptograms**. This is supposed to work, but it fails:
WolframAlpha["cryptograms UMVULM", "WolframParse"]
So I needed to write my own version. I began with the following function, which finds the position of repeated letters in a word:
pattern[wrd_String] :=
StringPosition[wrd, #][[All, 1]] & /@
Select[Tally[Characters[wrd]], #[[2]] > 1 &][[All, 1]]
We know that the word *people* has the first and fourth letters the same, and the second and sixth letters the same. That's exactly what this function returns:
pattern["people"]
{{1, 4}, {2, 6}}
Note that since non-repeated letters at the end of a word do not affect the output of **pattern** ...
pattern["peopled"]
{{1, 4}, {2, 6}}
...we still have to check if candidate plaintext words have the same length as the ciphertext word. Here then is a function that generates a list of English words compatible with a given ciphertext word:
wordList[wrd_String] :=
With[{pat = pattern[wrd], len = StringLength[wrd]},
Select[WordList["KnownWords", IncludeInflections -> True],
If[StringLength[#] != len, False, SameQ[pattern[#], pat]] &]]
This operates just like WolframAlpha's **cryptogram** function: Given a ciphertext word, the function checks every word in the dictionary sequentially to see whether it has the same length and pattern as the ciphertext word, and returns a list of those that do. We can try this out with a ciphertext word like XJXDRWX, which I just made up:
wordList["XJXDRWX"]
{"acantha", "acapnia", "acardia", "amastia", "anaemia", "elegise", \
"elegize", "elevate", "epergne", "eremite", "execute", "eyehole", \
"eyelike", "eyesore", "sisters", "susliks", "systems"}
Now we need a function that checks if a list of plaintext words is compatible with a list of ciphertext words:
compatible[cw_List, pw_List] :=
Module[{cta = <|{}|>, pta = <|{}|>, ct = Characters[StringJoin[cw]],
pt = Characters[StringJoin[pw]]},
Catch[MapThread[
If[KeyFreeQ[cta, #1], cta = Join[cta, <|#1 -> #2|>],
If[cta[#1] != #2, Throw[False]]];
If[KeyFreeQ[pta, #2], pta = Join[pta, <|#2 -> #1|>],
If[pta[#2] != #1, Throw[False]]]; &
, {ct, pt}]; True]]
What I'm doing here is building the enciphering rule and its inverse one letter at a time using MapThread. If any ciphertext letter is found to correspond to two different plaintext letters -- or if any plaintext letter is found to correspond to two different ciphertext letters -- then we know the list of plaintext words is incompatible with the list of ciphertext words. The function throws an exception, exits immediately and returns the value False. Otherwise if we get to the end of the strings and no exception was thrown, the lists are compatible and the function returns True.
For example:
compatible[{"UMVULM", "QRZMSFRN"}, {"people", "asterisk"}]
True
compatible[{"UMVULM", "QRZMSFRN"}, {"people", "antimony"}]
False
The previous list is incompatible because the first word implies M $\rightarrow$ e and the second word implies M $\rightarrow$ i.
Now, finally, we can write a function to cross-index lists of possible words corresponding to a list of ciphertext words:
cryptograms[ctwords_List] :=
Module[{ptwords, lst}, ptwords = wordList /@ ctwords;
lst = Tuples[ptwords]; Select[lst, compatible[ctwords, #] & ]]
Let's go back to the cryptogram presented at the beginning of this post:
ctext = "UWJPQTHB ZVCWFUOPQGN FEHPBNOGDX RGTHQNZVPF JWQMYZS TWQPNZFV";
Obviously the running time of the function **cryptograms** depends on the number of possible plaintext words corresponding to each ciphertext word, so let's find the length of each list:
ctext = StringSplit[ctext];
Length /@ wordList /@ ctext
{4194, 321, 1028, 1028, 6018, 4194}
This suggests that checking the second and third words (or the second and fourth) would have the shortest running time:
cryptograms[{ctext[[2]], ctext[[3]]}]
{{"atmospheric", "squelching"}}
So there is only one possibility in the dictionary for these two ciphertext words. They can only be *atmospheric* and *squelching*. Knowing these two is enough to make a good start at solving what would otherwise be a very difficult cryptogram.
Unfortunately we don't always get such clear answers. The original two words UMVUYM and QRZMSFRN return a list of 68 possible pairs:
cryptograms[{"UMVUYM", "QRZMSFRN"}]
{{"aerate", "Angevins"}, {"aerate", "cohesion"}, {"aerate", "Numenius"},
{"aerate", "Poseidon"}, {"aerate", "unsexing"},
{"balboa", "decanter"}, {"balboa", "deranges"},
{"balboa", "detaches"}, {"balboa", "encasing"},
{"balboa", "escapism"}, {"balboa", "escapist"},
{"balboa", "kneading"}, {"balboa", "Picardie"},
{"balboa", "recanted"}, {"balboa", "regained"},
{"balboa", "remained"}, {"balboa", "retained"},
{"balboa", "revamped"}, {"balboa", "rifampin"},
{"balboa", "scraunch"}, {"balboa", "sneaking"},
{"balboa", "treasury"}, {"balboa", "Treasury"},
{"balboa", "uncaring"}, {"balboa", "uncasing"},
{"balboa", "unfading"}, {"balboa", "unmaking"},
{"balboa", "unsaying"}, {"briber", "Hydromys"},
{"briber", "hydroxyl"}, {"briber", "madronas"},
{"briber", "touracos"}, {"esters", "flashily"},
{"laelia", "runabout"}, {"laelia", "scraunch"},
{"laelia", "turacous"}, {"people", "Angevins"},
{"people", "asterisk"}, {"people", "asterism"},
{"people", "Bayesian"}, {"people", "cadenzas"},
{"people", "Hibernia"}, {"people", "Jamesian"},
{"people", "Madeiras"}, {"people", "magentas"},
{"people", "Numenius"}, {"people", "strength"},
{"people", "unsexing"}, {"people", "waterman"},
{"proper", "dairyman"}, {"proper", "madrigal"},
{"proper", "snarfing"}, {"proper", "snarling"},
{"proper", "unarming"}, {"tantra", "becalmed"},
{"tantra", "behalves"}, {"tantra", "bewailed"},
{"tantra", "debacles"}, {"tantra", "devalues"},
{"tantra", "escapism"}, {"tantra", "Islamise"},
{"tantra", "legacies"}, {"tantra", "limacoid"},
{"thatch", "Burhinus"}, {"thatch", "keyholes"},
{"triter", "Hydromys"}, {"triter", "hydroxyl"},
{"triter", "madronas"}}
However, given a third word FMSMZKRM from the same cryptogram, we do get a single result:
cryptograms[{"UMVUYM", "FMSMZKRM", "QRZMSFRN"}]
{{"people", "generate", "strength"}}
Others can undoubtedly improve on these functions. Obviously if we have a list of 68 possible pairs for two words, we're going to want to start there when we add in a third word, rather than testing all 8 x 409 x 48 = 157,056 triples as the function currently does. I leave these improvements as an exercise for the reader.
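As one hedged sketch of that incremental idea (reusing **wordList** and **compatible** from above, and only lightly tested), the compatible tuples can be grown one ciphertext word at a time, pruning incompatible partial tuples as early as possible:

    cryptogramsIncremental[ctwords_List] :=
     Module[{partial = {{}}},
      Do[
       partial = Select[
         Flatten[Table[Append[p, w],
           {p, partial}, {w, wordList[ctwords[[k]]]}], 1],
         compatible[Take[ctwords, k], #] &],
       {k, Length[ctwords]}];
      partial]

For the three-word example above this should return the same single tuple, while testing far fewer combinations than the full Tuples product.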
[1]: https://demonstrations.wolfram.com/SolveTheCryptoquoteAutomatically/
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=figure1.jpg&userId=66744
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=figure2.jpg&userId=66744John Shonder2019-06-19T22:30:28ZCompress data that has been saved using Put?
https://community.wolfram.com/groups/-/m/t/1708062
Just want to confirm that the Put function does not have an option for compressing large expressions (associations, upwards of 1GB). So, if I compressed the saved files after the fact, will Get automatically decompress them? There is nothing to that effect in the documentation, so I am assuming not.
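One explicit workaround (a sketch; the file and symbol names are hypothetical) is to compress before writing and uncompress after reading, rather than relying on Get to detect anything:

    Put[Compress[bigExpr], "data.wl"];     (* store as a compressed string *)
    restored = Uncompress[Get["data.wl"]]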
GregGregory Lypny2019-06-20T14:25:23Z[✓] Visualize European Union without United Kingdom?
https://community.wolfram.com/groups/-/m/t/1705143
Hi everyone,
I am trying to create a map of the European Union, but exclude the UK (thus, the EU after Brexit, if Brexit occurs). The code for mapping the EU was easy enough:
GeoGraphics[
{EdgeForm[Black], FaceForm[Red],
Polygon[EntityClass["Country", "EuropeanUnion"]]}
]
Please, however, what do I add to not include the UK?
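One approach (a sketch; entity names assumed from the standard "Country" entity store) is to build the member list and delete the UK before forming the polygons:

    members = EntityList[EntityClass["Country", "EuropeanUnion"]];
    post = DeleteCases[members, Entity["Country", "UnitedKingdom"]];
    GeoGraphics[{EdgeForm[Black], FaceForm[Red], Polygon[post]}]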
Thank you.
StanleyStanley Max2019-06-15T00:42:52ZDeduct by the rules for two schemes?
https://community.wolfram.com/groups/-/m/t/1707269
Consider the following code:
Input[1]: rules = {a -> b, b -> c, c -> x, b -> d, d -> y}; a //. rules
Output[2]: x
In the above program, "b" implies two variables, "c" and "d", and the two branches lead respectively to the two terminal results "x" and "y". But the program runs only one branch, a->b->c->x, losing a->b->d->y. Is there any approach to apply all elements of the set of rules, so that the output would be both "x" and "y"?Math Logic2019-06-19T09:03:14ZAnnouncing the Wolfram Function Repository on Community
https://community.wolfram.com/groups/-/m/t/1703373
In his [blog yesterday](https://blog.stephenwolfram.com/2019/06/the-wolfram-function-repository-launching-an-open-platform-for-extending-the-wolfram-language/), Stephen Wolfram announced the launch of the Wolfram Function Repository (WFR). While the blog was the official announcement, this repository has been in development for quite some time, with numerous functions already in use hereabouts. So we are, not surprisingly, quite excited to see this now "go live", and I want to introduce it, informally, in the Community forum. I should mention that several Wolfram Community readers are already familiar with this from our live-streamed meetings, and a few have already had functions published there.
The idea is this. You have a function, say something like `MyDailyGoTo[str_String,n_Integer]`, and you think it is really useful for some purpose. You want it to be available to others, you want it to have a documentation page similar to the ones for functions built into the Wolfram Language, you want to have it when you are visiting someplace and forgot to bring your laptop, etc. You can now have all this. In a notebook go to File > New > Repository Item > Function Repository Item. So with four clicks you are off and running. A templated notebook has appeared, with various fields pre-populated e.g. a name (MyFunction) and a descriptor field. There is a Definition section where you replace the prefilled boilerplate with whatever you need to define your function. Fill in the Usage message section. Provide a few examples. You can have options, applications, show possible issues, fill in keywords and Related Symbols (in the Wolfram Language) for improved discoverability, whatever; all of these have fields that can be added to or left alone. The minimum requirement is that there be a usage message and one example.
If you are new to this, you do not yet know what the style expectations are. No big deal, just hit the Style Guidelines button at the top of the notebook and documentation appears. Want to see what an existing contribution looks like? Hit the Open Sample button. Don't know how to format things like the arguments in the Usage section? Heck, nobody knows how to do that (okay, three people know, but their identities are a corporate secret). Hit the Tools button and new buttons, among them Template Input, appear below the main bar. When you think you have something to submit there is the Check button. Hit that and, well, things happen. An automated set of checks might bring up not just missing-item or formatting issues, but also a set of possible actions to redress them. Once the notebook looks right and passes the Check criteria, you can see how it will look once published using the Preview button (with the option to see it in a new notebook or in the Wolfram Cloud). When everything looks just so, hit Submit to Repository. There is also a Deploy button, useful for various purposes when one does not want the work to go into the WFR; see the blog for details.
What happens next? The submission goes into a queue for review at our end. We have over 500 published functions and I have been involved in reviews of more than 300 of them (while not as prolific as some colleagues, I also wrote I think 14 of them). First thing we do is see if the name and description make sense. We then check for formatting issues, assess usability, run the examples, have a look at the code, check for related functions, and myriad other things. We even had a couple of "review" sessions live-streamed. (These were not the ordinary reviews insofar as the boss was in charge, and the functions under consideration were ones that were already published. Also, while I do not think we made this clear, we only went over a random selection from among those we ourselves had authored.)
And following a review? If a function is fine on the first go round (possibly after mild editing), it gets published. If it looks like a good idea but needs work (a common outcome, even for those we ourselves write) then a message is sent to the author requesting revisions. We try to give sufficient detail for this. And rest assured, we do not send to external authors things like a particular 'needs revisions' note I received, which began "Danny you're a moron".
Let me say a bit about what the expectations are for submissions to the WFR. We are not terribly rigid here. Functions you find useful in everyday work, or ones that do very particular tasks that are not covered by existing Wolfram Language functions, or... A function can work in a very narrow area, provided it is something that others in that area might find useful. Existing WFR functions hit areas from basic programming language extensions to specialized graphics to STEM to "just for fun" to user interface extensions to, well, functions for working with WFR functions (e.g. a message formatter specialized to the WFR). If we think a submission is too close to existing functionality, we will tell you that (this has happened with a few in-house submissions, though it seems to be quite infrequent). If we think changes are needed, well, I guess I covered that already. Functions need not be generalized to all manner of inputs; we simply ask that they fail gracefully (e.g. not crash) with unexpected ones. As with any curation effort (e.g. like what was done for the Wolfram Demonstrations Project), we will exercise some judgment as to standards. But we are not reviewing against anything like the standards for the Wolfram Language itself.
So now you have a function in the WFR. Or maybe you are using a function in the WFR, whether yours or written by others. Can it be changed out from under you? We have in place a versioning system, which is still itself undergoing revisions. The idea is that things will be cached on your local machine and/or stored to a Cloud account. Revisions can be obtained when wanted, and avoided when not wanted. We expect we will have a hiccup or two as we proceed, but anticipate a smooth process in the not-distant future. Now suppose a function shows problems, or maybe could benefit from extensions: can this be addressed? Again, there is a mechanism for communicating to us and we are also working on facilitating direct communication to authors (when they opt in for this).
To be clear, we expect there might be growing pains. But the benefits of the WFR are rife: we now have a platform for external contributions of myriad functions, and already they are getting used in day to day workflows. I hope Community members will join in this venture, either by using these functions or by contributing some of your own favorites.Daniel Lichtblau2019-06-12T16:08:15ZGraphically solve a system of equations and plot them in R^3?
https://community.wolfram.com/groups/-/m/t/1707231
Hello,
I have an assignment that asks me to graphically solve the system of equations i.e. plot them in `ℝ^3` :
> x + 2y + 3z = 1
>
> 2x + 4y + 7z = 2
>
> 3x + 7y + 11z = 8
Currently, I've used ContourPlot3D for plotting my planes in a 3-dimensional space and then FindRoot in order to find the solution to the system.
ContourPlot3D[{x + 2 y + 3 z == 1, 2 x + 4 y + 7 z == 2,
3 x + 7 y + 11 z == 8}, {x, -10, 10}, {y, -10, 10}, {z, -10, 10}]
FindRoot[{x + 2 y + 3 z == 1, 2 x + 4 y + 7 z == 2,
3 x + 7 y + 11 z == 8}, {x, 0}, {y, 0}, {z, 0}]
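For reference, since the system is linear, `Solve` gives the exact intersection (a minimal sketch using the same three equations):

    Solve[{x + 2 y + 3 z == 1, 2 x + 4 y + 7 z == 2,
       3 x + 7 y + 11 z == 8}, {x, y, z}]
    (* {{x -> -9, y -> 5, z -> 0}} *)

The plot is still useful as a visual check that the three planes meet in a single point, rather than along a line or not at all.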
My question here is whether I've correctly completed the assignment and, if I did, why would I need to plot the three planes in `ℝ^3` when it doesn't really help me in solving the problem?Filip Fornell2019-06-18T22:24:18Z[✓] Plot a logarithmic function?
https://community.wolfram.com/groups/-/m/t/1707570
For some reason I am having a lot of trouble graphing a function. The graph will appear, but no line will show up on the graph. I will attach what I am typing in and a picture of what shows up. ANY help would be greatly appreciated!!Skylar Bair2019-06-19T16:02:23Z[WSS19] Solving the wave equation on a torus
https://community.wolfram.com/groups/-/m/t/1707287
In this post, I will share the code and methods I used to solve the wave equation on a torus. To construct a solution to the wave equation on a torus, we need the Laplace-Beltrami operator from differential geometry. For more information on this operator, I recommend looking at this paper: [http://www.math.mcgill.ca/toth/spectral%20geometry.pdf][1].
Define coordinates, dimension, surface, and metric:
n = 2; coords = {u, v};
surf = {Cos[u] (2 + Cos[v]), (2 + Cos[v]) Sin[u], Sin[v]};
g =
FullSimplify[
Table[D[surf, coords[[i]]].D[surf, coords[[j]]], {i, 1, n}, {j, 1,
n}]];
ParametricPlot3D[surf, {u, 0, 2 \[Pi]}, {v, 0, 2 \[Pi]}]
![enter image description here][2]
Find the inverse metric:
ginv = Inverse[g];
Find the determinant of the metric:
detg = Det[g];
For some extra *fun*, we can find the [Christoffel symbols of the second kind][3], [the Riemann curvature tensor][4], the [Ricci curvature tensor][5], and the [Riemann scalar curvature][6]:
\[CapitalGamma] =
FullSimplify[ginv.Table[
1/2 (D[g, coords[[j]]][[k, i]] +
D[g, coords[[i]]][[k, j]] -
D[g, coords[[k]]][[i, j]]), {k, 1, n}, {i, 1, n}, {j,
1, n}]];
riemann =
FullSimplify[
Table[Sum[
D[\[CapitalGamma][[l, i, k]], coords[[j]]] -
D[\[CapitalGamma][[l, i, j]],
coords[[k]]] + \[CapitalGamma][[l, j, s]] \[CapitalGamma][[s,
i, k]] - \[CapitalGamma][[l, k, s]] \[CapitalGamma][[s, i,
j]], {s, 1, n}], {l, 1, n}, {i, 1, n}, {j, 1, n}, {k, 1, n}]];
ricci = FullSimplify[
Sum[Table[
D[\[CapitalGamma][[l, i, j]], coords[[l]]] -
D[\[CapitalGamma][[l, i, l]],
coords[[j]]] + \[CapitalGamma][[m, i, j]] \[CapitalGamma][[l,
l, m]] - \[CapitalGamma][[m, i, l]] \[CapitalGamma][[l, j,
m]], {i, 1, n}, {j, 1, n}], {m, 1, n}, {l, 1, n}]];
scalar = FullSimplify[
Sum[ginv[[i, j]] ricci[[i, j]], {i, 1, n}, {j, 1, n}]];
However, what really interests us is the Laplace-Beltrami operator.
    \[ScriptCapitalL][f_, coord_] :=
      Sum[1/Sqrt[detg] D[
         ginv[[\[Mu], \[Nu]]] Sqrt[detg] D[f, coord[[\[Nu]]]],
         coord[[\[Mu]]]], {\[Mu], 1, n}, {\[Nu], 1, n}];
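As a quick sanity check of this definition, with a flat identity metric the same formula should reduce to the ordinary Laplacian:

    Block[{gflat = IdentityMatrix[2], ginvflat, detflat},
     ginvflat = Inverse[gflat]; detflat = Det[gflat];
     Sum[1/Sqrt[detflat] D[
        ginvflat[[i, j]] Sqrt[detflat] D[f[x, y], {x, y}[[j]]],
        {x, y}[[i]]], {i, 1, 2}, {j, 1, 2}]]
    (* Derivative[0, 2][f][x, y] + Derivative[2, 0][f][x, y] *)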
To solve a differential equation on a torus, we first need to create a gluing diagram to represent how our coordinates u and v map on to the torus:
![enter image description here][7]
We can define our quotient space:
quotientspace = Rectangle[{0, 0}, {2 \[Pi], 2 \[Pi]}];
We then need to specify appropriate boundary conditions. I will use the `PeriodicBoundaryCondition` symbol, which allows us to connect the opposite edges of the gluing diagram. This can also be thought of as creating a large grid of squares in which the solution to the partial differential equation repeats itself in each square.
    bcs = {PeriodicBoundaryCondition[\[Psi][t, u, v], u == 2 \[Pi],
        TranslationTransform[{-2 \[Pi], 0}]],
       PeriodicBoundaryCondition[\[Psi][t, u, v], v == 2 \[Pi],
        TranslationTransform[{0, -2 \[Pi]}]]};
We then should specify some options:
opts = Method -> {"MethodOfLines", "TemporalVariable" -> t,
"SpatialDiscretization" -> {"FiniteElement",
"MeshOptions" -> {"MaxCellMeasure" -> 0.001}}};
Let's solve with the initial conditions being a small bump at u = Pi and v = Pi.
sol = NDSolveValue[{\[ScriptCapitalL][ \[Psi][t, u, v], coords] -
D[\[Psi][t, u, v], {t, 2}] == 0,
bcs, \[Psi][0, u, v] ==
Exp[-(8 (u - \[Pi]))^2 - (8 (v - \[Pi]))^2],
Derivative[1, 0, 0][\[Psi]][0, u, v] == 0}, \[Psi], {t, 0,
10}, {u, v} \[Element] quotientspace, opts];
To visualize the solution, we will use a color function. Let's define the color function first.
    colorf = Function[t,
       Function[{x, y, z, u, v},
        ColorData["FuchsiaTones"][Rescale[sol[t, u, v], {-.1/3, .1/3}]]]];
To animate, we need to generate a set of frames:
frames = ParallelTable[
ParametricPlot3D[surf, {u, 0, 2 \[Pi]}, {v, 0, 2 \[Pi]},
MeshFunctions -> Automatic, ColorFunction -> colorf[t],
ColorFunctionScaling -> False, Boxed -> False, Axes -> False,
ViewPoint -> {0, 15, 10}], {t, 0, 10, 1/30}];
Putting this all together in combination with some heavy computing power (thanks Boston University SCC), we get an animation that can be found at this link: [https://drive.google.com/file/d/1gf0fMxZeM_SInLbLe_mtFkvg1cGDb5ss/view?usp=sharing][8]. Here are some screenshots from that animation:
![enter image description here][9]
![enter image description here][10]
![enter image description here][11]
![enter image description here][12]
![enter image description here][13]
![enter image description here][14]
![enter image description here][15]
As seen in the animation, the `PeriodicBoundaryCondition` allows the wave to interact with itself, and this self-interaction creates an increasingly chaotic solution. In my experience, how quickly this happens depends on the size of the torus and the size of the initial bump. There is one additional representation I would like to leave you with. We can represent the solution by multiplying the solution by the tangent vector to create a torus that 'wiggles.'
frames = Table[
ParametricPlot3D[{Cos[
u] (2 + (1 + 1.2 \[Psi][t, u, v]) Cos[
v]), (2 + (1 + 1.2 \[Psi][t, u, v]) Cos[v]) Sin[u],
Sin[v]}, {u, 0, 2 \[Pi]}, {v, 0, 2 \[Pi]},
ViewPoint -> {0, 15, 10}, PlotRange -> 3.5, Boxed -> False,
Axes -> False], {t, 0, 20, .1}];
![enter image description here][16]
[1]: http://www.math.mcgill.ca/toth/spectral%20geometry.pdf
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-18at12.03.59PM.png&userId=1705502
[3]: http://mathworld.wolfram.com/ChristoffelSymboloftheSecondKind.html
[4]: http://mathworld.wolfram.com/RiemannTensor.html
[5]: http://mathworld.wolfram.com/RicciCurvatureTensor.html
[6]: http://mathworld.wolfram.com/ScalarCurvature.html
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=6445torusgluing.png&userId=1705502
[8]: https://drive.google.com/file/d/1gf0fMxZeM_SInLbLe_mtFkvg1cGDb5ss/view?usp=sharing
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-19at6.44.26AM.png&userId=1705502
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-19at6.44.42AM.png&userId=1705502
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-19at6.45.32AM.png&userId=1705502
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-19at6.45.50AM.png&userId=1705502
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-19at6.46.40AM.png&userId=1705502
[14]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-19at6.47.02AM.png&userId=1705502
[15]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-19at6.47.20AM.png&userId=1705502
[16]: https://community.wolfram.com//c/portal/getImageAttachment?filename=wiggletorus.gif&userId=1705502Emmy Blumenthal2019-06-19T11:15:50Z[WSS19] World Oldest People: Age, Gender, Racial, Geographical Composition
https://community.wolfram.com/groups/-/m/t/1707390
![mapplot1: place of death of the oldest people][6]
Abstract
--------
The world's oldest people reflect the underlying medical, technological, and socio-economic conditions of their places of residence. The oldest people on record are ten times more likely to be female than male, supporting the traditional hypothesis that females make less risky social choices. Although all broad racial groups are represented, more than half of the oldest people on record since 1842 are white, and eighty percent were born in G8 countries, indicating the earlier advancement of Western European and North American healthcare systems. Similarly, the oldest people are clustered in the Eastern United States, Japan, Britain, and France, reflecting the correlation between socio-economic development and increased life-span. Furthermore, the world's oldest people are living longer - by an average of 1.11 years each decade - which demonstrates the overall improvement of global living conditions. The influence of socio-economic conditions on longevity suggests many directions for improving life-span around the world.
----------
I. Data Import
--------------
The list of the world's oldest people, curated by the Gerontology Research Group, was imported directly into Wolfram Mathematica using the following code:
ClearAll["Global`*"];
rawData1 = Import["http://archive.is/4kwbk", "Data"];
The data table - the only element of interest from the website - was located by trial and error: it sits at level [[2, 1, 1]], from position 8 to position 71. In case the data table is updated in the future, a While loop was implemented to locate the table's last line:
count = 71;
continue = True;
While[And[continue == True, count < Length[rawData1[[2, 1, 1]]]],
If[NumberQ[First[rawData1[[2, 1, 1]][[count]]]],
count++,
continue = False
]];
Since the last line of the data table describes the current oldest person, with a missing date of death/most recent alive date, it was processed separately - adding the current date - and manually appended to the table:
rawData2 = Take[rawData1[[2, 1, 1]], {8, count - 2}];
lastLine =
Insert[rawData1[[2, 1, 1, count - 1]], DateString["ISODate"], 5];
AppendTo[rawData2, lastLine];
This raw data table was not directly usable, as it contained disparate types of data in one cell or column: ages split into years and days, cities mixed up with countries, and annotations [] and parentheses () included within the data. Furthermore, it contained secondary/derived data - age - which can be obtained from the date of birth and the most recent alive date. Hence, the data was cleaned up:
rawData3 =
Extract[Transpose[
Sort[rawData2, #1[[4]] < #2[[4]] &]], {#} & /@ {1, 2, 3, 4, 5, 8,
9, 10}];
cleanfunc[s_] :=
StringReplace[
s, {" [" ~~ x___ ~~ "]" -> "", "(" ~~ x___ ~~ ")" -> x}];
cleanfuncRace[s_] :=
StringReplace[
s, {"W" -> "white", "B" -> "black", "EA" -> "asian",
"O" -> "asian", "M" -> "multiracial", "H" -> "hispanic"}];
cleanfuncSex[s_] := StringReplace[s, {"M" -> "male", "F" -> "female"}];
cleanStrings = {"Que" -> "Quebec", "GA" -> "Georgia",
"now Poland" -> "", "British West Indies now Jamaica" -> "Jamaica",
"U.S. MI" -> "Michigan", "Cape Verde Portugal" -> "Cape Verde",
"France St. Barts" -> "Saint Barthelemy"};
The data table received additional attributes - DateObject or city/country Entity - thanks to SemanticInterpretation[]. In order for SemanticInterpretation[] to work, certain strings were removed or clarified, as detailed above. Nevertheless, SemanticInterpretation[] may fail in some instances, which then requires re-running the following block of code:
rawData4 =
Transpose[{rawData3[[1]], cleanfunc[rawData3[[3]]],
SemanticInterpretation[cleanfunc[rawData3[[4]]]],
SemanticInterpretation[rawData3[[5]]],
cleanfuncRace[cleanfunc[rawData3[[6]]]], rawData3[[7]],
SemanticInterpretation[
StringReplace[cleanfunc[rawData3[[2]]], cleanStrings]],
SemanticInterpretation[
StringReplace[cleanfunc[rawData3[[8]]], cleanStrings]]}];
dataNoHeading =
Transpose[
Insert[Transpose[rawData4],
Table[DateDifference[rawData4[[i, 3]],
rawData4[[i, 4]], {"Year", "Day"}], {i, 1, Length[rawData4]}],
5]];
headings = {"No.", "Name", "Date of birth", "Most recent alive date",
"Age", "Race", "Sex", "Birthplace", "Deathplace"};
data = Prepend[dataNoHeading, headings];
data // TableForm
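Before moving on, it is worth scanning the interpreted table for failed interpretations; SemanticInterpretation[] returns $Failed when it cannot parse a string (a small check of my own, not part of the original workflow):

    (* an empty result means every string was interpreted successfully *)
    Position[rawData4, $Failed | _Failure]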
Upon successfully running SemanticInterpretation[], the data table appears as follows:
![Part of the data table][1]
II. Data Visualizations and Analysis
-----------------------
The oldest people are more likely to be women than men, by a ratio of ten to one (pieChart1). This discrepancy underscores the socio-environmental choices of each gender: men tend to engage in riskier activities: smoking, drinking, using drugs, reckless driving, ignoring health issues, working in dangerous occupations, participating in war, etc. (This gender discrepancy might also be attributed to biological differences between males and females: women have been observed to be more resistant to infections and degenerative diseases than men.)
genderCounts =
Counts[Transpose[dataNoHeading][[7]] //. {"F" -> "Female",
"M" -> "Male"}];
genderPercentage = genderCounts/Total[Values[genderCounts]];
pieChartLabel1 =
Table[Style[
StringJoin[Keys[genderPercentage][[n]], " ",
ToString[Round[Values[genderPercentage][[n]]*100, 1]], "%"],
Bold, 14], {n, 1, Length[genderPercentage]}];
pieChart1 =
PieChart[Counts[Transpose[dataNoHeading][[7]]],
ChartLabels -> pieChartLabel1,
ChartStyle -> {Lighter[Pink], Lighter[Blue]},
PlotLabel ->
Style[Framed["Gender distribution of the worlds' oldest people"],
16]]
![pieChart1: the gender composition of the oldest people][2]
Among all the racial groups represented - Black, White, Asian, Hispanic - more than half of the oldest people are White (barChart1). This skewed racial representation may attest to the fact that North American and European countries industrialized earlier than the rest of the world.
Accordingly, the oldest people are four times as likely to have been born in G8 countries - the more developed, more resourceful nations of the world (barChart2). Furthermore, the oldest people who come from the same country tend to have similar ages, which suggests some influence of environmental conditions (clusterImage1).
Following the trend, the oldest people's last places of residence tend to cluster around the Eastern United States, Europe, or Japan, where advanced, life-extending medical services are available. Hence, the racial, place-of-birth, and place-of-death compositions of the oldest people all imply that socio-economic conditions greatly affect life-span. (Again, the contribution of genetic and biological factors cannot be discounted - nor can the fact that more developed countries keep better records and censuses of their people.)
raceCounts = Counts[Transpose[dataNoHeading][[6]]];
racePercentage = raceCounts/Total[Values[raceCounts]];
barChartLabel1 =
Table[Style[
Framed[StringJoin[Keys[racePercentage][[n]], " ",
ToString[Round[Values[racePercentage][[n]]*100, 1]], "%"]
], 12], {n, 1, Length[racePercentage]}];
barChart1 =
BarChart[raceCounts, ChartLabels -> barChartLabel1,
AxesLabel -> {"Race", "Counts"},
ChartStyle -> {White, Orange, Black, Brown, Gray},
PlotLabel ->
Style[Framed["Racial distribution of the worlds' oldest people"],
16], ImageSize -> Large]
![barChart1: racial distribution of the world oldest people][3]
EntityValue[Entity["HistoricalCountry", "Czechoslovakia"], "Flag"] =
EntityValue[Entity["Country", "CzechRepublic"], "Flag"];
birthPlaceList = Transpose[dataNoHeading][[8]];
birthCountryList =
Table[If[EntityTypeName[birthPlaceList[[i]]] == "Country" ||
EntityTypeName[birthPlaceList[[i]]] == "HistoricalCountry",
birthPlaceList[[i]], birthPlaceList[[i]]["Country"]], {i, 1,
Length[birthPlaceList]}];
deathPlaceList = Transpose[dataNoHeading][[9]];
deathCountryList =
Table[If[EntityTypeName[deathPlaceList[[i]]] == "Country" ||
EntityTypeName[deathPlaceList[[i]]] == "HistoricalCountry",
deathPlaceList[[i]], deathPlaceList[[i]]["Country"]], {i, 1,
Length[deathPlaceList]}];
f[x_] := Magnify[Framed[x], 0.1];
chartList2 =
Sort[Counts[#]] & /@
GatherBy[birthCountryList,
MemberQ[EntityList[EntityClass["Country", "G8"]], #] &];
chartList2Flag = Map[f, EntityValue[Keys[chartList2], "Flag"], {2}];
chartList2ForPlot =
Table[Table[
Labeled[chartList2[[n, m]], chartList2Flag[[n, m]]], {m, 1,
Length[chartList2[[n]]]}], {n, 1, Length[chartList2]}];
barChart2GroupLabel = {Placed[{Style[Framed["G8"], 14],
Style[Framed["non G8"], 14]}, Above], Automatic};
barChart2 =
BarChart[chartList2ForPlot, ChartStyle -> "Pastel",
ChartLabels -> barChart2GroupLabel,
AxesLabel -> {"Country", "Counts"},
PlotLabel ->
Style[Framed["Birth country of the worlds' oldest people"], 16],
ImageSize -> Large]
![barChart2: birth country of the oldest people][4]
clusterFlagLabel =
Magnify[Framed[#], 0.1] & /@ EntityValue[birthCountryList, "Flag"];
clusterImage1 =
ClusteringTree[
UnitConvert[Drop[data[[All, 5]], 1], "Year"] -> clusterFlagLabel,
ClusterDissimilarityFunction -> "Centroid",
GraphLayout -> "RadialEmbedding",
PlotLabel ->
Style[Framed[
"Cluster by age of the oldest people\nwith respect to their \
birthplace "], 16], ImageSize -> Large]
![clusterImage1: cluster by age of the oldest people][5]
mapPlot1 =
GeoGraphics[{GeoMarker[deathPlaceList,
EntityValue[Entity["Icon", "MensRoom"], "Image"]]},
GeoRange -> "World", GeoBackground -> "Coastlines",
GeoProjection -> "Robinson", ImageSize -> Full,
PlotLabel ->
Style[Framed[
"Current residence or place of death since 1955\nof the world's \
oldest people"], 20]]
![mapplot1: place of death of the oldest people][6]
Finally, the oldest people are living longer: the linear model relating date of birth to age predicts that age increases by 1.11 years each decade (listPlot1). This increasing trend in the oldest people's life-span correlates with better living conditions worldwide.
dateOfBirthAgeList =
TimeSeries[
Transpose[
Append[{Transpose[dataNoHeading][[3]]},
QuantityMagnitude[
UnitConvert[Transpose[dataNoHeading][[5]], "Year"]]]]];
datePairList = dateOfBirthAgeList["Path"];
modelFit = LinearModelFit[dateOfBirthAgeList, x, x];
modelFitList =
Table[{x, modelFit[x]}, {x, First[datePairList][[1]],
Last[datePairList][[1]], (
Last[datePairList][[1]] - First[datePairList][[1]])/50}];
plotlabel =
Style[Framed[
"Date of birth and age of the oldest people, in blue\nwith \
best-fitted line in dashed orange"], 16];
listPlot1 =
DateListPlot[{dateOfBirthAgeList, modelFitList},
PlotLabel -> plotlabel, FrameLabel -> {"Date of Birth", "Age"},
PlotStyle -> {Thick, {Thick, Dashed}}, Joined -> {False, True},
PlotMarkers -> {All, None}, ImageSize -> Large]
ageDifference = modelFit[10*365.25*24*60*60] - modelFit[0];
rSquare = modelFit["RSquared"];
Print["The best-fitted linear model predicts that the oldest people's \
age would increase by ", ageDifference, " years each decade."];
![listPlot1: Date of birth and age of oldest people][7]
In conclusion, the oldest people are predominantly women, mostly white, and very likely to have been born or lived in developed countries. The gender, racial, and geographical distributions of the oldest people show the impact of socio-economic conditions on life-span. The age of the oldest people is increasing, correlating with the overall advances in technology and healthcare.
The correlation between (the number of) oldest people and improved living conditions suggests certain directions for improving life-span in less-developed parts of the world: providing healthcare, improving living conditions, discouraging/alleviating (male) idiosyncratic choices, investing in research and development, etc.
Bonus
-----
Dynamic plot/video of the oldest people:
ClearAll[beginDate, endDate, beginYear, endYear, listLiveDeath,
listLive, listDeath, plotMap, geoLive, geoGold, geoRed, geoBlue,
listDate, listDateValue];
beginDate = First[dataNoHeading][[3]] - Quantity[1, "years"];
endDate = Last[data][[4]] + Quantity[0, "years"];
beginYear = DateObject[beginDate, "Year"];
endYear = DateObject[endDate, "Year"];
dayRange = QuantityMagnitude[UnitConvert[endDate - beginDate, "Days"]];
listDate =
Union[Transpose[dataNoHeading][[3]], Transpose[dataNoHeading][[4]]];
listDateValue =
QuantityMagnitude[
UnitConvert[DateDifference[beginDate, #] & /@ listDate, "Days"]];
plotMap[time_] :=
plotMap[time] =
Module[{t = time, listDeath, listLive, timeDeathPlaceListDeath,
timeDeathPlaceListLive, geoLive, geoBlue, plotLabel},
{listLive =
Select[dataNoHeading,
And[QuantityMagnitude[
DateDifference[#[[3]], beginYear + Quantity[t, "days"]]] >=
0, QuantityMagnitude[
DateDifference[#[[4]],
beginYear + Quantity[t, "days"]]] <= 0] &];
plotLabel =
Style[Framed[
"The world's oldest people who are still alive at time t\n\
Blue: the oldest person, Red: people who are going to be oldest"], 20];
Which[
Length[listLive] == 0,
{GeoGraphics[GeoRange -> "World", GeoBackground -> "Coastlines",
GeoProjection -> "Robinson", ImageSize -> Full,
PlotLabel -> plotLabel]
},
Length[listLive] == 1,
{timeDeathPlaceListLive = Transpose[listLive][[9]];
geoBlue =
GeoMarker[First[timeDeathPlaceListLive],
EntityValue[Entity["Icon", "MensRoom"], "Image"],
"Color" -> Blue, "Scale" -> Scaled[0.04]];
GeoGraphics[{geoBlue}, GeoRange -> "World",
GeoBackground -> "Coastlines", GeoProjection -> "Robinson",
ImageSize -> Full, PlotLabel -> plotLabel]
},
Length[listLive] > 1,
{timeDeathPlaceListLive = Transpose[listLive][[9]];
geoBlue =
GeoMarker[First[timeDeathPlaceListLive],
EntityValue[Entity["Icon", "MensRoom"], "Image"],
"Color" -> Blue, "Scale" -> Scaled[0.04]];
geoLive =
GeoMarker[Drop[timeDeathPlaceListLive, 1],
EntityValue[Entity["Icon", "MensRoom"], "Image"]];
GeoGraphics[{geoBlue, geoLive}, GeoRange -> "World",
GeoBackground -> "Coastlines", GeoProjection -> "Robinson",
ImageSize -> Full, PlotLabel -> plotLabel]
}]
}];
Table[plotMap[t], {t, listDateValue}];
Hold[
plotMap[time_] :=
plotMap[time] =
Module[{t = time, listDeath, listLive, timeDeathPlaceListDeath,
timeDeathPlaceListLive, geoLive},
{listLive =
Select[dataNoHeading,
And[QuantityMagnitude[
DateDifference[#[[3]],
beginYear + Quantity[t, "days"]]] >= 0,
QuantityMagnitude[
DateDifference[#[[4]],
beginYear + Quantity[t, "days"]]] <= 0] &];
If[listLive == {},
{GeoGraphics[GeoRange -> "World",
GeoBackground -> "Coastlines", GeoProjection -> "Robinson",
ImageSize -> Full,
PlotLabel ->
Style[Framed[
"The world's oldest people who are still alive at time \
t"], 20]]},
{timeDeathPlaceListLive = Transpose[listLive][[9]];
geoLive = {GeoMarker[timeDeathPlaceListLive,
EntityValue[Entity["Icon", "MensRoom"], "Image"]]};
GeoGraphics[geoLive, GeoRange -> "World",
GeoBackground -> "Coastlines", GeoProjection -> "Robinson",
ImageSize -> Full,
PlotLabel ->
Style[Framed[
"The world's oldest people who are still alive at time \
t"], 20]]
}]
}];
Table[plotMap[t], {t, 0, dayRange, 2000}];
];
mapAnimate =
Animate[plotMap[t], {t, listDateValue}, DefaultDuration -> 20,
AnimationRunning -> False]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2019-06-06problemT1datatable.PNG&userId=1707333
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2019-06-06ProblemT1pieChart1.png&userId=1707333
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2019-06-06ProblemT1barChart1.png&userId=1707333
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2019-06-06ProblemT1barChart2.png&userId=1707333
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2019-06-06ProblemT1clusterImage1.png&userId=1707333
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2019-06-06ProblemT1mapPlot.png&userId=1707333
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2019-06-06ProblemT1listPlot1.png&userId=1707333Nam Tran-Hoang2019-06-19T06:12:55ZObtain series expansions using Frobenius Method
https://community.wolfram.com/groups/-/m/t/1705566
## Fail Cases ##
Series[Exp[-2 Pi/Sqrt[3] Hypergeometric2F1[1/2, 1/2, 1, 1 - x]/Hypergeometric2F1[1/2, 1/2, 1, x]], {x, 0, 5}]
Series[Exp[-2 Pi/Sqrt[3] Hypergeometric2F1[1/3, 2/3, 1, 1 - x]/Hypergeometric2F1[1/3, 2/3, 1, x]], {x, 0, 5}]
Series[Exp[-2 Pi/Sqrt[2] Hypergeometric2F1[1/4, 3/4, 1, 1 - x]/Hypergeometric2F1[1/4, 3/4, 1, x]], {x, 0, 5}]
Series[Exp[-2 Pi Hypergeometric2F1[1/6, 5/6, 1, 1 - x]/Hypergeometric2F1[1/6, 5/6, 1, x]], {x, 0, 5}]
![Fail Cases][1]
Looks like something is going wrong on your end. Or try typing one of these into Wolfram|Alpha:
![enter image description here][2]
No response is slightly better than printing out nonsense, but why shouldn't we try and do better? I asked Bill Gosper, and he also thinks these expansions need to be fixed. We could try to do something like this:
## Frobenius Method ##
TSol[PFCS_, nMax_] := With[{TAnsatz = {
Dot[a1 /@ Range[0, nMax], x^Range[0, nMax]],
Plus[Log[x] Dot[a1 /@ Range[0, nMax], x^Range[0, nMax]],
Dot[a2 /@ Range[0, nMax], x^Range[0, nMax]]]} /. {a1[0] -> 1,
a2[0] -> 0}}, TAnsatz /. Solve[# == 0 & /@
Flatten[CoefficientList[#, {x, Log[x]}][[1 ;; nMax]
] & /@ Dot[PFCS, D[TAnsatz, {x, #}] & /@ Range[0, 2]]],
Flatten[{a1 /@ Range[1, nMax], a2 /@ Range[1, nMax]}]
][[1]] /. {a1[_] -> 0, a2[_] -> 0}]
MapThread[With[{f1 = TSol[{#1 - 1, #1^2 (-1 + 2 x), #1^2 (-1 + x) x}, 14]},
Expand[1/#2 Normal[Series[Exp[f1[[2]]/f1[[1]]], {x, 0, 10}]] /. x -> #2 x]]
&, {{2, 3, 4, 6}, {16, 27, 64, 432}}]
Out[]:= {
x + 8 x^2 + 84 x^3 + 992 x^4 + 12514 x^5 + 164688 x^6 + 2232200 x^7 + 30920128 x^8 + 435506703 x^9 + 6215660600 x^10,
x + 15 x^2 + 279 x^3 + 5729 x^4 + 124554 x^5 + 2810718 x^6 + 65114402 x^7 + 1538182398 x^8 + 36887880105 x^9 + 895303119303 x^10,
x + 40 x^2 + 1876 x^3 + 95072 x^4 + 5045474 x^5 + 276107408 x^6 + 15444602248 x^7 + 878268335296 x^8 + 50588345910799 x^9 + 2944021398570264 x^10,
x + 312 x^2 + 107604 x^3 + 39073568 x^4 + 14645965026 x^5 + 5609733423408 x^6 + 2182717163349896 x^7 + 859521859502348352 x^8 + 341679883727799750159 x^9 + 136868519056531319862408 x^10
}
I'm also willing to give a talk as to why I think these are important evaluations and how they fit into the wider context of what we can possibly hope to accomplish using Mathematica.
Cheers --Brad
PS. Don't feel too bad, other than A005797, these expansions are not in OEIS either.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=FailCases.png&userId=234448
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshotfrom2019-06-1603-14-17.png&userId=234448Brad Klee2019-06-16T08:39:24Z[✓] Deal with "Encountered non-numeric value for a derivative at t == 0" ?
https://community.wolfram.com/groups/-/m/t/1707096
Hi, I'm new to Mathematica and I'm facing a problem. I'm trying to solve a differential equation with NDSolve, but the error "Encountered non-numeric value for a derivative at t == 0" shows up. I understand the nature of the error but I don't know how to deal with it. The function I'm obtaining is defined on a small interval of t, from around 0 to 1. Here is the code:
V=NDSolve[{v'[t]==-g +(a*ve)/(mr+mw*Exp[-ve*t]) +(mw*ve*Exp[-ve*t])/(mr+mw*Exp[-ve*t])*v[t]-(b/(mr+mw*Exp[-ve*t]))*v[t]^2 ,v[0]==0.01},v[t],{t,0.01,0.4}]
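For what it's worth, this message typically appears when symbolic parameters (here g, a, ve, mr, mw, b) have no numeric values, so the right-hand side cannot be evaluated numerically at t == 0. A sketch with made-up, purely illustrative parameter values:

    (* illustrative values only; substitute your own physical parameters *)
    With[{g = 9.81, a = 1., ve = 100., mr = 1., mw = 0.5, b = 0.01},
     NDSolve[{v'[t] == -g + (a ve)/(mr + mw Exp[-ve t]) +
         (mw ve Exp[-ve t])/(mr + mw Exp[-ve t]) v[t] -
         (b/(mr + mw Exp[-ve t])) v[t]^2, v[0] == 0.01},
      v[t], {t, 0, 0.4}]]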
Here is a screenshot:
![enter image description here][1]
Thank you!
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-19at02.42.47.png&userId=1707082L GonGa2019-06-19T01:44:35ZA primer on Association and Dataset
https://community.wolfram.com/groups/-/m/t/1167544
*NOTE: all Wolfram Language code and data are available in the attached notebook at the end of the post.*
----------
For my class this fall, I developed a little primer on Association and Dataset that I think might be useful for many people. So, I'm sharing the attached notebook. It's somewhat about the concepts embedded inside these features. It's intended for people at a beginner-to-intermediate level of Mathematica/Wolfram Language programming but might be of value even to some more advanced users who have not poked about the Dataset functionality.
The sections of the notebook are:
1. The world before Associations and Datasets
2. Datasets without Associations
3. Enter the Association
4. Creating a Dataset from a List of Associations
5. Nice queries with Dataset
6. Query
7. Some Recipes
#The world before Associations and Datasets#
Here's an array of data. The data happens to represent the cabin class, age, gender, and survival of some of the passengers on the Titanic.
t = {{"1st", 29, "female", True}, {"1st", 30, "male", False}, {"1st",
58, "female", True}, {"1st", 52, "female", True}, {"1st", 21,
"female", True}, {"2nd", 54, "male", False}, {"2nd", 29, "female",
False}, {"3rd", 42, "male", False}};
As it stands, our data is a List of Lists.
Head[t]
> List
Head /@ t
> {List, List, List, List, List, List, List, List}
Suppose I wanted to get the second and fifth rows of the data. This is how I could do it.
t[[{2, 5}]]
> {{"1st", 30, "male", False}, {"1st", 21, "female", True}}
Suppose we want to group the passengers by gender and then compute the mean age. We could do this with the following pretty confusing code.
grouped = GatherBy[t, #[[3]] &];
justTheAges = grouped[[All, All, 2]];
Mean /@ justTheAges
> {189/5, 42}
Or I could write it as a one liner this way.
Map[Mean, GatherBy[t, #[[3]] &][[All, All, 2]]]
> {189/5, 42}
But either way, realize that I have to remember that gender is the third column and that age is the second column. When there is a lot of data, this can get hard to remember.
#Datasets without Associations#
I could, if I wanted, convert this data into a Dataset. I do this below simply by wrapping Dataset around t. You see there is now some formatting around the data. But there are no column headers (because no one has told Dataset what to use). And there are no row headers, again because no one has told Dataset what to use.
t2 = Dataset[t]
![enter image description here][1]
The head of the expression has changed.
Head[t2]
> Dataset
Now I can access the data in a different way.
Query[{2, 5}][t2]
![enter image description here][2]
Or, I can do this. Mathematica converts this expression into Query[{2,5}][t2]; the form t2[{2,5}] is just syntactic sugar.
t2[{2, 5}]
![enter image description here][3]
##Digression : Using Query explicitly or using syntactic sugar##
Why, by the way, would anyone use the longer form if Mathematica does the work for you? Suppose you want to store a Dataset operation -- perhaps a complex series of Dataset operations -- but you want it to work not just on a particular Dataset but on any compatible Dataset. Here's how you could do it.
q = Query[{2, 5}]
> Query[{2, 5}]
q[t2]
![enter image description here][4]
Now, let's scramble the rows of the t2 Dataset (taking seven of the eight rows in a new order).
t2Scrambled = t2[{1, 4, 8, 3, 2, 7, 5}]
![enter image description here][5]
We can now run the q operation on t2Scrambled. Notice that the output has changed even though the query has stayed the same.
q[t2Scrambled]
![enter image description here][6]
We can also generate Query objects with functions. Here's a trivial example. Few languages that I am aware of can generate queries by applying a function; the one other example is Julia.
makeASimpleQuery[n_] := Query[n]
makeASimpleQuery[{3, 4, 7}][t2]
![enter image description here][7]
##MapReduce operations on Dataset objects##
Now, if I want to know the mean ages of the genders I can use this code. This kind of grouping of data and then performing some sort of aggregation operation on the groups is sometimes known as a MapReduce. (I'm not a fan of the name, but it is widely used). It's also sometimes known as a rollup or an aggregation.
Query[GroupBy[#[[3]] &], Mean, #[[2]] &][t2]
![enter image description here][8]
Or we can use this shorthand form, in which the Query is constructed for us.
t2g = t2[GroupBy[#[[3]] &], Mean, #[[2]] &]
![enter image description here][9]
I think this is a little cleaner. But we still have to remember the numbers of the columns, which can be challenging.
By the way, just to emphasize how we can make this all functional, here's a function that creates a query that can run any operation (not just computing the mean) on the Dataset grouped by gender and then working on age.
genderOp[f_] := Query[GroupBy[#[[3]] &], f, #[[2]] &]
genderOp[Max][t2]
![enter image description here][10]
To test your understanding, see if you can find the minimum age for each class of passenger on the Titanic in our Dataset **t2**.
Query[GroupBy[#[[1]] &], Min, #[[2]] &][t2]
![enter image description here][11]
#Enter the Association#
##Review of Association##
If you feel comfortable with Associations, you can skip this section; otherwise read it carefully. Basically the key to understanding most Dataset operations is understanding Associations.
###Construction of Associations###
Now let's alter the data so that we don't have to remember those facts. To do this we will create an **Association**. Here's an example called **assoc1**. Notice that we do so by creating a sequence of rules and then wrapping it in an Association head. Notice that the standard output does not preserve the word "Association" as the head but, just as List is outputted as stuff inside curly braces, Association is outputted as stuff inside these funky "<|" and "|>" glyphs.
assoc1 = Association["class" -> "1st", "age" -> 29, "gender" -> "female", "survived" -> True]
> <|"class" -> "1st", "age" -> 29, "gender" -> "female", "survived" -> True|>
I could equivalently have created a list of rules rather than a sequence. Mathematica would basically unwrap the **List** and create a sequence.
assoc1L = Association[{"class" -> "1st", "age" -> 29, "gender" -> "female", "survived" -> True}]
> <|"class" -> "1st", "age" -> 29, "gender" -> "female", "survived" -> True|>
We can use **AssociationThread** to create Associations in a different way. The first argument is the list of things that go on the left hand side of the Rules -- the "keys" -- and the second argument is the list of things that go on the right hand side of the Rules -- the "values".
assoc1T = AssociationThread[{"class", "age", "gender", "survived"}, {"1st", 29, "female", True}]
> <|"class" -> "1st", "age" -> 29, "gender" -> "female", "survived" -> True|>
Now let's use the **AssociationThread** function to create a list of Associations similar to our original data.
convertListToAssociation =
list \[Function]
AssociationThread[{"class", "age", "gender", "survived"}, list]
> Function[list, AssociationThread[{"class", "age", "gender", "survived"}, list]]
I start with t and Map the **convertListToAssociation** function over the rows of the data. I end up with a list of Associations.
t3 = Map[convertListToAssociation, t]
![enter image description here][12]
###Keys and Values###
Associations have keys and values. Data structures like this exist in other computer languages under different names: *Python* and *Julia* call them dictionaries. *Go* and *Scala* call them maps. *Perl* and *Ruby* call them hashes. *Java* calls them *HashMap*s. And *JavaScript* calls them objects. But they all work pretty similarly. Anyway, the keys of an **Association** are the things on the left hand side of the Rules.
Keys[assoc1]
> {"class", "age", "gender", "survived"}
And the values of an Association are the things on the right hand side of the Rules.
Values[assoc1]
> {"1st", 29, "female", True}
That's about all there is to it. Except for one thing. Take a look at the input and output that follows.
assoc2 = Association["a" -> 3, "b" -> 4, "a" -> 5]
> <|"a" -> 5, "b" -> 4|>
You can't have duplicate keys in an Association. So, when Mathematica confronts duplicate keys, it keeps the value from the last occurrence. You might think this is a minor point, but it is actually very important in coding. We will see why soon.
###Nested Associations###
A funny thing happens if you nest an **Association** inside another **Association**.
Association[assoc1, assoc2]
> <|"class" -> "1st", "age" -> 29, "gender" -> "female", "survived" -> True, "a" -> 5, "b" -> 4|>
You end up with a single un-nested (flat) association. That's a little unusual for Mathematica, but we can exploit this flattening as a way of adding elements to an Association.
Association[Association["dances" -> False], assoc1]
> <|"dances" -> False, "class" -> "1st", "age" -> 29, "gender" ->
> "female", "survived" -> True|>
Or, here's a function that exploits the flattening to add elements to an **Association**.
addstuff = Association[#, "dances" -> False, "sings" -> True] &
> Association[#1, "dances" -> False, "sings" -> True] &
addstuff[assoc1]
> <|"class" -> "1st", "age" -> 29, "gender" -> "female", "survived" -> True, "dances" -> False, "sings" -> True|>
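This last-key-wins behavior, combined with the flattening above, is also what makes it easy to *update* an existing key rather than add a new one. A small sketch (the value 30 here is just an arbitrary replacement age):

Association[assoc1, "age" -> 30]

> <|"class" -> "1st", "age" -> 30, "gender" -> "female", "survived" -> True|>

The later rule for "age" simply wins over the one already inside assoc1.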
###Extracting Values from Associations###
Just as the values contained in a **List** can be accessed by using the **Part** function, the values contained in an **Association** can likewise be accessed. Suppose, for example, that I wanted to compute double the age of the person in **assoc1**.
It turns out there are a lot of ways of doing this. The first is to treat the Association as a list except that the indices, instead of being integers, are the "keys" that are on the left hand side of the rules.
2*Part[assoc1, "age"]
> 58
2*assoc1[["age"]]
> 58
A second way is to use Query. We can wrap the "key" in the head **Key** just to make sure Mathematica understands that the thing is a Key.
2*Query[Key["age"]][assoc1]
> 58
Usually we can omit the Key and everything works fine.
2*Query["age"][assoc1]
> 58
A third way is to write a function that has an association as its argument.
af = Function[Slot["age"]]
> #age &
Now look what we can do.
2*Query[af][assoc1]
> 58
We can shorten this approach by using a simpler syntax for a function.
2*Query[#age &][assoc1]
> 58
Note, though, that this still will not work. Basically, Mathematica is confused: it thinks the function itself is the key.
2*assoc1[af]
> 2 Missing["KeyAbsent", #age &]
But there's a simple workaround: for a lookup this simple, I can just use the name of the key directly.
2*assoc1["age"]
> 58
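A fourth way, not used elsewhere in this post, is the built-in **Lookup** function, which also lets you supply a default value for keys that are absent:

2*Lookup[assoc1, "age"]

> 58

Lookup[assoc1, "height", "unknown"]

> "unknown"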
##A Note on Slot Arguments##
And please pay attention to this: sometimes the Mathematica parser gets confused when it confronts a "slot argument" written as #something. If you see this happening, write it as Slot["something"].
Slot["iamaslot"] === #iamaslot
> True
Here's another problem. What if the key in the association has spaces or non-standard characters in it? Any of these, for example, are perfectly fine keys: the string "I have a lot of spaces in me", the string "I_have_underscores", the symbol True, the integer 43. But if we try to denote those keys by putting a hash in front of them, it will lead to confusion and problems.
problemAssociation = Association["I have a lot of spaces in me" -> 1, "I_have_underscores" -> 2, True -> 3, 43 -> 4]
> <|"I have a lot of spaces in me" -> 1, "I_have_underscores" -> 2,
True -> 3, 43 -> 4|>
{Query[#I have a lot of spaces in me &][problemAssociation],
Query[#I_have_underscores &][problemAssociation]}
![enter image description here][13]
Here's a solution.
{Query[Slot["I have a lot of spaces in me"] &][problemAssociation],
Query[Slot["I_have_underscores"] &][problemAssociation]}
> {1, 2}
Here's how we solve the use of True and an integer as keys. We preface them with **Key**.
{Query[#True &][problemAssociation], Query[#43 &][problemAssociation]}
![enter image description here][14]
{Query[Key[True]][problemAssociation],
Query[Key[43]][problemAssociation]}
> {3, 4}
##Working with Associations and Lists of Associations##
Here's something we can do with the data in the form of an Association. I could ask for the gender of the person in the third row as follows. Notice I did not have to remember that "gender" was generally in the third position.
t3[[3]][["gender"]]
> "female"
So, even if I scramble the rows, I can still use the same code.
t3Scrambled = Map[convertListToAssociation, t[[All, {4, 1, 3, 2}]]]
![enter image description here][15]
t3Scrambled[[3]][["gender"]]
> "female"
I could also group the people according to their cabin class. Here I use Query on a list of Associations.
Query[GroupBy[#class &]][t3]
![enter image description here][16]
Again, the following code, which does not explicitly use **Query**, won't work. Basically, nothing has told Mathematica to translate t3[stuff___] \[RightArrow] Query[stuff][t3]. If t3 had a head of Dataset, Mathematica would know to make the translation.
t3[GroupBy[#class &]]
![enter image description here][17]
I can also get certain values for all the Associations in a list of Associations.
Query[All, #age &][t3]
> {29, 30, 58, 52, 21, 54, 29, 42}
I can also map a function onto the result. I don't have to go outside the Query form to do so.
Query[f, #age &][t3]
> f[{29, 30, 58, 52, 21, 54, 29, 42}]
Or, without exiting the Query form, I can map a function onto each element of the result.
Query[Map[f], #age &][t3]
> {f[29], f[30], f[58], f[52], f[21], f[54], f[29], f[42]}
I could also do the same thing as follows.
Query[All, #age &, f][t3]
> {f[29], f[30], f[58], f[52], f[21], f[54], f[29], f[42]}
#Creating a Dataset from a List of Associations#
To get full use out of Query and to permit syntactic shorthands, we need Mathematica to understand that the list of Associations is in fact a Dataset. Here's all it takes.
d3 = Dataset[t3]
![enter image description here][18]
We can recover our original list of associations by use of the **Normal** command.
t3 === Normal[d3]
> True
With the data inside a Dataset object we now have pretty formatting. But we have more.
We can still do this. We get the same result but in a more attractive form.
d3g = Query[GroupBy[#class &]][d3]
![enter image description here][19]
But now this shorthand works too.
d3g = d3[GroupBy[#class &]]
![enter image description here][20]
And compare these two pieces of code. When the data is in the form of a Dataset, Mathematica understands that the stuff in the brackets is not intended as a key but rather is to be transformed into a Query.
{Query[#age &][t3[[1]]], d3[[1]][#age &]}
> {29, 29}
##A Dataset that is an Association of Associations##
Let's look under the hood of **d3g**.
d3gn = Normal[d3g]
![enter image description here][21]
Note: if you *really* want to look under the hood of a **Dataset**, ask to see the **Dataset** in **FullForm**. You can also get more information by loading the undocumented Dataset` package, but this is definitely NOT recommended for the non-advanced user.
What we see is an Association in which each of the values is itself a list of Associations.
We can map a function over d3gn.
Map[f, d3gn]
![enter image description here][22]
I can of course do the mapping within the **Query** construct.
Query[All, f][d3gn]
![enter image description here][23]
If I try syntactic sugar, it doesn't work, because d3gn is not a Dataset.
d3gn[All, f]
> Missing["KeyAbsent", All]
But, if I use the Dataset version, it does work. (The first line may be an ellipsis depending on your operating system and display, but if you look under the hood it looks just like the values for 2nd and 3rd. I have no idea why an ellipsis is being inserted.)
d3g[All, f]
![enter image description here][24]
##A Dataset that just has a single Association inside##
We can also have a Dataset that just has a single Association inside. Mathematica presents the information with the keys and values displayed vertically.
Dataset[d3[[1]]]
![enter image description here][25]
In theory, we could have a Dataset that just had a single number inside it.
Dataset[6]
![enter image description here][26]
#Nice queries with Dataset#
Now I can construct a query that takes a dataset and groups it by the gender column. It then takes each grouping and applies the Mean function to at least part of it. What part? The "age" column part. Notice that I no longer have to remember that gender is the third column and age is the second column.
qd = Query[GroupBy[#gender &], Mean, #age &]
> Query[GroupBy[#gender &], Mean, #age &]
Now I can run this query on d3.
qd[d3]
![enter image description here][27]
We can now learn a lot about Query. So long as our data is in the form of a Dataset we can write the query as either a formal Query or use syntactic sugar.
#Query#
A major part of working with data is to understand **Query**. Let's start with a completely abstract **Query**, that we will call **q1**.
q1 = Query[f];
Now let's run q1 on t3.
q1[t3]
![enter image description here][28]
We end up with a list of Associations that has f wrapped around it at the highest level. It's the same as if I wrote the following code.
f[t3] === q1[t3]
> True
Now, let's write a **Query** that applies the function g at the top level of the list of associations and the function **f** at the second level, i.e. to each of the rows. Why does it work at the second level? Because it's the second argument to **Query**.
q2 = Query[g, f];
q2[t3]
![enter image description here][29]
The result is the same as if I mapped **f** onto t3 at its first level and then wrapped **g** around it.
g[Map[f, t3, {1}]] === q2[t3]
> True
Here's a function **firstchar** that takes the first character in a string.
firstchar = StringTake[#, 1] &
> StringTake[#1, 1] &
Now, let's construct a query **cg1** that applies **firstchar** to the class and gender keys in each row.
cg1 = Query[All,
a \[Function] MapAt[firstchar, a, {{"class"}, {"gender"}}]]
> Query[All, Function[a, MapAt[firstchar, a, {{"class"}, {"gender"}}]]]
We apply **cg1** to our little dataset **d3**.
cg1[d3]
![enter image description here][30]
What if we want to apply the same function to every element of the Dataset? We just apply it at the lowest level. Here's one way.
Query[Map[f, #, {-1}] &][d3]
![enter image description here][31]
We can also combine it with column-wise and entirety-wise operations. For reasons that are not clear, Mathematica can't understand this as a Dataset and returns the Normal form.
Query[(Map[f, #, {-1}] &) /* entiretywise, columnwise][d3]
![enter image description here][32]
Here's how we could actually build a multilevel **Query**.
Suppose we want to write a function that computes the fraction of the people in this little dataset that survived. The first step is simply going to be to extract the survival value and convert it to 1 if True and 0 otherwise. There's a built-in function Boole that does this.
{Boole[True], Boole[False]}
> {1, 0}
q3 = Query[something,
assoc \[Function] assoc["survived"] /. {True -> 1, _ -> 0}]
> Query[something, Function[assoc, assoc["survived"] /. {True -> 1, _
> -> 0}]]
q3[t3]
> something[{1, 0, 1, 1, 1, 0, 0, 0}]
So, now we have something wrapping a list of 1s and 0s. By making **something** the **Mean** function, we can achieve our result.
q4 = Query[Mean, Boole[#survived] &]
> Query[Mean, Boole[#survived] &]
q4[d3]
> 1/2
We can also examine survival by gender. Notice that **Query** is a little like **Association**: it gets automatically flattened.
Query[GroupBy[#gender &], q4][t3]
> <|"female" -> 4/5, "male" -> 0|>
If the data is held in a **Dataset**, we can also write the final step as follows.
d3[GroupBy[#gender &], q4]
![enter image description here][33]
Notice that even if we omit the "Query", this code works. Mathematica just figures out that you meant Query.
The code immediately above is in the form we typically see and often use.
#Some Recipes#
titanic = ExampleData[{"Dataset", "Titanic"}]
![enter image description here][34]
##How to add values to a Dataset based on data external to the existing columns##
Here's some additional data. Notice that the data is the same length as the titanic dataset.
stuffToBeAdded =
Table[Association["id" -> i,
"weight" -> RandomInteger[{80, 200}]], {i, Length[titanic]}]
![enter image description here][35]
We use **Join** at level 2.
augmentedTitanic = Join[titanic, stuffToBeAdded, 2]
![enter image description here][36]
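Here **Join** works because the extra rows line up positionally with the existing rows. If instead the two tables shared a key column, the built-in **JoinAcross** would be the tool. A minimal sketch on toy data (the lists and the "id" key here are made up for illustration):

JoinAcross[
 {<|"id" -> 1, "x" -> 10|>, <|"id" -> 2, "x" -> 20|>},
 {<|"id" -> 1, "y" -> 100|>}, "id"]

> {<|"id" -> 1, "x" -> 10, "y" -> 100|>}

By default JoinAcross performs an inner join, keeping only the rows whose keys match.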
##How to add a column to a Dataset based on values in the existing columns and to do so row-wise##
Notice that the query below does NOT change the value of the titanic dataset. To change the value of the titanic dataset, one would need to set titanic to the result of the computation. Remember, Mathematica generally does not have side effects or do modifications in place.
Query[All, Association[#, "classsex" -> {#class, #sex}] &][titanic]
![enter image description here][37]
We can add multiple columns this way.
Query[All,
Association[#, "classsex" -> {#class, #sex},
"agesqrt" -> Sqrt[#age]] &][titanic]
![enter image description here][38]
##How to change the value of an existing column, row-wise##
Here we age everyone by one year.
Query[All, Association[#, "age" -> #age + 1] &][titanic]
![enter image description here][39]
Here's how to change the value of columns selectively.
Query[All,
Association[#,
"age" -> If[#sex === "male", #age + 1, #age]] &][titanic]
![enter image description here][40]
##How to create a new column based on some aggregate operator applied to another column##
With[{meanAge = Query[Mean, #age &][titanic]},
Query[All,
Association[#, "ageDeviation" -> #age - meanAge] &]][titanic]
![enter image description here][41]
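Here is one more to get you started: a recipe for keeping only the rows that satisfy a condition, using **Select** inside a **Query** (the age cutoff of 60 is arbitrary; rows with a missing age fail the test and are simply dropped):

Query[Select[#age > 60 &]][titanic]

Or, using the syntactic sugar, titanic[Select[#age > 60 &]].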
Can you develop your own recipes?
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=17751.png&userId=20103
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=47222.png&userId=20103
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=60813.png&userId=20103
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=83664.png&userId=20103
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=44435.png&userId=20103
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=105416.png&userId=20103
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=49777.png&userId=20103
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=16898.png&userId=20103
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=28239.png&userId=20103
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=262710.png&userId=20103
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=749611.png&userId=20103
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=209912.png&userId=20103
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=925313.png&userId=20103
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=851614.png&userId=20103
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=627315.png&userId=20103
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=679516.png&userId=20103
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=674717.png&userId=20103
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=707518.png&userId=20103
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1053319.png&userId=20103
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=579820.png&userId=20103
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=980221.png&userId=20103
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=450322.png&userId=20103
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=113723.png&userId=20103
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=932624.png&userId=20103
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=754825.png&userId=20103
[26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=836826.png&userId=20103
[27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=393227.png&userId=20103
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=577928.png&userId=20103
[29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=158329.png&userId=20103
[30]: http://community.wolfram.com//c/portal/getImageAttachment?filename=984930.png&userId=20103
[31]: http://community.wolfram.com//c/portal/getImageAttachment?filename=664831.png&userId=20103
[32]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1090332.png&userId=20103
[33]: http://community.wolfram.com//c/portal/getImageAttachment?filename=450733.png&userId=20103
[34]: http://community.wolfram.com//c/portal/getImageAttachment?filename=740234.png&userId=20103
[35]: http://community.wolfram.com//c/portal/getImageAttachment?filename=143835.png&userId=20103
[36]: http://community.wolfram.com//c/portal/getImageAttachment?filename=36.png&userId=20103
[37]: http://community.wolfram.com//c/portal/getImageAttachment?filename=37.png&userId=20103
[38]: http://community.wolfram.com//c/portal/getImageAttachment?filename=38.png&userId=20103
[39]: http://community.wolfram.com//c/portal/getImageAttachment?filename=39.png&userId=20103
[40]: http://community.wolfram.com//c/portal/getImageAttachment?filename=40.png&userId=20103
[41]: http://community.wolfram.com//c/portal/getImageAttachment?filename=41.png&userId=20103

— Seth Chandler, 2017-08-20

Visualizing hours of daylight on the summer solstice
https://community.wolfram.com/groups/-/m/t/1706977
The climatologist Brian Brettschneider shared a nice [visualization][1] showing the hours of daylight on the summer solstice.
[![enter image description here][2]][1]
We can easily create a similar visualization using the Wolfram Language. First let's get the date of the summer solstice, from Wolfram|Alpha:
summerSolstice = DateObject[{2019, 6, 21, 10, 42}, TimeZone -> "America/Chicago"]
![enter image description here][3]
Then we construct a function to get the number of hours from sunrise to sunset for a given latitude:
SunriseToSunset[lat_] :=
Sunset[GeoPosition[{lat, -90}], DateObject[{2019, 6, 21}]] -
Sunrise[GeoPosition[{lat, -90}], DateObject[{2019, 6, 21}]]
Compute the results for latitudes from the equator to the Arctic Circle:
data = With[{lats = Range[0, 66.5, 1]},
Transpose[{lats,
N@QuantityMagnitude[SunriseToSunset /@ lats, "Hours"]}]]
>
> {{0, 12.1167}, {1, 12.1833}, {2, 12.2333}, {3, 12.3}, {4, 12.35}, {5,
> 12.4167}, {6, 12.4667}, {7, 12.5333}, {8, 12.5833}, {9, 12.65}, {10,
> 12.7167}, {11, 12.7667}, {12, 12.8333}, {13, 12.8833}, {14,
> 12.95}, {15, 13.0167}, {16, 13.0833}, {17, 13.15}, {18,
> 13.2167}, {19, 13.2833}, {20, 13.35}, {21, 13.4167}, {22,
> 13.4833}, {23, 13.55}, {24, 13.6167}, {25, 13.6833}, {26,
> 13.7667}, {27, 13.85}, {28, 13.9167}, {29, 14.}, {30, 14.0833}, {31,
> 14.1667}, {32, 14.25}, {33, 14.3333}, {34, 14.4167}, {35,
> 14.5167}, {36, 14.6167}, {37, 14.7}, {38, 14.8}, {39, 14.9167}, {40,
> 15.0167}, {41, 15.1333}, {42, 15.25}, {43, 15.3667}, {44,
> 15.4833}, {45, 15.6167}, {46, 15.75}, {47, 15.9}, {48, 16.05}, {49,
> 16.2}, {50, 16.3667}, {51, 16.55}, {52, 16.7333}, {53,
> 16.9333}, {54, 17.15}, {55, 17.3833}, {56, 17.6167}, {57,
> 17.8833}, {58, 18.1833}, {59, 18.5167}, {60, 18.8667}, {61,
> 19.2833}, {62, 19.75}, {63, 20.3167}, {64,
> 21.0167}, {65, -1.96667}, {66, -0.783333}}
The last two values need 24 hours to be added, because we obtained the sunrise of the next day:
data[[-2]] += {0, 24}
> `{65, 22.0333}`
data[[-1]] += {0, 24}
> `{66, 23.2167}`
ListPlot[data]
![enter image description here][4]
Interpolate that data:
SunriseToSunsetFunction = Interpolation[data]
![enter image description here][5]
From this InterpolatingFunction object we can find the parallels that correspond to values of 12 to 24 hours of sunlight, in intervals of 30 minutes:
InverseSunriseToSunset[hours_] := Block[{lat},
lat /. FindRoot[SunriseToSunsetFunction[lat] - hours, {lat, 30, 0, 66.56}]]
hours = Range[12, 24, 0.5]
> {12., 12.5, 13., 13.5, 14., 14.5, 15., 15.5, 16., 16.5, 17., 17.5,
> 18., 18.5, 19., 19.5, 20., 20.5, 21., 21.5, 22., 22.5, 23., 23.5, 24.}
Quiet the extrapolation messages:
parallels = Quiet[InverseSunriseToSunset /@ hours]
> {0., 6.5, 14.75, 22.25, 29., 34.8377, 39.8359, 44.1294, 47.6667,
> 50.7311, 53.316, 55.509, 57.4022, 58.9516, 60.3359, 61.4843, 62.4667,
> 63.2934, 63.9798, 64.5051, 64.9703, 65.4042, 65.8216, 66.2328, 66.56}
Finally one gets the following map, with tooltips on the parallel lines instead of labels:
GeoGraphics[{Thick, Black, GeoPath["NorthernTropic"],
GeoPath["ArcticCircle"], Thin, Red,
Inner[Tooltip, GeoPath[{"Parallel", #}] & /@ parallels, hours, List]},
GeoProjection -> {"Orthographic", "Centering" -> {60, -50}},
GeoRange -> "World", GeoGridLines -> Automatic]
![enter image description here][6]
[1]: https://twitter.com/Climatologist49/status/1140789836963446784
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=6468ScreenShot2019-06-18at5.15.03PM.jpg&userId=20103
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=51641.png&userId=20103
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=32722.png&userId=20103
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=54113.png&userId=20103
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=39444.png&userId=20103

— Jose M. Martin-Garcia, 2019-06-18