Community RSS Feed
https://community.wolfram.com
RSS feed for Wolfram Community, showing discussions in the tag, sorted by activity.

Implementing WolframAlpha's Cryptogram Functionality In Mathematica
https://community.wolfram.com/groups/-/m/t/1707649
Summary
-------
WolframAlpha has a useful function called **cryptograms** which, given a word enciphered in a simple substitution alphabet, returns all possible English words that the enciphered word could represent. I thought it would be even more useful to be able to enter a list of ciphertext words (assumed to be enciphered in the same substitution alphabet), and get a list of all possible English words that could be represented by those words. An example might be "cryptograms UMVUYM QRZMSFRN". This would give a list like {{*people, asterisk*}, {*people, asterism*}, ..., {*proper, snarling*},..., {*proper, unarming*}, ...}. The word pairs not only match the patterns of the enciphered words, but are also consistent with an enciphering alphabet. Since WolframAlpha's **cryptograms** function does not have this functionality, I decided to implement it in Mathematica.
Note that this program is not intended to solve cryptograms automatically (the way [this program][1] from the Wolfram Demonstrations Project does). I think of it more as a tool to aid in the solution of more difficult problems.
Introduction
------------
In a simple substitution cipher, plaintext letters are replaced by ciphertext letters according to an enciphering rule. For example given the rule {p -> U, e -> M, o -> V, l -> Y} the word *people* becomes UMVUYM (here I use the convention that plaintext letters are lowercase and ciphertext letters are uppercase). The resulting ciphertext, presented without the enciphering rule, is sometimes called a cryptogram. The objective is to reconstruct the enciphering rule and read the original plaintext. An example cryptogram is:
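As an illustration of this convention, here is a minimal Python sketch of enciphering under a substitution rule (the `encipher` helper is hypothetical, for illustration only; it is not part of the Mathematica code below):

```python
# Apply a simple substitution rule to a plaintext word.
# Plaintext letters are lowercase, ciphertext letters uppercase,
# following the convention used in this post.
def encipher(word, rule):
    return "".join(rule[ch] for ch in word)

rule = {"p": "U", "e": "M", "o": "V", "l": "Y"}
print(encipher("people", rule))  # -> UMVUYM
```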
UWJPQTHB ZVCWFUOPQGN FEHPBNOGDX RGTHQNZVPF JWQMYZS TWQPNZFV
The Wolfram Alpha function **cryptograms** is a useful aid in solving short cryptograms like this. For example, given ciphertext word UMVUYM, Wolfram Alpha returns:
WolframAlpha["cryptograms UMVUYM"]
![UMVUYM][2]
Basically the function gives a list of all six-letter English words that have the first and fourth letters the same, the second and sixth the same, and no other repeats. Experienced solvers will recognize *people* as the most likely word on the list. If that doesn't lead to the solution of a given cryptogram, it would not be difficult to try each of the remaining 8 possibilities.
Unfortunately, in many cases the list of words produced is much longer. For example:
WolframAlpha["cryptograms QRZMSFRN"]
![QRZMSFRN][3]
Here we get a list of eight-letter words in which no letters are repeated. There are 409 such words -- far more than can be checked by hand. But it occurred to me that if it were possible to cross-index the lists generated for two or more ciphertext words, the number of possible solutions could be reduced considerably.
What I mean by cross-index is this: Assume that UMVUYM and QRZMSFRN both come from the same cryptogram. Given the two lists of possible words: {*aerate, balboa, briber, esters, laelia, people, proper, tantra, thatch, triter*} and {*acrylics, adenoids, anderons, ..., zirconia*}, there are 9x409 = 3,681 word pairs representing possible solutions. The first pair is {*aerate, acrylics*}. Are these two words compatible with UMVUYM and QRZMSFRN? No, because the first implies U -> a and the second implies Q -> a. The second pair, {aerate, anderons} leads to a similar contradiction.
Automating the Process
----------------------
Unfortunately, I can't get the Wolfram Language to disclose the code behind **cryptograms**. This is supposed to work, but it fails:
WolframAlpha["cryptograms UMVULM", "WolframParse"]
So I needed to write my own version. I began with the following function, which finds the position of repeated letters in a word:
pattern[wrd_String] :=
StringPosition[wrd, #][[All, 1]] & /@
Select[Tally[Characters[wrd]], #[[2]] > 1 &][[All, 1]]
We know that the word *people* has the first and fourth letters the same, and the second and sixth letters the same. That's exactly what this function returns:
pattern["people"]
{{1, 4}, {2, 6}}
Note that since non-repeated letters at the end of a word do not affect the output of **pattern** ...
pattern["peopled"]
{{1, 4}, {2, 6}}
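For readers following along outside Mathematica, the same repeat-position logic can be sketched in Python (1-based positions, to match the output above; this is an illustration, not the author's code):

```python
# Positions (1-based) of each letter that occurs more than once.
# Insertion order of the dict matches Tally's first-occurrence order,
# so the result mirrors the Mathematica pattern[] function above.
def pattern(word):
    positions = {}
    for i, ch in enumerate(word, start=1):
        positions.setdefault(ch, []).append(i)
    return [p for p in positions.values() if len(p) > 1]

print(pattern("people"))   # -> [[1, 4], [2, 6]]
print(pattern("peopled"))  # -> [[1, 4], [2, 6]]
```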
...we still have to check if candidate plaintext words have the same length as the ciphertext word. Here then is a function that generates a list of English words compatible with a given ciphertext word:
wordList[wrd_String] :=
With[{pat = pattern[wrd], len = StringLength[wrd]},
Select[WordList["KnownWords", IncludeInflections -> True],
If[StringLength[#] != len, False, SameQ[pattern[#], pat]] &]]
This operates just like WolframAlpha's **cryptograms** function: given a ciphertext word, the function checks every word in the dictionary sequentially to see whether it has the same length and pattern as the ciphertext word, and returns a list of those that do. We can try this out with a ciphertext word like XJXDRWX, which I just made up:
wordList["XJXDRWX"]
{"acantha", "acapnia", "acardia", "amastia", "anaemia", "elegise", \
"elegize", "elevate", "epergne", "eremite", "execute", "eyehole", \
"eyelike", "eyesore", "sisters", "susliks", "systems"}
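The same length-and-pattern filter can be sketched in Python against a toy dictionary (the five-word list here is hypothetical, standing in for `WordList["KnownWords"]`):

```python
# Positions (1-based) of each repeated letter, as in the pattern sketch.
def pattern(word):
    positions = {}
    for i, ch in enumerate(word, start=1):
        positions.setdefault(ch, []).append(i)
    return [p for p in positions.values() if len(p) > 1]

# Keep dictionary words with the same length and repeat pattern
# as the ciphertext word.
def word_list(cipher_word, dictionary):
    pat = pattern(cipher_word)
    return [w for w in dictionary
            if len(w) == len(cipher_word) and pattern(w) == pat]

dictionary = ["people", "proper", "planet", "puzzle", "popped"]
print(word_list("UMVUYM", dictionary))  # -> ['people', 'proper']
```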
Now we need a function that checks if a list of plaintext words is compatible with a list of ciphertext words:
compatible[cw_List, pw_List] :=
 Module[{cta = <||>, pta = <||>, ct = Characters[StringJoin[cw]],
   pt = Characters[StringJoin[pw]]},
  Catch[
   MapThread[
    (If[KeyFreeQ[cta, #1], cta = Join[cta, <|#1 -> #2|>],
       If[cta[#1] != #2, Throw[False]]];
      If[KeyFreeQ[pta, #2], pta = Join[pta, <|#2 -> #1|>],
       If[pta[#2] != #1, Throw[False]]];) &,
    {ct, pt}];
   True]]
What I'm doing here is building the enciphering rule and its inverse one letter at a time using MapThread. If any ciphertext letter is found to correspond to two different plaintext letters -- or if any plaintext letter is found to correspond to two different ciphertext letters -- then we know the list of plaintext words is incompatible with the list of ciphertext words. The function throws an exception, exits immediately and returns the value False. Otherwise if we get to the end of the strings and no exception was thrown, the lists are compatible and the function returns True.
For example:
compatible[{"UMVUYM", "QRZMSFRN"}, {"people", "asterisk"}]
True
compatible[{"UMVUYM", "QRZMSFRN"}, {"people", "antimony"}]
False
The second pair is incompatible because *people* implies M -> e while *antimony* implies M -> i.
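The same bookkeeping can be sketched in Python: build the forward and inverse letter maps as you go, and fail on the first conflict. This is an illustration of the approach, not the author's code; it assumes the joined word lists have equal length (which `wordList` guarantees):

```python
# Build the substitution rule and its inverse letter by letter;
# any conflict means the words cannot share one cipher alphabet.
def compatible(cipher_words, plain_words):
    c2p, p2c = {}, {}
    for c, p in zip("".join(cipher_words), "".join(plain_words)):
        if c2p.setdefault(c, p) != p or p2c.setdefault(p, c) != c:
            return False
    return True

print(compatible(["UMVUYM", "QRZMSFRN"], ["people", "asterisk"]))  # True
print(compatible(["UMVUYM", "QRZMSFRN"], ["people", "antimony"]))  # False
```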
Now, finally, we can write a function to cross-index lists of possible words corresponding to a list of ciphertext words:
cryptograms[ctwords_List] :=
Module[{ptwords, lst}, ptwords = wordList /@ ctwords;
lst = Tuples[ptwords]; Select[lst, compatible[ctwords, #] & ]]
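The cross-indexing step (Tuples plus the compatibility filter) can be sketched in Python as follows; the two-word candidate lists are toy data standing in for the full `wordList` output:

```python
from itertools import product

# Pairwise compatibility checker, as sketched earlier.
def compatible(cipher_words, plain_words):
    c2p, p2c = {}, {}
    for c, p in zip("".join(cipher_words), "".join(plain_words)):
        if c2p.setdefault(c, p) != p or p2c.setdefault(p, c) != c:
            return False
    return True

# Cross-index candidate lists: take every tuple of candidate words
# and keep only those consistent with a single cipher alphabet.
def cryptograms(cipher_words, candidates):
    # candidates[i] is the candidate list for cipher_words[i]
    return [combo for combo in product(*candidates)
            if compatible(cipher_words, combo)]

pairs = cryptograms(["UMVUYM", "QRZMSFRN"],
                    [["people", "proper"], ["asterisk", "antimony"]])
print(pairs)  # -> [('people', 'asterisk')]
```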
Let's go back to the cryptogram presented at the beginning of this post:
ctext = "UWJPQTHB ZVCWFUOPQGN FEHPBNOGDX RGTHQNZVPF JWQMYZS TWQPNZFV";
Obviously the running time of the function cryptograms depends on the number of possible plaintext words corresponding to each ciphertext word, so let's find the length of each list:
ctext = StringSplit[ctext];
Length /@ wordList /@ ctext
{4194, 321, 1028, 1028, 6018, 4194}
This suggests that checking the second and third words (or the second and fourth) would have the shortest running time:
cryptograms[{ctext[[2]], ctext[[3]]}]
{{"atmospheric", "squelching"}}
So there are no other possibilities. Knowing these two words would be enough to make a good start at solving what would otherwise be a very difficult cryptogram.
Unfortunately we don't always get such clear answers! Our original two words UMVUYM and QRZMSFRN give a list of 68 possible pairs:
cryptograms[{"UMVUYM", "QRZMSFRN"}]
{{"aerate", "Angevins"}, {"aerate", "cohesion"}, {"aerate",
"Numenius"}, {"aerate", "Poseidon"}, {"aerate",
"unsexing"}, {"balboa", "decanter"}, {"balboa",
"deranges"}, {"balboa", "detaches"}, {"balboa",
"encasing"}, {"balboa", "escapism"}, {"balboa",
"escapist"}, {"balboa", "kneading"}, {"balboa",
"Picardie"}, {"balboa", "recanted"}, {"balboa",
"regained"}, {"balboa", "remained"}, {"balboa",
"retained"}, {"balboa", "revamped"}, {"balboa",
"rifampin"}, {"balboa", "scraunch"}, {"balboa",
"sneaking"}, {"balboa", "treasury"}, {"balboa",
"Treasury"}, {"balboa", "uncaring"}, {"balboa",
"uncasing"}, {"balboa", "unfading"}, {"balboa",
"unmaking"}, {"balboa", "unsaying"}, {"briber",
"Hydromys"}, {"briber", "hydroxyl"}, {"briber",
"madronas"}, {"briber", "touracos"}, {"esters",
"flashily"}, {"laelia", "runabout"}, {"laelia",
"scraunch"}, {"laelia", "turacous"}, {"people",
"Angevins"}, {"people", "asterisk"}, {"people",
"asterism"}, {"people", "Bayesian"}, {"people",
"cadenzas"}, {"people", "Hibernia"}, {"people",
"Jamesian"}, {"people", "Madeiras"}, {"people",
"magentas"}, {"people", "Numenius"}, {"people",
"strength"}, {"people", "unsexing"}, {"people",
"waterman"}, {"proper", "dairyman"}, {"proper",
"madrigal"}, {"proper", "snarfing"}, {"proper",
"snarling"}, {"proper", "unarming"}, {"tantra",
"becalmed"}, {"tantra", "behalves"}, {"tantra",
"bewailed"}, {"tantra", "debacles"}, {"tantra",
"devalues"}, {"tantra", "escapism"}, {"tantra",
"Islamise"}, {"tantra", "legacies"}, {"tantra",
"limacoid"}, {"thatch", "Burhinus"}, {"thatch",
"keyholes"}, {"triter", "Hydromys"}, {"triter",
"hydroxyl"}, {"triter", "madronas"}}
However, given a third word FMSMZKRM from the same cryptogram, we do get a single result:
cryptograms[{"UMVUYM", "FMSMZKRM", "QRZMSFRN"}]
{{"people", "generate", "strength"}}
Others can undoubtedly improve on these functions. Obviously if we have a list of 68 possible pairs for two words, we're going to want to start there when we add in a third word, rather than testing all 8*409*48 = 157,056 triples as the function currently does. I leave these improvements as an exercise for the reader.
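One such improvement can be sketched in Python: instead of testing all tuples at once, extend each surviving tuple one ciphertext word at a time. The toy candidate lists below stand in for the full word lists, and `compatible` is the pairwise checker sketched earlier:

```python
# Pairwise compatibility checker, as sketched earlier.
def compatible(cipher_words, plain_words):
    c2p, p2c = {}, {}
    for c, p in zip("".join(cipher_words), "".join(plain_words)):
        if c2p.setdefault(c, p) != p or p2c.setdefault(p, c) != c:
            return False
    return True

# Incremental cross-indexing: only tuples that survive the first k
# words are ever extended with candidates for word k+1.
def cryptograms_incremental(cipher_words, candidates):
    partial = [()]
    for k, cands in enumerate(candidates):
        partial = [combo + (w,) for combo in partial for w in cands
                   if compatible(cipher_words[:k + 1], combo + (w,))]
    return partial

result = cryptograms_incremental(
    ["UMVUYM", "QRZMSFRN"],
    [["people", "proper"], ["asterisk", "antimony"]])
print(result)  # -> [('people', 'asterisk')]
```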
[1]: https://demonstrations.wolfram.com/SolveTheCryptoquoteAutomatically/
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=figure1.jpg&userId=66744
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=figure2.jpg&userId=66744

John Shonder, 2019-06-19T22:30:28Z

Announcing the Wolfram Function Repository on Community
https://community.wolfram.com/groups/-/m/t/1703373
In his [blog yesterday](https://blog.stephenwolfram.com/2019/06/the-wolfram-function-repository-launching-an-open-platform-for-extending-the-wolfram-language/), Stephen Wolfram announced the launch of the Wolfram Function Repository (WFR). While the blog was the official announcement, the repository has been in development for quite some time, with numerous functions already in use hereabouts. So we are, not surprisingly, quite excited to see it "go live", and I want to introduce it informally in the Community forum. I should mention that several Wolfram Community readers are already familiar with it from our live-streamed meetings, and a few have already had functions published there.
The idea is this. You have a function, say something like `MyDailyGoTo[str_String,n_Integer]`, and you think it is really useful for some purpose. You want it to be available to others, you want it to have a documentation page similar to the ones for functions built into the Wolfram Language, you want to have it when you are visiting someplace and forgot to bring your laptop, etc. You can now have all this. In a notebook, go to File > New > Repository Item > Function Repository Item. With four clicks you are off and running: a templated notebook appears, with various fields pre-populated, e.g. a name (MyFunction) and a description field. There is a Definition section where you replace the prefilled boilerplate with whatever you need to define your function. Fill in the Usage message section. Provide a few examples. You can have options, applications, show possible issues, fill in keywords and Related Symbols (in the Wolfram Language) for improved discoverability, whatever; all of these have fields that can be added to or left alone. The minimum requirement is that there be a usage message and one example.
If you are new to this, you do not yet know what the style expectations are. No big deal: just hit the Style Guidelines button at the top of the notebook and documentation appears. Want to see what an existing contribution looks like? Hit the Open Sample button. Don't know how to format things like the arguments in the Usage section? Heck, nobody knows how to do that (okay, three people know, but their identities are a corporate secret). Hit the Tools button and new buttons, among them Template Input, appear below the main bar. When you think you have something to submit, there is the Check button. Hit that and, well, things happen. An automated set of checks might bring up not just missing items or formatting issues, but also a set of possible actions to redress them. Once the notebook looks right and passes the Check criteria, you can see how it will look once published using the Preview button (with the option to see it in a new notebook or in the Wolfram Cloud). When everything looks just so, hit Submit to Repository. There is also a Deploy button, useful for various purposes when one does not want the work to go into the WFR; see the blog for details.
What happens next? The submission goes into a queue for review at our end. We have over 500 published functions and I have been involved in reviews of more than 300 of them (while not as prolific as some colleagues, I also wrote I think 14 of them). First thing we do is see if the name and description make sense. We then check for formatting issues, assess usability, run the examples, have a look at the code, check for related functions, and myriad other things. We even had a couple of "review" sessions live-streamed. (These were not the ordinary reviews insofar as the boss was in charge, and the functions under consideration were ones that were already published. Also, while I do not think we made this clear, we only went over a random selection from among those we ourselves had authored.)
And following a review? If a function is fine on the first go round (possibly after mild editing), it gets published. If it looks like a good idea but needs work (a common outcome, even for those we ourselves write) then a message is sent to the author requesting revisions. We try to give sufficient detail for this. And rest assured, we do not send to external authors things like a particular 'needs revisions' note I received, which began "Danny you're a moron".
Let me say a bit about the expectations for submissions to the WFR. We are not terribly rigid here. Functions you find useful in everyday work, or ones that do very particular tasks that are not covered by existing Wolfram Language functions, or... A function can work in a very narrow area, provided it is something that others in that area might find useful. Existing WFR functions hit areas from basic programming language extensions to specialized graphics to STEM to "just for fun" to user interface extensions to, well, functions for working with WFR functions (e.g. a message formatter specialized to the WFR). If we think a submission is too close to existing functionality, we will tell you that (this has happened with a few in-house submissions, though it seems to be quite infrequent). If we think changes are needed, well, I guess I covered that already. Functions need not be generalized to all manner of inputs; we simply ask that they fail gracefully (e.g. not crash) with unexpected ones. As with any curation effort (e.g. the Wolfram Demonstrations Project), we will exercise some judgement as to standards. But we are not reviewing at anything like the standards for the Wolfram Language itself.
So now you have a function in the WFR. Or maybe you are using a function in the WFR, whether yours or written by others. Can it be changed out from under you? We have in place a versioning system, which is still itself undergoing revisions. The idea is that things will be cached on your local machine and/or stored to a Cloud account. Revisions can be obtained when wanted, and avoided when not wanted. We expect we will have a hiccup or two as we proceed, but anticipate a smooth process in the not-distant future. Now suppose a function shows problems, or maybe could benefit from extensions: can this be addressed? Again, there is a mechanism for communicating to us and we are also working on facilitating direct communication to authors (when they opt in for this).
To be clear, we expect there might be growing pains. But the benefits of the WFR are rife: we now have a platform for external contributions of myriad functions, and already they are getting used in day to day workflows. I hope Community members will join in this venture, either by using these functions or by contributing some of your own favorites.

Daniel Lichtblau, 2019-06-12T16:08:15Z

Graphically solve a system of equations and plot them in R^3?
https://community.wolfram.com/groups/-/m/t/1707231
Hello,
I have an assignment that asks me to graphically solve the system of equations i.e. plot them in `ℝ^3` :
> x + 2y + 3z = 1
>
> 2x + 4y + 7z = 2
>
> 3x + 7y + 11z = 8
Currently, I've used ContourPlot3D for plotting my planes in a 3-dimensional space and then FindRoot in order to find the solution to the system.
ContourPlot3D[{x + 2 y + 3 z == 1, 2 x + 4 y + 7 z == 2,
3 x + 7 y + 11 z == 8}, {x, -10, 10}, {y, -10, 10}, {z, -10, 10}]
FindRoot[{x + 2 y + 3 z == 1, 2 x + 4 y + 7 z == 2,
3 x + 7 y + 11 z == 8}, {x, 0}, {y, 0}, {z, 0}]
My question is: have I correctly completed the assignment, and if so, why would I need to plot the three planes in `ℝ^3` when it doesn't really help me solve the problem?

Filip Fornell, 2019-06-18T22:24:18Z

Graphing Issues
https://community.wolfram.com/groups/-/m/t/1707570
For some reason I am having a lot of trouble graphing a function. The graph will appear, but no line will show up on it. I will attach what I am typing in and a picture of what shows up. Any help would be greatly appreciated!

Skylar Bair, 2019-06-19T16:02:23Z

Deduct by the rules for two schemes?
https://community.wolfram.com/groups/-/m/t/1707269
Consider the following code:
In[1]:= rules = {a -> b, b -> c, c -> x, b -> d, d -> y}; a //. rules
Out[1]= x
In the above program, "b" maps to two variables, "c" and "d", so the two branches lead to the two terminal results "x" and "y". But the program follows only one branch, a -> b -> c -> x, losing a -> b -> d -> y. Is there any approach to apply all the rules in the set, so that the output would be both "x" and "y"?

Math Logic, 2019-06-19T09:03:14Z

[WSS19] Solving the wave equation on a torus
https://community.wolfram.com/groups/-/m/t/1707287
In this post, I will share the code and methods I used to solve the wave equation on a torus. To construct a solution to the wave equation on a torus, we need the Laplace-Beltrami operator from differential geometry. For more information on this operator, I recommend looking at this paper: [http://www.math.mcgill.ca/toth/spectral%20geometry.pdf][1].
Define coordinates, dimension, surface, and metric:
n = 2; coords = {u, v};
surf = {Cos[u] (2 + Cos[v]), (2 + Cos[v]) Sin[u], Sin[v]};
g =
FullSimplify[
Table[D[surf, coords[[i]]].D[surf, coords[[j]]], {i, 1, n}, {j, 1,
n}]];
ParametricPlot3D[surf, {u, 0, 2 \[Pi]}, {v, 0, 2 \[Pi]}]
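As a sanity check on the metric computed above, here is a small Python sketch (standard library only; the sample point (u, v) = (0.7, 1.3) is arbitrary) that approximates the induced metric by finite differences. For this parametrization the result should be diag((2 + cos v)^2, 1):

```python
import math

# The torus parametrization used in the post.
def surf(u, v):
    return (math.cos(u)*(2 + math.cos(v)),
            (2 + math.cos(v))*math.sin(u),
            math.sin(v))

# Induced metric g_ij = d_i surf . d_j surf via central differences.
def metric(u, v, h=1e-6):
    du = [(a - b)/(2*h) for a, b in zip(surf(u + h, v), surf(u - h, v))]
    dv = [(a - b)/(2*h) for a, b in zip(surf(u, v + h), surf(u, v - h))]
    dot = lambda x, y: sum(a*b for a, b in zip(x, y))
    return [[dot(du, du), dot(du, dv)],
            [dot(dv, du), dot(dv, dv)]]

u, v = 0.7, 1.3
g = metric(u, v)
print(g)  # approximately [[(2 + cos v)^2, 0], [0, 1]]
```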
![enter image description here][2]
Find the inverse metric:
ginv = Inverse[g];
Find the determinant of the metric:
detg = Det[g];
For some extra *fun*, we can find the [Christoffel symbols of the second kind][3], [the Riemann curvature tensor][4], the [Ricci curvature tensor][5], and the [Riemann scalar curvature][6]:
\[CapitalGamma] =
FullSimplify[ginv.Table[
1/2 (D[g, coords[[j]]][[k, i]] +
D[g, coords[[i]]][[k, j]] -
D[g, coords[[k]]][[i, j]]), {k, 1, n}, {i, 1, n}, {j,
1, n}]];
riemann =
FullSimplify[
Table[Sum[
D[\[CapitalGamma][[l, i, k]], coords[[j]]] -
D[\[CapitalGamma][[l, i, j]],
coords[[k]]] + \[CapitalGamma][[l, j, s]] \[CapitalGamma][[s,
i, k]] - \[CapitalGamma][[l, k, s]] \[CapitalGamma][[s, i,
j]], {s, 1, n}], {l, 1, n}, {i, 1, n}, {j, 1, n}, {k, 1, n}]];
ricci = FullSimplify[
Sum[Table[
D[\[CapitalGamma][[l, i, j]], coords[[l]]] -
D[\[CapitalGamma][[l, i, l]],
coords[[j]]] + \[CapitalGamma][[m, i, j]] \[CapitalGamma][[l,
l, m]] - \[CapitalGamma][[m, i, l]] \[CapitalGamma][[l, j,
m]], {i, 1, n}, {j, 1, n}], {m, 1, n}, {l, 1, n}]];
scalar = FullSimplify[
Sum[ginv[[i, j]] ricci[[i, j]], {i, 1, n}, {j, 1, n}]];
However, what really interests us is the Laplace-Beltrami operator.
\[ScriptCapitalL][f_, coord_] :=
 Sum[1/Sqrt[detg] D[
    ginv[[\[Mu], \[Nu]]] Sqrt[detg] D[f, coord[[\[Nu]]]],
    coord[[\[Mu]]]], {\[Mu], 1, n}, {\[Nu], 1, n}];
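To see the operator in action numerically, here is a Python finite-difference sketch for this particular torus, where sqrt(det g) = 2 + cos v, g^uu = 1/(2 + cos v)^2, and g^vv = 1 (the test function cos u and the sample point are arbitrary choices for illustration):

```python
import math

# Laplace-Beltrami operator on the torus, in divergence form:
# (1/sqrt(g)) * [d_u(g^uu sqrt(g) f_u) + d_v(g^vv sqrt(g) f_v)].
def laplace_beltrami(f, u, v, h=1e-4):
    w = lambda vv: 2 + math.cos(vv)  # sqrt(det g)
    # flux terms g^{mu mu} * sqrt(g) * df/dcoord (the metric is diagonal)
    flux_u = lambda uu, vv: (1/w(vv)) * (f(uu + h, vv) - f(uu - h, vv))/(2*h)
    flux_v = lambda uu, vv: w(vv) * (f(uu, vv + h) - f(uu, vv - h))/(2*h)
    div = ((flux_u(u + h, v) - flux_u(u - h, v))/(2*h)
           + (flux_v(u, v + h) - flux_v(u, v - h))/(2*h))
    return div / w(v)

# For f = cos u, the exact value is -cos(u)/(2 + cos v)^2.
u, v = 0.5, 1.1
value = laplace_beltrami(lambda uu, vv: math.cos(uu), u, v)
exact = -math.cos(u)/(2 + math.cos(v))**2
print(value, exact)  # the two should agree closely
```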
To solve a differential equation on a torus, we first need to create a gluing diagram to represent how our coordinates u and v map on to the torus:
![enter image description here][7]
We can define our quotient space:
quotientspace = Rectangle[{0, 0}, {2 \[Pi], 2 \[Pi]}];
We then need to specify appropriate boundary conditions. I will use the `PeriodicBoundaryCondition` symbol, allowing us to connect the opposite edges of the gluing diagram. This can also be thought of as creating a large grid of squares in which the solution to the partial differential equation repeats itself in each square.
bcs = {PeriodicBoundaryCondition[\[Psi][t, u, v], u == 2 \[Pi],
    TranslationTransform[{-2 \[Pi], 0}]],
   PeriodicBoundaryCondition[\[Psi][t, u, v], v == 2 \[Pi],
    TranslationTransform[{0, -2 \[Pi]}]]};
We then should specify some options:
opts = Method -> {"MethodOfLines", "TemporalVariable" -> t,
"SpatialDiscretization" -> {"FiniteElement",
"MeshOptions" -> {"MaxCellMeasure" -> 0.001}}};
Let's solve with the initial conditions being a small bump at u = Pi and v = Pi.
sol = NDSolveValue[{\[ScriptCapitalL][ \[Psi][t, u, v], coords] -
D[\[Psi][t, u, v], {t, 2}] == 0,
bcs, \[Psi][0, u, v] ==
Exp[-(8 (u - \[Pi]))^2 - (8 (v - \[Pi]))^2],
Derivative[1, 0, 0][\[Psi]][0, u, v] == 0}, \[Psi], {t, 0,
10}, {u, v} \[Element] quotientspace, opts];
To visualize the solution, we will use a color function. Let's define the color function first.
colorf = Function[t,
   Function[{x, y, z, u, v},
    ColorData["FuchsiaTones"][Rescale[sol[t, u, v], {-.1/3, .1/3}]]]];
To animate, we need to generate a set of frames:
frames = ParallelTable[
ParametricPlot3D[surf, {u, 0, 2 \[Pi]}, {v, 0, 2 \[Pi]},
MeshFunctions -> Automatic, ColorFunction -> colorf[t],
ColorFunctionScaling -> False, Boxed -> False, Axes -> False,
ViewPoint -> {0, 15, 10}], {t, 0, 10, 1/30}];
Putting this all together in combination with some heavy computing power (thanks Boston University SCC), we get an animation that can be found at this link: [https://drive.google.com/file/d/1gf0fMxZeM_SInLbLe_mtFkvg1cGDb5ss/view?usp=sharing][8]. Here are some screenshots from that animation:
![enter image description here][9]
![enter image description here][10]
![enter image description here][11]
![enter image description here][12]
![enter image description here][13]
![enter image description here][14]
![enter image description here][15]
As seen in the animation, `PeriodicBoundaryCondition` allows the wave to interact with itself, and this self-interaction creates an increasingly chaotic solution. In my experience, this behavior persists when adjusting the size of the torus and the size of the initial bump. There is one additional representation I would like to leave you with: we can perturb the surface by the solution to create a torus that 'wiggles.'
frames = Table[
ParametricPlot3D[{Cos[
u] (2 + (1 + 1.2 \[Psi][t, u, v]) Cos[
v]), (2 + (1 + 1.2 \[Psi][t, u, v]) Cos[v]) Sin[u],
Sin[v]}, {u, 0, 2 \[Pi]}, {v, 0, 2 \[Pi]},
ViewPoint -> {0, 15, 10}, PlotRange -> 3.5, Boxed -> False,
Axes -> False], {t, 0, 20, .1}];
![enter image description here][16]
[1]: http://www.math.mcgill.ca/toth/spectral%20geometry.pdf
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-18at12.03.59PM.png&userId=1705502
[3]: http://mathworld.wolfram.com/ChristoffelSymboloftheSecondKind.html
[4]: http://mathworld.wolfram.com/RiemannTensor.html
[5]: http://mathworld.wolfram.com/RicciCurvatureTensor.html
[6]: http://mathworld.wolfram.com/ScalarCurvature.html
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=6445torusgluing.png&userId=1705502
[8]: https://drive.google.com/file/d/1gf0fMxZeM_SInLbLe_mtFkvg1cGDb5ss/view?usp=sharing
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-19at6.44.26AM.png&userId=1705502
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-19at6.44.42AM.png&userId=1705502
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-19at6.45.32AM.png&userId=1705502
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-19at6.45.50AM.png&userId=1705502
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-19at6.46.40AM.png&userId=1705502
[14]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-19at6.47.02AM.png&userId=1705502
[15]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-19at6.47.20AM.png&userId=1705502
[16]: https://community.wolfram.com//c/portal/getImageAttachment?filename=wiggletorus.gif&userId=1705502

Emmy Blumenthal, 2019-06-19T11:15:50Z

Calculate the following integration (unable to do it after running it 4h)?
https://community.wolfram.com/groups/-/m/t/1707171
![enter image description here][1]
Help me to integrate it out.
After running for 4 hours, Mathematica was unable to do this integration.
Then I reduced the constants and variables and tried a similar type of integration with the variable 'r' in 1D.
![enter image description here][2]
I'm still not getting the result.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=required.png&userId=1706636
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=what_to_do.png&userId=1706636

Soumyaranjan Jhankar, 2019-06-19T12:32:36Z

[WSS19] World Oldest People: Age, Gender, Racial, Geographical Composition
https://community.wolfram.com/groups/-/m/t/1707390
![mapplot1: place of death of the oldest people][6]
Abstract
--------
The world's oldest people reflect the underlying medical, technological, and socio-economical conditions of their places of residence. The oldest people on record are ten times more likely to be female than male, supporting the traditional hypothesis that females make less risky social choices. Although all broad racial groups are represented, more than half of the oldest people on record since 1842 are white, and eighty percent were born in G8 countries, indicating the earlier advancement of Western European and North American healthcare systems. Similarly, the oldest people are clustered in the Eastern United States, Japan, Britain, and France, consistent with the correlation between socio-economical development and increased life-span. Furthermore, the world's oldest people are living longer - by an average of 1.11 years each decade - which demonstrates the overall improvement of global living conditions. The influence of socio-economic conditions on longevity suggests many directions for improving life-span around the world.
----------
I. Data Import
--------------
The list of the world's oldest people, curated by the Gerontology Research Group, was imported directly into Wolfram Mathematica using the following code:
ClearAll["Global`*"];
rawData1 = Import["http://archive.is/4kwbk", "Data"];
The data table - the only element of interest from the website - was located by trial and error: it is at level [[2, 1, 1]], from position 8 to position 71. In case the data table is updated in the future, a While loop was implemented to locate the table's last line:
count = 71;
continue = True;
While[And[continue == True, count < Length[rawData1[[2, 1, 1]]]],
If[NumberQ[First[rawData1[[2, 1, 1]][[count]]]],
count++,
continue = False
]];
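The same end-of-table scan can be sketched in Python (the toy `rows` list is hypothetical, standing in for rawData1[[2, 1, 1]]; the idea is simply to advance while the first cell is a number):

```python
# Scan rows until the first cell stops being a number, mirroring the
# While loop above that locates the end of the data table.
def find_table_end(rows, start):
    count = start
    while count < len(rows) and isinstance(rows[count][0], (int, float)):
        count += 1
    return count

rows = [["header"], [1, "row"], [2, "row"], [3, "row"], ["footer"]]
print(find_table_end(rows, 1))  # -> 4
```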
Since the last line of the data table describes the current oldest person, with a missing date of death/most recent alive date, it was processed separately - adding the current date - and manually appended to the table:
rawData2 = Take[rawData1[[2, 1, 1]], {8, count - 2}];
lastLine =
Insert[rawData1[[2, 1, 1, count - 1]], DateString["ISODate"], 5];
AppendTo[rawData2, lastLine];
This raw data table was not directly usable, as it contained disparate types of data in one cell or column: the years and days of age were separated, cities and countries mixed up, and annotations [] and parentheses () included within the data. Furthermore, it gave secondary/derived data: age, which can be obtained from the date of birth and the most recent alive date. Hence, the data was cleaned up:
rawData3 =
Extract[Transpose[
Sort[rawData2, #1[[4]] < #2[[4]] &]], {#} & /@ {1, 2, 3, 4, 5, 8,
9, 10}];
cleanfunc[s_] :=
StringReplace[
s, {" [" ~~ x___ ~~ "]" -> "", "(" ~~ x___ ~~ ")" -> x}];
cleanfuncRace[s_] :=
StringReplace[
s, {"W" -> "white", "B" -> "black", "EA" -> "asian",
"O" -> "asian", "M" -> "multiracial", "H" -> "hispanic"}];
cleanfuncSex[s_] := StringReplace[s, {"M" -> "male", "F" -> "female"}];
cleanStrings = {"Que" -> "Quebec", "GA" -> "Georgia",
"now Poland" -> "", "British West Indies now Jamaica" -> "Jamaica",
"U.S. MI" -> "Michigan", "Cape Verde Portugal" -> "Cape Verde",
"France St. Barts" -> "Saint Barthelemy"};
The data table received additional attributes - Date Objects and City/Country Entities - thanks to SemanticInterpretation[]. In order for SemanticInterpretation[] to work, certain strings were removed or clarified, as detailed above. Nevertheless, SemanticInterpretation[] may fail in some instances, which then requires re-running the following block of code:
rawData4 =
Transpose[{rawData3[[1]], cleanfunc[rawData3[[3]]],
SemanticInterpretation[cleanfunc[rawData3[[4]]]],
SemanticInterpretation[rawData3[[5]]],
cleanfuncRace[cleanfunc[rawData3[[6]]]], rawData3[[7]],
SemanticInterpretation[
StringReplace[cleanfunc[rawData3[[2]]], cleanStrings]],
SemanticInterpretation[
StringReplace[cleanfunc[rawData3[[8]]], cleanStrings]]}];
dataNoHeading =
Transpose[
Insert[Transpose[rawData4],
Table[DateDifference[rawData4[[i, 3]],
rawData4[[i, 4]], {"Year", "Day"}], {i, 1, Length[rawData4]}],
5]];
headings = {"No.", "Name", "Date of birth", "Most recent alive date",
"Age", "Race", "Sex", "Birthplace", "Deathplace"};
data = Prepend[dataNoHeading, headings];
data // TableForm
Upon successful running of SemanticInterpretation[], the data table appeared as follows:
![Part of the data table][1]
II. Data Visualizations and Analysis
-----------------------
The oldest people are more likely to be women than men, with a ratio of ten to one (pieChart1). This discrepancy underscores the socio-environmental choices of each gender; men tend to engage in riskier activities: smoking, drinking, using drugs, reckless driving, ignoring health issues, working in dangerous occupations, participating in war, etc. (This gender discrepancy might also be attributed to biological differences between males and females: women have been observed to have more resistance to infections and degenerative diseases than men.)
genderCounts =
Counts[Transpose[dataNoHeading][[7]] //. {"F" -> "Female",
"M" -> "Male"}];
genderPercentage = genderCounts/Total[Values[genderCounts]];
pieChartLabel1 =
Table[Style[
StringJoin[Keys[genderPercentage][[n]], " ",
ToString[Round[Values[genderPercentage][[n]]*100, 1]], "%"],
Bold, 14], {n, 1, Length[genderPercentage]}];
pieChart1 =
PieChart[Counts[Transpose[dataNoHeading][[7]]],
ChartLabels -> pieChartLabel1,
ChartStyle -> {Lighter[Pink], Lighter[Blue]},
PlotLabel ->
Style[Framed["Gender distribution of the worlds' oldest people"],
16]]
![pieChart1: the gender composition of the oldest people][2]
Among the racial groups - Black, White, Asian, Hispanic - more than half of the oldest people are White (barChart1). This skewed racial representation may reflect the fact that North American and European countries industrialized earlier than the rest of the world.
Accordingly, the oldest people are four times as likely to have been born in G8 countries - the more developed, more resourceful nations of the world (barChart2). Furthermore, the oldest people who come from the same country tend to have similar ages, which suggests some influence of environmental conditions (clusterImage1).
Following the same trend, the oldest people's last places of residence tend to cluster around the Eastern United States, Europe, and Japan, where advanced, life-extending medical services are available. Hence, the racial, place-of-birth, and place-of-death compositions of the oldest people all imply that socio-economic conditions greatly affect life-span. (Again, the contribution of genetic and biological factors cannot be discounted - nor can the fact that more developed countries keep better records and censuses of their people.)
raceCounts = Counts[Transpose[dataNoHeading][[6]]];
racePercentage = raceCounts/Total[Values[raceCounts]];
barChartLabel1 =
Table[Style[
Framed[StringJoin[Keys[racePercentage][[n]], " ",
ToString[Round[Values[racePercentage][[n]]*100, 1]], "%"]
], 12], {n, 1, Length[racePercentage]}];
barChart1 =
BarChart[raceCounts, ChartLabels -> barChartLabel1,
AxesLabel -> {"Race", "Counts"},
ChartStyle -> {White, Orange, Black, Brown, Gray},
PlotLabel ->
   Style[Framed["Racial distribution of the world's oldest people"],
16], ImageSize -> Large]
![barChart1: racial distribution of the world's oldest people][3]
EntityValue[Entity["HistoricalCountry", "Czechoslovakia"], "Flag"] =
EntityValue[Entity["Country", "CzechRepublic"], "Flag"];
birthPlaceList = Transpose[dataNoHeading][[8]];
birthCountryList =
Table[If[EntityTypeName[birthPlaceList[[i]]] == "Country" ||
EntityTypeName[birthPlaceList[[i]]] == "HistoricalCountry",
birthPlaceList[[i]], birthPlaceList[[i]]["Country"]], {i, 1,
Length[birthPlaceList]}];
deathPlaceList = Transpose[dataNoHeading][[9]];
deathCountryList =
Table[If[EntityTypeName[deathPlaceList[[i]]] == "Country" ||
EntityTypeName[deathPlaceList[[i]]] == "HistoricalCountry",
deathPlaceList[[i]], deathPlaceList[[i]]["Country"]], {i, 1,
Length[deathPlaceList]}];
f[x_] := Magnify[Framed[x], 0.1];
chartList2 =
Sort[Counts[#]] & /@
GatherBy[birthCountryList,
MemberQ[EntityList[EntityClass["Country", "G8"]], #] &];
chartList2Flag = Map[f, EntityValue[Keys[chartList2], "Flag"], {2}];
chartList2ForPlot =
Table[Table[
Labeled[chartList2[[n, m]], chartList2Flag[[n, m]]], {m, 1,
Length[chartList2[[n]]]}], {n, 1, Length[chartList2]}];
barChart2GroupLabel = {Placed[{Style[Framed["G8"], 14],
Style[Framed["non G8"], 14]}, Above], Automatic};
barChart2 =
BarChart[chartList2ForPlot, ChartStyle -> "Pastel",
ChartLabels -> barChart2GroupLabel,
AxesLabel -> {"Country", "Counts"},
PlotLabel ->
   Style[Framed["Birth country of the world's oldest people"], 16],
ImageSize -> Large]
![barChart2: birth country of the oldest people][4]
clusterFlagLabel =
Magnify[Framed[#], 0.1] & /@ EntityValue[birthCountryList, "Flag"];
clusterImage1 =
ClusteringTree[
UnitConvert[Drop[data[[All, 5]], 1], "Year"] -> clusterFlagLabel,
ClusterDissimilarityFunction -> "Centroid",
GraphLayout -> "RadialEmbedding",
PlotLabel ->
Style[Framed[
"Cluster by age of the oldest people\nwith respect to their \
birthplace "], 16], ImageSize -> Large]
![clusterImage1: cluster by age of the oldest people][5]
mapPlot1 =
GeoGraphics[{GeoMarker[deathPlaceList,
EntityValue[Entity["Icon", "MensRoom"], "Image"]]},
GeoRange -> "World", GeoBackground -> "Coastlines",
GeoProjection -> "Robinson", ImageSize -> Full,
PlotLabel ->
Style[Framed[
"Current residence or place of death since 1955\nof the world's \
oldest people"], 20]]
![mapplot1: place of death of the oldest people][6]
Finally, the oldest people are living longer: a linear model of date of birth versus age predicts that age increases by 1.11 years each decade (listPlot1). This upward trend in the oldest people's life-span correlates with better living conditions worldwide.
dateOfBirthAgeList =
TimeSeries[
Transpose[
Append[{Transpose[dataNoHeading][[3]]},
QuantityMagnitude[
UnitConvert[Transpose[dataNoHeading][[5]], "Year"]]]]];
datePairList = dateOfBirthAgeList["Path"];
modelFit = LinearModelFit[dateOfBirthAgeList, x, x];
modelFitList =
Table[{x, modelFit[x]}, {x, First[datePairList][[1]],
Last[datePairList][[1]], (
Last[datePairList][[1]] - First[datePairList][[1]])/50}];
plotlabel =
Style[Framed[
"Date of birth and age of the oldest people, in blue\nwith \
best-fitted line in dashed orange"], 16];
listPlot1 =
DateListPlot[{dateOfBirthAgeList, modelFitList},
PlotLabel -> plotlabel, FrameLabel -> {"Date of Birth", "Age"},
PlotStyle -> {Thick, {Thick, Dashed}}, Joined -> {False, True},
PlotMarkers -> {All, None}, ImageSize -> Large]
ageDifference = modelFit[10*365.25*24*60*60] - modelFit[0]; (* dates are absolute times measured in seconds, so this is the predicted age gain over one decade *)
rSquare = modelFit["RSquared"];
Print["The best-fitted linear model predicts that the oldest people's \
age would increase by ", ageDifference, " years each decade."];
![listPlot1: Date of birth and age of oldest people][7]
In conclusion, the oldest people are predominantly women, are mostly White, and are very likely to have been born and lived in developed countries. The gender, racial, and geographical distributions of the oldest people show the impact of socio-economic conditions on life-span. The age of the oldest people is increasing, correlating with overall advances in technology and healthcare.
The correlation between the number of oldest people and improved living conditions suggests certain directions for improving life-span in less-developed parts of the world: providing healthcare, improving living conditions, discouraging risky (especially male) lifestyle choices, investing in research and development, etc.
Bonus
-----
Dynamic plot/video of the oldest people:
ClearAll[beginDate, endDate, beginYear, endYear, listLiveDeath,
listLive, listDeath, plotMap, geoLive, geoGold, geoRed, geoBlue,
listDate, listDateValue];
beginDate = First[dataNoHeading][[3]] - Quantity[1, "years"];
endDate = Last[data][[4]] + Quantity[0, "years"];
beginYear = DateObject[beginDate, "Year"];
endYear = DateObject[endDate, "Year"];
dayRange = QuantityMagnitude[UnitConvert[endDate - beginDate, "Days"]];
listDate =
Union[Transpose[dataNoHeading][[3]], Transpose[dataNoHeading][[4]]];
listDateValue =
QuantityMagnitude[
UnitConvert[DateDifference[beginDate, #] & /@ listDate, "Days"]];
plotMap[time_] :=
plotMap[time] =
Module[{t = time, listDeath, listLive, timeDeathPlaceListDeath,
timeDeathPlaceListLive, geoLive, geoBlue, plotLabel},
{listLive =
Select[dataNoHeading,
And[QuantityMagnitude[
DateDifference[#[[3]], beginYear + Quantity[t, "days"]]] >=
0, QuantityMagnitude[
DateDifference[#[[4]],
beginYear + Quantity[t, "days"]]] <= 0] &];
plotLabel =
Style[Framed[
"The world's oldest people who are still alive at time t\n\
Blue: the oldest person, Red: people who are going to be oldest"], 20];
Which[
Length[listLive] == 0,
{GeoGraphics[GeoRange -> "World", GeoBackground -> "Coastlines",
GeoProjection -> "Robinson", ImageSize -> Full,
PlotLabel -> plotLabel]
},
Length[listLive] == 1,
{timeDeathPlaceListLive = Transpose[listLive][[9]];
geoBlue =
GeoMarker[First[timeDeathPlaceListLive],
EntityValue[Entity["Icon", "MensRoom"], "Image"],
"Color" -> Blue, "Scale" -> Scaled[0.04]];
GeoGraphics[{geoBlue}, GeoRange -> "World",
GeoBackground -> "Coastlines", GeoProjection -> "Robinson",
ImageSize -> Full, PlotLabel -> plotLabel]
},
Length[listLive] > 1,
{timeDeathPlaceListLive = Transpose[listLive][[9]];
geoBlue =
GeoMarker[First[timeDeathPlaceListLive],
EntityValue[Entity["Icon", "MensRoom"], "Image"],
"Color" -> Blue, "Scale" -> Scaled[0.04]];
geoLive =
GeoMarker[Drop[timeDeathPlaceListLive, 1],
EntityValue[Entity["Icon", "MensRoom"], "Image"]];
GeoGraphics[{geoBlue, geoLive}, GeoRange -> "World",
GeoBackground -> "Coastlines", GeoProjection -> "Robinson",
ImageSize -> Full, PlotLabel -> plotLabel]
}]
}];
Table[plotMap[t], {t, listDateValue}];
Hold[
plotMap[time_] :=
plotMap[time] =
Module[{t = time, listDeath, listLive, timeDeathPlaceListDeath,
timeDeathPlaceListLive, geoLive},
{listLive =
Select[dataNoHeading,
And[QuantityMagnitude[
DateDifference[#[[3]],
beginYear + Quantity[t, "days"]]] >= 0,
QuantityMagnitude[
DateDifference[#[[4]],
beginYear + Quantity[t, "days"]]] <= 0] &];
If[listLive == {},
{GeoGraphics[GeoRange -> "World",
GeoBackground -> "Coastlines", GeoProjection -> "Robinson",
ImageSize -> Full,
PlotLabel ->
Style[Framed[
"The world's oldest people who are still alive at time \
t"], 20]]},
{timeDeathPlaceListLive = Transpose[listLive][[9]];
geoLive = {GeoMarker[timeDeathPlaceListLive,
EntityValue[Entity["Icon", "MensRoom"], "Image"]]};
GeoGraphics[geoLive, GeoRange -> "World",
GeoBackground -> "Coastlines", GeoProjection -> "Robinson",
ImageSize -> Full,
PlotLabel ->
Style[Framed[
"The world's oldest people who are still alive at time \
t"], 20]]
}]
}];
Table[plotMap[t], {t, 0, dayRange, 2000}];
];
mapAnimate =
Animate[plotMap[t], {t, listDateValue}, DefaultDuration -> 20,
AnimationRunning -> False]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2019-06-06problemT1datatable.PNG&userId=1707333
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2019-06-06ProblemT1pieChart1.png&userId=1707333
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2019-06-06ProblemT1barChart1.png&userId=1707333
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2019-06-06ProblemT1barChart2.png&userId=1707333
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2019-06-06ProblemT1clusterImage1.png&userId=1707333
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2019-06-06ProblemT1mapPlot.png&userId=1707333
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2019-06-06ProblemT1listPlot1.png&userId=1707333

Nam Tran-Hoang, 2019-06-19T06:12:55Z

Obtain series expansions using Frobenius Method
https://community.wolfram.com/groups/-/m/t/1705566
## Fail Cases ##
Series[Exp[-2 Pi/Sqrt[3] Hypergeometric2F1[1/2, 1/2, 1, 1 - x]/Hypergeometric2F1[1/2, 1/2, 1, x]], {x, 0, 5}]
Series[Exp[-2 Pi/Sqrt[3] Hypergeometric2F1[1/3, 2/3, 1, 1 - x]/Hypergeometric2F1[1/3, 2/3, 1, x]], {x, 0, 5}]
Series[Exp[-2 Pi/Sqrt[2] Hypergeometric2F1[1/4, 3/4, 1, 1 - x]/Hypergeometric2F1[1/4, 3/4, 1, x]], {x, 0, 5}]
Series[Exp[-2 Pi Hypergeometric2F1[1/6, 5/6, 1, 1 - x]/Hypergeometric2F1[1/6, 5/6, 1, x]], {x, 0, 5}]
![Fail Cases][1]
Looks like something is going wrong on your end. Or try typing one of these into Wolfram|Alpha:
![enter image description here][2]
No response is slightly better than printing out nonsense, but why shouldn't we try to do better? I asked Bill Gosper, and he also thinks these expansions need to be fixed. We could try something like this:
## Frobenius Method ##
TSol[PFCS_, nMax_] := With[{TAnsatz = {
Dot[a1 /@ Range[0, nMax], x^Range[0, nMax]],
Plus[Log[x] Dot[a1 /@ Range[0, nMax], x^Range[0, nMax]],
Dot[a2 /@ Range[0, nMax], x^Range[0, nMax]]]} /. {a1[0] -> 1,
a2[0] -> 0}}, TAnsatz /. Solve[# == 0 & /@
Flatten[CoefficientList[#, {x, Log[x]}][[1 ;; nMax]
] & /@ Dot[PFCS, D[TAnsatz, {x, #}] & /@ Range[0, 2]]],
Flatten[{a1 /@ Range[1, nMax], a2 /@ Range[1, nMax]}]
][[1]] /. {a1[_] -> 0, a2[_] -> 0}]
MapThread[With[{f1 = TSol[{#1 - 1, #1^2 (-1 + 2 x), #1^2 (-1 + x) x}, 14]},
Expand[1/#2 Normal[Series[Exp[f1[[2]]/f1[[1]]], {x, 0, 10}]] /. x -> #2 x]]
&, {{2, 3, 4, 6}, {16, 27, 64, 432}}]
Out[]:= {
x + 8 x^2 + 84 x^3 + 992 x^4 + 12514 x^5 + 164688 x^6 + 2232200 x^7 + 30920128 x^8 + 435506703 x^9 + 6215660600 x^10,
x + 15 x^2 + 279 x^3 + 5729 x^4 + 124554 x^5 + 2810718 x^6 + 65114402 x^7 + 1538182398 x^8 + 36887880105 x^9 + 895303119303 x^10,
x + 40 x^2 + 1876 x^3 + 95072 x^4 + 5045474 x^5 + 276107408 x^6 + 15444602248 x^7 + 878268335296 x^8 + 50588345910799 x^9 + 2944021398570264 x^10,
x + 312 x^2 + 107604 x^3 + 39073568 x^4 + 14645965026 x^5 + 5609733423408 x^6 + 2182717163349896 x^7 + 859521859502348352 x^8 + 341679883727799750159 x^9 + 136868519056531319862408 x^10
}
I'm also willing to give a talk as to why I think these are important evaluations and how they fit into the wider context of what we can possibly hope to accomplish using Mathematica.
Cheers --Brad
PS. Don't feel too bad: other than A005797, these expansions are not in the OEIS either.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=FailCases.png&userId=234448
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshotfrom2019-06-1603-14-17.png&userId=234448

Brad Klee, 2019-06-16T08:39:24Z

[✓] Deal with "Encountered non-numeric value for a derivative at t == 0" ?
https://community.wolfram.com/groups/-/m/t/1707096
Hi, I'm new to Mathematica and I'm facing a problem. I'm trying to solve a differential equation with NDSolve, but I get the error "Encountered non-numeric value for a derivative at t == 0". I understand the nature of the error but I don't know how to deal with it. The function I'm obtaining is defined on a small interval of t, roughly from 0 to 1. Here is the code:
V=NDSolve[{v'[t]==-g +(a*ve)/(mr+mw*Exp[-ve*t]) +(mw*ve*Exp[-ve*t])/(mr+mw*Exp[-ve*t])*v[t]-(b/(mr+mw*Exp[-ve*t]))*v[t]^2 ,v[0]==0.01},v[t],{t,0.01,0.4}]
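(Note: this error typically means that some symbol in the equation - here g, a, ve, mr, mw, or b - still has no numeric value when NDSolve runs. A minimal sketch with placeholder parameter values, assuming that is the cause; the values below are for illustration only:)

    (* placeholder values, for illustration only - substitute your own parameters *)
    g = 9.81; a = 1; ve = 100; mr = 1; mw = 0.5; b = 0.1;
    V = NDSolve[{v'[t] == -g + (a*ve)/(mr + mw*Exp[-ve*t]) +
         (mw*ve*Exp[-ve*t])/(mr + mw*Exp[-ve*t])*v[t] -
         (b/(mr + mw*Exp[-ve*t]))*v[t]^2, v[0] == 0.01},
      v[t], {t, 0.01, 0.4}]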
Here is a screenshot:
![enter image description here][1]
Thank you!
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-19at02.42.47.png&userId=1707082

L GonGa, 2019-06-19T01:44:35Z

A primer on Association and Dataset
https://community.wolfram.com/groups/-/m/t/1167544
*NOTE: all Wolfram Language code and data are available in the attached notebook at the end of the post.*
----------
For my class this fall, I developed a little primer on Association and Dataset that I think might be useful for many people, so I'm sharing the attached notebook. It's somewhat about the concepts embedded inside these features. It's intended for people at a beginner-to-intermediate level of Mathematica/Wolfram Language programming, but it might be of value even to some more advanced users who have not poked around the Dataset functionality.
The sections of the notebook are:
1. The world before Associations and Datasets
2. Datasets without Associations
3. Enter the Association
4. Creating a Dataset from a List of Associations
5. Nice queries with Dataset
6. Query
7. Some Recipes
#The world before Associations and Datasets#
Here's an array of data. The data happens to represent the cabin class, age, gender, and survival of some of the passengers on the Titanic.
t = {{"1st", 29, "female", True}, {"1st", 30, "male", False}, {"1st",
58, "female", True}, {"1st", 52, "female", True}, {"1st", 21,
"female", True}, {"2nd", 54, "male", False}, {"2nd", 29, "female",
False}, {"3rd", 42, "male", False}};
As it stands, our data is a List of Lists.
Head[t]
> List
Head /@ t
> {List, List, List, List, List, List, List, List}
Suppose I wanted to get the second and fifth rows of the data. This is how I could do it.
t[[{2, 5}]]
> {{"1st", 30, "male", False}, {"1st", 21, "female", True}}
Suppose we want to group the passengers by gender and then compute the mean age. We could do this with the following pretty confusing code.
grouped = GatherBy[t, #[[3]] &];
justTheAges = grouped[[All, All, 2]];
Mean /@ justTheAges
> {189/5, 42}
Or I could write it as a one-liner this way.
Map[Mean, GatherBy[t, #[[3]] &][[All, All, 2]]]
> {189/5, 42}
But either way, realize that I have to remember that gender is the third column and that age is the second column. When there is a lot of data, this can get hard to remember.
#Datasets without Associations#
I could, if I wanted, convert this data into a Dataset. I do this below simply by wrapping Dataset around t. You can see there is now some formatting around the data. But there are no column headers (because no one has told Dataset what to use), and there are no row headers, for the same reason.
t2 = Dataset[t]
![enter image description here][1]
The head of the expression has changed.
Head[t2]
> Dataset
Now I can access the data in a different way.
Query[{2, 5}][t2]
![enter image description here][2]
Or I can do this. Mathematica basically converts this expression into Query[{2, 5}][t2]; the expression t2[{2, 5}] is just syntactic sugar.
t2[{2, 5}]
![enter image description here][3]
##Digression : Using Query explicitly or using syntactic sugar##
Why, by the way, would anyone use the longer form if Mathematica does the work for you? Suppose you want to store a Dataset operation -- perhaps a complex series of Dataset operations -- but you want it to work not just on a particular Dataset but on any (compatible) Dataset. Here's how you could do it.
q = Query[{2, 5}]
> Query[{2, 5}]
q[t2]
![enter image description here][4]
Now, let's create a permutation of the t2 Dataset so that the rows are scrambled up.
t2Scrambled = t2[{1, 4, 8, 3, 2, 7, 5}]
![enter image description here][5]
We can now run the q operation on t2Scrambled. Notice that the output has changed even though the query has stayed the same.
q[t2Scrambled]
![enter image description here][6]
We can also generate Query objects with functions. Here's a trivial example. Very few languages that I am aware of have the ability to generate queries with a function; the one other example is Julia.
makeASimpleQuery[n_] := Query[n]
makeASimpleQuery[{3, 4, 7}][t2]
![enter image description here][7]
##MapReduce operations on Dataset objects##
Now, if I want to know the mean ages of the genders I can use this code. This kind of grouping of data and then performing some sort of aggregation operation on the groups is sometimes known as a MapReduce. (I'm not a fan of the name, but it is widely used). It's also sometimes known as a rollup or an aggregation.
Query[GroupBy[#[[3]] &], Mean, #[[2]] &][t2]
![enter image description here][8]
Or this shorthand form in which the Query is constructed.
t2g = t2[GroupBy[#[[3]] &], Mean, #[[2]] &]
![enter image description here][9]
I think this is a little cleaner. But we still have to remember the numbers of the columns, which can be challenging.
By the way, just to emphasize how we can make this all functional, here's a function that creates a query that can run any operation (not just computing the mean) on the Dataset grouped by gender and then working on age.
genderOp[f_] := Query[GroupBy[#[[3]] &], f, #[[2]] &]
genderOp[Max][t2]
![enter image description here][10]
To test your understanding, see if you can find the minimum age for each class of passenger on the Titanic in our Dataset **t2**.
Query[GroupBy[#[[1]] &], Min, #[[2]] &][t2]
![enter image description here][11]
#Enter the Association#
##Review of Association##
If you feel comfortable with Associations, you can skip this section; otherwise read it carefully. Basically the key to understanding most Dataset operations is understanding Associations.
###Construction of Associations###
Now let's alter the data so that we don't have to remember those facts. To do this we will create an **Association**. Here's an example called **assoc1**. Notice that we create a sequence of rules and then wrap it in an Association head. Notice also that the standard output does not preserve the word "Association" as the head: just as a List is output as items inside curly braces, an Association is output as items inside the funky "<|" and "|>" glyphs.
assoc1 = Association["class" -> "1st", "age" -> 29, "gender" -> "female", "survived" -> True]
> <|"class" -> "1st", "age" -> 29, "gender" -> "female", "survived" -> True|>
I could equivalently have created a list of rules rather than a sequence. Mathematica would basically unwrap the **List** and create a sequence.
assoc1L = Association[{"class" -> "1st", "age" -> 29, "gender" -> "female", "survived" -> True}]
> <|"class" -> "1st", "age" -> 29, "gender" -> "female", "survived" -> True|>
We can use **AssociationThread** to create Associations in a different way. The first argument is the list of things that go on the left hand side of the Rules -- the "keys" -- and the second argument is the list of things that go on the right hand side of the Rules -- the "values".
assoc1T = AssociationThread[{"class", "age", "gender", "survived"}, {"1st", 29, "female", True}]
> <|"class" -> "1st", "age" -> 29, "gender" -> "female", "survived" -> True|>
Now let's use the **AssociationThread** function to create a list of Associations similar to our original data.
convertListToAssociation =
list \[Function]
AssociationThread[{"class", "age", "gender", "survived"}, list]
> Function[list, AssociationThread[{"class", "age", "gender", "survived"}, list]]
I start with t and Map the **convertListToAssociation** function over the rows of the data. I end up with a list of Associations.
t3 = Map[convertListToAssociation, t]
![enter image description here][12]
###Keys and Values###
Associations have keys and values. These data structures are used in other computer languages but known by different names: *Python* and *Julia* call them dictionaries. *Go* and *Scala* call them maps. *Perl* and *Ruby* call them hashes. *Java* calls it a *HashMap*. And *Javascript* calls it an object. But they all work pretty similarly. Anyway, the keys of an **Association** are the things on the left hand side of the Rules.
Keys[assoc1]
> {"class", "age", "gender", "survived"}
And the values of an Association are the things on the right hand side of the Rules.
Values[assoc1]
> {"1st", 29, "female", True}
That's about all there is to it, except for one thing. Take a look at the input and output that follow.
assoc2 = Association["a" -> 3, "b" -> 4, "a" -> 5]
> <|"a" -> 5, "b" -> 4|>
You can't have duplicate keys in an Association, so when Mathematica confronts duplicate keys, it uses the last rule it saw for the duplicated key. You might think this is a minor point, but it is actually very important in coding. We will see why soon.
###Nested Associations###
A funny thing happens if you nest an **Association** inside another **Association**.
Association[assoc1, assoc2]
> <|"class" -> "1st", "age" -> 29, "gender" -> "female", "survived" -> True, "a" -> 5, "b" -> 4|>
You end up with a single un-nested (flat) Association. That's a little unusual for Mathematica, but we can exploit this flattening as a way of adding elements to an Association.
Association[Association["dances" -> False], assoc1]
> <|"dances" -> False, "class" -> "1st", "age" -> 29, "gender" ->
> "female", "survived" -> True|>
Or, here's a function that exploits the flattening to add elements to an **Association**.
addstuff = Association[#, "dances" -> False, "sings" -> True] &
> Association[#1, "dances" -> False, "sings" -> True] &
addstuff[assoc1]
> <|"class" -> "1st", "age" -> 29, "gender" -> "female", "survived" -> True, "dances" -> False, "sings" -> True|>
###Extracting Values from Associations###
Just as the values contained in a **List** can be accessed by using the **Part** function, the values contained in an **Association** can likewise be accessed. Suppose, for example, that I wanted to compute double the age of the person in **assoc1**.
It turns out there are a lot of ways of doing this. The first is to treat the Association as a list except that the indices, instead of being integers, are the "keys" that are on the left hand side of the rules.
2*Part[assoc1, "age"]
> 58
2*assoc1[["age"]]
> 58
A second way is to use Query. We can wrap the "key" in the head **Key** just to make sure Mathematica understands that the thing is a Key.
2*Query[Key["age"]][assoc1]
> 58
Usually we can omit the Key and everything works fine.
2*Query["age"][assoc1]
> 58
A third way is to write a function that has an association as its argument.
af = Function[Slot["age"]]
> #age &
Now look what we can do.
2*Query[af][assoc1]
> 58
We can shorten this approach by using a simpler syntax for a function.
2*Query[#age &][assoc1]
> 58
Note, though, that the following will not work. Basically, Mathematica is confused: it thinks the function itself is the key.
2*assoc1[af]
> 2 Missing["KeyAbsent", #age &]
But here's a simple workaround: for simple extractions, I can just use the name of the key.
2*assoc1["age"]
> 58
##A Note on Slot Arguments##
And please pay attention to this: sometimes the Mathematica parser gets confused when it confronts a "slot argument" written as #something. If you see this happening, write it as Slot["something"].
Slot["iamaslot"] === #iamaslot
> True
Here's another problem: what if the key in the Association has spaces or non-standard characters in it? Any of these, for example, are perfectly fine keys: the string "I have a lot of spaces in me", the string "I_have_underscores", the symbol True, the integer 43. But if we try to denote those keys by putting a hash in front of them, it will lead to confusion and problems.
problemAssociation = Association["I have a lot of spaces in me" -> 1, "I_have_underscores" -> 2, True -> 3, 43 -> 4]
> <|"I have a lot of spaces in me" -> 1, "I_have_underscores" -> 2,
True -> 3, 43 -> 4|>
{Query[#I have a lot of spaces in me &][problemAssociation],
Query[#I _have _underescores &][problemAssociation]}
![enter image description here][13]
Here's a solution.
{Query[Slot["I have a lot of spaces in me"] &][problemAssociation],
Query[Slot["I_have_underscores"] &][problemAssociation]}
> {1, 2}
Here's how we handle the use of True and an integer as keys: we wrap them in **Key**.
{Query[#True &][problemAssociation], Query[#43 &][problemAssociation]}
![enter image description here][14]
{Query[Key[True]][problemAssociation],
Query[Key[43]][problemAssociation]}
> {3, 4}
##Working with Associations and Lists of Associations##
Here's something we can do with the data in the form of an Association. I could ask for the gender of the person in the third row as follows. Notice that I did not have to remember that "gender" was generally in the third position.
t3[[3]][["gender"]]
> "female"
So, even if I scramble the rows, I can still use the same code.
t3Scrambled = Map[convertListToAssociation, t[[All, {4, 1, 3, 2}]]]
![enter image description here][15]
t3Scrambled[[3]][["gender"]]
> "female"
I could also group the people according to their cabin class. Here I use Query on a list of Associations.
Query[GroupBy[#class &]][t3]
![enter image description here][16]
Again, the following code, which does not explicitly use **Query**, won't work. Basically, nothing has told Mathematica to translate t3[stuff___] \[RightArrow]Query[stuff][t3]. If t3 had a head of Dataset, Mathematica would know to make the translation.
t3[GroupBy[#class &]]
![enter image description here][17]
I can also get certain values for all the Associations in a list of Associations.
Query[All, #age &][t3]
> {29, 30, 58, 52, 21, 54, 29, 42}
I can also map a function onto the result. I don't have to go outside the Query form to do so.
Query[f, #age &][t3]
> f[{29, 30, 58, 52, 21, 54, 29, 42}]
Or, without exiting the Query form, I can map a function onto each element of the result.
Query[Map[f], #age &][t3]
> {f[29], f[30], f[58], f[52], f[21], f[54], f[29], f[42]}
I could also do the same thing as follows.
Query[All, #age &, f][t3]
> {f[29], f[30], f[58], f[52], f[21], f[54], f[29], f[42]}
#Creating a Dataset from a List of Associations#
To get full use out of Query and to permit syntactic shorthands, we need Mathematica to understand that the list of Associations is in fact a Dataset. Here's all it takes.
d3 = Dataset[t3]
![enter image description here][18]
We can recover our original list of associations by use of the **Normal** command.
t3 === Normal[d3]
> True
With the data inside a Dataset object we now have pretty formatting. But we have more.
We can still do this. We get the same result but in a more attractive form.
d3g = Query[GroupBy[#class &]][d3]
![enter image description here][19]
But now this shorthand works too.
d3g = d3[GroupBy[#class &]]
![enter image description here][20]
And compare these two pieces of code. When the data is in the form of a Dataset, Mathematica understands that the stuff in the brackets is not intended as a key but rather is to be transformed into a Query.
{Query[#age &][t3[[1]]], d3[[1]][#age &]}
> {29, 29}
##A Dataset that is an Association of Associations##
Let's look under the hood of **d3g**.
d3gn = Normal[d3g]
![enter image description here][21]
Note: if you *really* want to look under the hood of a **Dataset**, ask to see it in **FullForm**. You can also get more information by loading the undocumented package Dataset`, but this is definitely NOT recommended for the non-advanced user.
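For instance (a small sketch; the exact internal representation can vary between Mathematica versions), inspecting a tiny Dataset looks like this:

    Dataset[{<|"a" -> 1, "b" -> 2|>}] // FullForm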
What we see is an Association in which each of the values is itself a list of Associations.
We can map a function over d3gn.
Map[f, d3gn]
![enter image description here][22]
I can of course do the mapping within the **Query** construct.
Query[All, f][d3gn]
![enter image description here][23]
If I try syntactic sugar, it doesn't work, because d3gn is not a Dataset.
d3gn[All, f]
> Missing["KeyAbsent", All]
But, if I use the Dataset version, it does work. (The first line may appear as an ellipsis depending on your operating system and display, but if you look under the hood it looks just like the values for 2nd and 3rd; I have no idea why the ellipsis is being inserted.)
d3g[All, f]
![enter image description here][24]
##A Dataset that just has a single Association inside##
We can also have a Dataset that just has a single Association inside. Mathematica presents the information with the keys and values displayed vertically.
Dataset[d3[[1]]]
![enter image description here][25]
In theory, we could have a Dataset that just had a single number inside it.
Dataset[6]
![enter image description here][26]
#Nice queries with Dataset#
Now I can construct a query that takes a dataset and groups it by the gender column. It then takes each grouping and applies the Mean function to at least part of it. What part? The "age" column part. Notice that I no longer have to remember that gender is the third column and age is the second column.
qd = Query[GroupBy[#gender &], Mean, #age &]
> Query[GroupBy[#gender &], Mean, #age &]
Now I can run this query on d3.
qd[d3]
![enter image description here][27]
We can now learn a lot about Query. So long as our data is in the form of a Dataset, we can either write the query as a formal Query or use syntactic sugar.
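For instance, the following two lines are equivalent ways of asking **d3** for the mean age by gender; the first is the formal Query, the second the sugared form:

    Query[GroupBy[#gender &], Mean, #age &][d3]
    d3[GroupBy[#gender &], Mean, #age &]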
#Query#
A major part of working with data is to understand **Query**. Let's start with a completely abstract **Query**, that we will call **q1**.
q1 = Query[f];
Now let's run q1 on t3.
q1[t3]
![enter image description here][28]
We end up with a list of Associations that has f wrapped around it at the highest level. It's the same as if I wrote the following code.
f[t3] === q1[t3]
> True
Now let's write a **Query** that applies the function **g** at the top level of the list of Associations and the function **f** at the second level, i.e. to each of the rows. Why does it work at the second level? Because it's the second argument to **Query**.
q2 = Query[g, f];
q2[t3]
![enter image description here][29]
The result is the same as if I mapped **f** onto t3 at its first level and then wrapped **g** around it.
g[Map[f, t3, {1}]] === q2[t3]
> True
Here's a function **firstchar** that takes the first character in a string.
firstchar = StringTake[#, 1] &
> StringTake[#1, 1] &
Now let's construct a query **cg1** that applies **firstchar** to the class and gender keys in each row.
cg1 = Query[All,
a \[Function] MapAt[firstchar, a, {{"class"}, {"gender"}}]]
> Query[All, Function[a, MapAt[firstchar, a, {{"class"}, {"gender"}}]]]
We apply **cg1** to our little dataset **d3**.
cg1[d3]
![enter image description here][30]
What if we want to apply the same function to every element of the Dataset? We just apply it at the lowest level. Here's one way.
Query[Map[f, #, {-1}] &][d3]
![enter image description here][31]
We can also combine it with column-wise and entirety-wise operations. For reasons that are not clear, Mathematica can't represent the result as a Dataset and returns the Normal form.
Query[(Map[f, #, {-1}] &) /* entiretywise, columnwise][d3]
![enter image description here][32]
Here's how we could actually use a multilevel **Query**.
Suppose we want to write a function that computes the fraction of the people in this little dataset that survived. The first step is simply going to be to extract the survival value and convert it to 1 if True and 0 otherwise. There's a built in function Boole that does this.
{Boole[True], Boole[False]}
> {1, 0}
q3 = Query[something,
assoc \[Function] assoc["survived"] /. {True -> 1, _ -> 0}]
> Query[something, Function[assoc, assoc["survived"] /. {True -> 1, _
> -> 0}]]
q3[t3]
> something[{1, 0, 1, 1, 1, 0, 0, 0}]
So, now we have something wrapping a list of 1s and 0s. By making **something** the **Mean** function, we can achieve our result.
q4 = Query[Mean, Boole[#survived] &]
> Query[Mean, Boole[#survived] &]
q4[d3]
> 1/2
We can also examine survival by gender. Notice that **Query** is a little like **Association**: it gets automatically flattened.
Query[GroupBy[#gender &], q4][t3]
> <|"female" -> 4/5, "male" -> 0|>
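To see the flattening at work (a sketch under the same assumptions as above), nesting **q4** inside another **Query** gives the same result as spelling the operators out at one level:

    qNested = Query[GroupBy[#gender &], q4];
    qFlat = Query[GroupBy[#gender &], Mean, Boole[#survived] &];
    qNested[t3] === qFlat[t3]  (* both give <|"female" -> 4/5, "male" -> 0|> *)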
If the data is held in a **Dataset**, we can also write the final step as follows.
d3[GroupBy[#gender &], q4]
![enter image description here][33]
Notice that even if we omit the "Query", this code works. Mathematica just figures out that you meant Query.
The code immediately above is in the form we typically see and often use.
#Some Recipes#
titanic = ExampleData[{"Dataset", "Titanic"}]
![enter image description here][34]
##How to add a column to a Dataset based on values external to the existing columns##
Here's some additional data. Notice that it is the same length as the titanic dataset.
stuffToBeAdded =
Table[Association["id" -> i,
"weight" -> RandomInteger[{80, 200}]], {i, Length[titanic]}]
![enter image description here][35]
We use **Join** at level 2.
augmentedTitanic = Join[titanic, stuffToBeAdded, 2]
![enter image description here][36]
##How to add a column to a Dataset based on values in the existing columns and to do so row-wise##
Notice that the query below does NOT change the value of the titanic dataset. To change the value of the titanic dataset, one would need to set titanic to the result of the computation. Remember, Mathematica generally does not have side effects or do modifications in place.
Query[All, Association[#, "classsex" -> {#class, #sex}] &][titanic]
![enter image description here][37]
We can add multiple columns this way.
Query[All,
Association[#, "classsex" -> {#class, #sex},
"agesqrt" -> Sqrt[#age]] &][titanic]
![enter image description here][38]
##How to change the value of an existing column: row-wise##
Age everyone one year.
Query[All, Association[#, "age" -> #age + 1] &][titanic]
![enter image description here][39]
##How to change the value of columns selectively##
Query[All,
Association[#,
"age" -> If[#sex === "male", #age + 1, #age]] &][titanic]
![enter image description here][40]
##How to create a new column based on an aggregate operator applied to another column##
With[{meanAge = Query[Mean, #age &][titanic]},
Query[All,
Association[#, "ageDeviation" -> #age - meanAge] &]][titanic]
![enter image description here][41]
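Here is one more recipe along the same lines — a sketch that standardizes the age column (the column name "ageZ" is just an invented label):

    With[{m = Query[Mean, #age &][titanic],
          s = Query[StandardDeviation, #age &][titanic]},
     Query[All, Association[#, "ageZ" -> (#age - m)/s] &][titanic]]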
Can you develop your own recipes?
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=17751.png&userId=20103
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=47222.png&userId=20103
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=60813.png&userId=20103
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=83664.png&userId=20103
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=44435.png&userId=20103
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=105416.png&userId=20103
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=49777.png&userId=20103
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=16898.png&userId=20103
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=28239.png&userId=20103
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=262710.png&userId=20103
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=749611.png&userId=20103
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=209912.png&userId=20103
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=925313.png&userId=20103
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=851614.png&userId=20103
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=627315.png&userId=20103
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=679516.png&userId=20103
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=674717.png&userId=20103
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=707518.png&userId=20103
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1053319.png&userId=20103
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=579820.png&userId=20103
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=980221.png&userId=20103
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=450322.png&userId=20103
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=113723.png&userId=20103
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=932624.png&userId=20103
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=754825.png&userId=20103
[26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=836826.png&userId=20103
[27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=393227.png&userId=20103
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=577928.png&userId=20103
[29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=158329.png&userId=20103
[30]: http://community.wolfram.com//c/portal/getImageAttachment?filename=984930.png&userId=20103
[31]: http://community.wolfram.com//c/portal/getImageAttachment?filename=664831.png&userId=20103
[32]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1090332.png&userId=20103
[33]: http://community.wolfram.com//c/portal/getImageAttachment?filename=450733.png&userId=20103
[34]: http://community.wolfram.com//c/portal/getImageAttachment?filename=740234.png&userId=20103
[35]: http://community.wolfram.com//c/portal/getImageAttachment?filename=143835.png&userId=20103
[36]: http://community.wolfram.com//c/portal/getImageAttachment?filename=36.png&userId=20103
[37]: http://community.wolfram.com//c/portal/getImageAttachment?filename=37.png&userId=20103
[38]: http://community.wolfram.com//c/portal/getImageAttachment?filename=38.png&userId=20103
[39]: http://community.wolfram.com//c/portal/getImageAttachment?filename=39.png&userId=20103
[40]: http://community.wolfram.com//c/portal/getImageAttachment?filename=40.png&userId=20103
[41]: http://community.wolfram.com//c/portal/getImageAttachment?filename=41.png&userId=20103

Seth Chandler, 2017-08-20

Visualizing hours of daylight on the summer solstice
https://community.wolfram.com/groups/-/m/t/1706977
The climatologist Brian Brettschneider shared a nice [visualization][1] showing the hours of daylight on the summer solstice.
[![enter image description here][2]][1]
We can easily create a similar visualization using the Wolfram Language. First let's get the date of the summer solstice, from Wolfram|Alpha:
summerSolstice = DateObject[{2019, 6, 21, 10, 42}, TimeZone -> "America/Chicago"]
![enter image description here][3]
Then we construct a function to get the number of hours from sunrise to sunset for a given latitude:
SunriseToSunset[lat_] :=
Sunset[GeoPosition[{lat, -90}], DateObject[{2019, 6, 21}]] -
Sunrise[GeoPosition[{lat, -90}], DateObject[{2019, 6, 21}]]
Compute the results for latitudes from the equator to the Arctic Circle:
data = With[{lats = Range[0, 66.5, 1]},
Transpose[{lats,
N@QuantityMagnitude[SunriseToSunset /@ lats, "Hours"]}]]
>
> {{0, 12.1167}, {1, 12.1833}, {2, 12.2333}, {3, 12.3}, {4, 12.35}, {5,
> 12.4167}, {6, 12.4667}, {7, 12.5333}, {8, 12.5833}, {9, 12.65}, {10,
> 12.7167}, {11, 12.7667}, {12, 12.8333}, {13, 12.8833}, {14,
> 12.95}, {15, 13.0167}, {16, 13.0833}, {17, 13.15}, {18,
> 13.2167}, {19, 13.2833}, {20, 13.35}, {21, 13.4167}, {22,
> 13.4833}, {23, 13.55}, {24, 13.6167}, {25, 13.6833}, {26,
> 13.7667}, {27, 13.85}, {28, 13.9167}, {29, 14.}, {30, 14.0833}, {31,
> 14.1667}, {32, 14.25}, {33, 14.3333}, {34, 14.4167}, {35,
> 14.5167}, {36, 14.6167}, {37, 14.7}, {38, 14.8}, {39, 14.9167}, {40,
> 15.0167}, {41, 15.1333}, {42, 15.25}, {43, 15.3667}, {44,
> 15.4833}, {45, 15.6167}, {46, 15.75}, {47, 15.9}, {48, 16.05}, {49,
> 16.2}, {50, 16.3667}, {51, 16.55}, {52, 16.7333}, {53,
> 16.9333}, {54, 17.15}, {55, 17.3833}, {56, 17.6167}, {57,
> 17.8833}, {58, 18.1833}, {59, 18.5167}, {60, 18.8667}, {61,
> 19.2833}, {62, 19.75}, {63, 20.3167}, {64,
> 21.0167}, {65, -1.96667}, {66, -0.783333}}
We need to add 24 hours to the last two values, because we obtained the sunrise of the next day:
data[[-2]] += {0, 24}
> `{65, 22.0333}`
data[[-1]] += {0, 24}
> `{66, 23.2167}`
ListPlot[data]
![enter image description here][4]
Interpolate that data:
SunriseToSunsetFunction = Interpolation[data]
![enter image description here][5]
From this InterpolationFunction object we can find the parallels that correspond to values of 12 to 24 hours of sunlight, in intervals of 30 minutes:
InverseSunriseToSunset[hours_] := Block[{lat},
lat /. FindRoot[SunriseToSunsetFunction[lat] - hours, {lat, 30, 0, 66.56}]]
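As a quick check (the exact value depends on the interpolated data above), the inverse function recovers a latitude from a daylight duration:

    InverseSunriseToSunset[16.]  (* a latitude just under 48 degrees, cf. the table entry {48, 16.05} *)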
hours = Range[12, 24, 0.5]
> {12., 12.5, 13., 13.5, 14., 14.5, 15., 15.5, 16., 16.5, 17., 17.5,
> 18., 18.5, 19., 19.5, 20., 20.5, 21., 21.5, 22., 22.5, 23., 23.5, 24.}
Quiet the extrapolation messages:
parallels = Quiet[InverseSunriseToSunset /@ hours]
> {0., 6.5, 14.75, 22.25, 29., 34.8377, 39.8359, 44.1294, 47.6667,
> 50.7311, 53.316, 55.509, 57.4022, 58.9516, 60.3359, 61.4843, 62.4667,
> 63.2934, 63.9798, 64.5051, 64.9703, 65.4042, 65.8216, 66.2328, 66.56}
Finally one gets the following map, with tooltips on the parallel lines instead of labels:
GeoGraphics[{Thick, Black, GeoPath["NorthernTropic"],
GeoPath["ArcticCircle"], Thin, Red,
Inner[Tooltip, GeoPath[{"Parallel", #}] & /@ parallels, hours, List]},
GeoProjection -> {"Orthographic", "Centering" -> {60, -50}},
GeoRange -> "World", GeoGridLines -> Automatic]
![enter image description here][6]
[1]: https://twitter.com/Climatologist49/status/1140789836963446784
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=6468ScreenShot2019-06-18at5.15.03PM.jpg&userId=20103
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=51641.png&userId=20103
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=32722.png&userId=20103
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=54113.png&userId=20103
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=39444.png&userId=20103

Jose M. Martin-Garcia, 2019-06-18

Get the same FourierTransform results across different versions?
https://community.wolfram.com/groups/-/m/t/1706369
My research lab has used Mathematica for many years developing models for fuel cells and simulating signals we expect from experimental measurements. Many of these files were created in version 9.0 and I have been running version 11.0.1 on my personal laptop. I updated to version 12 today and the code is giving a different output from a Fourier Transform. Specifically, it is a Fourier transform of a cosine that is multiplied by a Gaussian window function and a rectangular window function. From the revision history, it seems the FourierTransform function was last changed in version 11.2.
All of this is shown in the attached file, but I will also repeat it here for easy viewing. A few variables need explanation: ft = applied frequency, k = harmonic index, b = Gaussian window parameter, Cyc = number of signal waveforms, and nt = number of samples.
The cosine is defined by:
![voltage wave definition][1]
Where Vk and Vmk are complex coefficients as: ![complex voltage coefficients][2]
The Gaussian window function is defined as: ![gaussian window function][3]
The rectangular window function is defined as: ![rectangular window function][4]
Finally, we take the Fourier transform of the windowed signal: ![fourier transform][5]
The (presumably correct) answer from versions older than 11.2 is: ![correct output][6]
But the answer I'm getting now is: ![incorrect output][7]
Using SameQ confirms these equations aren't equivalent. I've tinkered with the Fourier Parameters a bit, but that only seems to change the scaling coefficients. One obvious solution is reverting to pre-11.2, but I would like to avoid being stuck to a past version. Thank you in advance!
----------
**Update:** The apparent errors stemmed from changes in how Mathematica handles variable precisions and underflow. Using some workarounds from [@Valerio in Q170416][8] and [@halirutan in Q69912][9] I've updated my notebook to a working version. I would, however, appreciate if anyone knows of more elegant solutions than what I implemented. Thank you again!
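For readers hitting the same issue, a generic sketch of the kind of workaround referenced above (not the exact code from the linked answers, and the substitution values are only placeholders) is to pin the working precision so intermediate values are neither demoted to machine precision nor underflowed to zero:

    (* evaluate a symbolic result at fixed 30-digit precision;
       $MinPrecision/$MaxPrecision prevent precision loss and underflow,
       where expr stands for the Fourier-transform result above *)
    Block[{$MinPrecision = 30, $MaxPrecision = 30},
     N[expr /. {b -> 1/10, ft -> 100, Cyc -> 4, nt -> 256}, 30]]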
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=VoltageEq.JPG&userId=1706346
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Voltagecomplexcoeffs.JPG&userId=1706346
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Gausssianwindow.JPG&userId=1706346
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Rectangularwindow.JPG&userId=1706346
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Fouriertransform.JPG&userId=1706346
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=CorrectFFToutput.JPG&userId=1706346
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=IncorrectFFToutput.JPG&userId=1706346
[8]: https://mathematica.stackexchange.com/questions/170416/new-generalmunfl-error-and-loss-of-precision?noredirect=1&lq=1
[9]: https://mathematica.stackexchange.com/questions/69912/format-exp-in-output

Brian Gerwe, 2019-06-18

Avoid syntax issue in cloud notebooks (" _." converted to "_" and ".")?
https://community.wolfram.com/groups/-/m/t/1706500
I have found what I am certain is a bug in the Wolfram Cloud. I generate the following code in Mathematica and then save it to the cloud.
Clear["Global`*"]
List@@Expand[(1+x)^6]/.a_. x^b_.->{b,a}
This initially runs correctly in the cloud notebook. But if I edit the notebook on the cloud, for example changing the exponent in the expansion from 6 to 7, then running the notebook results in a syntax error. I looked at the text on the cloud after editing the notebook and the
" _."
has been converted to separate objects,
"_" and ".".

Mike Luntz, 2019-06-18

[✓] Solve a simple equation using RSolve?
https://community.wolfram.com/groups/-/m/t/1706491
I have the following two equations:
I. Subscript[Q, t] == Subscript[EE, t]*Subscript[NN, t] +
Subscript[FF, t]*Subscript[MM, t]
II. Subscript[FF, t] == Subscript[EE, t] - 1
I would like to solve it for Q_t so that II. in I. :
Subscript[Q, t] == Subscript[EE, t]*Subscript[NN, t] +
(Subscript[EE, t] - 1)*Subscript[MM,t]
How can I do this in Mathematica?
Using RSolve gives me the following error:
RSolve::deqx: Supplied equations are not difference equations of the given functions.
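(For what it's worth, the substitution of II into I is plain algebra rather than a difference equation, so Eliminate can handle it — a sketch using plain symbols in place of Subscript:)

    Eliminate[{q == ee*nn + ff*mm, ff == ee - 1}, ff]
    (* removes ff, leaving q expressed via ee, nn and mm *)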
I would also like to learn how to properly apply the RSolve function to symbolic equations.

Anna Ebert, 2019-06-18

Input data for simple probability distribution problem?
https://community.wolfram.com/groups/-/m/t/1707014
This is a pretty simple problem to solve manually, but I'm just curious how I would go about entering data like this into Wolfram Alpha. I can't really find anything in the probability distribution tutorials that matches what I'd like to do here. Any ideas? ![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Capture.PNG&userId=1706782

Dustin Haning, 2019-06-18

[✓] Solve this equation with Solve?
https://community.wolfram.com/groups/-/m/t/1706730
Hello everyone. My name is Karl, and at the moment I am trying to learn Mathematica, but I have some problems solving an equation.
I want to solve this:![enter image description here][1]
First, I want an expression for m = ………; then I want to use this expression with the given values Tx = 68.4, T9 = 25.4, T1 = 90, L = 0.35 and x = 0.05. My code looks like this: ![enter image description here][2]
If I use Ctrl+C to copy the right solution, it looks like this: ![enter image description here][3]
My first question is: what does the " ` " symbol in front of the T9 or the L in the copied expression mean? Once I had this symbol, and because of it the solution was wrong. And what is the " 1. " for?
My second question is: when I type all the values directly into the Solve line like this: ![enter image description here][4] I get those two values, but I guess they are wrong! If I check it with my normal calculator it should be ~8.22. A friend of mine also used another program and got a value of about 8.22.
I hope you can help me and explain where my mistakes are. Thanks a lot ;)
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Unbenannt.PNG&userId=1706712
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Formel.PNG&userId=1706712
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=kpiert.PNG&userId=1706712
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=gel%C3%B6st.PNG&userId=1706712

Karl Eberl, 2019-06-18

Avoid machine precision issues while using FindRoot?
https://community.wolfram.com/groups/-/m/t/1705595
I am facing a precision error in FindRoot:
FindRoot::lstol: The line search decreased the step size to within tolerance specified by AccuracyGoal and PrecisionGoal but was unable to find a sufficient decrease in the merit function. You may need more than MachinePrecision digits of working precision to meet these tolerances.

Muhammad Atif Masoud, 2019-06-16

Why can input connectors for blocks not have flexible array sizes?
https://community.wolfram.com/groups/-/m/t/1706682
Unspecified array dimensions using `[:]` are very important for writing flexible components for frequent reuse. While I am aware that the actual size of an array has to be determined at the time a model is compiled, I was of the opinion that this can be done by binding a variable with a definite dimension to one with an unspecified one.
The following simple example of a `block` `VectorSum`, which is to `sum` a vector `input`, will not work:
package VectorFunctions
model Test
VectorSum converter "Component taking the sum of a vector input";
InformationSource source "Vector input";
equation
connect( source.y, converter.u );
end Test;
block VectorSum "Take the sum of an input with unspecified dimension"
Modelica.Blocks.Interfaces.RealInput u[:];
Modelica.Blocks.Interfaces.RealOutput y;
equation
y = sum(u);
end VectorSum;
block InformationSource "Provide some vector output"
Modelica.Blocks.Interfaces.RealOutput y[3];
equation
y = ones( 3 );
end InformationSource;
end VectorFunctions;
Why does this not work out? After all, the dimension for `u` should be clear from the `connect` statement, as the dimension of the `input` array is a `constant`. What other way is there to do something like this?
**Edit**:
I changed `inputs` and `outputs` into `connectors` so that the use of `connect` is cleaner. The Modelica specs, in my opinion, are not so clear about this, as *Section 10.2 Flexible Array Sizes* simply points to *Section 12.4.5*, where this is discussed for `functions`. Functions according to the specs explicitly allow the above use of unspecified array dimensions for inputs. Since `blocks` share quite a bit with `functions`, I am not seeing a good reason for this not to work out.

Guido Wolf Reichert, 2019-06-18

Function Repository mysterious submission error
https://community.wolfram.com/groups/-/m/t/1702947
Attempting to play with the new Function Repository, I've been receiving the error:
ResourceSubmit::apierr The parameter FunctionLocation is missing.
As yet I don't understand how to address this. Suggestions very welcome!

David Gathercole, 2019-06-11

How to use inner and outer correctly?
https://community.wolfram.com/groups/-/m/t/1706543
While I noted some implementation problems regarding `inner` and `outer` in [another context][1] which have been acknowledged, I would like to get very general advice on how to reliably (= correctly?) make use of `inner` and `outer` declarations in Wolfram System Modeler.
I would like to provide three simple examples for a clarifying discussion.
## Outer Declaration inside a subcomponent##
The following example works fine: the `inner` variables `x`, `y`, and `z[3]` are passed on to the submodel using corresponding `outer` declarations, and accordingly `subX`, `subY`, and `subZ` show correct values if we simulate `MainModel`.
package TestingInnerOuter_1
model MainModel
inner parameter Integer x = 1;
inner parameter Real y = 2;
inner Real z[3];
Submodel submodel;
equation
z = ones(3);
end MainModel;
block Submodel
outer parameter Integer x;
outer parameter Real y;
outer Real[3] z;
output Real subX, subY;
output Real[3] subZ;
equation
subX = x;
subY = y;
subZ = z;
end Submodel;
end TestingInnerOuter_1;
## Simultaneous inner outer declaration inside a subcomponent ##
Now we simply add `inner` as a prefix to the `outer` declarations making them simultaneous `inner outer` declarations.
package TestingInnerOuter_2
model MainModel
inner parameter Integer x = 1;
inner parameter Real y = 2;
inner Real z[3];
Submodel submodel;
equation
z = ones(3);
end MainModel;
block Submodel
inner outer parameter Integer x;
inner outer parameter Real y;
inner outer Real[3] z;
output Real subX, subY;
output Real[3] subZ;
equation
subX = x;
subY = y;
subZ = z;
end Submodel;
end TestingInnerOuter_2;
And now the `MainModel` will not run but give errors from compilation:
![Error message][2]
From this message we already get a "hunch" about what is going amiss. And indeed the following modified version of Submodel will work in the second example:
block Submodel
inner outer parameter Integer x;
inner outer parameter Real y;
output Real subX, subY;
equation
subX = x;
subY = y;
end Submodel;
While the `MainModel` will now simulate, we note that the values for `x` and `y` are not passed on; instead we see that each has been given the default value of zero.
## Simultaneous inner outer declaration inside a subcomponent with modifications ##
So, given the error message for z we finally look at a version where the submodel will modify the values for `x` and `y`.
package TestingInnerOuter_3
model MainModel
inner parameter Integer x = 1;
inner parameter Real y = 2;
inner Real z[3];
Submodel submodel;
equation
z = ones(3);
end MainModel;
block Submodel
inner outer parameter Integer x = -1;
inner outer parameter Real y = -2;
output Real subX, subY;
equation
subX = x;
subY = y;
end Submodel;
end TestingInnerOuter_3;
Interestingly, this works out for the parameters, and `subX` and `subY` show the correct values (e.g. 1 and 2). But what surprises me is that `submodel.x` and `submodel.y` are reported to be -1 and -2 in the Model Center.
So to wrap it up, the following questions arise for me:
1. Why will an `inner outer` declaration not work out for continuous variables? What can be done?
2. Why are `submodel.x` and `submodel.y` not reported with the values according to the lookup to the `inner` values in a higher scope (e.g. the `MainModel` values)?
[1]: https://community.wolfram.com/groups/-/m/t/1700660
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Errors.PNG&userId=566944

Guido Wolf Reichert, 2019-06-18

Fix output of Neural Network predict method?
https://community.wolfram.com/groups/-/m/t/1701153
Hi,
How do I fix the predicted values (YPrediction) so that they are the same every time I run the program?
Input: X1 Output: Y
In[1]:= Y = {14.437499999999996`, 13.629166666666666`, 11.016666666666666`,
9.254166666666666`, 9.483333333333334`, 11.029166666666669`,
9.916666666666668`, 8.637500000000001`, 10.629166666666666`,
11.399999999999999`, 12.962500000000002`, 13.320833333333333`,
16.14166666666667`, 16.425`, 15.149999999999995`, 13.791666666666664`,
12.908333333333331`, 10.83333333333333`, 8.674999999999997`, 9.7625`,
10.595833333333333`, 9.266666666666666`, 9.0625`, 9.333333333333332`,
10.866666666666664`, 12.741666666666664`, 12.158333333333333`,
14.470833333333331`, 15.666666666666668`, 15.625`, 14.891666666666667`,
13.454166666666662`, 10.083333333333332`, 8.029166666666663`,
9.504166666666666`, 9.683333333333334`, 8.879166666666666`,
8.691666666666663`, 7.862499999999999`, 10.416666666666668`,
12.137500000000003`, 10.9`, 14.554166666666667`, 17.82083333333333`,
20.55416666666666`, 20.104166666666664`, 17.129166666666663`,
12.866666666666667`, 9.112499999999997`, 8.5125`, 9.245833333333334`,
8.624999999999998`, 8.2375`, 7.391666666666666`, 9.820833333333333`,
11.6375`, 13.145833333333337`, 17.8125`, 22.883333333333333`,
25.108333333333334`, 25.162499999999998`, 20.825`, 17.416666666666664`,
13.7`, 11.683333333333332`, 11.283333333333331`, 10.687500000000002`,
8.691666666666666`, 6.866666666666665`, 12.6875`, 16.437499999999996`,
19.112499999999997`, 22.66666666666667`, 28.09583333333333`,
28.758333333333333`, 27.95833333333333`, 23.349999999999998`,
21.40416666666666`, 17.720833333333335`, 14.187499999999995`, 14.625`,
16.724999999999998`, 12.983333333333334`, 9.787499999999998`,
17.379166666666663`, 22.091666666666672`, 24.7125`, 27.208333333333332`,
32.89583333333333`, 32.98333333333333`, 30.199999999999992`,
25.762499999999996`, 24.854166666666664`, 20.595833333333335`,
16.858333333333334`, 17.674999999999997`, 19.870833333333334`, 17.95`,
13.591666666666669`, 19.870833333333334`};
X1 = {16.625`, 14.437499999999996`, 13.629166666666666`, 11.016666666666666`,
9.254166666666666`, 9.483333333333334`, 11.029166666666669`,
9.916666666666668`, 8.637500000000001`, 10.629166666666666`,
11.399999999999999`, 12.962500000000002`, 13.320833333333333`,
16.14166666666667`, 16.425`, 15.149999999999995`, 13.791666666666664`,
12.908333333333331`, 10.83333333333333`, 8.674999999999997`, 9.7625`,
10.595833333333333`, 9.266666666666666`, 9.0625`, 9.333333333333332`,
10.866666666666664`, 12.741666666666664`, 12.158333333333333`,
14.470833333333331`, 15.666666666666668`, 15.625`, 14.891666666666667`,
13.454166666666662`, 10.083333333333332`, 8.029166666666663`,
9.504166666666666`, 9.683333333333334`, 8.879166666666666`,
8.691666666666663`, 7.862499999999999`, 10.416666666666668`,
12.137500000000003`, 10.9`, 14.554166666666667`, 17.82083333333333`,
20.55416666666666`, 20.104166666666664`, 17.129166666666663`,
12.866666666666667`, 9.112499999999997`, 8.5125`, 9.245833333333334`,
8.624999999999998`, 8.2375`, 7.391666666666666`, 9.820833333333333`,
11.6375`, 13.145833333333337`, 17.8125`, 22.883333333333333`,
25.108333333333334`, 25.162499999999998`, 20.825`, 17.416666666666664`,
13.7`, 11.683333333333332`, 11.283333333333331`, 10.687500000000002`,
8.691666666666666`, 6.866666666666665`, 12.6875`, 16.437499999999996`,
19.112499999999997`, 22.66666666666667`, 28.09583333333333`,
28.758333333333333`, 27.95833333333333`, 23.349999999999998`,
21.40416666666666`, 17.720833333333335`, 14.187499999999995`, 14.625`,
16.724999999999998`, 12.983333333333334`, 9.787499999999998`,
17.379166666666663`, 22.091666666666672`, 24.7125`, 27.208333333333332`,
32.89583333333333`, 32.98333333333333`, 30.199999999999992`,
25.762499999999996`, 24.854166666666664`, 20.595833333333335`,
16.858333333333334`, 17.674999999999997`, 19.870833333333334`, 17.95`,
13.591666666666669`};
In[3]:= tuples = Thread[Transpose[{X1}] -> Y];
In[4]:= Length[tuples]
Out[4]= 100
In[5]:= train = Take[tuples, 70];
In[6]:= test = Take[tuples, -30];
In[12]:= cfunc = Predict[train, Method -> "NeuralNetwork",
PerformanceGoal -> "Quality"]
Out[12]= PredictorFunction[\[Ellipsis]]
In[13]:= YPrediction = Map[cfunc, test[[All, 1]]];
In[14]:= YObserved = test[[All, 2]];
In[15]:= Correlation[YPrediction, YObserved]
Out[15]= 0.721966
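One possible way to make the run above deterministic (an assumption that the variation comes from random initialization; the RandomSeeding option is available in recent Mathematica versions) would be:

    cfunc = Predict[train, Method -> "NeuralNetwork",
      PerformanceGoal -> "Quality", RandomSeeding -> 1234]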
In[16]:= ListLinePlot[{YPrediction, YObserved}]

M.A. Ghorbani, 2019-06-08

Solve a systems of equations with a sum in it?
https://community.wolfram.com/groups/-/m/t/1706439
COMPLETE EDIT:
I am trying to solve a symbolic equation and I've heard from some people that Mathematica is able to do it and from others that it isn't possible. It is actually quite easy to solve by hand but I want to learn Mathematica and I would be great if Mathematica is capable of doing something like this.
The problem is the following:
[![enter image description here][1]][1]
Mathematica Code
Subscript[X, t] ==
(1 - \[Theta])*Sum[\[Theta]^(k - 1)*(Subscript[P, t - k] - Subscript[P, t - k - 1]), {k, 1, t - 1}] +
\[Theta]^(t - 1)*1; Subscript[NN, t] == Subscript[X, t]/(\[Gamma]*\[Sigma]^2);
Subscript[M, t] == (Subscript[D, t] - (T - t - 1)*\[Gamma]*\[Sigma]^2*Q - Subscript[P, t])/(\[Gamma]*\[Sigma]^2);
u*Subscript[NN, t] + \[Mu]*Subscript[M, t]==Q; u == \[Mu] - 1;
Inequality[1, Greater, \[Mu], GreaterEqual, 0
[1]: https://i.stack.imgur.com/A0HkM.png
People have been saying that it is possible and suggest that I should use RSolve, but I do not fully understand the function. The Mathematica guide says the following:
RSolve[eqn,a[n],n]
so for eqn I should put my equations in with a comma as separator.
a[n] are my variables I think.
n should be t and k.
Unfortunately, trying to do so doesn't give me the result I want.
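As a sanity check of the syntax sketched above, RSolve does work when the input is a genuine difference equation, e.g.:

    RSolve[{a[n + 1] == 2 a[n], a[1] == 1}, a[n], n]
    (* {{a[n] -> 2^(n - 1)}} *)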
All in all my problem does not only concern these equations I posted. I want to learn how to solve symbolic equations in general with Mathematica.
Thanks!

Anna Ebert, 2019-06-17

Compute the spatiotemporal temperature distribution of a liquid droplet?
https://community.wolfram.com/groups/-/m/t/1705790
Hello everyone!
This is probably the first time I'm posting here.
I want to compute the spatiotemporal temperature distribution of a liquid droplet, which is based on a Fourier series solution to the transient heat conduction equation in spherical coordinates. The solution requires us to compute the eigenfunctions and Fourier coefficients, along with a few other terms, before we can get to the final expression. These results have already been published; the solution algorithm, the results from the paper, and the references are given in the attached algorithm.doc file. The code I have solves all of that, but it is not able to replicate the original results even though all the inputs are the same. I haven't been able to figure out where I'm erring. Please let me know if you need more details. I would be grateful if anyone could pitch in to help. I have attached the files you may need during the discourse.
Regards
Zuhaib

Zuhaib Nissar, 2019-06-17

Result after vector transformation matrix
https://community.wolfram.com/groups/-/m/t/1705865
I'm an old guy trying to learn, first exposure to matrices.
Have a vector that points to (-4,2)
Have a transformation matrix (4x4)
Not looking for someone to solve it but can someone direct me to an area in the documentation that may help?
I'm new to Mathematica and totally out of my element with matrices (at this point) but struggling to learn.
TimTimothy Tapio2019-06-17T00:06:06Z[WSS19] Analyzing and visualizing data about the world's oldest people
https://community.wolfram.com/groups/-/m/t/1705844
Data about the oldest humans who have ever lived allow a small data set to reveal larger patterns about human life expectancy and general quality of life. The data used in this brief analysis can be found at [http://archive.is/4kwbk][1]. The first step in my data analysis was to extract the data and reformat it. First, I import the data and define the headers of the dataset.
orig = Import["http://archive.is/4kwbk", "Data"]
heads = {"#", "Birthplace", "Name", "Born", "Died", "Age", "Days",
"Race", "Sex", "Deathplace"}
Next, I convert the data to the `Dataset` form.
dataSimplified =
Dataset[Association[Thread[heads -> #[[;; 10]]]] & /@
Take[orig[[2, 1, 1]], {8, 72}]]
The data is relatively disorganized and complex, so many substitutions were necessary so that the `Interpreter` function was able to understand the data.
dataCorrected =
dataSimplified[
All, {"Birthplace" ->
StringReplace[{"U.S." ~~ ___ -> "USA", ___ ~~ "(UK)" -> "UK",
"Czechoslovakia" -> "Czech Republic",
"Germany (now Poland)" ~~ ___ -> "Poland",
"Canada (Que)" -> "Canada",
"Cape Verde (Portugal) [8]" -> "Portugal", ___ ~~ "Jamaica)" ->
"Jamaica"}], "Born" -> StringReplace[{"[1]" -> ""}]}]
The last line of the data is offset as Kane Tanaka is the current oldest person alive.
In[ ] = Normal[dataCorrected[65, All]]
Out[ ] = <|"#" -> 65, "Birthplace" -> "Japan", "Name" -> "Kane Tanaka",
"Born" -> "Jan. 2, 1903", "Died" -> "115*", "Age" -> "214*",
"Days" -> "EA", "Race" -> "F", "Sex" -> "Japan (Fukuoka)",
"Deathplace" -> "2018-"|>
We'll have to manually edit this data.
row65 = Dataset[<|"#" -> 65, "Birthplace" -> "Japan",
"Name" -> "Kane Tanaka", "Born" -> "Jan. 2, 1903", "Died" -> "n/a",
"Age" -> "n/a", "Days" -> "n/a", "Race" -> "EA", "Sex" -> "F"|>]
dataCorrected2 = Append[Drop[dataCorrected, -1], row65]
Using the `Interpreter` function, I was able to convert birthplaces to a format that is easily computable.
dataInterpreted =
dataCorrected2[
All, {"Birthplace" -> Interpreter["Country"],
"Born" -> Interpreter["Date"], "Died" -> Interpreter["Date"]}]
I then used the `GeoHistogram` command to create the following figure.
GeoHistogram[
Normal[dataInterpreted[All, "Birthplace"]], {"Rectangle", 15},
PlotTheme -> "Scientific"]
![Geo histogram][2]
The figure displays the density of the world's oldest people in different regions.
**Analysis:** This figure supports the hypothesis of the [*Global South*][3]. This controversial hypothesis, which is often said to overgeneralize, suggests that individuals living south of the equator are more likely to have a lower quality of life than those living north of the equator. The figure supports the hypothesis: better quality of life is associated with longer life expectancy, and the figure suggests that those who reign as the oldest humans are more likely to be found north of the equator.
We can also observe similar disparities across races. The following figure was generated using the `PieChart` function.
racecounts = Counts[Normal[dataInterpreted[All, "Race"]]]
PieChart[Values[racecounts],
ChartLabels -> {"White", "Japanese", "Black", "Hispanic",
"East Asian", "Multiracial"}]
![enter image description here][4]
This figure displays the proportion of those who have reigned as the oldest human on earth, by race. Note that Japanese is listed as a separate race in accordance with the original data. **Analysis:** This figure shows that a large majority of those who have reigned as oldest person are white. This data is reflective of general, global racial inequity.
Finally, we can look at some statistics of this elite group. The distribution of their ages can be generated using the `Histogram` function.
lifeLengthData =
Drop[Quantity[Normal[dataInterpreted[All, "Age"]], "Years"] +
Quantity[Normal[dataInterpreted[All, "Days"]], "Days"], -1]
Histogram[lifeLengthData, {600}, Frame -> True,
FrameLabel -> {"Age (days)", "Frequency"}]
![enter image description here][5]
Note that the last value in the data is dropped, as Kane Tanaka is still living.
**Analysis:** What is particularly interesting about this data is the relative symmetry it exhibits. Using the `Skewness` function, I found that the skewness of this distribution is only `.174`, which is relatively low.
N@Skewness[lifeLengthData]
This is unexpected, as age distributions typically have a strong positive skew. If we interpret this data as a sample from a theoretical distribution, we can conduct inference tests. The test of interest here is the Mardia skewness test for normality, which assesses whether the observed skewness is significantly different from the skewness produced by random sampling from a normal distribution. The Wolfram Language can run this test with `MardiaSkewnessTest`. Testing the data returns a p-value of `.569`, above almost all significance thresholds.
MardiaSkewnessTest[N@lifeLengthData]
Running the function `DistributionFitTest` on the data returns the following table.
DistributionFitTest[
N@lifeLengthData, Automatic, {"TestDataTable", All}]
![enter image description here][6]
Of all the tests appropriate for the data, none flagged it as significantly different from variates of the normal distribution. Again, this is unexpected because of the typically strong positive skew of age distributions.
[1]: http://archive.is/4kwbk
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-16at12.30.01PM.png&userId=1705502
[3]: https://en.wikipedia.org/wiki/Global_South
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-16at12.39.43PM.png&userId=1705502
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-16at2.16.27PM.png&userId=1705502
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-06-16at2.25.59PM.png&userId=1705502Emmy Blumenthal2019-06-16T20:29:20ZSolve an ODE using Runge-Kutta methods?
https://community.wolfram.com/groups/-/m/t/1702767
Hello All:
Hope everyone is doing well. I am attempting to create functions that use Runge-Kutta to solve an ODE. At the moment results are very close to exact solutions in a small example.
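For context, a single step of the classical fourth-order Runge-Kutta method might be sketched in the Wolfram Language as follows (a generic sketch, not the code in the attached notebook; `f[t, y]` is assumed to give y'(t)):

    rk4Step[f_, {t_, y_}, h_] := Module[{k1, k2, k3, k4},
      k1 = f[t, y];
      k2 = f[t + h/2, y + h k1/2];
      k3 = f[t + h/2, y + h k2/2];
      k4 = f[t + h, y + h k3];
      {t + h, y + h (k1 + 2 k2 + 2 k3 + k4)/6}];
    (* e.g. iterate y' = -y from y(0) = 1 with step 0.1 *)
    NestList[rk4Step[Function[{t, y}, -y], #, 0.1] &, {0., 1.}, 10]
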
My question concerns the need to run "Evaluate Notebook" twice to get good results after opening the notebook. Why is this happening?
Any advice is welcome!!! I highly appreciate your feedback letting me know where I went wrong.
The attached file contains my attempts.Marcolino R-M2019-06-11T15:34:05ZUse a function from the Wolfram Function Repository?
https://community.wolfram.com/groups/-/m/t/1704380
I am unable to use the functions in the repository. I go to the WFR and select a function, such as "HessianDeterminant". I don't see any instructions about what to do. There is a clipboard button attached to the function name, which I click; then I can paste
ResourceFunction["HessianDeterminant"]
into my notebook (Version 12 running on a MacBook). I execute the cell and get an error message:
CloudObject::srverr
Now I'm flummoxed. What to do?
--MarkMark Fisher2019-06-14T11:09:30ZUS Climate Change at the County Level
https://community.wolfram.com/groups/-/m/t/1671437
![US County century-scale climate change][3]
(Edited 4/29/2019) Continuing my efforts to visualize climate change in the US, I found that the US National Oceanic and Atmospheric Administration (NOAA) maintains a dataset of monthly average temperatures for every county in the 48 contiguous US States from 1895 to the present. To avoid burdening the server during testing, I downloaded the file to my computer, but it is available [here][1]. On the NOAA ftp site the file will have the name climdiv-tmpccy-v1.0.0-YYYYMMDD, where YYYY, MM and DD correspond to the year, month and date when the file was updated. When I downloaded the file its name was climdiv-tmpccy-v1.0.0-20190408.
It takes some time to read in all 388,375 lines of this file:
cdata = SemanticImport["climdiv-tmpccy-v1.0.0-20190408.txt",
PadRight[{"String"}, 13, "Real"]];
The dataset consists of 13 columns: the first is a string of digits encoding the year and something that is almost -- but not quite -- the FIPS code for the county. The next 12 are the monthly average temperatures in degrees Fahrenheit in that county for January through December of that year. For clarity, I like to add column headers first:
cdata = cdata[All,
AssociationThread[
Prepend[Table[
DateString[d, "MonthNameShort"], {d,
DateRange[{0, 1}, {0, 12}, "Month"]}], "Code"], Range[13]]];
To correct the FIPS codes, we require the FIPS codes for the 48 contiguous States:
stateFIPS =
EntityValue[
EntityList[
EntityClass["AdministrativeDivision", "ContinentalUSStates"]],
"FIPSCode"];
Now we can add a column containing the correct FIPS code for each county. While we're at it we add a column for the year:
cdata = cdata[
All, <|"FIPS" ->
stateFIPS[[ToExpression[StringTake[#Code, 2]]]] <>
StringTake[#Code, {3, 5}],
"Year" -> ToExpression[StringTake[#Code, -4]], #|> &];
Then I build an association of FIPS codes and entity values for the corresponding counties:
counties =
EntityList[
EntityClass["AdministrativeDivision", "USCountiesAllCounties"]];
massoc = AssociationThread[EntityValue[counties, "FIPSCode"],
counties];
With this association, we can add a column to the dataset corresponding to the entity value of each county. In the same statement I also add a column for the average annual temperature:
cdata = cdata[
All, <|"County" -> massoc[#FIPS], #,
"Tavg" ->
Mean[WeightedData[{#Jan, #Feb, #Mar, #Apr, #May, #Jun, #Jul, \
#Aug, #Sep, #Oct, #Nov, #Dec}, {31, 28 + Boole[LeapYearQ[#Year]], 31,
30, 31, 30, 31, 31, 30, 31, 30, 31}]]|> &];
The file includes data for the first couple of months of 2019, so I strip that out:
cdata = cdata[Select[#Year <= 2018 &]];
I tried several different ways of smoothing the temperatures and decided on a moving average of the past 10 years:
s = cdata[GroupBy["County"],
With[{ma = MovingAverage[#, 10]},
Last[ma] - ma[[-100]]] &, "Tavg"];
I experimented with the color functions available in Mathematica, but decided to build my own using a color palette from the [ColorBrewer][2] website.
col1 = RGBColor[{255, 245, 240}/255];
col2 = RGBColor[{254, 224, 210}/255];
col3 = RGBColor[{252, 187, 161}/255];
col4 = RGBColor[{252, 146, 114}/255];
col5 = RGBColor[{251, 106, 74}/255];
col6 = RGBColor[{239, 59, 44}/255];
col7 = RGBColor[{203, 24, 29}/255];
col8 = RGBColor[{165, 15, 21}/255];
col9 = RGBColor[{103, 0, 13}/255];
qq = Values[s];
zz = Table[Min[qq] + i*(Max[qq] - Min[qq])/9, {i, 9}];
cfunc[x_?NumericQ] := Which[
x <= zz[[1]], col1,
x <= zz[[2]], col2,
x <= zz[[3]], col3,
x <= zz[[4]], col4,
x <= zz[[5]], col5,
x <= zz[[6]], col6,
x <= zz[[7]], col7,
x <= zz[[8]], col8,
x <= zz[[9]], col9]
Then it's just a matter of building the legend and displaying the map:
legend = SwatchLegend[{col1, col2, col3, col4, col5, col6, col7, col8,
col9}, {"-0.30 - 0.28", " 0.28 - 0.87", " 0.87 - 1.45",
" 1.45 - 2.04", " 2.04 - 2.62", " 2.62 - 3.21", " 3.21 - 3.79",
" 3.79 - 4.38", " 4.38 - 4.96"},
LegendMarkers ->
Graphics[{EdgeForm[Black], Opacity[1], Rectangle[]}],
LegendLabel -> "\[CapitalDelta]T(\[Degree]F)",
LegendFunction -> (Framed[#, RoundingRadius -> 5] &),
LegendMargins -> 5];
climvis =
GeoRegionValuePlot[s, Frame -> True, FrameTicks -> None,
FrameLabel -> {"Change in Mean Annual Temperature for US Counties, \
1919-2018"}, LabelStyle -> Larger,
PlotLegends -> Placed[legend, Right], ColorFunction -> cfunc,
ColorFunctionScaling -> False,
PlotStyle -> Directive[EdgeForm[{Thin, White}]],
GeoBackground -> None,
GeoProjection -> {"LambertAzimuthal",
"Centering" -> GeoPosition[{30, -195/2}]},
PlotRange -> {{-0.37, 0.38}, {-0.13, 0.38}}, ImageSize -> 800]
And finally here is the result. This is of course just a "proof of concept" exercise. There are any number of ways to analyze this dataset and visualize the results.
![US County century-scale climate change][3]
[1]: ftp://ftp.ncdc.noaa.gov/pub/data/cirs/climdiv/
[2]: http://colorbrewer2.org
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=figmean.jpg&userId=66744John Shonder2019-04-28T15:26:25ZMathematica 11.0.1 now available for the Raspberry Pi
https://community.wolfram.com/groups/-/m/t/1028536
Hi all,
Mathematica 11.0.1 is now available for the Raspberry Pi on Raspbian. If you already have Mathematica installed on your Raspberry Pi, you can update with the following:
sudo apt-get update && sudo apt-get upgrade wolfram-engine
If you don't already have Mathematica installed you can run the following commands to install it:
sudo apt-get update && sudo apt-get install wolfram-engine
New features for the Raspberry Pi include:
- Neural Network features, including constructing custom nets: http://reference.wolfram.com/language/guide/NeuralNetworks.html
- Audio processing features, including out-of-core streaming of large sounds as well as advanced audio processing: http://reference.wolfram.com/language/guide/AudioProcessing.html
- Travel-based path planning functions, including path finding from one city to another: http://reference.wolfram.com/language/guide/LocationsPathsAndRouting.html
- Channel-based communication for sending and receiving messages: http://reference.wolfram.com/language/guide/Channel-BasedCommunication.html
- Powerful and easy scripting through WolframScript: http://reference.wolfram.com/language/ref/program/wolframscript.html
- And many more: http://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn11.html
Additionally, with the new release of WolframScript on the Raspberry Pi, you can install WolframScript standalone and run it without a local kernel against the cloud using the `-cloud` option. This means you can use the Wolfram Language through WolframScript on the Raspberry Pi without having wolfram-engine installed by running it against the cloud. See the documentation page for WolframScript for more details.Ian Johnson2017-03-09T21:02:49ZWolfram|Alpha says "invalid appid"... how to rectify?
https://community.wolfram.com/groups/-/m/t/957549
On my Samsung s6, my wolfram alpha app (and even the Web version in the chrome app) continually responds to my queries with "invalid appid". Does anyone know how to overcome this technical difficulty? Thanks for your time everyone!Aaron Bondy2016-11-06T07:33:39ZCan a type be set globally using inner/outer and also be replaceable?
https://community.wolfram.com/groups/-/m/t/1700660
## Problem Description & Motivation: Using Non-SI-Units for time ##
This post culminates earlier questions I voiced on Community ([Unit checking and making use of unit attributes][1] and [Why can't a parameter be used to set the unit attribute?][2]). I am writing a library for System Dynamics modeling for business, economics, and the social sciences. From what I have learned in the discussions cited above, when treating time in models one should preferably do what engineers do: use SI units whenever possible and then use `displayUnit` to convert them to whatever you want.
There are a few inconveniences though: The `displayUnit` for the Modelica var `time` cannot be modified and unfortunately in diagrams there is no nice drop-down selection for a different displayUnit. Also, everything internally is stored in seconds making parameter values rather bizarre when we think in months or years.
So, writing a library, I would like the user to make a choice of a global `type` called `ModelTime`, which ideally would be declared as `inner` and `replaceable` at some top-level class. Then any component within a model written using the library could use the global type to consistently treat any time-related vars.
## Minimal Example ##
The following example shows how I would like to implement this.
- `package Units` declares two Non-SI Unit types ( `Time_year`, `Time_month`)
- `package Interfaces` contains a partial model class `GenericSimulationModel` which will be the top-level scope for any model written using the library. It is supposed to provide the type `ModelTime` as an `inner` and `replaceable` class
- `package Components` defines a simple `block` class that uses `ModelTime` via an `outer` construct to define its `output y` that simply shows `time` in the globally chosen units of time
- `model Example` ties all of this together to provide an example of how any model using the library should work out
Here is the code:
model MinimalExample
package Units
type Time_year = Real(final quantity = "Time", final unit = "yr");
type Time_month = Real(final quantity = "Time", final unit = "mo");
end Units;
package Interfaces
partial model GenericSimulationModel "Top-level model scope providing global vars"
inner replaceable type ModelTime = Years "Set to : Months, Years";
protected
type Years = Units.Time_year;
type Months = Units.Time_month;
end GenericSimulationModel;
end Interfaces;
package Components
block ComponentUsingTime
outer type ModelTime = MinimalExample.Units.Time_year;
output ModelTime y;
equation
y = time;
end ComponentUsingTime;
end Components;
model Example
extends Interfaces.GenericSimulationModel(
redeclare replaceable type ModelTime = Months
);
Components.ComponentUsingTime c;
end Example;
equation
end MinimalExample;
While this model compiles without error, it does not work as I intended: *The redeclared type is not used within the component, so it remains set to "years" and not "months".*
**What can I do to achieve what I want to do?**
*Note: I crossposted the question on [StackOverflow][3].*
[1]: https://community.wolfram.com/groups/-/m/t/1451968
[2]: https://community.wolfram.com/groups/-/m/t/1684057
[3]: https://stackoverflow.com/q/56491749/5363743Guido Wolf Reichert2019-06-07T10:05:08ZHow to use flexible array sizes inside expandable connectors?
https://community.wolfram.com/groups/-/m/t/1706043
> A support case with the identification [CASE:4272754] was created.
----------
## Motivation ##
Building more complex components (e.g. subsystems in a larger model) inevitably leads to a growing number of `connectors`. To make this more manageable we may use `records` or `arrays`. `Records` as part of a connector have, as far as I know, the disadvantage that we may not easily access a single element within the record when making a connection. An `array` (e.g. a flattened list of `output` for a subsystem) on the other hand allows us to connect individual parts (e.g. a subrange or an individual element inside the array) <s>but does not allow to assign different units to elements inside the same array. Also</s> but flattened lists quickly become unwieldy.
The above considerations seem to be the reason for the development of the `expandable connector` class, which can be used to model [*DataBus*][1]-like structures (Note: Why are Dymola pop-up windows referenced in the Wolfram System Modeler documentation? ).
A few "mysteries" remain though which I would like to inquire about.
## Adding variables without predefinition ##
In the following simple example we simply define an empty *DataBus* structure and then add elements using `connect()`:
expandable connector DataBus
end DataBus;
model SimpleModel
DataBus dataBus "An empty bus";
Modelica.Blocks.Interfaces.RealOutput x[ nElements ]( unit = "widgets" ) "A vector of real elements";
parameter Integer nElements = 2 "Length of x vector";
equation
x = ones( 2 ); // assign dummy results to x
connect( x, dataBus.x );
end SimpleModel;
Entering this model in the Model Center and then simulating it reveals that `dataBus.x` has the correct dimension, but does not have the appropriate units (and other attributes). What to do?
## Adding variables with predefinition (first attempt) ##
In order to make use of *units* and other *attributes* (e.g. maybe using predefined *types*) we will now declare a variable inside the `expandable connector`. Since we do not know the size of the vector `x` in advance (its size is a `parameter`), we will try to define `x` inside the `DataBus` in a flexible fashion:
expandable connector DataBus
Real x[:] ( unit = "widgets" );
end DataBus;
model SimpleModel
DataBus dataBus "A predefined bus";
Modelica.Blocks.Interfaces.RealOutput x[ nElements ]( unit = "widgets" ) "A vector of real elements";
parameter Integer nElements = 2 "Length of x vector";
equation
x = ones( 2 );
connect( x, dataBus.x );
end SimpleModel;
Unfortunately, this model will give errors upon validation:
![Error message][2]
I do not understand why this use of `[:]` is throwing an error - a similar example is given by Peter Fritzson in chapter 5.9.5 of "*Principles of Object-Oriented Modeling and Simulation with Modelica 3.3*" - also see the reference to the Modelica Specification in the update below.
(Note: As I am [told][3], this code runs without problems in Dymola.)
So, perhaps we need to somehow pass the `parameter` `nElements` (defining the array size) to the `expandable connector`?
## Adding variables with predefinition (second attempt) ##
To make the size of the vector `x` known to the `expandable connector DataBus`, we make the bus a sub-component of the model and pass on the `parameter` `nElements` via `inner` and `outer` constructs:
model SimpleModel
DataBus dataBus "A predefined bus";
Modelica.Blocks.Interfaces.RealOutput x[2](unit = "widgets") "A vector of real elements";
inner parameter Integer nElements = 2 "Length of x vector";
expandable connector DataBus
Real x[ nElements ];
outer parameter Integer nElements;
end DataBus;
equation
x = ones(2);
connect(x, dataBus.x);
end SimpleModel;
But again, the model will not validate without errors:
![error message][4]
Unfortunately, neither adding annotation( Evaluate = true ) nor changing the `parameter` to a `constant` helps.
**How then can predefined variables inside an expandable connector be created while still allowing structural variability (e.g. have parametric array sizes)?**
**Update**:
In the [Modelica Specification][5] (Version 3.2 Revision 2) we find in **Section 9.1.3 Expandable Connectors**:
> Before generating connection equations non-parameter scalar variables
> and non-parameter array elements declared in expandable connectors
> are marked as only being potentially present. A non-parameter array
> element may be declared with array dimensions “:” indicating that the
> size is unknown. This applies to both variables of simple types, and
> variables of structured types.
[1]: https://reference.wolfram.com/system-modeler/libraries/Modelica/Modelica.Blocks.Examples.BusUsage.html
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2558ScreenShot.PNG&userId=566944
[3]: https://stackoverflow.com/q/56634321/5363743
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1568ScreenShot.PNG&userId=566944
[5]: https://www.modelica.org/documents/ModelicaSpec32Revision2.pdfGuido Wolf Reichert2019-06-17T10:41:50ZUse NIntegrate In parallel?
https://community.wolfram.com/groups/-/m/t/1704570
The integrand I calculate is very complex and location-dependent. It takes a lot of time to step through a two-dimensional region using a "while" loop structure, so how can I use NIntegrate in parallel? I am a complete beginner in Mathematica.Zheng Li2019-06-14T02:53:51ZPerform calculations with the $ symbol within the elements a dataset?
https://community.wolfram.com/groups/-/m/t/1705611
Hello Wolfram Mathematica community,
I've been playing around with Mathematica and I've isolated a set of data on a website; however, there are dollar signs within the data set. Is it still possible to perform mathematical operations with the $ symbol within the elements of the set? I tried, and Mathematica treated the dollar sign as a variable.William Ferguson2019-06-15T23:02:10ZRadical axis of two circles lying outside each other
https://community.wolfram.com/groups/-/m/t/1705270
I would like to share an interesting way to generate [radical axis][1] for two circles given in the title by `GeometricScene`.
![def][2]
Usually the procedure (the compass-and-ruler version) to find the radical axis of two fixed detached circles O1 and O2 is the following: introduce a new circle O3 and find two sets of intersections, {P1,P2} and {Q1,Q2}.
![intersection][3]
The two secant lines meet at `P`. Do this again with a smaller circle centered at O3 and find a new `P'`. By joining $P$ and $P'$ we have found the radical axis.
We can take advantage of `RandomInstance` to use the tangent length as the result of the power of a point with respect to a circle:
gs=GeometricScene[{{"P1","P2","P","O1"-> {0,0},"O2"-> {7,0},"O3"},{r}},{
cir1=Circle["O1",2],cir2=Circle["O2",3],
GeometricAssertion[{cir1,Circle["O3",r]},{"Tangent","P1"}],
GeometricAssertion[{Line[{"P","P1"}],Circle["O3",r]},{"Tangent","P1"}],
GeometricAssertion[{cir2,Circle["O3",r]},{"Tangent","P2"}],
GeometricAssertion[{Line[{"P","P2"}],Circle["O3",r]},{"Tangent","P2"}]
}
];
instance = RandomInstance[gs, 2, RandomSeeding -> 1001]
We introduce a third circle $O_3$ with a moving center, tangent to the two fixed circles. The power of $P$ w.r.t. $O_1$ is automatically transferred to $O_2$ because the two tangent segments from $P$ to $O_3$ are of the same length. Because we still need two points $P$ to determine the radical axis, we simply use `RandomInstance[gs, 2]` to create two circles $O_3$.
![tangents][4]
radicalLinePts="P"/.Through[instance["Points"]]
{{3.14286,-1.62054},{3.14286,-5.25494}}
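The x-coordinate agrees with the closed-form position of the radical axis: for circles of radii $r_1, r_2$ with centers a distance $d$ apart on the x-axis, the axis sits at $x = (d^2 + r_1^2 - r_2^2)/(2d)$. Here:

    (7^2 + 2^2 - 3^2)/(2*7) // N
    (* 3.14286 *)
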
We can visualize the tangent rays shooting from any point on the radical axis:
![anim][5]
[1]: http://mathworld.wolfram.com/RadicalLine.html
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=def.png&userId=23928
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=intersection.png&userId=23928
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=tl.png&userId=23928
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1113loop.gif&userId=23928Shenghui Yang2019-06-16T09:27:21Z[✓] Plot Video Conference engagement levels from log files?
https://community.wolfram.com/groups/-/m/t/1705346
I recently ran a webinar that had a lot of participants. Not all the participants logged in at the start of the event, and not all stayed to the end. The log file contains data on participant numbers in the format
Email, login date/time, logout date/time
I would like to produce a bar chart showing the number of participants logged in by minute, over the 60 minutes of the webinar. I was thinking of doing something like this:
- Generate Range[60]
- Sort the Dataset by login date/time
- For each value in the range, map a function over each record, and if they had not logged out, increment the value in a list for that minute.
- Plot the list
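That plan might be sketched as follows (a sketch assuming the log has been read into a list `log` of `{email, loginTime, logoutTime}` triples with `DateObject` times, and `start` is the webinar start time):

    countAt[minute_] := With[{t = DatePlus[start, {minute, "Minute"}]},
      Count[log, {_, in_, out_} /; in <= t <= out]];
    BarChart[Table[countAt[m], {m, 0, 59}], ChartLabels -> Range[0, 59]]
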
However, this all seems rather procedural, and given that I am new'ish to Mathematica, I am wondering if there is a much better approach?Andrew Burnett2019-06-15T18:42:42ZVisualize European Union in a previous year?
https://community.wolfram.com/groups/-/m/t/1705626
Hi everyone,
I am trying to produce a map of the European Union in a previous year (for example 2004 when the EU had fewer countries).
Here is the code for mapping the EU as it currently stands, and that code is quite simple:
GeoGraphics[{EdgeForm[Black], FaceForm[Red],Polygon[EntityClass["Country","EuropeanUnion"]]}]
Could someone please tell me what I should add to graph a former year?
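One workaround (a sketch) is to list the membership explicitly. Before the May 2004 enlargement the EU had fifteen members (the EU-15), so:

    eu15 = Entity["Country", #] & /@ {"Austria", "Belgium", "Denmark",
       "Finland", "France", "Germany", "Greece", "Ireland", "Italy",
       "Luxembourg", "Netherlands", "Portugal", "Spain", "Sweden",
       "UnitedKingdom"};
    GeoGraphics[{EdgeForm[Black], FaceForm[Red], Polygon /@ eu15}]
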
Thank you.
StanleyStanley Max2019-06-16T00:47:34ZCompare two "equivalent" graphs?
https://community.wolfram.com/groups/-/m/t/1703001
Hi everyone,
I have seemingly easy problem that I really can't wrap my head around. I am not that experienced with graphs, so I apologize in advance if I am using incorrect terminology.
I have some graphs where I am not sure in which order the nodes have been specified, and I want to see if they are "equal" (not sure about the terminology). Say I have:
g1 = Graph[{1 <-> 2}];
g2 = Graph[{2 <-> 1}];
If I do
g1 == g2
Then the output is false, even though as far as I understand it they are describing the same thing. I know I can use **IsomorphicGraphQ** but that doesn't seem to work if the vertex names convey some meaning. Say I have:
g3 = Graph[{1 <-> 2, 2 <-> 3}];
g4 = Graph[{3 <-> 2, 2 <-> 1}];
g5 = Graph[{1 <-> 3, 3 <-> 2}];
Then both of these return **True**:
IsomorphicGraphQ[g3, g4]
IsomorphicGraphQ[g3, g5]
But in g3 and g4, vertex 2 is the one that has two connections. In my application, that is not equivalent to the graph where vertex 3 has two connections. My current implementation is to extract the edges, sort them twice, and then compare them. It works, but it seems super inelegant.
In[]: Sort[Map[Sort, EdgeList[g3]]] === Sort[Map[Sort, EdgeList[g4]]]
Out[]: True
In[]: Sort[Map[Sort, EdgeList[g3]]] === Sort[Map[Sort, EdgeList[g5]]]
Out[]: False
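Packaged as a predicate, the double-sort comparison reads (a sketch; note it ignores isolated vertices, which would need a separate `VertexList` check):

    sameGraphQ[a_Graph, b_Graph] :=
      Sort[Sort /@ EdgeList[a]] === Sort[Sort /@ EdgeList[b]];
    (* sameGraphQ[g3, g4] gives True, sameGraphQ[g3, g5] gives False *)
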
It seems like there must be some better way. Does anyone know of one?
Thanks ever so much!Patrik Ekenberg2019-06-11T21:11:34Z[✓] Add words to the stop word list?
https://community.wolfram.com/groups/-/m/t/1705077
I want to generate a WordCloud from people's submitted questions. DeleteStopWords is a useful start, but there is also a collection of technical terms that I want to remove from the WordCloud. Is there a way to temporarily add to the stop word list from within a notebook?Andrew Burnett2019-06-15T12:32:13ZRepresentation of symmetric group
https://community.wolfram.com/groups/-/m/t/1637345
I can't seem to find any information/algorithms on generating matrix representations of the symmetric group. Can someone point me in the right direction?John Garrison2019-03-22T05:31:30Z[✓] Visualize European Union without United Kingdom?
https://community.wolfram.com/groups/-/m/t/1705143
Hi everyone,
I am trying to create a map of the European Union but exclude the UK (thus, the EU after Brexit, if Brexit occurs). The code for mapping the EU was easy enough:
GeoGraphics[
{EdgeForm[Black], FaceForm[Red],
Polygon[EntityClass["Country", "EuropeanUnion"]]}
]
Please, however, what do I add to not include the UK?
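One possible approach (a sketch) is to remove the UK entity from the class's member list with `Complement`:

    euSansUK = Complement[
       EntityList[EntityClass["Country", "EuropeanUnion"]],
       {Entity["Country", "UnitedKingdom"]}];
    GeoGraphics[{EdgeForm[Black], FaceForm[Red], Polygon /@ euSansUK}]
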
Thank you.
StanleyStanley Max2019-06-15T00:42:52ZUse a natural cubic spline to create a function that interpolates data?
https://community.wolfram.com/groups/-/m/t/1705047
Hello,
I have to use a natural cubic spline to create a function that interpolates set of data point.
Is possible to do this by using Interpolation or InterpolatingFunction methods?
Looking at the Interpolation method, I saw that there is an option InterpolationOrder, but playing with it a bit, I was not able to reach a good result.
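For what it's worth, `Interpolation` also accepts `Method -> "Spline"`, which fits B-splines rather than the default Hermite pieces; whether its end conditions match the "natural" (zero second derivative) boundary you need would have to be checked:

    data = {{0, 0}, {1, 2}, {2, 1}, {3, 3}};
    f = Interpolation[data, Method -> "Spline", InterpolationOrder -> 3];
    Plot[f[x], {x, 0, 3}]
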
Thank you in advance,
best regards
Ptarpanelli paolo2019-06-14T16:31:21Z[✓] Implement a flat right interpolation?
https://community.wolfram.com/groups/-/m/t/1703361
Hello,
I am using the Interpolation function with my table and I would like to implement a flat-right interpolation; looking at InterpolationOrder -> 0, it seems to do a flat-left interpolation.
Is there any method available to perform the flat right?
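One workaround (a sketch): since `InterpolationOrder -> 0` holds the value from the left node (as observed above), shifting the y-values one step earlier makes each interval carry the value of its right endpoint:

    data = {{0, 1}, {1, 3}, {2, 2}, {3, 5}};
    {xs, ys} = Transpose[data];
    fRight = Interpolation[
       Append[Transpose[{Most[xs], Rest[ys]}], Last[data]],
       InterpolationOrder -> 0];
    (* now fRight[x] on [0, 1) returns 3, the value at the right node x = 1 *)
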
thank you
best regardstarpanelli paolo2019-06-12T14:47:06Z[✓] Create a letter frequency BarChart?
https://community.wolfram.com/groups/-/m/t/1704466
Hi all,
I'm looking for a way to produce a bar chart that will display the frequencies of letters in a given string, including empty bars for any letter that isn't in the string.
BarChart[LetterCounts["GIVEN STRING GOES HERE"],
ChartLabels -> Automatic]
This almost does what I want, but I would like to have the entire English uppercase alphabet, in alphabetical order, as the horizontal labels, and have empty bars where any letter is not contained in the given string.
Also, any way to turn this into a relative frequency bar chart? I.e. have the vertical measurements as count(letter)/count(total) instead of count(letter)
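One possible approach (a sketch): look up every letter of the alphabet in the counts with a default of 0, then divide by the total for relative frequencies:

    s = "GIVEN STRING GOES HERE";
    alphabet = CharacterRange["A", "Z"];
    counts = Lookup[LetterCounts[ToUpperCase[s]], alphabet, 0];
    BarChart[counts/Total[counts], ChartLabels -> alphabet]
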
Thanks for any help!Jess M2019-06-14T01:31:46Z[✓] Solve an equation for peptide deletion sequences?
https://community.wolfram.com/groups/-/m/t/1704239
Hi there,
I am trying to solve the equation
[known value]=a*113+b*147+c*99+d*163+e*106+f*101+g*186+h*97+i*103
for a number of different [known values], where each variable is limited to its own small set of nonnegative integers (from 0 up to at most 4).
However, I am unable to input this into Wolfram Alpha. I have tried
635=a*113+b*147+c*99+d*163+e*106+f*101+g*186+h*97+i*103 ; a=0,1,2 ; b=0,1 ; c=0,1 ; d=0,1 ; e=0,1 ; f=0,1,2,3,4 ; g=0,1,2,3 ; h=0,1 ; i=0,1
solve [635=a*113+b*147+c*99+d*163+e*106+f*101+g*186+h*97+i*103] over a=0,1,2 ; b=0,1 ; c=0,1 ; d=0,1 ; e=0,1 ; f=0,1,2,3,4 ; g=0,1,2,3 ; h=0,1 ; i=0,1
solve [635=a*113+b*147+c*99+d*163+e*106+f*101+g*186+h*97+i*103] over [a,b,c,d,e,f,g,h,i] where [a=0,1,2 ; b=0,1 ; c=0,1 ; d=0,1 ; e=0,1 ; f=0,1,2,3,4 ; g=0,1,2,3 ; h=0,1 ; i=0,1]
635=a*113+b*147+c*99+d*163+e*106+f*101+g*186+h*97+i*103 AND a=0,1,2 AND b=0,1 AND c=0,1 AND d=0,1 AND e=0,1 AND f=0,1,2,3,4 AND g=0,1,2,3 AND h=0,1 AND i=0,1
635=a*113+b*147+c*99+d*163+e*106+f*101+g*186+h*97+i*103 ; a={0,1,2}...
...and a bunch of other guesses. Basically I need someone to tell me how to limit different variables to individual sets of numbers. Wolfram Alpha examples and Google are not of much help.
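In Mathematica (rather than Wolfram|Alpha) the per-variable restrictions can be written directly as inequalities over the integers; a sketch for the 635 case from above:

```mathematica
Solve[
 635 == 113 a + 147 b + 99 c + 163 d + 106 e + 101 f +
    186 g + 97 h + 103 i &&
  0 <= a <= 2 && 0 <= b <= 1 && 0 <= c <= 1 && 0 <= d <= 1 &&
  0 <= e <= 1 && 0 <= f <= 4 && 0 <= g <= 3 && 0 <= h <= 1 &&
  0 <= i <= 1,
 {a, b, c, d, e, f, g, h, i}, Integers]
```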
For those interested in the background: I work as a chemist synthesizing peptides. Sometimes in complicated sequences some amino acids won't give full conversion during coupling, leading to deletion sequences in the final product mixture. The number left of the = is the mass differential to the expected product, and the variables' coefficients are the masses of the individual amino acid building blocks. By finding out which of the amino acids are missing, I'll be able to take measures to improve coupling yields in subsequent attempts.

Lutz Adam, 2019-06-13T15:35:11Z

Interpret a LinearModelFit ANOVATable?
https://community.wolfram.com/groups/-/m/t/1704109
Having evaluated the code below,
data = {{0, 2}, {1, 0}, {2, 1}, {3, 8}, {4, 8}, {5, 6}, {6, 7}}
lm = LinearModelFit[data, {x, x^2}, x]
lm["ANOVATable"]
lm["ParameterTable"]
Show[ListPlot[data], Plot[lm[x], {x, 0, 10}], Frame -> True]
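As an aside, the p-values of both tables can also be extracted programmatically through LinearModelFit properties, which makes them easier to compare side by side (a sketch, assuming the `lm` fit from above):

```mathematica
lm["ANOVATablePValues"]   (* p-values from the ANOVA table *)
lm["ParameterPValues"]    (* p-values for the individual coefficients *)
```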
what is the meaning of the p-values listed in the ANOVATable (which statistic and hypothesis do they relate to)? Which table's p-values (ANOVATable or ParameterTable) should I use for inference about the significance of the regression model coefficients? Where can I find detailed information (the Mathematica help is very sparse on this)?

Denis S, 2019-06-13T09:11:17Z

Display polar ticks on a polar plot?
https://community.wolfram.com/groups/-/m/t/1704602
After plotting a polar diagram of Sin[3 t] with the following code, some polar tick labels are not fully displayed.
PolarPlot[Sin[3 t], {t, 0, 1}, PolarAxes -> Automatic, PolarTicks->{"Degrees", None}]
![note the some ticks are not fully displayed][1]
Does anyone have the same problem?
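If the labels are being clipped at the edge of the image, adding padding around the plot often helps (a sketch; the padding value of 30 is an arbitrary choice):

```mathematica
PolarPlot[Sin[3 t], {t, 0, 1}, PolarAxes -> Automatic,
 PolarTicks -> {"Degrees", None}, ImagePadding -> 30]
```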
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=polar.png&userId=1704270

Haobo Zhu, 2019-06-13T22:38:05Z

Get bordering entities of administrative division entities?
https://community.wolfram.com/groups/-/m/t/1704434
Hi everyone,
Does anyone know how to get a list of bordering entities from an administrative division entity? There are two properties, "BorderingCounties" and "BorderingStates", and both just return Missing["NotAvailable"] when I try to use them:
In[]: Entity["AdministrativeDivision", {"Stockholm", "Stockholm", "Sweden"}]["BorderingCounties"]
Out[]: Missing["NotAvailable"]
It seems the information is available from Wolfram|Alpha, though; for example, I can use:
In[]: WolframAlpha["Stockholm Municipality", {{"BorderingEntities:AdministrativeDivisionData", 1}, "ComputableData"}, InputAssumptions -> {"*MC.%7E-_*AdministrativeDivision-"}]
Out[]: {{"divisions: ", {"Ekerö", "Huddinge", "Järfälla", "Nacka", "Sollentuna", "Solna", "Sundbyberg", "Tyresö"}}}
But the syntax for the input assumptions in that call is weird to say the least, so it is hard to generalize. Since the information is present in Wolfram|Alpha, I was hoping it could also be obtained through the much easier-to-use entity framework, but I haven't found a solution.
Does anyone know if this can be done?
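One hypothetical geometric workaround (unverified; it assumes GeoEntities accepts the division's boundary as a region and "AdministrativeDivision" as the entity type) is to ask for the administrative divisions that touch the target's boundary, then remove the target itself:

```mathematica
stockholm = Entity["AdministrativeDivision",
   {"Stockholm", "Stockholm", "Sweden"}];
(* divisions intersecting the boundary of Stockholm's polygon *)
neighbors = Complement[
   GeoEntities[GeoBoundary[stockholm["Polygon"]],
    "AdministrativeDivision"],
   {stockholm}]
```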
Thanks!

Patrik Ekenberg, 2019-06-13T20:54:20Z

Alternating terms of two lists, as function.
https://community.wolfram.com/groups/-/m/t/1703087
Hello community. I have created a function and would like to know whether it is worth submitting to the Function Repository, or whether something simpler already does the same job. If anyone can give an opinion on this I will be very grateful.
The function interleaves two lists by alternating their terms. Unlike Riffle, which only inserts the terms of one list into the gaps of the other, this function keeps the output the same length as the longer input, drawing each term from one list or the other according to a pattern.
- It works like this:
If the third argument ("c_") is {}, the function defaults to a 1-to-1 alternating pattern between the two lists:
Alternate[a_, b_] := Alternate[a, b, {}]
Alternate[a_, b_, c_] :=
PadRight[a*
PadRight[
Take[Flatten@Table[If[c == {}, {1, 0}, c], Count[a, _]],
Min[Count[a, _], Count[b, _]]], Count[a, _], 1],
Max[Count[a, _], Count[b, _]]] +
PadRight[b*
PadRight[
Take[Flatten@Table[Abs[If[c == {}, {1, 0}, c] - 1], Count[b, _]],
Min[Count[a, _], Count[b, _]]], Count[b, _], 1],
Max[Count[a, _], Count[b, _]]]
r = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20};
s = {3, 5, 7, 9, 11, 13, 15, 17, 19, 21};
Alternate[r, s]
Alternate[s, r]
![ie1][1]
The function works even with lists of different sizes, keeping the terms in excess unchanged:
p = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20};
q = {3, 5, 7, 9, 11, 13, 15, 17};
Alternate[p, q]
Alternate[q, p]
![ie2][2]
Or you can set the third argument ("c") to any pattern (e.g. {0, 1, 1, 1}), where "1" selects the term from the first list ("a") and "0" selects the term from the second list ("b"):
t = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20};
u = {3, 5, 7, 9, 11, 13, 15, 17};
Alternate[t, u, {0, 1, 1, 1}]
Alternate[u, t, {0, 1, 1, 1}]
![ie3][3]
I would like to know: is this a good idea, or is there a simpler way to do this? Is it worth submitting a repository request?
Thank you.
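For comparison, a more direct sketch of the same interleaving using MapThread (the name `alternate2` and its internals are illustrative, not a claim about the original code):

```mathematica
alternate2[a_List, b_List, pat_List : {1, 0}] :=
 Module[{n = Min[Length[a], Length[b]], mask, head},
  mask = PadRight[{}, n, pat];  (* cycle the pattern to length n *)
  head = MapThread[If[#3 == 1, #1, #2] &,
    {Take[a, n], Take[b, n], mask}];
  (* excess terms of the longer list are kept unchanged *)
  Join[head, If[Length[a] > n, Drop[a, n], Drop[b, n]]]]
```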
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ie1.png&userId=1316061
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ie2.png&userId=1316061
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ie3.png&userId=1316061

Claudio Chaib, 2019-06-12T19:48:51Z