Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Wolfram Science, sorted by active
Avoid errors while using InteractiveTradingChart?
http://community.wolfram.com/groups/-/m/t/1344085
I'm trying to use InteractiveTradingChart; however, not a single example from the documentation page works.
InteractiveTradingChart[{"GOOGL", {{2009, 1, 1}, {2009, 12, 31}}}]
Just throws a bunch of errors, most notably:
> TimeSeries::invstrct: The data TemporalData[TimeSeries,{<<1>>},True,11.1] is not a structurally valid TemporalData object.
InteractiveTradingChart::ldata: {GOOGL,{{2009,1,1},{2009,12,31}}} is not a valid dataset or list of datasets.
So what's the problem? Any help appreciated.
Thanks
Lombardo Paredes 2018-05-23T16:49:21Z
Terminology of mathematics by computer
http://community.wolfram.com/groups/-/m/t/1343869
PM. This is a short essay that has been included in the May 23 2018 update for this notebook and package:
http://community.wolfram.com/groups/-/m/t/1313302
Mathematics concerns patterns and can involve anything, so that we need flexibility in our tools when we do or use mathematics. At the dawn of mankind we used stories. When writing was invented we used pen and paper. It is a revolution for mankind, comparable to the invention of the wheel and the alphabet, that we can now do mathematics using a computer. Many people focus on the computer and would say that it is a computer revolution, but computers might also generate chaos, which shows that the true relevance comes from structured use.
I regard mathematics by computer as a two-sided coin, that involves both human thought (supported by tools) and what technically happens within a computer. The computer language (software) is the interface between the human mind and the hardware with the flow of electrons, photons or whatever (I am no physicist). We might hold that thought is more fundamental, but this is of little consequence, since we still need consistency that 1+1 = 2 in math also is 1+1 = 2 in the computer, and properly interfaced by the language that would have 1+1 = 2 too. The clearest expression of mathematics by computer is in "computer algebra" languages, that understand what this revolution for mankind is about, and which were developed for the explicit support of doing mathematics by computer.
The makers of Mathematica (WRI) might be conceptually moving to regarding computation itself as a more fundamental notion than mathematics or the recognition and handling of patterns. Perhaps in their view there would be no such two-sided coin. The brain might be just computation, the computer would obviously be computation, and the language is only a translator of such computations. The idea that we are mainly interested in the structured products of the brain could be less relevant.
Stephen Wolfram by origin is a physicist and the name "Mathematica" comes from Newton's book and not from "mathematics" itself, though Newton made that reference. Stephen Wolfram obviously has a long involvement with cellular automata, culminating in his New Kind of Science. Wolfram (2013) distinguishes Mathematica as a computer program from the language that the program uses and is partially written in. Eventually he settled for the term "Wolfram language" for the computer language that he and WRI use, like "English" is the language used by the people in England (codified by their committees on the use of the English language).
My inclination however was to regard "Mathematica" primarily as the name of the language that happened to be evaluated by the program of the same name. I compared Mathematica to Algol and Fortran. I found the title of Wolfram's Addison-Wesley book of 1991 & 1998, "Mathematica. A system for doing mathematics by computer", quite apt. Obviously the system consists of the language and the software that runs it, but the latter might be provided by other providers too, like Fortran has different compilers. Every programmer knows that the devil is in the details, and that a language documentation on paper might not give the full details of actually running the software. Thus, when there are no other software providers, it is only accurate to state that the present definition of the language is given precisely by the one program that runs it. This is only practical and not fundamental. In this situation there is no conflict in thinking of "Mathematica as the language of Mathematica". Thus in my view there is no need to find a new name for the language. I thought that I was using a language, but apparently in Wolfram's recent view the emphasis was on the computer program. I didn't read Wolfram's blog in 2013 and otherwise might have given this feedback.
Wolfram (2017) and (2018) use the terms "computational essay" and "computational thinking", where the latter apparently is intended to mean something like (my interpretation): programming in the Wolfram Language, using internet resources, e.g. the cloud, and not necessarily the stand-alone version of Mathematica or now also Wolfram Desktop. My impression is that Wolfram indeed emphasizes computation, and that he perhaps also wants to get rid of a popular confusion of the name "Mathematica" with mathematics only. Apparently he doesn't want to get rid of that name altogether, likely given his involvement in its history and also its fine reputation.
A related website is https://www.computerbasedmath.org (CBM) by Conrad Wolfram. Most likely Conrad adopts Stephen's view on computation. It might also be that CBM finds the name "Mathematica" disinformative, as educators (i) may be unaware of what this language and program is, (ii) may associate mathematics with pen and paper, and (iii) would pay attention however at the word "computer". Perhaps CBM also thinks: You better adopt the language of your audience than teach them to understand your terminology on the history of Mathematica.
I am not convinced by these recent developments. I still think: (1) that this is a two-sided coin (but I am no physicist and do not know about electrons and such), (2) that it is advantageous to clarify to the world: (2a) that mathematics can be used for everything, and (2b) that doing mathematics by computer is a revolution for mankind, and (3) that one should beware of people without didactic training who want to ship computer technology into the classroom. My suggestion to Stephen Wolfram remains, as I did before in (2009, 2015a), that he turn WRI into a public utility like those that exist in Holland, while it already has many characteristics of this. It is curious to see the open source initiatives that apparently will not use the language of Mathematica, now by WRI (also) called the Wolfram Language, most likely because of copyright fears, even while it is good mathematics.
Apparently there are legal concerns (but I am no lawyer) that issues like 1+1 = 2 or \[Pi] are not under copyright, but that choices for software can be. For example, the use of h[x] with square brackets rather than parentheses h(x) might be presented to the copyright courts as a copyright issue. This is awkward, because it is good didactics of mathematics to use the square brackets. Not only computers but also kids may get confused by expressions a(2 + b) and f(x + h) - f(x). Let me refer to my suggestion that each nation sets up its own National Center for Mathematics Education. Presently we have a jungle that is no good for WRI, no good for the open source movement (e.g. R or https://www.python.org or http://jupyter.org), and especially no good for the students. Everyone will be served by clear distinctions between (i) what is in the common domain for mathematics and education of mathematics (the language) and (ii) what would be subject to private property laws (programs in that language, interpreters and compilers for the language) (though such could also be placed into the common domain).
Colignatus, Th. (2009, 2015a), Elegance with Substance,
(1) website: http://thomascool.eu/Papers/Math/Index.html
(2) PDF on Zenodo: https://zenodo.org/record/291974
Wolfram, S. (1991, 1998), Mathematica. A system for doing mathematics by computer, 2nd edition, Addison-Wesley
Wolfram, S. (2013), What Should We Call the Language of Mathematica?, http://blog.stephenwolfram.com/2013/02/what-should-we-call-the-language-of-mathematica/
Wolfram, S. (2017), What Is a Computational Essay?, http://blog.stephenwolfram.com/2017/11/what-is-a-computational-essay/
Wolfram, S. (2018), Launching the Wolfram Challenges Site, http://blog.stephenwolfram.com/2017/11/what-is-a-computational-essay/
Thomas Colignatus 2018-05-23T10:29:27Z
Wolfram has abandoned the Graphs and Networks functionality (??)
http://community.wolfram.com/groups/-/m/t/1321057
This is a cautionary tale for those who choose Mathematica as the main tool for their work.
It is now clear to me that Wolfram has simply abandoned the [Graphs and Networks](http://reference.wolfram.com/language/guide/GraphsAndNetworks.html) functionality area and I am left high and dry. I have no recourse because Mathematica is closed source so there is only so much a user can do to fix or work around problems. Reporting bugs in this particular area has now clearly proven to be useless. Most simply do not get fixed, no matter how serious they are, or how great a hindrance they are to practical use. No new functionality has been added since version 10.0. My colleagues who use other tools (mostly Python and R packages) are more productive at this point, but I have a handicap with those systems because I made the mistake of investing most of my time into Mathematica, and stayed optimistic about it even in the face of the most obvious warning signs.
I am writing this post because those people who have not heavily invested in Mathematica, and in particular this functionality area of Mathematica, are not in a position to see this and may fall in the same trap I did. What if the same thing happens to the functionality area that is critical to *your* work?
Wolfram Research, of course, will not tell you that they gave up on `Graph`. Thus, after my experience, I think I owe it to the community to warn you about the situation.
----
Some might ask me what specifically is wrong. I have made many posts on this forum about `Graph`-bugs (you only have to search), and I reported many more to WRI. There is always a last straw; it would be pointless to show it. Those who know me will know that I am not writing this admittedly emotional post out of ill will towards WRI. I have bet on Mathematica more than most, and have been advocating for it throughout the years. I even have a network analysis package with ~250 functions. If I am forced to abandon Mathematica for this type of work, then the countless hours that went into this package will all have been in vain.
I admit that I am writing this public post partly out of desperation to try to get WRI to either fix the many serious `Graph`-problems, or otherwise publicly state that `Graph` is now abandoned so those of us who have been using it can stop wasting our time.
Szabolcs Horvát 2018-04-16T09:54:42Z
Find the chemical composition of a compound solving a system of equations?
http://community.wolfram.com/groups/-/m/t/1343953
Hi to all,
This is the first time I'm using Mathematica. I'm trying to analyze data from elemental analysis experiments on a compound in order to find its chemical composition, so I've typed a six-equation system with five variables (please find attached the file containing the system if you want to have a look). However, I'm not sure about the command I should use. Solve gives me a result (which does not match the expected composition, but this is another question), but first writes this error message: Solve was unable to solve the system with inexact coefficients. The answer was obtained by solving a corresponding exact system and numericizing the result. What are inexact coefficients and why does the program tell me this?
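(For context, a minimal sketch with made-up coefficients, not the attached system: machine-precision reals such as 0.5 are "inexact" numbers, as opposed to exact rationals such as 1/2, and Rationalize converts the former to the latter before solving.)

    (* Machine reals like 0.5 are inexact; Rationalize makes them exact *)
    Solve[Rationalize[{0.5 x + 0.25 y == 1, x - y == 0}, 0], {x, y}]
    (* {{x -> 4/3, y -> 4/3}}, with no message about inexact coefficients *)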
When I use the FindInstance command, the program gives me the same result, but no error message is printed this time. Why?
Finally, when I use the Reduce command (I want to know all the possible solutions for my system), the program again prints an error message (Reduce was unable to solve the system with inexact coefficients. The answer was obtained by solving a corresponding exact system and numericizing the result) and the result is False. Why? I expected the program to give me at least the same solution that Solve and FindInstance did.
Could anyone tell me if there is an evident mistake in the code I typed that I'm not seeing?
Thank you very much in advance!
Regards!
Juan Manuel Casas-Solvas 2018-05-23T10:11:30Z
Why does using x_1 instead of x make a difference in this system of eqs.?
http://community.wolfram.com/groups/-/m/t/1344284
Hello,
I was trying to solve a system of equations; however, it seems that depending on whether I use x or x_1 the results differ. Which result can I trust, and why does it make a difference in the first place? I hope you can enlighten me. Thanks in advance.
with x_1:
https://www.wolframalpha.com/input/?i=findinstance%5B+-83.2*x_1-76.6*x_2-130.4*x_3-1*q%3E0+%26%26+(6*x_1%2B3*x_2%2B68*x_3)%3E%3D0+%26%26+(6*x_1%2B3*x_2%2B92*x_3%2Bq)%3E%3D0+%26%26+(106*x_1%2B103*x_2%2B0x_3)%3E%3D0%5D
with x:
https://www.wolframalpha.com/input/?i=findinstance%5B+-83.2*x-76.6*x_2-130.4*x_3-1*q%3E0+%26%26+(6*x%2B3*x_2%2B68*x_3)%3E%3D0+%26%26+(6*x%2B3*x_2%2B92*x_3%2Bq)%3E%3D0+%26%26+(106*x%2B103*x_2%2B0x_3)%3E%3D0%5D
Maro Polo 2018-05-23T23:47:00Z
Change Warping Correspondence to allow for decreasing positions?
http://community.wolfram.com/groups/-/m/t/1344226
Hello,
WarpingCorrespondence[{11, 12, 13, 14, 11}, {12, 13}] // TextGrid
gives
{
{1, 2, 3, 4, 5},
{1, 1, 2, 2, 2}
}
The last element in the second line should be a "1" however.
I realized that this is due to the fact that WarpingCorrespondence allows only increasing positions, but I wonder whether there is a way to change it to allow also for decreasing ones.
Thanks for your input,
Michael
Michael Raatz 2018-05-23T14:07:29Z
Combine Plots into one using a loop?
http://community.wolfram.com/groups/-/m/t/1344453
I want this program's plots to be combined into one plot - just can't seem to make it appear in one plot. Any suggestions?
n = -1; While[n < 3,
 sol = NDSolve[
   {D[xtraj[t], t] == -(Sinh[2 xtraj[t] (-t)]/(Cosh[2 xtraj[t] (-t)])),
    xtraj[0] == n},
   xtraj[t], {t, -4, 8}];
 n = n + 0.25;
 p = ParametricPlot[{t, xtraj[t]} /. sol, {t, -4, 8},
   PlotRange -> All, PlotStyle -> {Blue, Full, Thick},
   AxesStyle -> Thickness[.001], LabelStyle -> {Black, Medium}];
 Print[p]]
Estelle Asmodelle 2018-05-24T02:24:42Z
Export a smaller Classify or Predict model to use in the cloud?
http://community.wolfram.com/groups/-/m/t/833348
For LogisticRegression it is easy: I can just use the "Function". But in general, for both Classify and Predict, I'd just like the output function, which is usually about the same size as the training data, and that can be huge.
A common use case would be to train locally to create a model, then upload that model to the cloud through an API that should be quick and small and easy to use.
Philip Maymin 2016-04-01T23:45:03Z
What's the hardest integral Mathematica running Rubi can find?
http://community.wolfram.com/groups/-/m/t/1343015
***Rubi*** (***Ru***le-***b***ased ***i***ntegrator) is an open source program written in ***Mathematica***'s powerful pattern-matching language. The recently released version 4.15 of ***Rubi*** at http://www.apmaths.uwo.ca/~arich/ requires ***Mathematica*** 7 or better to run. Among other improvements, ***Rubi*** 4.15 enhances the functionality of its integrate command as follows:
- Int[*expn*, *var*] returns the antiderivative (indefinite integral) of *expn* with respect to *var*.
- Int[*expn*, *var*, Step] displays the first step used to integrate *expn* with respect to *var*, and returns the intermediate result.
- Int[*expn*, *var*, Steps] displays all the steps used to integrate *expn* with respect to *var*, and returns the antiderivative.
- Int[*expn*, *var*, Stats], before returning the antiderivative of *expn* with respect to *var*, displays a list of statistics of the form {*a*, *b*, *c*, *d*, *e*} where
*a*) is the number of steps used to integrate *expn*,
*b*) is the number of distinct rules used to integrate *expn*,
*c*) is the leaf count size of *expn*,
*d*) is the leaf count size of the antiderivative of *expn*, and
*e*) is the rule-to-size ratio of the integration (i.e. the quotient of elements *b* and *c*).
The last element of the list of statistics displayed by ***Rubi***'s Int[*expn*, *var*, Stats] command is the number of distinct rules required to integrate *expn* divided by the size of *expn*. This rule-to-size ratio provides a normalized measure of the amount of mathematical knowledge ***Rubi*** uses to integrate expressions. In other words, this ratio can be used as a metric showing the difficulty of solving indefinite integration problems. For example, the hardest problem in ***Rubi***'s 70,000+ test suite is integrating (a+b ArcTanh[c/x^2])^2 which has a rule-to-size ratio of 2.5.
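For instance, the hardest test-suite problem mentioned above can be examined with the Stats form described earlier (a sketch, assuming Rubi 4.15 has been loaded):

    (* Load the Rubi package first, then request integration statistics *)
    Int[(a + b ArcTanh[c/x^2])^2, x, Stats]
    (* displays {steps, rules, input leaf count, output leaf count, rule-to-size ratio} before returning the antiderivative *)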
On ***Rubi***'s website are the terms of a challenge, for which there is a substantial prize, for the user who finds the hardest problem ***Rubi*** can integrate.
Albert Rich 2018-05-21T22:17:32Z
Integral w/ DiracDelta gives different results on different versions
http://community.wolfram.com/groups/-/m/t/1343989
I have access to Mathematica 9.0.0 and Mathematica 10.4.0. Recently I discovered that computing the same exact integral gives different answers in different versions of Mathematica. The integral involves the product of a DiracDelta function and a Gaussian density. I believe the result in 10.4 is incorrect and that there is a [B-word] in this version. I have emailed technical support about the issue. In the meanwhile, can someone with access to Mathematica 11 check whether this issue is still present?
Code in question:
Assuming[
Element[a, Reals] && Element[y, Reals] && Element[z, Reals],
Integrate[
DiracDelta[y - (x + a)/z] PDF[NormalDistribution[0, s], x],
{x, -Infinity, Infinity}]]
**Output in Mathematica 9.0.0:**
(E^(-((a - y z)^2/(2 s^2))) Abs[z])/(Sqrt[2 \[Pi]] s)
**Output in Mathematica 10.4.0:**
(1 + E^(-((a - y z)^2/(2 s^2))) + Abs[z])/(Sqrt[2 \[Pi]] s)
Plotting the two functions will confirm that they are not mathematically equivalent (and that the result from 10.4 does not make any sense).
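For example, the two reported outputs can be overlaid for sample parameter values (a = 1, s = 1, z = 2 are arbitrary choices, not from the original post):

    (* Overlay the 9.0.0 and 10.4.0 results for sample parameter values *)
    With[{a = 1, s = 1, z = 2},
     Plot[{Exp[-((a - y z)^2/(2 s^2))] Abs[z]/(Sqrt[2 Pi] s),
           (1 + Exp[-((a - y z)^2/(2 s^2))] + Abs[z])/(Sqrt[2 Pi] s)},
      {y, -2, 3}, PlotLegends -> {"9.0.0 result", "10.4.0 result"}]]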
Thanks for your input!
Chris Sims 2018-05-23T12:39:26Z
How fast is my fidget spinner spinning? A sound experiment!
http://community.wolfram.com/groups/-/m/t/1344151
I would like to measure how fast my 6-bladed fidget spinner spins. To do so, after giving it a hard spin, I gently touch the spinner with a wooden stirring stick to create a buzzing sound, which usually lasts for a minute.
[![enter image description here][1]][2]
I have recorded and plotted the sound it generates:
[![enter image description here][3]][4]
How can I **automatically** generate a list of peak times for the above data? My final goal is to plot revolutions per second as a function of time to show spin decay.
### Data
To hear the sound in your Mathematica notebook, run the following code:
audio = Sound[SampledSoundList[
Flatten@ImageData@Import["https://i.stack.imgur.com/qHpp6.png"], 22050]]
![enter image description here][5]
This will download the following image, turn it into an array, and finally, convert it to a sound object.
[![enter image description here][6]][7]
First, import the audio and extract usable data from it:
audioDuration = Duration[audio];
audioSampleRate = AudioSampleRate[audio];
data = AudioData[audio][[1]];
Second, use `PeakDetect` to see which points are peaks (`= 1`) and which points are not peaks (`= 0`). Find the location of peaks in seconds.
peaks = PeakDetect[data, 150, 0.0, 0.4];
peakPos = 1./audioSampleRate Position[peaks, 1] // Flatten;
Length[peakPos]
The period of the spinner is the separation between the beats (peaks) times the number of blades:
periods = 6 (peakPos[[2 ;; -1]] - peakPos[[1 ;; -2]])
Spin rate, that is revolutions per second, is reciprocal of the period:
spinRates = 1/periods;(* Revolutions per second *)
Convert the data into a list of `{time, spin rate}` and plot it:
spinRateVStime =
Table[{i audioDuration/Length[spinRates], spinRates[[i]]}, {i,
Length[spinRates]}];
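The plotting command itself is not shown above; presumably it was something along the lines of (axis labels are an assumption):

    ListLinePlot[spinRateVStime, AxesLabel -> {"time (s)", "rev/s"}]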
[![enter image description here][8]][9]
As can be seen, the spinner spins 6 times per second and eventually comes to a stop after 12 seconds.
### Details
The parameters for `PeakDetect` need to be adjusted. To do so, reduce the amount of data to speed up the process, then plot the `PeakDetect` output on top of the data and look for a good agreement.
data = AudioData[audio][[1]][[800 ;; 11111]];
peaks = PeakDetect[data, 150, 0.0, 0.4];
ListLinePlot[{data , peaks}, PlotRange -> {All, {0, 1.1}}]
[![enter image description here][10]][11]
[1]: https://i.stack.imgur.com/WnMrF.gif
[2]: https://i.stack.imgur.com/WnMrF.gif
[3]: https://i.stack.imgur.com/oxAvw.png
[4]: https://i.stack.imgur.com/oxAvw.png
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sfd324qwrea.png&userId=20103
[6]: https://i.stack.imgur.com/qHpp6.png
[7]: https://i.stack.imgur.com/qHpp6.png
[8]: https://i.stack.imgur.com/XXPkC.png
[9]: https://i.stack.imgur.com/XXPkC.png
[10]: https://i.stack.imgur.com/H3hPm.png
[11]: https://i.stack.imgur.com/H3hPm.png
Milad Pourrahmani 2018-05-23T14:57:14Z
Obtain the field intensity at a certain position of a maser interferometer?
http://community.wolfram.com/groups/-/m/t/1329310
Hey guys,
I would like to know how to normalize the figures given by this program, so as to obtain the field intensity at an arbitrary off-center point (for example at x = 0.5a) and find the same values as given in "Resonant Modes in a Maser Interferometer", using equation 26:
Exp[I*0.25*Pi]/(2*Sqrt[d])*\[Integral](Exp[-I*k*Sqrt[b^2 + (x1 - x2)^2]]/ Sqrt[Sqrt[b^2 + (x1 - x2)^2]])*(1 + b/Sqrt[b^2 + (x1 - x2)^2])
from the article by A. G. Fox and Tingye Li (manuscript received October 20, 1960).
(* ================================================================== *)
(* lam = d; a = 25 d; b = 100 d; k = 2 Pi/d -- one trip // 0 < x2 < a *)
(* ================================================================== *)
d = 1; lam = d; a = 25*d; b = 100*d; k = 2 Pi/d
x2 = Table[x2, {x2, 0, 1, 0.01}]*a
f1 = (Exp[-I*k*Sqrt[b^2 + (x1 - x2)^2]]/Sqrt[Sqrt[b^2 + (x1 - x2)^2]])*(1 + b/Sqrt[b^2 + (x1 - x2)^2])
g1 = NIntegrate[f1, {x1, -a, a}]
fact = Exp[I*0.25*Pi]/(2*Sqrt[d])
g2 = fact*g1
Abg = Abs[g2]
ListLinePlot[Abg]
Ag = Arg[g2]
ListLinePlot[Ag]
Please see my attachment for more details.
Thanks in advance.
MOUMA MIRAL 2018-04-29T09:56:57Z
The Hippasus Primes
http://community.wolfram.com/groups/-/m/t/965609
According to legend, when Hippasus proved [the irrationality of $\sqrt2$](http://mathworld.wolfram.com/PythagorassConstant.html), he was thrown off a ship. Poor guy.
..
Gauss discovered that the numbers 1, 2, 3, 7, 11, 19, 43, 67, 163 led to unique factorization domains, and conjectured these were the only such numbers. Another person almost forgotten was Kurt Heegner, who proved Gauss's conjecture. But there was a small gap in his proof. Years later, Alan Baker and Harold Stark proved the result. But then they looked at Heegner's proof and announced it was pretty much correct, four years after Heegner's death. In his honor, 1, 2, 3, 7, 11, 19, 43, 67, 163 are known as [Heegner numbers](http://mathworld.wolfram.com/HeegnerNumber.html).
..
The $\mathbb{Q}(\sqrt{-1})$ numbers are known as [Gaussian integers](http://mathworld.wolfram.com/GaussianInteger.html).
The $\mathbb{Q}(\sqrt{-3})$ numbers are known as [Eisenstein integers](http://mathworld.wolfram.com/EisensteinInteger.html).
The $\mathbb{Q}(\sqrt{-7})$ numbers are known as [Kleinian integers](https://en.wikipedia.org/wiki/Kleinian_integer).
..
What about $\mathbb{Q}(\sqrt{-2})$? Why doesn't it have a name? I propose we call these **Hippasus integers**. He doesn't get much credit for his discoveries about $\sqrt{2}$, so may as well give him this to fill in the gap.
..
So what do the Hippasus primes look like? Here's some code based on the [Sieve of Eratosthenes](http://mathworld.wolfram.com/SieveofEratosthenes.html) that seems to work. I'm sure it can be vastly improved upon.
heeg = 2;
klein = RootReduce[Select[SortBy[Flatten[Table[a + b (Sqrt[heeg] I - 1)/2, {a, -50, 50}, {b, -70, 70}]], N[Norm[#]] &], 1 < Norm[#] < 40 &]];
sieve = Take[#, -2] & /@ (Last /@ (Sort /@ SplitBy[SortBy[{Norm[#]^2, 2 Re[#], 2 Im[#]/Sqrt[heeg]} & /@ klein, Abs[#] &], Abs[#] &]));
primes = {};
Module[{addedprime, remove},
While[Length[sieve] > 1,
addedprime = sieve[[1]];
primes = Append[primes, addedprime];
remove = Union[Join[Abs[{#[[1]], #[[2]]/Sqrt[heeg]}] & /@ (ReIm[2 (addedprime.{1, Sqrt[heeg] I}/2) (#.{1, Sqrt[heeg] I}/2)] & /@ sieve),
Abs[{#[[1]], #[[2]]/Sqrt[heeg]}] & /@ (ReIm[2 (addedprime.{1, -Sqrt[heeg] I}/2) (#.{1, Sqrt[heeg] I}/2)] & /@ sieve)]];
sieve = Select[Drop[sieve, 1], Not[MemberQ[remove, #]] &]]];
Graphics[Table[Point[{{1, 1}, {1, -1}, {-1, 1}, {-1, -1}}[[k]] ReIm[#]] & /@ (#.{1, Sqrt[heeg] I}/2 & /@ primes), {k, 1, 4}]]
![Hippasus primes][1]
With a change of the Heegner number at the top, the Gaussian primes, Hippasus primes, Eisenstein primes, and Kleinian primes can all be calculated:
![Heegner 1 2 3 7][2]
In case you were curious, we can also calculate the primes based on Heegner numbers 11, 19, 43, and 67.
![Heegner 11 19 43 67][3]
Those last two look pretty weird, so maybe I'm making a mistake somewhere. The primes based on 163 look even stranger.
![Heegner 163][4]
There are so many weird patterns that I almost didn't show this one. But then I remembered the [lucky numbers of Euler](http://mathworld.wolfram.com/LuckyNumberofEuler.html), which are based on Heegner numbers. The long line of primes is likely accurate. If anyone can improve/speed up the code and make a much larger picture, I'd love to see that.
..
The same goes for a bigger picture of the **Hippasus primes**. If there is another name for these, please let me know. If you agree this is a great name for them, also let me know.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=HippasusPrimes.gif&userId=21530
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Heegner1237.gif&userId=21530
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Heegner11194367.gif&userId=21530
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Heegner163.gif&userId=21530
Ed Pegg 2016-11-17T23:30:46Z
Open my nb file in Mathematica Cloud?
http://community.wolfram.com/groups/-/m/t/1343203
Hey everyone!
I am in a bit of a panic. I have a student Mathematica online account that I have been using for over a year now. I have a template file from a class I am doing that has opened just fine this whole time until last night. I have searched for an answer (to no avail), have made several copies & used backups of the file...nothing I do seems to work.
When I attempt to open it in the cloud, I see a blue line traversing the top of the screen as if it is working on opening the file, although it never does, no matter how long I wait. It also only shows limited options such as "Open this file in Mathematica", File, and Help at the top of the screen. It never lets me get to evaluation options or anything.
Any ideas? Does anyone else know what is happening?
Kasi Clark 2018-05-22T02:42:48Z
Convert 1040 Seconds Into 17 Minutes and 20 Seconds?
http://community.wolfram.com/groups/-/m/t/844216
Hello,
I have a basic question which I cannot find the answer to:
How do I make Mathematica convert 1040 seconds into 17 minutes and 20 seconds? I want Mathematica to give me the conversion with one command or cell input. I would also like for the output to have the words "minutes" and "seconds" if possible.
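One possible single-command approach (a sketch, assuming a Mathematica version whose UnitConvert supports mixed-radix units; the displayed unit names may be abbreviated rather than spelled out):

    (* Convert 1040 seconds into a minutes-and-seconds quantity *)
    UnitConvert[Quantity[1040, "Seconds"], MixedRadix["Minutes", "Seconds"]]
    (* 17 min 20 s *)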
Thanks.
Dan M 2016-04-22T10:07:25Z
MathEd: Arithmetic with H = -1: additive and multiplicative inverses
http://community.wolfram.com/groups/-/m/t/1343859
This notebook and package of May 23 2018 include the additive inverse too, while the original notebook and package of April 2 2018 only contained the multiplicative inverse. To avoid a litter of different versions, I have updated at the April 2 2018 location, where the new notebook and package can be found under the old name.
http://community.wolfram.com/groups/-/m/t/1313302
New summary:
H = -1 is a universal constant. H represents a half turn along a circle, like the complex number i represents a quarter turn. Kids know what it is to turn around and walk back along the same path. H creates the additive inverse with x + H x = 0 and the multiplicative inverse with x x^H = 1 for x != 0. Pronounce H as "ehta" or "symbolic negative one". The choice of H is well-considered: its shape reminds of -1 and even more (-1). Pierre van Hiele (1909-2010) already proposed to use y x^-1 and drop the fraction bar y / x with its needless complexity. Students must learn exponents anyway. The negative exponent might confuse pupils to think that they must subtract something, but the use of an algebraic symbol clinches the proposal. Also 5/2 can be written as 2 + 2^H, so that it is clearer where it is on the number line.
This approach also causes a re-evaluation of the didactics of the negative numbers. The US Common Core has them only in Grade 6, which is remarkably late. The negative numbers arise from the positive axis x by rotating or alternatively mirroring into H x. Algebraic thinking starts with the rules that a + H a can be replaced by 0 and that H H can be replaced by 1. Subtraction a - b >= 0 may be extended into a - b < 0 with its present didactics, e.g. 2 - 5 = 2 - (2 + 3) = 2 - 2 - 3 = 0 - 3 = -3, but there is an intermediate stage with familiar addition 2 + 5 H = 2 + (2 + 3) H = 2 + 2 H + 3 H = 0 + 3 H = 3 H, which does not require (i) the switch at the brackets from plus to minus and (ii) the transformation of the binary 0 - 3 to the number -3. The expression a - (-b) involves (scalar) multiplication, which indicates why pupils find this hard; a + H H b is clearer.
The use of H would affect the whole curriculum. There appears to be a remarkable incoherence in mathematics education and its research w.r.t. the negative numbers, which reminds of the problems that the world itself had since the discovery of direction by Albert Girard in 1629 and the introduction of the number line by John Wallis in 1673. This notebook provides a package to support the use of H in Mathematica. The notebook and package are intended for researchers, teachers and (Common Core) educators in mathematics education. Pupils in elementary school would work with pencil and paper of course.
Thomas Colignatus 2018-05-23T10:14:17Z
Redirect temporary files to an external disk?
http://community.wolfram.com/groups/-/m/t/1341347
Many times Mathematica aborts because there is no more space available on disk. Presumably this is due to the temporary files that it creates and needs for its calculations. If I were to install a large disk on my Macintosh for all the space required by Mathematica, how can I tell it to direct all its temporary files and swapping to that extra disk?
I do not wish to reinstall Mathematica on an external disk. I wish to keep the rest of Mathematica's files on my current disk, and send only the temporary files to a secondary disk.
Juan José Basagoiti 2018-05-17T23:36:11Z
Use index.html files in Wolfram Cloud sites?
http://community.wolfram.com/groups/-/m/t/1250045
### Cross post on StackExchange: https://mathematica.stackexchange.com/questions/162265/using-index-html-files-in-wolfram-cloud-sites
---
Part as an exercise, part so I could write data-science blog posts, I built a website builder using Mathematica that sets up sites in the cloud.
As an example site, here is a paclet server website I set up: https://www.wolframcloud.com/objects/b3m2a1.paclets/PacletServer/main.html
Unfortunately, to get this to work I had to remap my site's index.html file to a main.html file, because when I try to view the site at index.html (either by explicitly routing there or by going to the implicit view) I am pushed back to the implicit view and given a 500 error.
Note that I cannot copy the index.html file to the site root i.e.,
CopyFile[
CloudObject["https://www.wolframcloud.com/objects/b3m2a1.paclets/PacletServer/index.html"],
CloudObject["https://www.wolframcloud.com/objects/b3m2a1.paclets/PacletServer", Permissions->"Public"]
]
as I get a `CloudObject::srverr` failure.
I can't even set up a permanent redirect like so:
CloudDeploy[
Delayed@HTTPRedirect[
"https://www.wolframcloud.com/objects/b3m2a1.paclets/PacletServer/main.html",
<|"StatusCode" -> 301|>
],
"server",
Permissions -> "Public"
]
CloudObject["https://www.wolframcloud.com/objects/b3m2a1.paclets/server"]
While this apparently worked, going to that site causes my browser to spin indefinitely before finally giving up.
What's more, all of these possible hacks are ugly, and I'd much rather work with the standard website setup.
How can I do this?b3m2a1 2017-12-19T17:59:55Z[✓] Insert text in maps?
http://community.wolfram.com/groups/-/m/t/1343637
Friends, I want to insert text in a map. For example, in the following code I would like to put a label on the two towns I am highlighting in the map. I have not been able to do it: neither with Labeled nor with any other method. Actually, I would like to use Inset or Text, to have full control of the text style.
Can somebody help me?
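For reference, one way to get full control of the label style is to place `Text` (or `Inset`) primitives at `GeoPosition` coordinates inside `GeoGraphics`. A minimal sketch with hypothetical towns (the names and coordinates below are made up, not taken from the attached notebook):

```mathematica
(* two hypothetical towns, each labeled with styled Text offset above its marker *)
GeoGraphics[{
   Red, PointSize[Large],
   Point[GeoPosition[{4.61, -74.08}]],
   Text[Style["Town A", Bold, 14], GeoPosition[{4.61, -74.08}], {0, -1.5}],
   Point[GeoPosition[{6.24, -75.58}]],
   Text[Style["Town B", Bold, 14], GeoPosition[{6.24, -75.58}], {0, -1.5}]
  }, GeoRange -> Quantity[400, "Kilometers"]]
```

The third argument of `Text` offsets the label relative to the anchor point, which keeps it from covering the marker.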
Attached, the example-code. ThanksFrancisco Gutierrez2018-05-22T19:36:19ZMake Wolfram|Alpha work in any language with TextTranslation
http://community.wolfram.com/groups/-/m/t/1342695
Note: This post is a translation of the post by Arnoud Buzing: [Using TextTranslation to make WolframAlpha work with ANY language][1].
One thing I don't much like about Wolfram|Alpha is that it only works well with English input and not very well (or not at all) in other languages.
When we included a new translation function, TextTranslation, in version 11.1, I immediately thought of using it with Wolfram|Alpha.
When I finally got around to investigating this, I found the results really surprising, so I want to share some of them here.
A very easy way to try this is to install the following paclet from [this GitHub page][2]:
PacletInstall["https://github.com/arnoudbuzing/prototypes/releases/download/v0.2.3/Prototypes-0.2.3.paclet"]
Or, if you would rather not install the whole paclet (because it also includes dozens of other functions), you can use the code definitions from here:
[https://github.com/arnoudbuzing/prototypes/blob/master/Prototypes/WolframAlpha.wl][3]
Let's look at some examples and see how they compare with the untranslated WolframAlpha function. Suppose we want to ask Wolfram|Alpha for the capital of Japan (Tokyo). The English query clearly works:
WolframAlpha["what is the capital of Japan", "Result"]
(This returns Tokyo as the answer.) But now let's ask the question in Dutch:
WolframAlpha["wat is de hoofdstad van Japan", "Result"]
Now we get a very strange answer: 18.4 million vehicles (2004 estimate). This is clearly wrong, which makes Wolfram|Alpha completely useless for questions asked in Dutch.
So let's think about what is needed to improve this situation: first we have to check whether we are dealing with non-English input and, if so, translate it into English and run it through the WolframAlpha function. Here is the Wolfram Language code that does exactly that. I called it WolframBeta to distinguish it from the original function:
WolframBeta[ input_String, args___ ] := Module[{language, translation},
language = LanguageIdentify[input];
translation = If[language =!= Entity["Language", "English"], TextTranslation[input, language -> "English"], input];
WolframAlpha[translation, args]
]
Now let's try this function:
WolframBeta["wat is de hoofdstad van Japan", "Result"]
It is a bit slower because of the call to the translation function, but it returns the correct result (Tokyo)!
And it immediately works for many languages:
WolframBeta["cual es la capital de japón?", "Result"]
WolframBeta["日本の首都は何ですか", "Result"]
WolframBeta["Was ist die Hauptstadt von Japan?", "Result"]
WolframBeta["什么是日本的首都", "Result"]
All of these questions return Tokyo as the result, while the WolframAlpha function fails in a different way for each language! (I won't include those results here, but they are embarrassing.)
(To obtain the queries in these languages I translated from English, so I hope they are correct.)
WolframAlpha can also return "spoken results", a simple English string as the answer. For example:
WolframAlpha["what is the capital of Japan?", "SpokenResult"]
This returns the English string: "The capital of Japan is Tokyo, Japan".
But TextTranslation works in both directions (in fact, it works between any two languages, but in this context we only care about translation to and from English).
Here is a modification that (a) translates non-English input into English, (b) runs the query, and (c) translates the spoken result back into the original language:
WolframBeta[ input_String, "SpokenResult", args___ ] := Module[{language, translation,result},
language = LanguageIdentify[input];
translation = If[language =!= Entity["Language", "English"], TextTranslation[input, language -> "English"], input];
result = WolframAlpha[translation, "SpokenResult", args];
If[language =!= Entity["Language", "English"], TextTranslation[result, "English" -> language], result]
]
Now let's look at some examples in Spanish. As input we take "¿Qué distancia hay entre Amsterdam y Rotterdam en kilómetros?" ("What is the distance between Amsterdam and Rotterdam in kilometers?"):
WolframBeta["¿Qué distancia hay entre Amsterdam y Rotterdam en kilómetros?", "SpokenResult"]
This (correctly) returns: "La respuesta es de 56,4 kilómetros" ("The answer is 56.4 kilometers").
And now let's ask how much vitamin C there is in a glass of orange juice:
WolframBeta["¿cuánta vitamina C hay en un vaso de jugo de naranja?", "SpokenResult"]
We get the following result: "La respuesta es aproximadamente 93 miligramos" ("The answer is approximately 93 milligrams").
Now let's try something with a slightly more complex answer (the gross domestic product of Mexico), at least grammatically (this one really surprised me):
WolframBeta["cual es el producto interno bruto de México?", "SpokenResult"]
Answer: "El producto interno bruto de México es de $ 1,05 trillones por año" ("Mexico's gross domestic product is $1.05 trillion per year").
Sometimes, when asking about the weather, you have to give the country in addition to the city:
WolframBeta["¿qué tan cálido es Monterrey, Mexico?", "SpokenResult"]
Answer (in Fahrenheit): "La temperatura en Monterrey, Nuevo León, México es de 86 grados Fahrenheit" ("The temperature in Monterrey, Nuevo León, Mexico is 86 degrees Fahrenheit").
We can also get answers to mathematical queries:
WolframBeta["¿Cuál es la derivada de seno de X?", "SpokenResult"]
Answer: "La respuesta es coseno de X" ("The answer is cosine of X").
And questions about famous people and their relationships to other people:
WolframBeta["¿Quiénes son los hijos del Príncipe William?","SpokenResult"]
Answer: "Los hijos de Prince William son Prince George de Cambridge; Carlota de Cambridge; y Luis de Cambridge" ("The children of Prince William are Prince George of Cambridge; Charlotte of Cambridge; and Louis of Cambridge").
I hope that this idea, or some version of it, can be officially added to Wolfram|Alpha at some point. I think it would help Wolfram|Alpha be used more widely around the world, helping people with their computational curiosity.
Let me know what you think. Comments, suggestions and pull requests are welcome!
[1]: http://community.wolfram.com/groups/-/m/t/1337022
[2]: https://github.com/arnoudbuzing/prototypes/releases
[3]: https://github.com/arnoudbuzing/prototypes/blob/master/Prototypes/WolframAlpha.wlKarla Santana2018-05-21T22:20:10ZNeural Nets for time series prediction: Where to start?
http://community.wolfram.com/groups/-/m/t/1343037
Dear Members
I have used Mathematica to learn some things, such as regression for time series. There are plenty of models, with functions at very high levels of abstraction such as "TimeSeriesForecast". Also, many detailed examples are given.
I would like to learn neural nets for time series forecasting, but it has not been easy at all. I have not found examples on the subject, and no related models are available in the Neural Net Repository.
Could anyone help me find a good place to start on the subject?
Best regards
JesusJ Jesus Rico-Melgoza2018-05-22T01:14:25ZUse RLink to run the quantile regression function from R?
http://community.wolfram.com/groups/-/m/t/1343508
I am running a Monte Carlo simulation to compare errors from least squares method and quantile regression.
I have generated the data as per below *(y = x Beta + error)* for three different betas. (\\[Tau] is my quantile level.)
The data is ready for *LinearModelFit*.
But how can I apply the *rq* function in *R* from the *quantreg* library to my data?
I appreciate your help. This community is awesome.
Thanks in advance,
Thad
Set n, m and \[Tau]
n = 1000;
m = n;
\[Tau] = 0.9;
columns = 100;
Generate data
SeedRandom[1976];
xdata = Table[RandomVariate[NormalDistribution[], n], columns];
\[Epsilon]data = Table[RandomVariate[NormalDistribution[], n], columns];
\[Beta]data = {1/3, 1, 3};
num\[Beta] = Length[\[Beta]data];
ydata = Table[xdata \[Beta]data[[k]] + \[Epsilon]data, {k, num\[Beta]}];
data = Table[
Transpose[{ydata[[q, k]], xdata[[k]]}], {q, num\[Beta]}, {k, columns}];
Run Least Squares
lsFunc = Table[LinearModelFit[#, x, x] & /@ data[[q]], {q, num\[Beta]}];
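For reference, a hedged, untested sketch of how the R side might be driven through RLink: the data goes across with `RSet`, and `rq` from the quantreg library is called via `REvaluate`. The variable names on the R side are made up:

```mathematica
Needs["RLink`"];
InstallR[];  (* requires a working R installation with the quantreg package *)
REvaluate["library(quantreg)"];

(* push one simulated column pair across and fit the 0.9-quantile regression *)
RSet["x", xdata[[1]]];
RSet["y", ydata[[1, 1]]];
coefs = REvaluate["coef(rq(y ~ x, tau = 0.9))"];   (* analogue of "BestFitParameters" *)
resids = REvaluate["resid(rq(y ~ x, tau = 0.9))"]; (* analogue of "FitResiduals" *)
```

For the Monte Carlo loop, the `RSet`/`REvaluate` pair would be repeated per column, or the whole matrix could be pushed across once and looped over on the R side.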
Moreover, I need the ability to extract the parameters from the quantile regression results, like I can do with *LinearModelFit["BestFitParameters"]* and *["FitResiduals"]*.Thadeu Freitas Filho2018-05-22T10:49:59ZRemove `InvisiblePrefixScriptBase` ?
http://community.wolfram.com/groups/-/m/t/1342717
I accidentally deleted a working A = A.nb, but I have a paper printout A` of it.
I then took a similar B = B.nb and hand-edited it, using A`, to reproduce A; call the result C = C.nb.
C does not run properly, and I get an `InvisiblePrefixScriptBase` message.
How do I use the paper printout A` to (re)create a good A?
Thanks, Michael Caola
(I am as ignorant as I seem, and would appreciate any advice)michael caola2018-05-21T12:59:21ZProject a picture onto a specific geographic area?
http://community.wolfram.com/groups/-/m/t/1343159
How can I replace the world map with a picture and project it onto a specific geographic area?
I can already use shapefiles to draw satellite photos of specific administrative regions.
However, replacing the original global satellite image with a picture of my own does not work.
Friendly expert guidance would be appreciated ~~~
data = Import["COUNTY201804300214.shp", "Data"];
picture = Import["https://upload.wikimedia.org/wikipedia/commons/thumb/4/41/Simple_world_map.svg/2000px-Simple_world_map.svg.png", "PNG"];
data[[All, 1]]
geometry = ("Geometry" /. data);
GeoGraphics[{GeoStyling["Satellite"], geometry[[12]]}, GeoBackground -> None]
GeoGraphics[{GeoStyling[{"Image", picture}], geometry[[12]]}, GeoBackground -> None]Tsai Ming-Chou2018-05-22T09:19:35ZAutomatically sliding a conv net onto a larger image
http://community.wolfram.com/groups/-/m/t/1343104
**How to control the step size of the following conv net as it slides onto a larger image?**
See also: https://mathematica.stackexchange.com/questions/144060/sliding-fullyconvolutional-net-over-larger-images/148033
As a toy example, I'd like to slide a digit classifier trained on 28x28 images to classify each neighborhood of a larger image.
This is LeNet with the linear layers replaced by 1x1 convolutional layers.
trainingData = ResourceData["MNIST", "TrainingData"];
testData = ResourceData["MNIST", "TestData"];
lenetModel =
NetModel["LeNet Trained on MNIST Data",
"UninitializedEvaluationNet"];
newlenet = NetExtract[lenetModel, All];
newlenet[[7]] = ConvolutionLayer[500, {4, 4}];
newlenet[[8]] = ElementwiseLayer[Ramp];
newlenet[[9]] = ConvolutionLayer[10, 1];
newlenet[[10]] = SoftmaxLayer[1];
newlenet[[11]] = PartLayer[{All, 1, 1}];
newlenet =
NetChain[newlenet,
"Input" ->
NetEncoder[{"Image", {28, 28}, ColorSpace -> "Grayscale"}]]
Now train it:
newtd = First@# -> UnitVector[10, Last@# + 1] & /@ trainingData;
newvd = First@# -> UnitVector[10, Last@# + 1] & /@ testData;
ng = NetGraph[
<|"inference" -> newlenet,
"loss" -> CrossEntropyLossLayer["Probabilities", "Input" -> 10]
|>,
{
"inference" -> NetPort["loss", "Input"],
NetPort["Target"] -> NetPort["loss", "Target"]
}
]
tnew = NetTrain[ng, newtd, ValidationSet -> newvd,
TargetDevice -> "GPU"]
Now remove dimensions information (see stackexchange for the code definition of `removeInputInformation`):
removeInputInformation[layer_ConvolutionLayer] :=
With[{k = NetExtract[layer, "OutputChannels"],
kernelSize = NetExtract[layer, "KernelSize"],
weights = NetExtract[layer, "Weights"],
biases = NetExtract[layer, "Biases"],
padding = NetExtract[layer, "PaddingSize"],
stride = NetExtract[layer, "Stride"],
dilation = NetExtract[layer, "Dilation"]},
ConvolutionLayer[k, kernelSize, "Weights" -> weights,
"Biases" -> biases, "PaddingSize" -> padding, "Stride" -> stride,
"Dilation" -> dilation]]
removeInputInformation[layer_PoolingLayer] :=
With[{f = NetExtract[layer, "Function"],
kernelSize = NetExtract[layer, "KernelSize"],
padding = NetExtract[layer, "PaddingSize"],
stride = NetExtract[layer, "Stride"]},
PoolingLayer[kernelSize, stride, "PaddingSize" -> padding,
"Function" -> f]]
removeInputInformation[layer_ElementwiseLayer] :=
With[{f = NetExtract[layer, "Function"]}, ElementwiseLayer[f]]
removeInputInformation[x_] := x
tmp = NetExtract[NetExtract[tnew, "inference"], All];
n3 = removeInputInformation /@ tmp[[1 ;; -3]];
AppendTo[n3, SoftmaxLayer[1]];
n3 = NetChain@n3;
And the network `n3` slides over any larger input. However, note that it seems to slide with steps of 4. How could I make it take steps of 1 instead?
In[358]:= n3[RandomReal[1, {1, 28*10, 28}]] // Dimensions
Out[358]= {10, 64, 1}
In[359]:= BlockMap[Length, Range[28*10], 28, 4] // Length
Out[359]= 64Matthias Odisio2018-05-21T22:20:23ZControl a replacement step in any step and get the number of steps?
http://community.wolfram.com/groups/-/m/t/1342594
Given the following sets with names
s1={x1,x2}
x1={y1,y2}
y1 ={z1,z2}
When s1 is entered, the names are finally replaced by all the sets. That is:
Input: s1
Output: {{{z1,z2},y2},x2}
Question 1: could we control the replacement at any step? For example, could we get
{{y1,y2},x2}, so that y1 is not replaced?
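For reference, this kind of control becomes straightforward if the assignments are phrased as explicit rules (so nothing evaluates on its own) and the replacement is iterated with `FixedPointList`; a hedged sketch:

```mathematica
(* keep s1, x1, y1 as inert symbols and encode the sets as rules *)
rules = {s1 -> {x1, x2}, x1 -> {y1, y2}, y1 -> {z1, z2}};

steps = FixedPointList[ReplaceAll[rules], s1]
(* {s1, {x1, x2}, {{y1, y2}, x2}, {{{z1, z2}, y2}, x2}, {{{z1, z2}, y2}, x2}} *)

steps[[3]]         (* the stage after step 2: {{y1, y2}, x2}, with y1 not yet replaced *)
Length[steps] - 2  (* number of replacement steps until nothing changes *)
```

This assumes s1, x1, y1 carry no OwnValues of their own; the intermediate stages and the step count then fall out of the same list.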
Question 2: could we get the number of the replacement step? For example, to know that at step 2 we get {{y1,y2},x2}?Math Logic2018-05-21T20:20:58ZPrinciple of RandomSearch method in NMinimize?
http://community.wolfram.com/groups/-/m/t/1314173
Hello everyone,
I am using the NMinimize procedure, with the RandomSearch method explicitly chosen, for the optimization of a non-convex 6-dimensional problem. The 6 variables are non-negative and sum to one.
Can someone explain to me how the RandomSearch method works in the Wolfram Language environment? It is unclear from http://reference.wolfram.com/language/tutorial/ConstrainedOptimizationGlobalNumerical.html
For example: "... generating a population of random starting points..." How are admissible solutions obtained? From which (multivariate) distribution are we sampling? A similar question may be asked for the remaining three methods, "NelderMead", "DifferentialEvolution", and "SimulatedAnnealing".
The method seems to be different from the one described at en.wikipedia.org/wiki/Random_search, where hypercubes are mentioned. Am I right?
Thank you for your answers!Lukas Vacek2018-04-04T16:46:52ZUse While loop (run code until both conditions are satisfied)?
http://community.wolfram.com/groups/-/m/t/1342630
Hello,
I get an error while using a "While" loop (I want the code to run until both conditions are satisfied:
xx + zz == 2 && yy + ww == 0).
Any suggestions?
Here's the code:
ClearAll[Y ,X,z,w];
T0=1 ;
Hmu=0.8;
SA=10+Hmu ;
SB=10-Hmu;
Sigma=1.5;
K=10;
r=1.02;
gamma=0.1;
Vi=2;
Eye=2;
ZC=(K-SA)/Sigma;
ZB=(K-SB)/Sigma ;
S0=(SA-gamma(Sigma^2)(Vi /Eye ))/r
C0=((SA-gamma(Sigma^2)(Vi /Eye ) -K)(1-CDF[NormalDistribution[0, 1], ZC+gamma*Sigma*(Vi /Eye ) ])+(PDF[NormalDistribution[0, 1], ZC+gamma*Sigma*(Vi /Eye ) ])Sigma)/r
xx=0; zz=0;
while[xx +zz =2 && yy +ww =0]
A=(((Vi /Eye ) -X)S0-Y*C0)r+X*SA-(gamma/2)(X^2)(Sigma^2)
B=(((Vi /Eye ) -X)S0-Y*C0)r +(X +Y)SA -(gamma/2)((X +Y) ^2)(Sigma^2) -Y *K
Q=(((Vi /Eye ) -z)S0-w*C0)r+z*SB-(gamma/2)(z^2)(Sigma^2)
P=(((Vi /Eye ) -z)S0-w*C0)r +(z +w)SB -(gamma/2)(z +w) ^2(Sigma^2) -w *K
UA=-Exp[-gamma*A]*(CDF[NormalDistribution[0, 1], ZC+gamma*X *Sigma ]) -Exp[-gamma*B]*(1-CDF[NormalDistribution[0, 1], ZC+gamma*(X +Y )*Sigma ])
UB=-Exp[-gamma*Q ]*(CDF[NormalDistribution[0, 1], ZB+gamma*z *Sigma ]) -Exp[-gamma*P ]*(1-CDF[NormalDistribution[0, 1], ZB+gamma*(z+w )*Sigma ])
AMAX=Maximize[UA ,{X >=0,X<=2,Y>=-1,Y<=1},{X ,Y}]
BMAX=Maximize[UB ,{z >=0,z<=2,w>=-1,w<=1},{z ,w}]
xx=Replace[X,AMAX [[2,1]]]
yy=Replace[Y,AMAX [[2,2]]]
zz=Replace[z,BMAX [[2,1]]]
ww=Replace[w,BMAX [[2,2]]]
If[xx +zz<2,S0=S0-0.001,If[xx+zz>2,S0,S0+0.001,S0]]
If[yy +ww<0,C0=C0-0.001,If[yy+ww>0,C0,C0+0.001,C0]]yossi sh2018-05-21T11:24:34ZLearning to See in the Dark
http://community.wolfram.com/groups/-/m/t/1342609
spoiler alert: this is just a request...
Fresh out of the University of Illinois Urbana-Champaign (a few kilometers away from WRI headquarters) and Intel, there's a new NN designed to improve ultra-low-light photography.
The article: [article][1]
Some examples: [video showing multiple examples][2]
![enter image description here][3]
And the request: **PLEASE PLEASE PLEASE PLEASE***
(also eagerly waiting for the [Deep Image Prior][4] to be made available on the Wolfram Neural Net Repository, or better, [this][5] more recent one from NVIDIA)
\* either integration as a function (or in a function), or availability within the Neural Net Repository framework.
[1]: https://arxiv.org/abs/1805.01934
[2]: https://www.youtube.com/watch?v=qWKUFK7MWvg&feature=youtu.be
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2018-05-21_11-07-56.gif&userId=26431
[4]: https://sites.skoltech.ru/app/data/uploads/sites/25/2017/12/deep_image_prior.pdf
[5]: https://www.youtube.com/watch?v=gg0F5JjKmhAPedro Fonseca2018-05-21T09:28:19ZMinimum time optimal control: How to use NDSolve for unknown final time?
http://community.wolfram.com/groups/-/m/t/1342397
Hi, I am new to Mathematica and am trying to solve a minimum-time optimal control problem using NDSolve. Please see the attached notebook. I have 4 ODEs and 4 boundary conditions. The final time (tf) in this case is also unknown and needs to be determined by another constraint (the transversality condition for free final time):
1 + \[Lambda1][tf]*x2[tf] - \[Lambda2][tf]*Sign[\[Lambda2][tf]] == 0
I am not sure how to add this equation to NDSolve and solve the problem altogether to include tf? I am sure there is a better way to do it. I thought this problem was straightforward but I couldn't find solution online...so here I am asking for help. This problem can actually be solved by hand calculation but I want to learn how to solve it with Mathematica. Thank you so much for your help!Danop Rajabhandharaks2018-05-21T05:20:46ZLoop subdivision on triangle meshes
http://community.wolfram.com/groups/-/m/t/1338790
(Cross-posted from [Mathematica.StackExchange](https://mathematica.stackexchange.com/q/161331/38178))
Every now and then, the question pops up how a given geometric mesh (e.g. a `MeshRegion`) can be refined to produce a mesh that is (i) finer and (ii) smoother. For example, the following triangle mesh from the example database is pretty coarse.
R = ExampleData[{"Geometry3D", "Triceratops"}, "MeshRegion"]
MeshCellCount[R, 2]
[![enter image description here][4]][1]
> 5660
Well, we _could_ execute this
S = DiscretizeRegion[R, MaxCellMeasure -> {1 -> 0.01}]
MeshCellCount[S, 2]
[![enter image description here][4]][1]
> 1332378
only to learn that the visual appearance hasn't improved at all.
So, how can we refine in a smoothing way with Mathematica? There are several subdivision schemes known in geometry processing, e.g. [Loop subdivision](https://en.wikipedia.org/wiki/Loop_subdivision_surface) and [Catmull-Clark subdivision](https://en.wikipedia.org/wiki/Catmull-Clark_subdivision_surface) for general polyhedral meshes, but there seem to be no built-in methods for these.
Implementation
---
Let's see if we can do that with what Mathematica offers us. Still, we need quite a bit of preparation. In the first place, we need methods to compute cell adjacency matrices from [here](https://mathematica.stackexchange.com/questions/160443/how-to-obtain-the-cell-adjacency-graph-of-a-mesh/160457#160457). I copied the code for completeness. The built-in `"ConnectivityMatrix"` properties for `MeshRegion`s return pattern arrays, so we start by converting them into numerical matrices.
SparseArrayFromPatternArray[A_SparseArray] := SparseArray @@ {
Automatic, Dimensions[A], A["Background"], {1, {
A["RowPointers"],
A["ColumnIndices"]
},
ConstantArray[1, Length[A["ColumnIndices"]]]
}
}
CellAdjacencyMatrix[R_MeshRegion, d_, 0] := If[MeshCellCount[R, d] > 0,
SparseArrayFromPatternArray[R["ConnectivityMatrix"[d, 0]]],
{}
];
CellAdjacencyMatrix[R_MeshRegion, 0, d_] := If[MeshCellCount[R, d] > 0,
SparseArrayFromPatternArray[R["ConnectivityMatrix"[0, d]]],
{}
];
CellAdjacencyMatrix[R_MeshRegion, 0, 0] :=
If[MeshCellCount[R, 1] > 0,
With[{A = CellAdjacencyMatrix[R, 0, 1]},
With[{B = A.Transpose[A]},
SparseArray[B - DiagonalMatrix[Diagonal[B]]]
]
],
{}
];
CellAdjacencyMatrix[R_MeshRegion, d1_, d2_] :=
If[(MeshCellCount[R, d1] > 0) && (MeshCellCount[R, d2] > 0),
With[{B = CellAdjacencyMatrix[R, d1, 0].CellAdjacencyMatrix[R, 0, d2]},
SparseArray[
If[d1 == d2,
UnitStep[B - DiagonalMatrix[Diagonal[B]] - d1],
UnitStep[B - (Min[d1, d2] + 1)]
]
]
],
{}
];
Alternatively to copying the code above, simply make sure that you have [IGraph/M](http://szhorvat.net/pelican/igraphm-a-mathematica-interface-for-igraph.html) version 0.3.93 or later installed and run
Needs["IGraphM`"];
CellAdjacencyMatrix = IGMeshCellAdjacencyMatrix;
Next is a `CompiledFunction` to compute the triangle faces for the new mesh:
getSubdividedTriangles =
Compile[{{ff, _Integer, 1}, {ee, _Integer, 1}},
{
{Compile`GetElement[ff, 1],Compile`GetElement[ee, 3],Compile`GetElement[ee, 2]},
{Compile`GetElement[ff, 2],Compile`GetElement[ee, 1],Compile`GetElement[ee, 3]},
{Compile`GetElement[ff, 3],Compile`GetElement[ee, 2],Compile`GetElement[ee, 1]},
ee
},
CompilationTarget -> "C",
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
];
Finally, the method that ties everything together. It assembles the subdivision matrix (which maps the old vertex coordinates to the new ones), uses it to compute the new positions, and calls `getSubdividedTriangles` in order to generate the new triangle faces.
ClearAll[LoopSubdivide];
Options[LoopSubdivide] = {
"VertexWeightFunction" -> Function[n, 5./8. - (3./8. + 1./4. Cos[(2. Pi)/n])^2],
"EdgeWeight" -> 3./8.,
"AverageBoundary" -> True
};
LoopSubdivide[R_MeshRegion, opts : OptionsPattern[]] := LoopSubdivide[{R, {{0}}}, opts][[1]];
LoopSubdivide[{R_MeshRegion, A_?MatrixQ}, OptionsPattern[]] :=
Module[{A00, A10, A12, A20, B00, B10, n, n0, n1, n2, βn, pts,
newpts, edges, faces, edgelookuptable, triangleneighedges,
newfaces, subdivisionmatrix, bndedgelist, bndedges, bndvertices,
bndedgeQ, intedgeQ, bndvertexQ,
intvertexQ, β, βbnd, η},
pts = MeshCoordinates[R];
A10 = CellAdjacencyMatrix[R, 1, 0];
A20 = CellAdjacencyMatrix[R, 2, 0];
A12 = CellAdjacencyMatrix[R, 1, 2];
edges = MeshCells[R, 1, "Multicells" -> True][[1, 1]];
faces = MeshCells[R, 2, "Multicells" -> True][[1, 1]];
n0 = Length[pts];
n1 = Length[edges];
n2 = Length[faces];
edgelookuptable = SparseArray[
Rule[
Join[edges, Transpose[Transpose[edges][[{2, 1}]]]],
Join[Range[1, Length[edges]], Range[1, Length[edges]]]
],
{n0, n0}];
(*A00=CellAdjacencyMatrix[R,0,0];*)
A00 = Unitize[edgelookuptable];
bndedgelist = Flatten[Position[Total[A12, {2}], 1]];
If[Length[bndedgelist] > 0, bndedges = edges[[bndedgelist]];
bndvertices = Sort[DeleteDuplicates[Flatten[bndedges]]];
bndedgeQ = SparseArray[Partition[bndedgelist, 1] -> 1, {n1}];
bndvertexQ = SparseArray[Partition[bndvertices, 1] -> 1, {n0}];
B00 = SparseArray[ Join[bndedges, Reverse /@ bndedges] -> 1, {n0, n0}];
B10 = SparseArray[ Transpose[{Join[bndedgelist, bndedgelist],
Join @@ Transpose[bndedges]}] -> 1, {n1, n0}];
,
bndedgeQ = SparseArray[{}, {Length[edges]}];
bndvertexQ = SparseArray[{}, {n0}];
B00 = SparseArray[{}, {n0, n0}];
B10 = SparseArray[{}, {n1, n0}];
];
intedgeQ = SparseArray[Subtract[1, Normal[bndedgeQ]]];
intvertexQ = SparseArray[Subtract[1, Normal[bndvertexQ]]];
n = Total[A10];
β = OptionValue["VertexWeightFunction"];
η = OptionValue["EdgeWeight"];
βn = β /@ n;
βbnd = If[TrueQ[OptionValue["AverageBoundary"]], 1./8., 0.];
subdivisionmatrix =
Join[Plus[
DiagonalMatrix[SparseArray[1. - βn] intvertexQ + (1. - 2. βbnd) bndvertexQ],
SparseArray[(βn/n intvertexQ)] A00, βbnd B00],
Plus @@ {((3. η - 1.) intedgeQ) (A10),
If[Abs[η - 0.5] < Sqrt[$MachineEpsilon],
Nothing, ((0.5 - η) intedgeQ) (A12.A20)], 0.5 B10}];
newpts = subdivisionmatrix.pts;
triangleneighedges = Module[{f1, f2, f3},
{f1, f2, f3} = Transpose[faces];
Partition[
Extract[
edgelookuptable,
Transpose[{Flatten[Transpose[{f2, f3, f1}]],
Flatten[Transpose[{f3, f1, f2}]]}]],
3]
];
newfaces =
Flatten[getSubdividedTriangles[faces, triangleneighedges + n0],
1];
{
MeshRegion[newpts, Polygon[newfaces]],
subdivisionmatrix
}
]
Test examples
---
So, let's test it. A classical example is subdividing an `"Icosahedron"`:
R = RegionBoundary@PolyhedronData["Icosahedron", "MeshRegion"];
regions = NestList[LoopSubdivide, R, 5]; // AbsoluteTiming // First
g = GraphicsGrid[Partition[regions, 3], ImageSize -> Full]
> 0.069731
[![enter image description here][1]][1]
Now, let's tackle the `"Triceratops"` from above:
R = ExampleData[{"Geometry3D", "Triceratops"}, "MeshRegion"];
regions = NestList[LoopSubdivide, R, 2]; // AbsoluteTiming // First
g = GraphicsGrid[Partition[regions, 3], ImageSize -> Full]
> 0.270776
[![enter image description here][2]][2]
The meshes so far had trivial boundary. As for an example with nontrivial boundary, I dug out the `"Vase"` from the example dataset:
R = ExampleData[{"Geometry3D", "Vase"}, "MeshRegion"];
regions = NestList[LoopSubdivide, R, 2]; // AbsoluteTiming // First
g = GraphicsRow[
Table[Show[S, ViewPoint -> {1.4, -2.1, -2.2},
ViewVertical -> {1.7, -0.6, 0.0}], {S, regions}],
ImageSize -> Full]
> 1.35325
[![enter image description here][3]][3]
Remarks and edits
---
Added some performance improvements and incorporated some ideas by [Chip Hurst](https://mathematica.stackexchange.com/users/4346) from [this post](https://mathematica.stackexchange.com/questions/160443/how-to-obtain-the-cell-adjacency-graph-of-a-mesh/166491#166491).
Added options for customization of the subdivision process, in particular for planar subdivision (see [this post](https://mathematica.stackexchange.com/a/170604/38178) for an application example).
Added a way to also return the subdivision matrix since it can be useful, e.g. for [geometric multigrid solvers](https://mathematica.stackexchange.com/a/173617/38178). Just call with a matrix as second argument, e.g., `LoopSubdivide[R,{{1}}]`.
Fixed a bug that produced dense subdivision matrices in some two-dimensional examples due to not using `0` as `"Background"` value.
[4]: https://i.stack.imgur.com/nuWBd.png
[1]: https://i.stack.imgur.com/l1VcB.png
[2]: https://i.stack.imgur.com/qSbBh.png
[3]: https://i.stack.imgur.com/dp1BY.pngHenrik Schumacher2018-05-14T16:15:13ZEmbedding images in QR code
http://community.wolfram.com/groups/-/m/t/1341834
![manually edited][11]
I guess a story-telling type of post would attract more upvotes and probably give some insight into how to 'solve problems' using Mathematica, so I will go into details and try to explain not only the code but also how I figured out how to write it.
To begin with, here are three QR codes generated with the code. Check for yourself: they are actually scan-able~ It's also amazing that even very fine details of the image can be shown in the QR code. (Note that these QR codes may look even better if you view them with your glasses off XD)
![mma1][1] ![poa][2]
![mma2][3]
#How this works?
In fact, this form of QR code is not my original idea; I came across this type of QR code on the internet but failed to find its origin. So I tried to figure out the principle by myself.
Carefully observing the image, one can find that there is something odd about this QR:
![wierd behavior][4]
The markers in the corners are **three times coarser** than the rest of the QR. So I initially hypothesized that the QR recognition algorithm first averages the brightness of each segment, turning the image into a normal QR code, and then recognizes it. The code I used is as follows:
Block[{img = Import["http://community.wolfram.com//c/portal/getImageAttachment?filename=mathematica1.png&userId=1340903"], dat, partitioned},
dat = ImageData@ImagePad[Binarize[ImageResize[img, Scaled[1/3]]], -4];
partitioned = Partition[dat, {3, 3}];
Grid[{ImageResize[#, Dimensions@dat], BarcodeRecognize@#} & /@
{Image@dat,Binarize@Image@Map[Mean@*Flatten, partitioned, {2}]}]
]
The result proved me wrong, as the averaged version cannot be properly recognized. Observing the QR code further, I found that there are mysterious dots even in places that should be purely white, and that the dots are a bit *too* structured. So I suspected that a normal QR code recognition algorithm only takes the color of the center dot, and added this to the previous code:
Map[#[[2,2]]&,partitioned,{2}]
then it worked out properly!
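Assembled, the center-sampling variant of the earlier test reads roughly as follows (a sketch reusing `partitioned` and `dat` from the block above):

```mathematica
(* keep only the center pixel of every 3x3 block, then upscale and recognize *)
centers = Map[#[[2, 2]] &, partitioned, {2}];
BarcodeRecognize[ImageResize[Image[centers], Dimensions@dat]]
```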
![theory][5]
As we've already cracked the theory, we can now generate some of our own.
#How to generate?
####QR code generation
First we can use `BarcodeImage` to generate a QR code, for example here I would use: `"This is a sample QR generated by Mathematica!"` as the content of the QR code:
text = "This is a sample QR generated by Mathematica!";
qrraw = BarcodeImage[text, {"QR", "H"}, 1]
BarcodeRecognize@qrraw
####Image processing
Then we create a black and white image to use as background, for example here we use the wolfram wolf icon:
![wolfram wolf][6]
Import it, convert it to grayscale, and adjust the gray levels a bit:
img=ColorConvert[Rasterize[Graphics[{
Inset[Import["http://community.wolfram.com//c/portal/getImageAttachment?filename=wolframwolf.png&userId=1340903"],{.6,.4},Automatic,.8],
Text[Style["WOLFRAM",Bold,14],{.5,.92}]
},PlotRange->{{0,1},{0,1}},ImageSize->3ImageDimensions@qrraw]],"Grayscale"]^.45
which returns:
![B&W image][7]
Note that in order to get enough resolution while keeping the QR code easy to scan, the dimension of the QR code is best kept in the range [25, 50]. One can check it using `ImageDimensions@img` and adjust it by changing the error correction level via `{"QR", lev}`, where lev can be "L", "M", "Q", or "H".
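The size trade-off can be checked directly; a quick sketch reusing `text` from above (the exact dimensions depend on the length of the encoded text):

```mathematica
(* module counts of the QR symbol for each error correction level *)
ImageDimensions[BarcodeImage[text, {"QR", #}, 1]] & /@ {"L", "M", "Q", "H"}
```

Higher error correction levels encode more redundancy and therefore produce larger symbols.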
####Merging
Then we should merge these two images together. Here we use the technique of [dithering](https://en.wikipedia.org/wiki/Dither) to display a grayscale image using only white and black pixels. In the dithering process, we must make sure that the value at the center of each 3x3 block corresponds to the value of the matching module in the QR image, or the QR code would be invalid. The code can be written as follows:
dithering[imgdat_, qrdat_] :=
Block[{imgdat1 = imgdat, dimx, dimy, tmp1, tmp2, f = UnitStep[# - .5] &},
{dimx, dimy} = Dimensions@imgdat;
Quiet@Do[
(*Rounding*)
tmp1 = If[Mod[{i, j}, 3] == {2, 2}, qrdat[[(i + 1)/3, (j + 1)/3]], f[imgdat1[[i, j]]]];
tmp2 = Clip[imgdat1[[i, j]] - tmp1, {-.5, .5}];
(*Diffuse Error*)
imgdat1[[i, j]] = tmp1;
imgdat1[[i, j + 1]] += 0.4375 tmp2;
If[j != 1, imgdat1[[i + 1, j - 1]] += 0.1875 tmp2];
imgdat1[[i + 1, j]] += 0.3125 tmp2;
imgdat1[[i + 1, j + 1]] += 0.0625 tmp2
, {i, dimx}, {j, dimy}];
imgdat1
]
Special attention should be paid to the handling of the key pixels of the QR code: the error created by introducing them should not be ignored, but its influence must be kept within a limited range. So a `Clip` on the error is required here, whereas in a traditional dithering process it would be redundant.
Apply dithering to the image and we have:
Image[ditherdat=dithering[ImageData@img, ImageData@qrraw]]
![output 1][8]
####Refinement
One can see that the shape of the original image is quite well preserved and key points of the QR code are properly dealt with. Then the final step is to process the key features on the corner and edge of the QR code, which is quite trivial:
replicate = (Flatten[ConstantArray[#, {3, 3}], {{3, 1}, {4, 2}}] &);
refineqr[qrdat_] :=
Block[{qrd = qrdat, d = Length[qrdat]},
(*Corner*)
(qrd[[#1 ;; 24 #1 ;; #1, #2 ;; 24 #2 ;; #2]] = replicate[qrd[[2 #1 ;; 23 #1 ;; 3 #1, 2 #2 ;; 23 #2 ;; 3 #2]]]) & @@@ {{1, 1}, {1, -1}, {-1, 1}};
(*Edge*)
qrd[[22 ;; d - 21, 19 ;; 21]] = Transpose[qrd[[19 ;; 21, 22 ;; d - 21]] = replicate[{Mod[Range[(d + 1)/3 - 14], 2]}]];
qrd]
Applying this to the previously obtained result gives the final image, which is scannable:
Image[final = refineqr@ditherdat]
BarcodeRecognize@%
It is usually preferable to view the image at a 3x zoom:
Image@replicate@final
![final result][9]
A fully packaged version is given in the attached notebook file, where:
createqr[text,img]
would generate the same result.
Further optimizations could include using machine learning to refine the visual result: sharper lines, less interference from key points, and more could be expected.
**ENJOY~**
----
#Update
@Henrik Schachner kindly reminded me that the previous QR code is not that easy to scan with average QR scanning software. So I made some small updates to make the QR code more standardized and much easier to scan:
refineqr[qrdat_] :=
Block[{qrd = qrdat, d = Length[qrdat], temp = Fold[ArrayPad[#1, 1, #2] &, {{{0}}, 1, 0}], p},
p = Position[Round@ListCorrelate[temp, qrdat[[2 ;; ;; 3, 2 ;; ;; 3]], {{1, 1}, {-1, -1}}, 0, Abs@*Subtract], 0, 2];
(*Corner*)
(qrd[[#1 ;; 24 #1 ;; #1, #2 ;; 24 #2 ;; #2]] = replicate[qrd[[2 #1 ;; 23 #1 ;; 3 #1, 2 #2 ;; 23 #2 ;; 3 #2]]]) & @@@ {{1, 1}, {1, -1}, {-1, 1}};
(*Edge*)
qrd[[22 ;; d - 21, 19 ;; 21]] = Transpose[ qrd[[19 ;; 21, 22 ;; d - 21]] = replicate[{Mod[Range[(d + 1)/3 - 14], 2]}]];
(*Special*)
(qrd[[3 #1 - 2 ;; 3 #1 + 12, 3 #2 - 2 ;; 3 #2 + 12]] = replicate@temp) & @@@ p;
qrd]
After this update, the QR code looks like this:
![edited][10]
After some minor manual editing, it can look like this:
![manually edited][11]
Maybe this version is easier to scan thanks to the newly added alignment block in the lower-right corner.
Also, I think I found a better realization using the same basic design principle [here](http://vecg.cs.ucl.ac.uk/Projects/SmartGeometry/halftone_QR/paper_docs/halftoneQR_sigga13.pdf).
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mma1.png&userId=1340903
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5511POA.png&userId=1340903
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mma2.png&userId=1340903
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=illus.png&userId=1340903
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6963illus.png&userId=1340903
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=wolframwolf.png&userId=1340903
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out0.png&userId=1340903
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out1.png&userId=1340903
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=final_big.png&userId=1340903
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=edited.png&userId=1340903
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ww_manual.png&userId=1340903

Jingxian Wang, 2018-05-18T19:38:43Z

Avoid getting $Aborted in FinancialData?
http://community.wolfram.com/groups/-/m/t/1342726
Using the Mathematica Online FinancialData function often returns $Aborted.

Kianoosh Kassiri, 2018-05-21T13:20:07Z

TranslationCell - Instantly translate English text cells to your language!
http://community.wolfram.com/groups/-/m/t/1341205
Note: Please check out this [interesting post on a NotebookTranslate][1] function([direct link to cloud notebook][2]), by [Thomas Colignatus][3].
Starting with version 11.1 the Wolfram Language includes a new text translation function, aptly named [`TextTranslation`][4].
Recently I made a post which showed how you can use this function to [query Wolfram|Alpha in any language][5].
But there is so much more you can do with this function. In this post I will share the idea of a `TranslationCell` function, which
creates a regular text cell, with an attached button which lets you toggle from English to a specific language.
Let's start with a famous English quote from the recent past:
> We choose to go to the Moon! We choose to go to the Moon in this
> decade and do the other things, not because they are easy, but because
> they are hard; because that goal will serve to organize and measure
> the best of our energies and skills, because that challenge is one
> that we are willing to accept, one we are unwilling to postpone, and
> one we intend to win, and the others, too.
And let's assign this quote to a variable named `quote`:
```
quote = "We choose to go to the Moon! We choose to go to the Moon in \
this decade and do the other things, not because they are easy, but \
because they are hard; because that goal will serve to organize and \
measure the best of our energies and skills, because that challenge \
is one that we are willing to accept, one we are unwilling to \
postpone, and one we intend to win, and the others, too."
```
I don't want to get too deeply into the implementation details, but if you are interested in them I recommend perusing the code in my GitHub project:
https://github.com/arnoudbuzing/prototypes/blob/master/Prototypes/Notebook.wl#L148
And if you want to try this function, simply install the paclet that has this function included:
```
PacletInstall["https://github.com/arnoudbuzing/prototypes/releases/download/v0.2.5/Prototypes-0.2.5.paclet"]
```
So let's take a look at an example:
```
TranslationCell[ quote, "Spanish" ]
```
This creates the following cell:
![enter image description here][6]
And clicking on the button will translate the English text to Spanish (this may take 1-2 seconds since it is calling a translation service):
![enter image description here][7]
Clicking the button again reverts to English (this is fast, because it stored the original text in the cell as metadata):
![enter image description here][8]
And of course this works for many languages, like Russian:
```
TranslationCell[ quote, "Russian" ]
```
![enter image description here][9]
Or Swedish:
```
TranslationCell[ quote, "Swedish" ]
```
![enter image description here][10]
Or Arabic:
```
TranslationCell[ quote, "Arabic" ]
```
![enter image description here][11]
It might be useful to extend this idea to support translation between any two languages ( "LanguageA" -> "LanguageB" ), so I think this will be the next improvement.
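As a sketch of that improvement (assuming the rule form of `TextTranslation`, which lets you specify the source language explicitly rather than auto-detecting it), the extended `TranslationCell` could delegate directly to:

```
(* translate between two explicit languages via the rule form of TextTranslation *)
TextTranslation["We choose to go to the Moon!", "English" -> "Russian"]
```

The hypothetical extended signature would then be `TranslationCell[text, "LanguageA" -> "LanguageB"]`, with the button toggling between the two stored texts.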
Let me know what you think! I am interested in feedback and additional ideas on how to use `TextTranslation` in the Wolfram Language!
[1]: http://community.wolfram.com/groups/-/m/t/1313456
[2]: https://www.wolframcloud.com/objects/thomas-cool/Utilities/2018-04-02-NotebookTranslate.nb
[3]: http://community.wolfram.com/web/cool
[4]: http://reference.wolfram.com/language/ref/TextTranslation.html
[5]: http://community.wolfram.com/groups/-/m/t/1337022
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out-04.png&userId=22112
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out-05.png&userId=22112
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out-04.png&userId=22112
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out-06.png&userId=22112
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out-07.png&userId=22112
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out-08.png&userId=22112

Arnoud Buzing, 2018-05-17T16:14:18Z

Use NMaximize output as parameter?
http://community.wolfram.com/groups/-/m/t/1342618
Hello,
I'm running two maximization problems and want to use their output as an additional condition in an `If` function.
How can I use the output as a parameter (e.g. X+z in this code)?
Here is the Code:
ClearAll[Y ,X,z,w]
T0=1
Hmu=0.2
SA=10+Hmu
SB=10-Hmu
Sigma=1.5
K=10
r=1.02
gamma=0.1
Vi=2
Eye=2
ZC=(K-SA)/Sigma
ZB=(K-SB)/Sigma
S0=(SA-gamma(Sigma^2)(Vi /Eye ))/r
C0=((SA-gamma(Sigma^2)(Vi /Eye ) -K)(1-CDF[NormalDistribution[0, 1], ZC+gamma*Sigma*(Vi /Eye ) ])+(PDF[NormalDistribution[0, 1], ZC+gamma*Sigma*(Vi /Eye ) ])Sigma)/r
A=(((Vi /Eye ) -X)S0-Y*C0)r+X*SA-(gamma/2)(X^2)(Sigma^2)
B=(((Vi /Eye ) -X)S0-Y*C0)r +(X +Y)SA -(gamma/2)(X +Y) ^2(Sigma^2) -Y *K
Q=(((Vi /Eye ) -z)S0-w*C0)r+z*SB-(gamma/2)(z^2)(Sigma^2)
P=(((Vi /Eye ) -z)S0-w*C0)r +(z +w)SB -(gamma/2)(z +w) ^2(Sigma^2) -w *K
UA=-Exp[-gamma*A]*(CDF[NormalDistribution[0, 1], ZC+gamma*X *Sigma ]) -Exp[-gamma*B]*(1-CDF[NormalDistribution[0, 1], ZC+gamma*(X +Y )*Sigma ])
UB=-Exp[-gamma*Q ]*(CDF[NormalDistribution[0, 1], ZB+gamma*z *Sigma ]) -Exp[-gamma*P ]*(1-CDF[NormalDistribution[0, 1], ZB+gamma*(z +w )*Sigma ])
NMaximize[UA ,{X >=0,X<=2,Y>=-1,Y<=1},{X ,Y}]
NMaximize[UB ,{z >=0,z<=2,w>=-1,w<=1},{z ,w}]
If[X+z<=2,1,0]

yossi sh, 2018-05-21T09:37:41Z

Get Stellate image using the PolyhedronOperations` package?
http://community.wolfram.com/groups/-/m/t/1341888
I tried using the PolyhedronOperations` package. The Needs command worked OK. I used an example from the Mathematica help file, but Stellate didn't produce the output shown there. Here is the output I got:
![enter image description here][1]
Here is the output in the help file
![enter image description here][2]
Any ideas about the reason for the difference? Thanks
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Q1.png&userId=764017
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Q1.1.png&userId=764017

fajad binj, 2018-05-19T17:08:26Z

Avoid problem with the Mathematica Trial edition installation?
http://community.wolfram.com/groups/-/m/t/1342279
I have a problem with the Mathematica installation.
I tried to install the Mathematica Trial edition.
But the Mathematica Trial edition cannot be installed; it fails with a message related to
"This application can only run a single instance"
It's probably a problem with " a single instance ".
What is the problem? What should I do?

Yoon Young Jin, 2018-05-21T04:32:36Z

Find formula of a Gamma[] product with complex conjugate pair of numbers?
http://community.wolfram.com/groups/-/m/t/1342337
The Wolfram Language knows some simplifications for products of the Gamma function at complex conjugate arguments, e.g.
In[1]:= Gamma[n I] Gamma[-n I] // FullSimplify
Out[1]= (\[Pi] Csch[n \[Pi]])/n
In[2]:= Gamma[1 + n I] Gamma[1 - n I] // FullSimplify
Out[2]= n \[Pi] Csch[n \[Pi]]
In[3]:= Gamma[2 + n I] Gamma[2 - n I] // FullSimplify
Out[3]= n (1 + n^2) \[Pi] Csch[n \[Pi]]
In[4]:= Gamma[3 + n I] Gamma[3 - n I] // FullSimplify
Out[4]= n (4 + 5 n^2 + n^4) \[Pi] Csch[n \[Pi]]
In[5]:= Gamma[4 + n I] Gamma[4 - n I] // FullSimplify
Out[5]= Gamma[4 - I n] Gamma[4 + I n]
... but at some point it gets stuck. With the help of WL I found these identities for m = 4, 5, and 6.
In[8]:= N[Table[{Gamma[4 + n I] Gamma[
4 - n I] == (\[Pi] n (n^6 + 14 n^4 + 49 n^2 + 36))/
Sinh[n \[Pi]]}, {n, 1, 4}], 20]
Out[8]= {{True}, {True}, {True}, {True}}
In[9]:= N[
Table[{Gamma[5 + n I] Gamma[
5 - n I] == (\[Pi] n (n^8 + 30 n^6 + 273 n^4 + 820 n^2 + 576))/
Sinh[n \[Pi]]}, {n, 1, 4}], 20]
Out[9]= {{True}, {True}, {True}, {True}}
In[10]:= N[
Table[{Gamma[6 + n I] Gamma[
6 - n I] == (\[Pi] n (n^10 + 55 n^8 + 1023 n^6 + 7645 n^4 +
21076 n^2 + 14400))/Sinh[n \[Pi]]}, {n, 1, 4}], 20]
Out[10]= {{True}, {True}, {True}, {True}}
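For what it's worth, the pattern in these outputs suggests a conjectured closed form, which follows from the recursion $\Gamma(z+1) = z\,\Gamma(z)$ together with the base case $\Gamma(1+in)\,\Gamma(1-in) = \pi n \operatorname{csch}(\pi n)$ shown in Out[2]:

$$\Gamma(m + i n)\,\Gamma(m - i n) = \frac{\pi n}{\sinh(\pi n)} \prod_{k=1}^{m-1} \left(k^2 + n^2\right), \qquad m \in \mathbb{N}.$$

For $m = 3$ the product is $(1+n^2)(4+n^2) = n^4 + 5n^2 + 4$, matching Out[4] above.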
And of course more identities for other natural numbers can be found with some effort. Maybe someone can find a closed formula for the case Gamma[m+I n]Gamma[m-I n], where m is a natural number.

Oliver Seipel, 2018-05-20T12:55:15Z

Clarify sequence-to-sequence learning with neural nets in WL?
http://community.wolfram.com/groups/-/m/t/1341622
I've been learning about recurrent neural networks lately and I think I'm starting to get the basic idea of how they work. I'm particularly interested in the sequence transformation capabilities of these nets for applications in both NLP and generative art. I've played with a few simple (non-recurrent) nets in Mathematica, but would like to learn more about how to implement recurrent sequence-to-sequence learning.
I've read the Wolfram tutorial [Sequence Learning and NLP with Neural Networks][1], and I'm particularly interested in the section titled **Integer Addition with Variable-Length Output**. If I understand correctly, sequence-to-sequence learning involves converting a sequence to a vector, and then converting that vector into another sequence. I understand (mostly) the "sequence-to-vector" parts with things like `SequenceLastLayer[]`. However, I'm still not entirely clear from the tutorial how the "vector-to-sequence" part of this works. Are there other, more descriptive examples somewhere?
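For what it's worth, the basic "vector-to-sequence" trick is to replicate the fixed-length vector into a sequence and let a recurrent layer decode it position by position. A minimal sketch of such a decoder (layer sizes are arbitrary, not taken from the tutorial):

```
(* decode a length-16 vector into a sequence of 10 class predictions *)
decoder = NetChain[{
    ReplicateLayer[10],             (* vector -> sequence of 10 copies *)
    GatedRecurrentLayer[32],        (* recurrent decoding over the sequence *)
    NetMapOperator[LinearLayer[5]], (* per-position class scores *)
    SoftmaxLayer[]
   }, "Input" -> 16]
```

Variable-length output, as in the integer-addition example, additionally requires an end-of-sequence token so the net can signal where to stop.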
[1]: http://reference.wolfram.com/language/tutorial/NeuralNetworksSequenceLearning.html

Andrew Campbell, 2018-05-18T14:00:11Z

Prove a formula for a known convergent series with Resolve?
http://community.wolfram.com/groups/-/m/t/1342090
I ran across this behaviour that I do not understand. I wanted to prove a formula for a known convergent series:
$\sum_{k=-\infty}^\infty \frac{1}{(2k+1)(2q-2k-1)} = -\frac{\pi^2}{4} \delta_{q,0}\quad$ for $\quad q\in\mathbb{Z}$.
However, Mathematica says this is wrong:
Resolve[ForAll[q, Sum[1/((2 k + 1) (2 q - 2 k - 1)), {k, -Infinity, Infinity}] == -(\[Pi]^2/4) KroneckerDelta[q, 0]], Integers]
returns `False`. Moreover, if the values for $q$ are restricted, i.e. if I look separately at the cases $q=0$ and $q\neq 0$,
I get a `True` result: The two commands
Resolve[ForAll[q, q == 0, Sum[1/((2 k + 1) (2 q - 2 k - 1)), {k, -Infinity, Infinity}] == -(\[Pi]^2/4) KroneckerDelta[q, 0]], Integers]
Resolve[ForAll[q, q != 0, Sum[1/((2 k + 1) (2 q - 2 k - 1)), {k, -Infinity, Infinity}] == -(\[Pi]^2/4) KroneckerDelta[q, 0]], Integers]
both evaluate to `True`. Logically, this is a contradiction, so my question is whether I misunderstood the function of these commands or this is a bug.
Thanks in advance for clarification!
(Using Mathematica 10.0)

Julian Farnsteiner, 2018-05-19T19:29:14Z

Create Magic Cubes, moon level ( 9x9) ?
http://community.wolfram.com/groups/-/m/t/1341127
![Magic cube 9x9][1]
Hello Wolfram Community! My name is Serg, and I'm wondering how this could possibly be done.
Besides the fact that every horizontal, vertical, and diagonal row sums to 369, it also gives 1,6,2,7,3,8,4,9,5 if you add the numbers in a cell,
which in one step gives the sequences 1,2,3,4,5 and 6,7,8,9,
+
if you look at the second number in any cell, you'll see that it creates a perfect pattern that mirrors from both sides,
like 7,6,7,6,7,6,7,6,7 from the left side and 5,6,5,6,5,6,5,6,5 from the other. In the whole picture you can see that it sequentially creates a wonderful
pattern from both sides, with only 1,1,1,1,1,1,1,1,1 in the central row.
Also, if you look at the second numbers at the top and the bottom, you'll find an interesting mirror pattern 7,8,9,0,1,2,3,4,5, which surely has some meaning;
plus, at the bottom we can see that the pattern is the same and applies to all vertical rows: all the second numbers are the same.
So, my question is: how could this possibly be done?
I've been learning math for years and really don't understand the key to making beautiful constructions like this one.
I have software like HypercubeGenerator, TesseractGenerator, and CubeGenerator, but it gives me results far from this miracle.
Could you please help me understand how it was done?
Thank you a lot for your attention.
The cube itself is in the attachment.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=talisman_137a.jpg&userId=210822

Sergiy Skorin, 2018-05-17T16:06:29Z

Perform Community-specific search?
http://community.wolfram.com/groups/-/m/t/1342064
Perhaps I am missing something, but is there a Wolfram Community-specific search field where I can look for particular posts?
There is the overall WRI search in the upper right hand corner of the window, but as far as I can see, nothing to search the community only.
Have I missed it in my coffee-limited state of mind?

David Reiss, 2018-05-19T17:17:03Z

UNET image segmentation in stem cells research
http://community.wolfram.com/groups/-/m/t/1341081
For my research project I encountered a thorny problem. But before I describe it, let me briefly introduce the project. I am using embryonic stem cells that self-organize into spheroids (balls of cells) to study gastrulation events. Without bogging readers down in technical jargon: "gastrulation" is the process in which stem cells form the distinct germ layers; each layer then goes on to form the various tissues and organs, unraveling the developmental plan of the entire organism. I am using experimental techniques and quantitative principles from biophysics and engineering to understand some aspects of this crucial process.
Now, coming back to the problem at hand: the gastruloids (image below) are quite rough in appearance and not as beautiful as one would like (only a mother could love such an image). Any means of quantifying these gastruloids requires me to first segment them. Time-lapse images of gastruloids make it apparent that they shed a lot of cells (for reasons I do not yet know). This adds considerable noise to the system, often to the point that, as a human, my eyes are fooled and struggle to find the right contours of the spheroids. Here comes the disclosure: classical operations in image processing (gradients and edge detection, filtering, morphological operations, etc.) prove utterly futile for image segmentation in my case.
![enter image description here][1]
(A gastruloid – virtually a ball of cells with many shed around the periphery)
So what can you do to address a problem where even the best image-processing tool in existence, the human eye, fails? This is precisely where neural networks come in. Neural networks have been selling like hotcakes in recent years, adding life and hope to the once-dead field of artificial intelligence. Again avoiding the underlying technical details: a neural network is a paradigm by which a computer mimics the working of a human brain by taking into account the complex interactions between cells, but only digitally. There are many flavours of neural networks out there, each geared towards a specific task. With the advancements made in deep learning and artificial intelligence, neural nets have started to surpass humans even at classification tasks, the very tasks humans were thought to be best at. A few recent examples that come to mind include Google's AlphaGo beating the former world Go champion and an AI diagnosing skin cancer with unprecedented accuracy.
I utilized one such flavour of neural networks (a deep convolutional network termed UNET) to solve my longstanding problem. I constructed the network in the Wolfram Language with external help from Alexey Golyshev. UNET is a deep convolutional network with a series of convolution and pooling operations in its contraction phase (where features are extracted), followed by a sequence of deconvolution and convolution operations in its expansion phase, which yields the network output. This output can be thresholded to generate a binarized mask (the image segmentation).
![enter image description here][2]
The architecture of UNET as provided by the author: https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/
(* ::Package:: *)
BeginPackage["UNETSegmentation`"]
(* ::Section:: *)
(*Creating UNet*)
conv[n_]:=NetChain[
{
ConvolutionLayer[n,3,"PaddingSize"->{1,1}],
Ramp,
BatchNormalizationLayer[],
ConvolutionLayer[n,3,"PaddingSize"->{1,1}],
Ramp,
BatchNormalizationLayer[]
}
];
pool := PoolingLayer[{2,2},2];
dec[n_]:=NetGraph[
{
"deconv" -> DeconvolutionLayer[n,{2,2},"Stride"->{2,2}],
"cat" -> CatenateLayer[],
"conv" -> conv[n]
},
{
NetPort["Input1"]->"cat",
NetPort["Input2"]->"deconv"->"cat"->"conv"
}
];
nodeGraphMXNET[net_,opt: ("MXNetNodeGraph"|"MXNetNodeGraphPlot")]:= net~NetInformation~opt;
UNET := NetGraph[
<|
"enc_1"-> conv[64],
"enc_2"-> {pool,conv[128]},
"enc_3"-> {pool,conv[256]},
"enc_4"-> {pool,conv[512]},
"enc_5"-> {pool,conv[1024]},
"dec_1"-> dec[512],
"dec_2"-> dec[256],
"dec_3"-> dec[128],
"dec_4"-> dec[64],
"map"->{ConvolutionLayer[1,{1,1}],LogisticSigmoid}
|>,
{
NetPort["Input"]->"enc_1"->"enc_2"->"enc_3"->"enc_4"->"enc_5",
{"enc_4","enc_5"}->"dec_1",
{"enc_3","dec_1"}->"dec_2",
{"enc_2","dec_2"}->"dec_3",
{"enc_1","dec_3"}->"dec_4",
"dec_4"->"map"},
"Input"->NetEncoder[{"Image",{160,160},ColorSpace->"Grayscale"}]
]
(* ::Section:: *)
(*DataPrep*)
dataPrep[dirImage_,dirMask_]:=Module[{X, masks,imgfilenames, maskfilenames,ordering, fNames,func},
func[dir_] := (SetDirectory[dir];
fNames = FileNames[];
ordering = Flatten@StringCases[fNames,x_~~p:DigitCharacter.. :> ToExpression@p];
Part[fNames,Ordering@ordering]);
imgfilenames = func@dirImage;
X = ImageResize[Import[dirImage<>"\\"<>#],{160,160}]&/@imgfilenames;
maskfilenames = func@dirMask;
masks = Import[dirMask<>"\\"<>#]&/@maskfilenames;
{X, NetEncoder[{"Image",{160,160},ColorSpace->"Grayscale"}]/@masks}
]
(* ::Section:: *)
(*Training UNet*)
trainNetwithValidation[net_,dataset_,labeldataset_,validationset_,labelvalidationset_, batchsize_: 8, maxtrainRounds_: 100]:=Module[{},
SetDirectory[NotebookDirectory[]];
NetTrain[net, dataset->labeldataset,All, ValidationSet -> Thread[validationset-> labelvalidationset],
BatchSize->batchsize,MaxTrainingRounds->maxtrainRounds, TargetDevice->"GPU",
TrainingProgressCheckpointing->{"Directory","results","Interval"->Quantity[5,"Rounds"]}]
];
trainNet[net_,dataset_,labeldataset_, batchsize_:8, maxtrainRounds_: 10]:=Module[{},
SetDirectory[NotebookDirectory[]];
NetTrain[net, dataset->labeldataset,All,BatchSize->batchsize,MaxTrainingRounds->maxtrainRounds, TargetDevice->"GPU",
TrainingProgressCheckpointing->{"Directory","results","Interval"-> Quantity[5,"Rounds"]}]
];
(* ::Section:: *)
(*Measure Accuracy*)
measureModelAccuracy[net_,data_,groundTruth_]:= Module[{acc},
acc =Table[{i, 1.0 - HammingDistance[N@Round@Flatten@net[data[[i]],TargetDevice->"GPU"],
Flatten@groundTruth[[i]]]/(160*160)},{i,Length@data}
];
{Mean@Part[acc,All,2],TableForm@acc}
];
(* ::Section:: *)
(*Miscellaneous*)
saveNeuralNet[net_]:= Module[{dir = NotebookDirectory[]},
Export[dir<>"unet.wlnet",net]]/; Head[net]=== NetGraph;
saveInputs[data_,labels_,opt:("data"|"validation")]:=Module[{},
SetDirectory[NotebookDirectory[]];
Switch[opt,"data",
Export["X.mx",data];Export["Y.mx",labels],
"validation",
Export["Xval.mx",data];Export["Yval.mx",labels]
]
]
EndPackage[];
The above code can also be found in the repository @ [Wolfram-MXNET GITHUB][3]
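A minimal usage sketch of the package above (the directory paths are placeholders; `dataPrep` expects matching image and mask folders as defined in the package, and `trainNet` saves checkpoints relative to the current notebook):

```
Needs["UNETSegmentation`"]

(* load and resize images and their ground-truth masks *)
{X, Y} = dataPrep["C:\\data\\images", "C:\\data\\masks"];

(* train on the GPU: batch size 8, 10 training rounds *)
results = trainNet[UNET, X, Y, 8, 10];
net = results["TrainedNet"];

(* segment one image and view the binarized mask *)
Image@Round@First@net[X[[1]], TargetDevice -> "GPU"]
```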
I trained my network on my laptop GPU (Nvidia GTX 1050) by feeding it augmented data (a set of 300 images constructed from a small dataset). The training finished in under 3 minutes! The accuracy (computed as the Hamming distance between two vectors) of the generated binary masks with respect to the ground truth (unseen data) for a set of 90 images was 98.55%. With this, a task that previously required me to painstakingly trace the contour of each gastruloid by hand can now be performed in a matter of milliseconds. All the saved time and perspiration can be put to use somewhere else.
![enter image description here][4]
Below are the results obtained by applying our trained net to one input:
![enter image description here][5]
The interesting aspect for me was that although my gastruloids are highly dynamic (changing shape over time), I never had to state this explicitly to the network. All the necessary features were learned from the limited number of images I trained the network with. This is the beauty of neural networks.
![enter image description here][6]
Finally, the output of the net applied to a number of unseen images:
![enter image description here][7]
Note: I have a python MXNET version of UNET @ [python mxnet GITHUB][8]
The Wolfram version of UNET, however, seems to outperform the Python version, even though both use MXNET at the back end to implement the neural networks. This should not come as a surprise: my guess is that the people at Wolfram Research have made internal optimizations on top of the library.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=gastruloid.png&userId=942204
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=u-net-architecture-initial-authors-implementation.png&userId=942204
[3]: https://github.com/alihashmiii/UNet-Segmentation-Wolfram
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=img1.png&userId=942204
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=img2.png&userId=942204
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=img3.png&userId=942204
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=img4.png&userId=942204
[8]: https://github.com/alihashmiii/blobsegmentation

Ali Hashmi, 2018-05-17T15:06:45Z

Reflect a plot on the other side of the axe y?
http://community.wolfram.com/groups/-/m/t/1341497
Hi guys!
How could I reflect the plot to the other side of the y-axis (as in the image) so I can work with two wheels?
Is there a built-in Mathematica method for this, or...?
Quadrilatero[q_, xA_, yA_, Lc_, L2_, L3_, modo_, xD_, yD_] :=
Module[
{xB, yB, L5, \[Theta]5, c, \[Alpha], \[Theta]2, xC, yC, \[Theta]3},
xB = xA - Lc;
yB = yA;
L5 = Sqrt[(xD - xB)^2 + (yD - yB)^2];
\[Theta]5 = ArcTan[xD - xB, yD - yB];
c = (L5^2 + L2^2 - L3^2)/(2 L5 L2);
If[modo > 0, \[Alpha] = ArcCos[c], \[Alpha] = -ArcCos[c]];
\[Theta]2 = \[Theta]5 + \[Alpha];
xC = xB + L2 Cos[\[Theta]2];
yC = yB + L2 Sin[\[Theta]2];
\[Theta]3 = 0;
{\[Theta]3, \[Theta]2, {{xA, yA}, {xB, yB}, {xC, yC}, {xD, yD}}}
]
Manipulate[
Module [{sol, coordinate, qq, xD, xA, yD, yA, modo, q},
q = \[Pi];
xD = -0.75;
yD = 0;
xA = s;
yA = 0.30;
modo = 1;
sol = Quadrilatero[q, xA, yA, Lc, L2, L3, modo, xD, yD];
coordinate = sol[[3]];
Show[
(* Plot Traiettoria Nera *)
ParametricPlot[
Quadrilatero[qq, xA, yA, Lc, L2, L3, modo, xD, yD][[3]][[2]],
{qq, 0, q + 0.00001},
PlotRange -> {{-1, 1}, {-.2, .4}},
AspectRatio -> .5,
PlotStyle -> {Black, Dashed}],
(* Plot Traiettoria Blu *)
ParametricPlot[
Quadrilatero[qq, xA, yA, Lc, L2, L3, modo, xD, yD][[3]][[3]],
{qq, 0, q + 0.00001},
PlotRange -> {{-1, .2}, {-.2, .4}},
AspectRatio -> .5,
PlotStyle -> {Blue, Dashed}],
(* Plot Aste *)
ListLinePlot[coordinate,
PlotRange -> {{-1, .2}, {-.2, .4}},
AspectRatio -> .5,
PlotStyle -> Thick],
(* Plot Ruota *)
ListLinePlot[{coordinate[[3]], coordinate[[4]]},
PlotRange -> {{-1, .2}, {-.2, .4}},
AspectRatio -> .5,
PlotStyle -> {Thickness[0.1], Opacity[0.08], Red}
],
Graphics[
{
(*LightBlue,Opacity[0.2],Rectangle[{-0.75,-0.2},{0.75,0.4}],*)
Gray, Thick, Disk[{xA, yA}, .02], Disk[sol[[3]][[4]], .02],
Orange, Thick, Disk[sol[[3]][[2]], .02], Disk[sol[[3]][[3]], .02]
}
]
]
],
{{s, -0.035}, 0.04, -0.075, Appearance -> "Open"},
{{Lc, .1}, 0.04, 0.14, Appearance -> "Open"},
{{L2, 0.7}, 0, 2, Appearance -> "Open"},
{{L3, .1}, 0, .25, Appearance -> "Open"}
]
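One general way to mirror existing graphics across the y-axis, independent of the mechanism code above, is to apply `GeometricTransformation` with `ReflectionTransform[{1, 0}]` (i.e. x -> -x) to the plot's primitives. A minimal sketch:

```
(* any existing plot; here a circle offset to the right of the y-axis *)
g = ParametricPlot[{Cos[t] + 0.5, Sin[t]}, {t, 0, 2 Pi}];

(* overlay the original with its mirror image across the y-axis *)
Show[g,
 Graphics[GeometricTransformation[First[g], ReflectionTransform[{1, 0}]]],
 PlotRange -> All]
```

Applied to the `Show[...]` result inside the `Manipulate`, this would draw the second wheel mechanism without recomputing `Quadrilatero` for mirrored coordinates.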
I attached the .nb file as well, but I don't know why I get a lot of errors when I open the file, even before I run "Evaluate Notebook". Why?

Lorenzo Cristofori, 2018-05-18T11:09:06Z

Solving the Douglas-Plateau Problem Numerically
http://community.wolfram.com/groups/-/m/t/1341653
Cross-posted from [Mathematica.StackExchange](https://mathematica.stackexchange.com/a/158356/38178).
Douglas-Plateau Problem
---
Given a compact, two-dimensional smooth manifold $\varSigma$ with boundary $\partial\varSigma$ and an embedding $\gamma \colon \partial\varSigma \to \mathbb{R}^3$, find an immersion $f \colon \varSigma \to \mathbb{R}^3$ of minimal area $\mathcal{A}(f) \,\colon = \int_{f(\varSigma)} \operatorname{d} \mathcal{H}^2$ among those immersions satisfying $f|_{\partial \varSigma} = \gamma$.
The instance of this problem in which $\varSigma$ is the closed disk is called Plateau's problem. In the 1930s, [Radó](http://www.jstor.org/stable/1968237) and [Douglas](https://www.ams.org/tran/1931-033-01/S0002-9947-1931-1501590-9/S0002-9947-1931-1501590-9.pdf) showed independently that there is always at least one solution of Plateau's problem. (This is not true for manifolds $\varSigma$ in different topological classes.)
If $f$ is a local minimizer of the Douglas-Plateau problem that happens to be also an embedding, then $f(\varSigma)$ describes the shape of a soap film at rest that is spanned into the boundary curve $\gamma(\partial \varSigma)$.
There are several ways to treat this problem numerically, but the simplest method might be to discretize the boundary curve $\gamma(\partial \varSigma)$ by an inscribed polygonal line and a candidate surface $f(\varSigma)$ by an immersed triangle mesh. Then the surface area is merely a function of the coordinates of the (interior) vertices of the immersed mesh, so one can apply numerical optimization methods in the search for minimizers. By the way, that is [precisely what Douglas did](https://www.jstor.org/stable/pdf/1967991) before he moved on to solve Plateau's problem theoretically. (The technique that Douglas used in his proof was also exploited by [Dziuk and Hutchinson](https://www.jstor.org/stable/2585097) to derive a numerical method for solving Plateau's problem.)
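Concretely, for a triangle mesh with vertex positions $p_1,\dots,p_n \in \mathbb{R}^3$ and triangles $T = \{i,j,k\}$, the discretized area functional is simply

$$\mathcal{A}(p) = \sum_{T=\{i,j,k\}} \tfrac{1}{2}\,\bigl| (p_j - p_i) \times (p_k - p_i) \bigr|,$$

a smooth function of the interior vertex coordinates (away from degenerate triangles), so off-the-shelf optimizers apply directly.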
Background on the Algorithm
---
Here is a method that utilizes $H^1$-gradient flows. This is far quicker than the $L^2$-gradient flow (a.k.a. _[mean curvature flow](https://mathematica.stackexchange.com/a/172603/38178)_) or using `FindMinimum` and friends, in particular when dealing with finely discretized surfaces. The algorithm was originally developed by [Pinkall and Polthier](https://projecteuclid.org/download/pdf_1/euclid.em/1062620735).
For those who are interested: A major reason is the [Courant–Friedrichs–Lewy condition](https://en.wikipedia.org/wiki/Courant-Friedrichs-Lewy_condition), which forces the time step size in explicit integration schemes for parabolic PDEs to be proportional to the maximal cell diameter of the mesh. This leads to the need for _many_ time iterations on fine meshes. Another problem is that the Hessian of the surface area with respect to the surface positions is highly ill-conditioned (both in the continuous and in the discretized setting).
In order to compute $H^1$-gradients, we need the Laplace-Beltrami operator of an immersed surface $\varSigma$, or rather its associated bilinear form
$$ a_\varSigma(u,v) = \int_\varSigma \langle\operatorname{d} u, \operatorname{d} v \rangle \, \operatorname{vol}, \quad u,\,v\in H^1(\varSigma;\mathbb{R}^3).$$
The $H^1$-gradient $\operatorname{grad}^{H^1}_\varSigma(F) \in H^1_0(\varSigma;\mathbb{R}^3)$ of the area functional $F(\varSigma)$ solves the following Poisson problem
$$a_\varSigma(\operatorname{grad}^{H^1}_\varSigma(F),v) = DF(\varSigma) \, v \quad \text{for all $v\in H^1_0(\varSigma;\mathbb{R}^3)$}.$$
When the gradient at the surface configuration $\varSigma$ is known, we simply translate $\varSigma$ by $- \delta t \, \operatorname{grad}^{H^1}_\varSigma(F)$ with some step size $\delta t>0$.
Surprisingly, the differential $DF(\varSigma)$ is given by
$$ DF(\varSigma) \, v = \int_\varSigma \langle\operatorname{d} \operatorname{id}_\varSigma, \operatorname{d} v \rangle \, \operatorname{vol},$$
so we can also use the discretized Laplace-Beltrami operator to compute it.
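For completeness, here is a sketch of why this identity holds (the standard first-variation computation, written in a local parameterization $f$ with pullback metric $g = Df^\top Df$):

$$F(f) = \int \sqrt{\det g}\; \operatorname{d}x, \qquad \left.\frac{d}{dt}\right|_{t=0} F(f + t\,v) = \int \operatorname{tr}\!\left(g^{-1}\, Df^\top Dv\right) \sqrt{\det g}\; \operatorname{d}x = \int_\varSigma \langle \operatorname{d} \operatorname{id}_\varSigma, \operatorname{d} v \rangle \, \operatorname{vol}.$$

The middle equality uses $\tfrac{d}{dt}\sqrt{\det g_t} = \tfrac{1}{2}\sqrt{\det g}\,\operatorname{tr}(g^{-1}\dot g)$ with $\dot g = Df^\top Dv + Dv^\top Df$.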
Implementation
---
Unfortunately, Mathematica's FEM tools cannot yet handle finite elements on surfaces. Therefore, I provide some code that assembles the Laplace-Beltrami operator of a triangular mesh.
getLaplacian = Quiet[Block[{xx, x, PP, P, UU, U, VV, V, f, Df, u, Du, v, Dv, g, integrand, quadraturepoints, quadratureweights},
   xx = Table[x[[i]], {i, 1, 2}];
   PP = Table[P[[i, j]], {i, 1, 3}, {j, 1, 3}];
   UU = Table[U[[i]], {i, 1, 3}];
   VV = Table[V[[i]], {i, 1, 3}];
   (*local affine parameterization of the surface with respect to the "standard triangle"*)
   f = x \[Function] PP[[1]] + x[[1]] (PP[[2]] - PP[[1]]) + x[[2]] (PP[[3]] - PP[[1]]);
   Df = x \[Function] Evaluate[D[f[xx], {xx}]];
   (*the Riemannian pullback metric with respect to f*)
   g = x \[Function] Evaluate[Df[xx]\[Transpose].Df[xx]];
   (*two affine functions u and v and their derivatives*)
   u = x \[Function] UU[[1]] + x[[1]] (UU[[2]] - UU[[1]]) + x[[2]] (UU[[3]] - UU[[1]]);
   Du = x \[Function] Evaluate[D[u[xx], {xx}]];
   v = x \[Function] VV[[1]] + x[[1]] (VV[[2]] - VV[[1]]) + x[[2]] (VV[[3]] - VV[[1]]);
   Dv = x \[Function] Evaluate[D[v[xx], {xx}]];
   integrand = x \[Function] Evaluate[D[D[
       Dv[xx].Inverse[g[xx]].Du[xx] Sqrt[Abs[Det[g[xx]]]],
       {UU}, {VV}]]];
   (*since the integrand is constant on each triangle, we use a one-point Gauss quadrature rule (for the standard triangle)*)
   quadraturepoints = {{1/3, 1/3}};
   quadratureweights = {1/2};
   With[{
     code = N[quadratureweights.Map[integrand, quadraturepoints]] /. Part -> Compile`GetElement
     },
    Compile[{{P, _Real, 2}}, code,
     CompilationTarget -> "C",
     RuntimeAttributes -> {Listable},
     Parallelization -> True]
    ]
   ]
  ];
getLaplacianCombinatorics = Quiet[Module[{ff},
With[{
code = Flatten[Table[Table[{ff[[i]], ff[[j]]}, {i, 1, 3}], {j, 1, 3}], 1] /. Part -> Compile`GetElement
},
Compile[{{ff, _Integer, 1}},
code,
CompilationTarget -> "C",
RuntimeAttributes -> {Listable},
Parallelization -> True
]
]]];
LaplaceBeltrami[pts_, flist_, pat_] := With[{
spopt = SystemOptions["SparseArrayOptions"],
vals = Flatten[getLaplacian[Partition[pts[[flist]], 3]]]
},
Internal`WithLocalSettings[
SetSystemOptions["SparseArrayOptions" -> {"TreatRepeatedEntries" -> Total}],
SparseArray[Rule[pat, vals], {Length[pts], Length[pts]}, 0.],
SetSystemOptions[spopt]]
];
Now we can minimize: We use the fact that the differential of the surface area with respect to the vertex positions `pts` equals `LaplaceBeltrami[pts, flist, pat].pts`. I use the constant step size `stepsize = 1`; this works surprisingly well. Of course, one may add a line search method of one's choice.
areaGradientDescent[R_MeshRegion, stepsize_: 1., steps_: 10,
reassemble_: False] :=
Module[{method, faces, bndedges, bndvertices, pts, intvertices, pat,
flist, A, S, solver}, Print["Initial area = ", Area[R]];
method = If[reassemble, "Pardiso", "Multifrontal"];
pts = MeshCoordinates[R];
faces = MeshCells[R, 2, "Multicells" -> True][[1, 1]];
bndedges = Developer`ToPackedArray[Region`InternalBoundaryEdges[R][[All, 1]]];
bndvertices = Union @@ bndedges;
intvertices = Complement[Range[Length[pts]], bndvertices];
pat = Flatten[getLaplacianCombinatorics[faces], 1];
flist = Flatten[faces];
Do[A = LaplaceBeltrami[pts, flist, pat];
If[reassemble || i == 1,
solver = LinearSolve[A[[intvertices, intvertices]], Method -> method]];
pts[[intvertices]] -= stepsize solver[(A.pts)[[intvertices]]];, {i, 1, steps}];
S = MeshRegion[pts, MeshCells[R, 2], PlotTheme -> "LargeMesh"];
Print["Final area = ", Area[S]];
S
];
Example 1
---
We have to create some geometry. Any `MeshRegion` with triangular faces and nonempty boundary would do (although it is not guaranteed that an area minimizer exists).
h = 0.9;
R = DiscretizeRegion[
ImplicitRegion[{x^2 + y^2 + z^2 == 1}, {{x, -h, h}, {y, -h, h}, {z, -h, h}}],
MaxCellMeasure -> 0.00001,
PlotTheme -> "LargeMesh"
]
[![enter image description here][1]][1]
And this is all we have to do for minimization:
areaGradientDescent[R, 1., 20, False]
> Initial area = 8.79696
> Final area = 7.59329
[![enter image description here][2]][2]
Example 2
---
Since creating interesting boundary data along with suitable initial surfaces is a bit involved and since <s>I cannot upload `MeshRegions` here</s> it is just more fun, I decided to compress the initial surface for this example into these two images:
[![enter image description here][3]][3]
[![enter image description here][4]][4]
The surface can now be obtained with
R = MeshRegion[
Transpose[ImageData[Import["https://i.stack.imgur.com/aaJPM.png"]]],
Polygon[Round[#/Min[#]] &@ Transpose[ ImageData[Import["https://i.stack.imgur.com/WfjOL.png"]]]]
]
[![enter image description here][5]][5]
With the function `LoopSubdivide` [from this post](http://community.wolfram.com/groups/-/m/t/1338790), we can successively refine and minimize with
SList = NestList[areaGradientDescent@*LoopSubdivide, R, 4]
[![enter image description here][6]][6]
Here is the final minimizer in more detail:
[![enter image description here][7]][7]
Final Remarks
---
If huge deformations are expected during the gradient flow, it helps a lot to set `reassemble = True`. This always uses the Laplacian of the current surface for the gradient computation. However, it is considerably slower, since the Laplacian has to be refactorized in order to solve the linear equations for the gradient. Using `"Pardiso"` as `Method` helps a bit.
Of course, the best we can hope to obtain this way is a _local_ minimizer.
[1]: https://i.stack.imgur.com/KByfZ.png
[2]: https://i.stack.imgur.com/H7GCH.png
[3]: https://i.stack.imgur.com/aaJPM.png
[4]: https://i.stack.imgur.com/WfjOL.png
[5]: https://i.stack.imgur.com/Aabqj.png
[6]: https://i.stack.imgur.com/vZnFl.png
[7]: https://i.stack.imgur.com/UTjfT.png

Henrik Schumacher 2018-05-18T18:30:21Z

Get and plot Financial Data for NASDAQ - Price Works but Volume Does Not?
http://community.wolfram.com/groups/-/m/t/1320190
Hi,
I'm trying to plot NASDAQ price and volume using the following commands. Interestingly, the "Volume" property works for individual securities but does not work for market indices like NASDAQ. Has anyone else observed this issue?
DateListPlot[FinancialData["NASDAQ", {{2018, 1}, {2018, 4}}], PlotLabel -> "NASDAQ"] (* this works *)
DateListPlot[FinancialData["NASDAQ", "Volume", {{2018, 1}, {2018, 3}}], PlotLabel -> "NASDAQ Volume"] (* this does not work *)
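A quick diagnostic (a sketch; which properties an index actually exposes depends on the data source) is to compare the property lists of the index and of an individual security:

    (* properties reported for the index *)
    FinancialData["NASDAQ", "Properties"]
    (* compare with the properties of an individual security *)
    FinancialData["AAPL", "Properties"]

If `"Volume"` is missing from the first list, the behavior above is a data-availability limitation rather than a plotting problem.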
Thanks

Robert Stephens 2018-04-14T22:27:37Z

How to realize the function Nest[] with two replaced variables?
http://community.wolfram.com/groups/-/m/t/1340863
How to realize Nest[{a,b,#1,#2} &, ]?
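One possible reading (a sketch; the update rule below is hypothetical and not from the question): apply the two-slot function to the components of a list, and, for iteration with `Nest`, use a rule that maps a pair to a pair:

    (* the single replacement: #1 -> x, #2 -> y in {a, b, #1, #2} *)
    f = {a, b, #1, #2} &;
    f @@ {x, y}  (* gives {a, b, x, y} *)
    (* to iterate with Nest, the rule must return a pair, e.g.: *)
    Nest[{#1 + #2, #1 #2} & @@ # &, {x, y}, 2]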
For example, #1 should be replaced by x, and #2 by y, simultaneously or even separately?

Math Logic 2018-05-17T06:01:04Z

Try to beat these MRB constant records!
http://community.wolfram.com/groups/-/m/t/366628
POSTED BY: Marvin Ray Burns .
**MKB constant calculations have been moved to their own discussion at**
[Calculating the digits of the MKB constant][1] .
I think this important point got buried near the end.
When it comes to the passion that I and a few other educated people have for calculating many digits, and the dislike that a few other educated people have for it: it simply tells us that the human mind is multifaceted, giving person A a passion for one task and person B a passion for another!
The MRB constant is defined below. See http://mathworld.wolfram.com/MRBConstant.html
> ![enter image description here][2]
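For reference, a modest-precision evaluation uses the same `NSum` call that appears in the records below:

    (* MRB constant = sum over n >= 1 of (-1)^n (n^(1/n) - 1) *)
    NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
     WorkingPrecision -> 60, Method -> "AlternatingSigns"]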
Here are some record computations. If you know of any others, let me know.
1. On or about Dec 31, 1998 I computed 1 digit of the (additive inverse of the) MRB constant with my TI-92, by adding 1 - sqrt(2) + 3^(1/3) - 4^(1/4) and so on as far as I could. That first digit, by the way, is just 0.
2. On Jan 11, 1999 I computed 3 digits of the MRB constant with the Inverse Symbolic Calculator.
3. In Jan of 1999 I computed 4 correct digits of the MRB constant using Mathcad 3.1 on a 50 MHz 80486 IBM 486 personal computer operating on Windows 95.
4. Shortly afterwards I computed 9 correct digits of the MRB constant using Mathcad 7 professional on the Pentium II mentioned below.
5. On Jan 23, 1999 I computed 500 digits of the MRB constant with the online tool called Sigma.
6. In September of 1999, I computed the first 5,000 digits of the MRB Constant on a 350 MHz Pentium II with 64 MB of RAM, using the simple PARI commands \p 5000; sumalt(n=1,((-1)^n*(n^(1/n)-1))), after allocating enough memory.
7. On June 10-11, 2003, over a period of 10 hours, on a 450 MHz P3 with an available 512 MB of RAM, I computed 6,995 accurate digits of the MRB constant.
8. Using a Sony Vaio P4 2.66 GHz laptop computer with 960 MB of available RAM, at 2:04 PM on 3/25/2004, I finished computing 8,000 digits of the MRB constant.
9. On March 01, 2006, with a 3 GHz PD having 2 GB RAM available, I computed the first 11,000 digits of the MRB Constant.
10. On Nov 24, 2006 I computed 40,000 digits of the MRB Constant in 33 hours and 26 minutes via my own program written in Mathematica 5.2. The computation was run on a 32-bit Windows 3 GHz PD desktop computer using 3.25 GB of RAM.
11. Finishing on July 29, 2007 at 11:57 PM EST, I computed 60,000 digits of the MRB Constant. Computed in 50.51 hours on a 2.6 GHz AMD Athlon with 64-bit Windows XP. Max memory used was 4.0 GB of RAM.
12. Finishing on Aug 3, 2007 at 12:40 AM EST, I computed 65,000 digits of the MRB Constant. Computed in only 50.50 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 5.0 GB of RAM.
13. Finishing on Aug 12, 2007 at 8:00 PM EST, I computed 100,000 digits of the MRB Constant. They were computed in 170 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 11.3 GB of RAM. Median (typical) daily record of memory used was 8.5 GB of RAM.
14. Finishing on Sep 23, 2007 at 11:00 AM EST, I computed 150,000 digits of the MRB Constant. They were computed in 330 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 22 GB of RAM. Median (typical) daily record of memory used was 17 GB of RAM.
15. Finishing on March 16, 2008 at 3:00 PM EST, I computed 200,000 digits of the MRB Constant using Mathematica 5.2. They were computed in 845 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 47 GB of RAM. Median (typical) daily record of memory used was 28 GB of RAM.
16. Washed away by Hurricane Ike: on September 13, 2008, sometime between 2:00 PM and 8:00 PM EST, an almost complete computation of 300,000 digits of the MRB Constant was destroyed. It had run for a long 4015 hours (23.899 weeks, or 1.4454*10^7 seconds) on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 91 GB of RAM. The Mathematica 6.0 code used follows:
Block[{$MaxExtraPrecision = 300000 + 8, a, b = -1, c = -1 - d,
d = (3 + Sqrt[8])^n, n = 131 Ceiling[300000/100], s = 0}, a[0] = 1;
d = (d + 1/d)/2; For[m = 1, m < n, a[m] = (1 + m)^(1/(1 + m)); m++];
For[k = 0, k < n, c = b - c;
b = b (k + n) (k - n)/((k + 1/2) (k + 1)); s = s + c*a[k]; k++];
N[1/2 - s/d, 300000]]
17. On September 18, 2008 a computation of 225,000 digits of the MRB Constant was started on a 2.66 GHz Core2Duo using 64-bit Windows XP. It was completed in 1072 hours. Memory usage is recorded in the attachment pt 225000.xls, near the bottom of this post.
18. A 250,000-digit computation was attempted but failed to complete due to a serious internal error that restarted the machine. The error occurred sometime on December 24, 2008 between 9:00 AM and 9:00 PM. The computation began on November 16, 2008 at 10:03 PM EST. Like the 300,000-digit computation, this one was almost complete when it failed. The max memory used was 60.5 GB.
19. On Jan 29, 2009, 1:26:19 pm (UTC-0500) EST, I finished computing 250,000 digits of the MRB constant with a multiple-step Mathematica command running on a dedicated 64-bit XP machine using 4 GB DDR2 RAM on board and 36 GB virtual. The computation took only 333.102 hours. The digits are at http://marvinrayburns.com/250KMRB.txt . The computation is completely documented in the attached 250000.pd at the bottom of this post.
20. On Sun 28 Mar 2010 21:44:50 (UTC-0500) EST, I started a computation of 300,000 digits of the MRB constant using an i7 with 8.0 GB of DDR3 RAM on board, but it failed due to hardware problems.
21. I computed 299,998 digits of the MRB constant. The computation began Fri 13 Aug 2010 10:16:20 pm EDT and ended 2.23199*10^6 seconds later, on Wednesday, September 8, 2010. I used Mathematica 6.0 for Microsoft Windows (64-bit) (June 19, 2007). That is an average of 7.44 seconds per digit. I used my Dell Studio XPS 8100 i7 860 @ 2.80 GHz with 8 GB physical DDR3 RAM. Windows 7 reserved an additional 48.929 GB virtual RAM.
22. I computed exactly 300,000 digits to the right of the decimal point of the MRB constant from Sat 8 Oct 2011 23:50:40 to Sat 5 Nov 2011 19:53:42 (2.405*10^6 seconds later). This run was 0.5766 seconds per digit slower than the 299,998-digit computation, even though it used 16 GB physical DDR3 RAM on the same machine. The working precision and accuracy goal combination were maximized for exactly 300,000 digits, and the result was automatically saved as a file instead of just being displayed on the front end. Windows reserved a total of 63 GB of working memory, of which 52 GB were recorded being used. The 300,000 digits came from the Mathematica 7.0 command
Quit; DateString[]
digits = 300000; str = OpenWrite[]; SetOptions[str,
PageWidth -> 1000]; time = SessionTime[]; Write[str,
NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
WorkingPrecision -> digits + 3, AccuracyGoal -> digits,
Method -> "AlternatingSigns"]]; timeused =
SessionTime[] - time; here = Close[str]
DateString[]
23. 314,159 digits of the constant took 3 tries due to hardware failure. Finishing on September 18, 2012, I computed 314,159 digits, using 59 GB of RAM. The digits came from the Mathematica 8.0.4 code
DateString[]
NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
WorkingPrecision -> 314169, Method -> "AlternatingSigns"] // Timing
DateString[]
Here I have 10 digits to round off. (The command NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
WorkingPrecision -> big number, Method -> "AlternatingSigns"] tends to give about 3 erroneous digits at the right end.)
**The following records are due to the work of Richard Crandall, found [here][3].**
24. Sam Noble of Apple computed 1,000,000 digits of the MRB constant in 18 days 9 hours 11 minutes 34.253417 seconds.
25. Finishing on Dec 11, 2012, Richard Crandall, an Apple scientist, computed 1,048,576 digits in a lightning-fast 76.4 hours. That was on a 2.93 GHz 8-core Nehalem.
26. I computed a little over 1,200,000 digits of the MRB constant in 11 days, 21 hours, 17 minutes, and 41 seconds, finishing on March 31, 2013. I used a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz.
27. On May 17, 2013 I finished a computation of 2,000,000 or more digits of the MRB constant, using only around 10 GB of RAM. It took 37 days 5 hours 6 minutes 47.1870579 seconds. I used a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz.
28. Finally, I would like to announce a new unofficial world record computation of the MRB constant, finished on Sun 21 Sep 2014 18:35:06. It took 1 month 27 days 2 hours 45 minutes 15 seconds. I computed 3,014,991 digits of the MRB constant with Mathematica 10.0, using my new version of Richard Crandall's code, below, optimized for my platform and large computations. I used a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz with 64 GB of RAM, of which only 16 GB was used. Can you beat it (in number of digits, memory used, or time taken)? This confirms that my previous "2,000,000 or more digit computation" was actually accurate to 2,009,993 digits. (They were used as MRBtest2M.)
(* Fastest (at MRB's end) as of 25 Jul 2014. *)
DateString[]
prec = 3000000;
(* Number of required decimals. *) ClearSystemCache[];
T0 = SessionTime[];
expM[pre_] :=
Module[{a, d, s, k, b, c, n, end, iprec, xvals, x, pc, y, cores = 12,
  tsize = 2^7, chunksize, start = 1, ll, ctab,
  pr = Floor[1.005 pre]}, chunksize = cores*tsize;
n = Floor[1.32 pr];
end = Ceiling[n/chunksize];
Print["Iterations required: ", n];
Print["end ", end];
Print[end*chunksize]; d = ChebyshevT[n, 3];
{b, c, s} = {SetPrecision[-1, 1.1*n], -d, 0};
iprec = Ceiling[pr/27];
Do[xvals = Flatten[ParallelTable[Table[ll = start + j*tsize + l;
x = N[E^(Log[ll]/(ll)), iprec];
pc = iprec;
While[pc < pr, pc = Min[3 pc, pr];
x = SetPrecision[x, pc];
y = x^ll - ll;
x = x (1 - 2 y/((ll + 1) y + 2 ll ll));];(**N[Exp[Log[ll]/ll], pr]**)x, {l, 0, tsize - 1}], {j, 0, cores - 1},
Method -> "EvaluationsPerKernel" -> 4]];
ctab = ParallelTable[Table[c = b - c;
ll = start + l - 2;
b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1));
c, {l, chunksize}], Method -> "EvaluationsPerKernel" -> 2];
s += ctab.(xvals - 1);
start += chunksize;
Print["done iter ", k*chunksize, " ", SessionTime[] - T0];, {k, 0,
end - 1}];
N[-s/d, pr]];
t2 = Timing[MRBtest2 = expM[prec];]; DateString[]
Print[MRBtest2]
MRBtest2 - MRBtest2M
t2

From the computation, t2 was {1.961004112059*10^6, Null}.
Here are a couple of graphs of my record computations in max digits per year:
![enter image description here][4]![enter image description here][5]
[1]: http://community.wolfram.com/groups/-/m/t/1323951?p_p_auth=W3TxvEwH
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=68115.JPG&userId=366611
[3]: http://www.marvinrayburns.com/UniversalTOC25.pdf
[4]: /c/portal/getImageAttachment?filename=7559mrbrecord1.JPG&userId=366611
[5]: /c/portal/getImageAttachment?filename=mrbrecord3.JPG&userId=366611

Marvin Ray Burns 2014-10-09T18:08:49Z

MathEd: Major update of 2 notebooks & packages on the place value system
http://community.wolfram.com/groups/-/m/t/1341254
These 2 notebooks with packages have major updates, at their original locations (otherwise some users might miss out on the update):
(1) Pronunciation of the integers with full use of the place value system
http://community.wolfram.com/web/community/groups/-/m/t/1334793
(2) Tables for addition and subtraction with better use of the place value system
http://community.wolfram.com/groups/-/m/t/1313408
PM. I noticed that attaching a notebook to a posting may cause the notebook to be transformed and included in the HTML of the posting. This happened in (2) above. Let me ask Staff not to do this. I do not want to sound ungrateful when someone has made an effort to make this happen, but it is better not to do so. An attachment is not the same as a posting: their titles differ, and the layouts differ. While I checked the notebook, I did not check the HTML transcription, and I might not want an HTML layout when I made an interactive notebook. While I updated the notebook at the original location, I have no access to the HTML that was created there, so it still gives the old text. It is okay to let bygones be bygones, but it would help to know for future submissions that attachments are such only, with perhaps another button that asks for conversion to HTML. It is unclear to me whether version management is a more general issue for these postings.

Thomas Colignatus 2018-05-17T20:50:31Z

Plot regional data from by-country indicators?
http://community.wolfram.com/groups/-/m/t/1337822
Hi,
I have a csv containing columns for countries, indicator name (e.g. population, fertility), and several columns for values on different years, such as a column for 1960, another for 1961, etc.
How do I create a ListPlot for regional data (grouping countries), taking the indicator of each country and calculating a weighted average by population?
Below the example
Any help would be highly appreciated as my table is huge and have to create dozens of similar charts.
s = Import[
"C:\\Users\\Jesus Enrique\\Documents\\Wolfram \
Desktop\\Sample1.csv"];

Enrique Vargas 2018-05-13T01:44:32Z
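In the absence of the actual file, here is a sketch of the kind of aggregation being asked about (all names, values, and the column layout are hypothetical): group rows by region and form a population-weighted mean of the indicator.

    (* hypothetical country-to-region lookup *)
    regionOf = <|"France" -> "Europe", "Germany" -> "Europe", "Peru" -> "Americas"|>;
    (* hypothetical rows for one indicator and one year: {country, value, population} *)
    rows = {{"France", 2.0, 67}, {"Germany", 1.5, 83}, {"Peru", 2.5, 33}};
    (* population-weighted average of the indicator per region *)
    byRegion = GroupBy[rows, regionOf[First[#]] &,
       Total[#[[All, 2]] #[[All, 3]]]/Total[#[[All, 3]]] &]

The resulting association can be fed to `ListPlot` (or `BarChart`) once the same reduction is mapped over the year columns.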