Community RSS Feed
https://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag, sorted by activity
Catch comet C/2020 F3 (NEOWISE), which is putting on quite a show
https://community.wolfram.com/groups/-/m/t/2024815
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/cf70227c-9327-43a4-b99a-2682df626bb4
Jeff Bryant (2020-07-12T02:07:28Z)
Sharing Mathematica notebooks with non-Mathematica colleagues?
https://community.wolfram.com/groups/-/m/t/2013396
What is your best recommendation for sharing Mathematica notebooks with non-Mathematica colleagues?
1. Printing to pdf often cuts off graphics and they are only partially contained in the file.
2. Latex export for a notebook produces a mess. Sometimes I see 1000s of nested Holds.
3. Sharing the nb directly and asking the colleague to download and install Wolfram Player. There is little documentation for Player from the user's perspective, which appears unprofessional. The differences between Player Pro and Player are not documented, so the benefits of the upgrade are unknown to both the colleague and me, and the cost of the upgrade is also hard to find out. Finally, if I use a private style sheet in a Mathematica notebook, Player is confused and has no way to fall back to the default style sheet, instead showing a mess.
4. With regard to 3: the CDF format appears to have been deprecated (made legacy) by Wolfram, but much of the documentation still refers to CDF Player. This confuses colleagues.
5. S. Wolfram talks about the computational essay, but more how-to documentation is required.
6. The cloud is not the answer when one wants to share Mathematica notebooks within a company and protect proprietary information.
Paul Nielan (2020-06-26T14:25:54Z)
Create Topographic KMZ Maps for Google Earth and Garmin GPS Receivers
https://community.wolfram.com/groups/-/m/t/2024639
&[BaseCampReadMe][1]
[1]: https://www.wolframcloud.com/obj/a56e5c89-d2c8-47f5-9b73-45b70df82067
Robert Rimmer (2020-07-11T18:37:33Z)
How to calculate with symbolic vectors?
https://community.wolfram.com/groups/-/m/t/2023839
I want to generate a system of differential equations for DSolve. The unknown function p[t] is a 2D-function with parameter t and coordinates {x[t], y[t]}. Think of the movement of a point in a plane.
In doing so, I encountered a general problem: as long as p is not represented by a list, I cannot tell WL that p is a vector rather than a scalar.
**I think there should be a symbol attribute indicating that a symbol is meant to be an n-dimensional vector (or an n×m matrix, etc.).**
See the following piece of code, where p0={p0x, p0y} is a fixed vector, and p shall be a general 2D-Vector. I want to calculate and use d(p) = Norm(p0 - p) within a larger program:
Block[{p0, p0x = 11, p0y = 12, p, x, y, d},
p0 = {p0x, p0y};
Print["p0 = ", p0, ", Attr(p0) = ", Attributes[p0], "\tp = ", p,
", Attr(p) = ", Attributes[p]];
d = Norm[p0 - p];
Print["1. d(p) = Norm({p0x,p0y} - p) = ", d];
p = {x, y}; Print["2. d(p=", p, ") = ", d,
{x, y} = {21, 22}; "\td(p=", p, ") = ", d];
Clear[x, y];
d = Norm[p0 - p];
Print["3. d(p=", p, ") = Norm({p0x,p0y} - {x,y}) = ", d,
{x, y} = {21, 22}; "\td(p=", p, ") = ", d];
];
p0 = {11,12}, Attr(p0) = {} p = p, Attr(p) = {}
1. d(p) = Norm({p0x,p0y} - p) = Sqrt[Abs[11-p]^2+Abs[12-p]^2]
2. d(p={x,y}) = {Sqrt[Abs[11-x]^2+Abs[12-x]^2],Sqrt[Abs[11-y]^2+Abs[12-y]^2]} d(p={21,22}) = {Sqrt[181],Sqrt[221]}
3. d(p={x,y}) = Norm({p0x,p0y} - {x,y}) = Sqrt[Abs[11-x]^2+Abs[12-y]^2] d(p={21,22}) = 10 Sqrt[2]
In the first step, I calculate Norm[p0 - p], which gives the correct result for a scalar p, but a wrong result for a vector p.
If in the second step I replace p by {x,y} within the result of the first step, I get the wrong result of course.
In the third step, I recalculate Norm[p0 - p], which is correct now.
Of course I can redefine d(p) by a function with set-delayed:
d[p_] := Norm[p0 - p]
Then everything works fine, provided I use d[p] instead of Norm[p0 - p] followed by replacement.
Block[{p0, p0x = 11, p0y = 12, p, x, y, d},
p0 = {p0x, p0y};
Print["p0 = ", p0, ", Attr(p0) = ", Attributes[p0], "\tp = ", p,
", Attr(p) = ", Attributes[p]];
d[p_] := Norm[p0 - p];
Print["1. d(p) = Norm({p0x,p0y} - p) = ", d[p]];
p = {x, y}; Print["2. d(p=", p, ") = Norm({p0x,p0y} - p] = ", d[p],
{x, y} = {21, 22}; "\td(p=", p, ") = ", d[p]];
Clear[x, y];
Print["3. d(p=", p, ") = Norm({p0x,p0y} - {x,y}) = ", d[p],
{x, y} = {21, 22}; "\td(p=", p, ") = ", d[p]];
];
p0 = {11,12}, Attr(p0) = {} p = p, Attr(p) = {}
1. d(p) = Norm({p0x,p0y} - p) = Sqrt[Abs[11-p]^2+Abs[12-p]^2]
2. d(p={x,y}) = Norm({p0x,p0y} - p] = Sqrt[Abs[11-x]^2+Abs[12-y]^2] d(p={21,22}) = 10 Sqrt[2]
3. d(p={x,y}) = Norm({p0x,p0y} - {x,y}) = Sqrt[Abs[11-x]^2+Abs[12-y]^2] d(p={21,22}) = 10 Sqrt[2]
The problem is that this doesn't help with my original problem: a differential equation for a vector-valued function p[t].
See the following code for a trivial constant-velocity movement in a plane, starting at some point p0:
Block[{p0 = {1, 2}, t, p, v = {2, 1}, eqs, sol},
eqs = {p'[t] == v, p[0] == p0};
Print["eqs = ", Column[eqs]];
sol = DSolve[eqs, p[t], {t, 0, 4}];
Print["sol = ", sol];
];
eqs = (p^\[Prime])[t]=={2,1}
p[0]=={1,2}
DSolve::nolist: List encountered within {(p^\[Prime])[t]=={2,1}}. There should be no lists on either side of the equations.
sol = DSolve[{(p^\[Prime])[t]=={2,1},p[0]=={1,2}},p[t],{t,0,4}]
Even though the boundary condition p[0]==p0 makes it clear that p is a vector, WL doesn't recognize p[t], and therefore p'[t], as vector-valued functions of a scalar t.
Since I cannot tell WL that p[t] is vector-valued, I am forced to switch to coordinate-based differential equations:
Block[{p0 = {1, 2}, t, p, x, y, v = {2, 1}, eqs, sol},
p[t_] = {x[t], y[t]};
eqs = {p'[t] == v, p[0] == p0};
Print["eqs = ", Column[eqs]];
sol = DSolve[eqs, p[t], {t, 0, 4}];
Print["sol = ", sol];
];
eqs = {(x^\[Prime])[t],(y^\[Prime])[t]}=={2,1}
{x[0],y[0]}=={1,2}
sol = {{x[t]->1+2 t,y[t]->2+t}}
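As a sanity check on that closed-form result, the same initial-value problem can be integrated numerically with p kept as a single vector state; here is a minimal plain-Python Euler sketch (not Wolfram code, just the vector-valued idea):

```python
# Plain-Python Euler sketch: integrate p'(t) = v with p(0) = p0, keeping
# p as one vector state instead of separate coordinate equations.
def euler_solve(f, p0, t0, t1, n):
    """Fixed-step Euler integration of p' = f(t, p)."""
    h = (t1 - t0) / n
    t, p = t0, list(p0)
    for _ in range(n):
        dp = f(t, p)
        p = [pi + h * dpi for pi, dpi in zip(p, dp)]
        t += h
    return p

v = (2.0, 1.0)
p = euler_solve(lambda t, q: v, p0=(1.0, 2.0), t0=0.0, t1=4.0, n=1000)
# matches the exact solution p(t) = (1 + 2 t, 2 + t) at t = 4, i.e. (9, 6)
```

For a constant right-hand side Euler is exact, which is why the endpoint agrees with the symbolic solution above.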
This seems like no big issue, but imagine p'[t] being a complicated expression in p[t] and t. You then have to switch to NDSolve, and you will get a list of scalar InterpolatingFunctions as the solution for {x[t], y[t]}. It would be much better to get one vector-valued InterpolatingFunction as the solution for p[t].
Werner Geiger (2020-07-10T18:10:09Z)
How to use TransformedRegion
https://community.wolfram.com/groups/-/m/t/2024295
Starting with a polygon from the countries data set, I am interested in building a 3D extrusion of that country polygon to assemble it in 3D with the other countries.
I get the polygon and extrude it using regions:
data = Entity["Country", "UnitedStates"]["Polygon"][[1, 1, 1]];
R = BoundaryMeshRegion[data, Line[Join[Range[Length[data]], {1}]] ];
R3 = RegionProduct[R, MeshRegion[{{0}, {1}}, Line[{1, 2}]]]
It looks as expected:
![a flat extruded version of the USA mainland][1]
Now I would like to map it onto the Earth, a sphere, changing the coordinates from {latitude, longitude, altitude} to {x, y, z}:
TransformedRegion[R3, (1 +
Indexed[#, 3]/10) {Cos[Indexed[#, 2] N[Degree]]*
Cos[Indexed[#, 1] N[Degree]], Sin[Indexed[#, 1] N[Degree]],
Sin[Indexed[#, 2] N[Degree]]*Cos[Indexed[#, 1] N[Degree]]} &]
But I only get
![enter image description here][2]
What am I missing?
Thank you,
Luc
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2020-07-11at6.37.13AM.png&userId=1791006
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2020-07-11at6.40.44AM.png&userId=1791006
Luc Barthelet (2020-07-11T13:42:04Z)
Solve not working
https://community.wolfram.com/groups/-/m/t/2024428
I am trying to solve the following equation for x, but it is not working at all. What am I doing wrong?
c1 = 1.6*10^(-33)
c2 = 5.23*10^(-27)
eqn = ExpandAll[(c2/2) + ((1/c1)*(((x^2)/4) + c1)^(1/2)) == Log[(x/2 - ((((x^2)/4) + c1)^(1/2)))/((((x^2)/4) + c1 - x/2)^(1/2))]]
Solve[eqn, x]
Neerav Kaushal (2020-07-11T05:46:18Z)
Unable to get image from Raspberry Pi High Quality Camera with Mathematica
https://community.wolfram.com/groups/-/m/t/2024277
Hi,
I'd like to capture and process video frames using the Raspberry Pi High Quality Camera with Mathematica on the Raspberry Pi 4 8GB.
The camera is connected to the camera port.
I am able to capture images and videos using the raspistill and raspivid commands, but not within Mathematica.
Output of $Version is:
12.0.1 for Linux ARM (32-bit) (June 23, 2019)
The following commands return no output, or fail with an error:
$ImagingDevices
Devices[]
FindDevices[]
ImageCapture[]
DeviceRead::raspiCamError: Error enabling camera-preview
Output of the Linux command:
v4l2-ctl --list-devices
is:
bcm2835-codec-decode (platform:bcm2835-codec):
/dev/video10
/dev/video11
/dev/video12
mmal service 16.1 (platform:bcm2835-v4l2):
/dev/video0
Thank you,
Ed.
Ed Saribatir (2020-07-11T10:09:36Z)
Gradient Origin Networks
https://community.wolfram.com/groups/-/m/t/2024226
In [this work][1], the authors expand on a recent paper about [representation networks with sine activation functions][2] and try to make such representations generate a latent space.
In a nutshell, they add extra dimensions to the input space and use the gradient of the loss function with respect to these extra parameters as a first approximation, which is then fed to a more global loss function.
So there are actually two loss functions, one calling the other and the network function F ends up being called twice in the global loss function.
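To make that two-stage structure concrete, here is a deliberately tiny pure-Python sketch (a linear toy f with a hand-computed gradient; this is an illustration only, not the paper's networks): the latent code is the negative gradient of the inner loss at the origin, and the outer loss applies f again to that latent.

```python
# Toy scalar sketch of the two-stage "gradient origin" loss.
# f is a linear toy model; the real method uses neural networks
# and automatic differentiation.
W, B = 0.5, 0.1

def f(z):
    # toy "network": a linear map
    return W * z + B

def inner_grad(y):
    # d/dz of the inner loss (f(z) - y)^2, evaluated at the origin z = 0
    return 2.0 * W * (f(0.0) - y)

def outer_loss(y):
    # the latent code is the negative inner gradient at the origin;
    # f enters twice: once inside inner_grad, once again here
    z = -inner_grad(y)
    return (f(z) - y) ** 2
```

The nesting is exactly the difficulty raised below: the outer loss contains a gradient of the inner loss, so the training framework has to differentiate through a gradient.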
Anyway, I'm not sure how I could implement this in Mathematica.
First, I would need to compute the gradient of the inner loss function.
Then I would have to write the global loss function and provide it to the NetTrain method, I suppose? Would Mathematica be able to automatically do the differentiation?
Here is my code to train on one single image, that is, training a SIREN without any attempt at implementing the Gradient Origin Network. It works well and quickly with such a tiny (28×28) image.
mnist = ResourceObject["MNIST"]
trainingData = ResourceData[mnist, "TrainingData"]
img = Keys[trainingData] // RandomChoice
linspace[n_] := Range[(1 - n)/2, (n - 1)/2];
grid[w_, h_] := Pi/2 Tuples[{linspace[h], linspace[w]}] // N;
input = grid @@ ImageDimensions[img]
output = Partition[Flatten[2 # - 1 &@ImageData@img], 1]
net = NetInitialize[
28 // NetChain[
{
#, Sin,
#, Sin,
#, Sin,
#, Sin,
1, Sin
},
"Input" -> 2
] &,
Method -> "Xavier"
];
net = NetTrain[net, (input -> output)]
Image@Partition[(# + 1)/2 &[net /@ input], 28]
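As an aside, the coordinate-grid helpers linspace and grid above translate almost line for line into other languages; here is a small Python mirror (the names are mine, kept parallel to the Wolfram code):

```python
import math
from itertools import product

# Python mirror of the grid helpers above: linspace(n) produces n values
# centred on zero in steps of 1, and grid(w, h) scales their pairs by Pi/2.
def linspace(n):
    # equivalent of Range[(1 - n)/2, (n - 1)/2]
    return [(1 - n) / 2 + k for k in range(n)]

def grid(w, h):
    # equivalent of Pi/2 Tuples[{linspace[h], linspace[w]}] // N
    return [(math.pi / 2 * a, math.pi / 2 * b)
            for a, b in product(linspace(h), linspace(w))]

coords = grid(28, 28)   # one centred coordinate pair per pixel, 784 in all
```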
How would I go about expanding this code to use the method described in the paper, so that I build a latent space representing the whole MNIST digit dataset?
[1]: https://cwkx.github.io/data/GON/
[2]: https://vsitzmann.github.io/siren/
Lucien Grondin (2020-07-11T08:12:25Z)
Fitting a very simple dataset
https://community.wolfram.com/groups/-/m/t/2023771
Hi all,
I am a beginner in Mathematica. I am trying to fit the data from a CSV file using FindFit.
In doing so, I am getting the following error message
FindFit::nrjnum: The Jacobian is not a matrix of real numbers at {a} = {1.}.
My CSV file can be found attached. I wonder what the right way of performing this task is. I would like to fit my data to something of the form:
y = 1/x^a
My attempt so far has been
data = Import["/home/data_variance2_r_final.csv"]
modeltwolevelsat = 1/x^a;
eq = FindFit[data, modeltwolevelsat, {a}, x]
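Independent of the import problem, note that y = 1/x^a is linear in log-log coordinates (log y = -a log x), which gives a quick way to sanity-check the fitted exponent. A pure-Python sketch on synthetic data (the CSV from the post isn't available here):

```python
import math

# The model y = 1/x^a is linear in log-log space: log y = -a log x,
# so a follows from least squares through the origin. This is a quick
# sanity check, not a replacement for FindFit.
def fit_exponent(data):
    pts = [(math.log(x), math.log(y)) for x, y in data if x > 0 and y > 0]
    num = sum(lx * ly for lx, ly in pts)
    den = sum(lx * lx for lx, _ in pts)
    return -num / den

true_a = 1.5
data = [(x, x ** -true_a) for x in (1.2, 2.0, 3.5, 5.0, 8.0)]
a = fit_exponent(data)   # recovers 1.5 on noise-free data
```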
I would appreciate if someone can help me out or give any tip on how to solve this problem.
Thanks in advance.
Heitor Silva (2020-07-10T15:20:03Z)
Is it the right plot?
https://community.wolfram.com/groups/-/m/t/2024138
The task is to display the graph of the amplitude versus time and of the velocity versus time. The velocity is the derivative of the coordinate, isn't it?
&[Wolfram Notebook][1]
In the plot, Жылдамдық means velocity and Амплитуда means amplitude.
And is it the right plot? :D
[1]: https://www.wolframcloud.com/obj/150b4a9f-d989-4104-b074-740bf3911cec
Sanzhar Mukatay (2020-07-10T19:10:11Z)
Fun with Perspective Anamorphism: a coke can, a coffee cup and Lady Liberty
https://community.wolfram.com/groups/-/m/t/1964116
![enter image description here][1]
We all know that what you see depends on your viewpoint or, as Mathematica says, the *ViewVector*. If you look with one eye or a camera at a 3D scene, you see a "[3D Perspective Projection][2]" of the scene. So what you see can be modified, fooled or embellished by modifying this projection.
This "3D perspective anamorphism" has been widely used in popular art forms. [3D sidewalk- and street art][3] are only some of the examples.
Mathematica has some very useful functions to explore the mathematics of 3D perspective anamorphism and to create some examples yourself. In this community contribution, I explore some of the many possibilities and hope to give inspiration to others...
Here is some perspective geometry: point A in the x-y plane is observed by the eye E as an image point I. The point I lies in the view plane of E, the plane perpendicular to the view line EO and anchored at O. The angle between the view line and the x-y plane is the view direction \[Phi]. The function *perspectivePoint* computes the perspective point A from the image point I (xi, yi), given the view direction phi. xE and xO are the abscissas of the points E and O.
![enter image description here][4]
perspectivePoint[imagePoint : {xi_, yi_}, viewDirection : phi_,
  eyeAbscissa : xE_, anchorAbscissa : xO_] :=
 Quiet[Module[{zE, eye, zO, ptO, imPlane, ptI, ray},
   zE = xE Tan[phi]; zO = xO Tan[phi]; ptO = {xO, 0, zO};
   imPlane = InfinitePlane[ptO, {{0, 1, 0}, {xO, 0, -xO Cot[phi]}}];
   ptI = {xO - yi Sin[phi], xi, zO + yi Cos[phi]}; eye = {xE, 0, zE};
   ray = HalfLine[{eye, ptI}];
   First[{x, y, z} /.
     NSolve[{{x, y, z} \[Element] ray, z == 0}, {x, y, z}]]]]
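For readers who want to check the geometry outside Mathematica, here is a pure-Python re-derivation of the same construction (a sketch following the conventions above; it returns only the x and y coordinates, since z is 0 by construction):

```python
import math

# Pure-Python version of perspectivePoint: build the eye E and the image
# point I in the tilted view plane, then intersect the ray E -> I with
# the ground plane z = 0.
def perspective_point(xi, yi, phi, xE, xO):
    zE = xE * math.tan(phi)               # eye E = (xE, 0, zE)
    zO = xO * math.tan(phi)               # anchor O = (xO, 0, zO)
    # image point I in the view plane anchored at O
    Ix = xO - yi * math.sin(phi)
    Iy = xi
    Iz = zO + yi * math.cos(phi)
    # ray E + t (I - E); solve for the parameter t where z = 0
    t = zE / (zE - Iz)
    return (xE + t * (Ix - xE), t * Iy)   # the eye has y = 0, so y = t * Iy

# The anchor point itself (xi = yi = 0) projects to the origin, because
# the view line E-O continues through (0, 0, 0).
origin = perspective_point(0.0, 0.0, 0.35, 10.0, 4.0)   # ~ (0.0, 0.0)
```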
![enter image description here][5]
Applying the function *perspectivePoint* to a unit square (the blue polygon), we see its projection in the x-y plane as the red polygon. Likewise, all image points within the blue polygon will be projected as points within the red polygon.
The following GIF shows the transformation of the red polygon into its blue projection as the ViewVector is directing the camera towards the right viewpoint.
![enter image description here][6]
To apply the function *perspectivePoint* to real images, we can use two methods:
1. Convert the image to graphics primitives using *ImageGraphics* and map the function *perspectivePoint* over all the coordinates of the resulting graphics objects.
2. Use the function *perspectivePoint* to create a set of corresponding pairs (image point / perspective point). We use this point set inside *FindGeometricTransform* to compute a *TransformationFunction*, which can then be used inside *ImagePerspectiveTransformation*.
We first start with a coke can, using method 1:
Module[{w, h, colaGC, colaGCScaled, colaGCOK},
{w, h} = ImageDimensions@cola; colaGC = ImageGraphics[cola][[1]];
colaGCScaled =
MapAt[# /. {x_?NumericQ,
y_?NumericQ} :> {y - h/2, x - w/2}/h*1.65 &, colaGC, 1];
colaGCOK =
MapAt[Most@perspectivePoint[#, .35, 10, 4] &,
colaGCScaled, {1, All}] /. {x_?NumericQ, y_, 0} :> {x, y};
Graphics[colaGCOK]]
![enter image description here][7]
The following GIF shows the effect of changing the camera position with *ViewVector* toward the right viewpoint:
![enter image description here][8]
After printing the output on paper, we can stage a mixed scene of anamorphic and real objects, creating a welcome confusion between the real and the perspective:
![enter image description here][9]
To use method 2 with a coffee cup, we first find the transformation function to be inserted into *ImagePerspectiveTransformation*. We use *FindGeometricTransform* with a set of points on a circle and their corresponding perspectives.
Block[{xv = -10, zv = 8, imagePoints, perspectivePoints},
imagePoints = Table[{ Cos[t], Sin[t]}, {t, 6}];
perspectivePoints = (Most[perspectivePoint[#1, .35, 10, 4]] &) /@
imagePoints;
tf = Last[FindGeometricTransform[imagePoints, perspectivePoints]]];
This is the result of our transformation function applied to a circle:
Row[{Image[
   g = Graphics[{AbsoluteThickness[5],
      Line[Table[{2.68 Cos[t], 2.68 Sin[t]}, {t, 0, 6.3, .1}]]}]],
  ImagePerspectiveTransformation[g, tf,
   PlotRange -> {{-1, 1}, {-1, 1}}, DataRange -> {{-1, 1}, {-1, 1}},
   Padding -> "Fixed", Masking -> Full]}]
![enter image description here][10]
And this is how the transformation function performs on a photo of a coffee cup.
![enter image description here][11]
As the following movie shows, an unexpected scene can be created by mixing real world objects with a printout of our anamorphic coffee cup. What you see is not always what you get!
![enter image description here][12]
Perspective anamorphism means that only one viewpoint gives the correct image. An extreme example of this is the "[Living Photographs][13]" made in the US by [Arthur Mole][14] during the First World War. Using tens of thousands of soldiers positioned in a deformed but well-designed layout, he made photographs from the right vantage point on top of a tower. Using our function *perspectivePoint*, we try to simulate the setup of his "[Human Statue of Liberty][15]" photograph from 1918.
![enter image description here][16]
The right picture shows the view from above and shows the positions of the 18,000 (!) soldiers. The middle picture is the photo of the same soldiers made by a camera on top of a 25 meter high tower. According to the website "[Human Statue of Liberty][17]" by [David Mikkelson][18], these were the numbers of soldiers in each part of the scene:
Number of men in the flame of torch: 12,000
Number of men in the torch: 2,800
Number of men in the right arm: 1,200
Number of men in the body, head and balance of the figure only: 2,000
I used the following *Manipulate* to get an idea of the surface area distribution within the anamorphic image and this way, approximate the settings for the function *perspectivePoint*. Starting from the 16 equal squares (left) we see 16 polygons (right) with a surface area distribution close to that of the number of soldiers in the original 1918 layout.
Module[{verticesI, verticesQ, areasQ},
verticesI =
Table[{{-1 + i, -1 + j}, {-.5 + i, -1 + j}, {-.5 + i, -.5 +
j}, {-1 + i, -.5 + j}}, {i, 0, 1.5, .5}, {j, 0, 1.5, .5}];
verticesQ =
Map[Most[perspectivePoint[#1, 0.35, 10, 4]] &, verticesI, {3}];
areasQ = First[Map[Area[Polygon[#1]] &, verticesQ, {2}]];
Column[{areasQ/First[areasQ],
GraphicsRow[{Graphics[{{FaceForm[], EdgeForm[Black],
Polygon /@ (.75 verticesI)}}],
Graphics[{{FaceForm[], EdgeForm[Black],
Map[Polygon, verticesQ, {2}]}}]}]}]]
![enter image description here][19]
Together with other dimensions gathered from the internet, we could make a close estimate for the settings of our *perspectivePoint* function.
![enter image description here][20]
Above is the result of this "investigation": to the left is the historic aerial view of the 18,000 servicemen, in the middle is the perspective transformation by our function *perspectivePoint* using the original postcard by Arthur Mole, and to the right is the same transformation using a modern photograph of the Statue of Liberty. The human statue of liberty, understandably, did not have exactly the same proportions as the real Lady Liberty.
I hope these experiments entice community members to explore the astonishing world of anamorphic and perspective art. Have fun!
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=intropic.png&userId=68637
[2]: https://en.wikipedia.org/wiki/3D_projection#Perspective_projection
[3]: https://www.instructables.com/id/How-to-do-3D-Anamorphic-Artwork-and-Sidewalk-Stre/
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2547ScreenShotpoint.png&userId=68637
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShotsquarecone.png&userId=68637
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1382rectangleviewanimation.gif&userId=68637
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=10170colacansduo.png&userId=68637
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=rotatingviewcola.gif&userId=68637
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=10732colaCanmixedrealty.png&userId=68637
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=6893circletransform.png&userId=68637
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2577coffeecuptransform.png&userId=68637
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=coffeeCupoptimzed.gif&userId=68637
[13]: https://web.archive.org/web/20140102194340/http://www.punjabiportal.com/articles/art-of-living-photographs-arthur-mole-and-john-thomas
[14]: https://en.wikipedia.org/wiki/Arthur_Mole
[15]: https://www.snopes.com/fact-check/human-statue-of-liberty/
[16]: https://community.wolfram.com//c/portal/getImageAttachment?filename=107463croppedphotos.png&userId=68637
[17]: https://www.snopes.com/fact-check/human-statue-of-liberty/
[18]: https://www.snopes.com/author/snopes/
[19]: https://community.wolfram.com//c/portal/getImageAttachment?filename=9055sizecomparison.png&userId=68637
[20]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3perspectiveliberty.png&userId=68637
Erik Mahieu (2020-05-04T09:21:22Z)
Displaying 27 lines on a Clebsch surface
https://community.wolfram.com/groups/-/m/t/2020517
Hi all,
I was able to follow another example posted somewhere on these forums to get Mathematica to render a Clebsch surface using the following:
clebsch[x_, y_, z_] :=
81 (x^3 + y^3 + z^3) - 9 (x^2 + y^2 + z^2) -
189 (x^2 y + x^2 z + x y^2 + x z^2 + y^2 z + y z^2) + 54 x y z -
9 (x + y + z) + 126 (x y + x z + y z) - 1
ContourPlot3D[clebsch[x, y, z] == 0, {x, -2, 2}, {y, -2, 2}, {z, -2, 2}]
But I was wondering whether it is possible to draw the 27 lines present on any smooth cubic surface, as here: https://blogs.ams.org/visualinsight/2016/02/15/27-lines-on-a-cubic-surface/
Furthermore, if those lines can be drawn, is it possible to export this to a 3D file?
Any help would be greatly appreciated!
Many thanks.
Alex Turner (2020-07-06T14:38:09Z)
[Solved] CloudDeploy of an APIFunction not working properly
https://community.wolfram.com/groups/-/m/t/2023716
My code is as follows:
apiFunc[opt_, f_] := Switch[
opt,
"derivative", FullForm[Collect[ D[f, x], x, FullSimplify]],
"integrate", FullForm[Collect[ Integrate[f, x], x, FullSimplify]],
"nabla", FullForm[Collect[(D[#,x]&) /@ f, x, FullSimplify]],
_, "invalid option"
];
func = APIFunction[{"opt"->"String","f"->"Expression"}, apiFunc[#opt,#f] &];
api = CloudDeploy[func]
When I run `apiFunc["nabla", List[ArcSin[x]]]` on the notebook, it gives the desired result:
List[Power[Plus[1,Times[-1,Power[x,2]]],Rational[-1,2]]]
However, when I get the response from the web api, it gives
FullForm[{1/Sqrt[1 - x^2]}]
Why are they different? Is there any way to get the desired result on the web api?
I have the basic plan, though. Does it matter?
Keonwoo Kim (2020-07-10T14:40:43Z)
RulePlot[WolframModel[ ]] wrong rule representation
https://community.wolfram.com/groups/-/m/t/2023615
Is the left-hand side of the rule in the plot correct?
RulePlot[ResourceFunction["WolframModel"][{{x, y}, {y, z}} -> {{x, y}, {y, x}, {w, x}, {w,z}}],VertexLabels -> Automatic, "RulePartsAspectRatio" -> 0.5]
![enter image description here][1]
I would expect:
![enter image description here][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=download.png&userId=1994579
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Aantekening2020-07-10170022.png&userId=1994579
Chris Van Damme (2020-07-10T15:02:22Z)
Integration steps in TeX from command line
https://community.wolfram.com/groups/-/m/t/2023332
Hi, I am trying to get step-by-step integration output in a proper format from the command line.
I can get it in plain text with:
WolframAlpha["int x*exp(2*x)", {{"IndefiniteIntegral", 2}, "Plaintext"},
PodStates -> {"IndefiniteIntegral__Step-by-step solution"}]
but I want it in a better format such as LaTeX. Calling the command
ExportString[
WolframAlpha["int x*exp(2*x)", {{"IndefiniteIntegral", 2}, "Plaintext"},
PodStates -> {"IndefiniteIntegral__Step-by-step solution"}],
"TeX"]
does not produce sensible output. Also, replacing the "Plaintext" term with "Cell" or "Content" does not work.
If I want only the result, the command
ExportString[
WolframAlpha["int x*exp(2*x)", {{"IndefiniteIntegral", 1}, "Cell"}],
"TEX"]
works well.
How can I adapt this for integration steps?
Thanks.
Süleyman Öğrekçi (2020-07-10T07:54:11Z)
[WSC19] Classifying Fiction and Non-Fiction Works Using Machine Learning
https://community.wolfram.com/groups/-/m/t/1731958
## Abstract ##
The objective of this project was to create a program that can determine whether an unknown text is a work of fiction or non-fiction using machine learning. I utilized various datasets of speeches, ebooks, poems, scientific papers, and more texts from Project Gutenberg and the Wolfram Example Data to train and test a Markov Chain machine learning model. I deployed a microsite with my final product that returns a probability of fictionality based on input from the user with 95% accuracy.
## Introduction ##
The majority of literature and texts can be classified into fiction and non-fiction. Fiction is defined as the class of literature comprising works of imaginative narration, especially in prose form, while non-fiction is defined as all content discussing real events and facts. Although most of the time the class of a work is fairly obvious due to fantastical elements and writing style, there are many cases where the class is unclear because of the various types of fiction and non-fiction and the complexities of the boundaries between them. The goal of this project was to create a classifier that can accurately understand those boundaries in order to classify an unknown work.
## Obtaining and Processing Data ##
I imported 300 full .txt files of various speeches, ebooks, poems, essays, scientific papers, and other texts from the public dataset for Project Gutenberg. However, this data was not tagged and sorted into Fiction and Non-Fiction, so I had to manually assign classes to each of the files based on research about the works. I made a dataset containing the name, class, and full text of each file.
datasetGutFull =
  Dataset[Table[<|"FileName" -> filenames[[n]],
     "Nonfiction" -> classGut[[n]],
     "FullText" -> DataGutAll[[n]]|>, {n, 1, Length[filenames]}]];
![Dataset][1]
I also added various texts from ExampleData to my data. To do this, I manually classified the texts and filtered out all the ExampleData texts that were religious or not in English.
## Partitioning ##
In order to increase the efficiency of processing and prepare for training, I made a function to partition each text into sections of 5000 characters each and associate the sections with their corresponding class.
partition[text_, class_] :=
With[{a = StringPartition[text, 5000]},
Thread[a -> Table[class, {Length[a]}]]
];
textsEx =
  Table[ExampleData[{"Text", filenamesExEng[[n]]}], {n, 1,
    Length[filenamesExEng]}];
datasetGutPart =
  Flatten[Table[
    partition[datasetGutFull[n, "FullText"],
     datasetGutFull[n, "Nonfiction"]], {n, 1, Length[datasetGutFull]}]]
dataExamplePart =
  Flatten[Table[
    partition[textsEx[[n]], classEx[[n]]], {n, 1, Length[textsEx]}]];
datasetGutAndEx = Join[datasetGutPart, dataExamplePart];
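The partitioning step itself is language-agnostic; here is a minimal Python sketch of the same chunk-and-label idea (illustrative names, and like StringPartition it keeps only complete chunks):

```python
# Split each text into fixed-size pieces and pair every piece with its
# class label, mirroring the Wolfram partition helper above.
CHUNK = 5000

def partition(text, label, size=CHUNK):
    # only complete, non-overlapping chunks are kept, as with StringPartition
    chunks = [text[i:i + size] for i in range(0, len(text) - size + 1, size)]
    return [(c, label) for c in chunks]

sample = "x" * 12000
pairs = partition(sample, "Fiction")   # two complete 5000-character chunks
```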
## Training the Machine Learning Model ##
After much experimentation with various types of machine learning such as Neural Networks, Random Forest Models, and Linear Regression Models, I found that the Markov Method of machine learning yielded the highest accuracy of approximately 95%.
classifyMarkov300Gut =
Classify[Table[
datasetGutAndEx[[n]], {n, 1, Length[datasetGutAndEx]}],
Method -> "Markov"]
![Markov Information][2]
## Deploying the Model ##
I made a function called finalClass that averages the probabilities for Fiction and Non-Fiction across the sections of a given text and returns a final probability and class. Then, I used CloudDeploy to create a microsite that takes a text from the user and returns the probability that it is Fiction. This site can be accessed at https://www.wolframcloud.com/obj/rachgupta200/FictionvsNonFiction.
finalClass[text_String] :=
Mean[classifyMarkov300Gut[StringPartition[text, 500],
"Probabilities"]]*100
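The averaging that finalClass performs can be sketched generically: each chunk contributes one probability per class, and the result is the per-class mean scaled to a percentage (Python, illustrative names):

```python
# Average per-chunk class probabilities and scale to percent,
# mirroring Mean[...] * 100 in finalClass above.
def final_class(chunk_probs):
    classes = chunk_probs[0].keys()
    n = len(chunk_probs)
    return {c: 100.0 * sum(p[c] for p in chunk_probs) / n for c in classes}

probs = [{"Fiction": 0.9, "Nonfiction": 0.1},
         {"Fiction": 0.7, "Nonfiction": 0.3}]
final = final_class(probs)   # {"Fiction": 80.0, "Nonfiction": 20.0}
```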
CloudDeploy[
FormFunction[{"Text" -> "TextArea"}, finalClass[#Text] &,
AppearanceRules -> <|
"Title" ->
"Classifying Fiction And Non-Fiction Works Using Machine \
Learning",
"Description" ->
"This project was completed during the Wolfram High School \
Summer Camp by Rachna Gupta using Wolfram Language in Mathematica. A \
Machine Learning Markov Model was trained to determine the \
fictionality of an unknown work using various work from the Project \
Gutenberg public dataset and data from the Wolfram Example Data. \
Please input a passage of more than 500 characters of the work to be \
classified in the field below. The result will be a list of \
probabilities that the work is fiction or non-fiction as computed by \
the model. As of right now, this model is approximately 95% \
accurate."|>, PageTheme -> "Blue"], "FictionvsNonFiction",
Permissions -> "Public"]
![CloudDeploy Result][3]
## Conclusion and Future Work ##
Through this project, I successfully created a model that can accurately classify fiction and non-fiction texts. The classifier takes an input of any English text greater than 500 characters and gives an output of the probability that the given work is fiction or non-fiction. The working project can be found at this link https://www.wolframcloud.com/obj/rachgupta200/FictionvsNonFiction. In the future, I would like to explore the possibility of using more data from the Wolfram Data Repository to train the model and increase the accuracy. I would also like to explore making the classifier more specific, such as identifying the genre of the work and mapping the "fictionality" of the work on a scale. For example, historical fiction would be closer to non-fiction on the scale than fantasy or magical realism.
## Acknowledgements ##
I would like to thank my wonderful mentor, Sylvia Haas, as well as all the other mentors from the Wolfram Summer Camp, for guiding me and helping me with this project.
Github Link: https://github.com/rachgupta/WSS-Template
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Dataset.PNG&userId=1724999
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=MachineLearning.PNG&userId=1724999
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=PresentationPhoto.PNG&userId=1724999
Rachna Gupta (2019-07-11T22:37:31Z)
Disable invoking ExternalEvaluate with > shortcut
https://community.wolfram.com/groups/-/m/t/2022355
I want to disable the behavior associated with > in Mathematica 12+.
There is now a shortcut to start a Python session:
*Use > at the beginning of a line to start an external code cell evaluated with ExternalEvaluate*
This "hijacks" the intended input and annoyingly puts it in a Python cell when I type a > to modify a line. How can I turn this off?
John Bechhoefer (2020-07-09T00:06:39Z)
Grid with horizontal and vertical backgrounds of different colors
https://community.wolfram.com/groups/-/m/t/2021184
Hello
I want to define a grid such that the first row has a horizontal background of a color different from the following rows, which have vertical backgrounds of alternating colors.
Extrapolating from the Mathematica help, I nearly managed to get what I want, except for the first row, which I would like to have a yellow horizontal background. Here is my code:
data = {{"first", 1, a, aa}, {"second", 11, b, bb}, {"third", 111, c,
    cc}}; (* the number of data rows may vary *)
Grid[Prepend[data, {"My Data", SpanFromLeft}],
Background -> {{{LightBlue, LightRed}}, {1, 1} -> Yellow},
Frame -> All]
What is the correct way to get that first row in a different color from the other two alternating ones?
Thanksjan potocki2020-07-07T21:21:52ZRunning Wolfram Mathematica inside Jupyter Notebook in Google Cloud + GPUs
https://community.wolfram.com/groups/-/m/t/1884517
I am a Data Scientist working at a startup in Brazil. Recently I was working with social networks in Python, using the networkx library to analyse posts from a given hashtag for our client. At the same time, I was well aware of the additional functionality of Wolfram Mathematica. Take the following picture, made with networkx for 100,000 people, as an example:
![Social Network Emotions Connections][1]
It's an interesting perspective, but it adds little value for generating strategic insights for our clients, given that we need to zoom in to analyze the details of the network. In my previous experience with Wolfram Mathematica, I was able to use reasoning from cellular automata interactions to map and track people of interest (blue circle) and watch the evolution of mood (color) and number of connections (circle size) in a social network:
![Evolution of Mood][2]
Besides, one can use Mathematica's features for finding communities, coloring them, and even highlighting people of interest when passing the mouse over the social network, with their names spoken aloud:
![Communities][3]
More than that, you can highlight communities and check the number of connections of each individual:
![Communities Two][4]
So, it's clear that Wolfram Mathematica is way ahead of networkx. However, I was dealing with a drawback: if you choose to plot more than 10,000 connections in Mathematica, that can take some time. And time is what you don't have in a startup. So, I started wondering how I could use GPUs with Wolfram Mathematica. One way is to use webMathematica, installing Java and Apache Tomcat in a cloud instance. However, I was told one can also use the Wolfram Client Library for Python and run Mathematica in a Python notebook.
So, I will present how I was able to run Mathematica inside a Jupyter notebook located in a Google Cloud instance with 8 V100 GPUs. My co-worker, Gustavo Gouvea, also put effort into this solution.
First of all, go to Google Cloud Platform (GCP) Compute Engine and select Create Instance in a given region. Choose your CPUs, memory, and GPUs; for the boot disk, I used Debian GNU/Linux with Anaconda, PyTorch, and CUDA already installed, as we also work with deep learning and NLP.
After that, you need to configure Jupyter in order to be able to open it in the local browser, by doing the following:
ipython
from IPython.lib import passwd
passwd()
Now that you have created the password, save the generated hash, "sha1:49b8799c22..."
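For the curious, passwd() is essentially just salting and hashing your password. Here is a minimal pure-Python sketch of the classic sha1:salt:digest shape it produces (illustrative only, not the exact IPython implementation; the salt here is a made-up constant, whereas the real helper generates a random one):

```python
import hashlib

def notebook_passwd(password, salt="12ab"):
    """Return an 'algorithm:salt:digest' string in the shape Jupyter stores
    in c.NotebookApp.password (illustrative sketch with a fixed salt)."""
    digest = hashlib.sha1((password + salt).encode("utf-8")).hexdigest()
    return "sha1:{}:{}".format(salt, digest)

print(notebook_passwd("secret"))
```

Whatever passwd() actually prints on your machine is the string you paste into the configuration file.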
Then edit the Jupyter configuration file (use sudo, or chmod -R 777 /home/anaconda3):
sudo vi ~/.jupyter/jupyter_notebook_config.py
Add (type "i" to enter insert mode) these lines of code:
c = get_config()
c.NotebookApp.password = 'sha1:49b8799c22...'  # paste your sha1 hash here
c.IPKernelApp.pylab = 'inline'
c.NotebookApp.ip = '0.0.0.0'  # or '*'
c.NotebookApp.open_browser = False
Go to GCP VPC and create a static IP for your instance, then SSH into it.
So, to start Jupyter notebook you will need to run:
jupyter notebook
![Jupyter][5]
Now you will access Jupyter in the following address:
http://55.100.30.200:8888/
![Jupyter 2][6]
That will allow you to run Python notebooks in Jupyter. Now for Mathematica. Get the Mathematica .sh installer:
sudo wget link_to_mathematica.sh
To install Mathematica run in another SSH session (terminal window):
sudo bash link_to_mathematica.sh
Ok, you installed Mathematica. Now run:
math
This command will ask for your activation key and password. After providing them, the Mathematica kernel starts in the command line.
![Mathematica running][7]
**Important:** You must run Mathematica ("math") **before** you open Jupyter Notebook.
Now that you've opened Mathematica in the command line in one SSH session, and after that Jupyter Notebook in another, access the web address on port 8888, create a new Python 3 notebook, and install the wolframclient library:
![pip install][8]
Now you can run your Mathematica scripts with the backing of as many GPUs as you need:
![NVIDIA-SMI][9]
from wolframclient.language import wl as w1
from wolframclient.language import wlexpr
from wolframclient.evaluation import WolframLanguageSession
session = WolframLanguageSession('/usr/local/Wolfram/Mathematica/12.0/Executables/MathKernel')
import numpy as np
import imageio
import matplotlib.pyplot as plt
from PIL import Image
![Output][10]
A bird's-eye view:
![Overview][11]
![enter image description here][12]
The only caveat is that you won't have access to all Mathematica functionality, such as Manipulate, mouseover, and the display of some images when output is truncated. My guess is that the init.m config file must be adjusted to increase the output size before truncation (in the command line):
![GraphPlot][13]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Easter0.PNG&userId=992056
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=0.jpeg&userId=992056
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=01.png&userId=992056
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=02.png&userId=992056
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2918jupy.png&userId=992056
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=jupyter.png&userId=992056
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=pp.png&userId=992056
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=pip.png&userId=992056
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=nvidia.png&userId=992056
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=py.png&userId=992056
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=bird.png&userId=992056
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=sss.png&userId=992056
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=issues.png&userId=992056Rubens Zimbres2020-02-21T00:39:03Z[NB] Logistic Model for Quarantine Controlled Epidemics
https://community.wolfram.com/groups/-/m/t/1900530
*MODERATOR NOTE: coronavirus resources & updates:* https://wolfr.am/coronavirus
&[LogisticModelEpidemics][1]
[1]: https://www.wolframcloud.com/obj/wolfram-community/Published/LogisticGrowthRobert Rimmer2020-03-18T01:29:03ZWhat do black holes and the accretion disks around them actually look like?
https://community.wolfram.com/groups/-/m/t/2017849
#Introduction
One of the most thrilling parts of Nolan's movie "Interstellar" is the black hole Gargantua and its accretion disk. Its weird shape has certainly attracted lots of attention. But do black holes and their accretion disks actually look like that? Not exactly.
![BH and accretion disk in Interstellar][1]
In the paper "Gravitational lensing by spinning black holes in astrophysics, and in the movie Interstellar", the authors say that in the movie, redshift and blueshift, as well as intensity differences, are ignored to avoid confusing the audience. So, although the outline of the accretion disk in "Interstellar" is accurate, the color and the intensity are not. Furthermore, even in the paper, effects brought about by the time delay in the propagation of light are ignored, and the influence of gravitational lensing on light intensity is simplified.
Though I cannot easily render a spinning (Kerr) black hole, what I can do is try to render an accretion disk around a Schwarzschild black hole, as physically accurately as possible. The result looks like this (observer static at 6 times the Schwarzschild radius):
![my rendering result, static observer][2]
I strongly recommend watching the videos at [Bilibili](https://www.bilibili.com/video/BV1Fp4y1S7EF) or [YouTube](https://www.youtube.com/watch?v=Dux1NkTaqwo) (both have English subtitles) first to get an impression, and it would be best if you could click the upvote button XD. After that, if you would like to know more about the Mathematica implementation and the physical principles behind the scenes, please continue.
**A disclaimer first:** I know nothing about accretion disk physics, so the properties of the accretion disk are set arbitrarily. Furthermore, real accretion disks are blazingly bright, and you would be blinded instantly if you looked at one from a short distance, so I have to make some modifications.
#Analytics
###Physics Perspective
First, we need to analyze this problem from the physics perspective and identify the effects we should consider. **For observers**, the intensity of light is determined by how many photons reach the eye within a certain angular range, and the color is determined by the spectrum of the light. However, the orientation, spectrum, and intensity of light beams are influenced by the observer's motion, so we have to account for that. Naturally, the next question is: where does the light come from? All the light must have come from some **light-emitting material**, so we have to consider that material's properties and motion. But the **light must travel** for some time and distance before reaching the observer's eye. This involves tracing the light from the emitter to the observer to determine the direction of the light the observer perceives, as well as what portion of the light can reach the observer's eyes. We have now listed all the effects, from emission to perception, that could influence our view, so I believe this rendering is "physically accurate".
### Programming Perspective
Viewed from the programming perspective, however, the zeroth problem is how light travels around a black hole: we need the light paths to calculate all the other effects. Then, based on the light path, we can directly compute the equivalent of the "brighter when closer" rule, as well as the time delay between light emission and observation. Combined with the motion of the light-emitting material and the observer, this lets us compute the redshift and the blueshift.
#Details, Theory, Coding and Results
Now let's assume that we are stationary observers viewing from 6 times the Schwarzschild radius.
##Ray Tracing
The first problem to solve is tracing the light beams. Light bends around black holes, following geodesics, and the most apparent consequence is that the accretion disk we see is not confined to a plane, but appears curved and bent. Fortunately for us, because Schwarzschild black holes are spherically symmetric, we can reduce the problem to 2D. The parametric equations of geodesics around a Schwarzschild black hole can be derived as follows:
$$
\left\{
\begin{aligned}
t''(\lambda)&=\frac{R_s r' t'}{R_s r-r^2}\\
r''(\lambda)&=\frac{-R_s r^2 r'^2-2r^3\theta'^2(R_s-r)^2+R_s(R_s-r)^2t'^2}{2r^3(R_s-r)}\\
\theta''(\lambda)&=-\frac{2r'\theta'}{r}
\end{aligned}
\right.
$$
Where $\lambda$ is the ray parameter.
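Before handing the system to ParametricNDSolveValue below, it can be sanity-checked in any language. Here is a hedged pure-Python RK4 sketch of the same ODEs (the launch angle of 30°, the step size, and the integration span are arbitrary choices; the initial-condition formulas match the Mathematica code that follows), verifying that the photon's conserved quantities $E=(1-R_s/r)\,t'$ and $L=r^2\theta'$ stay constant along the ray:

```python
import math

Rs = 1.0  # Schwarzschild radius (geometric units)

def geodesic_rhs(state):
    """Right-hand side of the geodesic equations above.
    state = (t, r, th, t', r', th'), primes = d/dlambda."""
    t, r, th, tp, rp, thp = state
    tpp = Rs * rp * tp / (Rs * r - r * r)
    rpp = (-Rs * r**2 * rp**2
           - 2 * r**3 * thp**2 * (Rs - r)**2
           + Rs * (Rs - r)**2 * tp**2) / (2 * r**3 * (Rs - r))
    thpp = -2.0 * rp * thp / r
    return (tp, rp, thp, tpp, rpp, thpp)

def rk4_step(state, h):
    def nudge(s, k, c):  # s + c*k, componentwise
        return tuple(x + c * y for x, y in zip(s, k))
    k1 = geodesic_rhs(state)
    k2 = geodesic_rhs(nudge(state, k1, h / 2))
    k3 = geodesic_rhs(nudge(state, k2, h / 2))
    k4 = geodesic_rhs(nudge(state, k3, h))
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Photon launched inward from r = 6 Rs, 30 degrees off the radial direction
R0, th0 = 6.0, math.radians(30)
state = (0.0, R0, 0.0,
         1 / (1 - Rs / R0),                                  # t'(0)
         -math.cos(th0),                                     # r'(0)
         math.sqrt(1 / (1 - Rs / R0)) / R0 * math.sin(th0))  # theta'(0)
E0 = (1 - Rs / state[1]) * state[3]   # conserved photon "energy"
L0 = state[1] ** 2 * state[5]         # conserved angular momentum
for _ in range(2000):
    state = rk4_step(state, 1e-3)
E1 = (1 - Rs / state[1]) * state[3]
L1 = state[1] ** 2 * state[5]
print(E0 - E1, L0 - L1, state[1])
```

Both drifts come out at round-off level, which is a useful cross-check that the equations were transcribed correctly.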
Now we construct a set of rays originating from the observer and trace them backward:
![Possible light paths][3]
![Definition of variables][4]
![How interpolation works][5]
On each ray, we take some sample points and record the corresponding angles $\theta_0$, $\theta_1$, and the time $\Delta T$. By interpolating these, we know what an arbitrary object will look like to our eyes.
(*Initial definitions*)
Rs = 1;
R0 = 6 Rs;
Rmax = 6 Rs + 1.*^-6;
(*Tracking the light*)
parfunc =
ParametricNDSolveValue[{{tt''[\[Tau]],
rr''[\[Tau]], \[Theta]\[Theta]''[\[Tau]]} == {(
Derivative[1][rr][\[Tau]] Derivative[1][tt][\[Tau]])/(
rr[\[Tau]] - rr[\[Tau]]^2), (
rr[\[Tau]]^2 Derivative[1][
rr][\[Tau]]^2 - (-1 + rr[\[Tau]])^2 Derivative[1][
tt][\[Tau]]^2)/(
2 (-1 + rr[\[Tau]]) rr[\[Tau]]^3) + (-1 +
rr[\[Tau]]) Derivative[1][\[Theta]\[Theta]][\[Tau]]^2, -((
2 Derivative[1][rr][\[Tau]] Derivative[
1][\[Theta]\[Theta]][\[Tau]])/rr[\[Tau]])}, {tt'[0],
rr'[0], \[Theta]\[Theta]'[
0]} == {1/(1 - Rs/R0), -Cos[\[Theta]0],
Sqrt[1/(1 - Rs/R0)]/R0 Sin[\[Theta]0]}, {tt[0],
rr[0], \[Theta]\[Theta][0]} == {0, R0, 0},
WhenEvent[
1.01 Rs >= rr[\[Tau]] ||
rr[\[Tau]] >= Rmax || \[Theta]\[Theta][\[Tau]] >= 3.1 Pi,
"StopIntegration"]}, {tt[\[Tau]],
rr[\[Tau]], \[Theta]\[Theta][\[Tau]]}, {\[Tau], 0,
1000}, {\[Theta]0}];
(*data used in interpolation*)
datp = Catenate@
Table[With[{pf = parfunc[\[Theta]]},
With[{\[Tau]max = pf[[1, 0, 1, 1, 2]], df = D[Rest@pf, \[Tau]],
f = Rest@pf},
Block[{\[Tau] =
Range[RandomReal[{0, \[Tau]max/500}], \[Tau]max, \[Tau]max/
500]}, Select[
Thread[(Thread@f ->
Thread@{\[Theta],
ArcTan[-df[[1]], df[[2]] f[[1]] Sqrt[1 - Rs/f[[1]]]],
pf[[1]]})],
2.4 Rs < #[[1, 1]] < 5.6 Rs && -0.05 Pi < #[[1, 1]] <
3.08 Pi &]]]], {\[Theta],
Range[-2.5 Degree, 80 Degree, 1 Degree]~Join~
Range[20.2 Degree, 28.2 Degree, 0.5 Degree]~Join~
Range[23.025 Degree, 24.05 Degree, 0.05 Degree]~Join~
Range[23.2825 Degree, 23.4 Degree, 0.005 Degree]~Join~
Range[23.28525 Degree, 23.30025 Degree, 0.001 Degree]}];
datp = First /@ GatherBy[datp, Floor[#[[1]]/{0.01 Rs, 1 Degree}] &];
(*Construct InterpolatingFunctions*)
ReceiveAngleFunction =
Interpolation[Thread[{datp[[;; , 1]], datp[[;; , 2, 1]]}],
InterpolationOrder -> 1];
EmitAngleFunction =
Interpolation[Thread[{datp[[;; , 1]], datp[[;; , 2, 2]]}],
InterpolationOrder -> 1];
DelayFunction =
Interpolation[Thread[{datp[[;; , 1]], datp[[;; , 2, 3]]}],
InterpolationOrder -> 1];
(*Angle vs time of observation*)
GenerateAngleFunctions[R1_, \[Theta]1_] := Block[{\[Phi]1},
With[{interpol =
Interpolation@
Table[{DelayFunction[R1, #] +
Sqrt[2 R1^3] \[Phi]1, \[Phi]1}, {\[Phi]1, 0., 2 Pi,
10. Degree}]},
With[{ts = interpol[[1, 1, 1]], tperiod = 2. Pi Sqrt[2 R1^3]},
Function[t, interpol[t - Floor[t - ts, tperiod]]]]] & /@ ({#,
2 Pi - #, 2 Pi + #} &@ArcCos[Sin[\[Phi]1] Sin[\[Theta]1]])]
If we only consider this effect, then we will have something like this:
![Light bending included][6]
The inner two rings correspond to light that travels more than half an orbit around the black hole.
And if we consider the propagation time of light, the right side will be a bit brighter.
![Time correction included][7]
This is because objects on the right are moving away from you, so from your point of view, particles linger longer on the right side. (The reason is explained in the figure.)
![Illustration of how velocity influence timing][8]
##"Brighter when Closer"
The next question is the "brighter when closer" rule. We all know that the farther away a bulb is, the dimmer it appears. This is because the light from the bulb is approximately evenly distributed across solid angles, but as we move farther away, the solid angle corresponding to our eyes becomes smaller. Mathematically, this says $I\propto S_0 \frac{d\Omega}{dS}$, where $S_0$ is the surface area of our eyes, $S$ is area, and $\Omega$ is the solid angle.
!["Brighter when closer" rule in flat space][9]
The same rules apply here in curved spacetime, except the light beams are weirder.
!["Brighter when closer" rule in curved space][10]
We know that $\frac{d\Omega}{dS}=(\frac{dx}{d\theta_x}\frac{dy}{d\theta_y})^{-1}$. While $\frac{dy}{d\theta_y}=\frac{R_0 \sin \alpha}{\sin \theta_1}$ can be derived using basic solid geometry, $\frac{dx}{d\theta_x}$ must be calculated numerically by tracing the light from the object to the observer. As before, we use an interpolating function to generalize the result from a set of sample points to the whole space.
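As a sanity check of the flat-space limit, where $\frac{d\Omega}{dS}$ reduces to the familiar $1/d^2$ law, one can integrate the solid angle of a small square patch numerically (Python sketch; the patch size and distances are arbitrary):

```python
def solid_angle_of_square(side, d, n=200):
    """Solid angle subtended by a (side x side) square seen face-on from
    perpendicular distance d, by midpoint-rule integration of
    dOmega = (d / r) * dA / r^2 with r^2 = x^2 + y^2 + d^2."""
    h = side / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = -side / 2 + (i + 0.5) * h
            y = -side / 2 + (j + 0.5) * h
            total += d * h * h / (x * x + y * y + d * d) ** 1.5
    return total

w1 = solid_angle_of_square(0.1, 5.0)
w2 = solid_angle_of_square(0.1, 10.0)
print(w1, w2, w2 / w1)   # doubling the distance quarters the solid angle
```

In curved spacetime the same bookkeeping applies, except the Jacobian has to be taken along the bent rays, which is exactly what the interpolation above provides.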
(*Reverse ray tracing*)
parfuncrev =
ParametricNDSolveValue[{{tt''[\[Tau]],
rr''[\[Tau]], \[Theta]\[Theta]''[\[Tau]]} == {(
Derivative[1][rr][\[Tau]] Derivative[1][tt][\[Tau]])/(
rr[\[Tau]] - rr[\[Tau]]^2), (
rr[\[Tau]]^2 Derivative[1][
rr][\[Tau]]^2 - (-1 + rr[\[Tau]])^2 Derivative[1][
tt][\[Tau]]^2)/(
2 (-1 + rr[\[Tau]]) rr[\[Tau]]^3) + (-1 +
rr[\[Tau]]) Derivative[1][\[Theta]\[Theta]][\[Tau]]^2, -((
2 Derivative[1][rr][\[Tau]] Derivative[
1][\[Theta]\[Theta]][\[Tau]])/rr[\[Tau]])}, {tt'[0],
rr'[0], \[Theta]\[Theta]'[0]} == {1/(1 - Rs/R1),
Cos[\[Theta]0], -Sqrt[1/(1 - Rs/R1)]/R1 Sin[\[Theta]0]}, {tt[0],
rr[0], \[Theta]\[Theta][0]} == {0, R1, \[Theta]1},
WhenEvent[\[Theta]\[Theta][\[Tau]] == 0,
"StopIntegration"]}, {tt[\[Tau]],
rr[\[Tau]], \[Theta]\[Theta][\[Tau]]}, {\[Tau], 0,
100}, {\[Theta]0, R1, \[Theta]1}];
(*data used in interpolation*)
\[CapitalDelta]\[Phi] = 1.*^-5;
intensity =
Catenate@Table[{{R, \[Theta]},
R0^2 With[{\[Theta] = Abs[\[Theta]]},
Abs[Sin[EmitAngleFunction[
R, \[Theta]]]/(R0 \
Sin[\[Theta]])]*(\[CapitalDelta]\[Phi]/(Sin[
ReceiveAngleFunction[R, \[Theta]]]*
Subtract @@ (With[{f =
parfuncrev[
EmitAngleFunction[
R, \[Theta]] + # \[CapitalDelta]\[Phi],
R, \[Theta]][[2, 0]]}, f@f[[1, 1, -1]]] & /@ {-1,
1})))]}, {R, 2.45 Rs, 5.55 Rs,
0.02 Rs}, {\[Theta], -3 Degree, 543 Degree, 2 Degree}];
(*Construct InterpolatingFunction*)
IntensityFunction1 = Interpolation[intensity];
![With intensity correction 1][11]
The figure is much more realistic in terms of intensity after adding this effect. The inner two rings are much dimmer, because dramatically bent light is rare after all.
##Doppler Effect and Headlight Effect
Now it's time for the Doppler effect and the headlight effect. These two effects are related to the motion of the light-emitting objects and the observer. Though their names can sound forbidding, the effects are quite common in everyday life. In the acoustic analogue, blueshift is the phenomenon that when a car approaches you, its noise becomes higher-pitched and louder, while redshift means that when the car is leaving you, the noise becomes quieter and lower in frequency.
![Doppler effect][12]
The equation for the relativistic Doppler effect is:
$$
f'=f\frac{\sqrt{1-\beta^2}}{1-\beta \cos \theta}
$$
where $\beta=\frac{v}{c}$ and $\theta$ is the angle between the velocity direction of the light-emitting object and the emitted light, as observed by an external observer. In our case, we must further apply a factor of
$$
f''=f'\sqrt{\frac{1 - R_s/R_1}{1 - R_s/R_0}}
$$
due to general relativistic effects.
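These two factors translate directly into code; a hedged Python sketch (the function names and the test values of β, θ, and the radii are illustrative choices; the combined factor mirrors the FrequencyMult definition further below):

```python
import math

Rs = 1.0  # Schwarzschild radius

def doppler_factor(beta, theta):
    """Special-relativistic Doppler factor f'/f for emitter speed beta and
    angle theta between the emitter's velocity and the emitted light."""
    return math.sqrt(1 - beta**2) / (1 - beta * math.cos(theta))

def gravitational_factor(R1, R0):
    """Gravitational shift f''/f' between emission radius R1 and observer radius R0."""
    return math.sqrt((1 - Rs / R1) / (1 - Rs / R0))

def frequency_mult(R1, R0, beta, theta):
    """Combined factor f''/f, as in the two formulas above."""
    return gravitational_factor(R1, R0) * doppler_factor(beta, theta)

print(doppler_factor(0.3, 0.0))        # head-on approach: blueshift, > 1
print(doppler_factor(0.3, math.pi))    # receding: redshift, < 1
print(gravitational_factor(3.0, 6.0))  # climbing out of the well: < 1
```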
The headlight effect is familiar from driving a car on a rainy day: no matter how the wind blows, the raindrops always seem to run toward the windshield. But if you stop the vehicle, you can see how the wind actually influences the direction of the falling rain.
![Headlight effect][13]
The equation for angle transformation is:
$$
\cos \theta'= \frac{\cos \theta +\beta}{1+\beta \cos \theta}
$$
and so the intensity change introduced by this can be written as:
$$
\frac{dP'}{d\Omega}= \frac{dP}{d\Omega}\frac{\sin \theta}{\sin \theta'}\frac{d \theta}{d \theta'}=\frac{dP}{d\Omega}\frac{1 - \beta^2}{(1 -\beta \cos \theta)^2}
$$
Except for the difference in timing brought by the curved spacetime, these two effects are purely special-relativistic. The only coding involved is tedious coordinate transformations.
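The intensity factor is just the solid-angle Jacobian of the aberration map; here is a small finite-difference verification in Python (β and θ are arbitrary test values; note that the closed form comes out naturally in terms of the aberrated angle θ′):

```python
import math

def aberrate(cos_t, beta):
    """Relativistic aberration: cos(theta') as a function of cos(theta)."""
    return (cos_t + beta) / (1 + beta * cos_t)

beta, theta = 0.4, 1.1          # arbitrary test values
eps = 1e-6
theta_p = math.acos(aberrate(math.cos(theta), beta))
theta_p2 = math.acos(aberrate(math.cos(theta + eps), beta))
# (sin(theta) / sin(theta')) * (d theta / d theta'), by finite differences
ratio = math.sin(theta) / math.sin(theta_p) * (eps / (theta_p2 - theta_p))
# Closed form of the same Jacobian, written with the aberrated angle theta'
expected = (1 - beta**2) / (1 - beta * math.cos(theta_p)) ** 2
print(ratio, expected)
```

The two numbers agree to finite-difference accuracy, which pins down which frame's angle belongs in the denominator of the intensity factor.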
(*Calculate moving speed*)
Calc\[Beta][{R1_, \[Theta]_, \[Phi]_}, {vr_, v\[Theta]_, v\[Phi]_}] :=
Sqrt[vr^2/(1 -
Rs/R1) + (R1 v\[Theta])^2 + (R1 Sin[\[Theta]] v\[Phi])^2]/
Sqrt[1 - Rs/R1]
(*Calculate inner product between moving direction and light direction*)
CalcCosAngle[{R1_, \[Theta]_, \[Phi]_}, {vr_, v\[Theta]_, v\[Phi]_}] :=
With[{v = {vr/Sqrt[1 - Rs/R1], R1 v\[Theta],
R1 Sin[\[Theta]] v\[Phi]}},
MapThread[With[{\[Theta]0 = EmitAngleFunction[R1, #1]},
With[{vnormed = MapThread[Normalize@*List, v]},
MapThread[
Dot, {vnormed, Thread@{Cos[\[Theta]0], #2 Sin[\[Theta]0], 0}},
1]]] &, {{\[Theta], 2 Pi - \[Theta], 2 Pi + \[Theta]}, {-1,
1, -1}}]]
(*Frequency shift, Doppler effect + GR timing effects*)
FrequencyMult[R1_, \[Beta]_, cos_] :=
Sqrt[(1 - Rs/R1)/(1 - Rs/R0)]*Sqrt[1 - \[Beta]^2]/(1 - \[Beta] cos)
(*Intensity shift due to headlight effect only*)
IntensityMult2[\[Beta]_,
cos_] := (Sqrt[1 - \[Beta]^2]/(1 - \[Beta] cos))^2
Then we can put all these effects together and see how things work out!
<< PhysicalColor`
IntensityFunctionScaling::usage = "Scale Intensity.";
Protect@IntensityFunctionScaling;
Options[RenderFunc] = {ColorFunction -> TemperatureColor,
ColorFunctionScaling -> (# &), IntensityFunctionScaling -> (# &),
"StaticObserver" -> True};
RenderFunc[R1_, {\[Theta]1_, t1_, \[Gamma]1_}, {T0_, I0_},
OptionsPattern[]] :=
Function[t, Through[#[t]]] &@Module[{
(*Velocity of observer*)
vobs = N@Sqrt[(1 - Rs/R0) Rs/(2 R0)],
(*list of \[Phi]1 parameters*)
\[Phi]1l = Range[0., 2 Pi, 1 Degree],
(*Polar coordinates \[Theta] and \[Phi]*)
\[Theta]l0, \[Phi]l0,
(*velocity of object and its norm*)
vrl, v\[Theta]l, v\[Phi]l, vnorml
},
(*Polar coordinate \[Theta]*)
\[Theta]l0 = ArcCos[Sin[\[Phi]1l] Sin[\[Theta]1]];
(*Original \[Phi]*)
\[Phi]l0 =
ArcTan[Cos[\[Phi]1l], Sin[\[Phi]1l] Cos[\[Theta]1]] + \[Gamma]1;
(*velocity of object*)
vrl = ConstantArray[0, Length@\[Phi]1l];
v\[Theta]l = -(Cos[\[Phi]1l] Sin[\[Theta]1])/
Sqrt[1 - Sin[\[Theta]1]^2 Sin[\[Phi]1l]^2]*Sqrt[Rs/(2 R1^3)];
v\[Phi]l =
1/(Cos[\[Phi]1l]^2/Cos[\[Theta]1] +
Cos[\[Theta]1] Sin[\[Phi]1l]^2)*Sqrt[Rs/(2 R1^3)];
(*velocity norm*)
vnorml =
Calc\[Beta][{R1, \[Theta]l0, 0}, {vrl, v\[Theta]l, v\[Phi]l}];
MapThread[Module[{
(*Observed \[Phi]1 parameter - t*)
\[Phi]1t = #3,
(*actual \[Theta] of object*)
\[Theta]l = #1,
(*angle between velocy and ray*)
cosl = #4,
(*Observed values - \[Phi]1*)
(*Geometry*)
\[Theta]obsl, \[Phi]obsl = \[Phi]l0 + #2,
(*Frequency and intensity shift*)
freqobsl, intobsl,
(*helper function*)
helpf
},
\[Theta]obsl = ReceiveAngleFunction[R1, \[Theta]l];
(*Frequency*)
freqobsl = FrequencyMult[R1, vnorml, cosl];
(*Process with the non-static observer*)
If[OptionValue["StaticObserver"] =!= True,
Module[{\[Theta]transl, \[Phi]transl, \[Delta] = ArcSin[vobs]},
(*Geometrics, static frame*)
\[Theta]transl = ArcCos[Sin[\[Theta]obsl] Cos[\[Phi]obsl]];
\[Phi]transl =
ArcTan[Sin[\[Theta]obsl] Sin[\[Phi]obsl], Cos[\[Theta]obsl]];
(*Frequency shift due to movement of observer,
intensity shift is calculated together later*)
freqobsl *= (1 + vobs Cos[\[Theta]transl])/Sqrt[1 - vobs^2];
(*Angle shift due to movement of observer*)
\[Theta]transl =
ArcCos[(vobs + Cos[\[Theta]transl])/(1 +
vobs Cos[\[Theta]transl])];
(*Transform back*)
(*Here we change the center of viewing angle so that the \
black hole's center is at {0,0}*)
\[Theta]obsl =
ArcCos[Sin[\[Delta]] Cos[\[Theta]transl] +
Cos[\[Delta]] Sin[\[Theta]transl] Sin[\[Phi]transl]];
\[Phi]obsl =
ArcTan[Cos[\[Delta]] Cos[\[Theta]transl] -
Sin[\[Delta]] Sin[\[Theta]transl] Sin[\[Phi]transl],
Sin[\[Theta]transl] Cos[\[Phi]transl]]
]
];
\[Phi]obsl =
Catenate[
MapIndexed[#1 + 2 Pi #2[[1]] &, Split[\[Phi]obsl, Less]]];
(*Intensity*)
intobsl =
freqobsl^2*IntensityFunction1[R1, \[Theta]l]*
IntensityMult2[vnorml, cosl]*
TemperatureIntensity[freqobsl T0]/TemperatureIntensity[T0]/
freqobsl^4;
(*Helper function to construct interpolating functions*)
helpf[l_] := Interpolation[Thread[{\[Phi]1l, l}]];
With[{
cf = OptionValue[ColorFunction],
(*Interpolating functions*)
t11 = t1,
\[Phi]1f = #3,
\[Theta]func = helpf[\[Theta]obsl],
\[Phi]func = helpf[\[Phi]obsl],
freqfunc =
helpf[OptionValue[ColorFunctionScaling][T0 freqobsl]],
intfunc =
helpf[OptionValue[IntensityFunctionScaling][I0 intobsl]]
},
(*Final function*)
Function[t,
With[{\[Phi]11 = \[Phi]1f[t + t11]},
{Append[cf[freqfunc[\[Phi]11]], intfunc[\[Phi]11]],
With[{\[Theta] = \[Theta]func[\[Phi]11], \[Phi] = \
\[Phi]func[\[Phi]11]},
(*Point[{Sin[\[Theta]]Cos[\[Phi]],Sin[\[Theta]]Sin[\[Phi]],
Cos[\[Theta]]}]*)
Point[Tan[\[Theta]] {Cos[\[Phi]], Sin[\[Phi]]}]]}
]]
]
] &, {{\[Theta]l0, 2 Pi - \[Theta]l0, 2 Pi + \[Theta]l0}, {0,
Pi, 0}, GenerateAngleFunctions[R1, \[Theta]1],
CalcCosAngle[{R1, \[Theta]l0, 0}, {vrl, v\[Theta]l, v\[Phi]l}]}]]
(*My version of rasterize, which increase color precision in dimmer areas*)
HDRRasterize[gr_Graphics, convertfunc_,
opts : OptionsPattern[Rasterize]] :=
Module[{rasterl =
Join[ColorSeparate[ColorConvert[Rasterize[gr, opts], "HSB"]],
ColorSeparate[
ColorConvert[
Rasterize[
gr /. RGBColor[r_, g_, b_, op_] :> RGBColor[r, g, b, 16 op],
opts], "HSB"]]], mask, invmask},
mask = Binarize[rasterl[[3]], 1/16];
invmask = 1 - mask;
ColorCombine[{
mask*rasterl[[1]] + invmask*rasterl[[4]],
mask*rasterl[[2]] + invmask*rasterl[[5]],
mask*convertfunc[rasterl[[3]]] +
invmask*convertfunc[rasterl[[6]]/16.]}, "HSB"]
]
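The idea behind HDRRasterize — rasterize twice, once with opacity boosted 16×, then trust the boosted render (scaled back down) wherever the normal render is too dim for 8-bit precision — can be sketched per pixel in plain Python (the pixel values are made up; the 1/16 threshold mirrors the Binarize call):

```python
def hdr_combine(b_lo, b_hi, gain=16.0, threshold=1 / 16):
    """Merge brightness channels of two rasterizations of the same scene:
    b_lo rendered normally, b_hi rendered with opacity multiplied by `gain`.
    Where b_lo is bright enough, keep it; in dim regions use b_hi / gain,
    which retains more of the 8-bit resolution in the shadows."""
    return [lo if lo >= threshold else hi / gain
            for lo, hi in zip(b_lo, b_hi)]

normal = [0.50, 0.20, 0.03, 0.00]       # dim pixels are heavily quantized
amplified = [1.00, 1.00, 0.52, 0.034]   # same pixels rendered at 16x opacity
print(hdr_combine(normal, amplified))
```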
(*Preliminary computation*)
npts = 5000;
rflist = MapThread[
Function[{R1, \[Theta]1, t1, \[Gamma]1, T0, I0},
RenderFunc[R1, {\[Theta]1, t1, \[Gamma]1}, {T0, I0},
"StaticObserver" -> False(*,
IntensityFunctionScaling\[Rule](.7(#/.5)^0.5&)*)]],
{RandomReal[{3, 4.5}, npts],
RandomReal[-{83, 86} Degree, npts],
RandomReal[{0, 10000}, npts],
RandomReal[15 Degree + {-2, 2} Degree, npts],
RandomReal[{4000, 10000}, npts],
RandomReal[{.03, .1}, npts]
}
];
(*rendering!!!*)
g = Graphics[{(*AbsolutePointSize@.1,White,Point[{Sin[20Degree]Cos[#],
Sin[20Degree]Sin[#],Cos[20Degree]}&/@Range[0.,360.Degree,
60.Degree]],*)AbsoluteThickness@2,
Map[Line[#[[;; , 2, 1]],
VertexColors ->
MapThread[
Function[{col, len, mult},
MapAt[mult^2*#*0.006/len &, col, 4]], {#[[;; , 1]],
Prepend[#, #[[1]]] &@
BlockMap[Norm[#[[2]] - #[[1]]] &, #[[;; , 2, 1]], 2, 1],
Subdivide[Length[#] - 1]}]] &,
Reverse@Transpose[
Through[rflist[#]] & /@ (Range[0, 3, .1]), {3, 2, 1}], {2}]},
Background -> Black, ImageSize -> {500, Automatic},
PlotRange -> {{-1.28, 1.28}, {-0.72, 0.72}}];
HDRRasterize[g, #^(1/2.2) &, ImageSize -> {1920, 1080}]
![With all effects, static observer][14]
Because objects on the left are moving towards you, they appear much brighter and bluer, while objects on the right are much dimmer and redder.
We can also take the observer's own motion into account, which makes the image look like this:
![With all effects, observer moving][15]
Hooray!
The notebook can be found in the attachment or at [my github repo](https://github.com/wjxway/Realistic_Blackhole_Accretion_Disk).
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=bh10.png&userId=1340903
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=T_001.png&userId=1340903
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Path_4.png&userId=1340903
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=illus2.png&userId=1340903
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=illus1.png&userId=1340903
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=LightBending.png&userId=1340903
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=WithTimeCorrection.png&userId=1340903
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=illus3.png&userId=1340903
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=IntensityIllus.png&userId=1340903
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=illus4.png&userId=1340903
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=WithIntensity1.png&userId=1340903
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Doppler.jpg&userId=1340903
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=FrontLight.jpg&userId=1340903
[14]: https://community.wolfram.com//c/portal/getImageAttachment?filename=WithAllEffects.png&userId=1340903
[15]: https://community.wolfram.com//c/portal/getImageAttachment?filename=WithObserverMoving.png&userId=1340903Jingxian Wang2020-07-02T11:26:03ZSet the default folder to be that of the notebook?
https://community.wolfram.com/groups/-/m/t/2014327
My system: Mathematica 11.2, Windows 10
The problem: I want to configure my Mathematica system to save all graphics *to the folder of the current notebook*. Currently the default folder seems to be whatever folder was used the last time.
Caveat: I don't know what the various pieces of software do for me. I did all my programming in the DOS era, and the distinction between the Mathematica kernel and the FrontEnd/GUI is blurry for me.
I asked for help in the appropriate StackExchange forum. There the useful suggestion to give the command
SetDirectory[NotebookDirectory[]]
was given. While helpful, this is unsatisfactory, for apparently it only affects the default output folder of an Export-command.
My uneducated guess is that the mouse action in "Save Graphics As" does not touch the kernel, and it is thus blissfully unaware of the above command. On the other hand, the Export-command is affected by that.
> How can I configure the mouse interface (the FrontEnd??) to also use the folder of the current notebook as the default folder for saving graphics?
Why isn't the folder of the notebook the default folder for the mouse action of "Save Graphics As"?Jyrki Lahtonen2020-06-27T13:29:54ZUS Counties COVID-19 confirmed cases by population density timelines
https://community.wolfram.com/groups/-/m/t/1992898
*MODERATOR NOTE: coronavirus resources & updates:* https://wolfr.am/coronavirus
----------
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/bobs/Published/COVID19-USCounty-PopulationDensityTimelines.nbBob Sandheinrich2020-06-02T16:28:47ZCompatibility Mathematica 11.0 and 12
https://community.wolfram.com/groups/-/m/t/2021269
Hello, I am French.
I have a trial version of Mathematica 12, and I would like to run a program found on Wikimedia Commons and written in Mathematica 11.0 (the program is here: https://commons.wikimedia.org/wiki/File:Bremsstrahlung.gif). It doesn't work.
Can you help me please ?
Sincerely yours,
Armen SARIANArmen SARIAN2020-07-07T18:50:46Z[Solved] Red square in brackets in front of repository functions
https://community.wolfram.com/groups/-/m/t/2021418
What is this red square symbol? Can I type it with an Esc combination? Is there another way to shortcut typing "ResourceFunction" every time you want to call one? (I did have a search around.)
Thanks
Sam
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=O_1.png&userId=1958263Samuel Collins2020-07-08T01:56:47ZWhy does Wolfram|Alpha give the right result?
https://community.wolfram.com/groups/-/m/t/2020021
![Arc length integral with the right value][1]
I would like to know why I cannot get the value for the arc integral using Mathematica 12.0. Wolfram|Alpha shows the value 69473167820511768024711168 with all of its digits, while the value I get using
n=100000000000
a= 0.988639922/0.29719183431
b=Sqrt[a^2+1]
c=n*b*2
n=N[c]
just returns 6.94732*10^11 instead of all the digits that are supposed to exist for the correct calculation of the arc integral, which I need in order to prove that it is a non-trivial zero. Wolfram|Alpha gives me the correct value, while I cannot get it by writing a program in Mathematica. I have just shown that the formula below gives non-trivial zeros, numbers that are very closely related to prime finding; this can be done using Wolfram|Alpha, but I cannot reproduce it on my computer.
Could anyone help me to reproduce what Wolfram|Alpha does?
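Part of the answer is that 0.988639922/0.29719183431 is a machine-precision number, so everything downstream carries only about 16 significant digits and N[c] prints six. Keeping the inputs exact preserves every digit; here is a hedged Python sketch of the same computation in exact rational arithmetic (the constants are taken from the code above; whether they are the right constants for the arc-length problem is a separate question):

```python
from fractions import Fraction
from math import isqrt

n = 100_000_000_000
a = Fraction(988639922, 10**9) / Fraction(29719183431, 10**11)
b_squared = a * a + 1            # keep b = Sqrt[a^2 + 1] exact, as b^2
c_squared = 4 * n * n * b_squared
# integer part of c = 2 n b, with every digit intact
c_floor = isqrt(c_squared.numerator // c_squared.denominator)
print(c_floor)
```

In Mathematica itself, the analogous fix is to enter the constants as exact rationals (e.g. 988639922/10^9) and request extra digits with N[c, 30] instead of the machine-precision N[c].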
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=firstworldrecordforthenontrivialzeros.jpg&userId=1147177Luis Felipe Massena Misiec2020-07-05T23:19:35Z[WSG20] Programming Fundamentals Week 4
https://community.wolfram.com/groups/-/m/t/1985992
This week we will be looking at the following topics:
- Tuesday May 26: Dataset
- Wednesday May 27: Package Development
- Thursday May 28: Workbench
- Friday May 29: Useful Tips and Session Wrap Up
Please post your questions on this thread and look out for daily challenges.Abrita Chakravarty2020-05-26T15:21:32ZWriting a solution dependent coefficient in a PDE using Do & Table command
https://community.wolfram.com/groups/-/m/t/2021079
The goal is to find the displacement response of the following equation, whose coefficient (the diffusion coefficient, or Epsilon) updates over the time and displacement domains. For this purpose, Epsilon is taken to be zero at time zero, and at all other times it follows the relation ϵ = 0.01 u[x, t]. The boundary conditions and the initial condition are given in the code. Part of my code is below:
Needs["NDSolve`FEM`"];
s[x_] := UnitBox[0.3 x - 1.5] (* initial displacement *)
ics = {u[x, 0] == s[x]}
nr = ToNumericalRegion[FullRegion[1], {{0, 10}}]
mesh = ToElementMesh[nr, MaxCellMeasure -> 0.25]
pde = D[u[x, t], t] + D[u[x, t], x] == ϵ[x]*D[u[x, t], x, x]
The problem here is how to write this part of the code as a loop using a combination of the Table and Interpolation commands. It should be noted that I have already tried this section as follows:
data1 = Table[ {x,0.01*u[x,t]}, {x, 0, 10, 0.25}];
ϵ[x] = Interpolation[data1, InterpolationOrder -> 0]
I also tried to use the thread command to link this coefficient, which is on the disp domain, to the time domain, as shown below:
data2 = Table[t, {t, 0, 10, 0.001}];
Thread[{data1 -> data2}]
That doesn't seem to be true :-)
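One possible simplification (an untested sketch, offered as a guess) is to skip the Table/Interpolation step entirely and write the coefficient directly in terms of the unknown function, since NDSolve accepts quasilinear PDEs of this form:

```mathematica
(* write ϵ directly as 0.01 u[x, t] instead of a separate interpolation *)
pde = D[u[x, t], t] + D[u[x, t], x] == 0.01 u[x, t] D[u[x, t], {x, 2}];
```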
The solution command is also given in the following format, which gives us an interpolating function that represents the displacement response, and we need to update the Epsilon coefficient on each displacement mesh at each time step:
t0 = 2;
sol = First[NDSolve[{pde, ics, bcs}, u , x \[Element] mesh, {t, 0, t0, 0.001}, MaxSteps -> 1000,
Method -> {"PDEDiscretization" -> {"MethodOfLines","SpatialDiscretization" -> {"FiniteElement",
"MeshOptions" -> {"MaxCellMeasure" -> 0.25}}}}]]Mohammed Taherkhani2020-07-07T15:05:24ZGeneral and COVID-19 deaths in Sweden
https://community.wolfram.com/groups/-/m/t/2006377
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/e45ecfd2-3859-4f64-9b29-2448f643f14fOscar Rodriguez2020-06-17T22:27:27ZFitting Data With Poisson and Normal Distributions With Fit Data (Chi^2)
https://community.wolfram.com/groups/-/m/t/2020806
Hello!
I have been at this for a week and I don't know where else to ask. I am trying to solve for chi^2 from a distribution fit using Mathematica. I have the following data:
{812,827,782,841,839,803,800,818,790,838,783,768,799,854,805,832,786,838,802,805,825,784,797,808,846,809,814,857,884,855,800,855,818,865,801,823,812,861,807,830,813,827,819,796,870,825,827,872,770,849,813,828,839,804,805,833,833,871,856,819,787,819,852,786,864,841,799,804,812,831,830,810,863,856,794,808,794,838,748,757,866,778,823,818,830,794,831,838,853,836,821,836,797,863,811,867,803,821,864,819}
It is expected that this data will follow Poisson and Normal Distributions. I can get these fit parameters by using FindDistributionParameters[]. I can also plot these distribution equations against the data to show that it's a good fit visually speaking (although the units may be incorrect):
![Fits vs Data][1]
What I can't figure out, though, is how to obtain data showing why this is a good fit, specifically the chi^2 value (which is required for what I am doing). I tried to calculate this manually, but my values are way off from what my peers are obtaining using different programs. The equation I am attempting to solve manually is this one:
![Chi^2 Equation][2]
Where Ok is the measured point and Ek is the point as expected by the fit. I don't think this is working because I think this equation requires actual values, while Mathematica's Poisson and Normal distributions are probability based. Here's the code I have been using:
FindDistributionParameters[CS2S1r, NormalDistribution[Mu, Sig]]
FindDistributionParameters[CS2S1r, PoissonDistribution[Mu]]
mu = 822.04;
sig = 27.6452;
binCount= 100;
Chi2g = 0;
Chi2p = 0;
h[x_] = PDF[HistogramDistribution[CS2S2r, binCount], x]
f[x_] = PDF[NormalDistribution[mu, sig], x];
g[x_] = PDF[PoissonDistribution[mu], x];
For[i = 764, i <= 892, i++, Chi2g += (h[i] - f[i])^2/f[i]];
For[i = 764, i <= 892, i++, Chi2p += (h[i] - g[i])^2/g[i]];
Here, mu is the mean of the distribution and sig is the standard deviation. Both of these were determined by using FindDistributionParameters[]. binCount is a number I am going to tweak later, but I set it as high as I could so that each step is the same size as the steps in the data. CS2S2r is the name of my data itself, and 764 and 892 were chosen as the bounds because those are the greatest and smallest numbers found in the data itself.
Does anyone know how I might obtain this? Am I fitting the distributions in the correct way? Is there anything else I should do differently?
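One option worth trying (a sketch, not verified against this data) is the built-in PearsonChiSquareTest, which can return the χ² test statistic or the p-value directly, assuming CS2S1r holds the data and mu, sig the fitted parameters:

```mathematica
PearsonChiSquareTest[CS2S1r, NormalDistribution[mu, sig], "TestStatistic"]
PearsonChiSquareTest[CS2S1r, NormalDistribution[mu, sig], "PValue"]
```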
All help is appreciated, thank you!
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=unknown.png&userId=2020287
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=trash.png&userId=2020287
Edit: It looks like FindDistribution[] does what I am looking for, but I am not sure if the ChiSquared that it reports is the actual ChiSquared value, or if it's the p-value that comes from a non displayed ChiSquared. I also don't know how to get this to work with error bars, any ideas?Alexander Peterson2020-07-06T22:33:51ZHow to realize the inverse Laplace Transform?
https://community.wolfram.com/groups/-/m/t/2017577
This expression came from the Laplace Transform,
(Sqrt[Dc] E^((Sqrt[s] (lh - x))/Sqrt[Dc]) F1)/(Sqrt[s] (s - \[Theta]c))
where s is the unique argument.
Now I need to compute the inverse Laplace transform. It can be realized via the convolution theorem,
but it would be simpler to do it with the residue theorem.
Would you like to help me with this issue?
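As a first check (a sketch; it may or may not return a closed form), one can try InverseLaplaceTransform directly on the expression, keeping F1, Dc, lh and \[Theta]c symbolic:

```mathematica
expr = (Sqrt[Dc] E^((Sqrt[s] (lh - x))/Sqrt[Dc]) F1)/(Sqrt[s] (s - \[Theta]c));
InverseLaplaceTransform[expr, s, t]
```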
Thanks.Zhonghui Ou2020-07-02T06:09:59ZSimplification of simple matrix expressions
https://community.wolfram.com/groups/-/m/t/2019171
Can Mathematica Simplify simple matrix expressions like
$Assumptions = Element[C1, Matrices[{4, 4}]]
$Assumptions = Element[C2, Matrices[{4, 4}]]
Simplify[Transpose[C1.C2] - Transpose[C2].Transpose[C1]]
Simplify[TensorReduce[TensorTranspose[C1.C2]] - TensorTranspose[C2].TensorTranspose[C1]]
Running this code does not yield 0 for either expression.
More generally, how can I use Mathematica do simplify more complicated matrix expressions involving sums and products of matrices, some of which are symmetric, self-adjoint, etc...?
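One thing to note (a possible cause, not verified): the second `$Assumptions = ...` assignment overwrites the first, so the assumption on C1 is lost. Combining both assumptions and applying TensorReduce to the whole difference may help:

```mathematica
$Assumptions = Element[C1, Matrices[{4, 4}]] && Element[C2, Matrices[{4, 4}]];
TensorReduce[TensorTranspose[C1.C2] - TensorTranspose[C2].TensorTranspose[C1]]
```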
Thanks!Eric Michielssen2020-07-04T17:16:03Z[WSG20] New Daily Study Group begins Monday, June 22
https://community.wolfram.com/groups/-/m/t/2007434
Our newest [Daily Study Group][1] offers a jump-start on earning Wolfram certifications and covers topics including Wolfram Notebooks, image processing and multiparadigm data science. Sign up at: https://wolfr.am/NpiuhRsg
[1]: https://www.wolfram.com/wolfram-u/special-event/study-groups/Jamie Peterson2020-06-18T21:21:10ZSolving 2-dimensional PDE
https://community.wolfram.com/groups/-/m/t/2020082
I want to solve a PDE of the form:
$$\frac{\partial}{\partial x}\left(C_1(x,y)u(x,y)\right)+\frac{\partial}{\partial y}\left(C_2(x,y)u(x,y)\right)+\frac{\partial^2}{\partial x^2}\left(C_3(x,y)u(x,y)\right)\\ +\frac{\partial^2}{\partial x \partial y}\left(C_4(x,y)u(x,y)\right)+\frac{\partial^2}{\partial y^2}\left(C_5(x,y)u(x,y)\right)=0$$ with no-flux boundary conditions. The derivative is $0$ on the boundary, which is the triangular domain $x+y\leq1$ with $x\geq 0$, $y \geq 0$. I also have an absorbing boundary condition u[1,0]=0. In addition to this, I require that $u$ integrate to 1 over the domain.
```
g = 0.15;
a = 0.2;
b = 0.2;
NN = 1500;
C1[x, y] = -2 a*x + 2 a*x^2 + g*g + a*x*y;
C2[x, y] = 2 g + (2 a - 2 g)*x - 2 a*x^2 - (a + b + 3 g)*y - 2 a*x*y - (a/2)*y^2;
C3[x, y] = 2 a*x - 2 a*x^2 + g*y - a*x*y;
C4[x, y] = C3[x, y];
C5[x, y] = 2 g - 2 a*x^2 + (a + b - g)*y - (a/2)*y^2 + (2 a - 2 g)*x - 2 a*x*y;
uval = NDSolve[{D[C1[x, y]*u[x, y], x] +
D[C2[x, y]*u[x, y], y] + (1/(2 NN))*
D[C3[x, y]*u[x, y], x, x] - (1/NN)*
D[C4[x, y]*u[x, y], x, y] + (1/(2*NN))*
D[C5[x, y]*u[x, y], y, y] == 0, u[1, 0] == 0,
Derivative[1, 0][u][0, y] == 0, Derivative[0, 1][u][x, 0] == 0,
Derivative[1, 0][u][x, 1 - y] + Derivative[0, 1][u][x, 1 - y] ==
0}, u, {x, 0, 1}, {y, 0, 1 - x}];
```
If anyone knows why this is not working, and a way I can implement the integral condition, your help would be much appreciated.A H2020-07-06T12:08:33ZEndless song, never repeats a chord, plays all chords eventually.
https://community.wolfram.com/groups/-/m/t/2017293
This is a program that plays every possible chord in a somewhat "shuffled" sequence. It's based on a modification of the Recaman sequence.
There's a second program that doesn't play every chord; it is based on Wolfram's cellular automaton rule 225, and it never repeats a chord. When it's about to repeat, another possible note is added to allow it to continue on.Eric Parfitt2020-07-01T16:38:20ZNeural network regression with error bars
https://community.wolfram.com/groups/-/m/t/1319745
This community post accompanies my Wolfram blog post [How Optimistic Do You Want to Be? Bayesian Neural Network Regression with Prediction Errors ][1], so if you find the following interesting, please give that a read as well.
In the documentation there is a [tutorial][2] about doing neural network regression with uncertainty. This approach works under certain circumstances, but it can be difficult to generalize, so I started looking for other ways to do it.
As it turns out, there is a link between regression neural networks and [Gaussian processes][3] which can be exploited to put error bands on the predictions (see, e.g., [this post by Yarin Gal][4]; [his thesis][5] and [the PhD thesis by R.M. Neal 1995][6]). The basic idea here is to use the `DropoutLayer` to create a noisy neural network which can be sampled multiple times to get a sense of the errors in the predictions (though it's not quite as simple as I'm making it sound here).
Inspired by Yarin's post above and his interactive example of a network that is continuously being retrained on the example data, I decided to do something similar in Mathematica. The result is the code below, which generates an interactive example in which you can edit the network's training data (by clicking in the figure) and adjust the network parameters with controls. I had some trouble getting the code to not cause strange front end issues, but it seems to work quite well now.
In the attached notebook I go into a bit more detail of my implementation of this method and also show how to do regression with a non-constant noise level. I hope this is of some use to anyone here :)
![Example 1 of neural network regression with 1-sigma error bars][7]
Example 1: fitting with a network that assumes a constant noise level (mean + 1 sigma error bars)
![Example 2 of neural network regression with 1-sigma error bars][8]
Example 2: fitting with a network that fits the noise level to the data (heteroscedastic regression)
##Homoscedastic regression##
First generate some training examples:
exampleData = Table[{x, Sin[x]} + RandomVariate[NormalDistribution[0, .15]],
{x, RandomVariate[UniformDistribution[{-3, 3}], 15]}];
ListPlot[exampleData]
![enter image description here][9]
In homoscedastic regression, the noise level of the data is assumed constant across the x-axis. To calibrate the model, you need to provide a prior length scale l that expresses your belief in how correlated the data are over distance (just like in Gaussian process regression). Together with the L2 regularisation coefficient λ2, the dropout probability p and the number of training data points N, you then add the following variance to the sample variance of the network:
![enter image description here][10]
For demonstration purposes, we'll be using a net with one non-linearity. If you want to use more, you need to put a dropout layer before every linear layer.
λ2 = 0.01;
pdrop = 0.1;
nUnits = 100;
activation = Ramp;
net = NetChain[
{LinearLayer[nUnits], ElementwiseLayer[activation], DropoutLayer[pdrop],
LinearLayer[]},
"Input" -> "Scalar",
"Output" -> "Scalar"
]
![enter image description here][11]
Train the network:
trainedNet = NetTrain[
net,
<|"Input" -> exampleData[[All, 1]], "Output" -> exampleData[[All, 2]]|>,
LossFunction -> MeanSquaredLossLayer[],
Method -> {"ADAM", "L2Regularization" -> λ2}
]
![enter image description here][12]
This function takes a trained net and samples it multiple times with the dropout layers active (using NetEvaluationMode -> "Train"). It then constructs a TimeSeries object of the -1, 0 and +1 sigma bands of the predictions.
sampleNet[net : (_NetChain | _NetGraph), xvalues_List,
sampleNumber_Integer?Positive, {lengthScale_, l2reg_, prob_,
nExample_}] := TimeSeries[
Map[
With[{
mean = Mean[#],
stdv =
Sqrt[Variance[#] + (2 l2reg nExample)/(lengthScale^2 (1 -
prob))]
},
mean + stdv*{-1, 0, 1}
] &,
Transpose@
Select[Table[
net[xvalues, NetEvaluationMode -> "Train"], {i, sampleNumber}],
ListQ]],
{xvalues},
ValueDimensions -> 3
]
Now we can plot the predictions with 1σ error bands. The prior l = 2 seems to work reasonably well, though in real applications you'd need to calibrate it with a validation set (just like λ2 and p).
l = 2;
samples = sampleNet[trainedNet, Range[-5, 5, 0.05],
200, {l, λ2, pdrop, Length[exampleData]}]
Show[
ListPlot[
samples,
Joined -> True,
Filling -> {1 -> {2}, 3 -> {2}},
PlotStyle -> {Lighter[Blue], Blue, Lighter[Blue]}
],
ListPlot[exampleData, PlotStyle -> Red],
ImageSize -> 600,
PlotRange -> All
]
![enter image description here][13]
![enter image description here][14]
##Heteroscedastic regression##
exampleData = Table[(*initialise training data*){x, Sin[x]} + RandomVariate[NormalDistribution[0, .15]],
{x, RandomVariate[UniformDistribution[{-3, 3}], 15]}];
ListPlot[exampleData]
![enter image description here][15]
In heteroscedastic regression we let the neural net try and find the noise level for itself (see section 4.6 in the PhD thesis by Yarin Gal linked at the top of the notebook). This means that the regression network outputs 2 numbers instead of 1: a mean and a standard deviation. However, since the output of the network is a real number, we interpret it as the log of the precision logτ = Log[τ] = Log[1/σ^2]:
λ2 = 0.01;
pdrop = 0.1;
nUnits = 200;
activation = Ramp;
regressionNet = NetGraph[
{LinearLayer[nUnits], ElementwiseLayer[activation], DropoutLayer[pdrop],
LinearLayer[], LinearLayer[]},
{
NetPort["Input"] -> 1 -> 2 -> 3,
3 -> 4 -> NetPort["Mean"],
3 -> 5 -> NetPort["LogPrecision"]
},
"Input" -> "Real",
"Mean" -> "Real",
"LogPrecision" -> "Real"
]
![enter image description here][16]
Next, instead of using a MeanSquaredLossLayer to train the network, we minimise the negative log-likelihood of the observed data. Again, we replace σ with the log of the precision, and we multiply by 2 to be in agreement with the convention of MeanSquaredLossLayer.
FullSimplify[-2*LogLikelihood[NormalDistribution[μ, σ], {yobs}] /. σ -> 1/Sqrt[Exp[logτ]],
Assumptions -> logτ \[Element] Reals]
![enter image description here][17]
Discarding the constant term gives us the following loss which we will incorporate into the net:
loss = Function[{y, mean, logPrecision},
(y - mean)^2*Exp[logPrecision] - logPrecision
];
net = NetGraph[<|
"reg" -> regressionNet,
"negLoglikelihood" -> ThreadingLayer[loss]
|>,
{
NetPort["x"] -> "reg",
{NetPort["y"], NetPort[{"reg", "Mean"}],
NetPort[{"reg", "LogPrecision"}]} -> "negLoglikelihood" -> NetPort["Loss"]
},
"y" -> "Real",
"Loss" -> "Real"
]
![enter image description here][18]
trainedNet = NetTrain[
net,
<|"x" -> exampleData[[All, 1]], "y" -> exampleData[[All, 2]]|>,
LossFunction -> "Loss",
Method -> {"ADAM", "L2Regularization" -> \[Lambda]2}
]
![enter image description here][19]
Again, the predictions are sampled multiple times. The predictive variance is now the sum of the variance of the predicted mean + mean of the predicted variance. The priors no longer influence the variance directly, but only through the network training. Note that we need to use [NetExtract][20] to get the regression net out of the trained net.
sampleNetHetero[net : (_NetChain | _NetGraph), xvalues_List,
sampleNumber_Integer?Positive] :=
With[{regressionNet = NetExtract[net, "reg"]},
TimeSeries[
With[{
samples =
Select[Table[
regressionNet[xvalues, NetEvaluationMode -> "Train"], {i,
sampleNumber}], AssociationQ]
},
With[{
mean = Mean[samples[[All, "Mean"]]],
stdv =
Sqrt[Variance[samples[[All, "Mean"]]] +
Mean[Exp[-samples[[All, "LogPrecision"]]]]]
},
Transpose[{mean - stdv, mean, mean + stdv}]
]
],
{xvalues},
ValueDimensions -> 3
]
];
Now we can plot the predictions with 1σ error bands:
samples = sampleNetHetero[trainedNet, Range[-5, 5, 0.05], 200]
Show[
ListPlot[
samples,
Joined -> True,
Filling -> {1 -> {2}, 3 -> {2}},
PlotStyle -> {Lighter[Blue], Blue, Lighter[Blue]}
],
ListPlot[exampleData, PlotStyle -> Red],
ImageSize -> 600,
PlotRange -> All
]
![enter image description here][21]
![enter image description here][22]
##Implementation of a loss function (from comments)##
The following code shows how to implement the loss function described in the paper *Dropout Inference in Bayesian Neural Networks with Alpha-divergences* by Li and Gal: https://arxiv.org/pdf/1703.02914.pdf
In this paper, the authors propose a modified loss function for a stochastic neural network (e.g., a network that uses dropout layers). During training, the training inputs x_n (with 1 <= n <= N indexing the training examples) are fed through the network K times to sample the outputs ỹ_(n,k), which are compared to the training outputs y_n. Given a particular standard loss function l (e.g., mean square error, negative log-likelihood, cross entropy) and an L2 regularisation term for the weights with coefficient λ, the modified loss function L is given as:
![enter image description here][23]
The parameter α is the divergence parameter, which can be tuned in the range 0 < α <= 1.
As can be seen, we need to sample the network several times during training. We can accomplish this with NetMapOperator. As a simple example, suppose we want to apply a dropout layer K = 10 times to the same input. To do this, we duplicate the input and then wrap a NetMapOperator around the dropout layer and map it over the duplicated input:
input = Range[5];
duplicatedInput = ConstantArray[input, 10];
NetMapOperator[
DropoutLayer[0.5]
][duplicatedInput, NetEvaluationMode -> "Train"]
![enter image description here][24]
Let's implement this loss function for a simple regression example.
First generate some example data:
exampleData =
Table[{x, Sin[x] + RandomVariate[NormalDistribution[0, .15]]}, {x,
RandomVariate[UniformDistribution[{-3, 3}], 15]}];
ListPlot[exampleData]
![enter image description here][25]
Next, define a net that will try to fit the data points with a normal distribution. The output of the net is a length-2 vector with the mean and the log-precision logτ = Log[τ] = Log[1/σ^2]:
alpha = 0.5;
pdrop = 0.5;
units = 200;
activation = Ramp;
λ2 = 0.001; (*L2 regularisation coefficient*)
k = 25; (* number of samples of the network for calculating the loss*)
regnet = NetInitialize@NetChain[{
LinearLayer[units],
ElementwiseLayer[activation],
DropoutLayer[pdrop],
LinearLayer[]
},
"Input" -> "Real",
"Output" -> {2}
];
We will also need a network element to calculate the log-sum-exp operator that aggregates the losses of the different samples of the regression network. We implement the log-sum-exp in the following way (i.e., by factorising out the largest term before feeding the vector into the Exp operator) to make it more numerically stable:
logsumexp[alpha_] :=
NetGraph[<|
"timesAlpha" -> ElementwiseLayer[Function[-alpha #]],
"max" -> AggregationLayer[Max, 1],
"rep" -> ReplicateLayer[k],
"sub" -> ThreadingLayer[Subtract],
"expAlph" -> ElementwiseLayer[Exp],
"sum" -> SummationLayer[],
"logplusmax" -> ThreadingLayer[Function[{sum, max}, Log[sum] + max]],
"invalpha" -> ElementwiseLayer[-(1/alpha) # &]
|>,
{
NetPort["Input"] -> "timesAlpha",
"timesAlpha" -> "max" -> "rep",
{"timesAlpha", "rep"} -> "sub" -> "expAlph" -> "sum" ,
{"sum", "max"} -> "logplusmax" -> "invalpha"
},
"Input" -> {k}
];
logsumexp[alpha]
![enter image description here][26]
Define the network that will be used for training:
net[alpha_] :=
NetGraph[<|
"rep1" -> ReplicateLayer[k],(*
replicate the inputs and outputs of the network *)
"rep2" -> ReplicateLayer[k],
"map" -> NetMapOperator[regnet],
"mean" -> PartLayer[{All, 1}],
"logprecision" -> PartLayer[{All, 2}],
"loss" ->
ThreadingLayer[
Function[{mean, logprecision, y}, (mean - y)^2*Exp[logprecision] -
logprecision]],
"logsumexp" -> logsumexp[alpha]
|>,
{
NetPort["x"] -> "rep1" -> "map",
"map" -> "mean",
"map" -> "logprecision",
NetPort["y"] -> "rep2",
{"mean", "logprecision", "rep2"} -> "loss" -> "logsumexp" -> NetPort["Loss"]
},
"x" -> "Real",
"y" -> "Real"
]
net[alpha]
![enter image description here][27]
and train it:
alpha = 0.1;
trainedNet = NetTrain[
net[alpha],
<|"x" -> exampleData[[All, 1]], "y" -> exampleData[[All, 2]]|>,
LossFunction -> "Loss",
Method -> {"ADAM", "L2Regularization" -> λ2},
TargetDevice -> "CPU",
TimeGoal -> 60
];
This function helps to sample the trained net several times to get a measure of the predictive mean and standard deviation:
sampleNetAlpha[net : (_NetChain | _NetGraph), xvalues_List,
nSamples_Integer?Positive] :=
With[{regnet = NetExtract[net, {"map", "Net"}]},
TimeSeries[
Map[
With[{
mean = Mean[#[[All, 1]]],
stdv = Sqrt[Variance[#[[All, 1]]] + Mean[Exp[-#[[All, 2]]]]]
},
mean + stdv*{-1, 0, 1}
] &,
Transpose @ Select[
Table[
regnet[xvalues, NetEvaluationMode -> "Train"],
{i, nSamples}
], ListQ]],
{xvalues},
ValueDimensions -> 3
]
];
samples = sampleNetAlpha[trainedNet, Range[-5, 5, 0.05], 200];
Show[
ListPlot[
samples,
Joined -> True,
Filling -> {1 -> {2}, 3 -> {2}},
PlotStyle -> {Lighter[Blue], Blue, Lighter[Blue]}
],
ListPlot[exampleData, PlotStyle -> Red],
ImageSize -> 600,
PlotRange -> All
]
Ultimately, you'd need to do some validation tests to calibrate the parameters of your model. To give a feel of how the α parameter influences the result, below are some figures from previous runs with different α parameters.
The other parameters that were used were:
pdrop = 0.5;
units = 200;
activation = Ramp;
λ2 = 0.001;
k = 25;
α = 0.1
![enter image description here][28]
α = 0.5
![enter image description here][29]
α = 1
![enter image description here][30]
##Interactive example##
Below is a dynamic example (inspired by the link above) of a network that is continuously retrained on the data. You can edit the points by dragging the locators and delete points by alt-clicking.
DynamicModule[{
exampleData,
net ,
prob = 0.2,
\[Lambda] = 0.01,
rounds = 10,
sampleNumber = 100,
samples,
l = 2,
nlayers = 300,
activation = Ramp,
init,
sampleNet,
xmin = -5,
xmax = 5,
ymin = -2,
ymax = 2
},
exampleData = Table[ (*initialise training data *)
{x, Sin[x]} + RandomVariate[NormalDistribution[0, .15]],
{x, RandomVariate[UniformDistribution[{-3, 3}], 15]}
];
(* Function to sample the noisy net multiple times and calculate mean + stdev *)
sampleNet[net_NetChain, xvalues_List, sampleNumber_Integer?Positive] :=
PreemptProtect[
TimeSeries[
Map[
With[{
mean = Mean[#],
stdv = Sqrt[Variance[#] + (2 \[Lambda] Length[exampleData])/(l^2 (1 - prob))]
},
mean + stdv*{-1, 0, 1}
] &,
Transpose@Select[
Table[
net[xvalues, NetEvaluationMode -> "Train"],
{i, sampleNumber}
],
ListQ
]
],
{xvalues},
ValueDimensions -> 3
]
];
(* Network initialisation function. Necessary when one of the network parameters is changed. *)
init[] := PreemptProtect[
net = NetInitialize@NetChain[
{
LinearLayer[nlayers],
ElementwiseLayer[activation],
DropoutLayer[prob],
1
},
"Input" -> "Scalar",
"Output" -> "Scalar"
]
];
init[];
samples = sampleNet[net, N@Subdivide[xmin, xmax, 100], sampleNumber];
DynamicWrapper[
Grid[{
(* Controls *)
{
Labeled[Manipulator[Dynamic[l], {0.01, 10}],
Tooltip["l", "GP prior length scale"], Right],
Labeled[Manipulator[Dynamic[\[Lambda]], {0.0001, 0.1}],
Tooltip["\[Lambda]", "L2 regularisation coefficient"], Right]
},
{
Labeled[Manipulator[Dynamic[sampleNumber], {10, 500, 1}], "# samples",
Right],
SpanFromLeft
},
{
Labeled[Manipulator[Dynamic[prob], {0, 0.95}, ContinuousAction -> False],
Tooltip["p", "Dropout probability"], Right],
Labeled[
Manipulator[Dynamic[nlayers], {20, 500, 1}, ContinuousAction -> False],
"# layers", Right]
},
{
Labeled[
PopupMenu[
Dynamic[activation],
{
Ramp, Tanh, ArcTan, LogisticSigmoid, "ExponentialLinearUnit",
"ScaledExponentialLinearUnit",
"SoftSign", "SoftPlus", "HardTanh", "HardSigmoid"
},
ContinuousAction -> False
],
"Activation function"
,
Right
],
(* This resets the network if one of the network parameters changes *)
DynamicWrapper[
"",
init[],
SynchronousUpdating -> False,
TrackedSymbols :> {activation, prob, nlayers}
]
},
(* Main contents *)
{
Labeled[
LocatorPane[
Dynamic[exampleData],
Dynamic[
Show[
ListPlot[exampleData, PlotStyle -> Red],
ListPlot[
samples,
Joined -> True,
Filling -> {1 -> {2}, 3 -> {2}},
PlotStyle -> {Lighter[Blue], Blue, Lighter[Blue]}
],
ImageSize -> 600,
PlotRange -> {{xmin, xmax}, {ymin, ymax}}
],
TrackedSymbols :> {samples, exampleData}
],
ContinuousAction -> False,
LocatorAutoCreate -> All
],
"1 \[Sigma] error bands (\[AltKey] + click to delete points)",
Top
],
SpanFromLeft
}
},
BaseStyle -> "Text",
Alignment -> Left
],
(* Continuously retrain the net on the current examples and resample the network *)
net = Quiet@With[{
new = NetTrain[
net,
<|
"Input" -> exampleData[[All, 1]],
"Output" -> exampleData[[All, 2]]
|>,
LossFunction -> MeanSquaredLossLayer[],
Method -> {"ADAM", "L2Regularization" -> \[Lambda], "LearningRate" -> 0.005},
MaxTrainingRounds -> rounds,
TrainingProgressReporting -> None
]
},
If[ Head[new] === NetChain, new, net]
];
samples = sampleNet[net, N@Subdivide[xmin, xmax, 50], sampleNumber],
SynchronousUpdating -> False
]
]
[1]: https://blog.wolfram.com/2018/05/31/how-optimistic-do-you-want-to-be-bayesian-neural-network-regression-with-prediction-errors/
[2]: http://reference.wolfram.com/language/tutorial/NeuralNetworksRegressionWithUncertainty.html
[3]: http://reference.wolfram.com/language/ref/method/GaussianProcess.html
[4]: http://mlg.eng.cam.ac.uk/yarin/blog_3d801aa532c1ce.html
[5]: http://www.cs.ox.ac.uk/people/yarin.gal/website/thesis/thesis.pdf
[6]: https://www.cs.toronto.edu/~radford/ftp/thesis.pdf
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6114regression.png&userId=839741
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4624regression2.png&userId=839741
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=58571.png&userId=20103
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=188122.png&userId=20103
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=63162.png&userId=20103
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=92003.png&userId=20103
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=18954.png&userId=20103
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=48095.png&userId=20103
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=109286.png&userId=20103
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=91537.png&userId=20103
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=88.png&userId=20103
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=73578.png&userId=20103
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=38449.png&userId=20103
[20]: http://reference.wolfram.com/language/ref/NetExtract.html
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=645310.png&userId=20103
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=156311.png&userId=20103
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=b1.png&userId=20103
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=b2.png&userId=20103
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=b3.png&userId=20103
[26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=b4.png&userId=20103
[27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=b5.png&userId=20103
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=b6.png&userId=20103
[29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=b7.png&userId=20103
[30]: http://community.wolfram.com//c/portal/getImageAttachment?filename=b8.png&userId=20103Sjoerd Smit2018-04-13T11:19:59ZObtain the estimated value from GeoDensityPlot or GeoContourPlot?
https://community.wolfram.com/groups/-/m/t/2020033
I want to know the estimated value at a specific location (for example, the University of Chicago, GPS: 41.788056, -87.599518). What should I do?
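One approach (an assumption, untested) is to rebuild the interpolation the plot uses internally with ListInterpolation over the same DataRange and query it at the desired coordinates. Note that the example data below only covers longitudes 0 to 180, so the Chicago longitude would fall outside it:

```mathematica
data = Table[lat, {lat, 0, 90}, {lon, 0, 180}];
f = ListInterpolation[data, {{0, 90}, {0, 180}}];
f[41.788056, 100.] (* estimated value at latitude 41.788056, longitude 100 *)
```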
The following are examples provided in the documentation.
GeoDensityPlot[Table[lat, {lat, 0, 90}, {lon, 0, 180}],
DataRange -> {GeoPosition[{0, 0}], GeoPosition[{90, 180}]}]Tsai Ming-Chou2020-07-06T02:38:19Z[Solved] Solutions using Radicals
https://community.wolfram.com/groups/-/m/t/2019592
I am brand new to Mathematica, but would greatly value your help.
Mathematica should be able to give exact roots to any polynomial of degree up to and including four and does so *most* of the time.
*However*, when I ask it for the exact solutions to, amongst other examples,
x^4-x-2=0
it insists on giving me numerical approximations to the equation, whether I use Solve or Reduce.
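For reference (a sketch, untested here): Solve normally returns exact Root objects for such quartics rather than decimal approximations, and radical forms can be forced with ToRadicals or with the Cubics and Quartics options:

```mathematica
ToRadicals[Solve[x^4 - x - 2 == 0, x]]
(* or equivalently *)
Solve[x^4 - x - 2 == 0, x, Cubics -> True, Quartics -> True]
```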
It must be me, what am I doing wrong?Anthony Morris2020-07-05T06:54:30Z[Solved] "The first argument is expected to be a string, a valid..."
https://community.wolfram.com/groups/-/m/t/2019837
First day trying out cloud notebooks.
Unsure why this produces an error:
CloudDeploy[FormFunction[{{"input","Comma Separated Words or Phrases"}->"String"},Sort[WordFrequencyData[{#input}],Greater]]]Casey Martell2020-07-05T14:42:02ZGraphing double pendulum position equation
https://community.wolfram.com/groups/-/m/t/2018249
Hi, there all you guys. I'm so happy to be part of the community.
For a couple of days, I've been working on both simple and double pendulum motion. Using Lagrange's equations, I managed to find the equations of motion for both problems. Graphing the motion of the simple pendulum wasn't a big deal, but I'm stuck trying to graph the motion in the second case. Perhaps I'm not using the DSolve command correctly to solve the system of differential equations I have to deal with.
Thank you so much for your help!John Mosquera2020-07-02T17:40:43ZExporting data to ASCII format
https://community.wolfram.com/groups/-/m/t/2019922
How does one export Mathematica data into ASCII format?
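For a flat numeric list like the one below (assuming it is stored in a variable, here called data), Export with the "Table" or "CSV" format writes plain ASCII text:

```mathematica
Export["data.txt", data, "Table"] (* one number per line *)
Export["data.csv", {data}, "CSV"] (* comma-separated, single row *)
```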
{0.356329, 0.385175, 0.416346, 0.450027, 0.486421, 0.525741, \
0.568222, 0.614113, 0.663687, 0.717232, 0.775063, 0.837518, 0.904959, \
0.977776, 1.05639, 1.14125, 1.23284, 1.33168, 1.43832, 1.55337, \
1.67746, 1.81128, 1.95555, 2.11107, 2.27865, 2.4592, 2.65365, 2.863, \
3.08834, 3.33077, 3.5915, 3.87179, 4.17296, 4.4964, 4.84358, 5.21601, \
5.61528, 6.04303, 6.50094, 6.99077, 7.51428, 8.07326, 8.66953, \
9.30488, 9.98109, 10.6999, 11.4629, 12.2717, 13.1277, 14.032, \
14.9858, 15.9896, 17.044, 18.1488, 19.3038, 20.508, 21.7598, 23.0573, \
24.3975, 25.777, 27.1915, 28.6357, 30.1037, 31.5885, 33.0823, \
34.5764, 36.0615, 37.5271, 38.9625, 40.3562, 41.6962, 42.9707, \
44.1675, 45.2747, 46.2809, 47.1753, 47.9479, 48.59, 49.0941, 49.4541, \
49.6658, 49.7267, 49.6359, 49.3945, 49.0056, 48.4737, 47.8051, \
47.0077, 46.0904, 45.0633, 43.9375, 42.7244, 41.436, 40.0843, \
38.6815, 37.2392, 35.7689, 34.2813, 32.7864, 31.2938, 29.8117, \
28.3479, 26.9091, 25.5012, 24.1292, 22.7971, 21.5085, 20.2659, \
19.0714, 17.9263, 16.8314, 15.7871, 14.7932, 13.8493, 12.9546, \
12.108, 11.3084, 10.5543, 9.84403, 9.17604, 8.54856, 7.95981, \
7.40798, 6.89128, 6.4079, 5.95609, 5.53411, 5.14028, 4.77296, 4.4306, \
4.11168, 3.81474, 3.53843, 3.28141, 3.04245, 2.82037, 2.61404, \
2.42242, 2.24451, 2.07938, 1.92615, 1.78401, 1.65217, 1.52992, \
1.41658, 1.31153, 1.21417, 1.12395, 1.04036, 0.962929, 0.891208, \
0.824783, 0.763271, 0.706313, 0.653577, 0.604755, 0.559558, 0.517722, \
0.478998, 0.443158, 0.409988, 0.379292, 0.350885, 0.324599, 0.300276, \
0.277771, 0.256948, 0.237682, 0.219858, 0.203367, 0.188111, 0.173998, \
0.160941, 0.148863, 0.13769, 0.127355, 0.117794, 0.108951, 0.10077, \
0.0932038}Sam Rametse2020-07-05T10:22:49ZIssue with FindRoot command
https://community.wolfram.com/groups/-/m/t/2019162
Hi, I have a problem with the FindRoot command (code attached). I cannot find the solution. Is it because of the functional form I am using (one is the CRRA and the other is the exponential function)? Does anyone know how I can solve this issue? thanks, a.Alessandro M.2020-07-04T17:04:42ZMathematics- Can equations for a, b & c be worked out?
https://community.wolfram.com/groups/-/m/t/2019428
**Is this possible to work out?**
From the three equations below I need explicit formulas for a, b, and c.
a equals the same value in all three equations.
b equals the same value in all three equations.
c equals the same value in all three equations.
I have all the known variables apart from a,b and c.
D - G = a(D-1) + b(D-1)² + c(D-1)³
E - H = a(E-1) + b(E-1)² + c(E-1)³
F - J = a(F-1) + b(F-1)² + c(F-1)³
**Known Variables**
D = 1.8926846
E = 2.5686994
F = 3.3756357
G = 1.892797681
H = 2.568917298
J = 3.376008599
D - G = -1.1308073E-04
E - H = -2.1789774E-04
F - J = -3.7289941E-04
D-1 = 0.8926846
E-1 = 1.5686994
F-1 = 2.3756357
**Unknown equation (help !)**
a = (equation required, must not contain variables b or c)
b = (equation required, must not contain variables a or c)
c = (equation required, must not contain variables a or b)
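One observation that may help, sketched below in Wolfram Language: the three equations are linear in a, b, and c, so this is a 3×3 linear system and LinearSolve (or Cramer's rule, if explicit formulas are wanted) gives the answer directly. The numbers are copied from the post; `dd`, `ee`, `ff` are used because D and E are reserved symbols in Wolfram Language.

    (* Sketch: the system is linear in {a, b, c}; each row of the coefficient
       matrix is {x-1, (x-1)^2, (x-1)^3} for x in {D, E, F} *)
    {dd, ee, ff} = {1.8926846, 2.5686994, 3.3756357};
    rhs = {-1.1308073*^-4, -2.1789774*^-4, -3.7289941*^-4};  (* D-G, E-H, F-J *)
    mat = {# - 1, (# - 1)^2, (# - 1)^3} & /@ {dd, ee, ff};
    {a, b, c} = LinearSolve[mat, rhs]
    (* should reproduce approximately the quoted a, b, c values *)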
**To possibly help as an example**
Below are values of a,b and c that **I have been given**;
a = -1.145854E-04
b = -1.095579E-05
c = -2.898115E-06
These values of a, b, and c were calculated from the known variables and the three equations above, but I don't know the formulas by which a, b, and c were worked out separately from that data.
The known variables come from measurements, so if they change then a, b, and c will also change, which is why I need the equations to work out the new values of a, b, c.J F2020-07-04T15:59:04ZUse output of the Position function to modify an expression?
https://community.wolfram.com/groups/-/m/t/2018853
Hello
In the following expression
exp = HoldForm[#1^2 "+"] & /@ {4, 1, 7}
I want to remove the last "+". Its position I get with:
pos = Last[Position[exp, "+"]]
I know it's correct because :
exp[[##]] & @@ pos == "+" (* True*)
But
exp[[##]] & @@ pos = ""
will not work. I cannot understand it since
exp[[3, 1, 2]] = ""
works OK.
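A hedged explanation sketch: Set needs an assignable location (the symbol exp with literal part indices) on its left-hand side, so the result of applying a pure function is not something that can be assigned to. Splicing the position list in with Sequence, or using ReplacePart, are the usual idioms:

    (* Sketch: Part-assignment works when exp itself appears on the
       left-hand side; Sequence splices the position list into Part *)
    exp = HoldForm[#1^2 "+"] & /@ {4, 1, 7};
    pos = Last[Position[exp, "+"]];        (* {3, 1, 2} *)
    exp[[Sequence @@ pos]] = "";           (* in-place assignment, works *)
    (* equivalently, without mutating exp: *)
    exp2 = ReplacePart[exp, pos -> ""];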
Please explain what is going on ? Thanksjan potocki2020-07-03T20:15:41ZPlot the change in potential energies depending on the angle phi
https://community.wolfram.com/groups/-/m/t/2018799
I have a task where I need to plot the change in potential energy as a function of the angle phi: I need to vary phi and obtain the corresponding potential energy values, and then identify the stable segments of the mechanism on the chart.
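Without the notebook specifics it's hard to be concrete, but a generic sketch of the pattern may help; U below is a made-up stand-in for the mechanism's actual potential energy, and stable positions are taken to be the local minima (U' = 0 with U'' > 0):

    (* Hypothetical sketch: plot potential energy vs. phi and mark the
       stable equilibria (local minima); U is an assumed placeholder *)
    U[phi_] := -Cos[phi] - (3/10) Cos[2 phi];
    crit = phi /. Solve[U'[phi] == 0 && 0 <= phi < 2 Pi, phi];
    stable = Select[crit, U''[#] > 0 &];
    Plot[U[phi], {phi, 0, 2 Pi}, AxesLabel -> {"\[Phi]", "U"},
     Epilog -> {Red, PointSize[Large], Point[{#, U[#]} & /@ stable]}]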
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/9a023856-92fe-4d60-94a9-18972e5bfbc5Sanzhar Mukatay2020-07-03T17:31:08Z[Solved] DSolve in Wolfram|Alpha?
https://community.wolfram.com/groups/-/m/t/2018448
I tried the example:
DSolve[{y'[x]==y[x]}, y[x], x]
and just get a reply that Wolfram|Alpha can't understand the input, even though this is an exact copy of the documentation example.jon styles2020-07-03T07:18:27ZUse FormControl for Changing values in a List of Parameters?
https://community.wolfram.com/groups/-/m/t/2018335
Dear members of the community,
I have been trying to change values in a list. I have used FormControl for that purpose.
par1 = FormControl[Association[Thread[coefLabels -> 0]],
Thread[coefLabels -> "Number"]]
This gives me the chance to modify values, however, I do not know how to recover the updated values.
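Not certain of your exact setup, but a sketch using Setting, which extracts the current values from a form control; `coefLabels` is assumed here to be a list of label strings, as in the post:

    (* Sketch: Setting extracts the current values of a FormControl as an
       association; coefLabels is an assumed placeholder list of labels *)
    coefLabels = {"a", "b", "c"};
    par1 = FormControl[Association[Thread[coefLabels -> 0]],
       Thread[coefLabels -> "Number"]];
    Setting[par1]   (* association of the current (edited) values *)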
I am afraid this could be very simple, but I do not know how to do it.
Cheers
JesusJ Jesus Rico-Melgoza2020-07-03T01:07:33ZDracula Theme for Wolfram Notebooks
https://community.wolfram.com/groups/-/m/t/2018326
![example-screenshot][2]
This theme is my go-to for dark mode, and I use it across several applications. So, I implemented the Dracula theme for Wolfram Notebooks!
If you're interested in using it, you can get it from two places:
## [GitHub repo](https://github.com/jldohmann/wolfram-notebooks)
### Get the files
If you are a git user, you can get the theme and keep up to date by cloning the repo:
$ git clone https://github.com/jldohmann/wolfram-notebooks.git
### Install the stylesheet
1. The stylesheet is titled "Dracula.nb". Move it into the SystemFiles/FrontEnd/StyleSheets folder under your user base directory. Run this code to open that folder:
SystemOpen[FileNameJoin[{$UserBaseDirectory, "SystemFiles", "FrontEnd", "StyleSheets"}]]
You may need to restart the application for the change to take effect.
2. Open a new notebook, and from the menu go to Format > Stylesheet > Dracula to change the stylesheet to the Dracula theme.
Please feel free to file issues for suggestions or bugs!
## Function Repository
1. Get the [DraculaTheme Resource](https://resources.wolframcloud.com/FunctionRepository/resources/DraculaTheme)
2. Install the stylesheet system-wide with the following:
ResourceFunction["DraculaTheme"]["Install"]
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=screenshot.png&userId=515558Jesse Dohmann2020-07-03T01:01:57ZUse Amazon AWS to run my Wolfram Code?
https://community.wolfram.com/groups/-/m/t/2018174
Hello,
I like to look for patterns in numbers, but sometimes it takes too long to get enough examples using my laptop. How do I use Amazon AWS to run billions of calculations written in Wolfram Language?
I'm not a computer scientist or anything like that, just a hobbyist.Jonathan Silverman2020-07-02T23:53:02ZSiemens sine-wave stars
https://community.wolfram.com/groups/-/m/t/2012431
Hi Wolframians,
I am new to the Wolfram Language, and I have only used a trial version of Mathematica so far, to see whether it can replace my MATLAB-based Siemens sine-wave star measurement for my test charts.
I am under the impression that Mathematica is much more powerful, but as with anything new, it is hard for me to find the right functions or an algorithm for measuring the resolution of a Siemens sine-wave star. I use such stars in my ViDi Labs test charts (https://vidilabs.com/testcharts.html), and I developed a small MATLAB program that finds the circles in an image and then measures resolution (MTF, the Modulation Transfer Function) based on the 10% depth of modulation, as described by the IEC 62676-5 standard.
If anybody can help me, either with a list of functions to look at or by explaining exactly how to calculate the above-mentioned resolution, I would gladly purchase a full licence of Mathematica and investigate this further.
I am happy to consider paying somebody for their time if they can do what I have described in the video at the above-mentioned URL.
I am also adding an attachment, a simple explanation on how this resolution is measured.
Thank you so much.Vlado Damjanovski2020-06-25T01:30:17Z