Community RSS Feed
https://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions from all groups, sorted by activity.

Hawaii weather stations list
https://community.wolfram.com/groups/-/m/t/2244206
Dear all,
I need a list of weather stations in Hawaii. Is it possible to extract that with Mathematica?
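One way to get such a list (a sketch, not part of the original question; the city names and station count below are arbitrary choices) is WeatherData's nearest-stations form:

```mathematica
(* Sketch: collect the weather stations nearest a few Hawaiian cities. *)
(* "Honolulu"/"Hilo" and the count 30 are arbitrary; merge and dedupe. *)
stations = Union @@ (WeatherData[{#, 30}] & /@ {"Honolulu", "Hilo"});
(* inspect one station's position *)
WeatherData[First[stations], "Coordinates"]
```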
Thank you in advance for your kind support.

Alex Teymouri, 2021-04-14T20:20:42Z

How do I represent and compute with matrix-valued functions of a vector?
https://community.wolfram.com/groups/-/m/t/2247942
In the attachment I pose a problem of representing and computing with a matrix-valued function of a vector. Before I go to the effort of learning Mathematica, I would like to know that it can do these kinds of computations and yield results in the usual notation, or something close to it. I would appreciate any guidance, including being told that my request is impossible. Thanks.

Darwin Poritz, 2021-04-19T21:33:03Z

Apollonius, Thales' Theorem and Orthic Triangle
https://community.wolfram.com/groups/-/m/t/2247692
*The largest solid circle is always tangent to the three dashed circles whose diameters coincide with the respective sides of the given triangle.*
![enter image description here][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2021-04-19_15-24-09.gif&userId=23928
[2]: https://www.wolframcloud.com/obj/shenghuiy/Published/Apollonius.nb

Shenghui Yang, 2021-04-19T21:23:14Z

Crown structure of pine trees: analyzing deep hierarchical data
https://community.wolfram.com/groups/-/m/t/2246602
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/c3b90a54-36a1-4b5b-bcd6-13d7dbc178f6
[Original]: https://www.wolframcloud.com/obj/aem94/Published/Using%20Datasets%20to%20analyze%20deep%20hierarchical%20data.nb

Assaad Mrad, 2021-04-18T13:43:47Z

[WSG21] Daily Study Group: System Modeler for Engineering
https://community.wolfram.com/groups/-/m/t/2232566
A new Daily Study Group on model-based systems engineering starts next week. Certified instructor [@Ankit Naik][at0] will lead the group with assistance from MathCore engineers. See details about daily topics and sign up: [https://wolfr.am/UoUvzCnD][1]
[at0]: https://community.wolfram.com/web/ankitn
[1]: https://wolfr.am/UoUvzCnD

Jamie Peterson, 2021-03-30T22:43:11Z

Perspective anamorphism and the ambiguous garage roof
https://community.wolfram.com/groups/-/m/t/2246017
![enter image description here][1]
The YouTube illusion "[Ambiguous Garage Roof][2]" by Kokichi Sugihara demonstrates once more that what we see does not always represent reality. His beautiful illusion is based on perspective anamorphism. This kind of anamorphic effect shows a completely deformed image that needs a specific viewpoint to be seen undeformed. It would be hard to equal the perfection of Mr. Sugihara, but we can explore this type of anamorphic illusion with Mathematica...
![enter image description here][3]
Above is a deformed tubular curve observed by the eye as a circular one. The eye tends to put the observed image in a plane perpendicular to the view direction: the "*view plane*". We take the view plane as parallel with the y-axis and forming the viewing angle phi with the z-axis.
viewPlane = InfinitePlane[{0, 0, 0}, {{0, 1, 0}, {1, 0, Cot[phi]}}];
The "*projection surface*", where the deformed image is located, can be any surface, flat or curved. Here are two examples: a sphere (left) and a sinusoidal surface (right). In both cases the deformed (perspective anamorphic) image is very different, but the observer sees the same circle from the right viewing angle.
![enter image description here][4]
The projection surface we will use here is a folded set of rectangles parallel to the x-axis:
projProfile = {{-2.5, -3.9}, {-1.05, -2.8}, {0, -3.9}, {1.05, -2.8}, {2.5, -3.9}};
projectSurface =
RegionUnion[
Rationalize[
Polygon /@
Map[Flatten[#, 1] &,
Transpose[{Partition[Prepend[#, -10] & /@ projProfile, 2, 1],
Reverse /@
Partition[Prepend[#, 10] & /@ projProfile, 2, 1]}]]]];
![enter image description here][5]
The *observed image* consists of image points. Rays, straight lines perpendicular to the view plane, leave the image points and intersect the projection surface to form the *anamorphic image*. This function computes these intersections:
intersect[pt_List] :=
Quiet[First[{x, y, z} /.
NSolve[Element[{x, y, z}, HalfLine[pt, {1, 0, -Tan[phi]}]] &&
Element[{x, y, z}, projectSurface], {x, y, z}]]]
To start with our function, we compute the anamorphic image of a circle in the view plane:
phi = Pi/4;
circle = Table[
Prepend[AngleVector[{0, 2}, {2, t}], 0], {t, 0, 2 Pi, Pi/20}];
virtCircle = RotationTransform[phi, {0, 1, 0}][circle];
rays = HalfLine[#, {1, 0, -Tan[phi]}] & /@ virtCircle;
anaCircle = intersect /@ virtCircle;
Graphics3D[{{Opacity[.35],
InfinitePlane[{0, 0, -4}, {{0, 1, 0}, {1, 0, 0}}]}, {Opacity[.25],
viewPlane, projectSurface[[-1]]}, {Green, Tube[virtCircle, .1],
Tube[anaCircle, .1]}, {AbsoluteThickness[.5], Red, Thin, rays}}]
![enter image description here][6]
The perspective anamorphic image is a curve on the projection surface. It takes different shapes as we change our viewing angle, ranging from the original circle through the anamorphic image to the projection surface profile. The observer sees the original circle only if the viewing angle is phi.
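To see this transition interactively, here is a small sketch (my addition) that recomputes the anamorphic curve for a range of viewing angles. It assumes circle, intersect, and projectSurface as defined above, and uses Block to localize phi, which intersect reads globally:

```mathematica
(* Sketch (my addition): recompute the anamorphic curve for a range of *)
(* viewing angles. Block localizes phi so each frame recomputes the    *)
(* intersections with the projection surface.                          *)
Manipulate[
 Block[{phi = p},
  Graphics3D[{Green,
    Tube[intersect /@ RotationTransform[phi, {0, 1, 0}][circle], .1]},
   Boxed -> False]], {p, Pi/8, Pi/2.5}]
```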
![enter image description here][7]
We now look at the garage roof cross section profile. This consists of line segments defined by a list of point coordinates. We chose a roof profile with sections aligned to the projection surface profile:
roofProfile = {{0, -2, 0}, {0, -1.05`, 0.6}, {0, 0, 1}, {0, 1.05,
0.8}, {0, 2, 0}};
ListLinePlot[{projProfile, Rest /@ roofProfile}]
![enter image description here][8]
This is the roof profile and its anamorphic image on the projection surface:
![enter image description here][9]
We are now ready to test the complete garage roof as a set of 4 rectangles between roof profiles at the front and back:
roofBack = roofProfile /. {x_, y_, z_} -> {x, y, z + 4 Cos[phi]};
roofLines =
Transpose[{roofProfile,
roofProfile /. {x_, y_, z_} -> {x + 4, y, z}}];
virtlRoofs =
RotationTransform[phi, {0, 1, 0}] /@ {roofBack, roofProfile};
anaRoofs = Map[intersect, virtlRoofs, {2}];
Graphics3D[{AbsoluteThickness[3],
Map[Line, {roofProfile,
TranslationTransform[{4, 0, 0}] /@ roofProfile, roofLines}]},
Boxed -> False]
![enter image description here][10]
Here is the perceived roof in the view plane and its anamorphic image on the projection surface.
Graphics3D[{{Opacity[.35],
InfinitePlane[{0, 0, -4}, {{0, 1, 0}, {1, 0, 0}}]}, {Opacity[.25],
viewPlane, Opacity[.15], projectSurface[[-1]]}, {Gray,
Tube[virtlRoofs, .03], Tube[Transpose[virtlRoofs], .03]}, {Gray,
Tube /@ anaRoofs, Tube /@ Transpose[anaRoofs]}, {Red,
Map[HalfLine[#, {1, 0, -Tan[phi]}] &, virtlRoofs, {2}]}}]
![enter image description here][11]
This GIF shows a sequence of different viewing directions of the complete garage with its anamorphic roof:
Module[{pts1},
 (* anaRoof1 and anaRoof2 are the back and front components of anaRoofs computed above *)
 {anaRoof1, anaRoof2} = anaRoofs;
 pts1 = RotationTransform[\[Phi], {0, 0, 1}] /@
   Join[Flatten[{anaRoof2, anaRoof1},
1], {{3.52, -2., -3.9 - 1}, {7.52, -2., -3.9 - 1}, {3.52,
2., -3.9 - 1}, {7.52, 2., -3.9 - 1}}];
frames =
ParallelTable[
Graphics3D[{FaceForm[LightGray], EdgeForm[Thick],
ph = Polyhedron[
pts1, {{1, 2, 7, 6}, {1, 2, 7, 6} + 1, {1, 2, 7, 6} +
2, {1, 2, 7, 6} + 3, {1, 6, 12, 11}, {5, 10, 14, 13}, {11,
12, 14, 13}}]}, Lighting -> "Neutral", Boxed -> False,
ViewPoint ->
20 {-Cos[\[Phi]], Sin[\[Phi]], Tan[\[Pi]/4]}], {\[Phi], \[Pi]/4,
0, -\[Pi]/160}];
Export[NotebookDirectory[] <> "garage views.GIF", frames,
AnimationRepetitions -> \[Infinity]]];
![enter image description here][12]
Since the projection surface used is a developable surface, we can build the anamorphic garage and its roof out of cardboard paper. This code computes the 2D development of the anamorphic roof...
development[d_, d1_, d2_, d3_, d4_, d5_, z0_ : 40, d0_ : 2.5] :=
Module[{projProfile, devlOrdinates, anchors},
projProfile = {{-d0, d1 - z0}, {-d, d2 - z0}, {0, d3 - z0}, {d,
d4 - z0}, {d0, d5 - z0}};
devlOrdinates =
Prepend[Accumulate@
Apply[EuclideanDistance, Partition[projProfile, 2, 1], {1}], 0];
Interpolation[Transpose@{projProfile[[All, 1]], devlOrdinates},
InterpolationOrder -> 1]]
With[{d1 = 0.1`, d2 = 1.2, d3 = 0.1, d4 = 1.2, d5 = 0.1, d = 1.05},
projProfile = {{-2.5, -3.9}, {-1.05, -2.8}, {0, -3.9}, {1.05, -2.8}, \
{2.5, -3.9}};
roofProfile = {{0, -2, 0}, {0, -1.05`, 0.6}, {0, 0, 1}, {0, 1.05,
0.8}, {0, 2, 0}};
anaOrdinates1 =
development[d, d1, d2, d3, d4, d5] /@ roofBack[[All, 2]];
anaOrdinates2 =
development[d, d1, d2, d3, d4, d5] /@ roofProfile[[All, 2]];
 {anaRoof1, anaRoof2} = anaRoofs;
 devVertices1 = Transpose@{anaRoof1[[All, 1]], anaOrdinates1};
 devVertices2 = Transpose@{anaRoof2[[All, 1]], anaOrdinates2};]
Graphics[{{AbsoluteThickness[3],
Line /@ Transpose[{devVertices1,
devVertices2}]}, {AbsoluteThickness[2], Line[devVertices1],
Line[devVertices2]}}]
![enter image description here][13]
... and we can use it to make a complete garage.
![enter image description here][14]
This collage shows the garage from different viewing angles. Only a viewing angle of 45 degrees, with the view plane parallel to the y-axis, gives the desired, "realistic", view.
![enter image description here][15]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3178introcollage.jpg&userId=68637
[2]: https://www.youtube.com/watch?v=KtA6u1HIqbg
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=5160circleperspectiveeyeflat.png&userId=68637
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=10693surfaceexamples.png&userId=68637
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2733projectionsurface.png&userId=68637
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=8907circleperspective.png&userId=68637
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ring1.gif&userId=68637
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=crosssectionprofiles.png&userId=68637
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=profileperspective.png&userId=68637
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=roof.png&userId=68637
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=roofprojection.png&userId=68637
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=garageviews.gif&userId=68637
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=development.png&userId=68637
[14]: https://community.wolfram.com//c/portal/getImageAttachment?filename=papergarage.JPG&userId=68637
[15]: https://community.wolfram.com//c/portal/getImageAttachment?filename=collage-3.jpg&userId=68637

Erik Mahieu, 2021-04-17T12:55:02Z

NDSolveValue: crashes kernel for FEM model
https://community.wolfram.com/groups/-/m/t/2247416
Hi everyone,
I am trying to solve a real-valued 2D advection-diffusion equation on the unit square. My Mathematica version is 12.1.1.0 on Windows 10. All seems to work well until the specified velocity coefficient functions get too complicated and reach some kind of internal threshold, in which case upon execution of NDSolveValue[] the kernel beeps and shuts down immediately. I would like to know if anyone can suggest a way forward to correcting this problem. Method details follow.
First a mesh is constructed:
W = ImplicitRegion[True,{{x,0,1},{y,0,1}}];
mesh = ToElementMesh[W, MaxCellMeasure->mcell];
The PDE operator is quite conventional:
operator = Derivative[1,0,0][u[t,x,y]] - Dmol Derivative[0,2,0][u[t,x,y]] - Dmol Derivative[0,0,2][u[t,x,y]] + vx[t,x,y] Derivative[0,1,0][u[t,x,y]] + vy[t,x,y] Derivative[0,0,1][u[t,x,y]];
The boundary conditions are simple (natural Neumann boundaries are assumed elsewhere):
boundaryconditions = DirichletCondition[u[t,x,y]==0, x==1];
The initial condition is a simple Gaussian blob:
uInitial = Exp[-({x, y} - centroid).Inverse[s2].({x, y} - centroid)/2];
The problem parameters mcell, Dmol, vx, vy, centroid, s2 are specified as inputs. Once the various problem parameters are set, the solution is generated by the following statement:
uSolution =
NDSolveValue[{operator == 0, boundaryconditions, u[0, x, y] == uInitial},
u, {t, 0, 1}, {x, y} \[Element] mesh, AccuracyGoal -> 10, InterpolationOrder -> All];
For simple cases with steady, uniform and homogeneous velocity coefficient functions {vx, vy} and low mesh Peclet numbers NDSolveValue returns accurate solutions quite readily for meshes with up to 100k triangles, which is all my RAM can handle. So far so good.
The problems start occurring when I specify more interesting velocity coefficient functions. The velocities of most interest to me are generated by high-resolution time-dependent interpolations over a 501x501 grid of points in the unit square. These data points are generated by an underlying FD scheme of dimension Nfd x Nfd. If Nfd is too large then, somehow, NDSolveValue[] simply crashes the kernel. The limit I have found is Nfd = 140, beyond which crashes occur no matter how coarse my FEM mesh is (even down to only 4k triangles). The "Why the Beep?" command reports simply that the kernel has encountered an exception.
I would like to have higher Nfd as this guarantees a physically accurate velocity field, but advecting the Gaussian blob in such a field just crashes immediately (before the first time step is made).
**Questions:**
1. I recognize that my preferred vx and vy are very complicated, time-dependent and spatially heterogeneous functions (although continuous and non-singular), but I don't understand how the dimension of the FD problem that specifies them is relevant to generating a kernel crash in a separate function call. There must be another problem, potentially memory related, or perhaps related to sampling of the vx,vy functions onto the FE mesh.
2. Is there any understandable tool that can help me debug a kernel crash?
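Not an answer from the thread, but one low-tech diagnostic sketch: log kernel memory on a schedule while the solve runs, to see whether the crash is preceded by runaway allocation.

```mathematica
(* Sketch (not from the post): log kernel memory once per second while *)
(* the solve runs; TaskRemove stops the logging afterwards.            *)
task = SessionSubmit[
   ScheduledTask[Print[MemoryInUse[], " / ", MaxMemoryUsed[]], 1]];
(* ... evaluate the NDSolveValue call here ... *)
TaskRemove[task];
```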
Here are some example parameter values and a simple velocity specification that poses no problems for NDSolveValue[]:
s2 = IdentityMatrix[2]/625.0;
pos = {0.8,0.5};
Dmol = 0.0025;
mcell = 0.0004;
vx[t_,x_,y_] = -0.5;
vy[t_,x_,y_] = 0;
Cheers,
Mike
PS: Further resources (notebook, results, etc.) are available if people are interested.

Mike T, 2021-04-19T14:29:30Z

Plotting Bessel and Hankel functions on an interval?
https://community.wolfram.com/groups/-/m/t/2247269
![enter image description here][1]
Hello, I am trying to prepare this function for plotting, but it turned out to be more difficult than expected. I have written the following code for the three respective functions in the image:
u3[r_, \[Phi]_] := BesselJ[r, n]*Exp[I n phi], {r, 0, 0.5}, {\[Phi], 0, 2 \[Pi]}, {n, 0, 5};
u4[r_, \[Phi]_] := (BesselJ[3*r, n] + BesselY[3*r, n])*Exp[I n phi], {r, 0.5, 1}, {\[Phi], 0, 2 \[Pi]}, {n, 0, 5};
u5[r_, \[Phi]_] := HankelH1[r, n]*Exp[I n phi], {r, 0.5, 10}, {\[Phi], 0, 2 \[Pi]}, {n, 0, 5};
Then I was going to make a function W which sums the three together, and then plot W.
But I get an error over and over again for the Bessel and Hankel formulation. Based on the image given, where zeta 1 and 2 are constants, what is wrong here?
Any help appreciated!
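For what it's worth, a hedged sketch of one way to set this up (n is fixed here, and zeta1/zeta2 are placeholders for the constants in the image): note that BesselJ[n, x] and friends take the order as the first argument, and the three radial pieces can be combined into a single function with Piecewise before plotting.

```mathematica
(* Hedged sketch: n = 1 and zeta1 = zeta2 = 1 are arbitrary stand-ins. *)
(* BesselJ[n, x] etc. take the order first; the breakpoints follow the *)
(* ranges given in the post.                                           *)
With[{n = 1, zeta1 = 1, zeta2 = 1},
 u[r_, phi_] := Piecewise[{
    {BesselJ[n, r] Exp[I n phi], 0 <= r < 1/2},
    {(zeta1 BesselJ[n, 3 r] + zeta2 BesselY[n, 3 r]) Exp[I n phi],
     1/2 <= r < 1},
    {HankelH1[n, r] Exp[I n phi], r >= 1}}]];
Plot[Re[u[r, 0]], {r, 0.01, 5}]
```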
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=9533Untitled.jpg&userId=967554

Ser Man, 2021-04-19T13:17:04Z

Conditions on the number entered in InputField?
https://community.wolfram.com/groups/-/m/t/2247344
I am looking for a simple way to show that a number I have entered in an InputField in a CDF is higher or lower than acceptable, and to make sure that the entered number is not used for further calculations in the CDF. The only result of evaluation might, for instance, be that the InputField shows the same number as before the new number was entered. Another possibility is that the InputField shows the text "too high" or "too low".
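A minimal sketch of one way to do this (the bounds 0 and 10 are arbitrary): the second argument of Dynamic runs when a new value is committed, so the stored value is only updated if the number is acceptable; otherwise the field reverts and a message is shown.

```mathematica
(* Sketch: bounds 0 and 10 are arbitrary. Out-of-range input is not    *)
(* stored, so the field keeps showing the previous accepted value.     *)
DynamicModule[{x = 5, msg = ""},
 Column[{
   InputField[
    Dynamic[x,
     If[0 <= # <= 10, x = #; msg = "",
       msg = If[# > 10, "too high", "too low"]] &], Number],
   Dynamic[Style[msg, Red]]}]]
```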
Thanks in advance!

Laurens Wachters, 2021-04-19T13:36:10Z

Export to Excel file with headers?
https://community.wolfram.com/groups/-/m/t/2245509
Hi. I want to export a list from Mathematica to Excel, but would want to be able to plug column names into the Excel file.
Something like this:
aToy={{1,2,3},{4,5,6},{7,8,9}}
headers={"A","B","C"}
and then be able to do something like:
Export["C:\\aToy.xls", aToy (*, columnNames -> headers *)]
In this toy example, it would be very easy to do it manually in the Excel file,
but if you have tens of columns then it is another story...
Is this at all possible? Then, how?
Also, can I name the rows of the excel file?
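One sketch of a workaround (my addition, reusing the toy data above): prepend the header row, and for row names prepend a label column, then export the combined table.

```mathematica
(* Sketch: build the table with headers and row labels, then export;   *)
(* "XLSX" writes it all to one sheet. The names are arbitrary.         *)
aToy = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
headers = {"A", "B", "C"};
rowNames = {"r1", "r2", "r3"};
table = Prepend[MapThread[Prepend, {aToy, rowNames}],
   Prepend[headers, ""]];
Export["aToy.xlsx", table, "XLSX"]
```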
Thanks in advance,
Francisco

Francisco Gutierrez, 2021-04-16T15:55:08Z

Looking for a closed-form solution using Solve[ ]
https://community.wolfram.com/groups/-/m/t/2246994
I was wondering if a closed-form solution exists for the following equation:
a^2 Exp[x/a] + x^2/2 (1 - b) - a^2 - a x Exp[x/a] = 0
where a and b are Real numbers. I would like to obtain a solution of the form x=f(a,b).
I tried to use the following command:
Solve[a^2*Exp[x/a] + x^2/2*(1 - b) - a^2 - a*x*Exp[x/a] == 0, x,
Assumptions -> Element[a, Reals], Assumptions -> Element[b, Reals]]
but I obtained the following message: Solve: This system cannot be solved with the methods available to Solve.
I was wondering whether I could use the Lambert W function to solve this equation or if there is any way to obtain an approximated solution of this equation. I am looking for a solution for x<0, and I know that a>0, if this can make any difference.
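As a numerical fallback (a sketch with arbitrary example values of a and b, not from the post): x = 0 always solves the equation, so the search for the negative root should start away from the origin.

```mathematica
(* Sketch: a = 1., b = 0.5 are arbitrary example values. Since x = 0   *)
(* is always a root, start the search at a negative value.             *)
With[{a = 1., b = 0.5},
 FindRoot[a^2 Exp[x/a] + x^2/2 (1 - b) - a^2 - a x Exp[x/a] == 0,
  {x, -1.5}]]
```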
Thank you.
**Edited after the 1st reply:**
The last sentence is probably superfluous: I am interested in the general closed-form solution of this equation. Then I will use the part of the solution where x<0 (which happens when 0<b<1).

sebastiano piccolroaz, 2021-04-19T08:13:21Z

How to implement a custom regularisation for a given LinearLayer?
https://community.wolfram.com/groups/-/m/t/2190766
Hi everybody,
I would like to implement a custom norm regularisation on the weights of a LinearLayer. Let's suppose, for ease of discussion, that I want to implement an L1 regularisation. In this case my objective is to minimise a loss which is: (loss of task) + Total[Abs[weights]].
I tried to play around with NetArray without much success; here's an example:
net = NetGraph[
Association[
"linear" -> LinearLayer[1, "Weights" -> NetArray["c"]],
"reg" ->
FunctionLayer[{Total[
Abs[NetArray[<|"Name" -> "c", "Dimensions" -> 100|>]]]} &],
"thread" -> ThreadingLayer[#1 + #2 &]
],
{
NetPort["Input"] -> "linear",
"reg" -> "thread",
NetPort["linear", "Output"] -> "thread"
}]
That can be trained for example with:
dataTrain = Table[RandomReal[1, 100] -> {RandomReal[]}, 100];
trained = NetTrain[net, dataTrain]
But when I go to inspect the actual value of the weights, I get different results. That is:
NetExtract[trained, {"linear", "Weights"}]
is different from
NetExtract[trained, {"reg", "Net", 1, "Array"}]
How can I implement it?
Do you have any ideas/comment/observations?
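For what it's worth, if plain L2 weight decay would do instead of L1, a sketch (my addition, assuming the optimizer's "L2Regularization" Method suboption) avoids wiring the penalty into the graph at all:

```mathematica
(* Sketch: swap L1 for built-in L2 weight decay via the optimizer's    *)
(* "L2Regularization" suboption; the coefficient 0.01 is arbitrary.    *)
net2 = NetChain[{LinearLayer[1]}, "Input" -> 100];
dataTrain = Table[RandomReal[1, 100] -> {RandomReal[]}, 100];
trained2 = NetTrain[net2, dataTrain,
  Method -> {"ADAM", "L2Regularization" -> 0.01}]
```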
(I'm pretty sure that the example I gave is wrong, as I would like to minimise the loss of the output of the linear layer plus the sum of the absolute values of the weights, but I think the essence is the same.)

Ettore Mariotti, 2021-02-11T00:56:58Z

Error in NDSolveValue[ ]: solve the heat equation in 2D?
https://community.wolfram.com/groups/-/m/t/2245405
Dear All, as I'm learning Mathematica I have encountered troubles I can't seem to solve on my own. Here's my code:
Clear["Global`*"];
HeatEq = D[T[x, y, t], t] == Laplacian[T[x, y, t], {x, y}];
L = 2; l = 1/2; tmax = 15;
\[CapitalOmega] = Rectangle[{0, 0}, {L, l}];
boundaries = {T[x, y, 0] == 5, T[0, y, t] == 0, T[L, y, t] == 0,
T[x, 0, t] == 0, T[x, l, t] == 0};
boundaries2 = {DirichletCondition[T[x, y, t] == 0, True],
T[x, y, 0] == 5};
sol = NDSolveValue[Join[{HeatEq}, boundaries2],
T, {x, y} \[Element] \[CapitalOmega], {t, 0, tmax}];
Animate[DensityPlot[
sol[x, y, t], {x, y} \[Element] \[CapitalOmega]], {t, 0, tmax}]
As you wander through my code, here is what it is supposed to do :
- Clear all variables previously used, and define the Heat equation in 2 dimensions of space
- Define the rectangle I'm going to cool down.
- Define the boundaries (0°C on the edges, 5°C elsewhere)
- Solve the equation, and get the temperature.
- Plot an animation over time of the cooling
I'm facing two major issues here. The first one concerns the boundary conditions. When I use the DirichletCondition it seems to work "on its own", and yet produces strange results (where the boundaries are not respected in the simulation). Moreover, as you can see, I also defined the original boundary conditions by spelling them out edge by edge. Yet this doesn't work, and I really don't understand why. It produces the message: "Boundary condition T[x,1/2,t]==0 is not specified on a single edge of the boundary of the computational domain".
Secondly, I just can't find how to fix a color scale for an entire sequence of density plots. I want each color to correspond to a single value of temperature, for every density plot in the list. After hours of testing changes to ColorFunction, I got nothing viable.
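For the color-scale issue, a sketch of the usual fix (my addition): turn off automatic rescaling and normalize the temperature yourself, so every frame maps the same temperature to the same color.

```mathematica
(* Sketch: disable per-frame rescaling and normalize by the known      *)
(* maximum temperature (5, the initial value); the color scheme name   *)
(* "TemperatureMap" is an arbitrary choice.                            *)
Animate[DensityPlot[sol[x, y, t], {x, y} \[Element] \[CapitalOmega],
  ColorFunction -> (ColorData["TemperatureMap"][#/5] &),
  ColorFunctionScaling -> False, PlotRange -> All], {t, 0, tmax}]
```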
I've been searching on the web for hours, and I can't find any solution. Would you kindly help me? I'm looking forward to your answers!
Kind Regards.

Alexandre Barboteu, 2021-04-16T14:39:38Z

Integrate piecewise-function processes forever?
https://community.wolfram.com/groups/-/m/t/2246791
I want to integrate the attached piecewise function over theta from 0 to 1. The single cases work fine, but integrating the function itself processes forever. I noted that cmd+. (the abort command) takes a few seconds until Mathematica stops evaluating, so I assume the problem lies in integrating the piecewise function itself. What am I doing wrong?

Lukas Berger, 2021-04-19T10:39:51Z

Free webinar: parallel programming in the Wolfram Language
https://community.wolfram.com/groups/-/m/t/2246589
Hi all,
We have an upcoming Parallel Programming in the Wolfram Language webinar on Tuesday 11 May at 5pm (BST).
Learn about the local and global optimization techniques and parallel programming paradigms integrated into the Wolfram Language, along with parallelization fundamentals. In this webinar you will learn the basics of parallel programming, optimization for a single kernel, the parallel computation architecture available in the Wolfram Language for simultaneously utilizing multiple kernels, and how to use Wolfram kernels with managed clusters.
Registration link: https://wolfr.am/USqdGAp4
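As a minimal taste of the topic (my addition), a sketch distributing an embarrassingly parallel table across subkernels:

```mathematica
(* Sketch: launch the available subkernels, then evaluate independent  *)
(* Mersenne-number primality tests in parallel.                        *)
LaunchKernels[];
ParallelTable[{p, PrimeQ[2^p - 1]}, {p, 1, 30}]
```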
Please contact events-europe@wolfram.com for more details or if you have any questions!

Idowu Badmus, 2021-04-19T08:17:16Z

Capture an infinite audio stream?
https://community.wolfram.com/groups/-/m/t/2245279
Hello,
I have an infinite audio stream (online radio) that I want to capture. Using the Audio function (https://reference.wolfram.com/language/ref/Audio.html) goes on indefinitely capturing it until I'm out of resources. Is it possible to capture only, say, 1 minute of the stream as it's currently playing?
Best,
P.

Przemyslaw K., 2021-04-16T15:12:16Z

The maximum output precision of InterpolatingFunction[]?
https://community.wolfram.com/groups/-/m/t/2233043
Hello, I am using Interpolation[] to connect the data points in the attachment DATA.nb, working at high precision (70). However, it seems that Interpolation[] suppresses the precision of the table it acts on, so the values of the resulting interpolating function all come out at precision 16 across the range (the notebook below shows the beginning and ending values):
&[Wolfram Notebook][1]
I searched for a precision option in the Wolfram Documentation Center but found nothing. How can I refine the result of such an interpolation? Thanks!
[1]: https://www.wolframcloud.com/obj/6f203712-61ba-4d71-bd46-7c42404608bb

袁 旭龙, 2021-03-31T10:32:28Z

Odd Similarity Between Logistic Map & Laniakea & Wolfram Physics Project?
https://community.wolfram.com/groups/-/m/t/2120045
I was watching Jonathan Gorard's basic formalism talk on the Wolfram Physics Project from 2018, and he said something about the "universe bifurcating", which reminded me of something odd (at least I thought so) I saw a while back. I found it striking how similar the logistic map's bifurcation diagram and Laniakea's movement pattern are, and after hearing him discuss that I just had to come ask. If anyone else thinks it's weird, I'd love to hear your thoughts.
(he talks about it at 20:00 if you wanted to hear https://www.youtube.com/watch?v=BV3a0PzNNqE)
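For readers who want to reproduce the left half of the comparison, a standard bifurcation-diagram sketch for the logistic map (my addition; the parameter range and step are arbitrary):

```mathematica
(* Sketch: iterate x -> r x (1 - x), discard a 300-step transient,     *)
(* keep 100 attractor points per r, and plot them against r.           *)
ListPlot[
 Flatten[Table[{r, #} & /@
    Union[NestList[r # (1 - #) &, Nest[r # (1 - #) &, 0.5, 300], 100]],
   {r, 2.8, 4, 0.002}], 1],
 PlotStyle -> PointSize[0.002], AxesLabel -> {"r", "x"}]
```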
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=logistic_laniakea.jpg&userId=2120012

David R, 2020-11-21T02:23:50Z

Fitting multi-datasets sharing same parameters?
https://community.wolfram.com/groups/-/m/t/2245184
I have three data sets and three fitting functions.
I want to fit the three curves at the same time, because the fitting curves share the fitting parameters.
The figure is one example.
I have three data sets (blue, red, green).
Blue ones should be fitted by the function y=ax^2.
Red ones should be fitted by the function y=ax+b.
Green ones should be fitted by the function y=bx+2a.
In this kind of situation I want to find the best a and b.
The normal FindFit allows only one function.
Does anyone have an idea how to solve this?
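One common workaround (a sketch with synthetic data, my addition; the true values here are a = 2, b = 1): tag each point with a dataset index and fit a single combined model over {i, x}.

```mathematica
(* Sketch: synthetic noisy data generated from the three posted models *)
(* with a = 2, b = 1; the index i selects the branch inside FindFit.   *)
d1 = Table[{1, x, 2 x^2 + RandomReal[{-.1, .1}]}, {x, 0., 1, .1}];
d2 = Table[{2, x, 2 x + 1 + RandomReal[{-.1, .1}]}, {x, 0., 1, .1}];
d3 = Table[{3, x, x + 4 + RandomReal[{-.1, .1}]}, {x, 0., 1, .1}];
model[i_, x_] := Which[i == 1, a x^2, i == 2, a x + b, True, b x + 2 a];
FindFit[Join[d1, d2, d3], model[i, x], {a, b}, {i, x}]
```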
![example of data][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=fit.PNG&userId=2245141

Masahiro Haze, 2021-04-16T11:44:59Z

Computer Analysis of Poetry — Part 1: Metrical Pattern
https://community.wolfram.com/groups/-/m/t/1677058
Poets pay attention to the natural stresses in words, and sometimes they arrange words so that the stresses form patterns. Typical patterns stress every other syllable (duple meter) or every third syllable (triple meter). Conventions exist to further classify poetic lines according to a unit of two or three syllables, called a *foot*. I choose not to follow this convention, instead looking at the line of poetry as a continuous pattern. The goal of step 1 is to display the pattern of a line of poetry graphically around the printed syllables.
The function below accepts a line of English poetry (or prose) and returns the stress pattern with syllables. It gets the stress information from the "PhoneticForm" property in WordData and the syllabification information from the "Hyphenation" property. Sometimes words are not in WordData, or the database doesn't have phonetic or hyphenation values for the word. Much of the code deals with how to guess at those values when they are missing. Also, 1-syllable words are stressed in the database, but stopwords are usually unstressed in context. So the code demotes single-syllable stopwords from stressed to undetermined. A series of replacement rules attempts to resolve syllables that the program has not yet determined to be stressed or unstressed.
analyzeMeter[verse_] := {
ipaVowels = {"a?", "a?", "e?", "??", "o?", "?", "?", "?", "?", "?",
"?", "?", "?", "?", "?", "?", "?", "?", "?", "?", "?", "?", "?",
"?", "?", "a", "æ", "e", "i", "o", "", "ø", "u", "y"};
words = ToLowerCase[TextWords[verse]];
getWordInfo[wd_] := {
ipa = WordData[wd, "PhoneticForm"];
   str = If[StringQ[ipa],
     vow = StringCases[ipa, "ˈ" | "ˌ" ... ~~ ipaVowels];
     ToExpression[
      StringReplace[
       vow, {"ˈ" ~~ __ -> "1", "ˌ" ~~ __ -> ".5", __ -> "0"}]],
dips = {"ae", "ai", "au", "ay", "ea", "ee", "ei", "eu", "ey",
"ie", "oa", "oe", "oi", "oo", "ou", "oy", "ue", "ui", "uy"};
vows = {"a", "e", "i", "o", "u", "y"};
Table[.5,
Total[ToExpression[
Characters[
StringReplace[
wd, {StartOfString ~~ "y" -> "0",
"e" ~~ EndOfString -> "0", dips -> "1",
vows -> "1", _ -> "0"}]]]]]];
hyp = WordData[wd, "Hyphenation"];
fauxSyl =
StringPartition[wd, UpTo[Ceiling[StringLength[wd]/Length[str]]]];
syl =
If[ListQ[hyp] && Length[hyp] == Length[fauxSyl], hyp, fauxSyl];
{wd, str, syl}};
wordInfo = getWordInfo[#][[1]] & /@ words;
stops1IPA =
Select[DeleteMissing[
WordData[#, "PhoneticForm"] & /@ WordData[All, "Stopwords"]],
StringCount[#, ipaVowels] < 2 &];
wordInfo =
wordInfo /. {a_, b_List, c_} /;
MemberQ[stops1IPA, WordData[a, "PhoneticForm"]] -> {a, {.5}, c};
wordInfo =
wordInfo /. {a_, b_List,
c_} /; ! MemberQ[stops1IPA, WordData[a, "PhoneticForm"]] &&
b == {.5} -> {a, {1}, c};
preMeter = wordInfo[[;; , 2]] // Flatten;
meter =
preMeter //. {
{a___, .5, 1, 1, b___} -> {a, 0, 1, 1, b},
{a___, 1, 1, .5, b___} -> {a, 1, 1, 0, b},
{a___, 1, .5, 1, b___} -> {a, 1, 0, 1, b},
{a___, 0, .5, 0, b___} -> {a, 0, 1, 0, b},
{a___, .5, 1, b___} -> {a, 0, 1, b},
{a___, 1, .5, b___} -> {a, 1, 0, b},
{a___, 0, .5} -> {a, 0, 1},
{.5, 0, b___} -> {1, 0, b},
{a___, .5, 0, 1, 0, 1, b___} -> {a, 1, 0, 1, 0, 1, b},
{a___, .5, 1, 0, 1, 0, b___} -> {a, 0, 1, 0, 1, 0, b},
{a___, .5, 0, 0, 1, 0, 0, 1, b___} -> {a, 1, 0, 0, 1, 0, 0, 1,
b},
{a___, 1, 0, 1, 0, .5, b___} -> {a, 1, 0, 1, 0, 1, b},
{a___, 0, 1, 0, 1, .5, b___} -> {a, 0, 1, 0, 1, 0, b},
{a___, 1, 0, 0, 1, 0, 0, .5, b___} -> {a, 1, 0, 0, 1, 0, 0, 1,
b},
{a___, .5, .5, .5} -> {a, 0, 1, 0},
{.5, .5, b___} -> {1, 0, b}};
coords = Partition[Riffle[Range[Length[meter]], meter], 2];
syllab = Flatten[wordInfo[[;; , 3]]];
visual =
Graphics[{Line[coords],
MapIndexed[
Style[Text[#1, {#2[[1]], .5}], 15, FontFamily -> "Times"] &,
syllab]}, ImageMargins -> {{10, 10}, {0, 0}},
ImageSize -> 48*Length[meter]]
};
analyzeMeter["Once upon a midnight dreary, while I pondered, weak and \
weary,"]
![enter image description here][1]
Thanks to Edgar Allan Poe for his poem "The Raven." The zigzag line zigs up for stressed syllables and down for unstressed. The program analyzes this verse without error or deviation from the expected meter. However, poets don't always follow the expected pattern, and the program occasionally makes mistakes. Consider the program's output for the entire second stanza of "The Raven."
![enter image description here][2]
The graphic makes it easy to see deviations from the pattern. In the second line of this stanza, the program mistakenly considers "separate" to have three syllables as if it were a verb. However, when "separate" is used as an adjective, as in "separate dying ember," it only has two syllables. In the third verse, the last syllable of "eagerly" is so weak that the program marks it as unstressed. This is a reasonable and arguably correct way to assess the syllable, though traditionally it should be marked as stressed. The fifth verse also has an anomaly. Poe has added an extra syllable to the line with the word "radiant."
As an English teacher, I think this visual gives insight into such subtle poetic notions as elision, secondary stress, and masculine/feminine rhyme. A possible activity is for students to use the program to analyze the prevailing pattern in a stanza of poetry and then explain the variations from that pattern as nuances of the language (as in "eagerly"), deliberate deviations by the poet (as in "radiant"), or mistakes by the program (as in "separate").
"The Raven" follows a duple meter pattern of alternating stressed and unstressed syllables. The program can also handle poems that follow the other major metrical pattern, triple meter. Here are verses from "Evangeline" by Henry Wadsworth Longfellow and "'Twas the Night Before Christmas" by Clement Clarke Moore.
![enter image description here][3]
One would expect that the program would show free verse and prose as having no recognizable metrical pattern. Let's see. Here are two lines of Walt Whitman's free verse poem "When I Heard the Learn'd Astronomer":
![enter image description here][6]
And here is a sentence from the Wikipedia article on butterflies.
![enter image description here][7]
The traditional way to teach meter in poetry is to explain about iambs, trochees, etc., and then have students try to mark lines of poetry with those units. Students, who may be distinguishing stressed syllables for the first time, are hard-pressed to find metrical feet in a verse. With this program, a student has a starting point to explore, analyze, interpret, and critique. It's like using Wolfram|Alpha to understand the graph of a rational function rather than trying to sketch it yourself following the rules the teacher lectured about.
I would call the program a work in progress rather than a success. If you experiment with poems of your choice, you'll find that it sometimes fails to resolve a syllable, leaving it stuck halfway between stressed and unstressed. Also, if it misinterprets a syllable, marking it stressed, for instance, when it shouldn't be, the error can spread to neighboring syllables and corrupt the interpretation of the whole line. It works more consistently with duple meter than triple meter.
Twice I tried to improve the program with machine learning. I thought that if machine learning could classify the unresolved pattern as duple meter, triple meter, or neither, then the program could better resolve the undetermined syllables. I was encouraged when it had 99% confidence that lines from "The Raven" were duple meter, but then I realized it was just as certain that any input was duple meter. My second attempt was to make a neural net that accepted a word and returned a likely stress pattern. For instance, I would feed it "Lenore" and it would return {0,1}. I think this should be doable, training it on data from WordData, but I am not strong enough in machine learning to make it happen (yet).
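To make the intended input/output of that second attempt concrete, here is a tiny Python sketch (Python rather than the Wolfram Language used in the post, and with a hand-labeled stand-in table rather than WordData). It shows only the interface a trained net would fill, not a working model:

```python
# Hypothetical stand-in for a word -> stress-pattern model.
# A real version would be trained on pronunciation data (e.g. WordData);
# these few entries are hand-labeled for illustration only.
STRESS_TABLE = {
    "lenore": (0, 1),      # le-NORE: unstressed, stressed
    "raven": (1, 0),       # RA-ven: stressed, unstressed
    "radiant": (1, 0, 0),  # the extra-syllable example from the stanza
}

def stress_pattern(word):
    """Return a tuple of 0s/1s for a known word, or None if unknown --
    the slot a trained neural net would eventually fill."""
    return STRESS_TABLE.get(word.lower())
```

So, for example, `stress_pattern("Lenore")` returns `(0, 1)`, matching the {0,1} in the text.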
I subtitled this "Part 1," which implies that there is more to come. I intend to follow this with a program that makes rhyme visible, including alliteration, assonance, and other sound features loosely associated with rhyme.
Thanks for sticking with this to the end,
Mark Greenberg
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=rav11.png&userId=788861
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-05-06at5.05.53PM.png&userId=788861
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-05-06at5.08.16PM.png&userId=788861
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ast1.png&userId=788861
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ast2.png&userId=788861
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-05-06at5.10.17PM.png&userId=788861
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=wik1.png&userId=788861Mark Greenberg2019-05-07T00:14:16ZClassification of a single image based on user-selected training areas?
https://community.wolfram.com/groups/-/m/t/2237155
Below is an image extracted from drone footage of my son's soccer ("football" in these parts of the world) practice session. ![drone image][1] My goal is to track the position of players during the practice session and create some simple data analytics, such as heat maps, to share with the coaches. I'm exploring different approaches with Mathematica. One approach is to characterise the background carefully and subtract it, leaving only the players and their shadows. I've done the first step based on image subtraction from frames spaced 1 s apart, which nicely isolates the players but also includes their shadows.
I'm thinking classification might be a good approach to distinguish four classes: players in orange shirts, players in blue shirts, shadows, and other people (note that the top part of the image can be masked out to focus only on what is happening on the pitch). I've sampled the RGB training signatures by drawing masks on top of the image and extracting the pixels. However, I'm at a loss as to how to use these training sets in classification. Most examples I've seen focus on detecting animals, faces, and so on based on a series of training images. In my case I'm interested in classifying the content of a single image based on similarity to training data from that same image.
Here is an example of what I've tried.
imgdiff = ImageDifference[vidframes[[2]], vidframes[[1]]] (*image difference *)
shadowpixels = PixelValue[imgdiff, imgmaskshadow]; (* extract pixels in user defined mask for shadows*)
cf = ClusterClassify[Join[shadowpixels, orangplayerpixels]] (* create classifier based on representative shadow and orange shirt players*)
ClusteringComponents[imgdiff] (* this is what does not work *)
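Stepping back from the specific functions, the underlying idea — learn a color signature per class from the user-drawn masks, then label every pixel by its nearest class signature — can be sketched in plain Python (a toy nearest-centroid classifier with made-up RGB values; in Mathematica, Classify or ClusterClassify would play this role):

```python
# Nearest-centroid pixel classification: average the labeled sample
# pixels per class, then assign any pixel to the closest class mean.
def centroids(samples):
    """samples: {class_name: [(r, g, b), ...]} -> {class_name: mean color}."""
    return {
        name: tuple(sum(p[i] for p in pix) / len(pix) for i in range(3))
        for name, pix in samples.items()
    }

def classify_pixel(pixel, cents):
    """Return the class whose centroid is nearest in squared RGB distance."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(cents, key=lambda name: d2(pixel, cents[name]))

# Toy training data: orange shirts vs. shadows (invented values).
training = {
    "orange": [(0.9, 0.5, 0.1), (0.8, 0.4, 0.2)],
    "shadow": [(0.1, 0.1, 0.15), (0.2, 0.2, 0.2)],
}
cents = centroids(training)
print(classify_pixel((0.85, 0.45, 0.15), cents))  # prints orange
```

Mapping this over every pixel of the difference image would give the per-pixel class labels the question is after.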
I also include a [link][2] to the first 10 images of the video sequence sampled at 1s.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Frame1s.png&userId=913461
[2]: https://www.dropbox.com/s/2dyvy6093crvvw3/FootballFrames.zip?dl=0Malcolm Davidson2021-04-05T08:35:10ZConstructing cyclic graphs that are as pair-wise disjoint as possible
https://community.wolfram.com/groups/-/m/t/2240973
I cross-posted this on Mathematica Stack Exchange but haven't received any traction there. Perhaps someone on this forum has an idea. I've re-written the description somewhat in hopes this is more clear.
I have an application for which I need a set of circle graphs, where by "circle" I mean that the graph forms a single complete loop: with n vertices, it starts at one vertex, passes through all the others, and finally loops back on itself. Here is an example of a small circle graph:
aGraph=Graph[DirectedEdge @@ # & /@
Partition[RandomSample[Range[10]], 2, 1, {1, 1}]]
For a set of such graphs (really edge lists), I want to find out their pairwise intersection. I don't know if it is the right way, but I use this:
GraphEdgeIntersections[edgePairs_List] :=
With[{intersection = Intersection @@ edgePairs},
If[intersection == {}, 0,
Max@Flatten[GraphDistanceMatrix[intersection] /. Infinity -> 0]]
];
I know how to construct a set of "perfect" circles whose pairwise intersections are all empty for the case I really care about, circles with 23 vertices:
anExampleSet =
Table[DirectedEdge @@ # & /@
Transpose[{Range[23], RotateRight[Range[23], i]}], {i, 1, 22}];
Partition[GraphEdgeIntersections[#] & /@ Tuples[anExampleSet, 2],
22] // MatrixForm
You see that the diagonal entries have a path length of 22 and all other entries are 0 (indicating that the circles are pairwise edge-disjoint). When I look at the number of edges in the flattened set, it is 506, which is exactly the total number of possible directed edges on 23 vertices:
Length[Select[Tuples[Range[23], 2], #[[1]] != #[[2]] &]]
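The rotation construction can be checked independently of Mathematica; here is a Python sketch (0-indexed vertices, n = 23) that builds the 22 shift graphs, verifies each one is a single directed cycle, and verifies that they are pairwise edge-disjoint and together cover all 506 ordered pairs:

```python
def shift_cycle_edges(n, i):
    """Directed edges (k, (k - i) mod n): the 0-indexed analogue of
    pairing Range[n] with RotateRight[Range[n], i]."""
    return {(k, (k - i) % n) for k in range(n)}

def is_single_cycle(edges, n):
    """The shift graph is one cycle iff following successors from
    vertex 0 visits all n vertices before repeating."""
    succ = dict(edges)
    seen, v = set(), 0
    while v not in seen:
        seen.add(v)
        v = succ[v]
    return len(seen) == n

n = 23
cycles = [shift_cycle_edges(n, i) for i in range(1, n)]
union = set().union(*cycles)
# Edge-disjoint: the union is as large as the sum of the parts,
# and it equals all n*(n-1) = 506 ordered pairs of distinct vertices.
assert len(union) == sum(len(c) for c in cycles) == n * (n - 1)
# With n prime, every shift graph is a single 23-cycle.
assert all(is_single_cycle(c, n) for c in cycles)
```

(With n = 23 prime, every shift i is coprime to n, so each shift graph is one cycle; for composite n, only shifts coprime to n give single cycles.)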
Is there a method to construct the best set of circles so that I minimize the intersections when going beyond the 22 (or any similar set of 22) shown above?
As part of this work, I came up with a way to visualize the "goodness" of the resulting set using "Image". For this, I construct the graph edge intersection matrix and "color" it according to whether I'm looking for "goodness" or "badness". The idea is to take the matrix and change the values into {R,G,B} pixels at each point according to the rules I want. Since I don't know how many entries are in the resulting set, I wrote a function to generate the replacement rules. I should probably have called the last parameter "badNotGoodP" but this is what I first wrote:
ConstructReplacementRule::usage =
  "ConstructReplacementRule[values_, blackValue_, threshold_, redNotGreen_]";
ConstructReplacementRule[values_, blackValue_, threshold_,
redNotGreen_] :=
Block[{white = {1, 1, 1}, black = {0, 0, 0}, red = {1, 0, 0},
green = {0, 1, 0}, replacementValues, valuesNoDuplicates},
valuesNoDuplicates = DeleteDuplicates[Flatten@values];
replacementValues =
Cases[valuesNoDuplicates,
a_ :>
If[a == blackValue, black,
If[redNotGreen, If[a > threshold, red, white],
If[a < threshold, green, white]]]];
Thread[valuesNoDuplicates -> replacementValues]
];
This is used for example in the following way:
someRandomGraphs =
With[{randomSample = #},
DirectedEdge @@ # & /@
Partition[Append[randomSample, randomSample[[1]]], 2, 1]] & /@
Table[RandomSample[Range[23]], {i, 1, 256}];
gei = Partition[
GraphEdgeIntersections[#] & /@ Tuples[someRandomGraphs, 2], 256];
Now, if I want to show an image with green pixels for any point in the cross-matrix where we have 0 or 1 intersection, I do this:
Image[gei /. ConstructReplacementRule[gei, 22, 2, False]]
Or if I want to know cases where there are only 0 overlaps, I use this:
Image[gei /.
ConstructReplacementRule[DeleteDuplicates@Flatten@gei, 22, 1,
False]]
If I want to show "badness", I invert the last argument to the rule replacement.
Image[gei /.
ConstructReplacementRule[DeleteDuplicates@Flatten@gei, 22, 1, True]]
I have some generators for producing circle graphs that yield decent results but they aren't perfect. I'm hoping someone with a good background in graphs knows a construction method that yields a better result. One of my methods gives this result for a set of 256 tables:
![Results for a set of circles][1]
To repeat, the objective is to get as many 0- and 1-overlap entries as possible. For this table, there are 65536 entries, and of those, 256 are the diagonal, where the overlap is complete, yielding the count of 22. The unweighted percentage "badness" for the set I'm providing is (3248+222+16+2)/65280 = 5.34%. Another construction method gives 5.37% "badness". Are there any with 0% badness?
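The "badness" percentage can be computed mechanically from the intersection matrix. A minimal Python sketch (assuming, as with gei above, that matrix[i][j] holds the overlap path length between circles i and j):

```python
def badness(matrix):
    """Fraction of off-diagonal entries whose overlap exceeds 1,
    i.e. pairs of circles sharing a path longer than one edge."""
    n = len(matrix)
    off = [matrix[i][j] for i in range(n) for j in range(n) if i != j]
    return sum(1 for v in off if v > 1) / len(off)

# Tiny 3-circle example: diagonal = full overlap (path length 22).
m = [[22, 0, 3],
     [0, 22, 1],
     [3, 1, 22]]
print(badness(m))  # 2 of the 6 off-diagonal entries exceed 1
```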
I cannot "brute force" this, since the space of possible circles is huge (on the order of 23!).
If you've read this far, I appreciate your time and any help you may give.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2021-04-09at12.06.45PM.png&userId=1954142Mark Ross2021-04-09T19:18:56ZSearching for language structure universals
https://community.wolfram.com/groups/-/m/t/2245923
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/1ee6b441-eef8-4288-b4ad-e4658db3535bLuis Antonio Vasquez Reina2021-04-17T15:20:09ZHow I calculated the digits of the MKB constant
https://community.wolfram.com/groups/-/m/t/1323951
March 12, 2015
--------------
What about records of computing the integral analog of the MRB constant? (I call it the MKB constant.) See Google Scholar [MKB constant][1].
Richard Mathar did a lot of work on it [here][4], where M is the MRB constant and M1 is MKB:
![enter image description here][2]
M1 (MKB) can be written as an integral of a power of ***e***:
![enter image description here][3]
I've gotten Mathematica to compute 125 digits. However, they are not proven to be correct yet!
They are
0.68765236892769436980931240936544016493963738490362254179507101010743\
366253478493706862729824049846818873192933433546612328629
.
First we compute the real part as far as Mathematica will allow.
a1 = NIntegrate[Cos[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 100]
0.07077603931152880353952802183028200136575469620336299759658471973672\
987938741600037745028756981434374
a2 = NIntegrate[Cos[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 120]
a2 - a1
0.07077603931152880353952802183028200136575469620336302758317278266053\
31986618615110244568060496758380620699811570793175408
2.998658806292380331927444551064700651847986149432*10^-53
a3 = NIntegrate[Cos[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 150]
a3 - a2
0.07077603931152880353952802183028200136575469620336302758317278816361\
8457264385970709799491401005081151056924116255307801983594127144525095\
5653544005192
5.5030852586025244596853426853513292430889869429591759902612*10^-63
a4 = NIntegrate[Cos[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 200]
a4 - a3
0.07077603931152880353952802183028200136575469620336302758317278816361\
8457264382036580831881266177238210031756216795402920795214039271485948\
634659563768084109747493815003439875479076850383786911941519465
-3.9341289676101348278429410251678994599048811883800878730391469306948\
367511*10^-78
a5 = NIntegrate[Cos[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 250]
a5 - a4
0.07077603931152880353952802183028200136575469620336302758317278816361\
8457264382036580831881266177238209440733969109717926999044694539086929\
3857095687266500964737783523859835124762555195276023702167529617039725\
7261177753806842756198742365511173334813888
-5.9102224768568499379616934473239901924894999504143401327371546261745\
6363002821330856184541724766503*10^-103
Next we compute the imaginary part to the same precision.
b1 = NIntegrate[I Sin[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 100] - I/Pi
0.*10^-117 -
0.6840003894379321291827444599926611267109914826550016181302726087470\
544306934833279937664708191960468 I
b2 = NIntegrate[I Sin[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 120] - I/Pi
b2 - b1
0.*10^-137 -
0.6840003894379321291827444599926611267109914826549994343226304054256\
46767722886537984405858512438464223325361496951820797 I
0.*10^-117 +
2.1838076422033214076629705967900093606123067575826*10^-51 I
b3 = NIntegrate[I Sin[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 150] - I/Pi
b3 - b2
0.*10^-167 -
0.6840003894379321291827444599926611267109914826549994343226303771381\
5305812568206208637713014270949108628424796532117557865488349026349505\
4352728287677 I
0.*10^-137 +
2.8287493709597204475898028728369728973137041113531630645218*10^-62 I
b4 = NIntegrate[I Sin[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 200] - I/Pi
b4 - b3
0.*10^-218 -
0.6840003894379321291827444599926611267109914826549994343226303771381\
5305812497663815095983421272147867735031056071869477552727290571462108\
208123698276619850397331432861469605963724235550107655309644965 I
0.*10^-167 +
7.0542393541729592998801240893393740460248080312761058454887397227149\
1304910*10^-76 I
b5 = NIntegrate[I Sin[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 250] - I/Pi
b5 - b4
0.*10^-268 -
0.6840003894379321291827444599926611267109914826549994343226303771381\
5305812497663815095983421272147867223796451609148860995867828496814126\
9810848570299802270095261060286697622600207986034863822997401942304753\
4951409792726050072747412751162199808963072 I
0.*10^-218 +
5.1123460446272061655685946207464798122703884124663962338780532683279\
9843703703436946621273009904771*10^-102 I
b6 = NIntegrate[I Sin[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 300] - I/Pi
b6 - b5
0.*10^-318 -
0.6840003894379321291827444599926611267109914826549994343226303771381\
5305812497663815095983421272147867223796451609148860995867804988314557\
9408739051911924508290758754789975176921766748245229306743723292030351\
1357229649514450909272015113199881208930542548540913212596310791355732\
04151474091653439098975 I
0.*10^-268 +
2.3508499569040210951838787776180450230549672244567844123778963451625\
36786502744023594180143211599163475397637962318600032530*10^-127 I
Notice that WorkingPrecision->100 gave 51 consistent (correct) digits, WorkingPrecision->120 gave 62 correct digits, WorkingPrecision->150 gave 76 correct digits, and WorkingPrecision->200 gave 102 correct digits, so it is not too much of a stretch to believe that WorkingPrecision->250 gave 125 correct digits.
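Those digit counts come from the size of the difference between successive approximations: if |a2 - a1| ≈ 10^-53, the two runs agree to roughly 52 decimal places. A small Python sketch of that bookkeeping using the standard library's Decimal type (illustrative numbers, not the actual integrals):

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 60  # enough working precision for the subtraction

def agreed_digits(a, b):
    """Rough count of leading decimal places on which a and b agree,
    estimated as floor(-log10 |a - b|)."""
    diff = abs(a - b)
    if diff == 0:
        return math.inf
    return math.floor(-math.log10(float(diff)))

lo = Decimal("0.6840003894379321291827444599926")
hi = Decimal("0.6840003894379321291827444599926611267")
print(agreed_digits(lo, hi))  # the two values agree to 31 decimal places
```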
In[78]:= c = N[Abs[a5 + b5], 125]
Out[78]= 0.\
6876523689276943698093124093654401649396373849036225417950710101074336\
6253478493706862729824049846818873192933433546612328629
April 18, 2015
--------------
Going back to the **integral analog of the MRB constant**
![enter image description here][5]:
Using formula 5 on page 3 of http://arxiv.org/pdf/0912.3844v3.pdf
.![enter image description here][6]
We can compute a great many digits of the **integral analog of the MRB constant** (I once called it the MKB constant, named after Marsha Kell-Burns, my now ex-wife). In the paper, Mathar simply calls it M1.
**Until further notice in this post, when we compute the imaginary part of M1, we will be concerned with the imaginary part's absolute value only.**
This time we will compute the imaginary part first, to at least 500 digits:
a[0] = 0; For[n = 1, n < 11,
  a[n] = N[2/Pi -
     1/Pi*NIntegrate[
       Cos[Pi*x]*x^(1/x)*(1 - Log[x])/x^2, {x, 1, Infinity},
       WorkingPrecision -> 100*n], 50 n]; Print[a[n] - a[n - 1]],
  n++]; Print[a[10]]
giving
0.6840003894379321291827444599926611267109914826549994343226303771381530581249766381509598342127214787
0.*10^-101
0.*10^-151
0.*10^-201
0.*10^-251
0.*10^-301
0.*10^-351
0.*10^-401
0.*10^-451
0.*10^-501
0.6840003894379321291827444599926611267109914826549994343226303771381530581249766381509598342127214786722379645160914886099586780498831455794087390519118879988351918366211827085883779918191195794251385436100844782462528597869421390620796113023053439642582325892202911183326091512210367124716901047132601108752764946385830438156754378694878046808312868541961166205744280461776232345922905313658259576212809654022016030244583148587352474339130505540080799774619683572540292971258866450201101870835703060314349396491402064932644813564545345219868887520120
.
Likewise the real part:
b[0] = 0; For[n = 1, n < 11,
  b[n] = N[-1/Pi*
     NIntegrate[Sin[Pi*x]*x^(1/x)*(1 - Log[x])/x^2, {x, 1, Infinity},
      WorkingPrecision -> 100*n], 50 n]; Print[b[n] - b[n - 1]],
  n++]; Print[b[10]]
giving
0.07077603931152880353952802183028200136575469620336302758317278816361845726438203658083188126617723821
0.*10^-102
0.*10^-152
0.*10^-202
0.*10^-252
0.*10^-302
0.*10^-352
0.*10^-402
0.*10^-452
0.*10^-502
0.07077603931152880353952802183028200136575469620336302758317278816361845726438203658083188126617723820944073396910971792699904464538475364292258443860652193330471222906120205483985764336623434898438270710499897053952312269178485299032185072743545220051257328105422174249313177670295863771714489658779291185716175115405623656039914848817528200250723061535734571065031458992196831648681239079549382556509741967588147362548743205919028695774572411439927516593391029992733107982746794845130889328251307263102570083031527430861023428334369104098217022622689
.
Then the magnitude:
N[Sqrt[a[10]^2 + b[10]^2], 500]
giving
0.68765236892769436980931240936544016493963738490362254179507101010743\
3662534784937068627298240498468188731929334335466123286287665409457565\
9577211580255650416284625143925097120589697986500952590195706813170472\
5387265069668971286335322245474865156721299946377659227025219748069576\
0895993932096027520027641920489863095279507385793449828250341732295653\
3809181101532087948181335825805498812728097520936901677028741356923292\
2644964771090329726483682930417491673753430878118054062296678424687465\
624513174205
.
That checks with the 200 digits computed by the quadosc command in mpmath by FelisPhasma at https://github.com/FelisPhasma/MKB-Constant. The function is defined here: http://mpmath.googlecode.com/svn/trunk/doc/build/calculus/integration.html#oscillatory-quadrature-quadosc
![enter image description here][7]
**P.S.**
I just now finished 750 digits (about the max with formula 5 from the paper, as far as Mathematica is concerned).
Here is the work:
a[0] = 0; For[n = 1, n < 16,
  a[n] = N[2/Pi -
     1/Pi*NIntegrate[
       Cos[Pi*x]*x^(1/x)*(1 - Log[x])/x^2, {x, 1, Infinity},
       WorkingPrecision -> 100*n], 50 n]; Print[a[n] - a[n - 1]],
  n++]; Print[a[15]];
b[0] = 0; For[n = 1, n < 16,
  b[n] = N[-1/Pi*
     NIntegrate[Sin[Pi*x]*x^(1/x)*(1 - Log[x])/x^2, {x, 1, Infinity},
      WorkingPrecision -> 100*n], 50 n]; Print[b[n] - b[n - 1]],
  n++]; Print[b[15]]; Print[N[Sqrt[a[15]^2 + b[15]^2], 750]]
0.6840003894379321291827444599926611267109914826549994343226303771381530581249766381509598342127214787
0.*10^-101
0.*10^-151
0.*10^-201
0.*10^-251
0.*10^-301
0.*10^-351
0.*10^-401
0.*10^-451
0.*10^-501
0.*10^-551
0.*10^-601
3.*10^-650
-4.*10^-700
-2.6*10^-749
0.68400038943793212918274445999266112671099148265499943432263037713815\
3058124976638150959834212721478672237964516091488609958678049883145579\
4087390519118879988351918366211827085883779918191195794251385436100844\
7824625285978694213906207961130230534396425823258922029111833260915122\
1036712471690104713260110875276494638583043815675437869487804680831286\
8541961166205744280461776232345922905313658259576212809654022016030244\
5831485873524743391305055400807997746196835725402929712588664502011018\
7083570306031434939649140206493264481356454534521986888752011950353818\
1776359577265099302389566135475579468144849763261779452665955246258699\
8679271659049208654746533234375478909962633090080006358213908728990850\
5026759549928935029206442637425786005036048098598304092996753145589012\
64547453361707037686708654522699
0.07077603931152880353952802183028200136575469620336302758317278816361845726438203658083188126617723821
0.*10^-102
0.*10^-152
0.*10^-202
0.*10^-252
0.*10^-302
0.*10^-352
0.*10^-402
0.*10^-452
0.*10^-502
2.*10^-551
-1.*10^-600
1.8*10^-650
1.27*10^-699
4.34*10^-749
0.07077603931152880353952802183028200136575469620336302758317278816361\
8457264382036580831881266177238209440733969109717926999044645384753642\
9225844386065219333047122290612020548398576433662343489843827071049989\
7053952312269178485299032185072743545220051257328105422174249313177670\
2958637717144896587792911857161751154056236560399148488175282002507230\
6153573457106503145899219683164868123907954938255650974196758814736254\
8743205919028695774572411439927516593391029992733107982746794845130889\
3282513072631025700830315274308610234283343691040982170226226904594029\
7055093272952022662549075225941956559080574835998923469310063614655255\
0629713179601483134045038416878054929072981851045829413286377842843667\
5378730394247519728064887287780998671021887797977772522419765594172569\
277490031071938177749184834961300
0.687652368927694369809312409365440164939637384903622541795071010107433662534784937068627298240498468188731929334335466123286287665409457565957721158025565041628462514392509712058969798650095259019570681317047253872650696689712863353222454748651567212999463776592270252197480695760895993932096027520027641920489863095279507385793449828250341732295653380918110153208794818133582580549881272809752093690167702874135692329226449647710903297264836829304174916737534308781180540622966784246874656245131742049004832216427665542900559350289936114782223424261285828326467186036500189315374147638489679365569122714398706519530651330568884655048857998738535162606116788633540389660052822237449082894798620397228331715198160243676576563833057235963591510865254600
Using formula 7 from http://arxiv.org/pdf/0912.3844v3.pdf,
![enter image description here][8]
.
Treating it as we did formula 5, we first compute the imaginary part to at least 1000 digits:
a[0] = 0; For[n = 1, n < 21,
  a[n] = N[2/Pi +
     1/Pi^2 NIntegrate[
       Sin[x Pi] x^(1/x) (1 - 3 x + 2 (x - 1) Log[x] + Log[x]^2)/
        x^4, {x, 1, Infinity}, WorkingPrecision -> 100 n], 50 n];
  Print[a[n] - a[n - 1]], n++]; Print[a[20]]
0.6840003894379321291827444599926611267109914826549994343226303771381530581249766381509598342127214787
0.*10^-101
0.*10^-151
0.*10^-201
0.*10^-251
0.*10^-301
0.*10^-351
0.*10^-401
0.*10^-451
0.*10^-501
0.*10^-551
0.*10^-601
0.*10^-651
0.*10^-701
0.*10^-751
0.*10^-801
0.*10^-851
0.*10^-901
-2.*10^-950
5.*10^-1000
0.684000389437932129182744459992661126710991482654999434322630377138153058124976638150959834212721478672237964516091488609958678049883145579408739051911887998835191836621182708588377991819119579425138543610084478246252859786942139062079611302305343964258232589220291118332609151221036712471690104713260110875276494638583043815675437869487804680831286854196116620574428046177623234592290531365825957621280965402201603024458314858735247433913050554008079977461968357254029297125886645020110187083570306031434939649140206493264481356454534521986888752011950353818177635957726509930238956613547557946814484976326177945266595524625869986792716590492086547465332343754789099626330900800063582139087289908505026759549928935029206442637425786005036048098598304092996753145589012645474533617070376867086545228223060940434935219252885333298390272342234952870883304116640409421452765284609364941205344122569781634782508368641126766528707019957340895061936246645065753101916781254557006989818409283317145837167345971516970849116096077030635788389165381066055992688
Then the real part to at least 1000 digits:
b[0] = 0; For[n = 1, n < 21,
  b[n] = N[1/Pi^2 -
     1/Pi^2 NIntegrate[
       Cos[Pi x] x^(1/x) (1 - 3 x + 2 (x - 1) Log[x] + Log[x]^2)/
        x^4, {x, 1, Infinity}, WorkingPrecision -> 100 n], 50 n];
  Print[b[n] - b[n - 1]], n++]; Print[b[20]]
0.07077603931152880353952802183028200136575469620336302758317278816361845726438203658083188126617723821
0.*10^-102
0.*10^-152
0.*10^-202
0.*10^-252
0.*10^-302
0.*10^-352
0.*10^-402
0.*10^-452
0.*10^-502
0.*10^-552
0.*10^-602
0.*10^-652
0.*10^-702
0.*10^-752
0.*10^-802
0.*10^-852
-3.*10^-901
8.*10^-951
-4.6*10^-1000
0.0707760393115288035395280218302820013657546962033630275831727881636184572643820365808318812661772382094407339691097179269990446453847536429225844386065219333047122290612020548398576433662343489843827071049989705395231226917848529903218507274354522005125732810542217424931317767029586377171448965877929118571617511540562365603991484881752820025072306153573457106503145899219683164868123907954938255650974196758814736254874320591902869577457241143992751659339102999273310798274679484513088932825130726310257008303152743086102342833436910409821702262269045940297055093272952022662549075225941956559080574835998923469310063614655255062971317960148313404503841687805492907298185104582941328637784284366753787303942475197280648872877809986710218877979777725224197655941725692774900310719381777491848349627938468198411955193898347075098152638657614980900350262780319142430252921925131515239611841070722530473939496294305264627977744876814858325335947117076721493110160508928494597906728688873533031986215124467678736429981544321187124269147141804397293341613
Then the magnitude:
In[97]:= N[Sqrt[a[20]^2 + b[20]^2], 1000]
Out[97]= 0.\
6876523689276943698093124093654401649396373849036225417950710101074336\
6253478493706862729824049846818873192933433546612328628766540945756595\
7721158025565041628462514392509712058969798650095259019570681317047253\
8726506966897128633532224547486515672129994637765922702521974806957608\
9599393209602752002764192048986309527950738579344982825034173229565338\
0918110153208794818133582580549881272809752093690167702874135692329226\
4496477109032972648368293041749167375343087811805406229667842468746562\
4513174204900483221642766554290055935028993611478222342426128582832646\
7186036500189315374147638489679365569122714398706519530651330568884655\
0488579987385351626061167886335403896600528222374490828947986203972283\
3171519816024367657656383305723596359151086525460036387486837632622334\
2987257095524637683005910353149353985736118868884201748241906260834981\
7303422370398413326428269921074045506558966667483453656748906071577744\
4147548424388220133662816274116986724576330176058912438027319979840883\
05950589130911719199
**PPS.**
I just now finished a 1500-digit computation of the integral analog of the MRB constant, but I don't have any way of checking it other than to see that it confirms the smaller computations, which it does.
In[99]:= aa =
N[2/Pi + 1/Pi^2 NIntegrate[
Sin[x Pi] x^(1/x) (1 - 3 x + 2 (x - 1) Log[x] + Log[x]^2)/
x^4, {x, 1, Infinity}, WorkingPrecision -> 3000], 1500]
Out[99]= 0.\
6840003894379321291827444599926611267109914826549994343226303771381530\
5812497663815095983421272147867223796451609148860995867804988314557940\
8739051911887998835191836621182708588377991819119579425138543610084478\
2462528597869421390620796113023053439642582325892202911183326091512210\
3671247169010471326011087527649463858304381567543786948780468083128685\
4196116620574428046177623234592290531365825957621280965402201603024458\
3148587352474339130505540080799774619683572540292971258866450201101870\
8357030603143493964914020649326448135645453452198688875201195035381817\
7635957726509930238956613547557946814484976326177945266595524625869986\
7927165904920865474653323437547890996263309008000635821390872899085050\
2675954992893502920644263742578600503604809859830409299675314558901264\
5474533617070376867086545228223060940434935219252885333298390272342234\
9528708833041166404094214527652846093649412053441225697816347825083686\
4112676652870701995734089506193624664506575310191678125455700698981840\
9283317145837167345971516970849116096077030635788389165381066055992708\
4284702473154303800276803908560080204997803241058414188902018357202062\
9532415382916822796942734253441520784640814155687968986766443021927163\
6249354786973717955004441549085673392105556692081075647388204227896978\
1483978754685921758294318270385312597177598977912650715548994562461701\
1553879109152932039370312241134127950112036269188660519350584627066913\
4925878278209048717316088629321353274101519307401594635990058104175474\
300641475776727955287474213040
In[98]:= bb =
N[1/Pi^2 -
1/Pi^2 NIntegrate[
Cos[Pi x] x^(1/x) (1 - 3 x + 2 (x - 1) Log[x] + Log[x]^2)/
x^4, {x, 1, Infinity}, WorkingPrecision -> 3000], 1500]
Out[98]= 0.\
0707760393115288035395280218302820013657546962033630275831727881636184\
5726438203658083188126617723820944073396910971792699904464538475364292\
2584438606521933304712229061202054839857643366234348984382707104998970\
5395231226917848529903218507274354522005125732810542217424931317767029\
5863771714489658779291185716175115405623656039914848817528200250723061\
5357345710650314589921968316486812390795493825565097419675881473625487\
4320591902869577457241143992751659339102999273310798274679484513088932\
8251307263102570083031527430861023428334369104098217022622690459402970\
5509327295202266254907522594195655908057483599892346931006361465525506\
2971317960148313404503841687805492907298185104582941328637784284366753\
7873039424751972806488728778099867102188779797777252241976559417256927\
7490031071938177749184834962793846819841195519389834707509815263865761\
4980900350262780319142430252921925131515239611841070722530473939496294\
3052646279777448768148583253359471170767214931101605089284945979067286\
8887353303198621512446767873642998154432118712426914714180439729334146\
8345902382977472975053271988386946291215512340931334841526712825988330\
6521193975174379922254198045615178994412133135553490942451521573377205\
4086429300485891441696490339106907723915822537813700713422515725943626\
7756749980892097547020923938358076198570370106085596863039832425037481\
4946826330552459256977035009973219582010379262683780372730214991685800\
3676611833579648850161974289307066295385292264148146789532534018500663\
1153014589399140567464592864024
In[109]:= c1500 = Sqrt[aa^2 + bb^2]
Out[109]= \
0.68765236892769436980931240936544016493963738490362254179507101010743\
3662534784937068627298240498468188731929334335466123286287665409457565\
9577211580255650416284625143925097120589697986500952590195706813170472\
5387265069668971286335322245474865156721299946377659227025219748069576\
0895993932096027520027641920489863095279507385793449828250341732295653\
3809181101532087948181335825805498812728097520936901677028741356923292\
2644964771090329726483682930417491673753430878118054062296678424687465\
6245131742049004832216427665542900559350289936114782223424261285828326\
4671860365001893153741476384896793655691227143987065195306513305688846\
5504885799873853516260611678863354038966005282223744908289479862039722\
8331715198160243676576563833057235963591510865254600363874868376326223\
3429872570955246376830059103531493539857361188688842017482419062608349\
8173034223703984133264282699210740455065589666674834536567489060715777\
4441475484243882201336628162741169867245763301760589124380273199798408\
8305950589130911719198776146941477264898934365742508503405073273852990\
3546587114217499635584514475429656959327732862489935076490012861232249\
2446704232200904844779690044774489466704342791971033325818579375177198\
9865742583276770011926585495711579480114327818546199372349313180236079\
1389248808154759564302727311223193005229640892474022665093207969297797\
9723087954832182561714039165214592519432072341006090867558444590500046\
6707963346545638317950978935794173691635274461184852166407791838662429\
40408834876470623546535579027725
Mathar gives a simple scheme for finding better formulas at http://arxiv.org/pdf/0912.3844v3.pdf. I could use some help in programming it (I keep getting erroneous results!). Does anyone get the right results here?
![enter image description here][9]
Below, where the upper limit of the following integrals shows Infinity, it is meant to be the ultraviolet limit of the sequence, as mentioned by Mathar here:
![enter image description here][10]
**Until further notice in this post, when we compute the imaginary part of M1, we will be concerned with the imaginary part's absolute value only.**
I derived a new formula for computing the integral analog of the MRB constant:
f[x_] := x^(1/x);
-((2 I)/Pi^3) + 1/Pi^2 - (2 I)/Pi + (I/Pi)^3*
 Integrate[(-1)^x*D[f[x], {x, 3}], {x, 1, Infinity}]
In traditional form that is M1=
![enter image description here][11]
Using it I computed 2000 digits in only 10.8 minutes:
In[131]:= Timing[f[x_] = x^(1/x);
 a = N[1/Pi^2 + (1/Pi)^3*
    NIntegrate[Sin[Pi*x]*D[f[x], {x, 3}], {x, 1, Infinity},
     WorkingPrecision -> 4000], 2000];
 b = N[2/Pi^3 + 2/Pi + (1/Pi)^3*
    NIntegrate[Cos[Pi*x]*D[f[x], {x, 3}], {x, 1, Infinity},
     WorkingPrecision -> 4000], 2000];
 Print[N[Sqrt[a^2 + b^2], 2000]]]
During evaluation of In[131]:= 0.68765236892769436980931240936544016493963738490362254179507101010743366253478493706862729824049846818873192933433546612328628766540945756595772115802556504162846251439250971205896979865009525901957068131704725387265069668971286335322245474865156721299946377659227025219748069576089599393209602752002764192048986309527950738579344982825034173229565338091811015320879481813358258054988127280975209369016770287413569232922644964771090329726483682930417491673753430878118054062296678424687465624513174204900483221642766554290055935028993611478222342426128582832646718603650018931537414763848967936556912271439870651953065133056888465504885799873853516260611678863354038966005282223744908289479862039722833171519816024367657656383305723596359151086525460036387486837632622334298725709552463768300591035314935398573611886888420174824190626083498173034223703984133264282699210740455065589666674834536567489060715777444147548424388220133662816274116986724576330176058912438027319979840883059505891309117191987761469414772648989343657425085034050732738529903546587114217499635584514475429656959327732862489935076490012861232249244670423220090484477969004477448946670434279197103332581857937517719898657425832767700119265854957115794801143278185461993723493131802360791389248808154759564302727311223193005229640892474022665093207969297797972308795483218256171403916521459251943207234100609086755844459050004667079633465456383179509789357941736916352744611848521664077918386624294040883487647062354653558109265769644276994369741555722263494599492834558291937955573706480722982389806312472239746286527176248883116124285469947303667188075506826507811479428582807366599407544908560990699866167233307144245764835741501174979679166078765231145175411199825822532170091858833628202128777966026600647843068442894310401343003939117236867245656732686719139206716028255819141802331701942027248337771633882445225049334329008827371320849006472846226868011129149192754883153995560921671208059
671732704499253517327447921147157
Out[131]= {653.145, Null}
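As a rough cross-check of that formula, here is a standard-library Python sketch: the third derivative of f(x) = x^(1/x) is approximated by central finite differences, the integrals by Simpson's rule, and the infinite upper limit is truncated at x = 401 (the derivative decays fast enough that the tail is negligible at this accuracy). It is an illustration only, not a substitute for the high-precision NIntegrate run:

```python
import math

def f(x):
    # f(x) = x^(1/x)
    return math.exp(math.log(x) / x)

def d3f(x, h=1e-3):
    # central finite-difference approximation of f'''(x)
    return (f(x + 2*h) - 2*f(x + h) + 2*f(x - h) - f(x - 2*h)) / (2 * h**3)

def simpson(g, lo, hi, n):
    # composite Simpson's rule; n must be even
    step = (hi - lo) / n
    total = g(lo) + g(hi)
    total += 4 * sum(g(lo + i * step) for i in range(1, n, 2))
    total += 2 * sum(g(lo + i * step) for i in range(2, n, 2))
    return total * step / 3

sin_int = simpson(lambda x: math.sin(math.pi * x) * d3f(x), 1.0, 401.0, 40000)
cos_int = simpson(lambda x: math.cos(math.pi * x) * d3f(x), 1.0, 401.0, 40000)

a = 1 / math.pi**2 + sin_int / math.pi**3                  # real part, as in the code above
b = 2 / math.pi**3 + 2 / math.pi + cos_int / math.pi**3    # imaginary part's absolute value
mag = math.hypot(a, b)
print(mag)   # close to 0.687652..., the magnitude printed above
```

This reproduces the first few digits of the 2000-digit result, which is all a double-precision finite-difference scheme can be expected to do.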
I am presently computing 10,000 digits using that formula. Come back here for results!
That formula didn't work out; I will try one of the following formulas.
Here are two more, more advanced formulas; remember f(x) is x^(1/x):
![enter image description here][12]
I did finish a 5,000 digit computation using M1=
![enter image description here][13]
in 48.11 minutes.
Here are the 5000 digits of the magnitude:
0.687652368927694369809312409365440164939637384903622541795071010107433662534784937068627298240498468188731929334335466123286287665409457565957721158025565041628462514392509712058969798650095259019570681317047253872650696689712863353222454748651567212999463776592270252197480695760895993932096027520027641920489863095279507385793449828250341732295653380918110153208794818133582580549881272809752093690167702874135692329226449647710903297264836829304174916737534308781180540622966784246874656245131742049004832216427665542900559350289936114782223424261285828326467186036500189315374147638489679365569122714398706519530651330568884655048857998738535162606116788633540389660052822237449082894798620397228331715198160243676576563833057235963591510865254600363874868376326223342987257095524637683005910353149353985736118868884201748241906260834981730342237039841332642826992107404550655896666748345365674890607157774441475484243882201336628162741169867245763301760589124380273199798408830595058913091171919877614694147726489893436574250850340507327385299035465871142174996355845144754296569593277328624899350764900128612322492446704232200904844779690044774489466704342791971033325818579375177198986574258327677001192658549571157948011432781854619937234931318023607913892488081547595643027273112231930052296408924740226650932079692977979723087954832182561714039165214592519432072341006090867558444590500046670796334654563831795097893579417369163527446118485216640779183866242940408834876470623546535581092657696442769943697415557222634945994928345582919379555737064807229823898063124722397462865271762488831161242854699473036671880755068265078114794285828073665994075449085609906998661672333071442457648357415011749796791660787652311451754111998258225321700918588336282021287779660266006478430684428943104013430039391172368672456567326867191392067160282558191418023317019420272483377716338824452250493343290088273713208490064728462268680111291491927548831539955609216712080596717327044992535173274475292082
97180672654123457301218758892278525894167935930983363218877512533994251978272092700003994136520699813263053327399132641690231179063314931546906927612775633995348209911166678724589467821767106592498663827057034363632241807121831546175498178011687284590439293322231263406301066863589072717290630291441982684113819198880100231182613587798104863611185433976009254862585527222843445901958943153561148829083242874018226480554274231391324767376148485531787767908124831873688579979114662856184612164534836370699371440464263768724668291617743681719766849740663590277737977490693183461320266666793472116774276618408124767965369796362732668987556797338128876129264558867657737417548617146808592137056879602982206609613881069490166381528825180204703315896719667069923077454352649723496033985893188309150391579573916059639453655188856334980355047281560296288150836680499821806918067869468571687709518088408966653716009356556714281694904914038988996962213833530636987279769672200413448893419914190954063100962251649102614676944333201213024711868954772741991675045198246947499574872027800654821823797116399297131866662866832215332914761325880983081211272181775518951539503852063119472301382766303820851467743266039356123495461914463960644386394228342211998370152351720235034997434035743513051754761571835043769475528640144621307760159481496713401409374957729200400650100318226988524015127382509490642900236553851499823658269458873976032051355393161653806016080446394196719312454167915154602448638624354575153334932298393406734174580316934939632892851077461038399470015366439910136971186909599331204517462262508377673477745789645309425145559198802530351403897927622891172233239135167420567162398873965477371498335087310395422796362380227536212159184529243644094285328763286873653399867593200891823468738537356817916009007206857590792983184556882143118383332812491747733056313117179696094921120670802012310012864110800437831852620698327457619035904268498030693438632685623213366864129523404256345542376567721287706234
359125016588483777876970236084456277023948551334490591022594253744077631232660869593809453087749830900393202787736482133628148979992109544954840067942735030391105496026321872468122542495017023785810605820545392820104069279893067324597299043883381251767370331206913429284614563732308018369972360638019778425246546329838131639355043236388708044857300408692365733932897876809202025693305332974091411983635619038514442263783801745983300121464879550146672827072002317686396598587702487509572349422593441184802476344187280014450860069307120621758277552124841158659386176036703247124389223327008210072318671884895179305778728051888524412158486781863155034447221379906386062559915129172725833420555901857729690605950941678587057025641848365090809750870051863842805803189784976076099574956436664131457150096711473033060684065060747340764998195621425524824611657787212347497307297184843276100338110267863618974154272345482369968216663233417338501929114697679974461999040589290327155974468087040862022522065912789
I'm getting closer to 10K digits of M1: Using ![enter image description here][14] , where f(x)=x^(1/x).
I got approx. 10K digits of the imaginary part, but the real part was a little garbled.
Finally, using
![enter image description here][15] , where f(x) = x^(1/x),
I got about 10,000 digits of M1 in about 12 hours. (It showed that the 5,000 digit computation was correct to only 4,979 digits, though.)
Here is a rough program to get it:
f[x_]:=x^(1/x); Print[DateString[]]; Print[T0 = SessionTime[]]; prec = 10000;
Timing[Print[
a = N[Re[-(136584/Pi^10) - (34784*I)/Pi^9 +
5670/Pi^8 + (786*I)/Pi^7 - 90/Pi^6 -
(4*I)/Pi^5 - 3/Pi^4 - (2*I)/Pi^3 +
1/Pi^2 - (2*I)/Pi] -
(1/Pi)^10*
NIntegrate[Cos[Pi*x]*D[f[x], {x, 10}], {x, 1, Infinity},
WorkingPrecision -> prec,
PrecisionGoal -> prec], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[
N[b = -Im[-(136584/Pi^10) - (34784*I)/Pi^9 +
5670/Pi^8 + (786*I)/Pi^7 - 90/Pi^6 -
(4*I)/Pi^5 - 3/Pi^4 - (2*I)/Pi^3 +
1/Pi^2 - (2*I)/Pi] +
(1/Pi)^10*
NIntegrate[Sin[Pi*x]*D[f[x], {x, 10}], {x, 1, Infinity},
WorkingPrecision -> prec,
PrecisionGoal -> prec], prec]]]; Print[
SessionTime[] - T0, " seconds"];
Print[N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];
See attached 10000MKB.pdf and 10KMKB.nb for work and digits.
On May 5, I computed another 10,000 digits in 9.55 hours; see the attached faster10KMKB.
On May 6, I computed another 10,000 digits in a blistering 5.1 hours; see the attached fastest10KMKB.nb.
**On May 9, I improved that timing to 4.8 hours (17355 seconds). Here is the code I used:**
d = 15; f[x_] = x^(1/x); ClearAll[a, b, h];
h[n_] := Sum[
StirlingS1[n, k]*Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1,
n}]; h[0] = 1; g =
2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}]; Print[
DateString[]];
Print[T0 = SessionTime[]]; prec = 10000;
Print[N[a = -Re[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[b =
Im[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"]; Print[
N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];
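The coefficients h[n] defined in that code (via StirlingS1) can be cross-checked outside Mathematica against the explicit constants quoted earlier in this thread (1/Pi^2, -2I/Pi^3, -3/Pi^4, -4I/Pi^5, -90/Pi^6, 786I/Pi^7, ...). Here is a minimal Python sketch, assuming the standard recurrence for signed Stirling numbers of the first kind:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def s1(n, k):
    # Signed Stirling numbers of the first kind (Mathematica's StirlingS1[n, k]),
    # via the recurrence s(n, k) = s(n-1, k-1) - (n-1)*s(n-1, k)
    if n == k:
        return 1
    if k < 1 or k > n:
        return 0
    return s1(n - 1, k - 1) - (n - 1) * s1(n - 1, k)

def h(n):
    # h[n] exactly as defined in the Mathematica code above; h[0] = 1
    if n == 0:
        return 1
    return sum(s1(n, k) * sum((-j) ** (k - j) * comb(k, j) for j in range(k + 1))
               for k in range(1, n + 1))

print([h(n) for n in range(10)])
# [1, 1, -2, 3, 4, -90, 786, -5670, 34784, -136584]
```

These are exactly the coefficients of 1/Pi^(n+1) appearing (with the signs and imaginary units supplied by -I^(n+1)) in the explicit 10th-order formula quoted earlier.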
April 20, 2015
--------------
FelisPhasma has been helpful in providing me with a little competition in computing the integral analog of the MRB constant. See [https://github.com/FelisPhasma/MKB-Constant.][16]
I've never done this before, but I would so much like to see others breaking these records that I'm going to give away a program that is practically guaranteed to break my record of 10,000 digits for the integral analog of the MRB constant in a day or so. The program could use some "clean up" if you care to go that far. (The imaginary part is given as a positive, real constant: it actually starts with a negative sign and, of course, ends with I.)
Here it is:
f[x_] = x^(1/x); Print[DateString[]]; Print[
T0 = SessionTime[]]; prec = 11000; Timing[
Print[a =
N[Re[(633666648 I)/\[Pi]^13 -
33137280/\[Pi]^12 - ((824760 I)/\[Pi]^11) -
136584/\[Pi]^10 - (34784 I)/\[Pi]^9 +
5670/\[Pi]^8 + (786 I)/\[Pi]^7 - 90/\[Pi]^6 - (4 I)/\[Pi]^5 -
3/\[Pi]^4 - (2 I)/\[Pi]^3 +
1/\[Pi]^2 - (2 I)/\[Pi]] + (1/Pi)^12*
NIntegrate[Cos[Pi x]*D[f[x], {x, 12}], {x, 1, Infinity},
WorkingPrecision -> prec, PrecisionGoal -> prec], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[b = -Im[(633666648 I)/\[Pi]^13 -
33137280/\[Pi]^12 - ((824760 I)/\[Pi]^11) -
136584/\[Pi]^10 - (34784 I)/\[Pi]^9 +
5670/\[Pi]^8 + (786 I)/\[Pi]^7 - 90/\[Pi]^6 - (4 I)/\[Pi]^5 -
3/\[Pi]^4 - (2 I)/\[Pi]^3 +
1/\[Pi]^2 - (2 I)/\[Pi]] - (1/Pi)^13*
NIntegrate[Cos[Pi x]*D[f[x], {x, 13}], {x, 1, Infinity},
WorkingPrecision -> prec, PrecisionGoal -> prec],
prec]]]; Print[SessionTime[] - T0, " seconds"]; Print[
N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];
Will anyone let me know you are running this program to break my record?
Edit: On Sat 2 May 2015 19:03:45, I started a 15,000 digit, new-record computation of the real and imaginary parts and magnitude of the integral analog of the MRB constant (where the imaginary part is given as a positive, real constant), using the following code.
f[x_] = x^(1/x); ClearAll[a];
h[n_] := Sum[
StirlingS1[n, k]*Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1,
n}]; h[0] = 1; g = -2 I/Pi +
Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, 18}]; Print[DateString[]];
Print[T0 = SessionTime[]]; prec = 15000;
Print[N[a =
Re[g] + (1/Pi)^19*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, 19}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[b = -Im[g] + (1/Pi)^19*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, 19}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"]; Print[
N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];
The formula behind this computation is ![enter image description here][17]
Edit: The program took 33.75 hours. The full run is attached as 15KMKB3.nb.
Edit May 9, 2015: **I more than halved my time! I computed 15,000 digits in 14.83 hours. See fastestMKB15K.nb. The faster formula is**
![enter image description here][18]
If you still want me to write out code for more digits, so you can break that record, let me know.
May 11, 2015
------------
Still talking about the integral analog of the MRB constant:![enter image description here][19]
Here are my speed records -- can you beat any of them?
![enter image description here][20]
Here is a graph of those speed records with a trendline:
![enter image description here][21]
The 20,000 digit run is attached as MKB20K.nb and MKB20K.pdf.
Here is the algorithm used:
![enter image description here][22]
Here is the code:
d = 30; f[x_] = x^(1/x); ClearAll[a, b, h];
a[n_] := Sum[
StirlingS1[n, k]*Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1,
n}]; a[0] = 1; g =
2 I/Pi - Sum[-I^(n + 1) a[n]/Pi^(n + 1), {n, 1, d}]; Print[
DateString[]];
Print[T0 = SessionTime[]]; prec = 20000;
Print[N[a = -Re[g] - (1/Pi)^(d)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[b =
Im[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"]; Print[
N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];
I just now completed a 25,000 digit computation. It took 63.7 hours and confirmed the 20,000 digits. I updated MKB20K.nb and MKB20K.pdf.
Here is the algorithm and the code I used:
![enter image description here][23]
d = 35; f[x_] = x^(1/x); ClearAll[a, b, h];
h[n_] := Sum[
StirlingS1[n, k]*Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1,
n}]; h[0] = 1; g =
2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}]; Print[
DateString[]];
Print[T0 = SessionTime[]]; prec = 25000;
Print[N[a = -Re[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[b =
Im[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"]; Print[
N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];
Here is a new graph of those speed records with a trendline:
![enter image description here][24]
**Edit:**
On Tue 26 May 2015 06:21:00, I started a 30,000 digit computation using the following code.
Does anyone else want to try to break the record?
$MaxExtraPrecision = 100; d = 43; f[x_] = x^(1/x); ClearAll[a, b, h];
h[n_] := Sum[
StirlingS1[n, k]*Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1,
n}]; h[0] = 1; g =
2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}]; Print[
DateString[]];
Print[T0 = SessionTime[]]; prec = 30000;
Print[N[a = -Re[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[b =
Im[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"]; Print[
N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];
**Edit:**
My first full 30,000 digit run finished on Sun 31 May 2015 00:45:09.
Time span: {"4.767 days", "114.4 hours", "6864 minutes", "411849 seconds"}
See attached MKB30K2.nb worksheet.
Here is an updated speed record plot, with trendline. (I think the 30,000 digit run can be done faster.)
![enter image description here][25]
Here is an extensive record of the records for computing the integral analog of the MRB constant:
![enter image description here][26]
![enter image description here][62]
Here is a graph of those records. (The progression of computed digits is so extreme, it is almost unbelievable!)
![enter image description here][27]
June 5, 2015
------------
I think I came up with a rough program that computes any "prec" digits of the integral analog of the MRB constant.
It chooses d, the best (or close to the best) order of derivative to use in Mathar's algorithm mentioned in a previous post (formula (12) at [http://arxiv.org/pdf/0912.3844v3.pdf][28]), then uses the appropriate code to integrate the integral analog of the constant. It shows the real and imaginary parts as positive real constants and the value of the integral, and gives some timings. It could use a lot of cleanup!
I hope someone can help me test it with varying values of prec. Please reply with your intentions to use it and your results. If no one else can clean it up, I will after I've tested it more.
prec = 2000; d = Ceiling[0.264086 + 0.00143657 prec]; If[
Mod[d, 4] == 0, f[x_] = x^(1/x); ClearAll[a, b, h];
a[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}]; a[0] = 1;
g = 2 I/Pi - Sum[-I^(n + 1) a[n]/Pi^(n + 1), {n, 1, d}];
Print[DateString[]];
Print[T0 = SessionTime[]];
Print[N[a = -Re[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[b =
Im[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];,
If[Mod[d, 4] == 1, f[x_] = x^(1/x); ClearAll[a, b, h];
h[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];
h[0] = 1; g = 2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}];
Print[DateString[]];
Print[T0 = SessionTime[]];
Print[N[
a = -Re[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[
b = Im[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];,
If[Mod[d, 4] == 2, f[x_] = x^(1/x); ClearAll[a, b, h];
a[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];
a[0] = 1; g = 2 I/Pi - Sum[-I^(n + 1) a[n]/Pi^(n + 1), {n, 1, d}];
Print[DateString[]];
Print[T0 = SessionTime[]];
Print[N[
a = -Re[g] - (1/Pi)^(d)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[
b = Im[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];,
If[Mod[d, 4] == 3, f[x_] = x^(1/x); ClearAll[a, b, h];
h[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];
h[0] = 1; g = 2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}];
Print[DateString[]];
Print[T0 = SessionTime[]];
Print[
N[a = -Re[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[
N[b = Im[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];]]]]
Here are some of my best timings to compare with the program's results:
digits seconds
1000 38.8650545
2000 437.4906125
3000 889.473875
4000 1586.000714
5000 2802.591704
6000 4569.41586
7000 6891.057587
8000 9659.318566
9000 13491.43967
10000 17355
11000
12000
13000
14000
15000 53385.02323
16000
17000
18000
19000
20000 123876.4331
21000
22000
23000
24000
25000 229130.3088
26000
27000
28000
29000
30000 411848.6322
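For reference, the derivative order d that the automated program picks for a given prec can be reproduced directly. This is a sketch of just the Ceiling heuristic from the code above; for example, it selects d = 4 for 2000 digits and d = 15 for 10,000 digits (the same order used in the 10,000 digit run earlier), though for some of the record runs above I chose d by hand instead:

```python
import math

def derivative_order(prec):
    # d = Ceiling[0.264086 + 0.00143657*prec], as in the automated program above
    return math.ceil(0.264086 + 0.00143657 * prec)

print(derivative_order(2000), derivative_order(10000))   # 4 15
```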
**Edit:** On Fri 5 Jun 2015 20:41:45 I started a 35,000 digit computation with the above "automated" program.
**Edit:** The 35,000 digit computation should be done by 10:24:38 am EDT | Sunday, June 14, 2015. In the above "automated" program I forgot to adjust the MaxExtraPrecision, but that shouldn't affect the accuracy in that program. It already computed the real part of the integral to 35,000 digits and the first 30,000 of those are the same as the real part of my previously mentioned 30,000 digit calculation. I will keep you posted.
**Edit:** The 35,000 digit computation finished on Sun 14 Jun 2015 06:52:29, taking 727844 seconds. It is attached as 35KMKB.nb. The first 30,000 of those digits are the same as the ones of my previously mentioned 30,000 digit calculation. (That shows the computation didn't take any "wild" turns because of the lack of MaxExtraPrecision.) Furthermore, it is a good check of the 30,000 digit run: each of the bigger computations checks the smaller ones, because they are all calculated with distinct formulas using different orders of the derivative of x^(1/x).
Feb 28, 2016
------------
For 2000 digits, Mathematica 10.2.0 shows some remarkable improvement over 10.1.2 with the above "automated" program for computing the digits of the integral analog of the MRB constant.
Below I will post some speed records that are strictly what the program produces in V 10.2.0, with no picking and choosing of the methods by a human being. Some results will naturally be slower than my previously mentioned speed records, because I tried so very many methods and recorded only the fastest results.
digits seconds
2000 256.3853590
3000 794.4361122
4000 1633.5822870
5000 2858.9390025
10000 17678.7446323
20000 121431.1895170
40000 (I got an error message)
to be continued
I have to change the program for 40,000 digits! I'll post the new program when I get 40,000 to work.
As of Wed 29 Jul 2015 11:40:10, one of my computers was happily and busily churning away at 40,000 digits of the integral analog of the MRB constant, using the following formula.
**Edit: Mathematica crashed at 11:07 PM 8/4/2016**
(I used MKB as a symbol for the integral analog because it is called the MKB constant. You can find the name "MKB constant" at http://www.ebyte.it/library/educards/constants/MathConstants.html.) If you can weed through my code at the bottom of this reply, **you might want to check the formula for the placement of pluses, minuses and imaginary units!!!** A little hint when checking whether the formula matches the code: d is 80, so Mod[d, 4] == 0.
f[x_] = x^(1/x); a[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}]; a[0] = 1;
g = 2 I/Pi - Sum[-I^(n + 1) a[n]/Pi^(n + 1), {n, 1, 80}]
MKB = -g + (I/Pi)^81*
Integrate[f[x]*D[f[x], {x, 81}], {x, 1, Infinity}]
Here is the code, cleaned up a little. **This is the version from Aug 6, 2015, 4:52 pm; for the first time, the imaginary part is signed and shown multiplied by the imaginary unit!**
Block[{$MaxExtraPrecision = 200}, prec = 4000; f[x_] = x^(1/x);
ClearAll[a, b, h];
Print[DateString[]];
Print[T0 = SessionTime[]];
If[prec > 35000, d = Ceiling[0.002 prec],
d = Ceiling[0.264086 + 0.00143657 prec]];
h[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];
h[0] = 1;
g = 2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}];
sinplus1 :=
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)];
cosplus1 :=
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)];
middle := Print[SessionTime[] - T0, " seconds"];
end := Module[{}, Print[SessionTime[] - T0, " seconds"];
Print[c = Abs[a + b]]; Print[DateString[]]];
If[Mod[d, 4] == 0,
Print[N[a = -Re[g] - (1/Pi)^(d + 1)*sinplus1, prec]];
middle;
Print[N[b = -I (Im[g] - (1/Pi)^(d + 1)*cosplus1), prec]];
end];
If[Mod[d, 4] == 1,
Print[N[a = -Re[g] - (1/Pi)^(d + 1)*cosplus1, prec]];
middle;
Print[N[b = -I (Im[g] + (1/Pi)^(d + 1)*sinplus1), prec]]; end];
If[Mod[d, 4] == 2,
Print[N[a = -Re[g] + (1/Pi)^(d + 1)*sinplus1, prec]];
middle;
Print[N[b = -I (Im[g] + (1/Pi)^(d + 1)*cosplus1), prec]];
end];
If[Mod[d, 4] == 3,
Print[N[a = -Re[g] + (1/Pi)^(d + 1)*cosplus1, prec]];
middle;
Print[N[b = -I (Im[g] - (1/Pi)^(d + 1)*sinplus1), prec]];
end];]
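The four If branches in the program above differ only in which trig kernel multiplies the derivative and in the signs on the integrals. The table below transcribes each branch as a (kernel, sign) pair for the real part a and for the integral inside b = -I(Im[g] ± ...); the small derivation recovers the same table from z = i^(d+1), since (as I read the formulas) the integral term is z times the cosine integral plus i times the sine integral. A Python sketch of the bookkeeping only, not of the numerics:

```python
# Each If[Mod[d, 4] == r, ...] branch above, transcribed as
# ((kernel, sign) in a = -Re[g] + sign*integral,
#  (kernel, sign) in b = -I(Im[g] + sign*integral))
TRANSCRIBED = {
    0: (("Sin", -1), ("Cos", -1)),
    1: (("Cos", -1), ("Sin", +1)),
    2: (("Sin", +1), ("Cos", +1)),
    3: (("Cos", +1), ("Sin", -1)),
}

def derived(d):
    # Recover the branch from z = i^(d+1): the integral term is z*(C + i*S),
    # where C and S are the cosine and sine integrals times (1/pi)^(d+1)
    z = {0: 1j, 1: -1 + 0j, 2: -1j, 3: 1 + 0j}[d % 4]
    if z.real != 0:
        return (("Cos", int(z.real)), ("Sin", -int(z.real)))
    return (("Sin", -int(z.imag)), ("Cos", -int(z.imag)))

for d in (4, 15, 30, 43):   # derivative orders that appear in runs above
    print(d % 4, derived(d))
```

So the sin/cos alternation and the sign flips are just the four values of i^(d+1) written out, which is a quick way to audit "the placement of pluses, minuses and imaginary units."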
Come back to see if I decided whether to try the 40K run again.
EDIT: It looks like I've only got one more test for the program (if it passes) before I retry the 40,000 digit calculation!
**EDIT:On Thu 6 Aug 2015 17:23:18, I restarted the 40K run with Windows 10.**
EDIT: My first thought was that the program took up too much RAM, apparently over 115 GB! (I have 64 GB installed and a 51 GB paging file; nevertheless, Windows 10 closed the Mathematica kernel to keep the computer from losing data.)
Can someone else try the 40K run on their computer? It should take two weeks on a fast one. Please let me know if you try it, and let me know the results, so I will know whether the problem is with my computer. If two weeks is too great a commitment, could you take note of the RAM used for two progressively larger runs, like 20K and 30K? I will do the same, and we can compare notes. Thank you!
EDIT: I've been monitoring memory usage for smaller runs and found the program only uses minimal memory! This makes the action of Windows 10 (closing Mathematica kernel to avoid data loss) all the more a mystery! Could the 40K run really use up all of that RAM?
I know there are quite a few of you viewing this post; however, is anyone out there working on these calculations?
Aug 10, 2016
------------
V 11 is about 1.25 times faster with my newest program for calculating MKB (the integral analog of the MRB sum).
V 10.4 calculated 20,000 digits in 121431.1895170 seconds, and V 11 did it in 96979.6545388 seconds.
I've got a little more testing to do, (about 1 day's worth), then I'll try 40,000 digits again, which should take about 12 days.
I will post all my updates here, so you might want to save this message as a favorite so you won't lose it.
Update 1
========
The 40K run started automatically, against my wishes, on Thu 11 Aug 2016 15:42:08 (due to my pasting two codes at once). I'll keep you informed of how it goes.
Update 2
========
Windows 10 is pushing an update. Wednesday is the latest it will let me restart.
I will restart now with all the updates I can get, then defer further ones and hopefully get 12 restart-free days to do my 40K.
Update 3
========
I ran all the updates I could find, deferred further ones, and restarted the 40K run on Sun 14 Aug 2016 10:32:40.
Update 4
========
Windows 10 stopped the calculation! AGAIN!
Can anyone else try it and see if you get anywhere?
Here is my latest code:
(*Other program: for large calculations. Tested for 1000-35000 digits;
see the post at
http://community.wolfram.com/groups/-/m/t/366628?p_p_auth=KA7y1gD4
and search for "analog" to find pertinent replies. Designed to include
40000 digits. A157852 is saved as c, the real part as a, and the
imaginary part as b.*)
Block[{$MaxExtraPrecision = 200},
 prec = 40000 (*Replace 40000 with the number of desired digits; 40000
 digits should take two weeks on a 3.5 GHz Pentium processor.*);
f[x_] = x^(1/x);
ClearAll[a, b, h];
Print[DateString[]];
Print[T0 = SessionTime[]];
If[prec > 35000, d = Ceiling[0.002 prec],
d = Ceiling[0.264086 + 0.00143657 prec]];
h[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];
h[0] = 1;
g = 2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}];
sinplus1 :=
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)];
cosplus1 :=
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)];
middle := Print[SessionTime[] - T0, " seconds"];
end := Module[{}, Print[SessionTime[] - T0, " seconds"];
Print[c = Abs[a + b]]; Print[DateString[]]];
If[Mod[d, 4] == 0,
Print[N[a = -Re[g] - (1/Pi)^(d + 1)*sinplus1, prec]];
middle;
Print[N[b = -I (Im[g] - (1/Pi)^(d + 1)*cosplus1), prec]];
end];
If[Mod[d, 4] == 1,
Print[N[a = -Re[g] - (1/Pi)^(d + 1)*cosplus1, prec]];
middle;
Print[N[b = -I (Im[g] + (1/Pi)^(d + 1)*sinplus1), prec]]; end];
If[Mod[d, 4] == 2,
Print[N[a = -Re[g] + (1/Pi)^(d + 1)*sinplus1, prec]];
middle;
Print[N[b = -I (Im[g] + (1/Pi)^(d + 1)*cosplus1), prec]];
end];
If[Mod[d, 4] == 3,
Print[N[a = -Re[g] + (1/Pi)^(d + 1)*cosplus1, prec]];
middle;
Print[N[b = -I (Im[g] - (1/Pi)^(d + 1)*sinplus1), prec]];
end];] (*Marvin Ray Burns,Aug 06 2015*)
Sometime in 2017
-----------------
To try to keep Windows 10 from closing Mathematica during the computation, I tried the instructions found at https://www.autoitscript.com/forum/topic/177749-stopping-windows-10-from-auto-closing-programs-to-free-up-ram/.
I will record progress in this spot as I did before.
**UPDATE**
I followed the memory usage on my computer, and it did use around 64 GB of RAM.
Then Windows closed down the Mathematica kernel. I assume that if I can ever afford to maximize my
RAM to its 128 GB limit, the computation will be successful!
Anyone have better luck?
Nov, 2017
---------
Concentrating on integral analog of the MRB constant:
-----------------------------------------------------
**Search "integral analog" in the above messages for an understanding of the integral analog of the MRB constant.**
**And search "For 2000 digits Mathematica 10. 2.0" for my history of calculating 40,000 digits of it.**
The basic program I wrote to calculate the integral analog of the MRB constant is
(*Other program: for large calculations. Tested for 1000-35000 digits;
see the post at
http://community.wolfram.com/groups/-/m/t/366628?p_p_auth=KA7y1gD4
and search for "analog" to find pertinent replies. Designed to include
40000 digits. A157852 is saved as c, the real part as a, and the
imaginary part as b.*)
Block[{$MaxExtraPrecision = 200},
 prec = 40000 (*Replace 40000 with the number of desired digits; 40000
 digits should take two weeks on a 3.5 GHz Pentium processor.*);
f[x_] = x^(1/x);
ClearAll[a, b, h];
Print[DateString[]];
Print[T0 = SessionTime[]];
If[prec > 35000, d = Ceiling[0.264086 + 0.0017 prec],
d = Ceiling[0.264086 + 0.00143657 prec]];
h[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];
h[0] = 1;
g = 2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}];
sinplus1 :=
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)];
cosplus1 :=
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)];
middle := Print[SessionTime[] - T0, " seconds"];
end := Module[{}, Print[SessionTime[] - T0, " seconds"];
Print[c = Abs[a + b]]; Print[DateString[]]];
If[Mod[d, 4] == 0,
Print[N[a = -Re[g] - (1/Pi)^(d + 1)*sinplus1, prec]];
middle;
Print[N[b = -I (Im[g] - (1/Pi)^(d + 1)*cosplus1), prec]];
end];
If[Mod[d, 4] == 1,
Print[N[a = -Re[g] - (1/Pi)^(d + 1)*cosplus1, prec]];
middle;
Print[N[b = -I (Im[g] + (1/Pi)^(d + 1)*sinplus1), prec]]; end];
If[Mod[d, 4] == 2,
Print[N[a = -Re[g] + (1/Pi)^(d + 1)*sinplus1, prec]];
middle;
Print[N[b = -I (Im[g] + (1/Pi)^(d + 1)*cosplus1), prec]];
end];
If[Mod[d, 4] == 3,
Print[N[a = -Re[g] + (1/Pi)^(d + 1)*cosplus1, prec]];
middle;
Print[N[b = -I (Im[g] - (1/Pi)^(d + 1)*sinplus1), prec]];
end];] (*Marvin Ray Burns*)
I think I found out why the integral analog of the MRB constant is so hard to calculate to prec=40000 digits!
I've been using too high an order of the derivative of x^(1/x). I've been running out of memory because of using the 80th derivative from `d = Ceiling[0.002 prec]`, because the 58th derivative from `Ceiling[0.264086 + 0.00143657 prec]` was apparently too small, leaving an error message. I just now asked myself: why make such a big jump?
When my big computer gets back from its tuneup I think I will try `Ceiling[0.00146 prec]` = 59th derivative.
EDIT
----
I tried `Ceiling[0.00146 prec]` and `Ceiling[0.00145 prec]` in Mathematica 11.0 and lost the kernel both times after 6 - 12 hours!
I'm now trying `Ceiling[0.0017 prec]` with v 10.4. It's been over 12 hours and I've not lost the kernel yet. Wish me luck!
EDIT
----
I got the following error message and a real part that does not agree with previous computations.
"NIntegrate failed to converge to prescribed accuracy after 9 recursive bisections in x near {x} = {<<42008>>}. NIntegrate obtained -<<42012>> and <<42014>> for the integral and error estimates."
I'm now trying `Ceiling[0.0018 prec]` with v 10.4....
`Ceiling[0.0018 prec]` with v 10.4 gave the same error.
I'm working on a new program that uses less memory; stay tuned!
March 13, 2018
--------------
I'm now trying `Ceiling[0.0019 prec]` with v 11.2...., at Mon 12 Mar 2018 05:40:13
Here is a record of the memory used by the program. At times the computer may use significantly more.
"Mon 12 Mar 2018 10:30:00" 14 GB DDR3 RAM
"Mon 12 Mar 2018 13:00:00" 15 GB DDR3 RAM
"Mon 12 Mar 2018 14:00:00" 16 GB DDR3 RAM
"Mon 12 Mar 2018 14:30:00" 17 GB DDR3 RAM
"Mon 12 Mar 2018 15:00:00" 18 GB DDR3 RAM
"Mon 12 Mar 2018 22:30:00" 24 GB DDR3 RAM
"Mon 12 Mar 2018 24:00:00" 26 GB DDR3 RAM
"Tue 13 Mar 2018 06:30:00" 33 GB DDR3 RAM
"Tue 13 Mar 2018 07:30:00" 14 GB DDR3 RAM
"Tue 13 Mar 2018 08:00:00" 15 GB DDR3 RAM
"Tue 13 Mar 2018 08:30:00" 16 GB DDR3 RAM
"Tue 13 Mar 2018 11:30:00" 19 GB DDR3 RAM
"Tue 13 Mar 2018 12:00:00" 20 GB DDR3 RAM
"Tue 13 Mar 2018 14:00:00" 22 GB DDR3 RAM
"Tue 13 Mar 2018 14:30:00" 5 GB DDR3 RAM
"Tue 13 Mar 2018 15:00:00" 6 GB DDR3 RAM
"Tue 13 Mar 2018 16:30:00" 8 GB DDR3 RAM
"Tue 13 Mar 2018 18:30:00" 11 GB DDR3 RAM
"Tue 13 Mar 2018 19:30:00" 13 GB DDR3 RAM
"Tue 13 Mar 2018 20:30:00" 14 GB DDR3 RAM
"Tue 13 Mar 2018 21:00:00" 8 GB DDR3 RAM
"Tue 13 Mar 2018 21:30:00" 11 GB DDR3 RAM
"Wed 14 Mar 2018 07:30:00" 26 GB DDR3 RAM
"Wed 14 Mar 2018 08:00:00" 25 GB DDR3 RAM. Total used by programs 44.54 GB DDR3 RAM.
"Wed 14 Mar 2018 20:00:00" 37 GB DDR3 RAM. Total used by programs 40.32 GB DDR3 RAM.
"Thu 15 Mar 2018 08:00:00" 0 GB DDR3 RAM. Total used by programs 3.84 GB DDR3 RAM.
Update:
-------
V 11.2 cut off its kernel sometime between "Wed 14 Mar 2018 20:00:00" and "Thu 15 Mar 2018 08:00:00."
It seems to me that V11 under Windows 10 cuts off my RAM intensive operations.
The last success I had was using V10.2 under Windows 7. I am trying that combination again, this time for the 40 k digits.
Below is the code I used then and am using now. At first I changed only "prec=35000" to "prec=40000" and got an errant answer for the real part.
And I got memory use starting out at
"Thu 15 Mar 2018 08:53:47" 0.3 GB, total computer use 3.84 GB
"Thu 15 Mar 2018 11:48:47" 04.3 GB, total computer use 7.68 GB
"Thu 15 Mar 2018 13:00:00" 01.3 GB, total computer use 5.12 GB
"Thu 15 Mar 2018 14:00:00" 01.3 GB, total computer use 5.12 GB
So now I also changed the coefficient of prec from "d = Ceiling[0.264086 + 0.00143657 prec]" to "d = Ceiling[ 0.002 prec]." I think I can get by with .002 because 10.3 in Windows 7 seems to use less memory than the V 11's in Windows 10.
prec = 40000; d = Ceiling[0.002 prec]; If[Mod[d, 4] == 0,
f[x_] = x^(1/x); ClearAll[a, b, h];
a[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}]; a[0] = 1;
g = 2 I/Pi - Sum[-I^(n + 1) a[n]/Pi^(n + 1), {n, 1, d}];
Print[DateString[]];
Print[T0 = SessionTime[]];
Print[N[a = -Re[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[b =
Im[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];,
If[Mod[d, 4] == 1, f[x_] = x^(1/x); ClearAll[a, b, h];
h[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];
h[0] = 1; g = 2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}];
Print[DateString[]];
Print[T0 = SessionTime[]];
Print[N[
a = -Re[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[
b = Im[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];,
If[Mod[d, 4] == 2, f[x_] = x^(1/x); ClearAll[a, b, h];
a[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];
a[0] = 1; g = 2 I/Pi - Sum[-I^(n + 1) a[n]/Pi^(n + 1), {n, 1, d}];
Print[DateString[]];
Print[T0 = SessionTime[]];
Print[N[
a = -Re[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[
b = Im[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];,
If[Mod[d, 4] == 3, f[x_] = x^(1/x); ClearAll[a, b, h];
h[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];
h[0] = 1; g = 2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}];
Print[DateString[]];
Print[T0 = SessionTime[]];
Print[
N[a = -Re[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[
N[b = Im[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];]]]]
Memory use:
"Thu 15 Mar 2018 16:30:46" .3 GB. Total used by programs 3.8 GB.
Here is a breakdown of the memory use as of 16:45 March 17, 2018
![enter image description here][30]
Mar 18, 2018, 8:10 AM
------------
Just in case I run out of memory, I increased the size of my paging file!
![enter image description here][31]
"Sun 17 Mar 2018 16:00:00" 15 GB. Total used by programs 49.92 GB.
![enter image description here][32]
![enter image description here][33]
I no longer believe the V 11's use a lot more memory. If I hadn't increased my paging file Windows would have closed Mathematica already!
![enter image description here][34]
I might be slowing the computation down a little, but I don't want to take any chances of running out of memory, so I increased the paging file 1 more time.
![enter image description here][35]
![enter image description here][36]
The computer has been committing up to 160 GB of total RAM for a while now.
![enter image description here][37]
Finally, the committed memory is going down.
![enter image description here][38]
My computer is acting really sluggish right now. Mathematica is using a minimal amount of DDR3 RAM, but the computer is still committing a near-record amount of virtual RAM.
![enter image description here][39]
My computer is acting too funny, so I aborted the evaluation. The kernel remained running and overall memory remained maxed out. I tried to retrieve a and b (the variables with the real and imaginary parts of the solution), but the computer wouldn't recall them for me. The computer won't evaluate any Mathematica operations. I am restarting my computer and inspecting the damage!
Update:
-------
Windows said it found no errors on my hard drive; that's great!
I'm going to replace my Intel 6-core processor with a faster 8-core Intel Xeon E5-2687W v2 CPU, and add an additional hard drive. The new processor and my motherboard both take 128 GB RAM, but ECC modules are the only 16 GB DDR3 DIMMs I can find. I'm not sure if my MSI Big Bang-XPower II will take ECC.
40,000 digits of the integral analog might have to wait for me to get a new system.
I am working on a new program to compute the MRB constant in little steps, and will use it on my new processor.
[1]: https://scholar.google.com/scholar?hl=en&as_sdt=0,15&q=%22MKB%20constant%22&btnG=
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capturemkb2.JPG&userId=366611
[3]: /c/portal/getImageAttachment?filename=3763a.JPG&userId=366611
[4]: http://arxiv.org/abs/0912.3844
[5]: /c/portal/getImageAttachment?filename=462244.JPG&userId=366611
[6]: /c/portal/getImageAttachment?filename=41011.JPG&userId=366611
[7]: /c/portal/getImageAttachment?filename=31733.JPG&userId=366611
[8]: /c/portal/getImageAttachment?filename=82262.JPG&userId=366611
[9]: /c/portal/getImageAttachment?filename=42253.JPG&userId=366611
[10]: /c/portal/getImageAttachment?filename=3763a.JPG&userId=366611
[11]: http://arxiv.org/abs/0912.3844
[12]: /c/portal/getImageAttachment?filename=40936.JPG&userId=366611
[13]: /c/portal/getImageAttachment?filename=106828.JPG&userId=366611
[14]: /c/portal/getImageAttachment?filename=27409.JPG&userId=366611
[15]: /c/portal/getImageAttachment?filename=902710.JPG&userId=366611
[16]: https://github.com/FelisPhasma/MKB-Constant
[17]: /c/portal/getImageAttachment?filename=18.JPG&userId=366611
[18]: /c/portal/getImageAttachment?filename=15K.JPG&userId=366611
[19]: /c/portal/getImageAttachment?filename=aaa.JPG&userId=366611
[20]: /c/portal/getImageAttachment?filename=mkbspeed1.JPG&userId=366611
[21]: /c/portal/getImageAttachment?filename=mkbspeedgraph1.JPG&userId=366611
[22]: /c/portal/getImageAttachment?filename=mdk20K1.JPG&userId=366611
[23]: /c/portal/getImageAttachment?filename=35.JPG&userId=366611
[24]: /c/portal/getImageAttachment?filename=2898recordplota1.jpg&userId=366611
[25]: /c/portal/getImageAttachment?filename=3063a.JPG&userId=366611
[26]: /c/portal/getImageAttachment?filename=29741.JPG&userId=366611
[27]: /c/portal/getImageAttachment?filename=54721a.JPG&userId=366611
[28]: http://arxiv.org/pdf/0912.3844v3.pdf
[29]: /c/portal/getImageAttachment?filename=42253.JPG&userId=366611
[30]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10p140kmarch2018memeory1.jpg&userId=366611
[31]: http://community.wolfram.com//c/portal/getImageAttachment?filename=32901.JPG&userId=366611
[32]: http://community.wolfram.com//c/portal/getImageAttachment?filename=53022.JPG&userId=366611
[33]: http://community.wolfram.com//c/portal/getImageAttachment?filename=78733.jpg&userId=366611
[34]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5.jpg&userId=366611
[35]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6.jpg&userId=366611
[36]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7.jpg&userId=366611
[37]: http://community.wolfram.com//c/portal/getImageAttachment?filename=25548.jpg&userId=366611
[38]: http://community.wolfram.com//c/portal/getImageAttachment?filename=76749.jpg&userId=366611
[39]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10.jpg&userId=366611Marvin Ray Burns2018-04-20T12:06:18ZSolving an equation with complex parameters in Wolfram|Alpha
https://community.wolfram.com/groups/-/m/t/2244163
If one has an equation of the form (y + i/2) g(u, w) = 0, where i denotes √(-1), y is real, u & w are complex, and g(u, w) is an algebraic function of u and w: is the correct solution for y of this equation 'no solution for y' {as is given by Wolfram Alpha Pro for a special case of the algebraic function denoted by g(u, w)}, or is the correct solution 'y = any real number'? This strange problem arose during my recent correspondence with a Prof. of Mathematics at my alma mater of UC Berkeley, who specializes in algebra & number theory and who argued that one could divide both sides of this equation by (y + i/2), since y is defined to be real and so (y + i/2) cannot vanish; he then argued further that one therefore obtains (after this division) the algebraic equation g(u, w) = 0, which is, of course, trivially satisfied for all real y since y does not appear in it, and so this equation places no constraints on (assumed real) y at all ... whatsoever. However, this seems to me to be very poor methodology; does it to you as well? (And please give me a good supporting reference or two with your answer to this question.)Dr. Dennis P Allen Jr.2021-04-14T19:02:50ZParallel Computing and Object Oriented Programming - Part 1
https://community.wolfram.com/groups/-/m/t/1830825
**See Part 2 here:** https://community.wolfram.com/groups/-/m/t/1851278
Wolfram Language includes Parallel Computing Tools, and parallel computing is a first step toward high-performance computing. A combination of the Parallel Computing Tools and Object Oriented Programming, hereafter OOP, introduces a new view of Wolfram Language programming. The author has previously introduced an OOP system for the Wolfram Language, and now shows an example of OOP-based parallel computing.
OOP suits parallel computation well because an instance is basically an independent calculation unit, and you can find that the same code applies in both parallel and mono-kernel environments.
Part 1 concerns calculation with one instance on one local kernel, and the method is explained with an example of Mersenne number calculation. The following are the Wolfram Language code steps, and you can see first that the OOP is composed of basic functions.
step 1; startup local kernels
CloseKernels[];
LaunchKernels[];
kernelList = ParallelEvaluate[$KernelID]
step 2; definition of parallel class using instance preceded method
mq[nam_] := Module[{ins, ine},
nam[set[{ns_, ne_}]] := {ins, ine} = {ns, ne};
go[] := Select[Range[ins, ine], PrimeQ[2^# - 1] &]
];
step 3; definition of object properties with the association list, supposing the number of local kernels is 4.
object = {
Association["name" -> Unique[k], "range" -> {9000, 9399},
"kernel" -> kernelList[[1]]],
Association["name" -> Unique[k], "range" -> {9400, 9699},
"kernel" -> kernelList[[2]]],
Association["name" -> Unique[k], "range" -> {9700, 9899},
"kernel" -> kernelList[[3]]],
Association["name" -> Unique[k], "range" -> {9900, 10000},
"kernel" -> kernelList[[4]]
]};
step 4; using the association list, we will construct instances, then set parameters for each instance.
Map[ParallelEvaluate[mq[#name], #kernel] &, object];
Map[ParallelEvaluate[#name[set[#range]], #kernel] &, object]
step 5; execute parallel calculation (this case used Mac 3.4 GHz Intel Core i5)
ts = SessionTime[];
ans = ParallelEvaluate[go[]];
{SessionTime[] - ts, ans}
result is
{10.542539, {{}, {9689}, {}, {9941}}}
To evaluate the result we can compare with a mono-kernel computation.
mq[mono];
mono[set[{9000, 10000}]];
ts = SessionTime[];
ans = go[];
{SessionTime[] - ts, ans}
result is
{30.087534, {9689, 9941}}
The parallel computation with 4 local kernels is about 3 times faster than the mono-kernel calculation.
Enjoy OOP for Wolfram language.Hirokazu Kobayashi2019-11-25T01:21:43ZHow to control PointSize in GeoListPlot?
https://community.wolfram.com/groups/-/m/t/2244579
I marked two groups of very close points on a GeoListPlot and am hoping to control PointSize to avoid overlap.
How should I do it?
a={{24.9954,121.304},{25.0603,121.202},{24.9528,121.204},{24.9528,121.204},{24.8639,121.216},{24.9001,121.039},{24.9001,121.039},{24.7406,121.089},{24.8056,120.972},{24.697,120.899},{24.5653,120.82},{24.9533,121.222},{25.0355,121.083},{24.9868,121.309}};
b={{24.9001,121.039},{24.8056,120.972},{24.9528,121.204},{24.9948,121.32},{24.9533,121.222},{25.0603,121.202},{24.697,120.899},{24.5653,120.82},{25.0355,121.083},{24.8639,121.216},{24.7406,121.089}};
GeoListPlot[{GeoPosition@a,GeoPosition@b},PlotLegends->Placed[{"A","B"},Bottom],PlotStyle->PointSize[0.01]]Tsai Ming-Chou2021-04-15T20:50:07ZForecast timeseries with Zeros / Gaps in between?
https://community.wolfram.com/groups/-/m/t/2245262
Good afternoon,
I wonder how to forecast or predict values for a "Time Series", when the existing data from the past has gaps, or Zeros.
- Should I use "Event Series"?
- Or Predict?
- Or change the Prediction Method of the Time Series?
If we have data consisting of pairs of date and an amount (e.g. Sales, or kilometers run on that date), how can we create a "Forecast", if that data is not continuous? (Example Data below.)
I tried to create an "Event Series", but it seems as if **TimeSeriesModelFit** cannot be used on an EventSeries (?). But when creating a TimeSeries instead, the value for each day/date that is not given is interpolated (= not zero).
To avoid that, I filled the dates where "nothing happened" with zeros. But not only does this make the plot look horrible, it also then uses the moving average as the default calculation for a TimeSeriesModel, resulting in a forecast that is a horizontal line.
Should I just override that? Or what else can I do to build a forecast with data where, on some days, the value is zero? (Actually, where most dates don't even exist in the list.)
(Sorry, all this TimeSeries stuff is still new to me.)
Thanks a lot for your help already!
Oliver
Example Data, from attached Notebook:
{{DateObject[{2010, 1, 4}, "Day", "Gregorian", 2.], 453.}, {DateObject[{2010, 1, 5}, "Day", "Gregorian", 2.], 511.},
{DateObject[{2010, 1, 6}, "Day", "Gregorian", 2.], 493.}, {DateObject[{2010, 1, 7}, "Day", "Gregorian", 2.], 530.},
{DateObject[{2010, 1, 8}, "Day", "Gregorian", 2.], 449.}, {DateObject[{2010, 1, 11}, "Day", "Gregorian", 2.], 484.},
{DateObject[{2010, 1, 12}, "Day", "Gregorian", 2.], 518.}, {DateObject[{2010, 1, 14}, "Day", "Gregorian", 2.], 533.},
{DateObject[{2010, 1, 15}, "Day", "Gregorian", 2.], 465.}, {DateObject[{2010, 1, 18}, "Day", "Gregorian", 2.], 456.},
{DateObject[{2010, 1, 20}, "Day", "Gregorian", 2.], 455.}, {DateObject[{2010, 1, 21}, "Day", "Gregorian", 2.], 473.},
{DateObject[{2010, 1, 22}, "Day", "Gregorian", 2.], 501.}, {DateObject[{2010, 1, 27}, "Day", "Gregorian", 2.], 454.},
{DateObject[{2010, 1, 29}, "Day", "Gregorian", 2.], 497.}, {DateObject[{2010, 2, 1}, "Day", "Gregorian", 2.], 508.},
{DateObject[{2010, 2, 2}, "Day", "Gregorian", 2.], 497.}, {DateObject[{2010, 2, 3}, "Day", "Gregorian", 2.], 520.},
{DateObject[{2010, 2, 4}, "Day", "Gregorian", 2.], 460.}, {DateObject[{2010, 2, 5}, "Day", "Gregorian", 2.], 464.},
{DateObject[{2010, 2, 12}, "Day", "Gregorian", 2.], 536.}, {DateObject[{2010, 2, 15}, "Day", "Gregorian", 2.], 563.},
{DateObject[{2010, 2, 17}, "Day", "Gregorian", 2.], 495.}, {DateObject[{2010, 2, 22}, "Day", "Gregorian", 2.], 539.},
{DateObject[{2010, 2, 23}, "Day", "Gregorian", 2.], 560.}, {DateObject[{2010, 2, 26}, "Day", "Gregorian", 2.], 547.},
{DateObject[{2010, 3, 2}, "Day", "Gregorian", 2.], 482.}, {DateObject[{2010, 3, 9}, "Day", "Gregorian", 2.], 537.},
{DateObject[{2010, 3, 10}, "Day", "Gregorian", 2.], 501.}, {DateObject[{2010, 3, 12}, "Day", "Gregorian", 2.], 538.},
{DateObject[{2010, 3, 18}, "Day", "Gregorian", 2.], 582.}, {DateObject[{2010, 3, 22}, "Day", "Gregorian", 2.], 578.},
{DateObject[{2010, 3, 24}, "Day", "Gregorian", 2.], 590.}, {DateObject[{2010, 3, 25}, "Day", "Gregorian", 2.], 516.},
{DateObject[{2010, 3, 29}, "Day", "Gregorian", 2.], 506.}, {DateObject[{2010, 3, 31}, "Day", "Gregorian", 2.], 554.},
{DateObject[{2010, 4, 1}, "Day", "Gregorian", 2.], 533.}, {DateObject[{2010, 4, 5}, "Day", "Gregorian", 2.], 505.},
{DateObject[{2010, 4, 6}, "Day", "Gregorian", 2.], 586.}, {DateObject[{2010, 4, 8}, "Day", "Gregorian", 2.], 505.},
{DateObject[{2010, 4, 9}, "Day", "Gregorian", 2.], 527.}}Oliver Ruessing2021-04-16T14:43:56ZNo output in solving system of trigonometric equations
https://community.wolfram.com/groups/-/m/t/2243285
Hello everybody, I hope you are doing all well today.
I'm new to Mathematica, I have a system of equations that I want to solve using Mathematica: <br><br>
$\begin{equation}
\begin{cases}
\sin(x)\cos(\epsilon y)\cosh(\epsilon z) + \sin(\epsilon x)\cos(y)\cosh(z) - \sin(x(1+\epsilon)) = 0 \\
\sin(x)\sin(\epsilon y)\sinh(\epsilon z) + \sin(\epsilon x)\sin(y)\sinh(z) = 0
\end{cases}
\end{equation}$ <br><br>
I'm only interested in {x, y, z} values in the domain [0, 2*Pi] <br>
$\epsilon$ is a parameter that can take any value between 1.8 and 3.
I tried solving the problem first by setting $\epsilon = 2$. I used methods like `Solve` and `NSolve`, but they didn't work; they took too much time to execute without any result. After a long day I found out about the `Reduce` function; I used it and it worked, giving me the full answer, but only for the case of $\epsilon = 2$. When I change $\epsilon$ to 2.5 it gives no answer; I left it running for about 10 hours or more without any result. <br>
This is my fifth day with this problem, and I'm still stuck.
Can anyone provide me with any advice? I really appreciate your help. Thank you so much. <br><br>
This is my code for the case of $\epsilon = 2.5$
Reduce[{
Sin[x]*Cos[2.5*y]*Cosh[2.5*z] + Sin[2.5*x]*Cos[y]*Cosh[z] -
Sin[3.5*x] == 0,
Sin[x]*Sin[2.5*y]*Sinh[2.5*z] + Sin[2.5*x]*Sin[y]*Sinh[z] == 0,
0 <= x <= 2*Pi, 0 <= y <= Pi, 0 <= z <= 2*Pi}, {x, y, z}, Reals]
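One hedged suggestion, not from the original post: `Reduce` tends to behave much better with exact input, and the machine-precision `2.5` may be part of the problem. Keeping $\epsilon$ as the exact rational `5/2` at least lets `Reduce` work symbolically, though there is no guarantee it finishes in reasonable time:

```mathematica
(* Sketch only: the same system with eps kept exact; may still run very long *)
eps = 5/2;
Reduce[{Sin[x]*Cos[eps*y]*Cosh[eps*z] + Sin[eps*x]*Cos[y]*Cosh[z] -
     Sin[(1 + eps)*x] == 0,
   Sin[x]*Sin[eps*y]*Sinh[eps*z] + Sin[eps*x]*Sin[y]*Sinh[z] == 0,
   0 <= x <= 2*Pi, 0 <= y <= Pi, 0 <= z <= 2*Pi}, {x, y, z}, Reals]
```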
https://community.wolfram.com/groups/-/m/t/2245248
Hi.
I am facing a problem regarding runtime for a differentiation. Here is my code
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/75e66234-cbac-4a81-a34a-8ab67500d9f1Tanish Dey2021-04-16T13:59:16ZDo logicians use Wolfram Language?
https://community.wolfram.com/groups/-/m/t/2242679
Do logicians use Wolfram Language? For instance for proof trees in sequent calculus. General answers are welcome.Stephan Spahn2021-04-12T19:39:01ZReplacing runs of identical numbers in lists
https://community.wolfram.com/groups/-/m/t/2244422
I have nested lists like {{2,2,7},{2,2,3,7},{3},{6,6,6,7}}.
I need to replace each run of n consecutive identical numbers x in the sublists with Range[x-n+1, x].
For the above example: 2,2 -> 1,2; 6,6,6 -> 4,5,6, giving {{1,2,7},{1,2,3,7},{3},{4,5,6,7}}.
How do I achieve this (efficiently)?
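A minimal sketch of the requested transformation (not part of the original question): `Split` groups each run, and `Range` rebuilds it from its last value and its length.

```mathematica
(* Each run of length n ending in x becomes Range[x - n + 1, x] *)
expandRuns[list_List] :=
 Flatten[Range[Last[#] - Length[#] + 1, Last[#]] & /@ Split[list]]

expandRuns /@ {{2, 2, 7}, {2, 2, 3, 7}, {3}, {6, 6, 6, 7}}
(* -> {{1, 2, 7}, {1, 2, 3, 7}, {3}, {4, 5, 6, 7}} *)
```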
https://community.wolfram.com/groups/-/m/t/2244736
A new study group for Multiparadigm Data Science with the Wolfram Language begins Monday, Apr 19, 2021!
Making progress in an online course can be daunting when you have to study all alone. Join a cohort of fellow Wolfram Language users for a two-week study group that works through the Wolfram U course "[Multiparadigm Data Science][1]". A certified instructor will guide each session by reviewing the lesson notebooks from the course, working through the code and answering questions.
Get support for starting on the path to earning Level 1 and Level 2 certifications in multiparadigm data science.
**Sign up here:** https://wolfr.am/UNdaIas0
[1]: https://www.wolfram.com/wolfram-u/multiparadigm-data-science/Abrita Chakravarty2021-04-15T22:05:55Z[WWS21] Dimension Algorithm and Dimension Tensor in Wolfram Model
https://community.wolfram.com/groups/-/m/t/2162387
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/0f74e104-cf5a-4f94-a23b-f93993074a55
[Original]: https://www.wolframcloud.com/obj/94b998d8-1b15-4d44-b000-4b51b8913ce1Zhenzhong Xing2021-01-14T18:14:32ZSomething About The Automated Stippling Drawing
https://community.wolfram.com/groups/-/m/t/759091
# 1. Stippling drawing -- an introduction
![results examples][1]
[Stippling](https://en.wikipedia.org/wiki/Stippling) is a kind of drawing style using only points to mimic lines, edges and grayscales. The entire drawing consists only of black points on a white background. The density of the points gives the impression of grayscale shading.
Back in 1510, stippling was first invented as an [engraving technique](https://en.wikipedia.org/wiki/Stipple_engraving), then became popular in many fields because it requires only one color of ink. The style's influence spread widely beyond engraving. We can see many technical illustrations done in it, especially in math and mechanics books. [Even today many people are interested in it](http://mathematica.stackexchange.com/q/21240/17), and archaeologists, geologists and biologists still use it in fieldwork.
On the art side, stippling shows interesting relation with the [divisionism](https://en.wikipedia.org/wiki/Divisionism) and [pointillism](https://en.wikipedia.org/wiki/Pointillism), which both belong to the neo-impressionism.
In the domain of so-called non-photorealistic computer graphics, stippling is one of the basic artistic painting styles, and can be used as the foundation of many other styles such as mosaic, stained glass, hatching, etc. (*Non-Photorealistic Computer Graphics*, Thomas Strothotte, Stefan Schlechtweg; *The Algorithms and Principles of Non-photorealistic Graphics*, Weidong Geng)
# 2. The essential properties of a "good" stippling
## 2.1 Well-spaced points
When using points to approximate a grayscale, a random set of points with uniform distribution is usually not good enough. To illustrate, both images below have 63024 points, but the one on the right is a good stippling (or, as it is usually called, **_well-spaced_**). The one on the left has too many unwanted small artifacts -- clumps and voids that do not exist in the original image. (And please be patient, we'll talk about how to generate the "good" one in the next section:)
![comparison between random and stippling - non-uniform case][2]
The phenomenon can be seen even more clearly in the uniform grayscale case:
![comparison between random and stippling - uniform case][3]
I hope it's plain to see that in the stippling graphics, the distance between any point and the one nearest to it is nearly a **constant**. So it looks like we have found the essential property of a "good" stippling.
But wait... What about the Lena example? Clearly it's "good" (if you allow me to say so), but the distances between closest points vary.
It turns out we just need to take one more step to notice that the constant distance implies equilateral triangles:
![comparison between random and stippling - uniform DelaunayMesh][4]
So the so-called _good_ or _well-spaced_ property can be described more precisely as:
> The cells of the Delaunay mesh induced by the points are **as close to equilateral triangles as possible**.
The statement can be confirmed by the statistics on the set of interior angles of all the cells
![interior angles statistics][5]
## 2.2 Another (equivalent) definition
There is still [another viewpoint](http://arxiv.org/abs/1509.00229) for judging whether the positions of the points are good or not.
Consider the **same blur** taken on all the three Lena images: the original one, the random points one and the well-spaced one:
![comparison among blurred images][6]
The so-called *projection algorithm* looks for a point distribution which, when convolved with a certain kernel h, yields the same result as convolving the original image with the same h. We are not going to talk about the details here; readers who are interested can refer to the mentioned paper.
# 3. Stippling of a uniform area -- centroidal Voronoi diagram approach
Now let's do the real fun :)
For an easy start we consider the stippling technique for a uniform area (i.e. an area with constant grayscale). There is a de facto standard method for it called the **_centroidal Voronoi diagram_** (CVD), which is usually generated by [*Lloyd's algorithm*](https://en.wikipedia.org/wiki/Lloyd%27s_algorithm).
Basically, the algorithm acts as the following relaxation steps:
1. Generate $n$ random points inside the interested region
2. Generate the [***Voronoi diagram***](https://en.wikipedia.org/wiki/Voronoi_diagram) of the $n$ points
3. Find the ***centroid*** (i.e. center of mass) of each Voronoi cell
4. Use the $n$ centroids as the resulting points
5. If satisfied, stop; else go back to step 2 with the new points
Here the key techniques are the Voronoi diagram generation and the centroid finding. The former one is a perfect match for the bounded version of the built-in function `VoronoiMesh`. The latter one, as the Voronoi cells for a closed region are always closed convex polygons, has a [simple formula](https://en.wikipedia.org/wiki/Centroid#Centroid_of_polygon).
Suppose the cell is defined by $n$ vertices ordered (counter-)clockwise $\{\boldsymbol{P}_1=(x_1,y_1),\boldsymbol{P}_2=(x_2,y_2),\dots,\boldsymbol{P}_n=(x_n,y_n),\boldsymbol{P}_{n+1}=(x_1,y_1)\}$; then its centroid $\boldsymbol{C}$ can be determined by the following Eq. 1:
$$\left\{\begin{eqnarray}
\boldsymbol{C}&=&\frac{1}{6A}\sum_{i=1}^n (\boldsymbol{P}_i+\boldsymbol{P}_{i+1})\,\left|\begin{matrix}x_i&y_i\\x_{i+1}&y_{i+1}\end{matrix}\right|\\
A&=&\frac{1}{2}\sum_{i=1}^n \left|\begin{matrix}x_i&y_i\\x_{i+1}&y_{i+1}\end{matrix}\right|
\end{eqnarray}\right.$$
As a test we generate 2500 uniformly distributed random points in a square region $[-1,1]\times[-1,1]$:
initPts = RandomPoint[Rectangle[{-1, -1}, {1, 1}], 2500];
whose Voronoi diagram is:
VoronoiMesh[initPts, {{-1, 1}, {-1, 1}}] //
Graphics[{
GrayLevel[.6],
Line[Identity @@@ MeshPrimitives[#, 1]],
Red, AbsolutePointSize[2],
Point[initPts]
}] &
![Voronoi of random points][7]
The centroid step of Lloyd's algorithm can be expressed as the following `findCentroid` function:
Clear[findCentroid]
findCentroid[p : Polygon[{{__Real} ..}]] :=
Module[{pts = p[[1]], pairs, dets, area},
pairs = Partition[pts, 2, 1, {1, 1}];
dets = Det /@ pairs;
area = 1/2 Plus @@ dets;
dets.(Plus @@@ pairs)/(6 area)
]
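As a quick sanity check (not in the original post), Eq. 1 recovers the center of the unit square:

```mathematica
findCentroid[Polygon[{{0., 0.}, {1., 0.}, {1., 1.}, {0., 1.}}]]
(* -> {0.5, 0.5} *)
```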
The cell `Polygon`-s of the Voronoi mesh can be extracted with `MeshPrimitives[...,2]`, which then should be piped to the `findCentroid` to complete one iteration:
Module[{vm},
Function[pts,
vm = VoronoiMesh[pts, {{-1, 1}, {-1, 1}}];
findCentroid /@ MeshPrimitives[vm, 2]
]@initPts
];
Now we run 500 iterations and animate the first 50 results to give an intuitive feeling for the CVD:
intermRes =
Module[{vm},
NestList[
Function[pts,
vm = VoronoiMesh[pts, {{-1, 1}, {-1, 1}}];
findCentroid /@ MeshPrimitives[vm, 2]
],
initPts,
500
]
]; // AbsoluteTiming
refinedPts = intermRes[[-1]];
allframes =
Function[pts,
VoronoiMesh[pts, {{-1, 1}, {-1, 1}}] //
Graphics[{
GrayLevel[.6],
Line[Identity @@@ MeshPrimitives[#, 1]],
Red, AbsolutePointSize[2],
Point[pts]
}, ImageSize -> 600] &
] /@ intermRes[[;; 50]];
ListAnimate[allframes, 24, DisplayAllSteps -> True, AnimationRepetitions -> 1]
{197.984, Null}
![uniform CVD][8]
There are various ways to show the difference between the point distributions before and after the process.
For example, we can use `NearestNeighborGraph` to illustrate their connectivity, which highlights the unwanted voids in the former case:
MapThread[
Function[{pts, label},
Graphics[
Line[EdgeList[NearestNeighborGraph[pts, 3]] /.
UndirectedEdge -> List], ImageSize -> 400] //
{Style[label, 20, FontFamily -> "Constantia"], #} &
],
{{initPts, refinedPts}, {"initial points", "refined points"}}
] // Grid[# // Transpose, Alignment -> Left, Frame -> All] &
![Comparison of connectivity between random and stippling][9]
Or, as shown in the previous section, we can compare the statistics of the interior angles:
Plot[
Evaluate[MapThread[
Legended[PDF[KernelMixtureDistribution[#1], θ],
Style[#2, 15]] &,
{
(
# // DelaunayMesh // MeshPrimitives[#, 2] & //
Function[pts,
VectorAngle[#2 - #1, #3 - #1]/Degree & @@@
Partition[pts, 3, 1, {1, 1}]] @@@ # & // Flatten
) & /@ {initPts, refinedPts},
{"initial points", "refined points"}
}]],
{θ, 0, 180},
PlotRange -> All, GridLines -> {{60}, None}, Frame -> True,
FrameLabel -> (Style[#, Bold, 15] & /@ {"angle in °", "PDF"})
]
![interior angles statistics][10]
To give another intuitive impression of the "well-spaced" property from a different point of view, we compare the discrete Fourier transform of the initial points with that of the refined points:
MapThread[
Function[{pts, label},
Graphics[{AbsolutePointSize[0], Point@pts}] //
Rasterize[#, ImageSize -> {701, 701}] & // Binarize //
ColorNegate //
Function[img,
ImageMultiply[img,
DiskMatrix[.4 ImageDimensions[img],
ImageDimensions[img]] // Image]] //
ImageData // # - Mean[Flatten@#] & //
Fourier // Abs // Rescale //
RotateRight[#, Floor[Dimensions[#]/2]] & //
Image[1.2 - (1 - #)^5, "Real", ImageSize -> 400] & //
{Style[label, 20, FontFamily -> "Constantia"], #} &
],
{{initPts, refinedPts}, {"initial points", "refined points"}}
] // Grid[#//Transpose, Alignment -> Left, Frame -> All] &
![comparison of FFT between random and stippling][11]
It can be clearly seen that in the refined points' case we effectively achieved both **isotropy** and **suppression of low spatial frequencies**, which is another facet from which to understand the term *well-spaced*. Note that this property is also related to the concept of ["*colors*" of noise](https://en.wikipedia.org/wiki/Blue_noise).
# 4. Stippling of a non-uniform area
So far we only talked about the method for stippling a uniform area. But how about a non-uniform area, like the Lena example?
Lloyd's algorithm as described above cannot be directly applied here without modification, as it always smooths out the distribution, resulting in a uniform CVD. So how can we generalize it to the non-uniform case?
## 4.1 Approach using the conformal mapping
Recall that we were looking for a Delaunay mesh whose cells resemble equilateral triangles as much as possible. One thing that immediately comes to mind is the [conformal map](https://en.wikipedia.org/wiki/Conformal_map), which transforms between two spaces while preserving angles locally. Thus a good stippling of a uniform area is guaranteed to be mapped to a good one on a specified non-uniform area.
A simple example of a conformal map is a holomorphic function $f$ (wherever $f'(z)\neq 0$), say $f(z)=(z-z^2/2)\,\exp\left[-(z-1)^2/5\right]$:
cmfunc = Compile[{{z, _Complex}},
Module[{w = 1. I},
w = (z - 1/2 z^2) Exp[-((z - 1)^2/5)];
{Re@w, Im@w}
],
RuntimeOptions -> "Speed",
RuntimeAttributes -> {Listable},
Parallelization -> True
];
which transforms the stippling points `refinedPts` in the uniform square $[-1,1]\times[-1,1]$ to a new point distribution `transPts`:
transPts = refinedPts.{1, I} // cmfunc;
transPts // Graphics[{AbsolutePointSize[2], Point@#}] &
![conformal transformed points][12]
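As a quick numerical sanity check of the angle-preserving property, we can map two tiny orthogonal steps and confirm the right angle between them survives. A Python sketch (the map is the same $f$ as above; the test point is arbitrary):

```python
import cmath

def cmfunc(z):
    """The same sample map f(z) = (z - z^2/2) * exp(-(z - 1)^2 / 5)."""
    return (z - z * z / 2) * cmath.exp(-((z - 1) ** 2) / 5)

# take two tiny orthogonal steps at z and check their images stay orthogonal
z, h = 0.3 + 0.2j, 1e-5
ratio = (cmfunc(z + 1j * h) - cmfunc(z)) / (cmfunc(z + h) - cmfunc(z))
angle = cmath.phase(ratio)  # ≈ pi/2 wherever f'(z) != 0
```

To first order the ratio equals $i$, so the phase comes out as a right angle — exactly the local angle preservation that keeps the transformed stippling well-spaced.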
The result looks very nice in the sense of "well-spaced", which can also be confirmed with the Delaunay mesh, Voronoi mesh and its connectivity:
transPts // DelaunayMesh
![DelaunayMesh of the transPts][13]
Module[{vm = VoronoiMesh[refinedPts, {{-1, 1}, {-1, 1}}]},
Graphics[GraphicsComplex[
(Identity @@@ MeshPrimitives[vm, 0]).{1, I} // cmfunc,
vm // MeshCells[#, 1] & // Line[Identity @@@ #] &
]]
]
![VoronoiMesh of the transPts][14]
transPts // NearestNeighborGraph[#, 3] & //
EdgeList // # /. UndirectedEdge -> List & // Line // Graphics
![connectivity of the transPts][15]
So far so good. However, this theoretically elegant approach is not easily generalizable: finding the right $f$ for an arbitrary target distribution requires sophisticated mathematical machinery. Again, we are not going to discuss the details here; for readers who are interested, there is a whole research field dedicated to computing conformal mappings.
## 4.2 Approach using the weighted Voronoi diagram
Despite the elegance of the conformal mapping, the popular method for stippling a non-uniform area comes from a modification of the CVD called the _**weighted** centroidal Voronoi diagram_ (A. Secord. Weighted Voronoi stippling. In *Proceedings of the Second International Symposium on Non-Photorealistic Animation and Rendering*, pages 37-43. ACM Press, 2002; A. Secord. Random Marks on Paper: Non-Photorealistic Rendering with Small Primitives. *Thesis for the Degree of Master of Science*, 2002).
The algorithm is similar to Lloyd's algorithm, except in step 3: when looking for the centroid of a Voronoi cell, a variable areal density $\rho(\boldsymbol{P})$ is considered, proportional to the grayscale at the location $\boldsymbol{P}$. Thus instead of using Eq. 1, we calculate the centroid according to the following definition (Eq. 2):
$$\boldsymbol{C}_{\text{Cell}(\boldsymbol{P})} = \frac{\int_{\boldsymbol{x}\in\text{Cell}(\boldsymbol{P})}\boldsymbol{x}\,\rho(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}}{\int_{\boldsymbol{x}\in\text{Cell}(\boldsymbol{P})}\rho(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}}$$
Clearly these integrations are much more time-consuming than Eq. 1. In his paper and master's thesis Secord presented an efficient way to compute them, which involves precomputing certain integrals. However, I noticed (without theoretical proof) that the weighted CVD can be sufficiently approximated in a much cheaper way if we accept a compromise: not sticking to the exact centroid formula (Eq. 2), but only keeping the core idea of choosing $\boldsymbol{C}$ **closer to points with larger weights**.
The new idea is simple. For a cell of $n$ vertices $\{\boldsymbol{P}_1,\boldsymbol{P}_2,\dots,\boldsymbol{P}_n\}$, the algorithm acts as follows:
![approx algorithm sketch][16]
1. Compute the geometric centroid $\boldsymbol{C}$
2. Compute the weights of the vertices as $\{w_1,\dots,w_n\}$
3. Compute normalized weights as $W_k = \frac{w_k}{\max(w_1,\dots,w_n)}$
4. For every vertex $\boldsymbol{P}_k$, move it along $\overrightarrow{\boldsymbol{C}\boldsymbol{P}_k}$ with factor of $W_k$ to new position $\boldsymbol{P}_k'$ (So vertex with largest weight does not move, vertex with smallest weight moves most)
5. Compute the geometric centroid $\boldsymbol{C}'$ of the new cell defined by $\{\boldsymbol{P}_1',\dots,\boldsymbol{P}_n'\}$ as the final result
Note that the convergence of our simplified algorithm is not obvious, so it might be wise to do an "*early stop*" during iteration.
Written in the Wolfram Language, it's this `findCentroidW` function:
Clear[findCentroidW]
findCentroidW[p : Polygon[{{__Real} ..}], densityFunc_] :=
Module[{cent, pts = p[[1]], wlst},
wlst = #/Max[#] &[densityFunc @@@ pts];
cent = findCentroid[p];
cent + findCentroid@Polygon@MapThread[#2 (#1 - cent) &, {pts, wlst}]
]
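For comparison, the same five-step approximation can be sketched outside the Wolfram Language. Here is an illustrative Python version, where the helper `centroid` mirrors Eq. 1 and the `density` argument plays the role of `densityFunc`:

```python
def centroid(pts):
    """Geometric centroid of a simple polygon (shoelace formula, Eq. 1)."""
    n = len(pts)
    cr = [pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
          for i in range(n)]
    a = sum(cr) / 2.0
    return (sum((pts[i][0] + pts[(i + 1) % n][0]) * cr[i] for i in range(n)) / (6 * a),
            sum((pts[i][1] + pts[(i + 1) % n][1]) * cr[i] for i in range(n)) / (6 * a))

def weighted_centroid_approx(pts, density):
    cx, cy = centroid(pts)                                # step 1: geometric centroid
    w = [density(x, y) for x, y in pts]                   # step 2: vertex weights
    wmax = max(w)                                         # step 3: normalization
    moved = [(cx + (x - cx) * wi / wmax,                  # step 4: shrink toward C
              cy + (y - cy) * wi / wmax)
             for (x, y), wi in zip(pts, w)]
    return centroid(moved)                                # step 5: centroid of new cell

# uniform density recovers the plain geometric centroid
weighted_centroid_approx([(0, 0), (2, 0), (2, 2), (0, 2)], lambda x, y: 1.0)  # → (1.0, 1.0)
```

With a non-uniform density the result is pulled toward the heavier vertices, which is all the relaxation loop needs.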
Numerical experiments show that this approximation gives fairly good results. In the next section we'll demonstrate its application on the Lena image.
## 4.3 Numerical experiment on Lena
In this section we test our non-uniform stippling method on the famous Lena's photo.
First let's import the original image. We'll keep both the grayscale and color versions for later use in the artistic rendering section:
imgOrigColored = ExampleData[{"TestImage", "Lena"}];
imgOrig = imgOrigColored // ColorConvert[#, "Grayscale"] & // ImageAdjust;
For `Interpolation`'s convenience, we'll use `ColorNegate`. For a better visual result, we enhance the edges with large gradients:
img = Function[img,
ImageAdd[ImageMultiply[img, # // ColorNegate], #] &[
GradientFilter[img, 1] // ImageAdjust[#, {2, 2, 1}] &
]
][imgOrig // ColorNegate]
![Lena for density image][17]
The image coordinates are rescaled to the rectangular region $[-1,1]\times[-1,1]$ for convenience and `Interpolation`-ed to get a smooth description of the grayscale field:
densityFunc =
img // Reverse[ImageData[#, "Real"]] & // Transpose // Function[array,
Module[{dim = Dimensions@array},
MapIndexed[{(1 + dim - 2 #2)/(1 - dim), #1} &, array, {-1}]]
] // Flatten[#, 1] & // Interpolation;
To have a good initial point distribution, we would like to sample points so the local density is **proportional** to the local grayscale (though we don't need this to be precise, as the weighted Voronoi process will smooth the distribution anyway). So, taking advantage of `ContourPlot`, we generate a few regions according to the **level sets** of `densityFunc`:
levelRegions =
Module[{ctplot, pts, polys},
ctplot =
ContourPlot[densityFunc[x, y], {x, -1, 1}, {y, -1, 1},
Contours -> 10];
{pts, polys} =
Cases[ctplot, GraphicsComplex[pts_, e_, ___] :> {pts,
Cases[e, GraphicsGroup[r : {__Polygon}] :> r, Infinity]
}, Infinity][[1]];
BoundaryDiscretizeGraphics[GraphicsComplex[pts, #]] & /@ polys
]; // AbsoluteTiming
{111.862, Null}
![level sets][18]
For each region in `levelRegions`, we sample points inside it on a regular grid, with the **area of the grid cell inversely proportional to the level counter** of the region. Notice that a regular grid can be a steady state of our algorithm, so to ensure an isotropic result some initial randomness is needed. For that purpose we add a dithering effect to the points, with strength specified by a parameter $0\leq\kappa\leq 1$, where $0$ gives no dithering while $1$ gives a globally random distribution:
Clear[ditherFunc]
ditherFunc[pts_, width_, κ_: .5] :=
pts + κ width/2 RandomReal[{-1, 1}, {Length@pts, 2}]
levelPts =
Module[{baseGridWidth = 1/50., width, num, min = -1,
max = 1, κ = 1},
MapIndexed[
Function[{region, idx},
width = baseGridWidth/Sqrt[idx[[1]]];
num = Ceiling[(max - min)/width];
Tuples[Range@num, 2] //
Rescale[#, {1, num}, {min + width/2, max - width/2}] & // N //
ditherFunc[Select[#, RegionMember[region]], width, κ] &
],
levelRegions,
{1}]
]; // AbsoluteTiming
initPts = levelPts // Join @@ # &;
{15.3046, Null}
The total number of points we sampled is huge:
initPts // Length
63024
But their quality is poor:
Graphics[{AbsolutePointSize[0], Point@initPts}]
![Lena of random sample][19]
Now we perform the weighted Voronoi relaxation process, which is similar to the one in the uniform case, though the computation is a bit slow due to the number of points:
refinedPts = Module[{vm},
Nest[
Function[pts,
vm = VoronoiMesh[pts, {{-1, 1}, {-1, 1}}];
findCentroidW[#, densityFunc] & /@ MeshPrimitives[vm, 2]
],
initPts,
30
]
]; // AbsoluteTiming
{1106.06, Null}
In spite of only 30 iterations, the result is in my opinion fairly good (note that some visual artifacts in the following image are due to the rasterizing process during *display*; try right-clicking and choosing "open image in new tab"):
Graphics[{AbsolutePointSize[0], Point@refinedPts}]
![Lena of stippling][20]
The pattern of the connectivity rather interestingly forms some kind of self-similar multi-resolution tiles:
refinedPts // NearestNeighborGraph[#, 2] & //
EdgeList // # /. UndirectedEdge -> List & // Line // Graphics
![Lena of connectivity][21]
Statistics on the interior angles of the corresponding `DelaunayMesh` indicates we have indeed achieved the well-spaced distribution:
Plot[
Evaluate[MapThread[
Legended[PDF[KernelMixtureDistribution[#1], θ],
Style[#2, 15]] &,
{
(
# // DelaunayMesh // MeshPrimitives[#, 2] & //
Function[pts,
VectorAngle[#2 - #1, #3 - #1]/Degree & @@@
Partition[pts, 3, 1, {1, 1}]] @@@ # & // Flatten
) & /@ {initPts, refinedPts},
{"initial points", "refined points"}
}]],
{θ, 0, 180},
PlotRange -> All, GridLines -> {{60}, None}, Frame -> True,
FrameLabel -> (Style[#, Bold, 15] & /@ {"angle in °", "PDF"})
]
![interior angles statistics of Lena][22]
# 5. Artistic styles based on stippling
## 5.1 Hatching
As we mentioned in the introduction, some artistic rendering effects can be simulated based on the stippling result. One of them is hatching, which in our example will act like a pencil sketch. The lengths and orientations of the strokes are controlled by `densityFunc`, *i.e.* the local grayscale, while their positions are controlled by the stippling points. Note there are artifacts in the dark regions due to our naïve approach:
Module[{strokeLength, strokeOrient, dens, stroke},
Function[{x, y},
dens = densityFunc[x, y];
strokeLength = .03 dens^.3 (1 + .1 RandomReal[{-1, 1}]);
strokeOrient = Pi/3 + Pi/6 (1 - (2 dens - 1)^2)^20 RandomReal[{-1, 1}];
stroke = strokeLength Through[{Cos, Sin}[strokeOrient]];
{{x, y} - stroke, {x, y} + stroke}
] @@@ refinedPts // Line // Graphics[{AbsoluteThickness[0], #}] &
]
![Lena sketch][23]
## 5.2 Pointillism
Another style is a simulation of pointillism.
For this simulation, we need to take care of the colors:
imgColorChannels = imgOrigColored // ColorSeparate[#, "RGB"] &;
densityFuncColorChannels =
Function[img,
img // Reverse[ImageData[#, "Real"]] & // Transpose //
Function[array,
Module[{dim = Dimensions@array},
MapIndexed[{(1 + dim - 2 #2)/(1 - dim), #1} &, array, {-1}]]
] // Flatten[#, 1] & // Interpolation] /@ imgColorChannels;
We add a random color deviation at each point:
randomColoredImg =
{
RGBColor @@ (
Through[
densityFuncColorChannels[##]] (1 + .3 RandomReal[{-1, 1},
3])
),
Point[{##}]
} & @@@ refinedPts //
Graphics[{AbsolutePointSize[0], #} // Flatten,
ImageSize -> 1000] & //
Rasterize[#, ImageSize -> 1000] & //
ImageConvolve[#, GaussianMatrix[{10, 1.3}]] & // ImageAdjust //
ImageCrop
![Lena color stippling][24]
We finalize the process with the morphological `Opening` operation and match the histogram with the original color image using `HistogramTransform`:
Opening[randomColoredImg, DiskMatrix[6]] // HistogramTransform[#, imgOrigColored] &
![Lena pointillism][25]
## 5.3 Pointillism -- another attempt
The rendering in the last section uses strokes of identical size, but in practice stroke size often varies with local properties of the target being drawn. So let's try to include that characteristic here.
To obtain such a local property we choose `ImageSaliencyFilter`, though it is of course totally fine to choose other filters:
ImageAdjust[
ImageSaliencyFilter[imgOrigColored, Method -> #]] & /@ {"Itti",
"IttiColor", "IttiIntensity", "IttiOrientation"}
coarseMask = ImageAdd @@ (Image[#, "Byte"] & /@ %) // Blur[#, 20] &
![coarseMask][26]
As with `densityFunc`, we interpolate `coarseMask` and generate some level regions:
coarseFunc =
coarseMask // Reverse[ImageData[#, "Real"]] & // Transpose //
Function[array,
Module[{dim = Dimensions@array},
MapIndexed[{(1 + dim - 2 #2)/(1 - dim), #1} &, array, {-1}]]
] // Flatten[#, 1] & // Interpolation;
coarseLevelRegions =
Module[{ctplot, pts, polys},
ctplot =
ContourPlot[coarseFunc[x, y], {x, -1, 1}, {y, -1, 1},
Contours -> 3];
{pts, polys} =
Cases[ctplot, GraphicsComplex[pts_, e_, ___] :> {pts,
Cases[e, GraphicsGroup[r : {__Polygon}] :> r, Infinity]
}, Infinity][[1]];
BoundaryDiscretizeGraphics[GraphicsComplex[pts, #]] & /@ polys
]; // AbsoluteTiming
{11.1717, Null}
So now the stippling points can be grouped according to the `coarseLevelRegions`:
coarseLevelPts = Select[refinedPts, RegionMember[#]] & /@ coarseLevelRegions;
The stroke template is composed of squares centered at the stippling points, with their color determined and randomized in a way similar to that of the last section:
strokeFunc = Function[{pt, size},
GeometricTransformation[
Rectangle[{-1, -1}, {1, 1}],
RotationMatrix[
RandomReal[Pi/2]].ScalingMatrix[(1 + .2 RandomReal[{-1, 1}]) size {1, 1}]
] // GeometricTransformation[#, TranslationTransform[pt]] &
];
Now we are ready to paint the picture layer by layer. Note that the following code can be very time-consuming.
coarseLayers =
Function[{pts, size},
{
FaceForm[RGBColor @@ (
Append[
(1 + .5 RandomReal[{-1, 1}, 3]) Through[
densityFuncColorChannels @@ #],
RandomReal[{.2, 1}]
]
)],
strokeFunc[#, size]} & /@ pts //
Graphics[Flatten@{EdgeForm[], #},
PlotRange -> 1.5 {{-1, 1}, {-1, 1}}] & //
Rasterize[#, ImageSize -> 1000, Background -> None] &
] @@ # & /@ ({coarseLevelPts, {.2, .1, .05, .02}}//Transpose)
![paint layers][27]
Composing the layers, we get our final "neo-impressionism" result:
composedImg = FoldList[ImageCompose, coarseLayers[[1]], coarseLayers[[2 ;;]]]
![paint stages][28]
composedImg[[-1]] // ImageCrop
![Lena pointillism 2][29]
Note the local vivid color blocks, especially those in the feather and hat regions, and how the rendering still preserves an accurate scene at the macro scale. :)
# 6. Epilogue
In this small writing, we discussed a modern automated method for simulating an ancient art technique called stippling. We also demonstrated how its power can be extended to more advanced artistic renderings. But readers should be reminded that there are other methods for generating well-spaced point distributions, and there are still many fields related to non-photorealistic computer graphics which are not necessarily related to it. Finally, although not mentioned in this article, the applications of the so-called well-spaced point distribution go way beyond the examples shown here. The idea can be generalized to higher dimensions, and its nice properties make it an important tool in many scientific fields, like re-meshing in the finite element method, sparse representation and compressed sensing.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=resultsexamples.png&userId=93201
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=comparisonbetweenrandomandstippling-non-uniformcase.png&userId=93201
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=comparisonbetweenrandomandstippling-uniformcase.png&userId=93201
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=comparisonbetweenrandomandstippling-uniformDelaunayMesh.png&userId=93201
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=interioranglesstatistics.png&userId=93201
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=comparisonamongblurredimages.png&userId=93201
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Voronoiofrandompoints.png&userId=93201
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=uniformCVD.gif&userId=93201
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Comparisonofconnectivitybetweenrandomandstippling.png&userId=93201
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=interioranglesstatistics.png&userId=93201
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=comparisonofFFTbetweenrandomandstippling.png&userId=93201
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=conformaltransformedpoints.png&userId=93201
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=DelaunayMeshofthetransPts.png&userId=93201
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=VoronoiMeshofthetransPts.png&userId=93201
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=connectivityofthetransPts.png&userId=93201
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=approxalgorithmsketch.png&userId=93201
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Lenafordensityimage.png&userId=93201
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=levelsets.png&userId=93201
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Lenaofrandomsample.png&userId=93201
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Lenaofstippling.png&userId=93201
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Lenaofconnectivity.png&userId=93201
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=interioranglesstatisticsofLena.png&userId=93201
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Lenasketch.png&userId=93201
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Lenacolorstippling.png&userId=93201
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Lenapointillism.png&userId=93201
[26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=coarseMask.png&userId=93201
[27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=paintlayers.png&userId=93201
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=paintstages.png&userId=93201
[29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Lenapointillism2.png&userId=93201Silvia Hao2015-12-16T12:59:38ZComputational genealogy with the Wolfram Language
https://community.wolfram.com/groups/-/m/t/2241480
![enter image description here][2]
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/427d4810-05a8-402f-b0de-ab997db2eac3
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2021-04-10at5.48.08PM.png&userId=228444
[Original]: https://www.wolframcloud.com/obj/rnachbar/Published/Genealogy%20With%20Wolfram%20Language.nbRobert Nachbar2021-04-10T22:07:42ZLooking for a similar function to Fold[ ]?
https://community.wolfram.com/groups/-/m/t/2241046
My current project has me creating yet another data mart using Microsoft Dynamics (a CRM solution) as the source. I have nearly 200 tables each having up to several hundred columns.
Unfortunately, the table and column names are a concatenation of words with no capitalization.
So, in order to help me build some documentation, and eventually the SQL for the data mart, I need to turn the gibberish into something the team (and our users) can easily read.
For example, I want to have a function to work like this:
strSeparateWords["fourscoreandsevenyearsago", {"four", "score",
"and", "seven", "years", "ago"}, "Capitalize" -> True]
to return:
"Four Score And Seven Years Ago"
While I was able to write such a function, I cannot help but think that there is a better (more Wolfram-like) solution.
Here is my solution:
Options[strSeparateWords] = {
"Capitalize" -> False
};
strSeparateWords[string_String, word_String, opt : OptionsPattern[]] :=
Module[
{
replacement = If[OptionValue["Capitalize"], Capitalize[word], word]
},
StringTrim@StringReplace[string, word -> " " <> replacement]
];
strSeparateWords[string_String, words_?matchListOfStringsQ,
opt : OptionsPattern[]] := Module[
{
retVal = string
},
Do[
retVal = strSeparateWords[retVal, word, opt]
,
{word, words}
];
retVal
];
I was hoping to find a function similar to Fold[] that works with single-argument functions. In other words, I want a function that works like this:
anotherFold[
f[initialValue, #] &,
{ a, b, c}
]
would return:
f[f[f[initialValue, a], b], c]
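(In the Wolfram Language, `Fold[f, initialValue, {a, b, c}]` evaluates to exactly this nesting.) For comparison, the same left-fold shape sketched in Python, where it corresponds to `functools.reduce`:

```python
from functools import reduce

def another_fold(f, initial, items):
    """Left fold: repeatedly feed the accumulator back into a 2-argument f."""
    acc = initial
    for x in items:
        acc = f(acc, x)
    return acc

nested = another_fold(lambda acc, x: f"f[{acc}, {x}]", "initialValue", ["a", "b", "c"])
# → 'f[f[f[initialValue, a], b], c]'

# the standard library spelling of the same thing:
nested2 = reduce(lambda acc, x: f"f[{acc}, {x}]", ["a", "b", "c"], "initialValue")
```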
As you can see, I cannot even figure out a good name for such a function. But I think it would be useful to have a generic function like this.
I've searched the documentation and web for over an hour now. But, before giving up, I wanted to ask the Community.
Thanks, and have a great weekend.Mike Besso2021-04-09T21:14:37ZBlack Hole Discussion (based on “[WSS20] Hunting for Black Holes” post)
https://community.wolfram.com/groups/-/m/t/2203331
Hey guys,
Please make sure to glance at the <https://community.wolfram.com/groups/-/m/t/2029731> project first. It looks for singularities in WMs, especially ones that persist for at least 20 steps. Some very useful functions are used that look for the presence of singularities, filter WMs based on that criterion, and also examine the dimensionality of the system. Their conclusions reflect disappointment at not finding the change in dimension that they assume a Schwarzschild-type BH would show.
Before I dive into physics, let me add that I ran their 21 surviving BH models for a greater number of steps as I summarize in the attached picture (I tried to do 50 steps but some proved too computationally intensive while others I was able to run for 100s of iterations).
Four more models lost their singularities (bringing the total down to 17). Here are the questions we can still answer by looking further into this:
• Which remaining models BHs survive after 100, 200, 300 etc. steps?
• Can models reacquire singularities after losing them?
• If so, we need to map durations of BHs lifetimes and frequency of occurrence.
• Write new function that can identify # of singularities in a given system as well as whether any of them are nested (BH inside another BH).
Now for the physics…
Given a tiny # of steps that can be run on these models we are probably looking at vacuum fluctuations on a very small scale. That makes it unlikely to observe any BHs form via gravitational collapse (not enough steps).
What are these singularities then? To me, they look like topological BHs that have nothing to do with gravitational collapse and whose stability depends on the rewriting rules alone. Now imagine that our expanding universe forms these sub-Planck BHs that leech some of the spacetime into pocket universes. WMs show that nothing special happens in those regions and that they expand the same as everywhere else.
Our own vacuum can have a specific signature of these topological BHs. Average density and duration can not only affect our cosmological constant but also be a dark matter candidate. Moreover, one could try and match one of the WMs to our own universe based on these criteria.
Sooner or later, certain interesting WMs will need to be placed on the server cloud with large number of steps computed and stored to be explored by the community.
There is much more to discuss here but it’s probably a good start.
Legend: WM = Wolfram Model | BH = Black Hole | sub-Planck = reference to Stephen's belief that these "atoms of space" are much smaller than the Planck scale
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=WPPBHSummary.png&userId=1964811Anton Spektorov2021-02-24T13:19:50ZA Preliminary Code for the Fox - H Function
https://community.wolfram.com/groups/-/m/t/57378
Hey guys, how are you doing? I hope really fine!
While doing my research on pure mathematics / Statistics, I came across this beautiful function called Fox-H function.
This function is quite important to the study of Statistics (Algebra of Random Variables) and Science in general (Fractional Partial Differential Equations, for example).
Unfortunately, Mathematica does not have this function implemented. On the other hand, it has everything you need to build a code to implement the function!
It is interesting to notice that Mathematica has the Meijer-G function implemented, which is a special case of the Fox-H function.
In order to compute some results, I did a quick implementation of the function. In the cases I tested (large scale tests with Alfa-Stable random variables of type 1 and their ratio) the code worked nicely.
Please find the code below (δc and Δc are the usual convergence constants of the H-function, and γ is the real abscissa of the Mellin-Barnes integration contour):

Needs["NumericalCalculus`"];
FoxH[a_, b_, z_] :=
  Module[{SPA, SPB, IPA, IPB, T, LeftP, RightP, Poles, RadiusP, δc, Δc,
    MaxPossibleResidueIncrementstoγ, γ, NRightPolesLessThanγ, W, H},
   SPA = Product[Gamma[1 - a[[1, j, 1]] - a[[1, j, 2]]*s], {j, 1, Length[a[[1]]]}];
   SPB = Product[Gamma[b[[1, j, 1]] + b[[1, j, 2]]*s], {j, 1, Length[b[[1]]]}];
   IPA = Product[Gamma[a[[2, j, 1]] + a[[2, j, 2]]*s], {j, 1, Length[a[[2]]]}];
   IPB = Product[Gamma[1 - b[[2, j, 1]] - b[[2, j, 2]]*s], {j, 1, Length[b[[2]]]}];
   T := SPA*SPB/(IPA*IPB);
   LeftP[p_] := DeleteDuplicates[Flatten[
      Table[-(b[[1, j, 1]] + k)/b[[1, j, 2]], {j, 1, Length[b[[1]]]}, {k, 0, p}]]];
   RightP[p_] := DeleteDuplicates[Flatten[
      Table[(1 - a[[1, j, 1]] + k)/a[[1, j, 2]], {j, 1, Length[a[[1]]]}, {k, 0, p}]]];
   δc = Product[a[[1, j, 2]]^(-a[[1, j, 2]]), {j, 1, Length[a[[1]]]}]*
     Product[a[[2, j, 2]]^(-a[[2, j, 2]]), {j, 1, Length[a[[2]]]}]*
     Product[b[[1, j, 2]]^(b[[1, j, 2]]), {j, 1, Length[b[[1]]]}]*
     Product[b[[2, j, 2]]^(b[[2, j, 2]]), {j, 1, Length[b[[2]]]}];
   Δc = Sum[b[[1, j, 2]], {j, 1, Length[b[[1]]]}] +
     Sum[b[[2, j, 2]], {j, 1, Length[b[[2]]]}] -
     (Sum[a[[1, j, 2]], {j, 1, Length[a[[1]]]}] +
       Sum[a[[2, j, 2]], {j, 1, Length[a[[2]]]}]);
   Poles[p_] := Sort[DeleteDuplicates[Flatten[{LeftP[p], RightP[p]}]]];
   RadiusP[p_] := Min[Table[
       Abs[Poles[p][[i + 1]] - Poles[p][[i]]], {i, 1, Length[Poles[p]] - 1}]]/2;
   MaxPossibleResidueIncrementstoγ =
    Ceiling[Re[(Max[LeftP[0]] - Min[RightP[0]])*Max[a[[1, All, 2]]]]];
   If[Max[LeftP[0]] < Min[RightP[0]],
    γ = (Max[LeftP[0]] + Min[RightP[0]])/2,
    γ = Max[LeftP[0]] + RadiusP[MaxPossibleResidueIncrementstoγ]];
   NRightPolesLessThanγ = Catch[Do[
      If[Length[Select[RightP[i], # < γ &]] -
         Length[Select[RightP[i + 1], # < γ &]] >= 0, Throw[i]],
      {i, 10, 1000, 10}]];
   W = Max[Im[Poles[0]]] + 50;
   If[Abs[z] >= 0.2,
    Which[
     Δc > 0 || And[Δc == 0, 0 < Abs[z] < δc],
     H[p1_] := Re[(1/(2*Pi*I))*NIntegrate[T*z^(-s),
          {s, γ - I*W, γ + I*W, γ - p1 + I*W, γ - p1 - I*W, γ - I*W}]] -
       Sum[Re[NResidue[T*z^(-s), {s, r},
          Radius -> Min[0.001, RadiusP[MaxPossibleResidueIncrementstoγ]]]],
        {r, Select[RightP[NRightPolesLessThanγ], # < γ &]}];
     H[1000],
     Δc < 0 || And[Δc == 0, Abs[z] > δc],
     H[p1_] := Re[(1/(2*Pi*I))*NIntegrate[T*z^(-s),
          {s, γ - I*W, γ + I*W, γ + p1 + I*W, γ + p1 - I*W, γ - I*W}]] -
       Sum[Re[NResidue[T*z^(-s), {s, r},
          Radius -> Min[0.001, RadiusP[MaxPossibleResidueIncrementstoγ]]]],
        {r, Select[RightP[NRightPolesLessThanγ], # < γ &]}];
     H[1000]],
    H[p1_] := Re[(1/(2*Pi*I))*NIntegrate[T*z^(-s), {s, γ - I*p1, γ + I*p1},
         MaxRecursion -> 40, PrecisionGoal -> 15]] -
      Sum[Re[NResidue[T*z^(-s), {s, r},
         Radius -> Min[0.001, RadiusP[MaxPossibleResidueIncrementstoγ]]]],
       {r, Select[RightP[NRightPolesLessThanγ], # < γ &]}];
    H[2000]]]

The general idea of the code is presented above. I can send the notebook with the code if anyone wants it.
I basically use 3 possible contours in the complex plane to numerically evaluate the complex integral in Mathematica. Each contour is selected according to existence conditions.
The input format is similar to that of the Meijer-G function. In the case of the H-function, each element of the sublists of input a is not a constant but a pair {a_j, alpha_j}, according to the definition (http://www.wolframalpha.com/input/?i=fox+H+function).
You can take a deeper look at this function's theory on:
<http://www.wolframalpha.com/input/?i=fox+H+function>
<http://en.wikipedia.org/wiki/Fox_H-function>
For applications and mathematical definitions, one may check:
Mathai, A.M., Saxena, R.K. and Haubold, H.J. (2010) The H-Function: Theory and Applications, Springer, New York.
Springer, M.D. (1979) The Algebra of Random Variables, John Wiley, New York.
I have used the code to write a paper about the analytical derivation and evaluation of the PDF of the ratio of two alpha-stable random variables. The paper has just been submitted for publication, but I can discuss it with anyone interested in the topic. I also have other papers about the usage of the function itself in pure and applied math (analytically solving special real-degree equations, civil engineering applications, etc.), which I would also be happy to share =)
Anyway, I guess that if you somehow get into this area, this code would be useful.
That is it, guys! Please let me know if you have any suggestions for improving the code or any ideas on the subject!
Best Regards
LuanLuan Ozelim2013-07-12T06:26:45ZAppending a property to a variable for use in derivations
https://community.wolfram.com/groups/-/m/t/2242983
Folks -- one of my students asked if it is possible to directly associate a mathematical property with a variable for use in derivations involving that variable. For example, can one associate with a variable a the property that it is a real number greater than zero when performing integrals and the like? I know that one can make this assumption in the Integrate command and in other commands via the option Assumptions -> a > 0, but it would be useful to be able to do this at the beginning of a series of derivations involving the variable, without constantly having to repeat the assumption at every step.
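One partial workaround (a sketch, relying only on documented behavior): many symbolic commands that take an Assumptions option default it to the global variable `$Assumptions`, so setting `$Assumptions` once at the top of a session propagates the property to later steps; `Assuming` does the same for a delimited block:

```mathematica
$Assumptions = a > 0;  (* declared once, at the start of the derivations *)

Integrate[Exp[-a x^2], {x, -Infinity, Infinity}]  (* Sqrt[Pi]/Sqrt[a] *)
Simplify[Sqrt[a^2]]                               (* a *)

(* scoped alternative for a single group of computations *)
Assuming[b > 1, Simplify[Log[b^2] == 2 Log[b]]]   (* True *)
```

This does not attach the property to the symbol itself, but it avoids repeating the assumption at every step.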
Thanks -- Dan Dubin, UCSDDan Dubin2021-04-13T21:14:41ZA question on using dynamic control objects
https://community.wolfram.com/groups/-/m/t/2237195
Reference the attached notebook (which is a toy example): I have a PopupMenu with three items (opt1). For each item in the PopupMenu there's a list of options that can be selected/deselected using a CheckboxBar (opt2). The point I'm stuck on is how to reset the list of selected options, opt2, from the CheckboxBar whenever the selected item (opt1) in the main PopupMenu changes. Basically, I want to reset the list of items in opt2 to an empty list {} whenever opt1 changes.
I could add a button to do this, but it seems unnecessary and I'd prefer not to have to do so.
Any suggestions would be very much appreciated.
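Without the attached notebook, here is a minimal sketch of one common approach (the item lists are made up): give the PopupMenu's `Dynamic` a second argument, a function that is called with every new value, and clear opt2 there:

```mathematica
opt1 = 1; opt2 = {};
choices = <|1 -> {"a", "b"}, 2 -> {"c", "d"}, 3 -> {"e", "f"}|>;
Column[{
  (* the second argument of Dynamic runs on every change of the popup *)
  PopupMenu[Dynamic[opt1, (opt1 = #; opt2 = {}) &], {1, 2, 3}],
  Dynamic[CheckboxBar[Dynamic[opt2], choices[opt1]]]
}]
```
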
----------
Solved
------
Have figured out a solution - see second attached notebook (ExampleNotebook2.nb).Ian Williams2021-04-05T18:47:06ZPairwise Correlation of Financial Data
https://community.wolfram.com/groups/-/m/t/2242326
One of the regular tasks in statistical arbitrage is to compute correlations between a large universe of stocks, such as the S&P500 index members, for example.
Mathematica/WL has some very nice features for obtaining financial data and manipulating time series. And of course it offers all the commonly required statistical functions, including correlation. But the WL Correlation function is missing one vital feature - the ability to handle data series of unequal length. This arises, of course, because stock data series do not all share a common start date and (very occasionally) omit data for dates in the middle of the series. This creates an issue for the Correlation function, which can only handle series of equal length.
The usual way of handling this is to apply pairwise correlation, in which each pair of data vectors is truncated to include only the dates common to both series. Of course this can easily be done in WL; but it is very inefficient.
Let's take an example. We start with the last 10 symbols in the S&P 500 index membership:
In[1]:= tickers = Take[FinancialData["^GSPC", "Members"], -10]
Out[1]= {"NASDAQ:WYNN", "NASDAQ:XEL", "NYSE:XRX", "NASDAQ:XLNX", \
"NYSE:XYL", "NYSE:YUM", "NASDAQ:ZBRA", "NYSE:ZBH", "NASDAQ:ZION", \
"NYSE:ZTS"}
Next we obtain the returns series for these stocks, over the last several years. By default, FinancialData retrieves the data as TimeSeries Objects. This is very elegant, but slows the processing of the data, as we shall see.
tsStocks =
FinancialData[tickers, "Return",
DatePlus[Today, {-2753, "BusinessDay"}]];
Not all the series contain the same number of date-return pairs. So using Correlation is out of the question:
In[282]:= Table[Length@tsStocks[[i]]["Values"], {i, 10}]
Out[282]= {2762, 2762, 2762, 2762, 2388, 2762, 2762, 2762, 2762, 2060}
Since Correlation doesn't offer a pairwise option, we have to create the required functionality in WL. Let's start with:
PairsCorrelation[ts_] := Module[{td, correl},
If[ts[[1]]["PathLength"] == ts[[2]]["PathLength"],
correl = Correlation @@ ts,
td = TimeSeriesResample[ts, "Intersection"];
correl = Correlation @@ td[[All, All, 2]]]];
We first check whether the two series are of equal length, in which case we can Apply the Correlation function directly. If not, we use the "Intersection" specification of TimeSeriesResample to reduce the series to a common set of observation dates. The function is designed to be deployed using parallelization, as follows:
PairsListCorrelation[tslist_] := Module[{pairs, i, td, c, correl = {}},
pairs = Subsets[Range[Length@tslist], {2}];
correl =
ParallelTable[
PairsCorrelation[tslist[[pairs[[i]]]]], {i, 1, Length@pairs}];
{correl, pairs}]
The Subsets function is used to generate a non-duplicative list of index pairs and then a correlation table is built in parallel using PairsCorrelation function on each pair of series.
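As a quick illustration of the non-duplicative index pairs produced by `Subsets`:

```mathematica
Subsets[Range[4], {2}]  (* {{1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4}, {3, 4}} *)
```
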
When we apply the function to the ten stock time series, we get the following results:
In[263]:= AbsoluteTiming[{correl, pairs} =
PairsListCorrelation[tsStocks];]
Out[263]= {13.4791, Null}
In[270]:= Length@correl
Out[270]= 45
In[284]:= Through[{Mean, Median, Min, Max}[correl]]
Out[284]= {0.381958, 0.396429, 0.200828, 0.536383}
So far, so good. But look again at the timing of the PairsListCorrelation function. It takes 13.5 seconds to calculate the 45 correlation coefficients for 10 series. To carry out an equivalent exercise for the entire S&P 500 universe would entail computing 124,750 coefficients, taking approximately 10.5 hours! This is far too slow to be practically useful in the given context.
Some speed improvement is achievable by retrieving the stock returns data in legacy (i.e. list rather than time series) format, but it still takes around 10 seconds to calculate the coefficients for our 10 stocks. Perhaps further speed improvements are possible through other means (e.g. compilation), but what is really required is a core language function to handle series of unequal length (or a Pairwise method for the Correlation function).
For comparison, I can produce the correlation coefficients for all 500 S&P member stocks in under 3 seconds using the 'Rows', 'pairwise' options of the equivalent correlation function in another scientific computing language.
----------
# UPDATE
Another Mathematica user suggested a way to speed up the pairwise correlation algorithm using associations.
We begin by downloading returns data for the S&P500 membership in legacy (i.e. list) format:
tickers = Take[FinancialData["^GSPC", "Members"]];
stockdata =
FinancialData[tickers, "Return",
DatePlus[Today, {-753, "BusinessDay"}], Method -> "Legacy"];
Then define:
PairwiseCorrelation[stockdata_] :=
Module[{assocStocks, pairs, correl},
assocStocks = Apply[Rule, stockdata, {2}] // Map[Association];
pairs = Subsets[Range@Length@assocStocks, {2}];
correl =
Map[Correlation @@ Values@KeyIntersection[assocStocks[[#]]] &,
pairs];
{correl, pairs}]
Here we are using the KeyIntersection function to identify common dates between two series, which is much faster than other methods. Accordingly:
In[317]:= AbsoluteTiming[{correl, pairs} =
PairwiseCorrelation[stockdata];]
Out[317]= {112.836, Null}
In[318]:= Length@correl
Out[318]= 127260
In[319]:= Through[{Mean, Median, Min, Max}[correl]]
Out[319]= {0.428747, 0.43533, -0.167036, 0.996379}
This is many times faster than the original algorithm and, although much slower (40x to 50x) than equivalent algorithms in other languages, gets the job done in reasonable time.
So I still think we need a Method -> "Pairwise" option for the Correlation function.Jonathan Kinlay2021-04-12T07:05:32Z
https://community.wolfram.com/groups/-/m/t/2243429
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/wolfram-community/Published/APR-FOOL-GOSPERBill Gosper2021-04-13T17:00:32ZA formula for the n-th Laplacian of a Gaussian function
https://community.wolfram.com/groups/-/m/t/2242639
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/f966d372-af2b-4b74-88d7-97d40410310c
[Original]: https://www.wolframcloud.com/obj/rauan234/Published/gaussian_laplacian.nbRauan Kaldybaev2021-04-12T15:48:11ZTennis racket (Dzhanibekov) effect: torque-free rotational motion
https://community.wolfram.com/groups/-/m/t/2243140
![enter image description here][1]
*SUPPLEMENTARY WOLFRAM MATERIALS for ARTICLE:*
> Peterson Christian, and William Schwalm. 2021. "Euler's rigid rotators, Jacobi elliptic functions, and the Dzhanibekov or tennis racket effect"
> American Journal of Physics 89, 349 (2021).
> https://doi.org/10.1119/10.0003372
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=hero.gif&userId=20103
[2]: https://www.wolframcloud.com/obj/wolfram-community/Published/tennis_racket_effect3.nbChristian Peterson2021-04-13T14:48:09ZPreserve arguments' patterns while taking a Derivative?
https://community.wolfram.com/groups/-/m/t/2242146
Hello!
I've encountered a problem with loss of pattern information after using "function-producing" `Derivative` function.
Assume I have a scalar-valued function defined on vectors:
H[r_?VectorQ] := r.r
(evidently, `r` is assumed to be a three-dimensional vector, but it does not matter)
And then I would like to solve a differential equation like *dr / dt == grad(H)*. Note that I would like to keep the vectorial notation i.e. the solution must be a vector-valued function. I've tried the following:
NDSolveValue[{D[r[t], t] == Table[Derivative[xspec][H][r[t]], {xspec, IdentityMatrix[3]}],
r[0] == {1, 1, 1}}, r, {t, 0, 100}]
This returns an InterpolatingFunction, but it does not handle the derivatives of `H` properly. For example, if a term like `#1[[1]]` appears in the output of `Derivative`, it accepts `r[t]` as its argument and evaluates simply to `t`; and if there's `#1[[2]]`, it throws a message that this part does not exist. It seems that `Derivative` loses the argument check, and the returned function no longer formally requires vectorial input.
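For what it is worth, one workaround that sidesteps `Derivative` entirely (a sketch with my own component variables, not from the original post) is to form the gradient symbolically from a list of component functions before handing the system to NDSolveValue:

```mathematica
H[r_?VectorQ] := r.r;
rv = {x1[t], x2[t], x3[t]};                 (* explicit component functions *)
eqns = Thread[D[rv, t] == D[H[rv], {rv}]];  (* dr/dt == grad(H), componentwise *)
sol = NDSolveValue[{eqns, Thread[(rv /. t -> 0) == {1, 1, 1}]},
   {x1, x2, x3}, {t, 0, 10}];
```

The price is that the solution comes back as three scalar InterpolatingFunctions rather than a single vector-valued one.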
So, the question is: can I somehow tell WL that this derivative is still a function of a vector?Nikolay Shilov2021-04-12T12:32:31Z[feature req] GUI-item for showing parentheses of grouping
https://community.wolfram.com/groups/-/m/t/2241774
When we write code and don't use parentheses, *Mathematica* still knows the grouping of expressions and operators, which results in a determined order of execution, because of the system's internal set of rules for priority/**precedence**. We can and sometimes *must* control the grouping of expressions/operators by adding parentheses in order to get the desired output. If parentheses aren't needed, we can see in <code>FullForm[]</code> that they get dropped along the way by *Mathematica*.
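For instance, the internal grouping is already visible via `FullForm`:

```mathematica
FullForm[a + b*c]    (* Plus[a, Times[b, c]]  -- parentheses not needed, so none appear *)
FullForm[(a + b)*c]  (* Times[Plus[a, b], c]  -- the explicit grouping survives *)
```
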
My idea/suggestion/feature request (for us to discuss, and for the Wolfram developers to consider) is:
How about a GUI-item, maybe in form of a button or within a context menu, which automatically **expands the selected code in-line by adding parentheses** for the recognized code structure, thus making the implicit precedence and grouping **visible**?
When we "multi-click" (double-click, triple-click, quadruple-click, quintuple-click, etc) with the left mouse-button in the **middle** of any large chunk of code, *Mathematica* automatically recognizes and **selects** the next bigger balanced compound expression ("code snippet"): this visual auto-select feature is already helpful in giving us a rough idea about the code line structure from inner to outer expressions. The context menu item "**Un/Comment**" alters the code by automatically adding $\text{(* *)}$ to the selected code. So why not have a similar context menu item for the auto-selected code snippet called "**Show Grouping**" or "**[Explicitize][1] Grouping**"?
My idea is powerful. I might be a beginner and always stay at that level but my [ideas][2] are great and contribute to the improvement of the *Mathematica* product, if they get implemented. I would love to hear what the Wolfram developers think of my idea and what the odds are that it gets implemented the sooner the better, thanks.
[1]: https://en.wiktionary.org/wiki/explicitize
[2]: https://community.wolfram.com/groups/-/m/t/2205811Raspi Rascal2021-04-11T11:43:57ZA more practical template for pupils and students
https://community.wolfram.com/groups/-/m/t/2205811
**Summary:** I would love the folks at Wolfram to provide an **attractive** simple modern practical user-friendly "filled" template for a pupil's or student's **workbook** where he/she can dive in right away, edit the sample entries (by overwriting) by typing up problem statements and writing down their own homework and solutions, or notes. The "epm-file" serves as a great example, Wolfram is allowed to copy/emulate it!!
**TL;DR:** Yes, *Mathematica* comes with a few "blank" templates, even one for writing a textbook. And very few professional writers were successful at employing that template to write and publish their book. But I am talking about pupils here, high schoolers, students: "we" don't want to publish a pro-quality book but only need something very neat (simple, easy, fun!) that is practical to use as a workbook or solutions book.
The "template" I have been working enthusiastically with is the notebook format by [@Paul Wellin][at0]. @wolframdevelopers Just download the [21.8MB *.nb-file][1] (let's call it 'the epm-template') and witness for yourself! This is an amazingly effective, beautiful, attractive format to work with. It is "much better" (more practical, more user-friendly, etc.) than the blank templates which come with *Mathematica*; this must have been his reason to create that notebook structure: to create something better. And one must applaud his efforts. His example of a solutions book inspired and motivated me to write my own solutions to problems, e.g. from maths texts. There are some technical problems I am running into, because only parts of that notebook structure are easily editable; the other parts require notebook-programming knowledge .. and were never meant to be edited by the EPM user (e.g. the drop-down menus for subchapters). Even with the author's help, I can't figure out some of the technicalities.
That's why I am sharing this idea in public, for the Wolfram developers to see. If Paul can build such an amazing ebook structure, why can't/don't the Wolfram team do it and also include the documentation or a how-to-use-this-template YouTube video tutorial? I showed the epm-template and my work with it to friends, schoolers, a.o., and they feel inspired to do the same with their (say, maths) homework! Then I must admit to them that "1st, you'll need a Raspberry Pi; 2nd, the epm-template comes with some technical restrictions, e.g. chapter numbering doesn't go higher than chapter 10", and they ask the same question: why doesn't Wolfram, the maker of *Mathematica*, offer such an attractive, readily usable notebook structure for pupils and high schoolers to fill in their homework and problem solutions?
The following comment is imho not too far-fetched: if there were such a file (very similar to the epm-template) which became popular on the internet, it could in turn even popularize *Mathematica* itself! Youtubers (incl. high schoolers, students) could make viral videos of how they write down their homework with the help of *Mathematica* instead of M$ Word or paper and pencil. And the spark would be really that fantastic easy-to-edit notebook structure! At least that worked out for me! I couldn't get bothered by the built-in blank templates; I felt appalled, sorry to say. They might work (like a spark) for some, but totally didn't for me. The spark only came when I saw Paul's creation for the very first time .. and all of a sudden, I was hooked!
So @wolframdevelopers, why not put some efforts into a notebook structure creation similar to Paul's? — And this has to be Wolfram's task, not the end-user's, imho. As an end-user, I want to just use what's handed to me, and not build something even better, BEFORE I can finally start writing down my homework and textbook problem solutions.
Templates have to come from Wolfram. It's their job. The template should be built-in and called ''**New >> Styled Notebook... >> Stylesheets: Solutions Book**". In the meantime I am trying to figure out how to exploit more of the epm-template.
Point being, somebody at Wolfram should care. Paul cared, so he put his Wolfram L knowledge to practice and build that wonderful epm notebook. That notebook structure, in turn, inspired *me* to become productive with Mathematica, what a spark! And it inspires others whom i demonstrate it to. I obviously care, you can feel it with this lengthy post. Paul doesn't get paid for making his creation more accessible/usable to non-EPM readers or for writing up a documentation file how to edit/manipulate/expand the structure step-by-step (so that a high schooler could follow) or even shoot a tutorial video "How to use the epm-template for doing your (maths) homework". It's not his job. But, as I understood very well, anyone is allowed to edit and learn from the file structure, and re-use it.
I think that Paul did enough, kudos. Now it's time for Wolfram to learn from him/it and make it more public and accessible to all current users and prospective Wolfram users.
If nobody at Wolfram cares about Paul's wonderful creation, or the idea presented here (see summary), then .. such is life (and Wolfram is missing out on a simple yet effective way of popularizing their main product, at least among young students), and I will accept it, even though I wouldn't understand it. I can just wish good luck to everyone. It is not *my* job (and shouldn't be one of my concerns) to market and popularize *Mathematica*, even though I care about its lack of popularity; if a difficult application is not popular, then it's difficult to get new users, youngsters, on board. Free on the Raspberry Pi was just the first step (and it appears that it did **not** have the wished-for popularization effect, unfortunately)! Kids choose 'the popular girl', the easy-to-use, user-friendly, GUI-driven application which so many people and books talk about (and that is not our Mathematica, we all know it). I know that my idea is great. It worked on me. Paul's file. If Wolfram doesn't pick it up, at least I got this topic off my chest for the world to see and learn from. At least the marketing team should pick it up, study what's so great about the file (the effect on the user), and seriously consider it.
[at0]: https://community.wolfram.com/web/paulwellin
[1]: https://www.programmingmathematica.com/exercises-and-solutions1.htmlRaspi Rascal2021-02-27T18:10:14ZAligning the horizontal axis of a combination chart
https://community.wolfram.com/groups/-/m/t/2240344
I have some data for which I need to draw a ListLinePlot and a BarChart separately and then combine them.
My question is how to combine them and align the horizontal axes correctly.
data = Table[{RandomInteger[{1, 200}], RandomReal[{0.3, 0.8}], i}, {i,
0.3, 30, 0.3}];
Column[{ListLinePlot[data[[All, {3, 2}]], PlotRange -> All,
ImageSize -> Medium],
BarChart[data[[All, 1]], ImageSize -> Medium, AspectRatio -> 1/8]},
Alignment -> Center]
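One way to get the horizontal axes to line up (a sketch, not a polished answer; it reuses the `data` defined above, and the padding values are guesses to be tuned) is to force identical ImagePadding and ImageSize on both plots, so their plot ranges span the same horizontal pixels:

```mathematica
pad = {{40, 10}, {30, 5}};  (* {{left, right}, {bottom, top}} in printer's points *)
Column[{
  ListLinePlot[data[[All, {3, 2}]], PlotRange -> All,
   ImagePadding -> pad, ImageSize -> 400],
  BarChart[data[[All, 1]], AspectRatio -> 1/8,
   ImagePadding -> pad, ImageSize -> 400]
 }, Alignment -> Center]
```

This aligns the frames pixel-for-pixel; since the bar chart indexes its bars 1..100 while the line plot runs over 0.3..30, both data ranges then occupy the same horizontal span.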
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=plot.jpg&userId=1170351Tsai Ming-Chou2021-04-08T20:00:50ZKernel quits while using DSolve[ ]?
https://community.wolfram.com/groups/-/m/t/2241599
Why does my DSolve command not solve the non-linear ODE?
Clear["Global`*"]
(*Parameters*)
length = 1.5;
width = 0.25;
height = 0.25;
area = width*height;
cforce = -400;
dforce = (500*(X1^2));
stress = 10*displacement'[X1] + 100000*displacement'[X1]^3;
DE1 = D[stress*area, X1] + dforce;
DE2 = 10*displacement'[X1] + 100000*displacement'[X1]^3 + 6400;
(* Solving for the Exact Displacement Function *)
solution =
DSolve[{DE1 == 0, displacement[0] == 0, DE2 == 0 /. X1 -> length},
displacement[X1], X1];
displacement = displacement[X1] /. solution[[1]];
Print["The Exact Displacement Function of the Bar is"]
Print[displacement];
Print[Plot[{displacement}, {X1, 0, 1.5}, AspectRatio -> 0.4,
AxesLabel -> {"Bar Length (m)", "Axial Displacement (m)"},
PlotLegends -> {"Exact Displacement of the Bar"}]];
Print[" "]Aslam Kittur2021-04-10T18:10:30Z
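DSolve frequently cannot produce a closed form for a boundary-value problem like this one, which is cubic in displacement'[X1]. A numerical sketch of an alternative route (my own rework of the problem above, with hypothetical variable names): integrate the equilibrium equation once by hand, solve the resulting cubic for the strain pointwise, and integrate the strain:

```mathematica
area = 0.25*0.25; length = 1.5;
(* D[area*stress, X1] == -500 X1^2 implies area*stress == c1 - 500 X1^3/3;
   the end condition stress == -6400 at X1 == length fixes c1 *)
c1 = area*(-6400) + 500*length^3/3;
(* 10 e + 100000 e^3 is strictly increasing in e, so the real root is unique *)
strain[x_?NumericQ] :=
  e /. First[NSolve[10 e + 100000 e^3 == (c1 - 500 x^3/3)/area, e, Reals]];
u[x_?NumericQ] := NIntegrate[strain[s], {s, 0, x}];
Plot[u[x], {x, 0, length}]
```
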