Community RSS Feed
https://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Wolfram Language sorted by active
[WSG21] Daily Study Group: Multiparadigm Data Science
https://community.wolfram.com/groups/-/m/t/2244736
A new study group for Multiparadigm Data Science with the Wolfram Language begins Monday, Apr 19, 2021!
Making progress in an online course can be daunting when you have to study all alone. Join a cohort of fellow Wolfram Language users for a two-week study group that works through the Wolfram U course "[Multiparadigm Data Science][1]". A certified instructor will guide each session by reviewing the lesson notebooks from the course, working through the code and answering questions.
Get support for starting on the path to earning Level 1 and Level 2 certifications in multiparadigm data science.
**Sign up here:** https://wolfr.am/UNdaIas0
[1]: https://www.wolfram.com/wolfram-u/multiparadigm-data-science/Abrita Chakravarty2021-04-15T22:05:55Z[WWS21] Dimension Algorithm and Dimension Tensor in Wolfram Model
https://community.wolfram.com/groups/-/m/t/2162387
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/0f74e104-cf5a-4f94-a23b-f93993074a55
[Original]: https://www.wolframcloud.com/obj/94b998d8-1b15-4d44-b000-4b51b8913ce1Zhenzhong Xing2021-01-14T18:14:32ZHow to control PointSize in GeoListPlot?
https://community.wolfram.com/groups/-/m/t/2244579
I marked two groups of points that lie very close together on a GeoListPlot and hope to control the PointSize to avoid overlap.
How should I do it?
a={{24.9954,121.304},{25.0603,121.202},{24.9528,121.204},{24.9528,121.204},{24.8639,121.216},{24.9001,121.039},{24.9001,121.039},{24.7406,121.089},{24.8056,120.972},{24.697,120.899},{24.5653,120.82},{24.9533,121.222},{25.0355,121.083},{24.9868,121.309}};
b={{24.9001,121.039},{24.8056,120.972},{24.9528,121.204},{24.9948,121.32},{24.9533,121.222},{25.0603,121.202},{24.697,120.899},{24.5653,120.82},{25.0355,121.083},{24.8639,121.216},{24.7406,121.089}};
GeoListPlot[{GeoPosition@a,GeoPosition@b},PlotLegends->Placed[{"A","B"},Bottom],PlotStyle->PointSize[0.01]]Tsai Ming-Chou2021-04-15T20:50:07ZReplacing runs of identical numbers in lists
https://community.wolfram.com/groups/-/m/t/2244422
I have nested lists like {{2,2,7},{2,2,3,7},{3},{6,6,6,7}}
I need to replace each run of a repeated number x in the sublists with Range[x-n+1, x], n being the length of the run.
For the above example: 2,2 -> 1,2; 6,6,6 -> 4,5,6, giving {{1,2,7},{1,2,3,7},{3},{4,5,6,7}}.
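For reference, one possible approach sketched with `Split` (hedged; not necessarily the most efficient): group each sublist into runs, then turn a run of length n ending in x into `Range[x - n + 1, x]`:

```
lists = {{2, 2, 7}, {2, 2, 3, 7}, {3}, {6, 6, 6, 7}};

(* Split groups consecutive identical elements; each run of
   length n with value x becomes Range[x - n + 1, x] *)
Map[
 Flatten[Range[First[#] - Length[#] + 1, First[#]] & /@ Split[#]] &,
 lists]

(* {{1, 2, 7}, {1, 2, 3, 7}, {3}, {4, 5, 6, 7}} *)
```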
How do I achieve this (efficiently)?Achim Luhn2021-04-15T05:10:50ZHawaii weather stations list
https://community.wolfram.com/groups/-/m/t/2244206
Dear all,
I need a list of weather stations in Hawaii. Is it possible to extract that with Mathematica?
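A possible starting point is `WeatherData`, which can look up stations near a location (a sketch; the `{location, n}` form for the n nearest stations and the Honolulu entity are assumptions to adapt):

```
(* the 50 weather stations nearest to Honolulu --
   adjust the location and the count as needed *)
stations = WeatherData[{Entity["City", {"Honolulu", "Hawaii", "UnitedStates"}], 50}]

(* coordinates of the first station found *)
WeatherData[First[stations], "Coordinates"]
```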
Thank you in advance for your kind support.Alex Teymouri2021-04-14T20:20:42ZSomething About The Automated Stippling Drawing
https://community.wolfram.com/groups/-/m/t/759091
# 1. Stippling drawing -- an introduction
![results examples][1]
[Stippling](https://en.wikipedia.org/wiki/Stippling) is a kind of drawing style using only points to mimic lines, edges and grayscales. The entire drawing consists only of black points on a white background. The density of the points gives the impression of grayscale shading.
Back in 1510, stippling was first invented as an [engraving technique](https://en.wikipedia.org/wiki/Stipple_engraving), then became popular in many fields because it requires just one color of ink. Its influence spread widely beyond engraving: we can see many technical illustrations done in this style, especially in math and mechanics books. [Even today many people are interested in it](http://mathematica.stackexchange.com/q/21240/17), and archaeologists, geologists and biologists still use it in fieldwork.
On the art side, stippling shows an interesting relation to [divisionism](https://en.wikipedia.org/wiki/Divisionism) and [pointillism](https://en.wikipedia.org/wiki/Pointillism), both of which belong to neo-impressionism.
In the domain of so-called non-photorealistic computer graphics, stippling is one of the basic artistic painting styles, and can serve as the foundation of many other styles such as mosaic, stained glass, hatching, etc. (*Non-Photorealistic Computer Graphics*, Thomas Strothotte, Stefan Schlechtweg; *The Algorithms and Principles of Non-photorealistic Graphics*, Weidong Geng)
# 2. The essential properties of a "good" stippling
## 2.1 Well-spaced points
When using points to approximate a grayscale, a random set of points with uniform distribution is usually not good enough. To illustrate, both images below have 63024 points, but the one on the right is a good stippling (usually called **_well-spaced_**). The one on the left has too many unwanted small artifacts -- clumps and voids that do not exist in the original image. (And please be patient, we'll talk about how to generate the "good" one in the next section:)
![comparison between random and stippling - non-uniform case][2]
The phenomenon can be seen even more clearly in the uniform grayscale case:
![comparison between random and stippling - uniform case][3]
I hope it's plain to see that in the stippling graphics, the distance between any point and its nearest neighbor is nearly **constant**. So it looks like we have found the essential property of a "good" stippling.
But wait... What about the Lena example? Clearly it's "good" (if you allow me to say so), yet the distances between closest points vary.
It turns out we just need to take one more step to notice that a constant nearest-neighbor distance implies equilateral triangles:
![comparison between random and stippling - uniform DelaunayMesh][4]
So the so-called _good_ or _well-spaced_ property can be described more precisely as:
> The cells of the Delaunay mesh induced by the points are **as close to equilateral triangles as possible**.
The statement can be confirmed by statistics on the set of interior angles of all the cells:
![interior angles statistics][5]
## 2.2 Another (equivalent) definition
There is [another viewpoint](http://arxiv.org/abs/1509.00229) from which to judge whether the positions of the points are good or not.
Consider the **same blur** taken on all the three Lena images: the original one, the random points one and the well-spaced one:
![comparison among blurred images][6]
The so-called *projection algorithm* looks for a point distribution which, when convolved with a certain kernel h, gives the same result as convolving the original image with that same h. We are not going to talk about the details here; readers who are interested can refer to the mentioned paper.
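As a rough sketch of the idea (hedged: `imgOrig` and `refinedPts` are borrowed from the later sections of this post, and a Gaussian is used as the kernel h):

```
(* blur both the (negated) original image and a rasterized point set
   with the same Gaussian kernel h, then compare the results *)
blurWith = ImageConvolve[#, GaussianMatrix[10]] &;

stippleImg = ColorNegate@Binarize@Rasterize[
    Graphics[{AbsolutePointSize[1], Point@refinedPts}],
    ImageSize -> ImageDimensions[imgOrig]];

(* a small distance means the point distribution "projects" onto the image *)
ImageDistance[blurWith[ColorNegate@imgOrig], blurWith[stippleImg]]
```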
# 3. Stippling of a uniform area -- centroidal Voronoi diagram approach
Now let's get to the real fun :)
For an easy start we consider the stippling technique for a uniform area (i.e. an area with constant grayscale). There is a de facto standard method for it called **_centroidal Voronoi diagram_** (CVD) which is usually generated by the [*Lloyd's algorithm*](https://en.wikipedia.org/wiki/Lloyd%27s_algorithm).
Basically, the algorithm acts as the following relaxation steps:
1. Generate $n$ random points inside the interested region
2. Generate the [***Voronoi diagram***](https://en.wikipedia.org/wiki/Voronoi_diagram) of the $n$ points
3. Find the ***centroid*** (i.e. center of mass) of each Voronoi cell
4. Use the $n$ centroids as the resulting points
5. If satisfied, stop; else go back to step 2 with the centroids as the new points
Here the key techniques are the Voronoi diagram generation and the centroid finding. The former one is a perfect match for the bounded version of the built-in function `VoronoiMesh`. The latter one, as the Voronoi cells for a closed region are always closed convex polygons, has a [simple formula](https://en.wikipedia.org/wiki/Centroid#Centroid_of_polygon).
Suppose the cell is defined by $n$ vertices ordered (counter-)clockwise $\{\boldsymbol{P}_1=(x_1,y_1),\boldsymbol{P}_2=(x_2,y_2),\dots,\boldsymbol{P}_n=(x_n,y_n),\boldsymbol{P}_{n+1}=(x_1,y_1)\}$, then its centroid $\boldsymbol{C}$ can be determined by the following Eq. 1:
$$\left\{\begin{eqnarray}
\boldsymbol{C}&=&\frac{1}{6A}\sum_{i=1}^n (\boldsymbol{P}_i+\boldsymbol{P}_{i+1})\,\left|\begin{matrix}x_i&y_i\\x_{i+1}&y_{i+1}\end{matrix}\right|\\
A&=&\frac{1}{2}\sum_{i=1}^n \left|\begin{matrix}x_i&y_i\\x_{i+1}&y_{i+1}\end{matrix}\right|
\end{eqnarray}\right.$$
As a test we generate 2500 uniformly distributed random points in a square region $[-1,1]\times[-1,1]$:
initPts = RandomPoint[Rectangle[{-1, -1}, {1, 1}], 2500];
whose Voronoi diagram is:
VoronoiMesh[initPts, {{-1, 1}, {-1, 1}}] //
Graphics[{
GrayLevel[.6],
Line[Identity @@@ MeshPrimitives[#, 1]],
Red, AbsolutePointSize[2],
Point[initPts]
}] &
![Voronoi of random points][7]
The centroid step of Lloyd's algorithm can be expressed as the following `findCentroid` function:
Clear[findCentroid]
findCentroid[p : Polygon[{{__Real} ..}]] :=
Module[{pts = p[[1]], pairs, dets, area},
pairs = Partition[pts, 2, 1, {1, 1}];
dets = Det /@ pairs;
area = 1/2 Plus @@ dets;
dets.(Plus @@@ pairs)/(6 area)
]
The cell `Polygon`-s of the Voronoi mesh can be extracted with `MeshPrimitives[...,2]`, which then should be piped to the `findCentroid` to complete one iteration:
Module[{vm},
Function[pts,
vm = VoronoiMesh[pts, {{-1, 1}, {-1, 1}}];
findCentroid /@ MeshPrimitives[vm, 2]
]@initPts
];
Now we are ready to run 500 iterations and animate the first 50 results to give an intuitive feeling for the CVD:
intermRes =
Module[{vm},
NestList[
Function[pts,
vm = VoronoiMesh[pts, {{-1, 1}, {-1, 1}}];
findCentroid /@ MeshPrimitives[vm, 2]
],
initPts,
500
]
]; // AbsoluteTiming
refinedPts = intermRes[[-1]];
allframes =
Function[pts,
VoronoiMesh[pts, {{-1, 1}, {-1, 1}}] //
Graphics[{
GrayLevel[.6],
Line[Identity @@@ MeshPrimitives[#, 1]],
Red, AbsolutePointSize[2],
Point[pts]
}, ImageSize -> 600] &
] /@ intermRes[[;; 50]];
ListAnimate[allframes, 24, DisplayAllSteps -> True, AnimationRepetitions -> 1]
{197.984, Null}
![uniform CVD][8]
There are various ways to show the difference between the point distributions before and after the process.
For example, we can use `NearestNeighborGraph` to illustrate their connectivity, which will highlight the unwanted voids in the former case:
MapThread[
Function[{pts, label},
Graphics[
Line[EdgeList[NearestNeighborGraph[pts, 3]] /.
UndirectedEdge -> List], ImageSize -> 400] //
{Style[label, 20, FontFamily -> "Constantia"], #} &
],
{{initPts, refinedPts}, {"initial points", "refined points"}}
] // Grid[# // Transpose, Alignment -> Left, Frame -> All] &
![Comparison of connectivity between random and stippling][9]
Or, as shown in the previous section, we can compare the statistics of the interior angles:
Plot[
Evaluate[MapThread[
        Legended[PDF[KernelMixtureDistribution[#1], θ],
Style[#2, 15]] &,
{
(
# // DelaunayMesh // MeshPrimitives[#, 2] & //
Function[pts,
VectorAngle[#2 - #1, #3 - #1]/Degree & @@@
Partition[pts, 3, 1, {1, 1}]] @@@ # & // Flatten
) & /@ {initPts, refinedPts},
{"initial points", "refined points"}
}]],
     {θ, 0, 180},
PlotRange -> All, GridLines -> {{60}, None}, Frame -> True,
FrameLabel -> (Style[#, Bold, 15] & /@ {"angle in °", "PDF"})
]
![interior angles statistics][10]
To give another intuitive impression of the "well-spaced" property from a different point of view, we compare the discrete Fourier transform of the initial points with that of the refined points:
MapThread[
Function[{pts, label},
Graphics[{AbsolutePointSize[0], Point@pts}] //
Rasterize[#, ImageSize -> {701, 701}] & // Binarize //
ColorNegate //
Function[img,
ImageMultiply[img,
DiskMatrix[.4 ImageDimensions[img],
ImageDimensions[img]] // Image]] //
ImageData // # - Mean[Flatten@#] & //
Fourier // Abs // Rescale //
RotateRight[#, Floor[Dimensions[#]/2]] & //
Image[1.2 - (1 - #)^5, "Real", ImageSize -> 400] & //
{Style[label, 20, FontFamily -> "Constantia"], #} &
],
{{initPts, refinedPts}, {"initial points", "refined points"}}
] // Grid[#//Transpose, Alignment -> Left, Frame -> All] &
![comparison of FFT between random and stippling][11]
It can be clearly seen that in the refined points' case, we effectively achieved both **isotropy** and a **low-stop filter**, which is another way to understand the term *well-spaced*. Note that the property is also related to the concept of ["*colors*" of noise](https://en.wikipedia.org/wiki/Blue_noise).
# 4. Stippling of a non-uniform area
So far we only talked about the method for stippling a uniform area. But how about a non-uniform area, like the Lena example?
Lloyd's algorithm described above cannot be directly applied here without modification, as it always smooths out the distribution, resulting in a uniform CVD. So how can we generalize it to the non-uniform case?
## 4.1 Approach using the conformal mapping
Recall that we were looking for a Delaunay mesh with cells that resemble equilateral triangles as much as possible. One thing that immediately comes to mind is the [conformal map](https://en.wikipedia.org/wiki/Conformal_map), which transforms between two spaces while preserving angles locally. Thus a good stippling on a uniform area is guaranteed to be mapped to a good one on a specified non-uniform area.
A simple example of conformal map is a complex holomorphic function $f$, say $f(z)=(z-z^2/2)\,\exp\left[-(z-1)^2/5\right]$:
cmfunc = Compile[{{z, _Complex}},
Module[{w = 1. I},
w = (z - 1/2 z^2) Exp[-((z - 1)^2/5)];
{Re@w, Im@w}
],
RuntimeOptions -> "Speed",
RuntimeAttributes -> {Listable},
Parallelization -> True
];
which transforms the stippling points `refinedPts` in the uniform square $[-1,1]\times[-1,1]$ to a new points distribution `transPts`:
transPts = refinedPts.{1, I} // cmfunc;
transPts // Graphics[{AbsolutePointSize[2], Point@#}] &
![conformal transformed points][12]
The result looks very nice in the sense of "well-spaced", which can also be confirmed with the Delaunay mesh, Voronoi mesh and its connectivity:
transPts // DelaunayMesh
![DelaunayMesh of the transPts][13]
Module[{vm = VoronoiMesh[refinedPts, {{-1, 1}, {-1, 1}}]},
Graphics[GraphicsComplex[
(Identity @@@ MeshPrimitives[vm, 0]).{1, I} // cmfunc,
vm // MeshCells[#, 1] & // Line[Identity @@@ #] &
]]
]
![VoronoiMesh of the transPts][14]
transPts // NearestNeighborGraph[#, 3] & //
EdgeList // # /. UndirectedEdge -> List & // Line // Graphics
![connectivity of the transPts][15]
So far so good. However, this theoretically elegant approach is not easily generalized: finding the right $f$ for an arbitrary target distribution requires sophisticated mathematics. Again, we are not going to talk about the details here; for readers who are interested, there is a whole dedicated research field called *computational conformal mapping*.
## 4.2 Approach using the weighted Voronoi diagram
Despite the elegance of the conformal mapping, the popular method for stippling a non-uniform area comes from a modification of the CVD called the _**weighted** centroidal Voronoi diagram_ (A. Secord. Weighted Voronoi stippling. In *Proceedings of the second international symposium on Non-photorealistic animation and rendering*, pages 37-43. ACM Press, 2002; A. Secord. Random Marks on Paper, Non-Photorealistic Rendering with Small Primitives, *Thesis for the Degree of Master of Science*, 2002).
The algorithm is similar to the Lloyd's algorithm, only in step 3, when looking for the centroid of the Voronoi cell, a variable areal density $\rho(\boldsymbol{P})$ is considered, which is proportional to the grayscale at the location $\boldsymbol{P}$. Thus instead of using Eq. 1, we shall calculate the centroid according to the following definition (Eq. 2):
$$\boldsymbol{C}_{\text{Cell}(\boldsymbol{P})} = \frac{\int_{\boldsymbol{x}\in\text{Cell}(\boldsymbol{P})}\boldsymbol{x}\,\rho(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}}{\int_{\boldsymbol{x}\in\text{Cell}(\boldsymbol{P})}\rho(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}}$$
Clearly the integrations are much more time-consuming than Eq. 1. In his paper and master thesis Secord presented an efficient way to compute them, which involves precomputing certain integrals. However, I noticed (without theoretical proof) that the weighted CVD can be approximated sufficiently well in a much cheaper way if we accept a compromise: not sticking to the exact centroid formula (i.e. Eq. 2), but only keeping the core idea of choosing $\boldsymbol{C}$ **closer to points with larger weights**.
The new idea is simple. For a cell of $n$ vertices $\{\boldsymbol{P}_1,\boldsymbol{P}_2,\dots,\boldsymbol{P}_n\}$, the algorithm acts as follows:
![approx algorithm sketch][16]
1. Compute the geometric centroid $\boldsymbol{C}$
2. Compute the weights of the vertices as $\{w_1,\dots,w_n\}$
3. Compute normalized weights as $W_k = \frac{w_k}{\max(w_1,\dots,w_n)}$
4. For every vertex $\boldsymbol{P}_k$, move it along $\overrightarrow{\boldsymbol{C}\boldsymbol{P}_k}$ with factor of $W_k$ to new position $\boldsymbol{P}_k'$ (So vertex with largest weight does not move, vertex with smallest weight moves most)
5. Compute the geometric centroid $\boldsymbol{C}'$ of the new cell defined by $\{\boldsymbol{P}_1',\dots,\boldsymbol{P}_n'\}$ as the final result
Note that the convergence of our simplified algorithm is not obvious, so it might be wise to do an "*early stop*" during iteration.
Written in the Wolfram Language, it's this `findCentroidW` function:
Clear[findCentroidW]
findCentroidW[p : Polygon[{{__Real} ..}], densityFunc_] :=
Module[{cent, pts = p[[1]], wlst},
wlst = #/Max[#] &[densityFunc @@@ pts];
cent = findCentroid[p];
cent + findCentroid@Polygon@MapThread[#2 (#1 - cent) &, {pts, wlst}]
]
Numerical experiments show that this approximation gives fairly good results. In the next section we'll demonstrate its application on the Lena image.
## 4.3 Numerical experiment on Lena
In this section we test our non-uniform stippling method on the famous Lena photo.
First let's import the original image. We'll keep both the grayscale and color versions for later use in the artistic rendering section:
imgOrigColored = ExampleData[{"TestImage", "Lena"}];
imgOrig = imgOrigColored // ColorConvert[#, "Grayscale"] & // ImageAdjust;
For `Interpolation`'s convenience, we'll use `ColorNegate`. For a better visual result, we enhance the edges where the gradient is large:
img = Function[img,
ImageAdd[ImageMultiply[img, # // ColorNegate], #] &[
GradientFilter[img, 1] // ImageAdjust[#, {2, 2, 1}] &
]
][imgOrig // ColorNegate]
![Lena for density image][17]
The image coordinates are rescaled to the rectangular region $[-1,1]\times[-1,1]$ for convenience and passed through `Interpolation` to get a smooth description of the grayscale field:
densityFunc =
img // Reverse[ImageData[#, "Real"]] & // Transpose // Function[array,
Module[{dim = Dimensions@array},
MapIndexed[{(1 + dim - 2 #2)/(1 - dim), #1} &, array, {-1}]]
] // Flatten[#, 1] & // Interpolation;
To have a good initial point distribution, we would like to sample points so the local density is **proportional** to the local grayscale (though we don't need this precisely, as the weighted Voronoi process will smooth the distribution anyway). So, taking advantage of `ContourPlot`, we generate a few regions according to the **level sets** of `densityFunc`:
levelRegions =
Module[{ctplot, pts, polys},
ctplot =
ContourPlot[densityFunc[x, y], {x, -1, 1}, {y, -1, 1},
Contours -> 10];
{pts, polys} =
Cases[ctplot, GraphicsComplex[pts_, e_, ___] :> {pts,
         Cases[e, GraphicsGroup[r : {__Polygon}] :> r, ∞]
         }, ∞][[1]];
BoundaryDiscretizeGraphics[GraphicsComplex[pts, #]] & /@ polys
]; // AbsoluteTiming
{111.862, Null}
![level sets][18]
For each region in `levelRegions`, we sample points inside it on a regular grid, with the **area of a grid cell inversely proportional to the level counter** of the region. Notice that a regular grid can itself be a steady state of our algorithm, so to ensure an isotropic result, initial randomness is needed. For that purpose we add a dithering effect to the points, with its strength specified by a parameter $0\leq\kappa\leq 1$, where $0$ gives no dithering while $1$ gives a globally random distribution:
Clear[ditherFunc]
ditherFunc[pts_, width_, κ_: .5] :=
 pts + κ width/2 RandomReal[{-1, 1}, {Length@pts, 2}]
levelPts =
 Module[{baseGridWidth = 1/50., width, num, min = -1, max = 1, κ = 1},
MapIndexed[
Function[{region, idx},
width = baseGridWidth/Sqrt[idx[[1]]];
num = Ceiling[(max - min)/width];
Tuples[Range@num, 2] //
Rescale[#, {1, num}, {min + width/2, max - width/2}] & // N //
      ditherFunc[Select[#, RegionMember[region]], width, κ] &
],
levelRegions,
{1}]
]; // AbsoluteTiming
initPts = levelPts // Join @@ # &;
{15.3046, Null}
The total number of points we sampled is huge:
initPts // Length
63024
But their quality is poor:
Graphics[{AbsolutePointSize[0], Point@initPts}]
![Lena of random sample][19]
Now we perform the weighted Voronoi relaxation process, which is similar to the one in the uniform case, though the computation is a bit slow due to the number of points:
refinedPts = Module[{vm},
Nest[
Function[pts,
vm = VoronoiMesh[pts, {{-1, 1}, {-1, 1}}];
findCentroidW[#, densityFunc] & /@ MeshPrimitives[vm, 2]
],
initPts,
30
]
]; // AbsoluteTiming
{1106.06, Null}
Despite only 30 iterations, the result is, in my opinion, fairly good. (Note that some visual artifacts in the following image are due to the rasterizing process during *display*; try right-click and "open image in new tab".)
Graphics[{AbsolutePointSize[0], Point@refinedPts}]
![Lena of stippling][20]
The pattern of the connectivity rather interestingly forms some kind of self-similar multi-resolution tiles:
refinedPts // NearestNeighborGraph[#, 2] & //
EdgeList // # /. UndirectedEdge -> List & // Line // Graphics
![Lena of connectivity][21]
Statistics on the interior angles of the corresponding `DelaunayMesh` indicates we have indeed achieved the well-spaced distribution:
Plot[
Evaluate[MapThread[
        Legended[PDF[KernelMixtureDistribution[#1], θ],
Style[#2, 15]] &,
{
(
# // DelaunayMesh // MeshPrimitives[#, 2] & //
Function[pts,
VectorAngle[#2 - #1, #3 - #1]/Degree & @@@
Partition[pts, 3, 1, {1, 1}]] @@@ # & // Flatten
) & /@ {initPts, refinedPts},
{"initial points", "refined points"}
}]],
     {θ, 0, 180},
PlotRange -> All, GridLines -> {{60}, None}, Frame -> True,
FrameLabel -> (Style[#, Bold, 15] & /@ {"angle in °", "PDF"})
]
![interior angles statistics of Lena][22]
# 5. Artistic styles based on stippling
## 5.1 Hatching
As we mentioned in the introduction, some artistic rendering effects can be simulated based on the stippling result. One of them is hatching, which for our example will act like a pencil sketch. The lengths and orientations of the strokes are controlled by `densityFunc`, *i.e.* the local grayscale, while their positions are controlled by the stippling points. Note there are artifacts in the dark regions due to our naïve approach:
Module[{strokeLength, strokeOrient, dens, stroke},
Function[{x, y},
dens = densityFunc[x, y];
strokeLength = .03 dens^.3 (1 + .1 RandomReal[{-1, 1}]);
    strokeOrient = π/3 + π/6 (1 - (2 dens - 1)^2)^20 RandomReal[{-1, 1}];
stroke = strokeLength Through[{Cos, Sin}[strokeOrient]];
{{x, y} - stroke, {x, y} + stroke}
] @@@ refinedPts // Line // Graphics[{AbsoluteThickness[0], #}] &
]
![Lena sketch][23]
## 5.2 Pointillism
Another style is a simulation of pointillism.
For this simulation, we need to take care of the colors:
imgColorChannels = imgOrigColored // ColorSeparate[#, "RGB"] &;
densityFuncColorChannels =
Function[img,
img // Reverse[ImageData[#, "Real"]] & // Transpose //
Function[array,
Module[{dim = Dimensions@array},
MapIndexed[{(1 + dim - 2 #2)/(1 - dim), #1} &, array, {-1}]]
] // Flatten[#, 1] & // Interpolation] /@ imgColorChannels;
We add a random deviation to the color at each point:
randomColoredImg =
{
RGBColor @@ (
Through[
densityFuncColorChannels[##]] (1 + .3 RandomReal[{-1, 1},
3])
),
Point[{##}]
} & @@@ refinedPts //
Graphics[{AbsolutePointSize[0], #} // Flatten,
ImageSize -> 1000] & //
Rasterize[#, ImageSize -> 1000] & //
ImageConvolve[#, GaussianMatrix[{10, 1.3}]] & // ImageAdjust //
ImageCrop
![Lena color stippling][24]
We finalize the process with the morphological `Opening` operation and match the histogram with the original color image using `HistogramTransform`:
Opening[randomColoredImg, DiskMatrix[6]] // HistogramTransform[#, imgOrigColored] &
![Lena pointillism][25]
## 5.3 Pointillism -- another attempt
The rendering in the last section uses strokes of identical size, but in practice the stroke size often varies with local properties of the subject being drawn. So let's try to include that characteristic here.
For the special property to be reflected, we choose `ImageSaliencyFilter`, but it is of course totally fine to choose other ones:
ImageAdjust[
ImageSaliencyFilter[imgOrigColored, Method -> #]] & /@ {"Itti",
"IttiColor", "IttiIntensity", "IttiOrientation"};
coarseMask = ImageAdd @@ (Image[#, "Byte"] & /@ %) // Blur[#, 20] &
![coarseMask][26]
As with `densityFunc`, we interpolate the `coarseMask` and generate some level regions:
coarseFunc =
coarseMask // Reverse[ImageData[#, "Real"]] & // Transpose //
Function[array,
Module[{dim = Dimensions@array},
MapIndexed[{(1 + dim - 2 #2)/(1 - dim), #1} &, array, {-1}]]
] // Flatten[#, 1] & // Interpolation;
coarseLevelRegions =
Module[{ctplot, pts, polys},
ctplot =
ContourPlot[coarseFunc[x, y], {x, -1, 1}, {y, -1, 1},
Contours -> 3];
{pts, polys} =
Cases[ctplot, GraphicsComplex[pts_, e_, ___] :> {pts,
         Cases[e, GraphicsGroup[r : {__Polygon}] :> r, ∞]
         }, ∞][[1]];
BoundaryDiscretizeGraphics[GraphicsComplex[pts, #]] & /@ polys
]; // AbsoluteTiming
{11.1717, Null}
So now the stippling points can be grouped according to the `coarseLevelRegions`:
coarseLevelPts = Select[refinedPts, RegionMember[#]] & /@ coarseLevelRegions;
The stroke template is composed of squares centered at the stippling points, with their color determined and randomized in a way similar to the last section:
strokeFunc = Function[{pt, size},
GeometricTransformation[
Rectangle[{-1, -1}, {1, 1}],
RotationMatrix[
      RandomReal[π/2]].ScalingMatrix[(1 + .2 RandomReal[{-1, 1}]) size {1, 1}]
] // GeometricTransformation[#, TranslationTransform[pt]] &
];
Now we are ready to paint the picture layer by layer. NOTE THAT the following code can be very time-consuming.
coarseLayers =
Function[{pts, size},
{
FaceForm[RGBColor @@ (
Append[
(1 + .5 RandomReal[{-1, 1}, 3]) Through[
densityFuncColorChannels @@ #],
RandomReal[{.2, 1}]
]
)],
     strokeFunc[#, size]} & /@ pts //
Graphics[Flatten@{EdgeForm[], #},
PlotRange -> 1.5 {{-1, 1}, {-1, 1}}] & //
Rasterize[#, ImageSize -> 1000, Background -> None] &
] @@ # & /@ ({coarseLevelPts, {.2, .1, .05, .02}}//Transpose)
![paint layers][27]
Composing the layers, we get our final "neo-impressionism" result:
composedImg = FoldList[ImageCompose, coarseLayers[[1]], coarseLayers[[2 ;;]]]
![paint stages][28]
composedImg[[-1]] // ImageCrop
![Lena pointillism 2][29]
Note the locally vivid color blocks, especially those in the feather and hat regions, and how the rendering still preserves an accurate scene at the macro scale. :)
# 6. Epilogue
In this small write-up, we discussed a modern automated method for simulating an ancient art technique called stippling. We also demonstrated how it can be extended to more advanced artistic renderings. But readers should be reminded that there are other methods for generating well-spaced point distributions, and there are still many fields related to non-photorealistic computer graphics which are not necessarily related to it. Finally, although not covered in this article, the applications of the so-called well-spaced point distribution go way beyond the examples shown here. The idea can be generalized to higher dimensions, and its nice properties make it an important tool in many scientific fields, like re-meshing in the finite element method, sparse representation and compressed sensing.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=resultsexamples.png&userId=93201
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=comparisonbetweenrandomandstippling-non-uniformcase.png&userId=93201
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=comparisonbetweenrandomandstippling-uniformcase.png&userId=93201
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=comparisonbetweenrandomandstippling-uniformDelaunayMesh.png&userId=93201
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=interioranglesstatistics.png&userId=93201
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=comparisonamongblurredimages.png&userId=93201
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Voronoiofrandompoints.png&userId=93201
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=uniformCVD.gif&userId=93201
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Comparisonofconnectivitybetweenrandomandstippling.png&userId=93201
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=interioranglesstatistics.png&userId=93201
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=comparisonofFFTbetweenrandomandstippling.png&userId=93201
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=conformaltransformedpoints.png&userId=93201
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=DelaunayMeshofthetransPts.png&userId=93201
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=VoronoiMeshofthetransPts.png&userId=93201
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=connectivityofthetransPts.png&userId=93201
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=approxalgorithmsketch.png&userId=93201
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Lenafordensityimage.png&userId=93201
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=levelsets.png&userId=93201
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Lenaofrandomsample.png&userId=93201
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Lenaofstippling.png&userId=93201
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Lenaofconnectivity.png&userId=93201
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=interioranglesstatisticsofLena.png&userId=93201
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Lenasketch.png&userId=93201
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Lenacolorstippling.png&userId=93201
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Lenapointillism.png&userId=93201
[26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=coarseMask.png&userId=93201
[27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=paintlayers.png&userId=93201
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=paintstages.png&userId=93201
[29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Lenapointillism2.png&userId=93201Silvia Hao2015-12-16T12:59:38ZParallel Computing and Object Oriented Programming - Part 1
https://community.wolfram.com/groups/-/m/t/1830825
**See Part 2 here:** https://community.wolfram.com/groups/-/m/t/1851278
The Wolfram Language includes Parallel Computing Tools, and parallel computing is a first step toward high-performance computing. Combining the Parallel Computing Tools with Object-Oriented Programming, hereafter OOP, introduces a new view of Wolfram Language programming. The author has previously introduced an OOP system for the Wolfram Language, and now shows an example of OOP-based parallel computing.
OOP suits parallel computation well because an instance is basically an independent calculation unit, and you can see that the same code applies to both parallel and mono-kernel environments.
Part 1 concerns the calculation with one instance on one local kernel, and the method is explained with an example of Mersenne number calculation. The following are the Wolfram code steps; you can see first that the OOP is composed of basic functions.
Step 1: start up the local kernels
CloseKernels[];
LaunchKernels[];
kernelList = ParallelEvaluate[$KernelID]
Step 2: define the parallel class, using the instance-preceded method
mq[nam_] := Module[{ins, ine},
nam[set[{ns_, ne_}]] := {ins, ine} = {ns, ne};
go[] := Select[Range[ins, ine], PrimeQ[2^# - 1] &]
];
Step 3: define the object properties with an association list, assuming the number of local kernels is 4
object = {
Association["name" -> Unique[k], "range" -> {9000, 9399},
"kernel" -> kernelList[[1]]],
Association["name" -> Unique[k], "range" -> {9400, 9699},
"kernel" -> kernelList[[2]]],
Association["name" -> Unique[k], "range" -> {9700, 9899},
"kernel" -> kernelList[[3]]],
Association["name" -> Unique[k], "range" -> {9900, 10000},
"kernel" -> kernelList[[4]]
]};
Step 4: using the association list, construct the instances, then set the parameters for each instance
Map[ParallelEvaluate[mq[#name], #kernel] &, object];
Map[ParallelEvaluate[#name[set[#range]], #kernel] &, object]
Step 5: execute the parallel calculation (this case used a Mac with a 3.4 GHz Intel Core i5)
ts = SessionTime[];
ans = ParallelEvaluate[go[]];
{SessionTime[] - ts, ans}
The result is:
{10.542539, {{}, {9689}, {}, {9941}}}
To evaluate the result, we can compare with a mono-kernel computation.
mq[mono];
mono[set[{9000, 10000}]];
ts = SessionTime[];
ans = go[];
{SessionTime[] - ts, ans}
The result is:
{30.087534, {9689, 9941}}
The parallel computation with 4 local kernels is about 3 times faster than the mono-kernel calculation.
Enjoy OOP for the Wolfram Language.Hirokazu Kobayashi2019-11-25T01:21:43ZNo output in solving system of trigonometric equations
https://community.wolfram.com/groups/-/m/t/2243285
Hello everybody, I hope you are all doing well today.
I'm new to Mathematica. I have a system of equations that I want to solve using Mathematica: <br><br>
$\begin{equation}
\begin{cases}
\sin(x)\cos(\epsilon y)\cosh(\epsilon z) + \sin(\epsilon x)\cos(y)\cosh(z) - \sin(x(1 + \epsilon)) = 0 \\
\sin(x)\sin(\epsilon y)\sinh(\epsilon z) + \sin(\epsilon x)\sin(y)\sinh(z) = 0
\end{cases}
\end{equation}$ <br><br>
I'm only interested in {x, y, z} values in the domain [0, 2*Pi] <br>
$\epsilon$ is a parameter that can take any value between 1.8 and 3.
I tried solving the problem first by setting $\epsilon = 2$: I used functions like `Solve` and `NSolve`, but they didn't work; they took too much time to execute without giving any result. After a long day I found out about the `Reduce` function; I used it, it worked, and it gave me the full answer, but only for the case of $\epsilon = 2$. When I changed $\epsilon$ to 2.5 it didn't give me any answer; I left it running for about 10 hours or more without any result. <br>
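One possibility worth trying (a sketch under the assumption that exact input helps `Reduce`, not a verified fix for this system) is to state the problem with the exact rational 5/2 instead of the machine number 2.5, since `Reduce` generally behaves better with exact coefficients:

```mathematica
eps = 5/2;  (* exact rational instead of the approximate real 2.5 *)
Reduce[{Sin[x] Cos[eps y] Cosh[eps z] + Sin[eps x] Cos[y] Cosh[z] -
     Sin[(1 + eps) x] == 0,
   Sin[x] Sin[eps y] Sinh[eps z] + Sin[eps x] Sin[y] Sinh[z] == 0,
   0 <= x <= 2 Pi, 0 <= y <= 2 Pi, 0 <= z <= 2 Pi}, {x, y, z}, Reals]
```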
This is my fifth day with this problem, and I'm still stuck.
Can anyone provide me with any advice? I really appreciate your help. Thank you so much. <br><br>
This is my code for the case of $\epsilon = 2.5$:
Reduce[{
Sin[x]*Cos[2.5*y]*Cosh[2.5*z] + Sin[2.5*x]*Cos[y]*Cosh[z] -
Sin[3.5*x] == 0,
Sin[x]*Sin[2.5*y]*Sinh[2.5*z] + Sin[2.5*x]*Sin[y]*Sinh[z] == 0,
0 <= x <= 2*Pi, 0 <= y <= Pi, 0 <= z <= 2*Pi}, {x, y, z}, Reals]Mohamed El Ghafiani2021-04-13T17:04:13ZComputational genealogy with the Wolfram Language
https://community.wolfram.com/groups/-/m/t/2241480
![enter image description here][2]
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/427d4810-05a8-402f-b0de-ab997db2eac3
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2021-04-10at5.48.08PM.png&userId=228444
[Original]: https://www.wolframcloud.com/obj/rnachbar/Published/Genealogy%20With%20Wolfram%20Language.nbRobert Nachbar2021-04-10T22:07:42ZLooking for a similar function to Fold[ ]?
https://community.wolfram.com/groups/-/m/t/2241046
My current project has me creating yet another data mart using Microsoft Dynamics (a CRM solution) as the source. I have nearly 200 tables each having up to several hundred columns.
Unfortunately, the table and column names are a concatenation of words with no capitalization.
So, in order to help me build some documentation, and eventually the SQL for the data mart, I need to turn the gibberish into something the team (and our users) can easily read.
For example, I want to have a function to work like this:
strSeparateWords["fourscoreandsevenyearsago", {"four", "score",
"and", "seven", "years", "ago"}, "Capitalize" -> True]
to return:
"Four Score And Seven Years Ago"
While I was able to write such a function, I cannot help but think that there is a better (more Wolfram-like) solution.
Here is my solution:
Options[strSeparateWords] = {
"Capitalize" -> False
};
strSeparateWords[string_String, word_String, opt : OptionsPattern[]] :=
Module[
{
replacement = If[OptionValue["Capitalize"], Capitalize[word], word]
},
StringTrim@StringReplace[string, word -> " " <> replacement]
];
strSeparateWords[string_String, words : {__String},
opt : OptionsPattern[]] := Module[
{
retVal = string
},
Do[
retVal = strSeparateWords[retVal, word, opt]
,
{word, words}
];
retVal
];
I was hoping to find a function similar to Fold[] that works with single-argument functions. In other words, I want a function that works like this:
anotherFold[
f[initialValue, #] &,
{ a, b, c}
]
would return:
f[f[f[initialValue, a], b], c]
As you can see, I cannot even figure out a good name for such a function. But I think it would be useful to have a generic function like this.
I've searched the documentation and web for over an hour now. But, before giving up, I wanted to ask the Community.
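It may be worth noting that with a two-argument function, the built-in Fold produces exactly this nesting (here `f`, `a`, `b`, `c` and `initialValue` are left undefined so the structure stays visible):

```mathematica
Fold[f, initialValue, {a, b, c}]
(* f[f[f[initialValue, a], b], c] *)
```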
Thanks, and have a great weekend.Mike Besso2021-04-09T21:14:37ZBlack Hole Discussion (based on “[WSS20] Hunting for Black Holes” post)
https://community.wolfram.com/groups/-/m/t/2203331
Hey guys,
Please make sure to glance at the <https://community.wolfram.com/groups/-/m/t/2029731> project first. It looks for singularities in WMs, especially ones that persist for at least 20 steps. Some very useful functions are used there that look for the presence of singularities, filter WMs based on those criteria, and also look at the dimensionality of the system. Their conclusions reflect disappointment at not finding the change in dimensions that they assume a Schwarzschild-type BH would show.
Before I dive into physics, let me add that I ran their 21 surviving BH models for a greater number of steps as I summarize in the attached picture (I tried to do 50 steps but some proved too computationally intensive while others I was able to run for 100s of iterations).
Four more models lost their singularities (bringing the total down to 17). Here are the questions we can still answer by looking further into this:
• Which remaining models' BHs survive after 100, 200, 300, etc. steps?
• Can models reacquire singularities after losing them?
• If so, we need to map durations of BHs lifetimes and frequency of occurrence.
• Write a new function that can identify the number of singularities in a given system, as well as whether any of them are nested (a BH inside another BH).
Now for the physics…
Given the tiny number of steps that can be run on these models, we are probably looking at vacuum fluctuations on a very small scale. That makes it unlikely to observe any BHs form via gravitational collapse (not enough steps).
What are these singularities, then? To me, they look like topological BHs that have nothing to do with gravitational collapse and whose stability depends on the rewriting rules alone. Now imagine that our expanding universe forms these sub-Planck BHs that leech some of the spacetime into pocket universes. WMs show that nothing special happens in those regions and that they expand the same as everywhere else.
Our own vacuum can have a specific signature of these topological BHs. Average density and duration can not only affect our cosmological constant but also be a dark matter candidate. Moreover, one could try and match one of the WMs to our own universe based on these criteria.
Sooner or later, certain interesting WMs will need to be placed on the server cloud, with a large number of steps computed and stored, to be explored by the community.
There is much more to discuss here but it’s probably a good start.
Legend: WM = Wolfram Model | BH = Black Hole | sub-Planck = reference to Stephen's belief that these “atoms of space” are much smaller than the Planck scale
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=WPPBHSummary.png&userId=1964811Anton Spektorov2021-02-24T13:19:50ZA Preliminary Code for the Fox - H Function
https://community.wolfram.com/groups/-/m/t/57378
Hey guys, how are you doing? I hope really fine!
While doing my research on pure mathematics/statistics, I came across a beautiful function called the Fox H-function.
This function is quite important to the study of Statistics (Algebra of Random Variables) and Science in general (Fractional Partial Differential Equations, for example).
Unfortunately, Mathematica does not have this function implemented. On the other hand, it has everything you need to build a code to implement the function!
It is interesting to notice that Mathematica has the Meijer-G function implemented, which is a special case of the Fox-H function.
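As a quick illustration of that built-in special case (a sketch based on the classical identity $G^{1,0}_{0,1}(z \mid 0) = e^{-z}$; the `FullSimplify` call is just there to coax the reduction):

```mathematica
FullSimplify[MeijerG[{{}, {}}, {{0}, {}}, z]]
(* should reduce to E^-z *)
```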
In order to compute some results, I did a quick implementation of the function. In the cases I tested (large-scale tests with alpha-stable random variables of type 1 and their ratio), the code worked nicely.
Please find the code below:

    Needs["NumericalCalculus`"];
    FoxH[a_, b_, z_] :=
     Module[{SPA, SPB, IPA, IPB, T, LeftP, RightP, Poles, RadiusP, δc, μc,
       MaxPossibleResidueIncrementstoσ, σ, NRightPolesLessThanσ, W, H},
      SPA = Product[
        Gamma[1 - a[[1, j, 1]] - a[[1, j, 2]]*s], {j, 1, Length[a[[1]]]}];
      SPB = Product[
        Gamma[b[[1, j, 1]] + b[[1, j, 2]]*s], {j, 1, Length[b[[1]]]}];
      IPA = Product[
        Gamma[a[[2, j, 1]] + a[[2, j, 2]]*s], {j, 1, Length[a[[2]]]}];
      IPB = Product[
        Gamma[1 - b[[2, j, 1]] - b[[2, j, 2]]*s], {j, 1, Length[b[[2]]]}];
      T := SPA*SPB/(IPA*IPB);
      LeftP[p_] :=
       DeleteDuplicates[
        Flatten[Table[-(b[[1, j, 1]] + k)/b[[1, j, 2]],
          {j, 1, Length[b[[1]]]}, {k, 0, p}]]];
      RightP[p_] :=
       DeleteDuplicates[
        Flatten[Table[(1 - a[[1, j, 1]] + k)/a[[1, j, 2]],
          {j, 1, Length[a[[1]]]}, {k, 0, p}]]];
      δc = Product[a[[1, j, 2]]^(-a[[1, j, 2]]), {j, 1, Length[a[[1]]]}]*
        Product[a[[2, j, 2]]^(-a[[2, j, 2]]), {j, 1, Length[a[[2]]]}]*
        Product[b[[1, j, 2]]^(b[[1, j, 2]]), {j, 1, Length[b[[1]]]}]*
        Product[b[[2, j, 2]]^(b[[2, j, 2]]), {j, 1, Length[b[[2]]]}];
      μc = Sum[b[[1, j, 2]], {j, 1, Length[b[[1]]]}] +
        Sum[b[[2, j, 2]], {j, 1, Length[b[[2]]]}] -
        (Sum[a[[1, j, 2]], {j, 1, Length[a[[1]]]}] +
          Sum[a[[2, j, 2]], {j, 1, Length[a[[2]]]}]);
      Poles[p_] := Sort[DeleteDuplicates[Flatten[{LeftP[p], RightP[p]}]]];
      RadiusP[p_] :=
       Min[Table[Abs[Poles[p][[i + 1]] - Poles[p][[i]]],
          {i, 1, Length[Poles[p]] - 1}]]/2;
      MaxPossibleResidueIncrementstoσ =
       Ceiling[Re[(Max[LeftP[0]] - Min[RightP[0]])*Max[a[[1, All, 2]]]]];
      If[Max[LeftP[0]] < Min[RightP[0]],
       σ = (Max[LeftP[0]] + Min[RightP[0]])/2,
       σ = Max[LeftP[0]] + RadiusP[MaxPossibleResidueIncrementstoσ]];
      NRightPolesLessThanσ =
       Catch[Do[
         If[Length[Select[RightP[i], # < σ &]] -
            Length[Select[RightP[i + 1], # < σ &]] >= 0,
          Throw[i]], {i, 10, 1000, 10}]];
      W = Max[Im[Poles[0]]] + 50;
      If[Abs[z] >= 0.2,
       Which[μc > 0 || And[μc == 0, 0 < Abs[z] < δc],
        H[p1_] :=
         Re[(1/(2*Pi*I))*
            NIntegrate[T*z^(-s),
             {s, σ - I*W, σ + I*W, σ - p1 + I*W, σ - p1 - I*W,
              σ - I*W}]] -
          Sum[Re[NResidue[T*z^(-s), {s, r},
             Radius -> Min[0.001, RadiusP[MaxPossibleResidueIncrementstoσ]]]],
           {r, Select[RightP[NRightPolesLessThanσ], # < σ &]}];
        H[1000],
        μc < 0 || And[μc == 0, Abs[z] > δc],
        H[p1_] :=
         Re[(1/(2*Pi*I))*
            NIntegrate[T*z^(-s),
             {s, σ - I*W, σ + I*W, σ + p1 + I*W, σ + p1 - I*W,
              σ - I*W}]] -
          Sum[Re[NResidue[T*z^(-s), {s, r},
             Radius -> Min[0.001, RadiusP[MaxPossibleResidueIncrementstoσ]]]],
           {r, Select[RightP[NRightPolesLessThanσ], # < σ &]}];
        H[1000]],
       H[p1_] :=
        Re[(1/(2*Pi*I))*
           NIntegrate[T*z^(-s), {s, σ - I*p1, σ + I*p1},
            MaxRecursion -> 40, PrecisionGoal -> 15]] -
         Sum[Re[NResidue[T*z^(-s), {s, r},
            Radius -> Min[0.001, RadiusP[MaxPossibleResidueIncrementstoσ]]]],
          {r, Select[RightP[NRightPolesLessThanσ], # < σ &]}];
       H[2000]]]

The general idea of the code is presented above. I can send the notebook with the code if anyone wants it.
I basically use 3 possible contours in the complex plane to numerically evaluate the complex integral in Mathematica. Each contour is selected according to existence conditions.
The input format is similar to that of the MeijerG function. In the case of the H-function, however, each element of the sublists of input `a` is not a constant, but a list of the values {a_j, alpha_j}, according to the definition (http://www.wolframalpha.com/input/?i=fox+H+function).
You can take a deeper look at this function's theory on:
http://www.wolframalpha.com/input/?i=fox+H+function
http://en.wikipedia.org/wiki/Fox_H-function
For applications and mathematical definitions, one may check:
Mathai, A.M., Saxena, R.K. and Haubold, H.J. (2010) The H-Function: Theory and Applications, Springer, New York.
M. D. Springer (1979), The Algebra of Random Variables, John Wiley, New York.
I have used the code to write a paper about the analytical derivation and evaluation of the PDF of the ratio of two alpha-stable random variables. The paper has just been submitted for publication, but I can discuss it with anyone interested in the topic. I also have other papers about the usage of the function itself in pure and applied math (analytically solving special real-degree equations, civil engineering applications, etc.), which I would also be happy to share =)
Anyway, I guess that if you somehow get into this area, this code would be useful.
That is it guys, please let me know if you have any suggestions on the improvement of the code or any ideas on the subject!!
Best Regards
LuanLuan Ozelim2013-07-12T06:26:45ZAppending a property to a variable for use in derivations
https://community.wolfram.com/groups/-/m/t/2242983
Folks -- one of my students asked if it is possible to directly associate a mathematical property with a variable, for use in derivations involving that variable. For example, can one associate with a variable `a` the property that it is a real number greater than zero when performing integrals and the like? I know that one can make this assumption in the Integrate command and in other commands via the option Assumptions -> a > 0, but it would be useful to be able to do this at the beginning of a series of derivations that involve the variable, without constantly having to repeat the assumption at every step.
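One built-in mechanism that comes close is the global `$Assumptions` variable, which Integrate, Simplify, Refine and related functions consult by default; `Assuming` gives the same effect with block scoping. A minimal sketch:

```mathematica
$Assumptions = a > 0;  (* holds for the rest of the session *)
Integrate[Exp[-a x^2], {x, -Infinity, Infinity}]
```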
Thanks -- Dan Dubin, UCSDDan Dubin2021-04-13T21:14:41ZA question on using dynamic control objects
https://community.wolfram.com/groups/-/m/t/2237195
Reference the attached notebook (which is a toy example): I have a PopupMenu with three items (opt1). For each item in the PopupMenu there's a list of options that can be selected/deselected using a CheckboxBar (opt2). The point I'm stuck on is how to reset the list of selected options (opt2) from the CheckboxBar whenever the selected item (opt1) in the main PopupMenu changes. Basically, I want to reset the list of items in opt2 to an empty list {} whenever opt1 changes.
I could add a button to do this, but it seems unnecessary and I'd prefer not to have do so.
Any suggestions would be very much appreciated.
----------
Solved
------
Have figured out a solution - see second attached notebook (ExampleNotebook2.nb).Ian Williams2021-04-05T18:47:06ZPairwise Correlation of Financial Data
https://community.wolfram.com/groups/-/m/t/2242326
One of the regular tasks in statistical arbitrage is to compute correlations between a large universe of stocks, such as the S&P500 index members, for example.
Mathematica/WL has some very nice features for obtaining financial data and manipulating time series. And of course it offers all the commonly required statistical functions, including correlation. But the WL Correlation function is missing one vital feature - the ability to handle data series of unequal length. This arises, of course, because stock data series do not all share a common start date and (very occasionally) omit data for dates in the middle of the series. This creates an issue for the Correlation function, which can only handle series of equal length.
The usual way of handling this is to apply pairwise correlation, in which each pair of data vectors is truncated to include only the dates common to both series. Of course this can easily be done in WL; but it is very inefficient.
Let's take an example. We start with the last 10 symbols in the S&P 500 index membership:
In[1]:= tickers = Take[FinancialData["^GSPC", "Members"], -10]
Out[1]= {"NASDAQ:WYNN", "NASDAQ:XEL", "NYSE:XRX", "NASDAQ:XLNX", \
"NYSE:XYL", "NYSE:YUM", "NASDAQ:ZBRA", "NYSE:ZBH", "NASDAQ:ZION", \
"NYSE:ZTS"}
Next we obtain the returns series for these stocks, over the last several years. By default, FinancialData retrieves the data as TimeSeries Objects. This is very elegant, but slows the processing of the data, as we shall see.
tsStocks =
FinancialData[tickers, "Return",
DatePlus[Today, {-2753, "BusinessDay"}]];
Not all the series contain the same number of date-return pairs. So using Correlation is out of the question:
In[282]:= Table[Length@tsStocks[[i]]["Values"], {i, 10}]
Out[282]= {2762, 2762, 2762, 2762, 2388, 2762, 2762, 2762, 2762, 2060}
Since Correlation doesn't offer a pairwise option, we have to create the required functionality in WL. Let's start with:
PairsCorrelation[ts_] := Module[{td, correl},
If[ts[[1]]["PathLength"] == ts[[2]]["PathLength"],
correl = Correlation @@ ts,
td = TimeSeriesResample[ts, "Intersection"];
correl = Correlation @@ td[[All, All, 2]]]];
We first check whether the two series are of equal length, in which case we can Apply the Correlation function directly. If not, we use the "Intersection" option of the TimeSeriesResample function to reduce the series to a set of common observation dates. The function is designed to be deployed using parallelization, as follows:
PairsListCorrelation[tslist_] := Module[{pairs, i, td, c, correl = {}},
pairs = Subsets[Range[Length@tslist], {2}];
correl =
ParallelTable[
PairsCorrelation[tslist[[pairs[[i]]]]], {i, 1, Length@pairs}];
{correl, pairs}]
The Subsets function is used to generate a non-duplicative list of index pairs, and then a correlation table is built in parallel by using the PairsCorrelation function on each pair of series.
When we apply the function to the ten stock time series, we get the following results:
In[263]:= AbsoluteTiming[{correl, pairs} =
PairsListCorrelation[tsStocks];]
Out[263]= {13.4791, Null}
In[270]:= Length@correl
Out[270]= 45
In[284]:= Through[{Mean, Median, Min, Max}[correl]]
Out[284]= {0.381958, 0.396429, 0.200828, 0.536383}
So far, so good. But look again at the timing of the PairsListCorrelation function. It takes 13.5 seconds to calculate the 45 correlation coefficients for 10 series. To carry out an equivalent exercise for the entire S&P 500 universe would entail computing 124,750 coefficients, taking approximately 10.5 hours! This is far too slow to be practically useful in the given context.
Some speed improvement is achievable by retrieving the stock returns data in legacy (i.e. list rather than time series) format, but it still takes around 10 seconds to calculate the coefficients for our 10 stocks. Perhaps further speed improvements are possible through other means (e.g. compilation), but what is really required is a core language function to handle series of unequal length (or a Pairwise method for the Correlation function).
For comparison, I can produce the correlation coefficients for all 500 S&P member stocks in under 3 seconds using the 'Rows', 'pairwise' options of the equivalent correlation function in another scientific computing language.
----------
# UPDATE
Another Mathematica user suggested a way to speed up the pairwise correlation algorithm using associations.
We begin by downloading returns data for the S&P500 membership in legacy (i.e. list) format:
tickers = FinancialData["^GSPC", "Members"];
stockdata =
FinancialData[tickers, "Return",
DatePlus[Today, {-753, "BusinessDay"}], Method -> "Legacy"];
Then define:
PairwiseCorrelation[stockdata_] :=
Module[{assocStocks, pairs, correl},
assocStocks = Apply[Rule, stockdata, {2}] // Map[Association];
pairs = Subsets[Range@Length@assocStocks, {2}];
correl =
Map[Correlation @@ Values@KeyIntersection[assocStocks[[#]]] &,
pairs];
{correl, pairs}]
Here we are using the KeyIntersection function to identify common dates between two series, which is much faster than other methods. Accordingly:
In[317]:= AbsoluteTiming[{correl, pairs} =
PairwiseCorrelation[stockdata];]
Out[317]= {112.836, Null}
In[318]:= Length@correl
Out[318]= 127260
In[319]:= Through[{Mean, Median, Min, Max}[correl]]
Out[319]= {0.428747, 0.43533, -0.167036, 0.996379}
This is many times faster than the original algorithm and, although much slower (40x to 50x) than equivalent algorithms in other languages, gets the job done in reasonable time.
So I still think we need a Method-> "Pairwise" option for the Correlation function.Jonathan Kinlay2021-04-12T07:05:32ZApril Fool's day 2021 was near the start of π's continued fraction
https://community.wolfram.com/groups/-/m/t/2243429
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/wolfram-community/Published/APR-FOOL-GOSPERBill Gosper2021-04-13T17:00:32ZA formula for the n-th Laplacian of a Gaussian function
https://community.wolfram.com/groups/-/m/t/2242639
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/f966d372-af2b-4b74-88d7-97d40410310c
[Original]: https://www.wolframcloud.com/obj/rauan234/Published/gaussian_laplacian.nbRauan Kaldybaev2021-04-12T15:48:11ZTennis racket (Dzhanibekov) effect: torque-free rotational motion
https://community.wolfram.com/groups/-/m/t/2243140
![enter image description here][1]
*SUPPLEMENTARY WOLFRAM MATERIALS for ARTICLE:*
> Peterson Christian, and William Schwalm. 2021. "Euler's rigid rotators, Jacobi elliptic functions, and the Dzhanibekov or tennis racket effect"
> American Journal of Physics 89, 349 (2021).
> https://doi.org/10.1119/10.0003372
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=hero.gif&userId=20103
[2]: https://www.wolframcloud.com/obj/wolfram-community/Published/tennis_racket_effect3.nbChristian Peterson2021-04-13T14:48:09ZPreserve arguments' patterns while taking a Derivative?
https://community.wolfram.com/groups/-/m/t/2242146
Hello!
I've encountered a problem with loss of pattern information after using "function-producing" `Derivative` function.
Assume I have a scalar-valued function defined on vectors:
H[r_?VectorQ] := r.r
(evidently, `r` is assumed to be a three-dimensional vector, but that does not matter)
And then I would like to solve a differential equation like *dr / dt == grad(H)*. Note that I would like to keep the vectorial notation i.e. the solution must be a vector-valued function. I've tried the following:
NDSolveValue[{D[r[t], t] == Table[Derivative[xspec][H][r[t]], {xspec, IdentityMatrix[3]}],
r[0] == {1, 1, 1}}, r, {t, 0, 100}]
This returns an interpolating function, but it does not handle the derivatives of `H` properly. For example, if a term like `#1[[1]]` appears in the output of `Derivative`, it accepts `r[t]` as an argument and evaluates simply to `t`; and if there's `#1[[2]]`, it throws a message that this part does not exist. It seems that `Derivative` loses the argument check, and the returned function no longer formally requires vectorial input.
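A minimal workaround sketch (an assumption for illustration, not a general answer to the pattern question): since this particular H[r] = r.r has gradient 2r, the gradient can be written down by hand as an explicitly vector-valued function, and `gradH` here is a hypothetical helper name:

```mathematica
H[r_?VectorQ] := r.r
gradH[r_?VectorQ] := 2 r  (* gradient of r.r, computed by hand *)

(* dr/dt == grad(H) with a vector-valued right-hand side;
   the solution grows like E^(2 t), so a short time range is used *)
sol = NDSolveValue[{r'[t] == gradH[r[t]], r[0] == {1, 1, 1}},
   r, {t, 0, 1}];
sol[1]  (* a genuine 3-vector, approximately E^2 {1, 1, 1} *)
```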
So, the question is: can I somehow tell WL that this derivative is still a function of a vector?Nikolay Shilov2021-04-12T12:32:31Z[feature req] GUI-item for showing parentheses of grouping
https://community.wolfram.com/groups/-/m/t/2241774
When we write code and don't use parentheses, *Mathematica* still knows the grouping of expressions and operators, which results in a determined order of execution, because of the system's internal set of rules for priority/**precedence**. We can, and sometimes *must*, control the grouping of expressions/operators by adding parentheses in order to get the desired output. If parentheses aren't needed, we can see in <code>FullForm[]</code> that they get dropped along the way by *Mathematica*.
My idea/suggestion/feature request (for us to discuss, and for the Wolfram developers to consider) is:
How about a GUI-item, maybe in form of a button or within a context menu, which automatically **expands the selected code in-line by adding parentheses** for the recognized code structure, thus making the implicit precedence and grouping **visible**?
When we "multi-click" (double-click, triple-click, quadruple-click, quintuple-click, etc) with the left mouse-button in the **middle** of any large chunk of code, *Mathematica* automatically recognizes and **selects** the next bigger balanced compound expression ("code snippet"): this visual auto-select feature is already helpful in giving us a rough idea about the code line structure from inner to outer expressions. The context menu item "**Un/Comment**" alters the code by automatically adding $\text{(* *)}$ to the selected code. So why not have a similar context menu item for the auto-selected code snippet called "**Show Grouping**" or "**[Explicitize][1] Grouping**"?
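As a point of reference for the discussion, the implicit grouping can already be inspected textually (though not in-line, as proposed here) with FullForm; wrapping the expression in Hold keeps it unevaluated:

```mathematica
FullForm[Hold[a + b c^d]]
(* Hold[Plus[a, Times[b, Power[c, d]]]] *)
```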
My idea is powerful. I might be a beginner and always stay at that level but my [ideas][2] are great and contribute to the improvement of the *Mathematica* product, if they get implemented. I would love to hear what the Wolfram developers think of my idea and what the odds are that it gets implemented the sooner the better, thanks.
[1]: https://en.wiktionary.org/wiki/explicitize
[2]: https://community.wolfram.com/groups/-/m/t/2205811Raspi Rascal2021-04-11T11:43:57ZA more practical template for pupils and students
https://community.wolfram.com/groups/-/m/t/2205811
**Summary:** I would love the folks at Wolfram to provide an **attractive** simple modern practical user-friendly "filled" template for a pupil's or student's **workbook** where he/she can dive in right away, edit the sample entries (by overwriting) by typing up problem statements and writing down their own homework and solutions, or notes. The "epm-file" serves as a great example, Wolfram is allowed to copy/emulate it!!
**TL;DR:** Yes, *Mathematica* comes with a few "blank" templates, even one for writing a textbook. And very few professional writers were successful at employing that template to write and publish their book. But I am talking pupils here, highschoolers, students: "we" don't want to publish a pro-quality book but only need something very neat (simple, easy, fun!) which is very usable to use as workbook or solutions book.
The "template" I have been working enthusiastically with, is the notebook format by [@Paul Wellin][at0] . @wolframdevelopers Just download the [21.8MB *.nb-file][1] (let's call it 'the epm-template') and witness for yourself! This is an amazing effective beautiful attractive format to work with. It is "much better" (more practical, more user-friendly, more etc) than the blank templates which come with *Mathematica*; this must have been his reason to create that notebook structure. To create something better. And one must applaud his efforts. His example of a solutions book inspired and motivated me to write my own solutions to problems, e.g. from maths texts. There are some technical problems I am running into because only parts of that notebook structure are easily editable, the other parts require notebook programming knowledge .. and were never meant for the EPM-user to be edited (e.g. the drop-down menus for subchapters). Even with the author's help, I can't figure out some technicals.
That's why I am sharing this idea in public, for the Wolfram developers to see. If Paul can build such an amazing ebook structure, why can't/don't the Wolfram team do it and also include the documentation or a how-to-use-this-template YouTube video tutorial? I showed the epm-template and my work with it to friends, schoolers, and others, and they feel inspired to do the same with their (say, maths) homework! Then I must admit to them that "first, you'll need a Raspberry Pi; second, the epm-template comes with some technical restrictions, e.g. chapter numbering doesn't go higher than chapter 10", and then they ask the same question: why doesn't Wolfram, the maker of *Mathematica*, offer such an attractive, readily usable notebook structure for pupils and high schoolers to fill in their homework and problem solutions?
The following comment is imho not too far-fetched: if there were such a file (very similar to the epm-template) and it became popular on the internet, it could in turn even popularize *Mathematica* itself! Youtubers (incl. high schoolers, students) could make viral videos of how they write down their homework with the help of *Mathematica* instead of M$ Word or paper and pencil. And the spark would be really that fantastic easy-to-edit notebook structure! — At least that worked for me! I couldn't get bothered by the built-in blank templates, I felt appalled, sorry to say; they might work (like a spark) for some, but totally didn't for me. The spark came only when I saw Paul's creation for the very first time... and all of a sudden, the impact: I was hooked!
So @wolframdevelopers, why not put some efforts into a notebook structure creation similar to Paul's? — And this has to be Wolfram's task, not the end-user's, imho. As an end-user, I want to just use what's handed to me, and not build something even better, BEFORE I can finally start writing down my homework and textbook problem solutions.
Templates have to come from Wolfram. It's their job. The template should be built-in and called ''**New >> Styled Notebook... >> Stylesheets: Solutions Book**". In the meantime I am trying to figure out how to exploit more of the epm-template.
Point being, somebody at Wolfram should care. Paul cared, so he put his Wolfram L knowledge to practice and build that wonderful epm notebook. That notebook structure, in turn, inspired *me* to become productive with Mathematica, what a spark! And it inspires others whom i demonstrate it to. I obviously care, you can feel it with this lengthy post. Paul doesn't get paid for making his creation more accessible/usable to non-EPM readers or for writing up a documentation file how to edit/manipulate/expand the structure step-by-step (so that a high schooler could follow) or even shoot a tutorial video "How to use the epm-template for doing your (maths) homework". It's not his job. But, as I understood very well, anyone is allowed to edit and learn from the file structure, and re-use it.
I think that Paul did enough, kudos. Now it's time for Wolfram to learn from him/it and make it more public and accessible to all current users and prospective Wolfram users.
If nobody at Wolfram cares about Paul's wonderful creation, or the idea presented here (see summary), then .. such is life (and Wolfram is missing out on a simple yet effective way of popularizing their main product, at least among young students), and I will accept it, even though I wouldn't understand. I can just wish everyone good luck, is all. It is not *my* job (and shouldn't be one of my concerns) to market and popularize *Mathematica*, even though I care about its non-popularity; if a difficult application is not popular, then it's difficult to get new users, youngsters, on board. Free on the Raspberry Pi was just the first step (and it appears that it did **not** have the wished-for effect of popularization, unfortunately)! Kids choose 'the popular girl', the easy-to-use, user-friendly, GUI-driven application which so many people and books talk about (and that is not our Mathematica, we all know it). I know that my idea is great. It worked on me. Paul's file. If Wolfram doesn't pick it up, at least I got this topic off my chest for the world to see and learn from. At least the marketing team should pick it up, study what's so great about the file (the effect on the user), and seriously consider it.
[at0]: https://community.wolfram.com/web/paulwellin
[1]: https://www.programmingmathematica.com/exercises-and-solutions1.html
Raspi Rascal, 2021-02-27T18:10:14Z

Aligning the horizontal axis of a combination chart
https://community.wolfram.com/groups/-/m/t/2240344
I have some data from which I need to draw a ListLinePlot and a BarChart separately and then combine them.
My question is how to combine them so that their horizontal axes align correctly.
data = Table[{RandomInteger[{1, 200}], RandomReal[{0.3, 0.8}], i}, {i,
0.3, 30, 0.3}];
Column[{ListLinePlot[data[[All, {3, 2}]], PlotRange -> All,
ImageSize -> Medium],
BarChart[data[[All, 1]], ImageSize -> Medium, AspectRatio -> 1/8]},
Alignment -> Center]
![enter image description here][1]
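A common way to line the axes up (a sketch of one approach, not the only one; the padding and size values are guesses to tune) is to give both plots the same explicit ImageSize and ImagePadding, so their plot areas occupy the same horizontal span:

```mathematica
(* same padding {{left, right}, {bottom, top}} forces both plot areas to align *)
pad = {{60, 10}, {30, 5}};
Column[{
  ListLinePlot[data[[All, {3, 2}]], PlotRange -> All,
   ImageSize -> 400, ImagePadding -> pad],
  BarChart[data[[All, 1]], ImageSize -> 400, AspectRatio -> 1/8,
   ImagePadding -> pad]
  }]
```

Since both plots then start and end at the same pixel positions, the bars sit directly under the corresponding line-plot points (both series have the same number of entries here).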
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=plot.jpg&userId=1170351
Tsai Ming-Chou, 2021-04-08T20:00:50Z

Mathematica and logic
https://community.wolfram.com/groups/-/m/t/2242679
Do logicians use Mathematica? For instance, for proof trees in sequent calculus. General answers are welcome.
Stephan Spahn, 2021-04-12T19:39:01Z

Kernel quits while using DSolve[ ]?
https://community.wolfram.com/groups/-/m/t/2241599
Why does my DSolve command not solve the nonlinear ODE?
Clear["Global`*"]
(*Parameters*)
length = 1.5;
width = 0.25;
height = 0.25;
area = width*height;
cforce = -400;
dforce = (500*(X1^2));
stress = 10*displacement'[X1] + 100000*displacement'[X1]^3;
DE1 = D[stress*area, X1] + dforce;
DE2 = 10*displacement'[X1] + 100000*displacement'[X1]^3 + 6400;
(* Solving for the Exact Displacement Function *)
solution =
DSolve[{DE1 == 0, displacement[0] == 0, DE2 == 0 /. X1 -> length},
displacement[X1], X1];
displacement = displacement[X1] /. solution[[1]];
Print["The Exact Displacement Function of the Bar is"]
Print[displacement];
Print[Plot[{displacement}, {X1, 0, 1.5}, AspectRatio -> 0.4,
AxesLabel -> {"Bar Length (m)", "Axial Displacement (m)"},
PlotLegends -> {"Exact Displacement of the Bar"}]];
Print[" "]
Aslam Kittur, 2021-04-10T18:10:30Z

NIntegrate command does not converge
https://community.wolfram.com/groups/-/m/t/2241579
sol3 = NIntegrate[IntFunc, {X1, 0, 3}, {X2, 0, 2}];
![The solution yields a 6 x 6 matrix solved for 2 variables X1 & X2][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=NIntegrate.PNG&userId=2213110
Aslam Kittur, 2021-04-10T14:53:18Z

No output from Solve[ ]?
https://community.wolfram.com/groups/-/m/t/2241678
Could someone please look at this and tell me what's wrong? It's probably a simple mistake. I think this system of equations should return {{pd -> -3}, {pd -> 10}}. At least, that's what I get when I use a pencil and paper. But my input below doesn't return anything. Not even an error message.
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/9a795edc-6893-4a6d-9925-c543fd8bbafc
Jay Gourley, 2021-04-12T05:45:56Z

How I calculated the digits of the MKB constant
https://community.wolfram.com/groups/-/m/t/1323951
March 12, 2015
--------------
What about records of computing the integral analog of the MRB constant? (I call it the MKB constant.) See Google Scholar [MKB constant][1].
Richard Mathar did a lot of work on it [here][4], where M is the MRB constant and M1 is the MKB constant:
![enter image description here][2]
M1 (MKB) can be written as an integral of a power of ***e***:
![enter image description here][3]
I've gotten Mathematica to compute 125 digits. However, they have not yet been proven correct!
They are
0.68765236892769436980931240936544016493963738490362254179507101010743\
366253478493706862729824049846818873192933433546612328629
First we compute the real part as far as Mathematica will allow.
a1 = NIntegrate[Cos[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 100]
0.07077603931152880353952802183028200136575469620336299759658471973672\
987938741600037745028756981434374
a2 = NIntegrate[Cos[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 120]
a2 - a1
0.07077603931152880353952802183028200136575469620336302758317278266053\
31986618615110244568060496758380620699811570793175408
2.998658806292380331927444551064700651847986149432*10^-53
a3 = NIntegrate[Cos[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 150]
a3 - a2
0.07077603931152880353952802183028200136575469620336302758317278816361\
8457264385970709799491401005081151056924116255307801983594127144525095\
5653544005192
5.5030852586025244596853426853513292430889869429591759902612*10^-63
a4 = NIntegrate[Cos[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 200]
a4 - a3
0.07077603931152880353952802183028200136575469620336302758317278816361\
8457264382036580831881266177238210031756216795402920795214039271485948\
634659563768084109747493815003439875479076850383786911941519465
-3.9341289676101348278429410251678994599048811883800878730391469306948\
367511*10^-78
a5 = NIntegrate[Cos[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 250]
a5 - a4
0.07077603931152880353952802183028200136575469620336302758317278816361\
8457264382036580831881266177238209440733969109717926999044694539086929\
3857095687266500964737783523859835124762555195276023702167529617039725\
7261177753806842756198742365511173334813888
-5.9102224768568499379616934473239901924894999504143401327371546261745\
6363002821330856184541724766503*10^-103
Next we compute the imaginary part to the same precision.
b1 = NIntegrate[I Sin[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 100] - I/Pi
0.*10^-117 -
0.6840003894379321291827444599926611267109914826550016181302726087470\
544306934833279937664708191960468 I
b2 = NIntegrate[I Sin[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 120] - I/Pi
b2 - b1
0.*10^-137 -
0.6840003894379321291827444599926611267109914826549994343226304054256\
46767722886537984405858512438464223325361496951820797 I
0.*10^-117 +
2.1838076422033214076629705967900093606123067575826*10^-51 I
b3 = NIntegrate[I Sin[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 150] - I/Pi
b3 - b2
0.*10^-167 -
0.6840003894379321291827444599926611267109914826549994343226303771381\
5305812568206208637713014270949108628424796532117557865488349026349505\
4352728287677 I
0.*10^-137 +
2.8287493709597204475898028728369728973137041113531630645218*10^-62 I
b4 = NIntegrate[I Sin[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 200] - I/Pi
b4 - b3
0.*10^-218 -
0.6840003894379321291827444599926611267109914826549994343226303771381\
5305812497663815095983421272147867735031056071869477552727290571462108\
208123698276619850397331432861469605963724235550107655309644965 I
0.*10^-167 +
7.0542393541729592998801240893393740460248080312761058454887397227149\
1304910*10^-76 I
b5 = NIntegrate[I Sin[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 250] - I/Pi
b5 - b4
0.*10^-268 -
0.6840003894379321291827444599926611267109914826549994343226303771381\
5305812497663815095983421272147867223796451609148860995867828496814126\
9810848570299802270095261060286697622600207986034863822997401942304753\
4951409792726050072747412751162199808963072 I
0.*10^-218 +
5.1123460446272061655685946207464798122703884124663962338780532683279\
9843703703436946621273009904771*10^-102 I
b6 = NIntegrate[I Sin[x Pi] x^(1/x), {x, 1, Infinity},
WorkingPrecision -> 300] - I/Pi
b6 - b5
0.*10^-318 -
0.6840003894379321291827444599926611267109914826549994343226303771381\
5305812497663815095983421272147867223796451609148860995867804988314557\
9408739051911924508290758754789975176921766748245229306743723292030351\
1357229649514450909272015113199881208930542548540913212596310791355732\
04151474091653439098975 I
0.*10^-268 +
2.3508499569040210951838787776180450230549672244567844123778963451625\
36786502744023594180143211599163475397637962318600032530*10^-127 I
Notice that WorkingPrecision -> 100 gave 51 consistent (correct) digits, WorkingPrecision -> 120 gave 62 correct digits, WorkingPrecision -> 150 gave 76 correct digits, and WorkingPrecision -> 200 gave 102 correct digits, so it is not too much of a stretch to believe that WorkingPrecision -> 250 gave 125 correct digits.
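That digit-agreement count can be automated. Here is a tiny helper of my own (not part of the original computation) that estimates how many leading digits two approximations share from the size of their relative difference:

```mathematica
(* number of leading digits on which x and y agree; assumes x != y and x != 0 *)
agreedDigits[x_, y_] := Floor[-Log10[Abs[(x - y)/x]]]
```

With the values above, `agreedDigits[a2, a1]` should come out near 51, matching the count quoted in the text.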
In[78]:= c = N[Abs[a5 + b5], 125]
Out[78]= 0.\
6876523689276943698093124093654401649396373849036225417950710101074336\
6253478493706862729824049846818873192933433546612328629
April 18, 2015
--------------
Going back to the **integral analog of the MRB constant**
![enter image description here][5]:
Using formula 5 on page 3 of http://arxiv.org/pdf/0912.3844v3.pdf
![enter image description here][6]
We can compute a great many digits of the **integral analog of the MRB constant** (I once called it the MKB constant, named after Marsha Kell-Burns, my now ex-wife). In the paper Mathar simply calls it M1.
**Until further notice in this post, when we compute the imaginary part of M1, we will be concerned with the imaginary part's absolute value only.**
This time we will compute the imaginary part first, to at least 500 digits:
a[1] = 0; For[n = 1, n < 11,
a[n] = N[2/Pi -
1/Pi*NIntegrate[
Cos[Pi*x]*x^(1/x)*(1 - Log[x])/x^2, {x, 1, Infinity},
WorkingPrecision -> 100*n], 50 n]; Print[a[n] - a[n - 1]],
n++]; Print[a[11]]
giving
0.6840003894379321291827444599926611267109914826549994343226303771381530581249766381509598342127214787
0.*10^-101
0.*10^-151
0.*10^-201
0.*10^-251
0.*10^-301
0.*10^-351
0.*10^-401
0.*10^-451
0.*10^-501
0.6840003894379321291827444599926611267109914826549994343226303771381530581249766381509598342127214786722379645160914886099586780498831455794087390519118879988351918366211827085883779918191195794251385436100844782462528597869421390620796113023053439642582325892202911183326091512210367124716901047132601108752764946385830438156754378694878046808312868541961166205744280461776232345922905313658259576212809654022016030244583148587352474339130505540080799774619683572540292971258866450201101870835703060314349396491402064932644813564545345219868887520120
Likewise the real part:
b[1] = 0; For[n = 1, n < 11,
b[n] = N[-1/Pi*
NIntegrate[Sin[Pi*x]*x^(1/x)*(1 - Log[x])/x^2, {x, 1, Infinity},
WorkingPrecision -> 100*n], 50 n]; Print[b[n] - b[n - 1]],
n++]; Print[b[11]]
giving
0.07077603931152880353952802183028200136575469620336302758317278816361845726438203658083188126617723821
0.*10^-102
0.*10^-152
0.*10^-202
0.*10^-252
0.*10^-302
0.*10^-352
0.*10^-402
0.*10^-452
0.*10^-502
0.07077603931152880353952802183028200136575469620336302758317278816361845726438203658083188126617723820944073396910971792699904464538475364292258443860652193330471222906120205483985764336623434898438270710499897053952312269178485299032185072743545220051257328105422174249313177670295863771714489658779291185716175115405623656039914848817528200250723061535734571065031458992196831648681239079549382556509741967588147362548743205919028695774572411439927516593391029992733107982746794845130889328251307263102570083031527430861023428334369104098217022622689
Then the magnitude:
N[Sqrt[a[11]^2 + b[11]^2], 500]
giving
0.68765236892769436980931240936544016493963738490362254179507101010743\
3662534784937068627298240498468188731929334335466123286287665409457565\
9577211580255650416284625143925097120589697986500952590195706813170472\
5387265069668971286335322245474865156721299946377659227025219748069576\
0895993932096027520027641920489863095279507385793449828250341732295653\
3809181101532087948181335825805498812728097520936901677028741356923292\
2644964771090329726483682930417491673753430878118054062296678424687465\
624513174205
That checks with the 200 digits computed by the quadosc command in mpmath by FelisPhasma at https://github.com/FelisPhasma/MKB-Constant. The function is defined here: http://mpmath.googlecode.com/svn/trunk/doc/build/calculus/integration.html#oscillatory-quadrature-quadosc
![enter image description here][7]
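Another possible cross-check, staying inside Mathematica (my suggestion, not part of the original post), is to re-evaluate the same oscillatory integral with NIntegrate's Levin-type rule, which is designed for oscillatory integrands, and compare digits with the values above:

```mathematica
(* independent evaluation of the real-part integral with an oscillatory-specific rule;
   the precision settings are a starting point and may need tuning *)
NIntegrate[Cos[x Pi] x^(1/x), {x, 1, Infinity},
 Method -> "LevinRule", WorkingPrecision -> 100]
```

Agreement between two genuinely different quadrature methods is stronger evidence than agreement between runs of the same method at increasing WorkingPrecision.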
**P.S.**
I just now finished 750 digits (about the max with formula 5 from the paper, as far as Mathematica is concerned).
Here is the work:
a[1] = 0; For[n = 1, n < 16,
a[n] = N[2/Pi -
1/Pi*NIntegrate[
Cos[Pi*x]*x^(1/x)*(1 - Log[x])/x^2, {x, 1, Infinity},
WorkingPrecision -> 100*n], 50 n]; Print[a[n] - a[n - 1]],
n++]; Print[a[16]];
b[1] = 0; For[n = 1, n < 16,
b[n] = N[-1/Pi*
NIntegrate[Sin[Pi*x]*x^(1/x)*(1 - Log[x])/x^2, {x, 1, Infinity},
WorkingPrecision -> 100*n], 50 n]; Print[b[n] - b[n - 1]],
n++]; Print[b[16]]; Print[N[Sqrt[a[16]^2 + b[16]^2], 750]]
0.6840003894379321291827444599926611267109914826549994343226303771381530581249766381509598342127214787
0.*10^-101
0.*10^-151
0.*10^-201
0.*10^-251
0.*10^-301
0.*10^-351
0.*10^-401
0.*10^-451
0.*10^-501
0.*10^-551
0.*10^-601
3.*10^-650
-4.*10^-700
-2.6*10^-749
0.68400038943793212918274445999266112671099148265499943432263037713815\
3058124976638150959834212721478672237964516091488609958678049883145579\
4087390519118879988351918366211827085883779918191195794251385436100844\
7824625285978694213906207961130230534396425823258922029111833260915122\
1036712471690104713260110875276494638583043815675437869487804680831286\
8541961166205744280461776232345922905313658259576212809654022016030244\
5831485873524743391305055400807997746196835725402929712588664502011018\
7083570306031434939649140206493264481356454534521986888752011950353818\
1776359577265099302389566135475579468144849763261779452665955246258699\
8679271659049208654746533234375478909962633090080006358213908728990850\
5026759549928935029206442637425786005036048098598304092996753145589012\
64547453361707037686708654522699
0.07077603931152880353952802183028200136575469620336302758317278816361845726438203658083188126617723821
0.*10^-102
0.*10^-152
0.*10^-202
0.*10^-252
0.*10^-302
0.*10^-352
0.*10^-402
0.*10^-452
0.*10^-502
2.*10^-551
-1.*10^-600
1.8*10^-650
1.27*10^-699
4.34*10^-749
0.07077603931152880353952802183028200136575469620336302758317278816361\
8457264382036580831881266177238209440733969109717926999044645384753642\
9225844386065219333047122290612020548398576433662343489843827071049989\
7053952312269178485299032185072743545220051257328105422174249313177670\
2958637717144896587792911857161751154056236560399148488175282002507230\
6153573457106503145899219683164868123907954938255650974196758814736254\
8743205919028695774572411439927516593391029992733107982746794845130889\
3282513072631025700830315274308610234283343691040982170226226904594029\
7055093272952022662549075225941956559080574835998923469310063614655255\
0629713179601483134045038416878054929072981851045829413286377842843667\
5378730394247519728064887287780998671021887797977772522419765594172569\
277490031071938177749184834961300
0.687652368927694369809312409365440164939637384903622541795071010107433662534784937068627298240498468188731929334335466123286287665409457565957721158025565041628462514392509712058969798650095259019570681317047253872650696689712863353222454748651567212999463776592270252197480695760895993932096027520027641920489863095279507385793449828250341732295653380918110153208794818133582580549881272809752093690167702874135692329226449647710903297264836829304174916737534308781180540622966784246874656245131742049004832216427665542900559350289936114782223424261285828326467186036500189315374147638489679365569122714398706519530651330568884655048857998738535162606116788633540389660052822237449082894798620397228331715198160243676576563833057235963591510865254600
Using formula 7 from http://arxiv.org/pdf/0912.3844v3.pdf,
![enter image description here][8]
(Treating it as we did formula 5.)
First, the imaginary part to at least 1000 digits:
a[1] = 0; For[n = 1, n < 21,
a[n] = N[2/Pi +
1/Pi^2 NIntegrate[
Sin[x Pi] x^(1/x) (1 - 3 x + 2 (x - 1) Log[x] + Log[x]^2)/
x^4, {x, 1, Infinity}, WorkingPrecision -> 100 n], 50 n];
Print[a[n] - a[n - 1]], n++]; Print[a[21]]
0.6840003894379321291827444599926611267109914826549994343226303771381530581249766381509598342127214787
0.*10^-101
0.*10^-151
0.*10^-201
0.*10^-251
0.*10^-301
0.*10^-351
0.*10^-401
0.*10^-451
0.*10^-501
0.*10^-551
0.*10^-601
0.*10^-651
0.*10^-701
0.*10^-751
0.*10^-801
0.*10^-851
0.*10^-901
-2.*10^-950
5.*10^-1000
0.684000389437932129182744459992661126710991482654999434322630377138153058124976638150959834212721478672237964516091488609958678049883145579408739051911887998835191836621182708588377991819119579425138543610084478246252859786942139062079611302305343964258232589220291118332609151221036712471690104713260110875276494638583043815675437869487804680831286854196116620574428046177623234592290531365825957621280965402201603024458314858735247433913050554008079977461968357254029297125886645020110187083570306031434939649140206493264481356454534521986888752011950353818177635957726509930238956613547557946814484976326177945266595524625869986792716590492086547465332343754789099626330900800063582139087289908505026759549928935029206442637425786005036048098598304092996753145589012645474533617070376867086545228223060940434935219252885333298390272342234952870883304116640409421452765284609364941205344122569781634782508368641126766528707019957340895061936246645065753101916781254557006989818409283317145837167345971516970849116096077030635788389165381066055992688
Then the real part to at least 1000 digits:
b[1] = 0; For[n = 1, n < 21,
b[n] = N[1/Pi^2 -
1/Pi^2 NIntegrate[
Cos[Pi x] x^(1/x) (1 - 3 x + 2 (x - 1) Log[x] + Log[x]^2)/
x^4, {x, 1, Infinity}, WorkingPrecision -> 100 n], 50 n];
Print[b[n] - b[n - 1]], n++]; Print[b[21]]
0.07077603931152880353952802183028200136575469620336302758317278816361845726438203658083188126617723821
0.*10^-102
0.*10^-152
0.*10^-202
0.*10^-252
0.*10^-302
0.*10^-352
0.*10^-402
0.*10^-452
0.*10^-502
0.*10^-552
0.*10^-602
0.*10^-652
0.*10^-702
0.*10^-752
0.*10^-802
0.*10^-852
-3.*10^-901
8.*10^-951
-4.6*10^-1000
0.0707760393115288035395280218302820013657546962033630275831727881636184572643820365808318812661772382094407339691097179269990446453847536429225844386065219333047122290612020548398576433662343489843827071049989705395231226917848529903218507274354522005125732810542217424931317767029586377171448965877929118571617511540562365603991484881752820025072306153573457106503145899219683164868123907954938255650974196758814736254874320591902869577457241143992751659339102999273310798274679484513088932825130726310257008303152743086102342833436910409821702262269045940297055093272952022662549075225941956559080574835998923469310063614655255062971317960148313404503841687805492907298185104582941328637784284366753787303942475197280648872877809986710218877979777725224197655941725692774900310719381777491848349627938468198411955193898347075098152638657614980900350262780319142430252921925131515239611841070722530473939496294305264627977744876814858325335947117076721493110160508928494597906728688873533031986215124467678736429981544321187124269147141804397293341613
Then the magnitude:
In[97]:= N[Sqrt[a[21]^2 + b[21]^2], 1000]
Out[97]= 0.\
6876523689276943698093124093654401649396373849036225417950710101074336\
6253478493706862729824049846818873192933433546612328628766540945756595\
7721158025565041628462514392509712058969798650095259019570681317047253\
8726506966897128633532224547486515672129994637765922702521974806957608\
9599393209602752002764192048986309527950738579344982825034173229565338\
0918110153208794818133582580549881272809752093690167702874135692329226\
4496477109032972648368293041749167375343087811805406229667842468746562\
4513174204900483221642766554290055935028993611478222342426128582832646\
7186036500189315374147638489679365569122714398706519530651330568884655\
0488579987385351626061167886335403896600528222374490828947986203972283\
3171519816024367657656383305723596359151086525460036387486837632622334\
2987257095524637683005910353149353985736118868884201748241906260834981\
7303422370398413326428269921074045506558966667483453656748906071577744\
4147548424388220133662816274116986724576330176058912438027319979840883\
05950589130911719199
**PPS.**
I just now finished a 1500-digit computation of the integral analog of the MRB constant, but I don't have any way of checking it other than to see that it confirms the smaller computations, which it does.
In[99]:= aa =
N[2/Pi + 1/Pi^2 NIntegrate[
Sin[x Pi] x^(1/x) (1 - 3 x + 2 (x - 1) Log[x] + Log[x]^2)/
x^4, {x, 1, Infinity}, WorkingPrecision -> 3000], 1500]
Out[99]= 0.\
6840003894379321291827444599926611267109914826549994343226303771381530\
5812497663815095983421272147867223796451609148860995867804988314557940\
8739051911887998835191836621182708588377991819119579425138543610084478\
2462528597869421390620796113023053439642582325892202911183326091512210\
3671247169010471326011087527649463858304381567543786948780468083128685\
4196116620574428046177623234592290531365825957621280965402201603024458\
3148587352474339130505540080799774619683572540292971258866450201101870\
8357030603143493964914020649326448135645453452198688875201195035381817\
7635957726509930238956613547557946814484976326177945266595524625869986\
7927165904920865474653323437547890996263309008000635821390872899085050\
2675954992893502920644263742578600503604809859830409299675314558901264\
5474533617070376867086545228223060940434935219252885333298390272342234\
9528708833041166404094214527652846093649412053441225697816347825083686\
4112676652870701995734089506193624664506575310191678125455700698981840\
9283317145837167345971516970849116096077030635788389165381066055992708\
4284702473154303800276803908560080204997803241058414188902018357202062\
9532415382916822796942734253441520784640814155687968986766443021927163\
6249354786973717955004441549085673392105556692081075647388204227896978\
1483978754685921758294318270385312597177598977912650715548994562461701\
1553879109152932039370312241134127950112036269188660519350584627066913\
4925878278209048717316088629321353274101519307401594635990058104175474\
300641475776727955287474213040
In[98]:= bb =
N[1/Pi^2 -
1/Pi^2 NIntegrate[
Cos[Pi x] x^(1/x) (1 - 3 x + 2 (x - 1) Log[x] + Log[x]^2)/
x^4, {x, 1, Infinity}, WorkingPrecision -> 3000], 1500]
Out[98]= 0.\
0707760393115288035395280218302820013657546962033630275831727881636184\
5726438203658083188126617723820944073396910971792699904464538475364292\
2584438606521933304712229061202054839857643366234348984382707104998970\
5395231226917848529903218507274354522005125732810542217424931317767029\
5863771714489658779291185716175115405623656039914848817528200250723061\
5357345710650314589921968316486812390795493825565097419675881473625487\
4320591902869577457241143992751659339102999273310798274679484513088932\
8251307263102570083031527430861023428334369104098217022622690459402970\
5509327295202266254907522594195655908057483599892346931006361465525506\
2971317960148313404503841687805492907298185104582941328637784284366753\
7873039424751972806488728778099867102188779797777252241976559417256927\
7490031071938177749184834962793846819841195519389834707509815263865761\
4980900350262780319142430252921925131515239611841070722530473939496294\
3052646279777448768148583253359471170767214931101605089284945979067286\
8887353303198621512446767873642998154432118712426914714180439729334146\
8345902382977472975053271988386946291215512340931334841526712825988330\
6521193975174379922254198045615178994412133135553490942451521573377205\
4086429300485891441696490339106907723915822537813700713422515725943626\
7756749980892097547020923938358076198570370106085596863039832425037481\
4946826330552459256977035009973219582010379262683780372730214991685800\
3676611833579648850161974289307066295385292264148146789532534018500663\
1153014589399140567464592864024
In[109]:= c1500 = Sqrt[aa^2 + bb^2]
Out[109]= \
0.68765236892769436980931240936544016493963738490362254179507101010743\
3662534784937068627298240498468188731929334335466123286287665409457565\
9577211580255650416284625143925097120589697986500952590195706813170472\
5387265069668971286335322245474865156721299946377659227025219748069576\
0895993932096027520027641920489863095279507385793449828250341732295653\
3809181101532087948181335825805498812728097520936901677028741356923292\
2644964771090329726483682930417491673753430878118054062296678424687465\
6245131742049004832216427665542900559350289936114782223424261285828326\
4671860365001893153741476384896793655691227143987065195306513305688846\
5504885799873853516260611678863354038966005282223744908289479862039722\
8331715198160243676576563833057235963591510865254600363874868376326223\
3429872570955246376830059103531493539857361188688842017482419062608349\
8173034223703984133264282699210740455065589666674834536567489060715777\
4441475484243882201336628162741169867245763301760589124380273199798408\
8305950589130911719198776146941477264898934365742508503405073273852990\
3546587114217499635584514475429656959327732862489935076490012861232249\
2446704232200904844779690044774489466704342791971033325818579375177198\
9865742583276770011926585495711579480114327818546199372349313180236079\
1389248808154759564302727311223193005229640892474022665093207969297797\
9723087954832182561714039165214592519432072341006090867558444590500046\
6707963346545638317950978935794173691635274461184852166407791838662429\
40408834876470623546535579027725
Mathar gives a simple scheme for finding better formulas at http://arxiv.org/pdf/0912.3844v3.pdf. I could use some help in programming it (I keep getting erroneous results!). Does anyone get the right results here?
![enter image description here][9]
Below, where the upper limit of the following integrals shows Infinity, it is meant to be the ultraviolet limit of the sequence, as mentioned by Mathar here:
![enter image description here][10]
**Until further notice in this post, when we compute the imaginary part of M1, we will be concerned with the imaginary part's absolute value only.**
I derived a new formula for computing the integral analog of the MRB constant:
f[x_] := x^(1/x);
-((2 I)/Pi^3) + 1/Pi^2 - (2 I)/Pi + (I/Pi)^3*
  Integrate[(-1)^x*D[f[x], {x, 3}], {x, 1, Infinity}]
In traditional form that is M1=
![enter image description here][11]
Using it I computed 2000 digits in only 10.8 minutes:
In[131]:= Timing[f[x_] = x^(1/x);
a = N[1/\[Pi]^2 + (1/Pi)^3*
NIntegrate[Sin[Pi*x]*D[f[x], {x, 3}], {x, 1, Infinity},
WorkingPrecision -> 4000], 2000];
b = N[2/\[Pi]^3 +
2/\[Pi] + (1/Pi)^3*
NIntegrate[Cos[Pi x]*D[f[x], {x, 3}], {x, 1, Infinity},
WorkingPrecision -> 4000], 2000];
Print[N[Sqrt[a^2 + b^2], 2000]]]
During evaluation of In[131]:= 0.68765236892769436980931240936544016493963738490362254179507101010743366253478493706862729824049846818873192933433546612328628766540945756595772115802556504162846251439250971205896979865009525901957068131704725387265069668971286335322245474865156721299946377659227025219748069576089599393209602752002764192048986309527950738579344982825034173229565338091811015320879481813358258054988127280975209369016770287413569232922644964771090329726483682930417491673753430878118054062296678424687465624513174204900483221642766554290055935028993611478222342426128582832646718603650018931537414763848967936556912271439870651953065133056888465504885799873853516260611678863354038966005282223744908289479862039722833171519816024367657656383305723596359151086525460036387486837632622334298725709552463768300591035314935398573611886888420174824190626083498173034223703984133264282699210740455065589666674834536567489060715777444147548424388220133662816274116986724576330176058912438027319979840883059505891309117191987761469414772648989343657425085034050732738529903546587114217499635584514475429656959327732862489935076490012861232249244670423220090484477969004477448946670434279197103332581857937517719898657425832767700119265854957115794801143278185461993723493131802360791389248808154759564302727311223193005229640892474022665093207969297797972308795483218256171403916521459251943207234100609086755844459050004667079633465456383179509789357941736916352744611848521664077918386624294040883487647062354653558109265769644276994369741555722263494599492834558291937955573706480722982389806312472239746286527176248883116124285469947303667188075506826507811479428582807366599407544908560990699866167233307144245764835741501174979679166078765231145175411199825822532170091858833628202128777966026600647843068442894310401343003939117236867245656732686719139206716028255819141802331701942027248337771633882445225049334329008827371320849006472846226868011129149192754883153995560921671208059
671732704499253517327447921147157
Out[131]= {653.145, Null}
I am presently computing 10,000 digits using that formula. Come back here for results!
That formula didn't work out; I will try one of the following formulas.
Here are two more advanced formulas; remember f(x) = x^(1/x):
![enter image description here][12]
I did finish a 5,000 digit computation using M1=
![enter image description here][13]
in 48.11 minutes.
Here are the 5000 digits of the magnitude:
0.687652368927694369809312409365440164939637384903622541795071010107433662534784937068627298240498468188731929334335466123286287665409457565957721158025565041628462514392509712058969798650095259019570681317047253872650696689712863353222454748651567212999463776592270252197480695760895993932096027520027641920489863095279507385793449828250341732295653380918110153208794818133582580549881272809752093690167702874135692329226449647710903297264836829304174916737534308781180540622966784246874656245131742049004832216427665542900559350289936114782223424261285828326467186036500189315374147638489679365569122714398706519530651330568884655048857998738535162606116788633540389660052822237449082894798620397228331715198160243676576563833057235963591510865254600363874868376326223342987257095524637683005910353149353985736118868884201748241906260834981730342237039841332642826992107404550655896666748345365674890607157774441475484243882201336628162741169867245763301760589124380273199798408830595058913091171919877614694147726489893436574250850340507327385299035465871142174996355845144754296569593277328624899350764900128612322492446704232200904844779690044774489466704342791971033325818579375177198986574258327677001192658549571157948011432781854619937234931318023607913892488081547595643027273112231930052296408924740226650932079692977979723087954832182561714039165214592519432072341006090867558444590500046670796334654563831795097893579417369163527446118485216640779183866242940408834876470623546535581092657696442769943697415557222634945994928345582919379555737064807229823898063124722397462865271762488831161242854699473036671880755068265078114794285828073665994075449085609906998661672333071442457648357415011749796791660787652311451754111998258225321700918588336282021287779660266006478430684428943104013430039391172368672456567326867191392067160282558191418023317019420272483377716338824452250493343290088273713208490064728462268680111291491927548831539955609216712080596717327044992535173274475292082
97180672654123457301218758892278525894167935930983363218877512533994251978272092700003994136520699813263053327399132641690231179063314931546906927612775633995348209911166678724589467821767106592498663827057034363632241807121831546175498178011687284590439293322231263406301066863589072717290630291441982684113819198880100231182613587798104863611185433976009254862585527222843445901958943153561148829083242874018226480554274231391324767376148485531787767908124831873688579979114662856184612164534836370699371440464263768724668291617743681719766849740663590277737977490693183461320266666793472116774276618408124767965369796362732668987556797338128876129264558867657737417548617146808592137056879602982206609613881069490166381528825180204703315896719667069923077454352649723496033985893188309150391579573916059639453655188856334980355047281560296288150836680499821806918067869468571687709518088408966653716009356556714281694904914038988996962213833530636987279769672200413448893419914190954063100962251649102614676944333201213024711868954772741991675045198246947499574872027800654821823797116399297131866662866832215332914761325880983081211272181775518951539503852063119472301382766303820851467743266039356123495461914463960644386394228342211998370152351720235034997434035743513051754761571835043769475528640144621307760159481496713401409374957729200400650100318226988524015127382509490642900236553851499823658269458873976032051355393161653806016080446394196719312454167915154602448638624354575153334932298393406734174580316934939632892851077461038399470015366439910136971186909599331204517462262508377673477745789645309425145559198802530351403897927622891172233239135167420567162398873965477371498335087310395422796362380227536212159184529243644094285328763286873653399867593200891823468738537356817916009007206857590792983184556882143118383332812491747733056313117179696094921120670802012310012864110800437831852620698327457619035904268498030693438632685623213366864129523404256345542376567721287706234
359125016588483777876970236084456277023948551334490591022594253744077631232660869593809453087749830900393202787736482133628148979992109544954840067942735030391105496026321872468122542495017023785810605820545392820104069279893067324597299043883381251767370331206913429284614563732308018369972360638019778425246546329838131639355043236388708044857300408692365733932897876809202025693305332974091411983635619038514442263783801745983300121464879550146672827072002317686396598587702487509572349422593441184802476344187280014450860069307120621758277552124841158659386176036703247124389223327008210072318671884895179305778728051888524412158486781863155034447221379906386062559915129172725833420555901857729690605950941678587057025641848365090809750870051863842805803189784976076099574956436664131457150096711473033060684065060747340764998195621425524824611657787212347497307297184843276100338110267863618974154272345482369968216663233417338501929114697679974461999040589290327155974468087040862022522065912789
I'm getting closer to 10K digits of M1: Using ![enter image description here][14] , where f(x) = x^(1/x),
I got approximately 10K digits of the imaginary part, but the real part was a little garbled.
Finally, using
![enter image description here][15] , where f(x) = x^(1/x),
I got about 10,000 digits of M1 in about 12 hours. (It showed that the 5,000-digit computation was only correct to 4,979 digits, though.)
Here is a rough program to get it:
f[x_]:=x^(1/x); Print[DateString[]]; Print[T0 = SessionTime[]]; prec = 10000;
Timing[Print[
a = N[Re[-(136584/Pi^10) - (34784*I)/Pi^9 +
5670/Pi^8 + (786*I)/Pi^7 - 90/Pi^6 -
(4*I)/Pi^5 - 3/Pi^4 - (2*I)/Pi^3 +
1/Pi^2 - (2*I)/Pi] -
(1/Pi)^10*
NIntegrate[Cos[Pi*x]*D[f[x], {x, 10}], {x, 1, Infinity},
WorkingPrecision -> prec,
PrecisionGoal -> prec], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[
N[b = -Im[-(136584/Pi^10) - (34784*I)/Pi^9 +
5670/Pi^8 + (786*I)/Pi^7 - 90/Pi^6 -
(4*I)/Pi^5 - 3/Pi^4 - (2*I)/Pi^3 +
1/Pi^2 - (2*I)/Pi] +
(1/Pi)^10*
NIntegrate[Sin[Pi*x]*D[f[x], {x, 10}], {x, 1, Infinity},
WorkingPrecision -> prec,
PrecisionGoal -> prec], prec]]]; Print[
SessionTime[] - T0, " seconds"];
Print[N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];
See attached 10000MKB.pdf and 10KMKB.nb for work and digits.
On May 5, I computed another 10,000 digits in 9.55 hours; see attached faster10KMKB.
On May 6, I computed another 10,000 digits in a blisteringly fast 5.1 hours; see attached fastest10KMKB.nb.
**On May 9, I improved that timing to 4.8 hours (17355 seconds). Here is the code I used:**
d = 15; f[x_] = x^(1/x); ClearAll[a, b, h];
h[n_] := Sum[
StirlingS1[n, k]*Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1,
n}]; h[0] = 1; g =
2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}]; Print[
DateString[]];
Print[T0 = SessionTime[]]; prec = 10000;
Print[N[a = -Re[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[b =
Im[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"]; Print[
N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];
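As a cross-check on h[n]: its first few values must reproduce, up to sign, the coefficients 1/Pi^2, 2/Pi^3, 3/Pi^4, 4/Pi^5, 90/Pi^6, 786/Pi^7 and 5670/Pi^8 that were hard-coded into the earlier programs. Here is a small, independent verification, in Python rather than Mathematica (the Stirling-number recurrence is the standard one; nothing here is from the original code except the definition of h[n]):

```python
from math import comb

def stirling1(n, k, _memo={}):
    # Signed Stirling numbers of the first kind:
    # s(n, k) = s(n-1, k-1) - (n-1) * s(n-1, k)
    if n == k:
        return 1
    if k < 1 or k > n:
        return 0
    if (n, k) not in _memo:
        _memo[(n, k)] = stirling1(n - 1, k - 1) - (n - 1) * stirling1(n - 1, k)
    return _memo[(n, k)]

def h(n):
    # Direct translation of the Mathematica definition:
    # h[n_] := Sum[StirlingS1[n, k]*
    #   Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}]; h[0] = 1
    if n == 0:
        return 1
    return sum(stirling1(n, k) *
               sum((-j) ** (k - j) * comb(k, j) for j in range(k + 1))
               for k in range(1, n + 1))

print([h(n) for n in range(1, 8)])  # [1, -2, 3, 4, -90, 786, -5670]
```

Up to sign (the signs are absorbed by the powers of I in g), these are exactly the 1, 2, 3, 4, 90, 786, 5670 that appear next to the powers of Pi in the hand-written formulas.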
April 20, 2015
--------------
FelisPhasma has been helpful in providing me with a little competition in computing the integral analog of the MRB constant. See [https://github.com/FelisPhasma/MKB-Constant][16].
I've never done this before, but I would so much like to see others breaking these records that I'm going to give away a program that is practically guaranteed to break my record of 10,000 digits for the integral analog of the MRB constant in a day or so. The program could use some "clean up" if you care to go that far. (The imaginary part is given as a positive, real constant: it actually starts with a negative sign and of course ends with I.)
Here it is:
f[x_] = x^(1/x); Print[DateString[]]; Print[
T0 = SessionTime[]]; prec = 11000; Timing[
Print[a =
N[Re[(633666648 I)/\[Pi]^13 -
33137280/\[Pi]^12 - ((824760 I)/\[Pi]^11) -
136584/\[Pi]^10 - (34784 I)/\[Pi]^9 +
5670/\[Pi]^8 + (786 I)/\[Pi]^7 - 90/\[Pi]^6 - (4 I)/\[Pi]^5 -
3/\[Pi]^4 - (2 I)/\[Pi]^3 +
1/\[Pi]^2 - (2 I)/\[Pi]] + (1/Pi)^12*
NIntegrate[Cos[Pi x]*D[f[x], {x, 12}], {x, 1, Infinity},
WorkingPrecision -> prec, PrecisionGoal -> prec], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[b = -Im[(633666648 I)/\[Pi]^13 -
33137280/\[Pi]^12 - ((824760 I)/\[Pi]^11) -
136584/\[Pi]^10 - (34784 I)/\[Pi]^9 +
5670/\[Pi]^8 + (786 I)/\[Pi]^7 - 90/\[Pi]^6 - (4 I)/\[Pi]^5 -
3/\[Pi]^4 - (2 I)/\[Pi]^3 +
1/\[Pi]^2 - (2 I)/\[Pi]] - (1/Pi)^13*
NIntegrate[Cos[Pi x]*D[f[x], {x, 13}], {x, 1, Infinity},
WorkingPrecision -> prec, PrecisionGoal -> prec],
prec]]]; Print[SessionTime[] - T0, " seconds"]; Print[
N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];
Will anyone let me know if you are running this program to break my record?
Edit: On Sat 2 May 2015 19:03:45, I started a 15,000 digit, new record computation of the real and imaginary parts and magnitude of the integral analog of the MRB constant (where the imaginary part is given as a positive, real constant), using the following code.
f[x_] = x^(1/x); ClearAll[a];
h[n_] := Sum[
StirlingS1[n, k]*Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1,
n}]; h[0] = 1; g = -2 I/Pi +
Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, 18}]; Print[DateString[]];
Print[T0 = SessionTime[]]; prec = 15000;
Print[N[a =
Re[g] + (1/Pi)^19*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, 19}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[b = -Im[g] + (1/Pi)^19*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, 19}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"]; Print[
N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];
The formula behind this computation is ![enter image description here][17]
Edit: The program took 33.75 hours. The full run is attached in 15KMKB3.nb.
Edit May 9, 2015: **I more than halved my time! I computed 15,000 digits in 14.83 hours. See fastestMKB15K.nb. The faster formula is**
![enter image description here][18]
If you still want me to write out code for more digits, for you to break that record, let me know.
May 11, 2015
------------
Still talking about the integral analog of the MRB constant: ![enter image description here][19]
Here are my speed records -- can you beat any of them?
![enter image description here][20]
Here is a graph of those speed records with a trendline:
![enter image description here][21]
The 20,000 digit run is attached as MKB20K.nb and MKB20K.pdf.
Here is the algorithm used:
![enter image description here][22]
Here is the code:
d = 30; f[x_] = x^(1/x); ClearAll[a, b, h];
a[n_] := Sum[
StirlingS1[n, k]*Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1,
n}]; a[0] = 1; g =
2 I/Pi - Sum[-I^(n + 1) a[n]/Pi^(n + 1), {n, 1, d}]; Print[
DateString[]];
Print[T0 = SessionTime[]]; prec = 20000;
Print[N[a = -Re[g] - (1/Pi)^(d)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[b =
Im[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"]; Print[
N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];
I just now completed a 25,000 digit computation. It took 63.7 hours and confirmed the 20,000 digits. I updated MKB20K.nb and MKB20K.pdf.
Here is the algorithm and the code I used:
![enter image description here][23]
d = 35; f[x_] = x^(1/x); ClearAll[a, b, h];
h[n_] := Sum[
StirlingS1[n, k]*Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1,
n}]; h[0] = 1; g =
2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}]; Print[
DateString[]];
Print[T0 = SessionTime[]]; prec = 25000;
Print[N[a = -Re[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[b =
Im[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"]; Print[
N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];
Here is a new graph of those speed records with a trendline:
![enter image description here][24]
**Edit:**
On Tue 26 May 2015 06:21:00, I started a 30,000 digit computation using the following code.
Does anyone else want to try to break the record?
$MaxExtraPrecision = 100; d = 43; f[x_] = x^(1/x); ClearAll[a, b, h];
h[n_] := Sum[
StirlingS1[n, k]*Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1,
n}]; h[0] = 1; g =
2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}]; Print[
DateString[]];
Print[T0 = SessionTime[]]; prec = 30000;
Print[N[a = -Re[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[b =
Im[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"]; Print[
N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];
**Edit:**
My first full 30,000 digit run finished on Sun 31 May 2015 00:45:09.
Time span: {"4.767 days", "114.4 hours", "6864 minutes", "411849 seconds"}
See attached MKB30K2.nb worksheet.
Here is an updated speed record plot, with trendline. (I think the 30,000 digit run can be done faster.)
![enter image description here][25]
Here is an extensive record of records of computing the integral analog of the MRB constant:
![enter image description here][26]
![enter image description here][62]
Here is a graph of those records. (The progression of computed digits is so extreme, it is almost unbelievable!)
![enter image description here][27]
[62]: /c/portal/getImageAttachment?filename=13442.JPG&userId=366611
June 5, 2015
------------
I think I came up with a rough program that computes any "prec" digits of the integral analog of the MRB constant.
It chooses d, the best (or close to the best) order of derivative to use in Mathar's algorithm mentioned in a previous post (formula (12) at [http://arxiv.org/pdf/0912.3844v3.pdf][28]), then uses the appropriate code that integrates the integral analog of the constant. It shows the real and imaginary parts as positive real constants and the value of the integral, and gives some timings. It could use a lot of cleanup!
I hope someone can help me test it with varying values of prec. Please reply with your intentions to use it and your results. If no one else can clean it up, I will after I have tested it more.
prec = 2000; d = Ceiling[0.264086 + 0.00143657 prec]; If[
Mod[d, 4] == 0, f[x_] = x^(1/x); ClearAll[a, b, h];
a[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}]; a[0] = 1;
g = 2 I/Pi - Sum[-I^(n + 1) a[n]/Pi^(n + 1), {n, 1, d}];
Print[DateString[]];
Print[T0 = SessionTime[]];
Print[N[a = -Re[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[b =
Im[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];,
If[Mod[d, 4] == 1, f[x_] = x^(1/x); ClearAll[a, b, h];
h[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];
h[0] = 1; g = 2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}];
Print[DateString[]];
Print[T0 = SessionTime[]];
Print[N[
a = -Re[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[
b = Im[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];,
If[Mod[d, 4] == 2, f[x_] = x^(1/x); ClearAll[a, b, h];
a[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];
a[0] = 1; g = 2 I/Pi - Sum[-I^(n + 1) a[n]/Pi^(n + 1), {n, 1, d}];
Print[DateString[]];
Print[T0 = SessionTime[]];
Print[N[
a = -Re[g] - (1/Pi)^(d)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[
b = Im[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];,
If[Mod[d, 4] == 3, f[x_] = x^(1/x); ClearAll[a, b, h];
h[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];
h[0] = 1; g = 2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}];
Print[DateString[]];
Print[T0 = SessionTime[]];
Print[
N[a = -Re[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[
N[b = Im[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];]]]]
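The only genuinely new piece in this chooser is the formula for d. As a sketch of just that selection logic, here it is mirrored in Python (an illustration outside Mathematica; the fit 0.264086 + 0.00143657 prec is copied from the program above):

```python
from math import ceil

def derivative_order(prec):
    # d = Ceiling[0.264086 + 0.00143657 prec], from the chooser program
    return ceil(0.264086 + 0.00143657 * prec)

# Which Mod[d, 4] branch runs for some representative precisions:
for prec in (1000, 2000, 10000, 20000, 40000):
    d = derivative_order(prec)
    print(prec, d, d % 4)
```

For example, prec = 2000 gives d = 4, so the Mod[d, 4] == 0 branch runs; prec = 40000 gives d = 58.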
Here are some of my best timings to compare with the program's results:
digits seconds
1000 38.8650545
2000 437.4906125
3000 889.473875
4000 1586.000714
5000 2802.591704
6000 4569.41586
7000 6891.057587
8000 9659.318566
9000 13491.43967
10000 17355
15000 53385.02323
20000 123876.4331
25000 229130.3088
30000 411848.6322
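One way to put a trendline through the filled-in rows of this table is an ordinary least-squares fit in log-log space, giving a power law t ≈ C·n^b. This Python sketch is my own cross-check on the records above, not the method behind the plotted trendlines, and the exponent it reports is an observation about these numbers only:

```python
import math

# (digits, seconds) speed records from the table above
records = [(1000, 38.8650545), (2000, 437.4906125), (3000, 889.473875),
           (4000, 1586.000714), (5000, 2802.591704), (6000, 4569.41586),
           (7000, 6891.057587), (8000, 9659.318566), (9000, 13491.43967),
           (10000, 17355), (15000, 53385.02323), (20000, 123876.4331),
           (25000, 229130.3088), (30000, 411848.6322)]

# Fit log t = log C + b log n by least squares, so t ~ C * n^b.
xs = [math.log(n) for n, _ in records]
ys = [math.log(t) for _, t in records]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
print(f"empirical growth exponent b = {b:.2f}")
```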
**Edit:** On Fri 5 Jun 2015 20:41:45 I started a 35,000 digit computation with the above "automated" program.
**Edit:** The 35,000 digit computation should be done by 10:24:38 am EDT | Sunday, June 14, 2015. In the above "automated" program I forgot to adjust the MaxExtraPrecision, but that shouldn't affect the accuracy in that program. It already computed the real part of the integral to 35,000 digits and the first 30,000 of those are the same as the real part of my previously mentioned 30,000 digit calculation. I will keep you posted.
**Edit:** The 35,000 digit computation finished on Sun 14 Jun 2015 06:52:29, taking 727844 seconds. It is attached as 35KMKB.nb. The first 30,000 of those digits are the same as the ones of my previously mentioned 30,000 digit calculation. (That shows the computation didn't take any "wild" turns because of the lack of MaxExtraPrecision.) Further, it is a good check of the 30,000 digit run; each of the bigger computations checks the smaller ones, because they are all calculated with distinct formulas using different orders of the derivative of x^(1/x).
Feb 28, 2016
------------
For 2000 digits, Mathematica 10.2.0 shows some remarkable improvement over 10.1.2 with the above "automated" program for computing the digits of the integral analog of the MRB constant.
I will post, below, some speed records that are strictly what the program produces in V 10.2.0, with no picking and choosing of the methods by a human being. Some results will naturally be slower than my previously mentioned speed records, because I tried so very many methods and recorded only the fastest results.
digits seconds
2000 256.3853590
3000 794.4361122
4000 1633.5822870
5000 2858.9390025
10000 17678.7446323
20000 121431.1895170
40000 I got an error message
to be continued
I have to change the program for 40,000 digits! I'll post the new program when I get 40,000 to work.
As of Wed 29 Jul 2015 11:40:10, one of my computers was happily and busily churning away at 40,000 digits of the integral analog of the MRB constant, using the following formula.
**Edit: Mathematica crashed at 11:07 PM 8/4/2015**
(I used MKB as a symbol for the integral analog because it is called the MKB constant. You can find the name MKB constant at http://www.ebyte.it/library/educards/constants/MathConstants.html .) If you can weed through my code at the bottom of this reply, **you might want to check the formula for the placement of pluses, minuses and imaginary units!!!** A little hint when checking whether the formula matches the code: d is 80, so Mod[d, 4] == 0.
f[x_] = x^(1/x); a[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}]; a[0] = 1;
g = 2 I/Pi - Sum[-I^(n + 1) a[n]/Pi^(n + 1), {n, 1, 80}]
MKB = -g + (I/Pi)^81*
Integrate[f[x]*D[f[x], {x, 81}], {x, 1, Infinity}]
Here is the code, cleaned up a little. **This is the version from Aug 6, 2015, 4:52 pm; for the first time the imaginary part is signed and shown to be multiplied by the imaginary unit!**
Block[{$MaxExtraPrecision = 200}, prec = 4000; f[x_] = x^(1/x);
ClearAll[a, b, h];
Print[DateString[]];
Print[T0 = SessionTime[]];
If[prec > 35000, d = Ceiling[0.002 prec],
d = Ceiling[0.264086 + 0.00143657 prec]];
h[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];
h[0] = 1;
g = 2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}];
sinplus1 :=
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)];
cosplus1 :=
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)];
middle := Print[SessionTime[] - T0, " seconds"];
end := Module[{}, Print[SessionTime[] - T0, " seconds"];
Print[c = Abs[a + b]]; Print[DateString[]]];
If[Mod[d, 4] == 0,
Print[N[a = -Re[g] - (1/Pi)^(d + 1)*sinplus1, prec]];
middle;
Print[N[b = -I (Im[g] - (1/Pi)^(d + 1)*cosplus1), prec]];
end];
If[Mod[d, 4] == 1,
Print[N[a = -Re[g] - (1/Pi)^(d + 1)*cosplus1, prec]];
middle;
Print[N[b = -I (Im[g] + (1/Pi)^(d + 1)*sinplus1), prec]]; end];
If[Mod[d, 4] == 2,
Print[N[a = -Re[g] + (1/Pi)^(d + 1)*sinplus1, prec]];
middle;
Print[N[b = -I (Im[g] + (1/Pi)^(d + 1)*cosplus1), prec]];
end];
If[Mod[d, 4] == 3,
Print[N[a = -Re[g] + (1/Pi)^(d + 1)*cosplus1, prec]];
middle;
Print[N[b = -I (Im[g] - (1/Pi)^(d + 1)*sinplus1), prec]];
end];]
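Note the change of the final step from Sqrt[a^2 + b^2] to c = Abs[a + b]: since a is real and b now carries the imaginary unit explicitly, the two agree. A tiny Python illustration with made-up placeholder values (not the constant's actual parts):

```python
import math

# Placeholders standing in for the computed real part a and the
# signed, purely imaginary part b = -I*(...); NOT the MKB constant.
a = 0.1234
b = -1j * 0.5678

magnitude_abs = abs(a + b)                     # the new Abs[a + b] style
magnitude_sqrt = math.sqrt(a**2 + abs(b)**2)   # the old Sqrt[a^2 + b^2] style

# The two styles agree for real a and purely imaginary b.
print(abs(magnitude_abs - magnitude_sqrt) < 1e-12)  # True
```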
Come back to see if I decided whether to try the 40K run again.
EDIT: It looks like I've only got one more test for the program (if it passes) before I retry the 40,000 digit calculation!
**EDIT: On Thu 6 Aug 2015 17:23:18, I restarted the 40K run with Windows 10.**
EDIT: My first thought was that the program took up too much RAM, apparently over 115 GB! (I have 64 GB installed and a 51 GB paging file; nevertheless, Windows 10 closed the Mathematica kernel to keep the computer from losing data.)
Can someone else try the 40K run on their computer? It should take 2 weeks on a fast one. Please let me know if you try it, and let me know the results, so I will know whether I have a problem with my computer. If two weeks is too great a commitment, can you try taking note of the RAM used for two progressively larger runs, like 20K and 30K? I will do the same, and we can compare notes. Thank you!
EDIT: I've been monitoring memory usage for smaller runs and found the program only uses minimal memory! This makes the action of Windows 10 (closing the Mathematica kernel to avoid data loss) all the more a mystery! Could the 40K run really use up all of that RAM?
I know there are quite a few of you viewing this post; however, is anyone out there working on these calculations?
Aug 10, 2016
------------
V 11 is about 1.25 times faster with my newest program for calculating MKB (the integral analog of the MRB sum).
V 10.4 calculated 20,000 digits in 121431.1895170 seconds, and V 11 did it in 96979.6545388 seconds.
I've got a little more testing to do (about 1 day's worth), then I'll try 40,000 digits again, which should take about 12 days.
I will post all my updates here, so you might want to save this message as a favorite so you won't lose it.
Update 1
========
The 40K run automatically started against my wishes on Thu 11 Aug 2016 15:42:08 (due to my pasting two codes at once). I'll keep you informed of how it goes.
Update 2
========
Windows 10 is pushing an update; Wednesday is the latest it will let me restart.
I will restart now with all the updates I can get, then defer further ones and hopefully get 12 restart-free days to do my 40K.
Update 3
========
I ran all the updates I could find, deferred further ones, and restarted the 40K on Sun 14 Aug 2016 10:32:40.
Update 4
========
Windows 10 stopped the calculation! AGAIN!
Can anyone else try it and see if you get anywhere?
Here is my latest code:
(*Other program:For large calculations.Tested for 1000-35000 digits-- \
see post at \
http://community.wolfram.com/groups/-/m/t/366628?p_p_auth=KA7y1gD4 \
and search for "analog" to find pertinent replies.Designed to include \
40000 digits.A157852 is saved as c,the real part as a and the \
imaginary part as b.*)Block[{$MaxExtraPrecision = 200},
prec = 40000(*Replace 40000 with the number of desired digits. 40000 \
digits should take two weeks on a 3.5 GHz Pentium processor.*);
f[x_] = x^(1/x);
ClearAll[a, b, h];
Print[DateString[]];
Print[T0 = SessionTime[]];
If[prec > 35000, d = Ceiling[0.002 prec],
d = Ceiling[0.264086 + 0.00143657 prec]];
h[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];
h[0] = 1;
g = 2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}];
sinplus1 :=
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)];
cosplus1 :=
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)];
middle := Print[SessionTime[] - T0, " seconds"];
end := Module[{}, Print[SessionTime[] - T0, " seconds"];
Print[c = Abs[a + b]]; Print[DateString[]]];
If[Mod[d, 4] == 0,
Print[N[a = -Re[g] - (1/Pi)^(d + 1)*sinplus1, prec]];
middle;
Print[N[b = -I (Im[g] - (1/Pi)^(d + 1)*cosplus1), prec]];
end];
If[Mod[d, 4] == 1,
Print[N[a = -Re[g] - (1/Pi)^(d + 1)*cosplus1, prec]];
middle;
Print[N[b = -I (Im[g] + (1/Pi)^(d + 1)*sinplus1), prec]]; end];
If[Mod[d, 4] == 2,
Print[N[a = -Re[g] + (1/Pi)^(d + 1)*sinplus1, prec]];
middle;
Print[N[b = -I (Im[g] + (1/Pi)^(d + 1)*cosplus1), prec]];
end];
If[Mod[d, 4] == 3,
Print[N[a = -Re[g] + (1/Pi)^(d + 1)*cosplus1, prec]];
middle;
Print[N[b = -I (Im[g] - (1/Pi)^(d + 1)*sinplus1), prec]];
end];] (*Marvin Ray Burns,Aug 06 2015*)
Sometime in 2017,
-----------------
To try to keep Windows 10 from closing Mathematica during the computation, I tried the instructions found at https://www.autoitscript.com/forum/topic/177749-stopping-windows-10-from-auto-closing-programs-to-free-up-ram/ . I will record progress in this spot as I did before.
**UPDATE**
I followed the memory usage on my computer, and it did use around 64 GB of RAM.
And then Windows closed down the Mathematica kernel. I assume that if I can ever afford to maximize my
RAM to its 128 GB limit, the computation will be successful!
Anyone have better luck?
Nov, 2017
---------
Concentrating on the integral analog of the MRB constant:
-----------------------------------------------------
**Search "integral analog" in the above messages for an understanding of the integral analog of the MRB constant.**
**And search "For 2000 digits" for my history of calculating 40,000 digits of it.**
The basic program I wrote to calculate the integral analog of the MRB constant is
(*Other program:For large calculations.Tested for 1000-35000 digits-- \
see post at \
http://community.wolfram.com/groups/-/m/t/366628?p_p_auth=KA7y1gD4 \
and search for "analog" to find pertinent replies.Designed to include \
40000 digits.A157852 is saved as c,the real part as a and the \
imaginary part as b.*)Block[{$MaxExtraPrecision = 200},
prec = 40000(*Replace 40000 with the number of desired digits. 40000 \
digits should take two weeks on a 3.5 GHz Pentium processor.*);
f[x_] = x^(1/x);
ClearAll[a, b, h];
Print[DateString[]];
Print[T0 = SessionTime[]];
If[prec > 35000, d = Ceiling[0.264086 + 0.0017 prec],
d = Ceiling[0.264086 + 0.00143657 prec]];
h[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];
h[0] = 1;
g = 2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}];
sinplus1 :=
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)];
cosplus1 :=
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)];
middle := Print[SessionTime[] - T0, " seconds"];
end := Module[{}, Print[SessionTime[] - T0, " seconds"];
Print[c = Abs[a + b]]; Print[DateString[]]];
If[Mod[d, 4] == 0,
Print[N[a = -Re[g] - (1/Pi)^(d + 1)*sinplus1, prec]];
middle;
Print[N[b = -I (Im[g] - (1/Pi)^(d + 1)*cosplus1), prec]];
end];
If[Mod[d, 4] == 1,
Print[N[a = -Re[g] - (1/Pi)^(d + 1)*cosplus1, prec]];
middle;
Print[N[b = -I (Im[g] + (1/Pi)^(d + 1)*sinplus1), prec]]; end];
If[Mod[d, 4] == 2,
Print[N[a = -Re[g] + (1/Pi)^(d + 1)*sinplus1, prec]];
middle;
Print[N[b = -I (Im[g] + (1/Pi)^(d + 1)*cosplus1), prec]];
end];
If[Mod[d, 4] == 3,
Print[N[a = -Re[g] + (1/Pi)^(d + 1)*cosplus1, prec]];
middle;
Print[N[b = -I (Im[g] - (1/Pi)^(d + 1)*sinplus1), prec]];
end];] (*Marvin Ray Burns*)
I think I found out why the integral analog of the MRB constant is so hard to calculate to prec = 40000 digits!
I've been using too high an order of the derivative of x^(1/x). I've been running out of memory because of using the 80th derivative from `d = Ceiling[0.002 prec]`, since the 58th derivative from `Ceiling[0.264086 + 0.00143657 prec]` was apparently too small, leaving an error statement. I just now asked myself, why make such a big jump?
When my big computer gets back from its tuneup, I think I will try `Ceiling[0.00146 prec]` = the 59th derivative.
EDIT
----
I tried `Ceiling[0.00146 prec]` and `Ceiling[0.00145 prec]` in Mathematica 11.0 and lost the kernel both times after 6 - 12 hours!
I'm now trying `Ceiling[0.0017 prec]` with v 10.4. It's been over 12 hours and I've not lost the kernel yet. Wish me luck!
EDIT
----
I got the following error message and a real part that does not agree with previous computations.
"NIntegrate failed to converge to prescribed accuracy after 9 recursive bisections in x near {x} = {<<42008>>}. NIntegrate obtained -<<42012>> and <<42014>> for the integral and error estimates."
I'm now trying `Ceiling[0.0018 prec]` with v 10.4....
`Ceiling[0.0018 prec]` with v 10.4 gave the same error.
I'm working on a new program that uses less memory; stay tuned!
March 13, 2018
--------------
I'm now trying `Ceiling[0.0019 prec]` with v 11.2, started at Mon 12 Mar 2018 05:40:13.
Here is a record of the memory used by the program. At times the computer may use significantly more.
"Mon 12 Mar 2018 10:30:00" 14 GB DDR3 RAM
"Mon 12 Mar 2018 13:00:00" 15 GB DDR3 RAM
"Mon 12 Mar 2018 14:00:00" 16 GB DDR3 RAM
"Mon 12 Mar 2018 14:30:00" 17 GB DDR3 RAM
"Mon 12 Mar 2018 15:00:00" 18 GB DDR3 RAM
"Mon 12 Mar 2018 22:30:00" 24 GB DDR3 RAM
"Mon 12 Mar 2018 24:00:00" 26 GB DDR3 RAM
"Tue 13 Mar 2018 06:30:00" 33 GB DDR3 RAM
"Tue 13 Mar 2018 07:30:00" 14 GB DDR3 RAM
"Tue 13 Mar 2018 08:00:00" 15 GB DDR3 RAM
"Tue 13 Mar 2018 08:30:00" 16 GB DDR3 RAM
"Tue 13 Mar 2018 11:30:00" 19 GB DDR3 RAM
"Tue 13 Mar 2018 12:00:00" 20 GB DDR3 RAM
"Tue 13 Mar 2018 14:00:00" 22 GB DDR3 RAM
"Tue 13 Mar 2018 14:30:00" 5 GB DDR3 RAM
"Tue 13 Mar 2018 15:00:00" 6 GB DDR3 RAM
"Tue 13 Mar 2018 16:30:00" 8 GB DDR3 RAM
"Tue 13 Mar 2018 18:30:00" 11 GB DDR3 RAM
"Tue 13 Mar 2018 19:30:00" 13 GB DDR3 RAM
"Tue 13 Mar 2018 20:30:00" 14 GB DDR3 RAM
"Tue 13 Mar 2018 21:00:00" 8 GB DDR3 RAM
"Tue 13 Mar 2018 21:30:00" 11 GB DDR3 RAM
"Wed 14 Mar 2018 07:30:00" 26 GB DDR3 RAM
"Wed 14 Mar 2018 07:30:00" 26 GB DDR3 RAM
"Wed 14 Mar 2018 08:00:00" 25 GB DDR3 RAM.Total used by programs 44.54 GB DDR3 RAM.
"Wed 14 Mar 2018 20:00:00" 37 GB DDR3 RAM.Total used by programs 40.32 GB DDR3 RAM.
"Thu 15 Mar 2018 08:00:00" 0 GB DDR3 RAM.Total used by programs 3.84 GB DDR3 RAM.
Update:
-------
V 11.2 cut off its kernel sometime between "Wed 14 Mar 2018 20:00:00" and "Thu 15 Mar 2018 08:00:00."
It seems to me that V11 under Windows 10 cuts off my RAM-intensive operations.
The last success I had was using V10.2 under Windows 7. I am trying that combination again, this time for the 40K digits.
Below is the code I used then and am using now. At first I changed only "prec=35000" to "prec=40000" and got an errant answer for the real part.
And I got memory use starting out at
"Thu 15 Mar 2018 08:53:47" 0.3 GB, total computer use 3.84 GB
"Thu 15 Mar 2018 11:48:47" 04.3 GB, total computer use 7.68 GB
"Thu 15 Mar 2018 13:00:00" 01.3 GB, total computer use 5.12 GB
"Thu 15 Mar 2018 14:00:00" 01.3 GB, total computer use 5.12 GB
So now I also changed the coefficient of prec from "d = Ceiling[0.264086 + 0.00143657 prec]" to "d = Ceiling[0.002 prec]". I think I can get by with 0.002 because 10.3 in Windows 7 seems to use less memory than the V 11's in Windows 10.
prec = 40000; d = Ceiling[0.002 prec]; If[Mod[d, 4] == 0,
f[x_] = x^(1/x); ClearAll[a, b, h];
a[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}]; a[0] = 1;
g = 2 I/Pi - Sum[-I^(n + 1) a[n]/Pi^(n + 1), {n, 1, d}];
Print[DateString[]];
Print[T0 = SessionTime[]];
Print[N[a = -Re[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[b =
Im[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];,
If[Mod[d, 4] == 1, f[x_] = x^(1/x); ClearAll[a, b, h];
h[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];
h[0] = 1; g = 2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}];
Print[DateString[]];
Print[T0 = SessionTime[]];
Print[N[
a = -Re[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[
b = Im[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];,
If[Mod[d, 4] == 2, f[x_] = x^(1/x); ClearAll[a, b, h];
a[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];
a[0] = 1; g = 2 I/Pi - Sum[-I^(n + 1) a[n]/Pi^(n + 1), {n, 1, d}];
Print[DateString[]];
Print[T0 = SessionTime[]];
Print[N[
a = -Re[g] - (1/Pi)^(d)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[
b = Im[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];,
If[Mod[d, 4] == 3, f[x_] = x^(1/x); ClearAll[a, b, h];
h[n_] :=
Sum[StirlingS1[n, k]*
Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];
h[0] = 1; g = 2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}];
Print[DateString[]];
Print[T0 = SessionTime[]];
Print[
N[a = -Re[g] + (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[
N[b = Im[g] - (1/Pi)^(d + 1)*
NIntegrate[
Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity},
WorkingPrecision -> prec*(105/100),
PrecisionGoal -> prec*(105/100)], prec]];
Print[SessionTime[] - T0, " seconds"];
Print[N[Sqrt[a^2 + b^2], prec]]; Print[DateString[]];]]]]
Memory use:
"Thu 15 Mar 2018 16:30:46" .3 GB. Total used by programs 3.8 GB.
Here is a breakdown of the memory use as of 16:45, March 17, 2018
![enter image description here][30]
Mar 18, 2018, 8:10 AM
------------
Just in case I run out of memory, I increased the size of my paging file!
![enter image description here][31]
"Sun 17 Mar 2018 16:00:00" 15 GB. Total used by programs 49.92 GB.
![enter image description here][32]
![enter image description here][33]
I no longer believe that the V11 releases use a lot more memory. If I hadn't increased my paging file, Windows would have closed Mathematica already!
![enter image description here][34]
I might be slowing the computation down a little, but I don't want to take any chances of running out of memory, so I increased the paging file one more time.
![enter image description here][35]
![enter image description here][36]
The computer has been committing up to 160 GB of total RAM for a while now.
![enter image description here][37]
Finally, the committed memory is going down.
![enter image description here][38]
My computer is acting really sluggish right now. Mathematica is using a minimal amount of DDR3 RAM, but the computer is still committing a near-record amount of virtual RAM.
![enter image description here][39]
My computer is acting too funny, so I aborted the evaluation. The kernel remained running and overall memory remained maxed out. I tried to retrieve a and b (the variables with the real and imaginary parts of the solution), but the computer wouldn't recall them for me. The computer won't evaluate any Mathematica operations. I am restarting my computer and inspecting the damage!
Update:
-------
Windows said it found no errors on my hard drive; that's great!
I'm going to replace my Intel 6-core processor with a faster 8-core Intel Xeon E5-2687W v2 CPU and add an additional hard drive. The new processor and my motherboard both take 128 GB of RAM, but ECC is the only kind of 16 GB DDR3 DIMM I can find. I'm not sure if my MSI Big Bang-XPower II will take ECC.
40,000 digits of the integral analog might have to wait for me to get a new system.
I am working on a new program to compute the MRB constant in little steps, and will use it on my new processor.
[1]: https://scholar.google.com/scholar?hl=en&as_sdt=0,15&q=%22MKB%20constant%22&btnG=
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capturemkb2.JPG&userId=366611
[3]: /c/portal/getImageAttachment?filename=3763a.JPG&userId=366611
[4]: http://arxiv.org/abs/0912.3844
[5]: /c/portal/getImageAttachment?filename=462244.JPG&userId=366611
[6]: /c/portal/getImageAttachment?filename=41011.JPG&userId=366611
[7]: /c/portal/getImageAttachment?filename=31733.JPG&userId=366611
[8]: /c/portal/getImageAttachment?filename=82262.JPG&userId=366611
[9]: /c/portal/getImageAttachment?filename=42253.JPG&userId=366611
[10]: /c/portal/getImageAttachment?filename=3763a.JPG&userId=366611
[11]: http://arxiv.org/abs/0912.3844
[12]: /c/portal/getImageAttachment?filename=40936.JPG&userId=366611
[13]: /c/portal/getImageAttachment?filename=106828.JPG&userId=366611
[14]: /c/portal/getImageAttachment?filename=27409.JPG&userId=366611
[15]: /c/portal/getImageAttachment?filename=902710.JPG&userId=366611
[16]: https://github.com/FelisPhasma/MKB-Constant
[17]: /c/portal/getImageAttachment?filename=18.JPG&userId=366611
[18]: /c/portal/getImageAttachment?filename=15K.JPG&userId=366611
[19]: /c/portal/getImageAttachment?filename=aaa.JPG&userId=366611
[20]: /c/portal/getImageAttachment?filename=mkbspeed1.JPG&userId=366611
[21]: /c/portal/getImageAttachment?filename=mkbspeedgraph1.JPG&userId=366611
[22]: /c/portal/getImageAttachment?filename=mdk20K1.JPG&userId=366611
[23]: /c/portal/getImageAttachment?filename=35.JPG&userId=366611
[24]: /c/portal/getImageAttachment?filename=2898recordplota1.jpg&userId=366611
[25]: /c/portal/getImageAttachment?filename=3063a.JPG&userId=366611
[26]: /c/portal/getImageAttachment?filename=29741.JPG&userId=366611
[27]: /c/portal/getImageAttachment?filename=54721a.JPG&userId=366611
[28]: http://arxiv.org/pdf/0912.3844v3.pdf
[29]: /c/portal/getImageAttachment?filename=42253.JPG&userId=366611
[30]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10p140kmarch2018memeory1.jpg&userId=366611
[31]: http://community.wolfram.com//c/portal/getImageAttachment?filename=32901.JPG&userId=366611
[32]: http://community.wolfram.com//c/portal/getImageAttachment?filename=53022.JPG&userId=366611
[33]: http://community.wolfram.com//c/portal/getImageAttachment?filename=78733.jpg&userId=366611
[34]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5.jpg&userId=366611
[35]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6.jpg&userId=366611
[36]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7.jpg&userId=366611
[37]: http://community.wolfram.com//c/portal/getImageAttachment?filename=25548.jpg&userId=366611
[38]: http://community.wolfram.com//c/portal/getImageAttachment?filename=76749.jpg&userId=366611
[39]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10.jpg&userId=366611Marvin Ray Burns2018-04-20T12:06:18ZPlot 2 parameter solution from ParametricNDSolve with Table
https://community.wolfram.com/groups/-/m/t/2242051
I have a 2-D differential equation that I solve with ```ParametricNDSolve```, with the initial conditions given as parameters. I solve the equation using
twoeqnpara = ParametricNDSolve[{x'[t]==-3x[t] - y[t],y'[t] == x[t],x[0]==a,
y[0]==b},{x,y},{t,0,100},{a,b}].
When I try to plot with
pp = ParametricPlot[Evaluate@Table[{x[a,b][t],y[a,b][t]}/.twoeqnpara,
{a,-2,2,0.5},{b,-2,2,0.5}],{t,0,40},PlotRange->All]
I don't get any plot. But when I do it for a single parameter,
pp = ParametricPlot[Evaluate@Table[{x[a,a][t],y[a,a][t]}/.twoeqnpara,{a,-2,2,0.4}],
{t,0,40},PlotRange->All,PlotLegends->Range[-2,2,0.4]]
it works and gives a plot.
I think the issue is that when I use the command
Table[ x+y, {x,-2,2,0.5},{y,-2,2,0.5}]
the command pairs each single value of x with all the values of y, producing a nested list. What I need is all the possible pairs from the two arrays, so that each pair can be used as an initial condition for plotting.Sahil Goyal2021-04-11T17:01:59ZTry to beat these MRB constant records!
https://community.wolfram.com/groups/-/m/t/366628
The MRB constant: ALL ABOARD!
-----------------------------
POSTED BY: Marvin Ray Burns.
Autobiography and style: [http://marvinrayburns.com/aboutme.html][1]
On Pi Day, 2021, 2:40 pm EST,
I added a new MRB constant integral.
Map:
----
- First, we have formal identities and theory for **C**<sub>*MRB*</sub>.![CMRB][2]
- Then, at the end of this initial posting, we have world records of the maximum number of digits of **C**<sub>*MRB*</sub>
computations by date.
- Then we have some hints for anyone serious about breaking my record.
- Followed by speed records,
- a program Richard Crandall wrote to check his code for the computing record number of digits
- and a conversation about whether Mathematica uses the same algorithm for computing **C**<sub>*MRB*</sub> by a couple of different methods.
- Then, for a few replies, we compute **C**<sub>*MRB*</sub> from Crandall's eta derivative formulas and see records there.
- There are three replies about "NEW RECORD ATTEMPTS OF 4,000,000 DIGITS!" and the computation is now complete!
- We see where I am on a 5,000,000 digit calculation. **(Just recently completed!)**
- I describe the MRB supercomputer (faster than some computers with dual Platinum Xeon processors). It was used for the 5,000,000 digit calculation.
- Then it comes time for the 6 million digit computation of **C**<sub>*MRB*</sub>. (Put on hold, but taken up again at the end.)
- We compute the **C**<sub>*MRB*</sub> sum via an integral, which certifies the accuracy of the **C**<sub>*MRB*</sub> calculations, since the sum and the integral are computed in vastly different ways.
- Then we take up the task of calculating 6,000,000 (6 million) digits of **C**<sub>*MRB*</sub> (for the <s>fourth, fifth, sixth, seventh time!) (will finish Thu 25 Feb 2021 06:48:06).</s>
- We look for closed forms and find nontrivial, arbitrarily close approximations of **C**<sub>*MRB*</sub>.
- The latest updates on the MRB constant supercomputer 2 (with a GIF of it) and how I'm using it to break new records -- up to 7,500,000 digits -- are in a few replies.
- **Finally,** I report that I successfully computed 6,000,000 digits of **C**<sub>*MRB*</sub> on my 8th try!
----------
**C**<sub>*MRB*</sub> is defined below. See http://mathworld.wolfram.com/MRBConstant.html.
With over 20 years of research and ideas from users like you, I developed this informal catalog of formulas for the MRB constant.
**C**<sub>*MRB*</sub> =
![eta equals][3]
![enter image description here][4]
That is proven below by an internet scholar going by the moniker "Dark Malthorp:"
![Dark Marthorp's proof][5]
![eta sums][6] (with the terms denoting the kth derivatives of the Dirichlet eta function evaluated at k and at 0, respectively)
was first discovered in 2012 by Richard Crandall of Apple Computer.
The left half is proven below by Gottfried Helms, and it is proven more rigorously ![(][7]considering the conditionally convergent sum ![enter image description here][8]![)][9] below that. Then the right half is a Taylor expansion of η(s) around s = 0.
> ![n^(1/n)-1][10]
At
[https://math.stackexchange.com/questions/1673886/is-there-a-more-rigorous-way-to-show-these-two-sums-are-exactly-equal][11],
it has been noted that "even though one has cause to be a little bit wary around formal rearrangements of conditionally convergent sums (see the [Riemann series theorem][12]), it's not very difficult to validate the formal manipulation of Helms. The idea is to cordon off a big chunk of the infinite double summation (all the terms from the second column on) that we know is absolutely convergent, which we are then free to rearrange with impunity. (Most relevantly for our purposes here, see pages 80-85 of this [document][13], culminating with the Fubini theorem which is essentially the manipulation Helms is using.)"
> ![argrument 1][14] ![argrument 2][15]
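As a down-to-earth check of the conditionally convergent sum discussed above, the defining series can be evaluated at machine precision; the sketch below is an illustration only (the record computations in this post use high-precision Mathematica code, not double precision). Because the series alternates with slowly decaying terms, averaging two consecutive partial sums gains several digits:

```python
import math

# Machine-precision sanity check of CMRB = sum_{n>=1} (-1)^n (n^(1/n) - 1).
def mrb_partial(n_terms):
    """Partial sum of the defining alternating series up to n_terms."""
    return math.fsum((-1) ** n * (math.exp(math.log(n) / n) - 1)
                     for n in range(1, n_terms + 1))

# The limit lies between consecutive partial sums, so their average
# is accurate to roughly the first difference of the terms.
N = 200000
estimate = (mrb_partial(N) + mrb_partial(N + 1)) / 2
print(estimate)  # ~ 0.187859642462, the first digits of CMRB
```

With N = 200,000 the averaged estimate agrees with **C**<sub>*MRB*</sub> to roughly ten decimals, while a raw partial sum of that length is only good to about four.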
----------
----------
----------
We see many more integrals for **C**<sub>*MRB*</sub>.
We can expand
![1/x][16]
into the following.
![xx = 25.656654035][17]
xx = 25.65665403510586285599072933607445153794770546058072048626118194\
90097321718621288009944007124739159792146480733342667`100.;
g[x_] = x^(xx/
x); I NIntegrate[(g[(-t I + 1)] - g[(t I + 1)])/(Exp[Pi t] -
Exp[-Pi t]), {t, 0, Infinity}, WorkingPrecision -> 100]
(*
0.18785964246206712024851793405427323005590309490013878617200468408947\
72315646602137032966544331074969.*)
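At machine precision, the same integral can be cross-checked outside Mathematica. Since g(1 - it) is the complex conjugate of g(1 + it), the integrand I (g(1-it) - g(1+it)) / (e^(πt) - e^(-πt)) reduces to the real function Im[g(1+it)] / sinh(πt), which a plain composite Simpson rule handles. This is a rough double-precision sketch, not the 100-digit NIntegrate run above:

```python
import math

# Double-precision cross-check of the NIntegrate result above.
XX = 25.656654035105862855990729336074451537947705460580720486261181949

def integrand(t):
    if t == 0.0:
        return XX / math.pi                      # limiting value at t = 0
    w = (1 + 1j * t) ** (XX / (1 + 1j * t))      # g(1 + i t)
    return w.imag / math.sinh(math.pi * t)       # real form of the integrand

def simpson(func, a, b, n):                      # n even subintervals
    h = (b - a) / n
    inner = math.fsum((4 if k % 2 else 2) * func(a + k * h)
                      for k in range(1, n))
    return (func(a) + func(b) + inner) * h / 3

# The integrand decays like exp(-Pi t), so [0, 12] captures the tail.
val = simpson(integrand, 0.0, 12.0, 120000)
print(val)  # agrees with the NIntegrate value 0.187859... above
```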
**Expanding upon the previously mentioned**
![enMRB sinh][18]
we get the following set of formulas that all equal **C**<sub>*MRB*</sub>:
Let
x= 25.656654035105862855990729 ...
along with the following constants (approximate values given)
{u = -3.20528124009334715662802858},
{u = -1.975955817063408761652299},
{u = -1.028853359952178482391753},
{u = 0.0233205964164237996087020},
{u = 1.0288510656792879404912390},
{u = 1.9759300365560440110320579},
{u = 3.3776887945654916860102506},
{u = 4.2186640662797203304551583} or
$u = \infty.$
Another set follows.
let
x = 1
along with the following approximations
{u = 2.451894470180356539050514},
{u = 1.333754341654332447320456} or
$u = \infty$
then
![enter image description here][19]
See
[this notebook from the wolfram cloud][20]
for justification.
----------
----------
Also, in terms of the Euler-Riemann zeta function,
**C**<sub>*MRB*</sub> =![enter image description here][21]
Furthermore, as ![enter image description here][22],
according to [user90369][23] at StackExchange, **C**<sub>*MRB*</sub> can be written as the sum of zeta derivatives similar to the eta derivatives discovered by Crandall.
![zeta hint][24] For information about ζ<sup>(j)</sup>(k), see e.g. [here][25], formulas (11), (16), and (19). ![credit][26]
In the light of the parts above, where
**C**<sub>*MRB*</sub>
= ![k^(1/k)-1][27]
= ![eta'(k)][28]
= ![sum from 0][29] ![enter image description here][30]
as well as ![double equals RHS][31]
an internet scholar going by the moniker "Dark Malthorp" wrote:
> ![eta *z^k][32]
----------
Here is proof of a faster converging integral for its integrated analog by Ariel Gershon.
g(x)=x^(1/x), M1=![hypothesis][101]
Which is the same as
![enter image description here][102]
because changing the upper limit to 2N + 1 increases M1 by 2i/π.
![Iimofg->1][103]
![Cauchy's Integral Theorem][104]
![Lim surface h gamma r=0][105]
![Lim surface h beta r=0][106]
![limit to 2n-1][107]
![limit to 2n-][108]
Plugging equations [5] and [6] into equation [2] gives us:
![left][109]![right][110]
Now take the limit as N→∞ and apply equations [3] and [4] :
![QED][111]
He went on to note that
![enter image description here][112]
----------
----------
----------
As with any scientific paper, this post contains only reproducible results with methods. These records represent the advancement of consumer-level computers and clever programming over the past 20 years. I see others breaking these records, even after I die!
Here are some record computations. If you know of any others let me know.
- On or about Dec 31, 1998, I computed 1 digit of the (additive inverse of) **C**<sub>*MRB*</sub> with my TI-92s, by adding 1-sqrt(2)+3^(1/3)-4^(1/4)+... as far as I could. That first digit, by the way, is just 0. Then, by using the sum feature, in approximate mode, to compute $\sum _{n=1}^{1000 } (-1)^n \left(n^{1/n}\right),$
I computed the first correct decimal of $\text{CMRB}=\sum _{n=1}^{\infty } (-1)^n \left(n^{1/n}-1\right)$, i.e. (.1). It gave (.1_91323989714), which is close to what Mathematica gives when summing only up to an upper limit of 1000.
- On Jan 11, 1999, I computed 4 decimals(.1878) of **C**<sub>*MRB*</sub> with the Inverse Symbolic Calculator, with the command evalf( 0.1879019633921476926565342538468+sum((-1)^n* (n^(1/n)-1),n=140001..150000)); where 0.1879019633921476926565342538468 was the running total of t=sum((-1)^n* (n^(1/n)-1),n=1..10000), then t= t+the sum from (10001.. 20000), then t=t+the sum from (20001..30000) ... up to t=t+the sum from (130001..140000).
- In Jan of 1999, I computed 5 correct decimals (rounded to .18786)of **C**<sub>*MRB*</sub> using Mathcad 3.1 on a 50 MHz 80486 IBM 486 personal computer operating on Windows 95.
- Shortly afterward, I tried to compute 9 digits of **C**<sub>*MRB*</sub> using Mathcad 7 Professional on the Pentium II mentioned below, by summing (-1)^x x^(1/x) for x = 1 to 10,000,000, to 20,000,000, and many more, then linearly extrapolating the sum to what a few billion terms would have given.
- On Jan 23, 1999, I computed 500 digits of **C**<sub>*MRB*</sub> with an online tool called Sigma. Remarkably, the extrapolated sum from the previous step was correct to 6 of its 9 decimal places! See
[http://marvinrayburns.com/Original_MRB_Post.html][33]
if you can read the printed and scanned copy there.
- In September of 1999, I computed the first 5,000 digits of **C**<sub>*MRB*</sub> on a 350 MHz Pentium II with 64 Mb of RAM using the simple PARI commands \p 5000;sumalt(n=1,((-1)^n*(n^(1/n)-1))), after allocating enough memory.
- On June 10-11, 2003, over a period of 10 hours, on a 450 MHz P3 with an available 512 MB RAM, I computed 6,995 accurate digits of **C**<sub>*MRB*</sub>.
- Using a Sony Vaio P4 2.66 GHz laptop computer with 960 MB of available RAM, at 2:04 PM 3/25/2004, I finished computing 8000 digits of **C**<sub>*MRB*</sub>.
- On March 01, 2006, with a 3 GHz PD with 2 GB RAM available, I computed the first 11,000 digits of **C**<sub>*MRB*</sub>.
- On Nov 24, 2006, I computed 40,000 digits of **C**<sub>*MRB*</sub> in 33 hours and 26 min via my program written in Mathematica 5.2. The computation was run on a 32-bit Windows 3 GHz PD desktop computer using 3.25 GB of RAM.
The program was something like this:
Block[{a, b = -1, c = -1 - d, d = (3 + Sqrt[8])^n,
n = 131 Ceiling[40000/100], s = 0}, a[0] = 1;
d = (d + 1/d)/2; For[m = 1, m < n, a[m] = (1 + m)^(1/(1 + m)); m++];
For[k = 0, k < n, c = b - c;
b = b (k + n) (k - n)/((k + 1/2) (k + 1)); s = s + c*a[k]; k++];
N[1/2 - s/d, 40000]]
- Finishing on July 29, 2007, at 11:57 PM EST, I computed 60,000 digits of **C**<sub>*MRB*</sub>. Computed in 50.51 hours on a 2.6 GHz AMD Athlon with 64 bit Windows XP. Max memory used was 4.0 GB of RAM.
- Finishing on Aug 3, 2007, at 12:40 AM EST, I computed 65,000 digits of **C**<sub>*MRB*</sub>. Computed in only 50.50 hours on a 2.66 GHz Core 2 Duo using 64 bit Windows XP. Max memory used was 5.0 GB of RAM.
- Finishing on Aug 12, 2007, at 8:00 PM EST, I computed 100,000 digits of **C**<sub>*MRB*</sub>. They were computed in 170 hours on a 2.66 GHz Core 2 Duo using 64 bit Windows XP. Max memory used was 11.3 GB of RAM. The typical daily record of memory used was 8.5 GB of RAM.
- Finishing on Sep 23, 2007, at 11:00 AM EST, I computed 150,000 digits of **C**<sub>*MRB*</sub>. They were computed in 330 hours on a 2.66 GHz Core 2 Duo using 64 bit Windows XP. Max memory used was 22 GB of RAM. The typical daily record of memory used was 17 GB of RAM.
- Finishing on March 16, 2008, at 3:00 PM EST, I computed 200,000 digits of **C**<sub>*MRB*</sub> using Mathematica 5.2. They were computed in 845 hours on a 2.66 GHz Core 2 Duo using 64 bit Windows XP. Max memory used was 47 GB of RAM. The typical daily record of memory used was 28 GB of RAM.
- Washed away by Hurricane Ike: on September 13, 2008, sometime between 2:00 PM and 8:00 PM EST, an almost complete computation of 300,000 digits of **C**<sub>*MRB*</sub> was destroyed. It had run for a long 4015 hours (23.899 weeks, or 1.4454*10^7 seconds) on a 2.66 GHz Core 2 Duo using 64 bit Windows XP. Max memory used was 91 GB of RAM. The Mathematica 6.0 code used follows:
Block[{$MaxExtraPrecision = 300000 + 8, a, b = -1, c = -1 - d,
d = (3 + Sqrt[8])^n, n = 131 Ceiling[300000/100], s = 0}, a[0] = 1;
d = (d + 1/d)/2; For[m = 1, m < n, a[m] = (1 + m)^(1/(1 + m)); m++];
For[k = 0, k < n, c = b - c;
b = b (k + n) (k - n)/((k + 1/2) (k + 1)); s = s + c*a[k]; k++];
N[1/2 - s/d, 300000]]
- On September 18, 2008, computation of 225,000 digits of **C**<sub>*MRB*</sub> was started with a 2.66 GHz Core 2 Duo using 64 bit Windows XP. It was completed in 1072 hours. Memory usage is recorded in the attachment pt 225000.xls, near the bottom of this post.
- An attempt at 250,000 digits failed to complete due to a serious internal error that restarted the machine. The error occurred sometime on December 24, 2008, between 9:00 AM and 9:00 PM. The computation began on November 16, 2008, at 10:03 PM EST. Like the 300,000 digit computation, this one was almost complete when it failed. The max memory used was 60.5 GB.
- On Jan 29, 2009, 1:26:19 pm (UTC-0500) EST, I finished computing 250,000 digits of **C**<sub>*MRB*</sub>. with a multiple-step Mathematica command running on a dedicated 64 bit XP using 4 GB DDR2 RAM onboard and 36 GB virtual. The computation took only 333.102 hours. The digits are at http://marvinrayburns.com/250KMRB.txt. The computation is completely documented in the attached 250000.PD at bottom of this post.
- On Sun 28 Mar 2010 21:44:50 (UTC-0500) EST, I started a computation of 300000 digits of **C**<sub>*MRB*</sub> using an i7 with 8.0 GB of DDR3 RAM onboard, but it failed due to hardware problems.
- I computed 299,998 digits of **C**<sub>*MRB*</sub>. The computation began Fri 13 Aug 2010 10:16:20 pm EDT and ended 2.23199*10^6 seconds later, on Wednesday, September 8, 2010. I used Mathematica 6.0 for Microsoft Windows (64-bit) (June 19, 2007). That is an average of 7.44 seconds per digit. I used my Dell Studio XPS 8100 i7 860 @ 2.80 GHz with 8 GB of physical DDR3 RAM. Windows 7 reserved an additional 48.929 GB of virtual RAM.
- I computed exactly 300,000 digits to the right of the decimal point
of **C**<sub>*MRB*</sub> from Sat 8 Oct 2011 23:50:40 to Sat 5 Nov 2011
19:53:42 (2.405*10^6 seconds later). This run was 0.5766 seconds per digit slower than the
299,998 digit computation even though it used 16 GB physical DDR3 RAM on the same machine. The working precision and accuracy goal
combination were maximized for exactly 300,000 digits, and the result was automatically saved as a file instead of just being displayed on the front end. Windows reserved a total of 63 GB of working memory of which 52 GB were recorded being used. The 300,000 digits came from the Mathematica 7.0 command
Quit; DateString[]
digits = 300000; str = OpenWrite[]; SetOptions[str,
PageWidth -> 1000]; time = SessionTime[]; Write[str,
NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
WorkingPrecision -> digits + 3, AccuracyGoal -> digits,
Method -> "AlternatingSigns"]]; timeused =
SessionTime[] - time; here = Close[str]
DateString[]
- 314159 digits of the constant took 3 tries due to hardware failure. Finishing on September 18, 2012, I computed 314159 digits, taking 59 GB of RAM. The digits came from the Mathematica 8.0.4 code
DateString[]
NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
WorkingPrecision -> 314169, Method -> "AlternatingSigns"] // Timing
DateString[]
- Sam Noble of Apple computed 1,000,000 digits of **C**<sub>*MRB*</sub> in 18 days 9 hours 11 minutes 34.253417 seconds.
- Finishing on Dec 11, 2012, Richard Crandall, an Apple scientist, computed 1,048,576 digits
in a lightning-fast 76.4 hours computation time (from the timing command). That's on a 2.93 GHz 8-core Nehalem.
- In Aug of 2018, I computed 1,004,993 digits of **C**<sub>*MRB*</sub> in 53.5 hours with 10 DDR4 RAM (of up to 3000 MHz) supported processor cores overclocked up to 4.7 GHz! Search this post for "53.5" for documentation.
- Sept 21, 2018: I computed 1,004,993 digits of **C**<sub>*MRB*</sub>
in 50.37 hours of absolute time (35.4 hours computation time) with 18
(DDR3 and DDR4) processor cores! Search this post for "50.37 hours"
for documentation.
- On May 11, 2019, I computed over 1,004,993 digits, using 28 kernels
on 18 DDR4 RAM (of up to 3200 MHz) supported cores overclocked up to
5.1 GHz in 45.5 hours of absolute time and only 32.5 hours of computation time! Search 'Documented in the attached ":3 fastest
computers together 3.nb." ' for the post that has the attached documenting notebook.
- On 10/19/20, using 3/4 of the MRB constant supercomputer 2, I finished an over 1,004,993 digits computation of **C**<sub>*MRB*</sub> in 44 hours of absolute time -- see [https://www.wolframcloud.com/obj/bmmmburns/Published/44%20hour%20million.nb][34] for documentation.
- I computed a little over 1,200,000 digits of **C**<sub>*MRB*</sub> in 11
days, 21 hours, 17 minutes, and 41 seconds (finishing on March 31, 2013). I used a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz 3.20 GHz.
- On May 17, 2013, I finished a 2,000,000 or more digit computation of **C**<sub>*MRB*</sub>, using only around 10GB of RAM. It took 37 days 5 hours 6 minutes 47.1870579 seconds. I used my six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz 3.20 GHz.
- A previous world-record computation of **C**<sub>*MRB*</sub> was finished on Sun 21 Sep 2014 at 18:35:06. It took 1 month 27 days 2 hours 45 minutes 15 seconds. The processor time for the 3,000,000+ digit computation was 22 days. I computed the 3,014,991 digits of **C**<sub>*MRB*</sub> with Mathematica 10.0, using my new version of Richard Crandall's code in the attached 3M.nb, optimized for my platform and large computations. I also used a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz with 64 GB of RAM, of which only 16 GB was used. Can you beat it (in number of digits, memory used, or time taken)? This confirms that my previous "2,000,000 or more digit computation" was accurate to 2,009,993 digits; they were used to check the first several digits of this computation. See the attached 3M.nb for the full code and digits.
- Finished on Wed 16 Jan 2019 19:55:20, I computed over 4 million digits of **C**<sub>*MRB*</sub>.
It took 4 years of continuous tries. This successful run took 65.13 days of computation time, with a processor time of 25.17 days, on a 6-core Intel computer at 3.7 GHz, overclocked up to 4.7 GHz on all cores, with 3000 MHz RAM. According to this computation, the previous record 3,000,000+ digit computation was accurate to 3,014,871 decimals. This computation used my algorithm for computing n^(1/n), as found in chapter 3 of the paper at
https://www.sciencedirect.com/science/article/pii/0898122189900242
and the 3 million+ computation used Crandall's algorithm. Both algorithms outperform Newton's method per calculation and iteration.
See attached [notebook][35].
M R Burns' algorithm:
x = SetPrecision[x, pr];
y = x^n; z = (n - y)/y;
t = 2 n - 1; t2 = t^2;
x =
x*(1 + SetPrecision[4.5, pr] (n - 1)/t2 + (n + 1) z/(2 n t) -
SetPrecision[13.5, pr] n (n - 1) 1/(3 n t2 + t^3 z));
(*N[Exp[Log[n]/n],pr]*)
Example:
ClearSystemCache[]; n = 123456789;
(*n is the n in n^(1/n)*)
x = N[n^(1/n),100];
(*x starts out as a relatively small precision approximation to n^(1/n)*)
pc = Precision[x]; pr = 10000000;
(*pr is the desired precision of your n^(1/n)*)
Print[t0 = Timing[While[pc < pr, pc = Min[4 pc, pr];
x = SetPrecision[x, pc];
y = x^n; z = (n - y)/y;
t = 2 n - 1; t2 = t^2;
x = x*(1 + SetPrecision[4.5, pc] (n - 1)/t2 + (n + 1) z/(2 n t)
- SetPrecision[13.5, pc] n (n - 1)/(3 n t2 + t^3 z))];
(*You get a much faster version of N[n^(1/n),pr]*)
N[n - x^n, 10]](*The error*)];
ClearSystemCache[]; n = 123456789; Print[t1 = Timing[N[n - N[n^(1/n), pr]^n, 10]]]
Gives
{25.5469,0.*10^-9999984}
{101.359,0.*10^-9999984}
R Crandall's algorithm:
While[pc < pr, pc = Min[3 pc, pr];
x = SetPrecision[x, pc];
y = x^n - n;
x = x (1 - 2 y/((n + 1) y + 2 n n));];
(*N[Exp[Log[n]/ n],pr]*)
Example:
ClearSystemCache[]; n = 123456789;
(*n is the n in n^(1/n)*)
x = N[n^(1/n)];
(*x starts out as a machine precision approximation to n^(1/n)*)
pc = Precision[x]; pr = 10000000;
(*pr is the desired precision of your n^(1/n)*)
Print[t0 = Timing[While[pc < pr, pc = Min[3 pc, pr];
x = SetPrecision[x, pc];
y = x^n - n;
x = x (1 - 2 y/((n + 1) y + 2 n n));];
(*N[Exp[Log[n]/n],pr]*)
N[n - x^n, 10]](* The error*)]; Print[
t1 = Timing[N[n - N[n^(1/n), pr]^n, 10]]]
Gives
{32.1406,0.*10^-9999984}
{104.516,0.*10^-9999984}
More information available upon request.
----------
- Finished on Fri 19 Jul 2019 18:49:02, I computed over 5 million digits of **C**<sub>*MRB*</sub>.
Methods are described in the reply below that starts with "Attempts at a 5,000,000 digit calculation".
For this 5 million digit calculation of the MRB constant using the 3-node MRB supercomputer, the processor time was 40 days and the actual time was 64 days. That is faster than the 4 million digit computation, which used just one node.
- I finally computed 6,000,000 digits of the MRB constant after 8 tries in 19 months, finishing on Tue 30 Mar 2021 22:02:49, in 160 days. (Search "8/24/2019 It's time for more digits!" below.)
The MRB constant supercomputer 2 said the following:
Finished on Tue 30 Mar 2021 22:02:49. Processor and actual time were 5.28815859375*10^6 and 1.38935720536301*10^7 s. respectively
Enter MRB1 to print 6029991 digits. The error from a 5,000,000 or more digit calculation that used a different method is
0.*10^-5024993
That means that the 5,000,000 digit computation was actually accurate to 5,024,993 decimals!
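For context, the Block[...] code used in the 2006-2008 records above appears to be the Cohen-Rodriguez Villegas-Zagier alternating-series acceleration, in which summing n terms yields roughly n·log10(3+√8) ≈ 0.76 n digits. A direct machine-precision transcription to Python (an illustration only; the record runs used thousands of digits in Mathematica):

```python
import math

# Double-precision transcription of the (3 + Sqrt[8])^n acceleration code.
def mrb_accelerated(n_terms):
    d = (3 + math.sqrt(8)) ** n_terms
    d = (d + 1 / d) / 2
    b, c, s = -1.0, -1 - d, 0.0
    for k in range(n_terms):
        c = b - c
        b = b * (k + n_terms) * (k - n_terms) / ((k + 0.5) * (k + 1))
        s = s + c * (k + 1) ** (1 / (k + 1))   # a[k] = (k+1)^(1/(k+1))
    return 0.5 - s / d

print(mrb_accelerated(30))  # ~ 0.187859642462067..., near full double precision
```

With only 30 terms the truncation error is already far below double-precision rounding, which is why the record runs instead spend their effort on high-precision arithmetic for the n^(1/n) terms.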
----------
----------
----------
----------
Here is my mini-cluster of the fastest 3 computers (the MRB constant supercomputer 0) mentioned below:
The one on the left is my custom-built Extreme Edition 6-core, later upgraded with an 8-core Xeon processor.
The one in the center is my fast little 4-core Asus with 2400 MHz RAM.
The one on the right is my fastest: a Digital Storm 6-core overclocked to 4.7 GHz on all cores, with 3000 MHz RAM.
![first 3 way cluster][36]
[1]: http://marvinrayburns.com/aboutme.html
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1ac.JPG&userId=366611
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=10514Capture5.JPG&userId=366611
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=47625a.JPG&userId=366611
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=6657Capture12.JPG&userId=366611
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2081Capture14.JPG&userId=366611
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=left.gif&userId=366611
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3959Capture7.JPG&userId=366611
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=right.gif&userId=366611
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=10297Capture.JPG&userId=366611
[11]: https://math.stackexchange.com/questions/1673886/is-there-a-more-rigorous-way-to-show-these-two-sums-are-exactly-equal
[12]: https://en.wikipedia.org/wiki/Riemann_series_theorem
[13]: https://www.math.ucdavis.edu/~hunter/intro_analysis_pdf/ch4.pdf
Marvin Ray Burns2014-10-09T18:08:49ZMathematica Benchmark
https://community.wolfram.com/groups/-/m/t/2164594
Hello all, where do I send the Mathematica notebook containing my benchmark results? (I came in first with my Ryzen 9.)
I don't know how to use Mathematica StackExchange; I am lame :)
Any help would be appreciated.
Thank you Ladies and Gents.
Jean-MichelJean-Michel Collard2021-01-16T16:40:15ZGenerate random covariance matrices?
https://community.wolfram.com/groups/-/m/t/2241906
I am interested in generating data from a multivariate normal distribution, in which I have some control over the relation between variables. I have tried the following:
nvars = 11;
nobs = 70;
k = 15;
W = RandomVariate[NormalDistribution[], {nvars, k}];
MD = DiagonalMatrix@RandomVariate[NormalDistribution[], nvars];
K = W.Transpose@W + MD;
datasim = RandomVariate[MultinormalDistribution@K, nobs]
However, I am not satisfied with my approach, because if I set higher values for `nvars` and `nobs`, it usually fails: the covariance matrix `K` generated this way is not always positive semi-definite.
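For comparison, squaring the diagonal perturbation guarantees that the result is positive semi-definite (a sketch of one possible variant, reusing the variable names from the code above, not the only fix):

```mathematica
nvars = 11; nobs = 70; k = 15;
W = RandomVariate[NormalDistribution[], {nvars, k}];
(* squaring the diagonal entries keeps W.Transpose[W] + MD positive semi-definite *)
MD = DiagonalMatrix[RandomVariate[NormalDistribution[], nvars]^2];
K = W.Transpose[W] + MD;
PositiveSemidefiniteMatrixQ[K]  (* should now always be True *)
datasim = RandomVariate[MultinormalDistribution[ConstantArray[0, nvars], K], nobs];
```

The key point is that `W.Transpose[W]` is positive semi-definite by construction, and adding a nonnegative diagonal matrix preserves that property.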
Does anyone know a better approach for generating large covariance matrices in which I have some control over the relation between variables?
Thanks in advance for any help!David G. Aragonés2021-04-10T23:44:29ZManipulate genealogy GEDCOM files using Wolfram Language?
https://community.wolfram.com/groups/-/m/t/1167459
Robert Nachbar has an interesting Wolfram video http://www.wolfram.com/broadcast/video.php?c=400&p=2&v=1497 on using Mathematica to manipulate genealogy GEDCOM files. As far as I can see the video does not provide a link to a source for his package. Does anyone know where I might find it?Ron Gove2017-08-19T21:06:59ZHacking a complex function with Mathematica
https://community.wolfram.com/groups/-/m/t/2241822
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/wolfram-community/Published/mmaHack.nbRobert Rimmer2021-04-11T01:21:31ZDistance between a point and others is an integer
https://community.wolfram.com/groups/-/m/t/2241420
I have 4 points in a 2D plane
{{0, 0}, {20, 0}, {138/5, (24 Sqrt[6])/5}, {25, 10 Sqrt[6]}}
and wish to find a 5th point such that the distances to all 4 points are integers.
I have tried solving for a point relative to the point {138/5, (24 Sqrt[6])/5}, specifying Integer elements and reals, and cannot get any specific point as an answer; everything comes back conditional. Satisfying all 4 points simultaneously seems unobtainable. My question is: is this solvable using Mathematica's built-in functions, or do I resort to a programmatic approach?
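One way to pose this search directly is to introduce an integer variable for each distance and hand the whole system to `FindInstance` (a sketch only; on a Diophantine problem like this it may run for a long time or return unevaluated):

```mathematica
pts = {{0, 0}, {20, 0}, {138/5, (24 Sqrt[6])/5}, {25, 10 Sqrt[6]}};
dists = {d1, d2, d3, d4};
(* one squared-distance equation per given point *)
eqns = MapThread[(x - #1[[1]])^2 + (y - #1[[2]])^2 == #2^2 &, {pts, dists}];
FindInstance[
 And @@ eqns && And @@ Thread[dists >= 0] &&
  Element[dists, Integers] && Element[{x, y}, Reals],
 Join[{x, y}, dists]]
```

Working with squared distances avoids `Sqrt` in the constraints, which tends to make the algebraic solvers behave better.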
By the way, the pairwise distances between those 4 points are integers.Paul Cleary2021-04-10T15:50:20ZNotebookOpen[ ] failed with notebook in cloud?
https://community.wolfram.com/groups/-/m/t/2240433
[According to here][1]
I expected this to work:
File1 = NotebookOpen[ "https:/mydomain.mysite.com/Unite3-4.nb"];
SelectionMove[File1, All, Notebook]
SelectionEvaluate[File1]
I also tried
NotebookOpen[ URL["https:/mydomain.mysite.com/Unite3-4.nb"]]
This returns $Failed.
[1]: https://reference.wolfram.com/language/ref/NotebookOpen.htmlMerve Temizer2021-04-09T04:49:08ZNASA's Perseverance Mars rover: encoding of parachute message
https://community.wolfram.com/groups/-/m/t/2210481
![enter image description here][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Ahmed%27sphoto.jpg&userId=20103
[2]: https://www.wolframcloud.com/obj/ahmed1/Published/nasa_graphical_binary_code6.nbAhmed Elbanna2021-03-05T19:24:49ZVertexShapeFunction pattern matching
https://community.wolfram.com/groups/-/m/t/2241020
I am trying to visualize a heterogeneous graph whose nodes are categorical. Here is a dummy graph for implementation:
Input is
```
graph = UndirectedEdge @@@ {{"u0", "c1"}, {"u1", "c3"}, {"u2", "c0"}, {"u3", "c2"}, {"u4", "c1"}, {"u0", "i0"}, {"u1", "i0"}, {"u2", "i0"}, {"u3", "i1"}, {"u4", "i2"}}
```
And it looks like this by specifying vertex coordinates.
![enter image description here][1]
I would like to customize the vertex shapes by using `VertexShapeFunction`, but I cannot make the pattern matching work. For instance, I would like the nodes starting with `u` to be squares:
```
Graph[graph, VertexLabels -> "Name", VertexShapeFunction -> {_?StringStartsQ["u"] -> "Square" }]
```
I am aware that it should be a pattern, as in
```
CompleteGraph[5, VertexShapeFunction -> {_?EvenQ -> "Star"}, VertexSize -> 0.2]
```
but I don't know how to make the pattern work. I tried:
```
Cases[{"u1", "p1"}, _?StringStartsQ[#, "u"] &]
```
I know `StringCases` and `Select` can easily return the strings I want, but unfortunately it seems only `Cases`-style pattern matching works for `VertexShapeFunction`:
```
StringCases[{"u1", "p1"}, "u" ~~ _]
Select[{"u1", "p1"}, StringStartsQ[#, "u"] &]
```
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2021-04-09at12.42.18PM.png&userId=524853Wenzhen Zhu2021-04-09T19:42:50ZConstructing cyclic graphs that are as pair-wise disjoint as possible
https://community.wolfram.com/groups/-/m/t/2240973
I cross-posted this on Mathematica Stack Exchange but haven't received any traction there. Perhaps someone on this forum has an idea. I've re-written the description somewhat in hopes this is more clear.
I have an application for which I need a set of circle graphs, where by "circle" I mean that the graph forms a single complete loop: with n vertices, it starts at one vertex, passes through the others, and finally loops back on itself. An example of a small circle graph:
aGraph=Graph[DirectedEdge @@ # & /@
Partition[RandomSample[Range[10]], 2, 1, {1, 1}]]
For a set of such graphs (really edge lists), I want to find out their pairwise intersection. I don't know if it is the right way, but I use this:
GraphEdgeIntersections[edgePairs_List] :=
With[{intersection = Intersection @@ edgePairs},
If[intersection == {}, 0,
Max@Flatten[GraphDistanceMatrix[intersection] /. Infinity -> 0]]
];
I know how to construct a set of "perfect" circles where the pairwise intersections yield the null set for the case I really care about, circles with 23 vertices:
anExampleSet =
Table[DirectedEdge @@ # & /@
Transpose[{Range[23], RotateRight[Range[23], i]}], {i, 1, 22}];
Partition[GraphEdgeIntersections[#] & /@ Tuples[anExampleSet, 2],
22] // MatrixForm
You can see that the diagonal entries have a path length of 22 and all other entries are 0 (indicating that the circles are completely orthogonal). When I count the edges in the flattened set, there are 506, which is exactly the total number of possible directed edges among 23 vertices:
Length[Select[Tuples[Range[23], 2], #[[1]] != #[[2]] &]]
Is there a method to construct the best set of circles so that I minimize the intersections when going beyond the 22 (or any similar set of 22) shown above?
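One cheap way to extend the family beyond the first 22 is to relabel an existing perfect family with a random permutation and measure how much the relabeled cycles overlap with the originals (a heuristic sketch with no optimality guarantee, reusing the rotation construction above):

```mathematica
baseSet = Table[DirectedEdge @@@
    Transpose[{Range[23], RotateRight[Range[23], i]}], {i, 1, 22}];
perm = RandomSample[Range[23]];
(* relabel every edge a -> b as perm[[a]] -> perm[[b]] *)
relabeled = Map[DirectedEdge @@ perm[[{First[#], Last[#]}]] &, baseSet, {2}];
(* shared-edge counts between each relabeled cycle and each base cycle *)
overlaps = Table[Length[Intersection[r, b]], {r, relabeled}, {b, baseSet}];
Max[Flatten[overlaps]]  (* worst pairwise overlap introduced by this permutation *)
```

Trying many permutations and keeping the best-scoring relabelings is one way to grow the set greedily, though it does not answer whether 0% badness is achievable.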
As part of this work, I came up with a way to visualize the "goodness" of the resulting set using "Image". For this, I construct the graph edge intersection matrix and "color" it according to whether I'm looking for "goodness" or "badness". The idea is to take the matrix and change the values into {R,G,B} pixels at each point according to the rules I want. Since I don't know how many entries are in the resulting set, I wrote a function to generate the replacement rules. I should probably have called the last parameter "badNotGoodP" but this is what I first wrote:
ConstructReplacementRule::usage =
"ConstructReplacementRule[values_,blackValue_,threshold_,\
redNotGreen_]";
ConstructReplacementRule[values_, blackValue_, threshold_,
redNotGreen_] :=
Block[{white = {1, 1, 1}, black = {0, 0, 0}, red = {1, 0, 0},
green = {0, 1, 0}, replacementValues, valuesNoDuplicates},
valuesNoDuplicates = DeleteDuplicates[Flatten@values];
replacementValues =
Cases[valuesNoDuplicates,
a_ :>
If[a == blackValue, black,
If[redNotGreen, If[a > threshold, red, white],
If[a < threshold, green, white]]]];
Thread[valuesNoDuplicates -> replacementValues]
];
This is used for example in the following way:
someRandomGraphs =
With[{randomSample = #},
DirectedEdge @@ # & /@
Partition[Append[randomSample, randomSample[[1]]], 2, 1]] & /@
Table[RandomSample[Range[23]], {i, 1, 256}];
gei = Partition[
GraphEdgeIntersections[#] & /@ Tuples[someRandomGraphs, 2], 256];
Now, if I want to show an image with green pixels for any point in the cross-matrix where we have 0 or 1 intersection, I do this:
Image[gei /. ConstructReplacementRule[gei, 22, 2, False]]
Or if I want to know cases where there are only 0 overlaps, I use this:
Image[gei /.
ConstructReplacementRule[DeleteDuplicates@Flatten@gei, 22, 1,
False]]
If I want to show "badness", I invert the last argument to the rule replacement.
Image[gei /.
ConstructReplacementRule[DeleteDuplicates@Flatten@gei, 22, 1, True]]
I have some generators for producing circle graphs that yield decent results but they aren't perfect. I'm hoping someone with a good background in graphs knows a construction method that yields a better result. One of my methods gives this result for a set of 256 tables:
![Results for a set of circles][1]
Repeating, the objective is to get as many "0" and "1" overlaps as possible. For this table, there are 65536 entries and of those, 256 are the diagonal where the overlap is complete, yielding the count of 22. The unweighted percentage "badness" for the set I'm providing is (3248+222+16+2)/65280 = 5.34%. Another construction method gives 5.37% "badness". Are there any with 0% badness?
I cannot "brute force" this since the number of possible sets is huge (23!).
If you've read this far, I appreciate your time and any help you may give.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2021-04-09at12.06.45PM.png&userId=1954142Mark Ross2021-04-09T19:18:56ZThe maximum output precision of InterpolatingFunction[]?
https://community.wolfram.com/groups/-/m/t/2233043
Hello, I am using Interpolation[] to connect the data points in the attached DATA.nb,
working at high precision (70 digits). However, it seems that Interpolation[] suppresses the precision of the table it acts on, so the values of the resulting interpolating function all come back at 16-digit precision across the range (the notebook below shows the beginning and ending values):
&[Wolfram Notebook][1]
I searched for a precision option in the Wolfram Documentation Center but found nothing. How can I refine the result of such an interpolation? Thanks!
[1]: https://www.wolframcloud.com/obj/6f203712-61ba-4d71-bd46-7c42404608bb袁 旭龙2021-03-31T10:32:28Z[WSS20] Curve OCR for "AP Calculus"-like "sketched" curves
https://community.wolfram.com/groups/-/m/t/2029803
![enter image description here][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Sinti%CC%81tulo-100MODIFIED1.png&userId=1878279
[2]: https://www.wolframcloud.com/obj/ahutahaii/Published/projectNotebook_updated.nbJosé Antonio Fernández2020-07-14T16:52:27ZApproval voting election: analysis and visualization
https://community.wolfram.com/groups/-/m/t/2240112
![enter image description here][1]
&[Wolfram Notebook][2]
[Original]: https://www.wolframcloud.com/obj/bobs/Published/STLMayorWardEssay.nb
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=elections_hero3.png&userId=20103
[2]: https://www.wolframcloud.com/obj/9c761b5a-7710-461a-9231-a538b42f9b8eBob Sandheinrich2021-04-08T16:18:16ZEmpty plot using NDSolve[ ] ?
https://community.wolfram.com/groups/-/m/t/2239857
Hey guys,
I'm quite new here and hope I'm in the right section. When trying to solve a PDE system with NDSolve, the plots I obtain in the end are empty. I read through quite a few posts regarding this issue but haven't found a solution that I could apply to my code.
The PDE system:
deqs = {D[m[x, t], t] == D[vm[x]/am[x] D[vm[x] m[x, t], x], x] + f[{m[x, t], c[x, t]}],
D[c[x, t], t] == D[vc[x]/ac[x]*D[vc[x]*c[x, t], x], x] - f[{m[x, t], c[x, t]}],
m[x, 0] == M, c[x, 0] == C0, m[0, t] == m[L, t] , c[0, t] == c[L, t]};
Most important code:
sol1 = NDSolve[deqs, {m, c}, {x, 0, L}, {t, 0, 10}]
Plot3D[m[x, t] /. sol1, {x, 0, L}, {t, 0, 10}]
I assume my mistake is already somewhere in these lines; f is a polynomial, vc and vm are constants, and ac and am are arctan(x)-like functions.
To be precise:
f[{a_, b_}] = g*a^2*b - d*a
am[x_]:= (A(ArcTan[B(x-L/2)]+ Pi/2))^(-1);
ac[x_]:=(A (ArcTan[B(-x+L/2)]+Pi/2))^(-1);
vm[x_] := vM;
vc[x_] := vC;
Does anyone have an idea why this doesn't work?
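Independent of the model details, note that `NDSolve` returns a *list* of rule lists, so `m[x, t] /. sol1` yields a list of expressions, which can produce an empty `Plot3D`. Extracting the first solution and evaluating inside the plot is the usual pattern (a sketch, assuming the definitions above):

```mathematica
sol1 = NDSolve[deqs, {m, c}, {x, 0, L}, {t, 0, 10}];
(* take the first rule list and evaluate before plotting *)
Plot3D[Evaluate[m[x, t] /. First[sol1]], {x, 0, L}, {t, 0, 10}]
```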
Thanks a lot in advance!Janis Köhler2021-04-08T13:39:37ZHow can I do NIntegrate of derivative of the function?
https://community.wolfram.com/groups/-/m/t/2239047
Hi everyone,
I define a function, then define a new function by applying NIntegrate to its derivative.
As an example, here is a simple case.
Clear[f,d]
f[x_?NumericQ, a_?NumericQ] := Sin[a x]
d[x_?NumericQ] := NIntegrate[D[f[x, a], x], {a, 0, 1}]
And try to get the result at some x.
With[{x = 2}, Evaluate@d[x]]
But it does not work, and I get these error messages:
> General::ivar: 2 is not a valid variable.
> NIntegrate::inumr: The integrand \!\(\*SubscriptBox[\(\[PartialD]\), \(2\)]\(f[2, a]\)\) has evaluated to non-numerical values for all sampling points in the region with boundaries {{0,1}}.
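(A common workaround is to take the derivative before any number is substituted for `x`, e.g. via `Derivative`, so `D` never sees a numeric differentiation variable. A sketch, assuming the simple `f` above:)

```mathematica
Clear[f, d];
f[x_, a_] := Sin[a x]  (* no NumericQ guard, so f stays symbolically differentiable *)
(* Derivative[1, 0][f] is the derivative of f w.r.t. its first argument *)
d[x_?NumericQ] := NIntegrate[Derivative[1, 0][f][x, a], {a, 0, 1}]
d[2]  (* integrates a Cos[a x] at x = 2 over a from 0 to 1 *)
```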
How can I fix this?Sangshin Park2021-04-07T12:14:20Z