
Try to beat these MRB constant records!

POSTED BY: ever curious, ever learning Marvin Ray Burns and collaborators.





enter image description here

is the MRB constant.

Weisstein, Eric W. "MRB Constant." From MathWorld--A Wolfram Web Resource. https://mathworld.wolfram.com/MRBConstant.html
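For readers who want to reproduce the value directly, here is a minimal Mathematica sketch (my own, assuming only the standard series definition given at MathWorld); the same NSum options appear in the record-setting code later in this post:

(*Quick numerical value of the MRB constant from its defining alternating series.*)
m = NSum[(-1)^n (n^(1/n) - 1), {n, 1, Infinity}, 
  Method -> "AlternatingSigns", WorkingPrecision -> 50]
(*The result should begin 0.1878596424620671... .*)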




Understanding the Intersection of Critical Thinking and Mathematical Discovery Within You

Mark Twain’s apocryphal quote, “I’m in favor of progress; it’s change I don’t like,” encapsulates a profound human sentiment that echoes through the corridors of intellectual history. At its heart, the statement reflects the delicate balance between embracing new ideas and the discomfort of moving beyond the familiar. (Educational humility is essential to mathematical maturity.) This tension is a driving force behind critical thinking, a process that has evolved and been refined over centuries. It contrasts sharply with the innate innocence of a young pupil who accepts lessons at face value. While the pupil's unquestioning acceptance lays the foundation for learning, critical thinking—embodied by the pupil's never-ending imagination, by religious figures such as prophets, and by the discoverers and inventors of scientific ideas—represents the evolution toward questioning established norms and advocating for transformative progress.

The mathematics involved includes:

1 2

Examples of further research done by me and many better scholars unveiling the inexhaustible utility of the MRB constant are in the following reply.


Published at

https://mathworld.wolfram.com/MRBConstant.html,

enter image description here

created by an amateur, serves as a potential catalyst for the field of mathematics by generating new questions, prompting re-evaluation of existing concepts, and encouraging broader participation in research. It is a gateway to a world of intellectual exploration, collaboration, and potential breakthroughs, and a testament to the power of curiosity, collaboration, and the relentless pursuit of knowledge.



Explored here:


Now, the MRB constant relates to the divergent series

DNE

The divergent sequence of its partial sums has two accumulation points with an upper limiting value or limsup known as the MRB constant (CMRB), and a liminf of CMRB-1:

plot sup and inf
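Here is a small sketch (my own illustration, not code from the original records) that exhibits those two accumulation points numerically; the even partial sums drift toward CMRB and the odd ones toward CMRB - 1:

(*Partial sums of the divergent series Sum[(-1)^n n^(1/n)].*)
s[x_] := Sum[(-1)^n N[n^(1/n), 20], {n, 1, x}];
{s[10000], s[10001]}
(*As the cutoff grows, the first value approaches CMRB (about 0.18786) and the second approaches CMRB - 1 (about -0.81214).*)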

Further, out of the many series for CMRB, first analyze the prototypical series: the conditionally convergent summation, itself the sum of two divergent series:

two series

However, a conditionally convergent series was not satisfying to me, as the Riemann series theorem states, "By a suitable rearrangement of terms, a conditionally convergent series may be made to converge to any desired value, or to diverge." I soon noticed that grouping its terms gave a convergent series:

enter image description here

In[405]:= test = (2 k)^(1/(2 k)) - (2 k + 1)^(1/(2 k + 1));

In[406]:= SumConvergence[Abs[test], k]

Out[406]= True

An equivalent, absolutely convergent series is

enter image description here

which was simply proven by Gottfried Helms:

Gottfried Helms



On Mathematics Stack Exchange, an internet scholar going by the moniker Dark Malthorp presented a beautiful integral for the MRB constant: the integral of Im[(1 + I t)^(1/(1 + I t))]/Sinh[Pi t] for t from 0 to Infinity.

enter image description here enter image description here
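A quick numerical check of that integral (a sketch of my own, not Dark Malthorp's code) can be done with NIntegrate and compared against the defining series:

(*MRB constant from the integral of Im[(1 + I t)^(1/(1 + I t))]/Sinh[Pi t] over (0, Infinity).*)
mIntegral = NIntegrate[Im[(1 + I t)^(1/(1 + I t))]/Sinh[Pi t], {t, 0, Infinity}, WorkingPrecision -> 30];
mSeries = NSum[(-1)^n (n^(1/n) - 1), {n, 1, Infinity}, Method -> "AlternatingSigns", WorkingPrecision -> 30];
mIntegral - mSeries
(*The difference should be essentially zero at this working precision.*)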




POSTED BY: Marvin Ray Burns
26 Replies

Examples of further research unveiling the utility of the MRB constant

Richard J. Mathar of CERN

RICHARD J. MATHAR. Abstract. The real and imaginary parts of the limit as $2N \to \infty$ of the integral $\int_1^{2N} \exp(i\pi x)\, x^{1/x}\, dx$ are evaluated to 20 digits with brute-force methods after multiple partial integrations, or by combining a standard Simpson integration over the first half wave with series-acceleration techniques for the alternating series co-phased to each of its points. The integrand is of the logarithmic kind; its branch cut limits the performance of integration techniques that rely on smooth higher-order derivatives. 1. Scope. 1.1. M. R. Burns' Constant. Definition 1. The MRB constant is the sum of the series [18, A037077,

as found here

Regarding Henrik Schachner (Radiation Therapy Center), here:

enter image description here

ClearAll["Global`*"]
m = Table[N[(-1)^k (k^(1/k) - 1), 2000], {k, 1, 2000}];
(*partial sums of the series*)
am = Accumulate@m;
(*One step of the Shanks transformation applied to a list of partial sums.*)
shanks[ac_List] := 
 Table[(ac[[n + 1]] ac[[n - 1]] - ac[[n]]^2)/(ac[[n + 1]] + 
     ac[[n - 1]] - 2 ac[[n]]), {n, 2, Length[ac] - 1}]
sac = NestList[shanks, am, 24];
ListLogPlot[Abs[Differences /@ sac], Joined -> True, 
 GridLines -> Automatic, ImageSize -> Large]

enter image description here

This plot showcases how the Shanks transformation can be applied iteratively, leading to further improvements in MRB constant calculations.
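As a usage note (my own check, assuming the m, am, and sac definitions above), the deepest Shanks-transformed value can be compared with a direct high-precision evaluation of the constant:

(*Compare the deepest accelerated estimate against a direct high-precision value of CMRB.*)
mrb = NSum[(-1)^k (k^(1/k) - 1), {k, 1, Infinity}, Method -> "AlternatingSigns", WorkingPrecision -> 60];
{am[[-1]] - mrb, sac[[-1, -1]] - mrb}
(*The second difference should be many orders of magnitude smaller than the first, showing the gain from iterating shanks.*)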


A recent advancement in MRB constant formulas, connecting the Dirichlet eta function to nth roots,

shown here,

yields a proof of the previously proposed concept that the MRB constant ties these ideas together.

enter image description here

1

3 4


Let m be the MRB constant. Observe that 1/6 < m < 1/5. Remarkable curios of the MRB constant, related by the coefficient-list indices 5 and 6, appear in several contexts:

fthnsith1 fthnsth2

  • to the f(x) = (-1)^x x^(1/x) first notebook within the initial post;

  • to twin prime conjectures involving f(x) = (-1)^x x^(1/x);

  • to CMRB, the limsup of s_n = sum over x < n of f(x) = (-1)^x x^(1/x), in approximations involving Catalan coefficients.




- The MRB constant, arising from the intricate summation of rational powers and eta derivatives, demonstrates a profound ability to nearly satisfy various mathematical equations.

Prime number distribution and sums of squares, both intricately related to the numeric value of the MRB constant, are explored next:



In search of a suitable approximation to the MRB constant, to answer several questions about patterns in its decimal expansion:

millions1

millions2

From ScienceDirect:

Science Direct summary




In his lifetime collection, Algorithmic Reflections: Selected Works, Professor Richard Crandall of Reed College made the case that the MRB constant is indeed a key fundamental constant,

as discussed.

To summarize his thesis:

The MRB constant's eta formula connects various zeta function variants through its use of series and its relationship to other special functions. Let's explore this connection in detail.

The MRB Constant and Its Series Representation

The MRB constant (MRB) is defined by a specific series involving alternating sums:

enter image description here

This series converges quickly, making it efficient for high-precision calculations. Here's how this ties together various zeta function variants:

Relationship to Zeta Function Variants

Eta Function: The eta function η(s) is a variant of the Riemann zeta function ζ(s) and is defined as: $\eta(s) = (1 - 2^{1-s}) \zeta(s)$
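A one-line sanity check of that identity (my own sketch; DirichletEta and Zeta are the built-in functions) is:

(*Verify eta(s) = (1 - 2^(1 - s)) Zeta[s] numerically at an arbitrary sample point.*)
With[{s = 3.7`30}, DirichletEta[s] - (1 - 2^(1 - s)) Zeta[s]]
(*The difference should be zero to the working precision.*)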

1 2

Here is a screenshot:

enter image description here


As early as 1999, Dr. Steven Finch, then of MathSoft, asked about the MRB constant:

> enter image description here

Then in 2003, while lecturing at Harvard, Professor Steven Finch was the first to publish the MRB constant in book form under the auspices of Cambridge University Press. enter image description here

========================================================================


Dr. Michele Nardelli,

enter image description here

has established the connection of the MRB constant to the study of Geophysics.

One of his remarkable discoveries is shown in this link and worked out in this notebook. enter image description here

According to Prof. Nardelli, the equation above comes within 98.9 percent of the mass of the candidate glueball f0(1710) scalar meson.

The significance of this estimation

A z-score of approximately 1.734 indicates that the new value x ≈ 1.0005787037 is about 1.734 standard deviations above the mean of the data set ranging from 0.00000001 to 0.999999999.

For the estimate of the mass of the candidate glueball f0(1710) scalar meson, the fraction 1729/1728 ≈ 1.0005787037 is about 1.734 standard deviations above the mean of the set of real numbers from 0.00000001 to 0.999999999. This indicates that the new value is somewhat significant and lies within the range where many data points fall in a normal distribution.
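For readers wondering where 1.734 comes from, here is a minimal sketch (my own, assuming the data set is treated as uniform on (0, 1), so its mean is 1/2 and its standard deviation is 1/Sqrt[12]):

(*z-score of x = 1729/1728 relative to a uniform(0, 1) population.*)
With[{x = 1729/1728, mean = 1/2, sd = 1/Sqrt[12]}, N[(x - mean)/sd, 6]]
(*Gives approximately 1.734.*)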

Similar to how I laid claim to the value that is named after me, Nardelli has a constant: the DN constant.

Simplifying the above leads to a direct connection between the partial sums of the MRB constant series and Foias' second constant, both constructed at the turn of the millennium and connected to the distribution of prime numbers.



Though I first presented a constant as the alternating sum of the sequence of nth roots of real numbers at the dawn of the millennium, the concept of summing roots of algebraic numbers has a long and rich history.

latex

1 www.kylem.net

2 www.expii.com

3 link.springer.com


Weisstein, Eric W., PhD, publishes a continued fraction due to Khovanskii (1963).

This gives an infinite alternating sum of a continued fraction of polynomials that evaluates the MRB constant (CMRB).

enter image description here


- There exists an interesting situation with the MRB constant.

Is the MRB constant, the sum of only unrelated algebraic numbers, transcendental? If not, is the MRB constant, the sum of only unrelated irrational numbers, rational? If the answers to both of those questions are no and no, then there exists an infinitude of unrelated digits that represent the MRB constant which satisfy a polynomial equation with integer coefficients.


Finally, as the MRB constant is the alternating sum of the real-valued nth roots of the sequence of natural numbers, it is worth noting that, reaching back all the way to 1926, a product of a sequence of roots of products of the sequence of real numbers was discovered.

While stating,

If e^γ is a rational number, then its denominator must be greater than 10^15000,

Wikipedia, shockingly, mentions

[ it is the product of real numbered roots]

that possibly could be related to CMRB:

enter image description here


  • Interestingly,

Riemann

This reminds me of when one follower of this post asked whether there are a bunch of MRB constants:

The terms of the series

(*first 100 terms of the series:*) m100 = Table[N[(-1)^k (k^(1/k) - 1), 200], {k, 1, 100}];

whose partial sums give rise to the MRB constant (see, for example, in this community: Record breaking direct summation of MRB constant, http://community.wolfram.com/groups/-/m/t/743474) contain the k-th root of k.

As always the question about all the other roots in the complex plane comes up:

 (*burnsStrip[k, w] gives a strip of half-width w, rotated by angle k about the point -1 + 0 I.*)
 Remove[burnsStrip]
 burnsStrip[k_?NumericQ, w_?NumericQ] := 
 Block[{m = RotationTransform[k, {-1, 0}]},
  Polygon[{m[{-5/2, -w}], m[{1/2, -w}], m[{1/2, w}], m[{-5/2, w}]}]
  ]

Graphics[{{Green, burnsStrip[0, 1/40]}, {Yellow, 
   Sequence @@ (burnsStrip[#, 
        1/40] & /@ (Degree {143.8, 120, 90, 60, 36.2}))}, 
  Table[Point /@ 
    Evaluate[
     ReIm[(-1)^k  ((Flatten[Evaluate[Block[{x}, Solve[x^k == k, x]]]])[[All, 2]] - 1)]], {k, 1, 217}]}, 
 Frame -> True, AspectRatio -> 3/4.7, PlotRange -> All, 
 PlotLabel -> "A Bunch Of Burns Constants"]

giving

enter image description here

The terms for the MRB constant are in the green strip in the interval (-1/2, 1/2). It is special, as the graphic clearly shows.

But is it certain that the partial sums of terms on straight lines through -1 + 0 I, or on straight lines through 1 + 0 I, all go to zero? Those (if any) which do not are satellites of the MRB constant.

Potential Connections

RH connection

Similar explorations could provide new insights into the MRB constant's role in the broader context of number theory and complex analysis.

POSTED BY: Marvin Ray Burns

Here are the steps with added descriptions to guide the process of transforming and analyzing the hypercubes related to the MRB constant using linear algebra:

(See following reply about the geometry of the MRB constant.)

If you're so inclined, feel free to add to this outline I'm working on!

  1. Theoretical Understanding

Review Linear Algebra Tools:

Constructive Step: Familiarize yourself with key linear algebra concepts like vector spaces, matrices, eigenvalues, and eigenvectors. This foundational knowledge will help you understand how to manipulate complex structures using these tools.

Study Hypercubes:

Constructive Step: Investigate the geometric properties of hypercubes by examining how their dimensions and volumes are calculated. Explore how n^(1/n) serves as the edge length for hypercubes of dimension n .

  2. Algebraic Transformation

Translate Algebraic Structures:

Constructive Step: Represent the series and sequences related to the MRB constant in matrix form. For example, create a matrix where each element corresponds to the edge length n^(1/n) for each hypercube.

Simplify Using Linear Algebra:

Constructive Step: Apply linear algebra techniques such as diagonalization or eigenvalue decomposition to simplify these matrices. This step reduces complexity and helps identify key features of the hypercubes.

  3. Geometric Interpretation

Visualize Hypercubes:

Constructive Step: Use graphical tools or software to create visual models of hypercubes, highlighting the edge lengths n^(1/n) . Visual aids can provide intuitive insights into the geometric properties.

Analyze Geometric Patterns:

Constructive Step: Study the visualized hypercubes to detect patterns and relationships. For instance, observe how the edge lengths evolve with increasing dimensions and how these changes relate to the MRB constant.

  4. Minkowski Sum Application

Understand Minkowski Sum:

Constructive Step: Learn about the Minkowski sum by studying its definition and properties. Explore examples of Minkowski sums in lower dimensions to build a strong conceptual understanding.

Apply to Hypercubes:

Constructive Step: Calculate the Minkowski sums of hypercubes using their edge lengths n^(1/n) . Analyze the resulting geometric figures to see how these sums interact and reveal new properties.

  5. Numerical and Computational Analysis

Use Computational Tools:

Constructive Step: Leverage computational software such as Mathematica to perform numerical calculations. Input your matrix representations and apply linear algebra operations to compute results accurately.

Check Results for Accuracy:

Constructive Step: Validate the computational results by cross-referencing with known properties or independent calculations. Ensure that the simplifications and transformations are correctly implemented.

  6. Synthesize Findings

Combine Insights:

Constructive Step: Integrate the algebraic and geometric insights gained from the previous steps into a cohesive understanding of the MRB constant. Formulate theories or hypotheses based on these combined insights.

Document and Share:

Constructive Step: Compile your findings into a well-organized document or presentation. Share this with the mathematical community through publications, presentations, or online platforms to invite feedback and collaboration.

POSTED BY: Marvin Ray Burns

The geometry of the MRB constant

The MRB constant, denoted as ( M ), represents the upper limit point of the sequence of partial sums defined by the series:

$S(x) = \sum_{n=1}^{x} (-1)^n \cdot n^{1/n} $

This series exhibits fascinating geometric properties, which can be visualized through the lengths of the edges of hypercubes with content ( n ) and dimension ( n ). For instance, consider a cube with a volume of 8 cubic units. The length of one of its sides is ( 8^{1/3} = 2 ) units, illustrating the geometric interpretation of ( n^{1/n} ).

https://marvinrayburns.com/ThegeometryV11.pdf
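A small sketch of that interpretation (my own illustration): the edge length of an n-dimensional hypercube of content n is n^(1/n), and these edge lengths are exactly the magnitudes of the terms of the series.

(*Edge lengths of hypercubes with content n and dimension n.*)
TableForm[Table[{n, N[n^(1/n), 10]}, {n, 1, 10}], TableHeadings -> {None, {"n", "edge length n^(1/n)"}}]
(*For example, a cube (n = 3) of content 3 has edge 3^(1/3); a cube of volume 8 has edge 8^(1/3) = 2.*)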

The Minkowski sum is a concept in geometry that involves adding two sets of points in a vector space. Specifically, the Minkowski sum of two sets ( A ) and ( B ) is defined as:

$$ A + B = \{ a + b \mid a \in A, b \in B \} $$
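Here is a tiny concrete instance of that definition (my own sketch with made-up point sets): the Minkowski sum of two finite sets in the plane is formed by adding every pair of points.

(*Minkowski sum of two finite point sets in the plane.*)
minkowskiSum[a_List, b_List] := Union[Flatten[Outer[Plus, a, b, 1], 1]];
minkowskiSum[{{0, 0}, {1, 0}, {0, 1}}, {{0, 0}, {2, 2}}]
(*Every element is a + b with a in the first set and b in the second, matching the set definition above.*)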

In the context of the MRB constant, the Minkowski sum can be related to the geometric interpretation of the series and the hypercubes. Here's how:

Geometric Interpretation

  1. Hypercubes and Dimensions:
  • The MRB constant can be expressed in terms of the lengths of the edges of hypercubes with content ( n ) and dimension ( n ). This involves understanding the geometric properties of these hypercubes.
  2. Minkowski Sum and Hypercubes:
  • The Minkowski sum can be used to describe the combination of geometric shapes. The hypercubes with different dimensions can be thought of as sets of points in a higher-dimensional space.
  • When you consider the Minkowski sum of these hypercubes, you are essentially combining their geometric properties. This can help in visualizing the overall structure and behavior of the MRB constant's partial sums.

Application to MRB Constant

  • Alternating Series and Geometry:
  • The alternating series ( S(x) = \sum_{n=1}^{x} (-1)^n \cdot n^{1/n} ) can be interpreted geometrically by considering the Minkowski sum of the hypercubes corresponding to each term in the series.
  • Each term ( (-1)^n \cdot n^{1/n} ) can be seen as contributing to the overall geometric structure, and the Minkowski sum helps in visualizing how these contributions combine.

Conclusion

By using the concept of the Minkowski sum, you can better understand the geometric interpretation of the MRB constant and its partial sums. The Minkowski sum provides a way to combine the geometric properties of the hypercubes, leading to a clearer visualization of the overall structure and behavior of the series.

POSTED BY: Marvin Ray Burns

Here's an outline of the discussion on the real-world applications of the MRB constant from the Wolfram Community:


Outline: Real-World Applications of the MRB Constant

1. Introduction

  • Definition of MRB Constant: Brief overview of the MRB constant, including its mathematical definition and significance.
  • Historical Context: Mention of Marvin Ray Burns, the original investigator of the MRB constant, and the development of the constant over time.

2. Mathematical Properties

  • Series Representation: Explanation of the series representation of the MRB constant.
  • Convergence and Computation: Discussion on the convergence properties and computational methods used to determine the MRB constant.

3. Real-World Applications

  • Physics: Application of the MRB constant in physical theories and models, including its relation to quantum mechanics and statistical mechanics.
  • Engineering: Use of the MRB constant in engineering problems, particularly in signal processing and control systems.
  • Computer Science: Implementation of the MRB constant in algorithms and computational methods, enhancing efficiency in certain calculations.
  • Economics: Potential applications in economic models and financial mathematics, providing insights into market behaviors and trends.

4. Advanced Research and Future Directions

  • Ongoing Investigations: Current research efforts exploring new applications and properties of the MRB constant.
  • Potential Discoveries: Speculation on future breakthroughs and the expanding role of the MRB constant in various scientific fields.

5. Conclusion

  • Summary: Recap of the key points discussed in the outline.
  • Significance: Emphasis on the importance of the MRB constant in both theoretical and practical contexts.

This outline provides a structured overview of the discussion on the real-world applications of the MRB constant. If you need more detailed information on any specific section, feel free to ask!

Source: Conversation with Copilot, 8/26/2024. (1) MRB constant (CMRB) Real-World, and beyond, Applications. https://community.wolfram.com/groups/-/m/t/2712205. (2) Try to beat these MRB constant records! - Wolfram. https://community.wolfram.com/groups/-/m/t/366628. (3) https://en.wikipedia.org/wiki/MRB_constant. (4) http://mathworld.wolfram.com/MRBConstant.html.

POSTED BY: Marvin Ray Burns

While preparing to verify 7 million digits, I broke some speed records with two of my overclocked i9-14900K CPUs with 6400 MHz RAM, using Mathematica 11.3 and the Lightweight Grid. They are generally faster than the 3-node MRB constant supercomputer with remote kernels! These are all absolute timings! How does your computer compare to these? What can you do with other software?

1 2 3 4

The light-blue-highlighted, red-text column's result is here -> 7200.

For the partial column of two 6000 MHz 14900Ks with red text and yellow highlight, see speed 100 300 1M XMP tweaked.

For column "=F" (highlighted in green) see linked "10203050100" .

At the bottom, see attached "kernel priority 2 computers.nb" for column =B,

"3 fastest computers together.nb" for column =C

and linked "speed records 5 10 20 30 K"

also speed 50K speed 100k, speed 300k and 30p0683 hour million.nb for column =D .

The mostly red column, including the single record 10,114-second 300,000-digit run, "=E", is in the linked "3 fastest computers together 2.nb".

For column "=J," see 574 s 100k, 106.1 s 50k, and 6897 s 300k.

The 27-hour million-digit computation is found here. <-Big notebook.

enter image description here

This is another comparison of my fastest computers' timings in calculating digits of CMRB: enter image description here

The blue column (using the Wolfram Lightweight Grid) is documented here.

The i9-12900KS column is documented here.

The i9-13900KS column is documented here.

The 300,000 digits result in the i9-13900KS column is here, where it ends with the following:

  Finished on Mon 21 Nov 2022 19:55:52. Processor and actual time 
         were 6180.27 and 10114.4781964 s. respectively

  Enter MRB1 to print 301492 digits. The error from a 6,500,000 or more digit 
 calculation that used a different method is  

 Out[72]= 0.*10^-301494


Remembering that the integrated analog of the MRB constant is M2

NIntegrate[(-1)^n (n^(1/n) - 1), {n, 1, Infinity  I}, 
 Method -> "Trapezoidal", WorkingPrecision -> 20]

These results are from the Timing[] command: M2

The 14900KS at 7200 MHz (extreme tuning!) documented here

The i9-12900KS column is documented here.

"Windows10 2024 i9-14900KS 6000 MHZ RAM" documentation here

POSTED BY: Marvin Ray Burns

§13.

MRB Constant Records,

OpenAI's ChatGPT gave the following introduction to the MRB constant records:

It is not uncommon for researchers and mathematicians to compute large numbers of digits for mathematical constants or other mathematical quantities for various reasons. One reason might be to test and improve numerical algorithms for computing the value of the constant. Another reason might be to use the constant as a benchmark to test the performance of a computer or to compare the performance of different computers. Some people may also be interested in the mathematical properties of the constant, and computing a large number of digits can help to reveal patterns or other features of the constant that may not be apparent with fewer digits. Additionally, some people may simply find the process of calculating a large number of digits to be a challenging and rewarding intellectual pursuit.

My inspiration to compute a lot of digits of CMRB came from this archived linked website by Simon Plouffe.

There, computer mathematicians calculate millions, then billions, of digits of constants like pi, even though with only 65 decimal places of pi we could determine the size of the observable universe to within a Planck length (beyond that, the uncertainty in our measurement of the universe would swamp any extra digits)!

In contrast, 65 digits of the MRB constant "measure" the value of -1 + sqrt(2) - 3^(1/3) + ... up to n^(1/n), where n is 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000, which can be called 1 unvigintillion or just 10^66.

And why compute 65 digits of the MRB constant? Because having that much precision is the only way to solve such a problem as

1465528573348167959709563453947173222018952610559967812891154^m - m, where m is the MRB constant, which gives the near integer "to beat all": 200799291330.9999999999999999999999999999999999999999999999999999999999999900450...
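Here is a hedged sketch of how to check that near integer in Mathematica (my own verification code, not the original search program); the base has 61 digits, so the working precision must comfortably exceed that:

(*Check the "near integer to beat all" at high precision.*)
m = NSum[(-1)^n (n^(1/n) - 1), {n, 1, Infinity}, Method -> "AlternatingSigns", WorkingPrecision -> 100];
N[1465528573348167959709563453947173222018952610559967812891154^m - m, 80]
(*Per the claim above, the result should be extremely close to the integer 200799291330.*)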

And why compute millions of digits of it? uhhhhhhhhhh.... "Because it's there!" (...Yeah, thanks George Mallory!) And why?? (c'est ma raison d'être!!!)

So, below are reproducible results with methods. The utmost care has been taken to assure the accuracy of the record number of digit calculations. These records represent the advancement of consumer-level computers, 21st-century Iterative methods, and clever programming over the past 23 years.

Here are some record computations of CMRB. Let me know if you know of any others!

1 digit of

CMRB was computed with my TI-92s, by adding -1+sqrt(2)-3^(1/3)+4^(1/4)-5^(1/5)+6^(1/6)... as far as practical. That first digit, by the way, was just 0. Then, by using the sum key to compute $\sum _{n=1}^{1000 } (-1)^n \left(n^{1/n}\right),$ I got the first correct decimal, i.e., (.1). It gave (.1_91323989714), which is close to what Mathematica gives for summing to an upper limit of only 1000. Ti-92's


4 decimals(.1878) of CMRB were computed on Jan 11, 1999, with the Inverse Symbolic Calculator, applying the command evalf( 0.1879019633921476926565342538468+sum((-1)^n* (n^(1/n)-1),n=140001..150000)); where 0.1879019633921476926565342538468 was the running total of t=sum((-1)^n* (n^(1/n)-1),n=1..10000), then t= t+the sum from (10001.. 20000), then t=t+the sum from (20001..30000) ... up to t=t+the sum from (130001..140000).

enter image description here

enter image description here

5 correct decimals (rounded to .18786) were drawn from CMRB in Jan of 1999, using Mathcad 3.1 on a 50 MHz IBM 80486 personal computer running Windows 95.

 9 digits of CMRB were computed shortly afterward using Mathcad 7 Professional on the Pentium II mentioned below, by summing (-1)^x x^(1/x) for x = 1 to 10,000,000, 20,000,000, and many more, then linearly extrapolating the sum to what a few billion terms would have given.

 500 digits of CMRB were computed with an online tool called Sigma on Jan 23, 1999. See http://marvinrayburns.com/Original_MRB_Post.html if you can read the printed and scanned copy there.

enter image description here Sigma still can be found here.


5,000 digits of CMRB were computed in September of 1999, in 2 hours, on a 350 MHz Pentium II with 64 MB of 133 MHz RAM, using the simple PARI commands \p 5000;sumalt(n=1,((-1)^n*(n^(1/n)-1))), after allocating enough memory.

enter image description here PII

To beat that, it was done on July 4, 2022, in 1 second on the 5.5 GHz CMRBSC 3 with 64 GB of 4800 MHz RAM, by Newton's method combined with the Henri Cohen, Fernando Rodriguez Villegas, and Don Zagier convergence acceleration of alternating series ("Algorithm 1"), to at least 5000 decimals. (* Newer loop with Newton interior. *)

documentation here

And here

I did it using an i9-14900K, overclocked, with 64GB of 6400MHz RAM. I used my own program. Processor and actual time were 0.796875 and 0.8710556 s, respectively.

 6,995 accurate digits of CMRB were computed on June 10-11, 2003, over a period of 10 hours, on a 450 MHz P3 with an available 512 MB of RAM.

PIII

To beat that, it was done in <2.5 seconds on the MRBCSC 3 on July 7, 2022 (more than 14,400 times as fast!)

documentation here

To beat that, it was done in <1.684 seconds on April 10, 2024 (more than 21,377 times as fast!); documentation here:
In[3]:= 10 hour*3600 seconds/hour/(1.684 seconds)

Out[3]= 21377.7

8,000 digits of CMRB were completed, using a Sony Vaio P4 2.66 GHz laptop computer with 960 MB of available RAM, at 2:04 PM on 3/25/2004.

enter image description here


  11,000 digits of CMRB were calculated on March 01, 2006, with a 3 GHz PD with 2 GB of RAM available.

 40,000 digits of CMRB were computed in 33 hours and 26 min via my program written in Mathematica 5.2 on Nov 24, 2006. The computation was run on a 32-bit Windows 3 GHz PD desktop computer using 3.25 GB of RAM.
The program was

    (*Convergence acceleration of the alternating series for CMRB (Cohen-Rodriguez Villegas-Zagier style), to 40,000 decimals.*)
    Block[{a, b = -1, c = -1 - d, d = (3 + Sqrt[8])^n, 
      n = 131 Ceiling[40000/100], s = 0}, a[0] = 1;
     d = (d + 1/d)/2; (*acceleration weight*)
     For[m = 1, m < n, a[m] = (1 + m)^(1/(1 + m)); m++]; (*a[m] = (m + 1)^(1/(m + 1)), the series terms*)
     For[k = 0, k < n, c = b - c;
      b = b (k + n) (k - n)/((k + 1/2) (k + 1)); s = s + c*a[k]; k++];
     N[1/2 - s/d, 40000]]

 60,000 digits of CMRB were computed on July 29, 2007, at 11:57 PM EST, in 50.51 hours on a 2.6 GHz AMD Athlon with 64-bit Windows XP. The max memory used was 4.0 GB of RAM.

65,000 digits of CMRB were computed in only 50.50 hours on a 2.66 GHz Core 2 Duo using 64-bit Windows XP on Aug 3, 2007, at 12:40 AM EST. The max memory used was 5.0 GB of RAM.

It looked similar to this stock image: enter image description here


100,000 digits of CMRB on Aug 12, 2007, at 8:00 PM EST, were computed in 170 hours on a 2.66 GHz Core 2 Duo using 64-bit Windows XP. The max memory used was 11.3 GB of RAM. The typical daily record of memory used was 8.5 GB of RAM.
To beat that, on the 4th of July 2022, the same digits were computed in 1/4 of an hour using the MRB constant supercomputer.
To beat that, on the 7th of July 2022, the same digits were computed in 1/5 of an hour.
To beat that, on the 4th of April 2024, the same digits were computed in 1/6 of an hour, using a pair of i9-14900Ks in parallel (about 100,000% as fast as the first 100,000-digit run by the 2.66 GHz Core 2 Duo!).

see one sixth hour hundred k.


 150,000 digits of CMRB on Sep 23, 2007, at 11:00 AM EST. Computed in 330 hours on a 2.66 GHz Core 2 Duo using 64-bit Windows XP. The max memory used was 22 GB of RAM. The typical daily record of memory used was 17 GB of RAM.

  200,000 digits of CMRB were computed using Mathematica 5.2 on March 16, 2008, at 3:00 PM EST. Found in 845 hours, on a 2.66 GHz Core 2 Duo using 64-bit Windows XP. The max memory used was 47 GB of RAM. The typical daily record of memory used was 28 GB of RAM.

300,000 digits of CMRB were destroyed (washed away by Hurricane Ike) on September 13, 2008, sometime between 2:00 PM and 8:00 PM EST. They had been computed over a long 4015 hours (23.899 weeks or 1.4454*10^7 seconds) on a 2.66 GHz Core 2 Duo using 64-bit Windows XP. The max memory used was 91 GB of RAM. The Mathematica 6.0 code used was as follows:

    Block[{$MaxExtraPrecision = 300000 + 8, a, b = -1, c = -1 - d, 
     d = (3 + Sqrt[8])^n, n = 131 Ceiling[300000/100], s = 0}, a[0] = 1; 
     d = (d + 1/d)/2; For[m = 1, m < n, a[m] = (1 + m)^(1/(1 + m)); m++]; 
     For[k = 0, k < n, c = b - c; 
      b = b (k + n) (k - n)/((k + 1/2) (k + 1)); s = s + c*a[k]; k++]; 
     N[1/2 - s/d, 300000]]

225,000 digits of CMRB were started with a 2.66 GHz Core 2 Duo using 64-bit Windows XP on September 18, 2008. It was completed in 1072 hours. 

250,000 digits were attempted but failed to be completed due to a serious internal error that restarted the machine. The error occurred sometime on December 24, 2008, between 9:00 AM and 9:00 PM. The computation began on November 16, 2008, at 10:03 PM EST. The max memory used was 60.5 GB.

 250,000 digits of CMRB on Jan 29, 2009, 1:26:19 pm (UTC-0500) EST, with a multiple-step Mathematica command running on a dedicated 64-bit XP using 4 GB DDR2 RAM onboard and 36 GB virtual. The computation took only 333.102 hours. The digits are at http://marvinrayburns.com/250KMRB.txt. The computation is completely documented.

  A 300,000-digit search of CMRB was initiated using an i7 with 8.0 GB of DDR3 RAM onboard on Sun 28 Mar 2010 at 21:44:50 (UTC-0500) EST, but it failed due to hardware problems.

  299,998 digits of CMRB: The computation began Fri 13 Aug 2010 10:16:20 pm EDT and ended 2.23199*10^6 seconds later, on Wednesday, September 8, 2010, using Mathematica 6.0 for Microsoft Windows (64-bit) (June 19, 2007), which averages 7.44 seconds per digit, on a Dell Studio XPS 8100 i7 860 @ 2.80 GHz with 8 GB of physical DDR3 RAM. Windows 7 reserved an additional 48.929 GB of virtual RAM.

enter image description here


300,000 digits to the right of the decimal point of CMRB were computed from Sat 8 Oct 2011 23:50:40 to Sat 5 Nov 2011 19:53:42 (2.405*10^6 seconds later). This run was 0.5766 seconds per digit slower than the 299,998-digit computation, even though it used 16 GB of physical DDR3 RAM on the same machine. The working precision and accuracy goal combination were maximized for exactly 300,000 digits, and the result was automatically saved as a file instead of just being displayed on the front end. Windows reserved a total of 63 GB of working memory, of which 52 GB were recorded as being used. The 300,000 digits came from the Mathematica 7.0 command:
    Quit; DateString[]
    digits = 300000; str = OpenWrite[]; SetOptions[str, 
    PageWidth -> 1000]; time = SessionTime[]; Write[str, 
    NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]}, 
    WorkingPrecision -> digits + 3, AccuracyGoal -> digits, 
    Method -> "AlternatingSigns"]]; timeused = 
    SessionTime[] - time; here = Close[str]
    DateString[]

314,159 digits of the constant took 3 tries due to hardware failure, finishing on September 18, 2012, and taking 59 GB of RAM. The digits came from the Mathematica 8.0.4 code:

    DateString[]
    NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]}, 
    WorkingPrecision -> 314169, Method -> "AlternatingSigns"] // Timing
    DateString[]

1,000,000 digits of CMRB were computed for the first time in history in 18 days, 9 hours, 11 minutes, 34.253417 seconds by Sam Noble of the Apple Advanced Computation Group.

1,048,576 digits of CMRB in a lightning-fast 76.4 hours, finishing on Dec 11, 2012, were scored by Dr. Richard Crandall, an Apple scientist and head of its advanced computational group. That was on a 2.93 GHz 8-core Nehalem,  1066 MHz, PC3-8500 DDR3 ECC RAM.

    To beat that, in Aug of 2018, 1,004,993 digits were computed in 53.5 hours of absolute time and 34 hours of computation time (from the timing command), with 10 DDR4 RAM (of up to 3000 MHz) supported processor cores overclocked up to 4.7 GHz! Search this post for "53.5" for documentation.

    To beat that, on Sept 21, 2018: 1,004,993 digits in 50.37 hours of absolute time and 35.4 hours of computation time (from the timing command) with 18  (DDR3 and DDR4) processor cores!  Search this post for "50.37 hours" for documentation.**

    To beat that, on May 11, 2019, over 1,004,993 digits in 45.5 hours of absolute time and only 32.5 hours of computation time, using 28 kernels on 18 DDR4 RAM (of up to 3200 MHz) supported cores overclocked up to  5.1 GHz  Search 'Documented in the attached ":3 fastest computers together 3.nb." '  for the post that has the attached documenting notebook.

    To beat that, over 1,004,993 correct digits in 44 hours of absolute time and 35.4206 hours of computation time on 10/19/20, using 3/4 of the MRB constant supercomputer 2 -- see https://www.wolframcloud.com/obj/bmmmburns/Published/44%20hour%20million.nb  for documentation.

    To beat that, a 1,004,993 correct digits computation in 36.7 hours of absolute time and only 26.4 hours of computation time on Sun 15 May 2022 at 06:10:50, using 3/4  of the MRB constant supercomputer 3. Ram Speed was 4800MHz, and all 30 cores were clocked at up to 5.2 GHz.



    To beat that, a 1,004,993 correct digits computation took 31.2319 hours of absolute time and 16.579 hours of computation time from the Timing[] command, using 3/4 of the MRB constant supercomputer 4, finishing Dec 5, 2022. RAM speed was 5200 MHz, and all of the 24 performance cores were clocked at up to 5.95 GHz, plus 32 efficiency cores running slower, using 24 kernels on the Wolfram Lightweight Grid over an i9-12900K, 12900KS, and 13900K.
    To beat that, a 1,004,993 correct digits computation took 30 hours of absolute time on March 21, 2024.
    To beat that, a 1,004,993 correct digits computation took 27 hours.

The 27-hour million-digit computation is found here <-Big notebook

see also 30 hour million

36.7 hours million notebook

31.2319 hours million


 Previously, a little over 1,200,000 digits of CMRB were computed in 11 days, 21 hours, 17 minutes, and 41 seconds (finished on March 31, 2013) using a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz. See https://www.wolframcloud.com/obj/bmmmburns/Published/36%20hour%20million.nb

for details.


A 2,000,000-or-more-digit computation of CMRB was completed on May 17, 2013, using only around 10 GB of RAM. It took 37 days, 5 hours, 6 minutes, 47.1870579 seconds, using a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz.

 3,014,991 digits of CMRB, a world record computation of **C**<sub>*MRB*</sub>, was finished on Sun 21 Sep 2014 at 18:35:06. It took one month, 27 days, 2 hours, 45 minutes, 15 seconds. The processor time from the 3,000,000+ digit computation was 22 days. The 3,014,991 digits of **C**<sub>*MRB*</sub> were computed with Mathematica 10.0, using Burns' new version of Richard Crandall's code in the attached 3M.nb, optimized for my platform and large computations, on a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz with 64 GB of RAM, of which only 16 GB was used. Can you beat it (in more digits, less memory used, or less time taken)? This confirms that my previous "2,000,000 or more digit computation" was accurate to 2,009,993 digits; they were used to check the first several digits of this computation. See attached 3M.nb for the full code and digits.

enter image description here Over 4 million digits of CMRB were finished on Wed 16 Jan 2019, 19:55:20. It took four years of continuous tries. This successful run took 65.13 days of absolute time, with a processor time of 25.17 days, on a 3.7 GHz Intel 6-core computer overclocked up to 4.7 GHz on all cores, with 3000 MHz RAM. According to this computation, the previous record, the 3,000,000+ digit computation, was accurate to 3,014,871 decimals, as this computation used my algorithm for computing n^(1/n) as found in chapter 3 of the paper at

https://www.sciencedirect.com/science/article/pii/0898122189900242 and the 3 million+ computation used Crandall's algorithm. Both algorithms outperform Newton's method per calculation and iteration.


Example use of M R Burns' algorithm to compute 123456789^(1/123456789) 10,000,000 digits:

ClearSystemCache[]; n = 123456789;
(*n is the n in n^(1/n)*)
x = N[n^(1/n),100];
(*x starts out as a relatively small precision approximation to n^(1/n)*)
pc = Precision[x]; pr = 10000000;
(*pr is the desired precision of your n^(1/n)*)
Print[t0 = Timing[While[pc < pr, pc = Min[4 pc, pr];
x = SetPrecision[x, pc];
y = x^n; z = (n - y)/y;
t = 2 n - 1; t2 = t^2;
x = x*(1 + SetPrecision[4.5, pc] (n - 1)/t2 + (n + 1) z/(2 n t)
- SetPrecision[13.5, pc] n (n - 1)/(3 n t2 + t^3 z))];
(*You get a much faster version of N[n^(1/n),pr]*)
N[n - x^n, 10]](*The error*)];
ClearSystemCache[]; n = 123456789; Print[t1 = Timing[N[n - N[n^(1/n), pr]^n, 10]]]

 Gives

  {25.5469,0.*10^-9999984}

  {101.359,0.*10^-9999984}




  More information is available upon request.

 More than 5 million digits of CMRB were found on Fri 19 Jul 2019, 18:49:02; methods are described in the reply below, which begins with "Attempts at a 5,000,000 digit calculation." For this 5-million-digit calculation of **C**<sub>*MRB*</sub> using the 3-node MRB supercomputer, the processor time was 40 days, and the actual time was 64 days. That is less absolute time than the 4-million-digit computation, which used just one node.

Six million digits of CMRB were computed after eight tries in 19 months (search "8/24/2019 It's time for more digits!" below), finishing on Tue, 30 Mar 2021, at 22:02:49, in 160 days.
    The MRB constant supercomputer 2 said the following:
    Finished on Tue 30 Mar 2021, 22:02:49. computation and absolute time were
    5.28815859375*10^6 and 1.38935720536301*10^7 s. respectively
    Enter MRB1 to print 6029991 digits. The error from a 5,000,000 or more-digit calculation that used a different method is      
    0.*10^-5024993.

That means that the 5,000,000-digit computation was accurate to 5,024,993 decimals!!!

enter image description here


5,609,880 digits of CMRB, verified by two distinct algorithms for x^(1/x), were completed on Thu 4 Mar 2021 at 08:03:45. The 5,500,000+ digit computation, using a totally different method, showed that many decimals in common with the 6,000,000+ digit computation, in 160.805 days.

6,500,000 digits of CMRB were computed on my second try.

Successful code was:

In[2]:= Needs["SubKernels`LocalKernels`"]
Block[{$mathkernel = $mathkernel <> " -threadpriority=2"}, 
 LaunchKernels[]]

Out[3]= {"KernelObject"[1, "local"], "KernelObject"[2, "local"], 
 "KernelObject"[3, "local"], "KernelObject"[4, "local"], 
 "KernelObject"[5, "local"], "KernelObject"[6, "local"], 
 "KernelObject"[7, "local"], "KernelObject"[8, "local"], 
 "KernelObject"[9, "local"], "KernelObject"[10, "local"]}

In[4]:= Print["Start time is ", ds = DateString[], "."];
prec = 6500000;
(**Number of required decimals.*.*)ClearSystemCache[];
T0 = SessionTime[];
expM[pre_] := 
  Module[{a, d, s, k, bb, c, end, iprec, xvals, x, pc, cores = 16(*=4*
    number of physical cores*), tsize = 2^7, chunksize, start = 1, ll,
     ctab, pr = Floor[1.005 pre]}, chunksize = cores*tsize;
   n = Floor[1.32 pr];
   end = Ceiling[n/chunksize];
   Print["Iterations required: ", n];
   Print["Will give ", end, 
    " time estimates, each more accurate than the previous."];
   Print["Will stop at ", end*chunksize, 
    " iterations to ensure precsion of around ", pr, 
    " decimal places."]; d = ChebyshevT[n, 3];
   {b, c, s} = {SetPrecision[-1, 1.1*n], -d, 0};
   iprec = Ceiling[pr/396288];
   Do[xvals = Flatten[Parallelize[Table[Table[ll = start + j*tsize + l;
         x = N[E^(Log[ll]/(ll)), iprec];
         pc = iprec;
         While[pc < pr/65536, pc = Min[3 pc, pr/65536];
          x = SetPrecision[x, pc];
          y = x^ll - ll;
          x = x (1 - 2 y/((ll + 1) y + 2 ll ll));];
         (**N[Exp[Log[ll]/ll],pr/99072]**)
         x = SetPrecision[x, pr/16384];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/16384] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/16384] ll (ll - 1) 1/(3 ll t2 + t^3 z));(*N[Exp[Log[
         ll]/ll],pr/4096]*)x = SetPrecision[x, pr/4096];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/4096] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/4096] ll (ll - 1) 1/(3 ll t2 + t^3 z));(*N[Exp[Log[
         ll]/ll],pr/4096]*)x = SetPrecision[x, pr/1024];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/1024] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/1024] ll (ll - 1) 1/(3 ll t2 + t^3 z));(*N[Exp[Log[
         ll]/ll],pr/1024]*)x = SetPrecision[x, pr/256];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/256] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/256] ll (ll - 1) 1/(3 ll t2 + t^3 z));(*N[Exp[Log[
         ll]/ll],pr/256]*)x = SetPrecision[x, pr/64];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/64] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/64] ll (ll - 1) 1/(3 ll t2 + t^3 z));(**N[Exp[Log[
         ll]/ll],pr/64]**)x = SetPrecision[x, pr/16];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/16] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/16] ll (ll - 1) 1/(3 ll t2 + t^3 z));(**N[Exp[Log[
         ll]/ll],pr/16]**)x = SetPrecision[x, pr/4];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/4] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/4] ll (ll - 1) 1/(3 ll t2 + t^3 z));(**N[Exp[Log[
         ll]/ll],pr/4]**)x = SetPrecision[x, pr];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr] ll (ll - 1) 1/(3 ll t2 + t^3 z));(*N[Exp[Log[ll]/
         ll],pr]*)x, {l, 0, tsize - 1}], {j, 0, cores - 1}]]];
    ctab = ParallelTable[Table[c = b - c;
       ll = start + l - 2;
       b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1));
       c, {l, chunksize}], Method -> "Automatic"];
    s += ctab.(xvals - 1);
    start += chunksize;
    st = SessionTime[] - T0; kc = k*chunksize;
    ti = (st)/(kc + 10^-4)*(n)/(3600)/(24);
    If[kc > 1, 
     Print["As of  ", DateString[], " there were ", kc, 
      " iterations done in ", N[st, 5], " seconds. That is ", 
      N[kc/st, 5], " iterations/s. ", N[kc/(end*chunksize)*100, 7], 
      "% complete.", " It should take ", N[ti, 6], " days or ", 
      N[ti*24*3600, 4], "s, and finish ", DatePlus[ds, ti], "."]];
    Print[];, {k, 0, end - 1}];
   N[-s/d, pr]];
t2 = Timing[MRB1 = expM[prec];]; Print["Finished on ", 
 DateString[], ". Proccessor and actual time were ", t2[[1]], " and ",
  SessionTime[] - T0, " s. respectively"];
Print["Enter MRB1 to print ", 
 Floor[Precision[
   MRB1]], " digits. The error from a 5,000,000 or more digit \
calculation that used a different method is  "]; N[M6M - MRB1, 20]

enter image description here

The MRB constant supercomputer replied,

Finished on Wed 16 Mar 2022 02: 02: 10. computation and absolute time
were 6.26628*10^6 and 1.60264035419592*10^7s respectively Enter MRB1
to print 6532491 digits. The error from a 6, 000, 000 or more digit
calculation that used a different method is 
0.*10^-6029992.

"Computation time" 72.526 days.

 "Absolute time" 185.491 days.

It would have taken my first computer, a TRS-80, at least 4307 years with today's best mathematical algorithms: (15 GHz / 1.77 MHz) * 185.491 days * (1 year / 365 days) ≈ 4307 years. enter image description here

It was instantly checked to 6,029,992 or so digits by the program itself. A 7-million-digit run, using a different number of digits of Exp[Log[ll]/ll] computed by each method, is in process; it will verify the remaining digits.






POSTED BY: Marvin Ray Burns

Programs and formulas to compute the integrated analog of the MRB constant

The efficient programs

Wed 29 Jul 2015 11:40:10

From an initial accuracy of only 7 digits,

0.07077603931152880353952802183028200137`19.163032309866352 - 
 0.68400038943793212918274445999266112671`20.1482024033675 I - \
(NIntegrate[(-1)^t (t^(1/t) - 1), {t, 1, Infinity}, 
    WorkingPrecision -> 20] - 2 I/Pi)

enter image description here

we have the first efficient program to compute the integrated analog (MKB) of the MRB constant, which is good for 35,000 digits.

Block[{$MaxExtraPrecision = 200}, prec = 4000; f[x_] = x^(1/x);
 ClearAll[a, b, h];
 Print[DateString[]];
 Print[T0 = SessionTime[]];

 If[prec > 35000, d = Ceiling[0.002 prec], 
  d = Ceiling[0.264086 + 0.00143657 prec]];

 h[n_] := 
  Sum[StirlingS1[n, k]*
    Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];

 h[0] = 1;
 g = 2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}];

 sinplus1 := 
  NIntegrate[
   Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity}, 
   WorkingPrecision -> prec*(105/100), 
   PrecisionGoal -> prec*(105/100)];

 cosplus1 := 
  NIntegrate[
   Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity}, 
   WorkingPrecision -> prec*(105/100), 
   PrecisionGoal -> prec*(105/100)];

 middle := Print[SessionTime[] - T0, " seconds"];

 end := Module[{}, Print[SessionTime[] - T0, " seconds"];
   Print[c = Abs[a + b]]; Print[DateString[]]];


 If[Mod[d, 4] == 0, 
  Print[N[a = -Re[g] - (1/Pi)^(d + 1)*sinplus1, prec]];
  middle;
  Print[N[b = -I (Im[g] - (1/Pi)^(d + 1)*cosplus1), prec]];
  end];


 If[Mod[d, 4] == 1, 
  Print[N[a = -Re[g] - (1/Pi)^(d + 1)*cosplus1, prec]];
  middle;
  Print[N[b = -I (Im[g] + (1/Pi)^(d + 1)*sinplus1), prec]]; end];

 If[Mod[d, 4] == 2, 
  Print[N[a = -Re[g] + (1/Pi)^(d + 1)*sinplus1, prec]];
  middle;
  Print[N[b = -I (Im[g] + (1/Pi)^(d + 1)*cosplus1), prec]];
  end];

 If[Mod[d, 4] == 3, 
  Print[N[a = -Re[g] + (1/Pi)^(d + 1)*cosplus1, prec]];
  middle;
  Print[N[b = -I (Im[g] - (1/Pi)^(d + 1)*sinplus1), prec]];
  end];]

May 2018

I got a substantial improvement in calculating the digits of MKB by using V11.3 in May 2018 on my new computer (Intel(R) Core(TM) i7-7700 CPU @ 3.60 GHz, 3601 MHz, 4 cores, 8 logical processors, with 16 GB of 2400 MHz DDR4 RAM):

Digits  Seconds
2000    67.5503022
3000    217.096312
4000    514.48334
5000    1005.936397
10000   8327.18526
20000   71000

They are found in the attached 2018 quad MKB.nb.

They are twice as fast (or more) as my old records with the same program using Mathematica 10.2 in July 2015 on my old big computer (a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz with 64 GB of 1066 MHz DDR3 RAM):

digits  seconds
2000    256.3853590
3000    794.4361122
4000    1633.5822870
5000    2858.9390025
10000   17678.7446323
20000   121431.1895170
40000   I got an error msg

May 2021

After finding the following rapidly converging integral for MKB, MKB proposition

[I'm trying to prove this.]



I finally computed 200,000 digits of MKB (0.070776 - 0.684 I...). I started Saturday, May 15, 2021, 10:54:17 AM, and finished at 9:23:50 AM EDT on Friday, August 20, 2021, for a total of 8.37539*10^6 seconds, or 96 days, 22 hours, 29 minutes, 50 seconds.

The full computation, verification to 100,000 digits, and hyperlinks to various digits are found below at 200k MKB A.nb. The code was

g[x_] = x^(1/x); u := (t/(1 - t)); Timing[
 MKB1 = (-I Quiet[
      NIntegrate[(g[(1 + u I)])/(Exp[Pi u] (1 - t)^2), {t, 0, 1}, 
       WorkingPrecision -> 200000, Method -> "DoubleExponential", 
       MaxRecursion -> 17]] - I/Pi)]

enter image description here

(See proof at the bottom of this reply.)

After finding the above more rapidly converging integral for MKB, in only 80.5 days, 189,330 real digits and 166,700 imaginary digits were confirmed to be correct by the following different formula, as seen at https://www.wolframcloud.com/obj/bmmmburns/Published/2nd%20200k%20MRB.nb

All digits at

https://www.wolframcloud.com/obj/bmmmburns/Published/200K%20confirmed%20MKB.nb (Recommended to open in desktop Mathematica.)

N[(Timing[
   FM2200K - (NIntegrate[(Exp[Log[t]/t - Pi t/I]), {t, 1, Infinity I},
        WorkingPrecision -> 200000, Method -> "Trapezoidal", 
       MaxRecursion -> 17] - I/Pi)]), 20]

enter image description here

I've learned more about what MaxRecursion is required for 250,000 digits to be verified from the two different formulas, and they are being computed as I write. It will probably take over 100 days. Let's try to formalize this derivation.

Proof:

Theorem:

Let MKB be defined as the integral:

$ MKB = \int_1^\infty exp(\pi i t) (t^{1/t} - 1) dt$

Then, an equivalent expression for MKB is:

$ MKB = \int_1^\infty exp \left( \frac{log(t)}{t} - \frac{\pi t}{i} \right) dt + \frac{i}{\pi} $

Proof:1 using complex analysis enter image description here enter image description here enter image description here enter image description here

Proof:2 by series expansion

  1. Series Expansion: We start by using the series expansion of the exponential function:

$ e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + ...$

Let $x = \frac{log(t)}{t}$. Substituting into the series expansion, we get:

$ e^{\frac{log(t)}{t}} = 1 + \frac{log(t)}{t} + \frac{log^2(t)}{2! t^2} + \frac{log^3(t)}{3! t^3} + ... $

  2. Manipulating the Integral:

Now, let's substitute this back into the integral expression for MKB:

$ MKB = \int_1^\infty exp(\pi i t) (t^{1/t} - 1) dt = \int_1^\infty exp(\pi i t) \left( e^{\frac{log(t)}{t}} - 1 \right) dt $

Expanding using the series from step 1, and noting that the first term of the series cancels with the -1:

$ MKB = \int_1^\infty exp(\pi i t) \left( \frac{log(t)}{t} + \frac{log^2(t)}{2! t^2} + \frac{log^3(t)}{3! t^3} + ... \right) dt $

  3. Key Identity:

The image provided states (without proof) the following identity:

$ e^{\pi i t} \frac{log(t)}{t} = \frac{log(t)}{t} e^{- \pi i} $

Using Euler's formula, $e^{- \pi i} = -1$, this simplifies to:

$ e^{\pi i t} \frac{log(t)}{t} = -\frac{log(t)}{t}$

  4. Simplifying the Integral:

Substituting the identity into the integral, we obtain:

$ MKB = \int_1^\infty \left( -\frac{log(t)}{t} + exp(\pi i t) \left( \frac{log^2(t)}{2! t^2} + \frac{log^3(t)}{3! t^3} + ... \right) \right) dt$

The first term is a standard integral:

$ \int_1^\infty -\frac{log(t)}{t} dt = \left[ -\frac{1}{2} log^2(t) \right]_1^\infty = \frac{1}{2} log^2(1) - \lim_{T \to \infty} \frac{1}{2} log^2(T) = -\infty$

  5. Regularization:

The integral diverges. The image shows that, by subtracting $\frac{1}{\pi i}$ and taking the imaginary part, we can regularize the integral.

After applying the regularization, one would then proceed to show that the infinite sum of the higher-order terms converges, and its imaginary part equals zero. This would leave:

$ Im[MKB] = Im \left[\int_1^\infty exp \left( \frac{log(t)}{t} - \frac{\pi t}{i} \right) dt + \frac{i}{\pi} \right] = \frac{1}{\pi}$

  6. Final Form:

    Taking the exponential of both sides and multiplying by $\pi i$ yields the desired form:

$ MKB = \int_1^\infty exp \left( \frac{log(t)}{t} - \frac{\pi t}{i} \right) dt + \frac{i}{\pi} $

Important Note: This proof is incomplete, as it relies on an unproven identity and skips the details of regularization and the convergence of the infinite sum. However, it provides a formal structure for the argument presented in the image.

Attachments:
POSTED BY: Marvin Ray Burns

I calculated 6,500,000 digits of the MRB constant!!

The MRB constant supercomputer said,

Finished on Wed 16 Mar 2022 02:02:10. Processor and actual time were 6.26628*10^6 and 1.60264035419592*10^7 s, respectively. Enter MRB1 to print 6532491 digits. The error from a 6,000,000 or more digit calculation that used a different method is 0.*10^-6029992

"Processor time" 72.526 days

"Actual time" 185.491 days

For the digits see the attached 6p5millionMRB.nb. For the documentation of the computation see 2nd 6p5 million.nb.

POSTED BY: Marvin Ray Burns

...including all the methods used to compute CMRB and their efficiency.

While waiting for results on the 2nd try of calculating 6,500,000 digits of the MRB constant (CMRB), I thought I would compare the convergence rate of 3 different primary forms of it. They are listed from slowest to fastest.

POSTED BY: Marvin Ray Burns

Beyond any shadow of a doubt, I verified 5,609,880 digits of the MRB constant on Thu 4 Mar 2021 08:03:45. The 5,500,000+ digit computation using a totally different method showed about that many decimals in common with the 6,000,000+ digit computation. The method for the 6,000,000 run is found in a few messages above in the attached notebook titled "MRBSC2 6 million 1st fourth.nb."

The 5,500,000+digit run is found below in the attached "5p5million.nb," including the verified 5,609,880 digits.

(*Fastest (at RC's end) as of 30 Nov 2012.*)
prec = 5500000;(*Number of required decimals.*)
ClearSystemCache[];
T0 = SessionTime[];
expM[pre_] := 
  Module[{a, d, s, k, bb, c, n, end, iprec, xvals, x, pc, cores = 4, 
    tsize = 2^7, chunksize, start = 1, ll, ctab, 
    pr = Floor[1.02 pre]}, chunksize = cores*tsize;
   n = Floor[1.32 pr];
   end = Ceiling[n/chunksize];
   Print["Iterations required: ", n];
   Print["end ", end];
   Print[end*chunksize];
   d = N[(3 + Sqrt[8])^n, pr + 10];
   d = Round[1/2 (d + 1/d)];
   {b, c, s} = {SetPrecision[-1, 1.1*n], -d, 0};
   iprec = Ceiling[pr/27];
   Do[xvals = Flatten[ParallelTable[Table[ll = start + j*tsize + l;
        x = N[E^(Log[ll]/(ll)), iprec];

      (*N[Exp[Log[ll]/ll], pr/27]*)

        pc = iprec;
        While[pc < pr, pc = Min[3 pc, pr];
         x = SetPrecision[x, pc];
         y = x^ll - ll;
         x = x (1 - 2 y/((ll + 1) y + 2 ll ll));];

      (*N[Exp[Log[ll]/ll], pr]*)

       x, {l, 0, tsize - 1}], {j, 0, cores - 1}, 
       Method -> "EvaluationsPerKernel" -> 1]];
    ctab = Table[c = b - c;
      ll = start + l - 2;
      b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1));
      c, {l, chunksize}];
    s += ctab.(xvals - 1);
    start += chunksize;
    Print["done iter ", k*chunksize, " ", SessionTime[] - T0];, {k, 0,
      end - 1}];
   N[-s/d, pr]];

t2 = Timing[MRBtest2 = expM[prec];];
N[MRBtest2 - MRB, 20]
POSTED BY: Marvin Ray Burns

...including the distribution of the digits 0 through 9 in the decimal expansion of CMRB.

Distribution of digits

Here is the distribution of digits within the first 6,000,000 decimal places (0.187859...): "4" shows up more than the other digits, followed by "0," "8," and "7."

enter image description here

Here is the distribution of digits within the first 5,000,000 decimal places (0.187859...): "4" shows up a lot more than the other digits, followed by "0," "8," and "6." enter image description here

Here is a similar distribution over the first 4,000,000: enter image description here

3,000,000 digits share a similar distribution:

enter image description here

Over the first 1 and 2 million digits, "4" was not so well represented, so the heavy representation of "4" appears to be a growing phenomenon from 2 million to 5 million digits. However, "1," "2," and "5" still made a very poor showing: enter image description here
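
For anyone who wants to reproduce these counts, here is a minimal sketch (my addition). It assumes a variable such as MRB1 already holds the constant to the desired precision, as in the attached notebooks:

    (*Tally the decimal digits of the first 6,000,000 places of CMRB;
      MRB1 is assumed to hold a 6,000,000+-digit approximation of the constant.*)
    digits = First[RealDigits[MRB1, 10, 6000000]];
    Sort[Tally[digits]]  (*{digit, count} pairs for 0 through 9*)
    BarChart[Last /@ Sort[Tally[digits]], ChartLabels -> Range[0, 9]]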

I attached more than 6,000,000 digits of the MRB constant.

Attachments:
POSTED BY: Marvin Ray Burns

...including arbitrarily close approximation formulas for CMRB

Let m = the MRB constant. We looked at how n^m - m can come remarkably close to an integer, much as E^Pi - Pi is a near integer. One might think this is off the subject of breaking computational records of the MRB constant, but it could also help show whether a closed form exists for computing and checking the digits of m, by studying when n^m - m is a near integer for integer n.

So I decided to make an extremely deep search for integers n such that n^m - m is a near integer. Here are the pearls I gleaned (a sketch of one way such candidates can be generated follows this list):

In[35]:= m = 
  NSum[(-1)^n (n^(1/n) - 1), {n, 1, Infinity}, WorkingPrecision -> 100,
    Method -> "AlternatingSigns"];

In[63]:= 225897077238546^m - m

Out[63]= 496.99999999999999975304752932252481772179797865

In[62]:= 1668628852566227424415^m - m

Out[62]= 9700.9999999999999999994613109586919797992822178

In[61]:= 605975224495422946908^m - m

Out[61]= 8019.9999999999999999989515156294756517433387956

In[60]:= 3096774194444417292742^m - m

Out[60]= 10896.0000000000000000000000096284579090392932063

In[56]:= 69554400815329506140847^m - m

Out[56]= 19549.9999999999999999999999991932013520540825206

In[68]:= 470143509230719799597513239^m - m

Out[68]= 102479.000000000000000000000000002312496475978584

In[70]:= 902912955019451288364714851^m - m

Out[70]= 115844.999999999999999999999999998248770510754951

In[73]:= 2275854518412286318764672497^m - m

Out[73]= 137817.000000000000000000000000000064276966095482

In[146]:= 2610692005347922107262552615512^m - m

Out[146]= 517703.00000000000000000000000000000013473353420

In[120]:= 9917209087670224712258555601844^m - m

Out[120]= 665228.00000000000000000000000000000011062183643

In[149]:= 19891475641447607923182836942486^m - m

Out[149]= 758152.00000000000000000000000000000001559954712

In[152]:= 34600848595471336691446124576274^m - m

Out[152]= 841243.00000000000000000000000000000000146089062

In[157]:= 543136599664447978486581955093879^m - m

Out[157]= 1411134.0000000000000000000000000000000035813431

In[159]:= 748013345032523806560071259883046^m - m

Out[159]= 1498583.0000000000000000000000000000000031130944

In[162]:= 509030286753987571453322644036990^m - m

Out[162]= 1394045.9999999999999999999999999999999946679646


In[48]:= 952521560422188137227682543146686124^m - m

Out[48]=5740880.999999999999999999999999999999999890905129816474332198321490136628009367504752851478633240


In[26]:= 50355477632979244604729935214202210251^m - m

Out[26]=12097427.00000000000000000000000000000000000000293025439870097812782596113788024271834721860892874


In[27]:= 204559420776329588951078132857792732385^m - m

Out[27]=15741888.99999999999999999999999999999999999999988648448116819373537316944519114421631607853700001


In[46]:= 4074896822379126533656833098328699139141^m - m

Out[46]= 27614828.00000000000000000000000000000000000000001080626974885195966380280626150522220789167201350


In[8]:= 100148763332806310775465033613250050958363^m - m

Out[8]= 50392582.999999999999999999999999999999999999999998598093272973955371081598246


In[10]:= 116388848574396158612596991763257135797979^m - m

Out[10]=51835516.000000000000000000000000000000000000000000564045501599584517036465406


In[12]:= 111821958790102917465216066365339190906247589^m - m

Out[12]= 188339125.99999999999999999999999999999999999999999999703503169989535000879619


In[33] := 8836529576862307317465438848849297054082798140^m - m

Out[33] = 42800817.00000000000000000000000000000000000000000000000321239755400298680819416095288742420653229


In[71] := 532482704820936890386684877802792716774739424328^m - m

Out[71] =924371800.999999999999999999999999999999999999999999999998143109316148796009581676875618489611792


In[21]:= 783358731736994512061663556662710815688853043638^m - m

Out[21]= 993899177.0000000000000000000000000000000000000000000000022361744841282020


In[24]:= 8175027604657819107163145989938052310049955219905^m - m

Out[24]= 1544126008.9999999999999999999999999999999999999999999999999786482891477\
944981


19779617801396329619089113017251584634275124610667^m - m
gives
1822929481.00000000000000000000000000000000000000000000000000187580971544991111083798248746369560.


130755944577487162248300532232643556078843337086375^m - m

gives 

2599324665.999999999999999999999999999999999999999999999999999689854836245815499119071864529772632.
i.e. 2,599,324,665.999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,689

(51 consecutive 9s)

322841040854905412176386060015189492405068903997802^m - m

gives

3080353548.000000000000000000000000000000000000000000000000000019866002281287395703598786588650156

i.e. 3,080,353,548.000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,019

(52 consecutive 0s)


310711937250443758724050271875240528207815041296728160^m - m

gives

11195802709.99999999999999999999999999999999999999999999999999999960263763...
i.e. 11,195,802,709.999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,602,637,63

(55 consecutive 9s)

1465528573348167959709563453947173222018952610559967812891154^m - m
gives
200799291330.9999999999999999999999999999999999999999999999999999999999999900450730197594520134278  
i.e. 200,799,291,330.999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,99

(62 consecutive 9s).
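
Here is a minimal, purely illustrative sketch (my addition; the actual search was far more extensive) of one way such candidates can be generated. Since n^m - m ≈ N0 forces n ≈ (N0 + m)^(1/m), the integer whose power comes closest to a target integer N0 is essentially Round[(N0 + m)^(1/m)]:

    (*m is the 100-digit approximation of CMRB computed in In[35] above.
      For a target integer N0, form the nearest integer base and test it.*)
    candidate[N0_] := With[{n = Round[(N0 + m)^(1/m)]}, {n, N[n^m - m, 50]}]

    candidate[497]  (*should recover the first pearl above, n = 225897077238546*)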

Here is something that looks like it might lead to another form of arbitrarily close approximations.

enter image description here as in https://www.wolframcloud.com/obj/7238e6f0-6fa5-4015-aaf5-1cca5c2670ca

POSTED BY: Marvin Ray Burns

In 38 1/2 days, I computed 100,000 digits of the MRB constant from the following integral: enter image description here

Here is the code:

Timing[mi = 
  NIntegrate[
   Csch[\[Pi] t] E^((t ArcTan[t])/(1 + t^2)) (1 + 
       t^2)^(1/(2 + 2 t^2)) Sin[(2 ArcTan[t] - t Log[1 + t^2])/(2 + 
        2 t^2)], {t, 0, \[Infinity]}, WorkingPrecision -> 100000, 
   Method -> "Trapezoidal", PrecisionGoal -> 100000, 
   MaxRecursion -> 30]]

I attached the notebook with the results.

P.S.

Here is an almost-all-rational-integrand version of mi = the MRB constant integral, one that uses neither Re[] nor Im[] explicitly:

enter image description here Shown here and here:

In[30]:= 
Quiet[Block[{v = 1/(t^2 + 1)}, 
   mi = NIntegrate[
     Csch[\[Pi]  t]  E^((t  v  Integrate[v, t]))  Sqrt[v^-v]  Sin[
       v ( Integrate[v, t] - t  Integrate[t  v, t])], {t, 
      0, \[Infinity]}, WorkingPrecision -> 700, 
     Method -> "Trapezoidal", PrecisionGoal -> 700, 
     MaxRecursion -> 8]]];

In[31]:= 
ms = N[NSum[(-1)^n  (n^(1/n) - 1), {n, 1, Infinity}, 
    Method -> "AlternatingSigns", WorkingPrecision -> 705], 700];

In[32]:= mi - ms

Out[32]= 0.*10^-701

We have no call to Im[] or Re[] here either: enter image description here

In[83]:= g[x_] = x^(1/x); ms -
  I/2 NIntegrate[(g[(1 - t)])/(Sin[\[Pi]  t]), {t, -Infinity  I, 
     Infinity  I}, WorkingPrecision -> 60] // Quiet

Out[83]= 1.117567007994466663240331928224318382357*10^-21
Attachments:
POSTED BY: Marvin Ray Burns

nice system!

POSTED BY: l van Veen

The new sum is this.

Sum[(-1)^(k + 1)*(-1 + (1 + k)^(1/(1 + k)) - Log[1 + k]/(1 + k) - 
         Log[1 + k]^2/(2*(1 + k)^2)), {k, 0, Infinity}] 

That appears to be the same as for MRB except now we subtract two terms from the series expansion at the origin of k^(1/k). For each k these terms are Log[k]/k + 1/2*(Log[k]/k)^2. Accounting for the signs (-1)^k and summing, as I did earlier for just that first term, we get something recognizable.

Sum[(-1)^(k)*(Log[k]/(k) + Log[k]^2/(2*k^2)), {k, 1, Infinity}]

(* Out[21]= 1/24 (24 EulerGamma Log[2] - 2 EulerGamma \[Pi]^2 Log[2] - 
   12 Log[2]^2 - \[Pi]^2 Log[2]^2 + 24 \[Pi]^2 Log[2] Log[Glaisher] - 
   2 \[Pi]^2 Log[2] Log[\[Pi]] - 6 (Zeta^\[Prime]\[Prime])[2]) *)

So what does this buy us? For one thing, we get even better convergence from brute-force summation, because now our largest terms are O((Log[k]/k)^3) and alternating (which means that if we sum in pairs it's actually O~(1/k^4), with O~ denoting the "soft-oh" wherein one drops polylogarithmic factors).

How helpful is this? Certainly it cannot hurt. But even with 1/k^4 size terms, it takes a long time to get even 40 digits, let alone thousands. So there is more going on in that Crandall approach.
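
A quick numeric illustration of that faster decay (my addition, not part of the original reply): compare the size of the k-th summand before and after subtracting the first two expansion terms.

    (*At k = 1000 the original summand k^(1/k) - 1 is O(Log[k]/k), while the modified
      summand retains only the O((Log[k]/k)^3) remainder.*)
    With[{k = 1000},
     {N[k^(1/k) - 1, 10], N[k^(1/k) - 1 - Log[k]/k - Log[k]^2/(2 k^2), 10]}]
    (*roughly {0.00693, 5.5*10^-8}*)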

POSTED BY: Daniel Lichtblau

Daniel Lichtblau and others, I just deciphered an identity Crandall used for checking computations of the MRB constant just before he died. It is used in a previous post about checking, where I said it was hard to follow. The MRB constant is B here. B = enter image description here. In input form that is

   B= Sum[(-1)^(k + 1)*(-1 + (1 + k)^(1/(1 + k)) - Log[1 + k]/(1 + k) - 
         Log[1 + k]^2/(2*(1 + k)^2)), {k, 0, Infinity}] + 
     1/24 (\[Pi]^2 Log[2]^2 - 
        2 \[Pi]^2 Log[
          2] (EulerGamma + Log[2] - 12 Log[Glaisher] + Log[\[Pi]]) - 
        6 (Zeta^\[Prime]\[Prime])[2]) + 
     1/2 (2 EulerGamma Log[2] - Log[2]^2)

For 3000 digit numeric approximation, it is

B=NSum[((-1)^(
    k + 1) (-1 + (1 + k)^(1/(1 + k)) - Log[1 + k]/(1 + k) - 
      Log[1 + k]^2/(2 (1 + k)^2))), {k, 0, Infinity}, 
  Method -> "AlternatingSigns", WorkingPrecision -> 3000] + 
 1/24 (\[Pi]^2 Log[2]^2 - 
    2 \[Pi]^2 Log[
      2] (EulerGamma + Log[2] - 12 Log[Glaisher] + Log[\[Pi]]) - 
    6 (Zeta^\[Prime]\[Prime])[2]) + 
 1/2 (2 EulerGamma Log[2] - Log[2]^2)

It is analytically straightforward, too, because

Sum[(-1)^(k + 1)*Log[1 + k]^2/(2 (1 + k)^2), {k, 0, Infinity}]

gives

1/24 (-\[Pi]^2 (Log[2]^2 + EulerGamma Log[4] - 
      24 Log[2] Log[Glaisher] + Log[4] Log[\[Pi]]) - 
   6 (Zeta^\[Prime]\[Prime])[2])

That is enter image description here. I wonder why he chose it.

POSTED BY: Marvin Ray Burns

The identity in question is straightforward. Write n^(1/n) as Exp[Log[n]/n], take a series expansion at 0, and subtract the first term from all summands. That means subtracting off Log[n]/n in each summand. This gives your left hand side. We know it must be M - the sum of the terms we subtracted off. Now add all of them up, accounting for signs.

Expand[Sum[(-1)^n*Log[n]/n, {n, 1, Infinity}]]

(* Out[74]= EulerGamma Log[2] - Log[2]^2/2 *)

So we recover the right hand side.

I have not understood whether this identity helps with Crandall's iteration. One advantage it confers, a good one in general, is that it converts a conditionally convergent alternating series into one that is absolutely convergent. From a numerical computation point of view this is always good.
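
As a quick numeric sanity check of the closed form above (my addition, not part of the original reply):

    (*Sum[(-1)^n Log[n]/n] is eta'(1); it should match EulerGamma Log[2] - Log[2]^2/2.*)
    N[NSum[(-1)^n Log[n]/n, {n, 1, Infinity}, Method -> "AlternatingSigns",
        WorkingPrecision -> 40] - (EulerGamma Log[2] - Log[2]^2/2), 10]
    (*expected to be on the order of 0.*10^-40*)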

POSTED BY: Daniel Lichtblau

Jan 2015

How about computing the MRB constant from Crandall's eta derivative formulas?

They are mentioned in a previous post, but here they are again:

enter image description here

Upon reading them, OpenAI's ChatGPT wrote the following reply:

enter image description here

I computed and checked 500 digits of the MRB constant using the first eta derivative formula in 38.6 seconds. How well can you do? Can you improve my program? (It is a 51.4% improvement on one of Crandall's programs.) I want a little competition in some of these records! (That formula takes just 225 summands, compared to the roughly 10^501 summands needed by the defining series -1^(1/1)+2^(1/2)-3^(1/3)+... . See http://arxiv.org/pdf/0912.3844v3.pdf for the summand requirements of other summation methods.)

In[37]:= mm = 
  0.187859642462067120248517934054273230055903094900138786172004684089\
4772315646602137032966544331074969038423458562580190612313700947592266\
3043892934889618412083733662608161360273812637937343528321255276396217\
1489321702076282062171516715408412680448363541671998519768025275989389\
9391445798350556135096485210712078444230958681294976885269495642042555\
8648367044104252795247106066609263397483410311578167864166891546003422\
2258838002545539689294711421221891050983287122773080200364452153905363\
9505533220347062755115981282803951021926491467317629351619065981601866\
4245824950697203381992958420935515162514399357600764593291281451709082\
4249158832041690664093344359148067055646928067870070281150093806069381\
3938595336065798740556206234870432936073781956460310476395066489306136\
0645528067515193508280837376719296866398103094949637496277383049846324\
5634793115753002892125232918161956269736970748657654760711780171957873\
6830096590226066875365630551656736128815020143875613668655221067430537\
0591039735756191489093690777983203551193362404637253494105428363699717\
0244185516548372793588220081344809610588020306478196195969537562878348\
1233497638586301014072725292301472333336250918584024803704048881967676\
7601198581116791693527968520441600270861372286889451015102919988536905\
7286592870868754254925337943953475897035633134403826388879866561959807\
3351473990256577813317226107612797585272274277730898577492230597096257\
2562718836755752978879253616876739403543214513627725492293131262764357\
3214462161877863771542054231282234462953965329033221714798202807598422\
1065564890048536858707083268874877377635047689160983185536281667159108\
4121934201643860002585084265564350069548328301205461932`1661.\
273833491444;

In[30]:= Timing[
 etaMM[m_, pr_] := 
  Module[{a, d, s, k, b, c}, a[j_] := Log[j + 1]^m/(j + 1)^m;
   n = Floor[1.32 pr];
   d = Cos[n ArcCos[3]];
   {b, c, s} = {-1, -d, 0};
   Do[c = b - c;
    s = s + c a[k];
    b = (k + n) (k - n) b/((k + 1) (k + 1/2)), {k, 0, n - 1}];
   N[s/d, pr] (-1)^m];
 eta[s_] := (1 - 2^(1 - s)) Zeta[s];
 eta1 = Limit[D[eta[s], s], s -> 1];
 MRBtrue = mm;
 prec = 500;
 MRBtest = 
  eta1 - Sum[(-1)^m etaMM[m, prec]/m!, {m, 2, Floor[.45 prec]}];
 MRBtest - MRBtrue]

Out[30]= {36.831836, 0.*10^-502}
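
A side note (my addition) for readers puzzled by the line d = Cos[n ArcCos[3]]: that quantity is exactly the Chebyshev polynomial value ChebyshevT[n, 3], the normalizing denominator in the Cohen-type acceleration used here. The later programs in this thread write it as ChebyshevT[n, 3] directly, which also avoids the kind of N::meprec warnings shown further below.

    (*Cos[n ArcCos[x]] is the Chebyshev polynomial T_n(x); quick check at n = 10, x = 3.*)
    Round[Re[N[Cos[10 ArcCos[3]], 30]]] == ChebyshevT[10, 3]
    (*True; both equal 22619537*)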

Here is a short table of computation times with that program:

Digits      Seconds

500        36.831836
1000       717.308198
1500       2989.759165
2000       3752.354453

I just now retweaked the program. It is now

Timing[etaMM[m_, pr_] := 
  Module[{a, d, s, k, b, c}, 
   a[j_] := N[(-PolyLog[1, -j]/(j + 1))^m, pr];
   n = Floor[1.32 pr];
   d = Cos[n ArcCos[3]];
   {b, c, s} = {-1, -d, 0};
   Do[c = b - c;
    s = s + c a[k];
    b = N[(k + n) (k - n) b/((k + 1) (k + 1/2)), pr], {k, 0, n - 1}];
   Return[N[s/d, pr] (-1)^m]];
 eta[s_] := (1 - 2^(1 - s)) Zeta[s];
 eta1 = Limit[D[eta[s], s], s -> 1];
 MRBtrue = mm;
 prec = 1500;
 MRBtest = 
  eta1 - Sum[(-1)^m etaMM[m, prec]/Gamma[m + 1], {m, 2, 
     Floor[.45 prec]}, Method -> "Procedural"];
 MRBtest - MRBtrue]

Feb 2015

Here are my best eta derivative records:

Digits        Seconds
 500          9.874863
 1000        62.587601
 1500        219.41540
 2000       1008.842867
 2500       2659.208646
 3000       5552.902395
 3500       10233.821601

That is using V10.0.2.0 Kernel. Here is a sample

Timing[etaMM[m_, pr_] := 
          Module[{a, d, s, k, b, c}, 
           a[j_] := N[(-PolyLog[1, -j]/(j + 1))^m, pr];
           n = Floor[1.32 pr];
           d = Cos[n ArcCos[3]];
           {b, c, s} = {-1, -d, 0};
           Do[c = b - c;
            s = s + c a[k];
            b = N[(k + n) (k - n) b/((k + 1) (k + 1/2)), pr], {k, 0, n - 1}];
           Return[N[s/d, pr] (-1)^m]];
         eta[s_] := (1 - 2^(1 - s)) Zeta[s];
         eta1 = Limit[D[eta[s], s], s -> 1];
         MRBtrue = mm;
         prec = 500;
         MRBtest = 
          eta1 - Sum[(-1)^m etaMM[m, prec]/Gamma[m + 1], {m, 2, 
             Floor[.45 prec]}];
        ]
         N::meprec: Internal precision limit $MaxExtraPrecision = 50. reached while evaluating 
             -Cos[660 ArcCos[3]].

         N::meprec: Internal precision limit $MaxExtraPrecision = 50. reached while evaluating 
             -Cos[660 ArcCos[3]].

         N::meprec: Internal precision limit $MaxExtraPrecision = 50. reached while evaluating 
             -Cos[660 ArcCos[3]].

         General::stop: Further output of N::meprec will be suppressed during this calculation.

         Out[1]= {9.874863, Null}

Aug 2016

enter image description here

V11 brings a significant improvement to my most recently mentioned fastest program for calculating digits of the MRB constant via the eta formula. Here are some timings:

Digits           seconds

1500                42.6386632

2000             127.3101969

3000             530.4442911

4000           1860.1966540

5000           3875.6978162

6000           8596.9347275



 10,000        53667.6315476

From a previous message that starts with "How about computing the MRB constant from Crandall's eta derivative formulas?", here are my first two sets of records to compare with the ones just mentioned. You can see that I increased time efficiency by factors of 10, 29, and even 72 for select computations! In the tests used in that previous message, computations of 4000 or more digits seemed to hang on indefinitely.

Digits      Seconds

500        36.831836
1000       717.308198
1500       2989.759165
2000       3752.354453


Digits        Seconds
 500          9.874863
 1000        62.587601
 1500        219.41540
 2000       1008.842867
 2500       2659.208646
 3000       5552.902395
 3500       10233.821601

Comparing the first of the just-mentioned 2000-digit computations with the "significant improvement" one, we get the following:

3752/127 ≈ 29.

And from the slowest to the fastest 1500-digit run we get

2989/42 ≈ 72.

POSTED BY: Marvin Ray Burns

02/12/2019

Using my 2 nodes of the MRB constant supercomputer (a 3.7 GHz Intel 6-core overclocked up to 4.7 GHz with 3000 MHz RAM, plus 4 cores of my 3.6 GHz machine with 2400 MHz RAM), I computed 34,517 digits of the MRB constant using Crandall's first eta formula:

prec = 35000;
to = SessionTime[];
etaMM[m_, pr_] := 
  Block[{a, s, k, b, c}, 
   a[j_] := (SetPrecision[Log[j + 1], prec]/(j + 1))^m;
   {b, c, s} = {-1, -d, 0};
   Do[c = b - c;
    s = s + c a[k];
    b = (k + n) (k - n) b/((k + 1) (k + 1/2)), {k, 0, n - 1}];
   Return[N[s/d, pr] (-1)^m]];
eta1 = N[EulerGamma Log[2] - Log[2]^2/2, prec]; n = 
 Floor[132/100 prec]; d = N[ChebyshevT[n, 3], prec];
MRBtest = 
  eta1 - Total[
    ParallelCombine[((Cos[Pi #]) etaMM[#, prec]/
         N[Gamma[# + 1], prec]) &, Range[2, Floor[.250 prec]], 
     Method -> "CoarsestGrained"]];
Print[N[MRBtest2 - MRBtest,10]];

SessionTime[] - to

giving -2.166803252*10^-34517 for a difference and 208659.2864422 seconds or 2.415 days for a timing.

Here MRBtest2 is a 36,000-digit value computed earlier via the convergence-acceleration method applied to the n^(1/n) series.

3/28/2019

Here is an updated table of speed eta formula records: eta records 12 31 18

04/03/2019

Using my 2 nodes of the MRB constant supercomputer (a 3.7 GHz Intel 6-core overclocked up to 4.7 GHz with 3000 MHz RAM, plus 4 cores of my 3.6 GHz machine with 2400 MHz RAM), I computed 50,000 digits of the MRB constant using Crandall's first eta formula in 5.79 days.

 prec = 50000;
to = SessionTime[];
etaMM[m_, pr_] := 
  Module[{a, s, k, b, c}, 
   a[j_] := 
    SetPrecision[SetPrecision[Log[j + 1]/(j + 1), prec]^m, prec];
   {b, c, s} = {-1, -d, 0};
   Do[c = b - c;
    s = s + c a[k];
    b = (k + n) (k - n) b/((k + 1) (k + 1/2)), {k, 0, n - 1}];
   Return[N[s/d, pr] (-1)^m]];
eta1 = N[EulerGamma Log[2] - Log[2]^2/2, prec]; n = 
 Floor[132/100 prec]; d = N[ChebyshevT[n, 3], prec];
MRBtest = 
  eta1 - Total[
    ParallelCombine[((Cos[Pi #]) etaMM[#, prec]/
         N[Gamma[# + 1], prec]) &, Range[2, Floor[.245 prec]], 
     Method -> "CoarsestGrained"]];
Print[N[MRBtest2 - MRBtest, 10]];

SessionTime[] - to

 (* 0.*10^-50000

  500808.4835750*)
POSTED BY: Marvin Ray Burns

4/22/2019

Let $$M=\sum _{n=1}^{\infty } \frac{(-1)^{n+1} \eta ^{(n)}(n)}{n!}=\sum _{n=1}^{\infty } (-1)^n \left(n^{1/n}-1\right),$$ where $\eta^{(n)}$ denotes the $n$th derivative of the Dirichlet eta function. Then, using what I learned about the absolute convergence of $\sum _{n=1}^{\infty } \frac{(-1)^{n+1} \eta ^{(n)}(n)}{n!}$ from https://math.stackexchange.com/questions/1673886/is-there-a-more-rigorous-way-to-show-these-two-sums-are-exactly-equal, combined with an identity from Richard Crandall (enter image description here), and also using what Mathematica says:

$$\sum _{n=1}^{1} \frac{\lim_{m\to 1} \eta ^{(n)}(m)}{n!}=\gamma \log (2)-\frac{\log ^2(2)}{2},$$

I figured out that

$$\sum _{n=2}^{\infty } \frac{(-1)^{n+1} \eta ^{(n)}(n)}{n!}=\sum _{n=1}^{\infty } (-1)^n \left(n^{1/n}-\frac{\log (n)}{n}-1\right).$$
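
Here is a quick, low-precision consistency check of that identity (my addition). Both sides differ from the corresponding full sums only by the n = 1 term, $\eta '(1)=\gamma \log (2)-\frac{1}{2}\log ^2(2)$, so adding that term back to the right-hand side should reproduce $M$:

    (*Check: Sum[(-1)^n (n^(1/n) - Log[n]/n - 1)] + eta'(1) should equal CMRB.*)
    rhs = NSum[(-1)^n (n^(1/n) - Log[n]/n - 1), {n, 1, Infinity},
       Method -> "AlternatingSigns", WorkingPrecision -> 40];
    mrb = NSum[(-1)^n (n^(1/n) - 1), {n, 1, Infinity},
       Method -> "AlternatingSigns", WorkingPrecision -> 40];
    N[rhs + (EulerGamma Log[2] - Log[2]^2/2) - mrb, 10]
    (*expected to be on the order of 0.*10^-40*)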

So I made the following major breakthrough in computing MRB from Crandall's first eta formula. See the attached "100 k eta 4 22 2019." It is also shown below.

eta 18 to19 n 2.JPG

The computation time grows 10,000 times more slowly than with the previous method!

I broke a new record, 100,000 digits: processor and total time were 806.5 s and 2606.7281972 s, respectively. See the attached "2nd 100 k eta 4 22 2019."

Here is the work from 100,000 digits. enter image description here

Print["Start time is ", ds = DateString[], "."];
prec = 100000;
(*Number of required decimals.*)ClearSystemCache[];
T0 = SessionTime[];
expM[pre_] := 
  Module[{a, d, s, k, bb, c, end, iprec, xvals, x, pc, cores = 16(*=4*
    number of physical cores*), tsize = 2^7, chunksize, start = 1, ll,
     ctab, pr = Floor[1.005 pre]}, chunksize = cores*tsize;
   n = Floor[1.32 pr];
   end = Ceiling[n/chunksize];
   Print["Iterations required: ", n];
   Print["Will give ", end, 
    " time estimates, each more accurate than the previous."];
   Print["Will stop at ", end*chunksize, 
    " iterations to ensure precision of around ", pr, 
    " decimal places."]; d = ChebyshevT[n, 3];
   {b, c, s} = {SetPrecision[-1, 1.1*n], -d, 0};
   iprec = Ceiling[pr/27];
   Do[xvals = Flatten[ParallelTable[Table[ll = start + j*tsize + l;
        x = N[E^(Log[ll]/(ll)), iprec];
        pc = iprec;
        While[pc < pr/4, pc = Min[3 pc, pr/4];
         x = SetPrecision[x, pc];
         y = x^ll - ll;
         x = x (1 - 2 y/((ll + 1) y + 2 ll ll));];(**N[Exp[Log[ll]/
        ll],pr/4]**)x = SetPrecision[x, pr];
        xll = x^ll; z = (ll - xll)/xll;
        t = 2 ll - 1; t2 = t^2;
        x = 
         x*(1 + SetPrecision[4.5, pr] (ll - 1)/
              t2 + (ll + 1) z/(2 ll t) - 
            SetPrecision[13.5, pr] ll (ll - 1) 1/(3 ll t2 + t^3 z));(**
        N[Exp[Log[ll]/ll],pr]**)x, {l, 0, tsize - 1}], {j, 0, 
        cores - 1}, Method -> "EvaluationsPerKernel" -> 32]];
    ctab = ParallelTable[Table[c = b - c;
       ll = start + l - 2;
       b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1));
       c, {l, chunksize}], Method -> "EvaluationsPerKernel" -> 16];
    s += ctab.(xvals - 1);
    start += chunksize;
    st = SessionTime[] - T0; kc = k*chunksize;
    ti = (st)/(kc + 10^-4)*(n)/(3600)/(24);
    If[kc > 1, 
     Print[kc, " iterations done in ", N[st, 4], " seconds.", 
      " Should take ", N[ti, 4], " days or ", N[ti*24*3600, 4], 
      "s, finish ", DatePlus[ds, ti], "."]];, {k, 0, end - 1}];
   N[-s/d, pr]];
t2 = Timing[MRB = expM[prec];]; Print["Finished on ", 
 DateString[], ". Processor time was ", t2[[1]], " s."];
Print["Enter MRBtest2 to print ", Floor[Precision[MRBtest2]], 
  " digits"];


 Start time is Tue 23 Apr 2019 06:49:31.

 Iterations required: 132026

 Will give 65 time estimates, each more accurate than the previous.

 Will stop at 133120 iterations to ensure precision of around 100020 decimal places.

 Denominator computed in  17.2324041s.

...

129024 iterations done in 1011. seconds. Should take 0.01203 days or 1040.s, finish Mon 22 Apr 
2019 12:59:16.

131072 iterations done in 1026. seconds. Should take 0.01202 days or 1038.s, finish Mon 22 Apr 
2019 12:59:15.

Finished on Mon 22 Apr 2019 12:59:03. Processor time was 786.797 s.

enter image description here

 Print["Start time is ", ds = DateString[], "."];
 prec = 100000;
 (*Number of required decimals.*)ClearSystemCache[];
 T0 = SessionTime[];
 expM[pre_] := 
   Module[{lg, a, d, s, k, bb, c, end, iprec, xvals, x, pc, cores = 16(*=
     4*number of physical cores*), tsize = 2^7, chunksize, start = 1, 
     ll, ctab, pr = Floor[1.0002 pre]}, chunksize = cores*tsize;
    n = Floor[1.32 pr];
    end = Ceiling[n/chunksize];
    Print["Iterations required: ", n];
    Print["Will give ", end, 
     " time estimates, each more accurate than the previous."];
    Print["Will stop at ", end*chunksize, 
     " iterations to ensure precision of around ", pr, 
     " decimal places."]; d = ChebyshevT[n, 3];
    {b, c, s} = {SetPrecision[-1, 1.1*n], -d, 0};
    iprec = pr/2^6;
    Do[xvals = Flatten[ParallelTable[Table[ll = start + j*tsize + l;
         lg = Log[ll]/(ll); x = N[E^(lg), iprec];
         pc = iprec;
         While[pc < pr, pc = Min[4 pc, pr];
          x = SetPrecision[x, pc];
          xll = x^ll; z = (ll - xll)/xll;
          t = 2 ll - 1; t2 = t^2;
          x = 
           x*(1 + SetPrecision[4.5, pc] (ll - 1)/
                t2 + (ll + 1) z/(2 ll t) - 
              SetPrecision[13.5, 2 pc] ll (ll - 1)/(3 ll t2 + t^3 z))];
          x - lg, {l, 0, tsize - 1}], {j, 0, cores - 1}, 
        Method -> "EvaluationsPerKernel" -> 16]];
     ctab = ParallelTable[Table[c = b - c;
        ll = start + l - 2;
        b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1));
        c, {l, chunksize}], Method -> "EvaluationsPerKernel" -> 16];
     s += ctab.(xvals - 1);
     start += chunksize;
     st = SessionTime[] - T0; kc = k*chunksize;
     ti = (st)/(kc + 10^-10)*(n)/(3600)/(24);
     If[kc > 1, 
      Print[kc, " iterations done in ", N[st - stt, 4], " seconds.", 
       " Should take ", N[ti, 4], " days or ", ti*3600*24, 
       "s, finish ", DatePlus[ds, ti], "."], 
      Print["Denominator computed in  ", stt = st, "s."]];, {k, 0, 
      end - 1}];
    N[-s/d, pr]];
 t2 = Timing[MRBeta2toinf = expM[prec];]; Print["Finished on ", 
  DateString[], ". Processor and total time were ", 
  t2[[1]], " and ", st, " s respectively."];

Start time is  Tue 23 Apr 2019 06:49:31.

Iterations required: 132026

Will give 65 time estimates, each more accurate than the previous.

Will stop at 133120 iterations to ensure precision of around 100020 decimal places.

Denominator computed in  17.2324041s.

...

131072 iterations done in 2589. seconds. Should take 0.03039 days or 2625.7011182s, finish Tue 23 Apr 2019 07:33:16.

Finished on Tue 23 Apr 2019 07:32:58. Processor and total time were 806.5 and 2606.7281972 s respectively.

enter image description here

 MRBeta1 = EulerGamma Log[2] - 1/2 Log[2]^2

 EulerGamma Log[2] - Log[2]^2/2

enter image description here

   N[MRBeta2toinf + MRBeta1 - MRB, 10]

   1.307089967*10^-99742
POSTED BY: Marvin Ray Burns

Crandall is not using his eta formulas directly! He computes Sum[(-1)^k*(k^(1/k) - 1), {k, 1, Infinity}] directly!

Going back to Crandall's code:

(*Fastest (at RC's end) as of 30 Nov 2012.*)
prec = 500000; (*Number of required decimals.*)
ClearSystemCache[];
T0 = SessionTime[];
expM[pre_] := 
  Module[{a, d, s, k, bb, c, n, end, iprec, xvals, x, pc, cores = 4, 
    tsize = 2^7, chunksize, start = 1, ll, ctab, 
    pr = Floor[1.02 pre]}, chunksize = cores*tsize;
   n = Floor[1.32 pr];
   end = Ceiling[n/chunksize];
   Print["Iterations required: ", n];
   Print["end ", end];
   Print[end*chunksize];
   d = N[(3 + Sqrt[8])^n, pr + 10];
   d = Round[1/2 (d + 1/d)];
   {b, c, s} = {SetPrecision[-1, 1.1*n], -d, 0};
   iprec = Ceiling[pr/27];
   Do[xvals = Flatten[ParallelTable[Table[ll = start + j*tsize + l;
        x = N[E^(Log[ll]/(ll)), iprec];
        pc = iprec;
        While[pc < pr, pc = Min[3 pc, pr];
         x = SetPrecision[x, pc];
         y = x^ll - ll;
         x = x (1 - 2 y/((ll + 1) y + 2 ll ll));];(*N[Exp[Log[ll]/ll],
        pr]*)x, {l, 0, tsize - 1}], {j, 0, cores - 1}, 
       Method -> "EvaluationsPerKernel" -> 1]];
    ctab = Table[c = b - c;
      ll = start + l - 2;
      b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1));
      c, {l, chunksize}];
    s += ctab.(xvals - 1);
    start += chunksize;
    Print["done iter ", k*chunksize, " ", SessionTime[] - T0];, {k, 0,
      end - 1}];
   N[-s/d, pr]];

t2 = Timing[MRBtest2 = expM[prec];];
MRBtest2 - MRBtest3

x = N[E^(Log[ll]/(ll)), iprec]; gives k^(1/k) to only Ceiling[pr/27] decimal places; the values are either 1.0, 1.1, 1.2, 1.3, or 1.4 (usually 1.1 or 1.0 for the first 27 desired decimals). On the other hand,

While[pc < pr, pc = Min[3 pc, pr];
 x = SetPrecision[x, pc];
 y = x^ll - ll;
 x = x (1 - 2 y/((ll + 1) y + 2 ll ll));],

takes the short-precision x and gives it the necessary precision and accuracy for k^(1/k) (k is ll there); it actually computes k^(1/k). Then he remarks, "(N[Exp[Log[ll]/ll], pr])."
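
For what it is worth (my observation, not something Crandall states), that While-loop update is Halley's iteration for the root of $f(x)=x^{ll}-ll$: writing $y=x^{ll}-ll$, one step is

$$x\;\leftarrow\;x\left(1-\frac{2y}{(ll+1)\,y+2\,ll^{2}}\right),$$

which converges cubically, roughly tripling the number of correct digits per pass; that is exactly why the loop advances the working precision with pc = Min[3 pc, pr].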

After finding a fast way to compute k^(1/k) to the necessary precision, he uses Cohen's Algorithm 1 (see a screenshot in a previous post) to accelerate the convergence of Sum[(-1)^k*(k^(1/k) - 1), {k, 1, Infinity}]. That is his secret!
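
For reference, here is a minimal, stand-alone sketch (my addition) of that acceleration scheme, Algorithm 1 of Cohen, Rodriguez Villegas, and Zagier, the same recurrence that appears inside Crandall's code above; the production code adds parallelism and the fast k^(1/k) refinement.

    (*Algorithm 1 of Cohen-Rodriguez Villegas-Zagier for Sum[(-1)^k a[k], {k, 0, Infinity}];
      about n terms give roughly 0.76 n correct digits.*)
    cvzAlt[a_, n_, prec_] := Module[{d, b, c, s, k},
      d = N[(3 + Sqrt[8])^n, prec + 10]; d = (d + 1/d)/2;
      {b, c, s} = {-1, -d, 0};
      Do[c = b - c;
       s = s + c a[k];
       b = (k + n) (k - n) b/((k + 1) (k + 1/2)), {k, 0, n - 1}];
      s/d];

    (*About 50 digits of CMRB; the leading minus sign matches the -s/d in Crandall's code.*)
    With[{digits = 50},
     N[-cvzAlt[N[(# + 1)^(1/(# + 1)) - 1, digits + 10] &, Floor[1.32 digits], digits],
      digits]]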

As I mentioned in a previous post, the "MRBtest2 - MRBtest3" line is for checking against MRBtest3, a known-to-be-accurate approximation to the MRB constant.

I'm just excited that I figured it out, as you can tell!

POSTED BY: Marvin Ray Burns

Nice work. Worth a bit of excitement, I'd say.

POSTED BY: Daniel Lichtblau

Daniel Lichtblau and others, Richard Crandall did intend to explain his work on the MRB constant and his program to compute it. When I wrote to him with a possible small improvement to his program, he said, "It's worth observing when we write it up." See screenshot: enter image description here

POSTED BY: Marvin Ray Burns

I can't say I understand either. My guess is the Eta stuff comes from summing (-1)^k*(Log[k]/k)^n over k, as those are the terms that appear in the double sum you get from expanding k^(1/k)-1 in powers of Log[k]/k (use k^(1/k)=Exp[Log[k]/k] and the power series for Exp). Even if it does come from this, the details remain elusive.

POSTED BY: Daniel Lichtblau

What Richard Crandall and maybe others did to come up with that method is really good and somewhat mysterious. I still don't really understand the inner workings, and I had shown him how to parallelize it. So the best I can say is that it's really hard to compete against magic. (I don't want to discourage others, I'm just explaining why I myself would be reluctant to tackle this. Someone less familiar might actually have a better chance of breaking new ground.)

In a way this should be good news. Should it ever become "easy" to compute, the MRB number would lose what is perhaps its biggest point of interest. It just happens to be on that cusp of tantalizingly "close" to easily computable (perhaps as sums of zeta function and derivatives thereof), yet still hard enough that it takes a sophisticated scheme to get more than a few dozen digits.

POSTED BY: Daniel Lichtblau

It is hard to be certain that c1 and c2 are correct to 77 digits even though they agree to that extent. I'm not saying that they are incorrect, and presumably you have verified this. I am just claiming that, whatever methods NSum may be using to accelerate convergence, there is really no guarantee that they apply to this particular computation. So c1 and c2 could agree to that many places because they are computed in a similar manner, without all digits actually being correct.

POSTED BY: Daniel Lichtblau