
Try to beat these MRB constant records!

POSTED BY: Marvin Ray Burns and distinguished colleagues


The MRB constant, a fascinating mathematical anomaly, has intrigued researchers and enthusiasts alike for decades. Defined as the limiting value of a unique alternating series, this enigmatic constant showcases the beauty of numerical exploration and convergence. Despite its relatively recent emergence, the MRB constant reveals unexpected connections to various fields within mathematics and computational analysis. In this post, we dive into its origins, properties, and the ongoing quest to uncover its deeper significance.

The MRB constant is an anomaly because it emerges from an alternating series with unusual convergence behavior. Unlike many well-known mathematical constants, it has no known closed-form expression, and its exact nature (whether it is algebraic, transcendental, or even irrational) remains unsettled. Additionally, the sequence of partial sums associated with the MRB constant oscillates between two limit points, a bounded yet divergent behavior. This oscillatory nature distinguishes it from more conventional mathematical constants, which typically exhibit straightforward convergence. Its mysterious properties continue to intrigue mathematicians as they explore its deeper connections to number theory and computational analysis.



$$\sum_{k=1}^{\infty} (-1)^k \left(k^{1/k} - 1\right)$$

is the MRB constant.
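For a quick numerical sanity check of that definition, here is a minimal sketch using NSum's acceleration for alternating series (the printed digits are approximate):

(*Numerical check of the defining alternating series.*)
NSum[(-1)^k (k^(1/k) - 1), {k, 1, Infinity},
 Method -> "AlternatingSigns", WorkingPrecision -> 30]

(* roughly 0.18785964246206712... *)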

Studying the MRB constant's place in mathematics can introduce one to the following:


Number Theory
The study of integers and their properties, including prime numbers. The discovery of large primes is a monumental task that involves deep number theoretic concepts and advanced computational techniques.

Behavior of Series and Sequences
Understanding how series and sequences behave, converge, or diverge is fundamental in mathematical analysis. This includes the study of oscillatory sums and their limit points, which is directly related to the investigation of the MRB constant.

Dirichlet Eta Function and Its Derivatives
The Dirichlet eta function, a variant of the Riemann zeta function, plays a crucial role in analytic number theory. Its properties and derivatives have implications for the MRB constant and other mathematical phenomena.

Limit Points and Oscillatory Sums
Investigating the limit points within oscillatory sums involves understanding how sequences and series approach their limits, which is essential in various fields of mathematics.

The MRB constant's exploration touches on all these areas, revealing the interconnectedness of different mathematical concepts. It's a testament to how seemingly disparate areas of mathematics can come together to uncover new insights and deepen our understanding of the mathematical universe.
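As a small illustration of those limit points, here is a minimal sketch using nothing beyond the definition: the even- and odd-indexed partial sums of the oscillatory series Sum[(-1)^k k^(1/k)] drift toward two different limits, the upper one being CMRB.

(*Even-indexed partial sums slowly approach CMRB (about 0.18786); odd-indexed ones approach CMRB - 1 (about -0.81214).*)
s[n_] := Sum[(-1)^k N[k]^(1/k), {k, 1, n}];
{s[2000], s[2001]}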

So, in the spirit of Leopold Kronecker, the German mathematician and logician whose famous remark translates to "God made the integers, all the rest is the work of man":

Here are a few fundamental constants that were once considered novel discoveries:
φ (Golden Ratio)
Known to the ancient Greeks, it gained prominence in art, architecture, and mathematics over time.

π (Pi)
π has been known since ancient times, but its significance and applications were only fully appreciated much later.

Euler's Number (e)
Discovered by Jacob Bernoulli in his studies of compound interest, this number became a cornerstone in calculus and mathematical analysis. It's fundamental to growth models, logarithms, and many branches of mathematics.

Imaginary Unit (i)
The term "imaginary" for this constant was coined by René Descartes; it initially faced skepticism but later became indispensable in complex analysis and engineering. It represents the square root of -1, opening doors to understanding dimensions beyond the real numbers.

Euler-Mascheroni Constant (γ)
Defined as the limiting difference between the harmonic series and the natural logarithm, this constant was introduced by Leonhard Euler in the 18th century. It appears in areas such as number theory and analysis, enriching the mathematical landscape.

I hope that within this discussion, we will find that the MRB constant is also becoming an essential part of that landscape.







20 Replies

CONSTRUCTING THE MRB CONSTANT IN MATHEMATICA


Click here<- for some basic constructs of the MRB constant and what is called its integrated analog, that is, an integral of f(z) = cos(Pi*z)(z^(1/z)-1) over the right half-plane.



Unlikely-looking constructions for the MRB constant

In 2012, a rapidly convergent pair of series for CMRB (S) was discovered, using Series and functional-definition optimization, by Dr. Richard Crandall, then chief cryptographer at Apple. Following his passing, his estate sent me some of his Mathematica work on the derivation. I built upon it below.

$$\begin{align*} S &= \sum_{m \ge 1} \frac{(-1)^m}{m!}\, \eta^{(m)}(m), \\[6pt] S &= \sum_{m \ge 1} \frac{c_m}{m!}\, \eta^{(m)}(0), \qquad \text{where } c_j := \sum_{d=1}^{j} \binom{j}{d} (-1)^d\, d^{j-d}. \end{align*}$$
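As a minimal sketch of that last definition, the c_j coefficients can be generated directly in Mathematica:

(*The c_j coefficients of Crandall's second series, straight from the definition above.*)
c[j_] := Sum[Binomial[j, d] (-1)^d d^(j - d), {d, 1, j}];
Table[c[j], {j, 1, 8}]

(* the first few values are -1, -1, 2, 9, ... *)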

Since Crandall died before publishing his proof, I tried to prove it and eventually asked my readers for help. Finally, Gottfried Helms publicly proved one of them:

Gottfried Helms

Note: because of this, the MRB constant sum prototype can be split into an asymptotic component and a remainder; together, these components reconstruct the original series, converging to the MRB constant.

Geometrically Constructing the MRB Constant

With the Wolfram Assistant

Constructing the MRB constant via an integral

I was going to publish this elsewhere, but since it completes the construction of the MRB constant, I decided to post it here. Below, the multivalued nature of (-1)^x means that f(x) = (-1)^x (x^(1/x) - 1) cannot be entire across the complex plane; a branch must be selected to make it single-valued along a particular path, and even then f is analytic only on restricted regions rather than globally. When working with such functions, it is crucial to specify the branch used, to ensure the function behaves consistently and is analytic along the desired contour.
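Here is a minimal sketch of the branch convention, assuming the principal branch (which is what Mathematica itself uses for (-1)^x):

(*On the principal branch, (-1)^x = Exp[I Pi x]. ComplexExpand, which assumes x is real, makes this explicit.*)
ComplexExpand[(-1)^x]

(* Cos[Pi x] + I Sin[Pi x] *)

So, for real x > 0, the real part of f(x) = (-1)^x (x^(1/x) - 1) is cos(Pi*x)(x^(1/x) - 1), which is exactly the integrand mentioned earlier.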

This is what the Wolfram Notebook assistant found concerning an integral form for the MRB constant.

While preparing to verify 7 million digits, I broke some speed records with two of my i9-14900K machines with 6400 MHz RAM (overclocked CPUs), using Mathematica 11.3 and the Lightweight Grid. They are generally faster than the 3-node MRB constant supercomputer with remote kernels! Unless noted by the command Timing[] (i.e., the last yellow and the final two blue-highlighted columns in the fourth table), these are all absolute timings. How does your computer compare to these? What can you do with other software?

[Timing tables 1–4]

The light-blue-highlighted, red-text columns, which show processor times, are documented here->7200 corrected timing for 30k (tweaked in 2025).

For the partial column of two 6000 MHz 14900Ks with red text and yellow highlight, see speed 100 300 1M XMP tweaked.

For column "=F" (highlighted in green), see the linked "10203050100".

For column "=B", see the attached "kernel priority 2 computers.nb" at the bottom.

For column "=C", see "3 fastest computers together.nb".

For column "=D", see the linked "speed records 5 10 20 30 K", along with speed 50K, speed 100k, speed 300k, and 30p0683 hour million.nb.

For the mostly red column "=E", which includes the single record 10,114-second 300,000-digit run, see the linked "3 fastest computers together 2.nb".

For column "=J", see 574 s 100k, 106.1 sec 50k, and 6897s 300k.

The 27-hour million-digit computation is found here<- (big notebook).

As far as single-device records are concerned, see the fourth table. The yellow-highlighted column is documented here<- (the others are already documented above).

On 6/16/2025, I broke the 10,000-digit record with an i9-14900KS using the Mathematica 11.3 kernel: 3.97 s absolute timing and 3.28 s processor timing:

> Mon 16 Jun 2025 11:10:04. 12288 iterations done in 3.977 seconds.

On 6/16/2025, I broke the 20,000-digit record with an i9-14900KS using the Mathematica 11.3 kernel: 16.316 s absolute timing and 13.8594 s processor timing.


I used the following code:

Print["Start time is ", ds = DateString[], "."];
prec = 10000;
(*Number of required decimals.*)
ClearSystemCache[];
T0 = SessionTime[];
expM[pre_] := 
  Module[{a, d, s, k, bb, c, end, iprec, xvals, x, pc, cores = 16, 
    tsize = 2^7, chunksize, start = 1, ll, ctab, 
    pr = Floor[1.005 pre]}, chunksize = cores*tsize;
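   (*About 1.32*pr series terms are needed: the Chebyshev-based acceleration below gains roughly Log10[3 + 2 Sqrt[2]] (about 0.77) digits per term.*)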
   n = Floor[1.32 pr];
   end = Ceiling[n/chunksize];
   Print["Iterations required: ", n];
   Print["Will give ", end, 
    " time estimates, each more accurate than the previous."];
   Print["Will stop at ", end*chunksize, 
    " iterations to ensure precision of around ", pr, 
    " decimal places."];
   d = ChebyshevT[n, 3];
   {b, c, s} = {SetPrecision[-1, 1.1*n], -d, 0};
   iprec = Ceiling[pr/27];
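   (*Each chunk computes x = ll^(1/ll) in parallel: start at about pr/27 digits, then refine with Newton-type steps, roughly tripling the working precision each pass.*)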
   Do[xvals = Flatten[ParallelTable[Table[ll = start + j*tsize + l;
        x = N[E^(Log[ll]/ll), iprec];
        pc = iprec;
        While[pc < pr/4, pc = Min[3 pc, pr/4];
         x = SetPrecision[x, pc];
         y = x^ll - ll;
         x = x (1 - 2 y/((ll + 1) y + 2 ll^2));];
        x = SetPrecision[x, pr];
        xll = x^ll;
        z = (ll - xll)/xll;
        t = 2 ll - 1;
        t2 = t^2;
        x = 
         x*(1 + SetPrecision[4.5, pr] (ll - 1)/
              t2 + (ll + 1) z/(2 ll t) - 
            SetPrecision[13.5, pr] ll (ll - 1)/(3 ll t2 + t^3 z));
        x, {l, 0, tsize - 1}], {j, 0, cores - 1}, 
       Method -> "EvaluationsPerKernel" -> 32]];
    ctab = ParallelTable[Table[c = b - c;
       ll = start + l - 2;
       b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1));
       c, {l, chunksize}], Method -> "EvaluationsPerKernel" -> 16];
    s += ctab.(xvals - 1);
    start += chunksize;
    st = SessionTime[] - T0;
    kc = k*chunksize;
    ti = (st)/(kc + 10^-4)*(n)/(3600)/(24);
    If[kc > 1, 
     Print[kc, " iterations done in ", N[st, 4], " seconds.", 
      " Should take ", N[ti, 4], " days or ", N[ti*24*3600, 4], 
      "s, finish ", DatePlus[ds, ti], "."]];, {k, 0, end - 1}];
   N[-s/d, pr]];
t2 = Timing[MRB = expM[prec];];
Print["Finished on ", DateString[], ". Processor time was ", t2[[1]], 
  " s."];
(*Print["error= ", N[test – MRB,20]];*)
Print["Enter MRB to print ", Floor[Precision[MRB]], " digits"];


This is another comparison of my fastest computers' timings in calculating digits of CMRB:

The blue column (using the Wolfram Lightweight Grid) is documented here.

The i9-12900KS column is documented here.

The i9-13900KS column is documented here.

The 300,000 digits result in the i9-13900KS column is here, where it ends with the following:

  Finished on Mon 21 Nov 2022 19:55:52. Processor and actual time 
         were 6180.27 and 10114.4781964 s. respectively

  Enter MRB1 to print 301492 digits. The error from a 6,500,000 or more digit 
 calculation that used a different method is  

 Out[72]= 0.*10^-301494


Remembering that the integrated analog of the MRB constant is M2, the integral of (-1)^x (x^(1/x) - 1) for x from 1 to i*Infinity:

NIntegrate[(-1)^n (n^(1/n) - 1), {n, 1, Infinity  I}, 
 Method -> "Trapezoidal", WorkingPrecision -> 20]

These results for M2 are from the Timing[] command:

The 14900KS at 7200 MHz (extreme tuning!) is documented here.

The i9-12900KS column is documented here.

"Windows10 2024 i9-14900KS 6000 MHZ RAM" documentation here

I calculated 6,500,000 digits of the MRB constant!!

The MRB constant supercomputer said,

Finished on Wed 16 Mar 2022 02:02:10. Processor and actual time were 6.266281*10^6 and 1.60264035419592*10^7 s respectively.

Enter MRB1 to print 6532491 digits. The error from a 6,000,000 or more digit calculation that used a different method is 0.*10^-6029992

"Processor time" 72.526 days

"Actual time" 185.491 days

For the digits see the attached 6p5millionMRB.nb. For the documentation of the computation see 2nd 6p5 million.nb.

02/12/2019

Using my 2 nodes of the MRB constant supercomputer (a 3.7 GHz Intel 6-core overclocked up to 4.7 GHz with 3000 MHz RAM, and 4 cores from my 3.6 GHz, 2400 MHz RAM machine), I computed 34,517 digits of the MRB constant using Crandall's first eta formula:

prec = 35000;
to = SessionTime[];
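(*etaMM[m, pr]: Chebyshev-accelerated alternating sum of (Log[j]/j)^m over j, the building block of Crandall's eta formula.*)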
etaMM[m_, pr_] := 
  Block[{a, s, k, b, c}, 
   a[j_] := (SetPrecision[Log[j + 1], prec]/(j + 1))^m;
   {b, c, s} = {-1, -d, 0};
   Do[c = b - c;
    s = s + c a[k];
    b = (k + n) (k - n) b/((k + 1) (k + 1/2)), {k, 0, n - 1}];
   Return[N[s/d, pr] (-1)^m]];
eta1 = N[EulerGamma Log[2] - Log[2]^2/2, prec]; n = 
 Floor[132/100 prec]; d = N[ChebyshevT[n, 3], prec];
MRBtest = 
  eta1 - Total[
    ParallelCombine[((Cos[Pi #]) etaMM[#, prec]/
         N[Gamma[# + 1], prec]) &, Range[2, Floor[.250 prec]], 
     Method -> "CoarsestGrained"]];
Print[N[MRBtest2 - MRBtest,10]];

SessionTime[] - to

giving -2.166803252*10^-34517 for a difference and 208659.2864422 seconds or 2.415 days for a timing.

where MRBtest2 is a 36,000-digit value computed earlier through acceleration methods applied to n^(1/n).

3/28/2019

Here is an updated table of speed eta formula records: eta records 12 31 18

04/03/2019

Using my 2 nodes of the MRB constant supercomputer (a 3.7 GHz Intel 6-core overclocked up to 4.7 GHz with 3000 MHz RAM, and 4 cores from my 3.6 GHz, 2400 MHz RAM machine), I computed 50,000 digits of the MRB constant using Crandall's first eta formula in 5.79 days.

 prec = 50000;
to = SessionTime[];
etaMM[m_, pr_] := 
  Module[{a, s, k, b, c}, 
   a[j_] := 
    SetPrecision[SetPrecision[Log[j + 1]/(j + 1), prec]^m, prec];
   {b, c, s} = {-1, -d, 0};
   Do[c = b - c;
    s = s + c a[k];
    b = (k + n) (k - n) b/((k + 1) (k + 1/2)), {k, 0, n - 1}];
   Return[N[s/d, pr] (-1)^m]];
eta1 = N[EulerGamma Log[2] - Log[2]^2/2, prec]; n = 
 Floor[132/100 prec]; d = N[ChebyshevT[n, 3], prec];
MRBtest = 
  eta1 - Total[
    ParallelCombine[((Cos[Pi #]) etaMM[#, prec]/
         N[Gamma[# + 1], prec]) &, Range[2, Floor[.245 prec]], 
     Method -> "CoarsestGrained"]];
Print[N[MRBtest2 - MRBtest, 10]];

SessionTime[] - to

 (* 0.*10^-50000

  500808.4835750*)

nice system!

POSTED BY: l van Veen

The new sum is this.

Sum[(-1)^(k + 1)*(-1 + (1 + k)^(1/(1 + k)) - Log[1 + k]/(1 + k) - 
         Log[1 + k]^2/(2*(1 + k)^2)), {k, 0, Infinity}] 

That appears to be the same as for MRB except now we subtract two terms from the series expansion at the origin of k^(1/k). For each k these terms are Log[k]/k + 1/2*(Log[k]/k)^2. Accounting for the signs (-1)^k and summing, as I did earlier for just that first term, we get something recognizable.

Sum[(-1)^(k)*(Log[k]/(k) + Log[k]^2/(2*k^2)), {k, 1, Infinity}]

(* Out[21]= 1/24 (24 EulerGamma Log[2] - 2 EulerGamma \[Pi]^2 Log[2] - 
   12 Log[2]^2 - \[Pi]^2 Log[2]^2 + 24 \[Pi]^2 Log[2] Log[Glaisher] - 
   2 \[Pi]^2 Log[2] Log[\[Pi]] - 6 (Zeta^\[Prime]\[Prime])[2]) *)
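A quick numerical cross-check of that relationship, as a sketch that assumes the closed form above and leans on NSum's alternating-series acceleration:

(*The "new sum" should equal CMRB minus the closed form just derived.*)
closed = 1/24 (24 EulerGamma Log[2] - 2 EulerGamma Pi^2 Log[2] - 12 Log[2]^2 - 
     Pi^2 Log[2]^2 + 24 Pi^2 Log[2] Log[Glaisher] - 2 Pi^2 Log[2] Log[Pi] - 
     6 Zeta''[2]);
mrb = NSum[(-1)^k (k^(1/k) - 1), {k, 1, Infinity}, 
   Method -> "AlternatingSigns", WorkingPrecision -> 30];
new = NSum[(-1)^(k + 1) (-1 + (1 + k)^(1/(1 + k)) - Log[1 + k]/(1 + k) - 
      Log[1 + k]^2/(2 (1 + k)^2)), {k, 0, Infinity}, 
   Method -> "AlternatingSigns", WorkingPrecision -> 30];
new - (mrb - closed)

(* should be numerically negligible *)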

So what does this buy us? For one thing, we get even better convergence from brute force summation, because now our largest terms are O((log k/k)^3) and alternating (which means if we sum in pairs it's actually O~(1/k^4), with O~ denoting the "soft-oh" wherein one drops polylogarithmic factors).
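For a rough sense of those magnitudes, here is a small sketch (an illustration, not a proof): after subtracting the two terms, the k-th summand is close to (Log[k]/k)^3/6.

(*Size of the leftover term at k = 1000 versus the leading cubic estimate.*)
term[k_] := k^(1/k) - 1 - Log[k]/k - Log[k]^2/(2 k^2);
N[{term[1000], Log[1000]^3/(6*1000^3)}]

(* both are about 5.5*10^-8 *)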

How helpful is this? Certainly it cannot hurt. But even with 1/k^4 size terms, it takes a long time to get even 40 digits, let alone thousands. So there is more going on in that Crandall approach.

POSTED BY: Daniel Lichtblau

The identity in question is straightforward. Write n^(1/n) as Exp[Log[n]/n], take a series expansion at 0, and subtract the first term from all summands. That means subtracting off Log[n]/n in each summand. This gives your left-hand side. We know it must be M minus the sum of the terms we subtracted off. Now add all of them up, accounting for signs.

Expand[Sum[(-1)^n*Log[n]/n, {n, 1, Infinity}]]

(* Out[74]= EulerGamma Log[2] - Log[2]^2/2 *)

So we recover the right-hand side.

I have not understood whether this identity helps with Crandall's iteration. One advantage it confers, a good one in general, is that it converts a conditionally convergent alternating series into one that is absolutely convergent. From a numerical computation point of view this is always good.

POSTED BY: Daniel Lichtblau

Nice work. Worth a bit of excitement, I'd say.

POSTED BY: Daniel Lichtblau

Daniel Lichtblau and others, Richard Crandall did intend to explain his work on the MRB constant and his program to compute it. When I wrote to him with a possible small improvement to his program, he said, "It's worth observing when we write it up." See screenshot:

I can't say I understand either. My guess is the Eta stuff comes from summing (-1)^k*(Log[k]/k)^n over k, as those are the terms that appear in the double sum you get from expanding k^(1/k)-1 in powers of Log[k]/k (use k^(1/k)=Exp[Log[k]/k] and the power series for Exp). Even if it does come from this, the details remain elusive.

POSTED BY: Daniel Lichtblau

What Richard Crandall and maybe others did to come up with that method is really good and somewhat mysterious. I still don't really understand the inner workings, and I had shown him how to parallelize it. So the best I can say is that it's really hard to compete against magic. (I don't want to discourage others, I'm just explaining why I myself would be reluctant to tackle this. Someone less familiar might actually have a better chance of breaking new ground.)

In a way this should be good news. Should it ever become "easy" to compute, the MRB number would lose what is perhaps its biggest point of interest. It just happens to be on that cusp of tantalizingly "close" to easily computable (perhaps as sums of zeta function and derivatives thereof), yet still hard enough that it takes a sophisticated scheme to get more than a few dozen digits.

POSTED BY: Daniel Lichtblau

It is hard to be certain that c1 and c2 are correct to 77 digits even though they agree to that extent. I'm not saying that they are incorrect, and presumably you have verified this. I'm just claiming that whatever methods NSum may be using to accelerate convergence, there is really no guarantee that they apply to this particular computation. So c1 and c2 could agree to that many places because they are computed in a similar manner, without all digits actually being correct.

POSTED BY: Daniel Lichtblau