
Try to beat these MRB constant records!

POSTED BY:

Marvin Ray Burns and distinguished colleagues


Beginning on January 20, 2014, this discussion has focused on researching the MRB constant, primarily using the Wolfram Toolbox.

Here is the visit count since then:


The MRB constant, a fascinating mathematical anomaly, has intrigued researchers and enthusiasts alike for decades. Defined as the limiting value of a unique alternating series, this enigmatic constant showcases the beauty of numerical exploration and convergence. Despite its relatively recent emergence, the MRB constant reveals unexpected connections to various fields within mathematics and computational analysis. In this post, we dive into its origins, properties, and the ongoing quest to uncover its more profound significance.

The MRB constant is an anomaly because it emerges from an alternating series with unusual convergence behavior. Unlike many well-known mathematical constants, the MRB constant has no known closed-form expression, nor is its exact nature known: whether it is algebraic, transcendental, or even irrational. Additionally, the sequence of partial sums that defines the MRB constant oscillates between two limit points, creating bounded yet divergent behavior. This oscillatory nature distinguishes it from more conventional mathematical constants, which typically exhibit straightforward convergence. Its mysterious properties continue to intrigue mathematicians as they explore its deeper connections to number theory and computational analysis.



$$C_{MRB} = \sum_{n=1}^{\infty} (-1)^n \left(n^{1/n} - 1\right)$$

is the MRB constant.
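A quick numeric sketch of the two limit points mentioned above (in Python here, though this thread works in Mathematica): the partial sums of $\sum (-1)^n n^{1/n}$ oscillate, with the even-indexed ones tending to $C_{MRB} \approx 0.18786$ and the odd-indexed ones to $C_{MRB} - 1$. The cutoff 200000 is an arbitrary choice.

```python
# Partial sums of sum_{n=1}^{N} (-1)^n * n^(1/n): the even-indexed
# partial sums tend to the MRB constant, the odd-indexed ones to
# the MRB constant minus 1 -- the series' two limit points.
def partial_sum(N):
    s = 0.0
    sign = -1.0
    for n in range(1, N + 1):
        s += sign * n ** (1.0 / n)
        sign = -sign
    return s

even = partial_sum(200000)   # upper limit point, near 0.18786
odd = partial_sum(200001)    # lower limit point, near -0.81214
print(even, odd)
```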

Foreword

I've heard my fellow amateurs prematurely boast of their GREAT DISCOVERIES and seen them fall on the jagged rocks of a lack of proof, commonality with known facts, or various other forms of error. I also stumbled over many of them while first trying to describe the MRB constant. Trying never to do the wrong thing twice, I've finally positively affected mathematics. On the way, one obstacle I had to negotiate was people shutting me down by saying it was a mere novelty and was, therefore, not worth investigating. But with further thought, I came up with this objection:

Studying the MRB constant's place in mathematics can introduce one to the following:


Number Theory
The study of integers and their properties, including prime numbers. The discovery of large primes is a monumental task that involves deep number theoretic concepts and advanced computational techniques.

Behavior of Series and Sequences
Understanding how series and sequences behave, converge, or diverge is fundamental in mathematical analysis. This includes the study of oscillatory sums and their limit points, which is directly related to the investigation of the MRB constant.

Dirichlet Eta Function and Its Derivatives
The Dirichlet eta function, a variant of the Riemann zeta function, plays a crucial role in analytic number theory. Its properties and derivatives have implications for the MRB constant and other mathematical phenomena.

Limit Points and Oscillatory Sums
Investigating the limit points within oscillatory sums involves understanding how sequences and series approach their limits, which is essential in various fields of mathematics.
The MRB constant's exploration touches on all these areas, revealing the interconnectedness of different mathematical concepts. It's a testament to how seemingly disparate areas of mathematics can come together to uncover new insights and deepen our understanding of the mathematical universe.

So, in the spirit of Leopold Kronecker, the German mathematician and logician who famously said what translates to "God made the integers; all the rest is the work of man":

Here are a few fundamental constants that were once considered novel discoveries:

Golden Ratio (φ)
Known to the ancient Greeks, it gained prominence in art, architecture, and mathematics over time.

Pi (π)
Known since ancient times, its significance and applications were fully appreciated much later.

Euler's Number (e)
Discovered by Jacob Bernoulli in his studies of compound interest, this number became a cornerstone in calculus and mathematical analysis. It's fundamental to growth models, logarithms, and many branches of mathematics.

Imaginary Unit (i)
Introduced by René Descartes, this constant initially faced skepticism but later became indispensable in complex analysis and engineering. It represents the square root of -1, opening doors to understanding dimensions beyond the real world.

Euler-Mascheroni Constant (γ)
Defined as the limiting difference between the harmonic series and the natural logarithm, this constant was introduced by Leonhard Euler in the 18th century. It appears in areas such as number theory and analysis, enriching the mathematical landscape.

I hope that within this discussion, we will find that the MRB constant is also becoming an essential part.




Motivation for the MRB constant

Obsessed with people calculating digits of the square root of two, the cube root of three, and so on, I didn't want to miss out on the fun of setting some kind of computational record. So, in 1994, I began to investigate the expression n^(1/n), which has led to many interesting results. By 1999, worried that the Y2K bug would shut down computing as we knew it, I found its alternating sum over ℕ, which other mathematicians named in my honor.




CONSTRUCTING THE MRB CONSTANT IN MATHEMATICA

Let

Click here<- for some basic constructs of the MRB constant and what is called its integrated analog: an integral over the right half-plane of cos(πz)(z^(1/z)−1).



Unlikely looking constructions for the MRB constant

In 2012, a rapidly convergent pair of series for CMRB (S) was discovered using Series and Functional Definition Optimization by Dr. Richard Crandall, then chief cryptographer of Apple Computer.

$$\begin{align*} S &= \sum_{m \ge 1} \frac{(-1)^{m+1}}{m!} \eta^{(m)}(m), \\ \\ S &= \sum_{m \ge 1} \frac{c_m}{m!} \eta^{(m)}(0), \\ \\ \text{where the $c$ coefficients are defined} \\ c_j &:= \sum_{d=1}^{j} \binom{j}{d} (-1)^d d^{j-d}. \end{align*}$$
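The $c_j$ coefficients are straightforward to tabulate from the definition above; a minimal Python sketch (the values come from evaluating the displayed formula, not from Crandall's paper):

```python
from math import comb

# c_j = sum_{d=1}^{j} binomial(j, d) * (-1)^d * d^(j-d)
def c(j):
    return sum(comb(j, d) * (-1) ** d * d ** (j - d) for d in range(1, j + 1))

print([c(j) for j in range(1, 7)])
```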

Since Crandall died before publishing his proof, I tried to prove it and eventually asked my readers for help. Finally, Gottfried Helms proved one publicly:

Gottfried Helms

Note: because of this, the MRB constant sum prototype can be split into an asymptotic component and a remainder; together, these components reconstruct the original series, converging to the MRB constant.





Further MRB constant Exploration


MRB constant Conditional Convergence

nb1

According to the Wolfram Notebook Assistant, the Riemann Series Theorem says:

By a suitable rearrangement of terms, a conditionally convergent series may be made to converge to any desired value, or to diverge.

According to the Wolfram Notebook Assistant, this may be done by:

(1.) Rearrange Positive and Negative Terms:

  • Separate the series into positive terms $a_n^+$ and negative terms $a_n^-$.

  • Both $\Sigma a_n^+$ and $\Sigma a_n^-$ diverge because the series is conditionally convergent.

(2.) Target Sum:
Let S be the target sum you want the rearranged series to converge to.

(3.) Construct the Rearrangement:

  • Start summing positive terms until the partial sum exceeds S.
  • Add negative terms until the sum falls below S.
  • Repeat this process, adding positive terms to exceed S, then negative terms to fall below S.

(4.) Convergence to S:

  • This process constructs a new series that converges to S.
  • Since both positive and negative sums diverge and can be made arbitrarily large, the partial sums can be made to oscillate around S and eventually converge to it.

Generalization:

By choosing different strategies for rearranging the terms, you can make the series converge to any real number or diverge. This theorem shows the delicate nature of convergence in conditionally convergent series and the non-uniqueness of their sum under rearrangement.
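The recipe above is easy to run as a toy computation; here is a Python sketch (the thread itself uses Mathematica) that greedily rearranges the alternating harmonic series $1 - \frac{1}{2} + \frac{1}{3} - \cdots = \ln 2$ toward an arbitrary target, chosen here as $\pi/4$:

```python
import math

def rearranged_partial(target, nterms=100000):
    # Greedy Riemann rearrangement of 1 - 1/2 + 1/3 - ... (= ln 2):
    # add positive terms until the sum exceeds the target, then
    # negative terms until it falls below, and repeat.
    s = 0.0
    p, q = 1, 2          # next positive denominator (odd), next negative (even)
    for _ in range(nterms):
        if s <= target:
            s += 1.0 / p
            p += 2
        else:
            s -= 1.0 / q
            q += 2
    return s

approx = rearranged_partial(math.pi / 4)
print(approx)
```

The oscillation around the target shrinks with the size of the last term used, so the partial sum lands close to $\pi/4$ even though the original series sums to $\ln 2$.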

For example,

$$ S = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots $$

converges to $\ln 2$, but the same series can be rearranged to

$$ 1 - \frac{1}{2} - \frac{1}{4} + \frac{1}{3} - \frac{1}{6} - \frac{1}{8} + \frac{1}{5} - \frac{1}{10} - \frac{1}{12} + \cdots $$

so that the rearranged series converges to $\frac{1}{2}\ln 2$, half the original sum.

The wonderful point of theorems is that they are true without exception. Hence, although every nonzero term of the MRB constant series is irrational (indeed algebraic), suitable rearrangements of it can be made to converge to any rational number of our choosing, as well as to any transcendental one.

->Here<- is a rearrangement of the terms of the MRB constant series that adds up to within 6.48303*10^-7 of the value of pi.



MRB constant Absolute Convergence

an Analog to the Exponential Function

Leading to an Integral Representation

The analogy drawn below between the MRB constant and the $e^x$ formulas stems from an absolutely convergent series.

Mathematica confirms it here:

In[405]:= test = (2 k)^(1/(2 k)) - (2 k + 1)^(1/(2 k + 1));

In[406]:= SumConvergence[Abs[test], k]

Out[406]= True

In mathematical analysis, series expansions provide powerful tools for approximating functions and exploring their properties. A classic example is the Taylor series expansion of the exponential function $$e^x$$, which elegantly divides into sequences of even and odd powers. These sequences reveal deeper connections to the hyperbolic sine and cosine functions, where even powers correspond to the series expansion of $$\cosh(x)$$ and odd powers to that of $$\sinh(x)$$. This separation showcases the inherent symmetry and structure within the exponential function.

Similarly, the exploration of the MRB constant reveals intriguing parallels. The MRB constant is defined through an alternating series involving the expression $$n^{1/n}$$. Just as with the exponential function, one can examine the series by segregating terms based on parity: odd versus even terms. This division, while not directly yielding trigonometric or hyperbolic functions, highlights a structural analogy that invites deeper investigation into the convergence and behavior of such series.

Both the decomposition of the exponential function and the partitioning of the MRB constant series into odd and even components underscore the rich interplay between algebraic expressions and their geometric or analytic interpretations. Understanding these connections enriches our comprehension of mathematical constants and functions, providing insights that extend beyond the immediate formulations.

When studying calculi of approximations, as we do in this discussion, or spectral analysis or quantum field theories, replacing discrete sums like the above MRB constant with an integral involving hyperbolic functions can reveal deeper properties, as in

$$\sum_{n=1}^{\infty} \left( (2n)^{\frac{1}{2n}} - (2n-1)^{\frac{1}{2n-1}} \right) = C_{MRB} = \int_{0}^{\infty} \frac{\Im\left[(1+it)^{\frac{1}{1+it}}\right]}{\sinh(\pi t)} \, dt .$$

The similarity in exponential decay or growth between discrete and continuous representations may explain why this equation holds.
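Both sides of the sum-integral identity above can be checked numerically; a Python sketch (the cutoffs, step size, and tolerances are arbitrary choices of mine):

```python
import math

def paired_series(K):
    # Sum of (2n)^(1/(2n)) - (2n-1)^(1/(2n-1)): the absolutely
    # convergent pairing of the MRB series terms.
    s = 0.0
    for k in range(1, K + 1):
        s += (2 * k) ** (1.0 / (2 * k)) - (2 * k - 1) ** (1.0 / (2 * k - 1))
    return s

def integrand(t):
    # Im[(1+it)^(1/(1+it))] / sinh(pi t); tends to 1/pi as t -> 0
    return ((1 + 1j * t) ** (1 / (1 + 1j * t))).imag / math.sinh(math.pi * t)

h = 1e-3
# midpoint rule on (0, 12]; csch(pi t) makes the tail negligible
integral = sum(integrand((i + 0.5) * h) for i in range(12000)) * h
series_val = paired_series(200000)
print(series_val, integral)
```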

Using a variation of the Abel–Plana formula:

$$\sum_{n=0}^\infty (-1)^nf(n)= \frac {1}{2} f(0)+i \int_0^\infty \frac{f(i t)-f(-i t)}{2\sinh(\pi t)} \, dt, $$

a follower going by the moniker Dark Malthorp presented a beautiful integral for the MRB constant, where for

\begin{equation*} f(x) = 1 - (1 + x)^{\frac{1}{1 + x}} \end{equation*}

$$C_{MRB} = \sum_{t=0}^{\infty} \frac{\Re (f(t))}{\sinh(i \pi t)+\cosh(i \pi t)} = \int_{0}^{\infty} \frac{\Im\left[(1+it)^{\frac{1}{1+it}}\right]}{\sinh(\pi t)} \, dt$$

To apply Abel-Plana, bounds on $f(z)$ are needed. Although $f$ is not entire, it is still manageable, since $\lim_{x \rightarrow \infty} f(x + yi) = 0$ for all fixed $y$ and $f$ is bounded in the right half-plane. Using the alternating series formulation of Abel-Plana:

$$C_{MRB} = \sum_{n=0}^{\infty} (-1)^n f(n) = \frac{1}{2} f(0) + i \int_{0}^{\infty} \frac{f(it) - f(-it)}{2 \sinh(\pi t)} dt$$

Simplifying the integral and relating the imaginary-part terms to the MRB series:

\begin{align*} &= \frac{1}{2} \cdot 0 + i \int_{0}^{\infty} \frac{1 - (1 + it)^{\frac{1}{1 + it}} - 1 + (1 - it)^{\frac{1}{1 - it}}}{2 \sinh(\pi t)} dt \\ &= i \int_{0}^{\infty} \frac{(1 - it)^{\frac{1}{1 - it}} - (1 + it)^{\frac{1}{1 + it}}}{2 \sinh(\pi t)} dt \end{align*}

Holomorphic Property

Since $(1 + z)^{\frac{1}{1+z}}$ is holomorphic for $\Re(z) \geq 0$ and real-valued for real $z$, it follows that $f(\bar{z}) = \overline{f(z)}$. This leads to:

$$C_{MRB} = \int_{0}^{\infty} \frac{\Im\left[(1+it)^{\frac{1}{1+it}}\right]}{\sinh(\pi t)} \, dt $$
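The Abel-Plana step used above can be sanity-checked on a function where everything is known in closed form. With $g(x) = 1/(1+x)$ we have $\sum_{n\ge 0}(-1)^n g(n) = \ln 2$, $g(0)/2 = 1/2$, and the bracket $i\,(g(it)-g(-it))/2$ simplifies to $t/((1+t^2)\sinh(\pi t))$. A Python sketch (my own example, not one from the thread):

```python
import math

def integrand(t):
    # i*(g(it) - g(-it))/(2 sinh(pi t)) with g(x) = 1/(1+x)
    # simplifies to t / ((1 + t^2) * sinh(pi t))
    return t / ((1 + t * t) * math.sinh(math.pi * t))

h = 1e-3
integral = sum(integrand((i + 0.5) * h) for i in range(12000)) * h
total = 0.5 + integral          # g(0)/2 + integral should equal ln 2
print(total, math.log(2))
```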

Although we cannot strictly pull the imaginary part out of the integral due to the pole at 0, the imaginary part remains bounded for $t \in (0, \infty)$. I don't see why we cannot write

    N[Integrate[Im[(1 + I t)^(1/(1 + I t))]/Sinh[Pi t], {t, 0, Infinity}] == 
      2 Integrate[Im[E^(Pi t + Log[1 + I t]/(1 + I t))]/(E^(Pi I) + E^(2 Pi t)), {t, 0, Infinity}]]

Also, we can take the imaginary part of the outside in the following.


N[Integrate[
   Im[(1 + I*t)^(1/(1 + I*t))]/Sinh[Pi*t], {t, 0, Infinity}]] - 
   N[Im[Integrate[
    Integrate[
     Im[E^(Log[t]/t + x)/(E^(x + I*Pi*t) - E^(x - I*Pi*t))], {x, -t, 
      t}], {t, 1, I*Infinity}]]]

Returning to the flow of "MRB constant Absolute Convergence, an Analog to the Exponential Function, Leading to an Integral Representation": the connection between the MRB constant and the integral expression involving complex functions arises from advanced mathematical analysis and the exploration of series convergence properties through complex analysis techniques.

High level proof of

$$C_{MRB} = \int_{0}^{\infty} \frac{\Im\left[(1+it)^{\frac{1}{1+it}}\right]}{\sinh(\pi t)} \, dt $$

without the Abel-Plana theorem

by the Wolfram Notebook Assistant

Here is a condensed version of the proof, maintaining all the critical computations and steps:


Proving the Connection between the MRB Constant and its Integral Representation

This proof involves complex analysis, asymptotic analysis, and special functions to establish the relationship between the MRB constant and its integral form.

1. Series Representation

Define the MRB constant as an alternating series: $$\text{MRB} = \sum_{n=1}^{\infty} (-1)^n (n^{1/n} - 1)$$

2. Complex Function Representation

Consider the complex function: $$(1 + it)^{1/(1 + it)}$$ This function captures the essence of the series term $$n^{1/n}$$ and can be expanded to show its behavior in the complex plane.

3. Integral Representation

The integral: $$\text{CMRB} = \int_0^\infty \text{Im}\left[(1 + it)^{1/(1 + it)} \text{Csch}(\pi t)\right] \, dt$$ connects the series with the integral using complex analysis techniques.

4. Asymptotic Analysis and Convergence

Analyze the asymptotic behavior of $$(1 + it)^{1/(1 + it)}$$ and $$\text{Csch}(\pi t)$$ to ensure the integral converges, examining $$t \to 0$$ and $$t \to \infty$$.

5. Relate Series Terms to Integral Components

Expand the complex function $$f(t) = (1 + it)^{1/(1 + it)}$$ as: $$f(t) \approx 1 + a_1 t + a_2 t^2 + \cdots$$ Extract the imaginary part: $$\text{Im}[f(t)] \approx b_1 t + b_3 t^3 + \cdots$$ Relate these terms to the MRB series, highlighting the alternating nature with $$(-1)^n n^{1/n}$$.
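The leading coefficient in that expansion can be checked numerically. Here I use the power function $(1+it)^{1/(1+it)}$ itself rather than one minus it, so the expected slope of the imaginary part at $t = 0$ is $+1$ (a Python sketch; the sample points are arbitrary):

```python
def im_f(t):
    # imaginary part of (1+it)^(1/(1+it)) for real t
    return ((1 + 1j * t) ** (1 / (1 + 1j * t))).imag

for t in (1e-2, 1e-3, 1e-4):
    print(t, im_f(t) / t)   # ratio should approach 1 as t -> 0
```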

6. Use Analytic Continuation

If necessary, utilize analytic continuation to explore the series and integral in a unified framework.

7. Convergence Analysis

At Infinity: Analyze $$\text{Im}[f(t)] \text{Csch}(\pi t)$$ to confirm convergence with damping by $$\text{Csch}(\pi t)$$. Near Zero: Assess behavior around $$t = 0$$ using the series expansion.

8. Numerical Verification

Perform numerical computations for both the MRB series and the integral, ensuring they converge to the same value. Truncate and compare finite sums to verify consistency.

**Here is a program I wrote that ensures they converge to the same value, with the discrepancy perhaps decaying asymptotically like $2\sin(1)/(t\sinh(\pi t))$, where $t$ is the precision.**

Conclusion

By expanding the complex function, analyzing convergence, and verifying numerically, we establish a robust connection between the MRB constant and its integral representation. This approach not only proves the equivalence but also deepens our understanding of the underlying mathematical structures.

This condensed proof outlines the critical steps and computations needed to establish the connection, focusing on the interplay between series terms and their integral counterparts. While the proof requires a deep understanding of complex analysis and asymptotic methods, the crux lies in showing that the complex integral captures the oscillatory and convergence behavior of the MRB series through a carefully constructed analytic expression. This approach not only proves the connection but also enriches our understanding of the MRB constant's mathematical nature.

A lower-level proof follows from the following sum-integral identity.



Yet another MRB constant integral

On Pi Day, 2021, 2:40 pm EST,

I added a new MRB constant integral. Surprisingly to me, this one is proven by a direct application of the Residue Theorem. With additional thought, the previous sum-integral and this integral-sum can be seen as the same identity, up to a factor of i, so this new proof solidifies our high-level one. The Wolfram Notebook Assistant proved it.

CMRB = integral to sum

Proof:

The notebook assistant clarified:

The appearance of the cosine function in the series is due to how the real part of the integral and the series terms are related, rather than a direct transformation of Csc into Cos. The real part of the expression and the periodicity of cosine (due to the integer poles considered) naturally lead to the involvement of cosine in the final expression.


Conclusion This proof shows how the original integral can be transformed into an infinite series by leveraging complex analysis techniques, specifically contour integration and the residue theorem. Each step builds on the analytical properties of the function and the behavior of its poles in the complex plane.





Further exploration raises new queries about convergence over the reals and complexes with the Dirichlet Eta function.

Wolfram Research

Spigot Algorithms

I'd be remiss if I didn't mention the efficient work Wolfram Research put into the MRB constant. Because of their effort, using only five short lines, you can find any nth digit of the MRB constant, up to around n = 6.5 million, in an instant. I wonder if it's possible to make such a program to extract digits further along.


PARTIAL SUM METHODS FOR MRB CONSTANT

The success in computing millions of digits of the MRB constant, found in a following reply, lies in the use of Crandall's method (though hidden), his and my very fast algorithms for n^(1/n), and a modern series acceleration.


From a drawn-out conversation with the Wolfram Notebook Assistant, I believe a suspicion I've had for a long time is true: While the MRB constant is primarily a mathematical curiosity, it can be used as a metaphorical or analytical tool to describe and understand the cumulative effects of alternating growth patterns in systems like financial markets, where regular cycles of expansion and contraction are observed.

While preparing to verify 7 million digits, I broke some speed records with two of my i9-14900K machines with 6400 MHz RAM (overclocked CPUs), using Mathematica 11.3 and the Lightweight Grid. They are generally faster than the 3-node MRB constant supercomputer with remote kernels! Unless noted by the command Timing[], i.e., the last yellow and the final two blue-highlighted columns in the fourth table, these are all absolute timings. How does your computer compare to these? What can you do with other software?

1 2 3 4

The light-blue-highlighted, red-text column, which shows processor times, results from here->7200 corrected timing for 30k, and was tweaked in 2025.

For the partial column of two 6000 MHz 14900Ks with red text and yellow highlight, see speed 100 300 1M XMP tweaked.

For column "=F" (highlighted in green) see linked "10203050100" .

At the bottom, see attached "kernel priority 2 computers.nb" for column =B,

"3 fastest computers together.nb" for column =C

and linked "speed records 5 10 20 30 K"

also speed 50K speed 100k, speed 300k and 30p0683 hour million.nb for column =D .

The mostly red column, including the single-record 10,114-second 300,000-digit run, "=E", is in the linked "3 fastest computers together 2.nb".

For column "=J," see 574 s 100k, 106.1 sec 50k and 6897s 300k

The 27-hour million-digit computation is found here. <-Big notebook.

As far as single-device records are concerned, see the fourth table. The yellow-highlighted column is documented here<- (with the others already documented above).

This is another comparison of my fastest computers' timings in calculating digits of CMRB:

The blue column (using the Wolfram Lightweight Grid) is documented here.

The i9-12900KS column is documented here.

The i9-13900KS column is documented here.

The 300,000 digits result in the i9-13900KS column is here, where it ends with the following:

  Finished on Mon 21 Nov 2022 19:55:52. Processor and actual time 
         were 6180.27 and 10114.4781964 s. respectively

  Enter MRB1 to print 301492 digits. The error from a 6,500,000 or more digit 
 calculation that used a different method is  

 Out[72]= 0.*10^-301494


Remembering that the integrated analog of the MRB constant is M2

NIntegrate[(-1)^n (n^(1/n) - 1), {n, 1, Infinity  I}, 
 Method -> "Trapezoidal", WorkingPrecision -> 20]
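That NIntegrate call can be mirrored in Python by parameterizing the contour as n = 1 + i·s with s ≥ 0 and taking (−1)^n as the principal branch E^(I π n); this reading of the path toward Infinity*I is my assumption about the intended contour. The sketch only checks that the quadrature is self-consistent at two resolutions:

```python
import cmath
import math

def m2(h, smax=20.0):
    # Midpoint rule along n = 1 + i*s; the factor E^(I Pi n) decays
    # like e^(-pi s), so truncating at smax = 20 is harmless.
    total = 0.0 + 0.0j
    for i in range(int(smax / h)):
        s = (i + 0.5) * h
        n = 1 + 1j * s
        total += cmath.exp(1j * math.pi * n) * (n ** (1 / n) - 1) * 1j * h
    return total

coarse, fine = m2(1e-3), m2(5e-4)
print(coarse)
```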

These results are from the Timing[] command: M2

The 14900KS at 7200 MHz (extreme tuning!) documented here

The i9-12900KS column is documented here.

"Windows10 2024 i9-14900KS 6000 MHZ RAM" documentation here

Distribution of MRB constant digits as given in all record calculations

Here is the distribution of digits within the first 6,000,000 decimal places (0.187859…): "4" shows up more than the other digits, followed by "0," "8," and "7."


Here is the distribution of digits within the first 5,000,000 decimal places (0.187859…): "4" shows up a lot more than the other digits, followed by "0," "8," and "6."

Here is a similar distribution over the first 4,000,000.

3,000,000 digits share a similar distribution.

Over the first 2 million and 1 million digits, "4" was not so well represented. So, the heavy representation of "4" is shown to be a growing phenomenon from 2 million to 5 million digits. However, "1," "2," and "5" still made a very poor showing.
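The tallies behind these distributions amount to a digit count; a Python sketch using just the first 16 decimals, 0.1878596424620671, as a stand-in for the attached multi-million-digit files:

```python
from collections import Counter

digits = "1878596424620671"   # first 16 decimals of the MRB constant
freq = Counter(digits)        # maps each digit character to its count
print(sorted(freq.items()))
```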

I attached more than 6,000,000 digits of the MRB constant.

I calculated 6,500,000 digits of the MRB constant!!

The MRB constant supercomputer said,

Finished on Wed 16 Mar 2022 02:02:10. Processor and actual time were 6.26628*10^6 and 1.60264035419592*10^7 s. respectively. Enter MRB1 to print 6532491 digits. The error from a 6,000,000 or more digit calculation that used a different method is 0.*10^-6029992

"Processor time" 72.526 days

"Actual time" 185.491 days

For the digits see the attached 6p5millionMRB.nb. For the documentation of the computation see 2nd 6p5 million.nb.

02/12/2019

Using my 2 nodes of the MRB constant supercomputer (3.7 GHz overclocked up to 4.7 GHz, Intel 6-core, 3000 MHz RAM, and 4 cores from my 3.6 GHz, 2400 MHz RAM machine), I computed 34,517 digits of the MRB constant using Crandall's first eta formula:

prec = 35000;
to = SessionTime[];
etaMM[m_, pr_] := 
  Block[{a, s, k, b, c}, 
   a[j_] := (SetPrecision[Log[j + 1], prec]/(j + 1))^m;
   {b, c, s} = {-1, -d, 0};
   Do[c = b - c;
    s = s + c a[k];
    b = (k + n) (k - n) b/((k + 1) (k + 1/2)), {k, 0, n - 1}];
   Return[N[s/d, pr] (-1)^m]];
eta1 = N[EulerGamma Log[2] - Log[2]^2/2, prec]; n = 
 Floor[132/100 prec]; d = N[ChebyshevT[n, 3], prec];
MRBtest = 
  eta1 - Total[
    ParallelCombine[((Cos[Pi #]) etaMM[#, prec]/
         N[Gamma[# + 1], prec]) &, Range[2, Floor[.250 prec]], 
     Method -> "CoarsestGrained"]];
Print[N[MRBtest2 - MRBtest,10]];

SessionTime[] - to

giving -2.166803252*10^-34517 for a difference and 208659.2864422 seconds or 2.415 days for a timing.

where MRBtest2 is 36,000 digits computed through acceleration methods of n^(1/n).

3/28/2019

Here is an updated table of speed eta formula records: eta records 12 31 18

04/03/2019

Using my 2 nodes of the MRB constant supercomputer (3.7 GHz overclocked up to 4.7 GHz, Intel 6-core, 3000 MHz RAM, and 4 cores from my 3.6 GHz, 2400 MHz RAM machine), I computed 50,000 digits of the MRB constant using Crandall's first eta formula in 5.79 days.

 prec = 50000;
to = SessionTime[];
etaMM[m_, pr_] := 
  Module[{a, s, k, b, c}, 
   a[j_] := 
    SetPrecision[SetPrecision[Log[j + 1]/(j + 1), prec]^m, prec];
   {b, c, s} = {-1, -d, 0};
   Do[c = b - c;
    s = s + c a[k];
    b = (k + n) (k - n) b/((k + 1) (k + 1/2)), {k, 0, n - 1}];
   Return[N[s/d, pr] (-1)^m]];
eta1 = N[EulerGamma Log[2] - Log[2]^2/2, prec]; n = 
 Floor[132/100 prec]; d = N[ChebyshevT[n, 3], prec];
MRBtest = 
  eta1 - Total[
    ParallelCombine[((Cos[Pi #]) etaMM[#, prec]/
         N[Gamma[# + 1], prec]) &, Range[2, Floor[.245 prec]], 
     Method -> "CoarsestGrained"]];
Print[N[MRBtest2 - MRBtest, 10]];

SessionTime[] - to

 (* 0.*10^-50000

  500808.4835750*)

4/22/2019

Let $$M=\sum _{n=1}^{\infty } \frac{(-1)^{n+1} \eta^{(n)}(n)}{n!}=\sum _{n=1}^{\infty } (-1)^n \left(n^{1/n}-1\right).$$ Then, using what I learned about the absolute convergence of $\sum _{n=1}^{\infty } \frac{(-1)^{n+1} \eta^{(n)}(n)}{n!}$ from https://math.stackexchange.com/questions/1673886/is-there-a-more-rigorous-way-to-show-these-two-sums-are-exactly-equal, combined with an identity from Richard Crandall, and also using what Mathematica says:

$$\sum _{n=1}^{1} \frac{\lim_{m\to 1} \eta^{(n)}(m)}{n!}=\gamma \log 2-\frac{\log ^2 2}{2},$$

I figured out that

$$\sum _{n=2}^{\infty } \frac{(-1)^{n+1} \eta^{(n)}(n)}{n!}=\sum _{n=1}^{\infty } (-1)^n \left(n^{1/n}-\frac{\log (n)}{n}-1\right).$$
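This identity is easy to test numerically: the right-hand side should equal $C_{MRB}$ minus the $n=1$ eta term, $\gamma \log 2 - \frac{1}{2}\log^2 2$. A Python sketch (the reference value of $C_{MRB}$ is hard-coded, an assumption taken from the digit computations in this thread):

```python
import math

GAMMA = 0.5772156649015329     # Euler-Mascheroni constant
C_MRB = 0.18785964246206712    # reference value of the MRB constant
eta1 = GAMMA * math.log(2) - math.log(2) ** 2 / 2

def rhs(N):
    # right-hand side: sum of (-1)^n (n^(1/n) - log(n)/n - 1),
    # whose terms decay like log^2(n)/(2 n^2)
    s = 0.0
    sign = -1.0
    for n in range(1, N + 1):
        s += sign * (n ** (1.0 / n) - math.log(n) / n - 1.0)
        sign = -sign
    return s

r = rhs(100000)
print(r, C_MRB - eta1)
```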

So I made the following major breakthrough in computing MRB from Crandall's first eta formula. See attached 100 k eta 4 22 2019, also shown below.

eta 18 to19 n 2.JPG

The time grows 10,000 times slower than the previous method!

I broke a new record, 100,000 digits: processor and total time were 806.5 and 2606.7281972 s respectively. See attached 2nd 100 k eta 4 22 2019.

Here is the work from 100,000 digits.

Print["Start time is ", ds = DateString[], "."];
prec = 100000;
(*Number of required decimals.*)ClearSystemCache[];
T0 = SessionTime[];
expM[pre_] := 
  Module[{a, d, s, k, bb, c, end, iprec, xvals, x, pc, cores = 16(*=4*
    number of physical cores*), tsize = 2^7, chunksize, start = 1, ll,
     ctab, pr = Floor[1.005 pre]}, chunksize = cores*tsize;
   n = Floor[1.32 pr];
   end = Ceiling[n/chunksize];
   Print["Iterations required: ", n];
   Print["Will give ", end, 
    " time estimates, each more accurate than the previous."];
   Print["Will stop at ", end*chunksize, 
    " iterations to ensure precision of around ", pr, 
    " decimal places."]; d = ChebyshevT[n, 3];
   {b, c, s} = {SetPrecision[-1, 1.1*n], -d, 0};
   iprec = Ceiling[pr/27];
   Do[xvals = Flatten[ParallelTable[Table[ll = start + j*tsize + l;
        x = N[E^(Log[ll]/(ll)), iprec];
        pc = iprec;
        While[pc < pr/4, pc = Min[3 pc, pr/4];
         x = SetPrecision[x, pc];
         y = x^ll - ll;
         x = x (1 - 2 y/((ll + 1) y + 2 ll ll));];(**N[Exp[Log[ll]/
        ll],pr/4]**)x = SetPrecision[x, pr];
        xll = x^ll; z = (ll - xll)/xll;
        t = 2 ll - 1; t2 = t^2;
        x = 
         x*(1 + SetPrecision[4.5, pr] (ll - 1)/
              t2 + (ll + 1) z/(2 ll t) - 
            SetPrecision[13.5, pr] ll (ll - 1) 1/(3 ll t2 + t^3 z));(**
        N[Exp[Log[ll]/ll],pr]**)x, {l, 0, tsize - 1}], {j, 0, 
        cores - 1}, Method -> "EvaluationsPerKernel" -> 32]];
    ctab = ParallelTable[Table[c = b - c;
       ll = start + l - 2;
       b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1));
       c, {l, chunksize}], Method -> "EvaluationsPerKernel" -> 16];
    s += ctab.(xvals - 1);
    start += chunksize;
    st = SessionTime[] - T0; kc = k*chunksize;
    ti = (st)/(kc + 10^-4)*(n)/(3600)/(24);
    If[kc > 1, 
     Print[kc, " iterations done in ", N[st, 4], " seconds.", 
      " Should take ", N[ti, 4], " days or ", N[ti*24*3600, 4], 
      "s, finish ", DatePlus[ds, ti], "."]];, {k, 0, end - 1}];
   N[-s/d, pr]];
t2 = Timing[MRB = expM[prec];]; Print["Finished on ", 
 DateString[], ". Processor time was ", t2[[1]], " s."];
Print["Enter MRBtest2 to print ", Floor[Precision[MRBtest2]], 
  " digits"];


 Start time is Tue 23 Apr 2019 06:49:31.

 Iterations required: 132026

 Will give 65 time estimates, each more accurate than the previous.

 Will stop at 133120 iterations to ensure precision of around 100020 decimal places.

 Denominator computed in  17.2324041s.

...

129024 iterations done in 1011. seconds. Should take 0.01203 days or 1040.s, finish Mon 22 Apr 
2019 12:59:16.

131072 iterations done in 1026. seconds. Should take 0.01202 days or 1038.s, finish Mon 22 Apr 
2019 12:59:15.

Finished on Mon 22 Apr 2019 12:59:03. Processor time was 786.797 s.


 Print["Start time is ", ds = DateString[], "."];
 prec = 100000;
 (*Number of required decimals.*)ClearSystemCache[];
 T0 = SessionTime[];
 expM[pre_] := 
   Module[{lg, a, d, s, k, bb, c, end, iprec, xvals, x, pc, cores = 16(*=
     4*number of physical cores*), tsize = 2^7, chunksize, start = 1, 
     ll, ctab, pr = Floor[1.0002 pre]}, chunksize = cores*tsize;
    n = Floor[1.32 pr];
    end = Ceiling[n/chunksize];
    Print["Iterations required: ", n];
    Print["Will give ", end, 
     " time estimates, each more accurate than the previous."];
    Print["Will stop at ", end*chunksize, 
     " iterations to ensure precision of around ", pr, 
     " decimal places."]; d = ChebyshevT[n, 3];
    {b, c, s} = {SetPrecision[-1, 1.1*n], -d, 0};
    iprec = pr/2^6;
    Do[xvals = Flatten[ParallelTable[Table[ll = start + j*tsize + l;
         lg = Log[ll]/(ll); x = N[E^(lg), iprec];
         pc = iprec;
         While[pc < pr, pc = Min[4 pc, pr];
          x = SetPrecision[x, pc];
          xll = x^ll; z = (ll - xll)/xll;
          t = 2 ll - 1; t2 = t^2;
          x = 
           x*(1 + SetPrecision[4.5, pc] (ll - 1)/
                t2 + (ll + 1) z/(2 ll t) - 
              SetPrecision[13.5, 2 pc] ll (ll - 1)/(3 ll t2 + t^3 z))];
          x - lg, {l, 0, tsize - 1}], {j, 0, cores - 1}, 
        Method -> "EvaluationsPerKernel" -> 16]];
     ctab = ParallelTable[Table[c = b - c;
        ll = start + l - 2;
        b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1));
        c, {l, chunksize}], Method -> "EvaluationsPerKernel" -> 16];
     s += ctab.(xvals - 1);
     start += chunksize;
     st = SessionTime[] - T0; kc = k*chunksize;
     ti = (st)/(kc + 10^-10)*(n)/(3600)/(24);
     If[kc > 1, 
      Print[kc, " iterations done in ", N[st - stt, 4], " seconds.", 
       " Should take ", N[ti, 4], " days or ", ti*3600*24, 
       "s, finish ", DatePlus[ds, ti], "."], 
      Print["Denominator computed in  ", stt = st, "s."]];, {k, 0, 
      end - 1}];
    N[-s/d, pr]];
 t2 = Timing[MRBeta2toinf = expM[prec];]; Print["Finished on ", 
  DateString[], ". Processor and total time were ", 
  t2[[1]], " and ", st, " s respectively."];

Start time is  Tue 23 Apr 2019 06:49:31.

Iterations required: 132026

Will give 65 time estimates, each more accurate than the previous.

Will stop at 133120 iterations to ensure precision of around 100020 decimal places.

Denominator computed in  17.2324041s.

...

131072 iterations done in 2589. seconds. Should take 0.03039 days or 2625.7011182s, finish Tue 23 Apr 2019 07:33:16.

Finished on Tue 23 Apr 2019 07:32:58. Processor and total time were 806.5 and 2606.7281972 s respectively.


 MRBeta1 = EulerGamma Log[2] - 1/2 Log[2]^2

 EulerGamma Log[2] - Log[2]^2/2


   N[MRBeta2toinf + MRBeta1 - MRB, 10]

   1.307089967*10^-99742

nice system!

POSTED BY: l van Veen

The new sum is this.

Sum[(-1)^(k + 1)*(-1 + (1 + k)^(1/(1 + k)) - Log[1 + k]/(1 + k) - 
         Log[1 + k]^2/(2*(1 + k)^2)), {k, 0, Infinity}] 

That appears to be the same as for MRB except now we subtract two terms from the series expansion at the origin of k^(1/k). For each k these terms are Log[k]/k + 1/2*(Log[k]/k)^2. Accounting for the signs (-1)^k and summing, as I did earlier for just that first term, we get something recognizable.

Sum[(-1)^(k)*(Log[k]/(k) + Log[k]^2/(2*k^2)), {k, 1, Infinity}]

(* Out[21]= 1/24 (24 EulerGamma Log[2] - 2 EulerGamma \[Pi]^2 Log[2] - 
   12 Log[2]^2 - \[Pi]^2 Log[2]^2 + 24 \[Pi]^2 Log[2] Log[Glaisher] - 
   2 \[Pi]^2 Log[2] Log[\[Pi]] - 6 (Zeta^\[Prime]\[Prime])[2]) *)

So what does this buy us? For one thing, we get even better convergence from brute-force summation, because now our largest terms are O((log k/k)^3) and alternating (which means that if we sum in pairs it's actually O~(1/k^4), with O~ denoting the "soft-oh" that drops polylogarithmic factors).

How helpful is this? Certainly it cannot hurt. But even with 1/k^4 size terms, it takes a long time to get even 40 digits, let alone thousands. So there is more going on in that Crandall approach.

POSTED BY: Daniel Lichtblau

Daniel Lichtblau and others, I just deciphered an identity Crandall used for checking computations of the MRB constant just before he died. It is used in a previous post about checking, where I said it was hard to follow. The MRB constant is B here. In input form, that is

   B= Sum[(-1)^(k + 1)*(-1 + (1 + k)^(1/(1 + k)) - Log[1 + k]/(1 + k) - 
         Log[1 + k]^2/(2*(1 + k)^2)), {k, 0, Infinity}] + 
     1/24 (\[Pi]^2 Log[2]^2 - 
        2 \[Pi]^2 Log[
          2] (EulerGamma + Log[2] - 12 Log[Glaisher] + Log[\[Pi]]) - 
        6 (Zeta^\[Prime]\[Prime])[2]) + 
     1/2 (2 EulerGamma Log[2] - Log[2]^2)

For a 3000-digit numeric approximation, it is

B=NSum[((-1)^(
    k + 1) (-1 + (1 + k)^(1/(1 + k)) - Log[1 + k]/(1 + k) - 
      Log[1 + k]^2/(2 (1 + k)^2))), {k, 0, Infinity}, 
  Method -> "AlternatingSigns", WorkingPrecision -> 3000] + 
 1/24 (\[Pi]^2 Log[2]^2 - 
    2 \[Pi]^2 Log[
      2] (EulerGamma + Log[2] - 12 Log[Glaisher] + Log[\[Pi]]) - 
    6 (Zeta^\[Prime]\[Prime])[2]) + 
 1/2 (2 EulerGamma Log[2] - Log[2]^2)
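This identity is easy to test numerically. The tail series converges like (log n)^3/n^3 and alternates, so the truncation error is below the first omitted term, and even double precision recovers the familiar leading digits of B. The sketch below is my own, not from the thread (the cutoff of 20000 terms is arbitrary); the exact constants come from mpmath.

```python
import math
from mpmath import mp, pi, log, zeta, euler, glaisher

mp.dps = 30

# Closed-form corrections, transcribed from the identity above.
g, ln2 = euler, log(2)
corr = ((pi**2*ln2**2
         - 2*pi**2*ln2*(g + ln2 - 12*log(glaisher) + log(pi))
         - 6*zeta(2, derivative=2))/24
        + (2*g*ln2 - ln2**2)/2)

# Tail series: terms shrink like (log n)^3/n^3 and alternate, so
# truncating at 20000 terms leaves an error around 2*10^-11.
tail = 0.0
for k in range(20000):
    n = k + 1
    ln = math.log(n)
    tail += (-1)**(k + 1)*(n**(1.0/n) - 1 - ln/n - ln*ln/(2.0*n*n))

B = tail + float(corr)
print(B)   # ≈ 0.187859642462...
```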

It is analytically straightforward too, because

Sum[(-1)^(k + 1)*Log[1 + k]^2/(2 (1 + k)^2), {k, 0, Infinity}]

gives

1/24 (-\[Pi]^2 (Log[2]^2 + EulerGamma Log[4] - 
      24 Log[2] Log[Glaisher] + Log[4] Log[\[Pi]]) - 
   6 (Zeta^\[Prime]\[Prime])[2])

I wonder why he chose it?

The identity in question is straightforward. Write n^(1/n) as Exp[Log[n]/n], take a series expansion at 0, and subtract the first term from all summands; that means subtracting off Log[n]/n in each summand. This gives the left-hand side. We know it must be the MRB constant minus the sum of the terms we subtracted off. Now add all of them up, accounting for signs.

Expand[Sum[(-1)^n*Log[n]/n, {n, 1, Infinity}]]

(* Out[74]= EulerGamma Log[2] - Log[2]^2/2 *)

So we recover the right hand side.

I have not understood whether this identity helps with Crandall's iteration. One advantage it confers, a good one in general, is that it converts a conditionally convergent alternating series into one that is absolutely convergent. From a numerical computation point of view this is always good.

POSTED BY: Daniel Lichtblau

Crandall is not using his eta formulas directly! He computes Sum[(-1)^k*(k^(1/k) - 1), {k, 1, Infinity}] directly!

Going back to Crandall's code:

(*Fastest (at RC's end) as of 30 Nov 2012.*)
prec = 500000;(*Number of required decimals.*)
ClearSystemCache[];
T0 = SessionTime[];
expM[pre_] := 
  Module[{a, d, s, k, bb, c, n, end, iprec, xvals, x, pc, cores = 4, 
    tsize = 2^7, chunksize, start = 1, ll, ctab, 
    pr = Floor[1.02 pre]}, chunksize = cores*tsize;
   n = Floor[1.32 pr];
   end = Ceiling[n/chunksize];
   Print["Iterations required: ", n];
   Print["end ", end];
   Print[end*chunksize];
   d = N[(3 + Sqrt[8])^n, pr + 10];
   d = Round[1/2 (d + 1/d)];
   {b, c, s} = {SetPrecision[-1, 1.1*n], -d, 0};
   iprec = Ceiling[pr/27];
   Do[xvals = Flatten[ParallelTable[Table[ll = start + j*tsize + l;
        x = N[E^(Log[ll]/(ll)), iprec];
        pc = iprec;
        While[pc < pr, pc = Min[3 pc, pr];
         x = SetPrecision[x, pc];
         y = x^ll - ll;
         x = x (1 - 2 y/((ll + 1) y + 2 ll ll));];(*N[Exp[Log[ll]/ll],
        pr]*)x, {l, 0, tsize - 1}], {j, 0, cores - 1}, 
       Method -> "EvaluationsPerKernel" -> 1]];
    ctab = Table[c = b - c;
      ll = start + l - 2;
      b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1));
      c, {l, chunksize}];
    s += ctab.(xvals - 1);
    start += chunksize;
    Print["done iter ", k*chunksize, " ", SessionTime[] - T0];, {k, 0,
      end - 1}];
   N[-s/d, pr]];

t2 = Timing[MRBtest2 = expM[prec];];
MRBtest2 - MRBtest3

x = N[E^(Log[ll]/(ll)), iprec]; gives k^(1/k) to only Ceiling[pr/27] decimal places; the values are either 1.0, 1.1, 1.2, 1.3, or 1.4 (usually 1.1 or 1.0 for the first 27 desired decimals). On the other hand,

While[pc < pr, pc = Min[3 pc, pr];
 x = SetPrecision[x, pc];
 y = x^ll - ll;
 x = x (1 - 2 y/((ll + 1) y + 2 ll ll));],

takes the short-precision x and gives it the necessary precision and accuracy for k^(1/k) (k is ll there). It actually computes k^(1/k). Then he remarks, "(N[Exp[Log[ll]/ll], pr])."
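Incidentally, the update x (1 - 2 y/((ll + 1) y + 2 ll ll)) is Halley's method applied to f(x) = x^ll - ll, which converges cubically; that is why pc = Min[3 pc, pr] can triple the working precision on every pass. Here is a Python/mpmath sketch of the same idea (my own translation, not Crandall's code; the function name and precision choices are illustrative):

```python
from mpmath import mp, mpf, exp, log

def kth_root_of_k(k, target_dps):
    """k^(1/k) via Halley's method for f(x) = x^k - k, tripling the
    working precision on each pass, as in the While loop above."""
    dps = 20
    mp.dps = dps
    x = exp(log(k)/k)            # cheap low-precision seed
    while dps < target_dps:
        dps = min(3*dps, target_dps)
        mp.dps = dps + 10        # a few guard digits
        y = x**k - k             # residual; 0 exactly when x = k^(1/k)
        x = x*(1 - 2*y/((k + 1)*y + 2*k*k))
    mp.dps = target_dps
    return +x                    # round to the requested precision

print(kth_root_of_k(3, 50))      # 3^(1/3) to 50 digits
```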

After finding a fast way to compute k^(1/k) to the necessary precision, he uses Cohen's algorithm 1 (see a screenshot in a previous post) to accelerate convergence of Sum[(-1)^k*(k^(1/k) - 1), {k, 1, Infinity}]. That is his secret!
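For reference, that acceleration scheme is short on its own. Below is a Python/mpmath sketch (my translation, not from Crandall's code; the helper name and the choice n = 50 are illustrative) of Cohen, Rodriguez Villegas, and Zagier's "Algorithm 1", the same recurrence that appears above as d = N[(3 + Sqrt[8])^n, ...] and b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1)):

```python
from mpmath import mp, mpf, sqrt, exp, log

mp.dps = 40

def cvz_alt_sum(a, n):
    """Cohen-Rodriguez Villegas-Zagier 'Algorithm 1' for
    sum_{k>=0} (-1)^k a(k); the error shrinks roughly like
    (3 + sqrt(8))^(-n), about 0.765 decimal digits per term."""
    d = (3 + sqrt(8))**n
    d = (d + 1/d)/2                  # = ChebyshevT[n, 3]
    b, c, s = mpf(-1), -d, mpf(0)
    for k in range(n):
        c = b - c
        s += c*a(k)
        b = b*2*(k + n)*(k - n)/((k + 1)*(2*k + 1))
    return s/d

# MRB = sum_{k>=1} (-1)^k (k^(1/k) - 1) = -sum_{k>=0} (-1)^k a(k)
# with a(k) = (k+1)^(1/(k+1)) - 1; hence the final N[-s/d, pr] above.
a = lambda k: exp(log(k + 1)/(k + 1)) - 1
mrb = -cvz_alt_sum(a, 50)
print(mrb)   # 0.187859642462067...
```

At roughly 0.765 digits per term, pr digits need about 1.31 pr terms, which is why the programs above take n = Floor[1.32 pr] iterations.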

As I mentioned in a previous post, the "MRBtest2 - MRBtest3" is for checking against a known-to-be-accurate approximation to the MRB constant, MRBtest3.

I'm just excited that I figured it out, as you can tell.

Nice work. Worth a bit of excitement, I'd say.

POSTED BY: Daniel Lichtblau

Daniel Lichtblau and others, Richard Crandall did intend to explain his work on the MRB constant and his program to compute it. When I wrote him with a possible small improvement to his program, he said, "It's worth observing when we write it up." See the screenshot:

I can't say I understand it either. My guess is the Eta stuff comes from summing (-1)^k*(Log[k]/k)^n over k, as those are the terms that appear in the double sum you get from expanding k^(1/k) - 1 in powers of Log[k]/k (use k^(1/k) = Exp[Log[k]/k] and the power series for Exp). Even if it does come from this, the details remain elusive.

POSTED BY: Daniel Lichtblau

What Richard Crandall and maybe others did to come up with that method is really good and somewhat mysterious. I still don't really understand the inner workings, and I had shown him how to parallelize it. So the best I can say is that it's really hard to compete against magic. (I don't want to discourage others, I'm just explaining why I myself would be reluctant to tackle this. Someone less familiar might actually have a better chance of breaking new ground.)

In a way this should be good news. Should it ever become "easy" to compute, the MRB number would lose what is perhaps its biggest point of interest. It just happens to be on that cusp of tantalizingly "close" to easily computable (perhaps as sums of zeta function and derivatives thereof), yet still hard enough that it takes a sophisticated scheme to get more than a few dozen digits.

POSTED BY: Daniel Lichtblau

It is hard to be certain that c1 and c2 are correct to 77 digits, even though they agree to that extent. I'm not saying that they are incorrect, and presumably you have verified this. I am just claiming that, whatever methods NSum may be using to accelerate convergence, there is really no guarantee that they apply to this particular computation. So c1 and c2 could agree to that many places because they are computed in a similar manner, without all digits actually being correct.

POSTED BY: Daniel Lichtblau