
Try to beat these MRB constant records!

Posted 8 years ago | 674961 Views | 73 Replies | 17 Total Likes

CMRB is the MRB constant.

POSTED BY: Marvin Ray Burns,

WITH COAUTHORS: Giuseppe Peano in the FORMULATION OF CMRB MATHEMATICS, Isaac Newton in the PRINCIPLES OF CMRB MATHEMATICS, Gottfried Wilhelm Leibniz in the REPORTS OF CMRB SCHOLARS, and most significantly, Euclid in the ELEMENTS OF CMRB GEOMETRY (assuming a different form of the parallel postulate).

When asked for an image that matches everything contained in this discussion, Google's AI gave a cartoon credited to V. J. Motto.

An Easter egg for you to find below:

(In another reality, I invented CMRB and then discovered many of its qualities.)




Content as of Sept 30, 2022

The first post

Proofs of the nature of the MRB constant, CMRB $=\sum_{n=1}^\infty(-1)^n(n^{1/n}-1)$:

Is it convergent?

Is $1$ the only value of $x_0$ for which $\sum_{n=1}^\infty(-1)^n(n^{1/n}-x_0)$ is convergent?

Is it absolutely convergent?

  1. Q and A,
  2. My claim to the MRB constant (CMRB), or a case of calculus déjà vu?
  3. What exactly is it?
  4. Where is it found?
  5. How it all began,
  6. The why and what of the CMRB Records,
  7. CMRB and its applications.

The following contents of the first post have been moved. Use CTRL+F to locate them.

  1. CTRL+F "Real-World, and beyond, Applications",
  2. CTRL+F "MeijerG Representation for" CMRB,
  3. CTRL+F "the Laplace transform analogy to" CMRB,
  4. CTRL+F CMRB "formulas and identities",
  5. CTRL+F "Primary Proof 1",
  6. CTRL+F "Primary Proof 2",
  7. CTRL+F "Primary Proof 3",
  8. CTRL+F "The relationship between" CMRB and its integrated analog,
  9. The MRB constant supercomputer 0

Second post:

The following might help anyone serious about breaking my record.

Third post

The following email Crandall sent me before he died might be helpful for anyone checking their results.

Fourth post

Perhaps some of these speed records will be easier to beat.

Many more interesting posts

...including the MRB constant supercomputers 1 and 2.

...including records of computing the MRB constant from Crandall's eta derivative formulas.

...including all the methods used to compute CMRB and their efficiency.

...including the dispersion of the digits 0 through 9 in CMRB decimal expansions.

...including the convergence rate of 3 major different forms of CMRB.

...including complete documentation of all multimillion-digit records with many highlights.

...including arbitrarily close approximation formulas for CMRB.

...including efficient programs to compute the integrated analog (MKB) of CMRB.

...including a recent discovery that could help in verifying digital expansions of the integrated analog (MKB) of CMRB.

...including CTRL+F "the Laplace transform analogy to" CMRB.

...including CTRL+F "Real-World, and beyond, Applications".

... including an overview of all CMRB speed records, by platform.

...including a few attempts at a spectacular 7 million digits using Mathematica.

...including an inquiry for a closed form for CMRB.

...including a question as to how normal CMRB is.






Proof of the Leibniz criterion invoked above: We will prove that both the partial sums $S_{2m+1}=\sum_{n=1}^{2m+1} (-1)^{n-1} a_n$ with an odd number of terms, and $S_{2m}=\sum_{n=1}^{2m} (-1)^{n-1} a_n$ with an even number of terms, converge to the same number ''L''. Thus the usual partial sum $S_k=\sum_{n=1}^k (-1)^{n-1} a_n$ also converges to ''L''.

The odd partial sums decrease monotonically:

$$ S_{2(m+1)+1}=S_{2m+1}-a_{2m+2}+a_{2m+3} \leq S_{2m+1} $$

while the even partial sums increase monotonically:

$$ S_{2(m+1)}=S_{2m}+a_{2m+1}-a_{2m+2} \geq S_{2m} $$

both because $a_n$ decreases monotonically with $n$.

Moreover, since the $a_n$ are positive, $ S_{2m+1}-S_{2m}=a_{2m+1} \geq 0 $. Thus we can collect these facts to form the following suggestive inequality:

$ a_1 - a_2 = S_2 \leq S_{2m} \leq S_{2m+1} \leq S_1 = a_1. $

Now, note that $a_1 - a_2$ is a lower bound of the monotonically decreasing sequence $S_{2m+1}$; the monotone convergence theorem then implies that this sequence converges as ''m'' approaches infinity. Similarly, the sequence of even partial sums converges too.

Finally, they must converge to the same number because

$$ \lim_{m\to\infty}(S_{2m+1}-S_{2m})=\lim_{m\to\infty}a_{2m+1}=0. $$

Call the limit ''L'', then the monotone convergence theorem also tells us extra information that

$$ S_{2m} \leq L \leq S_{2m+1} $$

for any ''m''. This means the partial sums of an alternating series also "alternates" above and below the final limit. More precisely, when there is an odd (even) number of terms, i.e. the last term is a plus (minus) term, then the partial sum is above (below) the final limit.
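For the MRB series itself, this bracketing can be checked numerically. A minimal Python sketch (the reference value 0.18785964246206712 and the convention $\sum_{n\ge1}(-1)^n(n^{1/n}-1)$ are from this discussion; note that $a_n = n^{1/n}-1$ only decreases from $n = 3$ on, so the bracketing is tested from there):

```python
# Partial sums of the MRB series: even ones sit above the limit,
# odd ones below, and the error is bounded by the first omitted term.
M = 0.18785964246206712  # reference value from this discussion

N = 100000
S = 0.0
partial = []
for n in range(1, N + 1):
    S += (-1) ** n * (n ** (1.0 / n) - 1.0)
    partial.append(S)

for k in range(3, 1001):
    if k % 2 == 0:
        assert partial[k - 1] >= M   # even partial sums are upper bounds
    else:
        assert partial[k - 1] <= M   # odd partial sums are lower bounds

# |S_N - L| is at most the first omitted term a_{N+1}
assert abs(partial[-1] - M) <= (N + 1) ** (1.0 / (N + 1)) - 1.0
```

The slow, roughly $\log n / n$, decay of the terms is why the record computations discussed below rely on acceleration schemes rather than direct summation.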



Next we ask and observe: for which $z_0$ does $\sum_{n=1}^\infty(-1)^n(n^{1/n}-z_0)$ converge?

The short "proof" is that $z_0$ must be real for the terms to tend to 0, and since $\lim_{n\to\infty}n^{1/n}=1$, $z_0=1$.

(three plots of the series terms for different values of $x_0$)

That is because the first of the above plots has a point, at about n = 40, where the alternating series fails to alternate; the second has that quality only in the limit as n goes to infinity (since the series converges for $x_0=1$); and the third has no point with that quality. Together they show that $x_0=1$ is the only value for which the series converges. See this Wolfram Demonstration: you can open and experiment with it.



I asked a math professor if the series

$ \sum_{n=1}^\infty(-1)^n(n^{1/n}-1)$

is absolutely convergent, and he replied as follows.

It is not absolutely convergent. When you take the absolute value, the common term is $n^{1/n}-1$. For all large $n$ this term is bigger than $1/n$, and therefore the series diverges; compare the harmonic series.

To see the inequality, rewrite as

$$n^{1/n}-1>1/n $$

$$n^{1/n}>1+1/n $$

$$n>(1+1/n)^n. $$

The right-hand sides form a convergent sequence (it converges to $e$) and are therefore bounded; since $(1+1/n)^n < e < 3 \leq n$ for $n \geq 3$, the inequality holds for all $n \geq 3$.
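The chain of inequalities is easy to sanity-check numerically; a small Python sketch (the cutoff $n \geq 3$ matters, since $n = 2$ fails: $2^{1/2}-1 \approx 0.414 < 1/2$):

```python
# n^(1/n) - 1 > 1/n is equivalent to n > (1 + 1/n)^n, and since
# (1 + 1/n)^n < e < 3, the inequality holds for every n >= 3.
ok = all(n ** (1.0 / n) - 1.0 > 1.0 / n for n in range(3, 100000))
print(ok)  # True

# It fails at n = 2, as expected:
print(2 ** 0.5 - 1 > 0.5)  # False
```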

See MathWorld: e.


P.S.

For the series of the absolute values, I noticed $$-1<\sum_{n=1}^x\left(n^{\frac{1}{n}}-1\right)-\sqrt{x}-1<0.5$$ for $$11\leq x\leq 286$$

and $$-0.9<\sum_{n=3}^x\left(n^{\frac{1}{n}}-1\right)-\sqrt{x}-\frac{1}{2}<0.6$$

for the same domain.




Q&A

Q: What can you expect from reading about CMRB and its record computations?

A:

As you see, the war treated me kindly enough, in spite of the heavy gunfire, to allow me to get away from it all and take this walk in the land of your ideas.

— Karl Schwarzschild (1915), “Letter to Einstein”, Dec 22

Q: Can you calculate more digits of the MRB constant?

A:

With the availability of high-speed electronic computers, it is now quite convenient to devise statistical experiments for the purpose of estimating certain mathematical constants and functions.

Copyright © 1966 ACM
(Association for Computing Machinery)

New York, NY, United States

Q: How can you compute them?

A:

The value of $\pi$ has engaged the attention of many mathematicians and calculators from the time of Archimedes to the present day, and has been computed from so many different formulae, that a complete account of its calculation would almost amount to a history of mathematics.

- James Glaisher (1848-1928)

Q: Why should you do it?

A:

While it is never safe to affirm that the future of Physical Science has no marvels in store even more astonishing than those of the past, it seems probable that most of the grand underlying principles have been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles to all the phenomena which come under our notice. It is here that the science of measurement shows its importance — where quantitative work is more to be desired than qualitative work. An eminent physicist remarked that the future truths of physical science are to be looked for in the sixth place of decimals.

Albert A. Michelson (1894)

Q: Why are those digits there?

A:

There is nothing without a reason.

Read more at: https://minimalistquotes.com/gottfried-leibniz-quote-229585/




This discussion is not crass bragging; it is an attempt by this amateur to share his discoveries with the greatest audience possible.

Amateurs have been known to make a few significant discoveries, as discussed here. This amateur has made his best attempts at proving his discoveries and has often asked for help in doing so. Great thanks to all of those who offered a hand! If I've failed to give you credit for any of your suggestions, let me know and I will correct that issue!

As I went more and more public with my discoveries, I made several attempts to see what portion of them was original. What I concluded from these investigations was that the only original thought I had was the obstinacy to think anything meaningful can be found in the infinite sum shown next. Nonetheless, it is possible that someone might have a claim to this thought to whom I have not given proper credit. If that is you, I apologize. The last thing we need is another calculus war, this time for a constant. However, if your thought was published after mine, as Newton said concerning Leibniz's claim to calculus, "To take away the Right of the first inventor, and divide it between him and that other would be an Act of Injustice." [Sir Isaac Newton, The Correspondence of Isaac Newton, 7 v., edited by H. W. Turnbull, J. F. Scott, A. Rupert Hall, and Laura Tilling, Cambridge University Press, 1959-1977: VI, p. 455]

Here is what Google says about the MRB constant as of August 8, 2022, at https://www.google.com/search?q=who+discovered+the+%22MRB+constant%22


(the calculus war for CMRB)

CREDIT

https://soundcloud.com/cmrb/homer-simpson-vs-peter-griffin-cmrb


From Wikipedia, the free encyclopedia


The calculus controversy (German: Prioritätsstreit, "priority dispute") was an argument between the mathematicians Isaac Newton and Gottfried Wilhelm Leibniz over who had first invented calculus.

(Newton's notation as published in PRINCIPIA MATHEMATICA [THE PRINCIPLES OF MATHEMATICS])

(Leibniz's notation as published in the scholarly journal Acta Eruditorum [Reports of Scholars])

Whether or not we divide the credit between the two pioneers,

Wikipedia said one thing that distinguishes their finds from the work of their antecedents:

Newton came to calculus as part of his investigations in physics and geometry. He viewed calculus as the scientific description of the generation of motion and magnitudes. In comparison, Leibniz focused on the tangent problem and came to believe that calculus was a metaphysical explanation of the change. Importantly, the core of their insight was the formalization of the inverse properties between the integral and the differential of a function. This insight had been anticipated by their predecessors, but they were the first to conceive calculus as a system in which new rhetoric and descriptive terms were created.[24] Their unique discoveries lay not only in their imagination but also in their ability to synthesize the insights around them into a universal algorithmic process, thereby forming a new mathematical system.

Just as Newton and Leibniz created a new system from the elaborate, confusing structure designed and built by their predecessors, my forerunners studied series for centuries, leaving a labyrinth of sums, and then I created a "new scheme" for the CMRB "realities" to escape it!


It is defined in all of the following places, the majority of which attribute it to my curiosity.



CMRB

= B =

From Richard Crandall in 2012, courtesy of Apple Computer's advanced computational group, we have the following computational scheme using equivalent sums of the zeta variant, the Dirichlet eta function:

Eta denotes the kth derivative of the Dirichlet eta function, evaluated at m and at 0 respectively. The cj's are found by the code,

  N[ Table[Sum[(-1)^j Binomial[k, j] j^(k - j), {j, 1, k}], {k, 1, 10}]]

(* {-1., -1., 2., 9., 4., -95., -414., 49., 10088., 55521.}*)
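The same table can be reproduced outside Mathematica; a short Python sketch of the coefficients (as exact integers rather than the `N[...]` reals):

```python
from math import comb

# c_k = sum_{j=1}^{k} (-1)^j * C(k, j) * j^(k - j), for k = 1..10
c = [sum((-1) ** j * comb(k, j) * j ** (k - j) for j in range(1, k + 1))
     for k in range(1, 11)]
print(c)  # [-1, -1, 2, 9, 4, -95, -414, 49, 10088, 55521]
```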


Crandall's first "B" formula is proven below by Gottfried Helms, and it is proven more rigorously afterward by considering the conditionally convergent sum defining CMRB. Then formula (44) is a Taylor expansion of eta(s) around s = 0.


Here we have the following explanation.

Even though one has cause to be a little bit wary around formal rearrangements of conditionally convergent sums (see the Riemann series theorem), it's not very difficult to validate the formal manipulation of Helms. The idea is to cordon off a big chunk of the infinite double summation (all the terms from the second column on) that we know is absolutely convergent, which we are then free to rearrange with impunity. (Most relevantly for our purposes here, see pages 80-85 of this document, culminating with the Fubini theorem, which is essentially the manipulation Helms is using.)

So, by definition, the MRB constant $B$ is the conditionally convergent sum $\sum_{n \geq 1} (-1)^n (n^{1/n} - 1)$. Put $a_n = (-1)^n (n^{1/n} - 1)$,

so $B = \sum_{n \geq 1} a_n.$

Looking at the first column, put $b_n = (-1)^n \frac{\log n}{n},$

so $\eta^{(1)}(1) = \sum_{n \geq 1} b_n$

as a conditionally convergent series.

We have

$$B - \eta^{(1)}(1) = \sum_{n \geq 1} (a_n - b_n) = \sum_{n \geq 1} \sum_{m \geq 2} (-1)^n \frac{(\log n)^m}{n^m m!}$$

(The first equation is an elementary limit statement that says if $\sum_{n \geq 1} a_n$ converges and $\sum_{n \geq 1} b_n$ converges, then $\sum_{n \geq 1} (a_n - b_n)$ also converges and $\sum_{n \geq 1} a_n - \sum_{n \geq 1} b_n = \sum_{n \geq 1} (a_n - b_n)$. It doesn't matter at all whether the convergence of either series is conditional or absolute.)

So now we check the absolute convergence of the right-hand side, i.e., that $\sum_{n \geq 1} \sum_{m \geq 2} \frac{(\log n)^m}{n^m m!}$ converges. (Remember what this means in the case of infinite sums of positive terms: it means that there is a number $K$ such that every finite partial sum $S$ is bounded above by $K$; the least such upper bound will be the number that the infinite sum converges to.) So take any such finite partial sum $S$, and rearrange its terms so that the terms in the $m = 2$ column come first, then the terms in the $m = 3$ column, and so on. An upper bound for the terms of $S$ in the $m = 2$ column is $\frac{\zeta^{(2)}(2)}{2!}$. Put that one aside.

For the $m = 3$ column, an upper bound is $\sum_{n \geq 2} \frac{(\log n)^3}{n^3 3!}$ (we drop the $n=1$ term which is $0$). By calculus we have $\log n \leq n^{1/2}$ for all $n \geq 2$, so this has upper bound $\frac1{3!} \sum_{n \geq 2} \frac1{n^{3/2}} \leq \frac1{3!} \int_1^\infty \frac{dx}{x^{3/2}}$ by an integral test, which yields $\frac{2}{3!}$ as an upper bound. Applying the same reasoning for the $m$ column from $m = 4$ on, an upper bound for that column would be $\frac1{m!} \int_1^\infty \frac{dx}{x^{m/2}} = \frac{2}{m!(m-2)}$. Adding all those upper bounds together, an upper bound for the entire doubly infinite sum would be

$$\frac{\zeta^{(2)}(2)}{2!} + \sum_{m \geq 3} \frac{2}{m!(m-2)}$$

which certainly converges. So we have absolute convergence of the doubly infinite sum.
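Both numeric ingredients of that bound are easy to check; a small Python sketch (truncating the rapidly convergent tail at m = 30 is my choice):

```python
import math

# log(n) <= sqrt(n) for all n >= 2, used for the per-column bounds
assert all(math.log(n) <= math.sqrt(n) for n in range(2, 100000))

# the tail bound sum_{m >= 3} 2/(m!(m-2)) from the argument above
tail = sum(2.0 / (math.factorial(m) * (m - 2)) for m in range(3, 30))
print(tail)  # about 0.3813, comfortably finite
```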

Thus we are in a position to apply the Fubini theorem, which justifies the rearrangement expressed in the first of the following equations

$$\sum_{n \geq 1} \sum_{m \geq 2} (-1)^n \frac{(\log n)^m}{n^m m!} = \sum_{m \geq 2} \sum_{n \geq 1} (-1)^n \frac{(\log n)^m}{n^m m!} = \sum_{m \geq 2} (-1)^{m+1} \frac{\eta^{(m)}(m)}{m!}$$

giving us what we wanted.


The integral forms for CMRB differ only by a trigonometric multiplicand from those of its analog.

In[147]:= CMRB = 
 Re[NIntegrate[(I*E^(r*t))/Sin[Pi*t] /. r -> Log[t^(1/t) - 1]/t, 
       {t, 1, I*Infinity}, WorkingPrecision -> 30]]

Out[147]= 0.187859642462067120248517934054

In[148]:= Quiet[MKB = NIntegrate[E^(r*t)*(Cos[Pi*t] + I*Sin[Pi*t]) /. 
         r -> Log[t^(1/t) - 1]/t, {t, 1, I*Infinity}, 
   WorkingPrecision -> 30, 
       Method -> "Trapezoidal"]]

Out[148]= 0.0707760393115292541357595979381 - 
 0.0473806170703505012595927346527 I


In[182]:= CMRB = 
 Re[NIntegrate[(I*E^(r*t))/Sin[Pi*t] /. r -> Log[t^(1/t) - 1]/t, {t, 
    1, I*Infinity}, WorkingPrecision -> 30]]

Out[182]= 0.187859642462067120248517934054

In[203]:= CMRB - 
 N[NSum[(E^( r*t))/Cos[Pi*t] /. r -> Log[t^(1/t) - 1]/t, {t, 1, 
    Infinity}, Method -> "AlternatingSigns", WorkingPrecision -> 37], 
  30]

Out[203]= 5.*10^-30

In[223]:= CMRB - 
 Quiet[N[NSum[
    E^(r*t)*(Cos[Pi*t] + I*Sin[Pi*t]) /. r -> Log[t^(1/t) - 1]/t, {t, 
     1, Infinity}, Method -> "AlternatingSigns", 
    WorkingPrecision -> 37], 30]]

Out[223]= 5.*10^-30

In[204]:= Quiet[
 MKB = NIntegrate[
   E^(r*t)*(Cos[Pi*t] + I*Sin[Pi*t]) /. r -> Log[t^(1/t) - 1]/t, {t, 
    1, I*Infinity}, WorkingPrecision -> 30, Method -> "Trapezoidal"]]

Out[204]= 0.0707760393115292541357595979381 - 
 0.0473806170703505012595927346527 I
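The first integral above can be cross-checked without Mathematica. Writing the contour as $t = 1 + iy$ gives $\sin(\pi t) = -i\sinh(\pi y)$ and hence $C_{\mathrm{MRB}} = \int_0^\infty \operatorname{Im}\left[(1+iy)^{1/(1+iy)}\right]/\sinh(\pi y)\, dy$. The following Python sketch (the substitution, the truncation at $y = 40$, and the Simpson grid are my choices, not part of the original session) reproduces the CMRB value computed above:

```python
import math

def integrand(y):
    # Im[(1+iy)^(1/(1+iy))] / sinh(pi*y); the y -> 0 limit is 1/pi
    if y < 1e-12:
        return 1.0 / math.pi
    t = complex(1.0, y)
    return (t ** (1.0 / t)).imag / math.sinh(math.pi * y)

def simpson(f, a, b, n):
    # composite Simpson's rule on n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

# the integrand decays like e^(-pi*y), so [0, 40] is ample
mrb = simpson(integrand, 0.0, 40.0, 40000)
print(mrb)  # 0.18785964246...
```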

We derive the preceding and following integral forms of CMRB from the Abel-Plana Formula, considering the following result.

  ExpToTrig[Re[Exp[2  Pi z - 1]]]








How it all began

From these meager beginnings: My life has proven that one's grades in school are not necessarily a prognostication of achievement in mathematics. For evidence of my poor grades see my report cards.

The eldest child, raised by my sixth-grade-educated mother, I was a D and F student through 6th grade the second time, but in Jr high, in 1976, we were given a self-paced program. Then I noticed there was more to math than rote multiplication and division of 3 and 4-digit numbers! Instead of repetition, I was able to explore what was out there. The more I researched, the better my grades got! It was amazing!! So, having become proficient in mathematics during my high school years, on my birthday in 1994, I decided to put down the TV remote control and pick up a pencil. I began by writing out the powers of 2, like 2*2, 2*2*2, etc. I started making up algebra problems to work at solving, and even started buying books on introductory calculus.
Then came my first opportunity to attend university. I took care of my mother, who suffered from Alzheimer's, so instead of working my usual 60+ hours a week, I started taking a class or two a semester. After my mom passed away, I went back to working my long hours but always kept up on my math hobby!
Occasionally, I make a point of going to school and taking a class or two to enrich myself and my math hobby. This has become such a successful routine that some strangers listed me on Wikipedia as an amateur mathematician, alphabetically following Jost Bürgi, who, at the behest of Johannes Kepler, constructed a table of progressions now understood as antilogarithms, independently of John Napier.
I've even studied a few graduate-level topics in Mathematics.

Why I started so slowly and am now a pioneer is a mystery! Could it say something about the educational system? Can the reason be found deep in psychology? (After all, I never made any progress in math or discoveries without first assuming I could, even when others were telling me I couldn't!) Or could it be that the truth is a little of both and more?



From these meager beginnings:

On January 11 and 23, 1999, I wrote:

I have started a search for a new mathematical constant! Does anyone want to help me? Consider, 1^(1/1)-2^(1/2)+3^(1/3)...I will take it apart and examine it "bit by bit." I hope to find connections to all kinds of arithmetical manipulations. I realize I am in "no man's land," but I work best there! If anyone else is foolhardy enough to come along and offer advice, I welcome you.

The point is that I found the MRB constant (CMRB): after all the giants were through roaming the forest of numbers and finding all they found, one virgin mustard seedling caught my eye. So I carefully "brought it up" to a level of maturity, and my own understanding of math along with it! (In another reality, I invented CMRB and then discovered many of its qualities.)

In doing so, I came to find out that this constant (CMRB)


(from https://mathworld.wolfram.com/MRBConstant.html)

was more closely related to other constants than I could have imagined.

As the apprentice of all, building upon the foundation of Chebyshev (1854-1859) on the best uniform approximation of functions, I did as vowed on January 23, 1999: "I took CMRB apart and examined it 'bit by bit,' finding connections to all kinds of arithmetical manipulations." Not satisfied with conveniently construed constructions (haphazardly put-together formulas) that a naïve use of numeric search engines like Wolfram Alpha or the OEIS might give, I set out to determine the most interesting (by being the most improbable but true) approximations for each constant in relation to it.

For example, consider its relationship to Viswanath's constant (VC)


(from https://mathworld.wolfram.com/RandomFibonacciSequence.html)

With both being functions of $x^{1/x}$ alone, we have these near-zeros of VC using CMRB, which have a ratio of Gelfond's constant $e^\pi$. By "near-zeros," I mean we have the following.

VC/(6*(11/7 - ProductLog[1])) - CMRB
CMRB - (5*VC^6)/56

Out[54]= 3.4164*10^-8

Out[55]= 1.47*10^-9

See the cloud notebook. The near-zero, CMRB - (5*VC^6)/56, is so small that Wolfram Alpha yields a rational power of VC for the 6th root of 56/5 CMRB.




Then there is the Rogers-Ramanujan continued fraction, R(q), of CMRB, which is well linearly approximated by terms of itself alone:



From these meager beginnings:

On Feb 22, 2009, I wrote,

It appears that the absolute value, minus 1/2, of the limit(integral of (-1)^x*x^(1/x) from 1 to 2N as N->infinity) would equal the partial sum of (-1)^x*x^(1/x) from 1 to where the upper summation is even and growing without bound. Is anyone interested in improving or disproving this conjecture? 


I came to find out that my discovery, a very slowly converging oscillatory integral, would later be further studied, as seen in Google Scholar.

Here is proof of a faster converging integral for its integrated analog (The MKB constant) by Ariel Gershon.

g(x)=x^(1/x), M1=hypothesis

Which is the same as

because changing the upper limit to 2N + 1 increases M1 by $2i/\pi$.

MKB constant calculations have been moved to their discussion at http://community.wolfram.com/groups/-/m/t/1323951?ppauth=W3TxvEwH .

Iimofg->1

Cauchy's Integral Theorem

Lim surface h gamma r=0

Lim surface h beta r=0

limit to 2n-1

limit to 2n-

Plugging in equations [5] and [6] into equation [2] gives us:

leftright

Now take the limit as N → ∞ and apply equations [3] and [4]: QED. He went on to note that

enter image description here

After I mentioned it to him, Richard Mathar published his meaningful work on it on arXiv, where M is the MRB constant and M1 is MKB:

enter image description here

M1 has a convergent series, which has lines of symmetry across whole- and half-number points on the x-axis, and half-periods of exactly 1, for both real and imaginary parts, as in the following plots. And where

f[x_] = Exp[I Pi x] (x^(1/x) - 1); Assuming[
 x \[Element] Integers && x > 1, 
 FullSimplify[Re[f[x + 1/2]] - Im[f[x]]]]

gives 0

ReImPlot[(-1)^x (x^(1/x) - 1), {x, 1, Infinity}, PlotStyle -> Blue, 
 Filling -> Axis, FillingStyle -> {Green, Red}]

big plot small plot


Adding 75i to the upper limit of the partial integration yields 100 additional digits of M2 and of CMRB.

Here is a heuristic explanation for the observed behavior.

Write the integral as an infinite series, $m= \sum_{k = 1}^\infty a_k$ with $a_k = \int_{i kM}^{i (k+1)M} \frac{t^{1/t}-1}{\sin (\pi t)} \, dt$ for $k \ge 2$ and the obvious modification for $k = 1$. We are computing the partial sums of this series with $M = 75$, and the question is why the series remainders decrease by a factor of $10^{-100}$ for each additional term.

The integrand is a quotient with numerator $t^{1/t} - 1 \approx \log t / t$ and with $1/\sin (\pi t) \approx 2 e^{i \pi t}$ for large imaginary $t$. The absolute values of the terms therefore satisfy $|a_k| \approx \frac{\log |kM|}{|kM|} \, e^{-\pi kM}$. This implies

$$\frac{|a_{k+1}|}{|a_k|} \to e^{-\pi M}$$

as $k \to \infty$. Consequently the remainders $\sum_{k = N}^\infty$ behave like $e^{- \pi N M}$. They decrease by a factor of $e^{-\pi M}$ for each additional term. And for $M = 75$, this is approximately $10^{-100}$, predicting an increase in accuracy of 100 digits whenever the upper integration bound increased by $75i$.
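The final step is plain arithmetic:

```latex
e^{-\pi M} = e^{-75\pi} = 10^{-75\pi/\ln 10} \approx 10^{-102.3},
```

so each additional $75i$ on the upper bound of integration buys roughly 100 (in fact about 102) correct digits.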


The following "partial proof of it" is from Quora.



I developed a lot more theory behind it and ways of computing many more digits in this linked Wolfram post.

Here is how my analysis (along with improvements to Mathematica) has improved the speed of calculating that constant's digits:

(digits and seconds)

Better 2022 results are expected soon!

2022 results documentation:



From these meager beginnings:

In October 2016, I wrote the following on ResearchGate:

First, we will follow the path the author took to find out that for

ratio of a-1 to a

the limit of the ratio of the integral for a to the integral for a - 1, as a goes to infinity, is Gelfond's constant, $e^\pi$. We will consider the hypothesis and provide hints for a proof using L'Hôpital's rule (since we have indeterminate forms as a goes to infinity):

The following should help in a proof of the hypothesis:

Cos[Pi I x] == Cosh[Pi x], Sin[Pi I x] == I Sinh[Pi x], and Limit[x^(1/x), x -> Infinity] == 1.
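These identities are quick to confirm numerically with Python's `cmath` (principal branches assumed; the sample points are arbitrary):

```python
import cmath, math

# Cos[Pi I x] == Cosh[Pi x] and Sin[Pi I x] == I Sinh[Pi x]
for x in [0.5, 1.0, 2.0, 3.7]:
    z = complex(0.0, math.pi * x)          # Pi*I*x
    scale = math.cosh(math.pi * x)
    assert abs(cmath.cos(z) - scale) < 1e-12 * scale
    assert abs(cmath.sin(z) - 1j * math.sinh(math.pi * x)) < 1e-12 * scale

# Limit[x^(1/x), x -> Infinity] == 1
big = 10.0 ** 6
print(abs(big ** (1.0 / big) - 1.0) < 2e-5)  # True
```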

Using L’Hospital’s Rule, we have the following:


(17) (PDF) Gelfond's Constant using MKB constant like integrals. Available from: https://www.researchgate.net/publication/309187705Gelfond%27sConstantusingMKBconstantlikeintegrals [accessed Aug 16 2022].

We find that there is no limit, as a goes to infinity, of the ratio of the previous forms of integrals when the "I" is left out, and give a small proof of their divergence.

That was responsible for the integral-equation discovery mentioned in one of the following posts, where it is written, "Using those ratios, it looks like" (there, m is the MRB constant):

enter image description here


From these meager beginnings:

In November 2013, I wrote:

$C$MRB is approximately 0.1878596424620671202485179340542732. See this and this.

$\sum_{n=1}^\infty (-1)^n\times(n^{1/n}-a)$ is formally convergent only when $a =1$. However, if you extend the meaning of $\sum$ through "summation methods", whereby series that diverge in one sense converge in another sense (e.g., Cesàro, etc.), you get results for other $a$. A few years ago it came to me to ask what value of $a$ gives $$\sum_{n=1}^\infty (-1)^n\times(n^{1/n}-a)=0\text{ ?}$$ (For what value of $a$ do Levin's u-transform and Cesàro summation give 0, considering weak convergence?)

The solution I got surprised me: it was $a=1-2\times C\mathrm{MRB}=0.6242807150758657595029641318914535398881938101997224\ldots$. Where $C\mathrm{MRB}$ is $\sum_{n=1}^\infty (-1)^n\times(n^{1/n}-1)$.

I asked, "If that's correct can you explain why?" and received an explanatory comment.
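A short heuristic recovers that root, assuming the Abel-regularized value $\sum_{n\ge 1}(-1)^n = -\tfrac{1}{2}$:

```latex
\sum_{n=1}^\infty (-1)^n\left(n^{1/n}-a\right)
  = \sum_{n=1}^\infty (-1)^n\left(n^{1/n}-1\right) + (1-a)\sum_{n=1}^\infty (-1)^n
  = C_{\mathrm{MRB}} - \frac{1-a}{2},
```

which vanishes exactly when $a = 1 - 2\,C_{\mathrm{MRB}}$.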

To see this for yourself in Mathematica enter FindRoot[NSum[(-1)^n*(n^(1/n) - x), {n, 1, Infinity}], {x, 1}] where regularization is used so that the sum that formally diverges returns a result that can be interpreted as evaluation of the analytic extension of the series.

Finally let a = M2 = $1-2\times C$MRB = 0.6242807150758... and the two limit-points of the series $\sum_{n=1}^\infty (-1)^n\times(n^{1/n}-M2)$ are +/- $C$MRB with its Levin's u-transform's result being 0. See here.
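The two limit points and the Cesàro behavior can be seen directly; a Python sketch (the reference value for CMRB is from this discussion, and 20000 terms is an arbitrary cutoff):

```python
# With a = 1 - 2*CMRB, even partial sums of sum (-1)^n (n^(1/n) - a)
# tend to +CMRB, odd ones to -CMRB, and the Cesaro average tends to 0.
M = 0.18785964246206712
a = 1.0 - 2.0 * M

N = 20000  # even
S = 0.0
sums = []
for n in range(1, N + 1):
    S += (-1) ** n * (n ** (1.0 / n) - a)
    sums.append(S)

even_pt = sums[-1]              # S_N with N even   -> +CMRB
odd_pt = sums[-2]               # S_{N-1}, N-1 odd  -> -CMRB
cesaro = sum(sums) / len(sums)  # Cesaro mean       -> 0
print(even_pt, odd_pt, cesaro)
```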





Scholarly works about CMRB.

From these meager beginnings:

In 2015 I wrote:

Mathematica makes some attempts to project hyper-dimensions onto 2-space with the Hypercube command. Likewise, some attempts at tying them to our universe are mentioned at https://bctp.berkeley.edu/extraD.html . The MRB constant from infinite-dimensional space is described at http://marvinrayburns.com/ThegeometryV12.pdf . It is my theory that, like the MRB constant, the universe, under inflation, started in an infinite number of space dimensions, and, as far as our sensory realm is concerned, almost all of them instantly collapsed, leaving the few we enjoy today.

I'm not the first person to think the universe consists of an infinitude of dimensions. Some string theories and theorists propose it too. Michele Nardelli added the following.

In string theory, perturbation methods involve such a high degree of approximation that the theory is unable to identify which of the Calabi-Yau spaces are candidates for describing the universe. The consequence is that it does not describe a single universe, but something like 10^500 universes. In reality, admitting 10^500 different quantum voids would allow the only mechanism known at the moment to explain the present value of the cosmological constant, following an idea by Steven Weinberg. Furthermore, a very large number of different voids is typical of any type of matter coupled to gravity and is also obtained when coupling the standard model. I believe that the multiverse is a "space of infinite dimensions" with infinite degrees of freedom and infinite possible potential wave functions that, when they collapse, formalize a particle or a universe in a quantum state. The strings, vibrating like the strings of a musical instrument, emit frequencies that are not always precise numbers; indeed, very often they are decimal, irrational, and/or transcendental numbers. The MRB constant serves as a "regularizer" to obtain solutions as precise as possible, and this in various sectors of string theory, black holes, and cosmology.

In this physics.StackExchange question, his concept of the dimensions in string theory and a possible link with number theory is inquired about.

Many MRB constant papers by Michele Nardelli are found here in Google Scholar, which include previous versions of these.

Hello. Here are the links of my more comprehensive articles describing the various applications of the CMRB in various fields of theoretical physics and cosmology. Thanks always for your availability, see you soon.

--

Analyzing several equations concerning Geometric Measure Theory, various Ramanujan parameters and the developments of the MRB Constant. New possible mathematical connections with some sectors of String Theory. XII

--

On several equations concerning Geometric Measure Theory, various Ramanujan parameters and the developments of the MRB Constant. New possible mathematical connections with some sectors of Cosmology (Bubble universes) and String Theory

--

Analyzing several equations concerning Geometric Measure Theory, various Ramanujan parameters and the developments of the MRB Constant. New possible mathematical connections with some sectors of Cosmology (Bubble universes) and String Theory. III

--

Analyzing several equations concerning Geometric Measure Theory, various Ramanujan parameters and the developments of the MRB Constant. New possible mathematical connections with some equations concerning various aspects of Quantum Mechanics and String Theory. VI

--

Analyzing several equations concerning various aspects of String Theory and one-loop graviton correction to the conformal scalar mode function. New possible mathematical connections with various Ramanujan parameters and some developments of the MRB Constant.

--

Analyzing several equations concerning Geometric Measure Theory, various Ramanujan parameters and the developments of the MRB Constant. New possible mathematical connections with some equations concerning Multiverse models and the Lorentzian path integral for the vacuum decay process

--

On the study of various equations concerning Primordial Gravitational Waves in Standard Cosmology and some sectors of String Theory. New possible mathematical connections with various Ramanujan formulas and various developments of the MRB Constant

--

On the study of some equations concerning the mathematics of String Theory. New possible connections with some sectors of Number Theory and MRB Constant

--

On the study of some equations concerning the mathematics of String Theory. New possible connections with some sectors of Number Theory and MRB Constant. II

--

Analyzing some equations of Manuscript Book 2 of Srinivasa Ramanujan. New possible mathematical connections with several equations concerning the Geometric Measure Theory, the MRB Constant and various sectors of String Theory

--

Analyzing the MRB Constant in Geometric Measure Theory and in a Ramanujan equation. New possible mathematical connections with ζ(2), ϕ , the Quantum Cosmological Constant and some sectors of String Theory

--

Analyzing some equations of Manuscript Book 2 of Srinivasa Ramanujan. New possible mathematical connections with several equations concerning the Geometric Measure Theory, the MRB Constant, various sectors of Black Hole Physics and String Theory

--

Analyzing further equations of Manuscript Book 2 of Srinivasa Ramanujan. New possible mathematical connections with the MRB Constant, the Ramanujan-Nardelli Mock General Formula and several equations concerning some sectors of String Theory III

His latest papers on the MRB constant follow.

Hi Marvin, for me the best links you could post are those related to the works concerning the Ramanujan continued fractions and mathematical connections with MRB Constant and various sectors of String Theory.

Here are the links (in all there are 40):

https://www.academia.edu/80247977/ https://www.academia.edu/80298701/ https://www.academia.edu/80376615/ https://www.academia.edu/80431963/ https://www.academia.edu/80508286/ https://www.academia.edu/80590932/ https://www.academia.edu/80660709/ https://www.academia.edu/80724379/ https://www.academia.edu/80799006/ https://www.academia.edu/80894850/ https://www.academia.edu/81033980/ https://www.academia.edu/81150262/ https://www.academia.edu/81231887/ https://www.academia.edu/81313294/ https://www.academia.edu/81536589/ https://www.academia.edu/81625054/ https://www.academia.edu/81705896/ https://www.academia.edu/81769347/ https://www.academia.edu/81812404/ https://www.academia.edu/81874954/ https://www.academia.edu/81959191/ https://www.academia.edu/82036273/ https://www.academia.edu/82080277/ https://www.academia.edu/82129372/ https://www.academia.edu/82155422/ https://www.academia.edu/82204999/ https://www.academia.edu/82231273/ https://www.academia.edu/82243774/ https://www.academia.edu/82347058/ https://www.academia.edu/82399680/ https://www.academia.edu/82441768/ https://www.academia.edu/82475969/ https://www.academia.edu/82516896/ https://www.academia.edu/82521506/ https://www.academia.edu/82532215/ https://www.academia.edu/82622577/ https://www.academia.edu/82679726/ https://www.academia.edu/82733681/ https://www.academia.edu/82777895/ https://www.academia.edu/82828901/

He recently added the following.

Hi Marvin,

The MRB Constant, also in the case of the Ramanujan expressions that we are slowly analyzing, serves to "normalize", therefore to rectify, the approximations we obtain. For example, for the value of zeta(2), which is always approximate (1.64382...)" [from the string theory equations, example below], "adding an expression containing the MRB Constant gives a result much closer to the real value, which is 1.644934... This procedure is carried out on all those we call "recurring numbers" (Pi, zeta(2), 4096, 1729, and the golden ratio), which, developing the expressions, are always approximations, and from which, by inserting CMRB in various ways, we obtain results much closer to the real values of the aforementioned recurring numbers. Finally, remember that Ramanujan's expressions and the recurring numbers obtained are connected to the frequencies of the strings, therefore to their vibrations.

One example of his procedure from https://www.academia.edu/81812404/OnfurtherRamanujanscontinuedfractionsmathematicalconnectionswithMRBConstantvariousequationsconcerningsomesectorsofStringTheoryXIX?

was to analyze some expressions from Ramanujan's notebooks.

Then, finding other expressions from the series of their antiderivatives and derivatives (in this case, dividing two previous expressions), after some calculations, he obtained the following expression:

enter image description here

Then finally, "by inserting the CMRB, obtaining results much closer to the real values of the aforementioned recurring numbers:" (referring to Ramanujan's equation, and then, after more work...)

, enter image description here

You need to look at the paper entirely to see how he puts it all together. He uses Wolfram Alpha for a lot of it.

7/7/2022: I just found a video he made concerning his work on string theory and its connection to Ramanujan and CMRB. English subtitles are available on YouTube.


There are around 200 papers concerning the MRB constant here at academia.edu. enter image description here

More Google Scholar results on CMRB are here, which include the following.

Dr. Richard Crandall called the MRB constant a key fundamental constant

enter image description here

in this linked, well-sourced, and widely cited paper promoted by Google Scholar. Also here.

Dr. Richard J. Mathar wrote on the MRB constant here.

Xun Zhou, School of Water Resources and Environment, China University of Geosciences (Beijing), wrote the following in "On Some Series and Mathematic Constants Arising in Radioactive Decay" for the Journal of Mathematics Research, 2019.

A divergent infinite series may also lead to mathematical constants if its partial sum is bounded. The Marvin Ray Burns' (MRB) constant is the upper bounded value of the partial sum of the divergent and alternating infinite series: -1^(1/1)+2^(1/2)-3^(1/3)+4^(1/4)-5^(1/5)+6^(1/6)-···=0.187859··· (M. Chen, & S. Chen, 2016). Thus, construction of new infinite series has the possibility of leading to new mathematical constants.
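Zhou's description is easy to check numerically: the partial sums of the divergent series never settle down, but the even-indexed ones cluster near the MRB constant, its upper limit point. A minimal sketch in Python (the thread's own code is Mathematica; Python is used here only so the example is self-contained):

```python
# Partial sums s_N = sum_{n=1}^{N} (-1)^n * n^(1/n) of the divergent series.
# Even-indexed partial sums cluster near the MRB constant 0.1878596...,
# odd-indexed ones near 0.1878596... - 1; the sequence itself never converges.
def partial_sum(N):
    return sum((-1) ** n * n ** (1.0 / n) for n in range(1, N + 1))

even = partial_sum(100000)   # ends on a + term: slightly above the MRB constant
odd = partial_sum(100001)    # one more - term: roughly 1 lower
print(even, odd)
```

The gap of about 1 between the two clusters is exactly the divergent part of the series; subtracting 1 from each term (as in the definition of CMRB) removes it.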





MRB Constant Records,

My inspiration to compute a lot of digits of CMRB came from the following website by Simon Plouffe.

There, computational mathematicians calculate millions, then billions, of digits of constants like pi, even though only 65 decimal places of pi are enough to determine the size of the observable universe to within a Planck length (beyond which the uncertainty of our measurement would exceed the universe itself)!

In contrast, 65 digits of the MRB constant "measure" the value of -1+sqrt(2)-3^(1/3)+... up to n^(1/n), where n is 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000, which can be called 1 unvigintillion or just 10^66.
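The arithmetic behind the "65 digits of pi" half of that comparison can be sketched in a few lines. (The universe-diameter and Planck-length figures below are the usual rough textbook values, assumed by me, not taken from this thread.)

```python
import math

d_universe = 8.8e26      # diameter of the observable universe in meters (approx.)
planck = 1.616e-35       # Planck length in meters (approx.)

# Truncating pi at k decimal places perturbs a computed circumference
# pi*d by roughly d * 10^-k.  We need that error below one Planck length:
# d * 10^-k < planck  =>  k > log10(d / planck)
k_needed = math.ceil(math.log10(d_universe / planck))
print(k_needed)   # about 62, so 65 decimal places are comfortably enough
```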

And why compute 65 digits of the MRB constant? Because having that much precision is the only way to solve such a problem as

1465528573348167959709563453947173222018952610559967812891154^m - m, where m is the MRB constant, which gives the near-integer "to beat all": 200799291330.9999999999999999999999999999999999999999999999999999999999999900450...

And why compute millions of digits of it? uhhhhhhhhhh.... "Because it's there!" (...Yeah, thanks George Mallory!)
And why?? (c'est ma raison d'être!!!)

enter image description here enter image description here enter image description here enter image description here

So, below you will find reproducible results with methods. The utmost care has been taken to ensure the accuracy of the record-setting digit computations. These records represent the advancement of consumer-level computers, 21st-century iterative methods, and clever programming over the past 23 years.

Here are some record computations of CMRB. If you know of any others, let me know, and I will probably add them!

1 digit of (the additive inverse of) **C**<sub>*MRB*</sub> was computed with my TI-92s, by adding 1-sqrt(2)+3^(1/3)-4^(1/4)+... as far as I could. That first digit, by the way, was just 0. Then, by using the sum key to compute $\sum _{n=1}^{1000 } (-1)^n \left(n^{1/n}\right),$ I got the first correct decimal of $\text{CMRB}=\sum _{n=1}^{\infty } (-1)^n \left(n^{1/n}-1\right)$, i.e. (.1). It gave (.1_91323989714), which is close to what Mathematica gives for summing to an upper limit of only 1000.

4 decimals (.1878) of CMRB were computed on Jan 11, 1999, with the Inverse Symbolic Calculator, applying the command evalf( 0.1879019633921476926565342538468+sum((-1)^n* (n^(1/n)-1),n=140001..150000)); where 0.1879019633921476926565342538468 was the running total of t=sum((-1)^n* (n^(1/n)-1),n=1..10000), then t=t+the sum from (10001..20000), then t=t+the sum from (20001..30000), ... up to t=t+the sum from (130001..140000).

5 correct decimals of CMRB (rounded to .18786) were computed in January 1999 using Mathcad 3.1 on a 50 MHz IBM 80486 personal computer running Windows 95.

9 digits of CMRB were computed shortly afterward using Mathcad 7 Professional on the Pentium II mentioned below, by summing (-1)^x x^(1/x) for x=1 to 10,000,000, 20,000,000, and many more, then linearly approximating the sum to what a few billion terms would have given.
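That extrapolation trick, summing to several large cutoffs and then projecting what a much longer sum would give, can be imitated in a few lines. For the even partial sums, the leading error behaves like $(\ln N)/N$ (a standard alternating-series estimate; this is my assumption, not a description of the Mathcad session), so a two-point linear fit in that variable removes it:

```python
import math

def even_partial_sum(N):
    # N must be even; partial sum of sum_{n=1}^{N} (-1)^n (n^(1/n) - 1)
    return sum((-1) ** n * (n ** (1.0 / n) - 1) for n in range(1, N + 1))

def extrapolate(N1, N2):
    # Model s(N) = C + a * ln(N)/N and eliminate a using two cutoffs.
    s1, s2 = even_partial_sum(N1), even_partial_sum(N2)
    t1, t2 = math.log(N1) / N1, math.log(N2) / N2
    return (s1 * t2 - s2 * t1) / (t2 - t1)

print(extrapolate(100000, 200000))  # ~0.1878596..., far better than either raw sum
```

Two cutoffs of 10^5 terms already recover roughly 8 correct decimals, whereas either raw partial sum is only good to about 4.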

500 digits of CMRB were computed with an online tool called Sigma on Jan 23, 1999. See http://marvinrayburns.com/Original_MRB_Post.html if you can read the printed and scanned copy there.

5,000 digits of CMRB in September of 1999, in 2 hours on a 350 MHz Pentium II with 64 MB of 133 MHz RAM, using the simple PARI commands \p 5000;sumalt(n=1,((-1)^n*(n^(1/n)-1))), after allocating enough memory.
To beat that, I did it on July 4, 2022, in 1 second on the 5.5 GHz CMRBSC 3 with 64 GB of 4800 MHz RAM, by Newton's method using "Algorithm 1" from Convergence Acceleration of Alternating Series by Henri Cohen, Fernando Rodriguez Villegas, and Don Zagier, to at least 5000 decimals. (* Newer loop with Newton interior. *)
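For readers without Mathematica or PARI, the Cohen-Villegas-Zagier "Algorithm 1" mentioned above fits in a few lines of standard-library Python. This is a sketch of the published algorithm applied to the CMRB series, not the notebook code used for the record run:

```python
from decimal import Decimal, getcontext

def mrb(digits):
    """Cohen-Villegas-Zagier 'Algorithm 1' applied to
    sum_{n>=1} (-1)^n (n^(1/n) - 1); error shrinks like (3+sqrt(8))^(-n)."""
    getcontext().prec = digits + 15                  # guard digits
    n = int(1.31 * digits) + 5                       # iterations needed
    d = (3 + Decimal(8).sqrt()) ** n
    d = (d + 1 / d) / 2
    b, c, s = Decimal(-1), -d, Decimal(0)
    for k in range(n):
        c = b - c
        # a_k = (k+1)^(1/(k+1)) - 1, computed as exp(ln(k+1)/(k+1)) - 1
        s += c * ((Decimal(k + 1).ln() / (k + 1)).exp() - 1)
        b = b * (k + n) * (k - n) * 2 / ((2 * k + 1) * (k + 1))
    # s/d approximates sum_{k>=0} (-1)^k a_k, which equals -CMRB
    return -s / d

print(mrb(50))   # 0.187859642462067120248...
```

The iteration count n = 1.31 × digits comes from the algorithm's error bound of roughly (3+√8)^(-n), about 0.76 correct digits per term; the record programs in this thread use the same 131/100 ratio.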

PII documentation here


6,995 accurate digits of CMRB were computed on June 10-11, 2003, over a period of 10 hours, on a 450 MHz P3 with an available 512 MB of RAM.
To beat that, I did it in <2.5 seconds on the MRBCSC 3 on July 7, 2022 (more than 14,400 times as fast!)

PIII documentation here


8000 digits of CMRB were completed, using a Sony Vaio P4 2.66 GHz laptop computer with 960 MB of available RAM, at 2:04 PM, 3/25/2004.

11,000 digits of CMRB were calculated on March 1, 2006, with a 3 GHz PD with 2 GB of available RAM.

40,000 digits of CMRB in 33 hours and 26 min via my program written in Mathematica 5.2 on Nov 24, 2006. The computation was run on a 32-bit Windows 3 GHz PD desktop computer using 3.25 GB of RAM.
The program was

    Block[{a, b = -1, c = -1 - d, d = (3 + Sqrt[8])^n, 
      n = 131 Ceiling[40000/100], s = 0}, a[0] = 1;
     d = (d + 1/d)/2; For[m = 1, m < n, a[m] = (1 + m)^(1/(1 + m)); m++];
     For[k = 0, k < n, c = b - c;
      b = b (k + n) (k - n)/((k + 1/2) (k + 1)); s = s + c*a[k]; k++];
     N[1/2 - s/d, 40000]]

 60,000 digits of CMRB on July 29, 2007, at 11:57 PM EST in 50.51 hours on a 2.6 GHz AMD Athlon with 64-bit Windows XP. Max memory used was 4.0 GB of RAM.

65,000 digits of CMRB in only 50.50 hours on a 2.66 GHz Core 2 Duo using 64-bit Windows XP on Aug 3, 2007, at 12:40 AM EST, Max memory used was 5.0 GB of RAM.

100,000 digits of CMRB on Aug 12, 2007, at 8:00 PM EST, were computed in 170 hours on a 2.66 GHz Core 2 Duo using 64-bit Windows XP. Max memory used was 11.3 GB of RAM. The typical daily record of memory used was 8.5 GB of RAM.
To beat that, on the 4th of July, 2022, I computed the same digits in  1/4 of an hour. CNTRL+F "4th of July, 2022" for documentation.
To beat that, on the 7th of July, 2022, I computed the same digits in  1/5 of an hour. CNTRL+F "7th of July, 2022" for documentation (850 times as fast as the first 100,000 run!)

 150,000 digits of CMRB on Sep 23, 2007, at 11:00 AM EST. Computed in 330 hours on a 2.66 GHz Core 2 Duo using 64-bit Windows XP. Max memory used was 22 GB of RAM. The typical daily record of memory used was 17 GB of RAM.

200,000 digits of CMRB using Mathematica 5.2 on March 16, 2008, at 3:00 PM EST. Found in 845 hours on a 2.66 GHz Core 2 Duo using 64-bit Windows XP. Max memory used was 47 GB of RAM. The typical daily record of memory used was 28 GB of RAM.

300,000 digits of CMRB were destroyed (washed away by Hurricane Ike) on September 13, 2008, sometime between 2:00 PM and 8:00 PM EST. Computed over a long 4015 hours (23.899 weeks, or 1.4454*10^7 seconds) on a 2.66 GHz Core 2 Duo using 64-bit Windows XP. Max memory used was 91 GB of RAM. The Mathematica 6.0 code used follows:

    Block[{$MaxExtraPrecision = 300000 + 8, a, b = -1, c = -1 - d, 
     d = (3 + Sqrt[8])^n, n = 131 Ceiling[300000/100], s = 0}, a[0] = 1; 
     d = (d + 1/d)/2; For[m = 1, m < n, a[m] = (1 + m)^(1/(1 + m)); m++]; 
     For[k = 0, k < n, c = b - c; 
      b = b (k + n) (k - n)/((k + 1/2) (k + 1)); s = s + c*a[k]; k++]; 
     N[1/2 - s/d, 300000]]

225,000 digits of CMRB were started with a 2.66 GHz Core 2 Duo using 64-bit Windows XP on September 18, 2008. It was completed in 1072 hours. 

250,000 digits were attempted but failed to be completed due to a serious internal error that restarted the machine. The error occurred sometime on December 24, 2008, between 9:00 AM and 9:00 PM. The computation began on November 16, 2008, at 10:03 PM EST. The max memory used was 60.5 GB.

 250,000 digits of CMRB on Jan 29, 2009, 1:26:19 pm (UTC-0500) EST, with a multiple-step Mathematica command running on a dedicated 64-bit XP using 4 GB DDR2 RAM onboard and 36 GB virtual. The computation took only 333.102 hours. The digits are at http://marvinrayburns.com/250KMRB.txt. The computation is completely documented.

A 300,000 digit search of CMRB was initiated using an i7 with 8.0 GB of DDR3 RAM onboard on Sun 28 Mar 2010 at 21:44:50 (UTC-0500) EST, but it failed due to hardware problems.

299,998 digits of CMRB: The computation began Fri 13 Aug 2010 10:16:20 pm EDT and ended 2.23199*10^6 seconds later, on Wednesday, September 8, 2010. That is an average of 7.44 seconds per digit. I used Mathematica 6.0 for Microsoft Windows (64-bit) (June 19, 2007) on my Dell Studio XPS 8100 i7 860 @ 2.80 GHz with 8 GB of physical DDR3 RAM. Windows 7 reserved an additional 48.929 GB of virtual RAM.

300,000 digits to the right of the decimal point of CMRB were computed from Sat 8 Oct 2011 23:50:40 to Sat 5 Nov 2011 19:53:42 (2.405*10^6 seconds later). This run was 0.5766 seconds per digit slower than the 299,998 digit computation, even though it used 16 GB of physical DDR3 RAM on the same machine. The working precision and accuracy goal combination were maximized for exactly 300,000 digits, and the result was automatically saved as a file instead of just being displayed on the front end. Windows reserved a total of 63 GB of working memory, of which 52 GB were recorded being used. The 300,000 digits came from the Mathematica 7.0 command
    Quit; DateString[]
    digits = 300000; str = OpenWrite[]; SetOptions[str, 
    PageWidth -> 1000]; time = SessionTime[]; Write[str, 
    NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]}, 
    WorkingPrecision -> digits + 3, AccuracyGoal -> digits, 
    Method -> "AlternatingSigns"]]; timeused = 
    SessionTime[] - time; here = Close[str]
    DateString[]

314159 digits of the constant took 3 tries due to hardware failure. Finishing on September 18, 2012, I computed 314159 digits, using 59 GB of RAM. The digits came from the Mathematica 8.0.4 code

    DateString[]
    NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]}, 
    WorkingPrecision -> 314169, Method -> "AlternatingSigns"] // Timing
    DateString[]

1,000,000 digits of CMRB were computed for the first time in history, in 18 days 9 hours 11 minutes 34.253417 seconds, by Sam Noble of the Apple Advanced Computation Group.

1,048,576 digits of CMRB in a lightning-fast 76.4 hours, finishing on Dec 11, 2012, were scored by Dr. Richard Crandall, an Apple scientist and head of its Advanced Computational Group, on a 2.93 GHz 8-core Nehalem.

    To beat that, in August of 2018, I computed 1,004,993 digits in 53.5 hours of absolute time and 34 hours of computation time (from the timing command) with 10 processor cores supporting DDR4 RAM (of up to 3000 MHz), overclocked up to 4.7 GHz! Search this post for "53.5" for documentation.

    To beat that, on Sept 21, 2018, I computed 1,004,993 digits in 50.37 hours of absolute time and 35.4 hours of computation time (from the timing command) with 18 (DDR3 and DDR4) processor cores! Search this post for "50.37 hours" for documentation.

    To beat that, on May 11, 2019, I computed over 1,004,993 digits, in 45.5 hours of absolute time and only 32.5 hours of computation time, using 28 kernels on 18 DDR4 RAM (of up to 3200 MHz) supported cores overclocked up to  5.1 GHz  Search 'Documented in the attached ":3 fastest computers together 3.nb." '  for the post that has the attached documenting notebook.

    To beat that, I accumulated over 1,004,993 correct digits in 44 hours of absolute time and 35.4206 hours of computation time on 10/19/20, using 3/4 of the MRB constant supercomputer 2 -- see https://www.wolframcloud.com/obj/bmmmburns/Published/44%20hour%20million.nb  for documentation.

    To beat that, I did a 1,004,993 correct digits computation in 36.7 hours of absolute time and only 26.4 hours of computation time, on Sun 15 May 2022 at 06:10:50, using 3/4 of the MRB constant supercomputer 3. RAM speed was 4800 MHz, and all of the 30 cores were clocked at up to 5.2 GHz.

    To beat that, I did a 1,004,993 correct digits computation in 35.6 hours of absolute time and only 25.3 hours of computation time, on Wed 3 Aug 2022 08:05:38, using the MRB constant supercomputer 3. RAM speed was 4000 MHz, and all of the 40 cores were clocked at up to 5.5 GHz.

44 hours million notebook

36.7 hours million notebook

35.6 hours million notebook


A little over 1,200,000 digits of CMRB were previously computed in 11 days, 21 hours, 17 minutes, and 41 seconds (finished on March 31, 2013), using a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz. See https://www.wolframcloud.com/obj/bmmmburns/Published/36%20hour%20million.nb

for details.


A 2,000,000 or more digit computation of CMRB was done on May 17, 2013, using only around 10 GB of RAM. It took 37 days 5 hours 6 minutes 47.1870579 seconds. I used my six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz.

 3,014,991 digits of CMRB: this world-record computation of **C**<sub>*MRB*</sub> was finished on Sun 21 Sep 2014 at 18:35:06. It took 1 month 27 days 2 hours 45 minutes 15 seconds. The processor time from the 3,000,000+ digit computation was 22 days. I computed the 3,014,991 digits of **C**<sub>*MRB*</sub> with Mathematica 10.0, using my new version of Richard Crandall's code in the attached 3M.nb, optimized for my platform and large computations. I also used a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz with 64 GB of RAM, of which only 16 GB was used. Can you beat it (in number of digits, memory used, or time taken)? This confirms that my previous "2,000,000 or more digit computation" was accurate to 2,009,993 digits; those digits were used to check the first several digits of this computation. See the attached 3M.nb for the full code and digits.

 Over 4 million digits of CMRB were finished on Wed 16 Jan 2019 19:55:20.
It took 4 years of continuous tries. This successful run took 65.13 days of absolute time, with a processor time of 25.17 days, on a 6-core Intel computer at 3.7 GHz, overclocked up to 4.7 GHz on all cores, with 3000 MHz RAM. According to this computation, the previous record 3,000,000+ digit computation was accurate to 3,014,871 decimals. This computation used my algorithm for computing n^(1/n), as found in chapter 3 of the paper at

https://www.sciencedirect.com/science/article/pii/0898122189900242, while the 3 million+ computation used Crandall's algorithm. Both algorithms outperform Newton's method per calculation and iteration.


Example use of M. R. Burns' algorithm to compute 123456789^(1/123456789) to 10,000,000 digits:

    ClearSystemCache[]; n = 123456789;
    (*n is the n in n^(1/n)*)
    x = N[n^(1/n), 100];
    (*x starts out as a relatively small precision approximation to n^(1/n)*)
    pc = Precision[x]; pr = 10000000;
    (*pr is the desired precision of your n^(1/n)*)
    Print[t0 = Timing[While[pc < pr, pc = Min[4 pc, pr];
    x = SetPrecision[x, pc];
    y = x^n; z = (n - y)/y;
    t = 2 n - 1; t2 = t^2;
    x = x*(1 + SetPrecision[4.5, pc] (n - 1)/t2 + (n + 1) z/(2 n t)
    - SetPrecision[13.5, pc] n (n - 1)/(3 n t2 + t^3 z))];
    (*You get a much faster version of N[n^(1/n),pr]*)
    N[n - x^n, 10]](*The error*)];
    ClearSystemCache[]; n = 123456789; Print[t1 = Timing[N[n - N[n^(1/n), pr]^n, 10]]]

 Gives

  {25.5469,0.*10^-9999984}

  {101.359,0.*10^-9999984}




  More information is available upon request.

 More than 5 million digits of CMRB were found on Fri 19 Jul 2019 18:49:02. Methods are described in the reply below which begins with "Attempts at a 5,000,000 digit calculation." For this 5 million digit calculation of **C**<sub>*MRB*</sub> using the 3-node MRB supercomputer, the processor time was 40 days, and the actual time was 64 days. That is less absolute time than the 4-million-digit computation, which used just one node.

6,000,000 digits of CMRB were computed after 8 tries in 19 months (search "8/24/2019 It's time for more digits!" below), finishing on Tue 30 Mar 2021 at 22:02:49 in 160 days.
    The MRB constant supercomputer 2 said the following:
    Finished on Tue 30 Mar 2021 22:02:49. computation and absolute time were
    5.28815859375*10^6 and 1.38935720536301*10^7 s. respectively
    Enter MRB1 to print 6029991 digits. The error from a 5,000,000 or more-digit calculation that used a different method is      
    0.*10^-5024993.

That means that the 5,000,000-digit computation was actually accurate to 5,024,993 decimals!!!

5,609,880 digits of CMRB, verified by 2 distinct algorithms for x^(1/x), were completed on Thu 4 Mar 2021 at 08:03:45, in 160.805 days. The 5,500,000+ digit computation, using a totally different method, showed that many decimals in common with the 6,000,000+ digit computation.

6,500,000 digits of CMRB on my second try,

The MRB constant supercomputer said,

Finished on Wed 16 Mar 2022 02: 02: 10. computation and absolute time
were 6.26628*10^6 and 1.60264035419592*10^7s respectively Enter MRB1
to print 6532491 digits. The error from a 6, 000, 000 or more digit
calculation that used a different method is 
0.*10^-6029992.

"Computation time" 72.526 days

 "Absolute time" 185.491 days







CMRB and its applications

Definition 1 CMRB is defined at https://en.wikipedia.org/wiki/MRB_constant .

From Wikipedia:


References
 Plouffe, Simon. "mrburns". Retrieved 12 January 2015.
 Burns, Marvin R. (23 January 1999). "RC". math2.org. Retrieved 5 May 2009.
 Plouffe, Simon (20 November 1999). "Tables of Constants" (PDF). Laboratoire de combinatoire et d'informatique mathématique. Retrieved 5 May 2009.
 Weisstein, Eric W. "MRB Constant". MathWorld.
 Mathar, Richard J. (2009). "Numerical Evaluation of the Oscillatory Integral Over exp(iπx) x^(1/x) Between 1 and Infinity". arXiv:0912.3844 [math.CA].
 Crandall, Richard. "Unified algorithms for polylogarithm, L-series, and zeta variants" (PDF). PSI Press. Archived from the original (PDF) on April 30, 2013. Retrieved 16 January 2015.
 (sequence A037077 in the OEIS)
 (sequence A160755 in the OEIS)
 (sequence A173273 in the OEIS)
 Fiorentini, Mauro. "MRB (costante)". bitman.name (in Italian). Retrieved 14 January 2015.
 Finch, Steven R. (2003). Mathematical Constants. Cambridge, England: Cambridge University Press. p. 450. ISBN 0-521-81805-2.


The following equation, shown in the Wikipedia definition, shows how closely the MRB constant is related to the square root of two.

enter image description here

In[1]:= N[Sum[Sqrt[2]^(1/n)* Sqrt[n]^(1/n) - ((Sqrt[2]^y*Sqrt[2]^x)^(1/Sqrt[2]^x))^Sqrt[2]^(-y)/. 
x -> 2*Log2[a^2 + b^2] /. 
y -> 2*Log2[-ai^2 - bi^2] /. 
a -> 1 - (2*n)^(1/4) /. 
b -> 2^(5/8)*Sqrt[n^(1/4)] /. 
ai -> 1 - I*(2*n)^(1/4) /. 
bi -> 2^(5/8)*Sqrt[I*n^(1/4)], {n, 1, Infinity}], 7]

Out[1]= 0.1878596 + 0.*10^-8 I

The complex roots and powers above are found to be well-defined because, working from the bottom to the top of the above list of equations, the heads come out all "Integer" or "Rational" in the first of the following lists only.

enter image description here

Code:

In[349]:= Table[
 Head[FullSimplify[
   Expand[(Sqrt[2])^-y/(Sqrt[2])^x] //. 
     x -> 2 (Log[1 + Sqrt[2] Sqrt[n]]/Log[2]) /. 
    y -> 2 (Log[-1 + Sqrt[2] Sqrt[n]]/Log[2])]], {n, 1, 10}]

Out[349]= {Integer, Rational, Rational, Rational, Rational, Rational, \
Rational, Rational, Rational, Rational}

In[369]:= Table[
 Head[FullSimplify[
   Expand[(Sqrt[2])^-y/(Sqrt[2])^x] //. 
     x -> 2 (Log[1 + Sqrt[2] Sqrt[n]]/Log[3]) /. 
    y -> 2 (Log[-1 + Sqrt[2] Sqrt[n]]/Log[2])]], {n, 1, 10}]

Out[369]= {Times, Rational, Times, Times, Times, Times, Times, Times, \
Times, Times}

Definition 2 CMRB is defined at http://mathworld.wolfram.com/MRBConstant.html.

From MathWorld:

MathWorld MRB MathWorld MRB 2

SEE ALSO:
Glaisher-Kinkelin Constant, Power Tower, Steiner's Problem
REFERENCES:
Burns, M. R. "An Alternating Series Involving n^(th) Roots." Unpublished note, 1999.

Burns, M. R. "Try to Beat These MRB Constant Records!" http://community.wolfram.com/groups/-/m/t/366628.

Crandall, R. E. "Unified Algorithms for Polylogarithm, L-Series, and Zeta Variants." 2012a.

http://www.marvinrayburns.com/UniversalTOC25.pdf.

Crandall, R. E. "The MRB Constant." §7.5 in Algorithmic Reflections: Selected Works. PSI Press, pp. 28-29, 2012b.

Finch, S. R. Mathematical Constants. Cambridge, England: Cambridge University Press, p. 450, 2003.

Plouffe, S. "MRB Constant." http://pi.lacim.uqam.ca/piDATA/mrburns.txt.

Sloane, N. J. A. Sequences A037077 in "The On-Line Encyclopedia of Integer Sequences."

Referenced on Wolfram|Alpha: MRB Constant
CITE THIS AS:
Weisstein, Eric W. "MRB Constant." From MathWorld--A Wolfram Web Resource. https://mathworld.wolfram.com/MRBConstant.html

How would we show that any of the series in the above MathWorld definition are convergent, or even absolutely convergent?


For $a_k=k^{1/k}$, given that the sequence is monotonically decreasing according to Steiner's problem, we would next like to show that (5) is the alternating sum of a sequence that converges to 0 monotonically, and use the alternating series test to see that it is conditionally convergent.

Here is proof that 1 is the limit of "a" as k goes to infinity:

enter image description here

Here are many other proofs that 1 is the limit of "a" as k goes to infinity.

Thus, $(k^{1/k}-1)$ is a monotonically decreasing sequence, bounded below by 0.
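That limit-and-monotonicity argument can be sketched with elementary calculus (one of many possible proofs, written out here for completeness):

$$\ln\left(k^{1/k}\right)=\frac{\ln k}{k}\longrightarrow 0 \text{ as } k\to\infty \quad\Longrightarrow\quad k^{1/k}=e^{(\ln k)/k}\longrightarrow e^{0}=1,$$

and since $\frac{d}{dx}\frac{\ln x}{x}=\frac{1-\ln x}{x^{2}}<0$ for $x>e$, the terms $k^{1/k}-1$ decrease monotonically to $0$ for $k\ge 3$, which is exactly what the alternating series test requires.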

If we want an absolutely convergent series, we can use (4).

$S_k$: enter image description here which, since the sum of the absolute values of the summands is finite, converges absolutely!

There is no known closed form for CMRB in the MathWorld definition. This could be because, in Mathematical Constants (Finch, S. R., Cambridge, England: Cambridge University Press, p. 450), Steven Finch wrote that it is difficult to find an "exact formula" (closed-form solution) for it.

enter image description here enter image description here





Real-World, and beyond, Applications

This section and the rest of the content of this first post were moved below to improve loading times. CNRL+F "Real-World, and beyond, Applications" to finish reading it.

POSTED BY: Marvin Ray Burns
73 Replies

enter image description here

The MRB constant (CMRB) can be compared to the constant $\pi$ in terms of how normal a number it is: here.

For example, CMRB has a normality number of 0 (perfectly normal) in base 5 for 50 digits.

enter image description here

POSTED BY: Marvin Ray Burns

Here is an answer to the previous question.

While mathematicians try to crack this nut, here's a physicist's point of view. I will focus on how to calculate this integral, keeping things as simple as possible, probably approximately.

The imaginary part of $(1+it)^{\frac{1}{1+it}}$:

$$f(t)=(1+t^2)^{\frac{1}{2(1+t^2)}}e^{\frac{t\arctan t}{1+t^2}}\sin\left [\frac{\arctan t}{1+t^2}-\frac{t\ln(1+t^2)}{2(1+t^2)} \right ]$$

Such a gem divided by $\sinh(\pi t)$ needs to be integrated from zero to infinity in so-called closed form.

For a physicist, this is a hopeless case. But...

Some of the first terms of Taylor's expansion of $f(t)$:

$$t-\frac{t^3}{2}-\frac{3t^5}{4}+...$$

Indeed, this expansion diverges for $t>1$.

Nevertheless, we divide the first two terms of the expansion by $\sinh(\pi t)$, integrate from zero to infinity, and use well-known results:

$$\int_0^\infty \frac{t}{\sinh(\pi t)}dt=\frac{1}{4}$$

$$\int_0^\infty \frac{t^3}{\sinh(\pi t)}dt=\frac{1}{8}$$

Result:

$$\frac{1}{4}-\frac{1}{16}=\frac{3}{16}$$

The absolute deviation from the exact value is about $0.0004$

This is an example of how we can use divergent series to compute values. It is only important to guess where to truncate the series.
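Both "well-known results" above, and the resulting 3/16 approximation, are easy to confirm numerically. A quick composite-Simpson check in Python (the truncation point 20 and the step count are arbitrary choices of mine; the integrands decay like t·e^(-πt), so the truncation error is negligible at this tolerance):

```python
import math

def simpson(f, a, b, n=20000):
    # Composite Simpson's rule with an even number n of subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

# t/sinh(pi t) -> 1/pi as t -> 0, so patch the removable singularity at 0.
i1 = simpson(lambda t: t / math.sinh(math.pi * t) if t else 1 / math.pi, 0, 20)
i3 = simpson(lambda t: t**3 / math.sinh(math.pi * t) if t else 0.0, 0, 20)
print(i1, i3)          # 0.25 and 0.125 to many places
print(i1 - i3 / 2)     # 3/16 = 0.1875, about 0.0004 below the MRB constant
```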

POSTED BY: Marvin Ray Burns

At Stackexchange

Concerning CMRB=enter image description here=enter image description here,

I asked the following.

For some insight, I looked at comments by @Brevan Ellefsen, but I still wonder whether this integral can be expressed more elementarily than just calling it $ C_{MRB},$ which is short for the MRB constant, which I first described as a sum at the end of the last millennium. There is no known closed-form expression for the MRB constant. Given how slowly the series converges and how hard the integral is to calculate, a finite number of standard operations for the constant would be just glorious! Here is a summary of nearly a quarter century of evaluating it.

@Dark Malthorp had the insight to prove my suspicion as shown: "

enter image description here "

Is it possible to use a residue calculation to find a closed form for $$ C_{MRB} = \int_0^\infty \frac{\Im(1+it)^{\frac1{1+it}}}{\sinh(\pi t)}dt? $$

I'm not sure if I'm on the right track, but $\frac{(1+it)^{\frac1{1+it}}}{\sinh(\pi t)}$ having a pole at $0,$ can we consider the possibility of evaluating $$ 2C_{MRB} = \int_{-\infty}^\infty \frac{\Im(1+it)^{\frac1{1+it}}}{\sinh(\pi t)}dt? $$ I found out Mathematica gives the following.


In[70]:= Limit[Im[(1 + I t)^(1/(1 + I t)) Csch[\[Pi] t]], t -> 0]

Out[70]= 1/\[Pi]

In[181]:= Residue[(1 + I t)^((1/(1 + I t))) /Sinh[\[Pi] t], {t, 0}]

Out[181]= 1/\[Pi]

In[236]:= NIntegrate[
 Im[(1 + I t)^(1/(1 + I t))/Sinh[Pi t]], {t, 0, Infinity}]

Out[236]= 0.18786

In[237]:= 1/2 - 1./Pi

Out[237]= 0.18169

This line of reasoning gives a nice set of approximations for CMRB, but nothing exact.

In[527]:= CMRB=NSum[(-1)^n (n^(1/n)-1),

{n,1,Infinity},WorkingPrecision->30,Method->"AlternatingSigns"]

Out[527]= 0.18785964246206712024857897184

Let p be the following partial approximation

In[544]:= p=((1/2-1/\[Pi])+1/(2 \[Pi]-1));

In[545]:= CMRB - 1/2 p

Out[545]= 0.00237470999999980600500193334

In[546]:= (-279/(485 \[Pi]) + p) - CMRB

Out[546]= -7.2407186775943961640*10^-10

In[547]:= (237471/50000000 + p)/2 - CMRB

Out[547]= 1.9399499806666*10^-16

In[548]:= (Pi^2 Sqrt[4187/10993830] + p)/3 - CMRB

Out[548]= 3.1221252470091*10^-16
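These near misses are easy to cross-check in double precision. Here is a quick Python verification of the first two approximations above, using the 17-digit value of CMRB from the NSum output:

```python
import math

CMRB = 0.18785964246206712                 # truncated from the 30-digit NSum above
p = (0.5 - 1 / math.pi) + 1 / (2 * math.pi - 1)

d1 = CMRB - p / 2                          # ≈ 0.00237471
d2 = (p - 279 / (485 * math.pi)) - CMRB    # ≈ -7.24*10^-10
print(d1, d2)
```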

enter image description here


A different line of reasoning follows; analyzing it, improving its approximation, or even determining whether that approximation is genuinely related to it, is beyond my power.

enter image description here

Here is a little more detail showing some symmetry, but I don't see anything exactly equal to CMRB here. enter image description here enter image description here

CMRB/2 + NIntegrate[
  E/Pi t - Im[(1 + I t)^(1/(1 + I t))/Sinh[Pi t]]/
    Re[(1 + I t)^(1/(1 + I t))/Sinh[Pi t]], {t, 0, a}, 
  WorkingPrecision -> 20] - (3078 p)/(7769 p + 3850)

Out[827]= 1.714*10^-19

POSTED BY: Marvin Ray Burns

I'm getting faster at computing a potential 7 million digits of CMRB. See my latest attempt, which ended in a crash. I'll try again soon!

I'm off to the best start toward 7,000,000 digits yet! The short program used above takes too much RAM. The code below uses less memory, and the run below is tuned for core-count efficiency, given my network speed, for the 7,000,000 digits. Let's see if it finishes!

Print["Start time is ", ds = DateString[], "."];
prec = 7000000;
(*Number of required decimals.*)ClearSystemCache[];
T0 = SessionTime[];
expM[pre_] := 
  Module[{a, d, s, k, bb, c, end, iprec, xvals, x, pc, cores = 16(*=4*
    number of physical cores*), tsize = 2^7, chunksize, start = 1, ll,
     ctab, pr = Floor[1.005 pre]}, chunksize = cores*tsize;
   n = Floor[1.32 pr];
   end = Ceiling[n/chunksize];
   Print["Iterations required: ", n];
   Print["Will give ", end, 
    " time estimates, each more accurate than the previous."];
   Print["Will stop at ", end*chunksize, 
    " iterations to ensure precsion of around ", pr, 
    " decimal places."]; d = ChebyshevT[n, 3];
   {b, c, s} = {SetPrecision[-1, 1.1*n], -d, 0};
   iprec = Ceiling[pr/396288];
   Do[xvals = Flatten[Parallelize[Table[Table[ll = start + j*tsize + l;
         x = N[E^(Log[ll]/(ll)), iprec];
         pc = iprec;
         While[pc < pr/65536, pc = Min[3 pc, pr/65536];
          x = SetPrecision[x, pc];
          y = x^ll - ll;
          x = x (1 - 2 y/((ll + 1) y + 2 ll ll));];
         (**N[Exp[Log[ll]/ll],pr/99072]**)
         x = SetPrecision[x, pr/16384];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/16384] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/16384] ll (ll - 1) 1/(3 ll t2 + t^3 z));(*N[Exp[Log[
         ll]/ll],pr/4096]*)x = SetPrecision[x, pr/4096];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/4096] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/4096] ll (ll - 1) 1/(3 ll t2 + t^3 z));(*N[Exp[Log[
         ll]/ll],pr/4096]*)x = SetPrecision[x, pr/1024];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/1024] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 

             SetPrecision[13.5, 
               pr/1024] ll (ll - 1) 1/(3 ll t2 + t^3 z));(*N[Exp[Log[
         ll]/ll],pr/1024]*)x = SetPrecision[x, pr/256];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/256] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/256] ll (ll - 1) 1/(3 ll t2 + t^3 z));(*N[Exp[Log[
         ll]/ll],pr/256]*)x = SetPrecision[x, pr/64];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/64] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/64] ll (ll - 1) 1/(3 ll t2 + t^3 z));(**N[Exp[Log[
         ll]/ll],pr/64]**)x = SetPrecision[x, pr/16];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/16] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/16] ll (ll - 1) 1/(3 ll t2 + t^3 z));(**N[Exp[Log[
         ll]/ll],pr/16]**)x = SetPrecision[x, pr/4];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/4] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/4] ll (ll - 1) 1/(3 ll t2 + t^3 z));(**N[Exp[Log[
         ll]/ll],pr/4]**)x = SetPrecision[x, pr];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr] ll (ll - 1) 1/(3 ll t2 + t^3 z));(*N[Exp[Log[ll]/
         ll],pr]*)x, {l, 0, tsize - 1}], {j, 0, cores - 1}]]];
    ctab = ParallelTable[Table[c = b - c;
       ll = start + l - 2;
       b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1));
       c, {l, chunksize}], Method -> "Automatic"];
    s += ctab.(xvals - 1);
    start += chunksize;
    st = SessionTime[] - T0; kc = k*chunksize;
    ti = (st)/(kc + 10^-4)*(n)/(3600)/(24);
    If[kc > 1, 
     Print["As of  ", DateString[], " there were ", kc, 
      " iterations done in ", N[st, 5], " seconds. That is ", 
      N[kc/st, 5], " iterations/s. ", N[kc/(end*chunksize)*100, 7], 
      "% complete.", " It should take ", N[ti, 6], " days or ", 
      N[ti*24*3600, 4], "s, and finish ", DatePlus[ds, ti], "."]];
    Print[];, {k, 0, end - 1}];
   N[-s/d, pr]];
t2 = Timing[MRB1 = expM[prec];]; Print["Finished on ", 
 DateString[], ". Proccessor and actual time were ", t2[[1]], " and ",
  SessionTime[] - T0, " s. respectively"];
Print["Enter MRB1 to print ", 
 Floor[Precision[
   MRB1]], " digits. The error from a 5,000,000 or more digit \
calculation that used a different method is  "]; N[M6M - MRB1, 20]
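A note on the bootstrap stage of this program: the inner `While` loop (the one with `pc = Min[3 pc, pr/65536]`) refines x ≈ ll^(1/ll) by what is, as far as I can tell, Halley's method applied to f(x) = x^n - n, so each pass roughly triples the number of correct digits. A minimal Python sketch of just that loop, with the `decimal` module standing in for Mathematica's bignums (n = 50 and the 60-digit working precision are hypothetical choices for illustration):

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 60
n = 50
N = Decimal(n)
# Double-precision seed for x = n^(1/n), good to about 16 digits.
x = Decimal(repr(math.exp(math.log(n) / n)))
for _ in range(2):                       # Halley: correct digits triple per pass
    y = x ** n - N                       # f(x) = x^n - n
    x = x * (1 - 2 * y / ((N + 1) * y + 2 * N * N))
ref = (N.ln() / N).exp()                 # 60-digit reference for n^(1/n)
print(abs(x - ref))
```

Two passes take the 16-digit seed past the 60-digit working precision, matching the tripling schedule in the Mathematica loop.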

Start time is Thu 25 Aug 2022 19:27:24.

Iterations required: 9286198

Will give 4535 time estimates, each more accurate than the previous.

Will stop at 9287680 iterations to ensure precision of around 7034999 decimal places.



As of  Thu 25 Aug 2022 20:20:52 there were 2048 iterations done in 3208.0 seconds. That is 0.63841 iterations/s. 0.02205072% complete. It should take 168.354 days or 1.455*10^7s, and finish Fri 10 Feb 2023 03:57:26.



As of  Thu 25 Aug 2022 20:48:47 there were 4096 iterations done in 4883.5 seconds. That is 0.83874 iterations/s. 0.04410143% complete. It should take 128.144 days or 1.107*10^7s, and finish Sat 31 Dec 2022 22:54:30.



As of  Thu 25 Aug 2022 21:16:46 there were 6144 iterations done in 6562.7 seconds. That is 0.93620 iterations/s. 0.06615215% complete. It should take 114.804 days or 9.919*10^6s, and finish Sun 18 Dec 2022 14:44:41.



As of  Thu 25 Aug 2022 21:45:01 there were 8192 iterations done in 8257.1 seconds. That is 0.99211 iterations/s. 0.08820287% complete. It should take 108.334 days or 9.360*10^6s, and finish Mon 12 Dec 2022 03:28:00.



As of  Thu 25 Aug 2022 22:13:27 there were 10240 iterations done in 9963.0 seconds. That is 1.0278 iterations/s. 0.1102536% complete. It should take 104.571 days or 9.035*10^6s, and finish Thu 8 Dec 2022 09:10:06.



As of  Thu 25 Aug 2022 22:41:47 there were 12288 iterations done in 11663. seconds. That is 1.0536 iterations/s. 0.1323043% complete. It should take 102.016 days or 8.814*10^6s, and finish Mon 5 Dec 2022 19:50:31.



As of  Thu 25 Aug 2022 23:10:42 there were 14336 iterations done in 13398. seconds. That is 1.0700 iterations/s. 0.1543550% complete. It should take 100.447 days or 8.679*10^6s, and finish Sun 4 Dec 2022 06:10:31.



As of  Thu 25 Aug 2022 23:39:06 there were 16384 iterations done in 15102. seconds. That is 1.0849 iterations/s. 0.1764057% complete. It should take 99.0722 days or 8.560*10^6s, and finish Fri 2 Dec 2022 21:11:24.



As of  Fri 26 Aug 2022 00:07:52 there were 18432 iterations done in 16829. seconds. That is 1.0953 iterations/s. 0.1984564% complete. It should take 98.1296 days or 8.478*10^6s, and finish Thu 1 Dec 2022 22:34:01.



As of  Fri 26 Aug 2022 00:36:29 there were 20480 iterations done in 18545. seconds. That is 1.1043 iterations/s. 0.2205072% complete. It should take 97.3255 days or 8.409*10^6s, and finish Thu 1 Dec 2022 03:16:05.



As of  Fri 26 Aug 2022 01:05:23 there were 22528 iterations done in 20279. seconds. That is 1.1109 iterations/s. 0.2425579% complete. It should take 96.7496 days or 8.359*10^6s, and finish Wed 30 Nov 2022 13:26:49.



As of  Fri 26 Aug 2022 01:33:57 there were 24576 iterations done in 21993. seconds. That is 1.1174 iterations/s. 0.2646086% complete. It should take 96.1826 days or 8.310*10^6s, and finish Tue 29 Nov 2022 23:50:17.



As of  Fri 26 Aug 2022 02:02:48 there were 26624 iterations done in 23725. seconds. That is 1.1222 iterations/s. 0.2866593% complete. It should take 95.7749 days or 8.275*10^6s, and finish Tue 29 Nov 2022 14:03:18.



As of  Fri 26 Aug 2022 02:31:38 there were 28672 iterations done in 25454. seconds. That is 1.1264 iterations/s. 0.3087100% complete. It should take 95.4177 days or 8.244*10^6s, and finish Tue 29 Nov 2022 05:28:53.



As of  Fri 26 Aug 2022 03:01:12 there were 30720 iterations done in 27229. seconds. That is 1.1282 iterations/s. 0.3307607% complete. It should take 95.2642 days or 8.231*10^6s, and finish Tue 29 Nov 2022 01:47:49.



As of  Fri 26 Aug 2022 03:30:01 there were 32768 iterations done in 28957. seconds. That is 1.1316 iterations/s. 0.3528115% complete. It should take 94.9801 days or 8.206*10^6s, and finish Mon 28 Nov 2022 18:58:43.



As of  Fri 26 Aug 2022 03:59:00 there were 34816 iterations done in 30696. seconds. That is 1.1342 iterations/s. 0.3748622% complete. It should take 94.7606 days or 8.187*10^6s, and finish Mon 28 Nov 2022 13:42:37.



As of  Fri 26 Aug 2022 04:27:50 there were 36864 iterations done in 32426. seconds. That is 1.1369 iterations/s. 0.3969129% complete. It should take 94.5398 days or 8.168*10^6s, and finish Mon 28 Nov 2022 08:24:44.

Here is the latest report from the MRB constant supercomputer 3:

As of  Mon 29 Aug 2022 03:23:37 there were 329728 iterations done in 2.8777*10^5 seconds. That is 1.1458 iterations/s. 3.550165% complete. It should take 93.8034 days or 8.105*10^6s, and finish Sun 27 Nov 2022 14:44:18.

Then here:

As of  Thu 1 Sep 2022 21:08:43 there were 690176 iterations done in 6.1088*10^5 seconds. That is 1.1298 iterations/s. 7.431092% complete. It should take 95.1306 days or 8.219*10^6s, and finish Mon 28 Nov 2022 22:35:26.

Here:

As of  Sun 4 Sep 2022 19:15:45 there were 968704 iterations done in 8.6330*10^5 seconds. That is 1.1221 iterations/s. 10.42999% complete. It should take 95.7845 days or 8.276*10^6s, and finish Tue 29 Nov 2022 14:17:08.
POSTED BY: Marvin Ray Burns

How about a cool 7 million digits?


From the MRB constant supercomputer, using only one node:

In[4]:= Needs["SubKernels`LocalKernels`"]
Block[{$mathkernel = $mathkernel <> " -threadpriority=2"}, 
 LaunchKernels[]]

Out[5]= {"KernelObject"[1, "local"], "KernelObject"[2, "local"], 
 "KernelObject"[3, "local"], "KernelObject"[4, "local"], 
 "KernelObject"[5, "local"], "KernelObject"[6, "local"], 
 "KernelObject"[7, "local"], "KernelObject"[8, "local"], 
 "KernelObject"[9, "local"], "KernelObject"[10, "local"]}

In[6]:= Print["Start time is ", ds = DateString[], "."];
prec = 7000000;
(*Number of required decimals.*)ClearSystemCache[];
T0 = SessionTime[];
expM[pre_] := 
  Module[{a, d, s, k, bb, c, end, iprec, xvals, x, pc, cores = 16(*=4*
    number of physical cores*), tsize = 2^7, chunksize, start = 1, ll,
     ctab, pr = Floor[1.005 pre]}, chunksize = cores*tsize;
   n = Floor[1.32 pr];
   end = Ceiling[n/chunksize];
   Print["Iterations required: ", n];
   Print["Will give ", end, 
    " time estimates, each more accurate than the previous."];
   Print["Will stop at ", end*chunksize, 
    " iterations to ensure precsion of around ", pr, 
    " decimal places."]; d = ChebyshevT[n, 3];
   {b, c, s} = {SetPrecision[-1, 1.1*n], -d, 0};
   iprec = Ceiling[pr/396288];
   Do[xvals = Flatten[Parallelize[Table[Table[ll = start + j*tsize + l;
         x = N[E^(Log[ll]/(ll)), iprec];
         pc = iprec;
         While[pc < pr/65536, pc = Min[3 pc, pr/65536];
          x = SetPrecision[x, pc];
          y = x^ll - ll;
          x = x (1 - 2 y/((ll + 1) y + 2 ll ll));];
         (**N[Exp[Log[ll]/ll],pr/99072]**)
         x = SetPrecision[x, pr/16384];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/16384] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/16384] ll (ll - 1) 1/(3 ll t2 + t^3 z));(*N[Exp[Log[
         ll]/ll],pr/4096]*)x = SetPrecision[x, pr/4096];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/4096] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/4096] ll (ll - 1) 1/(3 ll t2 + t^3 z));(*N[Exp[Log[
         ll]/ll],pr/4096]*)x = SetPrecision[x, pr/1024];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/1024] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/1024] ll (ll - 1) 1/(3 ll t2 + t^3 z));(*N[Exp[Log[
         ll]/ll],pr/1024]*)x = SetPrecision[x, pr/256];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/256] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/256] ll (ll - 1) 1/(3 ll t2 + t^3 z));(*N[Exp[Log[
         ll]/ll],pr/256]*)x = SetPrecision[x, pr/64];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/64] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/64] ll (ll - 1) 1/(3 ll t2 + t^3 z));(**N[Exp[Log[
         ll]/ll],pr/64]**)x = SetPrecision[x, pr/16];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/16] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/16] ll (ll - 1) 1/(3 ll t2 + t^3 z));(**N[Exp[Log[
         ll]/ll],pr/16]**)x = SetPrecision[x, pr/4];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr/4] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr/4] ll (ll - 1) 1/(3 ll t2 + t^3 z));(**N[Exp[Log[
         ll]/ll],pr/4]**)x = SetPrecision[x, pr];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pr] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               pr] ll (ll - 1) 1/(3 ll t2 + t^3 z));(*N[Exp[Log[ll]/
         ll],pr]*)x, {l, 0, tsize - 1}], {j, 0, cores - 1}]]];
    ctab = ParallelTable[Table[c = b - c;
       ll = start + l - 2;
       b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1));
       c, {l, chunksize}], Method -> "Automatic"];
    s += ctab.(xvals - 1);
    start += chunksize;
    st = SessionTime[] - T0; kc = k*chunksize;
    ti = (st)/(kc + 10^-4)*(n)/(3600)/(24);
    If[kc > 1, 
     Print["As of  ", DateString[], " there were ", kc, 
      " iterations done in ", N[st, 5], " seconds. That is ", 
      N[kc/st, 5], " iterations/s. ", N[kc/(end*chunksize)*100, 7], 
      "% complete.", " It should take ", N[ti, 6], " days or ", 
      N[ti*24*3600, 4], "s, and finish ", DatePlus[ds, ti], "."]];
    Print[];, {k, 0, end - 1}];
   N[-s/d, pr]];
t2 = Timing[MRB7 = expM[prec];]; Print["Finished on ", 
 DateString[], ". Proccessor and actual time were ", t2[[1]], " and ",
  SessionTime[] - T0, " s. respectively"];
Print["Enter MRB1 to print ", 
 Floor[Precision[
   MRB1]], " digits. The error from a 5,000,000 or more digit \
calculation that used a different method is  "]; N[MRB7 - MRB1, 20]

During evaluation of In[6]:= Start time is Thu 14 Apr 2022 16:22:41.

During evaluation of In[6]:= Iterations required: 9286198

During evaluation of In[6]:= Will give 4535 time estimates, each more accurate than the previous.

During evaluation of In[6]:= Will stop at 9287680 iterations to ensure the precision of around 7034999 decimal places.

During evaluation of In[6]:= 

During evaluation of In[6]:= As of  Thu 14 Apr 2022 18:32:49 there were 2048 iterations done in 7808.2 seconds. That is 0.26229 iterations/s. 0.02205072% complete. It should take 409.775 days or 3.540*10^7s, and finish Mon 29 May 2023 10:58:23.

...

As of  Sun 17 Apr 2022 02:51:26 there were 106496 iterations done in 2.1053*10^5 seconds. That is 0.50586 iterations/s. 1.146637% complete. It should take 212.469 days or 1.836*10^7s, and finish Sun 13 Nov 2022 03:37:24.

...

As of  Mon 18 Apr 2022 04:13:11 there were 153600 iterations done in 3.0183*10^5 seconds. That is 0.50890 iterations/s. 1.653804% complete. It should take 211.201 days or 1.825*10^7s, and finish Fri 11 Nov 2022 21:11:45.

...

As of  Tue 19 Apr 2022 03:22:22 there were 196608 iterations done in 3.8518*10^5 seconds. That is 0.51043 iterations/s. 2.116869% complete. It should take 210.566 days or 1.819*10^7s, and finish Fri 11 Nov 2022 05:57:26.

...

As of  Wed 20 Apr 2022 02:25:24 there were 239616 iterations done in 4.6816*10^5 seconds. That is 0.51182 iterations/s. 2.579934% complete. It should take 209.993 days or 1.814*10^7s, and finish Thu 10 Nov 2022 16:13:19.

...

As of  Thu 21 Apr 2022 00:22:00 there were 280576 iterations done in 5.4716*10^5 seconds. That is 0.51279 iterations/s. 3.020948% complete. It should take 209.598 days or 1.811*10^7s, and finish Thu 10 Nov 2022 06:43:56.

...

As of  Fri 22 Apr 2022 04:58:46 there were 333824 iterations done in 6.5017*10^5 seconds. That is 0.51344 iterations/s. 3.594267% complete. It should take 209.329 days or 1.809*10^7s, and finish Thu 10 Nov 2022 00:17:09.

...

As of  Sat 23 Apr 2022 00:48:57 there were 370688 iterations done in 7.2158*10^5 seconds. That is 0.51372 iterations/s. 3.991180% complete. It should take 209.218 days or 1.808*10^7s, and finish Wed 9 Nov 2022 21:35:53.

...

As of  Sun 24 Apr 2022 03:21:25 there were 419840 iterations done in 8.1712*10^5 seconds. That is 0.51380 iterations/s. 4.520397% complete. It should take 209.184 days or 1.807*10^7s, and finish Wed 9 Nov 2022 20:47:31.

...

As of  Mon 25 Apr 2022 04:33:20 there were 466944 iterations done in 9.0784*10^5 seconds. That is 0.51435 iterations/s. 5.027563% complete. It should take 208.962 days or 1.805*10^7s, and finish Wed 9 Nov 2022 15:28:33.

...

As of  Wed 27 Apr 2022 01:36:29 there were 550912 iterations done in 1.0700*10^6 seconds. That is 0.51486 iterations/s. 5.931643% complete. It should take 208.755 days or 1.804*10^7s, and finish Wed 9 Nov 2022 10:30:11

...

As of  Fri 29 Apr 2022 08:31:39 there were 653312 iterations done in 1.2677*10^6 seconds. That is 0.51534 iterations/s. 7.034179% complete. It should take 208.561 days or 1.802*10^7s, and finish Wed 9 Nov 2022 05:50:33.

...

As of  Mon 2 May 2022 04:22:24 there were 780288 iterations done in 1.5120*10^6 seconds. That is 0.51607 iterations/s. 8.401323% complete. It should take 208.265 days or 1.799*10^7s, and finish Tue 8 Nov 2022 22:44:21.

...

As of  Thu 5 May 2022 00:23:04 there were 907264 iterations done in 1.7568*10^6 seconds. That is 0.51642 iterations/s. 9.768467% complete. It should take 208.122 days or 1.798*10^7s, and finish Tue 8 Nov 2022 19:18:47.

...

As of  Fri 6 May 2022 02:41:38 there were 956416 iterations done in 1.8515*10^6 seconds. That is 0.51655 iterations/s. 10.29768% complete. It should take 208.070 days or 1.798*10^7s, and finish Tue 8 Nov 2022 18:03:40.

...

As of  Mon 9 May 2022 05:00:16 there were 1095680 iterations done in 2.1191*10^6 seconds. That is 0.51706 iterations/s. 11.79713% complete. It should take 207.866 days or 1.796*10^7s, and finish Tue 8 Nov 2022 13:09:11.

...

As of  Thu 12 May 2022 04:20:37 there were 1228800 iterations done in 2.3759*10^6 seconds. That is 0.51720 iterations/s. 13.23043% complete. It should take 207.810 days or 1.795*10^7s, and finish Tue 8 Nov 2022 11:49:21.

...

As of  Mon 16 May 2022 02:29:04 there were 1404928 iterations done in 2.7148*10^6 seconds. That is 0.51751 iterations/s. 15.12679% complete. It should take 207.685 days or 1.794*10^7s, and finish Tue 8 Nov 2022 08:49:11.

...

As of  Thu 19 May 2022 01:45:18 there were 1538048 iterations done in 2.9714*10^6 seconds. That is 0.51762 iterations/s. 16.56009% complete. It should take 207.639 days or 1.794*10^7s, and finish Tue 8 Nov 2022 07:43:04.

...

As of  Fri 20 May 2022 10:35:13 there were 1599488 iterations done in 3.0896*10^6 seconds. That is 0.51771 iterations/s. 17.22161% complete. It should take 207.605 days or 1.794*10^7s, and finish Tue 8 Nov 2022 06:54:35.

...

As of  Tue 24 May 2022 04:50:32 there were 1767424 iterations done in 3.4145*10^6 seconds. That is 0.51763 iterations/s. 19.02977% complete. It should take 207.638 days or 1.794*10^7s, and finish Tue 8 Nov 2022 07:41:25.

...

As of  Mon 30 May 2022 02:27:32 there were 2027520 iterations done in 3.9243*10^6 seconds. That is 0.51666 iterations/s. 21.83021% complete. It should take 208.027 days or 1.797*10^7s, and finish Tue 8 Nov 2022 17:01:55.

...

As of  Wed 1 Jun 2022 23:29:10 there were 2154496 iterations done in 4.1728*10^6 seconds. That is 0.51632 iterations/s. 23.19735% complete. It should take 208.164 days or 1.799*10^7s, and finish Tue 8 Nov 2022 20:18:21.

...

As of  Tue 7 Jun 2022 01:55:28 there were 2379776 iterations done in 4.6136*10^6 seconds. That is 0.51582 iterations/s. 25.62293% complete. It should take 208.365 days or 1.800*10^7s, and finish Wed 9 Nov 2022 01:08:25.

...

As of  Thu 9 Jun 2022 07:29:40 there were 2478080 iterations done in 4.8064*10^6 seconds. That is 0.51558 iterations/s. 26.68137% complete. It should take 208.464 days or 1.801*10^7s, and finish Wed 9 Nov 2022 03:30:29.

...

As of  Sun 12 Jun 2022 02:23:15 there were 2600960 iterations done in 5.0472*10^6 seconds. That is 0.51532 iterations/s. 28.00441% complete. It should take 208.566 days or 1.802*10^7s, and finish Wed 9 Nov 2022 05:58:01.

...

For an unknown reason, the computer restarted on June 16, 2022.

Since the 3/4 of the MRB constant supercomputer that uses my 4th-degree-convergence-rate algorithm is so efficient, I decided to use them to confirm the 7 million digits being computed by the other fourth using a hybrid algorithm.

The 4th-degree convergence rate calculation will get done sooner than the one that started a few months earlier!

Here are the code and results, in the same form as the hybrid run shown a few messages above, the one that starts with "How about a cool 7 million digits?".

In[3]:= CloseKernels[]

In[1]:= Needs["SubKernels`LocalKernels`"]
Block[{$mathkernel = $mathkernel <> " -threadpriority=2"}, 
 LaunchKernels[]]

Out[2]= {"KernelObject"[1, "local"], "KernelObject"[2, "local"], 
 "KernelObject"[3, "local"], "KernelObject"[4, "local"], 
 "KernelObject"[5, "local"], "KernelObject"[6, "local"], 
 "KernelObject"[7, "local"], "KernelObject"[8, "local"], 
 "KernelObject"[9, "local"], "KernelObject"[10, "local"], 
 "KernelObject"[11, "local"], "KernelObject"[12, "local"], 
 "KernelObject"[13, "local"], "KernelObject"[14, "local"], 
 "KernelObject"[15, "local"], "KernelObject"[16, "local"], 
 "KernelObject"[17, "WIN-1AA39U1LQNT"], 
 "KernelObject"[18, "WIN-1AA39U1LQNT"], 
 "KernelObject"[19, "WIN-1AA39U1LQNT"], 
 "KernelObject"[20, "WIN-1AA39U1LQNT"], 
 "KernelObject"[21, "WIN-1AA39U1LQNT"], 
 "KernelObject"[22, "WIN-1AA39U1LQNT"], 
 "KernelObject"[23, "WIN-1AA39U1LQNT"], 
 "KernelObject"[24, "WIN-1AA39U1LQNT"], 
 "KernelObject"[25, "WIN-1AA39U1LQNT"], 
 "KernelObject"[26, "WIN-1AA39U1LQNT"], 
 "KernelObject"[27, "WIN-1AA39U1LQNT"], 
 "KernelObject"[28, "WIN-1AA39U1LQNT"], 
 "KernelObject"[29, "WIN-1AA39U1LQNT"], 
 "KernelObject"[30, "WIN-1AA39U1LQNT"], 
 "KernelObject"[31, "WIN-1AA39U1LQNT"], 
 "KernelObject"[32, "WIN-1AA39U1LQNT"], 
 "KernelObject"[33, "2600:1700:71d0:fd50:0:0:0:12"], 
 "KernelObject"[34, "2600:1700:71d0:fd50:0:0:0:12"], 
 "KernelObject"[35, "2600:1700:71d0:fd50:0:0:0:12"], 
 "KernelObject"[36, "2600:1700:71d0:fd50:0:0:0:12"], 
 "KernelObject"[37, "2600:1700:71d0:fd50:0:0:0:12"], 
 "KernelObject"[38, "2600:1700:71d0:fd50:0:0:0:12"]}

In[3]:= Print["Start time is ", ds = DateString[], "."];
prec = 7000000;
(*Number of required decimals.*)ClearSystemCache[];
T0 = SessionTime[];
expM[pre_] := 
  Module[{a, d, s, k, bb, c, end, iprec, xvals, x, pc, cores = 16(*=4*
    number of physical cores*), tsize = 2^7, chunksize, start = 1, ll,
     ctab, pr = Floor[1.005 pre]}, chunksize = cores*tsize;
   n = Floor[1.32 pr];
   end = Ceiling[n/chunksize];
   Print["Iterations required: ", n];
   Print["Will give ", end, 
    " time estimates, each more accurate than the previous."];
   Print["Will stop at ", end*chunksize, 
    " iterations to ensure precsion of around ", pr, 
    " decimal places."]; d = ChebyshevT[n, 3];
   {b, c, s} = {SetPrecision[-1, 1.1*n], -d, 0};
   iprec = pr/2^6;
   Do[xvals = Flatten[ParallelTable[Table[ll = start + j*tsize + l;
        x = N[E^(Log[ll]/(ll)), iprec];
        pc = iprec;
        While[pc < pr, pc = Min[4 pc, pr];
         x = SetPrecision[x, pc];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;
         x = 
          x*(1 + SetPrecision[4.5, pc] (ll - 1)/
               t2 + (ll + 1) z/(2 ll t) - 
             SetPrecision[13.5, 
               2 pc] ll (ll - 1)/(3 ll t2 + t^3 z))];(**N[Exp[Log[ll]/
        ll],pr]**)x, {l, 0, tsize - 1}], {j, 0, cores - 1}]];
    ctab = ParallelTable[Table[c = b - c;
       ll = start + l - 2;
       b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1));
       c, {l, chunksize}], Method -> "Automatic"];
    s += ctab.(xvals - 1);
    start += chunksize;
    st = SessionTime[] - T0; kc = k*chunksize;
    ti = (st)/(kc + 10^-4)*(n)/(3600)/(24);
    If[kc > 1, 
     Print["As of  ", DateString[], " there were ", kc, 
      " iterations done in ", N[st, 5], " seconds. That is ", 
      N[kc/st, 5], " iterations/s. ", N[kc/(end*chunksize)*100, 7], 
      "% complete.", " It should take ", N[ti, 6], " days or ", 
      N[ti*24*3600, 4], "s, and finish ", DatePlus[ds, ti], "."]];
    Print[];, {k, 0, end - 1}];
   N[-s/d, pr]];
t2 = Timing[MRB1 = expM[prec];]; Print["Finished on ", 
 DateString[], ". Proccessor and actual time were ", t2[[1]], " and ",
  SessionTime[] - T0, " s. respectively"];
Print["Enter MRB1 to print ", 
 Floor[Precision[
   MRB1]], " digits. The error from a 6,500,000 or more digit 
 calculation that used a different method is  "]; N[MRB - MRB1, 20]
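The single correction step inside this program's `While` loop (the one with `pc = Min[4 pc, pr]`) is what gives it the 4th-degree convergence rate: writing z = (n - x^n)/x^n and t = 2n - 1, one pass of x -> x(1 + 4.5(n-1)/t^2 + (n+1)z/(2nt) - 13.5 n(n-1)/(3n t^2 + t^3 z)) roughly quadruples the number of correct digits of x ≈ n^(1/n). A minimal Python sketch of that step under hypothetical parameters (n = 20, 100-digit working precision), with `decimal` standing in for Mathematica's bignums:

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 100
n = 20
N = Decimal(n)
t = 2 * N - 1
t2 = t * t
# Double-precision seed for x = n^(1/n), good to about 16 digits.
x = Decimal(repr(math.exp(math.log(n) / n)))
for _ in range(2):                       # correct digits roughly quadruple per pass
    xn = x ** n
    z = (N - xn) / xn
    x *= (1 + Decimal("4.5") * (N - 1) / t2
            + (N + 1) * z / (2 * N * t)
            - Decimal("13.5") * N * (N - 1) / (3 * N * t2 + t ** 3 * z))
ref = (N.ln() / N).exp()                 # 100-digit reference for n^(1/n)
print(abs(x - ref))
```

Expanding the correction factor in the relative error shows it matches 1/(1+eps) through the eps^3 term, which is why the leftover error is O(eps^4) and two passes take a 16-digit seed past 100 digits.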



Start time is Fri 10 Jun 2022 09:19:40.

Iterations required: 9286198

Will give 4535 time estimates, each more accurate than the previous.

Will stop at 9287680 iterations to ensure precision of around 7034999 decimal places.



As of  Fri 10 Jun 2022 10:23:46 there were 2048 iterations done in 3846.5 seconds. That is 0.53243 iterations/s. 0.02205072% complete. It should take 201.865 days or 1.744*10^7s, and finish Thu 29 Dec 2022 06:04:33.

...

As of  Sat 11 Jun 2022 16:59:07 there were 106496 iterations done in 1.1397*10^5 seconds. That is 0.93445 iterations/s. 1.146637% complete. It should take 115.019 days or 9.938*10^6s, and finish Mon 3 Oct 2022 09:47:03.

...

As of  Sun 12 Jun 2022 07:08:42 there were 153600 iterations done in 1.6494*10^5 seconds. That is 0.93123 iterations/s. 1.653804% complete. It should take 115.416 days or 9.972*10^6s, and finish Mon 3 Oct 2022 19:18:33.

...

As of  Sun 12 Jun 2022 20:07:56 there were 196608 iterations done in 2.1170*10^5 seconds. That is 0.92873 iterations/s. 2.116869% complete. It should take 115.728 days or 9.999*10^6s, and finish Tue 4 Oct 2022 02:47:20.

...

As of  Mon 13 Jun 2022 09:07:41 there were 239616 iterations done in 2.5848*10^5 seconds. That is 0.92702 iterations/s. 2.579934% complete. It should take 115.941 days or 1.002*10^7s, and finish Tue 4 Oct 2022 07:54:35.

...

As of  Mon 13 Jun 2022 21:33:17 there were 280576 iterations done in 3.0322*10^5 seconds. That is 0.92533 iterations/s. 3.020948% complete. It should take 116.152 days or 1.004*10^7s, and finish Tue 4 Oct 2022 12:59:02.

...

As of  Thu 16 Jun 2022 07:25:11 there were 466944 iterations done in 5.1153*10^5 seconds. That is 0.91284 iterations/s. 5.027563% complete. It should take 117.742 days or 1.017*10^7s, and finish Thu 6 Oct 2022 03:08:16.

...

As of  Fri 17 Jun 2022 09:39:35 there were 550912 iterations done in 6.0599*10^5 seconds. That is 0.90910 iterations/s. 5.931643% complete. It should take 118.225 days or 1.021*10^7s, and finish Thu 6 Oct 2022 14:44:16.

...

As of  Sat 18 Jun 2022 17:50:34 there were 653312 iterations done in 7.2185*10^5 seconds. That is 0.90505 iterations/s. 7.034179% complete. It should take 118.755 days or 1.026*10^7s, and finish Fri 7 Oct 2022 03:27:22.

...

As of  Mon 20 Jun 2022 09:53:03 there were 780288 iterations done in 8.6600*10^5 seconds. That is 0.90102 iterations/s. 8.401323% complete. It should take 119.286 days or 1.031*10^7s, and finish Fri 7 Oct 2022 16:11:11.

...

Thinking that it would be more productive to explore the power of the full MRB constant supercomputer 3 than to run this computation for 3 and a half more months, I am going to stop this computation.

I've shown that the 7,000,000 digits could be done in around 120 days with these resources.

6/27/2022: With my network, calculations of 1,000,000 digits or fewer are not improved by the additional 1/4 of the MRBSC3. To conserve energy, I'm waiting until the outdoor temperature here in Indianapolis is likely to stay under 90° F for a few months before I do any more large calculations.

I'm at it again!

The third attempt at a spectacular 7 million digits, using Mathematica 13.1 with 8 kernels on 8 performance and 8 efficiency cores at 5.2 GHz with 6000 MHz DDR5 memory (underclocked to 4000 MHz), using an algorithm with a 4th-degree convergence rate.

enter image description here

Print["Start time is ", ds = DateString[], "."];
prec = 7000000;
(*Number of required decimals.*)ClearSystemCache[];
T0 = SessionTime[];
expM[pre_] := 
  Module[{a, d, s, k, bb, c, end, iprec, xvals, x, pc, cores = 16(*=4*
    number of physical cores*), tsize = 2^7, chunksize, start = 1, ll,
     ctab, pr = Floor[1.005 pre]}, chunksize = cores*tsize;
   n = Floor[1.32 pr];
   end = Ceiling[n/chunksize];
   Print["Iterations required: ", n];
   Print["Will give ", end, 
    " time estimates, each more accurate than the previous."];
   Print["Will stop at ", end*chunksize, 
    " iterations to ensure precsion of around ", pr, 
    " decimal places."]; d = ChebyshevT[n, 3];
   {b, c, s} = {SetPrecision[-1, 1.1*n], -d, 0};
   iprec = pr/2^6;
   Do[xvals = Flatten[ParallelTable[Table[ll = start + j*tsize + l;
        x = N[E^(Log[ll]/(ll)), iprec];
        pc = iprec;
        While[pc < pr, pc = Min[4 pc, pr];
         x = SetPrecision[x, pc];
         xll = x^ll; z = (ll - xll)/xll;
         t = 2 ll - 1; t2 = t^2;

         x = x*(1 + 
             SetPrecision[4.5, pc] (ll - 1)/t2 + (ll + 1) z/(2 ll t) -
              SetPrecision[13.5, 
               2 pc] ll (ll - 1)/(3 ll t2 + t^3 z))];(**N[Exp[Log[ll]/
        ll],pr]**)x, {l, 0, tsize - 1}], {j, 0, cores - 1}]];
    ctab = ParallelTable[Table[c = b - c;
       ll = start + l - 2;
       b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1));
       c, {l, chunksize}], Method -> "Automatic"];
    s += ctab . (xvals - 1);
    start += chunksize;
    st = SessionTime[] - T0; kc = k*chunksize;
    ti = (st)/(kc + 10^-4)*(n)/(3600)/(24);
    If[kc > 1, 
     Print["As of  ", DateString[], " there were ", kc, 
      " iterations done in ", N[st, 5], " seconds. That is ", 
      N[kc/st, 5], " iterations/s. ", N[kc/(end*chunksize)*100, 7], 
      "% complete.", " It should take ", N[ti, 6], " days or ", 
      N[ti*24*3600, 4], "s, and finish ", DatePlus[ds, ti], "."]];
    Print[];, {k, 0, end - 1}];
   N[-s/d, pr]];
t2 = Timing[MRB1 = expM[prec];]; Print["Finished on ", 
 DateString[], ". Processor and actual time were ", t2[[1]], " and ",
  SessionTime[] - T0, " s. respectively"];
Print["Enter MRB1 to print ", 
 Floor[Precision[
   MRB1]], " digits. The error from a 6,500,000 or more digit 
 calculation that used a different method is  "]; N[MRB - MRB1, 20]

Start time is Sun 10 Jul 2022 05:04:03.

Iterations required: 9286198

Will give 4535 time estimates, each more accurate than the previous.

Will stop at 9287680 iterations to ensure precision of around 7034999 decimal places.

As of  Sun 10 Jul 2022 06:51:13 there were 2048 iterations done in 6429.7 seconds. That is 0.31852 iterations/s. 0.02205072% complete. It should take 337.432 days or 2.915*10^7s, and finish Mon 12 Jun 2023 15:26:08.



As of  Sun 10 Jul 2022 07:46:26 there were 4096 iterations done in 9742.4 seconds. That is 0.42043 iterations/s. 0.04410143% complete. It should take 255.641 days or 2.209*10^7s, and finish Wed 22 Mar 2023 20:27:22.



As of  Sun 10 Jul 2022 08:42:30 there were 6144 iterations done in 13107. seconds. That is 0.46875 iterations/s. 0.06615215% complete. It should take 229.289 days or 1.981*10^7s, and finish Fri 24 Feb 2023 11:59:32.



As of  Sun 10 Jul 2022 09:39:46 there were 8192 iterations done in 16543. seconds. That is 0.49520 iterations/s. 0.08820287% complete. It should take 217.040 days or 1.875*10^7s, and finish Sun 12 Feb 2023 06:01:48.



As of  Sun 10 Jul 2022 10:34:36 there were 10240 iterations done in 19833. seconds. That is 0.51631 iterations/s. 0.1102536% complete. It should take 208.166 days or 1.799*10^7s, and finish Fri 3 Feb 2023 09:03:41.



As of  Sun 10 Jul 2022 11:29:34 there were 12288 iterations done in 23131. seconds. That is 0.53123 iterations/s. 0.1323043% complete. It should take 202.321 days or 1.748*10^7s, and finish Sat 28 Jan 2023 12:46:22.



As of  Sun 10 Jul 2022 12:26:09 there were 14336 iterations done in 26526. seconds. That is 0.54045 iterations/s. 0.1543550% complete. It should take 198.871 days or 1.718*10^7s, and finish Wed 25 Jan 2023 01:58:44.



As of  Sun 10 Jul 2022 13:20:41 there were 16384 iterations done in 29798. seconds. That is 0.54984 iterations/s. 0.1764057% complete. It should take 195.473 days or 1.689*10^7s, and finish Sat 21 Jan 2023 16:25:03.



As of  Sun 10 Jul 2022 14:15:55 there were 18432 iterations done in 33112. seconds. That is 0.55666 iterations/s. 0.1984564% complete. It should take 193.077 days or 1.668*10^7s, and finish Thu 19 Jan 2023 06:55:03.



As of  Sun 10 Jul 2022 15:13:35 there were 20480 iterations done in 36572. seconds. That is 0.56000 iterations/s. 0.2205072% complete. It should take 191.929 days or 1.658*10^7s, and finish Wed 18 Jan 2023 03:21:12.



As of  Sun 10 Jul 2022 16:12:26 there were 22528 iterations done in 40103. seconds. That is 0.56175 iterations/s. 0.2425579% complete. It should take 191.328 days or 1.653*10^7s, and finish Tue 17 Jan 2023 12:56:50.



As of  Sun 10 Jul 2022 17:08:58 there were 24576 iterations done in 43495. seconds. That is 0.56504 iterations/s. 0.2646086% complete. It should take 190.216 days or 1.643*10^7s, and finish Mon 16 Jan 2023 10:15:32.



As of  Sun 10 Jul 2022 18:05:56 there were 26624 iterations done in 46913. seconds. That is 0.56752 iterations/s. 0.2866593% complete. It should take 189.383 days or 1.636*10^7s, and finish Sun 15 Jan 2023 14:16:15.



As of  Sun 10 Jul 2022 19:02:58 there were 28672 iterations done in 50335. seconds. That is 0.56962 iterations/s. 0.3087100% complete. It should take 188.685 days or 1.630*10^7s, and finish Sat 14 Jan 2023 21:30:11.



As of  Sun 10 Jul 2022 20:00:44 there were 30720 iterations done in 53800. seconds. That is 0.57100 iterations/s. 0.3307607% complete. It should take 188.230 days or 1.626*10^7s, and finish Sat 14 Jan 2023 10:34:44.



As of  Sun 10 Jul 2022 20:56:48 there were 32768 iterations done in 57164. seconds. That is 0.57322 iterations/s. 0.3528115% complete. It should take 187.500 days or 1.620*10^7s, and finish Fri 13 Jan 2023 17:03:23.



As of  Sun 10 Jul 2022 21:53:09 there were 34816 iterations done in 60546. seconds. That is 0.57503 iterations/s. 0.3748622% complete. It should take 186.910 days or 1.615*10^7s, and finish Fri 13 Jan 2023 02:54:25.



As of  Sun 10 Jul 2022 22:49:26 there were 36864 iterations done in 63923. seconds. That is 0.57670 iterations/s. 0.3969129% complete. It should take 186.371 days or 1.610*10^7s, and finish Thu 12 Jan 2023 13:58:01.



As of  Sun 10 Jul 2022 23:46:23 there were 38912 iterations done in 67340. seconds. That is 0.57784 iterations/s. 0.4189636% complete. It should take 186.000 days or 1.607*10^7s, and finish Thu 12 Jan 2023 05:04:25.



As of  Mon 11 Jul 2022 00:43:05 there were 40960 iterations done in 70742. seconds. That is 0.57901 iterations/s. 0.4410143% complete. It should take 185.626 days or 1.604*10^7s, and finish Wed 11 Jan 2023 20:05:32.



As of  Mon 11 Jul 2022 01:39:52 there were 43008 iterations done in 74149. seconds. That is 0.58002 iterations/s. 0.4630650% complete. It should take 185.301 days or 1.601*10^7s, and finish Wed 11 Jan 2023 12:17:43.



As of  Mon 11 Jul 2022 02:36:41 there were 45056 iterations done in 77558. seconds. That is 0.58093 iterations/s. 0.4851158% complete. It should take 185.011 days or 1.598*10^7s, and finish Wed 11 Jan 2023 05:20:13.



As of  Mon 11 Jul 2022 03:35:55 there were 47104 iterations done in 81111. seconds. That is 0.58073 iterations/s. 0.5071665% complete. It should take 185.075 days or 1.599*10^7s, and finish Wed 11 Jan 2023 06:52:21.



As of  Mon 11 Jul 2022 04:34:36 there were 49152 iterations done in 84632. seconds. That is 0.58077 iterations/s. 0.5292172% complete. It should take 185.063 days or 1.599*10^7s, and finish Wed 11 Jan 2023 06:34:32.



As of  Mon 11 Jul 2022 05:32:01 there were 51200 iterations done in 88077. seconds. That is 0.58131 iterations/s. 0.5512679% complete. It should take 184.892 days or 1.597*10^7s, and finish Wed 11 Jan 2023 02:29:08.



As of  Mon 11 Jul 2022 06:29:23 there were 53248 iterations done in 91519. seconds. That is 0.58182 iterations/s. 0.5733186% complete. It should take 184.728 days or 1.596*10^7s, and finish Tue 10 Jan 2023 22:32:49.

...

As of  Tue 12 Jul 2022 15:07:54 there were 122880 iterations done in 2.0903*10^5 seconds. That is 0.58786 iterations/s. 1.323043% complete. It should take 182.833 days or 1.580*10^7s, and finish Mon 9 Jan 2023 01:02:52.



As of  Tue 12 Jul 2022 16:06:10 there were 124928 iterations done in 2.1253*10^5 seconds. That is 0.58782 iterations/s. 1.345094% complete. It should take 182.843 days or 1.580*10^7s, and finish Mon 9 Jan 2023 01:17:30.



As of  Tue 12 Jul 2022 17:04:14 there were 126976 iterations done in 2.1601*10^5 seconds. That is 0.58782 iterations/s. 1.367144% complete. It should take 182.843 days or 1.580*10^7s, and finish Mon 9 Jan 2023 01:18:01.



As of  Tue 12 Jul 2022 18:02:48 there were 129024 iterations done in 2.1953*10^5 seconds. That is 0.58774 iterations/s. 1.389195% complete. It should take 182.868 days or 1.580*10^7s, and finish Mon 9 Jan 2023 01:54:13.

...

As of  Wed 13 Jul 2022 19:13:43 there were 182272 iterations done in 3.1018*10^5 seconds. That is 0.58763 iterations/s. 1.962514% complete. It should take 182.902 days or 1.580*10^7s, and finish Mon 9 Jan 2023 02:42:30.



As of  Wed 13 Jul 2022 20:12:33 there were 184320 iterations done in 3.1371*10^5 seconds. That is 0.58755 iterations/s. 1.984564% complete. It should take 182.928 days or 1.580*10^7s, and finish Mon 9 Jan 2023 03:20:06.



As of  Wed 13 Jul 2022 21:11:04 there were 186368 iterations done in 3.1722*10^5 seconds. That is 0.58750 iterations/s. 2.006615% complete. It should take 182.942 days or 1.581*10^7s, and finish Mon 9 Jan 2023 03:40:54.



As of  Wed 13 Jul 2022 22:09:10 there were 188416 iterations done in 3.2071*10^5 seconds. That is 0.58750 iterations/s. 2.028666% complete. It should take 182.943 days or 1.581*10^7s, and finish Mon 9 Jan 2023 03:41:18.



As of  Wed 13 Jul 2022 23:08:03 there were 190464 iterations done in 3.2424*10^5 seconds. That is 0.58742 iterations/s. 2.050717% complete. It should take 182.969 days or 1.581*10^7s, and finish Mon 9 Jan 2023 04:19:46.

...

As of  Fri 15 Jul 2022 18:24:31 there were 282624 iterations done in 4.8003*10^5 seconds. That is 0.58877 iterations/s. 3.042999% complete. It should take 182.550 days or 1.577*10^7s, and finish Sun 8 Jan 2023 18:15:34.



As of  Fri 15 Jul 2022 19:22:21 there were 284672 iterations done in 4.8350*10^5 seconds. That is 0.58878 iterations/s. 3.065050% complete. It should take 182.547 days or 1.577*10^7s, and finish Sun 8 Jan 2023 18:11:01.



As of  Fri 15 Jul 2022 20:20:40 there were 286720 iterations done in 4.8700*10^5 seconds. That is 0.58875 iterations/s. 3.087100% complete. It should take 182.555 days or 1.577*10^7s, and finish Sun 8 Jan 2023 18:22:34.



As of  Fri 15 Jul 2022 21:19:07 there were 288768 iterations done in 4.9050*10^5 seconds. That is 0.58872 iterations/s. 3.109151% complete. It should take 182.565 days or 1.577*10^7s, and finish Sun 8 Jan 2023 18:37:48.



As of  Fri 15 Jul 2022 22:17:31 there were 290816 iterations done in 4.9401*10^5 seconds. That is 0.58869 iterations/s. 3.131202% complete. It should take 182.574 days or 1.577*10^7s, and finish Sun 8 Jan 2023 18:50:50.

...

As of  Tue 19 Jul 2022 00:02:05 there were 446464 iterations done in 7.5948*10^5 seconds. That is 0.58785 iterations/s. 4.807056% complete. It should take 182.833 days or 1.580*10^7s, and finish Mon 9 Jan 2023 01:04:00.



As of  Tue 19 Jul 2022 01:01:15 there were 448512 iterations done in 7.6303*10^5 seconds. That is 0.58780 iterations/s. 4.829107% complete. It should take 182.849 days or 1.580*10^7s, and finish Mon 9 Jan 2023 01:26:35.



As of  Tue 19 Jul 2022 01:58:48 there were 450560 iterations done in 7.6648*10^5 seconds. That is 0.58783 iterations/s. 4.851158% complete. It should take 182.842 days or 1.580*10^7s, and finish Mon 9 Jan 2023 01:16:01.



As of  Tue 19 Jul 2022 02:56:38 there were 452608 iterations done in 7.6995*10^5 seconds. That is 0.58784 iterations/s. 4.873208% complete. It should take 182.838 days or 1.580*10^7s, and finish Mon 9 Jan 2023 01:11:14.
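For anyone studying the expM code above: its coefficient scheme (d = ChebyshevT[n, 3] together with the update b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1))) matches the Cohen–Rodriguez Villegas–Zagier alternating-series acceleration. Here is a minimal double-precision Python sketch of that acceleration applied to the MRB series. The function name mrb_cvz is mine, and unlike expM this sketch only reaches machine precision, not millions of digits.

```python
import math

def mrb_cvz(n=30):
    """Estimate CMRB = sum_{k>=1} (-1)^k (k^(1/k) - 1) with the
    Cohen-Rodriguez Villegas-Zagier acceleration; error ~ (3+sqrt(8))^-n."""
    d = (3.0 + math.sqrt(8.0)) ** n
    d = (d + 1.0 / d) / 2.0              # equals ChebyshevT[n, 3]
    b, c, s = -1.0, -d, 0.0
    for k in range(n):
        c = b - c
        # shift the series to start at k = 0: a_k = (k+1)^(1/(k+1)) - 1
        s += c * ((k + 1.0) ** (1.0 / (k + 1.0)) - 1.0)
        b *= (k + n) * (k - n) / ((k + 0.5) * (k + 1.0))
    return -s / d                        # minus: the original series begins with -a_1

print(mrb_cvz())  # ~ 0.187859642462067...
```

The record computations above gain their speed from controlled-precision Newton-style iteration of ll^(1/ll), not from double floats; this sketch only illustrates the summation scheme itself.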
POSTED BY: Marvin Ray Burns

July 9, 2022

Here is a table of several of my speed records from my fastest computers. First, a caveat: there is a discrepancy in the timings in the first of the following two tables. The timings at and before "3.2 GHz 6-core, 1666 MHz RAM" were absolute timings. However, it was getting too hard to commit modern computers to just the one task of computing digits -- they are doing too many other things. So, I started recording computation time from the Timing command. table

The new 16-core computer's computations with verification are documented in this notebook. Although I'm not making any large number of digits computations, I'm still working on speed records:

However, all the following timings are absolute! From the first post,

01/08/2019

Here is an update of 100k digits (in seconds):

enter image description here

Notice: I broke the 1,000-second mark!!!!!

4th of July, 2022

I did it in 861 seconds of absolute time! enter image description here with the full power of the MRB constant supercomputer 3 (MRBSC 3).

See notebook.

7th of July, 2022

I did it in 691 seconds of absolute time!

enter image description here

See notebook.

30th of July, 2022

I did it in 682 seconds of absolute time!

enter image description here See notebook.

POSTED BY: Marvin Ray Burns

Real-World, and beyond, Applications

CMRB as a Growth Model

Its factor r = (k^(1/k) - 1) models the interest rate that multiplies an investment k times in k periods, as well as "other growth and decay functions involving the more general expression (1+k)^n, as in Plot 1A," because enter image description here

r=(k^(1/k)-1);Animate[ListPlot[l=Accumulate[Table[(r+1)^n,{k,100}]], PlotStyle->Red,PlotRange->{0,150},PlotLegends->{"\!\(\*UnderscriptBox[\(\[Sum]\), \(\)]\)(r+1\!\(\*SuperscriptBox[\()\), \(n\)]\)/.r->(\!\(\*SuperscriptBox[\(k\), \(1/k\)]\)-1)/.n->"n},AxesOrigin->{0,0}],{n,0,5}]

Plot 1A enter image description here

The discrete rates look like the following.

r = (k^(1/k) - 1); me = 
 Animate[ListPlot[l = Table[(r + 1)^n, {k, 100}], PlotStyle -> Red, 
   PlotLegends -> {"(r+1)^n/.r->\!\(\*SuperscriptBox[\(k\), \
\(1/k\)]\)-1/.n->", n}, AxesOrigin -> {0, 0}, 
   PlotRange -> {0, 7}], {n, 1, 5}]

enter image description here

That factor enter image description here models not only discretely compounded rates but continuous ones too, i.e., Pt = P0 e^(rt).
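As a quick sanity check of that claim, r = k^(1/k) - 1 compounds a principal to exactly k times its value in k periods, and the equivalent continuous rate is Log[k]/k. Here is a small independent Python sketch (the variable names are mine):

```python
import math

for k in range(2, 8):
    r = k ** (1.0 / k) - 1.0                      # per-period rate, the MRB "term"
    discrete = (1.0 + r) ** k                     # compound discretely for k periods
    continuous = math.exp((math.log(k) / k) * k)  # continuously at rate log(k)/k
    assert abs(discrete - k) < 1e-9               # (1+r)^k = (k^(1/k))^k = k exactly
    assert abs(continuous - k) < 1e-9
print("both compounding schemes multiply the principal k-fold")
```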

By entering

Solve[P*E^(r*t) == P*(t^(1/t) - 1), r] 

we see that, for Pt = P0 e^(rt), t > e

gives an effect of continuous decay of enter image description here Here Q1 means the first quarter, from 0 to -1.

The alternating sum of the principal of those continuous rates, i.e. P = (-1)^t e^(rt), is the MRB constant (CMRB): enter image description here

In[647]:= NSum[(-1)^t ( E^(r*t)) /. r -> Log[-1 + t^(1/t)]/t, {t, 1, 
  Infinity}, Method -> "AlternatingSigns", WorkingPrecision -> 30]

Out[647]= 0.18785964246206712024857897184
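That NSum value can be cross-checked with no acceleration machinery at all: the raw partial sums of the alternating series oscillate around CMRB, and the mean of two consecutive partial sums damps the oscillation to roughly 9 digits by t = 100000. A plain Python sketch, my own construction:

```python
# partial sums of sum_{t>=1} (-1)^t (t^(1/t) - 1)
s_prev = s = 0.0
for t in range(1, 100001):
    s_prev = s
    s += (-1) ** t * (t ** (1.0 / t) - 1.0)
# consecutive partial sums bracket the limit; their mean is far more accurate
approx = (s + s_prev) / 2.0
print(approx)  # ~ 0.187859642...
```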

Its integral (MKB) is an analog to CMRB : enter image description here

In[1]:= NIntegrate[(-1)^t (E^(r*t)) /. r -> Log[-1 + t^(1/t)]/t, {t, 
1, Infinity I}, Method -> "Trapezoidal", WorkingPrecision -> 30] - 
2 I/Pi

   Out[1]= 0.0707760393115288035395280218303 - 
    0.6840003894379321291827444599927 I

So, integrating P yields a total about 1/2 greater than summing:

In[663]:= 
CMRB = NSum[(-1)^n ( Power[n, ( n)^-1] - 1), {n, 1, Infinity}, 
   Method -> "AlternatingSigns", WorkingPrecision -> 30];

 In[664]:= 
MKB = Abs[
   NIntegrate[(-1)^t ( E^(r*t)) /. r -> Log[-1 + t^(1/t)]/t, {t, 1, 
      Infinity I}, Method -> "Trapezoidal", WorkingPrecision -> 30] - 
    2 I/Pi];

In[667]:= MKB - CMRB

Out[667]= 0.49979272646562724956073343752
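The same ≈ 0.49979 difference can be reproduced outside of Mathematica. Along the straight path from 1 toward i∞, parameterize t = 1 + iu, so dt = i du and the e^(iπt) factor decays like e^(-πu); a composite Simpson rule then suffices. A hedged Python sketch (the cutoff u = 20, step count, and names are my choices):

```python
import cmath, math

def f(u):
    t = 1 + 1j * u                        # path from 1 toward +i*infinity
    return cmath.exp(1j * math.pi * t) * (t ** (1 / t) - 1)

# composite Simpson's rule on u in [0, 20]; the e^(-pi u) decay makes the tail negligible
N, h = 4000, 20.0 / 4000
acc = f(0.0) + f(20.0)
for i in range(1, N):
    acc += (4 if i % 2 else 2) * f(i * h)
integral = 1j * acc * h / 3               # the factor i is dt/du

MKB = abs(integral - 2j / math.pi)
CMRB = 0.18785964246206712
print(MKB - CMRB)  # ~ 0.4997927...
```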

Next:

CMRB from Geometric Series and Power Series

The MRB constant: enter image description here is closely related to geometric series: enter image description here

The inverse function of the "term" of the MRB constant, i.e. x^(1/x), within a certain domain, is solved for in this link,

enter image description here

...

enter image description here

Now we have the following for the oriented area, from 0 to 1, between the graph of that term and the axis.

enter image description here

In[344]:= f[x_] = x^(1/x);

In[346]:= CMRB = 
 NSum[(-1)^x (f[x] - 1), {x, 1, Infinity}, WorkingPrecision -> 20]

Out[346]= 0.18785964246207

In[350]:= (10 (CMRB + 3))/(3 (3 CMRB - 17)) - 
 NIntegrate[g = -x /. Solve[y == f[x], x], {y, 0, 1}, 
  WorkingPrecision -> 20]

Out[350]= {1.5605*10^-11}

Consider the following about a slight generalization of that term.

enter image description here

CMRB can be written in geometric series form:

CMRB= enter image description here

In[240]:= N[Quiet[(Sum[q^k, {x, 1, Infinity}] /. 
          k -> Log[-E^(I*Pi*x) + E^(x*(I*Pi + Log[x]/x^2))]/Log[q]) - 
       Sum[E^(I*Pi*x)*(-1 + x^(1/x)), {x, 1, Infinity}]]]

Out[240]= -4.163336342344337*^-16

Why would we express CMRB so? I'm not entirely sure, but we do have the following interestingly intricate graphs that tend toward the value of the MRB constant and the MRB constant minus 1 as the input gets large. enter image description here enter image description here enter image description here enter image description here

enter image description here enter image description here

see notebook here.


Next

The Geometry of the MRB constant


In 1837 Pierre Wantzel proved that an nth root of a given length cannot be constructed if n is not a power of 2 (as mentioned here in Wikipedia). However, the following is a little different.

On November 21, 2010, I coined a multiversal analog to Minkowski space that plots their values from constructions arising from a peculiar non-Euclidean geometry, below, and fully in this vixra draft.

As in Diagram 2, we give each n-cube a hyperbolic volume (content) equal to its dimension, enter image description here Geometrically, as in Diagram 3, on the y,z-plane line up an edge of each n-cube. The numeric values displayed in the diagram are the partial sums of S[x_] = Sum[(-1)^n*n^(1/n), {n, 1, 2*u}] where u is a positive integer. Then M is the MRB constant.

enter image description here

Join[ Table[N[S[x]], {u, 1, 4}], {"..."}, {NSum[(-1)^n*(n^(1/n) - 1), {n, 1, Infinity}]}]

Out[421]= {0.414214, 0.386178, 0.354454, 0.330824, "...", 0.18786}
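Those diagram values are easy to reproduce independently; here is a short Python sketch of the same partial sums (the function name is mine):

```python
def S(u):
    # partial sum Sum[(-1)^n * n^(1/n), {n, 1, 2u}] from the diagram
    return sum((-1) ** n * n ** (1.0 / n) for n in range(1, 2 * u + 1))

print([round(S(u), 6) for u in range(1, 5)])
# [0.414214, 0.386178, 0.354454, 0.330824] -- slowly approaching CMRB = 0.18786...
```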


Here are views of some regions of the plot of a definite integral equal to CMRB.

enter image description here

In[66]:= Csch[Pi t] Im[(1 + I t)^(1/(1 + I t))]

Out[66]= Csch[\[Pi] t] Im[(1 + I t)^(1/(1 + I t))]

In[67]:= f[t_] = Csch[Pi t] Im[(1 + I t)^(1/(1 + I t))]

Out[67]= Csch[\[Pi] t] Im[(1 + I t)^(1/(1 + I t))]

ReImPlot[Im[(1 + I t)^(1/(1 + I t))], {t, 0, 1}, PlotStyle -> Blue, 
 PlotLabels -> {Placed[(1 + I t)^(1/(1 + I t)), Above]}]

enter image description here

ReImPlot[Im[(1 + I t)^(1/(1 + I t))], {t, 0, 5}, PlotStyle -> Blue, 
 PlotLabels -> {Placed[(1 + I t)^(1/(1 + I t)), Above]}]

enter image description here

Show[ReImPlot[Csch[\[Pi] t], {t, 0, 1}, PlotStyle -> Yellow, 
  PlotLabels -> "Expressions"]]

enter image description here

Show[ReImPlot[Csch[\[Pi] t], {t, 0, 5}, PlotStyle -> Yellow, 
  PlotLabels -> "Expressions"]]

enter image description here

ReImPlot[f[t], {t, 0, 1}, 
 PlotLabel -> NIntegrate[f[t], {t, 0, 1}, WorkingPrecision -> 20], 
 PlotStyle -> Green, PlotLabels -> "Expressions"]

enter image description here

ReImPlot[f[t], {t, 0, 5}, 
 PlotLabel -> NIntegrate[f[t], {t, 0, 5}, WorkingPrecision -> 20], 
 PlotStyle -> Green, PlotLabels -> "Expressions"]

enter image description here

Next







MeijerG Representation

From its integrated analog, I found a MeijerG representation for CMRB.

The search for it began with the following:

On 10/10/2021, I found the following proper definite integral that leads to almost identical proper integrals from 0 to 1 for CMRB and its integrated analog.

m vs m2 0 to 1

See notebook in this link.

Here is a MeijerG function for the integrated analog. See (proof) of discovery.

enter image description here

f(n)=enter image description here.

In[135]:= f[n_] := MeijerG[{{}, Table[1, {n + 1}]}, {Prepend[Table[0, n + 1], -n + 1], {}}, -I \[Pi]];

In[337]:= M2 = NIntegrate[E^(I Pi x) (x^(1/x) - 1), {x, 1, Infinity I}, 
   WorkingPrecision -> 100]

Out[337]=0.07077603931152880353952802183028200136575469620336302758317278816361845726438203658083188126617723821-0.04738061707035078610720940650260367857315289969317363933196100090256586758807049779050462314770913485 \[ImaginaryI]

enter image description here

I wonder if there is one for the MRB constant sum (CMRB)?

According to "Primary Proof 1" and "Primary Proof 3" shown below along with the section prefixed by the phrase "So far I came up with," it can be proven that for G being the Wolfram MeijerG function

and f(n)=enter image description here, and enter image description here

g[x_] = (-1)^x (1 - (x + 1)^(1/(x + 1)));

In[52]:= (1/2)*
 NIntegrate[(g[-t] - g[t])/(Sin[Pi*t]*Cos[Pi*t]*I + Sin[Pi*t]^2), {t, 
   0, I*Infinity}, WorkingPrecision -> 100, 
     Method -> "GlobalAdaptive"]

Out[52]= 0.\
1170836031505383167089899122239912286901483986967757585888318959258587\
7430027817712246477316693025869 + 
 0.0473806170703507861072094065026036785731528996931736393319610009025\
6586758807049779050462314770913485 I



In[57]:= Re[
 NIntegrate[ 
  g[-t]/(Sin[Pi*t]*Cos[Pi*t]*I + Sin[Pi*t]^2), {t, 0, I*Infinity}, 
  WorkingPrecision -> 100, 
     Method -> "GlobalAdaptive"]]

Out[57]= 0.\
1878596424620671202485179340542732300559030949001387861720046840894772\
315646602137032966544331074969

The Laplace transform analogy to the CMRB

enter image description here enter image description here enter image description here enter image description here enter image description here enter image description here see notebook

Likewise, Wolfram Alpha here says

enter image description here

It also adds

enter image description here

Interestingly,

enter image description here

That has the same argument, enter image description here, as the MeijerG transformation of CMRB. enter image description here



MRB constant formulas and identities

I developed this informal catalog of formulas for the MRB constant with over 20 years of research and ideas from users like you.


6/7/2022

CMRB

=enter image description here

=enter image description here

=enter image description here

=enter image description here

=enter image description here

=enter image description here

So, using induction, we have. enter image description here

enter image description here

Sum[Sum[(-1)^(x + n), {n, 1, 5}] + (-1)^(x) x^(1/x), {x, 2, Infinity}]


3/25/2022

Formula (11) =

enter image description here

As Mathematica says:

Assuming[Element[c, \[DoubleStruckCapitalZ]], FullSimplify[
     E^(t*(r + I*Pi*(2*c + 1))) /. r -> Log[t^(1/t) - 1]/t]]

= E^(I (1 + 2 c) \[Pi] t) (-1 + t^(1/t)) enter image description here

where, for all integers c, (1+2c) is odd, leading to enter image description here

Expanding the E^log term gives

enter image description here

which is enter image description here,

That is exactly (2) in the above-quoted MathWorld definition: enter image description here


2/21/2022

Directly from the formula of 12/29/2021 below, enter image description here In

u = (-1)^t; N[
NSum[(t^(1/t) - 1) u, {t, 1, Infinity }, WorkingPrecision -> 24, 
Method -> "AlternatingSigns"], 15]

Out[276]= 0.187859642462067

In

v = (-1)^-t - (-1)^t; 2 I N[
NIntegrate[Im[(t^(1/t) - 1) v^-1], {t, 1, Infinity I}, 
WorkingPrecision -> 24], 15]

Out[278]= 0.187859642462067

Likewise, enter image description here

Expanding the exponents,

enter image description here This can be generalized to (x+log/

Building upon that, we get a closed form for the inner integral in the following.

CMRB= enter image description here

In[1]:= 
CMRB = NSum[(-1)^n (n^(1/n) - 1), {n, 1, Infinity}, 
   WorkingPrecision -> 1000, Method -> "AlternatingSigns"];

In[2]:= CMRB - { 
 Quiet[Im[NIntegrate[
    Integrate[
     E^(Log[t]/t + x)/(-E^((-I)*Pi*t + x) + E^(I*Pi*t + x)), {x, 
      I, -I}], {t, 1, Infinity I}, WorkingPrecision -> 200, 
    Method -> "Trapezoidal"]]];
 Quiet[Im[NIntegrate[
    Integrate[
     Im[E^(Log[t]/t + x)/(-E^((-I)*Pi*t + x) + E^(I*Pi*t + x))], {x, 
      -t,  t }], {t, 1
     , Infinity  I}, WorkingPrecision -> 2000, 
    Method -> "Trapezoidal"]]]}

Out[2]= {3.*10^-998, 3.*10^-998}

which, after a little analysis, can be shown to be convergent in the continuum limit as t → ∞ i.



12/29/2021

From "Primary Proof 1" worked below, it can be shown that enter image description here

Mathematica knows that because

  m = N[NSum[-E^(I*Pi*t) + E^(I*Pi*t)*t^t^(-1), {t, 1, Infinity}, 
      Method -> "AlternatingSigns", WorkingPrecision -> 27], 18];
  Print[{m - 
     N[NIntegrate[
       Im[(E^(Log[t]/t) + E^(Log[t]/t))/(E^(I \[Pi] t) - 
            E^(-I \[Pi] t))] I, {t, 1, -Infinity I}, 
       WorkingPrecision -> 20], 18], 
    m - N[NIntegrate[
       Im[(E^(Log[t]/t) + E^(Log[t]/t))/(E^(-I \[Pi] t) - 
            E^(I \[Pi] t))] I, {t, 1, Infinity I}, 
       WorkingPrecision -> 20], 18], 
    m + 2 I*NIntegrate[
       Im[(E^(I*Pi*t + Log[t]/t))/(-1 + E^((2*I)*Pi*t))], {t, 1, 
        Infinity I}, WorkingPrecision -> 20]}]

yields

  {0.*^-19,0.*^-19,0.*^-19}

Partial sums to an upper limit of (10^n i) give approximations for the MRB constant + the same approximation *10^-(n+1) i. Example:

-2 I*NIntegrate[
  Im[(E^(I*Pi*t + Log[t]/t))/(-1 + E^((2*I)*Pi*t))], {t, 1, 10^7 I}, 
  WorkingPrecision -> 20]

gives 0.18785602000738908694 + 1.878560200074*10^-8 I where CMRB ≈ 0.187856.

Notice it is special because if we integrate only the numerator, we have MKB=enter image description here, which defines the "integrated analog of CMRB" (MKB) described by Richard Mathar in https://arxiv.org/abs/0912.3844. (He called it M1.)

Similarly, this:

NIntegrate[(E^(I*Pi*t + Log[t]/t)), {t, 1, Infinity I}, 
  WorkingPrecision -> 20] - I/Pi

converges to

0.070776039311528802981 - 0.68400038943793212890 I.

(The upper limits " i infinity" and " infinity" produce the same result in this integral.)



11/14/2021

Here is a standard notation for the above-mentioned

CMRB,enter image description here enter image description here.

In[16]:= CMRB = 0.18785964246206712024851793405427323005590332204; \
CMRB - NSum[(Sum[
    E^(I \[Pi] x) Log[x]^n/(n! x^n), {x, 1, Infinity}]), {n, 1, 20}, 
  WorkingPrecision -> 50]

Out[16]= -5.8542798212228838*10^-30

In[8]:= c1 = 
 Activate[Limit[(-1)^m/m! Derivative[m][DirichletEta][x] /. m -> 1, 
   x -> 1]]

Out[8]= 1/2 Log[2] (-2 EulerGamma + Log[2])

In[14]:= CMRB - 
 N[-(c1 + Sum[(-1)^m/m! Derivative[m][DirichletEta][m], {m, 2, 20}]), 
  30]

Out[14]= -6.*10^-30


11/01/2021

The catalog now appears complete; it can all be proven through Primary Proof 1 and the one with the eta function, Primary Proof 2, both found below.

a ≠ b enter image description here enter image description here

g[x_] = x^(1/x); CMRB = 
 NSum[(-1)^k (g[k] - 1), {k, 1, Infinity}, WorkingPrecision -> 100, 
  Method -> "AlternatingSigns"]; a = -Infinity I; b = Infinity I; 
g[x_] = x^(1/x); (v = t/(1 + t + t I);
 Print[CMRB - (-I /2 NIntegrate[ Re[v^-v Csc[Pi/v]]/ (t^2), {t, a, b},
       WorkingPrecision -> 100])]); Clear[a, b]
    -9.3472*10^-94

Thus, we find

enter image description here

here, and enter image description here next:

In[93]:= CMRB = 
 NSum[Cos[Pi n] (n^(1/n) - 1), {n, 1, Infinity}, 
  Method -> "AlternatingSigns", WorkingPrecision -> 100]; Table[
 CMRB - (1/2 + 
    NIntegrate[
     Im[(t^(1/t) - t^(2 n))] (-Csc[\[Pi] t]), {t, 1, Infinity I}, 
     WorkingPrecision -> 100, Method -> "Trapezoidal"]), {n, 1, 5}]

Out[93]= {-9.3472*10^-94, -9.3473*10^-94, -9.3474*10^-94, \
-9.3476*10^-94, -9.3477*10^-94}

CTRL+F "The following is a way to compute the" for more evidence.

For such n, enter image description here converges to 1/2+0i.

(How I came across all of those and more example code follow in various replies.)



On 10/18/2021

I found the following triad of pairs of integrals summed from -complex infinity to +complex infinity.

CMRB= -complex infinity to +complex infinity

You can see it worked in this link here.

In[1]:= n = {1, 25.6566540351058628559907};

In[2]:= g[x_] = x^(n/x);
-1/2 Im[N[
   NIntegrate[(g[(1 - t)])/(Sin[\[Pi] t]), {t, -Infinity I, 
     Infinity I}, WorkingPrecision -> 60], 20]]

Out[3]= {0.18785964246206712025, 0.18785964246206712025}

In[4]:= g[x_] = x^(n/x);
1/2 Im[N[NIntegrate[(g[(1 + t)])/(Sin[\[Pi] t]), {t, -Infinity I, 
     Infinity I}, WorkingPrecision -> 60], 20]]

Out[5]= {0.18785964246206712025, 0.18785964246206712025}

In[6]:= g[x_] = x^(n/x);
1/4 Im[N[NIntegrate[(g[(1 + t)] - (g[(1 - t)]))/(Sin[\[Pi] t]), {t, -Infinity I, 
     Infinity I}, WorkingPrecision -> 60], 20]]

Out[7]= {0.18785964246206712025, 0.18785964246206712025}

Therefore, bringing

enter image description here

back to mind, we joyfully find,

CMRB n and 1

In[1]:= n = 
  25.65665403510586285599072933607445153794770546058072048626118194900\
97321718621288009944007124739159792146480733342667`100.;

g[x_] = {x^(1/x), x^(n/x)};

CMRB = NSum[(-1)^k (k^(1/k) - 1), {k, 1, Infinity}, 
   WorkingPrecision -> 100, Method -> "AlternatingSigns"];

Print[CMRB - 
  NIntegrate[Im[g[(1 + I t)]/Sinh[\[Pi] t]], {t, 0, Infinity}, 
   WorkingPrecision -> 100], u = (-1 + t); v = t/u;
 CMRB - NIntegrate[Im[g[(1 + I v)]/(Sinh[\[Pi] v] u^2)], {t, 0, 1}, 
   WorkingPrecision -> 100], 
 CMRB - NIntegrate[Im[g[(1 - I v)]/(Sinh[-\[Pi] v] u^2)], {t, 0, 1}, 
   WorkingPrecision -> 100]]

During evaluation of In[1]:= {-9.3472*10^-94,-9.3472*10^-94}{-9.3472*10^-94,-9.3472*10^-94}{-9.3472*10^-94,-9.3472*10^-94}

In[23]:= Quiet[
 NIntegrate[
  Im[g[(1 + I t)]/Sinh[\[Pi] t] - 
    g[(1 + I v)]/(Sinh[\[Pi] v] u^2)], {t, 1, Infinity}, 
  WorkingPrecision -> 100]]

Out[23]= -3.\
9317890831820506378791034479406121284684487483182042179057328100219696\
20202464096600592983999731376*10^-55

In[21]:= Quiet[
 NIntegrate[
  Im[g[(1 + I t)]/Sinh[\[Pi] t] - 
    g[(1 - I v)]/(Sinh[-\[Pi] v] u^2)], {t, 1, Infinity}, 
  WorkingPrecision -> 100]]

Out[21]= -3.\
9317890831820506378791034479406121284684487483182042179057381396998279\
83065832972052160228141179706*10^-55

In[25]:= Quiet[
 NIntegrate[
  Im[g[(1 + I t)]/Sinh[\[Pi] t] + 
    g[(1 + I v)]/(Sinh[-\[Pi] v] u^2)], {t, 1, Infinity}, 
  WorkingPrecision -> 100]]

Out[25]= -3.\
9317890831820506378791034479406121284684487483182042179057328100219696\
20202464096600592983999731376*10^-55


On 9/29/2021

I found the following equation for CMRB (great for integer arithmetic because

(1-1/n)^k=(n-1)^k/n^k. )

CMRB integers 1

So, using only integers, and sufficiently large ones in place of infinity, we can use

CMRB integers 2

See

In[1]:= Timing[m=NSum[(-1)^n (n^(1/n)-1),{n,1,Infinity},WorkingPrecision->200,Method->"AlternatingSigns"]][[1]]

Out[1]= 0.086374

In[2]:= Timing[m-NSum[(-1)^n/x! (Sum[((-1 + n)^k) /(k n^(1 + k)), {k, 1, Infinity}])^ x, {n, 2, Infinity}, {x, 1,100}, Method -> "AlternatingSigns",  WorkingPrecision -> 200, NSumTerms -> 100]]

Out[2]= {17.8915,-2.2*^-197}

It is very much slower, but it can give a rational approximation (p/q), like in the following.

In[3]:= mt=Sum[(-1)^n/x! (Sum[((-1 + n)^k) /(k n^(1 + k)), {k, 1,500}])^ x, {n, 2,500}, {x, 6}];

In[4]:= N[m-mt]

Out[4]= -0.00602661

In[5]:= Head[mt]

Out[5]= Rational

Compared to the NSum formula for m, we see

In[6]:= Head[m]

Out[6]= Real
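The reason the integer-only formula works: the inner sum is the power series for -Log[1 - (n-1)/n]/n, which converges to Log[n]/n, and the outer sum over x with 1/x! exponentiates it, since Sum[s^x/x!, {x, 1, ∞}] = E^s - 1 = n^(1/n) - 1. A Python sketch with exact rationals (the truncation limits and names are mine):

```python
import math
from fractions import Fraction

def inner(n, K=300):
    # Sum[(n-1)^k / (k n^(k+1)), {k, 1, K}] as an exact rational; tends to log(n)/n
    return sum(Fraction((n - 1) ** k, k * n ** (k + 1)) for k in range(1, K + 1))

n = 3
s = inner(n)                                  # a Rational, like mt above
assert abs(float(s) - math.log(n) / n) < 1e-12
# exponentiating the inner sum recovers the MRB term n^(1/n) - 1
term = sum(s ** x / math.factorial(x) for x in range(1, 21))
assert abs(float(term) - (n ** (1 / n) - 1)) < 1e-12
```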


On 9/19/2021

I found the following quality of CMRB.

replace constants for CMRB



On 9/5/2021

I added the following MRB constant integral over an unusual range.

strange

See proof in this link here.



On Pi Day, 2021, 2:40 pm EST,

I added a new MRB constant integral.

CMRB = integral to sum

We see many more integrals for CMRB.

We can expand 1/x into the following.

xx = 25.656654035

xx = 25.65665403510586285599072933607445153794770546058072048626118194\
90097321718621288009944007124739159792146480733342667`100.;


g[x_] = x^(xx/
    x); I NIntegrate[(g[(-t I + 1)] - g[(t I + 1)])/(Exp[Pi t] - 
           Exp[-Pi t]), {t, 0, Infinity}, WorkingPrecision -> 100]

 (*
0.18785964246206712024851793405427323005590309490013878617200468408947\
72315646602137032966544331074969.*)

Expanding upon the previously mentioned

enMRB sinh

we get the following set of formulas that all equal CMRB:

Let

x= 25.656654035105862855990729 ...

along with the following constants (approximate values given)

{u = -3.20528124009334715662802858},

{u = -1.975955817063408761652299},

{u = -1.028853359952178482391753},

{u = 0.0233205964164237996087020},

{u = 1.0288510656792879404912390},

{u = 1.9759300365560440110320579},

{u = 3.3776887945654916860102506},

{u = 4.2186640662797203304551583} or

$ u = \infty .$

Another set follows.

let x = 1 and

along with the following {approximations}

{u = 2.451894470180356539050514},

{u = 1.333754341654332447320456} or

$ u = \infty $

then

enter image description here

See this notebook from the Wolfram Cloud for justification.



2020 and before:

Also, in terms of the Euler-Riemann zeta function,

CMRB =enter image description here

Furthermore, as enter image description here,

according to user90369 at Stack Exchange, CMRB can be written as a sum of zeta derivatives, similar to the eta derivatives discovered by Crandall. zeta hint For information about η^(j)(k), please see e.g. this link here, formulas (11)+(16)+(19). credit



In the light of the parts above, where

CMRB

= k^(1/k)-1

= eta'(k)

= sum from 0 enter image description here as well as double equals RHS, an internet scholar going by the moniker "Dark Malthorp" wrote:

eta *z^k






Primary Proof 1

CMRB=enter image description here, based on

CMRB eta equals enter image description here

is proven below by an internet scholar going by the moniker "Dark Malthorp."

Dark Malthorp's proof



Primary Proof 2

eta sums, with η^(k) denoting the kth derivative of the Dirichlet eta function evaluated at k and at 0 respectively, were first discovered in 2012 by Richard Crandall of Apple Computer.

The left half is proven below by Gottfried Helms, and it is proven more rigorously (considering the conditionally convergent sum, enter image description here) below that. Then the right half is a Taylor expansion of eta(s) around s = 0.

n^(1/n)-1

At https://math.stackexchange.com/questions/1673886/is-there-a-more-rigorous-way-to-show-these-two-sums-are-exactly-equal,

it has been noted that "even though one has cause to be a little bit wary around formal rearrangements of conditionally convergent sums (see the Riemann series theorem), it's not very difficult to validate the formal manipulation of Helms. The idea is to cordon off a big chunk of the infinite double summation (all the terms from the second column on) that we know is absolutely convergent, which we are then free to rearrange with impunity. (Most relevantly for our purposes here, see pages 80-85 of this document, culminating with the Fubini theorem which is essentially the manipulation Helms is using.)"

argument 1 argument 2
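Crandall's eta-derivative series can be checked numerically. Below is a sketch (my own, not Crandall's code) that computes η^(k)(k) by termwise differentiation of the alternating Dirichlet series and sums the first eleven terms; the truncation point and precision are illustrative assumptions — the series converges so fast that k ≤ 11 already gives well over ten digits.

```python
from mpmath import mp, nsum, log, fac, inf

mp.dps = 30

def eta_deriv(k):
    # k-th derivative of the Dirichlet eta function at s = k, via
    # termwise differentiation of eta(s) = sum_{n>=1} (-1)^(n-1) n^(-s)
    return nsum(lambda n: (-1)**(n - 1) * (-log(n))**k / n**k, [1, inf])

# Crandall's series: CMRB = sum_{k>=1} (-1)^(k+1) eta^(k)(k) / k!
cmrb = sum((-1)**(k + 1) * eta_deriv(k) / fac(k) for k in range(1, 12))
print(cmrb)
```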



Primary Proof 3

Here is a proof, by Ariel Gershon, of a faster-converging integral for its integrated analog (the MKB constant).

g(x)=x^(1/x), M1=hypothesis

Which is the same as

enter image description here because changing the upper limit to 2N + 1 increases M1 by 2i/π.

MKB constant calculations have been moved to their discussion at http://community.wolfram.com/groups/-/m/t/1323951?ppauth=W3TxvEwH .

Iimofg->1

Cauchy's Integral Theorem

Lim surface h gamma r=0

Lim surface h beta r=0

limit to 2n-1

limit to 2n-

Plugging in equations [5] and [6] into equation [2] gives us:

leftright

Now take the limit as N→∞ and apply equations [3] and [4]: QED. He went on to note that

enter image description here





I wondered about the relationship between CMRB and its integrated analog and asked the following. enter image description here So far, I have come up with

Another relationship between the sum and integral that remains more unproven than I would like is

CMRB(1-i)

f[x_] = E^(I \[Pi] x) (1 - (1 + x)^(1/(1 + x)));
CMRB = NSum[f[n], {n, 0, Infinity}, WorkingPrecision -> 30, 
   Method -> "AlternatingSigns"];
M2 = NIntegrate[f[t], {t, 0, Infinity I}, WorkingPrecision -> 50];
part = NIntegrate[(Im[2 f[(-t)]] + (f[(-t)] - f[(t)]))/(-1 + 
      E^(-2 I \[Pi] t)), {t, 0, Infinity I}, WorkingPrecision -> 50];
CMRB (1 - I) - (M2 - part)

gives

6.103779*10^-23 - 6.103779*10^-23 I.

The integral does not converge, but Mathematica can give it a value:

enter image description here






Update 2015

Here is my mini-cluster of the fastest 3 computers (the MRB constant supercomputer 0) mentioned below: The one on the left is my custom-built extreme edition, originally 6-core and later upgraded to an 8-core 3.4 GHz Xeon processor with 64 GB of 1666 MHz RAM. The one in the center is my fast little 4-core Asus with 2400 MHz RAM. The one on the right is my fastest: a Digital Storm 6-core overclocked to 4.7 GHz on all cores, with 3000 MHz RAM.

first 3 way cluster


POSTED BY: Marvin Ray Burns

Moved below.

POSTED BY: Marvin Ray Burns

The Laplace transform analogy to the CMRB

enter image description here enter image description here enter image description here enter image description here enter image description here enter image description here see notebook

Likewise, Wolfram Alpha here says

enter image description here

It also adds

enter image description here

Interestingly,

enter image description here

That has the same argument, enter image description here, as the MeijerG transformation of CMRB. enter image description here

POSTED BY: Marvin Ray Burns

Removed.

POSTED BY: Marvin Ray Burns

A reply a couple of places before this one has some "Programs to compute the integrated analog". Here is a recent discovery that could help in verifying the analog's digital expansions.

When f[x_] = E^(I Pi x) (1 - (1 + x)^(1/(1 + x))), the MRB constant is Sum[f[n],{n,0,Infinity}] and also, Sum[f[n],{n,1,Infinity}].
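Since f(0) = 0, the two starting points must give the same sum; a quick mpmath sketch (my own, with e^(iπx) reduced to (-1)^x at the integers, and illustrative precision):

```python
from mpmath import mp, nsum, inf

mp.dps = 25

# f(x) = e^(i pi x) (1 - (1+x)^(1/(1+x))) at integer x; f(0) = 0,
# so the sums from n = 0 and from n = 1 must agree, and both give CMRB.
f = lambda x: (-1)**x * (1 - (1 + x)**(1/(1 + x)))
s0 = nsum(f, [0, inf])
s1 = nsum(f, [1, inf])
print(s0, s1)
```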

POSTED BY: Marvin Ray Burns

Moved below.

POSTED BY: Marvin Ray Burns

Programs to compute the integrated analog

The efficient programs

Wed 29 Jul 2015 11:40:10

From an initial accuracy of only 7 digits,

0.07077603931152880353952802183028200137`19.163032309866352 - 
 0.68400038943793212918274445999266112671`20.1482024033675 I - \
(NIntegrate[(-1)^t (t^(1/t) - 1), {t, 1, Infinity}, 
    WorkingPrecision -> 20] - 2 I/Pi)

enter image description here

we have the first efficient program to compute the integrated analog (MKB) of the MRB constant, which is good for 35,000 digits.

Block[{$MaxExtraPrecision = 200}, prec = 4000; f[x_] = x^(1/x);
 ClearAll[a, b, h];
 Print[DateString[]];
 Print[T0 = SessionTime[]];

 If[prec > 35000, d = Ceiling[0.002 prec], 
  d = Ceiling[0.264086 + 0.00143657 prec]];

 h[n_] := 
  Sum[StirlingS1[n, k]*
    Sum[(-j)^(k - j)*Binomial[k, j], {j, 0, k}], {k, 1, n}];

 h[0] = 1;
 g = 2 I/Pi - Sum[-I^(n + 1) h[n]/Pi^(n + 1), {n, 1, d}];

 sinplus1 := 
  NIntegrate[
   Simplify[Sin[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity}, 
   WorkingPrecision -> prec*(105/100), 
   PrecisionGoal -> prec*(105/100)];

 cosplus1 := 
  NIntegrate[
   Simplify[Cos[Pi*x]*D[f[x], {x, d + 1}]], {x, 1, Infinity}, 
   WorkingPrecision -> prec*(105/100), 
   PrecisionGoal -> prec*(105/100)];

 middle := Print[SessionTime[] - T0, " seconds"];

 end := Module[{}, Print[SessionTime[] - T0, " seconds"];
   Print[c = Abs[a + b]]; Print[DateString[]]];


 If[Mod[d, 4] == 0, 
  Print[N[a = -Re[g] - (1/Pi)^(d + 1)*sinplus1, prec]];
  middle;
  Print[N[b = -I (Im[g] - (1/Pi)^(d + 1)*cosplus1), prec]];
  end];


 If[Mod[d, 4] == 1, 
  Print[N[a = -Re[g] - (1/Pi)^(d + 1)*cosplus1, prec]];
  middle;
  Print[N[b = -I (Im[g] + (1/Pi)^(d + 1)*sinplus1), prec]]; end];

 If[Mod[d, 4] == 2, 
  Print[N[a = -Re[g] + (1/Pi)^(d + 1)*sinplus1, prec]];
  middle;
  Print[N[b = -I (Im[g] + (1/Pi)^(d + 1)*cosplus1), prec]];
  end];

 If[Mod[d, 4] == 3, 
  Print[N[a = -Re[g] + (1/Pi)^(d + 1)*cosplus1, prec]];
  middle;
  Print[N[b = -I (Im[g] - (1/Pi)^(d + 1)*sinplus1), prec]];
  end];]

May 2018

I got a substantial improvement in calculating the digits of MKB by using V11.3 in May 2018 on my new computer (processor Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz, 3601 MHz, 4 Core(s), 8 Logical Processor(s), with 16 GB of 2400 MHz DDR4 RAM):

Digits  Seconds
2000    67.5503022
3000    217.096312
4000    514.48334
5000    1005.936397
10000   8327.18526
20000   71000

They are found in the attached 2018 quad MKB.nb.

They are twice as fast (or more) as my old records with the same program using Mathematica 10.2 in July 2015 on my old big computer (a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz with 64 GB of 1066 MHz DDR3 RAM):

Digits  Seconds
2000    256.3853590
3000    794.4361122
4000    1633.5822870
5000    2858.9390025
10000   17678.7446323
20000   121431.1895170
40000   (got an error message)

May 2021

After finding the following rapidly converging integral for MKB, enter image description here

(See Primary Proof 3 in the first post.)

I finally computed 200,000 digits of MKB (0.070776 - 0.684 I...). Started Saturday, May 15, 2021, 10:54:17 AM, and finished at 9:23:50 AM EDT, Friday, August 20, 2021, for a total of 8.37539*10^6 seconds, or 96 days 22 hours 29 minutes 50 seconds.

The full computation, verification to 100,000 digits, and hyperlinks to various digits are found below at 200k MKB A.nb. The code was

g[x_] = x^(1/x); u := (t/(1 - t)); Timing[
 MKB1 = (-I Quiet[
      NIntegrate[(g[(1 + u I)])/(Exp[Pi u] (1 - t)^2), {t, 0, 1}, 
       WorkingPrecision -> 200000, Method -> "DoubleExponential", 
       MaxRecursion -> 17]] - I/Pi)]
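After the substitution u = t/(1 - t), the integral above becomes MKB = -i ∫_0^∞ g(1 + iu) e^(-πu) du - i/π. Here is a low-precision mpmath sketch of that form (my own illustrative settings; the 200,000-digit run naturally used far larger ones):

```python
from mpmath import mp, mpc, exp, pi, quad, inf, j

mp.dps = 30

g = lambda z: z**(1/z)

# MKB = -i * Int_0^oo g(1 + i u) e^(-pi u) du  -  i/pi,
# the u = t/(1-t) substitution of the notebook's unit-interval integral.
mkb = -j * quad(lambda u: g(mpc(1, u)) * exp(-pi * u), [0, inf]) - j/pi
print(mkb)
```

The leading digits match 0.070776... - 0.684000... I quoted earlier in the thread.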

enter image description here

After finding the above more rapidly converging integral for MKB, in only 80.5 days, 189,330 real digits and 166,700 imaginary digits were confirmed to be correct by the following different formula, as seen at https://www.wolframcloud.com/obj/bmmmburns/Published/2nd%20200k%20MRB.nb

All digits at

https://www.wolframcloud.com/obj/bmmmburns/Published/200K%20confirmed%20MKB.nb (Recommended to open in desktop Mathematica.)

N[(Timing[
   FM2200K - (NIntegrate[(Exp[Log[t]/t - Pi t/I]), {t, 1, Infinity I},
        WorkingPrecision -> 200000, Method -> "Trapezoidal", 
       MaxRecursion -> 17] - I/Pi)]), 20]

enter image description here

I've learned more about what MaxRecursion is required for 250,000 digits to be verified from the two different formulas, and they are being computed as I write. It will probably take over 100 days.

Laurent series for the analog

I've not perfected the method, but here is how to compute the integrated analog of the MRB constant from series.

$f = (-1)^z (z^(1/z) - 1); MKB = 
 NIntegrate[$f, {z, 1, Infinity I}, WorkingPrecision -> 500]; 
Table[s[x_] = Series[$f, {z, n, x}] // Normal; 
  Timing[Table[
    MKB - Quiet[ 
      NIntegrate[s[x] /. z -> n, {n, 1, Infinity I}, 
       WorkingPrecision -> p, Method -> "Trapezoidal", 
       MaxRecursion -> Ceiling[Log2[p/2]]]], {p, 100, 100 x, 
     100}]], {x, 1, 10}] // TableForm

enter image description here

Table[Short[s[n]], {n, 1, 5}] // TableForm

enter image description here

Attachments:
POSTED BY: Marvin Ray Burns

I calculated 6,500,000 digits of the MRB constant!!

The MRB constant supercomputer said,

Finished on Wed 16 Mar 2022 02:02:10. Processor and actual times were 6.26628*10^6 and 1.60264035419592*10^7 s, respectively. Enter MRB1 to print 6532491 digits. The error from a 6,000,000 or more digit calculation that used a different method is 0.*10^-6029992

"Processor time" 72.526 days

"Actual time" 185.491 days

For the digits see the attached 6p5millionMRB.nb. For the documentation of the computation see 2nd 6p5 million.nb.

POSTED BY: Marvin Ray Burns

Time for a quick memorial:

This discussion began sometime around 2/21/2013.

"This MRB records posting reached a milestone of over 120,000 views on 3/31/2020, around 4:00 am."

"As of 04:00 am 1/2/2021, this discussion had 300,000 views!"

"And as of 08:30 pm 2/3/2021, this discussion had 330,000 views!"

"7:00 pm 10/8/2021 it had 520,000 views!"

1:40 am 3/2/2022 600,000 views

8:25 pm 5/4/2022 650,000 views

In the last 7 months, this discussion has had as many visitors as it did in its first 7 years!

POSTED BY: Marvin Ray Burns

While waiting for results on the 2nd try of calculating 6,500,000 digits of the MRB constant (CMRB), I thought I would compare the rates of convergence of 3 major different forms of it. They are listed from slowest to fastest.

POSTED BY: Marvin Ray Burns

To add meaningful content: on 11 Sep 2021 at 14:15:27, I started the second try at computing 6,500,000 digits of the MRB constant. Here is the beginning of it:

 In[2]:= Needs["SubKernels`LocalKernels`"]
Block[{$mathkernel = $mathkernel <> " -threadpriority=2"}, 
 LaunchKernels[]]

Out[3]= {"KernelObject"[1, "local"], "KernelObject"[2, "local"], 
 "KernelObject"[3, "local"], "KernelObject"[4, "local"], 
 "KernelObject"[5, "local"], "KernelObject"[6, "local"], 
 "KernelObject"[7, "local"], "KernelObject"[8, "local"], 
 "KernelObject"[9, "local"], "KernelObject"[10, "local"]}

In[4]:= Print["Start time is ", ds = DateString[], "."];
prec = 6500000;
(*Number of required decimals.*)ClearSystemCache[];
T0 = SessionTime[];
expM[pre_] := 
  Module[{a, d, s, k, bb, c, end, iprec, xvals, x, pc, cores = 16(*=4*
    number of physical cores*), tsize = 2^7, chunksize, start = 1, ll,
     ctab, pr = Floor[1.005 pre]}, chunksize = cores*tsize;
   n = Floor[1.32 pr];
   end = Ceiling[n/chunksize];
   Print["Iterations required: ", n];
   Print["Will give ", end, 
    " time estimates, each more accurate than the previous."];
   Print["Will stop at ", end*chunksize, 
    " iterations to ensure precsion of around ", pr, 
    " decimal places."]; d = ChebyshevT[n, 3];
   {b, c, s} = {SetPrecision[-1, 1.1*n], -d, 0};
   iprec = Ceiling[pr/396288];
   Do[xvals = Flatten[Parallelize[Table[Table[ll = start + j*tsize + l;
         x = N[E^(Log[ll]/