Message Boards

Find the "nth" of a large PrimeNumber?

Posted 6 years ago

Hi guys! I hope all of you are doing well :) Can someone tell me how to find the index n of a large prime with Wolfram|Alpha or Mathematica? I used "PrimePi", but "PrimePi" does not work with large primes (primes like 1921773217311523519374373 do not work... too large...). Is there a criterion, method, or script with which I can find the index of a large prime?

I have also used the "nthprime" function, but I think this is not what I need. If there is a way to use the nth-prime function to find the index of a large prime, can someone show me how it works? To better explain what I mean, here is an example:

  • 2 is the 1st prime (1 is the index I need)
  • 3 is the 2nd prime (2 is the index I need)
  • 5 is the 3rd prime (3 is the index I need)
  • 7 is the 4th prime (4 is the index I need)

and so on...another example:

19 is the 8th (!) prime, 23 is the 9th (!) prime, 29 is the 10th (!) prime... Now I need a function that tells me which prime 1921773217311523519374373 is, i.e. its position in the sequence of primes. I hope somebody here has an idea how I can find that out with Wolfram|Alpha or Mathematica.
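For small inputs this is exactly what PrimePi gives, and Prime inverts it:

PrimePi[19]

(* 8 *)

PrimePi[29]

(* 10 *)

Prime[8]

(* 19 *)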

I hope anyone can help me here. Kind regards and best wishes.

POSTED BY: Nural I.
14 Replies
Posted 6 years ago

Prime counting (and identifying large primes in general) is known to be a computationally hard and intensive problem; much of modern cryptography is built on this fact. Your number is of the order 10^24. Since there is no simple formula that generates the primes, finding the PrimePi value of ever larger numbers becomes tremendously difficult.

I found a C++ library dedicated to prime counting. I don't know a great deal about this library, but the authors seem to have put real time and effort into implementing the known algorithms efficiently. With their library on a decent CPU they benchmark PrimePi[10^22] at roughly 8200 seconds, and 10^22 is still about two orders of magnitude less than the number you're after. The authors claim their library can handle x < 10^31, though, so if you're up to it and willing to dedicate a few days of CPU time to the calculation, you could plug your value in there!

PrimePi[10^24] was only verified unconditionally fairly recently (2012), using state-of-the-art dedicated algorithms on powerful machines! You can read the paper here: https://arxiv.org/abs/1203.5712. While computational tools advance relatively quickly in the 21st century, I'm not confident that consumer machines are powerful enough to handle this type of calculation quickly on their own.

*Please note that this was done with some quick hobbyist Google searching. An expert in the field may be able to provide more information on the topic.

POSTED BY: Kyle Martin
Posted 6 years ago

Your best bet will be to use RiemannR to estimate PrimePi:

Round[RiemannR[1921773217311523519374373]]

(* 35007295017219325087811 *)

N[%]

(* 3.50073*10^22 *)

Based on the known value of PrimePi[10^24], the relative error here is probably minuscule, and the result might be accurate in the first 13 or 14 digits.

N[1 - RiemannR[10^24]/18435599767349200867866]

(* 9.22595*10^-14 *)
POSTED BY: Greg Hurst
Posted 6 years ago

PrimePi has been computed at the powers of 10 up to 10^27. See here and here.

POSTED BY: Greg Hurst
Posted 6 years ago

Hi Kyle! Thank you very much for the explanation and for all the very helpful links. Wow, I didn't realize this is such a difficult endeavour. But one thing I still don't understand: why do they try to compute these values over and over again? All primes up to the prime number with 23,249,425 digits have already been calculated... why don't Mathematica or Wolfram|Alpha just use lists?

The company Wolfram (which has many servers and plenty of CPU power) could simply generate a list of all primes up to, say, 500 (or more) digits, with each prime's index stored alongside it (like the program "primegen", which generates both at the same time and saves the results as a txt file). Then attach the list to a function: when a user types a prime into that function, it only has to look the prime up in the list and display its index. That would really help a lot of people.

POSTED BY: Nural I.
Posted 6 years ago

I think you underestimate how much memory that would require (and how large this number is). Let's just assume we store a list of the numbers from 1 to your number, 1921773217311523519374373. We'll store each number as a 128-bit integer, since that is what most of them would require (18446744073709551615 is the maximum unsigned 64-bit integer).

Storing 1921773217311523519374373 numbers at 128 bits each would take ~3.07x10^13 terabytes of memory. I'm pretty sure this is significantly more than current estimates of the total storage in the world. And this doesn't even account for storing the "PrimePi pair" along with each number.

This rough calculation was done with:

In[1]:= N[1921773217311523519374373*UnitConvert[Quantity[128, "Bits"], "Terabytes"]]

Out[1]= Quantity[3.07484*10^13, "Terabytes"]

(This oversimplifies some things and leaves out some obvious ways to be more efficient. In any event, the suggestion is simply beyond what is practical.)

POSTED BY: Kyle Martin

@KyleMartin, one can save a bit of memory by storing only the successive increments, but even that gets tricky since the increments can be arbitrarily large. As for the question (not yours) about storing all primes up to 500 digits, you are of course on target that it would exceed all available storage.
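As a small illustration of the increment idea (just a sketch on the first ten primes), one would store the gaps between consecutive primes, which are far smaller than the primes themselves, and reconstruct the primes by accumulation:

gaps = Differences[Prime[Range[10]]]

(* {1, 2, 2, 4, 2, 4, 2, 4, 6} *)

FoldList[Plus, 2, gaps]

(* {2, 3, 5, 7, 11, 13, 17, 19, 23, 29} *)

The catch Daniel mentions: for very large primes an individual gap can itself be arbitrarily large, so a fixed-width encoding of the gaps eventually fails.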

POSTED BY: Daniel Lichtblau
Posted 6 years ago

Computing an isolated value (PrimePi[10^24], for example) is much, much less expensive than computing all primes below that bound.

Still, at present, computing PrimePi[10^24] requires roughly 200 CPU days and 24 GB of RAM using the state-of-the-art algorithm described here (see page 10 for timings).

POSTED BY: Greg Hurst
Anonymous User
Posted 6 years ago

"hard" and computationally intensive

Mathematica keeps a digital certificate of each VERIFIED large prime number.

It's impossible: each must be found and then VERIFIED (the verification can require division that computers are unable to do because of the sheer length in digits of the numerator and denominator, in which case the number is NOT registered by Mathematica). To divide you need to know if a number is prime :)

It's the "not intersection" of all frequencies of all past numbers. There is no calculation possible, only the data of everything that came before, which is far, far more than computers can ever handle once we pass a certain threshold.

POSTED BY: Anonymous User

This is straying somewhat off topic. There is no issue with dividing integers on the order of 10^30. The issue is in knowing one has correctly counted all primes up to the specified value. Apparently this has been done up to 10^24, and maybe a bit further.
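To make the first point concrete (a trivial illustration, not part of any prime-counting algorithm): exact arithmetic on integers of this size is routine and effectively instantaneous in Mathematica.

n = 1921773217311523519374373;
QuotientRemainder[n, 2^64]

Arbitrary-precision integer arithmetic poses no difficulty here; the hard part, as noted above, is counting the primes, not dividing by them.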

All this is unrelated to certificates for extremely large primes. Those do not require counts of all smaller primes.

POSTED BY: Daniel Lichtblau
Posted 6 years ago

Hi @All, thank you very much for all the detailed and helpful information as well as your outstanding support! I really appreciate it. Indeed, I underestimated the issue and obviously hadn't given it much thought, but now I know, and I am up to date. Thanks for that!

All the best :)

POSTED BY: Nural I.

Yes, the RiemannR approximation is pretty good, the exact prime count being

35007295017213256839290

(which, however, took about 108 hours to compute on a reasonably fast computer)

POSTED BY: Ilian Gachevski
Posted 6 years ago

Whoa, is this the exact value of PrimePi[1921773217311523519374373]?

What type of machine did you use? Your timing seems much faster than the times quoted here.

POSTED BY: Greg Hurst
Posted 6 years ago

which, however, took about 108 hours to compute on a reasonably fast computer

Wow, that's massive! Thank you for that, Ilian!

@Chip, you were right, the difference is (actually) minuscule:

  • 35007295017213256839290
  • 35007295017219325087811
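For the record, the relative difference between the exact count and the RiemannR estimate is on the order of 10^-13, confirming the earlier error estimate:

N[1 - 35007295017213256839290/35007295017219325087811]

(* ≈ 1.7*10^-13 *)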
POSTED BY: Nural I.

The machine was a 16-core/32-thread Linux box with 384 GB of RAM, running Kim Walisch's parallel implementation of the Deleglise-Rivat algorithm (mentioned, but not benchmarked, in the linked paper).

POSTED BY: Ilian Gachevski