Hi, I would like to understand how floating-point numbers are represented in Mathematica. In particular, I don't understand why the minimum and maximum positive numbers have fixed values while, at the same time, the accuracy of the representation can be varied. Is there a lucid explanation available? My question is motivated by attempts to avoid underflow or overflow in computations involving, for example, the exponential function. Leslaw
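To make the motivation concrete, here is the kind of overflow/underflow behaviour in question, sketched in Python rather than Mathematica (Python floats are IEEE double-precision, the same format as Mathematica's machine numbers):

```python
import math
import sys

# The largest finite double is about 1.8e308, so exp(x)
# overflows once x exceeds log(1.8e308) ~= 709.78.
print(sys.float_info.max)       # largest finite double
print(math.exp(709.0))          # still finite

try:
    math.exp(710.0)             # Python raises OverflowError here
except OverflowError as err:
    print("overflow:", err)

# Underflow is silent: results shrink through subnormal
# numbers and eventually become exactly 0.0.
print(math.exp(-745.0))         # tiny subnormal, still positive
print(math.exp(-746.0))         # underflows to 0.0
```

The usual workaround in this regime is to rearrange formulas to work with logarithms instead of the raw exponentials.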
This document might be helpful.
This document is just a user manual. I do not really see an explanation there, except for a vague statement that, for non-arbitrary-precision numbers, Mathematica uses built-in double-precision variables. What I do not understand is why, in the arbitrary-precision case, where apparently many more bits are used to store a number, the extra bits are used only to improve the accuracy dynamically, and not to enlarge the range of representable numbers dynamically as well. Is there no technical description available of how the arbitrary-precision numbers are implemented? Leslaw
Arbitrary-precision numbers are represented in mantissa-exponent form. The exponent is stored as a fixed-length machine number (an integer or a double, I'm not sure which) and is apparently limited to 52 bits.
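A toy sketch of that representation, in Python (the class, field names, and the 16-bit exponent width are all made up for illustration; they are not how Mathematica actually stores things), showing why a growable mantissa combined with a fixed-width exponent gives arbitrary precision but a fixed range:

```python
from dataclasses import dataclass

EXP_BITS = 16                    # toy fixed width; the real field is larger
EXP_MAX = 2**(EXP_BITS - 1) - 1  # largest storable binary exponent

@dataclass
class BigFloat:
    """Toy value m * 2**e: m is an arbitrary-size Python int,
    while e must fit in the fixed-width exponent field."""
    m: int
    e: int

    def __post_init__(self):
        if abs(self.e) > EXP_MAX:
            raise OverflowError("exponent does not fit in fixed-width field")

# The mantissa can grow without bound -> arbitrary *precision*:
x = BigFloat(m=12345678901234567890123456789, e=0)

# ... but the *range* is capped by the exponent field, no matter
# how many bits are spent on the mantissa:
try:
    BigFloat(m=1, e=EXP_MAX + 1)
except OverflowError as err:
    print("out of range:", err)
```

In such a scheme, adding mantissa words refines the value between the same fixed bounds; only widening the exponent field would enlarge the range, and that field is fixed at compile time.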
Are there any documents (public reports, publications, etc.) available that describe exactly how these numbers are implemented? Leslaw