There are several interrelated misconceptions about what value is being returned, whether or how to obtain a different value, what is printed visually vs. what value is stored, and the like. This has, by and large, been noted in responses by Henrik Schachner, Neil Singer, and perhaps others. I want to spell out a few things.
(1) The inputs are machine precision numbers. A construct like N[value, 100] will not change them, because N will not raise precision, and machine precision is considered to be the lowest possible precision in the Wolfram Language.
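A quick check along these lines (the cell numbering here is mine, for illustration):

In[1]:= Precision[N[5000.01 + 5000.03, 100]]

Out[1]= MachinePrecision

The request for 100 digits is simply ignored, because the input is already a machine number.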
(2) As noted by others, the default precision shown is 6 digits. The comment about not using Mathematica for accounting was perhaps intended as an offhand remark, but regardless it should be noted that there is AccountingForm, for visual purposes (by which I mean it only does formatting; it does not alter values). I will cut-paste the example in question directly, because it may give a better indication of what I mean.
In[11]:= val = 5000.01 + 5000.03
Out[11]= 10000.
In[12]:= AccountingForm[val, {Infinity, 2}]
Out[12]//AccountingForm= \!\(
TagBox[
InterpretationBox["\<\"10000.04\"\>",
10000.04,
AutoDelete->True],
AccountingForm[#, {
DirectedInfinity[1], 2}]& ]\)
Notice that the InterpretationBox gives the formatting of the value, to two places to the right of the decimal point.
(3) SetPrecision can be used to raise precision. One should understand, however, that it might change the value in an unexpected manner. It works by taking the binary representation and padding with binary zeros. Since not all terminating decimal values have exact terminating binary counterparts, the value thus obtained may differ from the input. This can be useful (and I do use SetPrecision in certain circumstances to raise precision), but, for the reason above, it comes with a caveat.
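As an illustration (cell numbering mine): padding the machine number 0.1, whose binary representation does not terminate, exposes digits that were never in the decimal input.

In[1]:= SetPrecision[0.1, 20]

Out[1]= 0.10000000000000000555

The trailing 555 comes from the binary zero-padding, not from anything the user typed.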
(4) Default display precision for machine reals is 6 digits. It has been this way quite possibly since version 1.0. An alternative would be to show 16 or 17 digits. This could have been done but, on balance, I think the default is nicer. This is of course a subjective judgement. One can always get a better indication of the actual value using InputForm.
In[13]:= InputForm[val]
Out[13]//InputForm=
10000.04
The definitive value is the binary bit pattern, and that can be deduced from RealDigits:
In[14]:= RealDigits[val, 2]
Out[14]= {{1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,
1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0,
0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0}, 14}
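Should one want the exact rational those bits encode, the digit list can be handed back to FromDigits (cell numbering mine; the resulting fraction is exact, and I omit it here):

In[1]:= FromDigits[RealDigits[val, 2], 2]

This round-trips the bit pattern into an exact number equal to the stored binary value.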
(5) Entering values with increasingly many padded zeros to the right of the decimal point shows a change of behavior at the point where the inputs have too many digits to be machine numbers. These are thus bignums, and subject to the display formatting used for such numbers. That formatting is not restricted to a default of 6 digits but rather shows all significant digits (in the sense of "significance arithmetic").
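A rough sketch of this (the literal and cell numbering are mine): a decimal literal with more digits than a machine real can hold becomes an arbitrary-precision number whose precision reflects the digit count entered.

In[1]:= Precision[5000.010000000000000000]

Out[1]= 22.

With 22 significant digits entered, the number is a bignum at precision 22, and arithmetic with it follows significance arithmetic rather than machine arithmetic.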
(6) I hope it has become clear from prior responses that the subject heading is in error. There is no "precision error" in the computation claimed to be problematic. The default formatting might not be what was wanted, but by definition it is not possible to have a default that meets all possible needs. My guess is that either AccountingForm or InputForm would be what was wanted here.
I hope the above provides some indication of the display-versus-actual-value distinction, the formatting possibilities, and the reasons for the various outputs that were seen and in some cases called into question.