Subtracting two floating-point numbers that are close to each other leads to loss of significance. This is a feature of floating-point arithmetic; the calculator will also give a more accurate result, provided one uses the second formula, which avoids the cancellation. For the correct answer, see the 300-digit computation above.
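For concreteness, here is a sketch of both forms at machine precision (I am assuming the "second formula" is the algebraically equivalent rationalized form 2 x/(1 + Sqrt[1 - 4 Pi x]); it is not spelled out above):

(1 - Sqrt[1 - 4 Pi 0.000001])/(2 Pi)
(* subtracting Sqrt[1 - 4 Pi x] from 1 cancels the leading digits, leaving only about 11 of the 16 machine digits correct *)

2*0.000001/(1 + Sqrt[1 - 4 Pi 0.000001])
(* the same quantity, rewritten to avoid subtracting nearly equal numbers; correct to close to full machine precision *)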
The three numbers in Raymond's post are only correct up to 1.0000031416. The reason is that the entire computation is done at machine precision, and calling SetPrecision afterwards only pads the result with (binary) zeros. There will be more digits in the result, but they will not be reliable, because we started from a machine number. A little better, but still far from optimal (correct only up to 1.0000031416123929), is
In[4]:= (1 - Sqrt[1 - 4 Pi SetPrecision[0.000001, 35]])/(2 Pi)
Out[4]= 1.0000031416123929083759918187152063*10^-6
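A quick check (illustrative, not part of the original exchange) shows why starting from the machine number 0.000001 cannot be repaired after the fact: the padded value differs from the exact 10^-6 by its binary representation error.

SetPrecision[0.000001, 35] - 10^-6
(* nonzero: SetPrecision pads the binary value of the machine number with binary zeros, which is not the exact decimal 10^-6; the discrepancy is tiny (of order 10^-22), but it caps the result at roughly 16-17 correct digits *)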
Compare with N, which works adaptively to obtain the desired precision:
In[5]:= N[(1 - Sqrt[1 - 4 Pi 10^-6])/(2 Pi), 35]
Out[5]= 1.0000031416123929536281643215075828*10^-6
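The adaptive behavior only works because the input 10^-6 is exact. With a machine number inside, the expression evaluates at fixed machine precision before N can do anything; again an illustrative check:

N[(1 - Sqrt[1 - 4 Pi 0.000001])/(2 Pi), 35]
(* returns a machine-precision number: the argument has already evaluated at machine precision, and N cannot raise the precision of a machine number, so the digits lost to cancellation are gone for good *)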
The difference between precision and accuracy is that the former measures relative uncertainty (error), while the latter measures absolute uncertainty. If x0 is the true value and x is an approximation, then the precision prec satisfies Abs[(x - x0)/x0] == 10^-prec, while the accuracy acc satisfies Abs[x - x0] == 10^-acc. Precision can therefore be thought of as the approximate number of significant digits, while accuracy is the approximate number of significant digits to the right of the decimal point.
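In Mathematica the two are reported by Precision and Accuracy, and by the definitions above they differ by the magnitude of the number: prec == acc + Log[10, Abs[x]]. A small sketch (the particular number is only an illustration):

x = N[Pi, 30];
Precision[x]                   (* 30, by construction *)
Accuracy[x]                    (* about 29.5, i.e. 30 - Log[10, Pi] *)
Accuracy[x] + Log[10, Abs[x]]  (* recovers Precision[x], up to rounding *)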
Perhaps the following blog post and the resources mentioned there, particularly this tutorial from the documentation, will be of interest. For a really in-depth look, I would recommend this article: Sofroniou, M., & Spaletta, G. (2005). Precise numerical computation. Journal of Logic and Algebraic Programming, 64(1), 113-134.