Unless I say otherwise, 1.000001 is an exact number. I am well aware that by convention Mathematica doesn't treat it as one, and I don't want to get into that discussion here. But I find it strange that, for instance:
In[1]:= N[1.00000000000000010, 1000]
Out[1]= 1.0000000000000001
with the remaining 983 zeros on the end just assumed. And yet it doesn't assume any zeros after the number when I write it.
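I know the official story, or at least my reading of it: a literal with more digits than machine precision is parsed as an arbitrary-precision number whose precision equals its digit count, and N will lower precision on request but never raise it. You can check this yourself:

Precision[1.00000000000000010]  (* 18: the literal carries 18 significant digits, so that's its precision *)
N[1.00000000000000010, 1000]    (* unchanged: N never raises precision, so it stays at 18 *)

Fine, but that's precisely the convention I'm objecting to.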
I just can't understand why nobody seems to see what is staring me in the face. Let's look at the common example of Pi:
In[2]:= N[Pi, 15]
Out[2]= 3.14159265358979
In[3]:= N[Pi, 16]
Out[3]= 3.141592653589793
In[4]:= N[Pi, 17]
Out[4]= 3.1415926535897932
In[5]:= N[Pi, 18]
Out[5]= 3.14159265358979324
In[6]:= N[Pi, 19]
Out[6]= 3.141592653589793238
In[7]:= N[Pi, 20]
Out[7]= 3.1415926535897932385
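The reason this works, of course, is that Pi is exact, so N is free to manufacture as many digits as I ask for:

Precision[Pi]  (* Infinity: Pi is an exact symbolic constant, so N can mint digits on demand *)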
Beautiful. Exactly what I would want it to do. Exactly what I would expect it to do. Exactly the information I asked for. Here is something else I would like it to do:
N[1.0000000000000001, 15]
1.00000000000000
N[1.0000000000000001, 16]
1.000000000000000
N[1.0000000000000001, 17]
1.0000000000000001
N[1.0000000000000001, 18]
1.00000000000000010
N[1.0000000000000001, 19]
1.000000000000000100
N[1.0000000000000001, 20]
1.0000000000000001000
But what does it actually do?
In[8]:= N[1.0000000000000001, 15]
Out[8]= 1.
In[9]:= N[1.0000000000000001, 16]
Out[9]= 1.
In[10]:= N[1.0000000000000001, 17]
Out[10]= 1.
In[11]:= N[1.0000000000000001, 18]
Out[11]= 1.
In[12]:= N[1.0000000000000001, 19]
Out[12]= 1.
In[13]:= N[1.0000000000000001, 20]
Out[13]= 1.
In[14]:= N[1.0000000000000001, 1000]
Out[14]= 1.
If that's not the definition of what it does, then change the definition.
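The nearest thing I've found to the behavior I want is to make the number exact myself, either by writing it as an exact expression or by tagging the literal with a precision mark (a sketch; display details may vary by version):

N[1 + 10^-16, 20]        (* exact input, so N gives 1.0000000000000001000 *)
1.0000000000000001`20    (* backtick notation: a precision-20 number, displayed zero-padded *)

But my point is that I shouldn't have to ask.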
While I'm at it, what's wrong with this picture?
In[15]:= SetPrecision[10000.04, 7]
Out[15]= 10000.04
In[16]:= SetPrecision[10000.04, MachinePrecision]
Out[16]= 10000.
In[17]:= SetPrecision[10000.04, 17]
Out[17]= 10000.040000000001
You set the precision progressively higher and get a different result at each stage. I understand the extra digits on the last one, but that's not what I'm talking about. I'm talking about the middle one, which doesn't print any decimal digits at all. OK, I've got an explanation of why this happens, but I still don't agree that it's right.
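For anyone who hasn't seen that explanation: machine reals are stored in binary, and SetPrecision just exposes more digits of the underlying binary value. You can inspect the exact stored value yourself:

SetPrecision[0.1, Infinity]       (* 3602879701896397/36028797018963968: the exact binary double behind 0.1 *)
SetPrecision[10000.04, Infinity]  (* likewise a huge exact rational, not the 250001/25 I typed *)

So the stray digits are faithful to the binary representation; I just don't think faithfulness to the representation should trump faithfulness to what I typed.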
Even though it has been acknowledged that there are stylistic choices here, most people seem to justify Mathematica's behavior by implying it's inevitable. Well, I know it's not, because I've seen it done differently, and better, in other software. I just think there's room for improvement here.