Table command not obeying limits

Posted 10 years ago

Why does Table not obey the given limits? In this example the last list entry is larger than the upper limit!

Table[x, {x, 0.1, 0.7, 0.1}] // FullForm
Output: List[0.1`,0.2`,0.30000000000000004`,0.4`,0.5`,0.6`,0.7000000000000001`]

In contrast, the Range function works as expected.

Range[0.1, 0.7, 0.1]//FullForm
Output: List[0.1`,0.2`,0.30000000000000004`,0.4`,0.5`,0.6`,0.7`]
POSTED BY: Berthold Bäuml
17 Replies

Hi Berthold,

You are asking for floating point numbers, and these come with machine precision ...

Henrik

POSTED BY: Henrik Schachner
Posted 10 years ago

Of course I am using approximate numbers with MachinePrecision here. But this does not explain why Table gives back an entry beyond the upper limit -- MachinePrecision numbers also have a unique ordering. Range does it correctly.
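
For reference, a minimal check (a sketch, assuming machine doubles and the same iterator specification as above):

(* compare the last element of each list against the parsed 0.7 *)
Last[Table[x, {x, 0.1, 0.7, 0.1}]] - 0.7   (* roughly 1.1*10^-16, i.e. one ulp above the limit *)
Last[Range[0.1, 0.7, 0.1]] - 0.7           (* 0., Range ends exactly on the parsed 0.7 *)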

Berthold

POSTED BY: Berthold Bäuml

You asked for approximate numbers. 0.7000000000000001 is approximately 7/10:

In[72]:= 0.7000000000000001 == 7/10

Out[72]= True

Mathematica makes no guarantee that approximate numbers computed in different ways will be identical even if they compare as equal and would be mathematically identical if computed exactly. If you want exact numbers, specify exact numbers as input:

In[78]:= Table[n, {n, 1/10, 7/10, 1/10}]

Out[78]= {1/10, 1/5, 3/10, 2/5, 1/2, 3/5, 7/10}

Note that your apparently exact results are generally not exact: fractions whose denominators are not powers of two cannot be exactly represented in binary floating point. 0.7 only appears exact when rounded to the nearest decimal at machine precision, but it's a little off. You can see this if you ask for additional precision:

In[79]:= SetPrecision[Range[0.1, .7, .1], 25]

Out[79]= {0.1000000000000000055511151, 0.2000000000000000111022302, \
0.3000000000000000444089210, 0.4000000000000000222044605, \
0.5000000000000000000000000, 0.5999999999999999777955395, \
0.6999999999999999555910790}
POSTED BY: John Doty

SetPrecision's manual says

When SetPrecision is used to increase the precision of a number, the number is padded with zeros. The zeros are taken to be in base 2. In base 10, the additional digits are usually not zeros.

In[30]:= SetPrecision[Table[x, {x, 0.1, 0.7, 0.1}], 20]
Out[30]= {0.10000000000000000555, 0.20000000000000001110, \
0.30000000000000004441, 0.40000000000000002220, \
0.50000000000000000000, 0.59999999999999997780, \
0.70000000000000006661}

This does not reach 0.7:

In[26]:= Table[x, {x, SetPrecision[0.1, 30], SetPrecision[0.7, 30], SetPrecision[0.1, 30]}]
Out[26]= {0.100000000000000005551115123126, \
0.200000000000000011102230246252, 0.300000000000000016653345369377, \
0.400000000000000022204460492503, 0.500000000000000027755575615629, \
0.600000000000000033306690738755}

on the other hand:

In[6]:= Table[x, {x, 0.1`20, 0.7`20, 0.1`20}]
Out[6]= {0.10000000000000000000, 0.20000000000000000000, \
0.30000000000000000000, 0.40000000000000000000, \
0.50000000000000000000, 0.60000000000000000000, 0.70000000000000000000}

but

In[33]:= Table[x, {x, 0.1`30, 0.7`30, 0.1`30}]
Out[33]= {0.100000000000000000000000000000, \
0.200000000000000000000000000000, 0.300000000000000000000000000000, \
0.400000000000000000000000000000, 0.500000000000000000000000000000, \
0.600000000000000000000000000000}

but again

In[37]:= SetPrecision[Table[x, {x, 0.1, 0.7, 0.1}], 30]
Out[37]= {0.100000000000000005551115123126, \
0.200000000000000011102230246252, 0.300000000000000044408920985006, \
0.400000000000000022204460492503, 0.500000000000000000000000000000, \
0.599999999999999977795539507497, 0.700000000000000066613381477509}

The easiest way is to make the iteration step width a rational number:

In[41]:= Table[x, {x, 0.1, 0.7, Rationalize[0.1]}]
Out[41]= {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7}
In[42]:= FullForm[%]
Out[42]//FullForm= List[0.1`, 0.2`, 0.30000000000000004`, 0.4`, 0.5`, 0.6`, 0.7000000000000001`]
POSTED BY: Udo Krause
Posted 10 years ago

It is clear that both numbers, 0.7 and 0.7000000000000001, are MachinePrecision numbers. Nevertheless they are different, and the latter is larger than the former, so Table is not obeying the upper limit which I provided to it!

Range does it correctly -- as your example using higher-precision numbers also shows: all elements in the resulting list lie between the lower and upper bound, i.e., < 0.7.

Your example of 0.7000000000000001 == 7/10 shows another severe problem with Mathematica. Here 7/10 is first converted (correctly) to a MachinePrecision number (because the left side is also only of MachinePrecision), which results in

In[73]:= N[7/10]//FullForm
Out[73]//FullForm= 0.7`

Then they are compared

In[100]:= 0.7000000000000001`==0.7`
Out[100]= True

This is a strange result, as the two numbers are obviously different. I know that the documentation says that Equal etc. drop the last few binary digits when comparing. But I do not understand why it drops precision (no other language I know does this!), which leads to strange and inconsistent results like these:

In[96]:= Floor[1.- 10^-14]
Out[96]= 0
In[95]:= 1.-( 10^-14) < 1.
Out[95]= False

The first result says that 1. - 10^-14 is definitely smaller than 1., and the second says that it is not!
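
For scale, a small sketch (assuming machine doubles) of the numbers involved:

(* the gap is well above $MachineEpsilon, yet inside the relative comparison tolerance near 1. *)
1. - (1. - 10^-14)        (* roughly 1.*10^-14 *)
$MachineEpsilon           (* roughly 2.2*10^-16 *)
2^7 $MachineEpsilon       (* roughly 2.8*10^-14, about the last seven binary digits *)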

POSTED BY: Berthold Bäuml

Mathematica treats approximate numbers as approximate, by design. You cannot expect exact results if you use approximate numbers. If you want exact results, use exact numbers. Use the tool's design, don't fight it.

POSTED BY: John Doty

Hi Berthold,

I can see your point, but if the two numbers 0.7 and 0.7000000000000001 were compared in the way you are expecting, then it would be a matter of pure chance whether the last entry in your list (the result of Table[x, {x, 0.1, 0.7, 0.1}]) is 0.69... or 0.70...

What is needed is some meaningful comparison of two floating point numbers. From a purely numerical point of view such a comparison does not make sense unless you take some measures. The documentation, e.g. for Equal (==), says:

Approximate numbers with machine precision or higher are considered equal if they differ in at most their last seven binary digits (roughly their last two decimal digits).

This is to be kept in mind.
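
If a comparison of the underlying binary values without this tolerance is needed, one option (just a sketch) is to convert to exact rationals first:

(* compare the exact binary values, bypassing the tolerance used by Equal and Less *)
SetPrecision[0.7000000000000001, Infinity] == SetPrecision[0.7, Infinity]   (* False *)
SetPrecision[0.7000000000000001, Infinity] >  SetPrecision[0.7, Infinity]   (* True *)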

Henrik

POSTED BY: Henrik Schachner
Posted 10 years ago

Approximate numbers are not some "unsharp" thing with arbitrary behavior; they have clear semantics. Having now dug deeper into this issue, it seems obvious that Mathematica's MachinePrecision numbers, which are floating point numbers, do not obey the IEEE 754 standard. This standard precisely defines an ordering for all floating point numbers with the totalOrder predicate. My question is: why is Mathematica not following this standard, which every other language I know of (C/C++, Matlab, Maple, Python, ...) does?

POSTED BY: Berthold Bäuml

Mathematica has its own conventions for the behavior of approximate numbers. It follows them rigorously for its arbitrary precision numbers, and more heuristically for machine numbers. It does not even guarantee that machine numbers will be encoded according to IEEE 754, although most floating point hardware uses that standard these days.
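
As a practical observation (not a guarantee of the encoding), the machine-number parameters on typical hardware do match IEEE 754 binary64:

(* observed on common IEEE-double hardware; the language itself does not promise this *)
$MachineEpsilon == 2.^-52    (* True for a 53-bit significand *)
$MachinePrecision            (* about 15.95, i.e. 53 Log10[2] *)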

There is no law of nature that makes IEEE 754 the true path. In my embedded work, using simple processors without hardware floating point, I sometimes wish IEEE 754 was not so ubiquitous: it is not an efficient standard for software floating point. Other languages generally use whatever the hardware and system libraries provide: in the past I've used much C on hardware with different floating point.

Mathematica is a very different language from the others you mentioned. Because it is different, it is often good at solving problems that the other languages you mention cannot easily solve. But, of course, that requires you to use its differences rather than fight them.

POSTED BY: John Doty

It does not even guarantee that machine numbers will be encoded according to IEEE 754

Maple states in How Maple Compares to Mathematica, p. 15 (Numerics), that Mathematica does not follow IEEE 754.

There is no law of nature that makes IEEE 754 the true path.

Standards are about convention, not about truth.

Usually, about two orders of magnitude above $MachineEpsilon, Mathematica does things right:

In[1]:= Table[{o, Floor[1. - 10^o $MachineEpsilon]}, {o, 0, 5}] 
Out[1]= {{0, 0}, {1, 0}, {2, 0}, {3, 0}, {4, 0}, {5, 0}}

In[3]:= Table[{o, 1. - 10^o $MachineEpsilon < 1.}, {o, 0, 5}]
Out[3]= {{0, False}, {1, False}, {2, True}, {3, True}, {4, True}, {5, True}}

 In[4]:= 10^(-14) < 100 $MachineEpsilon
 Out[4]= True

Results in the area of $MachineEpsilon are not results.

POSTED BY: Udo Krause

There are many conventions Mathematica flouts. There would be no point to Mathematica if it was the same as FORTRAN. There are conventions for many kinds of tools, but a chainsaw doesn't follow the conventions for handsaw construction. And, of course, if you expect a chainsaw to behave like a handsaw you're likely to injure yourself.

POSTED BY: John Doty
Posted 10 years ago

Whatever floating point model Mathematica is using, the following should never happen:

In[16]:= 1. > 1. - 10.^-14

Out[16]= False

In[17]:= 0. > -10.^-14

Out[17]= True

To be clear, we are well within the numerical precision of MachinePrecision numbers (15.9546 on my machine) when doing the subtraction of 1. on both sides.

POSTED BY: Berthold Bäuml

Whatever floating point model Mathematica is using, the following should never happen:

It happens. Get over it. That you find this a problem indicates that you're fighting the tool, not using it. I've used Mathematica for a quarter of a century and I've never found its model of approximate arithmetic particularly troublesome. All such models have their particular problems. Don't confuse "familiar and conventional" with "correct and useful". Indeed, if you use explicit precision arithmetic or exact numbers, Mathematica has more capability than other systems here. Mathematica exists to solve problems, not to conform to your opinions.

POSTED BY: John Doty

Consider reading the related discussion Why is (-1.)^2. a Complex Number?

POSTED BY: Udo Krause

You can also use the ComputerArithmetic package

In[21]:= Needs["ComputerArithmetic`"]

In[25]:= Ulp[0.]
Out[25]= 2.22507*10^-308

In[42]:= Ulp[ComputerNumber[0]]
Out[42]= (an unevaluated If expression involving $MinMachineNumber and ComputerArithmetic`Private` symbols; Ulp does not return a plain number for ComputerNumber[0])

In[43]:= Ulp[1. - 10.^(-14)]
Out[43]= 1.11022*10^-16

In[44]:= Ulp[-10.^(-14)]
Out[44]= 1.57772*10^-30

In[45]:= Ulp[1.]
Out[45]= 1.11022*10^-16

In[31]:= Arithmetic[]
Out[31]= {4, 10, RoundingRule -> RoundToEven, 
 ExponentRange -> {-50, 50}, MixedMode -> False, IdealDivide -> False,
  IdealDivision -> False}

In[37]:= ComputerNumber[1] - ComputerNumber[10^(-14)] < ComputerNumber[1]
Out[37]= False

In[38]:= ComputerNumber[1] - ComputerNumber[10^(-14)] -  ComputerNumber[1] < ComputerNumber[1] - ComputerNumber[1]
Out[38]= False

In[36]:= -ComputerNumber[10^(-14)] < ComputerNumber[0]
Out[36]= True

In[41]:= ComputerNumber[0] == ComputerNumber[1.] - ComputerNumber[1.]
Out[41]= True

Zero is a special number because its floating point representation is exact.

In[48]:= Ulp[-10.^(-14)]
Out[48]= 1.57772*10^-30

In[49]:= Ulp[1. - 10.^(-14)]
Out[49]= 1.11022*10^-16

In[50]:= Ulp[10^28 - 10.^(-14)]
Out[50]= 2.19902*10^12

The difference between two consecutive machine numbers grows with the magnitude of the numbers, and so numbers around 1. are not the same as 1. + (numbers around 0.) in numerics, in contrast to algebra.

POSTED BY: Udo Krause

As a rough rule of thumb (working knowledge): keep two orders of magnitude away from $MachineEpsilon:

In[58]:= 1. < 1. + 2.22045 10^(-14)
Out[58]= True

In[59]:= 1. < 1. + 1. 10^(-14)
Out[59]= False

In[56]:= 1. < 1. + 2.22507*10.^(-15)
Out[56]= False

In[57]:= $MachineEpsilon
Out[57]= 2.22045*10^-16

and zero always works:

In[60]:= 0. < 2.22507*10.^(-308)
Out[60]= True

In[61]:= 0. < 2.22507*10.^(-3080)
Out[61]= True

and do not compare numerical relationships around 0. with numerical relationships around 1.

POSTED BY: Udo Krause

To expand, Mathematica's approximate numbers do not represent a "floating point model" in the sense that IEEE 754 numbers do. IEEE 754 is explicitly a specification for arithmetic using a defined subset of rational numbers. That's fine as the specification of a floating point hardware unit, but it's not really a model of physical quantities.

Consider that my outside thermometer reads 31.4 at this moment. It looks like a rational number but the physical reality isn't really any sort of mathematical number. The number is just a convenient connection from the air to the mathematics. Another thermometer in the same place would probably yield a slightly different number. When you ask for temperature at high precision, things get fuzzy: real physical systems are never exactly in thermal equilibrium and even defining exactly what the system is can get very tricky. In statistical mechanics, the temperature of a finite physical system necessarily fluctuates. No two thermometers are identical. This is the normal state of affairs for physical quantities in general.

Most programming systems simply gloss over this distinction. You get a floating point implementation that's rigorous in its behavior from the point of view of a floating point implementor, but how it models reality is up to you. Mathematica, the invention of a physicist, goes a step or two farther toward modeling the way physical quantities work. It has a fast implementation layered atop your machine's native floating point, and a more rigorous implementation for greater precision and better tracking of precision.
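
As a small sketch of that precision tracking (the exact printed precisions may differ slightly):

(* arbitrary-precision numbers carry a precision and propagate it through arithmetic *)
x = N[1/3, 30];
Precision[x]       (* 30. *)
Precision[x^2]     (* slightly under 30: the relative error roughly doubles *)
Precision[0.5 x]   (* MachinePrecision: mixing in a machine number drops to the fast layer *)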

You should not think of approximate numbers in Mathematica as floating point, although in some cases floating point is involved in their implementation. Approximate numbers are approximate numbers. But, if you want rigorous calculations over a subset of rationals, Mathematica gives you a much larger subset and greater mathematical rigor than IEEE 754 does.

For real physical rigor, you should explicitly maintain a probability distribution for every quantity and do arithmetic via Bayes' theorem. I have colleagues who have done that (not in Mathematica) for tricky physical calculations. Of course, in practice you can only do this approximately, and it scales horribly. It's not suitable as a model for quantities in a general-purpose system. Mathematica's approach is a good compromise.

POSTED BY: John Doty