Change the default number of digits to display in the output?

Posted 5 years ago

Here is some disturbing behavior that caused me a lot of trouble and was difficult to track down.

In[1]:= 50.01 + 50.03
Out[1]= 100.04

In[2]:= 500.01 + 500.03
Out[2]= 1000.04

In[3]:= 5000.01 + 5000.03
Out[3]= 10000.

Really? Definitely don't use Mathematica for your accounting tasks. And it's not a case of the lost digits actually being there but hidden when printing. Let's try looking at 1000 digits just to be safe.

In[4]:= N[5000.01 + 5000.03, 1000]
Out[4]= 10000.

So we have to change the input precision, a problem I discovered long ago and have learned to live with to some extent. Sometimes it's easier to just use the Windows Calculator. I wish there were a way to make Mathematica extend the precision of all numbers automatically if you wanted it to.

But how much precision do we need to get the right answer? You might think: OK, in the last line the inputs have 6 significant digits and the answer has 7, so we should just have to add one extra 0 to the inputs to give them 7 digits.

In[5]:= 5000.010 + 5000.030
Out[5]= 10000.

But no. Let's try adding a lot more.

In[6]:= 5000.01000000000000 + 5000.0300000000000
Out[6]= 10000.

Still not enough. We have to add a full 12 zeros on the end just to get the correct answer to 2 decimal places!

In[7]:= 5000.01000000000000 + 5000.03000000000000
Out[7]= 10000.0400000000000

I never liked the way Mathematica handled precision, and this makes me like it even less.

POSTED BY: Doug Lerner
25 Replies

The questions in this thread have been answered more than sufficiently. Please keep discussion contained within the bounds of the original questions and Wolfram technologies. Please make sure you know the rules: https://wolfr.am/READ-1ST

POSTED BY: Moderation Team

Try

   N[Rationalize[1.0000000000000001`20, 0], 1000]

Out[144]= 1.0000000000000001000000000000000000000000000000000000000000000000000000... (the full 1000-digit result: 1.0000000000000001 followed by zeros out to the thousandth digit)

POSTED BY: Marvin Ray Burns

Doug, it's not quite clear to me what your aim is with these posts.

Do you have a practical problem that you need help solving? If yes, can you clarify what you are trying to accomplish that you need help with?

Or are you trying to argue that fundamental behaviours of the Wolfram Language should be changed? When people do this, they often have only a few very narrow use cases in mind, but Mathematica must be usable for many different tasks, and they do not immediately see that the proposed changes would cause countless inconveniences (or worse), not to mention break most existing programs.

Unless I say otherwise, 1.000001 is an exact number.

Well, then you're not speaking the Wolfram Language, but another language. The correct way to say that is 1000001/1000000. But you already knew that, so what is your point?

But what does it actually do?

Daniel explained what it does in detail, including that MachinePrecision is treated as the lowest. Do you have any questions regarding that explanation?

If that's not the definition of what it does then change the definition.

You seem to want SetPrecision, which you are already aware of. Why don't you just use it?


It seems to me that you may not realize what machine precision is for, or why it behaves so differently from arbitrary-precision arithmetic in Mathematica. Its purpose is performance. "Machine precision" is not about a certain precision but about using the same floating-point format that is natively supported by the CPU. It also comes with tradeoffs, such as the lack of precision tracking; this is why it necessarily behaves differently.
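To make the performance point concrete, here is a minimal sketch (the exact timings will of course vary by machine):

data = RandomReal[{0, 1}, 10^6];               (* one million machine reals *)
AbsoluteTiming[Total[data]]                    (* fast: native CPU floating point *)
AbsoluteTiming[Total[SetPrecision[data, 20]]]  (* much slower: software bignums with precision tracking *)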

You are setting the precision progressively higher

This is not the case. As explained in other posts, MachinePrecision is not just a precision of 16 (or rather $MachinePrecision) digits. It is qualitatively different.
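One quick way to see that qualitative difference (a small illustration, not from the earlier posts):

Precision[1.0]           (* MachinePrecision *)
Precision[1.0`10]        (* 10. -- a bignum, even though 10 < $MachinePrecision *)
MachineNumberQ[1.0]      (* True *)
MachineNumberQ[1.0`10]   (* False *)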


Well, I know it's not, because I've seen it done differently, and better, in other software.

I do not see a proposal for a better system for arbitrary precision arithmetic in your posts, nor a serious comparison with an existing alternative and an analysis of pros and cons.

But let us come back to something concrete and practical. What is it that you are trying to accomplish with Mathematica and cannot? What specific task can be done in another system (which?) that cannot be done in Mathematica reasonably conveniently?

Your original complaint was that machine precision numbers are only displayed with 6 digits of precision by default. Many easy solutions have been offered. Do you have any other practical problems?

Posted 5 years ago

Unless I say otherwise, 1.000001 is an exact number. I am well aware that by convention Mathematica doesn't treat it that way, so I don't want to get into that discussion as well. But I just find it strange that, for instance:

In[1]:= N[1.00000000000000010, 1000]
Out[1]= 1.0000000000000001

with the remaining 983 0's on the end just assumed. And yet it doesn't assume there are any zeros after the number when I write it.

I just can't understand why nobody seems to see what is staring me in the face. Let's look at the common example of Pi:

In[2]:= N[Pi, 15]
Out[2]= 3.14159265358979

In[3]:= N[Pi, 16]
Out[3]= 3.141592653589793

In[4]:= N[Pi, 17]
Out[4]= 3.1415926535897932

In[5]:= N[Pi, 18]
Out[5]= 3.14159265358979324

In[6]:= N[Pi, 19]
Out[6]= 3.141592653589793238

In[7]:= N[Pi, 20]
Out[7]= 3.1415926535897932385

Beautiful. Exactly what I would want it to do. Exactly what I would expect it to do. Exactly what I want to know. Here is something else I would like it to do:

N[1.0000000000000001, 15]
1.00000000000000

N[1.0000000000000001, 16]
1.000000000000000

N[1.0000000000000001, 17]
1.0000000000000001

N[1.0000000000000001, 18]
1.00000000000000010

N[1.0000000000000001, 19]
1.000000000000000100

N[1.0000000000000001, 20]
1.0000000000000001000

But what does it actually do?

In[8]:= N[1.0000000000000001, 15]
Out[8]= 1.

In[9]:= N[1.0000000000000001, 16]
Out[9]= 1.

In[10]:= N[1.0000000000000001, 17]
Out[10]= 1.

In[11]:= N[1.0000000000000001, 18]
Out[11]= 1.

In[12]:= N[1.0000000000000001, 19]
Out[12]= 1.

In[13]:= N[1.0000000000000001, 20]
Out[13]= 1.

In[14]:= N[1.0000000000000001, 1000]
Out[14]= 1.

If that's not the definition of what it does then change the definition.

While I'm at it, what's wrong with this picture?

In[15]:= SetPrecision[10000.04, 7]
Out[15]= 10000.04

In[16]:= SetPrecision[10000.04, MachinePrecision]
Out[16]= 10000.

In[17]:= SetPrecision[10000.04, 17]
Out[17]= 10000.040000000001

You are setting the precision progressively higher and getting different results at each stage. I understand the extra digits on the last one, but that's not what I'm talking about. I'm talking about the fact that it prints any decimals at all. OK, I've got an explanation of why this happens, but I still don't agree that it's right.

Even though it has been acknowledged that there are stylistic choices here, most people seem to be justifying Mathematica's behavior by implying it's inevitable. Well, I know it's not, because I've seen it done differently, and better, in other software. I just think there's room for improvement here.

POSTED BY: Doug Lerner

Doug,

I think the problem is that the underlying assumption of your previous posts is that

N[1.000001, 1000]

means "give me 1.000001 to 1000 digits of precision". That is not the meaning of "N[]" according to the documentation nor the design of MMA. The N function is for converting exact numbers to floating point numbers with a certain precision. If the number already has a lower precision (such as a machine float) then N does nothing except return the number unchanged. Once the float is returned, it is subject to the same printing rules I discussed in my earlier post.

Another way to look at it is that the printing format is a user preference to help reduce screen clutter. You set your preferences to display only 6 digits, and numbers are then rounded off for display. From my perspective, I do not think it is fair to set the preference to 6 digits and then use the fact that you do not get 10 digits of display as evidence of a bug.

I hope this helps.

Regards,

Neil

POSTED BY: Neil Singer

I feel that the posting that indicates you're not "100% satisfied" still mixes up a couple of things. As Daniel pointed out, there are two types of approximate real numbers in Mathematica: machine reals and arbitrary-precision numbers, a.k.a. bignums. Each type has different display rules. In the factory defaults, machine reals are displayed in rounded format (rounded to six digits, with trailing zeros truncated), while arbitrary-precision numbers are displayed with as many digits as their precision indicates, trailing zeros included.

In neither case do the displayed digits completely represent the internal value, except where the representation and the value happen to coincide. Also, one might take issue with the assertion that machine precision is "16 decimal digits," depending on the emphasis on decimal digits. The number 0.1 is not represented exactly in binary floating point but has to be rounded (see below), so even one decimal digit cannot be represented exactly.

The principal issue, as it seems to me, is that the discrepancy between display and internal value is inherent in displaying in decimal a value stored in binary. The only real solutions are getting a decimal computer or printing output in a binary format such as hex. (Excel tries to get around these kinds of issues through "cosmetic" rounding, which causes other problems; see Kahan 2006, sect. 2.)

As a user I've found the number representation in Mathematica notebooks convenient, once I learned how to understand the output. Hand-held calculators do rounding (at least they used to), so it didn't seem that unusual to me. Six digits meant I could read a medium-long list of numbers. The bignums are apparent by their digits, unless their precision is between one and six.

The documentation for PrintPrecision says:

PrintPrecision is an option for selections that specifies the maximum number of digits used for displaying a machine-precision number.

The "Details" section hints at another workaround:

  • The setting for PrintPrecision is only used when NumberMarks -> False.

One can set these options for the current session or once and for all (which writes the setting into your init.m file); or one can use Style[1/3., PrintPrecision -> 12] (or NumberMarks -> True) to override the defaults for a single output.

SetOptions[$FrontEndSession, NumberMarks -> True] (* for current session only *)
SetOptions[$FrontEnd, NumberMarks -> True]        (* for current & future sessions *)

SetOptions[$FrontEndSession, PrintPrecision -> 12]
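For a single output, the Style override mentioned above looks like this (the outputs shown are what I would expect with factory settings):

Style[1/3., PrintPrecision -> 12]   (* 0.333333333333, for this one output only *)
Style[1/3., NumberMarks -> True]    (* 0.3333333333333333` *)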

But note that this won't relieve one of the impression that computers make arithmetic mistakes. The following will make sense only if you are familiar with the binary64 format of the IEEE 754 floating-point standard:

SetOptions[$FrontEndSession, NumberMarks -> True];
0.1 + 0.2
(*   Out[]=0.30000000000000004`  *)

0.3
(*  Out[]= 0.3`  *)

The discrepancy is due to the way binary floating-point numbers are supposed to work according to the IEEE standard.
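One can inspect the exact binary value behind 0.1 directly, using the Rationalize trick shown earlier in the thread:

Rationalize[0.1, 0]   (* 3602879701896397/36028797018963968 -- the exact stored value *)
N[%, 25]              (* 0.1000000000000000055511151 *)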

The IEEE 754 standard also defines decimal formats that avoid this binary rounding problem. There are decimal machines (IBM POWER6 and System z) and decimal math libraries, such as the Intel® Decimal Floating-Point Math Library. Unfortunately, commonly available machines are binary.

POSTED BY: Michael Rogers
Posted 5 years ago

Thank you for clearing that up. I'm glad my comments got through to someone who seems to have some influence on the direction of Mathematica so that maybe these issues can be considered in future improvements.

As I suggested, I've been using Mathematica for many years and it's one of my core software tools. The things I've been able to do over the years would be unthinkable without it. I suppose I started off this thread being a bit critical, but I hope I'm ending it with some ideas for improvement.

POSTED BY: Doug Lerner
Posted 5 years ago

Thank you for your reply Daniel. I submitted my last response at about the same time and I was obviously referring to the post before yours.

I still do not feel 100% satisfied. I was aware of most of what you said, and I do vaguely remember something about 6 digits from when I was learning Mathematica years ago when I started with version 1.2, which is probably why I said "I never liked the way Mathematica handled precision". But my comment about accounting was facetious.

I will agree that the title "Precision Error" is wrong. But I might defend the title "Precision Display Error", because I think that is what it is. Machine precision on most machines, I believe, is 16 decimal digits. The numbers I am talking about are well below that. I see no reason why decreasing the precision with SetPrecision should override the display precision. You're making the number less precise, but then you display more digits than you do for the more precise original? If you set the precision to greater than machine precision, then OK, it can display more digits. There's no reason SetPrecision shouldn't behave exactly the same way as padding with 0's, as far as the display of the number goes.

And as I just mentioned, the N function is used specifically to display more digits. I understand that does not increase precision, but as even you demonstrated, there is already enough precision to store the full answer. So why do you have to use InputForm to see the whole number when the N function was seemingly made for that very purpose?

POSTED BY: Doug Lerner

I did mention that the default display of six digits is a matter of subjective preference. It predates my arrival at the company by a bit, and at this point I am quite accustomed to it. But I am aware that it did not have to be that way, and there are arguments to be made for doing differently. As people may realize, we are of late extending some of this, what with new formatting forms e.g. DecimalForm and PercentForm, and with different notions of numerical computation as are under the hood in Around. And in a very recent live-broadcast development meeting, I believe the topic of how to more easily change the default displayed digits came up.

I want to go a bit further into a couple of areas. I'll start by noting that machine precision in some sense has two meanings. One is the approximately 16 digits that a machine value has. For this there is $MachinePrecision in the Wolfram Language. In contrast, to denote computations that should take place using machine numbers, there is MachinePrecision (no dollar sign, that is). Numerically it evaluates to $MachinePrecision:

In[28]:= $MachinePrecision

Out[28]= 15.9546

In[29]:= N[MachinePrecision]

Out[29]= 15.9546

Yet it has a very different meaning in terms of how lower precision "infects" higher: for that purpose it can be regarded as -infinity, in that bignums of any precision, even lower than $MachinePrecision, will be coerced to machine numbers when a computation is done using MachinePrecision. This relates to the present discussion as follows. SetPrecision[machinevalue, 10] does not exactly lower the precision. Yes, it is lower than $MachinePrecision, but as it is now a bignum of precision 10 digits, it is regarded as higher than MachinePrecision. (Maybe more to the point, it is a bignum regardless of having precision lower than $MachinePrecision.) As a bignum it is subject to a different default behavior for display of digits.
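A short illustration of that point:

x = SetPrecision[10000.04, 10];
Precision[x]        (* 10. *)
MachineNumberQ[x]   (* False -- a bignum now, so all ten digits are displayed *)
x                   (* 10000.04000 *)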

The other point I will raise is that N is by no means a function for displaying digits. It is a function for obtaining a certain precision from either an exact value, or a value that is numeric but has higher precision than the number of digits in the second argument of N. How many digits get displayed by default then depends on whether the result is a bignum (of any precision) or a machine number; this is quite independent of the fact that the number was produced using N.
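For example:

Precision[N[Pi, 50]]          (* 50. *)
Precision[N[N[Pi, 50], 20]]   (* 20. -- N lowers the precision of a bignum *)
Precision[N[1.0, 20]]         (* MachinePrecision -- N never raises machine precision *)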

POSTED BY: Daniel Lichtblau
Posted 5 years ago

I thought that might be the case, but it seems wrong that setting the precision should override the digit display while the N function does not. If anything, it should be the reverse. Setting precision should only affect the internal representation of the number, not the display, whereas the N function is just about synonymous with "display more digits". Seems like the problem now is the N function.

POSTED BY: Doug Lerner

There are several interrelated misconceptions about what value is being returned, whether or how to obtain a different value, what is printed visually vs. what value is stored, and the like. This has, by and large, been noted in responses by Henrik Schachner, Neil Singer, and perhaps others. I want to spell out a few things.

(1) The inputs are machine precision numbers. A construct like N[value,100] will not change them because N will not raise precision, and machine precision is considered to be the lowest possible precision in the Wolfram Language.

(2) As noted by others, the default precision shown is 6 digits. The comment about not using Mathematica for accounting was perhaps intended as an offhand remark, but regardless it should be noted that there is AccountingForm, for visual purposes (by which I mean it only does formatting; it does not alter values). I will cut and paste the example in question directly, because it may give a better indication of what I mean.

In[11]:= val = 5000.01 + 5000.03

Out[11]= 10000.

In[12]:= AccountingForm[val, {Infinity, 2}]

Out[12]//AccountingForm= \!\(
TagBox[
InterpretationBox["\<\"10000.04\"\>",
10000.04,
AutoDelete->True],
AccountingForm[#, {
DirectedInfinity[1], 2}]& ]\)

Notice that the InterpretationBox gives the formatting of the value, to two places to the right of the decimal point.

(3) SetPrecision can be used to raise precision. One should understand, however, that it might change the value in an unexpected manner. It works by taking the binary representation and padding with binary zeros. Since not all terminating decimal values have exact terminating binary counterparts, the value thus obtained may be different from the input. This can be useful (and I do use SetPrecision in certain circumstances to raise precision), but, for the reason above, it comes with a caveat.
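A concrete instance of that caveat (assuming standard binary64 machine reals):

SetPrecision[0.1, 30]
(* 0.100000000000000005551115123126 -- the binary value of 0.1 padded with binary zeros; not exactly 1/10 *)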

(4) Default display precision for machine reals is 6 digits. It has been this way quite possibly since version 1.0. An alternative would be to show 16 or 17 digits. This could have been done but, on balance, I think the default is nicer. This is of course a subjective judgement. One can always get a better indication of the actual value using InputForm.

In[13]:= InputForm[val]

Out[13]//InputForm=
10000.04

The definitive value is the binary bit pattern, and that can be deduced from RealDigits:

In[14]:= RealDigits[val, 2]

Out[14]= {{1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,
   1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 
  0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0}, 14}

(5) Entering values with increasingly many padded zeros at the right of the decimal point shows a change of behavior at the point where the inputs are no longer of a size that can be machine numbers. These are thus bignums, and subject to the display formatting used for such. It does not restrict to a default of 6 digits but rather shows all significant digits (in the sense of "significance arithmetic").
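The crossover can be seen directly (illustrative, on a machine with standard binary64 reals, where up to 17 typed digits still fit a machine real):

Precision[5000.0300000000000]    (* MachinePrecision -- 17 typed digits, still a machine real *)
Precision[5000.03000000000000]   (* 18. -- one more typed digit produces a bignum *)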

(6) I hope it has become clear from prior responses that the subject heading is in error. There is no "precision error" in the computation claimed to be problematic. The default formatting might not be what was wanted, but by definition it is not possible to have a default that meets all possible needs. My guess is either AccountingForm or InputForm would be what was wanted here.

I hope the above provides some indication of the display-vs-actual-value distinction, the formatting possibilities, and the reasons for the various outputs that were seen and in some cases called into question.

POSTED BY: Daniel Lichtblau

If you explicitly set the precision, it overrides the default display option. This way you can distinguish terms with specific precision. It is rare that you would specify precision less than machine precision. I certainly would not do that for the display reason that you posted. (One example might be if you want to explore the effects of limited precision on a calculation.) Usually you specify the precision to force MMA to use extended precision for a calculation that needs it.

Regards

POSTED BY: Neil Singer
Posted 5 years ago

Wow, good answer. Thanks.

That solves the problem, but I still feel like there are inconsistencies. Like why does 5000.01`7 + 5000.03`7 = 10000.04 with display digits set to 6, but N[5000.01 + 5000.03, 1000] = 10000. using the same settings?

POSTED BY: Doug Lerner

What does "7" even mean

You can specify the precision of a number with the backtick (`) character. For example, if I type 10. I get 10 to machine precision (normally about 16 digits), but if I type 10.`20 I get a floating-point number with 20 digits of precision. You really need to read this tutorial.
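For instance:

Precision[10.]    (* MachinePrecision *)
10.`20            (* 10.000000000000000000 -- a bignum displayed with all 20 digits *)
Precision[%]      (* 20. *)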

POSTED BY: Neil Singer

Doug,

This is a function of your preferences. Go to Preferences and change the default number of digits to display in the output from 6 to 10, or something else. Now it will show 10 digits of precision (while keeping any extra digits hidden). This rounding behavior is only for display and was done to make big expressions with lots of numbers manageable, but if you change your preference, you can have what you want.


Alternatively, you can do this change for only one notebook and leave your global settings alone by going to the options inspector and changing PrintPrecision option for that notebook or executing this one time for a particular notebook:

SetOptions[SelectedNotebook[], PrintPrecision -> 10]

Regards,

Neil

POSTED BY: Neil Singer

So "1000." is the correct answer to 1000 digits, even when it knows internally that the .04 is there?

No, one sees here just the same display behavior:

N[5000.01 + 5000.03, 1000] // FullForm
(*  Out:   10000.04`   *)
POSTED BY: Henrik Schachner
Posted 5 years ago

I never implied a miscalculation; I'm saying misbehavior. 5000.01 + 5000.03 = 10000. is a problem all by itself. Mathematica might know the answer, but that doesn't do the user any good when it gives an answer like that. Sure, you can do tricks and apply functions to get it to spit out the right answer, but 5000.01 + 5000.03 = 10000. is just wrong. It's wrong to 16 digits, which it supposedly is, and it's wrong to 7 digits. And if you don't like that, 5000.010 + 5000.030 = 10000. is even more wrong.

I did not realize that all digits were still there internally. But that just points out another 'flaw'. N[5000.01 + 5000.03, 1000] = 10000. So "10000." is the correct answer to 1000 digits, even when it knows internally that the .04 is there?

POSTED BY: Doug Lerner

This discussion is somewhat hard to understand. Is anyone really thinking that there is a miscalculation?


POSTED BY: Henrik Schachner
Posted 5 years ago

I like the SetPrecision idea, which I hadn't thought of, although it can still be tedious to wrap every expression with SetPrecision. And as the last post suggests, it's not always clear just how many digits you need. The problem as I stated it seems straightforward and easy to put a number on, but it was distilled from a more complicated series of calculations that took me a while to figure out. I guess you could apply SetPrecision to every expression in a notebook with 10 or 100 digits or something, just to be safe and make sure the numbers were being handled the way you intended, but that seems rather crude.

I guess what I'm hinting at is that there is inconsistent behavior here at best and a bug at worst. I mean, I understand computer numerical calculation and how this problem might arise, but Mathematica is supposed to give you consistent results regardless of the underlying hardware. Precision[5000.01] is MachinePrecision and $MachinePrecision is 15.9546. So why do you have to set the precision of 5000.01 to 7 to get the calculation to work when it is already supposed to have a precision of 16? And again, why does 5000.010 not behave the same as 5000.01`7?

POSTED BY: Doug Lerner

...I've been thinking about a way of not having to manually put the precision number in SetPrecision. I found one: turn the numbers into strings, then use StringLength to supply the precision. Is there any other way?

a = "5000.01700000";
b = "5000.00000000";
SetPrecision[(a // ToExpression) + (b // ToExpression), 
 StringLength[a]]

a2 = "50000.01700000";
b2 = "50000.00000010";
SetPrecision[(a2 // ToExpression) + (b2 // ToExpression), 
 StringLength[a2]]

a3 = "500000.017000000";
b3 = "500000.000000100";
SetPrecision[(a3 // ToExpression) + (b3 // ToExpression), 
 StringLength[a3]]


Thanks

POSTED BY: Claudio Chaib

If you insist on copying and pasting data that is used elsewhere, try

 SetPrecision[5000.01 + 5000.03, 7]

(* 10000.04 *)

POSTED BY: Marvin Ray Burns
Posted 5 years ago

I was aware of this method and should have mentioned it. In fact, it's the way I worked around the problem in my notebook. But adding `x on the end of every number is just as tedious as adding 0's, especially when you have a long list of numbers possibly copied from somewhere else.

Another issue with that is why doesn't 5000.01`7 behave the same as 5000.010 or even 5000.0100000? What does the 7 even mean?

POSTED BY: Doug Lerner

You can try

5000.01`7 + 5000.03`7

 (* 10000.04 *)
POSTED BY: Marvin Ray Burns

Yes! This method is perfect! It works for many, many decimal places! Thanks a lot, Marvin, I learned something new!

5000.0000000000000000000000000000001`36 + 5000.0000000000000000000000000000003`36

(* 10000.0000000000000000000000000000004 *)

POSTED BY: Claudio Chaib

I think maybe this is not the perfect solution, because it is visual and also has limitations... but it might slightly improve the display if you use AccountingForm[]:

AccountingForm[5000.01 + 5000.03, 7]
AccountingForm[5000.00000001 + 5000.00000003, 13]
AccountingForm[5000.00000000001 + 5000.00000000003, 16]


...I think maybe there's still another better way...

POSTED BY: Claudio Chaib
