
What might be the effect of the precision of initial conditions on the precision of ODE solution?

Posted 3 days ago

Hello,

I am trying to solve a large system of first-order ODEs (ca. 480 equations) with the default automatic NDSolve method, using WorkingPrecision -> 50. This yields a solution precision of about 25 digits. Because of excessive memory demands I am forced to split the calculation into several stages, where each new stage starts from initial conditions equal to the final numerical solution of the previous stage. Those values have a precision of 25 digits, which seems to be why, in the new stage, NDSolve complains that the precision of the ODEs is less than the WorkingPrecision. My question is: should I worry about these warning messages? In principle, the precision of the solution in any later stage will not exceed 25 anyway. Is there any reason to expect the solution precision to deteriorate below 25 if it is only the initial values that have precision smaller than WorkingPrecision -> 50?
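For concreteness, here is a toy version of that staging pattern (a single ODE standing in for the 480-equation system; the names are illustrative only). Reducing the end-of-stage value to 25 digits and feeding it back as an initial condition provokes the NDSolve::precw message:

sol1 = NDSolve[{y'[t] == -y[t], y[0] == 1}, y, {t, 0, 1}, WorkingPrecision -> 50];
ic = SetPrecision[(y /. First[sol1])[1], 25]; (* stand-in for a 25-digit end-of-stage value *)
sol2 = NDSolve[{y'[t] == -y[t], y[1] == ic}, y, {t, 1, 2}, WorkingPrecision -> 50]; (* issues NDSolve::precw *)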

Lesław

POSTED BY: Leslaw Bieniasz
3 Replies

The following returns a number whose FullForm is identical to that of N[Pi, 50]:

file = FileNameJoin[{$TemporaryDirectory, "number.m"}];
Import[Export[file, N[Pi, 50]]] (* Export returns the file path, which Import then reads back *)
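A quick follow-up check (added for illustration; it relies on the FullForm claim above) confirms that the 50-digit format survives the round trip:

Precision[Import[file]] (* 50., the same as Precision[N[Pi, 50]] *)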

However, your recent comments reminded me of this issue raised on StackExchange with the Export[] and Import[] of NDSolve[] solutions:

https://mathematica.stackexchange.com/questions/295837/precision-of-interpolation-function-after-exporting-and-importing/

POSTED BY: Michael Rogers

Thanks for your comment. It may be that I am doing something wrong when saving and reading the solutions between stages in a file. How can I export/import these data so that the 50-digit precision of the numbers is preserved (independently of the solution precision of 25)? I use the Export and Import commands with the "List" format. Using SetPrecision does not help.
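For illustration, here is one possible workaround (an assumption, not something verified in this thread): Put and Get write and read complete Wolfram Language expressions in InputForm, so 50-digit bignums should survive the round trip without conversion to plain text lines:

state = N[{Pi, E, Sqrt[2]}, 50]; (* stand-in for the end-of-stage values *)
file = FileNameJoin[{$TemporaryDirectory, "state.wl"}];
Put[state, file];
restored = Get[file];
Precision /@ restored (* {50., 50., 50.} *)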

By the way, things would be much easier if the InterpolatingFunction object could optionally be stored on disk during the calculations. Maybe someone can suggest this to Wolfram? My calculations are not very time-consuming, but they take a lot of memory, reaching 50-60 GB, whereas the default available space on a supercomputing cluster is only 3.85 GB. Requesting additional space generates enormous costs.

Leslaw

POSTED BY: Leslaw Bieniasz

From your description, I assume it's loss of precision from evaluating the boundary conditions to be used as initial conditions for the next stage. This loss comes from an estimate of the maximum roundoff error propagated through the computation. It is an estimated bound, not the actual error.
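A generic illustration (unrelated to the poster's equations) of how that tracked bound behaves: subtracting two nearly equal 50-digit numbers drops the estimated precision by roughly the number of cancelled digits, even though the surviving digits may be perfectly good:

a = N[10^10 + Pi, 50];
b = N[10^10, 50];
Precision[a - b] (* about 40: roughly ten digits charged to cancellation *)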

Setting $MinPrecision and $MaxPrecision should suppress the message. One caveat: if your BCs are numerically ill-conditioned, this won't cure the ill-conditioning; it will probably just hide it.

Block[{$MinPrecision = 50, $MaxPrecision = 50}, BCs /. currentstate] (* pins all intermediate arithmetic to exactly 50 digits *)

(There seem to be two distinct kinds of precision being discussed: the floating-point precision of the numbers used by the kernel, and the precision (or accuracy) of the solution. Whenever I do what is described, the solution at the end of a stage always has a floating-point precision of 50, no matter what the accuracy of the solution happens to be. In fact, NDSolve[] cannot know the precision of the solution; it knows only the format of the numbers (50-digit bignums) in its argument. So NDSolve[] is not complaining about the solution being inaccurate. It's complaining about some of the numbers having a bignum format of fewer than 50 digits.)
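To illustrate the distinction (a generic example, not from the thread): SetPrecision changes only the bignum format, not the underlying accuracy, which is exactly why NDSolve[] cannot infer the latter:

u = SetPrecision[1.3333333333333333, 50]; (* pad a machine number to 50-digit format *)
Precision[u] (* 50.: the format NDSolve[] checks against WorkingPrecision *)
u - 4/3 (* nonzero, of order 10^-16: the true error is unchanged *)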

POSTED BY: Michael Rogers