What might be the effect of the precision of initial conditions on the precision of ODE solution?

Posted 1 month ago

Hello,

I am trying to solve a large system of first-order ODEs (ca. 480 ODEs) with the default automatic NDSolve method, using WorkingPrecision -> 50. With the default PrecisionGoal (half the WorkingPrecision), the precision of the solution is then about 25. Due to excessive memory demands I am forced to decompose the calculation into several stages. Each new stage begins with initial conditions taken equal to the final numerical solution of the previous stage. The latter has a precision of 25 digits, which seems to be the reason why (in the new stage) NDSolve complains that the precision of the ODEs is less than the WorkingPrecision. My question is: should I worry about these warning messages? In principle, in any subsequent stage the precision of the solution will not be greater than 25 anyway. Is there any reason to expect the precision of the solution to deteriorate below 25 if it is only the initial values that have precision smaller than WorkingPrecision -> 50?
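To illustrate why NDSolve complains: in Mathematica's significance arithmetic, combining a 25-digit bignum with 50-digit numbers drags the precision of the result down to about 25. A minimal illustration (not my actual system):

```mathematica
x25 = N[1/3, 25];           (* a value carrying only 25 significant digits *)
Precision[x25]              (* 25. *)
Precision[N[Pi, 50] + x25]  (* about 25: the low-precision operand dominates *)
```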

Lesław

POSTED BY: Leslaw Bieniasz
4 Replies

The following returns a number whose FullForm is identical to that of N[Pi, 50]:

file = FileNameJoin[{$TemporaryDirectory, "number.m"}];
Import[Export[file, N[Pi, 50]]]
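One can verify the round trip directly (a quick check, assuming write access to $TemporaryDirectory; the .m extension makes Export use the "Package" format, which writes the number in InputForm):

```mathematica
num = N[Pi, 50];
file = FileNameJoin[{$TemporaryDirectory, "number.m"}];
Export[file, num];
Import[file] === num  (* True: all 50 digits survive the round trip *)
```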

However, your recent comments reminded me of this issue raised on StackExchange with the Export[] and Import[] of NDSolve[] solutions:

https://mathematica.stackexchange.com/questions/295837/precision-of-interpolation-function-after-exporting-and-importing/

POSTED BY: Michael Rogers

The following simple code (solving just one ODE) illustrates my approach:

wprec = 50;
tmax1 = 1;
tmax2 = 10;
ivpsol1 = NDSolve[{u'[t] + u[t]^2 == 0, u[0] == 1}, {u[t]}, {t, 0, tmax1},
   WorkingPrecision -> wprec, PrecisionGoal -> wprec/2,
   MaxSteps -> Infinity, InterpolationOrder -> All];
unum1[t_] := Evaluate[First[u[t] /. ivpsol1]];
umax1 = unum1[tmax1]
(* 0.49999999999999999999999936808923840799082795395903 *)
Export["Z:\\umax1.txt", umax1, "List"];
umax1new = First[SetPrecision[Import["Z:\\umax1.txt", "List"], wprec]]
(* 0.49999999999999999999999936808923840799082795395903 *)
ivpsol2 = NDSolve[{u'[t] + u[t]^2 == 0, u[tmax1] == umax1new}, {u[t]}, {t, tmax1, tmax2},
   WorkingPrecision -> wprec, PrecisionGoal -> wprec/2,
   MaxSteps -> Infinity, InterpolationOrder -> All];
unum2[t_] := Evaluate[First[u[t] /. ivpsol2]];
unum2[tmax1]
(* 0.4999999999999999999999993680892384080 *)

In this simple case I don't get warnings indicating that the precision of my ODE is smaller than the WorkingPrecision during the second call of NDSolve. There also seems to be no problem with preserving all 50 significant digits of umax1new during Export/Import. However, the InterpolatingFunction object returned by the second NDSolve call does not reproduce all digits of the initial condition (umax1new), although more than the 25 requested digits are reproduced.

By the way, I observe that for this example NDSolve successfully maintains the requested precision of the solution for t up to about 1 (the exact value of umax1new is 0.5), but for larger t the solution becomes less and less accurate, regardless of whether I restart the calculations or not. The adaptive step-selection mechanism does not seem to work properly for larger t.
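Since the exact solution of u'[t] + u[t]^2 == 0 with u[0] == 1 is u[t] = 1/(1 + t), the actual accuracy can be checked directly. A sketch, reusing unum2 from the code above:

```mathematica
uexact[t_] := 1/(1 + t);  (* exact solution of u' + u^2 == 0, u[0] == 1 *)
(* correct significant digits of the second-stage solution at a few points *)
Table[{t, -Log10[Abs[(unum2[t] - uexact[t])/uexact[t]]]}, {t, {2, 5, 10}}]
```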

Leslaw

POSTED BY: Leslaw Bieniasz

Thanks for your comment. It may be that I am doing something wrong when saving and reading the solutions to a file between stages. How can I export/import these data so that the 50-digit precision of the numbers is preserved (independently of the solution precision of 25)? I use the Export and Import commands with the "List" format. Using SetPrecision does not help.

By the way, things would be much easier if the InterpolatingFunction object could optionally be allocated on disk during the calculations. Maybe someone can suggest this to Wolfram? My calculations are not very time-consuming, but they take a lot of memory, reaching 50-60 GB, whereas the default available space on a supercomputing cluster is only 3.85 GB. Requesting additional space generates enormous costs.

Leslaw

POSTED BY: Leslaw Bieniasz

From your description, I assume it's loss of precision from evaluating the boundary conditions to be used as initial conditions for the next stage. This loss comes from an estimate of the maximum roundoff error propagated through the computation. It is an estimated bound, not the actual error.

Setting $MinPrecision and $MaxPrecision should suppress the message. If your BCs are numerically ill-conditioned, it won't fix that; it will probably just hide it (caveat).

Block[{$MinPrecision = 50, $MaxPrecision = 50}, BCs /. currentstate]

(There seem to be two distinct kinds of precision being discussed: the floating-point precision of the numbers used by the kernel, and the precision (or accuracy) of the solution. Whenever I do what is described, the solution at the end of a stage always has a floating-point precision of 50, no matter what the accuracy of the solution happens to be. In fact, NDSolve[] cannot know the accuracy of the solution; it knows only the format of the numbers (50-digit bignums) in its argument. So NDSolve[] is not complaining about the solution being inaccurate. It's complaining about some of the numbers having a bignum format of fewer than 50 digits.)
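A self-contained sketch of the idea (the symbols currentstate and BCs here are hypothetical stand-ins for your stage-boundary data; x plays the role of one state variable):

```mathematica
currentstate = {x -> N[1/3, 25]};  (* 25-digit value from the previous stage *)
BCs = {x + x^2};                   (* expressions evaluated at the stage boundary *)
(* inside the Block all arithmetic runs at fixed 50-digit precision,
   so the results have the bignum format NDSolve expects *)
padded = Block[{$MinPrecision = 50, $MaxPrecision = 50}, BCs /. currentstate];
Precision[padded]                  (* 50. *)
```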

POSTED BY: Michael Rogers