Is there any method to extract interpolation formula from NDSolve ?

Posted 20 days ago

Hi, I am trying to use NDSolve to solve a large system of first-order ODEs (an initial value problem). As I need highly accurate solutions, I declare InterpolationOrder -> All. It would help me to know exactly the interpolation formulae (together with the appropriate numerical coefficients) used by NDSolve (or by the interpolating function object that it produces). Is there any method to extract this information?

One more question: the interpolating function object returned by NDSolve is, in my case, very big, taking several gigabytes of memory, which forces me to use several nodes of a supercomputing cluster. Is there any method to reduce its size without compromising accuracy?

Leslaw.

POSTED BY: Leslaw Bieniasz
11 Replies

This is what I meant by the conflict. Whether you get dense output or Hermite interpolation corresponds to InterpolationOrder -> All or InterpolationOrder -> Automatic, respectively. It does not depend on the option specification of Method (except that variable-order methods sometimes use Hermite interpolation for low-order steps). Nor does an integration method use interpolation (at least not LSODA, IDA, or the Runge-Kutta methods). So specifying InterpolationOrder -> All causes NDSolve[] to construct a memory-intensive, dense-output interpolating function, which conflicts with wanting a small one that stores just the function values at the steps. InterpolationOrder -> Automatic stores just the function and derivative values (up to the order of the ODE).
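To see the trade-off concretely, here is a small comparison (a sketch; the exact sizes depend on the problem and the method):

```mathematica
(* the same ODE solved with dense output and with the default interpolation *)
solAll = NDSolveValue[{x'[t] == x[t], x[0] == 1}, x, {t, 0, 10},
   InterpolationOrder -> All];
solAuto = NDSolveValue[{x'[t] == x[t], x[0] == 1}, x, {t, 0, 10}];
{ByteCount[solAll], ByteCount[solAuto]}
(* the dense-output object is noticeably larger *)
```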

If you want just the steps, with no extra use of memory, here is a way:

{ifdata} = Last@Reap[
    Sow[{{0.}, 1.}, "Steps"]; (* seed with the initial condition *)
    (* {} and {t, 10, 10}: integrate without storing a solution *)
    NDSolve[{x'[t] == x[t], x[0] == 1}, {}, {t, 10, 10},
      StepMonitor :> Sow[{{t}, x[t]}, "Steps"]],
    "Steps"];
solFN = Interpolation[ifdata]

It will be accurate at the steps and moderately accurate between them.

Table[(solFN[t] - Exp[t]) / Exp[t],
  {t, solFN["Grid"]}] // MinMax
(*  {0., 1.70507*10^-7}  *)

Table[(solFN[t] - Exp[t]) / Exp[t],
 {t, MovingAverage[solFN["Grid"], 2]}] // MinMax
(*  {-0.0000132379, 9.5433*10^-6}  *)

If you're using a higher-order method that can take large steps, then the difference in accuracy is greater:

{ifdata} = Last@Reap[
    Sow[{{0.}, 1.}, "Steps"];
    sol = NDSolve[{x'[t] == x[t], x[0] == 1}, x, {t, 10, 10},
      Method -> "Extrapolation", 
      StepMonitor :> Sow[{{t}, x[t]}, "Steps"]],
    "Steps"];
solFN = Interpolation[ifdata]

Table[(solFN[t] - Exp[t]) / Exp[t],
  {t, solFN["Grid"]}] // MinMax
(*  {-2.92009*10^-10, 0.}  *)

Table[(solFN[t] - Exp[t]) / Exp[t],
 {t, MovingAverage[solFN["Grid"], 2]}] // MinMax
(*  {-0.025472, 0.00819171}  *)

As for the size of the solution, the time grid plus the interpolating coefficients (whose number is the average order plus one, maybe ~8 for the Automatic method) times the dimension (~200, you said) times the number of steps times 8 bytes per float probably accounts for most of the size. There are also lists giving the structure of the interpolation per step (the method plus indices into the interpolation data array), which add 2-3 times 8 bytes per step. That comes to roughly 19 KB per step, more if the order is higher. I'm not sure about variable-order methods, which might mean less array packing and in turn more overhead to store the data.
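You can check this estimate on an actual solution by dividing the total size by the number of steps ("Grid" is a standard InterpolatingFunction property):

```mathematica
ifn = NDSolveValue[{x'[t] == x[t], x[0] == 1}, x, {t, 0, 10}];
steps = Length[ifn["Grid"]];      (* number of time steps taken *)
N[ByteCount[ifn]/steps]           (* average bytes stored per step *)
```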

POSTED BY: Michael Rogers

In the Wolfram Function Repository there are utilities for this precise purpose: InterpolatingFunctionData, InterpolatingFunctionToPiecewise, and more.

https://reference.wolfram.com/language/tutorial/NDSolvePackages.html.
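For instance, something along these lines (a sketch; I am assuming the calling convention takes the interpolating function and the variable — check each function's repository page):

```mathematica
ifn = NDSolveValue[{x'[t] == x[t], x[0] == 1}, x, {t, 0, 2}];
(* an explicit Piecewise polynomial representation of the interpolant;
   the function is fetched from the Function Repository on first use *)
pw = ResourceFunction["InterpolatingFunctionToPiecewise"][ifn, t]
```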

POSTED BY: Gianluca Gorni

An example:

POSTED BY: Mariusz Iwaniuk

Maybe you can compress the data:
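A minimal sketch of that idea with the built-in Compress/Uncompress (the compressed string can also be written to disk with Put or Export):

```mathematica
ifn = NDSolveValue[{x'[t] == x[t], x[0] == 1}, x, {t, 0, 10}];
packed = Compress[ifn];            (* compact string representation *)
{ByteCount[ifn], ByteCount[packed]}
restored = Uncompress[packed];     (* lossless round trip *)
restored[5.] == ifn[5.]
```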

POSTED BY: Mariusz Iwaniuk

The univariate interpolations produced by NDSolve[] use Hermite interpolation, Chebyshev series, or local series for each step. I know of no case where the "Spline" or other method is used. Aside from the interpolation coordinates, the data stored for interpolation consists of function and derivative values or coefficients. This data constitutes the bulk of the InterpolatingFunction and probably cannot be made smaller without losing accuracy. One can compute the interpolant at each step from this data. The resource function @Gianluca Gorni referred to computes the interpolant from the values of the interpolating function. Since they are polynomials, you could do this yourself, too. I would expect small round-off errors now and then. The errors should be negligible, unless you need the exact interpolating formula.

Aside from numeric data, there is a very small amount of bookkeeping information, and when there are series interpolants, there is a modestly sized list that indicates the interpolating method for each step.

POSTED BY: Michael Rogers

Is your solution a vector-valued interpolating function of one variable or many scalar-valued interpolating functions of one variable? Or mixture of each?

POSTED BY: Michael Rogers

Thank you for the suggestions. The compression can be applied only after the large interpolating function object has been completely created, so one cannot avoid the large memory demands during the NDSolve operation. I wonder if it would be possible to force NDSolve to create an object containing only the minimum necessary information (say, the discrete time values with the corresponding solution values, but no interpolant information)? This would probably reduce the size considerably. Could one later complete such an object with interpolants, without running NDSolve again? Currently my object (arising from a system of about 200 ODEs) occupies ca. 12-15 gigabytes and must be distributed over several supercomputing cluster nodes, but I expect the size to increase by a factor of 10 or so in the final calculations. Hence, the amount of memory needed presents a problem. Or perhaps one could somehow declare the object to be stored on disk rather than in operational memory? Is there any such alternative?

Leslaw

POSTED BY: Leslaw Bieniasz

I wonder if it would be possible to force NDSolve to create such an object containing only the minimum necessary information (say, discrete time values with corresponding solution values, but no interpolant information)?

This is very nearly what InterpolationOrder -> Automatic does. NDSolve[] will also store the derivative values it has already calculated. There are ways around this, but discarding such information makes an even less accurate interpolation. The InterpolationOrder does not change the values at the steps, and therefore it does not affect the accuracy at the steps. But it does affect interpolated values between the steps. Once you throw away this information, you cannot recover it, except by re-integrating the ODE.
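You can inspect what the default setting stores directly on the interpolating function (these are documented InterpolatingFunction properties):

```mathematica
ifn = NDSolveValue[{x'[t] == x[t], x[0] == 1}, x, {t, 0, 10}];
ifn["InterpolationOrder"]                    (* order used between steps *)
{Length[ifn["Grid"]], Length[ifn["ValuesOnGrid"]]}  (* values kept per step *)
```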

However, you said in the opening question that you need a highly accurate interpolation and use InterpolationOrder -> All. All of the above seems in conflict with the goal of high accuracy.

POSTED BY: Michael Rogers

Whether there is a conflict or not depends, in my opinion, on how NDSolve performs the interpolation. If the interpolation employs only the stored values of the solutions at the discrete time grid used for integration (and possibly some derivatives), and the interpolation is of Hermite type, as it seems to be, then I don't see any need to store the interpolation polynomial coefficients or any additional information, as they can always be recreated from the stored solutions (and derivatives). But some ODE solution methods contain so-called "dense output" formulae for computing solutions between the grid nodes, and it may be that such formulae require some additional information. I don't know if NDSolve uses such methods. Yes, I need high accuracy, but I also cannot use up all the computational resources of our computing center. The size of the object created by NDSolve appears excessively large, in my opinion, especially compared to the computing time, which is relatively small (ca. 1 hour).

Lesław

POSTED BY: Leslaw Bieniasz

How do I call this PropertyList[] function in the case of solving a system of ODEs? As I wrote, I solve a system, and in that case, after calling

PropertyList[y/.sol]

where y is a list of the dependent unknowns as functions of the independent variable t, say y = {y1[t], y2[t], etc.}, I get a message that the object does not have properties. Replacing y by y1 or y2, etc., or by y1[t], y2[t], etc., does not help.

I am able, however, to extract the chunk of the interpolating function object corresponding to one unknown; for example, the chunk for y1 is sol[[1]][[1]], and I can calculate the values of y1 from it correctly. But even for such a chunk, PropertyList[] does not seem to work. It seems that sol is just a list of individual interpolating function objects, one for each unknown in the ODE system; I don't see any way to define or modify this structure myself - it is created automatically. The question then is where the information about the discrete t-grid coordinates is stored: is it stored only once (where?), or is it repeated in every sub-object (chunk) for every unknown?

Lesław

POSTED BY: Leslaw Bieniasz

Try it on one (1) of the functions, like y1 with no t, not on a list of functions. An InterpolatingFunction[...] is an object with properties, but InterpolatingFunction[...][t], with the t, is not. PropertyList on a list of InterpolatingFunction[] objects yields $Failed with no explanation.

Note that the property list is the same for all valid InterpolatingFunction[] objects. So PropertyList[Interpolation@Range@5] will give you the list of properties.

Each InterpolatingFunction[] is self-contained. Each contains a t-grid. If they each came from the same NDSolve[] call, then each has the same t-grid; it is repeated and not shared. y /. sol /. t -> "Grid" would probably show you the grids.
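Putting that together for a small system (a sketch, using hypothetical unknowns y1 and y2):

```mathematica
sol = NDSolve[{y1'[t] == y2[t], y2'[t] == -y1[t],
    y1[0] == 0, y2[0] == 1}, {y1, y2}, {t, 0, 10}];
ifn = First[y1 /. sol];   (* a bare InterpolatingFunction, no [t] *)
PropertyList[ifn]         (* works on the object itself *)
Short[ifn["Grid"]]        (* the t-grid stored inside this object *)
```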

POSTED BY: Michael Rogers