This is what I meant by the conflict. Whether you get dense output or Hermite interpolation corresponds to InterpolationOrder -> All or InterpolationOrder -> Automatic, respectively. It does not depend on the Method option (except that variable-order methods sometimes use Hermite interpolation for low-order steps). Nor does an integration method itself use the interpolation (at least not LSODA, IDA, or the Runge-Kutta methods). So specifying InterpolationOrder -> All causes NDSolve[] to construct a memory-intensive, dense-output interpolating function, which conflicts with wanting a small one that has just the function values at the steps. InterpolationOrder -> Automatic stores just the function and derivative values (up to the order of the ODE).
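To see the difference in memory footprint, here is a minimal sketch comparing the two settings on the same toy problem (I'm not quoting the ByteCount figures, since they depend on the version, method, and tolerances):

ifAll = x /. First@NDSolve[{x'[t] == x[t], x[0] == 1}, x, {t, 0, 10},
    InterpolationOrder -> All];          (* dense output *)
ifAuto = x /. First@NDSolve[{x'[t] == x[t], x[0] == 1}, x, {t, 0, 10},
    InterpolationOrder -> Automatic];    (* Hermite interpolation, the default *)
ByteCount /@ {ifAll, ifAuto}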
If you want just the steps, with no extra use of memory, here is a way:
{ifdata} = Last@Reap[
   Sow[{{0.}, 1.}, "Steps"]; (* record the initial condition; StepMonitor fires only after steps *)
   NDSolve[{x'[t] == x[t], x[0] == 1}, {}, {t, 10, 10},
    (* {} and {t, 10, 10}: integrate from the initial condition at t == 0 up to t == 10
       without NDSolve storing an interpolating function of its own *)
    StepMonitor :> Sow[{{t}, x[t]}, "Steps"]],
   "Steps"];
solFN = Interpolation[ifdata]
The result is accurate at the steps and moderately accurate between them:
Table[(solFN[t] - Exp[t]) / Exp[t],
{t, solFN["Grid"]}] // MinMax
(* {0., 1.70507*10^-7} *)
Table[(solFN[t] - Exp[t]) / Exp[t],
{t, MovingAverage[solFN["Grid"], 2]}] // MinMax
(* {-0.0000132379, 9.5433*10^-6} *)
If you're using a higher-order method that can take large steps, then the difference in accuracy is greater:
{ifdata} = Last@Reap[
   Sow[{{0.}, 1.}, "Steps"];
   sol = NDSolve[{x'[t] == x[t], x[0] == 1}, x, {t, 10, 10},
     Method -> "Extrapolation",
     StepMonitor :> Sow[{{t}, x[t]}, "Steps"]],
   "Steps"];
solFN = Interpolation[ifdata]
Table[(solFN[t] - Exp[t]) / Exp[t],
{t, solFN["Grid"]}] // MinMax
(* {-2.92009*10^-10, 0.} *)
Table[(solFN[t] - Exp[t]) / Exp[t],
{t, MovingAverage[solFN["Grid"], 2]}] // MinMax
(* {-0.025472, 0.00819171} *)
As for the size of the solution, most of it is probably accounted for by the time grid plus the interpolating coefficients: the number of coefficients per component (the average order plus one, maybe ~8 for the Automatic method) times the dimension (~200, you said) times the number of steps times 8 bytes per real. There are also lists giving the structure of the interpolation at each step (the method plus indices into the interpolation data array), which add another 2-3 times 8 bytes per step. That's roughly 19 KB per step, more if the order is higher. I'm not sure about variable-order methods, which might mean less array packing and in turn more overhead to store the data.
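A quick way to sanity-check that estimate on your own system is to divide a solution's ByteCount by the number of grid points. Here it is for the 1-D toy problem above, so the per-step figure will be far smaller than for a ~200-dimensional system, and the exact number depends on version and method:

sol = x /. First@NDSolve[{x'[t] == x[t], x[0] == 1}, x, {t, 0, 10}];
N[ByteCount[sol]/Length[sol["Grid"]]]  (* rough bytes per step for this 1-D example *)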