Problem using optimization for modeling

I'm trying to use NMinimize to determine the parameters of a model, but I'm having problems using OutputResponse as part of the objective function. The model represents lithium-ion cells used to power an autonomous underwater vehicle. I can measure a real cell's voltage response to changes in output current, and I want to compare that output to the output of my model for a given set of circuit parameters. If I can minimize the difference between the output of my model and the output of a real cell, then my model parameters will be correct. In general, I'm trying to minimize the difference between two curves.

I use OutputResponse to generate the output of my model. My problem is that OutputResponse can't simulate the response of a system that contains symbolic parameters. But my model parameters have to be symbolic so NMinimize can vary them. Here is my code (sorry for the long posting):

Write the transfer function of the circuit that models the complex impedance of the cell.
tf = (Rser + Rfast/((Rfast s Cfast) + 1) + Rslow/((Rslow s Cslow) + 1));
Convert the tf into a transfer function model.
tfm = TransferFunctionModel[{{tf}}, s];
Generate a battery current profile.
Ibatt = UnitStep[t - 1] - UnitStep[t - 5];
Plot[Ibatt, {t, 0, 10}, Exclusions -> None]

Assign values to the circuit parameters.
vals = {Rser -> 0.1, Rfast -> 0.05, Cfast -> 10, Rslow -> 0.01, Cslow -> 100};
Voc = 4.2;
Find the output response for these circuit values and current profile.
y = OutputResponse[tfm /. vals, Ibatt, {t, 10}];
Plot[Voc - y, {t, 0, 10}]

Define some different circuit values to get a different response and plot both responses.
dvals = {Rser -> 0.06, Rfast -> 0.08, Cfast -> 15, Rslow -> 0.015, Cslow -> 50};
dy = OutputResponse[tfm /. dvals, Ibatt, {t, 10}];
Plot[{Voc - y, Voc - dy}, {t, 0, 10}]
Find the area of the difference between the two responses.
Plot[Abs[y - dy], {t, 0, 10}]
NIntegrate[Abs[y - dy], {t, 0, 10}]
{0.156543}
I now have an objective function and can try modeling. Given only the first response curve as a template, find the circuit values that produced that response as a test of the modeling algorithm. Do this by minimizing the difference between the template and the output of a model whose circuit values are adjusted by NMinimize. In this case I already know the answer, but if it works then I can use measurements of an actual cell as my template.
 NMinimize[
  NIntegrate[
   Abs[
    OutputResponse[tfm, Ibatt, {t, 0, 10}] - y
   ], {t, 0, 10}
  ], {Rser, Rfast, Cfast, Rslow, Cslow}
 ]
 
 OutputResponse::symbs: Cannot simulate response of system StateSpaceModel[{{{0,1},{-(1/(Cfast Cslow Rfast Rslow)),-((Cfast Rfast+Cslow Rslow)/(Cfast Cslow Rfast Rslow))}},{{0},{1}},{{-(Rser/(Cfast Cslow Rfast Rslow))+(Rfast+Rser+Rslow)/(Cfast Cslow Rfast Rslow),-((Rser (Cfast Rfast+Cslow Rslow))/(Cfast Cslow Rfast Rslow))+(<<1>>+<<2>>+<<1>>)/<<1>>}},{{Rser}}},<<18>>->None] that contains symbolic parameter(s). >>
As you can see, OutputResponse can't produce an output because the model still contains symbolic parameters.

How do I get NMinimize to set the values of the parameters before OutputResponse uses them? Will NMinimize work at all with this type of objective function?

Thanks for any help,
Paul
POSTED BY: Paul McGill
12 Replies
Though not recommended, you can still use NMinimize or FindMinimum in this way. Here is a piece of code that should work, despite somewhat problematic numeric performance:
  • Define the system and input: 
tf[Rser_, Rfast_, Cfast_, Rslow_, Cslow_, s_] := (Rser + Rfast/((Rfast s Cfast) + 1) + Rslow/((Rslow s Cslow) + 1))
tfm[Rser_, Rfast_, Cfast_, Rslow_, Cslow_, s_] :=
   TransferFunctionModel[{{tf[Rser, Rfast, Cfast, Rslow, Cslow, s]}}, s]
Ibatt = UnitStep[t - 1] - UnitStep[t - 5];
  • Define the response y and the objective function f with parameters:
y = OutputResponse[tfm[0.1`, 0.05`, 10, 0.01`, 100, t], Ibatt, {t, 10}];
f[Rser_?NumericQ, Rfast_?NumericQ, Cfast_?NumericQ, Rslow_?NumericQ, Cslow_?NumericQ] :=
   First@NIntegrate[
        Abs[OutputResponse[tfm[Rser, Rfast, Cfast, Rslow, Cslow, t],Ibatt, {t, 0, 10}] - y],
        {t, 0, 10},
        Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0, Method -> {"TrapezoidalRule"}}
]
  • Use an optimization function to find a close result, and use a dynamic variable to track the progress:
FindMinimum[
        f[Rser, Rfast, Cfast, Rslow, Cslow],
        {{Rser, 0.05}, {Rfast, 0.08}, {Cfast, 5}, {Rslow, 1/10}, {Cslow, 50}}, StepMonitor :> (x = Cslow)
]
Dynamic[x]
The result I get for this set of values is not bad, even though the search halted due to poor convergence:
During evaluation of In[30]:= FindMinimum::sdprec: Line search unable to find a sufficient decrease in the function value with MachinePrecision digit precision. >>

Out[30]= {0.173594, {Rser -> 0.0555499, Rfast -> 0.0593808, Cfast -> 5., Rslow -> 0.0779124, Cslow -> 50.}}
For this type of nonlinear system, you may want to find some heuristic starting values before you go ahead. Proper start values and bounds for the parameters are very helpful in general.
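For example, here is a rough sketch using the f defined above; the start values and bounds are only illustrative:
(* sketch: illustrative start values with bounds, using the {var, start, min, max}
   form of FindMinimum so the search never leaves a plausible region *)
FindMinimum[
  f[Rser, Rfast, Cfast, Rslow, Cslow],
  {{Rser, 0.05, 0.001, 0.5}, {Rfast, 0.08, 0.001, 0.5}, {Cfast, 5, 1, 100},
   {Rslow, 0.1, 0.001, 0.5}, {Cslow, 50, 10, 200}}
]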
POSTED BY: Shenghui Yang
Posted 12 years ago
After evaluating all but your last block of code, I look at the following:
In[14]:= tfm

Out[14]= TransferFunctionModel[{{{Rfast + Rser + Rslow + Cfast*Rfast*Rser*s + Cfast*Rfast*Rslow*s + Cslow*Rfast*Rslow*s + Cslow*Rser*Rslow*s + Cfast*Cslow*Rfast*Rser*Rslow*s^2}}, (1 + Cfast*Rfast*s)*(1 + Cslow*Rslow*s)}, s]

Thus I change your last block of code to be

 In[15]:= f[{Rse_?NumericQ, Rfa_?NumericQ, Cfa_?NumericQ, Rsl_?NumericQ, Csl_?NumericQ}] :=
  (Print[{tfm /. {Rser -> Rse, Rfast -> Rfa, Cfast -> Cfa, Rslow -> Rsl, Cslow -> Csl}}];
   NIntegrate[Abs[OutputResponse[tfm /. {Rser -> Rse, Rfast -> Rfa, Cfast -> Cfa, Rslow -> Rsl, Cslow -> Csl}, Ibatt, {t, 0, 10}] - y], {t, 0, 10}]
  );
 NMinimize[f[{Rser, Rfast, Cfast, Rslow, Cslow}], {Rser, Rfast, Cfast, Rslow, Cslow}]
 
 During evaluation of In[16]:={TransferFunctionModel[{{{1.07688+0.15247*s-0.02527*s^2}},(1-0.07403*s)*(1+0.34557*s)},s]}
 
 During evaluation of In[16]:= {TransferFunctionModel[{{{1.07688+0.15247*s-0.02527*s^2}},(1-0.07403*s)*(1+0.34557*s)},s]}

During evaluation of In[16]:= NMinimize::nnum: The function value {1.19169*10^51} is not a number at
{Cfast,Cslow,Rfast,Rser,Rslow} = {0.989531,0.284773,0.349234,0.987641,-0.259993}. >>

Out[17]= NMinimize[f[{Rser, Rfast, Cfast, Rslow, Cslow}], {Rser, Rfast, Cfast, Rslow, Cslow}]

I make some of those changes based on looking at your use of OutputResponse in a previous code block and the rest based on experimentation.
By introducing a dummy function f and using ?NumericQ on each of the arguments I forced everything to be numeric to satisfy NIntegrate.
By using :=(Print[],NIntegrate[]); I get a look at your integrand every time f is called.

The error message from NMinimize blowing up may be because I've misused part of your code.
Hopefully something in this will give you the needed clue to get to the next step.
POSTED BY: Bill Simpson
> This is officially implemented via the HoldAll attribute of NMinimize and its siblings.

NMinimize does not have the HoldAll attribute. FindMinimum does, but all that means is that the argument is not evaluated before being passed to the function. Then, to quote the documentation, FindMinimum "first localizes the values of all variables, then evaluates f with the variables being symbolic, and then repeatedly evaluates the result numerically". The use of a black box objective function f (defined using _?NumericQ) prevents the symbolic evaluation step by keeping f unevaluated for non-numeric arguments.
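A toy illustration of that behavior (g here is just a hypothetical scalar function, unrelated to the battery model):
g[x_?NumericQ] := x^2 + Sin[x];   (* defined only for explicitly numeric arguments *)
g[a]                              (* a symbolic probe comes back unevaluated: g[a] *)
g[1.5]                            (* a numeric call evaluates: 3.2475 *)
FindMinimum[g[u], {u, 1}]         (* works without ever seeing a symbolic g[u] result *)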
POSTED BY: Ilian Gachevski
Bill - I really like your approach of using a dummy function. I don't understand how adding the pattern test to the function arguments forces them to be numbers. I thought that adding those qualifiers to function arguments tests the arguments and rejects them if they don't match, but it doesn't force them to be anything. Oh well, there's a lot about Mma I don't understand...

I defined my dummy function as follows
 f[{Rse_?NumericQ, Rfa_?NumericQ, Cfa_?NumericQ, Rsl_?NumericQ, Csl_?NumericQ}] :=
  (error =
    NIntegrate[
     Abs[OutputResponse[
        tfm /. {Rser -> Rse, Rfast -> Rfa, Cfast -> Cfa, Rslow -> Rsl,
          Cslow -> Csl}, Ibatt, {t, 0, 10}] - y
     ], {t, 0, 10}
    ];
   Print[error, " ", Rser, " ", Rfast, " ", Cfast, " ", Rslow, " ", Cslow];)
Unfortunately, when I run the minimization, even adding constraints, I get
NMinimize[{f[{Rser, Rfast, Cfast, Rslow, Cslow}], Rser > 0,
  Rser < 0.5, Rfast > 0, Rfast < 0.5, Cfast > 10, Cfast < 100,
  Rslow > 0, Rslow < 0.5, Cslow > 50, Cslow < 200}, {Rser, Rfast, Cfast, Rslow, Cslow}]
{1.74048} 0.485254 0.301249 93.0382 0.148942 135.54
{1.74048} 0.485254 0.301249 93.0382 0.148942 135.54
{1.74048} Rser Rfast Cfast Rslow Cslow
NMinimize::nnum: The function value Null is not a number at {Cfast,Cslow,Rfast,Rser,Rslow} = {93.0382,135.54,0.301249,0.485254,0.148942}. >>
The good news is that NMinimize is working with numeric arguments. The bad news is that the arguments don't seem to change and then it fails after just two iterations. I'm not sure what's going on but will play with it some more. Thanks again for looking at it for me.

Shenghui - Thank you also for taking the time to help with this. I'm going to try your suggestions tomorrow. Perhaps your method will work with constraints. 
POSTED BY: Paul McGill
@Bill, you are right:
> This often happens because something like NMinimize or Integrate might call your function first with just symbols instead of beginning with numeric values. The guess is that NMinimize or Integrate is "probing" or testing your function, to see if it can get a symbolic result back and save a lot of time testing with numerics.
This is officially implemented via the HoldAll attribute of NMinimize and its siblings. 
POSTED BY: Shenghui Yang
Wow, you guys are amazing. I can't tell you how much I appreciate your patient and clear answers to my questions.
> You need your f[] to return a real value, not a list and not Null. Your version isn't doing that.

D'oh! I've been using Mma since the late 1980s, and consider myself an intermediate-level programmer (I'm an EE), but that was a rookie mistake. I've never completely understood how Mma decides what to return from a function, so I usually just experiment until it works. Now I know to always test my objective function to make sure it returns something that can be optimized.

> The other person who responded to you began to question whether this whole approach to the problem is wise or not. I'm not even beginning to point you towards whether what you have chosen is appropriate or what the speed of convergence will be. I'm still just trying to get you to a point where you write working code.

Yes, you guys are right - I may be going about this all wrong and I'm open to suggestions on a better approach. My goal was to get something working and then improve it. Even if it's very slow in its current form, I don't mind waiting a long time for the answer because I only have to do the modeling once. Sometimes I find that writing code that runs slowly gets me to an answer more quickly than spending a lot of time optimizing code that runs quickly. I can also work on other things while my slow code is running.

> About the ?NumericQ, many of the algorithms in Mathematica do things you might not expect.

That's certainly true! Now I understand that the pattern test is, in this case, a flag to NMinimize to tell it not to bother to look at the objective function symbolically. I would never have figured that out on my own. I would think that this slows down the optimization because NMinimize can't "look" at the solution surface to see its shape or complexity, but I don't think I have an alternative.

I'm currently running Bill's code, but it only tries about two parameter vectors per second, so I turned off the printing and I'll let it run overnight. I've also switched to differential evolution, as my previous reading suggests it's very robust for complex, nonlinear solution spaces. If it doesn't converge, I'll try Shenghui's approach, since NMinimize and FindMinimum seem to use different algorithms. I'll let you know what I find.
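If the plain method call is still too slow, I may also experiment with the method's sub-options; the values below are just a sketch on my part, not a recommendation:
(* sketch: DifferentialEvolution with explicit sub-options -- "SearchPoints" is the
   population size and "RandomSeed" makes runs repeatable; values are illustrative *)
NMinimize[{f[{Rser, Rfast, Cfast, Rslow, Cslow}],
   0 < Rser < 0.5, 0 < Rfast < 0.5, 10 < Cfast < 100, 0 < Rslow < 0.5, 50 < Cslow < 200},
  {Rser, Rfast, Cfast, Rslow, Cslow},
  Method -> {"DifferentialEvolution", "SearchPoints" -> 20, "RandomSeed" -> 1}]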
POSTED BY: Paul McGill
It might be possible to get a close enough answer by searching the whole parameter space (in the spirit of Stephen Wolfram's "A New Kind of Science").

This code will take a while, but it will finish in an overnight run.
data = Table[f[{Rser, Rfast, Cfast, Rslow, Cslow}], {Rser, 0, .5, .1}, {Rfast, 0, .5, .1},
   {Cfast, 10, 100, 10}, {Rslow, 0, .5, .1}, {Cslow, 50, 200, 30}];
{Min[data], Position[data, Min[data]]}
That way you can get a better sense of what is going on by studying data, and you also get an answer with some feeling for its correctness.
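To turn the index that Position returns back into parameter values, something like this should work (a sketch using the same grid ranges as above):
(* sketch: map the grid index of the minimum back to actual parameter values *)
ranges = {Range[0, .5, .1], Range[0, .5, .1], Range[10, 100, 10],
   Range[0, .5, .1], Range[50, 200, 30]};
best = MapThread[Part, {ranges, First@Position[data, Min[data]]}]
(* best is {Rser, Rfast, Cfast, Rslow, Cslow} at the best grid point *)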
POSTED BY: Todd Rowland
Progress update: I've managed to successfully run my test case, minimizing the error between my model output and my test output.

Timing[NMinimize[{f[{Rser, Rfast, Cfast, Rslow, Cslow}],
   0 < Rser < 0.5, 0 < Rfast < 0.5, 10 < Cfast < 100, 0 < Rslow < 0.5,
    50 < Cslow < 200}, {Rser, Rfast, Cfast, Rslow, Cslow},
  Method -> "DifferentialEvolution"]]
{23648.140956, {0.00278763, {Rser -> 0.104464, Rfast -> 0.0426679,
   Cfast -> 14.1031, Rslow -> 0.0126952, Cslow -> 50.}}}

The process is pretty slow, taking over 6.5 hours to run, but at least it produced a decent result. Just to remind you, the correct values are

vals = {Rser -> 0.1, Rfast -> 0.05, Cfast -> 10, Rslow -> 0.01,
  Cslow -> 100}

The resistor values are pretty close, but the capacitor values are off: Cfast is about 40% high and Cslow is 50% low. However, if I compare the output of the optimized result with the correct output, it's not too bad.
Plot[{Voc - y, Voc - yopt}, {t, 0, 10}]


Plot[Abs[y - yopt], {t, 0, 10}, PlotRange -> All]



Perhaps I can speed up the process using Todd's method above, which searches the whole parameter space. My algorithm was searching about two solutions per second. Todd's approach would need to look at 6 x 6 x 10 x 6 x 6 = 12,960 solutions, which would take about 1.8 hours. This might get me close to the optimum parameters, after which I could let NMinimize take over (assuming the parameter space is well enough behaved). This will be more important as I attempt to model a real cell and don't know a priori what the correct model values are. I'll attempt to model some real test data next.
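As for the hand-off from the grid search to NMinimize, I have in mind something like this sketch, where the narrowed initial ranges are only placeholders for whatever neighborhood the grid search suggests:
(* sketch: keep the original constraints, but start NMinimize in a narrow region
   around the best grid point (the ranges below are made-up placeholders) *)
NMinimize[{f[{Rser, Rfast, Cfast, Rslow, Cslow}],
   0 < Rser < 0.5, 0 < Rfast < 0.5, 10 < Cfast < 100, 0 < Rslow < 0.5, 50 < Cslow < 200},
  {{Rser, 0.05, 0.15}, {Rfast, 0.01, 0.1}, {Cfast, 10, 20}, {Rslow, 0.005, 0.05}, {Cslow, 50, 150}},
  Method -> "DifferentialEvolution"]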
POSTED BY: Paul McGill
Paul,

How are things going with your project?

Besides using a uniform search to give a good seed to NMinimize, it can give you a bunch of information about the error function and about the result that NMinimize gives. For example,
ListLinePlot[Flatten[data]]

You can use data to answer the simplest questions, like: are there lots of choices near the minimum? Are there nearly optimal choices far from the absolute minimum?

There are lots of other things you can plug the data into, like the descriptive statistics functions in Mathematica or the data upload at Wolfram|Alpha Pro.
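For instance, a quick sketch of the kind of question you could ask of data (the 10% threshold is arbitrary):
(* sketch: how many grid points come within 10% of the best error, and where are they? *)
flat = Flatten[data];
near = Position[data, x_ /; x < 1.1 Min[flat]];
{Min[flat], Length[near]}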
POSTED BY: Todd Rowland
Back to the implementation, it might speed things up to discretize all the domains. Above I suggested computing data as a uniform search of the parameter space, but you can also try storing the values of OutputResponse for a finite set of parameter values. Then for each y you can just run it without recalculating the model functions. (An alternative to this is to store the model functions as data.)

Instead of tfm you would have
OutputValues[Rser_, Rfast_, Cfast_, Rslow_, Cslow_] :=
  OutputResponse[tfm[Rser, Rfast, Cfast, Rslow, Cslow, t], Ibatt, {t, 0, 10}] /. Table[{t -> i}, {i, 0, 10}]

tfmdata=Table[OutputValues[Rser, Rfast, Cfast, Rslow, Cslow], {Rser, 0, .5, .1}, {Rfast, 0, .5, .1}, {Cfast, 10, 100, 10}, {Rslow, 0, .5, .1}, {Cslow, 50, 200, 30}];
Note that instead of y you would have yvals sampled at the same values of t.

Instead of f you would have
fx[Rser_Integer, Rfast_Integer, Cfast_Integer, Rslow_Integer, Cslow_Integer, yvals_] := Total[Abs[tfmdata[[Rser, Rfast, Cfast, Rslow, Cslow]] - yvals]]
Instead of data you would have
datax[yvals_] := Table[fx[Rser, Rfast, Cfast, Rslow, Cslow, yvals], {Rser, 6}, {Rfast, 6}, {Cfast, 10}, {Rslow, 6}, {Cslow, 6}];
then e.g.,
datay = datax[Table[y[i], {i, 0, 10}]]
POSTED BY: Todd Rowland
Posted 12 years ago
You need your f[] to return a real value, not a list and not Null. Your version isn't doing that.

Print out what your version of f returns and see whether you think that would be something that could be minimized.

Perhaps this version will help you.
f[{Rse_?NumericQ, Rfa_?NumericQ, Cfa_?NumericQ, Rsl_?NumericQ, Csl_?NumericQ}] :=
  (error = NIntegrate[
      Abs[OutputResponse[tfm /. {Rser -> Rse, Rfast -> Rfa, Cfast -> Cfa, Rslow -> Rsl, Cslow -> Csl},
         Ibatt, {t, 0, 10}] - y], {t, 0, 10}];
   Print[error, " ", Rse, " ", Rfa, " ", Cfa, " ", Rsl, " ", Csl];
   First[error]
  );
NMinimize[{f[{Rser, Rfast, Cfast, Rslow, Cslow}],
   0 < Rser < 0.5, 0 < Rfast < 0.5, 10 < Cfast < 100, 0 < Rslow < 0.5, 50 < Cslow < 200},
  {Rser, Rfast, Cfast, Rslow, Cslow}, Method -> "RandomSearch"]

Looking at that as it runs shows how slow it is and how it seems to choose some very odd values for your five parameters, but at least it isn't immediately failing and it seems to be gradually crawling towards a minimum.

The other person who responded to you began to question whether this whole approach to the problem is wise or not. I'm not even beginning to point you towards whether what you have chosen is appropriate or what the speed of convergence will be. I'm still just trying to get you to a point where you write working code. There are reasons that are not always obvious and are sometimes nearly hidden why a bit of code I write is the way it is. Much of this is based on the underlying model of calculation that Mathematica uses. I'm not sure how to get you to a place where parts of that are clear to you.

About the ?NumericQ, many of the algorithms in Mathematica do things you might not expect. A very common problem is for someone to write a function that depends on the parameters having actual numeric values, instead of just being x and y and z. This often happens because something like NMinimize or Integrate might call your function first with just symbols instead of beginning with numeric values. The guess is that NMinimize or Integrate is "probing" or testing your function, to see if it can get a symbolic result back and save a lot of time testing with numerics. If you have written f[x_, y_, z_] which you expect to be given numbers and which will fail when called with f[p, q, r], then you have a problem like you described at the beginning, "why is my function failing?!?!?"

By using ?NumericQ you are only defining your f for constant numeric arguments. Then, when Integrate or NMinimize probes your f to see if it can get a symbolic answer, it finds there is no f defined for f[p, q, r], and so it starts over with only numeric values and begins searching from there. So ?NumericQ does not somehow magically transform p into a number, but it does force NMinimize to see that it must use numeric constants for the parameters so that neither it nor your function fails. Again, this is some understanding, some experience, some shared knowledge, some folklore, and some superstition all combined to be able to write Mathematica code that (mostly) works.
POSTED BY: Bill Simpson
Todd - Thanks for checking in on me. I've been off the forums for a bit and just saw your last two posts.

That is a good idea to flatten the objective function values and plot them. I was trying to figure out how to visualize five-dimensional data but couldn't quite wrap my head around it. The flattened data line plot might be a bit confusing, like taking the brightness values of the raster scan lines of a TV picture, stringing them all end-to-end, and trying to imagine what the original picture looked like. But at least it would give you an idea of how "bumpy" the surface is and how close together the minimum values are.
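One thing I might try instead is looking at two-dimensional slices of the error array (a rough sketch, assuming the data array from Todd's grid search above):
(* sketch: one 2D slice of the 5D error array -- rows index Rser and columns index
   Rfast, with Cfast, Rslow, and Cslow held at their first grid values *)
MatrixPlot[data[[All, All, 1, 1, 1]]]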

Your last comment is also a good one. Making the model output discrete not only speeds up the visualization of the solution space, but it speeds up the optimization as well. I realized that my battery test data would be discrete values sampled at one-second intervals, so I decided to output my model data in the same form.

As before, I define my circuit model as a continuous model in s, but then I convert it to a discrete-time model in z.

tf = (Rser + Rfast/((Rfast s Cfast) + 1) + Rslow/((Rslow s Cslow) + 1));

tfm = TransferFunctionModel[{{tf}}, s];

dtm = ToDiscreteTimeModel[tfm, 1, z];

I also define the electrical current into the model as discrete values instead of the continuous function I used previously.

Ibatp1 = Table[Piecewise[{{0, t < 100}, {3.3, 100 <= t <= 458}, {0, t > 458}}], {t, 0, 999}];

Now, when I want to generate the model output, I give it the series of electrical current values instead of a continuous current function and a time range. Here I define my objective function as the difference (ManhattanDistance[ ]) between the model output (Vocp1s - modResp, with Vocp1s being the open-circuit voltage of the cell) and the output I measured from the load testing of the cell (Vbatp1).

objf[{Rse_?NumericQ, Rfa_?NumericQ, Cfa_?NumericQ, Rsl_?NumericQ,
    Csl_?NumericQ}] := Module[{modResp, error},
   modResp =
    Flatten[OutputResponse[
      dtm /. {Rser -> Rse, Rfast -> Rfa, Cfast -> Cfa, Rslow -> Rsl,
        Cslow -> Csl}, Ibatp1]];
   error = ManhattanDistance[Vocp1s - modResp, Vbatp1]
   ];

Finally, I can run the optimization with constraints.

sol = Timing[
  NMinimize[{objf[{Rser, Rfast, Cfast, Rslow, Cslow}],
    0.01 < Rser < 0.1, 0.01 < Rfast < 0.1, 0.01 < Cfast < 1000,
    0.01 < Rslow < 0.1, 1000 < Cslow < 10000}, {Rser, Rfast, Cfast,
    Rslow, Cslow}, Method -> "DifferentialEvolution"]]
{2400.906304, {3.66694, {Rser -> 0.0732495, Rfast -> 0.0151545,
   Cfast -> 0.417882, Rslow -> 0.0176008, Cslow -> 1848.02}}}

The optimization takes 40 minutes, which is much faster than the 6.5 hours my test case took when I was using continuous functions. Here I plot the optimized model output (blue) along with the load test measurements (violet), which are quantized to the nearest ten millivolts.

ListPlot[{Vocp1s - Flatten[OutputResponse[dtm /. sol[[2, 2]], Ibatp1]], Vbatp1}, PlotRange -> {3.7, 4.2}, GridLines -> Automatic]


As you can see, the model output reproduces the test data very well.
POSTED BY: Paul McGill