Unexpected behaviour from NMaximize optimising a stochastic function

Posted 5 hours ago

I have a complicated function whose output contains noise. I'm interested in using Differential Evolution via NMaximize to solve it. However, NMaximize appears to compile or somehow 'freeze' the function outside its loop.

Below is a minimal example. Here, 'function 1' produces the expected behaviour: it finds the argmax $\pi$ along with a different random maximum value each time.

That is, for three example runs:

{{0.368844, {\[Eta] -> 3.14159}}, {1.29786, {\[Eta] -> 3.14159}}, {0.128056, {\[Eta] -> 3.14159}}}

My actual problem is more like 'function 2'. But in this case it returns the same result all three times. The result is also wrong (not $\pi$), which is probably linked to the unexpected behaviour:

{{0.08951, {\[Eta] -> 3.89056}}, {0.08951, {\[Eta] -> 3.89056}}, {0.08951, {\[Eta] -> 3.89056}}}

Here is the example code:

ClearAll["Global`*"]

(* function 1 *)
f1 := NMaximize[{1.0 - (\[Pi] - \[Eta])^2 + 
     RandomVariate[NormalDistribution[]], 
    0. < \[Eta] < 4.}, \[Eta] \[Element] Reals, 
   Method -> "DifferentialEvolution"  ];
Table[f1, {3}]

(* function 2 *)
w[\[Eta]_?NumericQ] := 
  Block[{\[Epsilon]}, \[Epsilon] := 
    RandomVariate[NormalDistribution[]]; 
   1.0 - (\[Pi] - \[Eta])^2 + \[Epsilon]];
f2 := NMaximize[{w[\[Eta]], 0. < \[Eta] < 4.}, \[Eta] \[Element] 
    Reals, Method -> "DifferentialEvolution"  ];
Table[f2, {3}]
POSTED BY: Cameron Turner
4 Replies

OK, let me first say that if the problem contains the word "stochastic," I probably don't know how it is supposed to work.

Here's how the code works:

f1 gets a random variate, say, alpha, then calls NMaximize[] on the function 1. - (Pi - eta)^2 + alpha. The value of alpha stays the same during the optimization calculation. NMaximize then optimizes a fixed quadratic function, whose maximum is obviously at eta == Pi every time.
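To make this concrete: NMaximize does not hold its arguments, so the objective expression in f1 evaluates before NMaximize ever sees it, and the RandomVariate collapses to a constant. You can check this by evaluating the expression on its own:

(* the first argument of f1 evaluates immediately; the noise term
   becomes a fixed constant folded into the quadratic *)
1.0 - (\[Pi] - \[Eta])^2 + RandomVariate[NormalDistribution[]]
(* something like 1.36884 - (\[Pi] - \[Eta])^2, with a different
   constant each time the line is evaluated *)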

f2, both yours and mine, gets a new random variate every time w[eta] is evaluated. It is only evaluated when eta is given a numeric value by NMaximize. In a single computation NMaximize does this many times, and each time the value of the variate changes. This random noise makes NMaximize think there is no convergence.
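One way to see how many noisy samples a single NMaximize call draws is to attach an ad hoc counter (the variable n below is just for illustration, not part of the original code):

n = 0;
wCount[\[Eta]_?NumericQ] := (n++;
   1.0 - (\[Pi] - \[Eta])^2 + RandomVariate[NormalDistribution[]]);
Quiet@NMaximize[{wCount[\[Eta]], 0. < \[Eta] < 4.}, \[Eta] \[Element] Reals,
   Method -> "DifferentialEvolution"];
n
(* typically hundreds of evaluations, each with a fresh variate *)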

So back to my ignorance. The approaches are quite different. I don't know which one is correct. It depends on what you are trying to do. So I just fixed the code and left it at that. Optimizing f1 doesn't need NMaximize[], so at first I thought that f1 was wrong. But f2 doesn't converge, so maybe that one's wrong. So I leave it to you to decide which does what you intended.

Here's how I would fix the f2 issue. It may be totally wrong for your problem. If the variates have a standard deviation of $\sigma$, then the precision goal ought to be of a similar order. Here's an example with $\sigma$ and the precision goal set at $10^{-3}$:

w[\[Eta]_?NumericQ] := 
  Block[{\[Epsilon]}, \[Epsilon] := 
    RandomVariate[NormalDistribution[0, 1/1000]];
   1.0 - (\[Pi] - \[Eta])^2 + \[Epsilon]];
f2 := NMaximize[{w[\[Eta]], 0. < \[Eta] < 4.}, \[Eta] \[Element] 
    Reals, Method -> {"DifferentialEvolution", 
     "RandomSeed" -> RandomInteger[2^63]}, PrecisionGoal -> 3];
Table[f2, {3}]
(* {{0.999971, {\[Eta] -> 3.14249}}, {1.00144, {\[Eta] -> 3.14865}},
    {1.00111, {\[Eta] -> 3.14819}}} *)

I got no errors running it a few times. It's random noise, so I wouldn't be surprised if it threw an error now and then. If you need $\sigma = 1$, then set PrecisionGoal to 1 or less (it must be positive, or NMaximize refuses to compute).
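If you really do need $\sigma = 1$ per sample, another option (not what I did above, just a standard variance-reduction trick) is to average $m$ samples inside w, which shrinks the effective noise to $\sigma/\sqrt{m}$:

(* with m = 100 samples per evaluation the effective \[Sigma] is 0.1,
   so a PrecisionGoal of about 1 is appropriate *)
wAvg[\[Eta]_?NumericQ] :=
  1.0 - (\[Pi] - \[Eta])^2 +
   Mean[RandomVariate[NormalDistribution[], 100]];
f2avg := NMaximize[{wAvg[\[Eta]], 0. < \[Eta] < 4.}, \[Eta] \[Element] Reals,
   Method -> {"DifferentialEvolution",
     "RandomSeed" -> RandomInteger[2^63]}, PrecisionGoal -> 1];

The cost is 100 samples per objective evaluation, so this is only worthwhile when w itself is cheap.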

POSTED BY: Michael Rogers
Posted 4 hours ago

Below is a comparison of the convergence of f1 with that of f2 using the random seed:

[Plot: convergence of f1]

[Plot: convergence of f2]

POSTED BY: Cameron Turner
Posted 4 hours ago

Thanks for your reply again Michael.

This does produce the expected behaviour of stochastic variation between runs. However, the method you've provided (which I have just tried) appears to struggle compared with the case where the function is defined inside NMaximize, which obtains the correct argmax quickly and with high precision. By contrast, with the "RandomSeed" option NMaximize fails to converge within 100 iterations. What accounts for this difference?

Ideally, NMaximize would treat both formulations the same, converging just as quickly and accurately even though the function is defined outside NMaximize.

POSTED BY: Cameron Turner

I believe NMaximize and NMinimize control the random seed themselves, which would explain why you get the same result each time. Luckily, the user can specify the seed, including a randomly chosen one:

f2 := NMaximize[{w[\[Eta]], 0. < \[Eta] < 4.}, \[Eta] \[Element] 
    Reals, Method -> {"DifferentialEvolution", 
     "RandomSeed" -> RandomInteger[2^63]}];
Table[f2, {3}]
POSTED BY: Michael Rogers