Unexpected behaviour from NMaximize optimising a stochastic function

Posted 3 months ago

I have a complicated function whose output contains noise. I'm interested in maximising it using Differential Evolution via NMaximize. However, NMaximize appears to compile or somehow 'freeze' the function outside its loop.

Below is a minimal example. Here, 'function 1' produces the expected behaviour, finding the argmax $\pi$ along with a different random maximum value each time.

That is, for three example runs:

{{0.368844, {\[Eta] -> 3.14159}}, {1.29786, {\[Eta] -> 3.14159}}, {0.128056, {\[Eta] -> 3.14159}}}

My actual problem is more like 'function 2', but in this case it returns the same result three times. The result is also wrong (the argmax is not $\pi$), which is probably linked to the unexpected behaviour:

{{0.08951, {\[Eta] -> 3.89056}}, {0.08951, {\[Eta] -> 3.89056}}, {0.08951, {\[Eta] -> 3.89056}}}

Here is the example code:

ClearAll["Global`*"]

(* function 1 *)
f1 := NMaximize[
   {1.0 - (\[Pi] - \[Eta])^2 + RandomVariate[NormalDistribution[]],
    0. < \[Eta] < 4.},
   \[Eta] \[Element] Reals,
   Method -> "DifferentialEvolution"];
Table[f1, {3}]

(* function 2 *)
w[\[Eta]_?NumericQ] :=
  Block[{\[Epsilon]},
   \[Epsilon] := RandomVariate[NormalDistribution[]];
   1.0 - (\[Pi] - \[Eta])^2 + \[Epsilon]];
f2 := NMaximize[
   {w[\[Eta]], 0. < \[Eta] < 4.},
   \[Eta] \[Element] Reals,
   Method -> "DifferentialEvolution"];
Table[f2, {3}]
POSTED BY: Cameron Turner
7 Replies
Posted 3 months ago

Below is a comparison of the convergence of f1 with that of f2 using the random seed:

[Plot of the convergence of f1]

[Plot of the convergence of f2]

POSTED BY: Cameron Turner

I believe NMaximize and NMinimize control the random seed, which would explain why you get the same result each time. Luckily, the user can specify the seed, and can randomise it on each call:

f2 := NMaximize[
   {w[\[Eta]], 0. < \[Eta] < 4.},
   \[Eta] \[Element] Reals,
   Method -> {"DifferentialEvolution",
     "RandomSeed" -> RandomInteger[2^63]}];
Table[f2, {3}]
POSTED BY: Michael Rogers
Posted 3 months ago

Thanks again for your reply, Michael.

This does produce the expected behaviour of stochastic variation between runs. However, the method you've provided (which I have just tried) appears to struggle compared with the case where the function is defined inside NMaximize, which obtains the correct argmax quickly and with high precision. With the "RandomSeed" option, NMaximize does not converge in 100 iterations. What accounts for this difference?

Ideally, I want both approaches to behave the same, converging just as quickly and accurately, even though the function is defined outside NMaximize.

POSTED BY: Cameron Turner

OK, let me first say that if the problem contains the word "stochastic," I probably don't know how it is supposed to work.

Here's how the code works:

f1 draws a random variate, say alpha, then calls NMaximize[] on the function 1. - (Pi - eta)^2 + alpha. The value of alpha stays the same during the optimisation, so NMaximize optimises a fixed quadratic function, whose maximum is obviously at eta == Pi every time.

f2, both yours and mine, draws a new random variate every time w[eta] is evaluated, which happens only when eta is given a numeric value by NMaximize. In a single computation NMaximize does this many times, and each time the value of the variate changes. This random noise makes NMaximize think there is no convergence.
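To make the distinction concrete, here is a quick check of my own: assigning the f1-style objective draws the variate immediately, so substituting values for eta afterwards always sees the same noise.

expr = 1.0 - (\[Pi] - \[Eta])^2 + RandomVariate[NormalDistribution[]];
(* the variate was drawn once, at assignment; both substitutions agree *)
{expr /. \[Eta] -> 1.0, expr /. \[Eta] -> 1.0}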

So back to my ignorance. The approaches are quite different. I don't know which one is correct. It depends on what you are trying to do. So I just fixed the code and left it at that. Optimizing f1 doesn't need NMaximize[], so at first I thought that f1 was wrong. But f2 doesn't converge, so maybe that one's wrong. So I leave it to you to decide which does what you intended.
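For instance, since the noise term is just an additive constant within each call, the quadratic part of f1 can be maximised exactly, without NMaximize[]:

Maximize[1 - (\[Pi] - \[Eta])^2, \[Eta]]
(* {1, {\[Eta] -> \[Pi]}} *)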

Here's how I would fix the f2 issue. It may be totally wrong for your problem. If the variates have a standard deviation of $\sigma$, then the precision goal ought to be of a similar order. Here's an example with $\sigma$ and the precision goal set at $10^{-3}$:

w[\[Eta]_?NumericQ] :=
  Block[{\[Epsilon]},
   \[Epsilon] := RandomVariate[NormalDistribution[0, 1/1000]];
   1.0 - (\[Pi] - \[Eta])^2 + \[Epsilon]];
f2 := NMaximize[
   {w[\[Eta]], 0. < \[Eta] < 4.},
   \[Eta] \[Element] Reals,
   Method -> {"DifferentialEvolution",
     "RandomSeed" -> RandomInteger[2^63]},
   PrecisionGoal -> 3];
Table[f2, {3}]
(* {{0.999971, {\[Eta] -> 3.14249}}, {1.00144, {\[Eta] -> 3.14865}},
    {1.00111, {\[Eta] -> 3.14819}}} *)

I got no errors running it a few times. It's random noise, so I wouldn't be surprised if it threw an error now and then. If you need $\sigma=1$, then set PrecisionGoal to 1 or less (it must be positive, or NMaximize refuses to compute).
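For example, with $\sigma=1$ it might look like this (a sketch; I use a fresh symbol wUnit here so as not to overwrite w):

(* same setup as before, but with unit-variance noise and a loose goal *)
wUnit[\[Eta]_?NumericQ] :=
  1.0 - (\[Pi] - \[Eta])^2 + RandomVariate[NormalDistribution[]];
NMaximize[{wUnit[\[Eta]], 0. < \[Eta] < 4.}, \[Eta] \[Element] Reals,
 Method -> {"DifferentialEvolution",
   "RandomSeed" -> RandomInteger[2^63]},
 PrecisionGoal -> 1]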

POSTED BY: Michael Rogers
Posted 3 months ago

Great explanation, thank you, I think your proposed f2 is what I want!

POSTED BY: Cameron Turner

Another approach occurred to me. In my w[eta], whenever w[eta] is re-evaluated at the same value of eta, we get a different value, and NMaximize does this re-evaluation many times. That means that when NMaximize thinks it has found a large value, the value might not be so large when it evaluates again. That's probably why it was complaining.
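You can see this directly: with the noisy definition of w, two calls at the same point give two different values.

(* repeated evaluation at the same input returns different values *)
{w[2.0], w[2.0]}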

We can set up w[eta] so that it remembers the value it got and reuses it (a technique called memoization). Each call to f2 would then get a different random function, but a true function, with the same output for the same input. With this method, we don't need PrecisionGoal:

f2 := (
   ClearAll[w];
   (* memoize: the first evaluation at a given \[Eta] caches its value,
      so later evaluations at the same point return the same number;
      Sow tags the sampled points, which Reap[f2] would collect *)
   mem : w[\[Eta]_?NumericQ] :=
    mem = Block[{\[Epsilon]},
      \[Epsilon] := RandomVariate[NormalDistribution[0, 1]];
      1.0 - (\[Pi] - Sow@\[Eta])^2 + \[Epsilon]];
   NMaximize[{w[\[Eta]], 0. < \[Eta] < 4.}, \[Eta] \[Element] Reals,
    Method -> {"DifferentialEvolution",
      "RandomSeed" -> RandomInteger[2^63]}]
   );
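Run it as before; since each call to f2 clears w, every run draws a fresh random function:

Table[f2, {3}]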

As I said earlier, I'm not conversant with stochastic analysis; I have a modest familiarity with basic probability and elementary statistics. However, I wonder about using NMaximize on w[eta] in any of the f2 setups. The sampling is uncontrolled: the harder NMaximize tries (that is, the more often it evaluates w[eta] as it converges to the maximum), the more likely it is to get a greater offset from the random variate. That seems different from how WienerProcess[] and RandomFunction[] work, which use discrete, equally spaced sampling. It felt worth considering.
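For comparison, here is a minimal sketch (my illustration, not part of the problem above) of the equally spaced sampling that RandomFunction uses:

(* sample one Wiener process path at fixed steps of 0.01 on [0, 1] *)
path = RandomFunction[WienerProcess[], {0, 1, 0.01}];
ListLinePlot[path]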

POSTED BY: Michael Rogers
Posted 2 months ago

Thanks Michael, I'll have to carefully consider the point you've made here. This approach may work nicely!

POSTED BY: Cameron Turner