Message Boards

Principle of RandomSearch method in NMinimize?


Hello everyone,

I am using the NMinimize procedure with the RandomSearch method explicitly chosen, to optimize a non-convex six-dimensional problem. The six variables are non-negative and sum to one.
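As an aside (this is not a claim about what NMinimize does internally): for this particular constraint set, non-negative variables summing to one, there is a standard way to draw admissible points uniformly, namely the flat Dirichlet distribution. A Python sketch for reference:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_simplex(n, size):
    """Draw `size` points uniformly from the standard (n-1)-simplex,
    i.e. non-negative coordinates summing to one (flat Dirichlet)."""
    return rng.dirichlet(np.ones(n), size=size)

# 100 admissible points for the six-variable problem above
pts = sample_simplex(6, 100)
```

Whether NMinimize uses anything like this for its starting population is exactly the open question.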

Can someone explain to me how the RandomSearch method works in the Wolfram Language? It is unclear from http://reference.wolfram.com/language/tutorial/ConstrainedOptimizationGlobalNumerical.html

For example, the tutorial says "... generating a population of random starting points ...". How are admissible solutions obtained? From which (multivariate) distribution are we sampling? A similar question may be asked about the remaining three methods, "NelderMead", "DifferentialEvolution", and "SimulatedAnnealing".

The method seems to be different from the one described at en.wikipedia.org/wiki/Random_search, where hypercubes are mentioned. Am I right?
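For comparison, the scheme on that Wikipedia page is an iterative local walk rather than a multistart method. A rough Python sketch (the hypercube neighborhood, the radius, and the iteration count are illustrative choices, not anything documented for NMinimize):

```python
import numpy as np

rng = np.random.default_rng(4)

def random_search(f, x0, radius=0.5, iters=500):
    """Textbook iterative random search: sample a candidate inside a
    hypercube of half-width `radius` around the current point, and
    move whenever the candidate improves the objective."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        y = x + rng.uniform(-radius, radius, size=x.shape)
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return x, fx

# minimize a simple bowl starting from (1, 1)
xbest, fbest = random_search(lambda v: float(np.sum(v**2)), [1.0, 1.0])
```

Note there is no population of starting points here, which is why the tutorial's wording suggests a different algorithm.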

Thank you for your answers!

POSTED BY: Lukas Vacek
Answer
2 months ago

If I remember correctly, hypercubes are used for this, except when there are explicit linear inequality constraints, in which case linear programming is used to find viable points in "random" directions. For equality constraints, I believe variables are solved for in terms of the others, and the inequalities are changed accordingly.
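A minimal Python illustration of the linear-programming idea described here, using scipy.optimize.linprog and the constraint set from the original question. This is my reading of the post above, not the actual NMinimize implementation:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Feasible set from the original question: x_i >= 0, Sum x_i = 1.
A_eq = np.ones((1, 6))
b_eq = [1.0]

def feasible_point(direction):
    """Find a feasible point by minimizing a linear objective
    (a "random" direction) over the constraint polytope."""
    res = linprog(direction, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 6)
    return res.x

p = feasible_point(rng.standard_normal(6))
```

Note that with a random linear objective each solve lands on a vertex of the feasible polytope (here, a coordinate unit vector); producing interior points would need some extra step, which is part of what remains unclear.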

POSTED BY: Daniel Lichtblau
Answer
2 months ago

Thank you for your answer Daniel!

That gives me only a little insight into the method. For a deeper understanding I would like to know, for example, the implicit starting parameters for "PenaltyFunction" (how is it chosen, and from which space of functions?) and for "InitialPoints" (how do I find them?). How is the radius of the hypercube determined?

I found this in the help: "The random search algorithm works by generating a population of random starting points and uses a local optimization method from each of the starting points to converge to a local minimum. The best local minimum is chosen to be the solution." No hypercubes are mentioned there.
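The quoted help text describes a plain multistart scheme. A Python sketch of exactly that description, with SciPy's local optimizer standing in for FindMinimum and a made-up objective; note that the way the starting points are drawn here is a guess, and how NMinimize draws them is the open question:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def f(v):
    # a made-up non-convex objective in two variables
    x, y = v
    return np.sin(3 * x) * np.cos(3 * y) + x**2 + y**2

# Random starting points, a local method from each start,
# keep the best local minimum found.
starts = rng.uniform(-2, 2, size=(25, 2))
results = [minimize(f, s) for s in starts]
best = min(results, key=lambda r: r.fun)
```

The unresolved questions in this thread are precisely the parts this sketch glosses over: the distribution of `starts` under constraints, and the number of starts actually used.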

I have no reason to believe that the method is not good, but it is hard for me to defend using a method that is not properly cited or described.

Thank you for your feedback.

POSTED BY: Lukas Vacek
Answer
2 months ago

You can do much better random searches using ParametricIPOPTMinimize:

http://community.wolfram.com/groups/-/m/t/1164680

POSTED BY: Frank Kampas
Answer
2 months ago

You can see that FindMinimum is called, presumably on random points, although I don't know why there are 2n + 1 calls (there are 11 for me in V11.3, for n = 5 search points):

(* f should be defined beforehand, e.g. f = (x - 1)^2 + (y - 2)^2; *)
Trace[
 NMinimize[f, {x, y}, Method -> {"RandomSearch", "SearchPoints" -> 5}],
 _FindMinimum,
 TraceInternal -> True
 ]
POSTED BY: Michael Rogers
Answer
2 months ago

Thank you for participating in this topic. Let me summarize some conclusions:

  • We know that the function calls FindMinimum (a local optimization method) on a certain number of certain points (see Michael Rogers's post).

  • The behavior of SearchPoints is strange (see Michael Rogers's post).

  • How the radius is determined is unclear.

  • Usage of the penalty function is unclear.

  • How the pseudo-random generator of admissible solutions works is unclear. According to Daniel Lichtblau's post, "For equality constraints I believe variables are solved for in terms of others, and inequalities are changed accordingly." That sounds like we solve for one variable and treat the others as parameters of the solution. How do we set those parameters pseudo-randomly so that we somehow cover the admissible set?
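To make the last point concrete, here is one possible reading of Daniel Lichtblau's description applied to the six-variable problem from the top of the thread. The elimination step follows his post; the rejection-sampling step is my own guess, not documented behavior:

```python
import numpy as np

rng = np.random.default_rng(3)

# Solve the equality Sum x_i = 1 for x6, so x6 = 1 - (x1 + ... + x5);
# the inequality x6 >= 0 becomes x1 + ... + x5 <= 1 on the free variables.
def sample_reduced():
    while True:
        x = rng.uniform(0, 1, 5)   # a box enclosing the reduced set
        if x.sum() <= 1:           # transformed inequality (guessed step)
            return np.append(x, 1 - x.sum())

p = sample_reduced()
```

Even if this reading is right, it leaves open exactly the question above: how the free parameters are actually drawn so that the admissible set is covered.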

POSTED BY: Lukas Vacek
Answer
2 months ago

Is there someone who could write out the procedure for me in steps?

Say:

  1. Sampling random points from the admissible set using [some] multivariate distribution.
  2. ….
  3. ….
  4. ….
  5. Result = […].

For now, we are not far from classifying the method as a black box.

Lukas.

POSTED BY: Lukas Vacek
Answer
2 months ago

How would you find an "admissible set"?

POSTED BY: Daniel Lichtblau
Answer
2 months ago

That is what I am asking for.

POSTED BY: Lukas Vacek
Answer
1 month ago
