Message Boards


Principle of RandomSearch method in NMinimize?

Posted 8 months ago
8 Replies
0 Total Likes

Hello everyone,

I am using the NMinimize function with the "RandomSearch" method explicitly chosen, to optimize a non-convex six-dimensional problem. The six variables are non-negative and sum to one.

Can someone explain to me how the "RandomSearch" method works in the Wolfram Language? It is unclear to me from the documentation.

For example: "... generating a population of random starting points ..." — how are admissible solutions obtained? From which (multivariate) distribution are we sampling? A similar question can be asked about the remaining three methods, "NelderMead", "DifferentialEvolution", and "SimulatedAnnealing".
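For context on what such a sampling distribution could look like: constraints of this form (non-negative variables summing to one) define the standard simplex, and one common way to sample it uniformly is to normalize independent exponential draws. Whether NMinimize actually does this I don't know; the function name below is my own. A Python sketch:

```python
import random

def sample_simplex(n, rng):
    """Draw one point uniformly from the standard (n-1)-simplex:
    n non-negative coordinates that sum to one."""
    # Independent Exp(1) draws, normalized, are Dirichlet(1, ..., 1),
    # i.e. uniform on the simplex.
    draws = [rng.expovariate(1.0) for _ in range(n)]
    total = sum(draws)
    return [d / total for d in draws]

rng = random.Random(0)
point = sample_simplex(6, rng)
print(all(x >= 0 for x in point), abs(sum(point) - 1.0) < 1e-12)
```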

The method seems to be different from the one described at the linked page, where hypercubes are mentioned. Am I right?

Thank you for your answers!

8 Replies

If I remember correctly, hypercubes are used for this. The exception: if there are explicit linear inequality constraints, linear programming is used to find viable points in "random" directions. For equality constraints, I believe variables are solved for in terms of the others, and the inequalities are adjusted accordingly.
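As a concrete illustration of that last sentence, applied to the constraint x1 + ... + x6 = 1: one variable can be solved for in terms of the others, and its non-negativity then becomes an induced inequality on the remaining five. This is my reading of the approach, not Wolfram's actual code, and the function names are my own. A Python sketch:

```python
def lift(free_vars):
    """Given 5 free variables, recover the full 6-vector satisfying
    x1 + ... + x6 = 1 by substituting x6 = 1 - sum(free_vars)."""
    return list(free_vars) + [1.0 - sum(free_vars)]

def feasible(free_vars):
    """After substitution, each free variable must satisfy x_i >= 0,
    and the induced inequality sum(free_vars) <= 1 (i.e. x6 >= 0)."""
    return all(v >= 0 for v in free_vars) and sum(free_vars) <= 1.0

print(lift([0.1, 0.2, 0.1, 0.3, 0.1]))
print(feasible([0.5, 0.6, 0.0, 0.0, 0.0]))  # False: sum exceeds 1
```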

Posted 8 months ago

Thank you for your answer Daniel!

Your answer gives me only a little insight into the method. For a deeper understanding I would like to know, for example, the implicit starting parameters: for "PenaltyFunction" (how is it chosen, and from which space of functions?) and for "InitialPoints" (how do I find them?). And how is the radius of the hypercube determined?
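On the "PenaltyFunction" question: I don't know which penalty NMinimize constructs internally, but the generic technique replaces a constrained problem with an unconstrained one by adding a term that grows with constraint violation. A minimal quadratic-penalty sketch in Python (the coefficient and the quadratic form are illustrative assumptions, not Wolfram's choices):

```python
def penalized(f, constraints, mu):
    """Return an unconstrained objective: f plus mu times the squared
    violation of each constraint g(x) <= 0."""
    def h(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + mu * violation
    return h

# Example: minimize x^2 subject to x >= 1, written as 1 - x <= 0.
f = lambda x: x * x
g = lambda x: 1.0 - x
p = penalized(f, [g], mu=100.0)
print(p(1.0))  # no violation: equals f(1.0) = 1.0
print(p(0.0))  # violation of 1: 0.0 + 100.0 * 1.0 = 100.0
```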

I found this in the documentation: "The random search algorithm works by generating a population of random starting points and uses a local optimization method from each of the starting points to converge to a local minimum. The best local minimum is chosen to be the solution." There are no hypercubes mentioned.
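That quoted description maps onto a simple loop: sample starting points, refine each with a local optimizer, and return the best local minimum. A self-contained Python sketch of that generic scheme, using plain gradient descent as a stand-in for whatever local method NMinimize really uses internally (the documentation only says "a local optimization method"):

```python
import random

def local_minimize(f, grad, x0, lr=0.1, steps=200):
    """Toy local optimizer: fixed-step gradient descent."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

def random_search(f, grad, lo, hi, n_points, seed=0):
    """Generate random starting points in [lo, hi], run the local
    optimizer from each, and return the best local minimum found."""
    rng = random.Random(seed)
    starts = [rng.uniform(lo, hi) for _ in range(n_points)]
    candidates = [local_minimize(f, grad, x0) for x0 in starts]
    return min(candidates, key=f)

# 1-D example with two basins: minima near x = -1 and x = 1.
f = lambda x: (x * x - 1.0) ** 2
grad = lambda x: 4.0 * x * (x * x - 1.0)
best = random_search(f, grad, -2.0, 2.0, n_points=10)
print(round(abs(best), 3))  # close to 1 (either basin)
```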

I have no reason to believe that the method is not good, but it is hard for me to defend using a method that is not properly cited or described.

Thank you for your feedback.

You can do much better random searches using ParametricIPOPTMinimize.

You can see that FindMinimum is called, presumably on random points, although I don't know why there are 2n+1 of them (there are 11 for me in V11.3, for n = 5):

 Trace[
  NMinimize[f, {x, y}, Method -> {"RandomSearch", "SearchPoints" -> 5}],
  _FindMinimum,
  TraceInternal -> True]
Posted 8 months ago

Thank you for participating in this topic. Let me summarize some conclusions:

  • We know that NMinimize calls FindMinimum (a local optimization method) on a certain number of points (see Michael Rogers' post).

  • The behavior of "SearchPoints" is strange (see Michael Rogers' post).

  • How the radius is determined is unclear.

  • How the penalty function is used is unclear.

  • How the pseudorandom generator of admissible solutions works is unclear. According to Daniel Lichtblau's post, "For equality constraints I believe variables are solved for in terms of others, and inequalities are changed accordingly." That sounds like we solve for one variable and treat the others as parameters of the solution. How do we set those parameters pseudo-randomly so that we somehow cover the admissible set?
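On that last bullet: one simple way (though not necessarily Wolfram's) to cover the admissible set after eliminating the equality constraint is rejection sampling — draw the free parameters from a bounding box and keep only draws that satisfy the induced inequalities. A Python sketch for the sum-to-one problem (function name is my own):

```python
import random

def sample_admissible(n_free, rng):
    """Sample n_free free variables uniformly from [0, 1]^n_free,
    accepting only if sum <= 1 so that the eliminated variable
    1 - sum is also non-negative; return the full vector."""
    while True:
        free = [rng.uniform(0.0, 1.0) for _ in range(n_free)]
        if sum(free) <= 1.0:
            return free + [1.0 - sum(free)]

rng = random.Random(1)
pts = [sample_admissible(5, rng) for _ in range(3)]
for p in pts:
    print(all(x >= 0 for x in p), abs(sum(p) - 1.0) < 1e-12)
```

Note that the acceptance rate of this scheme shrinks like 1/n! with the number of free variables, so in higher dimensions direct constructions (e.g. normalizing exponential draws) are preferred in practice.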

Posted 8 months ago

Is there someone who could write out the procedure for me in steps?


  1. Sample random points from the admissible set using [some] multivariate distribution.
  2. ….
  3. ….
  4. ….
  5. Result = […].

For now, we are not far from classifying the method as a black box.


How would you find an "admissible set"?

Posted 7 months ago

That is what I am asking for.
