Find global and local optima of a function?

I have to find local and global optima of a function, e.g.:

f[x_, y_, z_] := x^3 + y^2 + z^2 + 12 x y + 2 z

I use the following Wolfram functions:

Maximize[f[x, y, z], {x, y, z}]
NMaximize[f[x, y, z], {x, y, z}]
FindMaximum[f[x, y, z], {x, y, z}]

Minimize[f[x, y, z], {x, y, z}]
NMinimize[f[x, y, z], {x, y, z}]
FindMinimum[f[x, y, z], {x, y, z}]
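A quick check (a sketch reusing `f` as defined above) shows why the global solvers cannot return a finite answer here: the function is cubic in x and therefore unbounded in both directions along the x-axis.

```wolfram
(* f is cubic in x, so it is unbounded below and above along the x-axis *)
f[x_, y_, z_] := x^3 + y^2 + z^2 + 12 x y + 2 z
Limit[f[x, 0, 0], x -> -Infinity]  (* -Infinity: no global minimum exists *)
Limit[f[x, 0, 0], x -> Infinity]   (* Infinity: no global maximum exists *)
```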

None of them finds any optimal points. So instead, I try to find the stationary/critical points of the function:

In[10]:= criticalPoints = Solve[D[f[x, y, z], {{x, y, z}}] == {0, 0, 0}, {x, y, z}]
matrixHessian = D[D[f[x, y, z], {{x, y, z}}], {{x, y, z}}]

Out[10]= {{x -> 0, y -> 0, z -> -1}, {x -> 24, y -> -144, z -> -1}}
Out[11]= {{6 x, 12, 0}, {12, 2, 0}, {0, 0, 2}}

There are two stationary/critical points. Now I have to examine the definiteness of the Hessian at these points:

In[22]:= PositiveDefiniteMatrixQ[matrixHessian /. criticalPoints[[1]]]
NegativeDefiniteMatrixQ[matrixHessian /. criticalPoints[[1]]]
PositiveDefiniteMatrixQ[matrixHessian /. criticalPoints[[2]]]
NegativeDefiniteMatrixQ[matrixHessian /. criticalPoints[[2]]]

Out[22]= False
Out[23]= False
Out[24]= True
Out[25]= False

In[20]:= NegativeSemidefiniteMatrixQ[matrixHessian /. criticalPoints[[1]]]
PositiveSemidefiniteMatrixQ[matrixHessian /. criticalPoints[[1]]]

Out[20]= False
Out[21]= False

In conclusion, the Hessian is indefinite at {x -> 0, y -> 0, z -> -1}, so that point is a saddle point; at {x -> 24, y -> -144, z -> -1} the Hessian is positive definite, so that point is a local minimum.
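The same classification can be read off the Hessian eigenvalues (a sketch reusing `matrixHessian` and `criticalPoints` from above): mixed signs indicate a saddle point, all-positive signs a local minimum.

```wolfram
(* Eigenvalue signs of the Hessian classify each critical point *)
Eigenvalues[N[matrixHessian /. criticalPoints[[1]]]]  (* one negative eigenvalue: saddle point *)
Eigenvalues[N[matrixHessian /. criticalPoints[[2]]]]  (* all positive: local minimum *)
```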

Also, this may be confirmed by applying FindMinimum with a suitable starting point:

In[27]:= FindMinimum[f[x, y, z], {{x, 24}, {y, -23}, {z, 0}}]

Out[27]= {-6913., {x -> 24., y -> -144., z -> -1.}}

I have solved the stated problem, but some dissatisfaction remains: is this an efficient approach to finding the optimal points of a function?
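For reuse, the manual steps above can be bundled into one helper. This is only a sketch of the approach already shown, and `classifyCriticalPoints` is a hypothetical name, not a built-in function:

```wolfram
(* Sketch: solve grad f == 0, then classify each point by Hessian definiteness *)
classifyCriticalPoints[f_, vars_List] :=
 Module[{grad = D[f, {vars}], hess = D[f, {vars, 2}], pts},
  pts = Solve[Thread[grad == 0], vars, Reals];
  {vars /. #,
    Which[
     PositiveDefiniteMatrixQ[hess /. #], "local minimum",
     NegativeDefiniteMatrixQ[hess /. #], "local maximum",
     True, "saddle or degenerate"]} & /@ pts]

classifyCriticalPoints[x^3 + y^2 + z^2 + 12 x y + 2 z, {x, y, z}]
```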

5 Replies

Minimize, Maximize, NMinimize and NMaximize look for global solutions. As you discovered, FindMinimum will work if you start it near your local minimum. If you want all the critical points for an optimization problem, use the Kuhn-Tucker equations.

POSTED BY: Frank Kampas

I use Wolfram Mathematica to teach a course on Optimization Methods. The example presented above is only an elementary sample that I use to explain how to find the optima of a function. I use the same approach to find the optima of a constrained problem by means of the Lagrange function and the KKT conditions. I addressed the question to the community because it seemed strange to me that, for such a simple problem, the built-in functions leave an inexperienced user with a chance of not finding the solution.
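As an illustration of that constrained workflow, here is a sketch: form the Lagrangian and solve its stationarity conditions. The constraint x + y + z == 1 is made up purely for the example.

```wolfram
(* Sketch: stationary points of the Lagrangian for a made-up constraint x + y + z == 1 *)
f[x_, y_, z_] := x^3 + y^2 + z^2 + 12 x y + 2 z
lagrangian = f[x, y, z] - \[Lambda] (x + y + z - 1);
Solve[Thread[D[lagrangian, {{x, y, z, \[Lambda]}}] == 0], {x, y, z, \[Lambda]}, Reals]
```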

The documentation for Minimize and Maximize states that they use the Kuhn-Tucker equations for bounded problems. However, since they find a global optimum, that doesn't help with your problem.

POSTED BY: Frank Kampas

Thank you for the comment, Frank. Sure, for a constrained optimization problem I can use the Karush-Kuhn-Tucker conditions. But the Wolfram Language is symbolic, and it is somewhat strange that the built-in functions do not apply the KKT conditions to find the optima.
