Why Doesn't Minimize Use the Kuhn-Tucker Equations for Bounded Problems?

From the documentation:
For linear cases, Minimize and Maximize use exact linear programming methods. For polynomial cases they use cylindrical algebraic decomposition. For analytic cases in closed and bounded boxes they use methods based on Lagrange multipliers.

The following example is a bounded problem that can be solved via the Kuhn-Tucker equations (Lagrange multipliers extended to inequality constraints). Minimize does not solve it, but Reduce can solve the Kuhn-Tucker equations.
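For reference (my own summary, stated in the notation of the code below): for a minimization of obj over vars subject to equality constraints h == 0 and inequality constraints g <= 0, the Kuhn-Tucker system consists of stationarity D[obj + \[Lambda].h + \[Mu].g, {vars}] == 0, dual feasibility \[Mu] >= 0, complementary slackness \[Mu]*g == 0, and the original (primal) constraints. The function KTEqs defined below generates exactly this system.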

Objective function, constraints, and variables of the problem:

In[1]:= obj = x^2 - y^2;
cons = {Cos[x - y] >= 1/2, -5 <= x <= 5, -5 <= y <= 5};
vars = {x, y};
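Optionally, a quick look at the feasible region (my own addition, using the box bounds as the plot range). The transcendental constraint restricts (x, y) to diagonal bands around the lines x - y == 2 Pi k:

RegionPlot[Cos[x - y] >= 1/2, {x, -5, 5}, {y, -5, 5}]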

 Minimize does not find a solution

In[2]:= Minimize[{obj, Sequence @@ cons}, vars]

Out[2]= Minimize[{x^2 - y^2, Cos[x - y] >= 1/2, -5 <= x <= 5, -5 <= y <= 5}, {x, y}]
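The constraint Cos[x - y] >= 1/2 is transcendental, so none of the exact methods quoted from the documentation applies: the problem is neither linear nor polynomial, and the feasible region is not a box. For contrast (my own check, not from the original post), dropping that constraint leaves a polynomial problem over a box that Minimize does solve:

Minimize[{x^2 - y^2, -5 <= x <= 5, -5 <= y <= 5}, {x, y}]

The minimum there is -25, attained at x -> 0 on the boundary where y is 5 or -5.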

A function to generate the Kuhn-Tucker equations for a minimization problem:

In[3]:= KTEqs[obj_, cons_List, vars_] :=
 Module[{consconvrule = {GreaterEqual[x_, y_] -> LessEqual[y - x, 0],
     Equal[x_, y_] -> Equal[x - y, 0],
     LessEqual[x_, y_] -> LessEqual[x - y, 0],
     LessEqual[lb_, x_, ub_] -> LessEqual[(x - lb) (x - ub), 0],
     GreaterEqual[ub_, x_, lb_] -> LessEqual[(x - lb) (x - ub), 0]},
    stdcons, eqcons, ineqcons, lambdas, mus, lagrangian, eqs1, eqs2, eqs3,
    alleqns, allvars},
  (* convert all constraints to the standard forms expr == 0 and expr <= 0 *)
  stdcons = cons /. consconvrule;
  eqcons = Cases[stdcons, Equal[_, 0]][[All, 1]];
  ineqcons = Cases[stdcons, LessEqual[_, 0]][[All, 1]];
  (* one multiplier per constraint: \[Lambda] for equalities, \[Mu] for inequalities *)
  lambdas = Array[\[Lambda], Length[eqcons]];
  mus = Array[\[Mu], Length[ineqcons]];
  lagrangian = obj + lambdas.eqcons + mus.ineqcons;
  eqs1 = Thread[D[lagrangian, {vars}] == 0]; (* stationarity *)
  eqs2 = Thread[mus >= 0]; (* dual feasibility *)
  eqs3 = Thread[mus*ineqcons == 0]; (* complementary slackness *)
  alleqns = Join[eqs1, eqs2, eqs3, cons]; (* plus the original (primal) constraints *)
  allvars = Join[vars, lambdas, mus];
  {alleqns, allvars}
  ]
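A quick sanity check of KTEqs on a one-dimensional problem (my own test, not from the original post): minimizing x^2 subject to x >= 1, where the Kuhn-Tucker system should force x == 1 with multiplier \[Mu][1] == 2:

Reduce[Sequence @@ KTEqs[x^2, {x >= 1}, {x}], Backsubstitution -> True]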

Kuhn-Tucker equations for this problem 

In[4]:= kteqs = KTEqs[obj, cons, vars];
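Optionally inspect the generated equations and inequalities before solving (my own addition):

Column[kteqs[[1]]]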

Solution found by Reduce:

In[5]:= sln = Reduce[Sequence @@ kteqs, Backsubstitution -> True];

Convert the solution to rules:

In[6]:= slnrls =  List @@ (ToRules /@ sln);

Look at the numerical objective-function values to see how many global minima there are; the duplicated values come from the problem's invariance under (x, y) -> (-x, -y):

In[7]:=  N[obj /. slnrls]

Out[7]= {0., -9.37535, -24.9443, -9.37535, -24.9443}

Find the objective-function values and pair them with the solution rules:

In[8]:= res = Thread[{obj /. slnrls, slnrls}];

Take the best two results 

In[9]:= Sort[res, #1[[1]] < #2[[1]] &][[{1, 2}]]

Out[9]= {{-25 + 25/9 (-3 + \[Pi])^2, {y -> 5, \[Mu][2] -> 0, 
   x -> -(5/3) (-3 + \[Pi]), \[Mu][1] -> (20 (-3 + \[Pi]))/(
    3 Sqrt[3]), \[Mu][3] -> 
    1/9 (9 + 6 Sqrt[3] Sin[5 + 5/3 (-3 + \[Pi])] - 
       2 Sqrt[3] \[Pi] Sin[5 + 5/3 (-3 + \[Pi])])}}, {-25 + 
   25/9 (-3 + \[Pi])^2, {y -> -5, \[Mu][2] -> 0, 
   x -> 5/3 (-3 + \[Pi]), \[Mu][1] -> (20 (-3 + \[Pi]))/(
    3 Sqrt[3]), \[Mu][3] -> 
    1/9 (9 + 6 Sqrt[3] Sin[5 + 5/3 (-3 + \[Pi])] - 
       2 Sqrt[3] \[Pi] Sin[5 + 5/3 (-3 + \[Pi])])}}}

Examine the numerical values of the global minima

In[10]:= N[%]

Out[10]= {{-24.9443, {y -> 5., \[Mu][2.] -> 0., 
   x -> -0.235988, \[Mu][1.] -> 0.54499, \[Mu][3.] -> 
    1.0472}}, {-24.9443, {y -> -5., \[Mu][2.] -> 0., 
   x -> 0.235988, \[Mu][1.] -> 0.54499, \[Mu][3.] -> 1.0472}}}
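As a check (my own verification, not from the original post), substituting one of the exact minimizers into the constraints confirms that the Cos constraint and the bound on y are active there (x - y == -5 Pi/3, and Cos[-5 Pi/3] == 1/2), while the bound on x is inactive, consistent with \[Mu][2] -> 0:

Simplify[cons /. {x -> -(5/3) (-3 + Pi), y -> 5}]

This should return {True, True, True}.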

Compare with the NMinimize result:

In[11]:= NMinimize[{obj, Sequence @@ cons}, vars]

During evaluation of In[11]:= NMinimize::cvmit: Failed to converge to the requested accuracy or precision within 100 iterations.

Out[11]= {-24.9443, {x -> 0.235988, y -> -5.}}
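The convergence warning can presumably be avoided by allowing more iterations than the default 100 (a hedged suggestion, not part of the original post):

NMinimize[{obj, Sequence @@ cons}, vars, MaxIterations -> 500]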
POSTED BY: Frank Kampas