# Speeding Up Matrix Expression Evaluation With Expand

Posted 10 years ago
In some cases, evaluation of expressions containing matrices can be sped up by wrapping the expression in Expand. Here's an example with a roughly fivefold speedup for an expression containing 50x50 matrices:

    In[12]:= n = 50;

    In[13]:= SeedRandom[0]; Dimensions[a = RandomReal[{-1, 1}, {n, n}]]
    Out[13]= {50, 50}

    In[14]:= SeedRandom[0]; Dimensions[b = RandomReal[{-1, 1}, n]]
    Out[14]= {50}

    In[15]:= v = Array[x, n];

    In[16]:= AbsoluteTiming @ NMaximize[-1/4 v.a.Transpose[a].v - b.v, v]
    Out[16]= {7.294668, {51.9929, {x[1] -> -8.79, x[2] -> -5.14004, x[3] -> 4.77431,
       x[4] -> -33.5074, x[5] -> -17.8675, x[6] -> -41.973, x[7] -> -14.8,
       x[8] -> 7.17246, x[9] -> 40.5797, x[10] -> -1.30352, x[11] -> 15.2247,
       x[12] -> 24.7157, x[13] -> 20.5358, x[14] -> -37.6259, x[15] -> -23.1615,
       x[16] -> 5.04749, x[17] -> 0.716219, x[18] -> -9.14941, x[19] -> 18.7447,
       x[20] -> 14.4649, x[21] -> 9.51407, x[22] -> -23.4491, x[23] -> 1.5991,
       x[24] -> 6.41455, x[25] -> 2.53314, x[26] -> 17.8861, x[27] -> -13.0104,
       x[28] -> 10.2575, x[29] -> -16.721, x[30] -> 15.549, x[31] -> -1.68674,
       x[32] -> 12.4654, x[33] -> 24.0295, x[34] -> -17.7568, x[35] -> 9.24454,
       x[36] -> -11.9835, x[37] -> -5.83835, x[38] -> 0.33071, x[39] -> 16.9103,
       x[40] -> -30.5781, x[41] -> -18.2771, x[42] -> -29.8421, x[43] -> -7.07211,
       x[44] -> -14.7736, x[45] -> -14.2617, x[46] -> 39.7569, x[47] -> 48.0993,
       x[48] -> -3.96191, x[49] -> 9.70208, x[50] -> -21.7266}}}

    In[17]:= AbsoluteTiming @ NMaximize[Expand[-1/4 v.a.Transpose[a].v - b.v], v]
    Out[17]= {1.386620, {51.9929, {x[1] -> -8.79, x[2] -> -5.14004, (* maximizer identical to Out[16] *)}}}
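What Expand exposes here is the explicit quadratic structure: after expansion the objective is a plain degree-2 polynomial with quadratic coefficients -1/4 a.Transpose[a] and linear term -b, which is easier for the optimizer to work with than the nested matrix expression. A NumPy sketch (my translation and variable names, not part of the original post) checking that the two forms agree:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.uniform(-1.0, 1.0, (n, n))
b = rng.uniform(-1.0, 1.0, n)
v = rng.uniform(-1.0, 1.0, n)

# Matrix form of the objective: -1/4 v.A.A^T.v - b.v
matrix_form = -0.25 * v @ A @ A.T @ v - b @ v

# "Expanded" form: explicit quadratic coefficients Q and linear term c,
# which is the structure Expand lays bare for the optimizer.
Q = -0.25 * (A @ A.T)
c = -b
expanded_form = v @ Q @ v + c @ v

print(np.isclose(matrix_form, expanded_form))  # -> True
```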
7 Replies
Posted 10 years ago
Yes, the "QuadraticProgramming" method for FindMinimum is quite nice, but it does require that the problem have a constraint, as I complained about in a separate post. I took Boyd's online course on Convex Optimization last year; the homework had to be done with CVX. I understand that MATLink was written by some Mathematica users who wanted to be able to access CVX from Mathematica.
Posted 10 years ago
I was reproducing an example in Boyd's book on Convex Optimization, which stops at the maximization. The maximization is the dual of minimizing v.v subject to a.v == b, which NMinimize can solve very quickly.
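The duality can be checked numerically. Here is a NumPy sketch (my translation, with my variable names): the stationary point of the dual objective -1/4 v.M.v - b.v with M = A.Aᵀ is v* = -2 M⁻¹ b, and its value matches the squared norm of the least-norm solution of A v = b.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.uniform(-1.0, 1.0, (n, n))
b = rng.uniform(-1.0, 1.0, n)
M = A @ A.T  # Gram matrix; positive definite for generic A

# Dual problem: maximize -1/4 v.M.v - b.v.
# Stationarity (-1/2 M v - b = 0) gives v* = -2 M^-1 b,
# and the maximum simplifies to b.M^-1.b.
v_dual = -2.0 * np.linalg.solve(M, b)
dual_max = -0.25 * v_dual @ M @ v_dual - b @ v_dual

# Primal problem: minimize v.v subject to A v = b.
# The least-norm solution is v* = A^T (A A^T)^-1 b,
# with value b.(A A^T)^-1.b — the same number.
v_primal = A.T @ np.linalg.solve(M, b)
primal_min = v_primal @ v_primal

print(dual_max, primal_min)  # equal: strong duality holds
```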
Posted 10 years ago
FindMinimum can be even faster if told this is a quadratic problem.

    Timing[{min, vals} = FindMinimum[{x.x, a.x == b}, x, Method -> "QuadraticProgramming"];]
    (* {0.011513, Null} *)

Very good book you are using. I need to get back to it some day (I was going through it 2-3 years ago and got to around p. 200).
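One reason a dedicated quadratic-programming method can be so fast: an equality-constrained QP reduces to a single linear solve of the KKT system. A hedged NumPy sketch of that reduction (not Mathematica's actual implementation; I use a rectangular A so the least-norm aspect is visible, whereas the thread's a is square):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 50                      # assumed sizes for illustration
A = rng.uniform(-1.0, 1.0, (m, n))
b = rng.uniform(-1.0, 1.0, m)

# minimize v.v subject to A v == b.  The KKT conditions are linear:
#   2 v + A^T lam = 0
#   A v           = b
# so the whole QP is one (n+m) x (n+m) linear solve.
K = np.block([[2.0 * np.eye(n), A.T],
              [A, np.zeros((m, m))]])
rhs = np.concatenate([np.zeros(n), b])
sol = np.linalg.solve(K, rhs)
v, lam = sol[:n], sol[n:]

print(v @ v)                       # minimal squared norm
print(np.abs(A @ v - b).max())     # constraint residual (tiny)
```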
Posted 10 years ago
You can speed it up slightly more using FindMinimum, or by solving the quadratic in closed form (is there a reason you are not going that route?):

    n = 50;
    SeedRandom[0];
    a = RandomReal[{-1, 1}, {n, n}];
    b = RandomReal[{-1, 1}, n];
    x = Array[xx, n];
    Timing[
     aat = a.Transpose[a];
     iaat = Inverse[aat];
     vals = -2*b.iaat;
     va = vals.a;
     max = -1/4 va.va - b.vals;
     {max, vals}
     ]
    (* {0.000278, {27.4582378854, {-13.7190746299, 5.0248966213, 4.27200941301,
       -5.27186128016, -9.34445324724, -16.7770271653, -10.2969820955,
       14.8082311631, 18.2881033362, 8.79472160829, 3.73982999401,
       25.6931835927, 15.0987923009, -32.8520372134, -17.6629530707,
       4.61315429798, -10.4093844839, -5.91550548946, 2.825412681,
       -5.4744873435, 1.75459547275, -8.63951633905, 7.73932523613,
       12.4463904122, 4.99696850235, 1.03233240236, -1.92340140008,
       0.0448214342458, -12.3600513517, 17.2642317887, -9.3190511584,
       16.5787124096, 9.79082324277, -9.25426886038, 3.42454996007,
       -2.00990959625, -5.07728002623, -8.45579303961, 10.077263217,
       -22.7847052432, -14.8615953665, -19.469187501, -4.28859347837,
       -21.1896979177, 2.43729925419, 18.9547713922, 18.3169977856,
       3.46879765417, 9.2438775981, -12.1369961499}}} *)
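The algebra behind that closed form is quick to verify in NumPy (a sketch with my variable names): setting the gradient -1/2 a.Transpose[a].v - b to zero gives v* = -2 (a.Transpose[a])⁻¹ b, i.e. the vals above, and the maximum simplifies to b.(a.Transpose[a])⁻¹.b.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.uniform(-1.0, 1.0, (n, n))
b = rng.uniform(-1.0, 1.0, n)
M = A @ A.T

# f(v) = -1/4 v.M.v - b.v,  grad f = -1/2 M v - b.
# Zero gradient  =>  v* = -2 M^-1 b  (the "vals" in the reply above).
v_star = -2.0 * np.linalg.solve(M, b)
f_max = -0.25 * v_star @ M @ v_star - b @ v_star

print(np.abs(-0.5 * M @ v_star - b).max())  # gradient at v*, ~ 0
print(f_max)                                # equals b.M^-1.b
```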
Posted 10 years ago
 Calculating a.Transpose[a] outside the maximization works slightly better than Expand.
Posted 10 years ago
If differentiation of the objective function is involved, then I would suspect that the expanded form is easier to differentiate and that its derivative is easier to evaluate. Your grouping could work in that direction as well.
Posted 10 years ago
This is pretty fast.

    AbsoluteTiming[{max, vals} = NMaximize[-1/4 (v.a).(Transpose[a].v) - b.v, v];]

It's all in the ordering of the operations. Grouping matrix.vector helps considerably, I guess.
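The grouping matters because it changes the cost per objective evaluation: (v.a).(Transpose[a].v) needs only matrix-vector products, O(n²) each, while forming a.Transpose[a] first is an O(n³) matrix-matrix product. A NumPy sketch (my translation) showing the groupings give the same value:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.uniform(-1.0, 1.0, (n, n))
v = rng.uniform(-1.0, 1.0, n)

# Same quadratic form, two groupings:
q1 = v @ (A @ A.T) @ v      # forms the n x n matrix product first: O(n^3)
q2 = (v @ A) @ (A.T @ v)    # only matrix-vector products: O(n^2)

print(q1, q2)               # identical up to rounding
```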