# Solving 2 unknowns by using 12 equations

Posted 10 years ago
Dear All,

Usually we can obtain n unknowns from n equations. But here I want to obtain the best-fit values of 2 unknowns from 12 equations. I tried the command below:

```
FindRoot[{a b Sech[266 b] == 0.391, a b Sech[128.3 b] == 0.3169,
  a b Sech[158.5 b] == 0.3969, a b Sech[240.25 b] == 0.3117,
  a b Sech[156.8 b] == 0.6311, a b Sech[136.6 b] == 0.2162,
  a b Sech[363.95 b] == 0.2727, a b Sech[106.35 b] == 0.3987,
  a b Sech[54.3 b] == 0.3596, a b Sech[127.9 b] == 0.5211,
  a b Sech[179.7 b] == 0.27, a b Sech[105.3 b] == 0.889},
 {{a, 110}, {b, 0.007}}]
```

But it didn't work. Then I tried the command below:

```
NSolve[{a b Sech[266 b] == 0.391, a b Sech[128.3 b] == 0.3169,
  a b Sech[158.5 b] == 0.3969, a b Sech[240.25 b] == 0.3117,
  a b Sech[156.8 b] == 0.6311, a b Sech[136.6 b] == 0.2162,
  a b Sech[363.95 b] == 0.2727, a b Sech[106.35 b] == 0.3987,
  a b Sech[54.3 b] == 0.3596, a b Sech[127.9 b] == 0.5211,
  a b Sech[179.7 b] == 0.27, a b Sech[105.3 b] == 0.889},
 {a, b}, Reals]
```

But it didn't work either. So is it possible to obtain a and b from 12 equations?

Thanks for your attention.

Best Regards,
Intan
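With 12 equations in only 2 unknowns the system is overdetermined, so in general no exact root exists for FindRoot or NSolve to find; the usual remedy is a least-squares fit. As a cross-check outside Mathematica, here is a minimal sketch in Python, assuming NumPy and SciPy are available, using the same starting point {a, 110}, {b, 0.007} as the FindRoot attempt:

```python
import numpy as np
from scipy.optimize import least_squares

# The 12 (x, y) pairs from the equations a b Sech[x b] == y.
x = np.array([266.0, 128.3, 158.5, 240.25, 156.8, 136.6,
              363.95, 106.35, 54.3, 127.9, 179.7, 105.3])
y = np.array([0.391, 0.3169, 0.3969, 0.3117, 0.6311, 0.2162,
              0.2727, 0.3987, 0.3596, 0.5211, 0.27, 0.889])

def residuals(p):
    a, b = p
    # sech(t) = 1/cosh(t)
    return a * b / np.cosh(b * x) - y

# Minimize the sum of squared residuals, starting from a = 110, b = 0.007.
fit = least_squares(residuals, x0=[110.0, 0.007])
a, b = fit.x
```

Because the data are noisy, the best fit still leaves residuals of a few tenths; no choice of a and b satisfies all 12 equations exactly.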
5 Replies
Posted 10 years ago
Dear Daniel and Bill,

Thanks a lot for your explanations. It's really helpful.

Thanks and Regards,
Intan
Posted 10 years ago
a. Your two constraints are fine. NMinimize will search within, and up to the edge of, the region you have specified and return the best solution it can find there.

b. I do not believe it is possible to give NMinimize a constraint on the maximum error. It is, however, possible to have NMinimize minimize a different objective: the largest of the squared errors instead of their sum.

```
In:= sol = NMinimize[{Max[
     (a b Sech[266 b] - 0.391)^2, (a b Sech[128.3 b] - 0.3169)^2,
     (a b Sech[158.5 b] - 0.3969)^2, (a b Sech[240.25 b] - 0.3117)^2,
     (a b Sech[156.8 b] - 0.6311)^2, (a b Sech[136.6 b] - 0.2162)^2,
     (a b Sech[363.95 b] - 0.2727)^2, (a b Sech[106.35 b] - 0.3987)^2,
     (a b Sech[54.3 b] - 0.3596)^2, (a b Sech[127.9 b] - 0.5211)^2,
     (a b Sech[179.7 b] - 0.27)^2, (a b Sech[105.3 b] - 0.889)^2],
    {a > 100, b > .006}}, {a, b}]

Out= {0.100338, {a -> 107.292, b -> 0.00672948}}

(* Look at the size of the errors *)
In:= {a b Sech[266 b] - 0.391, a b Sech[128.3 b] - 0.3169,
   a b Sech[158.5 b] - 0.3969, a b Sech[240.25 b] - 0.3117,
   a b Sech[156.8 b] - 0.6311, a b Sech[136.6 b] - 0.2162,
   a b Sech[363.95 b] - 0.2727, a b Sech[106.35 b] - 0.3987,
   a b Sech[54.3 b] - 0.3596, a b Sech[127.9 b] - 0.5211,
   a b Sech[179.7 b] - 0.27, a b Sech[105.3 b] - 0.889} /. sol[[2]]

Out= {-0.15645, 0.200137, 0.0474598, -0.0358702, -0.182727, 0.28068, -0.148911, 0.171069, 0.316762, -0.00309119, 0.125682, -0.316762}
```

Notice that the largest error here is smaller than the largest error in my previous post, but the other, smaller errors are now somewhat larger with this method. You should decide exactly which quantity is the most important one for you to minimize. There may be no solution with error < 0.001.

c. Any expression can be used in place of the square, provided it measures how good the result is for your variables and treats a value smaller than the desired one as just as undesirable as a value greater than it. You could use the absolute value, Abs[a b Sech[266 b] - 0.391], or the fourth power, or any other function that is greater than or equal to zero and preferably symmetric around zero.
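Minimizing the Max of the squared errors is a minimax (Chebyshev) fit, which is why the two largest residuals above end up with equal magnitude. The same idea can be sketched in Python, assuming SciPy; Nelder-Mead is used here because the max of absolute residuals is not smooth:

```python
import numpy as np
from scipy.optimize import minimize

# Same 12 (x, y) pairs as in the thread.
x = np.array([266.0, 128.3, 158.5, 240.25, 156.8, 136.6,
              363.95, 106.35, 54.3, 127.9, 179.7, 105.3])
y = np.array([0.391, 0.3169, 0.3969, 0.3117, 0.6311, 0.2162,
              0.2727, 0.3987, 0.3596, 0.5211, 0.27, 0.889])

def max_abs_err(p):
    a, b = p
    # Worst-case absolute residual of the model a*b*sech(b*x).
    return float(np.max(np.abs(a * b / np.cosh(b * x) - y)))

# Derivative-free search, starting near the thread's initial guess.
fit = minimize(max_abs_err, x0=[110.0, 0.007], method="Nelder-Mead")
```

Minimizing the maximum error trades a smaller worst case for larger typical errors, exactly as Bill observes when comparing his two posts.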
Posted 10 years ago
When searching for "best approximation" solutions to overdetermined systems of equations, a fairly common tactic is to minimize a sum of squares. It is not the only approach, but it is perhaps the most commonly used one.

As for the results in the constrained examples: they just mean that NMinimize is finding its best values on the constraint boundary.
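The boundary effect can be reproduced with a toy problem; a minimal Python sketch, assuming SciPy. When the unconstrained minimum lies outside the feasible region, a constrained minimizer returns a point on the boundary, which is what happened with b -> 0.006 and a -> 110. above:

```python
from scipy.optimize import minimize

# Unconstrained, (x - 1)^2 is minimized at x = 1. With the bound x >= 2,
# the best feasible value sits on the boundary, so the optimizer returns
# x = 2 -- analogous to NMinimize returning b -> 0.006 under b > 0.006.
res = minimize(lambda v: (v[0] - 1.0) ** 2, x0=[3.0], bounds=[(2.0, None)])
```

Getting an answer pinned to a bound is therefore not an error; it is a sign that the constraint, not the data, is determining that parameter.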
Posted 10 years ago
Dear Bill,

Thanks a lot for your answer, it really helps! I tried it out and, yes, I got the same results as yours. I have only just learned about this NMinimize command, so may I ask you some further questions:

a. NMinimize finds the global minimum of a problem with or without constraints. In this case the constraint is a > 100. If I add one more constraint, on b, the values I obtain for a and b sit at the thresholds themselves:

```
In:= solution = NMinimize[{(a b Sech[266 b] - 0.391)^2 + (a b Sech[128.3 b] - 0.3169)^2 +
     (a b Sech[158.5 b] - 0.3969)^2 + (a b Sech[240.25 b] - 0.3117)^2 +
     (a b Sech[156.8 b] - 0.6311)^2 + (a b Sech[136.6 b] - 0.2162)^2 +
     (a b Sech[363.95 b] - 0.2727)^2 + (a b Sech[106.35 b] - 0.3987)^2 +
     (a b Sech[54.3 b] - 0.3596)^2 + (a b Sech[127.9 b] - 0.5211)^2 +
     (a b Sech[179.7 b] - 0.27)^2 + (a b Sech[105.3 b] - 0.889)^2,
    {a > 100, b > 0.006}}, {a, b}]

Out= {0.387513, {a -> 100.576, b -> 0.006}}
```

Similarly, if I change the constraints to a > 110 and b > 0.006, the obtained a and b become 110 and 0.006:

```
In:= solution = NMinimize[{(a b Sech[266 b] - 0.391)^2 + (a b Sech[128.3 b] - 0.3169)^2 +
     (a b Sech[158.5 b] - 0.3969)^2 + (a b Sech[240.25 b] - 0.3117)^2 +
     (a b Sech[156.8 b] - 0.6311)^2 + (a b Sech[136.6 b] - 0.2162)^2 +
     (a b Sech[363.95 b] - 0.2727)^2 + (a b Sech[106.35 b] - 0.3987)^2 +
     (a b Sech[54.3 b] - 0.3596)^2 + (a b Sech[127.9 b] - 0.5211)^2 +
     (a b Sech[179.7 b] - 0.27)^2 + (a b Sech[105.3 b] - 0.889)^2,
    {a > 110, b > 0.006}}, {a, b}]

Out= {0.405625, {a -> 110., b -> 0.006}}
```

So does this mean that using 2 constraints when solving for 2 unknowns is not acceptable?

b. About the error, i.e. the discrepancy between the real solution and the estimated solution: is it possible to add a boundary condition on the error, such as requiring error < 0.001?

c. About the command itself: the equations were rearranged by moving the rhs to the lhs of each equation, and then all of them were summed up. Is that the standard recipe for finding a global solution to several equations? And what is the meaning of squaring each equation? Is there a reference where I can learn more about this command? I only know this one: http://reference.wolfram.com/mathematica/ref/NMinimize.html?q=NMinimize&lang=en

Thanks in advance for your reply.

Best Regards,
Intan
Posted 10 years ago
Perhaps this?

```
In:= sol = NMinimize[{(a b Sech[266 b] - 0.391)^2 + (a b Sech[128.3 b] - 0.3169)^2 +
     (a b Sech[158.5 b] - 0.3969)^2 + (a b Sech[240.25 b] - 0.3117)^2 +
     (a b Sech[156.8 b] - 0.6311)^2 + (a b Sech[136.6 b] - 0.2162)^2 +
     (a b Sech[363.95 b] - 0.2727)^2 + (a b Sech[106.35 b] - 0.3987)^2 +
     (a b Sech[54.3 b] - 0.3596)^2 + (a b Sech[127.9 b] - 0.5211)^2 +
     (a b Sech[179.7 b] - 0.27)^2 + (a b Sech[105.3 b] - 0.889)^2,
    a > 100}, {a, b}]

During evaluation of In:= NMinimize::cvmit: Failed to converge to the requested accuracy or precision within 100 iterations. >>

Out= {0.342799, {a -> 141.072, b -> 0.00350932}}

(* See how much error there is *)
In:= {a b Sech[266 b] - 0.391, a b Sech[128.3 b] - 0.3169,
   a b Sech[158.5 b] - 0.3969, a b Sech[240.25 b] - 0.3117,
   a b Sech[156.8 b] - 0.6311, a b Sech[136.6 b] - 0.2162,
   a b Sech[363.95 b] - 0.2727, a b Sech[106.35 b] - 0.3987,
   a b Sech[54.3 b] - 0.3596, a b Sech[127.9 b] - 0.5211,
   a b Sech[179.7 b] - 0.27, a b Sech[105.3 b] - 0.889} /. sol[[2]]

Out= {-0.0538205, 0.131903, 0.0303526, 0.0478318, -0.202564, 0.226966, -0.01655, 0.0637825, 0.126613, -0.0720312, 0.140667, -0.425912}
```
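A side note on this sum-of-squares objective: once b is fixed, the model a b Sech[x b] is linear in the amplitude c = a b, so the optimal c has a closed form and the problem reduces to a one-dimensional search over b (the variable projection idea). This is not a method used in the thread, just an assumed alternative; a Python sketch, assuming NumPy and SciPy:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Same 12 (x, y) pairs as in the thread.
x = np.array([266.0, 128.3, 158.5, 240.25, 156.8, 136.6,
              363.95, 106.35, 54.3, 127.9, 179.7, 105.3])
y = np.array([0.391, 0.3169, 0.3969, 0.3117, 0.6311, 0.2162,
              0.2727, 0.3987, 0.3596, 0.5211, 0.27, 0.889])

def ss(b):
    s = 1.0 / np.cosh(b * x)          # sech(b*x) basis values
    c = (y @ s) / (s @ s)             # closed-form optimal amplitude c = a*b
    return float(np.sum((c * s - y) ** 2))

# One-dimensional bounded search over b alone; the bracket is an assumption.
res = minimize_scalar(ss, bounds=(1e-6, 0.02), method="bounded")
b = res.x
s = 1.0 / np.cosh(b * x)
c = (y @ s) / (s @ s)
a = c / b                             # recover a from the fitted amplitude
```

Reducing the search to one variable avoids the slow two-parameter convergence that triggered the NMinimize::cvmit warning above.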