# Faster ways to solve symmetric top problem?

Posted 12 days ago
258 Views
|
17 Replies
|
18 Total Likes
|
I'm currently trying to solve the symmetric top problem using Mathematica. It essentially comes down to solving this differential equation:

```mathematica
Sec[\[Theta][t]/2] (1 - Cos[\[Theta][t]] + 2 Cos[2 \[Theta][t]] +
   2 (1 + Cos[\[Theta][t]]) Derivative[1][\[Theta]][t]^2) == 0
```

(There are parameters, but I have set them to numerical values in this equation.)

I found this implementation, Nutation of a Symmetric Top - Wolfram Demonstrations Project, but it's quite complicated. So I tried to do it with NDSolve:

```mathematica
eqN = {Sec[\[Theta][t]/2] (1 - Cos[\[Theta][t]] + 2 Cos[2 \[Theta][t]] +
      2 (1 + Cos[\[Theta][t]]) Derivative[1][\[Theta]][t]^2) == 0, \[Theta][0] == 1};
solN = NDSolve[eqN, \[Theta][t], {t, 0, 4}]
```

On my computer, it gives warnings like "Maximum number of xxx steps reached at the point t == xxx". I plot the solution using:

```mathematica
plotf = \[Theta][t] /. solN;
Plot[plotf, {t, 0, 4}]
```

The result looks like this: (plot omitted)

It looks like the solver won't proceed past the point where theta reaches its max/min, but before that everything is fine.

So here is my question: without manually doing the integration the way the Demonstrations project does, is there any way to solve this using the NDSolve capabilities of Mathematica? Thanks in advance!
17 Replies
Sort By:
Posted 11 days ago
Ox,

NDSolve is solving the equation; however, your solution is becoming complex (you are integrating into a region in which the derivative of Theta is the square root of a negative number). This suggests that either your equations are wrong (for the physics you are modeling) or the top enters a region where those equations no longer apply (i.e., it falls over). I would start by checking the physics.

One way to try this is to use Wolfram SystemModeler to model the top.

I hope this helps.

Regards,
Neil
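As a quick check (a minimal sketch; the equation is the one from the original post), you can solve the equation symbolically for the derivative and see where the square root comes from:

```mathematica
eq = Sec[\[Theta][t]/2] (1 - Cos[\[Theta][t]] + 2 Cos[2 \[Theta][t]] +
      2 (1 + Cos[\[Theta][t]]) Derivative[1][\[Theta]][t]^2) == 0;

(* Solve for \[Theta]'[t]: both branches contain a Sqrt whose argument
   changes sign as \[Theta] varies, so the solution can turn complex *)
Solve[eq, Derivative[1][\[Theta]][t]] // Simplify
```

When the integrator steps into the region where the Sqrt argument is negative, the computed derivative is no longer real, which is consistent with the warnings in the original post.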
Posted 11 days ago
Well, I'm certain the equations are physical, since I obtained them directly from Goldstein's book, and the parameters are physical too. It would be strange if they led to complex numbers. Regardless, Mariusz Iwaniuk posted a solution to the exact same equation, so I don't think the equations are wrong. If so, why would Mathematica pick up complex numbers with the default solver? Thank you!
Posted 11 days ago
This may be off topic, but from what I see on the SystemModeler webpage, it doesn't have a built-in physics interface where one can draw, say, an ellipsoid and specify the gravity and fixed point. So how does one simulate this? Do we have to manually put in the differential equation?
Posted 11 days ago
Try:

```mathematica
eqN = {Sec[\[Theta][t]/2] (1 - Cos[\[Theta][t]] + 2 Cos[2 \[Theta][t]] +
      2 (1 + Cos[\[Theta][t]]) Derivative[1][\[Theta]][t]^2) == 0, \[Theta][0] == 1};
solN = NDSolve[eqN, \[Theta][t], {t, 0, 4*Pi},
   Method -> {"EquationSimplification" -> "Residual"}]
Plot[Evaluate[\[Theta][t] /. solN], {t, 0, 4*Pi}]
```

1:

```mathematica
eq = {Sec[\[Theta][t]/2] (1 - Cos[\[Theta][t]] + 2 Cos[2 \[Theta][t]] +
      2 (1 + Cos[\[Theta][t]]) Derivative[1][\[Theta]][t]^2) == 0};
INIT = Derivative[1][\[Theta]][0] /.
   Solve[eq /. t -> 0 /. \[Theta][0] -> 1,
    Derivative[1][\[Theta]][0]] (* a second initial condition *)
```

Then we have: $\theta '(0)=-\sqrt{\frac{-1+\cos (1)-2 \cos (2)}{2 (1+\cos (1))}}$ and $\theta '(0)=\sqrt{\frac{-1+\cos (1)-2 \cos (2)}{2 (1+\cos (1))}}$

```mathematica
sol = ParametricNDSolve[{eq, \[Theta][0] == 1, \[Theta]'[0] == a},
   \[Theta], {t, 0, 4*Pi}, {a},
   Method -> {"EquationSimplification" -> "Residual"}];
Plot[{\[Theta][(INIT[[1]])][t] /. sol, \[Theta][(INIT[[2]])][t] /. sol}, {t, 0, 4*Pi}]
```

2:

```mathematica
eq = {Sec[\[Theta][t]/2] (1 - Cos[\[Theta][t]] + 2 Cos[2 \[Theta][t]] +
      2 (1 + Cos[\[Theta][t]]) Derivative[1][\[Theta]][t]^2) == 0};
INIT = Derivative[1][\[Theta]][0] /.
   Solve[eq /. t -> 0 /. \[Theta][0] -> 1,
    Derivative[1][\[Theta]][0]] (* a second initial condition *)
sol = ParametricNDSolve[{D[eq, t], \[Theta][0] == 1, \[Theta]'[0] == a},
   \[Theta], {t, 0, 4*Pi}, {a}];
Plot[{\[Theta][(INIT[[1]])][t] /. sol, \[Theta][(INIT[[2]])][t] /. sol}, {t, 0, 4*Pi}]
```
Posted 11 days ago
It works! But now I am more confused. Why does this solving method avoid the problem? I couldn't find any description of this "Residual" thing in the Mathematica documentation. Is it more robust than the default method, or what? Thank you!
Posted 11 days ago
Hello,

I'm an ordinary user, not a Jedi Master in Mathematica. I only know what is written in the documentation. About "Residual", see here: (NDSolve -> Options -> Method -> EquationSimplification).

Regards, MI
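To illustrate the difference on a toy problem (my own example, not from the documentation page): with the default "Solve" simplification, NDSolve first rewrites the system in the explicit form x' = f(x, t), whereas "Residual" keeps it in implicit form F(x, x', t) == 0 and hands it to the DAE machinery:

```mathematica
(* Toy implicit ODE: x'[t]^2 == x[t] with x[0] == 1.
   The explicit form x'[t] == Sqrt[x[t]] is what "Solve" would produce;
   "Residual" instead works with x'[t]^2 - x[t] == 0 directly. *)
solResidual = NDSolve[{x'[t]^2 == x[t], x[0] == 1}, x, {t, 0, 2},
   Method -> {"EquationSimplification" -> "Residual"}];
Plot[Evaluate[x[t] /. solResidual], {t, 0, 2}]
```

This is just a sketch of the option's effect; the symmetric top equation in this thread has the same implicit structure, with the derivative appearing squared.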
Posted 11 days ago
That should teach me not to post without actually looking at the equations in detail! Sorry for the misinformation. Good catch, Mariusz!

> from what I see on the SystemModeler webpage, it doesn't have a built-in physics interface where one can draw, say, an ellipsoid and specify the gravity and fixed point. So how does one simulate this? Do we have to manually put in the differential equation?

As to SystemModeler, you would model the top as a rigid body using the Multibody library elements: specify the mass and inertia properties, the location of the center of mass, and the contact point. Constrain the contact point to a surface and give it some initial conditions. SystemModeler derives the equations of motion and solves them, including an animation. There is a gyroscope example among the WSM examples that shows a rotating, swinging body. Unfortunately I do not see a closer example. If I get a chance, I will knock up a quick model and post it for you.

Regards,
Neil
Posted 11 days ago
Here is the spinning top model in WSM. I sent it spinning and moving in one direction, then hit it with an off-center force at 2 seconds to send it off precessing. Here is the animation: (animation omitted)
Posted 11 days ago
Thanks for making this! Although I personally find solving the DEs more intuitive for a simple problem like this, I think WSM will save a lot of time in many other situations.
Posted 10 days ago
Ox,

As to your original question:

> But I am more confused. Why does this solving method avoid the problem? I couldn't find any description of this "Residual" thing in the Mathematica documentation. Is it more robust than the default method or what?

I had a chance to look at your equations. The Sqrt function has a discontinuous slope at zero, so it interferes with integrators, because they rely on the slope. The integrator is jumping into the complex solution space and can't return. Reducing the step size does not help.

The best way to solve this is to turn it into a DAE, which is what happens when you use Method -> {"EquationSimplification" -> "Residual"}. If you enter:

```mathematica
NDSolve[{\[Theta]'[t] == thPrime[t],
  eqN[[1]] /. \[Theta]'[t] -> thPrime[t],
  \[Theta][0] == 1}, {\[Theta][t], thPrime}, {t, 0, 4}]
```

this NDSolve is equivalent to yours, but it throws no errors. In this case I am manually doing what "Residual" is doing. In every NDSolve, one step along the way is to solve for the derivatives. In summary, the "Residual" method is not a better approach, but a way to convert an ODE into a DAE to avoid a problem associated with solving for the derivative.

A description of the details for DAEs and the "Residual" method can be found here. A detailed description of the inner workings of NDSolve can be found here.

Regards,
Neil
Posted 10 days ago
Neil,

Thanks a lot for investigating the equations for me. I have briefly read the references you linked to, and have one further question (sorry) regarding your reply. Here is the description of the "Residual" option on that page:

> "Residual": subtract right-hand sides of equations from left-hand sides to form a residual function F(x,x',t)=0

while the default option "Solve" is doing:

> "Solve": solve the system symbolically in the form x'=f(x,t) if possible

Doesn't the example you gave,

```mathematica
NDSolve[{\[Theta]'[t] == thPrime[t],
  eqN[[1]] /. \[Theta]'[t] -> thPrime[t],
  \[Theta][0] == 1}, {\[Theta][t], thPrime}, {t, 0, 4}]
```

appear to be more like the form x'=f(x,t) (the "Solve" option)? On my computer, this input gives the error:

NDSolve::ndcf: Repeated convergence test failure at t == 1.8522106545898993; unable to continue.

which is similar to the original problem. And here is the plot of \[Theta][t]: (plot omitted)

I think this is not how "Residual" works, if my interpretation is correct. Again, thanks for providing the useful references that made this discussion possible (I am going to read them carefully later).

Regards,
Y
Posted 10 days ago
Just a few comments on this part:

> the "Residual" method is not a better approach

I found a more detailed explanation in the reference you provided. I will just quote it for those who don't bother to read pages of documentation:

> When the system is put in a residual form, the derivatives are represented by a new set of variables that are generated to be unique symbols ... The process of generating an explicit system of ODEs may sometimes become expensive due to the symbolic treatment of the system
Posted 10 days ago
Ox,

What version of MMA are you running? On my version, NDSolve completes for all 4 seconds using the manually formed DAE formulation.

You are correct that you specified the problem in the form f(x,x',t)==0. However, NDSolve goes through a series of steps in solving the problem. One step ("EquationSimplification") is to solve your equation again to put it in a new form. The default form is x'=f(x,t). When it does this, it gets a Sqrt on the right-hand side, which leads to your numerical problem. The alternative is to use "Residual". This option takes your equation and substitutes new variables for each of the derivatives, so you have an expression g(x,newx,t)==0, where the newx's are the derivatives in the original expression. Now this problem is a DAE: you have an algebraic equation relating the x's and the newx's, and differential equations relating x'[t] == newx[t].

If you want to look "under the hood", you can use the following commands to see what MMA is actually doing:

```mathematica
Needs["DifferentialEquations`InterpolatingFunctionAnatomy`"];
Needs["DifferentialEquations`NDSolveUtilities`"];
state = First@NDSolve`ProcessEquations[eqN, {\[Theta][t]}, {t},
    Method -> {"EquationSimplification" -> "Residual"}];
state["NumericalFunction"]["FunctionExpression"]
```

Out[25]= Function[{t, \[Theta], \[Theta]$12339105}, {(1 - Cos[\[Theta]] + 2 \[Theta]$12339105^2 (1 + Cos[\[Theta]]) + 2 Cos[2 \[Theta]]) Sec[\[Theta]/2]}]

```mathematica
state = First@NDSolve`ProcessEquations[eqN, {\[Theta][t]}, {t}];
state["NumericalFunction"]["FunctionExpression"]
```

Out[27]= Function[{t, \[Theta]}, {-(Sqrt[-1 + Cos[\[Theta]] - 2 Cos[2 \[Theta]]]/Sqrt[2 + 2 Cos[\[Theta]]])}]

Note that the residual case adds a new variable, while the default EquationSimplification results in a Sqrt[].
(As an aside: note that you only get one branch of the solution using the DAE approach; there is a second branch that you can get by adding a (consistent) initial condition to the problem. See the example that Mariusz pointed to in the NDSolve docs under the "Method" option.)

Regards,
Neil
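The aside above can be sketched as follows (a minimal sketch; the consistent initial slopes are the two roots of the algebraic constraint at t = 0, obtained the same way Mariusz did earlier in the thread):

```mathematica
eqN = {Sec[\[Theta][t]/2] (1 - Cos[\[Theta][t]] + 2 Cos[2 \[Theta][t]] +
      2 (1 + Cos[\[Theta][t]]) Derivative[1][\[Theta]][t]^2) == 0, \[Theta][0] == 1};

(* Consistent initial slopes: the two roots of the constraint at t = 0 *)
inits = Derivative[1][\[Theta]][0] /.
   Solve[eqN[[1]] /. t -> 0 /. \[Theta][0] -> 1, Derivative[1][\[Theta]][0]];

(* One NDSolve per branch, each with its own consistent \[Theta]'[0] *)
sols = Table[
   NDSolve[{eqN[[1]], \[Theta][0] == 1, \[Theta]'[0] == v}, \[Theta], {t, 0, 4},
    Method -> {"EquationSimplification" -> "Residual"}], {v, inits}];
Plot[Evaluate[\[Theta][t] /. Flatten[sols, 1]], {t, 0, 4}]
```

Each choice of initial slope selects one branch, which is why the DAE formulation alone (with only \[Theta][0] given) returns only one of the two solutions.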
Posted 9 days ago
Neil,

I am using MMA 11.0.1. I have tried a few times, but the error message is still there.

One question I still have is how not simplifying the equation avoids the problem, i.e., how the solver avoids integrating the derivative of theta (thPrime) during the solving process. I decided to check whether the methods MMA uses to solve them are different. I used this code from StackExchange:

```mathematica
Select[Flatten[
  Trace[NDSolve[eqN, \[Theta][t], {t, 0, 4}], TraceInternal -> True]],
 ! FreeQ[#, Method | NDSolve`MethodData] &]
```

For the original problem, MMA uses the LSODA solver, which "solves systems dy/dt = f with a dense or banded Jacobian". If the Method -> {"EquationSimplification" -> "Residual"} option is added, the method changes to IDA, an "Implicit Differential-Algebraic solver". Maybe this solver never integrates thPrime but gets theta another way? I tried to specify the solver:

```mathematica
NDSolve[eqN, \[Theta][t], {t, 0, 4}, Method -> {"IDA"}]
```

On my computer, it gives the error:

NDSolve::mconly: For the method IDA, only machine real code is available. Unable to continue with complex values or beyond floating-point exceptions.

There are 2 branches of the solution; one stops at t=0.695 and the other at t=1.84. It looks like there are still complex values at some point. I will experiment a bit to see how it works.

Regards,
Y
Ox,

I am running 11.3, and the snippet of code below works. Please try quitting your kernel or restarting MMA, and then try the following code:

```mathematica
eqN = {Sec[\[Theta][t]/2] (1 - Cos[\[Theta][t]] + 2 Cos[2 \[Theta][t]] +
      2 (1 + Cos[\[Theta][t]]) Derivative[1][\[Theta]][t]^2) == 0, \[Theta][0] == 1};
sol = NDSolve[{\[Theta]'[t] == thPrime[t],
    eqN[[1]] /. \[Theta]'[t] -> thPrime[t],
    \[Theta][0] == 1}, {\[Theta][t], thPrime[t]}, {t, 0, 4}];
Plot[\[Theta][t] /. sol, {t, 0, 4}]
```

The DAE solver in MMA is IDA (a portion of SUNDIALS, developed by Lawrence Livermore National Labs). If you apply the same tracing code that you posted to the "new" system with thPrime, you see it also uses IDA, because it becomes a DAE problem:

```mathematica
Select[Flatten[
  Trace[NDSolve[{\[Theta]'[t] == thPrime[t],
     eqN[[1]] /. \[Theta]'[t] -> thPrime[t],
     \[Theta][0] == 1}, {\[Theta][t], thPrime}, {t, 0, 4}],
   TraceInternal -> True]], ! FreeQ[#, Method | NDSolve`MethodData] &]
```

IDA will only be used for DAE systems, so you cannot specify it for your original system.

Now to your real question: "Why is the DAE solver better at this problem?" First, the reason the Sqrt fails is that the algorithm takes a step in theta near where the argument of the Sqrt goes to zero. The step in theta makes the derivative complex, and the solver does not know what to do with that solution. In fact, it just keeps on integrating, producing a complex result.

To see how changing to a DAE gets around this, let's solve your problem manually as a DAE. The first step is to add a new variable for theta'[t] and take a derivative of the algebraic constraint:

```mathematica
In[32]:= eqN[[1]] /. Derivative[1][\[Theta]][t] -> thPrime[t]

Out[32]= Sec[\[Theta][t]/2] (1 - Cos[\[Theta][t]] + 2 Cos[2 \[Theta][t]] +
    2 (1 + Cos[\[Theta][t]]) thPrime[t]^2) == 0

In[37]:= dalg = Simplify[
   D[eqN[[1]] /. Derivative[1][\[Theta]][t] -> thPrime[t], t] /.
    Derivative[1][\[Theta]][t] -> thPrime[t]]

Out[37]= Sec[\[Theta][t]/2]^2 thPrime[t] (5 Sin[\[Theta][t]/2] -
     9 Sin[(3 \[Theta][t])/2] - 6 Sin[(5 \[Theta][t])/2] -
     2 (Sin[\[Theta][t]/2] + Sin[(3 \[Theta][t])/2]) thPrime[t]^2 +
     32 Cos[\[Theta][t]/2]^3 Derivative[1][thPrime][t]) == 0
```

Solve this for thPrime'[t]:

```mathematica
In[38]:= dalgSolved = Solve[dalg, thPrime'[t]]

Out[38]= {{Derivative[1][thPrime][t] ->
    1/32 Sec[\[Theta][t]/2]^2 (9 Sec[\[Theta][t]/2] Sin[(3 \[Theta][t])/2] +
       6 Sec[\[Theta][t]/2] Sin[(5 \[Theta][t])/2] - 5 Tan[\[Theta][t]/2] +
       2 Sec[\[Theta][t]/2] Sin[(3 \[Theta][t])/2] thPrime[t]^2 +
       2 Tan[\[Theta][t]/2] thPrime[t]^2)}}
```

Your new system of equations is:

Equation 1:

```mathematica
In[40]:= eq1 = First@First@dalgSolved /. Rule -> Equal

Out[40]= Derivative[1][thPrime][t] ==
   1/32 Sec[\[Theta][t]/2]^2 (9 Sec[\[Theta][t]/2] Sin[(3 \[Theta][t])/2] +
      6 Sec[\[Theta][t]/2] Sin[(5 \[Theta][t])/2] - 5 Tan[\[Theta][t]/2] +
      2 Sec[\[Theta][t]/2] Sin[(3 \[Theta][t])/2] thPrime[t]^2 +
      2 Tan[\[Theta][t]/2] thPrime[t]^2)
```

Equation 2:

```mathematica
In[41]:= eq2 = \[Theta]'[t] == thPrime[t]

Out[41]= Derivative[1][\[Theta]][t] == thPrime[t]

In[42]:= eqN

Out[42]= {Sec[\[Theta][t]/2] (1 - Cos[\[Theta][t]] + 2 Cos[2 \[Theta][t]] +
     2 (1 + Cos[\[Theta][t]]) Derivative[1][\[Theta]][t]^2) == 0, \[Theta][0] == 1}
```

Find a starting value for thPrime[0]:

```mathematica
thpStart = N[Solve[eqN[[1]] /. Derivative[1][\[Theta]][t] -> thPrime[t] /.
     \[Theta][t] -> 1, thPrime[t]]]
```

Now you can solve this system as an ODE (no longer a DAE), but this new ODE has 2 variables instead of one, and no Sqrt is ever taken:

```mathematica
In[55]:= manualDAESol = NDSolve[{eq1, eq2, \[Theta][0] == 1,
    thPrime[0] == thPrime[t] /. thpStart[[2]]}, {\[Theta][t], thPrime[t]}, {t, 0, 4}]
```

You get the same answer. The advantage of the manual approach is that you also get the second solution by changing thpStart[[2]] to the other solution, thpStart[[1]]. (And now you can see why there are two solutions when you solve it manually.)

I hope this answers your question.

Regards,
Neil