# [WSC20] Exploring staircase solutions to nonlinear wave equations

Posted 1 year ago

## Introduction

For some f[u], nonlinear wave equations of the form

∂²u/∂t² = ∂²u/∂x² + f[u]

admit staircase-shaped solutions, which this post explores.

## General information about nonlinear wave equations

This chapter discusses some general properties of nonlinear wave equations.

### Brief overview

Nonlinear wave equations are equations of the form

∂²u/∂t² = ∂²u/∂x² + f[u]    (1)

where f[u] is a real-valued function and x ranges from -∞ to +∞. Equation (1) is a nonlinear partial differential equation. It is described by a Lagrangian and conserves an energy function, which puts restrictions on its solutions; in particular, it means that u, u_t, and u_x must vanish as x → ±∞ for the energy to be finite.
### Numerical solutions

Let's have a look at several numerical solutions to (1) to get a feel for it. I am going to use the NDSolve function and plot solutions using Plot3D.

Firstly, let us specify the initial conditions. The choice is arbitrary; however, I'm going to start with a Gaussian. This function is defined for all x, is symmetric with respect to x = 0, and decays quickly while being infinitely differentiable for all values of x.

```
In[]:= initConds = {u[0, x] == Exp[-x^2], Derivative[1, 0][u][0, x] == 0};
```

Generally, u[t, x] denotes the value of u at time t and position x, and Derivative[n, m][u][t, x] is the same as ∂^(n+m) u / (∂t^n ∂x^m). Here and in the rest of the article, u_t ≡ ∂u/∂t, u_x ≡ ∂u/∂x, u_tt ≡ ∂²u/∂t², and u_tx ≡ ∂²u/∂t∂x.

Below is the plot of the initial values of u. As you can see, it decays very quickly; even at x = 3 it is already almost zero.

```
Plot[Exp[-x^2], {x, -3, 3}]
```

Now, let us define the boundaries. Ideally, (1) should be solved on an infinite interval of x. However, for practical reasons, let us instead solve the equation on the finite intervals x ∈ [-R, R] and t ∈ [0, T], where R and T are some positive numbers. I will take

```
In[]:= T = 20; R = 32;
```

Finally, we have the boundary conditions. In the original problem, the domain of x is infinite, and we don't have to worry about those. In the numerical approximation, however, boundary conditions are compulsory. I will choose them to be u_x[t, -R] = 0 and u_x[t, R] = 0.

```
In[]:= boundaryConds = {Derivative[0, 1][u][t, -R] == 0, Derivative[0, 1][u][t, R] == 0};
```

Lastly, we have our equation. In general, it is written as

```
In[]:= eqnTemplate = (Derivative[2, 0][u][t, x] == Derivative[0, 2][u][t, x] + f[u[t, x]]);
```

When we need to study the case of a specific f[u], we can just use a replacement like this:

```
eqnTemplate /. f -> Sin
```

Or like this:

```
eqnTemplate /. f -> Function[u, u - u^3]
```

Function[u, body] defines a pure function with the argument u. Note: Function[u, u - u^3] gives the same result as Function[x, x - x^3].
This is how we use NDSolve to obtain a numerical solution for (1):

```
soln1 = NDSolve[
  Join[initConds, boundaryConds, {eqnTemplate /. f -> Function[u, -u]}],
  u, {t, 0, T}, {x, -R, R}
] (* the curly brackets around the equation are here because of the rules of the Wolfram Language; they don't change the meaning *)

Out[]= {{u -> InterpolatingFunction[...]}}
```
Now we can plot our solution, just like that:

```
Plot3D[u[t, x] /. soln1, {t, 0, T}, {x, -R, R}, PlotRange -> All, PlotPoints -> 50]
```

For future use, I've defined a function that automatically uses NDSolve to give numerical solutions, removing the need to specify a ton of arguments every time we want to solve our beloved equation.

```
In[]:= solveThePDE[replacementRuleForF_Rule] := NDSolve[
  Join[initConds, boundaryConds, {eqnTemplate /. replacementRuleForF}],
  u, {t, 0, T}, {x, -R, R}
];
```

Here is how it works:

```
solveThePDE[f -> Cos]
```

Also, here is the plotter function:

```
In[]:= Options[myPlot] = {"Rasterize" -> True};
myPlot[solutionToPlot_, OptionsPattern[]] := Module[{plt},
  plt = Plot3D[
    u[t, x] /. solutionToPlot, {t, 0, T}, {x, -R, R},
    PlotRange -> All, PlotPoints -> 75
  ];
  If[OptionValue["Rasterize"], Rasterize[plt], plt]
];
```

The "Rasterize" option controls whether the plot is converted to an image. Converting dynamic plots to static images reduces the load on the computer; however, sometimes we might want to be able to spin and play around with the original plot.

Lastly, in the rest of this article I'll be working mainly not with f[u] but with V[u], as it gives more insight into the problem. NDSolve, however, only accepts actual differential equations and thus needs f[u]. Thus, I've created a function to calculate f[u] from V[u].

```
VtoF[Vrule_] := With[{expr = -D[(V /. Vrule)[u], u]},
  ToExpression["Function[u, " <> ToString[InputForm[expr]] <> "]"]]
```

It looks complicated, but basically what it does is take the derivative and multiply the result by -1. Here is an example.

```
VtoF[V -> Function[u, u^3]]
```

The derivative of u^3 is 3u²; multiplied by -1, it gives -3u².
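As an independent cross-check of the NDSolve setup above (my own sketch, not part of the original post), here is a minimal stdlib-only Python leapfrog scheme for (1) with f(u) = -u, the same Gaussian initial condition, and Neumann boundaries. All names here (`solve_fd`, `lap`) are my own.

```python
# Minimal finite-difference cross-check of equation (1), u_tt = u_xx + f(u),
# independent of NDSolve. Leapfrog (central differences) in t and x,
# f(u) = -u, Gaussian initial condition, Neumann boundaries u_x = 0.
import math

def solve_fd(f, R=32.0, T=20.0, nx=641, cfl=0.5):
    dx = 2 * R / (nx - 1)
    dt = cfl * dx                       # cfl < 1 keeps leapfrog stable
    nt = int(T / dt)
    x = [-R + i * dx for i in range(nx)]
    u_prev = [math.exp(-xi ** 2) for xi in x]   # u(0, x) = Exp[-x^2]
    # First step from u_t(0, x) = 0 via a Taylor expansion in t:
    u_curr = [u_prev[i] + 0.5 * dt**2 * (lap(u_prev, i, dx) + f(u_prev[i]))
              for i in range(nx)]
    for _ in range(nt - 1):
        u_next = [2 * u_curr[i] - u_prev[i]
                  + dt**2 * (lap(u_curr, i, dx) + f(u_curr[i]))
                  for i in range(nx)]
        u_prev, u_curr = u_curr, u_next
    return x, u_curr

def lap(u, i, dx):
    # Second x-derivative; mirrored points enforce Neumann boundaries u_x = 0.
    left = u[i - 1] if i > 0 else u[1]
    right = u[i + 1] if i < len(u) - 1 else u[-2]
    return (left - 2 * u[i] + right) / dx**2

x, u = solve_fd(lambda v: -v)   # f(u) = -u, as in soln1 above
print(max(abs(v) for v in u))   # the pulse disperses but stays bounded
```

Since f(u) = -u conserves energy, the solution should stay bounded, matching the NDSolve picture.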
## Lagrangian

As Stephen Wolfram has shown, equation (1) can be described by the Lagrangian with the density

ℒ = u_t²/2 - u_x²/2 - V[u]    (2)

where

V[u] = -∫₀ᵘ f[u'] du'    (3)
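As a quick consistency check (mine, not from the original post), the Euler-Lagrange equation of the density (2) does reproduce (1):

```latex
% Euler-Lagrange equation for \mathcal{L} = u_t^2/2 - u_x^2/2 - V[u]:
\frac{\partial}{\partial t}\frac{\partial \mathcal{L}}{\partial u_t}
+ \frac{\partial}{\partial x}\frac{\partial \mathcal{L}}{\partial u_x}
- \frac{\partial \mathcal{L}}{\partial u}
= u_{tt} - u_{xx} + V'[u] = 0
\quad\Longrightarrow\quad
u_{tt} = u_{xx} + f[u], \qquad f[u] = -V'[u].
```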
## Symmetry, Noether's theorem, and conservation laws
### Time translation symmetry

It can be seen that (2) is translationally symmetric with respect to t and x. The invariant corresponding to time symmetry, the energy, is, as Stephen Wolfram has shown,

E = ∫_{-∞}^{∞} (u_t²/2 + u_x²/2 + V[u]) dx,    dE/dt = 0
### Space translation symmetry

However, because (2) is also translationally symmetric with respect to x, there is one more energy-like invariant. It can be found using the same formula for the Hamiltonian used to determine the energy function, except that t and x exchange roles:

G = ∫₀^∞ (V[u] - u_t²/2 - u_x²/2) dt,    dG/dx = [u_t u_x]₀^∞

The physical meaning of G is not entirely clear to me. In addition, since it is not constant throughout the x-axis, it's hard to find a practical application for it.
### Hyperbolic rotation symmetry

We can write (1) in a form where all differential operators are on the left-hand side:

(∂²/∂t² - ∂²/∂x²) u = f[u]

Now consider the hyperbolic rotation (Lorentz boost)

t' = t cosh[ε] + x sinh[ε],  x' = x cosh[ε] + t sinh[ε]

It can be shown that equation (1), as well as the Lagrangian density (2), are written in exactly the same way in the primed coordinates!
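A quick way to see the invariance of the wave operator (my own sketch of the standard chain-rule computation):

```latex
% Chain rule under t' = t\cosh\varepsilon + x\sinh\varepsilon,
%                  x' = x\cosh\varepsilon + t\sinh\varepsilon:
\partial_t = \cosh\varepsilon\,\partial_{t'} + \sinh\varepsilon\,\partial_{x'},
\qquad
\partial_x = \cosh\varepsilon\,\partial_{x'} + \sinh\varepsilon\,\partial_{t'}
% The cross terms cancel in the difference of squares:
\partial_t^2 - \partial_x^2
= (\cosh^2\varepsilon - \sinh^2\varepsilon)\,(\partial_{t'}^2 - \partial_{x'}^2)
= \partial_{t'}^2 - \partial_{x'}^2.
```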
### Energy flow

Let us derive an expression for the flow of energy, which I will denote J. In order to do that, let us use the law of conservation of energy:

∂ρ/∂t + ∇·J = 0    (4)

Since the problem is one-dimensional, the del operator is identical to ∂/∂x. Then, using the definition of ρ, we obtain

∂/∂t (u_t²/2 + u_x²/2 + V[u]) = -∂J/∂x

which can be simplified to

u_t u_tt + u_x u_xt + V'[u] u_t = -∂J/∂x

Using equations (1) and (3), we arrive at the following:

-∂J/∂x = u_t u_xx + u_x u_xt = ∂/∂x (u_t u_x),  so  J = -u_t u_x + C

where C is an arbitrary constant. From physical reasoning, the flow of energy must be zero if u and all its derivatives are zero. Then, the expression for the flow of energy is the following:

J = -u_t u_x    (5)
## Stability of the space-independent form of (1)

In this section, I'm studying the space-independent form of (1) to better understand the original equation. I've identified four types of behavior: two stable (stationary and oscillatory) and two unstable (unbounded and singular). Understanding the stability of (1) is an important stage in understanding the equation as a whole. But first, let us study the stability of the space-independent form of (1), when u[t, x + Δx] = u[t, x] for any Δx. As we will later see, the general form of (1) exhibits similar types of behavior to its space-independent form.

If u[t, x + Δx] = u[t, x], then, using the definition of the derivative,

u_x = lim_{h→0} (u[t, x + h] - u[t, x]) / h = 0

so u_x = u_xx = 0, and (1) reduces to

d²u/dt² = f[u]    (6)

This is the equation of motion of a unit mass experiencing a force f. It can exhibit four types of behavior: stationary, oscillatory, unbounded, and singular. Let us look at each of those in more detail.
### Stationary

This type of behavior is probably the most boring of all. If at some time t₀

du/dt |_{t=t₀} = 0  and  d²u/dt² |_{t=t₀} = f[u[t₀]] = 0

then the value of u will always be equal to u[t₀].

```
Plot[1, {t, 0, 1}]
Out[]=
```
### Oscillatory

Equation (6) can be re-written as

d/dt (1/2 (du/dt)²) = f[u] du/dt

Integrating this, we obtain

1/2 (du/dt)² = C - V[u]

where C is an arbitrary constant. From this equation, it can be seen that the mass can only be moving where V[u] < C. If the mass is surrounded on both sides by potential barriers higher than C, it will never overcome either of them and instead will oscillate somewhere in-between. More formally, if at some time t₀ there exist u_m ≤ u[t₀] ≤ u_p such that

V[u_m] > V[u[t₀]] + 1/2 (du/dt)²|_{t=t₀}  and  V[u_p] > V[u[t₀]] + 1/2 (du/dt)²|_{t=t₀}

then u will forever remain bounded within the interval [u_m, u_p]. The plot of u[t] then looks something like this:

```
In[]:= oscillatorySolnTo6 = NDSolve[{u''[t] == -Sin[u[t] - 1], u[0] == 0, u'[0] == 1.7}, u, {t, 0, 16}];
Plot[u[t] /. oscillatorySolnTo6, {t, 0, 16}]
Out[]=
```

Because u and 1/2 (du/dt)² = C - V[u] both remain bounded, the motion repeats indefinitely.
### Unbounded

If either of u_m or u_p does not exist (say, there is no barrier on the right) and

a ≤ lim_{u→∞} V[u] ≤ b

where a and b are some numbers, then du/dt remains finite while u grows without bound.

```
In[]:= unboundedSolnTo6 = NDSolve[{u''[t] == -Sin[u[t] - 1], u[0] == 0, u'[0] == 1.78}, u, {t, 0, 16}];
Plot[u[t] /. unboundedSolnTo6, {t, 0, 16}]
Out[]=
```
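The two NDSolve examples above differ only in the initial speed, 1.7 versus 1.78, and land on opposite sides of the barrier height. As a stdlib-only cross-check (my own sketch; `integrate` is my own name), a plain RK4 integrator for (6) reproduces the oscillatory/unbounded split:

```python
# Cross-check of the two NDSolve examples above with a stdlib-only RK4
# integrator for equation (6), u'' = f(u), with f(u) = -sin(u - 1).
# Here V(u) = -cos(u - 1), so the barrier height is max V = 1:
# u'(0) = 1.70 gives C = 1.70^2/2 - cos(1) ~ 0.90 < 1  -> oscillatory,
# u'(0) = 1.78 gives C ~ 1.04 > 1                      -> unbounded.
import math

def integrate(v0, t_end=16.0, dt=1e-3):
    """RK4 for the system u' = v, v' = -sin(u - 1); returns the u-trajectory."""
    u, v = 0.0, v0
    us = [u]
    def deriv(u, v):
        return v, -math.sin(u - 1)
    for _ in range(int(t_end / dt)):
        k1u, k1v = deriv(u, v)
        k2u, k2v = deriv(u + dt/2*k1u, v + dt/2*k1v)
        k3u, k3v = deriv(u + dt/2*k2u, v + dt/2*k2v)
        k4u, k4v = deriv(u + dt*k3u, v + dt*k3v)
        u += dt/6 * (k1u + 2*k2u + 2*k3u + k4u)
        v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
        us.append(u)
    return us

oscillatory = integrate(1.70)   # stays inside its potential well
unbounded = integrate(1.78)     # rolls over every barrier
print(max(oscillatory), max(unbounded))
```

The oscillatory trajectory stays below its turning point near u ≈ 3.7, while the unbounded one keeps advancing past barrier after barrier.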
### Singular

If the point is not bounded by potential barriers and goes off, let's say, to infinity, and

lim_{u→∞} V[u] = -∞

then the point will be moving infinitely fast. An example of such a solution is plotted below.

```
In[]:= singularSolnTo6 = NDSolve[{u''[t] == u[t], u[0] == 0, u'[0] == 1.78}, u, {t, 0, 16}];
Plot[u[t] /. singularSolnTo6, {t, 0, 16}]
Out[]=
```

Actually, there are two types of "singular" solutions: those that actually have a singularity and those that just rise quickly. To see why that is, let us use the following equation derived earlier:

1/2 (du/dt)² = C - V[u]

If u goes off to infinity, the value of du/dt is

du/dt = √(2(C - V[u]))

From here,

dt/du = 1/√(2(C - V[u]))

and

t[u] = (1/√2) ∫_{u[0]}^{u} du' / √(C - V[u'])

Let us find the time it takes for the solution to go to infinity. If the solution actually has a singularity, u will reach infinity in finite time; otherwise, the solution is simply a quickly increasing function.

t_∞ = (1/√2) ∫_{u[0]}^{∞} du' / √(C - V[u'])

As we can see, a solution has a singularity exactly when the integral ∫_{u[0]}^{∞} du' / √(C - V[u']) converges. However, for our purposes quickly rising solutions and singular solutions are not too different from one another, so they both fall under one classification slot.
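The convergence criterion can be illustrated numerically (my own example, not from the post): with C = 1, the potential V(u) = -u² makes the escape-time integral diverge (fast growth, no finite-time blow-up), while V(u) = -u⁴ makes it converge (a true singularity). The helper name `escape_time` is my own.

```python
# Illustrating the blow-up criterion above: the partial integral
#   (1/sqrt(2)) * integral from u0 to u_max of du / sqrt(C - V(u))
# diverges as u_max grows for V(u) = -u^2, but saturates for V(u) = -u^4.
import math

def escape_time(V, u0=1.0, u_max=1e6, n=200_000, C=1.0):
    """Midpoint rule on a geometric grid (fine near u0, coarse at large u)."""
    r = (u_max / u0) ** (1.0 / n)
    total, u = 0.0, u0
    for _ in range(n):
        u_next = u * r
        mid = 0.5 * (u + u_next)
        total += (u_next - u) / math.sqrt(C - V(mid))
        u = u_next
    return total / math.sqrt(2)

t_quadratic = escape_time(lambda u: -u**2)   # grows like log(u_max): divergent
t_quartic = escape_time(lambda u: -u**4)     # saturates: finite escape time
print(t_quadratic, t_quartic)
```

Incidentally, this is why the NDSolve example above with u'' = u (i.e. V = -u²/2) only rises exponentially rather than blowing up in finite time.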
## Stability of the general form of (1)

When the derivative of u with respect to x is not zero, equation (1) can no longer be reduced to (6). However, the stability classification system developed for (6) is still applicable. As earlier, there are four types of behavior, listed below.
### Stationary

Here, everything is clear without words.

```
Rasterize[Plot3D[1, {t, 0, 1}, {x, -1, 1}]]
Out[]=
```
### Oscillatory

In the case of (1), the criterion for oscillatory behavior is not as strict as that for (6). However, if the plot of V[u] looks like a pit, the behavior of (1) is likely to be oscillatory.

```
In[]:= V1 = Function[u, u^2];
Plot[V1[u], {u, -1, 1}]
Out[]=
```

In the case of V[u] = u², the solution oscillates:

```
myPlot[solveThePDE[f -> VtoF[V -> V1]]]
Out[]=
```
### Unbounded

Just like (6), equation (1) can allow u to escape to infinity at a finite speed.

```
In[]:= V2 = Function[u, -u^2/(1 + u^2)];
Plot[V2[u], {u, -4, 4}]
Out[]=
```

If the plot of V[u] looks somewhat like this, then the solution increases at a finite speed.

```
myPlot[solveThePDE[f -> VtoF[V -> Function[u, -u^2/(1 + u^2)]]]]
Out[]=
```

You can see that the solution grows at a steady rate bounded from above by the law of conservation of energy.
### Singular

Some functions V[u] just yield singularities. If the value of u manages to escape to infinity and the limit of V[u] is negative infinity, u starts increasing infinitely fast.

```
In[]:= V3 = Function[u, -u^2];
Plot[V3[u], {u, -3, 3}]
Out[]=
```

```
In[]:= singularSoln = solveThePDE[f -> VtoF[V -> V3]];
myPlot[singularSoln]
Out[]=
```

An important thing about singular behavior is that if the u-function develops a singularity at one particular point, u also explodes at the neighboring points. To see why that's the case, let us plot J = -u_t u_x.

```
In[]:= JforSingularSoln[x_] := -First[Derivative[1, 0][u][20, x] * Derivative[0, 1][u][20, x] /. singularSoln];
Plot[JforSingularSoln[x], {x, -R, R}, PlotLegends -> {"J"}, PlotRange -> All, PlotStyle -> Orange]
Out[]=
```
As you can see, the flux of energy is positive to the right of the center and negative to the left. What this means is that to the right of the center the energy is flowing to the right, and to the left of the center it's flowing to the left, which implies that energy is being drained from the center.

```
VectorPlot[{JforSingularSoln[x], 0}, {x, -20, 20}, {y, -1, 1}, VectorPoints -> Table[{x, 0}, {x, -7, 7, 1}], AspectRatio -> 1/6, ImageSize -> Large]
Out[]=
```

This figure shows the direction of the flux of energy around the center.
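Formula (5) also behaves sensibly for a simple traveling pulse (my own sanity check, not from the post): if u(t, x) = g(x - vt), then u_t = -v g' and u_x = g', so J = -u_t u_x = v g'² points in the direction of travel everywhere.

```python
# Sanity check of the flux formula J = -u_t u_x for a hypothetical
# right-moving Gaussian pulse u(t, x) = exp(-(x - v t)^2): analytically
# J = 4 v (x - v t)^2 u^2 >= 0, so energy flows to the right everywhere.
import math

def J(t, x, v, h=1e-5):
    """Numerical J = -u_t u_x via central differences."""
    u = lambda t, x: math.exp(-(x - v * t) ** 2)
    u_t = (u(t + h, x) - u(t - h, x)) / (2 * h)
    u_x = (u(t, x + h) - u(t, x - h)) / (2 * h)
    return -u_t * u_x

# For a right-moving pulse (v > 0), J >= 0 at every sampled point:
samples = [J(1.0, x / 4, v=2.0) for x in range(-40, 41)]
print(min(samples))
```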
## Upper and lower bounds for u

The law of conservation of energy determines the maximal and minimal possible values u can have for a given energy E. In this subsection, I will prove that statement and find the bounds for u. For brevity, I will only find the upper bound and not the lower bound, as the processes of their derivation are practically identical.
### Why are there any bounds for u?

Due to the law of conservation of energy, the value of u is restricted to some finite interval. To see why that is, let us examine the expression for the energy of the system:

E = ∫_{-∞}^{∞} (u_t²/2 + u_x²/2 + V[u]) dx

One obvious way to make u large is to make it large at one point while making it zero everywhere else. However, as is easy to show, that would make the integral ∫_{-∞}^{∞} u_x²/2 dx very large. The optimal profile is a compromise between a sharp spike and a wide bump:

```
Plot[{If[Abs[x] < 0.1, 0.5, 0], 0.75 Exp[-x^2/8], Exp[-Abs[x]]}, {x, -3, 3}, PlotStyle -> Thick, AxesOrigin -> {-2, 0}, PlotLegends -> {"Too sharp", "Too wide", "Optimal"}, Exclusions -> None]
Out[]=
```
### Finding the bounds

Now, the question is how to find an optimal function u[t, x]. The word an here is important: due to translation symmetry, we can get as many optimal functions u[t, x] from one of them by shifting it along either the t or the x axis. To avoid ambiguity, let us restrict ourselves to such u[t, x] that have their global maximum at t = 0, x = 0.

Energy is the limiting factor in the process of increasing u. Thus, the optimal solution will be the one that uses the minimal amount of energy. From the expression for energy, it is obvious that at t = 0 we should have u_t = 0, so that no energy is wasted on motion, and

E = ∫_{-∞}^{∞} ρ dx,  where now ρ = u_x²/2 + V[u]

Minimizing this energy functional gives the Euler-Lagrange equation

d/dx (∂ρ/∂u_x) - ∂ρ/∂u = 0

Since ρ = u_x²/2 + V[u], we have ∂ρ/∂u_x = u_x and ∂ρ/∂u = dV/du. So, finally, we get an equation describing the optimal solution:

d²u/dx² = dV/du = -f[u]

It can be simplified if we introduce an auxiliary variable y ≡ du/dx:

y dy/du = -f[u]

or

y dy = -f[u] du

But, since y dy = d(y²/2),

y²/2 = V[u] + C

where C is the constant of integration. Substituting the definition of y, we obtain

(du/dx)²/2 = V[u] + C

We can understand the meaning of C by setting u to 0. Since V[0] = -∫₀⁰ f[u'] du' = 0,

(du/dx)²/2 |_{u=0} = C

By the initial conditions, the values of u and u_x should go to zero as x → ±∞, which forces C = 0 and

(du/dx)² = 2 V[u]

One possible solution to the problem is to make C nonzero and forcefully equate u to zero at two points further away from the origin. This would satisfy the initial conditions and allow us to reach any value of u. Below you can see an illustration explaining how this would work. The solution is smooth everywhere except the points x = 0, x = π/2, and x = -π/2.

```
Plot[UnitStep[Pi/2 - Abs[x]] (Pi/8 + Cos[x] - Abs[x]/4), {x, -3, 3}, PlotStyle -> Thick]
Out[]=
```

In this case, the value of C should be chosen so that u reaches the desired value. Anyway, the shape of this ideal solution is described by the equation

du/dx = ±√(2(V[u] + C))

The sign of du/dx is positive to the left of the maximum and negative to the right of it. For x > 0, where du/dx < 0, since dx/du = 1/(du/dx),

x[u] = ∫_{u_max}^{u} du' / (-√(2(V[u'] + C))) = (1/√2) ∫_{u}^{u_max} du' / √(V[u'] + C)

After that, we can get u[x] as the inverse function of the latter integral.
After that, the energy as a function of the maximal u can be found by integrating the energy density. Finally, inverting E[u_max] gives the upper bound u_max[E].
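As a concrete check of this derivation (my own example, not from the post), take V[u] = u²/2, so f[u] = -u. With C = 0, (du/dx)² = 2V[u] = u² gives the profile u[x] = u_max e^{-|x|}, so the x[u] integral should equal ln(u_max/u). The sketch below (the name `x_of_u` is my own) evaluates it numerically:

```python
# For V(u) = u^2/2 and C = 0, the profile integral from the derivation above,
#   x[u] = (1/sqrt(2)) * integral from u to u_max of du' / sqrt(V(u')),
# should reduce to x[u] = ln(u_max / u), i.e. u[x] = u_max * exp(-x) for x > 0.
import math

def x_of_u(u, u_max=1.0, n=100_000):
    """Midpoint-rule evaluation of the integral, folding 1/sqrt(2) into sqrt(2V)."""
    V = lambda w: w * w / 2
    du = (u_max - u) / n
    total = 0.0
    for i in range(n):
        mid = u + (i + 0.5) * du
        total += du / math.sqrt(2 * V(mid))
    return total

print(x_of_u(0.5), math.log(1.0 / 0.5))   # both ~ 0.693
```

For this V, integrating the energy density of the profile gives E = u_max², so the upper bound is u_max = √E.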
## Speed of signal propagation

For the vast majority of f[u], the speed at which waves travel depends on the shape of the waves themselves. In this section, this is demonstrated on the example of a linear f[u], through some mathematical manipulations and numerical solutions. Consider

f[u] = -c u

so that (1) takes the form

∂²u/∂t² = ∂²u/∂x² - c u

Since this is a linear equation, its solution will be a sum of exponential functions of the form u[t, x] = C Exp[i(ωt - kx)], where i is the imaginary unit. Solving the characteristic equation, we arrive at the following nonlinear relation between ω and k:

ω² = k² + c

As is known, nonlinear dispersion relations cause different types of waves to travel at different speeds. Below you can see two plots, both showing numerical solutions to the same PDE ∂²u/∂t² = ∂²u/∂x² - c u.
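The dispersion relation is easy to confirm numerically (my own sketch; `residual` and `phase_velocity` are my own names): a plane wave with ω = √(k² + c) should satisfy the PDE to high accuracy, and its phase velocity ω/k visibly depends on k.

```python
# Verifying the dispersion relation w^2 = k^2 + c for u_tt = u_xx - c u:
# the real plane wave u = cos(w t - k x) with w = sqrt(k^2 + c) satisfies
# the PDE, and its phase velocity w/k depends on k (dispersion).
import math

c = 1.0

def residual(k, t=0.3, x=0.7, h=1e-4):
    """|u_tt - u_xx + c u| for the plane wave, via central differences."""
    w = math.sqrt(k * k + c)
    u = lambda t, x: math.cos(w * t - k * x)
    u_tt = (u(t + h, x) - 2 * u(t, x) + u(t - h, x)) / h**2
    u_xx = (u(t, x + h) - 2 * u(t, x) + u(t, x - h)) / h**2
    return abs(u_tt - u_xx + c * u(t, x))

def phase_velocity(k):
    return math.sqrt(k * k + c) / k

print(residual(1.0), residual(2.0))              # both ~ 0
print(phase_velocity(1.0), phase_velocity(2.0))  # ~ 1.414 vs ~ 1.118
```

Longer waves (small k) travel faster here, which is what smears an initially localized pulse apart.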