
Finding the function given the gradient vector

Posted 3 years ago

Hello. The gradient vector Grad(f) of the function f is given as in the attached file. Question 1: How can I find the smallest value of Grad(f) with respect to the variables x1 and x2? Question 2: Given only the Grad(f) vector, how can I find the function f? I will be very happy if those who know this subject can help.

Attachments:
POSTED BY: Meryem Aslı
18 Replies

It is not quite clear to me what you are asking. (First, your syntax is a bit off. You probably want to use a symbol for the left-hand side, not Grad(f).)

A gradient is a vector. When you say "smallest", do you mean smallest magnitude? And, assuming the gradient was taken with respect to x1, x2, x3, do you want to know the smallest magnitude at each point {x1, x2, x3}?

If so, you will have two differential algebraic equations for g[t] in parameters t and s at each point x1,x2,x3. Looking at the equations, it is unlikely that you will be able to find a symbolic solution with Mathematica or any other symbolic algebra package. (I tried, but gave up after Mathematica didn't come back with an answer in 5 minutes). You will also need to give two initial conditions for g.

Another strategy may be to expand the magnitude of the gradient, gf.gf, to first order in s, t around s = 0, t = 0. But I am not sure if that is helpful for your purpose.
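
For what it's worth, a minimal sketch of that expansion, where gf below is only a toy stand-in for the attached gradient (a function of s, t, and g[t]), not the real expression:

(* toy stand-in for the attached gradient vector *)
gf = {Sin[b s] + t, g[t] Cos[b s], t^2 Sin[s]};

(* squared magnitude of the gradient *)
magSq = gf.gf // Expand;

(* first-order expansion around t = 0, s = 0 *)
Normal[Series[magSq, {t, 0, 1}, {s, 0, 1}]]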

POSTED BY: W. Craig Carter
Anonymous User
Posted 3 years ago

There's a lot of freely available material showing how to find f from the gradient of f, as well as the Lagrange multiplier method (among others) for minimization, and linear algebra algorithms too.

It's against Community policy to "drop a problem you haven't attempted to solve" in the forum, nor should problems be contrived to be unsolvable by certain methods without being introduced as such.

POSTED BY: Anonymous User

This question seems to be poorly stated. First, is this a 2-dimensional problem with coordinates x1 and x2? Then what is the long expression in the right-hand-side list? Are these the components of a vector field? Then this seems to be a 3-dimensional problem. And what are all these other variables? Do they have a dependence on the coordinates that is not shown? If not, why not just condense them to a scalar parameter?

In general given a potential field f[coordinates] one can take the gradient and obtain a vector field. But these calculations are more commonly done using differential forms as covectors. Then the vector field is given by the exterior derivative of the potential. But it is not true that every vector field is given by the derivative of a potential function f. The vector field has to be exact, which means in turn that it is closed. That means that the exterior derivative of the vector field is zero. Then one has to define a line integral from some point to every other point to calculate a potential function.
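
In plain vector-calculus terms the closedness test is just a vanishing curl; here is a minimal Mathematica sketch with a toy field (not the poster's expression):

(* toy field: the gradient of x y z *)
vf = {y z, x z, x y};

(* a zero result means the field is closed; on a simply connected domain it is then exact *)
Curl[vf, {x, y, z}]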

Using my GrassmannCalculus application I was able to do the very simplest of cases - using the potential of a gravitational field from a point source to calculate the force field - all done symbolically. Then, starting with the force field, I showed it was closed and designed line integrals to get back the potential function.

In general it's a more difficult topic because it depends on the global topology of the manifold and involves de Rham cohomology theory of which I know little - if anything.

Posted 3 years ago

Honestly, my English is not enough to express what I want to say. Yes, here is a vector such that Grad(f) = {x1, x2, x3}. The x3 component is the long expression in the list on the right. Yes, these are the components of a vector field. This is a 3 dimensional problem. I wrote the 3rd component x3 in terms of x1 and x2. As you said, I want to know the smallest magnitude at each point {x1,x2,x3}.

Here s and t are scalar variables. g(t) is the function dependent on t. a and b are constant numbers. I can summarize the problem I'm trying to solve as follows:

There is a surface equation given by this parameterization. --> x := (a + t Cos[b s] - g[t] Sin[b s]) {Cos[s], Sin[s], 0} + {0, 0, t Sin[b s] + g[t] Cos[b s]}

I found the mean curvature (H) and the unit normal vector (N) of the surface.
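
(For reference, one standard way to compute N and H from such a parameterization in Mathematica is via the first and second fundamental forms; the sketch below follows that textbook recipe with illustrative variable names, and is not the exact computation from the attachment.)

(* the surface, with parameters t and s; a, b constants, g[t] an unknown function *)
r = (a + t Cos[b s] - g[t] Sin[b s]) {Cos[s], Sin[s], 0} + {0, 0, t Sin[b s] + g[t] Cos[b s]};
rt = D[r, t]; rs = D[r, s];

(* unit normal (up to sign), assuming real parameters *)
nvec = Cross[rt, rs];
nUnit = nvec/Sqrt[Simplify[nvec.nvec]];

(* coefficients of the first and second fundamental forms *)
e1 = rt.rt; f1 = rt.rs; g1 = rs.rs;
l2 = D[r, t, t].nUnit; m2 = D[r, t, s].nUnit; n2 = D[r, s, s].nUnit;

(* mean curvature H = (e1 n2 - 2 f1 m2 + g1 l2)/(2 (e1 g1 - f1^2)) *)
meanH = Simplify[(e1 n2 - 2 f1 m2 + g1 l2)/(2 (e1 g1 - f1^2))]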

< N , Grad(f) > = H

I'm trying to find the function f here. I have no information about the programs and methods that can calculate this.

We can choose the variables x1 and x2 as we want. How can I find the smallest magnitude at each point {x1, x2, x3}?
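
(If the squared magnitude of the gradient can be written as a function of x1 and x2 alone, a generic numerical minimization might look like the sketch below; magSq is only a toy placeholder, not the attached expression.)

(* toy placeholder for the squared magnitude of Grad(f) as a function of x1, x2 *)
magSq[x1_, x2_] := (x1 - 1)^2 + Sin[x2]^2 + 2;

(* numerical minimization over x1 and x2 *)
NMinimize[magSq[x1, x2], {x1, x2}]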

POSTED BY: Meryem Aslı
Anonymous User
Posted 3 years ago

If you put x3 in terms of x1 and x2 and applied the gradient, you still would not have the expression first posted (for one thing, the first element of the vector x cannot be x1 after the gradient, since that was the value before the gradient). Also, if you knew the equation for x, then why post a questionably formulated gradient and ask what it was (this is not an arcane puzzle forum)? Also, Grad[x] evaluates without substitutions. But that doesn't matter.

When working with gradients during a calculus course it is easy to apply the Grad operator and get a nasty expression perhaps hundreds of terms long (many are surprised by this). The answer is not to do it that way. If you are led to a wild expression in your vectors you should do as the book proposes, one step at a time, to maintain solvable expressions. You will learn other methods as you advance in mathematics, but you will have to wait. Therefore the technique of expressing one vector in terms of another is not "always useful".

POSTED BY: Anonymous User
Anonymous User
Posted 3 years ago

I want to underline what the first responding poster said: a minimization problem (other than 0) requires a constraint, which still hasn't been stated. Say, two curves and a condition that says what is considered minimal (the distance between them or along them? a mutual distance from the origin?).

You've also said you want f, but I assume f(x) is the surface you gave, and that the surface is not the constraint. If the constraint is the surface and you gave us (grad f) = {x1, x2, f[x1, x2]}, then you've still withheld (grad f) = {x1, x2, x3}, meaning the form before you altered it.

I can only guess you mean, as a condition, that the direction of greatest change is normal to the surface and the direction of least change is tangent to it. In that light you want the direction that is always tangent to the surface, and the gradient is not necessary since the surface was given. If r[t, s] is the surface given, then r'[t, s] (easily done by Mathematica) is tangent to the surface at each point and so is the direction of least change. If you have N you should already have T, the tangent vector.

(* the vector valued surface *)
In[109]:= D[(a + t Cos[b s] - g[t] Sin[b s]) {Cos[s], Sin[s], 0} + {0,
    0, t Sin[b s] + g[t] Cos[b s]}, t, s]

(* the direction of least change, r'[t,s] *)
Out[109]= {Cos[s] (-b Sin[b s] - b Cos[b s] Derivative[1][g][t]) - 
  Sin[s] (Cos[b s] - Sin[b s] Derivative[1][g][t]), 
 Sin[s] (-b Sin[b s] - b Cos[b s] Derivative[1][g][t]) + 
  Cos[s] (Cos[b s] - Sin[b s] Derivative[1][g][t]), 
 b Cos[b s] - b Sin[b s] Derivative[1][g][t]}

If you mean minimal curvature K, then since you have a wavy sin/cos surface you may only need the maximum points of the surface to find minimal K, or the zero points for greatest K (i.e., it may be a trick question not requiring the kind of solution you're looking for).

You should ask a teacher or look in the book's solution guide. And you should not ask people this kind of question without the problem FULLY TYPED OUT as the book had it (with page and title) - which likely requires a different forum altogether.

POSTED BY: Anonymous User

It could be that the problem is to:

1) First, calculate a scalar potential field, f, from the vector force field. (The statement seems to indicate there is one.)

2) Then, as a second step, find the maximum of the potential on the surface specified.

Maybe this has somehow all gotten mashed together.

A clearer statement of the objective is necessary.

Posted 3 years ago

Here {x1, x2, x3} are not the components of x. I named the components of Grad(f) x1, x2, x3, respectively, so I wrote "Grad(f) = {x1, x2, x3}". The "x" is just the name of the surface.

First I computed the N and H values for the x surface. Then I calculated Grad(f) from the equation "<N, Grad(f)> = H" given above.

Grad(f) is in the file I sent. I don't know the function f and my main goal is to find the function f. How can I find a function f using only Grad(f)?

POSTED BY: Meryem Aslı

This remains confusing. On one side there is what looks like a surface of the form z = f(x,y) (so the two parameters are x and y). Later it appears that the surface is instead of the form {f1(s,t), f2(s,t), f3(s,t)}, with {s,t} being the parameters.

As a second remark, it seems that you have a gradient and want to recover a potential function? Existence of such a function requires that some second derivative equations be satisfied.

POSTED BY: Daniel Lichtblau

Hello Meryem,

your expressions are indeed somewhat complicated, so I chose a much simpler example which may, I hope, give you a hint.

First of all you should check the "mixed derivatives" of your gradient, that is, check the "integrability conditions". That will show whether your function f exists.

Now for the example.

Define

grad[f_] := {D[f, x], D[f, y], D[f, z]}

Let's say you have a gradient

gradf = {(2 x)/z, (2 y)/z, -((x^2 + y^2)/z^2)};

Check the mixed derivatives

D[gradf[[1]], y] == D[gradf[[2]], x]
D[gradf[[1]], z] == D[gradf[[3]], x]
D[gradf[[2]], z] == D[gradf[[3]], y]

Each answer is True, so there is indeed a function giving the gradient in question.

Now choose a straight line from { x0, y0, z0 } to { x1, y1, z1 } and use the derivative of its parameterization to calculate

Integrate [ gradf . dr ]

along this line. Set

rule = {x -> x0 (1 - t) + t x1, y -> y0 (1 - t) + t y1, z -> z0 (1 - t) + t z1};
xp = D[{x, y, z} /. rule, t];
df = (gradf /. rule).xp // FullSimplify;

Switch off the output of conditions in Integrate and do the integration, change the variables back to x, y, z and get a function yielding the given gradient

SetOptions[Integrate, GenerateConditions -> False];
ff = Integrate[df, {t, 0, 1}] /. {x1 -> x, y1 -> y, z1 -> z} // FullSimplify
grad[ff]
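
With this example gradient, ff should come out (after FullSimplify) as (x^2 + y^2)/z - (x0^2 + y0^2)/z0, i.e. the potential up to the constant fixed by the starting point, and grad[ff] then reproduces gradf.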

Of course you may as well integrate along the edges of the cuboid:

fff = Integrate[(gradf[[1]] /. {x -> u, y -> y0, z -> z0}), {u, x0, x1}] +
  Integrate[(gradf[[2]] /. {x -> x1, y -> u, z -> z0}), {u, y0, y1}] +
  Integrate[(gradf[[3]] /. {x -> x1, y -> y1, z -> u}), {u, z0, z1}]
fff = (fff // FullSimplify) /. {x1 -> x, y1 -> y, z1 -> z}
grad[fff]
gradf

I hope you got an idea how to calculate your function f.

POSTED BY: Hans Dolhaine

In general the procedure may be written as

grad[f_] := {D[f, x], D[f, y], D[f, z]}

t1 = Integrate[a[u, y0, z0], {u, x0, x}];
t2 = Integrate[b[x, u, z0], {u, y0, y}];
t3 = Integrate[c[x, y, u], {u, z0, z}];

(*Integrability conditions *)
rr = {
   Derivative[1, 0, 0][b][xx__] -> Derivative[0, 1, 0][a][xx],
   Derivative[1, 0, 0][c][xx__] -> Derivative[0, 0, 1][a][xx],
   Derivative[0, 1, 0][c][xx__] -> Derivative[0, 0, 1][b][xx]
   };

F = t1 + t2 + t3;
grad[F]
grad[F] /. rr
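
As a quick sanity check, one can plug the example gradient from the earlier post into this procedure (with condition generation switched off, as before); the comments give the expected results:

(* the example gradient components as the functions a, b, c *)
a[x_, y_, z_] := 2 x/z;
b[x_, y_, z_] := 2 y/z;
c[x_, y_, z_] := -((x^2 + y^2)/z^2);

SetOptions[Integrate, GenerateConditions -> False];
t1 = Integrate[a[u, y0, z0], {u, x0, x}];
t2 = Integrate[b[x, u, z0], {u, y0, y}];
t3 = Integrate[c[x, y, u], {u, z0, z}];

F = t1 + t2 + t3 // Simplify   (* should give (x^2 + y^2)/z - (x0^2 + y0^2)/z0 *)
grad[F] // Simplify            (* should reproduce {2 x/z, 2 y/z, -(x^2 + y^2)/z^2} *)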
POSTED BY: Hans Dolhaine

Rename your Grad(f) given above to gf = ... your expression.... and try

D[ gf[[1]], x2] == D[ gf[[2]], x1]
D[ gf[[1]], x3] == D[ gf[[3]], x1]
D[ gf[[2]], x3] == D[ gf[[3]], x2]

These do not all give True, which means the function you are looking for does not exist.

POSTED BY: Hans Dolhaine

I like Hans Dolhaine's example. He does his calculation entirely in a vector notation. There is an alternative method using differential forms that is somewhat easier to follow. I decided to use Hans' example to illustrate the differential form method.

Two spaces have been defined: an xyz-Space that the vector field and the resulting scalar potential field exist in, and a 1-dimensional t-space that maps to a straight line in xyz-Space and is used to calculate the potential. We can instantly switch between them with the commands setxyzSpace and settSpace, whose definitions I'm not showing because they're kind of boilerplate.

Several of the expressions are in graphics form because their formatting is part of the niceness of the GrassmannCalculus application I'm using. The following starts with Hans's gradient field in vector form. We then convert it to a differential form, which is equivalent to 'lowering the index' in tensor calculus.

setxyzSpace

gradVector = [image: Hans's example gradient field entered in vector form]

gradForm = gradVector /. VectorToForm

giving

-((dz (x^2 + y^2))/z^2) + (2 dx x)/z + (2 dy y)/z

The check for integrability is very simple: the exterior derivative of the gradForm must be zero.

ExteriorDerivative[gradForm] // EvaluateExteriorDerivatives

giving

0

To obtain the potential function we integrate a line integral on the vector field from a fixed point to the generic point {x, y, z}. I picked the origin as the fixed point. The actual integration is done in the 1-dimensional t-space.

We define a line as a function of t that maps {x, y, z} onto points in the line. We will need rules on how the coordinates are pulled back to points on the line.

lineFunction := Function[t, t {x, y, z}]
xyzPullback = Thread[{x, y, z} -> lineFunction[t]]

giving

{x -> t x, y -> t y, z -> t z}

To do the integral along the line we simply pull back the gradForm to t-space to obtain a differential form on the line parameterization.

tForm = gradForm // PullbackForms[setxyzSpace, settSpace, xyzPullback]

giving

(dt (x^2 + y^2))/z

This is a trivial integral but we do it formally to show the steps.

settSpace; domain = {t, 0, 1};
FormIntegral[{domain}, tForm] + constant
potential = % // EvaluateFormIntegrals

giving


constant + (x^2 + y^2)/z

Which is the potential. To check we take the exterior derivative of the potential to recover the gradForm. If we want it expressed in a vector basis instead of the form basis we just 'raise the index' using the metric.

setxyzSpace
ExteriorDerivative[potential] // EvaluateExteriorDerivatives
% /. FormToVector

giving

-((dz (x^2 + y^2))/z^2) + (2 dx x)/z + (2 dy y)/z

[image: the same field converted back to the vector basis]

These routines are part of a GrassmannCalculus Mathematica application that is an extension of John Browne's axiomatization of Grassmann Algebra. It would be very nice for teaching modern physics with modern mathematics. Anyone like to help?

David,

What is the current version of your GrassmannAlgebra application? I tried to find out from the link at https://davidandalicepark.wordpress.com/home-2/mathematica/, but that gives a "page can't be found" error.

[Similar question for your Presentations application! (While recent versions of Mathematica have introduced some built-in complex function plotting tools along with some utility functions for complex numbers, Mathematica still lacks a lot of the functionality of "Presentations" as well as its more flexible paradigm for graphics objects.)]

POSTED BY: Murray Eisenberg
Posted 3 years ago

Thank you so much. I will try to solve the problem using this solution method.

POSTED BY: Meryem Aslı

Hi Meryem,

any progress in your problem?

Did you realize that with your gradf a function giving this gradient does not exist?

What about telling us the background of your problem (without the lengthy Sin and Cos terms)?

Greetings HD

POSTED BY: Hans Dolhaine
Posted 3 years ago

Hello Hans. I could not deal with this question for a long time due to my health problems. Yesterday I looked at the question again and, unfortunately, as you said, there is no function that gives this gradient.

POSTED BY: Meryem Aslı
Posted 3 years ago

Also, since each of the x1, x2, x3 terms is itself a function of the variables, I might not have been able to reach a solution.

POSTED BY: Meryem Aslı