Discussions
adj is a matrix that holds all the variables being optimized (satisficed, actually, in this case: they are pure constraints). After the optimizer finds an optimum, adj is the solution.
I don't know if the problem is with Minimize, because I don't call Minimize; I call NMinimize. I think the problem might be that NMinimize tries to take symbolic derivatives (even with methods such as simulated annealing or evolutionary...
OK, great, thanks! That means either x[1]=y[1]=0 if the expressions are not equal, or, if they are, an infinite number of solutions. The real problem is with xE != 0, and using your idea I get something useful for some relations between the...
I know things are already too complicated, but perhaps in a way they are too simple. Edges in hypergraphs are directed, which means they are 2-valued. Why not allow more general edges, like n-valued, continuous, vector/tensor-valued, etc.? I doubt...
*Edited. And here is another bug, which I don't really understand (I thought the problem was Derivative, but I was wrong; I had forgotten to refresh the definitions). The first cell, without the conditional, gives good output. The second gives output, but it's wrong...
I take it back. It didn't work. Another constraint is linear independence of all the eigenvectors...
I tried it, as well as Hold and ReleaseHold, but I don't understand enough of the under-the-hood details to make it work.
I have a complicated function of many variables. I would like to pass those variables to it, write its code using these list structures (scalars, 1D and 2D arrays), and then use it in a system of ODEs where I take not just derivatives...
I have version 13.2. When I changed the kernel program to WolframKernel, as you have it in the picture (but I didn't see that anywhere else; why is this obvious?), it worked. The default was newkernel, which didn't work. Thanks!
I have a system that looks like dF/dt = σ∇F, where F is a vector quantity. There are big differences among the components of the gradient (at least away from equilibrium, where the gradient = 0), and I think the largest component needs a...
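For the NMinimize discussion above: a common workaround when NMinimize attempts symbolic analysis of the objective is to define the objective so it only evaluates for numeric arguments. A minimal Wolfram Language sketch, with a hypothetical non-smooth objective f:

```wolfram
(* The _?NumericQ pattern keeps f unevaluated for symbolic input,  *)
(* so NMinimize cannot differentiate or expand it symbolically.    *)
f[x_?NumericQ] := Abs[x - 2] + 1

(* A derivative-free method; f is only ever called with numbers. *)
NMinimize[f[x], x, Method -> "SimulatedAnnealing"]
```

The same `_?NumericQ` guard works for any black-box objective, including ones built from matrices of variables like adj.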
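On the Hold/ReleaseHold discussion: the key under-the-hood detail is that Hold wraps an expression so the evaluator leaves it untouched, while ReleaseHold strips exactly one layer of held wrappers and then lets evaluation proceed. A small illustration:

```wolfram
expr = Hold[1 + 1]              (* stays as Hold[1 + 1]; nothing evaluates *)
ReleaseHold[expr]               (* removes the Hold, then evaluates to 2   *)
ReleaseHold[Hold[Hold[1 + 1]]]  (* strips one layer only: Hold[1 + 1]      *)
```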
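For the dF/dt = σ∇F discussion: widely separated magnitudes among components are a classic symptom of stiffness, and one standard response in Mathematica is to let NDSolve detect stiffness and switch integrators. A minimal sketch with a hypothetical two-component linear system whose decay rates differ by four orders of magnitude (standing in for the large gradient-component disparity described above):

```wolfram
(* Hypothetical stiff system: f1 decays ~10^4 times faster than f2. *)
sol = NDSolve[
   {f1'[t] == -10^4 f1[t], f2'[t] == -f2[t],
    f1[0] == 1, f2[0] == 1},
   {f1, f2}, {t, 0, 1},
   Method -> "StiffnessSwitching"];
```

"StiffnessSwitching" alternates between a non-stiff and a stiff method as the integration proceeds, so the fast component does not force a tiny step size over the whole interval.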