Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Physics sorted by activeplse to solving my problem "incomplete expression"
http://community.wolfram.com/groups/-/m/t/1129406
ClearAll[X1, X2, X3, X4, t]; (**)
reactions = {X1 + X2 -> X3,
X3 -> X1 + X2, X2 -> X1 + X4}; (**)
vars = {X1, X2, X3,
X4}; (**)
rates = {1.1,
0.1, .8}; (**)
X1 == 100 ; (* анхны нөхцөл *)
X2 == 100;
X3 == 0;
X4 == 0;
de = NDSolveValue
[{
X1'[t] == .9 X3[t] - 1.1 X1[t] X2[t], (**)
X2'[t] == .1 X3[t] - 1.1 X1[t] X2[t], (**)
X3'[t] == 1.1 X1[t] X2[t] - .9 X3[t], (**)
X4'[t] == .8 X3[t], (**)
X1[0] = 100,
X2[0] = 100,
X3[0] = 0,
X4[0] = 0
},
{X1, X2, X3, X4},
{t, 10} DependentVariables -> {X1, X2, X3,
X4}]; (**)
stochastic = SSA[reactions, init, rates, {0, 10}];
df = {PlotStyle -> Thick, PlotTheme -> "Scientific"};
Row@{Plot[Evaluate@Through@de@t, {t, 0, 10}, Evaluate@df,
PlotLabel -> "deterministic ODE"], Spacer@10,
Plot[Evaluate@Through@stochastic@t, {t, 0, 10}, Evaluate@df,
PlotLabel -> "stochastic SSA"]}tudewdorj togtokhtur2017-06-27T10:34:13ZCan not understand the problem?
http://community.wolfram.com/groups/-/m/t/1129375
This is a screen shot of my code. I am trying to minimize a function f(). please help me.Arnob Mukherjee2017-06-27T07:12:46Z[✓] NSolve two eqs connected to the London penetration depth of a PB film?
http://community.wolfram.com/groups/-/m/t/1128280
Hello,
I try to numerically solve two equations that are connected to the London penetration depth of a thin PB film. <br />
The physical background doesn't really matter but if someone is interested I can post more about it. <br />
The 1st problem:
I need a solution for the following equation that should give me a simple value:
$$ 2,01121 \cdot 10^{-10}= 0,005 \cdot (1,48 \cdot 10^{-8}-2 \cdot \lambda* tanh( \frac{1,48 \cdot 10^{-7}}{2 \cdot \lambda}))$$
I know that the solution for lambda has to be in the range of 50-70nm but I'm not really familiar how to solve something like this with Mathematica.
I tried to use the NSolve function but I don't really give me anything. <br />
In[6]:= NSolve {2.01121*10^(-10) == 0.005* (1.48*10^(-8) - 2*x*tanh[ 1.48*10^(-7)/(2*x)])}
Out[6]= {NSolve (2.01121*10^-10 == 0.005 (1.48*10^-8 - 2 x tanh[7.4*10^-8/x]))}
The 2nd problem is similar but this time the solution isn't just a value but a function that depends on T. Therefore I would need something like a table with values as output.
$$ 1-\frac{T}{7.2}= \frac{\Delta(T)}{1,9872\cdot10^{-22}}\tanh(\frac{\Delta(T)}{2,76\cdot 10^{-23}\cdot T})$$
I tried to solve it again with NSolve:
In[16]:= NSolve {1 - (T/7.2) == (a[T]/(1.9872*10^(-22)))*tanh[a[T]/(T*2.76*10^(-23))]}
Out[16]= {NSolve (1 - 0.138889 T == 5.03221*10^21 a[T] tanh[(3.62319*10^22 a[T])/T])}
I probably use the NSolve wrong or use the syntax wrong but i can't find an error so any help is appreciated.Stefan Dietel2017-06-26T08:47:50ZSolve a differential equation of a bomb trajectory?
http://community.wolfram.com/groups/-/m/t/1128238
Greetings and respect. Please help me to solve this problem by Wolfram Mathematica.
appears directly below the plane in the crosshairs of his visual \
targeting device.Assume that the wind is blowing horizontally \
throughout the entire space below the plane with a speed of 60 mph, \
and that the air density does not vary with altitude.The bomb has a \
mass of 100 kg.Assume that it is spherical in shape with a radius of \
0.2 m.
(a) Calculate the required ground speed of the plane if the bomb is \
to strike the target.
(b) Plot the trajectory of the bomb.Explain why the "trailing side" \
of the trajectory is linear.M P2017-06-26T06:17:34Z[✓] Solve a 2D heat equation inside a circular domain?
http://community.wolfram.com/groups/-/m/t/1126696
I am trying to solve the Heat Equation in 2D for a circular domain and I used the example attached, however, for some reason I do not get any answer from it, and in principle, it seems that I am following the same steps as in the original document from wolfram tutorials. Any help will be much appreciated. I am using version 11.1.1David Quesada2017-06-23T17:34:18ZMotion of a classical particle in a box (2 D and 3 D)
http://community.wolfram.com/groups/-/m/t/1127402
# https://wolfr.am/mAuOX0XK
This repository was made for the Homework Assignment for Wolfram Summer School 2017.
The "FunwithPhysicsin2D.nb" file in this repository contains code for implementing
the steps described in this readme file.
## Author: Bhubanjyoti Bhattacharya
## Date: June 21,2017
## Motion of a classical particle in a box (2 dimensions)
Here we will describe the motion of a classical particle inside a box with hard walls.
The particle will be represented by a single unit of a 2D (or 3D) raster. The interactions
with the walls will be considered elastic, i.e. such interactions simply reverse the direction
of motion perpendicular to the wall.
### The first step is to create a box and a point particle with given coordinates within the box.
We first create a 2-dimensional raster with one of the elements highlighted using a different color:
```
mybox[{mx_Integer, ny_Integer}, {px_Integer, qy_Integer}] :=
Graphics[Raster[
ReplacePart[
ConstantArray[{1, 1, 1}, {ny, mx}], {qy, px} -> {1, 0, 0}]],
Frame -> True, FrameTicks -> None];
mybox[{20, 10}, {5, 5}]
```
![basic_raster][1]
### The second step is to animate this box
(Note that the .nb file in this repository has more steps (more detailed description of the process I followed))
(Note also that we will use discrete time steps to describe the motion, to use this in conjunction with the Raster function)
```
mytimeAnimatedbox[{mx_Integer, ny_Integer}, {x0_Integer,
y0_Integer}, {vx_Integer, vy_Integer}] :=
Animate[mybox[{mx, ny}, {x0 + vx t, y0 + vy t}], {t, 0,
Min[(mx - x0)/vx, (ny - y0)/vy], 1}, AnimationRunning -> False];
mytimeAnimatedbox[{20, 10}, {5, 5}, {1, 1}]
```
![animated_figure][2]
Above we made a 20 x 10 raster in 2 dimensions. The particle is started with coordinates (5,5).
The new function takes values (vx,vy) which describe the speed of the particle in x and y directions respectively.
### The third step is to figure out what happens after collisions with a wall.
The particle's motion in 2D can be broken down into two independent motions in the x and y directions.
In our simple case these two motions are similar to each other. We can therefore describe both motions
with the same function.
Here we will try to figure out the function that describes the position of the particle 'n' time steps after it starts.
The idea is simple as follows:
* If incrementing the position by any number of time steps does not result in the particle hitting the wall boundaries,
then the particle's position follows the simple rule $x = x_0 + v_x t$
* If the particle hits a wall, we assume that the collision is elastic, so its velocity perpendicular to the wall
simply changes sign.
The above two rules can be implemented using the following function:
```
posn[pos_Integer, v_Integer, posmax_Integer, t_Integer] :=
1 + If[EvenQ[Floor[(pos + v (t - 1))/(posmax - 1)]],
Mod[pos + v (t - 1),
posmax -
1], (posmax - 1) Floor[(pos + v (t - 1))/(posmax - 1) + 1] -
pos - v (t - 1)];
```
In order to understand the above function we can plot it as a function of time steps. Below
we plot it for the first 50 time steps:
```
ListLinePlot[Table[{n, posn[5, 1, 10, n]}, {n, 0, 50}]]
```
![position_vs_time_plot][3]
### The final step is to put all of this together to actually obtain the result
The code that creates the two-dimensional box for us is as follows:
```
myFinalAnimatedbox[{m_Integer, n_Integer}, {x_Integer,
y_Integer}, {vx_Integer, vy_Integer}, tt_Integer] :=
Animate[mybox[{m, n}, {posn[x, vx, m, t], posn[y, vy, n, t]}], {t,
0, tt, 1}, AnimationRate -> 20];
```
![final_animated_figure][4]
## We will extend this construction to 3 dimensions
### Instead of using a 2D Raster we use a 3D Raster
Using the same techniques as before we can construct the function that generates a free particle in a 3D box:
```
my3DFinalbox[{m_Integer, n_Integer, o_Integer}, {x_Integer, y_Integer,
z_Integer}, {vx_Integer, vy_Integer, vz_Integer}, tt_Integer] :=
Animate[my3Dbox[{m, n, o}, {posn[x, vx, m, t], posn[y, vy, n, t],
posn[z, vz, o, t]}], {t, 0, tt, 1}, AnimationRate -> 5,
AnimationRunning -> True];
my3DFinalbox[{10, 10, 10}, {5, 5, 3}, {1, 1, 1}, 100]
```
![3D_animated_figure][5]
More cool examples in 3D are here: https://wolfr.am/mARXbPv7
Edited to remove padding in the figures for the collisions to look closer to real, using:
``` PlotRangePadding -> None```
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5693Fig1.png&userId=1081732
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5742Fig2.gif&userId=1081732
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Fig3.png&userId=1081732
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1865Fig4.gif&userId=1081732
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=8935Fig5.gif&userId=1081732Bhubanjyoti Bhattacharya2017-06-24T00:46:58ZBuilding a simple Wolfram Language code for Tensorial/Vectorial Calculus
http://community.wolfram.com/groups/-/m/t/1127218
There are a lot of Mathematica packages for vector/tensor calculus: [Tensorial][1], [Advanced Tensor Analysis][2], [Ricci][3], [TensoriaCalc][4], [grt][5], [xAct][6] are just a few mentions.
The main thing I don't like about these packages is that the majority of them (if not all) are not up to date with the current Mathematica version. Some of them haven't seen an update in decades. The other major drawback of these packages is the cumbersome notation, declaration and the fact that almost all are to be used with General Relativity in mind and use coordinate notation. The results are not really coordinate-free.
With this in mind, I have developed a "Package" that can deal with symbolic vectors/tensors in a coordinate-free form and the package is human-readable (some packages implementations in this matter are simply indecipherable).
In the first part, I'll present the code with simple explanations and the second part some simple examples.
These functions are needed to clear the OenOwnValues, DownValues, UpValues, SubValues. Mathematica has no built-in way of doing it.
(* Prevent evaluation *)
SetAttributes[{ClearOwnValues, ClearDownValues, ClearUpValues, ClearSubValues}, HoldFirst]
(* Always return true for commodity later *)
ClearOwnValues[var_Symbol] := (OwnValues@var = {}; True)
ClearDownValues[var_Symbol] := (DownValues@var = {}; True)
ClearUpValues[var_Symbol] := (UpValues@var = {}; True)
ClearSubValues[var_Symbol] := (SubValues@var = {}; True)
(* Delete Values of "f" if they match the input *)
ClearDownValues[expr:f_Symbol[___]] := (DownValues@f = DeleteCases[DownValues@f, _?(!FreeQ[First@#, HoldPattern@expr] &)]; True)
ClearUpValues[expr:f_Symbol[___]] := (UpValues@f = DeleteCases[UpValues@f, _?(!FreeQ[First@#, HoldPattern@expr] &)]; True)
ClearSubValues[expr:f_Symbol[___][___]] := (SubValues@f = DeleteCases[SubValues@f, _?(!FreeQ[First@#, HoldPattern@expr] &)]; True)
The Define-family functions are the main core of this package, they are used to define all kinds of relationship and can be easily expanded.
You can define a Symbol "var" or a function "fun[var, ___]" to be a certain "type". "var" cannot have OwnValues, otherwise it will be evaluated.
You can only define symbols. the function "fun" should be primarily used as script-like functions as Subscript, SuperHat, etc.
(* Prevent evaluation *)
SetAttributes[Define$Internal, HoldAll]
(*
Internal version of Define, all others versions are a call of this function.
The OwnValues and/or DownValues are cleared immediately, this step is to avoid rule-parsing.
The possible types are: Real, Imaginary, Tensor and Constant. For Tensor type additional parameters are needed: rank and dimension.
*)
Define$Internal[(var_Symbol /; ClearOwnValues@var) | (var:Except[Hold, fun_][head_Symbol /; ClearOwnValues@head, ___] /; ClearDownValues@var),
type:"Real" | "Imaginary" | "Tensor" | "Constant",
rank_Integer:2, dim_Integer:3] := Module[{tag},
(* All expressions are defined as UpValues, assign it to the corresponding tag *)
(* UpValues cannot be deeply nested, hence the need to assign it to the "tag" *)
tag = If[Head@var === Symbol, var, fun];
Which[
type === "Real", (* Typical properties needed for real quantities *)
Evaluate@tag /: Element[var, Reals] = True;
Evaluate@tag /: Re[v:var] := v;
Evaluate@tag /: Im[v:var] := 0;
Evaluate@tag /: Conjugate[v:var] := v;
Evaluate@tag /: Abs[v:var] := RealAbs@v;
,
type === "Imaginary", (* Typical properties needed for Imaginary quantities *)
Evaluate@tag /: Element[var, Reals] = False;
Evaluate@tag /: Re[v:var] = 0;
Evaluate@tag /: Im[v:var] := v/I;
Evaluate@tag /: Conjugate[v:var] := -v;
Evaluate@tag /: Abs[v:var] := RealAbs@Im@v;
,
type === "Tensor", (* For compativility with Mathematica current Tensor-functions *)
Evaluate@tag /: ArrayQ@var = rank != 0;
Evaluate@tag /: TensorQ@var = rank != 0;
Evaluate@tag /: MatrixQ@var = rank == 2;
Evaluate@tag /: VectorQ@var = rank == 1;
Evaluate@tag /: ListQ@var = rank != 0;
Evaluate@tag /: ScalarQ@var = rank == 0;
Evaluate@tag /: TensorRank@var = rank;
Evaluate@tag /: TensorDimensions@var = ConstantArray[dim, {rank}];
Evaluate@tag /: Element[var, Arrays@TensorDimensions@var] = True;
Evaluate@tag /: Element[var, Matrices@{dim, dim}] = rank == 2;
Evaluate@tag /: Element[var, Vectors@dim] = rank == 1;
,
type === "Constant", (* A constant has zero "derivative" *)
Evaluate@tag /: ConstantQ@var = True;
Evaluate@tag /: grad@var = 0;
Evaluate@tag /: div@var = 0;
Evaluate@tag /: curl@var = 0;
Evaluate@tag /: DotNabla[_, var] = 0;
Evaluate@tag /: D[var, __] = 0;
Evaluate@tag /: Dp[var, __] = 0;
Evaluate@tag /: Dt[var, ___] = 0;
Evaluate@tag /: Delta[var] = 0;
,
True, $Failed]
]
(* Assign more than one variable *)
Define$Internal[vars__ /; Length@{vars} > 1, type:"Real" | "Imaginary" | "Tensor" | "Constant", rank_Integer:2, dim_Integer:3] :=(
Define$Internal[#, type, rank, dim] & /@ Hold /@ Hold@vars // ReleaseHold;) (* Hacky-way of passing Hold down *)
Define$Internal[Hold@var_, type:"Real" | "Imaginary" | "Tensor" | "Constant", rank_Integer:2, dim_Integer:3] := Define$Internal[var, type, rank, dim]
(* Main Define functions *)
SetAttributes[{DefineReal, DefineImaginary, DefineTensor, DefineConstant}, HoldAll]
DefineReal[vars__] := Define$Internal[vars, "Real"]
DefineImaginary[vars__] := Define$Internal[vars, "Imaginary"]
DefineTensor[vars__, rank_Integer:2, dim_Integer:3] := Define$Internal[vars, "Tensor", rank, dim]
DefineConstant[vars__] := Define$Internal[vars, "Constant"]
(* Define multiple things at once *)
SetAttributes[{DefineRealTensor, DefineConstantTensor, DefineRealConstantTensor}, HoldAll]
DefineRealTensor[vars__, rank_Integer:2, dim_Integer:3] := (DefineReal@vars; DefineTensor[vars, rank, dim];)
DefineConstantTensor[vars__, rank_Integer:2, dim_Integer:3] := (DefineConstant@vars; DefineTensor[vars, rank, dim];)
DefineRealConstantTensor[vars__, rank_Integer:2, dim_Integer:3] := (DefineReal@vars; DefineConstant@vars; DefineTensor[vars, rank, dim];)
Now it is possible to define tensorial variables and make them behave as tensor with current Mathematica implementation.
Some built-in functions needed to be redefined to work with symbolic tensors. An example of this necessity is:
(* Define two tensors a and b *)
DefineTensor[a, b, 2]
TensorRank[2*a - 3*b] (* Return 2 *)
TensorQ[a] (* Return True *)
TensorQ[2*a] (* Return False *)
Mathematica function TensorQ don't know that a scalar times a tensor is a tensor. The following code is for refifinition:
Unprotect[TensorQ, VectorQ, TensorRank, Dot, Cross, TensorProduct]
(* Numbers are always scalar/constant. These functions are not built-in. *)
ScalarQ[a_?NumericQ] := True
ConstantQ[a_?NumericQ] := True
(* Complexes *)
TensorQ[(Re|Im|Conjugate)[a_]] := TensorQ@a
VectorQ[(Re|Im|Conjugate)[a_]] := VectorQ@a
ScalarQ[(Re|Im|Conjugate)[a_]] := ScalarQ@a
ConstantQ[(Re|Im|Conjugate)[a_]] := ConstantQ@a
TensorRank[(Re|Im|Conjugate)[a_]] := TensorRank@a
(* Plus *)
TensorQ[(a_?TensorQ) + (b_?TensorQ)] := TensorRank@a === TensorRank@b
VectorQ[(a_?VectorQ) + (b_?VectorQ)] := True
ScalarQ[(a_?ScalarQ) + (b_?ScalarQ)] := True
ConstantQ[(a_?ConstantQ) + (b_?ConstantQ)] := True
(* Times *)
TensorQ[(a__?ScalarQ) * (b_?TensorQ)] := True
VectorQ[(a__?ScalarQ) * (b_?VectorQ)] := True
ScalarQ[(a__?ScalarQ) * (b_?ScalarQ)] := True
ConstantQ[(a_?ConstantQ /; ScalarQ@a) * (b_?ConstantQ)] := True
(* Pass scalars out of Dot and Cross, as is done in TensorProduct *)
Dot[a___, Times[b_, s__?ScalarQ], c___] := Times[s, Dot[a, b, c]]
Cross[a_, Times[b_, s__?ScalarQ]] := Times[s, Cross[a, b]]
Cross[Times[a_, s__?ScalarQ], b_] := Times[s, Cross[a, b]]
(* Dot *)
TensorQ[(a_?TensorQ) . (b_?TensorQ)] /; TensorRank@a + TensorRank@b - 2 >= 1 := True
VectorQ[(a_?TensorQ) . (b_?TensorQ)] /; TensorRank@a + TensorRank@b - 2 == 1 := True
ScalarQ[(a_?VectorQ) . (b_?VectorQ)] := True
ConstantQ[(a_?ConstantQ /; TensorQ@a) . (b_?ConstantQ /; TensorQ@b)] := True
(* Automatically evaluate to zero, as TensorProduct *)
Dot[a___, 0, b___] := 0
(* Cross *)
TensorQ[(a_?VectorQ) \[Cross] (b_?VectorQ)] := True
VectorQ[(a_?VectorQ) \[Cross] (b_?VectorQ)] := True
ConstantQ[(a_?ConstantQ /; VectorQ@a) \[Cross] (b_?ConstantQ /; VectorQ@b)] := True
(* Perpendicular vectors automatically evalute to zero *)
Cross[a_?VectorQ, a_?VectorQ] := 0
(* Automatically evaluate to zero, as TensorProduct *)
Cross[a___, 0, b___] := 0
(* Return single argument as Dot, Times and TensorProduct *)
Cross[a_] := a
(* Tensor Product *)
TensorQ[(a_?TensorQ) \[TensorProduct] (b_?TensorQ)] := True
ConstantQ[(a_?ConstantQ /; TensorQ@a) \[TensorProduct] (b_?ConstantQ /; TensorQ@b)] := True
(* Power *)
ScalarQ@Power[a_?ScalarQ, b_?ScalarQ] := True
ScalarQ[1/a_?ScalarQ] := True
ConstantQ@Power[a_?ConstantQ /; ScalarQ@a, b_?ConstantQ /; ScalarQ@b] := True
ConstantQ[1/a_?ConstantQ /; ScalarQ@a] := True
(* grad *)
grad[_?ConstantQ] := 0
TensorQ@grad[a_?ScalarQ] := True
VectorQ@grad[a_?ScalarQ] := True
TensorQ@grad[a_?TensorQ] := True
TensorRank@grad[a_?ScalarQ] := 1
TensorRank@grad[a_?TensorQ] := TensorRank@a + 1
(* div *)
div[_?ConstantQ] := 0
TensorQ@div[a_?TensorQ /; TensorRank@a >= 2] := True
VectorQ@div[a_?TensorQ /; TensorRank@a == 2] := True
ScalarQ@div[a_?VectorQ] := True
TensorRank@div[a_?TensorQ] := TensorRank@a - 1
(* curl *)
curl[_?ConstantQ] := 0
TensorQ@curl[a_?VectorQ] := True
VectorQ@curl[a_?VectorQ] := True
TensorRank@curl[a_?VectorQ] := 1
(* DotNabla *)
DotNabla[_, _?ConstantQ] := 0
(* Dp *)
Dp[_?ConstantQ, args__] := 0
TensorQ@Dp[a_, args__] := TensorQ@a
VectorQ@Dp[a_, args__] := VectorQ@a
ScalarQ@Dp[a_, args__] := ScalarQ@a
TensorRank@Dp[a_, args__] := TensorRank@a
(* Delta *)
Delta[_?ConstantQ] := 0
TensorQ@Delta[a_] := TensorQ@a
VectorQ@Delta[a_] := VectorQ@a
ScalarQ@Delta[a_] := ScalarQ@a
TensorRank@Delta[a_] := TensorRank@a
(* List *)
Dp[a_List, args__] := Dp[#, args] & /@ a
(* Don't assume anything is a scalar/constant *)
ScalarQ[a_] := False
ConstantQ[a_] := False
Protect[TensorQ, VectorQ, TensorRank, Dot, Cross, TensorProduct]
Where we have defined the Tensor-functions: grad, div, curl which are self-explanatory; Dp is the partial derivative, Delta gives the variation of a quantity, somewhat related to Dp, and DotNabla is (for the lack of better name) the convective derivative.
For better print, we'll define the following notation:
(* Hacky-way to create parenthesis *)
MakeBoxes[Parenthesis[a_], _] := MakeBoxes[a.1][[1, 1]]
MakeBoxes[grad[a_], form:TraditionalForm] := TemplateBox[{MakeBoxes@Parenthesis@a}, "grad", Tooltip -> Automatic,
DisplayFunction :> (RowBox@{"\[Del]", #1} &)]
MakeBoxes[div[a_], form:TraditionalForm] := TemplateBox[{MakeBoxes@Parenthesis@a}, "div", Tooltip -> Automatic,
DisplayFunction :> (RowBox@{"\[Del]\[CenterDot]", #1} &)]
MakeBoxes[curl[a_], form:TraditionalForm] := TemplateBox[{MakeBoxes@Parenthesis@a}, "curl", Tooltip -> Automatic,
DisplayFunction :> (RowBox@{"\[Del]\[Cross]", #1} &)]
MakeBoxes[DotNabla[a_, b_], form:TraditionalForm] := TemplateBox[{MakeBoxes@Parenthesis@a, MakeBoxes@Parenthesis@b}, "DotNabla", Tooltip -> Automatic,
DisplayFunction :> (RowBox@{"(", #1, "\[CenterDot]\[Del]", ")", #2} &)]
MakeBoxes[Delta[a_], form:TraditionalForm] := TemplateBox[{MakeBoxes@Parenthesis@a}, "Delta", Tooltip -> Automatic,
DisplayFunction :> (RowBox@{"\[Delta]", #1} &)]
And now for the most important part of the code, the function ExpandDerivative, which as the name suggest, expand the derivative-like functions:
(* Expand Derivatives/Vectors/Tensors on expr and apply custom rules *)
ExpandDerivative[expr_, rules_:{}] := expr //. Flatten@{
(* Custom Rules *)
rules,
(* Linearity *)
(op:grad|div|curl|Delta|Inactive[grad]|Inactive[div]|Inactive[curl]|Inactive[Delta]|Re|Im|Conjugate)[a_ + b__] :> op@a + op[+b],
(op:Dp|Inactive[Dp]|Sum|Inactive[Sum])[a_ + b__, arg__] :> op[a, arg] + op[+b, arg],
(op:Times|Dot|TensorProduct|Cross|DotNabla|Inactive[DotNabla])[a___, b_ + c__, d___] :> op[a, b, d] + op[a, +c, d],
(op:grad|div|curl|Delta|Inactive[grad]|Inactive[div]|Inactive[curl]|Inactive[Delta]|Re|Im|Conjugate)[(op\[CapitalSigma]:Sum|Inactive[Sum])[a_, args__]] :> op\[CapitalSigma][op@a, args],
(* Sum *)
(op:Sum|Inactive[Sum])[s_*a_, v_Symbol] /; FreeQ[s, v] :> s*op[a, v],
(op:Sum|Inactive[Sum])[s_, v_Symbol] /; FreeQ[s, v] :> s*op[1, v],
(* Complexes *)
Conjugate@(op:Times|Dot|Cross|TensorProduct)[a_, b__] :> op[Conjugate@a, Conjugate@op@b], (* Pass Conjugate to child *)
(op:grad|div|curl|Delta|Inactive[grad]|Inactive[div]|Inactive[curl]|Inactive[Delta])[(opC:Re|Im|Conjugate)[a_]] :> opC@op@a, (* Pass Conjugate/Re/Im to parent *)
Dp[(op:Re|Im|Conjugate)[a_], v_] :> op@Dp[a, v], (* Pass Conjugate/Re/Im to parent *)
(* Triple Product *)
Cross[a_?VectorQ, Cross[b_?VectorQ, c_?VectorQ]] :> b*a.c - c*a.b,
Cross[Cross[a_?VectorQ, b_?VectorQ], c_?VectorQ] :> b*a.c - a*b.c,
(* Quadruple Product *)
Dot[Cross[a_?VectorQ, b_?VectorQ], Cross[c_?VectorQ, d_?VectorQ]] :> (a.c)*(b.d) - (a.d)*(b.c),
(* Second Derivatives *)
div@curl[_?VectorQ] :> 0,
curl@grad[_?ScalarQ | _?VectorQ] :> 0,
(* grad *)
grad[(s_?ScalarQ) * (b_)] :> s*grad@b + b\[TensorProduct]grad@s,
grad[(a_?VectorQ) . (b_?VectorQ)] :> a\[Cross]curl@b + b\[Cross]curl@a + DotNabla[a, b] + DotNabla[b, a], (* Use physics form *)
grad[(s_?ScalarQ) ^ (n_?ConstantQ)] :> n*s^(n-1)*grad@s,
grad[(n_?ConstantQ /; ScalarQ@n) ^ (s_?ScalarQ)] :> n^s*Log[n]*grad@s,
(* div *)
div[(s_?ScalarQ) * (b_?TensorQ)] :> s*div@b + b.grad@s,
div[(a_?VectorQ) \[TensorProduct] (b_?VectorQ)] :> DotNabla[b, a] + a*div@b,
div[(a_?VectorQ) \[Cross] (b_?VectorQ)] :> b.curl@a - a.curl@b,
(* curl *)
curl[(s_?ScalarQ) * (b_?VectorQ)] :> grad[s]\[Cross]b + s*curl@b,
curl[(a_?VectorQ) \[Cross] (b_?VectorQ)] :> div[a\[TensorProduct]b - b\[TensorProduct]a],
(* DotNabla *)
DotNabla[(s_?ScalarQ) * (b_?VectorQ), c_?VectorQ] :> s*DotNabla[b, c],
DotNabla[a_?VectorQ, (\[Beta]_?ScalarQ)*(c_?VectorQ)] :> c*a.grad@\[Beta] + \[Beta]*DotNabla[a, c],
(* Dp *)
Dp[(op:Times|Dot|Cross|TensorProduct)[a_, b__], v_Symbol] :> op[Dp[a, v], b] + op[a, Dp[op@b, v]],
Dp[Power[a_?ScalarQ, b_?ScalarQ], v_Symbol] :> Power[a, b-1]*b*Dp[a, v] + Power[a,b]*Log[a]*Dp[b, v],
(* Delta *)
Delta[(op:Times|Dot|Cross|TensorProduct)[a_, b__]] :> op[Delta@a, b] + op[a, Delta@op@b],
Delta@Power[a_?ScalarQ, b_?ScalarQ] :> Power[a, b-1]*b*Delta[a] + Power[a,b]*Log[a]*Delta[b]
}
Some examples. Calculating the divergent of Maxwell Stress Tensor in vaccum:
![MST][7]
Calculate the Einstein-Laub force density for linear dielectrics:
![EL][8]
Testing Poynting theorem in vaccum (no sources):
![P][9]
Where the first argument is the quantity being "tested".
Many other uses are possible and is fairly easy to extended some definitions.
[1]: http://library.wolfram.com/infocenter/Demos/434/
[2]: http://library.wolfram.com/infocenter/MathSource/8827/
[3]: https://sites.math.washington.edu/~lee/Ricci/
[4]: http://www.stargazing.net/yizen/Tensoria.html
[5]: http://www.vaudrevange.com/pascal/grt/
[6]: http://www.xact.es/
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Fig1.png&userId=845022
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Fig2.png&userId=845022
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Fig3.png&userId=845022Thales Fernandes2017-06-23T22:49:11ZImage Correlation in Particle Image Velocimetry is behaving strangely
http://community.wolfram.com/groups/-/m/t/1124830
Note: i have posted the same question on MSE: https://mathematica.stackexchange.com/questions/148739/image-correlation-in-particle-image-velocimetry-is-behaving-strangely
I have been trying to implement a code for determining flow-field using Particle Image Velocimetry.
In this technique a user can take two images. Using small windows from the first image (which act as kernels) and search windows from the second image one can determine the cross-correlation which simply tells where the small window moves within a given search window. This process can be repeated between the second and the third image and so on.
A clear detail can be found in the second paragraph:
http://www.physics.emory.edu/faculty/weeks//idl/piv.html
I have two images here (posting as a gif, you can save this and import it in mathematica as a list of two images):
![enter image description here][1]
I use the following code to generate the flow-field.
windowsize = 32; (* select window size *)
imgDim = ImageDimensions[images[[1]]]; (* dimensions for the images *)
imgone = ImageCrop[images[[1]], imgDim - (2*windowsize)]; (* removing
border from first image: we dont want to create windows at the borders *)
firstimgsplits = ImagePartition[imgone, windowsize];
(* breaking the first image into small windows *)
searchwindows = ImagePartition[images[[2]], windowsize*3, {windowsize, windowsize}];
(* breaking the second image into search windows *)
{dim1, dim2} = Dimensions@searchwindows;
H = Last@ImageDimensions[imgone];
(* get midpoints of the windows of the first frame *)
midptsFirst = Flatten[Table[{i windowsize + windowsize/2,
j (windowsize) + windowsize/2}, {i, 1, dim1}, {j, 1, dim2}], 1];
(* pts in the second image where correlation is max *)
correlationPts = Table[MorphologicalComponents[ImagePad[
ImageAdjust@ImageCorrelate[searchwindows[[i + 1, j + 1]],
firstimgsplits[[i + 1, j + 1]], NormalizedSquaredEuclideanDistance,
PerformanceGoal -> "Quality"], {{j*windowsize, H - windowsize (j + 1)},
{H - windowsize (i + 1), windowsize i}}, White]]~Position~0,
{i, 0, dim1 - 1}, {j, 0, dim2 - 1}]~Flatten~2;
now when i create a flow-field from the displacement of points (red pts in the first image and cyan pts in second image) I can see that something is not right. My eyes tell me that the particles have move in a direction different from the ones found using ImageCorrelate
This should be rather straightforward for Mathematica. I do not know what is wrong in this simple piece of code. I will appreciate if someone can help me with this question.
ListAnimate@{Show[images[[1]], Graphics[{Red, Point@midptsFirst}]],
Show[images[[2]], Graphics[{Cyan, PointSize[Medium], Point@correlationPts,
{Pink, Arrowheads[Small], MapThread[Arrow[{#1, #2}] &, {midptsFirst, correlationPts}]}}]]}
![enter image description here][2]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Testpiv3.gif&userId=942204
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1195Picture1.png&userId=942204Ali Hashmi2017-06-20T12:15:21ZHiggs Boson Classification via Neural Network
http://community.wolfram.com/groups/-/m/t/1016315
![enter image description here][1]
# Introduction
So here is a simple approach that applies Wolfram Language machine learning functions to a classification problem for finding possible Higgs particles. It uses a labeled data set with 30 numerical physical attributes (things like measured spins, angles, energies, etc.) and with labels being either 'signal' (s) or 'background' (b). The attached notebook runs through a sample analysis in the Wolfram Language: importing the training data, cleaning it up for using it, setting up a neural network, training the network with the data, and finally checking how well the trained neural network does at making predictions. Here is the description of data from the source website at KAGGLE:
> Discovery of the long awaited Higgs boson was announced July 4, 2012 and confirmed six months later. 2013 saw a number of prestigious awards, including a Nobel prize. But for physicists, the discovery of a new particle means the beginning of a long and difficult quest to measure its characteristics and determine if it fits the current model of nature.
> A key property of any particle is how often it decays into other particles. ATLAS is a particle physics experiment taking place at the Large Hadron Collider at CERN that searches for new particles and processes using head-on collisions of protons of extraordinarily high energy. The ATLAS experiment has recently observed a signal of the Higgs boson decaying into two tau particles, but this decay is a small signal buried in background noise.
> The goal of the Higgs Boson Machine Learning Challenge is to explore the potential of advanced machine learning methods to improve the discovery significance of the experiment. No knowledge of particle physics is required. Using simulated data with features characterizing events detected by ATLAS, your task is to classify events into "tau tau decay of a Higgs boson" versus "background."
> The winning method may eventually be applied to real data and the winners may be invited to CERN to discuss their results with high energy physicists.
# References and sources
- [Learning to discover: the Higgs boson machine learning challenge][2]
- [KAGGLE: Higgs Boson Machine Learning Challenge][3]
- [Opendata ATLAS][4]
# Training data
Import the training data:
training = Import["D:\\machinelearning\\higgs\\training\\training.csv", "Data"];
Dimensions[training]
`{250001, 33}`
Look at the data fields (they are described in the pdf link above). "EventId" should not be used as part of the training, since it has no predictive value. The last column "Label" is the classification (s=signal, b=background)
training[[1]]
`{"EventId", "DER_mass_MMC", "DER_mass_transverse_met_lep", \
"DER_mass_vis", "DER_pt_h", "DER_deltaeta_jet_jet", \
"DER_mass_jet_jet", "DER_prodeta_jet_jet", "DER_deltar_tau_lep", \
"DER_pt_tot", "DER_sum_pt", "DER_pt_ratio_lep_tau", \
"DER_met_phi_centrality", "DER_lep_eta_centrality", "PRI_tau_pt", \
"PRI_tau_eta", "PRI_tau_phi", "PRI_lep_pt", "PRI_lep_eta", \
"PRI_lep_phi", "PRI_met", "PRI_met_phi", "PRI_met_sumet", \
"PRI_jet_num", "PRI_jet_leading_pt", "PRI_jet_leading_eta", \
"PRI_jet_leading_phi", "PRI_jet_subleading_pt", \
"PRI_jet_subleading_eta", "PRI_jet_subleading_phi", "PRI_jet_all_pt", \
"Weight", "Label"}`
Sample vector:
training[[2]]
`{100000, 138.47, 51.655, 97.827, 27.98, 0.91, 124.711, 2.666, 3.064, \
41.928, 197.76, 1.582, 1.396, 0.2, 32.638, 1.017, 0.381, 51.626, \
2.273, -2.414, 16.824, -0.277, 258.733, 2, 67.435, 2.15, 0.444, \
46.062, 1.24, -2.475, 113.497, 0.00265331, "s"}`
Set up a simple neural network (this can be tinkered with to improve the results):
net=NetInitialize[
NetChain[{
LinearLayer[3000],Ramp,LinearLayer[3000],Ramp,LinearLayer[2],SoftmaxLayer[]
},
"Input"->{30},
"Output"->NetDecoder[{"Class",{"b","s"}}]
]]
![enter image description here][5]
Set up the training data:
data = Map[Take[#, {2, 31}] -> Last[#] &, Drop[training, 1]];
Numerical vectors that each point to a classification (s or b):
RandomSample[data, 3]
`{{87.06, 23.069, 67.711, 162.488, -999., -999., -999., 0.903, 4.245,
318.43, 0.523, 0.839, -999., 105.019, -0.404, 1.612,
54.943, -0.898, 0.855, 13.541, 1.729, 409.232, 1,
158.469, -1.363, -1.762, -999., -999., -999., 158.469} ->
"b", {163.658, 55.559, 116.84, 50.019, -999., -999., -999., 2.855,
38.623, 130.132, 0.906, 1.359, -999., 51.993, 0.36, 2.142,
47.112, -0.932, -1.596, 22.782, 2.663, 191.557, 1, 31.026, -2.333,
0.739, -999., -999., -999., 31.026} ->
"b", {100.248, 27.109, 60.729, 132.094, -999., -999., -999., 1.405,
10.063, 218.474, 0.541, 1.414, -999., 62.519, -0.401, -1.974,
33.817, -0.893, -0.657, 54.857, -1.298, 396.228, 1,
122.137, -2.369, 1.689, -999., -999., -999., 122.137} -> "s"}`
# Training
Length[data]
`250000`
{tdata,vdata}=TakeDrop[data,240000];
result=NetTrain[net,tdata,TargetDevice->"GPU",ValidationSet->Scaled[0.1],MaxTrainingRounds->1000]
![enter image description here][6]
DumpSave["D:\\machinelearning\\higgs\\higgs.mx", result];
# Testing
This is the test data (unlabeled):
test = Import["D:\\machinelearning\\higgs\\test\\test.csv", "Data"];
Extract the validation data:
validate = Map[Take[#, {2, 31}] &, Drop[test, 1]];
Predictions made on the unlabeled data:
result /@ RandomSample[validate, 5]
`{"b", "b", "s", "b", "b"}`
Sample from the labeled data and compute classifier statistics:
cm = ClassifierMeasurements[result, RandomSample[vdata, 1000]]
![enter image description here][7]
cm["Accuracy"]
`0.846`
Plot the confusion matrix:
cm["ConfusionMatrixPlot"]
![enter image description here][8]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ATLASEXP_image.png&userId=20103
[2]: https://higgsml.lal.in2p3.fr/files/2014/04/documentation_v1.8.pdf
[3]: https://www.kaggle.com/c/higgs-boson
[4]: http://opendata.cern.ch/about/ATLAS
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5393ty56yetjhw4.png&userId=20103
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5256567urytere.png&userId=20103
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=rtyee567rutyrd.png&userId=20103
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ghf56utyehw65eyhrw.png&userId=20103Arnoud Buzing2017-02-17T14:16:25ZSimulating the Universe (an alternative approach)
http://community.wolfram.com/groups/-/m/t/982238
This is the Community Post presenting the 2016 Wolfram Summer School Project of József Konczer, a Hungarian PhD student of theoretical physics, assisted by Todd Rowland.
The project notebook with the code is attached to this post.
## Fundamental theories in Physics ##
The Holy Grail of theoretical physics would be a theory, which could describe all known phenomena in every situation. This would be called the Theory of Everything ([ToE][1]). In present times, all approaches to a ToE candidate—like String/M theory, Loop quantum gravity ([and many others][2])—incorporate quantum mechanics from the beginning. This approach served surely very useful effective theories like the Standard Model itself, however this approach does not help to find an underlying deterministic theory from which quantum effects would emerge, like Einstein dreamed.
The standard argument for the unavoidability of quantum mechanics and uncertainty is the result of the [Bohr–Einstein debates][3] which was "won" by Bohr because the [EPR paradox][4] was tested by [measurements][5] (there is an ongoing test as well called the [Big Bell test][6]) and the result excludes local hidden variables. (The arguments can be found in details [here][7] and [here][8]). It has to be emphasized that in these arguments locality is a key assumption.
----------
One can ask, what kind of nonlocal theory can be constructed, which still have predictive power, and is not based of conspiracy of Nature? There are only a few researchers who post this question openly, one is them is Gerard ’t Hooft who published recently a [book][9] based on collected [papers][10]. His approach is conservative (from main stream point of view), and mainly suggests, that if one quantize time, then in same basis the unitary time evolution considered in quantum mechanics become a permutation operator between special basis elements or "beable states". However not every time evolution has this property and typically the interacting theories fail to fulfill the requirements. A more bold, however much less understood theory (or framework) is what Stephen Wolfram described in [NKS][11]. The brief summary of his ideas can be found in this [blog post][12]. The main idea here, is to find a simple data structure, for instance a sparse graph, a simple discrete dynamics governed by a replacement rule, an interpretation for this cellular automaton (CA) and then investigate if we can observe similar phenomena what we see in our Universe.
## Hints pointing toward the CA description ##
This is a speculative and highly subjective argumentation, however I think this blog post is an appropriate place to articulate my motives and do not stick to the objective style of research papers.
So first of all without going too deep into metaphysics I don't want to state things about Nature it self, I only talk about our description of it.
The first successful and highly useful description of Nature was Newtons description, which heavily used the idea of continuity of space and time. This idea proved to be useful in description of solids, liquids and gases as well. However some ideas became so useful and popular, that we forgot that all of them is only our description and not Nature its self. Quantum effects and effects related to relativity reminded us, that under non standard circumstances old descriptions can fail. As I see, quantum effects have two message for us. The first is that quantities could be and should be described by discrete variables, and secondly that under a certain level systems can not be observed without disturbance. If we take into account that space and time even as we observe them are influenced by these quantized quantities, it is straightforward to deduce, that space and time should be quantized as well.
Before these findings in physics probability theory was developed. First it was used to analyze gambling situations where one don't know every information about the system. From this point of view it is clearly a strategy to manage our ignorance toward some details in a deterministic situation. However after some point physicists started to use probabilities as they were part of the phenomena, and not only our clever way to make inference from systems where we do not know every detail. Many physicists—including myself—where educated in the spirit of frequentist [interpretation of probability theory][13], which is useful in some cases but as I think, prevents some questions to ask. I think this promotion of probability to an objective property contributed to the interpretation of quantum mechanics as well. As Jayns wrote in his [book][14]:
> In current quantum theory, probabilities express our own ignorance due to our failure
to search for the real causes of physical phenomena; and, worse, our failure even to think
seriously about the problem. This ignorance may be unavoidable in practice, but in our
present state of knowledge we do not know whether it is unavoidable in principle; the
‘central dogma’ simply asserts this, and draws the conclusion that belief in causes, and
searching for them, is philosophically naive. If everybody accepted this and abided by it,
no further advances in understanding of physical law would ever be made; indeed, no such
advance has been made since the 1927 Solvay Congress in which this mentality became
solidified into physics. But it seems to us that this attitude places a premium on stupidity;
to lack the ingenuity to think of a rational physical explanation is to support the supernatural
view.
However even if one thinks, that theories incorporating quantum mechanics are "only" effective theories, probably we can get intuitions from them. There is a [recent result][15] from [AdS/CFT][16] correspondence as an example for the [EPR=ER][17] conjecture. And a connecting [paper][18] of Leonard Susskind, concluding that:
> What all of this suggests to me, and what I want to suggest to you, is that quantum mechanics and gravity are far more tightly related than we (or at least I) had ever imagined. The essential nonlocalities of quantum mechanics (the need for instantaneous communication in order to classically simulate entanglement) parallels the nonlocal potentialities of general relativity: ER=EPR.
The cited papers state, that spacetime structure can be understood as a net of entanglements, however maybe the statement can be reversed, and say that the phenomena of entanglement can be described by a nonlocal spacetime structure.
Among the mentioned hints, the existing theoretical constructions can help to find an appropriate interpretations as well. For example it can happen, that to describe our seemingly 3 dimensional space one has to describe space with higher effective dimensionality and interpret the entangled parts not just as connected regions, but as global structures in the extra dimensions.
After taking hints from existing theoretical constructions one can investigate, what kind of phenomena can appear in simple CA-s which mimic some parts of Nature.
Perhaps the most well known CA is Conway's [Game of Life][19] this is a 2D Cellular automaton where localized objects (called [spaceships][20] or gliders) can propagate, and can interact with each other. This behavior can remind us to particles, however the built in rectangular structure is reflected on the properties of spaceships, and there are no nonlocal connections between these object because of the locality of the rule.
Both problems can be solved, if one tries to construct a CA without built in topology. (This construction will be described in detail.)
Another nice feature of special CA-s called substitution systems, is that for a structure living in the automaton, can not observe the absolute number of steps, or other structures beside him, only the causal net of implemented changes can be recognized from inside. This feature unites the relative space and time for observers or structures inside the system. It can remind us to causal network description of General Relativity.
A third hint from CA point of view is the typical appearance of complex behavior, which can lead to an effective probabilistic description of the system with a higher symmetry what the framework originally allowed. (For example CA description of flows) From disorder new effective order can emerge possible with higher symmetry.
The conjectured computational irreducibility of CA would replace the promised "free will" possibility of quantum mechanics with a different but in some sense similar concept. In this framework the faith of the Universe would be determined, but even an observer outside the system—God if one wishes—could not know the consequences only by letting the simulation run up to the desired point.
Furthermore a multiway CA dynamics is compatible with the many world interpretation of quantum mechanics, with the advantage, that the splitting points of histories are not observer dependent. In this framework the overall dynamics is deterministic, however structures living always on one branch of the evolution will witness an unavoidable true random behavior from the inside point of view.
## Nature and our understanding of it ##
Of course it would be an arrogant attitude to force Nature to fulfill our philosophical expectations, however one can imagine how our description of it can change during time.
There are several situations what one can imagine:
- There is a deterministic description which is valid in any situation (which can appear in our Universe)
- This can be totally discreet
- Or it can be continuous partially or in whole
- It can be, that after some point a truly random (or at least appearing to us) mechanism will appear, which can not be unfolded
- Or it can happen that construction of laws to describe Nature will never come to an end, and our understanding of reality will based on infinite set of possibly deterministic rules.
- And of course it can happen that something unexpected will turn out.
Without favoring any of the listed cases above, my main point is that the very first situation, namely that our universe can be described as a deterministic discrete system is not totally excluded. And the most natural way to understand it can be a CA description.
## CA description candidate for our Universe ##
To have a CA description, one has to choose a data structure, a dynamics and an interpretation. (It has to be pointed out, that any CA can be simulated on another Turing complete CA with different interpretation of the states. Because of that any CA description is highly non unique. However one can try to choose a description which has the "simplest" interpretation.)
For a fundamental CA description one can choose simple graphs as data structure. This seems as a natural choice, because of its simplicity and because of its non fixed topology.
To have a chance to describe deterministic dynamics on this data structure, we even restrict the degree of nodes on the graph. One can try to find the threshold of complexity of the CA, and it seams that cubic graphs can already produce complicated enough structures. So one can set the data structure to a simple cubic graph.
The next step is to define an appropriate dynamics on this data structure. A natural approach is to introduce subgraph replacement rules, which means the following: If one finds a given subgraph pattern $H_1$ in the present graph $G$, then replace it with a compatible new graph $H_2$. It sounds simple, however there are many details which have to be fulfilled to get a dynamics with desired properties.
I mention here two properties of the patterns, which seems to be essential to get a substitution systems which generates a complex behavior and appears completely deterministic from the inside without the specification of the order of replacements in the system.
The first one is a **non overlapping property** of the pattern graph(s) $ H_1 $. This means, that $H_1$ has a special structure such as there is no cubic graph $G$, where two subgraphs can be found which are isomorphic to $H_1$ and have nonzero intersection. The following rule does not fulfill this requirement because there is a cubic graph, where two intersecting copies of $H_1$ can be found
![overlapping rule][21]
![Interesting patterns][22]
The second requirement is **non triviality**, which gives a constraint for $H_2$. In this case we wish to have $H_2$, that there exists a cubic graph $G$, which contains a subgraph $H_1$, where after the replacement $H_1 \rightarrow H_2$ there can be found a new pattern $H_1$, which intersects with $H_2$ but has parts outside $H_2$ as well. (Without this property only self similar or frozen (where there is no more pattern which can be changed) graphs can be generated from finite initial graphs.) The pictures show visually the requirement:
![enter image description here][23]
![enter image description here][24]
After setting some rule, which fulfill these requirements, we have to find an initial graph, apply the rule many times and find an interpretation for the result. It has to be pointed out, that the actual graph structure at a given step can not be observed from an inside point of view. What an inner observer, or a structure can explore is the causal structure which is generated by the replacements. (For details see [NKS chapter 9, section 13][25] )
This is similar to the [causal set program][26].
So a natural way of interpretation of the emerging causal net is that it is a discretization of some kind of spacetime. And local propagating disturbances relative to the overall average structure are particle like excitations, which can have nonlocal connections relative to the average large scale structure. However from AdS/CFT insights it can happen, that we have to interpret particles for example as global structures in a higher dimensional bulk spacetime, which have ends on a boundary-like smaller dimensional surface.
## My contribution to the project ##
During the 3 weeks of 2016 Wolfram summer school I set a framework where the steps of a substitution are precisely defined, and in which the substitutions can be effectively performed even for relatively big graphs. Furthermore I tested a numerical approach to measure the effective dimensionality of the emergent graph structure after sufficiently many steps.
Unfortunately I could not test this framework with rules which could give complex, deterministic behavior, so I could benchmark this machinery on a simple, point to triangle rule, which gives a fractal-like structure. If we interpret this graph as space, then this simple dynamics results a $D=\log(3)/\log(2)=1.58$ dimensional fractal space.
Here is a graph of the generated fractal Universe after 100 steps, started from a tetrahedron:
![Generated fractal Universe after 100 steps, started from a tetrahedron][27]
And the neighborhood structure of this space:
![Local structure in the fractal Universe][28]
## Further directions ##
This project to find deterministic CA description for our Universe is in its infant stage. The framework is more or less set, but it needs tremendous work to investigate possible dynamics and analyze the results of simulations.
An outline of a huge project would be the following:
- List the possible rules, which fulfill the non overlapping and non trivial conditions
- Investigate their long term behavior starting from simple initial graphs
- Find quantities and a method of their measurement which can be determined from generated causal graphs
- Find fixed points of the dynamics which preserve long scale dimensionality and possibly other quantities
- List and investigate local disturbances near these fixed points
- After setting an interpretation analyze the particle-like structures (gliders of this dynamics)
- Develop an effective field theory which can describe an effective behavior of the system near to the fixed points
- Match these field theories with the Standard Model of particle physics
- Find out new predictions of the derived effective field theories, which can be tested by measurements
## Conclusion ##
In my project I could set a framework and show a trivial example for a deterministic graph evolution model.
During the summer school I was not fortunate enough to find dynamics which produce complex behavior, however to find an appropriate rule seems reachable in the near future. Hopefully a dynamics producing complex topology would be interesting enough to inspire much more people and after some point a serious investigation of the field could be started.
I think personally, that proving or even disproving that this framework to describe Nature can be worked out is an extremely interesting challenge and deserves further theoretical research.
In the end I would like to thank my mentor Todd Rowland, and the whole Wolfram summer school team for the organization and I really hope that there will be a continuation of this project.
Last but not least I thank for all the summer school participants for great discussions and a lifelong experience!
![enter image description here][29]
----------
## Further comments ##
I try to collect here some useful comments of my friends and collegues, who kindly read my post, and responded in person:
There is a concept named [Digital physics][30] which has a much longer history what I suggested, and probably the earliest pioneer of the field was Konrad Zuse. Fortunatelly his thesis—[Calculating Space][31] or “Rechnender Raum”—is now translated into English and has a modern, LaTeX typesetting.
Beside NKS there is another relevant book, which can serve as an extended list of references and valuable material in its own, written by Andrew Ilachinski with the title [Cellular Automata A Discrete Universe][32].
There is an ongoing "mini revolution" in the description of AdS/CFT based on [Tensor Networks][33]. The original paper on the topic can be found [here][34].
[1]: https://en.wikipedia.org/wiki/Theory_of_everything
[2]: https://www.quantamagazine.org/20150803-physics-theories-map/
[3]: https://en.wikipedia.org/wiki/Bohr%E2%80%93Einstein_debates
[4]: https://en.wikipedia.org/wiki/EPR_paradox
[5]: https://arxiv.org/abs/1508.05949
[6]: http://thebigbelltest.org/#/science?l=EN
[7]: http://www.cambridge.org/catalogue/catalogue.asp?isbn=9780521818629
[8]: http://www.springer.com/in/book/9783662137352
[9]: http://www.springer.com/us/book/9783319412849
[10]: https://arxiv.org/abs/1405.1548
[11]: http://www.wolframscience.com/
[12]: http://blog.stephenwolfram.com/2015/12/what-is-spacetime-really/
[13]: https://plato.stanford.edu/entries/probability-interpret/
[14]: http://www.cambridge.org/catalogue/catalogue.asp?isbn=0521592712
[15]: http://www.nature.com/news/the-quantum-source-of-space-time-1.18797
[16]: https://en.wikipedia.org/wiki/AdS/CFT_correspondence
[17]: https://en.wikipedia.org/wiki/ER=EPR
[18]: https://arxiv.org/abs/1604.02589
[19]: https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
[20]: http://conwaylife.com/wiki/Category:Spaceships
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=H1H2.png&userId=981213
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=GH1H1.png&userId=981213
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=H1H2_2.png&userId=981213
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=GG.png&userId=981213
[25]: http://www.wolframscience.com/nksonline/section-9.13
[26]: https://en.wikipedia.org/wiki/Causal_sets
[27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PresentationTemplate_KJ_2.png&userId=981213
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PresentationTemplate_KJ_3.png&userId=981213
[29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=vladstudio_higgs_boson_fluo_800x600_signed.jpg&userId=981213
[30]: https://en.wikipedia.org/wiki/Digital_physics
[31]: http://www.mathrix.org/zenil/ZuseCalculatingSpace-GermanZenil.pdf
[32]: http://www.worldscientific.com/worldscibooks/10.1142/4702
[33]: https://arxiv.org/abs/1306.2164
[34]: https://arxiv.org/abs/0905.1317Jozsef Konczer2016-12-16T07:22:29ZCoffee optimization, how to get your cup of joe just right
http://community.wolfram.com/groups/-/m/t/1024265
## Introduction ##
I take my coffee black, so I had no idea that there was a large controversy behind when you should add your milk to your coffee. [@Gary Bass][at0] however highlighted us to this with [his question][1] in the community. Apparently, the timing of the added milk is critical. If you add the milk directly the coffee will retain its temperature for a longer time, which is perfect if you plan on drinking later. If you are short on time however, and want to save your throat from scalding hot coffee, you might want to save the milk for just before you are about to drink it.
Obviously, this is something that cannot be taken lightly and some serious simulation is required. I compiled the information from that thread here, for anyone looking to perfect their morning routine. The post will include an explanation for how the model was created. I also however attached the actual model so if you are just looking for the simulation, check out the summary. Hopefully however, you also gain some insights into how to model events involving states in [SystemModeler][2]:
## Adding Milk to Coffee ##
In SystemModeler, there are at least two approaches you could take when adding the milk to the coffee. Either you could have everything collected into a single component that has a event in it corresponding to the addition of the milk, or you could have a separate component that specifies the addition of milk as a flow over time. I explored both those scenarios in the attached model.
Approach 1, with a discrete event in the coffee involves creating a copy of the [HeatCapacitor][3] component and adding some parameters that separates the heat capacity of the milk, the amount of milk added, when it is added, etc. As you say, the mixed Cp is unknown. A naïve initial approach could be to just add the two heat capacities together. If C is the total heat capacity of the coffee, with or without milk, you could add an equation that says:
C = if time > AddTime then Ccoffee * Vcoffee * 1000 + Cmilk * Vmilk * 1000 else Ccoffee * Vcoffee * 1000;
The coefficients are just there to convert the different units.
The temperature is a bit more difficult, since it varies continuously over time it is state so and can't be as easily changed as the capacity (which changes values only at one discrete interval).
What you have to do with states is using the [reinit(var,newValue)][4] function to reinitialize variable var, to the new value newValue. If you mix fluids together, the new temperature is the new total enthalpy divided by the new heat capacity:
t = (m1 c1 t1 + m2 c2 t2 + ... + mn cn tn) / (m1 c1 + m2 c2 + ... + mn cn)
(from [Engineering Toolbox][5])
In Modelica, we could reinitialize the temperature when the simulation time exceeds the time when the milk should be added, using the following:
when time > AddTime then
reinit(T, (Ccoffee * Vcoffee * 1000 * T + Cmilk * Vmilk * 1000 * MilkTemperature) / C);
end when;
Adding the coffee component and connecting it to a [ThermalConductor][6] component (to represent the cup) and connecting that in turn to an [FixedTemperature][7] component (to represent the room temperature) results in a fairly compact model:
![Diagram with coffee component][8]
If milk is added after 300 seconds, it produces the following simulation:
![Simulation with coffee component][9]
Approach 2 is having a short flow of milk, instead of a instantaneous addition of milk. The benefit of this is you could create your own addition strategy. For example, you could add half of the milk at the beginning, and half after 300 seconds. Or any arbitrary strategy. For now, I focused on doing it as a 1 second pulse.
An input is added to the coffee, corresponding to the flow of milk. The volume of the milk in the coffee is no longer a parameter but increases with the flow:
der(Vmilk) = u;
And the heat capacity increases with the milk volume:
C = Ccoffee * Vcoffee * 1000 + Cmilk * Vmilk * 1000;
Adding milk will increase the enthalpy in the system, but the increased heat capacity will still cause a drop in temperature:
T = H/C;
der(H) = port.Q_flow + Cmilk * 1000 * u * MilkTemperature;
With H being the enthalpy.
The milk component is simply a pulse from [Pulse][10] that has some additional parameters.
![milk addition component diagram][11]
Everything taken together, we now have an additional component in the coffee cooling model:
![diagram with coffee and milk components][12]
As it should, this approach gives a similar plot as the first one. The only difference is that the milk is added over a duration of 1 second. As the duration approaches zero, the two approaches would converge.
![simulation with coffee and milk][13]
You could use this approach to fit parameters, using the methodology from the [electric kettle][14] example.
## Other Cooling Processes ##
In the model above, we had a very naïve cooling process for our coffee. We assumed it could be described by Newtons law of cooling (which the heat conduction component is based on). In the [original thread][15] a paper is linked that goes into detail on how you might expand the a coffee model to include some other forms of cooling.
I will here use a [HeatCapacitor][16] component instead of the coffee component to simplify things, but the two should interchangeable. The experiment numbers are in reference to the attached article.
**Experiment 1**
Experiment 1 can be described using standard components from the Modelica.Thermal.HeatTransfer package. The pot will be HeatCapacitor component, the ambient temperature will be modeled using FixedTemperature and the convection is modeled using a ThermalConductor, which follows Newtons law of cooling.
![experiment 1 diagram][17]
The G parameter in the ThermalConductor is equivalent to the k parameter they use. From what I could tell, the paper did not include any measurement of the heat capacitance or ambient temperature so I went with 3 dl of water and 20 degrees Celsius. However, both of these would probably need to be higher to fit their experimental data.
**Experiment 2**
To create experiment 2 I first duplicated experiment 1 by selecting it and pressing Ctrl+D, you can also right click and select Duplicate. Experiment 2 requires a component like the ThermalConductor but one that has an exponent that causes nonlinear behaviour in the heat flow. No such component exist in the Modelica Standard Library, but we can easily create one. I created a new component to be used in experiment 2 by dragging the normal ThermalConductor into Experiment2.
![cope class][18]
And gave it a new name, "ArbitraryExponentConductor"
Now I had to modify it to use the exponent. After opening the new component I first added a new parameter by right clicking the parameter view and and selecting Insert > Parameter
![Adding new parameter to model][19]
I used the name x as in the paper and used type Real.
![new parameter window][20]
Now I had to modify the equations so I went into the Modelica Text View (Ctrl+3) and changed the line:
Q_flow = G * dT;
to
Q_flow = G * dT ^ x;
dT corresponds to the temperature difference (tc-ts) in the paper.
Going back into Experiment 2, I changed the normal ThermalConductor by right clicking it and selected Change Type. In the dialog, I gave the name of the new type (CofeeCooling.Experiment2.ArbitraryExponentConductor). You can also drag the component from the component browser directly into the field.
![change model quickly][21]
or of course, delete the component, drag the new one in and make new connections.
**Experiment 3**
For experiment 3, you need to add a some more stuff. Start by douplicating experiment 1. Connect the ThermalConductor to a new HeatCapacitor instead of the FixtedTemperature. That heat capacitor will be the pot, while the original one will be the coffee. The first ThermalConductor then represents equation 1 in the paper, transfer of heat from coffee to the pot. Add another ThermalConductor and connect it between the pot and the FixedTemperature to represent equation 5. Also add two BodyRadiation components and connect them from each capacitor to the FixedTemperature. These will represent all the radition effects described. They are bidirectional so they represent two equations each (3,4 and 6,7). For evaporation, I created a custom component which is described by the equation
port.Q_flow = k * port.T;
Where k is the product of the P, l and v parameters described in the paper. You could add individual parameters for each of them instead, as described in the text for experiment 2.
Connect the evaporation to the coffee capacitor.
![experiment 3 model diagram][22]
The 4:th experiment is much like the the 3:d one. I modified the evaporation component to have the equation
port.Q_flow = k * port.T ^ z;
## Summary & Simulation ##
Okey, so that was the *how*. Now we want to use this model to draw conclusions. I'll use the simplest model here and encourage you to try out the more advanced models yourselves.
Say we want to drink our coffee in 2 minutes, starting from 80°C. Everyone knows that the optimal coffee drinking temperature is 72.34°C. When should we add our milk to get there in 2 minutes?
We can do a parametric simulation in Mathematica to try out two different timings:
addTimes = {0, 110};
sim = WSMSimulate["CoffeeAndMilk.Scenarios.Approach1", WSMParameterValues -> {"AddTime" -> addTimes}];
In the plot, I will add a point that is the optimum temperature at time = 120s. I'll also use a trick to get some nice legends to better understand which curve corresponds to which:
WSMPlot[sim, "coffee.T",
PlotRange -> {{60, 180}, {70, 80}},
Epilog -> Point[{120, 72.34}],
PlotLegends -> Map["Add time = " <> ToString[#] &, addTimes]
]
This produces the following plot:
![plot 1 with 0 and 110][23]
So close. But we can't give up just now. Let us adjust the timing a bit and add the milk right before we want to drink the coffee:
![plot 2 with 0 and 120][24]
That just about does it I'd say.
[at0]: http://community.wolfram.com/web/bassgarys
[1]: http://community.wolfram.com/groups/-/m/t/1021383
[2]: http://www.wolfram.com/system-modeler/
[3]: https://reference.wolfram.com/system-modeler/libraries/Modelica/Modelica.Thermal.HeatTransfer.Components.HeatCapacitor.html
[4]: https://reference.wolfram.com/system-modeler/libraries/ModelicaReference/ModelicaReference.Operators.%27reinit%28%29%27.html
[5]: http://www.engineeringtoolbox.com/mixing-fluids-temperature-mass-d_1785.html
[6]: https://reference.wolfram.com/system-modeler/libraries/Modelica/Modelica.Thermal.HeatTransfer.Components.ThermalConductor.html
[7]: https://reference.wolfram.com/system-modeler/libraries/Modelica/Modelica.Thermal.HeatTransfer.Sources.FixedTemperature.html
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9088mod1.png&userId=554806
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mod1sim.png&userId=554806
[10]: https://reference.wolfram.com/system-modeler/libraries/Modelica/Modelica.Blocks.Sources.Pulse.html
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=milk.png&userId=554806
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mode2.png&userId=554806
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mod2sim.png&userId=554806
[14]: https://www.wolfram.com/system-modeler/examples/consumer-products/electric-kettle-fluid-heat-transfer.html
[15]: http://community.wolfram.com/groups/-/m/t/1021383
[16]: https://reference.wolfram.com/system-modeler/libraries/Modelica/Modelica.Thermal.HeatTransfer.Components.HeatCapacitor.html
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4316mod3.png&userId=554806
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Copy.png&userId=554806
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Addnewparameter.png&userId=554806
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=newparameter.png&userId=554806
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=quickchange.png&userId=554806
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mod3%281%29.png&userId=554806
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test1plot.png&userId=554806
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2plot.png&userId=554806Patrik Ekenberg2017-03-02T17:16:24ZUsing Mathematica in Teaching Differential Equations
http://community.wolfram.com/groups/-/m/t/1124581
I am Brian Winkel, Professor Emeritus (civilian) United States Military Academy, West Point NY USA. I wish to contact colleagues who teach differential equations using or considering using modeling and technology.
I am currently the Director of SIMIODE-Systemic Initiative for Modeling Investigations and Opportunities with Differential Equations, an organization of teachers and students interested in teaching and learning differential equations by using modeling and technology throughout the process. Visit us at www.simiode.org.
We have designed a Student Competition Using Differential Equation Modeling – SCUDEM for April 2018 (see www.simiode.org/scudem for complete details) and invite schools to host SCUDEM (we have some 60 teams in the US already)and to consider sponsoring a team.
SIMIODE is a 501(c)3 organization and all its resources are freely available under the most generous Creative Commons license. Visit us at www.simiode.org and join. All is FREE at SIMIODE.
![SCUDEM 2018 local site locations in the United States as of 16 June 2017][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SCUDEMSitesMap.jpg&userId=1124219Brian Winkel2017-06-19T23:12:18Z