Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions from all groups sorted by activity

Parallel Mathematica Environment on the RaspberryPi using OOP
http://community.wolfram.com/groups/-/m/t/1057588
My project, Parallel Mathematica Environment on the RaspberryPi using OOP, is a sample application of **Object Oriented Programming for Mathematica** to cluster computing, implemented with a Mac and three Raspberry Pi Zeros connected through a USB hub and three USB cables.
The basic idea is to deploy a constructed instance image to the calculating servers (the Raspberry Pis) and then send messages to the instances. [OOP for Mathematica has already been developed and presented][1] in this community, and further detail is available on [SlideShare][2] under the title "OOP for Mathematica."
![enter image description here][3]
----------
Prepare each Raspberry Pi Zero as follows, using an SSH connection from the Mac:
- name each Zero raspberrypi, raspberrypi1, raspberrypi2, ...
- install the server program "init" on each Raspberry Pi, where init is:
$ cat init
While[True,
Run["nc -l 8000>input"];
temp=ReleaseHold[<<input];
temp >>output;
Run["nc your-mac-hostname.local 8002<output"]
]
where the port numbers on the two sides must match.
- start Mathematica manually and wait for it to boot up:
$ wolfram <init&
Each Raspberry Pi can be checked as follows:
$ nc -l 8002 >output|nc raspberrypi.local 8000 <<EOF
> 10!
> EOF
$ cat output
3628800
----------
The cluster-controller program on the Mac is as follows.
- set directory
SetDirectory[NotebookDirectory[]];
- set up the socket communication processes
com1="nc -l 8002 >output1 |nc raspberrypi.local 8000 <input1";
com2="nc -l 9002 >output2 |nc raspberrypi1.local 9000 <input2";
com3="nc -l 9502 >output3 |nc raspberrypi2.local 9500 <input3";
- set the object properties
obj={
<|"name"->node1,"comm"->com1,"in"->"input1","out"->"output1","p"->{2000,3500}|>,
<|"name"->node2,"comm"->com2,"in"->"input2","out"->"output2","p"->{3501,4000}|>,
<|"name"->node3,"comm"->com3,"in"->"input3","out"->"output3","p"->{4001,4500}|>};
- define the calculation-server class; the example here is a Mersenne prime exponent search
new[nam_]:=Module[{ps,pe},
mersenneQ[n_]:=PrimeQ[2^n-1];
setv[nam[{s_,e_}]]^:={ps,pe}={s,e};
calc[nam]^:=Select[Range[ps,pe],mersenneQ]
];
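Before deploying, the class can be exercised locally on the Mac, without any Raspberry Pi (node0 here is just a throwaway instance name, not part of the cluster setup):

    new[node0];
    setv[node0[{2, 30}]];
    calc[node0]
    {2, 3, 5, 7, 13, 17, 19}

The last line is the list of Mersenne prime exponents up to 30.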
- construct instances
Map[new[#name]&,obj];
- deploy instances to calculation servers
Map[Save[#in,#name]&,obj];
Map[Run[#comm]&,obj];
- send message to each instance
Map[Put[Hold@setv[#name[#p]],#in]&,obj];
Map[Run[#comm]&,obj];
- start calculation
Map[Put[Hold@calc[#name],#in]&,obj];
proc=Map[StartProcess[{$SystemShell,"-c",#comm}]&,obj]
- wait for the processes to terminate (manually in this sample code)
Map[ProcessStatus[#]&,proc]
{Finished,Finished,Finished}
- gather the results
Map[FilePrint[#out]&,obj];
{2203, 2281, 3217}
{}
{4253, 4423}
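Since each output file holds a plain Wolfram expression (written with >>), the three result lists can also be read back and merged in one step; this is a sketch using the same #out slots as above:

    Union @@ Map[Get[#out] &, obj]
    {2203, 2281, 3217, 4253, 4423}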
[1]: http://community.wolfram.com/groups/-/m/t/897081?p_p_auth=o5qxZhNR
[2]: https://www.slideshare.net/kobayashikorio/oop-for-mathematica
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2017-04-10.jpg&userId=897049
Hirokazu Kobayashi, 2017-04-10T01:15:22Z

Loading CUDA Functions with LibraryFunctionLoad
http://community.wolfram.com/groups/-/m/t/1128397
CUDALink is the recommended interface in the Wolfram Language for computing on CUDA-enabled graphical processing units (GPUs).
In order to access additional functionality in CUDA libraries or create customized CUDA kernel functions, it is also possible
to use the function LibraryFunctionLoad from the LibraryLink package.
This post demonstrates a few examples in the Wolfram Language to:
1. Invoke functions from CUDA host library APIs like Thrust and cuBLAS
2. Compile and load custom CUDA kernel functions
*Some familiarity with the `LibraryLink` package would be helpful in understanding the idea behind this approach. The [Wolfram LibraryLink User Guide](https://reference.wolfram.com/language/LibraryLink/tutorial/Overview.html) is a good starting point.*
*To enable Mathematica to successfully load the CUDA Runtime Library, required for compilation of CUDA functions, it is recommended that you add the CUDA Runtime Library path to the system environment variable `LD_LIBRARY_PATH`.*
## 1. Reducing a list of numbers
Here is a simple example of reducing a list of numbers (with the default `+` operator) in the Wolfram Language.
```
xList = {1,2,3,4};
res = Total[xList]
```
Here is the same operation performed in CUDA, using the Thrust Library.
```
#include <cuda_runtime.h>
#include <thrust/device_vector.h>
#include <thrust/reduce.h>
#include <thrust/execution_policy.h>
#include <iostream>
int main () {
thrust::device_vector<int32_t> dv{1,2,3,4};
const int32_t res = thrust::reduce(thrust::device, dv.begin(), dv.end());
std::cout << res << std::endl;
return 0;
}
```
This is a trivial example. The reduction function in Wolfram Language is highly optimized and already performs efficiently. Nevertheless, for the sake of demonstration, we wrap the call to the Thrust API in a C library function with `LibraryLink`.
```.c
extern "C" {
DLLEXPORT int cudaSumInt(WolframLibraryData libData, mint Argc, MArgument * Args, MArgument Res) {
// ---- On Host ---- //
MTensor inTensor;
mint * in;
inTensor = MArgument_getMTensor(Args[0]);
in = libData->MTensor_getIntegerData(inTensor);
const mint len = libData->MTensor_getFlattenedLength(inTensor);
// ---- On Device ---- //
thrust::device_vector<mint> dv(in, in+len);
const mint out = thrust::reduce(thrust::device, dv.begin(), dv.end());
// ---- Set Res ---- //
MArgument_setInteger(Res, out);
return LIBRARY_NO_ERROR;
}
}
```
To compile this library function, we also need to define `WolframLibrary_getVersion`, `WolframLibrary_initialize` and `WolframLibrary_uninitialize`. For more details about these `LibraryLink` functions, please refer to [this section](https://reference.wolfram.com/language/LibraryLink/tutorial/LibraryStructure.html#280210622) in the LibraryLink tutorial.
Here is an example of a Makefile to compile the code.
```.Makefile
CC = nvcc
LINKTYPE = -shared
TARGET = ./libname.so
SOURCE = ./kernel_link.cu
NVCCFLAGS = -arch=sm_52 -O3
CFLAGS = -m64 --compiler-bindir /usr/bin --compiler-options -fPIC
MINSTALLDIR = /usr/local/Wolfram/Mathematica/11.0
INCMMA = $(MINSTALLDIR)/SystemFiles/IncludeFiles/C
INCGPU = $(MINSTALLDIR)/SystemFiles/Links/GPUTools/Includes
LIBMMA = $(MINSTALLDIR)/SystemFiles/Libraries/Linux-x86-64
INCMATHLINK = $(MINSTALLDIR)/SystemFiles/Links/MathLink/DeveloperKit/Linux-x86-64/CompilerAdditions
LIBMATHLINK = $(MINSTALLDIR)/SystemFiles/Links/MathLink/DeveloperKit/Linux-x86-64/CompilerAdditions
INCCUDA = /usr/local/cuda-8.0/include
LIBCUDA = /usr/local/cuda-8.0/lib64
$(TARGET) :
$(CC) $(LINKTYPE) -L $(LIBCUDA),$(LIBMATHLINK),$(LIBMMA) $(CFLAGS) $(NVCCFLAGS) -I $(INCMMA),$(INCMATHLINK),$(INCGPU),$(INCCUDA) -o $(TARGET) $(SOURCE)
```
The compiled library can now be loaded into Mathematica with `LibraryFunctionLoad`.
```.Mathematica
In[1]:= libFunc = LibraryFunctionLoad[NotebookDirectory[] <> "libname.so",
"cudaSumInt", {{Integer, _}}, {Integer}]
In[2]:= libFunc[{1,2,3,4}]
Out[2]= 10
```
This provides an alternate way to load CUDA libraries into Mathematica.
You can find another example in the archive link below. That example demonstrates how a function from the cuBLAS host API can be invoked in the Wolfram Language.
## 2. Custom CUDA kernel function - myCUDAFunctionLoad
You may want to write your own CUDA kernel function and wish to call it from the Wolfram Language. The following example demonstrates how this can be done.
### 2.1 Templating
The `StringTemplate` function in the Wolfram Language can be used to create a library source file with the CUDA kernel functions.
```.Mathematica
includes =
"#include <cuda_runtime.h>
#include <stdio.h>
";
```
```.Mathematica
template =
"extern \"C\" {
#include \"WolframLibrary.h\"
DLLEXPORT mint WolframLibrary_getVersion( ) {
return WolframLibraryVersion;
}
DLLEXPORT int WolframLibrary_initialize(WolframLibraryData libData) {
return LIBRARY_NO_ERROR;
}
DLLEXPORT void WolframLibrary_uninitialize(WolframLibraryData libData) {
return ;
}
DLLEXPORT int `dl_func_name`(WolframLibraryData libData, mint Argc, MArgument * Args, MArgument Res) {
// Memory Management
`loc_mem`
// Block and thread size define
dim3 block_size(`loc_bs`);
dim3 thread_size(`loc_ts`);
// Launch Kernel
`kernel_func_name`<<<block_size, thread_size>>>(`loc_args`);
cudaDeviceSynchronize();
// Set return
`loc_return`
// Free Device Memory
`loc_free`
return LIBRARY_NO_ERROR;
}
}";
```
A simple template to create a function that mimics the `compile` command would be as follows:
```.Mathematica
compiletemplate =
"\"`nvcc`\" -shared -L\"`cudalib`\" -L\"`mathlib`\" -L\"`syslib`\" -m64 --compiler-bindir \"`ccpath`\" --compiler-options -fPIC -arch=sm_`arch` -O3 -I\"`sysinclude`\" -I\"`mathinclude`\" -I\"`gtinclude`\" -I\"`cudainclude`\" -o `target` `source`";
```
This template can now be used to create the `compile` function.
```.Mathematica
compile[sourcePath_String,cudaToolkitPath_String,arch_Integer]:=
Module[{command,libpath},
command=StringTemplate[compiletemplate]
[<|
"nvcc"->cudaToolkitPath<>"/bin/nvcc",
"cudalib"->cudaToolkitPath<>"/lib64",
"mathlib"->$InstallationDirectory<>"/SystemFiles/Links/MathLink/DeveloperKit/Linux-x86-64/CompilerAdditions",
"syslib"->$InstallationDirectory<>"/SystemFiles/Libraries/Linux-x86-64",
"ccpath"->"/usr/bin",
"arch"->ToString[arch],
"sysinclude"->$InstallationDirectory<>"/SystemFiles/IncludeFiles/C",
"mathinclude"->$InstallationDirectory<>"/SystemFiles/Links/MathLink/DeveloperKit/Linux-x86-64/CompilerAdditions",
"gtinclude"->$InstallationDirectory<>"/SystemFiles/Links/GPUTools/Includes",
"cudainclude"->cudaToolkitPath<>"/include",
"target"->StringReplace[sourcePath,".cu"->".so"],
"source"->sourcePath
|>];
RunProcess[$SystemShell,"StandardOutput",command];
Return[StringReplace[sourcePath,".cu"->".so"]];
]
```
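A call to this function might look as follows; the source path and CUDA toolkit location are illustrative only and must be adjusted for your system:

```.Mathematica
(* hypothetical paths: adjust for your machine *)
libPath = compile["/tmp/src_linear_plus.cu", "/usr/local/cuda-8.0", 52]
(* "/tmp/src_linear_plus.so" *)
```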
The compiled executable will be placed in the same directory as the source file, with the same name, but with a different extension.
### 2.2 Helper Functions
A few helper functions are also required for assembling the source code containing the custom CUDA kernel function. Their names and descriptions are as follows:
1. sizeToString : converts a list (length <= 3) into a string to fill `loc_bs` and `loc_ts`
2. scalarMemManage : generates C code for scalar (int or double) variables according to a customizable argument list
3. arrayMemManage : generates C code for array (host and device) variables according to a customizable argument list
4. returnMemManage : generates C code for returning data to Mathematica
5. memManage : assembles all memory-related source code
6. srcAssemble : assembles the final source code for the C library, in which the CUDA kernel launch is embedded
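As an illustration, the first helper is essentially a one-liner; this sketch (not necessarily the exact implementation in the archive) shows the idea:

```.Mathematica
(* sketch: turn {16, 16, 1} into "16, 16, 1" for the dim3 constructors *)
sizeToString[s_List] := StringRiffle[ToString /@ s, ", "]
sizeToString[{16, 16, 1}] (* "16, 16, 1" *)
```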
Once the source code has been prepared, `myCUDAFunctionLoad` is invoked to compile and load the function with `LibraryFunctionLoad`.
```.Mathematica
myCUDAFunctionLoad[cudaToolkitPath_String, kernel_String, kname_String, args_List, bs_, ts_:32, arch_Integer:60]
:= Module[{source,sourcePath,libPath,func,iargs,oarg},
source = srcAssemble[kernel, kname,args, bs,ts];
sourcePath = Export[$TemporaryDirectory<>"src_"<>kname<>".cu",source,"Text"];
libPath = compile[sourcePath,cudaToolkitPath,arch];
iargs = args/.{x_,y_,z_}->{x,y};
oarg = Cases[args,{_,_,"Output"}][[1,;;-2]];
func = LibraryFunctionLoad[libPath,"host_"<>kname,iargs,oarg];
Return[func];
]
```
The `myCUDAFunctionLoad` function takes as arguments:
![enter image description here][1]
*Note that the format for the `args` list is the same as that used by `CUDAFunctionLoad` and `LibraryFunctionLoad`.*
The code above can be tested with a very simple kernel function, defined below.
```.Mathematica
kernels =
"__global__ void linear_plus(double alpha, double * a, double beta, double * b, int NMAX)
{
size_t idx = blockIdx.x * blockDim.x + threadIdx.x;
if (idx<NMAX) {
a[idx] = alpha * a[idx] + beta * b[idx];
}
}
";
```
This function is then loaded into Mathematica and used as follows:
```.Mathematica
In[13]:= func =
myCUDAFunctionLoad[
"/usr/local/cuda-8.0",
kernels,
"linear_plus",
{Real,
{Real, _, "Output"},
Real,
{Real, _, "Input"},
Integer
},
1024,
32,
52
]
```
![enter image description here][2]
```
In[14]:= ma = RandomReal[10, 5];
mb = RandomReal[10, 5];
al = RandomReal[];
be = RandomReal[];
nmax = Length[ma];
In[18]:= func[al, ma, be, mb, nmax]
Out[18]= {3.20244, 3.07296, 3.75771, 1.94413, 4.75805}
In[19]:= al*ma + be*mb
Out[19]= {3.20244, 3.07296, 3.75771, 1.94413, 4.75805}
```
You can download all the code from this [archive](https://amoeba.wolfram.com/index.php/s/mha3dYRy5ClDjgM). Before you try it on your own machine, please set the correct value for `arch` (according to your architecture) in the call to `myCUDAFunctionLoad`.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-06-26at17.55.10.png&userId=1126318
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=screenshot.png&userId=1126318
Wang Zhang, 2017-06-26T22:55:43Z

Help with a LaTeX conversion
http://community.wolfram.com/groups/-/m/t/1128366
I have no idea why this does not work:
TeXForm[\[PartialD]u/\[PartialD]t -
\!\(\*OverscriptBox[\(v\), \(^\)]\)\[SmallCircle]\[Del]u -
v (Subscript[\[Mu], a] + Subscript[\[Mu],
s]) u + \[Integral]Subscript[v\[Mu], s] Subscript[p, s] (
\!\(\*SuperscriptBox[\(\[CapitalOmega]\), \('\)]\) \[RightArrow] \
\[CapitalOmega]) u (r,
\!\(\*SuperscriptBox[\(\[CapitalOmega]\), \('\)]\), t) \[DifferentialD]
\!\(\*SuperscriptBox[\(\[CapitalOmega]\), \('\)]\) +
q (r, \[CapitalOmega], t)]
I am trying to convert it to LaTeX format.
Jose Calderon, 2017-06-26T19:31:51Z

Can someone let me know what I am doing wrong? I am not getting an output
http://community.wolfram.com/groups/-/m/t/1128811
Experiment 22.1
In each of the following, you are given a differential equation and a function y=f(x). Following the procedure of Section 22.1, check in each case whether the function is a solution to the differential equation.
1. x y'=3 y +x^4 cos x, y=x^3 sin x.
2. x y'=3 y +x^4 cos x, y=x^2 sin x.
3. x^2 y''-x y'+y=ln x, y=x ln x+ln x +2.
4. x^2 y''+x y'+y=0, y= cos(ln x) + sin(ln x).
5. x^3(y')^2+x^2 y y'+4=0, y=-(x+4)/x.
6. (y')^2+x y=e^x, y=e^x-e^-x.
1.
Clear[f, y, x, left, right]
f[x_] := x^3*Sin[x]
left[y_] := x*y'[x]
right[y_] := 3*y[x]
In[19]:= Simplify[left[f]]
Out[19]= left[f]
2.
Clear[f, y, x, left, right]
f[x_] := x^2*Sin[x]
left[y_] := x*y'[x]
right[y_] := 3*y + x^4*Cos[x]
In[13]:= Simplify[left[f]]
Out[13]= left[f]
3.
Clear[f, y, x, left, right]
f[x_] := x*Log[x] + Log[x] + 2
left[y_] := x^2*y''[x] - x*y'[x] + y[x]
right[y_] := Log[x]
[left[f]]
Log[x]
4.
Clear[f, x, y, left, right]
f[x_] := Cos[Log[x]] + Sin[Log[x]]
left[y_] := x^2*y''[x] + x*y'[x] + y[x]
right[y_] := 0
In[11]:= [left[f]]
During evaluation of In[11]:= Syntax::tsntxi: "[left[f]]" is incomplete; more input is needed.
5.
Clear[f, x, y, left, right]
f[x_] := -[x + 4]/x
left[y_] := x^3*[y'[x]]^2 + x^2*y[x]*y'[x] + 4
right[y_] := 0
Simplify[left[f]]
left[f]
6.
Clear[f, x, y, left, right]
f[x_] := E^[x] - E^[-x]
left[y_] := [y'[x]]^2 + x*y'[x]
right[y_] := E^[x]
Simplify[left[f]]
left[f]
Brianna Cimino, 2017-06-26T18:48:16Z

[GIF] Stereo Vision (Stereographic projection of a (24, 23)-torus knot)
http://community.wolfram.com/groups/-/m/t/1128223
![Stereographic projection of a (24, 23)-torus knot][1]
**Stereo Vision**
This is very similar to [_Rise Up_][2], though with a $(24,23)$-torus knot rather than a $(29,-5)$-torus knot. The major difference is that, rather than stereographically projecting the knot from the 3-sphere to $\mathbb{R}^3$ and then building a tube of uniform thickness around it, I'm making the uniform tube up in the 3-sphere and projecting the whole thing down. Thanks to [@Henry Segerman][at0] for the suggestion.
In order to accomplish this, I parametrized the boundary of a tubular neighborhood, found the formula for the projection, and then used `ParametricPlot3D`. In practice, this turned out to be quite computationally expensive. I will show the code at the end, but the code is basically incomprehensible without knowing where it came from, so I'll start with some intermediate steps.
First, we need a stereographic projection function and a function which will output a $(p,q)$-torus knot on the Clifford torus, offset by an angle $\theta$ from the standard one:
Stereo3D[{x1_, y1_, x2_, y2_}] := {x1/(1 - y2), y1/(1 - y2), x2/(1 - y2)};
pqtorus[t_, θ_, p_, q_] := ComplexExpand[Flatten[ReIm /@ (1/Sqrt[2] {E^(p I (t + θ/p)), E^(q I t)})]];
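As a quick sanity check (not in the original post), the parametrized curve really does lie on the unit 3-sphere:

    Simplify[pqtorus[t, θ, p, q].pqtorus[t, θ, p, q]]

which evaluates to 1, as it should.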
Now, the way I'm going to parametrize the boundary of a tubular neighborhood of the knot is to think of the knot as the core curve of a torus; then the second circle in the torus is the unit circle in the normal space to each point on the torus. Thinking of the knot as sitting inside $\mathbb{R}^4$, each point has a 3-dimensional normal space, namely the orthogonal complement of the tangent vector to the knot. But we only want the part of the normal space which is tangent to the sphere. Since the outward unit normal to a point $\vec{p}$ on the sphere is just $\vec{p}$ itself, this means that the normal space we want is the orthogonal complement of the plane spanned by the point itself (thought of as a vector) and the tangent vector.
So then if you run
Orthogonalize[NullSpace[{#, D[#, t]}] &[pqtorus[t, θ, p, q]]]
and do a lot of simplification, you will eventually arrive at the following orthonormal basis for the normal space to `pqtorus[t, θ, p, q]`:
pqNormal[t_, θ_, p_, q_] :=
{{(
Sqrt[2] (-p Cos[p t + θ] Sin[q t] +
q Cos[q t] Sin[p t + θ]))/(
Sqrt[3 p^2 + q^2 + (-p^2 + q^2) Cos[2 q t]] Sign[p]), -((
Sqrt[2] (q Cos[q t] Cos[p t + θ] +
p Sin[q t] Sin[p t + θ]))/(
Sqrt[3 p^2 + q^2 + (-p^2 + q^2) Cos[2 q t]] Sign[p])), 0, (
Sqrt[2] Abs[p])/Sqrt[
3 p^2 + q^2 + (-p^2 + q^2) Cos[2 q t]]}, {(-(p^2 + q^2) Cos[
q t] Cos[p t + θ] - 2 p q Sin[q t] Sin[p t + θ])/
Sqrt[3 p^4 + 4 p^2 q^2 + q^4 + (-p^4 + q^4) Cos[2 q t]], (
2 p q Cos[p t + θ] Sin[q t] - (p^2 + q^2) Cos[q t] Sin[
p t + θ])/Sqrt[
3 p^4 + 4 p^2 q^2 + q^4 + (-p^4 + q^4) Cos[2 q t]],
1/2 Sqrt[(3 p^2 + q^2 + (-p^2 + q^2) Cos[2 q t])/(
p^2 + q^2)], ((-p^2 + q^2) Sin[2 q t])/(
2 Sqrt[3 p^4 + 4 p^2 q^2 + q^4 + (-p^4 + q^4) Cos[2 q t]])}};
Now, we get an actual parametrization for the stereographically-projected surface in 3D by running the following function:
Block[{b, p = 24, q = 23},
b[t_, θ_] := pqNormal[t, θ, p, q];
Stereo3D[Cos[r] pqtorus[t, θ, p, q] + Sin[r] (Cos[s] b[t, θ][[1]] + Sin[s] b[t, θ][[2]])]
]
(Of course, you can put in any integers you like for `p` and `q`).
Unfortunately, just applying `ParametricPlot3D` to `Stereo3D[Cos[r] pqtorus[t, θ, p, q] + Sin[r] (Cos[s] b[t, θ][[1]] + Sin[s] b[t, θ][[2]])]` was much much slower than copy-pasting the output of the above into `ParametricPlot3D`, so the code below contains the entire unpleasant output of the above function.
We also need to set `PlotPoints` very high to get a remotely reasonable surface, so this is far too slow to make into a `Manipulate`. Here's the code I used to output the above GIF:
knot =
With[{r = .03, viewpoint = {0, 3, 0},
cols = RGBColor /@ {"#f54123", "#0098d8", "#0b3536"}},
ParallelTable[
ParametricPlot3D[
{(-1105 Cos[r] Sqrt[4514 - 94 Cos[46 t]]
Cos[24 t + θ] +
2 Sin[r] (1105 Cos[
23 t] (Sqrt[1105] Cos[24 t + θ] Sin[s] -
23 Sqrt[2] Cos[s] Sin[24 t + θ]) +
24 Sin[23 t] (1105 Sqrt[2]
Cos[s] Cos[24 t + θ] +
46 Sqrt[1105]
Sin[s] Sin[24 t + θ])))/(-2210 Sqrt[
2257 - 47 Cos[46 t]] + 53040 Sqrt[2] Cos[s] Sin[r] +
1105 Cos[r] Sqrt[4514 - 94 Cos[46 t]] Sin[23 t] -
47 Sqrt[1105]
Sin[r] Sin[s] Sin[
46 t]), -((1105 Sqrt[2]
Cos[s] (-47 Cos[t + θ] +
Cos[47 t + θ]) Sin[r] +
2208 Sqrt[1105]
Cos[24 t + θ] Sin[r] Sin[s] Sin[23 t] +
1105 (Cos[r] Sqrt[4514 - 94 Cos[46 t]] -
2 Sqrt[1105] Cos[23 t] Sin[r] Sin[s]) Sin[
24 t + θ])/(-2210 Sqrt[2257 - 47 Cos[46 t]] +
53040 Sqrt[2] Cos[s] Sin[r] +
1105 Cos[r] Sqrt[4514 - 94 Cos[46 t]] Sin[23 t] -
47 Sqrt[1105] Sin[r] Sin[s] Sin[46 t])), ((
Cos[r] Cos[23 t])/Sqrt[2] + (
Sqrt[2257 - 47 Cos[46 t]] Sin[r] Sin[s])/(2 Sqrt[1105]))/(
1 - (Cos[r] Sin[23 t])/Sqrt[2] + (
Sin[r] (-53040 Sqrt[2] Cos[s] +
47 Sqrt[1105] Sin[s] Sin[46 t]))/(
2210 Sqrt[2257 - 47 Cos[46 t]]))},
{t, 0, 2 π}, {s, 0, 2 π}, PlotPoints -> 1000,
PlotRange -> 2.7, ViewPoint -> viewpoint, PlotStyle -> White,
Axes -> None, Mesh -> None, ViewAngle -> π/9,
ViewVertical -> {0, 0, -1}, Boxed -> False,
Background -> cols[[-1]], ImageSize -> 540,
Lighting -> {{"Point", cols[[1]], {3/4, 0, 0}},
{"Point", cols[[2]], {-3/4, 0, 0}}, {"Ambient", cols[[-1]], viewpoint}}],
{θ, 0., -2 π/23 - #, #}] &[-π/230]
];
Export[NotebookDirectory[] <> "knot.gif", knot, "DisplayDurations" -> 1/50, "AnimationRepetitions" -> Infinity]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=knots51.gif&userId=610054
[2]: http://community.wolfram.com/groups/-/m/t/1122344
[at0]: http://community.wolfram.com/web/wolframcom
Clayton Shonkwiler, 2017-06-26T04:16:27Z

[✓] Delete returned True and False from a list?
http://community.wolfram.com/groups/-/m/t/1127532
Hi,
I am using a certain condition to generate a large set of lists whose elements can contain the two words 'True' and 'False', as returned by the condition. How can I remove these two words from each list without doing it manually, given that some of the lists may not contain them? I have tried DeleteCases with matches to 'True' and 'False', but that didn't work.
This is an example of the lists I am generating:
{False, True, False, False, False,
1/128 (6 + 18 Sqrt[5] - 96 Subscript[a, 11]) ==
0, -(1/4) Sqrt[3] Subscript[a, 12] == 0, (Sqrt[3] Subscript[a, 12])/
8 == 0, -(1/8) Sqrt[3] Subscript[a, 13] == 0, (
Sqrt[3] Subscript[a, 13])/8 == 0}
Also, how can I delete elements that contain only numbers, with no variables or coefficients?
Thank you,
eftrsdeft rsd, 2017-06-24T16:37:45Z

[✓] NSolve two eqs connected to the London penetration depth of a Pb film?
http://community.wolfram.com/groups/-/m/t/1128280
Hello,
I am trying to numerically solve two equations that are connected to the London penetration depth of a thin Pb film. <br />
The physical background doesn't really matter, but if someone is interested I can post more about it. <br />
The 1st problem:
I need a solution for the following equation, which should give me a single value:
$$ 2.01121 \cdot 10^{-10} = 0.005 \cdot \left(1.48 \cdot 10^{-8} - 2 \lambda \tanh\left( \frac{1.48 \cdot 10^{-7}}{2 \lambda}\right)\right)$$
I know that the solution for lambda has to be in the range of 50-70 nm, but I'm not really familiar with how to solve something like this in Mathematica.
I tried to use the NSolve function, but it doesn't really give me anything. <br />
In[6]:= NSolve {2.01121*10^(-10) == 0.005* (1.48*10^(-8) - 2*x*tanh[ 1.48*10^(-7)/(2*x)])}
Out[6]= {NSolve (2.01121*10^-10 == 0.005 (1.48*10^-8 - 2 x tanh[7.4*10^-8/x]))}
The 2nd problem is similar, but this time the solution isn't just a value but a function that depends on T. Therefore I would need something like a table of values as output.
$$ 1-\frac{T}{7.2} = \frac{\Delta(T)}{1.9872\cdot10^{-22}}\tanh\left(\frac{\Delta(T)}{2.76\cdot 10^{-23}\, T}\right)$$
I tried to solve it again with NSolve:
In[16]:= NSolve {1 - (T/7.2) == (a[T]/(1.9872*10^(-22)))*tanh[a[T]/(T*2.76*10^(-23))]}
Out[16]= {NSolve (1 - 0.138889 T == 5.03221*10^21 a[T] tanh[(3.62319*10^22 a[T])/T])}
I am probably using NSolve incorrectly or getting the syntax wrong, but I can't find the error, so any help is appreciated.
Stefan Dietel, 2017-06-26T08:47:50Z

[✓] Postprocess a ContourPlot to find minimum and maximum values?
http://community.wolfram.com/groups/-/m/t/1126384
I have a complicated function that is time consuming to evaluate, and has lots of local minima and maxima
cp = ContourPlot[x y Sin[30 x] Sin[20 y], {x, 0, 1}, {y, 0, 1}]
Is there a simple way to find the coordinates and function value of the maximum and minimum of the function, as found by Mathematica in the process of generating the contour plot?
I looked at the FullForm of cp, but it is a long and cryptic GraphicsComplex.
James D Hanson, 2017-06-23T17:53:53Z

Use the output of ParametricNDSolve as initial condition?
http://community.wolfram.com/groups/-/m/t/1128573
Hello,
I am trying to use the ParametricNDSolve command to solve a system of differential equations. I have four systems of differential equations that should be solved order by order; the output of the first system is the initial condition for the second one, and so on.
The problem is that these systems of equations depend on a parameter, so I need to use the ParametricNDSolve command to solve them.
Now I have a question: how is it possible to use the output of ParametricNDSolve as an initial condition while keeping its parametric dependence?
Thanks a lot,
Bests
Mary Aghaei, 2017-06-26T13:36:15Z

[GIF] Moving bars illusion
http://community.wolfram.com/groups/-/m/t/1127522
![enter image description here][1]
These bars are moving at a constant speed together, but it looks as if they move to the right in an alternating fashion…
Code:
size={sizex,sizey}={600,400};
n=16;
bars=sizex Subdivide[-1/2,1/2,2n-1];
width=N[bars[[2]]-bars[[1]]];
bars=Rectangle[{#1,-sizey/2},{#2,sizey/2}]&@@@Partition[bars,2];
cols={Lighter[Yellow,1/5],Darker[Blue,1/5]};
heights=N[sizey Subdivide[-1/2,1/2,Length[cols]+1][[2;;-2]]];
ClearAll[CreateScene]
CreateScene[\[Alpha]_]:=Module[{recs},
recs=MapThread[{#1,Rectangle[{\[Alpha] sizex-2 width,#2-width/2},{\[Alpha] sizex+2 width,#2+width/2}]}&,{cols,heights}];
Rasterize[Graphics[{bars,recs},PlotRange->({{-1,1},{-1,1}}size/2),PlotRangePadding->0],"Image",RasterSize->size,ImageSize->size]
]
Manipulate[CreateScene[a], {a, -0.6, 0.6}]
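To export the animation as a GIF, a Table over the same parameter can be fed to Export (the frame step and file name here are arbitrary choices):

    frames = Table[CreateScene[a], {a, -0.6, 0.6, 0.01}];
    Export["movingbars.gif", frames, "DisplayDurations" -> 1/50]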
enjoy!
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=movingbars.gif&userId=73716
Sander Huisman, 2017-06-24T15:45:59Z

Solve a differential equation of a bomb trajectory?
http://community.wolfram.com/groups/-/m/t/1128238
Greetings and respect. Please help me to solve this problem with Wolfram Mathematica.
appears directly below the plane in the crosshairs of his visual targeting device. Assume that the wind is blowing horizontally throughout the entire space below the plane with a speed of 60 mph, and that the air density does not vary with altitude. The bomb has a mass of 100 kg. Assume that it is spherical in shape with a radius of 0.2 m.
(a) Calculate the required ground speed of the plane if the bomb is to strike the target.
(b) Plot the trajectory of the bomb. Explain why the "trailing side" of the trajectory is linear.
M P, 2017-06-26T06:17:34Z

Flappy Bird in a Mathematica notebook
http://community.wolfram.com/groups/-/m/t/1127985
I was asked to give a presentation about something fun I do with Mathematica. It struck me that I make a lot of interfaces, so why not make a game? As a first attempt I decided to port the game Flappy Bird to run in a Mathematica notebook. This is really a purely academic exercise, as I didn't stop to ask myself "should I do this?", but rather "how well would this run?".
You can find the final project on my GitHub account: [Flappy Bird in Mathematica][1]
I looked up a version of the game ([flappybird.io][2]), as I had never actually played the game before. I screen-captured the pipes and other sprites and stripped off the backgrounds with `RemoveBackground`. The version I took as inspiration did not have sounds, so I added sounds that I found via Google searches. I have no idea if they are correct.
I followed some ideas from an earlier Wolfram Technology Talk about [making Space Invaders in a Mathematica notebook][3], such as using a `ScheduledTask` to update the frames. I originally implemented hit detection using `RegionIntersection` but found that it was my main bottleneck. I coded my own compiled version using the distance from a rectangle to a point (it was two orders of magnitude faster). I use compiled functions wherever possible.
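Not the project's actual code, but a minimal sketch of such a compiled rectangle-to-point distance looks like this:

    (* distance from point (px, py) to the axis-aligned rectangle [x1, x2] x [y1, y2] *)
    rectPointDist = Compile[{{px, _Real}, {py, _Real}, {x1, _Real}, {y1, _Real}, {x2, _Real}, {y2, _Real}},
      Sqrt[Max[x1 - px, 0., px - x2]^2 + Max[y1 - py, 0., py - y2]^2]];
    rectPointDist[0., 0., 3., 4., 5., 6.]  (* 5. *)

A hit is then simply a distance smaller than the bird's radius.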
You load the game's source code with `<<FlappyBird.wl` and then load the game with `playFlappyBird[]`.
Here's a short video of the final result:
![enter image description here][4]
[1]: https://github.com/KMDaily/FlappyBird_Mathematica
[2]: http://flappybird.io/
[3]: http://www.wolfram.com/broadcast/video.php?c=104&v=41
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=FlappyGif.gif&userId=829295
Kevin Daily, 2017-06-26T03:33:15Z

[✓] Find shortest distance from a given point to a given line?
http://community.wolfram.com/groups/-/m/t/1127928
Is there a function in Mathematica that gives me the length of the shortest distance from a given point to a given line?
Laurens Wachters, 2017-06-25T06:42:04Z

Create irregular shapes design and evolution?
http://community.wolfram.com/groups/-/m/t/1127494
Is there a way in Mathematica to design regions and make them evolve towards irregular shapes? In particular, I am interested in the evolution from a circular region to an irregular shape, as illustrated in the attached picture. Some hints would be much appreciated ... ![enter image description here][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=IMG_0430.JPG&userId=123826
David Quesada, 2017-06-24T17:47:29Z

[✓] Solve a 2D heat equation inside a circular domain?
http://community.wolfram.com/groups/-/m/t/1126696
I am trying to solve the heat equation in 2D on a circular domain, and I used the example attached; however, for some reason I do not get any answer from it, and in principle it seems that I am following the same steps as in the original document from the Wolfram tutorials. Any help will be much appreciated. I am using version 11.1.1.
David Quesada, 2017-06-23T17:34:18Z

Motion of a classical particle in a box (2D and 3D)
http://community.wolfram.com/groups/-/m/t/1127402
# https://wolfr.am/mAuOX0XK
This repository was made for the Homework Assignment for Wolfram Summer School 2017.
The "FunwithPhysicsin2D.nb" file in this repository contains code for implementing
the steps described in this readme file.
## Author: Bhubanjyoti Bhattacharya
## Date: June 21,2017
## Motion of a classical particle in a box (2 dimensions)
Here we will describe the motion of a classical particle inside a box with hard walls.
The particle will be represented by a single unit of a 2D (or 3D) raster. The interactions
with the walls will be considered elastic, i.e. such interactions simply reverse the direction
of motion perpendicular to the wall.
### The first step is to create a box and a point particle with given coordinates within the box.
We first create a 2-dimensional raster with one of the elements highlighted using a different color:
```
mybox[{mx_Integer, ny_Integer}, {px_Integer, qy_Integer}] :=
Graphics[Raster[
ReplacePart[
ConstantArray[{1, 1, 1}, {ny, mx}], {qy, px} -> {1, 0, 0}]],
Frame -> True, FrameTicks -> None];
mybox[{20, 10}, {5, 5}]
```
![basic_raster][1]
### The second step is to animate this box
(Note that the .nb file in this repository has more steps (more detailed description of the process I followed))
(Note also that we will use discrete time steps to describe the motion, to use this in conjunction with the Raster function)
```
mytimeAnimatedbox[{mx_Integer, ny_Integer}, {x0_Integer,
y0_Integer}, {vx_Integer, vy_Integer}] :=
Animate[mybox[{mx, ny}, {x0 + vx t, y0 + vy t}], {t, 0,
Min[(mx - x0)/vx, (ny - y0)/vy], 1}, AnimationRunning -> False];
mytimeAnimatedbox[{20, 10}, {5, 5}, {1, 1}]
```
![animated_figure][2]
Above we made a 20 x 10 raster in 2 dimensions. The particle is started with coordinates (5,5).
The new function takes values (vx,vy) which describe the speed of the particle in x and y directions respectively.
### The third step is to figure out what happens after collisions with a wall.
The particle's motion in 2D can be broken down into two independent motions in the x and y directions.
In our simple case these two motions are similar to each other. We can therefore describe both motions
with the same function.
Here we will try to figure out the function that describes the position of the particle 'n' time steps after it starts.
The idea is as follows:
* If incrementing the position by any number of time steps does not result in the particle hitting the wall boundaries,
then the particle's position follows the simple rule $x = x_0 + v_x t$
* If the particle hits a wall, we assume that the collision is elastic, so its velocity perpendicular to the wall
simply changes sign.
The above two rules can be implemented using the following function:
```
posn[pos_Integer, v_Integer, posmax_Integer, t_Integer] :=
  1 + If[EvenQ[Floor[(pos + v (t - 1))/(posmax - 1)]],
     Mod[pos + v (t - 1), posmax - 1],
     (posmax - 1) Floor[(pos + v (t - 1))/(posmax - 1) + 1] - pos - v (t - 1)];
```
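Before plotting, we can sanity-check the reflection logic by evaluating `posn` for a few time steps (the expected values below were worked out by hand from the definition above):

```
Table[posn[5, 1, 10, t], {t, 0, 14}]
(* {5, 6, 7, 8, 9, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1} *)
```

The particle moves up from position 5, reflects off the wall at 10, travels back down to 1, and reflects again, giving a triangle wave.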
In order to understand the above function we can plot it as a function of time steps. Below
we plot it for the first 50 time steps:
```
ListLinePlot[Table[{n, posn[5, 1, 10, n]}, {n, 0, 50}]]
```
![position_vs_time_plot][3]
### The final step is to put all of this together to actually obtain the result
The code that creates the two-dimensional box for us is as follows:
```
myFinalAnimatedbox[{m_Integer, n_Integer}, {x_Integer, y_Integer}, {vx_Integer, vy_Integer}, tt_Integer] :=
  Animate[mybox[{m, n}, {posn[x, vx, m, t], posn[y, vy, n, t]}],
    {t, 0, tt, 1}, AnimationRate -> 20];
```
![final_animated_figure][4]
## We will extend this construction to 3 dimensions
### Instead of using a 2D Raster we use a 3D Raster
Using the same techniques as before we can construct the function that generates a free particle in a 3D box:
```
my3DFinalbox[{m_Integer, n_Integer, o_Integer}, {x_Integer, y_Integer, z_Integer}, {vx_Integer, vy_Integer, vz_Integer}, tt_Integer] :=
  Animate[my3Dbox[{m, n, o}, {posn[x, vx, m, t], posn[y, vy, n, t], posn[z, vz, o, t]}],
    {t, 0, tt, 1}, AnimationRate -> 5, AnimationRunning -> True];
my3DFinalbox[{10, 10, 10}, {5, 5, 3}, {1, 1, 1}, 100]
```
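Note that the helper `my3Dbox` used above is not defined in this post. A minimal sketch analogous to `mybox`, using `Raster3D` with RGBA values so that the box interior is nearly transparent (the function name matches the call above, but the colors and opacity values are my assumptions):

```
my3Dbox[{mx_Integer, ny_Integer, oz_Integer}, {px_Integer, qy_Integer, rz_Integer}] :=
  Graphics3D[
    Raster3D[ReplacePart[ConstantArray[{1, 1, 1, 0.02}, {oz, ny, mx}],
      {rz, qy, px} -> {1, 0, 0, 1}]],
    Boxed -> True];
my3Dbox[{10, 10, 10}, {5, 5, 3}]
```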
![3D_animated_figure][5]
More cool examples in 3D are here: https://wolfr.am/mARXbPv7
Edited: removed the padding in the figures, so that the collisions look more realistic, using `PlotRangePadding -> None`.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5693Fig1.png&userId=1081732
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5742Fig2.gif&userId=1081732
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Fig3.png&userId=1081732
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1865Fig4.gif&userId=1081732
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=8935Fig5.gif&userId=1081732

*Bhubanjyoti Bhattacharya, 2017-06-24T00:46:58Z*

## What does a RetrievalFailure[nodat] mean?
http://community.wolfram.com/groups/-/m/t/1126992
Hi everyone,
Can I please have help with the following question?
I can't access all the data I want from the internet, especially the Wolfram Data Repository, even though my connectivity does not seem to be at fault. When I go to "Test Internet Connectivity" it returns a success message, and the following inputs also return meaningful answers over my internet connection:
    Import["http://www.stephenwolfram.com/img/homepage/stephen-wolfram-portrait.png"]
    CountryData[Entity["Country", "United States"], "Flag"]
    WikipediaData["computers"]
However, when I try other methods of accessing data, specifically the following example included in the EIWL book:

    Entity["AnatomicalStructure", "Skull"]["Graphics3D"]

or when I enter

    SpeciesData[Entity["Species", "Family:Elephantidae"], "AlternateCommonNames"]

I get the following error message:

    EntityValue::nodat: Unable to download data. Some or all results may be missing.
    Missing["RetrievalFailure"]
I have Chrome 48 and Mathematica 11, with a fully functional browser and a perfectly working connection, and I can access Wolfram|Alpha just fine.
Thanks.

*Titus Sharman, 2017-06-23T23:05:40Z*

## Building a simple Wolfram Language code for Tensorial/Vectorial Calculus
http://community.wolfram.com/groups/-/m/t/1127218
There are a lot of Mathematica packages for vector/tensor calculus: [Tensorial][1], [Advanced Tensor Analysis][2], [Ricci][3], [TensoriaCalc][4], [grt][5], [xAct][6] are just a few mentions.
The main thing I don't like about these packages is that the majority of them (if not all) are not up to date with the current Mathematica version; some haven't seen an update in decades. The other major drawback is the cumbersome notation and declarations, and the fact that almost all of them are designed with General Relativity in mind and use coordinate notation, so the results are not really coordinate-free.
With this in mind, I have developed a "package" that can deal with symbolic vectors/tensors in a coordinate-free form, and whose code is human-readable (some package implementations in this area are simply indecipherable).
In the first part I'll present the code with brief explanations, and in the second part some simple examples.
The following functions are needed to clear OwnValues, DownValues, UpValues and SubValues selectively; Mathematica has no built-in way of doing this.
(* Prevent evaluation *)
SetAttributes[{ClearOwnValues, ClearDownValues, ClearUpValues, ClearSubValues}, HoldFirst]
(* Always return true for commodity later *)
ClearOwnValues[var_Symbol] := (OwnValues@var = {}; True)
ClearDownValues[var_Symbol] := (DownValues@var = {}; True)
ClearUpValues[var_Symbol] := (UpValues@var = {}; True)
ClearSubValues[var_Symbol] := (SubValues@var = {}; True)
(* Delete Values of "f" if they match the input *)
ClearDownValues[expr:f_Symbol[___]] := (DownValues@f = DeleteCases[DownValues@f, _?(!FreeQ[First@#, HoldPattern@expr] &)]; True)
ClearUpValues[expr:f_Symbol[___]] := (UpValues@f = DeleteCases[UpValues@f, _?(!FreeQ[First@#, HoldPattern@expr] &)]; True)
ClearSubValues[expr:f_Symbol[___][___]] := (SubValues@f = DeleteCases[SubValues@f, _?(!FreeQ[First@#, HoldPattern@expr] &)]; True)
The Define-family functions are the core of this package; they are used to define all kinds of relationships and can be easily extended.
You can define a symbol "var" or a function "fun[var, ___]" to be a certain "type". "var" cannot have OwnValues, otherwise it would be evaluated.
You can only define symbols; the function "fun" should be used primarily for script-like wrappers such as Subscript, OverHat, etc.
(* Prevent evaluation *)
SetAttributes[Define$Internal, HoldAll]
(*
Internal version of Define, all others versions are a call of this function.
The OwnValues and/or DownValues are cleared immediately, this step is to avoid rule-parsing.
The possible types are: Real, Imaginary, Tensor and Constant. For Tensor type additional parameters are needed: rank and dimension.
*)
Define$Internal[(var_Symbol /; ClearOwnValues@var) | (var:Except[Hold, fun_][head_Symbol /; ClearOwnValues@head, ___] /; ClearDownValues@var),
type:"Real" | "Imaginary" | "Tensor" | "Constant",
rank_Integer:2, dim_Integer:3] := Module[{tag},
(* All expressions are defined as UpValues, assign it to the corresponding tag *)
(* UpValues cannot be deeply nested, hence the need to assign it to the "tag" *)
tag = If[Head@var === Symbol, var, fun];
Which[
type === "Real", (* Typical properties needed for real quantities *)
Evaluate@tag /: Element[var, Reals] = True;
Evaluate@tag /: Re[v:var] := v;
Evaluate@tag /: Im[v:var] := 0;
Evaluate@tag /: Conjugate[v:var] := v;
Evaluate@tag /: Abs[v:var] := RealAbs@v;
,
type === "Imaginary", (* Typical properties needed for Imaginary quantities *)
Evaluate@tag /: Element[var, Reals] = False;
Evaluate@tag /: Re[v:var] = 0;
Evaluate@tag /: Im[v:var] := v/I;
Evaluate@tag /: Conjugate[v:var] := -v;
Evaluate@tag /: Abs[v:var] := RealAbs@Im@v;
,
type === "Tensor", (* For compatibility with Mathematica's current tensor functions *)
Evaluate@tag /: ArrayQ@var = rank != 0;
Evaluate@tag /: TensorQ@var = rank != 0;
Evaluate@tag /: MatrixQ@var = rank == 2;
Evaluate@tag /: VectorQ@var = rank == 1;
Evaluate@tag /: ListQ@var = rank != 0;
Evaluate@tag /: ScalarQ@var = rank == 0;
Evaluate@tag /: TensorRank@var = rank;
Evaluate@tag /: TensorDimensions@var = ConstantArray[dim, {rank}];
Evaluate@tag /: Element[var, Arrays@TensorDimensions@var] = True;
Evaluate@tag /: Element[var, Matrices@{dim, dim}] = rank == 2;
Evaluate@tag /: Element[var, Vectors@dim] = rank == 1;
,
type === "Constant", (* A constant has zero "derivative" *)
Evaluate@tag /: ConstantQ@var = True;
Evaluate@tag /: grad@var = 0;
Evaluate@tag /: div@var = 0;
Evaluate@tag /: curl@var = 0;
Evaluate@tag /: DotNabla[_, var] = 0;
Evaluate@tag /: D[var, __] = 0;
Evaluate@tag /: Dp[var, __] = 0;
Evaluate@tag /: Dt[var, ___] = 0;
Evaluate@tag /: Delta[var] = 0;
,
True, $Failed]
]
(* Assign more than one variable *)
Define$Internal[vars__ /; Length@{vars} > 1, type:"Real" | "Imaginary" | "Tensor" | "Constant", rank_Integer:2, dim_Integer:3] :=(
Define$Internal[#, type, rank, dim] & /@ Hold /@ Hold@vars // ReleaseHold;) (* Hacky-way of passing Hold down *)
Define$Internal[Hold@var_, type:"Real" | "Imaginary" | "Tensor" | "Constant", rank_Integer:2, dim_Integer:3] := Define$Internal[var, type, rank, dim]
(* Main Define functions *)
SetAttributes[{DefineReal, DefineImaginary, DefineTensor, DefineConstant}, HoldAll]
DefineReal[vars__] := Define$Internal[vars, "Real"]
DefineImaginary[vars__] := Define$Internal[vars, "Imaginary"]
DefineTensor[vars__, rank_Integer:2, dim_Integer:3] := Define$Internal[vars, "Tensor", rank, dim]
DefineConstant[vars__] := Define$Internal[vars, "Constant"]
(* Define multiple things at once *)
SetAttributes[{DefineRealTensor, DefineConstantTensor, DefineRealConstantTensor}, HoldAll]
DefineRealTensor[vars__, rank_Integer:2, dim_Integer:3] := (DefineReal@vars; DefineTensor[vars, rank, dim];)
DefineConstantTensor[vars__, rank_Integer:2, dim_Integer:3] := (DefineConstant@vars; DefineTensor[vars, rank, dim];)
DefineRealConstantTensor[vars__, rank_Integer:2, dim_Integer:3] := (DefineReal@vars; DefineConstant@vars; DefineTensor[vars, rank, dim];)
Now it is possible to define tensorial variables and make them behave as tensors with the current Mathematica implementation.
Some built-in functions need to be redefined to work with symbolic tensors. An example of this necessity:
(* Define two tensors a and b *)
DefineTensor[a, b, 2]
TensorRank[2*a - 3*b] (* Return 2 *)
TensorQ[a] (* Return True *)
TensorQ[2*a] (* Return False *)
Mathematica's TensorQ doesn't know that a scalar times a tensor is a tensor. The following code performs the redefinitions:
Unprotect[TensorQ, VectorQ, TensorRank, Dot, Cross, TensorProduct]
(* Numbers are always scalar/constant. These functions are not built-in. *)
ScalarQ[a_?NumericQ] := True
ConstantQ[a_?NumericQ] := True
(* Complexes *)
TensorQ[(Re|Im|Conjugate)[a_]] := TensorQ@a
VectorQ[(Re|Im|Conjugate)[a_]] := VectorQ@a
ScalarQ[(Re|Im|Conjugate)[a_]] := ScalarQ@a
ConstantQ[(Re|Im|Conjugate)[a_]] := ConstantQ@a
TensorRank[(Re|Im|Conjugate)[a_]] := TensorRank@a
(* Plus *)
TensorQ[(a_?TensorQ) + (b_?TensorQ)] := TensorRank@a === TensorRank@b
VectorQ[(a_?VectorQ) + (b_?VectorQ)] := True
ScalarQ[(a_?ScalarQ) + (b_?ScalarQ)] := True
ConstantQ[(a_?ConstantQ) + (b_?ConstantQ)] := True
(* Times *)
TensorQ[(a__?ScalarQ) * (b_?TensorQ)] := True
VectorQ[(a__?ScalarQ) * (b_?VectorQ)] := True
ScalarQ[(a__?ScalarQ) * (b_?ScalarQ)] := True
ConstantQ[(a_?ConstantQ /; ScalarQ@a) * (b_?ConstantQ)] := True
(* Pass scalars out of Dot and Cross, as is done in TensorProduct *)
Dot[a___, Times[b_, s__?ScalarQ], c___] := Times[s, Dot[a, b, c]]
Cross[a_, Times[b_, s__?ScalarQ]] := Times[s, Cross[a, b]]
Cross[Times[a_, s__?ScalarQ], b_] := Times[s, Cross[a, b]]
(* Dot *)
TensorQ[(a_?TensorQ) . (b_?TensorQ)] /; TensorRank@a + TensorRank@b - 2 >= 1 := True
VectorQ[(a_?TensorQ) . (b_?TensorQ)] /; TensorRank@a + TensorRank@b - 2 == 1 := True
ScalarQ[(a_?VectorQ) . (b_?VectorQ)] := True
ConstantQ[(a_?ConstantQ /; TensorQ@a) . (b_?ConstantQ /; TensorQ@b)] := True
(* Automatically evaluate to zero, as TensorProduct *)
Dot[a___, 0, b___] := 0
(* Cross *)
TensorQ[(a_?VectorQ) \[Cross] (b_?VectorQ)] := True
VectorQ[(a_?VectorQ) \[Cross] (b_?VectorQ)] := True
ConstantQ[(a_?ConstantQ /; VectorQ@a) \[Cross] (b_?ConstantQ /; VectorQ@b)] := True
(* The cross product of a vector with itself automatically evaluates to zero *)
Cross[a_?VectorQ, a_?VectorQ] := 0
(* Automatically evaluate to zero, as TensorProduct *)
Cross[a___, 0, b___] := 0
(* Return single argument as Dot, Times and TensorProduct *)
Cross[a_] := a
(* Tensor Product *)
TensorQ[(a_?TensorQ) \[TensorProduct] (b_?TensorQ)] := True
ConstantQ[(a_?ConstantQ /; TensorQ@a) \[TensorProduct] (b_?ConstantQ /; TensorQ@b)] := True
(* Power *)
ScalarQ@Power[a_?ScalarQ, b_?ScalarQ] := True
ScalarQ[1/a_?ScalarQ] := True
ConstantQ@Power[a_?ConstantQ /; ScalarQ@a, b_?ConstantQ /; ScalarQ@b] := True
ConstantQ[1/a_?ConstantQ /; ScalarQ@a] := True
(* grad *)
grad[_?ConstantQ] := 0
TensorQ@grad[a_?ScalarQ] := True
VectorQ@grad[a_?ScalarQ] := True
TensorQ@grad[a_?TensorQ] := True
TensorRank@grad[a_?ScalarQ] := 1
TensorRank@grad[a_?TensorQ] := TensorRank@a + 1
(* div *)
div[_?ConstantQ] := 0
TensorQ@div[a_?TensorQ /; TensorRank@a >= 2] := True
VectorQ@div[a_?TensorQ /; TensorRank@a == 2] := True
ScalarQ@div[a_?VectorQ] := True
TensorRank@div[a_?TensorQ] := TensorRank@a - 1
(* curl *)
curl[_?ConstantQ] := 0
TensorQ@curl[a_?VectorQ] := True
VectorQ@curl[a_?VectorQ] := True
TensorRank@curl[a_?VectorQ] := 1
(* DotNabla *)
DotNabla[_, _?ConstantQ] := 0
(* Dp *)
Dp[_?ConstantQ, args__] := 0
TensorQ@Dp[a_, args__] := TensorQ@a
VectorQ@Dp[a_, args__] := VectorQ@a
ScalarQ@Dp[a_, args__] := ScalarQ@a
TensorRank@Dp[a_, args__] := TensorRank@a
(* Delta *)
Delta[_?ConstantQ] := 0
TensorQ@Delta[a_] := TensorQ@a
VectorQ@Delta[a_] := VectorQ@a
ScalarQ@Delta[a_] := ScalarQ@a
TensorRank@Delta[a_] := TensorRank@a
(* List *)
Dp[a_List, args__] := Dp[#, args] & /@ a
(* Don't assume anything is a scalar/constant *)
ScalarQ[a_] := False
ConstantQ[a_] := False
Protect[TensorQ, VectorQ, TensorRank, Dot, Cross, TensorProduct]
Here we have defined the tensor functions grad, div and curl, which are self-explanatory; Dp is the partial derivative; Delta gives the variation of a quantity (somewhat related to Dp); and DotNabla is, for lack of a better name, the convective derivative.
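With these redefinitions in place, the earlier failing checks now succeed. A quick sketch, assuming the code above has been evaluated:

    DefineTensor[a, b, 2]
    TensorQ[2*a]   (* now True: a scalar times a tensor is a tensor *)
    TensorQ[a . b] (* True, since the resulting rank is 2 + 2 - 2 = 2 *)
    ScalarQ[Pi]    (* True, because Pi is numeric *)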
For nicer printing, we'll define the following notation:
(* Hacky-way to create parenthesis *)
MakeBoxes[Parenthesis[a_], _] := MakeBoxes[a.1][[1, 1]]
MakeBoxes[grad[a_], form:TraditionalForm] := TemplateBox[{MakeBoxes@Parenthesis@a}, "grad", Tooltip -> Automatic,
DisplayFunction :> (RowBox@{"\[Del]", #1} &)]
MakeBoxes[div[a_], form:TraditionalForm] := TemplateBox[{MakeBoxes@Parenthesis@a}, "div", Tooltip -> Automatic,
DisplayFunction :> (RowBox@{"\[Del]\[CenterDot]", #1} &)]
MakeBoxes[curl[a_], form:TraditionalForm] := TemplateBox[{MakeBoxes@Parenthesis@a}, "curl", Tooltip -> Automatic,
DisplayFunction :> (RowBox@{"\[Del]\[Cross]", #1} &)]
MakeBoxes[DotNabla[a_, b_], form:TraditionalForm] := TemplateBox[{MakeBoxes@Parenthesis@a, MakeBoxes@Parenthesis@b}, "DotNabla", Tooltip -> Automatic,
DisplayFunction :> (RowBox@{"(", #1, "\[CenterDot]\[Del]", ")", #2} &)]
MakeBoxes[Delta[a_], form:TraditionalForm] := TemplateBox[{MakeBoxes@Parenthesis@a}, "Delta", Tooltip -> Automatic,
DisplayFunction :> (RowBox@{"\[Delta]", #1} &)]
And now for the most important part of the code, the function ExpandDerivative, which, as the name suggests, expands the derivative-like functions:
(* Expand Derivatives/Vectors/Tensors on expr and apply custom rules *)
ExpandDerivative[expr_, rules_:{}] := expr //. Flatten@{
(* Custom Rules *)
rules,
(* Linearity *)
(op:grad|div|curl|Delta|Inactive[grad]|Inactive[div]|Inactive[curl]|Inactive[Delta]|Re|Im|Conjugate)[a_ + b__] :> op@a + op[+b],
(op:Dp|Inactive[Dp]|Sum|Inactive[Sum])[a_ + b__, arg__] :> op[a, arg] + op[+b, arg],
(op:Times|Dot|TensorProduct|Cross|DotNabla|Inactive[DotNabla])[a___, b_ + c__, d___] :> op[a, b, d] + op[a, +c, d],
(op:grad|div|curl|Delta|Inactive[grad]|Inactive[div]|Inactive[curl]|Inactive[Delta]|Re|Im|Conjugate)[(op\[CapitalSigma]:Sum|Inactive[Sum])[a_, args__]] :> op\[CapitalSigma][op@a, args],
(* Sum *)
(op:Sum|Inactive[Sum])[s_*a_, v_Symbol] /; FreeQ[s, v] :> s*op[a, v],
(op:Sum|Inactive[Sum])[s_, v_Symbol] /; FreeQ[s, v] :> s*op[1, v],
(* Complexes *)
Conjugate@(op:Times|Dot|Cross|TensorProduct)[a_, b__] :> op[Conjugate@a, Conjugate@op@b], (* Pass Conjugate to child *)
(op:grad|div|curl|Delta|Inactive[grad]|Inactive[div]|Inactive[curl]|Inactive[Delta])[(opC:Re|Im|Conjugate)[a_]] :> opC@op@a, (* Pass Conjugate/Re/Im to parent *)
Dp[(op:Re|Im|Conjugate)[a_], v_] :> op@Dp[a, v], (* Pass Conjugate/Re/Im to parent *)
(* Triple Product *)
Cross[a_?VectorQ, Cross[b_?VectorQ, c_?VectorQ]] :> b*a.c - c*a.b,
Cross[Cross[a_?VectorQ, b_?VectorQ], c_?VectorQ] :> b*a.c - a*b.c,
(* Quadruple Product *)
Dot[Cross[a_?VectorQ, b_?VectorQ], Cross[c_?VectorQ, d_?VectorQ]] :> (a.c)*(b.d) - (a.d)*(b.c),
(* Second Derivatives *)
div@curl[_?VectorQ] :> 0,
curl@grad[_?ScalarQ | _?VectorQ] :> 0,
(* grad *)
grad[(s_?ScalarQ) * (b_)] :> s*grad@b + b\[TensorProduct]grad@s,
grad[(a_?VectorQ) . (b_?VectorQ)] :> a\[Cross]curl@b + b\[Cross]curl@a + DotNabla[a, b] + DotNabla[b, a], (* Use physics form *)
grad[(s_?ScalarQ) ^ (n_?ConstantQ)] :> n*s^(n-1)*grad@s,
grad[(n_?ConstantQ /; ScalarQ@n) ^ (s_?ScalarQ)] :> n^s*Log[n]*grad@s,
(* div *)
div[(s_?ScalarQ) * (b_?TensorQ)] :> s*div@b + b.grad@s,
div[(a_?VectorQ) \[TensorProduct] (b_?VectorQ)] :> DotNabla[b, a] + a*div@b,
div[(a_?VectorQ) \[Cross] (b_?VectorQ)] :> b.curl@a - a.curl@b,
(* curl *)
curl[(s_?ScalarQ) * (b_?VectorQ)] :> grad[s]\[Cross]b + s*curl@b,
curl[(a_?VectorQ) \[Cross] (b_?VectorQ)] :> div[a\[TensorProduct]b - b\[TensorProduct]a],
(* DotNabla *)
DotNabla[(s_?ScalarQ) * (b_?VectorQ), c_?VectorQ] :> s*DotNabla[b, c],
DotNabla[a_?VectorQ, (\[Beta]_?ScalarQ)*(c_?VectorQ)] :> c*a.grad@\[Beta] + \[Beta]*DotNabla[a, c],
(* Dp *)
Dp[(op:Times|Dot|Cross|TensorProduct)[a_, b__], v_Symbol] :> op[Dp[a, v], b] + op[a, Dp[op@b, v]],
Dp[Power[a_?ScalarQ, b_?ScalarQ], v_Symbol] :> Power[a, b-1]*b*Dp[a, v] + Power[a,b]*Log[a]*Dp[b, v],
(* Delta *)
Delta[(op:Times|Dot|Cross|TensorProduct)[a_, b__]] :> op[Delta@a, b] + op[a, Delta@op@b],
Delta@Power[a_?ScalarQ, b_?ScalarQ] :> Power[a, b-1]*b*Delta[a] + Power[a,b]*Log[a]*Delta[b]
}
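To see ExpandDerivative apply a product rule, here is a minimal usage sketch (assuming the definitions above have been evaluated; `s` is declared a real scalar and `b` a real vector):

    DefineRealTensor[s, 0]
    DefineRealTensor[b, 1, 3]
    ExpandDerivative[curl[s*b]]
    (* expected: Cross[grad[s], b] + s*curl[b], the product rule for the curl *)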
Some examples. Calculating the divergence of the Maxwell stress tensor in vacuum:
![MST][7]
Calculate the Einstein-Laub force density for linear dielectrics:
![EL][8]
Testing the Poynting theorem in vacuum (no sources):
![P][9]
Where the first argument is the quantity being "tested".
Many other uses are possible, and it is fairly easy to extend the definitions.
[1]: http://library.wolfram.com/infocenter/Demos/434/
[2]: http://library.wolfram.com/infocenter/MathSource/8827/
[3]: https://sites.math.washington.edu/~lee/Ricci/
[4]: http://www.stargazing.net/yizen/Tensoria.html
[5]: http://www.vaudrevange.com/pascal/grt/
[6]: http://www.xact.es/
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Fig1.png&userId=845022
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Fig2.png&userId=845022
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Fig3.png&userId=845022

*Thales Fernandes, 2017-06-23T22:49:11Z*

## Perform a numerical integration?
http://community.wolfram.com/groups/-/m/t/1124347
I am getting an error while evaluating the following integral in Mathematica. Even though I get a result, I am sure it is wrong.
ClearAll["Global`*"]
{r, a, b, Z, A, B,phi } = {4.087, 1.205, 0.3812, 0, 345.0527, 606741.04395, Pi/3} // Rationalize[#, 0] &;
JJ[n_] := r NIntegrate[
   1/((t Sin[phi])^2 + r^2 + (t Cos[phi] + Z - z)^2 - 2 r t Sin[phi] Cos[eta])^n,
   {eta, 0, 2 Pi}, {z, 0, 50}, {t, -5/3, 5/3},
   MinRecursion -> 10, MaxRecursion -> 35, WorkingPrecision -> 20];
a b (-A JJ[3] + B JJ[6])
NIntegrate::eincr: The global error of the strategy GlobalAdaptive has increased more than 2000 times. The global error is expected to decrease monotonically after a number of integrand evaluations. Suspect one of the following: the working precision is insufficient for the specified precision goal; the integrand is highly oscillatory or it is not a (piecewise) smooth function; or the true value of the integral is 0. Increasing the value of the GlobalAdaptive option MaxErrorIncreases might lead to a convergent numerical integration. NIntegrate obtained 0.01443218400405474348879705436971438392688666301951432535504827223787189`70. and 1.034077729663448519046495248747417797443269722765919064706177094984896`70.*^-10 for the integral and error estimates. >>
NIntegrate::eincr: The global error of the strategy GlobalAdaptive has increased more than 2000 times. The global error is expected to decrease monotonically after a number of integrand evaluations. Suspect one of the following: the working precision is insufficient for the specified precision goal; the integrand is highly oscillatory or it is not a (piecewise) smooth function; or the true value of the integral is 0. Increasing the value of the GlobalAdaptive option MaxErrorIncreases might lead to a convergent numerical integration. NIntegrate obtained 6.363900979126543579586998981914238753034987512692844419582498779915941`70.*^-6 and 1.243798561618425968533853139468121686264826907454906286861264969523907`70.*^-13 for the integral and error estimates. >>
-2.100045775865940530
Symbolic integration doesn't give an answer in Mathematica. Is there any way to speed up the symbolic integration in this case?

*sahad dubai, 2017-06-19T19:22:26Z*

## Printout3D using RegionPlot3D, ParametricPlot3D, Graphics3D?
http://community.wolfram.com/groups/-/m/t/1127354
## Combining RegionPlot3D, ParametricPlot3D and Graphics3D: it is possible to see all 3 with Show, but not to print them with Printout3D ##
co := {0.7, 0.7, 0.7};
x1 := -2;
y1 := -2;
z1 := 1;
p[0] := co + {0, 0, 0};
p[1] := co + {x1, 0, 0};
p[2] := co + {x1, 0, z1};
p[3] := co + {0, 0, z1};
p[4] := co + {0, y1, 0};
p[5] := co + {x1, y1, 0};
p[6] := co + {x1, y1, z1};
p[7] := co + {0, y1, z1};
cube := {
Yellow, Polygon[{p[0], p[1], p[2], p[3]}],
Pink, Polygon[{p[0], p[1], p[5], p[4]}],
Cyan, Polygon[{p[0], p[4], p[7], p[3]}],
Red, Polygon[{p[3], p[2], p[6], p[7]}],
Purple, Polygon[{p[4], p[5], p[6], p[7]}],
Blue, Polygon[{p[1], p[5], p[6], p[2]}
]};
Print["G1=Graphics3D[{cube}]-yes"];
G1 =
Graphics3D[{cube}]
Print["D1=DiscretizeGraphics[g1]-yes"];
D1= DiscretizeGraphics[G1]
Print["G2=ParametricPlot3D-yes"];
G2 =
ParametricPlot3D[
1.4 {Cos[t] (3 + Cos[u]), Sin[t] (3 + Cos[u]), Sin[u]}, {t, 0,
2 Pi}, {u, 0, 2 Pi}, Boxed -> False]
Print["D2=DiscretizeGraphics[G2]-yes"];
D2 = DiscretizeGraphics[G2]
Print["G3=RegionPlot3D[x^2+y^2>4,{x,-2,2},{y,-2,2},{z,-2,1}, Mesh -> False, PlotPoints -> 35, PlotRange -> All]-yes"];
G3 = RegionPlot3D[x^2 + y^2 > 4, {x, -2, 2}, {y, -2, 2}, {z, -2, 1},
Mesh -> False, PlotPoints -> 35, PlotRange -> All]
Print["D3=DiscretizeGraphics[G3]-yes"];
D3 = DiscretizeGraphics[G3]
Print["S1=Show[G1,G2,G3]-yes"]; S1 = Show[G1, G2, G3]
Print["Printout3D[ S1 ]- no G2,G3"]; Printout3D[S1]
Print["S2=Show[D1,D2,D3]-yes"]; S2 = Show[D1, D2, D3]
Print["Printout3D[ S2 ]- no"]; Printout3D[S2]
Print["S3=Show[D1,S2,S3]-no"]; S3 = Show[D1, S2, S3]
Print["Printout3D[ S3 ]- no"]; Printout3D[S3]
Print["S4=Show[G1,D2,G3]-yes"]; S4 = Show[G1, D2, G3]
Print["Printout3D[ S4 ]- no"]; Printout3D[S4]
Print["S5=Show[G1,G2,D3]-yes"]; S5 = Show[G1, G2, D3]
Print["Printout3D[ S5 ]- no G2"]; Printout3D[S5]
Print["S6=Show[D1,D2,G3]-yes"]; S6 = Show[D1, D2, G3]
Print["Printout3D[ S6 ]- no"]; Printout3D[S6]
Print["S7=Show[G1,D2,D3]-yes"]; S7 = Show[G1, D2, D3]
Print["Printout3D[ S7 ]- no"]; Printout3D[S7]
Print["S8=Show[D1,G2,D3]-yes"]; S8 = Show[D1, G2, D3]
Print["Printout3D[ S8 ]- no G2"]; Printout3D[S8]

*Angel Luis Diaz Perez, 2017-06-23T20:46:30Z*

## Breast Cancer Diagnosis
http://community.wolfram.com/groups/-/m/t/1120757
During the Wolfram Summer School Armenia 2016, I worked on the project [Facial Emotion Recognition][1], determining emotions on faces dynamically. The successful outcome of that project made me wonder whether the same procedure could be applied to a medical problem. After studying different problems, I settled on breast cancer diagnosis in mammograms. Before getting into the technical side, it's worth knowing more about breast cancer:
![breast cancer statistics][2]
Breast cancer is the most common cancer among American women, after skin cancers. About 1 in 8 (12%) of U.S. women will develop invasive breast cancer during their lifetime. But advances in breast cancer treatment mean many women can expect to beat the disease and maintain their physical appearance. The 5-year survival rate for women with breast cancer was 89% in 2015, up from 63% in the 1960s. ([Read more][3])
![Digital Database for Screening Mammography][4]
Breast cancer is sometimes found after symptoms appear, but many women with breast cancer have no symptoms. Different tests can be used to look for and diagnose breast cancer. If your doctor finds an area of concern on a screening test (a mammogram), or if you have symptoms that could mean breast cancer, you will need more tests to know for sure if it’s cancer. After all, a biopsy is the only way to know FOR SURE if it’s cancer. ([Read more][5]) However, knowing that we are facing a complex problem, we can continue to the technical part of it:
Thanks to the University of South Florida, we use the [DDSM][6] (Digital Database for Screening Mammography) for this project. The data is divided into 3 groups, Normal, Benign and Cancer, representing the case condition. The Normal group contained 1624 samples, the Benign and Cancer groups 181 and 283 respectively. (Unfortunately, not a great amount of data by machine learning standards.) Finally, all images had to be converted to PNG format before starting to code.
Tolerance problem
-----------------
Reviewing the images, we noticed that there is no strict discipline in positioning during mammography: many images were unadjusted, and some were reflected right to left. To obtain a more robust system, all images were rotated by 5, 10 and 15 degrees in both clockwise and counterclockwise directions (with appropriate cut-offs performed on the newly generated artifacts) and also reflected from left to right. This augmentation increased the overall number of images 14-fold.
rotateDatas[x_] := Join[{x},
  Flatten@Table[ImageRotate[x, s d Degree, Background -> Black],
    {d, {5, 10, 15}}, {s, {-1, 1}}]];
reflectData := List[Map[(ImageReflect[#, Right -> Left]) &, #], #] &;
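The two augmentation steps can be combined as follows (a sketch; `imgs` stands for an assumed list of imported mammogram images):

    (* 7 rotations per image, then the whole set reflected left-to-right: 14 variants per original *)
    augmented = Flatten[reflectData[Flatten[rotateDatas /@ imgs]]];
    Length[augmented] == 14 Length[imgs]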
Neural network
--------------
As in the Facial Emotion Recognition project, the LeNet5 network was used. The encoder layer has 3 classes: Normal, Benign, Cancer.
![Using LeNet5 to diagnose breast cancer][7]
Training
--------
80% of the data is randomly selected as the training set. Half of the remaining images (10%) is used as test data and the rest (10%) as validation data. Training the network for 20 rounds would have taken 24 hours on an "Intel Core i5 4210U CPU, Windows 10 x64", but using my GPU (NVIDIA GeForce 840M) reduced this to about an hour. (Thanks to the Wolfram team!) However, training stopped sooner due to over-fitting. Applying the test set to the network resulted in this confusion matrix:
![Breast cancer diagnosis][8]
Issues and Further Improvement
-------
The system does well on Normal data, but not on the Cancer and Benign cases. I see two major problems:
1. Assume an extreme case: the network estimates a 51% chance of Normal and 49% of Cancer. The encoding layer labels it Normal. A radiologist usually asks for further tests if there is the slightest chance of a non-normal finding.
2. Assume another extreme case: the network estimates 34% Normal and 33% for each of Benign and Cancer. Again it will be labeled Normal.
It would be better to have a network that estimates the chance of being non-normal and warns the patient if that chance exceeds a threshold (20%, 30%, or whatever). If so, the case should then be passed to a second network that decides whether it is Cancer or Benign.
After all, more data is always a mercy.
You can find the data sets, network file and notebook [here][9]. You have to put the MX files in $HomeDirectory (usually the Documents folder for Windows users) to evaluate it.
[1]: http://community.wolfram.com/groups/-/m/t/908615
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Breast-Cancer-Statistics-Infographic1-520x245.png&userId=900427
[3]: http://www.cancercenter.com/community/infographics/
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=A_0578_1.RIGHT_MLO.LJPEG.1_highpass.gif&userId=900427
[5]: https://www.cancer.org/cancer/breast-cancer/screening-tests-and-early-detection.html
[6]: http://marathon.csee.usf.edu/Mammography/Database.html
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3493Lenet-Breast.PNG&userId=900427
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ConfusionMatrix.PNG&userId=900427
[9]: https://www.dropbox.com/sh/0ysjmu7tnlaflxz/AAALbrPEgnTCDbpGN_4drxOka?dl=0

*Iman Nazari, 2017-06-15T12:37:33Z*

## [✓] Convert a constant to a variable?
http://community.wolfram.com/groups/-/m/t/1122133
Hi. I have an expression like

    f := a x

or

    f[x_] := a x

and next I have to put

    f[x_, a_] := f

or

    f[x_, a_] := f
but it doesn't work.

*deimos1990, 2017-06-17T17:38:29Z*

## Use Solve with a condition of a limit x is much less than D?
http://community.wolfram.com/groups/-/m/t/1106375
Greetings,
I am working on a very complicated equation that needs to be solved. To simplify the function I take a limit in which x is much less than D (x << D). I cannot find a much-less/much-greater condition in Mathematica. I need to solve the equation symbolically, and I know that I should get four solutions. By taking the limit manually I got something like A*x^4 - B*x^2 + C = 0. Then, using Solve, I was able to get four solutions, but with coefficients that were very long. (In manual calculations I was able to get shorter coefficients.)
How do I solve an equation with a much-less condition?
Just for the sake of example, let's say I want to solve

    f(x) = sqrt( D^2 - ( x - z )^2 ) + x * cos(a) + x + z

which after the limit becomes

    f(x) = sqrt( D^2 - z^2 ) + x * cos(a) + x + z
Is there a command in Mathematica to selectively simplify/factor/expand some terms, instead of all terms in an equation?

*Adam Szewczyk, 2017-05-25T01:56:46Z*

## Monad code generation and extension
http://community.wolfram.com/groups/-/m/t/1126923
## Introduction
This document aims to introduce monadic programming in Mathematica / Wolfram Language (WL) in a concise and code-direct manner. The core of the monad codes discussed is simple, derived from the fundamental principles of Mathematica / WL.
The usefulness of the monadic programming approach manifests in multiple ways. Here are a few we are interested in:
1) easy to construct, read, and modify sequences of commands (pipelines),
2) easy to program polymorphic behaviour,
3) easy to program context utilization.
Speaking informally,
- Monad programming provides an interface that allows interactive, dynamic creation and change of sequentially structured computations with polymorphic and context-aware behavior.
The theoretical background assumed in this document is given in the Wikipedia article on monadic programming. The code in this document is based on the primary monad definition given in [Wk1, H3], based on the ["Kleisli triple"](https://en.wikipedia.org/wiki/Kleisli_category) and used in Haskell.
The general monad structure can be seen as:
1) a software design pattern;
2) a fundamental programming construct (similar to class in object-oriented programming);
3) an interface for software types to have implementations of.
In this document we treat the monad structure as a [design pattern](https://en.wikipedia.org/wiki/Software_design_pattern), \[[Wk3](https://en.wikipedia.org/wiki/Software_design_pattern)\]. (After reading \[H3\], point 2 becomes more obvious. A similarly minimalistic approach to [Object-oriented Design Patterns](https://en.wikipedia.org/wiki/Design_Patterns) is given in \[[AA1](https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/Implementation-of-Object_Oriented-Programming-Design-Patterns-in-Mathematica.md)\].)
We do not deal with types for monads explicitly, we generate code for monads instead. One reason for this is the "monad design pattern" perspective; another one is that in Mathematica / WL the notion of algebraic data type is not needed -- pattern matching comes from the core "book of replacement rules" principle.
The rest of the document is organized as follows.
**1.** *Fundamental sections*
The section "What is a monad?" gives the necessary definitions. The section "The basic Maybe monad" shows how to program a monad from scratch in Mathematica / WL. The section "Extensions with polymorphic behavior" shows how extensions of the basic monad functions can be made. (These three sections form a complete read on monadic programming; the rest of the document can be skipped.)
**2.** *Monadic programming in practice*
The section "Monad code generation" describes packages for generating monad code. The section "Flow control in monads" describes additional control flow functionalities. The section "General work-flow of monad code generation utilization" gives a general perspective on the use of monad code generation. The section "Software design with monadic programming" discusses (small scale) software design with monadic programming.
**3.** *Case study sections*
The case study sections "Contextual monad classification" and "Tracing monad pipelines" hopefully have interesting and engaging examples of monad code generation, extension, and utilization.
## What is a monad?
### The monad definition
In this document a monad is any set of a symbol $m$ and two operators *unit* and *bind* that adhere to the monad laws. (See the next sub-section.) The definition is taken from [Wk1] and [H3] and phrased in Mathematica / WL terms in this section. In order to be brief, we deliberately do not consider the equivalent monad definition based on *unit*, *join*, and *map* (also given in [H3].)
Here are operators for a monad associated with a certain symbol `M`:
1. monad *unit* function ("return" in Haskell notation) is `Unit[x_] := M[x]`;
2. monad *bind* function (">>=" in Haskell notation) is a rule like `Bind[M[x_], f_] := f[x]`, with `MatchQ[f[x], M[_]]` giving `True`.
Note that:
- the function `Bind` unwraps the content of `M[_]` and gives it to the function `f`;
- the functions $f_i$ are responsible to return results wrapped with the monad symbol `M`.
Here is an illustration formula showing a **monad pipeline**:
![Monad-formula-generic](http://imgur.com/JyNp2os.png)
From the definition and formula it should be clear that if the result of `Bind[M[x], f]` passes the test `MatchQ[f[x], _M]`, then that result is ready to be fed to the next binding operation in the monad's pipeline. Also, it is clear that the pipeline functionality is easy to program with `Fold`:
Fold[Bind, M[x], {f1, f2, f3}]
(* Bind[Bind[Bind[M[x], f1], f2], f3] *)
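As a minimal illustration of these definitions, the symbols can be made concrete. (The symbol `M` and the pipeline functions `f1`, `f2` below are ad hoc, introduced only for this sketch.)

```mathematica
(* ad hoc monad over the symbol M *)
Unit[x_] := M[x];
Bind[M[x_], f_] := f[x];

(* pipeline functions are responsible for wrapping their results in M *)
f1 = M[#^2] &;
f2 = M[# + 1] &;

Fold[Bind, Unit[3], {f1, f2}]
(* M[10] *)
```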
### The monad laws
The monad laws definitions are taken from
[[H1](https://wiki.haskell.org/Monad_laws)] and [H3]. In the monad laws given below the symbol "⟹" stands for the monad's binding operation and "↦" for a function in anonymous form.
Here is a table with the laws:
![laws-tbl](http://imgur.com/E4VEucD.png)
**Remark:** The monad laws are satisfied for every symbol in Mathematica / WL with `List` being the unit operation and `Apply` being the binding operation.
![laws-tbl-2](http://imgur.com/FR6S2Fu.png)
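As a quick check of that remark, the laws can be verified for `List` (unit) and `Apply` (bind) with ad hoc functions `f` and `g` that wrap their results in `List`:

```mathematica
f[x_] := {x + 1};
g[x_] := {2 x};

Apply[f, List[3]] === f[3]              (* left identity: True *)
Apply[List, {3}] === {3}                (* right identity: True *)
Apply[g, Apply[f, {3}]] ===
 Apply[Apply[g, f[#]] &, {3}]           (* associativity: True *)
```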
### Expected monadic programming features
Looking at formula (1) -- and having certain programming experiences -- we can expect the following features when using monadic programming.
- Computations that can be expressed with monad pipelines are easy to construct and read.
- By programming the binding function we can tuck-in a variety of monad behaviours -- this is the so called "programmable semicolon" feature of monads.
- Monad pipelines can be constructed with `Fold`, but with suitable definitions of infix operators like `DoubleLongRightArrow` (⟹) we can produce code that resembles the pipeline in formula (1).
- A monad pipeline can have polymorphic behaviour by overloading the signatures of $f_i$ (and if we have to, `Bind`.)
These points are clarified below. For more complete discussions see [[Wk1](https://en.wikipedia.org/wiki/Monad_(functional_programming))] or [H3].
## The basic Maybe monad
It is fairly easy to program the basic monad Maybe discussed in [Wk1].
The goal of the Maybe monad is to provide easy exception handling in a sequence of chained computational steps. If one of the computation steps fails, then the whole pipeline returns a designated failure symbol, say `None`; otherwise the result after the last step is wrapped in another designated symbol, say `Maybe`.
Here is the special version of the generic pipeline formula (1) for the Maybe monad:
![Monad-formula-maybe](http://imgur.com/DRNAhPG.png)
Here is the minimal code to get a functional Maybe monad (for a more detailed exposition of code and explanations see [AA7]):
MaybeUnitQ[x_] := MatchQ[x, None] || MatchQ[x, Maybe[___]];
MaybeUnit[None] := None;
MaybeUnit[x_] := Maybe[x];
MaybeBind[None, f_] := None;
MaybeBind[Maybe[x_], f_] :=
Block[{res = f[x]}, If[FreeQ[res, None], res, None]];
MaybeEcho[x_] := Maybe@Echo[x];
MaybeEchoFunction[f___][x_] := Maybe@EchoFunction[f][x];
MaybeOption[f_][xs_] :=
Block[{res = f[xs]}, If[FreeQ[res, None], res, Maybe@xs]];
In order to write the code in pipeline form, let us give definitions to a suitable infix operator (like "⟹") that uses `MaybeBind`:
DoubleLongRightArrow[x_?MaybeUnitQ, f_] := MaybeBind[x, f];
DoubleLongRightArrow[x_, y_, z__] :=
DoubleLongRightArrow[DoubleLongRightArrow[x, y], z];
Here is an example of a Maybe monad pipeline using the definitions so far:
data = {0.61, 0.48, 0.92, 0.90, 0.32, 0.11};
MaybeUnit[data]⟹(* lift data into the monad *)
(Maybe@ Join[#, RandomInteger[8, 3]] &)⟹(* add more values *)
MaybeEcho⟹(* display current value *)
(Maybe @ Map[If[# < 0.4, None, #] &, #] &)(* map values that are too small to None *)
(* {0.61,0.48,0.92,0.9,0.32,0.11,4,4,0}
None *)
The result is `None` because:
1. the data has a number that is too small, and
2. the definition of `MaybeBind` stops the pipeline aggressively using a `FreeQ[_,None]` test.
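For contrast, here is a sketch of the same mapping step over data that passes the test; since no step produces `None`, the result stays wrapped in `Maybe`:

```mathematica
MaybeUnit[{0.61, 0.48, 0.92}]⟹
  (Maybe@Map[If[# < 0.4, None, #] &, #] &)
(* Maybe[{0.61, 0.48, 0.92}] *)
```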
### Monad laws verification
Let us convince ourselves that the current definition of `MaybeBind` gives a monad.
The verification is straightforward to program and shows that the implemented Maybe monad adheres to the monad laws.
![Monad-laws-table-Maybe](http://imgur.com/N2PV9sY.png)
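A sketch of such a verification, with ad hoc pipeline functions `f` and `g` that each wrap their result in `Maybe`:

```mathematica
f = Maybe[#^2] &;
g = Maybe[# + 1] &;

MaybeBind[MaybeUnit[3], f] === f[3]           (* left identity: True *)
MaybeBind[Maybe[3], MaybeUnit] === Maybe[3]   (* right identity: True *)
MaybeBind[MaybeBind[Maybe[3], f], g] ===
 MaybeBind[Maybe[3], MaybeBind[f[#], g] &]    (* associativity: True *)
```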
## Extensions with polymorphic behavior
We can see from formulas (1) and (2) that the monad codes can be easily extended through overloading the pipeline functions.
For example, extending the Maybe monad to handle `Dataset` objects is fairly easy and straightforward.
Here is the formula of the Maybe monad pipeline extended with `Dataset` objects:
![Monad-Maybe-formula-Dataset](http://imgur.com/aWdtR6B.png)
Here is an example of a polymorphic function definition for the Maybe monad:
MaybeFilter[filterFunc_][xs_] := Maybe@Select[xs, filterFunc[#] &];
MaybeFilter[critFunc_][xs_Dataset] := Maybe@xs[Select[critFunc]];
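With these two definitions the same pipeline step works on both data structures. A small sketch (the data below is ad hoc, and the printed `Dataset` form is abbreviated):

```mathematica
MaybeUnit[{1, 2, 3, 4}]⟹MaybeFilter[EvenQ]
(* Maybe[{2, 4}] *)

MaybeUnit[Dataset[{<|"a" -> 1|>, <|"a" -> 2|>}]]⟹
  MaybeFilter[#a == 2 &]
(* Maybe[Dataset[...]] holding the single row a -> 2 *)
```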
See [AA7] for more detailed examples of polymorphism in monadic programming with Mathematica / WL.
A complete discussion can be found in [H3]. (The main message of [H3] is the poly-functional and polymorphic properties of monad implementations.)
### Polymorphic monads in R's dplyr
The R package [dplyr](http://dplyr.tidyverse.org), [[R1](https://github.com/tidyverse/dplyr)], has implementations centered around monadic polymorphic behavior. The command pipelines based on [dplyr](http://dplyr.tidyverse.org) can work on R data frames, SQL tables, and Spark data frames without changes.
Here is a diagram of a typical work-flow with dplyr:
[![dplyr-pipeline](http://i.imgur.com/kqch4eUl.jpg)](http://i.imgur.com/kqch4eU.jpg)
The diagram shows how a pipeline made with dplyr can be re-run (or reused) for data stored in different data structures.
## Monad code generation
We can see monad code definitions like the ones for Maybe as some sort of initial templates for monads that can be extended in specific ways depending on their applications. Mathematica / WL can easily provide code generation for such templates; (see [[WL1](https://mathematica.stackexchange.com/a/2352/34008)]). As it was mentioned in the introduction, we do not deal with types for monads explicitly, we generate code for monads instead.
This section gives examples with packages that generate monad code. The case study sections have examples of packages that utilize generated monad code.
### Maybe monads code generation
The package [[AA2](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MaybeMonadCodeGenerator.m)] provides a Maybe code generator that takes as an argument a prefix for the generated functions. (Monad code generation is discussed further in the section "General work-flow of monad code generation utilization".)
Here is an example:
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MaybeMonadCodeGenerator.m"]
GenerateMaybeMonadCode["AnotherMaybe"]
data = {0.61, 0.48, 0.92, 0.90, 0.32, 0.11};
AnotherMaybeUnit[data]⟹(* lift data into the monad *)
(AnotherMaybe@Join[#, RandomInteger[8, 3]] &)⟹(* add more values *)
AnotherMaybeEcho⟹(* display current value *)
(AnotherMaybe @ Map[If[# < 0.4, None, #] &, #] &)(* map values that are too small to None *)
(* {0.61,0.48,0.92,0.9,0.32,0.11,8,7,6}
AnotherMaybeBind: Failure when applying: Function[AnotherMaybe[Map[Function[If[Less[Slot[1], 0.4], None, Slot[1]]], Slot[1]]]]
None *)
We see that we get the same result as above (`None`) and a message reporting the failure.
### State monads code generation
The State monad is also basic and its programming in Mathematica / WL is not that difficult. (See [[AA3](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m)].)
Here is the special version of the generic pipeline formula (1) for the State monad:
![Monad-formula-State](http://imgur.com/rbXWydC.png)
Note that since the State monad pipeline carries both a value and a state, it is a good idea to have functions that manipulate them separately.
For example, we can have functions for context modification and context retrieval. (These are done in [[AA3](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m)].)
This loads the package [[AA3](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m)]:
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/StateMonadCodeGenerator.m"]
This generates the State monad for the prefix "StMon":
GenerateStateMonadCode["StMon"]
The following StMon pipeline code starts with a random matrix and then replaces numbers in the current pipeline value according to a threshold parameter kept in the context. Functions for context deposit and retrieval are invoked several times.
SeedRandom[34]
StMonUnit[RandomReal[{0, 1}, {3, 2}], <|"mark" -> "TooSmall", "threshold" -> 0.5|>]⟹
StMonEchoValue⟹
StMonEchoContext⟹
StMonAddToContext["data"]⟹
StMonEchoContext⟹
(StMon[#1 /. (x_ /; x < #2["threshold"] :> #2["mark"]), #2] &)⟹
StMonEchoValue⟹
StMonRetrieveFromContext["data"]⟹
StMonEchoValue⟹
StMonRetrieveFromContext["mark"]⟹
StMonEchoValue;
(* value: {{0.789884,0.831468},{0.421298,0.50537},{0.0375957,0.289442}}
context: <|mark->TooSmall,threshold->0.5|>
context: <|mark->TooSmall,threshold->0.5,data->{{0.789884,0.831468},{0.421298,0.50537},{0.0375957,0.289442}}|>
value: {{0.789884,0.831468},{TooSmall,0.50537},{TooSmall,TooSmall}}
value: {{0.789884,0.831468},{0.421298,0.50537},{0.0375957,0.289442}}
value: TooSmall *)
## Flow control in monads
We can implement dedicated functions for governing the pipeline flow in a monad.
Let us look at a breakdown of these kinds of functions using the State monad StMon generated above.
### Optional acceptance of a function result
A basic and simple pipeline control function is for optional acceptance of a result -- if applying $f$ produces a failure, then we ignore its result (and keep the current pipeline value.)
Here is an example with `StMonOption` :
SeedRandom[34]
StMonUnit[RandomReal[{0, 1}, 5]]⟹
StMonEchoValue⟹
StMonOption[If[# < 0.3, None] & /@ # &]⟹
StMonEchoValue
(* value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
StMon[{0.789884, 0.831468, 0.421298, 0.50537, 0.0375957}, <||>] *)
Without `StMonOption` we get failure:
SeedRandom[34]
StMonUnit[RandomReal[{0, 1}, 5]]⟹
StMonEchoValue⟹
(If[# < 0.3, None] & /@ # &)⟹
StMonEchoValue
(* value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
StMonBind: Failure when applying: Function[Map[Function[If[Less[Slot[1], 0.3], None]], Slot[1]]]
None *)
### Conditional execution of functions
It is natural to want to have the ability to choose a pipeline function application based on a condition.
This can be done with the functions `StMonIfElse` and `StMonWhen`.
SeedRandom[34]
StMonUnit[RandomReal[{0, 1}, 5]]⟹
StMonEchoValue⟹
StMonIfElse[
Or @@ (# < 0.4 & /@ #) &,
(Echo["A too small value is present.", "warning:"];
StMon[Style[#1, Red], #2]) &,
StMon[Style[#1, Blue], #2] &]⟹
StMonEchoValue
(* value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
warning: A too small value is present.
value: {0.789884,0.831468,0.421298,0.50537,0.0375957}
StMon[{0.789884,0.831468,0.421298,0.50537,0.0375957},<||>] *)
**Remark:** Using flow control functions like `StMonIfElse` and `StMonWhen` with appropriate messages is a better way of handling computations that might fail. The silent failures handling of the basic Maybe monad is convenient only in a small number of use cases.
### Iterative functions
The last group of pipeline flow control functions we consider comprises iterative functions that provide the functionalities of `Nest`, `NestWhile`, `FoldList`, etc.
In [[AA3](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m)] these functionalities are provided through the function StMonIterate.
Here is a basic example using `Nest` that corresponds to `Nest[#+1&,1,3]`:
StMonUnit[1]⟹StMonIterate[Nest, (StMon[#1 + 1, #2]) &, 3]
(* StMon[4, <||>] *)
Consider this command that uses the full signature of `NestWhileList`:
NestWhileList[# + 1 &, 1, # < 10 &, 1, 4]
(* {1, 2, 3, 4, 5} *)
Here is the corresponding StMon iteration code:
StMonUnit[1]⟹StMonIterate[NestWhileList, (StMon[#1 + 1, #2]) &, (#[[1]] < 10) &, 1, 4]
(* StMon[{1, 2, 3, 4, 5}, <||>] *)
Here is another results accumulation example with `FixedPointList` :
StMonUnit[1.]⟹
StMonIterate[FixedPointList, (StMon[(#1 + 2/#1)/2, #2]) &]
(* StMon[{1., 1.5, 1.41667, 1.41422, 1.41421, 1.41421, 1.41421}, <||>] *)
When the functions `NestList`, `NestWhileList`, `FixedPointList` are used with `StMonIterate` their results can be stored in the context. Here is an example:
StMonUnit[1.]⟹
StMonIterate[FixedPointList, (StMon[(#1 + 2/#1)/2, #2]) &, "fpData"]
(* StMon[{1., 1.5, 1.41667, 1.41422, 1.41421, 1.41421, 1.41421}, <|"fpData" -> {StMon[1., <||>],
StMon[1.5, <||>], StMon[1.41667, <||>], StMon[1.41422, <||>], StMon[1.41421, <||>],
StMon[1.41421, <||>], StMon[1.41421, <||>]} |>] *)
More elaborate tests can be found in [[AA8](https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/GeneratedStateMonadTests.m)].
### Partial pipelines
Because of the associativity law we can design pipeline flows based on functions made of "sub-pipelines."
fEcho = Function[{x, ct}, StMonUnit[x, ct]⟹StMonEchoValue⟹StMonEchoContext];
fDIter = Function[{x, ct},
StMonUnit[y^x, ct]⟹StMonIterate[FixedPointList, StMonUnit@D[#, y] &, 20]];
StMonUnit[7]⟹fEcho⟹fDIter⟹fEcho;
(*
value: 7
context: <||>
value: {y^7,7 y^6,42 y^5,210 y^4,840 y^3,2520 y^2,5040 y,5040,0,0}
context: <||> *)
## General work-flow of monad code generation utilization
With the abilities to generate and utilize monad codes it is natural to consider the following work-flow. (Also shown in the diagram below.)
1. Come up with an idea that can be expressed with monadic programming.
2. Look for suitable monad implementation.
3. If there is no such implementation, make one (or two, or five.)
4. Having a suitable monad implementation, generate the monad code.
5. Implement additional pipeline functions addressing envisioned use cases.
6. Start making pipelines for the problem domain of interest.
7. Are the pipelines satisfactory? If not, go to **5**. (Or **2**.)
[![make-monads](http://imgur.com/9iinzkzl.jpg)](http://imgur.com/9iinzkz.jpg)
### Monad templates
The template nature of the general monads can be exemplified with the group of functions in the package [StateMonadCodeGenerator.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m), [[AA3](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m)].
They are in five groups:
1. base monad functions (unit testing, binding),
2. display of the value and context,
3. context manipulation (deposit, retrieval, modification),
4. flow governing (optional new value, conditional function application, iteration),
5. other convenience functions.
We can say that all monad implementations will have their own versions of these groups of functions. The more specialized monads will have functions specific to their intended use. Such special monads are discussed in the case study sections.
## Software design with monadic programming
The application of monadic programming to a particular problem domain is very similar to designing a software framework or designing and implementing a Domain Specific Language (DSL).
The answers to the question "When to use monadic programming?" can form a large list. This section provides only a couple of general, personal viewpoints on monadic programming in software design and architecture. The principles of monadic programming can be used to build systems from scratch (like Haskell and Scala.) Here we discuss making specialized software with or within already existing systems.
### Framework design
Software framework design is about architectural solutions that capture the commonality and variability in a problem domain in such a way that:
1) significant speed-up can be achieved when making new applications, and
2) a set of policies can be imposed on the new applications.
The rigidness of the framework provides and supports its flexibility -- the framework has a backbone of rigid parts and a set of "hot spots" where new functionalities are plugged-in.
Usually Object-Oriented Programming (OOP) frameworks provide inversion of control -- the general work-flow is already established, only parts of it are changed. (This is characterized with "leave the driving to us" and "don't call us, we will call you.")
The point of utilizing monadic programming is to be able to easily create different new work-flows that share certain features. (The end user is the driver, on certain rail paths.)
In my opinion making a software framework of small to moderate size with monadic programming principles would produce a library of functions each with polymorphic behaviour that can be easily sequenced in monadic pipelines. This can be contrasted with OOP framework design in which we are more likely to end up with backbone structures that (i) are static and tree-like, and (ii) are extended or specialized by plugging-in relevant objects. (Those plugged-in objects themselves can be trees, but hopefully short ones.)
### DSL development
Given a problem domain the general monad structure can be used to shape and guide the development of DSLs for that problem domain.
Generally, in order to make a DSL we have to choose the language syntax and grammar. Using monadic programming the syntax and grammar commands are clear. (The monad pipelines are the commands.) What is left is "just" the choice of particular functions and their implementations.
Another way to develop such a DSL is through a grammar of natural language commands. Generally speaking, just designing the grammar -- without developing the corresponding interpreters -- would be very helpful in figuring out the components at play. Monadic programming meshes very well with this approach and applying the two approaches together can be very fruitful.
## Contextual monad classification *(case study)*
In this section we show an extension of the State monad into a monad aimed at machine learning classification work-flows.
### Motivation
We want to provide a DSL for doing machine learning classification tasks that allows us:
1) to do basic summarization and visualization of the data;
2) to control splitting of the data into training and testing sets;
3) to apply the built-in classifiers;
4) to apply classifier ensembles (see [[AA9](https://github.com/antononcube/MathematicaForPrediction/blob/master/ClassifierEnsembles.m)] and [[AA10](https://mathematicaforprediction.wordpress.com/2016/10/15/roc-for-classifier-ensembles-bootstrapping-damaging-and-interpolation/)]);
5) to evaluate classifier performances with standard measures; and
6) to make ROC plots.
Also, we want the DSL design to provide clear directions on how to add (hook-up or plug-in) new functionalities.
The package [[AA4](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m)] discussed below provides such a DSL through monadic programming.
### Package and data loading
This loads the package [[AA4](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m)]:
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicContextualClassification.m"]
This gets some test data (the Titanic dataset):
dataName = "Titanic";
ds = Dataset[Flatten@*List @@@ ExampleData[{"MachineLearning", dataName}, "Data"]];
varNames = Flatten[List @@ ExampleData[{"MachineLearning", dataName}, "VariableDescriptions"]];
varNames = StringReplace[varNames, "passenger" ~~ (WhitespaceCharacter ..) -> ""];
If[dataName == "FisherIris", varNames = Most[varNames]];
ds = ds[All, AssociationThread[varNames -> #] &];
### Monad design
The package [[AA4](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m)] provides functions for the monad **ClCon** -- the functions implemented in [[AA4](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m)] have the prefix "ClCon".
The classifier contexts are Association objects. The pipeline values can have the form:
ClCon[ val, context:(_String|_Association) ]
The ClCon specific monad functions deposit or retrieve values from the context with the keys: "trainData", "testData", "classifier". The general idea is that if the current value of the pipeline cannot provide all arguments for a ClCon function, then the needed arguments are taken from the context. If that fails, then a message is issued.
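That fallback-to-context convention can be sketched with a hypothetical accessor. (This illustrates the dispatch pattern only; the function `TakeClassifier` is made up for this sketch and is not the actual [AA4] implementation.)

```mathematica
(* hypothetical sketch: take the classifier from the pipeline value if present,
   otherwise fall back to the "classifier" key of the context *)
TakeClassifier[ClCon[cf_ClassifierFunction, ct_]] := cf;
TakeClassifier[ClCon[_, ct_Association]] /; KeyExistsQ[ct, "classifier"] :=
  ct["classifier"];
TakeClassifier[___] := (Echo["No classifier found.", "TakeClassifier:"]; None);
```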
This is illustrated with the following example of a pipeline with comments.
[![ClCon-basic-example](http://imgur.com/98e60pN.png)](http://imgur.com/98e60pN.png)
The pipeline and results above demonstrate polymorphic behaviour over the classifier variable in the context:
different functions are used if that variable is a `ClassifierFunction` object or an association of
named `ClassifierFunction` objects.
Note the demonstrated granularity and sequentiality of the operations coming from using a monad structure.
With those kind of operations it would be easy to make interpreters for natural language DSLs.
### Another usage example
The monadic pipeline in this example goes through several stages: data summary, classifier training, evaluation, acceptance test, and if the results are rejected a new classifier is made with a different algorithm using the same data splitting. The context keeps track of the data and its splitting. That allows the conditional classifier switch to be concisely specified.
First let us define a function that takes a `Classify` method as an argument and makes a classifier and calculates performance measures.
ClSubPipe[method_String] :=
Function[{x, ct},
ClConUnit[x, ct]⟹
ClConMakeClassifier[method]⟹
ClConEchoFunctionContext["classifier:",
ClassifierInformation[#["classifier"], Method] &]⟹
ClConEchoFunctionContext["training time:", ClassifierInformation[#["classifier"], "TrainingTime"] &]⟹
ClConClassifierMeasurements[{"Accuracy", "Precision", "Recall"}]⟹
ClConEchoValue⟹
ClConEchoFunctionContext[
ClassifierMeasurements[#["classifier"],
ClConToNormalClassifierData[#["testData"]], "ROCCurve"] &]
];
Using the sub-pipeline function `ClSubPipe` we make the outlined pipeline.
SeedRandom[12]
res =
ClConUnit[ds]⟹
ClConSplitData[0.7]⟹
ClConEchoFunctionValue["summaries:", ColumnForm[Normal[RecordsSummary /@ #]] &]⟹
ClConEchoFunctionValue["xtabs:",
MatrixForm[CrossTensorate[Count == varNames[[1]] + varNames[[-1]], #]] & /@ # &]⟹
ClSubPipe["LogisticRegression"]⟹
(If[#1["Accuracy"] > 0.8,
Echo["Good accuracy!", "Success:"]; ClConFail,
Echo["Make a new classifier", "Inaccurate:"];
ClConUnit[#1, #2]] &)⟹
ClSubPipe["RandomForest"];
[![ClCon-pipeline-2-output](http://imgur.com/ffXPNMvl.png)](http://imgur.com/ffXPNMv.png)
## Tracing monad pipelines *(case study)*
The monadic implementations in the package [MonadicTracing.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m), [[AA5](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m)] allow tracking of the pipeline execution of functions within other monads.
The primary reason for developing the package was the desire to have the ability to print a tabulated trace of code and comments using the usual monad pipeline notation. (I.e. without conversion to strings etc.)
It turned out that by programming [MonadicTracing.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m) I came up with a [monad transformer](https://en.wikipedia.org/wiki/Monad_transformer); see [[Wk2](https://en.wikipedia.org/wiki/Monad_transformer)], [H2].
### Package loading
This loads the package [[AA5](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m)]:
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]
### Usage example
This generates a Maybe monad to be used in the example (for the prefix "Perhaps"):
GenerateMaybeMonadCode["Perhaps"]
GenerateMaybeMonadSpecialCode["Perhaps"]
In the following example we can see that pipeline functions of the Perhaps monad are interleaved with comment strings.
Producing the grid of functions and comments happens "naturally" with the monad function `TraceMonadEchoGrid`.
data = RandomInteger[10, 15];
TraceMonadUnit[PerhapsUnit[data]]⟹"lift to monad"⟹
TraceMonadEchoContext⟹
PerhapsFilter[# > 3 &]⟹"filter current value"⟹
PerhapsEcho⟹"display current value"⟹
PerhapsWhen[#[[3]] > 3 &,
PerhapsEchoFunction[Style[#, Red] &]]⟹
(Perhaps[#/4] &)⟹
PerhapsEcho⟹"display current value again"⟹
TraceMonadEchoGrid[Grid[#, Alignment -> Left] &];
Note that:
1. the tracing is initiated by just using `TraceMonadUnit`;
2. pipeline functions (actual code) and comments are interleaved;
3. putting a comment string after a pipeline function is optional.
Another example is the ClCon pipeline in the sub-section "Monad design" in the previous section.
## Summary
This document presents a style of using monadic programming in Wolfram Language (Mathematica). The style has some shortcomings, but it definitely provides convenient features for day-to-day programming and in coming up with architectural designs.
The style is based on WL's basic language features. As a consequence it is fairly concise and produces light overhead.
Ideally, the packages for the code generation of the basic Maybe and State monads would serve as starting points for other more general or more specialized monadic programs.
## References
### Monadic programming
\[Wk1\] Wikipedia entry: [Monad (functional programming)](https://en.wikipedia.org/wiki/Monad_(functional_programming)), URL: [https://en.wikipedia.org/wiki/Monad_(functional_programming)](https://en.wikipedia.org/wiki/Monad_(functional_programming)) .
\[Wk2\] Wikipedia entry: [Monad transformer](https://en.wikipedia.org/wiki/Monad_transformer), URL: [https://en.wikipedia.org/wiki/Monad_transformer](https://en.wikipedia.org/wiki/Monad_transformer) .
\[Wk3\] Wikipedia entry: [Software Design Pattern](https://en.wikipedia.org/wiki/Software_design_pattern), URL: [https://en.wikipedia.org/wiki/Software_design_pattern](https://en.wikipedia.org/wiki/Software_design_pattern) .
\[H1\] Haskell.org article: [Monad laws,](https://wiki.haskell.org/Monad_laws) URL: [https://wiki.haskell.org/Monad_laws](https://wiki.haskell.org/Monad_laws).
\[H2\] Sheng Liang, Paul Hudak, Mark Jones, ["Monad transformers and modular interpreters"](http://haskell.cs.yale.edu/wp-content/uploads/2011/02/POPL96-Modular-interpreters.pdf), (1995), Proceedings of the 22nd ACM SIGPLAN-SIGACT symposium on Principles of programming languages. New York, NY: ACM. pp. 333–343. doi:10.1145/199448.199528.
\[H3\] Philip Wadler, ["The essence of functional programming"](https://page.mi.fu-berlin.de/scravy/realworldhaskell/materialien/the-essence-of-functional-programming.pdf), (1992), 19'th Annual Symposium on Principles of Programming Languages, Albuquerque, New Mexico, January 1992.
### R
\[R1\] Hadley Wickham et al., [dplyr: A Grammar of Data Manipulation](https://github.com/tidyverse/dplyr), (2014), [tidyverse at GitHub](https://github.com/tidyverse), URL: [https://github.com/tidyverse/dplyr](https://github.com/tidyverse/dplyr) .
(See also, [http://dplyr.tidyverse.org](http://dplyr.tidyverse.org) .)
### Mathematica / Wolfram Language
\[WL1\] Leonid Shifrin, "Metaprogramming in Wolfram Language", (2012), [Mathematica StackExchange](https://mathematica.stackexchange.com). (Also posted at [Wolfram Community](http://community.wolfram.com) in 2017.)
URL of [the Mathematica StackExchange answer](https://mathematica.stackexchange.com/a/2352/34008): [https://mathematica.stackexchange.com/a/2352/34008](https://mathematica.stackexchange.com/a/2352/34008) .
URL of [the Wolfram Community post](http://community.wolfram.com/groups/-/m/t/1121273): [http://community.wolfram.com/groups/-/m/t/1121273](http://community.wolfram.com/groups/-/m/t/1121273) .
### MathematicaForPrediction
\[AA1\] Anton Antonov, ["Implementation of Object-Oriented Programming Design Patterns in Mathematica"](https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/Implementation-of-Object_Oriented-Programming-Design-Patterns-in-Mathematica.md), (2016), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction), [https://github.com/antononcube/MathematicaForPrediction](https://github.com/antononcube/MathematicaForPrediction).
\[AA2\] Anton Antonov, [Maybe monad code generator Mathematica package](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MaybeMonadCodeGenerator.m), (2017), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MaybeMonadCodeGenerator.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MaybeMonadCodeGenerator.m) .
\[AA3\] Anton Antonov, [State monad code generator Mathematica package](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m), (2017), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m) .
\[AA4\] Anton Antonov, [Monadic contextual classification Mathematica package](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m), (2017), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m) .
\[AA5\] Anton Antonov, [Monadic tracing Mathematica package](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m), (2017), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m) .
\[AA6\] Anton Antonov, [MathematicaForPrediction utilities](https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m), (2014), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m) .
\[AA7\] Anton Antonov, ["Simple monadic programming"](https://github.com/antononcube/MathematicaForPrediction/blob/master/Documentation/Simple-monadic-programming.pdf), (2017), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
(Preliminary version, 40% done.)
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/Documentation/Simple-monadic-programming.pdf](https://github.com/antononcube/MathematicaForPrediction/blob/master/Documentation/Simple-monadic-programming.pdf) .
\[AA8\] Anton Antonov, [Generated State Monad Mathematica unit tests](https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/GeneratedStateMonadTests.m), (2017), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/GeneratedStateMonadTests.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/GeneratedStateMonadTests.m) .
\[AA9\] Anton Antonov, [Classifier ensembles functions Mathematica package](https://github.com/antononcube/MathematicaForPrediction/blob/master/ClassifierEnsembles.m), (2016), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/ClassifierEnsembles.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/ClassifierEnsembles.m) .
\[AA10\] Anton Antonov, ["ROC for classifier ensembles, bootstrapping, damaging, and interpolation"](https://mathematicaforprediction.wordpress.com/2016/10/15/roc-for-classifier-ensembles-bootstrapping-damaging-and-interpolation/), (2016), [MathematicaForPrediction at WordPress](https://mathematicaforprediction.wordpress.com).
URL: [https://mathematicaforprediction.wordpress.com/2016/10/15/roc-for-classifier-ensembles-bootstrapping-damaging-and-interpolation/](https://mathematicaforprediction.wordpress.com/2016/10/15/roc-for-classifier-ensembles-bootstrapping-damaging-and-interpolation/) .Anton Antonov2017-06-23T11:09:46ZTunnel a remote Wolfram Language kernel connection through ssh?
http://community.wolfram.com/groups/-/m/t/1126278
I am unable to connect to a remote kernel using the default procedure as documented in "[How to | Connect to a Remote Kernel][1]." It seems that an ssh connection is made to the remote server; the remote server then attempts to make a direct connection back to the client using an IP address and multiple dynamically assigned ports. This fails for a number of reasons. The client has a firewall. In addition, the client is connected over a VPN and routed via NAT, which makes it difficult to route back by IP address.
Creating a manual connection using the procedure as documented in "[Quick Answer How do I manually create a remote Wolfram Language kernel connection?][2]" also fails. In that case, I manually made an ssh connection to the remote system with -R flags for the remote ports. Unfortunately, the remote Wolfram instance still fails to connect to those localhost ports.
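Concretely, the shape of the manual setup I attempted is roughly the following (the port numbers are arbitrary and the kernel flags are my reading of the Quick Answer, so treat this as a sketch rather than a verified recipe):

    (* on the client: listen on two fixed TCPIP ports for the kernel's main link *)
    link = LinkCreate["51000@127.0.0.1,51001@127.0.0.1",
       LinkProtocol -> "TCPIP", LinkMode -> Listen];

    (* in a shell: reverse-forward those ports, then start the remote kernel, e.g.
         ssh -R 51000:127.0.0.1:51000 -R 51001:127.0.0.1:51001 user@remote
         wolfram -mathlink -linkmode Connect -linkprotocol TCPIP \
             -linkname "51000@127.0.0.1,51001@127.0.0.1"
    *)

Even with the tunnels apparently in place, the kernel never attaches to the link.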
Please let me know if you have any suggestions for connecting a Mathematica client and remote kernel through ssh.
Test System: Wolfram Mathematica 11.1.1 Kernel for Linux x86 (64-bit)
[1]: http://reference.wolfram.com/language/howto/ConnectToARemoteKernel.html
[2]: http://support.wolfram.com/kb/12495Donald Pellegrino2017-06-22T13:18:43ZWolfram|Alpha for HTC Vive or other VR devices?
http://community.wolfram.com/groups/-/m/t/879610
I had an idea while waiting for my own HTC Vive [(website)][1] to arrive regarding Wolfram|Alpha. If it could be implemented as a program for the Vive, it could make 3D plots (both across real numbers and the complex plane) easy to visualize, especially with its "Room Scale" capabilities. I know that the Vive was really made with games in mind, but I think that it could also be useful when used with Wolfram|Alpha. I'm not sure if this is the right place to post this idea, but hopefully the Wolfram|Alpha team will see this and consider it. Please share with me your thoughts, and hope this is seen by Wolfram|Alpha developers so they can see your ideas too.
[1]: http://www.htcvive.comCaelum Codicem2016-06-30T05:13:39ZVisual Interface to the Wolfram Language
http://community.wolfram.com/groups/-/m/t/1124967
Hi everyone.
I'm working on a visual interface to the Wolfram Language called [visX][1], and I'd like to ask what you all think of it.
Wolfram Language code can often be thought of as a set of blocks, each of which takes some inputs, does something, and produces an output. VisX lets you write WL code exactly this way - you draw a diagram, connecting blocks with links. For example, say you want to count how many times each digit (0 to 9) occurs in the first 30 digits of Pi. With text-based WL code, you'd write
digits = RealDigits[N[Pi, 30]][[1]]
Count[digits, #] & /@ Range[0, 9]
In visX, you'd draw this:
![digits of Pi][2]
I guess it's pretty self-explanatory. In addition to using built-in WL blocks, you can write your own, like the CountInList block. Normally, blocks just transform inputs to output, but the CountInList block is mapped over its input, which is indicated by the little brackets on the outside of its connection ports. (That's basically visual syntactic sugar for "/@" or Map.) The 4 in the upper-right corner indicates that the results shown inside this block are from the 4th pass through the map. The block with "digits" in it sets a variable, which is then referenced in the CountInList block.
You define blocks (which are basically functions) by just making an empty rectangle, dragging contents in, and wiring them together; then you can use copies of the block wherever you want. A change in any copy of the block will be reflected in all other copies. There's no real difference between defining a block and using it. Recursion can be specified by just including a copy of the block inside itself. Blocks can call other blocks in the same manner.
Just like regular WL code, visX blocks can be nested deeply, but with the visual interface, it's easy to zoom in and out. At any point, the UI will show you the right amount of detail for each block - sometimes no detail at all, sometimes its name and labels on its inputs, sometimes its actual contents (which can then be edited or further zoomed...).
visX is stand-alone software that runs locally on your machine, evaluates the diagram using your local Mathematica kernel, and receives the results and puts them back in the diagram. You can load data files using Import as usual.
One of the problems that I've seen with visual languages in the past is that while simple things are easy to do, the code quickly gets too complex to manage and the visual interface starts to get in the way. With the Wolfram Language in theory everything is an expression, and this can lead you to write functional-style programs which are easily thought of as a diagram, but that's not always the most natural way to express a computation. Sometimes you just need a little for loop. Consider calculating Fibonacci numbers. Start the sequence with 1, 1, ... then each element of the sequence is the sum of the previous two. Yes, you can write a recursive algorithm to do this, but most people just want to write a little for loop. In visX, you can do this (calculates the 6th Fibonacci number):
![embedded code][4]
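For reference, a plain text-based version of what that loop computes might look like this (my sketch, not visX output):

    (* iterative Fibonacci: start with 1, 1, then each element is the sum of the previous two *)
    Module[{a = 1, b = 1},
     Do[{a, b} = {b, a + b}, {4}];
     b]
    (* the 6th Fibonacci number: 8 *)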
I've tried to let you use blocks-and-links when that's the most natural thing (which is usually), and text-based code when that's better. Of course, you can mix them together however you want.
A second problem I've found with visual programming languages is that it can actually be much slower to use than writing out text, because you have to laboriously drag and drop every single block. Even simple algebraic expressions like
Sin[x]^2 + Cos[x]^2
2x^2 + 4x*y + 8y^2
would involve a lot of blocks because of all the Plus, Times, and Power blocks, as well as all the constants and symbols. With visX, you can enter Wolfram Language code snippets like those, and it will parse them and transform them into blocks which you can then insert into your diagram all at once and edit at will. This makes it much faster to get your idea onto the screen so that you can start evaluating it and developing it. I'm also working on the ability to take a visX block and give you back the Wolfram Language code that it represents.
The examples given here are simple, but of course you can use this interface for putting together a complex piece of code as well. I find it especially handy when building up a calculation with lots of intermediate results along the way, or to rapidly prototype an algorithm where I want to be able to easily switch the data flows around.
Does this project seem useful to anyone? I'd like to get some feedback - what do you think of it? Would you use it? For what?
If there's interest, I could do a small-scale alpha test in about a month from now.
More info at [visx.io][5].
-Nicholas Hoff
*edited to clarify block definitions and recursion*
[1]: http://visx.io
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=pi_digits_without_chrome.png&userId=1124239
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=pi_digits.png&userId=1124239
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=embeded_wl.png&userId=1124239
[5]: http://visx.ioNicholas Hoff2017-06-20T10:02:52ZAbout package initialization
http://community.wolfram.com/groups/-/m/t/982404
I would like to share some thoughts about package initialization. As always, I post these ideas in the hope to stimulate discussion between package developers, hopefully get some feedback and learn something. It is more efficient if we learn from each other than if each of us has to go through the same stumbling blocks alone.
## Introduction
Some packages only contain definitions. Complex packages typically contain some sort of initialization—code that evaluates when the package loads. There are many different reasons why this may be necessary:
- Load external dependencies such as LibraryLink functions, installable MathLink programs, etc.
- Load and process configuration files
- Load data files (precomputed data, caches, etc.)
- Verify and set up some external environment, e.g. what if the package needs to call external processes using RunProcess, or even modify the system `PATH` or `LD_LIBRARY_PATH` to make them work? (MaTeX needs something similar)
## Things to keep in mind when the package has initialization code
Initialization code can affect loading performance. A polished, high-quality package loads without delay. This is especially important on slow platforms such as the Raspberry Pi computer. The currently released version of my IGraph/M package takes 30-60 seconds to load on the Raspberry Pi, which is just unacceptable. The development version fixes this.
Initialization code can cause serious problems when the package is loaded on kernel startup, not to mention slowing down kernel start. There are certain fundamental functions which [do not work during kernel initialization][1], such as `Throw` and `Catch`. These are dependencies of many other common functions. In the end, even very bad things can happen: if `Import` is used during kernel initialization, not only will it fail once, it will also stop working for the rest of the session.
How might a package get loaded during kernel startup? A user may load it in their kernel `init.m` file. Or they may place it in their `$UserBaseDirectory/Autoload` directory. Or if the package is distributed as a paclet, [Mathematica will offer a GUI setting to load it on startup][2]. According to my experience, Murphy's laws apply everywhere: if you give your users even the slightest chance to mess up something, one of them certainly will.
Of course, these are minor issues. You do not need to solve them to write a useful package. But if you are aiming to create a high-quality package that will be used by many people, it is good to keep these things in mind.
## Strategies for robust initialization
This is the section I would like some feedback on. My goal is to make initialization *fast* and *robust* so it is safe to run during kernel startup.
### Prefer simple functions to complex ones
Try to use functions which run fast and are safe during initialization.
For reading and writing configuration files, prefer `Get` instead of `Import[..., "Package"]` and `Put` instead of `Export[..., "Package"]`.
For reading text files, prefer `ReadString` instead of `Import[..., "String"]`.
Generally, use low-level IO functions, such as `Read`, `ReadList`, etc., and do not forget to `Close` the stream.
`ReadString` is new in version 10.0, and triggers loading ProcessLink (which contains functions like `RunProcess`). This means that we cannot use it in v9.0, and also its first use is slow because it has to load some dependencies. A faster and more compatible alternative is
readString[file_] :=
 Module[{stream, res},
  stream = OpenRead[file];
  res = Read[stream, Record, RecordSeparators -> {}];
  Close[stream];
  res
 ]
### Delay initialization when possible
This is the most effective and most general tool when dealing with initialization. The usual idiom to define a symbol `sym` is
sym := sym = computeValue[];
Then `sym` will be computed the first time it is *used* and not when the package is *loaded*. `sym` could be a variable holding persistent configuration that is read from a file, precomputed data, or even a LibraryLink function:
libFun := libFun = LibraryFunctionLoad[...];
This idiom is extremely convenient because it can be applied to existing code with a minimal change.
### Use scheduled tasks to delay-load
I learnt this trick from [@WReach on StackExchange](http://mathematica.stackexchange.com/a/132225/12) and [@Ilian Gachevski][at0].
Use a scheduled task to delay initialization until kernel startup has finished:
RunScheduledTask[
(* perform some complex initialization *)
; RemoveScheduledTask @ $ScheduledTask
, {0}
]
This is really just a hack and it is not fully reliable (perhaps there's a risk of race conditions). I needed to use a delay of 1 second instead of 0 seconds to make it work reliably.
I do not recommend using this in published packages, but the technique is very useful for personal packages. I use it to set up [function argument completions](http://mathematica.stackexchange.com/a/129910/12) in a personal package that I always auto-load.
### If your users ask about reliably auto-loading your package, suggest DeclarePackage
This is just an idea: if you need to load a package at kernel startup (but not actually use it in kernel init files), [then use `DeclarePackage`][3] instead of `Needs`.
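For example (package and symbol names are hypothetical), putting something like this in the kernel's `init.m` loads the package automatically the first time one of the declared symbols is used:

    (* MyPackage` is loaded on first use of MyFunction or MyOtherFunction *)
    DeclarePackage["MyPackage`", {"MyFunction", "MyOtherFunction"}]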
<br><br>
Any comments or suggestions are most welcome!
[at0]: http://community.wolfram.com/web/iliang
[1]: http://mathematica.stackexchange.com/q/17164/12
[2]: http://mathematica.stackexchange.com/a/132065/12
[3]: http://reference.wolfram.com/language/tutorial/AutomaticLoadingOfPackages.htmlSzabolcs Horvát2016-12-16T11:47:52ZGet FinancialData price history?
http://community.wolfram.com/groups/-/m/t/1097940
Since 16 May 2017, why am I not able to get the price history of US stock market tickers? Please see output below! Has anything changed?
In[9]:= FinancialData["GE", {{2017, 1, 3}, {2017, 5, 15}}]
Out[9]= Missing["NotAvailable"]
In[10]:= FinancialData["IBM", {{2017, 1, 3}, {2017, 5, 15}}]
Out[10]= Missing["NotAvailable"]sridev ramaswamy2017-05-18T12:09:48ZImage Correlation in Particle Image Velocimetry is behaving strangely
http://community.wolfram.com/groups/-/m/t/1124830
Note: I have posted the same question on MSE: https://mathematica.stackexchange.com/questions/148739/image-correlation-in-particle-image-velocimetry-is-behaving-strangely
I have been trying to implement a code for determining flow-field using Particle Image Velocimetry.
In this technique a user takes two images. Small windows from the first image act as kernels; cross-correlating each one with a search window from the second image tells where the small window has moved within that search window. This process can be repeated between the second and third images, and so on.
A clear description can be found in the second paragraph of:
http://www.physics.emory.edu/faculty/weeks//idl/piv.html
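The correlation step in isolation can be sketched like this (toy data; note that with a distance-based measure such as `NormalizedSquaredEuclideanDistance` the best match is the *minimum* of the correlation map):

    (* cut an 8x8 template out of a random image at known rows/columns *)
    img = Image[RandomReal[1, {64, 64}]];
    tpl = ImageTake[img, {20, 27}, {30, 37}];
    corr = ImageCorrelate[img, tpl, NormalizedSquaredEuclideanDistance];
    (* the minimum of the distance map marks the centre of the best match *)
    m = ImageData[corr];
    First@Position[m, Min[m]]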
I have two images here (posting as a gif, you can save this and import it in Mathematica as a list of two images):
![enter image description here][1]
I use the following code to generate the flow-field.
windowsize = 32; (* select window size *)
imgDim = ImageDimensions[images[[1]]]; (* dimensions for the images *)
imgone = ImageCrop[images[[1]], imgDim - (2*windowsize)]; (* removing
border from first image: we don't want to create windows at the borders *)
firstimgsplits = ImagePartition[imgone, windowsize];
(* breaking the first image into small windows *)
searchwindows = ImagePartition[images[[2]], windowsize*3, {windowsize, windowsize}];
(* breaking the second image into search windows *)
{dim1, dim2} = Dimensions@searchwindows;
H = Last@ImageDimensions[imgone];
(* get midpoints of the windows of the first frame *)
midptsFirst = Flatten[Table[{i windowsize + windowsize/2,
j (windowsize) + windowsize/2}, {i, 1, dim1}, {j, 1, dim2}], 1];
(* pts in the second image where correlation is max *)
correlationPts = Table[MorphologicalComponents[ImagePad[
ImageAdjust@ImageCorrelate[searchwindows[[i + 1, j + 1]],
firstimgsplits[[i + 1, j + 1]], NormalizedSquaredEuclideanDistance,
PerformanceGoal -> "Quality"], {{j*windowsize, H - windowsize (j + 1)},
{H - windowsize (i + 1), windowsize i}}, White]]~Position~0,
{i, 0, dim1 - 1}, {j, 0, dim2 - 1}]~Flatten~2;
Now when I create a flow-field from the displacement of points (red points in the first image and cyan points in the second image), I can see that something is not right. My eyes tell me that the particles have moved in a direction different from the one found using ImageCorrelate.
This should be rather straightforward for Mathematica. I do not know what is wrong in this simple piece of code. I will appreciate it if someone can help me with this question.
ListAnimate@{Show[images[[1]], Graphics[{Red, Point@midptsFirst}]],
Show[images[[2]], Graphics[{Cyan, PointSize[Medium], Point@correlationPts,
{Pink, Arrowheads[Small], MapThread[Arrow[{#1, #2}] &, {midptsFirst, correlationPts}]}}]]}
![enter image description here][2]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Testpiv3.gif&userId=942204
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1195Picture1.png&userId=942204Ali Hashmi2017-06-20T12:15:21Z[Mathematica-vs-R] Text analysis of Trump tweets
http://community.wolfram.com/groups/-/m/t/967299
## Introduction
This post is to proclaim the [MathematicaVsR at GitHub](https://github.com/antononcube/MathematicaVsR) project ["Text analysis of Trump tweets"](https://github.com/antononcube/MathematicaVsR/tree/master/Projects/TextAnalysisOfTrumpTweets) in which we compare Mathematica and R over text analyses of Twitter messages made by Donald Trump (and his staff) before the USA president elections in 2016.
This project follows and extends the exposition and analysis of the R-based blog post ["Text analysis of Trump's tweets confirms he writes only the (angrier) Android half"](http://varianceexplained.org/r/trump-tweets/) by David Robinson at [VarianceExplained.org](http://varianceexplained.org); see [1].
The blog post \[[1](http://varianceexplained.org/r/trump-tweets/)\] links to several sources that claim that during the election campaign Donald Trump tweeted from his Android phone and his campaign staff tweeted from an iPhone. The blog post [1] examines this hypothesis in a quantitative way (using various R packages.)
The hypothesis in question is well summarized with the tweet:
> Every non-hyperbolic tweet is from iPhone (his staff).
> Every hyperbolic tweet is from Android (from him). [pic.twitter.com/GWr6D8h5ed](pic.twitter.com/GWr6D8h5ed)
> -- Todd Vaziri (@tvaziri) August 6, 2016
This conjecture is fairly well supported by the following [mosaic plots](https://mathematicaforprediction.wordpress.com/2014/03/17/mosaic-plots-for-data-visualization/), \[[2](https://mathematicaforprediction.wordpress.com/2014/03/17/mosaic-plots-for-data-visualization/)\]:
[![TextAnalysisOfTrumpTweets-iPhone-MosaicPlot-Sentiment-Device](http://i.imgur.com/eKjxlTvm.png)](http://i.imgur.com/eKjxlTv.png) [![TextAnalysisOfTrumpTweets-iPhone-MosaicPlot-Device-Weekday-Sentiment](http://i.imgur.com/RMfuNNtm.png)](http://i.imgur.com/RMfuNNt.png)
We can see that the Twitter messages from iPhone are much more likely to be neutral, and the ones from Android are much more polarized. As
Christian Rudder (one of the founders of [OkCupid](https://www.okcupid.com), a dating website) explains in the chapter "Death by a Thousand Mehs" of the book ["Dataclysm"](http://dataclysm.org), \[[3](http://dataclysm.org)\], having a polarizing image (online persona) is a very good strategy for engaging an online audience:
> [...] And the effect isn't small-being highly polarizing will in fact get you about 70 percent more messages. That means variance allows you to effectively jump several "leagues" up in the dating pecking order - [...]
(The mosaic plots above were made for the Mathematica-part of this project. Mosaic plots and weekday tags are not used in [1].)
### Links
- The Mathematica part: [PDF file](https://github.com/antononcube/MathematicaVsR/blob/master/Projects/TextAnalysisOfTrumpTweets/Mathematica/Text-analysis-of-Trump-tweets.pdf), [Markdown file](https://github.com/antononcube/MathematicaVsR/blob/master/Projects/TextAnalysisOfTrumpTweets/Mathematica/Text-analysis-of-Trump-tweets.md).
- The R part consists of :
- the blog post \[[1](http://varianceexplained.org/r/trump-tweets/)\], and
- the R-notebook given as [Markdown](https://github.com/antononcube/MathematicaVsR/blob/master/Projects/TextAnalysisOfTrumpTweets/R/TextAnalysisOfTrumpTweets.Rmd) and [HTML](https://cdn.rawgit.com/antononcube/MathematicaVsR/master/Projects/TextAnalysisOfTrumpTweets/R/TextAnalysisOfTrumpTweets.nb.html).
## Concrete steps
The Mathematica-part of this project does not follow closely the blog post [1]. After the ingestion of the data provided in [1], the Mathematica-part applies alternative algorithms to support and extend the analysis in [1].
The sections in the [R-part notebook](https://github.com/antononcube/MathematicaVsR/blob/master/Projects/TextAnalysisOfTrumpTweets/R/TextAnalysisOfTrumpTweets.Rmd) correspond to some -- not all -- of the sections in the Mathematica-part.
The following list of steps is for the Mathematica-part.
1. **Data ingestion**
- The blog post [1] shows how to do in R the ingestion of Twitter data of Donald Trump messages.
- That can be done in Mathematica too using the built-in function `ServiceConnect`,
but that is not necessary since [1] provides a link to the ingested data used in [1]:
load(url("http://varianceexplained.org/files/trump_tweets_df.rda"))
- This leads to ingesting an R data frame in the Mathematica-part using RLink.
2. **Adding tags**
- We have to extract device tags for the messages -- each message is associated with one of the tags "Android", "iPad", or "iPhone".
- Using the message time-stamps each message is associated with time tags corresponding to the creation time month, hour, weekday, etc.
- Here is a summary of the data at this stage:
![enter image description here][1]
3. **Time series and time related distributions**
- We can make several types of time series plots for general insight and to support the main conjecture.
- Here is a Mathematica made plot for the same statistic computed in [1] that shows differences in tweet posting behavior:
![enter image description here][2]
- Here are distributions plots of tweets per weekday:
![enter image description here][3]
4. **Classification into sentiments and Facebook topics**
- Using the built-in classifiers of Mathematica each tweet message is associated with a sentiment tag and a Facebook topic tag.
- In [1] the results of this step are derived in several stages.
- Here is a mosaic plot for conditional probabilities of devices, topics, and sentiments:
![enter image description here][4]
5. **Device-word association rules**
- Using [Association rule learning](https://en.wikipedia.org/wiki/Association_rule_learning) device tags are associated with words in the tweets.
- In the Mathematica-part these association rules are not needed for the sentiment analysis (because of the built-in classifiers.)
- The association rule mining is done mostly to support and extend the text analysis in [1] and, of course, for comparison purposes.
- Here is an example of derived association rules together with their most important measures:
![enter image description here][5]
In [1] the sentiments are derived from computed device-word associations, so in [1] the order of steps is 1-2-3-5-4. In Mathematica we do not need the steps 3 and 5 in order to get the sentiments in the 4th step.
## Comparison
Using Mathematica for sentiment analysis is much more direct because of the built-in classifiers.
The R-based blog post [1] heavily uses the "pipeline" operator `%>%`, which is a fairly recent addition to R (and it is both fashionable and convenient to use.) In Mathematica the related operators are `Postfix` (`//`), `Prefix` (`@`), `Infix` (`~`), `Composition` (`@*`), and `RightComposition` (`/*`).
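For instance, an R pipeline such as `data %>% f %>% g` corresponds to the postfix chain `data // f // g`:

    (* postfix chaining, analogous to R's %>% pipeline *)
    Range[10] // Select[EvenQ] // Total
    (* 30 *)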
Making the time series plots with the R package "ggplot2" requires making special data frames. I am inclined to think that the Mathematica plotting of time series is more direct, but for this task the data wrangling codes in Mathematica and R are fairly comparable.
Generally speaking, the R package ["arules"](https://cran.r-project.org/web/packages/arules/index.html) -- used in this project for Associations rule learning -- is somewhat awkward to use:
- it is data frame centric, does not work directly with lists of lists, and
- requires the use of factors.
The Apriori implementation in ["arules"](https://cran.r-project.org/web/packages/arules/index.html) is much faster than the one in ["AprioriAlgorithm.m"](https://github.com/antononcube/MathematicaForPrediction/blob/master/AprioriAlgorithm.m) -- "arules" uses a more efficient algorithm [implemented in C](http://www.borgelt.net/fpm.html).
## References
\[1\] David Robinson, ["Text analysis of Trump's tweets confirms he writes only the (angrier) Android half"](http://varianceexplained.org/r/trump-tweets/), (2016), [VarianceExplained.org](http://varianceexplained.org).
\[2\] Anton Antonov, ["Mosaic plots for data visualization"](https://mathematicaforprediction.wordpress.com/2014/03/17/mosaic-plots-for-data-visualization/), (2014), [MathematicaForPrediction at WordPress](https://mathematicaforprediction.wordpress.com).
\[3\] Christian Rudder, [Dataclysm](http://dataclysm.org), Crown, 2014. ASIN: B00J1IQUX8 .
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=yMtdphT.png&userId=143837
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=oDv5Cm0.png&userId=143837
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=UGMy4EW.png&userId=143837
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=dMxSpHa.png&userId=143837
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=dSSb4KD.png&userId=143837Anton Antonov2016-11-21T10:51:46Z[✓] Use of ControllerManipulate?
http://community.wolfram.com/groups/-/m/t/1125938
Does anybody know in what way the command ControllerManipulate can be used? The help function says that it gives the same as Manipulate, but without a controlling slider or so. The controller would be outside. But what kind of controller is meant then? In other words, how can one control the result of ControllerManipulate? Does anybody have an example?Laurens Wachters2017-06-21T16:42:08ZHiggs Boson Classification via Neural Network
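For concreteness, the kind of expression in question would be something like this sketch (my reading of the documentation: it should behave like `Manipulate`, but with `n` driven by an attached device such as a gamepad rather than by a slider):

    (* parameter n is meant to be controlled by an external device *)
    ControllerManipulate[Plot[Sin[n x], {x, 0, 2 Pi}], {n, 1, 5}]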
http://community.wolfram.com/groups/-/m/t/1016315
![enter image description here][1]
# Introduction
So here is a simple approach that applies Wolfram Language machine learning functions to a classification problem for finding possible Higgs particles. It uses a labeled data set with 30 numerical physical attributes (things like measured spins, angles, energies, etc.) and with labels being either 'signal' (s) or 'background' (b). The attached notebook runs through a sample analysis in the Wolfram Language: importing the training data, cleaning it up for using it, setting up a neural network, training the network with the data, and finally checking how well the trained neural network does at making predictions. Here is the description of data from the source website at KAGGLE:
> Discovery of the long awaited Higgs boson was announced July 4, 2012 and confirmed six months later. 2013 saw a number of prestigious awards, including a Nobel prize. But for physicists, the discovery of a new particle means the beginning of a long and difficult quest to measure its characteristics and determine if it fits the current model of nature.
> A key property of any particle is how often it decays into other particles. ATLAS is a particle physics experiment taking place at the Large Hadron Collider at CERN that searches for new particles and processes using head-on collisions of protons of extraordinarily high energy. The ATLAS experiment has recently observed a signal of the Higgs boson decaying into two tau particles, but this decay is a small signal buried in background noise.
> The goal of the Higgs Boson Machine Learning Challenge is to explore the potential of advanced machine learning methods to improve the discovery significance of the experiment. No knowledge of particle physics is required. Using simulated data with features characterizing events detected by ATLAS, your task is to classify events into "tau tau decay of a Higgs boson" versus "background."
> The winning method may eventually be applied to real data and the winners may be invited to CERN to discuss their results with high energy physicists.
# References and sources
- [Learning to discover: the Higgs boson machine learning challenge][2]
- [KAGGLE: Higgs Boson Machine Learning Challenge][3]
- [Opendata ATLAS][4]
# Training data
Import the training data:
training = Import["D:\\machinelearning\\higgs\\training\\training.csv", "Data"];
Dimensions[training]
`{250001, 33}`
Look at the data fields (they are described in the PDF linked above). "EventId" should not be used as part of the training, since it has no predictive value. The last column, "Label", is the classification (s = signal, b = background):
training[[1]]
`{"EventId", "DER_mass_MMC", "DER_mass_transverse_met_lep", \
"DER_mass_vis", "DER_pt_h", "DER_deltaeta_jet_jet", \
"DER_mass_jet_jet", "DER_prodeta_jet_jet", "DER_deltar_tau_lep", \
"DER_pt_tot", "DER_sum_pt", "DER_pt_ratio_lep_tau", \
"DER_met_phi_centrality", "DER_lep_eta_centrality", "PRI_tau_pt", \
"PRI_tau_eta", "PRI_tau_phi", "PRI_lep_pt", "PRI_lep_eta", \
"PRI_lep_phi", "PRI_met", "PRI_met_phi", "PRI_met_sumet", \
"PRI_jet_num", "PRI_jet_leading_pt", "PRI_jet_leading_eta", \
"PRI_jet_leading_phi", "PRI_jet_subleading_pt", \
"PRI_jet_subleading_eta", "PRI_jet_subleading_phi", "PRI_jet_all_pt", \
"Weight", "Label"}`
Sample vector:
training[[2]]
`{100000, 138.47, 51.655, 97.827, 27.98, 0.91, 124.711, 2.666, 3.064, \
41.928, 197.76, 1.582, 1.396, 0.2, 32.638, 1.017, 0.381, 51.626, \
2.273, -2.414, 16.824, -0.277, 258.733, 2, 67.435, 2.15, 0.444, \
46.062, 1.24, -2.475, 113.497, 0.00265331, "s"}`
Set up a simple neural network (this can be tinkered with to improve the results):
net=NetInitialize[
NetChain[{
LinearLayer[3000],Ramp,LinearLayer[3000],Ramp,LinearLayer[2],SoftmaxLayer[]
},
"Input"->{30},
"Output"->NetDecoder[{"Class",{"b","s"}}]
]]
![enter image description here][5]
Set up the training data:
data = Map[Take[#, {2, 31}] -> Last[#] &, Drop[training, 1]];
Numerical vectors that each point to a classification (s or b):
RandomSample[data, 3]
`{{87.06, 23.069, 67.711, 162.488, -999., -999., -999., 0.903, 4.245,
318.43, 0.523, 0.839, -999., 105.019, -0.404, 1.612,
54.943, -0.898, 0.855, 13.541, 1.729, 409.232, 1,
158.469, -1.363, -1.762, -999., -999., -999., 158.469} ->
"b", {163.658, 55.559, 116.84, 50.019, -999., -999., -999., 2.855,
38.623, 130.132, 0.906, 1.359, -999., 51.993, 0.36, 2.142,
47.112, -0.932, -1.596, 22.782, 2.663, 191.557, 1, 31.026, -2.333,
0.739, -999., -999., -999., 31.026} ->
"b", {100.248, 27.109, 60.729, 132.094, -999., -999., -999., 1.405,
10.063, 218.474, 0.541, 1.414, -999., 62.519, -0.401, -1.974,
33.817, -0.893, -0.657, 54.857, -1.298, 396.228, 1,
122.137, -2.369, 1.689, -999., -999., -999., 122.137} -> "s"}`
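The -999. entries visible above are not physical measurements: the Kaggle documentation describes them as placeholders for features that are undefined for a given event. The analysis in this post trains on the raw values, but if one wanted to treat the placeholders specially first, a simplistic (illustrative, not from the attached notebook) option would be to map them to zero:

    (* hypothetical cleanup: replace the -999. "undefined" placeholders;
       ReplaceAll descends into the feature vectors but leaves the
       "s"/"b" labels untouched *)
    cleaned = data /. -999. -> 0.;

More careful treatments (per-feature means, or an indicator column per feature) are also possible; which helps most would need to be tested against validation accuracy.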
# Training
Length[data]
`250000`
{tdata,vdata}=TakeDrop[data,240000];
result=NetTrain[net,tdata,TargetDevice->"GPU",ValidationSet->Scaled[0.1],MaxTrainingRounds->1000]
![enter image description here][6]
DumpSave["D:\\machinelearning\\higgs\\higgs.mx", result];
# Testing
This is the test data (unlabeled):
test = Import["D:\\machinelearning\\higgs\\test\\test.csv", "Data"];
Extract the feature columns from the unlabeled test data:
validate = Map[Take[#, {2, 31}] &, Drop[test, 1]];
Predictions made on the unlabeled data:
result /@ RandomSample[validate, 5]
`{"b", "b", "s", "b", "b"}`
Sample from the labeled data and compute classifier statistics:
cm = ClassifierMeasurements[result, RandomSample[vdata, 1000]]
![enter image description here][7]
cm["Accuracy"]
`0.846`
Plot the confusion matrix:
cm["ConfusionMatrixPlot"]
![enter image description here][8]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ATLASEXP_image.png&userId=20103
[2]: https://higgsml.lal.in2p3.fr/files/2014/04/documentation_v1.8.pdf
[3]: https://www.kaggle.com/c/higgs-boson
[4]: http://opendata.cern.ch/about/ATLAS
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5393ty56yetjhw4.png&userId=20103
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5256567urytere.png&userId=20103
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=rtyee567rutyrd.png&userId=20103
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ghf56utyehw65eyhrw.png&userId=20103

Arnoud Buzing, 2017-02-17

Simulating the Universe (an alternative approach)
http://community.wolfram.com/groups/-/m/t/982238
This is the Community Post presenting the 2016 Wolfram Summer School Project of József Konczer, a Hungarian PhD student of theoretical physics, assisted by Todd Rowland.
The project notebook with the code is attached to this post.
## Fundamental theories in Physics ##
The Holy Grail of theoretical physics would be a theory which could describe all known phenomena in every situation. This would be called the Theory of Everything ([ToE][1]). At present, all approaches to a ToE candidate—like String/M theory, Loop quantum gravity ([and many others][2])—incorporate quantum mechanics from the beginning. This approach has certainly produced very useful effective theories, like the Standard Model itself; however, it does not help in finding an underlying deterministic theory from which quantum effects would emerge, as Einstein dreamed.
The standard argument for the unavoidability of quantum mechanics and uncertainty comes from the [Bohr–Einstein debates][3], which were "won" by Bohr because the [EPR paradox][4] was tested by [measurements][5] (there is an ongoing test as well, called the [Big Bell test][6]) and the result excludes local hidden variables. (The arguments can be found in detail [here][7] and [here][8].) It has to be emphasized that in these arguments locality is a key assumption.
----------
One can ask: what kind of nonlocal theory can be constructed that still has predictive power and is not based on a conspiracy of Nature? Only a few researchers pose this question openly; one of them is Gerard ’t Hooft, who recently published a [book][9] based on collected [papers][10]. His approach is conservative (from a mainstream point of view) and mainly suggests that if one quantizes time, then in a suitable basis the unitary time evolution of quantum mechanics becomes a permutation operator between special basis elements, or "beable states". However, not every time evolution has this property, and interacting theories typically fail to fulfill the requirements. A bolder, though much less understood, theory (or framework) is the one Stephen Wolfram described in [NKS][11]. A brief summary of his ideas can be found in this [blog post][12]. The main idea is to find a simple data structure (for instance a sparse graph), a simple discrete dynamics governed by a replacement rule, and an interpretation for this cellular automaton (CA), and then investigate whether we can observe phenomena similar to what we see in our Universe.
## Hints pointing toward the CA description ##
This is a speculative and highly subjective argument; however, I think this blog post is an appropriate place to articulate my motives without sticking to the objective style of research papers.
First of all, without going too deep into metaphysics: I don't want to state things about Nature itself, I only talk about our description of it.
The first successful and highly useful description of Nature was Newton's, which heavily used the idea of continuity of space and time. This idea proved useful in the description of solids, liquids, and gases as well. However, some ideas became so useful and popular that we forgot that all of them are only our description and not Nature itself. Quantum effects, and effects related to relativity, reminded us that under nonstandard circumstances old descriptions can fail. As I see it, quantum effects carry two messages for us: first, that quantities could and should be described by discrete variables; and second, that below a certain level systems cannot be observed without disturbance. If we take into account that space and time, even as we observe them, are influenced by these quantized quantities, it is straightforward to deduce that space and time should be quantized as well.
Before these findings in physics, probability theory was developed. At first it was used to analyze gambling situations where one does not know everything about the system. From this point of view it is clearly a strategy to manage our ignorance of some details in a deterministic situation. However, at some point physicists started to use probabilities as if they were part of the phenomena, and not only our clever way to make inferences about systems where we do not know every detail. Many physicists—including myself—were educated in the spirit of the frequentist [interpretation of probability theory][13], which is useful in some cases but, as I think, prevents some questions from being asked. I think this promotion of probability to an objective property contributed to the interpretation of quantum mechanics as well. As Jaynes wrote in his [book][14]:
> In current quantum theory, probabilities express our own ignorance due to our failure
to search for the real causes of physical phenomena; and, worse, our failure even to think
seriously about the problem. This ignorance may be unavoidable in practice, but in our
present state of knowledge we do not know whether it is unavoidable in principle; the
‘central dogma’ simply asserts this, and draws the conclusion that belief in causes, and
searching for them, is philosophically naive. If everybody accepted this and abided by it,
no further advances in understanding of physical law would ever be made; indeed, no such
advance has been made since the 1927 Solvay Congress in which this mentality became
solidified into physics. But it seems to us that this attitude places a premium on stupidity;
to lack the ingenuity to think of a rational physical explanation is to support the supernatural
view.
However, even if one thinks that theories incorporating quantum mechanics are "only" effective theories, we can probably get intuitions from them. There is a [recent result][15] from the [AdS/CFT][16] correspondence as an example of the [ER=EPR][17] conjecture, and a related [paper][18] by Leonard Susskind concluding that:
> What all of this suggests to me, and what I want to suggest to you, is that quantum mechanics and gravity are far more tightly related than we (or at least I) had ever imagined. The essential nonlocalities of quantum mechanics (the need for instantaneous communication in order to classically simulate entanglement) parallels the nonlocal potentialities of general relativity: ER=EPR.
The cited papers state that spacetime structure can be understood as a net of entanglement; however, maybe the statement can be reversed, saying that the phenomenon of entanglement can be described by a nonlocal spacetime structure.
Besides the mentioned hints, the existing theoretical constructions can help to find an appropriate interpretation as well. For example, it may happen that to describe our seemingly 3-dimensional space, one has to describe space with a higher effective dimensionality and interpret the entangled parts not just as connected regions, but as global structures in the extra dimensions.
After taking hints from existing theoretical constructions, one can investigate what kinds of phenomena can appear in simple CAs which mimic some parts of Nature.
Perhaps the best-known CA is Conway's [Game of Life][19], a 2D cellular automaton where localized objects (called [spaceships][20] or gliders) can propagate and interact with each other. This behavior is reminiscent of particles; however, the built-in rectangular structure is reflected in the properties of the spaceships, and there are no nonlocal connections between these objects because of the locality of the rule.
Both problems can be solved if one tries to construct a CA without a built-in topology. (This construction will be described in detail.)
Another nice feature of the special CAs called substitution systems is that a structure living in the automaton cannot observe the absolute number of steps, or other structures besides itself; only the causal net of implemented changes can be recognized from inside. This feature unites relative space and time for observers or structures inside the system, and is reminiscent of the causal-network description of General Relativity.
A third hint from the CA point of view is the typical appearance of complex behavior, which can lead to an effective probabilistic description of the system with a higher symmetry than the framework originally allowed (for example, the CA description of fluid flows). From disorder, new effective order can emerge, possibly with higher symmetry.
The conjectured computational irreducibility of CAs would replace the promised "free will" of quantum mechanics with a different, but in some sense similar, concept. In this framework the fate of the Universe would be determined, but even an observer outside the system—God, if one wishes—could not know the consequences except by letting the simulation run up to the desired point.
Furthermore, a multiway CA dynamics is compatible with the many-worlds interpretation of quantum mechanics, with the advantage that the splitting points of histories are not observer-dependent. In this framework the overall dynamics is deterministic; however, structures always living on one branch of the evolution will witness unavoidable, truly random behavior from their inside point of view.
## Nature and our understanding of it ##
Of course it would be arrogant to force Nature to fulfill our philosophical expectations; however, one can imagine how our description of it can change over time.
There are several situations one can imagine:
- There is a deterministic description which is valid in any situation (which can appear in our Universe)
- This can be totally discrete
- Or it can be continuous, partially or in whole
- It may be that after some point a truly random mechanism (or one at least appearing random to us) will appear, which cannot be unfolded
- Or it can happen that the construction of laws to describe Nature will never come to an end, and our understanding of reality will be based on an infinite set of possibly deterministic rules.
- And of course it can happen that something unexpected will turn out.
Without favoring any of the listed cases above, my main point is that the very first situation, namely that our Universe can be described as a deterministic discrete system, is not totally excluded. And the most natural way to realize it may be a CA description.
## CA description candidate for our Universe ##
To have a CA description, one has to choose a data structure, a dynamics, and an interpretation. (It has to be pointed out that any CA can be simulated on another Turing-complete CA with a different interpretation of the states. Because of that, any CA description is highly non-unique. However, one can try to choose a description which has the "simplest" interpretation.)
For a fundamental CA description one can choose simple graphs as the data structure. This seems a natural choice because of its simplicity and its non-fixed topology.
To have a chance of describing deterministic dynamics on this data structure, we further restrict the degree of the nodes of the graph. One can try to find the threshold of complexity of the CA, and it seems that cubic graphs can already produce sufficiently complicated structures. So one can set the data structure to be a simple cubic graph.
The next step is to define an appropriate dynamics on this data structure. A natural approach is to introduce subgraph replacement rules, which means the following: if one finds a given subgraph pattern $H_1$ in the present graph $G$, then replace it with a compatible new graph $H_2$. It sounds simple; however, there are many details which have to be worked out to get a dynamics with the desired properties.
I mention here two properties of the patterns which seem to be essential to get a substitution system that generates complex behavior and appears completely deterministic from the inside, without specifying the order of replacements in the system.
The first one is a **non overlapping property** of the pattern graph(s) $H_1$. This means that $H_1$ has a special structure such that there is no cubic graph $G$ in which two subgraphs can be found that are isomorphic to $H_1$ and have nonzero intersection. The following rule does not fulfill this requirement, because there is a cubic graph in which two intersecting copies of $H_1$ can be found:
![overlapping rule][21]
![Interesting patterns][22]
The second requirement is **non triviality**, which gives a constraint on $H_2$. In this case we require an $H_2$ such that there exists a cubic graph $G$ containing a subgraph $H_1$ where, after the replacement $H_1 \rightarrow H_2$, a new pattern $H_1$ can be found which intersects $H_2$ but has parts outside $H_2$ as well. (Without this property, only self-similar or frozen graphs (where no pattern remains which can be changed) can be generated from finite initial graphs.) The pictures show the requirement visually:
![enter image description here][23]
![enter image description here][24]
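For very small graphs, the non overlapping property can be checked by brute force. The following Wolfram Language sketch is my own illustration (not from the project notebook): it enumerates vertex sets of $G$ whose induced subgraph is isomorphic to the pattern $H_1$, and then tests whether any two such embeddings share a vertex. Note that it only considers induced subgraphs, a slight simplification, and its cost grows exponentially with graph size.

    (* all vertex sets of g whose induced subgraph is isomorphic to h *)
    findEmbeddings[g_Graph, h_Graph] :=
      Select[Subsets[VertexList[g], {VertexCount[h]}],
        IsomorphicGraphQ[Subgraph[g, #], h] &]

    (* True if some pair of embeddings of h in g has nonzero intersection,
       i.e. the pattern h violates the non overlapping property on g *)
    overlappingQ[g_Graph, h_Graph] :=
      AnyTrue[Subsets[findEmbeddings[g, h], {2}],
        Length[Intersection @@ #] > 0 &]

A practical search over candidate rules would need a cleverer subgraph-matching strategy, but this captures the definition directly.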
After setting a rule which fulfills these requirements, we have to find an initial graph, apply the rule many times, and find an interpretation for the result. It has to be pointed out that the actual graph structure at a given step cannot be observed from an inside point of view. What an inner observer, or a structure, can explore is the causal structure generated by the replacements. (For details see [NKS chapter 9, section 13][25].)
This is similar to the [causal set program][26].
So a natural interpretation of the emerging causal net is that it is a discretization of some kind of spacetime, and locally propagating disturbances relative to the overall average structure are particle-like excitations, which can have nonlocal connections relative to the average large-scale structure. However, from AdS/CFT insights it may happen that we have to interpret particles, for example, as global structures in a higher-dimensional bulk spacetime, which have ends on a boundary-like lower-dimensional surface.
## My contribution to the project ##
During the 3 weeks of the 2016 Wolfram Summer School, I set up a framework in which the steps of a substitution are precisely defined, and in which the substitutions can be performed efficiently even for relatively big graphs. Furthermore, I tested a numerical approach to measure the effective dimensionality of the emergent graph structure after sufficiently many steps.
Unfortunately I could not test this framework with rules which could give complex, deterministic behavior, so I benchmarked the machinery on a simple point-to-triangle rule, which gives a fractal-like structure. If we interpret this graph as space, then this simple dynamics results in a fractal space of dimension $D=\log(3)/\log(2)\approx 1.58$.
Here is a graph of the generated fractal Universe after 100 steps, started from a tetrahedron:
![Generated fractal Universe after 100 steps, started from a tetrahedron][27]
And the neighborhood structure of this space:
![Local structure in the fractal Universe][28]
## Further directions ##
This project to find a deterministic CA description of our Universe is in its infancy. The framework is more or less set, but tremendous work is needed to investigate possible dynamics and analyze the results of simulations.
An outline of a huge project would be the following:
- List the possible rules, which fulfill the non overlapping and non trivial conditions
- Investigate their long term behavior starting from simple initial graphs
- Find quantities and a method of their measurement which can be determined from generated causal graphs
- Find fixed points of the dynamics which preserve long scale dimensionality and possibly other quantities
- List and investigate local disturbances near these fixed points
- After setting an interpretation analyze the particle-like structures (gliders of this dynamics)
- Develop an effective field theory which can describe an effective behavior of the system near to the fixed points
- Match these field theories with the Standard Model of particle physics
- Find out new predictions of the derived effective field theories, which can be tested by measurements
## Conclusion ##
In my project I was able to set up a framework and show a trivial example of a deterministic graph evolution model.
During the summer school I was not fortunate enough to find a dynamics producing complex behavior; however, finding an appropriate rule seems reachable in the near future. Hopefully a dynamics producing complex topology will be interesting enough to inspire many more people, and at some point a serious investigation of the field can be started.
I personally think that proving, or even disproving, that this framework for describing Nature can be worked out is an extremely interesting challenge and deserves further theoretical research.
In the end I would like to thank my mentor Todd Rowland and the whole Wolfram Summer School team for the organization, and I really hope that there will be a continuation of this project.
Last but not least, I thank all the summer school participants for great discussions and a lifelong experience!
![enter image description here][29]
----------
## Further comments ##
I collect here some useful comments from my friends and colleagues, who kindly read my post and responded in person:
There is a concept named [Digital physics][30], which has a much longer history than I suggested; probably the earliest pioneer of the field was Konrad Zuse. Fortunately his thesis—[Calculating Space][31] or "Rechnender Raum"—is now translated into English and has a modern LaTeX typesetting.
Besides NKS there is another relevant book, which can serve as an extended list of references and is valuable material in its own right, written by Andrew Ilachinski and titled [Cellular Automata: A Discrete Universe][32].
There is an ongoing "mini revolution" in the description of AdS/CFT based on [Tensor Networks][33]. The original paper on the topic can be found [here][34].
[1]: https://en.wikipedia.org/wiki/Theory_of_everything
[2]: https://www.quantamagazine.org/20150803-physics-theories-map/
[3]: https://en.wikipedia.org/wiki/Bohr%E2%80%93Einstein_debates
[4]: https://en.wikipedia.org/wiki/EPR_paradox
[5]: https://arxiv.org/abs/1508.05949
[6]: http://thebigbelltest.org/#/science?l=EN
[7]: http://www.cambridge.org/catalogue/catalogue.asp?isbn=9780521818629
[8]: http://www.springer.com/in/book/9783662137352
[9]: http://www.springer.com/us/book/9783319412849
[10]: https://arxiv.org/abs/1405.1548
[11]: http://www.wolframscience.com/
[12]: http://blog.stephenwolfram.com/2015/12/what-is-spacetime-really/
[13]: https://plato.stanford.edu/entries/probability-interpret/
[14]: http://www.cambridge.org/catalogue/catalogue.asp?isbn=0521592712
[15]: http://www.nature.com/news/the-quantum-source-of-space-time-1.18797
[16]: https://en.wikipedia.org/wiki/AdS/CFT_correspondence
[17]: https://en.wikipedia.org/wiki/ER=EPR
[18]: https://arxiv.org/abs/1604.02589
[19]: https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
[20]: http://conwaylife.com/wiki/Category:Spaceships
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=H1H2.png&userId=981213
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=GH1H1.png&userId=981213
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=H1H2_2.png&userId=981213
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=GG.png&userId=981213
[25]: http://www.wolframscience.com/nksonline/section-9.13
[26]: https://en.wikipedia.org/wiki/Causal_sets
[27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PresentationTemplate_KJ_2.png&userId=981213
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PresentationTemplate_KJ_3.png&userId=981213
[29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=vladstudio_higgs_boson_fluo_800x600_signed.jpg&userId=981213
[30]: https://en.wikipedia.org/wiki/Digital_physics
[31]: http://www.mathrix.org/zenil/ZuseCalculatingSpace-GermanZenil.pdf
[32]: http://www.worldscientific.com/worldscibooks/10.1142/4702
[33]: https://arxiv.org/abs/1306.2164
[34]: https://arxiv.org/abs/0905.1317

Jozsef Konczer, 2016-12-16

Extract numbers from quantities from AirTemperatureData[] ?
http://community.wolfram.com/groups/-/m/t/1124856
Hi. In Mathematica 10.3 I type this:
AirTemperatureData[]
and I get this:
DataPaclets`WeatherConvenienceFunctionsDump`unitAirTemperature[
QuantityUnits`Private`ToQuantity[
QuantityUnits`Private`UnknownQuantity[16.,
"DegreesCelsius"]], "Metric"]
I haven't managed to extract the number (in this case, 16.). I've tried with QuantityMagnitude, etc., to no avail. Can somebody help me?
Francisco

Francisco Gutierrez, 2017-06-20

Optimize computation time of Integrate[] ?
http://community.wolfram.com/groups/-/m/t/1124955
Hi, I am doing some symbolic computation in a rather large notebook, but I find that Mathematica is underutilizing my computer resources. One bottleneck is the computation of
Efsoft = -(1/
2) \[Nu]t - \[Alpha] Integrate[\[Nu] (\[Nu] +
2 \[Nu]t)/(\[Nu] + \[Nu]t)^2 Exp[-\[Nu]/\[Nu]c] Cos[
2 \[Pi] \[Rho] \[Nu]]^2, {\[Nu], 0, \[Infinity]}];
The formula itself probably does not matter, but I see that Mathematica spends a lot of time and uses just 4% of the system's CPU. Is this usual? I experience similar problems when running code with this copy of Mathematica 10, which has a campus license.

Juan Jose Garcia Ripoll, 2017-06-20

Page breaks with writing assistant
http://community.wolfram.com/groups/-/m/t/1121632
I have a problem that I hope someone has a solution for. I'm using Writing Assistant to write a book in Mathematica. After selecting the page break button on Writing Assistant, I find that Mathematica inserts two page breaks at the start and end of the (hidden) cell. So when I print the document, it throws a whole blank page.
How do I ensure that only one page break is inserted?

Jonathan Kinlay, 2017-06-16

Biokmod 5.4: A toolbox for biokinetic modeling, free to download
http://community.wolfram.com/groups/-/m/t/615970
Biokmod 5.4 is a Mathematica toolbox for solving systems of differential equations, fitting coefficients, computing convolutions, and more, with applications to modeling linear and nonlinear biokinetic systems. It includes the current ICRP biokinetic models. It can be applied in pharmacokinetics, internal dosimetry, bioassay evaluations, nuclear medicine, and more. The toolbox consists of Mathematica packages and tutorials.
It can be downloaded here:
http://diarium.usal.es/guillermo/biokmod/ .
Additional information:
http://diarium.usal.es/guillermo/files/2015/11/SummaryBiokmod54.pdf
BIOKMODWEB is a web application developed with webMathematica and Biokmod, available at:
http://www3.enusa.es/webMathematica/Public/biokmod.html
Guillermo

Guillermo Sanchez, 2015-11-17

Plot a function of x with different terms on R- and on R+?
http://community.wolfram.com/groups/-/m/t/1122535
Hello,
I want to plot a function of x that has different terms on R-, at 0, and on R+ as one function rather than as a set of functions.
How can I achieve that? There is no such example in the documentation for Plot[].
Thank you.

LV LV, 2017-06-19

Coffee optimization, how to get your cup of joe just right
http://community.wolfram.com/groups/-/m/t/1024265
## Introduction ##
I take my coffee black, so I had no idea that there was a large controversy about when you should add milk to your coffee. However, [@Gary Bass][at0] alerted us to this with [his question][1] in the community. Apparently, the timing of the added milk is critical. If you add the milk right away, the coffee will retain its temperature for a longer time, which is perfect if you plan on drinking it later. If you are short on time, however, and want to save your throat from scalding hot coffee, you might want to save the milk for just before you are about to drink it.
Obviously, this is something that cannot be taken lightly, and some serious simulation is required. I compiled the information from that thread here, for anyone looking to perfect their morning routine. The post includes an explanation of how the model was created. I have also attached the actual model, so if you are just looking for the simulation, check out the summary. Hopefully, you will also gain some insights into how to model events involving states in [SystemModeler][2]:
## Adding Milk to Coffee ##
In SystemModeler, there are at least two approaches you could take when adding the milk to the coffee. Either you could have everything collected into a single component that has an event in it corresponding to the addition of the milk, or you could have a separate component that specifies the addition of milk as a flow over time. I explored both of those scenarios in the attached model.
Approach 1, with a discrete event in the coffee, involves creating a copy of the [HeatCapacitor][3] component and adding some parameters that keep track of the heat capacity of the milk, the amount of milk added, when it is added, etc. As noted in the original thread, the mixed Cp is unknown. A naïve initial approach could be to just add the two heat capacities together. If C is the total heat capacity of the coffee, with or without milk, you could add an equation that says:
C = if time > AddTime then Ccoffee * Vcoffee * 1000 + Cmilk * Vmilk * 1000 else Ccoffee * Vcoffee * 1000;
The coefficients are just there to convert the different units.
The temperature is a bit more difficult: since it varies continuously over time, it is a state, and so can't be changed as easily as the capacity (which changes value only at one discrete instant).
What you have to do with states is use the [reinit(var, newValue)][4] function to reinitialize the variable var to the new value newValue. If you mix fluids together, the new temperature is the new total enthalpy divided by the new heat capacity:
t = (m1 c1 t1 + m2 c2 t2 + ... + mn cn tn) / (m1 c1 + m2 c2 + ... + mn cn)
(from [Engineering Toolbox][5])
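To get a feel for the numbers, the mixing formula can be evaluated directly in the Wolfram Language. The masses, temperatures, and the milk's specific heat below are my own illustrative values, not parameters from the attached model:

    (* mixed temperature = total enthalpy / total heat capacity *)
    mixT[{m1_, c1_, t1_}, {m2_, c2_, t2_}] :=
      (m1 c1 t1 + m2 c2 t2)/(m1 c1 + m2 c2)

    (* 0.2 kg coffee at 90 C (c ~ 4186 J/(kg K)) mixed with
       0.05 kg milk at 5 C (c ~ 3930 J/(kg K)) *)
    mixT[{0.2, 4186., 90.}, {0.05, 3930., 5.}]  (* ~ 73.8 C *)

So a splash of fridge-cold milk knocks roughly 16 degrees off the cup in this scenario, which is the jump the reinit below reproduces.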
In Modelica, we could reinitialize the temperature when the simulation time exceeds the time when the milk should be added, using the following:
when time > AddTime then
reinit(T, (Ccoffee * Vcoffee * 1000 * T + Cmilk * Vmilk * 1000 * MilkTemperature) / C);
end when;
Adding the coffee component, connecting it to a [ThermalConductor][6] component (to represent the cup), and connecting that in turn to a [FixedTemperature][7] component (to represent the room temperature) results in a fairly compact model:
![Diagram with coffee component][8]
If milk is added after 300 seconds, it produces the following simulation:
![Simulation with coffee component][9]
Approach 2 uses a short flow of milk instead of an instantaneous addition. The benefit of this is that you could create your own addition strategy. For example, you could add half of the milk at the beginning and half after 300 seconds, or any arbitrary strategy. For now, I focused on doing it as a 1-second pulse.
An input is added to the coffee, corresponding to the flow of milk. The volume of the milk in the coffee is no longer a parameter but increases with the flow:
der(Vmilk) = u;
And the heat capacity increases with the milk volume:
C = Ccoffee * Vcoffee * 1000 + Cmilk * Vmilk * 1000;
Adding milk will increase the enthalpy in the system, but the increased heat capacity will still cause a drop in temperature:
T = H/C;
der(H) = port.Q_flow + Cmilk * 1000 * u * MilkTemperature;
With H being the enthalpy.
The milk component is simply a pulse from [Pulse][10] that has some additional parameters.
![milk addition component diagram][11]
Everything taken together, we now have an additional component in the coffee cooling model:
![diagram with coffee and milk components][12]
As it should, this approach gives a plot similar to the first one. The only difference is that the milk is added over a duration of 1 second. As the duration approaches zero, the two approaches would converge.
![simulation with coffee and milk][13]
You could use this approach to fit parameters, using the methodology from the [electric kettle][14] example.
## Other Cooling Processes ##
In the model above, we had a very naïve cooling process for our coffee. We assumed it could be described by Newton's law of cooling (which the heat conduction component is based on). In the [original thread][15] a paper is linked that goes into detail on how you might expand the coffee model to include some other forms of cooling.
I will use a [HeatCapacitor][16] component here instead of the coffee component to simplify things, but the two should be interchangeable. The experiment numbers refer to the attached article.
**Experiment 1**
Experiment 1 can be described using standard components from the Modelica.Thermal.HeatTransfer package. The pot will be a HeatCapacitor component, the ambient temperature will be modeled using FixedTemperature, and the convection is modeled using a ThermalConductor, which follows Newton's law of cooling.
![experiment 1 diagram][17]
The G parameter in the ThermalConductor is equivalent to the k parameter they use. From what I could tell, the paper did not include any measurement of the heat capacitance or ambient temperature, so I went with 3 dl of water and 20 degrees Celsius. However, both of these would probably need to be higher to fit their experimental data.
**Experiment 2**
To create experiment 2, I first duplicated experiment 1 by selecting it and pressing Ctrl+D (you can also right-click and select Duplicate). Experiment 2 requires a component like the ThermalConductor, but with an exponent that causes nonlinear behaviour in the heat flow. No such component exists in the Modelica Standard Library, but we can easily create one. I created a new component to be used in experiment 2 by dragging the normal ThermalConductor into Experiment2.
![copy class][18]
And gave it a new name, "ArbitraryExponentConductor".
Now I had to modify it to use the exponent. After opening the new component, I first added a new parameter by right-clicking the parameter view and selecting Insert > Parameter:
![Adding new parameter to model][19]
I used the name x as in the paper and used type Real.
![new parameter window][20]
Now I had to modify the equations so I went into the Modelica Text View (Ctrl+3) and changed the line:
Q_flow = G * dT;
to
Q_flow = G * dT ^ x;
dT corresponds to the temperature difference (tc-ts) in the paper.
Going back into Experiment 2, I changed the normal ThermalConductor by right-clicking it and selecting Change Type. In the dialog, I entered the name of the new type (CofeeCooling.Experiment2.ArbitraryExponentConductor). You can also drag the component from the component browser directly into the field.
![change model quickly][21]
Or, of course, delete the component, drag the new one in, and make new connections.
**Experiment 3**
For experiment 3, you need to add some more components. Start by duplicating experiment 1. Connect the ThermalConductor to a new HeatCapacitor instead of the FixedTemperature. That heat capacitor will be the pot, while the original one will be the coffee. The first ThermalConductor then represents equation 1 in the paper, the transfer of heat from coffee to the pot. Add another ThermalConductor and connect it between the pot and the FixedTemperature to represent equation 5. Also add two BodyRadiation components and connect them from each capacitor to the FixedTemperature. These will represent all the radiation effects described. They are bidirectional, so they represent two equations each (3, 4 and 6, 7). For evaporation, I created a custom component which is described by the equation
port.Q_flow = k * port.T;
where k is the product of the P, l and v parameters described in the paper. You could instead add individual parameters for each of them, as described in the text for experiment 2.
Connect the evaporation to the coffee capacitor.
![experiment 3 model diagram][22]
The fourth experiment is much like the third one. I modified the evaporation component to have the equation
port.Q_flow = k * port.T ^ z;
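A minimal sketch of the custom evaporation component might look like the following (with z = 1 it reduces to the experiment 3 equation; the model and parameter names are my own, only the equation comes from the description above):

    model Evaporation "Evaporative heat loss (sketch)"
      parameter Real k "Lumped coefficient (product of P, l and v from the paper)";
      parameter Real z = 1 "Exponent; z = 1 corresponds to experiment 3";
      Modelica.Thermal.HeatTransfer.Interfaces.HeatPort_a port;
    equation
      // heat flows out of the connected capacitor proportionally to T^z
      port.Q_flow = k * port.T ^ z;
    end Evaporation;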
## Summary & Simulation ##
Okay, so that was the *how*. Now we want to use this model to draw conclusions. I'll use the simplest model here and encourage you to try out the more advanced models yourselves.
Say we want to drink our coffee in 2 minutes, starting from 80°C. Everyone knows that the optimal coffee drinking temperature is 72.34°C. When should we add our milk to get there in 2 minutes?
We can do a parametric simulation in Mathematica to try out two different timings:
addTimes = {0, 110};
sim = WSMSimulate["CoffeeAndMilk.Scenarios.Approach1", WSMParameterValues -> {"AddTime" -> addTimes}];
In the plot, I will add a point that is the optimum temperature at time = 120s. I'll also use a trick to get some nice legends to better understand which curve corresponds to which:
WSMPlot[sim, "coffee.T",
PlotRange -> {{60, 180}, {70, 80}},
Epilog -> Point[{120, 72.34}],
PlotLegends -> Map["Add time = " <> ToString[#] &, addTimes]
]
This produces the following plot:
![plot 1 with 0 and 110][23]
So close. But we can't give up just now. Let us adjust the timing a bit and add the milk right before we want to drink the coffee:
![plot 2 with 0 and 120][24]
That just about does it I'd say.
[at0]: http://community.wolfram.com/web/bassgarys
[1]: http://community.wolfram.com/groups/-/m/t/1021383
[2]: http://www.wolfram.com/system-modeler/
[3]: https://reference.wolfram.com/system-modeler/libraries/Modelica/Modelica.Thermal.HeatTransfer.Components.HeatCapacitor.html
[4]: https://reference.wolfram.com/system-modeler/libraries/ModelicaReference/ModelicaReference.Operators.%27reinit%28%29%27.html
[5]: http://www.engineeringtoolbox.com/mixing-fluids-temperature-mass-d_1785.html
[6]: https://reference.wolfram.com/system-modeler/libraries/Modelica/Modelica.Thermal.HeatTransfer.Components.ThermalConductor.html
[7]: https://reference.wolfram.com/system-modeler/libraries/Modelica/Modelica.Thermal.HeatTransfer.Sources.FixedTemperature.html
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9088mod1.png&userId=554806
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mod1sim.png&userId=554806
[10]: https://reference.wolfram.com/system-modeler/libraries/Modelica/Modelica.Blocks.Sources.Pulse.html
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=milk.png&userId=554806
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mode2.png&userId=554806
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mod2sim.png&userId=554806
[14]: https://www.wolfram.com/system-modeler/examples/consumer-products/electric-kettle-fluid-heat-transfer.html
[15]: http://community.wolfram.com/groups/-/m/t/1021383
[16]: https://reference.wolfram.com/system-modeler/libraries/Modelica/Modelica.Thermal.HeatTransfer.Components.HeatCapacitor.html
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4316mod3.png&userId=554806
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Copy.png&userId=554806
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Addnewparameter.png&userId=554806
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=newparameter.png&userId=554806
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=quickchange.png&userId=554806
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mod3%281%29.png&userId=554806
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test1plot.png&userId=554806
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2plot.png&userId=554806

Patrik Ekenberg, 2017-03-02T17:16:24Z

Using Mathematica in Teaching Differential Equations
http://community.wolfram.com/groups/-/m/t/1124581
I am Brian Winkel, Professor Emeritus (civilian) United States Military Academy, West Point NY USA. I wish to contact colleagues who teach differential equations using or considering using modeling and technology.
I am currently the Director of SIMIODE - Systemic Initiative for Modeling Investigations and Opportunities with Differential Equations, an organization of teachers and students interested in teaching and learning differential equations by using modeling and technology throughout the process. Visit us at www.simiode.org.
We have designed a Student Competition Using Differential Equation Modeling - SCUDEM - for April 2018 (see www.simiode.org/scudem for complete details) and invite schools to host SCUDEM (we have some 60 teams in the US already) and to consider sponsoring a team.
SIMIODE is a 501(c)3 organization and all its resources are freely available under the most generous Creative Commons license. Visit us at www.simiode.org and join. All is FREE at SIMIODE.
![SCUDEM 2018 local site locations in the United States as of 16 June 2017][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SCUDEMSitesMap.jpg&userId=1124219

Brian Winkel, 2017-06-19T23:12:18Z

Solve a problem involving logical programming using "For" function?
http://community.wolfram.com/groups/-/m/t/1123048
I have a small programming problem to solve using Mathematica. Being new to problems involving this sort of logical programming, I am looking for some initial help. I tried to write a script using the function "For", which does not work at all. I would appreciate it if someone could suggest an edit to the script, or any other function more appropriate for solving such problems. Following is the simplified version of the problem I want to solve.
There are four pockets (say a, b, c and d), each having 1 dollar at the beginning of the experiment. Each of these pockets has a potential associated with it, which is a function of an independent variable V (say Fa = 3*V, Fb = 4*V, etc.). On varying the independent variable V, the potentials of the pockets follow distinct trajectories.
Now the condition is that as soon as the difference between two potentials exceeds a fixed constant, all the money in the lower-potential pocket is transferred to the higher-potential pocket. For example, if this constant for pockets a & b is 5, the 1 dollar of a will be transferred to pocket b as soon as Fb - Fa >= 5. Similarly, the now 2 dollars of b will be transferred to another pocket c when Fc - Fb reaches a fixed threshold difference. This redistribution of the 4 dollars continues as the parameter V varies.
I want to solve this problem using Mathematica, so that I can obtain the distribution of the total $4 among the four pockets as a function of the parameter V. Thanks in advance.
The small script which I tried is as follows. Attached is the *.nb file for this script.
Fa = 3*V;
Fb = 4*V;
Fc = 5*V;
Fd = 6*V;
For[a = 1; b = 1; c = 1;
d = 1, {Fb - Fa > 5, b == b + a; a = 0}; {Fc - Fb > 7, c == c + b;
b == 0}; {Fd - Fc > 8, d == c + d; c == 0}, {Do[V, {V, 1, 20, 1}],
Plot[{a, b, c, d}, {V, 1, 20}]}]

S G, 2017-06-19T15:26:46Z

CDF templates for creating textbook notes?
http://community.wolfram.com/groups/-/m/t/1109137
I purchased a license for Mathematica 11 and got it installed this week. So far I'm having a great learning experience with it. One of the main highlights of purchasing Mathematica was CDF documentation. I would like to create notes on textbooks using the exact same feature as shown on this website: [Link][1].
Is this a template created with Mathematica? How do I go about creating a CDF such as this for a textbook I am learning from?
[1]: http://www.wolfram.com/cdf/uses-examples/textbooks.html

Abhilash Sukumari, 2017-05-26T19:18:21Z

Metaprogramming in Wolfram Language
http://community.wolfram.com/groups/-/m/t/1121273
*NOTE: Please see the original version of this post [**HERE**][1]. Cross-posted here per suggestion of [Vitaliy Kaurov][2]*
*Also note: This post has been reposted verbatim, and as such is rather dated. While I believe that it is still mostly accurate, it does not necessarily fully reflect my current views on the subject matter. In particular, a number of newer internal projects have been using metaprogramming techniques in ways not fully reflected here.*
----------
##What this answer is and is not
To avoid some confusion and misunderstanding, let me state right away what is the intended status of this answer.
This answer ***is not***
- A tutorial to the subject
- A systematic, or complete, introduction to the subject
- An authoritative answer putting the final word on the subject
This answer hopefully is
- A (subjective!) overview of various meta-programming techniques in Mathematica, *in the way they are known to me*. I want to explicitly state that I am ***not*** trying to convey any kind of the "common wisdom" here, since the answer is largely based on my own experiences, and I have not seen an overwhelming number of meta-programming examples in Mathematica-related resources I had a chance to get acquainted with (so I may have no idea what the common wisdom is :)).
- A collection of (hopefully relevant) links with some minimal explanations, which would allow the reader to see some examples and applications of metaprogramming in Mathematica, or at least examples of what I consider meta-programming in Mathematica.
- A possible stub for some future answers, so that this larger one could be eventually rewritten and/or split into more focused and narrow ones, as the interest towards some particular forms of metaprogramming in Mathematica is being developed in our community.
##Preamble
Ok, let me give it a shot. I'll start by claiming that Mathematica is very well suited for meta-programming, and one can write much more powerful programs in Mathematica by utilizing it. However, while it *allows* for very interesting and powerful meta-programming techniques, it does not IMO provide a convenient layer of tools to make these techniques more standard and effortless. Particularly painful is the evaluation control (preventing pieces of code from premature evaluation), because of the *absence* of the true quotation mechanism (here I will disagree with some other answers), the infinite evaluation model of Mathematica, and a quite complex core evaluator.
##Enumerating some meta-programming techniques
There are several forms of meta-programming, so let me give a partial list first, and discuss afterwards
- Introspection-based metaprogramming
- Reflection-based metaprogramming (like in say, Java)
- Run-time code generation
- Macros (like in Lisp)
- DSL (domain-specific-language) creation
- ...?
In addition to these, Mathematica has its own meta-programming devices, such as rule-based metaprogramming and the `Block`-related techniques.
##Introspection
Mathematica is IMO very strong here. There are a couple of reasons for this:
- Homoiconic language (programs written in own data structures - Mathematica expressions. This is code-as-data paradigm, like Lisp which uses lists for this)
- One can access global definitions for symbols stored in `OwnValues`, `DownValues`, `SubValues`, `UpValues`, etc., and various other global properties, programmatically.
- Rule-based destructuring techniques (using `Cases` etc) seriously simplify many introspection-related operations
- Mathematica code is "over-transparent" - even pure functions are expressions, available to introspection and destructuring, rather than black boxes. This has its downsides (for example, making a functional abstraction leaky in Mathematica, see the end of [this answer][3]), but it also allows for things like `withGlobalFunctions` macro from [this answer][4], where global function definitions are expanded inside pure functions (that macro also illustrates other meta-programming techniques).
###Automatic dependency tracking
I will give a single simple explicit example of what I mean by introspection here, and supply some references to more involved cases. The following line of code gives all the symbols used to build a given expression `expr`, kept unevaluated:
Cases[Unevaluated[expr], s_Symbol :> HoldComplete[s], {0, Infinity}, Heads -> True]
Note that this will work for *any* Mathematica expression, including a piece of (perhaps unevaluated) Mathematica code.
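For instance, applying it to a toy expression of my own (note that the heads, including `Plus`, are picked up as well):

    (* collect all symbols in Sin[x] + f[y], wrapped in HoldComplete *)
    Cases[Unevaluated[Sin[x] + f[y]], s_Symbol :> HoldComplete[s],
      {0, Infinity}, Heads -> True]

    (* {HoldComplete[Plus], HoldComplete[Sin], HoldComplete[x],
        HoldComplete[f], HoldComplete[y]} *)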
A good illustration of introspection-based meta-programming is the symbol dependency analysis. I gave it a shot [here][5], where I fully used all of the above-mentioned features (homoiconic language, low-level access to symbol's properties, rule-based destructuring). A simpler but practical application of dependency analysis can be found e.g. in the `getDependencies` function from [this answer][6], where I do use the dependencies to dynamically construct a set of symbols which are encapsulated (not easily available on the top-level) but whose definitions must be saved during the serialization of the list object being constructed.
###Working around some language limitations
Sometimes, introspection-based metaprogramming can also be used to go around certain limitations of the language, or to make the language constructs behave the way you want while minimally affecting them. Some examples off the top of my head: [changing the default behavior of the `SaveDefinitions` option for `Manipulate`][7], [making patterns match only children of certain elements][8], and also two functions from [this answer][9]: a function `casesShielded`, which implements a version of `Cases` that shields certain sub-expressions (matching a specific pattern) from the pattern-matcher, and a (rather hacky) function `myCases`, which implements a modified depth-first search where the head is inspected before the elements (this is not what happens in standard `Cases`, which sometimes has unwanted consequences). Yet another example here is the tiny framework I wrote to deal with the leaks of the standard lexical scoping mechanism in Mathematica, which can be found [here][10].
###Summary
To conclude this section, I think that introspection-based meta-programming is a very useful and powerful technique in Mathematica, and the one that is relatively easy to implement without engaging in a fight with the system. I am also positive that it is possible to factor out the most useful introspection primitives and have a higher-level introspection-based metaprogramming library, and hope such a library will emerge soon.
##Reflection - based metaprogramming
This may probably be considered a subset of the introspection-based metaprogramming, but it is particularly powerful for languages which impose more rigid rules on how code is written, particularly OO languages (Java for example). This uniform and rigid structure (e.g. all code is in classes, etc) allows for automatic querying of, for example, the methods called on the object, etc. Mathematica per se is not particularly powerful here, because "too many ways of doing things" are allowed for this to be effective, but one can surely write frameworks and / or DSLs in Mathematica which would benefit from this meta-programming style.
##Run-time code generation
This type of meta-programming can be used relatively easily and brings a lot to the table in Mathematica.
###Automation and adding convenient syntax
I will give a small example from [this answer][11], where an ability to generate a pure function (closure) at run-time allows us to easily define a version of SQL `select` with a more friendly Mathematica syntax, and based on the in-memory Mathematica representation of an SQL table as a nested list:
ClearAll[select, where];
SetAttributes[where, HoldAll];
select[table : {colNames_List, rows__List}, where[condition_]] :=
With[{selF = Apply[Function, Hold[condition] /.
Dispatch[Thread[colNames -> Thread[Slot[Range[Length[colNames]]]]]]]},
Select[{rows}, selF @@ # &]];
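As a quick illustration (a toy table of my own, with symbolic column names, which is what this implementation assumes - the column symbols get replaced by `Slot`s inside the held condition):

    table = {{id, age}, {1, 25}, {2, 17}, {3, 40}};
    select[table, where[age > 18]]

    (* {{1, 25}, {3, 40}} *)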
Please see the aforementioned answer for examples of use. Further developments of these ideas (also based on meta-programming) can be found in [this][12] and [this][13] discussions.
###Making JIT-compiled functions, and using `Compile` in more flexible ways
An important class of applications of run-time code-generation is in improving the flexibility of `Compile`. A simple example would be to create a JIT-compiled version of `Select`, which would compile `Select` with a custom predicate:
ClearAll[selectJIT];
selectJIT[pred_, listType_] :=
selectJIT[pred, Verbatim[listType]] =
Block[{lst},
With[{decl = {Prepend[listType, lst]}},
Compile @@
Hold[decl, Select[lst, pred], CompilationTarget -> "C",
RuntimeOptions -> "Speed"]]];
This function actually illustrates several techniques, but let me first show how it is used:
test = RandomInteger[{-25, 25}, {10^6, 2}];
selectJIT[#[[2]] > 0 &, {_Integer, 2}][test] // Short // AbsoluteTiming
selectJIT[#[[2]] > 0 &, {_Integer, 2}][test] // Short // AbsoluteTiming
(*
==> {0.4707032,{{-6,9},{-5,23},{-4,4},{13,3},{-5,7},{19,22},<<489909>>,{11,25},{-6,5},
{-24,1},{-25,18},{9,19},{13,24}}}
==> {0.1250000,{{-6,9},{-5,23},{-4,4},{13,3},{-5,7},{19,22},<<489909>>,{11,25},{-6,5},
{-24,1},{-25,18},{9,19},{13,24}}}
*)
The second time it was several times faster because the compiled function was memoized. But even including the compilation time, it beats the standard `Select` here:
Select[test,#[[2]]>0&]//Short//AbsoluteTiming
(*
==> {1.6269531,{{-6,9},{-5,23},{-4,4},{13,3},{-5,7},{19,22},<<489909>>,{11,25},{-6,5},
{-24,1},{-25,18},{9,19},{13,24}}}
*)
The other techniques illustrated here are the use of constructs like `Compile@@Hold[...]` to fool the variable-renaming scheme (see e.g. [this answer][14] for a detailed explanation), and the use of `With` and replacement rules (pattern-based definitions) as a code-injecting device (this technique is used very commonly). Another example of a very similar nature is [here][15], and yet another, very elegant example is [here][16].
###Custom assignment operators and automatic generation of function's definitions
Another class of run-time code-generation techniques (somewhat closer to macros in spirit) is to use custom assignment operators, so that you can generate rather complex or large (possibly boilerplate) code from relatively simple specifications. Applications range from relatively simple cases of adding some convenience / syntactic sugar, such as e.g. [here][17] (where we define a custom assignment operator to allow us to use option names directly in code), to making replacements in definitions at definition time, as in the function `lex` from [this answer][18] (see also the code for the `LetL` macro below), to quite sophisticated generation of boilerplate code, as happens e.g. in JLink behind the scenes. For JLink this is a big deal, because this (plus, of course, the great design of JLink and Java reflection) is the reason why JLink is so much easier to use than MathLink.
###Automating error-handling and generating boilerplate code
Yet another use for run-time code generation (similar to the previous) is to automate error-handling. I discussed one approach to that [here][19], but it does not have to stop there - one can go much further in factoring out (and auto-generating) the boilerplate code from the essential code.
###A digression: one general problem with various meta-programming techniques in Mathematica
The problem with this and the previous classes of use cases, however, is the lack of composition: you cannot generally define several custom assignment operators and be sure that they will always work correctly in combination. To do this, one has to write a framework which would handle composition. While this is possible to do, the development effort can rarely be justified for simple projects. Having a general library for this would be great, provided that this is at all possible. In fact, I will argue that the lack of composability ("out of the box") is plaguing many potentially great meta-programming techniques in Mathematica, particularly macros.
Note that I don't consider this a fundamental core-language-level problem, since the relevant libraries / frameworks can surely be written. I view it more as a consequence of the extreme generality of Mathematica and of it being in a transition from a niche scientific language to a general-purpose one (in terms of its typical uses, not just capabilities), so I am sure this problem has a solution and will eventually be solved.
###Proper (macro-like) run-time generation of Mathematica code
A final use case for run-time code generation I want to mention is, well, run-time Mathematica code generation. This is also similar to macros (as they are understood in Lisp) in spirit, in fact probably the closest to them of all the techniques I am describing here. One relatively simple example I discuss [here][20], and a similar approach is described [here][21]. A more complex case, involving generation of entire packages, I used for the real-time cell-based code highlighter described [here][22]. There are also more sophisticated techniques of run-time Mathematica code generation, one of which (in a very oversimplified form) I described [here][23].
###Summary
To summarize this section, I view run-time code generation as another meta-programming technique which is absolutely central to make non-trivial things with Mathematica.
##Macros
First, what I mean by macros is probably not what is commonly understood by macros in other languages. Specifically, by macro in Mathematica I will mean a construct which:
- Manipulates pieces of Mathematica code as data, possibly preventing them from (premature) evaluation
- Expands code at run-time (not "read-time" or "compile-time", which are not so well defined in Mathematica)
###Some simple examples
Here is the simplest macro I know of, which allows one to avoid introducing an intermediate variable in cases when something must be done after the result has been obtained:
SetAttributes[withCodeAfter,HoldRest];
withCodeAfter[before_,after_]:=(after;before)
The point here is that the argument `before` is computed before being passed in the body of `withCodeAfter`, therefore evaluating to the result we want, while the code `after` is being passed unevaluated (due to the `HoldRest` attribute), and so is evaluated already inside the body of `withCodeAfter`. Nevertheless, the returned result is the value of `before`, since it stands at the end.
Even though the above macro is very simple, it illustrates the power of macros, since this kind of code manipulation requires special support from the language and is not present in many languages.
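For example (a toy use of my own): the side effect runs after the result has been computed, but the result itself is what gets returned:

    withCodeAfter[2 + 2, Print["computed"]]

    (* prints "computed", then returns 4 *)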
###Tools used for writing macros
The main tools used for writing macros are tools of evaluation control, such as
- `Hold*` attributes,
- `Evaluate` and `Unevaluated`
- code injection using `With` and / or replacement rules
- Pure functions with `Hold` attributes
Even in the simple example above, two of these tools were used (a `Hold` attribute and replacement rules, the latter hidden a bit by using global replacement rules / definitions). A full discussion of the evaluation control constructs themselves is outside the scope of this post, but a few places you can look are [here][24] and [here][25]
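As a tiny illustration of the code-injection item: `With` substitutes the evaluated value into its body before the body evaluates, so the injected piece ends up inside otherwise unevaluated code:

    With[{val = 1 + 1}, Hold[val + 3]]

    (* Hold[2 + 3] *)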
###Typical classes of macros
Macros can widely range in their purpose. Here are some typical classes
- Making new scoping constructs or environments (very typical use case)
- Used in combination with run-time code generation to inject some unevaluated code
- Used in combination with some dynamic scoping, to execute code in some environments where certain global rules are modified. In this case, the "macro" - part is used to delay the evaluation until the code finds itself in a new environment, so strictly speaking these are rather custom dynamic scoping constructs.
###Examples of new scoping constructs / environments
There are plenty of examples of the first type of macros available in the posts on StackOverflow and here. One of my favorite macros, which I will reproduce here, is the `LetL` macro, which allows consecutive bindings for the `With` scoping construct:
ClearAll[LetL];
SetAttributes[LetL, HoldAll];
LetL /: Verbatim[SetDelayed][lhs_, rhs : HoldPattern[LetL[{__}, _]]] :=
Block[{With}, Attributes[With] = {HoldAll};
lhs := Evaluate[rhs]];
LetL[{}, expr_] := expr;
LetL[{head_}, expr_] := With[{head}, expr];
LetL[{head_, tail__}, expr_] :=
Block[{With}, Attributes[With] = {HoldAll};
With[{head}, Evaluate[LetL[{tail}, expr]]]];
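A quick check of the definitions above (each binding can refer to the previous ones):

    LetL[{a = 1, b = a + 1, c = a + b}, a + b + c]

    (* 6 *)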
What it does is expand a single declaration like `LetL[{a=1, b=a+1, c=a+b}, a+b+c]` into a nested `With` at run-time, and it also works for function definitions. I described it more fully [here][26] (where some subtleties associated with it are also described), and used it extensively e.g. [here][27]. A very similar example can be found in [this answer][28]. Yet another example I already mentioned - it is the macro `withGlobalFunctions` from [this answer][29], which expands all generically-defined (via patterns) global functions. The last example I want to include here (although it is also relevant for the third use case) is a macro for performing a code cleanup, discussed [here][30], and I particularly like the version by @WReach, which I will reproduce here:
SetAttributes[CleanUp, HoldAll]
CleanUp[expr_, cleanup_] :=
Module[{exprFn, result, abort = False, rethrow = True, seq},
exprFn[] := expr;
result =
CheckAbort[
Catch[Catch[result = exprFn[]; rethrow = False; result], _,
seq[##] &], abort = True];
cleanup;
If[abort, Abort[]];
If[rethrow, Throw[result /. seq -> Sequence]];
result]
It is not fully "bullet-proof", but does a really good job in the majority of cases.
###Examples of run-time code generation / new functionality
Actually, many of the above examples also qualify here. I'll add just one more here (in two variations): the abortable table from [this answer][31] (I will reproduce the final version here):
ClearAll[abortableTableAlt];
SetAttributes[abortableTableAlt, HoldAll];
abortableTableAlt[expr_, iter : {_Symbol, __} ..] :=
Module[{indices, indexedRes, sowTag, depth = Length[Hold[iter]] - 1},
Hold[iter] /. {sym_Symbol, __} :> sym /. Hold[syms__] :> (indices := {syms});
indexedRes = Replace[#, {x_} :> x] &@ Last@Reap[
CheckAbort[Do[Sow[{expr, indices}, sowTag], iter], Null],sowTag];
AbortProtect[
SplitBy[indexedRes, Array[Function[x, #[[2, x]] &], {depth}]][[##,1]] & @@
Table[All, {depth + 1}]
]];
(it accepts the same syntax as `Table`, including the multidimensional case, but returns the partial list of accumulated results in the case of Abort[] - see examples of use in the mentioned answer), and its version for a conditional `Table`, which only adds an element if a certain condition is fulfilled - it is described [here][32]. There are of course many other examples in this category.
###Examples of dynamic environments
Dynamic environments can be very useful when you want to modify certain global variables or, which is much less trivial, functions, for a particular piece of code, so that the rest of the system remains unaffected. The typical constructs used to achieve this are `Block` and ``Internal`InheritedBlock``.
The simplest and most familiar dynamic environment is obtained by changing the values of `$RecursionLimit` and / or `$IterationLimit` inside a `Block`. Some examples of use for these are in [my answer][33] in the discussion of tail call optimization in Mathematica. For a more complex example, see [my suggestion][34] for the recent question on convenient string manipulation. Some more examples can be found in my answer to [this question][35]. An example of application of this to profiling can be found [here][36].
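As a minimal toy example of such a dynamic environment (my own, not from the linked posts): raising `$RecursionLimit` only for the duration of a deeply recursive computation, with `Block` restoring the old value automatically on exit:

    ClearAll[fact];
    fact[0] = 1;
    fact[n_Integer?Positive] := n*fact[n - 1];

    (* the default $RecursionLimit would be exceeded for an argument this large *)
    Block[{$RecursionLimit = 10^4}, fact[3000]]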
Again, there are many more examples, many of which I probably missed here.
###Problems with writing macros in Mathematica
To my mind, the main problems with writing and using macros consistently in Mathematica are these:
- Hard to control evaluation. There is no *real* quotation mechanism (`Hold` and `HoldComplete` don't count because they create extra wrappers, and `Unevaluated` does not count since it is not permanent and is stripped during evaluation)
- Macros as described above are expanded from the outside in. Coupled with the lack of a *real* quotation mechanism, this leads to the absence of true macro composition out of the box. Such composition can be achieved, but it takes some effort
- The lack of a real compilation stage (definition time does not fully count, since most definitions are delayed).
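To illustrate the point about `Unevaluated`: it protects an expression only for a single function application, after which it is stripped, so it cannot serve as a permanent quotation mechanism:

    Head[Unevaluated[2 + 3]]   (* Plus: protected for this single call *)

    g[x_] := x^2;
    g[Unevaluated[2 + 3]]      (* 25: the wrapper is stripped once inside g, so 2 + 3 evaluates *)
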
To circumvent these issues, one has to apply various techniques, such as
- [Trott - Strzebonski in-place evaluation technique][37] to evaluate parts of held expressions in-place (see also [this answer][38] for some more details on that)
- A technique which I call (for lack of a better name) "inverse rule-dressing", which exploits the properties of delayed rule substitution (delayed, plus intrusive) to inject some unevaluated code. I used it in the first solution in [this answer][39], in a more complex way in the `SavePointers` function in [this answer][40], and in a number of other cases. It has also been used very elegantly in [this answer][41].
- Using a custom `Hold`-like wrapper which is first mapped onto (possibly all) parts of an expression, and later removed using rules. Two examples of this technique are [here][42] and [here][43]
- ...
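For instance, the first technique in the list uses the semantics of `RuleDelayed` combined with the `With` / `Condition` trick to evaluate a matched part in place while leaving the result inside the held expression (a minimal sketch):

    Hold[2 + 2, 3 + 3] /. x_Plus :> With[{eval = x}, eval /; True]
    (* Hold[4, 6] *)

The `eval /; True` condition forces `eval` to be computed when the rule fires, and the result is spliced back into `Hold` without ever releasing the surrounding expression.
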
While all these techniques are useful and, taken together, cover most macro-writing needs, the need to use them (often in combination) and the resulting code complexity show, to my mind, a serious need for a generic library providing simpler means for macro-writing. I would prefer to be able to nest macros and think not about the zillion things that may go wrong because of some unwanted evaluation, but rather about the things that really matter (such as variable capture).
###Summary
Macros are another very powerful meta-programming technique. While it *is* possible to write them in Mathematica, it is, as of now, a rather involved undertaking, and composing macros is an even harder task. Because composition is the key, I attribute the fact that macros are not in widespread use in Mathematica programming to this lack of composition, plus the complexity of writing individual macros. That said, I think this is a very promising direction, and I hope that some time soon we will have tools which make writing macros a simpler and more automatic process.
##DSL creation
I won't say much here, except to note that this is entirely possible in Mathematica, and that some nice syntax can be added easily via `UpValues`.
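As a tiny sketch (the head `myData` is a hypothetical name, not from any actual library): `UpValues` let a custom head overload built-in operators without modifying their definitions, which is often the easiest way to give a small DSL pleasant syntax.

    ClearAll[myData];
    myData /: Plus[myData[a_], myData[b_]] := myData[a + b];
    myData /: Times[k_?NumericQ, myData[a_]] := myData[k*a];

    myData[{1, 2}] + myData[{3, 4}]
    (* myData[{4, 6}] *)

Because the rules are attached to `myData` rather than to `Plus` or `Times`, they fire only for expressions containing that head, leaving the rest of the system untouched.
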
##Final remarks
I think that meta-programming is one of the most important and promising directions in the present and future of Mathematica programming. It is also rather complex, and IMO, largely unexplored in Mathematica still. I hope that this justifies this post being so long.
I tried to summarize the various approaches to meta-programming in Mathematica that I am aware of, and to give references to examples of these approaches, so that the reader can look for him/herself. Since meta-programming is a complex topic, I did not attempt to write a tutorial, but rather tried to summarize various experiences of myself and others to produce a kind of reference.
One may notice that the references are dominated by code I wrote. One reason for that is that I am a heavy user of meta-programming in Mathematica. Another reason is that everyone remembers their own code best. I apologize for not including other references which did not come to mind right away, and I invite everyone to edit this post and add references which I missed.
[1]: https://mathematica.stackexchange.com/a/2352/81
[2]: http://community.wolfram.com/web/vitaliyk
[3]: https://stackoverflow.com/questions/4430998/mathematica-what-is-symbolic-programming/4435720#4435720
[4]: https://mathematica.stackexchange.com/questions/704/functions-vs-patterns/746#746
[5]: https://stackoverflow.com/questions/8867757/has-anyone-written-any-function-to-automatically-build-a-dependency-graph-of-an/8869545#8869545
[6]: https://mathematica.stackexchange.com/questions/36/file-backed-lists-variables-for-handling-large-data/209#209
[7]: https://stackoverflow.com/questions/6579644/savedefinitions-considered-dangerous/6580284#6580284
[8]: https://stackoverflow.com/questions/6451802/pattern-to-match-only-children-of-certain-elements/6453673#6453673
[9]: https://stackoverflow.com/questions/8700934/why-is-cases-so-slow-here-are-there-any-tricks-to-speed-it-up/8701756#8701756
[10]: https://gist.github.com/1683497
[11]: https://stackoverflow.com/questions/4787901/data-table-manipulation-in-mathematica/4788373#4788373
[12]: https://stackoverflow.com/questions/8240943/data-table-manipulation-in-mathematica-step-2
[13]: https://stackoverflow.com/questions/6130276/conditionnal-data-manipulation-in-mathematica
[14]: https://stackoverflow.com/questions/6236458/plot-using-with-versus-plot-using-block-mathematica/6236808#6236808
[15]: https://stackoverflow.com/questions/4973424/in-mathematica-how-do-i-compile-the-function-outer-for-an-arbitrary-number-of/4973603#4973603
[16]: https://stackoverflow.com/questions/8204784/how-to-compile-a-function-that-computes-the-hessian/8210224#8210224
[17]: https://stackoverflow.com/questions/4682742/optional-named-arguments-without-wrapping-them-all-in-optionvalue/4683924#4683924
[18]: https://mathematica.stackexchange.com/questions/1602/resource-management-in-mathematica/1603#1603
[19]: https://stackoverflow.com/questions/6560116/best-practices-in-error-reporting-mathematica/6563886#6563886
[20]: https://stackoverflow.com/questions/6214946/how-to-dynamically-generate-mathematica-code/6215394#6215394
[21]: https://stackoverflow.com/questions/8741671/unevaluated-form-of-ai/8742627#8742627
[22]: https://mathematica.stackexchange.com/questions/1315/customizing-syntax-highlighting-for-private-cell-styles/1320#1320
[23]: https://stackoverflow.com/questions/8741671/unevaluated-form-of-ai/8746584#8746584
[24]: https://stackoverflow.com/questions/4856177/preventing-evaluation-of-mathematica-expressions
[25]: https://stackoverflow.com/questions/1616592/mathematica-unevaluated-vs-defer-vs-hold-vs-holdform-vs-holdallcomplete-vs-etc
[26]: https://stackoverflow.com/questions/5866016/question-on-condition/5869885#5869885
[27]: https://mathematica.stackexchange.com/questions/36/file-backed-lists-variables-for-handling-large-data/209#209
[28]: https://stackoverflow.com/questions/8373526/error-generating-localized-variables-as-constants/8377522#8377522
[29]: https://mathematica.stackexchange.com/questions/704/functions-vs-patterns/746#746
[30]: https://stackoverflow.com/questions/3365794/reliable-clean-up-in-mathematica
[31]: https://stackoverflow.com/questions/6470625/mathematica-table-function/6471024#6471024
[32]: https://stackoverflow.com/questions/6367932/generate-a-list-in-mathematica-with-a-conditional-tested-for-each-element/6368770#6368770
[33]: https://stackoverflow.com/questions/4481301/tail-call-optimization-in-mathematica/4627671#4627671
[34]: https://mathematica.stackexchange.com/questions/344/convenient-string-manipulation/377#377
[35]: https://mathematica.stackexchange.com/questions/1162/alternative-to-overloading-set
[36]: https://mathematica.stackexchange.com/questions/1786/workbench-profile-question/1798#1798
[37]: http://library.wolfram.com/conferences/devconf99/villegas/UnevaluatedExpressions/Links/index_lnk_30.html
[38]: https://stackoverflow.com/questions/6633236/replace-inside-held-expression/6633334#6633334
[39]: https://stackoverflow.com/questions/6234701/how-to-block-symbols-without-evaluating-them/6236264#6236264
[40]: https://stackoverflow.com/questions/6579644/savedefinitions-considered-dangerous/6580284#6580284
[41]: https://mathematica.stackexchange.com/questions/1929/injecting-a-sequence-of-expressions-into-a-held-expression/1937#1937
[42]: https://stackoverflow.com/questions/5747742/uses-for-mapall/5749275#5749275
[43]: https://mathematica.stackexchange.com/questions/2137/truncate-treeform-to-show-only-the-top/2139

Leonid Shifrin, 2017-06-16T12:08:12Z

Create a color 3D slicer?
http://community.wolfram.com/groups/-/m/t/1122480
Hello All!
I've decided to use Wolfram as a platform to learn how to code a color 3D slicer. I'm a total newbie when it comes to coding anything other than HTML. Any advice, suggestions, or help would be highly appreciated, e.g. how to incorporate color information into the slices. Thanks!

EDWARD AYLWARD, 2017-06-19T04:58:40Z

Set parameters of methods such as "DBSCAN" and "TSNE"?
http://community.wolfram.com/groups/-/m/t/1122326
I don't even know what parameters the method has.
For example, when I use the function
    DimensionReduce[data, Method -> "TSNE"]
I don't know what parameters "TSNE" has. And can I run "TSNE" with the 'barneshut' algorithm?

Huadun Wang, 2017-06-18T08:10:58Z

[✓] Treat indexed objects as variables?
http://community.wolfram.com/groups/-/m/t/1122353
It seems Mathematica doesn't recognize a[1] + 1 as the same variable a[1] plus one:
    a[1] = \[Omega]
    a[2] = l
    V[x_, a[1], a[2]] := 1/4*a[1]^2*x^2 + a[2]*(a[2] + 1)/x^2
    V[x, a[1] + 1, a[2]]

deimos1990, 2017-06-18T14:59:17Z

Train large deep learning NN in true batch mode?
http://community.wolfram.com/groups/-/m/t/1121051
When I am training a DNN (Deep Neural Network) a typical command is:
    NetTrain[trainCNN4a1,
     TrainSet, {"TrainedNet", "LossEvolutionPlot",
      "RMSWeightEvolutionPlot", "RMSGradientEvolutionPlot",
      "TotalTrainingTime", "MeanBatchesPerSecond",
      "MeanInputsPerSecond", "BatchLossList", "RoundLossList",
      "ValidationLossList"}, ValidationSet -> Scaled[0.2],
     Method -> {"SGD", "Momentum" -> 0.95},
     TrainingProgressReporting -> "Print", MaxTrainingRounds -> 5,
     BatchSize -> 256];
This works fine for smaller training sets, but it eventually fails (even on my 32 GB iMac) once the training sets get truly large (>100K images).
How can I use NetTrain[ ] so that it does not require the full training set (and validation set) to be loaded as an in-memory object (in this example, TrainSet)?
Ideally I want to have these image files in folders, where the folder name delineates the "tag". Then NetTrain[ ] grabs from these folders the necessary files for training, but in a way that does not destroy computer performance.
Is this a DIY project?
Any help on this critical issue is appreciated.

Bryan Minor, 2017-06-15T16:46:02Z

[GIF] Rise Up ((29, 5)-torus knot)
http://community.wolfram.com/groups/-/m/t/1122344
![(29,5)-torus knot][1]
**Rise Up**
Continuing the torus knot theme ([1][2], [2][3], [3][4]). This is just a simple rotation of a $(29,5)$-torus knot. It's entirely three-dimensional, but because it's much simpler to parametrize torus knots on the Clifford torus in 4D, I am as usual parametrizing there and then stereographically projecting to 3D.
Here's the code:
    Stereo3D[{x1_, y1_, x2_, y2_}] := {x1/(1 - y2), y1/(1 - y2), x2/(1 - y2)};
    pqtorus[t_, θ_, p_, q_] := 1/Sqrt[2] {E^(p I (t + θ/p)), E^(q I t)};
    With[{viewpoint = {0, 3, 0}, n = 450*29, p = 29, q = 5,
      cols = RGBColor /@ {"#F21368", "#22C7A9", "#474655"}},
     Manipulate[
      Graphics3D[
       {Tube[Table[Stereo3D[Flatten[ReIm /@ pqtorus[t, -θ, p, -q]]], {t, 0., 2 π, 2 π/n}], .07]},
       PlotRange -> 2.7, ViewPoint -> viewpoint, ViewAngle -> π/9,
       ViewVertical -> {0, 0, -1}, Boxed -> False,
       Background -> cols[[-1]], ImageSize -> 540,
       Lighting -> {{"Point", cols[[1]], {3/4, 0, 0}}, {"Point", cols[[2]], {-3/4, 0, 0}},
        {"Ambient", cols[[-1]], viewpoint}, {"Point", Darker[cols[[-1]], .87], viewpoint}}],
      {θ, 0, 2 π/q}]
     ]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=knots49Lr.gif&userId=610054
[2]: http://community.wolfram.com/groups/-/m/t/1099081
[3]: http://community.wolfram.com/groups/-/m/t/1100242
[4]: http://community.wolfram.com/groups/-/m/t/1115846

Clayton Shonkwiler, 2017-06-18T14:41:30Z

Make a list of checkboxes that exclude each other?
http://community.wolfram.com/groups/-/m/t/1119969
I am looking for a simple way to make a list of checkboxes that exclude each other.
Suppose one has the following checkboxes:
    Checkbox[Dynamic[p]]
    Checkbox[Dynamic[q]]
    Checkbox[Dynamic[r]]
How do I make it so that checking one of them unchecks the other two, so that at any time only one of p, q, or r is true and the other two are false?
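One possible approach (a sketch, not from the original post): instead of three independent booleans, drive all three checkboxes from a single shared selection variable via the two-argument form of `Dynamic`, using string tags "p", "q", "r" as hypothetical labels:

    sel = None;
    Row[{Checkbox[Dynamic[sel === "p", (sel = If[#, "p", None]) &]],
         Checkbox[Dynamic[sel === "q", (sel = If[#, "q", None]) &]],
         Checkbox[Dynamic[sel === "r", (sel = If[#, "r", None]) &]]}]

Each checkbox displays as checked only when `sel` equals its own tag, so checking one automatically unchecks the others.
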
Thanks in advance for your help.

Laurens Wachters, 2017-06-14T11:12:56Z