Community RSS Feed
https://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Wolfram Language sorted by active
Twogether: A webapp listing all the films given two actors
https://community.wolfram.com/groups/-/m/t/1616053
Can you name all the movies that included...
Spencer Tracy and Katharine Hepburn?
Bill Murray and Dan Aykroyd?
Corey Haim and Corey Feldman?
I wrote a webapp in the Wolfram Language that takes two actors as input and lists all the movies they were cast in together as output.
I call the app Twogether, and you can find it here:
https://www.wolframcloud.com/objects/mitchell/twogether
Please enjoy! Thanks.
Mitchell Szczepanczyk 2019-02-19T18:22:58Z
Limit of a derivative to the imaginary
https://community.wolfram.com/groups/-/m/t/1615756
In an attempt to calculate the limit of a derivative at imaginary x, I found a limit for the derivative function of 2*Pi*r: no matter how big the radius, the limit of the function always tends to 0.81i (imaginary). Investigating further, I found this useful for making a wormhole from two hyperboloids separated from each other, simply by setting the equation equal to negative Pi; this opens a hole between the hyperbolas and connects one to the other. I then wanted to manipulate the value of the right side of the equation to open up the possibility that the expansion of the hole leads to a finite/infinite no-border-limit universe, but it is just a sketch. I hope you see the novelty of this approach to resolving the derivative of a complex number in a new manner.
c=300000000
n=RandomInteger[100,{200}]
r=RandomReal[5.29*10^-11,{200}]
f=(((Pi+1)*r)*Sqrt[(-2*Pi*r)/((Pi+1)*r)])/((Sqrt[(2*Pi*r)^2+2*Pi*r]))
g=2*Pi*(r+f)
gg=2*Pi*r
h=((c-c*Power[c, (c)^-1])/f)^2
Plot[f^-1,{f,-Pi,Pi}]
t=Table[Log[f,g],Pi]
Plot[t,{t,-Pi,Pi}]
ParametricPlot[g^-1*gg,{g,-Pi,Pi}]
ContourPlot[f*g,{f,0,4Pi},{g,0,4Pi}]
ContourPlot3D[f*h-gg^2==Pi,{f,-2Pi,2Pi},{h,-2Pi,2Pi},{gg,-2Pi,2Pi}]
Play[f,{f,0,90}]
The same code once more, but with the ContourPlot3D equation set equal to negative Pi:
ContourPlot3D[f*h-gg^2==-Pi,{f,-2Pi,2Pi},{h,-2Pi,2Pi},{gg,-2Pi,2Pi}]
![a way to avoid singularities][1]
![making the value of the equation equal negative Pi][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=IMAGINARYLIMITCIRCLE0005.jpg&userId=1147177
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=wormholelimitedexaimaginario2.jpg&userId=1147177
Luis Felipe Massena Misiec 2019-02-19T14:42:24Z
3D plot from 1×n (one column) vector
https://community.wolfram.com/groups/-/m/t/1613881
Hello guys!
I want to make a 3D plot from a 1×n column vector containing numeric data. I have used ListPlot3D, but it's not working, as I need to convert the 1×n column vector into an n×n matrix first; then I can use the command ListPlot3D[{{x1,y1,z1},{x2,y2,z2},…}]. Please guide me on how to convert a 1×n column vector into a matrix of order n×n, or whether there is another way to get a 3D plot in Mathematica. I am very new to Mathematica programming and need your help to sort out my problem. Highly appreciated!
khan nm 2019-02-15T14:24:47Z
Matrix Numerov Function which can be Generalized to Many Potentials
https://community.wolfram.com/groups/-/m/t/1607989
I have written a function that implements the Matrix Numerov method in a similar manner to [Pillai et al. at UW-Madison][1], except that this script can quickly be generalized to other potential functions and run in two lines of code. I have tested it on the V-shaped potential, which can also be solved with Airy functions. Thus, you can accurately approximate eigenvalues and eigenvectors for a variety of potentials.
**First, a Derivation:**
The Numerov algorithm is for solving differential equations of the form $$y''(x)=-g(x)*y(x) + s(x)$$ Invoke a Taylor expansion of y(x) about $x_0:$
$$y(x)=
y(x_0)
+(x-x_0)\;
y'(x_0)
+\frac{(x-x_0)^2 }{2!}
y''(x_0)
+\frac{(x-x_0)^3}{3!}
y'''(x_0)
+\frac{(x-x_0)^4}{4!}
y^{iv}(x_0)+...$$
Rewrite, letting $h$ equal the distance from point $x_0$ to $x$ such that $y(x)=y(x_0+h)$:
$$y(x_0+h)
=y(x_0)
+h\;y'(x_0)
+\frac{h^2}{2!} y''(x_0)
+\frac{h^3}{3!}\;y'''(x_0)
+\frac{h^4}{4!}
y^{iv}(x_0)+...$$
Define a grid of x values and let $(x_0+h)=x_{n+1}$
and $x_0=x_n$.
Then a step forward (toward positive x) is:
$$y_{n+1}
=y_{n}+
h\;y'(x_{n} )
+\frac{h^2}{2!}\;y''(x_{n})
+\frac{h^3}{3!}\;y'''(x_{n})+
\frac{h^4}{4!}
y^{iv}(x_n)+...$$
That was a step forwards. For a step backwards we replace $h$ with $-h$:
$$y_{n-1}
=y_{n}-
h\;y'(x_{n} )
+\frac{h^2}{2!}\;y''(x_{n})
-\frac{h^3}{3!}\;y'''(x_{n})
+\frac{h^4}{4!}
y^{iv}(x_n)+...$$
Add the two equations together and you get:
$$y_{n+1}
+y_{n-1}
=2 y_{n}
+h^2 y''(x_{n})
+\frac{h^{4}}{12} y^{iv}(x_n)
+O(h^6)$$
Rearrange:
$$y_{n+1}
-2y_n
+y_{n-1}
=\mathbf {h^2 y''(x_n)}
+\mathbf{\frac{h^4}{12} y^{iv}(x_n)}+O(h^6)$$
We will largely ignore $O(h^6)$ and substitute for the bold $h^2 y''(x_n)$
and $\frac{h^4}{12} y^{iv}(x_n)$ above.
For $\mathbf{h^2 y''(x_n)}$ we use the original differential equation:
$y''(x_n)
=-g_n
y_n
+s_n$.
For the bold
$\mathbf{\frac{h^4}{12} y^{iv}(x_n)}$, we figure that:
$$\frac{h^4}{12}\frac{d^2}{dx^2}
(y''(x_n))
=\frac{h^4}{12}\frac{d^2}{dx^2}
(-g_n
y_n
+s_n)$$
Now we just learned (rearranging the prior $y_{n+1}
-2y_n
+y_{n-1}$
equation again), that for generic $y(x)$,
$$y''(x_n)
=\frac{y_{n+1}
-2y_n
+y_{n-1}-O(h^4)}{h^2}$$
Use that idea twice to express the pieces of $\frac{d^2}{dx^2}
(-g_n
y_n
+s_n)$ and you get:
$$\frac{h^4}{12}\frac{d^2}{dx^2}
(-g_n
y_n
+s_n)
=\frac{h^4}{12}
\left[
\frac{-(g_{n+1}
y_{n+1}
-2g_n
y_n
+g_{n-1}
y_{n-1})}{h^2}
+\frac{s_{n+1}
-2s_n
+s_{n-1}}
{h^2}
\right]$$
...which is then equal to:
$$\frac{h^2}{12}
[-(g_{n+1}
y_{n+1}
-2g_{n}
y_{n}
+g_{n-1}
y_{n-1})
+s_{n+1}
-2s_{n}
+s_{n-1}]$$
Now sub that back into the equation with the boldface:
$$y_{n+1}
-2y_n
+y_{n-1}
=h^2(-g_n
y_n
+s_n)
+\frac{h^2}{12}
[-g_{n+1}
y_{n+1}
+2g_{n}
y_{n}
-g_{n-1}
y_{n-1}
+s_{n+1}
-2s_{n}
+s_{n-1}]$$
Divide both sides by $h^2$:
$$\frac{y_{n+1}
-2y_n
+y_{n-1}}{h^2}
=(-g_n
y_n
+s_n)
+\frac{1}{12}
[-g_{n+1}
y_{n+1}
+2g_{n}
y_{n}
-g_{n-1}
y_{n-1}
+s_{n+1}
-2s_{n}
+s_{n-1}]$$
...which is equal to (combining terms):
$$\frac{-1}{12}(g_{n+1}
y_{n+1}
+10g_n
y_n
+g_{n-1}
y_{n-1})
+\frac{1}{12}
(s_{n+1}
+10s_n
+s_{n-1})$$
Now, applying the regular Numerov algorithm might consist of picking two points $y_{n-1}$
and $y_n$, using the above formula to generate
$y_{n+1}$ and iterating over all points
$n$ in the $x$ grid. But here I will substitute into the one-dimensional time-independent Schrödinger equation and introduce the matrix method.
*Rearrange the Schrödinger Equation*:
The one-dimensional Schrödinger equation gives $$\frac{-\hbar^2}{2m}\frac{d^2\psi}{dx^2}
+V(x)\psi(x)=E\psi(x)$$
Rearranging,
$$\frac{d^2\psi}{dx^2}=
\frac{2m}{\hbar^2}V(x)\psi(x)
-\frac{2mE}{\hbar^2}\psi(x)$$
Looking familiar yet? Let $y''(x)=\frac{d^2\psi}{dx^2}$,
and $-g(x)=\frac{2m}{\hbar^2}V(x)$,
$y(x)=\psi(x)$,
and $s(x)=\frac{-2mE}{\hbar^2}\psi(x)$.
Now again define a grid of x values (how to get the grid spacing right to come later...) and let $\psi_n$ represent the value of $\psi(x)$
at the $x_n$'th point.
Now we can use the result above, noting that $\hbar$ is the reduced Planck constant, but $h$ is the grid spacing $\Delta x$:
$$\begin{align}
\frac{y_{n+1}
-2y_n
+y_{n-1}}{h^2}
= & \frac{-1}{12}
\left[
(\frac{-2m}{\hbar^2}V_{n+1})
(\psi_{n+1}
)
+10(\frac{-2m}{\hbar^2}
V_n)
\psi_n
+(\frac{-2m}{\hbar^2}V_{n-1})
(\psi_{n-1}
)\right] \\\
& +\frac{1}{12}
\left[
\frac{-2mE}{\hbar^2}\psi_{n+1}
+10(\frac{-2mE}{\hbar^2})\psi_n
+\frac{-2mE}{\hbar^2}\psi_{n-1}
\right]
\end{align}$$
Multiply everything by $\frac{-\hbar^2}{2m}$ and rearrange a little:
$$\frac{-\hbar^2}{2m}\left[
\frac{\psi_{n+1}
-2\psi_n
+\psi_{n-1}}{h^2}
\right]
+\frac{1}{12}
\left[V_{n+1}
\psi_{n+1}
+10V_n
\psi_n
+V_{n-1}
\psi_{n-1}\right]
=\frac{1}{12}E
\left[
\psi_{n+1}
+10\psi_n
+\psi_{n-1}
\right]$$
And now for...
*The Matrix Form*
Let $\psi(x)$ be represented by a column vector...
I.e.
$$ \begin{pmatrix}
\psi_{1} \\\
\psi_{2} \\\
\psi_{3} \\\
\vdots
\end{pmatrix}
$$
Here each $\psi_i$
is the value of $\psi(x)$
at the grid point $x_i$ (Many apologies if the above shows up as a row vector; I tried).
Now we can express the equation above in matrix format as...
$$\frac{-\hbar^2}{2m}\mathbf{A} \;\mathbf{\psi}
+\mathbf{B}\;\mathbf{V}\;\mathbf{\psi}
=E\;\mathbf{B}\;\mathbf{\psi}$$
Now, $\mathbf{A}$ is equal to...
$\frac{1}{h^2}*$
$$
\begin{pmatrix}
-2 & 1 & 0 & 0 & 0 &\cdots\\\
1 & -2 & 1 & 0 & 0 &\cdots\\\
0 & 1 & -2 & 1 & 0 &\cdots\\\
0 & 0 & 1 & -2 & 1 &\cdots\\\
0 & 0 & 0 & 1 & -2 &\cdots\\\
\vdots & \vdots & \vdots & \vdots & \vdots &\ddots
\end{pmatrix}
$$
And **B** is $\frac{1}{12}*$
$$
\begin{pmatrix}
10 & 1 & 0 & 0 & 0 &\cdots\\\
1 & 10 & 1 & 0 & 0 &\cdots\\\
0 & 1 & 10 & 1 & 0 &\cdots\\\
0 & 0 & 1 & 10 & 1 &\cdots\\\
0 & 0 & 0 & 1 & 10 &\cdots\\\
\vdots & \vdots & \vdots & \vdots & \vdots &\ddots
\end{pmatrix}
$$
Finally, **V** is equal to the diagonal matrix:
$$
\begin{pmatrix}
V_{1} & 0 & 0 & 0 & 0 &\cdots\\\
0 & V_{2} & 0 & 0 & 0 &\cdots\\\
0 & 0 & V_{3} & 0 & 0 &\cdots\\\
0 & 0 & 0 & V_{4} & 0 &\cdots\\\
0 & 0 & 0 & 0 & V_{5} &\cdots\\\
\vdots & \vdots & \vdots & \vdots & \vdots &\ddots
\end{pmatrix}
$$
(Does this remind anyone of Discrete Variable Representation? Mathematica code for that to come soon...)
Left-multiplying by $\mathbf{B^{-1}}$ gives...
$$\frac{-\hbar^2}{2m}\mathbf{B^{-1}\;A\;\psi}
+\mathbf{V\;\psi}
=E\;\psi$$
Rearranging,
$$\left(
\frac{-\hbar^2}{2m}\mathbf{B^{-1}\;A}
+\mathbf{V}
\right)
\psi=E\;\psi$$
And *voila!* we have an equation in the $\mathbf{H} \psi = E \psi$ form.
So $\frac{-\hbar^2}{2m}\mathbf{B^{-1}\;A}$ is your kinetic energy operator. If you're not sure the matrix form is equivalent, pick three elements of the $\psi$ vector to be
$\psi_{n-1}$,
$\psi_n$,
and $\psi_{n+1}$,
and propagate through the matrix algebra element-wise.
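As a quick numerical sanity check of that claim (an added sketch; the grid size here is arbitrary), applying the stencil $\mathbf{B^{-1}A}$ to samples of $\sin(x)$, whose second derivative is $-\sin(x)$, should approximately return the negated samples:

```mathematica
(* Sanity check: B^-1.A should act like d^2/dx^2 on a grid function.
   Sin[x] vanishes at 0 and Pi, matching the stencil's implicit zero
   boundary values, so the interior approximation is clean. *)
nn = 200; h = Pi/(nn + 1);
xs = Table[i h, {i, 1, nn}];
Amat = (DiagonalMatrix[ConstantArray[1., nn - 1], -1] -
    2 DiagonalMatrix[ConstantArray[1., nn]] +
    DiagonalMatrix[ConstantArray[1., nn - 1], 1])/h^2;
Bmat = (DiagonalMatrix[ConstantArray[1., nn - 1], -1] +
    10 DiagonalMatrix[ConstantArray[1., nn]] +
    DiagonalMatrix[ConstantArray[1., nn - 1], 1])/12.;
psi = Sin[xs];
Max[Abs[Inverse[Bmat].Amat.psi + psi]] (* small: the stencil error is O(h^4) *)
```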
I will now define these matrices and operations in Mathematica...
**Mathematica Code**
Grid spacing should be fine enough to have five points in the narrowest half-wavelength. For the problem I was working on, I cared about wavefunctions near an energy of $8.84755*10^{-16}$ ergs, so $p_0 = \sqrt{2 m E} = 1.4327*10^{-19}$, and the narrowest lobe has width $\frac{1}{2}*\frac{h}{p_0}=2.31244*10^{-8}$ (this time $h$ is Planck's constant in erg*sec).
The grid spacing should be $\frac{1}{5}$ of that width, or $4.62488*10^{-9}$... I believe this problem was using inches for some unfathomable reason.
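That arithmetic, written out as a short computation (the constants are taken from the text; the mass is the one passed to the example call further below):

```mathematica
(* Grid-spacing rule of thumb: five points per narrowest half-wavelength *)
hPlanck = 6.62607*10^-27;   (* Planck's constant in erg*sec *)
energy = 8.84755*10^-16;    (* energy of interest, ergs *)
mass = 1.16*10^-23;         (* mass from the example call below *)
p0 = Sqrt[2 mass energy];   (* momentum scale, about 1.4327*10^-19 *)
lobe = (1/2) hPlanck/p0;    (* narrowest half-wavelength, about 2.31244*10^-8 *)
dx = lobe/5                 (* recommended grid spacing, about 4.62488*10^-9 *)
```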
To use this function, first define your potential function as <some name> = <some function>, i.e.:
v1input[x_] = a Abs[x]
(Here a was a constant set equal to 5.368244090805358*10^-9, the slope of the V-shaped potential).
Then, decide your minimum and maximum values of x (advice for the V-shaped potential: go a couple wavelengths beyond the wall at the energy of interest...).
Then employ the following function, where *expr_* is the name of your potential function...
MatrixNumerov[expr_, xmin_, xmax_, dx_, hbar_, m_] :=
Module[{xvals, Vfunction, Vmatrix, A, B, KE, n1, H, eivals, eivecs},
xvals = Table[i, {i, xmin, xmax, dx}];
Vfunction = expr[xvals];
Vmatrix = DiagonalMatrix[Vfunction];
n1 = Length[xvals];
B = (DiagonalMatrix[1 + 0 Range[n1 - Abs[-1]], -1] +
10 DiagonalMatrix[1 + 0 Range[n1 - Abs[0]], 0] +
DiagonalMatrix[1 + 0 Range[n1 - Abs[1]], 1])/12.;
A = (DiagonalMatrix[1 + 0 Range[n1 - Abs[-1]], -1] -
2 DiagonalMatrix[1 + 0 Range[n1 - Abs[0]], 0] +
DiagonalMatrix[1 + 0 Range[n1 - Abs[1]], 1])/((dx)^2);
KE = (-hbar^2/(2 m))*Inverse[B].A;
H = KE + Vmatrix;
{eivals, eivecs} = Eigensystem[H];
{eivals, eivecs}]
Here's an example...
With x running from -2.573102897338254*^-7 to 2.573102897338254*^-7 in increments of 4.624881629283604*^-9,
the code:
MatrixNumerov[v1input, -2.573102897338254`*^-7,
2.573102897338254`*^-7, 4.624881629283604`*^-9, 1.0545718`*^-27,
1.1600000000000001`*^-23]
...returns a list of eigenvalues and a list of eigenvectors, just like a regular Eigensystem[...] command,
with the smallest ten eigenvalues equal to $8.84669*10^{-16}, 8.21013*10^{-16}, 7.55796*10^{-16}, 6.86411*10^{-16}, 6.14806*10^{-16}, 5.36843*10^{-16}, 4.5527*10^{-16}, 3.61808*10^{-16}, 2.60396*10^{-16}, 1.13641*10^{-16}$.
Those ten eigen-energies solved by the Airy method were:
$8.84755*10^{-16}, 8.21055*10^{-16}, 7.5585*10^{-16}, 6.8642*10^{-16},
6.14836*10^{-16}, 5.36824*10^{-16}, 4.55283*10^{-16}, 3.61758*10^{-16},
2.604*10^{-16}, 1.13465*10^{-16}$
So you can see, the approximation is darn close.
This has been adapted from my quantum mechanics midterm (MIT course 5.73 taught by Bob Field Fall 2018) - apologies for any errors/omissions, and I'm afraid I'll have to leave a discussion of how to get the energies by the Airy functions for V-shaped potential with slope = a (above) for another time.
[1]: https://www.physics.wisc.edu/~tgwalker/448-9Mathematica/448%20Mathematica/MatrixNumerov.pdf?fbclid=IwAR0nSY8iBmOsRwqlPg3pemRZJIrxCYF6lsOVE4_sbbNoJlbFLTQvw-WZfBE (June 20, 2012)
Madeleine Sutherland 2019-02-08T05:06:11Z
Upload photocell sensor data from Arduino Yun Rev 2 to Data Drop?
https://community.wolfram.com/groups/-/m/t/1615623
Hi All,
I have created a Databin using Data Drop, connected an [Arduino Yun Rev 2][1] to the internet, and created a functional program that queries a photoelectric sensor for a reading and posts the reading to an LCD screen and a laptop screen. Unfortunately, I am unable to upload the sensor data from the Yun to Data Drop and would appreciate any insights.
Wolfram's [Arduino Yun tutorial][2] suggests including the following code snippet in the Arduino sketch:
//upload to datadrop
DatabinRealAdd(\"" <> bin["ShortID"] <> "\",\"AverageHumidity\",avg);
Prior to adding this code snippet, the entire program compiles and runs; after adding it, the Arduino IDE highlights the newly added snippet and gives the following error:
stray '\' in program
This is the function that includes the code snippet
void photocellReading() // Photoelectric Sensor Reading
{
int readPhotocell = analogRead(photoCell);
if (readPhotocell < 78) // (10,000 lux)(0.00775) = 78 = Full Daylight Reading
{
// Show data on LCD shield screen
lcd.setCursor(0, 0);
lcd.print("ShadyPlant ");
lcd.println(readPhotocell);
lcd.println("");
delay(timerPrint);
// Show data on serial port (laptop screen)
Serial.print("Shady Plant: ");
Serial.println(readPhotocell);
Serial.println("");
delay(timerPrint);
}
else
{
// Show data on LCD shield screen
lcd.setCursor(0, 0);
lcd.print("SunnyPlant ");
lcd.println(readPhotocell);
lcd.println("");
delay(timerPrint);
// Show data on serial port (laptop screen)
Serial.print("Sunny Plant: ");
Serial.println(readPhotocell);
Serial.println("");
delay(timerPrint);
// Upload data online using Wolfram Data Drop
DatabinRealAdd(\"" <> bin["8 character Databin ID"] <> "\",\"PhotoCellReading\",readPhotocell);
}
}
Editing the code snippet to read
DatabinRealAdd(\"" <> bin["8 character Databin ID"] <> "\","PhotoCellReading",readPhotocell);
or to read
DatabinRealAdd("" <> bin["Bsnuo7NG"] <> "",\"PhotoCellReading\",readPhotocell);
... again yields the error
stray '\' in program
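A hedged reading of the error (my interpretation, not part of the original question): the tutorial's snippet is written in Wolfram Language string syntax, where `<>` is StringJoin and `\"` is an escaped quote, so it is presumably meant to be evaluated in Mathematica first; that evaluation splices in the Databin ID and yields a backslash-free C line to paste into the sketch. For example, using the bin ID mentioned above:

```mathematica
(* Evaluating the tutorial's string template in the Wolfram Language;
   the Association here is a stand-in for the actual Databin object *)
bin = <|"ShortID" -> "Bsnuo7NG"|>;
"DatabinRealAdd(\"" <> bin["ShortID"] <> "\",\"PhotoCellReading\",readPhotocell);"
(* -> DatabinRealAdd("Bsnuo7NG","PhotoCellReading",readPhotocell); *)
```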
[1]: https://store.arduino.cc/usa/arduino-yun-rev-2
[2]: https://reference.wolfram.com/language/tutorial/DataDropWithArduinoYun.html#512562408
Steven Lyell 2019-02-19T04:16:51Z
Try to beat these MRB constant records!
https://community.wolfram.com/groups/-/m/t/366628
Map:
- First we have these record number of digits of the MRB constant
computations.
- Then we have some hints for anyone serious about breaking my record.
- Next, we have speed records.
- Then we have a program Richard Crandall wrote to check his code for computing record number of digits.
- Then there is a conversation about whether Mathematica uses the same algorithm for computing MRB by a couple of different methods.
- Then, for a few replies, we compute the MRB constant from Crandall's
eta derivative formulas and see records there.
- Then there are three replies about "NEW RECORD ATTEMPTS OF 4,000,000 DIGITS!", and that computation is now complete!
- Finally, we see where I am on a 5,000,000 digits calculation.
POSTED BY: Marvin Ray Burns.
**MKB constant calculations,**
![enter image description here][1] ,
**have been moved to their own discussion at**
[Calculating the digits of the MKB constant][2].
I think the following important point got buried near the end.
When it comes to the passion that I and a few other people have for calculating many digits of constants, and the dislike a few more people have for it, it all tells us that minds work differently!
The MRB constant is defined below. See http://mathworld.wolfram.com/MRBConstant.html.
$$\text{MRB}=\sum _{n=1}^{\infty } (-1)^n \left(n^{1/n}-1\right).$$
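A quick check of the first digits directly from the definition (not a record attempt; the NSum options mirror the record code further down this post):

```mathematica
(* First digits of the MRB constant from its defining alternating sum *)
NSum[(-1)^n (n^(1/n) - 1), {n, Infinity},
 Method -> "AlternatingSigns", WorkingPrecision -> 30]
(* 0.1878596424620671... *)
```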
Here are some record computations. If you know of any others let me know.
1. On or about Dec 31, 1998, I computed 1 digit of the (additive inverse of the) MRB constant with my TI-92, by adding 1-sqrt(2)+3^(1/3)-4^(1/4) as far as I could and then by using the sum feature to compute $\sum _{n=1}^{1000 } (-1)^n \left(n^{1/n}\right).$ That first digit, by the way, is just 0.
2. On Jan 11, 1999 I computed 3 digits of the MRB constant with the Inverse Symbolic Calculator.
3. In Jan of 1999 I computed 4 correct digits of the MRB constant using Mathcad 3.1 on a 50 MHz 80486 IBM 486 personal computer operating on Windows 95.
4. Shortly afterwards I computed 9 correct digits of the MRB constant using Mathcad 7 professional on the Pentium II mentioned below.
5. On Jan 23, 1999 I computed 500 digits of the MRB constant with the online tool called Sigma.
6. In September of 1999, I computed the first 5,000 digits of the MRB Constant on a 350 MHz Pentium II with 64 Mb of ram using the simple PARI commands \p 5000;sumalt(n=1,((-1)^n*(n^(1/n)-1))), after allocating enough memory.
7. On June 10-11, 2003, over a period of 10 hours, on a 450 MHz P3 with an available 512 MB of RAM, I computed 6,995 accurate digits of the MRB constant.
8. Using a Sony Vaio P4 2.66 GHz laptop computer with 960 MB of available RAM, at 2:04 PM on 3/25/2004 I finished computing 8000 digits of the MRB constant.
9. On March 01, 2006, with a 3 GHz Pentium D with 2 GB of RAM available, I computed the first 11,000 digits of the MRB constant.
10. On Nov 24, 2006, I computed 40,000 digits of the MRB constant in 33 hours and 26 minutes via my own program written in Mathematica 5.2. The computation was run on a 32-bit Windows 3 GHz Pentium D desktop computer using 3.25 GB of RAM.
11. Finishing on July 29, 2007 at 11:57 PM EST, I computed 60,000 digits of the MRB constant, in 50.51 hours on a 2.6 GHz AMD Athlon with 64-bit Windows XP. Max memory used was 4.0 GB of RAM.
12. Finishing on Aug 3, 2007 at 12:40 AM EST, I computed 65,000 digits of the MRB constant, in only 50.50 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 5.0 GB of RAM.
13. Finishing on Aug 12, 2007 at 8:00 PM EST, I computed 100,000 digits of the MRB constant, in 170 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 11.3 GB of RAM; the median (typical) daily record of memory used was 8.5 GB of RAM.
14. Finishing on Sep 23, 2007 at 11:00 AM EST, I computed 150,000 digits of the MRB constant, in 330 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 22 GB of RAM; the median (typical) daily record of memory used was 17 GB of RAM.
15. Finishing on March 16, 2008 at 3:00 PM EST, I computed 200,000 digits of the MRB constant using Mathematica 5.2, in 845 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 47 GB of RAM; the median (typical) daily record of memory used was 28 GB of RAM.
16. Washed away by Hurricane Ike: on September 13, 2008, sometime between 2:00 PM and 8:00 PM EST, an almost complete computation of 300,000 digits of the MRB constant was destroyed. It had computed for a long 4015 hours (23.899 weeks, or 1.4454*10^7 seconds) on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 91 GB of RAM. The Mathematica 6.0 code used follows:
Block[{$MaxExtraPrecision = 300000 + 8, a, b = -1, c = -1 - d,
d = (3 + Sqrt[8])^n, n = 131 Ceiling[300000/100], s = 0}, a[0] = 1;
d = (d + 1/d)/2; For[m = 1, m < n, a[m] = (1 + m)^(1/(1 + m)); m++];
For[k = 0, k < n, c = b - c;
b = b (k + n) (k - n)/((k + 1/2) (k + 1)); s = s + c*a[k]; k++];
N[1/2 - s/d, 300000]]
17. On September 18, 2008, a computation of 225,000 digits of the MRB constant was started with a 2.66 GHz Core2Duo using 64-bit Windows XP. It was completed in 1072 hours. Memory usage is recorded in the attachment pt 225000.xls, near the bottom of this post.
18. A 250,000-digit computation was attempted but failed to complete due to a serious internal error which restarted the machine. The error occurred sometime on December 24, 2008 between 9:00 AM and 9:00 PM. The computation began on November 16, 2008 at 10:03 PM EST. Like the 300,000-digit computation, this one was almost complete when it failed. The max memory used was 60.5 GB.
19. On Jan 29, 2009, 1:26:19 pm (UTC-0500) EST, I finished computing 250,000 digits of the MRB constant with a multiple-step Mathematica command, running on a dedicated 64-bit XP machine with 4 GB of DDR2 RAM on board and 36 GB virtual. The computation took only 333.102 hours. The digits are at http://marvinrayburns.com/250KMRB.txt. The computation is completely documented in the attached 250000.pd at the bottom of this post.
20. On Sun 28 Mar 2010 21:44:50 (UTC-0500) EST, I started a computation of 300,000 digits of the MRB constant using an i7 with 8.0 GB of DDR3 RAM on board, but it failed due to hardware problems.
21. I computed 299,998 digits of the MRB constant. The computation began Fri 13 Aug 2010 10:16:20 pm EDT and ended 2.23199*10^6 seconds later, on Wednesday, September 8, 2010. I used Mathematica 6.0 for Microsoft Windows (64-bit) (June 19, 2007). That is an average of 7.44 seconds per digit. I used my Dell Studio XPS 8100 i7 860 @ 2.80 GHz with 8 GB of physical DDR3 RAM; Windows 7 reserved an additional 48.929 GB of virtual RAM.
22. I computed exactly 300,000 digits to the right of the decimal point of the MRB constant from Sat 8 Oct 2011 23:50:40 to Sat 5 Nov 2011 19:53:42 (2.405*10^6 seconds later). This run was 0.5766 seconds per digit slower than the 299,998-digit computation, even though it used 16 GB of physical DDR3 RAM on the same machine. The working precision and accuracy goal combination were maximized for exactly 300,000 digits, and the result was automatically saved as a file instead of just being displayed on the front end. Windows reserved a total of 63 GB of working memory, of which 52 GB were recorded being used. The 300,000 digits came from the Mathematica 7.0 command
Quit; DateString[]
digits = 300000; str = OpenWrite[]; SetOptions[str,
PageWidth -> 1000]; time = SessionTime[]; Write[str,
NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
WorkingPrecision -> digits + 3, AccuracyGoal -> digits,
Method -> "AlternatingSigns"]]; timeused =
SessionTime[] - time; here = Close[str]
DateString[]
23. 314,159 digits of the constant took 3 tries due to hardware failure. Finishing on September 18, 2012, I computed 314,159 digits, taking 59 GB of RAM. The digits came from the Mathematica 8.0.4 code
DateString[]
NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
WorkingPrecision -> 314169, Method -> "AlternatingSigns"] // Timing
DateString[]
24. Sam Noble of Apple computed 1,000,000 digits of the MRB constant in 18 days 9 hours 11 minutes 34.253417 seconds.
25. Finishing on Dec 11, 2012, Richard Crandall, an Apple scientist, computed 1,048,576 digits in a lightning-fast 76.4 hours (probably processor time), on a 2.93 GHz 8-core Nehalem. **It took until the use of DDR4 to compute nearly that many digits in an absolute time that quick! In Aug of 2018, I computed 1,004,993 digits of the MRB constant in 53.5 hours with 10 processor cores (search this post for "53.5" for documentation). On Sept 21, 2018, I computed 1,004,993 digits of the MRB constant in 50.37 hours of absolute time (35.4 hours processor time) with 18 processor cores (search this post for "50.37 hours" for documentation).**
26. Previously, I computed a little over 1,200,000 digits of the MRB constant in 11 days, 21 hours, 17 minutes, and 41 seconds (finishing on March 31, 2013), using a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz.
27. On May 17, 2013, I finished a 2,000,000-or-more-digit computation of the MRB constant, using only around 10 GB of RAM. It took 37 days 5 hours 6 minutes 47.1870579 seconds on my six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz.
28. A previous world-record computation of the MRB constant was finished on Sun 21 Sep 2014 18:35:06. It took 1 month 27 days 2 hours 45 minutes 15 seconds. The processor time for the 3,000,000+ digit computation was 22 days. I computed the 3,014,991 digits of the MRB constant with Mathematica 10.0, using my new version of Richard Crandall's code in the attached 3M.nb, optimized for my platform and large computations. I also used a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz with 64 GB of RAM, of which only 16 GB was used. Can you beat it (in number of digits, less memory used, or less time taken)? This confirms that my previous "2,000,000 or more digit computation" was actually accurate to 2,009,993 digits; they were used to check the first several digits of this computation. See the attached 3M.nb for the full code and digits.
29. Finished on Wed 16 Jan 2019 19:55:20, I computed over 4 million digits of the MRB constant.
It took 4 years of continuous tries. This successful run took 65.13 days of computation time, with a processor time of 25.17 days, on a 3.7 GHz Intel 6-core computer overclocked up to 4.7 GHz on all cores, with 3000 MHz RAM. According to this computation, the previous record 3,000,000+ digit computation was accurate to 3,014,871 decimals. This computation used my own algorithm for computing n^(1/n), as found in chapter 3 of the paper at
https://www.sciencedirect.com/science/article/pii/0898122189900242
and the 3 million+ computation used Crandall's algorithm. Both algorithms outperform Newton's method per calculation and iteration.
See attached [notebook][3].
M R Burns' algorithm:
x = SetPrecision[x, pr];
y = x^n; z = (n - y)/y;
t = 2 n - 1; t2 = t^2;
x =
x*(1 + SetPrecision[4.5, pr] (n - 1)/t2 + (n + 1) z/(2 n t) -
SetPrecision[13.5, pr] n (n - 1) 1/(3 n t2 + t^3 z));
(*N[Exp[Log[n]/n],pr]*)
Example:
ClearSystemCache[]; n = 123456789;
(*n is the n in n^(1/n)*)
x = N[n^(1/n),100];
(*x starts out as a relatively small precision approximation to n^(1/n)*)
pc = Precision[x]; pr = 10000000;
(*pr is the desired precision of your n^(1/n)*)
Print[t0 = Timing[While[pc < pr, pc = Min[4 pc, pr];
x = SetPrecision[x, pc];
y = x^n; z = (n - y)/y;
t = 2 n - 1; t2 = t^2;
x = x*(1 + SetPrecision[4.5, pc] (n - 1)/t2 + (n + 1) z/(2 n t)
- SetPrecision[13.5, pc] n (n - 1)/(3 n t2 + t^3 z))];
(*You get a much faster version of N[n^(1/n),pr]*)
N[n - x^n, 10]](*The error*)];
ClearSystemCache[]; n = 123456789; Print[t1 = Timing[N[n - N[n^(1/n), pr]^n, 10]]]
Gives
{25.5469,0.*10^-9999984}
{101.359,0.*10^-9999984}
R Crandall's algorithm:
While[pc < pr, pc = Min[3 pc, pr];
x = SetPrecision[x, pc];
y = x^n - n;
x = x (1 - 2 y/((n + 1) y + 2 n n));];
(*N[Exp[Log[n]/ n],pr]*)
Example:
ClearSystemCache[]; n = 123456789;
(*n is the n in n^(1/n)*)
x = N[n^(1/n)];
(*x starts out as a machine precision approximation to n^(1/n)*)
pc = Precision[x]; pr = 10000000;
(*pr is the desired precision of your n^(1/n)*)
Print[t0 = Timing[While[pc < pr, pc = Min[3 pc, pr];
x = SetPrecision[x, pc];
y = x^n - n;
x = x (1 - 2 y/((n + 1) y + 2 n n));];
(*N[Exp[Log[n]/n],pr]*)
N[n - x^n, 10]](* The error*)]; Print[
t1 = Timing[N[n - N[n^(1/n), pr]^n, 10]]]
Gives
{32.1406,0.*10^-9999984}
{104.516,0.*10^-9999984}
More information available upon request.
Here is my mini cluster of the three fastest computers mentioned below:
The one on the left is my custom-built extreme edition 6-core, later upgraded with an 8-core Xeon processor.
The one in the center is my fast little 4-core Asus with 2400 MHz RAM.
The one on the right is my fastest: a Digital Storm 6-core overclocked to 4.7 GHz on all cores and with 3000 MHz RAM.
![enter image description here][4]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5860Capturemkb.JPG&userId=366611
[2]: http://community.wolfram.com/groups/-/m/t/1323951?p_p_auth=W3TxvEwH
[3]: https://community.wolfram.com/groups?p_auth=zWk1Qjoj&p_p_auth=r1gPncLu&p_p_id=19&p_p_lifecycle=1&p_p_state=exclusive&p_p_mode=view&p_p_col_id=column-1&p_p_col_count=6&_19_struts_action=/message_boards/get_message_attachment&_19_messageId=1593151&_19_attachment=4%20million%2011%202018.nb
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ezgif.com-video-to-gif.gif&userId=366611Marvin Ray Burns2014-10-09T18:08:49ZBatch capacities of Mathematica?
https://community.wolfram.com/groups/-/m/t/798054
Dear all,
I am working on Windows 7 or 8 and am trying out the batch capabilities of Mathematica. I have written a very simple notebook:
N[Sin[2]]
Export["essai.txt", N[Sin[2]]]
Exit[]
and I have typed
MathKernel -script essai1.m > essai.out
MathKernel -run "<<essai1.m" > essai.out
math -run essai1.m > essai.out
I have also tried
math < essai1.m > essai1.out &
with the following result
*Mathematica 10.0 for Microsoft Windows (64-bit)
Copyright 1988-2014 Wolfram Research, Inc.
In[1]:=
In[2]:= In[2]:=
In[3]:=
In[4]:=
In[5]:=
In[6]:=
In[7]:= In[7]:= In[7]:= In[7]:=*
Whichever command I use, there is no output. We would like to generate many files with Export in JPEG, PNG, or EPS, but we do not know how.
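For what it's worth, here is a minimal sketch of one pattern that generally avoids the In/Out echo shown above (the Plot export is an added illustration, not from the original notebook): put explicit Print and Export calls in a plain .m file and run it with the kernel's -script option:

```mathematica
(* essai1.m -- run from the command line as:  math -script essai1.m  *)
(* In -script mode nothing is echoed automatically, so write results
   out explicitly. *)
Print[N[Sin[2]]];
Export["essai.txt", N[Sin[2]]];
Export["essai.png", Plot[Sin[x], {x, 0, 2 Pi}]];
```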
Is there someone who could help?
Thanks!
Cyrille Piatecki 2016-02-22T16:49:23Z
Automatic differentiation?
https://community.wolfram.com/groups/-/m/t/1615283
I am wondering whether there are any plans to include Automatic Differentiation functionality in Mathematica. Given the great progress that has been made on Neural Networks, I guess that this functionality is already internally available. It would be great if this were to be made available to users.
Is anybody aware of any effort on this, or any packages that do this?
Asim Ansari 2019-02-18T21:54:53Z
Can Diagonal[] be used in Compile?
https://community.wolfram.com/groups/-/m/t/1614715
Can the Diagonal[] function (which returns the diagonal of a matrix) be used in a Compile function? From the error message in the code below, it seems that Compile thinks the Diagonal function returns a tensor of depth 2, not of depth 1 (as it really does). Is this a bug, or am I misunderstanding something?
Thank you for your help,
DiagCompile =
Compile[{{statearray, _Integer, 2}},
Module[{res = {0, 0, 0, 0}}, res = Diagonal[statearray]; res]]
During evaluation of In[102]:= Compile::cset: Variable res of type {_Integer,1} encountered in assignment of type {_Integer,2}.
Thibaut Jonckheere 2019-02-17T12:02:14Z
[GIF] Breathe (Concentric Star Polygons Gradient) for Meditation
https://community.wolfram.com/groups/-/m/t/1614549
Inspired by Clayton's https://community.wolfram.com/groups/-/m/t/1614126
----------
![embededstars][1]
Using Clayton's code from the link above and applying a trigonometric phase shift to the shading in each frame of the iteration, you can create a GIF to pace your breathing rhythm.
anim1 = With[{\[Delta] = 1/12,
cols = RGBColor /@ {"#07090e", "#2bb3c0", "#faf7f2"}},
Table[Graphics[Reverse[Table[s = Mod[r + i, 3/2];
{Blend[cols, LogisticSigmoid[8 (s - 1/2)]],
Polygon@Map[
RotationTransform[-Sin[s*\[Pi]*3*r]]@*
RotationTransform[3*r*\[Pi]], star52[2*s], {2}]}, {i, 0,
3/2 - #, #}]] &[\[Delta]], PlotRange -> 1, ImageSize -> 540,
Background -> cols[[-1]]], {r, 0, \[Delta], 0.004}]];
And
anim2 = With[{\[Delta] = 1/12,
cols = RGBColor /@ {"#07090e", "#2bb3c0", "#faf7f2"}},
Table[Graphics[Reverse[Table[s = Mod[r + i, 3/2];
{Blend[cols, LogisticSigmoid[8 (s - 1/2)]],
Polygon@Map[
RotationTransform[Sin[s*\[Pi]*3*r]]@*
RotationTransform[-3*r*\[Pi]], star52[2*s], {2}]}, {i, 0,
3/2 - #, #}]] &[\[Delta]], PlotRange -> 1,
ImageSize -> 540, Background -> cols[[-1]]], {r, 0, \[Delta],
0.004}]];
Visualize by
ListAnimate[anim1~Join~Reverse@anim1~Join~anim2~Join~Reverse[anim2]]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3287loop.gif&userId=23928Shenghui Yang2019-02-16T15:15:23Z[GIF] Recede (Concentric circles gradient)
https://community.wolfram.com/groups/-/m/t/1614126
![Concentric circles gradient][1]
**Recede**
Just a very simple gradient with concentric circles. One fun feature is the use of `LogisticSigmoid[]` for the color gradient. Here's the code:
DynamicModule[{s, δ = 1/12, cols = RGBColor /@ {"#07090e", "#2bb3c0", "#faf7f2"}},
Manipulate[
Graphics[
Reverse[
Table[
s = Mod[r + i, 3/2];
{Blend[cols, LogisticSigmoid[8 (s - 1/2)]], Disk[{0, 0}, s]},
{i, 0, 3/2 - #, #}]] &[δ],
PlotRange -> 1, ImageSize -> 540, Background -> cols[[-1]]],
{r, 0, δ}]
]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=disks8Lq.gif&userId=610054Clayton Shonkwiler2019-02-15T18:01:18Zpsfrag for Mathematica 10
https://community.wolfram.com/groups/-/m/t/474155
Hello,
As far as I understand, psfrag is no longer working with Mathematica 10. Does anyone have a solution for this problem or know whether there will be a solution in the near future? Or is there an alternative?
What I want to do is export eps files from Mathematica and include them into Latex with nice Labels.a b2015-04-05T14:34:56ZSolve this system of differential equations with DSolveValue?
https://community.wolfram.com/groups/-/m/t/1614780
Consider the following code:
DSolveValue[{y''[x] - x y'[x] + (y[x])^2 == x, y[1] == 1, y'[x] == 0},
y[x], x]
DSolveValue::overdet: There are fewer dependent variables than equations, so the system is overdetermined
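A likely fix (my sketch, not from the original post): `y'[x] == 0` is read as a third differential equation, which overdetermines the system. If the intended condition is y'(1) = 0, supply it at the point x = 1 instead:

```
(* assuming the intended second condition is y'(1) == 0 *)
DSolveValue[{y''[x] - x y'[x] + (y[x])^2 == x, y[1] == 1, y'[1] == 0},
 y[x], x]
```

For this nonlinear equation a closed form may well not exist, in which case `NDSolveValue` with the same equations and conditions (over a finite x range) is the numerical alternative.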
What should I do?anna sent2019-02-17T19:48:56ZModel a wave equation forced oscillation, A(t)?
https://community.wolfram.com/groups/-/m/t/1613702
Tenenbaum, 28D, p339. Forced undamped oscillation.
y[t]''+w0^2 y[t]==F Sin[w1 t+b]
yc==c1 Sin[w0 t+d]
yp==F/(w0^2-w1^2) Sin[w1 t+b]
(*constants w0,w1,b,d,c1*)
I am told |c|+|F...| is Amplitude(max) (and note is a constant), so that if w0 is near w1, I am told A will be infinite within (one) period. (serway 3rd phy), showing same, says: "due to limitations it will not actually grow infinite"
I am then told (tenenbaum): "if w0 and w1 are equal" (the undet. coeff. solves differently, which i confirmed by doing)
yp==-t F/(2 w0) Cos[w0 t+b]
Which grows infinite over unbounded t (and no doubt has phy limitations) and A==A(t).
My question is how minute differences in w0-w1 can yield a larger Amplitude than equality w0==w1 can (and, at that, in a far shorter time). I suspect my "reasoning" here is simply misled?
My 2nd question is if there are youtube videos or URL showing actual pertinent experiment (or data, but not calculated data) proving the case that small differences are infinitely more powerful than equality - assuming I'm wrong.
I understand F Sin[w0 t+b] (with a w0) may constitute an impossible machine, but if it is, I don't understand its use in the books. I don't understand the result of this "machine" causing a wave to be infinite in one period while it has the same frequency, unless the problem is ... malformed in practice (a silly machine). If I'm wrong here and machines just such as this are (with limitations) common in electronics, please say so. The books introduce it as a simple, physically bound equation, not as any kind of unlikely mathematical machine. Which is confusing.
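A small numerical illustration (my addition, a sketch using the standard textbook solutions with zero initial conditions, w0 = 1, F = 1, b = 0): near resonance the response is a bounded beat, while at exact resonance it grows like t, so equality does win out over any finite mismatch, but only after enough time:

```
(* y'' + y == Sin[w1 t], y(0) == y'(0) == 0 *)
Plot[{Sin[0.95 t]/(1 - 0.95^2) -
   0.95 Sin[t]/(1 - 0.95^2),   (* w1 = 0.95: beats, bounded *)
  (Sin[t] - t Cos[t])/2        (* w1 = w0 = 1: grows like t *)
  }, {t, 0, 200}, PlotLegends -> {"w1 = 0.95", "w1 = w0 = 1"}]
```

Over a short window the near-resonance beat can look larger, but its envelope is capped near |F|/(w0^2 - w1^2), whereas the resonant amplitude is unbounded.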
I think I've just read something wrong or am inexperienced with electronics scope results. Which is why I ask.John Hendrickson2019-02-15T00:16:44ZObtain info about the differential equation of LegendreP with respect to x?
https://community.wolfram.com/groups/-/m/t/1613917
Hello, members of Mathematica community.
I would like to ask you about the method which Mathematica uses to find the roots of differential equation of LegendreP with respect to x.
dLegendreP/dx=0 .
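For reference, here is one quick way to get these roots numerically (my sketch, not necessarily Mathematica's internal method): since dP_n/dx is a polynomial, `NSolve` finds all its roots, and the accuracy is controlled by `WorkingPrecision` rather than a fixed method error:

```
(* roots of d LegendreP[5, x]/dx == 0 on (-1, 1) *)
NSolve[D[LegendreP[5, x], x] == 0 && -1 < x < 1, x,
 WorkingPrecision -> 20]
```

These extrema of P_n are the interior Gauss-Lobatto nodes, so high-precision values are readily attainable this way.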
Also, how large a standard error (or relative error) does this method give for the roots of dP/dx=0.Rasim Bekir2019-02-15T11:07:12Z[✓] Delete a row in a matrix that contains 0s using DeleteCases?
https://community.wolfram.com/groups/-/m/t/1614020
I need code that deletes specific rows in a matrix (full lines), looking for specific values.
This case is OK, and the line {0,0} was deleted accordingly by the code "DeleteCases".
In[1]:= data = {{1, 1}, {2, 2}, {3, 3}, {0, 0}, {4, 4}, {5, 5}, {6, 6}};
In[2]:= DeleteCases[data, {0, 0}]
Out[2]= {{1, 1}, {2, 2}, {3, 3}, {4, 4}, {5, 5}, {6, 6}}
***MY QUESTION:**
However, I need to delete all lines that contain specific values in either column.
Based on the following data, I need code that finds the zeros in all columns and then deletes the rows {0,3} and {6,0}. Can I do it using "DeleteCases"?
data = {{1, 1}, {2, 2}, {0, 3}, {4, 4}, {5, 5}, {6, 0}};
The output should be the following matrix:
{{1, 1}, {2, 2}, {4, 4}, {5, 5}}
Thank you in advance.
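One possible approach (a sketch): `DeleteCases` accepts a pattern, so a row containing a 0 in any position can be matched with `{___, 0, ___}`:

```
data = {{1, 1}, {2, 2}, {0, 3}, {4, 4}, {5, 5}, {6, 0}};
DeleteCases[data, {___, 0, ___}]
(* {{1, 1}, {2, 2}, {4, 4}, {5, 5}} *)
```

An equivalent alternative is `Select[data, FreeQ[#, 0] &]`, which keeps only rows free of zeros.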
GustavoGustavo Dacanal2019-02-15T17:48:10Z[WSC18] Simulating Auditory and Visual deficiencies
https://community.wolfram.com/groups/-/m/t/1383237
#What is this project?
This project is intended to let you experience the world the way other people see and hear it. I especially explore various auditory and visual impairments and simulate their effects. I hope this tool can go on to help friends and families understand what their loved ones are going through.
Overall I set out to create an interactive display of different sensory deficiencies, and I believe in part I achieved my goal. Although I would like to add more things, I got some of the most common/impactful ailments! I used two main sources to acquire my data:
http://www.roger-russell.com/hearing/hearing.htm
https://www.sciencedirect.com/science/article/pii/S167229301150008X?via%3Dihub#bb0005
#What does it look like?
Below are instances of the simulation as well as rough regressions I made to supplement
![Audio Distortion][1]![Visual Distortion][2]
![Female Regression][3]![Male Regression][4]
#How did I Create this project?
In order to simulate frequency loss I used Fourier transforms to isolate and lower (by a certain percentage) the decibel level of ranges of frequencies. To add tinnitus I had to play around with different functions in the audio space, but eventually I was able to simplify it and go on to apply it. The bulk of my code is below; it is the code which implements the Fourier transform.
```
AgeAlter[audio_, age_, gender_] :=
Module[{sampleRate, fromDezibel, amps, kernel, Data},
If[gender == "Male", Data = ApproximateAgeMale[age],
Data = ApproximateAgeFemale[age]];
sampleRate = First@Values@Options[audio, SampleRate];
fromDezibel = x \[Function] Exp[-x/8.685889638065035`];
amps = MapAt[N@BlockMap[Mean, \[Pi] #/sampleRate, 2, 1] &,
MapAt[fromDezibel, Transpose[Data], {2}], {1}];
kernel =
LeastSquaresFilterKernel[amps,
First@Values@Options[audio, SampleRate]];
e = AudioLoudness[
Audio[Map[channel \[Function] ListConvolve[kernel, channel],
AudioData@audio], Sequence @@ Options[audio]]];
If[age < 0, "How Are you Alive???",
If[age > 90,
"You're either Dead or Deaf (Or you're hearing is far too \
negligible to simulate)",
If[age <= 30,
Grid[{{ListLinePlot[AudioLoudness[audio],
PlotLabels -> "What you hear/would hear",
PlotLabel ->
"Decibel level of orignal and modified audio clips",
LabelStyle -> {Black, Bold},
AxesLabel -> {"Seconds", "Decibels"},
ImageSize -> Large]}, {audio}}],
Grid[{{ListLinePlot[{e,
AudioLoudness[
AudioTrim[audio, (-1*(Duration[audio] * .88094189354))],
Alignment -> Center]},
PlotLabels -> {"What you hear", "What you would hear"},
PlotLabel ->
"Decibel level of original and modified audio clips",
LabelStyle -> {Black, Bold},
AxesLabel -> {"Seconds", "Decibels"},
ImageSize -> Large]},
{Audio[
Map[channel \[Function] ListConvolve[kernel, channel],
AudioData@audio], Sequence @@ Options[audio]]}}]]]]]
```
This code essentially takes in an audio file and a corresponding age; it then transforms the audio into Fourier space and maps decibel-lowering amounts across different frequency ranges. Since decibels use a logarithmic scale, I often struggled dealing with different units and converting between them, although a professor helped me work it out. Other than this the rest was not too difficult to implement; the visual aspects were just a simple blurring of the image and extracting different RGB values (using built-in functions).
#In the future
In the future I intend to improve and grow this project, expanding its reach to many different deficiencies, both auditory and visual and beyond. I also hope to use this project, as a demonstration, to help raise awareness for people afflicted with these ailments or, if this project becomes solidified enough, to directly make an impact on those who need it.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4967AudioModel.png&userId=1372232
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10323VIsualModel.png&userId=1372232
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Female2.png&userId=1372232
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Male.png&userId=1372232Sartaj Gulati2018-07-13T20:53:52ZA new site for sharing articles about Mathematica
https://community.wolfram.com/groups/-/m/t/1612093
Having been involved in the Mathematica Stack Exchange community for many years, there was one thing that I felt was sorely missing: a place to share Mathematica tutorials. Mathematica Stack Exchange, as you know, strictly follows the Q&A format.
I have now finally sat down and created the website that I wanted: http://wolframlanguagereviews.org.
Please go there and check out the articles that I've published so far, such as [this one](http://wolframlanguagereviews.org/2018/08/09/domain-decomposition-method-using-graph-coloring/) about domain decomposition.
**It is possible for anyone to publish their articles on this website.**
I think of it as a community project:
- Together we can create an awesome collection of articles that will reach this community and beyond
- Your articles will look great on Wolfram Language Reviews (just have a look), they will be something that you'll want to share.
- I take care of all the web hosting business.
- You retain the copyright of your articles and may withdraw them at any time or publish them elsewhere, should you feel that Wolfram Language Reviews is not enough.
You can read more about publishing on Wolfram Language Reviews [here](http://wolframlanguagereviews.org/publishing/).
As I said, I have thought about creating a website like this for quite some time. Hopefully, some of you have also felt that this was needed. Please ask any questions that you have. You can also reach me at editor@wolframlanguagereviews.org
Thank you for reading!Calle E2019-02-13T21:34:47ZStreamPlot the following gradient?
https://community.wolfram.com/groups/-/m/t/1610629
Hello,
it could be that plenty of people have asked this question already. I didn't search the right way. Sorry!
My problem is with very basic usage of Mathematica:
First I define my new Function:
myFunction[x_, y_] := x^2 - x^4
then I define the gradient of this function as a function:
myGradient[x_, y_] := Grad[myFunction[x, y], {x, y}]
When I want to `StreamPlot[myGradient[x,y], {x, -2, 2}, {y, -2, 2}]` I get an empty diagram. When I use the output of the second-to-last command (myGradient...) the plotting works fine (`StreamPlot[%6, {x, -2, 2}, {y, -2, 2}]`).
Why does it not work like this, and how do I do it right? In general I want to use the output of one input as the value of a function.
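A likely explanation (my sketch): `StreamPlot` holds its first argument, so `myGradient[x, y]` is only evaluated once x and y already have numeric values, at which point `Grad` of two numbers returns nothing useful. Forcing the symbolic evaluation first works:

```
myFunction[x_, y_] := x^2 - x^4
myGradient[x_, y_] := Grad[myFunction[x, y], {x, y}]
StreamPlot[Evaluate[myGradient[x, y]], {x, -2, 2}, {y, -2, 2}]
```

Alternatively, compute the gradient once with `Set` (e.g. `grad = Grad[myFunction[x, y], {x, y}]`) and pass `grad` to `StreamPlot` directly.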
Thanks!Julian SSS2019-02-11T19:50:47ZUsable Property for ElementData absent from ElementData["Properties"]
https://community.wolfram.com/groups/-/m/t/1607534
According to the documentation, `ElementData["Properties"]` gives a list of all properties available for chemical elements. However, `ElementData["Properties"]` does not include `MolarRadioactivity`, despite the documentation for `ElementData` showing that `MolarRadioactivity` is a usable property (visible under the "Nuclear properties include" section). I'm asking to find out why this is the case, as it seems a bit counterintuitive.
Below, I create JSON files of the available properties when using ElementData and EntityValue.
a = ElementData["Properties"];
Export["elementData.json", a, "JSON"];
In the `elementData.json` file, it completely skips over `MolarRadioactivity`
...
"MohsHardness",
"MolarMagneticSusceptibility",
"MolarVolume",
"Name",
...
However, I can still use the property with `ElementData`
ElementData["Uranium", "MolarRadioactivity"]
Is this intended? If so, why?
Thanks, Edwin.Edwin K2019-02-07T06:31:23ZGenerate a table from Solve or NSolve and plot it?
https://community.wolfram.com/groups/-/m/t/1609677
I have a simple question. I want to generate a list. To make things simple, I use a simple model.
xt = Table[{NSolve[x^2 - .001 i == 0 && x > 0]}, {i, 8}];
This generates a table, but when I inquire about the value of xt, I get an answer like
xt[[4]]
Answer is
{{{x -> 0.0632456}}}
How can I ListPlot xt?
If I want to ListPlot xt versus another table, how can I do it?
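One possible approach (a sketch; `yt` below is just a made-up second table for illustration): store the solved values rather than the replacement rules, then `ListPlot` works directly, and pairing with another table is just a `Transpose`:

```
xt = Table[x /. First@NSolve[x^2 - .001 i == 0 && x > 0, x], {i, 8}];
ListPlot[xt]
(* plotting xt against another table of the same length: *)
yt = Table[Sqrt[i], {i, 8}];
ListPlot[Transpose[{xt, yt}]]
```

If you keep your original `xt` of nested rules, `x /. Flatten[xt, 2]` extracts the same values after the fact.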
Much thanks to anyone who can tell me.Hong-Yee Chiu2019-02-10T17:55:58ZFind shaded area between two arcs
https://community.wolfram.com/groups/-/m/t/1613233
###Please download the notebook at the end of the discussion
----------
![question][1]
This is a problem posted by a TikTok user. The original version is for middle school students, so it is safe to assume the two arcs in the problem are from two separate circles. They are tangent to each other at the top left corner of the given rectangle.
Let's extend this problem to a more general case in which the longer arc is part of a conic section. We can assign coordinates to some points in the picture: $(0,0)$, $(2,0)$, $(4,0)$, $(6,0)$, where the origin is at the bottom left corner.
Clear["Global`*"]
longArc[x_, y_] := x^2/a^2 + (y + h)^2/(h + 4)^2
`longArc` is the implicit form of an ellipse whose two axes are parallel to the x and y axes respectively. The center of the ellipse moves downward along the y axis, so the coordinates of the center are $(0, -h)$. The semi-minor axis in the y direction is $b = h+4$. We denote the semi-major axis by `a`. Solve for `a` in terms of `h`:
Reduce[36/a^2+h^2/(4+h)^2==1&&h>0&&a>0,{a}]
we have
h>0&&a==(3 Sqrt[(16+8 h+h^2)/(2+h)])/Sqrt[2]
Now we can define eccentricity of the ellipse as
ecc[h_]:=With[{a=(3 Sqrt[(16+8 h+h^2)/(2+h)])/Sqrt[2]},Sqrt@Abs[a^2-(h+4)^2]/Max[a,h+4]]
in case the major and minor axes switch. Use the `Manipulate` function to verify the set of valid ellipses:
Manipulate[
a = (3 Sqrt[(16 + 8 h + h^2)/(2 + h)])/Sqrt[2];
GraphicsRow@{ContourPlot[{x^2/a^2 + (y + h)^2/(h + 4)^2 == 1,
x^2 + y^2 == 16}, {x, -10, 10}, {y, -10, 10},
Epilog -> {Point[{6, 0}], Line[{{0, 4}, {6, 4}, {6, 0}}]},
Axes -> True],
Plot[ecc[t], {t, 0, 8}, PlotLabel -> "Eccentricity",
Epilog -> {PointSize[0.03], Point[{h, ecc[h]}]}]
}, {h, 0, 8}]
![ecc][2]
Note that if the eccentricity is zero (downward cusp), we have the circle that is to be found in the original question.
##Discussion
- If `h` is negative, as the center of the ellipse moves upward, the ellipse will intersect the vertical line on the right twice:
![moveup][3]
- If `h` approaches positive infinity, as the ellipse is stretched very far downward, there exists a limit:
![movedown][4]
Because `h` is very large, the eccentricity is very close to 1 according to the graph on the right side. Thus the limit of the stretched ellipse is a parabola, with vertex at $(4,0)$ and opening downward. The closed form expression is:
y - 4 = -1/9 x^2
![limit][5]
Use the following plot function in the `Manipulate` function above to see the animation with three curves in one plot:
ContourPlot[
{
x^2/a^2 + (y + h)^2/(h + 4)^2 == 1,
x^2 + y^2 == 16,
9*(y - 4) == -x^2
}, {x, -10, 10}, {y, -10, 10}...]
##Find Numeric Area
`ImplicitRegion` is used in a very straightforward manner. Given `h` is 4:
Module[{h=4,a},
a=(3 Sqrt[(16+8 h+h^2)/(2+h)])/Sqrt[2];
\[ScriptCapitalR]=ImplicitRegion[x^2+y^2> 16&&x^2/a^2+(y+h)^2/(h+4)^2< 1&&x>0&&y>0,{x,y}];
DiscretizeRegion[\[ScriptCapitalR]]
]
![area][6]
Compute the area of the region by:
Area[%]
=> 4.45849
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=102041.jpg&userId=23928
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2571loop.gif&userId=23928
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1.PNG&userId=23928
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2.PNG&userId=23928
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=4190loop2.gif&userId=23928
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=area.PNG&userId=23928Shenghui Yang2019-02-14T11:37:15ZUse ParametricPlot to show the results of NDSolve?
https://community.wolfram.com/groups/-/m/t/1613368
I'm trying to show the results of NDSolve by using ParametricPlot. These are pendulum equations, and I want to get phase-space trajectories of this system. For small angles like Pi/5 I should get a circle. Here is my code:
pend[\[Theta]0, \[Omega]0, \[Gamma], \[CurlyPhi]0, \[Omega]D, T] :=
NDSolve[{\[Theta]'[t] == \[Omega][t], \[Omega]'[
t] == -\[Gamma]\[Omega][t] -
Sin[\[Theta][t]] + \[CurlyPhi]0 Cos[\[Alpha][t]], \[Alpha]'[
t] == \[Omega]D, \[Theta][0] == \[Theta]0, \[Omega][
0] == \[Omega]0, \[Alpha][0] ==
0}, {\[Theta], \[Omega], \[Alpha]}, {t, 0, T}];
sol1 = pend[Pi/5, 0, 0, 0, 0, 100];
ParametricPlot[
Evaluate[{{\[Theta][t], \[Omega][t]} /. sol1}], {t, 0, 30},
PlotRange -> All, Frame -> True]
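A probable fix (my sketch, not from the original post): the arguments on the left-hand side of `pend` need pattern underscores, and `\[Gamma]\[Omega][t]` needs an explicit space or `*` so it is a product rather than a single symbol; otherwise `pend[Pi/5, ...]` never matches the definition and `sol1` is not a list of replacement rules:

```
pend[\[Theta]0_, \[Omega]0_, \[Gamma]_, \[CurlyPhi]0_, \[Omega]D_, T_] :=
 NDSolve[{\[Theta]'[t] == \[Omega][t],
   \[Omega]'[t] == -\[Gamma] \[Omega][t] - Sin[\[Theta][t]] +
     \[CurlyPhi]0 Cos[\[Alpha][t]],
   \[Alpha]'[t] == \[Omega]D, \[Theta][0] == \[Theta]0,
   \[Omega][0] == \[Omega]0, \[Alpha][0] == 0},
  {\[Theta], \[Omega], \[Alpha]}, {t, 0, T}]
```

With this definition the `ParametricPlot` call above should evaluate as intended.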
I keep getting the error message that the result of NDSolve "is neither a list of replacement rules nor a valid dispatch table, so cannot be used for replacing". I don't know how to fix it. I would appreciate your help very much.Paweł Żukowski2019-02-14T14:59:58ZPlot a Dirac Delta Function?
https://community.wolfram.com/groups/-/m/t/1611541
Good Evening All,
I have stumbled across a Dirac delta function when applying the Fourier transform to a function. I am not quite sure how to plot it. Could anyone provide some guidance?
Thanks!
F[t_] := A*Sin[ t]
g[k_] := FourierTransform[F[t], t, k]
I A Sqrt[\[Pi]/2] DiracDelta[-1 + k] -
I A Sqrt[\[Pi]/2] DiracDelta[1 + k]Luciano Pinheiro2019-02-13T02:10:21ZAvoid error while using NDSolve on these differential equations?
https://community.wolfram.com/groups/-/m/t/1612266
I'm trying to solve these differential equations and I keep getting the error message "NDSolve called with 2 arguments; 3 or more arguments are expected". My code is:
pend[a0, w0, k, l, wd] =
NDSolve[{a'[t] == w[t], w'[t] == -k w[t] - Sin[a[t]] + l Cos[q[t]],
q'[t] == wd, a[0] == a0, w[0] == w0, q[0] == 0}, {a, w, q}, {t, 0,
100}];
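A likely fix (my sketch, not from the original post): use `SetDelayed` (`:=`) with pattern underscores so the parameters are substituted when `pend` is called; with plain `=` and bare symbols, the right-hand side is evaluated immediately and later calls like `pend[Pi/5, 0, 0, 0, 0]` never match the definition:

```
pend[a0_, w0_, k_, l_, wd_] :=
 NDSolve[{a'[t] == w[t], w'[t] == -k w[t] - Sin[a[t]] + l Cos[q[t]],
   q'[t] == wd, a[0] == a0, w[0] == w0, q[0] == 0},
  {a, w, q}, {t, 0, 100}]
```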
Any help with this is greatly appreciated.Paweł Żukowski2019-02-13T19:58:29ZWhat Do People Want To Build?
https://community.wolfram.com/groups/-/m/t/1607956
This is a [cross-post](https://b3m2a1.github.io/request-development-input.html)
---
# What Do People Want To Build?
Over the past year and a bit I've developed quite a bit of infrastructure, from [a documentation platform](https://b3m2a1.github.io/simplifying-mathematica-documentation.html) to a [website builder](https://b3m2a1.github.io/making-a-blog-in-30-minutes.html) and [package repository](https://paclets.github.io/PacletServer/), all extending and using [my existing toolkit](https://github.com/b3m2a1/mathematica-BTools). Recently, though, I've wondered what I can do to make this *useful*.
Obviously, there are little bugs to quash, little things to extend and build, but I think for many things the infrastructure I have can be useful and make the development process a lot quicker and cleaner. So I'm putting out a request.
What do you want to build and what tools would make that easier? What kind of blockages have you run into when trying to get stuff developed? Leave me notes in the comments and I'll try to figure out what kinds of things I can/should document/make to make this better for everyone.b3m2a1 2019-02-07T22:49:31ZSimulating Finite Automata (and making it look nice)
https://community.wolfram.com/groups/-/m/t/1611589
![The simulation in action][1]
![a plot of a nondeterministic finite automaton, AddBin3 that recognizes a set of 3-digit binary numbers whos first two columns add up to its' third colum][2]
The above nondeterministic finite automaton recognizes a language I will call AddBin3. The alphabet for this NFA is the set of 3-digit binary numbers {{0,0,0},{0,0,1},...,{1,1,1}}. The language includes all strings whose first 2 rows add up to the third row. So {1,0,1} (1+0=1) would be part of the language, as would {0,0,1},{1,1,0} (01+01=10), but not {1,1,1}.<br>
Simulating the states the automaton passes through given a certain input string is fairly simple using **FoldList**. We simply pass it the initial state, a set of rules, and the input, then apply the rules repeatedly to the set of states.
rule = <|{1, {0, 0, 0}} -> {0,
1}, {1, {0, 0, 1}} -> {2}, {1, {0, 1, 0}} -> {}, {1, {0, 1,
1}} -> {0, 1},
{1, {1, 0, 0}} -> {}, {1, {1, 0, 1}} -> {0,
1}, {1, {1, 1, 0}} -> {}, {1, {1, 1, 1}} -> {},
{2, {0, 0, 0}} -> {}, {2, {0, 0, 1}} -> {}, {2, {0, 1,
0}} -> {2}, {2, {0, 1, 1}} -> {},
{2, {1, 0, 0}} -> {2}, {2, {1, 0, 1}} -> {}, {2, {1, 1, 0}} -> {0,
1}, {2, {1, 1, 1}} -> {2},
{0, {0, 0, 0}} -> {}, {0, {0, 0, 1}} -> {}, {0, {0, 1,
0}} -> {}, {0, {0, 1, 1}} -> {},
{0, {1, 0, 0}} -> {}, {0, {1, 0, 1}} -> {}, {0, {1, 1,
0}} -> {}, {0, {1, 1, 1}} -> {}
|>;
FoldList[Union @@ (Function[s, rule[{s, #2}]] /@ #1) &, {1}, {{0, 0,
1}, {1, 1, 0}, {1, 0, 1}}]
Output: {{1}, {2}, {0, 1}, {0, 1}}
As we see, the automaton starts in state 1, moves to state 2, then moves on to states 0 and 1.<br>
To make this nicer to read, I made a more elaborate version, which has more information (the initial state, the accept state(s), the rule, etc.). It outputs a quite elaborate **StringTemplate** that I thought was worth sharing.
addBin3Simulation[input_List]:=
((*set the initial state, accept state(s), alphabet, and rules*)
initialstate={1};
acceptstates= {0};
alphabet={{0,0,0},{0,0,1},{0,1,0},{0,1,1},{1,0,0},{1,0,1},{1,1,0},{1,1,1}};
rule=<|{1,{0,0,0}}->{0,1},{1,{0,0,1}}->{2},{1,{0,1,0}}->{},{1,{0,1,1}}->{0,1},
{1,{1,0,0}}->{},{1,{1,0,1}}->{0,1},{1,{1,1,0}}->{},{1,{1,1,1}}->{},
{2,{0,0,0}}->{},{2,{0,0,1}}->{},{2,{0,1,0}}->{2},{2,{0,1,1}}->{},
{2,{1,0,0}}->{2},{2,{1,0,1}}->{},{2,{1,1,0}}->{0,1},{2,{1,1,1}}->{2},
{0,{0,0,0}}->{},{0,{0,0,1}}->{},{0,{0,1,0}}->{},{0,{0,1,1}}->{},
{0,{1,0,0}}->{},{0,{1,0,1}}->{},{0,{1,1,0}}->{},{0,{1,1,1}}->{}
|>;
(*Fold the rule over and over on the states to get a list of the sequence of states*)
states=FoldList[Union@@(Function[s,rule[{s,#2}]]/@#1)&,initialstate,input];
(*check that all the characters in the input string are part of the alphabet*)
If[ContainsOnly[input,alphabet],
(*if they are, output the result of the simulation*)
StringRiffle[
Join[
(*First, output the initial state*)
{StringTemplate["The intial state of the NFA is \!\(\*SubscriptBox[\(q\), \(``\)]\)`"][initialstate]},
(*Then, show the sequence of states reached through the input*)
(*adjust the output depending on the number of states for correct grammar*)
(*have a special output for an empty list*)
MapThread[StringTemplate["After the next input `1`, the new state <*
If[Length[#2]==1,
\" is \"<>#2[[1]],
\"s are\" <>
If[Length[#2]==0,
\" none. The NFA terminates here.\",
StringRiffle[Most[#2],{\" \", \",\", \" and \"}]
<>Last[#2]]]*>"],
{input,Map[StringTemplate["\!\(\*SubscriptBox[\(q\), \(`1`\)]\)"],Rest[states],{2}]}
(*terminate the MapThread loop after the first empty list*)
[[All,;;FirstPosition[Rest[states],{},{-1}][[1]]]]],
(*attach a statement whether the string was accepted or not*)
{If[Last[states]!= {},
"This is the last state and "<>If[ContainsAny[Last[states],acceptstates],
"the string is accepted.",
"the string is not accepted."],
"The string is not accepted."]}],
"\n"],
(*if the input characters are not all in the alphabet, output an error message*)
"Error: One or more of the input characters are not in the alphabet"])
When we give this function an input string, it will give us information in an easily digestible format.<br>
Some examples:
![enter image description here][3]
The function will also give an error when the string has characters that are not in the alphabet.<br>
Of course, this function can be generalized for other NFAs:
(*generalized NFA simulation*)
nfaSimulation[alphabet_List, initialstate_List, rule_Association,
acceptstates_List, input_List] :=
((*Fold the rule over and over on the states to get a list of the \
sequence of states*)
states =
FoldList[Union @@ (Function[s, rule[{s, #2}]] /@ #1) &,
initialstate, input];
(*check that all the characters in the input string are part of the \
alphabet*)
If[ContainsOnly[input, alphabet],
(*if they are, output the result of the simulation*)
StringRiffle[
Join[
(*First, output the initial state*)
{StringTemplate[
"The intial state of the NFA is \!\(\*SubscriptBox[\(q\), \
\(``\)]\)`"][initialstate]},
(*Then,
show the sequence of states reached through the input*)
(*
adjust the output depending on the length of the list of states \
to have correct grammar*)
(*have a special output for an empty list*)
MapThread[
StringTemplate[
"After the next input `1`, the new state <*If[Length[#2]==1, \
\" is \"<>#2[[1]], \"s are\" <>If[Length[#2]==0, \" none. The NFA \
terminates here.\", StringRiffle[Most[#2],{\" \", \",\", \" and \
\"}]<>Last[#2]]]*>"],
{input,
Map[StringTemplate["\!\(\*SubscriptBox[\(q\), \(`1`\)]\)"],
Rest[states], {2}]}
(*terminate the MapThread loop after the first instance \
of an empty list*)
[[All, ;; FirstPosition[Rest[states], {}, {-1}][[1]]]]],
(*attach a statement whether the string was accepted or not*)
\
{If[Last[states] != {},
"This is the last state and " <>
If[ContainsAny[Last[states], acceptstates],
"the string is accepted.",
"the string is not accepted."],
"The string is not accepted."]}],
"\n"],
(*if the input characters are not all in the alphabet,
output an error message*)
"Error: One or more of the input characters are not in the \
alphabet"])
We just need to give this function the alphabet, initial state, rule, acceptstates and an input and it will generate a narrative about the computation of the NFA.
![a generalized version of the simulator in action][4]
<br>
<br>
Bonus: here is how I draw NFAs with the WL.
nfaPlot[q_, q0_, transitions_, f_,
opts___] := (g \[Function]
Graph[g,
VertexShape ->
Join[Thread[
Complement[VertexList[g], f, {q0}] -> Graphics[Circle[]]],
Thread[DeleteCases[f, q0] ->
Graphics[{Circle[], Circle[{0, 0}, 0.8]}]], {q0 ->
Graphics[{If[MemberQ[f, q0], Circle[{0, 0}, 0.8], Nothing],
Thickness[0.05], Circle[]}]}], VertexSize -> Large,
EdgeStyle -> Black, opts])@
Graph[q, Labeled[#1 \[DirectedEdge] #2,
If[Length[#3] === 1, #3[[1]], #3]] & @@@
KeyValueMap[Append,
GroupBy[transitions, (#[[;; 2]] &) -> (#[[3]] &)]]]
nfaPlot[Labeled[#, Style[Subscript["q", #], Large], Center] & /@ {0,
1, 2}, 0,
MapAt[Style[#, Large, Italic,
FontFamily -> "Times New Roman"] &, {{0, 1, 1}, {1, 2, 1}, {2, 0,
1}, {1, 0, 0}, {2, 1, 0}, {0, 2, 0}}, {All, 3}], {2}]
The output looks like the image below. The initial state is marked by a bold circle, but feel free to manually draw an arrow leading into the diagram like I did above:
<br>![an example DFA][5]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Examples.JPG&userId=1340981
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=10097AddBin3.jpg&userId=1340981
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Examples.JPG&userId=1340981
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Example2.JPG&userId=1340981
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=examplegraph.jpg&userId=1340981Katja Della Libera2019-02-13T13:39:00ZUse Button[] with Dynamic[] expressions?
https://community.wolfram.com/groups/-/m/t/1610109
Hi, I just can't get my head around this... when I call a Module with a Dynamic expression (i.e. a progress indicator), this works fine unless I call the same code from Button[]. See the sample code below... when calling bigFunc[] standalone from the notebook, everything works fine. When it gets called by pushing the Button[], the progress indicator doesn't move, and the progress dialog window also doesn't close.
What causes this? My intention is to call some Modules that process loads of data (thus the progress indicator dialog...) from a window with buttons, as a menu.
Using DynamicModule in any combination doesn't seem to make a difference... I couldn't find any hint anywhere in the documentation... Mathematica 11.3 on MacOS 10.14.3
Any thoughts, help - thank you!
In[42]:= doSomething[i_] := Module[{},
Speak[progi]; Pause[2]; progi++;
];
bigFunc[] := Module[{},
progi = 1; proglen = 3;
prognb =
CreateDialog[
Column[{Style[
"Doing, " <> ToString[proglen] <> " things in total",
FontSize -> 12],
Row[{ProgressIndicator[Dynamic[progi], {1, proglen},
ImageMargins -> 5], " ",
Dynamic[Round@N[100 progi/proglen]], " %"}]}],
WindowSize -> Fit];
Table[doSomething[i], {i, 1, 3}];
NotebookClose[prognb];
];
callFromButton[] := Module[{},
Button["Start!", bigFunc[]]
]
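A likely remedy (my sketch): by default a `Button` action runs on the preemptive link with a short time limit, which prevents a long computation from updating Dynamic content and dialogs; scheduling the action on the main link with `Method -> "Queued"` is the usual fix for exactly this symptom:

```
callFromButton[] := Module[{},
  Button["Start!", bigFunc[], Method -> "Queued"]
 ]
```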
In[45]:= bigFunc[] (** works fine! **)
In[46]:= callFromButton[] (** doesn't work! **)Thomas Valjak2019-02-10T21:26:51ZDetermining forces in biological tissues from images
https://community.wolfram.com/groups/-/m/t/1571507
Forces and pressure play a key role in the development of an organism. These forces that are generated by the cells can act upon other cells in concert to yield dramatic changes in the architecture of a tissue. Forces may cause a tissue to stretch or rotate. Likewise, the internal pressure within the cells may cause the tissue to bulge or contract. Without precise regulation of
forces and pressure by the cells it is not hard to imagine that the developmental processes will be severely impacted. Therefore, biologists and biophysicists who are studying animal development often need a measure of the distribution of forces and pressure in a tissue over time.
There are experimental methods for measuring forces. In "laser ablation" a high-powered laser beam can be used to ablate junction(s) between two or more cells that are under tension. The recoil velocity can be used to determine the magnitude of tension. Imagine cutting a guitar string that is under high tension: once snapped it will spring back. The process is highly invasive, that is, the junction in question has to be severed, rendering it less useful for studying forces in a spatio-temporal manner.
In this post I would like to share with you the notion of inferring forces and pressure in a tissue using images. This technique, now commonly known as "Force Inference" was proposed by Ishihara and Sugimura (Journal of Theoretical Biology, 2012) as well as Brodland (2014). Force Inference allows us to determine forces without having to destroy tissues. And the idea is pretty simple. A cell is delimited by the junctions (edges) enclosing it and an edge can be represented by a line drawn between two vertices. We assume that a tissue at any given moment is in quasi-equilibrium and consequently the forces acting on the vertices sum to zero. Such an assumption makes sense since morphological changes in tissues occur over a long time-scale. The inertial and viscous effects are negligible. The forces acting on a vertex are both due to tension acting along the cell-cell junctions and the internal pressure of the cell.
Here is a Mathematica implementation that uses force balance over all vertices of an epithelium (a sheet of cells) to determine the unknown tensions and pressures. Note that the tensions and pressures determined by this method are only relative estimates of the unknowns, not absolute ones. The script is based on the approach proposed by Ishihara.
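Stacking the $x$- and $y$-components of the balance at every vertex gives a homogeneous linear system in the unknowns,

```latex
A\,\mathbf{p} = \mathbf{0},
\qquad
\mathbf{p} = (T_1, \dots, T_E,\; P_1, \dots, P_C)^{\mathsf T},
```

which fixes $\mathbf{p}$ only up to scale — hence the relative estimates. As I read the code and the 2012 paper, the scale ambiguity is resolved in a Bayesian fashion: a prior pulls the tensions toward a common value (the vector $\mathbf{g}$, which is 1 on tension entries and 0 on pressure entries), and the estimate roughly minimizes

```latex
\lVert A\mathbf{p} \rVert^2 \;+\; \mu\,\lVert B\mathbf{p} - \mathbf{g} \rVert^2 ,
```

with the hyperparameter $\mu$ chosen by maximizing a marginal log-likelihood — this is the sweep over $\mu = 10^{-1.5}, \dots, 10^{1.5}$ performed in `maximizeLogLikelihood` below.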
![enter image description here][2]
![enter image description here][3]
![enter image description here][4]
![enter image description here][5]
**Code** : https://github.com/alihashmiii/Force-Inference
(* ::Package:: *)
(* ::Section:: *)
(*Force Inference*)
segmentImage[binarizedMask_?ImageQ,border: True|False]:=
Which[!border,
Print["border not present, segmenting with morphological components"];
ArrayComponents@DeleteBorderComponents@MorphologicalComponents[ColorNegate@binarizedMask, CornerNeighbors->False],
border,
Print["border exists, segmenting with watershed components"];
WatershedComponents[binarizedMask, CornerNeighbors->False]
];
bresenhamLine[p0_,p1_]:=Module[{dx,dy,sx,sy,err,newp},
{dx,dy}=Abs[p1-p0];
{sx,sy}=Sign[p1-p0];
err=dx-dy;
newp[{x_,y_}]:=With[{e2=2 err},{If[e2>-dy,err-=dy;x+sx,x],If[e2<dx,err+=dx;y+sy,y]}];
NestWhileList[newp,p0,#=!=p1&,1]
];
closeImage[image_]:=Module[{morphImage, morphImagePos,posConv,ptsOrdered},
morphImage=MorphologicalTransform[image,"SkeletonEndPoints"];
morphImagePos=PixelValuePositions[morphImage,1];
posConv = With[{mean=Mean@N@morphImagePos},
ptsOrdered=Append[#,First@#]&[SortBy[morphImagePos,ArcTan@@(mean-#)&]];
DeleteDuplicates@Flatten[bresenhamLine@@@Partition[ptsOrdered,2,1,1],1]
];
ReplacePixelValue[image,posConv-> 1]
];
Options[associateVertices]= {"stringentCheck"-> True};
associateVertices[img_,segt_,OptionsPattern[]]:= With[{dim =Reverse@ImageDimensions@img,stringentQ=OptionValue["stringentCheck"]},
Module[{pts,members,vertices,nearest,vertexset,likelymergers,imagegraph,imggraphweight,imggraphpts,vertexpairs,posVertexMergers,
meanVertices,Fn},
pts = ImageValuePositions[MorphologicalTransform[img,{"Fill","SkeletonBranchPoints"}], 1];
(* finding branch points *)
members = ParallelMap[Block[{elems},
elems = Dilation[ReplaceImageValue[ConstantImage[0,Reverse@dim],#->1],1];DeleteCases[Union@Flatten@ImageData[elems*Image[segt]],0|0.]
]&,pts];
vertices = Cases[Thread[Round@members-> pts],HoldPattern[pattern:{__}/;Length@pattern >= 2 -> _]];
(* vertices with 2 or more neighbour cells *)
nearest = Nearest[Reverse[vertices, 2]]; (* nearest func for candidate vertices *)
Fn = GroupBy[MapAt[Sort,(#-> nearest[#,{All,3}]&/@Values[vertices]),{All,2}],Last->First,#]&;
Which[Not@stringentQ,
(* merge if candidate vertices are 2 manhattan blocks away. Not a stringent check for merging *)
KeyMap[Union@*Flatten]@Fn[List@*N@*Mean]//Normal,
stringentQ,
(* a better check is to see the pixels separating the vertices are less than 3 blocks *)
vertexset = Fn[Identity];
(* candidates for merging*)
likelymergers = Cases[Normal[vertexset],PatternSequence[{{__Integer}..}-> i:{__List}/;Length[i]>= 2]];
(*defining graph properties of the image *)
imagegraph = MorphologicalGraph@MorphologicalTransform[img,{"Fill"}];
imggraphweight = AssociationThread[(EdgeList[imagegraph]/.UndirectedEdge->List )-> PropertyValue[imagegraph,EdgeWeight]];
imggraphpts = Nearest@Reverse[Thread[VertexList[imagegraph]-> PropertyValue[imagegraph,VertexCoordinates]],2];
(* corresponding nodes on the graph *)
vertexpairs = Union@*Flatten@*imggraphpts/@(Values[likelymergers]);
(* find pairs < than 3 edgeweights away, take a mean of vertices and update the association with mean position *)
posVertexMergers = Position[Thread[Lookup[imggraphweight,vertexpairs]< 3],True];
If[posVertexMergers != {},
meanVertices = MapAt[List@*N@*Mean,likelymergers,Thread[{Flatten@posVertexMergers,2}]];
Scan[(vertexset[#[[1]]]=#[[2]])&,meanVertices]];
KeyMap[Union@*Flatten]@vertexset//Normal]
]
];
formAndComputeMatrices[vertexCoordinatelookup_,inds_,colsOrder_,edgenum_,delV_,vertexToCells_,vertexvertexConn_,
maxcellLabels_,filteredvertices_,vertexAssoc_]:=
Module[{tx,ty,tensinds,filteredvertexnum,relabelvert,spArrayTx,spArrayTy,spArrayPx,spArrayPy,spArrayX,spArrayY,$filteredvertices},
{tx,ty}=Transpose[
With[{target=vertexCoordinatelookup[#[[2]]],source=vertexCoordinatelookup[#[[1]]]},
(target-source)/Norm[target-source]
]&/@inds];
Print["Tension coefficients computed: ",Style["\[CheckmarkedBox]",20]];
MapThread[Print[Style["counts of zero coefficients "<>#1,Red], Count[#2,0.]]&,{{"Tx: ","Ty: "},{tx,ty}}];
$filteredvertices=Complement[filteredvertices,delV];
filteredvertexnum=Length@$filteredvertices;
relabelvert=AssociationThread[$filteredvertices-> Range[Length@$filteredvertices]];
tensinds=Thread[{Lookup[relabelvert,Part[inds,All,1]],colsOrder}];
spArrayTx=spArrayTy=SparseArray@ConstantArray[0,{filteredvertexnum,edgenum}];
MapThread[(spArrayTx[[Sequence@@#1]]=#2)&,{tensinds,tx}];
MapThread[(spArrayTy[[Sequence@@#1]]=#2)&,{tensinds,ty}];
spArrayPx=spArrayPy=SparseArray@ConstantArray[0,{filteredvertexnum,maxcellLabels}];
MapThread[Print[Style[#1<> "coefficients stats: ",Blue],Counts@Map[Total@*Unitize,Normal[#2]]]&,{{"Tx ", "Ty "},{spArrayTx,spArrayTy}}];
Print[Style["Tension coefficients dist: ",Bold],
Histogram[{{},Join[spArrayTx["NonzeroValues"],spArrayTy["NonzeroValues"]]},20,ImageSize->Small]];
Block[{neighbouringCells,bisectionlabels,bisectpts,centroid,orderingT,vertexcoords,orderptsT,orderIndT,orderCells,kk=0,px,py},
With[{cellToVertexLabelsT= Reverse[vertexToCells,2], edgeVertexPart=GroupBy[vertexvertexConn~Flatten~1 ,First-> Last]},
With[{cellToAllVerticesT= GroupBy[Flatten[Thread/@cellToVertexLabelsT],First-> Last]},
Do[
neighbouringCells= vertexToCells[[i,2]]; (* for vertex, the neighbouring cell labels *)
bisectionlabels=Intersection[cellToAllVerticesT[#],edgeVertexPart[i]]&/@neighbouringCells;
If[Length[neighbouringCells]>2 && MatchQ[bisectionlabels,{Repeated[{_,_},{3,\[Infinity]}]}],
(vertexcoords=DeleteDuplicates[vertexCoordinatelookup[#]&/@Flatten@bisectionlabels];
centroid=Mean@vertexcoords;
orderingT=Ordering[ArcTan[Last@#-Last@centroid,First@#-First@centroid]&/@vertexcoords];
orderptsT=vertexcoords[[orderingT]];
orderIndT=Partition[Lookup[vertexAssoc,Append[orderptsT,First@orderptsT]],2,1];
orderCells=Last@@@Position[(x\[Function] Map[Intersection[x,#]&,orderIndT])/@(cellToAllVerticesT[#]&/@neighbouringCells),
x_/;Length[x]==2];
neighbouringCells=neighbouringCells[[orderCells]];
bisectpts=Map[vertexCoordinatelookup,orderIndT,{2}];
{px,py}=Transpose[{(#[[2,2]]-#[[1,2]])/2,-(#[[2,1]]-#[[1,1]])/2}&/@bisectpts];
If[MemberQ[px,0.]||MemberQ[py,0.],kk++];
{px,py})
];
Scan[(spArrayPx[[ Sequence@@#[[1]] ]]=#[[2]])&,Thread[Thread[{relabelvert@i,neighbouringCells}]-> px]];
Scan[(spArrayPy[[ Sequence@@#[[1]] ]]=#[[2]])&,Thread[Thread[{relabelvert@i,neighbouringCells}]-> py]],{i,$filteredvertices}]
]
];
Print["Pressure coefficients computed: ",Style["\[CheckmarkedBox]",20]];
Print[Style["Pressure coefficients zero: ",Red],kk];
];
MapThread[Print[Style[#1<> "coefficients stats: ",Blue],Counts@Map[Total@*Unitize,Normal[#2]]]&,{{"Px ", "Py "},{spArrayPx,spArrayPy}}];
Print[Style["pressure coefficients dist: ",Bold],
Histogram[{{},Join[spArrayPx["NonzeroValues"],spArrayPy["NonzeroValues"]]},ImageSize->Small]];
spArrayX=Join[spArrayTx,spArrayPx,2];
spArrayY=Join[spArrayTy,spArrayPy,2];
{spArrayX,spArrayY,Dimensions[spArrayTx],Dimensions[spArrayPx]}
];
(* maximize Log-likelihood function *)
maximizeLogLikelihood[spArrayX_,spArrayY_,dimTx_,dimPx_]:= Module[{range=10.0^Range[-1.5,1.5,0.1],sol,spA,spg,spB,n,m,
spb,\[Tau],SMatrix,Q,R,H,h,logL,\[Mu],p},
Print[Style["\nwith maximum likelihood",Bold,18]];
sol=With[{ls=range},
(*spA=SparseArray@(Join[spArrayX,spArrayY]);*)
spA=SparseArray@(Flatten[Transpose[{spArrayX,spArrayY}],1]);
spg=SparseArray@(ConstantArray[1.,Last@dimTx]~Join~ConstantArray[0.,Last@dimPx]);
spB=SparseArray@DiagonalMatrix[spg];
n=First@Dimensions@spA;
m=(Length[#]-Total@#)&@Diagonal[spB\[Transpose].spB];
With[{DimspB=First[Dimensions@spB]},
spb=SparseArray@ConstantArray[0.,First[Dimensions@spA]];
Table[(\[Tau]=Sqrt[\[Mu]];
SMatrix=SparseArray@(Map[Flatten]@Transpose[{Join[spA,\[Tau] spB],Join[spb,\[Tau] spg]},{2,1}]);
{Q,R}=SparseArray/@QRDecomposition[SMatrix];
R=DiagonalMatrix[Sign[Diagonal@R]].R;
H=R[[;;#,;;#]]&@DimspB;
\!\(\*OverscriptBox[\(h\), \(\[RightVector]\)]\)=R[[;;#,#+1]]&@DimspB;
h=R[[#+1,#+1]]&@DimspB;
logL=-(n-m+1)*Log[h^2]+Total[Log[Diagonal[\[Mu] (spB\[Transpose].spB)]["NonzeroValues"]]]-
2*Total[Log[Diagonal[H[[1;;-2,1;;-2]]]["NonzeroValues"]]]),{\[Mu],ls}]
]
];
Print[ListPlot[{sol,sol},Joined-> {True,False},PlotStyle->{{Red,Thick},{PointSize[0.02],Black}},AxesStyle->{{Black},{Black}},
AxesLabel->{"index \[Mu]","LogLikelihood"},Background->LightBlue]];
\[Mu]=Keys@@MaximalBy[Thread[range-> sol],Values,1];
Print["optimized value of \[Mu]: ",\[Mu]];
\[Tau]=Sqrt[\[Mu]];
With[{DimspB=First[Dimensions@spB]},
SMatrix=SparseArray@(Map[Flatten]@Transpose[{Join[spA,\[Tau] spB],Join[spb,\[Tau] spg]},{2,1}]);
{Q,R}=SparseArray/@QRDecomposition[SMatrix];
R=DiagonalMatrix[Sign[Diagonal@R]].R;
H=R[[;;#,;;#]]&@DimspB;
\!\(\*OverscriptBox[\(h\), \(\[RightVector]\)]\)=R[[;;#,#+1]]&@DimspB;
];
p=PseudoInverse[H].\!\(\*OverscriptBox[\(h\), \(\[RightVector]\)]\); p
];
plotMaps[p_,segmentation_,edgeImg_,maxcellLabels_,dimTx_,vertexToCells_,vertexCoordinatelookup_,edgeLabels_,border_]:=
Module[{cellToVertexLabels,cellToAllVertices,ptsEdges,k,v,ord,edgeptAssoc,poly,pts,mean,ordering,orderpts,tvals,cols,pvals,
removecollabels,collabels,pressurecolours},
cellToVertexLabels= Reverse[vertexToCells,2];
cellToAllVertices= GroupBy[Flatten[Thread/@cellToVertexLabels],First-> Last];
(* polygons *)
ptsEdges ={{1,1},Reverse@Dimensions[segmentation],{Last[Dimensions@segmentation],1},{1,First[Dimensions@segmentation]}};
{k,v}={Keys@#,Values[#][[All,2]]}&@ComponentMeasurements[segmentation,{"AdjacentBorderCount","Centroid"},#==2&];
ord=Flatten[Function[x,Position[#,Min[#]]&@Map[EuclideanDistance[#,x]&,ptsEdges]]/@v];
edgeptAssoc=Association[Rule@@@Thread[{k,ptsEdges[[ord]]}]];
poly=(
pts=vertexCoordinatelookup/@cellToAllVertices[#];
If[MemberQ[k,#],AppendTo[pts,edgeptAssoc[#]],pts];
mean=Mean[pts];
ordering=Ordering[ArcTan[Last@#-Last@mean,First@#-First@mean]&/@pts];
orderpts=pts[[ordering]];
Polygon@Append[orderpts,First@orderpts]
)&/@Range[maxcellLabels];
tvals=Rescale@p[[1;;Last@dimTx]];
cols=ColorData["Rainbow"][#]&/@tvals;
Print["Tension map:"];
Print[Graphics[{Thickness[0.005],Riffle[cols,Line/@Values@edgeLabels]}]];
pvals=p[[ Last[dimTx]+1;;]];
removecollabels = If[border, Keys@ComponentMeasurements[segmentation,"AdjacentBorders",Length[#]>0&],
Values@Last@ComponentMeasurements[Dilation[segmentation,1]/. 0 -> (maxcellLabels + 1),"Neighbors"]
];
collabels=Complement[Range@maxcellLabels,removecollabels];
pressurecolours=ColorData["Rainbow"][#]&/@Rescale[(pvals[[collabels]])];
Print["Pressure map:"];
Print@Show[Graphics@Riffle[pressurecolours,poly[[collabels]]],edgeImg]
];
(* ::Section:: *)
(*Mains*)
ForceInference[filename_,Imgborder:True|False:True]:=
Module[{Img,ImgC,segmentation,maxcellLabels,cellsToVertices,vertexnum,edges,smalledges,maxedgeLabels,edgeEndPoints,nearest,
nearestedgeEndPoints,edge2pixLabels,pos,oldCoords,vertexAssoc,vertexToCells,filteredvertices,filteredvertexnum,relabelvert,
edgeLabels,edgenum,spArrayTx,spArrayTy,vertexCoordinatelookup,vertexpairs,vertexvertexConn,inds,edgelabelToVert,delV,
vertToedges,edgeImg,colsOrder,p,spArrayX,spArrayY,dimTx,dimPx},
LaunchKernels[];
ImgC=Img=Binarize@ColorConvert[Import[filename],"Grayscale"];
If[Not@Imgborder,Img=closeImage@Img];
Print[Image[Img,ImageSize->Medium]];
segmentation = segmentImage[Img,Imgborder];
Print[Colorize@segmentation];
Print["Image segmented: ", Style["\[CheckmarkedBox]",20]];
maxcellLabels = Max@Values@ComponentMeasurements[segmentation,"LabelCount"];
cellsToVertices = associateVertices[Img,segmentation];
Print["vertices found and associated: ", Style["\[CheckmarkedBox]",20]];
vertexnum=Length@cellsToVertices;
edges=MorphologicalComponents@ImageFilter[If[#[[3,3]] == 1 && Total[#[[2;;-2,2;;-2]],2] == 3, 1, 0]&,ImgC,2];
(* associate vertices with all edges. for pixel value 1 edge find two nearest pts. for all edges <3, merge the pts together;
make changes to the cellToVertex *)
(* edges to be deleted *)
smalledges=Flatten[Position[Values@ComponentMeasurements[edges,"Count"],1|2]];
maxedgeLabels=Max@edges;
edgeEndPoints=With[{comp=Values@ComponentMeasurements[edges,"Mask"]},
ParallelTable[
If[Total[#] == 1,ImageValuePositions[#,1],
ImageValuePositions[MorphologicalTransform[#,"SkeletonEndPoints"],1]]&@Binarize@Image[comp[[i]]],
{i,maxedgeLabels}]
];
(* for small edge: if one pixel delete *)
edges=Fold[If[Length@edgeEndPoints[[#2]]==1,#1/.#2 -> 0,#1]&,edges,smalledges];
nearest=Nearest@Flatten[Values[cellsToVertices],1];
nearestedgeEndPoints=Map[Flatten@*nearest,edgeEndPoints,{2}];
(* if edge is two pixels then put average value in the cellsToVertices: *)
edge2pixLabels=Keys@Cases[ComponentMeasurements[edges,"Count"],HoldPattern[_-> 2]];
If[edge2pixLabels!={},
(oldCoords=nearestedgeEndPoints[[#]];
pos=Position[cellsToVertices,#,Infinity]&/@oldCoords;
cellsToVertices=Fold[ReplacePart[#1,#2-> Mean@oldCoords]&,cellsToVertices,pos]
)&/@edge2pixLabels
];
edges=ArrayComponents[edges/.Thread[edge2pixLabels-> 0]];
Print["edges found and associated: ", Style["\[CheckmarkedBox]",20]];
cellsToVertices=Normal@AssociationMap[Reverse,GroupBy[cellsToVertices,Last-> First,Union@*Flatten]];
vertexnum=Length@cellsToVertices; (* Length@Flatten[Values@cellsToVertices,1]; *)
nearest=Nearest@Flatten[Values@cellsToVertices,1];
edgeEndPoints=Delete[edgeEndPoints,Partition[smalledges,1]];
nearestedgeEndPoints=Map[Flatten@*nearest,edgeEndPoints,{2}];
vertexAssoc= AssociationThread[Flatten[Values@cellsToVertices,1],Range@vertexnum];
vertexToCells=Reverse[MapAt[vertexAssoc[#]&,MapAt[Flatten,cellsToVertices,{All,2}],{All,2}],2];
filteredvertices=Keys@Select[<|vertexToCells|>,(Length[#]>2&)];
filteredvertexnum=Length@filteredvertices;(* till above we have isolated all vertices that share three or more edges;
we can relabel those filtered vertices to be the rows of the matrix *)
relabelvert=AssociationThread[filteredvertices-> Range[Length@filteredvertices]];(*all edges are relabeled to have a unique identity*)
edgeLabels=AssociationThread[Range[Length@#]->#]&[nearestedgeEndPoints];
edgenum=Max[Keys@edgeLabels];
vertexCoordinatelookup=AssociationMap[Reverse,vertexAssoc];(* given vertex label \[Rule] get coordinates from the original lookup *)
vertexpairs=Map[vertexAssoc,nearestedgeEndPoints,{2}];
(* edge coordinates to vertex label. take vertices one by one and find all the edges it is a part of. None should be less than 3 *)
vertexvertexConn= ParallelTable[
Cases[vertexpairs,{OrderlessPatternSequence[i,p_]}:> {i,p}],
{i,filteredvertices}];
delV=Cases[vertexvertexConn,{{p_,_},{p_,_}}:> p];
vertexvertexConn=DeleteCases[vertexvertexConn,{_,_}];
inds=Flatten[vertexvertexConn,1];(* edgelabel \[Rule] vertices *)
edgelabelToVert=Map[vertexAssoc,edgeLabels,{2}];
(* vertices \[Rule] edgelabel *)
vertToedges=Normal@AssociationMap[Reverse,edgelabelToVert];
colsOrder=Flatten[Cases[vertToedges,PatternSequence[{OrderlessPatternSequence@@#}-> p_]:> p,Infinity]&/@inds];
edgeImg=Graphics[{Thickness[0.005],Line@Lookup[vertexCoordinatelookup,edgelabelToVert[#]]&/@colsOrder}];
{spArrayX,spArrayY,dimTx,dimPx}=formAndComputeMatrices[vertexCoordinatelookup,inds,colsOrder,edgenum,delV,vertexToCells,
vertexvertexConn,maxcellLabels,filteredvertices,vertexAssoc];
p = maximizeLogLikelihood[spArrayX,spArrayY,dimTx,dimPx];
plotMaps[p,segmentation,edgeImg,maxcellLabels,dimTx,vertexToCells,vertexCoordinatelookup,edgeLabels,Imgborder];
];
**Update**
I made slight changes to the code so that it also accepts images in which cells are not connected to the image borders; an example of such an image is 'smallimagepadded.tif' (see the attached file, shown below as a PNG). In essence, the modified code closes the peripheral edges to form cells.
Run the script as:
Quiet@ForceInference["image (e.g. image.tif) with cells connected to the image borders"]
or
Quiet@ForceInference["image (e.g. smallimagepadded.tif) with cells not connected to the image borders", False]
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=smallimagepadded.png&userId=942204
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=im1.png&userId=942204
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=im2.png&userId=942204
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=im3.png&userId=942204
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=im4.png&userId=942204

Ali Hashmi, 2018-12-15T17:45:01Z

Glycolysis Similarity Among Humans, Chimps, and Salmon
https://community.wolfram.com/groups/-/m/t/1307919
Well, I find myself between jobs again, so I'll take another step into the wild biology frontier. First, a few sentences on what I've learned from working remotely for the past two years for two failed companies. I've seen a lot of one strategy that doesn't work, which seems to stem from the mythos of Steve Jobs and the iPhone. Namely, a genius designing a masterpiece in isolation and then taking the world by storm with it. It doesn't seem to work very well when the goal is to automate and scale a process. Instead, the vaguely defined masterpiece never manifests itself, the team wastes huge amounts of time on work that is thrown away, zero successful completions of the process occur (in these cases getting students into universities they will like and getting people the best deals on loans), and then the company runs out of money. And even if they had been able to decide exactly what the magical app should have been, it wouldn't have worked as soon as it was confronted with the messy details of reality. I would have been much happier if they were drawing their inspiration from the big restaurant chains or stores that started as a single shop, the big transport services that started by delivering on foot within a single city, or even the early Steve Jobs days where Apple was building computers in a garage.
Anyway, back to working with biology data, which is one of my sanctuaries of slow but steady progress (the other being working on more interesting character AI for games, because current games get boring too quickly). I've been getting familiar with more of the various NCBI databases and tinkering with the API. All of the UI, terminology, and acronyms are somewhat overwhelming at first. I find having a toy genetic engineering project in mind helps me stay focused on figuring out specific computations I would actually want to run. So my toy project is to genetically engineer grass that doesn't have to be cut. It should automatically stop growing (but stay alive) at an agreeable height for a lawn. I chose that project because it's beyond what can be done today, but perhaps much simpler than common goals like curing diseases in humans. And it's something where I can conceive of personally doing physical experiments, because the safety and regulatory concerns are much less stringent when dealing with a tray of grass. And it's perhaps even less controversial than other genetically modified plants, because it's weakening the grass from an evolutionary standpoint. So you don't have to worry about it taking over natural fields. It turns out golf courses have been interested in creating so-called dwarf grasses via selective breeding for a while, but I think truly getting the desired behavior will require engineering.
Most genetic engineering projects you read about involve adding or disabling single genes. As you start to investigate progressing to designing small gene circuits, you realize that the metabolic pathway data is lagging behind the genomic data (which is already not as complete as you want). The most popular database for pathway data is [KEGG Pathway][1]. It consists of a few hundred hand drawn pathway maps covering pathways that are somewhat consistent (called evolutionarily conserved) across many organisms, annotated with compounds and enzymes. Then for a variety of species it lists which genes produce the proteins corresponding to the various enzymes in the reactions. Of course, the completeness varies a lot between species, however, it's enough to begin imagining potential paths forward.
I've been studying general knowledge on plant and grass development, so using the pathways I can perhaps get some ideas for genes to try and disable in order to stop the growth. Plants use a much larger amount and variety of secondary metabolites than animals, so then perhaps I can find one whose concentration correlates with the height of the blade of grass. Many processes in biology are regulated by the concentration of a signaling molecule (like how your hand knows [where to grow a thumb instead of a pinky][2]). Perhaps the trickiest part will be to design an enzyme that is activated by a particular concentration of that signaling molecule, if I can't find one that already exists. But enzyme design is a hot topic, and as computational chemistry algorithms continue to advance (and become easier to use due to integration in the WL, more standardized formats, etc), eventually that will be possible too. Then the final step would be to add a gene or genes that just produce some RNA to interfere with the genes I want to disable ([a common technique][3]). Then my enzyme can [promote][4]/[activate][5] that gene when the enzyme is activated by the concentration of the signaling molecule. Whew. That's a long list of hypotheticals for such a simple sounding project, and we haven't even considered trying to control side effects. But I think this type of reasoning will become more common in the future.
However, those investigations are for future dates! Let's wrap this post up with a simple, interesting exercise using the KEGG database. Let's see how similar the DNA and protein amino acid sequences used for [glycolysis][6] (a fundamental process in all organisms) are among humans, chimps, and something really different like salmon. The way the KEGG API works is that for a given pathway we can request either the enzyme categories used in the various reactions (prefix "ec"), the formulas for the reactions the enzymes catalyze (prefix "rn"), or the genes that produce those enzymes (prefix varies per organism, "hsa" is human, "ptr" is chimp, "sasa" is salmon). KEGG has their own IDs for the genes, but then from those you can request the sequences or links back to other databases like the NCBI nucleotide and protein databases for more information. We'll start by grabbing the XML data (in a schema they call KGML) for the glycolysis genes for each species.
human = Import["http://rest.kegg.jp/get/hsa00010/kgml", "XML"][[2, 3]];
chimpanzee = Import["http://rest.kegg.jp/get/ptr00010/kgml", "XML"][[2, 3]];
salmon = Import["http://rest.kegg.jp/get/sasa00010/kgml", "XML"][[2, 3]];
Let's define a function to download and parse out the name and sequence data for a given gene. It would be nice if they offered this data in JSON format.
getGeneInfo[keggId_] := keggId // <|
    "Names" -> First@StringCases[
       Import["http://rest.kegg.jp/get/" <> #, "Text"],
       "NAME" ~~ Whitespace ~~ Shortest[names__] ~~ "\n" :> StringSplit[names, ", "]],
    "NTSeq" -> StringJoin[
       StringSplit[Import["http://rest.kegg.jp/get/" <> # <> "/ntseq", "Text"], "\n"][[2 ;;]]],
    "AASeq" -> StringJoin[
       StringSplit[Import["http://rest.kegg.jp/get/" <> # <> "/aaseq", "Text"], "\n"][[2 ;;]]]
   |> &
Now let's download the info for each of the genes mentioned in each of the pathways. Sometimes multiple genes are given for a single enzyme or reaction node in the pathway.
humanGeneInfo =
  Select[human, #[[1]] == "entry" && #[[2, 3, 2]] == "gene" &][[All, 2, 2, 2]] //
   StringSplit // Flatten //
    Module[{i = 0},
      Monitor[AssociationMap[(i++; getGeneInfo@#) &, #],
       ProgressIndicator[i, {1, Length@#}]]] &

chimpanzeeGeneInfo =
  Select[chimpanzee, #[[1]] == "entry" && #[[2, 3, 2]] == "gene" &][[All, 2, 2, 2]] //
   StringSplit // Flatten //
    Module[{i = 0},
      Monitor[AssociationMap[(i++; getGeneInfo@#) &, #],
       ProgressIndicator[i, {1, Length@#}]]] &

salmonGeneInfo =
  Select[salmon, #[[1]] == "entry" && #[[2, 3, 2]] == "gene" &][[All, 2, 2, 2]] //
   StringSplit // Flatten //
    Module[{i = 0},
      Monitor[AssociationMap[(i++; getGeneInfo@#) &, #],
       ProgressIndicator[i, {1, Length@#}]]] &
You'll see some error messages thrown because many of the salmon genes are missing names. Now let's go through each of the human genes, find the chimpanzee gene that shares a name with it, and list the name of the gene, the length of the nucleotide sequence, and the edit distances from the human to the chimpanzee nucleotide and amino acid sequences for that gene. Then we sort by the ratio of edit distance to sequence length, so the most similar pairs come first and the most different ones come last.
Table[
  SelectFirst[chimpanzeeGeneInfo,
    ContainsAny[ToLowerCase@humanGene[["Names"]], ToLowerCase@#[["Names"]]] &] //
   If[! MissingQ@#,
     {humanGene[["Names", 1]], StringLength@humanGene[["NTSeq"]],
      EditDistance[humanGene[["NTSeq"]], #[["NTSeq"]]],
      EditDistance[humanGene[["AASeq"]], #[["AASeq"]]]},
     Nothing] &,
  {humanGene, humanGeneInfo}] // SortBy[#[[3]]/#[[2]] &]
{{ENO3,1305,0,0},{PGAM2,762,1,0},{PGK1,1254,2,0},{PDHA2,1167,2,1},{LDHB,1005,2,1},{HK1,2754,7,3},{ADH5,1125,3,2},{ADPGK,1491,4,1},{ALDOB,1095,3,1},{ACSS2,2106,8,4},{GAPDH,1008,4,0},{ALDH1A3,1539,7,1},{ALDOC,1095,5,3},{ENO2,1305,6,0},{DLAT,1944,9,4},{LDHA,999,5,1},{AKR1A1,978,5,3},{ALDH2,1554,8,3},{PGAM1,765,4,0},{HK2,2754,15,3},{PDHB,1080,6,0},{GPI,1677,10,5},{ADH4,1143,7,2},{ENO1,1305,8,0},{PCK2,1923,12,6},{ALDOA,1095,7,2},{GALM,1029,7,3},{FBP1,1017,7,3},{HKDC1,2754,19,4},{ENO4,1878,13,6},{LDHAL6B,1146,8,5},{PGM1,1689,12,6},{ACSS1,2070,15,5},{PGM2,1839,14,4},{PDHA1,1173,9,1},{BPGM,780,6,2},{ADH1A,1128,9,5},{ADH1B,1128,9,2},{GAPDHS,1227,10,5},{MINPP1,1464,12,5},{ALDH1B1,1554,13,5},{G6PC2,1068,9,5},{PGK2,1254,11,5},{FBP2,1020,10,1},{G6PC,1074,12,4},{PFKL,2343,29,0},{GCK,1398,28,13},{ADH6,1107,23,9},{PKLR,1725,36,13},{PGAM4,765,17,11},{PKM,1596,57,21},{ALDH9A1,1557,82,28},{ALDH3A2,1458,79,31},{DLD,1530,92,29},{HK3,2772,190,63},{PFKP,2355,202,68},{PFKM,2343,225,74},{ADH1C,1128,145,48},{TPI1,861,114,37},{ALDH3B1,1407,220,74},{ALDH3A1,1362,224,74},{PCK1,1869,409,135},{ALDH7A1,1620,359,121},{ALDH3B2,1158,267,97},{LDHAL6A,999,238,79},{LDHC,999,348,116},{ADH7,1161,414,170},{G6PC3,1041,454,173}}
We can see most of them are very similar. [Enolase 3][7] is completely identical between humans and chimpanzees. It has a few isoenzymes that are expressed in different tissue types in mammals. This particular one is most common in skeletal muscle. [G6PC3][8] has an edit distance that is almost half the length of the nucleotide sequence. This gene encodes the catalytic subunit of the G6Pase enzyme. In humans, mutations in this gene can cause babies to have low white blood cell counts. Let's do the same computation for salmon.
Table[
  SelectFirst[salmonGeneInfo,
    ContainsAny[ToLowerCase@humanGene[["Names"]], ToLowerCase@#[["Names"]]] &] //
   If[! MissingQ@#,
     {humanGene[["Names", 1]], StringLength@humanGene[["NTSeq"]],
      EditDistance[humanGene[["NTSeq"]], #[["NTSeq"]]],
      EditDistance[humanGene[["AASeq"]], #[["AASeq"]]]},
     Nothing] &,
  {humanGene, humanGeneInfo}] // SortBy[#[[3]]/#[[2]] &]
{{GAPDH,1008,213,49},{PGAM1,765,169,35},{ENO3,1305,292,64},{PGK1,1254,283,61},{PGAM4,765,175,44},{ALDH2,1554,356,107},{ALDOA,1095,252,66},{ALDOC,1095,257,75},{GCK,1398,348,99},{PCK1,1869,477,155},{ALDH7A1,1620,415,99},{ADH5,1125,305,71},{PGM1,1689,461,124},{ALDOB,1095,301,98},{FBP2,1020,284,86},{ALDH9A1,1557,468,145},{DLD,1530,461,91},{PCK2,1923,591,214},{PDHB,1080,332,92},{LDHB,1005,310,74},{LDHA,999,316,97},{ACSS1,2070,662,218},{ADPGK,1491,493,185},{ADH1C,1128,376,142},{PGM2,1839,622,182},{DLAT,1944,659,197},{GALM,1029,356,136},{ALDH3A2,1458,549,194},{G6PC3,1041,437,191},{G6PC2,1068,454,156},{ENO4,1878,819,363}}
Over half of them are missing because there are a lot of salmon genes that are unnamed or have no shared names with the human genes. The dissimilarity ratio ranges from 0% to 44% in chimpanzees and 21% to 44% in salmon. This is expected, of course, because salmon are less similar to us than chimpanzees. Now let's ignore the annotated names and just find the salmon gene ID that is the most similar to each human gene based on finding all pairwise edit distances in the set.
Table[
  Join[{humanGene[["Names", 1]], StringLength@humanGene[["NTSeq"]]},
   First@SortBy[
     KeyValueMap[{EditDistance[humanGene[["NTSeq"]], #2[["NTSeq"]]], #} &,
      salmonGeneInfo], First]],
  {humanGene, humanGeneInfo}] // SortBy[#[[3]]/#[[2]] &]
{{ENO1,1305,262,sasa:100194865},{GAPDH,1008,210,sasa:106575942},{ALDOA,1095,233,sasa:106583908},{PGAM2,762,165,sasa:106589759},{ENO3,1305,284,sasa:100196671},{ENO2,1305,285,sasa:106576545},{PGAM1,765,169,sasa:100194748},{PKM,1596,356,sasa:100195460},{PGK1,1254,283,sasa:106568020},{HK2,2754,623,sasa:106585103},{ALDOC,1095,250,sasa:106612788},{PGAM4,765,175,sasa:100194748},{ALDH2,1554,356,sasa:106578507},{HK1,2754,634,sasa:106587516},{GPI,1677,397,sasa:100196524},{GCK,1398,344,sasa:106585167},{PGK2,1254,316,sasa:100194765},{PDHB,1080,274,sasa:106566255},{FBP1,1017,259,sasa:106561989},{PCK1,1869,477,sasa:100195420},{ALDH7A1,1620,415,sasa:100194754},{PFKP,2355,604,sasa:100380410},{PFKM,2343,613,sasa:106566997},{HKDC1,2754,739,sasa:106612435},{PGM1,1689,457,sasa:106568718},{FBP2,1020,276,sasa:106593443},{ADH5,1125,305,sasa:100195992},{ALDH1A3,1539,421,sasa:106562593},{ALDH1B1,1554,427,sasa:106578507},{ALDOB,1095,301,sasa:100136522},{PDHA1,1173,335,sasa:106590467},{PFKL,2343,682,sasa:106582566},{ACSS2,2106,623,sasa:106566799},{ALDH9A1,1557,468,sasa:100195073},{DLD,1530,461,sasa:106561021},{LDHA,999,302,sasa:106573043},{PCK2,1923,582,sasa:100195420},{LDHB,1005,308,sasa:106609123},{G6PC,1074,333,sasa:106579337},{AKR1A1,978,306,sasa:106584055},{BPGM,780,247,sasa:100196266},{PDHA2,1167,370,sasa:106563586},{ACSS1,2070,662,sasa:106576722},{ADH1B,1128,361,sasa:106611602},{TPI1,861,280,sasa:106569985},{ADH1C,1128,369,sasa:106611602},{PKLR,1725,565,sasa:100195460},{ADPGK,1491,493,sasa:106560768},{ADH1A,1128,374,sasa:106611602},{PGM2,1839,622,sasa:106595213},{DLAT,1944,659,sasa:106604067},{GALM,1029,356,sasa:106608200},{LDHAL6A,999,350,sasa:106573043},{LDHC,999,351,sasa:106573043},{ADH4,1143,404,sasa:106611602},{ADH7,1161,414,sasa:106611602},{ALDH3B1,1407,506,sasa:106606778},{HK3,2772,1003,sasa:106585103},{ALDH3B2,1158,427,sasa:106606760},{ADH6,1107,415,sasa:100195992},{ALDH3A2,1458,549,sasa:100286782},{GAPDHS,1227,463,sasa:106577739},{LDHAL6B,1146,433,sasa:106573069},{G6PC2,1068,410,sasa:106579337},{ALDH3A1,1362,531,sasa:100286782},{G6PC3,1041,437,sasa:106589637},{ENO4,1878,819,sasa:106606505},{MINPP1,1464,642,sasa:106589393}}
I looked up several of these gene IDs in KEGG and then followed the links to the NCBI gene database. Many of them have names with "-like" appended to say it is like another gene or enzyme. Also, for example, the salmon doesn't have genes listed for enolase 1 or 2 (it does have 3 and 4), but it has one called alpha-enolase that is more similar to the human enolase 1 than any other pair of human to salmon genes in the set. It's also interesting that the salmon PGAM1 gene is slightly more similar to the human PGAM2 gene than to the human PGAM1 gene. Matching the full set of human genes to salmon had hardly any effect on the range of dissimilarity ratios. It changed from 21%-44% to 20%-44%. So the maximum dissimilarity in salmon remained at 44% even when considering unnamed salmon genes.
And we'll end it there for now. I've attached a pretty messy notebook. Normally, as I make multiple passes over the code, I move and organize cells toward the top, but in this case the notebook is pretty raw. It also has some scratch work where I was parsing chemical reactions from other KGML files and looking at the number of genes involved in each reaction between the species. Until next time!
[1]: http://www.genome.jp/kegg/pathway.html
[2]: https://en.wikipedia.org/wiki/Sonic_hedgehog_%28protein%29
[3]: https://en.wikipedia.org/wiki/RNA_interference
[4]: https://en.wikipedia.org/wiki/Promoter_%28genetics%29
[5]: https://en.wikipedia.org/wiki/Activator_%28genetics%29
[6]: https://en.wikipedia.org/wiki/Glycolysis
[7]: https://www.ncbi.nlm.nih.gov/gene?term=%28eno3%5Bgene%5D%29%20AND%20%28Homo%20sapiens%5Borgn%5D%29%20AND%20alive%5Bprop%5D%20NOT%20newentry%5Bgene%5D&sort=weight
[8]: https://www.ncbi.nlm.nih.gov/gene?term=%28g6pc3%5Bgene%5D%29%20AND%20%28Homo%20sapiens%5Borgn%5D%29%20AND%20alive%5Bprop%5D%20NOT%20newentry%5Bgene%5D&sort=weightMichael Hale2018-03-23T19:07:50ZThe Scoville scale of peppers (151st birth anniversary of W.L. Scoville)
https://community.wolfram.com/groups/-/m/t/777535
Today is the 151st birth anniversary of [Wilbur Lincoln Scoville][1], who is best known for his [Scoville Organoleptic Test][2] that is used to measure hotness of peppers.
Born in Bridgeport, Connecticut, on January 22, 1865, Wilbur Lincoln Scoville was a chemist, award-winning researcher, professor of pharmacology and the second vice-chairman of the American Pharmaceutical Association.
![enter image description here][3]
First, I generated a dataset of peppers from an Excel file available on the [Meadow View Growers][4] website. The images were obtained using [BingSearch][5], which connects to the Bing Search API from the Wolfram Language.
bs = ServiceConnect["BingSearch"];
ServiceExecute["BingSearch","Search",{"Query"->" pepper","SearchType"->"Images","MaxItems"->1,"Elements"->"Thumbnails"}]
![pepper][6]
peppers =
Dataset[<|"variety" -> First[#],
"image" ->
ImageResize[First@ServiceExecute["BingSearch",
"Search", {"Query" -> First[#] <> " pepper",
"SearchType" -> "Images", "MaxItems" -> 3,
"Elements" -> "Thumbnails"}],100],
"days" ->
Quantity[
If[StringQ[Part[#, 2]], Last@StringSplit[Part[#, 2], "-"],
Part[#, 2]], "days"], "type" -> Part[#, 3],
"scoville" ->
Interpreter["Integer"][Last@Quiet@StringSplit[Part[#, 4], "-"]],
"comments" -> Last[#]|> & /@
Import["http://www.meadowview.com/wp-content/uploads/2012/08/\
PepperChart.xls"][[1, 3 ;; 35]]]
![enter image description here][7]
Finally, I visualize the Scoville scale of the peppers using a ListLogPlot:
ListLogPlot[
MapThread[
Tooltip[# + 1, TableForm[#2]] &, {Normal[peppers[All, "scoville"]],
MapThread[{Style[#1, Bold, 16, Red],
Style[#2 "Scoville Heat Units (SHU)", Bold], #3} &, {Normal[
peppers[All, "variety"]], Normal[peppers[All, "scoville"]],
Normal[peppers[All, "image"]]}]}], ImageSize -> 800,
PlotRange -> All, AxesLabel -> {None, "SHU"}, PlotMarkers -> Style["j", 34, Red, Bold],
PlotLabel ->
Style["The Scoville Scale", 32, RGBColor[1, 0.05, 0, 0.85], Bold],
Ticks -> {MapThread[{#1,
Rotate[Style[#2 , FontSize -> 12], 85 Degree]} &, {Range[33],
Normal[peppers[All, "variety"]]}], Automatic}]
Check out the notebook attached and the dataset peppers.m in order to explore the interactive visualization.
[1]: https://en.wikipedia.org/wiki/Wilbur_Scoville
[2]: https://en.wikipedia.org/wiki/Scoville_scale#Scoville_organoleptic_test
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScovilleHeat.gif&userId=95400
[4]: http://meadowview.com/vegetables/
[5]: https://reference.wolfram.com/language/ref/service/BingSearch.html
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScovilleHeatPost.png&userId=95400
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScovilleHeatData.png&userId=95400Jofre Espigule2016-01-22T23:30:41ZSolve eight order polynomial with variable coefficients?
https://community.wolfram.com/groups/-/m/t/1611146
I want to solve the following eighth-order polynomial and obtain x as a function of alpha.
Aj=Aj(alpha)
A8 * x^8 + A7 * x^7 + A6 * x^6 + A5 * x^5 + A4 * x^4 + A3 * x^3 + A2 * x^2 + A1 * x + A0 = 0
How do I do this? Is it possible?
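A degree-eight polynomial generally has no closed-form roots, so one common approach is to solve numerically for each value of alpha (or to work with symbolic `Root` objects). A minimal sketch, using placeholder coefficient functions that stand in for your actual Aj(alpha):

```mathematica
(* Hypothetical coefficient functions -- replace with your actual Aj(alpha) *)
A[j_][alpha_] := 1 + j alpha;
poly[x_, alpha_] := Sum[A[j][alpha] x^j, {j, 0, 8}];

(* All roots for a given numeric value of alpha *)
roots[alpha_?NumericQ] := x /. NSolve[poly[x, alpha] == 0, x];
roots[0.3]
```

You can then map `roots` over a range of alpha values to trace how each root moves.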
Thanks,Isa Comez2019-02-12T08:52:27ZAvoid a syntax error while calculating jet velocities?
https://community.wolfram.com/groups/-/m/t/1610724
I keep receiving this syntax error. Anyone have an idea as to why?Dan Rivera2019-02-11T20:18:16ZFrustration Solitaire
https://community.wolfram.com/groups/-/m/t/1609558
## Frustration Solitaire ##
Frustration solitaire is a game with roots stemming from the early 1700s. The rules of the game are simple: a dealer calls out the ranks of cards in order $\textit{Ace, Two, Three, . . .}$ and so on. At the same time, the dealer draws a card from a sufficiently well-shuffled deck. If the rank of the card drawn matches the rank the dealer calls out, you lose the game.
![cards][1]
The ranks of the cards the dealer would have called out are $\textit{Ace, Two, Three, Four, Five}$. Since the fifth card has rank five, we lose.
Let's programme a game of frustration solitaire.
We start by creating an array that corresponds to the ranks of the cards the dealer calls out.
dealer = Flatten[Table[Range[1, 13], 4]]
Next, we need to simulate a well shuffled deck of cards. Using the function `RandomSample[]` we can easily "shuffle" the deck of cards.
shuffle = RandomSample[Flatten[Table[Range[1, 13], 4]]]
Combine the lists using `Transpose[]` to get our very own game of frustration solitaire.
In[1]:= fs =
Transpose[{Flatten[Table[Range[1, 13], 4]],
RandomSample[Flatten[Table[Range[1, 13], 4]]]}]
Out[1]= {{1, 11}, {2, 9}, {3, 8}, {4, 9}, {5, 8}, {6, 5}, {7, 9}, {8,
6}, {9, 5}, {10, 2}, {11, 4}, {12, 13}, {13, 5}, {1, 10}, {2,
7}, {3, 12}, {4, 13}, {5, 1}, {6, 12}, {7, 4}, {8, 1}, {9, 2}, {10,
7}, {11, 10}, {12, 13}, {13, 10}, {1, 8}, {2, 3}, {3, 9}, {4,
11}, {5, 3}, {6, 3}, {7, 10}, {8, 8}, {9, 6}, {10, 5}, {11, 2}, {12,
7}, {13, 11}, {1, 12}, {2, 12}, {3, 6}, {4, 3}, {5, 1}, {6, 1}, {7,
7}, {8, 2}, {9, 13}, {10, 4}, {11, 6}, {12, 4}, {13, 11}}
Let's see if we have won:
In[2]:= w1 =
If[Part[fs[[#]], 1] == Part[fs[[#]], 2], 1, 0] & /@
Range[Length[fs]];
In[3]:= If[Length[DeleteCases[0]@w1] == 0,
"YOU WIN!", "YOU LOSE."]
Out[3]= "YOU LOSE."
Now we shouldn't feel *too* bad about losing. The name "frustration" solitaire stems from the fact that the probability of winning is actually very low. In 2009, Doyle et al. found that the probability of winning a game of frustration solitaire is approximately $1.62\%$. They worked this out by framing the question within the world of combinatorics: the probability of winning a game of frustration solitaire equals the number of rank derangements (i.e. the number of permutations with no rank-fixed points) divided by $52!$ (i.e. the total number of permutations of a deck of cards).
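For comparison, the count of rank derangements can be computed exactly via the standard rook-polynomial form of the inclusion-exclusion argument (a sketch; `ell` below is the textbook polynomial for one 4x4 rank block, not code from the post):

```mathematica
(* Rook polynomial for one rank block: Sum_j (-1)^j C(4,j)^2 j! t^(4-j) *)
ell[t_] = Sum[(-1)^j Binomial[4, j]^2 j! t^(4 - j), {j, 0, 4}];

(* Number of rank derangements of the full deck, then the win probability *)
nDerangements = Integrate[Exp[-t] ell[t]^13, {t, 0, Infinity}];
N[nDerangements/52!]  (* approximately 0.0162 *)
```

This agrees with the $1.62\%$ figure quoted above.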
Let's generate 100,000 games of frustration solitaire and see how close we can get to the estimate Doyle et al. produced.
In[4]:= trials =
Table[s =
Transpose[{Flatten[Table[Range[1, 13], 4]],
RandomSample[Flatten[Table[Range[1, 13], 4]]]}];
If[Length[
DeleteCases[
If[Part[s[[#]], 1] == Part[s[[#]], 2], 1, 0] & /@
Range[Length[s]], 0]] == 0, 0, 1], 100000];
In[5]:= winning = (1 - Total[trials]/100000)*100// N
Out[5]= 1.61
In our 100,000 games of frustration solitaire we won $1.61\%$ of the time, hence the title of "frustration" solitaire is very well deserved.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=c.png&userId=1598258William Rudman2019-02-09T21:11:54ZPleistocene-Holocene Sea Level Change and U.K./European Coastlines
https://community.wolfram.com/groups/-/m/t/1609980
![Plot of sea level over the last 2.5 millions years and its effect on European and U.K. Coastlines][1]
The Pleistocene Epoch ended 12,000 years ago, making way for the Holocene Epoch, which continues today. During the peak of the last glacial episode, tremendous amounts of water were locked away in glaciers, which caused sea levels to drop. As the Pleistocene came to a close, the glaciers melted and sea levels rose again. The following exploration uses data from De Boer, B., R.S.W. Van de Wal, R. Bintanja, L.J. Lourens and E. Tuenter, "Cenozoic global ice-volume and temperature simulations with 1-D ice-sheet models forced by benthic d18O records", Annals of Glaciology, 51 (55), 23-33, 2010.
deboer = Import[
"http://www.staff.science.uu.nl/~boer0160/Model_output/BdeBoer_\
etal_1Dmodel_output.txt", "Lines"];
deboerdata = ToExpression[StringSplit /@ deboer[[20 ;;]]];
sealeveldata = {1000*#[[1]], #[[2]]} & /@ deboerdata[[All, {1, 2}]];
Over the last 40 million years, the global temperatures and sea levels have changed. Prior to 2.5 million years ago, the global sea level tended to be higher than it is today as global temperatures were warmer.
ListPlot[sealeveldata, Joined -> True, Frame -> True, PlotRange -> All,
FrameLabel -> {{None,
"Sea Level Change (m)"}, {"Millions of Years Ago", None}},
FrameTicks -> {{Table[{i, i}, {i, -140, 80, 20}],
Table[{i, i}, {i, -140, 80,
20}]}, {Table[{i, -i/1000000}, {i, -40000000, 0, 5000000}],
Table[{i, -i/1000000}, {i, -40000000, 0, 5000000}]}},
ImageSize -> 550]
![global sea level changes over the last 40 million years][2]
Starting about 2.5 million years ago, as the Pleistocene Epoch was starting, global temperatures started to drop and permanent ice started to form at the poles. We entered an ice age. During the Pleistocene, temperatures varied wildly and glaciers advanced and retreated in many pulses. Today, there is still permanent ice at the poles so we are still in an ice age. But we are between cold pulses, so we are in an interglacial period within the current ice age.
ListPlot[sealeveldata, Joined -> True, Frame -> True,
PlotRange -> {{-2500000, 0}, {-150, 10}},
FrameTicks -> {{Table[{i, i}, {i, -140, 0, 20}],
Table[{i, i}, {i, -140, 0,
20}]}, {Table[{i, -i/1000}, {i, -2500000, 0, 200000}],
Table[{i, -i/1000}, {i, -2500000, 0, 200000}]}},
FrameLabel -> {{None,
"Sea Level Change (m)"}, {"Thousands of Years Ago", None}},
ImageSize -> 550]
![global sea level changes over the last 2.5 million years][3]
We can focus on the peak of the last glacial episode to find the lowest sea level, which occurred about 20,000 years ago, when the glaciers locked up a large amount of water from the oceans.
ListPlot[sealeveldata, Joined -> True, Frame -> True,
PlotRange -> {{-24000, 0}, {-150, 10}},
FrameTicks -> {{Table[{i, i}, {i, -140, 0, 20}],
Table[{i, i}, {i, -140, 0, 20}]}, {Table[{i, -i/1000}, {i, -24000,
0, 2000}], Table[{i, -i/1000}, {i, -24000, 0, 2000}]}},
FrameLabel -> {{None,
"Sea Level Change (m)"}, {"Thousands of Years Ago", None}},
ImageSize -> 550]
![global sea level changes over the last 24,000 years][4]
For more continuous coverage, we can interpolate the data.
seaint = Interpolation[sealeveldata, InterpolationOrder -> 1];
To see the effects that changing sea levels had on the coast of the U.K. and Europe, first we need to obtain the elevation data using GeoElevationData. The following form works nicely using the forthcoming version 12:
data = GeoElevationData[
Entity["City", {"London", "GreaterLondon", "UnitedKingdom"}],
GeoRange -> Quantity[800, "Miles"], GeoProjection -> Automatic,
UnitSystem -> "Metric"];
You can get similar results from 11.3 and earlier using the following definition:
data = GeoElevationData[
GeoDisk[Entity[
"City", {"London", "GreaterLondon", "UnitedKingdom"}],
Quantity[800, "Miles"]], UnitSystem -> "Metric"];
The following variations all use the data generated from version 12. At the greatest extent of the glacial event, the U.K. was connected to mainland Europe. The connecting land is often referred to as [Doggerland][5]. As long as the U.K. and Europe were connected, hunter-gatherers were able to move back and forth.
With[{ybp = -22000},
ReliefPlot[Reverse[data],
PlotRange -> {Full, Full, {seaint[ybp], All}}, ImageSize -> 600,
ColorFunction -> "FallColors", ClippingStyle -> RGBColor[0, .2, .5],
PerformanceGoal -> "Quality",
PlotLabel -> Row[{-Round[ybp], " years ago"}],
Epilog -> {Text[Style["Doggerland", 12, White], {320, 250}]},
FrameTicks -> All]]
![sea level effects on the U.K. and Europe 22,000 years ago][6]
By about 10,500 years ago (8,500 BCE), sea levels had risen to the point that the U.K. had basically separated from the European mainland. A small rise known as the Dogger Bank was still exposed as an island.
With[{ybp = -10500},
ReliefPlot[Reverse[data],
PlotRange -> {Full, Full, {seaint[ybp], All}}, ImageSize -> 600,
ColorFunction -> "FallColors", ClippingStyle -> RGBColor[0, .2, .5],
PerformanceGoal -> "Quality",
PlotLabel -> Row[{-Round[ybp], " years ago"}],
Epilog -> {Text[Style["Dogger Bank", 12, White], {300, 300}]}]]
![sea level effects on the U.K. and Europe 10,500 years ago][7]
By 9,000 years ago (7,000 BCE), the Dogger Bank had also submerged.
With[{ybp = -9000},
ReliefPlot[Reverse[data],
PlotRange -> {Full, Full, {seaint[ybp], All}}, ImageSize -> 600,
ColorFunction -> "FallColors", ClippingStyle -> RGBColor[0, .2, .5],
PerformanceGoal -> "Quality",
PlotLabel -> Row[{-Round[ybp], " years ago"}]]]
![sea level effects on the U.K. and Europe 9,000 years ago][8]
An animated version can be created as well. [Here is a link][9] to one I created showing the variation in sea level over the last 500,000 years.
Time will tell if we will enter a new glacial period and see the glaciers once again advance as they have many times in the last 2.5 million years.
[![enter image description here][10]][9]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Row2.png&userId=25355
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=FullData.png&userId=25355
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Pleistocene.png&userId=25355
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=GlacialMaximum.png&userId=25355
[5]: https://en.wikipedia.org/wiki/Doggerland
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3616Doggerland.png&userId=25355
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=DoggerBank.png&userId=25355
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Recent.png&userId=25355
[9]: https://vimeo.com/316422438
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-02-11at5.19.57PM.png&userId=20103Jeff Bryant2019-02-10T20:39:02ZRegionMember - RegionPlot3D vs ImplicitRegion
https://community.wolfram.com/groups/-/m/t/1601865
My plan is to effect a 3D extrusion by forming a cross section region in the x-y plane, R, say, then use RegionMember[R,{x,y}] as part of the implicit specification of a 3D region extending from zMin to zMax.
I'll start with what appears to work. I picked a regular pentagon cross section, and generated from it an object that has Head of MeshRegion and returns True from MeshRegionQ (a condition not guaranteed, I've found).
xSection = DiscretizeRegion[RegularPolygon[5]]
{Head[#], MeshRegionQ[#]}&@% // TableForm
When I use membership in xSection as part of the specification of bounds for a RegionPlot3D, a Graphics3D is returned, and that can be converted to an apparently legal MeshRegion using DiscretizeGraphics[] thus:
DiscretizeGraphics[
RegionPlot3D[
RegionMember[xSection, {x, y}]
, {x, -1., 1.}
, {y, -1., 1.}
, {z, -0.5, 0.5}
]
]
{Head[#], MeshRegionQ[#]}&@% // TableForm
The return is a presentable prismatic extrusion, subject to the limitations of RegionPlot3D.
Applying the form to ImplicitRegion fails, though.
ImplicitRegion[
RegionMember[xSection, {x, y}]
, {x, y, {z, -0.5, 0.5}}
]
ImplicitRegion::bcond: RegionMember[,{x,y}] should be a Boolean combination of equations, inequalities, and Element statements.
I sort of get that RegionMember returns True/False if I give it a numeric point (I suppose that's how RegionPlot3D works), but the documentation also says it "gives conditions for the point {x,y,&#8230;} to be a member of reg." It seems like that should be a "Boolean combination of equations."
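For what it's worth, `ImplicitRegion` does seem to accept the conditions `RegionMember` returns for regions with an exact symbolic description; the catch appears to be that a `MeshRegion` produced by `DiscretizeRegion` is purely numeric, so `RegionMember` can only hand back an opaque `RegionMemberFunction`. A sketch with a disk (behavior for other symbolic regions is assumed to be similar):

```mathematica
(* A symbolic region yields an explicit inequality... *)
RegionMember[Disk[], {x, y}]  (* x^2 + y^2 <= 1 *)

(* ...which ImplicitRegion accepts, giving the extrusion directly *)
ImplicitRegion[RegionMember[Disk[], {x, y}], {x, y, {z, -0.5, 0.5}}]
```

If that's the issue, using `RegularPolygon[5]` directly (without discretizing it first) may likewise give explicit conditions, though I haven't verified that here.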
RegionMember[reg] returns a RegionMemberFunction, but that doesn't seem to work as I expected either.
rmf = RegionMember[xSection]
ImplicitRegion[
rmf[{x, y}]
, {x, y, {z, -0.5, 0.5}}]
ImplicitRegion::bcond: RegionMemberFunction[,2,Region`Mesh`CanonicalDistance[True]][{x,y}] should be a Boolean combination of equations, inequalities, and Element statements.
What is it about this I don't understand?
Cheers,
FredFred Klingener2019-01-30T15:29:12ZLabeled Cube
https://community.wolfram.com/groups/-/m/t/1610082
How to label faces of a cube with numbers 1 to 6? First, I'll turn text outlines into polygons.
SymbolToPolygon[symbol_, dimension_: "3D"] :=
Module[{pol, poly, index, pos, minmax, diffs, poly2D},
pol = (Cases[ImportString[ExportString[symbol, "PDF"], "PDF"], _FilledCurve, \[Infinity]][[1, 2]]);
poly = pol[[1]];
index = 2;
While[index <= Length[pol],
pos = Position[poly, Nearest[poly, pol[[index, 1]]][[1]]][[1, 1]] ;
poly = Join[Take[poly, pos], Reverse[pol[[index]]], Drop[poly, pos - 1]];
index++];
minmax = MinMax /@ Transpose[poly];
diffs = #[[2]] - #[[1]] & /@ minmax;
poly2D = (# - Mean /@ minmax)/Max[diffs] & /@ poly;
If[dimension === "2D", Polygon[poly2D], Polygon[Append[#, 1] & /@ poly2D]]]
After that, rotations and the cube
tab={{0,{0,1,0}},{Pi/2,{0,1,0}},{Pi/2,{1,0,0}},{-Pi/2,{1,0,0}},{-Pi/2,{0,1,0}},{Pi,{0,1,0}} };
Graphics3D[Table[Polygon[SymbolToPolygon[ToString[n]][[1]].RotationMatrix[tab[[n,1]],
tab[[n,2]]]],{n,1,6}]]
![labeled cube][1]
Can anyone improve on that?
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=labeledcube.jpg&userId=21530Ed Pegg2019-02-11T18:12:10ZWhere is Transpose of Dataset documented?
https://community.wolfram.com/groups/-/m/t/1605601
Hello,
I stumbled across the fact that a Dataset can be transposed (driven by need and desperation). I am not ready with an example, but could produce one with more time.
If this is documented, where?
Thank you,Vincent Virgilio2019-02-04T13:18:21ZMaximize the solution of an equation containing an integral?
https://community.wolfram.com/groups/-/m/t/1610261
I have to find `{x,y}` which makes the integral
NIntegrate[(1/(E^((x^2 - 2*x*d + d^2 + y^2 )/(2*(d + r^2)))*
(Sqrt[d]*(d + r^2)))), {d, 0, Infinity}]
equal to `Pi^0.5/Ry`. Among all the possible solutions, I am interested in the one which maximises y, with the constraint `y>0`. I also have a good starting point for y. The problem has to be solved for different values of `r`, say from 0 to 20, and `Ry`, say from 10^-7 to 10^7.
I have set the problem in this way:
f2[x_?NumberQ, y_?NumberQ, r_?NumberQ] :=
NIntegrate[(1/(E^((x^2 - 2*x*d + d^2 + y^2 )/(2*(d + r^2)))*
(Sqrt[d]*(d + r^2)))), {d, 0, Infinity}];
solu2 = Table[
FindMaximum[{y, f2[x, y, r] == Sqrt[\[Pi]]/Ry, y > 0}, {x, y}], {r,
ranger}, {Ry, rangeRy}]
Unfortunately, `NIntegrate` fails to converge to the solution for all the values of `r` and Ry.
Any help?umby piscopo2019-02-11T11:24:01ZSolve numerically a diffusion equation with fully reflecting wall?
https://community.wolfram.com/groups/-/m/t/1610358
I am trying to solve numerically the diffusion equation $\partial_t P(x,t)=\partial_x^2 P(x,t)+ \partial_x V'(x)P(x,t)$. I have a potential that diverges at zero: $V(x)=4((1/x^4)-(1/x^2))$, therefore, I want to set a reflecting wall at, say xc=0.5, and solve only for x>xc.
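For reference, a reflecting wall in a Fokker-Planck equation is usually expressed as a single zero-flux (Robin) condition rather than separate Neumann and Dirichlet conditions. Writing the equation above in continuity form (same signs as just stated):

$$\partial_t P = \partial_x\left[\partial_x P + V'(x)\,P\right] = -\partial_x J, \qquad J(x,t) = -\partial_x P(x,t) - V'(x)\,P(x,t),$$

so a reflecting wall at $x_c$ means $J(x_c,t)=0$, i.e. $\partial_x P(x_c,t) + V'(x_c)\,P(x_c,t) = 0$, rather than imposing both $\partial_x P = 0$ and $P = 0$ there.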
1. In the code below, you will see my unsuccessful attempt at placing these boundary conditions.
2. Since I found that I cannot use DiracDelta and HeavisideTheta functions to set my initial condition, I use instead $Pinit(x)=\exp(-(x-8)^2)/\sqrt{\pi}$, which has a negligible contribution from x<=0.
3. It seems that even though mathematically I believe I am setting a reflecting-wall condition, which should not allow any flow into the region x<xc, numerically this still happens. Eventually it makes the final solution u[x,T] incorrectly normalized.
How do I achieve my goal above, to solve this equation only on the positive half-plane?
The following code shows a negative part for $u(x,T)$, which should not exist, and I believe it is responsible for $\int_0^\infty u(x,T)dx\neq1$.
Needs["DifferentialEquations`InterpolatingFunctionAnatomy`"]
V[x] = ((1/x)^4 - (1/x)^2) 4;
F[x] = -4 (-(4/x^5) + 2/x^3)
x0 = 8;
Pinit[x] = Exp[-(x - x0)^2]/(Sqrt[Pi]);
T = 1000;
BoundaryCondition = 50;
mol[n_Integer, o_: "Pseudospectral"] := {"MethodOfLines",
"SpatialDiscretization" -> {"TensorProductGrid", "MaxPoints" -> n,
"MinPoints" -> n, "DifferenceOrder" -> o}}
uval = NDSolveValue[{D[u[x, t], t] + D[F[x]*u[x, t], x] -
D[u[x, t], x, x] == 0,
u[x, 0] == Pinit[x], (D[u[x, t], x] /. x -> 0.5) == 0,
u[0.5, t] == 0}, u, {x, 0.5, BoundaryCondition}, {t, 0, T},
Method -> mol[2000, 4]];
Plot[{uval[x, T]}, {x, -5, 5}, PlotRange -> All,
PlotStyle -> {Automatic, {Thick, Dashed}}]Erez A2019-02-11T10:08:16ZCreate GUI Button for the Manipulate to export selected data?
https://community.wolfram.com/groups/-/m/t/1609290
Hi there,
Can someone provide an example of how to include a button in the `Manipulate` command that would export the data you select?
SetOptions[
Plot, {ImageSize -> 400, AspectRatio -> 0.2,
ImagePadding -> {{100, 1}, {10, 10}}, Frame -> True}];
Manipulate[
Column[{Plot[x, {x, 1, 4}, Epilog -> {Line[{{y, 0}, {y, 10}}]}],
Plot[x^z/4^z, {x, 1, 4}, Epilog -> {Line[{{y, 0}, {y, 10}}]} ]}],
{y, 1, 4}, {z, 1, 10, 1}]
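One minimal sketch of what this could look like (the file name and CSV format here are my assumptions, not from the post):

```mathematica
Manipulate[
 Column[{
   Plot[x, {x, 1, 4}, Epilog -> {Line[{{y, 0}, {y, 10}}]}],
   Plot[x^z/4^z, {x, 1, 4}, Epilog -> {Line[{{y, 0}, {y, 10}}]}],
   (* Button evaluates Export with the current control values *)
   Button["Export selection",
    Export["selection.csv", {{"y", "z"}, {y, z}}]]
   }],
 {y, 1, 4}, {z, 1, 10, 1}]
```

Pressing the button writes the current `{y, z}` selection to a CSV file in the current working directory each time it is clicked.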
ThanksT P2019-02-10T01:10:43Z