Community RSS Feed
https://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag, sorted by activity.

SOLVING NONLINEAR EQUATION WITH INTEGRALS AND BETA FUNCTIONS
https://community.wolfram.com/groups/-/m/t/3179575
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=GINI.jpg&userId=3179559
Hello! I am new to Mathematica. Does anybody here know how to solve this equation (for alpha)? I find it very difficult. I have tried to use NSolve[ ] and Solve[ ], but neither works.
MUHAMMAD FAHEM BIN MUSA (2024-05-21T02:31:51Z)

Different result integrating with integer bounds vs floating point bounds
https://community.wolfram.com/groups/-/m/t/3178350
I found an interesting bug (I think) involving the difference between using integer bounds and floating point bounds when integrating in Mathematica:
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=mathematicabug.PNG&userId=3178314
Why is it that this integral evaluates to positive pi when using integer bounds but -pi when using floating point bounds? What am I missing?
Baiza Mand (2024-05-17T15:55:24Z)

$ProcessorCount isn't equal to the real number of CPUs
https://community.wolfram.com/groups/-/m/t/3178956
I'm running Mathematica 13.2 on a CentOS 7 machine with two AMD Epyc 9754 CPUs, but $ProcessorCount = 12, which is much less.
The shell command "lscpu" and the C function sysconf(_SC_NPROCESSORS_ONLN) in <unistd.h> both report 256 CPUs, and I'm sure the program is running on the computing node.
I made some checks, such as SetSystemOptions["ParallelOptions" -> "ParallelThreadNumber" -> 256], and $MaxLicenseProcesses gives Infinity, but they don't help.
This problem is complicated; maybe the key is how Mathematica determines the number of CPUs. I guess it may be associated with the MKL library, but I don't know how to test that.
Thanks for help and discussion.
Shepard WQC (2024-05-19T10:37:00Z)

Dynamical Hall responses of disordered superconductors
https://community.wolfram.com/groups/-/m/t/3179263
![Proposed setup for the measurement of the Hall effect. Materials subjected to a magnetic field show circular birefringence, i.e. left and right polarized light waves propagate with different velocities. Below is Hall response of a superconductor for different temperatures.][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=10740Lead.png&userId=20103
[2]: https://www.wolframcloud.com/obj/83cf9702-e3b0-4c31-9e49-35a7a8af2d93
Alberto Hijano (2024-05-20T17:44:43Z)

Quantum research and education: unveiling Mathematica's superiority
https://community.wolfram.com/groups/-/m/t/3178293
![Quantum qubit states and probabilities using Mathematica's quantum framework][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=quantummathematica.png&userId=20103
[2]: https://www.wolframcloud.com/obj/88ca7f05-28c8-4f26-9e76-1efe24135e1a
Mads Bahrami (2024-05-18T00:10:49Z)

Simple equations that approximate semiPrime factors. Lots of them. Graphing to find estimate.
https://community.wolfram.com/groups/-/m/t/3161100
The middle equation crosses the x-axis (y = 0) at x = 41227, the smaller prime factor of pnp.
These equations approximate x and y, where pnp = x*y.
The larger the pnp, however, the more factors there are to test. But the graph gives a starting point.
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/df083bc4-c1fe-4cf0-99ac-83c97cfb00f2
Bobby Joe Snyder (2024-04-19T21:20:30Z)

Error using NDSolve
https://community.wolfram.com/groups/-/m/t/3177970
This is my code:
![enter image description here][1]
An error pops up that says "Equation or list of equations expected instead of True in the first argument".
Why does this happen, and how do I fix it?
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-18at10.35.16%E2%80%AFam.png&userId=3177956
J McDonald (2024-05-18T00:34:04Z)

NCaputoD function is based on Caputo L1 scheme or Caputo predictor-corrector scheme?
https://community.wolfram.com/groups/-/m/t/3179165
May I know if the built-in NCaputoD function in Mathematica is based on the Caputo L1 scheme or the Caputo predictor-corrector scheme?
G Velmurugan (2024-05-20T02:47:31Z)

Monte Carlo simulation for Lennard-Jones fluid equation of state
https://community.wolfram.com/groups/-/m/t/490664
![Direct observation of the ensemble energy profile, which makes it possible to determine whether the simulation cycle and trial numbers are sufficient for the system to reach equilibrium][1]
In the famous book "[Understanding Molecular Simulation: From Algorithms to Applications][2]" by D. Frenkel and B. Smit, there is a case study that uses Monte Carlo (MC) molecular simulation to compute the phase diagram of a Lennard-Jones fluid. The results are to be compared with data compiled by Johnson *et al.* <sup>[1]</sup>. The source code provided with the book, however, can only be compiled under Fortran 77 from the Linux command line. With Mathematica at hand, it is tempting to rewrite this code and gain a more intuitive and flexible viewpoint on the problem.
The LJ potential is given by
$$\phi(r)=4\epsilon \left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^6\right]$$
where $\phi$, $\epsilon$, $\sigma$ and $r$ are illustrated below.
![enter image description here][3]
First, there should be a box of lattice particles.
latticeDisplace[boxSize_?NumberQ,numParticle_Integer]:=
Block[{delta=boxSize/Ceiling@(numParticle^((1/3))),flag=1,position=ConstantArray[{0,0,0},numParticle]},
Do[
Do[
Do[{If[flag<=numParticle,position[[flag]]={xComponent,yComponent,zComponent}],flag++},{zComponent,delta/2.,boxSize,delta}],
{yComponent,delta/2.,boxSize,delta}],
{xComponent,delta/2.,boxSize,delta}];
Return[position];
];
At the cut-off distance $r_c$, one can choose to do either truncation $\phi(r)=0$ for $r>r_c$ or shift $\phi(r)=\phi(r)-\phi(r_c)$ for $r < r_c$. The virial potential is calculated as
$$Vir(r) = force \times r = -r\frac{d\phi(r)}{dr}=48\epsilon \left[\left(\frac{\sigma}{r}\right)^{12}-0.5\left(\frac{\sigma}{r}\right)^6\right]$$
ener[cutoffSquare_?NumberQ, distanceSquare_?NumberQ, \[Sigma]Square_?NumberQ, \[Epsilon]_?NumberQ, shift_]:=
Block[{energy, vir},
If[distanceSquare<cutoffSquare,
{energy=4.\[Epsilon]((\[Sigma]Square/distanceSquare)^6-(\[Sigma]Square/distanceSquare)^3)-If[shift,4.\[Epsilon]((\[Sigma]Square/cutoffSquare)^6-(\[Sigma]Square/cutoffSquare)^3),0],vir=48.\[Epsilon]((\[Sigma]Square/distanceSquare)^6-.5(\[Sigma]Square/distanceSquare)^3)},
{energy=0,vir=0}];
Return[{energy, vir}]
];
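A quick spot check of `ener[]` (my own illustrative call, not from the original post): at $r=\sigma$ the LJ energy vanishes while the virial is $48\epsilon(1-0.5)=24$.

```mathematica
(* r^2 = 1.0 < cutoff^2 = 6.25, sigma^2 = 1, epsilon = 1, no shift *)
ener[2.5^2, 1.0, 1.0, 1.0, False]
(* expected: {0., 24.} *)
```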
Then introduce the periodic boundary condition, applying the minimum-image convention whenever the distance between two particles exceeds half the box size.
potential[posi_, particleID_Integer, startParticle_Integer:1, {boxSize_?NumberQ, cutoffSquare_?NumberQ, \[Sigma]Square_, \[Epsilon]_, shift_}]:=
Block[{length=Range[startParticle, Length@posi],distanceSquare,(*{energy,vir}={0,0}*)energy=0, vir=0},
Do[
If[part!=particleID,
{distanceSquare=
If[Norm[posi[[particleID]]-posi[[part]]]^2>boxSize^2/4,
Norm[posi[[particleID]]-posi[[part]]/.dis_/;Abs@dis>boxSize/2->boxSize-Abs@dis]^2(*(boxSize-Norm[posi[[particleNum]]-posi[[part]]])^2*),
Norm[posi[[particleID]]-posi[[part]]]^2],
{energy,vir}+=ener[cutoffSquare, distanceSquare,\[Sigma]Square, \[Epsilon], shift]}],
{part, startParticle, Length@posi}];
Return[{energy, vir}]
];
The pressure tail at cut-off distance is
$$P_{tail}=\frac{16}{3} \pi \rho ^2 \sigma ^3 \epsilon \left[\frac{2}{3} \left(\frac{\sigma }{r_c}\right)^9-\left(\frac{\sigma }{r_c}\right)^3\right]$$
tailPressure[{cutoffSquare_?NumberQ, \[Sigma]Square_, \[Epsilon]_}, \[Rho]_?NumberQ]:=
Block[{\[Sigma]=Sqrt@\[Sigma]Square, rc=Sqrt@cutoffSquare, correctPressure},
correctPressure=16/3*\[Pi]*\[Epsilon]*\[Rho]^2*\[Sigma]^3 (2/3 (\[Sigma]/rc)^9-(\[Sigma]/rc)^3);
Return[correctPressure]
]
Similarly, the energy tail at cut-off is
$$\phi_{tail}=\frac{8}{3} \pi \rho \sigma ^3 \epsilon \left[\frac{1}{3} \left(\frac{\sigma }{r_c}\right)^9-\left(\frac{\sigma }{r_c}\right)^3\right]$$
tailEnergy[{cutoffSquare_?NumberQ, \[Sigma]Square_, \[Epsilon]_}, \[Rho]_?NumberQ]:=
Block[{\[Sigma]=Sqrt@\[Sigma]Square, rc=Sqrt@cutoffSquare, correctEnergy},
correctEnergy=8/3*\[Pi]*\[Epsilon]*\[Rho]*\[Sigma]^3 (1/3 (\[Sigma]/rc)^9-(\[Sigma]/rc)^3);
Return[correctEnergy]
]
With `potential[]` and `tailEnergy[]`, the total energy $\Phi$ of all particles inside the ensemble can be calculated.
totalEnergy[posi_,tailCor_,{boxSize_?NumberQ, cutoffSquare_?NumberQ, \[Sigma]Square_, \[Epsilon]_, shift_}]:=
Block[{length=Length@posi, totalEnergy=0, totalVir=0, \[Rho]},
Do[
{totalEnergy, totalVir}+=potential[posi,particle,particle,{boxSize, cutoffSquare, \[Sigma]Square, \[Epsilon], shift}]
,{particle, 1, length}];
If[tailCor,
{\[Rho]=length/boxSize^3,
totalEnergy+=length*tailEnergy[{cutoffSquare,\[Sigma]Square, \[Epsilon]},\[Rho]]}
];
Return[{totalEnergy, totalVir}]
]
For monatomic particles, only translational moves need to be considered; otherwise orientational moves must also be taken into account (rigid/nonrigid, linear/nonlinear). The Monte Carlo algorithm is:
- Randomly select one particle from the ensemble and calculate the system potential $\Phi_i$.
- Give the particle a random displacement $r' = r + \Delta$, and calculate the new system potential $\Phi_n$.
- Accept the trial move with probability $\min\left(1, e^{-(\Phi_n-\Phi_i)/kT}\right)$.
Code:
mcTranslate[posi_ , dr_, \[Beta]_,{boxSize_?NumberQ, cutoffSquare_?NumberQ, \[Sigma]Square_, \[Epsilon]_, shift_}]:=
Block[
{flag=RandomInteger[{1, Length@posi}], oldConf, oldPotential=0, oldVir=0, newPotential=0, newVir=0, config = posi, numAcc=0, potentialDiff=0, virDiff=0},
{oldPotential, oldVir}=potential[config, flag, 1, {boxSize, cutoffSquare, \[Sigma]Square, \[Epsilon], shift}];
oldConf = config[[flag]];
config[[flag]] += dr*RandomReal[{-.5, .5},{3}];
config[[flag]]=config[[flag]]/.dis_/;dis>boxSize->dis-boxSize;
config[[flag]]=config[[flag]]/.dis_/;dis<0->boxSize+dis;
{newPotential, newVir}=potential[config, flag, 1, {boxSize, cutoffSquare, \[Sigma]Square, \[Epsilon], shift}];
If[
RandomReal[]<Exp[-\[Beta](newPotential-oldPotential)],
{++numAcc,potentialDiff+=(newPotential-oldPotential), virDiff+=(newVir-oldVir)},
{config[[flag]]=oldConf}
];
Return[{numAcc, potentialDiff, virDiff, config}];
]
monteCarlo[posi_, attemp_, stepSpace_, temp_, {boxSize_?NumberQ, cutoffSquare_?NumberQ, \[Sigma]Square_, \[Epsilon]_, shift_}]:=
Block[{acceptNum=0, energyDif=0, virDif=0, configuration=posi, numAccept=0, difEnergy=0, difVir=0},
Nest[
(
{acceptNum, energyDif, virDif, configuration}=mcTranslate[#, stepSpace, 1/temp, {boxSize, cutoffSquare, \[Sigma]Square, \[Epsilon], shift}];
numAccept+=acceptNum;
difEnergy+=energyDif;
difVir+=virDif;
configuration
(*Print[{energy, vir, configuration}];*)
)&,
configuration,
attemp
];
Return[{numAccept/attemp//N, difEnergy, difVir, configuration}]
]
Another trick is to adjust the trial step $\Delta$ so that the acceptance ratio stays around 0.5 (or any target value between 0 and 1). If $\Delta$ is too large, more trials are rejected, while a value that is too small means a longer computation to reach equilibrium. Also, any new $\Delta$ should be compared with the box size to comply with the periodic boundary condition.
adjustStep[frac_, stepSpace_, targetFrac_:0.5, boxSize_]:=
Block[{stepOrigin=stepSpace, stepNew},
stepNew=stepSpace*frac/targetFrac;
If[stepNew/stepOrigin>1.5,stepNew=1.5stepOrigin];
If[stepNew/stepOrigin<.5,stepNew=.5stepOrigin];
If[stepNew>.5boxSize,stepNew=.5boxSize];
Return[stepNew];
]
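An illustrative call (my own, with assumed numbers): if only 30% of trials were accepted while targeting 50%, with a current step of 0.1 in a box of size 10, the step shrinks proportionally.

```mathematica
adjustStep[0.3, 0.1, 0.5, 10.]
(* expected: 0.06, i.e. 0.1 * (0.3/0.5); the 0.5x/1.5x and half-box clamps do not trigger here *)
```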
Finally, the thermodynamic properties can be calculated, including the specific energy and the pressure.
propertyThermo[posi_, ener_, vir_, temperature_, {boxSize_, cutoffSquare_?NumberQ, \[Sigma]Square_, \[Epsilon]_}, tailCor_:True]:=
Block[{averageEnergy=ener/Length@posi, volume=boxSize^3, density=Length@posi/boxSize^3, pressure},
(*Print[boxSize,volume];*)
pressure=Length@posi/volume*temperature+vir/(3*volume);
(*Print[pressure,volume];*)
If[tailCor, pressure += tailPressure[{cutoffSquare, \[Sigma]Square, \[Epsilon]}, density]];
Return[{averageEnergy, pressure}];
]
An intuitive animation enables direct observation of the ensemble energy profile, which makes it possible to determine whether the simulation cycle and trial numbers are sufficient for the system to reach equilibrium.
![enter image description here][4]
Parameters for the test run are $\rho=0.5$, $T=2$, and a particle number of 200. The MC runs 40 cycles, in each of which there are 1,000 trial moves. The LJ potential is truncated at the cut-off, and the pressure tail is corrected. The results are $P=1.12461$ and an average internal energy of $-3.1517$, in excellent agreement with the data from Johnson *et al.* <sup>[1]</sup>, Table 2.
![enter image description here][5]
D. Frenkel *et al.* also presented the LJ equation-of-state isotherms at $T=2.0$ and $T=0.9$. A comparison between their results and this work is given below.
D. Frenkel *et al*:
![D. Frenkel *et al*][6]
This work:
![This work][7]
[1] J. Johnson, J. Zollweg, K. Gubbins, The Lennard-Jones equation of state revisited, Mol. Phys. 78 (1993) 591-618.
----------
The code is tested under MMA 9.0.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=LJMC4.gif&userId=20103
[2]: https://scholar.google.com/citations?view_op=view_citation&hl=en&user=_ZwalIwAAAAJ&citation_for_view=_ZwalIwAAAAJ:u5HHmVD_uO8C
[3]: /c/portal/getImageAttachment?filename=7821LJPotential.png&userId=58176
[4]: /c/portal/getImageAttachment?filename=LJMC4.gif&userId=58176
[5]: /c/portal/getImageAttachment?filename=Table1.PNG&userId=58176
[6]: /c/portal/getImageAttachment?filename=Reference.PNG&userId=58176
[7]: /c/portal/getImageAttachment?filename=Results.png&userId=58176
Baichuan Sun (2015-05-03T04:37:27Z)

Proving Pythagorean theorem using Trigonometry
https://community.wolfram.com/groups/-/m/t/3174364
I recently read that two students have proven the Pythagorean Theorem using Trigonometry.
I haven't looked at their proof, but the problem seems straight forward.
Consider a right triangle with sides a and b and hypotenuse c, and angle theta opposite side a.
Then tan(theta) = a/b
and sin(theta) = a/c.
So c = a/sin(theta) = a/sin(arctan(a/b)).
Using Mathematica:
a/Sin[ArcTan[a/b]] evaluates to Sqrt[1 + a^2/b^2] b
which is equivalent to Sqrt[a^2 +b^2]
In[1]:= a/Sin[ArcTan[a/b]]
Out[1]= Sqrt[1 + a^2/b^2] b
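A direct symbolic check of the claimed equivalence is possible; the positivity assumption on a and b is mine, added so that Simplify can discharge the absolute values that arise from squaring:

```mathematica
(* verify Sqrt[1 + a^2/b^2] b == Sqrt[a^2 + b^2] for positive a, b *)
Simplify[Sqrt[1 + a^2/b^2] b == Sqrt[a^2 + b^2], Assumptions -> a > 0 && b > 0]
(* expected: True *)
```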
Am I missing something?
Frank Kampas (2024-05-11T19:59:09Z)

Periodogram short one data point
https://community.wolfram.com/groups/-/m/t/3178170
Periodogram seems to be one data point short on the frequency axis, so the frequencies are incorrect. With an input vector of length n, I think the periodogram should have n/2 + 1 points.
The example I have should have a peak at frequency 0.2. Instead the peak is at 0.203. The input data is 30 cycles over a length of 150.
Scaling the frequency axis by DataRange -> {0, (n/2 - 1)/n} seems to give the correct frequency.
Compare with Fourier, plotting 0 to 1/2, which is n/2 + 1 points.
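A minimal reproduction of the setup described above, as I understand it (the signal construction is my assumption from "30 cycles over a length of 150"):

```mathematica
n = 150;
data = Table[Sin[2 Pi 0.2 i], {i, n}];           (* 30 cycles over 150 samples *)
Periodogram[data]                                 (* default axis: peak reported near 0.203 *)
Periodogram[data, DataRange -> {0, (n/2 - 1)/n}]  (* rescaled axis: peak at 0.2, per the workaround *)
```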
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/80be1c5e-7847-43a3-b6de-62a033aed0dd
Philip Isaacson (2024-05-17T15:56:37Z)

Charge-polarization coupling in the nanostructure thin HFxZr1-xO2 film-graphene
https://community.wolfram.com/groups/-/m/t/3178088
![Left: Schematic illustration of the determination of the effective Landau coefficients from the limited number experimental hysteresis loops. Right: Polarization-field dependences calculated for a 10-nm Hf0.5Zr0.5O2 film deposited on the same substrate at lower and higher deposition temperatures][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=4361Main.png&userId=20103
[2]: https://www.wolframcloud.com/obj/4f778dcf-b042-46bd-b1be-9f820c6ed729
Anna Morozovska (2024-05-17T14:48:09Z)

NSolve and FindRoot unable to solve system of trig equations
https://community.wolfram.com/groups/-/m/t/3177924
I am trying to find numerical solutions for this system of trig equations:
a = -1/3;
R[\[Alpha]_,\[Beta]_,\[Gamma]_] = {{1, 0, 0}, {0, Cos[\[Alpha]], -Sin[\[Alpha]]},{0, Sin[\[Alpha]],Cos[\[Alpha]]}}. {{Cos[\[Beta]],0,Sin[\[Beta]]},{0,1, 0},{-Sin[\[Beta]],0,Cos[\[Beta]]}} . {{Cos[\[Gamma]], -Sin[\[Gamma]],0},{Sin[\[Gamma]],Cos[\[Gamma]],0},{0,0,1}};
a1 = {1, -1, 1}/Sqrt[3];
a2 = {1, 1, -1}/Sqrt[3];
b1 = R[10 * Degree, 45*Degree, 20*Degree].a1;
b2 =R[10 * Degree, 45*Degree, 20*Degree].a2;
A1 = R[10 * Degree, 20 * Degree, 30* Degree].a1;
A2 = R[10 * Degree, 20 * Degree, 30* Degree].a2;
B1 = R[10 * Degree, 20 * Degree, 30* Degree].b1;
B2 = R[10 * Degree, 20 * Degree, 30* Degree].b2;
phia1 = N[ArcCos[a1.{0, 0, 1}]];
phia2 = N[ArcCos[a2.{0, 0, 1}]];
phib1 = N[ArcCos[b1.{0, 0, 1}]];
phib2 =N[ArcCos[b2.{0, 0, 1}]];
phiA1 =N[ArcCos[A1.{0, 0, 1}]];
phiA2 = N[ArcCos[A2.{0, 0, 1}]];
phiB1 = N[ArcCos[B1.{0, 0, 1}]];
phiB2 = N[ArcCos[B2.{0, 0, 1}]];
sol = NSolve[
Cos[alpha1 - alpha2] == (a - Cos[phia1]*Cos[phia2])/(Sin[phia1]*Sin[phia2]) &&
Cos[beta1 - beta2] == (a - Cos[phib1]*Cos[phib2])/(Sin[phib1]*Sin[phib2])&&
Cos[gamma1 - gamma2] == (a - Cos[phiA1]*Cos[phiA2])/(Sin[phiA1]*Sin[phiA2])&&
Cos[delta1 - delta2] ==(a - Cos[phiB1]*Cos[phiB2])/(Sin[phiB1]*Sin[phiB2])&&
Sin[phia1] * Sin[phib1] * Cos[alpha1 - beta1] + Cos[phia1]*Cos[phib1] == Sin[phiA1]*Sin[phiB1]*Cos[gamma1 - delta1] + Cos[phiA1]*Cos[phiB1] &&
Sin[phia1]*Sin[phib2]*Cos[alpha1 - beta2] + Cos[phia1]*Cos[phib2] == Sin[phiA1]*Sin[phiB2]*Cos[gamma1 - delta2] + Cos[phiA1]*Cos[phiB2] &&
Sin[phia2] * Sin[phib1]*Cos[alpha2 - beta1] + Cos[phia2]*Cos[phib1] == Sin[phiA2]*Sin[phiB1]*Cos[gamma2 - delta1] + Cos[phiA2]*Cos[phiB1] &&
Sin[phia2]*Sin[phib2]*Cos[alpha2 - beta2] + Cos[phia2]*Cos[phib2] == Sin[phiA2]*Sin[phiB2]*Cos[gamma2 - delta2] + Cos[phiA2]*Cos[phiB2] &&
0 <= alpha1 <= 2*Pi && 0 <= alpha2 <= 2*Pi && 0 <= beta1 <= 2*Pi && 0 <= beta2 <= 2*Pi && 0<= gamma1 <=2*Pi && 0<=gamma2<=2*Pi && 0<=delta1<=2*Pi && 0<=delta2<=2*Pi,
{alpha1, alpha2, beta1, beta2, gamma1, gamma2, delta1, delta2}, Reals]
This system should definitely have a solution, but for some reason NSolve does not return anything after 10-15 minutes of running the notebook. I also tried using FindRoot and choosing initial guesses near the actual values for alpha1 = ArcCos[a1.{1, 0, 0}], alpha2 = ArcCos[a2.{1, 0, 0}], etc:
alpha1guess = N[ArcCos[a1.{1, 0, 0}]] + 0.5;
alpha2guess = N[ArcCos[a2.{1, 0, 0}]] + 0.5;
beta1guess = N[ArcCos[b1.{1, 0, 0}]] + 0.5;
beta2guess = N[ArcCos[b2.{1, 0, 0}]] + 0.5;
gamma1guess = N[ArcCos[A1.{1, 0, 0}]] + 0.5;
gamma2guess = N[ArcCos[A2.{1, 0, 0}]] + 0.5;
delta1guess = N[ArcCos[B1.{1, 0, 0}]] + 0.5;
delta2guess = N[ArcCos[B2.{1, 0, 0}]] + 0.5;
sol = FindRoot[
{Cos[alpha1 - alpha2] == (a - Cos[phia1]*Cos[phia2])/(Sin[phia1]*Sin[phia2]),
Cos[beta1 - beta2] == (a - Cos[phib1]*Cos[phib2])/(Sin[phib1]*Sin[phib2]),
Cos[gamma1 - gamma2] == (a - Cos[phiA1]*Cos[phiA2])/(Sin[phiA1]*Sin[phiA2]),
Cos[delta1 - delta2] ==(a - Cos[phiB1]*Cos[phiB2])/(Sin[phiB1]*Sin[phiB2]),
Sin[phia1] * Sin[phib1] * Cos[alpha1 - beta1] + Cos[phia1]*Cos[phib1] == Sin[phiA1]*Sin[phiB1]*Cos[gamma1 - delta1] + Cos[phiA1]*Cos[phiB1],
Sin[phia1]*Sin[phib2]*Cos[alpha1 - beta2] + Cos[phia1]*Cos[phib2] == Sin[phiA1]*Sin[phiB2]*Cos[gamma1 - delta2] + Cos[phiA1]*Cos[phiB2],
Sin[phia2] * Sin[phib1]*Cos[alpha2 - beta1] + Cos[phia2]*Cos[phib1] == Sin[phiA2]*Sin[phiB1]*Cos[gamma2 - delta1] + Cos[phiA2]*Cos[phiB1],
Sin[phia2]*Sin[phib2]*Cos[alpha2 - beta2] + Cos[phia2]*Cos[phib2] == Sin[phiA2]*Sin[phiB2]*Cos[gamma2 - delta2] + Cos[phiA2]*Cos[phiB2]},
{{alpha1, alpha1guess},{ alpha2, alpha2guess}, {beta1, beta1guess}, {beta2, beta2guess}, {gamma1, gamma1guess}, {gamma2, gamma2guess}, {delta1, delta1guess}, {delta2, delta2guess}}]
However, whichever values I choose for the initial guesses, FindRoot returns this error:
FindRoot::jsing: Encountered a singular Jacobian at the point {alpha1,alpha2,beta1,beta2,gamma1,gamma2,delta1,delta2} = {1.05532,1.05532,0.472267,1.83579,0.452311,1.66968,0.656504,2.45326}. Try perturbing the initial point(s).
What is the issue here? Are there any other ways to solve this system numerically with Mathematica?
Alice Wang (2024-05-16T20:17:07Z)

[WEBINAR] Handling uncertainties and calibrating system models with Wolfram Language
https://community.wolfram.com/groups/-/m/t/3178046
![enter image description here][1]
In this Wolfram System Modeler webinar, the recently added functionality to calibrate, validate and visualize uncertainty in system models will be presented. You will learn to calibrate parameter values to match simulation results with real-world data and to visualize the uncertainty of key outputs resulting from uncertainty in parameters, initial values or inputs. Examples such as a Hohmann transfer orbit, calibration of rates in a chemical reaction and a simplified loudspeaker are explored.
No prior Wolfram System Modeler or Wolfram Language experience is required to join this webinar.
We look forward to seeing you:
**Tuesday, June 18, 2024. 12-1 pm CDT (5-6 pm UTC/GMT)**
> [**REGISTER HERE**][2]
![enter image description here][3]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=7090Hoffmannorbittransfer-resize.gif&userId=20103
[2]: https://www.wolfram.com/wolfram-u/courses/modeling-simulation/handling-uncertainties-and-calibrating-system-models/
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=wolframu.png&userId=1618777
Sergio Vargas (2024-05-17T09:18:24Z)

Why does nested integration work for a single function but not for a list of functions?
https://community.wolfram.com/groups/-/m/t/3175908
I am trying to apply nested integration to lists of functions. The code works fine for a single function (not a list), but returns an error when trying to calculate the same expression with a list of functions.
A function `f[x,args...]` is first simplified by fixing the argument `x` (say `X={1,2,3,4}`), and I obtain a list of functions for the given `x`'s. Next, I define integral functions `f2[args...]` and `f3[args...]` and try to compute a nested integration over the list, aiming to obtain a list of results for all `x`'s. Can this be done without using `Indexed` or extracting parts of the function list with `f[[i]]`? I would like to pass a list into `NIntegrate` and obtain a list of outputs.
Currently my solution is the following, but I would like to avoid the use of indexes:
ListExpressions = {x + y + z, 2 x + y + z, 3 x + y + z, 4 x + y + z, 5 x + y + z};
f[x_?NumericQ, y_?NumericQ, z_?NumericQ, KK_?NumericQ] := Evaluate[Indexed[ListExpressions, KK]]
f2[K1_?NumericQ, y_?NumericQ, z_?NumericQ, KK_?NumericQ] := NIntegrate[f[x, y, z, KK], {x, 0, K1}];
f3[K1_?NumericQ, K2_?NumericQ, K3_?NumericQ, KK_?NumericQ] := NIntegrate[f2[K1, y, z, KK], {y, 0, K2}, {z, 0, K3}];
f3[1,1,10,#]&/@Range[5]
Out= {60., 65., 70., 75., 80.}
Aka Kopejkin (2024-05-14T11:30:40Z)

Mathematical Games: combinatorial codes and designs
https://community.wolfram.com/groups/-/m/t/3177778
![M22 Graph, 13 points, each on 4 parabolas, Singer Difference Set, and binary Golay code][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=10020Lead.png&userId=20103
[2]: https://www.wolframcloud.com/obj/d4461264-d697-450b-a609-05a94be80874
Ed Pegg (2024-05-16T17:17:32Z)

EllipseFit: function for linear least square ellipse fit
https://community.wolfram.com/groups/-/m/t/3176958
Unhappy with existing ellipse-fitting approaches, I present a function based on linear least squares and SVD, the latter resolved symbolically rather than performed directly in the actual fit calculation.
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/2ef6c23d-1651-4eb2-af93-7f302c46cac5
Robert Nowak (2024-05-16T15:33:26Z)

How to remove "Abs" from "Simplify" output?
https://community.wolfram.com/groups/-/m/t/3176422
Hello,
In Mathematica,
Simplify[Sqrt[(a b c + d e g)^2], a b c + d e g > 0]
will produce
Abs[a b c + d e g]
It is a little weird, as I think the answer should be
a b c + d e g
Zhenyu Zeng (2024-05-15T02:01:56Z)

What would be the right approach to perform a 2D discrete convolution for this problem?
https://community.wolfram.com/groups/-/m/t/3176766
I am trying to perform a 2D discrete convolution of two terms; one a Gaussian and a differentiated Green's function containing Hankel function of first kind and zero order. The convolution is defined as $$p(x,y,t) = f(x,y) * \frac{d}{dt} G(x,y,t)$$ where,
- $f(x,y) = \epsilon e^{ -\alpha (x^2 + y^2) }$
- $\frac{d}{dt}G(x,y,t) = \frac{\omega}{4} \frac{1}{c_0^2 } H_0^{(1)}\left [k\sqrt{x^2+y^2} \right ] e^{-i \omega t}$
The common parameters in these terms are
- $\epsilon=0.5$,
- $\alpha = \frac{\log(2)}{2}$
- $\omega = \frac{2\pi}{30}$
- $c = 340.2626486$
- $ k = \omega/c$
- $t = 90$
These terms are from a paper in acoustics. I have tried to perform a discrete convolution, but it fails to produce any output except the warning `TerminatedEvaluation["RecursionLimit"]`. I have attached the workbook, which contains the usage of the `DiscreteConvolve` command; the convolving variables I have used are $\xi$ and $\eta$.
What would be the right way to perform a convolution of these terms? Are there any gotchas I need to be aware of?
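For what it's worth, one alternative to `DiscreteConvolve` is to evaluate the continuous convolution integral directly with `NIntegrate`; this is a sketch of my own, and the truncation of the convolution domain to $\pm 10$ is an assumption justified only by the fast decay of the Gaussian:

```mathematica
eps = 0.5; alpha = Log[2]/2; omega = 2 Pi/30;
c0 = 340.2626486; k = omega/c0; t = 90;
f[x_, y_] := eps Exp[-alpha (x^2 + y^2)];
dGdt[x_, y_] := omega/(4 c0^2) HankelH1[0, k Sqrt[x^2 + y^2]] Exp[-I omega t];
(* p(x, y, t) = (f * dG/dt)(x, y), evaluated on a truncated domain *)
p[x_?NumericQ, y_?NumericQ] :=
 NIntegrate[f[xi, eta] dGdt[x - xi, y - eta], {xi, -10, 10}, {eta, -10, 10}]
```

The integrand has an integrable logarithmic singularity where the Hankel argument vanishes, so NIntegrate may warn about slow convergence near that point.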
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/46255f17-ad81-4f02-b458-88fda7e1c79a
Ishan Nande (2024-05-15T16:14:57Z)

Which VSCode plugin?
https://community.wolfram.com/groups/-/m/t/3174824
![Which VS Code plugin][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2024-05-12_21-36-45.png&userId=3125396
Nickarino Skri (2024-05-13T01:45:21Z)

Import trained classifier to Python
https://community.wolfram.com/groups/-/m/t/3176919
Hello Wolfram Community,
I have trained a text classifier that reads a product description and puts the product in a category, for example:
"Low Fat milk, 1000ml" -> Dairy
"Strawberry high protein yogurt" -> Dairy
"1kg of red apples" -> Fruits
and so on.
My classifier was built with the Classify function, and I saved the model in a .wmlf file. Whenever I need to classify a bunch of products, I just import it as
MarkovClassifier =
Import["C:\\Users\\Amor_Rodrigo\\Desktop\\Latest\\Markov Classifier \
834.wmlf"]
and then just classify my list
MarkovClassifier[mylist]
That's it. As easy as that. My problem is that now I need to implement this function in Python.
I am trying something like this:
from wolframclient.evaluation import WolframLanguageSession
from wolframclient.language import wlexpr
session=WolframLanguageSession()
classifierpath = r"C:\\Users\\Amor_Rodrigo\\Desktop\\Latest\\Markov Classifier 834.wmlf"
session.evaluate(wlexpr('MarkovClassifier=Import[classifierpath]'))
test= session.evaluate(wlexpr('Map[#^2 &, Range[5]]'))
print(test)
session.terminate()
My output is this
First argument classifierpath is not a valid file, directory, or URL specification.
First argument classifierpath is not a valid file, directory, or URL specification.
(1, 4, 9, 16, 25)
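For what it's worth, the error above suggests that `classifierpath` is a Python variable the Wolfram kernel never sees: inside the `wlexpr` string, the symbol `classifierpath` is undefined on the Wolfram side. A hedged sketch of one way to splice the path into the expression (the path here is the illustrative one from the post):

```python
# Build the Wolfram expression with the path embedded as a Wolfram string.
classifierpath = r"C:\Users\Amor_Rodrigo\Desktop\Latest\Markov Classifier 834.wmlf"
# Backslashes must be doubled so the Wolfram parser reads them literally.
escaped = classifierpath.replace("\\", "\\\\")
expr = 'MarkovClassifier = Import["{}"]'.format(escaped)
print(expr)
# This string would then be passed to the kernel:
#   session.evaluate(wlexpr(expr))
# Alternatively, skip string building and let wolframclient convert the argument:
#   from wolframclient.language import wl
#   classifier = session.evaluate(wl.Import(classifierpath))
```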
Does anyone know how to import that classifier function and put it to work?
Any help will be really appreciated.
Rodrigo Amor (2024-05-16T04:58:44Z)

Inserting a matrix in Mathematica Online
https://community.wolfram.com/groups/-/m/t/3177108
How do I insert a matrix in Mathematica Online? The documentation says it should be under the Insert menu, but it is not there in my Online session.
Also, is there a good place that describes the Online interface for someone who is used to the desktop interface?
Thanks.
John Gore (2024-05-15T23:07:25Z)

Non-transitive Grime dice: compute and visualize the complete set of possible winning cycles
https://community.wolfram.com/groups/-/m/t/428745
For Christmas this year, I got myself a fun mathematical gift: a set of 10 [non-transitive dice][3], namely [Grime Dice][4]! You can get your own set [here][5]. Behold their dicey splendor:
![Non-transitive Grime dice: compute and visualize the complete set of possible winning cycles][6]
These dice possess the fascinating property that their winning relationships (in the sense of "winning" = "rolls a higher number > 50% of the time") are non-transitive. i.e. if die A wins against die B, and die B wins against die C, it actually does *not* hold, in general, that die A wins against die C. In fact, die C might win against die A!
If we label the 5 Grime Dice colors Red, Blue, Yellow, Olive, and Magenta, there are 2 primary non-transitive winning cycles:
1. **By word length:** Red beats Blue beats Olive beats Yellow beats Magenta beats Red
2. **Alphabetically:** Blue beats Magenta beats Olive beats Red beats Yellow beats Blue
That's pretty neat and non-intuitive by itself, but things get weirder when you roll two dice of the same color together: the word length cycle reverses, while the alphabetical cycle (almost) stays intact.
1. **By word length (doubles):** Red/Red loses to Blue/Blue loses to Olive/Olive loses to Yellow/Yellow loses to Magenta/Magenta loses to Red/Red
2. **Alphabetically (doubles):** Blue/Blue beats Magenta/Magenta beats Olive/Olive loses to Red/Red beats Yellow/Yellow beats Blue/Blue
Using Mathematica, we can calculate the winning probabilities, and visualize the cycles:
![Non-transitive Grime dice: compute and visualize the complete set of possible winning cycles][7]
These 4 cycles are the only ones which are advertised, but it turns out there are many more that you can form using just the 10 dice included in the set.
In fact, there are 298 such cycles! Here are the plots of all **1-die cycles**
![Non-transitive Grime dice: compute and visualize the complete set of possible winning cycles][8]
and **all 2-dice cycles**
![Non-transitive Grime dice: compute and visualize the complete set of possible winning cycles][9]
and **all 3-dice cycles**
![enter image description here][10]
In this post, we'll walk through the Mathematica code used to create these plots and to compute the complete set of possible cycles.
Modeling dice
-------------
The first step is to provide a simple representation of the different dice colors, each of which has a unique pip configuration:
(* represent the dice, their names, and their face values *)
red = dice["Red"] = {{"Red"}, {4, 4, 4, 4, 4, 9}};
blue = dice["Blue"] = {{"Blue"}, {2, 2, 2, 7, 7, 7}};
olive = dice["Olive"] = {{"Olive"}, {0, 5, 5, 5, 5, 5}};
yellow = dice["Yellow"] = {{"Yellow"}, {3, 3, 3, 3, 8, 8}};
magenta = dice["Magenta"] = {{"Magenta"}, {1, 1, 6, 6, 6, 6}};
To compute which of two dice beats the other, and the odds of that win, we generate every possible roll between the two dice, and see which one comes out on top more often:
(* compute which of two dice would win, including the odds *)
(* returns {winner -> loser, odds} *)
compareDice[{lName_, lVals_}, {rName_, rVals_}] := (
rolls = Tuples[{lVals, rVals}];
winDiff = Total[rolls /. {l_, r_} -> Sign[r - l]];
odds = 1/2 + Abs[winDiff]/(2*Length[rolls]);
{If[winDiff > 0, rName -> lName, lName -> rName], N[odds, 3]}
);
Thus we can see that, for example, Red beats Blue 58% of the time, and Yellow beats Magenta 56% of the time.
compareDice[red, blue]
compareDice[magenta, yellow]
(* output:
{{"Red"} -> {"Blue"}, 0.583}
{{"Yellow"} -> {"Magenta"}, 0.556}
*)
Non-transitive cycles
---------------------
We are already in a position to verify and quantify the primary word-length and alphabetical cycles with single dice:
byWordLength = {{red, blue}, {blue, olive}, {olive, yellow}, {yellow, magenta}, {magenta, red}};
byAlpha = {{blue, magenta}, {magenta, olive}, {olive, red}, {red, yellow}, {yellow, blue}};
compareDice @@@ byWordLength
compareDice @@@ byAlpha
(* output:
{{{"Red"} -> {"Blue"}, 0.583}, {{"Blue"} -> {"Olive"}, 0.583}, {{"Olive"} -> {"Yellow"}, 0.556}, {{"Yellow"} -> {"Magenta"}, 0.556}, {{"Magenta"} -> {"Red"}, 0.556}}
{{{"Blue"} -> {"Magenta"}, 0.667}, {{"Magenta"} -> {"Olive"}, 0.722}, {{"Olive"} -> {"Red"}, 0.694}, {{"Red"} -> {"Yellow"}, 0.722}, {{"Yellow"} -> {"Blue"}, 0.667}}
*)
We can represent and compare pairs of dice as if they were each one 36-sided die, with each face corresponding to the total from a possible roll of the constituent dice. This lets us compute the odds of the word-length and alphabetical double dice cycles, too:
(* create a new "die" by combining two dice *)
combine[{name1_, vals1_}, {name2_, vals2_}] := {Join[name1, name2], Plus @@@ Tuples[{vals1, vals2}]};
double[die_] := combine[die, die];
compareDice @@@ Map[double, byWordLength, {2}]
compareDice @@@ Map[double, byAlpha, {2}]
(* output:
{{{"Blue", "Blue"} -> {"Red", "Red"}, 0.590}, {{"Olive", "Olive"} -> {"Blue", "Blue"}, 0.590}, {{"Yellow", "Yellow"} -> {"Olive", "Olive"}, 0.691}, {{"Magenta", "Magenta"} -> {"Yellow", "Yellow"}, 0.593}, {{"Red", "Red"} -> {"Magenta", "Magenta"}, 0.691}}
{{{"Blue", "Blue"} -> {"Magenta", "Magenta"}, 0.556}, {{"Magenta", "Magenta"} -> {"Olive", "Olive"}, 0.583}, {{"Red", "Red"} -> {"Olive", "Olive"}, 0.518}, {{"Red", "Red"} -> {"Yellow", "Yellow"}, 0.583}, {{"Yellow", "Yellow"} -> {"Blue", "Blue"}, 0.556}}
*)
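The combined-dice comparison can be cross-checked independently as well. The Python sketch below (an illustration, not the post's code) treats a doubled die as a 36-face die of pair sums and reproduces, for example, the 0.590 odds of Blue+Blue over Red+Red from the output above; Red's faces {4,4,4,4,4,9} are again assumed from the standard set:

```python
from itertools import product
from fractions import Fraction

def combine(d1, d2):
    """Merge two dice into one die whose faces are all pairwise sums."""
    (n1, v1), (n2, v2) = d1, d2
    return (n1 + "+" + n2, [a + b for a, b in product(v1, v2)])

def double(die):
    return combine(die, die)

def win_odds(left, right):
    """Probability that `left` beats `right` (ties count as half a win)."""
    pairs = list(product(left[1], right[1]))
    wins = sum(l > r for l, r in pairs)
    ties = sum(l == r for l, r in pairs)
    return Fraction(2 * wins + ties, 2 * len(pairs))

red = ("Red", [4, 4, 4, 4, 4, 9])   # assumed standard Grime faces
blue = ("Blue", [2, 2, 2, 7, 7, 7])

print(win_odds(double(blue), double(red)))  # → 85/144 ≈ 0.590
```

As an exact fraction, the 0.590 above is 765/1296 = 85/144: the doubled Blue die has faces 4, 9, 14 and the doubled Red die has faces 8, 13, 18, so no ties are possible.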
Plotting
--------
It's much nicer to visualize the winning relationships between the dice, rather than just printing out the data. Mathematica has excellent plotting and visualization capabilities, so this is certainly possible.
[GraphPlot][11] is a good choice here. Its default visual output isn't very well-suited to this problem, though, so we will need to do some customization. We can take advantage of the various hooks which are exposed by the function, enabling us to specify custom graphical objects to represent the vertices and edges of the relationship graph.
The below code will create nice graph plots where the vertices are represented by appropriately-colored dice icons, and the edges point from winner -> loser and are labeled with the probability of that win.
(* keep track of which colors should be used in plots *)
colors["Red"] = Red;
colors["Blue"] = Blue;
colors["Olive"] = Green;
colors["Yellow"] = Yellow;
colors["Magenta"] = Purple;
(* plot colored rectangles to represent the dice at a graph vertex *)
getVertex[center_, names_] := (
numDice = Length@names;
positions = {-0.08 + #, 0.08 + #} & /@
Range[-0.04*(numDice - 1)/2, 0.04*(numDice - 1)/2, 0.04];
Transpose[{colors /@ names, Rectangle[center + #1, center + #2, RoundingRadius -> 0.02] & @@@ positions}]
);
(* plot a nicely-formatted labeled arrow for graph edges *)
getEdge =
({Gray, If[#3 == 0.5, Line[#1], Arrow[#1, 0.15]], Black,
Inset[#3, Mean[#1], Background -> White]} &);
(* given a list of dice pairs, creates a nicely-formatted plot of
winning relationships and odds *)
plotDice[pairs_] :=
GraphPlot[compareDice @@@ pairs,
VertexRenderingFunction -> getVertex,
EdgeRenderingFunction -> getEdge];
Let's take a look at the single and double cycles visually:
![enter image description here][12]
Pretty neat! Besides various oddities with orientation and ordering, these plots are quite appealing. Exact placement of the vertices can be specified by the VertexCoordinateRules parameter to GraphPlot, but the default layout works well enough for our purposes.
More cycles
-----------
We have looked at the primary 5-color cycles using both single dice and doubles of the same color. That's just the beginning, though. For example, besides the 5-color cycles, various smaller cycles also exist:
![enter image description here][13]
How many of these smaller cycles exist? What about bigger cycles? And what about cycles involving doubles composed of 2 different colors? Or even cycles consisting of sets of 3 dice? We want to compute every *possible cycle* that can be created using the 10 dice from the set.
Our overall approach to solving this will be to generate directed graphs which encode all of the winning relationships between unique dice sets of a certain size (single dice, pairs, or triples), then search for cycles within those graphs.
It should be noted (we won't prove it here) that for Grime Dice, any pair of dice beats any single die, and any triple of dice beats any pair or single die. Thus it is indeed acceptable to split this computation up into separate buckets for single dice, pairs, and triples. There are no heterogeneous cycles with respect to number of competing dice.
Single dice
-----------
One might assume that the cycles of single dice would be the easiest to compute. In fact, single dice pose a couple of unique challenges that pairs and triples do not. Specifically, in a set of 10 dice, we could potentially find cycles up to size 10. But since we only have 5 colors, once a cycle becomes length-6 or longer we must necessarily have 2 same-color nodes in the cycle. We need to make sure to differentiate between the two copies of each color.
In order to capture the fact that we have 2 copies of each color on hand, we will use a bit of a hack. The "second" copy of each color will be represented as a combination with a special "white" die which has 1 face and always rolls 0.
(* dummy "white" die used to differentiate between
two instances of the same color die *)
white = dice["White"] = {{"White"}, {0}};
(* when plotting, just make the white die invisible *)
colors["White"] = Transparent;
(* all distinct single dice from set of 10 *)
allDice[1] =
Join[allColors, combine @@@ Tuples[{allColors, {white}}]];
To start building the actual relationship graph, we will define a couple of helper functions. The first is used to create the [DirectedEdge][14] values Mathematica consumes when [Graph][15] is called in the second function. The edges are directed from "winning di(c)e" to "losing di(c)e".
(* Note that we don't return an edge here if the 2 dice are equally matched *)
getGraphEdge[left_, right_] := (
{relationship, odds} = compareDice[left, right];
If[odds != 1/2, relationship /. Rule -> DirectedEdge]
);
(* builds the graph of winning relationships for
n-tuples of dice *)
makeGraph[n_] :=
Graph[Cases[getGraphEdge @@@ Subsets[allDice[n], {2}], DirectedEdge[__]]];
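To see what makeGraph produces on the five base dice, here is a small Python sketch that rebuilds the winner -> loser edge set by brute force (again an independent cross-check, with Red's faces {4,4,4,4,4,9} assumed from the standard set):

```python
from itertools import combinations, product

dice = {
    "Red": [4, 4, 4, 4, 4, 9],  # assumed standard Grime faces
    "Blue": [2, 2, 2, 7, 7, 7],
    "Olive": [0, 5, 5, 5, 5, 5],
    "Yellow": [3, 3, 3, 3, 8, 8],
    "Magenta": [1, 1, 6, 6, 6, 6],
}

def beats(a, b):
    """True iff die `a` wins against die `b` more often than it loses."""
    diff = sum((x > y) - (x < y) for x, y in product(dice[a], dice[b]))
    return diff > 0

# Directed edge winner -> loser for every strictly unbalanced pair,
# mirroring getGraphEdge/makeGraph (evenly matched pairs get no edge).
edges = set()
for a, b in combinations(dice, 2):
    if beats(a, b):
        edges.add((a, b))
    elif beats(b, a):
        edges.add((b, a))

print(sorted(edges))
```

All ten color pairs turn out strictly unbalanced, so the single-color graph has exactly ten edges, and the five word-length edges (Red -> Blue -> Olive -> Yellow -> Magenta -> Red) are all present.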
We can now generate the full graph of single-die relationships and have Mathematica compute all cycles up to a maximum size of 10. In the last step, note that we need to deduplicate the cycle list to eliminate cycles that are distinct only because they include the dummy "white" die.
diceGraph[1] = makeGraph[1];
(* built-in function DeleteDuplicatesBy is present only
in Mathematica 10+ *)
deDupeBy[expr_, f_] := Values[GroupBy[expr, f, First]];
(* compute all cycles of single dice that can
be made from the 10 included dice *)
cycles[1] =
deDupeBy[FindCycle[diceGraph[1], 10, All],
Sort[(# /. {e_, "White"} -> {e})] &];
CountsBy[cycles[1], Length]
cycles[1] // Length
(* output:
<|3 -> 5, 4 -> 5, 5 -> 2, 6 -> 15, 7 -> 20, 8 -> 20, 9 -> 10, 10 -> 3|>
80
*)
We see that there are a total of 80 unique single-die cycles, with sizes ranging from 3 to 10.
Pairs of dice
-------------
Dice pairs turn out to be the simplest case.
With pairs (and above), we do not need to consider the possibility of distinct-yet-identical nodes in the cycle. Proof: between any two identical nodes there must be at least 2 other nodes (if there were only one separating node, it would simultaneously beat and lose to the identical nodes on either side), so a cycle containing identical nodes must have length at least 6 (the 2 identical nodes plus 2 separating nodes on each side). When each node consists of a pair of dice, this requires at least 12 dice. Since we only have 10 dice, this is impossible.
This eliminates the need for the dummy die, as well as the de-duplication at the end.
The only additional wrinkle we need to consider is the possibility for a computed cycle to contain more than 2 dice of a particular color. Such cycles are invalid in our scenario, since we are only utilizing the 10 dice in the set. We will update our helpers and add some additional filtering to eliminate such cycles.
Finally, for pairs, we only need to search for cycles up to length 5.
(* all unique dice pairs *)
allDice[2] = Flatten[Table[combine @@ allColors[[{i, j}]],
{i, 1, Length[allColors]},
{j, i, Length[allColors]}], 1];
(* updated to avoid creating edges between nodes
that combine to use more than 2 of any color *)
getGraphEdge[left_, right_] :=
If[FreeQ[Tally[Join[left[[1]], right[[1]]]], {_, count_} /; count > 2],
{relationship, odds} = compareDice[left, right];
If[odds != 1/2, relationship /. Rule -> DirectedEdge]
];
(* check if a given full cycle uses more than 2 of
any particular color *)
isValidCycle[cyc_] :=
FreeQ[Tally[Flatten[cyc /. DirectedEdge[a_, _] :> a]], {_, count_} /; count > 2];
(* compute all cycles of pairs of dice that can
be made from the 10 included dice *)
diceGraph[2] = makeGraph[2];
cycles[2] = Select[FindCycle[diceGraph[2], 5, All], isValidCycle];
CountsBy[cycles[2], Length]
cycles[2] // Length
(* output:
<|3 -> 55, 4 -> 89, 5 -> 25|>
169
*)
There are 169 unique cycles using pairs of dice.
Triples of Dice
---------------
Triples are the largest sets we need to consider. At least 3 nodes are required to form a cycle, and if those nodes consist of 4 or more dice each, our set of 10 dice will not be sufficient.
Similarly, we need only search for cycles of length 3 here -- a cycle of 4 triples requires more dice than we have.
We start by extending our dice-combining function to handle triples, building up all such unique triples, and generating their winning relationship graph. Note that any triples where all 3 dice are the same color are invalid, and should be filtered.
(* extend to handle triples *)
combine[die1_, die2_, die3_] := combine[die1, combine[die2, die3]];
(* all unique dice triples *)
allDice[3] = Select[
Flatten[Table[combine @@ allColors[[{i, j, k}]],
{i, 1, Length[allColors]},
{j, i, Length[allColors]},
{k, j, Length[allColors]}], 2],
Length@Union@#[[1]] != 1 &];
diceGraph[3] = makeGraph[3];
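As a quick combinatorial sanity check on the size of allDice[3]: there are Binomial[7, 3] = 35 color-multisets of size 3 drawn from 5 colors, and dropping the 5 monochrome ones leaves 30 valid triples, matching the 30-node graph mentioned below. A small Python check (colors hard-coded to this set):

```python
from itertools import combinations_with_replacement

colors = ["Red", "Blue", "Olive", "Yellow", "Magenta"]

# All color-multisets of size 3, minus the monochrome ones
# (invalid, since the set only contains two dice of each color).
triples = [t for t in combinations_with_replacement(colors, 3)
           if len(set(t)) > 1]

print(len(triples))  # → 30
```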
From here, computing the triple cycles should be as simple as calling FindCycle again. Unfortunately, Mathematica spins (seemingly) indefinitely when one tries this. The relationship graph for triples is 30 nodes and 208 edges - not trivial, but not really that big. I'm not sure why FindCycle has trouble with it. Oddly enough, FindCycle immediately finds 1 cycle if that's all you ask for, but exhibits the hang if you ask for even just 2 cycles, let alone all of them.
So we will need to search for the 3-cycles in this graph manually. The below code does the trick.
(* for each edge in the graph, collect potential second edges
e.g. for edge A -> B, find all pairs {{A -> B, B -> X},{A -> B, B -> Y}, ...} *)
edgePairs =
Flatten[EdgeList[diceGraph[3]] /.
DirectedEdge[a_, b_] :> ({DirectedEdge[a, b], #} & /@
EdgeList[diceGraph[3], DirectedEdge[b, _]]), 1];
(* find and validate the 3rd and final edge of a 3-cycle.
e.g. given {A -> B, B -> C}, check that C -> A exists, and
the cycle A -> B -> C -> A is valid *)
completeCycle[DirectedEdge[a_, b_], DirectedEdge[c_, d_]] := (
lastEdge = DirectedEdge[d, a];
If[MemberQ[EdgeList[diceGraph[3]], lastEdge], (
cycle = {DirectedEdge[a, b], DirectedEdge[c, d], lastEdge};
If[isValidCycle[cycle],
Sow[cycle]
])
]
);
This enables us to compute the cycles, though we do need to deduplicate them (unlike FindCycle, our manual code is not smart enough to realize that the cycle A -> B -> C -> A is the same as the cycle B -> C -> A -> B).
cycles[3] = deDupeBy[Reap[Scan[completeCycle @@ # &, edgePairs]][[2, 1]], Sort];
CountsBy[cycles[3], Length]
(* output:
<|3 -> 49|>
*)
There are 49 triple-dice cycles. This brings the grand total to 298 unique non-transitive cycles in a set of 10 Grime Dice.
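As an aside, the Sort-based deduplication used above works because a cycle, viewed as a collection of directed edges, has a canonical form that is invariant under rotation of the starting node. A minimal Python illustration of the same idea:

```python
# Two rotations of the same 3-cycle, written as edge lists.
cyc1 = [("A", "B"), ("B", "C"), ("C", "A")]
cyc2 = [("B", "C"), ("C", "A"), ("A", "B")]

# Sorting the edge list gives a canonical form, so rotated
# copies collapse to a single representative.
unique = {tuple(sorted(c)) for c in (cyc1, cyc2)}
print(len(unique))  # → 1
```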
Plot all the cycles!
--------------------
Finally, the fun part - making giant plots of every possible cycle!
To plot a single cycle, we just need to massage the data a little bit so that it works with plotDice from earlier.
(* 'combining' a single die is a no-op *)
combine[{name1_, vals1_}] := {name1, vals1};
(* plot a single non-transitive dice cycle *)
plotCycle[cyc_] :=
plotDice[cyc /. DirectedEdge[l_, r_] :> {combine @@ (dice /@ l), combine @@ (dice /@ r)}];
A couple of example plots:
![enter image description here][16]
To generate the full plots, as linked at the top of the post, all we need is
(* plot everything! *)
plotCycle /@ cycles[1]
plotCycle /@ cycles[2]
plotCycle /@ cycles[3]
----------
*All code from this post available as a GitHub gist [here][1]. This is a repost of my [original blog][2].*
[1]: https://gist.github.com/latkin/305575735a3328a8e94a
[2]: http://latkin.org/blog/2015/01/16/non-transitive-grime-dice-via-mathematica/
[3]: http://en.wikipedia.org/wiki/Nontransitive_dice
[4]: http://www.singingbanana.com/dice/article.htm
[5]: http://mathsgear.co.uk/collections/dice/products/non-transitive-grime-dice
[6]: /c/portal/getImageAttachment?filename=WP_20141231_001-300x243.jpg&userId=11733
[7]: /c/portal/getImageAttachment?filename=cycles41.png&userId=11733
[8]: /c/portal/getImageAttachment?filename=1cycles_full.png&userId=11733
[9]: /c/portal/getImageAttachment?filename=2cycles_full.png&userId=11733
[10]: /c/portal/getImageAttachment?filename=3cycles_full.png&userId=11733
[11]: http://reference.wolfram.com/language/ref/GraphPlot.html
[12]: /c/portal/getImageAttachment?filename=cycles12.png&userId=11733
[13]: /c/portal/getImageAttachment?filename=cycles22.png&userId=11733
[14]: http://reference.wolfram.com/language/ref/DirectedEdge.html
[15]: http://reference.wolfram.com/language/ref/Graph.html
[16]: /c/portal/getImageAttachment?filename=cycles31.png&userId=11733
Lincoln Atkinson, 2015-01-26T21:47:48Z
[WUCB-2024] Conferência de usuários Wolfram 2024 - Brasil
https://community.wolfram.com/groups/-/m/t/3177228
![enter image description here][1]
**Save the date: Saturday, November 9, 2024**
Communication - Wolfram User Conference 2024 - Brazil (WUCB-2024)
To answer questions, share and learn more about the "Conferência de usuários Wolfram 2024 - Brasil" and hackathon send your message here...
Conference date: November 9, 2024 (Saturday) from 9:00 am to 5:00 pm (GMT-3 - São Paulo, SP, Brazil) at Universidade Presbiteriana Mackenzie
Online Brazilian HACKATHON: Monday, November 4 to Monday, November 11, 2024, online
To submit a talk (30 min) send title and abstract (250-300 words) to: brazil_2024@wolfram.com
Website and more details coming soon...
----------
FOR PORTUGUESE READERS:
----------
**Comunicação - Conferência de usuários Wolfram 2024 - Brasil (WUCB-2024)**
Para tirar dúvidas, compartilhar e saber mais sobre a " Conferência de usuários Wolfram 2024 - Brasil" e **hackathon** mande sua mensagem aqui...
Data da conferência: Dia 9 de novembro de 2024 (Sábado) das 09:00 as 17:00 em São Paulo, SP, Brasil, na Universidade Presbiteriana Mackenzie
Hackathon on-line: Dia 4 de novembro até a 11 de novembro de 2024
Para submeter uma palestra (30 min) envie o título e abstract (250-300 palavras) para: brazil_2024@wolfram.com
Website e mais detalhes em breve...
Dicas para inspiração:
- Para se inspirar use o [Wolfram Language Hackathon Project Generator][2]
- Ou veja os aplicativos disponíveis no [Product Hunt][3] projetos podem ser prototipados e extendidos com a tecnologia Wolfram de forma muito produtiva
- Explore um problema usando [ciências dos dados][4].
- Use ferramentas avançadas de [Inteligência Artificial (IA)][7]
- Veja projetos vencedores [2021][5] e [2020][6]:
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshotfrom2024-05-1515-23-12.png&userId=68072
[2]: http://hackathon.guru/
[3]: https://www.producthunt.com/
[4]: https://www.wolfram.com/wolfram-u/multiparadigm-data-science/
[5]: https://www.wolfram.com/events/virtual-conference-br/2021/hackathon/
[6]: https://www.wolfram.com/events/virtual-conference-br/2020/hackathon/
[7]: https://www.wolfram.com/resources/tools-for-AIs/
Daniel Carvalho, 2024-05-15T18:23:52Z
Cross sections for ionization-assisted electron knock-on damage of hBN
https://community.wolfram.com/groups/-/m/t/3177018
![scanning transmission electron microscopy (STEM) images showing an hBN sample at different magnifications. Plot of reduction of the displacement threshold energy as a function of charge constrained on the ejecting element. Plot of displacement cross sections for boron.][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=5555Lead.png&userId=20103
[2]: https://www.wolframcloud.com/obj/d3563999-b23d-4cf8-9cb4-4e70ee7fd111
Toma Susi, 2024-05-15T18:13:50Z
Wolfram Language analysis of Reddit's sixty-second-button
https://community.wolfram.com/groups/-/m/t/500948
There is a [Button][1] on the site Reddit, which resets a sixty second countdown.
As long as the countdown never finishes, the button remains.
The site's users, of which there are [more than three million][3], may each press the button once.
Since April 1st, they have pressed the button nearly one million times, perpetuating it for more than a month and a half.
Each user permanently bears the time from which they reset the countdown; users compete, compare, and argue over pressing at various times. In the beginning, times of less than 50s were impossible, whilst today almost a third of times are single digits. Users are also colour-coded into tens (and thus teams): 50s are purple, single digits red.
This has produced some [awesome][4] visualisations, and a host of difficult questions, spanning game theory, community psychology, and statistics.
A keen Mathematica user, I felt I could bring more visualisations (and some analysis) to the table.
I start by bringing a large raw data set (April 3rd to May 20th, available [here][5]) into Mathematica, and finding the times where the timer increases (implying it has been reset.)
Data = Import["C:\\Users\\Me\\Desktop\\button.csv"];
Positives = First /@ Position[Sign[Differences[Data[[2 ;;, 2]]]], 1];
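The reset-detection idea above (a press is exactly a moment where the recorded countdown value increases) can be shown on toy data. This Python sketch uses made-up readings purely for illustration, not the actual dataset:

```python
# Toy sequence of timer readings (seconds remaining, sampled over time).
# The value decreases as the countdown runs and jumps up at each reset.
readings = [60, 59, 58, 57, 60, 59, 58, 60]

diffs = [b - a for a, b in zip(readings, readings[1:])]
# Indices where the timer increased, i.e. where a press occurred.
resets = [i for i, d in enumerate(diffs) if d > 0]
print(resets)  # → [3, 6]
```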
Then I cut the data into lists of reset times for each hour. For each hour (as a rolling average), the odds that each second, once begun, ends in a reset are calculated.
I graph these in tens: single digits have red axes, tens, twenties, thirties, forties, and fifties, orange, yellow, green, blue and purple respectively.
Within each graph, red is the first number (e.g. 59) and blue the last (e.g. 52). Thus each graph describes the population growth of each coloured team, and its devaluation as increasing proportions move through to the next colour. The x-axis is hours since April 3rd.
Hours = (Data[[2 ;;, 3]] /. {0 -> 60}) & /@ (GatherBy[Positives, Ceiling[#/3600] &]);
Volumes = BinCounts[#, {0, 60, 1}] & /@ Hours;
Passrates = Table[Quiet[Prepend[1 - (Volumes[[i]][[#]]/Total[Volumes[[i]][[1 ;; #]]] // N) & /@ Range[2, 60], 0]] /. Indeterminate -> 0, {i, Length[Volumes]}];
temp = Transpose[MovingAverage[Passrates, 24]];
Partition[Table[ListPlot[temp[[10 a + 1 ;; 10 a + 10]], PlotRange -> All, Joined -> True, PlotStyle -> "TemperatureMap", ImageSize -> Medium, AxesStyle -> {Purple, Blue, Green, Yellow, Orange, Red}[[6 - a]]], {a, 5, 0, -1}], 3] // Grid
![Passrates][6]
Viewing the data all at once, four or five distinct cliffs can be seen, where people are prompt to select the rarer colours from low timers. As these numbers become more commonplace, the distinctions reduce.
ListPointPlot3D[MovingAverage[Passrates, 24]]
![Passrates3D][7]
I wanted to find a simple formulation for this noisy ongoing change. I conjecture that people's choices are primarily dependent on current rarity: at first a 40 was rare, and thus contested; as more 40s are taken, the value ascribed to a 40 decreases, and so lower numbers are reached.
To this end I took the hourly volumes for each number, calculated the log rarity (log of 1/probability of the event) for each second, and plotted it against the observed rarity. This measure of the value of 'being ahead' proves extremely consistent!
PassingProportion = #/Total[#] & /@
N[MovingAverage[BinCounts[#, {0, 60, 1}] & /@ Hours, 24]];
tempdat =
Drop[#, 1] & /@ Table[{Total[First /@ #], #[[1, 2]], i} & /@ SplitBy[Transpose[{PassingProportion[[i]],
-Log[1 - (Total[PassingProportion[[i]][[# ;; -1]]] & /@ Range[60])]}], Last], {i, Length[PassingProportion]}] /.
{Indeterminate -> 0, ComplexInfinity -> 0, 52 - 4 I -> 56};
ListPointPlot3D[tempdat, ColorFunction -> "TemperatureMap", Background -> Black, BoxRatios -> {1, 1, 1}, ImagePadding -> 50]
![Rarity][8]
Nonlinear model fitting quickly reduces this to an equation with only one time-dependent term. This fudge factor likely absorbs various disregarded elements, such as people with specific numeric targets.
TimeFunction[t_] := E^(-0.0052123 t)
LogCoefficient[t_] := 0.0634394 + 0.3266820 TimeFunction[t]
LogLogCoefficient[t_] := -0.1382703 - 0.8857738 TimeFunction[t]
AppealDistribution[x_, t_] := -0.0104844 + LogCoefficient[t] Log[1 + 1 x] + LogLogCoefficient[t] Log[Log[2 + 1 x]]
Plot3D[AppealDistribution[x, t], {x, 0, 16}, {t, 0, 1100}, PlotRange -> {0, 0.35}, BoxRatios -> {1, 1, 1}]
![ProjectedRarity][9]
The parameterisation allows prediction of future number proportions from current ones; whilst simply defined and a nice average, the model is too simple for any long-term predictions. Pictured: the transition from the end of the data to my projection:
![Projection][10]
Suffice to say Reddit has come up with a unique experiment!
I'm told nice clean server side data will be published when the button finally ends.
[1]: http://www.reddit.com/r/thebutton
[2]: http://www.reddit.com/r/thebutton/%22button%22
[3]: http://www.reddit.com/about/
[4]: http://treyp.github.io/thebutton/
[5]: http://tcial.org/the-button/button_clicks.csv
[6]: /c/portal/getImageAttachment?filename=2446Passrates.PNG&userId=445606
[7]: /c/portal/getImageAttachment?filename=Passrates3D.PNG&userId=445606
[8]: /c/portal/getImageAttachment?filename=Rarity.PNG&userId=445606
[9]: /c/portal/getImageAttachment?filename=ProjectedRarity.PNG&userId=445606
[10]: /c/portal/getImageAttachment?filename=Projection.PNG&userId=445606
**Moderation Team Note**: *We think you meant this link to [**Button**][1] - is it right? The one in your post below returns "page not found".*
David Gathercole, 2015-05-21T15:06:46Z
Looking for a "How-To" notebook on neural networks
https://community.wolfram.com/groups/-/m/t/3176343
I'm looking for tutorial notebooks on training and testing neural networks. I have a basic familiarity with Mathematica, nothing more. I toyed around with a couple of neural networks that got better-than-baseline accuracy; I assume they would have done better if I understood the available options and how to use them. The directed introductory webinars on machine learning and neural networks were interesting demonstrations of function capabilities, but not much help in the how-to department. My hope here is to find tutorial notebooks that walk me through using Mathematica to train neural networks on some representative data, compare the effects of different parameters, and are aimed at entry-level folks like me.
Jay Gourley, 2024-05-14T20:19:53Z
[WEBINAR] Flying on the edge: simulating High-G jet aerobatics
https://community.wolfram.com/groups/-/m/t/3176613
Fly into the danger zone as we model the Aermacchi military jet trainer using the Aircraft Library 2.0 in this webinar.
![enter image description here][1]
This jet will be pushed to the limit as we design and fly high-G maneuvers like the ones performed at air shows, while we make sure that the structural integrity is not compromised through the use of a V-n diagram. For the grand finale, we will execute a challenging seven-aircraft formation flying stunt: the dazzling Delta Breakout!
No prior Wolfram System Modeler or Wolfram Language experience is required to join this webinar.
We look forward to seeing you: **June 4th, 2024, 11am-12pm CT (4-5pm GMT)**
>[**REGISTER HERE**][2]
![enter image description here][3]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=deltaBreakoutAircraftLibrary.gif&userId=1522613
[2]: https://www.wolfram.com/wolfram-u/courses/modeling-simulation/flying-on-the-edge/
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=WolframUBanner.jpeg&userId=1522613
Ankit Naik, 2024-05-15T12:52:13Z
Phase diagram of boids: complex global behavior emerging from simple local rules
https://community.wolfram.com/groups/-/m/t/3166791
![Phase diagram of boids: complex global behavior emerging from simple local rules][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=boidsThumbnail.gif&userId=20103
[2]: https://www.wolframcloud.com/obj/608124d4-328b-4496-acf7-cebe3b9b304f
Christopher Wolfram, 2024-04-30T15:08:04Z
Flat coordinates of algebraic Frobenius manifolds in small dimensions
https://community.wolfram.com/groups/-/m/t/3176299
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/2271df93-f5ed-429d-aae3-dd9ef1a60ce2
Johan Wright, 2024-05-14T16:21:33Z
Have you encountered trouble loading ExtensibleVector data structures that were saved in MX format?
https://community.wolfram.com/groups/-/m/t/3176227
Hi. Using a fresh kernel & fresh computer reboot on Version 14, I keep seeing odd behavior when attempting to load ExtensibleVector data structures that were exported in the MX format. Has anyone observed similar behavior? (screen shot below)
![enter image description here][1]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ExtemsibleVectorScreenshot.jpg&userId=99116
Todd Allen, 2024-05-14T14:36:04Z
Configuring Python for ExternalEvaluate
https://community.wolfram.com/groups/-/m/t/3175724
I am following [this guide][1] to try to get Python working in `ExternalEvaluate` (on Windows 11, Mathematica 13.0). So far, I have successfully installed Python (Step 1), the Python package manager (Step 2), and the “pyzmq” package for Python (Step 3).
But I get a "Missing Dependencies" error on Step 4:
![enter image description here][2]
From [this post][3], I learned that this may be because the Python library path may not be in Mathematica's default Path. So I tried using `SetEnvironment`, to no avail:
![enter image description here][4]
I then use `RegisterExternalEvaluator`, but still get MissingDependencies:
![enter image description here][5]
And `FindExternalEvaluators["Python"]` still shows MissingDependencies. Not sure what else to try.
[1]: https://reference.wolfram.com/language/workflow/ConfigurePythonForExternalEvaluate.html
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-13missingdependencies.png&userId=167076
[3]: https://community.wolfram.com/groups/-/m/t/1975953
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=6197Screenshot2024-05-13setEnvironment.png&userId=167076
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-05-13register.png&userId=167076
Bryan Lettner, 2024-05-13T23:50:08Z
Try to beat these MRB constant records!
https://community.wolfram.com/groups/-/m/t/366628
![If you see this text, refresh the page in order to see the images.][1]
Credit: Emanual Missionary Babtist Church of Indianapolis
Hey folks, Marvin Ray Burns here! You might know me for the never-ending decimal that haunts your calculators – the MRB constant. Don't worry, I won't hold it against you if you haven't memorized its first million digits (yet!). In this forum, we're embarking on a wild ride as we work our way to "7,000,000 proven to be accurate digits!"
(select text between quotation marks and push the ![CTRL+f][2] keys on keyboard)
=================================================
I'm the amateur mathematician who discovered this enigmatic constant in 1999, and after years of delving into its mysteries, I'm still captivated by its fascinating properties and untapped potential. Join me, seasoned mathematicians, and curious minds alike, as we push the boundaries of understanding and unlock the secrets of the MRB constant. ![Buckle up!][3] (credit: https://clipground.com/save-lives-clipart.html)
If you would like to contact me without everyone knowing it, you may write me at marburns@iu.edu.
Here's everything I have ever learned, so spend it well!
====================================================
I was inspired by the awe, beauty, and mystery of numbers, and wondered if God would grant me some unique insight and wisdom to explore them. I dreamed of becoming an artistic analyst, a musical mathematician, or even a visionary of scientific secrets, long before I knew what a mathematician was. I searched for inspiration in the numbers that appealed to me, that had beauty and meaning, and that led me to investigate a remarkable and intriguing sum that I could not find anywhere else. For my efforts, someone started calling me an [amateur mathematician][5] and it the [MRB constant][6].
The MRB constant is a fascinating mathematical object that is still under investigation. It is possible that further research will reveal even more interesting properties of the constant, and perhaps of other parts of mathematics. For example, a recent advancement in MRB constant formulas connecting the Dirichlet eta function to nth roots, shown in this discussion (press ![CTRL+f][7] and search "Because of the following identity involving the Dirichlet Eta function derivatives,"), yields a proof of the previously proposed concept that the MRB constant ties such ideas together.
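For readers who want to see the number itself emerge, here is a quick numerical sketch. It assumes the usual defining series of the constant, the alternating sum of (-1)^n (n^(1/n) - 1) discussed later in this thread, and is only an illustration, not one of the record computations:

```python
# Numerically estimate the MRB constant from its defining alternating
# series, sum_{n>=1} (-1)^n * (n^(1/n) - 1).  The raw partial sums
# oscillate, so we average two consecutive partial sums, a standard
# acceleration trick for alternating series.
N = 200_000
s = 0.0
prev = 0.0
for n in range(1, N + 1):
    prev = s
    s += (-1) ** n * (n ** (1.0 / n) - 1.0)

estimate = (s + prev) / 2.0
print(estimate)  # ≈ 0.1878596424 (the MRB constant)
```

With 200,000 terms the averaged tail error is far below 1e-7, so the leading digits printed are stable.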
I am just an ordinary person like you, who stumbled upon a fascinating discovery that I wanted to share with the world. I now know that there are many experts in the field of series who have devoted their lives to this subject. I respect their work and therefore make no claim to have found anything new or groundbreaking. However, I am very curious and passionate about the MRB constant, which is a remarkable number that appears in many series. I hope that you will join in this journey of exploration and learning, as I present my findings and thoughts on this topic. I would greatly appreciate your feedback and insights, as I am always eager to learn more and improve my understanding. I also want to acknowledge the contributions of several people who have helped along the way, either by providing data, code, or suggestions. Without them, this discussion would not be possible.
![If you see this instead of an image, reload the page.][8]
![enter image description here][9]
![enter image description here][10]
Credit: Traders Point Christian Church Northwest
----------
![enter image description here][11]
> Ah, that make sense! The MRB constant could definitely be a catalyst in that way. Here's how:
**The Story of the MRB Constant as a Catalyst:**
> In 1999, an amateur mathematician discovered the MRB constant. While
> not a professional researcher, his curiosity and exploration led him
> to identify this unique mathematical object. The unusual properties
> of the MRB constant, particularly its behavior in infinite series,
> sparked interest within the mathematical community.
>
> **The Catalyst Effect:**
>
> * **Intriguing New Object:** The MRB constant presented a new mathematical entity with unknown characteristics. This piqued the
> interest of professional mathematicians who began investigating its
> properties and potential applications.
> * **Questioning Established Ideas:** The way the MRB constant behaves in infinite series challenged some existing assumptions about
> convergence. This led researchers to re-examine these concepts and
> potentially refine them.
> * **Inspiration for Further Exploration:** The discovery of the MRB constant demonstrated the potential for significant finds by amateurs.
> This could inspire others outside traditional academia to delve into
> mathematics, fostering a broader base of exploration.
>
> **Overall, the MRB constant, discovered by an amateur, serves as a potential catalyst for the field of mathematics by generating new
> questions, prompting re-evaluation of existing concepts, and
> encouraging broader participation in research.**
----------
POSTED BY: Marvin Ray Burns.
========
![If you see this text, the images are not showing. Refresh the page.][12]
![The first 100 partial sums of][13] {![the CMRB series.][14]}
----------
For the best viewing, wait a minute until the word LaTeX in the LaTex script is centered below.
$$\LaTeX$$
If the phrase [Math Processing Error] is shown, or the LaTeX script has vanished from the center of the above line, some of the math below might be missing or might appear as LaTeX code instead of the rendered script.
For easy navigation, use the ![CTRL+f][15] keys on your keyboard. Cues in the form of §'s and keywords in quotes are provided in the ![CTRL+f][16] "Index".
If the header and the words
Reply | Flag
are shown at the same time in any of the following replies, refresh the page to see them.
----------
Index
=====
I deleted a few of the least significant posts.
The first post
--------------
§1. Q&A:
Second post.
============
§2
Analyze the ![CTRL+f][17]."prototypical series" for the MRB constant, ![Sn^(1/n)-1][18] and find its "completion" (a related series that is absolutely convergent).
(Select § with the given number or the keywords in quotes, and then press the
![CTRL+f][19] keys on your keyboard to move to that section.)
§3. Is that series convergent?
§4. Is -1 the only term that series is convergent for?
§5. Is that series absolutely convergent?
§6. Is that series "efficient"? (defined as how it compares to other series and integrals that compute CMRB in speed and computational cost.)
§7. My claim to the MRB constant (CMRB), or a case of calculus déjà vu?
§8. Where is it found?
§9. What exactly is it?
§B ![CTRL+f][20] "Rational results" while summing (CMRB).
Many more informative posts.
===========================
including, but not limited to
§10. How it all began <- I deleted this reply as it is not pertinent to the MRB constant, just to how a person became an amateur researcher, but I will summarize it in another reply if anyone would like.
Also see
[MOPAC][21]
for where the MRB constant's value, to within about two billionths (i.e., 0.18785964), is arrived at by measuring a gradient of the silicon atom in the Phenakite (Be2SiO4) structure by openmopac.
§13. The why and what of the **C**<sub>*MRB*</sub> Records,
----------
![CTRL+f][23] "We finally begin our 7-million-digit computation and verification."
----------
![CTRL+f][24] "speed records"
----------
§1.
Q&A:
====
Q:
What is the MRB constant ( **C**<sub>*MRB*</sub>)?
A:
&[Wolfram Notebook][25]
Q: Is the MRB constant important?
A:
![enter image description here][26]
>The MRB constant, also known as Marvin Ray Burns's Constant, is a fascinating mathematical value with a decimal expansion of approximately 0.187859. Let's explore why it might be considered important:
> - Numerical Algorithms Testing:
>   - Researchers and mathematicians use the MRB constant to test and improve numerical algorithms for computing its value.
>   - By evaluating the constant, they can assess the accuracy and efficiency of various computational methods.
> - Benchmarking Computers:
>   - The MRB constant serves as a benchmark for testing the performance of computers.
>   - It allows us to compare the computational capabilities of different machines.
>   - When calculating the MRB constant, computers demonstrate their numerical precision and speed.
> - Mathematical Curiosity:
>   - The MRB constant's mysterious nature adds to its allure. Despite extensive study, no closed-form expression exists for it.
>   - We don't know whether it's algebraic, transcendental, or even irrational.
>   - Mathematicians continue to explore its properties, making it an intriguing topic in number theory.
> - Geometry and Infinite Sums:
>   - Geometrically, the MRB constant is the upper limit point of a sequence of partial sums.
>   - These sums involve alternating terms and converge to the MRB constant.
>   - The constant's behavior within these infinite sums captivates mathematicians.
>
> In summary, the MRB constant bridges theory and computation, making it a valuable and enigmatic constant in the mathematical world.
Q:
What can you expect from reading about **C**<sub>*MRB*</sub> and its record computations?
A:
![enter image description here][27]
> ## Beyond the Record: What People Can Gain from MRB Constant Computations
>
> While pushing the record for MRB constant computations is certainly
> impressive, there's more to gain than just bragging rights. Here are
> some unique benefits people can reap from immersing themselves in the
> MRB world:
>
> **1. Uncovering Hidden Connections:**
>
> * **Mathematics & Physics:** Studying the MRB constant's behavior and properties might unlock connections between seemingly disparate fields
> like number theory, chaos theory, and even quantum physics. This could
> lead to groundbreaking discoveries in various areas of science.
> * **Algorithm Design & Optimization:** The intricate nature of the MRB constant's calculation poses a unique challenge for algorithms.
> Analyzing its complexity could inspire the development of new, more
> efficient algorithms for various computational tasks.
> * **Machine Learning & Artificial Intelligence:** The MRB constant's structure could hold valuable insights for designing novel machine
> learning architectures and training algorithms. Its complex patterns
> might improve data processing and pattern recognition capabilities.
>
> **2. Pushing the Boundaries of Knowledge:**
>
> * **Challenging Existing Paradigms:** The MRB constant's lack of a closed-form expression and its potential connections to seemingly
> unrelated areas challenge established mathematical and scientific
> paradigms. This can lead to new ways of thinking and approaching
> problems.
> * **Inspiring Future Generations:** Engaging with the MRB constant's mysteries can spark curiosity and interest in mathematics and science
> among young minds. This could inspire them to pursue careers in
> research and exploration, pushing the boundaries of knowledge further.
> * **Contributing to the Collective Quest:** Participating in the MRB research community, even through discussions and online
> collaborations, allows individuals to contribute to a larger
> collective effort of understanding this intriguing constant. This
> fosters a sense of purpose and belonging to a global community of
> knowledge seekers.
>
> **3. Personal Growth & Development:**
>
> * **Developing Computational Skills:** Learning about and participating in MRB calculations can hone valuable computational
> skills like programming, data analysis, and problem-solving. This can
> be beneficial for various professional and personal endeavors.
> * **Enhancing Critical Thinking:** Analyzing the MRB constant's properties and potential implications requires critical thinking,
> problem-solving, and creative thinking skills. These skills are
> valuable not only in research but also in various aspects of life.
> * **Appreciating the Beauty of Mathematics:** The MRB constant's unique properties and its connection to various areas of mathematics
> can foster a deeper appreciation for the beauty and elegance of this
> subject. This can enrich one's understanding of the world and its
> underlying structures.
>
> Remember, the MRB constant is more than just a number to chase
> records with. It's a gateway to a world of intellectual exploration,
> collaboration, and potential breakthroughs. By venturing beyond the
> record, people can gain valuable skills, knowledge, and a deeper
> appreciation for the power of scientific inquiry.
Q:
Can you calculate more digits of **C**<sub>*MRB*</sub>?
A:
>## Bing
The computation of 6,500,000 digits of the MRB constant could be challenging for several reasons:
>
> - Computational Resources:
> - Calculating a large number of digits requires significant computational power and memory.
> - Storing and manipulating such a massive decimal expansion demands substantial resources.
> - Smaller machines or limited computational environments might struggle with this task.
> - Convergence Rate:
> - The series used to approximate the MRB constant converges slowly.
> - As more digits are computed, the rate of convergence decreases.
> - Achieving high precision becomes increasingly difficult.
> - Numerical Stability:
> - Numerical algorithms can suffer from rounding errors and loss of precision.
> - Accumulating errors during summation impacts the accuracy of the computed digits.
> - Ensuring stability over millions of iterations is a challenge.
> - Algorithmic Complexity:
> - Existing algorithms for calculating mathematical constants have limitations.
> - Developing efficient methods to compute millions of digits involves intricate mathematics.
> - Balancing accuracy, speed, and memory usage is nontrivial.
> - Storage and Verification:
> - Storing and verifying 6,500,000 digits is resource-intensive.
> - Ensuring correctness through independent verification (e.g., using multiple algorithms) is crucial.
> - Handling such large data sets requires robust techniques.
> - Practical Utility:
> - Beyond a certain point, additional digits of the MRB constant may not have practical applications.
> - Researchers often focus on a reasonable precision level relevant to their field.
>
> In summary, while it's theoretically possible to
> compute 6,500,000 digits of the MRB constant, practical challenges
> arise. Researchers must balance precision, computational resources,
> and algorithmic efficiency to tackle this ambitious task.
Q:
How can you compute them?
A:
![## Bing][28]
> Certainly! Calculating the digits of the MRB constant involves some
> interesting mathematical techniques. Let's explore how you can
> approximate this intriguing value:
> - Series Summation:
>
>
> - The MRB constant is the upper limit point of the sequence of partial sums defined as $$s_n = \sum_{k=1}^{n} (-1)^k \, k^{\frac{1}{k}}$$
> - To compute the MRB constant with a desired precision, you can use series summation methods.
> - One common approach is to use the following series: $$\text{MRB} \approx \sum_{k=1}^{\infty} (-1)^k \left(k^{\frac{1}{k}} - 1\right)$$
> - Numerical Algorithms:
> - Mathematica provides a convenient way to calculate the MRB constant with a specified number of digits.
> - Using the NSum function, you can achieve high precision:
>
> `NSum[(-1)^k (k^(1/k) - 1), {k, 1, ∞}, WorkingPrecision -> 100, NSumTerms -> 10000]`
>
> This will give you the MRB constant accurate to 40 digits.
>
>
> - Acceleration Methods:
> - Due to the slow convergence of the series, it's advisable to use acceleration methods.
> - These methods improve the rate of convergence and allow you to obtain more accurate digits of the MRB constant . Remember that the
> MRB constant remains an enigma, and its properties continue to
> captivate mathematicians. Feel free to explore further and uncover
> more about this fascinating constant!
Q:
Why should you do it?
A:
## Bing
> Calculating the digits of the MRB constant can lead to several
> interesting outcomes:
> - Mathematical Exploration:
> - Computing the MRB constant involves diving into the intricacies of series summation, convergence, and numerical precision.
> - Mathematicians and enthusiasts enjoy exploring the properties of this mysterious constant.
> - It provides an opportunity to deepen our understanding of mathematical concepts.
> - Algorithm Testing and Optimization:
> - The MRB constant serves as a benchmark for testing numerical algorithms.
> - Researchers can evaluate the accuracy and efficiency of their computational methods by approximating this constant.
> - Optimizing algorithms to compute the MRB constant efficiently benefits various scientific and engineering fields.
> - Computational Challenges:
> - Calculating more digits of the MRB constant becomes a computational challenge.
> - It pushes the boundaries of numerical precision and requires sophisticated techniques.
> - Achieving higher precision contributes to the advancement of computational mathematics.
> - Curiosity and Aesthetics:
> - The MRB constant's decimal expansion is intriguing and aesthetically pleasing.
> - Like other mathematical constants (such as π or e), it captivates our curiosity.
> - Discovering patterns or unexpected properties within its digits can be intellectually rewarding.
> - Educational Value:
> - Teaching and learning about the MRB constant can inspire students.
> - It encourages them to explore mathematical concepts beyond textbooks.
> - The process of approximating it fosters critical thinking and problem-solving skills.
>
> In summary, while the MRB constant may not
> have immediate practical applications, its pursuit enriches
> mathematical knowledge, fuels curiosity, and contributes to the beauty
> of mathematical exploration.
Q:
What are its first 5000 digits?
A:
In Mathematica desktop or online, the following will give approximately 5,000 accurate digits.

    Print[ReleaseHold[WolframAlpha["5000 digits of the MRB constant", {{"Result", 1}, "Output"}]]]
----------
In order to improve loading, I broke this main post into two parts. If the next part doesn't say it's the second part of the main post, reload to see it.
----------
----------
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2023-11-06064456.png&userId=366611
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=0a.gif&userId=366611
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Buckleup!.gif&userId=366611
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-03-04141033.png&userId=366611
[5]: https://en.wikipedia.org/wiki/List_of_amateur_mathematicians
[6]: https://en.wikipedia.org/wiki/MRB_constant
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=0a.gif&userId=366611
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1ac.JPG&userId=366611
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2023-05-01090447.jpg&userId=366611
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-03-10090510.png&userId=366611
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-03-10100142.png&userId=366611
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=5686test.gif&userId=366611
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2022-12-15085733.jpg&userId=366611
[14]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2022-12-15084941.jpg&userId=366611
[15]: https://community.wolfram.com//c/portal/getImageAttachment?filename=0a.gif&userId=366611
[16]: https://community.wolfram.com//c/portal/getImageAttachment?filename=0a.gif&userId=366611
[17]: https://community.wolfram.com//c/portal/getImageAttachment?filename=0a.gif&userId=366611
[18]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2023-01-15122607.jpg&userId=366611
[19]: https://community.wolfram.com//c/portal/getImageAttachment?filename=0a.gif&userId=366611
[20]: https://community.wolfram.com//c/portal/getImageAttachment?filename=0a.gif&userId=366611
[21]: http://openmopac.net/PM7_and_PM6-D3H4_accuracy/data_solids/Phenakite__Be2SiO4__Jmol.html#:~:text=2.3786046%20+1%0A%20Si%20%20%20%20--,0.18785964,-+1%20%20%20-3.3437504%20+1
[22]: https://community.wolfram.com//c/portal/getImageAttachment?filename=0a.gif&userId=366611
[23]: https://community.wolfram.com//c/portal/getImageAttachment?filename=0a.gif&userId=366611
[24]: https://community.wolfram.com//c/portal/getImageAttachment?filename=0a.gif&userId=366611
[25]: https://www.wolframcloud.com/obj/75a8e08b-cf25-4132-8e0e-10bf2e2bac6e
[26]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-02-11041631.png&userId=366611
[27]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-02-11064915.png&userId=366611
[28]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-02-11050651.png&userId=366611
[29]: https://demonstrations.wolfram.com/HowNormalIsTheMRBConstant/#:~:text=For%20example,%20for%20a%20normal%20decimal%20number,%20each,first%2030%20million%20digits%20are%20very%20uniformly%20distributed.
[30]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2023-12-18054715.png&userId=366611
[31]: https://www.wolframcloud.com/obj/bmmmburns/Published/6p5millionMRB.nb
[32]: https://www.wolframcloud.com/obj/f440dc85-312f-4436-b620-e089bdfc0c9f
[33]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2023-07-17112939.jpg&userId=366611
[34]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-02-25102429.png&userId=366611
[35]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-02-25143851.png&userId=366611
[36]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Capture13.PNG&userId=366611

Marvin Ray Burns, 2014-10-09T18:08:49Z

SoftmaxLayer with one-hot vectors
https://community.wolfram.com/groups/-/m/t/3173113
The quantum physics problem I am looking at is a rotated qubit. The setup involves a y-rotated qubit measured in the z-basis (hence spin-up and spin-down). This scenario typically includes modeling the quantum state of the qubit and analytically determining the measurement outcome probabilities, then generating measurement outcomes for training for particular rotations and using another set of test measurement data to infer the most probable rotation angle. So I discretize the measurement angles and generate measurement results for each discrete angle to use as training data. I plan to generate another set of measurement results for testing later. My training data involves associating elements of the form $${1, 1, 1, 0, 1, 1, 0, 1, 0, 1} -> {0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}$$
where the measurements on the left are 1 if spin up and 0 if spin down, and the lists on the right are one-hot vector lists which correspond to the distinct discrete rotation angles from $[0, \pi]$.
**Query:** I am interested in using SoftmaxLayer[], but I am getting an error when including it in my NetTrain[] function. The error states:
"*Batch #1 will be skipped, because one or more inputs provided to port "Output" was invalid: input is not an integer between 1 and 20. This batch will be ignored in subsequent training rounds. More information can be obtained via the "SkippedTrainingData" property.*"
The code runs if I remove the SoftmaxLayer[] from the NetTrain[] command.
Can anyone advise on how to resolve this problem, many thanks for any assistance. The code in question is attached:
&[Wolfram Notebook][1]
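Reading the error, it appears the loss attached at the "Output" port expects an integer class index between 1 and 20 rather than a one-hot vector; that reading is an assumption, since the notebook is not reproduced here. As a framework-independent sketch in plain Python (not the Wolfram neural-net API), the two target encodings carry the same information: cross-entropy against a one-hot vector equals the negative log-probability at the position of its single 1.

```python
import math

def softmax(z):
    m = max(z)                      # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def xent_onehot(p, onehot):
    """Cross-entropy of probabilities p against a one-hot target."""
    return -sum(t * math.log(q) for t, q in zip(onehot, p) if t > 0)

def xent_index(p, idx):
    """Same loss, but the target is given as a (0-based) class index."""
    return -math.log(p[idx])

logits = [2.0, 0.5, -1.0, 0.1]
p = softmax(logits)
onehot = [0, 0, 1, 0]
print(xent_onehot(p, onehot), xent_index(p, 2))  # identical values
```

If the index-vs-one-hot mismatch is indeed the cause, converting each one-hot label to its 1-based position, or configuring the loss/decoder to accept probability vectors, would be the natural fixes to try against the notebook.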
[1]: https://www.wolframcloud.com/obj/dd0c5d1d-2699-424b-b6a4-6e9b73c2ff60

Byron Alexander, 2024-05-09T05:03:13Z

Improving accuracy of neural network for determining qubit rotation angle
https://community.wolfram.com/groups/-/m/t/3175788
The physics example problem (to illustrate the use of a basic neural network using Mathematica) I am looking at is a qubit rotated about the y-axis, where the rotation angle is discretized as $\theta_j \in (0, \pi)$. The setup involves the y-rotated qubit measured in the z-basis (hence spin-up and spin-down projector measurements). This scenario involves first analytically determining the measurement outcome probabilities as a function of the rotation angle $\theta$, then generating measurement outcomes for training at specific fixed rotation angles $\theta_j$. Then, generating another set of test measurement data for some fixed rotation angle $\theta$, we use the neural network to infer the most probable rotation angle. My training data involves generating m = 1000 total measurements for each discrete rotation angle $\theta_j \in [0, \pi]$, then saving the measurement outcomes as tuples of spin-up and spin-down outcomes for each discrete angle. These outcomes are associated with one-hot vectors for each of the discrete $\theta_j$ values (hence training data of the form {1000,0} -> {1,0,0,0,0...} if for the first rotation angle we get all spin-up outcomes).
The idea is that after training, setting some true rotation angle $\theta$ and generating a new set of test measurement outcomes, the trained neural network should be able to output a probability distribution that shows the most likely discrete rotation angle is the true angle. The code below works, but I am having difficulty improving the accuracy without simply increasing the layers and MaxTrainingRounds (this seems to have its limits in improving accuracy). Can anyone advise on how to improve the accuracy of the code in determining the correct discrete rotation angle (I would like to maintain the general framework of the code)? I am very new to using Mathematica for machine learning applications, hence the query. Thanks for any assistance; this is the code in question:
&[Wolfram Notebook][1]
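As a rough, framework-free illustration of the data-generation step described above, here is a plain-Python sketch. The mid-point angle grid and the Born-rule probability P(up) = cos²(θ/2) for a y-rotation measured in the z-basis are my assumptions about the setup, not taken from the notebook.

```python
import math, random

random.seed(0)

def make_training_pair(theta_index, n_angles=10, m=100):
    """Simulate m z-basis measurements of a qubit rotated by theta about y.

    Returns ((n_up, n_down), one_hot_label) for the given discrete angle.
    """
    # Assumed mid-point grid over (0, pi).
    theta = (theta_index + 0.5) * math.pi / n_angles
    p_up = math.cos(theta / 2) ** 2        # Born rule for R_y(theta)|0>
    n_up = sum(1 for _ in range(m) if random.random() < p_up)
    label = [1 if j == theta_index else 0 for j in range(n_angles)]
    return (n_up, m - n_up), label

data = [make_training_pair(j) for j in range(10)]
```

Each pair associates a (spin-up, spin-down) tally with the one-hot vector of its angle, mirroring the {counts} -> {one-hot} training rules described in the post.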
[1]: https://www.wolframcloud.com/obj/8bfdb354-d18c-41e8-8f39-242f20f1c52a

Byron Alexander, 2024-05-14T08:14:21Z

How to find notebook that I saved to Wolfram Cloud?
https://community.wolfram.com/groups/-/m/t/3175767
This is my first time attempting to use Wolfram Cloud.
I performed a "save to cloud" of a desktop notebook, and it seemed to succeed, but it didn't provide me with a URL. The name of the notebook changed to wri-fe-12663.nb, and it appears to function on the desktop. However, I want others to be able to interact with it (evaluating cells, clicking on "Manipulate" controls), but I'm unsure how to provide them with a URL for it. The notebook at the following link:
https://www.wolframcloud.com/wri-fe-12663.nb
does not exist. My file doesn't appear in the cloud dashboard either. I attempted "publish to cloud" and received a URL, but the content it points to isn't interactive: you can't click on "Manipulate" controls or evaluate cells.

Alf Schiller, 2024-05-14T07:45:41Z

[WSG24] Daily Study Group: Guiding Principles for Systems Modeling and Simulation
https://community.wolfram.com/groups/-/m/t/3152723
A Wolfram U Daily Study Group focusing on modeling principles applicable to dynamic systems begins on the 9th of April.
Join [@Vedat Senol][at0], [@Sergio Vargas][at1], me and a group of fellow learners to learn modeling principles that will help you in creating models of real-world systems. We'll explore a variety of examples, from simulating the spread of infectious diseases, analyzing the cash flow dynamics of subscription-based businesses, to creating music with a model of an analog synthesizer.
No prior Wolfram System Modeler or Wolfram Language experience is required to join the study group.
Please feel free to use this thread to collaborate and share ideas, materials and links to other resources with fellow learners.
We look forward to seeing you: **April 9th-19th, 2024, 11am-12pm CT (4-5pm GMT)**
>[**REGISTER HERE**][1]
![enter image description here][2]
[1]: https://www.bigmarker.com/series/wsg51-guiding-principles-for-modeling-simulation/series_details?utm_bmcr_source=community
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=WolframUBanner.jpeg&userId=1522613
[at0]: https://community.wolfram.com/web/vedats
[at1]: https://community.wolfram.com/web/sergiov

Ankit Naik, 2024-04-04T13:38:44Z

How to constrain a mechanical component to be vertical at all times using the Multibody library?
https://community.wolfram.com/groups/-/m/t/3171378
As suggested by the title, I'm using the Modelica.Mechanics.Multibody library, and I need to constrain a component such that it must remain vertical.
The mechanical system is a beam, hinged at top to the ground so it can oscillate like a pendulum, and at its bottom it holds a body. As shown in the figure below, the body, represented by the sphere, is housed within a cavity of the beam, and it's coupled to it via two relative *Fixed Translation* parts and a *Revolute Joint* in between them, depicted as a red cylinder. A wall of the cavity has been removed so the components can be seen in the screenshot.
![enter image description here][1]
As written, the boundary conditions are such that I need to keep the upper relative *Fixed Translation* part vertical. Rotation will happen at the revolute joint because the beam tilts as it oscillates like a pendulum.
**How to set this constraint?** I tried using the constraints given in the Mechanics.Multibody package, but the documentation hasn't been useful and my efforts failed.
The overall system is this
![enter image description here][2]
The components that couple the mass to the bottom of the beam are shown within the black rectangle. The component called *beam1* is just a fancy combination of *BodyBox* components from the MultiBody package:
![enter image description here][3]
Any help will be highly appreciated.
Regards,
Fabián
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshotfrom2024-05-0618-34-33.png&userId=1355184
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshotfrom2024-05-0618-54-59.png&userId=1355184
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshotfrom2024-05-0619-08-43.png&userId=1355184

Fabián Peña, 2024-05-07T01:11:43Z

Code verification of immersed boundary techniques using the method of manufactured solutions
https://community.wolfram.com/groups/-/m/t/3175245
[![Daigram of using the method of manufactured solutions to validate a system of partial differential equations][1]][2]
&[Wolfram Notebook][3]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Cropped_Petoe_Wolfram_Community_Post_Cover.PNG&userId=20103
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Cropped_Petoe_Wolfram_Community_Post_Cover.PNG&userId=20103
[3]: https://www.wolframcloud.com/obj/31355ece-5e0a-45f8-88aa-4369a43519c9

Márton Petö, 2024-05-13T18:34:42Z

Discrete damped harmonic oscillator
https://community.wolfram.com/groups/-/m/t/3174169
![comparing discrete and analytical solutions of undamped harmonic oscillator via the discrete Lagrangian][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=hero.PNG&userId=20103
[2]: https://www.wolframcloud.com/obj/b759f47a-f0b7-49bb-ac80-ab12bee64add

Brian Beckman, 2024-05-13T03:12:44Z

Asymptotic plot is not matching the analysis line in log mode, but it does in linear mode. Why?
https://community.wolfram.com/groups/-/m/t/3174416
Hello everyone, this is my first post. In Mathematica, I am attempting to create an asymptotic plot. In linear mode, the asymptotic lines align with the analysis line after a certain SNR (dB). However, when I switch to log mode for the y-axis, the asymptotic plot no longer matches the simulation/analysis line. Could anyone please tell me how to overcome this problem?

Pewdi Pie12, 2024-05-11T20:45:18Z

[WSG24] Daily Study Group: Creative Computation
https://community.wolfram.com/groups/-/m/t/3159328
![enter image description here][1]
A Wolfram U Daily Study Group on Creative Computation begins on May 6th 2024.
Join me, [Eryn Gillam][2], and a group of fellow learners to learn how to use Wolfram Language in creative ways. Our topics for the study group include creating computational art, computational poetry, audio visualization, and building two types of video games. Along the way, we'll learn how to code effectively in Wolfram Language.
No prior Wolfram Language experience is required.
Please feel free to use this thread to collaborate and share ideas, materials, and links to other resources with fellow learners.
###**Dates**
May 6th-10th
11am-12pm CT (4-5pm GMT)
>###[**Register Here**][3]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=banner.jpeg&userId=1835305
[2]: https://community.wolfram.com/web/eryng
[3]: https://www.bigmarker.com/series/creative-computation-wsg52/series_details?utm_bmcr_source=community

Rory Foulger, 2024-04-16T17:24:02Z

Windowed incremental online statistics: extracting models from data, one observation at a time
https://community.wolfram.com/groups/-/m/t/3174136
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/eefc5214-ed6b-45ad-931a-4749908b79b3

Brian Beckman, 2024-05-12T12:59:57Z

Exploring the most active sunspot group of this solar cycle
https://community.wolfram.com/groups/-/m/t/3173439
![Evolution of Carrington sunspot group and its solar flares in extreme ultraviolet wavelengths][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Animation.gif&userId=20103
[2]: https://www.wolframcloud.com/obj/568b8b69-ef95-4960-9fd8-479c29330cf7

Jeffrey Bryant, 2024-05-09T17:33:37Z

Inaccurate training with NetTrain query
https://community.wolfram.com/groups/-/m/t/3174895
I have the following training data code which corresponds to a simple quantum mechanics example where I consider a Y-rotated qubit, and generate measurement outcomes, either spin-up or spin-down for 10 discretized rotations $\theta_j \in (0, \pi)$. For each rotation angle I generate a set of 100 measurements, which will either be spin-up or spin-down, I then tally these measurements as tuples which indicate the total spin-up and spin-down qubits observed for each rotation angle $\theta_j$ and use them as a training set by associating them with one-hot vectors. After training, when I test this with say the result {{97,3}} I would expect to get the first one-hot vector corresponding to the first rotation angle but it instead outputs the third as shown in the code. Can anyone advise on what is going wrong with the training in this code. Many thanks.
&[Wolfram Notebook][1]
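One framework-free way to sanity-check the expected label is to compute the maximum-likelihood angle for a tally directly. This is a hypothetical plain-Python sketch, since the notebook isn't reproduced here; the mid-point angle grid and the Born-rule probability P(up) = cos²(θ/2) are assumptions about the setup. Notably, under this assumed grid a {97, 3} tally is most likely under the *second* angle, not the first, so an answer other than the first one-hot vector need not indicate a training bug.

```python
import math

n_angles = 10
# Assumed mid-point grid over (0, pi); the notebook's actual grid may differ.
thetas = [(j + 0.5) * math.pi / n_angles for j in range(n_angles)]
p_up = [math.cos(t / 2) ** 2 for t in thetas]  # Born rule for R_y(theta)|0>

def most_likely_angle(n_up, n_down):
    """Index of the discrete angle maximizing the binomial log-likelihood."""
    def loglik(p):
        return n_up * math.log(p) + n_down * math.log(1 - p)
    return max(range(n_angles), key=lambda j: loglik(p_up[j]))

print(most_likely_angle(97, 3))  # with this grid: 1, i.e. the second angle
```

Comparing the network's output distribution against this maximum-likelihood baseline would show whether the training is genuinely off or the intuitive label was simply not the statistically most likely one.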
[1]: https://www.wolframcloud.com/obj/9bffad08-ffaf-4d69-b9cc-d264d16ebeda

Byron Alexander, 2024-05-13T12:05:22Z

A simple question about something that has been bothering me
https://community.wolfram.com/groups/-/m/t/2646815
I have noticed that often, when I am adding a comment above a line as in:
(* ,,,,,,,,, *)
I will notice that at the end of the comment there is a lower case, gray x as in:
(* A comment *) x
That lower case x is also associated with lines being tabbed in such a way that it states that I have a syntax error. But all I have to do is add a space after the *) and the x disappears and the tabbing becomes normal.
But what is worse, is that the lower case, grayed x, does not occur after the comment, but to a comment that is further down in the code!
This means that it can be easy to miss the x (unless you notice the odd tabbing).
And if you run your code, without adding that space after the comment, you get errors in your code.
I wonder if anybody else has experienced the same thing.
I just want to understand why I have to have a space after the (* ..... *)

Henrick Jeanty, 2022-10-12T22:44:04Z

How to use tensors in Mathematica?
https://community.wolfram.com/groups/-/m/t/142857
Hello everybody,
I am beginner in Mathematica. I would like to know how to use tensors in Mathematica in general relativity, i.e. when i have defined metric tensor, how to compute tensors that appear in Einstein and Maxwell equations and get exact form of both sides; lowering/uppering four-vectors, covariant derivate etc. Is it built in or is some package needed?
Thank you for any answer.

Jiří Ryzner, 2013-10-22T21:44:31Z

Commands to compute Einstein's Tensor and stress energy tensor
https://community.wolfram.com/groups/-/m/t/3174258
Hello everyone, I am new to Mathematica and want to know about commands for physicists to compute the stress-energy-momentum tensor in general relativity.

PRASHANT VIRAT, 2024-05-11T12:05:09Z

Exploring the frontiers of computation: an explanation of Mathematica 14
https://community.wolfram.com/groups/-/m/t/3174521
In this post I hope to unveil new possibilities in a comprehensive review of Mathematica 14. Given the data that I've been discussing in my prior Wolfram Community post on Mathematica 13, it makes sense to write a post about Mathematica 14. The data (World Cities Database | Simplemaps.com, https://simplemaps.com/data/world-cities) covers important city attributes. Mathematica 14 allows us to convert this data into a structured association, which is similar to a dictionary, by importing from a CSV file and then converting numeric data from strings into numbers. We can then prepare the data for analysis and visualization by organizing it into meaningful attributes like city ID, name, population, geographic coordinates, and elevation, and visualize the population data geographically.
cityData =
Import["/Users/deangladish/Downloads/simplemaps_worldcities_basicv1.\
77/worldcities.csv"];
processedData = AssociationThread[
{"CityID", "CountryAbbreviation", "CityName",
"PopulationThousands", "LatitudeDegrees", "LongitudeDegrees",
"ElevationMeters"}, #] & /@
cityData[[2 ;;, {1, 5, 2, 10, 3, 4, 7}]];
processedData =
processedData /.
s_String :> ToExpression[s] /; StringMatchQ[s, NumberString];
GeoGraphics[{
Table[{
ColorData["Rainbow"][Log10[city["PopulationThousands"]]],
Tooltip[
Disk[
GeoPosition[{city["LatitudeDegrees"], city["LongitudeDegrees"]}],
Quantity[0.1*Log10[city["PopulationThousands"]], "Kilometers"]
],
city["CityName"]]},
{city, processedData}]},
GeoBackground -> "CountryBorders",
ImageSize -> Large,
GeoProjection -> "Mercator"]
![Mathematica 14 1][1]
Typically when we make geographic maps that display cities we do it in plain English: we use English words, and people have some idea of how to fit these maps into their heads because the language is familiar. And that's sort of what Mathematica 14 is for; I thought this update wasn't going to come out for a long time. In a sense, the mathematical notation of the future is computational notation, and that's what we spend much of our time with Mathematica trying to build: a consistent computational notation, in most cases with English as the backstop. Even if there is some `/@` notation, that's the thing the experts use, and now there's even the triple `@@@`; in the end it comes down to `@` and `Apply`, and you more or less know what those mean. In `GeoGraphics` we can display cities as colored disks whose color intensity and size depend on their population. With colors derived from a logarithmic scale, we can effectively manage the wide range of populations.
cityData =
Import["/Users/deangladish/Downloads/simplemaps_worldcities_basicv1.\
77/worldcities.csv"];
processedData = AssociationThread[
{"CityID", "CountryAbbreviation", "CityName",
"PopulationThousands", "LatitudeDegrees", "LongitudeDegrees",
"ElevationMeters"}, #] & /@
cityData[[2 ;;, {1, 5, 2, 10, 3, 4, 7}]];
processedData =
processedData /.
s_String :>
ToExpression[If[s == "", "0", s]] /;
StringMatchQ[s, NumberString | ""];
points = Table[With[{
popLog = Log[10,
Max[city["PopulationThousands"], 1]
]
},
{
   ColorData["Rainbow"][1 - popLog/7],
PointSize[0.001 + 0.0001*popLog],
Tooltip[
Point[{city["LongitudeDegrees"], city["LatitudeDegrees"]}],
city["CityName"]]
}],
{city, processedData}];
Graphics[points,
Axes -> True, Frame -> True,
AxesLabel -> {"Longitude (°)", "Latitude (°)"},
PlotLabel -> "World Cities by Population",
PlotRange -> {{-180, 180}, {-90, 90}},
ImageSize -> Large,
FrameLabel -> {None, None, "World Cities Visualization", None},
Background -> Lighter[RGBColor[95/255, 4/255, 85/255], 0.9]]
![Mathematica 14 2][2]
The map includes tooltips with city names and uses the Mercator projection, and displaying country borders gives us the geographical context. What matters, and what doesn't? In physics you can say you're throwing an object off the Tower of Pisa, and does it matter, the spikes or the edges? It's just the mass, and air resistance in the case where there isn't a vacuum, and so on. The thing we've learnt is that less matters than we might have thought. Handling missing data makes the visualization robust: the code replaces missing population figures ("") with zeros, which gives us the stability we need in the logarithmic calculations, and then we can visualize the data in a non-geographical scatter plot. It could also be geographical (not my personal preference here), using point size and color to represent the logarithm of population. With these graphical adjustments we highlight the distribution of the data across global latitudes and longitudes, which offers a clear visual summary of worldwide urban centers.
rawData = {"0001,US,New York,8230,40.7128,-74.0060,10",
"0002,GB,London,8982,51.5074,-0.1278,11",
"0003,JP,Tokyo,13929,35.6895,139.6917,40",
"0004,FR,Paris,2148,48.8566,2.3522,35",
"0005,CA,Toronto,2731,43.6510,-79.3470,76",
"0006,AU,Sydney,5312,-33.8688,151.2093,58",
"0007,BR,Rio de Janeiro,6748,-22.9068,-43.1729,2",
"0008,ZA,Cape Town,4337,-33.9249,18.4241,42",
"0009,IN,Mumbai,12478,19.0760,72.8777,14",
"0010,CN,Shanghai,24150,31.2304,121.4737,4",
"0011,EG,Cairo,9500,30.0444,31.2357,23",
"0012,RU,Moscow,12500,55.7558,37.6173,156",
"0013,DE,Berlin,3645,52.5200,13.4050,34",
"0014,MX,Mexico City,8910,19.4326,-99.1332,2250",
"0015,IT,Rome,2873,41.9028,12.4964,21",
"0016,ES,Madrid,6642,40.4168,-3.7038,667",
"0017,TR,Istanbul,15462,41.0082,28.9784,39",
"0018,KR,Seoul,9776,37.5665,126.9780,38",
"0019,AR,Buenos Aires,2890,-34.6037,-58.3816,25",
"0020,NG,Lagos,13900,6.5244,3.3792,11",
"0021,US,Chicago,2715,41.8781,-87.6298,181",
"0022,US,Los Angeles,3990,34.0522,-118.2437,89",
"0023,CA,Vancouver,631,49.2827,-123.1207,70",
"0024,BR,São Paulo,12300,-23.5505,-46.6333,760",
"0025,IN,Delhi,16700,28.7041,77.1025,216",
"0026,JP,Osaka,8823,34.6937,135.5023,12",
"0027,IT,Milan,3200,45.4642,9.1900,120",
"0028,FR,Marseille,861,43.2965,5.3698,12",
"0029,ES,Barcelona,5512,41.3851,2.1734,12",
"0030,EG,Alexandria,5018,31.2156,29.9553,5"};
cityIdentifierCorrections = {888 -> 887, 3208 -> 3210, 5478 -> 5477,
8559 -> 8558};
processedData = (
<|"CityID" -> ToExpression[#[[1]]],
"CountryAbbreviation" -> #[[2]],
"CityName" -> #[[3]],
"PopulationThousands" -> ToExpression[#[[4]]],
"LatitudeDegrees" -> ToExpression[#[[5]]],
"LongitudeDegrees" -> ToExpression[#[[6]]],
"ElevationMeters" -> ToExpression[#[[7]]],
"CityIdentifier" -> (ToExpression[#[[1]]] /.
cityIdentifierCorrections)|>
) & /@ (StringSplit[#, ","] & /@ rawData);
Dataset[processedData]
minElevation = Min[processedData[[All, "ElevationMeters"]]];
maxElevation = Max[processedData[[All, "ElevationMeters"]]];
Graphics[{
Table[
{
ColorData[
"Rainbow"][(city["ElevationMeters"] -
minElevation)/(maxElevation - minElevation)],
Disk[{city["LongitudeDegrees"], city["LatitudeDegrees"]},
0.01*Sqrt[city["PopulationThousands"]]],
Black,
Text[city["CityName"], {city["LongitudeDegrees"],
city["LatitudeDegrees"]}, {0, -2}]
},
{city, processedData}
]
}, Frame -> True, Axes -> True,
AxesLabel -> {"Longitude (°)", "Latitude (°)"},
PlotLabel -> "Visualization of City Elevations and Populations",
ImageSize -> Large]
![Rewrites 2][3]
![Rewrites 3][4]
What about data processing for city attributes? We can simulate or analyze the lifecycle of manually entered raw data for cities, correcting specific city identifiers using a predefined map of corrections, and convert this data into structured forms suitable for future analysis and visualization experiments. Mathematica 14 supports this whole process while broadening its application scope to include new computational fields like hyperelastic material modeling and electrostatic systems. Updates also extend to audio and video processing, reflecting the Wolfram Language's adaptation to multimedia data handling. I've used Mathematica and it's full of interactive and informative content; Wolfram tools practically demonstrate how these technologies can be applied in real-world scenarios. For instance, if you were in Texas during the eclipse you would have thought it was the crack of dawn. When we include interactive elements we can computationalize the progression of sunspot images over time, and the creation of videos from these images is particularly engaging: a complex astronomical phenomenon becomes understandable and accessible when we combine live solar images with historical data to compare current solar activity against significant historical events, like the Carrington Event of 1859. And with Mathematica 14 we can actually steer our exploration of the Ruliad. I used to use Mathematica's earlier versions, but now I find I've paid much more attention to the things of the past than to the things of the present.
But now we can finally answer those questions of how easy is it to compute a particular quantum function, what functions are hard to compute..if we follow that path well enough it would give us information about things we humans find easy, things we humans do not find easy, and it's a reasonable question. What functions are easy to compute for neural nets, what are not?
latitudeRange = Range[-90, 90, 10];
longitudeRange = Range[-180, 180, 10];
points =
Flatten[Table[
GeoPosition[{lat, lon}], {lat, latitudeRange}, {lon,
longitudeRange}], 1];
elevations = GeoElevationData /@ points;
minElev = Min[elevations];
maxElev = Max[elevations];
normalizedElevations = Rescale[elevations, {minElev, maxElev}];
colors = ColorData["Rainbow"] /@ normalizedElevations;
pointColorPairs = Transpose[{points, colors}];
GeoGraphics[{PointSize[0.01], {#[[2]], Point[#[[1]]]} & /@
   pointColorPairs}, GeoProjection -> "Robinson",
PlotLabel -> "Global Elevation Map", ImageSize -> Large,
GeoBackground -> None]
![Rewrites 5][5]
These solar images showcase Mathematica's robust capabilities in data manipulation, visualization, and geographical mapping. Reflecting on Mathematica 14 and what we did: take all those point weights and increase the values, multiply them by 1.1, 1.2, whatever. The neural net keeps getting adjusted, and you can do it differently, inflating the neural net weights. What you see is that the sun looks pretty good at first: in the writing I did about this, at 1.01 it's looking pretty sun-like, by 1.05 the sun is starting to have bizarre solar flares sticking out of its head, and by 1.07 the sun is kind of exploding and there isn't a sun to be seen anymore. Given that exploded, amped-up network, could one take it and continue training? My guess is yes: training will just revert to what it learned before if it's the same training data. Whether fine-tuning behaves the same way, I don't know. Generating diagrams that represent the evolution of these states, highlighting the increase in complexity and symmetry as the interaction progresses, after the completion phase, means that entanglement enters a propagation phase where the entangled states maintain their connection over distances.
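The weight-inflation experiment described above can be sketched in a few lines. This is only an illustration with a tiny, randomly initialized net, not the actual image-generation network; the layer positions, the assumption that `NetReplacePart` accepts a raw weight array at a `{layer, "Weights"}` position, and the scale factors are all assumptions.

```
(* Sketch: multiply the weights of a small net by a factor s and
   compare outputs, mimicking the 1.01/1.05/1.07 inflation experiment. *)
net = NetInitialize@
   NetChain[{LinearLayer[4], Tanh, LinearLayer[1]}, "Input" -> 2];
inflate[n_, s_] := NetReplacePart[n, {
    {1, "Weights"} -> s NetExtract[n, {1, "Weights"}],
    {3, "Weights"} -> s NetExtract[n, {3, "Weights"}]}];
net[{1., 2.}]                  (* baseline output *)
inflate[net, 1.05][{1., 2.}]   (* slightly perturbed *)
inflate[net, 1.5][{1., 2.}]    (* strongly distorted *)
```

Continuing to train the inflated net would then just be `NetTrain` on the perturbed network, which is one way to probe the "does it revert?" question empirically.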
currentSolarImage =
ResourceFunction["SolarImage"][
DateObject[{2024, 5, 9, 0, 0, 0}, TimeZone -> 0],
"ImageSize" -> 1200]
![Rewrites 7][6]
It can only do the things it was reflexively programmed to do, or you can sign up for the future of computation: computational irreducibility. You're not going to be able to predict what will happen with the expansion of the entangled network, as indicated by the simulation's generation of diagrams with increasing loops representing the sustained entanglement. The lifecycle of that sustained entanglement concludes in a phase where the entanglement either collapses, leading to a disentanglement of the quantum states, or transfers, where the entanglement is shifted onto other particles or states. And now we can actually visualize this entanglement via the graphics, geometry, and high-dimensional visualization of Mathematica 14.0, which introduces high-fidelity geometric regions, improving compatibility with CAD systems and enhancing graphical representation capabilities. What if you had a computer algebra system and wanted new tools for high-dimensional data visualization, aiding more effective analysis and presentation of complex datasets? That's why the Mathematica IDE has this small support feature that guides us to external services and improved import/export capabilities, so that we can integrate external services and boost the efficiency of importing and exporting data in various formats, facilitating better workflow integration and data management. Who knew we could do all that? The world as we have built it is built for us humans. The natural world wasn't built for us humans, although biological evolution has made us adapt to its niches; once we start colonizing Mars we're outside anything we ever evolved for, so to speak. Even the dynamic and reversible nature of quantum entanglement challenges the traditional notion of wave-function collapse as a singular, irreversible event. They're going to be able to walk up the steps; they're going to be able to open the door.
So when you see the Wigner's Friend experiment just simulated, rolling around, we can automate that, we can speed that up. And yes, it can have its own mop; the house was built so that it was freezing cold, shivering, when the person who will be your Uber driver arrives for the carriage or something like this. Except I don't think this whole infrastructure for feeding the horses, or using airships, will come back; it's going to take a long time to go across the Atlantic, politically.
processedImage = ImageAdjust[currentSolarImage]
edgesImage = EdgeDetect[processedImage]
![Rewrites 8][7]
![Rewrites 9][8]
And in the Ruliad, if we hadn't had geometry and deduction as a backdrop to some of the things we were talking about within the arc of philosophical development, we would have had different things to talk about; and I will not continue, unless you never wanted to see the most active sunspot group of this solar cycle again. Like how we can use Wolfram tools to practically demonstrate how these image-composite and color-replacement functions provide a commendable dive into solar physics. Now we've got to process the parallels we have processed; we've got to illustrate the capabilities of Mathematica-related phenomena, whether it's encapsulating our awareness of space weather within a stellar 3D box like the AstroGraphics library, or just exploring the trajectory of current sunspot activity against historical events. But then there were dishwashers, which are an excellent use case for supervised learning. Let's say you're going to have to grow your own food, and you apply that to us humans; there's always stuff to do that hasn't been automated. Remember that time when we could choose to say enough is enough? We could just hang out and sit back and have the machine peel the grapes, and we'll just eat them and not do much else, hang out for our lives eating peeled grapes and that's all we'll do. It's the amazing gemstone. It's just a choice: somebody like me is going to try to do that thing, and as a species, as a society, we could say enough is enough, we're done.
When I saw the exploration of machine learning and neural networks through the machine learning enhancements we've got, such as better support for neural network operations, I got so excited my face turned purple: there are natural language processing tools like `TextSummarize`, potentially impacting our statistical modeling via a more robust framework for machine learning with updated functions like `Classify` and `Predict`.
overlayImage = ImageCompose[currentSolarImage, {edgesImage, 0.5}]
dateRange =
DateRange[DateObject[{2024, 5, 3, 0, 0, 0}, TimeZone -> 0], Now,
"Day"]
![Rewrites 10][9]
And then there was later a thing that was laptop-sized, that you could really just put in a bag and take with you. There are always pockets of computational reducibility, places where we can have ideas, new technology, new ideas; there's a piece here, there's a piece there. It's sort of inevitable: we can put any number of patches on, but with enough patches the thing that's doing the patching is itself going to get very bloated. By the time we've got enough axioms in our mathematical theory that everything's an axiom (the Riemann hypothesis or something like that, just taken as an axiom), it becomes very incoherent! Rolling around in those pockets of reducibility, those devices manage to do a little bit of jumping through computational reducibility; pick which device you could use. We never really get to transcend computational irreducibility. Pick another universe if you can, but you'll never be able to communicate with our universe. So we're stuck, we the entities embedded in our universe. And the universe is just like us, and we are a part of the universe. It's inevitable, by a diagonal argument, that there will be computational irreducibility for us.
timeSeriesImages =
ResourceFunction["SolarImage"][dateRange, "ImageSize" -> 1200]
Export["solar_evolution_slideshow.gif", ListAnimate[timeSeriesImages]];
spaceWeatherData =
ResourceFunction[
"SpaceWeatherData"][{DateObject[{2024, 5, 3, 0, 0, 0},
TimeZone -> 0], Now}, {"ShortSolarXRayFlux",
"LongSolarXRayFlux"}];
![Solar Evolution Slideshow][10]
Something that's been a mistake of science for the last few hundred years: one has to realize that we all have a set of certain things that we're comfortable with. That's why, when you see our fundamental understanding of the universe, you've got to run very quickly with the new interfaces. Can you make a user interface that reminds people enough of something they knew? Build that and give it to me. Give me the future experiments that could unveil new aspects of quantum reality, the quantum reality that provides your path forward for the experimental verification of the simulated outcomes. It can be pretty cartoonish; it's only the essentials that you need to capture. But it's really the case, when you're buzzing around and you're cryonically frozen, let's say I have a hamster: perhaps it's very interesting, perhaps very disoriented. The things that are incredibly important to us today might not be the things that are incredibly important to us in the future, and that's another facet of the Arrival of the Future! I just don't get why all of these kids worry about how many likes they get on Instagram. Every generation is always saying about the next one, I don't get why these people are communicating in emojis, so you'd better pick up that phone! It's the kids you've really got to watch out for; they have all those cats and dogs, and it's going to take a lot to slow the Earth down. How the energy capacity needed to model sophisticated quantum scenarios compares with the ultimate deep thermal energy is an interesting question; I don't know the answer. But I know where that calculation lives: typing into Wolfram|Alpha, I can get to it in a couple of minutes. I want to do a nuanced survey of the dynamics of quantum states and their interactions.
DateListPlot[spaceWeatherData, Joined -> True,
PlotLegends -> {"Short Solar X-Ray Flux", "Long Solar X-Ray Flux"},
FrameLabel -> {"Date", "Flux"}, PlotTheme -> "Scientific"]
![Rewrites 11][11]
These trends go cyclically. Computing in the cloud not only pushes the boundaries of our current understanding of quantum mechanics but also suggests a framework for designing and interpreting future quantum entanglement experiments. I will just press a button: no clutter, everything's neatly folded, stored in the cloud by a drone somewhere. It might actually be in the cloud; there might be some giant platform, a warehouse in the sky; whenever you need something it's just "let me get that thing", the library of stuff, so to speak. That's the kind of thing one could imagine if energy were cheap enough. But when it comes to computing right now, there's a certain energy cost in computing that doesn't need to be there.
ResourceFunction["SolarImage"][
DateObject[{2024, 4, 8, 13, 0, 0}, TimeZone -> 0],
"ImageSize" -> 1200, "Colorize" -> True]
![Rewrites 12][12]
We know in principle how to do computing in a certain way, and the maintenance and possibly the expansion of the entangled network is indicated by the simulation's generation of diagrams with increasing loops, representing the sustained entanglement. My life is officially ruined. I have been skipping past the traditional notion of a wave function and the specific phases of entanglement, while providing a visual and quantitative measure of the evolution of experiments that can measure the physical correlates of quantum diagrams. The crunchy, creamy, you know, different kinds of food textures in your mouth are literally correlated with the shapes of the proteins in foods.
convertedUnits = UnitConvert[Quantity[100, "Meters"], "Feet"]
currencyConversion =
CurrencyConvert[Quantity[100, "USDollars"], "Euros",
DateObject[{2023, 10, 1}]]
matrix = {{x, 1}, {1, x^2 + x + 1}};
finiteFieldMatrix = Map[ToFiniteField[#, 3] &, matrix, {2}]
inverseFFMatrix = Inverse[finiteFieldMatrix]
bspline = BSplineCurve[{{0, 0}, {1, 1}, {2, 0}, {3, 1}}];
RegionQ[bspline]
Comap[{Sin, Cos, Tan}, Pi/4]
ComapApply[{Plus, Times}, {{1, 2}, {3, 4}}]
FromRomanNumeral["MCMXCIV"]
RomanNumeral[1994]
model = Classify[{{1, 2} -> "A", {2, 3} -> "B", {3, 4} -> "C"},
Method -> "NeuralNetwork"];
model[{2, 2}]
g = LayeredGraph[{1 <-> 2, 2 <-> 3, 3 <-> 1, 4 <-> 2, 5 <-> 3}];
HighlightGraph[g, PathGraph[{1, 2, 3}]]
CountDistinct[{1, 1, 2, 3, 2, 3, 3, 4}, (Mod[#1, 2] == Mod[#2, 2] &)]
UnitConvert[Quantity[10, "Miles"], "Kilometers"]
NSolve[x^5 - x + 1 == 0, x, Method -> "Monodromy"]
HermitianMatrixQ[HermitianMatrix[{{2, I}, {-I, 2}}]]
expr = {1, 2, 3, 4};
funcs = {#^2 &, Sqrt, Sin};
Comap[funcs, expr]
DigitSum[12345]
roman = RomanNumeral[2023]
integer = FromRomanNumeral[roman]
funcs = {Sin, Cos, Tan};
expr = Pi/4;
results = Comap[funcs, expr]
poly = x^3 + 2*x + 1;
ffPoly = ToFiniteField[poly, 5]
image = ExampleData[{"TestImage", "Lena"}]
image = Import["ExampleData/lena.tif"]
text = "The quick brown fox jumps over the lazy dog repeatedly.";
summary = TextSummarize[text]
![enter image description here][13]
So that means if you're eating fibers in muscle cells, you have those active filaments in there, versus having some quite different kind of protein from something else. Taste is all about the chemical interaction between the shapes of the molecules and how they bind to the taste receptors, and the physicality of how the pieces are put together, how they smoosh on your tongue or whatever else. Trying to understand this from the experimental "verification" of quantum entanglement is something that I think is somewhat in its infancy. For sound, we can create pretty much any sound we want, though it's still not easy if you say, "make me a new kind of musical instrument". We're starting to be able to do that, and I think "make me something that tastes like this" will follow. The implications of this research are far-reaching, potentially impacting information theory and our fundamental understanding of the universe.
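On the "make me a new kind of musical instrument" point, here is one hedged sketch of synthesizing a novel tone in the Wolfram Language by summing slightly inharmonic, decaying partials; the parameter choices (partial count, decay rate, stretch exponent, sample rate) are illustrative assumptions, not a recipe from the post.

```
(* Sketch: an "invented instrument" tone from inharmonic decaying partials.
   k^1.1 stretches the partials away from the harmonic series. *)
tone[f_, t_] := Sum[Exp[-3 k t] Sin[2 Pi f k^1.1 t]/k, {k, 1, 8}];
samples = Table[tone[220., t], {t, 0., 1., 1./8000}];
Audio[samples, SampleRate -> 8000]
```

Changing the stretch exponent away from 1 is what makes the result sound unlike any familiar harmonic instrument.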
data = RandomVariate[NormalDistribution[], {100, 5}];
PairwiseDensityHistogram[data]
data = RandomReal[{0, 1}, {100, 4}];
PairwiseDensityHistogram[data]
data = RandomReal[{0, 1}, {100, 4}];
PairwiseListPlot[data]
![Rewrites 14][14]
But it's not just the core language enhancements or the advances in mathematical computation, nor the finite fields and equation solving: the release of Mathematica 14.0 also brings a variety of structured matrix types and operations. Version 14.0 has our number on decomposition techniques and adds enhanced support for interval matrices, boosting performance in linear algebra computations.
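A minimal sketch of the structured-matrix and interval-matrix support mentioned above; `HermitianMatrix` also appears in the code earlier in this post, and `CenteredInterval` is the rigorous interval type, but treat the exact operation coverage shown here as an assumption rather than a guarantee.

```
(* Structured matrix: operations can exploit the Hermitian structure. *)
h = HermitianMatrix[{{2, I}, {-I, 2}}];
Eigenvalues[h]   (* real eigenvalues, as Hermitian structure guarantees *)

(* Interval matrix: linear algebra with rigorous enclosures. *)
m = {{CenteredInterval[2, 0.1], 0}, {0, CenteredInterval[3, 0.1]}};
Det[m]           (* an interval enclosing the exact determinant 6 *)
```

The point of the structured forms is that downstream algorithms can pick specialized, faster code paths instead of treating every matrix as a dense general array.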
![Rewrites 15][15]
![Rewrites 16][16]
Therefore, what Mathematica 14 does is provide us with an ambient way of talking about the world, a backdrop for the philosophy that we do, and we contextualize the significance of events like eclipses in the context of universal computation. When talking about things like necessary truths, truths that don't rely on some coincidental fact about the universe, we see these things in Mathematica 14.0 and its expansion of computational capabilities across various domains, demonstrating the Wolfram Language's ongoing commitment to enhancing usability and integration with modern computational environments. But how can that be, when I see the longevity and influence of his work in big, beautiful, basic new functions like `Comap` and `ComapApply` for applying lists of functions to expressions? The errors introduced through copies over time are basic systematic errors, almost like the systematic errors in several core functions whose flexibility and performance Mathematica 14 improves: argument handling and traceability in execution have been refined, making vector calculus, complex analysis, and integral transforms more palatable to users like us, and sparking new functions for numerical integration over complex fields and for solutions of differential equations. If you asked me what I wrote in the past I wouldn't know, but without proper attribution it's almost impossible to support advanced mathematical modeling. And so we present the visualizations of Mathematica 14, which expands its tools for working with finite fields and solving equations over these fields. Updated visualizations comparing efficient computation with the modern calculations of Mathematica 14 show new features like limiting the number of computed roots.
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=mathematica14-1.png&userId=2553367
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=mathematica14-2.png&userId=2553367
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=rewrites-2.png&userId=2553367
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=rewrites-3.png&userId=2553367
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=rewrites-5.png&userId=2553367
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=rewrites-7.png&userId=2553367
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=rewrites-8.png&userId=2553367
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=rewrites-9.png&userId=2553367
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=rewrites-10.png&userId=2553367
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=solar_evolution_slideshow.gif&userId=2553367
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=rewrites-11.png&userId=2553367
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=rewrites-12.png&userId=2553367
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=rewrites-13.png&userId=20103
[14]: https://community.wolfram.com//c/portal/getImageAttachment?filename=rewrites-14.png&userId=2553367
[15]: https://community.wolfram.com//c/portal/getImageAttachment?filename=rewrites-15.png&userId=2553367
[16]: https://community.wolfram.com//c/portal/getImageAttachment?filename=rewrites-16.png&userId=2553367Dean Gladish2024-05-12T04:45:06ZIs there any current work on the quantum circuit operator function to handle hypergraphs directly?
https://community.wolfram.com/groups/-/m/t/3174022
In the Quantum Framework it is possible to display quantum circuits based on a list of edges from pairwise graphs. If one wished to do this for a hypergraph (which can have edges containing more than two vertices) one currently must convert the hypergraph into a graph as I understand things. Is there a way to have the Quantum Circuit Operator handle hypergraphs directly without having to perform this conversion? What might be needed in order to allow for this functionality? The file attached shows the current method.
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/5f2eb855-f29f-4ab6-863a-8c393be04568Jeffrey Tate2024-05-10T17:18:19Z