Community RSS Feed
http://community.wolfram.com
RSS feed for Wolfram Community showing discussions in the Mathematics tag, sorted by activity.

Solve equation of motion with Dirac-fermions?
http://community.wolfram.com/groups/-/m/t/1400212
Dear Wolfram team:
I am a beginner with Mathematica.
My problem is that I want to solve a system of n first-order equations of motion. These equations of motion contain creation and annihilation operators of Dirac fermions. **I don't know, and cannot find, how to describe the creation and annihilation operators of Dirac fermions in Mathematica.** The equations of motion have the form:
$$\dot{c}_i^\dagger [t]=f*c_i^\dagger[t]+g[t]*c_{i+1}[t]-g[t]*c_{i+1}^\dagger[t]+h[t]*c_{i-1}[t]-h[t]*c_{i+1}^\dagger[t]\\
\dot{c}_i [t]=f*c_i[t]+g[t]*c_{i+1}^\dagger[t]-g[t]*c_{i+1}[t]-h[t]*c_{i-1}^\dagger[t]+h[t]*c_{i+1}[t],$$
where $c_i^\dagger,c_i $ are creation and annihilation operators and f,g,h are functions.
Then I want to use DSolve or NDSolve to solve the equations of motion.
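A hedged sketch of one possible numerical workaround (not an answer from the Wolfram team): because the posted equations of motion are linear in the operators, the Heisenberg evolution reduces to a linear ODE system for the mode functions, which NDSolve handles directly. Everything below — the chain length nn, the choices of f, g, h, the periodic indexing, and the initial conditions — is an illustrative assumption, not taken from the question:

```mathematica
(* illustrative parameters, not from the original post *)
nn = 4;
f = 1.; g[t_] := Sin[t]; h[t_] := Cos[t];
idx[i_] := Mod[i - 1, nn] + 1;  (* periodic chain, purely illustrative *)
(* the linear system, with the same terms and signs as the posted equations *)
eqs = Flatten@Table[{
     cd[i]'[t] == f cd[i][t] + g[t] c[idx[i + 1]][t] - g[t] cd[idx[i + 1]][t] +
        h[t] c[idx[i - 1]][t] - h[t] cd[idx[i + 1]][t],
     c[i]'[t] == f c[i][t] + g[t] cd[idx[i + 1]][t] - g[t] c[idx[i + 1]][t] -
        h[t] cd[idx[i - 1]][t] + h[t] c[idx[i + 1]][t]},
    {i, 1, nn}];
(* start with one excitation on site 1, again an arbitrary choice *)
ics = Flatten@Table[{c[i][0] == 0, cd[i][0] == If[i == 1, 1, 0]}, {i, 1, nn}];
sol = NDSolve[Join[eqs, ics],
   Flatten@Table[{c[i], cd[i]}, {i, 1, nn}], {t, 0, 10}];
```

For genuinely operator-valued (non-quadratic) problems this reduction does not apply, and a package for non-commutative algebra would be needed instead.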
Thanks for your help.

Constantin Harder, 2018-08-09T10:05:59Z

[WSS18] Introducing Hadamard Binary Neural Networks
http://community.wolfram.com/groups/-/m/t/1374288
## Introducing Hadamard Binary Neural Networks
Deep neural networks are an important tool in modern applications, and accelerating their training has become a major challenge. As the complexity of our training tasks increases, so does the computation. For sustainable machine learning at scale, we need distributed systems that can leverage the available hardware effectively. This research aims to exceed the current state-of-the-art performance of neural networks by introducing a new architecture optimized for distributability. The scope of this work is not limited to optimizing neural network training on large servers; it also aims to bring training to heterogeneous environments, paving the way for a distributed peer-to-peer mesh computing platform that can harness the wasted resources of idle workplace computers for AI.
#### Network Architecture and Layer Evaluator
Here I will describe the network and the Layer Evaluator, to give an in-depth understanding of the network architecture.
Note:
- **hbActForward** : Forward binarization of Activations.
- **hbWForward** : Forward binarization of Weights.
- **binAggression** : Aggressiveness of binarization (Vector length to binarize)
Set up the Layer Evaluator.
layerEval[x_, layer_Association] := layerEval[x, Lookup[layer, "LayerType"], Lookup[layer, "Parameters"]];
layerEval[x_, "Sigmoid", param_] := 1/(1 + Exp[-x]);
layerEval[x_, "Ramp", param_] := Abs[x]*UnitStep[x];
layerEval[ x_, "LinearLayer", param_] := Dot[x, param["Weights"]];
layerEval[ x_, "BinLayer", param_] := Dot[hbActForward[x, binAggression], hbWForward[param["Weights"], binAggression]];
layerEval[x_, "BinarizeLayer", param_] := hbActForward[x, binAggression];
netEvaluate[net_, x_, "Training"] := FoldList[layerEval, x, net];
netEvaluate[net_, x_, "Test"] := Fold[layerEval, x, net];
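The helpers hbActForward and hbWForward are not defined in this post. So that the evaluator above can be run, here is a minimal stand-in — a plain sign binarization scaled by the mean magnitude over chunks of length binAggression. This is an assumption for illustration only; the author's actual Hadamard-based binarization will differ:

```mathematica
(* placeholder binarizers, NOT the author's Hadamard implementation *)
binAggression = 4;
binarizeVector[v_] := Mean[Abs[v]]*Sign[v];  (* sign, rescaled by mean magnitude *)
binarizeFlat[x_, agg_] :=
  Flatten[binarizeVector /@ Partition[Flatten[x], UpTo[agg]]];
hbActForward[x_, agg_] := ArrayReshape[binarizeFlat[x, agg], Dimensions[x]];
hbWForward[w_, agg_] := ArrayReshape[binarizeFlat[w, agg], Dimensions[w]];
```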
Define the network
net = {<|"LayerType" -> "LinearLayer", "Parameters" -> <|"Weights" -> w0|>|>,
<|"LayerType" -> "Ramp"|>,
<|"LayerType" -> "BinarizeLayer"|>,
<|"LayerType" -> "BinLayer", "Parameters" -> <|"Weights" -> w1|>|>,
<|"LayerType" -> "Ramp"|>,
<|"LayerType" -> "BinLayer", "Parameters" -> <|"Weights" -> w2|>|>,
<|"LayerType" -> "Sigmoid"|> };
MatrixForm@netEvaluate[net, input[[1 ;; 3]], "Test" ] (* Giving network inputs *)
![enter image description here][1]
#### Advantages of Hadamard Binarization
- Faster convergence compared to vanilla binarization techniques.
- Consistently about 10 times faster than the CMMA algorithm.
- The angle of randomly initialized vectors is preserved in high-dimensional spaces (approximately 37 degrees as the vector length approaches infinity).
- Reduced communication times for distributed deep learning.
- Optimization of the im2col algorithm for faster inference.
- Reduction of model sizes.
### Accuracy analysis
![enter image description here][2]
As seen above, the HBNN model gives 87% accuracy, whereas the BNN (Binary Neural Network) model gives only 82%. These networks have only been trained for 5 epochs.
### Performance Analysis
X axis: matrix size; Y axis: time (seconds)
**CMMA vs xHBNN**
![enter image description here][3]
**MKL vs xHBNN**
![enter image description here][4]
### Visualize weight histograms
![enter image description here][5]
It is evident that the Hadamard BNN preserves the distribution of the weights much better. Note that the BNN graph has a logarithmic vertical axis, for representation purposes.
### Demonstration of the angle preservation ability of the HBNN architecture
![enter image description here][6]
Binarization approximately preserves the direction of high-dimensional vectors. The figure above demonstrates that the angle between a random vector (drawn from a standard normal distribution) and its binarized version converges to ~37 degrees as the dimension of the vector goes to infinity. Relative to the ~90 degrees expected between two independent random vectors in high dimensions, this angle is small.
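The ~37-degree figure can be checked numerically. For x drawn from a standard normal distribution in d dimensions, the cosine of the angle between x and Sign[x] concentrates around E[|x_i|]/Sqrt[E[x_i^2]] = Sqrt[2/Pi], so the limiting angle is ArcCos[Sqrt[2/Pi]], about 37.07 degrees. A quick Monte Carlo sketch (the dimension choices and sample count are arbitrary):

```mathematica
(* angle, in degrees, between a random normal vector and its sign-binarization *)
angle[d_] := Module[{x = RandomVariate[NormalDistribution[], d]},
   VectorAngle[x, Sign[x]]/Degree];
(* mean angle at increasing dimension; should approach ~37.07 *)
Table[{d, Mean@Table[angle[d], {50}]}, {d, {10, 100, 1000, 10000}}]
(* the limiting value *)
N[ArcCos[Sqrt[2/Pi]]/Degree]
```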
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=tempz.png&userId=1302993
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=accuracy.png&userId=1302993
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6613xCma.png&userId=1302993
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=xMKL.png&userId=1302993
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=histogram.png&userId=1302993
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=anglepreserve.png&userId=1302993

Yash Akhauri, 2018-07-10T22:12:26Z

The Delian Brick and other 3D self-similar dissections
http://community.wolfram.com/groups/-/m/t/1368091
Divide a cuboid into two cuboids similar to the original shape. The answer involves the cube root of 2, otherwise known as the [Delian constant](http://mathworld.wolfram.com/DelianConstant.html). I've called this object the Delian Brick. It's a 3D 2-reptile. A stack of three bricks can be made using the cube root of 3, and so on.
With[{r = 2^(1/3)},
 Graphics3D[{Opacity[.5],
   Cuboid[{0 r^0, 0 r^1, 0 r^2}, {1 r^0, 1 r^1, 1 r^2}],
   Cuboid[{1 r^0, 0 r^1, 0 r^2}, {2 r^0, 1 r^1, 1 r^2}]},
  SphericalRegion -> True, Boxed -> False]]
![Delian Brick][1]
I discovered the Delian Brick independently, as did at least ten other recreational mathematicians I've corresponded with. It may have been known to the ancient Greeks. The first publication I've found is by Dale Walton and the game company Thinkfun, who expanded it into a 3D 4-irreptile they called the Fifth Chair puzzle.
With[{r=2^(1/3)},
Graphics3D[{Opacity[.5],
{Red,Cuboid[{0 r^0,0 r^1,0r^2},{2 r^0,r^1,r^2}], Cuboid[{1 r^0,1 r^1,0r^2},{2 r^0,2 r^1,1r^2}]},
{Blue,Cuboid[{0 r^0,1 r^1,0r^2},{1 r^0,3r^1,1r^2}], Cuboid[{1 r^0,2 r^1,0r^2},{2 r^0,3 r^1,1r^2}]},
{Green,Cuboid[{0 r^0,3 r^1,0r^2},{2 r^0,4r^1,2r^2}], Cuboid[{0 r^0,2 r^1,1r^2},{2 r^0,3 r^1,2r^2}]},
{Yellow, Cuboid[{2 r^0,0 r^1,0r^2},{4 r^0,2r^1,2r^2}], Cuboid[{0 r^0,0 r^1,1r^2},{2 r^0,2 r^1,2r^2}]}}, SphericalRegion-> True, Boxed-> False]]
![fifth chair][2]
There are also [five space-filling tetrahedra](http://demonstrations.wolfram.com/SpaceFillingTetrahedra/), and at least two of them are 8-reptiles.
Row[{Graphics3D[{Opacity[.5],Polygon/@Union[Sort/@
Flatten[Subsets[#,{3}]&/@(IntegerDigits/@({{020,111,121,022},{022,111,112,222},{022,111,121,222},{022,113,112,222},{022,113,123,024},{022,113,123,222},{111,202,212,113},{111,222,212,113}}+111)-1),1]]}, Boxed-> False, SphericalRegion->True],
Graphics3D[{Opacity[.5],Polygon/@Union[Sort/@
Flatten[Subsets[#,{3}]&/@(IntegerDigits/@({{002,022,111,113},{022,042,131,133},{022,222,111,113},{022,222,111,131},{022,222,113,133},{022,222,131,133},{111,131,220,222},{113,133,222,224}}+111)),1]]}, Boxed-> False, SphericalRegion->True]}]
![tetrahedron reptiles][3]
More of these self-similar 3D dissections are listed at [3D Rep-Tiles and Irreptiles](http://demonstrations.wolfram.com/3DRepTilesAndIrreptiles/). The ones I list here need to be added there. Most of the 3D rep-tiles are based on either a 2D reptile or a polycube. The four items in this discussion fit in neither of those categories. Are there others?
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=DelianBrick.png&userId=21530
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=FifthChair.png&userId=21530
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=tetrahedronreptiles.png&userId=21530

Ed Pegg, 2018-07-03T16:02:03Z

Find the "nth" of a large PrimeNumber?
http://community.wolfram.com/groups/-/m/t/1395404
Hi guys! I hope all of you are fine :) Can someone tell me how I can find, with Wolfram|Alpha or Mathematica, the index n of a large prime? I used PrimePi, but PrimePi does not work with large primes (primes like 1921773217311523519374373 are too large). Is there a criterion, method, or script with which I can find the index of a large prime?
I have also used the nthprime function, but I think this is not what I need. If there is a way to use the nth-prime function to find the index of a large prime, can someone show me how it works? To explain better what I mean, here is an example:
- 2 is the **1st** prime (this index is what I need)
- 3 is the **2nd** prime
- 5 is the **3rd** prime
- 7 is the **4th** prime
and so on. Another example: 19 is the 8th prime, 23 is the 9th prime, 29 is the 10th prime... Now I need a function that tells me which prime 1921773217311523519374373 is, that is, its index. I hope somebody here has an idea how I can find, with Wolfram|Alpha or Mathematica, the index of the prime 1921773217311523519374373.
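For what it's worth, in Mathematica the index of a prime p is exactly PrimePi[p], but that is only feasible for moderate p (roughly up to 10^13 or 10^14, depending on version and hardware — this limit is an estimate, not a documented bound). For a 25-digit prime an exact count is out of reach, though the logarithmic integral gives a close estimate of the index:

```mathematica
(* exact index of a moderate prime *)
PrimePi[104729]        (* 10000, since Prime[10000] == 104729 *)
(* for the 25-digit prime in the question, estimate the index instead *)
p = 1921773217311523519374373;
N[LogIntegral[p], 10]  (* approximation to PrimePi[p]; exact counting at this size is impractical *)
```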
I hope someone can help me here. Kind regards and best wishes.

Nural I., 2018-07-31T18:14:09Z

Outputs from InverseFourierSequenceTransform
http://community.wolfram.com/groups/-/m/t/1402695
Yes:
In[331]:= InverseFourierSequenceTransform[1, x, n,
FourierParameters -> {a, 1}]
Out[331]= (2 \[Pi])^((1 - a)/2) DiscreteDelta[n]
No:
In[329]:= InverseFourierSequenceTransform[1, x, n,
FourierParameters -> {a, 2 Pi}]
Out[329]= 0

Joe Donaldson, 2018-08-12T19:39:52Z

Use FindRoot for the following function?
http://community.wolfram.com/groups/-/m/t/1402742
In the example below, FindRoot doesn't work with the provided function calcTresAtTime[mCpRes_?NumericQ, mFracClr_?NumericQ, timeTarget_?NumericQ]. However, no problems are observed when calling the function by itself or from Plot. The [documentation][1] mentions that FindRoot first localizes all of the variables, then evaluates f with the variables being symbolic; the examples in the documentation show how to turn this off by using _?NumericQ.
eq01ResHB =
MCpRes ures'[t] ==
mCpPump (uclr[t] - ures[t]) + UAambRes (uamb - ures[t]) +
UAbrg (ubrg - ures[t]);
eq02ClrHB =
MCpClr uclr'[t] ==
mCpPump (ures[t] - uclr[t]) + UAambClr (uamb - uclr[t]) +
UAclr (ucw - uclr[t] );
ic = {ures[0] == ures0, uclr[0] == uclr0};
eqSet = Join[{eq01ResHB, eq02ClrHB}, ic];
vars = {ures, uclr};
KuambRes = 0.025 ;
Kuabrg = 0.236;
KuambClr = 0.0024;
Kuaclr = 0.1;
calcTresAtTime[mCpRes_?NumericQ, mFracClr_?NumericQ,
timeTarget_?NumericQ] := Module[{TresSolLocal, TclrSolLocal},
parmsRes = {MCpRes -> mCpRes , UAambRes -> KuambRes,
UAbrg -> Kuabrg};
parmsClr = {MCpClr -> mFracClr mCpRes, UAambClr -> KuambClr,
UAclr -> Kuaclr};
parmsBoundary = {mCpPump -> 1, ubrg -> 200, ucw -> 60, uamb -> 70};
parmsInitialCond = {ures0 -> 70, uclr0 -> 70};
eqSetValues =
eqSet /. parmsRes /. parmsClr /. parmsBoundary /. parmsInitialCond;
{TresSolLocal, TclrSolLocal} =
NDSolveValue[eqSetValues, vars, {t, 0, 2000}];
N@TresSolLocal[timeTarget]
]
calcTresAtTime[60., 0.4, 300.]
Plot[calcTresAtTime[x, 0.4, 300.], {x, 0, 80}]
FindRoot[ 130 == calcTresAtTime[x, 0.4, 300.], {x, 0, 80}]
Below is the result of the Plot command. So this suggests that the function itself doesn't have any severe problems.
![enter image description here][2]
Below is the result of the FindRoot command
![FindRoot output][3]
Have worked through the examples in the documentation, but can't find where I have taken a wrong turn.
Any help would be appreciated.
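One hedged guess at the cause (untested against the original notebook): in the bracketing form {x, 0, 80}, FindRoot uses a bracketed root-finding method that must evaluate the function at both endpoints, and at x = 0 the substitution MCpRes -> 0 makes the ODE system degenerate, so NDSolveValue fails there. Two possible workarounds:

```mathematica
(* start from a single interior point (Newton/secant), avoiding x = 0 *)
FindRoot[calcTresAtTime[x, 0.4, 300.] == 130, {x, 40}]
(* or keep the bracket but move the lower endpoint away from zero *)
FindRoot[calcTresAtTime[x, 0.4, 300.] == 130, {x, 1, 80}]
```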
[1]: http://reference.wolfram.com/language/ref/FindRoot.html
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PlotExample01.jpg&userId=894223
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=FindRoot_Output.PNG&userId=894223

Robert McHugh, 2018-08-12T06:41:10Z

Solve the Karush-Kuhn-Tucker equations with Reduce
http://community.wolfram.com/groups/-/m/t/1402471
Some years ago I published a short article in the Mathematica Journal describing how to solve the Karush-Kuhn-Tucker equations with Reduce to do symbolic optimization. I was pleased to see the approach subsequently used by several people. However, the code in that article has the problem that it gives all local minima. I've recently updated the code to give only global minima. The new code has the advantage over Minimize that it gives multiple global minima and also provides the values of the Lagrange multipliers, which give the sensitivity of the objective function to changes in the constraints. The code is shown below with copious comments. I've also given two examples in which the code returns a result but Minimize does not, even though this is an unusual circumstance.
Code
In[1]:= (* Generate the Karush-Kuhn-Tucker Equations *)
KTEqs[obj_ (* objective function *), cons_List (* constraints *),
   vars_List (* variables *)] :=
Module[{consconvrule = {GreaterEqual[x_, y_] -> LessEqual[y - x, 0],
Equal[x_, y_] -> Equal[x - y, 0],
LessEqual[x_, y_] -> LessEqual[x - y, 0],
LessEqual[lb_, x_, ub_] -> LessEqual[(x - lb) (x - ub), 0],
GreaterEqual[ub_, x_, lb_] -> LessEqual[(x - lb) (x - ub), 0]} ,
x, y, lb, ub , stdcons, eqcons, ineqcons, lambdas, mus, lagrangian, eqs1,
eqs2, eqs3, alleqns, allvars },
(* Change constraints to Equal and LessEqual form with zero on the right-hand side *)
stdcons = cons /. consconvrule;
(* Separate the equality constraints and the inequality constraints *)
eqcons = Cases[stdcons, Equal[_, 0]][[All, 1]];
ineqcons = Cases[stdcons, LessEqual[_, 0]][[All, 1]];
(* Define the Lagrange multipliers for the equality and inequality constraints *)
lambdas = Array[\[Lambda], Length[eqcons]];
mus = Array[\[Mu], Length[ineqcons]];
(* Define the Lagrangian *)
lagrangian = obj + lambdas.eqcons + mus.ineqcons;
(* The derivatives of the Lagrangian are equal to zero *)
eqs1 = Thread[ D[lagrangian, {vars}] == 0];
(* Lagrange multipliers for inequality constraints are \[GreaterEqual] 0 to get minima *)
eqs2 = Thread[mus >= 0];
(* Lagrange multipliers for inequality constraints are 0 unless the constraint value is 0 *)
eqs3 = Thread[mus*ineqcons == 0];
(* Collect the equations *)
alleqns = Join[eqs1, eqs2, eqs3, cons];
(* Collect the variables *)
allvars = Join[vars, lambdas, mus];
(* Return the equations and the variables *)
{alleqns, allvars}
]
In[2]:= (* Convert logical expressions to rules *)
torules[res_] := If[Head[res] === And, ToRules[res], List @@ (ToRules /@ res)]
In[3]:= (* Find the global minima *)
KKTReduce[obj_(* objective function *), cons_List (* constraints *),
vars_List (* variables *)] :=
Block[{kkteqs, kktvars, red, rls, objs, allres, minobj, sel, ret, minred,
minredrls},
(* Construct the equations and the variables *)
{kkteqs, kktvars} = KTEqs[obj, cons, vars];
(* Reduce the equations *)
red = LogicalExpand @
Reduce[kkteqs, kktvars, Reals, Backsubstitution -> True];
(* Convert the Reduce results to rules (if possible ) *)
rls = torules[red];
(* If the conversion to rules was complete *)
If[Length[Position[rls, _ToRules]] == 0,
(* Calculate the values of the objective function *)
objs = obj /. rls;
(* Combine the objective function values with the rules *)
allres = Thread[{objs, rls}];
(* Find the minimum objective value *)
minobj = Min[objs];
(* Select the results with the minimum objective value *)
sel = Select[allres, #[[1]] == minobj &];
(* Return the minimum objective value with the corresponding rules *)
ret = {minobj, sel[[All, 2]]},
(* Else if the results were not completely converted to rules *)
(* Use MinValue to find the smallest objective function value *)
minobj = MinValue[{obj, red}, kktvars];
(* Use Reduce to find the corresponding results *)
minred =
Reduce[obj == minobj && red, kktvars, Reals, Backsubstitution -> True];
(* Convert results to rules, if possible *)
minredrls = torules[minred];
ret = If[
Length[Position[minredrls, _ToRules]] == 0, {minobj, minredrls}, {minobj,
minred}];
];
(* Remove excess nesting from result *)
If[Length[ret[[2]]] == 1 && Depth[ret[[2]]] > 1, {ret[[1]], ret[[2, 1]]},
ret]
]
Examples
In[5]:= Minimize[{x^2 - y^2, Cos[x - y] >= 1/2, -5 <= x <= 5, -5 <= y <= 5}, {x, y}]
Out[5]= Minimize[{x^2 - y^2, Cos[x - y] >= 1/2, -5 <= x <= 5, -5 <= y <= 5}, {x, y}]
In[6]:= KKTReduce[x^2 - y^2, {Cos[x - y] >= 1/2, -5 <= x <= 5, -5 <= y <= 5}, {x, y}]
Out[6]= {-25 + 25/9 (-3 + \[Pi])^2, {{x -> -(5/3) (-3 + \[Pi]),
y -> 5, \[Mu][1] -> (20 (-3 + \[Pi]))/(3 Sqrt[3]), \[Mu][2] ->
0, \[Mu][3] ->
1/9 (9 + 6 Sqrt[3] Sin[5 + 5/3 (-3 + \[Pi])] -
2 Sqrt[3] \[Pi] Sin[5 + 5/3 (-3 + \[Pi])])}, {x -> 5/3 (-3 + \[Pi]),
y -> -5, \[Mu][1] -> (20 (-3 + \[Pi]))/(3 Sqrt[3]), \[Mu][2] ->
0, \[Mu][3] ->
1/9 (9 + 6 Sqrt[3] Sin[5 + 5/3 (-3 + \[Pi])] -
2 Sqrt[3] \[Pi] Sin[5 + 5/3 (-3 + \[Pi])])}}}
In[7]:= TimeConstrained[
Minimize[{(Subscript[x, 1] - Subscript[x, 2])^2 + (Subscript[x, 2] -
Subscript[x, 3])^4, (1 + Subscript[x, 2]^2) Subscript[x, 1] + Subscript[
x, 3]^4 - 3 == 0}, {Subscript[x, 1], Subscript[x, 2], Subscript[x,
3]}], 60]
Out[7]= $Aborted
In[8]:= AbsoluteTiming @
KKTReduce[(Subscript[x, 1] - Subscript[x, 2])^2 + (Subscript[x, 2] -
Subscript[x, 3])^4, {(1 + Subscript[x, 2]^2) Subscript[x, 1] + Subscript[
x, 3]^4 - 3 == 0}, {Subscript[x, 1], Subscript[x, 2], Subscript[x, 3]}]
Out[8]= {1.67203, {0, {{Subscript[x, 1] -> 1, Subscript[x, 2] -> 1,
Subscript[x, 3] -> 1, \[Lambda][1] -> 0}, {Subscript[x, 1] ->
AlgebraicNumber[Root[3 + 2 #1 + 2 #1^2 + #1^3 &, 1], {0, 1, 0}],
Subscript[x, 2] ->
AlgebraicNumber[Root[3 + 2 #1 + 2 #1^2 + #1^3 &, 1], {0, 1, 0}],
Subscript[x, 3] ->
AlgebraicNumber[
Root[3 + 2 #1 + 2 #1^2 + #1^3 &, 1], {0, 1, 0}], \[Lambda][1] -> 0}}}}

Frank Kampas, 2018-08-11T17:18:58Z

Why NicholsGridLines in Nichols Plot are different than Matlab
http://community.wolfram.com/groups/-/m/t/1402189
The sensitivity lines (NicholsGridLines) in NicholsPlot in Mathematica are drawn differently from what Matlab draws and from what I learned in control-systems lectures.
Consider the function P[s] = -15 (1 + 0.2 s/3 + (s/3)^2)/(s (1 + s/2) (1 - 1.6 s/5 + (s/5)^2) (1 + 0.2 s/7 + (s/7)^2)), which according to Mathematica does not enter the 3 dB sensitivity loop (just below (Pi, 0)), while according to Matlab it does. For some reason Mathematica draws the Nichols grid lines reflected about the x axis compared with Matlab and with what I learned in control systems. In addition, Mathematica draws the plot around phase Pi and Matlab around -Pi. Why these differences?
![the sensitivity lines(NicholsGridLines) in NicholsPlot][1]
Is there a option for flipping these grid-lines somehow? Or can someone explain why it is drawn as shown?
The code I used to draw the plot:
NicholsPlot[P[s],
GridLines -> {Range[-2 \[Pi], 2 \[Pi], 0.5 \[Pi]], Automatic},
StabilityMargins -> True, PlotRange -> {{0, 2 \[Pi]}, {-50, 60}},
NicholsGridLines -> {{-10^(3/20), -10^(6/20)}},
ScalingFunctions -> {"Radian", Automatic},
Ticks -> {Range[-2 \[Pi], 2 \[Pi], \[Pi]/2], Automatic}]
Dynamic[MousePosition["Graphics"]]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=%D7%AA%D7%9E%D7%95%D7%A0%D7%94%D7%9C%D7%9C%D7%90%D7%A9%D7%9D.png&userId=1402172

Eliav Louski, 2018-08-11T01:22:37Z

[GIF] This is Only a Test (Decagons from stereographic projections)
http://community.wolfram.com/groups/-/m/t/1380624
![Decagons formed from stereographically projected points][1]
**This is Only a Test**
This one is fairly straightforward. Form 60 concentric circles on the sphere centered at the point $(0,1,0)$. On each circle, take 10 equally-spaced points, stereographically project to the plane, and form a decagon from the resulting points. Now rotate the sphere and all the points on it around the axis $(0,1,0)$. The result (at least after adding some color) is this animation. This is a sort of discretized companion to my old still piece [_Dipole_][2].
Here's the code:
Stereo[p_] := p[[;; -2]]/(1 - p[[-1]]);
With[{r = 2, n = 10, m = 60,
cols = RGBColor /@ {"#2EC4B6", "#011627", "#E71D36"}},
Manipulate[
Graphics[
{EdgeForm[Thickness[.0045]],
Join[{Reverse[#[[1]]], #[[2]]}]
&[Partition[
Table[
{Blend[cols, θ/π],
EdgeForm[Lighter[Blend[cols, θ/π], .15]],
Polygon[
Table[Stereo[(Cos[θ] {0, 1, 0} +
Sin[θ] {Cos[t], 0, Sin[t]}).RotationMatrix[ϕ, {0, 1, 0}]],
{t, π/2, 5 π/2, 2 π/n}]]},
{θ, π/(2 m), π - π/(2 m), π/m}],
m/2]]},
PlotRange -> r, ImageSize -> 540, Background -> Blend[cols, 1/2]],
{ϕ, 0, 2 π/n}]
]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=stereo29.gif&userId=610054
[2]: https://shonkwiler.org/still-images/dipole

Clayton Shonkwiler, 2018-07-12T03:41:03Z

Solve 2 coupled 2nd ODEs and plot them with ParametricPlot?
http://community.wolfram.com/groups/-/m/t/1393597
I am interested in solving two coupled 2nd-order differential equations and plotting the solution using ParametricPlot. Can anyone help me resolve this issue? The solution is the trajectory of a particle under the influence of gravity, so I am also interested in animating the trajectory of the particle. I have attached the Mathematica script to this post.

Soumen Basak, 2018-07-28T10:13:02Z

Find second derivative using D?
http://community.wolfram.com/groups/-/m/t/1399266
The code below gives an incorrect 2nd derivative. Figure 1 shows the original function xxSumS1[s, r] = funDervtveLAB[s, r, 0] at r = 200, while Figure 2 shows the 1st derivative of the original function Exp[-noisepow*s]*laplaceLABs[s, r] at r = 200. The 1st derivative is correct, since the derivative of a decreasing function is negative (Figure 2). However, the 2nd derivative xxSumS3[s_, r_] = funDervtveLAB[s, r, 2] at r = 200 seems to be incorrect: I expect it to be positive for all values of s, because it is the derivative of the 1st derivative and the 1st derivative is an increasing function of s. Figure 3 shows the 2nd derivative of the original function.
![figure 1][1] ![figure 2][2] ![figure 3][3]
Clear["Global`*"]
a = 4.88; b = 0.43; etaLAB = 10.^(-0.1/10); etaNAB = 10.^(-21/10); etaTB = etaNAB;
PtABdB = 32; PtAB = 10^(PtABdB/10)*1*^-3; PtTBdB = 40; PtTB = 10^(PtTBdB/10)*1*^-3;
NF = 8; BW = 1*^7; noisepowdBm = -147 - 30 + 10*Log[10, BW] + NF;
noisepow = 0; RmaxLAB = 20000;
TBdensity = 1*^-6; ABdensity = 1*^-6;
alfaLAB = 2.09; alfaNAB = 2.09; alfaTB = 2.09;
mparameter = 3;
zetaLAB = PtAB*etaLAB; zetaNAB = PtAB*etaNAB; zetaTB = PtTB*etaTB;
height = 100; sinrdBrange = -10; sinr = 10.^(sinrdBrange/10);
probLoSz[z_] := 1/(1 + a*Exp[-b*(180/Pi*N[ArcTan[height/z]] - a)]);
probLoSr[r_] := 1/(1 + a*Exp[-b*(180/Pi*N[ArcTan[height/Sqrt[r^2 - height^2]]] -
a)]);
funLoS[z_] := z*probLoSz[z];
funNLoS[z_] := z*(1 - probLoSz[z]);
funLABNABs[z_, s_] := (1 - 1/(1 + s*zetaNAB*(z^2 + height^2)^(-alfaNAB/2)))*funNLoS[z];
funLABLABs[z_,
s_] := (1 - (mparameter/(mparameter + s*zetaLAB*(z^2 + height^2)^(-alfaLAB/2)))^mparameter)*funLoS[z];
funLABTBs[z_, s_] := z*(1 - 1/(1 + s*zetaTB*z^(-alfaTB)));
distnceLABNABs = (zetaLAB/zetaNAB)^(1/alfaLAB)*height^(alfaNAB/alfaLAB);
NearstInterfcLABNABs[r_] := Piecewise[{{height, r <= distnceLABNABs}, {(zetaNAB/zetaLAB)^(1/alfaNAB)* r^(alfaLAB/alfaNAB), r > distnceLABNABs}}];
NearstInterfcLABTBs[r_] := (zetaTB/zetaLAB)^(1/alfaTB)*r^(alfaLAB/alfaTB);
NearstInterfcLABLABs[r_] := r;
lowerlimitLABNABs[r_] := Sqrt[NearstInterfcLABNABs[r]^2 - height^2];
lowerlimitLABLABs[r_] := Sqrt[NearstInterfcLABLABs[r]^2 - height^2];
lowerlimitLABTBs[r_] := NearstInterfcLABTBs[r];
InteglaplaceLABNABs[s_?NumericQ, r_?NumericQ] := NIntegrate[funLABNABs[z, s], {z, lowerlimitLABNABs[r], RmaxLAB}];
InteglaplaceLABLABs[s_?NumericQ, r_?NumericQ] := NIntegrate[funLABLABs[z, s], {z, lowerlimitLABLABs[r], RmaxLAB}];
InteglaplaceLABTBs[s_?NumericQ, r_?NumericQ] := NIntegrate[funLABTBs[z, s], {z, lowerlimitLABTBs[r], RmaxLAB}];
laplaceLABNABs[s_, r_] := Exp[-2*Pi*ABdensity*InteglaplaceLABNABs[s, r]];
laplaceLABLABs[s_, r_] := Exp[-2*Pi*ABdensity*InteglaplaceLABLABs[s, r]];
laplaceLABTBs[s_, r_] := Exp[-2*Pi*TBdensity*InteglaplaceLABTBs[s, r]];
laplaceLABs[s_, r_] :=
laplaceLABNABs[s, r]*laplaceLABLABs[s, r]*laplaceLABTBs[s, r];
funDervtveLAB[s_, r_, kk_] := D[Exp[-noisepow*s]*laplaceLABs[s, r], {s, kk}];
xxSumS1[s_, r_] = funDervtveLAB[s, r, 0]; (*original function*)
xxSumS2[s_, r_] = funDervtveLAB[s, r, 1]; (* 1st derivative*)
xxSumS3[s_, r_] = funDervtveLAB[s, r, 2]; (*2nd derivative*)
xxSumR1[r_] := xxSumS1[s, r] /. s -> (mparameter*sinr/zetaLAB*r^alfaLAB);
xxSumR2[r_] := xxSumS2[s, r] /. s -> (mparameter*sinr/zetaLAB*r^alfaLAB);
xxSumR3[r_] := xxSumS3[s, r] /. s -> (mparameter*sinr/zetaLAB*r^alfaLAB);
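A hedged cross-check for results like this: applying D to functions whose internals call NIntegrate can produce derivative expressions that do not evaluate as expected, so it is worth comparing against a centered finite-difference second derivative of the original function. The step size ds and the test point below are arbitrary tuning choices, not values from the post:

```mathematica
(* independent numerical 2nd derivative of the original function xxSumS1 *)
fdSecond[s_?NumericQ, r_?NumericQ, ds_: 10.^-4] :=
  (xxSumS1[s + ds, r] - 2 xxSumS1[s, r] + xxSumS1[s - ds, r])/ds^2;
(* compare with the symbolic 2nd derivative at some test point, e.g. r = 200 *)
{fdSecond[s0, 200], xxSumS3[s0, 200]} /. s0 -> 0.01
```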
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig-1.jpg&userId=1350020
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig-2.jpg&userId=1350020
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig-3.jpg&userId=1350020

mohamed alzenad, 2018-08-07T20:02:23Z

Solve a PDE with boundary conditions (chemical adsorption in fixed beds)?
http://community.wolfram.com/groups/-/m/t/1398247
Dear Wolfram team:
I have been trying for week to solve a system of 2 partial differential equations describing the adsorption of a chemical substance on a fixed bed (for example, a column of activated carbon). The 2 equations are the following, taken from McCabe (1993):
![Description of eq 1][1]
![Description of eq 2][2]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=EQ1.png&userId=1020580
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=EQ2.png&userId=1020580
Unfortunately I cannot get past the general solution (with arbitrary constants), because when I try to impose boundary conditions Mathematica fails. Maybe I am using the wrong command or syntax, or maybe there are too many or too few boundary conditions.
I have attached the program, in which I tried to simplify the problem by combining both equations into a third.
Thank you in advance for your help.
Best regards,
Alberto Silva

Alberto Silva Ariano, 2018-08-06T01:04:00Z

Perform calculations on y-axis values?
http://community.wolfram.com/groups/-/m/t/1396972
Sometimes when plotting functions, you want to apply some operation to the y-axis values. This comes up when you want to plot, say, decibels vs. frequency. It is not clear how to apply an operation like 10 Log10 to the y-axis values. Is there a straightforward way to do this? Incidentally, LogPlot just gives you the y-axis in log form.

Jesse Sheinwald, 2018-08-03T16:34:54Z

Collect coefficients in a polynomial defined by symbolic summation?
http://community.wolfram.com/groups/-/m/t/1398308
Hi all,
I'd like to distribute the squared sum and collect the coefficients in front of n and n^2 in the following expression:
Sum[a[i] (n (n - 1) + 2 n), {i, 1, K}] - Sum[a[i] n, {i, 1, K}]^2
Would anyone know how to do this?

Aurelien Bibaut, 2018-08-06T03:06:02Z

Find multinomial maximum with NMaximize?
http://community.wolfram.com/groups/-/m/t/1392154
Hello guys. This is the table with the results that should be obtained: https://www.dropbox.com/s/rohvusatedkfh3r/Screenshot_20180725-184452.png?dl=0
The mysterious thing is that NMaximize finds only part of the correct values. For 3000-26000 everything is OK, but for other values something strange happens. For example, if I take 3000 or more I get the right answer:
Clear["Global`*"]
n = 100;
Y = 1000000;
q = NMaximize[{n!/((94)!*4!)*p0^(94)*ap10^4*p1*p2,
p0 + p1 + p100 + p2 + ap10 == 1 && p0 >= 0 && p1 >= 0 && p2 >= 0 &&
ap10 >= 0 &&
p100 >= 0 && (1/100)*Y*(-10*ap10 + 30*p1 + 40*p2 + 100*p100) ==
3000}, {p0, p1, p2, p100, ap10}]
{0.0272692, {p0 -> 0.94, p1 -> 0.00999995, p2 -> 0.00999991,
p100 -> 5.71366*10^-8, ap10 -> 0.0400001}}
So I took all values from -6000 to 29000 and got the following. Most of the values are correct, but some are definitely wrong.
{{-6000, 0.00273003019908044},
 {-5000, 4.002433792920723*^-12},
 {-4000, 9.420778019991495*^-80},
 {-3000, 0.010805254786454756},
 {-2000, 0.014835126377371699},
 {-1000, 1.1255118610736836},
 {0, 1.0083957979097389},
 {1000, 3.3664295470533197*^-22},
 {2000, 1.0105517857633675},
 {3000, 0.027269195370073815},
 {4000, 0.026831313283137287},
 {5000, 0.025659720271071166},
 {6000, 0.02396928739747252},
 {7000, 0.021955363082878255},
 {8000, 0.019861954802262567},
 {9000, 0.017956439320638243},
 {10000, 0.01623208221174882},
 {11000, 0.01467181808621124},
 {12000, 0.013260174294049413},
 {13000, 0.011983123654999555},
 {14000, 0.01082795065726872},
 {15000, 0.00978312998048758},
 {16000, 6.311707230722985*^-18},
 {17000, 0.007983743159662302},
 {18000, 0.007211133780524158},
 {19000, 0.00651261660134752},
 {20000, 0.005881151176722276},
 {21000, 0.005310359883946591},
 {22000, 0.004794466105647533},
 {23000, 0.004328238122009399},
 {24000, 0.003906938191582962},
 {25000, 0.0035262763453895838},
 {26000, 0.0031823684622467223},
 {27000, 4.580651706257595*^-15},
 {28000, 2.32433808006728*^-15},
 {29000, 1.5695861700229384*^-14}}
Even more interesting, using Reduce first gives different results:
Clear["Global`*"]
n = 100;
Y = 1000000;
r = Reduce[
p0 + p1 + p100 + p2 + ap10 == 1 && p0 >= 0 && p1 >= 0 && p2 >= 0 &&
ap10 >= 0 &&
p100 >= 0 && (1/100)*Y*(-10*ap10 + 30*p1 + 40*p2 + 100*p100) ==
3000, {ap10, p0, p1, p2, p100}, Reals, Backsubstitution -> True];
q = NMaximize[{n!/((94)!*4!)*p0^(94)*ap10^4*p1*p2, r}, {p0, p1, p2,
p100, ap10}]
{0.0230288, {p0 -> 0.940867, p1 -> 0.00763999, p2 -> 0.00891332,
p100 -> 0.00127333, ap10 -> 0.0413065}}
FindMaximum also doesn't work:
Clear["Global`*"]
Y = 1000000;
n = 100;
q = FindMaximum[{n!/((94)!*4!)*p0^(94)*p1*p2*p10^4,
p0 + p1 + p100 + p2 + p10 == 1 && p0 >= 0 && p1 >= 0 && p2 >= 0 &&
p100 >= 0 &&
p10 >= 0 && (1/100)*Y*(-10*p10 + 30*p1 + 40*p2 + 100*p100) ==
3000}, {{p0, 0.91}, {p1, 0.01387}, {p2, 0.016}, {p100,
0.021}, {p10, 0.35}}]
{4.61652*10^-31, {p0 -> 0.412788, p1 -> 0.0133285, p2 -> 0.0185802,
p100 -> 0.0428179, p10 -> 0.512485}}
So what is it: is the table at Dropbox wrong, or do I need to use different code?
Multi-start searches (as in http://community.wolfram.com/groups/-/m/t/1164680) also don't do any better:
n = 100;
Y = 1000000;
iMin[-n!/((94)!*4!)*p0^(94)*p1*p2*p10^4,
List @@ (p0 + p1 + p2 + p10 + p100 == 1 &&
p0 >= 0 && p1 >= 0 && p2 >= 0 && p100 >= 0 &&
p10 >= 0 && (1/100)*Y*(-10*p10 + 30*p1 + 40*p2 + 100*p100) ==
3000),
Thread[{{p0, p1, p2, p100, p10}, 0, 1}], 10, 0]
{-1.57794*10^-44, {p0 -> 0.289568, p1 -> 0.0329513, p2 -> 0.0407228,
p100 -> 0.0368193, p10 -> 0.599939}}Alex Graham2018-07-25T17:01:39ZUse InputField for defining a function to be applied to a list?
http://community.wolfram.com/groups/-/m/t/1397587
Suppose I have a simple list, Range[4], and I apply a function to its members, e.g. f[x_] = 1/x. With the following code I get what I expected:
In[125]:= f[x_] = 1/x;
Range[4];
Map[f, Range[4]]
Out[127]= {1, 1/2, 1/3, 1/4}
But now I want to use an InputField to define the function to be applied to the list Range[4]. I then use the following code:
Panel[DynamicModule[{f = 1/x, f1},
Column[{InputField[Dynamic[f]], f1[x_] = Dynamic[f],
Map[f1, Range[4]]}]]]
Here I started with the function f =1/x, but I can change it in the input field.
With the above code I get a panel containing the input box, the definition of the new function f1 to be applied to the list Range[4], and the final result.
I had expected this to be {1, 1/2, 1/3, 1/4}, but I am getting {1/x, 1/x, 1/x, 1/x}.
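For comparison, one way that does produce {1, 1/2, 1/3, 1/4} is to treat the InputField's contents as an expression in x and substitute each list element for x inside Dynamic (a sketch using the same names as above, not necessarily the intended design):

```mathematica
Panel[DynamicModule[{f = 1/x},
  Column[{
    InputField[Dynamic[f]],
    (* Substitute each element of Range[4] for x in the current expression f *)
    Dynamic[(f /. x -> #) & /@ Range[4]]
    }]]]
```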
What am I doing wrong?Laurens Wachters2018-08-05T07:52:25ZUse Sum while the summand is a function?
http://community.wolfram.com/groups/-/m/t/1398028
Mathematica 11.3.0.0, Windows 10 64-bit.
Before posting a bug report, I would like to ask if anybody can verify this.
The following works OK:
Sum[Times @@ (IntegerDigits[i, 7] + 1), {i, 0, 10^6}]
However, if the summand is a function, we get gibberish:
fu[n_] := Times @@ (IntegerDigits[n, 7] + 1);
Sum[fu[i], {i, 0, 10^6}]
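A likely explanation is that Sum first evaluates fu[i] with the symbolic iterator i, and IntegerDigits[i, 7] then produces something unintended. Restricting the pattern so the definition only fires for explicit integers is one possible workaround (a sketch):

```mathematica
(* With n_Integer, fu[i] stays unevaluated for symbolic i,
   so Sum falls back to iterating numerically *)
fu[n_Integer] := Times @@ (IntegerDigits[n, 7] + 1);
Sum[fu[i], {i, 0, 10^6}]
```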
Interestingly, the following works:
Sum[fu[i], {i, 1, 10^6}]Daniel Huber2018-08-05T14:29:18ZLie Groups and Lie Algebras using Wolfram Language?
http://community.wolfram.com/groups/-/m/t/1397667
Sorry if this is a repeat question; I am fairly new to Mathematica and would like to use it for the above module. Could someone point me in the right direction, please? I have used Maple for the differential geometry material but am lost in Mathematica. I have mainly used Mathematica for Groebner bases, matrices, complex analysis, etc., with no problem at all.Tonde Kush2018-08-04T15:45:56ZPlace a ContourPlot under a Plot3D?
http://community.wolfram.com/groups/-/m/t/1396065
I would like to combine a 3-dimensional graph of a function with its 2-dimensional contour plot underneath it in a professional way, but I have no idea how to start. I tried this:
W[s_, b_, q_, p_] := (1/\[Pi]) Exp[-(p^2) + I*Sqrt[2] p (b - Conjugate[s]) -
    (1/2)*((Abs[s])^2 + (Abs[b])^2) - (q^2) +
    Sqrt[2]*q*(b + Conjugate[s]) - (Conjugate[s]*b)]
Wpsi[\[Alpha]_, q1_, p1_, q2_, p2_] :=
Np[\[Alpha]]^2 (W[\[Alpha], \[Alpha], q1, p1]*
W[\[Alpha], \[Alpha], q2, p2] +
W[\[Alpha], -\[Alpha], q1, p1]*W[\[Alpha], -\[Alpha], q2, p2] +
W[-\[Alpha], \[Alpha], q1, p1]*W[-\[Alpha], \[Alpha], q2, p2] +
W[-\[Alpha], -\[Alpha], q1, p1]*W[-\[Alpha], -\[Alpha], q2, p2])
plot3D = Plot3D[Wpsi[1, 0, p1, 0, p2], {p2, -2, 2}, {p1, -2, 2},
PlotTheme -> "Scientific", PlotPoints -> 60, PlotRange -> All,
ColorFunction -> Hue, PlotLegends -> Automatic, Mesh -> None];
cntplot =
ContourPlot[Wpsi[1, 0, p1, 0, p2], {p2, -2, 2}, {p1, -2, 2},
PlotRange -> All, Contours -> 20, Axes -> False, PlotPoints -> 30,
PlotRangePadding -> 0, Frame -> False, ColorFunction -> Hue];
gr = Graphics3D[{Texture[cntplot], EdgeForm[],
    Polygon[{{-2, -2, -0.4}, {2, -2, -0.4}, {2, 2, -0.4}, {-2, 2, -0.4}},
     VertexTextureCoordinates -> {{0, 0}, {1, 0}, {1, 1}, {0, 1}}]},
   Lighting -> "Neutral"];
Show[plot3D, gr, PlotRange -> All, BoxRatios -> {1, 1, .6},
FaceGrids -> {Back, Left}]
That gives:
![graph][1]
This is not what I want; I would like something like this:
![needs][2]
Can I do this in Mathematica?
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=graf.PNG&userId=856431
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out.PNG&userId=856431Ziane Mustapha2018-08-02T15:12:40ZFind z-score?
http://community.wolfram.com/groups/-/m/t/1397936
Hello, everyone.
I know how to calculate the probability with Mathematica:
Probability[x >= 1.0, x \[Distributed] NormalDistribution[]]
The answer is 0.158655.
My question is, how can I find the z-score of normal distribution when the probability is given? For example, I want to find the value of z in the below pseudo-code:
Probability[x >= z, x \[Distributed] NormalDistribution[]] == 0.05
Or is there any ready-made function that can compute z-score directly in Mathematica?
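For reference, the quantile function of the distribution does exactly this; for an upper-tail probability p, take the quantile at 1 - p:

```mathematica
(* z such that P[x >= z] == 0.05 for a standard normal *)
z = InverseCDF[NormalDistribution[], 1 - 0.05]
(* Quantile[NormalDistribution[], 0.95] gives the same value, about 1.64485 *)
```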
Thank you!Kui Chen2018-08-05T08:16:45ZExtract information from transfer function matrix using parameter names?
http://community.wolfram.com/groups/-/m/t/1397626
Dear all,
As shown in the picture below I have a transfer function matrix with one input (DisplacementGround) and two outputs (DisplacementMass and VelocityMass). I can access either one of the two elements of the matrix using SystemModelExtract and referring to the elements as {1}, {1} and {1},{2}.
However, it would be very useful to be able to extract the individual transfer functions using the actual names of the input and output quantities (e.g. DisplacementGround in this example). These names come from a Modelica model imported into Mathematica.
Is there a way to achieve it?
Thank you very much in advance.
Fabian
![Description][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=MathematicaQuestion.png&userId=1355184oquichtli2018-08-04T14:37:18Z