Community RSS Feed
http://community.wolfram.com
RSS feed for Wolfram Community, showing discussions tagged Wolfram Science, sorted by activity
Wolfram Programming Lab desktop is just a web browser
http://community.wolfram.com/groups/-/m/t/1464978
I have just confirmed with Technical support that local kernels cannot be run from WPL Desktop.
I am really disappointed with this, since it means the entire product is just an expensive web browser. The only product they have made capable of computing locally is Mathematica.
If you would like to check yours, evaluate Kernels[];
it will return an error message about kernels failing to start due to lack of activation.
Well done, Wolfram, treating customers that way.
Kind regards
Karol
Karol Kopiec, 2018-09-18T14:35:04Z
NMaximize fails!
http://community.wolfram.com/groups/-/m/t/1463232
Hi!
I have tried to solve the following nonlinear integer maximization problem without setting lower (positive) bounds on Q1, Q2 & Q3 beyond nonnegativity. Mathematica fails to find the optimal solution unless I increase the lower bounds for Q2 & Q3 to about 40. I have solved the same problem in LINGO and found the global solution {P -> 60., Q1 -> 0, Q2 -> 52., Q3 -> 60., y1 -> 0, y2 -> 1, y3 -> 1, z1 -> 0, z2 -> 0, z3 -> 1} in less than a second. Any suggestions?
NMaximize[{(Q1 + Q2 + Q3) (P - 20) + 3*0.5 (80 - P) Q1*z1 +
2*0.5 (100 - 0.8 P) Q2*z2 + 2*0.5 (90 - 0.5 P) Q3*z3,
z1 + z2 + z3 == 1 && Q1 == (80 - P) y1 && Q2 == (100 - 0.8 P) y2 &&
Q3 == (90 - 0.5 P) y3 && 2 <= y1 + y2 + y3 <= 3 &&
3 <= y1 + y2 + y3 + z1 + z2 + z3 <= 4 &&
2 <= y1 + y2 + y3 + z1 <= 3 && 2 <= y1 + y2 + y3 + z2 <= 3 &&
2 <= y1 + y2 + y3 + z3 <= 3 && Q1 >= 0 && Q2 >= 0 && Q3 >= 0 &&
1 >= y1 >= 0 && 1 >= y2 >= 0 && 1 >= y3 >= 0 && z1 >= 0 &&
z2 >= 0 && z3 >= 0 && 80 > P > 20 && y1 \[Element] Integers &&
y2 \[Element] Integers && y3 \[Element] Integers &&
z1 \[Element] Integers && z2 \[Element] Integers &&
z3 \[Element] Integers}, {P, Q1, Q2, Q3, y1, y2, y3, z1, z2, z3}]
Christos Papahristodoulou, 2018-09-17T19:22:37Z
Solver for unsteady flow with the use of Mathematica FEM
http://community.wolfram.com/groups/-/m/t/1433064
![fig7][331]
I started the discussion [here][1], but I also want to repeat it on this forum.
There are many commercial and open-source codes for solving problems of unsteady flow.
We are interested in the possibility of solving these problems using Mathematica's FEM. Solvers for stationary incompressible isothermal flows were proposed previously:
Solving 2D Incompressible Flows using Finite Elements:
http://community.wolfram.com/groups/-/m/t/610335
FEM Solver for Navier-Stokes equations in 2D:
http://community.wolfram.com/groups/-/m/t/611304
Nonlinear FEM Solver for Navier-Stokes equations in 2D:
https://mathematica.stackexchange.com/questions/94914/nonlinear-fem-solver-for-navier-stokes-equations-in-2d/96579#96579
We give several examples of the successful application of the finite element method to unsteady problems, including nonisothermal and compressible flows. We will begin with two standard tests for this class of problems, proposed in:
M. Schäfer and S. Turek, Benchmark computations of laminar flow around a cylinder (with support by F. Durst, E. Krause and R. Rannacher). In E. Hirschel, editor, Flow Simulation with High-Performance Computers II. DFG priority research program results 1993-1995, number 52 in Notes Numer. Fluid Mech., pp. 547-566. Vieweg, Wiesbaden, 1996. https://www.uio.no/studier/emner/matnat/math/MEK4300/v14/undervisningsmateriale/schaeferturek1996.pdf
![fig8][332]
Let us consider the flow in a flat channel around a cylinder at Reynolds number 100, where self-oscillations occur, leading to the detachment of vortices in the aft part of the cylinder. In this problem it is necessary to calculate the drag coefficient, the lift coefficient and the pressure difference between the frontal and aft parts of the cylinder as functions of time, as well as the maximum drag coefficient, the maximum lift coefficient, the Strouhal number and the pressure difference $\Delta P(t)$ at $t = t_0 + 1/(2f)$. The frequency $f$ is determined by the period of oscillation of the lift coefficient, $f = f(c_L)$. The data for this test, the code and the results are shown below.
H = .41; L = 2.2; {x0, y0, r0} = {1/5, 1/5, 1/20};
Ω = RegionDifference[Rectangle[{0, 0}, {L, H}], Disk[{x0, y0}, r0]];
RegionPlot[Ω, AspectRatio -> Automatic]
K = 2000; Um = 1.5; ν = 10^-3; t0 = .004;
U0[y_, t_] := 4*Um*y/H*(1 - y/H)
UX[0][x_, y_] := 0;
VY[0][x_, y_] := 0;
P0[0][x_, y_] := 0;
Do[
 {UX[i], VY[i], P0[i]} =
  NDSolveValue[{{Inactive[Div][{{-μ, 0}, {0, -μ}}.Inactive[Grad][u[x, y], {x, y}], {x, y}] +
      Derivative[1, 0][p][x, y] + (u[x, y] - UX[i - 1][x, y])/t0 +
      UX[i - 1][x, y]*D[u[x, y], x] + VY[i - 1][x, y]*D[u[x, y], y],
     Inactive[Div][{{-μ, 0}, {0, -μ}}.Inactive[Grad][v[x, y], {x, y}], {x, y}] +
      Derivative[0, 1][p][x, y] + (v[x, y] - VY[i - 1][x, y])/t0 +
      UX[i - 1][x, y]*D[v[x, y], x] + VY[i - 1][x, y]*D[v[x, y], y],
     Derivative[1, 0][u][x, y] + Derivative[0, 1][v][x, y]} == {0, 0, 0} /. μ -> ν,
    {DirichletCondition[{u[x, y] == U0[y, i*t0], v[x, y] == 0}, x == 0.],
     DirichletCondition[{u[x, y] == 0., v[x, y] == 0.},
      0 <= x <= L && (y == 0 || y == H)],
     DirichletCondition[{u[x, y] == 0, v[x, y] == 0},
      (x - x0)^2 + (y - y0)^2 == r0^2],
     DirichletCondition[p[x, y] == P0[i - 1][x, y], x == L]}},
   {u, v, p}, {x, y} ∈ Ω,
   Method -> {"FiniteElement",
     "InterpolationOrder" -> {u -> 2, v -> 2, p -> 1},
     "MeshOptions" -> {"MaxCellMeasure" -> 0.001}}], {i, 1, K}];
{ContourPlot[UX[K/2][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, Contours -> 20,
PlotPoints -> 25, PlotLabel -> u, MaxRecursion -> 2],
ContourPlot[VY[K/2][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, Contours -> 20,
PlotPoints -> 25, PlotLabel -> v, MaxRecursion -> 2,
PlotRange -> All]} // Quiet
{DensityPlot[UX[K][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, PlotPoints -> 25,
PlotLabel -> u, MaxRecursion -> 2],
DensityPlot[VY[K][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, PlotPoints -> 25,
PlotLabel -> v, MaxRecursion -> 2, PlotRange -> All]} // Quiet
dPl = Interpolation[
   Table[{i*t0, P0[i][.15, .2] - P0[i][.25, .2]}, {i, 0, K, 1}]];
cD = Table[{t0*i,
    NIntegrate[(-ν*(-Sin[θ]*(Sin[θ]*Derivative[0, 1][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
            Cos[θ]*Derivative[1, 0][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]]) +
          Cos[θ]*(Sin[θ]*Derivative[0, 1][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
            Cos[θ]*Derivative[1, 0][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]]))*Sin[θ] -
        P0[i][x0 + r Cos[θ], y0 + r Sin[θ]]*Cos[θ]) /. {r -> r0},
     {θ, 0, 2*Pi}]}, {i, 1000, 2000}]; // Quiet
cL = Table[{t0*i,
    -NIntegrate[(-ν*(-Sin[θ]*(Sin[θ]*Derivative[0, 1][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
            Cos[θ]*Derivative[1, 0][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]]) +
          Cos[θ]*(Sin[θ]*Derivative[0, 1][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
            Cos[θ]*Derivative[1, 0][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]]))*Cos[θ] +
        P0[i][x0 + r Cos[θ], y0 + r Sin[θ]]*Sin[θ]) /. {r -> r0},
     {θ, 0, 2*Pi}]}, {i, 1000, 2000}]; // Quiet
{ListLinePlot[cD,
AxesLabel -> {"t", "\!\(\*SubscriptBox[\(c\), \(D\)]\)"}],
ListLinePlot[cL,
AxesLabel -> {"t", "\!\(\*SubscriptBox[\(c\), \(L\)]\)"}],
Plot[dPl[x], {x, 0, 8}, AxesLabel -> {"t", "ΔP"}]}
f002 = FindFit[cL, a*.5 + b*.8*Sin[k*16*t + c*1.], {a, b, k, c}, t]
Plot[Evaluate[a*.5 + b*.8*Sin[k*16*t + c*1.] /. f002], {t, 4, 8},
Epilog -> Map[Point, cL]]
k0 = k /. f002;
StrouhalNumber = .1*16*k0/2/Pi
cLm = MaximalBy[cL, Last]
sol = {Max[cD[[All, 2]]], Max[cL[[All, 2]]], StrouhalNumber,
  dPl[cLm[[1, 1]] + Pi/(16*k0)]}
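The post-processing step above extracts the oscillation frequency of the lift coefficient by sinusoidal fitting and scales it by D/Ū (here 0.1/1.0). As an independent cross-check, the same quantity can be estimated from the c_L samples by FFT; a minimal sketch in Python, on synthetic data, with `strouhal_from_lift` a hypothetical helper name:

```python
import numpy as np

def strouhal_from_lift(t, cl, diameter, u_mean):
    """Estimate the dominant lift-oscillation frequency by FFT, scale to a Strouhal number."""
    dt = t[1] - t[0]
    cl0 = cl - cl.mean()                      # remove the mean (DC) component
    freqs = np.fft.rfftfreq(len(cl0), dt)
    f = freqs[np.argmax(np.abs(np.fft.rfft(cl0)))]
    return f, f * diameter / u_mean

# synthetic lift signal: 3 Hz oscillation sampled over an integer number of periods
t = np.linspace(0, 4, 400, endpoint=False)
cl = 0.2 + 0.8 * np.sin(2 * np.pi * 3 * t + 1.0)
f, st = strouhal_from_lift(t, cl, 0.1, 1.0)   # D = 0.1, mean inflow 1.0 as in the benchmark
```

An FFT-based estimate avoids the sensitivity of a sinusoidal fit to its starting frequency; on the real c_L table its resolution is limited by the sampled time window.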
Fig. 1 shows the components of the flow velocity and the required coefficients. Our solution of the problem, together with the bounds required by the test:
{3.17805, 1.03297, 0.266606, 2.60427}
lowerbound= { 3.2200, 0.9900, 0.2950, 2.4600};
upperbound = {3.2400, 1.0100, 0.3050, 2.5000};
![Fig1][2]
Note that our results differ from the allowable ranges by a few percent, but if you look at all the results in Table 4 of the cited article, the agreement is quite acceptable. The worst prediction is for the Strouhal number. We note that we use the explicit Euler method, which underestimates the Strouhal number, as follows from the data in Table 4.
The next test differs from the previous one in that the inlet velocity varies according to `U0[y_, t_] := 4*Um*y/H*(1 - y/H)*Sin[Pi*t/8]`. It is necessary to determine the time dependence of the drag and lift coefficients over a half-period of oscillation, as well as the pressure drop at the final moment of time. Fig. 2 shows the components of the flow velocity and the required coefficients. Our solution of the problem, together with the bounds required by the test:
sol = {3.0438934441256595`,
0.5073345082785012`, -0.11152933279750943`};
lowerbound = {2.9300, 0.4700, -0.1150};
upperbound = {2.9700, 0.4900, -0.1050};
![Fig2][3]
For this test, the agreement with the data in Table 5 is good. Consequently, the two tests are almost completely passed.
I wrote and debugged this code in Mathematica 11.0.1. But when I ran it in Mathematica 11.3, I got strange pictures; for example, the disk is rendered as a hexagon and the size of the region is changed.
![Fig3][4]
In addition, the numerical solution of the problem has changed, for example, test 2D2
{3.17805, 1.03297, 0.266606, 2.60427} (v11.0.1)
{3.15711, 1.11377, 0.266043, 2.54356} (v11.3)
The attached file contains the working code for test 2D3 describing the flow around the cylinder in a flat channel with a change in the flow velocity.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D2.png&userId=1218692
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D2.png&userId=1218692
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D3.png&userId=1218692
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Math11.3.png&userId=1218692
[331]: http://community.wolfram.com//c/portal/getImageAttachment?filename=CylinderRe100test2D2.gif&userId=1218692
[332]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2D2test.png&userId=1218692
Alexander Trounev, 2018-08-31T11:44:04Z
Metaprogramming: the Future of the Wolfram Language
http://community.wolfram.com/groups/-/m/t/1435093
With all the marvelous new functionality that we have come to expect with each release, it is sometimes challenging to maintain a grasp on what the Wolfram language encompasses currently, let alone imagine what it might look like in another ten years. Indeed, the pace of development appears to be accelerating, rather than slowing down.
However, I predict that the "problem" is soon about to get much, much worse. What I foresee is a step change in the pace of development of the Wolfram Language that will produce in days and weeks, or perhaps even hours and minutes, functionality that might currently take months or years to develop.
So obvious and clear cut is this development that I have hesitated to write about it, concerned that I am simply stating something that is blindingly obvious to everyone. But I have yet to see it even hinted at by others, including Wolfram. I find this surprising, because it will revolutionize the way in which not only the Wolfram language is developed in future, but in all likelihood programming and language development in general.
The key to this paradigm shift lies in the following unremarkable-looking WL function WolframLanguageData[], which gives a list of all Wolfram Language symbols and their properties. So, for example, we have:
WolframLanguageData["SampleEntities"]
![enter image description here][1]
This means we can treat WL language constructs as objects, query their properties and apply functions to them, such as, for example:
WolframLanguageData["Cos", "RelationshipCommunityGraph"]
![enter image description here][2]
In other words, the WL gives us the ability to traverse the entirety of the WL itself, combining WL objects into expressions, or programs. This process is one definition of the term “Metaprogramming”.
What I am suggesting is that in future much of the heavy lifting will be carried out, not by developers, but by WL programs designed to produce code by metaprogramming. If successful, such an approach could streamline and accelerate the development process, speeding it up many times and, eventually, opening up areas of development that are currently beyond our imagination (and, possibly, our comprehension).
So how does one build a metaprogramming system? This is where I should hand off to a computer scientist (and will happily do so as soon as one steps forward to take up the discussion). But here is a simple outline of one approach.
The principal tool one might use for such a task is genetic programming:
WikipediaData["Genetic Programming"]
> In artificial intelligence, genetic programming (GP) is a technique whereby computer programs are encoded as a set of genes that are then modified (evolved) using an evolutionary algorithm (often a genetic algorithm, "GA") – it is an application of (for example) genetic algorithms where the space of solutions consists of computer programs. The results are computer programs that are able to perform well in a predefined task. The methods used to encode a computer program in an artificial chromosome and to evaluate its fitness with respect to the predefined task are central in the GP technique and still the subject of active research.
One can take issue with this explanation on several fronts, in particular the suggestion that GP is used primarily as a means of generating a computer program for performing a predefined task. That may certainly be the case, but need not be.
Leaving that aside, the idea in simple terms is that we write a program that traverses the WL structure in some way, splicing together language objects to create a WL program that “does something”. That “something” may be a predefined task and indeed this would be a great place to start: to write a GP metaprogramming system that creates WL programs that replicate the functionality of existing WL functions. Most of the generated programs would likely be uninteresting, slower versions of existing functions; but it is conceivable, I suppose, that some of the results might be of academic interest, or indicate a potentially faster computation method, perhaps. However, the point of the exercise is to get started on the metaprogramming project, with a simple(ish) task with very clear, pre-defined goals and producing results that are easily tested. In this case the “objective function” is a comparison of results produced by the inbuilt WL functions vs the GP-generated functions, across some selected domain for the inputs.
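As a toy illustration of the GP loop just described (not the WL-traversal system itself): programs are gene lists over a tiny primitive set, fitness compares outputs against a built-in target over sampled inputs, and evolution keeps and mutates the fittest. A minimal Python sketch, all names hypothetical:

```python
import random

# primitive set: unary building blocks applied in sequence to the input
PRIMS = [lambda x: x,        # identity
         lambda x: 2 * x,    # double
         lambda x: x * x,    # square
         lambda x: -x,       # negate
         lambda x: x + 1]    # increment

def run(prog, x):
    """Evaluate a program (a list of primitive indices) on input x."""
    for op in prog:
        x = PRIMS[op](x)
    return x

def fitness(prog, target, xs):
    """Sum-of-squares mismatch against the target function (lower is better)."""
    return sum((run(prog, x) - target(x)) ** 2 for x in xs)

def evolve(target, xs, depth=3, pop=60, gens=40):
    population = [[random.randrange(len(PRIMS)) for _ in range(depth)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: fitness(p, target, xs))
        elite = population[:pop // 4]
        # refill the population by point-mutating random elites
        population = elite + [
            [g if random.random() > 0.3 else random.randrange(len(PRIMS))
             for g in random.choice(elite)]
            for _ in range(pop - len(elite))]
    return min(population, key=lambda p: fitness(p, target, xs))

# try to rediscover x -> 4 x^2 from the primitives
best = evolve(lambda x: 4 * x * x, range(-3, 4))
```

The WL analogue would replace the fixed primitive list with symbols discovered via WolframLanguageData and the fitness function with a comparison against a built-in function's outputs.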
I glossed over the question of exactly how one “traverses the WL structure” for good reason: I feel sure that there must have been tremendous advances in the theory of how to do this in the last 50 years. But just to get the ball rolling, one could, for instance, operate a dual search, with a local search evaluating all of the functions closely connected to the (randomly chosen) starting function (WL object), while a second “long distance” search jumps randomly to a group of functions some specified number of steps away from the starting function.
[At this point I envisage the computer scientists rolling their eyes and muttering "doesn't this idiot know about the {fill in the blank} theorem about efficient domain search algorithms?"].
Anyway, to continue. The initial exercise is about the mechanics of the process rather than the outcome. The second stage is much more challenging, as the goal is to develop new functionality rather than simply to replicate what already exists. It would entail defining a much more complex objective function, as well as perhaps some constraints on program size, the number and types of WL objects used, etc.
An interesting exercise, for example, would be to try to develop a metaprogramming system capable of winning the Wolfram One-Liner contest. Here, one might characterize the objective function as “something interesting and surprising”, and we would impose a tight constraint on the length of programs generated by the metaprogramming system to a single line of code.
What is “interesting and surprising”? To be defined – that’s a central part of the challenge. But, in principle, I suppose one might try to train a neural network to classify whether or not a result is “interesting” based on the results of prior one-liner competitions.
From there, it’s on to the hard stuff: designing metaprogramming systems to produce WL programs of arbitrary length and complexity to do “interesting stuff” in a specific domain. That “interesting stuff” could be, for instance, a more efficient approximation for a certain type of computation, a new algorithm for detecting certain patterns, or coming up with some completely novel formula or computational concept.
Obviously one faces huge challenges in this undertaking; but the potential rewards are also enormous in terms of accelerating the pace of language development and discovery. It is a fascinating area for R&D, one that the WL is ideally situated to exploit. Indeed, I would be mightily surprised to learn that there is not already a team engaged on just such research at Wolfram. If so, perhaps one of them could comment here?
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10942Fig1.png&userId=773999
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=O_12.png&userId=773999
Jonathan Kinlay, 2018-09-02T13:38:13Z
Can one have custom "operator records" in WSM?
http://community.wolfram.com/groups/-/m/t/1455024
Inspired by [this answer][1] to my question on stack overflow I was curious about overloading the class **operator** to have an **operator record**. From the [Modelica specs for Version 3.2.2][2] (Chapter 14) I take it that *operator records* are available in the language since 2013.
**Can one have a class *operator* in WSM?**
Edit:
I edited the title to make my question more precise: I am interested in defining my own operator records.
Update: I have cross-posted this question now on Mathematica Stack Exchange [(181939)][3].
[1]: https://stackoverflow.com/a/52313716/5363743
[2]: https://www.modelica.org/documents/ModelicaSpec32Revision2.pdf
[3]: https://mathematica.stackexchange.com/q/181939/764
Guido Wolf Reichert, 2018-09-13T14:27:01Z
Unit Checking in System Modeler and making use of unit attributes
http://community.wolfram.com/groups/-/m/t/1451968
Unit checking (or dimensional analysis) is an important means of model verification:
[Units of Measurement in a Modelica Compiler][1]
**Question 1: Is unit checking implemented in the current version of Wolfram System Modeler?**
(If not, what options are there to achieve it by other means?)
In Modelica, units are stored as attributes of variables (e.g. Real( unit = "m/s" )) and are usually enabled by modifications of existing basic types with default values for these attributes (e.g. the default unit for the type Real is the empty string "").
**Question 2: How to make use of this "meta information" within the model itself:**
- How can I access the unit of some variable?
parameter String referencedUnit = <function of another variable to access its unit>
- How can I assign such a referenced unit to another variable within the runtime of a model? (E.g. I have written a converter function to convert units of time, say from minutes to months, as this is frequently needed in economic or business simulation models, where SI units are rather the exception.)
output convertedTime =Real( quantity = "Time", unit = <to be assigned in func-body> )
or
input String timeUnitA; // unit of time to convert from
input String timeUnitB; //unit of time to convert to
output convertedTime = Real( quantity = "Time", unit = timeUnitB )
(None of the above will work for me, as differing degrees of variability prohibit it: a modification of the unit attribute must be a constant...)
EDIT: I have cross-posted the second part of my question on stack overflow [(5363743)][2]
[1]: http://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=8889293&fileOId=8889294 "Units of Measurement in a Modelica Compiler"
[2]: https://stackoverflow.com/q/52312562/5363743
Guido Wolf Reichert, 2018-09-12T10:34:04Z
Extract the real solutions among many which are complex?
http://community.wolfram.com/groups/-/m/t/1450000
Here is the equation and the solution: I want the two real solutions.
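Once the solutions below are obtained, the real pairs can be picked out mechanically; in Mathematica something like `Select[ss, FreeQ[Chop[{\[Rho], \[Theta]} /. #], Complex] &]` should work (Chop removes numerically negligible imaginary parts). The same filter as a language-neutral Python sketch, with `real_solutions` a hypothetical helper:

```python
def real_solutions(solutions, tol=1e-9):
    """Keep only solution tuples whose components have negligible imaginary part."""
    return [s for s in solutions
            if all(abs(complex(v).imag) < tol for v in s)]

# four of the eight (rho, theta) pairs reported by NSolve, for illustration
sols = [(-2.51366 - 3.61652j, 1.8313 - 1.79756j),
        (0.152974, 0.785684),
        (0.311045 - 0.210578j, 2.25043 - 0.388346j),
        (0.417435, -0.617471)]
reals = real_solutions(sols)
```

A tolerance is used rather than an exact test because numerical root-finders often return tiny spurious imaginary parts.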
In[2]:= soln1[a_, b_, c_, \[Xi]_, \[Lambda]_, \[Phi]_, rn_] :=
NSolve[\[Rho]^2 +
2 \[Rho] (a Sin[\[Theta]] Cos[\[Phi]] +
b Sin[\[Theta]] Sin[\[Phi]] + c Cos[\[Theta]]) + a^2 + b^2 +
c^2 - rn^2 ==
0 && \[Rho] - \[Lambda] rn Cos[\[Theta] + \[Xi] Cos[\[Phi]]]^2 ==
0, {\[Rho], \[Theta]}];
In[8]:= ss = soln1[.5, .5, .5, .2, .5, .1, 1]
During evaluation of In[8]:= NSolve::ifun: Inverse functions are being used by NSolve, so some solutions may not be found; use Reduce for complete solution information.
Out[8]= {{\[Rho] -> -2.51366 - 3.61652 I, \[Theta] ->
1.8313 - 1.79756 I}, {\[Rho] -> -2.51366 + 3.61652 I, \[Theta] ->
1.8313 + 1.79756 I}, {\[Rho] -> -0.146207 -
0.0312241 I, \[Theta] -> -1.82025 +
0.520643 I}, {\[Rho] -> -0.146207 +
0.0312241 I, \[Theta] -> -1.82025 - 0.520643 I}, {\[Rho] ->
0.152974, \[Theta] -> 0.785684}, {\[Rho] ->
0.311045 - 0.210578 I, \[Theta] ->
2.25043 - 0.388346 I}, {\[Rho] ->
0.311045 + 0.210578 I, \[Theta] ->
2.25043 + 0.388346 I}, {\[Rho] -> 0.417435, \[Theta] -> -0.617471}}
Hong-Yee Chiu, 2018-09-11T16:44:53Z
Decouple the following equations?
http://community.wolfram.com/groups/-/m/t/1459261
I have two coupled equations and I want to decouple them. How can I do this? Any help will be appreciated.
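For the two equations given below, one standard route (a suggestion, not the poster's method) is to add them: the coupling term Bi (Z[Y] - X[Y]) cancels, leaving (k X[Y] + Z[Y])'' == Subscript[U, p]/Subscript[U, m], which can be integrated directly and substituted back. A sympy check of the cancellation, with c standing in for the right-hand-side constant:

```python
import sympy as sp

Y = sp.symbols('Y')
k, Bi, c = sp.symbols('k Bi c', positive=True)   # c stands for U_p/U_m
X, Z = sp.Function('X'), sp.Function('Z')

eq1 = sp.Eq(k * X(Y).diff(Y, 2) + Bi * (Z(Y) - X(Y)), c)
eq2 = sp.Eq(Z(Y).diff(Y, 2) - Bi * (Z(Y) - X(Y)), 0)

# adding the equations cancels the coupling term Bi*(Z - X),
# leaving the decoupled equation k*X'' + Z'' = c
summed = sp.Eq(eq1.lhs + eq2.lhs, eq1.rhs + eq2.rhs)
```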
eq1 = k*X''[Y] + Bi*(Z[Y] - X[Y]) == Subscript[U, p]/Subscript[U, m];
eq2 = Z''[Y] - Bi*(Z[Y] - X[Y]) == 0;
Mirza Farrukh Baig, 2018-09-15T13:55:30Z
Find the root of a numerical function that contains a root-finding itself
http://community.wolfram.com/groups/-/m/t/1463531
I have encountered the following problem:
- define a numerical function h with input x and output y
- use FindRoot to find the root x* of h
- one characteristic of h: at some part within h FindRoot is called
I have done that with other software, but it's not working in Mathematica, or I am doing something wrong. I have attached a MWE.
Simply evaluating the function h works, but FindRoot returns many error messages. I really do not understand why.
Any ideas?
Best,
Benjamin
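A likely cause (an assumption, not a confirmed diagnosis): the outer FindRoot evaluates h symbolically before substituting numbers, so the inner FindRoot receives symbolic arguments and complains; restricting the definition to numeric input, e.g. `h[inp_?(VectorQ[#, NumericQ] &)] := ...`, is the usual remedy. The nesting itself is sound, as this Python analogue of the attached MWE shows:

```python
from scipy.optimize import fsolve

def h(inp):
    """Numeric function whose body itself runs a root-find (mirrors the MWE)."""
    a1, a2 = inp
    inner = lambda v: [v[0] + 0.5 * v[1] - a1, v[0] + v[1] - a2]
    x, y = fsolve(inner, [1.0, 1.0])          # inner root-find
    return [x - 1.0, y - 1.0]

root = fsolve(h, [1.5, 2.0])                  # outer root-find over h's inputs
```

In Python there is no symbolic pre-evaluation, so the inner solver always sees numbers; the NumericQ guard gives Mathematica the same behavior.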
h[inp_] :=
Module[{y, x, a, equ, sol, z},
a = inp;
equ = {x + .5 y - a[[1]], x + y - a[[2]]};
sol = FindRoot[equ, {{x, 1.0}, {y, 1.0}}];
z = {x, y} /. sol;
z - {1, 1}
]
XY = {X, Y};
XYval = {1.5, 2.0};
XYStart = {{XY[[1]], XYval[[1]]}, {XY[[2]], XYval[[2]]}};
h[XYval]
FindRoot[h[XYInput], XYStart]
Benjamin L, 2018-09-17T20:28:02Z
Music Generation with GAN MidiNet
http://community.wolfram.com/groups/-/m/t/1435251
I generated music with reference to [MidiNet][1]. Most neural-network models for music generation use recurrent neural networks; MidiNet, however, uses convolutional neural networks.
There are three models in MidiNet. Model 1 is a melody generator with no chord condition; Models 2 and 3 are melody generators with a chord condition. I try Model 1, because it is the most interesting of the three models compared in the paper.
**Get MIDI data**
-----------------------
My favorite jazz bassist is [Jaco Pastorius][2]. I get MIDI data from [here][3]; for example, the MIDI data for "The Chicken".
url = "http://www.midiworld.com/download/1366";
notes = Select[Import[url, {"SoundNotes"}], Length[#] > 0 &];
There are several instrument styles in the notes. I take the bass style from them.
notes[[All, 3, 3]]
Sound[notes[[1]]]
![enter image description here][4]
![enter image description here][5]
I convert the MIDI data to image data. I fix the smallest note unit to be the sixteenth note: I divide the MIDI data into sixteenth-note periods and select the sound found at the beginning of each period. The pitch of the SoundNote function ranges from 1 to 128, so I convert each bar to a grayscale image (h = 128 × w = 16).
First, I create a rule that maps each note pitch (C-1, ..., G9) to a number (1, ..., 128); e.g. C4 -> 61.
codebase = {"C", "C#", "D", "D#", "E" , "F", "F#", "G", "G#" , "A",
"A#", "B"};
num = ToString /@ Range[-1, 9];
pitch2numberrule =
Take[Thread[
StringJoin /@ Reverse /@ Tuples[{num, codebase}] ->
Range[0, 131] + 1], 128]
![enter image description here][6]
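The rule built above is equivalent to the standard chromatic indexing 12*(octave + 1) + semitone + 1; a quick Python check of a few values (`pitch_to_number` is a hypothetical helper name):

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_to_number(pitch):
    """Map a pitch name like 'C4' to an index 1..128 (C-1 -> 1, ..., G9 -> 128)."""
    for i, ch in enumerate(pitch):
        if ch.isdigit() or ch == "-":         # the octave part starts here
            name, octave = pitch[:i], int(pitch[i:])
            break
    return 12 * (octave + 1) + NOTE_NAMES.index(name) + 1
```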
Next, I change each bar to image (h = 128*w = 16).
tempo = 108;
note16 = 60/(4*tempo); (* length (seconds) of one sixteenth note *)
select16[snlist_, t_] :=
Select[snlist, (t <= #[[2, 1]] <= t + note16) || (t <= #[[2, 2]] <=
t + note16) || (#[[2, 1]] < t && #[[2, 2]] > t + note16) &, 1]
selectbar[snlist_, str_] :=
select16[snlist, #] & /@ Most@Range[str, str + note16*16, note16]
selectpitch[x_] := If[x === {}, 0, x[[1, 1]]] /. pitch2numberrule
pixelbar[snlist_, t_] := Module[{bar, x, y},
bar = selectbar[snlist, t];
x = selectpitch /@ bar;
y = Range[16];
Transpose[{x, y}]
]
imagebar[snlist_, t_] := Module[{image},
image = ConstantArray[0, {128, 16}];
Quiet[(image[[129 - #[[1]], #[[2]]]] = 1) & /@ pixelbar[snlist, t]];
Image[image]
]
soundnote2image[soundnotelist_] := Module[{min, max, data2},
{min, max} = MinMax[#[[2]] & /@ soundnotelist // Flatten];
data2 = {#[[1]], #[[2]] - min} & /@ soundnotelist;
Table[imagebar[data2, t], {t, 0, max - min, note16*16}]
]
(images1 = soundnote2image[notes[[1]]])[[;; 16]]
![enter image description here][7]
**Create the training data**
-----------------------
First, I truncate images1 to an integer multiple of the batch size. With a batch size of 16, its length is 128 bars, about 284 seconds.
batchsize = 16;
getbatchsizeimages[i_] := i[[;; batchsize*Floor[Length[i]/batchsize]]]
imagesall = Flatten[Join[getbatchsizeimages /@ {images1}]];
{Length[imagesall], Length[imagesall]*note16*16 // N}
![enter image description here][8]
MidiNet proposes a novel conditional mechanism that uses music from the previous bar to condition the generation of the present bar, taking into account temporal dependencies across bars. So each training example for MidiNet (Model 1: melody generator, no chord condition) consists of three parts: "noise", "prev" and "Input". "noise" is a 100-dimensional random vector, "prev" is the image data (1*128*16) of the previous bar, and "Input" is the image data (1*128*16) of the present bar. The first "prev" of each batch is all zeros.
I generate training data with a batch size of 16 as follows.
randomDim = 100;
n = Floor[Length@imagesall/batchsize];
noise = Table[RandomReal[NormalDistribution[0, 1], {randomDim}],
batchsize*n];
input = ArrayReshape[ImageData[#], {1, 128, 16}] & /@
imagesall[[;; batchsize*n]];
prev = Flatten[
Join[Table[{{ConstantArray[0, {1, 128, 16}]},
input[[batchsize*(i - 1) + 1 ;; batchsize*i - 1]]}, {i, 1, n}]],
2];
trainingData =
AssociationThread[{"noise", "prev",
"Input"} -> {#[[1]], #[[2]], #[[3]]}] & /@
Transpose[{noise, prev, input}];
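The batching logic above (drop the remainder, pair each bar with its predecessor, reset "prev" to silence at the start of each batch) can be stated compactly; an illustrative Python sketch with hypothetical names:

```python
import numpy as np

def make_training_triples(bars, batch_size=16, noise_dim=100, seed=0):
    """Build (noise, prev, Input) triples; 'prev' resets to silence at each batch start."""
    rng = np.random.default_rng(seed)
    n = (len(bars) // batch_size) * batch_size    # drop the remainder
    bars = bars[:n]
    triples = []
    for i, bar in enumerate(bars):
        prev = np.zeros_like(bar) if i % batch_size == 0 else bars[i - 1]
        triples.append({"noise": rng.standard_normal(noise_dim),
                        "prev": prev, "Input": bar})
    return triples

bars = [np.random.rand(1, 128, 16) for _ in range(40)]  # dummy bars for illustration
data = make_training_triples(bars)
```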
**Create GAN**
-----------------------
I create generator with reference to MidiNet.
generator = NetGraph[{
1024, BatchNormalizationLayer[], Ramp, 256,
BatchNormalizationLayer[], Ramp, ReshapeLayer[{128, 1, 2}],
DeconvolutionLayer[64, {1, 2}, "Stride" -> {2, 2}],
BatchNormalizationLayer[], Ramp,
DeconvolutionLayer[64, {1, 2}, "Stride" -> {2, 2}],
BatchNormalizationLayer[], Ramp,
DeconvolutionLayer[64, {1, 2}, "Stride" -> {2, 2}],
BatchNormalizationLayer[], Ramp,
DeconvolutionLayer[1, {128, 1}, "Stride" -> {2, 1}],
LogisticSigmoid,
ConvolutionLayer[16, {128, 1}, "Stride" -> {2, 1}],
BatchNormalizationLayer[], Ramp,
ConvolutionLayer[16, {1, 2}, "Stride" -> {1, 2}],
BatchNormalizationLayer[], Ramp,
ConvolutionLayer[16, {1, 2}, "Stride" -> {1, 2}],
BatchNormalizationLayer[], Ramp,
ConvolutionLayer[16, {1, 2}, "Stride" -> {1, 2}],
BatchNormalizationLayer[], Ramp, CatenateLayer[],
CatenateLayer[], CatenateLayer[],
CatenateLayer[]}, {NetPort["noise"] ->
1, NetPort["prev"] -> 19,
19 -> 20 ->
21 -> 22 -> 23 -> 24 -> 25 -> 26 -> 27 -> 28 -> 29 -> 30,
1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7, {7, 30} -> 31,
31 -> 8 -> 9 -> 10, {10, 27} -> 32,
32 -> 11 -> 12 -> 13, {13, 24} -> 33,
33 -> 14 -> 15 -> 16, {16, 21} -> 34, 34 -> 17 -> 18},
"noise" -> {100}, "prev" -> {1, 128, 16}
]
![enter image description here][9]
I create the discriminator without BatchNormalizationLayer or LogisticSigmoid, because I use a [Wasserstein GAN][10], which is easier to stabilize during training.
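For reference, the Wasserstein critic objective that motivates dropping the sigmoid is just a difference of means, trained with weight clipping to keep the critic roughly 1-Lipschitz; a minimal numpy sketch (illustrative only, not the network below):

```python
import numpy as np

def critic_loss(d_real, d_fake):
    """Wasserstein critic objective: minimize E[D(fake)] - E[D(real)]."""
    return np.mean(d_fake) - np.mean(d_real)

def clip_weights(weights, c=0.01):
    """Weight clipping keeps the critic approximately 1-Lipschitz."""
    return [np.clip(w, -c, c) for w in weights]

# toy critic scores on a batch of real and generated bars
loss = critic_loss(np.array([0.8, 1.1, 0.9]), np.array([0.1, -0.2, 0.0]))
clipped = clip_weights([np.array([0.5, -0.5, 0.005])])
```

In the WL network this shows up as the "scale" layer with weights {-1, 1}, the final SummationLayer, and "WeightClipping" -> 0.01 in the training options.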
discriminator = NetGraph[{
ConvolutionLayer[64, {89, 4}, "Stride" -> {1, 1}], Ramp,
ConvolutionLayer[64, {1, 4}, "Stride" -> {1, 1}], Ramp,
ConvolutionLayer[16, {1, 4}, "Stride" -> {1, 1}], Ramp,
1},
{1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7}, "Input" -> {1, 128, 16}
]
![enter image description here][11]
I create Wasserstein GAN network.
ganNet = NetInitialize[NetGraph[<|"gen" -> generator,
"discrimop" -> NetMapOperator[discriminator],
"cat" -> CatenateLayer[],
"reshape" -> ReshapeLayer[{2, 1, 128, 16}],
"flat" -> ReshapeLayer[{2}],
"scale" -> ConstantTimesLayer["Scaling" -> {-1, 1}],
"total" -> SummationLayer[]|>,
{{NetPort["noise"], NetPort["prev"]} -> "gen" -> "cat",
NetPort["Input"] -> "cat",
"cat" ->
"reshape" -> "discrimop" -> "flat" -> "scale" -> "total"},
"Input" -> {1, 128, 16}]]
![enter image description here][12]
**NetTrain**
-----------------------
I train using the training data created above. I use RMSProp as the NetTrain method, following the Wasserstein GAN paper. It takes about one hour using a GPU.
net = NetTrain[ganNet, trainingData, All, LossFunction -> "Output",
Method -> {"RMSProp", "LearningRate" -> 0.00005,
"WeightClipping" -> {"discrimop" -> 0.01}},
LearningRateMultipliers -> {"scale" -> 0, "gen" -> -0.2},
TargetDevice -> "GPU", BatchSize -> batchsize,
MaxTrainingRounds -> 50000]
![enter image description here][13]
**Create MIDI**
-----------------------
I create image data for 16 bars using the generator of the trained network.
bars = {};
newbar = Image[ConstantArray[0, {1, 128, 16}]];
For[i = 1, i < 17, i++,
noise1 = RandomReal[NormalDistribution[0, 1], {randomDim}];
prev1 = {ImageData[newbar]};
newbar =
NetDecoder[{"Image", "Grayscale"}][
NetExtract[net["TrainedNet"], "gen"][<|"noise" -> noise1,
"prev" -> prev1|>]];
AppendTo[bars, newbar]
]
bars
![enter image description here][14]
I select only the pixel with the maximum value in each column of the image, because images generated by a Wasserstein GAN tend to be blurred. I clean the images:
clearbar[bar_, threshold_] := Module[{i, barx, col, max},
barx = ConstantArray[0, {128, 16}];
col = Transpose[bar // ImageData];
For[i = 1, i < 17, i++,
max = Max[col[[i]]];
If[max >= threshold,
barx[[First@Position[col[[i]], max, 1], i]] = 1]
];
Image[barx]
]
bars2 = clearbar[#, 0.1] & /@ bars
![enter image description here][15]
I convert the images to SoundNote objects and merge consecutive notes of the same pitch.
number2pitchrule = Reverse /@ pitch2numberrule;
images2soundnote[img_, start_] :=
SoundNote[(129 - #[[2]]) /.
number2pitchrule, {(#[[1]] - 1)*note16, #[[1]]*note16} + start,
"ElectricBass", SoundVolume -> 1] & /@
Sort@(Reverse /@ Position[(img // ImageData) /. (1 -> 1.), 1.])
snjoinrule = {x___, SoundNote[s_, {t_, u_}, v_, w_],
SoundNote[s_, {u_, z_}, v_, w_], y___} -> {x,
SoundNote[s, {t, z}, v, w], y};
I generate the music and attach it as an mp3 file.
Sound[Flatten@
MapIndexed[(images2soundnote[#1, note16*16*(First[#2] - 1)] //.
snjoinrule) &, bars2]]
![enter image description here][16]
**Conclusion**
-----------------------
I tried music generation with a GAN. I am not satisfied with the result; the likely causes are various: limited training data, limited training time, etc.
Jaco is gone. I hope neural networks will one day be able to express Jaco's bass.
[1]: https://arxiv.org/abs/1703.10847
[2]: https://en.wikipedia.org/wiki/Jaco_Pastorius
[3]: http://www.bock-for-pastorius.de/midi.htm
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=317901.jpg&userId=1013863
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=567502.jpg&userId=1013863
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=476803.jpg&userId=1013863
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=744004.jpg&userId=1013863
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=586405.jpg&userId=1013863
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=707106.jpg&userId=1013863
[10]: https://arxiv.org/abs/1701.07875
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=435507.jpg&userId=1013863
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=170508.jpg&userId=1013863
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=324809.jpg&userId=1013863
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=965210.jpg&userId=1013863
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=706311.jpg&userId=1013863
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=177112.jpg&userId=1013863Kotaro Okazaki2018-09-02T02:30:04ZReverse the axes of a plot?
http://community.wolfram.com/groups/-/m/t/1459957
Hello and thanks for your help.
I am trying to invert the axes of the output of the Plot[] command: invert the y (vertical) axis while keeping the x (horizontal) axis as it is. I tried to find an answer in the documentation but did not find one.
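A hedged sketch of two possible approaches (the `ScalingFunctions` option is only supported by `Plot` in sufficiently recent versions; the negate-and-relabel trick should work in older versions too):

```mathematica
(* recent versions: reverse the vertical axis directly *)
Plot[Sin[x], {x, 0, 2 Pi}, ScalingFunctions -> {None, "Reverse"}]

(* version-independent workaround: plot -f[x] and relabel the ticks *)
Plot[-Sin[x], {x, 0, 2 Pi},
 Ticks -> {Automatic, Table[{-y, y}, {y, -1, 1, 0.5}]}]
```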
Thank you very much for any help you can give me.Miguel Saldias2018-09-15T19:20:21ZScatter plotting satellites?
http://community.wolfram.com/groups/-/m/t/1454309
I'm dealing with a table of GEO satellites and have generated a table of azimuth (AZ) and elevation (EL) values relative to my location. Too bad they couldn't be found in the SatelliteData[] query...
This is a snippet of the data.
dataSatTable = {
{"SAT NAME", "EL", "AZ"},
{"NSS-806", 3.26, 99.47},
{"Galaxy-17-19", 5.69, 258.52},
{"Eutelsat-113", 10.4, 254.51}
}
I need to plot each satellite as a dot with a text tag on a scatter plot; these will form an arc. I then need to add another table of data containing obstructions to the plot.
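One hedged sketch, assuming the Az/El table above, using `ListPlot` with `Callout` labels (available in version 11+); obstructions could later be overlaid with `Show` or `Epilog`:

```mathematica
sats = Rest[dataSatTable];  (* drop the header row *)
ListPlot[Callout[{#[[3]], #[[2]]}, #[[1]]] & /@ sats,
 AxesLabel -> {"Azimuth (deg)", "Elevation (deg)"},
 PlotRange -> {{0, 360}, {0, 90}}, PlotMarkers -> {Automatic, 10}]
```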
Any pointers?Mathison Ott2018-09-13T07:18:01Zpsfrag for Mathematica 10
http://community.wolfram.com/groups/-/m/t/474155
Hello,
as far as I understand, psfrag no longer works with Mathematica 10. Does anyone have a solution for this problem, or
know whether there will be one in the near future? Or is there an alternative?
What I want to do is export eps files from Mathematica and include them into Latex with nice Labels.a b2015-04-05T14:34:56ZGet a numerical solution to a nonlinear ODE?
http://community.wolfram.com/groups/-/m/t/1458395
I am trying to solve a nonlinear ODE by applying NDSolve with the StiffnessSwitching method, but when I try to find the root of my equation it gives me an error message. The same code works well in Mathematica version 9, but not in version 11.3, which I just upgraded to; I do not know why.
Would anyone help me please?
Here is my code
Z=800;
g= 0.023800000000000000000;
k2= 0.000194519;
R= 1.5472;
ytest0= -13.911917694213733`;
ϵ = $MachineEpsilon ;
y1[ytest_?NumericQ] :=
NDSolve[
{y''[r] + 2 y'[r]/r == k2 Sinh[y[r]], y[1] == ytest,
y'[ϵ] == 0}, y, {r, ϵ, 1},
Method -> {"StiffnessSwitching", "NonstiffTest" -> False}];
y2[ytest_?NumericQ] :=
NDSolve[
{y''[r] + 2 y'[r]/r == k2 Sinh[y[r]],
y[1] == ytest, y'[R] == 0}, y, {r, 1, R},
Method -> {"StiffnessSwitching", "NonstiffTest" -> False}];
y1Try[ytest_?NumericQ] := First[y'[1] /. y1[ytest]];
y2Try[ytest_?NumericQ] := First[y'[1] /. y2[ytest]];
f = ytest /. FindRoot[y1Try[ytest] - y2Try[ytest] == -Z g, {ytest, ytest0}]kolod al2018-09-15T03:44:54ZAvoid issue while using DMSList function?
http://community.wolfram.com/groups/-/m/t/1461490
DMSList[20.365]
The above code should return a list of degrees, minutes, and seconds, but for me only the following code returned the required result of {20, 21, 54.}.
DMSList[{20.365,0,0}]htan aungmin2018-09-17T09:35:33ZCreate a "Great Circle" on a globe through two given points?
http://community.wolfram.com/groups/-/m/t/1460856
A colleague of mine is on holiday from Amsterdam to Miami. Just for fun I would like to plot the great circle (whose plane passes through the center of the earth) going through Amsterdam and Miami.
I tried to do that with GeoGraphics/GeoPath, but in both cases I got the error message "GeoGraphics/GeoPath is not a graphics primitive". I tried several things but I cannot get it right. How do I create it?
The code is from the Wolfram help with some adaptation. See the attachment.
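A hedged sketch using `GeoPath` with the "GreatCircle" path type. Note the capitalization — `GeoPath`, not `Geopath` — and that it is a primitive only inside `GeoGraphics`; the entity specifications below are assumptions and may need adjusting:

```mathematica
ams = Entity["City", {"Amsterdam", "NorthHolland", "Netherlands"}];
mia = Entity["City", {"Miami", "Florida", "UnitedStates"}];
(* "GreatCircle" draws the full circle through both points *)
GeoGraphics[{Thick, Red, GeoPath[{ams, mia}, "GreatCircle"]},
 GeoRange -> "World", GeoProjection -> "Robinson"]
```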
Thank youChiel Geeraert2018-09-16T17:35:52ZAvoid issues on demonstrations running on a web browser?
http://community.wolfram.com/groups/-/m/t/1457702
I have noticed at least a couple of demonstrations that one can't run from within a web browser (constrained optimization and union bound probability). When accessing these demonstrations and the cursor is placed in the graphic, a notice pops up that states "this demonstration is optimized for desktop". If the CDF file is downloaded to an iPad running the Wolfram Player, it seems to run correctly. What is it about these demonstrations that prevents them from running inside a web browser?Mike Luntz2018-09-14T13:22:00ZDefine an implicit function?
http://community.wolfram.com/groups/-/m/t/1457290
Hi there,
I have a theoretical decision model that involves a number of equations that cannot be solved in closed form, i.e. the solution is given only implicitly. The first of these functions then enters another equation that again can only be solved implicitly. I encounter problems when I try to implement this. I only need a numerical solution, as it is for illustration purposes only.
Now here's the first equation:
b[x_] := Solve[b == x - 0.1*Quantile[NormalDistribution[0, 1], b], b]
which gives the error message
$RecursionLimit::reclim2: Recursion depth of 1024 exceeded during evaluation of -0.1 Quantile[NormalDistribution,b].
I'd prefer to have the 0.1 as a free parameter "b[x_,s_] :=...", but would be able to live with that if I have to. Any ideas or comments?
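A hedged sketch: since `b` appears on both sides, the `Solve`-based recursive definition loops; defining the function numerically via `FindRoot` avoids that, and the 0.1 becomes a free parameter. The bracket on (0, 1) is an assumption, needed because `Quantile`'s second argument must be a probability:

```mathematica
b[x_?NumericQ, s_: 0.1] :=
 bb /. FindRoot[bb == x - s Quantile[NormalDistribution[0, 1], bb],
   {bb, 0.5, 10.^-6, 1 - 10.^-6}]
```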
Best ChristianChristan Bauer2018-09-14T11:24:04ZConnect Mathematica to a Bluetooth 4.0 device?
http://community.wolfram.com/groups/-/m/t/1454347
How does Mathematica connect to a heart rate device via Bluetooth 4.0 and analyze the sampled data? I have tried the FindDevices command, but the device cannot be found. Is there anything to be aware of, or are there other methods?
FindDevices[]
{DeviceObject[{"Camera", 1}], DeviceObject[{
"FunctionDemo", 1}], DeviceObject[{
"RandomSignalDemo", 1}], DeviceObject[{"WriteDemo", 1}]}Tsai Ming-Chou2018-09-13T09:37:35ZSolve the following equations with floor and ceil using Wolfram|Alpha?
http://community.wolfram.com/groups/-/m/t/1457369
When I enter this:
n=50, B=150, k=29.5, h=13.5, x=floor(B/k), H=(ceil(n/(2*x))*h)
I get a solution:
> B = 150, h = 27/2, H = 135/2, k = 59/2, n = 50, x = 5
BUT when I enter this:
n=50, B=150, R=2, k=29.5, h=13.5, x=floor(B/k), H=(ceil(n/(R*x))*h)
I get an error:
> Wolfram|Alpha doesn't understand your query
Why?Cev Ing2018-09-14T11:30:28ZIssues with old postings
http://community.wolfram.com/groups/-/m/t/1455753
Are there any issues with old postings in the community? When trying to access an old post, I'm unable to view the page.
Example
Simple Pendulum Experiment Using Mathematica's Image Processing Capability
We can use the pin notebook to capture the position and time in space of the pendulum.[mcode]pics = {}; Pause[10]; Do[AppendTo[pics, {AbsoluteTime[], CurrentImage[]}]; Pause[0.1], {i,...
AUTHOR: Diego Zviovich 5221Views 0replies
HyperLink
http://community.wolfram.com/groups/-/m/t/193779?p_p_auth=xTZ8jYHy
Result:
GROUPS:
Be respectful. Review our Community Guidelines to understand your role and responsibilities. Community Terms of UseDiego Zviovich2018-09-13T23:03:06ZConvert Wolfram Dataset to JSON or CSV via API?
http://community.wolfram.com/groups/-/m/t/1455513
I am getting Wolfram CDF format from this:
beta = APIFunction[{"tablename" -> "String"},ResourceData[ResourceObject[#tablename] ]& ]
co = CloudDeploy[beta, Permissions->"Public"]
Response:
Dataset[{<|"Name" -> "Aachen", "ID" -> "1", "NameType" -> "Valid", "Classification" -> "L5", "Mass" -> Quantity[21, "Grams"], "Fall" -> "Fell", "Year" -> DateObject[{1880}, "Year", "Gregorian", -5.], "Coordinates" -> GeoPosition[{50.775, 6.08333}]|>, <|"Name" -> "Aarhus", "ID" -> "2", "NameType" -> "Valid", "Classification" -> "H6", "Mass" -> Quantity[720, "Grams"], "Fall" -> "Fell", "Year" -> DateObject[{1951}, "Year", "Gregorian", -5.], "Coordinates" -> GeoPosition[{56.18333, 10.23333}]|>, <|"Name" -> "Abee", "ID" -> "6", "NameType" -> "Valid", "Classification" -> "EH4", "Mass" -> Quantity[107000, "Grams"], "Fall" -> "Fell", "Year" -> DateObject[{1952}, "Year", "Gregorian", -5.], "Coordinates" -> GeoPosition[{54.21667, -113.}]|>, <|"Name" -> "Acapulco", "ID" -> "10", "NameType" -> "Valid", "Classification" -> "Acapulcoite", "Mass" -> Quantity[1914, "Grams"], "Fall" -> "Fell", "Year" -> DateObject[{1976}, "Year", "Gregorian", -5.], "Coordinates" -> GeoPosition[{16.88333, -99.9}]|>, <|"Name" -> "Achiras", "ID" -> "370", "NameType" -> "Valid", "Classification" -> "L6", "Mass" -> Quantity[780, "Grams"], "Fall" -> "Fell", "Year" -> DateObject[{1902}, "Year", "Gregorian", -5.], "Coordinates" -> GeoPosition[{-33.16667, -64.95}]|> }]
I need this in JSON format; I tried to convert it using URLExecute, but it didn't work.
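One hedged sketch: `ExportString` can produce JSON explicitly inside the `APIFunction`. Values with no native JSON form (`Quantity`, `DateObject`, `GeoPosition`) may need converting to strings first; the `TextString` replacement below is one assumed way to do that:

```mathematica
beta = APIFunction[{"tablename" -> "String"},
   ExportString[
     Normal[ResourceData[ResourceObject[#tablename]]] /.
      x : (_Quantity | _DateObject | _GeoPosition) :> TextString[x],
     "JSON"] &];
co = CloudDeploy[beta, Permissions -> "Public"]
```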
Does anyone know any Pythonic or Wolfram way to convert this into JSON or CSV?Sag Mk2018-09-13T17:28:29ZObtain an inhomogeneous compound Poisson process?
http://community.wolfram.com/groups/-/m/t/1456738
Mathematica has functions for the compound Poisson process and the inhomogeneous Poisson process, but not for the combination of the two. In other words, is it possible to obtain a compound Poisson process with time-varying intensity?
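Not directly, as far as I know, but one hedged sketch simulates it by thinning: generate candidate events from a homogeneous Poisson process at a rate that bounds the intensity, keep each event at time t with probability λ(t)/λmax, then attach i.i.d. jumps. The intensity and jump distribution below are illustrative assumptions:

```mathematica
SeedRandom[1];
T = 10; lamMax = 3.;
lam[t_] := 2 + Sin[t];  (* assumed intensity, bounded above by lamMax *)
times = {}; t = 0.;
While[(t += RandomVariate[ExponentialDistribution[lamMax]]) < T,
 If[RandomReal[] < lam[t]/lamMax, AppendTo[times, t]]];
jumps = RandomVariate[ExponentialDistribution[1], Length[times]];
ListStepPlot[Transpose[{times, Accumulate[jumps]}]]  (* compound path *)
```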
Thanks in advance!Livvy Zhen2018-09-14T03:36:21ZPrevent the reset of the DynamicImage settings?
http://community.wolfram.com/groups/-/m/t/1455313
Dear all,
In the following code, how can I prevent the reset of the DynamicImage, i.e., if I have zoomed the image and then move the slider, how can I prevent the DynamicImage from losing the zoom:
img = ExampleData[{"TestImage", "House"}];
image = {img, img*2.0};
Manipulate[
Row[{DynamicImage@img, image[[ind]]}],
{ind, 1, 2, 1}
]
Please note this example is a toy problem.
Thank you,
LuisLuis Mendes2018-09-13T14:41:01ZWork with functions defined inside a Package?
http://community.wolfram.com/groups/-/m/t/1150206
I have functions defined inside a Begin block of a Package in the normal manner (BeginPackage ... foo::usage="bar" ... Begin ... DegreesPerMeterAtmosphere[targelevdeg_,startalt_] := targelevdeg+startalt ... etc.)
I tried to define variables that would be exposed by the package (as below) - but it does not work and I can't find the relevant comments in the manuals. This is the simplified code:
BeginPackage[ "FoundationFunctions`"]
speedlight::usage = "The speed of light in meters/sec";
DegreesPerMeterAtmosphere::usage = "DegreesPerMeterAtmosphere[targelevdeg,startalt] stuff"
Begin[ "Private`"]
speedlight:=299792458 (* m/s *);
DegreesPerMeterAtmosphere[targelevdeg_,startalt_] := targelevdeg+startalt
End[]
EndPackage[]
But when I Get[] the package from a Notebook the function is pulled across but not the variable. Specifically Names["FoundationFunctions`*"] yields {"DegreesPerMeterAtmosphere"}.
Am I trying to do something that is silly (perhaps one cannot use packages to define/ encapsulate variables)? Or have I done the right thing in the wrong way?Andrew Macafee2017-07-20T13:01:51ZFitting An Ellipse Inside a Non-Convex Curve
http://community.wolfram.com/groups/-/m/t/1453823
The goal is to find the largest ellipse (with given ratio of axes), centered at a given point and with a given orientation, that fits inside a specified non-convex oval.
equation of oval
In[1]:= oval[{x_, y_}] = ((x - 1)^2 + y^2) ((x + 1)^2 + y^2) - (21/20)^4;
derive equation of an ellipse with axes "a" and "b", centered at { xc, yc }
with major axis making angle \[Theta] with x-axis.
In[2]:= Thread[{xel, yel} =
DiagonalMatrix[{a, b}^-1].RotationMatrix[-\[Theta]].{x - xc, y - yc}];
In[3]:= eleq[{{a_, b_}, xc_, yc_, \[Theta]_}, {x_, y_}] = xel^2 + yel^2 - 1;
symbolically find largest ellipse with axes "a" and "a/2",
centered at { 1, (1/5) }
oriented at \[Pi]/3
RegionWithin takes about 1 minute.
In[4]:= AbsoluteTiming[
RegionWithin[ImplicitRegion[oval[{x, y}] <= 0, {x, y}],
ImplicitRegion[eleq[{{a, a/2}, 1, 1/5, \[Pi]/3}, {x, y}] <= 0, {x, y}],
GenerateConditions -> True] // N]
Out[4]= {63.4787, 0. < a <= 0.315686 || -0.315686 <= a < 0.}
Calculating it numerically with a Lagrange multiplier
and NSolve takes about 1 second.
The desired answer is the one with the smallest value of a.
In[5]:= AbsoluteTiming[
sln = NSolve[{a >= 0, oval[{x, y}] == 0,
eleq[{{a, a/2}, 1, 1/5, \[Pi]/3}, {x, y}] == 0,
Sequence @@
D[oval[{x,
y}] == \[Lambda] eleq[{{a, a/2}, 1, 1/5, \[Pi]/3}, {x, y}], {{x,
y}}]}, {a, x, y, \[Lambda]}, Reals]]
Out[5]= {0.808361, {{a -> 0.315686, \[Lambda] -> 0.869176, x -> 1.15549,
y -> 0.474695}, {a -> 4.34436, \[Lambda] -> 7.03937, x -> -1.41308,
y -> 0.191823}, {a -> 0.817698, \[Lambda] -> 1.46331, x -> 0.654269,
y -> -0.531984}, {a -> 1.14366, \[Lambda] -> 1.77874, x -> 1.34316,
y -> -0.315728}}}
eliminating the Lagrange multiplier before solving speeds up the calculation.
In[6]:= AbsoluteTiming[
sln = NSolve[{a >= 0, oval[{x, y}] == 0,
eleq[{{a, a/2}, 1, 1/5, \[Pi]/3}, {x, y}] == 0,
Eliminate[
D[oval[{x,
y}] == \[Lambda] eleq[{{a, a/2}, 1, 1/5, \[Pi]/3}, {x, y}], {{x,
y}}], \[Lambda]]}, {a, x, y}, Reals]]
Out[6]= {0.0641214, {{x -> 1.15549, y -> 0.474695, a -> 0.315686}, {x -> -1.41308,
y -> 0.191823, a -> 4.34436}, {x -> 1.34316, y -> -0.315728,
a -> 1.14366}, {x -> 0.654269, y -> -0.531984, a -> 0.817698}}}
Plotting all the results shows that the curves are tangent at the intersection point.
![enter image description here][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ellipse_in_oval.jpg&userId=29126Frank Kampas2018-09-13T00:45:28Z[GIF] Thoughtform
http://community.wolfram.com/groups/-/m/t/1453464
![enter image description here][1]
Same principle as a previous [post][2], but added some visual aids to make it more intuitive. Drastically resized due to filesize limits, download full-size GIF [here][3] .
Also had some fun with the colors and had an art print made.
![enter image description here][4]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=framesforward30.053.gif&userId=167076
[2]: http://community.wolfram.com/groups/-/m/t/947494
[3]: https://www.dropbox.com/s/rt7cwewf81a0lfy/Thoughtform%200.053.gif?dl=0
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=IMG_1335copy.JPG&userId=167076Bryan Lettner2018-09-12T23:36:22Z[✓] Create an image collage and apply Blur?
http://community.wolfram.com/groups/-/m/t/1453520
Hello Everyone,
I am very new to the Wolfram Language and am trying to work through the course book for the language. However, I like to try out my ideas on how things can be done in unconventional ways.
I was trying to produce an image collage as:
<pre>
<code>
i = CurrentImage[];
ImageCollage[
Table[f[i], {f, {Blur, EdgeDetect, Binarize}}]
]
</code>
</pre>
And I was wondering: if I would like to apply a parameter to the Blur function, that would be something like partial application, i.e. <code>Blur[5]</code>,
which in turn could be further applied in <code>f[i]</code>.
Is such a thing possible?
The way I tried it, <code>Blur[5]</code> doesn't work.
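As far as I can tell, `Blur[5]` is not an operator form; wrapping the parameterized call in a pure function gives the partial application you want:

```mathematica
i = CurrentImage[];
ImageCollage[Table[f[i], {f, {Blur[#, 5] &, EdgeDetect, Binarize}}]]
```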
Kind regards
KarolKarol Kopiec2018-09-12T20:38:10ZDefine a piecewise function from a list?
http://community.wolfram.com/groups/-/m/t/1452490
I would like to define a piecewise function by providing two lists and using a for loop
eg:
xlist = {1, 2, 3, 4}
ylist = {4, 5, 6}
f(x):= { ylist[1] if x in [ xlist[1],xlist[2] ];ylist[2] if x in [ xlist[2],xlist[3] ]; ylist[3] if x in [ xlist[3],xlist[4] ]; 0 otherwise}Noureddine Toumi2018-09-12T17:58:45ZLaunch Wolfram CDF player?
http://community.wolfram.com/groups/-/m/t/1453502
Hello
I downloaded Wolfram CDF Player Version 11.3.0 for Windows 7/8 from the Wolfram website, and the installer downloaded. But when I tried to click the installer, a few seconds later it opened a popup saying "Timeout waiting for windows to load". I have plenty of hard disk space and do not understand what the issue may be. Can someone help?
Thank you
VioletViolet K2018-09-12T20:03:17Z[Wolfram Media] A Numerical Approach to Real Algebraic Curves
http://community.wolfram.com/groups/-/m/t/1452588
[![enter image description here][1]][2]
Wolfram Media has released a new book, [*A Numerical Approach to Real Algebraic Curves with the Wolfram Language*][2], by Barry H. Dayton. [Dayton][3] is a [mathematician][4] and long-time Mathematica user.
Bridging the gap between the sophisticated topic of real algebraic curve theory and on-the-spot computation and visualization of real algebraic curves, Dayton uses the Wolfram Language to explore and analyze real curves that often do not have rational points on them. In classical texts, analysis of these types of real curves was only really possible in the theoretical sense, but the Wolfram Language's ability to work with machine numbers, both in calculations and in detailed plots, enables accurate analysis of extremely complicated curves. This book is intended for those with some understanding of calculus and partial derivatives and with basic knowledge of the Wolfram Language.
One thing that makes this [Wolfram Media][5] publication unique is that not only is the book available for purchase on Amazon as a Kindle file, the entire text of the book with all of the code used to make the plots is available for free as downloadable Wolfram Notebooks. This book's unique style includes a large function appendix that evaluates independently of the chapter interface and activates the functions used in the text itself.
Read this month's [article of *The Mathematica Journal* for a summary][6]. Below are a few beautiful images from the article.
![enter image description here][7]
![enter image description here][8]
We're excited for this release as it is the first book by a non-Wolfram author that we've published, and we have several additional titles under consideration for 2019. Please check back on the Publishing and Authoring Group discussion over the next few months for updates!
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-09-12at4.30.29PM.png&userId=20103
[2]: http://www.wolfram-media.com/products/dayton-algebraic-curves.html
[3]: http://barryhdayton.space
[4]: https://scholar.google.com/citations?user=hHz85rIAAAAJ&hl=en
[5]: http://www.wolfram-media.com
[6]: http://www.mathematica-journal.com/2018/08/a-wolfram-language-approach-to-real-numerical-algebraic-plane-curves
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Dayton_PlacedGraphics_1.gif&userId=20103
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Dayton_PlacedGraphics_7.gif&userId=20103Jeremy Sykes2018-09-12T19:30:25ZRaspberryPi 3 Model B+ and I2C issue
http://community.wolfram.com/groups/-/m/t/1431827
Hello,
I have been struggling with the SenseHAT and Mathematica. At first Mathematica was not able to find the SenseHAT at all, even though I followed the I2C setup guide, but eventually, after adding the following line to /boot/config.txt, I was able to make some progress:
dtparam=i2c0=on
After that there are two I2C buses in the system:
pi@raspberrypi:~ $ ls -l /dev/i2c-*
crw-rw---- 1 root i2c 89, 0 Aug 30 21:34 /dev/i2c-0
crw-rw---- 1 root i2c 89, 1 Aug 30 21:34 /dev/i2c-1
However, Mathematica reports a variety of I2C errors:
pi@raspberrypi:~ $ wolfram
Wolfram Language 11.3.0 Engine for Linux ARM (32-bit)
Copyright 1988-2018 Wolfram Research, Inc.
In[1]:= sensehat = DeviceOpen["SenseHAT"]
Out[1]= DeviceObject[{SenseHAT, 1}]
In[2]:= DeviceRead[sensehat, "Temperature"]
DeviceWrite::unknownMRAAWriteError: An unknown error occured writing to the I2C bus.
DeviceWrite::unknownMRAAWriteError: An unknown error occured writing to the I2C bus.
DeviceWrite::unknownMRAAWriteError: An unknown error occured writing to the I2C bus.
General::stop: Further output of DeviceWrite::unknownMRAAWriteError
will be suppressed during this calculation.
Out[2]= 42.4979 degrees Celsius
When investigating the Linux journal further, it seems that libmraa (presumably from Mathematica's MRAALink) tries to use the I2C-0 bus:
Aug 30 21:38:05 raspberrypi libmraa[1037]: libmraa version v1.6.1 initialised by user 'pi' with EUID 100
Aug 30 21:38:05 raspberrypi libmraa[1037]: libmraa initialised for platform 'Raspberry Pi Model B Rev 1'
Aug 30 21:38:05 raspberrypi libmraa[1037]: i2c_init: Selected bus 0
Aug 30 21:38:22 raspberrypi libmraa[1037]: i2c0: write: Access error: Remote I/O error
Aug 30 21:38:22 raspberrypi libmraa[1037]: i2c0: write: Access error: Remote I/O error
However, as far as I can tell, the SenseHAT is on I2C bus 1. So I removed "dtparam=i2c0=on" and, after a reboot, added a symbolic link for i2c-0 pointing to i2c-1:
pi@raspberrypi:/ $ ls -la /dev/i2c-*
lrwxrwxrwx 1 root root 10 Aug 30 21:43 /dev/i2c-0 -> /dev/i2c-1
crw-rw---- 1 root i2c 89, 1 Aug 30 21:41 /dev/i2c-1
and tried the SenseHAT again in Mathematica, which now seems to work:
pi@raspberrypi:/ $ wolfram
Wolfram Language 11.3.0 Engine for Linux ARM (32-bit)
Copyright 1988-2018 Wolfram Research, Inc.
In[1]:= sensehat = DeviceOpen["SenseHAT"]
Out[1]= DeviceObject[{SenseHAT, 1}]
In[2]:= DeviceRead[sensehat, "Temperature"]
Out[2]= 38.9896 degrees Celsius
So to me it looks like Mathematica uses the wrong I2C bus, at least on this particular model:
pi@raspberrypi:/dev $ cat /proc/device-tree/model
Raspberry Pi 3 Model B Plus Rev 1.3
pi@raspberrypi:/ $ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
I think this should be fixed in Mathematica.Teemu Ahola2018-08-30T18:49:14ZSimplify the following mathematical expression?
http://community.wolfram.com/groups/-/m/t/1431336
Hey, I want to simplify this calculation (see the file) and write it in the form (1+ax+by)/(1+cx+dy). Is it possible? If yes, how can I do it?Hamza Hboub2018-08-30T15:08:23ZNumerical anomalies in a minimax algorithm
http://community.wolfram.com/groups/-/m/t/1449482
I am trying to compute error bounds for polynomial estimates to $\sin(t\theta)/\sin(\theta)$ for $t \in [0,1]$. The polynomials are of the form $p(x,t)$, where $x = \cos(\theta)$. The polynomials have a constant $u \in [0,1]$ that I want to choose to minimize the maximum error. The mathematical derivation is irrelevant, so I have skipped those details. I wrote code (Mathematica 11.3) to do this and plotted the minimax result as a function of $u$. (I omitted the NMinimize call for h[u] in this code sample.)
a[i_?NumericQ, t_?NumericQ] := If[i >= 1, a[i - 1, t]*(t^2 - i^2)/(i*(2*i + 1)), t]
p[n_?NumericQ, y_?NumericQ, t_?NumericQ] := (sum = a[n, t]; For[i = n - 1, i >= 0, i--, sum = a[i, t] + sum*y]; sum)
f[n_?NumericQ, x_?NumericQ, t_?NumericQ] := Sin[t*ArcCos[x]]/Sin[ArcCos[x]] - p[n, x - 1, t]
g[n_?NumericQ, u_?NumericQ, x_?NumericQ, t_?NumericQ] := Abs[f[n, x, t] - u*a[n, t]*(x - 1)^n]
h[n_?NumericQ, u_?NumericQ] := (result = NMaximize[{g[n, u, x, t], 0 <= x <= 1 && 0 <= t <= 1}, {x, t}]; result[[1]])
Plot[h[8, u], {u, 0.7, 0.9}]
The output of Plot has some numerical anomalies.
![Output of Plot function, default method for NMaximize][1]
When I program this in C++ using double precision, the function h(u) is smooth. Evaluating h[8,0.75], Mathematica produces 0.000058529. Evaluating h[8,0.751], Mathematica produces 9.13505e-06. I did not expect the sawtooth-like behavior of the graph. The valleys do not show up in my C++ computations, which shows effectively a V-shaped graph with vertex near (0.85352, 1.91558e-05). I tried to change the working precision, but the sawtooth behavior persisted.
I switched the method to "Simulated Annealing." The output of the Plot function also has some anomalies.
![Output of Plot, simulated annealing][2]
The outputs at the two aforementioned locations are h[8,0.75] = 0.0000583938 and h[8,0.751] = 0.0000580131, but now the anomalies are in a different region of the graph.
Finally, I tried using "Differential Evolution" as the method. The output looks like what I expected.
![Output of Plot, differential evolution][3]
I know how to debug numerical issues in C++ code using a debugger, but I am a novice at Mathematica and wish to know whether there is some standard approach or set of tools that allows me to diagnose such issues. Also, is there some general advice on choosing the method for minimizing or maximizing? Or is this simply something one has to use trial-and-error to determine? Thank you.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=HGraph8Anomalies.png&userId=1449429
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=HGraph8SimulatedAnnealing.png&userId=1449429
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=HGraph8DifferentialEvolution.png&userId=1449429David Eberly2018-09-11T04:30:56ZGenerate a mesh in order to do a heat transfer analysis?
http://community.wolfram.com/groups/-/m/t/1450254
Hi All,
I'm trying to generate a mesh in order to do a heat transfer analysis and have come up with the code in the attached file based on the Wolfram documentation and other posts in the community.
The mesh is broken up into 9 regions to which I would like to assign material properties. I've had a go at assigning point markers to nodes which I then intended to use to assign material properties. However, I've come unstuck because the number of incidents I've created doesn't match the number of nodes (i.e. entries in mesh["Coordinates"]) due to each region being meshed separately and there being two incident IDs per node at each interface between regions. I'm new to Mathematica so I'd be very grateful if anyone could shed some light on how best to go about this. Also, is there a way to show the IDs of all nodes (PointElements?) rather than just the ones on the boundary? I've written my own code to solve for heat transfer in a separate notebook. Many thanks, ArchieArchie Watts-Farmer2018-09-11T14:56:59ZHow to display underlaying symbolic function but not calculating the result
http://community.wolfram.com/groups/-/m/t/1449847
Hi All,
given the data list:-
data = {24, 5, -9, 105, 15, 111};
Table[data[[i]], {i, 1, 6}] (* which shows the values in the data list *)
Out[]= {24, 5, -9, 105, 15, 111}
BUT I want Table[data[[i]],{i,1,6}]
**(* How can I code the output to show the underlying symbolic expressions, as shown below? *)**
{ data[[1]], data[[2]], data[[3]], data[[4]], data[[5]], data[[6]] }
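One hedged way is to wrap each part in HoldForm, which keeps the expression unevaluated for display; ReleaseHold recovers the values:

```mathematica
data = {24, 5, -9, 105, 15, 111};
held = HoldForm[data[[#]]] & /@ Range[6]
(* displays {data[[1]], data[[2]], ..., data[[6]]} *)
ReleaseHold[held]  (* back to {24, 5, -9, 105, 15, 111} *)
```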
Many thanks for any help your can offer.
Lea...Lea Rebanks2018-09-11T10:20:06Z[Wellin] Share/Discuss your solutions to select exercises!
http://community.wolfram.com/groups/-/m/t/1442768
First of all, I hope that the contents of this thread do not violate copyrights or the will of any co-author and that we are allowed to freely post, share, and discuss exercises/exercise solutions, especially our own solutions, found in texts co-authored by [Wellin][1] et al. (Gaylord, Kamin, Wellin). Due to legal concerns, maybe we cannot republish the exercise problem statement verbatim or in any other copied form (screenshot/image, photo/image, scan/image)?
*Mathematica* learners and readers of these popular programming intro texts should find this collective discussion thread helpful, everyone, especially beginners, is welcome to post, share, ask. When working with his books I try to solve an exercise on my own, then check the official **.nb** solution (not the **.pdf** file), compare, and merge the two solutions, if mine differs conceptually. I include further edits in the **.nb** file, such as code rearrangement/reformatting, further textual explanations, sometimes corrections, text coloring, etc., basically improving the personal usefulness of the solution, e.g. for an eventual [future second read][2].
There are 5 notable titles so far:
- **EPM1** (2016)
- **PWM1** (2013)
- **IPM3** (2005)
- **IPM2** (1996)
- **IPM1** (1993)
Examples from one book can be re-found as exercises in another book, and vice versa, and there is much overlap and similarity of style, text, and exercises among the 5 books. All the material is imho introductory and only suitable for beginners. Like myself.
If you enjoy learning from (one of) these intro texts as much as I do, then this thread shall become the place for you to participate and discuss particular things thereof (solutions, text, questions, wishes, typos, criticism, etc). We would love hearing from you!
[1]: https://www.programmingmathematica.com/books.html
[2]: https://www.youtube.com/watch?v=lItgAV6Ly6MRaspi Rascal2018-09-07T10:55:45ZDoes any US College offer online Mathematica-based introductory calculus?
http://community.wolfram.com/groups/-/m/t/1429873
Someone I know is looking for a course teaching introductory calculus (univariate differential, integral) using the Wolfram Language / Mathematica. The course needs to be taught on an online basis from an accredited US college such that the credits received could be transferred back to his home university. To my surprise, my 10 minutes with a search engine did not find anything current. Is anyone aware of such an offering?Seth Chandler2018-08-29T22:49:41ZWorkarounds for network timeouts when trying to use Interpreter["Person"]
http://community.wolfram.com/groups/-/m/t/1419777
I frequently get network timeout problems when using Interpreter in ways that require connectivity to the Wolfram server. My network connection in general is quite fast, so I don't think that's the issue. Here's an example.
We have a list of presidents using their common names.
presidents= {"George Washington", "John Adams", "Thomas Jefferson", "James \
Madison", "James Monroe", "John Quincy Adams", "Andrew Jackson", \
"Martin Van Buren", "William Henry Harrison", "John Tyler", "James K. \
Polk", "Zachary Taylor", "Millard Fillmore", "Franklin Pierce", \
"James Buchanan", "Abraham Lincoln", "Andrew Johnson", "Ulysses S. \
Grant", "Rutherford B. Hayes", "James A. Garfield", "Chester A. \
Arthur", "Grover Cleveland", "Benjamin Harrison", "Grover Cleveland \
(2nd term)", "William McKinley", "Theodore Roosevelt", "William \
Howard Taft", "Woodrow Wilson", "Warren G. Harding", "Calvin \
Coolidge", "Herbert Hoover", "Franklin D. Roosevelt", "Harry S. \
Truman", "Dwight D. Eisenhower", "John F. Kennedy", "Lyndon B. \
Johnson", "Richard Nixon", "Gerald Ford", "Jimmy Carter", "Ronald \
Reagan", "George H. W. Bush", "Bill Clinton", "George W. Bush", \
"Barack Obama", "Donald Trump"};
I now want to represent them as entities so that users can get further information on them. So, here's the plan. I want to make one call to Interpreter rather than Map Interpreter over a list of names.
    presidentEntities = Interpreter["Person", True &, Missing[], AmbiguityFunction -> First][presidents]
When I do this, I frequently get a network timeout error. Now it's Sunday afternoon here in the US and I wouldn't think this was peak load time. Moreover, I've gotten the error -- and similar errors for other Interpreter calls -- on many other occasions. Moreover, I don't think 45 names should really tax the Wolfram server too hard.
So, are there any user workarounds for this? (I've tried the ugly method of breaking up the list into pieces and then reassembling, but even that sometimes fails). Am I doing something wrong? Is there a way of making some Interpreter code local?
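One workaround along these lines is simply to retry the whole batch a few times before giving up. This is a sketch, assuming failed interpretations come back as Failure objects; `retryInterpret` is a hypothetical helper, not a built-in:

    (* Hypothetical retry wrapper: re-issue the batch call while any result is a Failure *)
    retryInterpret[names_List, tries_: 3] := Module[{res},
      Do[
       res = Interpreter["Person", AmbiguityFunction -> First][names];
       If[FreeQ[res, _Failure], Return[res]],
       {tries}];
      res]

    presidentEntities = retryInterpret[presidents];

If the whole batch still fails, the pieces can be retried individually; the key point is just to test for `_Failure` before accepting the result.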
Is there some way of determining that the Wolfram Server is having a bad day or hour or suffering a particularly heavy load?
More generally, is there something that can be done about Wolfram|Alpha throughput? The Wolfram Language (as opposed to Mathematica) depends on access to vast amounts of external data. But if I can't count on reliable service, it discourages use of programs and constructs that depend on that data and the Entity construct.Seth Chandler2018-08-22T22:35:58Z[Event] Shanghai User Meetup Review
http://community.wolfram.com/groups/-/m/t/1450141
*All notebooks used in the presentation can be downloaded at the end of the post.*
----------
The idea of the post is to encourage our lovely users to share their experience about Wolfram products in local meetup groups, building up friendship and partnership among our community.
On 9/8/2018 Saturday, WRI Developer Mr. Shenghui Yang hosted a 12-people private Mathematica user panel to discuss the latest R&D achievement of Wolfram Language V11.1, 2 and 3, including
- Updates and Improvements for Geo system and Entity
- Neural Network in V11.3
- Wolfram Cloud user interface and deployment
- Several appealing examples of Mathematica dynamic feature in K-12 teaching project
![lecturing][1]
![beginning][2]
## Geo system ##
To make Wolfram Language features more accessible and relatable to our domestic users, Shenghui mixed elements of his real life into the Wolfram Language. The whole presentation became his daily-life storytelling built on the Wolfram Language knowledge base.
The W|A command-line interface briefly describes the weather conditions on the day of the event
![weather][3]
GeomagneticModelData and GeoGravityModelData demonstrate important geophysical properties of Shanghai at the moment of the presentation ;-) No need to worry about any anomaly
![geodata][4]
GeoPosition with a customized GeoMarker visualizes the location of the event along the riverbank of the Yangtze
![marker][5]
GeoDistance, GeoPath and several powerful projection options show our users how the Wolfram headquarters relates to the meeting place. One of ~530 projection types is used in the example.
In[1]:= GeoProjectionData["LambertAzimuthal"]
Out[1]= {LambertAzimuthal, {Centering -> {0, 0}, GridOrigin -> {0, 0}, ReferenceModel -> 1}}
In[2]:= GeoProjectionData[] // Short
Out[2]//Short= {Airy, Aitoff, Albers, AmericanPolyconic, ApianI, <<525>>, WinkelTripel}
![path][6]
GeoArea + GeoPosition: after marking the places the host visits most frequently in Shanghai, they form a large triangle. Combining EntityValue and related functions makes it easy to extract the ratio of the triangle's area to Shanghai's
![area][7]
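The area-ratio computation described above can be sketched roughly like this (the triangle coordinates and the city entity are illustrative guesses, not the actual ones from the talk):

    (* Illustrative: area of a triangle of visited places relative to Shanghai's area *)
    tri = Polygon[GeoPosition[{{31.23, 121.47}, {31.30, 121.50}, {31.20, 121.60}}]];
    triArea = GeoArea[tri];
    cityArea = Entity["City", {"Shanghai", "Shanghai", "China"}]["Area"];
    UnitConvert[triArea/cityArea]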
GeoPath and TravelDirectionsData also accurately reported how long it takes to route through and visit all three marked places
![travel][8]
Finally, Shenghui mentioned that the event was hosted in a nice tea house once owned by YueSheng Du, the Shanghai-born mob king and "Godfather of the Far East" during the Chiang Kai-shek era. Related background information can be retrieved both with built-in Entity functions and with an external service call to Bing Search V5
![history][9]
![bing][10]
## Discussion on K-12 Math Topics ##
This section was aimed specifically at users in the K-12 education industry, and at parents whose kids are in this age range and who are looking for new ways to help them understand school materials.
Shenghui and several local users reached out to domestic teachers in public and private schools, ranging from elite to mid-level.
Real test problems were collected for the demo. A brief moment was left for the audience to think about the challenging problems before seeing the notebook with the solution. The solutions use Mathematica's strong built-in visualization, dynamic, and CloudDeploy features. One of the most stressful and painful problems in current domestic K-12 math education is that students need to take math-olympiad-level exams for middle and high school. Most kids have no choice but to recite hard-coded hacks to solve the tricky problems in a short time. The lack of understanding and intuitive explanation makes the process even more challenging. The host brought a new vision to these problems via graphical presentation.
Here is an example of the non-stop trains problem with a graphical explanation (a 10th-grade math problem). The question asks for the distance between each crossing point. The demo is designed to help students understand the physical process and solve the problem by hand in the exam, rather than to hand them a Mathematica solution
![question][11]
![solution][12]
## Neural Network and AI ##
The presentation is based on the updated version of [Taliesin's][13] [notebook][14] and demo session on [YouTube][15] (some NN layers' names were updated in V11.3, e.g. DotPlusLayer -> LinearLayer). The examples are fully tested in the attached notebook for V11.3. Though the topic is quite involved for first-time users, the audience was eager to learn the Wolfram Language. Shenghui and his college roommate, a [Tencent AI Lab][16] senior researcher and also a veteran Mathematica user, have collaboratively initiated a bi-weekly online discussion for domestic Mathematica users. The one-hour AI-topic paper-reading session aims to familiarize users with the basic NN layers in the Wolfram Language and with the different networks available in the [Wolfram Neural Network Repository][17].
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1.jpg&userId=23928
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2.jpg&userId=23928
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3.png&userId=23928
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4.png&userId=23928
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5.png&userId=23928
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6.png&userId=23928
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7.png&userId=23928
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=8.png&userId=23928
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9.png&userId=23928
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10.png&userId=23928
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=11.png&userId=23928
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=12.png&userId=23928
[13]: https://twitter.com/taliesinb
[14]: https://wolfr.am/gLSyxCEE
[15]: https://www.youtube.com/watch?v=FnpqI4REiak
[16]: https://ai.tencent.com/ailab/index.html
[17]: https://resources.wolframcloud.com/NeuralNetRepository/Shenghui Yang2018-09-11T14:30:05ZImageAugmentationLayer on image and target mask
http://community.wolfram.com/groups/-/m/t/1445573
Hi, I'd like to use ImageAugmentationLayer in my binary image segmentation neural network. However, it seems I can't get the ImageAugmentationLayer to apply exactly the same transform to my input image as to my target mask. Is there a hidden way to do this that's not mentioned in the docs? It seems every invocation of the layer uses a new random crop, but I need the _exact same_ random crop on pairs of images.
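Until that is supported, one workaround is to draw the random crop window once and apply it to both images outside the net. This is a sketch; `randomCropPair` is a hypothetical helper, and it assumes the image and mask have the same dimensions:

    (* Hypothetical: apply the SAME random crop to an image and its target mask *)
    randomCropPair[img_Image, mask_Image, {w_Integer, h_Integer}] :=
     Module[{dims = ImageDimensions[img], x, y},
      x = RandomInteger[{1, dims[[1]] - w + 1}];   (* left column of the crop *)
      y = RandomInteger[{1, dims[[2]] - h + 1}];   (* top row of the crop *)
      {ImageTake[img, {y, y + h - 1}, {x, x + w - 1}],
       ImageTake[mask, {y, y + h - 1}, {x, x + w - 1}]}]

The cropped pairs can then be fed to NetTrain directly, bypassing ImageAugmentationLayer.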
Cheers!Carl Lange2018-09-09T12:40:13ZCreating and Evaluating MCQs in the Wolfram Language
http://community.wolfram.com/groups/-/m/t/1447407
Dear community members, does anybody know of any available resources to get started creating and automatically grading MCQs (multiple choice questions) in the Wolfram Language?
Many thanks in advance,
RubenRuben Garcia Berasategui2018-09-10T10:41:14ZAvoid strange results from LinearSolve (and Solve)?
http://community.wolfram.com/groups/-/m/t/1447563
I encounter strange behaviour of LinearSolve (and Solve):
Given a symmetric, positive matrix M (4x4, but nasty expressions) I try to solve the linear system
M.x=rhs
with rhs=(1,0,0,0).
Using LinearSolve[M, rhs] I obtain an answer that yields Indeterminate values when evaluated for special values of M. The same answer is obtained with Solve.
But if I calculate the Inverse of M and multiply it by rhs, I obtain the correct result, without any Indeterminate entries.
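On a small symbolic example the difference between the two approaches looks like this (a toy 2x2 sketch, not the poster's actual matrix):

    (* Toy comparison: LinearSolve vs. explicit Inverse on a symbolic symmetric matrix *)
    m = {{a, b}, {b, c}};
    rhs = {1, 0};
    sol1 = LinearSolve[m, rhs];       (* pivot choices can introduce removable singularities *)
    sol2 = Simplify[Inverse[m].rhs]   (* {c, -b}/(a c - b^2) *)

Simplifying `sol1` (or applying Simplify before substituting special values) can sometimes remove the spurious Indeterminate entries.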
For larger matrices this bypass would become too involved.Alois Steindl2018-09-10T11:30:42Z[WSS18] Reinforcement Q-Learning for Atari Games
http://community.wolfram.com/groups/-/m/t/1380007
## Introduction ##
This project aims to create a neural network agent that plays Atari games. The agent is trained using Q-Learning. The agent has no a priori knowledge of the game; it learns by playing the game and only being told when it loses.
## What is reinforcement learning? ##
Reinforcement learning is an area of machine learning inspired by behavioral psychology. The agent learns what to do, given a situation and a set of possible actions to choose from, in order to maximize a reward. Therefore, to model a problem as a reinforcement learning problem, the game should have a set of states, a set of actions that transform one state into another, and a reward associated with each state. The mathematical formulation of a reinforcement learning problem is called a Markov Decision Process (MDP).
![A visual representation of the reinforcement learning problem][1]
Image From:https://medium.freecodecamp.org/diving-deeper-into-reinforcement-learning-with-q-learning-c18d0db58efe
## Markov Decision Process ##
Before applying a Markov decision process to the problem, we need to make sure the problem satisfies the Markov property: the current state completely represents the state of the environment. In short, the future depends only on the present.
An MDP can be defined by **(S,A,R,P,γ)** where:
- S — set of possible states
- A — set of possible actions
- R — probability distribution of the reward, given a (state, action) pair
- P — probability distribution over how likely each state is to be the new state, given a (state, action) pair; also known as the transition probability
- γ — reward discount factor
At the initial state $S_{0}$, the agent chooses action $A_{0}$. Then the environment gives reward $R_{0}\sim R(\cdot|S_{0},A_{0})$ and next state $S_{1}\sim P(\cdot|S_{0},A_{0})$. This repeats until the environment ends.
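The interaction loop just described can be sketched as follows (envReset and envStep are hypothetical stand-ins for an actual environment interface, not built-ins):

    (* Minimal agent-environment loop: act, observe reward and next state, repeat until done *)
    runEpisode[env_, policy_, maxSteps_: 1000] :=
     Module[{s = envReset[env], total = 0, a, step},
      Do[
       a = policy[s];                  (* A_t chosen from the current state *)
       step = envStep[env, a];         (* <|"Observation" -> ..., "Reward" -> ..., "Done" -> ...|> *)
       total += step["Reward"];
       If[step["Done"], Break[], s = step["Observation"]],
       {maxSteps}];
      total]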
## Value Network ##
In value-based RL, the input is the current state or a combination of a few recent states, and the output is the estimated future reward of every possible action at that state. The goal is to optimize the value function so that the predicted value is close to the actual reward. In the following graph, each number in a box represents the distance from that box to the goal.
![Value network example][2]
Image From:https://medium.freecodecamp.org/diving-deeper-into-reinforcement-learning-with-q-learning-c18d0db58efe
## Deep Q-Learning ##
Deep Q-learning is the algorithm I used to construct my agent. The basic idea of the Q function is to take the state and an action and output the corresponding sum of rewards until the end of the game. In deep Q-learning we use a neural network as the Q function, so we can feed in one state and let the network generate predictions for all possible actions.
The Q function is stated as follows.
$$Q(S_{t},A) = R_{t+1}+\gamma \max_{A} Q(S_{t+1},A)$$
where $Q(S_{t},A)$ is the predicted sum of rewards given the current state and the selected action, $R_{t+1}$ is the reward received after taking the action, $\gamma$ is the discount factor, and $\max_{A} Q(S_{t+1},A)$ is the maximal prediction for the next state.
As we can see, given the current state and action, the Q function outputs the current reward plus the maximum of the predictions for the next state. The function iteratively predicts the reward until the end of the game, where Q[S,A] = R. Therefore we can calculate the loss by subtracting the prediction for the current state from the sum of the reward and the prediction for the next state. When the loss equals 0, the function perfectly predicts the reward of all actions. In a sense, the Q function is predicting the future value of its own prediction. People might ask how such a function could ever converge. Indeed, it is usually hard to make it converge, but when it does, the performance is very good. There are many techniques that can be used to speed up the convergence of the Q function; I will talk about a few that I used in this project.
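As a tiny worked example of the update above (all numbers made up; a tabular Q rather than a network, just to show the arithmetic):

    (* One Q-update on a toy table: taking "R" in state 1 yields reward 1 and leads to state 2 *)
    q = <|{1, "L"} -> 0., {1, "R"} -> 0., {2, "L"} -> 0., {2, "R"} -> 0.|>;
    gamma = 0.9; alpha = 0.5;
    target = 1 + gamma*Max[q[{2, "L"}], q[{2, "R"}]];   (* R + gamma max_A Q(S', A) = 1. *)
    q[{1, "R"}] += alpha*(target - q[{1, "R"}]);        (* moves Q(1, "R") from 0. to 0.5 *)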
## Experience Replay ##
Experience replay means that the agent remembers the states it has experienced and learns from those experiences during training. It is a more efficient way of using the generated data, since each experience is learned from multiple times. This matters when gaining experience is expensive for the agent. Since the Q function usually doesn't converge in a short time, the outcomes of many experiences are similar, so multiple passes over the same data are useful.
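A minimal replay buffer along these lines (names are illustrative; the project's actual buffer is the `processed` association shown later):

    (* Sketch: bounded replay buffer with uniform random sampling *)
    buffer = {};
    pushExperience[e_] := (AppendTo[buffer, e];
       If[Length[buffer] > 1000, buffer = Rest[buffer]];)   (* drop the oldest entry *)
    sampleBatch[n_Integer] := RandomChoice[buffer, n]        (* sample with replacement *)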
## Decaying Random Factor ##
The random factor is the probability that the agent chooses a random action instead of the best predicted action. It lets the agent start out as a random player, increasing the diversity of the samples. The random factor decreases as more games are played, so the agent is increasingly reinforced on its own action patterns.
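This is the usual epsilon-greedy scheme; a sketch matching the decay used later in the generator (`decay^batch` with a floor of 0.1):

    (* Epsilon-greedy action choice with a decaying, floored random factor *)
    chooseAction[net_, obs_, batch_, decay_: 0.95, floor_: 0.1] :=
     If[RandomReal[] <= Max[decay^batch, floor],
      RandomChoice[{0, 1}],   (* explore: random action *)
      net[obs]]               (* exploit: the class decoder returns the argmax action *)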
## Combine Multiple Observations As Input ##
The following image shows a single frame taken from the Atari game Breakout. From this image, the agent can capture information about the location of the ball, the location of the paddle, etc. But some important information is missing. If you were the agent and this image were shown to you, what action would you choose? Feel like something is missing? Is the ball going right or left? Is it going up or down?
![breakout frame1][3]
Generated Using openAI Gym
The following images are two consecutive frames taken from Breakout. From these two images, the agent can capture the direction of the ball and also its speed. People tend to forget this, since processing recent memories while playing a game comes naturally to us, but not to a reinforcement learning agent.
![breakout frame1][4]![frame 2][5]
Generated Using openAI Gym
## Agent Play in CartPole environment ##
The main environment in which the agent learns and is tested is the CartPole environment. The environment consists of two movable parts. One is the cart, which is controlled by the agent and has two possible actions at every state: move left or move right. The other is the pole. The environment simulates the effect of gravity on the pole, which makes it fall to the left or right depending on its angle to the horizon. For this environment to be considered solved, the average number of steps the agent survives over 100 games must exceed 195. The following graph is a visual representation of the environment: the blue rectangle represents the pole, the black box is the cart, and the black line is the horizon.
![cart pole sample][6]
First, let's create an environment
$env = RLEnvironmentCreate["WLCartPole"]
Then, initialize a network for this environment and a generator
policyNet =
NetInitialize@
NetChain[{LinearLayer[128], Tanh, LinearLayer[128], Tanh,
LinearLayer[2]}, "Input" -> 8,
"Output" -> NetDecoder[{"Class", {0, 1}}]];
generator := creatGenerator[$env, 20, 10000, False, 0.98, 1000, 0.95, False]
The generator function plays the game and generates input-output pairs to train the network.
Inside the generator, the replay buffer (processed) is initialized; rewardList records the performance, and best records the peak performance.
If[#AbsoluteBatch == 0,
processed = <|"action"->{},"observation"->{},"next"->{},"reward"->{}|>;
$rewardList = {};
$env=env;
best = 0;
];
Then the environment data are generated by the game function and preprocessed. At the start of training, the generator produces more data to fill the replay buffer.
If[#AbsoluteBatch == 0,
experience = preprocess[game[start,maxEp,#Net, render, Power[randomDiscount,#AbsoluteBatch], $env], nor]
,
experience = preprocess[game[1,maxEp,#Net, render, Power[randomDiscount,#AbsoluteBatch],$env], nor]
];
The game function is below; it joins the current observation with the previous one to form the input to the network.
game[ep_Integer,st_Integer,net_NetChain,render_, rand_, $env_, end_:Function[False]]:= Module[{
states, list,next,observation, punish,choiceSpace,
state,ob,ac,re,action
},
choiceSpace = NetExtract[net,"Output"][["Labels"]];
states = <|"observation"->{},"action"->{},"reward"->{},"next"->{}|>;
Do[
state["Observation"] = RLEnvironmentReset[$env]; (* reset every episode *)
ob = {};
ac = {};
re = {};
next = {};
Do[
observation = {};
observation = Join[observation,state["Observation"]];
If[ob=={},
observation = Join[observation,state["Observation"]]
,
observation = Join[observation, Last[ob][[;;Length[state["Observation"]]]]]
];
action = If[RandomReal[]<=Max[rand,0.1],
RandomChoice[choiceSpace]
,
net[observation]
];
(*Print[action];*)
AppendTo[ob, observation];
AppendTo[ac, action];
state = RLEnvironmentStep[$env, action, render];
If[Or[state["Done"], end[state]],
punish = - Max[Values[net[observation,"Probabilities"]]] - 1;
AppendTo[re, punish];
AppendTo[next, observation];
Break[]
,
AppendTo[re, state["Reward"]];
observation = state["Observation"];
observation = Join[observation, ob[[-1]][[;;Length[state["Observation"]]]]];
AppendTo[next, observation];
];
,
{step, st}];
AppendTo[states["observation"], ob];
AppendTo[states["action"], ac];
AppendTo[states["reward"], re];
AppendTo[states["next"], next];
,
{episode,ep}
];
(* close the $environment when done *)
states
]
The preprocess function flattens the input and has an option to normalize the observations.
preprocess[x_, nor_:False] := Module[{result},(
result = <||>;
result["action"] = Flatten[x["action"]];
If[nor,
result["observation"] = N[Normalize/@Flatten[x["observation"],1]];
result["next"] = N[Normalize/@Flatten[x["next"],1]];
,
result["observation"] = Flatten[x["observation"],1];
result["next"] = Flatten[x["next"],1];
];
result["reward"] = Flatten[x["reward"]];
result
)]
Let's continue with the generator: after getting the data from the game, it measures the performance and records it.
NotebookDelete[temp];
reward = Length[experience["action"]];
AppendTo[$rewardList,reward];
temp=PrintTemporary[reward];
Record the net with the best performance
If[reward>best,best = reward;bestNet = #Net];
Add this experience to the replay buffer
AppendTo[processed["action"],#]&/@experience["action"];
AppendTo[processed["observation"],#]&/@experience["observation"];
AppendTo[processed["next"],#]&/@experience["next"];
AppendTo[processed["reward"],#]&/@experience["reward"];
Make sure the total size of replay buffer does not exceed the limit
len = Length[processed["action"]] - replaySize;
If[len > 0,
processed["action"] = processed["action"][[len;;]];
processed["observation"] = processed["observation"][[len;;]];
processed["next"] = processed["next"][[len;;]];
processed["reward"] = processed["reward"][[len;;]];
];
Add the input of the network to the result
pos = RandomInteger[{1,Length[processed["action"]]},#BatchSize];
result = <||>;
result["Input"] = processed["observation"][[pos]];
Calculate the output based on the next state and reward, and add it to the result
predictionsOfCurrentObservation = Values[#Net[processed["observation"][[pos]],"Probabilities"]];
rewardsOfAction = processed["reward"][[pos]];
maxPredictionsOfNextObservation = gamma*Max[Values[#]]&/@#Net[processed["next"][[pos]],"Probabilities"];
temp = rewardsOfAction + maxPredictionsOfNextObservation;
MapIndexed[
(predictionsOfCurrentObservation[[First@#2,(#1+1)]]=temp[[First@#2]])&,(processed["action"][[pos]]-First[NetExtract[net,"Output"][["Labels"]]])
];
result["Output"] = predictionsOfCurrentObservation;
result
In the end, we can start training
trained =
NetTrain[policyNet, generator,
LossFunction -> MeanSquaredLossLayer[], BatchSize -> 32,
MaxTrainingRounds -> 2000]
## Performance of the agent ##
![enter image description here][7]
The graph above shows the performance of the agent over 1000 games in the CartPole environment. The agent starts with random play, which survives only a small number of steps. The performance stays low until about 800 games, but then starts to increase exponentially. At the end of training, the performance jumps from 3k to 10k (the maximal number of steps per game) within 4 games. This shows that although the Q function is hard to converge, when it does converge the performance is very good.
## Future Directions ##
The current agent uses the classical DQN as its major structure. Other techniques like Noisy Nets, DDQN, Prioritized Replay, etc. can help the Q function converge in a shorter time. Other algorithms like the Rainbow algorithm, which is based on Q-learning, will be the next step of this project.
The code can be found on [GitHub][8].
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=rl.png&userId=1363029
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=vn.png&userId=1363029
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=breakout1.png&userId=1363029
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=breakout1.png&userId=1363029
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=breakout2.png&userId=1363029
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=cp.png&userId=1363029
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=performance.png&userId=1363029
[8]: https://github.com/ianfanx/wss2018ProjectIan Fan2018-07-11T20:52:09ZHighlight sections of code when I click my cursor at a particular point?
http://community.wolfram.com/groups/-/m/t/1444669
Is there a way to have Mathematica automatically highlight sections of code when I click my cursor at a particular point in my code? For example, if I click at the second bracket in this: <br>
`F[G[H[t]]]` <br>
It would highlight the entire `[H[t]]` rather than just the matching end brackets?Joshua Champion2018-09-08T17:06:46ZPerform Breadth First Search algorithm (BFS) for the 8-puzzle game?
http://community.wolfram.com/groups/-/m/t/1448725
I've tried to implement the Breadth First Search (BFS) algorithm in MMA to solve the 8-puzzle game. In some cases I run out of memory, but in other cases it solves the puzzle without problems.
Here is the code I am using for BFS. In the case of inicial = {{1, 6, 2}, {0, 4, 3}, {7, 5, 8}} you get the desired answer; run the following code and see the result
mutacion[tablero_List] :=
Module[{posc, directions, newposs, olddigits},
posc = Flatten[Position[tablero, 0]];
directions = Select[Tuples[Range[-1, 1], 2], Norm[#] == 1 &];
newposs = (posc + #) & /@ directions;
newposs = Select[newposs, FreeQ[#, 4] \[And] FreeQ[#, 0] &];
olddigits = Extract[tablero, newposs];
MapThread[
ReplacePart[tablero, {#1 -> 0, posc -> #2}] &, {newposs,
olddigits}]]
q = {}; map = {};
inicial = {{1, 6, 2}, {0, 4, 3}, {7, 5, 8}};
final = {{1, 2, 3}, {4, 5, 6}, {7, 8, 0}};
AppendTo[q, {inicial, 0}]
AppendTo[map, {inicial, 0}]
While[q != {}, prim = First@MinimalBy[q, Last];
hijos = Flatten[Most[MapAt[mutacion, prim, 1]], 1];
If[Not@MemberQ[map, #, Infinity],
AppendTo[q, {#, Last[prim] + 1}]] & /@ hijos;
If[Not@MemberQ[map, #, Infinity],
AppendTo[map, {#, Last[prim] + 1}]] & /@ hijos;
q = DeleteCases[q, prim, Infinity];
If[MemberQ[hijos, final],
Print["Found at the level : ", Last[prim] + 1]; Break[]]]
but when inicial = {{2, 1, 5}, {6, 3, 4}, {8, 0, 7}} I have waited for more than 15 minutes without getting any response. Maybe the problem is with the command MemberQ, since that command must make many comparisons over increasingly large lists. I want to ask you, please, can you help me correct my mistakes and improve my code to obtain the solutions? Thanks in advance; your help is very necessary and important.Luis Ledesma2018-09-10T19:12:54ZProgram the distance Jaro?
http://community.wolfram.com/groups/-/m/t/1416937
Hello everyone. I'm trying to program the Jaro distance as described on [this page][1]. The following code works well for the pairs ("MARTHA", "MARHTA") and ("DIXON", "DICKSONX"), but when I try ("JELLYFISH", "SMELLYFISH") I get a wrong result because the code double-counts the repeated S in "SMELLYFISH". Because of this error I have not been able to finish successfully. Here is what I have programmed so far:
uno = "DIXON"; dos ="DICKSONX" ;
rep = Characters[uno] \[Intersection] Characters[dos]
scope = Max[StringLength[uno], StringLength[dos]]/2 - 1
inter = Transpose[{Flatten[Position[Characters[uno], #] & /@ rep],
Flatten[Position[Characters[dos], #] & /@ rep]}]
m = Select[inter, Abs[#[[1]] - #[[2]]] < scope &]
prb = Select[m, #[[1]] != #[[2]] &]
trans = Length[DeleteCases[Position[prb, Reverse[#]] & /@ prb, {}]]/2
1/3 (Length[m]/StringLength[uno] + Length[m]/StringLength[dos] + (
Length[m] - trans)/Length[m])
Can someone help me solve this problem? Maybe the approach I'm using is wrong. Unfortunately I have not found a way to fix my code, and I hope someone can guide me toward my goal. Any help is welcome; thank you in advance.
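For reference, here is a sketch of the standard matching procedure that avoids the double-counting: each character of the second string may be matched at most once, which is tracked with a flag array. This is a hypothetical reimplementation, not a fix of the code above:

    (* Jaro similarity with one-time matching of characters in the second string *)
    jaro[s1_String, s2_String] := Module[{a, b, win, used, m1, m2, m, t},
      a = Characters[s1]; b = Characters[s2];
      win = Floor[Max[Length[a], Length[b]]/2] - 1;
      used = ConstantArray[False, Length[b]];    (* flags: b[[j]] already matched? *)
      m1 = {}; m2 = {};
      Do[
       Do[
        If[! used[[j]] && a[[i]] === b[[j]],
         used[[j]] = True; AppendTo[m1, a[[i]]]; AppendTo[m2, j]; Break[]],
        {j, Max[1, i - win], Min[Length[b], i + win]}],
       {i, Length[a]}];
      m = Length[m1];
      If[m == 0, Return[0.]];
      t = Count[MapThread[UnsameQ, {m1, b[[Sort[m2]]]}], True]/2;  (* transpositions *)
      N[(m/Length[a] + m/Length[b] + (m - t)/m)/3]]

    jaro["MARTHA", "MARHTA"]  (* 0.944444 *)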
[1]: https://rosettacode.org/wiki/Jaro_distanceLuis Ledesma2018-08-21T04:16:21ZCloudDeploy a Wolfram dataset?
http://community.wolfram.com/groups/-/m/t/1441795
I have tried
CloudDeploy[ResourceData[ResourceObject["Meteorite Landings"] ] ]
This aborts without any error message.Sag Mk2018-09-06T22:25:32ZCreate a right hand grid to match a textbook perspective?
http://community.wolfram.com/groups/-/m/t/1447840
I am making some resource materials for a student, and I would like to match the grid used in the textbook. I can create x, y, z axes and turn the edge axes off, but I am unable to create the same orientation of the axes. Can this be done? I have tried ViewPoint, but cannot get a vertical z axis, a horizontal y axis, and a slanted x axis. If so, how?![enter image description here][1] Also, does Mathematica have a right-hand image?
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=RightHandGraph.jpg&userId=989681Peter Munro2018-09-10T16:36:25ZWA says '7x/x = 7' only if x ≠ 0, but has no problem with 'ax/x = a'. Why?
http://community.wolfram.com/groups/-/m/t/1443175
In WA, entering 'ax/x = a' will return 'True.'
If 'a' is replaced with a constant, the rule changes: '7x/x = 7' is reported as true only as long as x ≠ 0.
Why is 'ax/x = a' unconditionally true until 'a' is replaced with a constant?C R2018-09-07T17:41:20Z