Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Wolfram Language sorted by active
Lazy lists in Mathematica
http://community.wolfram.com/groups/-/m/t/1467915
Hi all. In this post I want to demonstrate a package I wrote (and am still tweaking here and there), which implements Haskell-style lazy lists in Mathematica. Anyone who wants to play around with it can find it here on GitHub:
https://github.com/ssmit1986/lazyLists
## What are lazy lists? ##
Before diving into implementation details, let's give some motivation for the use of lazy lists. A lazy list is a way to implicitly represent a long (possibly infinite) list. Of course, you cannot truly store an infinite list in your computer, so the central idea is to represent the list as a linked list consisting of two parts. This means that a `lazyList` will always look like this:
lazyList[first, tail]
Here, `first` is the first element of the list and `tail` is a held expression that, when evaluated, will give you the rest of the list, which is again a `lazyList` with a first element and a tail. So in other words: elements of a `lazyList` are only generated when they are needed and not before. This makes it possible to represent infinite lists and perform list operations on them. To get the elements of the list, one can simply evaluate the tail as often as needed to progress through the list.
For example, let's define `lazyRange[]` as the lazy list of all positive integers. Then
Map[#^2&, lazyRange[]]
becomes the infinite list of all squares, again represented as a lazy list. You can go even further, though. For example, you can generate the triangular numbers by doing a `FoldList` over the integers and then select the odd ones with a `Select`:
Select[FoldList[Plus, 0, lazyRange[]], OddQ]
which is yet another lazy list. So if we want the first 100 odd triangular numbers, we simply evaluate the tail of this lazy list 99 times to get them. In contrast, if you'd try to do this with a normal list, you could do something like this:
Select[FoldList[Plus, 0, Range[n]], OddQ]
However, what value should you pick for `n`? If you pick it too low, you won't get your 100 numbers. If it's too high, you're doing too much work. Of course you could write some sort of `While` loop, but the code for that would be less concise and doesn't really play into the strengths of Wolfram Language.
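For concreteness, here is roughly what that eager `While`-loop alternative could look like (a sketch, not code from the package):

```mathematica
(* Sketch of the eager While-loop version mentioned above: accumulate
   triangular numbers until 100 odd ones have been collected. *)
oddTriangulars = Module[{acc = 0, k = 1, res = {}},
   While[Length[res] < 100,
    acc += k; k++;
    If[OddQ[acc], AppendTo[res, acc]]
    ];
   res];
Take[oddTriangulars, 5] (* {1, 3, 15, 21, 45} *)
```

It works, but it is noticeably more verbose than the lazy one-liner, which is exactly the point being made here.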
## Implementation ##
To illustrate how my code works, I will reproduce some of the code in this blog post, though the package code is different in some respects for efficiency reasons.
The easiest way to prevent the tail from evaluating is to give `lazyList` the `HoldRest` attribute, which is how I implemented them:
Attributes[lazyList] = {HoldRest}
Next, we need some way to construct basic infinite lists like the positive integers. This is generally done recursively. My `lazyRange[]` function takes up to 2 arguments: a starting value (1 by default) and an increment value (also 1 by default):
lazyRange[start : _ : 1, step : _ : 1] := lazyList[start, lazyRange[start + step, step]]
We can extract the first element with `First` and advance through the list with `Last`:
First@lazyRange[]
First@Last@lazyRange[]
First@Last@Last@lazyRange[]
Out[100]= 1
Out[101]= 2
Out[102]= 3
We can also check that the tail of `lazyRange[]` is equal to the list of integers starting from 2:
In[103]:= Last@lazyRange[] === lazyRange[2]
Out[103]= True
Of course, iterating `Last` can be done with `NestList`, so if we want to get the first `n` elements of the lazy list, we can define the following special functionality for `Take` by setting an `UpValue` for `lazyList`:
lazyList /: Take[l_lazyList, n_Integer] := NestList[Last, l, n - 1][[All, 1]]
Take[lazyRange[], 10]
Out[105]= {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
As it turns out, nesting `Last` isn't actually the most efficient way to do this, so I ended up implementing `Take` with `ReplaceRepeated` and `Sow`/`Reap` to make the best use of the pattern matching capabilities of WL.
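For illustration, here is one way such a `ReplaceRepeated`/`Sow`/`Reap` version could look. This is a sketch of the idea only, not the package's actual implementation:

```mathematica
(* Sketch: walk the list with ReplaceRepeated, Sowing each head until a
   counter runs out, then Reap the collected elements. *)
Attributes[lazyList] = {HoldRest};
lazyRange[start : _ : 1, step : _ : 1] :=
  lazyList[start, lazyRange[start + step, step]];

lazyTake[l_lazyList, n_Integer?Positive] := First@Last@Reap[
    Block[{i = n},
     l //. lazyList[first_, tail_] /; i-- > 0 :> (Sow[first]; tail)]];

lazyTake[lazyRange[], 5] (* {1, 2, 3, 4, 5} *)
```

Each successful rule application Sows one head and evaluates one tail, so the list is forced exactly `n` steps deep and no further.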
Next, we want to be able to do transformations on `lazyList`s. The simplest one is `Map`: you simply create a `lazyList` with the function applied to the first element and then `Map` the function over the tail:
lazyList /: Map[f_, lazyList[first_, tail_]] := lazyList[
f[first],
Map[f, tail]
];
Map[#^2 &, lazyRange[]]
Take[%, 10]
Out[117]= lazyList[1, (#1^2 &) /@ lazyRange[1 + 1, 1]]
Out[118]= {1, 4, 9, 16, 25, 36, 49, 64, 81, 100}
Similarly, `Select` is easily implemented by repeatedly evaluating the tail until we find an element that satisfies the selector function `f`; at that point we return a `lazyList` with that element at its head. We use two definitions:
lazyList /: Select[lazyList[first_, tail_], f_] /; f[first] := lazyList[first, Select[tail, f]];
lazyList /: Select[lazyList[first_, tail_], f_] := Select[tail, f];
As an example, we can now find the first 10 numbers that are coprime to 12 and the first 10 squares that are 1 more than a multiple of 3:
Take[Select[lazyRange[], CoprimeQ[#, 12] &], 10]
Take[Select[Map[#^2 &, lazyRange[]], Mod[#, 3] === 1 &], 10]
Out[128]= {1, 5, 7, 11, 13, 17, 19, 23, 25, 29}
Out[129]= {1, 4, 16, 25, 49, 64, 100, 121, 169, 196}
I hope this gives a good enough overview of the benefits of `lazyLists` as well as giving you an idea of how to use them. In the package I tried to implement other list computational functionality (such as `MapIndexed`, `MapThread`, `FoldList`, `Transpose`, `Cases`, and `Pick`) for lazy lists as efficiently as possible.
Please let me know if you have further suggestions!
Sjoerd Smit, 2018-09-19

Eliminate G variable from this system of two equations?
http://community.wolfram.com/groups/-/m/t/1466372
Consider the following code:
Eliminate[{-A^16 + 3 A^11 F - 3 A^6 F^2 + A F^3 + 10 A^14 G -
20 A^9 F G + 10 A^4 F^2 G - 40 A^12 G^2 + 55 A^7 F G^2 -
15 A^2 F^2 G^2 + 80 A^10 G^3 - 85 A^5 F G^3 + 5 F^2 G^3 -
75 A^8 G^4 + 75 A^3 F G^4 - 25 A F G^5 + 75 A^4 G^6 -
75 A^2 G^7 + 25 G^8 + 25 A^6 H - 75 A^4 G H + 75 A^2 G^2 H -
25 G^3 H - 15 A^12 R + 5 A^7 F R + 10 A^2 F^2 R + 105 A^10 G R +
15 A^5 F G R + 5 F^2 G R - 300 A^8 G^2 R - 75 A^3 F G^2 R +
475 A^6 G^3 R + 25 A F G^3 R - 500 A^4 G^4 R + 375 A^2 G^5 R -
125 G^6 R - 25 A^8 R^2 + 25 A^3 F R^2 + 75 A^6 G R^2 +
50 A F G R^2 - 250 A^2 G^3 R^2 + 125 G^4 R^2 + 125 A^2 G R^3 ==
0 , -A^25 - 3125 A^10 B + 5 A^20 F - 10 A^15 F^2 + 10 A^10 F^3 -
5 A^5 F^4 + F^5 + 25 A^23 G + 15625 A^8 B G - 100 A^18 F G +
150 A^13 F^2 G - 100 A^8 F^3 G + 25 A^3 F^4 G - 275 A^21 G^2 -
31250 A^6 B G^2 + 850 A^16 F G^2 - 900 A^11 F^2 G^2 +
350 A^6 F^3 G^2 - 25 A F^4 G^2 + 1750 A^19 G^3 +
31250 A^4 B G^3 - 4000 A^14 F G^3 + 2750 A^9 F^2 G^3 -
500 A^4 F^3 G^3 - 7125 A^17 G^4 - 15625 A^2 B G^4 +
11375 A^12 F G^4 - 4500 A^7 F^2 G^4 + 250 A^2 F^3 G^4 +
19375 A^15 G^5 + 3125 B G^5 - 20000 A^10 F G^5 +
3750 A^5 F^2 G^5 - 35625 A^13 G^6 + 21250 A^8 F G^6 -
1250 A^3 F^2 G^6 + 43750 A^11 G^7 - 12500 A^6 F G^7 -
34375 A^9 G^8 + 3125 A^4 F G^8 + 15625 A^7 G^9 - 3125 A^5 G^10 +
25 A^21 R - 100 A^16 F R + 150 A^11 F^2 R - 100 A^6 F^3 R +
25 A F^4 R - 375 A^19 G R + 1125 A^14 F G R - 1125 A^9 F^2 G R +
375 A^4 F^3 G R + 2125 A^17 G^2 R - 4500 A^12 F G^2 R +
2625 A^7 F^2 G^2 R - 250 A^2 F^3 G^2 R - 4875 A^15 G^3 R +
6500 A^10 F G^3 R - 1500 A^5 F^2 G^3 R - 125 F^3 G^3 R -
1875 A^13 G^4 R + 3750 A^8 F G^4 R - 1875 A^3 F^2 G^4 R +
36250 A^11 G^5 R - 22500 A^6 F G^5 R + 1875 A F^2 G^5 R -
87500 A^9 G^6 R + 25000 A^4 F G^6 R + 103125 A^7 G^7 R -
9375 A^2 F G^7 R - 62500 A^5 G^8 R + 15625 A^3 G^9 R +
375 A^17 R^2 - 500 A^12 F R^2 - 125 A^7 F^2 R^2 +
250 A^2 F^3 R^2 - 6250 A^15 G R^2 + 6250 A^10 F G R^2 +
39375 A^13 G^2 R^2 - 25625 A^8 F G^2 R^2 + 1875 A^3 F^2 G^2 R^2 -
124375 A^11 G^3 R^2 + 48750 A^6 F G^3 R^2 - 2500 A F^2 G^3 R^2 +
215625 A^9 G^4 R^2 - 43750 A^4 F G^4 R^2 - 200000 A^7 G^5 R^2 +
12500 A^2 F G^5 R^2 + 75000 A^5 G^6 R^2 + 3125 F G^6 R^2 +
15625 A^3 G^7 R^2 - 15625 A G^8 R^2 - 1875 A^13 R^3 +
625 A^8 F R^3 + 1250 A^3 F^2 R^3 + 3125 A^11 G R^3 -
3125 A^6 F G R^3 + 25000 A^9 G^2 R^3 + 6250 A^4 F G^2 R^3 -
106250 A^7 G^3 R^3 - 3125 A^2 F G^3 R^3 + 175000 A^5 G^4 R^3 -
3125 F G^4 R^3 - 140625 A^3 G^5 R^3 + 46875 A G^6 R^3 -
3125 A^9 R^4 + 3125 A^4 F R^4 + 15625 A^7 G R^4 -
31250 A^5 G^2 R^4 + 31250 A^3 G^3 R^4 - 15625 A G^4 R^4 +
3125 A^5 R^5 == 0}, G]
mohamd fathi, 2018-09-19

Thoughts on a Python interface, and why ExternalEvaluate is just not enough
http://community.wolfram.com/groups/-/m/t/1185247
`ExternalEvaluate`, introduced in M11.2, is a nice initiative. It enables limited communication with multiple languages, including Python, and appears to be designed to be relatively easily extensible (see ``ExternalEvaluate`AddHeuristic`` if you want to investigate, though I wouldn't invest in this until it becomes documented).
**My great fear, however, is that with `ExternalEvaluate` Wolfram will consider the question of a Python interface settled.**
This would be a big mistake. A *general* framework, like `ExternalEvaluate`, that aims to work with *any* language and relies on passing code (contained in a string) to an evaluator and getting JSON back, will never be fast enough or flexible enough for *practical scientific computing*.
Consider a task as simple as computing the inverse of a $100\times100$ Mathematica matrix using Python (using [`numpy.linalg.inv`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.inv.html)).
I challenge people to implement this with `ExternalEvaluate`. It's not possible to do it *in a practically useful way*. The matrix has to be sent *as code*, and piecing together code from strings just can't replace structured communication. The result will need to be received as something encodable to JSON. This has terrible performance due to multiple conversions, and even risks losing numerical precision.
Just sending and receiving a tiny list of 10000 integers takes half a second (!)
In[6]:= ExternalEvaluate[py, "range(10000)"]; // AbsoluteTiming
Out[6]= {0.52292, Null}
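To make the matrix-inverse challenge concrete, here is roughly what the `ExternalEvaluate` route forces you to do (a sketch, assuming a working Python session with numpy installed):

```mathematica
(* Sketch: the matrix must be serialized into Python source text, and the
   result comes back through JSON -- every number round-trips as text. *)
session = StartExternalSession["Python"];
ExternalEvaluate[session, "import numpy as np"];

mat = RandomReal[{0.5, 1.5}, {100, 100}]; (* entries chosen so InputForm avoids *^ notation *)
pyMatrix = StringReplace[ToString[mat, InputForm], {"{" -> "[", "}" -> "]"}];
inv = ExternalEvaluate[session, "np.linalg.inv(" <> pyMatrix <> ").tolist()"];
```

The string surgery, the text round-trip of 10000 machine reals, and the `.tolist()` conversion on the Python side are all pure overhead that structured MathLink-style transfer would avoid.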
Since I am primarily interested in scientific and numerical computing (as I believe most M users are), I simply won't use `ExternalEvaluate` much, as it's not suitable for this purpose. What if we need to do a [mesh transformation](https://mathematica.stackexchange.com/q/155484/12) that Mathematica can't currently handle, but there's a Python package for it? It's exactly the kind of problem I am looking to apply Python for. I have in fact done mesh transformations using MATLAB toolboxes directly from within Mathematica, using [MATLink][1], while doing the rest of the processing in Mathematica. But I couldn't do this with ExternalEvaluate/Python in a reasonable way.
In 2017, any scientific computing system *needs* to have a Python interface to be taken seriously. [MATLAB has one][2], and it *is* practically usable for numerical/scientific problems.
----
## A Python interface
I envision a Python interface which works like this:
- The MathLink/WSTP API is exposed to Python, and serves as the basis of the system. MathLink is good at transferring large numerical arrays efficiently.
- Fundamental data types (lists, dictionaries, bignums, etc.) as well as datatypes critical for numerical computing (numpy arrays) can be transferred *efficiently* and *bidirectionally*. Numpy arrays in particular must translate to/from packed arrays in Mathematica with the lowest possible overhead.
- Python functions can be set up to be called from within Mathematica, with automatic argument translation and return type translation. E.g.,
PyFun["myfun"][ (* myfun is a function defined in Python *)
{1,2,3} (* a list *),
PyNum[{1,2,3}] (* cast to numpy array, since the interpretation of {1,2,3} is ambiguous *),
PySet[{1,2,3}] (* cast to a set *)
]
- The system should be user-extensible to add translations for new datatypes, e.g. a Python class that is needed frequently for some application.
- The primary mode of operation should be that Python is run as a slave (subprocess) of Mathematica. But there should be a second mode of operation where both Mathematica and Python are being used interactively, and they are able to send/receive structured data to/from each other on demand.
- As a bonus: Python can also call back to Mathematica, so e.g. we can use a numerical optimizer available in Python to find the minimum of a function defined in Mathematica
- An interface whose primary purpose is to call Mathematica from Python is a different topic, but can be built on the same data translation framework described above.
The development of such an interface should be driven by real use cases. Ideally, Wolfram should talk to users who use Mathematica for more than fun and games, and do scientific computing as part of their daily work, with multiple tools (not just M). Start with a number of realistic problems, and make sure the interface can help in solving them. As a non-trivial test case for the datatype-extension framework, make sure people can set up auto-translation for [SymPy objects][3], or a [Pandas dataframe][4], or a [networkx graph][5]. Run `FindMinimum` on a Python function and make sure it performs well. (In a practical scenario this could be a function implementing a physics simulation rather than a simple formula.) As a performance stress test, run `Plot3D` (which triggers a very high number of evaluations) on a Python function. Performance and usability problems will be exposed by such testing early, and then the interface can be *designed* in such a way as to make these problems at least solvable (if not immediately solved in the first version). I do not believe that they are solvable with the `ExternalEvaluate` design.
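As an illustration of the kind of test I mean, one can already hook a Python function into `FindMinimum` today via `ExternalEvaluate` (a sketch; every objective evaluation crosses the language boundary, which is exactly where the performance problem shows up):

```mathematica
(* Sketch: a Python objective driven by FindMinimum. The _?NumericQ
   restriction keeps FindMinimum from evaluating obj symbolically. *)
session = StartExternalSession["Python"];
ExternalEvaluate[session, "def f(x): return (x - 3.0)**2 + 1.0"];

obj[x_?NumericQ] := ExternalEvaluate[session, "f(" <> ToString[CForm[x]] <> ")"];

FindMinimum[obj[x], {x, 0.}] (* minimum near x == 3 *)
```

Each iteration pays the full string-building and JSON round-trip cost, which is why a high-evaluation-count test like `Plot3D` would expose the design's limits quickly.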
Of course, this is not the only possible design for an interface. J/Link works differently: it has handles to Java-side objects. But it also has a different goal. Based on my experience with MATLink and RLink, I believe that *for practical scientific/numerical computing*, the right approach is what I outlined above, and that the performance of data structure translation is critical.
----
## ExternalEvaluate
Don't get me wrong, I do think that the `ExternalEvaluate` framework is a very useful initiative, and it has its place. I am saying this because I looked at its source code and it appears to be easily extensible. R has zeromq and JSON capabilities, and it looks like one could set it up to work with `ExternalEvaluate` in a day or so. So does Perl; anyone want to give it a try? `ExternalEvaluate` is great because it is simple to use and works (or can be made to work) with just about any interpreted language that speaks JSON and zeromq. But it is also, in essence, a quick and dirty hack (that's extensible in a quick and dirty way), and won't be able to scale to the types of problems I mentioned above.
----
## MathLink/WSTP
Let me finally say a few words about why MathLink/WSTP are critical for Mathematica, and what should be improved about them.
I believe that any serious interface should be built on top of MathLink. Since Mathematica already has a good interface capable of inter-process communication, that is designed to work well with Mathematica, and designed to handle numerical and symbolic data efficiently, use it!!
Two things are missing:
- Better documentation and example programs, so more people will learn MathLink
- If the MathLink library (not Mathematica!) were open source, people would be able to use it to link to libraries [which are licensed under the GPL][6]. Even a separate open source implementation that only supports shared memory passing would be sufficient—no need to publish the currently used code in full. Many scientific libraries are licensed under the GPL, often without their authors even realizing that they are effectively preventing those libraries from being used from closed source systems like Mathematica (due to the need to link to the MathLink libraries). To be precise, GPL licensed code *can* be linked with Mathematica, but the result cannot be shared with anyone. I have personally requested the author of a certain library to grant an exception for linking to Mathematica, and they did not grant it. Even worse, I am not sure they understood the issue. The authors of other libraries *cannot* grant such a permission because they themselves are using yet other GPL'd libraries.
[MathLink already has a more permissive license than Mathematica.][7] Why not go all the way and publish an open source implementation?
I am hoping that Wolfram will fix these two problems, and encourage people to create MathLink-based interfaces to other systems. (However, I also hope that Wolfram will create a high-quality Python link themselves instead of relying on the community.)
I have talked about the potential of Mathematica as a glue-language at some Wolfram events in France, and I believe that the capability to interface external libraries/systems easily is critical for Mathematica's future, and so is a healthy third-party package ecosystem.
[1]: http://matlink.org/
[2]: https://www.mathworks.com/help/matlab/matlab-engine-for-python.html
[3]: http://www.sympy.org/
[4]: http://pandas.pydata.org/
[5]: https://networkx.github.io/
[6]: https://en.wikipedia.org/wiki/Copyleft
[7]: https://www.wolfram.com/legal/agreements/mathlink.html
Szabolcs Horvát, 2017-09-15

Interpolate the following implicit function?
http://community.wolfram.com/groups/-/m/t/1466296
Hi,
I have a problem with `Solve`. I need to define a function b of two variables, which is implicitly defined; I need it later to plug into some other really nasty equations, which in turn have to be solved numerically. So I wanted to define a grid of x and ss values and then use interpolation to get a smooth function version of b. However, I cannot even get a solution at a single point. (Maple does this without problems, so it shouldn't be a mathematical problem, but more likely my mistake.) Below is the code and the error message. If I use `NSolve` I get no output at all. Any ideas?
Best
Christian
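Since the right-hand side involves the inverse CDF of a normal distribution, a numeric root-finder is a better fit here than `Solve`. A minimal sketch (note that `NormalDistribution[]` needs its brackets, and the unknown is renamed `bb` to avoid clashing with the function name `b`):

```mathematica
(* Sketch: solve bb == x - ss*Quantile numerically with FindRoot.
   The bracketed {bb, x, 0.001, 0.999} spec keeps bb in (0, 1),
   where Quantile of the normal distribution is real. *)
bNum[x_?NumericQ, ss_?NumericQ] :=
  bb /. FindRoot[bb == x - ss*Quantile[NormalDistribution[], bb],
     {bb, x, 0.001, 0.999}];

bNum[0.3, 0.1]
```

A function defined this way can then be evaluated on the x/ss grid and fed to `Interpolation` as planned.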
b[x_, ss_] := Solve[b == x - ss*Quantile[NormalDistribution[], b], b, Reals]
b[0.3,0.1]
Solve::inex: Solve was unable to solve the system with inexact coefficients or the system obtained by direct rationalization of inexact numbers present in the system. Since many of the methods used by Solve require exact input, providing Solve with an exact version of the system may help.
Christan Bauer, 2018-09-19

Solve analytically the following partial differential equations (PDE's)?
http://community.wolfram.com/groups/-/m/t/1395518
I have three PDEs that I want to solve analytically, but I could not find any method to solve them. Can anyone suggest a method suitable for this type of PDE?
Details are given in the attached file.
Any help would be highly appreciated.
Mirza Farrukh Baig, 2018-08-01

Metaprogramming: the Future of the Wolfram Language
http://community.wolfram.com/groups/-/m/t/1435093
With all the marvelous new functionality that we have come to expect with each release, it is sometimes challenging to maintain a grasp on what the Wolfram language encompasses currently, let alone imagine what it might look like in another ten years. Indeed, the pace of development appears to be accelerating, rather than slowing down.
However, I predict that the "problem" is soon about to get much, much worse. What I foresee is a step change in the pace of development of the Wolfram Language that will produce in days and weeks, or perhaps even hours and minutes, functionality that might currently take months or years to develop.
So obvious and clear cut is this development that I have hesitated to write about it, concerned that I am simply stating something that is blindingly obvious to everyone. But I have yet to see it even hinted at by others, including Wolfram. I find this surprising, because it will revolutionize the way in which not only the Wolfram language is developed in future, but in all likelihood programming and language development in general.
The key to this paradigm shift lies in the following unremarkable-looking WL function WolframLanguageData[], which gives a list of all Wolfram Language symbols and their properties. So, for example, we have:
WolframLanguageData["SampleEntities"]
![enter image description here][1]
This means we can treat WL language constructs as objects, query their properties and apply functions to them, such as, for example:
WolframLanguageData["Cos", "RelationshipCommunityGraph"]
![enter image description here][2]
In other words, the WL gives us the ability to traverse the entirety of the WL itself, combining WL objects into expressions, or programs. This process is one definition of the term “Metaprogramming”.
What I am suggesting is that in future much of the heavy lifting will be carried out, not by developers, but by WL programs designed to produce code by metaprogramming. If successful, such an approach could streamline and accelerate the development process, speeding it up many times and, eventually, opening up areas of development that are currently beyond our imagination (and, possibly, our comprehension).
So how does one build a metaprogramming system? This is where I should hand off to a computer scientist (and will happily do so as soon as one steps forward to take up the discussion). But here is a simple outline of one approach.
The principal tool one might use for such a task is genetic programming:
WikipediaData["Genetic Programming"]
> In artificial intelligence, genetic programming (GP) is a technique whereby computer programs are encoded as a set of genes that are then modified (evolved) using an evolutionary algorithm (often a genetic algorithm, "GA") – it is an application of (for example) genetic algorithms where the space of solutions consists of computer programs. The results are computer programs that are able to perform well in a predefined task. The methods used to encode a computer program in an artificial chromosome and to evaluate its fitness with respect to the predefined task are central in the GP technique and still the subject of active research.
One can take issue with this explanation on several fronts, in particular the suggestion that GP is used primarily as a means of generating a computer program for performing a predefined task. That may certainly be the case, but need not be.
Leaving that aside, the idea in simple terms is that we write a program that traverses the WL structure in some way, splicing together language objects to create a WL program that “does something”. That “something” may be a predefined task and indeed this would be a great place to start: to write a GP metaprogramming system that creates WL programs that replicate the functionality of existing WL functions. Most of the generated programs would likely be uninteresting, slower versions of existing functions; but it is conceivable, I suppose, that some of the results might be of academic interest, or indicate a potentially faster computation method, perhaps. However, the point of the exercise is to get started on the metaprogramming project, with a simple(ish) task with very clear, pre-defined goals and producing results that are easily tested. In this case the “objective function” is a comparison of results produced by the inbuilt WL functions vs the GP-generated functions, across some selected domain for the inputs.
I glossed over the question of exactly how one “traverses the WL structure” for good reason: I feel sure that there must have been tremendous advances in the theory of how to do this in the last 50 years. But just to get the ball rolling, one could, for instance, operate a dual search, with a local search evaluating all of the functions closely connected to the (randomly chosen) starting function (WL object), while a second “long distance” search jumps randomly to a group of functions some specified number of steps away from the starting function.
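Just to make the dual-search idea concrete, here is a toy sketch of a random walk over the symbol graph. The property name "RelatedSymbols" is my assumption here; check `WolframLanguageData["Properties"]` for what is actually available:

```mathematica
(* Toy sketch of the dual search: a local step moves to a directly
   related symbol; a "long-distance" jump chains several local steps. *)
neighbors[f_String] := WolframLanguageData[f, "RelatedSymbols"];
localStep[f_String] := RandomChoice[neighbors[f]];
longJump[f_String, k_Integer] := Nest[localStep, f, k];

longJump["Cos", 3] (* some symbol a few relationship-steps away from Cos *)
```

A real traversal would of course need to handle symbols with missing or empty relationship data, but the shape of the search is this simple.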
[At this point I envisage the computer scientists rolling their eyes and muttering “doesn’t this idiot know about the {fill in the blank} theorem about efficient domain search algorithms?”].
Anyway, to continue. The initial exercise is about the mechanics of the process rather than the outcome. The second stage is much more challenging, as the goal is to develop new functionality, rather than simply to replicate what already exists. It would entail defining a much more complex objective function, as well as perhaps some constraints on program size, the number and types of WL objects used, etc.
An interesting exercise, for example, would be to try to develop a metaprogramming system capable of winning the Wolfram One-Liner contest. Here, one might characterize the objective function as “something interesting and surprising”, and we would impose a tight constraint on the length of programs generated by the metaprogramming system to a single line of code.
What is “interesting and surprising”? To be defined – that’s a central part of the challenge. But, in principle, I suppose one might try to train a neural network to classify whether or not a result is “interesting” based on the results of prior one-liner competitions.
From there, it’s on to the hard stuff: designing metaprogramming systems to produce WL programs of arbitrary length and complexity to do “interesting stuff” in a specific domain. That “interesting stuff” could be, for instance, a more efficient approximation for a certain type of computation, a new algorithm for detecting certain patterns, or coming up with some completely novel formula or computational concept.
Obviously one faces huge challenges in this undertaking; but the potential rewards are also enormous in terms of accelerating the pace of language development and discovery. It is a fascinating area for R&D, one that the WL is ideally situated to exploit. Indeed, I would be mightily surprised to learn that there is not already a team engaged on just such research at Wolfram. If so, perhaps one of them could comment here?
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10942Fig1.png&userId=773999
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=O_12.png&userId=773999
Jonathan Kinlay, 2018-09-02

Decouple the following equations?
http://community.wolfram.com/groups/-/m/t/1459261
I have two coupled equations and I want to decouple them. How can I do it? Any help will be appreciated.
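For what it's worth, this particular pair can be decoupled by hand: adding the two equations cancels the coupling term Bi (Z - X). A sketch (with `up`, `um` standing in for the subscripted constants):

```mathematica
(* Adding eq1 and eq2 eliminates the Bi (Z - X) coupling term: *)
eq1 = k*X''[Y] + Bi*(Z[Y] - X[Y]) == up/um;
eq2 = Z''[Y] - Bi*(Z[Y] - X[Y]) == 0;
sum = eq1[[1]] + eq2[[1]] == eq1[[2]] + eq2[[2]]
(* gives k X''[Y] + Z''[Y] == up/um, so k X[Y] + Z[Y] is a quadratic
   in Y up to integration constants; substituting
   Z[Y] = quadratic - k X[Y] back into eq2 leaves a single ODE
   for X alone. *)
```

This reduces the system to one second-order ODE plus a back-substitution.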
eq1 = k*X''[Y] + Bi*(Z[Y] - X[Y]) == Subscript[U, p]/Subscript[U, m];
eq2 = Z''[Y] - Bi*(Z[Y] - X[Y]) == 0;
Mirza Farrukh Baig, 2018-09-15

Extract the real solutions among many which are complex?
http://community.wolfram.com/groups/-/m/t/1450000
Here is the equation and the solution: I want the two real solutions.
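For reference, once `NSolve` has returned the full list below, the real solutions can be extracted by discarding any rule whose value contains a `Complex` number (a small `Chop` guards against tiny imaginary residues from the numerics):

```mathematica
(* Sketch: keep only the solution rules that are free of Complex numbers *)
realSolutions[sols_List] := Select[Chop[sols, 10^-10], FreeQ[#, _Complex] &];

(* e.g. realSolutions[ss] keeps the two all-real solutions *)
```

Applied to the output below, this keeps exactly the two purely real pairs.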
In[2]:= soln1[a_, b_, c_, \[Xi]_, \[Lambda]_, \[Phi]_, rn_] :=
NSolve[\[Rho]^2 +
2 \[Rho] (a Sin[\[Theta]] Cos[\[Phi]] +
b Sin[\[Theta]] Sin[\[Phi]] + c Cos[\[Theta]]) + a^2 + b^2 +
c^2 - rn^2 ==
0 && \[Rho] - \[Lambda] rn Cos[\[Theta] + \[Xi] Cos[\[Phi]]]^2 ==
0, {\[Rho], \[Theta]}];
In[8]:= ss = soln1[.5, .5, .5, .2, .5, .1, 1]
During evaluation of In[8]:= NSolve::ifun: Inverse functions are being used by NSolve, so some solutions may not be found; use Reduce for complete solution information.
Out[8]= {{\[Rho] -> -2.51366 - 3.61652 I, \[Theta] ->
1.8313 - 1.79756 I}, {\[Rho] -> -2.51366 + 3.61652 I, \[Theta] ->
1.8313 + 1.79756 I}, {\[Rho] -> -0.146207 -
0.0312241 I, \[Theta] -> -1.82025 +
0.520643 I}, {\[Rho] -> -0.146207 +
0.0312241 I, \[Theta] -> -1.82025 - 0.520643 I}, {\[Rho] ->
0.152974, \[Theta] -> 0.785684}, {\[Rho] ->
0.311045 - 0.210578 I, \[Theta] ->
2.25043 - 0.388346 I}, {\[Rho] ->
0.311045 + 0.210578 I, \[Theta] ->
2.25043 + 0.388346 I}, {\[Rho] -> 0.417435, \[Theta] -> -0.617471}}
Hong-Yee Chiu, 2018-09-11

Solve the following non-linear integer max-problem with NMaximize?
http://community.wolfram.com/groups/-/m/t/1463232
Hi!
I have tried to solve the following non-linear integer max-problem without setting lower (positive) bounds on Q1, Q2 & Q3, requiring only that they be non-negative. Mathematica fails to find the optimal solution unless I increase the lower bounds for Q2 & Q3 to about 40. I have solved the same problem in LINGO and found the global solution {P -> 60., Q1 -> 0, Q2 -> 52., Q3 -> 60., y1 -> 0, y2 -> 1, y3 -> 1, z1 -> 0, z2 -> 0, z3 -> 1} in less than a second. Any suggestions?
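One thing worth trying (untested on this exact model) is forcing a global search method; mixed-integer problems often respond to `"DifferentialEvolution"` with more search points. The option syntax, shown on a toy mixed-integer problem:

```mathematica
(* Sketch of the Method option syntax only; toy model, not the problem
   below. The exact optimum here is x = 3, y = 4. *)
NMaximize[{x + 2 y, x^2 + y^2 <= 25 && {x, y} \[Element] Integers},
 {x, y}, Method -> {"DifferentialEvolution", "SearchPoints" -> 100}]
```

The same `Method` option can be appended to the `NMaximize` call below.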
NMaximize[{(Q1 + Q2 + Q3) (P - 20) + 3*0.5 (80 - P) Q1*z1 +
2*0.5 (100 - 0.8 P) Q2*z2 + 2*0.5 (90 - 0.5 P) Q3*z3,
z1 + z2 + z3 == 1 && Q1 == (80 - P) y1 && Q2 == (100 - 0.8 P) y2 &&
Q3 == (90 - 0.5 P) y3 && 2 <= y1 + y2 + y3 <= 3 &&
3 <= y1 + y2 + y3 + z1 + z2 + z3 <= 4 &&
2 <= y1 + y2 + y3 + z1 <= 3 && 2 <= y1 + y2 + y3 + z2 <= 3 &&
2 <= y1 + y2 + y3 + z3 <= 3 && Q1 >= 0 && Q2 >= 0 && Q3 >= 0 &&
1 >= y1 >= 0 && 1 >= y2 >= 0 && 1 >= y3 >= 0 && z1 >= 0 &&
z2 >= 0 && z3 >= 0 && 80 > P > 20 && y1 \[Element] Integers &&
y2 \[Element] Integers && y3 \[Element] Integers &&
z1 \[Element] Integers && z2 \[Element] Integers &&
z3 \[Element] Integers}, {P, Q1, Q2, Q3, y1, y2, y3, z1, z2, z3}]
Christos Papahristodoulou, 2018-09-17

Find the root of a numerical function that contains a root-finding itself?
http://community.wolfram.com/groups/-/m/t/1463531
I have encountered the following problem:
- define a numerical function h with input x and output y
- use FindRoot to find the root x* of h
- one characteristic of h: inside h, FindRoot itself is called
I have done this with other software, but it's not working in Mathematica, or I am doing something wrong. I have attached a MWE.
Simply evaluating the function h works, but FindRoot returns many error messages. I really do not understand why.
Any ideas?
Best,
Benjamin
h[inp_] :=
Module[{y, x, a, equ, sol, z},
a = inp;
equ = {x + .5 y - a[[1]], x + y - a[[2]]};
sol = FindRoot[equ, {{x, 1.0}, {y, 1.0}}];
z = {x, y} /. sol;
z - {1, 1}
]
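The usual culprit with this pattern is that the outer `FindRoot` first evaluates `h` with *symbolic* arguments, so the inner `FindRoot` sees non-numeric equations and complains. Restricting the definition to numeric input avoids this; a sketch:

```mathematica
(* Sketch: the _?(VectorQ[#, NumericQ] &) pattern makes hNum inert for
   symbolic arguments, so the outer FindRoot only ever calls it with numbers. *)
hNum[inp_?(VectorQ[#, NumericQ] &)] :=
  Module[{x, y, sol},
   sol = FindRoot[{x + .5 y - inp[[1]], x + y - inp[[2]]}, {{x, 1.}, {y, 1.}}];
   ({x, y} /. sol) - {1, 1}];

FindRoot[hNum[{X, Y}], {{X, 1.5}, {Y, 2.}}]
(* {X -> 1.5, Y -> 2.} -- in this MWE the starting point is already the root *)
```

The same `_?NumericQ`-style guard is the standard fix whenever a numerical function is passed to `FindRoot`, `NMinimize`, `NIntegrate`, and friends.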
XY = {X, Y};
XYval = {1.5, 2.0};
XYStart = {{XY[[1]], XYval[[1]]}, {XY[[2]], XYval[[2]]}};
h[XYval]
FindRoot[h[XY], XYStart]
Benjamin L, 2018-09-17

Solver for unsteady flow with the use of Mathematica FEM
http://community.wolfram.com/groups/-/m/t/1433064
![fig7][331]
I started the discussion [here][1], but I also want to repeat it on this forum.
There are many commercial and open-source codes for solving problems of unsteady flow.
We are interested in the possibility of solving these problems using Mathematica's FEM. Previously proposed solvers for stationary incompressible isothermal flows:
Solving 2D Incompressible Flows using Finite Elements:
http://community.wolfram.com/groups/-/m/t/610335
FEM Solver for Navier-Stokes equations in 2D:
http://community.wolfram.com/groups/-/m/t/611304
Nonlinear FEM Solver for Navier-Stokes equations in 2D:
https://mathematica.stackexchange.com/questions/94914/nonlinear-fem-solver-for-navier-stokes-equations-in-2d/96579#96579
We give several examples of the successful application of the finite element method to unsteady problems, including nonisothermal and compressible flows. We will begin with two standard tests for this class of problems, proposed by
M. Schäfer and S. Turek, Benchmark computations of laminar flow around a cylinder (with support by F. Durst, E. Krause and R. Rannacher). In E. Hirschel, editor, Flow Simulation with High-Performance Computers II. DFG priority research program results 1993-1995, number 52 in Notes Numer. Fluid Mech., pp. 547–566. Vieweg, Weisbaden, 1996. https://www.uio.no/studier/emner/matnat/math/MEK4300/v14/undervisningsmateriale/schaeferturek1996.pdf
![fig8][332]
Let us consider the flow in a flat channel around a cylinder at Reynolds number 100, when self-oscillations occur, leading to the detachment of vortices behind the cylinder. In this problem it is necessary to calculate the drag coefficient, the lift coefficient, and the pressure difference between the front and rear of the cylinder as functions of time, as well as the maximum drag coefficient, the maximum lift coefficient, the Strouhal number, and the pressure difference $\Delta P(t)$ at $t = t_0 + 1/(2f)$. The frequency $f$ is determined by the period of oscillation of the lift coefficient $c_L$. The data for this test, the code and the results are shown below.
H = .41; L = 2.2; {x0, y0, r0} = {1/5, 1/5, 1/20};
Ω = RegionDifference[Rectangle[{0, 0}, {L, H}], Disk[{x0, y0}, r0]];
RegionPlot[Ω, AspectRatio -> Automatic]
K = 2000; Um = 1.5; ν = 10^-3; t0 = .004;
U0[y_, t_] := 4*Um*y/H*(1 - y/H)
UX[0][x_, y_] := 0;
VY[0][x_, y_] := 0;
P0[0][x_, y_] := 0;
Do[
 {UX[i], VY[i], P0[i]} =
  NDSolveValue[{{Inactive[Div][({{-μ, 0}, {0, -μ}}.Inactive[Grad][u[x, y], {x, y}]), {x, y}] +
       D[p[x, y], x] + (u[x, y] - UX[i - 1][x, y])/t0 +
       UX[i - 1][x, y]*D[u[x, y], x] + VY[i - 1][x, y]*D[u[x, y], y],
      Inactive[Div][({{-μ, 0}, {0, -μ}}.Inactive[Grad][v[x, y], {x, y}]), {x, y}] +
       D[p[x, y], y] + (v[x, y] - VY[i - 1][x, y])/t0 +
       UX[i - 1][x, y]*D[v[x, y], x] + VY[i - 1][x, y]*D[v[x, y], y],
      D[u[x, y], x] + D[v[x, y], y]} == {0, 0, 0} /. μ -> ν,
    {DirichletCondition[{u[x, y] == U0[y, i*t0], v[x, y] == 0}, x == 0.],
     DirichletCondition[{u[x, y] == 0., v[x, y] == 0.},
      0 <= x <= L && y == 0 || y == H],
     DirichletCondition[{u[x, y] == 0, v[x, y] == 0},
      (x - x0)^2 + (y - y0)^2 == r0^2],
     DirichletCondition[p[x, y] == P0[i - 1][x, y], x == L]}},
   {u, v, p}, {x, y} ∈ Ω,
   Method -> {"FiniteElement",
     "InterpolationOrder" -> {u -> 2, v -> 2, p -> 1},
     "MeshOptions" -> {"MaxCellMeasure" -> 0.001}}], {i, 1, K}];
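Written out, each pass of the Do loop above solves a time-discretized, linearized system (the convective coefficients are taken from the previous time level, with time step $\tau = t_0 = 0.004$); schematically:

$$\frac{u^{(i)} - u^{(i-1)}}{\tau} + u^{(i-1)} \partial_x u^{(i)} + v^{(i-1)} \partial_y u^{(i)} - \nu \Delta u^{(i)} + \partial_x p^{(i)} = 0,$$

$$\frac{v^{(i)} - v^{(i-1)}}{\tau} + u^{(i-1)} \partial_x v^{(i)} + v^{(i-1)} \partial_y v^{(i)} - \nu \Delta v^{(i)} + \partial_y p^{(i)} = 0, \qquad \partial_x u^{(i)} + \partial_y v^{(i)} = 0,$$

where $u^{(i)}$, $v^{(i)}$ and $p^{(i)}$ correspond to UX[i], VY[i] and P0[i].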
{ContourPlot[UX[K/2][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, Contours -> 20,
PlotPoints -> 25, PlotLabel -> u, MaxRecursion -> 2],
ContourPlot[VY[K/2][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, Contours -> 20,
PlotPoints -> 25, PlotLabel -> v, MaxRecursion -> 2,
PlotRange -> All]} // Quiet
{DensityPlot[UX[K][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, PlotPoints -> 25,
PlotLabel -> u, MaxRecursion -> 2],
DensityPlot[VY[K][x, y], {x, y} ∈ Ω,
AspectRatio -> Automatic, ColorFunction -> "BlueGreenYellow",
FrameLabel -> {x, y}, PlotLegends -> Automatic, PlotPoints -> 25,
PlotLabel -> v, MaxRecursion -> 2, PlotRange -> All]} // Quiet
dPl = Interpolation[
Table[{i*t0, (P0[i][.15, .2] - P0[i][.25, .2])}, {i, 0, K, 1}]];
cD = Table[{t0*i,
     NIntegrate[(-ν*(-Sin[θ] (Sin[θ]*
              Derivative[0, 1][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
             Cos[θ]*
              Derivative[1, 0][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]]) +
          Cos[θ] (Sin[θ]*
              Derivative[0, 1][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
             Cos[θ]*
              Derivative[1, 0][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]]))*
         Sin[θ] -
        P0[i][x0 + r Cos[θ], y0 + r Sin[θ]]*Cos[θ]) /. {r -> r0},
      {θ, 0, 2*Pi}]}, {i, 1000, 2000}]; // Quiet
cL = Table[{t0*i,
     -NIntegrate[(-ν*(-Sin[θ] (Sin[θ]*
               Derivative[0, 1][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
              Cos[θ]*
               Derivative[1, 0][UX[i]][x0 + r Cos[θ], y0 + r Sin[θ]]) +
           Cos[θ] (Sin[θ]*
               Derivative[0, 1][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]] +
              Cos[θ]*
               Derivative[1, 0][VY[i]][x0 + r Cos[θ], y0 + r Sin[θ]]))*
          Cos[θ] +
         P0[i][x0 + r Cos[θ], y0 + r Sin[θ]]*Sin[θ]) /. {r -> r0},
       {θ, 0, 2*Pi}]}, {i, 1000, 2000}]; // Quiet
{ListLinePlot[cD,
AxesLabel -> {"t", "\!\(\*SubscriptBox[\(c\), \(D\)]\)"}],
ListLinePlot[cL,
AxesLabel -> {"t", "\!\(\*SubscriptBox[\(c\), \(L\)]\)"}],
Plot[dPl[x], {x, 0, 8}, AxesLabel -> {"t", "ΔP"}]}
f002 = FindFit[cL, a*.5 + b*.8*Sin[k*16*t + c*1.], {a, b, k, c}, t]
Plot[Evaluate[a*.5 + b*.8*Sin[k*16*t + c*1.] /. f002], {t, 4, 8},
Epilog -> Map[Point, cL]]
k0=k/.f002;
Struhalnumber = .1*16*k0/2/Pi
cLm = MaximalBy[cL, Last]
sol = {Max[cD[[All, 2]]], Max[cL[[All, 2]]], Struhalnumber,
dPl[cLm[[1, 1]] + Pi/(16*k0)]}
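As a side note (my own unpacking, not part of the original post): the line `Struhalnumber = .1*16*k0/2/Pi` follows from $St = f D/\bar{U}$ with the cylinder diameter $D = 2 r_0 = 0.1$, the mean inflow velocity $\bar{U} = 2 U_m/3 = 1$, and the frequency $f = 16 k_0/(2\pi)$ extracted from the fit ansatz $\sin(16 k t + c)$.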
Fig. 1 shows the components of the flow velocity and the required coefficients. Below are our solution of the problem and the bounds required by the test:
{3.17805, 1.03297, 0.266606, 2.60427}
lowerbound= { 3.2200, 0.9900, 0.2950, 2.4600};
upperbound = {3.2400, 1.0100, 0.3050, 2.5000};
![Fig1][2]
Note that our results differ from the allowable range by several percent, but if you look at all the results in Table 4 of the cited article, the agreement is quite acceptable. The worst prediction is for the Strouhal number. We note that we use the explicit Euler method, which underestimates the Strouhal number, as follows from the data in Table 4.
The next test differs from the previous one in that the inlet velocity varies as `U0[y_, t_] := 4*Um*y/H*(1 - y/H)*Sin[Pi*t/8]`. It is necessary to determine the time dependence of the drag and lift coefficients over a half-period of oscillation, as well as the pressure drop at the final moment of time. Fig. 2 shows the components of the flow velocity and the required coefficients. Below are our solution of the problem and the bounds required by the test:
sol = {3.0438934441256595`,
0.5073345082785012`, -0.11152933279750943`};
lowerbound = {2.9300, 0.4700, -0.1150};
upperbound = {2.9700, 0.4900, -0.1050};
![Fig2][3]
For this test, the agreement with the data in Table 5 is good. Consequently, the two tests are almost completely passed.
I wrote and debugged this code using Mathematica 11.0.1. But when I ran it in Mathematica 11.3, I got strange pictures; for example, the disk is rendered as a hexagon and the size of the region is changed.
![Fig3][4]
In addition, the numerical solution of the problem has changed; for example, for test 2D2:
{3.17805, 1.03297, 0.266606, 2.60427} (v11.0.1)
{3.15711, 1.11377, 0.266043, 2.54356} (v11.3)
The attached file contains the working code for test 2D3 describing the flow around the cylinder in a flat channel with a change in the flow velocity.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D2.png&userId=1218692
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D2.png&userId=1218692
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=test2D3.png&userId=1218692
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Math11.3.png&userId=1218692
[331]: http://community.wolfram.com//c/portal/getImageAttachment?filename=CylinderRe100test2D2.gif&userId=1218692
[332]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2D2test.png&userId=1218692

Alexander Trounev, 2018-08-31T11:44:04Z

Music Generation with GAN MidiNet
http://community.wolfram.com/groups/-/m/t/1435251
I generated music with reference to [MidiNet][1]. Most neural network models for music generation use recurrent neural networks; MidiNet, however, uses convolutional neural networks.
There are three models in MidiNet. Model 1 is a melody generator with no chord condition; Models 2 and 3 are melody generators with a chord condition. I tried Model 1 because it is the most interesting of the three models compared in the paper.
**Get MIDI data**
-----------------------
My favorite Jazz bassist is [Jaco Pastorius][2]. I get MIDI data from [here][3]. For example, I get MIDI data of "The Chicken".
url = "http://www.midiworld.com/download/1366";
notes = Select[Import[url, {"SoundNotes"}], Length[#] > 0 &];
There are several styles in the notes. I pick the bass style from them.
notes[[All, 3, 3]]
Sound[notes[[1]]]
![enter image description here][4]
![enter image description here][5]
I convert the MIDI data to image data. I fix the smallest note unit to be the sixteenth note: I divide the MIDI data into sixteenth-note periods and select the sound found at the beginning of each period. Since the pitch of the SoundNote function ranges from 1 to 128, I convert each bar to a grayscale image (height 128, width 16).
First, I create a rule that maps each note pitch (C-1, ..., G9) to a number (1, ..., 128), e.g. C4 -> 61.
codebase = {"C", "C#", "D", "D#", "E" , "F", "F#", "G", "G#" , "A",
"A#", "B"};
num = ToString /@ Range[-1, 9];
pitch2numberrule =
Take[Thread[
StringJoin /@ Reverse /@ Tuples[{num, codebase}] ->
Range[0, 131] + 1], 128]
![enter image description here][6]
Next, I convert each bar to an image (height 128, width 16).
tempo = 108;
note16 = 60/(4*tempo); (* length in seconds of one sixteenth note *)
select16[snlist_, t_] :=
Select[snlist, (t <= #[[2, 1]] <= t + note16) || (t <= #[[2, 2]] <=
t + note16) || (#[[2, 1]] < t && #[[2, 2]] > t + note16) &, 1]
selectbar[snlist_, str_] :=
select16[snlist, #] & /@ Most@Range[str, str + note16*16, note16]
selectpitch[x_] := If[x === {}, 0, x[[1, 1]]] /. pitch2numberrule
pixelbar[snlist_, t_] := Module[{bar, x, y},
bar = selectbar[snlist, t];
x = selectpitch /@ bar;
y = Range[16];
Transpose[{x, y}]
]
imagebar[snlist_, t_] := Module[{image},
image = ConstantArray[0, {128, 16}];
Quiet[(image[[129 - #[[1]], #[[2]]]] = 1) & /@ pixelbar[snlist, t]];
Image[image]
]
soundnote2image[soundnotelist_] := Module[{min, max, data2},
{min, max} = MinMax[#[[2]] & /@ soundnotelist // Flatten];
data2 = {#[[1]], #[[2]] - min} & /@ soundnotelist;
Table[imagebar[data2, t], {t, 0, max - min, note16*16}]
]
(images1 = soundnote2image[notes[[1]]])[[;; 16]]
![enter image description here][7]
**Create the training data**
-----------------------
First, I truncate images1 to an integer multiple of the batch size. With a batch size of 16, its length is 128 bars, about 284 seconds.
batchsize = 16;
getbatchsizeimages[i_] := i[[;; batchsize*Floor[Length[i]/batchsize]]]
imagesall = Flatten[Join[getbatchsizeimages /@ {images1}]];
{Length[imagesall], Length[imagesall]*note16*16 // N}
![enter image description here][8]
MidiNet proposes a novel conditional mechanism that uses the music of the previous bar to condition the generation of the present bar, taking temporal dependencies across bars into account. So each training sample for MidiNet (Model 1: melody generator, no chord condition) consists of three parts: "noise", "prev" and "Input". "noise" is a 100-dimensional random vector; "prev" is the image data (1×128×16) of the previous bar; "Input" is the image data (1×128×16) of the present bar. The first "prev" of each batch is all zeros.
I generate training data with a batch size of 16 as follows.
randomDim = 100;
n = Floor[Length@imagesall/batchsize];
noise = Table[RandomReal[NormalDistribution[0, 1], {randomDim}],
batchsize*n];
input = ArrayReshape[ImageData[#], {1, 128, 16}] & /@
imagesall[[;; batchsize*n]];
prev = Flatten[
Join[Table[{{ConstantArray[0, {1, 128, 16}]},
input[[batchsize*(i - 1) + 1 ;; batchsize*i - 1]]}, {i, 1, n}]],
2];
trainingData =
AssociationThread[{"noise", "prev",
"Input"} -> {#[[1]], #[[2]], #[[3]]}] & /@
Transpose[{noise, prev, input}];
**Create GAN**
-----------------------
I create the generator with reference to MidiNet.
generator = NetGraph[{
1024, BatchNormalizationLayer[], Ramp, 256,
BatchNormalizationLayer[], Ramp, ReshapeLayer[{128, 1, 2}],
DeconvolutionLayer[64, {1, 2}, "Stride" -> {2, 2}],
BatchNormalizationLayer[], Ramp,
DeconvolutionLayer[64, {1, 2}, "Stride" -> {2, 2}],
BatchNormalizationLayer[], Ramp,
DeconvolutionLayer[64, {1, 2}, "Stride" -> {2, 2}],
BatchNormalizationLayer[], Ramp,
DeconvolutionLayer[1, {128, 1}, "Stride" -> {2, 1}],
LogisticSigmoid,
ConvolutionLayer[16, {128, 1}, "Stride" -> {2, 1}],
BatchNormalizationLayer[], Ramp,
ConvolutionLayer[16, {1, 2}, "Stride" -> {1, 2}],
BatchNormalizationLayer[], Ramp,
ConvolutionLayer[16, {1, 2}, "Stride" -> {1, 2}],
BatchNormalizationLayer[], Ramp,
ConvolutionLayer[16, {1, 2}, "Stride" -> {1, 2}],
BatchNormalizationLayer[], Ramp, CatenateLayer[],
CatenateLayer[], CatenateLayer[],
CatenateLayer[]}, {NetPort["noise"] ->
1, NetPort["prev"] -> 19,
19 -> 20 ->
21 -> 22 -> 23 -> 24 -> 25 -> 26 -> 27 -> 28 -> 29 -> 30,
1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7, {7, 30} -> 31,
31 -> 8 -> 9 -> 10, {10, 27} -> 32,
32 -> 11 -> 12 -> 13, {13, 24} -> 33,
33 -> 14 -> 15 -> 16, {16, 21} -> 34, 34 -> 17 -> 18},
"noise" -> {100}, "prev" -> {1, 128, 16}
]
![enter image description here][9]
I create a discriminator without BatchNormalizationLayer and LogisticSigmoid, because I use a [Wasserstein GAN][10], which is easier to stabilize during training.
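For context (a restatement of the Wasserstein GAN objective, not from the original post): the critic $D$ and generator $G$ are trained according to

$$\min_G \max_{\|D\|_L \le 1} \; \mathbb{E}_{x \sim p_\mathrm{data}}[D(x)] - \mathbb{E}_{z}[D(G(z))],$$

where the Lipschitz constraint on the critic is approximated by clipping its weights to a small interval after each update (here $[-0.01, 0.01]$, matching the "WeightClipping" option passed to NetTrain later in the post). This is why the critic outputs a raw score rather than a sigmoid probability.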
discriminator = NetGraph[{
ConvolutionLayer[64, {89, 4}, "Stride" -> {1, 1}], Ramp,
ConvolutionLayer[64, {1, 4}, "Stride" -> {1, 1}], Ramp,
ConvolutionLayer[16, {1, 4}, "Stride" -> {1, 1}], Ramp,
1},
{1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7}, "Input" -> {1, 128, 16}
]
![enter image description here][11]
I create the Wasserstein GAN network.
ganNet = NetInitialize[NetGraph[<|"gen" -> generator,
"discrimop" -> NetMapOperator[discriminator],
"cat" -> CatenateLayer[],
"reshape" -> ReshapeLayer[{2, 1, 128, 16}],
"flat" -> ReshapeLayer[{2}],
"scale" -> ConstantTimesLayer["Scaling" -> {-1, 1}],
"total" -> SummationLayer[]|>,
{{NetPort["noise"], NetPort["prev"]} -> "gen" -> "cat",
NetPort["Input"] -> "cat",
"cat" ->
"reshape" -> "discrimop" -> "flat" -> "scale" -> "total"},
"Input" -> {1, 128, 16}]]
![enter image description here][12]
**NetTrain**
-----------------------
I train with the training data created above, using RMSProp as the NetTrain method, following the Wasserstein GAN paper. Training takes about one hour on a GPU.
net = NetTrain[ganNet, trainingData, All, LossFunction -> "Output",
Method -> {"RMSProp", "LearningRate" -> 0.00005,
"WeightClipping" -> {"discrimop" -> 0.01}},
LearningRateMultipliers -> {"scale" -> 0, "gen" -> -0.2},
TargetDevice -> "GPU", BatchSize -> batchsize,
MaxTrainingRounds -> 50000]
![enter image description here][13]
**Create MIDI**
-----------------------
I create image data for 16 bars using the generator of the trained network.
bars = {};
newbar = Image[ConstantArray[0, {1, 128, 16}]];
For[i = 1, i < 17, i++,
noise1 = RandomReal[NormalDistribution[0, 1], {randomDim}];
prev1 = {ImageData[newbar]};
newbar =
NetDecoder[{"Image", "Grayscale"}][
NetExtract[net["TrainedNet"], "gen"][<|"noise" -> noise1,
"prev" -> prev1|>]];
AppendTo[bars, newbar]
]
bars
![enter image description here][14]
Because images generated by a Wasserstein GAN tend to be blurred, I keep only the pixel with the maximum value in each column of the image. This cleans up the images.
clearbar[bar_, threshold_] := Module[{i, barx, col, max},
barx = ConstantArray[0, {128, 16}];
col = Transpose[bar // ImageData];
For[i = 1, i < 17, i++,
max = Max[col[[i]]];
If[max >= threshold,
barx[[First@Position[col[[i]], max, 1], i]] = 1]
];
Image[barx]
]
bars2 = clearbar[#, 0.1] & /@ bars
![enter image description here][15]
I convert the images to SoundNote expressions and concatenate identical consecutive pitches.
number2pitchrule = Reverse /@ pitch2numberrule;
images2soundnote[img_, start_] :=
SoundNote[(129 - #[[2]]) /.
number2pitchrule, {(#[[1]] - 1)*note16, #[[1]]*note16} + start,
"ElectricBass", SoundVolume -> 1] & /@
Sort@(Reverse /@ Position[(img // ImageData) /. (1 -> 1.), 1.])
snjoinrule = {x___, SoundNote[s_, {t_, u_}, v_, w_],
SoundNote[s_, {u_, z_}, v_, w_], y___} -> {x,
SoundNote[s, {t, z}, v, w], y};
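As a quick sanity check (my own example, not from the original post), the join rule merges two contiguous notes of the same pitch:

```mathematica
(* two adjacent notes of the same pitch sharing an endpoint *)
{SoundNote["C4", {0., 0.25}, "ElectricBass", SoundVolume -> 1],
  SoundNote["C4", {0.25, 0.5}, "ElectricBass", SoundVolume -> 1]} //. snjoinrule
(* {SoundNote["C4", {0., 0.5}, "ElectricBass", SoundVolume -> 1]} *)
```

Notes that do not share an endpoint, or differ in pitch, are left untouched by the rule.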
I generate the music and attach it as an mp3 file.
Sound[Flatten@
MapIndexed[(images2soundnote[#1, note16*16*(First[#2] - 1)] //.
snjoinrule) &, bars2]]
![enter image description here][16]
**Conclusion**
-----------------------
I tried music generation with a GAN. I am not satisfied with the result; I think there are various causes, such as poor training data and insufficient training time.
Jaco is gone. I hope neural networks will one day be able to express Jaco's bass.
[1]: https://arxiv.org/abs/1703.10847
[2]: https://en.wikipedia.org/wiki/Jaco_Pastorius
[3]: http://www.bock-for-pastorius.de/midi.htm
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=317901.jpg&userId=1013863
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=567502.jpg&userId=1013863
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=476803.jpg&userId=1013863
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=744004.jpg&userId=1013863
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=586405.jpg&userId=1013863
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=707106.jpg&userId=1013863
[10]: https://arxiv.org/abs/1701.07875
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=435507.jpg&userId=1013863
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=170508.jpg&userId=1013863
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=324809.jpg&userId=1013863
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=965210.jpg&userId=1013863
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=706311.jpg&userId=1013863
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=177112.jpg&userId=1013863

Kotaro Okazaki, 2018-09-02T02:30:04Z

Reverse the axes of a plot?
http://community.wolfram.com/groups/-/m/t/1459957
Hello and thanks for your help.
I am trying to invert the axes produced by the Plot[] command: invert the vertical (y) axis while keeping the horizontal (x) axis as it is. I tried to find an answer in the program itself but did not find it.
Thank you very much for any help you can give me.

Miguel Saldias, 2018-09-15T19:20:21Z

Scatter plotting satellites?
http://community.wolfram.com/groups/-/m/t/1454309
I'm dealing with a table of GEO satellites and have generated a table of Az and El values relative to my location. Too bad they can't be found with a SatelliteData[] query...
This is a snippet of the data.
dataSatTable = {
{"SAT NAME", "EL", "AZ"},
{"NSS-806", 3.26, 99.47},
{"Galaxy-17-19", 5.69, 258.52},
{"Eutelsat-113", 10.4, 254.51}
}
I need to plot each satellite as a dot with a text tag on a scatter plot; this will form an arc. I then need to add another table of data containing obstructions to the plot.
Any pointers?

Mathison Ott, 2018-09-13T07:18:01Z

psfrag for Mathematica 10
http://community.wolfram.com/groups/-/m/t/474155
Hello,
As far as I understand, psfrag no longer works with Mathematica 10. Does anyone have a solution for this problem, or know whether there will be one in the near future? Or is there an alternative?
What I want to do is export EPS files from Mathematica and include them in LaTeX with nice labels.

a b, 2015-04-05T14:34:56Z

Get a numerical solution to a nonlinear ODE?
http://community.wolfram.com/groups/-/m/t/1458395
I am trying to solve a nonlinear ODE by applying NDSolve with the StiffnessSwitching method, but when I try to find the root of my equation it gives me an error message. The same code works well in Mathematica version 9, but not in version 11.3, which I have just upgraded to; I do not know why.
Would anyone help me please?
Here is my code
Z=800;
g= 0.023800000000000000000;
κ2 = 0.000194519;
R= 1.5472;
ytest0= -13.911917694213733`;
ϵ = $MachineEpsilon ;
y1[ytest_?NumericQ] :=
NDSolve[
{y''[r] + 2 y'[r]/r == κ2 Sinh[y[r]] , y[1] == ytest,
y'[ϵ] == 0}, y, {r, ϵ, 1},
Method -> {"StiffnessSwitching", "NonstiffTest" -> False}];
y2[ytest_?NumericQ] :=
NDSolve[
{y''[r] + 2 y'[r]/r == κ2 Sinh[y[r]],
y[1] == ytest, y'[R] == 0}, y, {r, 1, R},
Method -> {"StiffnessSwitching", "NonstiffTest" -> False}];
y1Try[ytest_?NumericQ] := y'[1] /. y1[ytest];
y2Try[ytest_?NumericQ] := y'[1] /. y2[ytest];
f = ytest /. FindRoot[y1Try[ytest] - y2Try[ytest] == -Z g, {ytest, ytest0}]

kolod al, 2018-09-15T03:44:54Z

Avoid issue while using DMSList function?
http://community.wolfram.com/groups/-/m/t/1461490
DMSList[20.365]
The above code should have returned a list of degrees, minutes and seconds. But only the following call returned the required result of {20, 21, 54.}:
DMSList[{20.365, 0, 0}]

htan aungmin, 2018-09-17T09:35:33Z

Create a "Great Circle" on a globe through two given points?
http://community.wolfram.com/groups/-/m/t/1460856
A colleague of mine is on holiday, travelling from Amsterdam to Miami. Just for fun I would like to plot the great circle through the center of the earth going through Amsterdam and Miami.
I tried to do that with GeoGraphics/GeoPath, but for both I got the error message that it "is not a graphics primitive". I tried several things but I cannot get it right. How can I create it?
The code is from the Wolfram help with some adaptation; see the attachment.
Thank you

Chiel Geeraert, 2018-09-16T17:35:52Z

Define an implicit function?
http://community.wolfram.com/groups/-/m/t/1457290
Hi there,
I have a theoretical decision model that involves a number of equations that cannot be solved in closed form, i.e. the solution is given only implicitly. The first of these functions then enters another equation that, again, can only be solved implicitly, and I encounter problems when I try to implement this. I only need a numerical solution, as it is for illustration purposes only.
Now here's the first equation:
b[x_] := Solve[b == x - 0.1*Quantile[NormalDistribution[0, 1], b], b]
which gives the error message
$RecursionLimit::reclim2: Recursion depth of 1024 exceeded during evaluation of -0.1 Quantile[NormalDistribution,b].
I'd prefer to have the 0.1 as a free parameter, "b[x_, s_] := ...", but I would be able to live with a fixed value if I have to. Any ideas or comments?
Best, Christian

Christan Bauer, 2018-09-14T11:24:04Z

Connect Mathematica to a Bluetooth 4.0 device?
http://community.wolfram.com/groups/-/m/t/1454347
How can Mathematica connect to a heart-rate device via Bluetooth 4.0 and analyze the sampled data? I have tried the FindDevices command, but the device cannot be found. Is there anything to be aware of, or are there other methods?
FindDevices[]
{DeviceObject[{"Camera", 1}], DeviceObject[{"FunctionDemo", 1}],
 DeviceObject[{"RandomSignalDemo", 1}], DeviceObject[{"WriteDemo", 1}]}

Tsai Ming-Chou, 2018-09-13T09:37:35Z

Convert Wolfram Dataset to JSON or CSV via API?
http://community.wolfram.com/groups/-/m/t/1455513
I am getting Wolfram CDF format from this:
beta = APIFunction[{"tablename" -> "String"},ResourceData[ResourceObject[#tablename] ]& ]
co = CloudDeploy[beta, Permissions->"Public"]
Response:
Dataset[{<|"Name" -> "Aachen", "ID" -> "1", "NameType" -> "Valid", "Classification" -> "L5", "Mass" -> Quantity[21, "Grams"], "Fall" -> "Fell", "Year" -> DateObject[{1880}, "Year", "Gregorian", -5.], "Coordinates" -> GeoPosition[{50.775, 6.08333}]|>, <|"Name" -> "Aarhus", "ID" -> "2", "NameType" -> "Valid", "Classification" -> "H6", "Mass" -> Quantity[720, "Grams"], "Fall" -> "Fell", "Year" -> DateObject[{1951}, "Year", "Gregorian", -5.], "Coordinates" -> GeoPosition[{56.18333, 10.23333}]|>, <|"Name" -> "Abee", "ID" -> "6", "NameType" -> "Valid", "Classification" -> "EH4", "Mass" -> Quantity[107000, "Grams"], "Fall" -> "Fell", "Year" -> DateObject[{1952}, "Year", "Gregorian", -5.], "Coordinates" -> GeoPosition[{54.21667, -113.}]|>, <|"Name" -> "Acapulco", "ID" -> "10", "NameType" -> "Valid", "Classification" -> "Acapulcoite", "Mass" -> Quantity[1914, "Grams"], "Fall" -> "Fell", "Year" -> DateObject[{1976}, "Year", "Gregorian", -5.], "Coordinates" -> GeoPosition[{16.88333, -99.9}]|>, <|"Name" -> "Achiras", "ID" -> "370", "NameType" -> "Valid", "Classification" -> "L6", "Mass" -> Quantity[780, "Grams"], "Fall" -> "Fell", "Year" -> DateObject[{1902}, "Year", "Gregorian", -5.], "Coordinates" -> GeoPosition[{-33.16667, -64.95}]|> }]
I need this in JSON format. I tried to convert it using URLExecute, but that didn't work.
Does anyone know a Pythonic or Wolfram way to convert this into JSON or CSV?

Sag Mk, 2018-09-13T17:28:29Z

Obtain an inhomogeneous compound Poisson process?
http://community.wolfram.com/groups/-/m/t/1456738
Mathematica has functions for compound Poisson processes and for inhomogeneous Poisson processes, but not for the combination of the two. In other words, is it possible to obtain a compound Poisson process with time-varying intensity?
Thanks in advance!

Livvy Zhen, 2018-09-14T03:36:21Z

Prevent the reset of the DynamicImage settings?
http://community.wolfram.com/groups/-/m/t/1455313
Dear all,
In the following code, how can I prevent the reset of the DynamicImage state? I.e., if I have zoomed the image and then move the slider, how can I prevent the DynamicImage from losing the zoom?
img = ExampleData[{"TestImage", "House"}];
image = {img, img*2.0};
Manipulate[
Row[{DynamicImage@img, image[[ind]]}],
{ind, 1, 2, 1}
]
Please note this example is a toy problem.
Thank you,
Luis

Luis Mendes, 2018-09-13T14:41:01Z

Work with functions defined inside a Package?
http://community.wolfram.com/groups/-/m/t/1150206
I have functions defined inside the Begin block of a package in the normal manner (BeginPackage ... foo::usage="bar" ... Begin ... DegreesPerMeterAtmosphere[targelevdeg_, startalt_] := targelevdeg + startalt ... etc.).
I tried to define variables that would be exposed by the package (as below), but it does not work and I can't find the relevant comments in the manuals. This is the simplified code:
BeginPackage[ "FoundationFunctions`"]
speedlight::usage = "The speed of light in meters/sec";
DegreesPerMeterAtmosphere::usage = "DegreesPerMeterAtmosphere[targelevdeg,startalt] stuff"
Begin[ "Private`"]
speedlight:=299792458 (* m/s *);
DegreesPerMeterAtmosphere[targelevdeg_, startalt_] := targelevdeg + startalt
End[]
EndPackage[]
But when I Get[] the package from a notebook, the function is pulled across but not the variable. Specifically, Names["FoundationFunctions`*"] yields {"DegreesPerMeterAtmosphere"}.
Am I trying to do something silly (perhaps one cannot use packages to define/encapsulate variables)? Or have I done the right thing in the wrong way?

Andrew Macafee, 2017-07-20T13:01:51Z

Fitting An Ellipse Inside a Non-Convex Curve
http://community.wolfram.com/groups/-/m/t/1453823
The goal is to find the largest ellipse (with a given ratio of axes), centered at a given point and with a given orientation, that fits inside a specified non-convex oval.
Equation of the oval:
In[1]:= oval[{x_, y_}] = ((x - 1)^2 + y^2) ((x + 1)^2 + y^2) - (21/20)^4;
Derive the equation of an ellipse with axes "a" and "b", centered at {xc, yc}, with major axis making angle \[Theta] with the x-axis:
In[2]:= Thread[{xel, yel} =
DiagonalMatrix[{a, b}^-1].RotationMatrix[-\[Theta]].{x - xc, y - yc}];
In[3]:= eleq[{{a_, b_}, xc_, yc_, \[Theta]_}, {x_, y_}] = xel^2 + yel^2 - 1;
Symbolically find the largest ellipse with axes "a" and "a/2", centered at {1, 1/5} and oriented at \[Pi]/3. RegionWithin takes about 1 minute:
In[4]:= AbsoluteTiming[
RegionWithin[ImplicitRegion[oval[{x, y}] <= 0, {x, y}],
ImplicitRegion[eleq[{{a, a/2}, 1, 1/5, \[Pi]/3}, {x, y}] <= 0, {x, y}],
GenerateConditions -> True] // N]
Out[4]= {63.4787, 0. < a <= 0.315686 || -0.315686 <= a < 0.}
Calculating it numerically with a Lagrange multiplier and NSolve takes about 1 second. The desired answer is the one with the smallest value of a:
In[5]:= AbsoluteTiming[
sln = NSolve[{a >= 0, oval[{x, y}] == 0,
eleq[{{a, a/2}, 1, 1/5, \[Pi]/3}, {x, y}] == 0,
Sequence @@
D[oval[{x,
y}] == \[Lambda] eleq[{{a, a/2}, 1, 1/5, \[Pi]/3}, {x, y}], {{x,
y}}]}, {a, x, y, \[Lambda]}, Reals]]
Out[5]= {0.808361, {{a -> 0.315686, \[Lambda] -> 0.869176, x -> 1.15549,
y -> 0.474695}, {a -> 4.34436, \[Lambda] -> 7.03937, x -> -1.41308,
y -> 0.191823}, {a -> 0.817698, \[Lambda] -> 1.46331, x -> 0.654269,
y -> -0.531984}, {a -> 1.14366, \[Lambda] -> 1.77874, x -> 1.34316,
y -> -0.315728}}}
Eliminating the Lagrange multiplier before solving speeds up the calculation:
In[6]:= AbsoluteTiming[
sln = NSolve[{a >= 0, oval[{x, y}] == 0,
eleq[{{a, a/2}, 1, 1/5, \[Pi]/3}, {x, y}] == 0,
Eliminate[
D[oval[{x,
y}] == \[Lambda] eleq[{{a, a/2}, 1, 1/5, \[Pi]/3}, {x, y}], {{x,
y}}], \[Lambda]]}, {a, x, y}, Reals]]
Out[6]= {0.0641214, {{x -> 1.15549, y -> 0.474695, a -> 0.315686}, {x -> -1.41308,
y -> 0.191823, a -> 4.34436}, {x -> 1.34316, y -> -0.315728,
a -> 1.14366}, {x -> 0.654269, y -> -0.531984, a -> 0.817698}}}
Plotting all the results shows that the curves are tangent at the intersection points.
![enter image description here][1]
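The tangency plot above could be reproduced with something along these lines (my sketch, reusing `oval` and `eleq` from above and the smallest-a solution; the plot range and styling are assumptions):

```mathematica
With[{a0 = 0.315686}, (* smallest a from the NSolve result *)
 ContourPlot[{oval[{x, y}] == 0,
   eleq[{{a0, a0/2}, 1, 1/5, Pi/3}, {x, y}] == 0},
  {x, -1.6, 1.6}, {y, -0.9, 0.9}, AspectRatio -> Automatic,
  Epilog -> {PointSize[Large], Point[{1.15549, 0.474695}]}]]
```

The Point in the Epilog marks the tangency location returned together with the smallest a.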
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ellipse_in_oval.jpg&userId=29126

Frank Kampas, 2018-09-13T00:45:28Z

[GIF] Thoughtform
http://community.wolfram.com/groups/-/m/t/1453464
![enter image description here][1]
Same principle as a previous [post][2], but I added some visual aids to make it more intuitive. Drastically resized due to file-size limits; download the full-size GIF [here][3].
Also had some fun with the colors and had an art print made.
![enter image description here][4]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=framesforward30.053.gif&userId=167076
[2]: http://community.wolfram.com/groups/-/m/t/947494
[3]: https://www.dropbox.com/s/rt7cwewf81a0lfy/Thoughtform%200.053.gif?dl=0
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=IMG_1335copy.JPG&userId=167076

Bryan Lettner, 2018-09-12T23:36:22Z

[✓] Create an image collage and apply Blur?
http://community.wolfram.com/groups/-/m/t/1453520
Hello Everyone,
I am very new to the Wolfram Language and am working through the course book. However, I like to try out my ideas on how things can be done in unconventional ways.
I was trying to produce image collage as:
<pre>
<code>
i = CurrentImage[];
ImageCollage[
Table[f[i], {f, {Blur, EdgeDetect, Binarize}}]
]
</code>
</pre>
And I was wondering: if I would like to apply a parameter to the Blur function, that would be something like partial application, i.e. <code>Blur[5]</code>,
which in turn could be applied as <code>f[i]</code>.
Is such a thing possible?
The way I tried it, <code>Blur[5]</code> doesn't work.
Kind regards
Karol

Karol Kopiec, 2018-09-12T20:38:10Z

Define a piecewise function from a list?
http://community.wolfram.com/groups/-/m/t/1452490
I would like to define a piecewise function by providing two lists and using a for loop
eg:
xlist = {1, 2, 3, 4}
ylist = {4, 5, 6}
f(x) := { ylist[1] if x in [xlist[1], xlist[2]]; ylist[2] if x in [xlist[2], xlist[3]]; ylist[3] if x in [xlist[3], xlist[4]]; 0 otherwise }

Noureddine Toumi, 2018-09-12T17:58:45Z

[Wolfram Media] A Numerical Approach to Real Algebraic Curves
http://community.wolfram.com/groups/-/m/t/1452588
[![enter image description here][1]][2]
Wolfram Media has released a new book, [*A Numerical Approach to Real Algebraic Curves with the Wolfram Language*][2], by Barry H. Dayton. [Dayton][3] is a [mathematician][4] and long-time Mathematica user.
Bridging the gap between the sophisticated topic of real algebraic curve theory and on-the-spot computation and visualization of real algebraic curves, Dayton uses the Wolfram Language to explore and analyze real curves that often do not have rational points on them. In classical texts, analysis of these types of real curves was only really possible in the theoretical sense, but the Wolfram Language's ability to work with machine numbers, both in calculations and in detailed plots, enables accurate analysis of extremely complicated curves. This book is intended for those with some understanding of calculus and partial derivatives and with basic knowledge of the Wolfram Language.
One thing that makes this [Wolfram Media][5] publication unique is that the book is not only available for purchase on Amazon as a Kindle file, but the entire text, with all of the code used to make the plots, is also available for free as downloadable Wolfram Notebooks. The book's unique style includes a large function appendix that evaluates independently of the chapter interface and activates the functions used in the text itself.
Read this month's [article of *The Mathematica Journal* for a summary][6]. Below are a few beautiful images from the article.
![enter image description here][7]
![enter image description here][8]
We're excited for this release as it is the first book by a non-Wolfram author that we've published, and we have several additional titles under consideration for 2019. Please check back on the Publishing and Authoring Group discussion over the next few months for updates!
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-09-12at4.30.29PM.png&userId=20103
[2]: http://www.wolfram-media.com/products/dayton-algebraic-curves.html
[3]: http://barryhdayton.space
[4]: https://scholar.google.com/citations?user=hHz85rIAAAAJ&hl=en
[5]: http://www.wolfram-media.com
[6]: http://www.mathematica-journal.com/2018/08/a-wolfram-language-approach-to-real-numerical-algebraic-plane-curves
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Dayton_PlacedGraphics_1.gif&userId=20103
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Dayton_PlacedGraphics_7.gif&userId=20103Jeremy Sykes2018-09-12T19:30:25ZSimplify the following mathematical expression?
http://community.wolfram.com/groups/-/m/t/1431336
Hey, I want to simplify this calculation (see the file) and write it in the form (1 + a x + b y)/(1 + c x + d y). Is it possible? If yes, how can I do it?Hamza Hboub2018-08-30T15:08:23ZNumerical anomalies in a minimax algorithm
http://community.wolfram.com/groups/-/m/t/1449482
I am trying to compute error bounds for polynomial estimates to $\sin(t\theta)/\sin(\theta)$ for $t \in [0,1]$. The polynomials are of the form $p(x,t)$, where $x = \cos(\theta)$. The polynomials have a constant $u \in [0,1]$ that I want to choose to minimize the maximum error. The mathematical derivation is irrelevant, so I have skipped those details. I wrote code (Mathematica 11.3) to do this and plotted the minimax result as a function of $u$. (I omitted the NMinimize call for h[u] in this code sample.)
a[i_?NumericQ, t_?NumericQ] := If[i >= 1, a[i - 1, t]*(t^2 - i^2)/(i*(2*i + 1)), t]
p[n_?NumericQ, y_?NumericQ, t_?NumericQ] := (sum = a[n, t]; For[i = n - 1, i >= 0, i--, sum = a[i, t] + sum*y]; sum)
f[n_?NumericQ, x_?NumericQ, t_?NumericQ] := Sin[t*ArcCos[x]]/Sin[ArcCos[x]] - p[n, x - 1, t]
g[n_?NumericQ, u_?NumericQ, x_?NumericQ, t_?NumericQ] := Abs[f[n, x, t] - u*a[n, t]*(x - 1)^n]
h[n_?NumericQ, u_?NumericQ] := (result = NMaximize[{g[n, u, x, t], 0 <= x <= 1 && 0 <= t <= 1}, {x, t}]; result[[1]])
Plot[h[8, u], {u, 0.7, 0.9}]
The output of Plot has some numerical anomalies.
![Output of Plot function, default method for NMaximize][1]
When I program this in C++ using double precision, the function h(u) is smooth. Evaluating h[8,0.75], Mathematica produces 0.000058529. Evaluating h[8,0.751], Mathematica produces 9.13505e-06. I did not expect the sawtooth-like behavior of the graph. The valleys do not show up in my C++ computations, which shows effectively a V-shaped graph with vertex near (0.85352, 1.91558e-05). I tried to change the working precision, but the sawtooth behavior persisted.
I switched the method to "Simulated Annealing." The output of the Plot function also has some anomalies.
![Output of Plot, simulated annealing][2]
The outputs at the two aforementioned locations are h[8,0.75] = 0.0000583938 and h[8,0.751] = 0.0000580131, but now the anomalies are in a different region of the graph.
Finally, I tried using "Differential Evolution" as the method. The output looks like what I expected.
![Output of Plot, differential evolution][3]
I know how to debug numerical issues in C++ code using a debugger, but I am a novice at Mathematica and wish to know whether there is some standard approach or set of tools that allows me to diagnose such issues. Also, is there some general advice on choosing the method for minimizing or maximizing? Or is this simply something one has to use trial-and-error to determine? Thank you.
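Since the three `NMaximize` methods behave so differently here, one diagnostic sketch (reusing the `g` defined above, with an added option slot that is not in the original code) is to expose the `NMaximize` options so that `Method` and `WorkingPrecision` can be swapped per plot without redefining `h`:

    h[n_?NumericQ, u_?NumericQ, opts : OptionsPattern[NMaximize]] :=
      First@NMaximize[{g[n, u, x, t], 0 <= x <= 1 && 0 <= t <= 1}, {x, t}, opts]
    Plot[h[8, u, Method -> "DifferentialEvolution"], {u, 0.7, 0.9}]

The same call works with `Method -> "SimulatedAnnealing"` or `WorkingPrecision -> 20`, which makes the method comparison in this post a one-line change.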
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=HGraph8Anomalies.png&userId=1449429
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=HGraph8SimulatedAnnealing.png&userId=1449429
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=HGraph8DifferentialEvolution.png&userId=1449429David Eberly2018-09-11T04:30:56ZGenerate a mesh in order to do a heat transfer analysis?
http://community.wolfram.com/groups/-/m/t/1450254
Hi All,
I'm trying to generate a mesh in order to do a heat transfer analysis and have come up with the code in the attached file based on the Wolfram documentation and other posts in the community.
The mesh is broken up into 9 regions to which I would like to assign material properties. I've had a go at assigning point markers to nodes, which I then intended to use to assign material properties. However, I've come unstuck because the number of incidents I've created doesn't match the number of nodes (i.e. entries in mesh["Coordinates"]), due to each region being meshed separately and there being two incident IDs per node at each interface between regions. I'm new to Mathematica, so I'd be very grateful if anyone could shed some light on how best to go about this. Also, is there a way to show the IDs of all nodes (PointElements?) rather than just the ones on the boundary? I've written my own code to solve for heat transfer in a separate notebook. Many thanks, ArchieArchie Watts-Farmer2018-09-11T14:56:59ZHow to display the underlying symbolic expression without calculating the result
http://community.wolfram.com/groups/-/m/t/1449847
Hi All,
given the data list:-
data = {24, 5, -9, 105, 15, 111};
Table[data[[i]], {i, 1, 6}] (* which shows the values in the data list *)
Out[]= {24, 5, -9, 105, 15, 111}
BUT I want Table[data[[i]],{i,1,6}]
**(* How can I code the output to show the underlying symbolic form, as shown below? *)**
{ data[[1]], data[[2]], data[[3]], data[[4]], data[[5]], data[[6]] }
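One standard approach (a sketch, not from the original post): `Table` substitutes the iterator value even inside held expressions, so wrapping the part extraction in `HoldForm` keeps the symbolic form on screen:

    data = {24, 5, -9, 105, 15, 111};
    Table[HoldForm[data[[i]]], {i, 1, 6}]
    (* displays as {data[[1]], data[[2]], data[[3]], data[[4]], data[[5]], data[[6]]} *)

Applying `ReleaseHold` to the result recovers the ordinary values.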
Many thanks for any help your can offer.
Lea...Lea Rebanks2018-09-11T10:20:06Z[Wellin] Share/Discuss your solutions to select exercises!
http://community.wolfram.com/groups/-/m/t/1442768
First of all, I hope that the contents of this thread do not violate copyrights or the will of any co-author, and that we are allowed to freely post, share, and discuss exercises and exercise solutions, especially our own, found in texts co-authored by [Wellin][1] et al. (Gaylord, Kamin, Wellin). Due to legal concerns, perhaps we cannot republish the exercise problem statements verbatim or in any other copied form (screenshot, photo, scan)?
*Mathematica* learners and readers of these popular programming intro texts should find this collective discussion thread helpful; everyone, especially beginners, is welcome to post, share, and ask. When working with his books I try to solve an exercise on my own, then check the official **.nb** solution (not the **.pdf** file), compare, and merge the two solutions if mine differs conceptually. I include further edits in the **.nb** file, such as code rearrangement/reformatting, further textual explanations, sometimes corrections, text coloring, etc., basically improving the personal usefulness of the solution, e.g. for an eventual [future second read][2].
There are 5 notable titles so far:
- **EPM1** (2016)
- **PWM1** (2013)
- **IPM3** (2005)
- **IPM2** (1996)
- **IPM1** (1993)
Examples from one book can be re-found as exercises in another book, and vice versa, and there is much overlap and similarity of style, text, and exercises among the 5 books. All the material is imho introductory and only suitable for beginners. Like myself.
If you enjoy learning from (one of) these intro texts as much as I do, then this thread shall become the place for you to participate and discuss particular things thereof (solutions, text, questions, wishes, typos, criticism, etc). We would love hearing from you!
[1]: https://www.programmingmathematica.com/books.html
[2]: https://www.youtube.com/watch?v=lItgAV6Ly6MRaspi Rascal2018-09-07T10:55:45ZDoes any US College offer online Mathematica-based introductory calculus?
http://community.wolfram.com/groups/-/m/t/1429873
Someone I know is looking for a course teaching introductory calculus (univariate differential, integral) using the Wolfram Language / Mathematica. The course needs to be taught on an online basis from an accredited US college such that the credits received could be transferred back to his home university. To my surprise, my 10 minutes with a search engine did not find anything current. Is anyone aware of such an offering?Seth Chandler2018-08-29T22:49:41ZWorkarounds for network timeouts when trying to use Interpreter["Person"]
http://community.wolfram.com/groups/-/m/t/1419777
I frequently get network timeout problems when using Interpreter in ways that require connectivity to the Wolfram server. My network connection in general is quite fast, so I don't think that's the issue. Here's an example.
We have a list of presidents using their common names.
presidents= {"George Washington", "John Adams", "Thomas Jefferson", "James \
Madison", "James Monroe", "John Quincy Adams", "Andrew Jackson", \
"Martin Van Buren", "William Henry Harrison", "John Tyler", "James K. \
Polk", "Zachary Taylor", "Millard Fillmore", "Franklin Pierce", \
"James Buchanan", "Abraham Lincoln", "Andrew Johnson", "Ulysses S. \
Grant", "Rutherford B. Hayes", "James A. Garfield", "Chester A. \
Arthur", "Grover Cleveland", "Benjamin Harrison", "Grover Cleveland \
(2nd term)", "William McKinley", "Theodore Roosevelt", "William \
Howard Taft", "Woodrow Wilson", "Warren G. Harding", "Calvin \
Coolidge", "Herbert Hoover", "Franklin D. Roosevelt", "Harry S. \
Truman", "Dwight D. Eisenhower", "John F. Kennedy", "Lyndon B. \
Johnson", "Richard Nixon", "Gerald Ford", "Jimmy Carter", "Ronald \
Reagan", "George H. W. Bush", "Bill Clinton", "George W. Bush", \
"Barack Obama", "Donald Trump"};
I now want to represent them as entities so that users can get further information on them. So, here's the plan. I want to make one call to Interpreter rather than mapping Interpreter over a list of names.
presidentEntities = Interpreter["Person", True &, Missing[], AmbiguityFunction -> First][presidents]
When I do this, I frequently get a network timeout error. Now it's Sunday afternoon here in the US and I wouldn't think this was peak load time. Moreover, I've gotten the error -- and similar errors for other Interpreter calls -- on many other occasions. Moreover, I don't think 45 names should really tax the Wolfram server too hard.
So, are there any user workarounds for this? (I've tried the ugly method of breaking up the list into pieces and then reassembling, but even that sometimes fails). Am I doing something wrong? Is there a way of making some Interpreter code local?
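One possible user-side workaround is a retry wrapper around the call; this is a hedged sketch (the helper name `retryInterpret` is hypothetical, not built in), which retries a few times and returns whatever the last attempt produced:

    retryInterpret[names_List, tries_ : 3] := Catch@Module[{res},
      Do[
        res = Quiet@Interpreter["Person", AmbiguityFunction -> First][names];
        If[FreeQ[res, _Failure], Throw[res]],
        {tries}];
      res]
    presidentEntities = retryInterpret[presidents]
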
Is there some way of determining that the Wolfram Server is having a bad day or hour or suffering a particularly heavy load?
More generally, is there something that can be done about WolframAlpha throughput? The Wolfram Language (as opposed to Mathematica) depends on access to vast amounts of external data. But if I can't count on reliable service, it discourages use of programs and constructs that depend on that data and the Entity construct.Seth Chandler2018-08-22T22:35:58Z[Event] Shanghai User Meetup Review
http://community.wolfram.com/groups/-/m/t/1450141
*All notebooks used in the presentation can be downloaded at the end of the post.*
----------
The idea of the post is to encourage our lovely users to share their experience about Wolfram products in local meetup groups, building up friendship and partnership among our community.
On Saturday 9/8/2018, WRI developer Mr. Shenghui Yang hosted a 12-person private Mathematica user panel to discuss the latest R&D achievements of Wolfram Language V11.1, 11.2 and 11.3, including
- Updates and Improvements for Geo system and Entity
- Neural Network in V11.3
- Wolfram Cloud user interface and deployment
- Several appealing examples of Mathematica dynamic feature in K-12 teaching project
![lecturing][1]
![beginning][2]
## Geo system ##
To have Wolfram Language features more accessible and relatable to our domestic users, Shenghui mixed his real life elements into Wolfram Language. The whole presentation became his daily life storytelling upon Wolfram Language knowledge base.
The W|A command-line interface briefly described the weather conditions on the day of the event
![weather][3]
GeomagneticModelData and GeogravityModelData demonstrate important geophysical properties of Shanghai at the moment of the presentation ;-) No need to worry about any anomalies
![geodata][4]
GeoPosition with a customized GeoMarker visualizes the location of the event on the riverbank of the Yangtze
![marker][5]
GeoDistance, GeoPath and several powerful projection options show our users how Wolfram headquarters relates to the meeting place. One of the ~530 available projection types is used in the example.
In:= GeoProjectionData["LambertAzimuthal"]
Out= {LambertAzimuthal,{Centering->{0,0},GridOrigin->{0,0},ReferenceModel->1}}
In := GeoProjectionData[]//Short
Out:= {Airy,Aitoff,Albers,AmericanPolyconic,ApianI,<<525>>,WinkelTripel}
![path][6]
After marking the places the host visits most frequently in Shanghai, GeoArea + GeoPosition formed a large triangle; combined with EntityValue and related functions, it is easy to extract the ratio of the triangle's area to that of Shanghai
![area][7]
GeoPath and TravelDirectionData also accurately reported how long it takes to travel between and visit all three marked places
![travel][8]
Finally, Shenghui mentioned that the event was hosted in a nice tea house once owned by YueSheng Du, the Shanghai-born mob king and "Godfather of the Far East" during the Chiang Kai-shek era. Related background information can be retrieved both by built-in Entity functions and by ExternalService with BingSearch V5
![history][9]
![bing][10]
## Discussion on K-12 Math Topics ##
This section was aimed specifically at users in the K-12 education industry, and at parents with kids in this age range who are looking for new ways to help them understand the school materials.
Shenghui and several local users reached out to domestic teachers in public and private schools, ranging from elite to mid-level.
Real test problems were collected for the demo. A brief moment was left for the audience to think about the challenging problems before seeing the notebook with the solution. The solutions use Mathematica's strong built-in visualization, dynamic, and CloudDeploy features. One of the most stressful and painful problems in current domestic K-12 math education is that students need to take math-olympiad-level exams for middle and high school. Most kids have no choice but to recite hard-coded hacks to solve the tricky problems in a short time; the lack of understanding and intuitive explanation makes the process even more challenging. The host brought new vision to these problems via graphical presentation.
Here is an example of the non-stop trains problem with a graphical explanation (a 10th-grade math problem). The question asks students to compute the distance between each crossing point. The demo is designed to help students understand the physical process and solve it by hand in the exam, rather than handing them a Mathematica solution
![question][11]
![solution][12]
## Neural Network and AI ##
The presentation is based on the updated version of [Taliesin's][13] [notebook][14] and demo session on [YouTube][15] (some NN layer names were updated in V11.3, e.g. DotPlusLayer -> LinearLayer). The examples are fully tested in the attached notebook for V11.3. Though the topic is quite involved for first-time users, the audience was eager to learn the Wolfram Language. Shenghui and his college roommate, a [Tencent AI Lab][16] senior researcher and also a veteran Mathematica user, have collaboratively initialized a bi-weekly online discussion for domestic Mathematica users. The one-hour AI-topic paper-reading session aims to familiarize users with the basic NN layers in the Wolfram Language and with the different networks available in the [Wolfram Neural Network Repository][17].
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1.jpg&userId=23928
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2.jpg&userId=23928
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3.png&userId=23928
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4.png&userId=23928
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5.png&userId=23928
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6.png&userId=23928
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7.png&userId=23928
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=8.png&userId=23928
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9.png&userId=23928
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10.png&userId=23928
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=11.png&userId=23928
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=12.png&userId=23928
[13]: https://twitter.com/taliesinb
[14]: https://wolfr.am/gLSyxCEE
[15]: https://www.youtube.com/watch?v=FnpqI4REiak
[16]: https://ai.tencent.com/ailab/index.html
[17]: https://resources.wolframcloud.com/NeuralNetRepository/Shenghui Yang2018-09-11T14:30:05ZImageAugmentationLayer on image and target mask
http://community.wolfram.com/groups/-/m/t/1445573
Hi, I'd like to use ImageAugmentationLayer in my binary image segmentation neural network. However, it seems I can't get the ImageAugmentationLayer to apply exactly the same transform to my input image as to my target mask. Is there a hidden way to do this that's not mentioned in the docs? It seems like every invocation of the layer uses a new random crop, but I need the _exact same_ random crop on pairs of images.
Cheers!Carl Lange2018-09-09T12:40:13ZCreating and Evaluating MCQs in the Wolfram Language
http://community.wolfram.com/groups/-/m/t/1447407
Dear community members, does anybody know of any available resources to get started creating and automatically grading MCQs (multiple choice questions) in the Wolfram Language?
Many thanks in advance,
RubenRuben Garcia Berasategui2018-09-10T10:41:14ZAvoid strange results from LinearSolve (and Solve)?
http://community.wolfram.com/groups/-/m/t/1447563
I encounter strange behaviour of LinearSolve (and Solve):
Given a symmetric, positive matrix M (4x4, but nasty expressions) I try to solve the linear system
M.x=rhs
with rhs=(1,0,0,0).
Using LinearSolve[M,rhs] I obtain an answer that yields Indeterminate values when evaluated for special values of M. The same answer is obtained when using Solve.
But if I calculate the Inverse of M and multiply it with rhs, I obtain the correct response, without any Indeterminate entries.
For larger matrices this bypass would become too involved.Alois Steindl2018-09-10T11:30:42Z[WSS18] Reinforcement Q-Learning for Atari Games
http://community.wolfram.com/groups/-/m/t/1380007
## Introduction ##
This project aims to create a neural network agent that plays Atari games. The agent is trained using Q-learning and has no a priori knowledge of the game: it learns by playing and only being told when it loses.
## What is reinforcement learning? ##
Reinforcement learning is an area of machine learning inspired by behavioral psychology. The agent learns what to do, given a situation and a set of possible actions to choose from, in order to maximize a reward. Therefore, to model a problem as a reinforcement learning problem, the game should have a set of states, a set of actions that transfer one state into another, and a reward corresponding to each state. The mathematical formulation of a reinforcement learning problem is called a Markov Decision Process (MDP).
![A visual representation of the reinforcement learning problem][1]
Image From:https://medium.freecodecamp.org/diving-deeper-into-reinforcement-learning-with-q-learning-c18d0db58efe
## Markov Decision Process ##
Before applying a Markov decision process to the problem, we need to make sure the problem satisfies the Markov property: the current state completely represents the state of the environment. In short, the future depends only on the present.
An MDP can be defined by **(S,A,R,P,γ)** where:
- S — set of possible states
- A — set of possible actions
- R — probability distribution of reward given (state, action) pair
- P — probability distribution over how likely any of the states is to
be the new states, given (state, action) pair. Also known as
transition probability.
- γ — reward discount factor
At the initial state $S_{0}$, the agent chooses action $A_{0}$. Then the environment gives reward $R_{0}=R(.|S_{0}, A_{0})$ and next state $S_{1}=P(.|S_{0},A_{0})$. This repeats until the episode ends.
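The quantity the agent tries to maximize can be written as the expected discounted return (standard RL notation, consistent with the $\gamma$ defined above):
$G_{t}=R_{t}+\gamma R_{t+1}+\gamma^{2} R_{t+2}+\dots=\sum_{k=0}^{\infty}\gamma^{k}R_{t+k}$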
## Value Network ##
In value-based RL, the input is the current state (or a combination of a few recent states), and the output is the estimated future reward of every possible action at that state. The goal is to optimize the value function so that the predicted value is close to the actual reward. In the following graph, each number in a box represents the distance from that box to the goal.
![Value network example][2]
Image From:https://medium.freecodecamp.org/diving-deeper-into-reinforcement-learning-with-q-learning-c18d0db58efe
## Deep Q-Learning ##
Deep Q-learning is the algorithm I used to construct the agent. The basic idea of the Q function is to take a state and an action and output the corresponding sum of rewards until the end of the game. In deep Q-learning, we use a neural network as the Q function, so we can use one state as input and let the network generate predictions for all possible actions.
The Q function is stated as follows:
$Q(S_{t},A) = R_{t+1}+\gamma \max_{A'}Q(S_{t+1},A')$
where:
- $Q(S_{t},A)$ — the predicted sum of rewards given the current state and the selected action
- $R_{t+1}$ — the reward received after taking the action
- $\gamma$ — the discount factor
- $\max_{A'}Q(S_{t+1},A')$ — the best prediction for the next state
As we can see, given the current state and action, the Q function outputs the current reward plus the discounted maximum of the predictions for the next state. This function iteratively predicts the reward until the end of the game, where Q[S,A] = R. We can therefore compute the loss by subtracting the sum of the reward and the prediction for the next state from the prediction for the current state. When the loss equals 0, the function perfectly predicts the reward of all actions. In a sense, the Q function is predicting the future value of its own prediction. One might ask how such a function could ever converge. Indeed, it is usually hard to get it to converge, but when it does, the performance is very good. There are many techniques that can speed up the convergence of the Q function; I will describe a few that I used in this project.
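As a concrete illustration of the loss described above, here is a minimal sketch with an illustrative stand-in for the network (all names here are hypothetical, not from the project code below):

    qNet = Function[s, {0.1, 0.2}];  (* stand-in: maps a state to one value per action *)
    gamma = 0.95; reward = 1.; action = 2;
    state = {0., 0., 0., 0.}; nextState = {0.1, 0., 0., 0.};
    target = reward + gamma*Max[qNet[nextState]];  (* R + gamma * max Q(S', A') *)
    loss = (qNet[state][[action]] - target)^2      (* training drives this toward 0 *)
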
## Experience Replay ##
Experience replay means that the agent remembers the states it has experienced and learns from that experience during training. It makes more efficient use of the generated data by learning from it multiple times, which matters when gaining experience is expensive for the agent. Since the Q function usually doesn't converge quickly, many of the outcomes in the experience buffer are similar, so multiple passes over the same data are useful.
## Decaying Random Factor ##
The random factor is the probability that the agent chooses a random action instead of the best predicted action. It lets the agent start as a random player, increasing the diversity of the samples. The factor decays as more games are played, so the agent is increasingly reinforced on its own action pattern.
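A decaying epsilon-greedy choice can be sketched as follows (illustrative names; the actual project code below does the same thing with `Max[rand, 0.1]` and `Power[randomDiscount, #AbsoluteBatch]`):

    (* with probability epsilon pick a random action, otherwise the network's best guess *)
    chooseAction[net_, obs_, batch_, decay_ : 0.95] :=
      With[{epsilon = Max[decay^batch, 0.1]},
        If[RandomReal[] <= epsilon, RandomChoice[{0, 1}], net[obs]]]
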
## Combine Multiple Observations As Input ##
The following image shows a single frame taken from the Atari game Breakout. From this image, the agent can capture information about the location of the ball, the location of the paddle, etc. But some important information is missing. If you were the agent and this image were shown to you, what action would you choose? Feel like something is missing? Is the ball going right or left? Is it going up or down?
![breakout frame1][3]
Generated Using openAI Gym
The following images are two consecutive frames taken from the game Breakout. From these two images the agent can capture the direction of the ball and also its speed. People tend to forget this, since processing recent memories while playing a game comes naturally to us, but not to a reinforcement agent.
![breakout frame1][4]![frame 2][5]
Generated Using openAI Gym
## Agent Play in CartPole environment ##
The main environment in which the agent learns and is tested is the CartPole environment. It consists of two movable parts: the cart, which the agent controls and which has two possible actions in every state (move left or right), and the pole. The environment simulates the effect of gravity on the pole, which makes it fall left or right depending on its orientation relative to the horizon. The environment is considered solved when the average number of steps the agent survives over 100 games exceeds 195. The following graph is a visual representation of the environment: the blue rectangle is the pole, the black box is the cart, and the black line is the horizon.
![cart pole sample][6]
First, let's create an environment
$env = RLEnvironmentCreate["WLCartPole"]
Then, initialize a network for this environment and a generator
policyNet =
NetInitialize@
NetChain[{LinearLayer[128], Tanh, LinearLayer[128], Tanh,
LinearLayer[2]}, "Input" -> 8,
"Output" -> NetDecoder[{"Class", {0, 1}}]];
generator := creatGenerator[$env, 20, 10000, False, 0.98, 1000, 0.95, False]
The generator function plays the game and generates input-output pairs to train the network.
Inside the generator, the replay buffer (`processed`) is initialized; `$rewardList` records the performance, and `best` records the peak performance.
If[#AbsoluteBatch == 0,
processed = <|"action"->{},"observation"->{},"next"->{},"reward"->{}|>;
$rewardList = {};
$env=env;
best = 0;
];
Then environment data are generated by the game function and preprocessed. At the start of training, the generator produces extra data to fill the replay buffer.
If[#AbsoluteBatch == 0,
experience = preprocess[game[start,maxEp,#Net, render, Power[randomDiscount,#AbsoluteBatch], $env], nor]
,
experience = preprocess[game[1,maxEp,#Net, render, Power[randomDiscount,#AbsoluteBatch],$env], nor]
];
The game function is below; it joins the current observation with the previous one to form the input to the network.
game[ep_Integer,st_Integer,net_NetChain,render_, rand_, $env_, end_:Function[False]]:= Module[{
states, list,next,observation, punish,choiceSpace,
state,ob,ac,re,action
},
choiceSpace = NetExtract[net,"Output"][["Labels"]];
states = <|"observation"->{},"action"->{},"reward"->{},"next"->{}|>;
Do[
state["Observation"] = RLEnvironmentReset[$env]; (* reset every episode *)
ob = {};
ac = {};
re = {};
next = {};
Do[
observation = {};
observation = Join[observation,state["Observation"]];
If[ob=={},
observation = Join[observation,state["Observation"]]
,
observation = Join[observation, Last[ob][[;;Length[state["Observation"]]]]]
];
action = If[RandomReal[]<=Max[rand,0.1],
RandomChoice[choiceSpace]
,
net[observation]
];
(*Print[action];*)
AppendTo[ob, observation];
AppendTo[ac, action];
state = RLEnvironmentStep[$env, action, render];
If[Or[state["Done"], end[state]],
punish = - Max[Values[net[observation,"Probabilities"]]] - 1;
AppendTo[re, punish];
AppendTo[next, observation];
Break[]
,
AppendTo[re, state["Reward"]];
observation = state["Observation"];
observation = Join[observation, ob[[-1]][[;;Length[state["Observation"]]]]];
AppendTo[next, observation];
];
,
{step, st}];
AppendTo[states["observation"], ob];
AppendTo[states["action"], ac];
AppendTo[states["reward"], re];
AppendTo[states["next"], next];
,
{episode,ep}
];
(* close the $environment when done *)
states
]
The preprocess function flattens the input and has an option to normalize the observations
preprocess[x_, nor_:False] := Module[{result},(
result = <||>;
result["action"] = Flatten[x["action"]];
If[nor,
result["observation"] = N[Normalize/@Flatten[x["observation"],1]];
result["next"] = N[Normalize/@Flatten[x["next"],1]];
,
result["observation"] = Flatten[x["observation"],1];
result["next"] = Flatten[x["next"],1];
];
result["reward"] = Flatten[x["reward"]];
result
)]
Continuing with the generator: after getting the data from the game, it measures and records the performance.
NotebookDelete[temp];
reward = Length[experience["action"]];
AppendTo[$rewardList,reward];
temp=PrintTemporary[reward];
Records the net with best performance
If[reward>best,best = reward;bestNet = #Net];
Add this experience to the replay buffer
AppendTo[processed["action"],#]&/@experience["action"];
AppendTo[processed["observation"],#]&/@experience["observation"];
AppendTo[processed["next"],#]&/@experience["next"];
AppendTo[processed["reward"],#]&/@experience["reward"];
Make sure the total size of replay buffer does not exceed the limit
len = Length[processed["action"]] - replaySize;
If[len > 0,
processed["action"] = processed["action"][[len;;]];
processed["observation"] = processed["observation"][[len;;]];
processed["next"] = processed["next"][[len;;]];
processed["reward"] = processed["reward"][[len;;]];
];
Add input of the network to the result
pos = RandomInteger[{1,Length[processed["action"]]},#BatchSize];
result = <||>;
result["Input"] = processed["observation"][[pos]];
Calculate the output based on the next state and reward, and add it to the result
predictionsOfCurrentObservation = Values[#Net[processed["observation"][[pos]],"Probabilities"]];
rewardsOfAction = processed["reward"][[pos]];
maxPredictionsOfNextObservation = gamma*Max[Values[#]]&/@#Net[processed["next"][[pos]],"Probabilities"];
temp = rewardsOfAction + maxPredictionsOfNextObservation;
MapIndexed[
(predictionsOfCurrentObservation[[First@#2,(#1+1)]]=temp[[First@#2]])&,(processed["action"][[pos]]-First[NetExtract[#Net,"Output"][["Labels"]]])
];
result["Output"] = predictionsOfCurrentObservation;
result
In the end, we can start training
trained =
NetTrain[policyNet, generator,
LossFunction -> MeanSquaredLossLayer[], BatchSize -> 32,
MaxTrainingRounds -> 2000]
## Performance of the agent ##
![enter image description here][7]
The graph above shows the performance of the agent over 1000 games in the CartPole environment. The agent starts with random play, which survives only a small number of steps per episode. The performance stays low until about 800 games, then starts to increase rapidly. At the end of training, the performance jumps from about 3k steps to 10k steps (the maximum allowed per game) within 4 games. This illustrates that although the Q function is hard to get to converge, once it converges the performance is very good.
## Future Directions ##
The current agent uses the classical DQN as its main structure. Other techniques like Noisy Nets, DDQN, Prioritized Replay, etc. can help the Q function converge in a shorter time. Other algorithms such as Rainbow, which builds on Q-learning, will be the next step of this project.
The code can be found on the [github link][8]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=rl.png&userId=1363029
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=vn.png&userId=1363029
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=breakout1.png&userId=1363029
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=breakout1.png&userId=1363029
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=breakout2.png&userId=1363029
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=cp.png&userId=1363029
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=performance.png&userId=1363029
[8]: https://github.com/ianfanx/wss2018ProjectIan Fan2018-07-11T20:52:09ZHighlight sections of code when I click my cursor at a particular point?
http://community.wolfram.com/groups/-/m/t/1444669
Is there a way to have Mathematica automatically highlight sections of code when I click my cursor at a particular point in my code? For example, if I click at the second bracket in this: <br>
`F[G[H[t]]]` <br>
It will highlight the entire `[H[t]]` instead of the end brackets?Joshua Champion2018-09-08T17:06:46Z