Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Data Science, sorted by activity

Lazy lists in Mathematica
http://community.wolfram.com/groups/-/m/t/1467915
Hi all. In this post I want to demonstrate a package I wrote (and still tweak here and there), which implements Haskell-style lazy lists in Mathematica. Anyone who wants to play around with it can find it here on GitHub:
https://github.com/ssmit1986/lazyLists
## What are lazy lists? ##
Before diving into implementation details, let's give some motivation for the use of lazy lists. A lazy list is a method to implicitly represent a long (possibly infinite) list. Of course, you cannot truly store an infinite list in your computer, so the central idea is to represent the list as a linked list consisting of two parts. This means that a `lazyList` will always look like this:
lazyList[first, tail]
Here, `first` is the first element of the list and `tail` is a held expression that, when evaluated, will give you the rest of the list, which is again a `lazyList` with a first element and a tail. So in other words: elements of a `lazyList` are only generated when they are needed and not before. This makes it possible to represent infinite lists and perform list operations on them. To get the elements of the list, one can simply evaluate the tail as often as needed to progress through the list.
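To make the evaluation behaviour concrete, here is a minimal self-contained sketch using a hypothetical head `myLazy` (a stand-in for the package's `lazyList`, which is defined properly in the Implementation section):

```mathematica
(* myLazy is a hypothetical stand-in for lazyList, for illustration only *)
Attributes[myLazy] = {HoldRest};

(* the infinite list 1, 1, 1, ...: the tail stays unevaluated until asked for *)
ones = myLazy[1, ones];

First[ones]        (* 1 *)
First[Last[ones]]  (* 1: evaluating the tail just produces the same list again *)
```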
For example, let's define `lazyRange[]` as the lazy list of all positive integers. Then
Map[#^2&, lazyRange[]]
becomes the infinite list of all squares, again represented as a lazy list. You can go even further, though. For example, you can generate the triangular numbers by doing a `FoldList` over the integers and then select the odd ones with a `Select`:
Select[FoldList[Plus, 0, lazyRange[]], OddQ]
which is yet another lazy list. So if we want the first 100 odd triangular numbers, we simply evaluate the tail of this lazy list 99 times to get them. In contrast, if you tried to do this with a normal list, you could do something like this:
Select[FoldList[Plus, 0, Range[n]], OddQ]
However, what value should you pick for `n`? If you pick it too low, you won't get your 100 numbers. If it's too high, you're doing too much work. Of course you could write some sort of `While` loop, but the code for that would be less concise and doesn't really play into the strengths of Wolfram Language.
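For comparison, the lazy version needs no guess for `n` at all. Assuming the lazyLists package from the GitHub link above is loaded (so that `FoldList`, `Select` and `Take` work on lazy lists), the first five odd triangular numbers, which are 1, 3, 15, 21 and 45, come out directly:

```mathematica
(* sketch, assuming the lazyLists package is loaded *)
Take[Select[FoldList[Plus, 0, lazyRange[]], OddQ], 5]
```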
## Implementation ##
To illustrate how my code works, I will reproduce simplified versions of it in this post; the actual package code differs in some respects for efficiency reasons.
The easiest way to prevent the tail from evaluating is to give `lazyList` the `HoldRest` attribute, which is how I implemented them:
Attributes[lazyList] = {HoldRest}
Next, we need some way to construct basic infinite lists like the positive integers. This is generally done recursively. My `lazyRange[]` function takes up to 2 arguments: a starting value (1 by default) and an increment value (also 1 by default):
lazyRange[start : _ : 1, step : _ : 1] := lazyList[start, lazyRange[start + step, step]]
We can extract the first element with `First` and advance through the list with `Last`:
First@lazyRange[]
First@Last@lazyRange[]
First@Last@Last@lazyRange[]
Out[100]= 1
Out[101]= 2
Out[102]= 3
We can also check that the tail of `lazyRange[]` is equal to the list of integers starting from 2:
In[103]:= Last@lazyRange[] === lazyRange[2]
Out[103]= True
Of course, iterating `Last` can be done with `NestList`, so if we want to get the first `n` elements of the lazy list, we can define the following special functionality for `Take` by setting an `UpValue` for `lazyList`:
lazyList /: Take[l_lazyList, n_Integer] := NestList[Last, l, n - 1][[All, 1]]
Take[lazyRange[], 10]
Out[105]= {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
As it turns out, nesting `Last` isn't actually the most efficient way to do this, so I ended up implementing `Take` with `ReplaceRepeated` and `Sow`/`Reap` to make the best use of the pattern matching capabilities of WL.
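For the curious, here is a rough sketch of what such a `ReplaceRepeated` + `Sow`/`Reap` approach might look like (illustrative only; the package code differs in its details):

```mathematica
(* illustrative sketch only; the actual package implementation differs *)
takeSown[l_lazyList, n_Integer] := Module[{i = 0},
  First @ Last @ Reap[
    (* each match sows the head and exposes the tail for the next match *)
    l //. lazyList[first_, tail_] /; i < n :> (i++; Sow[first]; tail)
  ]
]
```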
Next, we want to be able to do transformations on `lazyList`s. The simplest one is `Map`: you simply create a `lazyList` with the function applied to the first element and then `Map` the function over the tail:
lazyList /: Map[f_, lazyList[first_, tail_]] := lazyList[
f[first],
Map[f, tail]
];
Map[#^2 &, lazyRange[]]
Take[%, 10]
Out[117]= lazyList[1, (#1^2 &) /@ lazyRange[1 + 1, 1]]
Out[118]= {1, 4, 9, 16, 25, 36, 49, 64, 81, 100}
Similarly, `Select` is easily implemented by repeatedly evaluating the tail until we find an element that satisfies the selector function `f`. Once we find such an element, we return a `lazyList` with it at the head:
lazyList /: Select[lazyList[first_, tail_], f_] /; f[first] := lazyList[first, Select[tail, f]];
lazyList /: Select[lazyList[first_, tail_], f_] := Select[tail, f];
As an example, we can now find the first 10 numbers that are coprime to 12 and the first 10 squares that are 1 more than a multiple of 3:
Take[Select[lazyRange[], CoprimeQ[#, 12] &], 10]
Take[Select[Map[#^2 &, lazyRange[]], Mod[#, 3] === 1 &], 10]
Out[128]= {1, 5, 7, 11, 13, 17, 19, 23, 25, 29}
Out[129]= {1, 4, 16, 25, 49, 64, 100, 121, 169, 196}
I hope this gives a good enough overview of the benefits of `lazyList`s and an idea of how to use them. In the package I tried to implement other list-processing functionality (such as `MapIndexed`, `MapThread`, `FoldList`, `Transpose`, `Cases`, and `Pick`) for lazy lists as efficiently as possible.
Please let me know if you have further suggestions!

Sjoerd Smit, 2018-09-19T22:04:27Z

Metaprogramming: the Future of the Wolfram Language
http://community.wolfram.com/groups/-/m/t/1435093
With all the marvelous new functionality that we have come to expect with each release, it is sometimes challenging to maintain a grasp on what the Wolfram language encompasses currently, let alone imagine what it might look like in another ten years. Indeed, the pace of development appears to be accelerating, rather than slowing down.
However, I predict that the "problem" is soon about to get much, much worse. What I foresee is a step change in the pace of development of the Wolfram Language that will produce in days and weeks, or perhaps even hours and minutes, functionality that might currently take months or years to develop.
So obvious and clear cut is this development that I have hesitated to write about it, concerned that I am simply stating something that is blindingly obvious to everyone. But I have yet to see it even hinted at by others, including Wolfram. I find this surprising, because it will revolutionize the way in which not only the Wolfram language is developed in future, but in all likelihood programming and language development in general.
The key to this paradigm shift lies in the following unremarkable-looking WL function WolframLanguageData[], which gives a list of all Wolfram Language symbols and their properties. So, for example, we have:
WolframLanguageData["SampleEntities"]
![enter image description here][1]
This means we can treat WL language constructs as objects, query their properties and apply functions to them, such as, for example:
WolframLanguageData["Cos", "RelationshipCommunityGraph"]
![enter image description here][2]
In other words, the WL gives us the ability to traverse the entirety of the WL itself, combining WL objects into expressions, or programs. This process is one definition of the term “Metaprogramming”.
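As a small illustration of treating the language as data (the property name in the second query is as I recall it; `WolframLanguageData["Properties"]` lists what is actually available in your version):

```mathematica
(* list the available metadata properties for WL symbols *)
WolframLanguageData["Properties"] // Short

(* symbols that the knowledgebase considers related to Map *)
WolframLanguageData["Map", "RelatedSymbols"]
```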
What I am suggesting is that in future much of the heavy lifting will be carried out, not by developers, but by WL programs designed to produce code by metaprogramming. If successful, such an approach could streamline and accelerate the development process, speeding it up many times and, eventually, opening up areas of development that are currently beyond our imagination (and, possibly, our comprehension).
So how does one build a metaprogramming system? This is where I should hand off to a computer scientist (and will happily do so as soon as one steps forward to take up the discussion). But here is a simple outline of one approach.
The principal tool one might use for such a task is genetic programming:
WikipediaData["Genetic Programming"]
> In artificial intelligence, genetic programming (GP) is a technique whereby computer programs are encoded as a set of genes that are then modified (evolved) using an evolutionary algorithm (often a genetic algorithm, "GA") – it is an application of (for example) genetic algorithms where the space of solutions consists of computer programs. The results are computer programs that are able to perform well in a predefined task. The methods used to encode a computer program in an artificial chromosome and to evaluate its fitness with respect to the predefined task are central in the GP technique and still the subject of active research.
One can take issue with this explanation on several fronts, in particular the suggestion that GP is used primarily as a means of generating a computer program for performing a predefined task. That may certainly be the case, but need not be.
Leaving that aside, the idea in simple terms is that we write a program that traverses the WL structure in some way, splicing together language objects to create a WL program that “does something”. That “something” may be a predefined task and indeed this would be a great place to start: to write a GP metaprogramming system that creates WL programs that replicate the functionality of existing WL functions. Most of the generated programs would likely be uninteresting, slower versions of existing functions; but it is conceivable, I suppose, that some of the results might be of academic interest, or indicate a potentially faster computation method, perhaps. However, the point of the exercise is to get started on the metaprogramming project, with a simple(ish) task with very clear, pre-defined goals and producing results that are easily tested. In this case the “objective function” is a comparison of results produced by the inbuilt WL functions vs the GP-generated functions, across some selected domain for the inputs.
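As a toy illustration of such an objective function, one might score a candidate program by how often it agrees with the built-in reference on randomly sampled inputs (a deliberately naive sketch, not a full GP fitness function):

```mathematica
(* fraction of random inputs on which candidate and reference agree *)
fitness[candidate_, reference_, inputs_List] :=
  N @ Mean[Boole[candidate[#] === reference[#]] & /@ inputs]

(* a candidate that happens to reproduce ReverseSort exactly *)
fitness[Reverse[Sort[#]] &, ReverseSort, RandomInteger[{0, 9}, {50, 6}]]
(* 1. *)
```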
I glossed over the question of exactly how one “traverses the WL structure” for good reason: I feel sure that there must have been tremendous advances in the theory of how to do this in the last 50 years. But just to get the ball rolling, one could, for instance, operate a dual search, with a local search evaluating all of the functions closely connected to the (randomly chosen) starting function (WL object), while a second “long distance” search jumps randomly to a group of functions some specified number of steps away from the starting function.
[At this point I envisage the computer scientists rolling their eyes and muttering "doesn't this idiot know about the {fill in the blank} theorem about efficient domain search algorithms?"].
Anyway, to continue. The initial exercise is about the mechanics of the process rather than the outcome. The second stage is much more challenging, as the goal is to develop new functionality, rather than simply to replicate what already exists. It would entail defining a much more complex objective function, as well as perhaps some constraints on program size, the number and types of WL objects used, etc.
An interesting exercise, for example, would be to try to develop a metaprogramming system capable of winning the Wolfram One-Liner contest. Here, one might characterize the objective function as “something interesting and surprising”, and we would impose a tight constraint on the length of programs generated by the metaprogramming system to a single line of code.
What is “interesting and surprising”? To be defined – that’s a central part of the challenge. But, in principle, I suppose one might try to train a neural network to classify whether or not a result is “interesting” based on the results of prior one-liner competitions.
From there, it’s on to the hard stuff: designing metaprogramming systems to produce WL programs of arbitrary length and complexity to do “interesting stuff” in a specific domain. That “interesting stuff” could be, for instance, a more efficient approximation for a certain type of computation, a new algorithm for detecting certain patterns, or coming up with some completely novel formula or computational concept.
Obviously one faces huge challenges in this undertaking; but the potential rewards are also enormous in terms of accelerating the pace of language development and discovery. It is a fascinating area for R&D, one that the WL is ideally situated to exploit. Indeed, I would be mightily surprised to learn that there is not already a team engaged on just such research at Wolfram. If so, perhaps one of them could comment here?
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10942Fig1.png&userId=773999
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=O_12.png&userId=773999

Jonathan Kinlay, 2018-09-02T13:38:13Z

Music Generation with GAN MidiNet
http://community.wolfram.com/groups/-/m/t/1435251
I generated music with reference to [MidiNet][1]. Most neural network models for music generation use recurrent neural networks; MidiNet, however, uses convolutional neural networks.
There are three models in MidiNet. Model 1 is a melody generator with no chord condition; Models 2 and 3 are melody generators with a chord condition. I tried Model 1, because it is the most interesting of the three models compared in the paper.
**Get MIDI data**
-----------------------
My favorite Jazz bassist is [Jaco Pastorius][2]. I get MIDI data from [here][3]. For example, I get MIDI data of "The Chicken".
url = "http://www.midiworld.com/download/1366";
notes = Select[Import[url, {"SoundNotes"}], Length[#] > 0 &];
The notes contain several instrument styles; I pick out the bass style.
notes[[All, 3, 3]]
Sound[notes[[1]]]
![enter image description here][4]
![enter image description here][5]
I convert the MIDI data to image data. I fix the smallest note unit to be the sixteenth note: I divide the MIDI data into sixteenth-note periods and select the sound found at the beginning of each period. The pitch of the SoundNote function runs from 1 to 128, so I convert each bar to a grayscale image (h = 128, w = 16).
First, I create the rule that maps each note pitch (C-1, ..., G9) to a number (1, ..., 128); for example, C4 -> 61.
codebase = {"C", "C#", "D", "D#", "E" , "F", "F#", "G", "G#" , "A",
"A#", "B"};
num = ToString /@ Range[-1, 9];
pitch2numberrule =
Take[Thread[
StringJoin /@ Reverse /@ Tuples[{num, codebase}] ->
Range[0, 131] + 1], 128]
![enter image description here][6]
Next, I change each bar to image (h = 128*w = 16).
tempo = 108;
note16 = 60/(4*tempo); (* length in seconds of one sixteenth note *)
select16[snlist_, t_] :=
Select[snlist, (t <= #[[2, 1]] <= t + note16) || (t <= #[[2, 2]] <=
t + note16) || (#[[2, 1]] < t && #[[2, 2]] > t + note16) &, 1]
selectbar[snlist_, str_] :=
select16[snlist, #] & /@ Most@Range[str, str + note16*16, note16]
selectpitch[x_] := If[x === {}, 0, x[[1, 1]]] /. pitch2numberrule
pixelbar[snlist_, t_] := Module[{bar, x, y},
bar = selectbar[snlist, t];
x = selectpitch /@ bar;
y = Range[16];
Transpose[{x, y}]
]
imagebar[snlist_, t_] := Module[{image},
image = ConstantArray[0, {128, 16}];
Quiet[(image[[129 - #[[1]], #[[2]]]] = 1) & /@ pixelbar[snlist, t]];
Image[image]
]
soundnote2image[soundnotelist_] := Module[{min, max, data2},
{min, max} = MinMax[#[[2]] & /@ soundnotelist // Flatten];
data2 = {#[[1]], #[[2]] - min} & /@ soundnotelist;
Table[imagebar[data2, t], {t, 0, max - min, note16*16}]
]
(images1 = soundnote2image[notes[[1]]])[[;; 16]]
![enter image description here][7]
**Create the training data**
-----------------------
First, I truncate images1 to an integer multiple of the batch size. Its length is then 128 bars, about 284 seconds of music, with a batch size of 16.
batchsize = 16;
getbatchsizeimages[i_] := i[[;; batchsize*Floor[Length[i]/batchsize]]]
imagesall = Flatten[Join[getbatchsizeimages /@ {images1}]];
{Length[imagesall], Length[imagesall]*note16*16 // N}
![enter image description here][8]
MidiNet proposes a novel conditional mechanism that uses music from the previous bar to condition the generation of the present bar, taking into account temporal dependencies across bars. Each training datum for MidiNet (Model 1: melody generator, no chord condition) therefore consists of three parts: "noise", "prev" and "Input". "noise" is a 100-dimensional random vector. "prev" is the image data (1*128*16) of the previous bar. "Input" is the image data (1*128*16) of the present bar. The first "prev" of each batch is all zeros.
I generate training data with a batch size of 16 as follows.
randomDim = 100;
n = Floor[Length@imagesall/batchsize];
noise = Table[RandomReal[NormalDistribution[0, 1], {randomDim}],
batchsize*n];
input = ArrayReshape[ImageData[#], {1, 128, 16}] & /@
imagesall[[;; batchsize*n]];
prev = Flatten[
Join[Table[{{ConstantArray[0, {1, 128, 16}]},
input[[batchsize*(i - 1) + 1 ;; batchsize*i - 1]]}, {i, 1, n}]],
2];
trainingData =
AssociationThread[{"noise", "prev",
"Input"} -> {#[[1]], #[[2]], #[[3]]}] & /@
Transpose[{noise, prev, input}];
**Create GAN**
-----------------------
I create the generator with reference to MidiNet.
generator = NetGraph[{
1024, BatchNormalizationLayer[], Ramp, 256,
BatchNormalizationLayer[], Ramp, ReshapeLayer[{128, 1, 2}],
DeconvolutionLayer[64, {1, 2}, "Stride" -> {2, 2}],
BatchNormalizationLayer[], Ramp,
DeconvolutionLayer[64, {1, 2}, "Stride" -> {2, 2}],
BatchNormalizationLayer[], Ramp,
DeconvolutionLayer[64, {1, 2}, "Stride" -> {2, 2}],
BatchNormalizationLayer[], Ramp,
DeconvolutionLayer[1, {128, 1}, "Stride" -> {2, 1}],
LogisticSigmoid,
ConvolutionLayer[16, {128, 1}, "Stride" -> {2, 1}],
BatchNormalizationLayer[], Ramp,
ConvolutionLayer[16, {1, 2}, "Stride" -> {1, 2}],
BatchNormalizationLayer[], Ramp,
ConvolutionLayer[16, {1, 2}, "Stride" -> {1, 2}],
BatchNormalizationLayer[], Ramp,
ConvolutionLayer[16, {1, 2}, "Stride" -> {1, 2}],
BatchNormalizationLayer[], Ramp, CatenateLayer[],
CatenateLayer[], CatenateLayer[],
CatenateLayer[]}, {NetPort["noise"] ->
1, NetPort["prev"] -> 19,
19 -> 20 ->
21 -> 22 -> 23 -> 24 -> 25 -> 26 -> 27 -> 28 -> 29 -> 30,
1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7, {7, 30} -> 31,
31 -> 8 -> 9 -> 10, {10, 27} -> 32,
32 -> 11 -> 12 -> 13, {13, 24} -> 33,
33 -> 14 -> 15 -> 16, {16, 21} -> 34, 34 -> 17 -> 18},
"noise" -> {100}, "prev" -> {1, 128, 16}
]
![enter image description here][9]
I create a discriminator that does not have BatchNormalizationLayer or LogisticSigmoid, because I use [Wasserstein GAN][10], which is easier to stabilize during training.
discriminator = NetGraph[{
ConvolutionLayer[64, {89, 4}, "Stride" -> {1, 1}], Ramp,
ConvolutionLayer[64, {1, 4}, "Stride" -> {1, 1}], Ramp,
ConvolutionLayer[16, {1, 4}, "Stride" -> {1, 1}], Ramp,
1},
{1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7}, "Input" -> {1, 128, 16}
]
![enter image description here][11]
I create the Wasserstein GAN network.
ganNet = NetInitialize[NetGraph[<|"gen" -> generator,
"discrimop" -> NetMapOperator[discriminator],
"cat" -> CatenateLayer[],
"reshape" -> ReshapeLayer[{2, 1, 128, 16}],
"flat" -> ReshapeLayer[{2}],
"scale" -> ConstantTimesLayer["Scaling" -> {-1, 1}],
"total" -> SummationLayer[]|>,
{{NetPort["noise"], NetPort["prev"]} -> "gen" -> "cat",
NetPort["Input"] -> "cat",
"cat" ->
"reshape" -> "discrimop" -> "flat" -> "scale" -> "total"},
"Input" -> {1, 128, 16}]]
![enter image description here][12]
**NetTrain**
-----------------------
I train using the training data created above. I use RMSProp as the NetTrain method, following the Wasserstein GAN paper. Training takes about one hour using a GPU.
net = NetTrain[ganNet, trainingData, All, LossFunction -> "Output",
Method -> {"RMSProp", "LearningRate" -> 0.00005,
"WeightClipping" -> {"discrimop" -> 0.01}},
LearningRateMultipliers -> {"scale" -> 0, "gen" -> -0.2},
TargetDevice -> "GPU", BatchSize -> batchsize,
MaxTrainingRounds -> 50000]
![enter image description here][13]
**Create MIDI**
-----------------------
I create image data for 16 bars using the generator of the trained network.
bars = {};
newbar = Image[ConstantArray[0, {1, 128, 16}]];
For[i = 1, i < 17, i++,
noise1 = RandomReal[NormalDistribution[0, 1], {randomDim}];
prev1 = {ImageData[newbar]};
newbar =
NetDecoder[{"Image", "Grayscale"}][
NetExtract[net["TrainedNet"], "gen"][<|"noise" -> noise1,
"prev" -> prev1|>]];
AppendTo[bars, newbar]
]
bars
![enter image description here][14]
Because images generated by a Wasserstein GAN tend to be blurred, I keep only the pixel with the maximum value in each column of each image. This cleans up the images.
clearbar[bar_, threshold_] := Module[{i, barx, col, max},
barx = ConstantArray[0, {128, 16}];
col = Transpose[bar // ImageData];
For[i = 1, i < 17, i++,
max = Max[col[[i]]];
If[max >= threshold,
barx[[First@Position[col[[i]], max, 1], i]] = 1]
];
Image[barx]
]
bars2 = clearbar[#, 0.1] & /@ bars
![enter image description here][15]
I convert the images to SoundNote expressions, concatenating identical consecutive pitches.
number2pitchrule = Reverse /@ pitch2numberrule;
images2soundnote[img_, start_] :=
SoundNote[(129 - #[[2]]) /.
number2pitchrule, {(#[[1]] - 1)*note16, #[[1]]*note16} + start,
"ElectricBass", SoundVolume -> 1] & /@
Sort@(Reverse /@ Position[(img // ImageData) /. (1 -> 1.), 1.])
snjoinrule = {x___, SoundNote[s_, {t_, u_}, v_, w_],
SoundNote[s_, {u_, z_}, v_, w_], y___} -> {x,
SoundNote[s, {t, z}, v, w], y};
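A quick check of the join rule on a pair of adjacent notes with the same pitch (using the `snjoinrule` defined above):

```mathematica
{SoundNote["C4", {0, 0.1}, "ElectricBass", SoundVolume -> 1],
  SoundNote["C4", {0.1, 0.2}, "ElectricBass", SoundVolume -> 1]} //. snjoinrule
(* {SoundNote["C4", {0, 0.2}, "ElectricBass", SoundVolume -> 1]} *)
```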
I generate music and attach its mp3 file.
Sound[Flatten@
MapIndexed[(images2soundnote[#1, note16*16*(First[#2] - 1)] //.
snjoinrule) &, bars2]]
![enter image description here][16]
**Conclusion**
-----------------------
I tried music generation with a GAN. I am not satisfied with the result; I think the causes are various: poor training data, insufficient training time, etc.
Jaco is gone. I hope neural networks will one day be able to express Jaco's bass.
[1]: https://arxiv.org/abs/1703.10847
[2]: https://en.wikipedia.org/wiki/Jaco_Pastorius
[3]: http://www.bock-for-pastorius.de/midi.htm
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=317901.jpg&userId=1013863
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=567502.jpg&userId=1013863
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=476803.jpg&userId=1013863
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=744004.jpg&userId=1013863
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=586405.jpg&userId=1013863
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=707106.jpg&userId=1013863
[10]: https://arxiv.org/abs/1701.07875
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=435507.jpg&userId=1013863
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=170508.jpg&userId=1013863
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=324809.jpg&userId=1013863
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=965210.jpg&userId=1013863
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=706311.jpg&userId=1013863
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=177112.jpg&userId=1013863

Kotaro Okazaki, 2018-09-02T02:30:04Z

Scatter plotting satellites?
http://community.wolfram.com/groups/-/m/t/1454309
I'm dealing with a table of GEO satellites; I have generated a table with AZ and EL values relative to my location. Too bad they can't be found in a SatelliteData[] query...
This is a snippet of the data.
dataSatTable = {
{"SAT NAME", "EL", "AZ"},
{"NSS-806", 3.26, 99.47},
{"Galaxy-17-19", 5.69, 258.52},
{"Eutelsat-113", 10.4, 254.51}
}
I need to plot each satellite as a dot with a text tag on a scatter plot; this will form an arc. I then need to add another table of data containing obstructions to the plot.
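In case it helps clarify what I'm after, something like this rough sketch (untested) is the kind of output I mean, using Callout for the text tags:

```mathematica
(* sketch: azimuth on x, elevation on y, satellite names as callouts *)
ListPlot[
  Callout[{#3, #2}, #1] & @@@ Rest[dataSatTable],
  AxesLabel -> {"Azimuth (deg)", "Elevation (deg)"},
  PlotRange -> {{0, 360}, {0, 90}}
]
```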
Any pointers?

Mathison Ott, 2018-09-13T07:18:01Z

Convert Wolfram Dataset to JSON or CSV via API?
http://community.wolfram.com/groups/-/m/t/1455513
I am getting Wolfram CDF format back from this:
beta = APIFunction[{"tablename" -> "String"},ResourceData[ResourceObject[#tablename] ]& ]
co = CloudDeploy[beta, Permissions->"Public"]
Response:
Dataset[{<|"Name" -> "Aachen", "ID" -> "1", "NameType" -> "Valid", "Classification" -> "L5", "Mass" -> Quantity[21, "Grams"], "Fall" -> "Fell", "Year" -> DateObject[{1880}, "Year", "Gregorian", -5.], "Coordinates" -> GeoPosition[{50.775, 6.08333}]|>, <|"Name" -> "Aarhus", "ID" -> "2", "NameType" -> "Valid", "Classification" -> "H6", "Mass" -> Quantity[720, "Grams"], "Fall" -> "Fell", "Year" -> DateObject[{1951}, "Year", "Gregorian", -5.], "Coordinates" -> GeoPosition[{56.18333, 10.23333}]|>, <|"Name" -> "Abee", "ID" -> "6", "NameType" -> "Valid", "Classification" -> "EH4", "Mass" -> Quantity[107000, "Grams"], "Fall" -> "Fell", "Year" -> DateObject[{1952}, "Year", "Gregorian", -5.], "Coordinates" -> GeoPosition[{54.21667, -113.}]|>, <|"Name" -> "Acapulco", "ID" -> "10", "NameType" -> "Valid", "Classification" -> "Acapulcoite", "Mass" -> Quantity[1914, "Grams"], "Fall" -> "Fell", "Year" -> DateObject[{1976}, "Year", "Gregorian", -5.], "Coordinates" -> GeoPosition[{16.88333, -99.9}]|>, <|"Name" -> "Achiras", "ID" -> "370", "NameType" -> "Valid", "Classification" -> "L6", "Mass" -> Quantity[780, "Grams"], "Fall" -> "Fell", "Year" -> DateObject[{1902}, "Year", "Gregorian", -5.], "Coordinates" -> GeoPosition[{-33.16667, -64.95}]|> }]
I need this in JSON format. I tried to convert it using URLExecute, but it didn't work.
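One thing I was considering (not sure it's the right approach) is converting the Dataset with Normal and setting the result format in the third argument of APIFunction, though values like Quantity and DateObject may still need extra conversion:

```mathematica
(* sketch: ask APIFunction to return JSON directly *)
beta = APIFunction[{"tablename" -> "String"},
  Normal @ ResourceData[ResourceObject[#tablename]] &, "JSON"];
co = CloudDeploy[beta, Permissions -> "Public"]
```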
Does anyone know a Pythonic or Wolfram way to convert this into JSON or CSV?

Sag Mk, 2018-09-13T17:28:29Z

Workarounds for network timeouts when trying to use Interpreter["Person"]
http://community.wolfram.com/groups/-/m/t/1419777
I frequently get network timeout problems when using Interpreter in ways that require connectivity to the Wolfram server. My network connection in general is quite fast, so I don't think that's the issue. Here's an example.
We have a list of presidents using their common names.
presidents= {"George Washington", "John Adams", "Thomas Jefferson", "James \
Madison", "James Monroe", "John Quincy Adams", "Andrew Jackson", \
"Martin Van Buren", "William Henry Harrison", "John Tyler", "James K. \
Polk", "Zachary Taylor", "Millard Fillmore", "Franklin Pierce", \
"James Buchanan", "Abraham Lincoln", "Andrew Johnson", "Ulysses S. \
Grant", "Rutherford B. Hayes", "James A. Garfield", "Chester A. \
Arthur", "Grover Cleveland", "Benjamin Harrison", "Grover Cleveland \
(2nd term)", "William McKinley", "Theodore Roosevelt", "William \
Howard Taft", "Woodrow Wilson", "Warren G. Harding", "Calvin \
Coolidge", "Herbert Hoover", "Franklin D. Roosevelt", "Harry S. \
Truman", "Dwight D. Eisenhower", "John F. Kennedy", "Lyndon B. \
Johnson", "Richard Nixon", "Gerald Ford", "Jimmy Carter", "Ronald \
Reagan", "George H. W. Bush", "Bill Clinton", "George W. Bush", \
"Barack Obama", "Donald Trump"};
I now want to represent them as entities so that users can get further information on them. So, here's the plan: I want to make one call to Interpreter rather than mapping Interpreter over the list of names.
presidentEntities = Interpreter["Person", True &, Missing[], AmbiguityFunction -> First][presidents]
When I do this, I frequently get a network timeout error. It's Sunday afternoon here in the US, so I wouldn't think this was peak load time. Moreover, I've gotten the error -- and similar errors for other Interpreter calls -- on many other occasions. And I don't think 45 names should really tax the Wolfram server too hard.
So, are there any user workarounds for this? (I've tried the ugly method of breaking up the list into pieces and then reassembling, but even that sometimes fails). Am I doing something wrong? Is there a way of making some Interpreter code local?
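The crude workaround I've sketched so far is a retry wrapper along these lines (`retryInterpret` is my own hypothetical helper, and I'm not convinced it's the right approach):

```mathematica
(* retry an Interpreter call a few times with increasing back-off *)
SetAttributes[retryInterpret, HoldFirst];
retryInterpret[expr_, maxTries_Integer : 3] := Module[{res, try = 0},
  While[try++ < maxTries,
    res = expr;
    (* Interpreter reports problems as Failure objects *)
    If[FreeQ[res, _Failure], Return[res]];
    Pause[2^try]
  ];
  res
]
```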
Is there some way of determining that the Wolfram Server is having a bad day or hour or suffering a particularly heavy load?
More generally, is there something that can be done about Wolfram|Alpha throughput? The Wolfram Language (as opposed to Mathematica) depends on access to vast amounts of external data. But if I can't count on reliable service, it discourages use of programs and constructs that depend on that data and the Entity construct.

Seth Chandler, 2018-08-22T22:35:58Z

[Event] Shanghai User Meetup Review
http://community.wolfram.com/groups/-/m/t/1450141
*All notebooks used in the presentation can be downloaded at the end of the post.*
----------
The idea of the post is to encourage our lovely users to share their experience about Wolfram products in local meetup groups, building up friendship and partnership among our community.
On Saturday 9/8/2018, WRI developer Mr. Shenghui Yang hosted a 12-person private Mathematica user panel to discuss the latest R&D achievements of Wolfram Language V11.1, 11.2 and 11.3, including
- Updates and Improvements for Geo system and Entity
- Neural Network in V11.3
- Wolfram Cloud user interface and deployment
- Several appealing examples of Mathematica's dynamic features in K-12 teaching projects
![lecturing][1]
![beginning][2]
## Geo system ##
To make Wolfram Language features more accessible and relatable to our domestic users, Shenghui mixed elements of his real life into the Wolfram Language material. The whole presentation became daily-life storytelling built on the Wolfram Language knowledge base.
The W|A command-line interface briefly described the weather conditions on the day of the event
![weather][3]
GeomagneticModelData and GeogravityModelData demonstrated important geophysical properties of Shanghai at the moment of the presentation ;-) No need to worry about any anomalies
![geodata][4]
GeoPosition with a customized GeoMarker visualized the location of the event along the riverbank of the Yangtze
![marker][5]
GeoDistance, GeoPath and several powerful projection options showed our users how Wolfram headquarters relates to the meeting place geographically. One of ~530 projection types is used in the example.
In:= GeoProjectionData["LambertAzimuthal"]
Out= {LambertAzimuthal,{Centering->{0,0},GridOrigin->{0,0},ReferenceModel->1}}
In:= GeoProjectionData[]//Short
Out= {Airy,Aitoff,Albers,AmericanPolyconic,ApianI,<<525>>,WinkelTripel}
![path][6]
GeoArea + GeoPosition: the places the host visits most frequently in Shanghai, once marked, form a large triangle. Combining EntityValue and related functions makes it easy to extract the ratio of the triangle's area to that of Shanghai
![area][7]
GeoPath and TravelDirections also accurately reported how long it takes to route between and visit all three marked places
![travel][8]
Finally, Shenghui mentioned that the event was hosted in a nice tea house once owned by YueSheng Du, the Shanghai-born mob king and "Godfather of the Far East" of the Chiang Kai-shek era. Related background information can be retrieved both with built-in Entity functions and through external services such as Bing Search V5
![history][9]
![bing][10]
## Discussion on K-12 Math Topics ##
This section was aimed specifically at users in the K-12 education industry, and at parents whose kids are in this age range and who are looking for new ways for their kids to understand the school materials.
Shenghui and several local users reached out to domestic teachers in public and private schools, ranging from elite to mid-level.
Real test problems were collected for the demo. A brief moment was left for the audience to think about each challenging problem before seeing the notebook with the solution. The solutions use Mathematica's strong built-in visualization, dynamic and CloudDeploy features. One of the most stressful and painful problems in current domestic K-12 math education is that students need to take math-olympiad-level exams for middle and high school. Most of the kids have no choice but to recite hard-coded tricks to solve tricky problems in a short time. The lack of understanding and intuitive explanation makes the process even more challenging. The host brought new vision to these problems via graphical presentation.
Here is an example of the non-stop trains problem with a graphical explanation (a 10th-grade math problem). The question asks students to compute the distance between each crossing point. The demo is designed to help students understand the physical process and solve the problem by hand in the exam, rather than just handing them a Mathematica solution.
![question][11]
![solution][12]
## Neural Network and AI ##
The presentation is based on an updated version of [Taliesin's][13] [notebook][14] and demo session on [YouTube][15] (some NN layers' names were updated in V11.3, e.g. DotPlusLayer -> LinearLayer). The examples are fully tested in the attached notebook for V11.3. Though the topic is quite involved for first-time users, the audience was eager to learn the Wolfram Language. Shenghui and his college roommate, a [Tencent AI Lab][16] senior researcher and also a veteran Mathematica user, have collaboratively initiated a bi-weekly online discussion for domestic Mathematica users. The one-hour AI-topic paper-reading session aims to familiarize users with the basic NN layers in the Wolfram Language and with the different networks available in the [Wolfram Neural Network Repository][17].
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1.jpg&userId=23928
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2.jpg&userId=23928
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3.png&userId=23928
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4.png&userId=23928
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5.png&userId=23928
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6.png&userId=23928
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7.png&userId=23928
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=8.png&userId=23928
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9.png&userId=23928
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10.png&userId=23928
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=11.png&userId=23928
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=12.png&userId=23928
[13]: https://twitter.com/taliesinb
[14]: https://wolfr.am/gLSyxCEE
[15]: https://www.youtube.com/watch?v=FnpqI4REiak
[16]: https://ai.tencent.com/ailab/index.html
[17]: https://resources.wolframcloud.com/NeuralNetRepository/

Shenghui Yang, 2018-09-11T14:30:05Z

ImageAugmentationLayer on image and target mask
http://community.wolfram.com/groups/-/m/t/1445573
Hi, I'd like to use ImageAugmentationLayer in my binary image segmentation neural network. However, it seems I can't get the ImageAugmentationLayer to apply exactly the same transform to my input image as to my target mask. Is there a hidden way to do this that's not mentioned in the docs? It seems like every invocation of the layer uses a new random crop, but I need the _exact same_ random crop on pairs of images.
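For context, the workaround I'm currently considering is to do the cropping myself outside the net, picking one random offset and applying it to both images, roughly like this (untested sketch; `cropPair` is my own helper name):

```mathematica
(* crop image and mask with the same random w*h window *)
cropPair[img_Image, mask_Image, {w_, h_}] := Module[{dims, x, y},
  dims = ImageDimensions[img];
  x = RandomInteger[{0, dims[[1]] - w}];
  y = RandomInteger[{0, dims[[2]] - h}];
  ImageTrim[#, {{x, y}, {x + w, y + h}}] & /@ {img, mask}
]
```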
Cheers!

Carl Lange, 2018-09-09T12:40:13Z