Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing discussions in tag Wolfram Science, sorted by active

External APIs and data wrangling
http://community.wolfram.com/groups/-/m/t/1404817
Hi, I am considering using the Wolfram Cloud to host a project that makes API calls to a third party (I have never done production deployments in the Wolfram Cloud before). The problem is that the API calls (in the default form provided by the third party) return metadata, and even the key-value pairs in the response need to be cleaned up; the only thing relevant to me is the last value, which in the example below would be the number 1.9082*^7. Is there any way to make the call so that just that value is extracted (or at least so that the list is returned in a usable key-value Dataset format)? If not, what would be the most efficient way to clean up the output and simply assign that value to a variable? The code will be making many simultaneous API calls, and I'd really prefer to avoid performance issues and not waste computing power wrangling data. Thanks!
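For context, if the rule structure shown below is held in a variable, say `resp` (my name, an assumption), one minimal sketch for pulling out just the final value is:

```mathematica
(* resp holds the response as Wolfram Language rules; shortened here.
   Dates are assumed to arrive as strings. *)
resp = {meta -> {}, value -> {{date -> "2018-01-30", value -> 1.91857*^7},
    {date -> "2018-01-31", value -> 1.9082*^7}}};

(* take the "value" field, then the last entry, then the number in its
   second rule *)
lastValue = Last[value /. resp][[2, 2]]  (* -> 1.9082*^7 *)
```

If the keys come back as strings rather than symbols, the same idea works with `Lookup` on an `Association` built from the rules.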
{meta->{request->{granularity->Daily,start_date->2018-01-27,end_date->2018-01-31,limit->Null },status->Success,last_updated->2018-07-31},value->{{date->2018-01-27,value->1.48229*^7},{date->2018-01-28,value->1.42697*^7},{date->2018-01-29,value->1.67565*^7},{date->2018-01-30,value->1.91857*^7},{**date->2018-01-31,value->1.9082*^7**}}}
George W, 2018-08-14T05:33:17Z

Solve equation of motion with Dirac-fermions?
http://community.wolfram.com/groups/-/m/t/1400212
Dear Wolfram team:
I am a beginner with Mathematica.
My problem is that I want to solve a system of n first-order equations of motion. These equations of motion contain creation and annihilation operators of Dirac fermions. **I don't know, and cannot find, how to describe the creation and annihilation operators of Dirac fermions in Mathematica.** The equations of motion have the form:
$$\dot{c}_i^\dagger [t]=f*c_i^\dagger[t]+g[t]*c_{i+1}[t]-g[t]*c_{i+1}^\dagger[t]+h[t]*c_{i-1}[t]-h[t]*c_{i+1}^\dagger[t]\\
\dot{c}_i [t]=f*c_i[t]+g[t]*c_{i+1}^\dagger[t]-g[t]*c_{i+1}[t]-h[t]*c_{i-1}^\dagger[t]+h[t]*c_{i+1}[t],$$
where $c_i^\dagger,c_i $ are creation and annihilation operators and f,g,h are functions.
Then I want to use DSolve or NDSolve to solve the equations of motion.
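One possible workaround (my suggestion, not from the post): since the equations above are linear, the Heisenberg operator equations have the same form as equations for ordinary c-number functions, so a small instance can be fed straight to NDSolve by treating $c_i$ and $c_i^\dagger$ as plain functions. This loses the operator algebra, but the coefficient dynamics are identical for linear systems. Everything below (`n`, the sample `f`, `g`, `h`, the periodic index `p`, and the initial conditions) is an assumption for illustration:

```mathematica
n = 4; f = 1; g[t_] := Sin[t]; h[t_] := Cos[t]; (* sample functions, assumptions *)
p[i_] := Mod[i - 1, n] + 1;                     (* periodic site index, an assumption *)

(* transcribe the two equations of motion for each site i *)
eqs = Flatten@Table[{
    cd[i]'[t] == f cd[i][t] + g[t] c[p[i + 1]][t] - g[t] cd[p[i + 1]][t] +
      h[t] c[p[i - 1]][t] - h[t] cd[p[i + 1]][t],
    c[i]'[t] == f c[i][t] + g[t] cd[p[i + 1]][t] - g[t] c[p[i + 1]][t] -
      h[t] cd[p[i - 1]][t] + h[t] c[p[i + 1]][t]}, {i, n}];
ics = Flatten@Table[{c[i][0] == 1/i, cd[i][0] == 0}, {i, n}]; (* arbitrary *)

sol = NDSolve[Join[eqs, ics], Flatten@Table[{c[i], cd[i]}, {i, n}], {t, 0, 10}];
```

For a genuinely operator-valued treatment (anticommutation relations and all), a dedicated quantum algebra package would be needed; DSolve/NDSolve only handle the c-number reduction sketched here.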
Thanks for your help.
Constantin Harder, 2018-08-09T10:05:59Z

Dynamically create a list of anonymous functions?
http://community.wolfram.com/groups/-/m/t/1403526
As the title indicates, if you know how to create a list of anonymous functions in your program, please tell me.
satoshi nakagawa, 2018-08-13T21:43:23Z

Get rid of a controller/variable in Animate?
http://community.wolfram.com/groups/-/m/t/1402972
Hello!
I'm trying to get rid of controllers and the controller variable using `Animate`. I've gotten rid of the controllers, but the variable ϴ still shows up.
animate[obj_] := Animate[
With[
{v = RotationTransform[\[Theta], {0, 0, 1}][{3, 0, 3}]},
Show[obj, ViewPoint -> v]], {\[Theta], 0, 2 Pi},
Alignment -> Center,
Paneled -> False,
SaveDefinitions -> True,
AnimationRate -> .01,
AppearanceElements -> None,
AnimationRunning -> True] /.
(AppearanceElements -> _) ->
(AppearanceElements -> {})
Giving the following animation:
![PRNP][1]
If I use `ControlType->None`, ϴ does not show up...
animate2[obj_] := Animate[
With[
{v = RotationTransform[\[Theta], {0, 0, 1}][{3, 0, 3}]},
Show[obj, ViewPoint -> v]], {\[Theta], 0, 2 Pi},
Alignment -> Center,
Paneled -> False,
SaveDefinitions -> True,
AnimationRate -> .01,
AppearanceElements -> None,
AnimationRunning -> True,
ControlType -> None] /.
(AppearanceElements -> _) ->
(AppearanceElements -> {})
![enter image description here][2]
...but then it does not rotate. I realize it needs some sort of control object (or I'm guessing it does), but is there a way to hide ϴ so all I get is a rotating object that starts automatically upon code evaluation?
Any help is appreciated!
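One possible approach (my suggestion, not from the post): drop `Animate` entirely and drive the rotation with `Clock` inside `Dynamic`, which displays nothing but the graphic itself. A sketch, assuming `obj` is a `Graphics3D` object:

```mathematica
(* Clock[{0, 2 Pi}, 20] sweeps the angle over a 20-second period;
   Dynamic re-renders the scene as the clock ticks. *)
animate3[obj_] := Dynamic[
  With[{v = RotationTransform[Clock[{0, 2 Pi}, 20], {0, 0, 1}][{3, 0, 3}]},
   Show[obj, ViewPoint -> v]]]
```

Because there is no `Animate` wrapper, no controls or variables appear, and the rotation starts as soon as the output is on screen.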
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=prnp.gif&userId=1036924
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-08-13at1.57.30PM.png&userId=1036924
Swede White, 2018-08-13T18:58:23Z

[WSS18] Introducing Hadamard Binary Neural Networks
http://community.wolfram.com/groups/-/m/t/1374288
##Introducing Hadamard Binary Neural Networks
Deep neural networks are an important tool in modern applications, and accelerating their training has become a major challenge. As the complexity of our training tasks increases, so does the computation. For sustainable machine learning at scale, we need distributed systems that can leverage the available hardware effectively. This research aims to exceed the current state-of-the-art performance of neural networks by introducing a new architecture optimized for distributability. The scope of this work is not limited to optimizing neural network training for large servers; it also aims to bring training to heterogeneous environments, paving the way for a distributed peer-to-peer mesh computing platform that can harness the wasted resources of idle computers in a workplace for AI.
#### Network Architecture and Layer Evaluator
Here I will describe the network and the Layer Evaluator, to give an in-depth understanding of the network architecture.
Note:
- **hbActForward** : Forward binarization of Activations.
- **hbWForward** : Forward binarization of Weights.
- **binAggression** : Aggressiveness of binarization (Vector length to binarize)
Set up the Layer Evaluator.
layerEval[x_, layer_Association] := layerEval[x, Lookup[layer, "LayerType"], Lookup[layer, "Parameters"]];
layerEval[x_, "Sigmoid", param_] := 1/(1 + Exp[-x]);
layerEval[x_, "Ramp", param_] := Abs[x]*UnitStep[x];
layerEval[ x_, "LinearLayer", param_] := Dot[x, param["Weights"]];
layerEval[ x_, "BinLayer", param_] := Dot[hbActForward[x, binAggression], hbWForward[param["Weights"], binAggression]];
layerEval[x_, "BinarizeLayer", param_] := hbActForward[x, binAggression];
netEvaluate[net_, x_, "Training"] := FoldList[layerEval, x, net];
netEvaluate[net_, x_, "Test"] := Fold[layerEval, x, net];
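The helpers `hbActForward` and `hbWForward` are defined elsewhere in the project. As a placeholder so the evaluator above can be run standalone, one could use plain deterministic sign binarization (my assumption; this is not the actual Hadamard scheme the post describes):

```mathematica
(* Placeholder binarizers: elementwise sign binarization, ignoring the
   aggression parameter. The real HBNN versions are Hadamard-based. *)
hbActForward[x_, aggression_] := Sign[x] /. (0 -> 1);
hbWForward[w_, aggression_] := Sign[w] /. (0 -> 1);
binAggression = 16;  (* arbitrary placeholder value *)
```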
Define the network
net = {<|"LayerType" -> "LinearLayer", "Parameters" -> <|"Weights" -> w0|>|>,
<|"LayerType" -> "Ramp"|>,
<|"LayerType" -> "BinarizeLayer"|>,
<|"LayerType" -> "BinLayer", "Parameters" -> <|"Weights" -> w1|>|>,
<|"LayerType" -> "Ramp"|>,
<|"LayerType" -> "BinLayer", "Parameters" -> <|"Weights" -> w2|>|>,
<|"LayerType" -> "Sigmoid"|> };
MatrixForm@netEvaluate[net, input[[1 ;; 3]], "Test" ] (* Giving network inputs *)
![enter image description here][1]
#### Advantages of Hadamard Binarization
- Faster convergence with respect to vanilla binarization techniques.
- Consistently about 10 times faster than the CMMA algorithm.
- The angle of randomly initialized vectors is preserved in high-dimensional spaces (approximately 37 degrees as the vector length approaches infinity).
- Reduced communication times for distributed deep learning.
- Optimization of im2col algorithm for faster inference.
- Reduction of model sizes.
### Accuracy analysis
![enter image description here][2]
As seen above, the HBNN model gives 87% accuracy, whereas the BNN (Binary Neural Network) model gives only 82%. These networks have only been trained for 5 epochs.
### Performance Analysis
X axis: matrix size | Y axis: time (seconds)
**CMMA vs xHBNN**
![enter image description here][3]
**MKL vs xHBNN**
![enter image description here][4]
### Visualize weight histograms
![enter image description here][5]
It is evident that the Hadamard BNN preserves the distribution of the weights much better. Note that the BNN graph has a logarithmic vertical axis, for representation purposes.
### Demonstration of the angle preservation ability of the HBNN architecture
![enter image description here][6]
Binarization approximately preserves the direction of high dimensional vectors. The figure above demonstrates that the angle between a random vector (from a standard normal distribution) and its binarized version converges to ~ 37 degrees as the dimension of the vector goes to infinity. This angle is exceedingly small in high dimensions.
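As a quick numerical sanity check (my own sketch, not part of the original post): for a standard normal vector, the limiting cosine is $E|x|/\sqrt{E[x^2]} = \sqrt{2/\pi}$, so the limiting angle is $\arccos\sqrt{2/\pi} \approx 37.0^\circ$, which Monte Carlo confirms:

```mathematica
(* Compare the empirical angle for one large random vector with the limit *)
With[{v = RandomVariate[NormalDistribution[], 10^6]},
 {VectorAngle[v, Sign[v]]/Degree, ArcCos[Sqrt[2/Pi]]/Degree}]
(* both values come out close to 37.07 *)
```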
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=tempz.png&userId=1302993
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=accuracy.png&userId=1302993
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6613xCma.png&userId=1302993
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=xMKL.png&userId=1302993
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=histogram.png&userId=1302993
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=anglepreserve.png&userId=1302993
Yash Akhauri, 2018-07-10T22:12:26Z

Custom Neural Network Architectures for Social Science Data
http://community.wolfram.com/groups/-/m/t/1402774
The code in the notebook attached to this post sets forth my efforts to develop custom neural network architectures to work on datasets found in social sciences (or other fields). It is the result of a lot of trial and even more error. It shows how to do the following things. Some of this is covered in the Wolfram Language documentation, but not as an extensive worked example.
1. Create numerical vectors out of nominal data
2. Develop a loss function when the target consists of nominal data
3. Use ClassifierMeasurements when the classifier is the output of a trained neural network
4. Specify what form the neural network must be in for ClassifierMeasurements to work and how to modify a non-conforming trained network to be in the appropriate form.
5. Show equivalent NetChains and NetGraphs
6. Show how the neural network can itself encode nominal data contained as values in Associations, catenate that data, and then pipe it through the rest of a NetGraph.
7. Show how to hook up a loss function to the output of a neural network
8. How to see the innards of a neural network more clearly, as well as the plan used to convert it to something usable by MXNet.
9. How to work with Datasets and Query.
I strongly suspect that this is not the most efficient way to create a neural network to analyze data contained in a Dataset with named columns and lots of nominal variables. However, it's the best I can do for now. I hope it is instructive to others. More importantly perhaps, I hope that these efforts will inspire others more knowledgeable in the field to show (1) how this can all be done in a more efficient manner and (2) how other bells and whistles can be added, such as a custom loss function, weighted inputs, a desired distribution of predictions, etc. While the Wolfram documentation on neural networks is extensive, as of version 11.3, in which the functionality is still deemed "Experimental," it lacks, in my view, the conceptual perspective and range of worked examples from diverse fields that would lower the barriers to entry for non-expert users of machine learning.
Note:
I did receive some excellent assistance in this effort from Wolfram Technical Support, but there comes a point when you kind of want to do it on your own. My efforts in asking the community.wolfram.com website for assistance didn't receive any immediate response and so, being the persistent sort, I decided just to try and do it on my own.
##Do the Encoding Before We Get to NetTrain##
Download the Titanic dataset and convert it from a Dataset to a list of associations.
Short[titanic = Normal@ExampleData[{"Dataset", "Titanic"}]]
Scramble the data, delete the missing values to keep things simple, and encode survival in a way I prefer.
titanic2 =
Query[RandomSample /* (DeleteMissing[#, 1,
2] &), {"survived" -> (If[#, "survived", "died"] &)}][titanic];
Encode the nominal data as unit vectors.
titanic3 =
Query[All,
List["class" -> NetEncoder[{"Class", {"1st", "2nd", "3rd"}, "UnitVector"}],
"sex" -> NetEncoder[{"Class", {"male", "female"}, "UnitVector"}]]][
titanic2]
![enter image description here][1]
Get the data as a list of six values ruled onto a single value.
Short[titanic4 = Query[All, Values /* (Flatten[Most[#]] -> Last[#] &)][titanic3]]
![enter image description here][2]
Form training and testing data sets.
Short[{trainingData, testData} = TakeDrop[titanic4, Round[0.7*Length[titanic4]]]]
![enter image description here][3]
Create a pretty basic net chain ending with a SoftmaxLayer[] that turns the output into probabilities.
chainlinks = {LinearLayer[12], ElementwiseLayer[LogisticSigmoid], LinearLayer[4], LinearLayer[2], SoftmaxLayer[]};
nc = NetChain[chainlinks, "Input" -> 6, "Output" -> NetDecoder[{"Class", {"died", "survived"}}]]
![enter image description here][4]
Just test the NetChain to see if it works.
NetInitialize[nc][{0, 0, 1, 18, 1, 0}]
> "died"
Train the neural net. Use the CrossEntropy loss as the function to minimize. Remember that the target data needs to be encoded from died and survived to 1 and 2. Otherwise the CrossEntropyLossLayer gets unhappy. After 2000 rounds I find it's all overfitting anyway. So I limit the training rounds.
chainTrained = NetTrain[nc, trainingData, All, ValidationSet -> Scaled[0.2], LossFunction -> CrossEntropyLossLayer["Index",
"Target" -> NetEncoder[{"Class", {"died", "survived"}}]], MaxTrainingRounds -> 2000]
![enter image description here][5]
Get the TrainedNet out of the NetTrainResultsObject and see how our classifier performed.
cmo = ClassifierMeasurements[chainTrained["TrainedNet"], testData]
![enter image description here][6]
cmo["ConfusionMatrixPlot"]
![enter image description here][7]
Not bad. (But not great. The question is whether that's the fault of the classifier or just irreducible noise in the data.)
##Now do it with NetGraph##
Same data, but do it with a NetGraph.
ngt = NetGraph[chainlinks, {1 -> 2 -> 3 -> 4 -> 5}, "Input" -> 6,
"Output" -> NetDecoder[{"Class", {"died", "survived"}}]]
![enter image description here][8]
From here on, it's all exactly the same.
graphTrained =
NetTrain[ngt, trainingData, All, ValidationSet -> Scaled[0.2],
LossFunction ->
CrossEntropyLossLayer["Index",
"Target" -> NetEncoder[{"Class", {"died", "survived"}}]],
MaxTrainingRounds -> 2000]
![enter image description here][9]
graphCmo = ClassifierMeasurements[graphTrained["TrainedNet"], testData]
![enter image description here][10]
graphCmo["ConfusionMatrixPlot"]
![enter image description here][11]
Not surprisingly, the results are very similar.
##Now Do the Encoding Within NetTrain##
Now, I want to do it with the data in a different form. I want the neural network to do the encoding. And I want to at least think about having a custom loss function. Convert the form of the data so that it is "column oriented." Basically we are going to use the third variant in the function specification set forth below.
![enter image description here][12]
{trainingData2, testData2} = Map[Normal[Transpose[Dataset[#]]] &, TakeDrop[titanic2, Round[0.7*Length[titanic2]]]];
Here's what the training data looks like.
Keys[trainingData2]
> {"class", "age", "sex", "survived"}
Map[Short, Values[trainingData2]]
![enter image description here][13]
Now form a NetGraph that Catenates some of the values from the data together and then goes through the same process as our NetChain (and NetGraph) above. Add a loss function at the end. Note that the data coming in from the Target port into the "myloss" layer is encoded from nominal values died and survived into integers 1 and 2.
nodes = Association["catenate" -> CatenateLayer[], "l15" -> LinearLayer[15],
"ls1" -> ElementwiseLayer[LogisticSigmoid], "l5" -> LinearLayer[5],
"l2" -> LinearLayer[2], "sm" -> SoftmaxLayer[],
"myloss" ->
CrossEntropyLossLayer["Index",
"Target" -> NetEncoder[{"Class", {"died", "survived"}}]]];
Create the connectivity structure between the nodes. Note that I am careful to specify which connectors of various NetPorts connect with other NetPort connectors. Certain layers, like CrossEntropyLossLayer, have connector names that the user can't alter, so far as I can figure out. The connector name "Target", for example, needs to stay "Target". Also notice that I believe I have to generate a NetPort["Loss"] for the network to be trained.
connectivity = {{NetPort["class"], NetPort["age"], NetPort["sex"]} ->
"catenate", "catenate" -> "l15" -> "ls1" -> "l5" -> "l2" -> "sm",
"sm" -> NetPort["myloss", "Input"],
NetPort["survived"] -> NetPort["myloss", "Target"],
"myloss" -> NetPort["Loss"], "sm" -> NetPort["Output"]}
> {{NetPort["class"], NetPort["age"], NetPort["sex"]} -> "catenate",
> "catenate" -> "l15" -> "ls1" -> "l5" -> "l2" -> "sm", "sm" ->
> NetPort["myloss", "Input"], NetPort["survived"] -> NetPort["myloss",
> "Target"], "myloss" -> NetPort["Loss"], "sm" -> NetPort["Output"]}
Now let's put our NetGraph together. Here I have to tell it how various inputs and outputs will be encoded and decoded. You will notice I do NOT tell it how to encode the "survived" values, because our CrossEntropyLossLayer handles that part of the work.
ngt2 = NetGraph[nodes, connectivity,
"class" -> NetEncoder[{"Class", {"1st", "2nd", "3rd"}, "UnitVector"}],
"age" -> "Scalar",
"sex" -> NetEncoder[{"Class", {"male", "female"}, "UnitVector"}],
"Output" -> NetDecoder[{"Class", {"died", "survived"}}]]
![enter image description here][14]
Here's a picture of our net.
NetInformation[ngt2, "FullSummaryGraphic"]
![enter image description here][15]
We can get the structure information back out of the NetGraph using some "secret" functions. I found these useful when working on this project to help me understand what was going on.
NeuralNetworks`GetNodes[ngt2]
![enter image description here][16]
NeuralNetworks`NetGraphEdges[ngt2] (* shouldn't this be called GetEdges for consistency? Or maybe GetNodes should be NetGraphNodes? *)
> {NetPort["class"] -> NetPort[{"catenate", 1}],
> NetPort["age"] -> NetPort[{"catenate", 2}],
> NetPort["sex"] -> NetPort[{"catenate", 3}],
> NetPort["survived"] -> NetPort[{"myloss", "Target"}],
> NetPort[{"catenate", "Output"}] -> "l15", "l15" -> "ls1", "ls1" -> "l5",
> "l5" -> "l2", "l2" -> "sm", "sm" -> NetPort[{"myloss", "Input"}],
> "sm" -> NetPort["Output"], NetPort[{"myloss", "Loss"}] -> NetPort["Loss"]}
We can also get a closer look at what the neural net is going to do, although, frankly, I don't understand the diagram fully. It does look cool, though. (I believe the diagram essentially shows how the Wolfram Language framework will be translated to MXNet.)
NeuralNetworks`NetPlanPlot[NeuralNetworks`ToNetPlan[ngt2]]
![enter image description here][17]
Anyway, let's train the network. Notice how I designate the loss function with a string that refers to a node (NetPort) in the network. I'm not quite sure why, but you can't designate the loss function as "myloss"; again, I wish the documentation were clearer on this issue. Again, I'll stop after 2000 rounds.
titanicNet2 =
NetTrain[ngt2, trainingData2, All, LossFunction -> "Loss",
ValidationSet -> Scaled[0.2], MaxTrainingRounds -> 2000]
![enter image description here][18]
We can extract the trained network from the NetTrainResultsObject.
titanicNet2["TrainedNet"]
![enter image description here][19]
Let's run it on the test data.
Short[titanicNet2["TrainedNet"][testData2]]
![enter image description here][20]
If I try to use ClassifierMeasurements on it, though, it fails.
ClassifierMeasurements[titanicNet2["TrainedNet"], testData2];
![enter image description here][21]
The error message is unhelpful, and nothing I could find in the documentation spells out the circumstances under which the results of a neural network can be used in ClassifierMeasurements. Maybe, however, it's because our NetGraph is producing two outputs: a Loss value and an Output value. When we make classifiers, to the best of my knowledge, we only get an Output value. Let's trim the network.
titanicNet2Trimmed = NetDelete[titanicNet2["TrainedNet"], "myloss"]
![enter image description here][22]
Now, when we run our trimmed network on the test data (stripped of the survived column), we just get output values as a List and not as part of a multi-key Association.
titanicNet2Trimmed[Query[KeyDrop["survived"]][testData2]]
![enter image description here][23]
And now ClassifierMeasurements works!!
cmo2 = ClassifierMeasurements[titanicNet2Trimmed, testData2 -> "survived"]
![enter image description here][24]
cmo2["ConfusionMatrixPlot"]
![enter image description here][25]
##Conclusion##
I hope the code above helps others in appreciating the incredible neural network functionality built into the Wolfram language and inspires further posts on how it can be used in creative and flexible ways.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=27521.png&userId=20103
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=a1.png&userId=20103
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=a2.png&userId=20103
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=103982.png&userId=20103
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=98653.png&userId=20103
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=32714.png&userId=20103
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=82165.png&userId=20103
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=93816.png&userId=20103
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=x.png&userId=20103
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=15818.png&userId=20103
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=66799.png&userId=20103
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-08-13at11.40.32.png&userId=20103
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=605711.png&userId=20103
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=659712.png&userId=20103
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=941513.png&userId=20103
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=223114.png&userId=20103
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1051116.png&userId=20103
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=146917.png&userId=20103
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=272718.png&userId=20103
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=606219.png&userId=20103
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=xx.png&userId=20103
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=548820.png&userId=20103
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1048021.png&userId=20103
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=761522.png&userId=20103
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=631723.png&userId=20103
Seth Chandler, 2018-08-12T14:18:02Z

What type of data container is used in this example?
http://community.wolfram.com/groups/-/m/t/1403092
I'm sure this is a very novice question. I've been looking at a number of examples of things that can be done. In a few examples, such as https://www.wolfram.com/language/11/enhanced-geo-visualization/measure-the-density-of-trees.html?product=mathematica, there is a concise block of data used; see In[1] at the above link.
What is this?
I would like to understand how to build on it for a home project I'm working on about home values.
Michael Madsen, 2018-08-13T20:45:27Z

Documentation & Functionality enhancements as NeuralNets leave Experimental
http://community.wolfram.com/groups/-/m/t/1402799
I've been doing a lot of work with Machine Learning in the Wolfram Language recently and we have tremendous capability and a clean architecture typical of Wolfram Language products. Right now, all the functions are labeled as "Experimental" and thus are not held to quite as high a standard as features that have graduated from that designation. I believe the transition to a full and permanent part of the language could be helped by addressing two matters: (a) some documentation lacunae and (b) some interoperability challenges. I am attempting to start a conversation on that point by putting forth some suggestions for enhancements to Documentation and Functionality. My focus is not on creating new fancy layers -- like ones that would create a Generative Adversarial Network or other great stuff -- but on making the existing functionality more accessible to those not completely expert in either Neural Nets or the MXNet framework on which it rests.
**Documentation**
A key issue is that we have this wonderful Classify and Predict functionality that can use Neural Networks and that kind of/sort of integrates with the NetTrain, NetGraph stuff, but the integration is not as tight as desirable and the documentation is lacking. Here are some ideas.
1. Classify[data,Method->"NeuralNetwork"] and Predict[data,Method->"NeuralNetwork"] should provide the network used for training, including the loss function. Perhaps there could be an option that had Classify and Predict return a NetTrainResultsObject. Or perhaps ClassifierInformation could extract the network in a form that could be reused within NetTrain or otherwise edited. This way one could take a Network used by Classify or Predict and (a) see more easily what the heck it was doing and (b) think of tweaks that might enhance its performance. Moreover, one could see how Classify created a Net that implemented the optional features such as IndeterminateThreshold and UtilityFunction. It would be a great learning tool.
2. There should be a worked example probably using NetGraph showing at least one way to implement every option to Classify and Predict within the NeuralNetwork paradigm. Thus ClassPriors, FeatureExtractor, FeatureNames, FeatureTypes, IndeterminateThreshold, UtilityFunction should all be shown. ValidationSet would be nice too.
3. The requirements for ClassifierMeasurements to work on the output from a NetTrain operation should be clearly stated.
4. There is a lot of functionality hidden in the NeuralNetworks` context. A lot of it is quite useful. Some of it should be promoted for more general use and documented appropriately.
**Functionality**
OK. This is a hard one -- probably much harder than I appreciate. But perhaps a start could be made.
1. It would be great to be able to just write regular Wolfram Language code and, where possible, have it automatically translated into a NetGraph expression, via a function named NetGraphForm (or NetGraphCompile, or something like that).
Example:
NetGraphForm[(MapThread[#1 - #2 &] /* (Dot[{1, 2, 3}, #] &)), {"x", "y"}] ->
 NetGraph[{ThreadingLayer[#1 - #2 &],
   ConstantArrayLayer["Array" -> {1, 2, 3}], DotLayer[]},
  {{NetPort["x"], NetPort["y"]} -> 1, {1, 2} -> 3}]
So that then one could take the NetGraph (netg) and do the following
netg[Association["x" -> {3, 5, 8}, "y" -> {2, 16, -3}]]
And you'd get 12.
2. Right now the [NeuralNetwork repository][1] is filled with elaborate nets for doing wonderful and fancy things. But perhaps there could be a section of that repository devoted to simpler tasks: asymmetric cross-entropy losses, just to take a particular example.
Probably others will have additional ideas. Or it may be that my ideas are impracticable, a special case of a more general problem, or already in the works. Perhaps some constructive user feedback might help the product evolve even more successfully.
[1]: https://resources.wolframcloud.com/NeuralNetRepository/
Seth Chandler, 2018-08-13T01:39:07Z

The Delian Brick and other 3D self-similar dissections
http://community.wolfram.com/groups/-/m/t/1368091
Divide a cuboid into two cuboids similar to the original shape. The answer involves the cube root of 2, otherwise known as the [Delian constant](http://mathworld.wolfram.com/DelianConstant.html). I've called this object the Delian Brick. It's a 3D 2-reptile. A stack of three bricks can be made using the cube root of 3, and so on.
With[{r = 2^(1/3)},
 Graphics3D[{Opacity[.5],
   Cuboid[{0 r^0, 0 r^1, 0 r^2}, {1 r^0, 1 r^1, 1 r^2}],
   Cuboid[{1 r^0, 0 r^1, 0 r^2}, {2 r^0, 1 r^1, 1 r^2}]},
  SphericalRegion -> True, Boxed -> False]]
![Delian Brick][1]
I discovered the Delian Brick myself, as did at least ten other recreational mathematicians I've exchanged correspondence with. It may have been known to the ancient Greeks. The first publication I've found is by Dale Walton and the game company ThinkFun, who expanded it into a 3D 4-irreptile they called the Fifth Chair puzzle.
With[{r = 2^(1/3)},
 Graphics3D[{Opacity[.5],
   {Red, Cuboid[{0 r^0, 0 r^1, 0 r^2}, {2 r^0, r^1, r^2}],
    Cuboid[{1 r^0, 1 r^1, 0 r^2}, {2 r^0, 2 r^1, 1 r^2}]},
   {Blue, Cuboid[{0 r^0, 1 r^1, 0 r^2}, {1 r^0, 3 r^1, 1 r^2}],
    Cuboid[{1 r^0, 2 r^1, 0 r^2}, {2 r^0, 3 r^1, 1 r^2}]},
   {Green, Cuboid[{0 r^0, 3 r^1, 0 r^2}, {2 r^0, 4 r^1, 2 r^2}],
    Cuboid[{0 r^0, 2 r^1, 1 r^2}, {2 r^0, 3 r^1, 2 r^2}]},
   {Yellow, Cuboid[{2 r^0, 0 r^1, 0 r^2}, {4 r^0, 2 r^1, 2 r^2}],
    Cuboid[{0 r^0, 0 r^1, 1 r^2}, {2 r^0, 2 r^1, 2 r^2}]}},
  SphericalRegion -> True, Boxed -> False]]
![fifth chair][2]
There are also [five space-filling tetrahedra](http://demonstrations.wolfram.com/SpaceFillingTetrahedra/), and at least two of them are 8-reptiles.
Row[{Graphics3D[{Opacity[.5],Polygon/@Union[Sort/@
Flatten[Subsets[#,{3}]&/@(IntegerDigits/@({{020,111,121,022},{022,111,112,222},{022,111,121,222},{022,113,112,222},{022,113,123,024},{022,113,123,222},{111,202,212,113},{111,222,212,113}}+111)-1),1]]}, Boxed-> False, SphericalRegion->True],
Graphics3D[{Opacity[.5],Polygon/@Union[Sort/@
Flatten[Subsets[#,{3}]&/@(IntegerDigits/@({{002,022,111,113},{022,042,131,133},{022,222,111,113},{022,222,111,131},{022,222,113,133},{022,222,131,133},{111,131,220,222},{113,133,222,224}}+111)),1]]}, Boxed-> False, SphericalRegion->True]}]
![tetrahedron reptiles][3]
More of these self-similar 3D dissections are listed at [3D Rep-Tiles and Irreptiles](http://demonstrations.wolfram.com/3DRepTilesAndIrreptiles/). The ones I list here need to be added there. Most of the 3D rep-tiles are based on either a 2D reptile or a polycube. The four items in this discussion fit in neither of those categories. Are there others?
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=DelianBrick.png&userId=21530
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=FifthChair.png&userId=21530
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=tetrahedronreptiles.png&userId=21530
Ed Pegg, 2018-07-03T16:02:03Z

Find the "nth" of a large PrimeNumber?
http://community.wolfram.com/groups/-/m/t/1395404
Hi guys! I hope all of you are fine. :) Maybe someone can tell me how I can find, with Wolfram|Alpha or Mathematica, the index n of a large prime? I used PrimePi, but PrimePi does not work with large primes (primes like 1921773217311523519374373 do not work; they are too large). Is there a criterion, method, or script with which I can find the index of larger primes?
I have also used the "nthprime" function, but I think this is not what I need. If there is a method using the nth-prime function to find the index of larger primes, can someone here show me how it works? To better understand what I mean, here is an example:
- 2 is the 1st prime (<- I need this number)
- 3 is the 2nd prime (<- I need this number)
- 5 is the 3rd prime (<- I need this number)
- 7 is the 4th prime (<- I need this number)
and so on. Another example:
19 is the 8th prime, 23 is the 9th prime, 29 is the 10th prime... Now I need a function that tells me which prime 1921773217311523519374373 is, i.e. its index. I hope anybody here has an idea how I can find this with Wolfram|Alpha or Mathematica.
I hope anyone can help me here. Kind regards and best wishes.
Nural I., 2018-07-31T18:14:09Z

[✓] Use Except in a RegularExpression?
http://community.wolfram.com/groups/-/m/t/1392816
Friends:
Can Except be used in regular expressions? If so, how?
This could be quite useful.
Suppose I want to match the character class [A-Z] but exclude the character class [B-D].
This seems like a job for regular expressions combined with Except... But how do I do it?
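For what it's worth, two hedged sketches of the shape such a pattern could take: the Wolfram string-pattern form with Except, and a plain regular expression using a negative lookahead (both should match A and E-Z while rejecting B-D):

```
(* string-pattern form: any uppercase letter except B, C, D *)
StringCases["ABCDEFG",
 Except["B" | "C" | "D", Alternatives @@ CharacterRange["A", "Z"]]]
(* {"A", "E", "F", "G"} *)

(* regular-expression form: a lookahead rejects B-D before matching [A-Z] *)
StringCases["ABCDEFG", RegularExpression["(?![B-D])[A-Z]"]]
```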
Any help is welcome,
Francisco
Francisco Gutierrez, 2018-07-26T16:32:39Z
Transform a string in UTF-8 format into a string in ANSI format?
http://community.wolfram.com/groups/-/m/t/1402419
I have a string like a = "abcdefg".
It is a UTF-8 encoded string.
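(Aside: in the Wolfram Language, strings are stored abstractly and an encoding only applies when converting to bytes or files. A hedged sketch of how the re-encoding could look, assuming "ANSI" means Windows-1252, which WL calls "WindowsANSI":)

```
a = "abcdefg";
(* byte values of the string under the ANSI (Windows-1252) encoding *)
ToCharacterCode[a, "WindowsANSI"]
(* or write the string out as an ANSI-encoded text file *)
Export["ansi.txt", a, "Text", CharacterEncoding -> "WindowsANSI"]
```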
I want to transform a into a string in ANSI encoding. How can I do that?
gearss zhang, 2018-08-11T00:21:06Z
Display an image with high resolution?
http://community.wolfram.com/groups/-/m/t/1403130
Hello all,
I need to display the output images at least 400 x 400 on screen.
They come out tiny, as shown in the attachment.
Please find the code below (and the attached nb file):
Manipulate[
 imageA = ; (* image omitted in the original post *)
 varBin = Binarize[imageA, binimageA];
 Column[{
   Button["Gradient Filter",
    out = ImageAdjust[GradientFilter[varBin, 0.5]]],
   Row[{imageA, varBin, out}]}],
 {binimageA, 0, 1}]
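For reference, a hedged sketch of two ways to get larger on-screen images, using the post's variable names (ImageSize only changes the displayed size; ImageResize resamples the pixel data):

```
(* display at 400 points without changing the underlying pixels *)
Image[varBin, ImageSize -> 400]

(* or resample the image itself to 400 x 400 pixels *)
ImageResize[varBin, {400, 400}]
```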
Thanks for your usual consideration.
Man Oj, 2018-08-13T12:03:04Z
[WSC18] Chronological Dating of Historical Texts Using RNNs
http://community.wolfram.com/groups/-/m/t/1382707
![Automatic Chronological Dating of Historical Texts Using RNNs][1]
#Abstract#
Chronological dating is essential for various tasks such as summarization and document retrieval. This project proposes a novel method for dating historical texts using a recurrent neural network that works on both the character level and the word level. The results show a significant improvement in dating accuracy compared to a word-level-only RNN. In most cases the error is between one year and a century. Though the model achieves decent performance on texts from the 19th century, accuracy declines significantly for older texts, due to their scarcity and the non-homogeneous distribution of the dataset.
#Data Collection and Pre-processing#
The training data is composed of public domain books collected from Openlibrary, an online project created by Aaron Swartz, Brewster Kahle and others. The Wolfram Language provides a ServiceConnect interface that allows direct interaction with the Openlibrary API.
Data Collection
In[106]:= openlibrary = ServiceConnect["OpenLibrary"]
Out[106]= ServiceObject["OpenLibrary",
"ID" -> "connection-1f55c291dcb5feaa290ece0cd1c97ed2"]
In[107]:= BookDatabase = <||>;
nbCalls = 0;
In[109]:= GetTextRequest[keys_] := {nbCalls++;
Normal@openlibrary["BookText", {"BibKeys" -> {keys}}]}
In[110]:= GetValidTextKey[keys_] :=
SelectFirst[keys,
MatchQ[Pause[.1]; Normal@openlibrary["BookText", {"BibKeys" -> {#}}],
KeyValuePattern[_ -> _String]] &];
GetFirstText[list_] := FirstCase[list, Except["NotAvailable", _String]]
In[112]:= GetTexts [keys_] :=
Quiet[GetFirstText[
Values[Normal@
openlibrary["BookText", {"BibKeys" -> RandomSample[keys, UpTo[50]]}]]]]
In[113]:= AddBook[b_] :=
BookDatabase[b["FirstPublishYear"]] =
If[MatchQ[BookDatabase[b["FirstPublishYear"]], _Missing],
{GetTexts[b["EditionKey"]]},
Append[BookDatabase[b["FirstPublishYear"]], GetTexts[b["EditionKey"]]]
]
In[114]:= AddSubject[subj_String] :=
Module[{searchResults},
(*Database init*)
BookDatabase = <||>;
(*Searching books*)
searchResults =
Select[Normal@
openlibrary["BookSearch", {"Subject" -> subj, MaxItems -> 90}],
#["HasFulltext"] &];
(*Downloading Text*)
GeneralUtilities`MonitoredMap[AddBook, searchResults];
Print[subj <> " DOWNLOADED!"];
(*Exporting*)
Export["C:\\Users\\Tarek\\OneDrive\\Documents\\Portfolio\\opportunities\\\
Wolfram Summer Camp\\Dating Historical Texts\\" <> subj <> ".wxf",
BookDatabase];
Pause[180];
]
(*TESTING*)
In[115]:= AddSubject /@ {"Religion", "Games", "Drama", "Action", "Adventure", "Horror",
"Spirituality", "Poetry", "Fantasy"}
During evaluation of In[115]:= Religion DOWNLOADED!
During evaluation of In[115]:= Games DOWNLOADED!
During evaluation of In[115]:= Drama DOWNLOADED!
During evaluation of In[115]:= Action DOWNLOADED!
During evaluation of In[115]:= Adventure DOWNLOADED!
During evaluation of In[115]:= Horror DOWNLOADED!
During evaluation of In[115]:= Spirituality DOWNLOADED!
During evaluation of In[115]:= Poetry DOWNLOADED!
During evaluation of In[115]:= Fantasy DOWNLOADED!
#Training the Neural Net#
![RNN Architecture][2]
The project uses a hybrid word-level and character-level recurrent neural network. The word-level processing is built on the GloVe model to compute vector representations for words. The limitation of a word-level-only network is that many old books contain words that were not in GloVe's training data. Thus, adding a character-level network chain improves the prediction accuracy, since it helps process previously unseen vocabulary.
Define the net
In[24]:= net = NetGraph[<|
"chars" -> {
UnitVectorLayer[],
LongShortTermMemoryLayer[50],
DropoutLayer[.5]
},
"words" -> {
NetModel[
"GloVe 100-Dimensional Word Vectors Trained on Wikipedia and Gigaword 5 \
Data"],
LongShortTermMemoryLayer[50],
SequenceLastLayer[],
DropoutLayer[.5]
},
"cat" -> CatenateLayer[],
"predict" -> {
LongShortTermMemoryLayer[100],
SequenceLastLayer[],
DropoutLayer[.5],
LinearLayer[1]
}
|>,
{
NetPort["Characters"] -> "chars",
NetPort["Words"] -> "words",
{"chars", "words"} -> "cat",
"cat" -> "predict" -> NetPort["Date"]
},
"Characters" -> NetEncoder[{"Characters", characters}],
"Date" -> NetDecoder["Scalar"]
];
Create training data
In[32]:= sample[text_String, n_: 1024] :=
Module[{len, offset},
len = StringLength@text;
offset = RandomInteger[{1, len - n - 1}];
StringPadRight[
charPreprocess@
StringTake[text, {Max[1, offset], Min[len, offset + n - 1]}], n]
];
In[33]:= getSample[KeyValuePattern[{"FullText" -> text_String,
"FirstPublishYear" -> d_DateObject}]] :=
With[{s = sample[text]},
<|"Characters" -> s, "Words" -> s, "Date" -> dateToNum@d|>
];
$samples = 100000;
In[43]:= import = Flatten[Import /@ FileNames["*.wxf", $dataDir, Infinity]];
withDate = Cases[import, KeyValuePattern["FirstPublishYear" -> _DateObject]];
trainingData =
RandomSample[
Flatten@Table[getSample /@ withDate, Ceiling[$samples/Length[withDate]]],
UpTo[$samples]];
Length@trainingData
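The helpers dateToNum and numToDate used above are not shown in the post; a plausible minimal pair (an assumption for illustration, not the author's actual code) would map a DateObject to a year number and back:

```
(* hypothetical stand-ins for the post's undefined helpers *)
dateToNum[d_DateObject] := N@DateValue[d, "Year"]
numToDate[x_?NumericQ] := DateObject[{Round[x]}]
```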
Training
results = NetTrain[
net,
trainingData,
All,
ValidationSet -> Scaled[.25],
TargetDevice -> "GPU",
MaxTrainingRounds -> Quantity[8, "Hours"],
BatchSize -> 48,
TrainingProgressCheckpointing -> {"Directory", $dataDir <>
"Trained_Networks\\", "Interval" -> Quantity[15, "Minutes"]}
];
In[66]:= trained = results["TrainedNet"];
Save trained net
In[67]:= Export["PredictTextDate.wlnet", trained]
Out[67]= "PredictTextDate.wlnet"
#Testing#
Testing and Results
In[25]:= CalculateAccuracy[title_String] := Module[{text, predDate, actualDate},
text = processForInput[sample[ResourceData[title]]];
actualDate = ResourceObject[title]["SourceMetadata"]["Date"];
predDate = numToDate[net[text]];
{IntegerPart[
Abs[UnitConvert[DateDifference[actualDate, predDate], "Years"]]],
actualDate, DateObject[predDate, "Year"]}
]
In[50]:= titleList = {"Friends, Romans, Countrymen", "On the Origin of Species",
"Agnes Grey", "Alice in Wonderland", "The Pickwick Papers",
"The Wheels of Chance", "Pellucidar",
"The Adventures of Huckleberry Finn", "The Emerald City of Oz",
"The Old Curiosity Shop", "Adam Bede", "A Study in Scarlet",
"Micah Clarke", "Prufrock"};
In[51]:= accuracyList = CalculateAccuracy /@ titleList;
In[52]:= resultsTable =
Dataset@SortBy[
Join[{{"Error", "Actual Date", "Predicted Date"}}, accuracyList], #[[2]] &];
In[53]:= meanAccuracy = N@Mean@accuracyList[[All, 1]]
Out[53]= Quantity[25.8571, "Years"]
#Want to test it out?#
![Dating Historical Texts Microsite][3]
We have launched a microsite that implements the current neural network architecture, allowing you to predict the publication date of an input text.
Link: [Dating Historical Texts Microsite][4]
You can also try out the code yourself using the download below.
Link: [Download Code][5]
#Acknowledgements#
This project could not have been accomplished without the support, encouragement and insight of my mentor: Mr. Richard Hennigan.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=communitypostimage.png&userId=1372178
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=net.png&userId=1372178
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Microsite.PNG&userId=1372178
[4]: https://www.wolframcloud.com/objects/tarek.ab.aloui/WSC2018/DatingHistoricalTexts
[5]: https://drive.google.com/open?id=15pDUwLUm_zxzD-YYmDTIyeQHMyCJJzuN
Tarek Aloui, 2018-07-13T14:55:27Z
Outputs from InverseFourierSequenceTransform
http://community.wolfram.com/groups/-/m/t/1402695
Yes:
In[331]:= InverseFourierSequenceTransform[1, x, n,
FourierParameters -> {a, 1}]
Out[331]= (2 \[Pi])^((1 - a)/2) DiscreteDelta[n]
No:
In[329]:= InverseFourierSequenceTransform[1, x, n,
FourierParameters -> {a, 2 Pi}]
Out[329]= 0
Joe Donaldson, 2018-08-12T19:39:52Z
[WSS18] Generating Music with Expressive Timing and Dynamics
http://community.wolfram.com/groups/-/m/t/1380021
![Cover][1]
## Goal ##
There are many ways to generate music and one of them is algorithmic, where music is generated with the help of a list of handcrafted rules.
The approach in this project is different - I build a neural network that knows nothing about music but learns it from thousands of songs given in MIDI format.
_Apart from just generating a meaningful sequence of notes, I also wanted to add **dynamics** in loudness and humanlike mistakes in timing, with **no restrictions on note durations**._
- **Why dynamics and timing?**
No human can play a musical instrument with precisely the same loudness every time and strictly in time with a metronome (at least I can't). People make mistakes, but in music these mistakes help create what we might call more alive music. Dynamic music with slight time shifts simply sounds more interesting, so even when you write music in a program, you are supposed to add these "mistakes" yourself.
- **Why performances?**
The dataset that I use for the project contains performances by participants of the [Yamaha e-piano competition][2]. This gives us the possibility of learning their dynamics and timing mistakes.
__Here's an [example][3] generated by the model.__
All the code, data and trained models can be found on [GitHub][32].
The examples will be attached to this post as files just in case.
----------
## Inspiration ##
This is not original work; it is mostly an attempt to recreate the work of the [Magenta][4] team from their blog [post][5].
Nevertheless, in this post I will try to add more detail about the **preprocessing** steps and about how you can build a similar neural network model in the Wolfram Language.
## Data ##
I've used a [site][6] that has the Yamaha e-piano performances but also contains a set of classical and jazz compositions.
In the original [work][7] the Magenta team used only the Yamaha dataset, but with heavy augmentation on top of it: time-stretching (making each performance up to 5% faster or slower) and transposition (raising or lowering the pitch of each performance by up to a major third).
You can also create your own list of MIDI files and build a dataset with the help of the code provided below in the post.
Here are some links to free MIDI songs: [The Lakh MIDI Dataset][8] (a very well-prepared dataset for ML projects), [MidiWorld][9] and [FreeMidi][10]
## MIDI ##
MIDI is short for Musical Instrument Digital Interface. It’s a language that allows computers, musical instruments, and other hardware to communicate.
MIDI carries event messages that specify musical notation, pitch, velocity, vibrato, panning, and clock signals (which set tempo).
_For this project we only need the events that indicate where every note starts and ends, and with what pitch and velocity._
## Preprocessing The Data ##
Even though MIDI is already a digital representation of music, we can't just take the raw bytes of a file and feed them to an ML model, as is done with models working on images. First of all, images and music are conceptually different tasks: the former is a single event (data point) per item (an image), the latter is a sequence of events per item (a song). Another reason is that the raw MIDI representation, and even a single MIDI event, contains a lot of information that is irrelevant to our task.
Thus we need a special data representation, a MIDI-like stream of musical events. Specifically, I use the following set of events:
- 88 **note-on** events, one for each of the 88 MIDI pitches of piano range. These events start a new note.
- 88 **note-off** events, one for each of the 88 MIDI pitches of piano range. These events release a note.
- 100 **time-shift** events in increments of 10 ms up to 1 second. These events move forward in time to the next note event.
- 34 **velocity** events, corresponding to MIDI velocities quantized into 34 bins. These events change the velocity applied to subsequent notes.
The neural network operates on a one-hot encoding over these 310 different events. This is the same representation as in the original work, except that the number of note-on/note-off events is smaller: I encode the 88 notes of the piano range instead of the 128 notes of the full MIDI pitch range, to reduce the one-hot encoding vector size and make learning easier.
**For example**, if you want to encode 4 notes from C major, each half a second long and with different velocities, your sequence of events would look something like this (for clarity I use indices instead of the whole one-hot encoding):
`{288, 60, 226, 148, 277, 62, 226, 150, 300, 64, 226, 152, 310, 67, 226, 155}`
![Preprocessing encoding C major example][11]
In this particular example:
- _60, 62, 64, 67_ are **note-on** events (C5, D5, E5, G5). Values range from 1 to 88.
- _148, 150, 152, 155_ are **note-off** events. Values range from 89 to 176.
- _226_ is a half-second **time-shift** event. Values range from 177 (= 10 ms) to 276 (= 1 sec).
- _288, 277, 300, 310_ are **velocity** events. Values range from 277 to 310.
In this way, you can encode music that is expressive in dynamics and timing.
Now, let's take a look at another example, with a chord built from the same notes but with different durations:
`{300, 60, 62, 64, 67, 226, 152, 155, 226, 150, 226, 148}`
![C major chord][12]
As you can see, if you want to play more than one note at once, you just put them in a single bunch of note-on events (60, 62, 64, 67).
Then you add time-shift and note-off events as needed. If you need a duration longer than 1 sec, you can stack together more than one time-shift event ({276, 276} = 2 sec time-shift, since 276 is the 1 sec time-shift index).
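The index ranges above can be captured in a small decoding helper; a hedged sketch (my own illustrative code, not part of the project):

```
(* map an encoded index back to its event type, per the ranges listed above *)
decodeEvent[i_Integer] := Which[
  1 <= i <= 88,    {"NoteOn", i},                 (* piano pitch index *)
  89 <= i <= 176,  {"NoteOff", i - 88},
  177 <= i <= 276, {"TimeShift", (i - 176)/100.}, (* seconds, in 10 ms bins *)
  277 <= i <= 310, {"Velocity", i - 276}          (* quantized velocity bin *)
]

decodeEvent[226]  (* {"TimeShift", 0.5} *)
```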
**WL and MIDI**
The Wolfram Language has **built-in** support for MIDI files, which really simplifies the initial work.
To get data from a MIDI file you need to import it with specific elements:
![WL MIDI Import Elements][13]
In the code below I also extract and calculate needed information related to a tempo of a song.
{raw, header} = Import[path, #]& /@ {"RawData", "Header"};
tempos = Cases[Flatten[raw], HoldPattern["SetTempo" -> tempo_] :> tempo];
microsecondsPerBeat = If[Length@tempos > 0, First[tempos], 500000]; (* If there is no explicit tempo we use default 120 bpm *)
timeDivision = First@Cases[header, HoldPattern["TimeDivision" -> division_] :> division];
(* Convert timeDivision value to base of 2 *)
timeDivisionBits = IntegerDigits[timeDivision, 2];
(* Pad zeros at the beginning if the value takes less then 16 bits *)
timeDivisionBits = If[Length@timeDivisionBits < 16, PadLeft[timeDivisionBits, 16], timeDivisionBits];
(* The top bit responsible for the type of TimeDivision *)
timeDivisionType = timeDivisionBits[[1]];
(* FromDigits turns the bit fields back into integers *)
framesPerSecond = FromDigits[timeDivisionBits[[2 ;; 8]], 2];
ticksPerFrame = FromDigits[timeDivisionBits[[9 ;; 16]], 2];
ticksPerBeat = If[timeDivisionType == 0, timeDivision, 10^6 /(framesPerSecond * ticksPerFrame)];
secondsPerTick = (microsecondsPerBeat / ticksPerBeat) * 10^-6.;
An example of raw data and header info from MIDI file in Wolfram Language:
![Raw MIDI output][14]
**SetTempo** is the number of microseconds per beat (microseconds per quarter note).
**TimeDivision** can be interpreted in two ways: if the top bit is 0, the type is "ticks per beat" (or "pulses per quarter note"); otherwise the type is "frames per second". We need these two values to calculate the time per one **MIDI tick**, which MIDI events use as their time measurement.
One MIDI event in the WL representation looks like this:
`{56, {9, 0}, {46, 83}}`
- 56 is the number of **MIDI ticks**, i.e. the amount of time that must pass since the previous MIDI event.
It maps to our **time-shift** event by simply multiplying this number by **secondsPerTick**.
- 9 is the status byte of the MIDI event (9 and 8 are **note-on** and **note-off**, respectively).
- 0 is the MIDI channel (irrelevant for us).
- 46 is the pitch of the note (related to **note-on**/**note-off** events).
- 83 is the number we encode in a **velocity** event.
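Following this byte layout, a small hedged sketch (illustrative only, not from the project's code) that converts one raw WL MIDI event into a readable form:

```
(* interpret one raw event {ticks, {status, channel}, {pitch, velocity}} *)
interpretEvent[{ticks_, {status_, _}, {pitch_, velocity_}}, secondsPerTick_] :=
 <|"Delay" -> ticks*secondsPerTick,
   "Type" -> Switch[status, 9, "NoteOn", 8, "NoteOff", _, "Other"],
   "Pitch" -> pitch, "Velocity" -> velocity|>

interpretEvent[{56, {9, 0}, {46, 83}}, 0.001]
(* <|"Delay" -> 0.056, "Type" -> "NoteOn", "Pitch" -> 46, "Velocity" -> 83|> *)
```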
If you want to understand how real raw MIDI data is structured, this [blog][15] is particularly useful.
Now we need to parse the sequence of MIDI events and keep only the **note-on** and **note-off** events, plus any event whose number of **MIDI ticks** is greater than 0. Some meta-messages carry irrelevant MIDI ticks, so we need to exclude them from the final sequence: we simply skip events with the value **F** (meta message) in the MIDI status byte.
After filtering the MIDI data, you get a sequence that is ready to be encoded into the final representation and fed to the model.
![Filtered MIDI events][16]
To encode the sequence of MIDI events to the final representation I use the code below:
EncodeMidi[track_, secondsPerTick_] := Block[{lastVelocity = 0},
ClearAll[list];
Flatten[
Map[
Block[{list = {}},
(* Add time shifts when needed *)
If[TimeShiftByte[#, secondsPerTick] > 0, list = Join[list, EncodeTimeShift[TimeShiftByte[#, secondsPerTick]]]];
(* Proceed with logic only if it's a note event *)
If[StatusByte[#] == NoteOnByte || StatusByte[#] == NoteOffByte,
(* Add velocity if it's different from the last seen *)
If[lastVelocity != QuantizedVelocity[VelocityByte[#]] && StatusByte[#] == NoteOnByte,
lastVelocity = QuantizedVelocity[VelocityByte[#]];
list = Join[list, List[EncodeVelocity[VelocityByte[#]]]];
];
(* Add note event *)
list = Join[list, List[EncodeNote[NoteByte[#], StatusByte[#] == NoteOnByte]]];
];
(* Return encoded list*)
list
]&,
track]
, 1]];
This code relies on a number of functions I wrote during the summer school, but they are mostly short utility functions.
You can check them and the complete implementation on [GitHub][17].
Once the preprocessing code is ready, it's time to build a dataset.
**Building Dataset**
I've made a [notebook][18] that takes care of preprocessing the MIDI files and encoding them into the final representation.
(* Take all files names in Midi folder *)
files = FileNames["*", NotebookDirectory[] <> "Midi"];
dataset = Flatten[EncodeTrack /@ files, 1];
During the encoding, each track is partitioned into smaller segments:
encodings = Partition[EncodeMidi[GetMidiEvents[raw, secondsPerTick], secondsPerTick], 500];
In the original work, the Magenta team split each song into 30-second segments to keep each example a manageable size. The problem is that partitioning by equal time doesn't give examples of equal size. Even though sequence models can accept varying input sizes, I wanted a static example size to speed up training; I was told that internally in WL (and probably everywhere) it's more efficient for every example fed to a model to have the same size.
However, I believe this kind of partitioning has a drawback: an equal number of encoded events can span different durations in time, which adds inconsistency to the dataset.
In my case, I divided each song into segments of 500 encoded events.
![One Song Final Encoding][19]
To reduce the size of the final dataset, I store only the indices of the one-hot encodings.
As a result, the final dimension of my dataset was **{99285, 500}**
If you want to try partitioning by time, you need to edit the `EncodeTrack` function in [`Midi.m`][20].
With this code you can find the positions at which to split a sequence into equal-time segments:
GetTimePositions[track_, seconds_, secondsPerTick_] :=
Block[{positions = {}, time = 0},
Do[
time = time + track[[i]][[1]] * secondsPerTick;
If[time > seconds, positions = Append[positions, i]; time = 0;],
{i, Length@track}];
positions
]
Here the parameter `track` is a sequence of MIDI events. You then split the same `track` at the positions you got from the function.
segments = FoldPairList[TakeDrop, track, positions];
After that, you need to encode the `segments` with the `EncodeMidi` function. One thing then remains: reworking the model to accept a varying input size. The next part, however, covers how to build a model with a static example size.
----------
## Building a Model ##
Because music data is a sequence of events, we need an architecture that can remember and that predicts the next event based on all previous ones. This is exactly what recurrent neural networks do: RNNs use their internal state (memory) to process sequences of inputs. If you want more details, I recommend watching this [introduction][21] video.
On an abstract level, an RNN learns the probabilities of events following one another. Take, for example, this language model from the Wolfram Neural Net Repository; it predicts the next character of a given sequence.
NetModel["Wolfram English Character-Level Language Model V1"]["hello worl"]
The output is **d**.
You can get top 5 probabilities if you want.
NetModel["Wolfram English Character-Level Language Model V1"]["hello worl", {"TopProbabilities", 5}]
You will get:
{"d" -> 0.980898, "e" -> 0.00808785, "h" -> 0.0045687, " " -> 0.00143807, "l" -> 0.000681855}
In my work I needed similar behavior, but instead of characters I wanted to predict encoded MIDI events. That is why the basis of my model is the [Wolfram English Character-Level Language Model V1][22]. Also, after reading a [guide][23] about sequence learning with neural networks in WL, I decided to improve the training process with the "teacher forcing" technique.
**Teacher Forcing**
In a simple language model, the model takes the last prediction from an input sequence and computes its class. For "teacher forcing" we instead need the classes of all predictions.
![Model comparison][24]
Compared to the language model, I removed one `GatedRecurrentLayer` and one `DropoutLayer` because the dataset is not that big (a precaution against overfitting). Another benefit of "teacher forcing" is that you don't need to create labels separately for every example. To compute the loss we make two sequences out of an input example:
1. Everything but the **last** element(Sequence**Most**Layer)
2. Everything but the **first** element(Sequence**Rest**Layer)
![Teacher Forcing Net][25]
As you can see, the input is a single vector of indices of size 500, and the labels for computing the loss are generated inside the `NetGraph`.
Here is a visualized example of the flow with simple input:
![Input flow explanation][26]
You can find the code for creating the model in the [PerformanceRnnModel][27] notebook.
Once all the data is ready and the model is finalized, we can start training.
NetTrain[teacherForcingNet,
<|"Input" -> dataTrain|>,
All,
TrainingProgressCheckpointing -> {"File", checkPointDir, "Interval" -> Quantity[5, "Minutes"]},
BatchSize -> 64,
MaxTrainingRounds -> 10,
TargetDevice -> "GPU", (* Use CPU if you don't have Nvidia GPU *)
ValidationSet -> <|"Input" -> dataValidate|>
]
A piece of friendly advice: it's better to use **checkpointing** during training. It will keep your mental health safe and ensure that all training progress is saved.
I trained the model for 30 rounds, which took around 4-5 hours on AWS GPUs.
The first 10-15 rounds showed no sign of problems, but later the training clearly started to overfit.
![Training loss][28]
Unfortunately, I didn't have time to fix this problem, but to overcome it I might reduce the size of the GRUs from 512 to 256 and bring back the dropout layer.
## Generate Music ##
To generate music we need a model that predicts the next event in a sequence, as in the language model. To do that, I take the trained model and extract its "PerformanceRNN Predict Model" part.
predictNet = NetExtract[trainedNet, "predict"];
The next step is to convert this `predictNet` into a model that takes a varying input size and returns the class of the next event.
generateModel = NetJoin[NetTake[predictNet, 3], {
SequenceLastLayer[],
NetExtract[predictNet, {4, "Net"}],
SoftmaxLayer[]},
"Input" -> Automatic,
"Output" -> NetDecoder[{"Class", Range[310]}]
]
The resulting architecture is pretty much the same as the language model I started from: it takes a sequence of encoded MIDI events of varying size, such as `{177, 60, 90}`, and predicts what the next event could be: `{177, 60, 90, ?}`.
![Model Comparison(Generation)][29]
**Now, let the fun begin!**
generateDemo[net_, start_, len_] := Block[{obj = NetStateObject[net]},
Join@NestList[{obj[#, "RandomSample"]} &, start, len]
]
This small function is all we need to generate a sequence of the desired length.
`NetStateObject` keeps track of all the sequences that have been applied to the network, meaning every next prediction is the result of all previous events, not only the most recent one.
`start` should be a sequence of encoded MIDI events. It can also be a single-item sequence, say if you want to start from a pause or from a particular note. This makes it possible, to some extent, to steer the generation process in a particular direction.
Okay, just two lines of code and you can play with generating music:
generatedSequence = Flatten[generateDemo[generateModel, {60, 216, 148, 62, 200, 150, 64, 236, 152, 67, 198, 155}, 500]];
ToSound[generatedSequence]
These are other examples: [2][30], [3][31].
You can generate your own demos if you download the [repository][32] and open the [PerformanceRNN][33] notebook.
## Further Work ##
This was a very fun and challenging task for me. I can't say that I'm satisfied with the results, but it's a good start and I now have a direction.
What I want to explore next is the variational autoencoder, especially [MusicVAE][34], made by the same Magenta team.
However, I'll start by improving the existing model: changing the architecture and cleaning the dataset so it contains only performances from the Yamaha dataset.
Thank you for reading the post, and feel free to ask any questions.
![Peace!][35]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2433article_cover.png&userId=1352227
[2]: http://www.piano-e-competition.com
[3]: https://drive.google.com/open?id=1I7l6hrecWsuMxqvEdUiRWtg6N6NCW34R
[4]: https://magenta.tensorflow.org/
[5]: https://magenta.tensorflow.org/performance-rnn
[6]: http://www.kuhmann.com/Yamaha.htm
[7]: https://github.com/tensorflow/magenta/tree/master/magenta/models/performance_rnn
[8]: http://colinraffel.com/projects/lmd/
[9]: http://www.midiworld.com/
[10]: https://freemidi.org/
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5707Preprocessing_explanation.png&userId=1352227
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6109Prep_ex_2.png&userId=1352227
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=MIDIimportelements.png&userId=1352227
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7717raw_midi_output.png&userId=1352227
[15]: http://www.recordingblogs.com/wiki/musical-instrument-digital-interface-midi
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=filtered_midi_events.png&userId=1352227
[17]: https://github.com/Apisov/Performance-RNN-WL/blob/master/Project/Midi.m
[18]: https://github.com/Apisov/Performance-RNN-WL/blob/master/BuildData.nb
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=One_track_final_encoding.png&userId=1352227
[20]: https://github.com/Apisov/Performance-RNN-WL/blob/master/Project/Midi.m
[21]: http://www.wolfram.com/wolfram-u/catalog/wl036/
[22]: https://resources.wolframcloud.com/NeuralNetRepository/resources/Wolfram-English-Character-Level-Language-Model-V1
[23]: http://reference.wolfram.com/language/tutorial/NeuralNetworksSequenceLearning.html#1013067167
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Modelcomparison.png&userId=1352227
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3750Teacher_forcing.png&userId=1352227
[26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Teacher_forcing_explanation.png&userId=1352227
[27]: https://github.com/Apisov/Performance-RNN-WL/blob/master/PerformanceRNNModel.nb
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=raw_midi.png&userId=1352227
[29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Modelcomparison%28Generation%29.png&userId=1352227
[30]: https://drive.google.com/open?id=1GtlaOtTF_9rHiDVsrqmLnUaKCSAva1dP
[31]: https://drive.google.com/open?id=1sEihbFJw4XbVZveYl8efoM8Ivq8781ar
[32]: https://github.com/Apisov/Performance-RNN-WL
[33]: https://github.com/Apisov/Performance-RNN-WL/blob/master/PerformanceRNN.nb
[34]: https://magenta.tensorflow.org/music-vae
[35]: http://community.wolfram.com//c/portal/getImageAttachment?filename=d2eedc8a1ea8fc6a62e23b151c7fb3675c8153cc.png&userId=1352227
Pavlo Apisov, 2018-07-11T21:20:56Z
Create a custom loss function with NetTrain?
http://community.wolfram.com/groups/-/m/t/1402209
Suppose I want to Classify some data but, for my own reasons, want a custom neural net architecture rather than whatever Classify develops algorithmically, AND I also want a custom loss function. In my example, I want an asymmetric loss such that predicting True when the real answer is False is a worse problem than predicting False when the real answer is True. In Classify, there is an option UtilityFunction that works splendidly in such cases. And I think the following set of layers would emulate a utility function in the neural network arena if I wanted losses in one direction to count double losses in the other direction. There may well be much better functions; I only show the code below to indicate that something may be possible.
lossnet =
NetChain[{ThreadingLayer[#1 - #2 &],
ElementwiseLayer[2*Ramp[#] + 1*Ramp[-#] &]}]
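For context, the usual shape for attaching such a net is NetTrain's LossFunction option, where the loss net exposes "Input" and "Target" ports and emits a "Loss" port. A hedged sketch of that general shape (the names and the asymmetric weighting are illustrative, not a tested solution to the Titanic problem):

```
(* asymmetric loss: errors in one direction count double *)
asymmetricLoss = NetGraph[
  {ThreadingLayer[2*Ramp[#1 - #2] + Ramp[#2 - #1] &]},
  {{NetPort["Input"], NetPort["Target"]} -> 1 -> NetPort["Loss"]}];

(* attached when training some predictor net: *)
(* NetTrain[net, trainingData, LossFunction -> asymmetricLoss] *)
```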
But I can't quite figure out how to put it all together. The particular toy problem I want to solve is to come up with a model that works on the Titanic dataset and predicts survival but, for my own reasons, counts a prediction of survival when the person dies as worse than a prediction of death when the person survives.
Three other notes:
1) My question is related to a question asked [here][1] but no one ever answered it.
2) The documentation for the neural net framework really needs to be improved, particularly if it escapes the "Experimental" framework. Right now, it is missing the conceptual framework that would make its use easy. It also seems to have a very heavy focus on image processing rather than on data analysis in other contexts, such as social science. Moreover, some of the documentation is underinclusive. By way of example, there are options to NetGraph that are listed in the "Details" section yet there is no indication at the top of the ref page that any options exist. As a result it is extremely challenging to figure out how to deal with data such as the Titanic which is a list of Associations and for which various columns of the data need special encoding.
3) One motivation for using a custom utility function is that when one output class is scarce, the neural net frequently develops a predictor that always predicts the most common class: predicting that everyone on the Titanic will live. In the Classify context, there are ways of dealing with this: use of ClassPriors, UtilityFunctions. I'd like the same capabilities when using the Neural Network framework.
[1]: http://community.wolfram.com/groups/-/m/t/982989
Seth Chandler 2018-08-10T15:19:12Z

Use FindRoot for the following function?
http://community.wolfram.com/groups/-/m/t/1402742
In the example below, FindRoot doesn't work with the provided function, calcTresAtTime[mCpRes_?NumericQ, mFracClr_?NumericQ,
timeTarget_?NumericQ]. However, no problems are observed when calling the function by itself or from Plot. The [documentation][1] mentions that FindRoot first localizes all of the variables, then evaluates f with the variables being symbolic. The examples in the documentation show how to turn this off, by using _?NumericQ.
eq01ResHB =
MCpRes ures'[t] ==
mCpPump (uclr[t] - ures[t]) + UAambRes (uamb - ures[t]) +
UAbrg (ubrg - ures[t]);
eq02ClrHB =
MCpClr uclr'[t] ==
mCpPump (ures[t] - uclr[t]) + UAambClr (uamb - uclr[t]) +
UAclr (ucw - uclr[t] );
ic = {ures[0] == ures0, uclr[0] == uclr0};
eqSet = Join[{eq01ResHB, eq02ClrHB}, ic];
vars = {ures, uclr};
KuambRes = 0.025 ;
Kuabrg = 0.236;
KuambClr = 0.0024;
Kuaclr = 0.1;
calcTresAtTime[mCpRes_?NumericQ, mFracClr_?NumericQ,
timeTarget_?NumericQ] := Module[{TresSolLocal, TclrSolLocal},
parmsRes = {MCpRes -> mCpRes , UAambRes -> KuambRes,
UAbrg -> Kuabrg};
parmsClr = {MCpClr -> mFracClr mCpRes, UAambClr -> KuambClr,
UAclr -> Kuaclr};
parmsBoundary = {mCpPump -> 1, ubrg -> 200, ucw -> 60, uamb -> 70};
parmsInitialCond = {ures0 -> 70, uclr0 -> 70};
eqSetValues =
eqSet /. parmsRes /. parmsClr /. parmsBoundary /. parmsInitialCond;
{TresSolLocal, TclrSolLocal} =
NDSolveValue[eqSetValues, vars, {t, 0, 2000}];
N@TresSolLocal[timeTarget]
]
calcTresAtTime[60., 0.4, 300.]
Plot[calcTresAtTime[x, 0.4, 300.], {x, 0, 80}]
FindRoot[ 130 == calcTresAtTime[x, 0.4, 300.], {x, 0, 80}]
Below is the result of the Plot command, which suggests that the function itself doesn't have any severe problems.
![enter image description here][2]
Below is the result of the FindRoot command
![FindRoot output][3]
I have worked through the examples in the documentation but can't find where I have taken a wrong turn.
Any help would be appreciated.
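For reference, the minimal pattern that the documentation describes looks like this: a black-box function guarded with `?NumericQ`, so FindRoot never evaluates it with symbolic arguments. This toy function is illustrative only, not the heat-balance model above.

```mathematica
(* Toy illustration of the _?NumericQ pattern with FindRoot:
   f evaluates only for numeric x, so FindRoot treats it as a black box *)
f[x_?NumericQ] := NIntegrate[Sin[t x], {t, 0, 1}];
FindRoot[f[x] == 0.3, {x, 1.}]
```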
[1]: http://reference.wolfram.com/language/ref/FindRoot.html
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PlotExample01.jpg&userId=894223
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=FindRoot_Output.PNG&userId=894223
Robert McHugh 2018-08-12T06:41:10Z

Understand behavior of TableForm?
http://community.wolfram.com/groups/-/m/t/1402300
Mathematica 11.3.0.0 Windows 10 64
This may be a silly question, but I keep banging my head against this cumbersome feature.
Can anybody explain where the "None" in the following output comes from (and, ideally, how to eliminate it)?
Input:
TableForm[{{1, {1, 1}}, {2, {2, 2}}}, TableDirections -> {Row, Column, Column}]
Output (in the original post, periods stood in for spaces because the forum's text formatter strips the alignment; reconstructed with spaces below):

    1              2
    None   1  2
                1  2

Daniel Huber 2018-08-11T19:45:59Z

Solve the Karush-Kuhn-Tucker equations with Reduce
http://community.wolfram.com/groups/-/m/t/1402471
Some years ago I published a short article in the Mathematica Journal describing solving the Karush-Kuhn-Tucker equations with Reduce, to do symbolic optimization. I was pleased to see that the approach was subsequently used by several people. However, the code in that article has the problem that it gives all local minima. I've recently updated the code to give only global minima. The new code has the advantage over Minimize that it gives multiple global minima and also provides the values of the Lagrange multipliers, which give the sensitivity of the objective function to changes in the constraints. The code is shown below with copious comments. I've also given two examples in which the code returns a result but Minimize does not, even though this is an unusual circumstance.
Code
In[1]:= (* Generate the Karush-Kuhn-Tucker Equations *)
KTEqs[obj_ (* objective function *), cons_List (* constraints *), vars_List (*
variables *)] :=
Module[{consconvrule = {GreaterEqual[x_, y_] -> LessEqual[y - x, 0],
Equal[x_, y_] -> Equal[x - y, 0],
LessEqual[x_, y_] -> LessEqual[x - y, 0],
LessEqual[lb_, x_, ub_] -> LessEqual[(x - lb) (x - ub), 0],
GreaterEqual[ub_, x_, lb_] -> LessEqual[(x - lb) (x - ub), 0]} ,
x, y, lb, ub , stdcons, eqcons, ineqcons, lambdas, mus, lagrangian, eqs1,
eqs2, eqs3, alleqns, allvars },
(* Change constraints to Equal and LessEqual form with zero on the right-hand side *)
stdcons = cons /. consconvrule;
(* Separate the equality constraints and the inequality constraints *)
eqcons = Cases[stdcons, Equal[_, 0]][[All, 1]];
ineqcons = Cases[stdcons, LessEqual[_, 0]][[All, 1]];
(* Define the Lagrange multipliers for the equality and inequality constraints *)
lambdas = Array[\[Lambda], Length[eqcons]];
mus = Array[\[Mu], Length[ineqcons]];
(* Define the Lagrangian *)
lagrangian = obj + lambdas.eqcons + mus.ineqcons;
(* The derivatives of the Lagrangian are equal to zero *)
eqs1 = Thread[ D[lagrangian, {vars}] == 0];
(* Lagrange multipliers for inequality constraints are \[GreaterEqual] 0 to get minima *)
eqs2 = Thread[mus >= 0];
(* Lagrange multipliers for inequality constraints are 0 unless the constraint value is 0 *)
eqs3 = Thread[mus*ineqcons == 0];
(* Collect the equations *)
alleqns = Join[eqs1, eqs2, eqs3, cons];
(* Collect the variables *)
allvars = Join[vars, lambdas, mus];
(* Return the equations and the variables *)
{alleqns, allvars}
]
In[2]:= (* Convert logical expressions to rules *)
torules[res_] := If[Head[res] === And, ToRules[res], List @@ (ToRules /@ res)]
In[3]:= (* Find the global minima *)
KKTReduce[obj_(* objective function *), cons_List (* constraints *),
vars_List (* variables *)] :=
Block[{kkteqs, kktvars, red, rls, objs, allres, minobj, sel, ret, minred,
minredrls},
(* Construct the equations and the variables *)
{kkteqs, kktvars} = KTEqs[obj, cons, vars];
(* Reduce the equations *)
red = LogicalExpand @
Reduce[kkteqs, kktvars, Reals, Backsubstitution -> True];
(* Convert the Reduce results to rules (if possible ) *)
rls = torules[red];
(* If the conversion to rules was complete *)
If[Length[Position[rls, _ToRules]] == 0,
(* Calculate the values of the objective function *)
objs = obj /. rls;
(* Combine the objective function values with the rules *)
allres = Thread[{objs, rls}];
(* Find the minimum objective value *)
minobj = Min[objs];
(* Select the results with the minimum objective value *)
sel = Select[allres, #[[1]] == minobj &];
(* Return the minimum objective value with the corresponding rules *)
ret = {minobj, sel[[All, 2]]},
(* Else if the results were not completely converted to rules *)
(* Use MinValue to find the smallest objective function value *)
minobj = MinValue[{obj, red}, kktvars];
(* Use Reduce to find the corresponding results *)
minred =
Reduce[obj == minobj && red, kktvars, Reals, Backsubstitution -> True];
(* Convert results to rules, if possible *)
minredrls = torules[minred];
ret = If[
Length[Position[minredrls, _ToRules]] == 0, {minobj, minredrls}, {minobj,
minred}];
];
(* Remove excess nesting from result *)
If[Length[ret[[2]]] == 1 && Depth[ret[[2]]] > 1, {ret[[1]], ret[[2, 1]]},
ret]
]
Examples
In[5]:= Minimize[{x^2 - y^2, Cos[x - y] >= 1/2, -5 <= x <= 5, -5 <= y <= 5}, {x, y}]
Out[5]= Minimize[{x^2 - y^2, Cos[x - y] >= 1/2, -5 <= x <= 5, -5 <= y <= 5}, {x, y}]
In[6]:= KKTReduce[x^2 - y^2, {Cos[x - y] >= 1/2, -5 <= x <= 5, -5 <= y <= 5}, {x, y}]
Out[6]= {-25 + 25/9 (-3 + \[Pi])^2, {{x -> -(5/3) (-3 + \[Pi]),
y -> 5, \[Mu][1] -> (20 (-3 + \[Pi]))/(3 Sqrt[3]), \[Mu][2] ->
0, \[Mu][3] ->
1/9 (9 + 6 Sqrt[3] Sin[5 + 5/3 (-3 + \[Pi])] -
2 Sqrt[3] \[Pi] Sin[5 + 5/3 (-3 + \[Pi])])}, {x -> 5/3 (-3 + \[Pi]),
y -> -5, \[Mu][1] -> (20 (-3 + \[Pi]))/(3 Sqrt[3]), \[Mu][2] ->
0, \[Mu][3] ->
1/9 (9 + 6 Sqrt[3] Sin[5 + 5/3 (-3 + \[Pi])] -
2 Sqrt[3] \[Pi] Sin[5 + 5/3 (-3 + \[Pi])])}}}
In[7]:= TimeConstrained[
Minimize[{(Subscript[x, 1] - Subscript[x, 2])^2 + (Subscript[x, 2] -
Subscript[x, 3])^4, (1 + Subscript[x, 2]^2) Subscript[x, 1] + Subscript[
x, 3]^4 - 3 == 0}, {Subscript[x, 1], Subscript[x, 2], Subscript[x,
3]}], 60]
Out[7]= $Aborted
In[8]:= AbsoluteTiming @
KKTReduce[(Subscript[x, 1] - Subscript[x, 2])^2 + (Subscript[x, 2] -
Subscript[x, 3])^4, {(1 + Subscript[x, 2]^2) Subscript[x, 1] + Subscript[
x, 3]^4 - 3 == 0}, {Subscript[x, 1], Subscript[x, 2], Subscript[x, 3]}]
Out[8]= {1.67203, {0, {{Subscript[x, 1] -> 1, Subscript[x, 2] -> 1,
Subscript[x, 3] -> 1, \[Lambda][1] -> 0}, {Subscript[x, 1] ->
AlgebraicNumber[Root[3 + 2 #1 + 2 #1^2 + #1^3 &, 1], {0, 1, 0}],
Subscript[x, 2] ->
AlgebraicNumber[Root[3 + 2 #1 + 2 #1^2 + #1^3 &, 1], {0, 1, 0}],
Subscript[x, 3] ->
AlgebraicNumber[
Root[3 + 2 #1 + 2 #1^2 + #1^3 &, 1], {0, 1, 0}], \[Lambda][1] -> 0}}}}
Frank Kampas 2018-08-11T17:18:58Z

Why are NicholsGridLines in NicholsPlot different from MATLAB's?
http://community.wolfram.com/groups/-/m/t/1402189
The sensitivity lines (NicholsGridLines) in NicholsPlot in Mathematica are drawn differently from what is drawn in MATLAB and from what I learned in control-systems lectures.
Here we can see the function P[s] = -15 (1+0.2 s/3+(s/3)^2)/(s (1+s/2) (1-1.6 s/5+(s/5)^2) (1+0.2 s/7+(s/7)^2)), which according to Mathematica does not enter the 3 dB sensitivity loop (just below (Pi, 0)), while according to MATLAB it does. For some reason, Mathematica draws the Nichols grid lines as a reflection about the x axis compared to MATLAB and to what I learned in control systems. In addition, Mathematica draws the plot around phase Pi while MATLAB draws it around -Pi. Why these differences?
![the sensitivity lines(NicholsGridLines) in NicholsPlot][1]
Is there an option for flipping these grid lines somehow? Or can someone explain why they are drawn as shown?
The code I used to draw the plot:
NicholsPlot[P[s],
GridLines -> {Range[-2 \[Pi], 2 \[Pi], 0.5 \[Pi]], Automatic},
StabilityMargins -> True, PlotRange -> {{0, 2 \[Pi]}, {-50, 60}},
NicholsGridLines -> {{-10^(3/20), -10^(6/20)}},
ScalingFunctions -> {"Radian", Automatic},
Ticks -> {Range[-2 \[Pi], 2 \[Pi], \[Pi]/2], Automatic}]
Dynamic[MousePosition["Graphics"]]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=%D7%AA%D7%9E%D7%95%D7%A0%D7%94%D7%9C%D7%9C%D7%90%D7%A9%D7%9D.png&userId=1402172
Eliav Louski 2018-08-11T01:22:37Z

[GIF] This is Only a Test (Decagons from stereographic projections)
http://community.wolfram.com/groups/-/m/t/1380624
![Decagons formed from stereographically projected points][1]
**This is Only a Test**
This one is fairly straightforward. Form 60 concentric circles on the sphere centered at the point $(0,1,0)$. On each circle, take 10 equally-spaced points, stereographically project to the plane, and form a decagon from the resulting points. Now rotate the sphere and all the points on it around the axis $(0,1,0)$. The result (at least after adding some color) is this animation. This is a sort of discretized companion to my old still piece [_Dipole_][2].
Here's the code:
Stereo[p_] := p[[;; -2]]/(1 - p[[-1]]);
With[{r = 2, n = 10, m = 60,
cols = RGBColor /@ {"#2EC4B6", "#011627", "#E71D36"}},
Manipulate[
Graphics[
{EdgeForm[Thickness[.0045]],
Join[{Reverse[#[[1]]], #[[2]]}]
&[Partition[
Table[
{Blend[cols, θ/π],
EdgeForm[Lighter[Blend[cols, θ/π], .15]],
Polygon[
Table[Stereo[(Cos[θ] {0, 1, 0} +
Sin[θ] {Cos[t], 0, Sin[t]}).RotationMatrix[ϕ, {0, 1, 0}]],
{t, π/2, 5 π/2, 2 π/n}]]},
{θ, π/(2 m), π - π/(2 m), π/m}],
m/2]]},
PlotRange -> r, ImageSize -> 540, Background -> Blend[cols, 1/2]],
{ϕ, 0, 2 π/n}]
]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=stereo29.gif&userId=610054
[2]: https://shonkwiler.org/still-images/dipole
Clayton Shonkwiler 2018-07-12T03:41:03Z

Solve 2 coupled 2nd ODEs and plot them with ParametricPlot?
http://community.wolfram.com/groups/-/m/t/1393597
I am interested in solving two coupled 2nd-order differential equations and plotting the solution using ParametricPlot. Can anyone help me resolve this issue? The solution is the trajectory of a particle under the influence of gravity, so I would also like to animate the trajectory of the particle. I have attached the Mathematica script to this post.
Soumen Basak 2018-07-28T10:13:02Z

Export Graphics3D images to PDF preserving a good resolution?
http://community.wolfram.com/groups/-/m/t/1400227
Let's create image
g = Graphics3D[{Line[{{-2, 0, 2}, {2, 0, 2}, {0, 0, 4}, {-2, 0, 2}}]}]
and then export it by `Export["test.pdf", g]`. It leads to a very unsatisfactory result [![snapshot of resulting image in PDF][1]][1]
The option `"AllowRasterization" -> False` does not help. How do I get a PDF or EPS file of this image in vector format?
[1]: https://i.stack.imgur.com/qaSfR.png
Rodion Stepanov 2018-08-09T11:13:19Z

Use index.html files in Wolfram Cloud sites?
http://community.wolfram.com/groups/-/m/t/1250045
### Cross post on StackExchange: https://mathematica.stackexchange.com/questions/162265/using-index-html-files-in-wolfram-cloud-sites
---
Part as exercise, part so I could write data-science blog posts I built a website builder using Mathematica that sets up sites in the cloud.
As an example site, here is a paclet server website I set up: https://www.wolframcloud.com/objects/b3m2a1.paclets/PacletServer/main.html
Unfortunately, to get this to work I had to remap my site's index.html file to a main.html file, because when I try to view the site at index.html, either by explicitly routing there or by going to the implicit view, I am pushed back to the implicit view and given a 500 error.
Note that I cannot copy the index.html file to the site root i.e.,
CopyFile[
CloudObject["https://www.wolframcloud.com/objects/b3m2a1.paclets/PacletServer/index.html"],
CloudObject["https://www.wolframcloud.com/objects/b3m2a1.paclets/PacletServer", Permissions->"Public"]
]
as I get a `CloudObject::srverr` failure
I can't even set up a permanent redirect like so:
CloudDeploy[
Delayed@HTTPRedirect[
"https://www.wolframcloud.com/objects/b3m2a1.paclets/PacletServer/main.html",
<|"StatusCode" -> 301|>
],
"server",
Permissions -> "Public"
]
CloudObject["https://www.wolframcloud.com/objects/b3m2a1.paclets/server"]
While this apparently worked, going to that site causes my browser to spin indefinitely before finally giving up.
Even more, all of these possible hacks are ugly and I'd much rather work with the standard website setup.
How can I do this?
b3m2a1 2017-12-19T17:59:55Z

Display a Character made by "Private Character Editor of Windows"?
http://community.wolfram.com/groups/-/m/t/1399382
I have struggled to display a private character in Mathematica, but have failed to do it so far,
so I want to know how to do it.
Any pointers would be appreciated.
I made my private character using the "Private Character Editor" of Windows, and
checked that it displays correctly in several other programs. It works everywhere except in Mathematica.
As I am using the old Mathematica version 4.1, I would also like to know how to do this using low-level functions.
Thanks
ichione ichiro 2018-08-07T17:37:40Z

Find second derivative using D?
http://community.wolfram.com/groups/-/m/t/1399266
The code below gives an incorrect 2nd derivative. Figure 1 shows the original function xxSumS1[s, r] = funDervtveLAB[s, r, 0] when r = 200, while figure 2 shows the 1st derivative of the original function Exp[-noisepow*s]*laplaceLABs[s, r] when r = 200. The 1st derivative is correct: the derivative of a decreasing function is negative (figure 2). However, the 2nd derivative xxSumS3[s_, r_] = funDervtveLAB[s, r, 2] when r = 200 seems to be incorrect, because I expect it to be positive for all values of s: it is the derivative of the 1st derivative, and the 1st derivative is increasing in s. Figure 3 shows the 2nd derivative of the original function.
![figure 1][1] ![figure 2][2] ![figure 3][3]
Clear["Global`*"]
a = 4.88; b = 0.43; etaLAB = 10.^(-0.1/10); etaNAB = 10.^(-21/10); etaTB = etaNAB;
PtABdB = 32; PtAB = 10^(PtABdB/10)*1*^-3; PtTBdB = 40; PtTB = 10^(PtTBdB/10)*1*^-3;
NF = 8; BW = 1*^7; noisepowdBm = -147 - 30 + 10*Log[10, BW] + NF;
noisepow = 0; RmaxLAB = 20000;
TBdensity = 1*^-6; ABdensity = 1*^-6;
alfaLAB = 2.09; alfaNAB = 2.09; alfaTB = 2.09;
mparameter = 3;
zetaLAB = PtAB*etaLAB; zetaNAB = PtAB*etaNAB; zetaTB = PtTB*etaTB;
height = 100; sinrdBrange = -10; sinr = 10.^(sinrdBrange/10);
probLoSz[z_] := 1/(1 + a*Exp[-b*(180/Pi*N[ArcTan[height/z]] - a)]);
probLoSr[r_] := 1/(1 + a*Exp[-b*(180/Pi*N[ArcTan[height/Sqrt[r^2 - height^2]]] -
a)]);
funLoS[z_] := z*probLoSz[z];
funNLoS[z_] := z*(1 - probLoSz[z]);
funLABNABs[z_, s_] := (1 - 1/(1 + s*zetaNAB*(z^2 + height^2)^(-alfaNAB/2)))*funNLoS[z];
funLABLABs[z_,
s_] := (1 - (mparameter/(mparameter + s*zetaLAB*(z^2 + height^2)^(-alfaLAB/2)))^mparameter)*funLoS[z];
funLABTBs[z_, s_] := z*(1 - 1/(1 + s*zetaTB*z^(-alfaTB)));
distnceLABNABs = (zetaLAB/zetaNAB)^(1/alfaLAB)*height^(alfaNAB/alfaLAB);
NearstInterfcLABNABs[r_] := Piecewise[{{height, r <= distnceLABNABs}, {(zetaNAB/zetaLAB)^(1/alfaNAB)* r^(alfaLAB/alfaNAB), r > distnceLABNABs}}];
NearstInterfcLABTBs[r_] := (zetaTB/zetaLAB)^(1/alfaTB)*r^(alfaLAB/alfaTB);
NearstInterfcLABLABs[r_] := r;
lowerlimitLABNABs[r_] := Sqrt[NearstInterfcLABNABs[r]^2 - height^2];
lowerlimitLABLABs[r_] := Sqrt[NearstInterfcLABLABs[r]^2 - height^2];
lowerlimitLABTBs[r_] := NearstInterfcLABTBs[r];
InteglaplaceLABNABs[s_?NumericQ, r_?NumericQ] := NIntegrate[funLABNABs[z, s], {z, lowerlimitLABNABs[r], RmaxLAB}];
InteglaplaceLABLABs[s_?NumericQ, r_?NumericQ] := NIntegrate[funLABLABs[z, s], {z, lowerlimitLABLABs[r], RmaxLAB}];
InteglaplaceLABTBs[s_?NumericQ, r_?NumericQ] := NIntegrate[funLABTBs[z, s], {z, lowerlimitLABTBs[r], RmaxLAB}];
laplaceLABNABs[s_, r_] := Exp[-2*Pi*ABdensity*InteglaplaceLABNABs[s, r]];
laplaceLABLABs[s_, r_] := Exp[-2*Pi*ABdensity*InteglaplaceLABLABs[s, r]];
laplaceLABTBs[s_, r_] := Exp[-2*Pi*TBdensity*InteglaplaceLABTBs[s, r]];
laplaceLABs[s_, r_] :=
laplaceLABNABs[s, r]*laplaceLABLABs[s, r]*laplaceLABTBs[s, r];
funDervtveLAB[s_, r_, kk_] := D[Exp[-noisepow*s]*laplaceLABs[s, r], {s, kk}];
xxSumS1[s_, r_] = funDervtveLAB[s, r, 0]; (*original function*)
xxSumS2[s_, r_] = funDervtveLAB[s, r, 1]; (* 1st derivative*)
xxSumS3[s_, r_] = funDervtveLAB[s, r, 2]; (*2nd derivative*)
xxSumR1[r_] := xxSumS1[s, r] /. s -> (mparameter*sinr/zetaLAB*r^alfaLAB);
xxSumR2[r_] := xxSumS2[s, r] /. s -> (mparameter*sinr/zetaLAB*r^alfaLAB);
xxSumR3[r_] := xxSumS3[s, r] /. s -> (mparameter*sinr/zetaLAB*r^alfaLAB);
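As a hedged aside (not a fix for the model above): when a function is defined only for numeric arguments, as the `?NumericQ`-guarded integrals above are, symbolic `D` cannot look inside it, and a numerical derivative such as `ND` from the NumericalCalculus` package is sometimes used instead. A toy illustration with a made-up function:

```mathematica
Needs["NumericalCalculus`"];
(* illustrative black-box function, defined only for numeric input *)
g[x_?NumericQ] := NIntegrate[Exp[-x t^2], {t, 0, 1}];
ND[g[x], x, 0.5]       (* numerical 1st derivative at x = 0.5 *)
ND[g[x], {x, 2}, 0.5]  (* numerical 2nd derivative at x = 0.5 *)
```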
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig-1.jpg&userId=1350020
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig-2.jpg&userId=1350020
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig-3.jpg&userId=1350020
mohamed alzenad 2018-08-07T20:02:23Z

Solve a PDE with boundary conditions (chemical adsorption in fixed beds)?
http://community.wolfram.com/groups/-/m/t/1398247
Dear Wolfram team:
I have been trying for week to solve a system of 2 partial differential equations describing the adsorption of a chemical substance on a fixed bed (for example, a column of activated carbon). The 2 equations are the following, taken from McCabe (1993):
![Description of eq 1][1]
![Description of eq 2][2]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=EQ1.png&userId=1020580
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=EQ2.png&userId=1020580
Unfortunately I cannot get past the general solution (with arbitrary constants), because when I try to impose boundary conditions Mathematica fails. Maybe I am using the wrong command or syntax, or maybe there are too many or too few boundary conditions.
I have attached the program, in which I tried to simplify the problem by combining both equations into a third.
Thank you in advance for your help.
Best regards,
Alberto Silva
Alberto Silva Ariano 2018-08-06T01:04:00Z

[HACKATHON] Hardware Verification Workflow with SCR1 in Wolfram Language
http://community.wolfram.com/groups/-/m/t/1400440
## The project ##
We connected Wolfram Mathematica with [SCR1 microcontroller core][1]. For this purpose, we developed a driver for SCR1 based on the Wolfram Device Framework. In our project SCR1 is not a hardware device but an RTL code of processor written in [SystemVerilog][2].
A chip design workflow is a complicated multistage process. At the design stage, engineers describe their solutions with the terminology of the register-transfer level (RTL) using RTL languages such as SystemVerilog. At the verification stage, they have to prove that the design is correct and this is the most complex phase of development. Wolfram Mathematica can help in verification providing comprehensive analytical and visualisation features.
In the project, we used SCR1 as an example of RTL code because SCR1 is an open-source microcontroller core implementing the RISC-V architecture, which is itself open. The source files of SCR1 can be found at http://github.com/syntacore/scr1. Our solution allows SCR1 to be substituted with any other RTL design, so the project is extensible: what we built is a general workflow around Wolfram Mathematica. The project aims to demonstrate a potential application for Wolfram Mathematica in the semiconductor industry.
All code is posted on GitHub: [https://github.com/ckorikov/wolfram_russianhack18_scr1][3]
![SCR1][4]
## What it can do ##
The Wolfram Device Framework creates symbolic objects that represent external devices. In our case, this is the SCR1 processor. It is the frontend of our system. A description of the backend is in the next section.
SetDirectory[NotebookDirectory[]];
Needs["SCR1Device`"];
device = DeviceOpen["SCR1"]
![Device][5]
The SCR1 symbolic object has properties and three groups of methods — read, write and execute. In our project, users can interact with general purpose registers and memory of the SCR1. For this demonstration we additionally provided access to some wires such as a memory data bus and the branching logic in the processor pipeline. Examples are below.
**Properties.** There are 4 properties of the SCR1 symbolic object:
- `State`,
- `Clock`,
- `IPC` (instruction program counter),
- `MAX_MEM` (maximal memory).
The state property reflects a state of the processor and can have the following values: `IDLE`, `WORK` and `FINISHED`. This property is `WORK` after reset. When a program completes, the state transitions to `FINISHED`. The clock contains the number of ticks of a clock signal from a simulation start. The `IPC` shows a value of the IPC register. This value is an address of a currently executed instruction. `MAX_MEM` is a size of memory in bytes. These properties are read-only and can be accessed by the name of the property as follows.
device["MAX_MEM"]
32768
**Reading methods.** The general format of these commands is `DeviceRead[device, "CMD"]`. Instead of `CMD`, use one of the following commands.
- `STATE`: read the state of SCR1 (`State`, `Finished`, `Clock`, `IPC`).
- `REGS`: read the list of register values (from 1 to 32).
- `MEM`: read the list of bytes from memory.
- `BRANCH`: read the state of branching logic (`IPC`, `Jump`, `Branch_taken`, `Branch_not_taken`, `JB_addr`).
- `DBUS`: read the memory data bus (Address, Bytes).
**Writing methods.** The general format of these commands is `DeviceWrite[device, "CMD"]`. Instead of `CMD`, use one of the following commands.
- `REGS`: modify a value of a register.
- `MEM`: modify a value of a memory cell.
**Execution methods.** The general format of these commands is `DeviceExecute[device, "CMD"]`. Instead of `CMD`, use one of the following commands.
- `RESET`: reset the processor.
- `HARD_RESET`: reset the processor and internal counters of the simulator (such as simulation time and the clock counter).
- `LOAD`: load a program to memory and reset the processor.
- `STEP`: perform one tick of the clock signal.
- `RUN`: make steps until the end of the program.
- `RUN_UNTIL_IPC`: make steps until a specific IPC value.
- `TRACE_IPC`: execute `RUN` command and return a list of IPC values.
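To make the command groups above concrete, a typical session (assuming the device was opened as in the previous section; the program filename here is a hypothetical example) might look like:

```mathematica
(* Illustrative session combining the read/write/execute commands above.
   "./scr1_programs/xor.bin" is a hypothetical program path. *)
DeviceExecute[device, "HARD_RESET"];
DeviceExecute[device, "LOAD", "./scr1_programs/xor.bin"];
DeviceExecute[device, "STEP"];          (* one tick of the clock *)
DeviceRead[device, "STATE"]             (* inspect the processor state *)
DeviceWrite[device, {"REGS", 5, 30}];   (* poke register 5 *)
DeviceExecute[device, "RUN"];           (* run to the end of the program *)
```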
## Basic examples ##
###1. Program loading, soft and hard resets###
To load a program execute the following command. An argument is a path to the program file.
DeviceExecute[device,
"LOAD",
"./scr1_programs/dhrystone21.bin"
];
To reset the processor use `RESET` and `HARD_RESET` commands. Hard reset is soft reset + simulator cleanup.
###2. Read data about SCR1###
These are examples of reading commands output.
Dataset@DeviceRead[device, "STATE"]
![State][6]
Here, `Finished` is a flag which is 1 if SCR1 has reached the end of the program and 0 otherwise. The other output values are the same as the symbolic object properties.
Dataset@DeviceRead[device, "BRANCH"]
![Branch][7]
Structures like if-then-else create branches in the code execution flow. The `BRANCH` command returns information about the current branching state. `Jump`, `Branch_taken`, `Branch_not_taken` are flags: they are 1 if the instruction is a jump, or if a branch has been detected and taken or not taken, respectively. `JB_addr` is the address of the next instruction if a jump or branch has occurred.
Dataset@DeviceRead[device, "DBUS"]
![DBUS][8]
Data and program instructions are located in memory. A processor fetches them through a memory bus. `DBUS` returns an address of the memory cell and the size of the requested data in bytes.
Dataset@MapIndexed[
{#2[[1]], BaseForm[#1, 16], BaseForm[#1, 2]} &,
DeviceRead[device, "REGS"]
]
![Registers][9]
Any computations on the processor involve registers. We can read their values. This is an example of reading values of the register in binary and hexadecimal forms.
BaseForm[#, 16] &@DeviceRead[device, {"MEM", 512, 100}]
![Memory][10]
Also, we can read the contents of the memory. The first argument is the address of a cell. The second is the number of cells.
###3. Write data to memory and registers###
Write the value to the memory and check it.
DeviceWrite[device, {"MEM", 10000, 10}];
DeviceRead[device, {"MEM", 10000, 1}]
{10}
The first argument is the address of a memory cell. The second one is the value.
DeviceWrite[device, {"REGS", 5, 30}];
DeviceRead[device, "REGS"]
{0, 0, 0, 0, 30, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
The first argument is a register index. The second one is the value.
###4. Program execution on SCR1###
There are several functions which drive the program flow. The first is `STEP`: it advances the simulator by one clock and returns the number of clocks. It works until the end of the program, after which the core needs to be reset. We can use the `NEXT_IPC` function if we would like to run SCR1 until the next instruction occurs; it returns the new IPC value. Additionally, SCR1 may be run until a particular IPC value is encountered with the `RUN_UNTIL_IPC` command. If we would like to run SCR1 until the program ends, we can use the `RUN` function. If the program prints something to the display, the output is redirected to the `src1_output.txt` file.
Framed@Import["src1_output.txt"]
This is an example of the output.
HELL0 SCR1
Dhrystone Benchmark, Version 2.1 (Language: C)
Program compiled without 'register' attribute
Execution starts, 500 runs through Dhrystone
Execution ends
Final values of the variables used in the benchmark:
Int_Glob: 5
should be: 5
Bool_Glob: 1
should be: 1
Ch_1_Glob: A
should be: A
Ch_2_Glob: B
should be: B
Arr_1_Glob[8]: 7
should be: 7
Arr_2_Glob[8][11]: 510
should be: Number_Of_Runs + 10
Ptr_Glob->
Ptr_Comp: 15412
should be: (implementation-dependent)
Discr: 0
should be: 0
Enum_Comp: 2
should be: 2
Int_Comp: 17
should be: 17
Str_Comp: DHRYSTONE PROGRAM, SOME STRING
should be: DHRYSTONE PROGRAM, SOME STRING
Next_Ptr_Glob->
Ptr_Comp: 15412
should be: (implementation-dependent), same as above
Discr: 0
should be: 0
Enum_Comp: 1
should be: 1
Int_Comp: 18
should be: 18
Str_Comp: DHRYSTONE PROGRAM, SOME STRING
should be: DHRYSTONE PROGRAM, SOME STRING
Int_1_Loc: 5
should be: 5
Int_2_Loc: 13
should be: 13
Int_3_Loc: 7
should be: 7
Enum_Loc: 1
should be: 1
Str_1_Loc: DHRYSTONE PROGRAM, 1'ST STRING
should be: DHRYSTONE PROGRAM, 1'ST STRING
Str_2_Loc: DHRYSTONE PROGRAM, 2'ND STRING
should be: DHRYSTONE PROGRAM, 2'ND STRING
Number_Of_Runs= 500, HZ= 1000000
Time: begin= 15331, end= 165400, diff= 150069
Microseconds for one run through Dhrystone: 300
Dhrystones per Second: 3331
- tb/scr1_top_tb_axi.sv:314: Verilog $finish
## Additional examples ##
###1. Memory maps of programs###
In this example, we show a grid of memory maps for programs from the `scr1_programs` directory. A memory map is a matrix of memory cells where each element is highlighted depending on the value of the cell.
![Memory maps][12]
###2. Execution graph of programs###
We can visualise the trace of program execution as a directed graph whose vertices are instructions, placed in the order in which they were executed. Using the graph, it is easy to find the jumps in a program.
![Execution graph xor][13]
###3. Call graph###
There are assembler dumps in the `scr1_programs` directory. We use these dumps to map instructions to the names of functions. In this example, we parse the assembler files, find the address ranges, and use them for the mapping.
![Call graph dhrystone][14]
###4. Transactions to memory###
This example shows how to trace data manually with Wolfram Mathematica. Also, we calculate a list of frequent addresses which is accessed by SCR1 for a particular program (dhrystone21).
![DBUS Top Dhrystone][15]
###5. Develop new devices: branch predictor###
Our solution provides loads of data about the core. Engineers can use this data to design or optimise modules. For instance, we can get information about branching of SCR1 and use this data for developing a branch predictor module.
The purpose of the branch predictor is to improve the flow in the instruction pipeline. Branch predictors play a critical role in achieving high performance in many modern pipelined processors.
Here we use machine learning methods, a neural network, to build a predictor.
![NN Classifier][16]
## How it works ##
The driver encapsulates the lower-level interactions with SCR1. We cannot use SystemVerilog in Wolfram Mathematica directly, so we converted the SCR1 code to C++ using Verilator (https://www.veripool.org/wiki/verilator), an open-source Verilog/SystemVerilog simulator. We wrapped the generated C++ code with functions that communicate with Wolfram Mathematica through Wolfram LibraryLink. The full scheme of the project is below.
![System][17]
## Conclusions ##
Over the course of 24 hours our team built a prototype of a hardware verification workflow with the SCR1 microcontroller. We implemented:
- device driver for the SCR1 processor based on the Wolfram Device
Framework;
- the C++ bridge between Wolfram Mathematica and generated
C++ by Verilator;
- examples of using this system for verification
programs and hardware;
- the design of a branch predictor.
Our verification solution provides register and memory access and a step-by-step debugger (in clock or instruction modes). To build a powerful hardware debugger, it would be necessary to add the ability to dump arbitrary signals for any RTL design; that is a potential topic for future work.
[1]: https://syntacore.com/page/products/processor-ip/scr1
[2]: https://en.wikipedia.org/wiki/SystemVerilog
[3]: https://github.com/ckorikov/wolfram_russianhack18_scr1
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=scr1.png&userId=1399750
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4190device.png&userId=1399750
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=state.png&userId=1399750
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=branch.png&userId=1399750
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=dbus.png&userId=1399750
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=registers.png&userId=1399750
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=memory.png&userId=1399750
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4058dbus_top_dhrystone.png&userId=1399750
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=memory_maps.png&userId=1399750
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2389execution_graph_xor.png&userId=1399750
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1756call_graph_dhrystone.png&userId=1399750
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4058dbus_top_dhrystone.png&userId=1399750
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=8444nn_classifier.png&userId=1399750
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=scheme.png&userId=1399750
Constantine Korikov 2018-08-09T14:40:31Z
Perform calculations on y-axis values?
http://community.wolfram.com/groups/-/m/t/1396972
Sometimes when plotting functions, you want to perform some operation on the y-axis values. This comes up when you want to plot, say, decibels vs. frequency. It is not clear how to apply an operation like 10 Log10 to the y-axis values. Is there a straightforward way to do this? Incidentally, LogPlot just gives you the y-axis in log form.
Jesse Sheinwald 2018-08-03T16:34:54Z
Custom websites and less opaque URLs
http://community.wolfram.com/groups/-/m/t/1399549
I have a number of sites hosted in the cloud these days:
* https://www.wolframcloud.com/objects/b3m2a1/home
* https://www.wolframcloud.com/objects/b3m2a1/tutorial
* https://www.wolframcloud.com/objects/b3m2a1.paclets/PacletServer/
And one that was [also built using Mathematica](https://www.wolframcloud.com/objects/b3m2a1/home/building-websites-with-mathematica-part-2.html) and notebooks but hosted on GitHub:
* https://paclets.github.io/PacletServer/
Now, a thing to notice about those first three is that, beyond being embarrassingly slow to load, the URLs are also incredibly ugly. The .github.io isn't attractive, to be sure, but it's a pretty small insertion relative to www.wolframcloud.com/objects. I mean, at least the organization (paclets) that creates the website is [the first thing people see in the GitHub URL](http://community.wolfram.com/groups/-/m/t/1250055).
Is there a way to get a less ugly URL? It's uncomfortable to have to send people to a URL that's so opaque it looks suspicious. I could always buy my own domain, but if the major benefit of the cloud is that it's programmable, why should I have to leave Mathematica to make it not look like I'm trying to steal people's personal info when I'm just trying to send them to my tutorial?
b3m2a1 2018-08-07T22:17:52Z
Use one Mathematica 'Personal License Service' for two computers?
http://community.wolfram.com/groups/-/m/t/1398856
I am interested in Mathematica, so I will purchase the Mathematica license that includes the 'Personal License Service'.
I know that the 'Personal License Service' allows me to install my license on a second personal computer.
If so, can I use one license on two computers?
Yoon Young Jin 2018-08-07T07:09:05Z
GeoHistogram3D for simple polygon bins
http://community.wolfram.com/groups/-/m/t/1399039
![Tree coverage in Champaign, IL.][1]
If you use `GeoHistogram` with triangle, rectangle or hexagon bins, and leave the default `Tooltip` behavior, then you can use the following to convert it to a 3D geo histogram:
GeoHistogram3D[input___, {options3D___}] :=
Module[{gh, polys, p2, hist, gb, poly, texture, im, boxAspectRatio},
gh = GeoHistogram[input];
polys = Cases[gh, {color_, Tooltip[x : Polygon[___], val_] /; FreeQ[x, GeoGridPosition]} :> {x, val, color}, Infinity];
p2 = polys /. {Polygon[x_List], h_, color_} :> {
color,
Polygon@ Join[
{x /. {f1_?NumericQ, f2_?NumericQ} :> {f1, f2, 0}},
{x /. {f1_?NumericQ, f2_?NumericQ} :> {f1, f2, h}},
Append[Partition[x, 2, 1], {First[x], Last[x]}] /.
{{x1_?NumericQ, y1_?NumericQ}, {x2_?NumericQ, y2_?NumericQ}} :> {{x1, y1,0}, {x2, y2, 0}, {x2, y2, h}, {x1, y1, h}}]
};
hist = Graphics3D[p2];
gb = PlotRange /. Options[gh, PlotRange];
poly = {{gb[[1, 1]], gb[[2, 1]], 0}, {gb[[1, 1]], gb[[2, 2]], 0}, {gb[[1, 2]], gb[[2, 2]], 0}, {gb[[1, 2]], gb[[2, 1]], 0}};
texture = GeoGraphics[Frame -> False, Options[gh]];
im = Graphics3D[{Lighting -> "Neutral", Texture[texture], Polygon[poly, VertexTextureCoordinates -> {{0, 0, 0}, {0, 1, 0}, {1, 1, 0}, {1, 0, 0}}]}];
boxAspectRatio = (gb[[1, 2]] - gb[[1, 1]])/(gb[[2, 2]] - gb[[2, 1]]);
Show[im, hist, options3D, BoxRatios -> {boxAspectRatio, 1, 0.1}, Boxed -> False]
]
`Input` is the sequence of arguments for `GeoHistogram`, and the last list is for adding `Graphics3D` options.
Though you can add frames and ticks to the 2D `GeoHistogram`, the frame is removed from the underlying 2D image. I've chosen `BoxRatios` values that match with the original aspect ratio of the map produced by `GeoHistogram`, but you can change these by setting `BoxRatios` as a 3D option.
Options like `ColorFunction` and `GeoProjection` are respected. Let's look at some examples:
----------
Example 1:
reg = Polygon[First@ CountryData["UnitedKingdom", "Coordinates"]];
nums = Join[
RandomVariate[MultinormalDistribution[First@GeoPosition[Entity["City", {"London", "GreaterLondon", "UnitedKingdom"}]], IdentityMatrix[2]/4], 10^3],
RandomVariate[MultinormalDistribution[First@GeoPosition[Entity["City", {"Edinburgh", "Edinburgh", "UnitedKingdom"}]], IdentityMatrix[2]/4], 10^3]];
numsGood = Pick[nums, RegionMember[reg][nums]];
gh = GeoHistogram[numsGood, {"Triangle", 50}, GeoProjection -> "Albers", GeoBackground -> "Coastlines"];
gh3D = GeoHistogram3D[numsGood, {"Triangle", 50}, GeoProjection -> "Albers", GeoBackground -> "Coastlines", {ViewPoint -> {0, 0, \[Infinity]}, ViewVertical -> {0, 0, 1}}];
Row[{Pane@ gh, Pane@ gh3D, Pane@ gh3D}]
![Example1][2]
At least by eye, the bins appear to be in the correct positions.
----------
Example 2:
Taking https://tctechcrunch2011.files.wordpress.com/2017/04/hex.gif?w=738 as inspiration, we can get data from [here][3], take the appropriate lat-long data, and insert it:
data = Import["C:\\Users\\<username>\\Downloads\\dftRoadSafety_Accidents_2016.csv", "CSV"];
all = data[[2 ;;, {5, 4}]];
GeoHistogram3D[all, 60, ColorFunction -> (Blend[{
RGBColor[0., 0.4470588235294118, 0.596078431372549],
RGBColor[0.2901960784313726, 0.9764705882352941, 0.9490196078431372],
RGBColor[0.8, 0.996078431372549, 0.807843137254902],
RGBColor[0.9176470588235294, 0.9607843137254902, 0.5254901960784314],
RGBColor[0.9764705882352941, 0.6196078431372549, 0.1803921568627451],
RGBColor[0.9176470588235294, 0.17647058823529413`, 0.27450980392156865`]}, #] &),
GeoBackground -> GeoStyling[{"Coastlines",
"Land" -> RGBColor[0.03137254901960784, 0.06666666666666667, 0.12941176470588237`],
"Ocean" -> RGBColor[0.13333333333333333`, 0.1450980392156863, 0.18823529411764706`],
"Border" -> RGBColor[0.03137254901960784, 0.06666666666666667, 0.12941176470588237`]}], {}]
![enter image description here][4]
----------
Example 3:
The first figure comes from the documentation for tree coverage in Champaign, IL. It's the last example under the Applications section for GeoHistogram. I admit that I did modify the `GeoHistogram3D` code slightly, in that I added `EdgeForm` and `FaceForm` to be the same as the `ColorFunction`:
...
{color, EdgeForm[color], FaceForm[color], Polygon@ Join[...]}
...
with the `GeoHistogram` arguments taken from the documentation, but with more bins:
GeoHistogram3D[trees, 60, ColorFunction -> (RGBColor[0, 0.8 #, 0, 0.9 #] &), {}]
![Same as first figure][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=example3.PNG&userId=829295
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=GeoHG3D.PNG&userId=829295
[3]: http://data.dft.gov.uk/road-accidents-safety-data/dftRoadSafety_Accidents_2016.zip
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=GeoHG3D2.PNG&userId=829295
Kevin Daily 2018-08-07T15:00:03Z
Collect coefficients in a polynomial defined by symbolic summation?
http://community.wolfram.com/groups/-/m/t/1398308
Hi all,
I'd like to distribute the squared sum and collect the coefficients in front of n and n^2 in the following expression:
    Sum[a[i] (n (n - 1) + 2 n), {i, 1, K}] - Sum[a[i] n, {i, 1, K}]^2
Would anyone know how to do this?
Aurelien Bibaut 2018-08-06T03:06:02Z
Wolfram Mathematica Virtual Conference 2011 notebooks missing
http://community.wolfram.com/groups/-/m/t/1398192
Hello all
Browsing through the Virtual Events Channel at the Video Gallery, I haven't been able to download any of the notebooks of the Virtual Conference 2011, i.e.:
- Integrated Control Systems Design:
http://www.wolfram.com/broadcast/video.php?c=100&v=1027
- Introduction to Functional Programming:
http://www.wolfram.com/broadcast/video.php?c=101&v=491
- How to Think Financially with Mathematica:
http://www.wolfram.com/broadcast/video.php?c=98&v=484
All the links to the notebooks are broken. Is there any problem with the links? Is this info not public?
Any help would be appreciated.
Federico Bolaños 2018-08-06T02:33:03Z
StartExternalSession with a Raspberry Pi Stretch running MMA v11.2?
http://community.wolfram.com/groups/-/m/t/1397206
Hey guys,
I wondered if anyone has had an issue with a Raspberry Pi running Stretch and MMA v11.2 in which calling
    StartExternalSession[<|"System" -> "Python", "SessionProlog" -> "import math"|>]
yields an error complaining about the invalid option "SessionProlog".
I've used this on MacOS and it works fine, but it is crashing on the Pi.
I can't seem to upgrade it to v11.3 to check if that is the issue.
Has anyone encountered this issue?
Giovani Diniz 2018-08-03T19:25:24Z
[WSC18] Computational Hairdressing
http://community.wolfram.com/groups/-/m/t/1383182
![What Paolo's hair should look like -- according to HairGenNet.][1]
Hi! My name is Jacob. My WSC18 project, as you can see, is of no practical value, so I hope it instead stands as a hilarious and intuitive icebreaker of an introduction to what one can do with machine learning, as it was for me.
## Why does Paolo have an odd gray blob on his head? ##
The question that started all this funny business: can a neural network predict what kind of hair someone has based on their facial features?
The initial thesis was that, even considering the infinity of things that make someone's hair at any moment, there must be a correlation between face and hair -- however slight -- and that we'd be able to answer the above question with enough data. A neural network is no way to support this, though, and we found it easier to pitch the idea as predictor of someone's hair based on what other people who look like them have.
So: the reason why Paolo has an odd gray blob on his head is that we're trying to peer-pressure him into styling his hair the way the wisdom of the crowd says he ought to... using a generative neural network trained on tens of thousands of faces. That kind of peer-pressure.
## Overview ##
To have a neural network to predict a hairstyle from a face (hereafter referred to as "HairPredNet"), we need input and output training data: faces and hair, respectively. The problem is that there exists no massive database of faces and their corresponding hairstyles; I had to generate my own input and output.
Hundreds of millions of photos containing both face and hair have been made accessible on the internet, and my job was to find a way to separate them. This required training a segmentation network (hereafter referred to as "HairSegNet"), which was made infinitely easier by my finding a quality database of images of hair and their corresponding segmentations under [Figaro1k][2].
In short, I was able to generate an unlimited amount of training data for HairPredNet by using HairSegNet, trained on Figaro1k, on any headshot image I could find.
## **Code** ##
----------
## HairSegNet ##
The process of acquiring raw training data was a matter of downloading Figaro1k. I cropped each image -- input photo and output hair mask -- to 512x512 to keep my data consistently scaled.
Crop[b_] := ImageCrop[ImageResize[b, {512}], {512, 512}]
I imported the neural net architecture "[Pix2pix Photo-to-Street-Map Translation][3]" uninitialized for building HairSegNet on. This was done at the suggestion of mentor Rick, who explained that it's an architecture suited for generating output images from elements extracted from input images.
HairSegNet1 =
NetTrain[pix2pixTrain, <|"Input" -> FigaroIn,
"Output" -> FigaroOut|>, MaxTrainingRounds -> 3]
HairSegNet, after being trained on all 1,050 Figaro1k images, returned useful results and could safely be described as decent, but not as being at an acceptable level for generating thousands of HairPredNet training examples. The reliability of the prediction would depend on the segmentation.
![HairSegNet trained on 1050 images][4]
Data augmentation was in order. My methods were horizontal flips, gaussian fuzz, and a combination of the two.
FigaroInFlip = (ImageReflect[#, Left]) & /@ FigaroIn;
FigaroInFuzz= (ImageEffect[#, {"GaussianNoise", 0.25}] &) /@ FigaroIn;
FigaroInFuzzFlip = (ImageEffect[#, {"GaussianNoise", 0.25}] &) /@
FigaroInFlip;
I manually added ~50 images of completely bald heads to the initial dataset of 1050. It was no problem to generate output data: ~50 completely black squares, as these images have no hair to segment. This was augmented as well.
HairSegNet2 =
NetTrain[pix2pixTrain, <|"Input" -> MassiveIn,
"Output" -> MassiveOut|>, MaxTrainingRounds -> 3]
![enter image description here][5]
Applying
    DeleteSmallComponents[Binarize[#]] &
to each output mask, I was able to clean up the segmentations.
![enter image description here][6]
## HairPredNet ##
We saw earlier that training a network to predict hairstyle would entail finding input faces and their corresponding hair. We can now take hair from any portrait with HairSegNet; we need a way to take faces. Wolfram has a built-in function called FindFaces that, applied to an image containing a face, returns the coordinates of a rectangle containing the face. Using this, I wrote a function to crop a portrait image to the bounds of that face rectangle.
FaceTake[image_] :=
Module[{croppedimage, facebox, chinfacebox, rectangleareas,
rectanglenumber},
croppedimage = Crop[image];
rectangleareas = Area /@ FindFaces[croppedimage];
rectanglenumber = Position[rectangleareas, Max[rectangleareas]];
facebox =
List @@ FindFaces[croppedimage][[rectanglenumber[[1, 1]]]];
chinfacebox =
ReplacePart[
facebox, {{2, 2} -> facebox[[2, 2]] - 25, {1, 2} ->
facebox[[1, 2]] - 25}];
Crop[ImageTrim[croppedimage, chinfacebox]]
]
![Demonstration of FaceTake[]][7]
I collected face data using the function [WebImageSearch\[\]][8] on both Google and Bing, which returned ~1000 images of dubious quality. My main source of data was [Labeled Faces in the Wild][9], a collection of 13,000 faces, most of which contain all of the subject's hair.
I called FaceTake[] and HairSegNet[] on the 13,000 images to use as input and output, respectively. I took "Pix2pix Photo-to-Street-Map Translation" again as my architecture.
NetTrain[pix2pixinitialized,<|"Input" -> lfwFaces,"Output" -> lfwHair|>,TargetDevice -> "GPU",MaxTrainingRounds -> 3]
![Hair Predictions][10]
Manipulated predictions are somewhat sharper.
![Sharper][11]
----------
## Looking forward ##
An architecture better suited to the task than Pix2pix (in all its excellence) surely would have returned sharper results for either the segmentation or the prediction.
Many thanks to Michael Kaminsky for guiding me through this. It's cool!
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at10.13.46AM.png&userId=1372129
[2]: http://projects.i-ctm.eu/it/progetto/figaro-1k
[3]: https://resources.wolframcloud.com/NeuralNetRepository/resources/Pix2pix-Photo-to-Street-Map-Translation
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.13.23PM.png&userId=1372129
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.23.32PM.png&userId=1372129
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=a.jpeg&userId=1372129
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.54.12PM.png&userId=1372129
[8]: http://reference.wolfram.com/language/ref/WebImageSearch.html
[9]: http://vis-www.cs.umass.edu/lfw/
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at5.05.31PM.png&userId=1372129
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at5.08.45PM.png&userId=1372129
Jacob Fong 2018-07-13T21:20:12Z
Run Multiple Instances of Mathematica in Batch Mode?
http://community.wolfram.com/groups/-/m/t/1398363
I am looking to do some quite intensive batch processing for a large number of stock symbols.
My plan is to use a Wolfram Language script to call multiple instances of the Mathematica program, each with different input parameters.
My questions are:
1) Would a batch script execute the Mathematica program calls sequentially, or concurrently (as I hope)?
2) Are there limitations on the number of instances that can be run concurrently? E.g., on a 20-core server, can I run 20 instances on 20 kernels concurrently?
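For reference, the in-language parallel route I have considered looks roughly like this (just a sketch; processSymbol stands for whatever per-symbol batch job would be run):

    (* launch subkernels and run one job per symbol concurrently *)
    LaunchKernels[4];
    results = ParallelMap[processSymbol, {"AAPL", "MSFT", "GOOG", "AMZN"}];
    CloseKernels[];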
There might be a better way of doing this (I have considered parallelization). Any suggestions appreciated.
Jonathan Kinlay 2018-08-06T12:36:38Z
Find multinomial maximum with NMaximize?
http://community.wolfram.com/groups/-/m/t/1392154
Hello guys. Here is the table with the results that should be obtained: https://www.dropbox.com/s/rohvusatedkfh3r/Screenshot_20180725-184452.png?dl=0
The mysterious thing is that NMaximize finds only some of the correct values. With 3000-26000 everything is OK, but with other values something strange happens. For example, if I take 3000 or more, I get the right answer:
Clear["Global`*"]
n = 100;
Y = 1000000;
q = NMaximize[{n!/((94)!*4!)*p0^(94)*ap10^4*p1*p2,
p0 + p1 + p100 + p2 + ap10 == 1 && p0 >= 0 && p1 >= 0 && p2 >= 0 &&
ap10 >= 0 &&
p100 >= 0 && (1/100)*Y*(-10*ap10 + 30*p1 + 40*p2 + 100*p100) ==
3000}, {p0, p1, p2, p100, ap10}]
{0.0272692, {p0 -> 0.94, p1 -> 0.00999995, p2 -> 0.00999991,
p100 -> 5.71366*10^-8, ap10 -> 0.0400001}}
So I took all values from -6000 to 29000 and got the results below. Most of the values are correct, but some are definitely wrong.
    {{-6000, 0.00273003019908044},
     {-5000, 4.002433792920723*^-12},
     {-4000, 9.420778019991495*^-80},
     {-3000, 0.010805254786454756},
     {-2000, 0.014835126377371699},
     {-1000, 1.1255118610736836},
     {0, 1.0083957979097389},
     {1000, 3.3664295470533197*^-22},
     {2000, 1.0105517857633675},
     {3000, 0.027269195370073815},
     {4000, 0.026831313283137287},
     {5000, 0.025659720271071166},
     {6000, 0.02396928739747252},
     {7000, 0.021955363082878255},
     {8000, 0.019861954802262567},
     {9000, 0.017956439320638243},
     {10000, 0.01623208221174882},
     {11000, 0.01467181808621124},
     {12000, 0.013260174294049413},
     {13000, 0.011983123654999555},
     {14000, 0.01082795065726872},
     {15000, 0.00978312998048758},
     {16000, 6.311707230722985*^-18},
     {17000, 0.007983743159662302},
     {18000, 0.007211133780524158},
     {19000, 0.00651261660134752},
     {20000, 0.005881151176722276},
     {21000, 0.005310359883946591},
     {22000, 0.004794466105647533},
     {23000, 0.004328238122009399},
     {24000, 0.003906938191582962},
     {25000, 0.0035262763453895838},
     {26000, 0.0031823684622467223},
     {27000, 4.580651706257595*^-15},
     {28000, 2.32433808006728*^-15},
     {29000, 1.5695861700229384*^-14}}
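The zero-like outliers look like the optimizer getting stuck rather than a wrong table. One thing to try (just a sketch, not a verified fix) is more random start points together with a higher working precision, with n and Y defined as above:

    NMaximize[{n!/((94)!*4!)*p0^(94)*ap10^4*p1*p2,
      p0 + p1 + p100 + p2 + ap10 == 1 && p0 >= 0 && p1 >= 0 && p2 >= 0 &&
       ap10 >= 0 && p100 >= 0 &&
       (1/100)*Y*(-10*ap10 + 30*p1 + 40*p2 + 100*p100) == 3000},
     {p0, p1, p2, p100, ap10},
     Method -> {"RandomSearch", "SearchPoints" -> 200},
     WorkingPrecision -> 30]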
More interestingly, using Reduce first gives different results.
Clear["Global`*"]
n = 100;
Y = 1000000;
r = Reduce[
p0 + p1 + p100 + p2 + ap10 == 1 && p0 >= 0 && p1 >= 0 && p2 >= 0 &&
ap10 >= 0 &&
p100 >= 0 && (1/100)*Y*(-10*ap10 + 30*p1 + 40*p2 + 100*p100) ==
3000, {ap10, p0, p1, p2, p100}, Reals, Backsubstitution -> True];
q = NMaximize[{n!/((94)!*4!)*p0^(94)*ap10^4*p1*p2, r}, {p0, p1, p2,
p100, ap10}]
{0.0230288, {p0 -> 0.940867, p1 -> 0.00763999, p2 -> 0.00891332,
p100 -> 0.00127333, ap10 -> 0.0413065}}
FindMaximum also doesn't work:
Clear["Global`*"]
Y = 1000000;
n = 100;
q = FindMaximum[{n!/((94)!*4!)*p0^(94)*p1*p2*p10^4,
p0 + p1 + p100 + p2 + p10 == 1 && p0 >= 0 && p1 >= 0 && p2 >= 0 &&
p100 >= 0 &&
p10 >= 0 && (1/100)*Y*(-10*p10 + 30*p1 + 40*p2 + 100*p100) ==
3000}, {{p0, 0.91}, {p1, 0.01387}, {p2, 0.016}, {p100,
0.021}, {p10, 0.35}}]
{4.61652*10^-31, {p0 -> 0.412788, p1 -> 0.0133285, p2 -> 0.0185802,
p100 -> 0.0428179, p10 -> 0.512485}}
So what is going on: is the table at Dropbox wrong, or do I need to use different code?
Multiple random searches also don't help: http://community.wolfram.com/groups/-/m/t/1164680
n = 100;
Y = 1000000;
iMin[-n!/((94)!*4!)*p0^(94)*p1*p2*p10^4,
List @@ (p0 + p1 + p2 + p10 + p100 == 1 &&
p0 >= 0 && p1 >= 0 && p2 >= 0 && p100 >= 0 &&
p10 >= 0 && (1/100)*Y*(-10*p10 + 30*p1 + 40*p2 + 100*p100) ==
3000),
Thread[{{p0, p1, p2, p100, p10}, 0, 1}], 10, 0]
{-1.57794*10^-44, {p0 -> 0.289568, p1 -> 0.0329513, p2 -> 0.0407228,
p100 -> 0.0368193, p10 -> 0.599939}}
Alex Graham 2018-07-25T17:01:39Z
Avoid issues while retrieving tweets via Twitter connection?
http://community.wolfram.com/groups/-/m/t/1350153
Friends, I have a problem with the Twitter connection. I establish a connection and suppose I give it the name "conex".
So, for example, I can retrieve the tweets of somebody named María T with the following command:
conex["TweetList", "Username" -> "María T", MaxItems -> 10]
Now I have three distinct questions:
a. I want to get only the text of the tweets by María T, something like "Elements" -> "Texts", but I have not managed to do it. How can I do it?
b. I want to get only the tweets by María T that contain a specific string. So here I want to combine the utilities of "TweetSearch" with those of "TweetList". Once again, I have not been able to do it. Is there a solution?
c. Sometimes the tweet appears truncated. Is Mathematica still operating under the assumption of 140-character tweets, or has it already adapted to the 280-character standard?
Any help appreciated
Francisco
Francisco Gutierrez 2018-06-02T03:51:15Z
Use InputField for defining a function to be applied to a list?
http://community.wolfram.com/groups/-/m/t/1397587
Suppose I have a simple list, Range[4], and I apply a function to its members, e.g. f[x_] = 1/x. With the following code I get what I expected:
In[125]:= f[x_] = 1/x;
Range[4];
Map[f, Range[4]]
Out[127]= {1, 1/2, 1/3, 1/4}
But now I want to use an InputField for defining the function to be applied to the list Range[4]. I then use the following code:
Panel[DynamicModule[{f = 1/x, f1},
Column[{InputField[Dynamic[f]], f1[x_] = Dynamic[f],
Map[f1, Range[4]]}]]]
Here I started with the function f =1/x, but I can change it in the input field.
With the above code I get a panel containing the input box, the definition of the new function f1 to be applied to the list Range[4], and the final result.
I had expected this to be {1, 1/2, 1/3, 1/4}, but I am getting {1/x, 1/x, 1/x, 1/x}.
What am I doing wrong?
Laurens Wachters 2018-08-05T07:52:25Z
Use Sum while the summand is a function?
http://community.wolfram.com/groups/-/m/t/1398028
Mathematica 11.3.0.0 Windows 10 64
Before posting this as a bug, I would appreciate it if anybody could verify the following.
The following works o.k:
Sum[Times @@ (IntegerDigits[i, 7] + 1), {i, 0, 10^6}]
However, if the summand is a function, we get some gibberish:
fu[n_] := Times @@ (IntegerDigits[n, 7] + 1);
Sum[fu[i], {i, 0, 10^6}]
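A workaround that seems to help (just a sketch: the _Integer pattern keeps fu unevaluated for the symbolic i that Sum examines first, forcing a purely numeric summation):

    (* restrict the definition to integer arguments *)
    Clear[fu];
    fu[n_Integer] := Times @@ (IntegerDigits[n, 7] + 1);
    Sum[fu[i], {i, 0, 10^6}]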
Interestingly, the following works:
    Sum[fu[i], {i, 1, 10^6}]
Daniel Huber 2018-08-05T14:29:18Z
Which version of Mathematica is available on Raspberry Pi?
http://community.wolfram.com/groups/-/m/t/1349489
Hello,
I recently added Mathematica to my Raspberry Pi 3 (originally with Raspbian Lite) and got version 11.0.1.
Any attempt to get the 11.2 version by a classical upgrade process
sudo apt-get dist-upgrade wolfram-engine
just answers that I am up to date... However, I can see a lot of posts in this group concerning 11.2!
How can I get it? I wouldn't like to reinstall Raspbian, as it is a special image.
Thank you for your help,
Yves
yves papegay 2018-05-31T13:46:34Z
Lie Groups and Lie Algebras using Wolfram Language?
http://community.wolfram.com/groups/-/m/t/1397667
Sorry if this is a repeat; I am fairly new to Mathematica and would like to use it for the above module. Could someone point me in the right direction, please? I have used Maple for the differential geometry material but am lost in Mathematica. I have mainly used Mathematica for Groebner bases, matrices, complex analysis, etc., with no problem at all.
Tonde Kush 2018-08-04T15:45:56Z
Place a ContourPlot under a Plot3D?
http://community.wolfram.com/groups/-/m/t/1396065
I would like to combine a 3-dimensional graph of a function with its 2-dimensional contour plot underneath it, in a professional way, but I have no idea how to start. I tried this:
W[s_, b_, q_,
p_] := (1/\[Pi]) Exp[-(p^2) +
I*Sqrt[2] p (b - Conjugate[s]) - (1/
2)*((Abs[s])^2 + (Abs[b])^2) - (q^2) +
Sqrt[2]*q*(b + Conjugate[s]) - (Conjugate[s]*b)]
Wpsi[\[Alpha]_, q1_, p1_, q2_, p2_] :=
Np[\[Alpha]]^2 (W[\[Alpha], \[Alpha], q1, p1]*
W[\[Alpha], \[Alpha], q2, p2] +
W[\[Alpha], -\[Alpha], q1, p1]*W[\[Alpha], -\[Alpha], q2, p2] +
W[-\[Alpha], \[Alpha], q1, p1]*W[-\[Alpha], \[Alpha], q2, p2] +
W[-\[Alpha], -\[Alpha], q1, p1]*W[-\[Alpha], -\[Alpha], q2, p2])
plot3D = Plot3D[Wpsi[1, 0, p1, 0, p2], {p2, -2, 2}, {p1, -2, 2},
PlotTheme -> "Scientific", PlotPoints -> 60, PlotRange -> All,
ColorFunction -> Hue, PlotLegends -> Automatic, Mesh -> None];
cntplot =
ContourPlot[Wpsi[1, 0, p1, 0, p2], {p2, -2, 2}, {p1, -2, 2},
PlotRange -> All, Contours -> 20, Axes -> False, PlotPoints -> 30,
PlotRangePadding -> 0, Frame -> False, ColorFunction -> Hue];
gr = Graphics3D[{Texture[cntplot], EdgeForm[],
Polygon[{{-2, -2, -0.4}, {2, -2, -0.4}, {2, 2, -0.4}, {-2,
2, -0.4}},
VertexTextureCoordinates -> {{0, 0}, {1, 0}, {1, 1}, {0, 1}}]},
Lighting -> "Neutral"];
Show[plot3D, gr, PlotRange -> All, BoxRatios -> {1, 1, .6},
FaceGrids -> {Back, Left}]
that gives:
![graph][1]
it is not good for me; I want something like this:
![needs][2]
Can I do this with Mathematica?
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=graf.PNG&userId=856431
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out.PNG&userId=856431
Ziane Mustapha 2018-08-02T15:12:40Z
Try to beat these MRB constant records!
http://community.wolfram.com/groups/-/m/t/366628
POSTED BY: Marvin Ray Burns .
**MKB constant calculations,**
![enter image description here][1] ,
**have been moved to their own discussion at**
[Calculating the digits of the MKB constant][2] .
I think the following important point got buried near the end.
When it comes to the passion that I and a few other people have for calculating many digits, and the dislike that a few more people have for it: it all tells us that the "universal human mind" is multifaceted, giving passion to person A for one task and to person B for another!
The MRB constant is defined below. See http://mathworld.wolfram.com/MRBConstant.html
$$\text{MRB}=\sum _{n=1}^{\infty } (-1)^n \left(n^{1/n}-1\right)$$
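As a quick check of the definition, the series can be evaluated to modest precision with the same NSum approach used in several of the records below (the precision settings here are only illustrative):

    NSum[(-1)^n*(n^(1/n) - 1), {n, Infinity},
     Method -> "AlternatingSigns", WorkingPrecision -> 110, AccuracyGoal -> 100]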
Here are some record computations. If you know of any others, let me know.
1. On or about Dec 31, 1998 I computed 1 digit of the (additive inverse of the) MRB constant with my TI-92's, by adding 1-sqrt(2)+3^(1/3)-4^(1/4) as far as I could and then by using the sum feature to compute $\sum _{n=1}^{1000 } (-1)^n \left(n^{1/n}\right).$ That first digit, by the way, is just 0.
2. On Jan 11, 1999 I computed 3 digits of the MRB constant with the Inverse Symbolic Calculator.
3. In Jan of 1999 I computed 4 correct digits of the MRB constant using Mathcad 3.1 on a 50 MHz 80486 IBM 486 personal computer operating on Windows 95.
4. Shortly afterwards I computed 9 correct digits of the MRB constant using Mathcad 7 professional on the Pentium II mentioned below.
5. On Jan 23, 1999 I computed 500 digits of the MRB constant with the online tool called Sigma.
6. In September of 1999, I computed the first 5,000 digits of the MRB Constant on a 350 MHz Pentium II with 64 Mb of ram using the simple PARI commands \p 5000;sumalt(n=1,((-1)^n*(n^(1/n)-1))), after allocating enough memory.
7. On June 10-11, 2003, over a period of 10 hours, on a 450 MHz P3 with an available 512 MB of RAM, I computed 6,995 accurate digits of the MRB constant.
8. Using a Sony Vaio P4 2.66 GHz laptop computer with 960 MB of available RAM, at 2:04 PM on 3/25/2004, I finished computing 8,000 digits of the MRB constant.
9. On March 01, 2006, with a 3 GHz Pentium D with 2 GB of RAM available, I computed the first 11,000 digits of the MRB Constant.
10. On Nov 24, 2006, I computed 40,000 digits of the MRB Constant in 33 hours and 26 min via my own program written in Mathematica 5.2. The computation was run on a 32-bit Windows 3 GHz Pentium D desktop computer using 3.25 GB of RAM.
11. Finishing on July 29, 2007 at 11:57 PM EST, I computed 60,000 digits of the MRB Constant, in 50.51 hours on a 2.6 GHz AMD Athlon with 64-bit Windows XP. Max memory used was 4.0 GB of RAM.
12. Finishing on Aug 3, 2007 at 12:40 AM EST, I computed 65,000 digits of the MRB Constant, in only 50.50 hours on a 2.66 GHz Core 2 Duo using 64-bit Windows XP. Max memory used was 5.0 GB of RAM.
13. Finishing on Aug 12, 2007 at 8:00 PM EST, I computed 100,000 digits of the MRB Constant. They were computed in 170 hours on a 2.66 GHz Core 2 Duo using 64-bit Windows XP. Max memory used was 11.3 GB of RAM. The median (typical) daily record of memory used was 8.5 GB of RAM.
14. Finishing on Sep 23, 2007 at 11:00 AM EST, I computed 150,000 digits of the MRB Constant. They were computed in 330 hours on a 2.66 GHz Core 2 Duo using 64-bit Windows XP. Max memory used was 22 GB of RAM. The median (typical) daily record of memory used was 17 GB of RAM.
15. Finishing on March 16, 2008 at 3:00 PM EST, I computed 200,000 digits of the MRB Constant using Mathematica 5.2. They were computed in 845 hours on a 2.66 GHz Core 2 Duo using 64-bit Windows XP. Max memory used was 47 GB of RAM. The median (typical) daily record of memory used was 28 GB of RAM.
16. Washed away by Hurricane Ike: on September 13, 2008, sometime between 2:00 PM and 8:00 PM EST, an almost complete computation of 300,000 digits of the MRB Constant was destroyed. It had been computing for a long 4,015 hours (23.899 weeks, or 1.4454*10^7 seconds) on a 2.66 GHz Core 2 Duo using 64-bit Windows XP. Max memory used was 91 GB of RAM. The Mathematica 6.0 code used follows:
Block[{$MaxExtraPrecision = 300000 + 8, a, b = -1, c = -1 - d,
d = (3 + Sqrt[8])^n, n = 131 Ceiling[300000/100], s = 0}, a[0] = 1;
d = (d + 1/d)/2; For[m = 1, m < n, a[m] = (1 + m)^(1/(1 + m)); m++];
For[k = 0, k < n, c = b - c;
b = b (k + n) (k - n)/((k + 1/2) (k + 1)); s = s + c*a[k]; k++];
N[1/2 - s/d, 300000]]
17. On September 18, 2008, a computation of 225,000 digits of the MRB Constant was started with a 2.66 GHz Core 2 Duo using 64-bit Windows XP. It was completed in 1,072 hours. Memory usage is recorded in the attachment pt 225000.xls, near the bottom of this post.
18. 250,000 digits was attempted but failed to complete due to a serious internal error which restarted the machine. The error occurred sometime on December 24, 2008, between 9:00 AM and 9:00 PM. The computation began on November 16, 2008 at 10:03 PM EST. Like the 300,000-digit computation, this one was almost complete when it failed. The max memory used was 60.5 GB.
19. On Jan 29, 2009, at 1:26:19 pm (UTC-0500) EST, I finished computing 250,000 digits of the MRB constant with a multiple-step Mathematica command running on a dedicated 64-bit XP machine using 4 GB of on-board DDR2 RAM and 36 GB virtual. The computation took only 333.102 hours. The digits are at http://marvinrayburns.com/250KMRB.txt . The computation is completely documented in the attached 250000.pd at the bottom of this post.
20. On Sun 28 Mar 2010 21:44:50 (UTC-0500) EST, I started a computation of 300,000 digits of the MRB constant using an i7 with 8.0 GB of DDR3 RAM on board, but it failed due to hardware problems.
21. I computed 299,998 digits of the MRB constant. The computation began Fri 13 Aug 2010 10:16:20 pm EDT and ended 2.23199*10^6 seconds later, on Wednesday, September 8, 2010. I used Mathematica 6.0 for Microsoft Windows (64-bit) (June 19, 2007). That is an average of 7.44 seconds per digit. I used my Dell Studio XPS 8100 i7 860 @ 2.80 GHz with 8 GB physical DDR3 RAM; Windows 7 reserved an additional 48.929 GB virtual RAM.
22. I computed exactly 300,000 digits to the right of the decimal point of the MRB constant from Sat 8 Oct 2011 23:50:40 to Sat 5 Nov 2011 19:53:42 (2.405*10^6 seconds later). This run was 0.5766 seconds per digit slower than the 299,998-digit computation, even though it used 16 GB physical DDR3 RAM on the same machine. The working precision and accuracy goal combination were maximized for exactly 300,000 digits, and the result was automatically saved as a file instead of just being displayed on the front end. Windows reserved a total of 63 GB of working memory, of which 52 GB were recorded being used. The 300,000 digits came from the Mathematica 7.0 command
Quit; DateString[]
digits = 300000; str = OpenWrite[]; SetOptions[str,
PageWidth -> 1000]; time = SessionTime[]; Write[str,
NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
WorkingPrecision -> digits + 3, AccuracyGoal -> digits,
Method -> "AlternatingSigns"]]; timeused =
SessionTime[] - time; here = Close[str]
DateString[]
23. 314159 digits of the constant took 3 tries due to hardware failure. Finishing on September 18, 2012, I computed 314159 digits, using 59 GB of RAM. The digits came from the Mathematica 8.0.4 code
DateString[]
NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
WorkingPrecision -> 314169, Method -> "AlternatingSigns"] // Timing
DateString[]
Here I have 10 digits to round off. (The command NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
WorkingPrecision -> big number, Method -> "AlternatingSigns"] tends to give about 3 digits of error at the right-hand end.)
**The following records are due to the work of Richard Crandall, found [here][3].**
24. Sam Noble of Apple computed 1,000,000 digits of the MRB constant in 18 days 9 hours 11 minutes 34.253417 seconds.
25. Finishing on Dec 11, 2012, Richard Crandall, an Apple scientist, computed 1,048,576 digits in a lightning-fast 76.4 hours. That was on a 2.93 GHz 8-core Nehalem. (This was most likely processor time, which is in accordance with the following record and the hardware used.)
26. I computed a little over 1,200,000 digits of the MRB constant in 11 days, 21 hours, 17 minutes, and 41 seconds (finishing on March 31, 2013). I used a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz.
27. On May 17, 2013 I finished a 2,000,000-or-more digit computation of the MRB constant, using only around 10 GB of RAM. It took 37 days 5 hours 6 minutes 47.1870579 seconds. I used a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz.
28. Finally, I would like to announce a new unofficial world record computation of the MRB constant, finished on Sun 21 Sep 2014 18:35:06. It took 1 month 27 days 2 hours 45 minutes 15 seconds. I computed 3,014,991 digits of the MRB constant with Mathematica 10.0, using my new version of Richard Crandall's code, below, optimized for my platform and large computations. I also used a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz with 64 GB of RAM, of which only 16 GB was used. Can you beat it (in number of digits, less memory used, or less time taken)? This confirms that my previous "2,000,000 or more digit" computation was actually accurate to 2,009,993 digits. (They were used as MRBtest2M.)
(*Fastest (at MRB's end) as of 25 Jul 2014.*)
DateString[]
prec = 3000000;
(*Number of required decimals.*)ClearSystemCache[];
T0 = SessionTime[];
expM[pre_] :=
Module[{a, d, s, k, b, c, n, end, iprec, xvals, x, y, pc, cores = 12,
tsize = 2^7, chunksize, start = 1, ll, ctab,
pr = Floor[1.005 pre]}, chunksize = cores*tsize;
n = Floor[1.32 pr];
end = Ceiling[n/chunksize];
Print["Iterations required: ", n];
Print["end ", end];
Print[end*chunksize]; d = ChebyshevT[n, 3];
{b, c, s} = {SetPrecision[-1, 1.1*n], -d, 0};
iprec = Ceiling[pr/27];
Do[xvals = Flatten[ParallelTable[Table[ll = start + j*tsize + l;
x = N[E^(Log[ll]/(ll)), iprec];
pc = iprec;
While[pc < pr, pc = Min[3 pc, pr];
x = SetPrecision[x, pc];
y = x^ll - ll;
x = x (1 - 2 y/((ll + 1) y + 2 ll ll));];(*N[Exp[Log[ll]/ll],pr]*)x, {l, 0, tsize - 1}], {j, 0, cores - 1},
Method -> "EvaluationsPerKernel" -> 4]];
ctab = ParallelTable[Table[c = b - c;
ll = start + l - 2;
b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1));
c, {l, chunksize}], Method -> "EvaluationsPerKernel" -> 2];
s += ctab.(xvals - 1);
start += chunksize;
Print["done iter ", k*chunksize, " ", SessionTime[] - T0];, {k, 0,
end - 1}];
N[-s/d, pr]];
t2 = Timing[MRBtest2 = expM[prec];]; DateString[]
Print[MRBtest2]
MRBtest2 - MRBtest2M
The value of t2 from the computation was {1.961004112059*10^6, Null}.
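A note on the While loop inside expM above: it refines x ≈ ll^(1/ll) from a low-precision seed with what works out algebraically to be Halley's method on f(x) = x^ll - ll, which converges cubically; hence pc = Min[3 pc, pr] roughly triples the correct digits per pass. Here is my own minimal Python sketch of that precision-tripling idea using the standard decimal module (the function name and guard-digit choice are mine, not the original code):

```python
from decimal import Decimal, getcontext
from math import exp, log

def self_root(n, digits):
    """Compute n^(1/n) to about `digits` decimal digits: start from a
    double-precision seed and apply Halley's method to f(x) = x^n - n,
    roughly tripling the correct digits per pass, as in the
    pc = Min[3 pc, pr] loop of the parallel code above (a sketch)."""
    nd = Decimal(n)
    pc = 16                             # seed accuracy in digits
    x = Decimal(exp(log(n) / n))        # double-precision seed
    while pc < digits:
        pc = min(3 * pc, digits)
        getcontext().prec = pc + 5      # a few guard digits
        y = x ** n - nd                 # residual of x^n = n
        x = x * (1 - 2 * y / ((nd + 1) * y + 2 * nd * nd))  # Halley step
    getcontext().prec = digits
    return +x                           # round to requested precision

# Example: 3^(1/3) to 50 digits (leading digits 1.4422495703...)
print(self_root(3, 50))
```

A quick consistency check is to raise the result back to the n-th power: it recovers n to nearly the full requested precision.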
Here are a couple of graphs of my record computations in max digits/ year:
![enter image description here][4]![enter image description here][5]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5860Capturemkb.JPG&userId=366611
[2]: http://community.wolfram.com/groups/-/m/t/1323951?p_p_auth=W3TxvEwH
[3]: http://www.marvinrayburns.com/UniversalTOC25.pdf
[4]: /c/portal/getImageAttachment?filename=7559mrbrecord1.JPG&userId=366611
[5]: /c/portal/getImageAttachment?filename=mrbrecord3.JPG&userId=366611
Marvin Ray Burns 2014-10-09T18:08:49Z

Import XLS files? (Mathematica 11.3)
http://community.wolfram.com/groups/-/m/t/1396533
I was previously able to import XLS files normally using Mathematica 11.3!
However, when I import XLS files these days, there is always an error message, and I don't know why. Is there a way to check the cause of the error?
Import["C:\\Users\\prede\\20130424.xls"]
Error message: **Import::fmterr: Cannot import data as XLS format.**
Tsai Ming-Chou 2018-08-03T03:15:18Z

Set File and Folder Layout for Workbench project?
http://community.wolfram.com/groups/-/m/t/1398134
![enter image description here][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=CausalInferencePathQ.png&userId=319259
The image shows my general and naive structure for a Workbench Project. One of the Workbench videos recommends defining the main package (CausalInference`) to simply be a sequence of Needs, which I took to mean that I can reference the sub-packages with the simple command
<<CausalInference`
in the manual testing file CausalInference.nb and the regression testing file CausalInference.mt. The CausalInference.m file is in the included image.
However, none of the functions defined in either GraphPrimitives.m or GraphSeparation.m are found when I execute the .nb or .mt file, so neither runs properly. Both files do execute properly when I Needs both sub-packages explicitly. Functionally, this should be doable.
How do I get this to work as I expect?
Lawrence Winkler 2018-08-06T01:18:53Z

Find z-score?
http://community.wolfram.com/groups/-/m/t/1397936
Hello, everyone.
I know how to calculate the probability with Mathematica:
Probability[ x >= 1.0, x ~ NormalDistribution[] ]
The answer is 0.158655.
My question is, how can I find the z-score of normal distribution when the probability is given? For example, I want to find the value of z in the below pseudo-code:
Probability[ x >= z, x ~ NormalDistribution[] ] == 0.05
Or is there any ready-made function that can compute z-score directly in Mathematica?
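In Mathematica itself, InverseCDF (or equivalently Quantile) is the ready-made inverse: InverseCDF[NormalDistribution[], 1 - 0.05] returns the z with P(x >= z) = 0.05, about 1.645. As an independent cross-check outside Mathematica, the same two calculations can be done with Python's standard library (variable names here are mine):

```python
# Cross-check using only the Python standard library
# (statistics.NormalDist, available since Python 3.8).
from statistics import NormalDist

std_normal = NormalDist(mu=0, sigma=1)

# Forward direction, matching Probability[x >= 1.0, x ~ NormalDistribution[]]:
p = 1 - std_normal.cdf(1.0)
print(p)  # ~ 0.158655

# Inverse direction: P(x >= z) = 0.05  <=>  P(x <= z) = 0.95.
z = std_normal.inv_cdf(1 - 0.05)
print(z)  # ~ 1.6449

# Sanity check: plugging z back in recovers the tail probability.
print(1 - std_normal.cdf(z))  # ~ 0.05
```

The key point in either system is that the "z-score for an upper-tail probability" is just the inverse CDF evaluated at one minus that probability.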
Thank you!
Kui Chen 2018-08-05T08:16:45Z

Extract information from transfer function matrix using parameter names?
http://community.wolfram.com/groups/-/m/t/1397626
Dear all,
As shown in the picture below I have a transfer function matrix with one input (DisplacementGround) and two outputs (DisplacementMass and VelocityMass). I can access either one of the two elements of the matrix using SystemModelExtract and referring to the elements as {1}, {1} and {1},{2}.
However, it would be very useful to be able to extract the individual transfer functions using the actual names of the input and output quantities (e.g. DisplacementGround in this example). These names come from a Modelica model imported into Mathematica.
Is there a way to achieve it?
Thank you very much in advance.
Fabian
![Description][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=MathematicaQuestion.png&userId=1355184
oquichtli 2018-08-04T14:37:18Z