Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions from all groups sorted by active

Can interpolating functions be used in NMinimize constraints?
http://community.wolfram.com/groups/-/m/t/1414104
In[1]:= int = Interpolation[Table[{i, (i - 5)^2}, {i, 10}]]
Out[1]= InterpolatingFunction[{{1, 10}}, {
5, 3, 0, {10}, {4}, 0, 0, 0, 0, Automatic, {}, {},
False}, {{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}}, {{16}, {9}, {4}, {1}, \
{0}, {1}, {4}, {9}, {16}, {25}}, {Automatic}]
In[5]:= NMinimize[{o, o == int[x]}, {x, 1, 10}]
During evaluation of In[5]:= InterpolatingFunction::dmval: Input value {0.652468} lies outside the range of data in the interpolating function. Extrapolation will be used.
During evaluation of In[5]:= NMinimize::bcons: The following constraints are not valid: {o==InterpolatingFunction[{{1,10}},{5,3,0,{10},{4},0,0,0,0,Automatic,{},{},False},{{1,2,3,4,5,6,7,8,9,10}},{{16},{9},{4},{1},{0},{1},{4},{9},{16},{25}},{Automatic}][x]}. Constraints should be equalities, inequalities, or domain specifications involving the variables.
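As the bcons message says, NMinimize requires constraints to be explicit equalities, inequalities, or domain specifications in the variables; an equation involving a black-box InterpolatingFunction does not qualify. A sketch of a workaround (assuming the goal is simply to minimize the interpolated values) is to drop the auxiliary variable and minimize the function directly:

    (* minimize the interpolating function itself; the range {x, 1, 10} also
       keeps x inside the interpolation domain, avoiding the dmval warning *)
    NMinimize[int[x], {x, 1, 10}]

For the data above this should recover the minimum at x = 5 (value 0).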
Out[5]= NMinimize[{o,
o == InterpolatingFunction[{{1, 10}}, {
5, 3, 0, {10}, {4}, 0, 0, 0, 0, Automatic, {}, {}, False}, {{1,
2, 3, 4, 5, 6, 7, 8, 9, 10}}, {{16}, {9}, {4}, {1}, {0}, {1}, {
4}, {9}, {16}, {25}}, {Automatic}][x]}, {x, 1, 10}]

Frank Kampas, 2018-08-19T13:13:06Z

How do I give Mathematica more time to compute?
http://community.wolfram.com/groups/-/m/t/1413577
I'm trying to obtain an answer from a double integral, but it appears to be timing out before I get my answer (it doesn't say that it timed out; I assume it did) and returns the problem I gave it in the form of a double integral with the same bounds. If I evaluate what Mathematica gave back, it again returns the same thing after some time. How do I work around or change this? I have attached the .nb file that I'm trying to work with.

Joshua Champion, 2018-08-19T04:48:30Z

"Private" message to Stephen Wolfram.
http://community.wolfram.com/groups/-/m/t/1413604
Do you know why you will never become anything more than an ordinary company?
**Because you have an extraordinary product but people working in the company do not know how to use it optimally.**
Give me the person directly responsible for Wolfram Challenges. I am struggling more with your mistakes than with the challenges themselves.
You have a problem with the solved counter. If someone submits "wrong" solutions to a task before submitting the correct one, your counter counts all attempts, not only the correct solution. Here is an example of two unsuccessful attempts at a solution plus one correct one.
![enter image description here][1]
You have problems with the algorithm that compares the speed scores. How do I know this? Simple: by testing.
![enter image description here][2]
And there is no doubt here. If you want, you can show your users a quicker solution than that. Putting a score of 56.09 out of 100 on a pre-calculated solution which hardly does any calculation at all... is a bit overconfident on your part.
PrimeGap[gap_ /; EvenQ[gap] && gap > 0] := {
   {3, 5}, {7, 11}, {23, 29}, {89, 97}, {139, 149}, {199, 211},
   {113, 127}, {1831, 1847}, {523, 541}, {887, 907}, {1129, 1151},
   {1669, 1693}, {2477, 2503}, {2971, 2999}, {4297, 4327},
   {5591, 5623}, {1327, 1361}, {9551, 9587}, {30593, 30631},
   {19333, 19373}, {16141, 16183}, {15683, 15727}, {81463, 81509},
   {28229, 28277}, {31907, 31957}, {19609, 19661}, {35617, 35671},
   {82073, 82129}, {44293, 44351}, {43331, 43391}, {34061, 34123},
   {89689, 89753}, {162143, 162209}, {134513, 134581},
   {173359, 173429}, {31397, 31469}, {404597, 404671},
   {212701, 212777}, {188029, 188107}, {542603, 542683},
   {265621, 265703}, {461717, 461801}, {155921, 156007},
   {544279, 544367}, {404851, 404941}, {927869, 927961},
   {1100977, 1101071}, {360653, 360749}, {604073, 604171},
   {396733, 396833}}[[gap/2]]
You have some crazy time limit of 3000 milliseconds per task, yet you definitely have time-consuming challenges. Even for "Prime Gap", my non-optimized code takes nearly 11 s = 11000 milliseconds:
PrimeGap[gap_ /; EvenQ[gap] && gap > 0] := Module[
{pgFind = False, pgCounter = 1, pgResult = {}},
While[pgFind == False,
pgCounter++;
If[(Prime[pgCounter] - Prime[pgCounter - 1]) == gap,
pgResult = {Prime[pgCounter - 1], Prime[pgCounter]};
pgFind = True
, Nothing]
];
pgResult
]
![enter image description here][3]
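As an aside, the 11-second search above can be cut down dramatically by walking consecutive primes with NextPrime instead of repeatedly indexing Prime[k] - a sketch:

    (* walk consecutive primes until the required gap first appears *)
    primeGapFast[gap_ /; EvenQ[gap] && gap > 0] :=
     Module[{p = 2, q = 3},
      While[q - p != gap, p = q; q = NextPrime[q]];
      {p, q}]

    primeGapFast[4]  (* {7, 11} *)

Like the lookup table, this returns the first pair of consecutive primes whose difference is exactly the given gap.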
There is an alternative in front of you now. **Connect me directly with the person in charge of "Wolfram Challenges"** to help him clear up the problems you have, OR **I will open a topic on Reddit and will begin to mock you** from there by solving all your unresolved problems - believe me, people will start paying attention to me once they see that no one else has solved them and that I am the only one on the Leaderboard for those problems. You choose.
P.s. Stephen Wolfram, it is time for Wolfram Research to become a Big Player. I can help you with that. Google is nothing more than a Markov chain. As I said, you do not know how to use the knowledge you have. You are constantly looking for short and quick solutions, so you lack depth. If you stop being narrow-minded you can achieve a lot.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=WolframSolvedCounter.png&userId=1404950
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=WolframSpeedScore.png&userId=1404950
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=WolframMilliseconds.png&userId=1404950

Emil Enchev, 2018-08-19T05:14:41Z

[Q] question about Compiling Substitution system
http://community.wolfram.com/groups/-/m/t/1411561
I'm trying to loop a substitution system where I want the initial condition to be user-defined, but the rules that the substitution follows are randomized on each iteration. So I thought of compiling a function where x and n are the user input, x being the initial condition and n the number of iterations required. I'm getting all sorts of errors and I would appreciate any assistance.
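For what it's worth, a likely culprit: Compile supports only a limited set of numerical operations, and SubstitutionSystem is not among them, so the compiled function keeps calling back into the ordinary evaluator and produces errors. A sketch of an uncompiled version (my reading of the code below; note I write RandomInteger[1, ...] for the random block, and whether SubstitutionSystem accepts these rank-3 array rules should be checked):

    (* one random substitution step per iteration, repeated n times; no Compile *)
    randomSA[x_, n_] :=
     Nest[SubstitutionSystem[{1 -> RandomInteger[1, {3, 3, 3}],
         0 -> ConstantArray[0, {3, 3, 3}]}, #] &, x, n]

Because the rule is rebuilt inside the function body, a fresh random replacement block is generated on every iteration, as in the original loop.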
RandomSA =
Compile[{x, {n, _Integer}},
Module[{f, numb = 0},
f = SubstitutionSystem[{1 ->
Table[RandomInteger[{1, 0}], {3}, {3}, {3}],
0 -> Table[0, {3}, {3}, {3}]}, x, 1];
While[numb < n,
f = SubstitutionSystem[{1 ->
Table[RandomInteger[{1, 0}], {3}, {3}, {3}],
0 -> Table[0, {3}, {3}, {3}]}, f, 1]; numb++]; f]]

Omar Mahmoud, 2018-08-17T20:18:43Z

Drop brackets in the list.
http://community.wolfram.com/groups/-/m/t/1411039
Oh, sorry, I didn't realize that this thread had been created. But I still have the problem. I want to generate iterator parameters for the Sum function, using Table[{z[x], {list199[[x]]}}, {x, 1}]:
d = Total[
Sum[n!/z[100]!*p[100]^z[100]*Product[p[x]^z[x]/z[x]!, {x, 0, vv}],
{z[0], {list0}},
Sequence @@
Table[{z[x], {list199[[x]]}}, {x, 1}], {z[2], {list199[[2]]}}, {z[
3], {list199[[3]]}}, {z[4], {list199[[4]]}}, {z[100], {list100}}]]
I get these warnings:
Sum::vloc: The variable Sequence@@Table[{z[x],{list199[[x]]}},{x,1}] cannot be localized so that it can be assigned to numerical values.
When I use {z[1], {list199[[1]]}} explicitly, everything works fine.
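This happens because Sum has the HoldAll attribute: its arguments are not evaluated before Sum parses them, so the literal expression Sequence @@ Table[...] is treated as one (invalid) iterator rather than being spliced in. A sketch of a workaround (shown on a hypothetical small example) is to build the complete argument list first and then use Apply, so the iterators exist before Sum ever sees them:

    (* hypothetical miniature: splice pre-evaluated iterators into Sum via Apply *)
    iterators = Table[{z[x], {1, 2, 3}}, {x, 1, 2}];
    Sum @@ Join[{z[1] + z[2]}, iterators]

The same pattern should work with the list199-based iterators above.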
The {z[1], {list199[[1]]}} gives
{z[1], {{{0}, {1}, {0}, {0}, {0}, {2}, {0}, {0}, {1}, {1}, {1}, {0}, \
{0}, {0}, {3}, {0}, {0}, {2}, {2}, {2}, {1}, {1}, {1}, {0}, {0}, {0}, \
{1}, {1}, {1}, {0}, {4}, {0}, {3}, {3}, {3}, {1}, {1}, {0}, {2}, {2}, \
{2}, {2}, {2}, {2}, {1}, {1}, {1}, {1}}}}
The Sequence @@ Table[{z[x], {list199[[x]]}}, {x, 1}] gives
Sequence[{z[
1], {{{0}, {1}, {0}, {0}, {0}, {2}, {0}, {0}, {1}, {1}, {1}, {0}, \
{0}, {0}, {3}, {0}, {0}, {2}, {2}, {2}, {1}, {1}, {1}, {0}, {0}, {0}, \
{1}, {1}, {1}, {0}, {4}, {0}, {3}, {3}, {3}, {1}, {1}, {0}, {2}, {2}, \
{2}, {2}, {2}, {2}, {1}, {1}, {1}, {1}}}}]
So it looks good, but it doesn't work. Could you help me find the problem?

Alex Graham, 2018-08-17T09:35:57Z

Extreme points
http://community.wolfram.com/groups/-/m/t/1411030
Hi, I need to find the extreme points of the convex hull of a given set of points (in my case, a set of more than 1,000 2-dimensional vectors). Say I want to plug in as input a list of 2-dimensional vectors such as {{0,0},{0,3},{3,0},{1,1}}. My intuition tells me there should exist a built-in Mathematica function that spits out the set of extreme points (in the example: {{0,0},{0,3},{3,0}}). But I cannot find such a function. I would be very grateful for any help or ideas. Thanks a lot, Thomas

Thomas Troeger, 2018-08-17T09:34:04Z

Can examples for training be shown in the Neural Network repository?
http://community.wolfram.com/groups/-/m/t/1413864
Often I want to train the neural networks in the Neural Network Repository, but when I download the ConstructionNotebook, it almost always just says "Training: performed separately", which is useless to me. I understand that the actual training (typically on huge datasets) is not trivial, but I would really love an example of the data I need to train a network and any extra parameters I need to pass to NetTrain. The network I'm looking at at the moment is the Very Deep Net for Super-Resolution.

Carl Lange, 2018-08-19T10:37:51Z

Use float limits with Integrate?
http://community.wolfram.com/groups/-/m/t/1408375
I am a little confused about how Mathematica is integrating with Integrate. I am trying to compute an integral of a product of special functions (modified Bessel functions) with symbolic limits. After working for a minute, the computation stops with no success. However, if I carry out the same integral with floating point limits, it spits out an answer. What's going on here? When I give Integrate floating point limits, is it reverting to NIntegrate?
Thanks,
Jeff
Edit: Here's an example. See how it fails even with rational limits?
In[1]:= (*Doesn't evaluate*)
Integrate[BesselK[1, k r] BesselI[1, k r], {r, 1/2, 3/2}]
Out[1]//InputForm= Integrate[BesselI[1, k*r]*BesselK[1, k*r], {r, 1/2, 3/2}]
In[2]:= (*Evaluates with float limits*)
Integrate[BesselK[1, k r] BesselI[1, k r], {r, .5, 1.5}]
Out[2]= -0.0705237 MeijerG[{{0.5, 0.5}, {}}, {{0., 1.}, {-1., -0.5}}, 0.5 k, 0.5] +
0.211571 MeijerG[{{0.5, 0.5}, {}}, {{0., 1.}, {-1., -0.5}}, 1.5 k, 0.5]

Jeffram Olander, 2018-08-16T15:49:38Z

Issues with InverseFourierSequenceTransform?
http://community.wolfram.com/groups/-/m/t/1402695
Yes:
In[331]:= InverseFourierSequenceTransform[1, x, n,
FourierParameters -> {a, 1}]
Out[331]= (2 \[Pi])^((1 - a)/2) DiscreteDelta[n]
No:
In[329]:= InverseFourierSequenceTransform[1, x, n,
FourierParameters -> {a, 2 Pi}]
Out[329]= 0

Joe Donaldson, 2018-08-12T19:39:52Z

Error bar Log Plot
http://community.wolfram.com/groups/-/m/t/1412825
Can anyone tell me how to draw an error bar plot on a logarithmic scale?

Pragyan Phukan, 2018-08-18T18:12:44Z

Use knot theory package?
http://community.wolfram.com/groups/-/m/t/1384652
Dear All,
How can I write a Mathematica file which uses the Mathematica package "KnotTheory`" and creates a list of as many different non-alternating knots and links as possible (preferably minimal knot and link projections) with at least 12 crossings (better: 16 or 18 crossings, or even more) which satisfy the following: each component of a link has as many positive crossings as negative crossings.
Best wishes,
Stephan Rosebrock

stephan.rosebrock, 2018-07-16T11:57:29Z

Use external APIs and data wrangling?
http://community.wolfram.com/groups/-/m/t/1404817
Hi, I am considering using the Wolfram Cloud to host a project that makes use of API calls to a third party (I have never done production deployments in the Wolfram Cloud before). The problem is that the API responses (in the default form provided by the third party) contain metadata, and even the key-value pairs included in the response need to be cleaned up; the only thing that is relevant to me is the last value, which in the example below is the number 1.9082*^7. Is there any way to make the call so that just that value is extracted (or at least so that the list is put into a usable key-value dataset format)? If not, what would be the most efficient way to clean up the output and simply assign that value to a variable? The code will be making many simultaneous API calls, and I'd really prefer to avoid performance issues and not waste too much computing power wrangling data. Thanks!
{meta->{request->{granularity->Daily,start_date->2018-01-27,end_date->2018-01-31,limit->Null },status->Success,last_updated->2018-07-31},value->{{date->2018-01-27,value->1.48229*^7},{date->2018-01-28,value->1.42697*^7},{date->2018-01-29,value->1.67565*^7},{date->2018-01-30,value->1.91857*^7},{**date->2018-01-31,value->1.9082*^7**}}}

George W, 2018-08-14T05:33:17Z

Solve equation of motion with Dirac-fermions?
http://community.wolfram.com/groups/-/m/t/1400212
Dear Wolfram team:
I am a beginner of Mathematica.
My problem is that I want to solve a system of n first-order equations of motion. These equations of motion contain creation and annihilation operators of Dirac fermions. **I don't know, and cannot find out, how to describe the creation and annihilation operators of Dirac fermions in Mathematica.** The equations of motion have the form:
$$\dot{c}_i^\dagger [t]=f*c_i^\dagger[t]+g[t]*c_{i+1}[t]-g[t]*c_{i+1}^\dagger[t]+h[t]*c_{i-1}[t]-h[t]*c_{i+1}^\dagger[t]\\
\dot{c}_i [t]=f*c_i[t]+g[t]*c_{i+1}^\dagger[t]-g[t]*c_{i+1}[t]-h[t]*c_{i-1}^\dagger[t]+h[t]*c_{i+1}[t],$$
where $c_i^\dagger, c_i$ are creation and annihilation operators and $f, g, h$ are functions.
Then I want to use DSolve or NDSolve to solve the equations of motion.
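One pragmatic sketch (and an assumption about what is wanted): because the equations of motion are linear, the operators can be represented as independent indexed functions, say cc[i][t] for $c_i$ and cd[i][t] for $c_i^\dagger$, and solved as ordinary ODEs with NDSolve - this ignores operator ordering and anticommutation, which do not enter linear equations of this form. The f, g, h below are placeholders, and periodic boundary conditions are an assumed convention:

    nSites = 4;
    f = 1.0; g[t_] := Sin[t]; h[t_] := Cos[t];  (* placeholder functions *)
    idx[i_] := Mod[i - 1, nSites] + 1;          (* assumed periodic boundaries *)
    eqs = Flatten@Table[{
        cd[i]'[t] == f cd[i][t] + g[t] cc[idx[i + 1]][t] - g[t] cd[idx[i + 1]][t] +
          h[t] cc[idx[i - 1]][t] - h[t] cd[idx[i + 1]][t],
        cc[i]'[t] == f cc[i][t] + g[t] cd[idx[i + 1]][t] - g[t] cc[idx[i + 1]][t] -
          h[t] cd[idx[i - 1]][t] + h[t] cc[idx[i + 1]][t]}, {i, nSites}];
    ics = Flatten@Table[{cd[i][0] == 1, cc[i][0] == 0}, {i, nSites}];
    sol = NDSolve[Join[eqs, ics],
       Flatten@Table[{cd[i], cc[i]}, {i, nSites}], {t, 0, 10}];

The term structure mirrors the equations above, including the $c_{i+1}^\dagger$ in the $h$ terms.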
Thanks for your help.

Constantin Harder, 2018-08-09T10:05:59Z

Dynamically create a list of anonymous functions?
http://community.wolfram.com/groups/-/m/t/1403526
As the title indicates: if you know how to create a list of anonymous functions in a program, please tell me.

satoshi nakagawa, 2018-08-13T21:43:23Z

[WSC18] Voice sentiment classification using neural networks
http://community.wolfram.com/groups/-/m/t/1383246
## Introduction ##
Hello Wolfram community! My name is Ryan Heo and I am a student at the Wolfram high school summer camp, where over the past two weeks I was able to work on and complete a project called "Voice Sentiment Classification". The goal of this project was to take an input audio speech file and classify it into one of 8 different categories of emotion using machine learning. Down below are the steps and the procedure I followed in completing my project.
##Dataset##
For this project I used the Ryerson Audio-Visual Database of Emotional Speech, which consists of speech recordings from voice actors in varying tones and emotions. Down below is the importation of the speech audio files that I downloaded:
folders = Select[FileNames["*", NotebookDirectory[]],
DirectoryQ[#] && StringContainsQ[#, "Actor"] &];
fileNames = FileNames["*.wav", #] & /@ folders // Flatten;
audio = Import /@ fileNames;
## Encoding the data ##
After I imported all the data files, I extracted the number section of each file name that corresponds to the different emotion categories.
I then separated these numbers into eight sections which corresponded to the emotional categories of the files.
Additionally, I threaded these sections to their corresponding emotions, and after flattening the list and randomly sampling the data I fed the data into the net encoder. I converted the data into a mel-frequency cepstrum, which is based on the log power spectrum and does a better job of modeling the human auditory system than a normal cepstrum. In the net encoder I set a high sampling rate in order to create more data points, and a high number of filters in order to extract more features.
enc = NetEncoder[{"AudioMelSpectrogram", "WindowSize" -> 4096,
"Offset" -> 1024, "SampleRate" -> 44100, "MinimumFrequency" -> 1,
"MaximumFrequency" -> 22050, "NumberOfFilters" -> 128}] ;
After encoding the data I separated it into sections of length 41 and reshaped the array to be input into the neural net for training.
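That reshaping step is not shown here, but it can be sketched from the Transpose[{Partition[enc[aud], 41]}, {2, 1, 3, 4}] expression that appears in the deployment code later in the post (the variable names here are my own):

    (* split each {numFrames, 128} spectrogram into a batch of samples of shape {1, 41, 128} *)
    toSamples[a_Audio] := Transpose[{Partition[enc[a], 41]}, {2, 1, 3, 4}];
    samples = toSamples /@ audio;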
Down below is a representation of the mel spectrogram as a matrix plot:
![enter image description here][1]
##Neural Networks ##
I used a combination of a convolutional neural network and a recurrent network for classification.
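The chain below uses a helper conv[n] whose definition is not included in the post; a plausible definition (my assumption, chosen so that the 3x3 convolutions preserve the spatial size of the {1, 41, 128} input) is:

    (* assumed helper: 3x3 convolution with padding 1, preserving spatial dimensions *)
    conv[n_] := ConvolutionLayer[n, {3, 3}, "PaddingSize" -> 1]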
Down below is the convolutional neural network architecture that I used
convNet = NetChain[
{
conv[32],
conv[32],
PoolingLayer[{3, 3}, "Stride" -> 3, "Function" -> Max],
BatchNormalizationLayer[],
Ramp,
conv[64],
conv[64],
PoolingLayer[{3, 3}, "Stride" -> 3, "Function" -> Max],
BatchNormalizationLayer[],
Ramp,
conv[128],
conv[128],
PoolingLayer[{3, 3}, "Stride" -> 3, "Function" -> Max],
BatchNormalizationLayer[],
Ramp,
conv[256],
conv[256],
PoolingLayer[{3, 3}, "Stride" -> 3, "Function" -> Max],
BatchNormalizationLayer[],
Ramp,
LinearLayer[1024],
Ramp,
DropoutLayer[0.5],
LinearLayer[8],
SoftmaxLayer[]
},
"Input" -> {1, 41, 128},
"Output" -> NetDecoder[{"Class", newClasses}]
]
This is the architecture for the recurrent network with LSTM layers
recurrentNet = NetChain[{
LongShortTermMemoryLayer[128, "Dropout" -> 0.3],
LongShortTermMemoryLayer[128, "Dropout" -> 0.3],
SequenceLastLayer[],
LinearLayer[Length@newClasses],
SoftmaxLayer[]
},
"Input" -> {"Varying", 8},
"Output" -> NetDecoder[{"Class", newClasses}]
]
I then combined these two nets
netCombined = NetChain[{
NetMapOperator[cnnTrainedNet],
lstmTrained
}]
I took the section of the dataset files I had reserved for testing and validation and fed these files into the combined neural net, achieving an accuracy of 85 percent. Down below is a good representation of how accurate my net was in classifying these files into their respective emotional categories.
For example, in the neutral category, out of the 38 samples of the randomly distributed data the net correctly matched the files to neutral 28 times. As shown below, the net seems to confuse the neutral emotion with the sadness emotion, guessing sad 4 times out of the 38 files.
![enter image description here][2]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Spectrogram.jpg&userId=1372518
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ConfusionMatrixPlot.jpg&userId=1372518
## Deploying the Microsite ##
MicroFunc[aud_Audio] := Module[{net, enc},
  net = Import[
    CloudObject[
     "https://www.wolframcloud.com/objects/ryanheo2001/CombinedNet.wlNet"],
    "WLNET"];
  enc = NetEncoder[{"AudioMelSpectrogram", "WindowSize" -> 4096,
     "Offset" -> 1024, "SampleRate" -> 44100, "MinimumFrequency" -> 1,
     "MaximumFrequency" -> 22050, "NumberOfFilters" -> 128}];
  net[Transpose[{Partition[enc[aud], 41]}, {2, 1, 3, 4}]]
  ]
CloudDeploy[
FormPage[{"Audio" -> "CachedFile" -> "", "URL" -> "URL" -> ""}, Which[
#Audio =!= "", MicroFunc[Audio[#Audio]],
#URL =!= "", MicroFunc[Import[#URL, "Audio"]],
True, ""] &, PageTheme -> "Red",
AppearanceRules -> <|"Title" -> "Voice Sentiment Classifier"|>],
"VoiceSentiment", Permissions -> "Public"]
The final microsite can be found here at: https://www.wolframcloud.com/objects/ryanheo2001/VoiceSentiment
Lastly, I want to thank my mentor Michael and all the instructors and staff at the Wolfram summer camp. They have been very helpful and I have learned valuable lessons from them these past two weeks.

Ryan Heo, 2018-07-13T20:57:43Z

Genoa bridge collapse - how to prevent this from happening again?
http://community.wolfram.com/groups/-/m/t/1407711
Introduction
----------
On Tuesday, 14 August 2018, a large motorway bridge, the Ponte Morandi, collapsed in Genoa, Italy, during a violent storm.
GeoGraphics[GeoMarker@Entity["City", {"Genoa", "Liguria", "Italy"}]["Coordinates"], GeoRange -> "Country"]
![enter image description here][1]
At least 39 people died. The disaster was extensively covered in the international press, see [here][2] and [here][3]. In the latter article by the BBC, a graph indicates that Italy's infrastructure spending on roads decreased from about 14 billion Euros in 2007 to slightly above 5 billion Euros in 2015. Is there a lack of investment in the road infrastructure? What is the state of our national bridges? Can a computational approach help citizens to better understand the situation and help them to make political decisions?
In this post I will explore some very simple representations to visualise some datasets; I will use the example of Germany for the most part. Many representations are inspired by an article in "[Die Welt][4]". At the bottom of the article they also offer a link to a [google spreadsheet with data][5] that we will use later. We will start with the data from the "[Bundesanstalt fuer Strassenwesen (BASt)][6]", which is available [here][7].
Data import and preparation
----------
The data files we will look at contain the locations of all bridges in Germany with a quality grade and some additional information. We will start by importing the data
data = Import["/Users/thiel/Desktop/Zustandsnoten-excel.xlsx"];
The content of that table looks like so:
![enter image description here][8]
It has 51592 rows. The important columns for us are the last three. The last two contain the location data of the bridge (in the UTM32N system). The third column from the end contains a grade indicating the quality of the bridge. We will store the quality rules in the variable states:
states = {{1., 1.4, "very good"}, {1.5, 1.9, "good"}, {2.0, 2.4, "safe & sound"}, {2.5, 2.9, "sufficient"}, {3.0, 3.4, "poor"}, {3.5, 4.0, "very poor"}};
The first two elements of every entry indicate the range of values in which a certain descriptor (third entry) is achieved. The translations to English are a bit subjective. The last two categories (poor and very poor) are considered to be problematic.
We start with a graphic of the locations of all bridges in Germany - in fact we have to filter out some for which the location data is not available.
styling = {GeoBackground -> GeoStyling["StreetMapNoLabels", GeoStylingImageFunction -> (ImageAdjust@ColorNegate@ColorConvert[#1, "Grayscale"] &)], GeoScaleBar -> Placed[{"Metric", "Imperial"}, {Right, Bottom}], GeoRangePadding -> Full, ImageSize -> Large};
GeoGraphics[{GeoStyling[RGBColor[1, 0.85, 0], Opacity[1]], GeoDisk[#, Quantity[3, "Kilometers"]]} &
/@ (GeoPosition[GeoGridPosition[#, GeoProjectionData["UTMZone32"]]] & /@ (Cases[ToExpression
/@ data[[1, 2 ;;, -2 ;;]], {_Real, _Real}])), styling]
![enter image description here][9]
Note that we convert the UTM32 data to standard GeoPosition entries within the command. Also, it might be quite unexpected that I use GeoGraphics with GeoDisk to visualise the data - GeoListPlot appears to be the more natural choice. Using the latter, I had problems plotting all points and properly sizing the dots, so I went for GeoGraphics and GeoDisk.
Cleaning the data some more
----------
Next we want to visualise the positions of the bridges that are in a poor or very poor state. For that we define a little helper function:
fquality = (Which @@ Flatten[{#[[1]] <= dummy <= #[[2]], #[[3]]} & /@ states] /. dummy -> #) &
Not elegant, but it does the job. With that we adjust the dataset
cleanDataset = {GeoPosition[GeoGridPosition[#[[2 ;; 3]],
GeoProjectionData["UTMZone32"]]], #[[1]], fquality[#[[1]]]} & /@
Cases[ToExpression /@ data[[1, 2 ;; ;; 1, -3 ;;]], {_, _Real, _Real}];
The entries look like this now:
cleanDataset[[1]]
![enter image description here][10]
We also want to compute in which province (Bundesland) the bridge is located. So first we get the list of provinces:
provinces = Entity["Country", "Germany"][EntityProperty["Country", "AdministrativeDivisions"]];
The idea was to use GeoWithinQ or so to compute this. This for example
whereQ = Function[pos,
   SelectFirst[provinces, GeoWithinQ[#, pos] &]];
does the trick:
whereQ[cleanDataset[[3, 1]]]
(*Entity["AdministrativeDivision", {"SchleswigHolstein", "Germany"}]*)
but is far too slow. So we will use the polygons of the provinces:
polygons = (#["Polygon"] /. GeoPosition -> Identity) & /@ provinces;
It turns out that this is not sufficient. Some provinces "have a hole", i.e. a city inside that is not part of the province but rather an administrative unit in its own right. This leads to two provinces having the head FilledCurve rather than Polygon. We can fix this like so:
polygonsfixed = If[Head[#] === FilledCurve, Polygon[#[[1, 1, 1, 1]] /. Line -> Polygon], #] & /@ polygons;
This basically ignores the hole. What we will do later is make sure that we first check whether a bridge is in the city (the hole inside) - if not we use the larger province surrounding it. The next command takes a while to run:
Monitor[bridgeprovince = (provinces[[#]] & /@
Flatten /@ Table[Position[RegionMember[#, cleanDataset[[k, 1, 1]]] & /@ polygonsfixed,
True], {k, 1, Length[cleanDataset]}]) /. {} -> {Missing}, k]
It turns out that some bridges are not properly localised. The following code fixes that (it relies on the fact that the list is ordered by province anyway!).
bridgeprovinceclean = bridgeprovince[[All, 1]] //. {a___, x_, Missing, b___} -> {a, x, x, b};
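The repeated replacement (//.) keeps copying the preceding province over each Missing entry until none are left - a hypothetical miniature:

    {e1, Missing, Missing, e2, Missing} //. {a___, x_, Missing, b___} -> {a, x, x, b}
    (* gives {e1, e1, e1, e2, e2} *)

which is why the by-province ordering of the list matters.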
Let's build one list of lists containing all that data:
totalData = Flatten /@ Transpose[{cleanDataset, bridgeprovinceclean /. Missing ->
Entity["AdministrativeDivision", {"SchleswigHolstein", "Germany"}]}];
The data looks like this now:
totalData[[1 ;; 5]] // TableForm
![enter image description here][11]
Some statistics and visualisations of the data
----------
We can now count how many bridges in Germany are in a certain state:
tallyData = Tally[totalData[[All, -2]]][[Ordering[Flatten[Position[states, #] & /@ Tally[totalData[[All, -2]]][[All, 1]], 1][[All, 1]]]]]
which gives
![enter image description here][12]
Let's make a bar chart of that:
BarChart[tallyData[[All, 2]], ChartLabels -> Evaluate[Rotate[#, Pi/2] & /@ tallyData[[All, 1]]],
LabelStyle -> Directive[Bold, 15], PlotTheme -> "Marketing"]
![enter image description here][13]
Most bridges are in the "safe & sound" state; a non-negligible number is in the states "poor" or "very poor". Let's first select the bridges in the last two categories:
badBridges = Select[totalData, (#[[3]] == "poor" || #[[3]] == "very poor") &];
There are 2413 bridges in that category, and here are their locations:
GeoGraphics[{GeoStyling[RGBColor[1, 0.0, 0], Opacity[1]], GeoDisk[#, Quantity[3, "Kilometers"]]} & /@ badBridges[[All, 1]], styling]
![enter image description here][14]
So, if you live in Germany, there is probably one of those bad bridges close to you. A GeoSmoothHistogram helps us to visualise the distribution of those bridges:
GeoSmoothHistogram[badBridges[[All, 1]], GeoRange -> Entity["Country", "Germany"],
ColorFunction -> "TemperatureMap", styling]
![enter image description here][15]
It is obvious that there are some areas where the problems are more acute than in other regions. We can represent good vs not-so-good bridges in Germany like so:
nonbadBridges = Select[totalData, ! (#[[3]] == "poor" || #[[3]] == "very poor") &];
GeoGraphics[Join[{GeoStyling[RGBColor[0, 1., 0], Opacity[1]], GeoDisk[#, Quantity[3, "Kilometers"]]} & /@ nonbadBridges[[All, 1]], {GeoStyling[RGBColor[1, 0.0, 0], Opacity[1]], GeoDisk[#, Quantity[3, "Kilometers"]]} & /@ badBridges[[All, 1]]], styling]
![enter image description here][16]
Satellite images
----------
The Wolfram Language allows us to take a deeper look at the individual bridges by using satellite data. Let's first get some images of bridges in a poor state:
GeoImage[#, GeoRange -> Quantity[50, "Meters"]] & /@ RandomSample[badBridges][[1 ;; 25, 1]]
![enter image description here][17]
These images have an excellent resolution and allow us to study the bridges in greater detail:
![enter image description here][18]
Likewise we can look at the good bridges:
goodBridges = Select[totalData, (#[[3]] == "good" || #[[3]] == "very good") &];
GeoImage[#, GeoRange -> Quantity[50, "Meters"]] & /@ RandomSample[goodBridges][[1 ;; 25, 1]]
![enter image description here][19]
Again the resolution is very good for individual images:
![enter image description here][20]
I was first thinking about developing a machine learning approach to try and estimate the state of a bridge from images, but that seems impossible at this level.
How well are different administrative divisions addressing infrastructure?
----------
Next, we look for regional variations. We gather the data by province:
byprovince = GatherBy[totalData, Last];
and plot the ratio of poor bridges to all bridges:
GeoRegionValuePlot[Rule @@@ SortBy[Table[{byprovince[[k, 1, -1]],
N@Length[Select[byprovince[[k, All, 3]], (# == "poor" || # == "very poor") &]]/
Length[byprovince[[k]]]}, {k, 1, Length[byprovince]}], Last], GeoRange -> Entity["Country", "Germany"], styling, ColorFunction -> "TemperatureMap"]
![enter image description here][21]
Relationship between age and state of the bridge
----------
"Die Welt" offers consolidated data in an [additional link][22] (which also contains the name of the province, so we could have saved the effort above). We import the data
weltData = Import["/Users/thiel/Desktop/DIE WELT_ Zustand der Fernstraßenbrücken - Daten.tsv"];
It also contains the grades of the bridges for several consecutive test periods and, for some bridges (in two of the provinces), the year they were built. Let's clean that
weltDatawDates = Select[weltData[[2 ;;]], #[[-1]] =!= "" &]
and collect the built year vs rating:
builtdatevsmark = {#[[1]], ToExpression[StringReplace[ToString[#[[2]]], "," -> "."]]} & /@ weltDatawDates[[All, {-1, 4}]]
A simple ListPlot of that looks rather boring:
ListPlot[builtdatevsmark]
![enter image description here][23]
and is difficult to interpret. But with a little bit of work we can make it much more useful
BubbleChart[Flatten /@ Tally[builtdatevsmark], PlotRange -> {{1900, 2020}, All},
ColorFunction -> Function[{x, y, r}, Lighter[Green, r/41.]], AspectRatio -> 0.3, BubbleSizes -> {0.01, 0.05}, Background -> Black,LabelStyle -> Directive[Bold, 15, White], Epilog -> {Red, Line[{{1890, 2.95}, {2020, 2.95}}], Blue, Line[{{1960, 0.8}, {1960, 3.8}}], Line[{{1980, 0.8}, {1980, 3.8}}],
Green, Line[{{1890, 1.95}, {2020, 1.95}}]}]
![enter image description here][24]
The x-axis shows the year the bridge was built; the y-axis the quality grade of the bridge - smaller numbers are better marks. The size of the bubbles roughly indicates the number of bridges at that coordinate. The red line indicates the quality threshold: everything above it is rather problematic. The green line separates the bridges in excellent state (below it) from the ok-ish bridges (above it). The blue lines mark the period 1960-1980, which has a high proportion of bridges with issues. Bridges built after 1980 appear to be in a much better state in general, and bridges built after 2000 are usually in quite good shape. This broadly suggests that bridges up to 20 years old are usually quite good, bridges between 20 and 40 years old slowly get worse, and bridges 40-60 years old start needing more maintenance. Another historically interesting observation: the chart suggests that many bridges were built during the Third Reich, from about 1935 to 1941. In the 10 years after the war notably fewer bridges seem to have been built.
To help illustrate that point one could also plot the regression line (orange below). We will use a linear and a quadratic model:
nlm = LinearModelFit[builtdatevsmark, {1, x}, x]
nlm2 = LinearModelFit[builtdatevsmark, {1, x, x^2}, x]
and then plot it:
Show[BubbleChart[Flatten /@ Tally[builtdatevsmark],
PlotRange -> {{1900, 2020}, All},
ColorFunction -> Function[{x, y, r}, Lighter[Green, r/41.]],
AspectRatio -> 0.3, BubbleSizes -> {0.01, 0.05},
Background -> Black, LabelStyle -> Directive[Bold, 15, White],
Epilog -> {Red, Line[{{1890, 2.95}, {2020, 2.95}}], Blue,
Line[{{1960, 0.8}, {1960, 3.8}}],
Line[{{1980, 0.8}, {1980, 3.8}}], Green,
Line[{{1890, 1.95}, {2020, 1.95}}]}],
Plot[nlm[x], {x, 1900, 2020}, PlotStyle -> Orange]]
![enter image description here][25]
or for the quadratic model (same code as above but nlm2 instead of nlm):
![enter image description here][26]
Fitting a 5th order polynomial gives this:
![enter image description here][27]
We can also plot bands for the single-prediction error:
nlm2["SinglePredictionBands"]
For nlm2 that looks like this:
Show[BubbleChart[Flatten /@ Tally[builtdatevsmark],
PlotRange -> {{1900, 2020}, All},
ColorFunction -> Function[{x, y, r}, Lighter[Green, r/41.]],
AspectRatio -> 0.3, BubbleSizes -> {0.01, 0.05},
Background -> Black, LabelStyle -> Directive[Bold, 15, White],
Epilog -> {Red, Line[{{1890, 2.95}, {2020, 2.95}}], Blue,
Line[{{1960, 0.8}, {1960, 3.8}}],
Line[{{1980, 0.8}, {1980, 3.8}}], Green,
Line[{{1890, 1.95}, {2020, 1.95}}]}],
Plot[{-129.53690036856392` + 0.14262757468838871` x -
0.00003842565681513884` x^2 -
1.9603506747760122` \[Sqrt](99.64290933950231` -
0.20387038807958635` x + 0.00015683763427764343` x^2 -
5.3645803097817624`*^-8 x^3 +
6.883592653276127`*^-12 x^4), -129.53690036856392` +
0.14262757468838871` x - 0.00003842565681513884` x^2 +
1.9603506747760122` \[Sqrt](99.64290933950231` -
0.20387038807958635` x + 0.00015683763427764343` x^2 -
5.3645803097817624`*^-8 x^3 + 6.883592653276127`*^-12 x^4),
nlm2[x]}, {x, 1900, 2020}, PlotStyle -> {Yellow, Yellow, Orange},
Filling -> {{1 -> {3}, 2 -> {3}}}]]
![enter image description here][28]
Infrastructure spending
----------
The OECD offers great data on the infrastructure spending of different countries [here][29]. We import that data:
spending = Import["/Users/thiel/Desktop/DP_LIVE_15082018232318196.csv"];
and select the infrastructure spending for roads for different countries over the years:
germanySpending = Select[spending, #[[3]] == "ROAD" && #[[1]] == "DEU" &];
spainSpending = Select[spending, #[[3]] == "ROAD" && #[[1]] == "ESP" &];
italySpending = Select[spending, #[[3]] == "ROAD" && #[[1]] == "ITA" &];
franceSpending = Select[spending, #[[3]] == "ROAD" && #[[1]] == "FRA" &];
usaSpending = Select[spending, #[[3]] == "ROAD" && #[[1]] == "USA" &];
greeceSpending = Select[spending, #[[3]] == "ROAD" && #[[1]] == "GRC" &];
ukSpending = Select[spending, #[[3]] == "ROAD" && #[[1]] == "GBR" &];
russiaSpending = Select[spending, #[[3]] == "ROAD" && #[[1]] == "RUS" &];
luxemburgSpending = Select[spending, #[[3]] == "ROAD" && #[[1]] == "LUX" &];
If we plot this, we reproduce CNN's plot, and cover a wider date range than the BBC:
DateListPlot[{{DateObject[ToString[#[[1]]]], #[[2]]} & /@
germanySpending[[All, -3 ;; -2]], {DateObject[
ToString[#[[1]]]], #[[2]]} & /@
spainSpending[[All, -3 ;; -2]], {DateObject[
ToString[#[[1]]]], #[[2]]} & /@ italySpending[[All, -3 ;; -2]],
{DateObject[ToString[#[[1]]]], #[[2]]} & /@
ukSpending[[All, -3 ;; -2]], {DateObject[
ToString[#[[1]]]], #[[2]]} & /@
russiaSpending[[All, -3 ;; -2]]}, PlotTheme -> "Marketing",
LabelStyle -> Directive[Bold, 15],
PlotLegends -> {"Germany", "Spain", "Italy", "United Kingdom",
"Russia"}, ImageSize -> Large, FrameLabel -> {"year", "spending"}]
![enter image description here][30]
The BBC article only shows the range from 2007 to 2016, a phase in which infrastructure spending in Italy (and Spain) fell substantially. This could be used to make the case that a lack of infrastructure spending by Italy might have contributed to the disaster. The full period from 1995, however, shows that Italy went through a phase of higher investment from 2004-2008 and then returned to values slightly lower than before 2004. Note also that this is total spending, not normalised by population size, GDP or the size of the road network. If, for example, we also plot the graph for the United States, the diagram looks like this:
DateListPlot[{{DateObject[ToString[#[[1]]]], #[[2]]} & /@
germanySpending[[All, -3 ;; -2]], {DateObject[
ToString[#[[1]]]], #[[2]]} & /@
spainSpending[[All, -3 ;; -2]], {DateObject[
ToString[#[[1]]]], #[[2]]} & /@ italySpending[[All, -3 ;; -2]],
{DateObject[ToString[#[[1]]]], #[[2]]} & /@
ukSpending[[All, -3 ;; -2]], {DateObject[
ToString[#[[1]]]], #[[2]]} & /@
russiaSpending[[All, -3 ;; -2]], {DateObject[
ToString[#[[1]]]], #[[2]]} & /@ usaSpending[[All, -3 ;; -2]]},
PlotTheme -> "Marketing", LabelStyle -> Directive[Bold, 15],
PlotLegends -> {"Germany", "Spain", "Italy", "United Kingdom",
"Russia", "USA"}, ImageSize -> Large,
FrameLabel -> {"year", "spending"}, PlotRange -> All]
![enter image description here][31]
where the United States dwarfs the spending of the (much smaller) European countries. There does, however, appear to be a trend of increasing infrastructure spending in the United States.
(Preliminary) Conclusion
----------
We have developed a couple of graphs visualising data about the state of bridges and reproduced some of the graphs that appeared in news coverage of the Genoa disaster. We have pulled in satellite data to inspect the bridges and seen how easy it is in the Wolfram Language to ask your own questions and get meaningful representations. This is, of course, only a short computational essay, and by no means a comprehensive analysis. In fact, many questions can be asked next. Do different types of bridges age better (or worse)? Could additional data, such as accelerometer data from mobile phones in cars that drive over bridges, help to "measure" the state of a bridge? Could we use dash cam footage? There is data on cars using individual roads; could we build a model to suggest which bridges to fix first in order to minimise risk to people? It should be easy to use TravelDirectionsData to get a warning about problematic bridges on the way.
If citizens perform this or similar types of analysis it might help to prompt politicians or the administration to act when there is a hazard. The Wolfram Language is a great tool to enable us to do so.
PS: By the way, if you want to do the same analysis for the United States, [this dataset][32] might be useful.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1523.02.53.png&userId=48754
[2]: https://edition.cnn.com/2018/08/15/europe/italy-genoa-morandi-bridge-collapse-intl/index.html
[3]: https://www.bbc.co.uk/news/world-europe-45193614
[4]: http://www.welt.de/politik/interaktiv/bruecken/deutschlands-bruecken-wettlauf-gegen-den-verfall.html
[5]: https://docs.google.com/spreadsheets/d/1h_%5C%20NKP3lvTnoQBFjCRa9gJLLOi2rxtIMeNucvAUsPwtI/edit?pref=2&pli=1#gid=0
[6]: https://www.bast.de/BASt_2017/DE/Statistik/statistik-node.html
[7]: https://www.bast.de/BASt_2017/DE/Statistik/Bruecken/Zustandsnoten-excel.html?nn=1819430
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1523.12.22.png&userId=48754
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1523.25.29.png&userId=48754
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1523.30.53.png&userId=48754
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1523.46.34.png&userId=48754
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1523.49.10.png&userId=48754
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1523.49.53.png&userId=48754
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1523.53.04.png&userId=48754
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1523.54.32.png&userId=48754
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1523.56.13.png&userId=48754
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1523.59.58.png&userId=48754
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1600.01.18.png&userId=48754
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1600.02.39.png&userId=48754
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1600.03.25.png&userId=48754
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1600.07.28.png&userId=48754
[22]: https://docs.google.com/spreadsheets/d/1h_%5C%20NKP3lvTnoQBFjCRa9gJLLOi2rxtIMeNucvAUsPwtI/edit?pref=2&pli=1#gid=0
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1600.12.48.png&userId=48754
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1600.14.35.png&userId=48754
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1601.20.48.png&userId=48754
[26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1601.23.19.png&userId=48754
[27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1601.32.50.png&userId=48754
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1601.48.06.png&userId=48754
[29]: https://data.oecd.org/transport/infrastructure-investment.htm
[30]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1600.26.42.png&userId=48754
[31]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2018-08-1600.31.31.png&userId=48754
[32]: https://www.fhwa.dot.gov/bridge/nbi/ascii2017.cfmMarco Thiel2018-08-15T23:46:05ZDo neural nets work with cloud deployment?
http://community.wolfram.com/groups/-/m/t/1382601
Over the last few weeks I routinely deployed neural nets to the cloud for computation. Today, even the simplest example does not work anymore:
(*initialise a random neural net that takes a 200x200 image as input*)
scrnet = NetInitialize@
NetChain[{ConvolutionLayer[1, 1], PartLayer[1]}, "Input" -> {1, 200, 200}]
(*Deploy to cloud*)
cnet = CloudExport[scrnet, "MX", Permissions -> "Public"];
(*test on example image *WORKS**)
img = Import["ExampleData/ocelot.jpg"];
With[{net = CloudImport@cnet}, Image@net@{ImageData@img}]
(*Deploy as cloud form page *)
CloudDeploy[FormPage[{"image" -> "Image"},
With[{net = CloudImport@cnet}, Image@net@{ImageData@#image}] &],
Permissions -> "Public"]
The last piece of code generates a cloud object. If I try to upload the ocelot image I get `Image[Failed]` as the sole output.
![enter image description here][1]
If I do not use a net the form works fine:
CloudDeploy[FormPage[{"image" -> "Image"}, Image@ImageData@#image &],
Permissions -> "Public"]
I contacted Wolfram Support already and they told me to post here. Am I doing something wrong? I have ~1000 cloud credits left on a free account, so that should not be the issue. Frankly, I am having a lot of issues with the documentation for cloud computing in general and am starting to wonder if it is worth the hassle.
Best, Max
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5154ScreenShot2018-08-15at11.52.09PM.png&userId=11733Maximilian Jakobs2018-07-13T14:19:54ZConvert Wolfram Language Neural Network to Keras
http://community.wolfram.com/groups/-/m/t/1407584
Suppose you want to convert a Wolfram Language Neural Network to a Keras one, how should you do it?
With a few lines of code you can build a converter for simple net elements (others can easily be added).
Let's start by creating a function that converts a Mathematica array into a Python array.
PythonArray[L_List] := Map[ToString, L, {ArrayDepth@L}] //. List[x__String] :>
"[" <> StringRiffle[{x}, ", "] <> "]"
As an example:
PythonArray[{{1,2},{3,4}}] == "[[1, 2], [3, 4]]"
We can also create a NumPy array:
PythonNumPyArray[L_List] := StringTemplate["np.array(`1`)"][PythonArray@L]
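As a quick sanity check on the Python side (standard-library Python only; the literal string below is the one PythonArray produces for {{1,2},{3,4}} in the example that follows), the generated array string is a valid Python list literal, ready to be wrapped in np.array(...) by PythonNumPyArray:

```python
import ast

# "[[1, 2], [3, 4]]" is what PythonArray[{{1, 2}, {3, 4}}] emits;
# ast.literal_eval parses it back into a nested list without running
# arbitrary code, confirming the string round-trips cleanly.
generated = "[[1, 2], [3, 4]]"
nested = ast.literal_eval(generated)
print(nested)  # [[1, 2], [3, 4]]
```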
Let's create a very simple Net to test:
net = NetInitialize@NetChain[{
5, Ramp,
13, Tanh
}, "Input" -> 1];
which is composed only of LinearLayer and ElementwiseLayer layers.
The Keras format for a LinearLayer (Dense in Keras) is as follows:
model = Sequential()
model.add(Dense(5, input_shape=(1,)))
model.layers[0].set_weights(...)
Hence we can parse the Wolfram input as:
NetParseInput[layer_LinearLayer] := StringTemplate["model.add( Dense(`1`, input_shape=(`2`,)) )"][NetExtract[layer, "Output"], NetExtract[layer, "Input"]]
NetParseWeight[layer_LinearLayer, i_Integer] := StringTemplate["model.layers[`1`].set_weights(`2`)"][i,
PythonArray@{PythonNumPyArray@Transpose@NetExtract[layer, "Weights"], PythonNumPyArray@NetExtract[layer, "Biases"]}]
for the LinearLayer, and
NetParseInput[layer_ElementwiseLayer] := StringTemplate["model.add(Activation('`1`'))"][Switch[NetExtract[layer, "Function"],
Ramp, "relu",
Tanh, "tanh",
_, "ERROR"
]]
NetParseWeight[layer_ElementwiseLayer, ___] := Nothing
for the ElementwiseLayer. The code is pretty self-explanatory.
Now we can create a function to parse the Net.
NetParse[net_NetChain] := Block[{model, layer},
model = Table[
layer = NetExtract[net, i];
StringRiffle[{NetParseInput@layer, NetParseWeight[layer, i-1]}, "\n"]
, {i, Length@net}] // StringRiffle[#, "\n\n"] &;
model = "model = Sequential()\n\n" <> model
]
Using the example of the Net above we have:
SeedRandom[5]
NetParse@net
Outputs:
model = Sequential()
model.add( Dense(5, input_shape=(1,)) )
model.layers[0].set_weights([np.array([[-1.41198, -1.23031, 1.38467, 1.32588, -0.846079]]), np.array([0., 0., 0., 0., 0.])])
model.add(Activation('relu'))
model.add( Dense(13, input_shape=(5,)) )
model.layers[2].set_weights([np.array([[0.368951, -0.465557, 0.0588311, 0.353987, 0.415121, -0.273476, -0.493994, 0.226572, 0.189246, 0.413253, -0.0264303, 0.218976, -0.202013], [-0.4737, 0.056413, -0.509581, -0.0702224, -0.297756, -0.562088, -0.45776, -0.517487, 0.0414736, 0.446953, 0.272512, -0.181571, 0.449515], [0.543519, -0.424297, 0.531857, -0.413055, 0.395872, -0.153824, -0.0818212, -0.552472, -0.0703974, -0.36928, -0.0741213, -0.365046, 0.228673], [-0.311873, -0.309076, -0.463902, -0.54538, 0.0629959, -0.478365, -0.0507208, -0.101537, -0.267402, 0.28598, -0.542745, 0.497146, -0.0757304], [0.129907, 0.318853, -0.537684, -0.485585, 0.255933, 0.0802962, -0.288363, -0.103131, -0.230148, -0.0018353, -0.0192109, 0.184851, 0.0072636]]), np.array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])])
model.add(Activation('tanh'))
Loading this into Python, making sure to import the packages:
from keras.models import Sequential
from keras.layers import Dense, Activation
import numpy as np
We can test the result as follows (in Python):
model.predict([1])
array([[ 0.3266632 , -0.76046175, 0.12077561, -0.86044425, 0.55920595,
-0.68963015, -0.17860858, -0.71611154, -0.42355815, -0.13139175,
-0.67629176, 0.15248899, 0.21291924]], dtype=float32)
While in Mathematica:
net[1]
{0.326664,-0.760461,0.120777,-0.860444,0.559206,
-0.689629,-0.178608,-0.716111,-0.423557,-0.131392,
-0.67629,0.152486,0.21292}
This gives the same result.
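The agreement can also be checked mechanically. Here is a small standard-library Python snippet comparing the two outputs above; since the Keras net runs in float32, agreement to about 1e-5 is the most we should expect:

```python
import math

# Output of net[1] in Mathematica (copied from above)
wl = [0.326664, -0.760461, 0.120777, -0.860444, 0.559206,
      -0.689629, -0.178608, -0.716111, -0.423557, -0.131392,
      -0.67629, 0.152486, 0.21292]
# Output of model.predict([1]) in Keras (copied from above)
keras = [0.3266632, -0.76046175, 0.12077561, -0.86044425, 0.55920595,
         -0.68963015, -0.17860858, -0.71611154, -0.42355815, -0.13139175,
         -0.67629176, 0.15248899, 0.21291924]
# Elementwise comparison to single-precision tolerance.
match = all(math.isclose(a, b, abs_tol=1e-5) for a, b in zip(wl, keras))
print(match)  # True
```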
Other layers can easily be added following the same rationale.Thales Fernandes2018-08-15T22:47:41ZIntegrate a function in a 4D region with singularities using NIntegrate?
http://community.wolfram.com/groups/-/m/t/1404998
Hi everyone!
I need to integrate a function in a 4D region (x1,y1,x2,y2), which blows up whenever x1==x2 && y1==y2. The code is as follows:
{Lx, Ly} = {1/50, 1/50};
f1[x_, y_, n_, m_] := Cos[n*Pi*(Lx + x)/(2*Lx)]*Sin[m*Pi*(Ly + y)/(2*Ly)];
f2[x_, y_, n_, m_] := Sin[n*Pi*(Lx + x)/(2*Lx)]*Cos[m*Pi*(Ly + y)/(2*Ly)]*m*Lx/(n*Ly);
dist[x1_, y1_, x2_, y2_] := Sqrt[(x1 - x2)^2 + (y1 - y2)^2];
int[f_, n_, m_, i_, j_] := int[f, n, m, i, j] =
NIntegrate[f[x1, y1, n, m]*f[x2, y2, i, j]/dist[x1, y1, x2, y2],
{y1, -Lx, Lx}, {x1, -Lx, Lx}, {y2, -Ly, Ly}, {x2, -Lx, Lx}]
Then, I want to evaluate `int` using either `f1` or `f2`, for positive integer `(n,m,i,j)`, for instance using `AbsoluteTiming[int[f1,1,2,1,2]]`. I may need to compute these integrals thousands of times, for the different possible combinations of `(n,m,i,j)` - in the most complex cases, each of them may reach up to 14, for instance. So I must be able to calculate the integral as fast as possible.
However, the method is simply not working so well for some combinations. When I compute, for instance, `int[f1,4,1,1,1]`, the computation simply lasts forever and I receive some scary error messages:
NIntegrate::errprec: Catastrophic loss of precision in the global error estimate due to insufficient WorkingPrecision or divergent integral.
I have tried multiple integration strategies and methods, but the thing that seems to work best is to simply define the integration boundaries using `{y1,-Lx,Lx},{x1,-Lx,Lx},{y2,-Ly,y1,Ly},{x2,-Lx,x1,Lx}`, taking advantage of the integration order to define the singularity locations.
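Before tuning NIntegrate further, it can help to check whether the integrand is odd over the symmetric box, in which case the integral is exactly 0 and the quadrature is chasing cancellation. A hedged standalone Python sketch (standard library only; Lx = Ly = 1/50 and the f1 integrand as above; the point p is an arbitrary sample):

```python
import math

Lx = Ly = 1 / 50

def f1(x, y, n, m):
    # Same basis function as the Mathematica f1 above.
    return (math.cos(n * math.pi * (Lx + x) / (2 * Lx))
            * math.sin(m * math.pi * (Ly + y) / (2 * Ly)))

def integrand(x1, y1, x2, y2, n, m, i, j):
    d = math.hypot(x1 - x2, y1 - y2)
    return f1(x1, y1, n, m) * f1(x2, y2, i, j) / d

# For (n,m,i,j) = (4,1,1,1) the integrand flips sign under
# (x1, x2) -> (-x1, -x2), so its integral over the symmetric box is 0.
p = (0.003, 0.007, -0.011, 0.002)
a = integrand(*p, 4, 1, 1, 1)
b = integrand(-p[0], p[1], -p[2], p[3], 4, 1, 1, 1)
print(math.isclose(a, -b))  # True
```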
The integration method "DuffyCoordinates" performed quite fast, but I didn't know whether I was correctly using the "Corners" option, e.g.:
int2[f_,n_,m_,i_,j_] := NIntegrate[f[x1, y1, n, m]*f[x2, y2, i, j]/dist[x1, y1, x2, y2],
{y1, -Lx, Lx}, {x1, -Lx, Lx}, {y2, -Ly, Ly}, {x2, -Lx, Lx},Method -> {"DuffyCoordinates","Corners" -> {0, 0, 1, 1}}]
... But it also gets stuck for `(n,m,i,j)=(4,1,1,1)`. So I don't know what else to do:
- Am I doing something wrong with this couple of integration strategies, or missing something obvious?
- I suspect much of the problems are due to the integral actually evaluating to 0, maybe because the functions are somehow odd (whatever that means in 4D?), and I'm integrating them in a symmetric interval. But I don't know how I could define my function `int` in a way that it would check whether the integrand is odd in this 4D symmetrical integration volume.Miguel Condesso2018-08-14T21:10:58ZMathematica slow down on Raspberry Pi 3
http://community.wolfram.com/groups/-/m/t/825781
Just benchmarked my RPI3 vs RPI2 Model B V1.1 with the built-in Mathematica Benchmark[]. I was quite surprised to see the benchmark significantly decline: the RPI3 had a benchmark score of 0.03 and a time of 465 sec, while the RPI2 had a score of 0.045 and a time of 305 sec. On the RPI3, two tests dominated the time, matrix multiply and solving linear systems, which together took only slightly less time than the entire benchmark on the RPI2. Both systems were fully updated. I wonder if Wolfram could comment on this: do they feel it is the hardware, or might this be improved in a future release? As it is, it is difficult to use Mathematica on an RPI, and I was looking to the RPI3 to improve this.
Thanksdavid p2016-03-19T02:02:04Z[TMJ] Improving the Kruskal—Katona Bounds for Complete Subgraphs of a Graph
http://community.wolfram.com/groups/-/m/t/1405633
New *THE MATHEMATICA JOURNAL* article:
----------
[Improving the Kruskal—Katona Bounds for Complete Subgraphs of a Graph][1]
--------------------------------------------------------------------
*by ROBERT COWEN*
--------------------------------------------------------------------
ABSTRACT: An important problem in graph theory is to find the number of complete subgraphs of a given size in a graph. If the graph is very large, it is usually only possible to obtain upper bounds for these numbers based on the numbers of complete subgraphs of smaller sizes. The Kruskal–Katona bounds are often used for these calculations. We investigate these bounds in specific cases and study how they might be improved.
- [Read full text »][2]
- [Submit an article »][3]
![enter image description here][4]
[1]: http://www.mathematica-journal.com/2018/07/improving-the-kruskal-katona-bounds-for-complete-subgraphs-of-a-graph/
[2]: http://www.mathematica-journal.com/2018/07/improving-the-kruskal-katona-bounds-for-complete-subgraphs-of-a-graph/
[3]: http://www.mathematica-journal.com/submit-article/
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Cowen-2_Output_65.gif&userId=20103Moderation Team2018-08-15T07:00:34Z[TMJ] Computational Aspects of Quaternionic Polynomials (Part II)
http://community.wolfram.com/groups/-/m/t/1405095
New *THE MATHEMATICA JOURNAL* article:
----------
[Computational Aspects of Quaternionic Polynomials][1]
--------------------------------------------------------------------
###Part II: Root-Finding Methods###
*by M. IRENE FALCÃO, FERNANDO MIRANDA, RICARDO SEVERINO, M. JOANA SOARES*
--------------------------------------------------------------------
ABSTRACT: This article explores the numerical mathematics and visualization capabilities of Mathematica in the framework of quaternion algebra. In this context, we discuss computational aspects of the recently introduced Newton and Weierstrass methods for finding the roots of a quaternionic polynomial.
- [Read full text »][2]
- [Submit an article »][3]
![enter image description here][4]
[1]: http://www.mathematica-journal.com/2018/07/computational-aspects-of-quaternionic-polynomials-2/
[2]: http://www.mathematica-journal.com/2018/07/computational-aspects-of-quaternionic-polynomials-2/
[3]: http://www.mathematica-journal.com/submit-article/
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Miranda-2_Output_19.gif&userId=20103Moderation Team2018-08-14T18:48:22Z[TMJ] Computational Aspects of Quaternionic Polynomials (Part I)
http://community.wolfram.com/groups/-/m/t/1405077
New *THE MATHEMATICA JOURNAL* article:
----------
[ Computational Aspects of Quaternionic Polynomials][1]
--------------------------------------------------------------------
###Part I: Manipulating, Evaluating and Factoring###
*by M. IRENE FALCÃO, FERNANDO MIRANDA, RICARDO SEVERINO, M. JOANA SOARES*
--------------------------------------------------------------------
ABSTRACT: This article discusses a recently developed Mathematica tool – `QPolynomial` – a collection of functions for manipulating, evaluating and factoring quaternionic polynomials. `QPolynomial` relies on the package `QuaternionAnalysis`, which is available for download at [w3.math.uminho.pt/QuaternionAnalysis][2].
- [Read full text »][3]
- [Submit an article »][4]
![enter image description here][5]
[1]: http://www.mathematica-journal.com/2018/05/computational-aspects-of-quaternionic-polynomials/
[2]: http://w3.math.uminho.pt/QuaternionAnalysis
[3]: http://www.mathematica-journal.com/2018/05/computational-aspects-of-quaternionic-polynomials/
[4]: http://www.mathematica-journal.com/submit-article/
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Miranda-1_Print_18.gif&userId=20103Moderation Team2018-08-14T17:56:23Z[TMJ] The Modular Group
http://community.wolfram.com/groups/-/m/t/1405063
New *THE MATHEMATICA JOURNAL* article:
----------
[The Modular Group: A Finitely Generated Group with Interesting Subgroups][1]
--------------------------------------------------------------------
*by PAUL R. MCCREARY, TERI JO MURPHY, CHRISTAN CARTER*
--------------------------------------------------------------------
ABSTRACT: The action of Möbius transformations with real coefficients preserves the hyperbolic metric in the upper half-plane model of the hyperbolic plane. The modular group is an interesting group of hyperbolic isometries generated by two Möbius transformations, namely, an order-two element ![enter image description here][2] and an element of infinite order ![enter image description here][3]. Viewing the action of the group elements on a model of the hyperbolic plane provides insight into the structure of hyperbolic 2-space. Animations provide dynamic illustrations of this action.
- [Read full text »][4]
- [Submit an article »][5]
![enter image description here][6]
[1]: http://www.mathematica-journal.com/2018/03/the-modular-group/
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=McCreary_Math_1.gif&userId=20103
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=McCreary_Math_2.gif&userId=20103
[4]: http://www.mathematica-journal.com/2018/03/the-modular-group/
[5]: http://www.mathematica-journal.com/submit-article/
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=McCreary_Output_2.gif&userId=20103Moderation Team2018-08-14T17:44:49ZInteresting trigonometric MRB constant approximations
http://community.wolfram.com/groups/-/m/t/1405026
Let m be the MRB constant to 40 digits of precision:
m = NSum[(-1)^n (n^(1/n) - 1), {n, 1, Infinity},
Method -> "AlternatingSigns", WorkingPrecision -> 40];
. After that
1 - Cos[m/119595162821256388427204517274628339609] - Pi^2/8*10^-(2*40 - 2)
gives 0.*10^-116.
In general,
magic = 2 m/Pi
(* 0.119595162821256388427204517274628339609*)
. After that
Table[
1 - Cos[m/Floor[magic*10^n]] - Pi^2/8*10^-(2 n), {n, 10, 40}]
(* {4.385464676791947466761321772*10^-30,
2.59208994291255051050593254*10^-33,
5.2896210219556330067818305*10^-36,
1.1633653405789904134448983*10^-38,
1.318014202380040427023339*10^-41, 8.0137497971723466567979*10^-45,
1.8243662751312499138693*10^-47, 1.738640025869652510552*10^-50,
8.8137753325369667176*10^-54, 5.612639698155533691*10^-57,
1.486384016794827039*10^-59, 4.2194528318579711*10^-63,
9.31971504972644*10^-67, 9.31971504972644*10^-69,
1.06720368700503*10^-71, 3.563976666485*10^-75,
1.500848825805*10^-77, 5.6659337329*10^-81, 1.5396780515*10^-83,
9.54885630*10^-87, 1.29634494*10^-89, 5.846824*10^-93,
1.720568*10^-95, 7.0066*10^-99, 8.172*10^-102, 1.982*10^-104,
1.26*10^-107, 2.*10^-111, 2.*10^-113, 0.*10^-116, 0.*10^-118}*)
Having a term that adds accuracy is not unique to `magic = 2 m/Pi` and isn't even unique to the MRB constant:
m = NSum[(-1)^n (n^(1/n) - 1), {n, 1, Infinity},
Method -> "AlternatingSigns", WorkingPrecision -> 80];
1 - Cos[m/10^30] - 5*m^2*10^-61
(* -5.18946663509688*10^-125*)
e = N[E, 80];
1 - Cos[e/10^30] - 5*e^2*10^-61
(*-2.274922918047676628*10^-120*)Marvin Ray Burns2018-08-14T15:48:29ZCreate a custom loss function with NetTrain?
http://community.wolfram.com/groups/-/m/t/1402209
Suppose I want to Classify some data but, for my own reasons, want a custom neural net architecture rather than whatever Classify develops algorithmically, AND I also want a custom loss function. In my example, I want an asymmetric loss such that predicting True when the real answer is False is a worse problem than predicting False when the real answer is True. In Classify, there is an option UtilityFunction that works splendidly in such cases. And I think the following set of layers would work to emulate a utility function in the neural network arena if I wanted losses in one direction to count double those in the other direction. There may well be much better functions; I only show the code below to indicate that something may be possible.
lossnet =
NetChain[{ThreadingLayer[#1 - #2 &],
ElementwiseLayer[2*Ramp[#] + 1*Ramp[-#] &]}]
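For intuition, the asymmetric penalty this lossnet encodes can be written out for a scalar prediction. The following is a hypothetical Python sketch (the function name and weight arguments are mine, not part of any framework), mirroring 2*Ramp[pred - target] + 1*Ramp[target - pred]:

```python
def asymmetric_loss(pred, target, w_over=2.0, w_under=1.0):
    """Penalise over-prediction (predicting True when the answer is
    False) twice as heavily as under-prediction."""
    d = pred - target
    return w_over * max(d, 0.0) + w_under * max(-d, 0.0)

# Predicting survival (1) when the person died (0) costs 2.0 ...
print(asymmetric_loss(1.0, 0.0))  # 2.0
# ... while predicting death (0) when the person survived (1) costs 1.0.
print(asymmetric_loss(0.0, 1.0))  # 1.0
```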
But I can't quite figure out how to put it all together. The particular toy problem I want to solve is to come up with a model that works on the Titanic dataset and predicts survival but, for my own reasons, counts a prediction of survival when the person dies as worse than a prediction of death when the person survives.
Three other notes:
1) My question is related to a question asked [here][1] but no one ever answered it.
2) The documentation for the neural net framework really needs to be improved, particularly if it escapes the "Experimental" framework. Right now, it is missing the conceptual framework that would make its use easy. It also seems to have a very heavy focus on image processing rather than on data analysis in other contexts, such as social science. Moreover, some of the documentation is underinclusive. By way of example, there are options to NetGraph that are listed in the "Details" section yet there is no indication at the top of the ref page that any options exist. As a result it is extremely challenging to figure out how to deal with data such as the Titanic which is a list of Associations and for which various columns of the data need special encoding.
3) One motivation for using a custom utility function is that when one output class is scarce, the neural net frequently develops a predictor that always predicts the most common class: predicting that everyone on the Titanic will live. In the Classify context, there are ways of dealing with this: use of ClassPriors, UtilityFunctions. I'd like the same capabilities when using the Neural Network framework.
[1]: http://community.wolfram.com/groups/-/m/t/982989Seth Chandler2018-08-10T15:19:12ZGet rid of a controller/variable in Animate?
http://community.wolfram.com/groups/-/m/t/1402972
Hello!
I'm trying to get rid of controllers and the controller variable using `Animate`. I've gotten rid of the controllers, but the variable ϴ still shows up.
animate[obj_] := Animate[
With[
{v = RotationTransform[\[Theta], {0, 0, 1}][{3, 0, 3}]},
Show[obj, ViewPoint -> v]], {\[Theta], 0, 2 Pi},
Alignment -> Center,
Paneled -> False,
SaveDefinitions -> True,
AnimationRate -> .01,
AppearanceElements -> None,
AnimationRunning -> True] /.
(AppearanceElements -> _) ->
(AppearanceElements -> {})
Giving the following animation:
![PRNP][1]
If I use `ControlType->None`, ϴ does not show up...
animate2[obj_] := Animate[
With[
{v = RotationTransform[\[Theta], {0, 0, 1}][{3, 0, 3}]},
Show[obj, ViewPoint -> v]], {\[Theta], 0, 2 Pi},
Alignment -> Center,
Paneled -> False,
SaveDefinitions -> True,
AnimationRate -> .01,
AppearanceElements -> None,
AnimationRunning -> True,
ControlType -> None] /.
(AppearanceElements -> _) ->
(AppearanceElements -> {})
![enter image description here][2]
...but then it does not rotate. I realize it needs some sort of control object (or I'm guessing it does), but is there a way to hide ϴ so all I get is a rotating object that starts automatically upon code evaluation?
Any help is appreciated!
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=prnp.gif&userId=1036924
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-08-13at1.57.30PM.png&userId=1036924Swede White2018-08-13T18:58:23ZCustom Neural Network Architectures for Social Science Data
http://community.wolfram.com/groups/-/m/t/1402774
The code in the notebook attached to this post sets forth my efforts to develop custom neural network architectures to work on datasets found in social sciences (or other fields). It is the result of a lot of trial and even more error. It shows how to do the following things. Some of this is covered in the Wolfram Language documentation, but not as an extensive worked example.
1. Create numerical vectors out of nominal data
2. Develop a loss function when the target consists of nominal data
3. Use ClassifierMeasurements when the classifier is the output of a trained neural network
4. Specify what form the neural network must be in for ClassifierMeasurements to work and how to modify a non-conforming trained network to be in the appropriate form.
5. Show equivalent NetChains and NetGraphs
6. Show how the neural network can itself encode nominal data contained as values in Associations, catenate that data, and then pipe it through the rest of a NetGraph.
7. Show how to hook up a loss function to the output of a neural network
8. How to see the innards of a neural network more clearly, as well as the plan used to convert it to something usable by MXNet.
9. How to work with Datasets and Query.
I strongly suspect that this is not the most efficient way to create a neural network to analyze data contained in a Dataset with named columns and lots of nominal variables. However, it's the best I can do for now. I hope it is instructive to others. More importantly perhaps, I hope that these efforts will inspire others more knowledgeable in the field to show (1) how this can all be done in a more efficient manner and (2) how other bells and whistles can be added, such as a custom loss function, weighted inputs, a desired distribution of predictions, etc. While the Wolfram documentation on neural networks is extensive, as of version 11.3, in which the functionality is still deemed "Experimental," it lacks, in my view, the conceptual perspective and range of worked examples from diverse fields that I think would lower the barriers to entry for non-expert users of machine learning.
Note:
I did receive some excellent assistance in this effort from Wolfram Technical Support, but there comes a point when you kind of want to do it on your own. My efforts in asking the community.wolfram.com website for assistance didn't receive any immediate response and so, being the persistent sort, I decided just to try and do it on my own.
##Do the Encoding Before We Get to NetTrain##
Download the Titanic and convert it from a Dataset to a list of associations.
Short[titanic = Normal@ExampleData[{"Dataset", "Titanic"}]]
Scramble the data, delete the missing values to keep things simple, and encode survival in a way I prefer.
titanic2 =
Query[RandomSample /* (DeleteMissing[#, 1,
2] &), {"survived" -> (If[#, "survived", "died"] &)}][titanic];
Encode the nominal data as unit vectors.
titanic3 =
Query[All,
List["class" -> NetEncoder[{"Class", {"1st", "2nd", "3rd"}, "UnitVector"}],
"sex" -> NetEncoder[{"Class", {"male", "female"}, "UnitVector"}]]][
titanic2]
![enter image description here][1]
Get the data as a list of six values ruled onto a single value.
Short[titanic4 = Query[All, Values /* (Flatten[Most[#]] -> Last[#] &)][titanic3]]
![enter image description here][2]
Form training and testing data sets.
Short[{trainingData, testData} = TakeDrop[titanic4, Round[0.7*Length[titanic4]]]]
![enter image description here][3]
Create a pretty basic net chain ending with a SoftmaxLayer[] that turns the output into probabilities.
chainlinks = {LinearLayer[12], ElementwiseLayer[LogisticSigmoid], LinearLayer[4], LinearLayer[2], SoftmaxLayer[]};
nc = NetChain[chainlinks, "Input" -> 6, "Output" -> NetDecoder[{"Class", {"died", "survived"}}]]
![enter image description here][4]
Just test the NetChain to see if it works.
NetInitialize[nc][{0, 0, 1, 18, 1, 0}]
> "died"
Train the neural net. Use the CrossEntropy loss as the function to minimize. Remember that the target data needs to be encoded from died and survived to 1 and 2. Otherwise the CrossEntropyLossLayer gets unhappy. After 2000 rounds I find it's all overfitting anyway. So I limit the training rounds.
chainTrained = NetTrain[nc, trainingData, All, ValidationSet -> Scaled[0.2], LossFunction -> CrossEntropyLossLayer["Index",
"Target" -> NetEncoder[{"Class", {"died", "survived"}}]], MaxTrainingRounds -> 2000]
![enter image description here][5]
Get the TrainedNet out of the NetTrainResultsObject and see how our classifier performed.
cmo = ClassifierMeasurements[chainTrained["TrainedNet"], testData]
![enter image description here][6]
cmo["ConfusionMatrixPlot"]
![enter image description here][7]
Not bad. (But not great. The question is whether that's the fault of the classifier or just irreducible noise in the data).
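Other measurements can be pulled from the same ClassifierMeasurementsObject by key, for example:

    cmo["Accuracy"]
    cmo["FScore"]
    cmo["ROCCurve"]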
##Now do it with NetGraph##
Same data, but do it with a NetGraph.
ngt = NetGraph[chainlinks, {1 -> 2 -> 3 -> 4 -> 5}, "Input" -> 6,
"Output" -> NetDecoder[{"Class", {"died", "survived"}}]]
![enter image description here][8]
From here on, it's all exactly the same.
graphTrained =
NetTrain[ngt, trainingData, All, ValidationSet -> Scaled[0.2],
LossFunction ->
CrossEntropyLossLayer["Index",
"Target" -> NetEncoder[{"Class", {"died", "survived"}}]],
MaxTrainingRounds -> 2000]
![enter image description here][9]
graphCmo = ClassifierMeasurements[graphTrained["TrainedNet"], testData]
![enter image description here][10]
graphCmo["ConfusionMatrixPlot"]
![enter image description here][11]
Not surprisingly, the results are very similar.
##Now Do the Encoding Within NetTrain##
Now, I want to do it with the data in a different form. I want the neural network to do the encoding. And I want to at least think about having a custom loss function. Convert the form of the data so that it is "column oriented." Basically we are going to use the third variant in the function specification set forth below.
![enter image description here][12]
{trainingData2, testData2} = Map[Normal[Transpose[Dataset[#]]] &, TakeDrop[titanic2, Round[0.7*Length[titanic2]]]];
Here's what the training data looks like.
Keys[trainingData2]
> {"class", "age", "sex", "survived"}
Map[Short, Values[trainingData2]]
![enter image description here][13]
Now form a NetGraph that Catenates some of the values from the data together and then goes through the same process as our NetChain (and NetGraph) above. Add a loss function at the end. Note that the data coming in from the Target port into the "myloss" layer is encoded from nominal values died and survived into integers 1 and 2.
nodes = Association["catenate" -> CatenateLayer[], "l15" -> LinearLayer[15],
"ls1" -> ElementwiseLayer[LogisticSigmoid], "l5" -> LinearLayer[5],
"l2" -> LinearLayer[2], "sm" -> SoftmaxLayer[],
"myloss" ->
CrossEntropyLossLayer["Index",
"Target" -> NetEncoder[{"Class", {"died", "survived"}}]]];
Create the connectivity structure between the nodes. Note that I am careful to specify which connectors of the various NetPorts connect with other NetPort connectors. Certain layers, like CrossEntropyLossLayer, have connector names that the user can't alter, so far as I can figure out. The connector name "Target", for example, needs to stay "Target". Also notice that I believe I have to generate a NetPort["Loss"] for the network to be trained.
connectivity = {{NetPort["class"], NetPort["age"], NetPort["sex"]} ->
"catenate", "catenate" -> "l15" -> "ls1" -> "l5" -> "l2" -> "sm",
"sm" -> NetPort["myloss", "Input"],
NetPort["survived"] -> NetPort["myloss", "Target"],
"myloss" -> NetPort["Loss"], "sm" -> NetPort["Output"]}
> {{NetPort["class"], NetPort["age"], NetPort["sex"]} -> "catenate",
> "catenate" -> "l15" -> "ls1" -> "l5" -> "l2" -> "sm", "sm" ->
> NetPort["myloss", "Input"], NetPort["survived"] -> NetPort["myloss",
> "Target"], "myloss" -> NetPort["Loss"], "sm" -> NetPort["Output"]}
Now let's put our NetGraph together. Here I have to tell it how various inputs and outputs will be encoded and decoded. You will notice I do NOT tell it how to encode the "survived" values, because our CrossEntropyLossLayer handles that part of the work.
ngt2 = NetGraph[nodes, connectivity,
"class" -> NetEncoder[{"Class", {"1st", "2nd", "3rd"}, "UnitVector"}],
"age" -> "Scalar",
"sex" -> NetEncoder[{"Class", {"male", "female"}, "UnitVector"}],
"Output" -> NetDecoder[{"Class", {"died", "survived"}}]]
![enter image description here][14]
Here's a picture of our net.
NetInformation[ngt2, "FullSummaryGraphic"]
![enter image description here][15]
We can get the structure information back out of the NetGraph using some "secret" functions. I found these useful when working on this project to help me understand what was going on.
NeuralNetworks`GetNodes[ngt2]
![enter image description here][16]
NeuralNetworks`NetGraphEdges[ngt2] (* shouldn't this be called GetEdges for consistency? Or maybe GetNodes should be NetGraphNodes? *)
> {NetPort["class"] -> NetPort[{"catenate", 1}],
> NetPort["age"] -> NetPort[{"catenate", 2}],
> NetPort["sex"] -> NetPort[{"catenate", 3}],
> NetPort["survived"] -> NetPort[{"myloss", "Target"}],
> NetPort[{"catenate", "Output"}] -> "l15", "l15" -> "ls1", "ls1" -> "l5",
> "l5" -> "l2", "l2" -> "sm", "sm" -> NetPort[{"myloss", "Input"}],
> "sm" -> NetPort["Output"], NetPort[{"myloss", "Loss"}] -> NetPort["Loss"]}
We can also get a closer look at what the neural net is going to do, although, frankly, I don't understand the diagram fully. It does look cool, though. (I believe the diagram essentially shows how the Wolfram Language framework will be translated to MXNet.)
NeuralNetworks`NetPlanPlot[NeuralNetworks`ToNetPlan[ngt2]]
![enter image description here][17]
Anyway, let's train the network. Notice how I designate the loss function with a string that refers to a node (NetPort) in the network. I'm not quite sure why, but you can't designate the loss function as "myloss"; again, I wish the documentation were clearer on this issue. Again, I'll stop after 2000 rounds.
titanicNet2 =
NetTrain[ngt2, trainingData2, All, LossFunction -> "Loss",
ValidationSet -> Scaled[0.2], MaxTrainingRounds -> 2000]
![enter image description here][18]
We can extract the trained network from the NetTrainResultsObject.
titanicNet2["TrainedNet"]
![enter image description here][19]
Let's run it on the test data.
Short[titanicNet2["TrainedNet"][testData2]]
![enter image description here][20]
If I try to use ClassifierMeasurements on it, though, it fails.
ClassifierMeasurements[titanicNet2["TrainedNet"], testData2];
![enter image description here][21]
The error message is unhelpful. And nothing I could find in the documentation spells out the circumstances under which the results of a neural network can be used in ClassifierMeasurements. Maybe, however, it's because our NetGraph is producing two outputs: a Loss value and an Output value. When we make Classifiers, to the best of my knowledge, we only get an Output value. Let's trim the network.
titanicNet2Trimmed = NetDelete[titanicNet2["TrainedNet"], "myloss"]
![enter image description here][22]
Now, when we run our trimmed network on the test data (stripped of the survived column), we just get output values as a List and not as part of a multi-key Association.
titanicNet2Trimmed[Query[KeyDrop["survived"]][testData2]]
![enter image description here][23]
And now ClassifierMeasurements works!!
cmo2 = ClassifierMeasurements[titanicNet2Trimmed, testData2 -> "survived"]
![enter image description here][24]
cmo2["ConfusionMatrixPlot"]
![enter image description here][25]
##Conclusion##
I hope the code above helps others in appreciating the incredible neural network functionality built into the Wolfram language and inspires further posts on how it can be used in creative and flexible ways.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=27521.png&userId=20103
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=a1.png&userId=20103
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=a2.png&userId=20103
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=103982.png&userId=20103
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=98653.png&userId=20103
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=32714.png&userId=20103
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=82165.png&userId=20103
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=93816.png&userId=20103
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=x.png&userId=20103
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=15818.png&userId=20103
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=66799.png&userId=20103
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-08-13at11.40.32.png&userId=20103
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=605711.png&userId=20103
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=659712.png&userId=20103
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=941513.png&userId=20103
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=223114.png&userId=20103
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1051116.png&userId=20103
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=146917.png&userId=20103
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=272718.png&userId=20103
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=606219.png&userId=20103
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=xx.png&userId=20103
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=548820.png&userId=20103
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1048021.png&userId=20103
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=761522.png&userId=20103
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=631723.png&userId=20103Seth Chandler2018-08-12T14:18:02ZWhat type of data container is used in this example?
http://community.wolfram.com/groups/-/m/t/1403092
I'm sure this is a very novice question. I've been looking at a number of examples of things that can be done. In a few examples, such as
https://www.wolfram.com/language/11/enhanced-geo-visualization/measure-the-density-of-trees.html
![enter image description here][1]
there is a concise block of data used. See `In[1]` in the above link. What is this? I would like to understand how to build one for a home project I'm working on about home values.
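For reference (and this is a guess at what that `In[1]` contains, since the block itself isn't reproduced here): data blocks like that in the geo examples are typically an Association, written with `<| key -> value, ... |>` brackets, often wrapped in a Dataset. A hypothetical sketch for home-value data, with made-up keys and values:

    homes = <|
      "123 Main St" -> <|"Value" -> 250000, "Bedrooms" -> 3|>,
      "456 Oak Ave" -> <|"Value" -> 310000, "Bedrooms" -> 4|>
    |>;
    Dataset[homes]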
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=In_1.png&userId=20103Michael Madsen2018-08-13T20:45:27ZDocumentation & Functionality enhancements as NeuralNets leave Experimental
http://community.wolfram.com/groups/-/m/t/1402799
I've been doing a lot of work with Machine Learning in the Wolfram Language recently and we have tremendous capability and a clean architecture typical of Wolfram Language products. Right now, all the functions are labeled as "Experimental" and thus are not held to quite as high a standard as features that have graduated from that designation. I believe the transition to a full and permanent part of the language could be helped by addressing two matters: (a) some documentation lacunae and (b) some interoperability challenges. I am attempting to start a conversation on that point by putting forth some suggestions for enhancements to Documentation and Functionality. My focus is not on creating new fancy layers -- like ones that would create a Generative Adversarial Network or other great stuff -- but on making the existing functionality more accessible to those not completely expert in either Neural Nets or the MXNet framework on which it rests.
**Documentation**
A key issue is that we have this wonderful Classify and Predict functionality that can use Neural Networks and that kind of/sort of integrates with the NetTrain, NetGraph stuff, but the integration is not as tight as desirable and the documentation is lacking. Here are some ideas.
1. Classify[data,Method->"NeuralNetwork"] and Predict[data,Method->"NeuralNetwork"] should provide the network used for training, including the loss function. Perhaps there could be an option that had Classify and Predict return a NetTrainResultsObject. Or perhaps ClassifierInformation could extract the network in a form that could be reused within NetTrain or otherwise edited. This way one could take a Network used by Classify or Predict and (a) see more easily what the heck it was doing and (b) think of tweaks that might enhance its performance. Moreover, one could see how Classify created a Net that implemented the optional features such as IndeterminateThreshold and UtilityFunction. It would be a great learning tool.
2. There should be a worked example probably using NetGraph showing at least one way to implement every option to Classify and Predict within the NeuralNetwork paradigm. Thus ClassPriors, FeatureExtractor, FeatureNames, FeatureTypes, IndeterminateThreshold, UtilityFunction should all be shown. ValidationSet would be nice too.
3. The requirements for ClassifierMeasurements to work on the output from a NetTrain operation should be clearly stated.
4. There is a lot of functionality hidden in the NeuralNetworks` context. A lot of it is quite useful. Some of it should be promoted for more general use and documented appropriately.
**Functionality**
OK. This is a hard one -- probably much harder than I appreciate. But perhaps a start could be made.
1. It would be great to be able to just write regular Wolfram Language code and, where possible, have it automatically translated into a NetGraph expression, via a function named NetGraphForm (or NetGraphCompile, or something like that).
Example:
NetGraphForm[(MapThread[#1 - #2 &] /* (Dot[{1, 2, 3}, #] &)), {"x", "y"}] ->
  NetGraph[
   {ThreadingLayer[#1 - #2 &], ConstantArrayLayer["Array" -> {1, 2, 3}], DotLayer[]},
   {{NetPort["x"], NetPort["y"]} -> 1, {1, 2} -> 3}]
So that then one could take the NetGraph (netg) and do the following
netg[Association["x" -> {3, 5, 8}, "y" -> {2, 16, -3}]]
And you'd get 12.
2. Right now the [NeuralNetwork repository][1] is filled with elaborate nets for doing wonderful and fancy things. But perhaps there could be a section of that repository devoted to simpler tasks: asymmetric cross-entropy losses, just to take a particular example.
Probably others will have additional ideas. Or it may be that my ideas are impracticable, a special case of a more general problem, or already in the works. Perhaps some constructive user feedback might help the product evolve even more successfully.
[1]: https://resources.wolframcloud.com/NeuralNetRepository/Seth Chandler2018-08-13T01:39:07ZThe Delian Brick and other 3D self-similar dissections
http://community.wolfram.com/groups/-/m/t/1368091
Divide a cuboid into two cuboids similar to the original shape. The answer involves the cube root of 2, otherwise known as the [Delian constant](http://mathworld.wolfram.com/DelianConstant.html). I've called this object the Delian Brick. It's a 3D 2-reptile. A stack of three bricks can be made using the cube root of 3, and so on.
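To see where the cube root of 2 comes from: give the brick sides 1, r, r^2. Halving the longest side leaves a brick with sides r^2/2 = 1/r, 1 and r, which is the original scaled by 1/r, provided r^3 = 2. A quick check:

    r = 2^(1/3);
    Simplify[{r^2/2, 1, r} == {1, r, r^2}/r]
    (* True *)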
With[{r = 2^(1/3)},
 Graphics3D[{Opacity[.5],
   Cuboid[{0 r^0, 0 r^1, 0 r^2}, {1 r^0, 1 r^1, 1 r^2}],
   Cuboid[{1 r^0, 0 r^1, 0 r^2}, {2 r^0, 1 r^1, 1 r^2}]},
  SphericalRegion -> True, Boxed -> False]]
![Delian Brick][1]
I discovered the Delian Brick on my own, as did at least ten other recreational mathematicians I've exchanged correspondence with. It may have been known to the ancient Greeks. The first publication I've found is by Dale Walton and the game company Thinkfun, who expanded it into a 3D 4-irreptile they called the Fifth Chair puzzle.
With[{r = 2^(1/3)},
 Graphics3D[{Opacity[.5],
   {Red, Cuboid[{0 r^0, 0 r^1, 0 r^2}, {2 r^0, r^1, r^2}], Cuboid[{1 r^0, 1 r^1, 0 r^2}, {2 r^0, 2 r^1, 1 r^2}]},
   {Blue, Cuboid[{0 r^0, 1 r^1, 0 r^2}, {1 r^0, 3 r^1, 1 r^2}], Cuboid[{1 r^0, 2 r^1, 0 r^2}, {2 r^0, 3 r^1, 1 r^2}]},
   {Green, Cuboid[{0 r^0, 3 r^1, 0 r^2}, {2 r^0, 4 r^1, 2 r^2}], Cuboid[{0 r^0, 2 r^1, 1 r^2}, {2 r^0, 3 r^1, 2 r^2}]},
   {Yellow, Cuboid[{2 r^0, 0 r^1, 0 r^2}, {4 r^0, 2 r^1, 2 r^2}], Cuboid[{0 r^0, 0 r^1, 1 r^2}, {2 r^0, 2 r^1, 2 r^2}]}},
  SphericalRegion -> True, Boxed -> False]]
![fifth chair][2]
There are also [five space-filling tetrahedra](http://demonstrations.wolfram.com/SpaceFillingTetrahedra/), and at least two of them are 8-reptiles:
Row[{Graphics3D[{Opacity[.5],Polygon/@Union[Sort/@
Flatten[Subsets[#,{3}]&/@(IntegerDigits/@({{020,111,121,022},{022,111,112,222},{022,111,121,222},{022,113,112,222},{022,113,123,024},{022,113,123,222},{111,202,212,113},{111,222,212,113}}+111)-1),1]]}, Boxed-> False, SphericalRegion->True],
Graphics3D[{Opacity[.5],Polygon/@Union[Sort/@
Flatten[Subsets[#,{3}]&/@(IntegerDigits/@({{002,022,111,113},{022,042,131,133},{022,222,111,113},{022,222,111,131},{022,222,113,133},{022,222,131,133},{111,131,220,222},{113,133,222,224}}+111)),1]]}, Boxed-> False, SphericalRegion->True]}]
![tetrahedron reptiles][3]
More of these self-similar 3D dissections are listed at [3D Rep-Tiles and Irreptiles](http://demonstrations.wolfram.com/3DRepTilesAndIrreptiles/). The ones I list here need to be added there. Most of the 3D rep-tiles are based on either a 2D reptile or a polycube. The four items in this discussion fit in neither of those categories. Are there others?
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=DelianBrick.png&userId=21530
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=FifthChair.png&userId=21530
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=tetrahedronreptiles.png&userId=21530Ed Pegg2018-07-03T16:02:03ZFind the "nth" of a large PrimeNumber?
http://community.wolfram.com/groups/-/m/t/1395404
Hi guys! I hope all of you are fine :) Maybe someone here can tell me how I can find, with Wolfram Alpha or Mathematica, the index of a large prime? I used "PrimePi", but "PrimePi" does not work with large primes (primes like 1921773217311523519374373 are too large...). Is there a criterion, method, or script with which I can find the index of larger primes?
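For primes that are within its range, PrimePi is exactly the inverse being asked about here: for a prime p, PrimePi[p] gives the index n with Prime[n] == p.

    p = 29;
    n = PrimePi[p]  (* 10 *)
    Prime[n] == p   (* True *)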
I have also used the "nthprime" function, but I think this is not what I need. If there is a way to use the nth-prime function to find the index of larger primes, can someone here show me how it works? To better explain what I mean, here is an example:
- 2 is the 1st prime (<- I need this number)
- 3 is the 2nd prime (<- I need this number)
- 5 is the 3rd prime (<- I need this number)
- 7 is the 4th prime (<- I need this number)
and so on... Another example:
19 is the 8th prime, 23 is the 9th prime, 29 is the 10th prime... Now I need a function that tells me which prime 1921773217311523519374373 is, i.e. its index. I hope anybody here has an idea how I can find, with Wolfram Alpha or Mathematica, the index of the prime 1921773217311523519374373.
I hope anyone can help me here. Kind regards and best wishes.Nural I.2018-07-31T18:14:09Z[✓] Use Except in a RegularExpression?
http://community.wolfram.com/groups/-/m/t/1392816
Friends:
Can Except be used in regular expressions? If so, how?
This is potentially quite useful.
Suppose I want to match a pattern with the character class [A-Z] but without the character class [B-D].
This seems to be a job for combining regular expressions and Except... but how to do it?
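For this particular example, no Except is needed at all: the subtraction can be written directly into the character class, since [A-Z] minus [B-D] is just [AE-Z]. A sketch:

    StringCases["ABCDEF", RegularExpression["[A-Z]"]]   (* {"A", "B", "C", "D", "E", "F"} *)
    StringCases["ABCDEF", RegularExpression["[AE-Z]"]]  (* {"A", "E", "F"} *)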
Any help is welcome,
FranciscoFrancisco Gutierrez2018-07-26T16:32:39ZTransform a string in UTF-8 format into a string in ANSI format?
http://community.wolfram.com/groups/-/m/t/1402419
I have a string like a = "abcdefg";
it is a UTF-8 string.
I want to transform a into a string in ANSI format. How can I do that?gearss zhang2018-08-11T00:21:06ZDisplay an image with high resolution?
http://community.wolfram.com/groups/-/m/t/1403130
Hello All,
I need the output images to be displayed at least 400 x 400 on screen.
They are tiny, as shown in the attachment.
Please find the code below and the notebook file attached:
Manipulate[imageA= ;
varBin=Binarize[imageA,binimageA];
Column[{Button["Gradient Filter",
out=ImageAdjust[GradientFilter[varBin,0.5]]],
Row[{imageA,varBin,out}]}],
{binimageA,0,1}]
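One possible fix (a sketch, not tested against the attached notebook): resize the images to an explicit pixel width before putting them in the Row, e.g.

    (* display each image at 400 pixels wide *)
    Row[ImageResize[#, 400] & /@ {imageA, varBin, out}]

Note that ImageResize resamples the underlying data; if only the on-screen size matters, `Image[#, ImageSize -> 400] &` changes the display size without resampling.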
Thanks for your usual consideration.Man Oj2018-08-13T12:03:04Z[WSC18] Chronological Dating of Historical Texts Using RNNs
http://community.wolfram.com/groups/-/m/t/1382707
![Automatic Chronological Dating of Historical Texts Using RNNs][1]
#Abstract#
Chronological dating is essential for various tasks such as summarization and document retrieval. This project proposes a novel method of dating historical texts using a Recurrent Neural Network that works on both the character level and the word level. The results show a significant improvement in detection accuracy compared to using a word-level-only RNN. The error span is between 1 year and a century in most cases. Though the model achieves decent performance for texts originating from the 19th century, its accuracy declines significantly for older texts due to their scarcity and the non-homogeneous distribution of the dataset.
#Data Collection and Pre-processing#
The training data is composed of public domain books collected from Openlibrary, an online project created by Aaron Swartz, Brewster Kahle and others. The Wolfram Language supports a service connection that allows direct interaction with the Openlibrary API.
Data Collection
In[106]:= openlibrary = ServiceConnect["OpenLibrary"]
Out[106]= ServiceObject["OpenLibrary",
"ID" -> "connection-1f55c291dcb5feaa290ece0cd1c97ed2"]
In[107]:= BookDatabase = <||>;
nbCalls = 0;
In[109]:= GetTextRequest[keys_] := {nbCalls++;
Normal@openlibrary["BookText", {"BibKeys" -> {keys}}]}
In[110]:= GetValidTextKey[keys_] :=
SelectFirst[keys,
MatchQ[Pause[.1]; Normal@openlibrary["BookText", {"BibKeys" -> {#}}],
KeyValuePattern[_ -> _String]] &];
GetFirstText[list_] := FirstCase[list, Except["NotAvailable", _String]]
In[112]:= GetTexts [keys_] :=
Quiet[GetFirstText[
Values[Normal@
openlibrary["BookText", {"BibKeys" -> RandomSample[keys, UpTo[50]]}]]]]
In[113]:= AddBook[b_] :=
BookDatabase[b["FirstPublishYear"]] =
If[MatchQ[BookDatabase[b["FirstPublishYear"]], _Missing],
{GetTexts[b["EditionKey"]]},
Append[BookDatabase[b["FirstPublishYear"]], GetTexts[b["EditionKey"]]]
]
In[114]:= AddSubject[subj_String] :=
Module[{searchResults},
(*Database init*)
BookDatabase = <||>;
(*Searching books*)
searchResults =
Select[Normal@
openlibrary["BookSearch", {"Subject" -> subj, MaxItems -> 90}],
#["HasFulltext"] &];
(*Downloading Text*)
GeneralUtilities`MonitoredMap[AddBook, searchResults];
Print[subj <> " DOWNLOADED!"];
(*Exporting*)
Export["C:\\Users\\Tarek\\OneDrive\\Documents\\Portfolio\\opportunities\\\
Wolfram Summer Camp\\Dating Historical Texts\\" <> subj <> ".wxf",
BookDatabase];
Pause[180];
]
(*TESTING*)
In[115]:= AddSubject /@ {"Religion", "Games", "Drama", "Action", "Adventure", "Horror",
"Spirituality", "Poetry", "Fantasy"}
During evaluation of In[115]:= Religion DOWNLOADED!
During evaluation of In[115]:= Games DOWNLOADED!
During evaluation of In[115]:= Drama DOWNLOADED!
During evaluation of In[115]:= Action DOWNLOADED!
During evaluation of In[115]:= Adventure DOWNLOADED!
During evaluation of In[115]:= Horror DOWNLOADED!
During evaluation of In[115]:= Spirituality DOWNLOADED!
During evaluation of In[115]:= Poetry DOWNLOADED!
During evaluation of In[115]:= Fantasy DOWNLOADED!
#Training the Neural Net#
![RNN Architecture][2]
The project uses a hybrid word-level and character-level Recurrent Neural Network. The word-level processing is built on the GloVe model in order to compute vector representations for words. The limitation of a word-level-only network is that many old books include words that were not in the training data for GloVe. Thus, adding a character-level network chain seems to improve the prediction accuracy, since it helps process previously unseen corpora.
Define the net
In[24]:= net = NetGraph[<|
"chars" -> {
UnitVectorLayer[],
LongShortTermMemoryLayer[50],
DropoutLayer[.5]
},
"words" -> {
NetModel[
"GloVe 100-Dimensional Word Vectors Trained on Wikipedia and Gigaword 5 \
Data"],
LongShortTermMemoryLayer[50],
SequenceLastLayer[],
DropoutLayer[.5]
},
"cat" -> CatenateLayer[],
"predict" -> {
LongShortTermMemoryLayer[100],
SequenceLastLayer[],
DropoutLayer[.5],
LinearLayer[1]
}
|>,
{
NetPort["Characters"] -> "chars",
NetPort["Words"] -> "words",
{"chars", "words"} -> "cat",
"cat" -> "predict" -> NetPort["Date"]
},
"Characters" -> NetEncoder[{"Characters", characters}],
"Date" -> NetDecoder["Scalar"]
];
Create training data
In[32]:= sample[text_String, n_: 1024] :=
Module[{len, offset},
len = StringLength@text;
offset = RandomInteger[{1, len - n - 1}];
StringPadRight[
charPreprocess@
StringTake[text, {Max[1, offset], Min[len, offset + n - 1]}], n]
];
In[33]:= getSample[KeyValuePattern[{"FullText" -> text_String,
"FirstPublishYear" -> d_DateObject}]] :=
With[{s = sample[text]},
<|"Characters" -> s, "Words" -> s, "Date" -> dateToNum@d|>
];
$samples = 100000;
In[43]:= import = Flatten[Import /@ FileNames["*.wxf", $dataDir, Infinity]];
withDate = Cases[import, KeyValuePattern["FirstPublishYear" -> _DateObject]];
trainingData =
RandomSample[
Flatten@Table[getSample /@ withDate, Ceiling[$samples/Length[withDate]]],
UpTo[$samples]];
Length@trainingData
Training
results = NetTrain[
net,
trainingData,
All,
ValidationSet -> Scaled[.25],
TargetDevice -> "GPU",
MaxTrainingRounds -> Quantity[8, "Hours"],
BatchSize -> 48,
TrainingProgressCheckpointing -> {"Directory", $dataDir <>
"Trained_Networks\\", "Interval" -> Quantity[15, "Minutes"]}
];
In[66]:= trained = results["TrainedNet"];
Save trained net
In[67]:= Export["PredictTextDate.wlnet", trained]
Out[67]= "PredictTextDate.wlnet"
#Testing#
Testing and Results
In[25]:= CalculateAccuracy[title_String] := Module[{text, predDate, actualDate},
text = processForInput[sample[ResourceData[title]]];
actualDate = ResourceObject[title]["SourceMetadata"]["Date"];
predDate = numToDate[net[text]];
{IntegerPart[
Abs[UnitConvert[DateDifference[actualDate, predDate], "Years"]]],
actualDate, DateObject[predDate, "Year"]}
]
In[50]:= titleList = {"Friends, Romans, Countrymen", "On the Origin of Species",
"Agnes Grey", "Alice in Wonderland", "The Pickwick Papers",
"The Wheels of Chance", "Pellucidar",
"The Adventures of Huckleberry Finn", "The Emerald City of Oz",
"The Old Curiosity Shop", "Adam Bede", "A Study in Scarlet",
"Micah Clarke", "Prufrock"};
In[51]:= accuracyList = CalculateAccuracy /@ titleList;
In[52]:= resultsTable =
Dataset@SortBy[
Join[{{"Error", "Actual Date", "Predicted Date"}}, accuracyList], #[[2]] &];
In[53]:= meanAccuracy = N@Mean@accuracyList[[All, 1]]
Out[53]= Quantity[25.8571, "Years"]
#Want to test it out?#
![Dating Historical Texts Microsite][3]
We have launched a micro-site that implements the current neural network architecture in order to allow the prediction of publication dates of an input text.
Link: [Dating Historical Texts Microsite][4]
You can also try testing the code using the Wolfram Code below.
Link: [Download Code][5]
#Acknowledgements#
This project could not have been accomplished without the support, encouragement and insight of my mentor: Mr. Richard Hennigan.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=communitypostimage.png&userId=1372178
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=net.png&userId=1372178
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Microsite.PNG&userId=1372178
[4]: https://www.wolframcloud.com/objects/tarek.ab.aloui/WSC2018/DatingHistoricalTexts
[5]: https://drive.google.com/open?id=15pDUwLUm_zxzD-YYmDTIyeQHMyCJJzuNTarek Aloui2018-07-13T14:55:27Z[WSS18] Generating Music with Expressive Timing and Dynamics
http://community.wolfram.com/groups/-/m/t/1380021
![Cover][1]
## Goal ##
There are many ways to generate music; one of them is algorithmic, where music is generated with the help of a list of handcrafted rules.
The approach in this project is different - I build a neural network that knows nothing about music but learns it from thousands of songs given in MIDI format.
_Apart from just generating a meaningful sequence of notes I also wanted to add **dynamics** in loudness and humanlike mistakes in timing with **no restrictions for note durations**._
- **Why dynamics and timing?**
No human can play a musical instrument with precisely the same loudness and strictly in time with a metronome (at least I can't). People make mistakes, but in music these mistakes help create what we might call more alive music. Dynamic music with slight time shifts simply sounds more interesting, so even when you write music in a program you are supposed to add these "mistakes" yourself.
- **Why performances?**
The dataset that I use for the project contains performances by [Yamaha e-piano competition][2] participants. This gives us the possibility of learning the dynamics and timing mistakes of real performances.
__Here's an [example][3] generated by the model.__
All the code, data and trained models can be found on [GitHub][32].
The examples will be attached to this post as files just in case.
----------
## Inspiration ##
This is not original work; it is mostly an attempt to recreate the work of the [Magenta][4] team from their blog [post][5].
Nevertheless, in this post I will try to add more detail about the **preprocessing** steps and show how you can build a similar neural network model in the Wolfram Language.
## Data ##
I've used a [site][6] that has the Yamaha e-piano performances but also contains a set of classical and jazz compositions.
In the original [work][7] the Magenta team used only the Yamaha dataset, but with heavy augmentation on top of it: time-stretching (making each performance up to 5% faster or slower) and transposition (raising or lowering the pitch of each performance by up to a major third).
Also, you can create your own list of MIDI files and build a dataset with the help of the code provided below in the post.
Here are links for finding lots of free MIDI songs: [The Lakh MIDI Dataset][8] (a very well-prepared dataset for ML projects), [MidiWorld][9] and [FreeMidi][10].
## MIDI ##
MIDI is short for Musical Instrument Digital Interface. It’s a language that allows computers, musical instruments, and other hardware to communicate.
MIDI carries event messages that specify musical notation, pitch, velocity, vibrato, panning, and clock signals (which set tempo).
_For this project, we only need the events that denote where each note starts/ends and with what velocity and pitch._
## Preprocessing The Data ##
Even though MIDI is already a digital representation of music, we can't just take the raw bytes of a file and feed them to an ML model, as is done with models working on images. First of all, images and music are conceptually different tasks: the first is a single event (an image) per item, the second is a sequence of events (a song) per item. Another reason is that the raw MIDI representation, and a single MIDI event itself, contain a lot of information that is irrelevant to our task.
Thus we need a special data representation, a MIDI-like stream of musical events. Specifically, I use the following set of events:
- 88 **note-on** events, one for each of the 88 MIDI pitches of piano range. These events start a new note.
- 88 **note-off** events, one for each of the 88 MIDI pitches of piano range. These events release a note.
- 100 **time-shift** events in increments of 10 ms up to 1 second. These events move forward in time to the next note event.
- 34 **velocity** events, corresponding to MIDI velocities quantized into 32 bins. These events change the velocity applied to subsequent notes.
The neural network operates on a one-hot encoding over these 310 different events. This is the very same representation as in the original work, but with fewer note-on/note-off events: I encode the 88 notes of the piano range instead of the 127 notes of the MIDI pitch range, to reduce the one-hot vector size and make learning easier.
**For example**, if you want to encode 4 notes from C major with durations of half a second and with different velocities, your sequence of events would look something like this (for clarity I use only indices instead of whole one-hot vectors):
`{288, 60, 226, 148, 277, 62, 226, 150, 300, 64, 226, 152, 310, 67, 226, 155}`
![Preprocessing encoding C major example][11]
In this particular example:
- _60, 62, 64, 67_ are **note-on** events (C5, D5, E5, G5). Values range from 1 to 88.
- _148, 150, 152, 155_ are **note-off** events. Values range from 89 to 176.
- _226_ is a half-second **time-shift** event. Values range from 177 = 10 ms to 276 = 1 sec.
- _288, 277, 300, 310_ are **velocity** events. Values range from 277 to 310.
In this way, you can encode music that is expressive in dynamics and timing.
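To make the event ranges above concrete, here is a small sketch that maps an event index back to a human-readable description. The function name `DecodeEvent` is mine and is not part of the project code; it just restates the four index ranges listed above.

    (* decode an event index into {type, value}; ranges as described above *)
    DecodeEvent[i_Integer] := Which[
      1 <= i <= 88, {"NoteOn", i},                     (* piano-range note index *)
      89 <= i <= 176, {"NoteOff", i - 88},
      177 <= i <= 276, {"TimeShift", (i - 176)/100.},  (* seconds, in 10 ms steps *)
      277 <= i <= 310, {"Velocity", i - 276},          (* quantized velocity bin *)
      True, Missing["UnknownEvent"]]

    DecodeEvent /@ {288, 60, 226, 148}
    (* {{"Velocity", 12}, {"NoteOn", 60}, {"TimeShift", 0.5}, {"NoteOff", 60}} *)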
Now, let's take a look at another example, with a chord built from the same notes but with different durations:
`{300, 60, 62, 64, 67, 226, 152, 155, 226, 150, 226, 148}`
![C major chord][12]
As you can see, if you want to play more than one note at once you just need to put them in a single bunch of note-on events (60, 62, 64, 67).
Then you add time-shift and note-off events as needed. If you need a duration longer than 1 sec you can stack more than one time-shift event ({276, 276} = a 2 sec time shift).
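That stacking rule can be written as a small helper. The name `EncodeLongShift` is mine, not from the project code; it uses index 276 as the 1-second shift and indices 177-276 for 10 ms increments, per the ranges listed earlier.

    (* split a duration in seconds into stacked time-shift event indices *)
    EncodeLongShift[seconds_] := Module[{full = Floor[seconds], rest},
      rest = Round[100 (seconds - Floor[seconds])];  (* remainder in 10 ms steps *)
      Join[ConstantArray[276, full], If[rest > 0, {176 + rest}, {}]]]

    EncodeLongShift[2.5]
    (* {276, 276, 226} : two 1 sec shifts plus a half-second shift *)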
**WL and MIDI**
The Wolfram Language has **built-in** support for MIDI files, which really simplifies the initial work.
To get data from a MIDI file, you need to import it with specific elements:
![WL MIDI Import Elements][13]
In the code below I also extract and calculate the needed information related to the song's tempo.
{raw, header} = Import[path, #]& /@ {"RawData", "Header"};
tempos = Cases[Flatten[raw], HoldPattern["SetTempo" -> tempo_] :> tempo];
microsecondsPerBeat = If[Length@tempos > 0, First[tempos], 500000]; (* If there is no explicit tempo we use default 120 bpm *)
timeDivision = First@Cases[header, HoldPattern["TimeDivision" -> division_] :> division];
(* Convert timeDivision value to base of 2 *)
timeDivisionBits = IntegerDigits[timeDivision, 2];
(* Pad zeros at the beginning if the value takes less than 16 bits *)
timeDivisionBits = If[Length@timeDivisionBits < 16, PadLeft[timeDivisionBits, 16], timeDivisionBits];
(* The top bit responsible for the type of TimeDivision *)
timeDivisionType = timeDivisionBits[[1]];
(* Convert the bit slices back to integers before doing arithmetic with them *)
framesPerSecond = FromDigits[timeDivisionBits[[2 ;; 8]], 2];
ticksPerFrame = FromDigits[timeDivisionBits[[9 ;; 16]], 2];
ticksPerBeat = If[timeDivisionType == 0, timeDivision, 10^6 /(framesPerSecond * ticksPerFrame)];
secondsPerTick = (microsecondsPerBeat / ticksPerBeat) * 10^-6.;
An example of raw data and header info from MIDI file in Wolfram Language:
![Raw MIDI output][14]
**SetTempo** is the number of microseconds per beat (microseconds per quarter note).
**TimeDivision** can be interpreted in two ways: if the top bit is 0, the type is "ticks per beat" (or "pulses per quarter note"); otherwise the type is "frames per second". We need these values to calculate the time per **MIDI tick**, which MIDI events use as their time measurement.
One MIDI event in the WL representation looks like this:
`{56, {9, 0}, {46, 83}}`
- 56 is the number of **MIDI ticks**: the amount of time that must pass since the previous MIDI event.
Multiplying this number by **secondsPerTick** gives our **time-shift** event.
- 9 is the status byte of the MIDI event (9 and 8 are **note-on** and **note-off**, respectively).
- 0 is the MIDI channel (irrelevant for us).
- 46 is the pitch of the note (for **note-on**/**note-off** events).
- 83 is the number we encode as a **velocity** event.
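Pulling those fields apart can be sketched like this (the association keys are my own labels, not part of the WL MIDI format):

    (* label the parts of one raw WL MIDI event *)
    event = {56, {9, 0}, {46, 83}};
    <|"Ticks" -> event[[1]],      (* multiply by secondsPerTick to get seconds *)
      "Status" -> event[[2, 1]],  (* 9 = note-on, 8 = note-off *)
      "Channel" -> event[[2, 2]],
      "Pitch" -> event[[3, 1]],
      "Velocity" -> event[[3, 2]]|>
    (* <|"Ticks" -> 56, "Status" -> 9, "Channel" -> 0, "Pitch" -> 46, "Velocity" -> 83|> *)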
If you want to understand how real raw MIDI data is structured, this [blog][15] is especially useful.
Now we need to parse the sequence of MIDI events and keep only the **note-on** and **note-off** events, plus any event whose number of **MIDI ticks** is greater than 0. Some meta-messages carry irrelevant MIDI ticks, so we exclude them from the final sequence by skipping events with the value **F** (meta message) in the MIDI status byte.
After filtering MIDI data you get a sequence that is ready to be encoded to the final representation and will be fed to the model.
![Filtered MIDI events][16]
To encode the sequence of MIDI events to the final representation I use the code below:
EncodeMidi[track_, secondsPerTick_] := Block[{lastVelocity = 0},
ClearAll[list];
Flatten[
Map[
Block[{list = {}},
(* Add time shifts when needed *)
If[TimeShiftByte[#, secondsPerTick] > 0, list = Join[list, EncodeTimeShift[TimeShiftByte[#, secondsPerTick]]]];
(* Proceed with logic only if it's a note event *)
If[StatusByte[#] == NoteOnByte || StatusByte[#] == NoteOffByte,
(* Add velocity if it's different from the last seen *)
If[lastVelocity != QuantizedVelocity[VelocityByte[#]] && StatusByte[#] == NoteOnByte,
lastVelocity = QuantizedVelocity[VelocityByte[#]];
list = Join[list, List[EncodeVelocity[VelocityByte[#]]]];
];
(* Add note event *)
list = Join[list, List[EncodeNote[NoteByte[#], StatusByte[#] == NoteOnByte]]];
];
(* Return encoded list*)
list
]&,
track]
, 1]];
This code uses a number of functions I wrote during the summer school, but they are mostly short utility functions.
You can check them and the complete implementation on [GitHub][17].
When the code for the preprocessing is ready it's time to build a dataset.
**Building Dataset**
I've made a [notebook][18] that takes care of preprocessing the MIDI files and encoding them into the final representation.
(* Take all files names in Midi folder *)
files = FileNames["*", NotebookDirectory[] <> "Midi"];
dataset = Flatten[EncodeTrack /@ files, 1];
During the encoding, each track is partitioned into smaller segments:
encodings = Partition[EncodeMidi[GetMidiEvents[raw, secondsPerTick], secondsPerTick], 500];
In the original work, the Magenta team split each song into 30-second segments to keep each example a manageable size. The problem is that partitioning by equal time doesn't give equal-sized examples. Even though sequence models can accept varying input sizes, I wanted a static example size to speed up training: I was told that internally in WL (and probably everywhere) it's more efficient for every example to have the same size.
However, I believe this kind of partitioning has a drawback: an equal number of encoded events can span different durations in time, which adds inconsistency to the dataset.
In my case, I've divided each song into segments of 500 encoded events.
![One Song Final Encoding][19]
To reduce the size of the final dataset I store only the indices of the one-hot encodings.
As a result, the final dimensions of my dataset were **{99285, 500}**.
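If you ever need the explicit one-hot vectors rather than the stored indices, a minimal sketch of the expansion is below (`toOneHot` is my own name; in practice a `NetEncoder[{"Class", ...}]` input lets the network consume the indices directly, which is why storing indices suffices):

    (* expand stored class indices into explicit one-hot vectors of length 310 *)
    toOneHot[indices_List] := UnitVector[310, #] & /@ indices

    Dimensions @ toOneHot[{60, 226, 148}]
    (* {3, 310} *)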
If you want to try partitioning by time, you need to edit the `EncodeTrack` function in [`Midi.m`][20].
This code finds the positions at which to split a sequence into equal-time segments:
GetTimePositions[track_, seconds_, secondsPerTick_] :=
Block[{positions = {}, time = 0},
Do[
time = time + track[[i]][[1]] * secondsPerTick;
If[time > seconds, positions = Append[positions, i]; time = 0;],
{i, Length@track}];
positions
]
Here the parameter `track` is a sequence of MIDI events. You then split the same `track` at the positions returned by the function.
segments = FoldPairList[TakeDrop, track, positions];
After that, you need to encode `segments` with the `EncodeMidi` function. One thing is then left: reworking the model to accept varying input sizes. The next part, however, covers how to build a model with a static example size.
----------
## Building a Model ##
Because music data is a sequence of events, we need an architecture that knows how to remember and predicts the next event based on all previous ones. This is exactly what recurrent neural networks do: RNNs use their internal state (memory) to process sequences of inputs. For more detail, I recommend watching this [introduction][21] video.
At an abstract level, an RNN learns the probabilities of events following one another. Take, for example, this language model from the Wolfram Neural Net Repository; it predicts the next character of a given sequence.
NetModel["Wolfram English Character-Level Language Model V1"]["hello worl"]
The output is **d**.
You can also get the top 5 probabilities:
NetModel["Wolfram English Character-Level Language Model V1"]["hello worl", {"TopProbabilities", 5}]
You will get:
{"d" -> 0.980898, "e" -> 0.00808785, "h" -> 0.0045687, " " -> 0.00143807, "l" -> 0.000681855}
In my work, I needed similar behavior, but instead of characters I wanted to predict encoded MIDI events. That is why the basis of my model is the [Wolfram English Character-Level Language Model V1][22]. Also, after reading a [guide][23] about sequence learning with neural networks in WL, I decided to improve the training process with the "teacher forcing" technique.
**Teacher Forcing**
A simple language model takes the last prediction from an input sequence and computes its class. For "teacher forcing" we instead need the classes of all predictions.
![Model comparison][24]
Compared to the language model, I removed one `GatedRecurrentLayer` and the `DropoutLayer` because the dataset is not that big (a precaution against overfitting). Another benefit of "teacher forcing" is that you don't need to create labels for every example separately. To compute the loss we make two sequences out of each input example:
1. Everything but the **last** element(Sequence**Most**Layer)
2. Everything but the **first** element(Sequence**Rest**Layer)
![Teacher Forcing Net][25]
As you can see, the input is a single vector of indices of size 500, and the labels for computing the loss are generated inside the `NetGraph`.
Here is a visualized example of the flow with simple input:
![Input flow explanation][26]
You can find the code for creating the model in this [PerformanceRNNModel][27] notebook.
After all the data is ready and the model is finalized we can start training.
NetTrain[teacherForcingNet,
<|"Input" -> dataTrain|>,
All,
TrainingProgressCheckpointing -> {"File", checkPointDir, "Interval" -> Quantity[5, "Minutes"]},
BatchSize -> 64,
MaxTrainingRounds -> 10,
TargetDevice -> "GPU", (* Use CPU if you don't have Nvidia GPU *)
ValidationSet -> <|"Input" -> dataValidate|>
]
A friendly piece of advice: it's better to use **checkpointing** during training. It will keep your mental health safe by ensuring that all training progress is saved.
I trained the model for 30 rounds, which took around 4-5 hours on AWS GPUs.
The first 10-15 rounds showed no sign of problems, but later the training clearly started to overfit.
![Training loss][28]
Unfortunately, I didn't have time to fix this problem, but to overcome it I might reduce the GRU size from 512 to 256 and bring back the dropout layer.
## Generate Music ##
To generate music we need a model that predicts the next event in a sequence, as in the language model. To do that, I take the trained model and extract the "PerformanceRNN Predict Model" part from it.
predictNet = NetExtract[trainedNet, "predict"];
The next step is to convert this `predictNet` into a model that takes a varying input size and returns the class of the next event.
generateModel = NetJoin[NetTake[predictNet, 3], {
SequenceLastLayer[],
NetExtract[predictNet, {4, "Net"}],
SoftmaxLayer[]},
"Input" -> Automatic,
"Output" -> NetDecoder[{"Class", Range[310]}]
]
The resulting architecture is pretty much the same as the language model I started from: it takes a varying-size sequence of encoded MIDI events `{177, 60, 90}` and predicts what the next event could be `{177, 60, 90, ?}`.
![Model Comparison(Generation)][29]
**Now, let the fun begin!**
generateDemo[net_, start_, len_] := Block[{obj = NetStateObject[net]},
Join@NestList[{obj[#, "RandomSample"]} &, start, len]
]
This small function is all we need to generate a sequence of the desired length.
`NetStateObject` keeps track of all sequences that have been applied to the network, meaning every next prediction is the result of all previous events, not only the most recent one.
`start` should be a sequence of encoded MIDI events. It can also be a single-item sequence, say if you want to start from a pause or a particular note. This gives you some ability to steer the generation process in a particular direction.
Okay, two lines of code and you can play with generating music:
generatedSequence = Flatten[generateDemo[generateModel, {60, 216, 148, 62, 200, 150, 64, 236, 152, 67, 198, 155}, 500]];
ToSound[generatedSequence]
Here are other examples: [2][30], [3][31].
You can generate your own demos if you download the [repository][32] and open the [PerformanceRNN][33] notebook.
## Further Work ##
That was a very fun and challenging task for me. I can't say that I'm satisfied with the results, but this is a good start, and I now have a direction.
What I want to explore next is the variational autoencoder, especially [MusicVAE][34], made by the same Magenta team.
However, I'll start by improving the existing model: changing the architecture and cleaning the dataset to keep only the performances from the Yamaha dataset.
Thank you for reading the post, and feel free to ask any questions.
![Peace!][35]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2433article_cover.png&userId=1352227
[2]: http://www.piano-e-competition.com
[3]: https://drive.google.com/open?id=1I7l6hrecWsuMxqvEdUiRWtg6N6NCW34R
[4]: https://magenta.tensorflow.org/
[5]: https://magenta.tensorflow.org/performance-rnn
[6]: http://www.kuhmann.com/Yamaha.htm
[7]: https://github.com/tensorflow/magenta/tree/master/magenta/models/performance_rnn
[8]: http://colinraffel.com/projects/lmd/
[9]: http://www.midiworld.com/
[10]: https://freemidi.org/
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5707Preprocessing_explanation.png&userId=1352227
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6109Prep_ex_2.png&userId=1352227
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=MIDIimportelements.png&userId=1352227
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7717raw_midi_output.png&userId=1352227
[15]: http://www.recordingblogs.com/wiki/musical-instrument-digital-interface-midi
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=filtered_midi_events.png&userId=1352227
[17]: https://github.com/Apisov/Performance-RNN-WL/blob/master/Project/Midi.m
[18]: https://github.com/Apisov/Performance-RNN-WL/blob/master/BuildData.nb
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=One_track_final_encoding.png&userId=1352227
[20]: https://github.com/Apisov/Performance-RNN-WL/blob/master/Project/Midi.m
[21]: http://www.wolfram.com/wolfram-u/catalog/wl036/
[22]: https://resources.wolframcloud.com/NeuralNetRepository/resources/Wolfram-English-Character-Level-Language-Model-V1
[23]: http://reference.wolfram.com/language/tutorial/NeuralNetworksSequenceLearning.html#1013067167
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Modelcomparison.png&userId=1352227
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3750Teacher_forcing.png&userId=1352227
[26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Teacher_forcing_explanation.png&userId=1352227
[27]: https://github.com/Apisov/Performance-RNN-WL/blob/master/PerformanceRNNModel.nb
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=raw_midi.png&userId=1352227
[29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Modelcomparison%28Generation%29.png&userId=1352227
[30]: https://drive.google.com/open?id=1GtlaOtTF_9rHiDVsrqmLnUaKCSAva1dP
[31]: https://drive.google.com/open?id=1sEihbFJw4XbVZveYl8efoM8Ivq8781ar
[32]: https://github.com/Apisov/Performance-RNN-WL
[33]: https://github.com/Apisov/Performance-RNN-WL/blob/master/PerformanceRNN.nb
[34]: https://magenta.tensorflow.org/music-vae
[35]: http://community.wolfram.com//c/portal/getImageAttachment?filename=d2eedc8a1ea8fc6a62e23b151c7fb3675c8153cc.png&userId=1352227

Pavlo Apisov, 2018-07-11T21:20:56Z

Use FindRoot for the following function?
http://community.wolfram.com/groups/-/m/t/1402742
In the example below, FindRoot doesn't work with the provided function `calcTresAtTime[mCpRes_?NumericQ, mFracClr_?NumericQ, timeTarget_?NumericQ]`. However, no problems are observed when calling the function by itself or from Plot. The [documentation][1] mentions that FindRoot first localizes all of the variables, then evaluates f with the variables held symbolic; the examples in the documentation show how to turn this off by using `_?NumericQ`.
eq01ResHB =
MCpRes ures'[t] ==
mCpPump (uclr[t] - ures[t]) + UAambRes (uamb - ures[t]) +
UAbrg (ubrg - ures[t]);
eq02ClrHB =
MCpClr uclr'[t] ==
mCpPump (ures[t] - uclr[t]) + UAambClr (uamb - uclr[t]) +
UAclr (ucw - uclr[t] );
ic = {ures[0] == ures0, uclr[0] == uclr0};
eqSet = Join[{eq01ResHB, eq02ClrHB}, ic];
vars = {ures, uclr};
KuambRes = 0.025 ;
Kuabrg = 0.236;
KuambClr = 0.0024;
Kuaclr = 0.1;
calcTresAtTime[mCpRes_?NumericQ, mFracClr_?NumericQ,
timeTarget_?NumericQ] := Module[{TresSolLocal, TclrSolLocal},
parmsRes = {MCpRes -> mCpRes , UAambRes -> KuambRes,
UAbrg -> Kuabrg};
parmsClr = {MCpClr -> mFracClr mCpRes, UAambClr -> KuambClr,
UAclr -> Kuaclr};
parmsBoundary = {mCpPump -> 1, ubrg -> 200, ucw -> 60, uamb -> 70};
parmsInitialCond = {ures0 -> 70, uclr0 -> 70};
eqSetValues =
eqSet /. parmsRes /. parmsClr /. parmsBoundary /. parmsInitialCond;
{TresSolLocal, TclrSolLocal} =
NDSolveValue[eqSetValues, vars, {t, 0, 2000}];
N@TresSolLocal[timeTarget]
]
calcTresAtTime[60., 0.4, 300.]
Plot[calcTresAtTime[x, 0.4, 300.], {x, 0, 80}]
FindRoot[ 130 == calcTresAtTime[x, 0.4, 300.], {x, 0, 80}]
Below is the result of the Plot command, which suggests that the function itself doesn't have any severe problems.
![enter image description here][2]
Below is the result of the FindRoot command
![FindRoot output][3]
I have worked through the examples in the documentation, but can't find where I have taken a wrong turn.
Any help would be appreciated.
[1]: http://reference.wolfram.com/language/ref/FindRoot.html
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PlotExample01.jpg&userId=894223
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=FindRoot_Output.PNG&userId=894223

Robert McHugh, 2018-08-12T06:41:10Z

Understand behavior of TableForm?
http://community.wolfram.com/groups/-/m/t/1402300
Mathematica 11.3.0.0 Windows 10 64
This may be a silly question, but I bang my head against this cumbersome feature.
Can anybody explain where the "None" in the following output comes from (and possibly how to eliminate it):
Input:
TableForm[{{1, {1, 1}}, {2, {2, 2}}}, TableDirections -> {Row, Column, Column}]
Output (the periods indicate spaces; otherwise the text formatter of this forum screws up the formatting):
1..............2
None...1..2
............1..2

Daniel Huber, 2018-08-11T19:45:59Z

Solve the Karush-Kuhn-Tucker equations with Reduce
http://community.wolfram.com/groups/-/m/t/1402471
Some years ago I published a short article in the Mathematica Journal describing how to solve the Karush-Kuhn-Tucker equations with Reduce, to do symbolic optimization. I was pleased to see that the approach was subsequently used by several people. However, the code in that article has the problem that it gives all local minima. I've recently updated the code to give only global minima. The new code has the advantage over Minimize that it gives multiple global minima and also provides the values of the Lagrange multipliers, which give the sensitivity of the objective function to changes in the constraints. The code is shown below with copious comments. I've also given two examples in which the code returns a result but Minimize does not, even though this is an unusual circumstance.
Code
In[1]:= (* Generate the Karush-Kuhn-Tucker Equations *)
KTEqs[obj_ (* objective function *), cons_List (* constraints *), vars_List (*
variables *)] :=
Module[{consconvrule = {GreaterEqual[x_, y_] -> LessEqual[y - x, 0],
Equal[x_, y_] -> Equal[x - y, 0],
LessEqual[x_, y_] -> LessEqual[x - y, 0],
LessEqual[lb_, x_, ub_] -> LessEqual[(x - lb) (x - ub), 0],
GreaterEqual[ub_, x_, lb_] -> LessEqual[(x - lb) (x - ub), 0]} ,
x, y, lb, ub , stdcons, eqcons, ineqcons, lambdas, mus, lagrangian, eqs1,
eqs2, eqs3, alleqns, allvars },
(* Change constraints to Equal and LessEqual form with zero on the right-
hand side *)
stdcons = cons /. consconvrule;
(* Separate the equality constraints and the inequality constraints *)
eqcons = Cases[stdcons, Equal[_, 0]][[All, 1]];
ineqcons = Cases[stdcons, LessEqual[_, 0]][[All, 1]];
(* Define the Lagrange multipliers for the equality and inequality \
constraints *)
lambdas = Array[\[Lambda], Length[eqcons]];
mus = Array[\[Mu], Length[ineqcons]];
(* Define the Lagrangian *)
lagrangian = obj + lambdas.eqcons + mus.ineqcons;
(* The derivatives of the Lagrangian are equal to zero *)
eqs1 = Thread[ D[lagrangian, {vars}] == 0];
(* Lagrange multipliers for inequality constraints are \[GreaterEqual]0 to \
get minima *)
eqs2 = Thread[mus >= 0];
(* Lagrange multipliers for inequality constraints are 0 unless the \
constraint value is 0 *)
eqs3 = Thread[mus*ineqcons == 0];
(* Collect the equations *)
alleqns = Join[eqs1, eqs2, eqs3, cons];
(* Collect the variables *)
allvars = Join[vars, lambdas, mus];
(* Return the equations and the variables *)
{alleqns, allvars}
]
In[2]:= (* Convert logical expressions to rules *)
torules[res_] := If[Head[res] === And, ToRules[res], List @@ (ToRules /@ res)]
In[3]:= (* Find the global minima *)
KKTReduce[obj_(* objective function *), cons_List (* constraints *),
vars_List (* variables *)] :=
Block[{kkteqs, kktvars, red, rls, objs, allres, minobj, sel, ret, minred,
minredrls},
(* Construct the equations and the variables *)
{kkteqs, kktvars} = KTEqs[obj, cons, vars];
(* Reduce the equations *)
red = LogicalExpand @
Reduce[kkteqs, kktvars, Reals, Backsubstitution -> True];
(* Convert the Reduce results to rules (if possible ) *)
rls = torules[red];
(* If the conversion to rules was complete *)
If[Length[Position[rls, _ToRules]] == 0,
(* Calculate the values of the objective function *)
objs = obj /. rls;
(* Combine the objective function values with the rules *)
allres = Thread[{objs, rls}];
(* Find the minimum objective value *)
minobj = Min[objs];
(* Select the results with the minimum objective value *)
sel = Select[allres, #[[1]] == minobj &];
(* Return the minimum objective value with the corresponding rules *)
ret = {minobj, sel[[All, 2]]},
(* Else if the results were not completely converted to rules *)
(* Use MinValue to find the smallest objective function value *)
minobj = MinValue[{obj, red}, kktvars];
(* Use Reduce to find the corresponding results *)
minred =
Reduce[obj == minobj && red, kktvars, Reals, Backsubstitution -> True];
(* Convert results to rules, if possible *)
minredrls = torules[minred];
ret = If[
Length[Position[minredrls, _ToRules]] == 0, {minobj, minredrls}, {minobj,
minred}];
];
(* Remove excess nesting from result *)
If[Length[ret[[2]]] == 1 && Depth[ret[[2]]] > 1, {ret[[1]], ret[[2, 1]]},
ret]
]
In[4]:=
Examples
In[5]:= Minimize[{x^2 - y^2, Cos[x - y] >= 1/2, -5 <= x <= 5, -5 <= y <= 5}, {x, y}]
Out[5]= Minimize[{x^2 - y^2, Cos[x - y] >= 1/2, -5 <= x <= 5, -5 <= y <= 5}, {x, y}]
In[6]:= KKTReduce[x^2 - y^2, {Cos[x - y] >= 1/2, -5 <= x <= 5, -5 <= y <= 5}, {x, y}]
Out[6]= {-25 + 25/9 (-3 + \[Pi])^2, {{x -> -(5/3) (-3 + \[Pi]),
y -> 5, \[Mu][1] -> (20 (-3 + \[Pi]))/(3 Sqrt[3]), \[Mu][2] ->
0, \[Mu][3] ->
1/9 (9 + 6 Sqrt[3] Sin[5 + 5/3 (-3 + \[Pi])] -
2 Sqrt[3] \[Pi] Sin[5 + 5/3 (-3 + \[Pi])])}, {x -> 5/3 (-3 + \[Pi]),
y -> -5, \[Mu][1] -> (20 (-3 + \[Pi]))/(3 Sqrt[3]), \[Mu][2] ->
0, \[Mu][3] ->
1/9 (9 + 6 Sqrt[3] Sin[5 + 5/3 (-3 + \[Pi])] -
2 Sqrt[3] \[Pi] Sin[5 + 5/3 (-3 + \[Pi])])}}}
In[7]:= TimeConstrained[
Minimize[{(Subscript[x, 1] - Subscript[x, 2])^2 + (Subscript[x, 2] -
Subscript[x, 3])^4, (1 + Subscript[x, 2]^2) Subscript[x, 1] + Subscript[
x, 3]^4 - 3 == 0}, {Subscript[x, 1], Subscript[x, 2], Subscript[x,
3]}], 60]
Out[7]= $Aborted
In[8]:= AbsoluteTiming @
KKTReduce[(Subscript[x, 1] - Subscript[x, 2])^2 + (Subscript[x, 2] -
Subscript[x, 3])^4, {(1 + Subscript[x, 2]^2) Subscript[x, 1] + Subscript[
x, 3]^4 - 3 == 0}, {Subscript[x, 1], Subscript[x, 2], Subscript[x, 3]}]
Out[8]= {1.67203, {0, {{Subscript[x, 1] -> 1, Subscript[x, 2] -> 1,
Subscript[x, 3] -> 1, \[Lambda][1] -> 0}, {Subscript[x, 1] ->
AlgebraicNumber[Root[3 + 2 #1 + 2 #1^2 + #1^3 &, 1], {0, 1, 0}],
Subscript[x, 2] ->
AlgebraicNumber[Root[3 + 2 #1 + 2 #1^2 + #1^3 &, 1], {0, 1, 0}],
Subscript[x, 3] ->
AlgebraicNumber[
       Root[3 + 2 #1 + 2 #1^2 + #1^3 &, 1], {0, 1, 0}], \[Lambda][1] -> 0}}}}

Frank Kampas, 2018-08-11T17:18:58Z

Why NicholsGridLines in Nichols Plot are different than Matlab
http://community.wolfram.com/groups/-/m/t/1402189
The sensitivity lines (NicholsGridLines) in Mathematica's NicholsPlot are drawn differently from what Matlab draws and from what I learned in control systems lectures.
Here we can see the function P[s] = -15 (1+0.2 s/3+(s/3)^2)/(s (1+s/2) (1-1.6 s/5+(s/5)^2) (1+0.2 s/7+(s/7)^2)), which according to Wolfram does not enter the 3 dB sensitivity loop (just below (Pi, 0)), while according to Matlab it does. For some reason, Mathematica draws the Nichols grid lines as a reflection about the x axis compared to what I learned in control systems and what Matlab shows. In addition, Mathematica draws the plot around the phase Pi while Matlab draws it around -Pi. Why these differences?
![the sensitivity lines(NicholsGridLines) in NicholsPlot][1]
Is there an option for flipping these grid lines somehow? Or can someone explain why they are drawn as shown?
The code I used to draw the plot:
NicholsPlot[P[s],
GridLines -> {Range[-2 \[Pi], 2 \[Pi], 0.5 \[Pi]], Automatic},
StabilityMargins -> True, PlotRange -> {{0, 2 \[Pi]}, {-50, 60}},
NicholsGridLines -> {{-10^(3/20), -10^(6/20)}},
ScalingFunctions -> {"Radian", Automatic},
Ticks -> {Range[-2 \[Pi], 2 \[Pi], \[Pi]/2], Automatic}]
Dynamic[MousePosition["Graphics"]]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=%D7%AA%D7%9E%D7%95%D7%A0%D7%94%D7%9C%D7%9C%D7%90%D7%A9%D7%9D.png&userId=1402172

Eliav Louski, 2018-08-11T01:22:37Z

[GIF] This is Only a Test (Decagons from stereographic projections)
http://community.wolfram.com/groups/-/m/t/1380624
![Decagons formed from stereographically projected points][1]
**This is Only a Test**
This one is fairly straightforward. Form 60 concentric circles on the sphere centered at the point $(0,1,0)$. On each circle, take 10 equally-spaced points, stereographically project to the plane, and form a decagon from the resulting points. Now rotate the sphere and all the points on it around the axis $(0,1,0)$. The result (at least after adding some color) is this animation. This is a sort of discretized companion to my old still piece [_Dipole_][2].
Here's the code:
Stereo[p_] := p[[;; -2]]/(1 - p[[-1]]);
With[{r = 2, n = 10, m = 60,
cols = RGBColor /@ {"#2EC4B6", "#011627", "#E71D36"}},
Manipulate[
Graphics[
{EdgeForm[Thickness[.0045]],
Join[{Reverse[#[[1]]], #[[2]]}]
&[Partition[
Table[
{Blend[cols, θ/π],
EdgeForm[Lighter[Blend[cols, θ/π], .15]],
Polygon[
Table[Stereo[(Cos[θ] {0, 1, 0} +
Sin[θ] {Cos[t], 0, Sin[t]}).RotationMatrix[ϕ, {0, 1, 0}]],
{t, π/2, 5 π/2, 2 π/n}]]},
{θ, π/(2 m), π - π/(2 m), π/m}],
m/2]]},
PlotRange -> r, ImageSize -> 540, Background -> Blend[cols, 1/2]],
{ϕ, 0, 2 π/n}]
]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=stereo29.gif&userId=610054
[2]: https://shonkwiler.org/still-images/dipole

Clayton Shonkwiler, 2018-07-12T03:41:03Z

Solve 2 coupled 2nd ODEs and plot them with ParametricPlot?
http://community.wolfram.com/groups/-/m/t/1393597
I am interested in solving two coupled 2nd-order differential equations and plotting the solution using ParametricPlot. Can anyone help me resolve this issue? The solution is the trajectory of a particle under the influence of gravity, so I am also interested in animating the trajectory. I have attached the Mathematica script to this post.

Soumen Basak, 2018-07-28T10:13:02Z

Export Graphics3D images to PDF preserving a good resolution?
http://community.wolfram.com/groups/-/m/t/1400227
Let's create image
g = Graphics3D[{Line[{{-2, 0, 2}, {2, 0, 2}, {0, 0, 4}, {-2, 0, 2}}]}]
and then export it by `Export["test.pdf", g]`. It leads to a very unsatisfactory result [![snapshot of resulting image in PDF][1]][1]
The option `"AllowRasterization" -> False` does not help. How do I get a PDF or EPS file of this image in vector format?
[1]: https://i.stack.imgur.com/qaSfR.png

Rodion Stepanov, 2018-08-09T11:13:19Z

Use index.html files in Wolfram Cloud sites?
http://community.wolfram.com/groups/-/m/t/1250045
### Cross post on StackExchange: https://mathematica.stackexchange.com/questions/162265/using-index-html-files-in-wolfram-cloud-sites
---
Partly as an exercise, and partly so I could write data-science blog posts, I built a website builder using Mathematica that sets up sites in the cloud.
As an example site, here is a paclet server website I set up: https://www.wolframcloud.com/objects/b3m2a1.paclets/PacletServer/main.html
Unfortunately, to get this to work I had to remap my site's index.html file to a main.html file: when I try to view the site at index.html, either by explicitly routing there or by going to the implicit view, I am pushed back to the implicit view and given a 500 error.
Note that I cannot copy the index.html file to the site root i.e.,
CopyFile[
CloudObject["https://www.wolframcloud.com/objects/b3m2a1.paclets/PacletServer/index.html"],
CloudObject["https://www.wolframcloud.com/objects/b3m2a1.paclets/PacletServer", Permissions->"Public"]
]
as I get a `CloudObject::srverr` failure.
I can't even set up a permanent redirect like so:
CloudDeploy[
Delayed@HTTPRedirect[
"https://www.wolframcloud.com/objects/b3m2a1.paclets/PacletServer/main.html",
<|"StatusCode" -> 301|>
],
"server",
Permissions -> "Public"
]
CloudObject["https://www.wolframcloud.com/objects/b3m2a1.paclets/server"]
While this apparently worked, going to that site causes my browser to spin infinitely before finally giving up.
Moreover, all of these possible hacks are ugly, and I'd much rather work with the standard website setup.
How can I do this?

b3m2a1, 2017-12-19T17:59:55Z

Display a Character made by "Private Character Editor of Windows"?
http://community.wolfram.com/groups/-/m/t/1399382
I have struggled to display a private character in Mathematica, but have not managed it so far,
so I would like to know how to do it.
Please give me some guidance.
I made my private character using the "Private Character Editor" in Windows, and
verified that it displays correctly in several other programs. It works everywhere except in Mathematica.
As I am using the old Mathematica version 4.1, I would also like to know how to do this using low-level functions.
Thanks.

ichione ichiro, 2018-08-07T17:37:40Z

Find second derivative using D?
http://community.wolfram.com/groups/-/m/t/1399266
The code below gives an incorrect 2nd derivative. Figure 1 shows the original function xxSumS1[s, r] = funDervtveLAB[s, r, 0] when r = 200, while Figure 2 shows the 1st derivative of the original function `Exp[-noisepow*s]*laplaceLABs[s, r]` when r = 200. The 1st derivative is correct, since the derivative of a decreasing function is negative (Figure 2). However, the 2nd derivative `xxSumS3[s_, r_] = funDervtveLAB[s, r, 2]` when r = 200 seems to be incorrect: I expect it to be positive for all values of s, because it is the derivative of the 1st derivative, and the 1st derivative is an increasing function of s. Figure 3 shows the 2nd derivative of the original function.
![figure 1][1] ![figure 2][2] ![figure 3][3]
Clear["Global`*"]
a = 4.88; b = 0.43; etaLAB = 10.^(-0.1/10); etaNAB = 10.^(-21/10); etaTB = etaNAB;
PtABdB = 32; PtAB = 10^(PtABdB/10)*1*^-3; PtTBdB = 40; PtTB = 10^(PtTBdB/10)*1*^-3;
NF = 8; BW = 1*^7; noisepowdBm = -147 - 30 + 10*Log[10, BW] + NF;
noisepow = 0; RmaxLAB = 20000;
TBdensity = 1*^-6; ABdensity = 1*^-6;
alfaLAB = 2.09; alfaNAB = 2.09; alfaTB = 2.09;
mparameter = 3;
zetaLAB = PtAB*etaLAB; zetaNAB = PtAB*etaNAB; zetaTB = PtTB*etaTB;
height = 100; sinrdBrange = -10; sinr = 10.^(sinrdBrange/10);
probLoSz[z_] := 1/(1 + a*Exp[-b*(180/Pi*N[ArcTan[height/z]] - a)]);
probLoSr[r_] := 1/(1 + a*Exp[-b*(180/Pi*N[ArcTan[height/Sqrt[r^2 - height^2]]] -
a)]);
funLoS[z_] := z*probLoSz[z];
funNLoS[z_] := z*(1 - probLoSz[z]);
funLABNABs[z_, s_] := (1 - 1/(1 + s*zetaNAB*(z^2 + height^2)^(-alfaNAB/2)))*funNLoS[z];
funLABLABs[z_,
s_] := (1 - (mparameter/(mparameter + s*zetaLAB*(z^2 + height^2)^(-alfaLAB/2)))^mparameter)*funLoS[z];
funLABTBs[z_, s_] := z*(1 - 1/(1 + s*zetaTB*z^(-alfaTB)));
distnceLABNABs = (zetaLAB/zetaNAB)^(1/alfaLAB)*height^(alfaNAB/alfaLAB);
NearstInterfcLABNABs[r_] := Piecewise[{{height, r <= distnceLABNABs}, {(zetaNAB/zetaLAB)^(1/alfaNAB)* r^(alfaLAB/alfaNAB), r > distnceLABNABs}}];
NearstInterfcLABTBs[r_] := (zetaTB/zetaLAB)^(1/alfaTB)*r^(alfaLAB/alfaTB);
NearstInterfcLABLABs[r_] := r;
lowerlimitLABNABs[r_] := Sqrt[NearstInterfcLABNABs[r]^2 - height^2];
lowerlimitLABLABs[r_] := Sqrt[NearstInterfcLABLABs[r]^2 - height^2];
lowerlimitLABTBs[r_] := NearstInterfcLABTBs[r];
InteglaplaceLABNABs[s_?NumericQ, r_?NumericQ] := NIntegrate[funLABNABs[z, s], {z, lowerlimitLABNABs[r], RmaxLAB}];
InteglaplaceLABLABs[s_?NumericQ, r_?NumericQ] := NIntegrate[funLABLABs[z, s], {z, lowerlimitLABLABs[r], RmaxLAB}];
InteglaplaceLABTBs[s_?NumericQ, r_?NumericQ] := NIntegrate[funLABTBs[z, s], {z, lowerlimitLABTBs[r], RmaxLAB}];
laplaceLABNABs[s_, r_] := Exp[-2*Pi*ABdensity*InteglaplaceLABNABs[s, r]];
laplaceLABLABs[s_, r_] := Exp[-2*Pi*ABdensity*InteglaplaceLABLABs[s, r]];
laplaceLABTBs[s_, r_] := Exp[-2*Pi*TBdensity*InteglaplaceLABTBs[s, r]];
laplaceLABs[s_, r_] :=
laplaceLABNABs[s, r]*laplaceLABLABs[s, r]*laplaceLABTBs[s, r];
funDervtveLAB[s_, r_, kk_] := D[Exp[-noisepow*s]*laplaceLABs[s, r], {s, kk}];
xxSumS1[s_, r_] = funDervtveLAB[s, r, 0]; (*original function*)
xxSumS2[s_, r_] = funDervtveLAB[s, r, 1]; (* 1st derivative*)
xxSumS3[s_, r_] = funDervtveLAB[s, r, 2]; (*2nd derivative*)
xxSumR1[r_] := xxSumS1[s, r] /. s -> (mparameter*sinr/zetaLAB*r^alfaLAB);
xxSumR2[r_] := xxSumS2[s, r] /. s -> (mparameter*sinr/zetaLAB*r^alfaLAB);
xxSumR3[r_] := xxSumS3[s, r] /. s -> (mparameter*sinr/zetaLAB*r^alfaLAB);
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig-1.jpg&userId=1350020
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig-2.jpg&userId=1350020
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fig-3.jpg&userId=1350020

mohamed alzenad, 2018-08-07T20:02:23Z

Solve a PDE with boundary conditions (chemical adsorption in fixed beds)?
http://community.wolfram.com/groups/-/m/t/1398247
Dear Wolfram team:
I have been trying for weeks to solve a system of 2 partial differential equations describing the adsorption of a chemical substance on a fixed bed (for example, a column of activated carbon). The 2 equations, taken from McCabe (1993), are the following:
![Description of eq 1][1]
![Description of eq 2][2]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=EQ1.png&userId=1020580
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=EQ2.png&userId=1020580
Unfortunately I cannot get past the general solution (with arbitrary constants), because the Mathematica program fails when I try to impose boundary conditions. Maybe I am using the wrong command or syntax, or maybe there are too many or too few boundary conditions.
I have attached the program, where I tried to simplify the problem by combining both equations into a third.
Thank you in advance for your help.
Best regards,
Alberto Silva

Alberto Silva Ariano, 2018-08-06T01:04:00Z

[HACKATHON] Hardware Verification Workflow with SCR1 in Wolfram Language
http://community.wolfram.com/groups/-/m/t/1400440
## The project ##
We connected Wolfram Mathematica with the [SCR1 microcontroller core][1]. For this purpose, we developed a driver for SCR1 based on the Wolfram Device Framework. In our project, SCR1 is not a physical hardware device but the RTL code of a processor written in [SystemVerilog][2].
A chip design workflow is a complicated multistage process. At the design stage, engineers describe their solutions at the register-transfer level (RTL) using RTL languages such as SystemVerilog. At the verification stage, they have to prove that the design is correct, and this is the most complex phase of development. Wolfram Mathematica can help in verification by providing comprehensive analytical and visualisation features.
In the project, we used SCR1 as an example of RTL code because SCR1 is an open-source microcontroller core compatible with the RISC-V processor architecture, which is also open. The source files of SCR1 can be found at http://github.com/syntacore/scr1. Our solution allows SCR1 to be substituted with any other RTL design, so the project is extensible, and we may say that we built a general workflow involving Wolfram Mathematica. The project aims to demonstrate a potential application of Wolfram Mathematica in the semiconductor industry.
All code is posted on GitHub: [https://github.com/ckorikov/wolfram_russianhack18_scr1][3]
![SCR1][4]
## What it can do ##
The Wolfram Device Framework creates symbolic objects that represent external devices. In our case, this is the SCR1 processor. It is the frontend of our system. A description of the backend is in the next section.
SetDirectory[NotebookDirectory[]];
Needs["SCR1Device`"];
device = DeviceOpen["SCR1"]
![Device][5]
The SCR1 symbolic object has properties and three groups of methods — read, write and execute. In our project, users can interact with general purpose registers and memory of the SCR1. For this demonstration we additionally provided access to some wires such as a memory data bus and the branching logic in the processor pipeline. Examples are below.
**Properties.** There are 4 properties of the SCR1 symbolic object:
- `State`,
- `Clock`,
- `IPC` (instruction program counter),
- `MAX_MEM` (maximal memory).
The state property reflects the state of the processor and can have the following values: `IDLE`, `WORK` and `FINISHED`. This property is `WORK` after reset. When a program completes, the state transitions to `FINISHED`. `Clock` contains the number of clock-signal ticks since the simulation started. `IPC` shows the value of the IPC register, which is the address of the currently executed instruction. `MAX_MEM` is the size of memory in bytes. These properties are read-only and can be accessed by name as follows.
device["MAX_MEM"]
32768
**Reading methods.** The general format of these commands is `DeviceRead[device, "CMD"]`. Instead of `CMD`, use one of the following commands.
- `STATE`: read the state of SCR1 (`State`, `Finished`, `Clock`, `IPC`).
- `REGS`: read the list of register values (from 1 to 32).
- `MEM`: read the list of bytes from memory.
- `BRANCH`: read the state of branching logic (`IPC`, `Jump`, `Branch_taken`, `Branch_not_taken`, `JB_addr`).
- `DBUS`: read the memory data bus (Address, Bytes).
**Writing methods.** The general format of these commands is `DeviceWrite[device, "CMD"]`. Instead of `CMD`, use one of the following commands.
- `REGS`: modify a value of a register.
- `MEM`: modify a value of a memory cell.
**Execution methods.** The general format of these commands is `DeviceExecute[device, "CMD"]`. Instead of `CMD`, use one of the following commands.
- `RESET`: reset the processor.
- `HARD_RESET`: reset the processor and internal counters of the simulator (such as simulation time and the clock counter).
- `LOAD`: load a program to memory and reset the processor.
- `STEP`: perform one tick of the clock signal.
- `RUN`: make steps until the end of the program.
- `RUN_UNTIL_IPC`: make steps until a specific IPC value.
- `TRACE_IPC`: execute `RUN` command and return a list of IPC values.
## Basic examples ##
###1. Program loading, soft and hard resets###
To load a program execute the following command. An argument is a path to the program file.
DeviceExecute[device,
"LOAD",
"./scr1_programs/dhrystone21.bin"
];
To reset the processor use `RESET` and `HARD_RESET` commands. Hard reset is soft reset + simulator cleanup.
###2. Read data about SCR1###
These are examples of reading commands output.
Dataset@DeviceRead[device, "STATE"]
![State][6]
Here, `Finished` is a flag which is 1 if SCR1 reaches the end of the program otherwise is 0. Other output values are the same as symbolic object properties.
Dataset@DeviceRead[device, "BRANCH"]
![Branch][7]
Structures like if–then–else create branches in the code execution flow. The `BRANCH` command returns information about the current branching state. `Jump`, `Branch_taken`, `Branch_not_taken` are flags: they are 1 if the instruction is a jump, or if a branch has been detected and taken or not taken, respectively. `JB_addr` is the address of the next instruction if a jump or branch has occurred.
Dataset@DeviceRead[device, "DBUS"]
![DBUS][8]
Data and program instructions are located in memory. A processor fetches them through a memory bus. `DBUS` returns an address of the memory cell and the size of the requested data in bytes.
Dataset@MapIndexed[
{#2[[1]], BaseForm[#1, 16], BaseForm[#1, 2]} &,
DeviceRead[device, "REGS"]
]
![Registers][9]
Any computations on the processor involve registers. We can read their values. This is an example of reading values of the register in binary and hexadecimal forms.
BaseForm[#, 16] &@DeviceRead[device, {"MEM", 512, 100}]
![Memory][10]
Also, we can read the contents of the memory. The first argument is the address of a cell. The second is the number of cells.
###3. Write data to memory and registers###
Write the value to the memory and check it.
DeviceWrite[device, {"MEM", 10000, 10}];
DeviceRead[device, {"MEM", 10000, 1}]
{10}
The first argument is the address of a memory cell. The second one is the value.
DeviceWrite[device, {"REGS", 5, 30}];
DeviceRead[device, "REGS"]
{0, 0, 0, 0, 30, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
The first argument is a register index. The second one is the value.
###4. Program execution on SCR1###
There are several functions which drive program execution. The first is `STEP`, which advances the simulator by one clock and returns the clock count; it can be called repeatedly until the end of the program, after which the core needs to be reset. We can use the `NEXT_IPC` function if we would like to run SCR1 until the next instruction. Additionally, SCR1 may be run until a particular IPC value is encountered with the `RUN_UNTIL_IPC` command. If we would like to run SCR1 until the program ends, we can use the `RUN` function. If the program prints something to the display, the output is redirected to the `src1_output.txt` file.
Framed@Import["src1_output.txt"]
This is an example of the output.
HELL0 SCR1
Dhrystone Benchmark, Version 2.1 (Language: C)
Program compiled without 'register' attribute
Execution starts, 500 runs through Dhrystone
Execution ends
Final values of the variables used in the benchmark:
Int_Glob: 5
should be: 5
Bool_Glob: 1
should be: 1
Ch_1_Glob: A
should be: A
Ch_2_Glob: B
should be: B
Arr_1_Glob[8]: 7
should be: 7
Arr_2_Glob[8][11]: 510
should be: Number_Of_Runs + 10
Ptr_Glob->
Ptr_Comp: 15412
should be: (implementation-dependent)
Discr: 0
should be: 0
Enum_Comp: 2
should be: 2
Int_Comp: 17
should be: 17
Str_Comp: DHRYSTONE PROGRAM, SOME STRING
should be: DHRYSTONE PROGRAM, SOME STRING
Next_Ptr_Glob->
Ptr_Comp: 15412
should be: (implementation-dependent), same as above
Discr: 0
should be: 0
Enum_Comp: 1
should be: 1
Int_Comp: 18
should be: 18
Str_Comp: DHRYSTONE PROGRAM, SOME STRING
should be: DHRYSTONE PROGRAM, SOME STRING
Int_1_Loc: 5
should be: 5
Int_2_Loc: 13
should be: 13
Int_3_Loc: 7
should be: 7
Enum_Loc: 1
should be: 1
Str_1_Loc: DHRYSTONE PROGRAM, 1'ST STRING
should be: DHRYSTONE PROGRAM, 1'ST STRING
Str_2_Loc: DHRYSTONE PROGRAM, 2'ND STRING
should be: DHRYSTONE PROGRAM, 2'ND STRING
Number_Of_Runs= 500, HZ= 1000000
Time: begin= 15331, end= 165400, diff= 150069
Microseconds for one run through Dhrystone: 300
Dhrystones per Second: 3331
- tb/scr1_top_tb_axi.sv:314: Verilog $finish
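As a minimal sketch of the execution commands described above (assuming the `device` object opened earlier; an illustrative sketch, not verbatim project code):

```mathematica
(* Single-step a few clocks, then run the loaded program to completion *)
Do[DeviceExecute[device, "STEP"], {5}];
DeviceExecute[device, "RUN"];
device["State"]  (* expected to be "FINISHED" once the program ends *)
```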
## Additional examples ##
###1. Memory maps of programs###
In this example, we show a grid of memory maps for programs from the `scr1_programs` directory. A memory map is a matrix of memory cells where each element is highlighted depending on the value of the cell.
![Memory maps][12]
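A minimal sketch of how such a memory map could be produced, assuming the `device` object and the `MEM` read command described above; the address range here is an arbitrary example:

```mathematica
(* Read a block of memory and render it as a colored matrix, one cell per byte *)
mem = DeviceRead[device, {"MEM", 0, 1024}];
ArrayPlot[Partition[mem, 32], ColorFunction -> "TemperatureMap"]
```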
###2. Execution graph of programs###
We can visualise the trace of program execution. We used a directed graph whose vertices are instructions which are placed in the order of how they were executed. We can see that using the graph it is easy to find jumps in programs.
![Execution graph xor][13]
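A sketch of how such a graph could be built from the `TRACE_IPC` command described above (an assumption about the exact construction, not the project's code):

```mathematica
(* Connect consecutive IPC values into directed edges and plot the result *)
trace = DeviceExecute[device, "TRACE_IPC"];
Graph[DirectedEdge @@@ Partition[trace, 2, 1],
 VertexLabels -> "Name", GraphLayout -> "LayeredDigraphEmbedding"]
```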
###3. Call graph###
There are assembler dumps in the `scr1_programs` directory. We use these dumps to map instructions to function names. In this example, we parse the assembler files, find the address ranges of each function, and use them for the mapping.
![Call graph dhrystone][14]
###4. Transactions to memory###
This example shows how to trace data manually with Wolfram Mathematica. We also calculate a list of the addresses most frequently accessed by SCR1 for a particular program (dhrystone21).
![DBUS Top Dhrystone][15]
###5. Develop new devices: branch predictor###
Our solution provides loads of data about the core. Engineers can use this data to design or optimise modules. For instance, we can get information about branching of SCR1 and use this data for developing a branch predictor module.
The purpose of the branch predictor is to improve the flow in the instruction pipeline. Branch predictors play a critical role in achieving high performance in many modern pipelined processors.
Here we use machine learning methods, a neural network, to build a predictor.
![NN Classifier][16]
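A hedged sketch of the idea: train `Classify` on (branch address, recent outcome history) pairs. The training data here is synthetic stand-in data; real observations would come from repeated `DeviceRead[device, "BRANCH"]` calls during a run, and the feature layout is an assumption:

```mathematica
(* Synthetic branch observations: features are a branch address plus the last
   four outcomes; the label is whether the branch is taken *)
trainingData = Table[
   Module[{addr = RandomInteger[{0, 255}], hist = RandomInteger[{0, 1}, 4]},
    Join[{addr}, hist] -> If[Total[hist] >= 2, "Taken", "NotTaken"]],
   {200}];
predictor = Classify[trainingData, Method -> "NeuralNetwork"];
predictor[{17, 1, 1, 0, 1}]
```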
## How it works ##
The driver encapsulates the lower-level interactions with SCR1. We cannot use SystemVerilog in Wolfram Mathematica directly, so we converted the SCR1 code to C++ with Verilator (https://www.veripool.org/wiki/verilator), an open-source Verilog/SystemVerilog simulator. We wrapped the generated C++ code with functions to communicate with Wolfram Mathematica through Wolfram LibraryLink. The full scheme of the project is below.
![System][17]
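For illustration, loading such a Verilator-generated bridge through LibraryLink might look like the following; the library name `scr1_bridge` and function name `scr1_step` are hypothetical, not the actual names used in the project:

```mathematica
(* Hypothetical names: a "scr1_bridge" library exposing a zero-argument
   step function that returns the clock count *)
lib = FindLibrary["scr1_bridge"];
scr1Step = LibraryFunctionLoad[lib, "scr1_step", {}, Integer];
scr1Step[]
```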
## Conclusions ##
Over the course of 24 hours our team built a prototype of a hardware verification workflow with the SCR1 microcontroller. We implemented:
- a device driver for the SCR1 processor based on the Wolfram Device Framework;
- a C++ bridge between Wolfram Mathematica and the C++ code generated by Verilator;
- examples of using this system for verifying programs and hardware;
- the design of a branch predictor.
Our verification solution provides register and memory access and a step-by-step debugger (in clock or instruction mode). To build a powerful hardware debugger, it would be necessary to add the ability to dump arbitrary signals for any RTL design. The latter is a potential topic for future work.
[1]: https://syntacore.com/page/products/processor-ip/scr1
[2]: https://en.wikipedia.org/wiki/SystemVerilog
[3]: https://github.com/ckorikov/wolfram_russianhack18_scr1
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=scr1.png&userId=1399750
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4190device.png&userId=1399750
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=state.png&userId=1399750
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=branch.png&userId=1399750
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=dbus.png&userId=1399750
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=registers.png&userId=1399750
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=memory.png&userId=1399750
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=memory_maps.png&userId=1399750
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2389execution_graph_xor.png&userId=1399750
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1756call_graph_dhrystone.png&userId=1399750
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4058dbus_top_dhrystone.png&userId=1399750
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=8444nn_classifier.png&userId=1399750
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=scheme.png&userId=1399750

Constantine Korikov, 2018-08-09T14:40:31Z

Perform calculations on y-axis values?
http://community.wolfram.com/groups/-/m/t/1396972
Sometimes when plotting functions, you want to apply some operation to the y-axis values. This comes up when you want to plot, say, decibels vs. frequency. It is not clear how to perform an operation like 10 Log10 on the y-axis values. Is there a straightforward way to do this? Incidentally, LogPlot just gives you the y-axis in log form.

Jesse Sheinwald, 2018-08-03T16:34:54Z
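One approach to the question above (a sketch, using a hypothetical transfer function `f`): apply `10 Log10` to the function values yourself before plotting, and use `LogLinearPlot` for the logarithmic frequency axis:

```mathematica
(* Plot gain in dB versus frequency on a log-frequency axis *)
f[freq_] := 1/(1 + (freq/1000)^2);  (* hypothetical example response *)
LogLinearPlot[10 Log10[f[freq]], {freq, 1, 10^5},
 AxesLabel -> {"Frequency (Hz)", "Gain (dB)"}]
```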