Community RSS Feed
https://community.wolfram.com
RSS feed for Wolfram Community showing discussions tagged Data Science, sorted by most active.

Get historical PE-ratios with FinancialData?
https://community.wolfram.com/groups/-/m/t/1550095
I have tried to get historical PE-ratios with FinancialData like this:
FinancialData["MSFT", "PERatio", {2000, 1, 1}]
But with this I only get one value, i.e. the latest PE-ratio. I had expected a whole list of them.
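For what it's worth, a {start, end} date range as the third argument is worth trying; a sketch (the end date is an arbitrary choice, and whether "PERatio" supports date ranges at all is not confirmed here):

```mathematica
(* sketch: a date range may return the series rather than a single
   value; the end date below is arbitrary, not from the post *)
FinancialData["MSFT", "PERatio", {{2000, 1, 1}, {2018, 11, 1}}]
```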
What am I doing wrong?
Laurens Wachters, 2018-11-14T13:08:49Z

How do I find the list of values for an EntityProperty qualifier?
https://community.wolfram.com/groups/-/m/t/1555220
Cross-Post on [StackExchange](https://mathematica.stackexchange.com/q/186262/38205)
---
I have an [`EntityProperty`](https://reference.wolfram.com/language/ref/EntityProperty.html) and [I know how to get its list of qualifiers](https://mathematica.stackexchange.com/q/186262/38205). Now how do I figure out the possible values these can take programmatically? Here's an example to get us started:
EntityValue[
EntityProperty["Country", "ExternalBalance"],
"Qualifiers"
]
{"CurrencyUnit", "Date", "PercentOfGDP", "TradeSection"}
Now, how do I programmatically determine what values `"CurrencyUnit"`, `"TradeSection"`, and `"PercentOfGDP"` can take?
b3m2a1, 2018-11-18T20:46:01Z

Visualizing Text Sentiments in RGBColor and ChromaticityPlot
https://community.wolfram.com/groups/-/m/t/1552094
Today I got an idea: why not show the sentiment values of texts from different authors all together, in one shot?
So I wrote a few lines of code in Mathematica 11.3; see the attached notebook.
The idea is very natural. As people say, every color represents an emotion, and vice versa.
So I use "warm" red for positive, "cool" blue for negative, and green for neutral.
Let's start with an example from Alice in Wonderland.
sentence = TextSentences[ExampleData[{"Text", "AliceInWonderland"}]];
sentiment = Classify["Sentiment", #, "Probabilities"] & /@ sentence;
width = Floor[N[Sqrt[Length@sentiment]], 1];
plot = ArrayPlot[
Partition[Take[RGBColor @@@ sentiment, width^2], width],
ImageSize -> 300, PlotLabel -> "Alice in Wonderland"]
![enter image description here][1]
Then I got all texts plot in various color.
![enter image description here][2]
Result Discussion
----------------------
1. Typical "Neutral" sentiments: "Origin of Species", "Declaration of Independence"
![enter image description here][3]
![enter image description here][4]
2. Typical "Positive + Negative" (maybe more dramatic) sentiments: Shakespeare's "Hamlet" and "Sonnets"
![enter image description here][5]
![enter image description here][6]
3. Non-English text: the sentiment classifier may not be very accurate.
![enter image description here][7]
![enter image description here][8]
Take a glance: does each text's color match the color in your mind?
====================================================
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=example1.png&userId=569571
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=All.png&userId=569571
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=TextOriginOfSpecies.png&userId=569571
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=TextDeclarationOfIndependence.png&userId=569571
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=TextHamlet.png&userId=569571
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=TextShakespearesSonnets.png&userId=569571
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=TextFaustI.png&userId=569571
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=TextDonQuixoteISpanish.png&userId=569571
Frederick Wu, 2018-11-16T09:51:40Z

What is the difference between MatrixPlot and Image?
https://community.wolfram.com/groups/-/m/t/1553133
So I'm reducing some data and displaying it.
I know that the Import command imports the individual pixel values from the picture, but when displaying the image I have the option to use MatrixPlot or Image.
Image gives me a white image, which I know is wrong, and MatrixPlot gives me a black image, which seems more right.
My question is: what's the difference between the two? I thought Image assigned each matrix element (i, j) to a pixel value, creating an image from the matrix.
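One difference worth knowing (a hedged note, not from the original post): `Image` interprets real matrix entries as channel values in [0, 1], so data on a 0-255 scale renders all white, while `MatrixPlot` rescales the data automatically. A sketch:

```mathematica
(* values above 1 clip to white in Image; Rescale maps the matrix
   into [0, 1] first, similar to MatrixPlot's automatic scaling *)
mat = RandomReal[{0, 255}, {24, 24}];
Image[Rescale[mat]]
```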
How do I create an image from the extracted pixel values? I have tried (renaming `Mean_Picture_Value`, which is not a valid symbol name since `_` denotes a pattern in the Wolfram Language):
meanPictureValue = Mean[Biases];
Image[meanPictureValue]
It's giving me a white image, which I know is wrong.
Pruthvi Acharya, 2018-11-17T02:30:29Z

Plot multiple histograms in one image, where the bins are real numbers?
https://community.wolfram.com/groups/-/m/t/1550038
Hi,
I tried to plot the attached data with no success; can someone please advise?
I tried
Histogram[Flatten[Table[i, {i, Length[bincounts]}, {j, bincounts[[i]]}]]]
but it plots the X axis from 1 to 20 instead of -0.99 to -0.64.
Thanks!
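One possible approach (a sketch, not from the post; it uses the `bins` and `data` lists defined just below): repeat each real-valued bin position according to its count, so `Histogram` sees actual samples on the real axis:

```mathematica
(* sketch: expand the counts into samples located at the real bin
   values, then histogram with a 0.01 bin width *)
samples = Flatten[MapThread[ConstantArray, {bins, data}]];
Histogram[samples, {0.01}]
```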
data = {0,0,0,0,0,0,0,0,0,0,2,0,1,2,2,5,1,3,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
data2 = {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 3, 1, 4, 5, 1, 3, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
bins = {-0.99, -0.98, -0.97, -0.96, -0.95, -0.94, -0.93, -0.92, -0.91, -0.9, -0.89, -0.88, -0.87, -0.86, -0.85, -0.84, -0.83, -0.82, -0.81, -0.8, -0.79, -0.78, -0.77, -0.76, -0.75, -0.74, -0.73, -0.72, -0.71, -0.7, -0.69, -0.68, -0.67, -0.66, -0.65, -0.64};
Yossi Rab, 2018-11-14T09:46:23Z

Get the right list of tensors data to port "Input" in NetTrain?
https://community.wolfram.com/groups/-/m/t/1549196
Dear all, my input and target data are prepared as tensors filled from a file. Dimensions[input] yields {398, 6, 24, 24}: the first index is the training-set number, the second is the channel (6), and 24 x 24 is my "image" size. The target data is prepared the same way, as a {398, 6} tensor.
When I try to invoke NetTrain, it complains:
NetTrain[Net,{"Inputs"->input,"Targets"->target},{MaxTrainingRounds->1}]
> NetTrain: Data provided to port "Input" should be a list of 6×24×24 3-tensors.
**Obviously my understanding of tensors and lists is flawed: why is {398,6,24,24} not a list of {6,24,24} tensors?**
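{398, 6, 24, 24} should indeed behave as a list of 398 6×24×24 tensors, so one thing worth checking (an assumption, not confirmed by the post) is the data specification itself: the association keys must match the net's actual port names, and the rule form sidesteps naming them entirely:

```mathematica
(* sketch: the rule form inputs -> targets avoids guessing port
   names; N ensures packed real arrays rather than integers *)
NetTrain[Net, N[input] -> N[target], MaxTrainingRounds -> 1]
```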
The network is defined as follows, just two fully connected layers.
linear1 = LinearLayer[{6,24,24}, "Input"->{6,24,24}];
linear1 = NetInitialize[linear1];
linear2 = LinearLayer[{6}, "Input"->{6,24,24}];
linear2 = NetInitialize[linear2];
Net = NetChain[{linear1, linear2}];
Markus Lenzing, 2018-11-13T12:57:04Z

[WSS18] Introducing Hadamard Binary Neural Networks
https://community.wolfram.com/groups/-/m/t/1374288
## Introducing Hadamard Binary Neural Networks
Deep neural networks are an important tool in modern applications, and accelerating their training has become a major challenge: as the complexity of our training tasks increases, so does the computation. For sustainable machine learning at scale, we need distributed systems that can leverage the available hardware effectively. This research aims to exceed the current state-of-the-art performance of neural networks by introducing a new architecture optimized for distributability. The scope of this work is not limited to optimizing neural network training for large servers; it also aims to bring training to heterogeneous environments, paving the way for a distributed peer-to-peer mesh computing platform that can harness the wasted resources of idle computers in a workplace for AI.
#### Network Architecture and Layer Evaluator
Here I describe the network and the Layer Evaluator, to give an in-depth understanding of the network architecture.
Note:
- **hbActForward** : Forward binarization of Activations.
- **hbWForward** : Forward binarization of Weights.
- **binAggression** : Aggressiveness of binarization (Vector length to binarize)
Set up the Layer Evaluator.
layerEval[x_, layer_Association] := layerEval[x, Lookup[layer, "LayerType"], Lookup[layer, "Parameters"]];
layerEval[x_, "Sigmoid", param_] := 1/(1 + Exp[-x]);
layerEval[x_, "Ramp", param_] := Abs[x]*UnitStep[x];
layerEval[ x_, "LinearLayer", param_] := Dot[x, param["Weights"]];
layerEval[ x_, "BinLayer", param_] := Dot[hbActForward[x, binAggression], hbWForward[param["Weights"], binAggression]];
layerEval[x_, "BinarizeLayer", param_] := hbActForward[x, binAggression];
netEvaluate[net_, x_, "Training"] := FoldList[layerEval, x, net]; (* keep intermediate activations *)
netEvaluate[net_, x_, "Test"] := Fold[layerEval, x, net]; (* final output only *)
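The binarization helpers `hbActForward` and `hbWForward` are referenced but not defined in this excerpt; the following is a minimal stand-in using plain sign binarization (deliberately not the Hadamard scheme described here) just so `layerEval` can be executed:

```mathematica
(* stand-in only: plain sign binarization, ignoring binAggression;
   this does NOT reproduce the Hadamard scheme from the post *)
hbActForward[x_, aggression_] := Sign[x] /. (0 -> 1);
hbWForward[w_, aggression_] := Sign[w] /. (0 -> 1);
binAggression = 16;  (* arbitrary placeholder *)
```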
Define the network
net = {<|"LayerType" -> "LinearLayer", "Parameters" -> <|"Weights" -> w0|>|>,
<|"LayerType" -> "Ramp"|>,
<|"LayerType" -> "BinarizeLayer"|>,
<|"LayerType" -> "BinLayer", "Parameters" -> <|"Weights" -> w1|>|>,
<|"LayerType" -> "Ramp"|>,
<|"LayerType" -> "BinLayer", "Parameters" -> <|"Weights" -> w2|>|>,
<|"LayerType" -> "Sigmoid"|> };
MatrixForm@netEvaluate[net, input[[1 ;; 3]], "Test" ] (* Giving network inputs *)
![enter image description here][1]
#### Advantages of Hadamard Binarization
- Faster convergence with respect to vanilla binarization techniques.
- Consistently about 10 times faster than the CMMA algorithm.
- The angle between randomly initialized vectors and their binarized versions is preserved in high-dimensional spaces (approximately 37 degrees as the vector length approaches infinity).
- Reduced communication times for distributed deep learning.
- Optimization of im2col algorithm for faster inference.
- Reduction of model sizes.
### Accuracy analysis
![enter image description here][2]
As seen above, the HBNN model gives 87% accuracy, whereas the BNN (Binary Neural Network) model gives only 82%. These networks were trained for only 5 epochs.
### Performance Analysis
X axis: matrix size; Y axis: time (seconds)
**CMMA vs xHBNN**
![enter image description here][3]
**MKL vs xHBNN**
![enter image description here][4]
### Visualize weight histograms
![enter image description here][5]
It is evident that the Hadamard BNN preserves the distribution of the weights much better. Note that the BNN graph has a logarithmic vertical axis, for representation purposes.
### Demonstration of the angle preservation ability of the HBNN architecture
![enter image description here][6]
Binarization approximately preserves the direction of high-dimensional vectors. The figure above demonstrates that the angle between a random vector (drawn from a standard normal distribution) and its binarized version converges to about 37 degrees as the dimension of the vector goes to infinity. Relative to the nearly 90-degree angles between independent random vectors in high dimensions, this angle is small.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=tempz.png&userId=1302993
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=accuracy.png&userId=1302993
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6613xCma.png&userId=1302993
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=xMKL.png&userId=1302993
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=histogram.png&userId=1302993
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=anglepreserve.png&userId=1302993
Yash Akhauri, 2018-07-10T22:12:26Z

Using FindFit
https://community.wolfram.com/groups/-/m/t/534384
I am trying to find a, b, and c from this equation, using an array of x and y values:
![enter image description here][1]
This is my input in Mathematica:
FindFit[{{171.29, 6}, {171.63, 12.1}, {171.86, 24.2}, {172.06, 48.3}},
 {x == (((Sqrt[a^2 + 8 a y] - a) (b - c))/(4 y)) + c},
 {a, b, c}, {x, y}]
But I get this message: FindFit::fitc: Number of coordinates (1) is not equal to the number of variables (2).
Please advise how to solve for a, b, c.
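A possible repair (a sketch, under the assumption that each data pair is {x, y} and that x is to be modeled as a function of y): swap the pairs to {y, x} and fit the right-hand side directly, since FindFit expects a model expression rather than an equation:

```mathematica
(* sketch: FindFit wants {independent, dependent} pairs and a model
   expression in one variable *)
data = {{171.29, 6}, {171.63, 12.1}, {171.86, 24.2}, {172.06, 48.3}};
FindFit[Reverse /@ data,
 ((Sqrt[a^2 + 8 a y] - a) (b - c))/(4 y) + c,
 {a, b, c}, y]
```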
[1]: /c/portal/getImageAttachment?filename=5562equation.png&userId=534369
Damodaran Achary, 2015-07-22T22:09:16Z