Community RSS Feed
https://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Wolfram Language, sorted by active.

[WSS24] Modeling of interactions in aqueous electrolyte with peptide additive
https://community.wolfram.com/groups/-/m/t/3210091
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/158c2618-22ea-4c32-aada-f997d1d9cbdf
Xuelian Liu, 2024-07-10T02:19:19Z

What does TrapezoidalRule at NIntegrate really do?
https://community.wolfram.com/groups/-/m/t/3290658
Hello everybody,
I have a question regarding the "TrapezoidalRule" method of the built-in NIntegrate function in Mathematica.
If I calculate the numeric(!) integral of f(x)=x^2 from 0 to 3 with 5 trapezoids by hand, I get a result of 9.18. If I use the implemented method "TrapezoidalRule" with the option "Points" -> 5, it gives me exactly 9.0.
So I suspect that I do not fully understand what Mathematica does when using this rule. I have tried a lot of things (changing the WorkingPrecision, setting MaxRecursion to 0, etc.), but it always comes up with the "real" (analytical) result.
Can anyone please explain what's going on?
Thanks a lot,
JJJ
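For reference, the by-hand value can be reproduced outside Mathematica; a quick Python cross-check of the composite trapezoidal rule (same f, interval, and panel count as above):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n panels on [a, b]."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h / 2 * (f(a) + 2 * interior + f(b))

approx = trapezoid(lambda x: x**2, 0.0, 3.0, 5)
print(round(approx, 2))  # 9.18, versus the exact integral 9.0
```

This confirms the hand computation of 9.18, so the discrepancy lies in what NIntegrate does with the rule, not in the arithmetic.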
f[x_] := x^2;
a = 0; b = 3;
\[CapitalDelta]x[n_] := (b - a)/n;
left[n_] := \!\(
\*UnderoverscriptBox[\(\[Sum]\), \(i = 0\), \(n - 1\)]\(f[
a + i*\[CapitalDelta]x[n]]*\[CapitalDelta]x[n]\)\);
right[n_] := \!\(
\*UnderoverscriptBox[\(\[Sum]\), \(i = 1\), \(n\)]\(f[
a + i*\[CapitalDelta]x[n]]*\[CapitalDelta]x[n]\)\);
trap[n_] := (left[n] + right[n])/2;
trap2[n_] := \[CapitalDelta]x[n]/2 (f[a] + 2 \!\(
\*UnderoverscriptBox[\(\[Sum]\), \(i = 1\), \(n - 1\)]\(f[
a + i*\[CapitalDelta]x[n]]\)\) + f[b]);
{left[5.], right[5.], trap[5.], trap2[5.]}
NIntegrate[f[x], {x, a, b},
Method -> {"TrapezoidalRule", "Points" -> 3}, WorkingPrecision -> 10,
MaxRecursion -> 0]
Joachim Janezic, 2024-10-04T13:08:10Z

Compiling from a Turing machine to a cyclic tag system
https://community.wolfram.com/groups/-/m/t/3290282
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/c9ceb30d-1ded-4cf8-bb8b-1e06aa47ecf2
Eric Parfitt, 2024-10-03T21:09:08Z

Trouble using Wolfram notebooks with a screen reader on Windows
https://community.wolfram.com/groups/-/m/t/3057413
Hello,
I'm using screen reading software packages such as JAWS For Windows or NVDA. For that reason, I have to do everything via my PC keyboard, without using the mouse. This means that I can only read text, not graphics, and that graphics are only accessible to me if they have a meaningful "alt" tag attached to them.
Even though Wolfram Alpha's accessibility has improved over the past couple of years and I can use it to some extent, I think Wolfram notebooks must either be totally inaccessible for people who use a screen reader or I have no idea how I'm supposed to work with them when I can't use the mouse and when I can't read graphics.
I have set up a basic account on Wolfram Cloud.
I want to evaluate the entire contents of the file downloaded from a particular webpage (I can post the link later if someone's interested). It's a file with an ".m" extension.
This means that when I open a new notebook, I'm supposed to copy the contents of the whole file into my new notebook and then choose "Evaluate all cells".
I logged into my Wolfram Cloud account and I chose the button labeled "New computational notebook".
I thought I was supposed to reach an edit field where I would paste the contents of the "main.m" file. But I did not find any such thing on the webpage. And if it's something other than a conventional edit field of the kind that I would fill in on a web form, then I don't know where I'm supposed to paste the contents of the ".m" file.
I had a very long conversation with someone from Wolfram support. The support person told me that they could only suggest that I read the Wolfram tutorial webpage called "Keyboard shortcut listing".
But on that webpage, I'm unable to find anything that would help me to reach the field I'm looking for. And if I'm supposed to paste the text into something other than an edit field, then I have no idea what that would be.
My question is simple:
Is there anyone here who might be able to describe to me what I should be trying to find, avoiding phrases like "click the red button" or "click the blue icon" or similar? Those won't help me. I'm currently absolutely stuck.
Thank you in advance.
- Petr
Petr Pařízek, 2023-10-28T22:26:55Z

SpectroGAN soundMap
https://community.wolfram.com/groups/-/m/t/2829436
My goal:
Out of great interest in GANs, I created a series of AI-generated spectrograms for further analysis and manipulation. Spectrograms are visual representations of audio signals, and GANs are machine-learning models used to generate synthetic data that resembles real data. By creating AI-generated spectrograms with GANs, I aim to study and manipulate these visual and sound representations of synthetic audio data. This could potentially have applications in fields such as audio processing and music synthesis. I created an exploratory workflow with a fundamental question in mind: what is music?
Description and process:
As an artist or musician, one begins by learning the rules and techniques of their craft. However, as one gains more experience and becomes more confident in their abilities, their own personal emotions and intuition begin to play a larger role in their creations. The unique experiences and perspectives that each individual brings to their art form the basis of their own unique voice and style. This blending of technical skill with personal emotion and intuition leads to a creation that is not just technically sound but also reflects the artist's view of the world and its place within it. In this way, art and music become a reflection not just of technical proficiency, but also of the human experience and emotion. Further research and exploration in this field will shed light on the potential of AI to augment or even change the way we experience and create art and music.
This raises the question of what happens when trained machine-learning models generate music and art. How do these AI-generated pieces evoke human emotions and experiences, if at all?
Exploratory Workflow:
1: Create GANs models
![enter image description here][1]
2: Feed spectrograms into GANs models to produce AI-generated spectrograms.
![enter image description here][2]
3: Select 20 samples of the GANs-generated spectrograms by listening to 120 random samples.
![enter image description here][3]
4. Combine the 20 samples
![enter image description here][4]
5. Denoise the combined sample
![enter image description here][5]
6. Further manipulations of the denoised sample (fisheye effect, etc.)
![enter image description here][6]
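For readers who want to reproduce the starting representation, a magnitude spectrogram is just a windowed short-time Fourier transform. A minimal numpy sketch (the synthetic test tone, window length, and hop size are illustrative choices, not the settings used in the project):

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time FFT."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    # one row per frequency bin, one column per time frame
    return np.abs(np.fft.rfft(frames, axis=1)).T

# synthetic 440 Hz tone, one second at an 8000 Hz sample rate
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (frequency bins, time frames)
```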
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=image%289%29.png&userId=2829000
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=image%2810%29.png&userId=2829000
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=image%2811%29.png&userId=2829000
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2023-02-12201850.jpg&userId=2829000
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=image%2812%29.png&userId=2829000
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=image%2813%29.png&userId=2829000
Yingxuan Ma, 2023-02-13T01:20:51Z

[WSS17] Music Genre Classifier
https://community.wolfram.com/groups/-/m/t/1136863
We aim to create a music genre classifier which detects the genre of audio/music files. The dataset used for training the model is the GTZAN dataset: it consists of 1000 audio tracks, each 30 seconds long, covering 10 genres with 100 tracks each. For feature extraction, we extract the MFCC values of each audio file. MFCCs are commonly used as features in speech recognition and music information retrieval systems.
![enter image description here][1]
We divided each song into two parts of 15 seconds each; this way we get more data, and our dataset increases to 2000 songs. We extract the MFCC values of all the audio files after partitioning each song into 15-second segments.
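The splitting step itself is a plain fixed-length partition. A hedged Python sketch of the same augmentation on a raw sample array (AudioSplit is the Wolfram function used below; this stand-in just slices a list of samples):

```python
def split_clip(samples, sample_rate, seconds):
    """Partition a clip into consecutive fixed-length segments,
    dropping any short remainder (a 30 s track yields two 15 s halves)."""
    step = int(sample_rate * seconds)
    return [samples[i:i + step]
            for i in range(0, len(samples) - step + 1, step)]

clip = list(range(30 * 22050))        # stand-in for a 30 s track at 22050 Hz
halves = split_clip(clip, 22050, 15)
print(len(halves))  # 2
```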
In[25]:= rockdata =
Import["/Users/aishwaryapraveen/Desktop/Summer School \
Project/genres/rock/*.au"];
rockdata1 = Flatten[AudioSplit[#, 15] & /@ rockdata];
countrydata =
Import["/Users/aishwaryapraveen/Desktop/Summer School \
Project/genres/country/*.au"];
In[29]:= countrydata1 = Flatten[AudioSplit[#, 15] & /@ countrydata];
In[30]:= bluesdata =
Import["/Users/aishwaryapraveen/Desktop/Summer School \
Project/genres/blues/*.au"];
bluesdata1 = Flatten[AudioSplit[#, 15] & /@ bluesdata];
In[33]:= classicaldata =
Import["/Users/aishwaryapraveen/Desktop/Summer School \
Project/genres/classical/*.au"];
In[34]:= classicaldata1 =
Flatten[AudioSplit[#, 15] & /@ classicaldata];
In[10]:= discodata =
Import["/Users/aishwaryapraveen/Desktop/Summer School \
Project/genres/disco/*.au"];
In[35]:= discodata1 = Flatten[AudioSplit[#, 15] & /@ discodata];
In[36]:= Length@discodata1
Out[36]= 200
In[39]:= hiphopdata =
Import["/Users/aishwaryapraveen/Desktop/Summer School \
Project/genres/hiphop/*.au"];
In[38]:= hiphopdata1 = Flatten[AudioSplit[#, 15] & /@ hiphopdata];
In[40]:= jazzdata =
Import["/Users/aishwaryapraveen/Desktop/Summer School \
Project/genres/jazz/*.au"];
jazzdata1 = Flatten[AudioSplit[#, 15] & /@ jazzdata];
In[42]:= metaldata =
Import["/Users/aishwaryapraveen/Desktop/Summer School \
Project/genres/metal/*.au"];
metaldata1 = Flatten[AudioSplit[#, 15] & /@ metaldata];
In[130]:=
popdata =
Import["/Users/aishwaryapraveen/Desktop/Summer School \
Project/genres/pop/*.au"];
In[131]:= popdata1 = Flatten[AudioSplit[#, 15] & /@ popdata];
In[46]:= reggaedata =
Import["/Users/aishwaryapraveen/Desktop/Summer School \
Project/genres/reggae/*.au"];
In[47]:= reggaedata1 = Flatten[AudioSplit[#, 15] & /@ reggaedata];
MFCC Extraction
mFCCFeaturesreggaedata = (Values@
AudioLocalMeasurements[#, "MFCC",
PartitionGranularity -> {1., 1.}]) & /@ reggaedata1;
mFCCFeaturesClassReggae = Thread[mFCCFeaturesreggaedata -> "reggae"];
mFCCFeaturespopdata = (Values@
AudioLocalMeasurements[#, "MFCC",
PartitionGranularity -> {1., 1.}]) & /@ popdata1;
mFCCFeaturesClassPop = Thread[mFCCFeaturespopdata -> "pop"];
mFCCFeaturesmetaldata = (Values@
AudioLocalMeasurements[#, "MFCC",
PartitionGranularity -> {1., 1.}]) & /@ metaldata1;
mFCCFeaturesClassMetal = Thread[mFCCFeaturesmetaldata -> "metal"];
mFCCFeaturesjazzdata = (Values@
AudioLocalMeasurements[#, "MFCC",
PartitionGranularity -> {1., 1.}]) & /@ jazzdata1;
mFCCFeaturesClassJazz = Thread[mFCCFeaturesjazzdata -> "jazz"];
mFCCFeatureshiphopdata = (Values@
AudioLocalMeasurements[#, "MFCC",
PartitionGranularity -> {1., 1.}]) & /@ hiphopdata1;
mFCCFeaturesClasshiphop = Thread[mFCCFeatureshiphopdata -> "hiphop"];
mFCCFeaturesdiscodata = (Values@
AudioLocalMeasurements[#, "MFCC",
PartitionGranularity -> {1., 1.}]) & /@ discodata1;
mFCCFeaturesClassdisco = Thread[mFCCFeaturesdiscodata -> "disco"];
mFCCFeaturesbluesdata = (Values@
AudioLocalMeasurements[#, "MFCC",
PartitionGranularity -> {1., 1.}]) & /@ bluesdata1;
mFCCFeaturesClassblues = Thread[mFCCFeaturesbluesdata -> "blues"];
mFCCFeaturesclassicaldata = (Values@
AudioLocalMeasurements[#, "MFCC",
PartitionGranularity -> {1., 1.}]) & /@ classicaldata1;
mFCCFeaturesClassclassical =
Thread[mFCCFeaturesclassicaldata -> "classical"];
mFCCFeaturesrockdata = (Values@
AudioLocalMeasurements[#, "MFCC",
PartitionGranularity -> {1., 1.}]) & /@ rockdata1;
mFCCFeaturesClassrock = Thread[mFCCFeaturesrockdata -> "rock"];
mFCCFeaturescountrydata = (Values@
AudioLocalMeasurements[#, "MFCC",
PartitionGranularity -> {1., 1.}]) & /@ countrydata1;
mFCCFeaturesClasscountry =
Thread[mFCCFeaturescountrydata -> "country"];
First we implement a neural network on only the first three genres to see how it performs; our training set consists of 540 songs and the validation set consists of 60 songs.
net = NetChain[{
GatedRecurrentLayer[128],
GatedRecurrentLayer[128],
SequenceLastLayer[],
LinearLayer[],
SoftmaxLayer[]},
"Input" -> {"Varying", 13},
"Output" -> NetDecoder[{"Class", {"metal", "pop", "reggae"}}]]
data = RandomSample[
Join[mFCCFeaturesClassPop, mFCCFeaturesClassReggae,
mFCCFeaturesClassMetal]];
trainSet1 = data[[1 ;; 540]];
validationSet1 = data[[541 ;;]];
trainedNet =
NetTrain[net, trainSet1, ValidationSet -> validationSet1,
MaxTrainingRounds -> 100]
cl5 = ClassifierMeasurements[trainedNet, validationSet1]
In[582]:= cl5["Accuracy"]
Out[582]= 0.733333
Confusion Matrix Plot
![enter image description here][2]
We then implement a different recurrent neural network architecture for all 10 genres; this time our training set is 1900 songs and our validation set is 100 songs.
net4 = NetChain[{
GatedRecurrentLayer[256],
GatedRecurrentLayer[256],
GatedRecurrentLayer[256],
SequenceLastLayer[],
LinearLayer[],
SoftmaxLayer[]},
"Input" -> {"Varying", 13},
"Output" ->
NetDecoder[{"Class", {"country", "blues", "disco", "hiphop",
"jazz", "metal", "pop", "reggae", "rock", "classical"}}]
]
data = RandomSample[
Join[mFCCFeaturesClassPop, mFCCFeaturesClassReggae,
mFCCFeaturesClassMetal, mFCCFeaturesClassJazz,
mFCCFeaturesClassblues, mFCCFeaturesClassclassical,
mFCCFeaturesClasscountry, mFCCFeaturesClassdisco,
mFCCFeaturesClassrock, mFCCFeaturesClasshiphop]];
trainSet = data[[1 ;; 1900]];
validationSet = data[[1901 ;;]];
trainednet4 =
NetTrain[net4, trainSet, ValidationSet -> validationSet,
MaxTrainingRounds -> 100]
cl = ClassifierMeasurements[trainednet4, validationSet]
In[620]:= cl["Accuracy"]
Out[620]= 0.75
![enter image description here][3] Confusion Matrix Plot
We achieve an accuracy of 75% for classifying the genres of the audio files.
We will now construct a function which takes in an audio and classifies it into a genre.
In[571]:=
findGenre[sound_] :=
With[{audio =
Values@AudioLocalMeasurements[AudioResample[sound, 22050], "MFCC",
PartitionGranularity -> {1., 1.}]},
trainednet4[audio]
]
In[577]:= findGenre[rockdata[[-3]]]
Out[577]= "rock"
In[587]:= findGenre[countrydata[[-2]]]
Out[587]= "country"
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-07-05at4.02.54AM.png&userId=1082206
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-07-05at4.31.13PM.png&userId=1082206
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-07-05at4.42.33PM.png&userId=1082206
Aishwarya Praveen, 2017-07-05T20:50:50Z

Commutative avatars of representations of semisimple Lie groups
https://community.wolfram.com/groups/-/m/t/3290514
![enter image description here][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Main03102024.png&userId=20103
[2]: https://www.wolframcloud.com/obj/06872339-7181-493f-9079-759464171944
Tamas Hausel, 2024-10-03T20:40:45Z

Loop of solving integrals constants doesn't work properly?
https://community.wolfram.com/groups/-/m/t/3290199
There are 6 steps in which I use DSolve, then update and evaluate the Mathematica package. At each step the output is a function which obviously contains integration constants. For example, the Step 2 output is a function with integration constants C1, C2, C3. The Step 3 output is a function with integration constants C1, C2 (instead of C4, C5). Step 4 outputs a function with integration constants C1, C2 (instead of C6, C7). See the attachment.
&[Wolfram Notebook][1]
Could you please suggest a way of obtaining consecutively numbered integration constants?
[1]: https://www.wolframcloud.com/obj/74a5f355-4170-4017-850e-be93ed4557ee
Nomsa Ledwaba, 2024-10-03T08:47:11Z

What is the substitution or replacement method for multiple functions and their derivatives?
https://community.wolfram.com/groups/-/m/t/3290412
I want to substitute functions and their derivatives in this:
{LXi = x*D[Xi[2][t], t] + Sqrt[x] h[t],
Xi[2][t]}; LPhi = { (p[t] + f[x, t]) u + g[x, t]};
f[x, t] =
C[1][t] + ((-((4 k \[Theta] h[t])/Sqrt[x]) +
6 Sqrt[x] (k h[t] + 2 Derivative[1][h][t]) +
6 x (k Derivative[1][Xi[2]][t] + (Xi[2]^\[Prime]\[Prime])[t]))/(
8 k \[Theta]))
Is this correct?
{LXi, LPhi} =
ReplaceAll[{LXi = {x*D[Xi[2][t], t] + Sqrt[x] h[t], Xi[2][t]},
LPhi = { (p[t] +
1/(8 k \[Theta]) (-(1/Sqrt[x]) 4 k \[Theta] h[t] +
6 Sqrt[x] (k (*h[t]*)+ 2 D[h[t], t]) +
6 x (k D[Xi[2][t], t] + D[Xi[2][t], {t, 2}]))) u +
g[x, t]}}, {h[t] ->
E^((Sqrt[k] t Sqrt[3 k + 8 \[Theta]])/(2 Sqrt[3])) C[1] +
E^(-((Sqrt[k] t Sqrt[3 k + 8 \[Theta]])/(2 Sqrt[3]))) C[2],
p[t] -> C[3] - (
3 E^(-((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[
3])) (-1 + E^((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[
3])) (-3 k +
3 E^((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[3]) k - 8 \[Theta] +
8 E^((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[3]) \[Theta] +
Sqrt[3] Sqrt[k (3 k + 8 \[Theta])] +
Sqrt[3] E^((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[3]) Sqrt[
k (3 k + 8 \[Theta])]) C[5])/(8 (3 k + 8 \[Theta])) - (
3 E^(-((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[
3])) (-1 + E^((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[
3])) (3 Sqrt[3] k +
3 Sqrt[3] E^((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[3]) k +
8 Sqrt[3] \[Theta] +
8 Sqrt[3] E^((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[
3]) \[Theta] - 3 Sqrt[k (3 k + 8 \[Theta])] +
3 E^((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[3]) Sqrt[
k (3 k + 8 \[Theta])]) C[6])/(
8 (3 k + 8 \[Theta]) Sqrt[k (3 k + 8 \[Theta])]),
Xi[2][t] ->
C[4] + (Sqrt[3]
E^(-((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[
3])) (-1 + E^((2 t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[3])) C[
5])/(2 Sqrt[k (3 k + 8 \[Theta])]) + (
3 E^(-((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[
3])) (-1 + E^((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[3]))^2 C[
6])/(2 k (3 k + 8 \[Theta])),
D[h[t], t] -> (
E^((Sqrt[k] t Sqrt[3 k + 8 \[Theta]])/(2 Sqrt[3])) Sqrt[k]
Sqrt[3 k + 8 \[Theta]] C[1])/(2 Sqrt[3]) - (
E^(-((Sqrt[k] t Sqrt[3 k + 8 \[Theta]])/(2 Sqrt[3]))) Sqrt[k]
Sqrt[3 k + 8 \[Theta]] C[2])/(2 Sqrt[3]) ,
D[Xi[2][t], t] ->
E^((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[3]) C[5] -
1/2 E^(-((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[
3])) (-1 + E^((2 t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[3])) C[
5] + (
Sqrt[3] (-1 + E^((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[3])) Sqrt[
k (3 k + 8 \[Theta])] C[6])/(k (3 k + 8 \[Theta])) - (
Sqrt[3] E^(-((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[
3])) (-1 + E^((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[
3]))^2 Sqrt[k (3 k + 8 \[Theta])] C[6])/(
2 k (3 k + 8 \[Theta])),
D[Xi[2][t], {t, 2}] -> (
2 E^((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[3])
k (3 k + 8 \[Theta]) C[5])/(
Sqrt[3] Sqrt[k (3 k + 8 \[Theta])]) + (
E^(-((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[
3])) (-1 + E^((2 t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[
3])) k (3 k + 8 \[Theta]) C[5])/(
2 Sqrt[3] Sqrt[k (3 k + 8 \[Theta])]) - (
2 E^((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[3]) Sqrt[
k (3 k + 8 \[Theta])] C[5])/Sqrt[
3] + ((-2 (-1 + E^((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[
3])) k (3 k + 8 \[Theta]) +
1/2 E^(-((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[
3])) (-1 + E^((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[
3]))^2 k (3 k + 8 \[Theta]) +
3/2 E^(-((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[
3])) (2/3 E^((2 t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[3])
k (3 k + 8 \[Theta]) +
2/3 E^((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[
3]) (-1 + E^((t Sqrt[k (3 k + 8 \[Theta])])/Sqrt[
3])) k (3 k + 8 \[Theta]))) C[6])/(
k (3 k + 8 \[Theta]))
Please show a generic way of doing the substitution if possible, without having to manually find the derivatives first.
Nomsa Ledwaba, 2024-10-03T10:03:23Z

Virus production in shallow groundwater at the bank of the Danube River
https://community.wolfram.com/groups/-/m/t/3289943
![Virus production in shallow groundwater at the bank of the Danube River][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=5336Main02102024.png&userId=20103
[2]: https://www.wolframcloud.com/obj/f80a61a0-6ca0-4da8-8e21-a76f61e7fcef
Christian Winter, 2024-10-02T18:29:52Z

[WSA24] Simulation of a geometrical racing line
https://community.wolfram.com/groups/-/m/t/3289389
![enter image description here][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=5678hero.png&userId=20103
[2]: https://www.wolframcloud.com/obj/f8b81ede-6fdd-4fd8-bc5d-05423dd7b4cb
Vladimir Shirkhanyan, 2024-10-02T10:33:47Z

Gauss-Bonnet for form curvatures
https://community.wolfram.com/groups/-/m/t/3289909
![enter image description here][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=8628Main02102024.png&userId=20103
[2]: https://www.wolframcloud.com/obj/743858fd-433c-4c2c-8b52-6779d10de985
Oliver Knill, 2024-10-02T15:21:49Z

Annular solar eclipse of Oct 2, 2024
https://community.wolfram.com/groups/-/m/t/3289692
![enter image description here][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Main02102024.png&userId=20103
[2]: https://www.wolframcloud.com/obj/8d24d13d-07e7-466c-933c-443ee8cd954d
Jeffrey Bryant, 2024-10-02T14:27:18Z

Using recurrent layer in neural network
https://community.wolfram.com/groups/-/m/t/3289669
Hello,
I am new to machine learning models. I use Classify and other instructions quite well, but I have difficulties with neural networks.
I have a training set of 36 examples, presented in the following form:
training={{0.105, 0.92, -1.5} -> 2.69, {-0.04, 0.4, -1.3} ->
7.2, {-0.112, -0.36, -1.2} -> 8.18, {-0.137, 0.73, -1.2} ->
9.02, {-0.065, -0.59, -1.1} -> 15.29, {-0.049, -0.06, -0.9} ->
19.85, {-0.062, -1.26, -0.5} -> 20.97, {0.014, -0.05, -0.4} ->
19.07, {0.003, 0.25, -0.4} -> 14.94, {-0.097, 0.85, -0.4} ->
11.28, {0.073, -1.26, -0.6} -> 7.78, {0.082, -1.02, -0.8} ->
0.75, {0.096, 0.08, -0.8} -> 5.14, {-0.008, 0.7, -0.5} ->
5.22, {0.006, -1.02, -0.2} -> 6.34, {0.162, -0.22, 0.2} ->
9.97, {0.166, -0.59, 0.4} -> 12.19, {0.283, -1.64, 0.6} ->
16.34, {0.415, 1.37, 0.7} -> 18.77, {0.297, -0.22, 0.9} ->
17.58, {0.243, -1.36, 1.} -> 16.87, {0.252, 1.87, 1.2} ->
10.4, {0.168, -0.39, 1.} -> 8.98, {0.168, 1.32, 0.8} ->
4.76, {0.165, 0.93, 0.5} -> 2.5, {0.176, -0.83, 0.4} ->
2.4, {0.224, -1.49, 0.3} -> 9.03, {0.184, 1.01, 0.3} ->
12.3, {0.174, -1.12, 0.2} -> 15.07, {0.376, -0.4, 0.} ->
18.43, {0.366, -0.09, -0.1} -> 21.26, {0.397, -0.28, 0.} ->
19.59, {0.355, -0.54, 0.2} -> 12.83, {0.348, -0.73, 0.1} ->
11.01, {0.244, -1.13, 0.} -> 5.27, {0.334, -0.43, 0.1} -> 3.44}
When I use a LinearLayer[] with the following instructions everything is OK
model = NetTrain[LinearLayer[], trainingset]
And now I have a good result on a test set
model[oscil12[[3]]]
The result is : 10.07
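(For intuition: a trained LinearLayer is essentially an affine least-squares fit, so the result can be cross-checked with numpy's lstsq. The rows below are the first few training pairs from above; this is an illustration, not the full fit.)

```python
import numpy as np

# a few (input -> output) pairs from the training set above
X = np.array([[0.105, 0.92, -1.5],
              [-0.04, 0.4, -1.3],
              [-0.112, -0.36, -1.2],
              [-0.137, 0.73, -1.2],
              [-0.065, -0.59, -1.1],
              [-0.049, -0.06, -0.9]])
y = np.array([2.69, 7.2, 8.18, 9.02, 15.29, 19.85])

# append a bias column: LinearLayer computes w.x + b
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
print(np.round(pred - y, 2))  # residuals of the affine fit
```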
But if I use a recurrent layer, for example BasicRecurrentLayer, and I initialize it,
netL = NetInitialize[BasicRecurrentLayer[1, "Input" -> {"Varying", 3}]]
model = NetTrain[netL, trainingset]
I do not have the right result
Question 1: Why? Is my netL incomplete? What layer must I add, a SigmoidLayer or another activation or output layer?
Question 2: What is the simplest solution?
Question 3: Once I am happy with my model, can I include it in the Predict instruction if I choose the "NeuralNetwork" method?
Thanks for your attention.
André Dauphiné, 2024-10-02T12:30:14Z

MatrixSymbol simplifications
https://community.wolfram.com/groups/-/m/t/3288833
I'm exploring the new 14.1 MatrixSymbol functionality, hoping to eventually simplify complex and lengthy matrix expressions. But I'm running into limitations even with simple expressions. For example:
A = MatrixSymbol["A", {n, n}];
FullSimplify[A + Transpose[A], Assumptions -> A == Transpose[A]]
Out[18]= 2 Transpose[(MatrixSymbol["A", {n, n}])]
(** OK !! **)
FullSimplify[A - Transpose[A], Assumptions -> A == Transpose[A]]
Out[19]= 0
(** should we not see O_n,n ? **)
FullSimplify[Inverse[A] . Transpose[A], Assumptions -> A == Transpose[A]]
Out[20]= Inverse[(MatrixSymbol[
"A", {n, n}])] . Transpose[(MatrixSymbol["A", {n, n}])]
(**I was hoping to see an identity matrix **)
What am I missing?
Eric Michielssen, 2024-09-30T23:57:43Z

Calculate present value of cashflow with irregular intervals
https://community.wolfram.com/groups/-/m/t/3093128
How do I calculate the present value of a cashflow with irregular intervals and amounts? I didn't find anything about this in the TimeValue[] documentation. Say, the cashflow is...
cf = {{{2024, 1, 16}, 324.67`}, {{2024, 2, 6}, 634.09`}, {{2024, 11, 20}, 356.27`}, {{2023, 12, 31}, 0}};
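For reference, discounting an irregular cashflow by hand is just summing amount/(1+r)^t with t in (possibly fractional) years from the valuation date. A Python sketch using an actual/365 day count (the day-count convention here is an assumption, not necessarily what TimeValue uses):

```python
from datetime import date

def present_value(cashflows, valuation_date, rate):
    """PV of dated cashflows, discounting each amount by
    (1 + rate) ** (years from valuation date), actual/365."""
    total = 0.0
    for (y, m, d), amount in cashflows:
        years = (date(y, m, d) - valuation_date).days / 365.0
        total += amount / (1.0 + rate) ** years
    return total

cf = [((2024, 1, 16), 324.67), ((2024, 2, 6), 634.09),
      ((2024, 11, 20), 356.27), ((2023, 12, 31), 0.0)]
pv = present_value(cf, date(2023, 1, 1), 0.05)
print(round(pv, 2))
```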
What functions would return the present value on 1/1/2023, assuming a 5 percent discount?
Jay Gourley, 2023-12-29T22:32:00Z

Empowering mathematics education through programming with Wolfram AI and chat-enabled notebooks
https://community.wolfram.com/groups/-/m/t/3288988
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/889cbbc4-8621-4c8f-a98b-d1edce587dae
Paul Abbott, 2024-10-01T17:36:50Z

Is Opacity in Graphics3D flinky on your (Linux) system (Mathematica 4.1.0)?
https://community.wolfram.com/groups/-/m/t/3285996
After several years of using Windows, I recently returned to using Linux, so I can't say how long this has been an issue. I don't even know if it is a generic Linux problem, or specific to my setup. Under Linux, Graphics3D with opacity has "clipping" issues. **I would like to know if others are experiencing a similar result.** I note that the image generated by the server hosting this forum is not problematic.
After I originally posted, I noticed that if I "grab" the image with my mouse to move it around, the clipping vanishes until it is released.
Also, there is an errant comma in the graphics object list of the code I posted, but it appears to be benign.
Without more resources, it is difficult to determine the source of this problem. It could be hardware, drivers, display server (X vs Windows), configuration (screen resolution, etc.), etc. Knowing if others are experiencing this problem, and under what circumstances, will help determine its source.
<h3>A notebook exhibiting the problem under Linux:</h3>
&[Example notebook producing the problem][1]
<h3>A screen-scrape of the problem taken from my Linux system:</h3>
![Example of the problem under Linux][2]
<h3>A screen-scrape of the preferred result taken from my Windows 11 system:</h3>
![enter image description here][3]
<h3>My Linux system's information:</h3>
Operating System: openSUSE Leap 15.6
KDE Plasma Version: 5.27.11
KDE Frameworks Version: 5.115.0
Qt Version: 5.15.12
Kernel Version: 6.4.0-150600.23.22-default (64-bit)
Graphics Platform: X11
Processors: 32 × Intel® Core™ i9-14900KS
Memory: 94.0 GiB of RAM
Graphics Processor: Mesa Intel® Graphics
Manufacturer: ASUS
[1]: https://www.wolframcloud.com/obj/bdaaa26f-248a-420a-b08d-994ad9861834
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=flinky-opacity-linux.png&userId=3269649
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=flinky-opacity-windows.png&userId=3269649
Steven Hatton, 2024-09-28T15:10:18Z

Topological minibands and quantum anomalous hall state in Moiré insulators
https://community.wolfram.com/groups/-/m/t/3288963
![enter image description here][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Main01102024.png&userId=20103
[2]: https://www.wolframcloud.com/obj/7f959940-163f-40c4-a122-e86781741ed1
Kaijie Yang, 2024-10-01T15:33:12Z

Append a column to a nested dataset
https://community.wolfram.com/groups/-/m/t/3288108
I have the following nested dataset:
![enter image description here][1]
ds = Dataset[
<|"x" ->
{<|"col1" -> 4.2, "col2" -> 2.5|>, <|"col1" -> 1.6,
"col2" -> 7.4|>, <|"col1" -> 7.1, "col2" -> 3.6|>}
,
"y" ->
{<|"col1" -> 9.1, "col2" -> 2.8|>, <|"col1" -> 2.7,
"col2" -> 5.4|>, <|"col1" -> 5.3, "col2" -> 0.6|>}
|>
]
How can I append a column `col3` to the `x` sub-dataset, so that `col3=f[col1]`, as below:
![enter image description here][2]
Doing
ds = ds["x", All, <|#, "new" -> f[#col1]|> &]
adds the column but removes information about the "y" key.
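The underlying transformation (map a row function over the "x" entry only, leaving "y" untouched) can be sketched in plain Python dictionaries; `f` below is a stand-in for whatever column function is needed:

```python
def add_col3(ds, f):
    """Append col3 = f(col1) to every row under key "x",
    leaving all other keys of the association unchanged."""
    return {key: ([{**row, "col3": f(row["col1"])} for row in rows]
                  if key == "x" else rows)
            for key, rows in ds.items()}

ds = {"x": [{"col1": 4.2, "col2": 2.5}, {"col1": 1.6, "col2": 7.4}],
      "y": [{"col1": 9.1, "col2": 2.8}]}
out = add_col3(ds, lambda v: 2 * v)
print(out["x"][0]["col3"], "y" in out)  # 8.4 True
```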
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=6227Untitled.png&userId=1344988
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Untitled2.png&userId=1344988
Ehud Behar, 2024-09-30T11:10:57Z

Creative generative design with mathematical marbling
https://community.wolfram.com/groups/-/m/t/3258063
[![Generative design with mathematical marbling][1]][2]
&[Wolfram Notebook][3]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=MathematicalMarblingBanner.jpg&userId=20103
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=MathematicalMarblingBanner.png&userId=20103
[3]: https://www.wolframcloud.com/obj/18517ffb-0f53-4ab0-a2c9-a989b51c6268
Phileas Dazeley-Gaist, 2024-08-27T14:59:23Z

[WSRP24] Study of Hash Probing on a Graph
https://community.wolfram.com/groups/-/m/t/3215183
![Torus graph with grid probing performed on it][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=CoverImage.png&userId=3214738
[2]: https://www.wolframcloud.com/obj/9f5f3991-4070-4937-98e2-6e4a2218775f
David Wang, 2024-07-11T18:47:11Z

Is it possible to publish to the cloud using custom packages and custom stylesheets?
https://community.wolfram.com/groups/-/m/t/3288246
I have 100+ notebooks which all depend on a custom style sheet and one or more custom packages. Is it possible to share these on the cloud so that they have access to the packages and stylesheet?
Steven Hatton, 2024-09-30T20:19:01Z

The limit space of self-similar groups and Schreier graphs
https://community.wolfram.com/groups/-/m/t/3288175
![enter image description here][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Main30092024.png&userId=20103
[2]: https://www.wolframcloud.com/obj/e4496ce7-2068-4b18-99e9-c9dcb1bb765d
Bozorgmehr vaziri, 2024-09-30T19:18:56Z

Do LLM's learn transferable skills?
https://community.wolfram.com/groups/-/m/t/3288389
![enter image description here][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=8686hero.png&userId=20103
[2]: https://www.wolframcloud.com/obj/e6afe122-ec83-4afe-8270-cbe0c03d607a
Christopher Wolfram, 2024-09-30T16:27:30Z

Implement 'curl' command with URLRead, HTTPRequest, etc.
https://community.wolfram.com/groups/-/m/t/3282552
I am attempting to access the list of projects from a private/internal GitLab host with WL's URLRead function. I have been successful accessing the data via a browser as well as from the following command in a terminal:
curl --cert /path/to/certs.pem:its_password --header "Authorization: Bearer users_personal_access_token" https://private.gitlab.com/api/v4/projects
(with the proper values of 'its\_password', 'users\_personal\_access\_token', and 'private.gitlab.com' private git lab host). Unfortunately in WL I am unable to replicate the query with various iterations of the following:
URLRead[
HTTPRequest[
"Scheme"->"https",
"Domain"->"private.gitlab.com",
"Path"->"/api/v4/projects"
],
All,
Authentication-><|
"Headers"-><|"Authorization"->"Bearer users_personal_access_token"|>,
"PEMFile"->"/path/to/certs.pem",
"PEMFilePassword"->"its_password"
|>
]
URLRead returns the error "400 No required SSL certificate was sent." The various iterations move things around, such as removing the Authentication->... option and moving the "Headers"->... inside the HTTPRequest above it, and replacing "PEMFile" (etc.) with "ClientCertificate" or "CertificateFile", depending on how Mathematica Stack Exchange, Microsoft Copilot, and Claude feel that day. I even took Wireshark traces of curl and URLRead; that was overwhelming, and I did not manage to get any useful comparison due to my lack of experience.
Have you been successful querying services that require authentication (particularly GitLab)? How did you do it? Any assistance would be great, and thank you for your time.
Ed Estrella, 2024-09-24T20:28:32Z

Solving differential equation depending on radial and time parameter
https://community.wolfram.com/groups/-/m/t/3285392
Hi, I hope you are all doing well. I am trying to solve this differential equation, which depends on a radial and a time parameter, to find the velocity. As the initial condition I took the value of the stationary solution, which I solved separately. When I tried to solve the final equations I got these errors, and I don't know what causes them: is it the initial conditions, or the method with which I try to solve the equations? If any of you have an idea it would help me a lot. Thank you all in advance.
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/c772837f-00e0-42a3-9c1d-4a33cc33c6daLeo Murphy2024-09-27T09:53:55ZUnexpected behaviour from NMaximize optimising a stochastic function
https://community.wolfram.com/groups/-/m/t/3286311
I have a complicated function whose output contains noise. I'm interested in using Differential Evolution via NMaximize to solve it. However, NMaximize appears to compile or somehow 'freeze' the function outside its loop.
Below is a minimal example. Here, 'function 1' produces the expected behaviour in finding the argmax $\pi$ alongside a different random maximum each time.
That is, for three example runs:
```
{{0.368844, {\[Eta] -> 3.14159}}, {1.29786, {\[Eta] ->
3.14159}}, {0.128056, {\[Eta] -> 3.14159}}}
```
My actual problem is more like 'function 2'. But in this case it provides the same result three times. The result is also wrong and not $\pi$, which is probably linked with the unexpected behaviour:
```
{{0.08951, {\[Eta] -> 3.89056}}, {0.08951, {\[Eta] ->
3.89056}}, {0.08951, {\[Eta] -> 3.89056}}}
```
Here is the example code:
```
ClearAll["Global`*"]
(* function 1 *)
f1 := NMaximize[{1.0 - (\[Pi] - \[Eta])^2 +
RandomVariate[NormalDistribution[]],
0. < \[Eta] < 4.}, \[Eta] \[Element] Reals,
Method -> "DifferentialEvolution" ];
Table[f1, {3}]
(* function 2 *)
w[\[Eta]_?NumericQ] :=
Block[{\[Epsilon]}, \[Epsilon] :=
RandomVariate[NormalDistribution[]];
1.0 - (\[Pi] - \[Eta])^2 + \[Epsilon]];
f2 := NMaximize[{w[\[Eta]], 0. < \[Eta] < 4.}, \[Eta] \[Element]
Reals, Method -> "DifferentialEvolution" ];
Table[f2, {3}]
```Cameron Turner2024-09-27T21:51:02ZHelp solving a cubic equation with a parameter
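A plain-Python toy (hypothetical; a grid search stands in for differential evolution) illustrating the behaviour the post observes: if the noise draw is captured once at definition time, every "optimization" run sees the same frozen objective and returns identical results, whereas a function that resamples on every call gives a different answer each time:

```python
import math
import random

random.seed(0)

def make_resampling():
    # draws fresh noise on every call, like 'function 1'
    def w(eta):
        return 1.0 - (math.pi - eta) ** 2 + random.gauss(0.0, 1.0)
    return w

def make_frozen():
    eps = random.gauss(0.0, 1.0)      # drawn once, then reused forever
    def w(eta):
        return 1.0 - (math.pi - eta) ** 2 + eps
    return w

def argmax_on_grid(w, lo=0.0, hi=4.0, n=4001):
    grid = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return max(grid, key=w)

frozen = make_frozen()
# Frozen noise only shifts the parabola by a constant, so repeated runs
# agree exactly and the argmax stays at the grid point nearest pi:
print(argmax_on_grid(frozen), argmax_on_grid(frozen))

noisy = make_resampling()
# Resampling noise moves the argmax around from run to run:
print(argmax_on_grid(noisy), argmax_on_grid(noisy))
```

This doesn't reproduce the wrong maximum at 3.89, but the identical-results-across-runs symptom is exactly what a frozen objective produces.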
https://community.wolfram.com/groups/-/m/t/3284125
&[Wolfram Notebook][1]
[1]: https://www.wolframcloud.com/obj/42b03182-7ec1-4772-bc4a-37883605494bJackie cao2024-09-26T02:56:58ZSimplifying multiple inequalities
https://community.wolfram.com/groups/-/m/t/3283386
Can you please tell me which function in Wolfram can convert the formula
(y == 0 && x == 0) || (y == 1 && x == 1) || (0 < y < 1 && x == y)
to the formula:
0 <= y <= 1 && x == y
I can't even prove their equivalence:
FullSimplify[((y == 0 && x == 0) || (y == 1 && x == 1) || (0 < y < 1 && x == y)) \[Equivalent] (0 <= y <= 1 && x == y), (x | y) \[Element] Reals]
Wolfram works in one direction only:
FullSimplify[(0 <= y <= 1 && x == y) \[Implies] ((y == 0 && x == 0) || (y == 1 && x == 1) || (0 < y < 1 && x == y)), (x | y) \[Element] Reals]
Out: TrueSasha Mandra2024-09-25T21:20:13ZMap function not fully evaluating my matrix
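As a quick sanity check (not a proof), the two formulas can be compared pointwise over a grid of exact rationals in plain Python; using Fraction rather than floats means the boundary cases y == 0 and y == 1 are hit exactly:

```python
from fractions import Fraction

def piecewise_form(x, y):
    return (y == 0 and x == 0) or (y == 1 and x == 1) or (0 < y < 1 and x == y)

def compact_form(x, y):
    return 0 <= y <= 1 and x == y

# exact rationals from -1 to 2 in steps of 1/4
samples = [Fraction(i, 4) for i in range(-4, 9)]
agree = all(piecewise_form(x, y) == compact_form(x, y)
            for x in samples for y in samples)
print(agree)   # True
```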
https://community.wolfram.com/groups/-/m/t/3286558
I'm currently trying to turn a matrix of complex numbers into a new matrix containing only the real part of each entry, using the following code:
G = Map[Real, Y, {2}]
But I get the following output
{{Real[4.21223 - 11.4944 I], Real[-2.20588 + 3.67647 I], Real[0],
Real[-1.17647 + 4.70588 I], Real[-0.829876 + 3.11203 I],
Real[0]}, {Real[-2.20588 + 3.67647 I], Real[9.9847 - 22.6166 I],
Real[-0.907716 + 3.78215 I], Real[-4. + 8. I], Real[-1. + 3. I],
Real[-1.8711 + 4.158 I]}, {Real[0], Real[-0.907716 + 3.78215 I],
Real[5.32668 - 16.2898 I], Real[0], Real[-1.66667 + 3.33333 I],
Real[-2.75229 + 9.17431 I]}, {Real[-1.17647 + 4.70588 I],
Real[-4. + 8. I], Real[0], Real[6.17647 - 14.7059 I],
Real[-1. + 2. I], Real[0]}, {Real[-0.829876 + 3.11203 I],
Real[-1. + 3. I], Real[-1.66667 + 3.33333 I], Real[-1. + 2. I],
Real[5.77391 - 14.1826 I], Real[-1.27737 + 2.73723 I]}, {Real[0],
Real[-1.8711 + 4.158 I], Real[-2.75229 + 9.17431 I], Real[0],
Real[-1.27737 + 2.73723 I], Real[5.90077 - 16.0695 I]}}
What should I do to fully evaluate the expression?Sebastián Cornejo2024-09-29T02:30:50ZTensor networks, Einsum optimizers and the path towards autodiff
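For what it's worth, the elementwise operation being attempted — taking the real part of every entry at level {2} — looks like this in plain Python (toy matrix values; in WL, Real is a type head rather than a function, so presumably a real-part function such as Re was intended):

```python
# Toy complex matrix; take the real part of each entry, the analogue of
# mapping a real-part function at level {2} over the matrix.
Y = [[4.21 - 11.49j, -2.21 + 3.68j],
     [0j, 6.18 - 14.71j]]

G = [[z.real for z in row] for row in Y]
print(G)   # [[4.21, -2.21], [0.0, 6.18]]
```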
https://community.wolfram.com/groups/-/m/t/2437093
I've been thinking about what it would take to make a generically optimal autodiff engine, sorely [missing](https://mathematica.stackexchange.com/questions/256323/neural-network-automatic-differentiation-autograd) right now and it comes down to einsum optimization, a well studied problem.
To see why you need an einsum optimizer, consider differentiating $F(x)=f(g(h(x)))$. Using the chain rule, we can write the derivative as the product of intermediate Jacobian matrices.
$$\partial F = \partial f \cdot \partial g \cdot \partial h$$
The optimal order of computing this matrix product depends on the shapes of the constituent matrices. This is the well-known [matrix-chain problem](https://en.wikipedia.org/wiki/Matrix_chain_multiplication). For neural network applications, $f$ is the "loss" with scalar output, while intermediate results are high-dimensional, so the matrix product starts with a short Jacobian matrix followed by square'ish matrices, and the optimal order is "left-to-right", aka "reverse mode differentiation." The process of computing these vector/matrix products is known as the "backward" step in autodiff systems.
![enter image description here][1]
For different set of matrix shapes, you may need a different order. For instance, if output of $F$ is high-dimensional but $x$ is scalar, it's faster to multiply your Jacobians in the opposite order:
![enter image description here][2]
Or, suppose both input and output of $F$ are high-dimensional, but an intermediate result is low-dimensional. Then an optimal order would go outwards in both directions, followed by an outer product in the last step:
![enter image description here][3]
Commonly occurring orders have names in the autodiff literature -- forward, reverse, mixed, cross-country -- but there's really an infinity of such orders. You can solve this problem more generally by treating it as a problem of finding an optimal contraction order. This is a well-known problem in the graph theory literature, corresponding to the problem of finding an optimal [triangulation](https://en.wikipedia.org/wiki/Matrix_chain_multiplication#More_efficient_algorithms) in the case of a chain (see David Eppstein's [notes](https://www.ics.uci.edu/~eppstein/260/011023/)).
To see the connection of matrix chain problem to tensor networks, note that a product of matrices ABC can be written as a tensor network by using Einstein's summation notation:
$$ABC=A^{i}_j B^j_k C^k_l$$
This corresponds to a chain-structured tensor network below:
![enter image description here][4]
There are different orders in which you can perform the matrix multiplication. For instance, you could do $A(BC)$ or $(AB)C$. A parenthesization like $A(BC)$ is a hierarchical clustering of the tensors $A,B,C$ which is known as the "carving decomposition" of the tensor network graph, and it provides an order of computing the result, known as the "elimination order". For instance, computing $A(BC)$ corresponds to eliminating $k$, then $j$, and the corresponding carving decomposition can be visualized below:
![enter image description here][5]
For a general tensor network, the problem of optimal elimination corresponds to the problem of finding the [optimal carving decomposition](https://arxiv.org/abs/1908.11034). If individual tensor dimensions are all very large, this is equivalent to finding minimum-width carving decomposition, which is also equivalent to finding a tree embedding that [minimizes edge congestion](https://arxiv.org/abs/1906.00013) for the tensor network graph.
For instance, for the contraction problem in the [paper](https://arxiv.org/pdf/1908.11034.pdf), to compute $K=A_{ij} B_{ml}^i C_o^l D_k^j E_n^{km}F^{no}$, elimination sequence $j,m,i,k,o,l,n$ corresponds to the contraction tree below, giving edge congestion of $4$, because of the four colored bands along the edge $H$, indicating that it is a rank-4 tensor.
![enter image description here][6]
The rank of largest intermediate tensor in the minimum-width contraction schedule is the "carving-width" of the graph, so if Mathematica's `GraphData` supported "Carvingwidth" property, we could see "4" as the result of `GraphData[{"Grid",{3,2}}, "Carvingwidth"]` . Carving width of planar graphs as well as their minimum width carving decomposition can be computed in $O(n^3)$ time using the [Ratcatcher algorithm](https://www.caam.rice.edu/~ivhicks/pb1.ijoc.pdf).
For the simpler case of a chain-structured tensor network, [we can solve the problem](https://colab.research.google.com/drive/1LUbm7cS_slkC7BDKgOb58GrFqP-KQBI-) by using the [opt_einsum](https://github.com/dgasmith/opt_einsum) package, which can handle general tensor networks. The package has been targeted at large tensor networks occurring in quantum mechanics, so the solutions it gives for small tensor networks are sometimes [suboptimal](https://github.com/dgasmith/opt_einsum/issues/167).
Another way to approach this problem is to view it as a sum over weighted paths problem. Suppose our functions $f,g,h$ all have 2 dimensional inputs and outputs. Given that $F(x)=f(g(h(x)))$, $\partial F$ is the matrix product $\partial f \cdot \partial g \cdot \partial h$, visualized pictorially below
![enter image description here][7]
The derivative of this composition reduces to computing the sum over all paths with a fixed starting point and ending point, where the weight of each path is the product of the weights of its individual edges, and edge weights correspond to partial derivatives of the intermediate functions $f,g,h$:
<img src="https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2022-01-04at8.54.18AM.png&userId=1046145" width="400">
For instance, $\partial F_{1,1}$ is the sum over the following 4 paths, all with endpoints at $i=1$ and $l=1$:
<img src="https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2022-01-04at3.53.59PM.png&userId=1046145" width="300">
Hence, an einsum optimizer provides an efficient way to solve the "sum over weighted paths" problem by reusing intermediate results optimally.
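This correspondence can be checked directly in a few lines of Python (arbitrary 2x2 Jacobians): each entry of the matrix product equals the sum over the four weighted paths through the intermediate indices.

```python
import itertools

# Arbitrary 2x2 "Jacobians" of f, g, h
df = [[1.0, 2.0], [3.0, 4.0]]
dg = [[0.5, -1.0], [2.0, 0.25]]
dh = [[1.5, 0.0], [-0.5, 1.0]]

def matmul(a, b):
    return [[sum(a[i][j] * b[j][k] for j in range(len(b)))
             for k in range(len(b[0]))] for i in range(len(a))]

product = matmul(matmul(df, dg), dh)

def path_sum(i, l):
    # sum over intermediate indices j, k: one term per path i->j->k->l
    return sum(df[i][j] * dg[j][k] * dh[k][l]
               for j, k in itertools.product(range(2), range(2)))

assert all(abs(product[i][l] - path_sum(i, l)) < 1e-12
           for i in range(2) for l in range(2))
```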
The classical solution to this problem is a dynamic programming [algorithm](https://en.wikipedia.org/wiki/Matrix_chain_multiplication#A_dynamic_programming_algorithm) with $O(n^3)$ scaling, but there's an $O(n^2)$ algorithm described [here](https://arxiv.org/abs/2104.01777), with a Mathematica implementation [here](https://github.com/LeMinhThong/Speedup-Matrix-Chain-Multiplication).
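The textbook $O(n^3)$ dynamic program is short enough to sketch in Python (hypothetical dimensions): dims[i] and dims[i+1] give the shape of the i-th matrix, and cost[i][j] is the cheapest way to multiply matrices i through j.

```python
def matrix_chain_cost(dims):
    """Minimum scalar multiplications for the chain whose i-th matrix
    has shape dims[i] x dims[i+1] (classical O(n^3) dynamic program)."""
    n = len(dims) - 1
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)
            )
    return cost[0][n - 1]

# A 1 x d Jacobian followed by square d x d factors: the left-to-right
# ("reverse mode") order is optimal, costing n*d^2 scalar multiplications
# rather than O(d^3) per step.
print(matrix_chain_cost([1, 100, 100, 100, 100]))   # 30000
```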
For more general automatic differentiation, you need an einsum optimizer that can optimize computation of a **sum** of tensor networks.
To see why, consider differentiating the following function:
$$F(x)=f(g(h(x))+h(x))$$
This addition corresponds to a "skip-connection", which is popular in neural network design. You "skip" an application of $g$ in the second term. As the sum of weighted paths, the derivative of $F$ can be visualized below:
![enter image description here][8]
This doesn't neatly fit into tensor network notation, however, it does reduce to the sum of two tensor networks:
<img src="https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2022-01-04at9.00.08AM.png&userId=1046145" width="400">
These tensor networks have sub-chains in common, so an optimal computation schedule may reuse intermediates across the two networks. The problem of an optimal AD schedule for skip connections is described in more detail [here](https://cs.stackexchange.com/questions/147773/optimizing-a-sum-of-matrix-chains).
Another case where you need to optimize a sum of tensor networks is when computing Hessian-vector product of simple function composition. This comes up in Chemistry problems, see discussion [here](https://community.wolfram.com/groups/-/m/t/2380163).
For a composition $F(x)=h4(h3(h2(h1(x))))$ with scalar-valued $h4$, the Hessian vector product of $F$ with vector $v$ is the following sum of tensor networks:
![enter image description here][9]
We can visually inspect this sum to manually come up with a schedule for computing the Hessian-vector product $\partial^2 F \cdot v$ that takes about the same time as computing the derivative $\partial F$:
![enter image description here][10]
Here's a [notebook](https://www.wolframcloud.com/obj/yaroslavvb/newton/hvp.nb) (see below) implementing this schedule to compute Hessian-vector product of composition of functions $h_1, h_2, \ldots, h_n$ in Mathematica. It gives the same result as the built-in `D[]` functionality, but much faster.
The main parts of the implementation are below. First, define the Jacobians and Hessians of the intermediate functions stored in a list `hs`:
```
(* derivative of i'th layer *)
dh[i_] := (
vars = xs[[i]];
D[hs[[i]][vars], {vars, 1}] /. Thread[vars -> a[i - 1]]
);
(* Hessian of i'th layer *)
d2h[i_] := (
vars = xs[[i]];
D[hs[[i]][vars], {vars, 2}] /. Thread[vars -> a[i - 1]]
);
```
Now define the recursions to compute the schedule above. In the autodiff literature, this would be called a "forward-on-backward" schedule:
```
(* Forward AD, f[i]=dh[i]....dh[1].v *)
f[0] = v0;
f[i_?Positive] := dh[i] . f[i - 1];
(* Backward AD, *)
b[0] = {};
b[i_?Positive] := dot[b[i - 1], dh[n - i + 1]];
(* Activations *)
a[0] := x0;
a[i_?Positive] := h[i][a[i - 1]];
```
Now define the step that combines previous quantities into Hessian-vector products:
```
fb[i_] := dot[b[n - i], d2h[i]] . f[i - 1];
F[0] = fb[n];
F[i_?Positive] := F[i - 1] . dh[n - i] + fb[n - i];
```
After hours of manually debugging mismatched indices, it was especially satisfying to be able to write the following and have it work:
```
hess=D[H,{x0,2}];
Assert[Reduce[F[n-1]==hess.v0]];
```
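As a cross-check on the recursions, here is a scalar (1-D) Python sketch of the same schedule for a hypothetical chain $F(x)=h_3(h_2(h_1(x)))$. In 1-D the Hessian-vector product collapses to $F''(x)\,v$, and the per-layer terms become exactly the suffix-product times Hessian times squared-prefix-product terms of Faà di Bruno's formula:

```python
import math

# Hypothetical 1-D chain: F(x) = (exp(sin x))^2 = exp(2 sin x)
hs   = [math.sin, math.exp, lambda t: t * t]
dhs  = [math.cos, math.exp, lambda t: 2 * t]              # h_i'
d2hs = [lambda t: -math.sin(t), math.exp, lambda t: 2.0]  # h_i''

def hvp(x, v):
    # forward pass: activations a[i] = h_i(...h_1(x)...)
    a = [x]
    for h in hs:
        a.append(h(a[-1]))
    n = len(hs)
    dh  = [dhs[i](a[i]) for i in range(n)]
    d2h = [d2hs[i](a[i]) for i in range(n)]
    # suffix products b[i] = dh[i] * ... * dh[n-1]  ("backward" pass)
    b = [1.0] * (n + 1)
    for i in range(n - 1, -1, -1):
        b[i] = b[i + 1] * dh[i]
    # prefix products c[i] = dh[0] * ... * dh[i-1]  ("forward" pass)
    c = [1.0] * (n + 1)
    for i in range(n):
        c[i + 1] = c[i] * dh[i]
    # one term per layer Hessian, as in the sum of tensor networks
    return v * sum(b[i + 1] * d2h[i] * c[i] ** 2 for i in range(n))

x0 = 0.3
# closed form: F''(x) = exp(2 sin x) (4 cos^2 x - 2 sin x)
closed_form = math.exp(2 * math.sin(x0)) * (4 * math.cos(x0) ** 2
                                            - 2 * math.sin(x0))
assert abs(hvp(x0, 1.0) - closed_form) < 1e-9
```

This is only the scalar skeleton, not the notebook's matrix implementation, but it shows how the forward products, backward products, and per-layer Hessians combine.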
Another example of derivative quantity that an autodiff engine should support is the Hessian trace. For the problem above, it can be written as the following sum of tensor networks:
![enter image description here][11]
Boxes corresponding to $\nabla h_i$ are intermediate Jacobian matrices, while entries like $\nabla^2 h_i$ are Hessian tensors. They are potentially large, but for common intermediate functions occurring in neural networks, Hessians tend to be structured -- zero, rank-1, diagonal, or a combination of the two. For instance, for a ReLU neural network with linear/convolutional layers, the only component with a non-zero Hessian is the $h4$ loss function, hence the Hessian trace reduces to a loop tensor network
![enter image description here][12]
Furthermore, if the loss function $h4$ is the cross entropy loss, its Hessian corresponds to a
[difference of diagonal and rank-1](https://yaroslavvb.medium.com/using-evolved-notation-to-derive-the-hessian-of-cross-entropy-loss-195f8c7b3a92) matrices, so the trace can be written as the following difference of two tensor networks
<img src="https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2022-01-04at11.36.13AM.png&userId=1046145" width="300">
The 3 way hyper-edge connected to `p` in the diagram above is called the "COPY-tensor" in Quantum Tensor Network literature (see Biamonte's excellent [notes](https://arxiv.org/abs/1912.10049)), but in einsum notation it's just an index that occurs in 3 factors, `np.einsum("i,ik,il->kl", p, h3, h3)`
In addition to einsum notation, the expression above can be written in matrix notation as follows
$$\text{tr} \nabla^2F=\|\text{diag}(\sqrt{p}) \nabla h_3 \nabla h_2 \nabla h_1\|_F^2 - p'\nabla h_3 \nabla h_2 \nabla h_1 \nabla h_1' \nabla h_2' \nabla h_3' p $$
Carl Woll has a [nice package](https://mathematica.stackexchange.com/questions/207336/generating-an-efficient-way-to-compute-einsum) for converting einsums to matrix notation, it could probably be extended to also work for this case.
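The hyper-edge contraction is easy to check numerically. With a toy probability vector p and Jacobian h3 (hypothetical values), the explicit-loop version of `np.einsum("i,ik,il->kl", p, h3, h3)` agrees with the matrix form h3' . diag(p) . h3:

```python
p  = [0.2, 0.5, 0.3]
h3 = [[1.0, 2.0], [0.0, -1.0], [3.0, 1.0]]
n, m = len(p), len(h3[0])

# explicit loops for einsum("i,ik,il->kl", p, h3, h3): the index i
# occurs in three factors, i.e. the COPY-tensor / hyper-edge
copy_contract = [[sum(p[i] * h3[i][k] * h3[i][l] for i in range(n))
                  for l in range(m)] for k in range(m)]

def matmul(a, b):
    return [[sum(a[i][j] * b[j][k] for j in range(len(b)))
             for k in range(len(b[0]))] for i in range(len(a))]

diag_p = [[p[i] if i == j else 0.0 for j in range(n)] for i in range(n)]
h3_t = [list(col) for col in zip(*h3)]
matrix_form = matmul(matmul(h3_t, diag_p), h3)

assert all(abs(copy_contract[k][l] - matrix_form[k][l]) < 1e-12
           for k in range(m) for l in range(m))
```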
The problem of optimally contracting a general tensor network is hard, and people who implemented packages like [opt_einsum](https://github.com/dgasmith/opt_einsum) and [Cotengra](https://github.com/jcmgray/cotengra) mainly tried to approximate optimal solutions for large problems, like [validating](https://quantum-journal.org/papers/q-2021-03-15-410/) Google's Sycamore quantum supremacy result.
However, tensor networks occurring in automatic differentiation tasks are much easier than general tensor networks:
- all examples above are planar, for which a schedule within a factor of $n*d$ of optimal cost can be found in $O(n^3)$ time using the Ratcatcher algorithm, where $n$ is the number of tensors and $d$ is the largest [bond dimension](https://tensornetwork.org/mps/#toc_1) ([discussion](https://cstheory.stackexchange.com/questions/50794/complexity-of-optimal-elimination-for-a-planar-tensor-network))
- furthermore, all examples above are "series-parallel" graphs, treewidth=2
- furthermore, some examples are trees, for which a polynomial algorithm to obtain an optimal contraction algorithm has been described [here](https://journals.aps.org/pre/abstract/10.1103/PhysRevE.100.043309)
A practically interesting solution would be to create an einsum optimizer that can do one or more of the following:
- compute optimal contraction schedule for a tree-structured tensor network, allowing hyper-edges (detailed example [here](https://cs.stackexchange.com/questions/146680/generalizing-matrix-chain-problem-optimal-summation-in-a-tree))
- compute optimal contraction schedule for a series-parallel tensor network
- compute optimal computation schedule for a sum of chain tensor networks (especially [sums corresponding](https://cs.stackexchange.com/questions/147773/optimizing-a-sum-of-matrix-chains) to derivatives of compositions with skip connections)
- compute optimal computation schedule for sum of tree tensor networks
- compute optimal computation schedule for a sum of series-parallel tensor networks
A tool that can do all of these efficiently for networks with 2-20 tensors would cover most autodiff applications that make sense to attempt in Mathematica.
&[Wolfram Notebook][13]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2022-01-04at5.02.41PM.png&userId=1046145
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2022-01-04at5.03.12PM.png&userId=1046145
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2022-01-04at5.39.19PM.png&userId=1046145
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2022-01-04at8.48.16AM.png&userId=1046145
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2022-01-05at12.58.47PM.png&userId=1046145
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2022-01-05at7.39.30AM.png&userId=1046145
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2022-01-04at8.53.12AM.png&userId=1046145
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2022-01-04at8.57.59AM.png&userId=1046145
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2022-01-04at9.07.49AM.png&userId=1046145
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2022-01-04at3.33.03PM.png&userId=1046145
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2022-01-04at10.48.12AM.png&userId=1046145
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2022-01-04at10.05.30AM.png&userId=1046145
[13]: https://www.wolframcloud.com/obj/268f3a39-1e2e-4218-86da-1cca0dbb9f81Yaroslav Bulatov2022-01-04T18:30:14ZHolt-Winters method should work well for forecasting trading volume and stock price
https://community.wolfram.com/groups/-/m/t/3287620
I am trying to forecast the trading volume along with the stock price by using the Holt-Winters exponential smoothing method, a robust forecasting technique for data that exhibits seasonal patterns along with a long-term trend, making it suitable for predicting stock performance or trading volume that is influenced by cyclic factors. I came up with this code:
(* Define the time series data for volume and price *)
volumeData = TimeSeries[{4.04, 5.05, 6.06, 6.85, 7.81, 9.29, 11.70, 14.60, 18.00, 21.40, 26.30, 30.60, 36.70},
{DateObject[{2024, 9}], DateObject[{2024, 10}], DateObject[{2024, 11}], DateObject[{2024, 12}],
DateObject[{2025, 1}], DateObject[{2025, 2}], DateObject[{2025, 3}], DateObject[{2025, 4}],
DateObject[{2025, 5}], DateObject[{2025, 6}], DateObject[{2025, 7}], DateObject[{2025, 8}],
DateObject[{2025, 9}]}];
priceData = TimeSeries[{0.44, 0.80, 1.10, 1.50, 1.80, 2.00, 2.50, 3.20, 3.80, 4.20, 4.50, 4.80, 5.00},
{DateObject[{2024, 9}], DateObject[{2024, 10}], DateObject[{2024, 11}], DateObject[{2024, 12}],
DateObject[{2025, 1}], DateObject[{2025, 2}], DateObject[{2025, 3}], DateObject[{2025, 4}],
DateObject[{2025, 5}], DateObject[{2025, 6}], DateObject[{2025, 7}], DateObject[{2025, 8}],
DateObject[{2025, 9}]}];
(* Apply Holt-Winters exponential smoothing *)
holtWintersVolume = TimeSeriesModelFit[volumeData, {"HoltWinters", {"Additive"}}];
holtWintersPrice = TimeSeriesModelFit[priceData, {"HoltWinters", {"Additive"}}];
(* Generate forecast for the next 12 months *)
volumeForecast = TimeSeriesForecast[holtWintersVolume, 12];
priceForecast = TimeSeriesForecast[holtWintersPrice, 12];
(* Plot both original and smoothed data *)
volumePlot = DateListPlot[{volumeData, volumeForecast}, PlotLegends -> {"Original Volume", "Forecasted Volume"},
PlotLabel -> "Holt-Winters Volume Forecast", ImageSize -> Medium];
pricePlot = DateListPlot[{priceData, priceForecast}, PlotLegends -> {"Original Price", "Forecasted Price"},
PlotLabel -> "Holt-Winters Price Forecast", ImageSize -> Medium];
{volumePlot, pricePlot}
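For reference, the additive Holt-Winters recurrences behind the smoothing are only a few lines. Here is a plain-Python sketch with hypothetical smoothing constants and a deliberately crude initialization, not a reproduction of TimeSeriesModelFit:

```python
def holt_winters_additive(y, m, alpha=0.5, beta=0.5, gamma=0.5, horizon=12):
    """Additive Holt-Winters: level, trend, and m-periodic seasonal
    components, updated once per observation (toy initialization)."""
    level = y[0]
    trend = (y[m] - y[0]) / m if len(y) > m else y[1] - y[0]
    season = [y[i] - level for i in range(m)]
    for t, obs in enumerate(y):
        s = season[t % m]
        prev_level = level
        level = alpha * (obs - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % m] = gamma * (obs - level) + (1 - gamma) * s
    # h-step-ahead forecasts: extrapolated trend plus seasonal term
    return [level + (h + 1) * trend + season[(len(y) + h) % m]
            for h in range(horizon)]

# On a constant series the forecast is flat, as it should be:
print(holt_winters_additive([5.0] * 16, m=4, horizon=4))
```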
However, Wolfram can't seem to recognize that I am passing the values from TimeSeries to the variables volumeData and priceData. If I retain TimeSeries only, it manages to read them at least and move one step forward. What seems to be the issue? It always says no Wolfram Language Translation found.Maria Rosario2024-09-29T12:00:08ZCan PickMode and PickedElements be used in Grid[]?
https://community.wolfram.com/groups/-/m/t/3286392
Dear Members,
I am using the ideas of Kirill Belov (https://community.wolfram.com/groups/-/m/t/3092778) to construct objects representing components of electrical distribution systems. To model a system, a list of such objects is required. To edit the default parameters or to delete an object, I want to be able to select an object dynamically. I have tried TableView with PickMode and PickedElements. However, TableView will not display the OOP elements. My question is whether I can use PickMode and PickedElements in tables built with Grid[]. I am using Wolfram 14.1 and have not been able to find the help for these two functions. I have learnt about them in the presentation by Jason Abernathy (https://www.wolfram.com/broadcast/video.php?c=104&v=3953&p=65&&disp=grid ).
I will appreciate your kind help.
Regards
JesusJ Jesus Rico-Melgoza2024-09-28T23:23:44ZWolfram Language Runtime (SDK) demo in Zig
https://community.wolfram.com/groups/-/m/t/3252532
![Wolfram Language Runtime (SDK) demo in Zig][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1169Hero.png&userId=20103
[2]: https://www.wolframcloud.com/obj/5c80768d-a495-4c9a-baa0-655d8dc47903Daniel Sanchez2024-08-21T02:25:51Z[WSA24] Simulating pendulums using Wolfram language
https://community.wolfram.com/groups/-/m/t/3286042
![enter image description here][1]
&[Wolfram Notebook][2]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=logo.png&userId=3272003
[2]: https://www.wolframcloud.com/obj/2f133411-fd55-403f-85fb-a4dcc30f7413Narek Arakelyan2024-09-27T11:34:31Z