<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://community.wolfram.com">
    <title>Community RSS Feed</title>
    <link>https://community.wolfram.com</link>
    <description>RSS Feed for Wolfram Community showing ideas tagged with Signal Processing sorted by most replies.</description>
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2527035" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/96823" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/91868" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/463721" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/344278" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2166833" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1383518" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/917048" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/587562" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/861508" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2071595" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/992466" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1383630" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/3299985" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/788811" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2887842" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1219764" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/203335" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2489380" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2200588" />
      </rdf:Seq>
    </items>
  </channel>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2527035">
    <title>[WSG22] Daily Study Group: Signals, Systems and Signal Processing</title>
    <link>https://community.wolfram.com/groups/-/m/t/2527035</link>
    <description>A Wolfram U daily study group on &amp;#034;Signals, Systems and Signal Processing&amp;#034; begins on May 16, 2022.&#xD;
&#xD;
Join instructors [@Leila Fuladi][at0] and [@Mariusz Jankowski][at1] and a cohort of fellow learners to study the concepts, mathematics, principles and techniques of signal processing. We&amp;#039;ll cover methods of analysis for both continuous-time and discrete-time signals and systems, sampling and introductory filter design. The concepts and methods of signals and systems play an important role in many areas of science and engineering, and many everyday signal processing examples are included. A basic working knowledge of the Wolfram Language is recommended.&#xD;
&#xD;
**[REGISTER HERE][1]**&#xD;
&#xD;
![enter image description here][2]&#xD;
&#xD;
&#xD;
  [1]: https://www.bigmarker.com/series/daily-study-group-signals-systems-and-signal-processing/series_details?utm_bmcr_source=community&#xD;
  [2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=WolframUBanner.jpeg&amp;amp;userId=130003&#xD;
&#xD;
 [at0]: https://community.wolfram.com/web/leilaf&#xD;
&#xD;
 [at1]: https://community.wolfram.com/web/mariuszj</description>
    <dc:creator>Abrita Chakravarty</dc:creator>
    <dc:date>2022-05-06T22:26:34Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/96823">
    <title>Simple, fast compiled peak detection based on moving average</title>
    <link>https://community.wolfram.com/groups/-/m/t/96823</link>
    <description>Recently [b][url=http://community.wolfram.com/groups/-/m/t/91868]Christopher coded a neat wavelet-based method for peak detection[/url][/b]. Peak detection is often needed in various scientific fields: all sorts of spectral analysis, time series and general feature recognition in data come to mind right away.  There are many methods for peak detection. Besides his wavelet-based code Christopher also mentions a built-in MaxDetect function rooted in image processing. [b][url=http://reference.wolfram.com/mathematica/ref/MaxDetect.html]MaxDetect[/url][/b], though, being a rather elaborate tool for multi-dimensional data (2D, 3D images) and with a specific image-processing-minded parameter tuning, was not meant to target time series and other 1D data. This got me thinking.

[b]Can we come up with a minimal compile-able peak detection code that would be accurate, robust and fast in most situations for 1D data?[/b]

I am not an expert in the subject, but intuitively peak detection consists of two stages. 
[list]
[*][b]Finding all maxima[/b]. This can be done via Differences with selection of neighboring difference pairs that change sign from positive to negative. Such a pair indicates a local maximum.
[*][b]Filtering out peaks[/b]. Selecting those maxima that are &amp;#034;high&amp;#034; splashes of amplitude with respect to their immediate neighborhood. &amp;#034;High&amp;#034; is relative and depends on the particular data, which is why it is set as a tuning parameter in the algorithm.
[/list]To illustrate the 2nd point let&amp;#039;s take a look at the [b][url=http://www.wolframalpha.com/input/?i=Albert+Einstein+Wikipedia+page+hits+history]Albert Einstein Wikipedia page hits history[/url][/b] below. Obviously a large peak in the past can be lower than the current average if there is a strong trend in the data. This is why we need windowing when looking for peaks: to compare a peak to its immediate neighborhood. 

[url=http://www.wolframalpha.com/input/?i=Albert+Einstein+Wikipedia+page+hits+history][img=width: 600px; height: 469px;]/c/portal/getImageAttachment?filename=ScreenShot2013-08-14at3.13.07PM.png&amp;amp;userId=11733[/img][/url]
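
To see stage 1 in isolation, here is an uncompiled sketch that returns the positions of all local maxima (an illustration only, for any 1D numerical list data; the fast compiled version follows below):[mcode]maxima = 1 + Flatten[Position[Partition[Sign[Differences[data]], 2, 1], {1, -1}]];[/mcode]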

Without further ado, here is a function written specifically in terms of functions that can be compiled. For example, I do not use MovingAverage, but use a trick with Partition instead.[mcode]PeakDetect = Compile[{{data, _Real, 1}, {width, _Integer}, {cut, _Real}}, (Table[0, {width}]~Join~
      Map[UnitStep[# - cut] &amp;amp;, data[[1 + width ;; -1 - width]] - Map[Mean, Partition[data, 1 + 2 width, 1]]]~Join~
      Table[0, {width}]) ({0}~Join~ Map[Piecewise[{{1, Sign[#] == {1, -1}}, {0, Sign[#] != {1, -1}}}] &amp;amp;, 
       Partition[Differences[data], 2, 1]]~Join~{0}), CompilationTarget -&amp;gt; &amp;#034;C&amp;#034;];[/mcode]The legend for the function arguments is the following:[list]
[*]data - 1D numerical list of data
[*]width - half-width of the moving average window, not including the central point
[*]cut - threshold at which to cut off the peak, in natural units of the data amplitudes 
[/list]Now lets see some usage cases. Lets import the same Albert Einstein data as a proof of concept. [mcode]raw = WolframAlpha[ &amp;#034;albert einstein&amp;#034;, {{&amp;#034;PopularityPod:WikipediaStatsData&amp;#034;, 1}, &amp;#034;TimeSeriesData&amp;#034;}];
data = raw[[All, 2]][[All, 1]];[/mcode]We use a total window width of 5 points here and cut off peaks at one standard deviation of the whole data. The peak labeled May 2008 is nicely picked up even though it is comparable to the then-current average. This peak is most probably due to the publication on May 13, 2008 of [b][url=http://www.amazon.com/Einstein-Life-Universe-Walter-Isaacson/dp/0743264746]one of the most famous books about Einstein[/url][/b], a New York Times bestseller that also won a Quill Award. Of course you can play with the controls to pick or drop peaks. On the top plot one sees the data, the moving average, and bands formed by the moving average displaced up and down by a fraction of the standard deviation. Any maximum above the top band becomes a peak.

[url=http://www.amazon.com/Einstein-Life-Universe-Walter-Isaacson/dp/0743264746][img=width: 800px; height: 382px;]/c/portal/getImageAttachment?filename=9672ScreenShot2013-08-14at3.47.18PM.png&amp;amp;userId=11733[/img][/url]
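
To extract the picked peak dates programmatically, outside the interactive app, one can use the same Pick trick the Manipulate uses below (a minimal sketch, with raw and data as defined above and half-width 2, i.e. a 5-point window):[mcode]peakDates = Pick[raw, PeakDetect[data, 2, StandardDeviation[data]], 1][[All, 1]];[/mcode]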

The code for the app is at the very end. Let&amp;#039;s try a different data set: recent sunspot activity. [mcode]raw = WolframAlpha[&amp;#034;sun spot&amp;#034;, {{&amp;#034;SunspotsPartialTimeSeries:SpaceWeatherData&amp;#034;, 1}, &amp;#034;TimeSeriesData&amp;#034;}];
data = raw[[All, 2]];[/mcode]We right away find a peak at the May 2013 mark - a [b][url=http://en.wikipedia.org/wiki/Solar_cycle_24#May_2013]most powerful recent event, described in the Wikipedia page here[/url][/b]. Please let me know if you have suggestions on how to speed this up or improve it generally. I would be very curious to know your opinions and critique. 

[color=#ff0000][i]The .GIF below is large - wait till it is loaded.[/i][/color]

[img=width: 800px; height: 380px;]/c/portal/getImageAttachment?filename=sunspot.gif&amp;amp;userId=11733[/img]

The following reference could be useful:[list]
[*][b][url=http://www.tcs-trddc.com/trddc_website/pdf/SRL/Palshikar_SAPDTS_2009.pdf]Simple Algorithms for Peak Detection in Time-Series[/url][/b]
[*][b][url=http://www.mdpi.com/1999-4893/5/4/588]An Efficient Algorithm for Automatic Peak Detection in Noisy Periodic and Quasi-Periodic Signals[/url][/b]
[/list]The code for the interactive app:[mcode]Manipulate[
 tt = {#, 
     Rotate[DateString[#, {&amp;#034;MonthNameShort&amp;#034;, &amp;#034; &amp;#034;, &amp;#034;Year&amp;#034;}], Pi/2]} &amp;amp; /@
    Pick[raw, PeakDetect[data, wid, thr StandardDeviation[data]], 1][[
    All, 1]];
 
 Column[{
   
   ListLinePlot[{data, 
     ArrayPad[MovingAverage[data, 1 + 2 wid], wid, &amp;#034;Fixed&amp;#034;], 
     ArrayPad[MovingAverage[data, 1 + 2 wid], wid, &amp;#034;Fixed&amp;#034;] + 
      thr StandardDeviation[data], 
     ArrayPad[MovingAverage[data, 1 + 2 wid], wid, &amp;#034;Fixed&amp;#034;] - 
      thr StandardDeviation[data]}, AspectRatio -&amp;gt; 1/6, 
    ImageSize -&amp;gt; 800, Filling -&amp;gt; {2 -&amp;gt; {1}, 3 -&amp;gt; {4}}, 
    FrameTicks -&amp;gt; {None, Automatic}, 
    FillingStyle -&amp;gt; {Directive[Red, Opacity[.7]], 
      Directive[Blue, Opacity[.7]], Directive[Gray, Opacity[.1]]}, 
    PlotStyle -&amp;gt; Opacity[.7], PlotRange -&amp;gt; All, Frame -&amp;gt; True, 
    GridLines -&amp;gt; Automatic, PlotRangePadding -&amp;gt; 0],
   
   Show[
    DateListPlot[raw, Joined -&amp;gt; True, AspectRatio -&amp;gt; 1/6, 
     ImageSize -&amp;gt; 800, Filling -&amp;gt; Bottom, Ticks -&amp;gt; {tt, Automatic}, 
     Frame -&amp;gt; False, Mesh -&amp;gt; All, PlotRange -&amp;gt; All],
    DateListPlot[
     If[# == {}, raw[[1 ;; 2]], #, #] &amp;amp;[
      Pick[raw, PeakDetect[data, wid, thr StandardDeviation[data]], 
       1]], AspectRatio -&amp;gt; 1/6, ImageSize -&amp;gt; 800, 
     PlotStyle -&amp;gt; Directive[Red, PointSize[.007]], PlotRange -&amp;gt; All]
    , PlotRangePadding -&amp;gt; {0, Automatic}]
   
   }],
 Row[{
   Control[{{thr, 1, &amp;#034;threshold&amp;#034;}, 0, 2, Appearance -&amp;gt; &amp;#034;Labeled&amp;#034;}], 
   Spacer[100],
    Control[{{wid, 3, &amp;#034;half-width&amp;#034;}, 1, 10, 1, Setter}]
   }]
 ][/mcode]
[b]============== UPDATE =================[/b]

Thank you all very much for contributing. I collected everyone&amp;#039;s efforts and Danny&amp;#039;s two functions in a single completely compile-able expression which seems to give the shortest time - though only marginally faster than Danny&amp;#039;s ingenious maneuver. I very much liked the format suggested by Christopher, the one that Michael also kept in his packages. But I wanted to do some benchmarking and thus followed the format returned by the function [b][url=http://reference.wolfram.com/mathematica/ref/MaxDetect.html]MaxDetect[/url][/b] - simply for the sake of speed comparison. This format is just a binary list of the length of the original data, with 1s in the positions of found peaks. 

Here is the function:[mcode]PeakDetect = 
  Compile[{{data, _Real, 1}, {width, _Integer}, {cut, _Real}}, 
   (Table[0, {width}]~Join~
      UnitStep[
       Take[data, {1 + width, -1 - width}] - 
          (Module[{tot = Total[#1[[1 ;; #2 - 1]]], last = 0.}, 
              Table[tot = tot + #1[[j + #2]] - last; 
               last = #1[[j + 1]];
               tot, {j, 0, Length[#1] - #2}]]/#2) &amp;amp;[data, 1 + 2 width] - cut]
    ~Join~Table[0, {width}]) ({0}~Join~
      Table[If[Sign[{data[[ii + 1]] - data[[ii]], 
           data[[ii + 2]] - data[[ii + 1]]}] == {1, -1}, 1, 0], 
           {ii, 1, Length[data] - 2}]~Join~{0}), CompilationTarget -&amp;gt; &amp;#034;C&amp;#034;];
dat = RandomReal[1, 10^7];

pks = MaxDetect[dat]; // AbsoluteTiming
Total[pks]
(* ======== output ======== 
{62.807928, Null}
3333361
   ======== output ======== *)

pks = PeakDetect[dat, 1, 0]; // AbsoluteTiming
Total[pks]
(* ======== output ======== 
{1.560074, Null}
3333360
   ======== output ======== *)[/mcode]The speed benchmarks above, on 10 million data points, show a roughly 40-times speed-up.</description>
    <dc:creator>Vitaliy Kaurov</dc:creator>
    <dc:date>2013-08-14T21:21:54Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/91868">
    <title>Wavelet-Based Peak Detection</title>
    <link>https://community.wolfram.com/groups/-/m/t/91868</link>
    <description>In Mathematica the closest thing (currently) to a peak detect function is a function called MaxDetect, which is unfortunately very slow with large datasets and could be better at finding peaks.  So (for a project which I will post here soon) I decided to write my own peak detection function.

I found a nice article from National Instruments here: [url=http://www.ni.com/white-paper/5432/en/]http://www.ni.com/white-paper/5432/en/[/url]

I have subsequently implemented this in Mathematica.

So the basic idea behind this algorithm is that whenever the detail wavelet crosses zero, there is a peak.  Let me explain this:

(I am assuming that you understand the basic concept of wavelets)

When you take the wavelet transform of a 1-dimensional object, you get the approximation coefficients and the detail coefficients.  As their names suggest, the approximation coefficients capture the basic shape of the signal (useful for noise reduction) while the detail coefficients capture the little bumps and valleys that texture the original wave.


For example

Here is some noisy data:

[img=width: 449px; height: 270px; ]/c/portal/getImageAttachment?filename=noisy.jpg&amp;amp;userId=24497[/img]


Approximate:


[img=width: 413px; height: 165px; ]/c/portal/getImageAttachment?filename=approx.jpg&amp;amp;userId=24497[/img]

Detail:
[img=width: 392px; height: 157px; ]/c/portal/getImageAttachment?filename=detail.jpg&amp;amp;userId=24497[/img]



So to access the detail vs approx wavelets from some wavelet data like this:

[mcode]data = Table[Sin[x] + RandomReal[{0, .1}], {x, 0, 2 Pi, .01}];
dwd = DiscreteWaveletTransform[data, HaarWavelet[], 3][/mcode]
So I am running a wavelet transform of data with a Haar wavelet for 3 levels of refinement.

I can see the various coefficients like this:
[mcode]dwd[&amp;#034;ListPlot&amp;#034;][/mcode][img]/c/portal/getImageAttachment?filename=ScreenShot2013-08-06at8.39.46PM.png&amp;amp;userId=24497[/img]
The coefficients whose indices end in 1 are detail coefficients while those ending in 0 are approximation coefficients.
(There is a {0} and a {0,0} wavelet but it doesn&amp;#039;t show them).

The deeper you go into these coefficients the fewer wavelets are used to construct the wave, and therefore they are more basic.
In this case we want the highest quality data so we can just look at level 1 (the coefficient named &amp;#034;{1}&amp;#034;).
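
If you want to inspect that coefficient directly, the DiscreteWaveletData object can be queried by its index (a quick sketch using the dwd from above):
[mcode]detail1 = dwd[{1}];
ListLinePlot[detail1, PlotRange -&amp;gt; All][/mcode]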


So time to get down to the real code.

Here is some example data, in this case a Fourier transform.

[img=width: 764px; height: 473px; ]/c/portal/getImageAttachment?filename=data.jpg&amp;amp;userId=24497[/img]

This is a fairly easy case but you can see that the number of data points is huge and there is some noise toward the bottom.

So the first thing we need to do is get rid of all the little peaks we don&amp;#039;t care about.

Here &amp;#034;data&amp;#034; is the data with peaks in it and &amp;#034;min&amp;#034; is the minimum size of the peaks that will be detected.
[mcode]dwd=DiscreteWaveletTransform[If[# &amp;lt; min, 0, #] &amp;amp; /@ data, HaarWavelet[],
  1];[/mcode]
In this case I have set &amp;#034;min&amp;#034; to 1.

We can now run the inverse wavelet transform of that for the first detail coefficient:
[mcode]InverseWaveletTransform[dwd, HaarWavelet[], {1}][/mcode]Which looks like this:


[img=width: 800px; height: 491px; ]/c/portal/getImageAttachment?filename=inversewavelet.jpg&amp;amp;userId=24497[/img]

Now we detect every time those peaks cross zero.  You may also notice that there is another threshold here which I found to be useful.
[mcode]Normal[CrossingDetect[
  If[Abs[#] &amp;lt; ther, 0, #] &amp;amp; /@ 
   InverseWaveletTransform[
    DiscreteWaveletTransform[If[# &amp;lt; min, 0, #] &amp;amp; /@ data, 
     HaarWavelet[], 1], HaarWavelet[], {1}]]][/mcode]
[img=width: 800px; height: 493px; ]/c/portal/getImageAttachment?filename=peaks.jpg&amp;amp;userId=24497[/img]


This gets the position of the points in each peak:
[mcode]Flatten[Position[
  Normal[CrossingDetect[
    If[Abs[#] &amp;lt; ther, 0, #] &amp;amp; /@ 
     InverseWaveletTransform[
      DiscreteWaveletTransform[If[# &amp;lt; min, 0, #] &amp;amp; /@ data, 
       HaarWavelet[], 1], HaarWavelet[], {1}]]]
, 1]][/mcode]
We then split them into groups for each peak:
[mcode]Split[Flatten[
  Position[Normal[
    CrossingDetect[
     If[Abs[#] &amp;lt; ther, 0, #] &amp;amp; /@ 
      InverseWaveletTransform[
       DiscreteWaveletTransform[If[# &amp;lt; min, 0, #] &amp;amp; /@ data, 
        HaarWavelet[], 1], HaarWavelet[], {1}]]], 1]], 
 Abs[#1 - #2] &amp;lt; cther &amp;amp;][/mcode]
&amp;#034;cther&amp;#034; is the distance 2 points have to be apart before they are counted as separate peaks.  In this case I use 50 because I don&amp;#039;t want it detecting false peaks right next to the real ones.  But you could set it to split after every run of consecutive 1s.

After this it is pretty technical and so I present (drumroll) the final function!!!

[mcode]FindPeaks[data_, ther_: .2, min_: 0, cther_: 50] := 
 Function[peaks, 
   Transpose[{peaks, 
     data[[peaks]]}]][(#[[Ordering[data[[#]], -1][[1]]]] &amp;amp; /@ 
    Split[Flatten[
      Position[
       Normal[CrossingDetect[
         If[Abs[#] &amp;lt; ther, 0, #] &amp;amp; /@ 
          InverseWaveletTransform[
           DiscreteWaveletTransform[If[# &amp;lt; min, 0, #] &amp;amp; /@ data, 
            HaarWavelet[], 1], HaarWavelet[], {1}]]], 1]], 
     Abs[#1 - #2] &amp;lt; cther &amp;amp;])][/mcode]
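
A minimal usage sketch (assuming &amp;#034;data&amp;#034; is a 1D list with peaks, as above; the function returns {position, height} pairs, which we overlay on the data):
[mcode]peaks = FindPeaks[data, .2, 1, 50];
ListLinePlot[data, PlotRange -&amp;gt; All, Epilog -&amp;gt; {Red, PointSize[.01], Point[peaks]}][/mcode]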
So here it is working on the Fourier transform!

[img=width: 800px; height: 495px; ]/c/portal/getImageAttachment?filename=final.jpg&amp;amp;userId=24497[/img]

The blue is the unmodified data and the red is a line from peak to peak for the peaks that fell within our requirements; with different settings we could also get all of the smaller peaks off to the right.

Well, that&amp;#039;s all I&amp;#039;ve got, so if you have any questions about the code or edits that make it better, post them here!

:D</description>
    <dc:creator>Christopher Wolfram</dc:creator>
    <dc:date>2013-08-07T00:46:29Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/463721">
    <title>Aftermath of the solar eclipse</title>
    <link>https://community.wolfram.com/groups/-/m/t/463721</link>
    <description>![enter image description here][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=4902Hero.gif&amp;amp;userId=20103&#xD;
  [2]: https://www.wolframcloud.com/obj/5838919a-e64d-49fe-9193-c6ce123184f2</description>
    <dc:creator>Marco Thiel</dc:creator>
    <dc:date>2015-03-21T01:18:08Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/344278">
    <title>Using your smart phone as the ultimate sensor array for Mathematica</title>
    <link>https://community.wolfram.com/groups/-/m/t/344278</link>
    <description>Many fantastic posts in this community describe how to connect external devices to Mathematica and how to read the data. Connecting Mathematica to an Arduino for example allows you to read and then work with data from all kinds of sensors. In most of the cases, when we speak about connected devices, additional hardware is necessary. Smart phones, on the other hand, are our permanent companions and they host a wide array of sensors that we can tap into with Mathematica. For this post, I will be using an iPhone 5 - but a similar approach can be taken with many other smart phones. [Björn Schelter][1] and myself have worked on this together.&#xD;
&#xD;
The first thing we need in order to be able to read the iPhone is a little app, which can be purchased on the iTunes App Store: it is called [Sensor Data][2]. When you open the app you see a screen like this one. &#xD;
&#xD;
![enter image description here][3]&#xD;
&#xD;
At the top of the screen you see an IP address and a port number (after the colon!). These numbers will be important to connect to the phone and either download data or stream sensor data directly. If you click on &amp;#034;start capture&amp;#034;, the iPhone&amp;#039;s data will be stored on the phone and can be downloaded into Mathematica. In this post we are rather interested in the &amp;#034;Streaming&amp;#034; function. If you click on the respective button on the bottom you get to a screen like this:&#xD;
&#xD;
![enter image description here][4]&#xD;
&#xD;
There you can choose a frequency for the measurements and start the streaming. In fact we can also choose which sensors we want to use via the Config button. &#xD;
&#xD;
![enter image description here][5]&#xD;
&#xD;
The following Mathematica code will work when all (!) sensors are switched on. Now we are ready to connect to the iPhone. Switch the streaming on and execute the following commands:&#xD;
&#xD;
    ClearAll[&amp;#034;Global`*&amp;#034;];&#xD;
    For[i = 1, i &amp;lt; 3, i++, Quiet[InstallJava[]]];&#xD;
    Needs[&amp;#034;JLink`&amp;#034;]&#xD;
&#xD;
and then &#xD;
&#xD;
    LoadJavaClass[&amp;#034;java.util.Arrays&amp;#034;];&#xD;
    packet = JavaNew[&amp;#034;java.net.DatagramPacket&amp;#034;, JavaNew[&amp;#034;[B&amp;#034;, 1024], 1024];&#xD;
    socket = JavaNew[&amp;#034;java.net.DatagramSocket&amp;#034;, 10552];&#xD;
    socket@setSoTimeout[10];&#xD;
    listen[] := If[$Failed =!= Quiet[socket@receive[packet], Java::excptn], &#xD;
    record =JavaNew[&amp;#034;java.lang.String&amp;#034;, java`util`Arrays`copyOfRange @@ &#xD;
    packet /@ {getData[], getOffset[], getLength[]}]@toString[] //&#xD;
    Sow];&#xD;
&#xD;
Next we have to define a ScheduledTask to read the sensors:&#xD;
&#xD;
    RemoveScheduledTask[ScheduledTasks[]];&#xD;
    results = {}; &#xD;
    RunScheduledTask[AppendTo[results, Quiet[Reap[listen[]][[2, 1]]]]; If[Length[results] &amp;gt; 1200, Drop[results, 150]], 0.01];&#xD;
&#xD;
We also need to define a streaming function:&#xD;
&#xD;
    stream := Refresh[ToExpression[StringSplit[#[[1]], &amp;#034;,&amp;#034;]] &amp;amp; /@ Select[results[[-1000 ;;]], Head[#] == List &amp;amp;], UpdateInterval -&amp;gt; 0.01]&#xD;
&#xD;
Alright. Now comes the interesting part. Using &#xD;
&#xD;
    (*Compass*)&#xD;
    While[Length[results] &amp;lt; 1000, Pause[2]]; Dynamic[AngularGauge[Refresh[stream[[-1, 30]], UpdateInterval -&amp;gt; 0.01], {360, 0}, &#xD;
    ScaleDivisions -&amp;gt; None, GaugeLabels -&amp;gt; {Placed[&amp;#034;N&amp;#034;, Top], Placed[&amp;#034;S&amp;#034;, Bottom], Placed[&amp;#034;E&amp;#034;, Right], Placed[&amp;#034;W&amp;#034;, Left]}, ScaleOrigin -&amp;gt; {{5 Pi/2, Pi/2}, 1}, ScalePadding -&amp;gt; All, ImageSize -&amp;gt; Medium], SynchronousUpdating -&amp;gt; False]&#xD;
&#xD;
we can measure the bearing of our iPhone. The resulting compass moves as we move the iPhone:&#xD;
&#xD;
![enter image description here][6]&#xD;
&#xD;
We can also read the (x-,y-,z-) accelerometers&#xD;
&#xD;
    (*Plot accelerometers*)&#xD;
    While[Length[results] &amp;lt; 1000, Pause[2]]; Dynamic[Refresh[ListLinePlot[{stream[[All, 2]], stream[[All, 3]], stream[[All, 4]]}, PlotRange -&amp;gt; All], UpdateInterval -&amp;gt; 0.1]]&#xD;
&#xD;
which gives plots like this one:&#xD;
&#xD;
![enter image description here][7]&#xD;
&#xD;
The update is a bit bumpy, because the data is only sent every second or so from the iPhone; the measurements, however, are taken with a frequency of up to 100Hz. We can also represent the FFT of the streamed data like so:&#xD;
&#xD;
    (*Plot FFT of accelerometers*)&#xD;
    While[Length[results] &amp;lt; 1000, &#xD;
     Pause[2]]; Dynamic[&#xD;
     Refresh[ListLinePlot[&#xD;
       Log /@ {Abs[Fourier[Standardize[stream[[All, 2]]]]], &#xD;
         Abs[Fourier[Standardize[stream[[All, 3]]]]], &#xD;
         Abs[Fourier[Standardize[stream[[All, 4]]]]]}, &#xD;
       PlotRange -&amp;gt; {{0, 200}, {-5, 2.5}}, ImageSize -&amp;gt; Large], &#xD;
      UpdateInterval -&amp;gt; 0.1]]&#xD;
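&#xD;
From such a spectrum one can estimate the dominant vibration frequency - useful, for example, for the motor-vibration idea in the applications list below. A minimal sketch, assuming a sampling rate of 100Hz (the app&amp;#039;s maximum; the actual rate depends on your settings):&#xD;
&#xD;
    (*Estimate the dominant frequency of the x-accelerometer, assuming fs = 100 Hz*)&#xD;
    fs = 100;&#xD;
    spec = Abs[Fourier[Standardize[stream[[All, 2]]]]];&#xD;
    n = Length[spec];&#xD;
    bin = First[Ordering[spec[[2 ;; Floor[n/2]]], -1]] + 1;&#xD;
    (bin - 1) fs/n (*dominant frequency in Hz*)&#xD;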
&#xD;
Adding a &amp;#034;real time&amp;#034; scale is also quite straightforward:&#xD;
&#xD;
    (*Measurements with time scale*)&#xD;
&#xD;
    While[Length[results] &amp;lt; 1000, Pause[2]];&#xD;
    starttime = IntegerPart[stream[[2, 1]]];&#xD;
    Dynamic[Refresh[&#xD;
      ListLinePlot[&#xD;
       Transpose[{(stream[[Max[-300, -Length[stream]] ;;, 1]] - &#xD;
           starttime), stream[[Max[-300, -Length[stream]] ;;, 2]]}], &#xD;
       PlotRange -&amp;gt; All, ImageSize -&amp;gt; Large], UpdateInterval -&amp;gt; 0.01]]&#xD;
&#xD;
Well, then. We can also plot our iPhone&amp;#039;s position in space&#xD;
&#xD;
    (*3d Motion*)&#xD;
    &#xD;
    While[Length[results] &amp;lt; 1000, Pause[2]]; Dynamic[&#xD;
     Refresh[ListLinePlot[{stream[[All, 5]], stream[[All, 6]], &#xD;
        stream[[All, 7]]}, PlotRange -&amp;gt; All], UpdateInterval -&amp;gt; 0.1]]&#xD;
    &#xD;
    While[Length[results] &amp;lt; 1000, Pause[2]]; Dynamic[&#xD;
     Graphics3D[{Black, &#xD;
       Rotate[Rotate[&#xD;
         Rotate[Cuboid[{-2, -1, -0.2}, {2, 1, 0.2}], &#xD;
          stream[[-1, 7]], {0, 0, 1}], -1*stream[[-1, 6]], {0, 1, 0}], &#xD;
        stream[[-1, 5]], {1, 0, 0}]}, &#xD;
      PlotRange -&amp;gt; {{-3, 3}, {-3, 3}, {-3, 3}}, Boxed -&amp;gt; True], &#xD;
     UpdateInterval -&amp;gt; 0.1, SynchronousUpdating -&amp;gt; False]&#xD;
&#xD;
This looks like so:&#xD;
&#xD;
![enter image description here][8]&#xD;
&#xD;
Last but not least we can write a little GUI to access all the different sensors. (This does run a bit slowly though!)&#xD;
&#xD;
    (*GUI all sensors*)&#xD;
&#xD;
    sensororder = {&amp;#034;Timestamp&amp;#034;, &amp;#034;Accel_X&amp;#034;, &amp;#034;Accel_Y&amp;#034;, &amp;#034;Accel_Z&amp;#034;, &amp;#034;Roll&amp;#034;, &#xD;
       &amp;#034;Pitch&amp;#034;, &amp;#034;Yaw&amp;#034;, &amp;#034;Quat.X&amp;#034;, &amp;#034;Quat.Y&amp;#034;, &amp;#034;Quat.Z&amp;#034;, &amp;#034;Quat.W&amp;#034;, &amp;#034;RM11&amp;#034;, &#xD;
       &amp;#034;RM12&amp;#034;, &amp;#034;RM13&amp;#034;, &amp;#034;RM21&amp;#034;, &amp;#034;RM22&amp;#034;, &amp;#034;RM23&amp;#034;, &amp;#034;RM31&amp;#034;, &amp;#034;RM32&amp;#034;, &amp;#034;RM33&amp;#034;, &#xD;
       &amp;#034;GravAcc_X&amp;#034;, &amp;#034;GravAcc_Y&amp;#034;, &amp;#034;GravAcc_Z&amp;#034;, &amp;#034;UserAcc_X&amp;#034;, &amp;#034;UserAcc_Y&amp;#034;, &#xD;
       &amp;#034;UserAcc_Z&amp;#034;, &amp;#034;RotRate_X&amp;#034;, &amp;#034;RotRate_Y&amp;#034;, &amp;#034;RotRate_Z&amp;#034;, &amp;#034;MagHeading&amp;#034;, &#xD;
       &amp;#034;TrueHeading&amp;#034;, &amp;#034;HeadingAccuracy&amp;#034;, &amp;#034;MagX&amp;#034;, &amp;#034;MagY&amp;#034;, &amp;#034;MagZ&amp;#034;, &amp;#034;Lat&amp;#034;, &#xD;
       &amp;#034;Long&amp;#034;, &amp;#034;LocAccuracy&amp;#034;, &amp;#034;Course&amp;#034;, &amp;#034;Speed&amp;#034;, &amp;#034;Altitude&amp;#034;, &#xD;
       &amp;#034;Proximity&amp;#034;};&#xD;
    While[Length[results] &amp;lt; 1000, Pause[2]]; Manipulate[&#xD;
     Dynamic[Refresh[&#xD;
       ListLinePlot[{stream[[All, Position[sensororder, a][[1, 1]]]], &#xD;
         stream[[All, Position[sensororder, b][[1, 1]]]], &#xD;
         stream[[All, Position[sensororder, c][[1, 1]]]]}, &#xD;
        PlotRange -&amp;gt; All, ImageSize -&amp;gt; Full], &#xD;
       UpdateInterval -&amp;gt; 0.01]], {{a, &amp;#034;Accel_X&amp;#034;}, &#xD;
      sensororder}, {{b, &amp;#034;Accel_Y&amp;#034;}, sensororder}, {{c, &amp;#034;Accel_Z&amp;#034;}, &#xD;
      sensororder}, ControlPlacement -&amp;gt; Left, &#xD;
     SynchronousUpdating -&amp;gt; False]&#xD;
&#xD;
This gives a user interface which looks like this:&#xD;
&#xD;
![enter image description here][9]&#xD;
&#xD;
In the drop-down menus we can choose three out of all the sensors. These are all the available sensors:&#xD;
&#xD;
&amp;gt; &amp;#034;Timestamp&amp;#034;, &amp;#034;Accel_X&amp;#034;, &amp;#034;Accel_Y&amp;#034;, &amp;#034;Accel_Z&amp;#034;, &amp;#034;Roll&amp;#034;, &amp;#034;Pitch&amp;#034;, &amp;#034;Yaw&amp;#034;,&#xD;
&amp;gt; &amp;#034;Quat.X&amp;#034;, &amp;#034;Quat.Y&amp;#034;, &amp;#034;Quat.Z&amp;#034;, &amp;#034;Quat.W&amp;#034;, &amp;#034;RM11&amp;#034;,  &amp;#034;RM12&amp;#034;, &amp;#034;RM13&amp;#034;,&#xD;
&amp;gt; &amp;#034;RM21&amp;#034;, &amp;#034;RM22&amp;#034;, &amp;#034;RM23&amp;#034;, &amp;#034;RM31&amp;#034;, &amp;#034;RM32&amp;#034;, &amp;#034;RM33&amp;#034;, &amp;#034;GravAcc_X&amp;#034;,&#xD;
&amp;gt; &amp;#034;GravAcc_Y&amp;#034;, &amp;#034;GravAcc_Z&amp;#034;, &amp;#034;UserAcc_X&amp;#034;, &amp;#034;UserAcc_Y&amp;#034;,   &amp;#034;UserAcc_Z&amp;#034;,&#xD;
&amp;gt; &amp;#034;RotRate_X&amp;#034;, &amp;#034;RotRate_Y&amp;#034;, &amp;#034;RotRate_Z&amp;#034;, &amp;#034;MagHeading&amp;#034;, &amp;#034;TrueHeading&amp;#034;,&#xD;
&amp;gt; &amp;#034;HeadingAccuracy&amp;#034;, &amp;#034;MagX&amp;#034;, &amp;#034;MagY&amp;#034;, &amp;#034;MagZ&amp;#034;, &amp;#034;Lat&amp;#034;,   &amp;#034;Long&amp;#034;,&#xD;
&amp;gt; &amp;#034;LocAccuracy&amp;#034;, &amp;#034;Course&amp;#034;, &amp;#034;Speed&amp;#034;, &amp;#034;Altitude&amp;#034;, &amp;#034;Proximity&amp;#034;&#xD;
&#xD;
There are certainly many things that can and should be improved. The main problem seems to be that the data, even if sampled at 100Hz, is sent from the iPhone only every second or so. So it is not really real time. I hope that someone who is better at iPhone programming than I am - I am really rubbish at it - could help and write an iPhone program to stream the data in a more convenient way: one by one rather than in packets. &#xD;
&#xD;
There are many potential applications for this. Here are some I could come up with:&#xD;
&#xD;
 1. You can carry the iPhone around and measure your movements (acceleration). Attached to your hand, it can measure your tremor. &#xD;
 2. The magnetometer is really cool. You can use it to find metal bars in the walls and also electric cables. &#xD;
 3. You can collect GPS data for all sorts of applications; there are ideas to use this for the detection of certain diseases. For example, if it takes you longer than usual to find your car when you come back from shopping, that might hint at early stages of dementia, or sleep deprivation.&#xD;
 4. When you put the phone on a machine, like a running motor, you can measure the vibrations. When you perform a frequency analysis you can check whether the motor runs alright.&#xD;
 5. Using the accelerometers I was able to measure my breathing (putting the phone on my chest).&#xD;
 &#xD;
I think that there might also be quite some potential for using the Wolfram Cloud here. Deploying a program in the cloud and reading from your phone is certainly quite interesting. The problem is that this particular app only works via WiFi. It would be nice to have one that works via 3G. &#xD;
&#xD;
So, in summary, it might be quite useful to use the iPhone&amp;#039;s sensors. The advantage is that nearly everyone carries a smartphone with them all the time. Making more of your smart phone&amp;#039;s sensors with Mathematica seems to be a nice playground for applications. I&amp;#039;d love to hear about your ideas...&#xD;
&#xD;
Cheers,&#xD;
&#xD;
Marco&#xD;
&#xD;
PS: When you are done with the streaming you should execute these commands:&#xD;
&#xD;
    (*Remove Scheduled Tasks and close link*)&#xD;
    RemoveScheduledTask[ScheduledTasks[]]; socket@close[];&#xD;
&#xD;
&#xD;
  [1]: http://community.wolfram.com/web/bschelter&#xD;
  [2]: https://itunes.apple.com/gb/app/sensor-data/id397619802?mt=8&#xD;
  [3]: /c/portal/getImageAttachment?filename=sensorwelcome.PNG&amp;amp;userId=48754&#xD;
  [4]: /c/portal/getImageAttachment?filename=sensorstreaming.PNG&amp;amp;userId=48754&#xD;
  [5]: /c/portal/getImageAttachment?filename=Allsensors.PNG&amp;amp;userId=48754&#xD;
  [6]: /c/portal/getImageAttachment?filename=Compass.gif&amp;amp;userId=48754&#xD;
  [7]: /c/portal/getImageAttachment?filename=Accelerometer.gif&amp;amp;userId=48754&#xD;
  [8]: /c/portal/getImageAttachment?filename=Iphonemovement.gif&amp;amp;userId=48754&#xD;
  [9]: /c/portal/getImageAttachment?filename=ScreenShot2014-09-15at23.52.46.png&amp;amp;userId=48754</description>
    <dc:creator>Marco Thiel</dc:creator>
    <dc:date>2014-09-15T23:53:14Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2166833">
    <title>Predicting COVID-19 using cough sounds classification</title>
    <link>https://community.wolfram.com/groups/-/m/t/2166833</link>
    <description>&amp;amp;[Wolfram Notebook][1]&#xD;
&#xD;
&#xD;
  [1]: https://www.wolframcloud.com/obj/cdf7d474-f4fb-4cbd-bbd5-f1fac8699f7a</description>
    <dc:creator>Siria Sadeddin</dc:creator>
    <dc:date>2021-01-18T22:46:11Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1383518">
    <title>[WSC18] Music Sentiment Analysis through Machine Learning</title>
    <link>https://community.wolfram.com/groups/-/m/t/1383518</link>
    <description>![A Representation of the emotion categorization system][1]&#xD;
&#xD;
&#xD;
----------&#xD;
&#xD;
&#xD;
#Abstract&#xD;
This project aims to develop a machine learning application to identify the sentiments in a music clip. The data set I used consists of one hundred 45-second clips from the Database for Emotional Analysis of Music and an additional 103 gathered by myself. I manually labeled all 203 clips and used them as training data for my program. This program works best with classical-style music, which is the main component of my data set, but also works with other genres to a reasonable extent. &#xD;
&#xD;
#Introduction&#xD;
One of the most important functions of music is to affect emotion, but the experience of emotion is ambiguous and subjective to the individual. The same music may induce a diverse range of feelings in people as a result of differences in context, personality, or culture. Many musical features, however, usually lead to the same effect on the human brain. For example, louder music correlates more with excitement or anger, while softer music corresponds to tenderness. This consistency makes it possible to train a supervised machine learning program based on musical features.&#xD;
&#xD;
#Background&#xD;
This project is based on James Russell&amp;#039;s circumplex model, in which a two-dimensional emotion space is constructed from the x-axis of valence level and y-axis of arousal level, as shown above in the picture. Specifically, valence is a measurement of an emotion&amp;#039;s pleasantness, whereas arousal is a measurement of an emotion&amp;#039;s intensity. Russell&amp;#039;s model provides a metric on which different sentiments can be compared and contrasted, creating four main categories of emotion: Happy (high valence, high arousal), Stressed (low valence, high arousal), Sad (low valence, low arousal), and Calm (high valence, low arousal). Within these main categories there are various sub-categories, labeled on the graph above. Notably, &amp;#034;passionate&amp;#034; is a sub-category that does not belong to any main category due to its ambiguous valence value. &#xD;
&#xD;
&#xD;
----------&#xD;
&#xD;
&#xD;
#Program Structure&#xD;
The program contains a three-layer structure. The first layer is responsible for extracting musical features, the second for generating a list of numerical predictions based on different features, and the third for predicting and displaying the most probable emotion descriptors based on the second layer&amp;#039;s output.  &#xD;
![enter image description here][2]&#xD;
&#xD;
##First Layer&#xD;
&#xD;
The first layer consists of 24 feature extractors that generate numerical sequences based on different audio features:&#xD;
&#xD;
    (*A list of feature extractors*)&#xD;
    feMin[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;Min&amp;#034;, List]]&#xD;
    feMax[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;Max&amp;#034;, List]]&#xD;
    feMean[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;Mean&amp;#034;, List]]&#xD;
    feMedian[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;Median&amp;#034;, List]]&#xD;
    fePower[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;Power&amp;#034;, List]]&#xD;
    feRMSA[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;RMSAmplitude&amp;#034;, List]]&#xD;
    feLoud[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;Loudness&amp;#034;, List]]&#xD;
    feCrest[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;CrestFactor&amp;#034;, List]]&#xD;
    feEntropy[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;Entropy&amp;#034;, List]]&#xD;
    fePeak[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;PeakToAveragePowerRatio&amp;#034;, List]]&#xD;
    feTCent[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;TemporalCentroid&amp;#034;, List]]&#xD;
    feZeroR[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;ZeroCrossingRate&amp;#034;, List]]&#xD;
    feForm[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;Formants&amp;#034;, List]]&#xD;
    feHighFC[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;HighFrequencyContent&amp;#034;, List]]&#xD;
    feMFCC[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;MFCC&amp;#034;, List]]&#xD;
    feSCent[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;SpectralCentroid&amp;#034;, List]]&#xD;
    feSCrest[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;SpectralCrest&amp;#034;, List]]&#xD;
    feSFlat[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;SpectralFlatness&amp;#034;, List]]&#xD;
    feSKurt[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;SpectralKurtosis&amp;#034;, List]]&#xD;
    feSRoll[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;SpectralRollOff&amp;#034;, List]]&#xD;
    feSSkew[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;SpectralSkewness&amp;#034;, List]]&#xD;
    feSSlope[audio_] :=  Normal[AudioLocalMeasurements[audio, &amp;#034;SpectralSlope&amp;#034;, List]]&#xD;
    feSSpread[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;SpectralSpread&amp;#034;, List]]&#xD;
    feNovelty[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;Novelty&amp;#034;, List]]&#xD;
&amp;lt;br/&amp;gt;&#xD;
##Second Layer&#xD;
Using data generated from the first layer, the valence and arousal predictors of the second layer provide 46 predictions for the audio input, based on its different features. &#xD;
&#xD;
    (*RMSAmplitude*)&#xD;
    (*Feature extractor*) feRMSA[audio_] := Normal[AudioLocalMeasurements[audio, &amp;#034;RMSAmplitude&amp;#034;, List]]&#xD;
    dataRMSA = Table[First[takeLast[feRMSA[First[Take[musicFiles, {n}]]]]], {n, Length[musicFiles]}];&#xD;
    (*Generating predictor*) pArousalRMSA = Predict[dataRMSA -&amp;gt; arousalValueC]&#xD;
![Sample predictor function][3]&#xD;
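&#xD;
Once trained, a predictor can be applied to the same feature extracted from a new clip. A minimal sketch, reusing the takeLast helper and feRMSA from above (newAudio stands for a hypothetical 45-second Audio object):&#xD;
&#xD;
    (*Predicted arousal for a new clip, from its RMS amplitude sequence*)&#xD;
    pArousalRMSA[First[takeLast[feRMSA[newAudio]]]]&#xD;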
&#xD;
&amp;lt;br/&amp;gt;&#xD;
##Third Layer&#xD;
The two parts of the third layer, main category classifier and sub-category classifier, each utilize the tensors generated in the second layer to make a prediction within their realm of emotion. The output consists of two parts, a main category emotion and a sub-category emotion.&#xD;
&#xD;
    (*Main*) emotionClassify1 = Classify[classifyMaterial -&amp;gt; emotionList1, PerformanceGoal -&amp;gt; &amp;#034;Quality&amp;#034;]&#xD;
    (*Sub*) emotionClassify2 = Classify[classifyMaterial -&amp;gt; emotionList2, PerformanceGoal -&amp;gt; &amp;#034;Quality&amp;#034;]&#xD;
![enter image description here][4]&#xD;
&#xD;
&amp;lt;br/&amp;gt;&#xD;
##Output&#xD;
If the program receives an input that is longer than 45 seconds, it will automatically clip the audio file into 45-second segments and return the result for each. If the last segment is shorter than 45 seconds, the program will still work on it, though with reduced accuracy. The display for each clip includes a main-category and a sub-category descriptor, with their associated probabilities also printed. &#xD;
&#xD;
###Sample testing: Debussy&amp;#039;s Clair de Lune&#xD;
![enter image description here][5]&#xD;
&#xD;
&amp;lt;br/&amp;gt;&#xD;
&#xD;
----------&#xD;
&#xD;
&#xD;
#Conclusion&#xD;
The program gives very reasonable results for most music in the classical style. However, the program has three shortcomings that I plan to fix in later versions. Firstly, the program may give contradictory results (e.g. happy and depressed) if the sentiment changes dramatically in the middle of a 45-second segment, perhaps reflecting the music&amp;#039;s changing emotional composition. The current 45-second clipping window is rather long and thus prone to capturing contradictory emotions. In the next version of this program, the window will probably be shortened to 30 or 20 seconds to reduce prediction uncertainty. Secondly, the program&amp;#039;s processing speed has a lot of room for improvement. It currently takes about one and a half minutes to process a one-minute audio file. In future versions I will remove relatively ineffective feature extractors to speed things up. Lastly, the data used in creating this application was labeled solely by me, and therefore it is prone to my human biases. I plan to expand the data set with more people&amp;#039;s input and more genres of music. &#xD;
&#xD;
I have attached the application to this post so that everyone can try out the program.&#xD;
&#xD;
#Acknowledgement&#xD;
I sincerely thank my mentor, Professor Rob Morris, for providing invaluable guidance to help me carry out the project. I also want to thank Rick Hennigan for giving me crucial support with my code. &#xD;
&#xD;
&#xD;
  [1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=8714Emotion2DSpace.PNG&amp;amp;userId=1371765&#xD;
  [2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2406DataStructure.PNG&amp;amp;userId=1371765&#xD;
  [3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=8990capture1.PNG&amp;amp;userId=1371765&#xD;
  [4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5765capture2.PNG&amp;amp;userId=1371765&#xD;
  [5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3607SampleTesting.PNG&amp;amp;userId=1371765</description>
    <dc:creator>William Yicheng Zhu</dc:creator>
    <dc:date>2018-07-14T02:40:20Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/917048">
    <title>Hiding secret messages in music</title>
    <link>https://community.wolfram.com/groups/-/m/t/917048</link>
    <description>The [new computational audio features][1] of Mathematica are really impressive. In no time you can cook up things as soon as you understand the basic algorithm. Even the simplest tricks can give quite surprising results. Before you read on, please do play this (file also uploaded to this post):&#xD;
&#xD;
***[audio file][2]***.&#xD;
&#xD;
Have you noticed anything? I think that it sounds pretty much like the well known example &#xD;
&#xD;
    ExampleData[{&amp;#034;Sound&amp;#034;, &amp;#034;Apollo11SmallStep&amp;#034;}]&#xD;
&#xD;
As a matter of fact, it contains something a little special. If you import the file into Mathematica&#xD;
&#xD;
    secsound = Import[&amp;#034;/Users/thiel/Desktop/secretsound.wav&amp;#034;];&#xD;
&#xD;
and calculate a spectrogram of it&#xD;
&#xD;
    Spectrogram[secsound, ColorFunction -&amp;gt; &amp;#034;Rainbow&amp;#034;, Frame -&amp;gt; None, ImageSize -&amp;gt; Full]&#xD;
&#xD;
you get this:&#xD;
&#xD;
![enter image description here][3]&#xD;
&#xD;
How to inject images into spectrograms&#xD;
---------------------------&#xD;
&#xD;
I will now show how you can get this in a couple of lines of code. First we need to get a binary matrix of the message (later we can do better!). This is quite straightforward:&#xD;
&#xD;
    imgdata = Reverse@ImageData[ColorNegate@Image[Binarize[Rasterize[Text[&amp;#034;  Wolfram  &amp;#034;], RasterSize -&amp;gt; 100, ImageSize -&amp;gt; 100]]]];&#xD;
&#xD;
There is nothing really tricky in here. The text is &amp;#034;Wolfram&amp;#034;; I rasterise the image and resize it. Then I binarise it. The ColorNegate is needed to exchange ones and zeros in the matrix. Because I play with both images and matrices, and they have different coordinate systems (the origin is at a different corner), I need to Reverse the whole thing. I can plot the result like so:&#xD;
&#xD;
    ArrayPlot[Reverse@imgdata] &#xD;
&#xD;
![enter image description here][4]&#xD;
&#xD;
I now need to generate a sound that produces high amplitudes at the right places in the Spectrogram. At each and every place in the imgdata matrix that contains a one I need to produce a little sine wave with the right frequency. The rows correspond to frequencies and the columns to time. So let&amp;#039;s collect the different frequencies at each time:&#xD;
&#xD;
    list = Flatten[Position[#, 1]] &amp;amp; /@ Transpose[imgdata];&#xD;
&#xD;
Ok. Next we build the corresponding sums of sine functions and generate a list of samples from them. &#xD;
&#xD;
    listcompete = 0.1*Flatten[Table[Table[N@Total[Sin[2 Pi 300  # t] &amp;amp;@(# &amp;amp; /@ list)[[k]]], {t, 0, 8.71/200., 1/16000}], {k, 1, 100}], 1];&#xD;
&#xD;
 We can generate the corresponding sound:&#xD;
&#xD;
    Audio[Sound[SampledSoundList[listcompete, 8000]]]&#xD;
&#xD;
This gives you a little window like this&#xD;
&#xD;
![enter image description here][5]&#xD;
&#xD;
which allows you to &amp;#034;listen to Wolfram in dolphin language&amp;#034;. The file is attached to this post and sounds really cool. &#xD;
&#xD;
How to hide the images in sound files/music&#xD;
-------------------------------------------&#xD;
&#xD;
In order to hide the string in the &amp;#034;A small step ...&amp;#034; recording, we first shift the frequencies a little bit and decrease the amplitude:&#xD;
&#xD;
    listcompete2 = 0.02*Flatten[Table[Table[N@Total[Sin[2 Pi 1200  # t] &amp;amp;@(# &amp;amp; /@ list)[[k]]], {t, 0, 8.71/100., 1/64000}], {k, 1, 100}], 1];&#xD;
&#xD;
It looks now like this:&#xD;
&#xD;
    Spectrogram[Sound[SampledSoundList[listcompete2, 64000]], ColorFunction -&amp;gt; &amp;#034;Rainbow&amp;#034;]&#xD;
&#xD;
![enter image description here][6]&#xD;
&#xD;
If you want to listen to it use:&#xD;
&#xD;
    Audio[Sound[SampledSoundList[listcompete2, 64000]]]&#xD;
&#xD;
You should hear a sort of hissing sound. The new function AudioChannelCombine will now help us to merge the two sound objects:&#xD;
&#xD;
    AudioChannelCombine[{Audio[ExampleData[{&amp;#034;Sound&amp;#034;, &amp;#034;Apollo11SmallStep&amp;#034;}]], Audio[Sound[SampledSoundList[listcompete2, 64000]]]}]&#xD;
&#xD;
The resulting object is attached to this post. The spectrogram&#xD;
&#xD;
    Spectrogram[AudioChannelCombine[{Audio[ExampleData[{&amp;#034;Sound&amp;#034;, &amp;#034;Apollo11SmallStep&amp;#034;}]], &#xD;
    Audio[Sound[SampledSoundList[listcompete2, 64000]]]}], ColorFunction -&amp;gt; &amp;#034;Rainbow&amp;#034;, Frame -&amp;gt; None]&#xD;
&#xD;
![enter image description here][7]&#xD;
&#xD;
clearly shows the secret message blended into the Apollo message. &#xD;
&#xD;
Like this you can export the sound object:&#xD;
&#xD;
    Export[&amp;#034;/Users/thiel/Desktop/secretsound.wav&amp;#034;, AudioChannelCombine[{Audio[ExampleData[{&amp;#034;Sound&amp;#034;, &amp;#034;Apollo11SmallStep&amp;#034;}]], Audio[Sound[SampledSoundList[listcompete2, 64000]]]}]]&#xD;
&#xD;
Look for hidden messages&#xD;
------------------------&#xD;
&#xD;
This type of [procedure is quite well known][8]. I also saw it on [BBC&amp;#039;s Click][9], but I cannot remember the episode. Now you can use this to look for hidden messages on the internet or in music. Here are two examples: [file1][10] and [file2][11], both from the website above.&#xD;
&#xD;
Let&amp;#039;s look at file1&#xD;
&#xD;
    snd = Import[&amp;#034;http://www.evansalazar.com/ohmpie/imageEncode/ohmpie.mp3&amp;#034;];&#xD;
    Spectrogram[snd, 150, Frame -&amp;gt; None]&#xD;
&#xD;
![enter image description here][12]&#xD;
&#xD;
and file2&#xD;
&#xD;
    Spectrogram[Import[&amp;#034;http://www.evansalazar.com/ohmpie/imageEncode/evan.mp3&amp;#034;], 150, Frame -&amp;gt; None, AspectRatio -&amp;gt; 1]&#xD;
&#xD;
![enter image description here][13]&#xD;
&#xD;
You can clearly see that the second image is actually &amp;#034;grayscale&amp;#034;. It is quite possible to achieve this by not Binarize-ing the image and using the grayscale values as amplitudes. &#xD;
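&#xD;
A minimal sketch of that grayscale variant, reusing the construction from above but weighting each sine by the pixel&amp;#039;s gray value instead of 0/1 (the parameters are illustrative only):&#xD;
&#xD;
    gray = Reverse@ImageData[ColorNegate@ColorConvert[Image[Rasterize[Text[&amp;#034;  Wolfram  &amp;#034;], RasterSize -&amp;gt; 100, ImageSize -&amp;gt; 100]], &amp;#034;Grayscale&amp;#034;]];&#xD;
    graylist = 0.1*Flatten[Table[Table[N@Total[Transpose[gray][[k]]*Sin[2 Pi 300 Range[Length[gray]] t]], {t, 0, 8.71/200., 1/16000}], {k, 1, Length[Transpose[gray]]}], 1];&#xD;
    Audio[Sound[SampledSoundList[graylist, 8000]]]&#xD;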
&#xD;
Where to go from here?&#xD;
----------------------&#xD;
&#xD;
We could certainly try to increase the resolution and get everything into a nice little function to do all the steps for us. Another interesting thing would be to add multiple images/slices of a 3D image into sound, and basically hide an entire 3D object. I&amp;#039;d love to see this being 3D printed.&#xD;
&#xD;
Can you detect the [message in this sound file][14]?&#xD;
&#xD;
Cheers,&#xD;
&#xD;
Marco&#xD;
&#xD;
&#xD;
  [1]: http://www.wolfram.com/language/11/computational-audio/?product=language&#xD;
  [2]: https://www.dropbox.com/s/83788i811rnjn39/secretsound.wav?dl=0&#xD;
  [3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-01at00.21.25.png&amp;amp;userId=48754&#xD;
  [4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-01at00.27.40.png&amp;amp;userId=48754&#xD;
  [5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-01at00.33.34.png&amp;amp;userId=48754&#xD;
  [6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-01at00.39.48.png&amp;amp;userId=48754&#xD;
  [7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-01at00.43.10.png&amp;amp;userId=48754&#xD;
  [8]: http://www.ohmpie.com/imageencode/&#xD;
  [9]: http://www.bbc.co.uk/programmes/b006m9ry&#xD;
  [10]: http://www.evansalazar.com/ohmpie/imageEncode/ohmpie.mp3&#xD;
  [11]: http://www.evansalazar.com/ohmpie/imageEncode/evan.mp3&#xD;
  [12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-01at00.52.56.png&amp;amp;userId=48754&#xD;
  [13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-01at00.51.55.png&amp;amp;userId=48754&#xD;
  [14]: https://www.dropbox.com/s/g6cqzute897szrz/finalmessage.wav?dl=0</description>
    <dc:creator>Marco Thiel</dc:creator>
    <dc:date>2016-09-01T00:10:48Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/587562">
    <title>Unpredictable Solar Systems</title>
    <link>https://community.wolfram.com/groups/-/m/t/587562</link>
    <description>Are there solar systems with chaotic orbits?  When astronomers look for exoplanets they look for periodic signals in the brightness of the central star.  The analysis relies on predictable behaviors. But I have always wondered whether there are unpredictable solar systems out there.  &#xD;
&#xD;
The reason for posting this now is the [news][1] of a star whose signal appears to be unpredictable (KIC 8462852).  Some articles have suggested this is due to [aliens][2].  On the other hand, Stephen Wolfram has said many times (e.g. in his [New Kind of Science][3] book) that it is pretty easy for nature to produce unpredictable sequences, and regular signals would be a better sign of civilization.&#xD;
&#xD;
You can simulate hypothetical solar systems relatively easily in the Wolfram Language (search the Demonstrations for [three body problem][4]).  At the [Wolfram Science Summer School][5] in 2013, [Nicholas Lucas][6] did a systematic survey.  He produced a nice phase-space type of diagram and in the process found a class of behaviors which were not regular in any sense, except that the planets did not all fly away.  The possibility of planets zooming off to infinity is one explanation for the prevalence of regularity and order (at least when stars are far apart).  This is an example of an irregular solution:&#xD;
&#xD;
![irregular paths of 3 bodies][7]&#xD;
&#xD;
This code is a simple two body version:&#xD;
&#xD;
    s = NDSolve[{x&amp;#039;&amp;#039;[t] == 8 (y[t] - x[t])/Norm[y[t] - x[t]]^3, y&amp;#039;&amp;#039;[t] == 8 (x[t] - y[t])/Norm[x[t] - y[t]]^3,&#xD;
         x[0] == {-2, 0, 0}, y[0] == {2, 0, 0}, x&amp;#039;[0] == {0, 1, 0}, y&amp;#039;[0] == {0, -1, 0}}, {x, y}, {t, 0, 4}][[1]];&#xD;
    ParametricPlot3D[{Evaluate[x[t] /. s], Evaluate[y[t] /. s]}, {t, 0, 4}]&#xD;
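&#xD;
A three-body variant of the same setup is a straightforward extension; for some initial conditions (the ones below are picked arbitrarily, for illustration only) it already produces irregular-looking transients:&#xD;
&#xD;
    s3 = NDSolve[{x&amp;#039;&amp;#039;[t] == 8 (y[t] - x[t])/Norm[y[t] - x[t]]^3 + 8 (z[t] - x[t])/Norm[z[t] - x[t]]^3,&#xD;
         y&amp;#039;&amp;#039;[t] == 8 (x[t] - y[t])/Norm[x[t] - y[t]]^3 + 8 (z[t] - y[t])/Norm[z[t] - y[t]]^3,&#xD;
         z&amp;#039;&amp;#039;[t] == 8 (x[t] - z[t])/Norm[x[t] - z[t]]^3 + 8 (y[t] - z[t])/Norm[y[t] - z[t]]^3,&#xD;
         x[0] == {-2, 0, 0}, y[0] == {2, 0, 0}, z[0] == {0, 3, 0},&#xD;
         x&amp;#039;[0] == {0, 1, 0}, y&amp;#039;[0] == {0, -1, 0}, z&amp;#039;[0] == {0.5, 0, 0}}, {x, y, z}, {t, 0, 20}][[1]];&#xD;
    ParametricPlot3D[Evaluate[{x[t], y[t], z[t]} /. s3], {t, 0, 20}]&#xD;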
&#xD;
Theoretically, from the study of simple rules (see Wolfram&amp;#039;s [book][8]), one expects the possibility of [long transients][9], but also that most transients are short.  From Wolfram&amp;#039;s [principle of computational equivalence][10], one expects that solar system dynamics can be computationally universal even from simple initial conditions (for more see Wolfram&amp;#039;s [note][11]).&#xD;
&#xD;
Maybe someone on Community knows more about this star.  Is there data out there for KIC 8462852?  In principle one should be able to take the orbital paths from a simulation and derive the brightness signal that someone would see from Earth, and do it systematically.&#xD;
&#xD;
&#xD;
  [1]: http://www.skyandtelescope.com/astronomy-news/curious-case-of-kic-8462852-102020155/&#xD;
  [2]: http://www.nbcnews.com/tech/innovation/have-scientists-discovered-alien-civilization-not-so-fast-n445161&#xD;
  [3]: http://www.wolframscience.com/nksonline/page-822&#xD;
  [4]: http://demonstrations.wolfram.com/search.html?query=three%20body&#xD;
  [5]: https://www.wolframscience.com/summerschool/&#xD;
  [6]: https://www.wolframscience.com/summerschool/2013/alumni/lucas.html&#xD;
  [7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=alien-Kepler.png&amp;amp;userId=23275&#xD;
  [8]: http://www.wolframscience.com/nksonline/toc.html&#xD;
  [9]: http://www.wolframscience.com/nksonline/page-754&#xD;
  [10]: http://www.wolframscience.com/nksonline/page-715&#xD;
  [11]: http://www.wolframscience.com/nksonline/page-972d-text</description>
    <dc:creator>Todd Rowland</dc:creator>
    <dc:date>2015-10-21T18:00:43Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/861508">
    <title>Independent component analysis for multidimensional signals</title>
    <link>https://community.wolfram.com/groups/-/m/t/861508</link>
    <description>Introduction&#xD;
------------&#xD;
&#xD;
[Independent Component Analysis (ICA)](https://en.wikipedia.org/wiki/Independent_component_analysis) is a (matrix factorization) method&#xD;
for separation of a multi-dimensional signal (represented with a matrix)&#xD;
into a weighted sum of sub-components that have less entropy than the&#xD;
original variables of the signal. See \[1,2\] for an introduction to ICA&#xD;
and more details. &#xD;
&#xD;
This article/post announces the implementation of the &amp;#034;FastICA&amp;#034; algorithm in the package&#xD;
[IndependentComponentAnalysis.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/IndependentComponentAnalysis.m)&#xD;
and show a basic application with it. (I programmed that package last weekend. It has been in my ToDo list to start ICA algorithms&#xD;
implementations for several months... An interesting offshoot was [the procedure I derived](http://mathematica.stackexchange.com/questions/108182/extracting-signal-from-gaussian-noise/115715#115715)&#xD;
for the StackExchange question &#xD;
[&amp;#034;Extracting signal from Gaussian noise&amp;#034;](http://mathematica.stackexchange.com/questions/108182/extracting-signal-from-gaussian-noise).)&#xD;
&#xD;
In this article/post ICA is going to be demonstrated with both generated data and &amp;#034;real life&amp;#034; weather data (temperatures of three cities within one month).&#xD;
 &#xD;
Generated data&#xD;
--------------&#xD;
&#xD;
In order to demonstrate ICA let us make up some data in the spirit of&#xD;
the [&amp;#034;cocktail party problem&amp;#034;](https://en.wikipedia.org/wiki/Cocktail_party_effect).&#xD;
&#xD;
    (*Signal functions*)&#xD;
    Clear[s1, s2, s3]&#xD;
    s1[t_] := Sin[600 \[Pi] t/10000 + 6*Cos[120 \[Pi] t/10000]] + 1.2&#xD;
    s2[t_] := Sin[\[Pi] t/10] + 1.2&#xD;
    s3[t_?NumericQ] := (((QuotientRemainder[t, 23][[2]] - 11)/9)^5 + 2.8)/2 + 0.2&#xD;
&#xD;
    (*Mixing matrix*)&#xD;
    A = {{0.44, 0.2, 0.31}, {0.45, 0.8, 0.23}, {0.12, 0.32, 0.71}};&#xD;
&#xD;
    (*Signals matrix*)&#xD;
    nSize = 600;&#xD;
    S = Table[{s1[t], s2[t], s3[t]}, {t, 0, nSize, 0.5}];&#xD;
&#xD;
    (*Mixed signals matrix*)&#xD;
    M = A.Transpose[S];&#xD;
&#xD;
    (*Signals*)&#xD;
    Grid[{Map[&#xD;
       Plot[#, {t, 0, nSize}, PerformanceGoal -&amp;gt; &amp;#034;Quality&amp;#034;, &#xD;
         ImageSize -&amp;gt; 250] &amp;amp;, {s1[t], s2[t], s3[t]}]}]&#xD;
&#xD;
[![Original signals](http://i.stack.imgur.com/lmvbk.png)](http://i.stack.imgur.com/lmvbk.png)&#xD;
&#xD;
    (*Mixed signals*)&#xD;
    Grid[{Map[ListLinePlot[#, ImageSize -&amp;gt; 250] &amp;amp;, M]}]&#xD;
&#xD;
[![Mixed signals](http://i.stack.imgur.com/pB9Bl.png)](http://i.stack.imgur.com/pB9Bl.png)&#xD;
&#xD;
I took the data generation formulas from \[6\].&#xD;
&#xD;
ICA application&#xD;
---------------&#xD;
&#xD;
Load the package:&#xD;
&#xD;
    Import[&amp;#034;https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/IndependentComponentAnalysis.m&amp;#034;]&#xD;
&#xD;
It is important to note that the usual ICA model interpretation of the&#xD;
factorized matrix *X* is that each column is a variable (an audio signal)&#xD;
and each row is an observation (the recordings of the microphones at a given&#xD;
time). The 3×1201 matrix *M* was constructed with the interpretation&#xD;
that each row is a signal, hence we have to transpose *M* in order to&#xD;
apply the ICA algorithms, *X*=*M*\^T.&#xD;
&#xD;
    X = Transpose[M];&#xD;
&#xD;
    {S, A} = IndependentComponentAnalysis[X, 3];&#xD;
&#xD;
Check the approximation of the obtained factorization:&#xD;
&#xD;
    Norm[X - S.A]    &#xD;
    (* 3.10715*10^-14 *)&#xD;
&#xD;
Plot the found source signals:&#xD;
&#xD;
    Grid[{Map[ListLinePlot[#, PlotRange -&amp;gt; All, ImageSize -&amp;gt; 250] &amp;amp;, &#xD;
       Transpose[S]]}]&#xD;
&#xD;
[![Found source signals](http://i.stack.imgur.com/83EBC.png)](http://i.stack.imgur.com/83EBC.png)&#xD;
&#xD;
Because of the random initialization of the inverting matrix in the&#xD;
algorithm the result may vary. Here is the plot from another run:&#xD;
&#xD;
[![Found source signals 2](http://i.stack.imgur.com/lzQcr.png)](http://i.stack.imgur.com/lzQcr.png)&#xD;
&#xD;
The package also provides the function `FastICA` that returns an&#xD;
association with elements that correspond to the result of the function&#xD;
`fastICA` provided by the R package &amp;#034;fastICA&amp;#034;. See \[4\].&#xD;
&#xD;
Here is an example usage:&#xD;
&#xD;
    res = FastICA[X, 3];&#xD;
&#xD;
    Keys[res]    &#xD;
    (* {&amp;#034;X&amp;#034;, &amp;#034;K&amp;#034;, &amp;#034;W&amp;#034;, &amp;#034;A&amp;#034;, &amp;#034;S&amp;#034;} *)&#xD;
&#xD;
    Grid[{Map[&#xD;
       ListLinePlot[#, PlotRange -&amp;gt; All, ImageSize -&amp;gt; Medium] &amp;amp;, &#xD;
       Transpose[res[&amp;#034;S&amp;#034;]]]}]&#xD;
&#xD;
[![FastICA found source signals](http://i.stack.imgur.com/QyHLH.png)](http://i.stack.imgur.com/QyHLH.png)&#xD;
&#xD;
Note that (in adherence to \[4\]) the function `FastICA` returns the&#xD;
matrices S and A for the centralized matrix X. This means, for example,&#xD;
that in order to check the approximation the proper mean has to be added back:&#xD;
&#xD;
    Norm[X - Map[# + Mean[X] &amp;amp;, res[&amp;#034;S&amp;#034;].res[&amp;#034;A&amp;#034;]]]&#xD;
    (* 2.56719*10^-14 *)&#xD;
&#xD;
Signatures and results&#xD;
----------------------&#xD;
&#xD;
The result of the function `IndependentComponentAnalysis` is a list of&#xD;
two matrices. The result of `FastICA` is an association of the matrices&#xD;
obtained by ICA. The function `IndependentComponentAnalysis` takes a&#xD;
method option and options for precision goal and maximum number of&#xD;
steps:&#xD;
&#xD;
    In[657]:= Options[IndependentComponentAnalysis]&#xD;
&#xD;
    Out[657]= {Method -&amp;gt; &amp;#034;FastICA&amp;#034;, MaxSteps -&amp;gt; 200, PrecisionGoal -&amp;gt; 6}&#xD;
&#xD;
The intent is `IndependentComponentAnalysis` to be the front interface&#xD;
to different ICA algorithms. (Hence, it has a Method option.) The&#xD;
function `FastICA` takes as options the named arguments of the R&#xD;
function `fastICA` described in \[4\].&#xD;
&#xD;
    In[658]:= Options[FastICA]&#xD;
&#xD;
    Out[658]= {&amp;#034;NonGaussianityFunction&amp;#034; -&amp;gt; Automatic, &#xD;
     &amp;#034;NegEntropyFactor&amp;#034; -&amp;gt; 1, &amp;#034;InitialUnmixingMartix&amp;#034; -&amp;gt; Automatic, &#xD;
     &amp;#034;RowNorm&amp;#034; -&amp;gt; False, MaxSteps -&amp;gt; 200, PrecisionGoal -&amp;gt; 6, &#xD;
     &amp;#034;RFastICAResult&amp;#034; -&amp;gt; True}&#xD;
&#xD;
At this point `FastICA` has only the deflation algorithm described in&#xD;
\[1\]. (\[4\] also provides the so-called &amp;#034;symmetric&amp;#034; ICA sub-algorithm.) The&#xD;
R function `fastICA` in \[4\] can use only two neg-entropy functions,&#xD;
*log(cosh(u))* and *exp(-u\^2/2)*. Because of the symbolic capabilities&#xD;
of *Mathematica*, `FastICA` of \[3\] can take any listable function&#xD;
through the option &amp;#034;NonGaussianityFunction&amp;#034;, and it will find and use&#xD;
the corresponding first and second derivatives.&#xD;
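&#xD;
For example, here is a sketch of passing a custom contrast function; the kurtosis-like *u\^4* is my choice purely for illustration, and the rest mirrors the earlier `FastICA` call:&#xD;
&#xD;
    res = FastICA[X, 3, &amp;#034;NonGaussianityFunction&amp;#034; -&amp;gt; Function[u, u^4]];&#xD;
    Grid[{Map[ListLinePlot[#, PlotRange -&amp;gt; All, ImageSize -&amp;gt; Medium] &amp;amp;, Transpose[res[&amp;#034;S&amp;#034;]]]}]&#xD;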
&#xD;
Using NNMF for ICA&#xD;
------------------&#xD;
&#xD;
It seems that in some cases, like the generated data used in this article/blog post,&#xD;
Non-Negative Matrix Factorization (NNMF) can be applied to do ICA.&#xD;
&#xD;
To be clear, NNMF does dimension reduction, but its norm minimization&#xD;
process does not enforce variable independence. (It enforces&#xD;
non-negativity.) There are several articles discussing&#xD;
modifications of NNMF to do ICA; see for example \[6\].&#xD;
&#xD;
Load NNMF package \[5\] (from [MathematicaForPrediction at&#xD;
GitHub](https://github.com/antononcube/MathematicaForPrediction)):&#xD;
&#xD;
    Import[&amp;#034;https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/NonNegativeMatrixFactorization.m&amp;#034;]&#xD;
&#xD;
After several applications of NNMF we get signals close to the&#xD;
originals:&#xD;
&#xD;
    {W, H} = GDCLS[M, 3];&#xD;
    Grid[{Map[ListLinePlot[#, ImageSize -&amp;gt; 250] &amp;amp;, Normal[H]]}]&#xD;
&#xD;
[![NNMF found source signals](http://i.stack.imgur.com/Bb3Fk.png)](http://i.stack.imgur.com/Bb3Fk.png)&#xD;
&#xD;
For the generated data in this blog post, FastICA is much faster than NNMF and&#xD;
produces better separation of the signals in every run. The data, though, is a typical representative of the problems ICA is made for.&#xD;
Another comparison with image de-noising, extending [my previous blog&#xD;
post](https://mathematicaforprediction.wordpress.com/2016/05/07/comparison-of-pca-and-nnmf-over-image-de-noising/),&#xD;
will be published shortly.&#xD;
&#xD;
ICA for mixed time series of city temperatures&#xD;
----------------------------------------------&#xD;
&#xD;
Using *Mathematica*&amp;#039;s function `WeatherData` we can get temperature time series for a small set of cities over a certain time grid.&#xD;
We can mix those time series into a multi-dimensional signal, *MS*, apply ICA to *MS*, and judge the extracted source signals with the original ones.&#xD;
&#xD;
This is done with the following commands.&#xD;
&#xD;
### Get time series data&#xD;
&#xD;
    cities = {&amp;#034;Sofia&amp;#034;, &amp;#034;London&amp;#034;, &amp;#034;Copenhagen&amp;#034;};&#xD;
    timeInterval = {{2016, 1, 1}, {2016, 1, 31}};&#xD;
    ts = WeatherData[#, &amp;#034;Temperature&amp;#034;, timeInterval] &amp;amp; /@ cities;&#xD;
&#xD;
    opts = {PlotTheme -&amp;gt; &amp;#034;Detailed&amp;#034;, FrameLabel -&amp;gt; {None, &amp;#034;temperature,\[Degree]C&amp;#034;}, ImageSize -&amp;gt; 350};&#xD;
    DateListPlot[ts, &#xD;
        PlotLabel -&amp;gt; &amp;#034;City temperatures\nfrom &amp;#034; &amp;lt;&amp;gt; DateString[timeInterval[[1]], {&amp;#034;Year&amp;#034;, &amp;#034;.&amp;#034;, &amp;#034;Month&amp;#034;, &amp;#034;.&amp;#034;, &amp;#034;Day&amp;#034;}] &amp;lt;&amp;gt; &#xD;
        &amp;#034; to &amp;#034; &amp;lt;&amp;gt; DateString[timeInterval[[2]], {&amp;#034;Year&amp;#034;, &amp;#034;.&amp;#034;, &amp;#034;Month&amp;#034;, &amp;#034;.&amp;#034;, &amp;#034;Day&amp;#034;}], &#xD;
        PlotLegends -&amp;gt; cities, ImageSize -&amp;gt; Large, opts]&#xD;
&#xD;
[![City temperatures](http://i.imgur.com/exsAGOr.png)](http://i.imgur.com/exsAGOr.png)&#xD;
&#xD;
### Cleaning and resampling (if needed)&#xD;
&#xD;
Here we check the time series for missing values:&#xD;
&#xD;
    Length /@ Through[ts[&amp;#034;Path&amp;#034;]]&#xD;
    Count[#, _Missing, \[Infinity]] &amp;amp; /@ Through[ts[&amp;#034;Path&amp;#034;]]&#xD;
    Total[%]&#xD;
    (* {1483, 1465, 742} *)&#xD;
    (* {0,0,0} *)&#xD;
    (* 0 *)&#xD;
&#xD;
Resampling per hour:&#xD;
&#xD;
    ts = TimeSeriesResample[#, &amp;#034;Hour&amp;#034;, ResamplingMethod -&amp;gt; {&amp;#034;Interpolation&amp;#034;, InterpolationOrder -&amp;gt; 1}] &amp;amp; /@ ts&#xD;
&#xD;
### Mixing the time series&#xD;
&#xD;
In order to do a good mixing we select a mixing matrix for which all column sums are close to one:&#xD;
&#xD;
    mixingMat = #/Total[#] &amp;amp; /@ RandomReal[1, {3, 3}];&#xD;
    MatrixForm[mixingMat]&#xD;
    (* mixingMat = {{0.357412, 0.403913, 0.238675}, {0.361481, 0.223506, 0.415013}, {0.36564, 0.278565, 0.355795}} *)&#xD;
    Total[mixingMat]&#xD;
    (* {1.08453, 0.905984, 1.00948} *)&#xD;
    &#xD;
Note the row normalization.&#xD;
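&#xD;
As a quick check (mine, not in the original run): each row sums to one exactly by construction, while the column sums printed above are only approximately one.&#xD;
&#xD;
    Total /@ mixingMat&#xD;
    (* {1., 1., 1.} *)&#xD;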
&#xD;
Make the mixed signals:&#xD;
&#xD;
    tsMixed = Table[TimeSeriesThread[mixingMat[[i]].# &amp;amp;, ts], {i, 3}]&#xD;
  &#xD;
Plot the original and mixed signals:&#xD;
&#xD;
    Grid[{{DateListPlot[ts, PlotLegends -&amp;gt; cities, PlotLabel -&amp;gt; &amp;#034;Original signals&amp;#034;, opts],&#xD;
    DateListPlot[tsMixed, PlotLegends -&amp;gt; Automatic, PlotLabel -&amp;gt; &amp;#034;Mixed signals&amp;#034;, opts]}}]&#xD;
     &#xD;
[![Original and mixed temperature signals](http://i.imgur.com/iULtIMV.png)](http://i.imgur.com/iULtIMV.png)&#xD;
      &#xD;
### Application of ICA&#xD;
&#xD;
At this point we apply ICA (possibly a few times, since the results vary with the random initialization) and plot the found source signals:&#xD;
&#xD;
    X = Transpose[Through[tsMixed[&amp;#034;Path&amp;#034;]][[All, All, 2]] /. Quantity[v_, _] :&amp;gt; v];&#xD;
    {S, A} = IndependentComponentAnalysis[X, 3];&#xD;
    DateListPlot[Transpose[{tsMixed[[1]][&amp;#034;Dates&amp;#034;], #}], PlotTheme -&amp;gt; &amp;#034;Detailed&amp;#034;, ImageSize -&amp;gt; 250] &amp;amp; /@ Transpose[S]&#xD;
    &#xD;
[![ICA found temperature time series components](http://i.imgur.com/f5tABhZ.png)](http://i.imgur.com/f5tABhZ.png)    &#xD;
&#xD;
Compare with the original time series:&#xD;
&#xD;
    MapThread[DateListPlot[#1, PlotTheme -&amp;gt; &amp;#034;Detailed&amp;#034;, PlotLabel -&amp;gt; #2, ImageSize -&amp;gt; 250] &amp;amp;, {ts, cities}]&#xD;
    &#xD;
[![Original temperature time series](http://i.imgur.com/dM6QPHp.png)](http://i.imgur.com/dM6QPHp.png)&#xD;
&#xD;
After permuting and inverting some of the found source signals we see they are fairly close to the originals:&#xD;
&#xD;
    pinds = {3, 1, 2};&#xD;
    pmat = IdentityMatrix[3][[All, pinds]];&#xD;
&#xD;
    DateListPlot[Transpose[{tsMixed[[1]][&amp;#034;Dates&amp;#034;], #}], PlotTheme -&amp;gt; &amp;#034;Detailed&amp;#034;, ImageSize -&amp;gt; 250] &amp;amp; /@ &#xD;
      Transpose[S.DiagonalMatrix[{1, -1, 1}].pmat]&#xD;
&#xD;
[![Permuted and inverted found source signals](http://i.imgur.com/jMlLQl5.png)](http://i.imgur.com/jMlLQl5.png) &#xD;
&#xD;
References&#xD;
----------&#xD;
&#xD;
\[1\] A. Hyvarinen and E. Oja (2000), Independent Component Analysis: Algorithms and Applications, Neural Networks, 13(4-5):411-430. URL:&#xD;
[https://www.cs.helsinki.fi/u/ahyvarin/papers/NN00new.pdf](https://www.cs.helsinki.fi/u/ahyvarin/papers/NN00new.pdf).&#xD;
&#xD;
\[2\] Wikipedia entry, [Independent component analysis](https://en.wikipedia.org/wiki/Independent_component_analysis). &#xD;
&#xD;
\[3\] A. Antonov, [Independent Component Analysis Mathematica package](https://github.com/antononcube/MathematicaForPrediction/blob/master/IndependentComponentAnalysis.m),&#xD;
(2016), source code at [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction/),&#xD;
package [IndependentComponentAnalysis.m](https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/IndependentComponentAnalysis.m).&#xD;
&#xD;
\[4\] J. L. Marchini, C. Heaton and B. D. Ripley, fastICA, R package,&#xD;
URLs: &amp;lt;https://cran.r-project.org/web/packages/fastICA/index.html&amp;gt;,&#xD;
&amp;lt;https://cran.r-project.org/web/packages/fastICA/fastICA.pdf&amp;gt;.&#xD;
&#xD;
\[5\] A. Antonov, [Implementation of the Non-Negative Matrix Factorization algorithm in Mathematica](https://github.com/antononcube/MathematicaForPrediction/blob/master/NonNegativeMatrixFactorization.m),&#xD;
(2013), source code at [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction/),&#xD;
package [NonNegativeMatrixFactorization.m](https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/NonNegativeMatrixFactorization.m).&#xD;
&#xD;
\[6\] H. Hsieh and J. Chien, [A new nonnegative matrix factorization for independent component analysis](https://www.researchgate.net/publication/224149642_A_new_nonnegative_matrix_factorization_for_independent_component_analysis), &#xD;
(2010), Conference: Acoustics Speech and Signal Processing (ICASSP).</description>
    <dc:creator>Anton Antonov</dc:creator>
    <dc:date>2016-05-24T18:31:16Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2071595">
    <title>Identifying EEG signals with deep learning</title>
    <link>https://community.wolfram.com/groups/-/m/t/2071595</link>
    <description>![enter image description here][1]&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=sig.gif&amp;amp;userId=2025695&#xD;
  [2]: https://www.wolframcloud.com/obj/67541fca-9014-4c66-8bbf-e6db0ca3d631&#xD;
&#xD;
&#xD;
  [Original]: https://www.wolframcloud.com/obj/f20171185/Published/cortex-bci.nb</description>
    <dc:creator>Anshul Chandra</dc:creator>
    <dc:date>2020-09-06T09:55:32Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/992466">
    <title>Classifier for Human Motions with data from an accelerometer</title>
    <link>https://community.wolfram.com/groups/-/m/t/992466</link>
    <description>This project was part of a Wolfram Mentorship Program.&#xD;
&#xD;
The classification of human motions based on patterns and physical data is of great importance in developing areas such as robotics. Also, a function that recognizes a specific human motion can be an important addition to artificial intelligence and physiological monitoring systems. This project is about acquiring, curating and analyzing experimental data from certain actions such as walking, running and climbing stairs. The data, taken with the help of an accelerometer, needs to be turned into an acceptable input for the Classify function. Finally, the function can be updated with more data and classes to make it more accurate and complete.&#xD;
&#xD;
**Algorithms and procedures**&#xD;
&#xD;
The data for this project was acquired by programming an Arduino UNO microcontroller board from a Raspberry Pi computer, using the Wolfram Language. An accelerometer connected to the Arduino sent measurements each time it was called upon, and Mathematica on the Raspberry Pi collected and uploaded the data.&#xD;
The raw data had to be processed for it to be a good input for the Classify function. First, it was transformed into a spectrogram (to analyze the frequency domain of the data). Then the spectrogram&amp;#039;s image was put through the IFData function, which filters out some of the noise, and finally the images were converted into numerical data with the UpToMeasurements function (main function: ComponentMeasurements).&#xD;
This collection of numerical data was put into a classifier under six different classes (standing, walking, running, jumping, waving and climbing stairs).&#xD;
&#xD;
*The IFData function and the UpToMeasurements functions were sent to me by Todd Rowland during the Mentorship. Both functions will be shown at the end of this post.&#xD;
&#xD;
**Example visualization**&#xD;
&#xD;
The following ListLinePlot is an extract from the jumping data:&#xD;
&#xD;
![Example data][1]&#xD;
&#xD;
Next, the data from the plot above is turned into a spectrogram by the function Spectrogram, i.e.:   &#xD;
&#xD;
    spectrogramImage = &#xD;
     Spectrogram[jumpingData, SampleRate -&amp;gt; 10, FrameTicks -&amp;gt; None, &#xD;
      Frame -&amp;gt; False, Ticks -&amp;gt; None, FrameLabel -&amp;gt; None]&#xD;
&#xD;
&#xD;
&#xD;
![Example jumping data spectrogram][2]&#xD;
&#xD;
Finally, all the spectrogram images are used as input for the UpToMeasurements function, along with some properties for the ComponentMeasurements function:&#xD;
&#xD;
 i.e:  &#xD;
&#xD;
    numericalData = &#xD;
     N@Flatten[&#xD;
       UpToMeasurements[&#xD;
        spectrogramImage, {&amp;#034;EnclosingComponentCount&amp;#034;, &amp;#034;Max&amp;#034;, &#xD;
         &amp;#034;MaxIntensity&amp;#034;, &amp;#034;TotalIntensity&amp;#034;, &amp;#034;StandardDeviationIntensity&amp;#034;, &#xD;
         &amp;#034;ConvexCoverage&amp;#034;, &amp;#034;Total&amp;#034;, &amp;#034;Skew&amp;#034;, &amp;#034;FilledCircularity&amp;#034;, &#xD;
         &amp;#034;MaxCentroidDistance&amp;#034;, &amp;#034;ExteriorNeighborCount&amp;#034;, &amp;#034;Area&amp;#034;, &#xD;
         &amp;#034;MinCentroidDistance&amp;#034;, &amp;#034;FilledCount&amp;#034;, &amp;#034;MeanIntensity&amp;#034;, &#xD;
         &amp;#034;StandardDeviation&amp;#034;, &amp;#034;Energy&amp;#034;, &amp;#034;Count&amp;#034;, &amp;#034;MeanCentroidDistance&amp;#034;}, &#xD;
        1]]&#xD;
&#xD;
This outputs a flattened list of real numbers computed from the properties:&#xD;
&#xD;
    {0., 1., 1., 1., 1., 19294.9, 0.222164, 0.985741, 31011.8, 15212.5, \&#xD;
    9624.42, -0.0596506, 0.724527, 190.534, 0., 42584.5, 0.364667, \&#xD;
    42584., 0.453101, 0.315209, 0.232859, 0.169549, 0.00909654, 42584., \&#xD;
    98.7136}&#xD;
&#xD;
These numbers are grouped in a nested list which contains data for all of the human motions. All the data is finally turned into a classifier using the Classify function.&#xD;
&#xD;
After several combinations of both properties and data sets, I was able to produce classifier functions with an accuracy of 91% and a total size of 269 kB.&#xD;
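&#xD;
As a minimal sketch of that last step (my reconstruction, assuming the `trainingSet` and `testSet` associations built by the `Training` and `Test` definitions shown at the end of this post):&#xD;
&#xD;
    classifier = Classify[trainingSet, Method -&amp;gt; &amp;#034;SupportVectorMachine&amp;#034;];&#xD;
    ClassifierMeasurements[classifier, testSet, &amp;#034;Accuracy&amp;#034;]&#xD;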
&#xD;
------------------------------------------------------------&#xD;
&#xD;
**Attempt on building a classify function using image processing**&#xD;
&#xD;
On the other hand, the image processing capabilities of Mathematica let us extract data from images, hence it should be possible to create a classifier which recognizes the moving patterns in the frames of a video. First, I had to take the noise out of every image; this proved to be troublesome, since the background can vary greatly between video samples. Then, I binarized the images in order to isolate the moving particles in each frame, and extracted their positions with ImageData. Lastly, a data set can be formed from all the analyzed frames; this data can essentially be used in the same way as the accelerometer&amp;#039;s, but the classifier was unsuccessful in separating the samples accurately.&#xD;
This was mainly because the accelerometer&amp;#039;s data is taken at a constant rate and very precisely, whereas the images depend on the camera&amp;#039;s frame rate and many other external factors. This is what made the data different enough to fail being classified with accuracy. Furthermore, if a big dataset is made from videos of people performing certain actions, the data processing can follow steps similar to the ones explained in this report, thus producing a similar classifier function. This could further increase the function&amp;#039;s accuracy, but the process needs an algorithm that can effectively trace the path of &amp;#034;a particle&amp;#034; that moves through each of the frames of the video and extract precise velocity data from said movement.&#xD;
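&#xD;
One possible direction for that tracing step (my suggestion, not part of the project) is the built-in function `ImageFeatureTrack`, which follows feature points across frames; differencing the tracked positions gives crude per-frame velocity estimates. Here `frames` stands for the preprocessed video frames:&#xD;
&#xD;
    pts = ImageFeatureTrack[frames]; (*per-frame positions of tracked feature points*)&#xD;
    trajectories = DeleteCases[Transpose[pts], traj_ /; MemberQ[traj, _Missing]]; (*keep features tracked in every frame*)&#xD;
    velocities = Differences /@ trajectories (*displacement per frame for each surviving feature*)&#xD;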
&#xD;
------------------------------------------------------------&#xD;
&#xD;
In conclusion, the classify function works very well with the data provided; its accuracy is about 91% for the SupportVectorMachine method. This is a very good result for the human motion classifier. The next step is to add more classes to the function, and to test the classifier with data acquired from different sources, such as another accelerometer and various videos of human motion footage.&#xD;
&#xD;
-----------------------------------------&#xD;
&#xD;
**Code:**&#xD;
&#xD;
 - UpToMeasurements function&#xD;
&#xD;
        UpToMeasurements[image_, property_, n_] :=&#xD;
          MaximalBy[ComponentMeasurements[image, &amp;#034;Count&amp;#034;], Last, UpTo[n]][[All, 1]] /.&#xD;
            ComponentMeasurements[image, property]&#xD;
&#xD;
*Note: This function simplifies the exploration of properties to input into ComponentMeasurements; it also outputs a usable list of numerical data retrieved from a given group of images.&#xD;
&#xD;
 - IFData function:&#xD;
&#xD;
        imagefunctions=&amp;lt;|1-&amp;gt; (EntropyFilter[#,3]&amp;amp;),&#xD;
        2-&amp;gt; (EdgeDetect[EntropyFilter[#,3]]&amp;amp;),&#xD;
        3-&amp;gt;Identity,&#xD;
        4-&amp;gt; (ImageAlign[reference110,#]&amp;amp;),&#xD;
        5-&amp;gt; (ImageHistogram[#,FrameTicks-&amp;gt;None,Frame-&amp;gt;False,FrameLabel-&amp;gt;None,Ticks-&amp;gt;None]&amp;amp;),&#xD;
        6-&amp;gt; (ImageApply[#^.6&amp;amp;,#]&amp;amp;),&#xD;
        7-&amp;gt; (Colorize[MorphologicalComponents[#]]&amp;amp;),&#xD;
        8-&amp;gt; (HighlightImage[#,ImageCorners[#,1,.001,5]]&amp;amp;),&#xD;
        9-&amp;gt; (HighlightImage[#,Graphics[Disk[{200,200},200]]]&amp;amp;),&#xD;
        10-&amp;gt; ImageRotate,&#xD;
        11-&amp;gt; (ImageRotate[#,45Degree]&amp;amp;),&#xD;
        12-&amp;gt;(ImageTransformation[#,Sqrt]&amp;amp;),&#xD;
        13-&amp;gt;(ImageTransformation[#,Function[p,With[{C=150.,R=35.},{p[[1]]+(R*Cos[(p[[1]]-C)*360*2/R]/6),p[[2]]}]]]&amp;amp;),&#xD;
        14-&amp;gt;( Dilation[#,DiskMatrix[4]]&amp;amp;),&#xD;
        15-&amp;gt;( ImageSubtract[Dilation[#,1],#]&amp;amp;),&#xD;
        16-&amp;gt; (Erosion[#,DiskMatrix[4]]&amp;amp;),&#xD;
        17-&amp;gt; (Opening[#,DiskMatrix[4]]&amp;amp;),&#xD;
        18-&amp;gt;(Closing[#,DiskMatrix[4]]&amp;amp;),&#xD;
        19-&amp;gt;DistanceTransform,&#xD;
        20-&amp;gt; InverseDistanceTransform,&#xD;
        21-&amp;gt; (HitMissTransform[#,{{1,-1},{-1,-1}}]&amp;amp;),&#xD;
        22-&amp;gt;(TopHatTransform[#,5]&amp;amp;),&#xD;
        23-&amp;gt;(BottomHatTransform[#,5]&amp;amp;), &#xD;
        24-&amp;gt; (MorphologicalTransform[Binarize[#],Max]&amp;amp;),&#xD;
        25-&amp;gt; (MorphologicalTransform[Binarize[#],&amp;#034;EndPoints&amp;#034;]&amp;amp;),&#xD;
        26-&amp;gt;MorphologicalGraph,&#xD;
        27-&amp;gt;SkeletonTransform,&#xD;
        28-&amp;gt;Thinning,&#xD;
        29-&amp;gt;Pruning,&#xD;
        30-&amp;gt; MorphologicalBinarize,&#xD;
        31-&amp;gt; (ImageAdjust[DerivativeFilter[#,{1,1}]]&amp;amp;),&#xD;
        32-&amp;gt; (GradientFilter[#,1]&amp;amp;),&#xD;
        33-&amp;gt; MorphologicalPerimeter,&#xD;
        34-&amp;gt; Radon&#xD;
        |&amp;gt;;&#xD;
        &#xD;
        reference110=BlockRandom[SeedRandom[&amp;#034;110&amp;#034;];Image[CellularAutomaton[110,RandomInteger[1,400],400]]];&#xD;
        &#xD;
        IFData[n_Integer]:=Lookup[imagefunctions,n,Identity]&#xD;
        &#xD;
        IFData[&amp;#034;Count&amp;#034;]:=Length[imagefunctions]&#xD;
        &#xD;
        IFData[All]:=imagefunctions&#xD;
&#xD;
*Note: This function groups together several image filtering functions; it was used to simplify the exploration of functions to be used in the classifier.&#xD;
**This function was written by the Wolfram team, but was slightly modified for this project.&#xD;
&#xD;
 - propertyVector function (this function automatically evaluates all the prior necessary code needed to create the classify functions):&#xD;
&#xD;
        propertyVector[property_]:={walkingvector=N@Flatten[UpToMeasurements[#,property,1]]&amp;amp;/@IFData[6]/@(Spectrogram[#,SampleRate-&amp;gt;10,FrameTicks-&amp;gt;None,Frame-&amp;gt;False,Ticks-&amp;gt;None,FrameLabel-&amp;gt;None]&amp;amp;/@walk);&#xD;
        jumpingvector=N@Flatten[UpToMeasurements[#,property,1]]&amp;amp;/@IFData[6]/@(Spectrogram[#,SampleRate-&amp;gt;10,FrameTicks-&amp;gt;None,Frame-&amp;gt;False,Ticks-&amp;gt;None,FrameLabel-&amp;gt;None]&amp;amp;/@jump);&#xD;
        standingvector=N@Flatten[UpToMeasurements[#,property,1]]&amp;amp;/@IFData[6]/@(Spectrogram[#,SampleRate-&amp;gt;10,FrameTicks-&amp;gt;None,Frame-&amp;gt;False,Ticks-&amp;gt;None,FrameLabel-&amp;gt;None]&amp;amp;/@stand);&#xD;
        runningvector=N@Flatten[UpToMeasurements[#,property,1]]&amp;amp;/@IFData[6]/@(Spectrogram[#,SampleRate-&amp;gt;10,FrameTicks-&amp;gt;None,Frame-&amp;gt;False,Ticks-&amp;gt;None,FrameLabel-&amp;gt;None]&amp;amp;/@run);&#xD;
        wavingvector=N@Flatten[UpToMeasurements[#,property,1]]&amp;amp;/@IFData[6]/@(Spectrogram[#,SampleRate-&amp;gt;10,FrameTicks-&amp;gt;None,Frame-&amp;gt;False,Ticks-&amp;gt;None,FrameLabel-&amp;gt;None]&amp;amp;/@wave);&#xD;
        stairsvector=N@Flatten[UpToMeasurements[#,property,1]]&amp;amp;/@IFData[6]/@(Spectrogram[#,SampleRate-&amp;gt;10,FrameTicks-&amp;gt;None,Frame-&amp;gt;False,Ticks-&amp;gt;None,FrameLabel-&amp;gt;None]&amp;amp;/@stairs);&#xD;
        walkingvectortest=N@Flatten[UpToMeasurements[#,property,1]]&amp;amp;/@IFData[6]/@(Spectrogram[#,SampleRate-&amp;gt;10,FrameTicks-&amp;gt;None,Frame-&amp;gt;False,Ticks-&amp;gt;None,FrameLabel-&amp;gt;None]&amp;amp;/@testwalk);&#xD;
        jumpingvectortest=N@Flatten[UpToMeasurements[#,property,1]]&amp;amp;/@IFData[6]/@(Spectrogram[#,SampleRate-&amp;gt;10,FrameTicks-&amp;gt;None,Frame-&amp;gt;False,Ticks-&amp;gt;None,FrameLabel-&amp;gt;None]&amp;amp;/@testjump);&#xD;
        standingvectortest=N@Flatten[UpToMeasurements[#,property,1]]&amp;amp;/@IFData[6]/@(Spectrogram[#,SampleRate-&amp;gt;10,FrameTicks-&amp;gt;None,Frame-&amp;gt;False,Ticks-&amp;gt;None,FrameLabel-&amp;gt;None]&amp;amp;/@teststand);&#xD;
        runningvectortest=N@Flatten[UpToMeasurements[#,property,1]]&amp;amp;/@IFData[6]/@(Spectrogram[#,SampleRate-&amp;gt;10,FrameTicks-&amp;gt;None,Frame-&amp;gt;False,Ticks-&amp;gt;None,FrameLabel-&amp;gt;None]&amp;amp;/@testrun);&#xD;
        wavingvectortest=N@Flatten[UpToMeasurements[#,property,1]]&amp;amp;/@IFData[6]/@(Spectrogram[#,SampleRate-&amp;gt;10,FrameTicks-&amp;gt;None,Frame-&amp;gt;False,Ticks-&amp;gt;None,FrameLabel-&amp;gt;None]&amp;amp;/@testwave);&#xD;
        stairsvectortest=N@Flatten[UpToMeasurements[#,property,1]]&amp;amp;/@IFData[6]/@(Spectrogram[#,SampleRate-&amp;gt;10,FrameTicks-&amp;gt;None,Frame-&amp;gt;False,Ticks-&amp;gt;None,FrameLabel-&amp;gt;None]&amp;amp;/@teststairs);}&#xD;
        &#xD;
        Training:=trainingSet=&amp;lt;|&amp;#034;walking&amp;#034;-&amp;gt;walkingvector,&amp;#034;running&amp;#034;-&amp;gt;runningvector,&#xD;
        &amp;#034;standing&amp;#034;-&amp;gt; standingvector,&#xD;
        &amp;#034;jumping&amp;#034;-&amp;gt; jumpingvector,&#xD;
        &amp;#034;waving&amp;#034;-&amp;gt; wavingvector,&#xD;
        &amp;#034;stairs&amp;#034;-&amp;gt; stairsvector|&amp;gt;;&#xD;
        &#xD;
        Test:=testSet=&amp;lt;|&amp;#034;walking&amp;#034;-&amp;gt;walkingvectortest,&amp;#034;running&amp;#034;-&amp;gt;runningvectortest,&#xD;
        &amp;#034;standing&amp;#034;-&amp;gt; standingvectortest,&#xD;
        &amp;#034;jumping&amp;#034;-&amp;gt; jumpingvectortest,&#xD;
        &amp;#034;waving&amp;#034;-&amp;gt; wavingvectortest,&#xD;
        &amp;#034;stairs&amp;#034;-&amp;gt; stairsvectortest|&amp;gt;;&#xD;
&#xD;
 - Example code for the acceleration data acquisition from image processing:&#xD;
&#xD;
        images=Import[&amp;#034;$path&amp;#034;]&#xD;
        motionData =&#xD;
          Count[#, 1] &amp;amp; /@&#xD;
            Flatten[&#xD;
              ImageData[Binarize[ImageSubtract[ImageSubtract[#[[1]], #[[2]]], ImageSubtract[#[[2]], #[[3]]]]]] &amp;amp; /@&#xD;
                Partition[images, 3, 1], 1]&#xD;
&#xD;
*Note: before this code can be used, the backgrounds of the frames of the video have to be removed, and the image has to be binarized as much as possible (some examples will be shown in the next section).&#xD;
&#xD;
 - Example code for the retrieval of raw data from DataDrop:&#xD;
&#xD;
        rawData=Values[Databin[&amp;#034;Serial#&amp;#034;, {#}]];&#xD;
        data=Flatten[rawData[&amp;#034;(xacc/yacc/zacc)&amp;#034;]];&#xD;
&#xD;
---------------------------------&#xD;
&#xD;
**Please feel free to contact me or comment** if you are interested in the rest of the code (uploading the C code to the Arduino, the manufacturer&amp;#039;s code for the accelerometer, the C code switch that lets Mathematica communicate with the Arduino, and the Wolfram Language code used to start each loop in the switch that retrieves data). Also, I could send the classify function, or any other information that I might have left out; all suggestions are welcome.&#xD;
&#xD;
  [1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1.png&amp;amp;userId=602285&#xD;
  [2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2.png&amp;amp;userId=602285</description>
    <dc:creator>Pablo Ruales</dc:creator>
    <dc:date>2017-01-11T01:15:04Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1383630">
    <title>[WSC18] Analyzing and visualizing chord sequences in music</title>
    <link>https://community.wolfram.com/groups/-/m/t/1383630</link>
    <description>During this year&amp;#039;s Wolfram Summer Camp, being mentored by Christian Pasquel, I developed a tool that identifies chord sequences in music (from MIDI files) and generates a corresponding graph. The graph represents all [unique] chords as vertices, and connects every pair of chronologically subsequent chords with a directed edge. Here is an example of a graph I generated:&#xD;
&#xD;
![Graph generated from Bach&amp;#039;s prelude no.1 of the Well Tempered Klavier (Book I)][1]&#xD;
&#xD;
&#xD;
Below is a detailed account on the development and current state of the project, plus some background on the corresponding musical theory notions.&#xD;
&#xD;
#Introduction&#xD;
&#xD;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;**GOAL** | The aim of this project is to develop a utility that identifies chords (e.g. C Major, A minor, G7, etc.) from MIDI files, in chronological order, and then generates a graph for visualizing that chord sequence. In the graph, each vertex would represent a unique chord, and each pair of chronologically adjacent chords would be connected by a directed edge (i.e. an arrow). So, for example, if at some point in the music that is being analyzed there is a transition from a major G chord to a major C chord, there would be an arrow that goes from the G Major chord to the C Major chord. Therefore, the graph would describe a [Markov chain][2] for the chords. The purpose of the graph is to visualize frequent chord sequences and progressions within a certain piece of music.&#xD;
&#xD;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;**MOTIVATION** | While brainstorming for project ideas, I don&amp;#039;t know why, I had a desire to do something with graphs. Then I asked myself, &amp;#034;What are graphs good at modelling?&amp;#034;. I mentally browsed through my areas of interest, searching for any that matched that requirement. One of my main interests is music; I am somewhat of a musician myself. And, in fact, [musical] harmony *is* a good subject to be modelled by graphs. Harmony, one of the fundamental pillars of music (and perhaps the most important), not only involves the chords themselves, but, more significantly, the *transitions* between those, which is what gives character to music. And directed graphs, and, specifically, Markov models, are a perfect match for transitions between states.&#xD;
&#xD;
&#xD;
----------&#xD;
&#xD;
&#xD;
#Some background&#xD;
*Skip this if you aren&amp;#039;t interested in the musical theory part or if you already have a background in music theory!*&#xD;
&#xD;
##What is a chord?&#xD;
A chord is basically a group of notes played together (simultaneously). Chords are the quanta of musical &amp;#034;feeling&amp;#034;; the typical, but somewhat naïve, example is the sensation of major chords sounding &amp;#034;happy&amp;#034; and minor chords sounding &amp;#034;sad&amp;#034; or melancholic (more on types of chords later).&#xD;
&#xD;
Types of chords are defined by the [intervals][3] (distance in pitch) between the notes. The *root* of a chord is the &amp;#034;most important&amp;#034; or fundamental note of the chord, in the sense that it is the &amp;#034;base&amp;#034; from which the aforementioned intervals are measured. In other words, the archetype of the chord defines the &amp;#034;feel&amp;#034; and the general harmonic properties of the chord, while the root defines the pitch of the chord. So a &amp;#034;C Major&amp;#034; chord is a chord with archetype &amp;#034;major triad&amp;#034; (more on that later) built on the note C; i.e., its root is C.&#xD;
&#xD;
The *sequence* of chords in a piece constitutes its **harmony**, and it can convey much more complex musical messages or feelings than a single chord, just as in language: a single word does have meaning, but a sentence can have a much more complex meaning than any single word.&#xD;
&#xD;
##Patterns in chord sequences&#xD;
The main difference between language and music is that language, in general, has a much stricter structure (i.e. the order of words, a.k.a. syntax) than music: the latter is an art, and there are no predetermined rules to follow. But humans \[have a tendency to\] like patterns, and music wouldn&amp;#039;t be so universally beloved if it didn&amp;#039;t contain any patterns. This also explains the unpopularity of [atonal music][4] (example [here][5]). But even atonal music has patterns: it may do its best to avoid harmonic patterns, but it still contains some level of rhythmic, textural or other kinds of patterns.&#xD;
&#xD;
This is why using graphs to visualize chord sequences is interesting: it is a semidirect way of identifying the harmonic patterns that distinguish different genres, styles, forms, pieces or even fragments of music. In my project, I have mainly focused on the &amp;#034;western&amp;#034; conception of tonal music, and particularly on its &amp;#034;classical&amp;#034; version (what I mean by &amp;#034;classical&amp;#034; is, for lack of a better definition, a classification that encompasses all music where the composer is, culturally, the most important artist). That doesn&amp;#039;t mean this tool isn&amp;#039;t apt for other types of music; it just means it will analyze it from this specific standpoint.&#xD;
&#xD;
In tonal music, the harmonic patterns are all related to a certain notion of &amp;#034;center of gravity&amp;#034;: the [*tonic*][6], which is, in some way, the music&amp;#039;s harmonic &amp;#034;home&amp;#034;. Classical (as in pre-XX-century) tonal music usually ends (and often starts) with the tonic chord. In fact, we can further extend the analogy with gravity by saying that music consists of a game of tension, in which the closer you are to the center of gravity (the tonic), the greater the &amp;#034;pull&amp;#034;. In an oversimplified manner, the musical equivalent of the [Schwarzschild radius][7] is the [dominant chord][8]: it tends towards the tonic. Well, not really, because you *can* turn back from it, and in fact a lot of interesting harmonic sequences consist in doing just that.&#xD;
&#xD;
##Some types of chords&#xD;
In &amp;#034;classical&amp;#034; music (see definition above), there are mainly these kinds of chords (based on the number of unique notes they contain): triad chords (i.e. three-note chords), seventh chords (i.e. four-note chords; we&amp;#039;ll see why they&amp;#039;re called *seventh* in a bit), and ninth chords (five-note chords). There is another main distinction: major and minor chords (i.e. the cliché &amp;#034;happy&amp;#034; vs &amp;#034;sad&amp;#034; distinction).&#xD;
&#xD;
###Triad chords&#xD;
Probably the simplest and most frequent chord is the triad chord (either major or minor). Here is a picture of a major and a minor triad C chord (left to right):&#xD;
&#xD;
![Major and minor triad C chords (ltr)][9]&#xD;
&#xD;
###Seventh chords&#xD;
[Seventh chords][10] are called so because they contain a seventh [interval][11]. Their main significance is in dominant chords, where they usually appear in the major-triad-minor-seventh (a.k.a [&amp;#034;dominant&amp;#034;][12]) form. Another important seventh chord form is the fully diminished seventh chord (these will be relevant for the code later), which also tends to resolve (&amp;#034;resolve&amp;#034; is music jargon for &amp;#034;transition to a chord with less tension&amp;#034;) to tonic.&#xD;
&#xD;
![Seventh chords][13]&#xD;
&#xD;
###Ninth chords&#xD;
Although not extremely frequent, they do appear in classical music. The most &amp;#034;popular&amp;#034; is the dominant ninth chord (an extension of the dominant 7th). An alternative for this chord is the minor ninth dominant chord (built from the same dominant 7th chord, but with a minor ninth instead).&#xD;
&#xD;
&amp;lt;br&amp;gt;&#xD;
&#xD;
&#xD;
----------&#xD;
&#xD;
&#xD;
#Algorithms and Code&#xD;
In this section I&amp;#039;m going to walk through my code in order of execution. Four main parts can be distinguished in my project: importing and preprocessing, splitting the note sequence into &amp;#034;chunks&amp;#034; to be analyzed as chords, identifying the chord in each of those chunks, and visualizing the whole sequence as a graph.&#xD;
&#xD;
##First phase: importing and preprocessing the MIDI file&#xD;
The first operation that needs to be done is importing the MIDI file and preprocessing it. This includes selecting which elements to import from the file, converting them to a given simplified form, and performing any sorting, deletion of superfluous elements, or other modification that needs to be done.&#xD;
&#xD;
For this purpose I defined the function `importMIDI`:&#xD;
	&#xD;
	importMIDI[filename_String] := MapAt[Flatten[#, 1] &amp;amp;, MapAt[flattenAndSortSoundNotes, &#xD;
           Import[(dir &amp;lt;&amp;gt; filename &amp;lt;&amp;gt; &amp;#034;.mid&amp;#034;), {{&amp;#034;SoundNotes&amp;#034;, &amp;#034;Metadata&amp;#034;}}], &#xD;
	1], 2]&#xD;
&#xD;
Here `dir` stands for the directory where I saved all my MIDIs (to avoid having to type in the whole directory every time). Notice that we&amp;#039;re importing the music as SoundNotes *and* the file&amp;#039;s metadata; we will need the latter for determining the boundaries of measures (see below). The function `flattenAndSortSoundNotes` does what it sounds like: it converts the list of `SoundNote`s that `Import` returned into a flattened list of notes (i.e. a single track), sorted by their starting time. It also gets rid of anything that isn&amp;#039;t necessary for chord identification (i.e. rhythmic sounds or effects). Consult the attached notebook for the explicit definition.&#xD;
&#xD;
Here is the format the sequence of notes is returned in (i.e. `importMIDI[...][[1]]`):&#xD;
&#xD;
    {{&amp;#034;C4&amp;#034;, {0., 1.4625}}, {&amp;#034;E4&amp;#034;, {0.18125, 1.4625}}, {&amp;#034;G4&amp;#034;, {0.36875, 0.525}}, &amp;lt;&amp;lt;562&amp;gt;&amp;gt;, {&amp;#034;G2&amp;#034;, {105., 107.963}}, {&amp;#034;G4&amp;#034;, {105., 107.963}}}&#xD;
&#xD;
Each sub-list represents a note. Its first element is the actual pitch; the second is a list that represents the timespan (i.e. start and end time in seconds).&#xD;
&#xD;
&amp;lt;br&amp;gt;&#xD;
&#xD;
##Second phase: splitting the note sequence into chunks&#xD;
The challenge in this part of the project is knowing how to determine which notes form a single chord; i.e., where to put the boundary between one chord and the next. &#xD;
&#xD;
The solution I came up with is not optimal, but, until now, nothing better has occurred to me (suggestions are welcome!). It involves determining where each measure start/end lies in time from the metadata and splitting each of those into a certain amount of sub-parts; then the notes are grouped by the specific sub-part of the specific measure they pertain to.  The rationale behind this is that chords in classical music tend to be well-contained within measures or rational fractions of these.&#xD;
&#xD;
This procedure is contained in the function `chordSequenceUsingMeasures`. I&amp;#039;m going to go over it quickly:&#xD;
&#xD;
    chordSequenceUsingMeasures[midiData_List /; Length@midiData == 2, &#xD;
      measureSplit_: 2, analyzer_String: &amp;#034;Heuristic&amp;#034;] := &#xD;
     Block[{noteSequence, metadata,  chunkKeyframes, chunkedSequence, &#xD;
       result},&#xD;
      &#xD;
&#xD;
      (*Separate notes from metadata*)&#xD;
      noteSequence = midiData[[1]];&#xD;
      metadata = midiData[[2]];&#xD;
      &#xD;
Up to here it&amp;#039;s pretty self-evident.&#xD;
&#xD;
      (*Get measure keyframes*)&#xD;
      chunkKeyframes = &#xD;
       divideByN[&#xD;
        measureKeyframesFromMetadata[&#xD;
         metadata, (Last@noteSequence)[[2, 2]]], measureSplit]; &#xD;
&#xD;
Here the function `measureKeyframesFromMetadata` is called. It fetches all of the `TimeSignature` and `SetTempo` tags in the metadata and identifies the position of each measure from them. `divideByN` subdivides each measure by `measureSplit` (an optional argument with default value `2`).&#xD;
      &#xD;
      (*Chunk sequence*)&#xD;
      chunkedSequence = {};&#xD;
      Module[{i = 1},&#xD;
       Do[&#xD;
        With[{k0 = chunkKeyframes[[j]], k1 = chunkKeyframes[[j + 1]]}, &#xD;
         Module[{chunk = {}}, &#xD;
          While[&#xD;
           i &amp;lt;= Length@noteSequence &amp;amp;&amp;amp; ( &#xD;
             k0 &amp;lt;= noteSequence[[i, 2, 1]] &amp;lt; k1 || &#xD;
              k0 &amp;lt; noteSequence[[i, 2, 2]] &amp;lt;= k1 ), &#xD;
       AppendTo[chunk, noteSequence[[i]]]; i++;]; &#xD;
          AppendTo[chunkedSequence, chunk]&#xD;
          ]&#xD;
         ], &#xD;
        {j, Length@chunkKeyframes - 1}]];&#xD;
      chunkedSequence = &#xD;
       DeleteCases[chunkedSequence, l_List /; Length@l == 0];&#xD;
&#xD;
Once the measures&amp;#039; timespans have been determined, a list of &amp;#034;chunks&amp;#034; (lists of notes grouped by measure part) is generated. &#xD;
      &#xD;
      (*Call analyzer*)&#xD;
      Switch[analyzer,&#xD;
       &amp;#034;Deterministic&amp;#034;, result = chordChunkAnalyze /@ chunkedSequence,&#xD;
       &amp;#034;Heuristic&amp;#034;, &#xD;
       result = heuristicChordAnalyze /@ justPitch /@ chunkedSequence&#xD;
       ];&#xD;
      &#xD;
      result = resolveDiminished7th[Split[result][[All, 1]]]&#xD;
      ]&#xD;
&#xD;
Finally, each chunk is sent to the chord analyzer function `heuristicChordAnalyze`, which I&amp;#039;ll talk about in the next section, along with the currently mysterious `resolveDiminished7th`. &#xD;
&#xD;
Since this algorithm for &amp;#034;chunking&amp;#034; a note sequence doesn&amp;#039;t work for everything, I also developed an alternative, more naïve approach:&#xD;
&#xD;
    chordSequenceNaïve[midiData_List /; Length@midiData == 2, &#xD;
      analyzer_String: &amp;#034;Heuristic&amp;#034;, n1_Integer: 6, n2_Integer: 1] := &#xD;
     Module[{noteSequence, chunkedSequence, result},&#xD;
      &#xD;
      (*Separate notes from metadata*)&#xD;
      noteSequence = midiData[[1]];&#xD;
      &#xD;
      (*Chunk sequence*)&#xD;
      chunkedSequence = Partition[noteSequence, n1, n2];&#xD;
      &#xD;
      (*Call analyzer*)&#xD;
      result = heuristicChordAnalyze /@ justPitch /@ chunkedSequence;&#xD;
      &#xD;
      result = resolveDiminished7th[Split[result][[All, 1]]]&#xD;
      ]&#xD;
&#xD;
&amp;lt;br&amp;gt;&#xD;
&#xD;
##Phase 3: identifying the chord from a group of notes&#xD;
&#xD;
This has been the main conceptual challenge of the whole project. After some unsuccessful ideas, and with some suggestions from Rob Morris (one of the mentors), whom I thank, I ended up developing the following algorithm. It iterates through each note and assigns it a score that represents the likelihood of that note being the root of the chord, based on the presence of certain indicators (i.e. notes whose presence defines a chord, to some degree), each of which has a different weight: having a fifth, having a third, a minor seventh... Then the note with the highest score is assumed to be the root of the chord.&#xD;
&#xD;
In code:&#xD;
&#xD;
	heuristicChordAnalyze[notes_List] := &#xD;
	 Block[{chordNotes, scores, root},&#xD;
	  &#xD;
	  (*Calls to helper functions*)&#xD;
	  chordNotes = octaveReduce /@ convertToSemitones /@ notes // DeleteDuplicates;&#xD;
	  &#xD;
	  (*Scoring*)&#xD;
	  scores = Table[Total@&#xD;
      Pick[&#xD;
       (*Score points*)&#xD;
       {24, 16, 16, 8, 2, 3, 1, 1,&#xD;
        10, 15, 15, 18},&#xD;
       &#xD;
       (*Conditions*)&#xD;
       SubsetQ[chordNotes, #] &amp;amp; /@octaveReduce /@&#xD;
          {{nt + 7}, {nt + 4}, {nt + 3}, {nt + 10}, {nt + 11}, {nt + 2}, {nt + 5}, {nt + 9},&#xD;
          {nt + 4, nt + 10}, {nt + 3, nt + 6, nt + 10}, {nt + 3, nt + 6, nt + 9}, {nt + 1, nt + 4, nt + 10}}&#xD;
       ]&#xD;
     &#xD;
         (*Subtract outliers*)&#xD;
         - 18*Length@Complement[chordNotes, octaveReduce /@ {nt, 7 + nt, 4 + nt, 3 + nt, 10 + nt, 11 + nt, &#xD;
                                                             2 + nt, 5 + nt, 9 + nt, 6 + nt}],&#xD;
    	&#xD;
       {nt, chordNotes}];&#xD;
&#xD;
	  (*Return*)&#xD;
	  root = Part[chordNotes, Position[scores, Max @@ scores][[1, 1]]];&#xD;
	  &#xD;
	  {root, Which[&#xD;
        SubsetQ[chordNotes, octaveReduce /@ {root + 10 , root + 2, root + 5, root + 9}], &amp;#034;13&amp;#034;,&#xD;
        SubsetQ[chordNotes, octaveReduce /@ {root + 10, root + 2, root + 5}], &amp;#034;11&amp;#034;,&#xD;
        SubsetQ[chordNotes, octaveReduce /@ {root + 4, root + 10, root + 2}], &amp;#034;Dom9&amp;#034;,&#xD;
        SubsetQ[chordNotes, octaveReduce /@ {root + 4, root + 10, root + 1}], &amp;#034;Dom9m&amp;#034;,&#xD;
        SubsetQ[chordNotes, octaveReduce /@ {root + 11, root + 7, root + 3}], &amp;#034;m7M&amp;#034;,&#xD;
        SubsetQ[chordNotes, {octaveReduce[root + 11], octaveReduce[root + 4]}],  &amp;#034;7M&amp;#034;,&#xD;
        SubsetQ[chordNotes, {octaveReduce[root + 10], octaveReduce[root + 4]}],  &amp;#034;Dom7&amp;#034;,&#xD;
        SubsetQ[chordNotes, {octaveReduce[root + 10], octaveReduce[root + 7]}], &amp;#034;Dom7&amp;#034;,&#xD;
        SubsetQ[chordNotes, {octaveReduce[root + 10], octaveReduce[root + 6]}],  &amp;#034;d7&amp;#034;,&#xD;
        SubsetQ[ chordNotes, {octaveReduce[root + 9], octaveReduce[root + 6]}],  &amp;#034;d7d&amp;#034;,&#xD;
        SubsetQ[chordNotes, {octaveReduce[root + 10], octaveReduce[root + 3]}],  &amp;#034;m7&amp;#034;,&#xD;
        MemberQ[chordNotes, octaveReduce[root + 4]], &amp;#034;M&amp;#034;,&#xD;
        MemberQ[chordNotes, octaveReduce[root + 3]], &amp;#034;m&amp;#034;,&#xD;
        MemberQ[chordNotes, octaveReduce[root + 7]], &amp;#034;5&amp;#034;,&#xD;
        True, &amp;#034;undef&amp;#034;]}&#xD;
    ]&#xD;
&#xD;
&#xD;
###A note on notation&#xD;
In this project I use the following abbreviations for chord notation (they&amp;#039;re not in the standard format). &amp;#034;X&amp;#034; represents the root of the chord.&#xD;
&#xD;
 - *X-**5*** = undefined triad chord (just the root and the fifth)&#xD;
 - *X-**M*** = Major&#xD;
 - *X-**m*** = minor&#xD;
 - *X-**m7*** = minor triad with minor (a.k.a dominant) seventh&#xD;
 - *X-**d7d*** = fully diminished 7th chord&#xD;
 - *X-**d7*** = half diminished 7th chord&#xD;
 - *X-**Dom7*** = Dominant 7th chord&#xD;
 - *X-**7M*** = Major triad with Major 7th&#xD;
 - *X-**m7M*** = minor triad with Major 7th&#xD;
 - *X-**Dom9*** = Dominant 9th chord&#xD;
 - *X-**Dom9m*** = Dominant 7th chord with a minor 9th&#xD;
 - *X-**11*** = 11th chord&#xD;
 - *X-**13*** = 13th chord&#xD;
&#xD;
&#xD;
&#xD;
&#xD;
###Dealing with diminished 7th chords&#xD;
Now, on to `resolveDiminished7th`. What is this function on about?&#xD;
&#xD;
Well, recall the fully diminished seventh chords I mentioned in the Background section. Here&amp;#039;s the problem: they&amp;#039;re completely symmetrical! What I mean by that is that the intervals between subsequent notes are identical, even if you [invert][14] the chord. In other words, the distance in semitones between notes is constant (it&amp;#039;s 3) and is a factor of 12 (distance of 12 semitones = octave). So, given one of these chords, there is no way to determine which note is the root just by analyzing the chord itself. In the context of our algorithm, every note would have the same score!&#xD;
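&#xD;
A quick way to see that symmetry (my illustration; the project&amp;#039;s `octaveReduce` reduces pitches mod 12): the pitch-class set of a fully diminished 7th chord is invariant under transposition by any of its own intervals.&#xD;
&#xD;
    dim7 = {0, 3, 6, 9}; (*e.g. C, E-flat, G-flat, B-double-flat, in semitones above C*)&#xD;
    Table[Sort[Mod[dim7 + k, 12]] == dim7, {k, {3, 6, 9}}]&#xD;
    (* {True, True, True} *)&#xD;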
&#xD;
At this point I thought: &amp;#034;How do humans deal with this?&amp;#034;. And I concluded that the only way to resolve this issue is to have some contextual vision (looking at the next chord, particularly), which is how humans do it. So what `resolveDiminished7th` does is it brushes through the chord sequence stored in `result`, looking for fully diminished chords (marked with the string &amp;#034;d7d&amp;#034;), and re-assigns each of those a root by looking at the next chord:&#xD;
&#xD;
    resolveDiminished7th[chordSequence_List] := &#xD;
    Module[{result}, &#xD;
      result = Partition[chordSequence, 2, 1] /. {{nt_, &amp;#034;d7d&amp;#034;}, c2_List} :&amp;gt; Which[&#xD;
      MemberQ[octaveReduce /@ {nt, nt + 3, nt + 6, nt + 9}, octaveReduce[c2[[1]] - 1]], {{c2[[1]] - 1, &amp;#034;d7d&amp;#034;}, c2}, &#xD;
      MemberQ[octaveReduce /@ {nt, nt + 3, nt + 6, nt + 9}, octaveReduce[c2[[1]] + 4]], {{c2[[1]] + 4, &amp;#034;d7d&amp;#034;}, c2}, &#xD;
      MemberQ[octaveReduce /@ {nt, nt + 3, nt + 6, nt + 9}, octaveReduce[c2[[1]] + 6]], {{c2[[1]] + 6, &amp;#034;d7d&amp;#034;}, c2}, &#xD;
      True, {{nt, &amp;#034;d7d&amp;#034;}, c2}];&#xD;
      &#xD;
    result = Append[result[[All, 1]], Last[result][[2]]]&#xD;
    ]&#xD;
&#xD;
&#xD;
##Phase 4: Visualization&#xD;
&#xD;
Basically, my visualization function (`visualizeChords`) is a highly customized call of the `Graph` function; so I&amp;#039;ll just paste the code below and then explain what some parameters do:&#xD;
&#xD;
&#xD;
    visualizeChords[chordSequence_List, layoutSpec_String: &amp;#034;Unspecified&amp;#034;, version_String: &amp;#034;Full&amp;#034;, mVSize_: &amp;#034;Auto&amp;#034;, simplicitySpec_Integer: 0, normalizationSpec_String: &amp;#034;Softmax&amp;#034;] :=&#xD;
     Module[{purgedChordSequence, chordList, transitionRules, weights, graphicalWeights, nOfCases, edgeStyle, vertexLabels, vertexSize, vertexStyle, vertexShapeFunction, clip},&#xD;
      &#xD;
      &#xD;
      (*Preprocess*)&#xD;
      Switch[version, &#xD;
       &amp;#034;Full&amp;#034;, &#xD;
       purgedChordSequence = &#xD;
        StringJoin[toNoteName[#1], &amp;#034;-&amp;#034;, #2] &amp;amp; @@@ chordSequence,&#xD;
       &amp;#034;Basic&amp;#034;, &#xD;
       purgedChordSequence = &#xD;
        Split[toNoteName /@ chordSequence[[All, 1]]][[All, 1]]];&#xD;
      &#xD;
      &#xD;
      (*Amount of each chord*)&#xD;
      chordList = DeleteDuplicates[purgedChordSequence];&#xD;
      nOfCases = Table[{c, Count[purgedChordSequence, c]}, {c, chordList}];&#xD;
      &#xD;
      (*Transition rules between chords*)&#xD;
      Switch[version,&#xD;
       &amp;#034;Full&amp;#034;, &#xD;
       transitionRules = &#xD;
        Gather[Rule @@@ Partition[purgedChordSequence, 2, 1]],&#xD;
       &amp;#034;Basic&amp;#034;, &#xD;
       transitionRules =(*DeleteCases[*)&#xD;
        Gather[Rule @@@ Partition[purgedChordSequence, 2, 1]](*, t_/;&#xD;
       Length@t\[LessEqual]2]*) ];&#xD;
      &#xD;
      &#xD;
      (*Get processed weight for each transition*)&#xD;
      weights = Length /@ transitionRules;&#xD;
      graphicalWeights = If[normalizationSpec == &amp;#034;Softmax&amp;#034;, SoftmaxLayer[][weights], N[weights]]; (*fall back to raw counts when Softmax is disabled*)&#xD;
      graphicalWeights = &#xD;
       If[Min@graphicalWeights != Max@graphicalWeights, &#xD;
        Rescale[graphicalWeights, &#xD;
         MinMax@graphicalWeights, {0.003, 0.04}], &#xD;
        graphicalWeights /. _?NumericQ :&amp;gt; 0.03 ];&#xD;
      &#xD;
      (*Final transition list*)&#xD;
      transitionRules = transitionRules[[All, 1]];&#xD;
      &#xD;
      (*Graph display specs*)&#xD;
      clip = RankedMax[weights, 4];&#xD;
      &#xD;
      edgeStyle = &#xD;
       Table[(transitionRules[[i]]) -&amp;gt; &#xD;
         Directive[Thickness[graphicalWeights[[i]]], &#xD;
          Arrowheads[2.5 graphicalWeights[[i]] + 0.015], &#xD;
          Opacity[Which[&#xD;
            weights[[i]] &amp;lt;= Clip[simplicitySpec - 2, {0, clip - 2}], 0, &#xD;
            weights[[i]] &amp;lt;= Clip[simplicitySpec, {0, clip}], 0.2, &#xD;
            True, 0.6]], &#xD;
            RandomColor[Hue[_, 0.75, 0.7]], &#xD;
          Sequence @@ If[weights[[i]] &amp;lt;= Clip[simplicitySpec - 1, {0, clip - 1}], { &#xD;
             Dotted}, {}] ], {i, Length@transitionRules}];&#xD;
      &#xD;
      vertexLabels = &#xD;
       Thread[nOfCases[[All, &#xD;
          1]] -&amp;gt; (Placed[#, &#xD;
             Center] &amp;amp; /@ (Style[#[[1]], Bold, &#xD;
               Rescale[#[[2]], MinMax[nOfCases[[All, 2]]], &#xD;
                Switch[mVSize, &amp;#034;Auto&amp;#034;, {6, 20}, _List, &#xD;
                 10*mVSize[[1]]/0.3*{1, mVSize[[2]]/mVSize[[1]]}]]] &amp;amp; /@ &#xD;
             nOfCases))];&#xD;
      &#xD;
      vertexSize = &#xD;
       Thread[nOfCases[[All, 1]] -&amp;gt; &#xD;
         Rescale[nOfCases[[All, 2]], MinMax[nOfCases[[All, 2]]], &#xD;
          Switch[mVSize, &#xD;
           &amp;#034;Auto&amp;#034;, (Floor[Length@chordList/10] + 1)*{0.1, 0.3}, _List, &#xD;
           mVSize]]];&#xD;
      &#xD;
      vertexStyle = &#xD;
       Thread[nOfCases[[All, 1]] -&amp;gt; &#xD;
         Directive[Hue[0.53, 0.27, 1, 0.6], EdgeForm[Blue]]];&#xD;
      &#xD;
      vertexShapeFunction = &#xD;
       Switch[version, &amp;#034;Full&amp;#034;, Ellipsoid[#1, {3.5, 1} #3] &amp;amp;, &amp;#034;Basic&amp;#034;, &#xD;
        Ellipsoid[#1, {2, 1} #3] &amp;amp;];&#xD;
      &#xD;
      &#xD;
      &#xD;
      &#xD;
      Graph[transitionRules, &#xD;
       &#xD;
       GraphLayout -&amp;gt; &#xD;
        Switch[layoutSpec, &amp;#034;Unspecified&amp;#034;, Automatic, _, layoutSpec],&#xD;
       &#xD;
       EdgeStyle -&amp;gt; edgeStyle,&#xD;
       EdgeWeight -&amp;gt; weights,&#xD;
       VertexLabels -&amp;gt; vertexLabels,&#xD;
       VertexSize -&amp;gt; vertexSize,&#xD;
       VertexStyle -&amp;gt; vertexStyle,&#xD;
       VertexShapeFunction -&amp;gt; vertexShapeFunction,&#xD;
       PerformanceGoal -&amp;gt; &amp;#034;Quality&amp;#034;]&#xD;
      ]&#xD;
&#xD;
There are five main things to focus on in the above definition: the graph layout (passed as the argument `layoutSpec`), the edge thickness (defined in `edgeStyle`), the vertex size (defined in `vertexSize`), the version (passed as argument `version`) and the simplicity specification (`simplicitySpec`).&#xD;
&#xD;
The graph layout is a `Graph` option that can be specified in the argument `layoutSpec`. If `&amp;#034;Unspecified&amp;#034;` is passed, an automatic layout will be used. I find that the best layouts tend to be, in order of preference, &amp;#034;BalloonEmbedding&amp;#034; and &amp;#034;RadialEmbedding&amp;#034;; nevertheless, neither is a perfect fit for every piece. In the future I would like to implement custom (i.e. pre-defined) vertex positioning, so that I can design it in a way that best fits this project.&#xD;
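&#xD;
A quick way to compare the two layouts (on a toy edge list; the chord names here are just placeholders, not output of this tool):&#xD;
&#xD;
    toyEdges = {&amp;#034;C&amp;#034; -&amp;gt; &amp;#034;F&amp;#034;, &amp;#034;F&amp;#034; -&amp;gt; &amp;#034;G&amp;#034;, &amp;#034;G&amp;#034; -&amp;gt; &amp;#034;C&amp;#034;, &amp;#034;C&amp;#034; -&amp;gt; &amp;#034;Am&amp;#034;};&#xD;
    Graph[toyEdges, GraphLayout -&amp;gt; &amp;#034;BalloonEmbedding&amp;#034;]&#xD;
    Graph[toyEdges, GraphLayout -&amp;gt; &amp;#034;RadialEmbedding&amp;#034;]&#xD;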
&#xD;
The edge thickness is a function of the number of times a given transition between two chords occurs in the chord sequence. The `normalizationSpec` argument enables or disables a Softmax normalization of the edge thicknesses. For simple/short chord sequences, Softmax is counterproductive: it assigns a very high thickness to the most frequent transition and a low thickness to all the others, even those that come in second or third in the frequency ranking. For large or complex sequences, however, it is useful, because it &amp;#034;gets rid of&amp;#034; a lot of the \[relatively\] insignificant transitions, thus making the output actually understandable (and not just a [jumbled mess of thick lines][15]).&#xD;
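&#xD;
Here is that effect on some hypothetical transition counts (not taken from any piece):&#xD;
&#xD;
    weights = N@{12, 7, 5, 2, 1};&#xD;
    SoftmaxLayer[][weights]&#xD;
    (* {0.992, 0.0067, 0.0009, 0.00005, 0.00002}: only the top edge survives *)&#xD;
    Rescale[weights, MinMax[weights], {0.003, 0.04}]&#xD;
    (* {0.04, 0.0232, 0.0165, 0.0064, 0.003}: the full ranking stays visible *)&#xD;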
&#xD;
The vertex size is proportional to the number of occurrences of each particular chord (that is, without taking the transitions into account). It can also be specified manually by passing `mVSize` as a list `{a,b}` such that `a` is the minimum size and `b` is the maximum.&#xD;
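&#xD;
For instance, with some hypothetical chord counts and a manual size range of `{0.1, 0.3}`:&#xD;
&#xD;
    counts = {{&amp;#034;C&amp;#034;, 14}, {&amp;#034;G&amp;#034;, 9}, {&amp;#034;Am&amp;#034;, 3}};&#xD;
    Thread[counts[[All, 1]] -&amp;gt; &#xD;
      Rescale[counts[[All, 2]], MinMax[counts[[All, 2]]], {0.1, 0.3}]]&#xD;
    (* {&amp;#034;C&amp;#034; -&amp;gt; 0.3, &amp;#034;G&amp;#034; -&amp;gt; 0.209, &amp;#034;Am&amp;#034; -&amp;gt; 0.1} *)&#xD;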
&#xD;
The `version` can be either `&amp;#034;Full&amp;#034;` or `&amp;#034;Basic&amp;#034;`; the default is `&amp;#034;Full&amp;#034;`. The `&amp;#034;Basic&amp;#034;` version consists of a simplified chord set in which only the root note of the chord is taken into account, and not the archetype. For example, all C chords (M, Dom7, m...) would be represented by a single `&amp;#034;C&amp;#034;` vertex.&#xD;
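&#xD;
As a toy illustration of the &amp;#034;Basic&amp;#034; reduction (plain strings in place of the analyzer&amp;#039;s output), dropping the chord types and collapsing immediate repetitions gives:&#xD;
&#xD;
    seq = {{&amp;#034;C&amp;#034;, &amp;#034;M&amp;#034;}, {&amp;#034;C&amp;#034;, &amp;#034;Dom7&amp;#034;}, {&amp;#034;F&amp;#034;, &amp;#034;M&amp;#034;}, {&amp;#034;G&amp;#034;, &amp;#034;Dom7&amp;#034;}, {&amp;#034;C&amp;#034;, &amp;#034;M&amp;#034;}};&#xD;
    Split[seq[[All, 1]]][[All, 1]]&#xD;
    (* {&amp;#034;C&amp;#034;, &amp;#034;F&amp;#034;, &amp;#034;G&amp;#034;, &amp;#034;C&amp;#034;} *)&#xD;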
&#xD;
Finally, the simplicity specification (`simplicitySpec`) is a number that can be thought of, in some way, as a &amp;#034;noise&amp;#034; threshold: as it gets larger, fewer edges &amp;#034;stand out&amp;#034;; that is, more of the lower-significance ones are rendered with reduced opacity or are shown as dotted lines. This is useful for large or complex sequences.&#xD;
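&#xD;
For example, applying the opacity rule from `edgeStyle` to some hypothetical weights with `simplicitySpec = 2`:&#xD;
&#xD;
    w = {9, 6, 4, 2, 1}; spec = 2; clip = RankedMax[w, 4]; (* clip = 2 *)&#xD;
    Which[# &amp;lt;= Clip[spec - 2, {0, clip - 2}], 0, &#xD;
       # &amp;lt;= Clip[spec, {0, clip}], 0.2, True, 0.6] &amp;amp; /@ w&#xD;
    (* {0.6, 0.6, 0.6, 0.2, 0.2}: the two rarest transitions fade out *)&#xD;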
&#xD;
&amp;lt;br&amp;gt;&#xD;
&#xD;
&#xD;
----------&#xD;
&#xD;
&#xD;
#Some examples&#xD;
&#xD;
Here I will show some specific examples generated with this tool. I tried to use different styles of music for comparison.&#xD;
&#xD;
 - **Bach**&amp;#039;s [prelude no.1][16] from the Well Tempered Clavier:&#xD;
&#xD;
![Visualization of Bach&amp;#039;s prelude no.1 ][17]&#xD;
 &#xD;
 - **Debussy**&amp;#039;s [*Passepied*][18] from the *Suite Bergamasque*:&#xD;
&#xD;
![Visualization of Debussy&amp;#039;s *Passepied*][19]&#xD;
&#xD;
 - A &amp;#034;template&amp;#034; blues progression:&#xD;
&#xD;
![Blues template][20]&#xD;
&#xD;
 - **Beethoven**&amp;#039;s second movement from the *Pathétique* sonata (no.8):&#xD;
&#xD;
![Beethoven][21]&#xD;
&#xD;
 - Any &amp;#034;reggaeton&amp;#034; song (e.g. Despacito):&#xD;
&#xD;
![Reggaeton][22]&#xD;
&#xD;
#Microsite&#xD;
&#xD;
Check out the form page (a.k.a. microsite) of this project [here][23]:&#xD;
&#xD;
https://www.wolframcloud.com/objects/lammenspaolo/Chord%20sequence%20visualization&#xD;
&#xD;
[![enter image description here][24]][23]&#xD;
&#xD;
Briefly, here is what each option does (see the section **Algorithms and code** for a more detailed explanation):&#xD;
&#xD;
 - **Chunkifier function**: choose between splitting notes by measures or by a constant amount of notes&#xD;
 - **Measure split factor**: choose into how many pieces you want to divide measures (each piece will be analyzed as a separate chord)&#xD;
 - **Graph layout**: choose the layout option for the `Graph` call&#xD;
 - **Normalization function**: choose whether to apply a Softmax function to the weights of edges (to make results clearer in case of complex sequences).&#xD;
 - **Version**: choose &amp;#034;Full&amp;#034; for complete chord info (e.g. &amp;#034;C-M&amp;#034;, &amp;#034;D-Dom7&amp;#034;, &amp;#034;C-7M&amp;#034;...) or &amp;#034;Basic&amp;#034; for just the root of the chord (e.g. &amp;#034;C&amp;#034;, &amp;#034;D&amp;#034;...)&#xD;
 - **Vertex size**: specify vertex size as a list `{a,b}` where `a` is the minimum and `b` is the maximum size&#xD;
 - **Simplicity parameter**: visual simplification of the graph (a value of 0 means no simplification is applied)&#xD;
&#xD;
&amp;lt;br&amp;gt;&#xD;
&#xD;
#Conclusions&#xD;
I have developed a functional tool to visualize chord sequences as graphs. It is far from perfect, though. In the future, I would like to improve the positioning of vertices, to be able to eliminate insignificant transitions from the graph altogether, and to make other visual adjustments. Furthermore, I plan to refine and optimize the chord analyzer, as right now it is just an experimental version that isn&amp;#039;t too accurate. A better &amp;#034;chunkifier&amp;#034; function could be developed too.&#xD;
&#xD;
Finally, I&amp;#039;d like to thank my mentor Christian Pasquel and all of the other WSC staff for this amazing opportunity. I&amp;#039;d also like to thank my music theory teacher, Raimon Romaní, for making me, over the years, sufficiently less terrible at musical analysis to be able to undertake this project.&#xD;
&#xD;
&#xD;
  [1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Prelude.png&amp;amp;userId=1372342&#xD;
  [2]: https://en.wikipedia.org/wiki/Markov_chain &amp;#034;Wikipedia: Markov chain&amp;#034;&#xD;
  [3]: https://en.wikipedia.org/wiki/Interval_(music) &amp;#034;Wikipedia: Interval&amp;#034;&#xD;
  [4]: https://en.wikipedia.org/wiki/Atonality &amp;#034;Wikipedia: Atonality&amp;#034;&#xD;
  [5]: https://youtu.be/L85XTLr5eBE &amp;#034;Schönberg&amp;#039;s 4th string quartet on YouTube&amp;#034;&#xD;
  [6]: https://en.wikipedia.org/wiki/Tonic_%28music%29 &amp;#034;Wikipedia: Tonic&amp;#034;&#xD;
  [7]: http://astronomy.swin.edu.au/cosmos/S/Schwarzschild+Radius &amp;#034;Basic info on Schwarzschild radius&amp;#034;&#xD;
  [8]: https://en.wikipedia.org/wiki/Dominant_(music) &amp;#034;Dominant chord&amp;#034;&#xD;
  [9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2548Macro_analysis_chords_on_C.jpg&amp;amp;userId=1372342&#xD;
  [10]: https://en.wikipedia.org/wiki/Seventh_chord &amp;#034;Wikipedia: Seventh chord&amp;#034;&#xD;
  [11]: https://en.wikipedia.org/wiki/Interval_(music) &amp;#034;Wikipedia: Interval&amp;#034;&#xD;
  [12]: https://en.wikipedia.org/wiki/Dominant_seventh_chord &amp;#034;Wikipedia: Dominant seventh&amp;#034;&#xD;
  [13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=images.png&amp;amp;userId=1372342&#xD;
  [14]: https://en.wikipedia.org/wiki/Inversion_(music)#Chords &amp;#034;Wikipedia: Inversion#Chords&amp;#034;&#xD;
  [15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Passepied.png&amp;amp;userId=1372342 &amp;#034;Jumbled mess!&amp;#034;&#xD;
  [16]: https://www.youtube.com/watch?v=aengbLEFnM8&#xD;
  [17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Prelude.png&amp;amp;userId=1372342&#xD;
  [18]: https://www.youtube.com/watch?v=hDWbVP-5DSA &amp;#034;Passepied&amp;#034;&#xD;
  [19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=deb_pass2.png&amp;amp;userId=1372342&#xD;
  [20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Blues.png&amp;amp;userId=1372342&#xD;
  [21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=pathetique.png&amp;amp;userId=1372342&#xD;
  [22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Reggaeton.png&amp;amp;userId=1372342&#xD;
  [23]: https://www.wolframcloud.com/objects/lammenspaolo/Chord%20sequence%20visualization &amp;#034;Microsite&amp;#034;&#xD;
  [24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-19at1.53.02PM.png&amp;amp;userId=11733</description>
    <dc:creator>Paolo Lammens</dc:creator>
    <dc:date>2018-07-14T05:10:03Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/3299985">
    <title>[BOOK] Signals, systems, and signal processing: a computational approach</title>
    <link>https://community.wolfram.com/groups/-/m/t/3299985</link>
    <description>![Signals, systems, and signal processing: a computational approach][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Main18102024.png&amp;amp;userId=20103&#xD;
  [2]: https://www.wolframcloud.com/obj/9fef1db9-ca5d-4080-9db5-fefdbb4d3085</description>
    <dc:creator>Mariusz Jankowski</dc:creator>
    <dc:date>2024-10-17T21:55:31Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/788811">
    <title>Universal stereo plotter</title>
    <link>https://community.wolfram.com/groups/-/m/t/788811</link>
    <description>Hi,&#xD;
&#xD;
This is an idea that has challenged me since I started using MMA a little over two years ago.  The idea was to append `//stereo` to a 3D plot and instantly get a stereogram that can be manipulated just like a normal 3D plot, turning it around and viewing it from different angles.  This is my first use of `Dynamic`, so it might be a little clunky.  Help in speeding it up would be appreciated.&#xD;
&#xD;
    stereo[expr_] := DynamicModule[&#xD;
      {vp = {1.3, -2.4, 2.0}, vv = {0., 0., 2.0}, plot}, &#xD;
      plot = expr; &#xD;
      (*two views of the same plot from viewpoints offset by 0.4 in x; the &#xD;
      shared Dynamic variables keep both images in sync when one is rotated*)&#xD;
      GraphicsRow[&#xD;
        {Show[plot, ViewPoint -&amp;gt; Dynamic[vp + {0.4, 0, 0}], ViewVertical -&amp;gt; Dynamic[vv]], &#xD;
         Show[plot, ViewPoint -&amp;gt; Dynamic[vp], ViewVertical -&amp;gt; Dynamic[vv]]}, &#xD;
        ImageSize -&amp;gt; Large&#xD;
       ]&#xD;
    ]&#xD;
&#xD;
For instance,&#xD;
&#xD;
    Plot3D[Sin[x] Cos[y] Cos[x y], {x, 0, 2 Pi}, {y, 0, 2 Pi}, &#xD;
      PlotRange -&amp;gt; All, ColorFunction -&amp;gt; &amp;#034;BlueGreenYellow&amp;#034;] // stereo&#xD;
&#xD;
![enter image description here][1]&#xD;
&#xD;
Viewing the stereogram takes a little practice.  I learned it from a radiologist who used it to make 3D X-rays.  He would position the patient, take an X-ray, move the X-ray source over a couple of inches, and take another X-ray.  Then he put both X-rays side by side on the light panel, stepped back, and crossed his eyes until the two images superimposed.  Voila! A 3D view of the interior of the patient&amp;#039;s body.&#xD;
&#xD;
When learning, it&amp;#039;s helpful to hold a finger between your eyes and the image.  Focus on the finger and move it back and forth until the images coalesce in the background. Now keep the images together and re-focus your eyes on them.  That can be tricky as it&amp;#039;s quite unnatural.&#xD;
&#xD;
Once the trick is learned, stereo images are as easy to see as flat ones.&#xD;
&#xD;
The right-hand image can be moved around using the mouse and the left will follow.&#xD;
&#xD;
The function needs improvement, though.  Sometimes the images get out of sync.  Just do a shift-enter to restart.&#xD;
&#xD;
Eric&#xD;
&#xD;
EDIT: I&amp;#039;ve changed the offset in the viewpoint from {0,0.4,0} to {0.4,0,0}.  Strange how even the wrong offset results in a stereogram!&#xD;
&#xD;
&#xD;
  [1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9384stereofunction.jpg&amp;amp;userId=455211</description>
    <dc:creator>Eric Johnstone</dc:creator>
    <dc:date>2016-02-07T19:42:29Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2887842">
    <title>Exploring civil structure modeling with System Modeler</title>
    <link>https://community.wolfram.com/groups/-/m/t/2887842</link>
    <description>![enter image description here][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=CivilStructure2.gif&amp;amp;userId=20103&#xD;
  [2]: https://www.wolframcloud.com/obj/3b9a5696-068c-452b-bba3-2b7d47fece71</description>
    <dc:creator>Vedat Senol</dc:creator>
    <dc:date>2023-04-06T08:24:12Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1219764">
    <title>Creating Star Wars Kylo Ren voice in one line of code</title>
    <link>https://community.wolfram.com/groups/-/m/t/1219764</link>
    <description>*MODERATOR NOTE: for true Star Wars fans this post has a related Wolfram Cloud App, which you can access by clicking on the image below. Read the full post below to understand how the app works. May the Force be with you.*&#xD;
&#xD;
[![enter image description here][1]][9]&#xD;
&#xD;
----------&#xD;
&#xD;
&#xD;
Kylo Ren&amp;#039;s voice in [Star Wars: The Force Awakens][2] is very cool. When I watched the movie in 2015, one of the first things that came to my mind was how to create that voice in Mathematica. At the time, Mathematica&amp;#039;s capabilities for manipulating sound were very simplistic, and it was not possible to do such a thing.&#xD;
&#xD;
I&amp;#039;m far, far away from being a sound expert, but I tried something like this:&#xD;
&#xD;
    file = Import[&amp;#034;https://s3-sa-east-1.amazonaws.com/rmurta/murta-audio.wav&amp;#034;];&#xD;
    audioOrg = Audio[file];&#xD;
    audioKylo = AudioPitchShift[audioOrg, 0.9];&#xD;
    audioKylo = AudioFrequencyShift[audioKylo,-200]//AudioAmplify[#, 4]&amp;amp;&#xD;
&#xD;
- Original Murta&amp;#039;s record: [Murta original][3]  &#xD;
- And here is the result: [Murta as Kylo][4]&#xD;
&#xD;
[![Kylo Voice][5]][6]&#xD;
&#xD;
[youtube link for Kylo][7]&#xD;
&#xD;
See [original post in Stack Exchange][8], where I ask for improvement in the sound hack. &#xD;
&#xD;
After that, why not create a cloud app for that?!&#xD;
&#xD;
This is what&amp;#039;s amazing about Mathematica: with one line of code you can create an app, and everybody can play with it!&#xD;
&#xD;
    CloudDeploy[FormFunction[{&amp;#034;sound&amp;#034;-&amp;gt;&amp;#034;Sound&amp;#034;},&#xD;
    AudioAmplify[AudioFrequencyShift[AudioPitchShift[#sound,0.9],-200],4]&amp;amp;],&#xD;
    &amp;#034;kylo-voice-by-murta&amp;#034;,Permissions-&amp;gt;&amp;#034;Public&amp;#034;]&#xD;
&#xD;
Here is the link, so you can try it: [kylo-voice-by-murta][9].  &#xD;
&#xD;
Record your sound and speak like Kylo! &#xD;
&#xD;
&#xD;
  [1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-11-14at1.40.15PM2.png&amp;amp;userId=11733&#xD;
  [2]: https://en.wikipedia.org/wiki/Star_Wars:_The_Force_Awakens&#xD;
  [3]: https://s3-sa-east-1.amazonaws.com/rmurta/murta-audio.wav&#xD;
  [4]: https://s3-sa-east-1.amazonaws.com/rmurta/murta-kylo-audio.wav&#xD;
  [5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=kylo-voice2.png&amp;amp;userId=11733&#xD;
  [6]: https://www.youtube.com/watch?v=zy-wqB4cbT8&amp;amp;feature=youtu.be&amp;amp;t=49s&#xD;
  [7]: https://www.youtube.com/watch?v=zy-wqB4cbT8&amp;amp;feature=youtu.be&amp;amp;t=49s&#xD;
  [8]: https://mathematica.stackexchange.com/questions/159744&#xD;
  [9]: https://www.wolframcloud.com/objects/murta/kylo-voice-by-murta</description>
    <dc:creator>Rodrigo Murta</dc:creator>
    <dc:date>2017-11-11T13:06:38Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/203335">
    <title>Signal or (quasi periodic) trajectory frequency in Mathematica</title>
    <link>https://community.wolfram.com/groups/-/m/t/203335</link>
    <description>Hi,

Continuing the topic started [here](http://community.wolfram.com/groups/-/m/t/202094?p_p_auth=9HFetOtL), I&amp;#039;ve decided to share my frequency search routine.
Note that it does not yet include the Newton-style refinement described below, and the routine is not optimized.
The idea behind adding it is the following: once a first approximation of the frequency is found, instead of performing the convolution over the whole frequency interval, it is performed for the &amp;#034;left&amp;#034; and &amp;#034;right&amp;#034; halves separately. The better half is then chosen and the division is repeated until some pre-defined accuracy for the frequency is reached (typically ~10^-9...10^-10).
This module can be used only for equally spaced data (in time or space).
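
For anyone who wants a starting point, here is a minimal sketch of that interval-halving idea (my own illustration under the assumptions above, not the attached routine): a coarse estimate from the discrete Fourier spectrum is refined by repeatedly halving the frequency bracket and keeping the half whose midpoint gives the larger spectral amplitude.

    (*sketch only: assumes equally spaced samples with spacing dt and a 
    single dominant frequency; bracket of one Fourier bin on each side*)
    refineFrequency[data_List, dt_, tol_ : 10.^-9] := 
     Module[{n = Length[data], amp, k, fLo, fHi, fMid},
      (*magnitude of the correlation with a complex exponential at frequency f*)
      amp[f_] := Abs@Total[data Exp[-2 Pi I f dt Range[0, n - 1]]];
      (*coarse estimate: largest Fourier magnitude, DC excluded*)
      k = First@Ordering[Abs[Fourier[data]][[2 ;; Floor[n/2]]], -1];
      fLo = (k - 1)/(n dt); fHi = (k + 1)/(n dt);
      (*halve the bracket, keeping the side with the larger amplitude*)
      While[fHi - fLo &amp;gt; tol, fMid = (fLo + fHi)/2.;
       If[amp[(fLo + fMid)/2.] &amp;gt; amp[(fMid + fHi)/2.], fHi = fMid, fLo = fMid]];
      (fLo + fHi)/2.]

For example, `refineFrequency[Table[Sin[2. Pi 3.21 t], {t, 0., 10., 0.01}], 0.01]` returns approximately 3.21.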

If someone is interested in improving this code I&amp;#039;ll be happy to assist. Unfortunately, I have no time to finish it by myself.

I.M.</description>
    <dc:creator>Ivan Morozov</dc:creator>
    <dc:date>2014-02-20T04:07:34Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2489380">
    <title>Fast Fourier transform visualizations</title>
    <link>https://community.wolfram.com/groups/-/m/t/2489380</link>
    <description>![enter image description here][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=FFT-image.jpg&amp;amp;userId=23275&#xD;
  [2]: https://www.wolframcloud.com/obj/d37e248b-a21a-4e24-bd53-397ce7e9e897</description>
    <dc:creator>Todd Rowland</dc:creator>
    <dc:date>2022-03-14T00:03:17Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2200588">
    <title>Automatic generation of FFT signal-flow graphs</title>
    <link>https://community.wolfram.com/groups/-/m/t/2200588</link>
    <description>![FFT][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [Original]: https://www.wolframcloud.com/obj/74266892-afc7-48bc-bc81-45e47085a371&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=fft_8.gif&amp;amp;userId=89693&#xD;
  [2]: https://www.wolframcloud.com/obj/7a9e9f3a-7fee-4ba8-ad52-914bdcc1d822</description>
    <dc:creator>Christophe Favergeon</dc:creator>
    <dc:date>2021-02-21T14:12:07Z</dc:date>
  </item>
</rdf:RDF>

