<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://community.wolfram.com">
    <title>Community RSS Feed</title>
    <link>https://community.wolfram.com</link>
    <description>RSS Feed for Wolfram Community showing any discussions tagged with Earth Science sorted by most likes.</description>
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/463721" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1072478" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/463610" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1671437" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/551187" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/859842" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1731676" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2828058" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/840910" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/784207" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1333342" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1345014" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/984239" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/834097" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/293403" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2740397" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/325930" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/471232" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2489121" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1732295" />
      </rdf:Seq>
    </items>
  </channel>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/463721">
    <title>Aftermath of the solar eclipse</title>
    <link>https://community.wolfram.com/groups/-/m/t/463721</link>
    <description>![enter image description here][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=4902Hero.gif&amp;amp;userId=20103&#xD;
  [2]: https://www.wolframcloud.com/obj/5838919a-e64d-49fe-9193-c6ce123184f2</description>
    <dc:creator>Marco Thiel</dc:creator>
    <dc:date>2015-03-21T01:18:08Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1072478">
    <title>Flight data and trajectories of aeroplanes</title>
    <link>https://community.wolfram.com/groups/-/m/t/1072478</link>
    <description>Large amounts of data become evermore available - often these datasets are very valuable and difficult to access. In this post I will show how to use air traffic data to generate visualisations like this one.&#xD;
&#xD;
![enter image description here][1]&#xD;
&#xD;
On the website [http://www.flightradar24.com][2] one can find live flight information for most civil air traffic. A great amount of information on all current flights is made available, such as position and altitude, call sign, type of plane, origin and destination, and much more. There are different [subscription plans][3] with different features; the largest business plan even allows you to use the data commercially or for public display.&#xD;
&#xD;
Alternatively, you can contribute data yourself: https://www.flightradar24.com/add-coverage. This can be done with a tiny RTL-SDR receiver, such as [this one][4]. The setup is quite straightforward. I usually work on Macs, on which I had trouble making this work; on a Windows machine (which I got for this very purpose) the setup is quick. The software for sharing your data can be downloaded from [flightradar24&amp;#039;s website][5]. You will also need a driver for the SDR stick, which you can download [from here][6]. Make sure that you have good visibility of the sky and that the computer and internet connection are stable. I had to use a Windows 8/Windows 10 machine, and it was quite annoying that the machine routinely reboots for software updates; if you want uninterrupted monitoring you will have to deactivate this feature. I have no experience with Windows, so this was more complicated than I expected. I found a nice set of [instructions here][7].&#xD;
&#xD;
Once all of this is done and you donate your data, you will automatically be upgraded to the Business plan and will have access to a very rich dataset. You will, for example, be allowed to download up to 1000 csv files per month with detailed tracking information for flights in the database. There is much more data available, and using the Wolfram Language to analyse it seems quite natural.&#xD;
&#xD;
First explorations&#xD;
------------------&#xD;
&#xD;
I download data for a flight from Frankfurt to Aberdeen in csv format. Each row contains a time stamp, date and time of the entry, the callsign, the position (as a string), altitude, speed and direction. I can import the data and then plot it:&#xD;
&#xD;
    flightdata = Import[&amp;#034;/Users/thiel/Desktop/Flight_LH971_(cb294d1).csv&amp;#034;];&#xD;
    GeoGraphics[{Red, Thick, GeoPath@(ToExpression[#] &amp;amp; /@ Flatten[StringSplit[#, &amp;#034;,&amp;#034;] &amp;amp; /@ flightdata[[2 ;;, {4}]], 1])}]&#xD;
&#xD;
![enter image description here][8]&#xD;
&#xD;
I needed to use StringSplit because the GPS coordinates come as a single string. Next, I can plot the path in 3D; first I clean the data to get it into the right format:&#xD;
&#xD;
    flightphs = {Join[ToExpression[StringSplit[#[[1]], &amp;#034;,&amp;#034;]], {ToExpression[#[[2]]]}],ToExpression[#[[3]]]} &amp;amp; /@ flightdata[[2 ;;, {4, 5, 6}]];&#xD;
    Graphics3D[{Red, Thick, Line[{#[[1]], #[[2]], #[[3]]/30000.} &amp;amp; /@ flightphs[[All, 1]]]}]&#xD;
&#xD;
![enter image description here][9]&#xD;
&#xD;
The 30000 that I use to divide the altitude is just a scaling factor. (Note that the altitude value comes in feet.) We can now join the flight path to one of Mathematica&amp;#039;s maps:&#xD;
&#xD;
    Show[Graphics3D[{Red, Thick, Line[{#[[1]], #[[2]], -#[[3]]/30000.} &amp;amp; /@ flightphs[[All, 1]]]}, &#xD;
    Lighting -&amp;gt; {{&amp;#034;Ambient&amp;#034;, White}}], &#xD;
    Graphics3D[{Texture[Image[GeoGraphics[GeoPath[GeoPosition[flightphs[[All, 1]]]]]]], &#xD;
    Polygon[(PadRight[#, 3] &amp;amp; /@ Tuples[GeoBounds[GeoPath@GeoPosition[flightphs[[All, 1]]], Scaled[0.05]]])[[{1, 2, 4, 3}]], &#xD;
    VertexTextureCoordinates -&amp;gt; {{0, 0}, {1, 0}, {1, 1}, {0, 1}}]}], ImageSize -&amp;gt; Full]&#xD;
&#xD;
![enter image description here][10]&#xD;
&#xD;
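As a quick aside, we can sanity check the scaling: 30000 ft corresponds roughly to a typical cruising altitude (this conversion is just an illustration, not part of the data processing):&#xD;
&#xD;
    UnitConvert[Quantity[30000., &amp;#034;Feet&amp;#034;], &amp;#034;Kilometers&amp;#034;]&#xD;
    (*9.144 km*)&#xD;
&#xD;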
It also looks nice on a satellite image background:&#xD;
&#xD;
    Show[Graphics3D[{Red, Thickness[0.005], Line[{#[[1]], #[[2]], -#[[3]]/30000.} &amp;amp; /@ flightphs[[All, 1]]]}, &#xD;
    Lighting -&amp;gt; {{&amp;#034;Ambient&amp;#034;, White}}], Graphics3D[{Texture[Image[GeoGraphics[{Opacity[0], &#xD;
    GeoPath[GeoPosition[flightphs[[All, 1]]]]}, GeoBackground -&amp;gt; &amp;#034;Satellite&amp;#034;]]], &#xD;
    Polygon[(PadRight[#, 3] &amp;amp; /@ Tuples[GeoBounds[GeoPath@GeoPosition[flightphs[[All, 1]]], &#xD;
    Scaled[0.05]]])[[{1, 2, 4, 3}]], VertexTextureCoordinates -&amp;gt; {{0, 0}, {1, 0}, {1, 1}, {0, 1}}]}], ImageSize -&amp;gt; Full]&#xD;
&#xD;
![enter image description here][11]&#xD;
&#xD;
Multiple flights&#xD;
----------------&#xD;
&#xD;
Next, I download 10 trajectories of the flight LH971 from Frankfurt to Aberdeen. &#xD;
&#xD;
    FileNames[&amp;#034;*&amp;#034;, &amp;#034;/Users/thiel/Desktop/Aberdeen LH971/&amp;#034;]&#xD;
&#xD;
![enter image description here][12]&#xD;
&#xD;
I then import all of the trajectories:&#xD;
&#xD;
    flightdataall = Import /@ FileNames[&amp;#034;*&amp;#034;, &amp;#034;/Users/thiel/Desktop/Aberdeen LH971/&amp;#034;];&#xD;
&#xD;
This corresponds to 282 entries - each consisting of time stamp, date and time of the entry, the callsign, the position (as a string), altitude, speed and direction. I can clean them all up&#xD;
&#xD;
    flightphsall = ({Join[ToExpression[StringSplit[#[[1]], &amp;#034;,&amp;#034;]], {ToExpression[#[[2]]]}], ToExpression[#[[3]]]} &amp;amp; /@ #[[2 ;;, {4, 5, 6}]] &amp;amp;) /@ flightdataall;&#xD;
&#xD;
and then plot them together:&#xD;
&#xD;
    Show[Graphics3D[{Red, Thickness[0.004], &#xD;
    Line[{#[[1]], #[[2]], -#[[3]]/30000.} &amp;amp; /@ #[[All, 1]] &amp;amp; /@ flightphsall]}, Lighting -&amp;gt; {{&amp;#034;Ambient&amp;#034;, White}}, &#xD;
    ViewPoint -&amp;gt; {-10.`, 5.`, -5.`}, ViewVertical -&amp;gt; {0.8924410944866072`, -0.23945064940819427`, -7.766116131708949`}], &#xD;
    Graphics3D[{Texture[Image[GeoGraphics[{Opacity[0], GeoPath[GeoPosition[flightphs[[All, 1]]]]}, &#xD;
    GeoBackground -&amp;gt; &amp;#034;Satellite&amp;#034;, GeoRange -&amp;gt; {{Min[#[[All, 1, 1]]], Max[#[[All, 1, 2]]]}, {Min[#[[All, 2, 1]]], Max[#[[All, 2, 2]]]}} &amp;amp; @(GeoBounds[GeoPath@GeoPosition[#[[All, 1]]], Scaled[0.05]] &amp;amp; /@ flightphsall)]]], &#xD;
    Polygon[(PadRight[#, 3] &amp;amp; /@ Tuples[{{Min[#[[All, 1, 1]]], Max[#[[All, 1, 2]]]}, {Min[#[[All, 2, 1]]], Max[#[[All, 2, 2]]]}} &amp;amp; @(GeoBounds[&#xD;
    GeoPath@GeoPosition[#[[All, 1]]], Scaled[0.05]] &amp;amp; /@ flightphsall)])[[{1, 2, 4, 3}]], VertexTextureCoordinates -&amp;gt; {{0, 0}, {1, 0}, {1, 1}, {0, 1}}]}], &#xD;
     ImageSize -&amp;gt; Full]&#xD;
&#xD;
![enter image description here][13]&#xD;
&#xD;
Note that I use the ViewPoint and ViewVertical options; without them, the orientation of the resulting 3D graphic is not optimal. So I first plotted the image without these options and rotated it until I was happy with the orientation. Then I used the function&#xD;
&#xD;
    extractViewPos[img_] := Flatten[Union[Extract[img, Position[img, #]] &amp;amp; /@ {ViewPoint -&amp;gt; _, ViewCenter -&amp;gt; _, ViewVertical -&amp;gt; _, ViewAngle -&amp;gt; _,ViewVector -&amp;gt; _, ViewRange -&amp;gt; _}]];&#xD;
&#xD;
Just copy the image into the square brackets and execute:&#xD;
&#xD;
    extractViewPos[-Graphic goes here-]&#xD;
&#xD;
and get&#xD;
&#xD;
    {ViewPoint -&amp;gt; {-10., 5., -5.}, ViewVertical -&amp;gt; {0.892441, -0.239451, -7.76612}}&#xD;
&#xD;
This is not my function, but I found it online and have been using it ever since. &#xD;
&#xD;
Animating a flight&#xD;
------------------&#xD;
&#xD;
Now we can mark the position of the aeroplane by a sphere and animate the flight:&#xD;
&#xD;
    backround3D2 = &#xD;
      Show[Graphics3D[{Red, Thick, Line[{#[[1]], #[[2]], -#[[3]]/30000.} &amp;amp; /@ flightphs[[All, 1]]]},Lighting -&amp;gt; {{&amp;#034;Ambient&amp;#034;, White}}, &#xD;
      ViewPoint -&amp;gt; {-10.`, 5.`, -5.`}, ViewVertical -&amp;gt; {0.8924410944866072`, -0.23945064940819427`, -7.766116131708949`}], &#xD;
      Graphics3D[{Texture[Image[GeoGraphics[{Opacity[0], GeoPath[GeoPosition[flightphs[[All, 1]]]]}, GeoBackground -&amp;gt; &amp;#034;Satellite&amp;#034;]]], &#xD;
      Polygon[(PadRight[#, 3] &amp;amp; /@ Tuples[GeoBounds[GeoPath@GeoPosition[flightphs[[All, 1]]], Scaled[0.05]]])[[{1, 2, 4, 3}]], &#xD;
      VertexTextureCoordinates -&amp;gt; {{0, 0}, {1, 0}, {1, 1}, {0, 1}}]}],ImageSize -&amp;gt; Full];&#xD;
    &#xD;
    Manipulate[&#xD;
     Show[Graphics3D[{Red, Sphere[{#[[1]], #[[2]], -#[[3]]/30000.} &amp;amp; @flightphs[[k, 1]], 0.1]}, ViewPoint -&amp;gt; {-10.`, 5.`, -5.`}, &#xD;
       ViewVertical -&amp;gt; {0.8924410944866072`, -0.23945064940819427`, -7.766116131708949`}, ImageSize -&amp;gt; Full], backround3D2], {k, 1, Length[flightphs],5}]&#xD;
&#xD;
&#xD;
![enter image description here][14]&#xD;
&#xD;
The multiple flights displayed above are the same route, but flown by different aeroplanes. Later we will follow individual aeroplanes, but first we will look at the peculiar take-off and landing patterns. &#xD;
&#xD;
Take-off and landing patterns&#xD;
-----------------------------&#xD;
&#xD;
When we look at the take-offs and touch-downs we observe that there are two main directions at both the departure and the destination airport. What decides which direction the planes take?&#xD;
&#xD;
    Graphics3D[{Red, Thickness[0.004], &#xD;
    Line[{#[[1]], #[[2]], -#[[3]]/30000.} &amp;amp; /@ #[[All, 1]] &amp;amp; /@ flightphsall]}, Lighting -&amp;gt; {{&amp;#034;Ambient&amp;#034;, White}}, &#xD;
    ViewPoint -&amp;gt; {-10.`, 5.`, -5.`}, ViewVertical -&amp;gt; {0.8924410944866072`, -0.23945064940819427`, -7.766116131708949`}]&#xD;
&#xD;
![enter image description here][15]&#xD;
&#xD;
We first need to determine the wind direction in Frankfurt and Aberdeen on days with either of the two take-off/landing directions. First for Frankfurt.&#xD;
&#xD;
    frankfurttimeheading = Table[Select[flightdataall[[k]], #[[5]] &amp;gt; 4000 &amp;amp;][[1, {1, -1}]], {k, 1, 10}]&#xD;
&#xD;
which gives&#xD;
&#xD;
    {{1488564623, 69}, {1488650932, 250}, {1488737193, 250}, {1488823517, 249}, {1488909842, 250}, {1488996665, 249}, {1489080927, 250}, {1489168646, 250}, {1489254863, 69}, {1489341257, 69}}&#xD;
&#xD;
We see that there are two clusters, one at around 69 degrees and one at around 250 degrees:&#xD;
&#xD;
    fdates69 = FromUnixTime /@ Select[frankfurttimeheading, #[[2]] &amp;lt; 100 &amp;amp;][[All, 1]];&#xD;
    fdates250 = FromUnixTime /@ Select[frankfurttimeheading, #[[2]] &amp;gt; 100 &amp;amp;][[All, 1]];&#xD;
&#xD;
The Wolfram Language knows the wind directions for those days:&#xD;
&#xD;
    WindDirectionData[Entity[&amp;#034;Airport&amp;#034;, &amp;#034;EDDF&amp;#034;], fdates69]&#xD;
    (*{Quantity[60., &amp;#034;AngularDegrees&amp;#034;], Quantity[70., &amp;#034;AngularDegrees&amp;#034;], Quantity[70., &amp;#034;AngularDegrees&amp;#034;]}*)&#xD;
&#xD;
and &#xD;
&#xD;
    WindDirectionData[Entity[&amp;#034;Airport&amp;#034;, &amp;#034;EDDF&amp;#034;], fdates250]&#xD;
    (*{Quantity[250., &amp;#034;AngularDegrees&amp;#034;], Quantity[170., &amp;#034;AngularDegrees&amp;#034;], Quantity[-60., &amp;#034;AngularDegrees&amp;#034;], Quantity[210., &amp;#034;AngularDegrees&amp;#034;], Quantity[200., &amp;#034;AngularDegrees&amp;#034;], Quantity[-70., &amp;#034;AngularDegrees&amp;#034;], Quantity[70., &amp;#034;AngularDegrees&amp;#034;]}*)&#xD;
&#xD;
It knows the wind vector data; here we plot it for the two situations. &#xD;
&#xD;
    Graphics[Arrow[{{0, 0}, #}] &amp;amp; /@ QuantityMagnitude[WindVectorData[Entity[&amp;#034;Airport&amp;#034;, &amp;#034;EDDF&amp;#034;], fdates250]]]&#xD;
&#xD;
![enter image description here][16]&#xD;
&#xD;
    Graphics[Arrow[{{0, 0}, #}] &amp;amp; /@ QuantityMagnitude[WindVectorData[Entity[&amp;#034;Airport&amp;#034;, &amp;#034;EDDF&amp;#034;], fdates69]]]&#xD;
&#xD;
![enter image description here][17]&#xD;
&#xD;
If we now average the data&#xD;
&#xD;
    Show[Graphics[{Red, Thick, Arrow[{{0, 0}, Mean[QuantityMagnitude[WindVectorData[Entity[&amp;#034;Airport&amp;#034;, &amp;#034;EDDF&amp;#034;], fdates250]]]}]}], &#xD;
    Graphics[{Green, Thick, Arrow[{{0, 0}, Mean[QuantityMagnitude[WindVectorData[Entity[&amp;#034;Airport&amp;#034;, &amp;#034;EDDF&amp;#034;], fdates69]]]}]}]]&#xD;
&#xD;
![enter image description here][18]&#xD;
&#xD;
We see that the wind direction seems to correlate with the take-off direction, at least for relatively strong winds. This is in agreement with [general advice on how to choose the runway][19]. We can now do the same for Aberdeen. &#xD;
&#xD;
    aberdeentimeheading = Table[Select[Reverse[flightdataall[[k]]], #[[5]] &amp;gt; 4000 &amp;amp;][[1, {1, -1}]], {k, 1, 10}];&#xD;
&#xD;
We split the dates into two groups: smaller and larger than 200 degrees.&#xD;
&#xD;
    adates230 = FromUnixTime /@ Select[aberdeentimeheading, #[[2]] &amp;gt; 200 &amp;amp;][[All, 1]];&#xD;
    adates169 = FromUnixTime /@ Select[aberdeentimeheading, #[[2]] &amp;lt; 200 &amp;amp;][[All, 1]];&#xD;
&#xD;
These are the respective vectors:&#xD;
&#xD;
    Graphics[Arrow[{{0, 0}, #}] &amp;amp; /@ QuantityMagnitude[WindVectorData[Entity[&amp;#034;City&amp;#034;, {&amp;#034;Dundee&amp;#034;, &amp;#034;DundeeCity&amp;#034;, &amp;#034;UnitedKingdom&amp;#034;}], adates169]]]&#xD;
&#xD;
![enter image description here][20]&#xD;
&#xD;
    Graphics[Arrow[{{0, 0}, #}] &amp;amp; /@ QuantityMagnitude[WindVectorData[Entity[&amp;#034;City&amp;#034;, {&amp;#034;Dundee&amp;#034;, &amp;#034;DundeeCity&amp;#034;, &amp;#034;UnitedKingdom&amp;#034;}], adates230]]]&#xD;
&#xD;
![enter image description here][21]&#xD;
&#xD;
Now we can plot this for the different averaged directions. &#xD;
&#xD;
    Show[Graphics[{Red, Thick, Arrow[{{0, 0}, -Mean[QuantityMagnitude[WindVectorData[Entity[&amp;#034;City&amp;#034;, {&amp;#034;Dundee&amp;#034;, &amp;#034;DundeeCity&amp;#034;, &amp;#034;UnitedKingdom&amp;#034;}],adates169]]]}]}], &#xD;
     Graphics[{Green, Thick, Arrow[{{0, 0}, -Mean[QuantityMagnitude[WindVectorData[Entity[&amp;#034;City&amp;#034;, {&amp;#034;Dundee&amp;#034;, &amp;#034;DundeeCity&amp;#034;, &amp;#034;UnitedKingdom&amp;#034;}],adates230]]]}]}]]&#xD;
&#xD;
![enter image description here][22]&#xD;
&#xD;
The pattern here is not so clear. This could be because of low wind speeds.&#xD;
&#xD;
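One way to probe the low-wind-speed hypothesis is to query the wind speeds directly (a sketch; adates169 and adates230 are the date lists from above, passed as a list just like for WindDirectionData):&#xD;
&#xD;
    WindSpeedData[Entity[&amp;#034;City&amp;#034;, {&amp;#034;Dundee&amp;#034;, &amp;#034;DundeeCity&amp;#034;, &amp;#034;UnitedKingdom&amp;#034;}], adates169]&#xD;
    WindSpeedData[Entity[&amp;#034;City&amp;#034;, {&amp;#034;Dundee&amp;#034;, &amp;#034;DundeeCity&amp;#034;, &amp;#034;UnitedKingdom&amp;#034;}], adates230]&#xD;
&#xD;
If the resulting speeds are small, that would support the explanation above.&#xD;
&#xD;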
Following individual aeroplanes (short haul)&#xD;
--------------------------------------------&#xD;
&#xD;
We can also follow an individual aeroplane (registration D-EACB) for, say, a month or so. Download the data and check that the files are there:&#xD;
&#xD;
    FileNames[&amp;#034;*&amp;#034;, &amp;#034;/Users/thiel/Desktop/D-EACB/&amp;#034;]&#xD;
&#xD;
![enter image description here][23]&#xD;
&#xD;
Import the data and plot everything:&#xD;
&#xD;
    flightdataDEACB = Import /@ FileNames[&amp;#034;*&amp;#034;, &amp;#034;/Users/thiel/Desktop/D-EACB/&amp;#034;];&#xD;
    GeoGraphics[{Red, Thick, GeoPath@(ToExpression[#] &amp;amp; /@ Flatten[StringSplit[#, &amp;#034;,&amp;#034;] &amp;amp; /@ #, 1]) &amp;amp; /@ &#xD;
    flightdataDEACB[[All, 2 ;;]][[All, All, {4}]]}]&#xD;
&#xD;
![enter image description here][24]&#xD;
&#xD;
Of course, the same thing in 3D looks somewhat more impressive:&#xD;
&#xD;
    flightphsDEACB = ({Join[ToExpression[StringSplit[#[[1]], &amp;#034;,&amp;#034;]], {ToExpression[#[[2]]]}], ToExpression[#[[3]]]} &amp;amp; /@ #[[2 ;;, {4, 5, 6}]] &amp;amp;) /@ flightdataDEACB;&#xD;
    (*join the rows of all flights; the GeoGraphics background below uses this*)&#xD;
    flightdataDEACBjoin = Flatten[flightdataDEACB[[All, 2 ;;]], 1];&#xD;
    Show[Graphics3D[{Red, Thickness[0.004], &#xD;
       Line[{#[[1]], #[[2]], -#[[3]]/30000.} &amp;amp; /@ #[[All, 1]] &amp;amp; /@ &#xD;
         flightphsDEACB]}, Lighting -&amp;gt; {{&amp;#034;Ambient&amp;#034;, White}}, &#xD;
      ViewPoint -&amp;gt; {-10.`, 5.`, -5.`}, &#xD;
      ViewVertical -&amp;gt; {0.8924410944866072`, -0.23945064940819427`, -7.766116131708949`}], &#xD;
     Graphics3D[{Texture[&#xD;
        Image[GeoGraphics[{Opacity[0], &#xD;
           GeoPath@(ToExpression[#] &amp;amp; /@ &#xD;
              Flatten[StringSplit[#, &amp;#034;,&amp;#034;] &amp;amp; /@ &#xD;
                flightdataDEACBjoin[[All, {4}]], 1])}, &#xD;
          GeoBackground -&amp;gt; &amp;#034;Satellite&amp;#034;, &#xD;
          GeoRange -&amp;gt; {{Min[#[[All, 1, 1]]], &#xD;
               Max[#[[All, 1, 2]]]}, {Min[#[[All, 2, 1]]], &#xD;
               Max[#[[All, 2, 2]]]}} &amp;amp; @(GeoBounds[&#xD;
               GeoPath@GeoPosition[#[[All, 1]]], Scaled[0.05]] &amp;amp; /@ &#xD;
             flightphsDEACB), GeoProjection -&amp;gt; &amp;#034;Equirectangular&amp;#034;]]], &#xD;
       Polygon[(PadRight[#, 3] &amp;amp; /@ &#xD;
           Tuples[{{Min[#[[All, 1, 1]]], &#xD;
                Max[#[[All, 1, 2]]]}, {Min[#[[All, 2, 1]]], Max[#[[All, 2, 2]]]}} &amp;amp; @(GeoBounds[&#xD;
                 GeoPath@GeoPosition[#[[All, 1]]], Scaled[0.05]] &amp;amp; /@ &#xD;
               flightphsDEACB)])[[{1, 2, 4, 3}]], &#xD;
        VertexTextureCoordinates -&amp;gt; {{0, 0}, {1, 0}, {1, 1}, {0, 1}}]}], &#xD;
     ImageSize -&amp;gt; Full]&#xD;
&#xD;
![enter image description here][25]&#xD;
&#xD;
Following individual aeroplanes (long haul)&#xD;
-------------------------------------------&#xD;
&#xD;
All these flights are relatively short haul. Next I choose an Airbus A380 (registration G-XLED) and look at its movements over a month or so.&#xD;
&#xD;
    flightdataGXLED = Import /@ FileNames[&amp;#034;*&amp;#034;, &amp;#034;/Users/thiel/Desktop/G-XLED/&amp;#034;];&#xD;
    GeoGraphics[{Red, Thick, GeoPath@(ToExpression[#] &amp;amp; /@ Flatten[StringSplit[#, &amp;#034;,&amp;#034;] &amp;amp; /@ #, 1]) &amp;amp; /@ &#xD;
     flightdataGXLED[[All, 2 ;;]][[All, All, {4}]]}, ImageSize -&amp;gt; Full]&#xD;
&#xD;
![enter image description here][26]&#xD;
&#xD;
Again a 3D representation makes the flight paths come out much better. First we prepare the data:&#xD;
&#xD;
    flightdataGXLEDjoin = Flatten[Select[flightdataGXLED[[All, 2 ;;]], Length[#] &amp;gt; 0 &amp;amp;], 1];&#xD;
    flightphsGXLED = Select[({Join[ToExpression[StringSplit[#[[1]], &amp;#034;,&amp;#034;]], {ToExpression[#[[2]]]}], ToExpression[#[[3]]]} &amp;amp; /@ #[[2 ;;, {4, 5, 6}]] &amp;amp;) /@ &#xD;
    flightdataGXLED, Length[#] &amp;gt; 0 &amp;amp;];&#xD;
&#xD;
Then we plot:&#xD;
&#xD;
    Show[Graphics3D[{Red, Thickness[0.002], Line[{#[[1]], #[[2]], -#[[3]]/3000.} &amp;amp; /@ #[[All, 1]] &amp;amp; /@ flightphsGXLED]}, Lighting -&amp;gt; {{&amp;#034;Ambient&amp;#034;, White}}, &#xD;
    ViewPoint -&amp;gt; {-10.67632349817987`, 3.7276615038423772`, -4.664042818603161`}, ViewVertical -&amp;gt; {0.11539873519013898`, -0.022518333742098193`, -0.9930639740530294`}, ImagePadding -&amp;gt; None], &#xD;
     Graphics3D[{Texture[Image[GeoGraphics[{Opacity[0], GeoPath@(ToExpression[#] &amp;amp; /@ &#xD;
     Flatten[StringSplit[#, &amp;#034;,&amp;#034;] &amp;amp; /@ flightdataGXLEDjoin[[All, {4}]], 1])}, GeoBackground -&amp;gt; &amp;#034;Satellite&amp;#034;, GeoRange -&amp;gt; {{Min[#[[All, 1, 1]]], &#xD;
     Max[#[[All, 1, 2]]]}, {Min[#[[All, 2, 1]]], Max[#[[All, 2, 2]]]}} &amp;amp; @(GeoBounds[GeoPath@GeoPosition[#[[All, 1]]], Scaled[0.05]] &amp;amp; /@ &#xD;
     flightphsGXLED), GeoProjection -&amp;gt; &amp;#034;Equirectangular&amp;#034;, ImagePadding -&amp;gt; None]]], Polygon[(PadRight[#, 3] &amp;amp; /@ &#xD;
     Tuples[{{Min[#[[All, 1, 1]]], Max[#[[All, 1, 2]]]}, {Min[#[[All, 2, 1]]], Max[#[[All, 2, 2]]]}} &amp;amp; @(GeoBounds[GeoPath@GeoPosition[#[[All, 1]]], Scaled[0.05]] &amp;amp; /@ flightphsGXLED)])[[{1, 2, 4, 3}]], VertexTextureCoordinates -&amp;gt; {{0, 0}, {1, 0}, {1, 1}, {0, 1}}]}], ImageSize -&amp;gt; Full, ImagePadding -&amp;gt; None]&#xD;
&#xD;
![enter image description here][27]&#xD;
&#xD;
Representing the flight paths on a sphere&#xD;
-----------------------------------------&#xD;
&#xD;
Note that I had to use the Equirectangular projection. Of course, particularly when looking at these long distances it would be more appropriate to represent the Earth as a sphere. We need to convert the coordinates and rescale the altitudes.&#xD;
&#xD;
    toCoordinates[coords_] := FromSphericalCoordinates[{#[[1]], Pi/2 - #[[2]], Mod[Pi + #[[3]], 2 Pi, -Pi]}] &amp;amp; /@ (Flatten[{1., #/360*2 Pi}] &amp;amp; /@ coords)&#xD;
    lengths[inputdata_] := 2.*(inputdata/Max[inputdata])&#xD;
&#xD;
The representation of the path is somewhat related to [another representation I posted on this community][28]. &#xD;
&#xD;
    myFlightPath[data_, radius_, scale_] := &#xD;
    Show[SphericalPlot3D[radius, {u, 0, Pi}, {v, 0, 2 Pi}, Mesh -&amp;gt; None, TextureCoordinateFunction -&amp;gt; ({#5, 1 - #4} &amp;amp;), &#xD;
    PlotStyle -&amp;gt; Directive[Specularity[White, 10], Texture[Import[&amp;#034;~/Desktop/backgroundimage.gif&amp;#034;]]], &#xD;
    Lighting -&amp;gt; &amp;#034;Neutral&amp;#034;, Axes -&amp;gt; False, RotationAction -&amp;gt; &amp;#034;Clip&amp;#034;, Boxed -&amp;gt; False, PlotPoints -&amp;gt; 100, &#xD;
    PlotRange -&amp;gt; {{-3, 3}, {-3, 3}, {-3, 3}}, ImageSize -&amp;gt; Full], Graphics3D[Flatten[{Green, Thickness[0.004], &#xD;
    Line[(radius + #[[2]]*scale)*#[[1]] &amp;amp; /@ Transpose[{toCoordinates[#[[All, 1]]], lengths[#[[All, 2]]]} &amp;amp;@({#[[1]], 1. #[[2]]} &amp;amp; /@ data)]]}]]]&#xD;
&#xD;
Let&amp;#039;s use that to plot one trajectory.&#xD;
&#xD;
    flightpath = {ToExpression[StringSplit[#[[4]], &amp;#034;,&amp;#034;]], #[[6]]} &amp;amp; /@ flightdataGXLED[[2, 2 ;;]];&#xD;
    myFlightPath[flightpath, 2, 1/3.]&#xD;
&#xD;
![enter image description here][29]&#xD;
&#xD;
If we want to plot multiple flights we should modify the function slightly.&#xD;
&#xD;
    myFlightPathMulti[data_, radius_, scale_] := &#xD;
     Show[SphericalPlot3D[radius, {u, 0, Pi}, {v, 0, 2 Pi}, Mesh -&amp;gt; None, TextureCoordinateFunction -&amp;gt; ({#5, 1 - #4} &amp;amp;), &#xD;
     PlotStyle -&amp;gt; Directive[Specularity[White, 10], Texture[Import[&amp;#034;~/Desktop/backgroundimage.gif&amp;#034;]]], &#xD;
     Lighting -&amp;gt; &amp;#034;Neutral&amp;#034;, Axes -&amp;gt; False, RotationAction -&amp;gt; &amp;#034;Clip&amp;#034;, Boxed -&amp;gt; False, PlotPoints -&amp;gt; 100, &#xD;
     PlotRange -&amp;gt; {{-3, 3}, {-3, 3}, {-3, 3}}, ImageSize -&amp;gt; Full], &#xD;
    Graphics3D[Flatten[{RandomColor[], Thickness[0.004], Line[(radius + #[[2]]*scale)*#[[1]] &amp;amp; /@ &#xD;
    Transpose[{toCoordinates[#[[All, 1]]], lengths[#[[All, 2]]]} &amp;amp;@({#[[1]], 1. #[[2]]} &amp;amp; /@ #)]]}] &amp;amp; /@ data]]&#xD;
&#xD;
Let&amp;#039;s prepare the data.&#xD;
&#xD;
    flightpathsmult = &#xD;
      Join[{{#[[1, 1]], 0.}}, #, {{#[[-1, 1]], 0.}}] &amp;amp; /@ Select[({ToExpression[StringSplit[#[[4]], &amp;#034;,&amp;#034;]], #[[6]]} &amp;amp; /@ # &amp;amp; /@ flightdataGXLED[[All, 2 ;;]]), Length[#] &amp;gt; 1 &amp;amp;];&#xD;
&#xD;
The trajectories seem to come out fine:&#xD;
&#xD;
    Graphics3D[&#xD;
     Flatten[{RandomColor[], Thickness[0.01], Line[(2. + #[[2]]*0.3)*#[[1]] &amp;amp; /@ &#xD;
     Transpose[{toCoordinates[#[[All, 1]]], lengths[#[[All, 2]]]} &amp;amp;@({#[[1]], 1. #[[2]]} &amp;amp; /@ #)]]}] &amp;amp; /@ flightpathsmult[[1 ;; 10]]]&#xD;
&#xD;
![enter image description here][30]&#xD;
&#xD;
Note that there are some &amp;#034;straight lines&amp;#034; in the trajectories. They correspond to a lack of data points over unpopulated areas. Plotting all trajectories on the globe looks like this:&#xD;
&#xD;
    myFlightPathMulti[flightpathsmult, 2., 1/3.]&#xD;
&#xD;
![enter image description here][31]&#xD;
&#xD;
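The &amp;#034;straight lines&amp;#034; mentioned above can also be located programmatically by searching for large jumps between consecutive time stamps (a sketch; the 600-second threshold is an arbitrary choice, and the breaks between separate flights are flagged as well):&#xD;
&#xD;
    timestamps = flightdataGXLEDjoin[[All, 1]];&#xD;
    Position[Differences[timestamps], _?(# &amp;gt; 600 &amp;amp;)]&#xD;
&#xD;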
Let&amp;#039;s choose a black background:&#xD;
&#xD;
    myFlightPathMultiBlack[data_, radius_, scale_] := &#xD;
     Show[SphericalPlot3D[radius, {u, 0, Pi}, {v, 0, 2 Pi}, Mesh -&amp;gt; None, TextureCoordinateFunction -&amp;gt; ({#5, 1 - #4} &amp;amp;), &#xD;
     PlotStyle -&amp;gt; Directive[Specularity[White, 10], Texture[Import[&amp;#034;~/Desktop/backgroundimage.gif&amp;#034;]]], &#xD;
    Lighting -&amp;gt; &amp;#034;Neutral&amp;#034;, Axes -&amp;gt; False, RotationAction -&amp;gt; &amp;#034;Clip&amp;#034;, Boxed -&amp;gt; False, PlotPoints -&amp;gt; 100, &#xD;
    PlotRange -&amp;gt; {{-3, 3}, {-3, 3}, {-3, 3}}, ImageSize -&amp;gt; Full, Background -&amp;gt; Black], &#xD;
    Graphics3D[Flatten[{RandomColor[], Thickness[0.004], Line[(radius + #[[2]]*scale)*#[[1]] &amp;amp; /@ &#xD;
    Transpose[{toCoordinates[#[[All, 1]]], lengths[#[[All, 2]]]} &amp;amp;@({#[[1]], 1. #[[2]]} &amp;amp; /@ #)]]}] &amp;amp; /@ data, Background -&amp;gt; Black]]&#xD;
&#xD;
and plot:&#xD;
&#xD;
    myFlightPathMultiBlack[flightpathsmult, 2., 1/3.]&#xD;
&#xD;
![enter image description here][32]&#xD;
&#xD;
This gives the animation at the beginning of this post.&#xD;
&#xD;
Further Information that we can extract from the data&#xD;
-----------------------------------------------------&#xD;
&#xD;
We can also use the data to get some information about how the aeroplanes are used. Let us take the plane with the registration D-EACB and check what percentage of the time it is airborne. We first calculate the time window I have data for:&#xD;
&#xD;
    approxTimeWindowDEACB = &#xD;
     Differences[{Select[flightdataDEACB[[1]], #[[-3]] &amp;gt; 10 &amp;amp;][[1, 1]], Select[flightdataDEACB[[Length[flightdataDEACB]]], #[[-3]] &amp;gt; 10 &amp;amp;][[-1, 1]]}][[1]]&#xD;
&#xD;
This gives 1046462, in seconds. To compute the (approximate) time the plane is airborne we have to check for altitudes larger than some threshold.&#xD;
&#xD;
    approxTimeinAirDEACB = &#xD;
    Total[Flatten[Table[Differences@Reverse@Select[flightdataDEACB[[k]], #[[-3]] &amp;gt; 10 &amp;amp;][[All, 1]][[{1, -1}]], {k, 1, Length[flightdataDEACB]}]]]&#xD;
&#xD;
which gives 247835 seconds. Now, we can calculate the respective fraction.&#xD;
&#xD;
    N[approxTimeinAirDEACB/approxTimeWindowDEACB]&#xD;
&#xD;
which gives 0.236831. So this is approximately 24% of the time. Given that planes on short-haul flights are mostly grounded overnight and spend substantial time at airports, this seems reasonable. Let&amp;#039;s do the same for the A380.&#xD;
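&#xD;
Before that, a quick sanity check on the D-EACB numbers in more familiar units (simply converting the two values computed above):&#xD;
&#xD;
    UnitConvert[Quantity[1046462., &amp;#034;Seconds&amp;#034;], &amp;#034;Days&amp;#034;]&#xD;
    (*12.1118 days*)&#xD;
    UnitConvert[Quantity[247835., &amp;#034;Seconds&amp;#034;], &amp;#034;Hours&amp;#034;]&#xD;
    (*68.8431 hours*)&#xD;
&#xD;
So the observation window spans about 12 days, of which the plane spends roughly 69 hours in the air.&#xD;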
&#xD;
    approxTimeWindowGXLED = &#xD;
     Differences[Flatten[Select[Table[Select[Select[flightdataGXLED, Length[#] &amp;gt; 1 &amp;amp;][[k]], #[[-3]] &amp;gt; 10 &amp;amp;], {k, 1, Length[Select[flightdataGXLED, Length[#] &amp;gt; 1 &amp;amp;]]}], Length[#] &amp;gt; 1 &amp;amp;], 1][[{1, -1}, 1]]][[1]]&#xD;
&#xD;
which is 4258510, and &#xD;
&#xD;
    approxTimeinAirGXLED = &#xD;
     Total[Select[Flatten[Table[If[Length[Select[flightdataGXLED[[k]], #[[-3]] &amp;gt; 10 &amp;amp;][[All, 1]]] &amp;gt; 1, &#xD;
     Differences@Reverse@Select[flightdataGXLED[[k]], #[[-3]] &amp;gt; 10 &amp;amp;][[All, 1]][[{1, -1}]]], {k, 1, Length[flightdataGXLED]}]], NumberQ]]&#xD;
&#xD;
which is 2445537. This gives &#xD;
&#xD;
    N[approxTimeinAirGXLED/approxTimeWindowGXLED]&#xD;
&#xD;
so about 57% - hence a more efficient use of the aeroplane. We can now also determine the average speed when the plane is moving. &#xD;
&#xD;
    N@Mean[DeleteCases[Flatten[flightdataGXLED[[All, 2 ;;]], 1][[All, 6]],0]]&#xD;
    (*366.84*)&#xD;
&#xD;
This is 367 kts or &#xD;
&#xD;
    UnitConvert[366.84 Quantity[1, &amp;#034;Knots&amp;#034;], Quantity[1, ((&amp;#034;Kilometers&amp;#034;)/(&amp;#034;Hours&amp;#034;))]]&#xD;
&#xD;
680 km/h. Note that this is a rather crude estimate, because the data is not necessarily sampled uniformly, and data from over the oceans in particular might be missing. If we accept that issue we can calculate a histogram of the data:&#xD;
&#xD;
    Histogram[DeleteCases[Flatten[flightdataGXLED[[All, 2 ;;]], 1][[All, 6]], 0], Automatic, &amp;#034;PDF&amp;#034;, FrameLabel -&amp;gt; {&amp;#034;Speed in kts&amp;#034;, &amp;#034;Probability&amp;#034;}, &#xD;
    LabelStyle -&amp;gt; Directive[Bold, 16], ImageSize -&amp;gt; Large, PlotTheme -&amp;gt; &amp;#034;Marketing&amp;#034;, ColorFunction -&amp;gt; &amp;#034;TemperatureMap&amp;#034;]&#xD;
&#xD;
![enter image description here][33]&#xD;
&#xD;
This graphic suggests that the time spent at high speed is vastly underrepresented in the data. The maximum speed is &#xD;
&#xD;
    Max@DeleteCases[Flatten[flightdataGXLED[[All, 2 ;;]], 1][[All, 6]], 0]&#xD;
    (*697*)&#xD;
&#xD;
which is a ground speed of &#xD;
&#xD;
    N@UnitConvert[697 Quantity[1, &amp;#034;Knots&amp;#034;], Quantity[1, ((&amp;#034;Kilometers&amp;#034;)/(&amp;#034;Hours&amp;#034;))]]&#xD;
&#xD;
1290.84 km/h. We can compare this to the speed of sound in air:&#xD;
&#xD;
    UnitConvert[ThermodynamicData[&amp;#034;Air&amp;#034;, &amp;#034;SoundSpeed&amp;#034;, {&amp;#034;Temperature&amp;#034; -&amp;gt; Quantity[20, &amp;#034;DegreesCelsius&amp;#034;], &amp;#034;Pressure&amp;#034; -&amp;gt; Quantity[1, &amp;#034;Atmospheres&amp;#034;]}], &amp;#034;Kilometers&amp;#034;/&amp;#034;Hours&amp;#034;]&#xD;
&#xD;
which gives 1236.18 km/h. So the maximum speed is &#xD;
&#xD;
    1290.84/1236.18&#xD;
&#xD;
about 104% of the speed of sound, i.e. just above Mach 1 (at ground-level conditions), which is a bit on the high side. Note that the top speed of the A380 is given as 1020 km/h. The peak in the histogram at cruising speed is at around 490 kts, which is in fact the maximal speed of the A380 as given in Wikipedia. The higher-than-expected ground speed might be due to the jet stream, even though ground speeds of more than Mach 1 seem unlikely. There are [some reports of this happening][34] though. &#xD;
&#xD;
This is only a very brief description of what can be achieved with the fantastic data from flightradar24. I encourage everyone to join that community and contribute data. The data provided on that site and the power of the Wolfram Language will allow you to gain insight into what is going on in the skies. &#xD;
&#xD;
Cheers,&#xD;
&#xD;
Marco&#xD;
&#xD;
  [1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=rotateEarth.gif&amp;amp;userId=48754&#xD;
  [2]: http://www.flightradar24.com&#xD;
  [3]: https://www.flightradar24.com/premium/&#xD;
  [4]: https://www.amazon.co.uk/NooElec-Receiver-Compatible-Packages-Guaranteed/dp/B009U7WZCA/&#xD;
  [5]: https://www.flightradar24.com/share-your-data&#xD;
  [6]: http://zadig.akeo.ie&#xD;
  [7]: http://tunecomp.net/disable-automatic-reboot-after-updates-installation-in-windows-10/&#xD;
  [8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-25at23.11.55.png&amp;amp;userId=48754&#xD;
  [9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-25at23.13.06.png&amp;amp;userId=48754&#xD;
  [10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-25at23.22.00.png&amp;amp;userId=48754&#xD;
  [11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Onepathrotate.gif&amp;amp;userId=48754&#xD;
  [12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-25at23.23.18.png&amp;amp;userId=48754&#xD;
  [13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-25at23.26.51.png&amp;amp;userId=48754&#xD;
  [14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=AnimatedFlight.gif&amp;amp;userId=48754&#xD;
  [15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-25at23.36.33.png&amp;amp;userId=48754&#xD;
  [16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-25at23.41.18.png&amp;amp;userId=48754&#xD;
  [17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-25at23.42.06.png&amp;amp;userId=48754&#xD;
  [18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-25at23.43.12.png&amp;amp;userId=48754&#xD;
  [19]: https://www.ivao.aero/training/documentation/books/PP_ADC_select_runway.pdf&#xD;
  [20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-25at23.45.55.png&amp;amp;userId=48754&#xD;
  [21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-25at23.46.44.png&amp;amp;userId=48754&#xD;
  [22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-25at23.48.26.png&amp;amp;userId=48754&#xD;
  [23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-25at23.49.31.png&amp;amp;userId=48754&#xD;
  [24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-25at23.59.09.png&amp;amp;userId=48754&#xD;
  [25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-26at00.00.08.png&amp;amp;userId=48754&#xD;
  [26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-26at00.01.39.png&amp;amp;userId=48754&#xD;
  [27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-26at00.05.09.png&amp;amp;userId=48754&#xD;
  [28]: http://community.wolfram.com/groups/-/m/t/905016&#xD;
  [29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-26at00.08.34.png&amp;amp;userId=48754&#xD;
  [30]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-26at00.11.21.png&amp;amp;userId=48754&#xD;
  [31]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-26at00.12.03.png&amp;amp;userId=48754&#xD;
  [32]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-26at00.13.55.png&amp;amp;userId=48754&#xD;
  [33]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-04-26at00.21.15.png&amp;amp;userId=48754&#xD;
  [34]: http://www.telegraph.co.uk/news/worldnews/northamerica/usa/11337617/Jet-stream-blasts-BA-plane-across-Atlantic-in-record-time.html</description>
    <dc:creator>Marco Thiel</dc:creator>
    <dc:date>2017-04-25T23:26:29Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/463610">
    <title>From Ukraine: photo and map of the solar eclipse Mar 20, 2015</title>
    <link>https://community.wolfram.com/groups/-/m/t/463610</link>
    <description>&amp;amp;[Wolfram Notebook][1]&#xD;
&#xD;
&#xD;
  [1]: https://www.wolframcloud.com/obj/2fb30490-e074-40c2-99f3-80634810ef4d</description>
    <dc:creator>Vitaliy Kaurov</dc:creator>
    <dc:date>2015-03-20T17:02:11Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1671437">
    <title>US climate change at the county level</title>
    <link>https://community.wolfram.com/groups/-/m/t/1671437</link>
    <description>![US County century-scale climate change][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=figmean.jpg&amp;amp;userId=66744&#xD;
  [2]: https://www.wolframcloud.com/obj/440933c0-d91d-4dfc-ac2e-88dd71451744</description>
    <dc:creator>John Shonder</dc:creator>
    <dc:date>2019-04-28T15:26:25Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/551187">
    <title>Analyzing Crop Yields by Drone</title>
    <link>https://community.wolfram.com/groups/-/m/t/551187</link>
    <description>Over the summer, I had the opportunity to work with a [Phantom 2 Vision+ drone from DJI][1]. After a few exciting trial flights, I decided to investigate a real world application: fly over one of the abundant Illinois farm fields and record bird&amp;#039;s eye images of the crops below. Then using this data, create a simple function to estimate the crop yield for that area.&#xD;
&#xD;
![enter image description here][2]&#xD;
&#xD;
Recently, I drove out to a farm with a fellow Wolfram employee who also happens to be an expert drone operator. We flew the drone over a stretch of soy fields and took pictures and shot video along the way. Here is a small sample video: &#xD;
&#xD;
[Drone flying over soy field (full video):][3] https://vimeo.com/136947441&#xD;
&#xD;
[![enter image description here][4]][3]&#xD;
&#xD;
As you can see in the video, most areas of this field are in really good shape (at least when judged from this altitude). But there are some areas near a drainage ditch where the soy is not growing very well. This can happen when the soil stays too moist for too long, causing the root system to lack oxygen. As a result, the soy plants near the drainage area are much smaller or missing entirely.&#xD;
&#xD;
Let&amp;#039;s look at one particular example of this situation. We import the data from the SD card that was used as storage on the drone:&#xD;
&#xD;
    crop = Import[&amp;#034;F:\\DCIM\\101MEDIA\\DJI01951.JPG&amp;#034;]&#xD;
&#xD;
![enter image description here][5]&#xD;
&#xD;
The first thing to notice is the large amount of barrel distortion in this image (note how the road looks highly curved). This is due to the drone camera having a very wide-angle lens. There is no built-in function in the Wolfram Language to simply correct for barrel distortion (yet), so I am going to focus on a center part of the image where the distortion is not too bad. The ideal situation would be to fly very high with a very narrow lens.&#xD;
&#xD;
So first, to get a smaller center part of the image, let&amp;#039;s take a look at a subimage about 1/3 x 1/3 the size of the original, located a little bit right of the center:&#xD;
&#xD;
    {columns, rows} = ImageDimensions[crop];&#xD;
    {rowLow, rowHigh} = {Round[rows/3], Round[2 rows/3]};&#xD;
    {columnLow, columnHigh} = {Round[columns/2], Round[5 columns/6]};&#xD;
    HighlightImage[crop, Rectangle[{columnLow, rowLow}, {columnHigh, rowHigh}], Method -&amp;gt; &amp;#034;Brightness&amp;#034;]&#xD;
&#xD;
![enter image description here][6]&#xD;
&#xD;
Using this region we can now get a sample of the crop that contains both very healthy plants (on the left) and very small or missing plants (on the right):&#xD;
&#xD;
    cropSample = ImageTake[crop, {rowLow, rowHigh}, {columnLow, columnHigh}]&#xD;
&#xD;
![enter image description here][7]&#xD;
&#xD;
So now the question becomes: how do I measure what fraction of the harvest is lost in this sample? A simple approach is to just turn the image into a black and white image where a white color indicates a bright area (like the green plants) and a black color indicates the soil (dark in the original image):&#xD;
&#xD;
    binaryCrop1 = Binarize[cropSample]&#xD;
&#xD;
![enter image description here][8]&#xD;
&#xD;
That sort of works, but the original sample still has bright areas due to visible rocks and dead leaves, so let&amp;#039;s see if we can refine this approach a little. Instead of a plain binarization we can look at the dominant colors in this image, which we expect to be greens and blacks:&#xD;
&#xD;
    dominantColors = DominantColors[cropSample, 10]&#xD;
&#xD;
![enter image description here][9]&#xD;
&#xD;
The result here is more or less as we would expect, with some additional grays (perhaps from dead leaves and rocks).&#xD;
&#xD;
Now let&amp;#039;s take a look at the first dominant color, a good fit for a healthy soy plant in this image. Using this color we can now calculate the color distance for each pixel giving us an image where dark colors are close matches and white colors are poor matches:&#xD;
&#xD;
    ColorDistance[cropSample, First[dominantColors]]&#xD;
&#xD;
![enter image description here][10]&#xD;
&#xD;
Again we can binarize this image. In this case we also need to negate the black/white colors so that the colors in the binarized image have the same meaning as before (white = soy, black = not soy):&#xD;
&#xD;
    binaryCrop2 = ColorDistance[cropSample, First[dominantColors]] // Binarize // ColorNegate&#xD;
&#xD;
![enter image description here][11]&#xD;
&#xD;
This result actually looks quite nice: areas with no soy plants now have black pixel values (0) and areas with soy plants have white pixel values (1). So let&amp;#039;s count the white pixels:&#xD;
&#xD;
    soyPixels = ImageMeasurements[binaryCrop2, &amp;#034;Total&amp;#034;]&#xD;
    (* Gives: 954,214.0 *)&#xD;
&#xD;
Let&amp;#039;s also count the total number of pixels in the image:&#xD;
&#xD;
    totalPixels = Times @@ ImageMeasurements[binaryCrop2, &amp;#034;Dimensions&amp;#034;]&#xD;
    (* Gives: 1,603,814 *)&#xD;
&#xD;
And now we have our crude estimate for the amount of soy plants in this image, about 60%:&#xD;
&#xD;
    Quantity[100 soyPixels/totalPixels, &amp;#034;Percent&amp;#034;]&#xD;
    (* Gives: 59.4966% *)&#xD;
&#xD;
Of course this is a very crude estimate, but perhaps not too far off from the actual yield when this particular spot is harvested. I actually expect it to be a slight overestimate, since smaller plants will likely carry fewer soy beans than a fully grown plant, or none at all. But I am not a crop analyst, so we&amp;#039;ll go with 60% for now.&#xD;
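&#xD;
For reuse on other samples, the steps above can be collected into one function. This is just a sketch that combines the steps already shown; `plantFraction` is a hypothetical name:&#xD;
&#xD;
    (* fraction of pixels matching the dominant (plant) color,&#xD;
       via the same DominantColors/ColorDistance/Binarize pipeline *)&#xD;
    plantFraction[img_] :=&#xD;
     Module[{mask},&#xD;
      mask = ColorNegate[Binarize[ColorDistance[img, First[DominantColors[img, 10]]]]];&#xD;
      ImageMeasurements[mask, &amp;#034;Total&amp;#034;]/(Times @@ ImageDimensions[mask])]&#xD;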
&#xD;
Finally, an analysis like this can easily be done on the fly (or &amp;#034;on the drone&amp;#034; in this case), where an embedded system like a Raspberry Pi or similar device can process data as it comes in, and even direct the drone to the areas of highest interest (with the most stressed plants).&#xD;
&#xD;
&#xD;
  [1]: http://www.dji.com/product/phantom-2-vision/feature&#xD;
  [2]: /c/portal/getImageAttachment?filename=0_0.jpg&amp;amp;userId=11733&#xD;
  [3]: https://vimeo.com/136947441&#xD;
  [4]: /c/portal/getImageAttachment?filename=ezgif-3078374406.gif&amp;amp;userId=11733&#xD;
  [5]: /c/portal/getImageAttachment?filename=Untitled-8.png&amp;amp;userId=22112&#xD;
  [6]: /c/portal/getImageAttachment?filename=Untitled-9.png&amp;amp;userId=22112&#xD;
  [7]: /c/portal/getImageAttachment?filename=Untitled-10.png&amp;amp;userId=22112&#xD;
  [8]: /c/portal/getImageAttachment?filename=Untitled-11.png&amp;amp;userId=22112&#xD;
  [9]: /c/portal/getImageAttachment?filename=Untitled-12.png&amp;amp;userId=22112&#xD;
  [10]: /c/portal/getImageAttachment?filename=Untitled-13.png&amp;amp;userId=22112&#xD;
  [11]: /c/portal/getImageAttachment?filename=Untitled-14.png&amp;amp;userId=22112</description>
    <dc:creator>Arnoud Buzing</dc:creator>
    <dc:date>2015-08-21T19:28:02Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/859842">
    <title>[GIF] Animating climate change through an annual window</title>
    <link>https://community.wolfram.com/groups/-/m/t/859842</link>
    <description>Yesterday I posted a little [piece of code to visualise the changing mean temperature of the earth][1]. The animation tried to point out &amp;#034;how the temperature spirals out of control&amp;#034;. In order to account for positive and negative deviations from the &amp;#034;standard&amp;#034; temperature, we needed to use a transformation from temperature difference to radius. This was non-linear and one might argue that lower temperatures are underrepresented. &#xD;
&#xD;
Here is an alternative attempt.&#xD;
&#xD;
![enter image description here][2]&#xD;
&#xD;
I am not posting it in the same thread as the last post, because the gif is quite large and adding more to the original thread might make it very slow to load. For the same reason I have only uploaded a lower-quality video with every second frame; you can download the full-quality file [from here][3]. Here&amp;#039;s the implementation of the new idea, which I have seen in a similar fashion somewhere online:&#xD;
&#xD;
    tempdata = Transpose[{Interpreter[&amp;#034;Date&amp;#034;][#[[All, 1]]], #[[All, 2]]}] &amp;amp;@ &#xD;
    Import[&amp;#034;http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.4.0.0.monthly_ns_avg.txt&amp;#034;, &amp;#034;Data&amp;#034;];&#xD;
    frames2 = &#xD;
    Table[ListLinePlot[#, PlotStyle -&amp;gt; Table[Opacity[0.7/(1 + Min[k, Length[Partition[tempdata[[All, 2]], UpTo[12]]]] - j)^2 + 0.3], {j, 1, Min[k, Length[Partition[tempdata[[All, 2]], UpTo[12]]]]}], PlotRange -&amp;gt; {{1, 12}, Evaluate[MinMax[tempdata[[All, 2]]] + {-0.1, 0.1}]}, &#xD;
    Frame -&amp;gt; True, Background -&amp;gt; Black, FrameStyle -&amp;gt; Directive[White, Bold, 16], LabelStyle -&amp;gt; Directive[White, Bold, 16], ImageSize -&amp;gt; Large, &#xD;
    ColorFunction -&amp;gt; (ColorData[&amp;#034;Temperature&amp;#034;][2^(#2) - 0.5] &amp;amp;), ColorFunctionScaling -&amp;gt; False, FrameTicks -&amp;gt; {{True, None}, {Transpose@{Range[12], {&amp;#034;Jan&amp;#034;, &amp;#034;Feb&amp;#034;, &amp;#034;Mar&amp;#034;, &amp;#034;Apr&amp;#034;, &amp;#034;May&amp;#034;, &amp;#034;Jun&amp;#034;, &amp;#034;Jul&amp;#034;, &amp;#034;Aug&amp;#034;, &amp;#034;Sep&amp;#034;, &amp;#034;Oct&amp;#034;, &amp;#034;Nov&amp;#034;, &amp;#034;Dec&amp;#034;}}, None}}, FrameLabel -&amp;gt; {&amp;#034;Month&amp;#034;, &amp;#034;\[CapitalDelta]T&amp;#034;}, &#xD;
    ImagePadding -&amp;gt; 75, AspectRatio -&amp;gt; 1, Epilog -&amp;gt; {Text[Style[1849 + Min[k, Length[Partition[tempdata[[All, 2]], UpTo[12]]]], Red, 21], {2.5, 1.}]}] &amp;amp; @&#xD;
    Partition[tempdata[[All, 2]], UpTo[12]][[1 ;; Min[k, Length[Partition[tempdata[[All, 2]], UpTo[12]]]]]], {k, 1, Length[Partition[tempdata[[All, 2]], UpTo[12]]] + 10}];&#xD;
&#xD;
You can animate the list of frames:&#xD;
&#xD;
    ListAnimate[frames2]&#xD;
&#xD;
I exported the frames:&#xD;
&#xD;
    Monitor[Do[Export[&amp;#034;~/Desktop/ClimateGraph/frame&amp;#034; &amp;lt;&amp;gt; ToString[1000 + k] &amp;lt;&amp;gt; &amp;#034;.jpg&amp;#034;, frames2[[k]], ImageResolution -&amp;gt; 100], {k, 1, Length[frames2], 1}], k]&#xD;
&#xD;
and use the terminal command:&#xD;
&#xD;
    convert -delay 15 -loop 0 frame*.jpg animatedfull.gif&#xD;
&#xD;
to create the animation above.&#xD;
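&#xD;
If ImageMagick is not available, the GIF can also be written directly from the Wolfram Language; a display duration of 0.15 seconds per frame roughly matches `-delay 15` above:&#xD;
&#xD;
    (* export the frame list as an animated GIF in one call *)&#xD;
    Export[&amp;#034;~/Desktop/ClimateGraph/animatedfull.gif&amp;#034;, frames2, &amp;#034;DisplayDurations&amp;#034; -&amp;gt; 0.15]&#xD;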
&#xD;
If you prefer the curves a bit smoother you can use:&#xD;
&#xD;
    frames3 = &#xD;
    Table[ListLinePlot[#, PlotStyle -&amp;gt; Table[Opacity[0.7/(1 + Min[k, Length[Partition[tempdata[[All, 2]], UpTo[12]]]] - j)^2 + 0.3], {j, 1, Min[k, Length[Partition[tempdata[[All, 2]], UpTo[12]]]]}], PlotRange -&amp;gt; {{1, 12}, Evaluate[MinMax[tempdata[[All, 2]]] + {-0.1, 0.1}]}, InterpolationOrder-&amp;gt;2,&#xD;
    Frame -&amp;gt; True, Background -&amp;gt; Black, FrameStyle -&amp;gt; Directive[White, Bold, 16], LabelStyle -&amp;gt; Directive[White, Bold, 16], ImageSize -&amp;gt; Large, &#xD;
    ColorFunction -&amp;gt; (ColorData[&amp;#034;Temperature&amp;#034;][2^(#2) - 0.5] &amp;amp;), ColorFunctionScaling -&amp;gt; False, FrameTicks -&amp;gt; {{True, None}, {Transpose@{Range[12], {&amp;#034;Jan&amp;#034;, &amp;#034;Feb&amp;#034;, &amp;#034;Mar&amp;#034;, &amp;#034;Apr&amp;#034;, &amp;#034;May&amp;#034;, &amp;#034;Jun&amp;#034;, &amp;#034;Jul&amp;#034;, &amp;#034;Aug&amp;#034;, &amp;#034;Sep&amp;#034;, &amp;#034;Oct&amp;#034;, &amp;#034;Nov&amp;#034;, &amp;#034;Dec&amp;#034;}}, None}}, FrameLabel -&amp;gt; {&amp;#034;Month&amp;#034;, &amp;#034;\[CapitalDelta]T&amp;#034;}, &#xD;
    ImagePadding -&amp;gt; 75, AspectRatio -&amp;gt; 1, Epilog -&amp;gt; {Text[Style[1849 + Min[k, Length[Partition[tempdata[[All, 2]], UpTo[12]]]], Red, 21], {2.5, 1.}]}] &amp;amp; @&#xD;
    Partition[tempdata[[All, 2]], UpTo[12]][[1 ;; Min[k, Length[Partition[tempdata[[All, 2]], UpTo[12]]]]]], {k, 1, Length[Partition[tempdata[[All, 2]], UpTo[12]]] + 10}];&#xD;
&#xD;
And proceed as before. The last frame looks like this:&#xD;
&#xD;
![enter image description here][4]&#xD;
&#xD;
Cheers,&#xD;
&#xD;
Marco&#xD;
&#xD;
&#xD;
  [1]: http://community.wolfram.com/groups/-/m/t/859046&#xD;
  [2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9057animated.gif&amp;amp;userId=48754&#xD;
  [3]: https://www.dropbox.com/s/g1xeppsy2wtucln/animatedfull.gif?dl=0&#xD;
  [4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-05-21at00.59.23.png&amp;amp;userId=48754</description>
    <dc:creator>Marco Thiel</dc:creator>
    <dc:date>2016-05-21T00:12:16Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1731676">
    <title>[WSC19] Use Machine Learning to Predict if Topography is Land or Ocean</title>
    <link>https://community.wolfram.com/groups/-/m/t/1731676</link>
    <description>##Goal of the Project##&#xD;
Analyze and study algorithms for predicting whether a given topography is land or ocean, using simple image processing to bring out the differences between on-land and underwater relief plots before applying machine learning to the images.&#xD;
##What is Topography?##&#xD;
Topography is a broad term used to describe the detailed study of the earth&amp;#039;s surface. This includes changes in the surface such as mountains and valleys as well as features such as rivers and roads. It can also include the surface of other planets, the moon, asteroids, and meteors. Topography is closely linked to the practice of surveying, which is the practice of determining and recording the position of points in relation to one another. For example, this is a topography of a random place.&#xD;
&#xD;
![enter image description here][1]&#xD;
&#xD;
##Collecting Data##&#xD;
Mathematica provides lots of data about various places around the world, so I took advantage of this to scrape data on on-land and underwater features for my topography training set. The training data set includes:&#xD;
&#xD;
 - 200 On-land features such as capital cities, cities, buildings, mountains.&#xD;
&#xD;
        buildings = EntityList[&amp;#034;Building&amp;#034;]&#xD;
        buildings = Take[buildings, -50]&#xD;
        mountains = MountainData[All]&#xD;
        mountains = Take[mountains, 50]&#xD;
        cities = EntityList[&amp;#034;City&amp;#034;] // Flatten&#xD;
        cities = Take[cities, -50]&#xD;
&#xD;
 - 200 Under-water features such as trenches, basins, seamounts, etc.&#xD;
&#xD;
        underseafeatures = &#xD;
         DeleteMissing[&#xD;
           EntityValue[EntityClass[&amp;#034;Ocean&amp;#034;, &amp;#034;Seas&amp;#034;], &amp;#034;UnderseaFeatures&amp;#034;]] // &#xD;
          Flatten&#xD;
        underseafeatureslist = Take[underseafeatures, -200]&#xD;
&#xD;
##Image Processing##&#xD;
I&amp;#039;ve done various approaches onto this Image Processing step but only one is suitable.&#xD;
&#xD;
**First Proposal**&#xD;
&#xD;
First, I was trying to calculate the slope of each topography by the following formula:&#xD;
&#xD;
![enter image description here][2]&#xD;
&#xD;
However, it will not work effectively if there are on-land topography and underwater one having the same slope. Consequently, the machine will not be able to differentiate which is land and which is ocean.&#xD;
&#xD;
**Second Proposal**&#xD;
&#xD;
Therefore, I moved on to my second proposal, suggested by my mentor Harshal: using the Discrete Fourier Transform to convert an image into a matrix.&#xD;
&#xD;
![enter image description here][3]&#xD;
&#xD;
As can be seen, the transformed images using the DFT do not help much, because I cannot distinguish the two images even though one shows land and the other ocean.&#xD;
&#xD;
**Third Proposal**&#xD;
&#xD;
Finally, my solution is to apply a high-frequency filter to the images. The high-frequency-filtered output is easy to distinguish, since the surface above water is generally rough compared to the surface underwater.&#xD;
&#xD;
To start, take two relief plots: one of Mount Everest and one of the Mariana Trench.&#xD;
&#xD;
![enter image description here][4]&#xD;
&#xD;
From this point, I can clearly see that the left one is Everest and the right one is the Mariana Trench. I can differentiate the two because the image on the left is rough, which means it has a high probability of being land, while the image on the right is smooth, which implies it has a high probability of being ocean. So let&amp;#039;s try to find a way to figure out these differences programmatically!&#xD;
&#xD;
    capitalcitieselevationPlots = &#xD;
     ReliefPlot[&#xD;
        Reverse[GeoElevationData[#, GeoRange -&amp;gt; Quantity[4, &amp;#034;Kilometers&amp;#034;],&#xD;
           GeoProjection -&amp;gt; Automatic]], ImageSize -&amp;gt; Medium, &#xD;
        ColorFunction -&amp;gt; GrayLevel, Frame -&amp;gt; False, &#xD;
        PlotRangePadding -&amp;gt; None] &amp;amp; /@ capitalcities&#xD;
    buildingselevationPlots = &#xD;
     ReliefPlot[&#xD;
        Reverse[GeoElevationData[#, GeoRange -&amp;gt; Quantity[4, &amp;#034;Kilometers&amp;#034;],&#xD;
           GeoProjection -&amp;gt; Automatic]], ImageSize -&amp;gt; Medium, &#xD;
        ColorFunction -&amp;gt; GrayLevel, Frame -&amp;gt; False, &#xD;
        PlotRangePadding -&amp;gt; None] &amp;amp; /@ buildings&#xD;
    mountainselevationPlots = &#xD;
     ReliefPlot[&#xD;
        Reverse[GeoElevationData[#, GeoRange -&amp;gt; Quantity[4, &amp;#034;Kilometers&amp;#034;],&#xD;
           GeoProjection -&amp;gt; Automatic]], ImageSize -&amp;gt; Medium, &#xD;
        ColorFunction -&amp;gt; GrayLevel, Frame -&amp;gt; False, &#xD;
        PlotRangePadding -&amp;gt; None] &amp;amp; /@ mountains&#xD;
    citieselevationPlots = &#xD;
     ReliefPlot[&#xD;
        Reverse[GeoElevationData[#, GeoRange -&amp;gt; Quantity[4, &amp;#034;Kilometers&amp;#034;],&#xD;
           GeoProjection -&amp;gt; Automatic]], ImageSize -&amp;gt; Medium, &#xD;
        ColorFunction -&amp;gt; GrayLevel, Frame -&amp;gt; False, &#xD;
        PlotRangePadding -&amp;gt; None] &amp;amp; /@ cities&#xD;
    underseafeatureselevationPlots = &#xD;
     ReliefPlot[&#xD;
        Reverse[GeoElevationData[#, GeoRange -&amp;gt; Quantity[4, &amp;#034;Kilometers&amp;#034;],&#xD;
           GeoProjection -&amp;gt; Automatic]], ImageSize -&amp;gt; Medium, &#xD;
        ColorFunction -&amp;gt; GrayLevel, Frame -&amp;gt; False, &#xD;
        PlotRangePadding -&amp;gt; None] &amp;amp; /@ underseafeatureslist&#xD;
&#xD;
The lines of code above turn all the locations we collected into relief plots.&#xD;
&#xD;
In the next step, I apply a high-frequency filter to the images. For each relief plot, I blur it, take the difference between the blurred and original versions, and then dilate the result. This is how I did it in code:&#xD;
&#xD;
    finalcapitalcitieslist = &#xD;
     Dilation[ImageDifference[Blur[#, 20], #], 10] &amp;amp; /@ &#xD;
      capitalcitieselevationPlots&#xD;
    finalbuildingslist = &#xD;
     Dilation[ImageDifference[Blur[#, 20], #], 10] &amp;amp; /@ &#xD;
      buildingselevationPlots&#xD;
    finalmountainslist = &#xD;
     Dilation[ImageDifference[Blur[#, 20], #], 10] &amp;amp; /@ &#xD;
      mountainselevationPlots&#xD;
    finalcitieslist = &#xD;
     Dilation[ImageDifference[Blur[#, 20], #], 10] &amp;amp; /@ &#xD;
      citieselevationPlots&#xD;
    finalunderseafeatureslist = &#xD;
     Dilation[ImageDifference[Blur[#, 20], #], 10] &amp;amp; /@ &#xD;
      underseafeatureselevationPlots&#xD;
&#xD;
This returns:&#xD;
&#xD;
![enter image description here][5]&#xD;
&#xD;
These two images show Mount Everest and the Mariana Trench after applying the high-frequency filter. As mentioned above, the left one is Mount Everest and the right one is the Mariana Trench. The difference is the many &amp;#034;bubbles&amp;#034; in the left image compared to the right, where the bubbles represent abrupt changes in the original topography.&#xD;
&#xD;
##Creating Training Set##&#xD;
&#xD;
I gathered all the separated training sets (capital cities, cities, buildings, mountains, undersea features) into a larger training set.&#xD;
&#xD;
    finalonlandlist = &#xD;
     Join[finalcapitalcitieslist, finalcitieslist, finalbuildingslist, &#xD;
      finalmountainslist]&#xD;
    finaltraininglist = Join[finalonlandlist, finalunderseafeatureslist]&#xD;
&#xD;
##Generating Classify Function##&#xD;
&#xD;
Create the classifier from the training set, labelling each example with its class; Classify then fits the best model, in this case using the logistic regression method.&#xD;
&#xD;
    landExamplesAndClasses = (# -&amp;gt; &amp;#034;Land&amp;#034; &amp;amp;) /@ finalonlandlist;&#xD;
    seaExamplesAndClasses = (# -&amp;gt; &amp;#034;Sea&amp;#034; &amp;amp;) /@ finalunderseafeatureslist;&#xD;
    predict = &#xD;
     Classify[Union[landExamplesAndClasses, seaExamplesAndClasses]]&#xD;
&#xD;
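Before relying on the classifier, it can be sanity-checked on held-out data. This is only a sketch, not part of the original project: the 80/20 split is an arbitrary assumption, and `trainSet`/`testSet` are hypothetical names.&#xD;
&#xD;
    (* hold out 20% of the labelled examples and measure accuracy *)&#xD;
    all = RandomSample[Join[landExamplesAndClasses, seaExamplesAndClasses]];&#xD;
    {trainSet, testSet} = TakeDrop[all, Round[0.8 Length[all]]];&#xD;
    ClassifierMeasurements[Classify[trainSet], testSet][&amp;#034;Accuracy&amp;#034;]&#xD;
&#xD;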
##Creating the Final Function to Return Probabilities##&#xD;
&#xD;
I created a final function that returns the probabilities of both land and sea when the input is the topography of an arbitrary area.&#xD;
&#xD;
    predictor[img_] := &#xD;
     predict[Dilation[ImageDifference[Blur[img, 20], img], 10], &#xD;
       &amp;#034;Probabilities&amp;#034;] // Normal&#xD;
    test = ReliefPlot[topography, Frame -&amp;gt; False, PlotRangePadding -&amp;gt; None]&#xD;
    predictor[test]&#xD;
&#xD;
##An Application:  &amp;#034;Generating Temperature Map and Smooth Plot&amp;#034;##&#xD;
&#xD;
In this extra challenge, I randomly chose a place and generated a heat map of it, as well as a smooth plot.&#xD;
&#xD;
**Generating Temperature Map**&#xD;
&#xD;
The steps I followed were:&#xD;
&#xD;
 - Get the coordinates of the current location.&#xD;
 - From current location, move to its right 15 times for 8000 meters in each step to get a list of new coordinates.&#xD;
 - From each coordinate in the list above, move downwards 15 times for 8000 meters in each step to get a list of new coordinates.&#xD;
 - Rearrange the list of coordinates vertically instead of horizontally.&#xD;
 - Join the lists of coordinates above all together into a big list.&#xD;
 - Make a list of plots for each coordinate in the list above.&#xD;
 - For each plot, apply the final function to get the probability of land and water.&#xD;
 - Collect all the Land Probability of each plot and Arrange them into a  16x16-sized matrix.&#xD;
 - From the 16x16 matrix, map each number to Thermometer Colors, ranging from -0.5 to 0.5.&#xD;
&#xD;
        vitri = GeoPosition[Here]&#xD;
        vitringang = &#xD;
         Table[GeoDestination[GeoPosition[vitri], {dist, 90}], {dist, 0, &#xD;
           120000, 8000}]&#xD;
        vitridoc = &#xD;
         Table[GeoDestination[GeoPosition[#], {dist, 180}], {dist, 8000, &#xD;
             120000, 8000}] &amp;amp; /@ vitringang&#xD;
        vitridocdachinhsua = Table[vitridoc[[All, n]], {n, 1, 15}]&#xD;
        tatcavitri = Join[vitridocdachinhsua, vitringang] // Flatten&#xD;
        bando = ReliefPlot[&#xD;
            Reverse[GeoElevationData[{#, {(# /. GeoPosition[x_] -&amp;gt; x)[[1]] - &#xD;
                 0.05, (# /. GeoPosition[x_] -&amp;gt; x)[[2]] + 0.05}}, &#xD;
              GeoRange -&amp;gt; Quantity[4, &amp;#034;Kilometers&amp;#034;], &#xD;
              GeoProjection -&amp;gt; Automatic, GeoCenter -&amp;gt; Automatic]], &#xD;
            ImageSize -&amp;gt; Medium, ColorFunction -&amp;gt; GrayLevel, Frame -&amp;gt; False, &#xD;
            PlotRangePadding -&amp;gt; None] &amp;amp; /@ (GeoPosition /@ tatcavitri)&#xD;
        topo1 = predictor[#] &amp;amp; /@ bando // Flatten&#xD;
        bandomoi = &#xD;
         Partition[&#xD;
          Values[topo1][[#]] &amp;amp; /@ Flatten[Position[Keys[topo1], &amp;#034;Land&amp;#034;]], 16]&#xD;
        MatrixPlot[bandomoi - .5, &#xD;
         ColorFunction -&amp;gt; ColorData[&amp;#034;ThermometerColors&amp;#034;]]&#xD;
&#xD;
And it returns:&#xD;
&#xD;
![enter image description here][6]&#xD;
&#xD;
Let&amp;#039;s compare it to the actual map:&#xD;
&#xD;
![enter image description here][7]&#xD;
&#xD;
The temperature map approximately matches the actual map. However, at some points, for instance in the sea, a pixel is more red than blue. I tried to figure out why, and found that the seabed there might still be rough because it became a water body only relatively recently, perhaps a few tens of thousands of years ago.&#xD;
&#xD;
**Generating Smooth Plot**&#xD;
&#xD;
To do this, I applied much the same process as for generating the Temperature Map above.&#xD;
&#xD;
    point1 = GeoPosition[{11.111457, 107.636774}]&#xD;
    point2 = GeoPosition[{11.111457, 109.803591}]&#xD;
    point3 = GeoPosition[{8.991341, 107.636774}]&#xD;
    point4 = GeoPosition[{8.991341, 109.803591}]&#xD;
    rong = UnitConvert[GeoDistance[point1, point2], &amp;#034;Meters&amp;#034;] // QuantityMagnitude&#xD;
    dai = UnitConvert[GeoDistance[point1, point3], &amp;#034;Meters&amp;#034;] // QuantityMagnitude&#xD;
    soluongbuocnhayngang = rong/8000 + 1 // Floor&#xD;
    soluongbuocnhaydoc = (dai - 8000)/8000 + 1 // Floor&#xD;
    vitringang1 = &#xD;
     Table[GeoDestination[GeoPosition[point1], {dist, 90}], {dist, 0, &#xD;
       rong, 8000}]&#xD;
    vitridoc1 = &#xD;
     Table[GeoDestination[GeoPosition[#], {dist, 180}], {dist, 8000, dai, &#xD;
         8000}] &amp;amp; /@ vitringang1&#xD;
    vitridoc1dachinhsua = &#xD;
     Table[vitridoc1[[All, n]], {n, 1, soluongbuocnhaydoc}]&#xD;
    tatcavitri1 = Join[vitridoc1dachinhsua, vitringang1] // Flatten&#xD;
    bando1 = ReliefPlot[&#xD;
        Reverse[GeoElevationData[{#, {(# /. GeoPosition[x_] -&amp;gt; x)[[1]] - &#xD;
             0.05, (# /. GeoPosition[x_] -&amp;gt; x)[[2]] + 0.05}}, &#xD;
          GeoRange -&amp;gt; Quantity[4, &amp;#034;Kilometers&amp;#034;], &#xD;
          GeoProjection -&amp;gt; Automatic, GeoCenter -&amp;gt; Automatic]], &#xD;
        ImageSize -&amp;gt; Medium, ColorFunction -&amp;gt; GrayLevel, Frame -&amp;gt; False, &#xD;
        PlotRangePadding -&amp;gt; None] &amp;amp; /@ (GeoPosition /@ tatcavitri1)&#xD;
    topo11 = predictor[#] &amp;amp; /@ bando1 // Flatten&#xD;
    bandomoi1 = &#xD;
     Partition[&#xD;
       Values[topo11][[#]] &amp;amp; /@ Flatten[Position[Keys[topo11], &amp;#034;Land&amp;#034;]], &#xD;
       soluongbuocnhayngang] // Flatten&#xD;
    kethopvitrichiso = MapThread[Rule, {tatcavitri1, bandomoi1}]&#xD;
    GeoSmoothHistogram[kethopvitrichiso, &#xD;
     ColorFunction -&amp;gt; ColorData[&amp;#034;ThermometerColors&amp;#034;]]&#xD;
&#xD;
And here it is: the Smooth Map of a part of Vietnam that contains both sea and land.&#xD;
&#xD;
![enter image description here][8]&#xD;
&#xD;
Red regions in the heat map indicate land and blue regions indicate ocean, as expected.&#xD;
&#xD;
##Wrapping-up##&#xD;
&#xD;
To summarize: in the first step, I collected topographies from both land and underwater locations. The second step was the most complicated one. I first tried using the Discrete Fourier Transform (DFT) to turn each topography image into a matrix, but all the Fourier images were very similar. I then applied a high-frequency filter to the images; the filtered output was easy to distinguish, since the surface above water is generally rough compared to the surface underwater. After the image-processing step, I created two training sets, one with on-land features and one with underwater features, and then built the classify function as well as the final function that returns the probabilities of &amp;#034;Land&amp;#034; and &amp;#034;Sea&amp;#034;. Finally, as an extra challenge and a nice application of my project, I generated a Temperature Map of a part of Boston, compared it to the actual map, and found that the generated heat map is similar to the real one.&#xD;
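&#xD;
For reference, the high-frequency filter idea can be sketched as follows (an assumed reconstruction, not necessarily the exact filter used in the project): subtracting a Gaussian-blurred copy of an image removes its low-frequency component and keeps only the high-frequency detail.&#xD;
&#xD;
    (* high-pass sketch: remove the blurred (low-frequency) component *)&#xD;
    highpass[img_Image] := ImageSubtract[img, GaussianFilter[img, 5]]&#xD;
&#xD;
Rough on-land relief leaves noticeably more residual detail after such a filter than the smoother seafloor does.&#xD;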
&#xD;
##Future Works##&#xD;
&#xD;
As we can see, from an arbitrary relief plot of a location, it is possible to turn the relief plot into a thermometer surface, where blue represents &amp;#034;Sea&amp;#034; and red represents &amp;#034;Land&amp;#034;, by using the probabilities that the classifier returns. The first thing that needs to be done is to identify other crucial parameters, such as water bodies around buildings, that could be used to increase the accuracy. After that, an interesting thing to do would be to analyze the topography of other celestial bodies (such as Mars) and predict regions where liquids once flowed on the surface.&#xD;
&#xD;
**Are these the images of wet Mars in the past and current Mars? Who knows.**&#xD;
&#xD;
![enter image description here][9]&#xD;
&#xD;
##Special Thanks##&#xD;
&#xD;
I would like to give my special thanks to my mentor Mr. Harshal Gajjar for guiding me and also Mr. Mads Bahrami for giving me more challenges!&#xD;
&#xD;
[Final Project GitHub link][10]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=img6.jpg&amp;amp;userId=1725013&#xD;
  [2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=img1.png&amp;amp;userId=1725013&#xD;
  [3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=img12.png&amp;amp;userId=1725013&#xD;
  [4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=img4.png&amp;amp;userId=1725013&#xD;
  [5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=img11.png&amp;amp;userId=1725013&#xD;
  [6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=img8.png&amp;amp;userId=1725013&#xD;
  [7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=img9.png&amp;amp;userId=1725013&#xD;
  [8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=4910img10.png&amp;amp;userId=1725013&#xD;
  [9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=img7.jpg&amp;amp;userId=1725013&#xD;
  [10]: https://github.com/mkhangg/WSS-Final-Project</description>
    <dc:creator>Khang Minh Nguyen</dc:creator>
    <dc:date>2019-07-11T20:52:51Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2828058">
    <title>Exploring Relief Maps on Ancient Earth</title>
    <link>https://community.wolfram.com/groups/-/m/t/2828058</link>
    <description>![enter image description here][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=PaleogeographicMaps.gif&amp;amp;userId=20103&#xD;
  [2]: https://www.wolframcloud.com/obj/5cd24f17-1b01-4dec-a917-b3d4ac97617b</description>
    <dc:creator>Jeffrey Bryant</dc:creator>
    <dc:date>2023-02-10T19:29:55Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/840910">
    <title>What is beyond the horizon, across the sea?</title>
    <link>https://community.wolfram.com/groups/-/m/t/840910</link>
    <description>*CLICK to ZOOM: click on the top image to zoom and see the details.*&#xD;
&#xD;
----------&#xD;
&#xD;
&#xD;
[![enter image description here][4]][4]&#xD;
&#xD;
Vitaliy pointed out this interesting question: if you started navigating the sea perpendicularly to your favorite seashore and kept moving straight, which other place would you reach? One cannot answer this question just by tracing a straight line on a map, because that does not take into account the curvature of the Earth&amp;#039;s surface. We must trace geodesics and see where they lead us. Let us do that with the Wolfram Language.&#xD;
&#xD;
We basically need two ingredients: a way to construct geodesics and a way to detect whether we already reached land again. To construct geodesics, we shall use GeoPath. To check whether we reached land we shall simply check if our boat has crashed, that is, whether GeoElevationData reports a positive elevation for our current position. Let us start with the latter:&#xD;
&#xD;
Take an array of elevations covering the whole world. We choose zoom 3 to have good resolution:&#xD;
&#xD;
    In[1]:= Dimensions[ elevation = Reverse@ QuantityMagnitude[GeoElevationData[&amp;#034;World&amp;#034;, GeoZoomLevel -&amp;gt; 3]] ]&#xD;
    Out[1]= {1024, 2048}&#xD;
&#xD;
Now construct an interpolating function from that data:&#xD;
&#xD;
    In[2]:= elevationF = ListInterpolation[elevation, {{-90, 90}, {-180, 180}}];&#xD;
&#xD;
The function that will tell us whether we are still at sea when our coordinates are {lat, lon} is the following. We use Mod to handle any longitude angle determination:&#xD;
&#xD;
    In[3]:= seaQ[{lat_, lon_}] := Less[elevationF[lat, Mod[lon, 360, -180]], 0];&#xD;
&#xD;
For example, the coordinate origin is at sea:&#xD;
&#xD;
    In[4]:= seaQ[{0, 0}]&#xD;
    Out[4]= True&#xD;
&#xD;
Now let us construct the geodesic that moves perpendicularly from a given segment of seashore, say the segment from the point p to the point q, starting from the midpoint of the segment. We have to tell the function whether we want to start navigating towards the right or the left. We will use a sign (+1, -1, respectively) to indicate that:&#xD;
&#xD;
    In[5]:= geopath[{p_, q_}, sign_] := Module[{init, dist, dir, points},&#xD;
        (* Initial point of the geodesic *)&#xD;
        init = Mean[{p, q}];&#xD;
        (* Maximum distance to navigate *)&#xD;
        dist = Quantity[35000, &amp;#034;Kilometers&amp;#034;];&#xD;
        (* Initial direction of navigation *)&#xD;
        dir = GeoDirection[p, q] + sign Quantity[90, &amp;#034;AngularDegrees&amp;#034;];&#xD;
        (* Points of the geodesic *)&#xD;
        points = Reverse[Flatten[GeoGraphics`GeoEvaluate[GeoPath[{init, dist, dir}], Automatic, 50000] /. Line -&amp;gt; Identity, 1], 2];&#xD;
        (* Points of the geodesic at sea, until reaching the first inland point *)&#xD;
        points = Prepend[TakeWhile[Rest[points], seaQ], init];&#xD;
        (* The GeoPath geo primitive to be drawn with GeoGraphics *)&#xD;
        GeoPath[points, &amp;#034;TravelPath&amp;#034;, VertexColors -&amp;gt; {Green, Red}]&#xD;
      ]&#xD;
&#xD;
The use of GeoGraphics`GeoEvaluate requires explanation. This function converts a geo primitive into standard Graphics primitives, and it is one of the internal steps of GeoGraphics. Its second argument specifies the geo model, that is, the planet or datum (basically the ellipsoidal model of the Earth) we are using; Automatic here means the default &amp;#034;ITRF00&amp;#034; datum of the Earth. Its third argument specifies the resolution in meters, so that consecutive points of the result will be separated by no more than 50 km. Finally, Reverse and Flatten convert back to the notation we need: a list of {lat, lon} points.&#xD;
&#xD;
We are now ready to draw the geodesics. Take the polygons of the world. We have a complete form and a simplified form:&#xD;
&#xD;
    In[6]:= pol = EntityValue[Entity[&amp;#034;GeographicRegion&amp;#034;, &amp;#034;World&amp;#034;], &amp;#034;Polygon&amp;#034;];&#xD;
    In[7]:= spol = EntityValue[GeoVariant[Entity[&amp;#034;GeographicRegion&amp;#034;, &amp;#034;World&amp;#034;], &amp;#034;SimplifiedArea&amp;#034;], &amp;#034;Polygon&amp;#034;];&#xD;
&#xD;
First example: draw geodesics from the segments of the simplified polygon of Australia. There are 192 points in our current version of this polygon:&#xD;
&#xD;
    In[8]:= Length[ australia = EntityValue[GeoVariant[Entity[&amp;#034;Country&amp;#034;, &amp;#034;Australia&amp;#034;], &amp;#034;SimplifiedArea&amp;#034;], &amp;#034;Polygon&amp;#034;][[1, 1, 1]] ]&#xD;
    Out[8]= 192&#xD;
&#xD;
Construct the geodesics. We need to start moving towards the left, so use sign -1:&#xD;
&#xD;
    In[9]:= paths = geopath[#, -1] &amp;amp; /@ Subsequences[australia, {2}];&#xD;
&#xD;
Draw them with GeoGraphics, using Vitaliy&amp;#039;s favorite black&amp;amp;white mapstyle. The beginnings of the geodesics are represented in green, and their ends are in red color:&#xD;
&#xD;
    In[10]:= GeoGraphics[{FaceForm[White], EdgeForm[White], pol, paths}, GeoBackground -&amp;gt; Black, GeoCenter -&amp;gt; Mean[australia]]&#xD;
![enter image description here][1]&#xD;
&#xD;
&#xD;
Let us now do the same thing, but with the complete polygon of Australia. We now have many more points:&#xD;
&#xD;
    In[11]:= Length[ australia = EntityValue[Entity[&amp;#034;Country&amp;#034;, &amp;#034;Australia&amp;#034;], &amp;#034;Polygon&amp;#034;][[1, 1, 1]] ]&#xD;
    Out[11]= 5140&#xD;
&#xD;
    In[12]:= GeoGraphics[{FaceForm[White], EdgeForm[White], pol, paths}, GeoBackground -&amp;gt; Black, GeoCenter -&amp;gt; Mean[australia]]&#xD;
![enter image description here][2]&#xD;
&#xD;
&#xD;
It is quite interesting to see how many places we can reach from Australia. In particular we can reach the US East coast, after crossing the Atlantic!&#xD;
&#xD;
And finally, let us do the same thing for the polygon of the Americas. We extract it from the polygons of the world:&#xD;
&#xD;
    In[13]:= Length[america = spol[[1, 2, 1, 1, 1, 1, 1]]]&#xD;
    Out[13]= 1333&#xD;
&#xD;
    In[14]:= paths = geopath[#, -1] &amp;amp; /@ Subsequences[america, {2}];&#xD;
&#xD;
    In[15]:= GeoGraphics[{FaceForm[White], EdgeForm[White], pol, paths}, GeoBackground -&amp;gt; Black, GeoCenter -&amp;gt; Mean[america]]&#xD;
![enter image description here][3]&#xD;
&#xD;
This time, let us use the Mercator projection. There are many segments, so be patient...&#xD;
&#xD;
    In[16]:= Length[america = pol[[1, 2, 1, 1, 1]]]&#xD;
    Out[16]= 22751&#xD;
&#xD;
    In[17]:= paths = geopath[#, -1] &amp;amp; /@ Subsequences[america, {2}];&#xD;
&#xD;
    In[18]:= GeoGraphics[{FaceForm[White], EdgeForm[White], pol, Thin, paths}, GeoBackground -&amp;gt; Black, GeoCenter -&amp;gt; Mean[america], GeoProjection -&amp;gt; &amp;#034;Mercator&amp;#034;]&#xD;
&#xD;
[![enter image description here][4]][4]&#xD;
&#xD;
Note that GeoElevationData is a bit imprecise in the area of the East Siberian Sea, north of Russia, where the seafloor is not very deep, so the geodesics get interrupted there. There are also some discrepancies around Antarctica, due to the ice masses.&#xD;
&#xD;
Jose.&#xD;
&#xD;
  [1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-04-15at23.08.09.png&amp;amp;userId=54190&#xD;
  [2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-04-15at23.10.09.png&amp;amp;userId=54190&#xD;
  [3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-04-15at23.28.00.png&amp;amp;userId=54190&#xD;
  [4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-04-15at23.20.41.png&amp;amp;userId=54190&#xD;
&#xD;
 [at0]: http://community.wolfram.com/web/vitaliyk</description>
    <dc:creator>Jose Martin-Garcia</dc:creator>
    <dc:date>2016-04-16T04:48:16Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/784207">
    <title>[Package] Phylogenetics for Mathematica</title>
    <link>https://community.wolfram.com/groups/-/m/t/784207</link>
    <description>[Dr. P. David Polly from Indiana University][1] published a [Phylogenetics package for Mathematica][2]. A detailed [User&amp;#039;s Guide][3] is also available. Short discription is copied below.&#xD;
&#xD;
**Keywords**: Phylogenetics; Ancestral Node Reconstruction; Newick Trees&#xD;
&#xD;
**Abstract**: This add-in package for Mathematica performs basic phylogenetic functions, including reading and drawing Newick format trees, calculating phylogenetically independent contrasts, reconstructing ancestral values for continuous traits, performing random walks, and simulating continuous traits on phylogenetic trees. The file is a &amp;#034;.m&amp;#034; file, which can be imported into Mathematica 6.0 and later (functions do not work in earlier versions of Mathematica). Install using the &amp;#034;Install&amp;#034; item on the &amp;#034;File&amp;#034; menu. Once installed, you must load the package like any other with the line `&amp;lt;&amp;lt;PollyPhylogenetics`, using either this suggested name or another.&#xD;
&#xD;
&#xD;
&#xD;
  [1]: http://mypage.iu.edu/~pdpolly&#xD;
  [2]: https://scholarworks.iu.edu/dspace/handle/2022/14614&#xD;
  [3]: http://mypage.iu.edu/~pdpolly/Software/Guide%20to%20PollyPhylogenetics%203.0.pdf</description>
    <dc:creator>Vitaliy Kaurov</dc:creator>
    <dc:date>2016-02-01T13:07:47Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1333342">
    <title>Visualizing Hawaii Seismic Activity with EarthquakeData</title>
    <link>https://community.wolfram.com/groups/-/m/t/1333342</link>
    <description>If you&amp;#039;ve been following the news, you&amp;#039;ve probably seen that the Kilauea volcano in Hawaii has erupted, and there have been reports of many earthquakes. I thought I&amp;#039;d use the built-in [EarthquakeData][1] in the Wolfram Language to see if I could come up with any interesting visualizations to explore the seismic activity. I&amp;#039;m going to be using some very basic examples from the documentation.&#xD;
&#xD;
Let&amp;#039;s first grab some data from 5/1/18 to 5/6/18 and plot locations.&#xD;
&#xD;
&#xD;
    data = EarthquakeData[&#xD;
        Entity[&amp;#034;AdministrativeDivision&amp;#034;, {&amp;#034;Hawaii&amp;#034;, &amp;#034;UnitedStates&amp;#034;}], &#xD;
        4, {{2018, 5, 1}, {2018, 5, 6}}, &amp;#034;Position&amp;#034;][&amp;#034;Values&amp;#034;];&#xD;
    &#xD;
    GeoGraphics[{GeoStyling[&amp;#034;ReliefMap&amp;#034;, MaxPlotPoints -&amp;gt; 300], Red, &#xD;
      data /. GeoPosition[{x_, y_}] :&amp;gt; Point[{y, x}]}]&#xD;
&#xD;
![enter image description here][3]&#xD;
&#xD;
The Wolfram Language returned a message indicating there were duplicates, so some values were combined.&#xD;
&#xD;
Let&amp;#039;s use an example from the documentation and examine earthquake depth.&#xD;
&#xD;
    data2 = {#[&amp;#034;Position&amp;#034;], #[&amp;#034;Depth&amp;#034;]} &amp;amp; /@ &#xD;
       Values[EarthquakeData[&#xD;
         Polygon[Entity[&#xD;
           &amp;#034;AdministrativeDivision&amp;#034;, {&amp;#034;Hawaii&amp;#034;, &amp;#034;UnitedStates&amp;#034;}]], &#xD;
         4, {{2018, 5, 1}, {2018, 5, 6}}]];&#xD;
    &#xD;
    Graphics3D[{Opacity[0.6], &#xD;
      Map[Append[Reverse[#], 0] &amp;amp;, &#xD;
       EntityValue[&#xD;
         Entity[&amp;#034;AdministrativeDivision&amp;#034;, {&amp;#034;Hawaii&amp;#034;, &amp;#034;UnitedStates&amp;#034;}], &#xD;
         &amp;#034;Polygon&amp;#034;] /. GeoPosition -&amp;gt; Identity, {-2}], Red, Opacity[1], &#xD;
      Line[{Append[Reverse[First[#1]], 0], &#xD;
          Append[Reverse[First[#1]], -QuantityMagnitude[#2]/10]} &amp;amp; @@@ &#xD;
        data2]}, Axes -&amp;gt; True]&#xD;
&#xD;
![enter image description here][4]&#xD;
&#xD;
Pretty impressive for just using some basic examples from the documentation. I&amp;#039;m not an expert in programming, data visualization or geocomputation, so I&amp;#039;m curious what some of you might be able to come up with!&#xD;
&#xD;
  [1]: http://reference.wolfram.com/language/ref/EarthquakeData.html&#xD;
  [2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SeismicPlot.jpeg&amp;amp;userId=1036924&#xD;
  [3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Map.jpeg&amp;amp;userId=1036924&#xD;
  [4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Depth.jpeg&amp;amp;userId=1036924</description>
    <dc:creator>Null Null</dc:creator>
    <dc:date>2018-05-06T16:39:08Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1345014">
    <title>LayeredGeoGraphics -- An awesome way to build multi-layered custom maps!</title>
    <link>https://community.wolfram.com/groups/-/m/t/1345014</link>
    <description>The Wolfram Language comes with lots of great functions, but one of my all-time favorites is [`GeoGraphics`][1] and all its `GeoSiblings`. Here is a simple example of how easy it is to create a map in the Wolfram Language:&#xD;
&#xD;
    GeoGraphics[Interpreter[&amp;#034;City&amp;#034;][&amp;#034;nyc&amp;#034;]]&#xD;
&#xD;
![enter image description here][2]&#xD;
&#xD;
And you can use some built-in &amp;#039;backgrounds&amp;#039; as well, for example:&#xD;
&#xD;
    GeoGraphics[Interpreter[&amp;#034;City&amp;#034;][&amp;#034;washington dc&amp;#034;], GeoBackground -&amp;gt; &amp;#034;ContourMap&amp;#034;]&#xD;
&#xD;
![enter image description here][3]&#xD;
&#xD;
A perhaps lesser known option, but very useful one is the [`GeoServer`][4] option, which lets you point to any tilemap server:&#xD;
&#xD;
    GeoGraphics[Interpreter[&amp;#034;City&amp;#034;][&amp;#034;rome, italy&amp;#034;], GeoServer -&amp;gt; &amp;#034;http://b.tile.openstreetmap.org/`1`/`2`/`3`.png&amp;#034;]&#xD;
&#xD;
![enter image description here][5]&#xD;
&#xD;
This is a very powerful feature, because a great many tilemap servers exist. Many are free for personal use, and others require some sort of subscription with a key to unlock the tilemap server.&#xD;
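&#xD;
For context, the `1`/`2`/`3` slots in such a server URL follow the standard OSM slippy-map tile scheme and are filled with the zoom level and the x and y tile indices. A quick illustration with arbitrary indices:&#xD;
&#xD;
    StringTemplate[&amp;#034;http://b.tile.openstreetmap.org/`1`/`2`/`3`.png&amp;#034;][3, 4, 2]&#xD;
    (* &amp;#034;http://b.tile.openstreetmap.org/3/4/2.png&amp;#034; *)&#xD;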
&#xD;
Finding all of them is a hassle, so I am making a `GeoService` function available in [my `Prototype` paclet on GitHub][6]. This is a paclet which provides a number of half-baked (and sometimes fully baked!) functions that are not (or not yet) in the core Wolfram Language. Here is how you get this paclet (and the functions described in this post):&#xD;
&#xD;
    PacletInstall[ &amp;#034;https://github.com/arnoudbuzing/prototypes/releases/download/v0.2.7/Prototypes-0.2.7.paclet&amp;#034;]&#xD;
&#xD;
(You may need to `Quit` and restart the Wolfram Language after this)&#xD;
&#xD;
The `GeoService` function is a collection of a number of publicly documented tilemap servers:&#xD;
&#xD;
    GeoService[&amp;#034;Properties&amp;#034;]&#xD;
&#xD;
Here is the output:&#xD;
&#xD;
    {&amp;#034;OpenStreetMap&amp;#034;, {&amp;#034;OpenStreetMap&amp;#034;, &amp;#034;France&amp;#034;}, {&amp;#034;OpenStreetMap&amp;#034;, &amp;#034;France&amp;#034;, &amp;#034;Humanitarian&amp;#034;}, &amp;#034;WikiMedia&amp;#034;, &amp;#034;HikeAndBike&amp;#034;, {&amp;#034;WaymarkedTrails&amp;#034;, &amp;#034;Hiking&amp;#034;}, {&amp;#034;WaymarkedTrails&amp;#034;, &amp;#034;Cycling&amp;#034;}, &amp;#034;SkiMap&amp;#034;, &amp;#034;HillShading&amp;#034;, {&amp;#034;OpenCycleMap&amp;#034;,  &amp;#034;Cycle&amp;#034;}, {&amp;#034;OpenCycleMap&amp;#034;, &amp;#034;Transport&amp;#034;}, {&amp;#034;Mapnik&amp;#034;, &amp;#034;Grayscale&amp;#034;}, {&amp;#034;Mapnik&amp;#034;, &amp;#034;LabelFree&amp;#034;}, {&amp;#034;Stamen&amp;#034;,  &amp;#034;Toner&amp;#034;}, {&amp;#034;Stamen&amp;#034;, &amp;#034;Watercolor&amp;#034;}, {&amp;#034;ThunderForest&amp;#034;, &amp;#034;Landscape&amp;#034;}, {&amp;#034;ThunderForest&amp;#034;, &amp;#034;Outdoors&amp;#034;}, &amp;#034;Opnvkarte&amp;#034;, &amp;#034;OpenPtMap&amp;#034;, {&amp;#034;Carto&amp;#034;, &amp;#034;Dark&amp;#034;}, {&amp;#034;Carto&amp;#034;, &amp;#034;Light&amp;#034;}, &amp;#034;OpenSeaMap&amp;#034;, {&amp;#034;OpenRailwayMap&amp;#034;,  &amp;#034;Standard&amp;#034;}, {&amp;#034;OpenRailwayMap&amp;#034;, &amp;#034;MaxSpeed&amp;#034;}, {&amp;#034;OpenRailwayMap&amp;#034;, &amp;#034;Signals&amp;#034;}, {&amp;#034;ArcGIS&amp;#034;, &amp;#034;UnitedStatesTopographical&amp;#034;}}&#xD;
&#xD;
This immediately gives you a whole new set of maps you can make. Here are some examples:&#xD;
&#xD;
    GeoGraphics[Interpreter[&amp;#034;City&amp;#034;][&amp;#034;athens, greece&amp;#034;], GeoServer -&amp;gt; GeoService[{&amp;#034;Mapnik&amp;#034;, &amp;#034;LabelFree&amp;#034;}]]&#xD;
&#xD;
![enter image description here][7]&#xD;
&#xD;
    GeoGraphics[Interpreter[&amp;#034;City&amp;#034;][&amp;#034;cairo, egypt&amp;#034;], GeoServer -&amp;gt; GeoService[&amp;#034;WikiMedia&amp;#034;]]&#xD;
&#xD;
![enter image description here][8]&#xD;
&#xD;
But all these maps are single layered, whereas several tilemap servers provide transparent layers for some &amp;#034;theme&amp;#034;. For example here is a layer which contains only hiking trails in a part of the [Yorkshire Dales][9]:&#xD;
&#xD;
    GeoGraphics[Interpreter[&amp;#034;City&amp;#034;][&amp;#034;malham, united kingdom&amp;#034;], GeoServer -&amp;gt; GeoService[{&amp;#034;WaymarkedTrails&amp;#034;, &amp;#034;Hiking&amp;#034;}]]&#xD;
&#xD;
![enter image description here][10]&#xD;
&#xD;
This sort of layer is most useful when it is combined with another (non-transparent background) layer. One way to achieve this is to use the `Overlay` function, which can display one Wolfram Language expression on top of another:&#xD;
&#xD;
    Overlay[{&amp;#034;XXXX  &amp;#034;, &amp;#034;  OOOO&amp;#034;}]&#xD;
&#xD;
![enter image description here][11]&#xD;
&#xD;
In the [&amp;#034;Prototype&amp;#034; paclet][12] (described above) I defined a `LayeredGeoGraphics` function which uses [`Overlay`][13] to stack multiple layers. Here is an example which shows hiking trails against a grayscale [Mapnik][14] layer:&#xD;
&#xD;
    LayeredGeoGraphics[&#xD;
      Interpreter[&amp;#034;City&amp;#034;][&amp;#034;malham, united kingdom&amp;#034;], &#xD;
      {{&amp;#034;Mapnik&amp;#034;, &amp;#034;Grayscale&amp;#034;}, {&amp;#034;WaymarkedTrails&amp;#034;, &amp;#034;Hiking&amp;#034;}}, &#xD;
      GeoRange -&amp;gt; Quantity[1, &amp;#034;Miles&amp;#034;]]&#xD;
&#xD;
![enter image description here][15]&#xD;
&#xD;
And of course you can stack more than two layers. Here is an example of a label-free background layer from [Mapnik][16], with a hill-shading layer and a hiking trail layer on top of it (The hill-shading provides the &amp;#034;depth&amp;#034; look and feel):&#xD;
&#xD;
    LayeredGeoGraphics[ Interpreter[&amp;#034;City&amp;#034;][&amp;#034;malham, united kingdom&amp;#034;], {{&amp;#034;Mapnik&amp;#034;, &amp;#034;LabelFree&amp;#034;}, &amp;#034;HillShading&amp;#034;, {&amp;#034;WaymarkedTrails&amp;#034;, &amp;#034;Hiking&amp;#034;}}, GeoRange -&amp;gt; Quantity[1, &amp;#034;Miles&amp;#034;]]&#xD;
&#xD;
![enter image description here][17]&#xD;
&#xD;
The possibilities are endless! If you find additional tilemap servers please feel free to add them here as a comment, or make a pull request in my [Prototypes github repo][19]. More layers = more possibilities!&#xD;
&#xD;
&#xD;
  [1]: http://reference.wolfram.com/language/ref/GeoGraphics.html&#xD;
  [2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=gg-01.png&amp;amp;userId=22112&#xD;
  [3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=gg-02.png&amp;amp;userId=22112&#xD;
  [4]: http://reference.wolfram.com/language/ref/GeoServer.html&#xD;
  [5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=gg-03.png&amp;amp;userId=22112&#xD;
  [6]: https://github.com/arnoudbuzing/prototypes&#xD;
  [7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=gg-04.png&amp;amp;userId=22112&#xD;
  [8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=gg-05.png&amp;amp;userId=22112&#xD;
  [9]: https://en.wikipedia.org/wiki/Yorkshire_Dales&#xD;
  [10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=gg-06.png&amp;amp;userId=22112&#xD;
  [11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=gg-07.png&amp;amp;userId=22112&#xD;
  [12]: https://github.com/arnoudbuzing/prototypes&#xD;
  [13]: http://reference.wolfram.com/language/ref/Overlay.html&#xD;
  [14]: http://mapnik.org/&#xD;
  [15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=gg-08.png&amp;amp;userId=22112&#xD;
  [16]: http://mapnik.org/&#xD;
  [17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=gg-10.png&amp;amp;userId=22112&#xD;
  [18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=gg-09.png&amp;amp;userId=22112&#xD;
  [19]: https://github.com/arnoudbuzing/prototypes</description>
    <dc:creator>Arnoud Buzing</dc:creator>
    <dc:date>2018-05-24T19:58:32Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/984239">
    <title>It&amp;#039;s Ridiculously Cold Outside. Let&amp;#039;s Graph It.</title>
    <link>https://community.wolfram.com/groups/-/m/t/984239</link>
    <description>I&amp;#039;m in Chicago and this past week has been brutally cold. I decided it would be fun to see just how bad it is by visualizing this year&amp;#039;s temperatures against previous years. The results were good and I thought I&amp;#039;d share. &#xD;
&#xD;
![enter image description here][1]&#xD;
&#xD;
First, we get about 60 years of temperature data. This data has some Missing values in it, but our code will handle this problem automagically:&#xD;
&#xD;
    temps = WeatherData[&amp;#034;Chicago&amp;#034; , &amp;#034;MeanTemperature&amp;#034;, {{1950, 1, 1}, Today, &amp;#034;Day&amp;#034;},  &amp;#034;DateNonMetricValue&amp;#034;]&#xD;
&#xD;
I&amp;#039;ve also asked for the values in Fahrenheit instead of the default Celsius. You can just remove &amp;#034;DateNonMetricValue&amp;#034; if you don&amp;#039;t like that. &#xD;
&#xD;
Making this data into a Dataset will help us manipulate it efficiently. We want to group all the temperatures by Month and Day so we can calculate things like the Mean and Max temperatures for any given day of the year. &#xD;
&#xD;
    tempDataset = Dataset@Normal@temps;&#xD;
    &#xD;
    groupedByMonthDay = &#xD;
        tempDataset[GroupBy[First /* (DateValue[#, { &amp;#034;Month&amp;#034;, &amp;#034;Day&amp;#034;}] &amp;amp;)]]&#xD;
&#xD;
![enter image description here][2]&#xD;
&#xD;
Let&amp;#039;s select just the list of temperatures for each Day:&#xD;
&#xD;
    groupedByMonthDayTemps = groupedByMonthDay[All, All, 2]&#xD;
&#xD;
Now we can look at the Mean temperature for each day by evaluating:&#xD;
&#xD;
    groupedByMonthDayTemps[All, Mean]&#xD;
&#xD;
We are going to reduce the list of temperatures for each day in several different ways, so let&amp;#039;s make a function out of this. It will take a reducing function like Mean and apply it to each day&amp;#039;s list of temperatures. &#xD;
&#xD;
    tempsReducedBy[transform_] := groupedByMonthDayTemps[All, transform];&#xD;
&#xD;
I want to look at the median temperatures as well as the minimum, maximum, and upper and lower quartiles. The functions that do this are:&#xD;
&#xD;
    reductionFunctions =  {Min, Max, Median, Quantile[#, 3/4] &amp;amp;, Quantile[#, 1/4] &amp;amp;};&#xD;
&#xD;
We can get the daily minimum temperatures and other values using them:&#xD;
&#xD;
     {minTemps, maxTemps, medianTemps, upperQTemps, lowerQTemps} = &#xD;
          tempsReducedBy /@ reductionFunctions&#xD;
    &#xD;
Unfortunately, when we plot these, we see they are kinda bumpy:&#xD;
&#xD;
    ListPlot@minTemps[Values]&#xD;
&#xD;
![enter image description here][3]&#xD;
&#xD;
I just want a general curve showing what the minimum temperatures are like, so we need to smooth these out. There are a lot of ways to do this. I at first chose to use the Fourier transform myself, but then realized there&amp;#039;s a built-in function for low-pass filtering: &#xD;
&#xD;
    filterTemps[temps_] := &#xD;
     LowpassFilter[Normal@temps[Values], 1/20, 80, HammingWindow, &#xD;
      Padding -&amp;gt; &amp;#034;Periodic&amp;#034;]&#xD;
&#xD;
The values here are admittedly just magic numbers I&amp;#039;ve picked for aesthetic reasons. Let&amp;#039;s add this to our code:&#xD;
&#xD;
    {minTemps, maxTemps, medianTemps, upperQTemps, lowerQTemps} = &#xD;
      Map[filterTemps@*tempsReducedBy, reductionFunctions];&#xD;
&#xD;
If we plot these together, we get a good background showing how much the temperature varies throughout the year.&#xD;
The code is a bit long because I got picky about how I wanted it to look:&#xD;
&#xD;
    background = DateListPlot[{&#xD;
       Callout[minTemps, &amp;#034;Likely Record Low&amp;#034;],&#xD;
       Callout[maxTemps, &amp;#034;Likely Record High&amp;#034;],&#xD;
       Callout[medianTemps, &amp;#034;Median&amp;#034;],&#xD;
       Callout[upperQTemps, &amp;#034;Upper Quartile&amp;#034;],&#xD;
       Callout[lowerQTemps, &amp;#034;Lower Quartile&amp;#034;]}, {{2016, 1, 1}, Automatic,  &amp;#034;Day&amp;#034;}, &#xD;
        Joined -&amp;gt; True, PlotStyle -&amp;gt; Gray, FrameLabel -&amp;gt; Automatic,&#xD;
       FrameTicks -&amp;gt; {Table[{2016, i, 1}, {i, 12}], Automatic}, &#xD;
      Axes -&amp;gt; False]&#xD;
&#xD;
![enter image description here][4]&#xD;
&#xD;
Let&amp;#039;s get this year&amp;#039;s temperatures and plot them:&#xD;
&#xD;
    currentTemps = &#xD;
        WeatherData[&amp;#034;Chicago&amp;#034;, &amp;#034;MeanTemperature&amp;#034;, {{2016, 1, 1}, Today, &amp;#034;Day&amp;#034;}, &amp;#034;DateNonMetricValue&amp;#034;];&#xD;
    &#xD;
    thisYear = &#xD;
     DateListPlot[currentTemps, Joined -&amp;gt; True, PlotStyle -&amp;gt; Darker@Red]&#xD;
&#xD;
![enter image description here][5]&#xD;
&#xD;
And let&amp;#039;s lay it on top of our background:&#xD;
&#xD;
    Show[background, thisYear, &#xD;
    PlotLabel -&amp;gt; Style[&amp;#034;Daily Mean Temperature - Chicago - 2016&amp;#034;, Large, FontFamily -&amp;gt; &amp;#034;Times&amp;#034;]]&#xD;
&#xD;
![enter image description here][6]&#xD;
&#xD;
So, it&amp;#039;s certifiably ridiculously cold in Chicago. And yesterday was almost record breaking.&#xD;
&#xD;
Here are some graphs for other cities that like to complain about the cold. As we can conclusively see, my friends from other cities should stop complaining about how cold it is outside. Chicago has it far worse right now:&#xD;
&#xD;
![enter image description here][7]&#xD;
&#xD;
&#xD;
![enter image description here][8]&#xD;
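If you want to reproduce this poster for your own city, the steps above can be bundled into a single function. This is just a rough sketch (the name `temperaturePoster` is mine, not part of the original code), so adjust the magic numbers to taste:&#xD;
&#xD;
    (* same pipeline as above: download, group by month/day, reduce, smooth, overlay *)&#xD;
    temperaturePoster[city_String] := Module[{temps, grouped, stats, bg, current},&#xD;
      temps = WeatherData[city, &amp;#034;MeanTemperature&amp;#034;, {{1950, 1, 1}, Today, &amp;#034;Day&amp;#034;}, &amp;#034;DateNonMetricValue&amp;#034;];&#xD;
      grouped = Dataset[Normal@temps][GroupBy[First /* (DateValue[#, {&amp;#034;Month&amp;#034;, &amp;#034;Day&amp;#034;}] &amp;amp;)]][All, All, 2];&#xD;
      stats = LowpassFilter[Normal@grouped[All, #][Values], 1/20, 80, HammingWindow, Padding -&amp;gt; &amp;#034;Periodic&amp;#034;] &amp;amp; /@&#xD;
        {Min, Max, Median, Quantile[#, 3/4] &amp;amp;, Quantile[#, 1/4] &amp;amp;};&#xD;
      bg = DateListPlot[stats, {{2016, 1, 1}, Automatic, &amp;#034;Day&amp;#034;}, Joined -&amp;gt; True, PlotStyle -&amp;gt; Gray];&#xD;
      current = WeatherData[city, &amp;#034;MeanTemperature&amp;#034;, {{2016, 1, 1}, Today, &amp;#034;Day&amp;#034;}, &amp;#034;DateNonMetricValue&amp;#034;];&#xD;
      Show[bg, DateListPlot[current, Joined -&amp;gt; True, PlotStyle -&amp;gt; Darker@Red], PlotLabel -&amp;gt; city]]&#xD;
&#xD;
    temperaturePoster[&amp;#034;Minneapolis&amp;#034;]&#xD;
&#xD;
Note that this re-downloads the full weather history on every call, so you may want to cache the intermediate data.&#xD;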
&#xD;
&#xD;
  [1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Chicago_during_early_2014_North_American_Cold_Wave.jpg&amp;amp;userId=20642&#xD;
  [2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-12-20at10.29.52PM.png&amp;amp;userId=20642&#xD;
  [3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mintemps.png&amp;amp;userId=20642&#xD;
  [4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=8458Background.png&amp;amp;userId=20642&#xD;
  [5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=posterworkingcopy.png&amp;amp;userId=20642&#xD;
  [6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Chicago.png&amp;amp;userId=20642&#xD;
  [7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=newyork.png&amp;amp;userId=20642&#xD;
  [8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=seattle.png&amp;amp;userId=20642</description>
    <dc:creator>Sean Clarke</dc:creator>
    <dc:date>2016-12-21T04:25:57Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/834097">
    <title>Modeling wind speed distributions with machine learning</title>
    <link>https://community.wolfram.com/groups/-/m/t/834097</link>
    <description>**What is distribution of wind speed magnitudes at a given geographic location?** You could approach this manually like the authors of this paper:&#xD;
&#xD;
[Mixture probability distribution functions to model wind speed distributions][1] &#xD;
&#xD;
whose main conclusion was: &#xD;
&#xD;
&amp;gt; Results show that mixture probability functions are better alternatives to conventional Weibull, two-component mixture Weibull, gamma, and lognormal PDFs to describe wind speed characteristics.&#xD;
&#xD;
Or you can use machine learning and the new function FindDistribution. Let&amp;#039;s first get a sample of data, say for Boston for the most recent 5 years:&#xD;
&#xD;
    windBOSTON = WeatherData[&amp;#034;Boston&amp;#034;, &amp;#034;WindSpeed&amp;#034;, {{2010}, {2015}, &amp;#034;Day&amp;#034;}];    &#xD;
    DateListPlot[windBOSTON]&#xD;
&#xD;
![enter image description here][2]&#xD;
&#xD;
Now get the magnitudes and apply FindDistribution:&#xD;
&#xD;
    mags = QuantityMagnitude[windBOSTON[&amp;#034;Values&amp;#034;]];&#xD;
    dis = FindDistribution[mags]&#xD;
&#xD;
which gives, guess what, a MixtureDistribution:&#xD;
&#xD;
    MixtureDistribution[{0.711353, 0.288647}, &#xD;
    {NormalDistribution[12.8117, 4.74919], LogNormalDistribution[3.06178, 0.308954]}]&#xD;
&#xD;
Visualizing the model against the experimental data looks neat!&#xD;
&#xD;
    Show[&#xD;
     Histogram[mags, Automatic, &amp;#034;ProbabilityDensity&amp;#034;, PlotTheme -&amp;gt; &amp;#034;Detailed&amp;#034;],&#xD;
     Plot[PDF[dis, x], {x, 0, 50}, PlotRange -&amp;gt; All]]&#xD;
&#xD;
![enter image description here][3]&#xD;
&#xD;
Try playing with other locations and see what distributions you get. We will not always get a MixtureDistribution; wind data at different locations can be quite different. &#xD;
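For example, here is the same pipeline for another city (the location is just an illustration; the fitted distribution may well belong to a different family):&#xD;
&#xD;
    windSEATTLE = WeatherData[&amp;#034;Seattle&amp;#034;, &amp;#034;WindSpeed&amp;#034;, {{2010}, {2015}, &amp;#034;Day&amp;#034;}];&#xD;
    FindDistribution[QuantityMagnitude[windSEATTLE[&amp;#034;Values&amp;#034;]]]&#xD;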
&#xD;
  [1]: http://link.springer.com/article/10.1186/2251-6832-3-27&#xD;
  [2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sdfasq54yhtgfd.png&amp;amp;userId=11733&#xD;
  [3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sdfasfq3ergfadv.png&amp;amp;userId=11733</description>
    <dc:creator>Vitaliy Kaurov</dc:creator>
    <dc:date>2016-04-04T20:48:16Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/293403">
    <title>Issues with AstronomicalData and SunPosition</title>
    <link>https://community.wolfram.com/groups/-/m/t/293403</link>
    <description>TL;DR
---
I have three issues with getting sun positions in Mathematica V10:

 - It is extremely slow compared to V9
 - It doesn&amp;#039;t *seem* to yield correct results
 - a) It needs an Internet connection, which b) is not assured to always return results and c) if not used optimally can easily consume your monthly allowance of W|A calls

Long story
===
With the advent of V10 `AstronomicalData` has been deprecated, as shown on its documentation page:
![enter image description here][1].

I&amp;#039;m not an astronomer, so my main use of this function has been restricted to its capability to get the sun position using functions calls like this:

    {
      AstronomicalData[&amp;#034;Sun&amp;#034;, {&amp;#034;Azimuth&amp;#034;, {2013, 3, 1, #, 0, 0}, {52.37`, 4.89`}}, TimeZone -&amp;gt; 1], 
      AstronomicalData[&amp;#034;Sun&amp;#034;, {&amp;#034;Altitude&amp;#034;, {2013, 3, 1, #, 0, 0}, {52.37`, 4.89`}},  TimeZone -&amp;gt; 1]
    } &amp;amp; /@ Range[0, 23]

&amp;gt; {{341.47732, -43.93930}, {2.33417, -45.21747}, {22.94232, -43.19133},
&amp;gt; {41.52167, -38.28400}, {57.55208, -31.30276}, {71.47253, -23.03159},
&amp;gt; {84.02194, -14.08294}, {95.91940, -4.92723}, {107.80166, 4.03418}, {120.23770, 12.40167}, {133.72303, 19.72563}, {148.59433, 
&amp;gt;   25.48377}, {164.84179, 29.12209}, {181.93103, 30.19428}, {198.91284,
&amp;gt;    28.55117}, {214.89602, 24.41962}, {229.46400, 
&amp;gt;   18.28613}, {242.70064, 10.70703}, {254.98998, 
&amp;gt;   2.19073}, {266.84760, -6.82566}, {278.85594, -15.94505},  {291.66723, -24.75464}, {306.00911, -32.75641}, {322.57956, 
&amp;gt; -39.29877}}

So, this gets me the sun positions in steps of an hour during a particular day in Amsterdam (TZ 1).

The same call still works in V10, though it now returns numbers with units; degrees in this case. On its first call, it reads some paclet information from a Wolfram server, but on any successive call no Internet connection is needed. I will be going into detail about timing further on, but I&amp;#039;ll say here that the V10 function takes about three times longer than its V9 namesake. I blame the addition of units for that.
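If you need the plain V9-style numbers back for compatibility with older code, the units can be stripped again. A small sketch (shown here for a single hour rather than the full day):

    QuantityMagnitude@
      AstronomicalData[&amp;#034;Sun&amp;#034;, {&amp;#034;Azimuth&amp;#034;, {2013, 3, 1, 12, 0, 0}, {52.37`, 4.89`}}, TimeZone -&amp;gt; 1]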

With `AstronomicalData` apparently deprecated we are supposed to use its successors. In this case I need `SunPosition`. A direct translation of the above would be:

    SunPosition[GeoPosition[{52.37`, 4.89`}], DateObject[{2013, 3, 1, #, 0, 0}, TimeZone -&amp;gt; 1]] &amp;amp; /@ Range[0, 23]

&amp;gt; {{95.7, -5.1}, {107.6, 3.9}, {120.0, 12.3}, {133.5, 19.6}, {148.3, 25.4}, {164.5, 29.1}, {181.6, 30.2}, {198.6, 28.6}, {214.6, 24.5}, {229.2, 18.4}, {242.5, 10.9}, {254.8, 2.4}, {266.6, -6.7}, {278.6, -15.8}, {291.4, -24.6}, {305.7, 
-32.6}, {322.2, -39.2}, {341.3, -43.5}, {2.0, -44.8}, {22.5, -42.9}, 
{41.1, -38.0}, {57.1, -31.1}, {71.0, -22.9}, {83.6, -13.9}}

As with the new `AstronomicalData`, the output is actually in degrees, which I have removed in the above output for the sake of clarity. There are a few things to note:

 - `SunPosition` uses position and date objects, the latter being new in V10
 - `SunPosition` does not have a `TimeZone` option, but you can set it in `DateObject`
 - `SunPosition` can use the old lat/long list position indication. It also can use a date list to enter the date instead of a `DateObject`. In the latter case you are out of options with respect to time zones and you have to add the appropriate amount of time offset
 - It is extremely slow, and it may even time-out: 

![enter image description here][2]

 - Last but not least: the results seem to be plain wrong. It suggests that sunrise is somewhat before 1 am, which is -of course- incorrect. I assume that this has something to do with a `$GeoLocation` setting for the observer of the sun positions, but I haven&amp;#039;t managed to sort out what I am supposed to enter to get the correct sun positions for the location provided in the same call.

As to timing: I noticed very inconsistent timings for `SunPosition` compared to `AstronomicalData`, so I used the following code to collect a somewhat more statistically sound sample:

    SetAttributes[timingTest, HoldFirst];
    timingTest[code_, repeats_Integer] :=
       Table[
          ClearSystemCache[];
          code // AbsoluteTiming // First,
          {repeats}
        ]

Using this, I collected timing of 20 calls to the following code snippets:

 - `AstronomicalData` V9 and V10: As above
 - `SunPosition`: As above
 - `SunPosition` without `GeoPosition`, just the lat/long list.
 - `SunPosition`  without `GeoPosition`, and also without the `DateObject` date, just a classical date list (with the hour set to +1 to accommodate TZ 1)
 - `SunPosition` V10 without `GeoPosition` and with the `Map` (`/@`) gone and replaced by a `DateRange` inside the call.

In the last case, the returned value is a `TimeSeries` object from which I extract the positions using the `&amp;#034;Paths&amp;#034;` method:

     SunPosition[{52.37`, 4.89`}, DateRange[{2013, 3, 1, 1, 0, 0}, {2013, 3, 1, 24, 0, 0}, &amp;#034;Hour&amp;#034;]][&amp;#034;Paths&amp;#034;][[1, All, 2]]

The results were as follows:

![enter image description here][3]

Clearly, the `SunPosition` results are very disappointing. Getting the sun positions with `SunPosition` is almost 40 times slower than using the old V9 method (which, I should add, wasn&amp;#039;t particularly quick either. I have an implementation in Mathematica code which is faster). The V10 implementation of `AstronomicalData` is also more than three times slower than the V9 version. The `DateRange` version of the call saves a lot of communication overhead. Still, it is almost *five times slower* than in V9.

The cause of all this slowness is that `SunPosition` simply does a call to Wolfram|Alpha. Sniffing the communication one sees the following string passed to the server:

    &amp;#034;1:eJxTTMoPSuNgYGAoZgESPpnFJcHcQEZwaV5AfnFmSWZ+XhoTsmxR/6GvGjH9wg4Qhr6XQxobsnzmXXYGhkxmIC+TEUSIgwggZihigIJgoAIGj/yizKr8PJigA5yBZtubwB1yrdxeDkXVIuvcH1aJOBRzAqUcS0vycxNLMpMBSAArww==&amp;#034;

which can be turned into readable form using `Uncompress`:

    {&amp;#034;SunPosition&amp;#034;, {4.89, 52.37}, {2013, 3, 1, 23, 0, 0.}, &amp;#034;Horizon&amp;#034;, 2., 2., {52.09, 5.12}, Automatic}

Here, we can recognize the lat/long of the position I used (but with lat/long reversed - Is this somehow significant?). At the end is my own `$GeoLocation`, but I don&amp;#039;t believe it is used at all (and it shouldn&amp;#039;t: I&amp;#039;m asking for the sun position over Amsterdam, not where I live). Changing it with `Block` I get the same results:

    Block[{$GeoLocation = GeoPosition[{52.37`, 40.89`}]}, SunPosition[{52.37`, 4.89`}, {2013, 3, 1, 1, 0, 0}]]

Apart from the slowness, there&amp;#039;s the issue of the necessary Internet connectivity (Want to give a demonstration and you don&amp;#039;t have Internet? Sorry, you&amp;#039;re out of luck). 

And what of the use of W|A calls? Each of the `SunPosition` tests (except the last one) took me 20 * 24 = 480 calls. So this part of my testing alone already took 1440 calls, and one should be reminded that a typical Home Use license allows for only 3,000 calls per month. Things like this can go pretty fast. In fact, I once wrote an application that calculates the impact of building changes on shadows around your house throughout the year. It does on the order of 17,000 `AstronomicalData` calls. I couldn&amp;#039;t implement that naively using `SunPosition` and have it actually work. Clearly, one should now use the `DateRange` version of the call as much as possible.

----------

To wrap up: I have one real question, i.e., how to get `SunPosition` to return the same values as `AstronomicalData`, and a request to the WRI team: please put `SunPosition` in the kernel and don&amp;#039;t use W|A calls, because the situation as it is now is rather annoying and IMHO a real step backwards.

  [1]: /c/portal/getImageAttachment?filename=AstronomicalData.png&amp;amp;userId=43903
  [2]: /c/portal/getImageAttachment?filename=timeout.png&amp;amp;userId=43903
  [3]: /c/portal/getImageAttachment?filename=results.png&amp;amp;userId=43903</description>
    <dc:creator>Sjoerd de Vries</dc:creator>
    <dc:date>2014-07-13T16:39:48Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2740397">
    <title>What is the winter solstice?</title>
    <link>https://community.wolfram.com/groups/-/m/t/2740397</link>
    <description>&amp;amp;[Wolfram Notebook][1]&#xD;
&#xD;
&#xD;
  [1]: https://www.wolframcloud.com/obj/2d627ca1-036a-4d6e-afaa-bd73bd641674</description>
    <dc:creator>Jose Martin-Garcia</dc:creator>
    <dc:date>2022-12-21T15:37:36Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/325930">
    <title>Bárðarbunga earthquake swarm</title>
    <link>https://community.wolfram.com/groups/-/m/t/325930</link>
    <description>Data credit: [Icelandic Met Office][1]&#xD;
&#xD;
[Animated results][2] (using techniques shown below) and [how to generate them][3].&#xD;
&#xD;
In 2010 the volcano [Eyjafjallajökull][4] erupted and shut down (among other things) transatlantic air travel for about six days. This time it is the volcano [Bárðarbunga][5] which appears to be ready to erupt. The Icelandic API web site [http://apis.is][6] is providing a useful service, by tracking all measured earthquakes and formatting them as JSON output, which is easily imported into the Wolfram Language. In this post I will make use of this data to visualize these earthquakes.&#xD;
&#xD;
The first step is to import the dataset and to transform it into a Wolfram Language data structure aptly named [Dataset][7]. &#xD;
&#xD;
    dataset = Dataset[Map[Association, &amp;#034;results&amp;#034; /. Import[&amp;#034;http://apis.is/earthquake/is&amp;#034;, &amp;#034;JSON&amp;#034;]]];&#xD;
&#xD;
We can inspect the content of this earthquake data by simply evaluating the *dataset* symbol:&#xD;
&#xD;
    dataset&#xD;
&#xD;
![dataset][8]&#xD;
&#xD;
We can also sort the data on any column using the SortBy function; here I sort on the *depth* column, using *-depth* to order from the deepest earthquake to the shallowest:&#xD;
&#xD;
    dataset[SortBy[-#depth &amp;amp;]]&#xD;
&#xD;
![dataset sorted][9]&#xD;
&#xD;
Now, let&amp;#039;s make a geographical map of the area around Bárðarbunga. Here, we can use the natural language input which can be accessed by pressing the Ctrl key and the = key simultaneously. This brings up the natural language input field:&#xD;
&#xD;
![natural language input][10]&#xD;
&#xD;
Now type in that field and hit the Enter key (followed by Shift-Enter to evaluate the input cell):&#xD;
&#xD;
![natural language result][11]&#xD;
&#xD;
We can do this anywhere in the input, so now we can create a map of the area near Bárðarbunga. We use the GeoGraphics function and place a GeoMarker at the center of the volcano, using an iconic image for a volcano. We set the GeoRange to 100 km to see the surrounding area, including a piece of Iceland&amp;#039;s coastline. For the map background we use the contour map styling:&#xD;
&#xD;
![background map][12]&#xD;
&#xD;
To get all the earthquake positions as little round disks, we extract the latitude and longitude from the dataset as values and then wrap them in GeoDisk with a radius of 1km:&#xD;
&#xD;
    earthquakes = Normal[ dataset[ All, {&amp;#034;latitude&amp;#034;, &amp;#034;longitude&amp;#034;} /* Values /* (GeoDisk[#, Quantity[1, &amp;#034;Kilometers&amp;#034;]] &amp;amp;)]];&#xD;
&#xD;
And then plot them as red disks on the previous map:&#xD;
&#xD;
![map][13]&#xD;
&#xD;
To make a slightly more refined image we can composite a relief map with a contour map and overlay the earthquake data (here *volcanoimage* is the image used above to indicate the volcano location):&#xD;
&#xD;
    reliefmap = GeoGraphics[&#xD;
    Entity[&amp;#034;Volcano&amp;#034;, &amp;#034;Bardarbunga&amp;#034;], &#xD;
      GeoRange -&amp;gt; Quantity[100, &amp;#034;Kilometers&amp;#034;], &#xD;
      GeoBackground -&amp;gt; GeoStyling[&amp;#034;ReliefMap&amp;#034;], ImageSize -&amp;gt; Large];&#xD;
    &#xD;
    contourmap = &#xD;
     GeoGraphics[Entity[&amp;#034;Volcano&amp;#034;, &amp;#034;Bardarbunga&amp;#034;], &#xD;
      GeoRange -&amp;gt; Quantity[100, &amp;#034;Kilometers&amp;#034;], &#xD;
      GeoBackground -&amp;gt; GeoStyling[&amp;#034;ContourMap&amp;#034;], ImageSize -&amp;gt; Large];&#xD;
    &#xD;
    overlay = &#xD;
     GeoGraphics[{{Red, earthquakes}, &#xD;
       GeoMarker[Entity[&amp;#034;Volcano&amp;#034;, &amp;#034;Bardarbunga&amp;#034;], volcanoimage]}, &#xD;
      GeoRange -&amp;gt; (GeoRange /. Options[contourmap, GeoRange]), &#xD;
      GeoBackground -&amp;gt; None, &#xD;
         ImageSize -&amp;gt; Large];&#xD;
    &#xD;
    Overlay[{ImageCompose[reliefmap, {contourmap, .5}], overlay}]&#xD;
&#xD;
![final map][14]&#xD;
&#xD;
&#xD;
  [1]: http://en.vedur.is/&#xD;
  [2]: https://vimeo.com/104637028&#xD;
  [3]: http://community.wolfram.com/groups/-/m/t/330289&#xD;
  [4]: http://en.wikipedia.org/wiki/Eyjafjallaj%C3%B6kull&#xD;
  [5]: http://en.wikipedia.org/wiki/B%C3%A1r%C3%B0arbunga&#xD;
  [6]: http://apis.is&#xD;
  [7]: http://reference.wolfram.com/language/ref/Dataset.html&#xD;
  [8]: /c/portal/getImageAttachment?filename=one.PNG&amp;amp;userId=22112&#xD;
  [9]: /c/portal/getImageAttachment?filename=two.PNG&amp;amp;userId=22112&#xD;
  [10]: /c/portal/getImageAttachment?filename=bbd3.PNG&amp;amp;userId=22112&#xD;
  [11]: /c/portal/getImageAttachment?filename=bbd4.PNG&amp;amp;userId=22112&#xD;
  [12]: /c/portal/getImageAttachment?filename=bbd8.PNG&amp;amp;userId=22112&#xD;
  [13]: /c/portal/getImageAttachment?filename=bbd7.PNG&amp;amp;userId=22112&#xD;
  [14]: /c/portal/getImageAttachment?filename=bbd9.PNG&amp;amp;userId=22112</description>
    <dc:creator>Arnoud Buzing</dc:creator>
    <dc:date>2014-08-22T13:25:02Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/471232">
    <title>Solar eclipses on other planets</title>
    <link>https://community.wolfram.com/groups/-/m/t/471232</link>
    <description>&amp;amp;[Wolfram Notebook][1]&#xD;
&#xD;
&#xD;
  [1]: https://www.wolframcloud.com/obj/dd8fb9a3-d593-4377-863e-8f4b75252b0c</description>
    <dc:creator>Jeffrey Bryant</dc:creator>
    <dc:date>2015-04-01T00:53:17Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2489121">
    <title>Circular sunset/sunrise calendar</title>
    <link>https://community.wolfram.com/groups/-/m/t/2489121</link>
    <description>![enter image description here][2]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][3]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3071Untitled.png&amp;amp;userId=73716&#xD;
  [2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3071Untitled.png&amp;amp;userId=73716&#xD;
  [3]: https://www.wolframcloud.com/obj/026709ec-37ec-4e2f-93f7-ec74bc6cc84d</description>
    <dc:creator>Sander Huisman</dc:creator>
    <dc:date>2022-03-12T19:29:52Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1732295">
    <title>[WSC19] Fitting the World: Physical Scale from Satellite Images</title>
    <link>https://community.wolfram.com/groups/-/m/t/1732295</link>
    <description>![Cover image][1]&#xD;
&#xD;
## What are we trying to do?&#xD;
&#xD;
The goal of this project is to produce an algorithm which can find the physical scale of any satellite image: the relationship between the content of the image, and the area it covers in kilometers.&#xD;
&#xD;
![Examples of training data][2]&#xD;
&#xD;
This is a very challenging problem because of the characteristics of the world&amp;#039;s terrain. At different zoom levels, it&amp;#039;s highly self-similar, like a fractal. We need to find a method to extract useful information from these images, while also ignoring the parts which are repeated across zoom levels.&#xD;
&#xD;
There are several ways to approach the problem. Some of the more manual methods involve trying to label image parts which only show up at certain scales. For instance, you could rely on the fact that all rooftops are generally roughly the same size, all roads have similar widths, and so on.&#xD;
&#xD;
![Examples of man-made structure][3]&#xD;
&#xD;
Those approaches all need lots of manual work, though. We attempted to solve the problem automatically, exploring ideas that might let us predict map zoom without relying on hard-coded rules.&#xD;
&#xD;
Although we explored several approaches to this problem, the most successful solution used feature extraction to convert each image to a relatively small vector, then trained a small feed-forward neural network to predict zoom from each set of features. Although the self-similarity of each image makes more traditional CV approaches very difficult, a neural network can learn the right features to approximate the zoom very well.&#xD;
&#xD;
## Methodology&#xD;
&#xD;
We wound up gathering several datasets of satellite images over the course of the project. The most useful one was a dataset of three thousand images of random locations in the continental United States, all taken at different zoom levels:&#xD;
&#xD;
```&#xD;
(* define a function to download images of a specific location and zoom *)&#xD;
satelliteImage[location_, size_?NumericQ] :=&#xD;
	GeoImage[&#xD;
		GeoDisk[location, Quantity[size/2, &amp;#034;km&amp;#034;]],&#xD;
		GeoServer -&amp;gt; &amp;#034;DigitalGlobe&amp;#034;,&#xD;
		RasterSize -&amp;gt; 400&#xD;
	]&#xD;
&#xD;
sampleCount = 3000;&#xD;
&#xD;
(* Get a list of 3000 random locations in the continental US *)&#xD;
randomLocations = &#xD;
  RandomGeoPosition[Entity[&amp;#034;Country&amp;#034;, &amp;#034;UnitedStates&amp;#034;], &#xD;
    sampleCount] /. GeoPosition[locs_List] :&amp;gt; GeoPosition /@ locs;&#xD;
&#xD;
(* Get a list of 3000 random zoom levels, from 1 to 200 km *)&#xD;
randomZooms = Table[RandomReal[{1, 200}], sampleCount];&#xD;
&#xD;
(* create a dataset of image specifications *)&#xD;
samples = MapThread[&#xD;
	&amp;lt;|&amp;#034;Location&amp;#034; -&amp;gt; #1, &amp;#034;Zoom&amp;#034; -&amp;gt; #2|&amp;gt; &amp;amp;,&#xD;
	{randomLocations, randomZooms}&#xD;
] // Dataset;&#xD;
&#xD;
(* to see the precise scripts used to download each image, download the attached notebook--they have dynamic progress indicators and checkpoint the data to disk, so they&amp;#039;re a little complicated *)&#xD;
```&#xD;
&#xD;
The majority of our time on this project was spent trying several different methods to extract information from the images.&#xD;
&#xD;
One attempt was visual pre-processing: running segmentation or binarization on each image before training a model or extracting some metric. Every time we did this, although the images looked really neat, the accuracy of our predictions was far worse. &#xD;
&#xD;
![Examples of image preprocessing][4]&#xD;
&#xD;
All of those preprocessing methods reduce the dimensionality of the images, but it turns out that they focus on the wrong features in the process. Our network was supposed to pick up on the large patterns: rivers, landmasses, and so on. When we segmented images, the network was forced to focus on the small details we highlighted. All in all, preprocessing using traditional CV was a failure.&#xD;
&#xD;
We also tried training a convolutional neural network from scratch on the images, which also failed. The network would, at first, immediately overfit the training set. When we reduced its size, it would fail to converge at all. Despite hours of tweaking, this method did not work either.&#xD;
&#xD;
The next idea we had was just to throw the entire dataset into `Predict[]` and see what would happen. This worked shockingly well, considering how easy it was:&#xD;
&#xD;
![Predict results][5]&#xD;
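For reference, the experiment itself was essentially a one-liner. A sketch (here `images` and `zooms` stand for parallel lists of the downloaded images and their zoom levels, which the attached notebook builds from the `samples` dataset):&#xD;
&#xD;
```&#xD;
(* train a predictor mapping each satellite image to its physical scale *)&#xD;
zoomPredictor = Predict[Thread[images -&amp;gt; zooms]];&#xD;
&#xD;
(* predicted physical scale, in km, of an unseen satellite image *)&#xD;
zoomPredictor[testImage]&#xD;
```&#xD;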
&#xD;
So the next question we had was: how? What was `Predict[]` doing behind the scenes? Doing some research, we discovered a couple of starting points for our own neural network model:&#xD;
&#xD;
 - `Predict[]` preprocesses images using WL&amp;#039;s feature extraction functionality. This is based on the first few layers of the trained `ImageIdentify` convnet, combined with an autoencoder. &#xD;
 - `Predict[]` generally trains small feed-forward networks, without convolutional layers. &#xD;
&#xD;
Our general approach was to replicate this setup, with several tweaks and optimizations for better performance.&#xD;
&#xD;
Our first large improvement was a pre-processing step: image augmentation. We applied several different constant crops, translations, rotations, and reflections to each image, in order to increase the number of examples we had by a factor of 10. We then performed feature extraction on the augmented data.&#xD;
&#xD;
```&#xD;
(* use the built-in augmentation functionality *)&#xD;
(* note: this can&amp;#039;t go in the network itself because we need to extract features with FeatureExtraction[] after this step *)&#xD;
augmentor = ImageAugmentationLayer[&#xD;
	(* Final image dimensions--from cropping *)&#xD;
	{200, 200}, &#xD;
&#xD;
	(* 50% chance of either reflection *)&#xD;
	&amp;#034;ReflectionProbabilities&amp;#034; -&amp;gt; {0.5, 0.5}, &#xD;
&#xD;
	&amp;#034;Input&amp;#034; -&amp;gt; NetEncoder[{&amp;#034;Image&amp;#034;, {400, 400}}], (* 400x400 input images *)&#xD;
	&amp;#034;Output&amp;#034; -&amp;gt; NetDecoder[&amp;#034;Image&amp;#034;] (* output an image for the feature extractor *)&#xD;
]&#xD;
&#xD;
(* each original image gets 10 augmented images generated from it *)&#xD;
augmentationMultiplier = 10;&#xD;
&#xD;
(* actually augment the image set *)&#xD;
augmentedImages = Join@@Table[&#xD;
	imageSet[ All, &amp;lt;|&#xD;
		#, &#xD;
		&amp;#034;Image&amp;#034; -&amp;gt; augmentor[#Image, NetEvaluationMode -&amp;gt; &amp;#034;Train&amp;#034;]&#xD;
	|&amp;gt; &amp;amp;],&#xD;
	augmentationMultiplier&#xD;
] // RandomSample;&#xD;
```&#xD;
&#xD;
&#xD;
Our next step was to replicate the `Predict[]` setup: use `FeatureExtraction[]` to reduce the dimensionality of the images, and train a small, mostly-linear neural network on the result.&#xD;
&#xD;
```&#xD;
(* create a feature extractor trained on the first 5000 augmented images *)&#xD;
fExtractor = FeatureExtraction[augmentedImages[;; 5000, &amp;#034;Image&amp;#034;]];&#xD;
&#xD;
(* extract features from every augmented image *)&#xD;
features = augmentedImages[All, &amp;lt;|#, &amp;#034;Features&amp;#034; -&amp;gt; fExtractor[#Image]|&amp;gt; &amp;amp;];&#xD;
```&#xD;
&#xD;
When you plot the feature vectors of a few images, the variance between them is clearly visible:&#xD;
&#xD;
![Feature--image matrix][6]&#xD;
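&#xD;
A plot like this is easy to make with `MatrixPlot[]`, treating each feature vector as one row of a matrix:&#xD;
&#xD;
```&#xD;
(* stack the first few feature vectors into a matrix and plot it *)&#xD;
MatrixPlot[Normal @ features[;; 10, &amp;#034;Features&amp;#034;]]&#xD;
```&#xD;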
&#xD;
Although the network structure was fairly simple, we automated the search for good hyperparameters. We trained hundreds of slightly different networks and evaluated their statistical performance. Out of many permutations of layer count, layer size, activation function, training speed, and so on, we picked the ones that worked best. The final network design we settled on was this:&#xD;
&#xD;
```&#xD;
geoNet = NetChain[ &#xD;
	{&#xD;
		200, Ramp, DropoutLayer[0.3],&#xD;
		100, Ramp,&#xD;
		20,&#xD;
		1&#xD;
	},&#xD;
	&amp;#034;Input&amp;#034; -&amp;gt; {324}, (* our feature vectors are 324-dimensional *)&#xD;
	&amp;#034;Output&amp;#034; -&amp;gt; NetDecoder[&amp;#034;Scalar&amp;#034;] (* decode into a single number: zoom in km *)&#xD;
];&#xD;
geoNet = NetTrain[&#xD;
	geoNet,&#xD;
&#xD;
	(* featureTrain and featureTest both have the shape { {feature, ...} -&amp;gt; zoom, ...} *)&#xD;
	featureTrain,  &#xD;
	ValidationSet -&amp;gt; featureTest,&#xD;
&#xD;
	TargetDevice -&amp;gt; &amp;#034;GPU&amp;#034;,  &#xD;
	MaxTrainingRounds -&amp;gt; 600,&#xD;
&#xD;
	(* bail if loss stops decreasing *) &#xD;
	TrainingStoppingCriterion -&amp;gt; &amp;lt;|&amp;#034;Criterion&amp;#034; -&amp;gt; &amp;#034;Loss&amp;#034;, &amp;#034;Patience&amp;#034; -&amp;gt; 50|&amp;gt;&#xD;
]&#xD;
```&#xD;
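&#xD;
The hyperparameter search that produced this design was, in essence, a loop like the following (a simplified sketch; the real search also varied activation functions, dropout, and training options):&#xD;
&#xD;
```&#xD;
(* sweep over a few layer-size combinations, training a small net for each *)&#xD;
candidates = Flatten @ Table[&#xD;
	NetChain[&#xD;
		{n1, Ramp, DropoutLayer[0.3], n2, Ramp, 1},&#xD;
		&amp;#034;Input&amp;#034; -&amp;gt; {324},&#xD;
		&amp;#034;Output&amp;#034; -&amp;gt; NetDecoder[&amp;#034;Scalar&amp;#034;]&#xD;
	],&#xD;
	{n1, {100, 200, 400}}, {n2, {20, 50, 100}}&#xD;
];&#xD;
&#xD;
(* final validation loss for each candidate *)&#xD;
losses = Table[&#xD;
	Last @ NetTrain[net, featureTrain, &amp;#034;ValidationLossList&amp;#034;,&#xD;
		ValidationSet -&amp;gt; featureTest,&#xD;
		MaxTrainingRounds -&amp;gt; 100&#xD;
	],&#xD;
	{net, candidates}&#xD;
];&#xD;
&#xD;
(* keep the architecture with the lowest validation loss *)&#xD;
bestNet = candidates[[First @ Ordering[losses, 1]]];&#xD;
```&#xD;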
&#xD;
In the end, the general architecture of the most successful model looked like this:&#xD;
&#xD;
    +--------------------+                                                     &#xD;
    |                    |                                                     &#xD;
    |  Original dataset  |                            Generated from GeoImage[]&#xD;
    |                    |                                                     &#xD;
    +--------------------+                                                     &#xD;
      |   |   |   |                                                            &#xD;
      |   |   |   |                                                            &#xD;
      V   V   V   V                                                            &#xD;
    +---------------------------+                                              &#xD;
    | +---------------------------+                                            &#xD;
    | | +---------------------------+                                          &#xD;
    | | |                       | | |                 Generated with           &#xD;
    | | |   Augmented dataset   | | |                 ImageAugmentationLayer[] &#xD;
    | | |                       | | |                                          &#xD;
    +-|-|-----------------------+ | |                                          &#xD;
      +-|-------------------------+ |                                          &#xD;
        +---------------------------+                                          &#xD;
      |   |   |   |                                                            &#xD;
      |   |   |   |                                                            &#xD;
      V   V   V   V                                                            &#xD;
     ----------------------------------------------                            &#xD;
     ----------------------------------------------                            &#xD;
     ------- Extracted feature vectors ------------   using FeatureExtraction[]&#xD;
     ----------------------------------------------                            &#xD;
     ----------------------------------------------                            &#xD;
      |   |   |   |                                                            &#xD;
      |   |   |   |                                                            &#xD;
      V   V   V   V                                                            &#xD;
     #  #  #  #  #  #  #  #  #  #  #  #  #  #  #      Small-ish neural network &#xD;
     |\/|\/|\/|\/|\/|\/|\/|  /  /  /  /  /  /  /                                &#xD;
     |/\|/\|/\|/\|/\|/\|/\|-/--/--/--/--/--/--/                                 &#xD;
     #  #  #  #  #  #  #  #                                                     &#xD;
     |\/|\/|\/|  /  /  /  /                                                     &#xD;
     |/\|/\|/\|-/--/--/--/                                                      &#xD;
     #  #  #  #                                                                 &#xD;
     |-/--/--/                                                                  &#xD;
     |                                                                          &#xD;
     #                                            &amp;lt;-- our final prediction! &#xD;
&#xD;
&#xD;
( https://textik.com/#e650301054ce435f )&#xD;
&#xD;
## Results&#xD;
&#xD;
When we evaluated each image in our test set with this network, we got the following results:&#xD;
&#xD;
![Plot of prediction results][7]&#xD;
&#xD;
This plot shows actual zoom levels on the $x$-axis and estimated zoom on the $y$-axis. A reference line shows what a perfect prediction would look like. Analyzed statistically, the prediction error had a standard deviation of 30.37 km, and the $r^2$ value was 0.732. &#xD;
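&#xD;
Both statistics come straight from the prediction pairs. A sketch, assuming `actual` and `predicted` are lists of true and estimated zoom levels in km:&#xD;
&#xD;
```&#xD;
residuals = predicted - actual;&#xD;
&#xD;
(* standard deviation of the prediction error, in km *)&#xD;
StandardDeviation[residuals]&#xD;
&#xD;
(* coefficient of determination: r^2 = 1 - SS_res/SS_tot *)&#xD;
rSquared = 1 - Total[residuals^2]/Total[(actual - Mean[actual])^2]&#xD;
```&#xD;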
&#xD;
This network clearly &amp;#034;gets the gist&amp;#034; of the data it&amp;#039;s presented. However, these results were not portable to different satellite image datasets. When we evaluated a separate test set, gathered from Wolfram satellite imagery, we got this result:&#xD;
&#xD;
![Plot of prediction results on Wolfram satellite data][8]&#xD;
&#xD;
It is clear that the network is learning something specific to the DigitalGlobe dataset we used to train it. We hesitate to call that overfitting, because the network can extrapolate to locations it has never seen before, but it does rely on the specific look and tone of the DigitalGlobe data. After all, the two image sets look very different:&#xD;
&#xD;
![Satellite image comparison][9]&#xD;
&#xD;
It is possible that the network is simply confused by the new colors, rather than by the structure of the terrain in the Wolfram dataset. Either way, the scope of this result is limited to images similar to the satellite dataset the model was trained on.&#xD;
&#xD;
We also took another dataset of 20,000 mostly-overlapping images of Massachusetts and deliberately exploited overfitting to achieve much more accurate (yet fragile) predictions. By overfitting the terrain, the model could reach an $r^2$ of up to 0.99, but it completely fell apart on any other dataset. In theory, you could take this brute-force approach with the entire planet, overfitting deliberately to learn the terrain, but that was beyond the scope of this project. &#xD;
&#xD;
## Future work&#xD;
&#xD;
One of the larger problems with this research was gathering data. Future attempts would do well to gather a much larger dataset (somewhere around 50,000 images) from several different satellite providers. To stop the network from overfitting on the fine-grained style of the images, we would need providers whose data look significantly different from one another.&#xD;
&#xD;
Another option is to improve the model&amp;#039;s feature extraction step. Right now, the `FeatureExtraction` function uses the first few convolutional layers of the Wolfram `ImageIdentify` classifier as a starting point. By training our own convolutional feature extractor, specific to satellite images, on a much larger dataset, we might get more accurate results.&#xD;
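&#xD;
One way to sketch that idea is to keep the convolutional front of a pretrained network and fine-tune it with a small regression head on satellite images (the model name is real, but the number of layers kept and the head design here are illustrative):&#xD;
&#xD;
```&#xD;
(* keep the early convolutional layers of a pretrained image classifier *)&#xD;
convFront = NetTake[NetModel[&amp;#034;Wolfram ImageIdentify Net V1&amp;#034;], 10];&#xD;
&#xD;
(* attach a small regression head and fine-tune end-to-end on satellite images *)&#xD;
satFeatureNet = NetChain[{&#xD;
	convFront,&#xD;
	AggregationLayer[Mean], (* pool the spatial dimensions away *)&#xD;
	LinearLayer[1]&#xD;
}, &amp;#034;Output&amp;#034; -&amp;gt; NetDecoder[&amp;#034;Scalar&amp;#034;]];&#xD;
```&#xD;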
&#xD;
# Final thoughts&#xD;
&#xD;
This research was successful within a very limited scope. Future attempts at cracking this problem will have to generalize to the entire globe, across several satellite image providers---a problem that will take a lot of time, and access to large computational resources, to solve. &#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=cover-image.png&amp;amp;userId=1619260&#xD;
  [2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=fig_examples.png&amp;amp;userId=1619260&#xD;
  [3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2123fig_structure.png&amp;amp;userId=1619260&#xD;
  [4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=fig_processing.png&amp;amp;userId=1619260&#xD;
  [5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=2381fig_predict.png&amp;amp;userId=1619260&#xD;
  [6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=fig_featurematrix.png&amp;amp;userId=1619260&#xD;
  [7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=fig_resultplot.png&amp;amp;userId=1619260&#xD;
  [8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=fig_resultplotwolfram.png&amp;amp;userId=1619260&#xD;
  [9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=fig_satcomparison.png&amp;amp;userId=1619260</description>
    <dc:creator>William Goodall</dc:creator>
    <dc:date>2019-07-11T23:29:28Z</dc:date>
  </item>
</rdf:RDF>

