<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://community.wolfram.com">
    <title>Community RSS Feed</title>
    <link>https://community.wolfram.com</link>
    <description>RSS Feed for Wolfram Community showing discussions tagged with Numerical Computation that have no replies, sorted by most likes.</description>
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/3208866" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/241732" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1140010" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/931494" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/3379973" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/3215276" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/3094917" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/583974" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1184139" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/3642901" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1351062" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/792601" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/366111" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/3569199" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/3359290" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/3146544" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2925998" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2823264" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1926505" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1668032" />
      </rdf:Seq>
    </items>
  </channel>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/3208866">
    <title>[WSS24] Implementing cosinor rhythmometry in Wolfram Language</title>
    <link>https://community.wolfram.com/groups/-/m/t/3208866</link>
    <description>![enter image description here][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=wss24-cosinor.jpg&amp;amp;userId=596617&#xD;
  [2]: https://www.wolframcloud.com/obj/b6b57d60-e946-42d1-b3b9-7cfdf0cbe02e</description>
    <dc:creator>Chase Turner</dc:creator>
    <dc:date>2024-07-09T18:45:38Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/241732">
    <title>Tracking 50 events within single run of NDSolve to evolve Poincaré map</title>
    <link>https://community.wolfram.com/groups/-/m/t/241732</link>
    <description>A short post inspired by [url=http://mathematica.stackexchange.com/q/46398/13]this[/url] question. Please also see this literature: [url=http://mathworld.wolfram.com/DuffingDifferentialEquation.html]1[/url] , [url=http://en.wikipedia.org/wiki/Duffing_equation]2[/url] , [url=http://www.scholarpedia.org/article/Duffing_oscillator]3[/url]. The Duffing oscillator is one of the simplest systems that exhibits chaos. It is a periodically forced oscillator with nonlinear elasticity. We will use the following form:
[mcode]eq = x&amp;#039;&amp;#039;[t] + \[Delta] x&amp;#039;[t] - x[t] + x[t]^3 == \[Gamma] Cos[t];
eq // TraditionalForm[/mcode]
[img=width: 350px; height: 39px;]/c/portal/getImageAttachment?filename=ScreenShot2014-04-23at3.27.10PM.png&amp;amp;userId=11733[/img]

The unforced, undamped case with gamma and delta equal to zero is a Hamiltonian system whose potential energy is a double-well potential. So the general case is a driving force bouncing a point between two potential minima, with friction present. It is usually associated with a model of a periodically forced steel beam deflected toward two magnets:
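
For intuition, here is a quick plot of the double well (a sketch, assuming the standard potential V(x) = x^4/4 - x^2/2 obtained by integrating the unforced, undamped equation):

[mcode]Plot[x^4/4 - x^2/2, {x, -1.6, 1.6}, AxesLabel -&amp;gt; {&amp;#034;x&amp;#034;, &amp;#034;V&amp;#034;}][/mcode]

The two minima at x = ±1 are the wells between which the forced trajectory hops.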

[img=width: 425px; height: 455px;]/c/portal/getImageAttachment?filename=Duffing-MagnetelasticBeam.gif&amp;amp;userId=11733[/img]

When the system is in a non-chaotic regime, it is synchronized perfectly with the periodic force. In the phase space of x[t] and x&amp;#039;[t] the trajectory always passes through the same point, forming a closed loop over one period 2 Pi of the force. In the chaotic regime the trajectory splits and visits many close points during different periods - with the points forming an attractor which can be visualized via a Poincaré section:

[img=width: 280px; height: 240px;]/c/portal/getImageAttachment?filename=Duffing_rotating_anim.gif&amp;amp;userId=11733[/img]

Imagine moving the Poincaré section in the picture above around the complete circle - how would the attractor change shape? It is inefficient to run NDSolve for every new Poincaré section; it is better to get all the sections within one single run of NDSolve. To do that we need two things:

1) Specify the events to track by NDSolve with WhenEvent
2) Differentiate between the events so at the end they won&amp;#039;t pile up in mixed data

Here is a simple case of tracking 6 events spaced Pi/3 apart - all events on one figure, colored differently:

[mcode]evs = Mod[t, 2 \[Pi]] == # &amp;amp; /@ (Range[0, 2 Pi - #, #] &amp;amp;@(2 Pi/6))

Out[] = {Mod[t, 2 \[Pi]] == 0, Mod[t, 2 \[Pi]] == \[Pi]/3, Mod[t, 2 \[Pi]] == (2 \[Pi])/3, 
         Mod[t, 2 \[Pi]] == \[Pi], Mod[t, 2 \[Pi]] == (4 \[Pi])/3, Mod[t, 2 \[Pi]] == (5 \[Pi])/3}
[/mcode]
Note that the Sow function has a 2nd argument - a tag that allows Reap to collect the events separately.

[mcode]data = Block[{\[Delta] = 0.15, \[Gamma] = 0.3}, 
   Reap[NDSolve[{eq, x[0] == 0, x&amp;#039;[0] == 0, WhenEvent[Evaluate@evs, 
       Sow[{x[t], x&amp;#039;[t]}, Round[100 Mod[t, 2 \[Pi]]]]]}, {}, {t, 0, 200000}, MaxSteps -&amp;gt; \[Infinity]]]];

ListPlot[data[[2]], PlotStyle -&amp;gt; Directive[Opacity[.5], PointSize[.002]], 
 PlotRange -&amp;gt; {1.8 {-1, 1}, 1.3 {-1, 1}}, AspectRatio -&amp;gt; 1, Frame -&amp;gt; True, ImageSize -&amp;gt; 450][/mcode]
[img=width: 450px; height: 443px;]/c/portal/getImageAttachment?filename=ewfdsfjkgioyiuyue56w4.png&amp;amp;userId=11733[/img]

And here is a more complicated case of 50 events, shown as an animation that runs while we move the Poincaré section around the full circle:

[mcode]evs = Mod[t, 2 \[Pi]] == # &amp;amp; /@ (Range[0, 2 Pi - #, #] &amp;amp;@(2 Pi/50));

data = Block[{\[Delta] = 0.15, \[Gamma] = 0.3}, 
   Reap[NDSolve[{eq, x[0] == 0, x&amp;#039;[0] == 0, WhenEvent[Evaluate@evs, 
       Sow[{x[t], x&amp;#039;[t]}, Round[100 Mod[t, 2 \[Pi]]]]]}, {}, {t, 0, 200000}, MaxSteps -&amp;gt; \[Infinity]]]];

Manipulate[
 ListPlot[data[[2]][[k]], PlotStyle -&amp;gt; PointSize[0], 
  PlotRange -&amp;gt; {1.8 {-1, 1}, 1.3 {-1, 1}}, AspectRatio -&amp;gt; 1, 
  Frame -&amp;gt; True, ImageSize -&amp;gt; 450], {k, 1, 50, 1}][/mcode]
[img=width: 450px; height: 443px;]http://i.stack.imgur.com/sHG3f.gif[/img]</description>
    <dc:creator>Vitaliy Kaurov</dc:creator>
    <dc:date>2014-04-23T20:55:52Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1140010">
    <title>[WSC17] Using Voronoi Diagrams to Optimize Offensive Schemes</title>
    <link>https://community.wolfram.com/groups/-/m/t/1140010</link>
    <description>Introduction&#xD;
------------&#xD;
&#xD;
&#xD;
----------&#xD;
&#xD;
&#xD;
For the past 2 weeks at the Wolfram High School Summer Camp, I have been working on a project studying Voronoi diagrams and how I could connect them to sports. My first thought was, &amp;#034;This will never work. How can sports be connected to something from the Wolfram Language?&amp;#034; But then I realized that the Wolfram Language can do many things to help me achieve my goal. &#xD;
&#xD;
My problem is to find a way to optimize a basketball team&amp;#039;s offense, based on the defense, using Voronoi diagrams. I went through multiple methods trying to find a way to represent this problem. &#xD;
&#xD;
Creating a Voronoi Diagram&#xD;
--------------------------&#xD;
&#xD;
--------------------------&#xD;
Here is how to create a Voronoi diagram with different colored points that represent offensive and defensive players.&#xD;
&#xD;
&#xD;
    pt = RandomReal[{0, 10}, {10, 2}];(*creates a 10 by 2 array of points between 0 and 10*)&#xD;
    Show[&#xD;
     VoronoiMesh[pt],&#xD;
     Graphics[{PointSize@Medium, Red, Point[pt[[1 ;; 5]]], Blue, &#xD;
       Point[pt[[6 ;; 10]]]}]&#xD;
     ](*creates the diagram showing the points and the cells*)&#xD;
&#xD;
![enter image description here][1]&#xD;
&#xD;
1st Trial&#xD;
---------&#xD;
&#xD;
----------&#xD;
&#xD;
&#xD;
 &amp;lt;h3&amp;gt; Using NearestNeighborGraph to connect the offensive points &amp;lt;/h3&amp;gt;&#xD;
&#xD;
&amp;lt;p&amp;gt;This was one of the methods I used to find a way that best optimizes the offense. The idea was to use the NearestNeighborGraph function to create a polygon by connecting all the offensive (red) points. This lets us calculate the area of the polygon created by the offensive points.&amp;lt;/p&amp;gt;&#xD;
&#xD;
&#xD;
    ptt = RandomReal[{0, 10}, {10, 2}];(*creates a 10 by 2 array of points between 0 and 10*)&#xD;
    ntt = NearestNeighborGraph[ptt[[1 ;; 5]], 4];(*finds the 4 nearest neighbors of the offensive(red)points*)&#xD;
    Show[&#xD;
     Show[&#xD;
      VoronoiMesh[ptt], ntt&#xD;
      ],&#xD;
     Graphics[{&#xD;
       PointSize@Medium, Red, Point[ptt[[1 ;; 5]]],&#xD;
       Blue, Point[&#xD;
        ptt[[6 ;; 10]]]}](*creates the colored points- red for offense, blue for \&#xD;
    defense*)&#xD;
     ]&#xD;
&#xD;
The result:&#xD;
&#xD;
![enter image description here][2]&#xD;
&#xD;
Now we could find the area of the polygon created from the connected offensive(red) points.&#xD;
&#xD;
    ptt[[1 ;; 5]] //&#xD;
      Polygon@# &amp;amp; //&#xD;
     RegionMeasure@# &amp;amp;&#xD;
&#xD;
This returns the area of the offensive polygon; for the points generated above it was 10.145.&#xD;
&#xD;
We can infer that the area is significant: a greater area created from these offensive points allows more space for the offensive players to interact. A greater offensive area also implies that less space on the court is taken up by the defensive (blue) points and more by the offensive points. Later, I found that this method was not as efficient as other methods and had some errors, such as the points not being in realistic positions, as they would be on a real basketball court.&#xD;
&#xD;
So what next...&#xD;
---------------&#xD;
&#xD;
----------&#xD;
Once I realized that the previous method wasn&amp;#039;t as good as I had hoped, I quickly needed to find another way. So I went to my mentor, who gave me a lot of information and ways I could further improve the situation.&#xD;
&amp;lt;h3&amp;gt;Connected Components Function (2nd method)&amp;lt;/h3&amp;gt;&#xD;
This function lets the user input their points, creating a Voronoi diagram of the offensive and defensive players separated by red and blue. The user-inputted points let the user choose which players are on offense and defense, allowing more interaction between the user and the code. It also provides the areas of the respective colored cells, giving the user the information needed to draw a conclusion from the function.&#xD;
&#xD;
&amp;lt;h3&amp;gt;The FUNCTION&amp;lt;/h3&amp;gt;&#xD;
&#xD;
    connectedComp[points_] :=&#xD;
     Block[{off, def, oprims, oconnect, dconnect, a, b},&#xD;
      VoronoiMesh[points](*Voronoi Mesh creates Voronoi cells out of the \&#xD;
    specified points*)//&#xD;
         MeshPrimitives[#, 2] &amp;amp;(*this line generates the polygon coordinates created by \&#xD;
    each cell*)//&#xD;
        #[[&#xD;
           Ordering[(*orders the list by giving the position of the order*)&#xD;
            Position[(*finds the position where the RegionMember \&#xD;
    returns true*)&#xD;
              Outer[(*This will return a list of True or False depending \&#xD;
    on if the points in the Voronoi cells are located inside the region \&#xD;
    of the polygons created from MeshPrimitives[VoronoiMesh[points] ,2]*)&#xD;
               RegionMember,&#xD;
               MeshPrimitives[&#xD;
                VoronoiMesh[points] , 2],&#xD;
               points,&#xD;
               1]&#xD;
              , True][[All, 2]]&#xD;
            ]&#xD;
           ]] &amp;amp; //&#xD;
       (oprims = #) &amp;amp;;&#xD;
&#xD;
      off =&#xD;
       Flatten[&#xD;
        oprims[[1 ;; Length[points]/ 2]] //(*creates the offensive points(5 points)*)&#xD;
           SimpleGraph@RelationGraph[&#xD;
              Not@*RegionDisjoint,(*This provides a graph of the edges &#xD;
    and vertices of the points when the polygons are not disjoint*)&#xD;
        #,&#xD;
              VertexLabels -&amp;gt; None] &amp;amp; //&#xD;
          ConnectedComponents@# &amp;amp; //(*This line connects the components \&#xD;
    of the relation graph*)&#xD;
         (oconnect = #) &amp;amp;];(*returns the first five(offensive)points*)&#xD;
&#xD;
      def =&#xD;
       Flatten[&#xD;
        oprims[[Length[points]/2 + 1 ;; -1]] //(*creates the defensive points(5 points)*)&#xD;
           SimpleGraph@RelationGraph[&#xD;
              Not@*RegionDisjoint,(*This provides a graph of the edges &#xD;
    and vertices of the points when the polygons are not disjoint*)&#xD;
        #,&#xD;
              VertexLabels -&amp;gt; None] &amp;amp; //&#xD;
          ConnectedComponents@# &amp;amp; //(*This line connects the components \&#xD;
    of the relation graph*)&#xD;
         (dconnect = #) &amp;amp;];(*returns the last five(defensive)points*)&#xD;
&#xD;
      a = MaximalBy[&#xD;
         oconnect,&#xD;
         Total[Area /@ #] &amp;amp;&#xD;
         ] // First(*connected area of offensive team*);&#xD;
      b = MaximalBy[&#xD;
         dconnect,&#xD;
         Total[Area /@ #] &amp;amp;&#xD;
         ] // First(*connected area of defensive team*);&#xD;
      &#xD;
      red = RGBColor[&amp;#034;#E86850&amp;#034;];&#xD;
      blue = RGBColor[&amp;#034;#587498&amp;#034;];&#xD;
      &#xD;
      Show[Graphics[&#xD;
        {EdgeForm[Black],&#xD;
         Lighter@blue, Complement[def, b],&#xD;
         Lighter@red, Complement[off, a], &#xD;
         blue, b,&#xD;
         red, a,&#xD;
         Black, PointSize@Medium, Point[points]}, &#xD;
        PlotLabel -&amp;gt; &#xD;
         Grid[{{&amp;#034;Total Area of Offense: &amp;#034; &amp;lt;&amp;gt; &#xD;
             ToString[&#xD;
              Total[RegionMeasure /@ off]/(Total[RegionMeasure /@ off] + &#xD;
                 Total[RegionMeasure /@ def])], &#xD;
            &amp;#034;Total Area of Defense: &amp;#034; &amp;lt;&amp;gt; &#xD;
             ToString[&#xD;
              Total[RegionMeasure /@ def]/(Total[RegionMeasure /@ off] + &#xD;
                 Total[RegionMeasure /@ &#xD;
                   def])]}, {&amp;#034;Area of Connected Offensive Cells: &amp;#034; &amp;lt;&amp;gt; &#xD;
             ToString[&#xD;
              Total[RegionMeasure /@ a]/Total[RegionMeasure /@ off]], &#xD;
            &amp;#034;Area of Connected Defensive Cells: &amp;#034; &amp;lt;&amp;gt; &#xD;
             ToString[&#xD;
              Total[RegionMeasure /@ b]/Total[RegionMeasure /@ def]]}}, &#xD;
          Frame -&amp;gt; All, FrameStyle -&amp;gt; {Thick, Directive[Black]}]]]](*This graphic will show the Voronoi diagram and create a frame with the areas of the respective regions inside*)&#xD;
&#xD;
Here is a random example of what the function returns:&#xD;
&#xD;
    connectedComp[RandomReal[{0,10},{10,2}]]&#xD;
&#xD;
![enter image description here][3]&#xD;
&#xD;
The user could also input their own points:&#xD;
&#xD;
    connectedComp[{{0.4, 1}, {1.4, 1.4}, {4.6, 2.5}, {8.5, 2.1}, {9.4, 8.6}, {0.5, 0.6}, {3.4, 0.9}, {6.2, 1.2}, {8.2, 0.4}, {9.7, 0.2}}]&#xD;
&#xD;
Which will return:&#xD;
&#xD;
![enter image description here][4]&#xD;
&#xD;
This is an example of an efficient offense because of the large offensive area.&#xD;
&#xD;
This is an example of an inefficient offense because of the large connecting area of the defensive cells.&#xD;
&#xD;
    connectedComp[{{0.5, 0.2}, {3.4, 0.2}, {6.2, 0.5}, {8.2, 1.4}, {9.7, 0.2}, {0.4, 9.4}, {1.4, 1.4}, {5, 2.5}, {8.7, 2.1}, {9.4, 0.2}}]&#xD;
&#xD;
![enter image description here][5]&#xD;
&#xD;
Conclusion&#xD;
----------&#xD;
&#xD;
----------&#xD;
This function provides a very detailed diagram of the interaction between the offensive and defensive players. The red cells represent the offensive players, while the blue cells represent the defensive players. The lighter red cells are offensive cells not connected to the rest of the offensive cells, and likewise the lighter blue cells for the defense. The diagram also reports the area, as a decimal fraction, of the offensive and defensive cells: the total offensive (or defensive) cell area divided by the total area of both, which represents how much space each side covers on the court. It also reports the area of the connected cells on each side: the greatest connected area for a side divided by that side&amp;#039;s total area. &#xD;
&#xD;
All of this information is significant because a greater total offensive area lets the offense cover more of the court than the defense, giving it a better opportunity to score or advance the ball. Conversely, a greater total defensive area gives the offense a smaller opportunity to get a good shot and effectively run its plays. The area of connected offensive cells shows how much space the offense can utilize without interference from the defense, so a greater connected offensive area is better for optimizing the offense.&#xD;
&#xD;
Future Work&#xD;
-----------&#xD;
&#xD;
----------&#xD;
Some of the ideas in this project have room for improvement. For example, I could make it more user-friendly: instead of making the user enter the points for both the offense and defense, I could use a Locator to let the user drag the defense wherever they want, and then place the offensive points at the most optimal positions based on the chosen defense. I could also make the code more efficient, so that it uses less space and runs faster.&#xD;
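&#xD;
As a rough sketch of that Locator idea (hypothetical, untested code, assuming the connectedComp function defined above has been evaluated), the defensive points could be made draggable while the diagram updates live:&#xD;
&#xD;
    DynamicModule[{off = RandomReal[{0, 10}, {5, 2}],&#xD;
      def = RandomReal[{0, 10}, {5, 2}]},&#xD;
     LocatorPane[Dynamic[def],(*draggable defensive points*)&#xD;
      Dynamic[connectedComp[Join[off, def]]]]](*diagram recomputed as they move*)&#xD;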
&#xD;
&#xD;
  [1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-07-06at7.13.02PM.png&amp;amp;userId=1134707&#xD;
  [2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-07-06at7.01.35PM.png&amp;amp;userId=1134707&#xD;
  [3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-07-06at5.32.57PM.png&amp;amp;userId=1134707&#xD;
  [4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-07-06at7.43.12PM.png&amp;amp;userId=1134707&#xD;
  [5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-07-06at7.56.51PM.png&amp;amp;userId=1134707</description>
    <dc:creator>Jordan Wang</dc:creator>
    <dc:date>2017-07-07T00:08:53Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/931494">
    <title>A million digits (feh) of π</title>
    <link>https://community.wolfram.com/groups/-/m/t/931494</link>
    <description>*NOTE: All utility functions are defined at the end and also in the attached notebook.*&#xD;
&#xD;
&#xD;
----------&#xD;
&#xD;
&#xD;
You can compute the first million digits of $\pi$ without printing them in &amp;lt; 1 second (the first number below is the time of the computation without printing, and the image printed later is actually only a minuscule part of the whole 10^6 digits):&#xD;
&#xD;
    tim[N[Pi, 10^6]]&#xD;
&amp;gt; 0.314149, 0&#xD;
&#xD;
![enter image description here][1]&#xD;
&#xD;
and then print the last 99:&#xD;
&#xD;
    tim[Mod[Round[10^10^6*%], 10^99]]&#xD;
&#xD;
&amp;gt; 0.021651,0&#xD;
&#xD;
&amp;gt; 315614033321272849194418437150696552087542450598956787961303311646283996346460422090106105779458151&#xD;
&#xD;
Far better than digits is a continued fraction:&#xD;
&#xD;
    longer[Pi, 9]&#xD;
&#xD;
![enter image description here][2]&#xD;
&#xD;
This expression = $\pi$&#xD;
&#xD;
    Simplify[ReleaseHold[%]]&#xD;
&#xD;
&amp;gt; π&#xD;
&#xD;
and can be freely lengthened (or shortened):&#xD;
&#xD;
    longer[%%]&#xD;
&#xD;
![enter image description here][3]&#xD;
&#xD;
Notice that $e$, unlike $\pi$, has a pattern:&#xD;
&#xD;
    longer[E, 9]&#xD;
&#xD;
![enter image description here][4]&#xD;
&#xD;
But proving that π will never develop a pattern is one of the great unsolved problems. For faster, but non-resumable, continued fractions, Mathematica has, e.g.,&#xD;
&#xD;
    ContinuedFraction[Pi, 9]&#xD;
&#xD;
&amp;gt; {3, 7, 15, 1, 292, 1, 1, 1, 2}&#xD;
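&#xD;
These terms can be folded back into a single rational approximation with the built-in FromContinuedFraction (a quick check):&#xD;
&#xD;
    FromContinuedFraction[ContinuedFraction[Pi, 9]]&#xD;
&#xD;
The resulting fraction agrees with π to roughly the first ten digits.&#xD;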
&#xD;
Note the largish term at position 431:&#xD;
&#xD;
    Take[ContinuedFraction[Pi, 433], -4]&#xD;
&#xD;
&amp;gt; {1, 4, 20776, 1}&#xD;
&#xD;
Around 1986, in a calculation taking several weeks, I found&#xD;
&#xD;
    tim[Take[ContinuedFraction[Pi, 11504932], -4]]&#xD;
&#xD;
&amp;gt; 8.36905,4&#xD;
&#xD;
&amp;gt; {1, 1, 878783625, 6}&#xD;
&#xD;
I thought Eric Weisstein found a bigger one, but [Pi Continued Fraction page][5] doesn&amp;#039;t seem to say. Simple functions of e also have patterns:&#xD;
&#xD;
    longer[(E^2 + 1)/(E^2 - 1), 9]&#xD;
&#xD;
![enter image description here][6]&#xD;
&#xD;
$e^x$ has a nice infinite series:&#xD;
&#xD;
    SeriesCoefficient[E^x, {x, 0, n}]&#xD;
&#xD;
![enter image description here][7]&#xD;
&#xD;
This means&#xD;
&#xD;
    (#1 == Activate[#1] &amp;amp; )[Inactive[Sum][x^n/n!, {n, 0, Infinity}]]&#xD;
&#xD;
![enter image description here][8]&#xD;
&#xD;
Taking a few terms&#xD;
&#xD;
    ser = Activate[%[[1]] /. Infinity -&amp;gt; 9]&#xD;
&#xD;
![enter image description here][9]&#xD;
&#xD;
we can numerically test Euler&amp;#039;s celebrated&#xD;
&#xD;
    (#1 == Activate[#1] &amp;amp; )[E^Inactivate[I*Pi]]&#xD;
&#xD;
![enter image description here][10]&#xD;
&#xD;
    N[ser /. x -&amp;gt; I*Pi]&#xD;
&#xD;
&amp;gt; -0.976022 + 0.00692527 I&#xD;
&#xD;
The square root of Euler&amp;#039;s identity is&#xD;
&#xD;
    (#1 == Activate[#1] &amp;amp; )[E^(Inactivate[I*Pi]/2)]&#xD;
&#xD;
![enter image description here][11]&#xD;
&#xD;
    N[ser /. x -&amp;gt; I*(Pi/2)]&#xD;
&#xD;
&amp;gt; 0.0000247373 + 1. I&#xD;
&#xD;
We can even use Euler&amp;#039;s identity to calculate π by solving&#xD;
&#xD;
    eq = E^(I*x) + 1 == 0&#xD;
&#xD;
![enter image description here][12]&#xD;
&#xD;
This has infinitely many solutions!&#xD;
&#xD;
    Simplify[Solve[eq]]&#xD;
&#xD;
![enter image description here][13]&#xD;
&#xD;
    Unprotect[C]; Table[%[[1, 1]], {C[1], -2, 2}]&#xD;
&#xD;
&amp;gt; {x -&amp;gt; -3 π, x -&amp;gt; -π, x -&amp;gt; π, x -&amp;gt; 3 π, x -&amp;gt; 5 π}&#xD;
&#xD;
Newton&amp;#039;s iteration says: to solve $f(x)=0$, choose an initial guess for $x$ and iterate $g(g(\ldots g(x)))$, where $g(x) = x - f(x)/f&amp;#039;(x)$:&#xD;
&#xD;
    g[x] = Simplify[x - eq[[1]]/D[eq[[1]], x]]&#xD;
&#xD;
![enter image description here][14]&#xD;
&#xD;
Starting with a very precise value of 3&#xD;
&#xD;
    NestList[I/E^(I*#1) + #1 + I &amp;amp; , 3.`69., 7]&#xD;
&#xD;
![enter image description here][15]&#xD;
&#xD;
Each iteration of Newton&amp;#039;s method typically doubles the number of correct digits. But isn&amp;#039;t it slightly cockeyed to seek a real answer by applying a complex function to a real approximation?  Suppose we just took the real part of $g$:&#xD;
&#xD;
    ComplexExpand[Re[g[x]]]&#xD;
&#xD;
&amp;gt; x + Sin[x]&#xD;
&#xD;
(Remembering that Euler&amp;#039;s identity generalizes to Euler&amp;#039;s formula:)&#xD;
&#xD;
    (#1 == ComplexExpand[#1] &amp;amp; )[E^(I*x)]&#xD;
&#xD;
![enter image description here][16]&#xD;
&#xD;
Using this new $g$&#xD;
&#xD;
    NestList[#1 + Sin[#1] &amp;amp; , 3.`69., 5]&#xD;
&#xD;
![enter image description here][17]&#xD;
&#xD;
triples the accuracy each time!  What if we start with 9 instead of 3?&#xD;
&#xD;
    NestList[#1 + Sin[#1] &amp;amp; , 9.`69., 5]&#xD;
&#xD;
![enter image description here][18]&#xD;
&#xD;
Dividing the last approximation by 3 (starting from 9, the iteration converges to $3\pi$ instead)&#xD;
&#xD;
    Last[%]/3&#xD;
&#xD;
&amp;gt; 3.1415926535897932384626433832795028841971693993751058209749445923078&#xD;
&#xD;
----------&#xD;
&#xD;
&#xD;
&#xD;
&#xD;
Utility functions&#xD;
-----------------&#xD;
&#xD;
These are definitions of all utility functions you will need for the above evaluations. &#xD;
&#xD;
    Clear[tim];&#xD;
    tim[xp_]:=(Print[#[[1]],&amp;#034;,&amp;#034;,Length[#[[2]]]];&#xD;
    If[#[[1]]&amp;gt;69,Speak[#[[1]]]];#[[2]])&amp;amp;@AbsoluteTiming[xp]&#xD;
    &#xD;
    SetAttributes[tim,HoldAll]&#xD;
    &#xD;
    Clear[shorter];&#xD;
    shorter[cf_,0]:=cf&#xD;
    &#xD;
    shorter[cf_,n_: 1]:=shorter[cf/.(a_: 0)+1/HoldForm[r_]:&amp;gt;HoldForm@@{(a+1/r)},n-1]&#xD;
    &#xD;
    Clear[longer];&#xD;
    longer[cf_,0]:=cf&#xD;
    &#xD;
    longer[x_?NumericQ,n_: 1]:=longer[HoldForm@@{x},n]&#xD;
    &#xD;
    longer[cf_,n_: 1]:=longer[cf/.HoldForm[r:Except[_Integer]]:&amp;gt;&#xD;
        Floor[r]+1/HoldForm@@{Simplify@Together[1/Mod[r,1]]},n-1]&#xD;
    &#xD;
    tail[cf_]:=Cases[cf,HoldForm[x_]-&amp;gt;x,\[Infinity]][[1]]&#xD;
&#xD;
&#xD;
  [1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-29at5.58.04PM.png&amp;amp;userId=11733&#xD;
  [2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-29at5.08.37PM.png&amp;amp;userId=11733&#xD;
  [3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-29at5.11.21PM.png&amp;amp;userId=11733&#xD;
  [4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-29at5.13.18PM.png&amp;amp;userId=11733&#xD;
  [5]: http://mathworld.wolfram.com/PiContinuedFraction.html&#xD;
  [6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-29at5.18.18PM.png&amp;amp;userId=11733&#xD;
  [7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-29at5.20.10PM.png&amp;amp;userId=11733&#xD;
  [8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-29at5.22.04PM.png&amp;amp;userId=11733&#xD;
  [9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4017ScreenShot2016-09-29at5.22.44PM.png&amp;amp;userId=11733&#xD;
  [10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-29at5.23.59PM.png&amp;amp;userId=11733&#xD;
  [11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-29at5.25.08PM.png&amp;amp;userId=11733&#xD;
  [12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-29at5.26.38PM.png&amp;amp;userId=11733&#xD;
  [13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-29at5.27.20PM.png&amp;amp;userId=11733&#xD;
  [14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-29at5.30.04PM.png&amp;amp;userId=11733&#xD;
  [15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-29at5.30.54PM.png&amp;amp;userId=11733&#xD;
  [16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-29at5.32.42PM.png&amp;amp;userId=11733&#xD;
  [17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-29at5.33.27PM.png&amp;amp;userId=11733&#xD;
  [18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-09-29at5.34.24PM.png&amp;amp;userId=11733</description>
    <dc:creator>Bill Gosper</dc:creator>
    <dc:date>2016-09-29T22:37:49Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/3379973">
    <title>[WELP24] On functional iteration and roots</title>
    <link>https://community.wolfram.com/groups/-/m/t/3379973</link>
    <description>![functionaliterationroots][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=proj-6-cover.jpeg&amp;amp;userId=911151&#xD;
  [2]: https://www.wolframcloud.com/obj/930a433a-2ab4-478c-8542-f153add19e0f</description>
    <dc:creator>Wolfram Education Programs</dc:creator>
    <dc:date>2025-02-04T15:30:58Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/3215276">
    <title>[WSRP24] Optimizing wind turbine purchasing for cost-efficient solutions</title>
    <link>https://community.wolfram.com/groups/-/m/t/3215276</link>
    <description>![Logarithmic objective function 3D plot][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2024-07-11at2.41.44%E2%80%AFPM.png&amp;amp;userId=3215211&#xD;
  [2]: https://www.wolframcloud.com/obj/0ad9a6c0-6331-46e0-93c3-7f067a610ba3</description>
    <dc:creator>Manvik Bhadoria</dc:creator>
    <dc:date>2024-07-11T18:57:04Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/3094917">
    <title>[WELP23] Utilizing SIR Model Variations (ODEs) and Community Graph Networks for Trachoma</title>
    <link>https://community.wolfram.com/groups/-/m/t/3094917</link>
    <description>![SIR Models][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=4037image.jpeg&amp;amp;userId=911151&#xD;
  [2]: https://www.wolframcloud.com/obj/47b57cbc-985d-4ea2-8431-9cef59ce865e</description>
    <dc:creator>Wolfram Education Programs</dc:creator>
    <dc:date>2024-01-02T17:29:29Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/583974">
    <title>Unit Distance McGee Graph</title>
    <link>https://community.wolfram.com/groups/-/m/t/583974</link>
    <description>A while ago I showed a [Unit Distance Heawood Graph][1]. Here is the McGee graph as a unit-distance graph.&#xD;
&#xD;
![McGee graph as unit distance graph][2]&#xD;
&#xD;
Here&amp;#039;s some gory-looking code for the points:&#xD;
&#xD;
    McGeePoints =RootReduce[{{0,1/2},{0,-1/2},{1/2,0},{-1/2,0},&#xD;
    {Sqrt[2]/4,Sqrt[2]/4},{-Sqrt[2]/4,Sqrt[2]/4},{Sqrt[2]/4,-Sqrt[2]/4},{-Sqrt[2]/4,-Sqrt[2]/4},&#xD;
    {Cos[a],1/2+Sin[a]},{-Cos[a],1/2+Sin[a]},{Cos[a],-1/2-Sin[a]},{-Cos[a],-1/2-Sin[a]},&#xD;
    {1/2+Sin[a],Cos[a]},{-1/2-Sin[a],Cos[a]},{1/2+Sin[a],-Cos[a]},{-1/2-Sin[a],-Cos[a]},&#xD;
    {Root[81+2592 #1+24192 #1^2+81792 #1^3+149760 #1^4+215040 #1^5-1777664 #1^6-11354112 #1^7-23052288 #1^8-15728640 #1^9+6291456 #1^10+12582912 #1^11+4194304 #1^12&amp;amp;,7],Root[81-2592 #1+24192 #1^2-81792 #1^3+149760 #1^4-215040 #1^5-1777664 #1^6+11354112 #1^7-23052288 #1^8+15728640 #1^9+6291456 #1^10-12582912 #1^11+4194304 #1^12&amp;amp;,8]},&#xD;
    {-Root[81+2592 #1+24192 #1^2+81792 #1^3+149760 #1^4+215040 #1^5-1777664 #1^6-11354112 #1^7-23052288 #1^8-15728640 #1^9+6291456 #1^10+12582912 #1^11+4194304 #1^12&amp;amp;,7],Root[81-2592 #1+24192 #1^2-81792 #1^3+149760 #1^4-215040 #1^5-1777664 #1^6+11354112 #1^7-23052288 #1^8+15728640 #1^9+6291456 #1^10-12582912 #1^11+4194304 #1^12&amp;amp;,8]},&#xD;
    {Root[81+2592 #1+24192 #1^2+81792 #1^3+149760 #1^4+215040 #1^5-1777664 #1^6-11354112 #1^7-23052288 #1^8-15728640 #1^9+6291456 #1^10+12582912 #1^11+4194304 #1^12&amp;amp;,7],-Root[81-2592 #1+24192 #1^2-81792 #1^3+149760 #1^4-215040 #1^5-1777664 #1^6+11354112 #1^7-23052288 #1^8+15728640 #1^9+6291456 #1^10-12582912 #1^11+4194304 #1^12&amp;amp;,8]},&#xD;
    {-Root[81+2592 #1+24192 #1^2+81792 #1^3+149760 #1^4+215040 #1^5-1777664 #1^6-11354112 #1^7-23052288 #1^8-15728640 #1^9+6291456 #1^10+12582912 #1^11+4194304 #1^12&amp;amp;,7],-Root[81-2592 #1+24192 #1^2-81792 #1^3+149760 #1^4-215040 #1^5-1777664 #1^6+11354112 #1^7-23052288 #1^8+15728640 #1^9+6291456 #1^10-12582912 #1^11+4194304 #1^12&amp;amp;,8]},&#xD;
    {Root[81-2592 #1+24192 #1^2-81792 #1^3+149760 #1^4-215040 #1^5-1777664 #1^6+11354112 #1^7-23052288 #1^8+15728640 #1^9+6291456 #1^10-12582912 #1^11+4194304 #1^12&amp;amp;,8],&#xD;
    Root[81+2592 #1+24192 #1^2+81792 #1^3+149760 #1^4+215040 #1^5-1777664 #1^6-11354112 #1^7-23052288 #1^8-15728640 #1^9+6291456 #1^10+12582912 #1^11+4194304 #1^12&amp;amp;,7]},&#xD;
    {-Root[81-2592 #1+24192 #1^2-81792 #1^3+149760 #1^4-215040 #1^5-1777664 #1^6+11354112 #1^7-23052288 #1^8+15728640 #1^9+6291456 #1^10-12582912 #1^11+4194304 #1^12&amp;amp;,8],&#xD;
    Root[81+2592 #1+24192 #1^2+81792 #1^3+149760 #1^4+215040 #1^5-1777664 #1^6-11354112 #1^7-23052288 #1^8-15728640 #1^9+6291456 #1^10+12582912 #1^11+4194304 #1^12&amp;amp;,7]},&#xD;
    {Root[81-2592 #1+24192 #1^2-81792 #1^3+149760 #1^4-215040 #1^5-1777664 #1^6+11354112 #1^7-23052288 #1^8+15728640 #1^9+6291456 #1^10-12582912 #1^11+4194304 #1^12&amp;amp;,8],&#xD;
    -Root[81+2592 #1+24192 #1^2+81792 #1^3+149760 #1^4+215040 #1^5-1777664 #1^6-11354112 #1^7-23052288 #1^8-15728640 #1^9+6291456 #1^10+12582912 #1^11+4194304 #1^12&amp;amp;,7]},&#xD;
    {-Root[81-2592 #1+24192 #1^2-81792 #1^3+149760 #1^4-215040 #1^5-1777664 #1^6+11354112 #1^7-23052288 #1^8+15728640 #1^9+6291456 #1^10-12582912 #1^11+4194304 #1^12&amp;amp;,8],&#xD;
    -Root[81+2592 #1+24192 #1^2+81792 #1^3+149760 #1^4+215040 #1^5-1777664 #1^6-11354112 #1^7-23052288 #1^8-15728640 #1^9+6291456 #1^10+12582912 #1^11+4194304 #1^12&amp;amp;,7]}&#xD;
    }/.a-&amp;gt;ArcSin[Root[-71+624 #1+192 #1^2-2688 #1^3-1152 #1^4+3072 #1^5+2048 #1^6&amp;amp;,2]]];&#xD;
&#xD;
The lines: &#xD;
&#xD;
    McGeeLines={{1,2},{1,9},{1,10},{2,11},{2,12},{3,4},{3,13},{3,15},{4,14},{4,16},{5,8},{5,17},{5,21},{6,7},{6,18},{6,22},{7,19},{7,23},{8,20},{8,24},{9,17},{9,23},{10,18},{10,24},{11,19},{11,21},{12,20},{12,22},{13,18},{13,21},{14,17},{14,22},{15,20},{15,23},{16,19},{16,24}};&#xD;
&#xD;
That these are all unit-distance edges can be verified (the following `Select` should return the full list of 36 lines):&#xD;
&#xD;
    Select[McGeeLines, RootReduce[EuclideanDistance[McGeePoints[[#[[1]]]], McGeePoints[[#[[2]]]]]] == 1 &amp;amp;] &#xD;
&#xD;
From the points and lines we can get the graphic. &#xD;
&#xD;
    Graphics[{Line[McGeePoints[[#]]]&amp;amp;/@McGeeLines,Black,Disk[#,.03]&amp;amp;/@McGeePoints, White,Disk[#,.02]&amp;amp;/@McGeePoints}] &#xD;
&#xD;
To solve this, I started from &#xD;
&#xD;
    GraphData[&amp;#034;McGeeGraph&amp;#034;, &amp;#034;AllImages&amp;#034;][[1]] &#xD;
&#xD;
and put an octagon of diameter 1 at the center. Then I added eight points of the form `{Cos[a], 1/2 + Sin[a]}`, reflected with mirror symmetries.&#xD;
&#xD;
![McGee graph][3]&#xD;
&#xD;
I needed a point at distance 1 from two of the points.&#xD;
&#xD;
    {x, y} /. FullSimplify[Solve[EuclideanDistance[{x, y}, #] == 1 &amp;amp; /@ {{Cos[a], 1/2 + Sin[a]}, {-1/2 - Sin[a], Cos[a]}}, {x, y}]]&#xD;
&#xD;
The second solution in terms of &amp;#034;a&amp;#034; was the one I wanted. That point needed to be a distance of 1 from `{Sqrt[2]/4,Sqrt[2]/4}`, so I solved for &amp;#034;a&amp;#034; with NSolve to a thousand digits. RootApproximant didn&amp;#039;t recognize it, but did recognize sin(a) as &#xD;
&#xD;
    Root[-71 + 624 #1 + 192 #1^2 - 2688 #1^3 - 1152 #1^4 + 3072 #1^5 + 2048 #1^6 &amp;amp;, 2]&#xD;
&#xD;
Plugging that into the previous equation gave a pair of order twelve root objects as solutions.  Done.  More details are in the attached notebook.&#xD;
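&#xD;
As a minimal illustration of the `RootApproximant` step (a generic example, not the actual value from this construction):&#xD;
&#xD;
    RootApproximant[N[Sqrt[2] + Sqrt[3], 50]]&#xD;
&#xD;
Given enough digits, `RootApproximant` returns an exact algebraic number whose minimal polynomial fits the approximation, which is how sin(a) above was recognized.&#xD;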
&#xD;
  [1]: http://community.wolfram.com/groups/-/m/t/518461&#xD;
  [2]: /c/portal/getImageAttachment?filename=McGeeGraph.gif&amp;amp;userId=21530&#xD;
  [3]: /c/portal/getImageAttachment?filename=McGee2.gif&amp;amp;userId=21530</description>
    <dc:creator>Ed Pegg</dc:creator>
    <dc:date>2015-10-16T18:17:06Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1184139">
    <title>Re-mesh, downsample &amp;amp; upsample a DiscretizeRegion object?</title>
    <link>https://community.wolfram.com/groups/-/m/t/1184139</link>
    <description>Why this question affects my choice of finite element solver&#xD;
------------------------------------------------------------&#xD;
&#xD;
In my project, I am importing a complex geometry from an STL file into Mathematica as a `MeshRegion`. I would like to edit this mesh significantly: for instance, drastically reduce or increase the number of elements (for finite element analysis) or perform Boolean operations. But I am not sure if Mathematica can re-mesh a `MeshRegion` that is obtained from discrete data. This question is of personal importance because it will help me decide if I can go ahead with using `NDSolve` for the finite element analysis in my project: if re-meshing is very problematic in Mathematica, it may be wiser to import volume meshes from external software, or do all of the finite element analysis in a dedicated solver like Comsol. &#xD;
&#xD;
Re-meshing an already discretized `MeshRegion`&#xD;
--------------------------------------------&#xD;
&#xD;
Let me illustrate. When I work with an `ImplicitRegion`, I seem to have full control: I can make a mesh with very many or very few elements, and apply Boolean operations too:&#xD;
&#xD;
    IR = ImplicitRegion[&#xD;
       x^6 - 5 x^4 y + 3 x^4 y^2 + 10 x^2 y^3 + 3 x^2 y^4 - y^5 + y^6 + &#xD;
         z^2 &amp;lt;= 1, {x, y, z}];&#xD;
    &amp;lt;&amp;lt; NDSolve`FEM`&#xD;
    ToElementMesh[IR, MaxCellMeasure -&amp;gt; Infinity, &#xD;
      AccuracyGoal -&amp;gt; 0][&amp;#034;Wireframe&amp;#034;]&#xD;
    ToElementMesh[RegionDifference[IR, Cuboid[]], &#xD;
      MaxCellMeasure -&amp;gt; 0.001][&amp;#034;Wireframe&amp;#034;]&#xD;
&#xD;
![enter image description here][1]&#xD;
&#xD;
Next, I create a `MeshRegion` by applying `DiscretizeRegion`. This leaves me in basically the same situation as importing some STL mesh into Mathematica as `MeshRegion`:&#xD;
&#xD;
    MR = DiscretizeRegion[IR]; (* same as MR = Import[&amp;#034;filename.stl&amp;#034;, &amp;#034;MeshRegion&amp;#034;] *)&#xD;
&#xD;
Now I can no longer downsample:&#xD;
&#xD;
    ToElementMesh[MR, MaxCellMeasure -&amp;gt; Infinity, AccuracyGoal -&amp;gt; 0]&#xD;
    ToElementMesh[MR]&#xD;
&#xD;
![enter image description here][2]&#xD;
&#xD;
Same thing! No downsampling appears to have happened. Possibly no re-meshing whatsoever. Mathematica returns basically the same number of elements (`TetrahedronElement[&amp;#034;&amp;lt;&amp;#034; 16793 &amp;#034;&amp;gt;&amp;#034;]` vs `TetrahedronElement[&amp;#034;&amp;lt;&amp;#034; 16841 &amp;#034;&amp;gt;&amp;#034;]`, respectively).&#xD;
&#xD;
Also, I can no longer apply Boolean operations:&#xD;
&#xD;
    RegionDifference[MR, Cuboid[]] // DiscretizeRegion&#xD;
&#xD;
returns an error&#xD;
&#xD;
![enter image description here][3]&#xD;
&#xD;
Is there some way I could gain control of the `DiscretizeRegion` in the same way as `ImplicitRegion`? In practice, I am presented with discrete data (STL file) and I would like to be able to re-mesh it, do boolean operations, and run FEM in full control of my mesh. Is that possible?&#xD;
&#xD;
Bad brute force solution&#xD;
-----&#xD;
&#xD;
You can brute force Mathematica to re-mesh using this hack:&#xD;
&#xD;
        MR2 = DiscretizeGraphics[&#xD;
          RegionPlot3D[&#xD;
           RegionMember[MR, {x, y, z}] == True &amp;amp;&amp;amp; &#xD;
            RegionMember[Cuboid[], {x, y, z}] == False, {x, -2, 2}, {y, -2, &#xD;
            2}, {z, -2, 2}, PlotPoints -&amp;gt; 20]]&#xD;
    ToElementMesh[MR2, MaxCellMeasure -&amp;gt; Infinity, AccuracyGoal -&amp;gt; 0]&#xD;
&#xD;
![enter image description here][4]&#xD;
&#xD;
However, converting the `MeshRegion` (picture above, left) with `ToElementMesh` hopelessly overmeshes things (picture above, right). I get about 500k tetrahedral elements. Once again I don&amp;#039;t know how to control my mesh for finite element analysis.&#xD;
&#xD;
Disclaimer&#xD;
-----&#xD;
I have posted this question to Mathematica Stackexchange [here][5], where it has gone unanswered for several days. Because of the importance of this question for my modeling work (I need to decide if I can go ahead with Mathematica for the finite element analysis part of my project), I decided to post it to the Wolfram Community to hopefully get some developer feedback. I am happy to summarise the insights of this discussion for the Stackexchange community once we have (hopefully) reached some conclusions. &#xD;
&#xD;
&#xD;
  [1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1.png&amp;amp;userId=1184102&#xD;
  [2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2.png&amp;amp;userId=1184102&#xD;
  [3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3.png&amp;amp;userId=1184102&#xD;
  [4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4.png&amp;amp;userId=1184102&#xD;
  [5]: https://mathematica.stackexchange.com/questions/155484/is-it-possible-to-re-mesh-downsample-upsample-a-discretizeregion-object</description>
    <dc:creator>Alexander Erlich</dc:creator>
    <dc:date>2017-09-13T17:30:32Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/3642901">
    <title>Quick intro to the (N)DSolve(Value) family</title>
    <link>https://community.wolfram.com/groups/-/m/t/3642901</link>
    <description>&amp;amp;[Wolfram Notebook][1]&#xD;
&#xD;
&#xD;
&#xD;
  [1]: https://www.wolframcloud.com/obj/f58b42ce-750f-48f1-9529-f640d895b1cc</description>
    <dc:creator>Michael Rogers</dc:creator>
    <dc:date>2026-02-21T02:40:03Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1351062">
    <title>A geometric multigrid solver for Mathematica?</title>
    <link>https://community.wolfram.com/groups/-/m/t/1351062</link>
    <description>*Cross-posted from [Mathematica.StackExchange](https://mathematica.stackexchange.com/questions/173616)*&#xD;
&#xD;
&#xD;
#Motivation&#xD;
&#xD;
&#xD;
Mathematica ships a variety of linear solvers through the interface&#xD;
&#xD;
    LinearSolve[A, b, Method -&amp;gt; method]&#xD;
&#xD;
the most notable for sparse matrix `A` being:&#xD;
&#xD;
 - `&amp;#034;Multifrontal&amp;#034;` - the default solver for sparse arrays; a direct solver based on a sparse LU-factorization performed with UMFPACK. Big advantage: `LinearSolve[A]` creates a `LinearSolveFunction` object that can be reused.&#xD;
&#xD;
 - `&amp;#034;Cholesky&amp;#034;` - a direct solver using the sparse Cholesky-factorization provided by TAUCS. Also produces a `LinearSolveFunction` but limited to positive-definite matrices.&#xD;
&#xD;
 - `&amp;#034;Pardiso&amp;#034;` - a parallelized direct solver from the Intel MKL; undocumented but much faster and not nearly as memory hungry as `&amp;#034;Multifrontal&amp;#034;`. `LinearSolve[A, Method -&amp;gt; &amp;#034;Pardiso&amp;#034;]` creates a reusable `LinearSolveFunction` object as well. The original Intel MKL Pardiso solver can do quite a lot more! Unfortunately, many of its features such as reusing symbolic factorizations are not accessible via `LinearSolve`. (Or are they? That would be great to know!)&#xD;
&#xD;
 - `&amp;#034;Krylov&amp;#034;` with the submethods `&amp;#034;ConjugateGradient&amp;#034;`, `&amp;#034;GMRES&amp;#034;`, and `&amp;#034;BiCGSTAB&amp;#034;` - iterative solvers that require good preconditioners and good starting values. These are supposed to work quite well for transient PDEs with smoothly varying coefficients.&#xD;
&#xD;
See [this site](http://reference.wolfram.com/language/tutorial/LinearAlgebraMatrixComputations.html#81663253) for reference. (It is a bit outdated, but I don&amp;#039;t think that any essential changes were made in the meantime - apart from `&amp;#034;Pardiso&amp;#034;`.)&#xD;
&#xD;
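For instance, a preconditioned Krylov call looks like this (a sketch; the submethod and `&amp;#034;Preconditioner&amp;#034;` suboptions are as documented for `LinearSolve`):&#xD;
&#xD;
    x = LinearSolve[A, b, Method -&amp;gt; {&amp;#034;Krylov&amp;#034;, Method -&amp;gt; &amp;#034;ConjugateGradient&amp;#034;, &amp;#034;Preconditioner&amp;#034; -&amp;gt; &amp;#034;ILU0&amp;#034;}];&#xD;
&#xD;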
I mostly use linear solvers to solve PDEs, mostly elliptic ones, but also parabolic ones every now and then. On the one hand, the direct solvers hit the wall when dealing with meshes with several million degrees of freedom. And in particular, if the meshes are 3-dimensional, factorization times explode due to the high fill-in. On the other hand, the iterative solvers within Mathematica -- if I am honest -- perform so poorly on elliptic problems that I used to question the mental sanity of several numericists who told me that I had to use an iterative solver for _every_ linear system of size larger than $5 \times 5$.&#xD;
&#xD;
**Some time ago, I learned that the really big thing would be multigrid solvers. So I wonder whether one can implement such a solver in Mathematica (be it in the native language or via LibraryLink) and how it would compare to the built-in linear solvers.**&#xD;
&#xD;
&#xD;
#Background&#xD;
&#xD;
Details about multigrid solvers can be found in this [pretty neat script](https://www.google.com/search?client=safari&amp;amp;rls=en&amp;amp;q=multi+grid+solver&amp;amp;ie=UTF-8&amp;amp;oe=UTF-8) by Volker John. That&amp;#039;s basically the source from which I drew the information to implement the V-cycle solver below.&#xD;
&#xD;
In a nutshell, a multigrid solver builds on two ingredients: A hierarchy of linear systems (with so-called _prolongation operators_ mapping between them) and a family of _smoothers_. &#xD;
&#xD;
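Schematically, one V-cycle on level $i$ proceeds as follows (this mirrors the implementation further down; restriction is the transpose action of the prolongation):&#xD;
&#xD;
    VCycle on level i:&#xD;
      pre-smooth:  v[i] = Smooth[A[i], f[i], v[i]]            (ν1 steps)&#xD;
      restrict:    f[i+1] = (f[i] - A[i].v[i]).J[i]&#xD;
      recurse:     VCycle on level i+1 (direct solve on the coarsest level)&#xD;
      prolongate:  v[i] = v[i] + J[i].v[i+1]&#xD;
      post-smooth: v[i] = Smooth[A[i], f[i], v[i]]            (ν2 steps)&#xD;
&#xD;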
&#xD;
#Implementation&#xD;
&#xD;
##CG-method as smoother&#xD;
&#xD;
I use an iterative conjugate-gradient solver as smoother. For some reason, Mathematica&amp;#039;s conjugate-gradient solver has an exorbitantly high latency, so I use my own implementation which I wrote several years ago. It&amp;#039;s really easy to implement; all necessary details can be found, e.g., [here](https://en.wikipedia.org/wiki/Conjugate_gradient_method#The_resulting_algorithm). Note that my implementation returns an `Association` that also provides some information on the solving process. (In particular for transient PDEs with varying coefficients, the number of needed iterations is often valuable information that one might want to use for determining whether the preconditioner has to be updated.)&#xD;
&#xD;
    Options[CGLinearSolve] = {&#xD;
       &amp;#034;Tolerance&amp;#034; -&amp;gt; 10^(-8),&#xD;
       &amp;#034;StartingVector&amp;#034; -&amp;gt; Automatic,&#xD;
       MaxIterations -&amp;gt; 1000,&#xD;
       &amp;#034;Preconditioner&amp;#034; -&amp;gt; Identity&#xD;
       };&#xD;
    &#xD;
    CGLinearSolve[A_?SquareMatrixQ, b_?MatrixQ, opts : OptionsPattern[]] :=&#xD;
       CGLinearSolve[A, #, opts] &amp;amp; /@ Transpose[b]&#xD;
    &#xD;
    CGLinearSolve[A_?SquareMatrixQ, b_?VectorQ, OptionsPattern[]] :=&#xD;
      Module[{r, u, α, δ0, p, ρ, ρold, &#xD;
        z, β, δ, x, TOL, iter, P, precdata, normb, maxiter},&#xD;
       P = OptionValue[&amp;#034;Preconditioner&amp;#034;];&#xD;
       maxiter = OptionValue[MaxIterations];&#xD;
       normb = Sqrt[b.b];&#xD;
       If[Head[P] === String,&#xD;
        precdata = SparseArray`SparseMatrixILU[A, &amp;#034;Method&amp;#034; -&amp;gt; P];&#xD;
        P = x \[Function] SparseArray`SparseMatrixApplyILU[precdata, x];&#xD;
        ];&#xD;
       If[P === Automatic,&#xD;
        precdata = SparseArray`SparseMatrixILU[A, &amp;#034;Method&amp;#034; -&amp;gt; &amp;#034;ILU0&amp;#034;];&#xD;
        P = x \[Function] SparseArray`SparseMatrixApplyILU[precdata, x];&#xD;
        ];&#xD;
       TOL = normb OptionValue[&amp;#034;Tolerance&amp;#034;];&#xD;
       If[OptionValue[&amp;#034;StartingVector&amp;#034;] === Automatic,&#xD;
        x = ConstantArray[0., Dimensions[A][[2]]];&#xD;
        r = b&#xD;
        ,&#xD;
        x = OptionValue[&amp;#034;StartingVector&amp;#034;];&#xD;
        r = b - A.x;&#xD;
        ];&#xD;
       z = P[r];&#xD;
       p = z;&#xD;
       ρ = r.z;&#xD;
       δ0 = δ = Sqrt[r.r];&#xD;
       iter = 0;&#xD;
       While[δ &amp;gt; TOL &amp;amp;&amp;amp; iter &amp;lt; maxiter,&#xD;
        iter++;&#xD;
        u = A.p;&#xD;
        α = ρ/(p.u);&#xD;
        x = x + α p;&#xD;
        ρold = ρ;&#xD;
        r = r - α u;&#xD;
        δ = Sqrt[r.r];&#xD;
        z = P[r];&#xD;
        ρ = r.z;&#xD;
        β = ρ/ρold;&#xD;
        p = z + β p;&#xD;
        ];&#xD;
       Association[&#xD;
        &amp;#034;Solution&amp;#034; -&amp;gt; x,&#xD;
        &amp;#034;Iterations&amp;#034; -&amp;gt; iter,&#xD;
        &amp;#034;Residual&amp;#034; -&amp;gt; δ,&#xD;
        &amp;#034;RelativeResidual&amp;#034; -&amp;gt; Quiet[Check[δ/δ0, ∞]],&#xD;
        &amp;#034;NormalizedResidual&amp;#034; -&amp;gt; Quiet[Check[δ/normb, ∞]]&#xD;
        ]&#xD;
       ];&#xD;
&#xD;
##Weighted Jacobi smoother&#xD;
&#xD;
The [weighted Jacobi method](https://en.wikipedia.org/wiki/Jacobi_method#Convergence_in_the_symmetric_positive_definite_case) is a very simple iterative solver that employs Richardson iterations with the diagonal of the matrix as preconditioner (the matrix must not have any zero elements on the diagonal!) and a bit of damping. In general it works only for diagonally dominant matrices and for positive-definite matrices (if the `&amp;#034;Weight&amp;#034;` is chosen sufficiently small). It&amp;#039;s not really bad, but it is not excellent either. In the test problem below, it usually necessitates one more V-cycle than the CG smoother.&#xD;
&#xD;
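In formulas, one damped Jacobi sweep reads&#xD;
&#xD;
$$x^{(k+1)} = x^{(k)} + \omega \, D^{-1}\left(b - A\,x^{(k)}\right), \qquad D = \operatorname{diag}(A),$$&#xD;
&#xD;
with the default weight $\omega = 2/3$.&#xD;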
    Options[JacobiLinearSolve] = {&#xD;
       &amp;#034;Tolerance&amp;#034; -&amp;gt; 10^(-8),&#xD;
       &amp;#034;StartingVector&amp;#034; -&amp;gt; Automatic,&#xD;
       MaxIterations -&amp;gt; 1000,&#xD;
       &amp;#034;Weight&amp;#034; -&amp;gt; 2./3.&#xD;
       };&#xD;
    &#xD;
    JacobiLinearSolve[A_?SquareMatrixQ, b_?MatrixQ, opts : OptionsPattern[]] := &#xD;
      JacobiLinearSolve[A, #, opts] &amp;amp; /@ Transpose[b]&#xD;
    &#xD;
    JacobiLinearSolve[A_?SquareMatrixQ, b_?VectorQ, OptionsPattern[]] := &#xD;
      Module[{ω, x, r, ωd, dd, iter, δ, δ0, normb, TOL, maxiter},&#xD;
       ω = OptionValue[&amp;#034;Weight&amp;#034;];&#xD;
       maxiter = OptionValue[MaxIterations];&#xD;
       normb = Max[Abs[b]];&#xD;
       TOL = normb OptionValue[&amp;#034;Tolerance&amp;#034;];&#xD;
       If[OptionValue[&amp;#034;StartingVector&amp;#034;] === Automatic,&#xD;
        x = ConstantArray[0., Dimensions[A][[2]]];&#xD;
        r = b;&#xD;
        ,&#xD;
        x = OptionValue[&amp;#034;StartingVector&amp;#034;];&#xD;
        r = b - A.x;&#xD;
        ];&#xD;
       ωd = ω/Normal[Diagonal[A]];&#xD;
       δ = δ0 = Max[Abs[r]];&#xD;
       iter = 0;&#xD;
       While[δ &amp;gt; TOL &amp;amp;&amp;amp; iter &amp;lt; maxiter,&#xD;
        iter++;&#xD;
        x += ωd r;&#xD;
        r = (b - A.x);&#xD;
        δ = Max[Abs[r]];&#xD;
        ];&#xD;
       Association[&#xD;
        &amp;#034;Solution&amp;#034; -&amp;gt; x,&#xD;
        &amp;#034;Iterations&amp;#034; -&amp;gt; iter,&#xD;
        &amp;#034;Residual&amp;#034; -&amp;gt; δ,&#xD;
        &amp;#034;RelativeResidual&amp;#034; -&amp;gt; Quiet[Check[δ/δ0, ∞]],&#xD;
        &amp;#034;NormalizedResidual&amp;#034; -&amp;gt; Quiet[Check[δ/normb, ∞]]&#xD;
        ]&#xD;
       ];&#xD;
&#xD;
##Setting up the solver&#xD;
&#xD;
Next is a function that takes the system matrix `Matrix` and a family of prolongation operators `Prolongations` and creates a `GMGLinearSolveFunction` object. This object contains a linear solving method for the deepest level in the hierarchy, the prolongation operators, and - derived from `Matrix` and `Prolongations` - a linear system matrix for each level in the hierarchy. &#xD;
&#xD;
As is the core idea of Galerkin schemes in FEM, we interpret the system matrix `Matrix` on the finest grid as a linear operator $A_0 \colon X_0 \to X_0&amp;#039;$, where $X_0$ denotes the finite element function space of continuous, piecewise-linear functions on the finest mesh and $X_0&amp;#039;$ denotes its dual space. Denoting the finite element function space on the $i$-th subgrid by $X_i$ and interpreting the prolongation operators in the list `Prolongations` as linear embeddings $J_i \colon X_{i} \hookrightarrow X_{i-1}$, we obtain the linear operators $A_i \colon X_i \to X_i&amp;#039;$ by _Galerkin subspace &amp;lt;s&amp;gt;projection&amp;lt;/s&amp;gt; injection_, i.e. by requiring that the following diagram is commutative:&#xD;
&#xD;
![enter image description here][1]&#xD;
&#xD;
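Concretely, commutativity of the diagram amounts to&#xD;
&#xD;
$$A_i = J_i^\top A_{i-1} J_i,$$&#xD;
&#xD;
which is exactly what the fold `Transpose[#2].#1.#2 &amp;amp;` in the code computes.&#xD;
&#xD;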
By default, `LinearSolve` is used as the solver for the coarsest grid, but the user may specify any other function `F` via the option `&amp;#034;CoarsestGridSolver&amp;#034; -&amp;gt; F`.&#xD;
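&#xD;
For example, to use the Pardiso solver mentioned above on the coarsest level (a sketch; `F` receives the coarsest-level matrix and must return something that can be applied to a right-hand side):&#xD;
&#xD;
    S = GMGLinearSolve[A, J, &amp;#034;CoarsestGridSolver&amp;#034; -&amp;gt; (LinearSolve[#, Method -&amp;gt; &amp;#034;Pardiso&amp;#034;] &amp;amp;)];&#xD;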
&#xD;
Some pretty-printing for the created `GMGLinearSolveFunction` objects is also added.&#xD;
&#xD;
    GMGLinearSolve[&#xD;
       Matrix_?SquareMatrixQ,&#xD;
       Prolongations_List,&#xD;
       OptionsPattern[{&#xD;
         &amp;#034;CoarsestGridSolver&amp;#034; -&amp;gt; LinearSolve&#xD;
         }]&#xD;
       ] := Module[{A},&#xD;
       (*Galerkin subspace projections of the system matrix*)&#xD;
       A = FoldList[Transpose[#2].#1.#2 &amp;amp;, Matrix, Prolongations];&#xD;
       GMGLinearSolveFunction[&#xD;
        Association[&#xD;
         &amp;#034;MatrixHierarchy&amp;#034; -&amp;gt; A,&#xD;
         &amp;#034;Prolongations&amp;#034; -&amp;gt; Prolongations,&#xD;
         &amp;#034;CoarsestGridSolver&amp;#034; -&amp;gt; OptionValue[&amp;#034;CoarsestGridSolver&amp;#034;][A[[-1]]],&#xD;
         &amp;#034;CoarsestGridSolverFunction&amp;#034; -&amp;gt; OptionValue[&amp;#034;CoarsestGridSolver&amp;#034;]&#xD;
         ]&#xD;
        ]&#xD;
       ];&#xD;
&#xD;
    GMGLinearSolveFunction /: MakeBoxes[S_GMGLinearSolveFunction, StandardForm] := &#xD;
      BoxForm`ArrangeSummaryBox[GMGLinearSolveFunction, &amp;#034;&amp;#034;, &#xD;
       BoxForm`GenericIcon[LinearSolveFunction],&#xD;
       {&#xD;
        {&#xD;
         BoxForm`MakeSummaryItem[{&amp;#034;Specified elements: &amp;#034;, &#xD;
           Length[S[[1, &amp;#034;MatrixHierarchy&amp;#034;, 1]][&amp;#034;NonzeroValues&amp;#034;]]}, &#xD;
          StandardForm]&#xD;
         },&#xD;
        {&#xD;
         BoxForm`MakeSummaryItem[{&amp;#034;Dimensions: &amp;#034;, &#xD;
           Dimensions[S[[1, &amp;#034;MatrixHierarchy&amp;#034;, 1]]]}, StandardForm],&#xD;
         BoxForm`MakeSummaryItem[{&amp;#034;Depth: &amp;#034;, &#xD;
           Length[S[[1, &amp;#034;MatrixHierarchy&amp;#034;]]]}, StandardForm]&#xD;
         }&#xD;
        },&#xD;
       {&#xD;
        BoxForm`MakeSummaryItem[{&#xD;
          Invisible[&amp;#034;Dimensions: &amp;#034;],&#xD;
          Column[Dimensions /@ S[[1, &amp;#034;MatrixHierarchy&amp;#034;, 2 ;;]]]},&#xD;
         StandardForm],&#xD;
        BoxForm`MakeSummaryItem[{&#xD;
          &amp;#034;CoarsestGridSolver: &amp;#034;,&#xD;
          S[[1, &amp;#034;CoarsestGridSolverFunction&amp;#034;]]&#xD;
          }, StandardForm]&#xD;
        },&#xD;
       StandardForm, &amp;#034;Interpretable&amp;#034; -&amp;gt; False&#xD;
       ];&#xD;
&#xD;
##The solver&#xD;
&#xD;
The following is the actual V-cycle solver. To my own surprise, the algorithm was not that hard to implement. As always, most of the work had to be invested into the user interface (and it&amp;#039;s still not complete, as it lacks bulletproofing against 1D-10T errors).&#xD;
&#xD;
Actually, this V-cycle solver is a purely algebraic solver (AMG); the _geometry_ in &amp;#034;geometric multigrid solver&amp;#034; lies in the way the matrix hierarchy was constructed (namely by geometrically nested grids and Galerkin subspace methods).&#xD;
&#xD;
    Options[GMGLinearSolveFunction] = {&#xD;
       &amp;#034;StartingVector&amp;#034; -&amp;gt; Automatic,&#xD;
       &amp;#034;Tolerance&amp;#034; -&amp;gt; 1. 10^-8,&#xD;
       &amp;#034;MaxIterations&amp;#034; -&amp;gt; 25,&#xD;
       &amp;#034;StartingVectorSmoothingCounts&amp;#034; -&amp;gt; 12,&#xD;
       &amp;#034;PreSmoothingCounts&amp;#034; -&amp;gt; 8,&#xD;
       &amp;#034;PostSmoothingCounts&amp;#034; -&amp;gt; 8,&#xD;
       &amp;#034;Smoother&amp;#034; -&amp;gt; Function[&#xD;
         {A, b, x0, ν, p},&#xD;
         (&#xD;
          CGLinearSolve[A, b,&#xD;
            MaxIterations -&amp;gt; ν,&#xD;
            &amp;#034;StartingVector&amp;#034; -&amp;gt; x0,&#xD;
            &amp;#034;Tolerance&amp;#034; -&amp;gt; 10^-12&#xD;
            ][&amp;#034;Solution&amp;#034;]&#xD;
          )&#xD;
         ],&#xD;
       &amp;#034;SmootherParameters&amp;#034; -&amp;gt; None&#xD;
       };&#xD;
    &#xD;
    GMGLinearSolveFunction /: GMGLinearSolveFunction[a_Association][&#xD;
       Rhs_?VectorQ,&#xD;
       opts0 : OptionsPattern[]&#xD;
       ] := With[{&#xD;
        J = a[&amp;#034;Prolongations&amp;#034;],&#xD;
        A = a[&amp;#034;MatrixHierarchy&amp;#034;],&#xD;
        Asol = a[&amp;#034;CoarsestGridSolver&amp;#034;]&#xD;
        },&#xD;
    Module[{smoother, Rhsnorm, p, n, v, f, depth, allocationtime, startingvector, startingvectortime, solvetime, startingvectorresidual, residual, ν0, ν1, ν2, tol, maxiter, iter, opts},&#xD;
        opts = Merge[{&#xD;
           Options[GMGLinearSolveFunction],&#xD;
           opts0&#xD;
           }, Last&#xD;
          ];&#xD;
        n = Length /@ A;&#xD;
        depth = Length[n];&#xD;
        &#xD;
        smoother = opts[&amp;#034;Smoother&amp;#034;];&#xD;
        p = opts[&amp;#034;SmootherParameters&amp;#034;];&#xD;
        If[p === None, p = ConstantArray[{}, depth];];&#xD;
        &#xD;
        (* allocate memory for computations *)&#xD;
        &#xD;
        allocationtime = AbsoluteTiming[&#xD;
           v = ConstantArray[0., #] &amp;amp; /@ n;&#xD;
           f = Join[{Rhs}, ConstantArray[0., #] &amp;amp; /@ Most[n]];&#xD;
           ][[1]];&#xD;
        &#xD;
        (* compute starting vector *)&#xD;
        &#xD;
        startingvectortime = AbsoluteTiming[&#xD;
           If[VectorQ[opts[&amp;#034;StartingVector&amp;#034;]],&#xD;
            v[[1]] = opts[&amp;#034;StartingVector&amp;#034;];&#xD;
            ,&#xD;
        If[opts[&amp;#034;StartingVector&amp;#034;] =!= &amp;#034;Null&amp;#034;, opts[&amp;#034;StartingVector&amp;#034;] = Automatic];];&#xD;
           &#xD;
           If[opts[&amp;#034;StartingVector&amp;#034;] === Automatic,&#xD;
            Module[{b},&#xD;
          ν0 = opts[&amp;#034;StartingVectorSmoothingCounts&amp;#034;];&#xD;
          If[! ListQ[ν0], ν0 = If[IntegerQ[ν0], ConstantArray[ν0, Length[n] - 1], ν0 /@ Range[depth]]];&#xD;
              b = FoldList[#1.#2 &amp;amp;, Rhs, J];&#xD;
              v[[depth]] = Asol[b[[depth]]];&#xD;
              &#xD;
          Do[v[[i]] = smoother[A[[i]], b[[i]], J[[i]].v[[i + 1]], ν0[[i]], p[[i]]], {i, depth - 1, 1, -1}];&#xD;
              ];&#xD;
            ,&#xD;
         ν0 = None;&#xD;
            ];&#xD;
           ][[1]];&#xD;
        startingvector = v[[1]];&#xD;
        residual = startingvectorresidual = Max[Abs[Rhs - A[[1]].startingvector]];&#xD;
        &#xD;
        (* perform V-cycles until tolerance is met *)&#xD;
        &#xD;
        solvetime = AbsoluteTiming[&#xD;
        ν1 = opts[&amp;#034;PreSmoothingCounts&amp;#034;];&#xD;
        If[! ListQ[ν1], ν1 = If[IntegerQ[ν1], ConstantArray[ν1, Length[n] - 1], ν1 /@ Range[depth]]];&#xD;
        ν2 = opts[&amp;#034;PostSmoothingCounts&amp;#034;];&#xD;
        If[! ListQ[ν2], ν2 = If[IntegerQ[ν2], ConstantArray[ν2, Length[n] - 1], ν2 /@ Range[depth]]];&#xD;
           Rhsnorm = Max[Abs[Rhs]];&#xD;
           tol = opts[&amp;#034;Tolerance&amp;#034;] Rhsnorm;&#xD;
           maxiter = opts[&amp;#034;MaxIterations&amp;#034;];&#xD;
           iter = 0;&#xD;
           While[&#xD;
            residual &amp;gt; tol &amp;amp;&amp;amp; iter &amp;lt; maxiter,&#xD;
            iter++;&#xD;
            Do[&#xD;
          v[[i]] = smoother[A[[i]], f[[i]], N[Boole[i == 1]] v[[i]], ν1[[i]], p[[i]]];&#xD;
             f[[i + 1]] = (f[[i]] - A[[i]].v[[i]]).J[[i]],&#xD;
             {i, 1, depth - 1}];&#xD;
            &#xD;
            (* solve at deepest level with &amp;#034;CoarsestGridSolver&amp;#034; *)&#xD;
           &#xD;
             v[[depth]] = Asol[f[[depth]]];&#xD;
            &#xD;
            Do[&#xD;
          v[[i]] = smoother[A[[i]], f[[i]], v[[i]] + J[[i]].v[[i + 1]], ν2[[i]], p[[i]]],&#xD;
             {i, depth - 1, 1, -1}];&#xD;
            residual = Max[Abs[Subtract[Rhs, A[[1]].v[[1]]]]];&#xD;
            ];&#xD;
           ][[1]];&#xD;
        &#xD;
        Association[&#xD;
         &amp;#034;Solution&amp;#034; -&amp;gt; v[[1]],&#xD;
         &amp;#034;StartingVectorResidual&amp;#034; -&amp;gt; startingvectorresidual,&#xD;
         &amp;#034;StartingVectorNormalizedResidual&amp;#034; -&amp;gt; &#xD;
          startingvectorresidual/Rhsnorm,&#xD;
         &amp;#034;Residual&amp;#034; -&amp;gt; residual,&#xD;
         &amp;#034;NormalizedResidual&amp;#034; -&amp;gt; residual/Rhsnorm,&#xD;
         &amp;#034;SuccessQ&amp;#034; -&amp;gt; residual &amp;lt; tol,&#xD;
         &amp;#034;Timings&amp;#034; -&amp;gt; Dataset@Association[&#xD;
            &amp;#034;Total&amp;#034; -&amp;gt; allocationtime + startingvectortime + solvetime,&#xD;
            &amp;#034;Allocation&amp;#034; -&amp;gt; allocationtime,&#xD;
            &amp;#034;StartingVector&amp;#034; -&amp;gt; startingvectortime,&#xD;
            &amp;#034;V-Cycle&amp;#034; -&amp;gt; solvetime&#xD;
            ],&#xD;
         &amp;#034;V-CycleCount&amp;#034; -&amp;gt; iter,&#xD;
      &amp;#034;SmoothingCounts&amp;#034; -&amp;gt; Dataset@Association[&#xD;
         &amp;#034;StartingVector&amp;#034; -&amp;gt; {ν0},&#xD;
         &amp;#034;Pre&amp;#034; -&amp;gt; {ν1},&#xD;
         &amp;#034;Post&amp;#034; -&amp;gt; {ν2}&#xD;
            ],&#xD;
         &amp;#034;StartingVector&amp;#034; -&amp;gt; startingvector,&#xD;
         &amp;#034;Smoother&amp;#034; -&amp;gt; smoother,&#xD;
         &amp;#034;Depth&amp;#034; -&amp;gt; depth&#xD;
         ]&#xD;
        ]&#xD;
       ];&#xD;
&#xD;
#Applications&#xD;
&#xD;
##2D Example&#xD;
&#xD;
What we need now is a test case! _Just by chance_, I have recently updated my [Loop subdivision routine](http://community.wolfram.com/groups/-/m/t/1338790) such that it also returns the subdivision matrix if we ask kindly. We can use these subdivision matrices as prolongation operators!&#xD;
&#xD;
So, let&amp;#039;s start with a rather coarse mesh on the unit disk and refine it by Loop subdivision (you will need the code for [LoopSubdivide](http://community.wolfram.com/groups/-/m/t/1338790) if you want to try this):&#xD;
&#xD;
    R = DiscretizeRegion[Disk[], MaxCellMeasure -&amp;gt; 0.001];&#xD;
    depth = 5;&#xD;
    {R, J} = {Last[#1], Reverse[Rest[#2]]} &amp;amp; @@ &#xD;
       Transpose@NestList[LoopSubdivide, {R, {{0}}}, depth - 1];&#xD;
&#xD;
Let&amp;#039;s solve the following elliptic problem with Neumann boundary conditions on the disk $\varOmega$:&#xD;
&#xD;
$$\begin{array}{rcll}&#xD;
 (\varepsilon - \Delta) \, u &amp;amp;= &amp;amp;f, &amp;amp; \text{in $\varOmega\setminus \partial \varOmega$,}\\&#xD;
 \partial_\nu \, u&amp;amp;= &amp;amp;0, &amp;amp; \text{on $\partial \varOmega$.}&#xD;
\end{array}$$&#xD;
&#xD;
We can use Mathematica&amp;#039;s FEM capabilities to assemble the system matrix and the right-hand side for us:&#xD;
&#xD;
    f = X \[Function] 16. Sinc[4. Pi Sqrt[Abs[Dot[X + 0.5, X + 0.5]]]] - 16. Sinc[4. Pi Sqrt[Abs[Dot[X - 0.5, X - 0.5]]]] + N[Sign[X[[2]]]] + N[Sign[X[[1]]]];&#xD;
    fvec = Map[f, MeshCoordinates[R]];&#xD;
    &#xD;
    Needs[&amp;#034;NDSolve`FEM`&amp;#034;]&#xD;
    Rdiscr = ToElementMesh[&#xD;
       R,&#xD;
       &amp;#034;MeshOrder&amp;#034; -&amp;gt; 1,&#xD;
       &amp;#034;NodeReordering&amp;#034; -&amp;gt; False,&#xD;
       MeshQualityGoal -&amp;gt; 0&#xD;
       ];&#xD;
    vd = NDSolve`VariableData[{&amp;#034;DependentVariables&amp;#034;, &amp;#034;Space&amp;#034;} -&amp;gt; {{u}, {x, y}}];&#xD;
    sd = NDSolve`SolutionData[{&amp;#034;Space&amp;#034;} -&amp;gt; {Rdiscr}];&#xD;
    cdata = InitializePDECoefficients[vd, sd,&#xD;
       &amp;#034;DiffusionCoefficients&amp;#034; -&amp;gt; {{-IdentityMatrix[2]}}, &#xD;
       &amp;#034;MassCoefficients&amp;#034; -&amp;gt; {{1}}, &amp;#034;LoadCoefficients&amp;#034; -&amp;gt; {{f[{x, y}]}}];&#xD;
    bcdata = InitializeBoundaryConditions[vd, sd, {DirichletCondition[u[x, y] == 0., True]}];&#xD;
    mdata = InitializePDEMethodData[vd, sd];&#xD;
    dpde = DiscretizePDE[cdata, mdata, sd];&#xD;
    &#xD;
    {b, L, damping, M} = dpde[&amp;#034;All&amp;#034;];&#xD;
    b = Flatten[Normal[b]];&#xD;
    A = L + 0.0001 M;&#xD;
&#xD;
Now we create a `GMGLinearSolveFunction` object and solve the equation:&#xD;
&#xD;
    S = GMGLinearSolve[A, J]; // AbsoluteTiming // First&#xD;
    xGMG = S[b,&#xD;
          &amp;#034;Tolerance&amp;#034; -&amp;gt; 1. 10^-8,&#xD;
          &amp;#034;StartingVectorSmoothingCounts&amp;#034; -&amp;gt; 12,&#xD;
          &amp;#034;PreSmoothingCounts&amp;#034; -&amp;gt; 8,&#xD;
          &amp;#034;PostSmoothingCounts&amp;#034; -&amp;gt; 8&#xD;
          ][&amp;#034;Solution&amp;#034;]; // AbsoluteTiming // First&#xD;
&#xD;
&amp;gt; 0.835408&#xD;
&amp;gt; &#xD;
&amp;gt; 1.04969&#xD;
&#xD;
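To make concrete what one cycle of `S` does conceptually (smooth, restrict the residual, solve on the coarse grid, prolong the correction back, smooth again), here is a minimal pure-Python two-grid sketch for a 1D Poisson model problem. All names are made up for illustration; this is not the `GMGLinearSolve` code above, which handles arbitrary grid hierarchies and sparse matrices.

```python
# Minimal two-grid cycle for 1D Poisson with zero Dirichlet ends:
# smooth / restrict / coarse-solve / prolong / smooth.
# Hypothetical helper names, illustration only.

N, H_FINE = 7, 1.0 / 8.0   # 7 interior points on [0, 1]

def matvec(u):
    # apply the tridiagonal (-1, 2, -1)/h^2 fine-grid operator
    out = []
    for i in range(N):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if N - 1 > i else 0.0
        out.append((2.0 * u[i] - left - right) / (H_FINE * H_FINE))
    return out

def smooth(u, f, sweeps, omega=2.0 / 3.0):
    # damped Jacobi; omega = 2/3 is the classical smoothing choice in 1D
    for _ in range(sweeps):
        au = matvec(u)
        u = [u[i] + omega * 0.5 * H_FINE * H_FINE * (f[i] - au[i])
             for i in range(N)]
    return u

def restrict(r):
    # full weighting onto the 3 coarse interior points
    return [0.25 * r[2 * j] + 0.5 * r[2 * j + 1] + 0.25 * r[2 * j + 2]
            for j in range(3)]

def prolong(c):
    # linear interpolation back to the 7 fine interior points
    f = [0.0] * N
    for j in range(3):
        f[2 * j + 1] = c[j]
    f[0], f[6] = 0.5 * c[0], 0.5 * c[2]
    f[2], f[4] = 0.5 * (c[0] + c[1]), 0.5 * (c[1] + c[2])
    return f

def coarse_solve(fc):
    # direct solve with the exact inverse of the 3x3 coarse operator
    inv = [[0.75, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.75]]
    return [0.0625 * sum(inv[i][j] * fc[j] for j in range(3))
            for i in range(3)]

def two_grid(u, f, pre=2, post=2):
    u = smooth(u, f, pre)                             # pre-smoothing
    r = [f[i] - v for i, v in enumerate(matvec(u))]   # residual
    e = prolong(coarse_solve(restrict(r)))            # coarse correction
    return smooth([u[i] + e[i] for i in range(N)], f, post)
```

Iterating `two_grid` contracts the residual by a roughly constant factor per cycle, independent of the mesh size; that mesh independence is why so few V-cycles suffice above.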
&#xD;
###Timings&#xD;
&#xD;
Here are the timings for some other solvers:&#xD;
&#xD;
    xKrylov = LinearSolve[A, b, Method -&amp;gt; {&#xD;
           &amp;#034;Krylov&amp;#034;,&#xD;
           &amp;#034;Method&amp;#034; -&amp;gt; &amp;#034;ConjugateGradient&amp;#034;,&#xD;
           &amp;#034;Preconditioner&amp;#034; -&amp;gt; &amp;#034;ILU0&amp;#034;&#xD;
           }]; // AbsoluteTiming // First&#xD;
    xTAUCS = LinearSolve[A, b, &amp;#034;Method&amp;#034; -&amp;gt; &amp;#034;Cholesky&amp;#034;]; // AbsoluteTiming // First&#xD;
    xUMFPACK = LinearSolve[A, b]; // AbsoluteTiming // First&#xD;
    xPardiso = LinearSolve[A, b, &amp;#034;Method&amp;#034; -&amp;gt; &amp;#034;Pardiso&amp;#034;]; // AbsoluteTiming // First&#xD;
&#xD;
&amp;gt; 67.948&#xD;
&amp;gt; &#xD;
&amp;gt; 6.89134&#xD;
&amp;gt; &#xD;
&amp;gt; 6.0961&#xD;
&amp;gt;&#xD;
&amp;gt; 2.30715&#xD;
&#xD;
Three things to observe here: &#xD;
&#xD;
 1. Mathematica&amp;#039;s `&amp;#034;ConjugateGradient&amp;#034;` is the _absolute_ loser here. (But don&amp;#039;t ask me for `&amp;#034;GMRES&amp;#034;` or `&amp;#034;BiCGSTAB&amp;#034;`; I was not in the mood to wait for them.) &#xD;
&#xD;
 2. `&amp;#034;Cholesky&amp;#034;` cannot convert its limitation to positive-definite matrices into any advantage. That&amp;#039;s also why I never use it.&#xD;
&#xD;
 3. `GMGLinearSolve` is actually a bit faster than `&amp;#034;Pardiso&amp;#034;`.&#xD;
&#xD;
###Errors&#xD;
&#xD;
Here are the errors; I use UMFPACK&amp;#039;s solution as &amp;#034;ground truth&amp;#034; (it doesn&amp;#039;t matter which one we choose, though).&#xD;
&#xD;
    Max[Abs[xUMFPACK - xGMG]]&#xD;
    Max[Abs[xUMFPACK - xTAUCS]]&#xD;
    Max[Abs[xUMFPACK - xPardiso]]&#xD;
    Max[Abs[xUMFPACK - xKrylov]]&#xD;
&#xD;
&amp;gt; 3.90012*10^-10&#xD;
&amp;gt; &#xD;
&amp;gt; 1.14953*10^-9&#xD;
&amp;gt;&#xD;
&amp;gt; 2.45955*10^-10&#xD;
&amp;gt; &#xD;
&amp;gt; 6.27234*10^-10&#xD;
&#xD;
&#xD;
They all have comparable accuracy. So, this simple multigrid solver, implemented within a long afternoon, seems to be at least on par with Pardiso. Not bad, is it?&#xD;
&#xD;
&#xD;
###Multiple solves are still faster with direct solvers on 2D grids&#xD;
&#xD;
Once the factorizations of the direct solvers are computed and stored in `LinearSolveFunction` objects, the actual solves (triangular forward and backward substitutions) are _much_ faster. Note that this repeated-solve scenario is not necessarily the one that iterative methods are designed for.&#xD;
Anyway, here are some timings:&#xD;
&#xD;
    solUMFPACK = Quiet[LinearSolve[A]]; // AbsoluteTiming // First&#xD;
    xUMFPACK = solUMFPACK[b]; // AbsoluteTiming // First&#xD;
    solTAUCS = LinearSolve[A, &amp;#034;Method&amp;#034; -&amp;gt; &amp;#034;Cholesky&amp;#034;]; // AbsoluteTiming // First&#xD;
    xTAUCS = solTAUCS[b]; // AbsoluteTiming // First&#xD;
    solPardiso = LinearSolve[A, &amp;#034;Method&amp;#034; -&amp;gt; &amp;#034;Pardiso&amp;#034;]; // AbsoluteTiming // First&#xD;
    xPardiso = solPardiso[b]; // AbsoluteTiming // First&#xD;
&#xD;
&amp;gt; 6.07364&#xD;
&amp;gt; &#xD;
&amp;gt; 0.142823&#xD;
&amp;gt; &#xD;
&amp;gt; 7.28346&#xD;
&amp;gt; &#xD;
&amp;gt; 0.183195&#xD;
&amp;gt; &#xD;
&amp;gt; 2.13817&#xD;
&amp;gt; &#xD;
&amp;gt; 0.236214&#xD;
&#xD;
Note that I used `Quiet` for UMFPACK because it complains about a bad condition number of the system, and the error handling would add about 20(!) seconds to the timings. There is, however, no problem with the numerical errors:&#xD;
&#xD;
    Max[Abs[xGMG - xUMFPACK]]&#xD;
    Max[Abs[xGMG - xTAUCS]]&#xD;
    Max[Abs[xGMG - xPardiso]]&#xD;
&#xD;
    3.90012*10^-10&#xD;
    &#xD;
    7.59533*10^-10&#xD;
    &#xD;
    1.44077*10^-10&#xD;
&#xD;
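The pattern these timings exploit is: pay once for the factorization, then every further right-hand side costs only two cheap triangular solves. A toy pure-Python sketch of that pattern (plain Doolittle LU without pivoting; the production solvers above use pivoted sparse variants, so this is illustration only):

```python
# Factor once, solve many: a toy dense LU factorization whose
# triangular solves are reused for each new right-hand side.

def lu_factor(a):
    # Doolittle LU without pivoting, stored in one combined matrix
    n = len(a)
    lu = [row[:] for row in a]          # work on a copy
    for k in range(n):
        for i in range(k + 1, n):
            lu[i][k] = lu[i][k] / lu[k][k]
            for j in range(k + 1, n):
                lu[i][j] = lu[i][j] - lu[i][k] * lu[k][j]
    return lu

def lu_solve(lu, b):
    # forward substitution (unit lower triangle), then backward (upper)
    n = len(lu)
    y = b[:]
    for i in range(n):
        for j in range(i):
            y[i] = y[i] - lu[i][j] * y[j]
    x = y[:]
    for i in reversed(range(n)):
        for j in range(i + 1, n):
            x[i] = x[i] - lu[i][j] * x[j]
        x[i] = x[i] / lu[i][i]
    return x

A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
factors = lu_factor(A)                  # the expensive part, done once
rhss = ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
solutions = [lu_solve(factors, b) for b in rhss]
```

The triangular solves cost O(n^2) for a dense matrix (and roughly the number of stored nonzeros in the sparse case), versus O(n^3)-ish for the factorization itself, which is why the second timing of each pair above is so much smaller.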
##3D Example&#xD;
&#xD;
The problem with direct solvers is that, starting in 3 dimensions, their performance on matrices stemming from PDEs drops rapidly. This is why I wanted to show at least one 3-dimensional example. As there is no immediate analogue of Loop subdivision for tetrahedral meshes, I use hexahedral meshes instead.&#xD;
&#xD;
###Preparation&#xD;
&#xD;
Here are some helper functions for generating the grid hierarchy and the prolongation mappings. Prolongation is performed by hex-wise trilinear interpolation. This fits well since we are going to use first-order meshes.&#xD;
&#xD;
    Needs[&amp;#034;NDSolve`FEM`&amp;#034;]&#xD;
    cubemesh[n_] := Module[{R},&#xD;
      R = ArrayMesh[ConstantArray[1, ConstantArray[n, 3]], DataRange -&amp;gt; ConstantArray[{0, 1}, 3]];&#xD;
      ToElementMesh[&#xD;
       &amp;#034;Coordinates&amp;#034; -&amp;gt; MeshCoordinates[R],&#xD;
       &amp;#034;MeshElements&amp;#034; -&amp;gt; {HexahedronElement[ MeshCells[R, 3, &amp;#034;Multicells&amp;#034; -&amp;gt; True][[1, 1]]]},&#xD;
       &amp;#034;MeshOrder&amp;#034; -&amp;gt; 1, &amp;#034;NodeReordering&amp;#034; -&amp;gt; False, MeshQualityGoal -&amp;gt; 0&#xD;
       ]&#xD;
      ]&#xD;
    &#xD;
    getEdges = Compile[{{i, _Integer}, {idx, _Integer, 1}},&#xD;
       Table[{i, Compile`GetElement[idx, j]}, {j, 1, Length[idx]}],&#xD;
       CompilationTarget -&amp;gt; &amp;#034;C&amp;#034;,&#xD;
       RuntimeAttributes -&amp;gt; {Listable},&#xD;
       Parallelization -&amp;gt; True,&#xD;
       RuntimeOptions -&amp;gt; &amp;#034;Speed&amp;#034;&#xD;
       ];&#xD;
    &#xD;
    getProlongationOperator[Rfine_ElementMesh, Rcoarse_ElementMesh, h_] :=&#xD;
       Module[{pfine, pcoarse},&#xD;
       pfine = Rfine[&amp;#034;Coordinates&amp;#034;];&#xD;
       pcoarse = Rcoarse[&amp;#034;Coordinates&amp;#034;];&#xD;
       #/Total[#, {2}] &amp;amp;@SparseArray[&#xD;
         Join @@ getEdges[&#xD;
            Range[Length[pfine]],&#xD;
         Nearest[pcoarse -&amp;gt; Automatic, pfine, {∞, h 1.1}, &#xD;
             DistanceFunction -&amp;gt; ChessboardDistance]&#xD;
            ] -&amp;gt; 1.,&#xD;
         {Length[pfine], Length[pcoarse]}, 0.&#xD;
         ]&#xD;
       ];&#xD;
&#xD;
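The radius-search idea in `getProlongationOperator` is easy to see in one dimension: every fine node collects all coarse nodes within Chebyshev distance `1.1 h` with unit weight, and the row is then normalized to sum to 1. On nested uniform grids this reproduces piecewise-linear interpolation. A small pure-Python sketch (hypothetical names, dict-of-rows sparse format):

```python
# 1D analogue of the prolongation construction: unit weights to all
# coarse nodes within radius 1.1*h, rows normalized to sum to 1.

def prolongation_rows(fine, coarse, h):
    rows = []
    for x in fine:
        cols = [j for j, y in enumerate(coarse) if 1.1 * h > abs(x - y)]
        weight = 1.0 / len(cols)            # row normalization
        rows.append({j: weight for j in cols})
    return rows

def apply_rows(rows, values):
    # sparse matrix-vector product, row by row
    return [sum(w * values[j] for j, w in row.items()) for row in rows]

# nested uniform grids: fine spacing h, coarse spacing 2h
h = 1.0 / 8.0
fine = [i * h for i in range(9)]
coarse = [i * 2 * h for i in range(5)]
P = prolongation_rows(fine, coarse, h)
```

Fine nodes that coincide with a coarse node get a single unit weight, while fine midpoints average their two coarse neighbors, so prolonging a linear function from the coarse grid reproduces it exactly; in 3D the same construction yields the trilinear weights (1/2 on edges, 1/4 on faces, 1/8 at cell centers).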
This creates the actual grid hierarchy and the prolongation mappings.&#xD;
&#xD;
    dList = Range[6, 2, -1];&#xD;
    nList = 2^dList;&#xD;
    hList = 1./(2^(dList));&#xD;
    RList = cubemesh /@ nList; // AbsoluteTiming // First&#xD;
    J = Table[ getProlongationOperator[RList[[i]], RList[[i + 1]], &#xD;
          hList[[i]]], {i, 1, Length[RList] - 1}]; // AbsoluteTiming // First&#xD;
&#xD;
&amp;gt; 3.84804&#xD;
&amp;gt; &#xD;
&amp;gt; 0.603694&#xD;
&#xD;
Again, we solve a linear elliptic problem with homogeneous Neumann boundary conditions (this is easier to implement than Dirichlet conditions). Moreover, I thought it would be a good idea to prescribe an analytic solution, so that we can discretize the corresponding right-hand side, solve the discretized PDE, and compare with the analytic solution in the end. (Note that it is essential that `v` below satisfies the homogeneous Neumann boundary conditions.)&#xD;
&#xD;
    ε = 1.;&#xD;
    Quiet[&#xD;
      XX = {X[[1]], X[[2]], X[[3]]};&#xD;
      v = X \[Function] Cos[5 Pi X[[1]]] Cos[Pi X[[2]]] Cos[3 Pi X[[3]]];&#xD;
      Δv = X \[Function] Evaluate[Tr[D[v[XX], {XX, 2}]]];&#xD;
      f = X \[Function] Evaluate[ε v[XX] - Δv[XX]]&#xD;
      ];&#xD;
    &#xD;
    vd = NDSolve`VariableData[{&amp;#034;DependentVariables&amp;#034;, &amp;#034;Space&amp;#034;} -&amp;gt; {{u}, {x, y, z}}];&#xD;
    sd = NDSolve`SolutionData[{&amp;#034;Space&amp;#034;} -&amp;gt; {RList[[1]]}];&#xD;
    cdata = InitializePDECoefficients[vd, sd,&#xD;
       &amp;#034;DiffusionCoefficients&amp;#034; -&amp;gt; {{-IdentityMatrix[3]}},&#xD;
       &amp;#034;MassCoefficients&amp;#034; -&amp;gt; {{1}},&#xD;
       &amp;#034;LoadCoefficients&amp;#034; -&amp;gt; {{f[{x, y, z}]}}&#xD;
       ];&#xD;
    bcdata = InitializeBoundaryConditions[vd, sd, {DirichletCondition[u[x, y, z] == 0., True]}];&#xD;
    mdata = InitializePDEMethodData[vd, sd];&#xD;
    dpde = DiscretizePDE[cdata, mdata, sd]; // AbsoluteTiming // First&#xD;
    &#xD;
    {b, L, damping, M} = dpde[&amp;#034;All&amp;#034;];&#xD;
    b = Flatten[Normal[b]];&#xD;
    A = L + ε M;&#xD;
&#xD;
&amp;gt; 2.21493&#xD;
&#xD;
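A quick sanity check of the manufactured right-hand side (assuming ε = 1): since each factor of `v` is a cosine, the Laplacian is Δv = -(25 + 1 + 9) π² v, so f = (1 + 35 π²) v. This identity is easy to confirm numerically with central differences (pure Python, illustration only):

```python
import math

# Spot check: for v = cos(5 pi x) cos(pi y) cos(3 pi z) the Laplacian
# equals -(25 + 1 + 9) pi^2 v, hence f = (eps + 35 pi^2) v for eps = 1.

def v(x, y, z):
    return (math.cos(5 * math.pi * x) * math.cos(math.pi * y)
            * math.cos(3 * math.pi * z))

def laplacian_fd(x, y, z, h=1e-4):
    # second-order central differences in each coordinate direction
    c = v(x, y, z)
    return ((v(x + h, y, z) + v(x - h, y, z) - 2 * c)
            + (v(x, y + h, z) + v(x, y - h, z) - 2 * c)
            + (v(x, y, z + h) + v(x, y, z - h) - 2 * c)) / (h * h)
```

At x = 0 and x = 1 the derivative factor sin(5πx) vanishes (and likewise in y and z), which is exactly the homogeneous Neumann condition the text requires of `v`.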
The finest mesh `RList[[1]]` consists of `262144` hexahedra. The system matrix has size `{274625, 274625}` and contains `7189057` nonzero values.&#xD;
&#xD;
###Timings&#xD;
&#xD;
Let&amp;#039;s get to the timings. This time, we see that the conjugate gradient solver (with &amp;#034;ILU0&amp;#034; preconditioner) performs _much_ better than the direct solvers:&#xD;
&#xD;
    xUMFPACK = LinearSolve[A, b]; // AbsoluteTiming // First&#xD;
    xPardiso = LinearSolve[A, b, Method -&amp;gt; &amp;#034;Pardiso&amp;#034;]; // AbsoluteTiming // First&#xD;
    solCG = CGLinearSolve[A, b,&#xD;
         &amp;#034;Tolerance&amp;#034; -&amp;gt; 1. 10^-6,&#xD;
         &amp;#034;Preconditioner&amp;#034; -&amp;gt; &amp;#034;ILU0&amp;#034;]; // AbsoluteTiming // First&#xD;
    xCG = solCG[&amp;#034;Solution&amp;#034;];&#xD;
&#xD;
&amp;gt; 141.175&#xD;
&amp;gt; &#xD;
&amp;gt; 32.0759&#xD;
&amp;gt;&#xD;
&amp;gt; 1.70319&#xD;
&#xD;
I&amp;#039;d like to point out that a major part of the time needed by UMFPACK is due to the OS&amp;#039;s memory management (I have only 16 GB of RAM installed).&#xD;
&#xD;
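The conjugate gradient iteration itself is only a few lines. Here is a minimal pure-Python sketch of the unpreconditioned method (not the `CGLinearSolve` above, which additionally applies the `&amp;#034;ILU0&amp;#034;` preconditioner):

```python
# Plain conjugate gradient for a symmetric positive-definite system.
# Illustration only: dense Python lists, no preconditioner.

def cg(matvec, b, tol=1e-10, maxiter=200):
    x = [0.0] * len(b)
    r = b[:]                    # residual b - A x for x = 0
    p = r[:]                    # search direction
    rs = sum(v * v for v in r)
    for _ in range(maxiter):
        ap = matvec(p)
        alpha = rs / sum(u * v for u, v in zip(p, ap))
        x = [u + alpha * v for u, v in zip(x, p)]
        r = [u - alpha * v for u, v in zip(r, ap)]
        rs_new = sum(v * v for v in r)
        if tol * tol > rs_new:  # converged: residual norm below tol
            break
        p = [u + (rs_new / rs) * v for u, v in zip(r, p)]
        rs = rs_new
    return x
```

A preconditioner such as ILU0 replaces the raw residual by an approximate solve with the preconditioning matrix before updating the search direction; that is what makes the method competitive on stiff FEM matrices like this one.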
With slightly fine-tuned parameters, the geometric multigrid solver performs even better:&#xD;
&#xD;
    S = GMGLinearSolve[A, J]; // AbsoluteTiming // First&#xD;
    solGMG = S[b, &amp;#034;Tolerance&amp;#034; -&amp;gt; 1. 10^-4,&#xD;
         &amp;#034;StartingVectorSmoothingCounts&amp;#034; -&amp;gt; 6,&#xD;
         &amp;#034;PreSmoothingCounts&amp;#034; -&amp;gt; 4,&#xD;
         &amp;#034;PostSmoothingCounts&amp;#034; -&amp;gt; 4&#xD;
         ]; // AbsoluteTiming // First&#xD;
    xGMG = solGMG[&amp;#034;Solution&amp;#034;];&#xD;
&#xD;
&amp;gt; 0.405304&#xD;
&amp;gt; &#xD;
&amp;gt; 0.242293&#xD;
&#xD;
You can inspect `Dataset@solGMG` and see for yourself that only two(!) V-cycles were needed. So `S` can solve the same equation with hundreds(!) of right-hand sides before Pardiso has finished its factorization and started the first actual solve. So in practice, even the `LinearSolveFunction` objects generated by `LinearSolve` will hardly make up for this difference.&#xD;
&#xD;
###Errors&#xD;
&#xD;
    Max[Abs[xUMFPACK - xPardiso]]&#xD;
    Max[Abs[xUMFPACK - xCG]]&#xD;
    Max[Abs[xUMFPACK - xGMG]]&#xD;
&#xD;
&amp;gt; 1.42109*10^-14&#xD;
&amp;gt; &#xD;
&amp;gt; 6.63873*10^-6&#xD;
&amp;gt; &#xD;
&amp;gt; 3.34638*10^-6&#xD;
&#xD;
At first glance, this does not look that good for the iterative solvers, _but_ when the discretization error is also taken into account, these errors are negligible:&#xD;
&#xD;
    xTrue = Map[v, RList[[1]][&amp;#034;Coordinates&amp;#034;]];&#xD;
    Max[Abs[xTrue - xPardiso]]/Max[Abs[xTrue]]&#xD;
    Max[Abs[xTrue - xCG]]/Max[Abs[xTrue]]&#xD;
    Max[Abs[xTrue - xGMG]]/Max[Abs[xTrue]]&#xD;
&#xD;
&amp;gt; 0.00298862&#xD;
&amp;gt; &#xD;
&amp;gt; 0.00299525&#xD;
&amp;gt; &#xD;
&amp;gt; 0.00298605&#xD;
&#xD;
&#xD;
  [1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=a.png&amp;amp;userId=1338740</description>
    <dc:creator>Henrik Schumacher</dc:creator>
    <dc:date>2018-06-04T18:47:18Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/792601">
    <title>[GIF] Transitions (Cross sections of the rhombic icosahedron)</title>
    <link>https://community.wolfram.com/groups/-/m/t/792601</link>
    <description>![Cross sections of the rhombic icosahedron][1]&#xD;
&#xD;
**Transitions**&#xD;
&#xD;
I have [long been fascinated][2] by the family of shapes you get when you look at the intersection of a polyhedron or polytope with a (hyper)plane. I think what&amp;#039;s most interesting to me is the challenge of using the cross sections to mentally reconstruct the entire original shape. Of course, you can check your answer with polyhedron slices, but this really gets interesting as a tool for visualizing higher-dimensional polytopes (or, for that matter, other shapes).&#xD;
&#xD;
In any case, after writing a bunch of *ad hoc* code for slicing particular shapes, I finally decided to write a general function which could produce slices of arbitrary convex polytopes of any dimension. As you can see, I&amp;#039;m basically just taking advantage of the `ConvexHullMesh` function:&#xD;
&#xD;
    slices[edges_, vec_, plotrange_] := &#xD;
     Module[{projector, pedges, n, times, positions, v},&#xD;
       projector = Orthogonalize[NullSpace[{vec}]];&#xD;
       pedges[t_] := (1 - t) #[[1]] + t #[[2]] &amp;amp; /@ edges;&#xD;
       n = Length[pedges[.5]];&#xD;
       times = &#xD;
        Table[NSolve[{pedges[t][[i]].vec == #, 0 &amp;lt;= t &amp;lt;= 1}, t], {i, 1, &#xD;
          n}];&#xD;
       positions = Flatten[Position[times, a_ /; a != {}, 1]];&#xD;
       v = Table[&#xD;
         pedges[t][[positions[[i]]]] /. &#xD;
          Flatten[times[[positions[[i]]]], 1], {i, 1, &#xD;
          Length[positions]}];&#xD;
       ConvexHullMesh[projector.# &amp;amp; /@ v, PlotRange -&amp;gt; plotrange, &#xD;
        PlotTheme -&amp;gt; &amp;#034;Polygons&amp;#034;]&#xD;
       ] &amp;amp;&#xD;
&#xD;
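The core step in `slices` is solving, edge by edge, a linear equation for the parameter t at which the edge meets the slicing plane. The same step reads as follows in plain Python (hypothetical names; the Mathematica version above additionally projects the points into the plane and takes their convex hull):

```python
# For an edge (1-t) p + t q, the crossing with the plane
# dot(x, vec) == level occurs at
# t = (level - dot(p, vec)) / (dot(q, vec) - dot(p, vec)).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def plane_edge_points(edges, vec, level):
    pts = []
    for p, q in edges:
        a, b = dot(p, vec), dot(q, vec)
        if b != a:                       # edge not parallel to the plane
            t = (level - a) / (b - a)
            if 1.0 >= t >= 0.0:          # crossing lies on the segment
                pts.append(tuple((1 - t) * pi + t * qi
                                 for pi, qi in zip(p, q)))
    return pts

# example: slicing the unit cube halfway up the z-axis
verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
edges = [(u, w) for u in verts for w in verts
         if sum(abs(a - b) for a, b in zip(u, w)) == 1 and u > w]
section = plane_edge_points(edges, (0, 0, 1), 0.5)
```

For the cube, the four vertical edges each contribute one crossing, so the section is the unit square at height 0.5; the convex hull of such crossing points is exactly the polygon (or polytope facet) that the animation draws.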
And here&amp;#039;s the code for generating the slices of the rhombic icosahedron in the above GIF (notice that I don&amp;#039;t start at the bottom and cut things off before getting to the top; past the cutoff points, you just have a linearly growing or shrinking pentagon, which is kind of boring):&#xD;
&#xD;
    DynamicModule[{polyhedron, sliceaxis, plotrange, cols, &#xD;
      edges, θ},&#xD;
     polyhedron = &amp;#034;RhombicIcosahedron&amp;#034;;&#xD;
     sliceaxis = {0, 0, 1};&#xD;
     plotrange = 2;&#xD;
     cols = RGBColor /@ {&amp;#034;#04D976&amp;#034;, &amp;#034;#0f2532&amp;#034;};&#xD;
     edges = PolyhedronData[polyhedron, &amp;#034;VertexCoordinates&amp;#034;][[#]] &amp;amp; /@ &#xD;
       PolyhedronData[polyhedron, &amp;#034;Edges&amp;#034;][[2, 1]];&#xD;
     Manipulate[θ = 3/(2 Sqrt[5]) Cos[s]; &#xD;
      Graphics[{FaceForm[None], &#xD;
        EdgeForm[&#xD;
         Directive[JoinForm[&amp;#034;Round&amp;#034;], cols[[1]], Thickness[.005]]], &#xD;
        GraphicsComplex[MeshCoordinates[#], MeshCells[#, 2]] &amp;amp;[&#xD;
         slices[edges, sliceaxis, plotrange][θ]]}, &#xD;
       PlotRange -&amp;gt; plotrange, ImageSize -&amp;gt; 540, &#xD;
       Background -&amp;gt; cols[[2]]], {s, 0, π}]&#xD;
     ]&#xD;
&#xD;
  [1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=rhombicicosa5c.gif&amp;amp;userId=610054&#xD;
[2]: http://shonkwiler.org/collections/#/slices/</description>
    <dc:creator>Clayton Shonkwiler</dc:creator>
    <dc:date>2016-02-12T06:02:47Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/366111">
    <title>Use .NETLink to call the MathNet library</title>
    <link>https://community.wolfram.com/groups/-/m/t/366111</link>
    <description>Wolfram Language(WL) provides several useful interfaces to link itself to libraries written in other language. **NETLink** and **JLink** are such very powerful tools that bridge WL to the strong objective-oriented languages. I am going to show briefly an example about how to call a NuGet library for C sharp from WL. &#xD;
&#xD;
http://www.mathdotnet.com/ is a project on NuGet that promotes the numerical computing power of C sharp. It does lots of things like what BLAS and LAPACK do for linear algebra and numeric solvers. Of course Mathematica and MATLAB(R) are much more mature products than this library. I use this library simply to show how you can call an external library. &#xD;
&#xD;
I am writing this example with Mathematica 10 + Windows 7 + Visual Studio 2012.&#xD;
&#xD;
1. Follow this link http://numerics.mathdotnet.com/ to install the `MathNet` assembly into any open project or clone the git repository&#xD;
1. You can locate the assembly on your machine by right-clicking the entry in Solution Explorer (VS 2012):  **reference**-&amp;gt;**MathNet.Numeric**-&amp;gt;**Properties**&#xD;
&#xD;
![enter image description here][1]&#xD;
&#xD;
Now you can open Mathematica and load the `.NETLink` package.&#xD;
 &#xD;
    Needs[&amp;#034;NETLink`&amp;#034;]&#xD;
&#xD;
You can launch the runtime link connection with &#xD;
&#xD;
    InstallNET[]&#xD;
&#xD;
Assuming things are going well, here is what you see: &#xD;
&#xD;
![enter image description here][2]&#xD;
&#xD;
If you have targeted the location of `MathNet.Numerics.dll` in your project, you can load the assembly with `LoadNETAssembly`&#xD;
&#xD;
    LoadNETAssembly[&amp;#034;MathNet.Numerics&amp;#034;,&amp;#034;C:\\Users\\shenghui\\...myCSharpProj1\\packages\\MathNet.Numerics.3.2.3\\lib\\net40\\&amp;#034;]&#xD;
&#xD;
Of course your absolute path may be different; here it is just my location for storing the `dll` file. As in VS2012, you have an XML tree of the built-in classes and objects in this `MathNet` package: &#xD;
&#xD;
![enter image description here][3]&#xD;
&#xD;
The same view can be generated within Mathematica by the `NETTypeInfo` function: &#xD;
&#xD;
![enter image description here][4]&#xD;
&#xD;
http://numerics.mathdotnet.com/docs/ provides the following example code ( `Using`&amp;#039;s are omitted ): &#xD;
&#xD;
    Matrix&amp;lt;double&amp;gt; A = DenseMatrix.OfArray(new double[,] {&#xD;
            {1,1,1,1},&#xD;
            {1,2,3,4},&#xD;
            {4,3,2,1}});&#xD;
    Vector&amp;lt;double&amp;gt;[] nullspace = A.Kernel();&#xD;
    &#xD;
This is where Mathematica has to call a different method. The `Vector` generic type is valid and can be loaded into Mathematica without a problem: &#xD;
&#xD;
    In:= netType=LoadNETType[&amp;#034;MathNet.Numerics.LinearAlgebra.Vector&amp;lt;int&amp;gt;&amp;#034;]&#xD;
    Out:= NETType[MathNet.Numerics.LinearAlgebra.Vector`1[System.Int32],16]&#xD;
&#xD;
However `NETNew` cannot create a vector object:&#xD;
&#xD;
![enter image description here][5]&#xD;
&#xD;
because there is no constructor with this type. The workaround comes from taking a look at the class list in the `NETTypeInfo` of the `MathNet` assembly: &#xD;
&#xD;
![enter image description here][6] &#xD;
&#xD;
`NETInfo[assem, &amp;#034;Classes&amp;#034;, &amp;#034;*Dense*&amp;#034;]` is a call of such a function with a pattern match. I use this function when I notice that the C Sharp code in the online example calls the `Dense.Build` method to create a dynamic (a fancy way of using `new`) vector or matrix object. The dense method stores every item of the array. By contrast, the sparse method stores only the non-zero entries. &#xD;
&#xD;
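The dense-versus-sparse distinction matters for both memory and speed. A toy illustration in Python (hypothetical, nothing to do with MathNet's actual storage classes):

```python
# Dense storage keeps every entry; sparse storage keeps only nonzeros.
dense = [0.0, 0.0, 3.5, 0.0, 0.0, 0.0, 2.0]
sparse = {2: 3.5, 6: 2.0}               # index -> value

def sparse_dot(a, b):
    # iterate over the operand with fewer stored entries
    small, big = (b, a) if len(a) > len(b) else (a, b)
    return sum(v * big.get(i, 0.0) for i, v in small.items())
```

For a vector that is mostly zero, the dict stores two entries instead of seven, and a dot product touches only those two; the dense representation pays for every slot regardless of its value.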
The `....DenseVector` type contains public constructors: &#xD;
&#xD;
![enter image description here][7]&#xD;
&#xD;
(Notice that `NETTypeInfo` can take the string name of the class/type/assembly as well) &#xD;
&#xD;
To create a vector in the runtime, you just put a regular Mathematica vector of real numbers into the place of arguments: &#xD;
&#xD;
    In:= a = NETNew[ &amp;#034;MathNet.Numerics.LinearAlgebra.Double.DenseVector&amp;#034;, {1, 2, 3, 4}] &#xD;
    Out:= &amp;lt;&amp;lt;NETObject[MathNet.Numerics.LinearAlgebra.Double.DenseVector]&amp;gt;&amp;gt;&#xD;
&#xD;
    In:= b = NETNew[ &amp;#034;MathNet.Numerics.LinearAlgebra.Double.DenseVector&amp;#034;, 10] &#xD;
    Out:= &amp;lt;&amp;lt;NETObject[MathNet.Numerics.LinearAlgebra.Double.DenseVector]&amp;gt;&amp;gt;&#xD;
    (* b is {0,0,...,0} *)&#xD;
&#xD;
    In:= c = NETNew[&amp;#034;MathNet.Numerics.LinearAlgebra.Double.DenseMatrix&amp;#034;, 2, 2, {1,2,3,4}]&#xD;
    Out:= &amp;lt;&amp;lt;NETObject[MathNet.Numerics.LinearAlgebra.Double.DenseMatrix]&amp;gt;&amp;gt;&#xD;
    (* c is a col-major matrix: {{1,3},{2,4}}*)&#xD;
&#xD;
To get the result back to Mathematica, use the method associated with these types: &#xD;
&#xD;
     In:= a@ToArray[]&#xD;
     Out= {1.,2.,3.,4.}&#xD;
&#xD;
     In:= c@ToArray[]&#xD;
     Out= {{1.,3.},{2.,4.}}&#xD;
&#xD;
I can do this because the return type of `ToArray` is `double[]` or `double[,]`, which are basic `System` types in C sharp, and Mathematica can convert these types to `Real` automatically. One more thing to say here is that the Mathematica kernel is smart enough to avoid namespace/context clashing: &#xD;
&#xD;
    In:= res=(c@Transpose[])@Inverse[] (*MathNet&amp;#039;s Transpose and Inverse*)&#xD;
    Out= &amp;lt;&amp;lt;NETObject[MathNet.Numerics.LinearAlgebra.Double.DenseMatrix]&amp;gt;&amp;gt;&#xD;
&#xD;
Check the result:&#xD;
&#xD;
    In:= res@ToArray[]&#xD;
    Out= {{-2.,1.},{1.5,-0.5}}&#xD;
&#xD;
which is the same as the familiar Mathematica implementation: &#xD;
&#xD;
    In:= Inverse[Transpose[c@ToArray[]]] (*the Mathematica Inverse and Transpose*)&#xD;
    Out= {{-2.,1.},{1.5,-0.5}}&#xD;
&#xD;
&#xD;
  [1]: /c/portal/getImageAttachment?filename=cap2.PNG&amp;amp;userId=23928&#xD;
  [2]: /c/portal/getImageAttachment?filename=cap1.PNG&amp;amp;userId=23928&#xD;
  [3]: /c/portal/getImageAttachment?filename=cap4.PNG&amp;amp;userId=23928&#xD;
  [4]: /c/portal/getImageAttachment?filename=cap3.PNG&amp;amp;userId=23928&#xD;
  [5]: /c/portal/getImageAttachment?filename=cap5.PNG&amp;amp;userId=23928&#xD;
  [6]: /c/portal/getImageAttachment?filename=cap6.PNG&amp;amp;userId=23928&#xD;
  [7]: /c/portal/getImageAttachment?filename=cap7.PNG&amp;amp;userId=23928</description>
    <dc:creator>Shenghui Yang</dc:creator>
    <dc:date>2014-10-09T01:50:09Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/3569199">
    <title>Testing improper integrals with Mathematica</title>
    <link>https://community.wolfram.com/groups/-/m/t/3569199</link>
    <description>&amp;amp;[Wolfram Notebook][1]&#xD;
&#xD;
&#xD;
  [1]: https://www.wolframcloud.com/obj/581e0b2c-c1c4-428b-b2d8-67501096ac1c</description>
    <dc:creator>Denis Ivanov</dc:creator>
    <dc:date>2025-11-02T07:36:59Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/3359290">
    <title>Ideal cycle using R134a</title>
    <link>https://community.wolfram.com/groups/-/m/t/3359290</link>
    <description>&amp;amp;[Wolfram Notebook][1]&#xD;
&#xD;
&#xD;
  [1]: https://www.wolframcloud.com/obj/6e3750f7-a745-4c1e-ab2c-ce7ca0c9411e</description>
    <dc:creator>Housam Binous</dc:creator>
    <dc:date>2025-01-18T13:54:49Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/3146544">
    <title>Distillation-column product purity as function of feed thermal quality</title>
    <link>https://community.wolfram.com/groups/-/m/t/3146544</link>
    <description>&amp;amp;[Wolfram Notebook][1]&#xD;
&#xD;
&#xD;
  [1]: https://www.wolframcloud.com/obj/cf46f393-5d67-4e65-9287-05f775b6294e</description>
    <dc:creator>Housam Binous</dc:creator>
    <dc:date>2024-03-24T08:09:38Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2925998">
    <title>Computation of the Bifurcation Diagram for the Three-Variable Autocatalator</title>
    <link>https://community.wolfram.com/groups/-/m/t/2925998</link>
    <description>&amp;amp;[Wolfram Notebook][1]&#xD;
&#xD;
&#xD;
  [1]: https://www.wolframcloud.com/obj/331f466f-9901-498d-9fc6-00610c892d78</description>
    <dc:creator>Housam Binous</dc:creator>
    <dc:date>2023-05-26T19:45:33Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2823264">
    <title>[R&amp;amp;DL] Wolfram R&amp;amp;D LIVE: PDE Modeling</title>
    <link>https://community.wolfram.com/groups/-/m/t/2823264</link>
    <description>*MODERATOR NOTE: This is the notebook used in the livestream &amp;#034;PDE Modeling&amp;#034; on Wednesday, February 1 -- a part of Wolfram R&amp;amp;D livestream series announced and scheduled here: https://wolfr.am/RDlive For questions about this livestream, please leave a comment below.*&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][1]&#xD;
&#xD;
&#xD;
  [1]: https://www.wolframcloud.com/obj/c7af8c67-f47b-4c08-9837-1a54ab80ba4b</description>
    <dc:creator>EDITORIAL BOARD</dc:creator>
    <dc:date>2023-02-03T19:05:42Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1926505">
    <title>SEI2HR model with quarantine scenarios</title>
    <link>https://community.wolfram.com/groups/-/m/t/1926505</link>
    <description>&amp;amp;[Wolfram Notebook][1]&#xD;
&#xD;
&#xD;
  [1]: https://www.wolframcloud.com/obj/8c53b21b-204a-44ca-8f58-443bd91a05e3</description>
    <dc:creator>Anton Antonov</dc:creator>
    <dc:date>2020-04-06T16:33:51Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1668032">
    <title>Improving speed of Binomial and Multinomial Random Draws</title>
    <link>https://community.wolfram.com/groups/-/m/t/1668032</link>
    <description>The Binomial and Multinomial random number generators in Mathematica are fast if multiple draws are needed from the same distribution, i.e., when the distribution parameters do not change across the draws. This can be seen by generating, for example, 500,000 draws from the Binomial distribution.&#xD;
&#xD;
    In[30]:= AbsoluteTiming[ RandomVariate[BinomialDistribution[100, 0.6], 500000];]&#xD;
    Out[30]= {0.017365, Null}&#xD;
&#xD;
However, the speed is slower, relative to some other systems, when the parameters change across draws. Such a situation arises often when performing certain Monte Carlo simulations.&#xD;
&#xD;
For example, suppose we have a vector `nvec` that contains the number of binomial trials for each draw and a vector `pvec` that contains the corresponding probabilities of success.&#xD;
&#xD;
    nvec = RandomInteger[{5, 125}, 500000];&#xD;
    pvec = RandomReal[{0, 1}, 500000];&#xD;
&#xD;
Then we have&#xD;
&#xD;
    In[28]:= AbsoluteTiming[&#xD;
     Mean[Table[&#xD;
        RandomVariate[BinomialDistribution[nvec[[i]], pvec[[i]]]], {i, 1, &#xD;
         Length@nvec}]] // N&#xD;
     ]&#xD;
    &#xD;
    Out[28]= {36.2144, 32.5283}&#xD;
&#xD;
This can be addressed via an implementation of fast random number generators for the Binomial as described in&#xD;
&#xD;
Kachitvichyanukul, V.; Schmeiser, B. W. &amp;#034;Binomial random variate generation.&amp;#034; Comm. ACM 31 (1988), no. 2, 216-222.&#xD;
&#xD;
A Mathematica implementation based on the above paper involves the following three functions.&#xD;
&#xD;
When the number of trials is small, the geometric method from&#xD;
&#xD;
Devroye, L. &amp;#034;Generating the maximum of independent identically distributed random variables.&amp;#034; Computers and Mathematics with Applications 6 (1980), 305-315, can be used, as in the following function.&#xD;
&#xD;
    ablRanBinGeom = Compile[{{n, _Integer}, {p, _Real}}, &#xD;
           Module[&#xD;
                {y = 0, x = 0, comp = 0, er, v, scale = 0.0},&#xD;
                If[p &amp;gt;= 1.0, Return[n]];&#xD;
                If[p &amp;lt;= 0.5, comp = 0; scale = -1.0/Internal`Log1p[-p], comp = 1; &#xD;
                    scale = -1.0/Log[p]&#xD;
                ];&#xD;
                While[True, &#xD;
                    er = -Log[RandomReal[]];&#xD;
                    v = er*scale;&#xD;
                    If[v &amp;gt; n, Break[]];&#xD;
                    y = y + Ceiling[v];&#xD;
                    If[y &amp;gt; n, Break[]];&#xD;
                    x = x + 1;&#xD;
                ];&#xD;
                If[comp == 1, n - x, x]&#xD;
                ],&#xD;
           CompilationTarget -&amp;gt; &amp;#034;C&amp;#034;, RuntimeAttributes -&amp;gt; {Listable}&#xD;
           ];&#xD;
&#xD;
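For reference, the small-n geometric method reads naturally in pure Python as well. The sketch below follows the logic of `ablRanBinGeom` above: B(n, p) equals the number of geometric(p) inter-success gaps whose running sum still fits into n trials.

```python
import math, random

# Small-n binomial draw via the geometric method: count how many
# geometric(p) gaps between successes fit into n Bernoulli trials.

def ran_bin_geom(n, p, rng=random):
    if p >= 1.0:
        return n
    comp = False
    if p > 0.5:                      # work with the smaller probability
        comp, p = True, 1.0 - p
    scale = -1.0 / math.log1p(-p)
    x = y = 0
    while True:
        v = -math.log(rng.random()) * scale   # exponential / -log(1-p)
        if v > n:
            break
        y += math.ceil(v)            # trials consumed by this success
        if y > n:
            break
        x += 1                       # one more success fits
    return n - x if comp else x
```

Each iteration costs one logarithm, and the expected number of iterations is about n p, which is why this variant is only attractive when n (or n times min(p, 1-p)) is small.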
For larger n, we can use another function, from the Communications of the ACM paper referred to above.  &#xD;
&#xD;
    ablRanBinBtpe = Compile[{{n, _Integer}, {p, _Real}},&#xD;
       Module[&#xD;
            {comp = 0, r, q, nrq, fM, M, Mi, p1, xM, xL, xR, c, a, lamL, &#xD;
         lamR, p2, p3, p4,&#xD;
            y, u, v, x, k, S, F, t, A, x1, f1, z, w, rho},&#xD;
    &#xD;
            If[p &amp;gt;= 1.0, Return[n]];&#xD;
    &#xD;
            If[p &amp;lt;= 0.5, &#xD;
                comp = 0; r = p; q = 1.0 - p, &#xD;
                comp = 1; r = 1.0 - p; q = p&#xD;
            ];&#xD;
    &#xD;
            nrq = n*r*q;&#xD;
            fM = (n + 1)*r;&#xD;
            M = Floor[fM];&#xD;
            Mi = Floor[M];&#xD;
            p1 = Floor[2.195*Sqrt[nrq] - 4.6*q] + 0.5;&#xD;
            xM = M + 0.5;&#xD;
            xL = xM - p1;&#xD;
            xR = xM + p1;&#xD;
            c = 0.134 + 20.5/(15.3 + M);&#xD;
            a = (fM - xL)/(fM - xL*r);&#xD;
            lamL = a*(1.0 + 0.5*a);&#xD;
            a = (xR - fM)/(xR*q);&#xD;
            lamR = a*(1.0 + 0.5*a);&#xD;
            p2 = p1*(1.0 + 2.0*c);&#xD;
            p3 = p2 + c/lamL;&#xD;
            p4 = p3 + c/lamR;&#xD;
    &#xD;
            y = 0;&#xD;
            While[True, (* Step 1 *)&#xD;
    &#xD;
                u = p4*RandomReal[];&#xD;
                v = RandomReal[];&#xD;
                Which[&#xD;
                    u &amp;lt;=  p1, &#xD;
                    y = Floor[xM - p1*v + u];&#xD;
                    Break[],&#xD;
    &#xD;
                    u &amp;lt;= p2, (* Step 2 *)&#xD;
                    x = xL + (u - p1)/c;&#xD;
                    v = v*c + 1.0 - Abs[M - x + 0.5]/p1;&#xD;
                    If[v &amp;gt; 1, Continue[]];&#xD;
                    y = Floor[x],&#xD;
    &#xD;
                    u &amp;lt;= p3 ,(* Step 3 *)&#xD;
                    y = Floor[xL + Log[v]/lamL];&#xD;
                    If[y &amp;lt; 0, Continue[]];&#xD;
                    v = v*(u - p2)*lamL,&#xD;
    &#xD;
                    True, (* Step 4 *)&#xD;
    &#xD;
                    y = Floor[xR - Log[v]/lamR];&#xD;
                    If[y &amp;gt; n, Continue[]];&#xD;
                    v = v*(u - p3)*lamR&#xD;
                ];&#xD;
                A = Log[v]; (* accept when Log[v] is at most the log pmf ratio f(y)/f(M) *)&#xD;
         If[A &amp;gt; (LogGamma[Mi + 1] + LogGamma[n - Mi + 1] - &#xD;
             LogGamma[y + 1] - LogGamma[n - y + 1] + (y - Mi)*Log[r/q]), &#xD;
          Continue[]&#xD;
          ];&#xD;
         Break[];&#xD;
         ];&#xD;
         If[comp == 1, n - y, y]&#xD;
        ], &#xD;
       CompilationTarget -&amp;gt; &amp;#034;C&amp;#034;, &#xD;
       CompilationOptions -&amp;gt; {&amp;#034;InlineExternalDefinitions&amp;#034; -&amp;gt; True}, &#xD;
       RuntimeAttributes -&amp;gt; {Listable}&#xD;
       ];&#xD;
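For reference, the same BTPE rejection scheme can be transcribed into plain Python, with math.lgamma supplying the log-factorials for the acceptance test. This is an illustrative translation (the function name and rng parameter are ours), not the compiled implementation:

```python
import math
import random

def binomial_btpe(n, p, rng=random):
    """Illustrative Binomial(n, p) draw via the BTPE method:
    a triangle/parallelogram/exponential-tail envelope with a
    log-pmf-ratio acceptance test. Intended for n*min(p, 1-p)
    comfortably above 20."""
    if p <= 0.0:
        return 0
    if p >= 1.0:
        return n
    comp = p > 0.5
    r = min(p, 1.0 - p)
    q = 1.0 - r
    nrq = n * r * q
    fM = (n + 1) * r
    M = math.floor(fM)                      # mode of the distribution
    p1 = math.floor(2.195 * math.sqrt(nrq) - 4.6 * q) + 0.5
    xM = M + 0.5
    xL = xM - p1
    xR = xM + p1
    c = 0.134 + 20.5 / (15.3 + M)
    a = (fM - xL) / (fM - xL * r)
    lamL = a * (1.0 + 0.5 * a)
    a = (xR - fM) / (xR * q)
    lamR = a * (1.0 + 0.5 * a)
    p2 = p1 * (1.0 + 2.0 * c)
    p3 = p2 + c / lamL
    p4 = p3 + c / lamR
    while True:
        u = p4 * rng.random()
        v = rng.random()
        if u <= p1:                          # triangular region: accept
            y = math.floor(xM - p1 * v + u)
            break
        if u <= p2:                          # parallelogram
            x = xL + (u - p1) / c
            v = v * c + 1.0 - abs(M - x + 0.5) / p1
            if v > 1.0:
                continue
            y = math.floor(x)
        elif u <= p3:                        # left exponential tail
            y = math.floor(xL + math.log(v) / lamL)
            if y < 0:
                continue
            v = v * (u - p2) * lamL
        else:                                # right exponential tail
            y = math.floor(xR - math.log(v) / lamR)
            if y > n:
                continue
            v = v * (u - p3) * lamR
        # Accept when log(v) <= log of the pmf ratio f(y)/f(M).
        if math.log(v) > (math.lgamma(M + 1) + math.lgamma(n - M + 1)
                          - math.lgamma(y + 1) - math.lgamma(n - y + 1)
                          + (y - M) * math.log(r / q)):
            continue
        break
    return n - y if comp else y
```

The acceptance line is the straightforward LogGamma form of the pmf ratio, matching the corrected test in the compiled function above; production implementations typically add squeeze steps to avoid the four lgamma evaluations on most iterations.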
&#xD;
We can automatically choose the appropriate method for different parameter values using the following function:&#xD;
&#xD;
     ablRanBinomial = Compile[{{n, _Integer}, {p, _Real}},&#xD;
           Module[&#xD;
                {q = 1.0 - p},&#xD;
                If[n*Min[p, q] &amp;gt; 20, ablRanBinBtpe[n, p], ablRanBinGeom[n, p]]&#xD;
            ],&#xD;
           CompilationTarget -&amp;gt; &amp;#034;C&amp;#034;, &#xD;
           CompilationOptions -&amp;gt; {&amp;#034;InlineExternalDefinitions&amp;#034; -&amp;gt; True}, &#xD;
           RuntimeAttributes -&amp;gt; {Listable}&#xD;
    ];&#xD;
&#xD;
The above functions yield a considerable improvement in the speed of random number generation for the binomial distribution.&#xD;
&#xD;
    In[32]:= AbsoluteTiming[&#xD;
      Mean@Table[&#xD;
        ablRanBinomial[nvec[[i]], pvec[[i]]], {i, 1, Length@nvec}]] // N&#xD;
    &#xD;
    Out[32]= {0.413019, 32.5307}&#xD;
&#xD;
This can be further improved by exploiting the Listable runtime attribute of the compiled functions, applying them directly to the parameter vectors:&#xD;
&#xD;
    In[33]:= AbsoluteTiming[Mean@ablRanBinomial[nvec, pvec] // N]&#xD;
    &#xD;
    Out[33]= {0.156881, 32.5337}&#xD;
&#xD;
The above functions can then be used for generating multinomial draws. The multinomial distribution is specified by two parameters: n, the number of trials, and p, a vector of probabilities that sum to 1.&#xD;
Multinomial variates can be generated by drawing a conditional binomial for each category in turn, renormalizing the remaining probability mass at each step, as in the following function:&#xD;
&#xD;
    ablRanMultinomial=Compile[{{n, _Integer},{p, _Real, 1}},&#xD;
    Module[&#xD;
        {k=Length[p], rp,i, km1,cn,pi,xi,x},&#xD;
        rp=1.0;cn=n;&#xD;
        i=0;&#xD;
        km1=k-1;&#xD;
        x=Table[0, {k}];&#xD;
        While[i&amp;lt;km1 &amp;amp;&amp;amp; cn &amp;gt;0,&#xD;
            i+=1;&#xD;
            pi=p[[i]];&#xD;
            If[pi &amp;lt; rp,&#xD;
                xi=ablRanBinomial[cn, pi/rp];&#xD;
                x[[i]]=xi;&#xD;
                cn-=xi;&#xD;
                rp-=pi;,&#xD;
                x[[i]]=cn;&#xD;
                cn=0&#xD;
            ];&#xD;
        ];&#xD;
        If[i==km1, x[[k]]=cn,&#xD;
            Do[x[[j]]=0,{j,i+1, k}]&#xD;
        ];&#xD;
        x&#xD;
    ],&#xD;
    CompilationTarget-&amp;gt;&amp;#034;C&amp;#034;, &#xD;
    CompilationOptions-&amp;gt;{&amp;#034;InlineExternalDefinitions&amp;#034; -&amp;gt; True}, &#xD;
    RuntimeAttributes-&amp;gt;{Listable}&#xD;
    ];&#xD;
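The sequential conditional-binomial logic of ablRanMultinomial can likewise be sketched in plain Python; a naive Bernoulli-sum binomial draw stands in for the compiled samplers, and all names are illustrative:

```python
import random

def multinomial_via_binomials(n, p, rng=random):
    """Illustrative multinomial draw via sequential conditional binomials.

    Category i receives Binomial(remaining_n, p[i]/remaining_p) counts,
    where the remaining trials and probability mass shrink as the
    categories are filled in turn.
    """
    k = len(p)
    x = [0] * k
    remaining_n = n
    remaining_p = 1.0
    for i in range(k - 1):
        if remaining_n == 0:
            break
        pi = p[i]
        if pi < remaining_p:
            cond = pi / remaining_p
            # Naive Binomial(remaining_n, cond) draw by summing Bernoullis.
            xi = sum(rng.random() < cond for _ in range(remaining_n))
            x[i] = xi
            remaining_n -= xi
            remaining_p -= pi
        else:
            # All remaining mass sits in this category.
            x[i] = remaining_n
            remaining_n = 0
    x[k - 1] += remaining_n
    return x
```

Because each count is drawn conditionally on the trials already allocated, the components automatically sum to n; replacing the naive Bernoulli sum with a fast binomial sampler gives the structure of the compiled function above.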
&#xD;
This can be used as follows: &#xD;
&#xD;
    In[36]:= ablRanMultinomial[20, {0.5, 0.3, 0.2}]&#xD;
    Out[36]= {12, 7, 1}</description>
    <dc:creator>Asim Ansari</dc:creator>
    <dc:date>2019-04-24T14:20:43Z</dc:date>
  </item>
</rdf:RDF>

