<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://community.wolfram.com">
    <title>Community RSS Feed</title>
    <link>https://community.wolfram.com</link>
    <description>RSS Feed for Wolfram Community showing any discussions tagged with Image Processing sorted by most likes.</description>
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2593151" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2498984" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1659553" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2007434" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2774101" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2135869" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2435403" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2342501" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/960843" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2051264" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1025046" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2542490" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/922544" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/3062832" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2153018" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/884348" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/900782" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2152918" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2222977" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1787163" />
      </rdf:Seq>
    </items>
  </channel>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2593151">
    <title>⭐ [R&amp;amp;DL] Wolfram R&amp;amp;D Developers on LIVE Stream</title>
    <link>https://community.wolfram.com/groups/-/m/t/2593151</link>
    <description>**Introducing our brand new YouTube channel, [Wolfram R&amp;amp;D][1]! Our channel features livestreams, behind-the-scenes creator presentations, insider videos, and more.**&#xD;
&#xD;
----------&#xD;
&#xD;
Join us for the unique Wolfram R&amp;amp;D livestreams on [Twitch][2] and [YouTube][3] led by our developers! &#xD;
&#xD;
You will see **LIVE** stream indicators on these channels on the dates listed below. The livestreams provide tutorials and a behind-the-scenes look at Mathematica and the Wolfram Language, directly from developers.&#xD;
&#xD;
Join our livestreams every Wednesday at 11 AM CST and interact with developers who work on data science, machine learning, image processing, visualization, geometry, and other areas.&#xD;
&#xD;
&#xD;
----------&#xD;
&#xD;
&#xD;
⭕ **UPCOMING** EVENTS&#xD;
&#xD;
&#xD;
- Jan 29 -- Reinforcement Learning Applied to Feedback Control with [Suba Thomas][61]&#xD;
&#xD;
----------&#xD;
&#xD;
&#xD;
✅ **PAST** EVENTS  &#xD;
&#xD;
&#xD;
- April 24 -- [FeynCalc][60]&#xD;
- April 3 -- [Explore the Total Solar Eclipse of April 2024][59]&#xD;
- Mar 22 -- [20 Years of xAct Tensor Computer Algebra][58] &#xD;
- Feb 28 -- [Zero Knowledge Authentication][57]&#xD;
- Jan 17 -- [Nutrients by the Numbers][56]&#xD;
- Dec 13 -- [Understanding Graphics][55]&#xD;
- Oct 18 -- [Overview of Number Theory][54]&#xD;
- Sep 27 -- [QMRITools: Processing Quantitative MRI Data][52]&#xD;
- Sep 13 -- [Make High Quality Graph Visualization][51]&#xD;
- Sep 6 -- [Insider&amp;#039;s View of Graphs &amp;amp; Networks][53]&#xD;
- Aug 30 -- [Labeling Everywhere][49]&#xD;
- Aug 22 -- [Equation Generator for Equation-of-Motion Coupled Cluster Assisted by CAS][48]&#xD;
- Aug 16 -- [Foreign Function Interface][4]&#xD;
- July 26 -- [Modeling Fluid Circuits][6]&#xD;
- July 19 -- [Geocomputation][5]&#xD;
- July 5 -- [Protein Visualization][7]&#xD;
- Jun 14 -- [Chat Notebooks bring the power of Notebooks to LLMs][8]&#xD;
- May 31 -- [Probability and Statistics: Random Sampling][9]&#xD;
- May 24 -- [Problem Solving][10]&#xD;
- May 17 -- [The state of Optimization][11]&#xD;
- May 10 -- [Building a video game with Wolfram notebooks][12]&#xD;
- April 26 -- [Control Systems: An Overview][13]&#xD;
- April 19 -- [MaXrd: A crystallography package developed for research support][14]&#xD;
- April 5th -- [Relational database in the Wolfram Language][15]&#xD;
- Mar 29th -- [Build your first game in the Wolfram Language with Unity game engine][16]&#xD;
- Mar 22nd -- [Everything to know about Mellin-Barnes Integrals - Part II][17]&#xD;
- Mar 15th -- [Building your own Shakespearean GPT - a ChatGPT like GPT model][18]&#xD;
- Mar 8th -- [Understand Time, Date and Calendars][19]&#xD;
- Mar 1st -- [Introducing Astro Computation][20]&#xD;
- Feb 22nd -- [Latest features in System Modeler][21]&#xD;
- Feb 15th -- [Everything to know about Mellin-Barnes Integrals][22]&#xD;
- Feb 8th -- [Dive into Video Processing][23]&#xD;
- Feb 1st -- [PDE Modeling][24]&#xD;
- Jan. 25th -- [Ask Integration Questions to Oleg Marichev][25]&#xD;
- Jan. 18th -- [My Developer Tools][26]&#xD;
- Jan. 11th -- [Principles of Dynamic Interfaces][27]&#xD;
- Dec. 14th -- [Wolfram Resource System: Repositories &amp;amp; Archives][28]&#xD;
- Dec. 7th -- [Inner Workings of ImageStitch: Image Registration, Projection and Blending][29]&#xD;
- Nov. 30th -- [Q&amp;amp;A for Calculus and Algebra][30]&#xD;
- Nov. 23rd -- [xAct: Efficient Tensor Computer Algebra][31]&#xD;
- Nov. 16th -- [Latest in Machine Learning][32]&#xD;
- Nov. 9th -- [Computational Geology][33]&#xD;
- Nov. 2nd -- [Behind the Scenes at the Wolfram Technology Conference 2022][34]&#xD;
- Oct 26th -- [Group Theory Package (GTPack) and Symmetry Principles in Condensed Matter][35]&#xD;
- Oct 12th -- [Tree Representation for XML, JSON and Symbolic Expressions][36]&#xD;
- Oct. 5th -- [A Computational Exploration of Alcoholic Beverages][37]&#xD;
- Sept. 28th -- [Q&amp;amp;A with Visualization &amp;amp; Graphics Developers][38]&#xD;
- Sept. 14th -- [Paclet Development][39]&#xD;
- Sept. 7th -- [Overview of Chemistry][40]&#xD;
- Aug. 24th -- [Dive into Visualization][41]  &#xD;
- Aug. 17th -- [Latest in Graphics &amp;amp; Shaders][42]   &#xD;
- Aug. 10th -- [What&amp;#039;s new in Calculus &amp;amp; Algebra][43]   &#xD;
&#xD;
&#xD;
&#xD;
&#xD;
&#xD;
&amp;gt; **What are your interests? Leave a comment here on this post to share your favorite topic suggestions for our livestreams.**  &#xD;
**Follow us on our live broadcasting channels [Twitch][44] and [YouTube][45] and for the up-to-date announcements on our social media: [Facebook][46] and [Twitter][47].**&#xD;
&#xD;
&#xD;
  [1]: https://wolfr.am/1eatWLcDA&#xD;
  [2]: https://www.twitch.tv/wolfram&#xD;
  [3]: https://wolfr.am/1eatWLcDA&#xD;
  [4]: https://www.youtube.com/watch?v=C82NHpy7D6k&#xD;
  [5]: https://community.wolfram.com/groups/-/m/t/2985580&#xD;
  [6]: https://community.wolfram.com/groups/-/m/t/2982197&#xD;
  [7]: https://community.wolfram.com/groups/-/m/t/2982114&#xD;
  [8]: https://youtu.be/ZqawtrWwE0c&#xD;
  [9]: https://community.wolfram.com/groups/-/m/t/2946101&#xD;
  [10]: https://community.wolfram.com/groups/-/m/t/2925156&#xD;
  [11]: https://community.wolfram.com/groups/-/m/t/2921756&#xD;
  [12]: https://community.wolfram.com/groups/-/m/t/2918746&#xD;
  [13]: https://community.wolfram.com/groups/-/m/t/2917597&#xD;
  [14]: https://community.wolfram.com/groups/-/m/t/2911327&#xD;
  [15]: https://community.wolfram.com/groups/-/m/t/2907390&#xD;
  [16]: https://community.wolfram.com/groups/-/m/t/2921593&#xD;
  [17]: https://community.wolfram.com/groups/-/m/t/2861119&#xD;
  [18]: https://community.wolfram.com/groups/-/m/t/2847286&#xD;
  [19]: https://community.wolfram.com/groups/-/m/t/2851575&#xD;
  [20]: https://community.wolfram.com/groups/-/m/t/2852934&#xD;
  [21]: https://community.wolfram.com/groups/-/m/t/2842136&#xD;
  [22]: https://community.wolfram.com/groups/-/m/t/2838335&#xD;
  [23]: https://community.wolfram.com/groups/-/m/t/2827166&#xD;
  [24]: https://community.wolfram.com/groups/-/m/t/2823264&#xD;
  [25]: https://community.wolfram.com/groups/-/m/t/2821053&#xD;
  [26]: https://youtu.be/istKGqpDUsw&#xD;
  [27]: https://community.wolfram.com/groups/-/m/t/2777853&#xD;
  [28]: https://youtu.be/roCkXVkDuLA&#xD;
  [29]: https://youtu.be/pYHAz-NatXI&#xD;
  [30]: https://youtu.be/r7Hjdr_D7c4&#xD;
  [31]: https://community.wolfram.com/groups/-/m/t/2713818&#xD;
  [32]: https://community.wolfram.com/groups/-/m/t/2705779&#xD;
  [33]: https://community.wolfram.com/groups/-/m/t/2701172&#xD;
  [34]: https://youtu.be/UrM-OBu3H9o&#xD;
  [35]: https://community.wolfram.com/groups/-/m/t/2678940&#xD;
  [36]: https://community.wolfram.com/groups/-/m/t/2649407&#xD;
  [37]: https://community.wolfram.com/groups/-/m/t/2635049&#xD;
  [38]: https://community.wolfram.com/groups/-/m/t/2618033&#xD;
  [39]: https://community.wolfram.com/groups/-/m/t/2616863&#xD;
  [40]: https://community.wolfram.com/groups/-/m/t/2613617&#xD;
  [41]: https://community.wolfram.com/groups/-/m/t/2605432&#xD;
  [42]: https://community.wolfram.com/groups/-/m/t/2600997&#xD;
  [43]: https://community.wolfram.com/groups/-/m/t/2596451&#xD;
  [44]: https://www.twitch.tv/wolfram&#xD;
  [45]: https://wolfr.am/1eatWLcDA&#xD;
  [46]: https://www.facebook.com/wolframresearch&#xD;
  [47]: https://twitter.com/WolframResearch&#xD;
  [48]: https://www.youtube.com/live/ElP55ZILxPw?si=nsAPOQ3u-RbvuGKX&#xD;
  [49]: https://community.wolfram.com/groups/-/m/t/3007543&#xD;
  [50]: https://community.wolfram.com/web/charlesp&#xD;
  [51]: https://community.wolfram.com/groups/-/m/t/3019288&#xD;
  [52]: https://www.youtube.com/live/KM1yWHRrF2k?si=g2R7rHB2IinVRpo6&#xD;
  [53]: https://community.wolfram.com/groups/-/m/t/3009184&#xD;
  [54]: https://community.wolfram.com/groups/-/m/t/3064700&#xD;
  [55]: https://community.wolfram.com/groups/-/m/t/3084291&#xD;
  [56]: https://community.wolfram.com/groups/-/m/t/3104670&#xD;
  [57]: https://community.wolfram.com/groups/-/m/t/3164204&#xD;
  [58]: https://youtube.com/playlist?list=PLdIcYTEZ4S8TSEk7YmJMvyECtF-KA1SQ2&amp;amp;si=paXZHs0ZzGdB7y1y&#xD;
  [59]: https://youtube.com/playlist?list=PLdIcYTEZ4S8RyjEB7JSAsGerbYHl5xXeJ&amp;amp;si=xkNtkIDvKHFWHVmD&#xD;
  [60]: https://youtu.be/KUWK19Gx2LE?si=qbKISbL8FtvweSWo&#xD;
  [61]: https://community.wolfram.com/web/subat</description>
    <dc:creator>Charles Pooh</dc:creator>
    <dc:date>2022-08-05T21:37:19Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2498984">
    <title>Computational Art Contest 2022</title>
    <link>https://community.wolfram.com/groups/-/m/t/2498984</link>
    <description>&amp;gt; *SHARE this contest*: https://wolfr.am/CompArt-22 &#xD;
&#xD;
# WINNERS&#xD;
&#xD;
Thank you to everyone who submitted entries into this contest! It was a blast to see all the amazing art you created. After deliberation by our judges, these are the winners:&#xD;
&#xD;
- **Honorable Mention**: ARTIST: [Daniel Hoffmann][1], EXHIBIT: &amp;#034;[The Memory of Persistence][2]&amp;#034;&#xD;
&#xD;
- **Staff Winner**: ARTIST: [Anton Antonov][3], EXHIBIT: &amp;#034;[Rorschach mask animations projected over 3D surfaces][4]&amp;#034;&#xD;
&#xD;
- **3rd Place**: ARTIST: [Jacqueline Doan][5], EXHIBIT: &amp;#034;[Kuramoto oscillators with phase lag][6]&amp;#034;&#xD;
&#xD;
- **2nd Place**: ARTIST: [Tom Verhoeff][7], EXHIBIT: &amp;#034;[Sculpture from 18 congruent pieces][8]&amp;#034;&#xD;
&#xD;
- **1st Place**: ARTIST: [Frederick Wu][9], EXHIBIT: &amp;#034;[Love heart jewelry IV: the giving tree][10]&amp;#034;&#xD;
&#xD;
&#xD;
![enter image description here][11]&#xD;
&#xD;
----------&#xD;
&#xD;
&#xD;
# CONTEST&#xD;
&#xD;
Flex your creative and computational skills with Wolfram&amp;#039;s Computational Art Contest that kicks off today, Monday, March 28th! Share your work with the community and potentially win free Wolfram merchandise. Programmers and artists of all skill levels are encouraged to participate!&#xD;
&#xD;
This contest is inspired by Genuary, an annual project releasing generative art prompts during the month of January. We&amp;#039;re elated to see the creative works of our users and engage with the community while exploring the scope of computational art within the Wolfram Language.&#xD;
&#xD;
## Rules &amp;amp; Guidelines ##&#xD;
&#xD;
 - Submission deadline is April 25th at 9am Central Time. Submissions&#xD;
   posted after the deadline will not be included in judging&#xD;
   &#xD;
 - Participants must fill out a detailed Community profile ( example:&#xD;
   https://community.wolfram.com/web/claytonshonkwiler ) and create a&#xD;
   Community post about their submission. The post must include the code&#xD;
   used to create graphics and the final piece of art placed at the top&#xD;
   of the post. An explanation of how their code works is required;&#xD;
   moreover, participants are encouraged to write more about their&#xD;
   creative process.&#xD;
   &#xD;
 - Participants submit their entry by commenting on this post with an&#xD;
   image of their art, along with a link to their Community post&#xD;
   &#xD;
 - Multiple submissions per participant are allowed, but please keep the&#xD;
   number of submissions under three&#xD;
   &#xD;
 - Each participant can only win once. Participants&amp;#039; best-performing&#xD;
   piece, as determined by the judges, will be used when determining&#xD;
   winners&#xD;
   &#xD;
 - Both static images and animations can be submitted. Animations are&#xD;
   preferred in a GIF format; if the animation is too large for a GIF,&#xD;
   the post can point to a public YouTube video.&#xD;
   &#xD;
 - Submissions will be judged by a handful of Wolfram experts&#xD;
   according to the following criteria:&#xD;
       - Visual aesthetics&#xD;
       - Wolfram Language code&#xD;
       - Creativity&#xD;
       - Explanation of process&#xD;
   &#xD;
 - Submissions from all areas of computational art are welcome&#xD;
   &#xD;
 - Submissions from former or current Wolfram employees are allowed, but&#xD;
   will be judged as their own category with only one winner&#xD;
   &#xD;
 - Submitting previous work/posts is allowed, but must meet the&#xD;
   requirements stated above&#xD;
&#xD;
## Encouragements ##&#xD;
&#xD;
 - Not sure where to start? We encourage you to look at other users&amp;#039;&#xD;
   submissions for inspiration, or look at some of the work in the visual&#xD;
   arts group of Community:&#xD;
       - Artists&amp;#039; group: https://wolfr.am/ART-examples &#xD;
       - Artist (see Staff Picks section): https://community.wolfram.com/web/claytonshonkwiler&#xD;
   &#xD;
 - We encourage you to vote and comment on other people&amp;#039;s submissions.&#xD;
   &#xD;
 - Please spread the word about this competition among your friends and&#xD;
   on social media!&#xD;
&#xD;
## Prizes ##&#xD;
&#xD;
First, second, and third place winners will be featured on all of Wolfram&amp;#039;s social media accounts and will receive their choice of free Wolfram merchandise. We are able to ship merchandise to the countries listed on the Wolfram Store: https://store.wolfram.com. If your country is not listed on the Wolfram Store, we strongly encourage you to still submit an entry, as we will feature winners&amp;#039; submissions regardless of location.&#xD;
&#xD;
### Important ###&#xD;
&#xD;
All contest rules have been explained above under Rules &amp;amp; Guidelines. All participants are encouraged to read the rules carefully to prevent disqualification. If you have any additional questions, ask directly in the thread comments or contact us by email at t-artcontest@wolfram.com . We recommend reading other people&amp;#039;s comments as well, as they clarify the nature of the contest. Comments deemed by moderators as superfluous to the thread may be removed or transferred to keep the competition professional.&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com/web/danielsanderhoffmann&#xD;
  [2]: https://community.wolfram.com/groups/-/m/t/2518220&#xD;
  [3]: https://community.wolfram.com/web/antononcube&#xD;
  [4]: https://community.wolfram.com/groups/-/m/t/2518279&#xD;
  [5]: https://community.wolfram.com/web/jacquelinengocdoan&#xD;
  [6]: https://community.wolfram.com/groups/-/m/t/2509110&#xD;
  [7]: https://community.wolfram.com/web/tverhoeff&#xD;
  [8]: https://community.wolfram.com/groups/-/m/t/2513265&#xD;
  [9]: https://community.wolfram.com/web/wufei1978&#xD;
  [10]: https://community.wolfram.com/groups/-/m/t/2430827&#xD;
  [11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=news-congrads-kkluyshnik-02-04-19.jpg&amp;amp;userId=11733</description>
    <dc:creator>Eryn Gillam</dc:creator>
    <dc:date>2022-03-28T17:35:43Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1659553">
    <title>Knitting images: using Radon transform and its inverse for creative arts</title>
    <link>https://community.wolfram.com/groups/-/m/t/1659553</link>
    <description>Dear all, inspired by another [great post][1] of [@Anton Antonov][at0]  and in particular there by a remark of [@Vitaliy Kaurov][at1]  pointing to [the art of knitting images][2] I could not resist trying with Mathematica. Clearly - this problem is crying out loudly for **Radon transform**! &#xD;
&#xD;
![enter image description here][3]&#xD;
&#xD;
I start by choosing an example image, converting it to inverted grayscale and performing the Radon transform.&#xD;
&#xD;
    ClearAll[&amp;#034;Global`*&amp;#034;]&#xD;
    img0 = RemoveBackground[&#xD;
       ImageTrim[&#xD;
        ExampleData[{&amp;#034;TestImage&amp;#034;, &amp;#034;Girl3&amp;#034;}], {{80, 30}, {250, 240}}], {&amp;#034;Background&amp;#034;, {&amp;#034;Uniform&amp;#034;, .29}}];&#xD;
    img1 = ImageAdjust[ColorNegate@ColorConvert[RemoveAlphaChannel[img0], &amp;#034;Grayscale&amp;#034;]];&#xD;
    {xDim, yDim} = {180, 400}; (* i.e. angles between 1\[Degree] and 180\[Degree] *)&#xD;
    &#xD;
    rd0 = Radon[img1, {xDim, yDim}];&#xD;
    ImageCollage[{img0, ImageAdjust@rd0}, Method -&amp;gt; &amp;#034;Rows&amp;#034;, &#xD;
     Background -&amp;gt; None, ImagePadding -&amp;gt; 10]&#xD;
&#xD;
![enter image description here][4]&#xD;
&#xD;
Every column of the Radon image represents a different angle of projection. So next I separate these columns into (here 180) single Radon images and do an inverse Radon transform on each:&#xD;
&#xD;
    maskLine[a_] := Table[If[a == n, 1, 0], {n, 1, xDim}];&#xD;
    maskImg = Table[Image[ConstantArray[maskLine[c], yDim]], {c, 1, xDim}];&#xD;
    rdImgs = rd0 maskImg;&#xD;
    ProgressIndicator[Dynamic[n], {1, xDim}]&#xD;
    invRadImgs = &#xD;
      Table[{ImageApply[If[# &amp;gt; 0, #, 0] &amp;amp;, &#xD;
         InverseRadon[rdImgs[[n]]]], -(n - 91) \[Degree]}, {n, 1, xDim}];&#xD;
&#xD;
These data already represent the angle-dependent intensities for backprojection. Now one just has to *somehow* translate these intensities into discretely spaced lines (because this is the actual task, in analogy to the above-mentioned knitting). Here is my simple attempt, which e.g. for 69° gives the following result (I am not really happy with this - there is definitely room for improvement!):&#xD;
&#xD;
![enter image description here][5]&#xD;
&#xD;
    valsAngle[invRads_] := Module[{img, angle, data, l2},&#xD;
       angle = Last@invRads;&#xD;
       data = Max /@ (Transpose@*ImageData@*ImageRotate @@ invRads);&#xD;
       l2 = Round[Length[data]/2];&#xD;
       data = MapIndexed[{First[#2] - l2, #1} &amp;amp;, data];&#xD;
       {Select[&#xD;
         Times @@@ ({#1, &#xD;
              If[#2 &amp;gt; .0003, 1, 0]} &amp;amp; @@@ ((Mean /@ # &amp;amp;)@*Transpose /@ &#xD;
              Partition[data, 5])), # != 0 &amp;amp;], angle}  (* &#xD;
       limiting value of 0.0003 is just empirical! *)&#xD;
       ];&#xD;
    &#xD;
    va = valsAngle /@ invRadImgs;&#xD;
    graphicsData[va_] := Module[{u, angle},&#xD;
       {u, angle} = va;&#xD;
       InfiniteLine[# {Cos[angle], -Sin[angle]}, {Sin[angle], &#xD;
           Cos[angle]}] &amp;amp; /@ u];&#xD;
    &#xD;
    gd = graphicsData /@ va;&#xD;
    Graphics[{Thickness[.0003], gd}, ImageSize -&amp;gt; 600, &#xD;
     PlotRange -&amp;gt; {{-170, 170}, {-220, 220}}]&#xD;
&#xD;
... and the result is a bunch of lines:&#xD;
&#xD;
![enter image description here][6]&#xD;
 [at0]: https://community.wolfram.com/web/antononcube&#xD;
&#xD;
 [at1]: https://community.wolfram.com/web/vitaliyk&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com/groups/-/m/t/1555648?p_p_auth=T7A50bYl&#xD;
  [2]: http://artof01.com/vrellis/works/knit.html&#xD;
  [3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ImageOfLines.gif&amp;amp;userId=32203&#xD;
  [4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=img0rd0.jpg&amp;amp;userId=32203&#xD;
  [5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=linesample.png&amp;amp;userId=32203&#xD;
  [6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ImageOfLines.png&amp;amp;userId=32203</description>
    <dc:creator>Henrik Schachner</dc:creator>
    <dc:date>2019-04-13T20:01:08Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2007434">
    <title>[WSG20] New Daily Study Group begins Monday, June 22</title>
    <link>https://community.wolfram.com/groups/-/m/t/2007434</link>
    <description>Our newest [Daily Study Group][1] offers a jump-start on earning Wolfram certifications and covers topics including Wolfram Notebooks, image processing and multiparadigm data science. Sign up at: https://wolfr.am/NpiuhRsg&#xD;
&#xD;
&#xD;
  [1]: https://www.wolfram.com/wolfram-u/special-event/study-groups/</description>
    <dc:creator>Jamie Peterson</dc:creator>
    <dc:date>2020-06-18T21:21:10Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2774101">
    <title>[WSG23] Daily Study Group: Wolfram Language Basics</title>
    <link>https://community.wolfram.com/groups/-/m/t/2774101</link>
    <description>A Wolfram U daily study group covering the implementation of Wolfram Language for tasks ranging from basic programming to video analysis begins on January 17, 2023 and runs through February 3. This study group will run on weekdays from 11:00AM&amp;#x2013;12:00PM Central US time.&#xD;
&#xD;
This study group is an incredible way either to start learning Wolfram Language or to explore new functionality you haven&amp;#039;t yet used. We will cover a very broad variety of topics, including but not limited to image and sound analysis, symbolics and numerics, function visualization and even cloud computation and deployment. We will even cover useful tips and tricks to help you work efficiently with notebooks!&#xD;
&#xD;
![enter image description here][1]&#xD;
&#xD;
**No prior Wolfram Language experience is necessary.** As usual, we will have questions, study materials, quizzes along the way to help you master the subject matter. &#xD;
&#xD;
You can [**REGISTER HERE**][2]. I hope to see you there!&#xD;
&#xD;
![enter image description here][3]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=meechstogram.png&amp;amp;userId=1711324&#xD;
  [2]: https://www.bigmarker.com/series/daily-study-group-wolfram-language-basics-wsg34/series_details?utm_bmcr_source=community&#xD;
  [3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=WolframUBanner%281%29%281%29.jpeg&amp;amp;userId=1711324</description>
    <dc:creator>Arben Kalziqi</dc:creator>
    <dc:date>2023-01-11T04:29:36Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2135869">
    <title>Tilings and constraint programming</title>
    <link>https://community.wolfram.com/groups/-/m/t/2135869</link>
    <description>Introduction&#xD;
------------&#xD;
&#xD;
The goal of this post is to start from images like this example:&#xD;
&#xD;
![Girl3][1]&#xD;
&#xD;
and generate pictures like:&#xD;
&#xD;
![Girl3Gray][2]&#xD;
&#xD;
or&#xD;
&#xD;
![Girl3Color][3]&#xD;
&#xD;
Mathematica 12.1 or later is required, since mixed-integer programming is used.&#xD;
&#xD;
Explanation&#xD;
-----------&#xD;
&#xD;
 &#xD;
&#xD;
Let&amp;#039;s take the first image (black and white) as an example.&#xD;
&#xD;
Let&amp;#039;s assume we have a collection of 16 tiles:&#xD;
&#xD;
![GrayTiles][4]&#xD;
&#xD;
The problem to solve is how to place the tiles on the picture so that the gray content of a tile is close to the gray content of the picture below it and at the same time the topological constraints are satisfied.&#xD;
&#xD;
By topological constraints, I mean that the tiles must be compatible.&#xD;
&#xD;
This is allowed:&#xD;
&#xD;
![Allowed][5]&#xD;
&#xD;
This is forbidden because the dark horizontal band continues as a white horizontal band.&#xD;
&#xD;
![Forbidden][6]&#xD;
&#xD;
We are going to translate this problem into a set of equations on integer variables and with a linear cost function to optimize. The final problem will be solved with the LinearOptimization function from Mathematica 12.1.&#xD;
&#xD;
Each pixel of the image is encoded by a vector ![eq1][7] because there are 16 different tiles in this example. The components of the vector can be either 0 or 1.&#xD;
&#xD;
This is expressed as:&#xD;
&#xD;
    VectorLessEqual[{0, v}], VectorLessEqual[{v, 1}], v \[Element] Vectors[nbTiles, Integers]&#xD;
&#xD;
For each pixel, only one tile can be used. We cannot put several tiles on a pixel but only choose one and only one.&#xD;
&#xD;
If we use the constraint:&#xD;
&#xD;
![eq2][8]&#xD;
&#xD;
then we express that only one tile can be used. Indeed, since the components are integers equal to 0 or 1, the only way to satisfy this equation is for exactly one component to be equal to one.&#xD;
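&#xD;
As a minimal, self-contained sketch of this one-tile-per-pixel selection (with random stand-in costs, not the notebook&amp;#039;s actual color errors):&#xD;
&#xD;
    nbTiles = 16;&#xD;
    costs = RandomReal[1, nbTiles]; (* stand-in per-tile approximation errors *)&#xD;
    sol = LinearOptimization[costs . v,&#xD;
      {VectorLessEqual[{0, v}], VectorLessEqual[{v, 1}], Total[v] == 1},&#xD;
      {v \[Element] Vectors[nbTiles, Integers]}];&#xD;
    v /. sol (* a one-hot vector selecting the cheapest tile *)&#xD;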
&#xD;
Expressing the topological constraints is similar.&#xD;
&#xD;
We have equations like:&#xD;
&#xD;
![eq3][9]&#xD;
&#xD;
This equation is describing a relationship between pixel (x,y) and pixel (x+1,y).&#xD;
&#xD;
The values on the left and right sides can be either 0 or 1 (at the same time). When zero, it means none of the tiles is used; when one, it means one of the tiles is used. So the translation of the equation is:&#xD;
&#xD;
If one of the tiles 0, 3 or 5 is used at pixel (x,y), then one of the tiles 3, 6, 9 or 11 must be used at pixel (x+1,y).&#xD;
&#xD;
(I have not checked whether this makes sense for the set of tiles I am using as an example; the constraints for those tiles are probably different.)&#xD;
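&#xD;
In code, one such compatibility equation could look like this (a sketch with a hypothetical helper name; the index sets are just the illustrative ones from the text, converted to 1-based positions):&#xD;
&#xD;
    (* tiles {0,3,5} at pixel (x,y) must face tiles {3,6,9,11} at pixel (x+1,y) *)&#xD;
    compatible[vxy_, vx1y_] := Total[vxy[[{1, 4, 6}]]] == Total[vx1y[[{4, 7, 10, 12}]]]&#xD;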
&#xD;
&#xD;
To describe the topological constraint of the tiles, we have functions like:&#xD;
&#xD;
    rightSide[tileA[{a_,b_,c_}]] :={b,c};&#xD;
&#xD;
This gives a key describing the right side of the tile. Those keys are then used in associations to build the topological constraints. The key can be anything, so if you want to add new tiles, you can just use whatever keys you like to describe the sides of your tiles.&#xD;
&#xD;
The generic function rightSide must be extended with new cases when new tiles are added.&#xD;
&#xD;
Then, we need to express how well the tiles approximate the original picture.&#xD;
&#xD;
For this, an error function is created. It is a sum of terms:&#xD;
&#xD;
![eq4][10]&#xD;
&#xD;
It means that if tile k is selected for pixel (x,y), then the approximation error is `Subscript[f, k]`&#xD;
&#xD;
The function averageColor must be extended when new tiles are added. It returns the color content of a tile: an RGBColor. The code uses a color distance to compute the errors.&#xD;
&#xD;
That&amp;#039;s why the input picture is always converted to RGB and the alpha channel removed.&#xD;
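&#xD;
For illustration, the per-tile costs for one pixel might be computed along these lines (stand-in colors here, not the notebook&amp;#039;s averageColor data):&#xD;
&#xD;
    tileColors = {RGBColor[0., 0., 0.], RGBColor[0.5, 0.5, 0.5], RGBColor[1., 1., 1.]}; (* stand-ins *)&#xD;
    pixelColor = RGBColor[0.3, 0.3, 0.3];&#xD;
    costs = ColorDistance[pixelColor, #] &amp;amp; /@ tileColors&#xD;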
&#xD;
So, finally, we have translated our problem into a set of integer constraints with a linear cost function to optimize. It is a mixed-integer programming problem which can be solved with LinearOptimization.&#xD;
&#xD;
The tiles must be displayed. This is done by the function tileDraw; each tile is drawn in a square with corners (0,0) and (1,1).&#xD;
The code rasterizes those graphics into 50x50-pixel images.&#xD;
&#xD;
I have had lots of problems with those pictures due to rounding errors ... probably due to the very old GPU on my very old computer.&#xD;
So I tuned the vector code assuming the final tile image is 50x50 pixels. Now the pictures are well aligned: there is no longer a stray row or column of wrong pixels on the boundary of the tiles.&#xD;
&#xD;
But this may cause a problem on your configuration. So, if the tile pictures are not rendering correctly on your side, you&amp;#039;ll need to tune my vector graphic code again.&#xD;
&#xD;
If the picture is too big, solving the full mixed-integer programming problem may take too long. But we can solve a sub-optimal problem: we divide the picture into sub-pictures, solve the problem on each sub-picture, and then recombine the solutions. For this to work, we must add new constraints to express compatibility between the sub-pictures.&#xD;
&#xD;
For instance, the left side of a picture at (row,col) must be compatible with the right side of the picture at (row,col-1). So, the problem must be solved in a given order so that the constraints can be propagated from one sub-picture to the other.&#xD;
&#xD;
For some tiles, the sub-optimal solution can be very good from an artistic point of view (the dark tiles below work well). For other tiles (the smith tiles in the notebook), either the sub-optimal problem cannot always be solved (because the constraints coming from the previous sub-pictures can&amp;#039;t be satisfied) or the sub-optimal solution will look bad from time to time.&#xD;
&#xD;
So this idea of using sub-pictures really depends on the kind of tiles used. You need to experiment. But it is art, after all.&#xD;
&#xD;
Example of use&#xD;
--------------&#xD;
&#xD;
First, we get an example picture:&#xD;
&#xD;
    srcImage = ImageCrop[ExampleData[{&amp;#034;TestImage&amp;#034;, &amp;#034;Girl3&amp;#034;}], {190, 270}]&#xD;
&#xD;
The picture is resized, converted to RGB and any alpha channel removed.&#xD;
&#xD;
    imgToAnalyze = &#xD;
     ImageResize[&#xD;
      ImageAdjust[&#xD;
       RemoveAlphaChannel[ColorConvert[srcImage, &amp;#034;RGB&amp;#034;], Black]], {50, &#xD;
       Automatic}]&#xD;
&#xD;
For the dark tiles (knots), we decide to use only gray levels. The first color is the background of the tiles; the other colors are for the circles and the vertical and horizontal bands.&#xD;
&#xD;
    tileData = &#xD;
      mkDarkTiles[RGBColor[&#xD;
       0.5, 0.5, 0.5], {RGBColor[0., 0., 0.], RGBColor[1., 1., 1.]}];&#xD;
&#xD;
It gives a total of 16 tiles. The more tiles, the more difficult it is to solve the problem. 16 is ok on my old computer.&#xD;
&#xD;
    tileData[&amp;#034;allTilesImg&amp;#034;] // Length&#xD;
&#xD;
The problem is solved on 15x15 sub-pictures.&#xD;
&#xD;
    solution = partitionSolve[tileData, imgToAnalyze, 15];&#xD;
&#xD;
The final picture is generated from the tiles and the solution.&#xD;
&#xD;
    img = createPict[tileData, solution];&#xD;
&#xD;
And you&amp;#039;ll get:&#xD;
&#xD;
![Girl3Gray][2]&#xD;
&#xD;
Have fun! I hope the vector graphics code will not have to be retuned to generate the tile pictures (rounding errors).&#xD;
&#xD;
The notebook is attached to the post.&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Girl3.png&amp;amp;userId=89693&#xD;
  [2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Girl3Gray.png&amp;amp;userId=89693&#xD;
  [3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Girl3Color.png&amp;amp;userId=89693&#xD;
  [4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Tiles.png&amp;amp;userId=89693&#xD;
  [5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Allowed.png&amp;amp;userId=89693&#xD;
  [6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Forbidden.png&amp;amp;userId=89693&#xD;
  [7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=eq1.png&amp;amp;userId=89693&#xD;
  [8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=eq2.png&amp;amp;userId=89693&#xD;
  [9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=eq3.png&amp;amp;userId=89693&#xD;
  [10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=eq4.png&amp;amp;userId=89693&#xD;
  [11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=LenaGray.png&amp;amp;userId=89693</description>
    <dc:creator>Christophe Favergeon</dc:creator>
    <dc:date>2020-12-11T16:08:18Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2435403">
    <title>Designing Townscaper town on computable base-grid</title>
    <link>https://community.wolfram.com/groups/-/m/t/2435403</link>
    <description>![Designing Townscaper town on computable base-grid][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=frames2.gif&amp;amp;userId=20103&#xD;
  [2]: https://www.wolframcloud.com/obj/0d953f64-4678-4d55-9d6b-17997fa2f42a</description>
    <dc:creator>Silvia Hao</dc:creator>
    <dc:date>2022-01-02T11:23:53Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2342501">
    <title>Fractal art: custom Mandelbrot set functions</title>
    <link>https://community.wolfram.com/groups/-/m/t/2342501</link>
    <description>*MODERATOR NOTE: related resource function can be found here*  &#xD;
https://resources.wolframcloud.com/FunctionRepository/resources/MandelbrotSetRemap&#xD;
&#xD;
----&#xD;
&#xD;
![enter image description here][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=frac_hero.jpg&amp;amp;userId=20103&#xD;
  [2]: https://www.wolframcloud.com/obj/17f4f067-3491-4d7a-bf84-5601782ad11e</description>
    <dc:creator>Mark Greenberg</dc:creator>
    <dc:date>2021-08-14T14:58:17Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/960843">
    <title>Seam carving (liquid or content aware rescaling) in Wolfram Language</title>
    <link>https://community.wolfram.com/groups/-/m/t/960843</link>
    <description>*MODERATOR NOTE:* a resource function based on this work is now available:    &#xD;
https://resources.wolframcloud.com/FunctionRepository/resources/ContentAwareImageResize/&#xD;
&#xD;
----------&#xD;
&#xD;
- **Update 1:** Forward energy is now also implemented.&#xD;
- **Update 2:** Extending an image has now also been implemented.&#xD;
&#xD;
## Seam carving ##&#xD;
![enter image description here][12]&#xD;
&#xD;
Hi All,&#xD;
&#xD;
As you know, the Wolfram Language can do a lot of image processing, but one thing it can&amp;#039;t yet do is so-called liquid rescaling. Liquid rescaling is a way of cropping an image, but as opposed to ImageCrop it is content-aware, meaning it first removes the parts of the image that carry less &amp;#039;information&amp;#039;. I&amp;#039;ll show you how we can implement such cropping in the Wolfram Language. We start off by importing an image:&#xD;
&#xD;
    img=Import[&amp;#034;tower.png&amp;#034;];&#xD;
&#xD;
![enter image description here][1]&#xD;
&#xD;
I now define a so-called energy function that describes where the image is information-rich:&#xD;
&#xD;
    EnergyFunction[img_Image] := GradientFilter[img, 1, Method -&amp;gt; &amp;#034;ShenCastan&amp;#034;]&#xD;
&#xD;
We can test this out:&#xD;
&#xD;
    EnergyFunction[img]&#xD;
&#xD;
giving:&#xD;
&#xD;
![enter image description here][2]&#xD;
&#xD;
Now the idea is to find vertical &amp;#039;seams&amp;#039; such that the sum of the values of the energy function along a seam is as low as possible. Such a seam contains the least &amp;#039;energy&amp;#039; or &amp;#039;information&amp;#039;, and we will therefore remove it first. We can use the following code to do that:&#xD;
&#xD;
    MinFilter1[x_List] := Min /@ Partition[x, 3, 1, {2, 2}, 1.0 10^6] (* same as MinFilter[x,1] but 10x faster *)&#xD;
    MinPosition[x_List] := First[Ordering[x, 1]] (* position of min element *)&#xD;
    Neighbours[n_Integer?Positive, len_Integer?Positive] := Which[n == 1, If[len &amp;gt; 1, {1, 2}, {1}],&#xD;
      n == len, If[len &amp;gt; 1, {len, len - 1}, {1}],&#xD;
      True, {n - 1, n, n + 1}&#xD;
      ]&#xD;
    FindSeam[mat_List?MatrixQ] := &#xD;
     Module[{dimx, dimy, seam, neighbours, values, newpos, ii},&#xD;
      {dimy, dimx} = Dimensions[mat];&#xD;
      seam = ConstantArray[-1, dimy];&#xD;
      seam[[-1]] = MinPosition[mat[[-1]]];&#xD;
      Do[&#xD;
       neighbours = Neighbours[seam[[ii + 1]], dimx];&#xD;
       values = mat[[ii, neighbours]];&#xD;
       newpos = neighbours[[MinPosition[values]]];&#xD;
       seam[[ii]] = newpos&#xD;
       ,&#xD;
       {ii, dimy - 1, 1, -1}&#xD;
       ];&#xD;
      seam&#xD;
      ]&#xD;
    ComputeEnergyField[img_Image] := FoldList[#2 + MinFilter1[#1] &amp;amp;, ImageData[EnergyFunction[img]]]&#xD;
    ComputeEnergyField[mat_List] := ComputeEnergyField[Image[mat]]&#xD;
&#xD;
we can now test it out:&#xD;
&#xD;
    seam = FindSeam[ComputeEnergyField[img]];&#xD;
    HighlightImage[img, Transpose[{seam, Range[Length[seam]]}]]&#xD;
&#xD;
seam will be a list with the horizontal position of the pixel for each row of pixels. We can use HighlightImage to see the &amp;#039;seam&amp;#039;:&#xD;
&#xD;
![enter image description here][3]&#xD;
&#xD;
We can remove (carve) that seam from the original image, after which we can repeat this process over and over, cropping 1 pixel each time until we have the width that we need (note that we need to recalculate the energy function after every carve). Of course one could do the same with horizontal seams if one wants to crop in the vertical direction: simply use the same code but rotate the image 90 degrees, remove some seams, and then rotate it 90 degrees back.&#xD;
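&#xD;
A single carve is just deleting one pixel per row; a minimal one-off version (equivalent to the Carve helper defined further below):&#xD;
&#xD;
    carved = Image[MapThread[Delete, {ImageData[img], seam}]];&#xD;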
&#xD;
To make it more interactive we can calculate all the seams and save these separately, after which we can very easily crop to a desired width:&#xD;
&#xD;
    ClearAll[MinFilter1, MinPosition, Neighbours, FindSeam, EnergyFunction, ComputeEnergyField, Carve, FillNthPosition, CreateSeamcarveImageData, SeamCarve]&#xD;
    MinFilter1[x_List] := Min /@ Partition[x, 3, 1, {2, 2}, 1.0 10^6] (* same as MinFilter[x,1] but 10x faster *)&#xD;
    MinPosition[x_List] := First[Ordering[x, 1]] (* position of min element *)&#xD;
    Neighbours[n_Integer?Positive, len_Integer?Positive] := Which[n == 1, If[len &amp;gt; 1, {1, 2}, {1}],&#xD;
      n == len, If[len &amp;gt; 1, {len, len - 1}, {1}],&#xD;
      True, {n - 1, n, n + 1}&#xD;
      ]&#xD;
    FindSeam[mat_List?MatrixQ] := &#xD;
     Module[{dimx, dimy, seam, neighbours, values, newpos, ii},&#xD;
      {dimy, dimx} = Dimensions[mat];&#xD;
      seam = ConstantArray[-1, dimy];&#xD;
      seam[[-1]] = MinPosition[mat[[-1]]];&#xD;
      Do[&#xD;
       neighbours = Neighbours[seam[[ii + 1]], dimx];&#xD;
       values = mat[[ii, neighbours]];&#xD;
       newpos = neighbours[[MinPosition[values]]];&#xD;
       seam[[ii]] = newpos&#xD;
       ,&#xD;
       {ii, dimy - 1, 1, -1}&#xD;
       ];&#xD;
      seam&#xD;
      ]&#xD;
    EnergyFunction[img_Image] := GradientFilter[img, 1, Method -&amp;gt; &amp;#034;ShenCastan&amp;#034;]&#xD;
    ComputeEnergyField[img_Image] := FoldList[#2 + MinFilter1[#1] &amp;amp;, ImageData[EnergyFunction[img]]]&#xD;
    ComputeEnergyField[mat_List] := ComputeEnergyField[Image[mat]]&#xD;
    Carve[mat_List?ArrayQ, seam_List] := MapThread[Delete, {mat, seam}, 1]&#xD;
    FillNthPosition[x_List, n_Integer?Positive, fill_, empty_: 0] := &#xD;
     Block[{pos, out},&#xD;
      out = x;&#xD;
      pos = Position[out, empty, {1}, n, Heads -&amp;gt; False];&#xD;
      out[[pos[[n]]]] = fill;&#xD;
      out&#xD;
      ]&#xD;
    CreateSeamcarveImageData[img_Image] := Block[{imagedata, dims, dimx, dimy, carveinfo, seam, energyinfo},&#xD;
      imagedata = ImageData[img];&#xD;
      dims = {dimy, dimx} = Dimensions[imagedata, 2];&#xD;
      carveinfo = ConstantArray[0, dims];&#xD;
      PrintTemporary[Dynamic[Row[{&amp;#034;Calculating: &amp;#034;, i, &amp;#034;/&amp;#034;, dimx}]]];&#xD;
      Do[&#xD;
       energyinfo = ComputeEnergyField[imagedata];&#xD;
       seam = FindSeam[energyinfo];&#xD;
       carveinfo = &#xD;
        MapThread[FillNthPosition[#1, #2, i, 0] &amp;amp;, {carveinfo, seam}];&#xD;
       imagedata = Carve[imagedata, seam];&#xD;
       ,&#xD;
       {i, dimx}&#xD;
       ];&#xD;
      {img, carveinfo}&#xD;
      ]&#xD;
    SeamCarve[{img_Image, carveinfo_List}, n_Integer?NonNegative] := Block[{imgdata, pick, sel, m},&#xD;
      imgdata = ImageData[img];&#xD;
      If[Dimensions[imgdata, 2] == Dimensions[carveinfo],&#xD;
       m = Clip[n, {0, Length[carveinfo[[1]]] - 1}];&#xD;
       sel = UnitStep[m - carveinfo];&#xD;
       Image[Pick[ImageData[img], sel, 0]]&#xD;
       ,&#xD;
       Abort[];&#xD;
       ]&#xD;
      ]&#xD;
&#xD;
Now we pre-calculate the positions of the seams:&#xD;
&#xD;
    out = CreateSeamcarveImageData[img];&#xD;
    Manipulate[SeamCarve[out, n], {n, 0, 500, 1}]&#xD;
&#xD;
![enter image description here][4]&#xD;
&#xD;
We can now interactively change the width without cropping essential features away. To give you a comparison, here I cut away 100 pixels (from 286 pixels to 186 pixels) using different methods:&#xD;
&#xD;
![enter image description here][5]&#xD;
&#xD;
To give you a better idea of what the seams are I show you the seam information:&#xD;
&#xD;
    Image[Rescale[out[[2]]]]&#xD;
    Colorize[out[[2]]]&#xD;
&#xD;
![enter image description here][6]&#xD;
&#xD;
Where (in the top figure) dark colors mean seams that will be removed first, while brighter seams are left for last. You can clearly see that the tower and the person are saved until the last. Using this code we can now crop the width of any picture.&#xD;
&#xD;
## Object removal ##&#xD;
&#xD;
But we can extend this method: say that we want to remove an object from an image. We can use the same algorithm, but now we apply an extra negative weight to the region we want to delete. Say we have the following image:&#xD;
&#xD;
![enter image description here][7]&#xD;
&#xD;
and we would like to remove the guy on the left. We create a mask that includes his shadow:&#xD;
&#xD;
![enter image description here][8]&#xD;
&#xD;
We update our code to include a negative mask:&#xD;
&#xD;
    ClearAll[ComputeEnergyFieldNegativeMask, CreateSeamcarveImageDataNegativeMask]&#xD;
    ComputeEnergyFieldNegativeMask[img_Image, mask_Image] := FoldList[#2 + MinFilter1[#1] &amp;amp;, ImageData[EnergyFunction[img]] - 10000 ImageData[mask]]&#xD;
    ComputeEnergyFieldNegativeMask[mat_List, mask_List] := ComputeEnergyFieldNegativeMask[Image[mat], Image[mask]]&#xD;
    CreateSeamcarveImageDataNegativeMask[img_Image, mask_Image] := Block[{maskdata, imagedata, dims, dimx, dimy, carveinfo, seam, &#xD;
       energyinfo}, (* removal *)&#xD;
      If[ImageDimensions[img] == ImageDimensions[mask],&#xD;
       imagedata = ImageData[img];&#xD;
       maskdata = ImageData[ColorConvert[mask, &amp;#034;Grayscale&amp;#034;]];&#xD;
       dims = {dimy, dimx} = Dimensions[imagedata, 2];&#xD;
       carveinfo = ConstantArray[0, dims];&#xD;
       PrintTemporary[Dynamic[Row[{&amp;#034;Calculating: &amp;#034;, i, &amp;#034;/&amp;#034;, dimx}]]];&#xD;
       Do[&#xD;
        energyinfo = ComputeEnergyFieldNegativeMask[imagedata, maskdata];&#xD;
        seam = FindSeam[energyinfo];&#xD;
        carveinfo = &#xD;
         MapThread[FillNthPosition[#1, #2, i, 0] &amp;amp;, {carveinfo, seam}];&#xD;
        imagedata = Carve[imagedata, seam];&#xD;
        maskdata = Carve[maskdata, seam];&#xD;
        ,&#xD;
        {i, dimx}&#xD;
        ];&#xD;
       {img, carveinfo}&#xD;
       ,&#xD;
       Abort[];&#xD;
       ]&#xD;
      ]&#xD;
&#xD;
Now trying out the code:&#xD;
&#xD;
    img = Import[&amp;#034;beachpeeps.jpg&amp;#034;];&#xD;
    mask = Import[&amp;#034;beachpeepsmask.png&amp;#034;];&#xD;
    AbsoluteTiming[out = CreateSeamcarveImageDataNegativeMask[img, mask];]&#xD;
    Manipulate[SeamCarve[out, n], {n, 0, 75, 1}]&#xD;
&#xD;
![enter image description here][9]&#xD;
&#xD;
Or a static comparison:&#xD;
&#xD;
![enter image description here][10]&#xD;
&#xD;
## Object protection ##&#xD;
&#xD;
In the same spirit, we can protect certain areas in the image that are important to us:&#xD;
&#xD;
    ClearAll[CreateSeamcarveImageDataPositiveMask, ComputeEnergyFieldPositiveMask]&#xD;
    ComputeEnergyFieldPositiveMask[img_Image, mask_Image] := FoldList[#2 + MinFilter1[#1] &amp;amp;, ImageData[EnergyFunction[img]] + 10000 ImageData[mask]]&#xD;
    ComputeEnergyFieldPositiveMask[mat_List, mask_List] := ComputeEnergyFieldPositiveMask[Image[mat], Image[mask]]&#xD;
    CreateSeamcarveImageDataPositiveMask[img_Image, mask_Image] := Block[{maskdata, imagedata, dims, dimx, dimy, carveinfo, seam, &#xD;
       energyinfo}, (* removal *)&#xD;
      If[ImageDimensions[img] == ImageDimensions[mask],&#xD;
       imagedata = ImageData[img];&#xD;
       maskdata = ImageData[ColorConvert[mask, &amp;#034;Grayscale&amp;#034;]];&#xD;
       dims = {dimy, dimx} = Dimensions[imagedata, 2];&#xD;
       carveinfo = ConstantArray[0, dims];&#xD;
       PrintTemporary[Dynamic[Row[{&amp;#034;Calculating: &amp;#034;, i, &amp;#034;/&amp;#034;, dimx}]]];&#xD;
       Do[&#xD;
        energyinfo = ComputeEnergyFieldPositiveMask[imagedata, maskdata];&#xD;
        seam = FindSeam[energyinfo];&#xD;
        carveinfo = &#xD;
         MapThread[FillNthPosition[#1, #2, i, 0] &amp;amp;, {carveinfo, seam}];&#xD;
        imagedata = Carve[imagedata, seam];&#xD;
        maskdata = Carve[maskdata, seam];&#xD;
        ,&#xD;
        {i, dimx}&#xD;
        ];&#xD;
       {img, carveinfo}&#xD;
       ,&#xD;
       Abort[];&#xD;
       ]&#xD;
      ]&#xD;
&#xD;
Where now our mask includes all three people in the image, including their shadows:&#xD;
&#xD;
![enter image description here][11]&#xD;
&#xD;
Again, we call our function:&#xD;
&#xD;
    img = Import[&amp;#034;beachpeeps.jpg&amp;#034;];&#xD;
    mask = Import[&amp;#034;beachpeepsmask2.png&amp;#034;];&#xD;
    AbsoluteTiming[out = CreateSeamcarveImageDataPositiveMask[img, mask];]&#xD;
    Manipulate[SeamCarve[out, n], {n, 0, 500, 1}]&#xD;
&#xD;
Giving:&#xD;
&#xD;
![enter image description here][12]&#xD;
&#xD;
Or some static comparisons:&#xD;
&#xD;
![enter image description here][13]&#xD;
&#xD;
## Final thoughts ##&#xD;
&#xD;
For the energy function we can use different functions: I used a simple (fast) GradientFilter, but other functions like ImageSaliencyFilter or EntropyFilter could also work. Seams now go from top to bottom, and the position of the seam changes by at most 1 horizontal position per row of pixels; this can be changed without too much effort to allow a bigger neighbourhood. This method can also be applied to videos, but then a 3-dimensional carve in space-time has to be calculated. Moreover, the energy function can be improved by including &amp;#039;forward energy&amp;#039;. I can&amp;#039;t wait for this functionality to be included in the Wolfram Language.&#xD;
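&#xD;
For example, swapping in an entropy-based energy function would be a one-line change (a sketch, not from the original post; the grayscale conversion keeps the output single-channel, like GradientFilter&amp;#039;s):&#xD;
&#xD;
    EnergyFunction[img_Image] := EntropyFilter[ColorConvert[img, &amp;#034;Grayscale&amp;#034;], 3]&#xD;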
&#xD;
**EDIT:**&amp;lt;br/&amp;gt;&#xD;
**See my reply below, on how to implement forward energy.**&#xD;
&#xD;
&#xD;
P.S. I&amp;#039;m still confused why the guy has a skateboard on the beach...&#xD;
&#xD;
&#xD;
  [1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=tower.png&amp;amp;userId=73716&#xD;
  [2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=energy.png&amp;amp;userId=73716&#xD;
  [3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=seam.png&amp;amp;userId=73716&#xD;
  [4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=tower.gif&amp;amp;userId=73716&#xD;
  [5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=comparison.png&amp;amp;userId=73716&#xD;
  [6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=seaminfo.png&amp;amp;userId=73716&#xD;
  [7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=beachpeeps.jpg&amp;amp;userId=73716&#xD;
  [8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=beachpeepsmask.png&amp;amp;userId=73716&#xD;
  [9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=guygone.gif&amp;amp;userId=73716&#xD;
  [10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7920guygonecomparison.png&amp;amp;userId=73716&#xD;
  [11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=beachpeepsmask2.png&amp;amp;userId=73716&#xD;
  [12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=savepeople.gif&amp;amp;userId=73716&#xD;
  [13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=preservestack.jpg&amp;amp;userId=73716</description>
    <dc:creator>Sander Huisman</dc:creator>
    <dc:date>2016-11-10T23:42:16Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2051264">
    <title>Estimation of energy yield of 2020 Beirut port explosion</title>
    <link>https://community.wolfram.com/groups/-/m/t/2051264</link>
    <description>Probably most of you heard the sad news that there was a giant explosion in the port of Beirut today August 3rd 2020. Several videos were released on which we can do analysis. Note that the method I will use was also famously used by G.I. Taylor to find the energy of the Trinity nuclear bomb test, and he found the right amount to within 10%! We will not be so lucky as the video quality was relatively poor as compared to the high-speed imaging done back then.&#xD;
&#xD;
I extracted several frames from one of the videos:&#xD;
&#xD;
![enter image description here][1]&#xD;
&#xD;
    SetDirectory[NotebookDirectory[]];&#xD;
    v1 = Import[&amp;#034;1.mp4&amp;#034;];&#xD;
    fra = VideoExtractFrames[v1, Interval[{11, 12}]]&#xD;
    fra = ImageRotate[#, Right] &amp;amp; /@ fra;&#xD;
&#xD;
For each of the frames I identified the explosion by clicking 3 points on the circle:&#xD;
&#xD;
     data={&#xD;
    {7,{{157.15625,365.20703125000006`},{233.83984375,379.76562500000006`},{272.015625,312.91015625000006`}}},&#xD;
    {8,{{318.16796874999994`,322.81640625000006`},{228.7890625,462.8515625},{103.61328125,393.38281250000006`}}},&#xD;
    {9,{{341.03515625000006`,311.34765625},{308.27734375,478.125},{93.86328125,420.34375}}},&#xD;
    {10,{{359.08984375,315.546875},{351.48828125,478.63671875000006`},{86.55078125,454.5078125}}},&#xD;
    {11,{{375.62109375,325.64453125},{330.05859375,535.3984375},{62.0390625,434.51171875}}},&#xD;
    {12,{{376.0390625,326.765625},{337.94140625,539.9257812499999},{46.4140625,462.55859375}}}&#xD;
    };&#xD;
&#xD;
The first element is the index of the frame; the last elements are points on the circle:&#xD;
&#xD;
    circs = CircleThrough /@ data[[;; 6, 2]];&#xD;
    r = circs[[All, 2]];&#xD;
&#xD;
Here is the visualization:&#xD;
&#xD;
    Table[HighlightImage[fra[[data[[i, 1]]]], circs[[i]], &amp;#034;Boundary&amp;#034;], {i, Length[data]}]&#xD;
&#xD;
![enter image description here][2]&#xD;
&#xD;
Notice that I tracked the orange &amp;#039;glow&amp;#039;, not the shockwave or the smoke that was partially there before the main explosion (so this is on the conservative side, underestimating the energy release).&#xD;
&#xD;
From Google Earth I estimated the size of the face of the building on the left (a grain elevator) and found that every pixel corresponds to roughly 0.59 m (~22 meters corresponding to ~37 pixels).&#xD;
&#xD;
    cali = 0.5888486673789164`;&#xD;
    realr = r cali&#xD;
    &#xD;
The timestamps can be found from the video frame rate:&#xD;
&#xD;
    Information[v1]&#xD;
&#xD;
And so the timestamps and the dataset are created:&#xD;
&#xD;
    t = (Range[0, Length[realr] - 1]) 1/29.97;&#xD;
    tr = Transpose[{t, realr}]&#xD;
&#xD;
Since the explosion started between two frames, we include that in the fit (the t0):&#xD;
    &#xD;
    fit = FindFit[&#xD;
      tr, { a (x + t0)^0.4, 0 &amp;lt; t0 &amp;lt; 1/30}, {{a, 200}, {t0, 1/60}}, x]&#xD;
    realfit = a (x + t0)^0.4 /. fit&#xD;
    tzero = t0 /. fit&#xD;
    realfitshifted = a (x)^0.4 /. fit&#xD;
    prefactor = a /. fit&#xD;
&#xD;
The fit can be found [here][3] and is based on dimensional analysis with the variables E (energy), r (radius of the explosion), t (time), and ρ (density): the only length that can be formed from E, t and ρ is r = (E t^2/ρ)^(1/5), so r grows as t^(2/5), which explains the exponent 0.4 used for fitting.&#xD;
&#xD;
We plot the data and the fit:&#xD;
&#xD;
    Show[{ListPlot[Transpose[{t + tzero, realr}]], &#xD;
      Plot[realfitshifted, {x, 0, 0.2}]}, &#xD;
     PlotRange -&amp;gt; {{0, 0.2}, {0, 120}}, Frame -&amp;gt; True, &#xD;
     FrameLabel -&amp;gt; {&amp;#034;t&amp;#034;, &amp;#034;r [m]&amp;#034;}]&#xD;
&#xD;
![enter image description here][4]&#xD;
&#xD;
Which is a pretty good fit. &#xD;
&#xD;
We can now calculate the energy back from the explosion:&#xD;
&#xD;
    ClearAll[r, e, t, \[Rho]]&#xD;
    r == (e t^2/\[Rho])^(1/5)&#xD;
    Refine[DivideSides[%, t^(2/5)], t &amp;gt; 0]&#xD;
    %[[2]] == Quantity[prefactor, &amp;#034;Meters&amp;#034;/&amp;#034;Seconds&amp;#034;^(2/5)]&#xD;
    % /. \[Rho] -&amp;gt; Quantity[1, &amp;#034;Kilograms&amp;#034;/&amp;#034;Meters&amp;#034;^3]&#xD;
    energy = e /. Solve[%, e][[1]]&#xD;
&#xD;
Yielding:&#xD;
&#xD;
    Quantity[4.2808721214488837`*^11, &amp;#034;Joules&amp;#034;]&#xD;
&#xD;
and we can convert it to kilotons of TNT:&#xD;
&#xD;
    UnitConvert[energy, &amp;#034;KilotonsOfTNT&amp;#034;]&#xD;
&#xD;
yielding:&#xD;
&#xD;
    Quantity[0.102315, &amp;#034;KilotonsOfTNT&amp;#034;]&#xD;
&#xD;
This number is comparable to that of the 2015 Tianjin explosion (0.3 kilotons of TNT).&#xD;
    &#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2020-08-04at21.44.20.png&amp;amp;userId=73716&#xD;
  [2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2020-08-05at12.00.12.png&amp;amp;userId=73716&#xD;
  [3]: https://en.wikipedia.org/wiki/Nuclear_weapon_yield#Calculating_yields_and_controversy&#xD;
  [4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2020-08-04at21.53.18.png&amp;amp;userId=73716</description>
    <dc:creator>Sander Huisman</dc:creator>
    <dc:date>2020-08-04T19:57:48Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1025046">
    <title>Measuring the fractal dimension of a tree photo</title>
    <link>https://community.wolfram.com/groups/-/m/t/1025046</link>
    <description>This is a photo of some tree branches that I took at the park today.&#xD;
&#xD;
[&amp;lt;img src=&amp;#034;http://community.wolfram.com//c/portal/getImageAttachment?filename=20170304_121322.jpg&amp;amp;userId=38370&amp;#034; width=&amp;#034;500&amp;#034;&amp;gt;](http://community.wolfram.com//c/portal/getImageAttachment?filename=20170304_121322.jpg&amp;amp;userId=38370)&#xD;
&#xD;
It looks like a fractal, and a nice candidate for measuring the fractal dimension.  Here&amp;#039;s how we can do that in Mathematica.&#xD;
&#xD;
Let us start by binarizing the image.&#xD;
&#xD;
    img = Import[&amp;#034;~/Downloads/20170304_121322.jpg&amp;#034;];&#xD;
&#xD;
    bin = LocalAdaptiveBinarize[ImageAdjust[img, 0.3], 200, {0.9, 0, 0}];&#xD;
&#xD;
[&amp;lt;img src=&amp;#034;http://community.wolfram.com//c/portal/getImageAttachment?filename=bin.png&amp;amp;userId=38370&amp;#034; width=&amp;#034;300&amp;#034;&amp;gt;](http://community.wolfram.com//c/portal/getImageAttachment?filename=bin.png&amp;amp;userId=38370)&#xD;
&#xD;
For a better result, I increased the contrast slightly (`ImageAdjust`), then used local adaptive binarization. This method chooses a different binarization threshold for each pixel based on its neighbourhood.  It helps preserve small branches while avoiding the inclusion of any part of the clouds (the weather was not very good today).&#xD;
&#xD;
The easiest way to measure fractal dimension is *box counting*: overlay a square grid, and see how many of the grid cells contain some of the object in the image.  What would we get if the object is a line?&#xD;
&#xD;
    Table[&#xD;
      Rasterize[Graphics[{Antialiasing -&amp;gt; False, Line[{{0, 0}, {1, 1}}]}],&#xD;
        ImageSize -&amp;gt; 256, RasterSize -&amp;gt; k],&#xD;
      {k, 2^Range[2, 8]}&#xD;
      ] // ListAnimate&#xD;
&#xD;
The box count would double (i.e. multiply by $2^1$) every time the box size is halved.  What would happen if the object is a filled disk?&#xD;
&#xD;
    Table[&#xD;
      Rasterize[Graphics[{Antialiasing -&amp;gt; False, Disk[]}], &#xD;
       ImageSize -&amp;gt; 256, RasterSize -&amp;gt; k],&#xD;
      {k, 2^Range[2, 8]}&#xD;
      ] // ListAnimate&#xD;
&#xD;
The box count would quadruple (i.e. multiply by $2^2$) every time the box size is halved.&#xD;
&#xD;
In general, for a $d$-dimensional object the box count $n$ is proportional to the power $-d$ of the box size $l$: $n \sim l^{-d}$.  For some objects, $d$ can be a fractional (non-integer) number.  Such objects are called fractals.  For instance, if halving the box size triples the box count, then $d = \log 3/\log 2 \approx 1.585$ (the dimension of the Sierpinski triangle).  Let&amp;#039;s try it on the tree photo!&#xD;
&#xD;
First, let us choose box sizes in pixels so that an integer number of boxes fit in the image.&#xD;
&#xD;
    seq = Reverse[Intersection @@ Divisors /@ ImageDimensions[img]]&#xD;
    (* {1008, 504, 336, 252, 168, 144, 126, 112, 84, 72, 63, 56, 48, 42, 36, 28, 24, 21, 18, 16, 14, 12, 9, 8, 7, 6, 4, 3, 2, 1} *)&#xD;
&#xD;
Luckily, the image width and height had many small prime factors in common, so we have many box sizes to work with.  Now let us count boxes.&#xD;
&#xD;
    {width, height} = ImageDimensions[img]&#xD;
    (* {3024, 4032} *)&#xD;
&#xD;
    result = {&#xD;
      width/#,&#xD;
      Last@First@ImageLevels@Image[&#xD;
          BlockMap[Min, ImageData[bin], {#, #}],&#xD;
          &amp;#034;Bit&amp;#034;]&#xD;
      } &amp;amp; /@ seq; // AbsoluteTiming&#xD;
&#xD;
    (* {9.79212, Null} *)&#xD;
&#xD;
[BlockMap][1] will partition the image data into blocks (i.e. boxes) and take the minimum of each. Since black pixels are represented by 0 and white pixels by 1, we get 0 only for those boxes that contain at least one black pixel, i.e. they contain a bit of the tree branches.  `BlockMap` was introduced in Mathematica 10.2.  Those using older versions can replace `BlockMap[f, matrix, {k,k}]` with `Map[f, Partition[matrix, {k,k}], {2}]`.&#xD;
&#xD;
[ImageLevels][2] counts how many times each possible pixel value appears in an image. To make sure that the only possible pixel values are 0 and 1, we create a `&amp;#034;Bit&amp;#034;` image.&#xD;
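&#xD;
For instance, on a tiny hand-made `&amp;#034;Bit&amp;#034;` image (a small sketch, not part of the computation above):&#xD;
&#xD;
    ImageLevels[Image[{{0, 1}, {1, 1}}, &amp;#034;Bit&amp;#034;]]&#xD;
    (* {{0, 1}, {1, 3}} : one black pixel, three white pixels *)&#xD;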
&#xD;
Is the result really a power function of the form $n = \text{(const.)}\, m^d$, where $m$ is the number of boxes per row (the image width divided by the box size)?  The easy way to test this is to plot it on a log-log plot.  Power functions appear as straight lines when plotted on logarithmic scales.&#xD;
&#xD;
    ListLogLogPlot[result]&#xD;
&#xD;
![enter image description here][3]&#xD;
&#xD;
The slope of the line will give the exponent: $\ln n = \ln\text{(const.)} + d \ln m$.&#xD;
&#xD;
    fm = LinearModelFit[Log[result], {1, x}, x]&#xD;
&#xD;
It is about $\approx 1.85$.  Despite the thick tree trunks, the image of the branches behaves as a *lower-than-two dimensional* structure.&#xD;
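&#xD;
To read the slope off directly, we can query the fitted model; the `&amp;#034;BestFitParameters&amp;#034;` property of `LinearModelFit` gives the intercept and the slope:&#xD;
&#xD;
    fm[&amp;#034;BestFitParameters&amp;#034;]&#xD;
    (* {intercept, slope}; the slope is the dimension, about 1.85 *)&#xD;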
&#xD;
Let us plot the power law fit together with the original data:&#xD;
&#xD;
    plot = Show[&#xD;
      Plot[fm[x], {x, Log[result[[1, 1]]], Log[result[[-1, 1]]]}, &#xD;
       PlotStyle -&amp;gt; Black],&#xD;
      ListLogLogPlot[result, PlotStyle -&amp;gt; Red],&#xD;
      AspectRatio -&amp;gt; 1, Axes -&amp;gt; False, Frame -&amp;gt; True&#xD;
      ]&#xD;
&#xD;
![enter image description here][4]&#xD;
&#xD;
You can see that the power law is a pretty good fit. The slope of the curve is more or less the same at all size scales.  This indicates a certain kind of scale invariance in the image.  Indeed, it is clear that if we magnified a small part of the image, it would look similar to the whole: small branches are miniature copies of larger ones.  This is a characteristic feature of fractals.&#xD;
&#xD;
To visualize the box counting method, we can show the series of finer and finer box-grids:&#xD;
&#xD;
    sz = 252;&#xD;
    imgs = Image[BlockMap[Min, ImageData[bin], {#, #}], &amp;#034;Bit&amp;#034;] &amp;amp; /@ seq&#xD;
&#xD;
Scaling them to the same size and animating them shows how they approach the original image better and better:&#xD;
&#xD;
    imgs = If[First@ImageDimensions[#] &amp;gt; sz, &#xD;
         ImageResize[Image[#, Real], sz], (* downscale as grayscale *)&#xD;
         ImageResize[#, sz, Resampling -&amp;gt; &amp;#034;Nearest&amp;#034;]] &amp;amp; /@ imgs; (* upscale as bitmap *)&#xD;
&#xD;
    frm = Table[&#xD;
      ImageCompose[ImageResize[img, sz], {im, 0.5}], {im, imgs}];&#xD;
&#xD;
    ListAnimate[frm]&#xD;
&#xD;
![enter image description here][5]&#xD;
&#xD;
We can show the animation alongside the scaling plot:&#xD;
&#xD;
    frames = Table[&#xD;
       Row[{&#xD;
         Show[plot, AspectRatio -&amp;gt; 1, ImageSize -&amp;gt; 336, &#xD;
          Epilog -&amp;gt; {AbsolutePointSize[10], Red, Point@Log[result[[i]]]}],&#xD;
          Image[frm[[i]], Magnification -&amp;gt; 1]&#xD;
         }],&#xD;
       {i, Length[imgs]}];&#xD;
&#xD;
With some extra styling it looks like this:&#xD;
&#xD;
![enter image description here][6]&#xD;
&#xD;
I hope you enjoyed this small demonstration!&#xD;
&#xD;
&#xD;
  [1]: http://reference.wolfram.com/language/ref/BlockMap.html&#xD;
  [2]: http://reference.wolfram.com/language/ref/ImageLevels.html&#xD;
  [3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=plot.png&amp;amp;userId=38370&#xD;
  [4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=plot2.png&amp;amp;userId=38370&#xD;
  [5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ezgif.com-gif-maker.gif&amp;amp;userId=38370&#xD;
  [6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=animation.gif&amp;amp;userId=38370</description>
    <dc:creator>Szabolcs Horvát</dc:creator>
    <dc:date>2017-03-04T17:33:57Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2542490">
    <title>Imaging a rotating disk with a rolling shutter</title>
    <link>https://community.wolfram.com/groups/-/m/t/2542490</link>
    <description>![enter image description here][1]&#xD;
&#xD;
The excellent [community contribution][2] by [@Greg Hurst][at0] and a Wikipedia [animation][3] inspired me to look further into this &amp;#034;rolling shutter effect on rotating objects&amp;#034;. &#xD;
When we capture a video of a rotating disk with a rolling shutter, we have two independent movements: the colored disk rotating at *rps revolutions per second* and the shutter line sweeping one frame at *fps frames per second*. The ratio rps/fps is the driver of the rolling shutter effect (or the disk rps alone if we normalize the shutter fps to 1 frame per second). In order to best demonstrate this effect, the rps/fps ratios are taken to be in the range 1.5-2.5.&#xD;
&#xD;
This is a colored disk, m pixels wide, rotated over an angle theta.&#xD;
&#xD;
    colors = {RGBColor[1, 0, 1], RGBColor[0.988, 0.73, 0.0195], RGBColor[&#xD;
       0.266, 0.516, 0.9576], RGBColor[0.207, 0.652, 0.324], RGBColor[&#xD;
       0, 0, 1], RGBColor[1, 0, 0]};&#xD;
    colorDisk[theta_, m_, cols_] := &#xD;
     ImageResize[&#xD;
      Image[Graphics[&#xD;
        MapThread[{#3, &#xD;
           Disk[{0, 0}, 1, {#1, #2} + theta]} &amp;amp;, {Pi Range[0, 5, 1]/3, &#xD;
          Pi Range[1, 6]/3, colors}]]], m]&#xD;
&#xD;
A video of the rotating disk consists of a series of frames. Each frame is captured during one passage of the shutter line. The function *angularPosition* links the angular progress of the disk to the frame number (frm) and the row number at the position of the shutter line:&#xD;
&#xD;
    angularPosition[frm_, row_, rps_, &#xD;
      m_] := -2 Pi (-1 + (-1 + frm) m + row) rps/m&#xD;
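&#xD;
As a quick sanity check (using the m = 200 and rps = 1 of the first example below), the angular position advances by exactly one full turn per frame:&#xD;
&#xD;
    angularPosition[2, 1, 1, 200] - angularPosition[1, 1, 1, 200]&#xD;
    (* -2 Pi *)&#xD;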
&#xD;
The function *diskFrameImage* computes the result of the shutter line sweeping a colored disk (of size m, rotating at rps revolutions per second) at frame number frm, up to row number toRow:&#xD;
&#xD;
    diskFrameImage[frm_, toRow_, rps_, m_, cols_] := &#xD;
     ImageAssemble[&#xD;
      Transpose@{ParallelTable[&#xD;
         ImageTake[&#xD;
          colorDisk[angularPosition[frm, r, rps, m], m, colors], {r}], {r,&#xD;
           toRow}]}]&#xD;
&#xD;
This is the first frame of a video of a disk rotating at a speed ratio rps/fps of 1:&#xD;
&#xD;
    With[{m = 200, frm = 1, rps = 1}, &#xD;
     diskFrameImage[frm, m, rps, m, colors]]&#xD;
&#xD;
![enter image description here][4]&#xD;
&#xD;
This shows the influence of the disk rps/fps ratio on the appearance of the first captured frame:&#xD;
&#xD;
    With[{m = 200, frm = 1}, &#xD;
     Grid[{{&amp;#034;rps=0.512&amp;#034;, &amp;#034;rps=1.512&amp;#034;, &amp;#034;rps=2.512&amp;#034;}, &#xD;
       diskFrameImage[frm, m, #, m, colors] &amp;amp; /@ {.512, 1.512, 2.512}}]]&#xD;
&#xD;
![enter image description here][5]&#xD;
&#xD;
Below is a GIF showing the capture of the first frame of a disk rotating at rps/fps = 1. The same code is used to generate all of the following GIFs.&#xD;
&#xD;
    With[{m = 200, frm = 1, rps = 1.},&#xD;
     Animate[&#xD;
      Grid[{{&#xD;
         ImageCompose[&#xD;
          colorDisk[angularPosition[frm, row, rps, m], m, colors], &#xD;
          Graphics[Line[{{-m, 0}, {m, 0}}]], Scaled[{.5, (m - row)/m}]],&#xD;
         ImageCompose[diskFrameImage[frm, row, rps, m, colors], &#xD;
          Graphics[Line[{{-m, 0}, {m, 0}}]], Scaled[{.5, .01}]]}}, &#xD;
       Alignment -&amp;gt; Top], {row, 1, m}]]&#xD;
&#xD;
![enter image description here][6]&#xD;
&#xD;
Below are 4 examples showing the capture of the first frame of a video of a rotating disk with a rolling shutter at 1 fps. The disk rotates at 0.5 rps (top left), 1.0 rps (top right), 1.5 rps (bottom left) and 2.0 rps (bottom right). As the disk rotates faster relative to the shutter, the captured image becomes more complex.&#xD;
&#xD;
![enter image description here][7]&#xD;
&#xD;
![enter image description here][8]&#xD;
&#xD;
The subsequent frames are captured the same way. Below are the first 5 frames of a video of a disk rotating at 1.5123 rps:&#xD;
&#xD;
![enter image description here][9]&#xD;
&#xD;
There ought to be many more images that can result from the transformation of a rotation into a capture with a rolling shutter. I hope this contribution can inspire more community members.&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=combirotodisk15and2.gif&amp;amp;userId=20103&#xD;
  [2]: https://community.wolfram.com/groups/-/m/t/2489445&#xD;
  [3]: https://upload.wikimedia.org/wikipedia/commons/1/15/Rolling_shutter_effect_animation.gif&#xD;
  [4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=8345locusfulldiskrps1.png&amp;amp;userId=68637&#xD;
  [5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=9528colordiskrpscompare.png&amp;amp;userId=68637&#xD;
  [6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=newrotodiskfrm1rps05small.gif&amp;amp;userId=68637&#xD;
  [7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=combirotodisk05and10.gif&amp;amp;userId=68637&#xD;
  [8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=combirotodisk15and2.gif&amp;amp;userId=68637&#xD;
  [9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=allframesvideofinal.gif&amp;amp;userId=68637&#xD;
&#xD;
 [at0]: https://community.wolfram.com/web/ghurst</description>
    <dc:creator>Erik Mahieu</dc:creator>
    <dc:date>2022-06-02T12:22:57Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/922544">
    <title>Convert 2D into a 3D object: radiotherapy treatment planning system</title>
    <link>https://community.wolfram.com/groups/-/m/t/922544</link>
    <description>Dear all,&#xD;
&#xD;
my data consist of a list of lists of points in 3D. After running the code&#xD;
&#xD;
    ClearAll[&amp;#034;Global`*&amp;#034;]&#xD;
    SetDirectory[NotebookDirectory[]];&#xD;
    sliceData = &amp;lt;&amp;lt; &amp;#034;SliceData.txt&amp;#034;;&#xD;
    Graphics3D[Line /@ sliceData, Boxed -&amp;gt; True, Axes -&amp;gt; True]&#xD;
&#xD;
one gets:&#xD;
&#xD;
![enter image description here][1]&#xD;
&#xD;
Clearly the data describe a 3D object (as &amp;#034;wire frame&amp;#034;).&#xD;
&#xD;
**QUESTION:** *How can those data be converted into a single 3D Mathematica object (graphics, mesh, ...)? Is there an already implemented way (a routine) I am missing?*&#xD;
&#xD;
&#xD;
----------&#xD;
&#xD;
&#xD;
NOTE on DATA:&#xD;
-------------&#xD;
&#xD;
The data stem from our radiotherapy treatment planning system. There we work with contours which are drawn on each single CT slice. Organs at risk and target volumes are defined by those contours. One special contour is the &amp;#034;BODY contour&amp;#034;; this is what is shown in my example data. Dose will be calculated only inside this BODY contour, which is why e.g. the volume around the ears and nose appears to be exaggerated (to be on the safe side). Those treatment plans can be exported as DICOM files and nicely imported into Mathematica.&#xD;
&#xD;
When high energy radiation is applied to a body it turns out that the dose next to the skin is highly diminished; this is due to the buildup effect of the dose. When full dose at the skin is wanted, one has to anticipate this buildup effect, and this can be realized by putting a &amp;#034;flab&amp;#034; onto the skin: A flab is a layer made of some tissue equivalent material.&#xD;
&#xD;
Optimal flabs can have quite irregular shapes. I recently learned at a conference that it is possible to 3D print those individual flabs - if one can provide the data ... So - probably to everybody&amp;#039;s disappointment - I do not want to print any BODY contour, but I imagined a BODY contour might serve here as a kind of &amp;#034;honey pot&amp;#034;.&#xD;
&#xD;
Best regards and many thanks! -- Henrik&#xD;
&#xD;
  [1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sliceImg.png&amp;amp;userId=32203</description>
    <dc:creator>Henrik Schachner</dc:creator>
    <dc:date>2016-09-11T20:40:44Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/3062832">
    <title>The Telephone Game - next level with GPT</title>
    <link>https://community.wolfram.com/groups/-/m/t/3062832</link>
    <description>![enter image description here][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=w5qgsdf.jpg&amp;amp;userId=11733&#xD;
  [2]: https://www.wolframcloud.com/obj/04458d24-aacf-4cd7-8a5d-46efde37e927</description>
    <dc:creator>Marco Thiel</dc:creator>
    <dc:date>2023-11-09T22:51:38Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2153018">
    <title>Books as pixels: rendering text as organized color</title>
    <link>https://community.wolfram.com/groups/-/m/t/2153018</link>
    <description>[![enter image description here][1]](https://easyzoom.com/image/238912)&#xD;
&#xD;
The entire text of The Great Gatsby rendered as colored pixels. Zoom in interactively here:  &#xD;
&#xD;
https://easyzoom.com/image/238912&#xD;
&#xD;
[![enter image description here][2]](https://easyzoom.com/image/238912)&#xD;
&#xD;
Hi all, making my first post here. I&amp;#039;ve been using Mathematica in my research for 10 years or so, but now that I&amp;#039;m an MFA student I&amp;#039;m finding it extremely useful in my art practice. This post is just a fun, simple idea, and I plan on posting my more complicated work soon. Let me know what you think!&#xD;
&#xD;
&#xD;
----------&#xD;
&amp;amp;[Wolfram Notebook][3]&#xD;
&#xD;
&#xD;
  [Original]: https://www.wolframcloud.com/obj/76b4e5d2-aed6-4162-9543-39fa9139b920&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ezgif-6-c1b05e197418.gif&amp;amp;userId=20103&#xD;
  [2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=booksAsPixels.jpg&amp;amp;userId=20103&#xD;
  [3]: https://www.wolframcloud.com/obj/2b820c1b-c4e8-495c-ae0f-194793676745</description>
    <dc:creator>Jack Madden</dc:creator>
    <dc:date>2021-01-02T21:07:56Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/884348">
    <title>[WSS16] Image Colorization</title>
    <link>https://community.wolfram.com/groups/-/m/t/884348</link>
    <description>The aim of my project for the Wolfram Science Summer School was to build a neural network which could be able to colorize grayscale images in a realistic way. The network has been built following the article [1]. In this paper, the authors propose a fully automated approach for colorization of grayscale images, which uses a combination of global image features, which are extracted from the entire image, and local image features, which are computed from small image patches. Global priors provide information at an image level such as whether or not the image was taken indoors or outdoors, whether it is day or night, etc., while local features represent the local texture or object at a given location. By combining both features, it&amp;#039;s possible to leverage the semantic information to color the images without requiring human interaction. The approach is based on Convolutional Neural Networks, which have a strong capacity for learning and is trained to predict the chrominance of a grayscale image using the CIE L*a*b* colorspace. Predicting colors has the nice property that training data is practically free: any color photo can be used as a training example. &#xD;
&#xD;
**Net Layers**&#xD;
&#xD;
The model consists of four main components: a low-level features network, a mid-level features network, a global features network, and a colorization network. First, a common set of shared low-level features is extracted from the image. Using these features, a set of global image features and mid-level image features are computed. Then, the mid-level and the global features are fused by a &amp;#034;fusion layer&amp;#034; and used as the input to a colorization network that outputs the final chrominance map. &#xD;
Each layer has a ReLU transfer function except for the last convolution of the colorization network, where a sigmoid function is applied. The model is able to process images of any size, but it is most efficient when the input images are 224x224 pixels, as the shared low-level features layers can share outputs. Note that when the input image has a different resolution, while the low-level feature weights are shared, a rescaled image of size 224x224 must be used for the global features network. This requires processing both the original image and the rescaled image through the low-level features network, increasing both memory consumption and computation time. For this reason, we trained the model exclusively with images of size 224x224 pixels.&#xD;
&#xD;
*Low-Level Features Network*&#xD;
&#xD;
A 6-layer Convolutional Neural Network obtains low-level features directly from the input image. The convolutional filter bank that this network represents is shared to feed both the global features network and the mid-level features network. In order to reduce the size of the feature maps, we use convolution layers with increased strides instead of max-pooling layers (as is usual for similar kinds of networks): with a stride of 2, the output is effectively half the size of the input layer. We used 3x3 convolution kernels exclusively and a padding of 1x1 to ensure the output is the same size as the input (or half, if using a stride of 2).&#xD;
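&#xD;
For illustration, here is a minimal sketch of one such stride-2 layer (assuming ConvolutionLayer&amp;#039;s &amp;#034;Input&amp;#034; option; the 64-channel count follows the paper, not our exact notebook):&#xD;
&#xD;
    ConvolutionLayer[64, 3, &amp;#034;Stride&amp;#034; -&amp;gt; 2, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 1, &amp;#034;Input&amp;#034; -&amp;gt; {3, 224, 224}]&#xD;
    (* output dimensions: {64, 112, 112} *)&#xD;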
&#xD;
*Global Features Network*&#xD;
&#xD;
The global image features are obtained by further processing the low-level features with four convolutional layers followed by three fully-connected layers. This results in a 256-dimensional vector representation of the image.&#xD;
&#xD;
*Mid-Level Features Network*&#xD;
&#xD;
The mid-level features are obtained by processing the low-level features further with two convolutional layers. The output is bottlenecked from the original 512-channel low-level features to 256-channel mid-level features. Unlike the global image features, the low-level and mid-level features networks are fully convolutional networks, such that the output is a scaled version of the input.&#xD;
&#xD;
*Fusion Layer*&#xD;
&#xD;
In order to combine the global image features, a 256-dimensional vector, with the (mid-level) local image features, a 28x28x256-dimensional tensor, the authors introduce a fusion layer. This can be thought of as concatenating the global features with the local features at each spatial location and processing them through a small one-layer network. This effectively combines the global and the local features to obtain a new feature map that is, like the mid-level features, a 3D volume.&#xD;
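&#xD;
In the notation of [1], the fused feature at spatial location $(u,v)$ is $y^{\text{fusion}}_{u,v} = \sigma\left(b + W \left[y^{\text{global}}; y^{\text{mid}}_{u,v}\right]\right)$, i.e. the same weights $W$ and bias $b$ are applied at every spatial location.&#xD;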
&#xD;
*Colorization Network*&#xD;
&#xD;
Once the features are fused, they are processed by a set of convolutions and upsampling layers, which use the nearest neighbour technique so that the output is twice as wide and twice as tall. These layers are alternated until the output is half the size of the original input. The&#xD;
output layer of the colorization network consists of a convolutional layer with a Sigmoid transfer function that outputs the chrominance of the input grayscale image. Finally, the computed chrominance is upsampled and combined with the input intensity/luminance image to produce the resulting color image. In order to train the network, we used the Mean Square Error (MSE) criterion. Given a color image for training, the input of the model is the grayscale image while the target output is the a*b* components of the CIE L*a*b* colorspace. The a*b* components are globally normalized so they lie in the [0,1] range of the Sigmoid transfer function. &#xD;
&#xD;
*Colorization with Classification*&#xD;
&#xD;
While training with only color images using the MSE criterion does give good performance, sometimes it can make obvious mistakes due to not properly learning the global context of the image, e.g., whether it is indoors or outdoors. As learning these networks is a non-convex problem, we facilitated the optimization by also training for classification jointly with the colorization. As we trained the model using a large-scale dataset for classification of N classes (the Mathematica ImageIdentify dataset), we had classification labels available for training. These labels correspond to a global image tag and thus can be used to guide the training of the global image features. We did this by introducing another very small neural network that consists of two fully-connected layers: a hidden layer with 256 outputs and an output layer with as many outputs as the number of classes in the dataset. The input of this network is the second-to-last layer of the global features network, with 512 outputs. We trained this network using the cross-entropy loss, jointly with the MSE loss for the colorization network.&#xD;
&#xD;
*Implementation*&#xD;
&#xD;
The aim of my project was to build the network described in the paper using the new NeuralNetworks framework of Mathematica 11. In order to achieve this, some adjustments were needed. &#xD;
First of all, we decided to train and evaluate the network only on images of size 224x224 pixels, in order to use (and train) only one low-level features network, instead of two with shared weights and different outputs.&#xD;
The final network has two inputs: the first one is the colored 224x224 px image, encoded by a &amp;#034;NetEncoder&amp;#034; in the LAB colorspace; the second one is the class of the image. The two outputs (named &amp;#034;Loss&amp;#034; and &amp;#034;Output&amp;#034;) represent the values of the two loss functions used (one for the colorization, the other for the classification), which are then summed together by the NetTrain function. The three color channels of the input image are split by a split layer: the L channel feeds the &amp;#034;low-level features&amp;#034; network, while the a,b channels are scaled and concatenated in order to obtain a target set for the mean squared loss function comparable with the output of the colorization network. The fusion layer has been replaced by a broadcast layer, which joins the rank-3 tensor output of the mid-level network with the vector from the global features network. However, the way they are combined is not exactly the same as described in the paper. To evaluate the trained network on a grayscale image, it&amp;#039;s necessary to drop some branches of the network, such as the classification network and the layers that process the a,b channels of the colored input image in order to produce the target set for the colorization loss function. &#xD;
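&#xD;
A minimal sketch of such an encoder, assuming the ColorSpace option of the built-in &amp;#034;Image&amp;#034; NetEncoder (not the exact project code):&#xD;
&#xD;
    enc = NetEncoder[{&amp;#034;Image&amp;#034;, {224, 224}, ColorSpace -&amp;gt; &amp;#034;LAB&amp;#034;}]&#xD;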
&#xD;
![Network described in the paper][1]&#xD;
![Network implementation with Mathematica NeuralNetworks framework][2]&#xD;
&#xD;
**Results**&#xD;
&#xD;
![enter image description here][3]&#xD;
&#xD;
**Conclusions**&#xD;
&#xD;
The network described in the paper has been trained on the Places scene dataset [Zhou et al. 2014], which consists of 2,448,872 training images and 20,500 validation images, with 205 classes corresponding to the types of scene. The authors filtered the images with a small automated script, removing grayscale images and those that have little color variance. They trained using a batch size of 128 for 200,000 iterations, corresponding to roughly 11 epochs. This takes roughly 3 weeks on one core of an NVIDIA Tesla K80 GPU. &#xD;
We needed to introduce some new layers into the existing framework and to fix some bugs, so we were able to train our network for only 14 hours on a dataset of 350,000 images on one core of a GPU Titan machine. Furthermore, the images in our training set mainly represent specific items, so better results could probably be achieved by also introducing images of other types of subjects (landscapes, human-created images, indoor scenes, etc.). The results we obtained are shown in the section above and are quite good. We are confident that with deeper and longer training our network would give considerably better results.&#xD;
&#xD;
**Open Problems / Future Developments**&#xD;
&#xD;
Due to the separation between the global and local features, it is possible to use global features computed on one image in combination with local features computed on another image, to change the style of the resulting colorization. One of the more interesting things the model can do is adapt the colorization of one image to the style of another. This is straightforward to do with this model due to the decorrelation between the global features and the mid-level features. In order to colorize an image A using the style taken from an image B, it&amp;#039;s necessary to compute the mid-level local features of image A and the global features of image B. Then it&amp;#039;s possible to fuse these features and process them with the colorization network. Both the local and the global features are computed from grayscale images: it&amp;#039;s not necessary to use any color information at all.&#xD;
&#xD;
The main limitation of the method lies in the fact that it is data-driven and thus will only be able to colorize images that share common properties with those in the training set. In order to evaluate it on significantly different types of images, it would be necessary to train the model on all types of images (indoor, outdoor, human-created, ...). In order to obtain good style transfer results, it is important for both images to have some semantic level of similarity between them.&#xD;
&#xD;
**References**&#xD;
&#xD;
[1] Satoshi Iizuka, Edgar Simo - Serra, and Hiroshi Ishikawa.&amp;#034;Let there be Color!: Joint End-to-end Learning of Global and Local Image Priors for Automatic Image Colorization with Simultaneous Classification&amp;#034;.&#xD;
&#xD;
&#xD;
  [1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=netwPicture.png&amp;amp;userId=884315&#xD;
  [2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=MyNetwork.png&amp;amp;userId=884315&#xD;
  [3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=result2.png&amp;amp;userId=884315</description>
    <dc:creator>Sabrina Giollo</dc:creator>
    <dc:date>2016-07-07T19:53:19Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/900782">
    <title>Mathematica 11 Release</title>
    <link>https://community.wolfram.com/groups/-/m/t/900782</link>
    <description>Mathematica 11 is now out and ready to compute! Since the release of Version 10 two years ago, Mathematica has grown by leaps and bounds, with 500+ new functions. This version introduces both enhancements to computations across the board and completely new areas of functionality, continuing Mathematica&amp;#039;s growth as the state-of-the-art technical platformand we are very excited to share these developments with the world. Some of the most notable features include:&#xD;
&#xD;
**[3D Printing][1]** &#xD;
&#xD;
![3D-Printed Triceratops][2]&#xD;
&#xD;
 - Print your 3D models to local printers or online printing services with [Printout3D][3].&#xD;
 - Use [FindMeshDefects][4] and [RepairMesh][5] to fix your models before printing.&#xD;
 - Hollow your models out with [ShellRegion][6] to lower the printing cost.&#xD;
&#xD;
**[Computational Audio][7]**&#xD;
&#xD;
![Audio Image][8]&#xD;
&#xD;
 - Version 11 includes a brand-new [Audio][9] object to represent audio created from imported files or from arrays of data.&#xD;
 - Edit your audio with [AudioPad][10], [AudioTrim][11], [AudioSplit][12], [AudioResample][13], etc.&#xD;
 - Synthesize sounds using the new [AudioGenerator][14] function.&#xD;
 - Apply filters, take [AudioMeasurements][15] and visualize your audio with [AudioPlot][16].&#xD;
&#xD;
**[Neural Networks][17]**&#xD;
&#xD;
![Neural Network][18]&#xD;
&#xD;
 - Define network topologies with [NetGraph][19] or with a chain of layers with [NetChain][20].&#xD;
 - Train your neural nets with [NetTrain][21].&#xD;
 - Train networks on either CPUs or NVIDIA GPUs.&#xD;
 - Are your training sets of images too large to hold in memory? The neural network functions support out-of-core image datasets.&#xD;
&#xD;
**[Improved Machine Learning][22]**&#xD;
&#xD;
![Identifying Notable Celebrities][23]&#xD;
&#xD;
 - [ImageIdentify][24] now recognizes over 10,000 objects.&#xD;
 - Extract features from images, text and other data types using [FeatureExtract][25].&#xD;
 - [Classify][26] works even better with images, including an option to customize what [FeatureExtractor][27] it uses.&#xD;
 - Find formulas for time series data using [FindFormula][28].&#xD;
 - Find clusters in your data with enhanced options for [FindClusters][29], including new methods and setting a [CriterionFunction][30].&#xD;
&#xD;
And these are just a sample of the new features, which also include the [Wolfram Channel Framework][31], enhanced notebook processing, cloud-aware [WolframScript][32], improved [symbolic and numeric calculus][33] and so much more.&#xD;
&#xD;
To read more about what&amp;#039;s new in Mathematica 11, read Stephen Wolfram&amp;#039;s [release-day blog post][34] and check out the [New in 11 page][35].&#xD;
&#xD;
&#xD;
  [1]: http://www.wolfram.com/language/11/3d-printing/&#xD;
  [2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=triceratops.png&amp;amp;userId=900759&#xD;
  [3]: http://reference.wolfram.com/language/ref/Printout3D.html&#xD;
  [4]: http://reference.wolfram.com/language/ref/FindMeshDefects.html&#xD;
  [5]: http://reference.wolfram.com/language/ref/RepairMesh.html&#xD;
  [6]: http://reference.wolfram.com/language/ref/ShellRegion.html&#xD;
  [7]: http://www.wolfram.com/language/11/computational-audio/&#xD;
  [8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=audioplot.png&amp;amp;userId=900759&#xD;
  [9]: http://reference.wolfram.com/language/ref/Audio.html&#xD;
  [10]: http://reference.wolfram.com/language/ref/AudioPad.html&#xD;
  [11]: http://reference.wolfram.com/language/ref/AudioTrim.html&#xD;
  [12]: http://reference.wolfram.com/language/ref/AudioSplit.html&#xD;
  [13]: http://reference.wolfram.com/language/ref/AudioResample.html&#xD;
  [14]: http://reference.wolfram.com/language/ref/AudioGenerator.html&#xD;
  [15]: http://reference.wolfram.com/language/ref/AudioMeasurements.html&#xD;
  [16]: http://reference.wolfram.com/language/ref/AudioPlot.html&#xD;
  [17]: http://www.wolfram.com/language/11/neural-networks/&#xD;
  [18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=neuralnetworkdigits.png&amp;amp;userId=900759&#xD;
  [19]: http://reference.wolfram.com/language/ref/NetGraph.html&#xD;
  [20]: http://reference.wolfram.com/language/ref/NetChain.html&#xD;
  [21]: http://reference.wolfram.com/language/ref/NetTrain.html&#xD;
  [22]: http://www.wolfram.com/language/11/improved-machine-learning/&#xD;
  [23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=CumberbatchMachineLearning.png&amp;amp;userId=900759&#xD;
  [24]: http://reference.wolfram.com/language/ref/ImageIdentify.html&#xD;
  [25]: http://reference.wolfram.com/language/ref/FeatureExtract.html&#xD;
  [26]: http://reference.wolfram.com/language/ref/Classify.html&#xD;
  [27]: http://reference.wolfram.com/language/ref/FeatureExtractor.html&#xD;
  [28]: http://reference.wolfram.com/language/ref/FindFormula.html&#xD;
  [29]: http://reference.wolfram.com/language/ref/FindClusters.html&#xD;
  [30]: http://reference.wolfram.com/language/ref/CriterionFunction.html&#xD;
  [31]: http://reference.wolfram.com/language/guide/Channel-BasedCommunication.html&#xD;
  [32]: http://reference.wolfram.com/language/ref/program/wolframscript.html&#xD;
  [33]: http://www.wolfram.com/language/11/symbolic-and-numeric-calculus/&#xD;
  [34]: http://blog.wolfram.com/2016/08/08/today-we-launch-version-11/&#xD;
  [35]: http://www.wolfram.com/mathematica/new-in-11/</description>
    <dc:creator>Zachary Littrell</dc:creator>
    <dc:date>2016-08-08T16:43:43Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2152918">
    <title>Verify documentation example before the release of a new version?</title>
    <link>https://community.wolfram.com/groups/-/m/t/2152918</link>
    <description>My question is based on the observation that when a new Mathematica version is released (this time being 12.2) some of the functions that don&amp;#039;t even receive an update from the previous version often get broken. &#xD;
&#xD;
How is it so that the examples in the documentation are not checked for consistency of operation/results before a new Mathematica version is shipped? I believe that even with a team of 5-6 people in the quality control department (assuming there is any) it is possible to check the examples for all listed functions in Mathematica in about a week.&#xD;
&#xD;
Why are such a simple task and such a simple expectation so hard to meet? The end result is that a buggy release will break many functions that are mission-critical for end users. &#xD;
&#xD;
It is a pity that I cannot keep just one version that I can fully trust, but instead have to install four versions of Mathematica (11.3, 12.0, 12.1 and 12.2) simultaneously (taking up over 50 GB). The reason: some things work in one version and others in another. Please do enlighten me as to why these user expectations are so hard to meet. We do not want more and more functions; we just want the functions to work reliably and correctly. There seem to be issues with quality control here that need to be sorted out before releasing future versions. &#xD;
&#xD;
&#xD;
Take this simple example:&#xD;
&#xD;
ReplacePixelValue has not received any update since 2014 (version 10). Then how come the second example in its documentation (in the &amp;#034;Applications&amp;#034; subsection) ceased to function properly? Furthermore, I am attaching a notebook with another example for this particular function to demonstrate that something that works perfectly in 12.0 and 12.1 fails in the 12.2 release.&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][1]&#xD;
&#xD;
&#xD;
  [1]: https://www.wolframcloud.com/obj/1cb0dbd1-e7cb-48cf-abd1-1324cf87e89e</description>
    <dc:creator>Ali Hashmi</dc:creator>
    <dc:date>2021-01-02T15:53:14Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2222977">
    <title>Deep fields: pixel sorting Hubble images of deep space</title>
    <link>https://community.wolfram.com/groups/-/m/t/2222977</link>
    <description>&amp;amp;[Wolfram Notebook][1]&#xD;
&#xD;
&#xD;
  [Original]: https://www.wolframcloud.com/obj/8a8fbd01-b0d8-4798-beec-166e0898b2b1&#xD;
&#xD;
&#xD;
  [1]: https://www.wolframcloud.com/obj/5d2efdc4-f66c-486b-9f5a-298d70381d5d</description>
    <dc:creator>Jack Madden</dc:creator>
    <dc:date>2021-03-18T17:28:29Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1787163">
    <title>Converting OpenPose for Wolfram Language</title>
    <link>https://community.wolfram.com/groups/-/m/t/1787163</link>
    <description>Tuseeta-san&amp;#039;s [post][1] is how to convert a trained model of TensorFlow to Mathematica. Converting trained models from a language other than Mathematica to Mathematica is very beneficial to Mathematica users. So I&amp;#039;ll show how to convert a trained model of PyTorch to Mathematica along with Tuseeta-san&amp;#039;s post. &#xD;
![enter image description here][2]&#xD;
## Step 1: Figure out the architecture ##&#xD;
The [model][3] to be converted is a pose estimation model that detects the human skeleton (body parts and their connections) in an image. It&amp;#039;s called OpenPose. The model consists of a Feature map that extracts image features and six Stage maps.&#xD;
The Feature map extracts image features from an input image (size: 368*368). Each Stage map has two branches: the first branch predicts confidences and the second predicts PAFs (Part Affinity Fields), along with the image features. The two branches are concatenated for the next stage.&#xD;
## Step 2: Coding it in Mathematica ##&#xD;
**Feature map**&#xD;
&#xD;
The Feature map consists of the first 23 layers of VGG-19, followed by 2 sets of Convolution and Ramp.&#xD;
&#xD;
Extract the first 23 layers of VGG-19.&#xD;
&#xD;
    vgg19 = NetModel[&amp;#034;VGG-19 Trained on ImageNet Competition Data&amp;#034;];&#xD;
    vgg19sub = Take[vgg19, {1, 23}];&#xD;
&#xD;
Change Encoder.&#xD;
&#xD;
    enc = NetExtract[vgg19, &amp;#034;Input&amp;#034;];&#xD;
    enc = NetReplacePart[&#xD;
       enc, {&amp;#034;ImageSize&amp;#034; -&amp;gt; {368, 368}, &#xD;
        &amp;#034;VarianceImage&amp;#034; -&amp;gt; {0.229, 0.224, 0.225}, &#xD;
        &amp;#034;MeanImage&amp;#034; -&amp;gt; {0.485, 0.456, 0.406}}];&#xD;
    featurefirst = NetReplacePart[vgg19sub, &amp;#034;Input&amp;#034; -&amp;gt; enc];&#xD;
&#xD;
Add Convolution and Ramp.&#xD;
&#xD;
    feature = &#xD;
      NetAppend[&#xD;
       featurefirst, {&amp;#034;convadd1&amp;#034; -&amp;gt; &#xD;
         ConvolutionLayer[256, 3, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 1], &#xD;
        &amp;#034;reluadd1&amp;#034; -&amp;gt; Ramp,&#xD;
        &amp;#034;convadd2&amp;#034; -&amp;gt; &#xD;
         ConvolutionLayer[128, 3, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 1], &#xD;
        &amp;#034;reluadd2&amp;#034; -&amp;gt; Ramp}];&#xD;
&#xD;
**Stage map**&#xD;
&#xD;
Each Stage map consists only of Convolutions and Ramps.&#xD;
&#xD;
Stage 1: The difference between the two branches is that the last output channel number is 38 or 19.&#xD;
&#xD;
    blk11 = NetChain[{&#xD;
        ConvolutionLayer[128, 3, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 1], Ramp,&#xD;
        ConvolutionLayer[128, 3, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 1], Ramp,&#xD;
        ConvolutionLayer[128, 3, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 1], Ramp,&#xD;
        ConvolutionLayer[512, 1, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 0], Ramp,&#xD;
        ConvolutionLayer[38, 1, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 0]}];&#xD;
    blk12 = NetChain[{&#xD;
        ConvolutionLayer[128, 3, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 1], Ramp,&#xD;
        ConvolutionLayer[128, 3, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 1], Ramp,&#xD;
        ConvolutionLayer[128, 3, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 1], Ramp,&#xD;
        ConvolutionLayer[512, 1, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 0], Ramp,&#xD;
        ConvolutionLayer[19, 1, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 0]}];&#xD;
&#xD;
Stages 2-6: The difference between Stage 1 and Stages 2-6 is in the kinds and numbers of layers.&#xD;
&#xD;
    blkx1 = NetChain[{&#xD;
        ConvolutionLayer[128, 7, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 3], Ramp,&#xD;
        ConvolutionLayer[128, 7, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 3], Ramp,&#xD;
        ConvolutionLayer[128, 7, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 3], Ramp,&#xD;
        ConvolutionLayer[128, 7, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 3], Ramp,&#xD;
        ConvolutionLayer[128, 7, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 3], Ramp,&#xD;
        ConvolutionLayer[128, 1, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 0], Ramp,&#xD;
        ConvolutionLayer[38, 1, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 0]}];&#xD;
    blkx2 = NetChain[{&#xD;
        ConvolutionLayer[128, 7, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 3], Ramp,&#xD;
        ConvolutionLayer[128, 7, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 3], Ramp,&#xD;
        ConvolutionLayer[128, 7, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 3], Ramp,&#xD;
        ConvolutionLayer[128, 7, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 3], Ramp,&#xD;
        ConvolutionLayer[128, 7, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 3], Ramp,&#xD;
        ConvolutionLayer[128, 1, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 0], Ramp,&#xD;
        ConvolutionLayer[19, 1, &amp;#034;Stride&amp;#034; -&amp;gt; 1, &amp;#034;PaddingSize&amp;#034; -&amp;gt; 0]}];&#xD;
&#xD;
Finally, create OpenPose.&#xD;
&#xD;
    openpose = NetGraph[{&#xD;
       &amp;#034;feature&amp;#034; -&amp;gt; feature,(*feature*)&#xD;
       &amp;#034;blk11&amp;#034; -&amp;gt; blk11, &amp;#034;blk12&amp;#034; -&amp;gt; blk12,(*stage 1*)&#xD;
       &amp;#034;blk21&amp;#034; -&amp;gt; blkx1, &amp;#034;blk22&amp;#034; -&amp;gt; blkx2,(*stage 2*)&#xD;
       &amp;#034;cat12&amp;#034; -&amp;gt; CatenateLayer[],&#xD;
       &amp;#034;blk31&amp;#034; -&amp;gt; blkx1, &amp;#034;blk32&amp;#034; -&amp;gt; blkx2,(*stage 3*)&#xD;
       &amp;#034;cat23&amp;#034; -&amp;gt; CatenateLayer[],&#xD;
       &amp;#034;blk41&amp;#034; -&amp;gt; blkx1, &amp;#034;blk42&amp;#034; -&amp;gt; blkx2,(*stage 4*)&#xD;
       &amp;#034;cat34&amp;#034; -&amp;gt; CatenateLayer[],&#xD;
       &amp;#034;blk51&amp;#034; -&amp;gt; blkx1, &amp;#034;blk52&amp;#034; -&amp;gt; blkx2,(*stage 5*)&#xD;
       &amp;#034;cat45&amp;#034; -&amp;gt; CatenateLayer[],&#xD;
       &amp;#034;blk61&amp;#034; -&amp;gt; blkx1, &amp;#034;blk62&amp;#034; -&amp;gt; blkx2,(*stage 6*)&#xD;
       &amp;#034;cat56&amp;#034; -&amp;gt; CatenateLayer[]&#xD;
       },&#xD;
      {&amp;#034;feature&amp;#034; -&amp;gt; &amp;#034;blk11&amp;#034;, &amp;#034;feature&amp;#034; -&amp;gt; &amp;#034;blk12&amp;#034;,(*stage 1*)&#xD;
       {&amp;#034;blk11&amp;#034;, &amp;#034;blk12&amp;#034;, &amp;#034;feature&amp;#034;} -&amp;gt; &amp;#034;cat12&amp;#034;,(*stage 2*)&#xD;
       &amp;#034;cat12&amp;#034; -&amp;gt; &amp;#034;blk21&amp;#034;, &amp;#034;cat12&amp;#034; -&amp;gt; &amp;#034;blk22&amp;#034;,&#xD;
       {&amp;#034;blk21&amp;#034;, &amp;#034;blk22&amp;#034;, &amp;#034;feature&amp;#034;} -&amp;gt; &amp;#034;cat23&amp;#034;,(*stage 3*)&#xD;
       &amp;#034;cat23&amp;#034; -&amp;gt; &amp;#034;blk31&amp;#034;, &amp;#034;cat23&amp;#034; -&amp;gt; &amp;#034;blk32&amp;#034;,&#xD;
       {&amp;#034;blk31&amp;#034;, &amp;#034;blk32&amp;#034;, &amp;#034;feature&amp;#034;} -&amp;gt; &amp;#034;cat34&amp;#034;,(*stage 4*)&#xD;
       &amp;#034;cat34&amp;#034; -&amp;gt; &amp;#034;blk41&amp;#034;, &amp;#034;cat34&amp;#034; -&amp;gt; &amp;#034;blk42&amp;#034;,&#xD;
       {&amp;#034;blk41&amp;#034;, &amp;#034;blk42&amp;#034;, &amp;#034;feature&amp;#034;} -&amp;gt; &amp;#034;cat45&amp;#034;,(*stage 5*)&#xD;
       &amp;#034;cat45&amp;#034; -&amp;gt; &amp;#034;blk51&amp;#034;, &amp;#034;cat45&amp;#034; -&amp;gt; &amp;#034;blk52&amp;#034;,&#xD;
       {&amp;#034;blk51&amp;#034;, &amp;#034;blk52&amp;#034;, &amp;#034;feature&amp;#034;} -&amp;gt; &amp;#034;cat56&amp;#034;,(*stage 6*)&#xD;
       &amp;#034;cat56&amp;#034; -&amp;gt; &amp;#034;blk61&amp;#034;, &amp;#034;cat56&amp;#034; -&amp;gt; &amp;#034;blk62&amp;#034;&#xD;
       }]&#xD;
  &#xD;
![enter image description here][4]&#xD;
&#xD;
## Step 3: Importing the Weights and the Biases ##&#xD;
Download &amp;#034;[pose_model_scratch.pth][5]&amp;#034; as a trained model of PyTorch.&#xD;
&#xD;
Import the parameters, the weights and the biases. I referred to &amp;#034;[How to import python pickle *.pkl?][6]&amp;#034;.&#xD;
&#xD;
    session = StartExternalSession[&amp;#034;Python-NumPy&amp;#034;];&#xD;
    parameters = ExternalEvaluate[session, &amp;#034;import torch&#xD;
    import numpy as np&#xD;
    import pickle as pkl&#xD;
       &#xD;
    net_weights = torch.load(&#xD;
           &amp;#039;pose_model_scratch.pth&amp;#039;, map_location={&amp;#039;cuda:0&amp;#039;: &amp;#039;cpu&amp;#039;})&#xD;
    keys = list(net_weights.keys())&#xD;
       &#xD;
    parameters = {}&#xD;
    for i in range(len(keys)):&#xD;
           t = net_weights[keys[i]]       &#xD;
           x = t.numpy()&#xD;
           parameters[keys[i]] = x.flatten()&#xD;
    parameters&amp;#034;];&#xD;
    DeleteObject[session];&#xD;
    &#xD;
    keys = Keys[parameters];&#xD;
    parameters = Values[parameters];&#xD;
&#xD;
## Step 4: Parsing the Weights and the Biases ##&#xD;
&#xD;
The parameters are 184 one-dimensional lists. They consist of the weights and the biases of the 92 convolution layers in OpenPose.&#xD;
&#xD;
Get a list of layer names for OpenPose with their depth level. Then get a list of the 92 positions where a convolution layer is used.&#xD;
&#xD;
    layernames = &#xD;
      GroupBy[Keys@NetInformation[openpose, &amp;#034;Layers&amp;#034;], First] // Values;&#xD;
    convlayernames = (Position[&#xD;
         NetInformation[openpose, &amp;#034;Layers&amp;#034;], _ConvolutionLayer] // &#xD;
        Flatten)[[All, 1]]&#xD;
&#xD;
![enter image description here][7]&#xD;
&#xD;
As you can see in keys, the order of the convolution layers of OpenPose is different from the order of the convolution layers of &amp;#034;pose_model_scratch.pth&amp;#034;:&#xD;
&#xD;
     keys&#xD;
&#xD;
![enter image description here][8]&#xD;
&#xD;
So, manually sort the convolution layers of OpenPose into the order of &amp;#034;pose_model_scratch.pth&amp;#034;:&#xD;
&#xD;
    convlayernamesGH = {{&amp;#034;feature&amp;#034;, &amp;#034;conv1_1&amp;#034;}, {&amp;#034;feature&amp;#034;, &amp;#034;conv1_2&amp;#034;},&#xD;
       {&amp;#034;feature&amp;#034;, &amp;#034;conv2_1&amp;#034;}, {&amp;#034;feature&amp;#034;, &amp;#034;conv2_2&amp;#034;},&#xD;
       {&amp;#034;feature&amp;#034;, &amp;#034;conv3_1&amp;#034;}, {&amp;#034;feature&amp;#034;, &amp;#034;conv3_2&amp;#034;}, {&amp;#034;feature&amp;#034;, &amp;#034;conv3_3&amp;#034;}, {&amp;#034;feature&amp;#034;, &amp;#034;conv3_4&amp;#034;},&#xD;
       {&amp;#034;feature&amp;#034;, &amp;#034;conv4_1&amp;#034;}, {&amp;#034;feature&amp;#034;, &amp;#034;conv4_2&amp;#034;},&#xD;
       {&amp;#034;feature&amp;#034;, &amp;#034;convadd1&amp;#034;}, {&amp;#034;feature&amp;#034;, &amp;#034;convadd2&amp;#034;},&#xD;
       {&amp;#034;blk11&amp;#034;, 1}, {&amp;#034;blk11&amp;#034;, 3}, {&amp;#034;blk11&amp;#034;, 5}, {&amp;#034;blk11&amp;#034;, 7}, {&amp;#034;blk11&amp;#034;, 9},&#xD;
       {&amp;#034;blk21&amp;#034;, 1}, {&amp;#034;blk21&amp;#034;, 3}, {&amp;#034;blk21&amp;#034;, 5}, {&amp;#034;blk21&amp;#034;, 7}, {&amp;#034;blk21&amp;#034;, 9}, {&amp;#034;blk21&amp;#034;, 11}, {&amp;#034;blk21&amp;#034;, 13},&#xD;
       {&amp;#034;blk31&amp;#034;, 1}, {&amp;#034;blk31&amp;#034;, 3}, {&amp;#034;blk31&amp;#034;, 5}, {&amp;#034;blk31&amp;#034;, 7}, {&amp;#034;blk31&amp;#034;, 9}, {&amp;#034;blk31&amp;#034;, 11}, {&amp;#034;blk31&amp;#034;, 13},&#xD;
       {&amp;#034;blk41&amp;#034;, 1}, {&amp;#034;blk41&amp;#034;, 3}, {&amp;#034;blk41&amp;#034;, 5}, {&amp;#034;blk41&amp;#034;, 7}, {&amp;#034;blk41&amp;#034;, 9}, {&amp;#034;blk41&amp;#034;, 11}, {&amp;#034;blk41&amp;#034;, 13},&#xD;
       {&amp;#034;blk51&amp;#034;, 1}, {&amp;#034;blk51&amp;#034;, 3}, {&amp;#034;blk51&amp;#034;, 5}, {&amp;#034;blk51&amp;#034;, 7}, {&amp;#034;blk51&amp;#034;, 9}, {&amp;#034;blk51&amp;#034;, 11}, {&amp;#034;blk51&amp;#034;, 13},&#xD;
       {&amp;#034;blk61&amp;#034;, 1}, {&amp;#034;blk61&amp;#034;, 3}, {&amp;#034;blk61&amp;#034;, 5}, {&amp;#034;blk61&amp;#034;, 7}, {&amp;#034;blk61&amp;#034;, 9}, {&amp;#034;blk61&amp;#034;, 11}, {&amp;#034;blk61&amp;#034;, 13},&#xD;
       {&amp;#034;blk12&amp;#034;, 1}, {&amp;#034;blk12&amp;#034;, 3}, {&amp;#034;blk12&amp;#034;, 5}, {&amp;#034;blk12&amp;#034;, 7}, {&amp;#034;blk12&amp;#034;, 9},&#xD;
       {&amp;#034;blk22&amp;#034;, 1}, {&amp;#034;blk22&amp;#034;, 3}, {&amp;#034;blk22&amp;#034;, 5}, {&amp;#034;blk22&amp;#034;, 7}, {&amp;#034;blk22&amp;#034;, 9}, {&amp;#034;blk22&amp;#034;, 11}, {&amp;#034;blk22&amp;#034;, 13},&#xD;
       {&amp;#034;blk32&amp;#034;, 1}, {&amp;#034;blk32&amp;#034;, 3}, {&amp;#034;blk32&amp;#034;, 5}, {&amp;#034;blk32&amp;#034;, 7}, {&amp;#034;blk32&amp;#034;, 9}, {&amp;#034;blk32&amp;#034;, 11}, {&amp;#034;blk32&amp;#034;, 13},&#xD;
       {&amp;#034;blk42&amp;#034;, 1}, {&amp;#034;blk42&amp;#034;, 3}, {&amp;#034;blk42&amp;#034;, 5}, {&amp;#034;blk42&amp;#034;, 7}, {&amp;#034;blk42&amp;#034;, 9}, {&amp;#034;blk42&amp;#034;, 11}, {&amp;#034;blk42&amp;#034;, 13},&#xD;
       {&amp;#034;blk52&amp;#034;, 1}, {&amp;#034;blk52&amp;#034;, 3}, {&amp;#034;blk52&amp;#034;, 5}, {&amp;#034;blk52&amp;#034;, 7}, {&amp;#034;blk52&amp;#034;, 9}, {&amp;#034;blk52&amp;#034;, 11}, {&amp;#034;blk52&amp;#034;, 13},&#xD;
       {&amp;#034;blk62&amp;#034;, 1}, {&amp;#034;blk62&amp;#034;, 3}, {&amp;#034;blk62&amp;#034;, 5}, {&amp;#034;blk62&amp;#034;, 7}, {&amp;#034;blk62&amp;#034;, 9}, {&amp;#034;blk62&amp;#034;, 11}, {&amp;#034;blk62&amp;#034;, 13}&#xD;
       };&#xD;
&#xD;
Get the position of each element of convlayernamesGH in OpenPose.&#xD;
&#xD;
    convlayerpos = &#xD;
     Flatten[Position[layernames, #] &amp;amp; /@ convlayernamesGH, 1]&#xD;
&#xD;
![enter image description here][9]&#xD;
&#xD;
Reshape each one-dimensional list of parameters to the dimension of the corresponding weight or bias.&#xD;
&#xD;
    getDimB[layer_] := Dimensions@NetExtract[layer, &amp;#034;Biases&amp;#034;]&#xD;
    getDimW[layer_] := Dimensions@NetExtract[layer, &amp;#034;Weights&amp;#034;]&#xD;
    convs = NetExtract[NetInitialize[openpose], #] &amp;amp; /@ convlayerpos;&#xD;
    dimW = getDimW /@ convs;&#xD;
    dimB = getDimB /@ convs;&#xD;
    dim = Flatten[Transpose[{dimW, dimB}], 1];&#xD;
    parametersReshape = MapThread[ArrayReshape, {parameters, dim}];&#xD;
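&#xD;
As a quick sanity check (a small sketch, not part of the original workflow), the 92 convolution layers should yield 184 reshaped weight/bias arrays, matching the number of imported parameters:&#xD;
&#xD;
    Length[parametersReshape] == Length[dim] == 184&#xD;
    (* True *)&#xD;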
&#xD;
## Step 5: Linking the Weights and the Biases ##&#xD;
Replace the initial values of the weights and biases in OpenPose with the learned parameters to finally get the trained OpenPose.&#xD;
&#xD;
    replacenames =&#xD;
      Flatten[&#xD;
       Transpose[{Flatten@{#, &amp;#034;Weights&amp;#034;} &amp;amp; /@ convlayernamesGH, &#xD;
         Flatten@{#, &amp;#034;Biases&amp;#034;} &amp;amp; /@ convlayernamesGH}], 1];&#xD;
    rule = Thread[replacenames -&amp;gt; parametersReshape];&#xD;
    trainedOpenPose = NetReplacePart[openpose, rule]&#xD;
&#xD;
![enter image description here][10]&#xD;
&#xD;
## Step 6: Making the tests ##&#xD;
For simplicity, we estimate the pose for an image of a single person. Output2 of OpenPose gives the confidence for each of the 19 body parts on a 46*46 grid over the image.&#xD;
&#xD;
1:Nose, 2:Neck, 3:RShoulder, 4:RElbow, 5:RWrist, 6:LShoulder, 7:LElbow, 8:LWrist,&#xD;
9:RHip, 10:RKnee, 11:RAnkle, 12:LHip, 13:LKnee, 14:LAnkle, 15:REye, 16:LEye, 17:REar, 18:LEar, 19:Bkg&#xD;
&#xD;
Define a function to get the position of the maximum confidence for each body part.&#xD;
&#xD;
    maxpts[img_, confidences_, idex_] := Module[{pos, pts, h},&#xD;
      pos = Reverse@First@Position[h = confidences[[idex]], Max@h];&#xD;
      pts = (pos/46)*ImageDimensions@img; &#xD;
      pts = {pts[[1]], (ImageDimensions@img)[[2]] - pts[[2]]}&#xD;
      ]&#xD;
&#xD;
Connect the detected body parts and show the result together on the original image.&#xD;
&#xD;
    showpose[img_] := Module[{bodylist, size, out, confidences, pts, pose},&#xD;
      bodylist = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14};&#xD;
      size = {368, 368};&#xD;
      out = trainedOpenPose[img];&#xD;
      confidences = out[[2]];&#xD;
      pts = maxpts[img, confidences, #] &amp;amp; /@ bodylist;&#xD;
      pose = Graphics[{&#xD;
         Yellow, Thickness[.0125], Line[pts[[#]] &amp;amp; /@ {1, 2}],&#xD;
         Green, Line[pts[[#]] &amp;amp; /@ {2, 3, 4, 5}],&#xD;
         Cyan, Line[pts[[#]] &amp;amp; /@ {2, 6, 7, 8}],&#xD;
         Orange, Line[pts[[#]] &amp;amp; /@ {2, 9, 10, 11}],&#xD;
         Magenta, Line[pts[[#]] &amp;amp; /@ {2, 12, 13, 14}],&#xD;
         PointSize[Large], Red, Point[pts],&#xD;
         White, Point[{{0, 0}, ImageDimensions@img}]&#xD;
         }, ImagePadding -&amp;gt; All];&#xD;
      Show[img, pose]&#xD;
      ]&#xD;
&#xD;
Let&amp;#039;s try.&#xD;
&#xD;
    img = Import[&amp;#034;ichiro.jpg&amp;#034;];&#xD;
    showpose[img]&#xD;
&#xD;
![enter image description here][11]&#xD;
&#xD;
## Future work ##&#xD;
- Estimate the pose in images containing multiple people by using the PAFs from Output1 of OpenPose.&#xD;
&#xD;
- Convert a more accurate pose estimation model.&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com/groups/-/m/t/1785523&#xD;
  [2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=00.jpg&amp;amp;userId=1013863&#xD;
  [3]: https://arxiv.org/pdf/1611.08050.pdf&#xD;
  [4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=752004.jpg&amp;amp;userId=1013863&#xD;
  [5]: https://www.dropbox.com/s/ae071mfm2qoyc8v/pose_model.pth?dl=0&#xD;
  [6]: https://mathematica.stackexchange.com/questions/181273/how-to-import-%5C%20python-pickle-pkl?rq=1&#xD;
  [7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=691905.jpg&amp;amp;userId=1013863&#xD;
  [8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=387908.jpg&amp;amp;userId=1013863&#xD;
  [9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=109406.jpg&amp;amp;userId=1013863&#xD;
  [10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=179107.jpg&amp;amp;userId=1013863&#xD;
  [11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=161401.jpg&amp;amp;userId=1013863</description>
    <dc:creator>Kotaro Okazaki</dc:creator>
    <dc:date>2019-09-10T21:36:49Z</dc:date>
  </item>
</rdf:RDF>

