<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://community.wolfram.com">
    <title>Community RSS Feed</title>
    <link>https://community.wolfram.com</link>
    <description>RSS Feed for Wolfram Community showing any discussions tagged with Architecture sorted by most viewed.</description>
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/121507" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1112012" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/434905" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1632078" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/186965" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/430092" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1727982" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/386677" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/547218" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/486430" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/1037946" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2316573" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/387917" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/3442449" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/796356" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/529314" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2474537" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/329903" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/2579441" />
        <rdf:li rdf:resource="https://community.wolfram.com/groups/-/m/t/392411" />
      </rdf:Seq>
    </items>
  </channel>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/121507">
    <title>Optimal lighting configuration of 5 lamps in a square room</title>
    <link>https://community.wolfram.com/groups/-/m/t/121507</link>
    <description>[b]With 5 point-source lights in a square room, what is the optimal configuration for even lighting?[/b]&#xD;
&#xD;
To make this question concrete, say that each wall has length 1, the room has no height (i.e., it is two-dimensional), and we have five identical point-sized lights; we want to know the optimal placement that maximizes even lighting.  That could mean&#xD;
[mcode]f = 1/((x - x1)^2 + (y - y1)^2) + 1/((x - x2)^2 + (y - y2)^2) + 1/((x - x3)^2 + (y - y3)^2) + 1/((x - x4)^2 + (y - y4)^2) + 1/((x - x5)^2 + (y - y5)^2);[/mcode]&#xD;
a) maximizing the value of the minimal illumination, [mcode]Minimize[{f, 0 &amp;lt;= x &amp;lt;= 1 &amp;amp;&amp;amp; 0 &amp;lt;= y &amp;lt;= 1}, {x, y}][/mcode], or an integral measure like&#xD;
&#xD;
b) maximizing the total illumination where the brightest areas are considered as being some default value, e.g., the value of [mcode]Integrate[Min[f, f0], {x,0,1},{y,0,1}][/mcode]&#xD;
For an example configuration of light sources with&#xD;
[mcode]f = With[{n = 5}, Sum[1/((x - (.5 + .45 Cos[2 Pi i/n]))^2 + (y - (.5 + .45 Sin[2 Pi i/n]))^2), {i, 0, n - 1}]]&#xD;
[/mcode]and then here is the minimum illumination&#xD;
[mcode]NMinimize[{f, 0 &amp;lt;= x &amp;lt;= 1 &amp;amp;&amp;amp; 0 &amp;lt;= y &amp;lt;= 1}, {x, y}] (*{14.349, {x -&amp;gt; 1., y -&amp;gt; 1.}}*)&#xD;
[/mcode]and here that point is shown on a contour plot&#xD;
&#xD;
[img=width: 360px; height: 359px;]/c/portal/getImageAttachment?filename=lights5.jpg&amp;amp;userId=23275[/img]&#xD;
&#xD;
For that configuration here is the integral (which I had to approximate with a Sum)&#xD;
[mcode]Sum[Min[1.2 (14.349), f], {x, 0.0001, 1, .01}, {y, 0.0001, 1, .01}]/10^4 (*17.2146*)&#xD;
[/mcode]I&amp;#039;d be interested in optimization approaches, but also aesthetic approaches, e.g., symmetries, angles, shadows, or patterns made by contour lines.&#xD;
&#xD;
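As a hypothetical sketch of approach (a), reusing the symmetric ring parametrization above, one could treat the ring radius r as a single parameter and maximize the minimum illumination over it:&#xD;
[mcode]ringMin[r_?NumericQ] := First@NMinimize[{With[{n = 5}, Sum[1/((x - (.5 + r Cos[2 Pi i/n]))^2 + (y - (.5 + r Sin[2 Pi i/n]))^2), {i, 0, n - 1}]], 0 &amp;lt;= x &amp;lt;= 1 &amp;amp;&amp;amp; 0 &amp;lt;= y &amp;lt;= 1}, {x, y}];&#xD;
NMaximize[{ringMin[r], 0.1 &amp;lt;= r &amp;lt;= 0.5}, r][/mcode]&#xD;
&#xD;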
To generalize, consider not only other numbers of lights, but also try tacking on an albedo of 50% so the walls reflect half of the light they receive.</description>
    <dc:creator>Todd Rowland</dc:creator>
    <dc:date>2013-09-10T16:45:23Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1112012">
    <title>BVH Accelerated 3D Shadow Mapping</title>
    <link>https://community.wolfram.com/groups/-/m/t/1112012</link>
    <description>[Shadow mapping][1] is a process of applying shadows to a computer graphic.  `Graphics3D` allows the user to specify lighting conditions for the surfaces of 3D graphical primitives; however, visualising the shadow an object projects onto a surface requires the process of shadow mapping.  Each pixel of the projection surface must check whether it is visible from the light source; if this check returns false, then the pixel forms a shadow.  This becomes a problem of geometric intersections, i.e., in this case, the intersection between a line and a triangle.  For a 3D model with hundreds or more polygons, repeated intersection tests across the entire model for each pixel is an extremely costly (and inefficient) task, so it in turn becomes a problem of search optimisation.    &#xD;
&#xD;
![enter image description here][2]&#xD;
&#xD;
&#xD;
Obtaining Data&#xD;
--------------&#xD;
&#xD;
This project uses 3D models from [SketchUp&amp;#039;s online repository][3] which are converted to COLLADA files using SketchUp.  The functions used are held in a package, accessible via [github][4] along with all the data referenced throughout.&#xD;
&#xD;
    (* load package and 3D model *)&#xD;
    &amp;lt;&amp;lt; &amp;#034;https://raw.githubusercontent.com/b-goodman/\&#xD;
    GeometricIntersections3D/master/GeometricIntersections3D.wl&amp;#034;;&#xD;
    &#xD;
    modelPath = &#xD;
      &amp;#034;https://raw.githubusercontent.com/b-goodman/\&#xD;
    GeometricIntersections3D/master/Demo/House/houseModel4.dae&amp;#034;;&#xD;
    &#xD;
    (* vertices of model&amp;#039;s polygons *)&#xD;
    polyPoints = Delete[0]@Import[modelPath, &amp;#034;PolygonObjects&amp;#034;];&#xD;
    &#xD;
    (* import model as region *)&#xD;
    modelRegion = Import[modelPath, &amp;#034;MeshRegion&amp;#034;];&#xD;
    &#xD;
    (* use region to generate minimal bounding volume *)&#xD;
    cuboidPartition = Delete[0]@BoundingRegion[modelRegion, &amp;#034;MinCuboid&amp;#034;];&#xD;
    &#xD;
    (* verify *)&#xD;
    Graphics3D[{&#xD;
      Polygon[polyPoints],&#xD;
      {Hue[0, 0, 0, 0], EdgeForm[Black], Cuboid[cuboidPartition]}&#xD;
      }, Boxed -&amp;gt; False]&#xD;
&#xD;
![imported model data][5]&#xD;
&#xD;
Generate a Bounding Volume Hierarchy (BVH)&#xD;
------------------------------------------&#xD;
&#xD;
Shadow mapping (and, more generally, collision testing) may be optimised via space partitioning, achieved by dividing the 3D model&amp;#039;s space into a hierarchy of bounding volumes (BVs) stored as a graph, thus forming a [bounding volume hierarchy][6].  The simplest case uses the result of an intersection between a ray and a single BV for the entire model to discard all rays which don&amp;#039;t come close to any of the model&amp;#039;s polygons.  Of course, rays which do pass the first test must still be tested against the entire model, so the initial BV is subdivided, with each sub-BV assigned to a particular part of the model, hence reducing the total number of polygons to be tested against.  The initial BV forms the root of the tree, and its subdivisions (leaf boxes) are joined to it via edges.  We can add more levels to the tree by repeating the subdivision for each of the leaf boxes, ultimately refining the search for potentially intersecting polygons.   &#xD;
&#xD;
    (* Begin tree.  Initial AABB is root.  Subdivide root AABB and link returns to root *) &#xD;
    newBVH[cuboidPartitions_,polyPoints_]:=Block[{&#xD;
    newLevel,edges&#xD;
    },&#xD;
    newLevel=Quiet[cullIntersectingPartitions[cuboidSubdivide[cuboidPartitions],polyPoints]];&#xD;
    edges=cuboidPartitions\[DirectedEdge]#&amp;amp;/@newLevel;&#xD;
    Return[&amp;lt;|&#xD;
    &amp;#034;Tree&amp;#034;-&amp;gt;TreeGraph[edges],&#xD;
    &amp;#034;PolygonObjects&amp;#034;-&amp;gt;polyPoints&#xD;
    |&amp;gt;];&#xD;
    ];&#xD;
&#xD;
    bvh = newBVH[{cuboidPartition}, polyPoints];&#xD;
    &#xD;
The BVH is a tree graph with the model&amp;#039;s polygon vertices encapsulated within an association&#xD;
&#xD;
    Keys[bvh]&#xD;
    &#xD;
    {&amp;#034;Tree&amp;#034;, &amp;#034;PolygonObjects&amp;#034;}&#xD;
    &#xD;
The BVH consists of a root box derived from the model&amp;#039;s minimal bounding volume and its 8 subdivisions.&#xD;
&#xD;
    bvh[&amp;#034;Tree&amp;#034;]&#xD;
&#xD;
![enter image description here][7]&#xD;
&#xD;
The boxes at the lowest level of the BVH are the leaf boxes.&#xD;
&#xD;
    leafBoxesLV1 = &#xD;
      Select[VertexList[bvh[&amp;#034;Tree&amp;#034;]], &#xD;
       VertexOutDegree[bvh[&amp;#034;Tree&amp;#034;], #] == 0 &amp;amp;];&#xD;
    &#xD;
    Graphics3D[{&#xD;
      Polygon[polyPoints],&#xD;
      {Hue[0, 0, 0, 0], EdgeForm[Black], Cuboid /@ leafBoxesLV1}&#xD;
      }, Boxed -&amp;gt; False]&#xD;
&#xD;
![enter image description here][8]&#xD;
&#xD;
Adding a new level subdivides each leaf box into 8 further boxes.&#xD;
&#xD;
    With[{&#xD;
      testCuboid = {{0, 0, 0}, {10, 10, 10}}&#xD;
      },&#xD;
     Manipulate[&#xD;
      Graphics3D[{&#xD;
        If[n == 0, Cuboid[testCuboid], &#xD;
         Cuboid /@ Nest[cuboidSubdivide, testCuboid, n]]&#xD;
        }, Boxed -&amp;gt; False, Axes -&amp;gt; {True, False}],&#xD;
      {{n, 0}, 0, 4, 1}&#xD;
      ]&#xD;
     ]&#xD;
&#xD;
![enter image description here][9]&#xD;
&#xD;
The number of boxes, and hence the time needed for each addition to the BVH, grows by a factor of 8 per level.&#xD;
&#xD;
    Length /@ NestList[cuboidSubdivide, {{{0, 0, 0}, {1, 1, 1}}}, 5]&#xD;
    &#xD;
     {1, 8, 64, 512, 4096, 32768}&#xD;
&#xD;
One or two added levels are usually enough for the models used in this project.&#xD;
&#xD;
    (* Each new subdivision acts as root.  For each, subdivide further and remove any non-intersecting boxes.  Link back to parent box as directed edge *)&#xD;
    addLevelBVH[BVH_]:=Block[{&#xD;
    tree=BVH[&amp;#034;Tree&amp;#034;],polyPoints=BVH[&amp;#034;PolygonObjects&amp;#034;],returnEdges&#xD;
    },&#xD;
    Module[{&#xD;
    subEdges=Map[&#xD;
    Function[{levelComponent},levelComponent\[DirectedEdge]#&amp;amp;/@Quiet@cullIntersectingPartitions[cuboidSubdivide[levelComponent],polyPoints]],&#xD;
    Pick[VertexList[tree],VertexOutDegree[tree],0]]&#xD;
    },&#xD;
    returnEdges=ConstantArray[0,Length[subEdges]];&#xD;
    Do[returnEdges[[i]]=EdgeAdd[tree,subEdges[[i]]],{i,1,Length[subEdges],1}];&#xD;
    ];&#xD;
    returnEdges=DeleteDuplicates[Flatten[Join[EdgeList/@returnEdges]]];&#xD;
    Return[&amp;lt;|&#xD;
    &amp;#034;Tree&amp;#034;-&amp;gt;TreeGraph[returnEdges],&#xD;
    &amp;#034;PolygonObjects&amp;#034;-&amp;gt;polyPoints&#xD;
    |&amp;gt;]&#xD;
    ];&#xD;
&#xD;
 &#xD;
    bvh2 = addLevelBVH[bvh];&#xD;
    bvh2[&amp;#034;Tree&amp;#034;]&#xD;
&#xD;
![enter image description here][10]&#xD;
&#xD;
Any sub-boxes which don&amp;#039;t intersect with the model don&amp;#039;t contribute to the BVH, and so are removed as part of the process.&#xD;
&#xD;
    cullIntersectingPartitions=Compile[{&#xD;
    {cuboidPartitions,_Real,3},&#xD;
    {polyPoints,_Real,3}&#xD;
    },&#xD;
    Select[cuboidPartitions,Function[{partitions},MemberQ[ParallelMap[Quiet@intersectTriangleBox[partitions,#]&amp;amp;,polyPoints],True]]],&#xD;
    CompilationTarget-&amp;gt;&amp;#034;C&amp;#034;&#xD;
    ];&#xD;
&#xD;
&#xD;
Visualising the leaf boxes shows that empty BVs are removed.  &#xD;
&#xD;
    leafBoxesLV2 = &#xD;
      Select[VertexList[bvh2[&amp;#034;Tree&amp;#034;]], &#xD;
       VertexOutDegree[bvh2[&amp;#034;Tree&amp;#034;], #] == 0 &amp;amp;];&#xD;
    &#xD;
    Graphics3D[{&#xD;
      Polygon[polyPoints],&#xD;
      {Hue[0, 0, 0, 0], EdgeForm[Black], Cuboid /@ leafBoxesLV2}&#xD;
      }, Boxed -&amp;gt; False]&#xD;
&#xD;
![enter image description here][11]&#xD;
&#xD;
Once all levels are added, the BVH is finalised by linking each leaf box to its associated polygons.  This does not affect the tree structure, as the link association is held separately.&#xD;
&#xD;
    (*For each outermost subdivision (leaf box), find intersecting polygons.  Link to intersecting box via directed edge.  Append to graph *)&#xD;
    finalizeBVH[BVH_]:=Block[{&#xD;
    (* all leaf boxes for BVH *)&#xD;
    leafBoxes=Select[&#xD;
    VertexList[BVH[&amp;#034;Tree&amp;#034;]],&#xD;
    VertexOutDegree[BVH[&amp;#034;Tree&amp;#034;],#]==0&amp;amp;&#xD;
    ],&#xD;
    (* setup temp association *)&#xD;
    temp=&amp;lt;||&amp;gt;,&#xD;
    (* block variables *)&#xD;
    leafPolygons,&#xD;
    leafPolygonsEdges&#xD;
    },&#xD;
    (* For each BVH leaf box *)&#xD;
    Do[&#xD;
    (* 3.1. intersecting polygons for specified BVH leaf box *)&#xD;
    leafPolygons=Select[&#xD;
    BVH[&amp;#034;PolygonObjects&amp;#034;],&#xD;
    Quiet@intersectTriangleBox[leafBoxes[[i]],#]==True&amp;amp;&#xD;
    ];&#xD;
    (* 3.2. associate each specified BVH leaf box to its intersecting polygon(s) *)&#xD;
    AppendTo[temp,leafBoxes[[i]]-&amp;gt;leafPolygons],&#xD;
    {i,1,Length[leafBoxes],1}&#xD;
    ];&#xD;
    Return[&amp;lt;|&#xD;
    &amp;#034;Tree&amp;#034;-&amp;gt;BVH[&amp;#034;Tree&amp;#034;],&#xD;
    &amp;#034;LeafObjects&amp;#034;-&amp;gt;temp,&#xD;
    &amp;#034;PolygonObjects&amp;#034;-&amp;gt;BVH[&amp;#034;PolygonObjects&amp;#034;]&#xD;
    |&amp;gt;]&#xD;
    ];&#xD;
&#xD;
    bvh2 = finalizeBVH[bvh2];&#xD;
&#xD;
While it only needs doing once, generating the BVH is often the longest part of the procedure, so it&amp;#039;s a good idea to export it on completion.&#xD;
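One convenient pattern, mirroring the scene export shown later (the file name is illustrative):&#xD;
&#xD;
    (* save the finalized BVH for later sessions *)&#xD;
    Export[&amp;#034;House_BVH.txt&amp;#034;, Compress[bvh2]];&#xD;
    &#xD;
    (* restore it in a new session *)&#xD;
    bvh2 = Uncompress[Import[&amp;#034;House_BVH.txt&amp;#034;, &amp;#034;Text&amp;#034;]];&#xD;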
&#xD;
&#xD;
Generating The Scene&#xD;
--------------------&#xD;
&#xD;
The scene is an encapsulation of all data and parameters used for the ray caster.  It&amp;#039;s initially structured as: &#xD;
    &#xD;
    scene=&amp;lt;|&#xD;
    &amp;#034;BVH&amp;#034;-&amp;gt;BVHobj,                               -- (*The BVH previously generated*)&#xD;
    &amp;#034;SourcePositions&amp;#034;-&amp;gt;lightingPath,      -- (*The 3D position(s) of the light source*)&#xD;
    &amp;#034;FrameCount&amp;#034;-&amp;gt;frameCount,            -- (*A timestep for animation and a parameter if lightingPath is continuous*)&#xD;
    &amp;#034;Refinement&amp;#034;-&amp;gt;rayRefinement,        -- (*Ray density; smaller values give finer results.*)&#xD;
    &amp;#034;ProjectionPoints&amp;#034;-&amp;gt;planeSpec,       -- (*3D points forming surface(s) that shadow(s) are cast onto.*)&#xD;
    &amp;#034;FrameData&amp;#034;-&amp;gt;&amp;lt;||&amp;gt;                           -- (*Initially empty, data from the ray caster will be stored here.*)&#xD;
    |&amp;gt;&#xD;
&#xD;
&#xD;
Generating The Projection Surface&#xD;
-----------------------------&#xD;
&#xD;
The house should look like it&amp;#039;s casting its shadow onto the earth, so we define a list of points which represents the discrete plane it stands on.  Each ray is a line drawn between a point on the projection surface and the position of the scene&amp;#039;s light source.&#xD;
&#xD;
    (* rayRefinement *)&#xD;
    ref = 20;&#xD;
    (* the height of the projection surface *)&#xD;
    planeZoffset = 0;&#xD;
    (* the discrete projection surface - each point is the origin of a \&#xD;
    ray *)&#xD;
    projectionPts = &#xD;
      Catenate[Table[{x, y, planeZoffset}, {x, -900, 1200, ref}, {y, -600,&#xD;
          1000, ref}]];&#xD;
    &#xD;
    Graphics3D[{&#xD;
      Polygon[polyPoints],&#xD;
      Cuboid /@ ({##, ## + {ref, ref, 0}} &amp;amp; /@ projectionPts)&#xD;
      }, Axes -&amp;gt; True, AxesLabel -&amp;gt; {&amp;#034;X&amp;#034;, &amp;#034;Y&amp;#034;, &amp;#034;Z&amp;#034;}, ImageSize -&amp;gt; Large]&#xD;
&#xD;
![enter image description here][12]&#xD;
&#xD;
&#xD;
Specifying A Light Source&#xD;
-------------------------&#xD;
&#xD;
The light source is typically a continuous `BSplineFunction` which is sampled according to the number of frames the user wants, but it may also be a discrete list of 3D points (in which case the number of frames equals the length of the list).&#xD;
&#xD;
Using a modification of a `SunPosition` example in the documentation, a list of the 3D Cartesian positions of the sun between sunrise and sunset, with a time step of 30 minutes, is produced.&#xD;
&#xD;
![enter image description here][13]&#xD;
&#xD;
    solarPositionPts[location_:Here, date_:DateValue[Now,{&amp;#034;Year&amp;#034;,&amp;#034;Month&amp;#034;,&amp;#034;Day&amp;#034;}],tSpec_:{30,&amp;#034;Minute&amp;#034;}]:=&#xD;
    Evaluate[CoordinateTransformData[&amp;#034;Spherical&amp;#034;-&amp;gt;&amp;#034;Cartesian&amp;#034;,&amp;#034;Mapping&amp;#034;,{1,\[Pi]/2-(#2 Degree),2Pi-(#1 Degree)}]]&amp;amp;@@@(Function[{series},Map[QuantityMagnitude,series[&amp;#034;Values&amp;#034;],{2}]]@SunPosition[location,DateRange[Sunrise[#],Sunset[#],tSpec]&amp;amp;[DateObject[date]]])&#xD;
&#xD;
    solarPositionPts[Here, DateObject[{2017, 6, 1}], {30, &amp;#034;Minute&amp;#034;}]&#xD;
    &#xD;
    {{0.4700, -0.88253, -0.0155}, {0.4026, -0.91178, 0.0809},...,{0.4219, 0.90493, 0.0554}}&#xD;
&#xD;
&#xD;
It&amp;#039;s easier to rotate the sun&amp;#039;s path rather than the model and projection plane.  Different transforms may also be applied to best-fit the path into the scene.&#xD;
&#xD;
    solarXoffset = 0;&#xD;
    solarYoffset = 0;&#xD;
    solarZoffset = 0;&#xD;
    zRotation = \[Pi]/3.5;&#xD;
    scale = 1300;&#xD;
    &#xD;
    sourceSpec = &#xD;
      RotationTransform[zRotation, {0, 0, 1}][&#xD;
        # + {solarXoffset, solarYoffset, solarZoffset} &amp;amp; /@ (solarPositionPts[Here, DateObject[{2017, 6, 1}], {30, &amp;#034;Minute&amp;#034;}] scale)&#xD;
      ];&#xD;
    &#xD;
    lightingPath = BSplineCurve[sourceSpec];&#xD;
&#xD;
&#xD;
&#xD;
Specify A Frame Count&#xD;
---------------------&#xD;
&#xD;
A frame count must be specified to discretize the light path into 3D points.  Each of these points forms the end of each ray.&#xD;
If the light source is a discrete list, then its length is used to infer the frame count instead, and it need not be specified by the user.&#xD;
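A minimal sketch of that inference rule (illustrative, not the package&amp;#039;s actual code):&#xD;
&#xD;
    frames = If[ListQ[lightingPath], Length[lightingPath], frameCount];&#xD;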
&#xD;
    frameCount = 30;&#xD;
&#xD;
&#xD;
Constructing The Scene&#xD;
----------------------&#xD;
&#xD;
Now we can preview the scene&#xD;
&#xD;
    Graphics3D[{&#xD;
      Polygon[polyPoints],&#xD;
      Cuboid /@ ({##, ## + {ref, ref, 0}} &amp;amp; /@ projectionPts),&#xD;
      lightingPath,&#xD;
      {Darker@Yellow, PointSize[0.03], &#xD;
       Point[BSplineFunction[sourceSpec] /@ Range[0, 1, N[1/frameCount]]]}&#xD;
      }, Axes -&amp;gt; True, AxesLabel -&amp;gt; {&amp;#034;X&amp;#034;, &amp;#034;Y&amp;#034;, &amp;#034;Z&amp;#034;}, ImageSize -&amp;gt; Large]&#xD;
&#xD;
![enter image description here][14]&#xD;
&#xD;
All parameters have been set; it&amp;#039;s time to construct the scene.  Specifying a continuous lighting path must be done using a `BSplineCurve`.&#xD;
&#xD;
    scene = newScene[bvh2, lightingPath, frameCount, ref, projectionPts]&#xD;
&#xD;
![enter image description here][15]&#xD;
&#xD;
&#xD;
&#xD;
Processing A Scene For Shadow Mapping&#xD;
-------------------------------------&#xD;
&#xD;
The BVH optimises the ray caster by reducing the number of polygons to search against for an intersection.  If the ray intersects with the BVH root box then a breadth-first search along the BVH tree is initiated.  Starting with the root box, the out-components are selected by their intersection with a ray and are used as roots for the search&amp;#039;s next level.&#xD;
&#xD;
    (* select peripheral out-components of root box that intersect with ray *)&#xD;
    intersectingSubBoxes[BVHObj_,initialVertex_,rayOrigin_,raySource_]:=Select[Rest[VertexOutComponent[BVHObj[&amp;#034;Tree&amp;#034;],{initialVertex},1]],intersectRayBox[#,rayOrigin,raySource]==True&amp;amp;];&#xD;
&#xD;
    (* for root box intersecting rays, find which leaf box(es) intersect with ray *)&#xD;
    BVHLeafBoxIntersection[BVHObj_,rayInt_,rayDest_]:=Block[{v0},&#xD;
    (*initialize search *)v0=intersectingSubBoxes[BVHObj,VertexList[BVHObj[&amp;#034;Tree&amp;#034;]][[1]],rayInt,rayDest];&#xD;
    (* breadth search *)&#xD;
    If[v0=={},Return[v0],&#xD;
    While[&#xD;
    (* check that vertex isn&amp;#039;t a polygon - true if !0.  Check that intersection isn&amp;#039;t empty *)&#xD;
    AllTrue[VertexOutDegree[BVHObj[&amp;#034;Tree&amp;#034;],#]&amp;amp;/@v0,#=!=0&amp;amp;],&#xD;
    v0=Flatten[intersectingSubBoxes[BVHObj,#,rayInt,rayDest]&amp;amp;/@v0,1];&#xD;
    If[v0==={},Break[]]&#xD;
    ];&#xD;
    Return[v0];&#xD;
    ]&#xD;
    ];&#xD;
    &#xD;
&#xD;
The code below generates a visualisation of this process using the input data from the scene generated.&#xD;
&#xD;
    raySource = scene[&amp;#034;ProjectionPoints&amp;#034;][[3700]];&#xD;
    rayDestination = scene[&amp;#034;FrameData&amp;#034;][16][&amp;#034;SourcePosition&amp;#034;];&#xD;
    &#xD;
    lv1Intersection = &#xD;
      BVHLeafBoxIntersection[bvh, raySource, rayDestination];&#xD;
    lv2Intersection = &#xD;
      BVHLeafBoxIntersection[bvh2, raySource, rayDestination];&#xD;
    &#xD;
    lv1Subgraph = &#xD;
      Subgraph[Graph[EdgeList[bvh2[&amp;#034;Tree&amp;#034;]]], &#xD;
       First[VertexList[bvh2[&amp;#034;Tree&amp;#034;]]] \[DirectedEdge] # &amp;amp; /@ &#xD;
        lv1Intersection];&#xD;
    lv2Subgraphs = Subgraph[Graph[EdgeList[bvh2[&amp;#034;Tree&amp;#034;]]], Flatten[Table[&#xD;
         lv1Intersection[[&#xD;
             i]] \[DirectedEdge] # &amp;amp; /@ (Intersection[#, &#xD;
               lv2Intersection] &amp;amp; /@ ((Rest@&#xD;
                  VertexOutComponent[bvh2[&amp;#034;Tree&amp;#034;], #] &amp;amp; /@ &#xD;
                lv1Intersection)))[[i]],&#xD;
         {i, 1, Length[lv1Intersection], 1}&#xD;
         ], 1]];&#xD;
    lbl = ((#[[1]] -&amp;gt; #[[2]]) &amp;amp; /@ (Transpose[{lv2Intersection, &#xD;
           ToString /@ Range[Length[lv2Intersection]]}]));&#xD;
    edgeStyle = Join[&#xD;
       ReleaseHold@&#xD;
        Thread[(# -&amp;gt; HoldForm@{Thick, Blue}) &amp;amp;[EdgeList[lv2Subgraphs]]],&#xD;
       ReleaseHold@&#xD;
        Thread[(# -&amp;gt; HoldForm@{Thick, Red}) &amp;amp;[EdgeList[lv1Subgraph]]]&#xD;
       ];&#xD;
    &#xD;
    rayBVHTraversal = Graph[EdgeList[bvh2[&amp;#034;Tree&amp;#034;]], EdgeStyle -&amp;gt; edgeStyle,&#xD;
       VertexLabels -&amp;gt; lbl,&#xD;
       GraphHighlight -&amp;gt; lv2Intersection,&#xD;
       ImageSize -&amp;gt; Medium];&#xD;
    &#xD;
    rayModelIntersection = Graphics3D[{&#xD;
        {Green, Thickness[0.01], &#xD;
         Line[{raySource, rayDestination - {220, -400, 400}}]},&#xD;
        {Hue[0, 0, 0, 0], EdgeForm[{Thick, Red}], &#xD;
         Cuboid /@ lv1Intersection},&#xD;
        {Hue[.6, 1, 1, .3], EdgeForm[{Thick, Blue}], &#xD;
         Cuboid /@ lv2Intersection},&#xD;
        {Opacity[0.5], Polygon[polyPoints]},&#xD;
        Inset @@@ &#xD;
         Transpose[{ToString /@ Range[Length[lv2Intersection]], &#xD;
           RegionCentroid /@ Cuboid @@@ lv2Intersection}]&#xD;
        }];&#xD;
    &#xD;
    Column[{&#xD;
      Row[&#xD;
       Show[rayModelIntersection, ViewPoint -&amp;gt; #, Boxed -&amp;gt; False, &#xD;
          ImageSize -&amp;gt; Medium] &amp;amp; /@ {{-\[Infinity], 0, &#xD;
          0}, {0, -\[Infinity], 0}}&#xD;
       ],&#xD;
      rayBVHTraversal&#xD;
      }]&#xD;
&#xD;
![enter image description here][17]&#xD;
&#xD;
At the centre of the graph lies the vertex representing the root BV, where all searches originate.  The search continues out from all vertices which have intersected with the ray.&#xD;
&#xD;
&#xD;
    (* test intersection between ray and object polygon via BVH search *)&#xD;
    intersectionRayBVH[BVHObj_,rayOrigin_,rayDest_]:=With[{&#xD;
    intersectionLeafBoxes=BVHLeafBoxIntersection[BVHObj,rayOrigin,rayDest]&#xD;
    },&#xD;
    Block[{i},If[intersectionLeafBoxes=!={},&#xD;
    Return[Catch[For[i=1,i&amp;lt;Length[#],i++,&#xD;
    Function[{throwQ},If[throwQ,Throw[throwQ]]][intersectRayTriangle[#[[1]],#[[2]],#[[3]],rayOrigin,rayDest]&amp;amp;@#[[i]]]&#xD;
    ]&amp;amp;[DeleteDuplicates[Flatten[Lookup[BVHObj[&amp;#034;LeafObjects&amp;#034;],intersectionLeafBoxes],1]]]]===True],&#xD;
    Return[False]&#xD;
    ]]&#xD;
    ];&#xD;
&#xD;
Once the tree has been fully searched, the remaining boxes are used to look up their associated polygons.  Since the same polygon may intersect more than one box, any duplicates are deleted.  A line-triangle intersection test is iteratively applied over the resultant list, breaking at the first instance of a True return.  Such a ray has been found to intersect a part of the 3D model, so its origin point (from the `projectionPts` list) will represent a single point of shadow on the projection surface.  This point is stored in a list which will be used to draw the shadow for a single frame.  &#xD;
&#xD;
    candidatePolys = DeleteDuplicates[Flatten[Lookup[&#xD;
         bvh2[&amp;#034;LeafObjects&amp;#034;],&#xD;
         BVHLeafBoxIntersection[bvh2, raySource, rayDestination]&#xD;
         ], 1]];&#xD;
    &#xD;
    intersectingPolys = &#xD;
      Select[candidatePolys,PrimitiveIntersectionQ3D[Line[{raySource, rayDestination}],Triangle[#]] &amp;amp;];&#xD;
    &#xD;
    rayModelIntersectionPolys = Graphics3D[{&#xD;
        {Green, Thickness[0.01], &#xD;
         Line[{raySource, rayDestination - {220, -400, 400}}]},&#xD;
        {Hue[1, 1, 1, .5], EdgeForm[Black], Polygon[candidatePolys]},&#xD;
        {Hue[0.3, 1, 1, .5], Polygon[intersectingPolys]}&#xD;
        }, Boxed -&amp;gt; False];&#xD;
    &#xD;
    Row[Show[rayModelIntersectionPolys, ViewPoint -&amp;gt; #, ImageSize -&amp;gt; Medium] &amp;amp; /@ {{0, 0, \[Infinity]}, {0, \[Infinity], 0}}]&#xD;
&#xD;
Highlighted in green, the ray has been found to intersect with 2 polygons.&#xD;
&#xD;
![enter image description here][18]&#xD;
&#xD;
The BVH search is performed for each ray, for each frame.&#xD;
&#xD;
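For a single frame, the core loop amounts to something like this sketch (reusing the package&amp;#039;s intersectionRayBVH; sourcePosition stands for that frame&amp;#039;s light position):&#xD;
&#xD;
    (* projection points whose rays hit the model are that frame&amp;#039;s shadow points *)&#xD;
    shadowPts = Select[projectionPts, intersectionRayBVH[bvh2, #, sourcePosition] &amp;amp;];&#xD;
    groundPts = Complement[projectionPts, shadowPts];&#xD;
&#xD;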
A scene is the input for the ray caster.  If a scene is to be re-processed with different parameters then a new scene must be made.&#xD;
The output of the ray caster is held within a scene object.  The data for each frame is associated to its frame index and is all held in the scene&amp;#039;s &amp;#034;FrameData&amp;#034; field.&#xD;
&#xD;
Begin processing.  A status bar will indicate progress in terms of frames rendered.&#xD;
&#xD;
    scene = renderScene[scene];&#xD;
&#xD;
It&amp;#039;s best to save the progress by exporting the result afterwards.&#xD;
&#xD;
    Export[&amp;#034;House_scene.txt&amp;#034;, Compress[scene]]&#xD;
&#xD;
&#xD;
Reviewing Processed Scenes&#xD;
--------------------------&#xD;
&#xD;
Each frame holds the shadow and ground data separately; both are expressed as zero-thickness cuboids (tiles), each with side length equal to the `rayRefinement` parameter (recall that smaller values yield finer results).&#xD;
&#xD;
Individual frames are accessed by their frame index.  This examines frame 10.&#xD;
&#xD;
    Keys[scene[&amp;#034;FrameData&amp;#034;][10]]&#xD;
    &#xD;
    {&amp;#034;ShadowPts&amp;#034;, &amp;#034;SourcePosition&amp;#034;, &amp;#034;GroundPts&amp;#034;}&#xD;
&#xD;
&#xD;
Accessing the processed scene&amp;#039;s &amp;#034;FrameData&amp;#034; field allows a single specified frame to be drawn in Graphics3D.&#xD;
&#xD;
    Graphics3D[{&#xD;
      Polygon[scene[&amp;#034;BVH&amp;#034;][&amp;#034;PolygonObjects&amp;#034;]],&#xD;
      {GrayLevel[0.3], EdgeForm[], &#xD;
       Cuboid /@ scene[&amp;#034;FrameData&amp;#034;][10][&amp;#034;ShadowPts&amp;#034;]},&#xD;
      {EdgeForm[], Cuboid /@ scene[&amp;#034;FrameData&amp;#034;][10][&amp;#034;GroundPts&amp;#034;]},&#xD;
      {Darker@Yellow, PointSize[0.04], &#xD;
       Point[scene[&amp;#034;FrameData&amp;#034;][10][&amp;#034;SourcePosition&amp;#034;]]}&#xD;
      }, Boxed -&amp;gt; False, Background -&amp;gt; LightBlue]&#xD;
&#xD;
![enter image description here][23]&#xD;
&#xD;
&#xD;
`viewSceneFrame` does the task above for any processed scene and specified frame.  It inherits Graphics3D options as well as custom ones affecting the scene elements (shadow and ground style, toggle source drawing and gridlines).&#xD;
&#xD;
    viewSceneFrame[scene, 10, DrawGrid -&amp;gt; False, ShadowColor -&amp;gt; GrayLevel[0.3], SurfaceColor -&amp;gt; Lighter@Orange,  DrawSource -&amp;gt; True, Boxed -&amp;gt; False, Background -&amp;gt; LightBlue]&#xD;
&#xD;
![enter image description here][24]&#xD;
&#xD;
    Show[viewSceneFrame[scene, 10, DrawGrid -&amp;gt; False, &#xD;
      ShadowColor -&amp;gt; GrayLevel[0.3], SurfaceColor -&amp;gt; Lighter@Orange, &#xD;
      DrawSource -&amp;gt; True, Boxed -&amp;gt; False, Background -&amp;gt; LightBlue], &#xD;
     ViewPoint -&amp;gt; {0, 0, \[Infinity]}]&#xD;
&#xD;
![enter image description here][25]&#xD;
&#xD;
&#xD;
    sceneBounds = Join[&#xD;
       Most[MinMax /@ Transpose[scene[&amp;#034;ProjectionPoints&amp;#034;]]],&#xD;
       {MinMax[&#xD;
         Last /@ Values[scene[&amp;#034;FrameData&amp;#034;][[All, &amp;#034;SourcePosition&amp;#034;]]]]}&#xD;
       ];&#xD;
    viewSceneFrame[scene, 10, DrawGrid -&amp;gt; False, &#xD;
     ShadowColor -&amp;gt; GrayLevel[0.3], SurfaceColor -&amp;gt; Lighter@Orange, &#xD;
     DrawSource -&amp;gt; True, Boxed -&amp;gt; False, Background -&amp;gt; LightBlue, &#xD;
     Axes -&amp;gt; True, AxesLabel -&amp;gt; {&amp;#034;X&amp;#034;, &amp;#034;Y&amp;#034;, &amp;#034;Z&amp;#034;}, PlotRange -&amp;gt; sceneBounds]&#xD;
&#xD;
![enter image description here][26]&#xD;
&#xD;
&#xD;
Retaining the same options, the scene may also be animated.  To ensure smooth playback, each frame is exported as a .gif into `$TemporaryDirectory`, imported back as a list and animated.  The animation is also exported for future use.&#xD;
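Internally this presumably amounts to something like the following sketch (illustrative, not the package&amp;#039;s exact code):&#xD;
&#xD;
    (* export each frame to $TemporaryDirectory, re-import, then animate *)&#xD;
    files = Table[&#xD;
       Export[FileNameJoin[{$TemporaryDirectory, &amp;#034;frame&amp;#034; &amp;lt;&amp;gt; IntegerString[i, 10, 3] &amp;lt;&amp;gt; &amp;#034;.gif&amp;#034;}],&#xD;
        viewSceneFrame[scene, i]], {i, frameCount}];&#xD;
    ListAnimate[Import /@ files]&#xD;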
&#xD;
    animateScene[scene,&#xD;
     DrawGrid -&amp;gt; False,&#xD;
     ShadowColor -&amp;gt; GrayLevel[0.3],&#xD;
     SurfaceColor -&amp;gt; Lighter@Orange,&#xD;
     DrawSource -&amp;gt; True,&#xD;
     Boxed -&amp;gt; False,&#xD;
     Background -&amp;gt; LightBlue,&#xD;
     PlotRange -&amp;gt; sceneBounds,&#xD;
     ImageSize -&amp;gt; {{800}, {600}}&#xD;
     ]&#xD;
&#xD;
![enter image description here][27]&#xD;
&#xD;
&#xD;
We can also plot the cumulative solar exposure.&#xD;
&#xD;
All points from the projection plane which don&amp;#039;t intersect with the model (i.e., aren&amp;#039;t shadow points) are extracted from the scene&amp;#039;s frames&#xD;
&#xD;
    exposure = Values[scene[&amp;#034;FrameData&amp;#034;][[All, &amp;#034;GroundPts&amp;#034;]]]&#xD;
&#xD;
![enter image description here][28]&#xD;
&#xD;
&#xD;
&#xD;
The occurrences of each exposure point are tallied&#xD;
&#xD;
    tally = Tally[Flatten[exposure, 1]]&#xD;
&#xD;
![enter image description here][29]&#xD;
&#xD;
&#xD;
from which the range of frequencies is generated&#xD;
&#xD;
    tallyRange = &#xD;
     Range @@ Insert[MinMax[Last /@ SortBy[tally, Last]], 1, -1]&#xD;
    &#xD;
    {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, \&#xD;
    20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30}&#xD;
&#xD;
&#xD;
A color scale corresponding to the range of frequencies from above will be used to colorize the plot&#xD;
&#xD;
    colorScale = &#xD;
     ColorData[&amp;#034;SolarColors&amp;#034;, &amp;#034;ColorFunction&amp;#034;] /@ Rescale[tallyRange]&#xD;
&#xD;
![enter image description here][30]&#xD;
&#xD;
&#xD;
Replacement rules are used to replace each exposure point&amp;#039;s frequency with its corresponding color&#xD;
&#xD;
    colorScaleRules = Thread @@ {tallyRange -&amp;gt; colorScale}&#xD;
&#xD;
![enter image description here][31]&#xD;
&#xD;
&#xD;
The resultant exposure map is a list of tiles, each coloured according to its positional frequency.&#xD;
&#xD;
    heatMap = &#xD;
     Insert[MapAt[Cuboid, &#xD;
         Reverse@MapAt[Replace[colorScaleRules], #, -1], -1], EdgeForm[], &#xD;
        2] &amp;amp; /@ tally&#xD;
&#xD;
![enter image description here][32]&#xD;
&#xD;
Finally, the map is drawn.  It&amp;#039;s still a `Graphics3D` object so it may be rotated and viewed from any angle.&#xD;
&#xD;
    Row[{&#xD;
      Show[Graphics3D[{&#xD;
         {Opacity[0.3], Green, Polygon[scene[&amp;#034;BVH&amp;#034;][&amp;#034;PolygonObjects&amp;#034;]]},&#xD;
         heatMap&#xD;
         }, Boxed -&amp;gt; False, ImageSize -&amp;gt; Large], ViewPoint -&amp;gt; Above],&#xD;
      BarLegend[{&amp;#034;SolarColors&amp;#034;, MinMax[tallyRange]}]&#xD;
      }]&#xD;
&#xD;
![enter image description here][33]&#xD;
&#xD;
The process of generating an exposure map is provided as a function in the `GeometricIntersections3D` package.  &#xD;
Alternative color schemes may also be specified.&#xD;
&#xD;
    sceneExposureMap[scene, &amp;#034;TemperatureMap&amp;#034;]&#xD;
&#xD;
![enter image description here][34]&#xD;
&#xD;
&#xD;
The bar scale for the exposure plot measures duration in frames, but a time scale may be recovered.&#xD;
Given that the solar path used to light the scene lasts about 14 hours and the scene was rendered over 30 frames, each frame corresponds to roughly 30 minutes.&#xD;
&#xD;
    dailySunHours = &#xD;
     UnitConvert[DateDifference[Sunrise[], Sunset[]], &#xD;
      MixedRadix[&amp;#034;Hours&amp;#034;, &amp;#034;Minutes&amp;#034;, &amp;#034;Seconds&amp;#034;]]&#xD;
&#xD;
![enter image description here][35]&#xD;
&#xD;
&#xD;
    dailySunHours/30&#xD;
&#xD;
![enter image description here][36]&#xD;
&#xD;
&#xD;
&#xD;
&#xD;
&#xD;
&#xD;
&#xD;
&#xD;
This has been a very rewarding project with some exciting potential beyond computer graphics.  Indeed, many optimisations can still be made to the intersections package.  &#xD;
Different methods of space partitioning for BVH construction should be investigated, as the one currently employed is rather rudimentary.&#xD;
Anti-aliasing methods should also be investigated.&#xD;
&#xD;
Both the House and Sundial processes are documented in the notebooks attached.  All necessary data may also be downloaded to save time.&#xD;
&#xD;
&#xD;
&#xD;
&#xD;
 &#xD;
&#xD;
&#xD;
  [1]: https://en.wikipedia.org/wiki/Shadow_mapping&#xD;
  [2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=animation.gif&amp;amp;userId=605083&#xD;
  [3]: https://3dwarehouse.sketchup.com/&#xD;
  [4]: https://github.com/b-goodman/GeometricIntersections3D&#xD;
  [5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_Import.png&amp;amp;userId=605083&#xD;
  [6]: https://en.wikipedia.org/wiki/Bounding_volume_hierarchy&#xD;
  [7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_TreeLV1.png&amp;amp;userId=605083&#xD;
  [8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_leafBoxesLV1.png&amp;amp;userId=605083&#xD;
  [9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=cuboidSubdivide.gif&amp;amp;userId=605083&#xD;
  [10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_TreeLV2.png&amp;amp;userId=605083&#xD;
  [11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_leafBoxesLV2.png&amp;amp;userId=605083&#xD;
  [12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_projectionPoints.png&amp;amp;userId=605083&#xD;
  [13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=solarPosition.PNG&amp;amp;userId=605083&#xD;
  [14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_scenePreview.png&amp;amp;userId=605083&#xD;
  [15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_sceneConstructor.png&amp;amp;userId=605083&#xD;
  [16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_projectionPoints.png&amp;amp;userId=605083&#xD;
  [17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_raySearch.png&amp;amp;userId=605083&#xD;
  [18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_rayIntersection.png&amp;amp;userId=605083&#xD;
  [19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_projectionPoints.png&amp;amp;userId=605083&#xD;
  [20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_projectionPoints.png&amp;amp;userId=605083&#xD;
  [21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_projectionPoints.png&amp;amp;userId=605083&#xD;
  [22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_projectionPoints.png&amp;amp;userId=605083&#xD;
  [23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_singleFrame_1.png&amp;amp;userId=605083&#xD;
  [24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_singleFrame_2.png&amp;amp;userId=605083&#xD;
  [25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_singleFrame_3.png&amp;amp;userId=605083&#xD;
  [26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_singleFrame_4.png&amp;amp;userId=605083&#xD;
  [27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=animation_House.gif&amp;amp;userId=605083&#xD;
  [28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_exposureStep_A.png&amp;amp;userId=605083&#xD;
  [29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_exposureStep_B.png&amp;amp;userId=605083&#xD;
  [30]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_exposureStep_C.png&amp;amp;userId=605083&#xD;
  [31]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_exposureStep_D.png&amp;amp;userId=605083&#xD;
  [32]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_exposureStep_E.png&amp;amp;userId=605083&#xD;
  [33]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_solarMap_A.png&amp;amp;userId=605083&#xD;
  [34]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Process_House_solarMap_B.png&amp;amp;userId=605083&#xD;
  [35]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sunHours.png&amp;amp;userId=605083&#xD;
  [36]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sunHoursPerFrame.png&amp;amp;userId=605083</description>
    <dc:creator>Benjamin Goodman</dc:creator>
    <dc:date>2017-06-01T07:01:10Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/434905">
    <title>A Smart Cities Hackathon. 20-22 February 2015, at UPC, Barcelona, Spain</title>
    <link>https://community.wolfram.com/groups/-/m/t/434905</link>
<description>This hackathon proposes challenges around the Smart City concept and offers participants the chance to showcase their skills, learn emerging technologies and share ideas.&#xD;
&#xD;
The main objective is to promote use of the iCity platform by building new services based on PCs or embedded systems, such as simple platforms like Raspberry Pi or Galileo boards. In particular, participants will develop applications through the iCity Platform that interact with open city Information Systems to provide public-interest services, such as the Information Services offered by the Municipality of Barcelona. Main topics are data acquisition and actuation, monitoring and management, security, transport and mobility, e-government, environment, tourism and culture, and sustainability.&#xD;
&#xD;
The Hackathon is open to developers, students, researchers, business thinkers, policy analysts, journalists, designers, community organizers, urban planners, and anyone else interested in solving today&amp;#039;s biggest urban challenges.&#xD;
Bernat Espigule will be available during the event to teach you the Wolfram Language for projects developed with Raspberry Pi.&#xD;
&#xD;
Global event: http://www.global.datafest.net/&#xD;
&#xD;
Local event: http://www.global.datafest.net/cities/barcelona-spain</description>
    <dc:creator>Anna Calveras</dc:creator>
    <dc:date>2015-02-04T13:35:43Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1632078">
    <title>3D visualization of the Tokyo subway system</title>
    <link>https://community.wolfram.com/groups/-/m/t/1632078</link>
<description>Last month, a TV program on Japan&amp;#039;s public broadcaster, &amp;#034;[Chico Will Scold You][1]&amp;#034;, picked up the Tokyo subway. The show introduced &amp;#034;[Tokyo Arteria][2]&amp;#034;, a 3D model of the Tokyo subway created by Takatsugu KURIYAMA.&#xD;
&#xD;
I was struck by its beauty, so I tried to make something similar with Mathematica. Because I created it using Mathematica&amp;#039;s Tube function, I named it &amp;#034;Tokyo Tube&amp;#034;.&#xD;
![enter image description here][3]&#xD;
&#xD;
There are 13 subway lines in Tokyo. I gathered the latitude, longitude and depth (below ground) of all 287 stations from the web and books.&#xD;
&#xD;
I will explain using the Hanzomon Line as an example. The figure on the left connects the stations by the shortest straight lines. However, the actual route is more complicated. The figure on the right is the result of a Google Maps route search.&#xD;
&#xD;
![enter image description here][4]&#xD;
&#xD;
The TV program explained why Tokyo&amp;#039;s subway is so complicated.&#xD;
The answer is that, by law, buying land gives you the rights to both the surface and the ground beneath it, so running a subway would require paying land fees to every owner along the route. To avoid this, subways run under roads owned by the state or municipality, whose underground can be used free of charge for public-interest purposes.&#xD;
In addition, the roads in Tokyo are not straight, because they were made by filling in roads and waterways that had been laid out radially around Edo Castle during the Edo period.&#xD;
&#xD;
- From the result of the Google Maps route search, I found the route positions using the PixelValuePositions function.&#xD;
&#xD;
![enter image description here][5]&#xD;
![enter image description here][6]&#xD;
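&#xD;
A minimal sketch of this step (assuming a hypothetical `routeImg` holding the imported route-search image; the binarization step would need tuning to isolate the coloured route):&#xD;
&#xD;
    route = ColorNegate@Binarize[routeImg];&#xD;
    p = PixelValuePositions[route, 1];&#xD;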
&#xD;
- I sort the points by the y-axis (latitude), so the first point is the starting station, Shibuya Station, and the last point is the terminal station, Oshiage Station. However, just connecting the sorted points does not follow the route (left figure).  So I use the FindShortestTour function to rearrange them along the route, and select about 100 of the points for the Graphics3D (right figure).&#xD;
&#xD;
        p2 = Sort[p, #1[[2]] &amp;lt; #2[[2]] &amp;amp;]; &#xD;
        s = FindShortestTour[p2, 1, Length[p2]];&#xD;
        p3 = Append[&#xD;
           Take [p2[[Last[s]]], {1, m = Length@Last[s], Round[m/100]}], &#xD;
           p2[[-1]]];&#xD;
        {ListLinePlot[p2], ListLinePlot[p3]}&#xD;
&#xD;
![enter image description here][7]&#xD;
&#xD;
These points are converted into a 3D display using the latitude, longitude and depth information for all the stations above. The actual map, obtained with the GeoGraphics function, is pasted onto the top surface.&#xD;
&#xD;
![enter image description here][8]&#xD;
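&#xD;
In rough outline (with a hypothetical `pts3D`, the list of converted {x, y, -depth} points for one line), the Tube rendering might look like:&#xD;
&#xD;
    Graphics3D[{Orange, Tube[pts3D, 0.002]}, Boxed -&amp;gt; False]&#xD;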
&#xD;
The following is the result of carrying out this work for 13 lines.&#xD;
&#xD;
![enter image description here][9]&#xD;
&#xD;
The figure below shows the Ginza Line, the oldest in operation, and the Oedo Line, the newest. Subways created later are deeper, and Roppongi Station on the Oedo Line is the deepest, at 42.3 meters underground.&#xD;
&#xD;
![enter image description here][10]&#xD;
&#xD;
In addition, the Fukutoshin Line, which most recently opened in full, was created between the Marunouchi Line and the Shinjuku Line. The distance between the lines is about 11 cm at the closest point.&#xD;
&#xD;
Since then, a law has been made allowing ground deeper than 40 meters to be used free of charge for public-interest purposes. The Linear Chuo Shinkansen, which will dig straight underground through the urban area, aims to open in 2027.&#xD;
&#xD;
I have attached a notebook of Graphics3D &amp;#034;Tokyo Tube&amp;#034;, so please rotate it.&#xD;
&#xD;
  [1]: http://www4.nhk.or.jp/chikochan/&#xD;
  [2]: https://www.youtube.com/watch?v=eW59JgzyH70&#xD;
  [3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1040801.jpg&amp;amp;userId=1013863&#xD;
  [4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=287302.jpg&amp;amp;userId=1013863&#xD;
  [5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=796108.jpg&amp;amp;userId=1013863&#xD;
  [6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=103603.jpg&amp;amp;userId=1013863&#xD;
  [7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=1044204.jpg&amp;amp;userId=1013863&#xD;
  [8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=895805.jpg&amp;amp;userId=1013863&#xD;
  [9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=892206.jpg&amp;amp;userId=1013863&#xD;
  [10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=101107.jpg&amp;amp;userId=1013863</description>
    <dc:creator>Kotaro Okazaki</dc:creator>
    <dc:date>2019-03-14T13:29:19Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/186965">
<title>Linear algebra &amp;amp; Calculus &amp;amp; equation</title>
    <link>https://community.wolfram.com/groups/-/m/t/186965</link>
<description>How can I insert this expression in Mathematica, as shown in the photo?
[img=float: right; width: 292px; height: 100px;]http://community.wolfram.com/c/portal/getImageAttachment?filename=dsfsdfsdfsdfds.PNG&amp;amp;userId=136046[/img]</description>
    <dc:creator>Ahmed Al-Ali</dc:creator>
    <dc:date>2014-01-18T15:01:58Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/430092">
    <title>How to make a hole in a Graphics3D object?</title>
    <link>https://community.wolfram.com/groups/-/m/t/430092</link>
<description>My question is: how do I make a hole in a Graphics3D object? For example, there are two objects, cub1 and cub2:&#xD;
&#xD;
    cub1=Cuboid[{0,0,0},{20,2,20}];&#xD;
    cub2=Cuboid[{12,0,8},{17,2,17}];&#xD;
    Graphics3D[{cub1,cub2}]&#xD;
![enter image description here][1]&#xD;
&#xD;
I want to make a window at the position of cub2, like below.&#xD;
&#xD;
    DiscretizeRegion[RegionDifference[cub1,cub2]]&#xD;
![enter image description here][2]&#xD;
&#xD;
But this object was obtained using DiscretizeRegion, so it is not a Graphics3D object but a MeshRegion object, and it makes the system too slow. How can I get a Graphics3D object instead?&#xD;
&#xD;
  [1]: /c/portal/getImageAttachment?filename=2015-01-28_165029.jpg&amp;amp;userId=430077&#xD;
  [2]: /c/portal/getImageAttachment?filename=2015-01-28_165215.jpg&amp;amp;userId=430077</description>
    <dc:creator>yang l</dc:creator>
    <dc:date>2015-01-28T09:00:47Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1727982">
    <title>[WSS19] Monitoring the development and spread of cities</title>
    <link>https://community.wolfram.com/groups/-/m/t/1727982</link>
    <description># Introduction&#xD;
&#xD;
Cities all over the world grow over time at different rates and in different ways. It is beneficial for citizens, companies, governments, etc. to get an estimate of these rates and the direction of the development. In this study we explore several ways to collect and process data (manually and through an API), and try to find a suitable way to predict the development of a city using satellite images.&#xD;
# Data&#xD;
Satellite image data is becoming more available than ever across the globe. Satellites offer images with different information layers that highlight points (areas) of interest, accompanied by numeric and nominal data. There are several ways to obtain such datasets: one is purchasing them from a provider, but free options exist as well, such as downloading images through the Google Earth Pro application, which lets the user explore images of a given position through time in a simple way and switches between images from different satellites as the user changes parameters like zoom level and time. Another way to get free satellite images is through a data provider&amp;#039;s API, which is more time efficient and can provide pre-highlighted areas of interest in the given images.&#xD;
&#xD;
### Manual collection of satellite images&#xD;
The following animation is a timelapse of the development of Dubai and its surrounding area over the interval 1984 - 2016, made from images we collected manually from Google Earth Pro and stored as a GIF file.&#xD;
![enter image description here][1]&#xD;
Such a dataset, especially if it contains sand-covered areas, allows limited control over which class to study. For example, if the desired class is urban change, then separating urban developed areas from sand areas by color classification may produce inaccurate data even with high-resolution images.&#xD;
If the aim is to study the overall development of the city, including all classes, then this dataset can be used by binarizing the image differences over time.&#xD;
&#xD;
    imgList = Import[&amp;#034;your list of collected images.jpg&amp;#034;];&#xD;
    diffList = Differences[imgList];&#xD;
    ImageMeasurements[Binarize[#, .1], &amp;#034;Total&amp;#034;] &amp;amp; /@ diffList // ListLinePlot&#xD;
To monitor the change rate between images over time, we plotted the sum of the data values in the binarized image differences.&#xD;
&#xD;
    diffdata = ImageMeasurements[Binarize[#, .1], &amp;#034;Total&amp;#034;] &amp;amp; /@ diffList;&#xD;
    &#xD;
    diffdata // ListLinePlot&#xD;
![enter image description here][2]&#xD;
&#xD;
The spikes here are due to anomalies in the images. In this particular dataset they are caused by wind or sand storms that changed the surface structure of some sand-covered areas; these changes appeared in two of the image differences. To remove these anomalies, we removed the outliers from the list of totals of the binarized image differences.&#xD;
&#xD;
    q = Quartiles[diffdata];&#xD;
    maxq = q[[3]] + 1.5 (q[[3]] - q[[1]]);&#xD;
    minq = q[[1]] - 1.5 (q[[3]] - q[[1]]);&#xD;
    &#xD;
    newimgList = &#xD;
      Delete[imgList, Position[diffdata, _?(minq &amp;gt; # || # &amp;gt; maxq &amp;amp;)]];&#xD;
    &#xD;
    newdiffList = Differences[newimgList];&#xD;
    &#xD;
    newdiffdata = &#xD;
      ImageMeasurements[Binarize[#, .1], &amp;#034;Total&amp;#034;] &amp;amp; /@ newdiffList;&#xD;
    &#xD;
    newdiffdata // ListLinePlot&#xD;
![enter image description here][3]&#xD;
&#xD;
    ListAnimate[Accumulate[Binarize[#, .1] &amp;amp; /@ newdiffList], &#xD;
     ImageSize -&amp;gt; Large]&#xD;
![enter image description here][4]&#xD;
To compare the change rate for multiple areas (with the same parameter values), we calculated the average change rate over time.&#xD;
&#xD;
    Mean@newdiffdata&#xD;
    10852.4&#xD;
### API utilization for collecting Satellite images&#xD;
We were able to connect to the API of NASA&amp;#039;s website and get images with acceptable resolution and many classes. We first got the class color codes, then started to retrieve images of the metropolitan area of Shanghai, since it has the highest change rate over time among the world&amp;#039;s most populated cities.&#xD;
&#xD;
    rawLegend = &#xD;
      Import[&amp;#034;https://gibs.earthdata.nasa.gov/colormaps/v1.3/MODIS_\&#xD;
    Combined_IGBP_Land_Cover_Type_Annual.xml&amp;#034;];&#xD;
    &#xD;
    legend = Association@&#xD;
       Cases[rawLegend, &#xD;
        XMLElement[&#xD;
          &amp;#034;LegendEntry&amp;#034;, {&amp;#034;rgb&amp;#034; -&amp;gt; color_, &amp;#034;tooltip&amp;#034; -&amp;gt; name_, &#xD;
           &amp;#034;id&amp;#034; -&amp;gt; _}, {}] :&amp;gt; &#xD;
         RGBColor[ToExpression /@ StringSplit[color, &amp;#034;,&amp;#034;]/255] -&amp;gt; name, &#xD;
        Infinity];&#xD;
    &#xD;
    legendReverse = AssociationMap[Reverse, legend]&#xD;
    topcities = &#xD;
     EntityList[&#xD;
      EntityClass[&amp;#034;MetropolitanArea&amp;#034;, &amp;#034;Population&amp;#034; -&amp;gt; TakeLargest[10]]]&#xD;
&#xD;
This gives the list: Tokyo, Mexico City, Seoul, Mumbai, Sao Paulo, Manila, New York-Northern New Jersey-Long Island, NY-NJ-PA, Jakarta, New Delhi, Shanghai.&#xD;
&#xD;
    GeoListPlot[&#xD;
     EntityClass[&amp;#034;MetropolitanArea&amp;#034;, &amp;#034;Population&amp;#034; -&amp;gt; TakeLargest[10]][&#xD;
      &amp;#034;Position&amp;#034;], GeoRange -&amp;gt; &amp;#034;World&amp;#034;, GeoProjection -&amp;gt; &amp;#034;Robinson&amp;#034;]&#xD;
&#xD;
![enter image description here][5]&#xD;
&#xD;
    cityImgs10 = &#xD;
      Function[p, &#xD;
        GeoImage[GeoDisk[p, Quantity[70, &amp;#034;Miles&amp;#034;]], GeoZoomLevel -&amp;gt; 8, &#xD;
           GeoServer -&amp;gt; \&#xD;
    {&amp;#034;https://gibs.earthdata.nasa.gov/wmts/epsg3857/best/MODIS_Combined_\&#xD;
    L3_IGBP_Land_Cover_Type_Annual/default/&amp;#034; &amp;lt;&amp;gt; &#xD;
              DateString[#, {&amp;#034;Year&amp;#034;, &amp;#034;-&amp;#034;, &amp;#034;Month&amp;#034;, &amp;#034;-&amp;#034;, &amp;#034;Day&amp;#034;}] &amp;lt;&amp;gt; &#xD;
              &amp;#034;/GoogleMapsCompatible_Level8/`1`/`3`/`2`.png&amp;#034;, &#xD;
             &amp;#034;ZoomRange&amp;#034; -&amp;gt; {1, 8}}, Background -&amp;gt; Black] &amp;amp; /@ &#xD;
         DateRange[DateObject[{2001, 1, 1}], DateObject[{2017, 1, 1}], &#xD;
          &amp;#034;Year&amp;#034;]] /@ &#xD;
       EntityClass[&amp;#034;MetropolitanArea&amp;#034;, &amp;#034;Population&amp;#034; -&amp;gt; TakeLargest[10]][&#xD;
        &amp;#034;Position&amp;#034;];&#xD;
    binaryMaps = &#xD;
      Map[Binarize[&#xD;
         ColorDetect[#, legendReverse[&amp;#034;Urban and Built-up Lands&amp;#034;]], &#xD;
         0.9999] &amp;amp;, cityImgs10, {2}];&#xD;
    &#xD;
    newdiffList10 = Differences[#] &amp;amp; /@ binaryMaps;&#xD;
    &#xD;
    newdiffdata10 = ImageMeasurements[#, &amp;#034;Total&amp;#034;] &amp;amp; /@ newdiffList10;&#xD;
    ListLinePlot[newdiffdata10, &#xD;
     PlotLegends -&amp;gt; &#xD;
      ReplacePart[EntityValue[topcities, &amp;#034;Name&amp;#034;], 7 -&amp;gt; &amp;#034;NY NJ&amp;#034;], &#xD;
     PlotLabel -&amp;gt; Style[&amp;#034;change rate of urban growth&amp;#034;, Black, 15]]&#xD;
![enter image description here][6]&#xD;
&#xD;
    With[{data = &#xD;
       ReverseSort@&#xD;
        AssociationThread[&#xD;
         ReplacePart[EntityValue[topcities, &amp;#034;Name&amp;#034;], 7 -&amp;gt; &amp;#034;NY NJ&amp;#034;], &#xD;
         Mean /@ newdiffdata10]},&#xD;
     BarChart[data, ChartLabels -&amp;gt; (Rotate[#, Pi/2] &amp;amp; /@ Keys[data]), &#xD;
      PlotLabel -&amp;gt; &#xD;
       Style[&amp;#034;Average change rate of urban growth&amp;#034;, Black, 15]]]&#xD;
![enter image description here][7]&#xD;
&#xD;
    {ListAnimate[cityImgs10[[10]]], ListAnimate[binaryMaps[[10]]]}&#xD;
![enter image description here][8]&#xD;
![enter image description here][9]&#xD;
# Prediction of future urban development&#xD;
To predict the next urban development in a certain area, we need historical data, which we already obtained in the previous section. Since the Wolfram Language provides several methods for prediction, we looked for one suitable for our study. For classified images, only 17 years of historical data (2001 - 2017) are available via the API, which is not enough to train a neural network for prediction, so we decided to simulate a convolution layer and use it with a statistical model. First we use the binarized classified images, which limits the study to the urban areas. Then we transform each image into binary vectors, one vector per row of pixels: a pixel&amp;#039;s entry is one if that place is part of the urban area, and zero otherwise. Next we divide the training images into smaller parts; each part and its historical versions are used to predict a corresponding pixel.&#xD;
We divided the training set into groups of five consecutive images with an offset of one. Each group of five is used such that the first four images lead to the fifth in prediction; this increased the size of the training data.&#xD;
### Methods&#xD;
Decision trees are one of the suitable methods for this kind of prediction. We used the first sixteen images to predict the following seven.&#xD;
&#xD;
    binaryMapsShanghai = binaryMaps[[10]];&#xD;
    &#xD;
    rules = Catenate[&#xD;
       Function[maps,&#xD;
         Module[{flatMatrices},&#xD;
          flatMatrices = &#xD;
           Catenate@&#xD;
            Transpose[&#xD;
             Partition[ImageData[#, &amp;#034;Bit&amp;#034;], {3, 3}, {1, 1}, 2, 0] &amp;amp; /@ &#xD;
              Most[maps], {3, 1, 2, 4, 5}];&#xD;
          Thread[flatMatrices -&amp;gt; Catenate@ImageData[Last[maps], &amp;#034;Bit&amp;#034;]]&#xD;
          ]] /@ Partition[binaryMapsShanghai[[;; -2]], 5, 1]&#xD;
       ];&#xD;
    &#xD;
    pf = Predict[rules, Method -&amp;gt; &amp;#034;DecisionTree&amp;#034;]&#xD;
    predictimg[pf_] := Module[{output, predimg},&#xD;
       output = &#xD;
        Map[pf, Transpose[&#xD;
          Partition[ImageData[#, &amp;#034;Bit&amp;#034;], {3, 3}, {1, 1}, 2, 0] &amp;amp; /@ &#xD;
           binaryMaps[[-4 ;; -1]], {3, 1, 2, 4, 5}], {2}];&#xD;
       predimg = Binarize[Image@output, 0.1];&#xD;
       AppendTo[binaryMaps, predimg];&#xD;
       ];&#xD;
    &#xD;
    predictimg[pf];&#xD;
Every time the function `predictimg` is called, it generates a new predicted binary image and adds it to the list of binary images of the area under study. Here is an example of seven images generated by this function.&#xD;
&#xD;
![enter image description here][10]&#xD;
&#xD;
A closer look.&#xD;
&#xD;
![enter image description here][11]&#xD;
&#xD;
To see how good the model is, we fitted a linear regression model to the change rate of urban development for Shanghai and then compared the expected results with the ones we got from our model.&#xD;
&#xD;
    rateChange = &#xD;
     LinearModelFit[Transpose[{Range[16], newdiffdata10[[10]]}], x, x]&#xD;
    ListPlot[Transpose[{(rateChange[#] &amp;amp; /@ Range[17, 22]), {271.`, 152.`,&#xD;
         88.`, 57.`, 34.`, 15.`}}], Filling -&amp;gt; Axis, &#xD;
     PlotLabel -&amp;gt; &#xD;
      Style[&amp;#034;Relation between expected and generated change rate&amp;#034;, Black, &#xD;
       15]]&#xD;
&#xD;
![enter image description here][12]&#xD;
&#xD;
The model was quite cautious in deciding the next spread parts, but we believe that with some tweaks to the inputs we can get better results in future versions of the model. We tried other statistical methods, for example linear regression and random forest, but they did not show any change in the predicted images. We also tried the nearest neighbor method, which needs much more time to produce an output; due to time limitations, we couldn&amp;#039;t produce an output with it.&#xD;
# Future plan&#xD;
The model is in its early stages, and the available methods need more investigation. We still do not know how the nearest neighbor method will behave. There may also be other APIs that allow more data to be imported, which may give better results. Tweaking the inputs may also affect the outputs, and with some optimization we may make better use of the data at hand.&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=9161Dubai2.gif&amp;amp;userId=1700735&#xD;
  [2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=spikes.png&amp;amp;userId=1700735&#xD;
  [3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=clean.png&amp;amp;userId=1700735&#xD;
  [4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=4117dubaibinary.gif&amp;amp;userId=1700735&#xD;
  [5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=cities.png&amp;amp;userId=1700735&#xD;
  [6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=changerate.png&amp;amp;userId=1700735&#xD;
  [7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=changerate2.png&amp;amp;userId=1700735&#xD;
  [8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=shang.gif&amp;amp;userId=1700735&#xD;
  [9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=binshang.gif&amp;amp;userId=1700735&#xD;
  [10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=8462results.gif&amp;amp;userId=1700735&#xD;
  [11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=8195closerlook.gif&amp;amp;userId=1700735&#xD;
  [12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=deviation.png&amp;amp;userId=1700735</description>
    <dc:creator>Ahmed Elbanna</dc:creator>
    <dc:date>2019-07-10T04:49:48Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/386677">
    <title>Exporting 3ds model from mathematica</title>
    <link>https://community.wolfram.com/groups/-/m/t/386677</link>
    <description>Hi, I&amp;#039;m an architect I don&amp;#039;t know much about mathematica but I&amp;#039;m trying to learn how to use it. The geometry is quite interesting for me to use in architectural forms so I need to export some demonstrations into 3ds models. Is there anybody who can help me about it?</description>
    <dc:creator>Aysu Aysoy</dc:creator>
    <dc:date>2014-11-10T08:15:04Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/547218">
    <title>WeairePhelan structure in mathematica</title>
    <link>https://community.wolfram.com/groups/-/m/t/547218</link>
<description>Hello friends, I hope you are well. After seeing a video explaining some things about the [WeairePhelan structure][1], I wondered: can such a structure be made in Mathematica? I searched Wolfram|Alpha for the WeairePhelan structure but got nothing; maybe I entered it badly in the search engine. If someone can tell me how to make this structure in Mathematica, I would be grateful. I intend to do something further with this structure. Greetings to all.&#xD;
&#xD;
![enter image description here][2]&#xD;
&#xD;
&#xD;
  [1]: https://en.wikipedia.org/wiki/Weaire%E2%80%93Phelan_structure&#xD;
  [2]: /c/portal/getImageAttachment?filename=12-14-hedral_honeycomb.png&amp;amp;userId=11733</description>
    <dc:creator>Luis Ledesma</dc:creator>
    <dc:date>2015-08-13T18:31:26Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/486430">
    <title>London The Gherkin: plot volume of revolution in 3d</title>
    <link>https://community.wolfram.com/groups/-/m/t/486430</link>
    <description>Hello, I am new here and not sure how to plot this:&#xD;
&#xD;
&amp;gt; x = -2.2116*10^(-15)*y^(8) + 1.3603*10^(-12)*y^(7) - 3.42899*10^(-10)*y^(6) + 4.56861*10^(-8)*y^(5) - 3.45065*10^(-6)*y^(4) + 1.39347*10^(-4)*y^(3) - 0.00284795*y^(2) + 0.0911615*y + 24.5&#xD;
&#xD;
then rotate it by 360 degrees (2 pi) to form a 3D object. Could someone please help me graph this? It has an upper bound of 179.8 and a lower bound of 0. It is supposed to form the famous building The Gherkin. Thanks in advance; I have tried a few times but it didn&amp;#039;t work properly.&#xD;
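One way to do this in Mathematica (a minimal sketch; p is just an illustrative name for the profile polynomial, with the coefficients read off the expression above) is RevolutionPlot3D, which revolves a profile curve around the z axis:&#xD;
&#xD;
    (* profile radius p as a function of height y *)&#xD;
    p[y_] := -2.2116*10^-15*y^8 + 1.3603*10^-12*y^7 - 3.42899*10^-10*y^6 + 4.56861*10^-8*y^5 - 3.45065*10^-6*y^4 + 1.39347*10^-4*y^3 - 0.00284795*y^2 + 0.0911615*y + 24.5;&#xD;
    (* revolve the curve (p[y], y) around the z axis from y = 0 to 179.8 *)&#xD;
    RevolutionPlot3D[{p[y], y}, {y, 0, 179.8}]&#xD;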
&#xD;
![enter image description here][1]&#xD;
&#xD;
&#xD;
  [1]: /c/portal/getImageAttachment?filename=1.png&amp;amp;userId=486415</description>
    <dc:creator>Bob H</dc:creator>
    <dc:date>2015-04-26T01:45:03Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/1037946">
    <title>Testing for beauty</title>
    <link>https://community.wolfram.com/groups/-/m/t/1037946</link>
    <description>What do you think of the idea of automatically judging whether a piece of data is beautiful?  This could mean the data in an image (ImageData), the result of a computation (e.g. CellularAutomaton), or anything, although I am primarily thinking of a list or an array of numbers.&#xD;
&#xD;
My first thought was that there are many filters for image processing, but I don&amp;#039;t know which might be useful.  The next thing I think of is mathematical transforms.  For example, taking the Fourier or Hadamard transform you expect the coefficients to decay, and if they don&amp;#039;t then that would not be nice.&#xD;
&#xD;
This code deletes the constant term and computes a rough measure of the variance, using Mean as a shortcut for counting the 0s and 1s (values closer to the min or to the max, respectively) without knowing the length or dimension.  (Note that Fourier does not assume the size is a power of 2, but Hadamard does.)&#xD;
&#xD;
    FourierBeauty[list_] :=  Mean[1. - Round[Rescale[Abs[Rest[Flatten[Fourier[list]]]]]]]&#xD;
&#xD;
Maybe for an image this might not be bad.  Here is what it picks out of the ExampleData test images:&#xD;
&#xD;
    Grid[{#, ExampleData[#]} &amp;amp; /@ &#xD;
      MaximalBy[ExampleData[&amp;#034;TestImage&amp;#034;], &#xD;
       FourierBeauty[&#xD;
         ImageData[&#xD;
          Binarize[&#xD;
           ImageResize[&#xD;
            ColorConvert[ExampleData[#], &amp;#034;Grayscale&amp;#034;], {64, 64}]]]] &amp;amp;], &#xD;
     Frame -&amp;gt; All]&#xD;
![enter image description here][1]&#xD;
&#xD;
but here are the CAs it likes the most.&#xD;
&#xD;
    MaximalBy[Range[0, 255], &#xD;
     Sum[FourierBeauty[CellularAutomaton[#, RandomInteger[1, 2^8], {{0, 2^8 - 1}}]], {i, 100}] &amp;amp;]&#xD;
&#xD;
which evaluates to {1, 3, 5, 17, 57, 87, 119, 127}.&#xD;
&#xD;
&#xD;
  [1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fourier-beauty-image.jpg&amp;amp;userId=23275</description>
    <dc:creator>Todd Rowland</dc:creator>
    <dc:date>2017-03-23T02:46:11Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2316573">
    <title>[WSC21] Building isolation and average shape analysis with image processing</title>
    <link>https://community.wolfram.com/groups/-/m/t/2316573</link>
    <description>![enter image description here][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=image%281%29.png&amp;amp;userId=2316532&#xD;
  [2]: https://www.wolframcloud.com/obj/b413a444-3a6d-4675-8984-99255ca6959e</description>
    <dc:creator>Sidharth Jain</dc:creator>
    <dc:date>2021-07-15T18:30:57Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/387917">
    <title>How can I export this graphic into Maya or 3ds?</title>
    <link>https://community.wolfram.com/groups/-/m/t/387917</link>
    <description>![How can I export this graphic into maya or 3ds?][1]&#xD;
&#xD;
&#xD;
  [1]: /c/portal/getImageAttachment?filename=wcommun.JPG&amp;amp;userId=386657</description>
    <dc:creator>Aysu Aysoy</dc:creator>
    <dc:date>2014-11-12T07:55:09Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/3442449">
    <title>Visualizing the land values in Tokyo</title>
    <link>https://community.wolfram.com/groups/-/m/t/3442449</link>
    <description>![Visualizing the land values in Tokyo][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=VisualizingthelandvaluesinTokyo-optimize.gif&amp;amp;userId=20103&#xD;
  [2]: https://www.wolframcloud.com/obj/89d7af9d-4b49-48b0-9d50-a3c85473d6d9</description>
    <dc:creator>Kotaro Okazaki</dc:creator>
    <dc:date>2025-04-10T14:13:53Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/796356">
    <title>Fold a cube?</title>
    <link>https://community.wolfram.com/groups/-/m/t/796356</link>
    <description>Hello to all, I would like to share with you the following video ([Building unimaginable shapes][1]); I was impressed by the results obtained. The question I have is: are there tools in Mathematica to do what is explained in the video? I mean, essentially, fold a cube at any point along its edges.&#xD;
&#xD;
I would welcome feedback on any of the topics covered in the video. Greetings, and thanks in advance.&#xD;
&#xD;
&#xD;
  [1]: https://www.youtube.com/watch?v=dsMCVMVTdn0</description>
    <dc:creator>Luis Ledesma</dc:creator>
    <dc:date>2016-02-19T03:14:06Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/529314">
    <title>Help exporting to AutoCAD</title>
    <link>https://community.wolfram.com/groups/-/m/t/529314</link>
    <description>Hello, &#xD;
&#xD;
I will wave the novice flag first and admit I am new to the use of Mathematica. I have the code for a shape I want to export as a *.dxf file, but after watching some YouTube tutorials and reading some threads here, I haven&amp;#039;t managed to get it right.&#xD;
&#xD;
&#xD;
    Manipulate[&#xD;
     Show[calabi[0, 0, 0, alpha, 0, clr], ViewPoint -&amp;gt; {-1.4, 0, 1.4}, &#xD;
      Lighting -&amp;gt; &#xD;
       If[clr, {{&amp;#034;Ambient&amp;#034;, GrayLevel[.5]}, {&amp;#034;Directional&amp;#034;, White, &#xD;
          ImageScaled@{0, 0, 2}}}, {{&amp;#034;Ambient&amp;#034;, &#xD;
          GrayLevel[.25]}, {&amp;#034;Directional&amp;#034;, RGBColor[0.5, .5, 1], &#xD;
          ImageScaled@{0, 1, 0}}, &#xD;
              {&amp;#034;Directional&amp;#034;, RGBColor[1, 0.5, 0.5], &#xD;
          ImageScaled@{1, -1, 0}}, {&amp;#034;Directional&amp;#034;, RGBColor[0.5, 1, .5], &#xD;
          ImageScaled@{-1, -1, 0}}}], PlotRange -&amp;gt; 1.2, Boxed -&amp;gt; False, &#xD;
      Axes -&amp;gt; False, SphericalRegion -&amp;gt; True, ImageSize -&amp;gt; {450, 450}, &#xD;
      ViewAngle -&amp;gt; \[Pi]/4.5],&#xD;
     {{alpha, \[Pi]/4, &amp;#034;projection angle&amp;#034;}, 0, 2 Pi},&#xD;
     {{clr, False, &amp;#034;color code surface&amp;#034;}, {True, False}},&#xD;
     Initialization :&amp;gt; {&#xD;
       u1[a_, b_] := .5 (E^(a + I*b) + E^(-a - I*b));&#xD;
       u2[a_, b_] := .5 (E^(a + I*b) - E^(-a - I*b));&#xD;
       z1k[a_, b_, n_, k_] := E^(k*2*Pi*I/n)*u1[a, b]^(2.0/n);&#xD;
       z2k[a_, b_, n_, k_] := E^(k*2*Pi*I/n)*u2[a, b]^(2.0/n);&#xD;
       n = 5;&#xD;
       calabi[x_, y_, z_, \[Alpha]_, t_, c_] := &#xD;
        Table[&#xD;
         With[{alpha = \[Alpha] - t}, &#xD;
          ParametricPlot3D[&#xD;
           Evaluate@{Re[z1k[a, b, n, k1]] + x, Re[z2k[a, b, n, k2]] + y, &#xD;
             Cos[alpha]*Im[z1k[a, b, n, k1]] + &#xD;
              Sin[alpha]*Im[z2k[a, b, n, k2]] + z}, {a, -1, 1}, {b, &#xD;
            0, \[Pi]/2}, Boxed -&amp;gt; False, Axes -&amp;gt; False, PlotPoints -&amp;gt; 15, &#xD;
           PlotStyle -&amp;gt; &#xD;
            If[c, RGBColor@{If[k1 == 0 &amp;amp;&amp;amp; k2 == 0, 0, &#xD;
                Rescale[k1, {0, n - 1}]], &#xD;
               If[k1 == 0 &amp;amp;&amp;amp; k2 == 0, 0, Rescale[k2, {0, n - 1}]], &#xD;
               If[k1 == 0 &amp;amp;&amp;amp; k2 == 0, 1, 0]}, {RGBColor[.5, .5, 1], &#xD;
              Specularity[White, 128]}], MaxRecursion -&amp;gt; 0, &#xD;
           PerformanceGoal -&amp;gt; &amp;#034;Speed&amp;#034;, Mesh -&amp;gt; None]], {k1, 0, &#xD;
          n - 1}, {k2, 0, n - 1}];&#xD;
       }, SynchronousInitialization -&amp;gt; False]&#xD;
&#xD;
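A hedged sketch of one way to do the export (assuming the definitions above have been evaluated so that calabi is defined; the arguments and filename are illustrative):&#xD;
&#xD;
    (* build the Graphics3D once, outside Manipulate *)&#xD;
    g = Show[calabi[0, 0, 0, Pi/4, 0, False]];&#xD;
    (* DXF export keeps the polygon geometry; colors and lighting are generally lost *)&#xD;
    Export[&amp;#034;calabi.dxf&amp;#034;, g]&#xD;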
&#xD;
![A screencap of the desired figure][1]&#xD;
&#xD;
&#xD;
  [1]: /c/portal/getImageAttachment?filename=ScreenShot2015-07-11at23.53.12.png&amp;amp;userId=528896&#xD;
&#xD;
&#xD;
&#xD;
I understand it is not ideal to use a first post to ask for help, but I&amp;#039;ve hit a dead end with this. Any help would be much appreciated. &#xD;
&#xD;
&#xD;
Best,&#xD;
Carlos</description>
    <dc:creator>Carlos Ortega</dc:creator>
    <dc:date>2015-07-12T10:30:20Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2474537">
    <title>Tokyo Metro trains at rush hour: geo-temporal data visualizations in 2D/3D</title>
    <link>https://community.wolfram.com/groups/-/m/t/2474537</link>
    <description>![enter image description here][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Tokyo_Metro.gif&amp;amp;userId=20103&#xD;
  [2]: https://www.wolframcloud.com/obj/d99efd17-8925-4815-9bfb-968c86d4f6dc</description>
    <dc:creator>Kotaro Okazaki</dc:creator>
    <dc:date>2022-02-18T01:07:26Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/329903">
    <title>Extensions for Problem Solver Generator and Pure Research w/ Wolfram Lang.</title>
    <link>https://community.wolfram.com/groups/-/m/t/329903</link>
    <description>I expected to see the step-by-step experiments conducted by Gregor Mendel at his monastery in Moravia: why Mendel is no accident, why he picked peas, which studies he conducted and how, the models he developed, and the terminology. I&amp;#039;m watching Biology 7.00x, and Wolfram should be able to compute that; a pure extension of Mathematica Pura toward a Biologica Pura, so to speak.&#xD;
(Just submitted to Wolfram|Alpha Feedback)&#xD;
&#xD;
This would also be a natural extension of Wolfram&amp;#039;s Problem Generator, but I&amp;#039;m also alluding to the ideas in Stephen Wolfram&amp;#039;s blog post on conducting research in pure mathematics, extending the capabilities and computational resourcefulness of the language to realms beyond mathematics.&#xD;
&#xD;
  [1]: http://goo.gl/ZHMCzt</description>
    <dc:creator>Francisco Barreto</dc:creator>
    <dc:date>2014-08-28T01:27:02Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/2579441">
    <title>[WSC22] Create Greek columns with different depths of flutes and fillets</title>
    <link>https://community.wolfram.com/groups/-/m/t/2579441</link>
    <description>![Doric Columns][1]&#xD;
&#xD;
&amp;amp;[Wolfram Notebook][2]&#xD;
&#xD;
&#xD;
  [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=WolframImageColumns.JPG&amp;amp;userId=2578855&#xD;
  [2]: https://www.wolframcloud.com/obj/cb76b429-97c7-40ee-bbba-190aa8f12842</description>
    <dc:creator>Mahira Hafeez</dc:creator>
    <dc:date>2022-07-21T18:40:18Z</dc:date>
  </item>
  <item rdf:about="https://community.wolfram.com/groups/-/m/t/392411">
    <title>Manipulate code in Grasshopper?</title>
    <link>https://community.wolfram.com/groups/-/m/t/392411</link>
    <description>I need to use the same model in Grasshopper and manipulate it. I can only export the model as a mesh, but I need a surface in Rhino. I found a plug-in, Mantis V 0.5, which lets Mathematica work with Grasshopper, but I don&amp;#039;t think it is the right thing to use. Basically the question is: &#xD;
&#xD;
**Is it possible to write the code and manipulate it in grasshopper?**&#xD;
&#xD;
Here is the code:&#xD;
&#xD;
    Manipulate[&#xD;
     Module[{\[CurlyEpsilon] = 10^-6, c1 = Tan[a1], c2 = Tan[a2], &#xD;
       c3 = Tan[a3], c4 = Tan[a4], c5 = Tan[a5], c6 = Tan[a6]}, &#xD;
      ContourPlot3D[&#xD;
       Evaluate[&#xD;
        c6 Sin[3 x] Sin[2 y] Sin[z] + c4 Sin[2 x] Sin[3 y] Sin[z] + &#xD;
          c5 Sin[3 x] Sin[y] Sin[2 z] + c2 Sin[x] Sin[3 y] Sin[2 z] + &#xD;
          c3 Sin[2 x] Sin[y] Sin[3 z] + c1 Sin[x] Sin[2 y] Sin[3 z] == 0], &#xD;
         {x, \[CurlyEpsilon], Pi - \[CurlyEpsilon]}, &#xD;
         {y, \[CurlyEpsilon], Pi - \[CurlyEpsilon]}, &#xD;
         {z, \[CurlyEpsilon], Pi - \[CurlyEpsilon]},&#xD;
        Mesh -&amp;gt; False, ImageSize -&amp;gt; {400, 400}, Boxed -&amp;gt; False, Axes -&amp;gt; False, &#xD;
        NormalsFunction -&amp;gt; &amp;#034;Average&amp;#034;, PlotPoints -&amp;gt; ControlActive[10, 30], PerformanceGoal -&amp;gt; &amp;#034;Speed&amp;#034;]], &#xD;
      {{a1, 1, &amp;#034;\!\(\*SubscriptBox[\(\[Alpha]\), \(1\)]\)&amp;#034;}, -Pi/2 - 0.01, Pi/2 + 0.01, ImageSize -&amp;gt; Tiny}, &#xD;
      {{a2, 1, &amp;#034;\!\(\*SubscriptBox[\(\[Alpha]\), \(2\)]\)&amp;#034;}, -Pi/2 - 0.01, Pi/2 + 0.01, ImageSize -&amp;gt; Tiny}, &#xD;
      {{a3, 1, &amp;#034;\!\(\*SubscriptBox[\(\[Alpha]\), \(3\)]\)&amp;#034;}, -Pi/2 - 0.01, Pi/2 + 0.01, ImageSize -&amp;gt; Tiny}, &#xD;
      {{a4, 1, &amp;#034;\!\(\*SubscriptBox[\(\[Alpha]\), \(4\)]\)&amp;#034;}, -Pi/2 - 0.01, Pi/2 + 0.01, ImageSize -&amp;gt; Tiny}, &#xD;
      {{a5, 1, &amp;#034;\!\(\*SubscriptBox[\(\[Alpha]\), \(5\)]\)&amp;#034;}, -Pi/2 - 0.01, Pi/2 + 0.01, ImageSize -&amp;gt; Tiny}, &#xD;
      {{a6, 1, &amp;#034;\!\(\*SubscriptBox[\(\[Alpha]\), \(6\)]\)&amp;#034;}, -Pi/2 - 0.01, Pi/2 + 0.01, ImageSize -&amp;gt; Tiny}, &#xD;
     AutorunSequencing -&amp;gt; {1, 3, 5}, ControlPlacement -&amp;gt; Left]&#xD;
&#xD;
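If a mesh is acceptable on the Rhino side, one possible route (a sketch; the fixed coefficients and filename are illustrative, not taken from the Manipulate above) is to export a single ContourPlot3D frame and import the file into Rhino:&#xD;
&#xD;
    (* one static isosurface with two of the six terms switched on *)&#xD;
    surf = ContourPlot3D[Sin[x] Sin[2 y] Sin[3 z] + Sin[2 x] Sin[y] Sin[3 z] == 0, {x, 0.01, Pi - 0.01}, {y, 0.01, Pi - 0.01}, {z, 0.01, Pi - 0.01}];&#xD;
    Export[&amp;#034;surface.obj&amp;#034;, surf]&#xD;
&#xD;
As far as I know this still yields a mesh; Mathematica does not export NURBS surfaces directly, so a true Rhino surface would have to be rebuilt on the Rhino/Grasshopper side.&#xD;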
![enter image description here][1]&#xD;
&#xD;
&#xD;
  [1]: /c/portal/getImageAttachment?filename=ScreenShot2014-11-18at1.19.24PM.png&amp;amp;userId=11733</description>
    <dc:creator>Aysu Aysoy</dc:creator>
    <dc:date>2014-11-18T18:06:44Z</dc:date>
  </item>
</rdf:RDF>

