Community RSS Feed
https://community.wolfram.com
RSS Feed for Wolfram Community showing discussions tagged Recreation, sorted by activity
Anamorphosis and reflection between a Conical Mirror and a Cylinder
https://community.wolfram.com/groups/-/m/t/1660442
![intro][1]
Catoptric or mirror anamorphoses are deformed images that can only be seen undeformed with the help of a mirror.
Here, we experiment with a conical mirror surrounded by a vertical cylindrical surface.
We want to compute the points of a deformed (anamorphic) image on the cylinder's inner surface such that the viewer perceives an undeformed image when looking down into the cone-shaped mirror.
![enter image description here][2]
The above drawing shows the anamorphic setup: a conical mirror (radius r=1, height=h), surrounded by a cylindrical surface (radius R>r).
The viewpoint V is along the vertical axis of the cylinder (at infinity relative to the size of the cone).
A point S (xa,ya,za) on the cylinder's inner surface is reflected by the mirror at Q toward the viewer's eye at V. The viewer perceives the point at I (xi,yi,0). The lines VQ and SQ form equal angles with the normal to the cone at Q.
![enter image description here][3]
The above animation shows the point I traveling along a straight line while its anamorphic map follows a curve on the inner surface of the cylinder.
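Before writing the function down, the key ingredient can be checked symbolically: the slope of a vertical ray after reflection off the cone's generator, working in the radial (r, z) plane where the cone's half-angle is ArcTan[1/h]. This is a small side check, independent of the code that follows:

    d = {0, -1};                  (* incoming vertical ray in the (r, z) plane *)
    n = {h, 1}/Sqrt[1 + h^2];     (* outward unit normal of the cone generator *)
    refl = d - 2 (d.n) n;         (* law of reflection *)
    Simplify[refl[[2]]/refl[[1]] == -Cot[2 ArcTan[1/h]], h > 0]

This evaluates to True and confirms the Cot[2 ArcTan[1/h]] factor in the z-component of the function.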
We now write a function that expresses this geometric relationship:
anaCone2Cylinder[imagePoint : {xi_, yi_}, coneHeight : h_, cylinderRadius : R_] :=
 {(R xi)/Sqrt[xi^2 + yi^2], (R yi)/Sqrt[xi^2 + yi^2],
  h - h Sqrt[xi^2 + yi^2] + (-R + Sqrt[xi^2 + yi^2]) Cot[2 ArcTan[1/h]]}
This function maps an image point to an anamorphic point.
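As a quick sanity check (using the name anaCone2Cylinder, as in the calls further down), every mapped point must lie on the cylinder of radius R:

    pt = anaCone2Cylinder[{0.3, 0.4}, 1.57, 1.15];
    Norm[Most[pt]]
    (* 1.15, the cylinder radius *)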
To test our function, we use [again][4] one of the logos generated by the Wolfram Demonstration "[Character Rotation Patterns][5]" by Chris Carlson. After converting it to a GraphicsComplex, it looks like this:
ig = ImageGraphics[sun, 2, Method -> "Exact"];
lines = Normal[ig][[1, -1]] /. FilledCurve -> Identity;
scaledLines = Map[#/948 - .5 &, lines, {6}]
Graphics[{Thick, scaledLines}]
![enter image description here][6]
We now map the point coordinates of the lines in the GraphicsComplex to their anamorphic images {xa,ya,za} using the function anaCone2Cylinder.
anaLines = Map[anaCone2Cylinder[#, 1.57, 1.15] &, scaledLines, {5}];
Graphics3D[{{Opacity[.2], White,
Cylinder[{{0, 0, .3}, {0, 0, 1.2}}, 1.25]},
AbsoluteThickness[3], %}, Boxed -> False]
![enter image description here][7]
We then convert the anamorphic 3D drawing to the two-dimensional developed interior face of the cylinder as {ArcTan[xa, ya], za}. This GIF illustrates the unfolding of the cylindrical image:
![enter image description here][8]
developLineCoordinates =
  Flatten[Map[{ArcTan @@ Most[#], Last[#]} &, anaLines, {5}][[-1]], 1][[All, 1]];
lstPP = Partition[#, 2, 1] & /@ developLineCoordinates;
DeleteCases[#, _?(EuclideanDistance @@ # > 1 &)] & /@ lstPP;
Graphics[{AbsoluteThickness[2], Line /@ %}, FrameTicks -> None,
 Frame -> True, ImageSize -> 600]
develop = Image[%];
![enter image description here][9]
After printing the cylinder development at the right size (52 cm by 14 cm), it is glued around a cardboard cylinder (radius 8 cm). A home-made conical mirror (base radius 7 cm, height 12 cm) is placed inside the cylinder at its center. The anamorphic image on the cylinder wall is reflected as the undeformed original by the conical mirror. Here is the result (the center is hidden by a coin resting on top of the cone, since anamorphic maps of points close to the cone's center end up at infinite height on the cylinder wall):
![enter image description here][10]
**Another application** of the function is to use one of the many popular curves (".....-like curve") that can be extracted using Interpreter:
Interpreter["PopularCurve"]["bunny-like curve"];
bugsbunnyPrimitives =
First@Cases[
First[ParametricPlot[
Entity["PopularCurve", "BunnyCurve"]["ParametricEquations"][
t], {t, 0, 30 \[Pi]}]] /. {x_?NumericQ,
y_?NumericQ} :> {x - 85, y - 50}/800, _Line, \[Infinity]];
![enter image description here][11]
The anamorphic map is created by applying anaCone2Cylinder to the point coordinates:
anaBunny =
Map[anaCone2Cylinder[#, 1.755, 1.25] &, bugsbunnyPrimitives, {2}];
Animate[Graphics3D[
Rotate[{{Opacity[.2], White,
Cylinder[{{0, 0, .25}, {0, 0, 1}}, 1.25]}, AbsoluteThickness[3],
Red, anaBunny}, \[Phi], {0, 0, 1}], Boxed -> False], {\[Phi], 0,
2 \[Pi]}]
![enter image description here][12]
This is the developed cylinder:
developRules = {x_?NumericQ, y_?NumericQ,
z_?NumericQ} :> {ArcTan[x, y], z};
developed = anaBunny /. developRules;
DeleteCases[
Partition[developed[[1]], 2, 1], _?(EuclideanDistance @@ # > 1 &)];
Graphics[{Red, AbsoluteThickness[3], Line /@ %}, FrameTicks -> None,
Frame -> True]
![enter image description here][13]
And the result, printed, glued inside a cylinder and using the same setup as in the previous example:
![enter image description here][14]
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=logojointpics.png&userId=68637
[2]: https://community.wolfram.com//c/portal/getImageAttachment?filename=3379geometrycone.png&userId=68637
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=animationskipeedframes.gif&userId=68637
[4]: https://community.wolfram.com/groups/-/m/t/1646795?p_p_auth=1iKz6YW8
[5]: http://demonstrations.wolfram.com/CharacterRotationPatterns/
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=S-logooutlinecopy.png&userId=68637
[7]: https://community.wolfram.com//c/portal/getImageAttachment?filename=9897logocylinder.png&userId=68637
[8]: https://community.wolfram.com//c/portal/getImageAttachment?filename=unfolding.gif&userId=68637
[9]: https://community.wolfram.com//c/portal/getImageAttachment?filename=4680logodeveloped.png&userId=68637
[10]: https://community.wolfram.com//c/portal/getImageAttachment?filename=logofinalcombi.png&userId=68637
[11]: https://community.wolfram.com//c/portal/getImageAttachment?filename=9385bunnyoriginal.png&userId=68637
[12]: https://community.wolfram.com//c/portal/getImageAttachment?filename=cylinderanimation.gif&userId=68637
[13]: https://community.wolfram.com//c/portal/getImageAttachment?filename=8734bunnydeveloped.png&userId=68637
[14]: https://community.wolfram.com//c/portal/getImageAttachment?filename=6598bunnyfinal.jpg&userId=68637
Erik Mahieu, 2019-04-15T07:18:15Z
Packing arbitrary shapes with WordCloud
https://community.wolfram.com/groups/-/m/t/1659824
![enter image description here][1]
We can extract information from `WordCloud` in order to translate a collection of regions so they pack nicely. First I'll create some `BoundaryMeshRegions` similar to how the glyphs were created by OP:
$letters = Table[BoundaryDiscretizeGraphics[
Text[Style[c, Italic, FontFamily -> "Times"]], _Text], {c, Alphabet[]}];
n = 30;
BlockRandom[
glyphs = RandomChoice[$letters, n];
scales = RandomReal[5, n],
RandomSeeding -> 1234
];
Plot the word cloud using random orientations:
wc = WordCloud[AssociationThread[glyphs, scales], WordSpacings -> 0,
WordOrientation -> "Random", RandomSeeding -> 1234]
<img src="https://i.stack.imgur.com/cquRW.png" width="360"/>
Notice that the objects aren't quite touching. Luckily, when we convert this scene back to a collection of regions, they will seem to be touching. I think this has to do with padding within `Inset`. Using regions from the beginning rather than just graphics makes it easier to convert the insets into explicit coordinates and avoid padding.
insets = Cases[wc, _Inset, ∞];
insetToReg[mr_, c_, p_, s_] :=
BoundaryMeshRegion[TransformedRegion[#,
TranslationTransform[c - RegionCentroid[BoundingRegion[#]]]],
MeshCellStyle -> {1 -> Black, 2 -> RandomColor[Hue[_]]}]& @ RegionResize[mr[[1]], s]
BlockRandom[Show[insetToReg @@@ insets], RandomSeeding -> 1234]
<img src="https://i.stack.imgur.com/1k8w7.png" width="360"/>
Or if you prefer a region instead of just a visualization:
RegionUnion[insetToReg @@@ insets]
<img src="https://i.stack.imgur.com/JByJo.png" width="360"/>
We can do this for polygons too:
BlockRandom[
polys =
Table[BoundaryMeshRegion[#[[FindShortestTour[#][[2]]]],
Line[Mod[Range[16], 15, 1]]] &[RandomReal[{0, 1}, {15, 2}]], n];
scales = RandomReal[{0, 1}, n],
RandomSeeding -> 1234
];
wc = WordCloud[AssociationThread[polys, scales], WordSpacings -> 0,
WordOrientation -> "Random", RandomSeeding -> 1234];
BlockRandom[Show[insetToReg @@@ Cases[wc, _Inset, ∞]],
RandomSeeding -> 1234]
<img src="https://i.stack.imgur.com/HMvfF.png" width="360"/>
[1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2019-04-15at11.14.52AM.png&userId=11733
Chip Hurst, 2019-04-14T08:47:18Z
Knitting images
https://community.wolfram.com/groups/-/m/t/1659553
Dear all, inspired by another [great post][1] by [@Anton Antonov][at0], and in particular by a remark of [@Vitaliy Kaurov][at1] there pointing to [the art of knitting images][2], I could not resist trying this with Mathematica. Clearly, this problem is crying out for the **Radon transform**!
![enter image description here][3]
I start by choosing an example image, converting it to inverted grayscale, and performing the Radon transform.
ClearAll["Global`*"]
img0 = RemoveBackground[
ImageTrim[
ExampleData[{"TestImage", "Girl3"}], {{80, 30}, {250, 240}}], {"Background", {"Uniform", .29}}];
img1 = ImageAdjust[ColorNegate@ColorConvert[RemoveAlphaChannel[img0], "Grayscale"]];
{xDim, yDim} = {180, 400}; (* i.e. angles between 1\[Degree] and 180\[Degree] *)
rd0 = Radon[img1, {xDim, yDim}];
ImageCollage[{img0, ImageAdjust@rd0}, Method -> "Rows",
Background -> None, ImagePadding -> 10]
![enter image description here][4]
Every column of the Radon image represents a different angle of projection. So next I separate these columns into (here 180) single Radon images and do an inverse Radon transform on each:
maskLine[a_] := Table[If[a == n, 1, 0], {n, 1, xDim}];
maskImg = Table[Image[ConstantArray[maskLine[c], yDim]], {c, 1, xDim}];
rdImgs = rd0 maskImg;
ProgressIndicator[Dynamic[n], {1, xDim}]
invRadImgs =
Table[{ImageApply[If[# > 0, #, 0] &,
InverseRadon[rdImgs[[n]]]], -(n - 91) \[Degree]}, {n, 1, xDim}];
These data already represent the angle-dependent intensities for backprojection. Now one somehow has to translate these intensities into discretely spaced lines (because this is the actual task, in analogy to the knitting mentioned above). Here is my simple attempt, which e.g. for 69° gives the following result (I am not really happy with this - there is definitely room for improvement!):
![enter image description here][5]
valsAngle[invRads_] := Module[{img, angle, data, l2},
angle = Last@invRads;
data = Max /@ (Transpose@*ImageData@*ImageRotate @@ invRads);
l2 = Round[Length[data]/2];
data = MapIndexed[{First[#2] - l2, #1} &, data];
{Select[
Times @@@ ({#1,
If[#2 > .0003, 1, 0]} & @@@ ((Mean /@ # &)@*Transpose /@
Partition[data, 5])), # != 0 &], angle} (*
limiting value of 0.0003 is just empirical! *)
];
va = valsAngle /@ invRadImgs;
graphicsData[va_] := Module[{u, angle},
{u, angle} = va;
InfiniteLine[# {Cos[angle], -Sin[angle]}, {Sin[angle],
Cos[angle]}] & /@ u];
gd = graphicsData /@ va;
Graphics[{Thickness[.0003], gd}, ImageSize -> 600,
PlotRange -> {{-170, 170}, {-220, 220}}]
... and the result is a bunch of lines:
![enter image description here][6]
[at0]: https://community.wolfram.com/web/antononcube
[at1]: https://community.wolfram.com/web/vitaliyk
[1]: https://community.wolfram.com/groups/-/m/t/1555648?p_p_auth=T7A50bYl
[2]: http://artof01.com/vrellis/works/knit.html
[3]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ImageOfLines.gif&userId=32203
[4]: https://community.wolfram.com//c/portal/getImageAttachment?filename=img0rd0.jpg&userId=32203
[5]: https://community.wolfram.com//c/portal/getImageAttachment?filename=linesample.png&userId=32203
[6]: https://community.wolfram.com//c/portal/getImageAttachment?filename=ImageOfLines.png&userId=32203
Henrik Schachner, 2019-04-13T20:01:08Z
Game of Life (Manual) Neural Network
https://community.wolfram.com/groups/-/m/t/1424749
[Conway's Game of Life](https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life) has a very simple set of rules that can be summarized in Mathematica in two lines:
GameOfLifeRule[{1,2|3}|{0,3}] := 1
GameOfLifeRule[{center_Integer, sum_Integer}] := 0
If the center cell is alive (1) and it has 2 or 3 live neighbors (sum), it stays alive (1). If the center cell is dead (0) but has exactly 3 live neighbors (sum), it becomes alive (1). Otherwise, the cell dies (0).
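A quick check of the two-line definition on a few {center, sum} pairs:

    {GameOfLifeRule[{1, 2}], GameOfLifeRule[{1, 3}], GameOfLifeRule[{0, 3}],
     GameOfLifeRule[{1, 4}], GameOfLifeRule[{0, 2}]}
    (* {1, 1, 1, 0, 0} *)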
In this post, we'll manually construct a Neural Network (NN) that can give the next state of the Game of Life. It will require no training whatsoever and we'll use only our knowledge of the rules of the game.
# First NN
Let's "train" a very simple NN that will be able to tell the next state of a cell given the `center` cell and the `sum` of its neighbors. First, we generate the data for the NN (only 18 elements).
data = Flatten@Table[
{center, sum} -> List@GameOfLifeRule[{center, sum}]
, {sum, 0, 8}, {center, 0, 1}]
And then create a very simple NN. The values of the `Weights` and `Biases` below were chosen after running a 1-min training simulation and then rounding. The layer `ElementwiseLayer[#^5 &]` was then added to push the final output more sharply toward 0 or 1. The NN below doesn't need training, as it is already "trained".
net = NetChain[{
LinearLayer[2, "Weights" -> {{0,3},{-4,-4}},
"Biases" -> {-11,11}],
LogisticSigmoid,
LinearLayer[1, "Weights" -> {{-16,-16}},
"Biases" -> 8],
ElementwiseLayer[#^5 &],
LogisticSigmoid
}, "Input" -> 2, "Output" -> 1];
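One way to see why these hand-picked weights work (a back-of-the-envelope check, not part of the original post's derivation): before the sigmoid, the two hidden units compute

$u_1 = 3\,\text{sum} - 11 \qquad \text{and} \qquad u_2 = -4(\text{center} + \text{sum}) + 11,$

so, approximately, $u_1$ switches on when $\text{sum} \ge 4$ and $u_2$ switches on when $\text{center} + \text{sum} \le 2$. The output $-16\,\sigma(u_1) - 16\,\sigma(u_2) + 8$ is then positive only when both units are off, i.e. when $\text{sum} \le 3$ and $\text{center} + \text{sum} \ge 3$: exactly the alive cases $\{1, 2|3\}$ and $\{0, 3\}$.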
Let's test it:
Tally@Table[Round@net@d[[1]] == d[[2]], {d, data}]
(* Output *) {{True, 18}}
There is probably a cleverer way of doing the same, but this will suffice for now.
# Second NN
Now that we can predict the next state of a cell, we need to build an NN that can apply those rules to all the board.
A 3x3 convolution is the key to doing that. Look at the following two kernels:
$\begin{pmatrix}
1&1&1 \\
1&0&1 \\
1&1&1
\end{pmatrix}$ and $\begin{pmatrix}
0&0&0 \\
0&1&0 \\
0&0&0
\end{pmatrix}$
The first one computes the sum of the neighbors of the central cell, while the second simply copies the central cell. These are exactly the two inputs of the NN built above.
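As a side check (with hypothetical variable names, independent of the network code below), ListConvolve with these two kernels reproduces exactly the (sum, center) inputs of the first NN; since both kernels are symmetric, kernel reversal doesn't matter:

    sumKer = {{1, 1, 1}, {1, 0, 1}, {1, 1, 1}};
    idKer = {{0, 0, 0}, {0, 1, 0}, {0, 0, 0}};
    board = RandomInteger[1, {5, 5}];
    neighborSums = ListConvolve[sumKer, ArrayPad[board, 1]];
    ListConvolve[idKer, ArrayPad[board, 1]] === board
    (* True *)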
So, in order to build an NN that can play the Game of Life, we run the two convolutions above and then apply the previous NN to the result. It can be done as:
netPlay[W_Integer, H_Integer] := NetChain[{
ConvolutionLayer[2, {3, 3}, "PaddingSize" -> 1,
"Weights" -> {
{{{0, 0, 0}, {0, 1, 0}, {0, 0, 0}}},
{{{1, 1, 1}, {1, 0, 1}, {1, 1, 1}}}
}, "Biases" -> None],
ReshapeLayer[{2, W*H}],
TransposeLayer[],
NetMapOperator[net],
ReshapeLayer[{1, H, W}]
},
"Input" -> NetEncoder[{"Image", {W, H},
ColorSpace -> "Grayscale"}],
"Output" -> NetDecoder[{"Image",
ColorSpace -> "Grayscale"}]
]
Here we reshape and transpose the layers so that the previous net is applied at the channel level, i.e. once per pixel.
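The tensor shapes along the chain make this explicit (a sketch read off the layer specifications above; channel dimension first):

$\{2, H, W\} \xrightarrow{\text{ReshapeLayer}} \{2, WH\} \xrightarrow{\text{TransposeLayer}} \{WH, 2\} \xrightarrow{\text{NetMapOperator}[net]} \{WH, 1\} \xrightarrow{\text{ReshapeLayer}} \{1, H, W\}$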
Now we need to test our NN. To do it, we will build a function that generates a random Game Of Life board.
RandomLife[W_Integer, H_Integer] := Block[{mat, pad=2},
mat = ArrayPad[RandomInteger[1, {H, W}], pad];
mat[[1,1]]=mat[[-1,-1]]=1;
Rule @@ (ImagePad[Image@#, -pad] & /@ (
CellularAutomaton["GameOfLife", {mat, 0}, 1]))
]
Here we generate a random board, pad it, apply the Game of Life rules, and then crop the result. This is needed since Mathematica changes the size of the board in the CellularAutomaton function; hence we use the trick of padding and adding a cell that will die in the next iteration, just to make sure the board stays the same size. A very hacky way, but nonetheless, it works...
a = RandomLife[30, 20]
netPlay[30, 20][a[[1]]] - a[[2]]
![result][1]
From the difference of the two images we can see that the NN indeed reproduces the Game of Life rules.
Let's now apply it to a more realistic scenario. Gliders!
img = Image@ArrayPad[{{0,1,0},{0,0,1},{1,1,1}}, 10];
ListAnimate@NestList[(netPlay@@ImageDimensions@img), img, 45]
![Glider][2]
Notice that the Glider just dies at the corner of the board.
An interesting problem we could pose is to run the game backward: given the current configuration, try to find a previous one, using only random images as a training set. It is well known that the Game of Life is not reversible, so this procedure is not always possible. But it would be interesting to see what the NN would predict.
One could build such a neural network by feeding it random images, using convolutions to construct the previous step, and then applying the network shown above to recover the input image and compare the differences, in a kind of auto-encoder way.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2018-08-26_173812.png&userId=845022
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=rest.GIF&userId=845022
Thales Fernandes, 2018-08-26T20:51:49Z