The Delian Brick and other 3D self-similar dissections
http://community.wolfram.com/groups/-/m/t/1368091
Divide a cuboid into two cuboids similar to the original shape. The answer involves the cube root of 2, otherwise known as the [Delian constant](http://mathworld.wolfram.com/DelianConstant.html). I've called this object the Delian Brick. It's a 3D 2-reptile. A stack of three bricks can be made using the cube root of 3, and so on.
With[{r = 2^(1/3)},
 Graphics3D[{Opacity[.5],
   Cuboid[{0 r^0, 0 r^1, 0 r^2}, {1 r^0, 1 r^1, 1 r^2}],
   Cuboid[{1 r^0, 0 r^1, 0 r^2}, {2 r^0, 1 r^1, 1 r^2}]},
  SphericalRegion -> True, Boxed -> False]]
![Delian Brick][1]
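For the cube-root-of-3 stack mentioned above, the same pattern gives three similar bricks (a quick sketch following the same construction):

With[{r = 3^(1/3)},
 Graphics3D[{Opacity[.5],
   Table[Cuboid[{k r^0, 0 r^1, 0 r^2}, {(k + 1) r^0, 1 r^1, 1 r^2}], {k, 0, 2}]},
  SphericalRegion -> True, Boxed -> False]]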
I discovered the Delian Brick independently, as did at least ten other recreational mathematicians I've corresponded with. It may have been known to the ancient Greeks. The first publication I've found is by Dale Walton and the game company ThinkFun, who expanded it into a 3D 4-irreptile they called the Fifth Chair puzzle.
With[{r = 2^(1/3)},
 Graphics3D[{Opacity[.5],
   {Red, Cuboid[{0 r^0, 0 r^1, 0 r^2}, {2 r^0, 1 r^1, 1 r^2}], Cuboid[{1 r^0, 1 r^1, 0 r^2}, {2 r^0, 2 r^1, 1 r^2}]},
   {Blue, Cuboid[{0 r^0, 1 r^1, 0 r^2}, {1 r^0, 3 r^1, 1 r^2}], Cuboid[{1 r^0, 2 r^1, 0 r^2}, {2 r^0, 3 r^1, 1 r^2}]},
   {Green, Cuboid[{0 r^0, 3 r^1, 0 r^2}, {2 r^0, 4 r^1, 2 r^2}], Cuboid[{0 r^0, 2 r^1, 1 r^2}, {2 r^0, 3 r^1, 2 r^2}]},
   {Yellow, Cuboid[{2 r^0, 0 r^1, 0 r^2}, {4 r^0, 2 r^1, 2 r^2}], Cuboid[{0 r^0, 0 r^1, 1 r^2}, {2 r^0, 2 r^1, 2 r^2}]}},
  SphericalRegion -> True, Boxed -> False]]
![fifth chair][2]
There are also [five space-filling tetrahedra](http://demonstrations.wolfram.com/SpaceFillingTetrahedra/), and at least two of them are 8-reptiles.
(* each tetrahedron's vertices are encoded as three-digit integers; adding 111
   makes every entry three digits, and IntegerDigits unpacks them into coordinates *)
Row[{
  Graphics3D[{Opacity[.5], Polygon /@ Union[Sort /@ Flatten[
       Subsets[#, {3}] & /@ (IntegerDigits /@
           ({{020, 111, 121, 022}, {022, 111, 112, 222}, {022, 111, 121, 222}, {022, 113, 112, 222},
             {022, 113, 123, 024}, {022, 113, 123, 222}, {111, 202, 212, 113}, {111, 222, 212, 113}} + 111) - 1), 1]]},
   Boxed -> False, SphericalRegion -> True],
  Graphics3D[{Opacity[.5], Polygon /@ Union[Sort /@ Flatten[
       Subsets[#, {3}] & /@ (IntegerDigits /@
           ({{002, 022, 111, 113}, {022, 042, 131, 133}, {022, 222, 111, 113}, {022, 222, 111, 131},
             {022, 222, 113, 133}, {022, 222, 131, 133}, {111, 131, 220, 222}, {113, 133, 222, 224}} + 111)), 1]]},
   Boxed -> False, SphericalRegion -> True]}]
![tetrahedron reptiles][3]
More of these self-similar 3D dissections are listed at [3D Rep-Tiles and Irreptiles](http://demonstrations.wolfram.com/3DRepTilesAndIrreptiles/); the ones listed here still need to be added there. Most 3D rep-tiles are based on either a 2D rep-tile or a polycube. The four objects in this discussion fall into neither category. Are there others?
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=DelianBrick.png&userId=21530
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=FifthChair.png&userId=21530
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=tetrahedronreptiles.png&userId=21530

Ed Pegg, 2018-07-03T16:02:03Z

[GIF] This is Only a Test (Decagons from stereographic projections)
http://community.wolfram.com/groups/-/m/t/1380624
![Decagons formed from stereographically projected points][1]
**This is Only a Test**
This one is fairly straightforward. Form 60 concentric circles on the sphere centered at the point $(0,1,0)$. On each circle, take 10 equally-spaced points, stereographically project to the plane, and form a decagon from the resulting points. Now rotate the sphere and all the points on it around the axis $(0,1,0)$. The result (at least after adding some color) is this animation. This is a sort of discretized companion to my old still piece [_Dipole_][2].
Here's the code:
Stereo[p_] := p[[;; -2]]/(1 - p[[-1]]); (* stereographic projection from the north pole (0,0,1) *)

With[{r = 2, n = 10, m = 60,
  cols = RGBColor /@ {"#2EC4B6", "#011627", "#E71D36"}},
 Manipulate[
  Graphics[
   {EdgeForm[Thickness[.0045]],
    (* order the two halves so larger polygons draw first and smaller ones land on top *)
    {Reverse[#[[1]]], #[[2]]} &[
     Partition[
      Table[
       {Blend[cols, θ/π],
        EdgeForm[Lighter[Blend[cols, θ/π], .15]],
        Polygon[
         Table[Stereo[(Cos[θ] {0, 1, 0} +
              Sin[θ] {Cos[t], 0, Sin[t]}).RotationMatrix[ϕ, {0, 1, 0}]],
          {t, π/2, 5 π/2, 2 π/n}]]},
       {θ, π/(2 m), π - π/(2 m), π/m}],
      m/2]]},
   PlotRange -> r, ImageSize -> 540, Background -> Blend[cols, 1/2]],
  {ϕ, 0, 2 π/n}]
 ]
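As a quick sanity check, `Stereo` projects from the north pole $(0,0,1)$, so points on the equator map to themselves:

Stereo[{Cos[t], Sin[t], 0}]
(* {Cos[t], Sin[t]} -- equatorial points are fixed by the projection *)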
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=stereo29.gif&userId=610054
[2]: https://shonkwiler.org/still-images/dipole

Clayton Shonkwiler, 2018-07-12T03:41:03Z

[WSC18] Computational Hairdressing
http://community.wolfram.com/groups/-/m/t/1383182
![What Paolo's hair should look like -- according to HairGenNet.][1]
Hi! My name is Jacob. My WSC18 project, as you can see, is of no practical value, so I hope it instead stands as a hilarious and intuitive icebreaker of an introduction to what one can do with machine learning, as it was for me.
## Why does Paolo have an odd gray blob on his head? ##
The question that started all this funny business: can a neural network predict what kind of hair someone has based on their facial features?
The initial thesis was that, even considering the countless factors that shape someone's hair at any given moment, there must be a correlation between face and hair -- however slight -- and that we'd be able to answer the above question with enough data. A neural network is no way to rigorously support such a thesis, though, so we found it easier to pitch the idea as a predictor of someone's hair based on what other people who look like them have.
So: the reason why Paolo has an odd gray blob on his head is that we're trying to peer-pressure him into styling his hair the way the wisdom of the crowd says he ought to... using a generative neural network trained on tens of thousands of faces. That kind of peer-pressure.
## Overview ##
To train a neural network to predict a hairstyle from a face (hereafter referred to as "HairPredNet"), we need input and output training data: faces and hair, respectively. The problem is that no massive database of faces with their corresponding hairstyles exists; I had to generate my own input and output.
Hundreds of millions of photos containing both face and hair are accessible on the internet, and my job was to find a way to separate the two. This required training a segmentation network (hereafter referred to as "HairSegNet"), which was made infinitely easier when I found a quality database of hair images and their corresponding segmentations, [Figaro1k][2].
In short, I was able to generate an unlimited amount of training data for HairPredNet by using HairSegNet, trained on Figaro1k, on any headshot image I could find.
## Code ##
----------
## HairSegNet ##
The process of acquiring raw training data was a matter of downloading Figaro1k. I cropped each image -- input photo and output hair mask -- to 512x512 to keep my data consistently scaled.
Crop[b_] := ImageCrop[ImageResize[b, {512}], {512, 512}] (* resize to width 512, then center-crop to 512x512 *)
I imported the neural net architecture "[Pix2pix Photo-to-Street-Map Translation][3]" uninitialized, to build HairSegNet on. This was done at the suggestion of my mentor Rick, who explained that it's an architecture well suited to generating output images from elements extracted from input images.
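Importing it looks something like this (a sketch; I'm assuming the resource name matches the repository page linked above):

pix2pixTrain = NetModel["Pix2pix Photo-to-Street-Map Translation",
  "UninitializedEvaluationNet"] (* untrained copy of the architecture *)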
HairSegNet1 =
NetTrain[pix2pixTrain, <|"Input" -> FigaroIn,
"Output" -> FigaroOut|>, MaxTrainingRounds -> 3]
After training on all 1,050 Figaro1k pairs, HairSegNet returned useful results and could safely be called decent, but not good enough to generate thousands of HairPredNet training examples from -- the reliability of the prediction would depend on the quality of the segmentation.
![HairSegNet trained on 1050 images][4]
Data augmentation was in order. My methods were horizontal flips, Gaussian noise, and a combination of the two.
FigaroInFlip = ImageReflect[#, Left] & /@ FigaroIn;
FigaroOutFlip = ImageReflect[#, Left] & /@ FigaroOut; (* flipped inputs need correspondingly flipped masks *)
FigaroInFuzz = ImageEffect[#, {"GaussianNoise", 0.25}] & /@ FigaroIn;
FigaroInFuzzFlip = ImageEffect[#, {"GaussianNoise", 0.25}] & /@ FigaroInFlip;
I manually added ~50 images of completely bald heads to the initial dataset of 1,050. Generating the corresponding output data was no problem: ~50 completely black squares, since these images have no hair to segment. This set was augmented as well.
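Producing those masks is a one-liner (a sketch; `BaldIn` is my name here for the list of bald-head photos):

BaldOut = Table[ConstantImage[Black, {512, 512}], Length[BaldIn]]; (* one all-black mask per bald photo *)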
HairSegNet2 =
NetTrain[pix2pixTrain, <|"Input" -> MassiveIn,
"Output" -> MassiveOut|>, MaxTrainingRounds -> 3]
![HairSegNet retrained on the augmented dataset][5]
Applying

DeleteSmallComponents[Binarize[#]] & /@ masks (* binarize each mask, then drop stray specks; "masks" stands for the raw HairSegNet outputs *)

to the raw output masks, I was able to clean up the segmentations.
![Cleaned-up segmentations][6]
## HairPredNet ##
We saw earlier that training a network to predict hairstyle entails finding input faces and their corresponding hair. We can now take the hair from any portrait with HairSegNet; we still need a way to take faces. The Wolfram Language has a built-in function called FindFaces that, applied to an image containing a face, returns the coordinates of a rectangle bounding the face. Using this, I wrote a function that crops a portrait image down to the bounds of that face rectangle.
FaceTake[image_] :=
 Module[{croppedimage, faceboxes, facebox, chinfacebox, rectangleareas,
   rectanglenumber},
  croppedimage = Crop[image];
  faceboxes = FindFaces[croppedimage];
  (* keep only the largest detected face rectangle *)
  rectangleareas = Area /@ faceboxes;
  rectanglenumber = Position[rectangleareas, Max[rectangleareas]];
  facebox = List @@ faceboxes[[rectanglenumber[[1, 1]]]];
  (* shift the box down 25 pixels so the crop reaches the chin *)
  chinfacebox = ReplacePart[facebox,
    {{2, 2} -> facebox[[2, 2]] - 25, {1, 2} -> facebox[[1, 2]] - 25}];
  Crop[ImageTrim[croppedimage, chinfacebox]]
  ]
![Demonstration of FaceTake[]][7]
I collected face data using the function [WebImageSearch\[\]][8] on both Google and Bing, which returned ~1000 images of dubious quality. My main source of data was [Labeled Faces in the Wild][9], a collection of 13,000 faces, most of which show all of the subject's hair.
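The search calls looked something like this (a sketch; the query string and count are illustrative, not the exact ones I used):

extraFaces = WebImageSearch["headshot portrait", "Images", MaxItems -> 100]; (* requires a configured search API; returns thumbnail images *)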
I called FaceTake[] and HairSegNet[] on the 13,000 images to produce the input and output data, respectively. I again took "Pix2pix Photo-to-Street-Map Translation" as my architecture.
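Concretely, that step looks roughly like this (a sketch; `lfwImages` is my name here for the list of LFW photos, and `HairSegNet2` is the retrained segmenter from above):

lfwFaces = FaceTake /@ lfwImages;              (* inputs: cropped faces *)
lfwHair = HairSegNet2[Crop[#]] & /@ lfwImages; (* outputs: predicted hair masks *)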
NetTrain[pix2pixinitialized, <|"Input" -> lfwFaces, "Output" -> lfwHair|>,
 TargetDevice -> "GPU", MaxTrainingRounds -> 3]
![Hair Predictions][10]
With a little manipulation, the predictions come out somewhat sharper.
![Sharper][11]
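The manipulation could be as simple as standard image post-processing (my sketch of the kind of thing involved, not necessarily the exact steps used):

Sharpen[ImageAdjust[#]] & /@ predictions (* "predictions" = raw HairPredNet outputs; adjust contrast, then sharpen *)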
----------
## Looking forward ##
An architecture better suited to the task than Pix2pix (in all its excellence) would surely have returned sharper results for both the segmentation and the prediction.
Many thanks to Michael Kaminsky for guiding me through this. It's cool!
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at10.13.46AM.png&userId=1372129
[2]: http://projects.i-ctm.eu/it/progetto/figaro-1k
[3]: https://resources.wolframcloud.com/NeuralNetRepository/resources/Pix2pix-Photo-to-Street-Map-Translation
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.13.23PM.png&userId=1372129
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.23.32PM.png&userId=1372129
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=a.jpeg&userId=1372129
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.54.12PM.png&userId=1372129
[8]: http://reference.wolfram.com/language/ref/WebImageSearch.html
[9]: http://vis-www.cs.umass.edu/lfw/
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at5.05.31PM.png&userId=1372129
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at5.08.45PM.png&userId=1372129

Jacob Fong, 2018-07-13T21:20:12Z