
Mandelbrot Set on Neural Network

MODERATOR NOTE: a submission to the computational art contest, see more: https://wolfr.am/CompArt-22


Showing the Abs and Arg fields of the Mandelbrot set under 200 iterations

Mandelbrot_wallpaper.png


Showing the Abs field of the Mandelbrot set under 9 iterations

9_iterationabs.png


Showing the Re vs. Im complex mapping of the Mandelbrot set under 200 iterations

ReIm_Complexmap.png


Mandelbrot Set on Neural Network

The MXNet-based neural network framework has been part of the Wolfram Language since Version 11. It is designed mostly for deep learning. However, if we think about it, a neural network is merely another fancy way to implement a certain class of programs. As more and more functions become supported, it is now possible to "compile" a general numerical program into a neural network, which essentially gives us an interface to GPGPU parallelism without the hassle of low-level coding.

Inspired by Brian's recent post on MathCompile, in this post we demonstrate how to implement a neural network that computes the Mandelbrot set.

Utilities

As usual, here is the code dump of some helper functions. Readers can evaluate it and then safely skip this section.

code dump of helper functions

ClearAll[pipe, branch]
pipe   = RightComposition;
branch = Through@*{##} &;
Needs["NeuralNetworks`"]
Clear[netInputPortSort]
netInputPortSort[inPortLst_List] := Function[origNet, netInputPortSort[origNet, inPortLst]]
netInputPortSort[origNet_NetGraph, inPortLst_List] :=
 Block[{NetGraph = Inactive[NetGraph], fullnet, net},
           fullnet         = origNet
           ; net           = fullnet[[1]]
           ; net["Inputs"] = AssociationThread[Rule[inPortLst, inPortLst /. net["Inputs"]]]
           ; ReplacePart[fullnet, 1 -> net]
      ] // Activate

Basic concept

At the center of the Mandelbrot set computation is a simple iteration:

$$\left\{ \begin{aligned} z_n &= c + z_{n-1}^2 \\ z_0 &= 0 \end{aligned} \right.$$

Iterating this map over a grid of $c$ values traces out the Mandelbrot set.
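For a single $c$ this is just a NestList; a minimal illustration:

(* the first few iterates z0 .. z5 for one sample point c: *)
With[{c = -0.4 + 0.6 I}, NestList[#^2 + c &, 0, 5]]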

Block[{region, c, facets}
         ,
         (* random-points as different c for iteration: *)
         region = Disk[{-.5, 0}, 2] // DiscretizeGraphics // DiscretizeRegion[#, MaxCellMeasure -> .0001] &
         ; facets = MeshCells[region, 2][[;; , 1]]
         ; c = MeshCoordinates[region].{1, I}
         ;(* The iteration taking place simultaneously for all c, nested 21 steps: *)
         Nest[Function[z, z^2 + c], 0, 21] //
              (* Visualizing the result: *)
              pipe[
                      pipe[(* Regularizing the norm for a clear visualization: *)
                           Abs, HistogramTransform, Rescale, (1 - #)^2 &]
                      , Append[ReIm[c]\[Transpose], #]\[Transpose] &
                      , Graphics3D[
                                     GraphicsComplex[#, {EdgeForm[], Polygon@facets}]
                                    , Boxed -> False, SphericalRegion -> True, RotationAction -> "Fit"
                                    , ViewPoint -> {1, -2.4, 2.4}, ViewVertical -> {0, 0, 1}
                                  ] &
                  ]
     ]

3D mesh of Mandelbrot set

Using the built-in MandelbrotSetPlot function, it's straightforward to plot the traditional Mandelbrot set visualization:

MandelbrotSetPlot[{-2.5 - 2 I, 1.5 + 2 I}]

MandelbrotSetPlot

A few quick tests show that, for fixed MaxIterations, the computing time of MandelbrotSetPlot follows a power law in ImageResolution.

builtInTimeTest = {#, AbsoluteTiming[MandelbrotSetPlot[{-2.5 - 2 I, 1.5 + 2 I}, MaxIterations -> 100, ImageResolution -> #];][[1]]} & /@ Range[101, 4001, 100]

NonlinearModelFit[builtInTimeTest, a r^k, {a, k}, r][r] //
  LogLogPlot[#, {r, 400, 5000}, PlotStyle -> Directive[Red, AbsoluteThickness[2]], Frame -> True, FrameTicks -> All] & //
  Show[{#
        , ListLogLogPlot[builtInTimeTest, PlotMarkers -> "OpenMarkers", PlotRange -> All]
        }, FrameLabel -> {"ImageResolution", "AbsoluteTiming"}] &

time benchmark of MandelbrotSetPlot

This is a scenario where parallel computation might make a difference.


Mandelbrot set on neural network -- A naïve attempt

The implementation

Iteration core net

The neural network framework (NN for short) doesn't support complex numbers (yet). Aside from that, the iteration formula is simple enough for a ThreadingLayer:
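As a reminder of how ThreadingLayer works, here is a minimal sketch (shapes are left unspecified and inferred at evaluation time):

(* thread a scalar function over corresponding elements of several arrays: *)
ThreadingLayer[#1^2 - #2^2 + #3 &][{{1., 2.}, {3., 4.}, {5., 6.}}]
(* {-3., -6.}, i.e. {1.^2 - 3.^2 + 5., 2.^2 - 4.^2 + 6.} *)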

iterNet = Module[{c = cX + cY I, z = x + y I},
  z^2 + c // pipe[
                    ReIm, ComplexExpand, Echo[#, "{Re[z^2+c], Im[z^2+c]}:"] &
                    , Map@pipe[
                                 Inactive[Function][{cX, cY, x, y}, #] &
                                 , Activate
                                 , ThreadingLayer
                              ]
                    , Inactive[NetGraph][#
                                            , {
                                                   (NetPort /@ StringSplit["cX,cY,x,y", ","]) -> # & /@ {1, 2}
                                                   , 1 -> NetPort["x"], 2 -> NetPort["y"]
                                              }
                                        ] &
                    , Activate
                 ]
  ]

iterNet

It's very easy to perform the iteration with the help of Nest / NestList.

Block[{region, c, xyInit, result}
         ,
         (* random-points as different c for iteration: *)
         region = Disk[{0, 0}, .5] // DiscretizeGraphics // DiscretizeRegion[#, MaxCellMeasure -> .01] &
         ; c = MeshCoordinates[region].{1, I} // ReIm
         ; c = AssociationThread[StringSplit["cX,cY", ","], c\[Transpose]]
         ; xyInit = AssociationThread[StringSplit["x,y", ","], 0*Values[c]]
         ; result = NestList[
                                pipe[
                                      Map[Clip[#, {-2, 2}] &]
                                      , Join[c, #] &
                                      , iterNet
                                    ]
                                , xyInit, 20
                            ]
         ; result //
             pipe[
                     Values, Rest, Map@Transpose
                     , Transpose
                     , MapThread[{#2, BSplineCurve@#1} &, {#, # // Length // Range // N // Rescale // Map[ColorData["Rainbow"]]}] &
                     , Graphics[#, Frame -> True, FrameTicks -> All, PlotRange -> {GoldenRatio {-1, 1}, {-1, 1}}, PlotRangeClipping -> True] &
                 ]
     ]

Nest on iterNet

Nest net

Now that we have the core net representing the iteration formula, we need to fully mimic the functionality of Nest[f, expr, n] on a neural network.

The go-to function is of course NetNestOperator. However, in our case the same $c$ is used repeatedly across all iterations, so we'd like to pass it directly to each step. We do this with the following customized NetNestPartialOperator function:

ClearAll[NetNestPartialOperator]
NetNestPartialOperator[net_, staticPorts_List, nestPort_, iterNum_Integer] :=
 Module[{pIdx, netIdx, staticpath, nestpath},
  netIdx = Range[iterNum]
  ; pIdx = AssociationThread[staticPorts, staticPorts // Length // Range // Map[ToString]]
  ; staticpath = Function[p, NetPort["static" <> pIdx[p]] -> (NetPort[#, p] & /@ netIdx)] /@ staticPorts
  ; nestpath = Thread[Flatten[{NetPort["nest"], netIdx // Most}] -> (NetPort[#, nestPort] & /@ netIdx)]
  ; Inactive[NetGraph][ConstantArray[net, iterNum], {staticpath, nestpath} // Flatten] //
       Activate // 
       netInputPortSort[Flatten[{"nest", "static" <> # & /@ Values[pIdx]}]]
  ]

Basically, our NetNestPartialOperator takes a net_NetGraph as its 1st argument. One of net's input ports, nestPort, is nested across iterations; the other input ports (staticPorts) are fed constant, non-nested values.

Here is a simple example realizing a 5-step nested iteration Nest[Function[c, a + b + c], c0, 5]:

Module[{coreNet, nestNet}
           , coreNet = NetGraph[{ThreadingLayer[#1+#2+#3&]}, {{NetPort["a"],NetPort["b"],NetPort["c"]}->1},"a"->"Real","b"->"Real","c"->"Real"]
           ; nestNet = NetNestPartialOperator[coreNet, {"a", "b"}, "c", 5]
           ; {
               {"coreNet", "nestNet"}
               , Quiet[
                         NetInformation[#, "MXNetNodeGraph"] // Graph[#, VertexSize -> 0.2, VertexLabels -> "Name"] &
                      ] & /@ {coreNet, nestNet}
             }\[Transpose] // 
             Grid[#, Background -> {{GrayLevel[.9], None}, None}, Frame -> All, FrameStyle -> Black] &
      ]

NetNestPartialOperator example
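We can sanity-check that such a nested net agrees with a plain Nest; a minimal sketch using the renamed ports "nest", "static1" and "static2" from the construction above:

Module[{coreNet, nestNet, a = 0.1, b = 0.2, c0 = 0.3}
           , coreNet = NetGraph[{ThreadingLayer[#1+#2+#3&]}, {{NetPort["a"],NetPort["b"],NetPort["c"]}->1},"a"->"Real","b"->"Real","c"->"Real"]
           ; nestNet = NetNestPartialOperator[coreNet, {"a", "b"}, "c", 5]
           ; {Nest[a + b + # &, c0, 5], nestNet[<|"nest" -> c0, "static1" -> a, "static2" -> b|>]}
      ]
(* both entries should give 1.8 *)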

The Mandelbrot net

In order to construct a net suitable for NetNestPartialOperator, we merge the real and imaginary ports of iterNet into a single "complex" port.

zNet = NetGraph[{ReplicateLayer[1], ReplicateLayer[1], CatenateLayer[]}, {NetPort["1"] -> 1, NetPort["2"] -> 2, {1, 2} -> 3}];
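zNet simply stacks two length-n vectors into a 2×n array; a quick check (assuming the unspecified input shapes are inferred at evaluation time):

zNet[<|"1" -> {1., 2., 3.}, "2" -> {4., 5., 6.}|>]
(* {{1., 2., 3.}, {4., 5., 6.}} *)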

nestcoreNet = NetGraph[
                        {PartLayer[1], PartLayer[2], iterNet, zNet}
                        , {
                               (NetPort[#] -> NetPort[3, #]) & /@ StringSplit["cX,cY", ","]
                             , NetPort["z"] -> {1, 2}
                             , {{1, 2}, StringSplit["x,y", ","]} // MapThread[(#1 -> NetPort[3, #2]) &]
                             , {StringSplit["x,y", ","], StringSplit["1,2", ","]} // MapThread[(NetPort[3, #1] -> NetPort[4, #2]) &]
                          }
                      ]

nestcoreNet

Thus a full net computing the Mandelbrot set is constructed as follows:

ClearAll[mandelbrot]
mandelbrot[iterNum_Integer?Positive] :=
     NetGraph[{
                  zNet
                , ElementwiseLayer[0 # &]
                , NetNestPartialOperator[nestcoreNet, StringSplit["cX,cY", ","], "z", iterNum]
              }, {
                  {NetPort["cX"], NetPort["cY"]} -> 1 -> 2
                , {2, NetPort["cX"], NetPort["cY"]} -> 3
              }
             ]
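Before scaling up, a quick sanity check against the plain iteration (a sketch; the output is the 2×n array of Re and Im rows, agreeing with Nest up to the default Real32 rounding):

Block[{net = mandelbrot[3]},
      {net[<|"cX" -> {-1., 0.25}, "cY" -> {0., 0.}|>][[1]]
       , Nest[#^2 + {-1., 0.25} &, 0, 3]}
     ]
(* both rows should be {-1., 0.347656} *)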

Results

We showcase our mandelbrot net with 9 iteration steps over the complex region $\{-2-1.2I,\ 1+1.2I\}$, with an image resolution of $401 \times 501$. The network is executed on a GPU (NVIDIA GeForce GTX 1050 Ti) with "Real64" precision.

mandelbrotF = mandelbrot[9];
result =
  Module[{region = {-2 - 1.2 I, 1 + 1.2 I}, resol = 501, aspr},
         region = region // ReIm // Transpose
         ; aspr = region.{-1, 1} // Reverse // Apply[Divide]
         ; resol = resol {1, aspr} // Round
         ; {resol, region} //
             pipe[
                   MapThread[Rescale[Range[#1] // N // Rescale, {0, 1}, #2] &]
                   , Tuples
                   , AssociationThread[{"cX", "cY"}, #\[Transpose]] &
                   , AbsoluteTiming[mandelbrotF[#, WorkingPrecision -> "Real64", TargetDevice -> "GPU"]] &
                   , pipe[
                            branch[pipe[First, Quantity[#, "Seconds"] &, Echo[#, "Computing time on NN:"] &], Last]
                            , Last]
                   , Developer`ToPackedArray
                   , ArrayReshape[#, {2, Sequence @@ resol}] &
                   , {1, I}.# &
                   , Transpose, Reverse
                 ]
        ];

(* Computing time on NN: 3.08505 s *)
result // Dimensions
(* {401, 501} *)
result //
   pipe[
         branch[
                 pipe[ Abs
                     , pipe[
                             branch[Flatten /* HistogramTransform, Dimensions /* Last]
                           , Apply@Partition
                           ]
                     , Rescale, (1 - #)^5 &
                     ]
               , pipe[Arg, Sin, Rescale]
               ]
       , Map @ pipe[
                     Image[#, ImageSize -> Length[#]] &
                   , Colorize[#, ColorFunction -> "DarkColorFractalGradient"] &
                   ]
       ]

naive result

Note the highlighted periodic bulbs and Mandelbrot dendritic islands in the left image, and the pattern approximately following the external rays of the set in the right image.

As a comparison, here is the result over the same region, with the same iteration count and resolution, using the built-in MandelbrotSetPlot function.

AbsoluteTiming[
                MandelbrotSetPlot[{-2 - 1.2 I, 1 + 1.2 I}, MaxIterations -> 9, ImageResolution -> 501]
              ] //
     pipe[
           branch[pipe[First, Quantity[#, "Seconds"] &, Echo[#, "Computing time:"] &], Last], Last
         , #[[1, 1]] &, Reverse, Image[#, ImageSize -> Dimensions[#][[1]]] &
         ]

(* Computing time: 0.187943 s *)

MandelbrotSetPlot over (-2-1.2I, 1+1.2I)

Clearly, at this resolution scale MandelbrotSetPlot is much faster than our neural-net function. But at much larger resolutions our function will eventually win.

myMandelbrotSetPlotTiming[resolution_Integer] := 
 Module[ {region = {-2 - 1.2 I, 1 + 1.2 I}, aspr, resol}
       , region = region // ReIm // Transpose
       ; aspr   = region.{-1, 1} // Reverse // Apply[Divide]
       ; resol  = resolution {1, aspr} // Round
       ; {resol, region} //
            pipe[
                  MapThread[Rescale[Range[#1] // N // Rescale, {0, 1}, #2] &]
                , Tuples
                , AssociationThread[{"cX", "cY"}, #\[Transpose]] &
                , AbsoluteTiming[mandelbrotF[#, WorkingPrecision -> "Real64", TargetDevice -> "GPU"];][[1]] &
                , {resolution, #} &
                ]
       ]

myTimeTest = myMandelbrotSetPlotTiming /@ Range[101, 4001, 100];

NonlinearModelFit[myTimeTest, a r^k, {a, k}, r][r] //
  LogLogPlot[#, {r, 400, 5000}, PlotStyle -> Directive[Purple, AbsoluteThickness[2]], Frame -> True, FrameTicks -> All] & //
 Show[ {
         #
       , ListLogLogPlot[myTimeTest, PlotMarkers -> Graphics[{FaceForm[White], EdgeForm[{Orange, AbsoluteThickness[1]}], Polygon[CirclePoints[4]]}, ImageSize -> 7], PlotRange -> All]
       }
     , FrameLabel -> {"ImageResolution", "AbsoluteTiming"}
     ] &

time benchmark of NN based mandelbrot

Comparing with the previous benchmark of MandelbrotSetPlot, we can see that our NN-based function gains the advantage once the image resolution is large enough.
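The comparison figure below can be reproduced by overlaying the two fitted models (a sketch, reusing builtInTimeTest and myTimeTest from above):

Show[{
       LogLogPlot[
           Evaluate[NonlinearModelFit[#, a r^k, {a, k}, r][r] & /@ {builtInTimeTest, myTimeTest}]
           , {r, 400, 5000}
           , PlotStyle -> {Directive[Red, AbsoluteThickness[2]], Directive[Purple, AbsoluteThickness[2]]}
           , Frame -> True, FrameTicks -> All]
     , ListLogLogPlot[{builtInTimeTest, myTimeTest}, PlotMarkers -> "OpenMarkers", PlotRange -> All]
     }, FrameLabel -> {"ImageResolution", "AbsoluteTiming"}]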

time bench comparison


Mandelbrot set on neural network -- Avoiding overflow

Careful readers will find that larger iteration counts and/or a larger computing region make our naïvely implemented mandelbrot overflow. That is of course due to the $c$ values leading to divergence. One simple way to avoid the overflow is to constrain the nested $z$ to a bounded region at every iteration step.

The implementation

A constrained iteration core net

The simplest way to constrain the nested $z$ is to clip it into a rectangular region.

iterNet = Module[{c = cX + cY I, z = x + y I},
  z^2 + c // pipe[
                   ReIm, ComplexExpand, Echo[#, "{Re[z^2+c], Im[z^2+c]}:"] &
                 , Map@pipe[
(* The constraint: -> *)     Block[{escR = 3}, escR Clip[#/escR]] &
                           , Inactive[Function][{cX, cY, x, y}, #] &
                           , Activate
                           , ThreadingLayer
                           ]
                 , Inactive[NetGraph][#
                                         , {
                                                (NetPort /@ StringSplit["cX,cY,x,y", ","]) -> # & /@ {1, 2}
                                                , 1 -> NetPort["x"], 2 -> NetPort["y"]
                                           }
                                     ] &
                 , Activate
                 ]
  ]
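Here the added Block[{escR = 3}, escR Clip[#/escR]] clips every component of $z$ into $[-3, 3]$ (Clip with one argument clips to $[-1, 1]$); for instance:

With[{escR = 3}, escR Clip[#/escR] & /@ {-5., 1.5, 4.}]
(* {-3., 1.5, 3.} *)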

The Mandelbrot net

Re-evaluating the rest of the code gives us the simply constrained mandelbrotCons function.

nestcoreNet = NetGraph[
                        {PartLayer[1], PartLayer[2], iterNet, zNet}
                      , {
                          (NetPort[#] -> NetPort[3, #]) & /@ StringSplit["cX,cY", ","]
                        , NetPort["z"] -> {1, 2}
                        , {{1, 2}, StringSplit["x,y", ","]} // MapThread[(#1 -> NetPort[3, #2]) &]
                        , {StringSplit["x,y", ","], StringSplit["1,2", ","]} // MapThread[(NetPort[3, #1] -> NetPort[4, #2]) &]
                        }
                      ]
ClearAll[mandelbrotCons]
mandelbrotCons[iterNum_Integer?Positive] :=
     NetGraph[{
                zNet
              , ElementwiseLayer[0 # &]
              , NetNestPartialOperator[nestcoreNet, StringSplit["cX,cY", ","], "z", iterNum]
              },
              {
                {NetPort["cX"], NetPort["cY"]} -> 1 -> 2
              , {2, NetPort["cX"], NetPort["cY"]} -> 3
              }
             ]

Results

Now we can go for many more iteration steps over a larger region.

mandelbrotConsF = mandelbrotCons[200];
result =
  Module[{region = {-3 - 2 I, 1 + 2 I}, resol = 1001, aspr},
         region = region // ReIm // Transpose
         ; aspr = region.{-1, 1} // Reverse // Apply[Divide]
         ; resol = resol {1, aspr} // Round
         ; {resol, region} //
             pipe[
                   MapThread[Rescale[Range[#1] // N // Rescale, {0, 1}, #2] &]
                   , Tuples
                   , AssociationThread[{"cX", "cY"}, #\[Transpose]] &
                   , AbsoluteTiming[mandelbrotConsF[#, WorkingPrecision -> "Real64", TargetDevice -> "GPU"]] &
                   , pipe[
                            branch[pipe[First, Quantity[#, "Seconds"] &, Echo[#, "Computing time on NN:"] &], Last]
                            , Last]
                   , Developer`ToPackedArray
                   , ArrayReshape[#, {2, Sequence @@ resol}] &
                   , {1, I}.# &
                   , Transpose, Reverse
                 ]
        ];

(* Computing time on NN: 6.29684 s *)
result // Dimensions
(* {1001, 1001} *)
result //
   pipe[
         branch[
                 pipe[ Abs
                     , pipe[
                             branch[Flatten /* HistogramTransform, Dimensions /* Last]
                           , Apply@Partition
                           ]
                     , Rescale, (1 - #)^5 &
                     ]
               , pipe[Arg, Sin, Rescale]
               ]
       , Map @ pipe[
                     Image[#, ImageSize -> Length[#]] &
                   , Colorize[#, ColorFunction -> "DarkColorFractalGradient"] &
                   ]
       ]

constrained result

Or threshold $|z|$ at 2 to recover the classical Mandelbrot set:

result // Abs // UnitStep[2 - #] & // Image

classical Mandelbrot 200

Zoom-in

For zoomed-in regions, this clipped version gives interesting results.

result =
  Module[{region = {-0.65 + 0.47 I, -0.4 + 0.72 I}, resol = 1001, aspr},
         region = region // ReIm // Transpose
         ; aspr = region.{-1, 1} // Reverse // Apply[Divide]
         ; resol = resol {1, aspr} // Round
         ; {resol, region} //
             pipe[
                   MapThread[Rescale[Range[#1] // N // Rescale, {0, 1}, #2] &]
                   , Tuples
                   , AssociationThread[{"cX", "cY"}, #\[Transpose]] &
                   , AbsoluteTiming[mandelbrotConsF[#, WorkingPrecision -> "Real64", TargetDevice -> "GPU"]] &
                   , pipe[
                            branch[pipe[First, Quantity[#, "Seconds"] &, Echo[#, "Computing time on NN:"] &], Last]
                            , Last]
                   , Developer`ToPackedArray
                   , ArrayReshape[#, {2, Sequence @@ resol}] &
                   , {1, I}.# &
                   , Transpose, Reverse
                 ]
        ];

(* Computing time on NN: 6.06289 s *)
result // pipe[
                branch[
                        pipe[ Abs
                            , pipe[branch[Flatten /* HistogramTransform, Dimensions /* Last], Apply@Partition]
                            , Rescale, (1 - #)^5 &, Sin[\[Pi]/2 #] &, Rescale
                            ]
                      , pipe[Arg, Sin[#/2] &, Rescale]
                      ]
              , Map@pipe[ Image[#, ImageSize -> Round[Length[#]/2]] &
                        , Colorize[#, ColorFunction -> "StarryNightColors"] &
                        ]
              ]

zoom-in at (-0.65+0.47I, -0.4+0.72I)

Complex map

We can also stylize the result any way we want, say, showing the complex mapping in the manner of ComplexPlot.

Abs vs. Arg:

result // pipe[
                branch[
                        pipe[ Arg
                            , branch[
                                      pipe[Sin[#/2] &, Rescale]
                                    , pipe[
                                            Sin[50 #] &, ArcSin, Rescale, #^.5 &
                                          ]
                                    ]
                            ]
                      , pipe[
                              Abs
                            , Rescale, Log[10^-3 + #] &, Rescale
                            , branch[
                                      pipe[Sin[200 #] &, ArcSin, Rescale, #^.5 &]
                                    , pipe[#^5 &, Rescale[#, {0, 1}, {1, .3}] &]
                                    ]
                            ]
                      ]
              , Apply@Function[{arg2, abs2}, {arg2[[1]], arg2[[2]] abs2[[1]], abs2[[2]]}]
              , Image[#, ColorSpace -> "HSB", Interleaving -> False] &
              ]

Complex map: Abs vs Arg

Or Re vs. Im:

result // pipe[
                branch[
                      pipe[
                            ReIm, Transpose[#, {2, 3, 1}] &
                          , Map@pipe[
                                      Cos[100 2 \[Pi] #] &, Rescale, #^.2 &
                                    ]
                          ]
                      , pipe[
                              Abs
                            , Rescale, Log[10^-3 + #] &, Rescale
                            , branch[
                                      pipe[Sin[\[Pi]/2 #] &, Rescale]
                                    , pipe[#^5 &, Rescale[#, {0, 1}, {1, .5}] &]
                                    ]
                            ]
                      ]
              , Apply@Function[{reim, abs2}, {abs2[[1]], Times @@ reim, abs2[[2]]}]
              , Image[#, ColorSpace -> "HSB", Interleaving -> False] &
              ]

Complex map: Re vs Im

A different region

(The detailed code in this section is omitted in the post, but included in the attached notebook.)

Computing over a different region (-0.0452407411 + 0.9868162204352258 I + 2.7 10^-5 {-1 - I, 1 + I}) and fiddling with color palettes from ColorData["ThemeGradients"] gives us this wallpaper-like result (the ColorFunction used here is "M10DefaultDensityGradient").
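For reference, a minimal sketch of the computation over this region, following the same pattern as before (the styling code itself is in the attached notebook):

result =
  Module[{region = -0.0452407411 + 0.9868162204352258 I + 2.7 10^-5 {-1 - I, 1 + I}, resol = 1001, aspr},
         region = region // ReIm // Transpose
         ; aspr = region.{-1, 1} // Reverse // Apply[Divide]
         ; resol = resol {1, aspr} // Round
         ; {resol, region} //
             pipe[
                   MapThread[Rescale[Range[#1] // N // Rescale, {0, 1}, #2] &]
                   , Tuples
                   , AssociationThread[{"cX", "cY"}, #\[Transpose]] &
                   , mandelbrotConsF[#, WorkingPrecision -> "Real64", TargetDevice -> "GPU"] &
                   , Developer`ToPackedArray
                   , ArrayReshape[#, {2, Sequence @@ resol}] &
                   , {1, I}.# &
                   , Transpose, Reverse
                 ]
        ];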

zoom-in 2

Also the corresponding complex map plots:

complex maps: Abs vs Arg & Re vs Im


POSTED BY: Silvia Hao
9 Replies

Very nice! I like non-NN uses of the neural network functions :)

I made a simplified version of this concept:

Set some constants regarding the number of iterations, the frame bounds, and the resolution:

dims = {1000, 1000};
bounds = {{-2, 1}, {-1.5, 1.5}};
iterations = 200;

Create a network that runs one iteration:

stepnet=NetGraph[
    <|
        "re"->PartLayer[1],
        "im"->PartLayer[2],
        "sqre"->ThreadingLayer[#1^2-#2^2&],
        "sqim"->ThreadingLayer[2*#1*#2&],
        "catenate"->CatenateLayer[],
        "reshape"->ReshapeLayer[Prepend[dims,2]],
        "c"->ConstantPlusLayer["Biases"->{
            ConstantArray[N@Range[dims[[2]]]/dims[[2]]*(bounds[[1,2]]-bounds[[1,1]])+bounds[[1,1]],dims[[1]]],
            Transpose@ConstantArray[N@Range[dims[[1]]]/dims[[1]]*(bounds[[2,2]]-bounds[[2,1]])+bounds[[2,1]],dims[[2]]]}],
        "clip"->ElementwiseLayer[Clip[#,{-10000,10000}]&]
    |>,{
        NetPort["Input"]->{"re","im"}->{"sqre","sqim"}->"catenate"->"reshape"->"c"->"clip"->NetPort["Output"]
    },"Input"->Prepend[dims,2]]

step net

Then create a network that runs the single-step network repeatedly and calculates the squared norm at each resulting point:

net=NetGraph[<|
        "init"->ConstantArrayLayer["Array"->ConstantArray[0,Prepend[dims,2]]],
        "steps"->NetNestOperator[stepnet,iterations],
        "re"->PartLayer[1],
        "im"->PartLayer[2],
        "normsq"->ThreadingLayer[#1^2+#2^2&]
    |>,
    {"init"->"steps"->{"re","im"}->"normsq"->NetPort["Output"]}
]

net

Finally, evaluate the network:

Image@UnitStep[2^2 - net[]]

mandelbrot


Nice post! In my experience, many of the built-in fractal functions are not very efficient; a custom compiled function can do better. The fastest implementation would use CUDALink or OpenCLLink.

Clear[mandelbrot];
mandelbrot = Compile[{{X, _Real, 1}, y},
   Table[
    Module[{z, c},
     z = 0. I; c = x + y I;
     Do[z = z^2 + c, {9}];
     z
     ],
    {x, X}
    ], CompilationTarget -> "C", RuntimeOptions -> "Speed", 
   RuntimeAttributes -> {Listable}
   ];

resol = 4001;
Dimensions[data = mandelbrot[Range[-3., 1., 4/(resol - 1)],  Range[-2., 2., 4/(resol - 1)]]] // AbsoluteTiming
Colorize[ImageResize[Image[Exp[-Abs@data]], 300], ColorFunction -> "TemperatureMap"]

colorized result (Exp[-Abs@data] with "TemperatureMap")

Another implementation; on my PC, it's about 10 times faster than the built-in.

Clear[mandelbrot];
mandelbrot = Compile[{{X, _Real, 1}, y, maxIt},
   Table[
    Module[{i = 0, z, c},
     z = c = x + y I;
     While[i++ <= maxIt && Re[z]^2 + Im[z]^2 < 4, z = z^2 + c];
     i
     ],
    {x, X}],
   CompilationTarget -> "C", RuntimeOptions -> "Speed", RuntimeAttributes -> {Listable}
];

resol=4000;
iterNum=200;
region={-3-2I,1+2I};

{{x1, x2}, {y1, y2}} = N@{Re@region, Im@region};

AbsoluteTiming[
  Echo@AbsoluteTiming[data = mandelbrot[Range[x1, x2, (x2 - x1)/(resol - 1)], Range[y1, y2, (y2 - y1)/(resol - 1)], iterNum];];
  colorTable = With[{n = Max[data]},
    Developer`ToPackedArray@Table[If[i < n, List @@ ColorData["M10DefaultFractalGradient", (i - 1)/n], N@{0, 0, 0}], {i, n}]];
  Image[colorTable[[#]] & /@ data, "Real"]
]

MandelbrotSetPlot[region, MaxIterations -> iterNum, ImageResolution -> resol] // AbsoluteTiming

colorized iteration-count image

POSTED BY: Hongyang Cao

These are great examples of the diverse uses of the Neural Networks framework. Is it possible to access MXNet's automatic differentiation (autograd) facility from within Mathematica? Automatic differentiation (distinct from Mathematica's symbolic differentiation capabilities) is a foundational technology for much of Machine Learning these days and it would be great if this could be done from within Mathematica. In my opinion, this could be really transformative.

POSTED BY: Asim Ansari

Hi Murray. Unfortunately I don't have much knowledge about Macs, nor do I have access to a Mac computer. I heard the MXNet team has been working on supporting OpenCL, but it doesn't seem to be done yet. Maybe in the future we can export the model to other devices through neutral formats, say, ONNX.

POSTED BY: Silvia Hao

Hi Christopher, thanks very much for sharing your idea!

One major problem in both our networks is that they only support a predefined iteration step number $n$. For any new $n$ we'll need to re-generate the network and re-deploy it to the GPU, which is time-consuming. A possible workaround within current Mathematica's scope is to use NetTrain as the iterator, with a data-generating function as a hack to loop the result of each iteration back to its input ports. That way we should be able to perform a dynamic iteration, with a fancy progress chart as a bonus. This is just an idea though; if I can make it work I'll share my experience in follow-up posts here.

POSTED BY: Silvia Hao

Congratulations! This post is now featured in our Staff Pick column, as distinguished by a badge on your profile as a Featured Contributor! Thank you, keep it coming, and consider contributing your work to the Notebook Archive!

POSTED BY: EDITORIAL BOARD

Alas, the use of an NVIDIA GPU leaves out many Mac users, including this user of a current iMac.

Can the GPU-specific code be rewritten so as to use Radeon GPUs?

POSTED BY: Murray Eisenberg

The performance is impressive. Thanks for sharing!

POSTED BY: Silvia Hao

Hi Asim. Is NetPortGradient what you are looking for?
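For reference, a minimal sketch (NetPortGradient asks a net for the derivative of its output with respect to a given port; the scalar ElementwiseLayer here is just an illustrative assumption):

layer = ElementwiseLayer[Sin, "Input" -> "Real"];
layer[2., NetPortGradient["Input"]]
(* Cos[2.] ≈ -0.416147 *)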

POSTED BY: Silvia Hao