
Mandelbrot Set on Neural Network

POSTED BY: Silvia Hao
9 Replies

Very nice! I like non-NN uses of the neural network functions :)

I made a simplified version of this concept:

Set some constants regarding the number of iterations, the frame bounds, and the resolution:

dims = {1000, 1000};
bounds = {{-2, 1}, {-1.5, 1.5}};
iterations = 200;
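For reference, the mapping from a pixel {row, col} to its point c in the complex plane can be written as a plain function. This is just an illustrative sketch of the same grids the "c" layer below bakes in; the helper name cValue is my own:

cValue[row_, col_] := (bounds[[1, 1]] + (col/dims[[2]]) (bounds[[1, 2]] - bounds[[1, 1]])) +
    I (bounds[[2, 1]] + (row/dims[[1]]) (bounds[[2, 2]] - bounds[[2, 1]]))
(* e.g. cValue[dims[[1]], dims[[2]]] is 1. + 1.5 I, the top-right corner of the frame *)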

Create a network that runs one iteration of z -> z^2 + c. Writing z = x + I y, each step computes x' = x^2 - y^2 + Re[c] and y' = 2 x y + Im[c], which is exactly what the layers below do on a pair of real-valued grids:

stepnet = NetGraph[
    <|
        "re" -> PartLayer[1], (* x = Re[z] *)
        "im" -> PartLayer[2], (* y = Im[z] *)
        "sqre" -> ThreadingLayer[#1^2 - #2^2 &], (* Re[z^2] = x^2 - y^2 *)
        "sqim" -> ThreadingLayer[2*#1*#2 &], (* Im[z^2] = 2 x y *)
        "catenate" -> CatenateLayer[],
        "reshape" -> ReshapeLayer[Prepend[dims, 2]],
        (* add c: one bias grid of real parts, one of imaginary parts *)
        "c" -> ConstantPlusLayer["Biases" -> {
            ConstantArray[N@Range[dims[[2]]]/dims[[2]]*(bounds[[1, 2]] - bounds[[1, 1]]) + bounds[[1, 1]], dims[[1]]],
            Transpose@ConstantArray[N@Range[dims[[1]]]/dims[[1]]*(bounds[[2, 2]] - bounds[[2, 1]]) + bounds[[2, 1]], dims[[2]]]}],
        (* clip so diverged points stay finite instead of overflowing *)
        "clip" -> ElementwiseLayer[Clip[#, {-10000, 10000}] &]
    |>, {
        NetPort["Input"] -> {"re", "im"} -> {"sqre", "sqim"} -> "catenate" -> "reshape" -> "c" -> "clip" -> NetPort["Output"]
    }, "Input" -> Prepend[dims, 2]]

Then create a network that runs the single-step network repeatedly and calculates the squared norm at each resulting point:

net = NetGraph[<|
        "init" -> ConstantArrayLayer["Array" -> ConstantArray[0, Prepend[dims, 2]]], (* start every pixel at z = 0 *)
        "steps" -> NetNestOperator[stepnet, iterations], (* apply the one-step net `iterations` times *)
        "re" -> PartLayer[1],
        "im" -> PartLayer[2],
        "normsq" -> ThreadingLayer[#1^2 + #2^2 &] (* |z|^2 after the final iteration *)
    |>,
    {"init" -> "steps" -> {"re", "im"} -> "normsq" -> NetPort["Output"]}
]

Finally, evaluate the network; UnitStep[2^2 - |z|^2] marks a pixel white when its point has stayed within the escape radius of 2:

Image@UnitStep[2^2 - net[]]

[resulting image: the Mandelbrot set]
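If you prefer a smoother rendering than the hard UnitStep threshold, one variation (my own tweak, untested here) is to log-scale the squared norms before display:

Image@Rescale@Log[1. + net[]]

This maps the clipped, fast-growing exterior values into a gentler grayscale gradient.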

Posted 6 years ago
POSTED BY: Hongyang Cao

Hi Murray. Unfortunately I don't have any knowledge about Macs, nor do I have access to a Mac computer. I've heard the MXNet team has been working on supporting OpenCL, but it doesn't seem to be done yet. Maybe in the future we can export the model to other devices through vendor-neutral formats such as ONNX.

POSTED BY: Silvia Hao
Posted 6 years ago

These are great examples of the diverse uses of the Neural Networks framework. Is it possible to access MXNet's automatic differentiation (autograd) facility from within Mathematica? Automatic differentiation (distinct from Mathematica's symbolic differentiation capabilities) is a foundational technology for much of machine learning these days, and it would be great to have access to it. In my opinion, this could be really transformative.

POSTED BY: Asim Ansari

Hi Asim. Is NetPortGradient what you are looking for?
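For anyone who finds this later, here is a minimal sketch of NetPortGradient on a toy net (the net and the name toy are mine, not anything from this thread):

toy = NetInitialize@NetChain[{LinearLayer[4, "Input" -> 3], SummationLayer[]}];
toy[{1., 2., 3.}, NetPortGradient["Input"]] (* gradient of the scalar output w.r.t. the 3 inputs *)

This returns the reverse-mode (autograd-style) derivative of the output with respect to the given input port.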

POSTED BY: Silvia Hao

The performance is impressive. Thanks for sharing!

POSTED BY: Silvia Hao

Alas, the use of an NVIDIA GPU leaves out many Mac users, including this user of a current iMac.

Can the GPU-specific code be rewritten so as to use Radeon GPUs?

POSTED BY: Murray Eisenberg

Congratulations! This post is now featured in our Staff Pick column, as distinguished by a Featured Contributor badge on your profile! Thank you, keep it coming, and consider contributing your work to the Notebook Archive!

POSTED BY: EDITORIAL BOARD