About Mathematica's GPU roadmap

Posted 3 years ago

The world's computation is gradually shifting to GPUs and other SIMD devices, and if Mathematica doesn't adapt, it will end up cornered as a tool for toy computations. Are there plans to make it easy to use GPUs in Mathematica?

You can see the reason for this in the graph below.

[image: hardware performance-trend graph]

2006 witnessed the death of Dennard scaling, and since then most new performance growth has gone into parallel rather than sequential computation. You can now buy a gaming card (RTX 3090) with 36 teraflops of compute, compared to something like 200 gigaflops for a CPU, and the gap is growing. These SIMD flops have gotten so cheap that companies like Google are giving them out for free (https://colab.research.google.com/ runs things for free on a low-end GPU).

A concrete example is this issue: I couldn't find an easy way to move the computation to the GPU, so I had to reimplement it in Python. The changes needed to switch from CPU to GPU are minimal in modern numpy-like frameworks (e.g., JAX, CuPy, PyTorch): append ".cuda()" to some tensors and their computations automatically run on the GPU.

Mathematica could take a similar approach to gradually enter the realm of user-friendly GPU computation: first add an operation to move an array to the GPU and back, then implement GPU versions of the most common operations (e.g., matrix multiplication), then gradually expand to more operations as users request them.
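For what it's worth, the existing CUDALink package already implements a rough version of this pattern, just far less seamlessly than the numpy-like frameworks. A minimal sketch (requires a CUDA-capable GPU; the detail that CUDADot given CUDAMemory keeps its result on the device is my reading of the docs, so treat it as an assumption):

Needs["CUDALink`"]
If[CUDAQ[], (* only run when a CUDA device is available *)
  a = RandomReal[1, {1000, 1000}];
  gpuA = CUDAMemoryLoad[a];   (* host -> device *)
  gpuC = CUDADot[gpuA, gpuA]; (* matrix multiply on the GPU *)
  c = CUDAMemoryGet[gpuC];    (* device -> host *)
  CUDAMemoryUnload /@ {gpuA, gpuC}
]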

POSTED BY: Yaroslav Bulatov
5 Replies

BTW, the main downside of moving to Python is the lack of good visualization APIs. They keep changing, and the one that is stable (matplotlib) is very limiting.

POSTED BY: Yaroslav Bulatov

I would suggest the very versatile FunctionLayer.

I posted a detailed answer under your SE question. Basically, FunctionLayer can automatically compile normal WL code to a neural network, and the latter can then run on a GPU with a single option:

f = FunctionLayer @ Function[{mat, x},
      {Normalize[Tanh[mat . x[[1]]]], Normalize[Tanh[mat . x[[2]]]]}
    ]

f // NetExtract[#, "Net"] & // Information[#, "FullSummaryGraphic"] &

[image: auto-generated computation graph]

f[<|
    "mat" -> RandomVariate[NormalDistribution[0, 1/Sqrt[10]], {10, 10}]
    , "x" -> RandomVariate[NormalDistribution[], {2, 10}]
  |>
 , TargetDevice -> "GPU"
 ]
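For a quick sanity check (a sketch; the variable names mat0 and x0 are just for illustration), the layer's output should match direct evaluation of the original function up to single-precision rounding:

mat0 = RandomVariate[NormalDistribution[0, 1/Sqrt[10]], {10, 10}];
x0 = RandomVariate[NormalDistribution[], {2, 10}];
direct = {Normalize[Tanh[mat0 . x0[[1]]]], Normalize[Tanh[mat0 . x0[[2]]]]};
viaNet = Normal @ f[<|"mat" -> mat0, "x" -> x0|>];
Max @ Abs[viaNet - direct] (* expect ~10^-7: nets default to single precision *)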
POSTED BY: Silvia Hao

Oh, interesting! Now we just need a utility that would automatically rewrite the original Mathematica snippet into a GPU-runnable version :P

POSTED BY: Yaroslav Bulatov

Yes, exactly! And I imagine that's a job very suitable for a rule-substitution system like what we have in WL. :)
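Just to make the idea concrete, a toy sketch (the helper name gpuize is hypothetical, and it only handles pure functions that FunctionLayer can already compile; a real rewriter would need genuine rule substitution over the expression tree):

gpuize[fun_Function] := With[{net = FunctionLayer[fun]},
   Function[input, net[input, TargetDevice -> "GPU"]]
 ]

g = gpuize @ Function[{mat, x}, Normalize[Tanh[mat . x]]];
g[<|"mat" -> RandomReal[1, {10, 10}], "x" -> RandomReal[1, 10]|>]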

POSTED BY: Silvia Hao

OTOH, I just remembered a new feature: CompiledLayer has gained back-propagation support, which sounds promising:

[image: forward and backward passes of CompiledLayer]
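For reference, a minimal sketch of the feature (the typed-argument syntax is from my recollection of the docs, so double-check it): a CompiledLayer that squares its input, which can then sit inside a NetChain and be trained with NetTrain now that gradients flow through it.

layer = CompiledLayer[
   Function[Typed[x, TypeSpecifier["PackedArray"]["Real32", 1]], x^2]
 ];
layer[{1., 2., 3.}] (* -> {1., 4., 9.} *)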

POSTED BY: Silvia Hao