Mathematica v12.3: CUDA GPU still not working

I am using Mathematica v12.3 on an Aurora-Ryzen with a GeForce RTX 3090 and v11.3 of the NVIDIA CUDA toolkit. All the latest paclets and drivers are installed.

There are multiple reports of Mathematica not working properly with NVIDIA CUDA (cf. https://community.wolfram.com/groups/-/m/t/2141352). Unfortunately, these appear to have fallen on deaf ears at WR. This is a pity, as it leaves no option but to use Python or MATLAB, platforms on which deep learning applications work flawlessly.

This deficiency in GPU computation, together with the absence of any functionality for Reinforcement Learning, leaves the Wolfram Language trailing its competitors in data science and machine learning, an important field in which, I would argue, it should be at the forefront.

Mathematica appears able to process image data with NetTrain on the GPU, as reported in the prior post. However, when training on numerical data with NetTrain, or when building an anomaly detector with AnomalyDetection, it reverts to the CPU despite TargetDevice -> "GPU", as in the attached notebooks.

These can be used to test your own Mathematica/CUDA GPU setup and to report back any issues.
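For readers without the attachments, here is a minimal sketch of the kind of test at issue; the small net and random data below are illustrative stand-ins, not the contents of the attached notebooks. It trains on purely numerical data and compares wall-clock times for the two target devices:

(* illustrative numerical training data: 8-vectors mapped to 1-vectors *)
trainingDataTest = RandomReal[1, {5000, 8}] -> RandomReal[1, {5000, 1}];
netTest = NetChain[{64, Ramp, 64, Ramp, 1}];
(* compare CPU vs GPU wall-clock time; near-identical times suggest the GPU is not engaged *)
cpuTime = First@AbsoluteTiming[
    NetTrain[netTest, trainingDataTest, MaxTrainingRounds -> 3, TargetDevice -> "CPU"]];
gpuTime = First@AbsoluteTiming[
    NetTrain[netTest, trainingDataTest, MaxTrainingRounds -> 3, TargetDevice -> "GPU"]];
{cpuTime, gpuTime}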

POSTED BY: Jonathan Kinlay

> About numerical vs image data: There is absolutely no difference between them from the neural net perspective. Images are immediately turned into numerical data by NetEncoder["Image"].

That's what I figured, which is why I couldn't understand the apparent discrepancy.

However, I tested a version of your example using numerical data and the speed-up is indeed huge. So all seems to be well.

Thank you!
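For anyone following along, the quoted encoding point is easy to verify directly. A minimal sketch (the 32x32 encoder size and the random test image are arbitrary assumptions):

enc = NetEncoder[{"Image", {32, 32}}];
(* encoding a random 64x64 image yields a purely numeric array *)
arr = enc[RandomImage[1, {64, 64}]];
Dimensions[arr]

(* {3, 32, 32}: channels x height x width *)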

POSTED BY: Jonathan Kinlay

About problems with other GPUs: From a brief look at the previous thread, all I can see about neural net functionality besides the 3090 problem is macOS support (NVIDIA's and Apple's fault, not ours) and a complaint about a faulty 12.2 update, which we fixed a few days later with another update. I'm not going to comment on CUDALink because I'm not involved with it. I consider the GPU support on the ML side pretty solid: we've been successfully using NetTrain for our own internal projects on a variety of GPU models and machines (including AWS instances) for years. If you or any other user still have problems, please contact tech support.

About numerical vs image data: There is absolutely no difference between them from the neural net perspective. Images are immediately turned into numerical data by NetEncoder["Image"] and fed to the network as such. I ran your own example on CPU vs GPU on my laptop (Dell XPS 15, GTX 1650M), and the GPU is actually showing an improvement:

(* net and TrainingData are as defined in the example under discussion *)
t = AbsoluteTime[];
NetTrain[net, TrainingData, BatchSize -> 10000, TargetDevice -> "CPU"];
Print[AbsoluteTime[] - t];

24.876159

t = AbsoluteTime[];
NetTrain[net, TrainingData, BatchSize -> 10000, TargetDevice -> "GPU"];
Print[AbsoluteTime[] - t];

15.667683

With a larger net, the improvement is massive (don't set a large BatchSize here or memory will blow up):

(* 10000 random 4-vectors mapped to random 4-vectors *)
TrainingData =
  RandomReal[1, {10000, 4}] -> RandomReal[1, {10000, 4}];
(* three 500-unit hidden layers; the input size is inferred from the data *)
net = NetChain[{500, Ramp, 500, Ramp, 500, Ramp, 4}];

t = AbsoluteTime[];
NetTrain[net, TrainingData, MaxTrainingRounds -> 5, 
  TargetDevice -> "CPU"];
Print[AbsoluteTime[] - t];

7.083551

t = AbsoluteTime[];
NetTrain[net, TrainingData, MaxTrainingRounds -> 5, 
  TargetDevice -> "GPU"];
Print[AbsoluteTime[] - t];

0.654267

Do you get similar results for CPU vs GPU timings (especially with the second example)?
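If the two timings come out suspiciously close, one way to confirm whether the GPU is actually engaged is to watch it during training; a minimal sketch, assuming nvidia-smi is on the system PATH (standard for CUDA installs):

(* query GPU utilization and memory while NetTrain is running *)
RunProcess[{"nvidia-smi",
   "--query-gpu=utilization.gpu,memory.used", "--format=csv"},
  "StandardOutput"]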

POSTED BY: Jonathan Kinlay

Not off-topic at all - thanks for the heads-up!

POSTED BY: Jonathan Kinlay

Jonathan, sorry if this is off-topic, but in the dashboard you can check whether a post is an idea or a question from the icons on the left:

[Screenshot: post-type icons in the dashboard]

You can also choose to view only one of the two options:

[Screenshot: filtering the dashboard view by post type]

POSTED BY: Ahmed Elbanna

It was posted under "Share an idea", not "Ask a question".

Still, I suppose the obvious question would be: "When can we expect WR to remedy the ongoing issues with GPU functionality, which have been present since v12.0?"

And also: "When can we expect some Reinforcement Learning capability to be forthcoming?"

POSTED BY: Jonathan Kinlay

What is your question?

POSTED BY: Sander Huisman