Mathematica v12.3: CUDA GPU still not working

I am using Mathematica v12.3 on an Aurora Ryzen machine with a GeForce RTX 2090 and v11.3 of the NVIDIA CUDA toolkit. All the latest paclets and drivers are installed.

There are multiple reports of Mathematica not working properly with NVIDIA CUDA (see https://community.wolfram.com/groups/-/m/t/2141352). Unfortunately, these appear to have fallen on deaf ears at WR. This is a pity, as it means there is no option but to use Python or Matlab, platforms on which deep learning applications work flawlessly.

This deficiency in GPU computation, together with the absence of any functionality for reinforcement learning, leaves the Wolfram Language trailing its competitors in data science and machine learning, a field in which, I would argue, it should be at the forefront.

Mathematica appears to be able to process graphics data with NetTrain using the GPU, as reported in the prior post. However, when training on numerical data with NetTrain, or building an anomaly detector with AnomalyDetection, it reverts to the CPU despite TargetDevice -> "GPU", as for instance in the attached notebooks.

These can be used to test your own Mathematica/CUDA GPU set-up and enable you to report back any issues.
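For readers who want a quick check without the attachments, a minimal test along the same lines might look like this (the net and data here are my own illustration, not taken from the attached notebooks):

    (* Train a small net on purely numerical data and compare
       CPU vs GPU timings; if the GPU path is broken, the two
       timings will be similar and the GPU will sit idle. *)
    TrainingData = RandomReal[1, {10000, 4}] -> RandomReal[1, {10000, 4}];
    net = NetChain[{64, Ramp, 4}, "Input" -> 4];
    First@AbsoluteTiming[
      NetTrain[net, TrainingData, MaxTrainingRounds -> 2,
        TargetDevice -> "CPU"]]
    First@AbsoluteTiming[
      NetTrain[net, TrainingData, MaxTrainingRounds -> 2,
        TargetDevice -> "GPU"]]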

POSTED BY: Jonathan Kinlay

About numerical vs image data: There is absolutely no difference between them from the neural net perspective. Images are immediately turned into numerical data by NetEncoder["Image"]

That's what I figured, which is why I couldn't understand the apparent discrepancy.

However, I tested a version of your example using numerical data, and the speed-up is indeed huge. So all seems to be well.

Thank you!

POSTED BY: Jonathan Kinlay

About problems with other GPUs: By briefly going to the previous thread, all I can see about neural net functionality besides the 3090 problem is MacOS support (NVIDIA/Apple's fault, not ours) and a complaint about a faulty 12.2 update which we fixed a few days later with another update. I'm not going to comment on CUDALink because I'm not involved with it. I consider the GPU support on the ML side pretty solid: we've been successfully using NetTrain for our own internal projects on a variety of GPU models and machines (including AWS instances) for years. If you or any other user still have problems please contact tech support.

About numerical vs image data: There is absolutely no difference between them from the neural net perspective. Images are immediately turned into numerical data by NetEncoder["Image"] and fed to the network as such. I ran your own example on CPU vs GPU on my laptop (Dell XPS 15, GTX 1650M), and the GPU is actually showing an improvement:

t = AbsoluteTime[];
NetTrain[net, TrainingData, BatchSize -> 10000, TargetDevice -> "CPU"];
Print[AbsoluteTime[] - t];

24.876159

t = AbsoluteTime[];
NetTrain[net, TrainingData, BatchSize -> 10000, TargetDevice -> "GPU"];
Print[AbsoluteTime[] - t];

15.667683

With a larger net, the improvement is massive (don't set a large BatchSize here, or memory usage will blow up):

TrainingData = 
  RandomReal[1, {10000, 4}] -> RandomReal[1, {10000, 4}];
net = NetChain[{500, Ramp, 500, Ramp, 500, Ramp, 4}];

t = AbsoluteTime[];
NetTrain[net, TrainingData, MaxTrainingRounds -> 5, 
  TargetDevice -> "CPU"];
Print[AbsoluteTime[] - t];

7.083551

t = AbsoluteTime[];
NetTrain[net, TrainingData, MaxTrainingRounds -> 5, 
  TargetDevice -> "GPU"];
Print[AbsoluteTime[] - t];

0.654267

Do you get similar results for CPU vs GPU timings (especially with the second example)?
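The NetEncoder["Image"] point above can also be seen directly (a small illustration of my own, not from the original posts):

    (* An image is converted to a plain numeric tensor before it
       ever reaches the net's layers, so "image" nets train on
       numerical data too. *)
    enc = NetEncoder[{"Image", {28, 28}, ColorSpace -> "Grayscale"}];
    arr = enc[RandomImage[1, {28, 28}]];
    Dimensions[arr]  (* a rank-3 numeric array: channels, height, width *)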

  1. If you review the complete thread, you will see that users were advised, on more than one occasion, to try InstallCUDA[].

  2. Yes, the discussion ended with a review of issues relating to the RTX 3090 on WL12.2. But earlier in the discussion users (including myself) reported issues with several other GPU cards.

  3. In the discussion of the RTX 3090, I and another user were able to get the GPU working with v11.3 of the Toolkit, but only for nets that involve image processing. Our point was (and remains) that GPU support for the RTX 3090 does not (so far) appear to extend to numerical data.

  4. So the fact that NetTrain on LeNet (a net trained on image data) runs much faster (10x-11x) with TargetDevice -> "GPU" is not news (although the performance improvement is welcome).

  5. I can't find a more suitable example of NetTrain that uses purely numerical data. If you would like to suggest one, I would be happy to test it and report the results back here.

POSTED BY: Jonathan Kinlay

Not off-topic at all - thanks for the heads-up!

POSTED BY: Jonathan Kinlay

Jonathan, sorry if this is off topic, but in the dashboard, you can check if the post is an idea or post from the left icons

[screenshot: the dashboard's left-hand icons indicating post type]

You can also choose to view only one of the two options

[screenshot: the view filter for the two post types]

POSTED BY: Ahmed Elbanna

It's an option at the top of the page, when you create a new post. But I don't see where it shows up after posting.

This is running on Windows 10.

I no longer see a definitive list of supported GPU cards anywhere on the Wolfram web site.

This GPU should be supported because (1) it is supported by NVIDIA and (2) it is supported by MMA, assuming we can take the results of CUDAInformation[] and InstallCUDA[] at face value (I have also noted that GPU functionality works seamlessly with this card in Matlab).

If that is not the case, i.e. if this particular GPU is not supported by Mathematica for some reason, then that is an additional issue: either or both of the CUDAInformation and InstallCUDA functions should report a problem if the specific card is not supported. Either that, or we need a new function, CUDACompatibleQ[], to check for compatibility issues.
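A user-level version of such a check could be sketched as follows. CUDACompatibleQ is the hypothetical name proposed above; the sketch merely wraps the existing CUDALink functions CUDAQ and CUDAInformation, and is no substitute for a real per-card support check:

    Needs["CUDALink`"];
    (* Hypothetical helper: True only if CUDA is usable and
       device information can actually be queried. *)
    CUDACompatibleQ[] :=
      TrueQ[CUDAQ[]] && Quiet[Check[ListQ[CUDAInformation[]], False]]
    CUDACompatibleQ[]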

But again, from previous discussions, this is by no means the only GPU card experiencing difficulties with V12. See the previous post for details.

The purpose of posting here, rather than simply going to customer support, is that it gives other MMA users the opportunity to test their own configurations and publish the results. Hopefully that will give users more traction in getting WR to focus resources on the problem and deal with it.

POSTED BY: Jonathan Kinlay

Aah sorry, I do not see where "Share an idea" shows up after posting either. Not sure about either question, to be honest. Are you running Windows or Linux? And have you contacted them directly to ask whether or not your specific GPU is supported?

POSTED BY: Sander Huisman

It was posted under "Share an idea", not "Ask a question".

Still, I suppose the obvious question would be: "When can we expect WR to remedy the ongoing issues with GPU functionality that have existed since v12.0?"

And also: "When can we expect some reinforcement learning capability to be forthcoming?"

POSTED BY: Jonathan Kinlay

What is your question?

POSTED BY: Sander Huisman