Will Mathematica make use of Apple Silicon's GPU in the near future?

Posted 15 days ago | 1290 Views | 7 Replies | 11 Total Likes

With even the new Mac Mini being equipped with a GPU for neural computing (to use Apple's marketing spiel) and able to run TensorFlow models, what is the likelihood of Mathematica allowing access to the M1's GPU any time soon?

After a good bit of pulling the hair out of my head, I've just managed to configure a Mac Pro 4,1 with an NVIDIA CUDA-capable GPU to run Mathematica deep learning code, and I'm well impressed. It easily outperforms my much newer Mac Mini. So I'm gagging to see what performance the M1 will provide.
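For anyone curious, a comparison along these lines can be run with a minimal sketch like the one below. The throwaway network and synthetic data are purely illustrative, not a real benchmark; NetTrain's TargetDevice option is what routes training to a supported NVIDIA card.

    (* Minimal sketch: compare CPU vs. GPU training time on a
       throwaway classifier with synthetic data. *)
    net = NetChain[{LinearLayer[64], Ramp, LinearLayer[10], SoftmaxLayer[]},
       "Input" -> 784,
       "Output" -> NetDecoder[{"Class", Range[0, 9]}]];
    data = Table[RandomReal[1, 784] -> RandomInteger[9], 5000];

    (* CPU baseline *)
    First@AbsoluteTiming[
      NetTrain[net, data, MaxTrainingRounds -> 5, TargetDevice -> "CPU"]]

    (* GPU run; requires a supported NVIDIA CUDA card and driver *)
    First@AbsoluteTiming[
      NetTrain[net, data, MaxTrainingRounds -> 5, TargetDevice -> "GPU"]]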

7 Replies

Not a chance in the near future (the next few years?). There is no other technology comparable with NVIDIA CUDA.

Are you sure? The benchmarks I have seen for the new chips would indicate otherwise. Granted, those numbers are not direct comparisons of neural-net computations, but they probably indicate what is possible.

As I see it, the main sticking point is that the open-source software Wolfram uses for GPU acceleration of neural-net computations is dominated by NVIDIA. The trick, which I think could be done by a small group of people, would be to make a function-call-compatible library that uses Apple's technology.

This should be within the capabilities of Wolfram or even people in this community. At the very least, macOS and iOS (iPadOS) users would be able to do computations significantly faster than they can now. Even if it can't match NVIDIA, which is debatable, it would be orders of magnitude faster than just using the CPU.

Note that the M1 chip does not support eGPUs. It is possible that the chips that Apple ends up putting in the Mac Pro or iMac Pro will support these add-ons. Most of us don't have the deep pockets (or ample grant money) to invest in this solution, though.

I fooled around with this stuff back when Apple used NVIDIA. What was frustrating was that NVIDIA kept changing which cards were supported, so you could never depend on the resource being available. This is certainly still the case for people using Windows or Linux systems. The nice thing about Apple is that their APIs are abstracted from the hardware, so if the hardware changes (which it just did in a big way), the APIs do not.

I can understand the desire on the part of Wolfram to use cross-platform solutions whenever possible. However, Wolfram has, in the past, made use of specific hardware and OS functionality to fully exploit any advantages. Remember that the Notebook paradigm was available on the Mac (and NeXT) a long time before Windows, because versions of Windows before Win 95 were not very capable. In addition, Mathematica users on Macs could make full use of the 32-bit (and 64-bit) architecture while Windows computers were still hobbled by 16-bit CPUs. Now that we have a glimpse of what Apple Silicon can do, I hope that Wolfram Research will once again take advantage of hardware and OS opportunities as it has in the past.

I have not done this type of coding for some time, or I would be tempted to take on the task myself.

First of all, the current Mathematica support for CUDA is still very limited. Other tools (MATLAB, for example) provide significantly better GPU computing capabilities.

NVIDIA CUDA is not only a machine-learning engine; it is a highly optimized ecosystem of libraries that covers many domains of applied mathematics with excellent performance. The main aim of Wolfram Research should be to cover as many application domains as possible, not only machine learning, which happens to be very popular right now.

Moreover, especially for general high-performance computing, NVIDIA CUDA provides a truly unique programming tool set with excellent performance.
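You can already see a slice of that ecosystem from within Mathematica through the built-in CUDALink package. A minimal sketch, assuming a supported NVIDIA card and driver are installed:

    Needs["CUDALink`"]

    CUDAQ[]            (* True if a supported NVIDIA GPU is usable *)
    CUDAInformation[]  (* enumerate the available CUDA devices *)

    (* Two non-machine-learning domains CUDA accelerates: *)
    m = RandomReal[1, {2000, 2000}];
    CUDADot[m, m];                            (* dense linear algebra *)
    CUDAFourier[RandomComplex[1 + I, 2^18]];  (* fast Fourier transform *)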

As I said before, there is no comparable solution so far. From my point of view, Apple Silicon closes the door on GPU high-performance computing for Apple's customers.

Another question is how WRI will solve the lack of highly optimized Intel math libraries (MKL, IPP, ...) on ARM64 CPUs (Apple Silicon).
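One rough way to gauge this on any machine (a crude probe, not a rigorous benchmark) is to time a large dense matrix multiply, since nearly all of its time is spent in whatever BLAS the kernel links against:

    (* Crude probe: the timing of a large dense matrix multiply hints
       at whether an optimized BLAS is in use on this machine. *)
    m = RandomReal[1, {4000, 4000}];
    First@AbsoluteTiming[m . m;]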

Finally, Apple chose the complicated path of creating a brand-new computer without native x86-64 compatibility and without NVIDIA CUDA compatibility. I am afraid that, especially for Mathematica users on Apple Silicon, the next few years will be very frustrating. But maybe I am wrong and there is some secret solution...?!

I must apologize for getting this thread off on a tangent. Ultimately, the issue is not whether Apple's neural-net/GPU engine is better than NVIDIA's, but whether Mathematica will make full use of the hardware available to macOS (and probably iPadOS) users.

After what was, in my opinion, a slow start, Mathematica is making use of Metal (rather than relying on OpenGL), and the results are very good. This has permitted expansion into new areas, such as the very experimental ray tracing in the 12.2 beta.

All I am looking for, as a long-time user of Mathematica on the Macintosh, is that the software make full use of the available hardware. I don't care if NVIDIA is faster on different hardware, as long as I have decent hardware acceleration with my hardware.

Mathematica has been characterized as a Swiss Army knife. The analogy is not exact, since software is not subject to the same constraints as physical design. However, the analogy is apt in the following sense: it meets the needs of the vast majority of users who need to use some mathematical technique or computer-science magic without having to deal with the messy details. I look at the developers of Mathematica's functions as collaborators in a very real sense. Back in the dark ages, I filled that role when I worked in a research lab, but those days are long gone.

In the case of machine learning, I have no doubt that there are dedicated tools that do a better job, just as there are better tools for audio and image processing -- although there are some things that Mathematica can do that dedicated programs cannot, simply because of its access to mathematical functionality created for other purposes.

I am not concerned about the availability of optimized libraries similar to those for Intel (and NVIDIA). Worst case, the Rosetta 2 translation layer should be able to handle this functionality and still be faster than running on Intel. However, Apple has the resources to do what you suggest -- and they have already done so in previous transitions. Hardware evolves all the time. It can be painful, but that is what it is, living on the frontier. Perhaps it is because I have coded for nearly 50 years that I have a different perspective.

Bottom line

What I would like to see is that any function in the Wolfram Language that has an option like "UseGPU" will actually use the GPU on my Mac in the not-too-distant future. I don't care that I could get faster results by investing in different hardware (and spending the time and money to make it work). For almost everything else that I need my computer to do, Mathematica and the Wolfram Language give me the tools I need to do a much better job than I could do myself. It is only the lack of Wolfram's use of the GPU and related hardware technologies (Neural Engine, etc.) that is an issue.

I have spoken with several Mathematica users who use macOS, both within Wolfram Research and elsewhere, and this is something that all of them would like.

"What I would like to see is that any function in Wolfram language that has an option "UseGPU" will actually use the GPU on my Mac in the not too distant future. I don't care that I could get faster results by investing in different hardware (and spending the time and money to make it work). For almost everything else that I need my computer to do, Mathematica and Wolfram Language give me the tools I need to do a much better job than I could do myself. It is only the lack of Wolfram's use of the GPU and related hardware technologies (Neural engine, etc.) that is an issue."

Exactly.

Thanks. I should have led with that.

I hope someone from Wolfram management is looking at this thread.

Posted 5 days ago

Stephen noted that he uses a Mac Pro in his February 2019 entry on his blog. At that time, "Mac Pro" referred to the 2013 "trash can" model.

I could see him working quite well with a 16GB M1 Mac Mini with 2TB of storage. You can drive two 4K displays with that machine. It appears that Apple's swapping performance is improved in this version of the OS; coupled with the faster SSD access, that can compensate for the smaller RAM. As a bonus, you can get exactly the same processor/RAM/storage configuration in the M1 MacBook Air.

The thermals of this processor are astonishing. Many of the reviewers noted they went for days without recharging the battery. The MBA is built without a fan, but it takes many minutes of full load before it gets thermally throttled. Anyone who uses the current Intel processors to keep their lap warm will have to get a cat.

I hope the M1 becomes the de facto platform for Wolfram's technical staff. I'd love to see the Wolfram Engine take advantage of the M1's GPU and Neural Engine for outstanding performance on the Mac. The same code should then work on the iPhone and iPad.

POSTED BY: phil