
How-To-Guide: External GPU on OSX - how to use CUDA on your Mac

POSTED BY: Marco Thiel
31 Replies
Posted 4 years ago

As I mentioned, the best option would be for Wolfram to support Metal for general-purpose computing on GPUs (instead of just graphics for the notebook). That way Mathematica/Wolfram Language would become the best option for running GPU-accelerated neural nets on recent Macs. Even TensorFlow doesn't support that, so you're stuck using the cloud.

P.S. this is a mandatory step anyway when they eventually have to build an ARM-based Mac version. Same goes for a full iPad app.

POSTED BY: maxscience n

> P.S. this is a mandatory step anyway when they eventually have to build an ARM-based Mac version. Same goes for a full iPad app.

This is my concern as well. Wolfram was there at the WWDC keynote when the switch to Intel Macs was announced. (There are videos with Theo and Rob showing how the 'little switch' worked.) Since then, support for new Apple technologies has lagged behind -- they only just made the deadline for 64-bit apps with the front end. Although ARM-based Macs are presently just rumors, I think it is almost certain that this will happen.

I understand the desire to have a common code base for different platforms. In my opinion, leveraging the work that Apple has already done to make code hardware-independent -- specifically Metal in all its implementations -- outweighs this, and would provide a superior program for Mac users, certainly, and possibly also for Windows users.

I've read all the threads in this lengthy post, and more replies keep coming in. This topic is very important, so I wanted to summarize it with a conclusion for all my fellow machine learners (and devs, please correct me if I'm missing anything).

Here's the bottom line: do not buy an eGPU if you own a modern (2019+) Mac.

If you want to use NetTrain[..., TargetDevice -> "GPU"], then you have only 3 options:

  1. Buy a Linux box with an Nvidia GPU (pricey)

  2. Email WRI and ask them to add the ROCm version of MXNet as an alternative NetTrain backend; it currently runs very nicely on non-Nvidia GPUs! This shouldn't be too hard, since the MXNetLink` APIs are pretty clean. (unlikely)

  3. Email WRI about upgrading the front end to support remote (preemptive) MathLinks -- this would allow NetTrain to use remote GPUs from locally running notebooks. Currently, running a local notebook with dynamics listening to a remote kernel is possible, but extremely unstable! (more unlikely)

Notes:

  • Yes, options 2 and 3 are unlikely, but it will help if you do them anyway :)

  • You could, of course, upload a WolframScript file and data to a (p2/p3-class) EC2 machine and run it remotely on the command line... but really, don't do this. You would be giving up all the reasons (the dynamic, ergonomic notebook interface) for which you use Mathematica in the first place! At that point, just do yourself a favor and learn TensorFlow instead.

  • Don't even think about trying to use remote desktops. Running VNC over AWS (unless you are a tortoise or a sloth) would drive you mad with all the lag and super-low frame rates.
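
For concreteness, the kind of call this whole thread is about looks like the following. This is just a minimal sketch of mine -- the tiny sine-fitting net is illustrative, not anything from this thread -- and it errors out unless a supported NVIDIA GPU is found:

(* train a tiny regression net on the GPU; use TargetDevice -> "CPU" to sanity-check first *)
data = Table[x -> Sin[x], {x, 0., 2. Pi, 0.01}];
net = NetChain[{LinearLayer[32], Tanh, LinearLayer[1]},
   "Input" -> "Scalar", "Output" -> "Scalar"];
trained = NetTrain[net, data, TargetDevice -> "GPU"]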

POSTED BY: Michael Sollami

Thank you, Marco and Mike, this is extremely useful.

I fully agree that a desirable solution would be for Mathematica to support Apple GPU installations out of the box -- one of the reasons I love Mathematica is that I do not have to install a thousand libraries (or even compile them from scratch); things just work straight away.

However, I would be quite happy if there were just decent support for cloud-based GPUs. By this, I don't mean support for WolframScript (at least a step in the right direction) but support for full interactive Mathematica. I would love for Mathematica Online to offer the option to choose a GPU backend. I'd happily pay for it as long as the charges are reasonable (i.e., comparable to EC2 or Google).

Ultimately, if I had to choose between cloud GPU support and desktop GPU support, I would probably opt for the cloud. After all, I can't run a compute job that takes several days on my laptop!

POSTED BY: Bernd Meyer

Cloud support would be of primary use for large jobs. However, the GPU is, or could be, used for so many 'everyday' operations beyond simple graphics that you wouldn't want to have to call the cloud for each instance. There are more than 50 functions that can use a GPU if an acceptable one is found, and I for one would like to be able to use them out of the box on my Mac. And, as someone pointed out, it should be possible to implement parallel computing on the GPU as well as on the various cores of the CPU.
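
If you are curious which functions those are, you can list them yourself -- a quick sketch, assuming CUDALink loads on your machine (which, on a recent Mac without NVIDIA drivers, it will not):

Needs["CUDALink`"]
CUDAQ[]                   (* True if a supported GPU and drivers are present *)
Names["CUDALink`CUDA*"]   (* the CUDA-accelerated functions, e.g. CUDAImageConvolve *)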

As I pointed out, Apple has already done most of the heavy lifting with their various Metal frameworks, rendering the code hardware-independent for both macOS and iOS (iPadOS) for the long term. It is simply a matter of prioritization at Wolfram whether Mathematica will be able to use the technology on these platforms.

I have heard or read several times recently that Wolfram is seeking feedback to help set priorities. The more people who voice this as a desirable feature, the more likely we will be to see it implemented. Unfortunately, my main venue for schmoozing is the WTC, which will be online only this year.

Posted 4 years ago

Indeed, and the iPad Pro today is already more powerful than quite a few notebooks... I don't think Mac users are a minority for Wolfram; they're a substantial percentage of the user base. Even Mathematica 1.0 was released for the Mac first.

Currently TensorFlow and other popular machine learning frameworks do not support Metal (aside from Apple's Core ML). This means that in Jupyter notebooks, for example, you are forced to use the cloud to compute on a GPU. Given that Wolfram wants to increase the usage of its language in data science, Metal support should be a no-brainer, so I'm surprised Wolfram hasn't prioritized this.

POSTED BY: maxscience n
Posted 4 years ago

Given the Mac Pro and the latest powerful MacBook Pros, Wolfram should implement Metal support so that we can get GPU acceleration on modern hardware without NVIDIA GPUs... Any word on this from Wolfram? They also advertise OpenCL compatibility; that framework was an open-source effort from Apple to abstract away CUDA so that it would also work on non-NVIDIA GPUs... I guess it's Metal now. The documentation at https://reference.wolfram.com/language/OpenCLLink/tutorial/Setup.html#10897850 looks outdated. Given that Apple will likely add/switch to ARM processors even for Macs in a year or two, I hope Wolfram gets up to speed on adopting Metal ASAP.
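
For anyone who wants to check whether the advertised OpenCL path still works on their machine, a quick sketch:

Needs["OpenCLLink`"]
OpenCLQ[]            (* True if a usable OpenCL driver and device are found *)
OpenCLInformation[]  (* details about the detected platforms and devices *)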

POSTED BY: maxscience n

I agree 100%.

Apple, which has a lot more resources than Wolfram, has already done the heavy lifting in abstracting the hardware for the GPUs. Further, I doubt that the open-source framework that Wolfram is using for GPU acceleration of neural networks will ever escape the thumb of NVIDIA.

I realize that macOS users are a minor component of Mathematica users, but they are a significant minority. If you add in iOS users (I hope that a native Mathematica for iOS will eventually be released), the number of people who could benefit from fully integrating Metal into Mathematica would be significant.

Wolfram has already made a good start by using Metal for graphics. It is time to take the next step.

Mathematica 12.1 does support Metal. From John Fultz's WTC 2019 presentation:

> Metal for macOS (upgraded from OpenGL 4.1)

POSTED BY: Eric Smith

Right. But this is Metal for graphics rendering, not Metal for machine learning, etc. Apple expanded the domain of their 'Metal' technology. So, although graphics rendering is much improved, Mathematica does not make use of any of the neural-network stuff.

Posted 4 years ago

Not only that, but Wolfram could use Metal (so, GPUs) for a lot of parallel-processing functions as well (like Parallelize, ParallelEvaluate, etc.)... Right now it looks like Mathematica's parallelism is only about kernels running on CPU cores.
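
For contrast, this is what that CPU-only parallelism looks like today -- a minimal sketch; the subkernels it launches are ordinary CPU kernels, and no GPU is involved:

LaunchKernels[];  (* start one subkernel per available CPU core *)
ParallelTable[PrimeQ[2^p - 1], {p, 1, 500}]  (* work is distributed across the CPU subkernels *)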

POSTED BY: maxscience n

Since Nvidia drivers are no longer supported in macOS 10.15, I assume this no longer works -- or am I mistaken?

POSTED BY: Michael Sollami

Dear Mike,

it is true that the newer versions of OSX do not support Nvidia drivers. I run GPUs only under older versions of OSX. I have had to downgrade one computer to make it work, which is a pain because of a new chip that makes downgrading difficult.

I also use GPUs on OSX extensively for teaching.

In the end it works, as long as you can work on an older system (or dual boot, etc.).

Best wishes,

Marco


POSTED BY: Marco Thiel
Posted 5 years ago

Set the environment variables LD_LIBRARY_PATH, LIBRARY_PATH and CPATH to the directory extracted from the download.

If needed, separate multiple directories with : as in the PATH environment variable.

# location of the extracted cuDNN download (adjust to your path)
export CUDNN_ROOT=/home/YourUserName/libs/cudnn
# let the dynamic linker find the cuDNN shared libraries at run time
export LD_LIBRARY_PATH=$CUDNN_ROOT/lib64:$LD_LIBRARY_PATH
# let the compiler find the cuDNN headers
export CPATH=$CUDNN_ROOT/include:$CPATH
# let the linker find the cuDNN libraries at build time
export LIBRARY_PATH=$CUDNN_ROOT/lib64:$LIBRARY_PATH
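
Note that these variables are only visible to the Wolfram kernel if it is launched from a shell where they are set. You can check from inside Mathematica -- a quick sketch:

Environment["LD_LIBRARY_PATH"]  (* should contain your .../cudnn/lib64 directory *)
Environment["CPATH"]            (* should contain your .../cudnn/include directory *)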

To uninstall CUDA, run the uninstall script in /usr/local/cuda/bin (the exact name depends on your CUDA version):

/usr/local/cuda/bin/uninstallxxx
POSTED BY: Clem Harvey

Is it necessary to get CUDALink working and to run CUDAResourcesInstall[] if I only need TargetDevice -> "GPU" in NetTrain, but never any functions from the CUDALink package? Does NetTrain depend on CUDALink, or are they separate?

Marco, thank you for all of this very usable information. Following these instructions I was easily able to get a similar setup working; the main difference is that my GPU is an NVIDIA GTX 1080 Ti. I'm posting some benchmarks for GPU comparison (and some details about the setup at the end).

The network training task came in at about 2 minutes 17 seconds with the 1080 Ti:

[NetTrain benchmark screenshot]

The ImageConvolve was still slower with CUDA, but not a lot slower:

[ImageConvolve benchmark screenshot]
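
If you want to reproduce this kind of comparison, a minimal sketch (the test image and kernel are arbitrary choices of mine, not the ones behind the screenshots):

Needs["CUDALink`"]
img = ExampleData[{"TestImage", "Lena"}];
kern = GaussianMatrix[10];
AbsoluteTiming[ImageConvolve[img, kern];]      (* CPU *)
AbsoluteTiming[CUDAImageConvolve[img, kern];]  (* GPU via CUDALink *)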

I was not successful in growing a Mandelbulb, but I didn't put any real effort into trying to troubleshoot this.

Thanks again, I never would have attempted this had you not documented your setup... best wishes... Jan

PS: On a related note, Apple appears to be moving towards supporting external GPUs in the upcoming High Sierra release, but apparently only on computers that support Thunderbolt 3.

PPS: Some technical details:

Computer: MacBook Pro (Model Identifier: MacBookPro11,4), Intel Core i7, 2.2 GHz

eGPU: I have a BizonBox 2S connected by Thunderbolt 2. I first downgraded the Command Line Tools. Then I followed the BizonBox instructions (including having an external monitor plugged into the GPU via HDMI). Then I followed Marco's instructions for the Wolfram setup. I only experienced one minor setback: CUDAResourcesInstall[] crashed the Wolfram kernel the first time I tried it, but worked fine after launching a new kernel.
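
For anyone repeating this, the Wolfram-side verification boils down to a few calls -- a sketch:

Needs["CUDALink`"]
CUDAResourcesInstall[]  (* downloads the CUDA resources paclet on first run *)
CUDAQ[]                 (* True once the eGPU is visible to CUDALink *)
CUDAInformation[]       (* driver, toolkit, and device details *)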

POSTED BY: Jan Segert

Marco, thank you for this post. I assume the instructions will get a lot simpler once everyone switches to macOS High Sierra, which natively supports eGPUs:

https://9to5mac.com/2017/06/07/hands-on-macos-high-sierra-native-egpu-support-shows-promise-video/

POSTED BY: Eric Smith

Hi Eric,

yes, it sounds as if this might get easier. I suppose the problem with the Command Line Tools will persist, though. As soon as I can get a final version of High Sierra, I will try it out and report back to this Community.

Cheers,

Marco

POSTED BY: Marco Thiel
Posted 7 years ago

Thank you, Marco! I used your instructions to get a BizonBox 2S successfully working on my MacBook Pro.

Some glitches I ran into that I'll mention in case they come up for others:

  1. I had trouble getting the Bizon to activate, and then getting the Nvidia control panel to find the Bizon. Sometimes actions would work and other times not. After experiments and consultation with support, I replaced the long Thunderbolt cable supplied with the Bizon with a shorter (three-foot) one. That solved many of the problems.

  2. I still had difficulty getting the Mac to see the box. Support said that the Mac should be off when the box is connected or disconnected. That helped.

  3. Finally, I couldn't get CUDA recognized as installed by the Nvidia app or Mathematica. The last thing I did before it worked was plug a display into the Bizon -- then everything started working. The display didn't stay plugged in, and I don't need it now, but it seemed to be needed to initialize something.

All of this involved a lot of on/off cycles, rebooting, and trying different things, so I'm not sure all of the above is necessary, but in the end it got mine working, following your instructions.

Thanks again, Marco -- I'm not sure I would have stuck it out without knowing there was light at the end of the tunnel.

Mike

POSTED BY: Updating Name

Dear Mike,

thank you for your nice words. I am glad if some of what I wrote helped.

You are right that it sometimes takes rerunning some steps several times, and some rebooting, to make it work. On a "clean" Mac the instructions appeared to work, but after trying this on many Macs now, I find there is often some rebooting required. Also, when you run an update of the OS, you might have to perform some of the instructions again.

On the bright side, I have gotten the GPU to work on all Macs we have tried so far. The script on this page: https://github.com/goalque/automate-eGPU sometimes seemed to make a difference, particularly after an OS update. Also, OSX regularly wants to update the downgraded Command Line Tools, which is a bit annoying.

Best wishes,

Marco

POSTED BY: Marco Thiel
Posted 7 years ago

A good source of current information on eGPUs: http://barefeats.com/

POSTED BY: David Proffer
Posted 7 years ago

Has anyone set up an eGPU with Windows?

POSTED BY: Diego Zviovich

Congratulations! This post is now a Staff Pick! Thank you for your wonderful contributions. Please, keep them coming!

POSTED BY: Moderation Team

Dear Wolfram Team,

I am very glad and thankful that you reacted so quickly to the comments about GPU access on Macs. Having access to this framework opens up many possibilities in research and teaching. I appreciate that you sorted this out so swiftly and efficiently.

Thank you,

Marco

POSTED BY: Marco Thiel

Besides the BizonBox (which comes with support), there are also a couple of other, cheaper DIY options available, which have been reviewed on https://egpu.io/news/. For more eGPU benchmarks (not Mathematica), see http://barefeats.com.

POSTED BY: Arno Bosse

Marco,

Awesome post! I was just looking into doing this.

What is the reason for downgrading the Command Line Tools? If you do not downgrade, can you still run the built-in neural net functions (without using the compiler)?

Thanks

POSTED BY: Neil Singer

Dear Neil,

The downgrading is, strictly speaking, not necessary if you only want the Wolfram Language's machine learning and other functions that do not require compilation.

If you have the latest Command Line Tools, you see something like this,

[screenshot of a CUDA compilation warning]

but with "The Version ('80300')" or so. It is a warning that the compilation failed. It is not a Mathematica/Wolfram Language problem. If you followed the instructions in the OP, you will have generated the folder

/Developer/NVIDIA/CUDA-8.0/samples/2_Graphics/Mandelbrot/

and you could try to use "make" to compile it; that will fail unless you have downgraded the Command Line Tools. See also this discussion here.

The process needs the command-line C compilers, and there is an incompatibility, I think.
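
In other words: NetTrain with TargetDevice -> "GPU" uses precompiled binaries, while anything that compiles a kernel at run time needs a working host C compiler. The kind of call that breaks with mismatched Command Line Tools looks like this -- the addTwo kernel is the standard example from the CUDAFunctionLoad documentation:

Needs["CUDALink`"]
(* compiling a custom kernel invokes nvcc plus the host C compiler;
   this is the step that fails without the downgraded Command Line Tools *)
code = "
  __global__ void addTwo(mint *arr, mint len) {
    int index = threadIdx.x + blockIdx.x*blockDim.x;
    if (index < len) arr[index] += 2;
  }";
addTwo = CUDAFunctionLoad[code, "addTwo", {{_Integer, _, "InputOutput"}, _Integer}, 256];
addTwo[Range[10], 10]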

Best wishes,

Marco

POSTED BY: Marco Thiel

Dear Marco, Thank you for a very informative post. You had responded to a question about downgrading the command line tools with:

"the downgrading is strictly speaking not necessary if you only want the Wolfram Language's Machine Learning and functions that do not require compilation."

I was wondering if you had tried NetTrain without the downgrade of the Command Line Tools to 7.2. Thanks... Jan

POSTED BY: Jan Segert

That looks really neat! I had no idea there was such a large speed-up! Which GPU do you have inside your BizonBox? Never mind -- I see it in the screenshot. I'm thinking about buying one...

POSTED BY: Sander Huisman

Hi Sander,

Yes, I've got the Titan X. I do not have comparative benchmarks with the other ones, though.

For me it was definitely worth buying the boxes -- and I am lucky that Wolfram reintroduced support for them. I wouldn't say that I am particularly good at CUDA (quite the opposite), but I could make some code run substantially faster, which was really important for a project of mine.

Note that you can also buy the BizonBox without the GPU, so if you have a spare one lying around, you can (most likely) use that one.

Cheers,

Marco

POSTED BY: Marco Thiel