Message Boards

NetTrain TargetDevice GPU error

Posted 8 years ago

When I add the option TargetDevice -> "GPU" to NetTrain in Mathematica 11.0.0.0, I get the error 'First argument to NetTrain should be a fully specified net'. Is this broken in 11.0.0.0?
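
For reference, a minimal sketch of the kind of call I mean: a small, fully specified net with synthetic data (the names, layer sizes, and data here are arbitrary, not my actual net):

net = NetChain[{LinearLayer[10], SoftmaxLayer[]},
   "Input" -> 4,
   "Output" -> NetDecoder[{"Class", Range[10]}]];
data = Table[RandomReal[1, 4] -> RandomInteger[{1, 10}], {200}];
NetTrain[net, data, TargetDevice -> "GPU"]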

47 Replies
Posted 4 years ago

Given the Mac Pro and the latest powerful MacBook Pros, etc., Wolfram should implement Metal support so that we can get GPU acceleration even on modern hardware without NVIDIA GPUs. Any word on this from Wolfram? They also advertise OpenCL compatibility; that framework was an open-source effort initiated by Apple to abstract away vendor-specific GPU computing (such as CUDA) so that it would also work on non-NVIDIA GPUs. I guess it's Metal now. The documentation at https://reference.wolfram.com/language/OpenCLLink/tutorial/Setup.html#10897850 looks outdated. Given that Apple will likely add or switch to ARM processors even for Macs in a year or two, I hope Wolfram gets up to speed on adopting Metal ASAP. On top of that, the processor in the iPad Pro is already more powerful than some Mac notebooks (GPUs included).

POSTED BY: maxscience n

Hi everyone,

I have just posted a CUDA how-to for Mac users. When training neural networks I see a speed increase of more than 300x with respect to CPU computation.
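
To give a rough idea of the kind of comparison I ran, here is a sketch; the small net and random data are only placeholders, not my actual benchmark:

net = NetChain[{LinearLayer[128], Ramp, LinearLayer[2], SoftmaxLayer[]},
   "Input" -> 64, "Output" -> NetDecoder[{"Class", {0, 1}}]];
data = Table[RandomReal[1, 64] -> RandomInteger[{0, 1}], {5000}];
cpu = First@AbsoluteTiming[
    NetTrain[net, data, MaxTrainingRounds -> 2, TargetDevice -> "CPU"]];
gpu = First@AbsoluteTiming[
    NetTrain[net, data, MaxTrainingRounds -> 2, TargetDevice -> "GPU"]];
cpu/gpu  (* rough speed-up factor *)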

Cheers,

Marco

POSTED BY: Marco Thiel
Posted 7 years ago

Having just updated to MM 11.1, I found that GPU training on Mac OS X fails without any announcement or description in the documentation of NetTrain[]. As for other users in the forum, this also makes it difficult for our organization to use Mathematica as a training platform for machine learning. In our algorithm development department we have mainly 15" MacBook Pro Mid/Late 2013 notebooks deployed, all with GPUs, which are also used for CUDA/OpenCL programming within Mathematica (I still have to check that CUDALink and OpenCLLink both function in MM 11.1; see the quick checks sketched below). Consistent GPU support across all platforms, including CUDA where available and OpenCL (which works fine with many-core CPUs or AMD GPUs), is a great plus for Mathematica.
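
For reference, a minimal sketch of the checks I have in mind, using the standard CUDALink/OpenCLLink queries:

Needs["CUDALink`"]
CUDAQ[]              (* True if CUDALink can use an NVIDIA GPU *)
CUDADriverVersion[]  (* installed NVIDIA driver version *)

Needs["OpenCLLink`"]
OpenCLQ[]            (* True if OpenCLLink is functional (NVIDIA, AMD, or CPU devices) *)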

POSTED BY: Tobias Kramer

Recently the moderators have had to remove posts or parts thereof. These actions involve posts that strayed either outside the scope of this forum or outside the bounds of propriety. We ask that posters refrain from comments that fall outside such bounds. Please make sure you are familiar with the forum rules: http://wolfr.am/READ-1ST

POSTED BY: Moderation Team

Another cloud option everyone can use is this.

Running Mathematica on the Cloud with Rescale.

Video: https://support.rescale.com/customer/en/portal/articles/2651932-running-mathematica-on-the-cloud-with-rescale

POSTED BY: Manjunath Babu

It is not available on my MacBook Pro because it doesn't have a supported (NVIDIA) GPU. My NVIDIA 1080 card on my PC works well.

POSTED BY: David Johnston

I suppose you are running a custom Linux machine rather than a normal out-of-the-box Mac. I guess that is one way to get a GPU on Mac OS, but it won't solve your CUDA SDK problem. That is probably something you should disclose, rather than making it seem like anyone who thinks GPU support wouldn't normally be available on Macs is an idiot.

That is very cool that you had it working on 11.0. Congrats on that. Last summer I was not able to use the GPU on either my iMac or my MacBook Pro. Wolfram Support told me that most Macs don't generally have GPUs, so I figured it was a general truth. I found a couple of external GPU options, but none were really more cost-effective than just buying a new Alienware with a 1080 GPU, which I did.

POSTED BY: David Johnston
Posted 7 years ago

It seems that the GPU capability of the neural network framework is not supported in 11.1; this works fine in 11.0 on OS X. (MMA 11.1: no GPU support on OS X.)

POSTED BY: David Proffer

After Apple failed to provide any new Macs with NVIDIA GPUs in its latest update round, we made the decision that it would not be worth the development time for us to continue supporting GPU training for the few older Mac models that have NVIDIA GPUs (I have one myself), when none of the last 3 generations of Macs have any NVIDIA GPUs. So we have unfortunately deprecated GPU support for neural networks on OS X.

Apologies for the inconvenience this has caused.

Posted 7 years ago

I understand the challenges, but this was a rather unfortunate way to announce it. People were strung along in the betas, there was no official statement before or at the 11.1 release, your premier support portal was down yesterday after the 11.1 release so there was no way to ask as a paid customer, and Apple and NVIDIA still fully support a set of NVIDIA GPUs on OS X.

POSTED BY: David Proffer

Does this mean that OSX users who previously used NVIDIA cards with an eGPU over Thunderbolt (e.g. Akitio Node or BizonBox) will no longer be able to do so in v11.1? I was planning on purchasing such a setup myself (c. $550 for an Akitio Node and a used GeForce 980 Ti 6GB). Update: Never mind, I see now from the context that the answer is likely 'yes': this won't work anymore.

POSTED BY: Arno Bosse
Posted 7 years ago

Unfortunately correct; you can cross Thunderbolt GPUs under MMA 11.1 off the list. I am not saying Apple and NVIDIA will release a Pascal GPU on Thunderbolt, but I've not seen a more elegant or functional scientific workstation architecture than a 5K Retina iMac with Thunderbolt GPUs running Mathematica 11.0.1. The pictured little Mac Mini's USD 150 GeForce 950 GPU in an external Thunderbolt enclosure outperforms the unit's Intel i7 by 60-fold on Mathematica 11.0 MXNet computations. Yes, I have a 1070 GPU in a remote kernel on Linux providing similar performance, but far less elegantly and with far higher systems-integration costs. The state of the Apple/NVIDIA relationship is not Wolfram's fault, but abandoning the platform that enabled MMA to expand as successfully as it did is disappointing and short-sighted (not fully supporting neural networks on a platform is endgame for that platform).

NVIDIA Thunderbolt GPUs seem to be a solid solution and direction under OS X for deep learning and other GPU-accelerated computations; just remove Mathematica from the equation :-( ..... The chart in the lower right is hopefully understood at Apple.

Attachments:
POSTED BY: David Proffer

Oh, what an unfortunate choice! I just bought 4 BizonBoxes at the beginning of this year only because of the neural network framework of Mathematica. It worked really nicely with MMA 11.0.1 on OSX and opened up the large and powerful new expansion of the Wolfram Language. This decision cuts all OSX users off from a large chunk of the new development in the Wolfram Language.

I also do not think that it is just the few old OSX legacy devices with GPUs that you stop supporting; it also cuts OSX users off from other techniques for accessing the neural network framework. It appears that much of the development of the Wolfram Language goes in the direction of machine learning, and GPUs are crucial for that. Training times on my networks went down from a day to a couple of minutes.

I believe that there were several solutions available so that OSX users could make use of this exciting new development in the Wolfram Language, but this decision seems to cut all of us off, which I think is unfortunate.

Best wishes,

Marco

POSTED BY: Marco Thiel

@Marco Thiel, @David Proffer, @Arno Bosse: We are looking into a solution to this. I will have an update for you within a week.

@Marco Thiel, @David Proffer, @Arno Bosse, @Tobias Kramer : we have decided to resume GPU support for OSX. We are working on a paclet update that we are hoping to release soon. Apologies again for the inconvenience caused!

That is excellent news — thank you! I look forward to hearing from @Marco Thiel and others once the paclet is released how well this works with external GPUs. With the release of the NVIDIA 1080 Ti, lots of people will be selling their old cards. It's a good time to pick up a still very fast one-, two-, or three-year-old card on eBay and drop it into an Akitio Node, for example.

POSTED BY: Arno Bosse

Dear Sebastian,

Thank you very much for that helpful reply. I will try to find some time to post some easy instructions on how to make the BizonBox work with Mathematica and show some benchmarks.

Thanks a lot,

Marco

EDIT: Sorry about that; I have fixed it. Actually, it works better now than in 10.4: one step I used to perform is no longer needed, and performing it was what caused the trouble. It now works out of the box.

POSTED BY: Marco Thiel
Posted 7 years ago

FYI... April 4, 2017 - Apple pushes the reset button on the Mac Pro https://techcrunch.com/2017/04/04/apple-pushes-the-reset-button-on-the-mac-pro/

From the article (Matthew Panzarino, @panzer): I ask him what Apple's philosophy on external GPUs is. "I think they have a place," he [Craig Federighi] says, and leaves it at that.

POSTED BY: David Proffer
Posted 7 years ago

FYI: Nvidia announces new Titan Xp GPU along with upcoming beta Pascal drivers for the Mac https://9to5mac.com/2017/04/06/nvidia-titan-xp-beta-pascal-drivers-mac/ Update: We have reached out to Nvidia for a statement about compatibility down the line with lesser 10-series cards, and I'm happy to report that Nvidia states that all Pascal-based GPUs will be Mac-enabled via upcoming drivers. This means that you will be able to use a GTX 1080, for instance, on a Mac system via an eGPU setup, or with a Hackintosh build. Exciting times, indeed.

POSTED BY: David Proffer

Dear Sebastian,

It is great to see that the GPU framework is enabled for Mac users again. I was just wondering about one thing. As I was preparing a post with instructions on how to use the BizonBox with Mathematica, I tried to run some old code, and it appears that the option "CompilerInstallation" is not valid anymore. Is that by design?
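
For context, I mean the kind of CUDALink call where this option used to be passed explicitly. A rough sketch, assuming CUDAFunctionLoad is the relevant function (the kernel and the toolkit path are purely illustrative):

Needs["CUDALink`"]
(* trivial kernel that adds 2 to each element, loaded with an explicit compiler location *)
addTwo = CUDAFunctionLoad["
    __global__ void addTwo(mint *arr, mint n) {
      int i = threadIdx.x + blockIdx.x*blockDim.x;
      if (i < n) arr[i] += 2;
    }", "addTwo", {{_Integer, "InputOutput"}, _Integer}, 256,
   "CompilerInstallation" -> "/Developer/NVIDIA/CUDA-8.0/bin"];
addTwo[{1, 2, 3, 4}, 4]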

Best wishes,

Marco

POSTED BY: Marco Thiel

It's working great for NetChain on my NVIDIA 1080. However, it doesn't look like GPU support was built for Classify and Predict, as in

Classify[trainingData, {Method -> "NeuralNetwork", 
  PerformanceGoal -> "Quality", TargetDevice -> "GPU"}]

Wish it did though! Put that on the list.

POSTED BY: David Johnston

This issue seems to be resolved in version 11.1 (at least for my GTX 1050)!

Gijsbert

Yes, 11.1 should work on all recent NVIDIA GPUs now.

Hello,

I'm training a neural network and it's taking a lot of time. So, in my NetTrain[] call, I set the additional option TargetDevice -> "GPU" and got this error message:

[screenshot of the error message]

I'm using a MacBook Pro with an "Intel Iris Graphics 6100 1536 MB" graphics card.

Is it possible to set this as my GPU and train my neural network on it? Alternatively, is there a way to connect to a cloud to train my neural network?

POSTED BY: Manjunath Babu

We only support NVIDIA GPUs at the moment, so your Intel GPU won't work.

The Wolfram Cloud also doesn't have any GPUs at the moment. You would need to use EC2 or Google Cloud yourself to make use of their GPUs.

@Sebastian, is there a way to use Google Cloud GPUs for training a NetChain other than having an extremely expensive Enterprise Network license?

POSTED BY: David Johnston

FYI, downgrading to CUDA 7.5 does not solve the issue.

GW

I have upgraded to a GTX 1050 with compute capability 6.1 and my error is gone. However, the training results are wrong. Without TargetDevice->"GPU" the results are OK, but with TargetDevice->"GPU" I am seeing this while training:

[screenshot of the NetTrain training progress with TargetDevice -> "GPU"]

and the resulting recognition is also very poor. I am running Mathematica 11.0.1.0 on 64-bit Ubuntu (kernel 4.4.0-51-generic); the NVIDIA driver version is 875.20 and the CUDA version is 8.0.

How can I troubleshoot this?

GW

When I run the example.nb above, everything works fine. I am assuming my GTX 660 is compatible and that TargetDevice is just not a parameter of Classify yet.

This should be added, because everything I have built uses Classify or Predict, and forcing users like myself to specify the parameters of super-complicated neural networks in order to utilize GPUs is not going to be good for mass adoption.

BTW: it only took about 2 minutes to finish the full training in the example.nb, which is amazing!

POSTED BY: David Johnston

Here is the error I am getting:

Classify::optx: Unknown option TargetDevice in Classify[{1->One,2->Two,3->Three,4->Four},{Method->NeuralNetwork,PerformanceGoal->Quality,TargetDevice->GPU}].
POSTED BY: David Johnston

No, Classify has no support for GPU (as you can see from the docs).
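
If GPU training is wanted for a toy problem like the one above, a rough workaround sketch (layer sizes arbitrary) is to build the net by hand and use NetTrain, which does accept TargetDevice:

data = {{1} -> "One", {2} -> "Two", {3} -> "Three", {4} -> "Four"};
net = NetChain[{LinearLayer[16], Ramp, LinearLayer[4], SoftmaxLayer[]},
   "Input" -> 1,
   "Output" -> NetDecoder[{"Class", {"One", "Two", "Three", "Four"}}]];
trained = NetTrain[net, data, TargetDevice -> "GPU"];
trained[{3}]  (* should give "Three" once trained *)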

Here is my info:

In[9]:= CUDAInformation[]

Out[9]= {1 -> {"Name" -> "GeForce GTX 660", "Clock Rate" -> 888500, 
   "Compute Capabilities" -> 3., "GPU Overlap" -> 1, 
   "Maximum Block Dimensions" -> {1024, 1024, 64}, 
   "Maximum Grid Dimensions" -> {2147483647, 65535, 65535}, 
   "Maximum Threads Per Block" -> 1024, 
   "Maximum Shared Memory Per Block" -> 49152, 
   "Total Constant Memory" -> 65536, "Warp Size" -> 32, 
   "Maximum Pitch" -> 2147483647, 
   "Maximum Registers Per Block" -> 65536, "Texture Alignment" -> 512,
    "Multiprocessor Count" -> 6, "Core Count" -> 192, 
   "Execution Timeout" -> 1, "Integrated" -> False, 
   "Can Map Host Memory" -> True, "Compute Mode" -> "Default", 
   "Texture1D Width" -> 65536, "Texture2D Width" -> 65536, 
   "Texture2D Height" -> 65536, "Texture3D Width" -> 4096, 
   "Texture3D Height" -> 4096, "Texture3D Depth" -> 4096, 
   "Texture2D Array Width" -> 16384, 
   "Texture2D Array Height" -> 16384, 
   "Texture2D Array Slices" -> 2048, "Surface Alignment" -> 512, 
   "Concurrent Kernels" -> True, "ECC Enabled" -> False, 
   "TCC Enabled" -> False, "Total Memory" -> 1610612736}}

In[10]:= CUDAResourcesInformation[]

Out[10]= {{"Name" -> "CUDAResources", "Version" -> "10.5.0", 
  "BuildNumber" -> "", "Qualifier" -> "Win64", 
  "WolframVersion" -> "10.5+", "SystemID" -> {"Windows-x86-64"}, 
  "Description" -> "{ToolkitVersion -> 7.0, MinimumDriver -> 300.0}", 
  "Category" -> "", "Creator" -> "", "Publisher" -> "", 
  "Support" -> "", "Internal" -> False, 
  "Location" -> 
   "C:\\Users\\David\\AppData\\Roaming\\Mathematica\\Paclets\\\
Repository\\CUDAResources-Win64-10.5.0", "Context" -> {}, 
  "Enabled" -> True, "Loading" -> Manual, 
  "Hash" -> "79fa747a52a45bf2d78e2c3516c80061"}}
POSTED BY: David Johnston

Should this code work in Mathematica 11 on a Dell Alienware with an NVIDIA GTX 660 GPU?

data = {1 -> "One", 2 -> "Two", 3 -> "Three", 4 -> "Four"};
Classify[data, {Method -> "NeuralNetwork", 
  PerformanceGoal -> "Quality", TargetDevice -> "GPU"}]
POSTED BY: David Johnston

Ah, this is the problem. We are making use of cuDNN, which requires compute capability >=3.0 (https://developer.nvidia.com/cudnn).

It is a bug, though, that we are not catching this properly and displaying a useful message.
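
A quick way to check the compute capability of the installed card (a sketch using CUDALink):

Needs["CUDALink`"]
CUDAInformation[1, "Compute Capabilities"]  (* needs to be >= 3.0 for cuDNN *)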

Posted 7 years ago

Trying this on my notebook: Mathematica 11.1 on Win7 64-bit, the GPU is a Quadro K2100M with compute capability 3.0, latest drivers. There's also an Intel integrated graphics card, but Mathematica is set to use NVIDIA:

CUDAInformation[]
{1->{Name->Quadro K2100M, Clock Rate->666500, Compute Capabilities->3., GPU Overlap->1,
  Maximum Block Dimensions->{1024,1024,64}, Maximum Grid Dimensions->{2147483647,65535,65535},
  Maximum Threads Per Block->1024, Maximum Shared Memory Per Block->49152,
  Total Constant Memory->65536, Warp Size->32, Maximum Pitch->2147483647,
  Maximum Registers Per Block->65536, Texture Alignment->512, Multiprocessor Count->3,
  Core Count->96, Execution Timeout->1, Integrated->False, Can Map Host Memory->True,
  Compute Mode->Default, Texture1D Width->65536, Texture2D Width->65536,
  Texture2D Height->65536, Texture3D Width->4096, Texture3D Height->4096,
  Texture3D Depth->4096, Texture2D Array Width->16384, Texture2D Array Height->16384,
  Texture2D Array Slices->2048, Surface Alignment->512, Concurrent Kernels->True,
  ECC Enabled->False, TCC Enabled->False, Total Memory->2147483648}}

CUDADriverVersion[]
368.39

And I get this message:

NetTrain::badtrgdev: TargetDevice -> GPU could not be used, please ensure that you have a compatible NVIDIA graphics card and have installed the latest drivers.

Any ideas why this fails? Thank you in advance.

POSTED BY: Gregory K

I am running Mathematica on Ubuntu 16.04. Here is the same info for my system:

[screenshot of the corresponding CUDA information for my system]

Perhaps it is related to the CUDA compute capability of my GeForce GTX 460.

Gijsbert

I tried Sean's code on my OS X 10.11.6 El Capitan machine with the following configuration. Luck was on my side during the compilation of the training process on a local GPU:


Train with GPU: [screenshot of the NetTrain evaluation with TargetDevice -> "GPU"]

Result: [screenshot of the trained net]

Configuration: [screenshot of the GPU info]

I haven't tried it on the Win10 workstation yet.

POSTED BY: Shenghui Yang

There does seem to be a problem. It probably only happens with your particular graphics card or driver; we haven't yet been able to reproduce it.

What operating system are you using? Additionally, do you know what your graphics card is?

We would like to reproduce the problem as closely as possible. If you can, please send an email to support@wolfram.com about this issue and, if possible, include a notebook in which you have evaluated the command SystemInformation[]. This will tell us more about your computer and its graphics card, helping us fix this issue.

POSTED BY: Sean Clarke

Hi,

Before I hit Shift+Enter to evaluate the NetTrain command, my notebook looks like this:

[screenshot of the notebook before evaluation]

After I hit Shift+Enter to evaluate the NetTrain command, my notebook looks like this:

[screenshot of the notebook after evaluation]

So the evaluation of the NetTrain command invalidates these objects? Still, I do not get the same output as without the TargetDevice->"GPU" option.

GW

In your picture, some of the code is black and some of the code is blue.

The syntax highlighter highlights things in blue if they are undefined. This means that it thinks "lenet", "trainingdata" and "testData" are all undefined for some reason.

I've attached my notebook. It shows a simple evaluation which leads to a failure (because I don't have a compatible graphics card), but you can see that it runs.

Attachments:
POSTED BY: Sean Clarke

I have tried that, and at least the error message is now gone, but it still does not seem to work. Without the TargetDevice option you get:

[screenshot of the normal NetTrain progress and result]

but with the TargetDevice option nothing seems to happen:

[screenshot showing no visible output]

Any ideas?

GW

The documentation does something slightly confusing: it reassigns the trained result back to the symbol "lenet".

Try the following. Rename "lenet" to "trainedlenet" after it has been trained. You will want to re-run everything from the top after doing this:

lenet = NetChain[{ConvolutionLayer[20, 5], Ramp, PoolingLayer[2, 2], 
   ConvolutionLayer[50, 5], Ramp, PoolingLayer[2, 2], FlattenLayer[], 
   500, Ramp, 10, SoftmaxLayer[]}, 
  "Output" -> NetDecoder[{"Class", Range[0, 9]}], 
  "Input" -> NetEncoder[{"Image", {28, 28}, "Grayscale"}]];

trainedlenet = NetTrain[lenet, trainingData, ValidationSet -> testData, 
  MaxTrainingRounds -> 3, TargetDevice -> "GPU"]

This way you can be sure not to get them mixed up.
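
In case it helps, trainingData and testData in the snippet above can be obtained from the MNIST example; a sketch assuming the "MNIST" resource in the Wolfram Data Repository:

resource = ResourceObject["MNIST"];
trainingData = ResourceData[resource, "TrainingData"];
testData = ResourceData[resource, "TestData"];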

POSTED BY: Sean Clarke

I have attached the second example as a Notebook. It works as designed without TargetDevice->"GPU", but fails if you specify that option.

GW

Attachments:

Again, can you post the code that is failing for your example?

Also, what is the failure message with your "I first tried the following example available on the Wolfram website" example?

Hi,

I first tried the following example available on the Wolfram website:

http://www.wolfram.com/language/11/neural-networks/accelerate-training-using-a-gpu.html?product=mathematica

That failed, so I then tried to modify the following example that is also available on the Wolfram website:

http://www.wolfram.com/language/11/neural-networks/digit-classification.html?product=mathematica

It works until you modify the NetTrain command and add TargetDevice->"GPU". That results in the same error.

GW

Can you post your full example that you think should be working? It's impossible to debug this without more information.
