Unfortunately correct: you can cross Thunderbolt GPUs under MMA 11.1 off the list. I am not saying Apple and NVIDIA will release a Pascal GPU on Thunderbolt, but I've not seen a more elegant or functional scientific workstation architecture than a 5K Retina iMac with Thunderbolt GPUs running Mathematica 11.0.1. The pictured little Mac Mini's USD 150 GeForce 950 GPU in an external Thunderbolt enclosure outperforms the unit's Intel i7 by 60-fold on Mathematica 11.0 MXNet computations. Yes, I have a 1070 GPU in a remote kernel on Linux that provides similar performance, but far less elegantly and with far more systems-integration cost. It's not Wolfram's fault that the Apple/NVIDIA relationship is rocky, but abandoning the platform that enabled MMA to expand as successfully as it did is disappointing and short-sighted (not fully supporting neural networks on a platform is endgame for that platform). NVIDIA Thunderbolt GPUs seem to be a solid solution and direction under OS X for deep learning and other GPU-accelerated computations, just remove Mathematica from the equation :-( The chart in the lower right is hopefully understood at Apple.
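For anyone who ends up going the remote-Linux-kernel route anyway, this is roughly the setup I mean (a sketch only: the hostname is a placeholder, it assumes passwordless SSH plus a licensed Wolfram kernel on the remote machine, and net and trainingData stand for your own network and data):
Needs["SubKernels`RemoteKernels`"]
(* hypothetical remote Linux host that holds the NVIDIA card *)
gpuKernel = LaunchKernels[RemoteMachine["gpu-box.example.com"]];
(* ship the locally defined net and data to the remote kernel, then train there on its GPU *)
DistributeDefinitions[net, trainingData];
trained = ParallelEvaluate[NetTrain[net, trainingData, TargetDevice -> "GPU"], gpuKernel]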
Oh, what an unfortunate choice! I just bought 4 BizonBoxes at the beginning of this year only because of the neural network framework in Mathematica. It worked really nicely with MMA 11.0.1 on OSX and opened up the large and powerful new expansion of the Wolfram Language. This decision cuts all OSX users off from a large chunk of the new development in the Wolfram Language. It is also not just a few old legacy OSX devices with GPUs that you stop supporting; it also cuts OSX users off from other techniques for accessing the neural network framework. It appears that much of the development of the Wolfram Language is going in the direction of machine learning, and GPUs are crucial for that: training times on my networks went down from a day to a couple of minutes. I believe there were several solutions available that would have let OSX users make use of this exciting new development in the Wolfram Language, but this decision seems to cut all of us off, which is unfortunate I think. Best wishes, Marco
|
|
That is excellent news — thank you! I look forward to hearing from @Marco Thiel and others, once the paclet is released, how well this works with external GPUs. With the release of the NVIDIA 1080 Ti, lots of people will be selling their old cards. It's a good time to pick up a still very fast one-, two-, or three-year-old card on eBay and drop it into an Akitio Node, for example.
|
|
Dear Sebastian, thank you very much for that helpful reply. I will try to find some time to post some easy instructions on how to make the BizonBox work with Mathematica, and show some benchmarks. Thanks a lot, Marco EDIT: Sorry about that, I have fixed it. Actually, it works better now than in 10.4: a step I previously had to perform is no longer needed, and doing it was what caused the trouble. It now works out of the box.
|
|
FYI:
Nvidia announces new Titan Xp GPU along with upcoming beta Pascal drivers for the Mac
https://9to5mac.com/2017/04/06/nvidia-titan-xp-beta-pascal-drivers-mac/
Update: We have reached out to Nvidia for a statement about compatibility down the line with lesser 10-series cards, and I'm happy to report that Nvidia states that all Pascal-based GPUs will be Mac-enabled via upcoming drivers. This means that you will be able to use a GTX 1080, for instance, on a Mac system via an eGPU setup, or with a Hackintosh build. Exciting times, indeed.
|
|
Dear Sebastian, it is great to see that the GPU framework is enabled for Mac users again. I was just wondering one thing. As I was preparing a post with instructions of how to use the Bizon Box with Mathematica I tried to run some old code. It appears that the option
"CompilerInstallation" is not valid anymore. Is that by design? Best wishes, Marco
|
|
It's working great for NetChain on my Nvidia 1080. However, it doesn't look like GPU support was built into Classify and Predict with
Classify[trainingData, {Method -> "NeuralNetwork",
PerformanceGoal -> "Quality", TargetDevice -> "GPU"}]
I wish it did, though! Put that on the list.
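In the meantime, a possible workaround (just a sketch, assuming fixed-length numeric feature vectors and two classes; all names and data here are made up) is to skip Classify and train an equivalent small network directly with NetTrain, which does accept TargetDevice:
(* hypothetical data: 10-dimensional feature vectors labelled "A" or "B" *)
data = Table[RandomReal[1, 10] -> RandomChoice[{"A", "B"}], {1000}];
net = NetChain[{LinearLayer[64], Ramp, LinearLayer[2], SoftmaxLayer[]},
  "Input" -> 10, "Output" -> NetDecoder[{"Class", {"A", "B"}}]];
trained = NetTrain[net, data, TargetDevice -> "GPU"];
trained[RandomReal[1, 10]] (* returns "A" or "B" *)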
|
|
This issue seems to be resolved in version 11.1 (at least for my GTX 1050)! Gijsbert
|
|
Yes, 11.1 should work on all recent NVIDIA GPUs now.
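A quick sanity check, if you want to confirm the GPU path runs end to end (a tiny made-up regression problem; nothing here is specific to any real data set):
(* learn y = 2x + 1 on the GPU *)
net = NetChain[{LinearLayer[1]}, "Input" -> 1, "Output" -> 1];
data = Table[{x} -> {2 x + 1}, {x, RandomReal[1, 200]}];
NetTrain[net, data, TargetDevice -> "GPU", MaxTrainingRounds -> 5]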
|
|
Hello, I'm training a neural network and it's taking a lot of time. So, in my NetTrain[] call, I set the additional option TargetDevice -> "GPU" and got this error message. I'm using a MacBook Pro with an "Intel Iris Graphics 6100 1536 MB" graphics card. Is it possible to set this as my GPU and train my neural network? Is there any alternative way to connect to a cloud to train my neural network?
|
|
We only support NVIDIA GPUs at the moment, so your Intel GPU won't work. The Wolfram Cloud also doesn't have any GPUs at the moment; you would need to use EC2 or Google Cloud yourself to make use of their GPUs.
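If you want to check from within Mathematica whether a usable NVIDIA card is present, a minimal sketch using CUDALink (net and trainingData stand for your own network and data) is:
Needs["CUDALink`"]
(* TargetDevice -> "GPU" only works if CUDALink sees a CUDA-capable NVIDIA card *)
device = If[CUDAQ[], "GPU", "CPU"];
NetTrain[net, trainingData, TargetDevice -> device]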
|
|
@Sebastian, is there a way to use Google Cloud GPUs for training NetChain other than having an extremely expensive Enterprise Network license?
|
|
FYI, downgrading to CUDA 7.5 does not solve the issue. GW
|
|
I have upgraded to a GTX 1050 with compute capability 6.1 and my error is gone. The training results are wrong, however. Without TargetDevice -> "GPU" the results are OK, but with TargetDevice -> "GPU" I am seeing this while training (see screenshot), and the resulting recognition is also very poor. I am running Mathematica 11.0.1.0 on 64-bit Ubuntu (kernel 4.4.0-51-generic), the NVIDIA driver version is 875.20, and the CUDA version is 8.0. How can I troubleshoot this? GW
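For what it's worth, here is the kind of side-by-side check I have been using to convince myself the discrepancy is in the GPU path (a sketch; lenet, trainingData and testData are my definitions from the example notebook):
(* train identical copies of the net on CPU and GPU, then compare accuracy on the same test set *)
cpuNet = NetTrain[lenet, trainingData, MaxTrainingRounds -> 3, TargetDevice -> "CPU"];
gpuNet = NetTrain[lenet, trainingData, MaxTrainingRounds -> 3, TargetDevice -> "GPU"];
accuracy[net_] := N @ Mean @ Boole @ MapThread[Equal, {net[Keys[testData]], Values[testData]}];
{accuracy[cpuNet], accuracy[gpuNet]}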
|
|
When I run the example.nb above, everything works fine. I assume my GTX 660 is compatible and that TargetDevice is simply not a parameter of Classify yet. This should be added: everything I have built uses Classify or Predict, and forcing users like me to specify the parameters of highly complicated neural networks in order to use GPUs is not going to be good for mass adoption. By the way, it only took about 2 minutes to finish the full training in the example.nb, which is amazing!
|
|
Here is the error I am getting:
Classify::optx: Unknown option TargetDevice in Classify[{1->One,2->Two,3->Three,4->Four},{Method->NeuralNetwork,PerformanceGoal->Quality,TargetDevice->GPU}].
|
|
No, Classify has no GPU support (as you can see from the docs).
|
|
Here is my info:
In[9]:= CUDAInformation[]
Out[9]= {1 -> {"Name" -> "GeForce GTX 660", "Clock Rate" -> 888500,
"Compute Capabilities" -> 3., "GPU Overlap" -> 1,
"Maximum Block Dimensions" -> {1024, 1024, 64},
"Maximum Grid Dimensions" -> {2147483647, 65535, 65535},
"Maximum Threads Per Block" -> 1024,
"Maximum Shared Memory Per Block" -> 49152,
"Total Constant Memory" -> 65536, "Warp Size" -> 32,
"Maximum Pitch" -> 2147483647,
"Maximum Registers Per Block" -> 65536, "Texture Alignment" -> 512,
"Multiprocessor Count" -> 6, "Core Count" -> 192,
"Execution Timeout" -> 1, "Integrated" -> False,
"Can Map Host Memory" -> True, "Compute Mode" -> "Default",
"Texture1D Width" -> 65536, "Texture2D Width" -> 65536,
"Texture2D Height" -> 65536, "Texture3D Width" -> 4096,
"Texture3D Height" -> 4096, "Texture3D Depth" -> 4096,
"Texture2D Array Width" -> 16384,
"Texture2D Array Height" -> 16384,
"Texture2D Array Slices" -> 2048, "Surface Alignment" -> 512,
"Concurrent Kernels" -> True, "ECC Enabled" -> False,
"TCC Enabled" -> False, "Total Memory" -> 1610612736}}
In[10]:= CUDAResourcesInformation[]
Out[10]= {{"Name" -> "CUDAResources", "Version" -> "10.5.0",
"BuildNumber" -> "", "Qualifier" -> "Win64",
"WolframVersion" -> "10.5+", "SystemID" -> {"Windows-x86-64"},
"Description" -> "{ToolkitVersion -> 7.0, MinimumDriver -> 300.0}",
"Category" -> "", "Creator" -> "", "Publisher" -> "",
"Support" -> "", "Internal" -> False,
"Location" ->
"C:\\Users\\David\\AppData\\Roaming\\Mathematica\\Paclets\\\
Repository\\CUDAResources-Win64-10.5.0", "Context" -> {},
"Enabled" -> True, "Loading" -> Manual,
"Hash" -> "79fa747a52a45bf2d78e2c3516c80061"}}
|
|
Should this code work in Mathematica 11 on a Dell Alienware with an NVIDIA GTX 660 GPU?
data = {1 -> "One", 2 -> "Two", 3 -> "Three", 4 -> "Four"};
Classify[data, {Method -> "NeuralNetwork",
PerformanceGoal -> "Quality", TargetDevice -> "GPU"}]
|
|
Ah, this is the problem. We are making use of cuDNN, which requires compute capability >= 3.0 (https://developer.nvidia.com/cudnn). It is a bug, though, that we are not catching this properly and displaying a useful message.
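You can see what your card reports directly from CUDALink (a small sketch; device 1 is assumed to be the NVIDIA card):
Needs["CUDALink`"]
(* cuDNN, and hence TargetDevice -> "GPU", needs compute capability >= 3.0 *)
CUDAInformation[1, "Compute Capabilities"]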
|
|
Trying this on my laptop: Mathematica 11.1 on Win7 64-bit; the GPU is a Quadro K2100M with compute capability 3.0 and the latest drivers. There's also an Intel integrated graphics card, but Mathematica is set to use NVIDIA:
CUDAInformation[]
{1->{Name->Quadro K2100M,Clock Rate->666500,Compute Capabilities->3.,GPU Overlap->1,Maximum Block Dimensions->{1024,1024,64},Maximum Grid Dimensions->{2147483647,65535,65535},Maximum Threads Per Block->1024,Maximum Shared Memory Per Block->49152,Total Constant Memory->65536,Warp Size->32,Maximum Pitch->2147483647,Maximum Registers Per Block->65536,Texture Alignment->512,Multiprocessor Count->3,Core Count->96,Execution Timeout->1,Integrated->False,Can Map Host Memory->True,Compute Mode->Default,Texture1D Width->65536,Texture2D Width->65536,Texture2D Height->65536,Texture3D Width->4096,Texture3D Height->4096,Texture3D Depth->4096,Texture2D Array Width->16384,Texture2D Array Height->16384,Texture2D Array Slices->2048,Surface Alignment->512,Concurrent Kernels->True,ECC Enabled->False,TCC Enabled->False,Total Memory->2147483648}}
CUDADriverVersion[]
368.39
And I get this message:
NetTrain::badtrgdev: TargetDevice -> GPU could not be used, please ensure that you have a compatible NVIDIA graphics card and have installed the latest drivers.
Any ideas why this fails? Thank you in advance.
|
|
I am running Mathematica on Ubuntu 16.04. Here is the same info for my system: Perhaps it is related to the CUDA Compute Capabilities of my Geforce GTX 460. Gijsbert
|
|
I tried Sean's code on my OS X 10.11.6 El Capitan machine with the configuration below. The odds were in my favor when compiling and running the training process on the local GPU:
Train with GPU (screenshot)
Result (screenshot)
Configuration (screenshot)
I haven't tried it on the Win10 workstation yet.
|
|
There does seem to be a problem. It probably only happens with your particular graphics card or driver; we haven't yet been able to reproduce it. What operating system are you using? Also, do you know which graphics card you have? We would like to reproduce the problem as closely as possible. If you can, please send an email to support@wolfram.com about this issue and include a notebook in which you have evaluated the command SystemInformation[]. This will tell us more about your computer and its graphics card and help us fix the issue.
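If it helps, one way to capture that information in a fresh notebook before sending it (just a sketch; the file name is arbitrary):
(* evaluate SystemInformation[] in a new notebook and save it to attach to the support email *)
nb = CreateDocument[SystemInformation[]];
NotebookSave[nb, FileNameJoin[{$HomeDirectory, "sysinfo.nb"}]]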
|
|
Hi, before I hit Shift+Enter to evaluate the NetTrain command, my notebook looks like the first screenshot; after I evaluate it, it looks like the second. So the evaluation of the NetTrain command invalidates these objects? I still do not get the same output as without the TargetDevice -> "GPU" option. GW
|
|
In your picture, some of the code is black and some of the code is blue. The syntax highlighter highlights symbols in blue if they are undefined, which means it thinks "lenet", "trainingdata" and "testData" are all undefined for some reason. I've attached my notebook. It shows a simple evaluation which leads to a failure (because I don't have a compatible graphics card), but you can see that it runs.
|
|
I have tried that, and at least the error message is now gone, but it still does not seem to work. Without the TargetDevice option I get the output shown in my screenshot, but with the TargetDevice option nothing seems to happen. Any ideas? GW
|
|
The documentation is doing something slightly confusing: it reassigns the result back to the symbol "lenet". Try the following: rename "lenet" to "trainedlenet" after it has been trained. You will want to re-run everything from the top after doing this:
lenet = NetChain[{ConvolutionLayer[20, 5], Ramp, PoolingLayer[2, 2],
ConvolutionLayer[50, 5], Ramp, PoolingLayer[2, 2], FlattenLayer[],
500, Ramp, 10, SoftmaxLayer[]},
"Output" -> NetDecoder[{"Class", Range[0, 9]}],
"Input" -> NetEncoder[{"Image", {28, 28}, "Grayscale"}]];
trainedlenet = NetTrain[lenet, trainingData, ValidationSet -> testData,
MaxTrainingRounds -> 3, TargetDevice -> "GPU"]
This way you can be sure to not get them mixed up.
|
|
I have attached the second example as a notebook. It works as designed without TargetDevice -> "GPU", but fails if you specify that option. GW
|
|
Again, can you post the code that is failing for your example? Also, what is the failure message with your "I first tried the following example available on the Wolfram website" example?
|
|
Can you post your full example that you think should be working? It's impossible to debug this without more information.
|
|