Message Boards


NetTrain will exit the kernel when using an RTX 2080 GPU.

Posted 7 months ago
2 Replies
6 Total Likes

I recently upgraded my main GPU to an RTX 2080, and whenever I try to train a neural network with it, Mathematica simply exits the kernel without an error message, aborting the computation.

For example:

resource = ResourceObject["MNIST"];
trainingData = ResourceData[resource, "TrainingData"];
testData = ResourceData[resource, "TestData"];

network = NetChain[{FlattenLayer[], LinearLayer[], SoftmaxLayer[-1]},
  "Output" -> NetDecoder[{"Class", Range[0, 9]}],
  "Input" -> NetEncoder[{"Image", {28, 28}, "Grayscale"}]]

trained = NetTrain[network, trainingData, ValidationSet -> testData,
   MaxTrainingRounds -> 50, TargetDevice -> {"GPU", 1}]; 

This simply crashes the kernel, while training the network with my secondary GPU, a GTX 750 Ti, by setting TargetDevice -> {"GPU", 2} works just fine.

On the other hand, CUDA itself seems to be working well: CUDAQ[] returns True, $CUDADeviceCount reports my two GPUs, and

CUDAImageConvolve[IMAGE, {{-1, -2, 3}}]

will work just like in the documentation example when

$CUDADevice = 1

is set.
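For anyone reproducing this, the checks above can be run together as a quick sanity test. This is only a sketch: the test image from ExampleData and the Sobel-style kernel are my own placeholders, not anything from the original post.

```mathematica
(* Load CUDALink and confirm the GPU stack is usable *)
Needs["CUDALink`"]
CUDAQ[]            (* True if CUDALink is supported on this machine *)
$CUDADeviceCount   (* number of CUDA devices found *)
CUDAInformation[]  (* per-device report: name, compute capability, memory, ... *)

(* Pin subsequent CUDALink calls to the first device and try a convolution *)
$CUDADevice = 1;
img = ExampleData[{"TestImage", "Lena"}];
CUDAImageConvolve[img, {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}}]
```

Note that CUDALink working is no guarantee for NetTrain: as the reply below explains, the neural net framework links against its own CUDA Toolkit version, separately from CUDALink.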

Any idea on why this might be happening?

Edit: I'm running Mathematica 11.3, which I just reinstalled in case that was the issue, on Windows 10 Pro 64-bit.


2 Replies

The issue is that the neural net framework in 11.3 was compiled against CUDA Toolkit 9.0, which is incompatible with the new Turing-generation GPUs (i.e. the 2080). CUDA 10 was released only around 10 days ago, and it added Turing support ("CUDA 10 is the first version of CUDA to support the new NVIDIA Turing architecture").

Neural nets in Mathematica 12 will use CUDA 10 and be compatible with your GPU.

Ugh... wish I had read this before getting an RTX 2070. Is there an ETA for Mma 12? Or a way to get this working before the release of 12?
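Until 12 ships, the workaround implied earlier in this thread is to point NetTrain at a pre-Turing device: either an older secondary GPU (as the original poster found works) or the CPU. A sketch, reusing the network and data definitions from the original post:

```mathematica
(* Train on a secondary, pre-Turing GPU (device index 2 here)... *)
trained = NetTrain[network, trainingData, ValidationSet -> testData,
   MaxTrainingRounds -> 50, TargetDevice -> {"GPU", 2}];

(* ...or fall back to the CPU, which is slower but avoids the
   CUDA 9.0 / Turing incompatibility entirely *)
trained = NetTrain[network, trainingData, ValidationSet -> testData,
   MaxTrainingRounds -> 50, TargetDevice -> "CPU"];
```

If you have no second GPU, the CPU fallback is the only option I'm aware of short of upgrading to 12.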

