Message Boards


NetTrain will exit Kernel when using an RTX 2080 GPU.


I recently upgraded my main GPU to an RTX 2080, and whenever I try to train a neural network with it, Mathematica simply exits the kernel without an error message, aborting the computation.

For example:

resource = ResourceObject["MNIST"];
trainingData = ResourceData[resource, "TrainingData"];
testData = ResourceData[resource, "TestData"];

network = NetChain[{FlattenLayer[], LinearLayer[], SoftmaxLayer[-1]},
  "Output" -> NetDecoder[{"Class", Range[0, 9]}],
  "Input" -> NetEncoder[{"Image", {28, 28}, "Grayscale"}]]

trained = NetTrain[network, trainingData, ValidationSet -> testData,
   MaxTrainingRounds -> 50, TargetDevice -> {"GPU", 1}]; 

This simply crashes the kernel, while training the same network on my secondary GPU, a GTX 750 Ti, by setting {"GPU", 2} works just fine.

On the other hand, CUDA itself seems to be working well: CUDAQ[] returns True, $CUDADeviceCount lists my two GPUs, and

CUDAImageConvolve[IMAGE, {{-1, -2, 3}}]

works just as on the documentation page when

$CUDADevice = 1

is set.

Any idea on why this might be happening?

Edit: I'm running Mathematica 11.3, which I just reinstalled in case that was the issue, on Windows 10 Pro 64-bit.


The issue is that the neural net framework in 11.3 was compiled against CUDA Toolkit 9.0, which is incompatible with the new Turing-generation GPUs (i.e. the RTX 2080). CUDA 10 was released only around ten days ago, and it added Turing support ("CUDA 10 is the first version of CUDA to support the new NVIDIA Turing architecture").
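You can confirm the architecture mismatch by querying each device's compute capability through CUDALink (a sketch; the "Compute Capabilities" property is taken from the CUDALink documentation, and the device indices assume the RTX 2080 is device 1, as in your NetTrain call). Turing cards report compute capability 7.5, which the CUDA 9.0 toolkit predates:

```mathematica
Needs["CUDALink`"]

(* Compute capability per device: the RTX 2080 (Turing) reports 7.5,
   which CUDA Toolkit 9.0 cannot target, while the GTX 750 Ti
   (Maxwell, 5.0) is still covered - hence training on it works. *)
CUDAInformation[1, "Compute Capabilities"]
CUDAInformation[2, "Compute Capabilities"]
```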

Neural nets in Mathematica 12 will use CUDA 10 and be compatible with your GPU.
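Until then, a possible workaround (a sketch based on the behavior you describe, not a fix) is to point NetTrain at a device the CUDA 9.0 build still supports:

```mathematica
(* Train on the secondary Maxwell GPU, which the CUDA 9.0-based
   framework in 11.3 can still target; use TargetDevice -> "CPU"
   instead if no supported GPU is available. *)
trained = NetTrain[network, trainingData,
   ValidationSet -> testData,
   MaxTrainingRounds -> 50,
   TargetDevice -> {"GPU", 2}];
```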

