Issue with CUDA Functionality in Mathematica 12.1

POSTED BY: Jonathan Kinlay
5 Replies

I tested the notebook and got the same error, independent of the number of options. I am using Windows 10 with:

{1 -> {"Name" -> "Quadro M1000M", "Clock Rate" -> 1071500, 
   "Compute Capabilities" -> 5.`, "GPU Overlap" -> 1, 
   "Maximum Block Dimensions" -> {1024, 1024, 64}, 
   "Maximum Grid Dimensions" -> {2147483647, 65535, 65535}, 
   "Maximum Threads Per Block" -> 1024, 
   "Maximum Shared Memory Per Block" -> 49152, 
   "Total Constant Memory" -> 65536, "Warp Size" -> 32, 
   "Maximum Pitch" -> 2147483647, 
   "Maximum Registers Per Block" -> 65536, "Texture Alignment" -> 512,
    "Multiprocessor Count" -> 4, "Core Count" -> 512, 
   "Execution Timeout" -> 1, "Integrated" -> False, 
   "Can Map Host Memory" -> True, "Compute Mode" -> "Default", 
   "Texture1D Width" -> 65536, "Texture2D Width" -> 65536, 
   "Texture2D Height" -> 65536, "Texture3D Width" -> 4096, 
   "Texture3D Height" -> 4096, "Texture3D Depth" -> 4096, 
   "Texture2D Array Width" -> 16384, 
   "Texture2D Array Height" -> 16384, 
   "Texture2D Array Slices" -> 2048, "Surface Alignment" -> 512, 
   "Concurrent Kernels" -> True, "ECC Enabled" -> False, 
   "TCC Enabled" -> False, "Total Memory" -> 2147483648}}
Driver version 442.19

CUDAFunction::liblnch: During the evaluation, an error LIBRARY_FUNCTION_ERROR was raised when launching the library function.

However, after updating my driver to version 442.92 it now works somewhat better: with 256 options there is no problem, but with 512 it starts generating the error message again. With a fresh kernel and 512 options it works once or twice, and then the error message appears again. I would call Wolfram Support.

POSTED BY: l van Veen

As far as I know, CUDA has a feature called unified memory, available only on devices of compute capability 3.0 and above, which can automate the memory-copy process between host and device. In the original CUDA programming model, one needs to copy the data from host to device, run the computation, and then copy the result back.
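To make the contrast concrete, here is a minimal CUDA C sketch (not from the thread; the kernel name, sizes, and launch configuration are illustrative) showing the classic explicit-copy workflow next to unified memory via `cudaMallocManaged`, which requires compute capability 3.0+:

```cuda
// Illustrative sketch: explicit host<->device copies vs. unified memory.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

int main() {
    const int n = 256;
    const size_t bytes = n * sizeof(float);

    // --- Classic model: allocate on both sides, copy explicitly ---
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;
    float *dev;
    cudaMalloc(&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);  // host -> device
    scale<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);  // device -> host
    cudaFree(dev);

    // --- Unified memory: one pointer visible to host and device,
    //     pages are migrated on demand (compute capability >= 3.0) ---
    float *um;
    cudaMallocManaged(&um, bytes);
    for (int i = 0; i < n; ++i) um[i] = 1.0f;
    scale<<<(n + 255) / 256, 256>>>(um, n);
    cudaDeviceSynchronize();  // wait for the kernel before the host reads um
    printf("um[0] = %f\n", um[0]);
    cudaFree(um);
    return 0;
}
```

Note that even with unified memory the host must synchronize (`cudaDeviceSynchronize`) before touching the managed pointer after a kernel launch; only the explicit `cudaMemcpy` calls go away.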

POSTED BY: vincent feng

It could be that Wolfram no longer supports the rather antiquated graphics card I'm using. But CUDAQ[] returns True, and I would expect it to return False if that were the case.

Another possible explanation is that I am accessing the computer remotely. Obviously I am aware that the GPU is not supported over Remote Desktop; but I have tested using both AnyDesk and TeamViewer, both of which have worked on this machine with previous Mathematica versions. Neither works now.

POSTED BY: Jonathan Kinlay

I am using Mathematica 12.1 for this test.

POSTED BY: vincent feng