Use GPUs to compute RandomFunction and ParallelTable faster?

Posted 6 months ago

Hi, I have some code where I use RandomFunction and ParallelTable. Could somebody guide me on how to make use of my GPUs to accelerate this computation? I have looked through the CUDA Programming tutorial (http://reference.wolfram.com/language/CUDALink/tutorial/Programming.html#135446596), but I do not understand the code that is later loaded through CUDAFunctionLoad.
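
For reference, the sort of thing I mean is the following (a simplified, hypothetical example; my actual process and parameters differ):

```
(* Hypothetical sketch of the kind of computation in question:
   simulate many independent paths of a stochastic process,
   distributing them over the CPU parallel kernels. *)
paths = ParallelTable[
   RandomFunction[WienerProcess[0, 1], {0, 10, 0.01}],
   {i, 1000}];
```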

I believe that if you want to use the GPU for a calculation like this, you basically have to write the kernel yourself in CUDA C and load it through CUDALink (e.g. with CUDAFunctionLoad, as sketched below); Mathematica cannot offload such computations to the GPU automatically for you.
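
To give an idea of what that workflow looks like, here is a minimal sketch closely following the pattern in the CUDALink documentation. The kernel `addTwo` and the block size are just an illustration and have nothing to do with RandomFunction itself:

```
Needs["CUDALink`"]

(* CUDA C kernel source: adds 2 to every element of an integer array *)
src = "
__global__ void addTwo(mint *arr, mint n) {
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    if (index < n)
        arr[index] += 2;
}";

(* Compile and load the kernel. The argument specification says the first
   argument is an integer array passed in and out, the second a scalar
   integer; 256 is the thread-block size. *)
addTwo = CUDAFunctionLoad[src, "addTwo",
   {{_Integer, _, "InputOutput"}, _Integer}, 256];

(* Apply it to a short list *)
addTwo[Range[10], 10]  (* returns the list with 2 added to each element *)
```

In other words, the numerical kernel itself has to be written in CUDA C; built-in functions such as RandomFunction are not translated into GPU code for you.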
