Successful NN GPU training on a multi-GPU local machine?

There are many useful discussions here about GPU training issues, with different hardware mentioned, so I thought it might be useful to ask about successes instead. My general question is: in the absence of access to cloud-based GPU resources, can the Wolfram Language (WL) take full advantage of a local machine specced for NN work?

So... if you regularly do WL NN training using either a newer, high-end GPU (e.g., an RTX 2080) or, even better, a machine with multiple such GPUs, can you let us know, and say what the hardware is?
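For context, the multi-GPU setup being asked about is selected in WL through `NetTrain`'s `TargetDevice` option. A minimal sketch, assuming a CUDA-capable machine and placeholder `net` and `trainingData`:

```
(* train on the default (first) GPU *)
trained = NetTrain[net, trainingData, TargetDevice -> "GPU"];

(* train on a specific GPU by index, e.g. the second card *)
trained = NetTrain[net, trainingData, TargetDevice -> {"GPU", 2}];

(* train across all available GPUs *)
trained = NetTrain[net, trainingData, TargetDevice -> {"GPU", All}];
```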

POSTED BY: Gareth Russell
Posted 3 years ago

Yes, I am doing WL training on my desktop, which has a GTX 1060 6 GB and an RTX 2060. I might use one GPU and leave the other open for experimentation while my main model trains, then use both GPUs once I am done experimenting.

My biggest complaint is that CPU utilization seems extremely high. Often when I train, my GPU is used at less than 10% while my CPU is maxed out. I am still playing around with this, and it seems a little better in 12.3. I might try storing images as NumericArrays, which should reduce the CPU cost of converting images to the network's input format on every batch.
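A sketch of that NumericArray idea (the names `imgs`, `labels`, and `net` are placeholders, and this is untested): doing the image-to-tensor conversion once, up front, means `NetTrain` receives raw numeric data rather than `Image` objects to decode per batch.

```
(* assume imgs is a list of training images and labels the matching classes *)

(* one-time conversion: channel-first "Real32" arrays wrapped as NumericArrays *)
tensors = NumericArray[
     ImageData[#, "Real32", Interleaving -> False], "Real32"] & /@ imgs;

(* NetTrain can then consume the pre-converted tensors directly *)
trained = NetTrain[net, tensors -> labels, TargetDevice -> "GPU"];
```

This trades memory (the whole dataset held as float arrays) for less per-batch CPU work, so it suits datasets that fit in RAM.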

If anyone has tips to reduce CPU consumption during training, that would be a great addition to this thread!

POSTED BY: Alec Graves