Yeah, I am doing WL training on my desktop, which has a GTX 1060 6GB and an RTX 2060. I might use one GPU for the main training run and leave the other open for experimentation, then switch to both GPUs once I am done experimenting.
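Something like this is what I have in mind for switching between single- and multi-GPU runs; net and trainingData are just placeholders for whatever you are training:

    (* train on one GPU (device index 1) so the other stays free for experiments *)
    trained = NetTrain[net, trainingData, TargetDevice -> {"GPU", 1}];

    (* train on all available GPUs once the experimentation is done *)
    trained = NetTrain[net, trainingData, TargetDevice -> {"GPU", All}];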
My biggest complaint is that CPU utilization seems extremely high. Often while training, my GPU sits below 10% utilization while my CPU is maxed out. I am still playing around with this, and it seems a little better in 12.3. I might try storing images as NumericArrays in the hope of reducing the CPU cost of converting them to the network's input format on every pass.
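In case it helps, here is a rough sketch of the preprocessing I am considering: apply the net's image encoder once up front and keep the results as Real32 NumericArrays, so the images do not get re-encoded on every training pass. The encoder size and the images/labels names are just illustrative; adjust them to match your net:

    (* one-time encoding; assume the net expects 224x224 RGB input *)
    encoder = NetEncoder[{"Image", {224, 224}}];
    preprocessed = NumericArray[encoder[#], "Real32"] & /@ images;

    (* feed the pre-encoded arrays to NetTrain instead of raw images *)
    trained = NetTrain[net, <|"Input" -> preprocessed, "Output" -> labels|>];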
If anyone has tips to reduce CPU consumption during training, that would be a great addition to this thread!