Note: the following issues are present in Wolfram Desktop / Mathematica up to v12.3 on Windows 10, but they are known to WRI, so hopefully they will be fixed soon.
It is possible to use the notebook GUI environment for loading and manipulating large amounts of data; I have simply found it unreliable for neural network training, with the interface slowing down significantly or freezing outright.
I once started a 12-hour NetTrain session in the notebook environment only to realize several hours later that the entire interface and training process had frozen 15 minutes in. This run processed a large number of batches per second, so the live training-progress plot may have been the culprit. Regardless, I have never experienced such an issue when training from the command-line interface.
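If you do train in a notebook, one mitigation is to disable the dynamic progress panel, which is likely what the front end struggles to keep redrawing. A minimal sketch (assuming `net` and `trainingData` are your own network and data):

```mathematica
(* Replace the live progress plot with plain-text updates so the front
   end has nothing dynamic to redraw during training. *)
trained = NetTrain[
  net, trainingData,
  TrainingProgressReporting -> "Print", (* or None to suppress entirely *)
  TargetDevice -> "GPU"
]
```

`TrainingProgressReporting -> "Print"` keeps you informed without the interactive panel; `None` suppresses reporting altogether.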
Next, because NetTrain uses so much CPU and documentation-notebook formatting is seemingly done on the same thread, the documentation pages become unusable while training. This is a minor complaint, since the documentation is also available online, but it is annoying if you want to improve your code while your model trains.
Finally, neural network GPU memory is not always deallocated unless you quit the kernel that allocated it. If you use NetTrain in a notebook for large-model GPU training, you will need to restart the kernel to free GPU memory before running NetTrain again, which also means re-running any initialization cells and re-loading your (hopefully DumpSave'd) data every time. When using wolframscript, the GPU-memory-allocating process dies when the script ends, immediately freeing GPU memory for the next run.
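The wolframscript workflow described above can be sketched as a standalone script. The file names, the network, and the variable `trainingData` here are hypothetical placeholders; the point is that all GPU memory is released automatically when the script's process exits:

```mathematica
(* train.wls -- run with: wolframscript -file train.wls *)

Get["data.mx"];  (* load DumpSave'd training data, defining trainingData *)

(* Placeholder network; substitute your own architecture. *)
net = NetChain[{LinearLayer[64], Ramp, LinearLayer[1]}];

trained = NetTrain[
  net, trainingData,
  TargetDevice -> "GPU",
  TrainingProgressReporting -> "Print"
];

Export["trained.wlnet", trained];
(* GPU memory is freed when this process ends -- no kernel restart needed. *)
```

Re-running the script starts from a fresh process each time, so there is no stale GPU allocation to clear manually.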
Anyway, I am really looking forward to the updates that fix these problems :)