Message Boards

Wolfram neural networks in other platforms?

Posted 2 years ago

Is there a way to load a neural network from the Wolfram Neural Net Repository into a platform such as TensorFlow, which handles deep learning workloads more quickly than Mathematica?

POSTED BY: Collin Merenoff
10 Replies

My yearly fee is currently set to Standard. Will I have to upgrade to Premium to install Wolfram Desktop?

POSTED BY: Collin Merenoff
Posted 2 years ago

I do not know, but I think it depends on what Wolfram product you have.

If you have Mathematica, you need the desktop version. If you have Wolfram Programming Lab, it seems the Premium subscription allows you to access desktop files.

If you just want to export a model, you should be able to do that from the web version: save the model data to a file and download it from your Wolfram Cloud files.

POSTED BY: Alec Graves

Thank you! Where can I find more information about the Mathematica command line?

POSTED BY: Collin Merenoff
Posted 2 years ago

Hi Collin,

wolframscript is documented here. You might also want to take a look at the Wolfram Engine.

POSTED BY: Rohit Namjoshi
Posted 2 years ago

I just made a brief video about using the wolframscript command line, as I find the existing educational resources somewhat lacking. The first half of the video covers installation and the second half covers usage.

Also note, you do not need to install the Wolfram Engine if you have already installed Wolfram Desktop; the Wolfram Engine comes with Wolfram Desktop. In that case, you can just download and install wolframscript to get access to the command-line interface.

POSTED BY: Alec Graves
Posted 2 years ago

Yes, the Wolfram Language can export MXNet architecture and weight files:

https://reference.wolfram.com/language/ref/format/MXNet.html
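As a minimal sketch of that export (the model name here is one example from the Neural Net Repository; the output file names are illustrative):

```
(* Load a pretrained net from the Wolfram Neural Net Repository *)
net = NetModel["LeNet Trained on MNIST Data"];

(* Export in MXNet format: writes the architecture to lenet.json
   and the weights to a companion lenet.params file *)
Export["lenet.json", net, "MXNet"]
```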

There are open-source libraries that can convert MXNet models to TensorFlow models.

Also, there is newer support for ONNX model export, though I have never tried it myself. ONNX has importers for all the major neural network libraries (TensorFlow, PyTorch, etc.):

https://reference.wolfram.com/language/ref/format/ONNX.html
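A sketch of the ONNX route (again, the model name is one real repository entry and the file name is illustrative):

```
(* Export a net in ONNX format for use with other frameworks *)
net = NetModel["LeNet Trained on MNIST Data"];
Export["lenet.onnx", net, "ONNX"]
```

The resulting .onnx file can then be loaded by whichever ONNX importer your target framework provides.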

Lastly, if you want to process big datasets with Mathematica, I have found this is possible if you ditch the GUI, stick to writing scripts (.m/.wl files), and run them with the wolframscript -f command-line utility, like you would in Python. The Wolfram Desktop / Mathematica GUI is definitely too slow, and too crash-prone, for a lot of 'real' problems with large datasets right now :(
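As a rough sketch, such a script (file names, data, and network shape are all illustrative, not from any real project) might look like:

```
(* train.wl -- run from a shell with: wolframscript -f train.wl *)

(* Placeholder training data: 1000 examples of a 2-class problem *)
data = Table[RandomReal[{0, 1}, 4] -> RandomChoice[{"a", "b"}], 1000];

(* A small illustrative classifier network *)
net = NetChain[{LinearLayer[16], Ramp, LinearLayer[2], SoftmaxLayer[]},
   "Input" -> 4,
   "Output" -> NetDecoder[{"Class", {"a", "b"}}]];

trained = NetTrain[net, data, MaxTrainingRounds -> 5];

(* Save the trained net and exit; the process dying frees all resources *)
Export["trained.wlnet", trained];
Print["Training complete."];
```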

POSTED BY: Alec Graves

Are you sure? I think I saw a video on how to process extremely large data (tens of GBs) using Wolfram Desktop. It's not done with Import, though.

POSTED BY: Jack I Houng
Posted 2 years ago

Hi Jack,

Perhaps you are referring to this?

POSTED BY: Rohit Namjoshi
Posted 2 years ago

Note: the following issues are present in Wolfram Desktop / Mathematica up to v12.3 on Windows 10, but they are known to WRI. Hopefully they will be fixed soon.

It is possible to use the notebook GUI environment for loading and manipulating large amounts of data; I have simply found it unreliable for neural network training, with the interface slowing down significantly or freezing during training.

I once started a 12-hour NetTrain session in the notebook environment only to realize several hours later that the entire interface and training process had frozen 15 minutes in. That run processed a large number of batches each second, so graphing the training progress may have been the culprit. Regardless, I have never experienced such an issue when training from the command-line interface.

Next, because NetTrain takes so much CPU and documentation notebook formatting is seemingly done on the same thread, the documentation pages become unusable while training. This is a minor complaint, since the documentation is also available online, but it is annoying if you want to improve your code while your model is training.

Finally, neural network GPU memory is not always de-allocated unless you quit the kernel that allocated it, meaning that if you use NetTrain in a notebook for large-model GPU training, you will need to restart the kernel to free GPU memory before running NetTrain again. This also means you need to re-run any initialization cells and re-load your (hopefully DumpSave'd) data every time you want to run NetTrain in a notebook. When using wolframscript, the GPU-memory-allocating process dies when the script ends, immediately freeing GPU memory for the next run of the script.
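A sketch of the DumpSave workflow mentioned above (the file name and symbol are illustrative). DumpSave writes symbol definitions in the fast binary .mx format, and Get restores them after a kernel restart:

```
(* In the first session: preprocess once and save in binary .mx format *)
trainingData = RandomReal[1, {1000, 10}];  (* placeholder data *)
DumpSave["trainingData.mx", trainingData];

(* Quit the kernel to release any GPU memory it holds, then,
   in the fresh kernel, reload the saved definitions quickly: *)
Get["trainingData.mx"]  (* restores the symbol trainingData *)
```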

Anyway, I am really looking forward to the updates that fix these problems :)

POSTED BY: Alec Graves
Posted 1 year ago

Edit (2022, WL v13.1): it appears that several of these issues have been fixed. The NetTrain progress-reporting graph no longer makes the documentation UI completely unusable, re-running NetTrain in the same WL kernel no longer causes GPU memory to build up, and the training progress graph interface seems generally more performant and less crash-prone. Thanks for the updates, WRI!

POSTED BY: Alec Graves