# Documentation & Functionality enhancements as NeuralNets leave Experimental

Posted 2 months ago
I've been doing a lot of work with Machine Learning in the Wolfram Language recently, and we have tremendous capability and a clean architecture typical of Wolfram Language products. Right now, all the neural-network functions are labeled "Experimental" and thus are not held to quite as high a standard as features that have graduated from that designation. I believe the transition to a full and permanent part of the language could be helped by addressing two matters: (a) some documentation lacunae and (b) some interoperability challenges. I am attempting to start a conversation on that point by putting forth some suggestions for enhancements to documentation and functionality. My focus is not on creating fancy new layers -- like ones that would build a Generative Adversarial Network or other great stuff -- but on making the existing functionality more accessible to those not completely expert in either Neural Nets or the MXNet framework on which they rest.

## Documentation

A key issue is that we have this wonderful Classify and Predict functionality that can use neural networks and that sort of integrates with the NetTrain and NetGraph machinery, but the integration is not as tight as desirable and the documentation is lacking. Here are some ideas.

Classify[data, Method -> "NeuralNetwork"] and Predict[data, Method -> "NeuralNetwork"] should provide the network used for training, including the loss function. Perhaps there could be an option that had Classify and Predict return a NetTrainResultsObject. Or perhaps ClassifierInformation could extract the network in a form that could be reused within NetTrain or otherwise edited. That way one could take a network used by Classify or Predict and (a) see more easily what it is actually doing and (b) think of tweaks that might enhance its performance. Moreover, one could see how Classify builds a net that implements the optional features such as IndeterminateThreshold and UtilityFunction. It would be a great learning tool.
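To make the ClassifierInformation suggestion concrete, here is a hedged sketch of how such an extraction might look. Classify, Method -> "NeuralNetwork", and ClassifierInformation with the "Method" property are existing functionality; the "Network" and "LossFunction" properties are assumptions that illustrate the proposal, not current API.

```
(* Train a classifier on toy data using the neural-network method *)
c = Classify[{1.2 -> "a", 1.4 -> "a", 3.1 -> "b", 3.3 -> "b"},
   Method -> "NeuralNetwork"];

(* Existing introspection: reports the method used *)
ClassifierInformation[c, "Method"]

(* Hypothetical extensions proposed above -- these properties do NOT
   currently exist; they show what the suggestion would enable *)
net = ClassifierInformation[c, "Network"];       (* an editable NetGraph *)
loss = ClassifierInformation[c, "LossFunction"]; (* the training loss   *)

(* One could then tweak the net and retrain it directly: *)
(* NetTrain[net, trainingData, LossFunction -> loss] *)
```

The point is less the exact property names than the round trip: anything Classify builds internally should be recoverable in a form NetTrain can consume.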
There should be a worked example, probably using NetGraph, showing at least one way to implement every option to Classify and Predict within the neural-network paradigm. Thus ClassPriors, FeatureExtractor, FeatureNames, FeatureTypes, IndeterminateThreshold, and UtilityFunction should all be shown; ValidationSet would be nice too. The requirements for ClassifierMeasurements to work on the output of a NetTrain operation should be clearly stated.

There is also a lot of functionality hidden in the NeuralNetworks` context. A lot of it is quite useful. Some of it should be promoted for more general use and documented appropriately.

## Functionality

OK, this is a hard one -- probably much harder than I appreciate. But perhaps a start could be made. It would be great to be able to just write regular Wolfram Language code and, where possible, have it automatically translated into a NetGraph expression, say via a function named NetGraphForm (or NetGraphCompile or something like that). Example:

```
NetGraphForm[(MapThread[#1 - #2 &] /* (Dot[{1, 2, 3}, #] &)), {"x", "y"}] ->
 NetGraph[
  {ThreadingLayer[#1 - #2 &], ConstantArrayLayer["Array" -> {1, 2, 3}], DotLayer[]},
  {{NetPort["x"], NetPort["y"]} -> 1, {1, 2} -> 3}]
```

One could then take the resulting NetGraph (netg) and do the following:

```
netg[Association["x" -> {3, 5, 8}, "y" -> {2, 16, -3}]]
```

and you'd get 12.

Right now the Neural Net Repository is filled with elaborate nets for doing wonderful and fancy things. But perhaps there could be a section of that repository devoted to simpler tasks: asymmetric cross-entropy losses, just to take a particular example.

Probably others will have additional ideas. Or it may be that my ideas are impracticable, a special case of a more general problem, or already in the works. Perhaps some constructive user feedback might help the product evolve even more successfully.
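As one illustration of the kind of "simpler task" repository entry suggested above, here is a hedged sketch of an asymmetric binary cross-entropy loss expressed as a NetGraph. The asymmetry weights 2.0 and 0.5 are arbitrary assumptions for illustration, and I have not verified this against a current release, so treat it as a sketch rather than a tested implementation.

```
(* Asymmetric binary cross-entropy: errors on positive targets (#2 near 1)
   are weighted 2.0, errors on negative targets weighted 0.5, so false
   negatives cost four times as much as false positives. Both weights
   are illustrative assumptions. *)
asymLoss = NetGraph[
  {ThreadingLayer[-(2.0*#2*Log[#1] + 0.5*(1 - #2)*Log[1 - #1]) &],
   SummationLayer[]},
  {{NetPort["Input"], NetPort["Target"]} -> 1,
   1 -> 2 -> NetPort["Loss"]}];

(* Attached to a net during training, e.g.: *)
(* NetTrain[net, data, LossFunction -> asymLoss] *)
```

A repository section of such small, composable pieces would let users learn the loss-construction idiom from a five-line example instead of reverse-engineering a large pretrained net.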