Thank you for taking the time to pose these questions to the users!
Question 1: Importance of prettified structures
I am much more interested in fine-tuning models than in simply running them. Having some structure to models does make it easier to properly ‘cut off’ the final layers and replace them with my own for training, especially when the network graph is complex, with lots of branching. It is particularly nice when residual blocks or ‘bottleneck’ blocks are grouped together.
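For example, something like this is all I really need; a minimal sketch assuming the ResNet-50 entry from WNNR, where the number of layers to drop and the 10-class head are illustrative:

    (* chop the pretrained classifier head off a WNNR model; the exact
       number of layers to drop depends on the particular network *)
    net = NetModel["ResNet-50 Trained on ImageNet Competition Data"];
    trunk = NetDrop[net, -2]; (* drops the final LinearLayer and SoftmaxLayer *)

    (* attach a fresh head for a hypothetical 10-class problem *)
    newNet = NetChain[{trunk, LinearLayer[10], SoftmaxLayer[]},
      "Output" -> NetDecoder[{"Class", Range[10]}]]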
That being said, I do not really care if names are human-readable. 1, 2, 3, … are fine layer/block names for my purposes. Also, I do not really care if there is any cleanup below the ‘block’ level.
Finally, if it were a choice between having models painstakingly structured by the developers and simply having more models, I would much prefer more models (with usage and fine-tuning examples).
Question 2: WNNR section importance
- Training data used and validation accuracy
- Resource retrieval and NetModel parameters/versions
- Basic Usage
- Transfer learning
- Feature extraction (see the short sketch after this list)
- Net information
- Weight Visualization
- MXNet Export
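As a reference point for the ‘Feature extraction’ item, here is roughly all that section needs to show; a sketch assuming the ResNet-50 entry, with a stand-in example image:

    (* the chopped trunk from above doubles as a feature extractor:
       the pooled activations are a 2048-dimensional vector per image *)
    extractor = NetDrop[
      NetModel["ResNet-50 Trained on ImageNet Competition Data"], -2];
    features = extractor[ExampleData[{"TestImage", "House"}]]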
I would consider weight visualization to be a fun ‘toy’ example, but largely unnecessary since it is covered in an article on the website (and even in a few talks Stephen Wolfram has given):
https://www.wolfram.com/language/12/neural-network-framework/visualize-the-insides-of-a-neural-network.html?product=mathematica
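For what it is worth, the visualization itself boils down to a few lines; a sketch assuming the LeNet entry, whose first layer is a ConvolutionLayer:

    lenet = NetModel["LeNet Trained on MNIST Data"];
    weights = Normal@NetExtract[lenet, {1, "Weights"}]; (* dimensions {20, 1, 5, 5} *)
    (* render each (tiny) 5x5 kernel of the first convolution layer as an image *)
    ImageAdjust[Image[#]] & /@ weights[[All, 1]]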
MXNet/ONNX Export/Import are already covered in the documentation for “MXNet” Import/Export, so I do not know how important this section is (unless the export is specifically different or complicated for a particular network). I suppose having it could help those who are new to the neural-network functionality learn that they can export models in a stand-alone MXNet/ONNX format.
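For anyone new to it, the export itself is essentially a one-liner; a sketch (the ONNX export assumes a recent enough WL version):

    net = NetModel["LeNet Trained on MNIST Data"];
    Export["lenet.json", net, "MXNet"] (* also writes lenet.params with the weights *)
    Export["lenet.onnx", net] (* ONNX export, available in newer WL versions *)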
Conclusions
I largely agree with the answer on StackExchange. I think transfer learning is one of the most important sections, and having more clean examples of how to do it matters. I personally have never used the ‘construction notebooks’ because I did not know they existed, but they seem like a really good resource for learning how to build complicated neural networks in WL. I think having better examples (perhaps GitHub repositories, or a dedicated documentation section with worked examples of building common architectures such as Transformer networks, LSTMs, etc.) would be even more valuable to users than having construction notebooks for every network in WNNR.
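To make the transfer-learning point concrete, this is roughly the shape of the example I would want on every WNNR page; a sketch repeating the chop from Question 1, with hypothetical random images standing in for a real labeled dataset:

    (* chop the pretrained head and attach a new 2-class one *)
    trunk = NetDrop[
      NetModel["ResNet-50 Trained on ImageNet Competition Data"], -2];
    newNet = NetChain[{trunk, LinearLayer[2], SoftmaxLayer[]},
      "Output" -> NetDecoder[{"Class", {"A", "B"}}]];

    (* stand-in data: random RGB images with random labels *)
    trainingData = Table[
      RandomImage[1, {224, 224}, ColorSpace -> "RGB"] -> RandomChoice[{"A", "B"}], 32];

    (* freeze the pretrained trunk (layer 1) and train only the new head *)
    trained = NetTrain[newNet, trainingData,
      LearningRateMultipliers -> {1 -> 0}, MaxTrainingRounds -> 3]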