I am a developer on the machine learning team, mostly involved with the neural net repository. A discussion with my group raised these questions in my mind; they might be stupid questions, but here they are:
Question 1: Would our users rather have more models in their respective fields of application that are correct, functioning, and tested, or fewer models that are correct, functioning, tested, and properly structured? In other words, imported models are often not structured properly, and we spend a great deal of time restructuring them (for lack of a better word, prettifying them).
If I were a user, I would rather have more models that are correct and ugly than fewer models that are correct and pretty, because a great deal of developer time is spent on prettifying. My question to the users here is: is there any potential benefit to having a pretty model, as opposed to an ugly one, that could justify the developer hours spent making it pretty? I may be missing some uses of pretty, well-structured models, which is why I am reaching out to the community to see whether there are advantages that justify the tradition of prettifying published models. Dropping it would free up time to actually convert models for various application areas.
Please import the models to compare the pretty and ugly versions (there are other differences as well). The .onnx file contains only the backbone and the FPN (you can verify that the outputs are the 4 class probabilities and the box locations). A first-level restructuring that marks the relevant sections as backbone, FPN, and the rest would not take much time, but fully prettifying the net would require a lot of restructuring. For practical purposes, one would simply want to extract the relevant sections, like the backbone and/or the BiFPN, for further use.
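For concreteness, here is roughly what that extraction looks like in the Wolfram Language. This is a minimal sketch: the file name "model.onnx" and the section names "backbone" and "FPN" are assumptions about how the imported net happens to be labeled, not guaranteed by the repository.

```wolfram
(* Import the ONNX model as a Wolfram Language net *)
net = Import["model.onnx"];

(* If the net is a NetChain with a named "backbone" section,
   NetTake slices out the layers up to and including it *)
backbone = NetTake[net, "backbone"];

(* If the net is a NetGraph with a named subnetwork,
   NetExtract pulls that piece out directly *)
fpn = NetExtract[net, "FPN"];
```

Note that this only works cleanly when the sections already carry meaningful names, which is part of what "first-level restructuring" buys you even without full prettification.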
Question 2: That brings me to the next topic: how important is each of the sections on the main page you see for a model in the WNNR? https://resources.wolframcloud.com/NeuralNetRepository/resources/EfficientNet-Trained-on-ImageNet-with-AdvProp-and-AutoAugment/ For example, if someone could rank the usefulness of each section of the above shingle page, from "Resource Retrieval" to "Export to MXNet", that would be great as well.
Question 3: Would you prefer to see more sections on such shingle pages, for example sections explaining the architecture, as the original research papers do? If so, any ideas for how they should look or be formatted? How much detail should each shingle page have?
P.S. All the groups tagged here are relevant application areas of the WNNR.