User Portlet
Featured Contributor
| Discussions |
|---|
| This algorithm was originally called Model Synthesis. I created it for my PhD. I published it in 2007, nine years before Wave Function Collapse, and they are the same algorithm. If you're going to call it Wave Function Collapse, you should at least... |
| It's a bit hidden, but you can use the same spec as the nets and layers (e.g. NetChain): `NetEncoder[{"Function", Flatten[IntegerDigits[#, 2, 8]] &, "Varying"}]`. Depending on your application, you can also avoid flattening in the encoder... |
| &[Wolfram Notebook][1] [1]: https://www.wolframcloud.com/obj/d1bbed68-2aed-4fd9-9297-d61e98fb99ee |
| Follow-up question: The training panel returned by `Information[modelname]` offers several views: "Learning curve", "Accuracy", and "Learning curves for all algorithms". They can be selected in the notebook after the panel is rendered. Is it... |
| I think you should use a generator function, for example: `enc = NetEncoder[{"Function", Flatten[IntegerDigits[#, 2, 8]] &, {192}}]`, `b = ByteArray[Table[RandomInteger[{0, 255}], 10 * 24]]`, `genTrain = Function[ArrayReshape[Normal[b[[1 ;;`... |
| Nikolay -- thanks so much for your response. The private function you mentioned, `MachineLearning`file23DecisionTree`PackagePrivate`toTree@p[[1]]["Model"]["Tree"]`, did indeed produce a visual of a tree. Unfortunately, the information in... |
| Thank you for the clarification. Could you please provide a few references? I am not aware of this biased estimator for the variance. |
| Hi Gianluca, and thanks for sharing this! One question: why do you use a shared NetArray as the "Weights" of the LinearLayer? |
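The `NetEncoder` pattern quoted in the discussions above can be sketched as a standalone Wolfram Language example (a minimal sketch only, mirroring the `"Function"` encoder spec from the snippets; the symbol names `enc` and `bytes` are illustrative, not from the original posts):

```wolfram
(* A "Function" NetEncoder that unpacks each byte (0-255) into its 8 binary
   digits; "Varying" declares variable-length output, as in the snippet above. *)
enc = NetEncoder[{"Function", Flatten[IntegerDigits[#, 2, 8]] &, "Varying"}];

(* Illustrative input: a list of byte values to be encoded as a flat bit vector *)
bytes = {5, 255};
enc[bytes]
```

With a fixed-length corpus, the third element can instead be an explicit dimension such as `{192}`, as the generator-function snippet does, so downstream layers see a known input shape.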