Featured Contributor
| Discussions |
|---|
| This is not supported at the moment, but it's on our to-do list. |
| You have to pay attention to spaces: ``` Select[NetExtract[lm, {"Output", "Labels"}], StringContainsQ["hitman", IgnoreCase -> True]] (* {" Whitman", " Hitman"} *) ``` The vocabulary token is `" Hitman"` with a capital H and a space at the... |
| This is very nice and well put together! Thanks for sharing it with the community. |
| Hi Ethan, to answer your questions: 1. We are working on a stable diffusion model for the [net repository][1]. 2. Compression tools for models are nice, but right now the core priority is to provide support for multiple frameworks in order to get... |
| Hello there! Tomorrow (16 November 2022) I am going to demo our Machine Learning functionality and the latest additions and development directions. We'll look at the full ML stack, including a focus on the current efforts in model... |
| Using the same dataset ``` data = ResourceData["Sample Data: Titanic Survival"]; titanic = Classify[data -> "SurvivalStatus"] ``` after training, you can extract the estimated data distribution using `Information` ``` dist =... |
| What you are looking for seems to be ``` AggregationLayer[Max] ``` which will take `[channels, height, width]` and return `[channels]` using `Max` to aggregate the other dimensions. |
| This network is solving a regression task and has no error rate. Building a classification network that uses a cross-entropy loss (e.g. one ending with a logistic sigmoid or a softmax layer) will automatically add the error rate among... |
| Hi Gianluca, and thanks for sharing this! One question: why do you map `NetTrain` instead of using `MaxTrainingRounds`? Do you want to reset the learning rate? |
| At the moment the clustering metrics are all internal and used to optimize hyper-parameters. We have a plan to expose them, and if there is some interest, all the better. For the time being, and keeping in mind that this code might change in the... |
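
The `AggregationLayer[Max]` answer above describes reducing a `[channels, height, width]` tensor to `[channels]` by taking the maximum over the spatial dimensions (global max pooling). As a rough sketch of the same operation outside Wolfram Language, here is a hypothetical NumPy analogue (the function name `global_max_pool` is illustrative, not part of any library):

```python
import numpy as np

def global_max_pool(x):
    """Reduce a [channels, height, width] array to [channels]
    by taking the max over the spatial axes (height, width)."""
    return x.max(axis=(1, 2))

# 2 channels, each a 3x4 spatial grid
x = np.arange(24).reshape(2, 3, 4)
print(global_max_pool(x))  # -> [11 23]
```

Each output entry is the largest value in that channel's spatial grid, matching what `AggregationLayer[Max]` does with its default aggregated dimensions.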