Hi,
I was experimenting with NetGANOperator,
and training seemed to go fine: the loss values evolved more or less as I would have expected.
However, once training was done, I tried generating images from a few different latent vectors and always got the same result (visually, at least). That seemed odd, and I figured the only way the output could be independent of the input is if all the weights had converged to zero, leaving only the biases.
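To rule out a merely visual coincidence, one could compare the outputs numerically rather than by eye. The following is only a sketch under some assumptions: `gan` stands for the trained net returned by NetTrain, the latent dimension is taken to be 10, and extracting the generator with NetExtract[gan, "Generator"] is an assumption about how NetGANOperator exposes its sub-networks:

    (* hypothetical: gan = trained net from NetTrain; latent dimension assumed to be 10 *)
    gen = NetExtract[gan, "Generator"];  (* assumption: the generator part is named "Generator" *)
    outs = Table[gen[RandomVariate[NormalDistribution[], 10]], 4];
    (* if the generator has truly collapsed, these pairwise differences are all ~0 *)
    Max[Abs[outs[[1]] - #]] & /@ Rest[outs]

If those maxima are exactly (or numerically) zero, the generator really is ignoring its input, which would support the all-weights-zero hypothesis.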
Then I wondered: isn't that a trivial solution for a GAN? The generator always produces the same output, and the discriminator only has to learn to recognize that particular output.
I understand this question is not specific to Mathematica but rather about machine learning in general, but I thought I could get some explanations from this community.
Here is the notebook I was working on: