Message Boards

[WSS20] Text generation using GANs

POSTED BY: Suman Sigdel
12 Replies

Nice project! I wanted to run your code, but there are many missing definitions (e.g. convolutionBlock, ArrayLayer, vocabulary, some .wlnet files, etc.). Do you think you can add them so it is runnable?

POSTED BY: Michael Sollami

Hi Mike,

Thank you for your interest in my project. I have uploaded the .wlnet files here: https://github.com/sumansid/Text-Generation-using-GANs and will also add the full notebook. Please let me know if you have any questions.

POSTED BY: Suman Sigdel

The full notebook can be found here: https://notebookarchive.org/2020-07-6hfo8lo

POSTED BY: Suman Sigdel

You have earned the Featured Contributor Badge! Your exceptional post has been selected for our editorial column Staff Picks http://wolfr.am/StaffPicks and your profile is now distinguished by a Featured Contributor Badge and is displayed on the Featured Contributor Board. Thank you!

POSTED BY: EDITORIAL BOARD

Thank you so much!

POSTED BY: Suman Sigdel

Impressive! Really well written. I like the GIF!

POSTED BY: Pedro Cabral

Thanks, Pedro! I'm glad you liked the project :D

POSTED BY: Suman Sigdel

GANs are very unstable during training because the two networks depend on each other to improve. Although several stabilization methods have been incorporated into GAN architectures, they still suffer from non-convergence. Because of this instability, early stopping fails to retrieve the best-performing GAN model. To address this, we save a checkpoint of the model every round, along with an n-gram performance metric, and use the metric to select the best-performing model.

This stuck out to me as similar to my experience implementing a GAN in TensorFlow 2. The instabilities were maddening, and I'm still trying to fix some problems with mode collapse. Glad to hear your model-selection strategy eventually worked.
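For concreteness, the checkpoint-plus-metric selection described in the quoted paragraph could be sketched as follows. This is a minimal Python illustration, not the code from the project: the post does not say which n-gram metric was used, so distinct-n (the fraction of unique n-grams among generated samples, a common proxy for mode collapse) stands in as an assumption, and the function names are hypothetical.

```python
from collections import Counter

def distinct_n(samples, n=2):
    """Fraction of unique n-grams across generated samples.
    A simple n-gram diversity metric; higher is better
    (values near zero indicate mode collapse)."""
    ngrams = Counter()
    for tokens in samples:
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0

def select_best_checkpoint(checkpoints):
    """checkpoints: list of (round_id, generated_samples) pairs,
    one saved per training round. Returns the round whose samples
    score highest on the n-gram metric, plus that score."""
    scored = [(distinct_n(samples), rid) for rid, samples in checkpoints]
    best_score, best_round = max(scored)
    return best_round, best_score
```

With a metric like this, a collapsed checkpoint that repeats the same sentence scores near zero while a diverse one scores near one, so the selection step naturally skips the rounds where training degenerated.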

POSTED BY: Blair Birdsell

Thank you. Good to hear that you had a similar experience. Check out GAN Hacks by Soumith Chintala: https://github.com/soumith/ganhacks.
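For readers following that link: one widely cited trick from that list is one-sided label smoothing, where the discriminator's targets for real samples are softened to something like 0.9 instead of a hard 1.0 so it doesn't become overconfident. A minimal stdlib-only Python sketch (the 0.9 value and the helper names are illustrative, not from the original post):

```python
import math

def bce(pred, target):
    """Binary cross-entropy for a single prediction, clamped away from 0/1."""
    eps = 1e-7
    pred = min(max(pred, eps), 1 - eps)
    return -(target * math.log(pred) + (1 - target) * math.log(1 - pred))

def d_loss_real(preds, smooth=0.9):
    """Discriminator loss on a batch of real-sample predictions,
    with one-sided label smoothing: targets are `smooth` (e.g. 0.9)
    rather than a hard 1.0."""
    return sum(bce(p, smooth) for p in preds) / len(preds)
```

Fake samples keep their hard 0 targets; only the real side is smoothed, which is why the trick is called "one-sided".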

POSTED BY: Suman Sigdel

Excellent work.

POSTED BY: Osama Sarhan
Posted 2 years ago

Missing definitions such as ArrayLayer... Notebook does not work... Any ideas?

POSTED BY: Phil Nguyen
