Message Boards

Clarify sequence-to-sequence learning with neural nets in WL?


I've been learning about recurrent neural networks lately and I think I'm starting to get the basic idea of how they work. I'm particularly interested in the sequence transformation capabilities of these nets for applications in both NLP and generative art. I've played with a few simple (non-recurrent) nets in Mathematica, but would like to learn more about how to implement recurrent sequence-to-sequence learning.

I've read the Wolfram tutorial Sequence Learning and NLP with Neural Networks, and I'm particularly interested in the section titled Integer Addition with Variable-Length Output. If I understand correctly, sequence-to-sequence learning involves converting a sequence to a vector, and then converting that vector into another sequence. I understand (mostly) the "sequence-to-vector" parts with things like SequenceLastLayer[]. However, I'm still not entirely clear from the tutorial how the "vector-to-sequence" part of this works. Are there other, more descriptive examples somewhere?
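To make my question concrete, here is a rough sketch (quite possibly wrong) of what I imagine the vector-to-sequence half to look like: a recurrent decoder whose initial "State" port receives the encoding vector, trained with teacher forcing. All layer names and sizes below are placeholders I made up, and I've left out the training setup entirely:

```wolfram
(* Rough sketch only -- sizes and names are made up.
   Idea: feed the encoding vector into the decoder GRU's initial
   "State" port, and give the GRU the target sequence with its last
   token dropped (teacher forcing), predicting one token per step. *)
decoder = NetGraph[
  <|
   "shift" -> SequenceMostLayer[],             (* drop last target token *)
   "embed" -> EmbeddingLayer[16],              (* token -> vector *)
   "gru"   -> GatedRecurrentLayer[64],         (* state size = encoding size *)
   "map"   -> NetMapOperator[LinearLayer[12]], (* per-step class scores *)
   "soft"  -> SoftmaxLayer[]
  |>,
  {
   NetPort["Target"] -> "shift" -> "embed" -> "gru" -> "map" -> "soft",
   NetPort["Vector"] -> NetPort[{"gru", "State"}]
  }]
```

Is this roughly the right shape, and how does generation work at inference time, when there is no target sequence to shift?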

POSTED BY: Andrew Campbell
8 days ago

Hi, Andrew,

The following workshop presentation notebook may be helpful:

Hands-on Neural Networks for Text and Audio (Workshop)

by Jerome Louradour, Timothee Verdier

POSTED BY: Kotaro Okazaki
5 days ago
