Clarify sequence-to-sequence learning with neural nets in WL?

Posted 6 years ago

I've been learning about recurrent neural networks lately and I think I'm starting to get the basic idea of how they work. I'm particularly interested in the sequence transformation capabilities of these nets for applications in both NLP and generative art. I've played with a few simple (non-recurrent) nets in Mathematica, but would like to learn more about how to implement recurrent sequence-to-sequence learning.

I've read the Wolfram tutorial Sequence Learning and NLP with Neural Networks, and I'm particularly interested in the section titled Integer Addition with Variable-Length Output. If I understand correctly, sequence-to-sequence learning involves converting a sequence to a vector, and then converting that vector into another sequence. I mostly understand the "sequence-to-vector" part, which uses layers like SequenceLastLayer[]. However, the tutorial doesn't make it entirely clear to me how the "vector-to-sequence" part works. Are there other, more descriptive examples somewhere?
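
To make the question concrete, here is my current understanding of the "sequence-to-vector" half; this is just a toy sketch I put together myself, with made-up layer sizes and vocabulary:

    (* sequence -> vector: read a variable-length sequence of token ids
       (integers 1..11 here) and keep only the final recurrent state *)
    encoder = NetChain[{
        EmbeddingLayer[64, 11],   (* each token id -> a 64-dim vector *)
        GatedRecurrentLayer[128], (* emits one 128-dim state per step *)
        SequenceLastLayer[]       (* keep only the last state *)
      }];

    (* e.g. NetInitialize[encoder][{3, 1, 4, 1, 5}] returns a single
       128-dim vector, regardless of the input length *)

It's the reverse direction, from that single vector back to a variable-length sequence, that I can't picture.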

POSTED BY: Andrew Campbell

Hi, Andrew,

The following workshop presentation notebook may be a helpful resource:

Hands-on Neural Networks for Text and Audio (Workshop)

by Jerome Louradour, Timothee Verdier
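
In the meantime, here is a rough illustration of the usual "vector-to-sequence" trick (my own sketch with made-up sizes, not code from the workshop). During training one uses teacher forcing: the encoder's summary vector becomes the initial "State" of a decoder recurrent layer, and the decoder reads the target sequence shifted right by one token ("Prev" below), predicting the next token at every step:

    (* teacher-forcing seq2seq sketch: the encoded vector seeds the
       decoder GRU; "Prev" is the target sequence shifted right by one *)
    seq2seq = NetGraph[
      <|
        "embedEnc" -> EmbeddingLayer[64, 11],
        "encGRU"   -> GatedRecurrentLayer[128],
        "toVector" -> SequenceLastLayer[],  (* sequence -> vector *)
        "embedDec" -> EmbeddingLayer[64, 12], (* 12 = 11 tokens + a start token *)
        "decGRU"   -> GatedRecurrentLayer[128],
        "predict"  -> NetMapOperator[       (* per-step token probabilities *)
          NetChain[{LinearLayer[12], SoftmaxLayer[]}]]
      |>,
      {
        NetPort["Input"] -> "embedEnc" -> "encGRU" -> "toVector",
        "toVector" -> NetPort[{"decGRU", "State"}],  (* vector -> sequence *)
        NetPort["Prev"] -> "embedDec" -> "decGRU" -> "predict" -> NetPort["Output"]
      }];

You would train this with NetTrain on triples of "Input", "Prev", and target "Output" sequences, with a CrossEntropyLossLayer attached to the output. At inference time there is no target to shift, so you run the decoder one step at a time: start from a start-of-sequence token, feed each predicted token back in as the next "Prev" element, and stop when an end token appears. If I remember correctly, the Integer Addition with Variable-Length Output section of the tutorial builds this kind of generation loop step by step, and the workshop notebook above works through more examples.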

POSTED BY: Kotaro Okazaki