Set the last layer in an RNN so the output will be a list of tokens?

Posted 3 years ago

Let's assume I have "abbcab" -> {"Apple", "Banana"} and "Bibb" -> {"Carrot", "Padding"}, where the feature is a sequence of characters (a string) and the target is a list of words. The maximum target length is 11, and the maximum feature length is 208. While building an RNN, I set the input to NetEncoder[{"Characters"}], but I want the output to be NetDecoder[{"Tokens", targetVal}]. However, I don't know how to set up the last layer to get such output. I did check tutorial/NeuralNetworksSequenceLearning#1680168479

You are referring to the right tutorial page (tutorial/NeuralNetworksSequenceLearning#1680168479 : subsection "Integer Addition with Variable-Length Output" of "Sequence-to-Sequence Learning").

So you should also define your architecture in two parts: an encoderNet, which encodes your input string into a fixed-size vector, and a decoderNet, which generates the next output token given the previously generated ones. Generation starts with a special token meaning "nothing generated yet", and the decoder can emit a special stop token to end the sequence.
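Following the tutorial's pattern, a rough sketch of the two parts might look like this (the layer size 128, the class list, and the special "Start"/"End" tokens are placeholders, not from the original post; tokens are fed to the decoder as integer indices into the class list):

```mathematica
(* token vocabulary: the possible target words plus special start/end markers *)
classes = {"Apple", "Banana", "Carrot", "Padding", "Start", "End"};

(* encoderNet: reads the character sequence and keeps the final hidden
   state as a fixed-size summary vector *)
encoderNet = NetChain[{
    UnitVectorLayer[],          (* one-hot encode the character codes *)
    GatedRecurrentLayer[128],   (* 128 is an arbitrary size choice *)
    SequenceLastLayer[]},       (* keep only the last hidden state *)
   "Input" -> NetEncoder[{"Characters"}]];

(* decoderNet: given the token indices generated so far, predicts a
   probability distribution over the next token at every position *)
decoderNet = NetChain[{
    EmbeddingLayer[128],
    GatedRecurrentLayer[128],
    NetMapOperator[LinearLayer[Length[classes]]],
    SoftmaxLayer[]}];
```

For training, the tutorial roughly combines the two into a NetGraph with teacher forcing: the encoder's summary vector seeds the decoder GRU's initial "State", the decoder is fed the target sequence shifted by one position (starting with the start token), and a CrossEntropyLossLayer compares its predictions against the actual targets.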

That is the standard way to generate variable-length sequences with neural networks: there is no single output layer that can produce sequences of varying length on its own.
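At generation time, the decoder is run one step at a time on its own previous output. A hedged sketch of a greedy decoding loop, assuming the encoderNet/decoderNet and classes names above (the NetStateObject state specification is schematic and the exact syntax may differ; the tutorial shows the precise form):

```mathematica
(* helper: position of a token string in the class list (illustrative) *)
tokenIndex[t_] := First@FirstPosition[classes, t];

(* Greedy decoding sketch: encode the input, seed the decoder's recurrent
   state with the encoder summary, then repeatedly pick the most probable
   next token until "End" or the maximum target length (11) is reached. *)
generate[input_String] := Module[{summary, stateful, next, result = {}},
  summary = encoderNet[input];
  (* NetStateObject keeps the GRU state across calls; seeding the state
     of layer 2 (the GRU) with the encoder summary is schematic here *)
  stateful = NetStateObject[decoderNet, <|{2, "State"} -> summary|>];
  next = "Start";
  Do[
   (* feed a length-1 sequence; take the index of the largest probability *)
   next = classes[[First@Ordering[stateful[{tokenIndex[next]}][[1]], -1]]];
   If[next === "End", Break[]];
   AppendTo[result, next],
   {11}];
  result]
```

The loop cap of 11 matches the maximum target length from the question; in practice the "End" token usually stops generation earlier.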

