Rijul Rajesh

Understanding Seq2Seq Neural Networks – Part 6: Decoder Outputs and the Fully Connected Layer

In the previous article, we looked at the embedding values in the encoder and the decoder.

The encoder and decoder handle different input words and symbols (tokens) and have different weights, which results in different embedding values for each token.
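To make this concrete, here is a minimal NumPy sketch of how an embedding layer is just a weight matrix indexed by token: each network has its own vocabulary and its own weights, so even a token that appears in both (like `<EOS>`) gets different embedding values. The vocabularies and random weights below are made up for illustration, not the values from the article's figures.

```python
import numpy as np

# Hypothetical vocabularies: the encoder embeds English tokens,
# the decoder embeds Spanish tokens, each with its own weights.
encoder_vocab = {"Let's": 0, "go": 1, "<EOS>": 2}
decoder_vocab = {"ir": 0, "vamos": 1, "y": 2, "<EOS>": 3}

rng = np.random.default_rng(0)
# Each row of a weight matrix is the learned embedding for one token
# (2 embedding values per token, to match the article's tiny network).
encoder_embed = rng.normal(size=(len(encoder_vocab), 2))
decoder_embed = rng.normal(size=(len(decoder_vocab), 2))

def embed(weights, vocab, token):
    # An embedding lookup is just selecting a row of the weight matrix.
    return weights[vocab[token]]

# Same token string, different weights -> different embedding values.
print(embed(encoder_embed, encoder_vocab, "<EOS>"))
print(embed(decoder_embed, decoder_vocab, "<EOS>"))
```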

Because we have just finished encoding the English sentence "Let's go," the decoder starts with the embedding values for the `<EOS>` token.

The decoder then performs computations using two layers of LSTMs, each with two LSTM cells.
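Here is a small NumPy sketch of that stacked structure: one LSTM step per layer, with a hidden size of 2 so that each layer has two LSTM cells. The weights and the input embedding values are random placeholders, and in a real decoder the memories `h`/`c` would be initialized from the encoder's final memories rather than zeros.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: x is the input, h is the short-term memory
    (hidden state), c is the long-term memory (cell state)."""
    z = W @ x + U @ h + b               # all four gates computed at once
    i, f, o, g = np.split(z, 4)         # input, forget, output, candidate
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # update long-term memory
    h_new = sigmoid(o) * np.tanh(c_new)               # new short-term memory
    return h_new, c_new

rng = np.random.default_rng(1)
hidden = 2  # two LSTM cells per layer, as in the article

def init_params(input_size):
    # Stacked gate weights: 4 gates x 2 cells = 8 rows.
    return (rng.normal(size=(4 * hidden, input_size)),
            rng.normal(size=(4 * hidden, hidden)),
            np.zeros(4 * hidden))

layer1, layer2 = init_params(2), init_params(hidden)

# One decoder step, starting from a made-up <EOS> embedding.
x = np.array([0.5, -0.3])
h1 = c1 = h2 = c2 = np.zeros(hidden)
h1, c1 = lstm_step(x, h1, c1, *layer1)   # bottom layer reads the embedding
h2, c2 = lstm_step(h1, h2, c2, *layer2)  # top layer reads the bottom layer's output
print(h2)  # top-layer hidden states, headed for the fully connected layer
```

Note how the second layer's input is the first layer's hidden state, which is what "two layers of LSTMs" means in practice.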

The output values (the short-term memories, or hidden states) from the top layer of LSTM cells are then transformed using additional weights and biases in what is called a fully connected layer.
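As a preview of that transformation, the sketch below applies a fully connected layer (weights times hidden states, plus biases) to turn the two top-layer hidden-state values into one score per output token. The Spanish vocabulary, weights, and the softmax step at the end are assumptions for illustration; the article itself only says the hidden states are transformed by additional weights and biases.

```python
import numpy as np

# Hypothetical output vocabulary for the decoder (Spanish side).
vocab = ["ir", "vamos", "y", "<EOS>"]

rng = np.random.default_rng(2)
h = np.array([0.7, -0.2])              # top-layer hidden states (2 values)
W = rng.normal(size=(len(vocab), 2))   # fully connected layer's weights...
b = np.zeros(len(vocab))               # ...and biases

logits = W @ h + b                     # one raw score per vocabulary token
probs = np.exp(logits) / np.exp(logits).sum()  # softmax turns scores into probabilities
print(vocab[int(np.argmax(probs))])    # the token with the highest probability
```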

We will explore this further in the next article.


Looking for an easier way to install tools, libraries, or entire repositories?
Try Installerpedia: a community-driven, structured installation platform that lets you install almost anything with minimal hassle and clear, reliable guidance.

Just run:

```shell
ipm install repo-name
```

… and you’re done! 🚀


🔗 Explore Installerpedia here
