
Rijul Rajesh

Understanding Attention Mechanisms – Part 5: How Attention Produces the First Output

In the previous article, we stopped after using the softmax function to convert the similarity scores into attention weights: 0.4 for “Let’s” and 0.6 for “go”.

We scale the values for the first encoded word, “Let’s”, by its attention weight of 0.4, and the values for the second encoded word, “go”, by its attention weight of 0.6.

Finally, we add the scaled values together.


These sums combine the separate encodings for both input words, “Let’s” and “go”, based on their similarity to EOS.
These are the attention values for EOS.
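The weighted sum above can be sketched in a few lines of Python. The value vectors here are made-up two-dimensional numbers for illustration; in a real model they come from the encoder.

```python
# Hypothetical value vectors for the two encoded input words
values_lets = [1.0, 0.5]   # values for "Let's" (illustrative numbers)
values_go   = [0.2, 0.8]   # values for "go" (illustrative numbers)

# Softmax weights from the previous step
w_lets, w_go = 0.4, 0.6

# Scale each value vector by its weight, then add them together.
# The result is the attention values for EOS.
attention_eos = [
    w_lets * vl + w_go * vg
    for vl, vg in zip(values_lets, values_go)
]
print(attention_eos)  # → [0.52, 0.68]
```

Because the weights sum to 1, the result is a blend of the two encodings that leans toward “go”, the word more similar to EOS.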


Now, to determine our first output word, we need to:

  • Feed the attention values into a fully connected layer
  • Also include the encoding for EOS
  • Then pass everything through a softmax function

This allows the model to select the first output word, “vamos”.
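The three steps above can be sketched as follows. The EOS encoding, the fully connected layer’s weights, and the tiny three-word vocabulary are all hypothetical numbers chosen for illustration, not trained values.

```python
import math

attention_eos = [0.52, 0.68]   # attention values from the previous step
encoding_eos  = [0.91, 0.38]   # hypothetical encoding for EOS

vocab = ["vamos", "ir", "<EOS>"]

# Fully connected layer: one row of (hypothetical) weights per vocabulary word
fc_weights = [
    [1.2, 0.8, 0.5, 0.9],   # scores "vamos"
    [0.3, 0.1, 0.4, 0.2],   # scores "ir"
    [0.2, 0.5, 0.1, 0.3],   # scores "<EOS>"
]

# Step 1 & 2: feed the attention values and the EOS encoding into the layer
x = attention_eos + encoding_eos
logits = [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in fc_weights]

# Step 3: softmax turns the logits into a probability for each word
exps = [math.exp(l) for l in logits]
probs = [e / sum(exps) for e in exps]

# The model selects the highest-probability word
prediction = vocab[probs.index(max(probs))]
print(prediction)  # → vamos
```

With these illustrative weights, “vamos” gets the largest logit and therefore the highest softmax probability, so it is chosen as the first output word.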

But the model hasn’t generated EOS yet, so decoding isn’t finished.
We will explore how to produce the remaining output words in the next article.


Looking for an easier way to install tools, libraries, or entire repositories?
Try Installerpedia: a community-driven, structured installation platform that lets you install almost anything with minimal hassle and clear, reliable guidance.

Just run:

ipm install repo-name

… and you’re done! 🚀


🔗 Explore Installerpedia here
