James Briggs

Three Decoding Methods in NLP

In this video, we will cover three ways to decode the output probabilities from NLP models - greedy search, random sampling, and beam search.

Learning how to decode outputs can make a huge difference in diagnosing model issues and improving text output quality - and as an added bonus it's super easy.

One of the often-overlooked parts of sequence generation in natural language processing (NLP) is how we select our output tokens — otherwise known as decoding.

You may be thinking — we select a token/word/character based on the probability of each token assigned by our model.

This is half-true — in language-based tasks, we typically build a model that outputs an array of probabilities, where each value in that array represents the probability of a specific word/token.
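To make this concrete, here is a minimal sketch of how a model's raw scores (logits) become that array of probabilities via a softmax. The five-word vocabulary and the numbers are invented purely for illustration:

```python
import numpy as np

# Toy vocabulary and raw model scores (logits) for a single step.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([2.0, 0.5, 0.3, 1.1, 0.1])

# Softmax turns raw scores into a probability distribution that sums to 1.
probs = np.exp(logits) / np.exp(logits).sum()

for token, p in zip(vocab, probs):
    print(f"{token}: {p:.3f}")
```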

At this point, it might seem logical to simply select the token with the highest probability. Well, not really — this can create some unforeseen consequences, as we will see soon.
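That highest-probability approach is exactly greedy decoding. As a single-step sketch (again with an invented distribution), it amounts to a simple argmax over the array:

```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]
probs = np.array([0.47, 0.11, 0.09, 0.21, 0.12])  # invented distribution

# Greedy decoding: always pick the single most probable token,
# no matter how narrowly it beats the runner-up.
print(vocab[int(np.argmax(probs))])  # -> "the"
```

With a real model, that choice is fed back in as input at every step, and always committing to the top token is what tends to produce repetitive, looping text.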

When we are selecting a token in machine-generated text, we have a few alternative methods for performing this decoding step — and options for modifying its exact behavior too.

In this video, we will explore three different methods for selecting our output token. These are:

  • Greedy Decoding
  • Random Sampling
  • Beam Search
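As a preview, below is a hedged sketch of all three strategies side by side. The `toy_next_probs` function stands in for a real model and is entirely invented; each decoder differs only in how it turns the probability array into the next token:

```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "<end>"]

def toy_next_probs(sequence):
    # Stand-in for a real model: a deterministic pseudo-random
    # distribution over the next token, derived from the sequence so far.
    rng = np.random.default_rng(abs(hash(tuple(sequence))) % (2**32))
    logits = rng.normal(size=len(vocab))
    return np.exp(logits) / np.exp(logits).sum()

# 1. Greedy decoding: take the argmax at every step.
def greedy_decode(steps=5):
    seq = ["the"]
    for _ in range(steps):
        seq.append(vocab[int(np.argmax(toy_next_probs(seq)))])
    return seq

# 2. Random sampling: draw the next token according to its probability.
def sample_decode(steps=5):
    rng = np.random.default_rng(0)
    seq = ["the"]
    for _ in range(steps):
        seq.append(str(rng.choice(vocab, p=toy_next_probs(seq))))
    return seq

# 3. Beam search: keep the `width` best sequences at every step,
#    scored by their summed log-probabilities.
def beam_decode(steps=5, width=3):
    beams = [(["the"], 0.0)]  # (sequence, cumulative log-prob)
    for _ in range(steps):
        candidates = [
            (seq + [token], score + np.log(p))
            for seq, score in beams
            for token, p in zip(vocab, toy_next_probs(seq))
        ]
        # Prune back down to the top `width` candidates.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:width]
    return beams[0][0]

print("greedy: ", greedy_decode())
print("sampled:", sample_decode())
print("beam:   ", beam_decode())
```

Greedy commits to a single path, random sampling trades determinism for variety, and beam search spends extra compute to explore several paths before committing — the trade-offs the rest of the video digs into.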
