Dmitry Romanoff

Understanding Tokenization: A Deep Dive into Tokenizers with Hugging Face

Tokenization is a fundamental concept in natural language processing (NLP), especially when dealing with language models. In this article, we'll explore what a tokenizer does, how it works, and how we can leverage it using Hugging Face's transformers library [https://huggingface.co/docs/transformers/index] for a variety of applications.

What is a Tokenizer?

At its core, a tokenizer breaks down raw text into smaller units called tokens. These tokens can represent words, subwords, or characters, depending on the type of tokenizer being used. The goal of tokenization is to convert human-readable text into a form that is more interpretable by machine learning models.

Tokenization is critical because most models don't understand text directly. Instead, they need numbers to make predictions, which is where the tokenizer comes in: it takes in text, processes it, and outputs a numerical representation (token IDs) that the model can work with.

In this post, we'll walk through how tokenization works using a pre-trained model from Hugging Face, explore the different methods available in the transformers library, and look at how tokenization influences downstream tasks such as sentiment analysis.

Setting Up the Model and Tokenizer

First, let's import the necessary libraries from the transformers package and load a pre-trained model. We'll use a DistilBERT model fine-tuned on SST-2 (the Stanford Sentiment Treebank) for sentiment analysis.

from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the pre-trained model and tokenizer
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Create the classifier pipeline
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
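
Before looking inside the tokenizer, it's worth a quick sanity check that the pipeline works end to end. This is just an illustrative call; the exact score in the comment is a placeholder and will vary between runs and library versions:

# The pipeline tokenizes the text, runs the model, and turns the
# logits into a human-readable label and score
print(classifier("I love you!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.9998...}]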

Tokenizing Text

With the model and tokenizer set up, we can start tokenizing a simple sentence. Here's an example sentence:

sentence = "I love you! I love you! I love you!"

Let’s break down the tokenization process step by step:

1. Tokenizer Output: Input IDs and Attention Mask

When you call the tokenizer directly, it processes the text and outputs several key components:

  • input_ids: A list of integer IDs representing the tokens. Each token corresponds to an entry in the model's vocabulary.
  • attention_mask: A list of ones and zeros indicating which tokens should be attended to by the model. This is especially useful when dealing with padding.

res = tokenizer(sentence)
print(res)

Output:

{
    'input_ids': [101, 1045, 2293, 2017, 999, 1045, 2293, 2017, 999, 1045, 2293, 2017, 999, 102],
    'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
}

  • input_ids: The integers are the vocabulary indices of the tokens. For example, 1045 corresponds to "i", 2293 to "love", 2017 to "you", and 999 to "!". The 101 and 102 at either end are special tokens, covered below.
  • attention_mask: The ones indicate that all tokens should be attended to. If there were padding tokens, you would see zeros in this list, indicating they should be ignored.

2. Splitting the Sentence into Tokens

If you're curious about how the tokenizer splits the sentence into individual tokens, you can use the tokenize() method. This will give you a list of tokens without the underlying IDs:

tokens = tokenizer.tokenize(sentence)
print(tokens)

Output:

['i', 'love', 'you', '!', 'i', 'love', 'you', '!', 'i', 'love', 'you', '!']

Notice that tokenization breaks the sentence down into smaller meaningful units. The tokenizer also converts all characters to lowercase, because distilbert-base-uncased-finetuned-sst-2-english is an uncased (case-insensitive) model.
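
Our example sentence happens to contain only whole words that exist in the vocabulary. Because the tokenizer uses a WordPiece vocabulary, a word it does not know is split into subword pieces marked with the "##" continuation prefix. A quick sketch; the split shown in the comment is indicative and depends on the vocabulary:

# Out-of-vocabulary words are broken into subword pieces
print(tokenizer.tokenize("Tokenization is fascinating!"))
# e.g. ['token', '##ization', 'is', 'fascinating', '!']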

3. Converting Tokens to IDs

Once we have the tokens, the next step is to convert them into their corresponding integer IDs using the convert_tokens_to_ids() method:

ids = tokenizer.convert_tokens_to_ids(tokens)
print(ids)

Output:

[1045, 2293, 2017, 999, 1045, 2293, 2017, 999, 1045, 2293, 2017, 999]

Each token has a unique integer ID that represents it in the model's vocabulary. These IDs are the actual input that the model uses for processing.

4. Decoding the IDs Back to Text

Finally, you can decode the token IDs back into a human-readable string using the decode() method:

decoded_string = tokenizer.decode(ids)
print(decoded_string)

Output:

i love you! i love you! i love you!

Notice that the decoded string is very close to the original input; the only difference is that it is lowercased, which is standard behavior for the "uncased" model. Also note that no special tokens appear here, because these IDs came from convert_tokens_to_ids() rather than from calling the tokenizer directly.

Understanding Special Tokens

In the input_ids output, you may have noticed two extra IDs: 101 and 102. These are special tokens that BERT-style models add around the input to mark its boundaries. Specifically:

  • 101 is the [CLS] (classification) token, inserted at the start of the sequence.
  • 102 is the [SEP] (separator) token, which marks the end of the sequence (or the boundary between sentence pairs).

These special tokens help the model understand the boundaries of the input text.
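
You can verify what these IDs stand for by mapping them back to their token strings, or by decoding the full input_ids from the res object above. A small sketch; the printed strings reflect the standard BERT-style vocabulary used by this model:

# Map the special IDs back to their token strings
print(tokenizer.convert_ids_to_tokens([101, 102]))
# ['[CLS]', '[SEP]']

# Decoding the full input_ids keeps the special tokens visible
print(tokenizer.decode(res['input_ids']))
# [CLS] i love you! i love you! i love you! [SEP]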

The Attention Mask

As mentioned earlier, the attention_mask helps the model distinguish between real tokens and padding tokens. In this case, the attention_mask is a list of ones, indicating that all tokens should be attended to. If there were padding tokens, you would see zeros in the mask to instruct the model to ignore them.
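
To see zeros in practice, tokenize a batch of sentences of different lengths with padding enabled. A minimal sketch; the mask values in the comment are illustrative:

# The shorter sentence is padded to the length of the longer one, and the
# padded positions get 0 in the attention mask so the model ignores them
batch = tokenizer(["I love you!", "I love you! I love you!"], padding=True)
print(batch['attention_mask'])
# e.g. [[1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
#       [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]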

Tokenizer Summary

To summarize, tokenization is a crucial step in converting text into a form that machine learning models can process. Hugging Face’s tokenizer handles various tasks such as:

  • Converting text into tokens.
  • Mapping tokens to unique integer IDs.
  • Generating attention masks so the model knows which tokens to attend to and which padding tokens to ignore (see the end-to-end sketch after this list).
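
For completeness, here is a minimal sketch of roughly what the sentiment-analysis pipeline does under the hood with these pieces, assuming PyTorch is installed alongside transformers:

import torch

# Tokenize and return PyTorch tensors the model can consume directly
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the highest-scoring class and look up its human-readable label
predicted_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])  # e.g. POSITIVE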

Conclusion

Understanding how a tokenizer works is key to leveraging pre-trained models effectively. By breaking down text into smaller tokens, we enable the model to process the input in a structured, efficient manner. Whether you're using a model for sentiment analysis, text generation, or any other NLP task, the tokenizer is an essential tool in the pipeline.
