Miguel Müller

Comparing Text Similarity Measurement Methods: Sentence Transformers vs. Fuzzy

Introduction:

The measurement of text similarity is a fundamental task in natural language processing and information retrieval. It underpins a wide array of applications, including search, text categorization, recommendation systems, and more. Accurately assessing the similarity between textual data can have a profound impact on the performance of these systems. In this article, we delve into text similarity measurement and explore two distinct methodologies for achieving this goal: Sentence Transformers and Fuzzy (fuzzy string matching, implemented here with the thefuzz library).

Text similarity measurement involves determining how similar two pieces of text are, often expressed as a numerical score. This score can be used for a variety of purposes, such as identifying similar documents in a large corpus, matching user queries to relevant content, and even detecting plagiarism. The methods we will discuss here cater to different aspects of text similarity, from semantic equivalence to character-level matching.

In real-world applications, the choice between Sentence Transformers and Fuzzy is not a matter of one-size-fits-all, but rather a strategic decision guided by the specific domain and requirements of the task at hand. Sentence Transformers shine when the context and meaning of text play a crucial role, such as in natural language understanding and document retrieval. On the other hand, Fuzzy excels when the focus is on character-level similarities, making it a valuable tool for tasks like deduplication of records, correcting typos, and matching strings with minor variations. Understanding these nuances empowers practitioners to select the most suitable method for their particular use cases, unlocking the potential of text similarity measurement in diverse domains.


Practical Examples:

Comparing Semantic Similarity with Sentence Transformers: Let's consider an example involving the use of Sentence Transformers to measure semantic similarity. Imagine we have two sentences: "The vast ocean is beautiful" and "The immense sea is stunning". We can leverage Sentence Transformers to compute embeddings for these sentences and measure their semantic similarity. The resulting score, obtained with the all-MiniLM-L6-v2 model, reflects their shared semantic meaning and is correspondingly high.

"The vast ocean is beautiful" and "The immense sea is stunning"

  • Sentence Transformers: 0.8006699085235596
  • Fuzzy: 0.52

Comparing Textual Variations with Fuzzy: Next, let's explore the practical application of Fuzzy. Suppose we have two strings with minor variations: "color" and "colour". Fuzzy can be employed to calculate the similarity between these strings by focusing on character-level differences. In this case, Fuzzy's comparison, based on the Levenshtein distance, would highlight that "color" and "colour" are closely related due to their minor spelling variation.

"color" and "colour"

  • Sentence Transformers: 0.973908543586731
  • Fuzzy: 0.91

To bring these concepts to life, let's look at a code example that demonstrates the practical use of both methods to measure text similarity.

This code, along with additional resources and updates, is available on our GitHub repository at https://github.com/miguelsmuller/comparing-text-similarity, where you can further explore the implementation and applications of these text similarity measurement methods.

import sys

from sentence_transformers import SentenceTransformer, util
from thefuzz import fuzz
from torch import Tensor


def compare_with_model(text1: str, text2: str):
    # Encode both sentences into fixed-size embeddings and compare them
    # with cosine similarity (higher means more semantically similar).
    model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
    embedding_1: Tensor = model.encode(text1, convert_to_tensor=True)
    embedding_2: Tensor = model.encode(text2, convert_to_tensor=True)

    similarity = util.pytorch_cos_sim(embedding_1, embedding_2)[0][0].item()

    return similarity


def compare_with_fuzz(text1: str, text2: str):
    # fuzz.ratio returns a 0-100 character-level score; normalize it to 0-1.
    similarity = fuzz.ratio(text1, text2) / 100

    return similarity


if len(sys.argv) != 3:
    print("Please provide exactly two sentences as arguments.")
    sys.exit(1)

sentence1 = sys.argv[1]
sentence2 = sys.argv[2]

model_similarity = compare_with_model(sentence1, sentence2)
fuzz_similarity = compare_with_fuzz(sentence1, sentence2)

print("Usando Modelo:", model_similarity)
print("Usando Fuzz:", fuzz_similarity)

Method 1: Sentence Transformers

Sentence Transformers represent a modern approach to text similarity measurement by leveraging deep learning and contextual embeddings. These models are pretrained on vast corpora of text data, allowing them to capture the intricate semantic relationships between words and phrases. Sentence Transformers generate high-dimensional vector representations of text, where similar phrases are mapped to nearby points in the vector space. The key idea is to encode not just individual words but also the context and meaning of an entire sentence, making them ideal for tasks like semantic search, document retrieval, and paraphrase identification.

At the core of Sentence Transformers is the ability to transform text into dense, semantically meaningful embeddings. The model takes a sentence as input and, through a series of neural network layers, converts it into a fixed-size vector. This vector encodes the sentence's meaning in a high-dimensional space, facilitating comparisons between different sentences.
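
As a minimal sketch of that idea (using the same all-MiniLM-L6-v2 model as the script above), encoding a sentence yields a 384-dimensional vector, and two such vectors can be compared with cosine similarity:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Each sentence becomes a fixed-size embedding, regardless of its length
embedding = model.encode("The vast ocean is beautiful", convert_to_tensor=True)
print(embedding.shape)  # torch.Size([384])

# Cosine similarity between two embeddings measures semantic closeness
other = model.encode("The immense sea is stunning", convert_to_tensor=True)
print(util.cos_sim(embedding, other).item())  # ~0.80 for this pair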

To learn more about embeddings, you can read Hariom Gautam's article, from which I also sourced the image below.

[Image: vector representation of a word]

Sentence Transformers find applications in various domains, including natural language understanding, information retrieval, and question answering systems. They excel in scenarios where understanding the underlying semantics of text is crucial, such as matching user queries with relevant documents or identifying similar sentences in large text corpora.

To illustrate the method's capability, consider the task of measuring the similarity between two sentences: "The quick brown fox jumps over the lazy dog" and "A fast brown fox leaps over a dozing dog". Sentence Transformers can generate embeddings for these sentences and measure their similarity based on semantic content. In this case, the two sentences are likely to receive a high similarity score, reflecting their shared meaning despite slight wording differences.

"The quick brown fox jumps over the lazy dog" and "A fast brown fox leaps over a dozing dog"

  • Sentence Transformers: 0.8295611143112183
  • Fuzzy: 0.63

Sentence Transformers offer the advantage of capturing nuanced semantic information, but they may require substantial computational resources. Their performance shines in tasks requiring an understanding of context and semantics, but their effectiveness can diminish when dealing with character-level similarities or extremely short text snippets.


Method 2: Fuzzy

Fuzzy, with its whimsical name, offers a unique approach to text similarity measurement. Instead of delving into the semantic intricacies of text, Fuzzy focuses on character-level similarities. It employs algorithms like the Levenshtein distance to calculate the difference between two strings in terms of the number of insertions, deletions, and substitutions required to transform one into the other. This method excels when the task involves comparing strings with slight variations, making it a valuable tool for various applications, including record deduplication, spell checking, and string matching.

At the core of Fuzzy is the computation of a similarity ratio, often expressed as a percentage, that quantifies how similar two strings are. This ratio is calculated by comparing the characters in the two strings and determining the number of operations needed to make them identical.
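
To make that concrete, here is a minimal sketch of the character-level view, using Python's standard-library difflib alongside thefuzz. SequenceMatcher's ratio (2 × matching characters / total characters) is closely related to what fuzz.ratio reports on a 0-100 scale, though thefuzz's internal implementation may differ slightly:

from difflib import SequenceMatcher
from thefuzz import fuzz

a, b = "color", "colour"

# 2 * 5 matching characters / 11 total characters ≈ 0.909
print(SequenceMatcher(None, a, b).ratio())

# fuzz.ratio expresses the same character-level idea on a 0-100 scale
print(fuzz.ratio(a, b))  # 91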

Fuzzy finds extensive utility in domains where small differences in text are critical, such as database management and data cleansing. It is instrumental in detecting and resolving near-duplicate records, ensuring data quality by correcting typos, and matching strings with minor deviations.

Consider a scenario in which a database contains multiple records with slight variations in names, such as "John Smith" and "Jon Smithe". Fuzzy can be employed to calculate a similarity ratio, allowing for the identification and removal of these near-duplicate entries. The result is a cleaner and more accurate database.

"John Smith" and "Jon Smithe"

  • Sentence Transformers: 0.7414223551750183
  • Fuzzy: 0.9
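
Building on that scenario, the sketch below shows how such a ratio could drive a simple deduplication pass; the record list and the 0.85 threshold are illustrative choices, not values from the article:

from thefuzz import fuzz

# Hypothetical records containing near-duplicate entries
records = ["John Smith", "Jon Smithe", "Jane Doe", "Jane  Doe"]
THRESHOLD = 0.85  # illustrative cutoff for flagging near-duplicates

for i in range(len(records)):
    for j in range(i + 1, len(records)):
        similarity = fuzz.ratio(records[i], records[j]) / 100
        if similarity >= THRESHOLD:
            print(f"Possible duplicate: {records[i]!r} ~ {records[j]!r} ({similarity:.2f})")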

Fuzzy is highly efficient at handling character-level similarities, particularly in scenarios where minor text variations matter. However, it is less suited for capturing semantic nuances in text and is best reserved for tasks that prioritize character-based comparisons.


Comparison of Methods:

When to Use Sentence Transformers: Sentence Transformers excel in scenarios where the context and meaning of text are paramount. If your task involves understanding the semantic relationships between words, phrases, or sentences, Sentence Transformers are a valuable choice. They are particularly effective for applications like natural language understanding, document retrieval, and paraphrase identification. When your goal is to match user queries with relevant content or find semantically similar sentences, Sentence Transformers are a robust option.

When to Use Fuzzy: On the other hand, Fuzzy comes into its own when character-level similarities are the primary focus. If your text data contains variations due to typos, minor differences, or slight misspellings, Fuzzy is a dependable choice. It excels in tasks like deduplication of records, spell checking, and string matching, where capturing subtle character-level distinctions is crucial.

The choice between these methods boils down to the nature of your data and the objectives of your task. If your goal is to measure the semantic relatedness of text and capture its meaning, Sentence Transformers are the go-to option. On the other hand, if your primary concern is to deal with textual variations and character-level differences, Fuzzy is the method of choice. Understanding this fundamental distinction empowers practitioners to make informed decisions and choose the method that aligns with their specific needs.

Selecting the right method is crucial for the success of your text similarity measurement task. Consider the nature of your text data, the specific requirements of your application, and the level of similarity you aim to capture. Combining these insights will guide you to choose either Sentence Transformers or Fuzzy, ensuring that your text similarity measurements align with your objectives.


Conclusion:

In this article, we explored two distinct approaches to measuring text similarity: Sentence Transformers and Fuzzy. Each method has its place and utility depending on the context and task requirements. By understanding the differences between them, professionals can make informed decisions about which method to use in their projects. The choice between semantic similarity and character-based similarity depends on the type of text being compared and the specific project needs.



Acknowledgments:

Cover photo by Jason Dent on Unsplash
