
I Tried Out Qodo's New Embed Model Qodo-Embed-1🤯

Hello Devs👋

Recently, I tried out Qodo's Qodo-Embed-1, a new state-of-the-art code embedding model built specifically for retrieval tasks in software development. I was really impressed with how well it performs, even in the smaller 1.5B variant 🤏.

In this article, I'm covering these topics in detail:

  • What are Code Embeddings?
  • What is Qodo-Embed-1-1.5B?
  • The Model's Core Capabilities
  • Getting Hands-On with Qodo-Embed-1-1.5B

Let's get started🚀

But wait, before we get started, let's first understand:

What are Code Embeddings?

Code embeddings convert complex code structures into numerical vectors that capture the meaning and functionality of the code.

In simple words, think of code embeddings like turning your code into smart numbers that capture what the code means, not just what it looks like.

It's just like how Google Maps gives GPS coordinates (latitude, longitude) to locations: code embeddings give a vector (a list of numbers) to your code so machines can understand and compare it.

For example, consider these two Python functions:

def add_numbers(a, b):
    return a + b

def sum_two_values(x, y):
    result = x + y
    return result

Both functions do the same thing, so their embeddings will be close to each other in that number space, even though the code looks different.
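To make "close to each other" concrete, here's a tiny sketch with made-up 3-dimensional vectors (real embeddings, as we'll see later, have 1536 dimensions). Cosine similarity is just the dot product of two vectors, normalized by their lengths:

import numpy as np

def cosine_similarity(a, b):
    # Dot product normalized by vector lengths: 1.0 means identical direction
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Made-up toy "embeddings", purely for illustration
add_numbers_vec    = np.array([0.92, 0.11, 0.35])   # embedding of add_numbers
sum_two_values_vec = np.array([0.89, 0.15, 0.31])   # similar code -> nearby vector
parse_json_vec     = np.array([0.05, 0.88, 0.40])   # unrelated code -> distant vector

print(cosine_similarity(add_numbers_vec, sum_two_values_vec))  # ~0.999 (very similar)
print(cosine_similarity(add_numbers_vec, parse_json_vec))      # much lower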

What is Qodo-Embed-1-1.5B?

Qodo-Embed-1-1.5B is a lightweight (1.5B-parameter), state-of-the-art code embedding model designed for retrieval tasks in the software development domain.

This model is optimized for natural language-to-code and code-to-code retrieval.

Core Capabilities of this model:

🔍 Code Search: Enables efficient searching across large codebases

🧠 Retrieval-Augmented Generation (RAG): Enhances code generation with contextual understanding

🤓 Semantic Code Understanding: Captures complex relationships between code snippets

🌐 Multi-Language Support: Processes code from 9 major programming languages (Python, C++, C#, Go, Java, JavaScript, PHP, Ruby, TypeScript)

📈 High-Dimensional Embeddings: Generates rich 1536-dimensional representations

If you're interested in learning more about this, you can check out this blog.

Getting Hands-On with Qodo-Embed-1-1.5B

Now that we understand what code embeddings are and what Qodo-Embed-1-1.5B brings to the table, let's dive into how you can start using this model, with some example use cases.

Getting Started

Qodo-Embed-1 is available in two sizes:

  • Lite (1.5B) - Qodo-Embed-1-1.5B
  • Medium (7B) - Qodo-Embed-1-7B

You can use the model via Hugging Face Transformers or SentenceTransformers libraries.

Quick setup

# Install required libraries
pip install sentence-transformers
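Once installed, a quick sanity check confirms everything works (a minimal sketch; the first run downloads the model weights from Hugging Face, so it may take a while):

from sentence_transformers import SentenceTransformer

# Load the lite model; trust_remote_code=True is required for this model
model = SentenceTransformer("Qodo/Qodo-Embed-1-1.5B", trust_remote_code=True)

# Encode one snippet and confirm the 1536-dimensional output mentioned above
embedding = model.encode("def add(a, b): return a + b")
print(embedding.shape)  # expected: (1536,)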

🔍 Use Case: Sentence Similarity

Let's try to find the sentence most similar to a source sentence by comparing their embeddings.

from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

# Load the model
model = SentenceTransformer("Qodo/Qodo-Embed-1-1.5B", trust_remote_code=True)

# Source sentence and comparison list
source_sentence = "That is a very happy person"
sentences_to_compare = [
    "That is a happy person",
    "That is a happy dog",
    "Today is a sunny day",
    "The man is joyful and smiling"
]

# Encode source and comparison sentences
source_embedding = model.encode([source_sentence])
comparison_embeddings = model.encode(sentences_to_compare)

# Compute cosine similarity (returns a 2D array)
similarity_scores = cosine_similarity(source_embedding, comparison_embeddings)[0]

# Find most similar sentence
most_similar_idx = int(np.argmax(similarity_scores))
most_similar_sentence = sentences_to_compare[most_similar_idx]
similarity_score = similarity_scores[most_similar_idx]

# Print results
print(f"Source Sentence: \"{source_sentence}\"")
print(f"Most Similar Sentence: \"{most_similar_sentence}\"")
print(f"Similarity Score: {similarity_score:.4f}")

What’s happening here?🤔:

  • The model converts each sentence into a vector of numbers.
  • Then we use cosine_similarity to compare how close they are.
  • You'll see that "That is a happy person" is more similar to "That is a very happy person".

✅ Expected Output: The model returns the most similar sentence along with its similarity score.
Note: Cosine similarity ranges from -1 to 1 (you'll typically see values between 0 and 1 here); higher means more similar.

Source Sentence: "That is a very happy person"
Most Similar Sentence: "That is a happy person"
Similarity Score: 0.9795
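As a side note: if you're on sentence-transformers 3.x or newer, the library ships a built-in similarity helper, so you can drop the scikit-learn import. A sketch, reusing the model and sentences from the example above:

# Alternative using the built-in helper (sentence-transformers >= 3.0)
embeddings = model.encode([source_sentence] + sentences_to_compare)
scores = model.similarity(embeddings[0:1], embeddings[1:])  # cosine by default
print(scores)  # a 1 x 4 tensor of similarity scores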

🔍 Use Case: Code Search

Now, we'll try to find a specific code snippet among several candidates using a natural-language query.

from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np


model = SentenceTransformer("Qodo/Qodo-Embed-1-1.5B", trust_remote_code=True)

snippets = [
    """def binary_search(arr, target):
    low, high = 0, len(arr)-1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1""",

    """def bubble_sort(arr):
    for i in range(len(arr)):
        for j in range(0, len(arr)-i-1):
            if arr[j] > arr[j+1]:
                arr[j], arr[j+1] = arr[j+1], arr[j]""",

    """def reverse_linked_list(head):
    prev = None
    current = head
    while current:
        next_node = current.next
        current.next = prev
        prev = current
        current = next_node
    return prev"""
]

query = "How to reverse a linked list in Python"
query_embedding = model.encode(query)
snippets_embeddings = model.encode(snippets)

similarities = cosine_similarity([query_embedding], snippets_embeddings)
most_similar_idx = np.argmax(similarities)

print("Most relevant code snippet:")
print(snippets[most_similar_idx])

What’s happening here?🤔

  • We're encoding a natural language query and multiple code snippets.
  • The model identifies which snippet is most relevant to the query using cosine similarity.

✅ Expected Output: The model correctly identifies the reverse_linked_list function as the most relevant match.

Most relevant code snippet:
def reverse_linked_list(head):
    prev = None
    current = head
    while current:
        next_node = current.next
        current.next = prev
        prev = current
        current = next_node
    return prev
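If you want a full ranking instead of just the top hit, a small extension (reusing the variables from the example above) sorts every snippet by score:

# Rank all snippets from most to least relevant
ranked = np.argsort(similarities[0])[::-1]
for rank, idx in enumerate(ranked, start=1):
    first_line = snippets[idx].splitlines()[0]
    print(f"#{rank} (score {similarities[0][idx]:.4f}): {first_line}")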

🔍 Use Case: RAG with Contextual Understanding

Now let's test the model's contextual understanding.

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qodo/Qodo-Embed-1-1.5B", trust_remote_code=True)

user_prompt = "Create a Flask route that accepts POST requests and returns JSON"
context_snippets = [
    """from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/submit', methods=['POST'])
def submit():
    data = request.get_json()
    return jsonify(data)
""",
    """from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/items")
def create_item(request: Request):
    return {"message": "Item received"}"""
]

prompt_embedding = model.encode(user_prompt)
context_embeddings = model.encode(context_snippets)
scores = cosine_similarity([prompt_embedding], context_embeddings)

best_match = context_snippets[np.argmax(scores)]
print("Context snippet to augment generation:")
print(best_match)

What’s happening here?🤔

  • We're trying to retrieve the most relevant context snippet based on the prompt provided.

✅ Expected Output: The model selects the correct Flask-based snippet for code augmentation in a RAG pipeline.

Context snippet to augment generation:
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/submit', methods=['POST'])
def submit():
    data = request.get_json()
    return jsonify(data)
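In a full RAG pipeline, this retrieved snippet would then be injected into the prompt you send to a code-generation model. Here's a minimal sketch of that final step; call_llm is a hypothetical placeholder for whatever LLM client you actually use:

# Build an augmented prompt from the retrieved context
augmented_prompt = f"""You are a coding assistant. Use the reference code below as context.

Reference code:
{best_match}

Task: {user_prompt}
"""

# response = call_llm(augmented_prompt)  # call_llm is a hypothetical placeholder
print(augmented_prompt)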

🔍 Use Case: Multi-Language Support

Now, let's test whether the model can match each query to the correct programming language.

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qodo/Qodo-Embed-1-1.5B", trust_remote_code=True)

queries = [
    "How to define a function in Java",
    "How to create a list in Ruby"
]

snippets = [
    """public void greet() {
    System.out.println("Hello World");
}""",
    """my_list = ["apple", "banana", "cherry"]"""
]

query_embeddings = model.encode(queries)
snippet_embeddings = model.encode(snippets)
scores = cosine_similarity(query_embeddings, snippet_embeddings)

for i, query in enumerate(queries):
    best_match = snippets[np.argmax(scores[i])]
    print(f"Best match for '{query}':\n{best_match}\n")

✅ Expected Output: Each query is matched to the correct language-specific code snippet, verifying the model’s multilingual capability.

Best match for 'How to define a function in Java':
public void greet() {
    System.out.println("Hello World");
}

Best match for 'How to create a list in Ruby':
my_list = ["apple", "banana", "cherry"]
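To see how cleanly the model separates the two languages, you can also print the full query-by-snippet score matrix (a quick sketch reusing the variables from the example above); each query should score noticeably higher against its own language's snippet:

# Inspect the full query x snippet similarity matrix
for i, query in enumerate(queries):
    for j, snippet in enumerate(snippets):
        print(f"Query {i} vs snippet {j}: {scores[i][j]:.4f}")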

The examples I used are meant to demonstrate core capabilities; you can extend them for more complex retrieval workflows and experiment with your own ideas too!🙂

👉 You can find all these code snippets in my GitHub repository and try them out yourself.

🙏 Final Thoughts

In each example, the model delivered correct, accurate results.
Even at just 1.5B parameters, it delivers performance comparable to models 3–4 times its size.

If you are building a RAG pipeline, an internal code search tool, or just want to enrich your dev environment with smart retrieval, this is a model worth trying.

Thank you for reading this far. If you found this article useful, please like and share it; someone else might find it useful too.💖

Connect with me on GitHub and LinkedIn


