
Vittorio Banfi

Originally published at codebeaver.ai

Adding Unit Tests to Your Django Project with CodeBeaver - Tutorial

Your Django project is growing. More users, more features, more complexity. You know you need a proper testing strategy, but who has time to write hundreds of unit tests? Between shipping features and fixing bugs, testing often takes a back seat - until something breaks in production.

What if you could set up a complete testing infrastructure in minutes and have AI write and maintain your tests automatically?

This tutorial shows you exactly that. We'll take your existing Django project from zero to fully tested by:

  1. Setting up a professional testing infrastructure using pytest-django (10 minutes)
  2. Connecting your GitHub repository to CodeBeaver's AI testing pipeline (2 minutes)
  3. Learning the LLM-powered workflow we use at CodeBeaver to ship features faster and with fewer bugs

By the end of this tutorial, you'll have:

  • A complete testing setup that automatically generates tests for new code
  • AI-powered test maintenance that keeps your test suite up-to-date as your code evolves
  • A modern development workflow that leverages LLMs to write better, more testable code

Best of all? The entire setup takes less than 15 minutes. Let's get started!

Prerequisites

Before we begin, make sure you have:

  • A Django project (existing or new)
  • Python 3.8+ installed (python --version to check; recent Django and pytest-django releases require it)
  • pip installed (pip --version to verify)
  • A GitHub account (GitLab and Bitbucket also work)
  • Basic familiarity with Django and pip

Setting Up Your Django Project for Testing

Remember how Django's startproject command set up the foundation of your project? Setting up testing requires a similar foundation - a few key files and configurations that will make everything else smoother. Let's build this foundation step by step.

Required Files and Project Structure

In a typical Django project, your tests might live in a tests.py file within each app. While this works for small projects, as your codebase grows, you'll want a more organized structure. Here's what we recommend:

myproject/
├── manage.py
├── pyproject.toml      # New: project metadata and pytest configuration
├── conftest.py       # New: Shared pytest fixtures
├── myproject/
│   ├── __init__.py
│   ├── settings.py
│   └── urls.py
└── myapp/
    ├── __init__.py
    ├── models.py
    ├── views.py
    ├── tests/        # Instead of tests.py, use a directory
    │   ├── __init__.py
    │   ├── test_models.py
    │   ├── test_views.py
    │   └── conftest.py  # App-specific fixtures
    └── factories/    # New: Factories for test data
        ├── __init__.py
        └── user_factory.py

This structure separates your tests by component (models, views, etc.) while keeping them close to the code they're testing. Think of it like organizing your kitchen - you want your spices near your cooking area, but still sorted by type.

Installing pytest-django and Friends

While Django's built-in test framework is good, pytest offers more powerful features. Let's install the tools we'll need:

pip install pytest-django pytest-cov factory-boy

Pin these in your project's requirements as well: in the root of your repository, create a requirements-test.txt file containing the following:

pytest-django>=4.5.2
pytest-cov>=4.1.0
factory-boy>=3.3.0
-r requirements.txt  # Inherit your main requirements

Think of these packages as your testing toolkit:

  • pytest-django: The power drill of Django testing
  • pytest-cov: Your coverage measuring tape
  • factory-boy: Your test data assembly line

Configuration: Making Everything Work Together

Let's set up pyproject.toml - think of this as your project's master configuration:

[tool.pytest.ini_options]
DJANGO_SETTINGS_MODULE = "myproject.settings"
python_files = ["test_*.py", "*_test.py"]
addopts = """
    --reuse-db
    --cov=.
    --cov-report=term-missing
    --cov-fail-under=80
"""

Now create conftest.py in your project root. This file will hold fixtures (reusable test components) that any test can use:

import pytest
from django.contrib.auth import get_user_model
from django.test import Client

@pytest.fixture
def client():
    """A Django test client instance."""
    return Client()

@pytest.fixture
def auth_client(client, django_user_model):
    """A Django test client logged in as a basic user."""
    user = django_user_model.objects.create_user(
        username='testuser',
        password='testpass123'
    )
    client.login(username='testuser', password='testpass123')
    return client
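
As a quick usage example, any test can request these fixtures simply by naming them as arguments (the URL below is a hypothetical one for illustration):

import pytest

@pytest.mark.django_db
def test_profile_requires_login(auth_client):
    """auth_client is already logged in, so a protected page should load."""
    response = auth_client.get('/accounts/profile/')  # hypothetical URL
    assert response.status_code == 200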

Setting Up Your Test Database

Django handles test databases automatically, but let's make them faster and more reliable. If your settings live in a package (for example, myproject/settings/ with a base.py), add a dedicated test module with these overrides:

# settings/test.py
from .base import *  # Import your base settings

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',  # Use in-memory database for tests
    }
}

# Speed up password hashing
PASSWORD_HASHERS = [
    'django.contrib.auth.hashers.MD5PasswordHasher',
]

# Disable migrations for tests
class DisableMigrations:
    def __contains__(self, item):
        return True
    def __getitem__(self, item):
        return None

MIGRATION_MODULES = DisableMigrations()
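
With the configuration in place, you can run the suite from the project root. If you created the optional settings/test.py above, point pytest-django at it with the --ds flag (or update DJANGO_SETTINGS_MODULE in pyproject.toml to myproject.settings.test):

pytest                                  # uses the settings module from pyproject.toml
pytest --ds=myproject.settings.test     # opt into the faster test settings
pytest myapp/tests/test_models.py -v    # run a single file, verbosely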

Creating Your Test Files Manually (optional)

CodeBeaver will create test files for you, so you can skip this section if you want.

Some engineers still like to write tests by hand, and it's worth understanding how to create them before automating. A workflow we often see: a developer writes one test file while developing the feature, then lets CodeBeaver take over from there. CodeBeaver adds tests to cover edge cases and maintains the test file as the code changes.

Let's say you have a simple Django model for blog posts. The tests below assume a model roughly like this minimal sketch (the fields and __str__ method are assumptions for illustration):
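
# myapp/models.py
from django.db import models
from django.utils import timezone

class BlogPost(models.Model):
    title = models.CharField(max_length=200)
    content = models.TextField()
    published_at = models.DateTimeField(default=timezone.now)

    def __str__(self):
        return self.title

Here's how you'd structure its tests: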

# myapp/tests/test_models.py
import pytest
from django.utils import timezone
from myapp.models import BlogPost

@pytest.mark.django_db
class TestBlogPost:
    def test_create_post(self):
        post = BlogPost.objects.create(
            title="Test Post",
            content="Test Content",
            published_at=timezone.now()
        )
        assert post.title == "Test Post"
        assert post.content == "Test Content"

    def test_post_str_representation(self):
        post = BlogPost.objects.create(
            title="Test Post",
            content="Test Content"
        )
        assert str(post) == "Test Post"

For views, you'll want separate test files. Note that the blog_post fixture used below isn't built in; we'll define it with a factory in the next section (see the conftest.py sketch there):

# myapp/tests/test_views.py
import pytest
from django.urls import reverse

@pytest.mark.django_db
class TestBlogPostViews:
    def test_post_list_view(self, client):
        url = reverse('blog:post_list')
        response = client.get(url)
        assert response.status_code == 200

    def test_post_detail_view(self, client, blog_post):
        url = reverse('blog:post_detail', kwargs={'pk': blog_post.pk})
        response = client.get(url)
        assert response.status_code == 200
        assert blog_post.title in response.content.decode()
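
These view tests assume URL patterns named post_list and post_detail registered under a blog namespace. A minimal blog/urls.py consistent with them might look like this (the list view name is an assumption; the detail view appears later in this tutorial):

# blog/urls.py
from django.urls import path

from . import views

app_name = 'blog'  # enables the 'blog:' namespace used by reverse()

urlpatterns = [
    path('', views.BlogPostListView.as_view(), name='post_list'),
    path('posts/<int:pk>/', views.BlogPostDetailView.as_view(), name='post_detail'),
]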

Using Factories for Test Data

Instead of creating test data manually in each test, use factories:

# myapp/factories/blog_factory.py
import factory
from django.utils import timezone
from myapp.models import BlogPost

class BlogPostFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = BlogPost

    title = factory.Sequence(lambda n: f"Test Post {n}")
    content = factory.Faker('paragraph')
    published_at = factory.LazyFunction(timezone.now)
    author = factory.SubFactory('myapp.factories.UserFactory')
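
The SubFactory above points at a UserFactory living in myapp/factories/user_factory.py (shown in the project structure earlier). A minimal sketch, assuming the default user model:

# myapp/factories/user_factory.py
import factory
from django.contrib.auth import get_user_model

class UserFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = get_user_model()

    username = factory.Sequence(lambda n: f"user{n}")
    email = factory.LazyAttribute(lambda u: f"{u.username}@example.com")

For the string reference 'myapp.factories.UserFactory' to resolve, re-export the class from myapp/factories/__init__.py (from .user_factory import UserFactory).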

Now your tests become much cleaner:

import pytest
from myapp.factories.blog_factory import BlogPostFactory
from myapp.models import BlogPost

@pytest.mark.django_db
def test_recent_posts():
    # Create 5 posts at once; assumes a custom manager method `recent()`
    BlogPostFactory.create_batch(5)
    recent_posts = BlogPost.objects.recent()
    assert len(recent_posts) == 5
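
To make the blog_post fixture used in the view tests available, wire the factory into the app-level conftest.py. A minimal sketch:

# myapp/tests/conftest.py
import pytest

from myapp.factories.blog_factory import BlogPostFactory

@pytest.fixture
def blog_post(db):
    """A single blog post created via the factory (db enables database access)."""
    return BlogPostFactory()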

This setup might seem like a lot, but it's like mise en place in cooking - having everything prepared makes the actual work much smoother. In the next section, we'll see how CodeBeaver can help maintain and expand your test suite automatically, working within this structure we've created.

Your First Test-Driven PR: Watching CodeBeaver in Action

Now that we have our testing infrastructure set up, let's see how CodeBeaver helps maintain your test suite. We'll walk through a real-world scenario: adding a new feature to track user engagement on blog posts.

Installing CodeBeaver

With our Django project structured and ready, let's integrate CodeBeaver into our workflow. This integration takes your repository from manually written tests to comprehensive, automatically maintained test suites. The process is straightforward and takes just a few minutes.

Step 1: Authentication and Authorization

First, navigate to codebeaver.ai and select "Sign up with GitHub". This initiates a secure OAuth flow that will allow CodeBeaver to interact with your repositories. If you're using GitLab or Bitbucket, you'll find similar options for those platforms.


After authenticating, you'll be prompted to authorize CodeBeaver's access to your repositories. You'll see an installation screen that allows you to choose between personal and organizational repositories, select specific repositories or grant access to all.


Click "Install CodeBeaver" to proceed. Don't worry about getting the permissions exactly right - you can always modify these settings later as your needs change.

Step 2: Repository Selection

Once authorized, you'll be presented with a dashboard showing your available repositories. This is where you'll select the repository we just created and enable CodeBeaver for it.


The repository selection interface presents a clear list of your repositories, with options to search and filter if you manage many projects. Select your Django project's repository to proceed.

Step 3: Automatic Configuration

CodeBeaver will now analyze your repository structure to determine:

  • The programming language(s) in use
  • Testing frameworks present (pytest in our case)
  • Project structure and dependencies
  • Existing test configurations

Based on this analysis, CodeBeaver will attempt to auto-configure itself. For a standard Python project like ours that uses pytest, this process should complete successfully.


If auto-configuration succeeds, you'll see options for how you'd like to proceed with CodeBeaver. You can point it at an open pull request; we'll create one in the next section. That's it: CodeBeaver will start working on your pull requests.

What If Auto-Configuration Fails?

If you're using your own project, CodeBeaver may not be able to auto-configure itself. Don't worry! This usually happens when:

  • Your project uses a non-standard structure
  • You have multiple testing frameworks
  • You need custom test commands

In these cases, you can:

  1. Check the troubleshooting guide in the CodeBeaver documentation
  2. Add a codebeaver.yml configuration file to your repository
  3. Contact CodeBeaver support for assistance

With CodeBeaver installed and configured, you're ready to experience automated test generation in action. In the next section, we'll create our first pull request and watch as CodeBeaver automatically generates and maintains your tests.

Trying everything out

Everything is set up, so let's try it out!

Let's say we want to add a feature that tracks how many times each blog post is viewed. First, we'll modify our BlogPost model:

# blog/models.py
from django.db import models
from django.db.models import F
from django.utils import timezone

class BlogPost(models.Model):
    title = models.CharField(max_length=200)
    content = models.TextField()
    author = models.ForeignKey('auth.User', on_delete=models.CASCADE)
    published_at = models.DateTimeField(default=timezone.now)
    view_count = models.PositiveIntegerField(default=0)  # New field

    def increment_views(self):
        """Increment the view count for this post."""
        # F() pushes the increment to the database, avoiding race conditions
        self.view_count = F('view_count') + 1
        self.save(update_fields=['view_count'])
        # Reload so view_count is a plain int again, not an F expression
        self.refresh_from_db(fields=['view_count'])

    def get_engagement_score(self):
        """Calculate an engagement score based on views and age."""
        if not self.published_at:
            return 0

        age_in_days = (timezone.now() - self.published_at).days
        if age_in_days < 1:
            age_in_days = 1

        return round(self.view_count / age_in_days, 2)

And update our view to use this new functionality:

# blog/views.py
from django.views.generic import DetailView
from django.db import transaction
from .models import BlogPost

class BlogPostDetailView(DetailView):
    model = BlogPost
    template_name = 'blog/post_detail.html'

    def get_object(self, queryset=None):
        obj = super().get_object(queryset)
        with transaction.atomic():
            obj.increment_views()
        return obj

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        context['engagement_score'] = self.object.get_engagement_score()
        return context

With both files changed, we're ready to open a pull request and watch CodeBeaver do its magic.

Opening the Pull Request

Create a new branch and commit these changes:

git checkout -b feature/post-analytics
git add blog/models.py blog/views.py
git commit -m "feat: add view tracking and engagement scoring to blog posts"
git push origin feature/post-analytics

Open a pull request on GitHub. You are done! CodeBeaver will start working on your Pull Request.

Understanding CodeBeaver's Analysis

When CodeBeaver analyzes your PR, it looks for several key aspects:

  1. New or modified model fields (view_count)
  2. Business logic methods (increment_views, get_engagement_score)
  3. View modifications that affect database state
  4. Potential race conditions
  5. Edge cases in calculations

Within a few minutes, CodeBeaver will create a new PR with generated tests. You can check an example Pull Request here.
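
The generated tests vary from run to run, but for this diff you'd expect coverage along these lines (a hand-written sketch, not CodeBeaver's literal output):

# blog/tests/test_models.py (sketch)
import pytest
from blog.models import BlogPost

@pytest.mark.django_db
def test_increment_views_updates_count(django_user_model):
    author = django_user_model.objects.create_user(username='u', password='x')
    post = BlogPost.objects.create(title='T', content='C', author=author)
    post.increment_views()
    post.refresh_from_db()  # explicit; increment_views also reloads the field
    assert post.view_count == 1

@pytest.mark.django_db
def test_engagement_score_clamps_age_to_one_day(django_user_model):
    author = django_user_model.objects.create_user(username='u2', password='x')
    post = BlogPost.objects.create(title='T', content='C', author=author)
    post.view_count = 10
    post.save()
    # A brand-new post is treated as one day old, so score == view count
    assert post.get_engagement_score() == 10.0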

Best Practices and Tips: Making Your Django Code Test-Friendly

When writing testable code, many developers make the mistake of starting with implementation. The real breakthrough comes from starting with clear documentation and well-structured functions. In this section, we'll explore how to make your Django code not just testable, but a joy to test - focusing on powerful docstrings, clear function contracts, and code organization patterns that make testing natural and effective.

The Power of Docstrings: Your Testing Blueprint

Think of a docstring as a contract between you and future developers (including yourself!). But with CodeBeaver, it's more than that - it's your direct line of communication to the AI about what your code should do. Let's look at a poorly documented function and transform it into a testing-friendly version:

# Before: Hard to test, unclear expectations
def process_order(order, user):
    if user.is_active:
        order.status = 'processing'
        order.save()
        return True
    return False

# After: Clear expectations, easy to test
from django.contrib.auth.models import User  # assumed import locations
from myapp.models import Order

def process_order(order, user) -> bool:
    """
    Process a new order if the user is active.

    Args:
        order: Order instance to process
        user: User attempting to process the order

    Returns:
        bool: True if order was processed, False if user is inactive

    Raises:
        ValueError: If order is already processed
        TypeError: If order or user are incorrect types

    Example:
        >>> user = User.objects.create(is_active=True)
        >>> order = Order.objects.create(status='new')
        >>> process_order(order, user)
        True
        >>> order.status
        'processing'
    """
    if not isinstance(order, Order) or not isinstance(user, User):
        raise TypeError("Invalid order or user type")

    if order.status != 'new':
        raise ValueError(f"Cannot process order with status {order.status}")

    if not user.is_active:
        return False

    order.status = 'processing'
    order.save()
    return True

Let's break down why this docstring is so powerful for testing:

1. Input Documentation

  • Clear parameter descriptions
  • Type hints that CodeBeaver can validate
  • Explicit preconditions (order must be 'new')

2. Output Contract

  • Return value meaning is explicit
  • All possible outcomes are documented
  • Examples show expected behavior

3. Error Conditions

  • All exceptions are documented
  • Error scenarios are clearly defined
  • Edge cases are mentioned
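
Put together, a contract like this translates almost mechanically into tests. Here's a sketch (the import paths and module layout are assumptions):

# myapp/tests/test_orders.py (sketch; adjust imports to your project)
import pytest
from myapp.models import Order
from myapp.services import process_order  # assumed location

@pytest.mark.django_db
class TestProcessOrder:
    def test_inactive_user_is_rejected(self, django_user_model):
        user = django_user_model.objects.create_user(
            username='inactive', password='x', is_active=False
        )
        order = Order.objects.create(status='new')
        assert process_order(order, user) is False

    def test_already_processed_order_raises(self, django_user_model):
        user = django_user_model.objects.create_user(username='active', password='x')
        order = Order.objects.create(status='processing')
        with pytest.raises(ValueError):
            process_order(order, user)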

Why Docstrings Matter More Than You Think

Here's a real example of how good docstrings saved my team time. We had a payment processing function:

def calculate_subscription_renewal(subscription, renewal_date=None):
    """
    Calculate the next renewal amount and date for a subscription.

    Args:
        subscription: Subscription model instance
        renewal_date: Optional date to calculate renewal for
                     Defaults to subscription.current_period_end

    Returns:
        tuple: (
            renewal_amount: Decimal - The amount to charge
            renewal_date: datetime - When the renewal takes effect
            currency: str - Three-letter currency code
        )

    Examples:
        Basic renewal:
        >>> sub = Subscription.objects.create(amount=100)
        >>> amount, date, currency = calculate_subscription_renewal(sub)
        >>> amount
        Decimal('100.00')

        Prorated renewal:
        >>> from datetime import timedelta
        >>> future = timezone.now() + timedelta(days=15)
        >>> amount, date, currency = calculate_subscription_renewal(sub, future)
        >>> amount < Decimal('100.00')
        True

    Raises:
        ValueError: If subscription is cancelled or renewal_date is in the past
        TypeError: If renewal_date is not None or datetime
    """

When CodeBeaver saw this docstring, it generated comprehensive tests that caught several edge cases:

  • Timezone handling around midnight
  • Proration calculations at month boundaries
  • Currency conversion edge cases
  • Invalid date validation

The examples in the docstring were particularly valuable - CodeBeaver used them as a starting point and then generated variations to test boundary conditions.
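
For instance, the "renewal_date is in the past" clause maps directly onto a test (a sketch; the subscription fixture and import path are assumptions):

import pytest
from datetime import timedelta
from django.utils import timezone

from billing.services import calculate_subscription_renewal  # assumed location

@pytest.mark.django_db
def test_past_renewal_date_is_rejected(subscription):
    past = timezone.now() - timedelta(days=1)
    with pytest.raises(ValueError):
        calculate_subscription_renewal(subscription, renewal_date=past)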

Example of a complete LLM workflow

Here's a workflow that has transformed how my team writes testable code. If you use GitHub Copilot, Cursor, or a similar LLM-powered assistant, you can lean on it to help you write docstrings.

You start by writing your function's signature. Then you add a rough docstring stating its purpose, nothing else:

def process_order(order, user):
    """
    Process the order by matching the delivery address to the user's address. If the user has no address, we raise an error. If it takes more than 4 days, return that in the return object. otherwise the return object contains only the order
    """
    raise NotImplementedError("Not implemented yet")

I then select the function and ask the LLM to write a more detailed docstring, defining return values and other details. For example, with Claude 3.5 I get:

def process_order(order, user):
    """
    Process an order by validating and matching delivery address with user's address.

    Args:
        order (Order): The order to be processed
        user (User): The user placing the order

    Returns:
        dict: Processing result containing:
            - 'order' (Order): The processed order object
            - 'processing_time' (int, optional): Number of days for delivery,
              only included if exceeds 4 days
            - 'status' (str): Processing status ('success' or 'delayed')

    Raises:
        ValueError: If user has no address configured
        TypeError: If order or user are not of correct type
        ValidationError: If order is already processed or invalid

    Examples:
        Standard processing:
        >>> user = User.objects.create(address="123 Main St")
        >>> order = Order.objects.create(status="pending")
        >>> result = process_order(order, user)
        >>> result
        {'order': <Order: #123>, 'status': 'success'}

        Delayed processing:
        >>> result = process_order(delayed_order, user)
        >>> result
        {
            'order': <Order: #124>,
            'status': 'delayed',
            'processing_time': 5
        }
    """
    raise NotImplementedError("Not implemented yet")

I now review the docstring and make edits if the LLM hallucinated something or got something wrong.

Once that's done, I select the NotImplementedError and ask the LLM to implement the function. Et voilà: a working function, ready for review. Better still, thanks to the detailed docstring, CodeBeaver will generate tests that cover all the documented cases after you push your changes:

git add .
git commit -m "feat: implement process_order"
git push origin feature/process-order

Now I open a PR, and CodeBeaver starts working on it.

Conclusion: Your Journey to Testing Excellence

You've come a long way from where we started! Remember that initial scenario - the Django developer pushing to production and hoping nothing breaks? That's no longer you. You've taken the first crucial steps toward building a robust, tested Django application. Let's reflect on your journey and plan your next steps.

What You've Learned

Think back to where we started. You now understand not just the mechanics of testing Django applications, but the deeper principles that make testing effective. You've learned how to structure your code with testing in mind, starting with those crucial docstrings that serve as both documentation and testing blueprints. You've seen how CodeBeaver can transform those docstrings into comprehensive test suites, catching edge cases you might never have thought about.

The most important shift isn't in the tools or configurations - it's in how you think about your code. You're no longer writing functions that merely work; you're crafting well-documented, testable components that prove they work. That's a fundamental transformation in how you approach software development.

Your Next Testing Adventures

Now that you have the foundation in place, here are some exciting directions to explore:

1. Expand Your Test Coverage Gradually

Start with your most critical paths - user authentication, payment processing, data mutations. CodeBeaver can help you identify these areas by analyzing your codebase. Remember, you don't need to test everything at once. Focus on:

# High-priority testing targets:
def process_payment(order):
    """This affects your bottom line - test it first!"""

# Medium priority:
def update_user_preferences(user, preferences):
    """Important but not critical - test after core functions"""

# Lower priority:
def get_user_avatar_url(user):
    """Nice to have tests, but not urgent"""

2. Enhance Your Testing Infrastructure

As your test suite grows, lean further on these tools (we installed factory-boy and pytest-cov earlier; pytest-xdist is a new addition):

  • factory-boy for more sophisticated test data generation
  • pytest-xdist for parallel test execution
  • pytest-cov for detailed coverage reporting

3. Build Testing into Your Workflow

Make testing as natural as writing code:

  1. Write your docstring
  2. Let your favorite LLM help implement the function
  3. Push to GitHub
  4. Let CodeBeaver generate and maintain your tests
  5. Review and merge

The Path Forward

Remember that testing isn't a destination - it's a journey. Every test you add makes your application more reliable, every docstring makes your code more maintainable, and every PR becomes an opportunity to improve your test coverage.

Don't feel pressured to achieve 100% coverage immediately. Start with the most critical parts of your application and let your test suite grow organically. CodeBeaver will be there to help you maintain and expand your tests as your codebase evolves.

Final Thoughts

Testing might have seemed daunting when you started reading this tutorial, but look how far you've come! You now have the knowledge and tools to build reliable, well-tested Django applications. Remember, every great codebase started with a single test. Your journey to testing excellence is just beginning, and with CodeBeaver by your side, you're well-equipped for the road ahead.

Now, how about opening that first PR and letting CodeBeaver help you write some tests? Your future self (and your users) will thank you!
