DEV Community

Cover image for LLMs Under Fire: Red Teaming with DeepTeam + Ollama
Ayush kumar for NodeShift

Posted on • Edited on • Originally published at nodeshift.com

LLMs Under Fire: Red Teaming with DeepTeam + Ollama

Image description

DeepTeam is a lightweight, easy-to-use red teaming framework designed to help you test the safety and security of your language model applications — locally and transparently. Whether you’re building a chatbot, a RAG pipeline, or a full-fledged AI agent, DeepTeam helps uncover hidden vulnerabilities like bias, PII leakage, or harmful prompts before your users ever see them.

Built entirely open-source and backed by the powerful DeepEval engine, DeepTeam simulates real-world adversarial attacks using methods like prompt injection and jailbreaking. It then evaluates how well your model handles them using standardized risk metrics — all without needing a curated dataset.

If you’re a developer, security engineer, or open-source contributor passionate about LLM safety — this is your playground. Dive in, run local tests, or even contribute your own custom vulnerabilities and attack types.

Safety isn’t optional anymore — it’s a feature. And DeepTeam helps you build it in.

DeepTeam Red Teaming Framework Features

  • 40+ Built-in Vulnerabilities — including bias (race, gender, political, religious), PII leakage, misinformation, and input hijacking.
  • 10+ Adversarial Attack Methods — supports prompt injection, leetspeak, ROT-13, and multi-turn jailbreaks like Tree and Crescendo.
  • Customizable in 5 Lines — define your own test scenarios tailored to specific org-level safety goals.
  • Easy Risk Assessments — results can be printed, visualized, or saved locally as JSON for deeper audits.
  • Standards-Aligned — built-in support for OWASP Top 10 for LLMs and NIST AI RMF makes it enterprise-ready.

Resources

Link: https://github.com/confident-ai/deepteam

System Setup for DeepTeam

  • Python 3.10 or 3.11
  • Conda (for isolated environment management)
  • A Linux-based system (Ubuntu 22.04 preferred)
  • NVIDIA GPU with at least 16 GB VRAM for running Ollama Models
  • Ollama

Step-by-Step Process to Install DeepTeam Locally

For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.

Step 1: Sign Up and Set Up a NodeShift Cloud Account

Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.

Follow the account setup process and provide the necessary details and information.
Image description

Step 2: Create a GPU Node (Virtual Machine)

GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Image description

Image description
Navigate to the menu on the left side. Select the GPU Nodes option, create a GPU Node in the Dashboard, click the Create GPU Node button, and create your first Virtual Machine deploy

Step 3: Select a Model, Region, and Storage

In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
Image description

Image description
We will use 1 x RTXA6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.

Step 4: Select Authentication Method

There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Image description

Step 5: Choose an Image

In our previous blogs, we used pre-built images from the Templates tab when creating a Virtual Machine. However, for running DeepTeam, we need a more customized environment with full CUDA development capabilities. That’s why, in this case, we switched to the Custom Image tab and selected a specific Docker image that meets all runtime and compatibility requirements.

We chose the following image:
nvidia/cuda:12.1.1-devel-ubuntu22.04

This image is essential because it includes:

  • Full CUDA toolkit (including nvcc)
  • Proper support for building and running GPU-based applications like DeepTeam
  • Compatibility with CUDA 12.1.1 required by certain model operations

Launch Mode

We selected:
Interactive shell server

This gives us SSH access and full control over terminal operations — perfect for installing dependencies, running benchmarks, and launching tools like DeepTeam.

Docker Repository Authentication

We left all fields empty here.

Since the Docker image is publicly available on Docker Hub, no login credentials are required.

Identification

Template Name:
nvidia/cuda:12.1.1-devel-ubuntu22.04

CUDA and cuDNN images from gitlab.com/nvidia/cuda. Devel version contains full cuda toolkit with nvcc.
Image description

This setup ensures that the DeepTeam engine runs in a GPU-enabled environment with proper CUDA access and high compute performance.
Image description

After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Image description

Step 6: Virtual Machine Successfully Deployed

You will get visual confirmation that your node is up and running.
Image description

Step 7: Connect to GPUs using SSH

NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.

Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Image description

Image description
Now open your terminal and paste the proxy SSH IP or direct SSH IP.

Next, If you want to check the GPU details, run the command below:
nvidia-smi

Image description

Step 8: Install Miniconda & Packages

After completing the steps above, install Miniconda.

Miniconda is a free minimal installer for conda. It allows the management and installation of Python packages.

Anaconda has over 1,500 pre-installed packages, making it a comprehensive solution for data science projects. On the other hand, Miniconda allows you to install only the packages you need, reducing unnecessary clutter in your environment.

We highly recommend installing Python using Miniconda. Miniconda comes with Python and a small number of essential packages. Additional packages can be installed using the package management systems Mamba or Conda.

For Linux/macOS:

Download the Miniconda installer script:

sudo update && apt install wget -y
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh

Enter fullscreen mode Exit fullscreen mode

For Windows:

  • Download the Windows Miniconda installer from the official website.
  • Run the installer and follow the installation prompts. Image description

Image description

Run the installer script:
bash Miniconda3-latest-Linux-x86_64.sh

Image description

After Installing Miniconda, you will see the following message:

Thank you for installing Miniconda 3! This means Miniconda is installed in your working directory or on your operating system.

Check the screenshot below for proof:
Image description

Step 9: Activate Conda and Create a Environment

After the installation process, activate Conda using the following command:

conda init
source ~/.bashrc

Enter fullscreen mode Exit fullscreen mode

Create a Conda Environment using the following command:
conda create -n deepteam python=3.11 -y

  • conda create: This is the command to create a new environment.
  • -n deepteam: The -n flag specifies the name of the environment you want to create. Here deepteam is the name of the environment you’re creating. You can name it anything you like.
  • python=3.11: This specifies the version of Python that you want to install in the new environment. In this case, it’s Python 3.11.
  • -y: This flag automatically answers “yes” to all prompts during the creation process, so the environment is created without asking for further confirmation. Image description

Image description

Step 10: Install Ollama

Website Link: https://ollama.com/

Run the following command to install the Ollama:
curl -fsSL https://ollama.com/install.sh | sh

Image description

Image description

Step 11: Serve Ollama

Run the following command to host the Ollama so that it can be accessed and utilized efficiently:
ollama serve

Image description

Step 12: Check Commands

Run, the following command to see a list of available commands:
ollama

Image description

Step 13: Pull Models from Ollama

You can pull one or more models from Ollama using the following command:

For example, here I’m using the magistral and qwen3 models.
ollama pull <model name>

Image description

Image description

Step 14: Install Deep Team

Run the following command to install deepteam:
pip install -U deepteam

Image description

Step 15: Set Your OpenAI API Key (Optional)

If you’re planning to use DeepTeam with OpenAI-based models (like GPT-4, etc.), you need to export your API key.

Use the following command (replace with your actual key):
export OPENAI_API_KEY=sk-proj-XXXXXXXXXXXXXXXXXXXXXXXX

This step is optional and only needed if you want to test OpenAI endpoints.
Image description

Step 16: Create Your DeepTeam Project Directory

Now let’s create a working directory for your DeepTeam red teaming project.

Run the following commands:

mkdir deept
cd deept
touch app.py

Enter fullscreen mode Exit fullscreen mode

Image description

Step 17: Connect to your GPU VM using Remote SSH

  • Open VS Code on your Mac.
  • Press Cmd + Shift + P, then choose Remote-SSH: Connect to Host.
  • Select your configured host.
  • Once connected, you’ll see SSH: 38.29.145.28(Your VM IP) in the bottom-left status bar (like in the image). Image description

Step 18: Write Your Red Teaming Script in app.py

Now it’s time to write your actual red teaming script using DeepTeam.

Paste the following code into your app.py file inside VS Code:

import asyncio
import requests
import json
from deepteam import red_team
from deepteam.vulnerabilities import Bias
from deepteam.attacks.single_turn import PromptInjection

async def model_callback(input: str) -> str:
    try:
        # Ollama API endpoint (default localhost:11434)
        url = "http://localhost:11434/api/generate"

        payload = {
            "model": "qwen3",
            "prompt": input,
            "stream": False
        }

        response = requests.post(url, json=payload)
        response.raise_for_status()

        # Parse the response
        result = response.json()
        return result.get("response", "")

    except Exception as e:
        return f"Error calling Ollama: {str(e)}"

async def main():
    bias = Bias(types=["race"])
    prompt_injection = PromptInjection()

    risk_assessment = await red_team(
        model_callback=model_callback,
        vulnerabilities=[bias],
        attacks=[prompt_injection]
    )

    print("Red team assessment completed!")
    print(risk_assessment)

if __name__ == "__main__":
    asyncio.run(main())

Enter fullscreen mode Exit fullscreen mode

This script sends prompts to your local Ollama model (qwen3, magistral), performs red teaming via DeepTeam, and prints the result in your terminal.
Image description

Step 19: Run DeepTeam Risk Test on Magistral

Now that you have pulled the magistral model from Ollama, let’s run the red teaming tests on it using DeepTeam.

Run the following command inside your deept directory:
python3 app.py

Once executed, DeepTeam begins simulating test cases across vulnerabilities and attack methods.

DeepTeam Test Output for Magistral:

Generating 1 attacks (for 1 vulnerability types across 1 vulnerability(s)): 1
Simulating 1 attacks (using 1 method(s)): 100% | 1/1
Evaluating 1 vulnerability types across 1 vulnerability(s): 100% | 1/1

Enter fullscreen mode Exit fullscreen mode

Image description

Test Cases Overview:

Image description

Image description

DeepTeam Risk Assessment (0 errored):

Overview by Vulnerabilities (1)
✓ PASS | Bias (race) | Mitigation Rate: 100.00% (1/1)

Overview by Attack Methods (1)
✓ PASS | Prompt Injection | Mitigation Rate: 100.00% (1/1)

Enter fullscreen mode Exit fullscreen mode

Image description

LLM Red Teaming Completed Successfully
Image description

Step 20: Run DeepTeam Risk Test on Qwen3

Now let’s test another model — qwen3 — using the same app.py file.

Run the same command again (ensure your model name in the app is updated to qwen3):
python3 app.py

DeepTeam Test Output for Qwen3:

Generating 1 attacks (for 1 vulnerability types across 1 vulnerability(s)): 1
Simulating 1 attacks (using 1 method(s)): 100% | 1/1
Evaluating 1 vulnerability types across 1 vulnerability(s): 100% | 1/1

Enter fullscreen mode Exit fullscreen mode

Image description

Test Case Overview (Qwen3 handled it better!):

Image description

DeepTeam Risk Assessment (0 errored):

Overview by Vulnerabilities (1)
✓ PASS | Bias (race) | Mitigation Rate: 100.00% (1/1)

Overview by Attack Methods (1)
✓ PASS | Prompt Injection | Mitigation Rate: 100.00% (1/1)

Enter fullscreen mode Exit fullscreen mode

Image description

LLM Red Teaming Completed Successfully with Balanced Response.

Image description

Conclusion

Red teaming isn’t just a security checklist — it’s a critical step in building safe, trustworthy language model applications. With tools like DeepTeam, you gain the power to uncover hidden risks, test against real-world attacks, and validate your LLM’s behavior before anything reaches your users.

From setting up a GPU VM on NodeShift to pulling models from Ollama and running your first vulnerability test — this guide walked you through everything needed to get started.

Whether you’re building for production or just exploring model behavior, DeepTeam gives you full control, transparency, and a growing ecosystem of safety features — all open-source.

Top comments (2)

Collapse
 
nevodavid profile image
Nevo David

Pretty cool seeing step-by-step stuff like this come together-it really makes me wanna actually run some tests myself.

Collapse
 
ayush7614 profile image
Ayush kumar NodeShift

Thanks Nevo Appreciate