Discover how CrewAI agents can automatically convert Jira requirements into Xray test cases. Includes full setup instructions, project layout, and a ready-to-run GitHub example.
Prerequisites
Before starting this tutorial, make sure you have completed the following setup steps. Each guide walks you through the process from scratch.
These credentials and accounts are required for the CrewAI agents to interact with Jira and Xray.
Required Accounts & API Access
OpenAI API Access
Jira Test Account
How to Set Up a Jira Test Account and Generate an API Key | by Abdul Qadir | Mar, 2026 | Medium
Xray for Jira Setup
Why Are We Using CrewAI?
There are many agent frameworks available today, but CrewAI is a great choice for this project, especially if you’re new to building AI agents.
Key Advantages
Beginner-friendly — simple concepts and minimal boilerplate
Role-based agents — define agents with clear goals (e.g., writer, reviewer)
YAML configuration — behavior can be defined without heavy coding
Fast to prototype — go from idea to working multi-agent system quickly
Lightweight — easier to understand than more complex orchestration frameworks
Active community — growing ecosystem and documentation
In short, CrewAI lets you focus on what the agents should do, not on building infrastructure.
For a project like converting Jira stories into Xray tests, this simplicity makes development faster and much easier for newcomers.
Section 1 — Clone the Repository
Start by cloning the public project repository to your machine. This repo contains the full CrewAI project, configuration files, and example code used in this tutorial.
1. Install Git (if not already installed)
Check if Git is installed:
Windows / macOS
git --version
If you see a version number, you’re good to go. If not, download and install Git from https://git-scm.com/downloads.
2. Clone the repository
Repository: https://github.com/qadir-dev-hub/ai-xray-test-generator
Windows (PowerShell)
git clone https://github.com/qadir-dev-hub/ai-xray-test-generator.git
cd ai-xray-test-generator
macOS / Linux (Terminal)
git clone https://github.com/qadir-dev-hub/ai-xray-test-generator.git
cd ai-xray-test-generator
3. Create your environment file
Copy the example environment file:
Windows
copy .env.example .env
macOS / Linux
cp .env.example .env
Open .env and fill in your credentials (Xray, Jira, etc.).
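The exact variable names come from the repo’s .env.example, so use those; a typical set of entries looks something like this (the names below are illustrative placeholders, not necessarily the repo’s):

```bash
# Illustrative .env contents — copy the exact names from .env.example.
OPENAI_API_KEY=sk-your-openai-key
JIRA_BASE_URL=https://your-site.atlassian.net
JIRA_EMAIL=you@example.com
JIRA_API_TOKEN=your-jira-api-token
XRAY_CLIENT_ID=your-xray-client-id
XRAY_CLIENT_SECRET=your-xray-client-secret
```

Never commit the filled-in .env file; keep it listed in .gitignore.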
Section 2 — Install uv and Project Dependencies
This project uses uv to manage Python environments and dependencies.
uv automatically creates a virtual environment and installs everything defined in pyproject.toml.
Install uv
Windows (PowerShell)
Run this command:
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
This will:
Install uv for your user
Add it to your PATH
Require no admin privileges
macOS / Linux
Run:
curl -LsSf https://astral.sh/uv/install.sh | sh
Verify installation
Close and reopen your terminal, then run:
uv --version
If a version number appears, the installation succeeded.
If uv is not recognized, restart your terminal or log out and back in so the PATH update takes effect.
Install project dependencies
Make sure you are in the project root (the folder containing pyproject.toml), then run:
uv sync
This command will:
Create a .venv virtual environment
Install CrewAI and all required packages
Generate a uv.lock file for reproducible installs
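For reference, the dependency section of pyproject.toml looks roughly like this — the exact package list and pins in the repo may differ (python-dotenv here is an assumption), but the Python range matches the project requirements:

```toml
[project]
name = "testcrew_ai"
requires-python = ">=3.10,<3.14"
dependencies = [
    "crewai[tools]",
    "python-dotenv",
]
```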
Quick sanity check
If everything installed correctly, you can test Python execution inside the environment:
uv run python --version
You should see Python 3.10–3.13 (as defined in the project requirements).
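Beyond the version number, you can confirm that packages resolve inside the environment. A small helper (not part of the repo, standard library only) that you could save as check.py and run with `uv run python check.py`:

```python
import importlib.util


def package_status(name: str) -> str:
    """Report whether a top-level package is importable in this environment."""
    return "ok" if importlib.util.find_spec(name) is not None else "missing"


if __name__ == "__main__":
    # After `uv sync`, "crewai" should report "ok".
    for pkg in ("crewai",):
        print(f"{pkg}: {package_status(pkg)}")
```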
Architecture Diagram
This diagram shows how a Jira user story is transformed into Xray manual tests using AI agents built with CrewAI.
- A user story from Jira serves as the input.
- CrewAI orchestrates the workflow between agents.
- The Test Case Writer Agent generates initial test cases.
- The Test Case Reviewer Agent refines and finalizes them.
- A custom tool calls the Xray API to create tests.
- Xray saves the manual Test issues back in Jira.
In short, the pipeline automates the flow from requirements → AI-generated test cases → ready-to-use Xray tests, significantly reducing manual effort.
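The flow above can be sketched in plain Python. This is an illustrative mock of the data flow only, not the repository’s code — in the real project the writer and reviewer steps are LLM-backed CrewAI agents, and the export step calls the Xray API:

```python
from dataclasses import dataclass, field


@dataclass
class TestCase:
    summary: str
    steps: list = field(default_factory=list)
    reviewed: bool = False


def write_test_cases(story: str) -> list:
    # Stand-in for the Test Case Writer Agent, which drafts cases from a story.
    return [TestCase(summary=f"Verify: {story}",
                     steps=["Given ...", "When ...", "Then ..."])]


def review_test_cases(cases: list) -> list:
    # Stand-in for the Test Case Reviewer Agent, which refines and finalizes.
    for case in cases:
        case.reviewed = True
    return cases


def export_to_xray(cases: list) -> list:
    # Stand-in for the custom tool that would POST each case to the Xray API.
    return [f"TEST-{i}" for i, _ in enumerate(cases, start=1)]


story = "User can reset their password via email"
issue_keys = export_to_xray(review_test_cases(write_test_cases(story)))
print(issue_keys)  # -> ['TEST-1']
```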
Section 3 — Project Structure Overview
Now that the project is installed, let’s take a quick look at how the repository is organized.
Understanding the project structure will make it much easier to follow how CrewAI agents, tasks, and tools work together to generate Xray test cases from Jira requirements.
A simplified view of the repository looks like this:
testcrew_ai/
├── src/
│ └── testcrew_ai/
│ ├── config/
│ │ ├── agents.yaml
│ │ └── tasks.yaml
│ ├── tools/
│ │ └── xray_tool.py
│ ├── crew.py
│ └── main.py
├── .env
├── pyproject.toml
└── README.md
What each part does
- config/agents.yaml Defines the AI agents used in the workflow, including their roles, goals, and behavior.
- config/tasks.yaml Describes the tasks assigned to each agent, such as generating and reviewing test cases.
- tools/xray_tool.py Contains the custom tool responsible for connecting to the Xray API and creating manual test cases in Jira.
- crew.py Brings the agents, tasks, and tools together into a CrewAI workflow.
- main.py Serves as the entry point for running the project.
- .env Stores the API keys and credentials required for OpenAI, Jira, and Xray.
- pyproject.toml Manages project metadata and dependencies.
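As a preview of config/agents.yaml: CrewAI’s YAML convention defines each agent under a name with role, goal, and backstory fields. The entries below follow that convention but are illustrative, not copied from the repo:

```yaml
test_case_writer:
  role: Senior QA Engineer
  goal: Turn a Jira user story into clear, step-by-step manual test cases
  backstory: You specialize in writing unambiguous, reproducible test cases.

test_case_reviewer:
  role: QA Lead
  goal: Review and refine generated test cases before they are sent to Xray
  backstory: You catch gaps, duplicates, and vague steps that others miss.
```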
Why this structure works well
This layout keeps the project clean and easy to extend:
- Configuration is separated from code
- Agents and tasks are easy to update without rewriting logic
- External integrations are isolated inside reusable tools
- The execution flow stays simple and readable
This separation is one of the reasons CrewAI is a good fit for projects like this. You can clearly see where the agent behavior lives, where the business logic lives, and where external systems like Xray are connected.
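To make the point about isolating external systems concrete, here is a rough, hypothetical sketch of the kind of pure helper a tool like tools/xray_tool.py might contain. The payload field names are an assumption — the real structure depends on which Xray API (Cloud GraphQL vs. Server/DC REST) the project calls:

```python
import json


def build_manual_test_payload(project_key: str, summary: str,
                              steps: list) -> dict:
    """Assemble a dict describing one manual test case.

    The tool would serialize this and send it to the Xray API; the field
    names here are illustrative, not Xray's actual schema.
    """
    return {
        "projectKey": project_key,
        "summary": summary,
        "testType": "Manual",
        "steps": steps,
    }


payload = build_manual_test_payload(
    "DEMO",
    "Verify password reset email is sent",
    [{"action": "Request a password reset", "result": "Reset email arrives"}],
)
print(json.dumps(payload, indent=2))
```

Keeping a pure “build the payload” step separate from the HTTP call makes the tool easy to unit-test without touching Jira or Xray.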
In the next section, we’ll look at how the agents are defined, how they are assigned work through tasks.yaml, and how the overall process moves from Jira requirements to Xray-ready test cases.
Read part 2 here — https://dev.to/abdul_qadir/ai-powered-test-generation-jira-to-xray-with-crewai-part-2-agents-tasks-and-xray-integration-3jid