Abdul Qadir

Posted on • Originally published at Medium

AI-Powered Test Generation: Jira to Xray with CrewAI: Part 2 — Agents, Tasks, and Xray Integration

Discover how CrewAI agents can automatically convert Jira requirements into Xray test cases. Includes full setup instructions, project layout, and a ready-to-run GitHub example.

Section 1 — Defining the CrewAI Agents

With the project structure in place, the next step is understanding the agents that drive the workflow.

In CrewAI, agents are assigned clear responsibilities. Rather than relying on a single general-purpose agent, this project uses two specialized agents: one to generate test cases and another to review and improve them.

The actual agent definitions live in config/agents.yaml:

test_case_writer:
  role: QA Test Case Writer
  goal: Write comprehensive functional test cases from Jira stories
  backstory: >
    You are a senior QA engineer.
    You write clear, structured test cases
    including positive, negative, and edge scenarios.
  verbose: true
  llm: openai/gpt-5-nano

test_case_reviewer:
  role: QA Test Case Reviewer
  goal: Review and improve test cases for quality and coverage
  backstory: >
    You are a QA lead.
    You ensure test cases fully cover acceptance criteria,
    are clear, consistent, and complete.
  verbose: true
  llm: openai/gpt-5.4

What these agents do

  • The QA Test Case Writer reads the Jira story and creates an initial set of functional test cases.
  • The QA Test Case Reviewer checks those test cases for clarity, completeness, and overall coverage before they are sent to Xray.

This two-agent approach mirrors a real QA workflow. One agent focuses on drafting the tests, while the second acts as a quality gate to refine the output. In many cases, it also makes sense to use different LLMs for these roles, such as one model for fast generation and another for stronger review quality.

Why this configuration matters

Each agent definition includes a few important pieces:

  • role defines the responsibility of the agent in the workflow.
  • goal tells the agent what outcome it should aim for.
  • backstory gives behavioral context that helps shape the style and quality of its output.
  • llm specifies which model powers the agent.
  • verbose: true makes execution easier to inspect during development.

One of CrewAI’s strengths is that this behavior can be adjusted in YAML without changing core application logic. That makes the system easier to tune as your prompts, test quality standards, or workflow evolve.
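As an illustration, tightening the reviewer's behavior or swapping its model is a YAML-only change. The values below are hypothetical examples, not part of the project's actual config:

```yaml
test_case_reviewer:
  role: QA Test Case Reviewer
  goal: Review and improve test cases for quality and coverage
  backstory: >
    You are a QA lead. Reject any test case that lacks
    explicit preconditions or a verifiable expected result.
  verbose: false          # quieter logs once the prompts are stable
  llm: openai/gpt-5-nano  # illustrative swap to a cheaper model
```

No Python changes are needed for a tweak like this; the same `crew.py` picks up the new behavior on the next run.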

In the next section, we’ll look at how these agents are assigned work through tasks.yaml, and how the overall process moves from Jira requirements to Xray-ready test cases.

Section 2 — Defining the Tasks

Once the agents are defined, the next step is assigning work to them. In CrewAI, this is done through config/tasks.yaml.

This file describes what each agent should do, what input it receives, what output it should produce, and where that output should be saved. In this project, the workflow is split into two tasks: one for writing test cases and one for reviewing and publishing them.

The actual task definitions look like this:

write_test_cases:
  description: >
    INPUT:
    - story: Jira user story text

    You are provided with a Jira user story.

    Your task is to write comprehensive functional test cases based on the story.

    Requirements:
    - Cover positive, negative, and edge cases
    - Align all test cases with the acceptance criteria
    - Use clear, unambiguous language

    Each test case must include:
    - Test Case ID
    - Title
    - Preconditions
    - Test Steps
    - Expected Result

    Jira User Story:
    {story}

    Output the test cases in Markdown format.
    Save the output to output/test_cases_v1.md.
  expected_output: >
    A Markdown file containing clear, well-structured functional test cases
    derived from the Jira user story.
  agent: test_case_writer
  output_file: output/test_cases_v1.md

review_test_cases:
  description: >
    INPUTS:
    - story: Jira user story text

    You are provided with:
    - The original Jira user story
    - A set of test cases written in output/test_cases_v1.md

    Jira User Story:
    {story}

    Review the existing test cases and improve them by:
    - Ensuring full coverage of acceptance criteria
    - Identifying and adding missing scenarios
    - Improving clarity and consistency
    - Removing redundancy or ambiguity

    Do NOT remove valid test cases unless necessary.
    You may refactor, merge, or expand them where appropriate.

    Output the improved test cases in Markdown format.
    Save the output to output/test_cases_v2_reviewed.md.

    After generating the final improved test cases,
    use the "XrayCreateTestTool" to create a Manual Test
    in Xray for each finalized test case.

    Pass:
    - summary = Test Case Title
    - description = Full test case content
    - project_key = "SCRUM"
  expected_output: >
    A reviewed and improved Markdown file containing high-quality,
    acceptance-criteria-aligned test cases.
  agent: test_case_reviewer
  output_file: output/test_cases_v2_reviewed.md

What these tasks do

The first task, write_test_cases, is assigned to the test_case_writer agent. Its job is to read the Jira story and generate the first version of the test cases. The output is saved to output/test_cases_v1.md.

The second task, review_test_cases, is assigned to the test_case_reviewer agent. This task takes the original Jira story along with the first draft of the test cases, improves the content, and saves the final reviewed version to output/test_cases_v2_reviewed.md.

Why this task design works well

This setup creates a simple but effective two-stage workflow:

  • the first task focuses on generation
  • the second task focuses on refinement and delivery

That separation improves the final quality of the tests while keeping each agent’s responsibility clear.

Another important detail is that the review task does more than just edit Markdown output. It also instructs the reviewer agent to call the XrayCreateTestTool, which creates a Manual Test issue in Xray for each finalized test case. This is the step that moves the workflow from AI-generated content to actual test management inside Jira.
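Conceptually, that means the reviewer must split its final Markdown into one (summary, description) pair per test case before calling the tool. Here is a rough sketch of that split; the `### ` heading convention is an illustrative assumption, since in the real pipeline the agent infers the structure from its own LLM output:

```python
def split_test_cases(markdown: str) -> list[dict]:
    """Split reviewed Markdown into per-test-case tool arguments.

    Assumes each test case starts with a '### ' heading; the real
    reviewer agent derives this structure from its own output.
    """
    cases: list[dict] = []
    title: str | None = None
    body: list[str] = []

    def flush() -> None:
        # Emit the test case collected so far, if any.
        if title is not None:
            cases.append({
                "summary": title,
                "description": "\n".join(body).strip(),
                "project_key": "SCRUM",
            })

    for line in markdown.splitlines():
        if line.startswith("### "):
            flush()
            title, body = line[4:].strip(), []
        elif title is not None:
            body.append(line)
    flush()
    return cases

doc = """### TC-01 Valid login
Steps: enter valid credentials.
Expected: user is logged in.

### TC-02 Invalid password
Steps: enter a wrong password.
Expected: error message is shown."""

print([c["summary"] for c in split_test_cases(doc)])
# → ['TC-01 Valid login', 'TC-02 Invalid password']
```

Each dict in the result matches the `summary` / `description` / `project_key` arguments the task description tells the agent to pass to `XrayCreateTestTool`.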

Key takeaway

The tasks.yaml file acts as the operational blueprint of the system. It tells CrewAI what should happen, in what sequence, and what each stage should produce.

In the next section, we’ll look at how these agents and tasks are wired together in crew.py to create the full CrewAI workflow.

Section 3 — Wiring the Workflow in crew.py

After defining the agents and tasks, the next step is connecting them into a working workflow. In this project, that orchestration happens in crew.py.

This file is where the CrewAI application is assembled. It maps the YAML configuration into Python objects, attaches tools to the right agent, and defines how the tasks should run.

The core implementation looks like this:

from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from testcrew_ai.tools.xray_tool import XrayCreateTestTool

@CrewBase
class TestCrewAI:
    """
    Crew class for manual test case writing and reviewing.
    """
    agents_config: str = 'config/agents.yaml'
    tasks_config: str = 'config/tasks.yaml'

    @agent
    def test_case_writer(self) -> Agent:
        """Agent responsible for writing test cases."""
        return Agent(
            config=self.agents_config['test_case_writer'],
            verbose=True
        )

    @agent
    def test_case_reviewer(self) -> Agent:
        """Agent responsible for reviewing test cases, with Xray tool."""
        return Agent(
            config=self.agents_config['test_case_reviewer'],
            verbose=True,
            tools=[XrayCreateTestTool()]
        )

    @task
    def write_test_cases(self) -> Task:
        """Task for writing test cases."""
        return Task(
            config=self.tasks_config['write_test_cases'],
        )

    @task
    def review_test_cases(self) -> Task:
        """Task for reviewing test cases."""
        return Task(
            config=self.tasks_config['review_test_cases'],
        )

    @crew
    def crew(self) -> Crew:
        """Creates the manual testing crew."""
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
        )

What crew.py does

This file serves as the bridge between configuration and execution.

  • It loads agent definitions from config/agents.yaml
  • It loads task definitions from config/tasks.yaml
  • It creates Python Agent and Task objects using CrewAI decorators
  • It attaches the XrayCreateTestTool to the reviewer agent
  • It runs the workflow using a sequential process

Why the reviewer gets the Xray tool

One important design choice in this file is that the XrayCreateTestTool is attached only to the test_case_reviewer agent.

That means the writer agent is responsible only for generating draft test cases, while the reviewer agent handles the final refinement and the publishing step into Xray. This is a clean separation of responsibilities and helps keep the workflow easier to reason about.

Why use Process.sequential

The crew is configured with Process.sequential, which means the tasks run in order rather than in parallel.

That makes sense for this use case because the second task depends on the output of the first one:

  • first, the writer creates the initial test cases
  • then, the reviewer improves them
  • finally, the reviewer uses the Xray tool to create Manual Test issues

A sequential flow is the simplest and safest orchestration model for this kind of staged QA pipeline.
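The dependency can be pictured as a tiny file-handoff pipeline, with plain Python standing in for the two CrewAI tasks (function names and file contents here are illustrative):

```python
import pathlib
import tempfile

def write_stage(out_path: pathlib.Path) -> None:
    # Stage 1: the "writer" produces the first draft.
    out_path.write_text("## TC-01 draft test case\n")

def review_stage(in_path: pathlib.Path, out_path: pathlib.Path) -> None:
    # Stage 2: the "reviewer" can only run once the draft exists.
    draft = in_path.read_text()
    out_path.write_text(draft.replace("draft", "reviewed"))

workdir = pathlib.Path(tempfile.mkdtemp())
v1 = workdir / "test_cases_v1.md"
v2 = workdir / "test_cases_v2_reviewed.md"

write_stage(v1)       # must finish first...
review_stage(v1, v2)  # ...before the reviewer can start
print(v2.read_text())  # prints "## TC-01 reviewed test case"
```

Running these stages in parallel would make the reviewer read a file that does not exist yet, which is exactly why `Process.sequential` is the right fit here.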

Why this file matters

If agents.yaml defines who the agents are, and tasks.yaml defines what they do, then crew.py defines how everything comes together into one executable system.

It is the file that turns configuration into a working CrewAI pipeline.

In the next section, we’ll look at main.py, where the Jira issue is fetched, prepared as input, and passed into the crew for execution.

Section 4 — Running the Workflow from main.py

With the agents, tasks, and crew now defined, the final step is to run the workflow with a real Jira issue. In this project, that logic lives in main.py.

This file acts as the entry point of the application. It loads environment variables, fetches a Jira issue, extracts the story content into a format suitable for the agents, and then starts the CrewAI workflow.

The key parts of main.py are shown below:

#!/usr/bin/env python
import os
import warnings
import requests
from requests.auth import HTTPBasicAuth
from dotenv import load_dotenv
from typing import Dict
from testcrew_ai.crew import TestCrewAI

warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd")

# Load .env variables
load_dotenv()

JIRA_BASE_URL = os.getenv("JIRA_BASE_URL")
JIRA_EMAIL = os.getenv("JIRA_EMAIL")
JIRA_API_TOKEN = os.getenv("JIRA_API_TOKEN")

def get_jira_issue(issue_key: str) -> Dict:
    """
    Fetch a Jira issue by key.
    """
    url = f"{JIRA_BASE_URL}/rest/api/3/issue/{issue_key}"
    response = requests.get(
        url,
        auth=HTTPBasicAuth(JIRA_EMAIL, JIRA_API_TOKEN),
        headers={"Accept": "application/json"}
    )
    response.raise_for_status()
    return response.json()

def extract_story_for_ai(issue: Dict) -> str:
    """
    Extracts a formatted story from a Jira issue for AI processing.
    """
    fields = issue["fields"]
    description_text = "No description provided"
    description = fields.get("description")
    if description and "content" in description:
        lines = []
        for block in description["content"]:
            if "content" in block:
                for item in block["content"]:
                    if item.get("type") == "text":
                        lines.append(item["text"])
        description_text = "\n".join(lines)
    return f"""
Title:
{fields.get("summary", "No title")}

Description:
{description_text}

Issue Type:
{fields["issuetype"]["name"]}

Priority:
{fields["priority"]["name"] if fields.get("priority") else "Not set"}
""".strip()

def run(issue_key: str | None = None) -> None:
    """
    Run the research crew for a given Jira issue key.
    """
    if issue_key is None:
        issue_key = os.getenv("JIRA_ISSUE_KEY", "SCRUM-1")
    issue = get_jira_issue(issue_key)
    story_text = extract_story_for_ai(issue)
    inputs = {'story': story_text}
    result = TestCrewAI().crew().kickoff(inputs=inputs)
    print(result)

if __name__ == "__main__":
    run()

What main.py does

This file is responsible for preparing the real-world input that the agents will use.

  • It loads Jira credentials from .env
  • It fetches a Jira issue using the Jira REST API
  • It extracts useful fields such as title, description, issue type, and priority
  • It converts that data into a structured text prompt
  • It passes that prompt into the CrewAI workflow as the story input

Why the story extraction step matters

The agents do not work directly with raw Jira JSON. Instead, extract_story_for_ai() transforms the issue into a cleaner, readable text format.

That is important because LLMs perform much better when given structured natural language rather than deeply nested API responses. By extracting only the fields that matter, the project gives the agents a clearer understanding of the requirement they need to convert into test cases.
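To see what this produces, here is the same traversal logic applied to a minimal Atlassian Document Format (ADF) payload, the nested structure Jira's v3 API returns for descriptions. The sample issue content is made up for illustration:

```python
def adf_to_text(description: dict) -> str:
    """Flatten a (simplified) ADF description into plain text,
    mirroring the traversal in extract_story_for_ai()."""
    lines = []
    for block in description.get("content", []):
        for item in block.get("content", []):
            if item.get("type") == "text":
                lines.append(item["text"])
    return "\n".join(lines)

# Made-up description fragment in the shape Jira's v3 API returns.
sample = {
    "type": "doc",
    "content": [
        {"type": "paragraph", "content": [
            {"type": "text", "text": "As a user, I want to reset my password."}]},
        {"type": "paragraph", "content": [
            {"type": "text", "text": "AC1: a reset email is sent within 1 minute."}]},
    ],
}

print(adf_to_text(sample))
```

Instead of three levels of nested JSON, the agent receives two plain sentences it can reason about directly.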

How the execution flow works

At runtime, the flow is simple:

  • a Jira issue key is provided, either directly or through the .env file
  • the issue is fetched from Jira
  • the story is formatted for AI processing
  • the crew is kicked off with that story as input
  • the tasks run in sequence until the final reviewed test cases are created and sent to Xray

This makes main.py the handoff point between Jira data and the CrewAI pipeline.

In the next section, we’ll look at the custom xray_tool.py implementation, which is responsible for creating Manual Test issues in Xray from the final reviewed output.

Section 5 — Creating Manual Tests in Xray with xray_tool.py

The final piece of the workflow is the custom tool that sends the reviewed test cases into Xray. This logic lives in tools/xray_tool.py.

While the agents and tasks handle the AI-driven parts of the pipeline, this tool handles the external integration. Its job is to authenticate with Xray Cloud and create Manual Test issues in the target Jira project.

Here is the implementation used in this project:

import os
import requests
from dotenv import load_dotenv
from crewai.tools import BaseTool
from typing import Any, Dict

def load_xray_env() -> Dict[str, str]:
    """
    Loads and returns Xray credentials from environment variables.
    Raises an exception if required variables are missing.
    """
    load_dotenv()
    client_id = os.getenv("XRAY_CLIENT_ID")
    client_secret = os.getenv("XRAY_CLIENT_SECRET")
    if not client_id or not client_secret:
        raise EnvironmentError("XRAY_CLIENT_ID and XRAY_CLIENT_SECRET must be set in environment.")
    return {"client_id": client_id, "client_secret": client_secret}

def get_xray_token() -> str:
    """
    Authenticates with Xray and returns a bearer token.
    """
    creds = load_xray_env()
    response = requests.post(
        "https://xray.cloud.getxray.app/api/v2/authenticate",
        json={
            "client_id": creds["client_id"],
            "client_secret": creds["client_secret"]
        }
    )
    response.raise_for_status()
    return response.text.strip('"')

class XrayCreateTestTool(BaseTool):
    name: str = "Create Xray Test"
    description: str = "Creates a Manual Test in Xray Jira project"

    def _run(self, summary: str, description: str, project_key: str = "SCRUM") -> Any:
        """
        Creates a manual test in the specified Xray Jira project.
        """
        token = get_xray_token()
        payload = [
            {
                "xray_testtype": "Manual",
                "fields": {
                    "project": {"key": project_key},
                    "summary": summary,
                    "description": description,
                    "issuetype": {"name": "Test"},
                }
            }
        ]
        response = requests.post(
            "https://xray.cloud.getxray.app/api/v2/import/test/bulk",
            headers={
                "Authorization": f"Bearer {token}",
                "Content-Type": "application/json"
            },
            json=payload
        )
        response.raise_for_status()
        return response.json()

How the tool works

This tool performs two main steps:

  • It authenticates with Xray Cloud using XRAY_CLIENT_ID and XRAY_CLIENT_SECRET
  • It sends a request to the Xray bulk test import API to create a Manual Test issue

The authentication step returns a bearer token, which is then included in the request headers for the test creation call.
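One practical note: as written, `_run()` authenticates on every call, so publishing N test cases performs N auth round-trips. A memoized token helper is an easy improvement. The sketch below uses a stand-in fetcher instead of the real `get_xray_token()`, and deliberately ignores token expiry:

```python
import functools

def make_token_provider(fetch):
    """Wrap a token-fetching callable so the token is fetched once
    and reused on subsequent calls (no expiry handling here)."""
    @functools.lru_cache(maxsize=1)
    def provider():
        return fetch()
    return provider

calls = []

def fake_fetch():  # stand-in for the real get_xray_token()
    calls.append(1)
    return "token-abc"

get_token = make_token_provider(fake_fetch)
print(get_token(), get_token())  # → token-abc token-abc
print(len(calls))                # → 1  (only one auth round-trip)
```

In the real tool you would pass `get_xray_token` as the fetcher and, since Xray tokens do expire, add a timestamp check before reusing a cached value.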

Why this tool matters

This is the point where the workflow moves beyond text generation and becomes actionable.

Up to this stage, the agents are producing and refining Markdown test cases. But with XrayCreateTestTool, the final output is pushed directly into Xray as real Jira Test issues. That makes the system much more useful in practice, because the generated tests are no longer just suggestions; they become part of the team's test management workflow.

How it connects to the review task

Earlier, in tasks.yaml, the reviewer task instructed the agent to call XrayCreateTestTool after producing the final improved test cases.

That means the reviewer agent is responsible for:

  • validating the quality of the final test cases
  • selecting the finalized content
  • sending each approved test case to Xray

This is a nice design because it keeps test publication tied to the quality-control stage rather than the initial draft stage.

Running the full Jira-to-Xray workflow

After setting up the project and configuring your credentials, run:

crewai run

This will execute the CrewAI pipeline end to end:

  • fetch the Jira story
  • generate the first draft of test cases
  • review and improve the test cases
  • create Manual Test issues in Xray

The screenshot below shows the project running through the CrewAI pipeline.

Wrapping up

At this point, the full workflow is in place:

  • Jira provides the source requirement
  • the writer agent generates initial test cases
  • the reviewer agent improves them
  • the custom Xray tool creates Manual Test issues in Jira

Together, these pieces form an end-to-end pipeline that turns Jira stories into structured, reviewable, and publishable Xray tests using CrewAI.

Read part 1 — https://dev.to/abdul_qadir/ai-powered-test-generation-jira-to-xray-with-crewai-part-1-setup-and-project-overview-p0a
