1. Introduction
Still battling 'Wagile' processes, a skewed developer-to-tester ratio, and specs that could double as doorstops? You're not alone. This technical deep-dive builds on our previous discussion to show you exactly how to implement AI-assisted testing using Model Context Protocol (MCP) within your existing Atlassian ecosystem.
What You'll Achieve
By the end of this guide, you'll have a working AI assistant that can:
- Parse Confluence specifications to identify test scenarios and acceptance criteria
- Read your Jira tickets and extract acceptance criteria
- Generate comprehensive test cases in your team's format
- Identify edge cases you might have missed
- Create test documentation that actually gets used
Why MCP Over Generic AI Tools
You can paste specifications into ChatGPT or use Copilot, but MCP has key benefits:
- Direct Integration: No copy-pasting between tools
- Context Awareness: Understands your specific project structure
- Reduced Hallucination: Works with actual data and rules, not assumptions
- Pattern Learning: Adapts to your team's test framework
What This Guide Covers
This isn't about fixing broken processes—that's a cultural challenge. This is about practical implementation:
- Setting up MCP to connect with your Atlassian stack
- Configuring secure connections through corporate networks
- Creating your first AI-generated test suite
- Measuring the impact on your team's velocity
2. What You'll Need Before Starting
MCP Atlassian
It's worth reading about the MCP server that will be used: https://github.com/sooperset/mcp-atlassian
It's distributed as a Docker image, which means that the setup is generally very simple. The only wrinkles may be corporate certificates, which we cover later.
Docker
First up, you'll need Docker installed on your machine. Whether you're using Docker Desktop or Docker Engine doesn't matter—both work fine. If you don't have Docker, go to docs.docker.com/get-docker. Then, follow the installation guide for your operating system.
Why use Docker? It keeps the MCP server running the same way on any setup. Plus, it manages all your dependencies without cluttering your system.
Amazon Q CLI
Next, you'll need the Amazon Q Command Line Interface. Installation instructions are here: https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/command-line-installing.html. Once it's installed correctly, you can run q --version without any errors.
Atlassian Access
For Jira:
- A Jira account with API access (most regular accounts have this)
- A Personal Access Token (PAT)—not your password!
- The URL of your Jira instance (e.g., https://yourcompany.atlassian.net)
For Confluence:
- Like Jira: account, PAT, and URL
- Make sure you can access the spaces containing your specifications
Don't know how to create a PAT? In Atlassian, go to your account settings, find "Security," and look for "API tokens." Create one and save it somewhere secure—you can't view it again!
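If you want to sanity-check a token before wiring everything up, a quick API call does the job. Treat this as a rough sketch: the Bearer header works for Jira Server/Data Center PATs, Atlassian Cloud API tokens use basic auth with your email instead, and the URL is a placeholder.
# Works for Server/Data Center PATs; for Cloud API tokens use: curl -u you@company.com:API_TOKEN ...
curl -s -H "Authorization: Bearer YOUR_PAT_HERE" https://yourcompany.atlassian.net/rest/api/2/myself
A JSON blob describing your user means the token is good; a 401 means it isn't.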
Corporate Certificates
Here's the fun part: if you're behind a corporate firewall, you'll need your company's SSL certificates. They allow the Docker container to trust your corporate proxy/firewall for outbound connections. You'll usually have set them up when you started, probably in a certs folder in your home directory. They typically come as .crt, .pem or .cer files.
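If you're not sure whether a certificate file is the right one or still valid, OpenSSL can tell you. The path below is just an example, so point it at whatever your IT team gave you:
# Show the subject and expiry of a certificate (example path; add -inform der for binary .cer files)
openssl x509 -in ~/certs/corporate-root-ca.pem -noout -subject -enddate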
3. Setting Up Your Environment
Creating Your Configuration File
In the project directory, you'll find a file called .env-example. This is your template. Copy it to create your actual configuration:
cp .env-example .env
Adding Your Credentials
Open the .env file in your IDE. You'll see something like this:
JIRA_URL=https://yourcompany.atlassian.net
JIRA_PAT=your-jira-personal-access-token-here
CONFLUENCE_URL=https://yourcompany.atlassian.net/wiki
CONFLUENCE_PAT=your-confluence-personal-access-token-here
Replace the placeholder values with your actual credentials. A few tips:
- Don't include quotes around the values
- Make sure there are no trailing spaces
- Double-check your URLs—they should be the base URLs without any paths
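Trailing spaces are easy to miss by eye. One quick way to spot them on Linux (macOS users can use cat -e instead):
# Line ends are shown as $, so any space before the $ is a trailing space
cat -A .env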
Understanding Configuration Options
You'll also see some SSL-related settings:
SSL_VERIFY=true
CERT_PATH=/path/to/certificates
If you're behind a corporate firewall, you may need to set SSL_VERIFY=false temporarily for testing. But for production use, keep it set to true and provide the correct certificate path.
Warning: Setting SSL_VERIFY=false significantly reduces security and should never be used in production or with sensitive data. It's only for temporary testing to isolate certificate issues. Always aim to provide the correct certificate path and keep SSL_VERIFY=true.
4. Building and Running with Docker
Preparing Certificates
If your network uses a corporate proxy such as Zscaler, you'll need to build the Docker image yourself. Place the certificates in your project root so Docker can access them. If you already have certificates elsewhere (like ~/certs), you can modify the Dockerfile to copy from that location instead.
The Docker build process will automatically add these certificates. This lets the container make secure connections through your corporate network.
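If you end up writing the Dockerfile yourself rather than tweaking the one in the repo, it might look roughly like the sketch below. The upstream image name, certificate filename and paths are assumptions, so check the mcp-atlassian README for the supported approach.
# Hypothetical Dockerfile: extend the upstream image and trust a corporate root CA
# (image name, certificate filename and paths are assumptions; adjust to your setup)
FROM ghcr.io/sooperset/mcp-atlassian:latest
COPY zscaler-root-ca.crt /usr/local/share/ca-certificates/zscaler-root-ca.crt
# You may need USER root before this step if the base image runs as a non-root user
RUN update-ca-certificates
# Point Python's requests library at the system bundle that now includes your CA
ENV REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt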
Building the Docker Image
With your certificates in place, run:
docker build -t mcp-atlassian-with-zscaler .
This command does several things:
- Creates a Python environment
- Installs all necessary dependencies
- Configures your certificates
- Sets up the MCP server
The first build might take a few minutes.
If the build fails, it's usually because of missing certificates or network issues. Check the error messages—they're surprisingly helpful.
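A quick way to confirm the image is actually there:
# Lists the image you just built, with its size and creation time
docker images mcp-atlassian-with-zscaler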
5. Connecting to Amazon Q
With your Docker container built, it's time to connect everything together.
Starting Your First Chat Session
Open your terminal and run the following command:
q chat
When you run this command, Amazon Q automatically:
- Reads your MCP configuration from ./.amazonq/mcp.json.
- Establishes connections to your configured tools, including the MCP server.
Example mcp.json file
{
"mcpServers": {
"mcp_atlassian": {
"command": "docker",
"args": ["run", "-i", "--rm", "--env-file", ".env", "mcp-atlassian-with-zscaler"],
"disabled": false
}
}
}
"The MCP server runs inside Docker and communicates via stdin/stdout, so no port configuration is needed unless you're customising the setup."
Understanding the Initialisation
You should see something like this:
✓ mcp_atlassian loaded in 2.40 s
✓ 2 of 2 mcp servers initialized.
Each checkmark means a successful connection. If you see any errors here, it usually means:
- Your credentials are incorrect
- The URLs in your .env file are wrong
- Network/firewall issues
If there are errors:
- Ask Amazon Q about it
- Open Docker Desktop, look for the containers, select yours and click Logs
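If you prefer the command line to Docker Desktop, the same logs are available there too (the filter assumes the image name used earlier):
# Find the running container started from your image, then read its logs
docker ps --filter ancestor=mcp-atlassian-with-zscaler
docker logs <container-id>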
Verifying Your Connections
Once connected, try a simple query to verify everything works:
What are my open Jira tickets?
If Q responds with your actual tickets, congratulations! You're connected. If not, check your Jira PAT and URL in the .env file.
- At this point the MCP server will probably ask for permission to run a tool. A tool is a function the server exposes for Q to call, such as searching Jira.
- When asked, press t to trust the tool for this session, or y to allow it for this question only.
- You can configure tool permissions in Amazon Q from the chat: https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/command-line-chat-tools.html
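You can also inspect and adjust permissions from inside the chat session itself. The exact commands can vary by Q CLI version, so treat this as a sketch and check the linked docs:
/tools                    # list the tools your MCP servers expose and their trust level
/tools trust <tool-name>  # trust a specific tool for the rest of the session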
6. Using AI for Test Generation
Now for the fun part—actually using this setup to generate tests.
Example Queries That Work Well
Here are some queries that deliver immediate value:
Basic Jira queries:
- "What are my open assigned tickets, list them in priority order"
- "Show me all tickets in the current sprint for project XYZ"
- "What tickets are blocked?"
Test generation from tickets:
- "What tests should I do for <ticket ID>"
- "Generate test cases for the acceptance criteria in PROJ-12345"
- "What edge cases should I consider for ticket <ticket ID>?"
Referencing Confluence Specifications
When you have detailed specs in Confluence, you can get specific:
In the spec <confluence page id> in section 1.8.5, what test cases and edge cases should I be looking for?
The AI will:
- Fetch the specific Confluence page
- Find the exact section you mentioned
- Analyse the requirements
- Generate relevant test cases
Combining Jira Tickets with Specifications
This is where the real power shows up. You can cross-reference:
For <ticket ID> and the spec <confluence page id> in section 1.8.5, are there any requirements missing or unclear? What test cases should we do for this?
The AI will:
- Pull the Jira ticket details
- Fetch the Confluence specification
- Compare acceptance criteria with spec requirements
- Identify gaps or inconsistencies
- Suggest comprehensive test coverage
Finding Gaps in Requirements
One of the most valuable uses is identifying what's missing:
- "Looking at ticket XYZ-123 and its linked specification, what acceptance criteria might be missing?"
- "Based on the UI mockups in CONF-789 and the requirements in JIRA-456, what scenarios aren't covered?"
- "What questions should I ask the product owner about this feature?"
7. Working with Chat Sessions
Amazon Q can keep context between sessions. This is very helpful for ongoing testing tasks.
Resuming Previous Conversations
When you're in a project directory, Q remembers your previous conversations. To continue where you left off:
q chat --resume
You'll see a summary like:
We discussed your assigned Jira tickets related to configuring cars then examined the rules in your .q folder that define test case formats and bundle selection conventions for your project.
Understanding Context Retention
Q maintains context within a folder, which means:
- Your test patterns are remembered
- Previous queries inform new responses
- You can build on earlier work without re-explaining
Best Practices for Ongoing Projects
To get the most from context retention:
- Use project-specific folders: Keep each project's chats separate
- Start sessions with context: "We're testing the car configurator feature"
- Reference previous work: "Using the test pattern we established yesterday..."
- Build incrementally: Start with simple tests, then add complexity
8. Troubleshooting Common Issues
Things don't always work perfectly. Here's how to fix common problems.
Connection Problems
Symptom: MCP servers fail to initialise
Common causes:
- Incorrect URLs in the .env file
- Network firewall blocking connections
- Docker not running
Fix:
- Verify Docker is running: docker ps
- Try with SSL_VERIFY=false temporarily to isolate certificate issues
Authentication Errors
Symptom: "401 Unauthorized" or "403 Forbidden" errors
Common causes:
- Expired or incorrect PAT
- Insufficient permissions
Fix:
- Generate a new PAT in Atlassian
- Ensure the token has read access to projects/spaces you need
- Update the .env file with the new token
- Rebuild the Docker image
Certificate Issues
Symptom: SSL verification errors
Common causes:
- Missing corporate certificates
- Certificates in wrong location
- Expired certificates
Fix:
- Get latest certificates from IT
- Place in project root directory
- Rebuild Docker image
- Ensure CERT_PATH in .env is correct
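If certificate errors persist, it helps to see which certificate chain your network actually presents for the Atlassian host. The hostname below is a placeholder:
# Shows who issued the certificate your proxy presents for the host
openssl s_client -connect yourcompany.atlassian.net:443 -servername yourcompany.atlassian.net </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject
If the issuer is your proxy vendor rather than a public CA, the container needs that root certificate baked in.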
9. Next Steps
You're up and running—now what?
Quick Wins to Try First
- Generate tests for your most complex feature: Pick that configurator or rules engine that everyone avoids testing manually
- Document existing features: Use AI to create test documentation for features that were "completed" without proper test cases
- Find test gaps: Run your existing test suites through AI analysis to identify missing scenarios
Building Evidence for Process Improvement
Track these metrics from day one:
- Time to create test cases (before vs. after)
- Number of edge cases identified
- Defects found using AI-generated tests
- Time saved per sprint
Use this data to show management the value of:
- Better developer-to-tester ratios
- Time for exploratory testing
- Investment in test automation
Expanding Usage Across Your Team
Start small and grow:
- Begin with one complex feature
- Share success stories in team meetings
- Create team-specific prompt templates
- Schedule brown-bag sessions to train others
- Build a library of effective queries
Remember: This tool doesn't fix broken processes, but it gives you breathing room to work on what really matters—building quality into your products from the start.
The path forward isn't about replacing testers with AI. It's about amplifying what testers do best: thinking critically about quality, understanding user needs, and finding the problems that matter.
10. Getting More Advanced
- Amazon Q, like most LLMs, thrives on rules.
- Rules are markdown files that can be either global or project/repo based.
- Rules are stored in the ./.amazonq/rules folder.
- In a chat, Q will load the rules, but sometimes it may forget about some of them, which is why your prompts matter. A minimal example rule file is sketched below.
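To make this concrete, here's a minimal, hypothetical rule file. The filename and conventions are illustrative rather than taken from a real project:
./.amazonq/rules/bdd-test-format.md
- Write scenarios in Given/When/Then form, one user action per step.
- Phrase UI steps as "the user clicks the <element> button on the <page> page", where <element> must exist as a locator in the page object for <page>.
- Start every scenario with navigation to the page under test and a page load verification step.
- If a locator doesn't exist yet, flag it beneath the test with: // NEW ELEMENT NEEDED: [description]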
Building rules
I would start off small and describe the interactions of a feature in human-readable terms. We tend to have states and interactions described in Figma. Without these interactions, Q doesn't know what your system looks like, which means it may not be able to write the cases how you'd like.
For example, we have a framework that takes BDD steps and translates them into automated UI tests in WebdriverIO.
If your framework has a step like "Then the user clicks the continue button on the configuration page", where continue refers to a specific UI element (a 'locator') within a 'page object' for the configuration page, your rules would teach Q about these conventions, allowing it to generate BDD steps that directly translate to your existing automation framework.
With this, Q knows the layout of our page objects and the rules of how we write BDD tests.
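For instance, a scenario following that convention might look like this sketch (the feature, page and element names are illustrative):
Feature: Car configurator
  Scenario: User finalises a valid configuration
    Given the user is on the configuration page
    When the user clicks the continue button on the configuration page
    Then the user is on the summary page
// NEW ELEMENT NEEDED: summary page confirmation banner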
Some of these rules I taught it through trial and error: it would produce something that looked correct but was missing something, perhaps the way it verified a result. Using the chat interface I would ask why it did something and work through the problem with the LLM. At the end I would ask it to update the rules.
Understanding the business
One thing to bear in mind is that the business, when they view your test cases, don't want to see detailed steps with code in them; they want something human readable. That's something the LLM can do, using the knowledge you taught it of interactions and states from Figma and its understanding of how you structure your tests.
We can now ask the LLM to produce cases that can run automation, but also human-readable test cases. It can even produce them within the same request.
Prompt Engineering
The LLM doesn't always do what you ask, which is why prompts matter. If you've been working with the LLM within a chat, you can probably just say "make a test case about putting in a light bulb" and it'll do it. But a well-structured prompt really helps:
Create a BDD test case for [TICKET_ID/valid car configurations] following our framework conventions.
Context:
- Framework: WebdriverIO with BDD (Cucumber)
- Domain: Bundle selection testing
- Scope: Up to and including finalising configuration
Requirements:
1. Use current valid car configurations from our business rules
2. Follow standard Given/When/Then structure with proper indentation
3. Include page navigation verification at each step
4. Add strategic screenshot steps for:
- Initial state
- After key user actions
- Final state verification
5. Clearly mark any new page object elements needed beneath the test: // NEW ELEMENT NEEDED: [description]
Mandatory inclusions:
- Initial test setup steps
- Page load verification
- Data validation steps
- Error handling considerations
Output format:
Feature: [Feature name]
  Scenario: [Scenario name]
    Given...
    When...
    Then...