QA engineers don't just find bugs—they build the systems that prevent them. You write test plans, define acceptance criteria, document defects, draft automation frameworks, communicate risk to stakeholders, and maintain the quality standards that let teams ship with confidence.
The paperwork is significant: test case documentation, bug reports, test summary reports, regression suites, API test scripts, release sign-offs. And the communication overhead—explaining defects to developers, justifying test coverage to product managers, reporting status to leadership—takes time away from actual testing.
ChatGPT can't run your tests. But it can dramatically accelerate the documentation, reporting, and communication work that surrounds your QA practice. These 35 prompts cover the full QA workflow: test planning, test case writing, defect reporting, automation, stakeholder communication, process documentation, and professional development.
Test Planning and Strategy
Prompt 1 — Write a test plan
Write a test plan for [feature or system]. Feature description: [what it does]. Scope: [what's in / out of scope for testing]. Test objectives: [what we're validating]. Test types to execute: [functional, regression, performance, security, UAT]. Entry/exit criteria: [when testing starts and when it's done]. Risks: [what could delay or compromise testing]. Resources needed: [people, environments, tools]. Timeline: [phases and dates].
Prompt 2 — Write a risk-based testing strategy
Write a risk-based testing strategy for [release or feature]. User impact areas (ranked by risk): [list the features or flows by business criticality]. For each area: assess the risk level (High/Medium/Low), the consequence of a defect, the recommended test depth (full regression / smoke / exploratory), and the test types needed. Use this to prioritize testing effort when time is limited.
Prompt 3 — Define acceptance criteria
Write acceptance criteria for the following user story: [paste story]. For each criterion use Gherkin format: Given [precondition], When [action], Then [expected result]. Cover: happy path, at least 2 edge cases, error handling, and any business rule validations. Each criterion must be independently verifiable. Flag any ambiguity in the story that needs product clarification before testing.
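If you're new to Gherkin, here's the shape of output to expect: a minimal sketch for a hypothetical password-reset story. The wording is illustrative, not tied to any real product.

```gherkin
# Hypothetical story: "As a user, I can reset my password via email."
Scenario: Registered user requests a reset (happy path)
  Given a registered user with a verified email address
  When they submit the password reset form with that email
  Then a reset link is emailed to that address

Scenario: Reset requested for an unknown email (error handling)
  Given no account exists for "nobody@example.com"
  When a password reset is requested for that address
  Then a generic confirmation message is shown (no account enumeration)
  And no email is sent
```

Each scenario is independently verifiable, which is exactly what the prompt asks the model to enforce.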
Prompt 4 — Write a test scope document
Write a test scope document for [project or release]. Features included in this release: [list]. Testing types in scope: [functional, regression, smoke, API, performance, security — list what applies]. Features explicitly out of scope: [list]. Environments: [dev, staging, production]. Dependencies: [third-party systems, data, access needed]. Assumptions: [list]. Risks of gaps in scope: [note any].
Prompt 5 — Create a test estimation
Create a test effort estimation for [release]. Features to test: [list]. For each feature: estimate the number of test cases, the hours to write them, the hours to execute them (manual), and the hours for automation (if applicable). Total: [aggregate]. Assumptions: [list]. Risk buffer: [X%]. Present as a table the development team and PM can use for sprint planning.
Test Case Writing
Prompt 6 — Write functional test cases
Write test cases for [feature]. Format each test case with: test case ID, title, preconditions, test steps (numbered), expected result, and priority (P1/P2/P3). Cover: happy path, alternate flows, negative cases, boundary values, and error handling. Assume a tester who knows the system but didn't write the feature. Each test case should be independently executable.
Prompt 7 — Write API test cases
Write API test cases for the following endpoint: [method] [endpoint]. Inputs: [parameters, request body]. Expected behavior: [describe happy path]. Write test cases covering: successful response (200), invalid input (400), unauthorized (401), not found (404), server error (500), edge cases for each parameter, and any business rule validations. Include the expected response body structure for key cases.
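As a sketch of how the generated cases can be organized, here's a minimal table-driven matrix in TypeScript. It assumes a hypothetical GET /api/users/:id endpoint; the base URL, bearer token, and ids are placeholders to adapt to whatever the prompt produces for your API.

```typescript
// Table-driven matrix for a hypothetical GET /api/users/:id endpoint.
interface ApiTestCase {
  name: string;
  path: string;
  token?: string; // omitted to simulate an unauthenticated call
  expectedStatus: number;
}

const getUserCases: ApiTestCase[] = [
  { name: "existing user returns 200", path: "/api/users/42", token: "VALID_TOKEN", expectedStatus: 200 },
  { name: "malformed id returns 400", path: "/api/users/not-a-number", token: "VALID_TOKEN", expectedStatus: 400 },
  { name: "missing token returns 401", path: "/api/users/42", expectedStatus: 401 },
  { name: "unknown user returns 404", path: "/api/users/999999", token: "VALID_TOKEN", expectedStatus: 404 },
];

async function runMatrix(baseUrl: string): Promise<void> {
  for (const tc of getUserCases) {
    // One request and one status assertion per case keeps each
    // test independently executable and easy to diagnose.
    const res = await fetch(baseUrl + tc.path, {
      headers: tc.token ? { Authorization: `Bearer ${tc.token}` } : {},
    });
    console.assert(res.status === tc.expectedStatus, tc.name);
  }
}
```

One row per generated case makes gaps easy to spot: a missing 500 scenario or an untested parameter shows up as a missing line.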
Prompt 8 — Write regression test cases
Write a regression test suite for [module or feature area]. The core functionality that must always work: [list critical paths]. For each regression case: write a concise test case title, one-line description, and the critical assertion (what must be true for the test to pass). Prioritize by impact — P1 cases should catch any regression that would block release.
Prompt 9 — Write negative test cases
Write negative test cases for [feature]. Focus on: invalid inputs (wrong type, out of range, null/empty, special characters), unauthorized access, missing required fields, exceeding limits, concurrent/race conditions, and system boundary violations. For each case: describe the input, the expected error response or behavior, and why this matters to test. These should be the cases that break naive implementations.
Prompt 10 — Write end-to-end test cases
Write end-to-end test cases for the following user journey: [describe the flow from start to finish]. Break it into: preconditions, step-by-step actions with expected result at each step, key assertions, and how to verify the final state. Flag any steps that require test data setup or environment-specific conditions. Write for a tester who knows the product but hasn't tested this flow.
Defect Reporting and Management
Prompt 11 — Write a bug report
Write a professional bug report for the following defect: [describe what happened]. Include: bug title (clear, specific), severity (Critical/High/Medium/Low), priority (P1-P4), environment (OS, browser, version, test data), steps to reproduce (numbered, precise), expected behavior, actual behavior, and attachments needed (logs, screenshots — describe). Write clearly enough that a developer who wasn't there can reproduce it immediately.
Prompt 12 — Write a critical bug escalation
Write an escalation message for a critical bug found in [environment] that [describe impact — blocks testing / affects data integrity / security vulnerability / crashes application]. What we found: [describe]. Impact: [who is affected, what functionality is broken]. Steps to reproduce: [brief]. Recommended action: [what needs to happen — immediate fix, rollback, hotfix, workaround]. Audience: engineering lead and product manager. Urgent but factual.
Prompt 13 — Write a defect triage summary
Write a defect triage summary for today's triage meeting. Defects reviewed: [list with IDs and titles]. For each defect: severity, priority agreed upon, owner assigned, resolution target (this sprint / next sprint / backlog), and any decision notes. Total open defects: [X]. Defects blocking release: [X]. Defects closed since last triage: [X]. Format for team standup and as a record of triage decisions.
Prompt 14 — Write a defect trend analysis
Write a defect trend analysis for [sprint/release]. Total defects found: [X]. By severity: Critical [X], High [X], Medium [X], Low [X]. Defect origins: [by feature area or component]. Defects found in: [dev / QA / UAT / production]. Defect trends vs. prior release: [+/-X%]. Key observations: [what the data tells us about code quality, test coverage, or process gaps]. Recommendations: [1-3 actionable improvements].
Test Automation
Prompt 15 — Write a Playwright test script
Write a Playwright (TypeScript) test for the following scenario: [describe the user action and expected outcome]. Cover: navigating to [URL], performing [actions], asserting [expected state]. Use: page.goto(), page.click(), page.fill(), expect(). Include: a descriptive test name, setup/teardown if needed, and a comment explaining what's being validated. Follow Playwright best practices — avoid hard-coded waits.
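For reference, here's a minimal sketch of what a good response looks like. The URL and data-testid selectors are placeholders for a hypothetical contact form, not a real app.

```typescript
import { test, expect } from '@playwright/test';

test('user can submit the contact form and sees a confirmation', async ({ page }) => {
  await page.goto('https://staging.example.com/contact');

  await page.fill('[data-testid="contact-name"]', 'Test User');
  await page.fill('[data-testid="contact-email"]', 'qa@example.com');
  await page.click('[data-testid="contact-submit"]');

  // Validates the submission end to end: the confirmation banner renders
  // and the form is gone. Playwright's expect() auto-retries, so no
  // hard-coded waits are needed.
  await expect(page.locator('[data-testid="confirmation-banner"]')).toBeVisible();
  await expect(page.locator('[data-testid="contact-form"]')).toBeHidden();
});
```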
Prompt 16 — Write a Cypress test
Write a Cypress test for: [describe the scenario]. Include: cy.visit(), cy.get() with data-testid selectors (preferred), cy.contains(), cy.should() assertions. Cover: the happy path with at least 2 assertions, one negative case, and a check on the final state. Include a describe block and it block with clear names. Add comments for any non-obvious assertions.
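A minimal sketch of the expected output, assuming a hypothetical login flow (and a baseUrl set in cypress.config); selectors and copy are placeholders:

```typescript
describe('Login', () => {
  it('logs in with valid credentials and shows the dashboard', () => {
    cy.visit('/login');
    cy.get('[data-testid="email-input"]').type('qa@example.com');
    cy.get('[data-testid="password-input"]').type('correct-password');
    cy.get('[data-testid="login-submit"]').click();

    // Two happy-path assertions: the URL changed and the greeting rendered.
    cy.url().should('include', '/dashboard');
    cy.contains('Welcome back').should('be.visible');
  });

  it('shows an error for a wrong password', () => {
    cy.visit('/login');
    cy.get('[data-testid="email-input"]').type('qa@example.com');
    cy.get('[data-testid="password-input"]').type('wrong-password');
    cy.get('[data-testid="login-submit"]').click();

    // Negative case: the error is visible and we never left the login page.
    cy.get('[data-testid="login-error"]').should('be.visible');
    cy.url().should('include', '/login');
  });
});
```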
Prompt 17 — Write API automation with Jest
Write a Jest test suite for the [endpoint] API endpoint. Import: axios or fetch. Test cases to include: [list scenarios]. For each test: call the endpoint with the test input, assert the status code, assert key response fields, and clean up test data if needed. Use beforeAll/afterAll for setup and teardown. Include one positive and one negative test at minimum.
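For reference, a minimal sketch of the structure to expect, assuming a hypothetical POST /api/orders endpoint. The base URL, payloads, and cleanup route are assumptions to swap for your own.

```typescript
import axios from 'axios';

const api = axios.create({
  baseURL: 'https://staging.example.com',
  validateStatus: () => true, // assert on status codes explicitly
});

let createdOrderId: string | undefined;

describe('POST /api/orders', () => {
  afterAll(async () => {
    // Teardown: remove any order the positive test created.
    if (createdOrderId) {
      await api.delete(`/api/orders/${createdOrderId}`);
    }
  });

  test('creates an order with a valid payload', async () => {
    const res = await api.post('/api/orders', { sku: 'ABC-123', quantity: 2 });
    expect(res.status).toBe(201);
    expect(res.data).toHaveProperty('id');
    expect(res.data.quantity).toBe(2);
    createdOrderId = res.data.id;
  });

  test('rejects a missing sku with 400', async () => {
    const res = await api.post('/api/orders', { quantity: 2 });
    expect(res.status).toBe(400);
  });
});
```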
Prompt 18 — Write automation framework documentation
Write documentation for our [framework name] automation framework. Cover: purpose and scope, tech stack (language, test runner, libraries), folder structure overview, how to set up locally (prerequisites, install, config), how to run tests (full suite, specific file, tag-based), how to add a new test, naming conventions, and how to interpret test results. Audience: new QA engineer joining the team.
Prompt 19 — Review automation code for quality
Review the following automation test code for quality: [paste code]. Check for: hard-coded test data that should be externalized, selectors that are likely to be brittle (class names, XPath), missing assertions, lack of error handling, tests that depend on order of execution, magic numbers/strings, missing descriptive names, and anything that will make this test fail randomly (flaky tests). Suggest specific improvements.
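To make the review concrete, here's a hypothetical before-and-after covering two of those checks (a layout-coupled selector and a hard-coded wait), in Playwright-style TypeScript. The fixtures module is invented for illustration.

```typescript
import { test, expect } from '@playwright/test';
import { testUsers } from './fixtures/users'; // hypothetical fixture: externalizes test data

// Brittle original (what the review should flag):
//   await page.click('//div[2]/form/button[@class="btn btn-primary"]'); // layout-coupled XPath
//   await page.waitForTimeout(5000);                                    // fixed sleep = flake

test('saving a profile shows a success toast', async ({ page }) => {
  await page.goto('/profile'); // assumes baseURL in playwright.config
  await page.fill('[data-testid="display-name"]', testUsers.standard.name);
  await page.click('[data-testid="save-profile"]');
  // Auto-retrying assertion replaces the fixed sleep and verifies the
  // actual outcome, not just that a click happened.
  await expect(page.locator('[data-testid="toast-success"]')).toBeVisible();
});
```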
Stakeholder Communication
Prompt 20 — Write a test summary report
Write a test summary report for [release/sprint]. Testing period: [dates]. Test cases executed: [X] of [X planned]. Pass rate: [X%]. Defects found: Critical [X], High [X], Medium [X], Low [X]. Defects resolved: [X]. Open defects blocking release: [list]. Test coverage: [areas tested]. Risk assessment: [is this release ready?]. Recommendation: [ship / hold / conditional ship with known issues]. Format for product and engineering leadership.
Prompt 21 — Write a quality risk assessment
Write a quality risk assessment for releasing [feature/product] on [date]. Coverage gaps: [areas not fully tested]. Known open defects: [list with severity]. Technical risks: [any architectural or integration concerns]. Testing assumptions that may not hold: [list]. Recommendation: [go / no-go / conditional go with mitigation]. This document records that stakeholders were informed of risks before the release decision was made.
Prompt 22 — Write a sprint QA status update
Write a daily/weekly QA status update for sprint [X]. Testing progress: [X of X test cases completed]. Defects: [X new, X resolved, X open]. Blocking issues: [list]. Risk to sprint goal: [High/Medium/Low] because [reason]. What I need from the team: [any blockers requiring action]. Planned for next [day/week]: [list]. Keep it under 200 words — this goes to the Slack channel.
Prompt 23 — Explain a defect to a developer
I need to communicate clearly about a defect to a developer who thinks it's not reproducible. The bug: [describe]. My evidence: [what I have — steps, logs, screenshots, environment]. Write a clear, technical explanation that: describes exactly what I did, what I observed, and what I expected; provides all the information needed to reproduce it independently; and suggests possible causes to investigate. Collaborative tone — we're solving this together.
Prompt 24 — Explain testing trade-offs to a PM
A product manager wants to skip [specific type of testing] to ship faster. Write a clear explanation of: what risk we're accepting by skipping it, what could go wrong in production, how we could mitigate the risk if we do skip it (smoke test, feature flag, canary release), and what my recommendation is. Help them make an informed decision — not just a "no."
Process and Documentation
Prompt 25 — Write a QA process onboarding guide
Write a QA onboarding guide for a new engineer joining our team. Cover: our testing philosophy, the tools we use and how to access them, our test case management system (how to find and run tests), our defect tracking process (how to file, prioritize, and track bugs), how we work with developers and PMs, our definition of done for QA, and the most important things to know in the first 30 days.
Prompt 26 — Write a testing standards document
Write a QA testing standards document for our team. Cover: test case quality standards (what makes a good test case), defect severity and priority definitions (with examples), naming conventions for test cases and defects, required fields for test case and bug documentation, our approach to test data management, and standards for automation code quality. Format as a team reference document.
Prompt 27 — Write a lessons-learned document after a production bug
Write a QA lessons-learned document after the following production incident: [describe bug and impact]. Timeline: [when it was introduced, when found in production]. Why QA didn't catch it: [honest analysis — test coverage gap, edge case not considered, environment difference, etc.]. What we're changing: [coverage additions, process improvements, tooling]. How we'll prevent this type of issue going forward: [specific actions]. Blameless tone.
Prompt 28 — Write test data requirements
Write test data requirements for [feature or system]. Data scenarios needed: [list — new user, returning user, edge case accounts, etc.]. Sensitive data handling: [how we handle PII in test environments]. Data setup method: [seed scripts, anonymized prod copy, manual entry]. Data teardown: [how test data is cleaned up]. Dependencies: [any systems that need to provide data]. Data refresh cadence: [when test data needs to be reset].
Professional Development
Prompt 29 — Prepare for a QA job interview
I'm interviewing for a QA Engineer role at [company type]. The role focuses on [area — manual / automation / full-stack QA / SDET]. Prepare me for: behavioral questions (give me the STAR structure), technical QA questions (test case design, defect lifecycle, automation), system design questions for QA, and a testing scenario question. For each category, give me the key concepts to demonstrate and one example response structure.
Prompt 30 — Write a QA career growth self-assessment
Write a QA engineer self-assessment for my performance review. My current skills: [list — manual testing, API testing, automation, tools]. Projects this year: [list]. Growth areas I want to address: [list]. My goal: [where I want to be in 12 months — SDET, QA lead, etc.]. Help me articulate my accomplishments in terms of quality impact delivered, not just tasks completed, and frame my development goals in terms of business value.
Prompt 31 — Write a test automation proposal
Write a proposal to introduce [test automation framework] at our organization. Current state: [manual testing only / partial automation / other]. Proposed investment: [tool, setup time, maintenance]. Expected benefits: [faster regression cycles, reduced manual effort, earlier defect detection]. ROI estimate: [time saved per sprint × sprints per year]. Risks and mitigation: [list]. Ask: [what you need approved — time, budget, training]. Format for engineering leadership.
Prompt 32 — Write a retrospective item about QA
Write a constructive retrospective item about a QA-related issue from this sprint: [describe what happened — e.g., late test start, flaky tests delaying CI, poor bug descriptions]. Format as: what happened, impact on the team, root cause, proposed improvement for next sprint, and who owns the action item. Blameless, specific, actionable.
Prompt 33 — Summarize a testing conference talk
I attended a testing conference talk on [topic]. Notes: [paste]. Summarize the 5 most actionable takeaways for a QA engineer working in [context — agile startup / enterprise / mobile / API-heavy]. For each takeaway: what it is, why it matters, and one specific thing I can do differently starting next sprint.
Prompt 34 — Write a QA team charter
Write a QA team charter for our [X]-person QA team. Cover: team mission (why we exist), scope of responsibility, what success looks like (metrics), how we work with engineering and product, our quality standards, decision-making authority (what QA owns vs. shared with dev), and our norms for communication and escalation. This document should help any new team member understand what QA means at our company.
Prompt 35 — Write a quality metrics dashboard narrative
Write a narrative summary for our quality metrics dashboard. Metrics to interpret: [list — defect escape rate, test coverage %, automation coverage %, mean time to detect, pass rate trend]. Current values: [paste numbers]. Trend: [improving/declining/flat]. Key insight: [what the data says about our quality posture]. Recommendation: [one thing leadership should focus on or fund]. Under 200 words. For a VP Engineering audience.
Getting the Most From These Prompts
Provide feature and system context. These prompts need your product knowledge — what the feature does, who uses it, what the risks are. Generic input produces generic output. The more context you add, the more targeted the test cases and reports.
Use for first drafts and structure. ChatGPT is excellent at test plan structure and defect report formatting. Your expertise is what makes test cases accurate, comprehensive, and actually useful for finding bugs.
Review automation code carefully. Generated automation code compiles but may not be robust. Review for flakiness, brittle selectors, and missing edge cases before adding to your suite.
The Complete QA Engineer AI Toolkit
These 35 prompts cover the day-to-day QA workflow. If you want the full system — advanced automation prompt sets by framework, defect management templates, test strategy frameworks for different product types, and a complete QA process documentation library — the QA Engineer AI Toolkit has everything organized in one place.
Get the QA Engineer AI Toolkit →
Bookmark this page. Share it with your QA team. Start with one prompt on your next test plan — you'll ship better quality, faster.