Why use an AI assistant
"AI won’t replace humans. But humans who use AI will replace those who don’t." - Sam Altman
Test automation is a perfect domain for AI augmentation. Much of the early work—boilerplate code, fixtures, selectors, test skeletons—follows predictable patterns. AI assistants can generate these patterns quickly, suggest test flows, and help debug issues, especially when given clear context.
GitHub Copilot’s ecosystem is built for this. With Instruction files, you can define your architecture and coding rules. With Skills, you can give Copilot new capabilities. With Plugins, you can extend Copilot into domain‑specific workflows like Playwright automation.
The engineer still owns the architecture, validations, and edge cases. The AI accelerates the repetitive parts.
How I used the AI assistant
Working with an AI assistant is like onboarding a new team member. Even the smartest assistant doesn’t automatically understand your project or expectations. You have to teach it how you work.
Step 1 — Define the rules with Instruction Files
I created Instruction files that described:
- Architecture and folder structure
- Naming conventions
- Preferred libraries
- Locator strategy
- Coding style
- Test design principles
- Quality standards

These Instruction files became the “brain” the assistant used to generate consistent code.
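To make this concrete, here is a trimmed excerpt of what such a file can look like. The path and the specific rules below are illustrative examples, not the exact contents of my project:

```markdown
<!-- .github/copilot-instructions.md (illustrative excerpt) -->
# Project conventions

## Architecture
- Use the Page Object Model: one class per page under `pages/`.
- Tests live under `tests/` and contain assertions only, no raw selectors.

## Locator strategy
- Prefer user-facing locators (`get_by_role`, `get_by_label`) over CSS chains.
- Never use XPath or generated class names.

## Style
- Python, Pytest fixtures for setup and teardown, snake_case names.
```

Copilot reads these rules alongside every prompt, which is what keeps its output aligned with the framework instead of generic boilerplate.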
Step 2 — Equip the assistant with Skills and Plugins
To make the AI genuinely useful for Playwright automation, I added:
- Playwright CLI Skills: These give Copilot real browser access, DOM exploration, and the ability to inspect selectors.
- Testing Automation Plugin: This plugin adds Playwright‑specific workflows, structured test generation, and UI exploration capabilities.
- QA Sub‑Agent: A specialized reasoning model tuned for QA tasks.
With these components, I effectively built a custom AI QA automation stack:
• QA Sub‑Agent → QA‑focused reasoning
• Testing Automation Plugin → Playwright engine
• Playwright CLI Skills → Browser tools
• Instruction Files → My rules and architecture
At this point, the assistant behaved like a junior QA engineer who already understood my framework.
Example test generation
Here is the kind of prompt I gave the AI assistant:
Use the instruction files and explore the website https://www.target.com/, then generate the test for the workflow below
- Search for the product "AIWA ARC Noise Cancelling Over Ear Wireless Headphones"
- Add to cart (via Choose Options dialog)
- Verify cart
The assistant explored the site, followed the instructions, and produced a test using the Page Object Model, Playwright’s recommended selectors, and Pytest fixtures. It wasn’t a final product: I still reviewed and refined it. But it gave me a strong first draft in minutes instead of hours.
What worked well
Speed
The biggest win was how quickly I moved from an empty folder to a running test suite. The assistant handled repetitive scaffolding so I could focus on architecture and quality.
Consistency
Because I defined my conventions upfront, the assistant generated code that matched my structure, naming patterns, and locator strategy. This aligns with Copilot’s best practices for maintaining a consistent codebase.
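A base class is one simple way to encode that kind of convention so every generated Page Object inherits it. This is an illustrative sketch, not my full implementation:

```python
# pages/base_page.py -- illustrative sketch of a shared base-class convention.
class BasePage:
    def __init__(self, page, base_url: str = "https://www.target.com"):
        self.page = page
        self.base_url = base_url

    def open(self, path: str = "/") -> None:
        # Every Page Object navigates the same way, so tests never build URLs.
        self.page.goto(f"{self.base_url}{path}")
```

Once a pattern like this lives in the codebase and the Instruction files, the assistant extends it instead of inventing a new navigation style per file.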
Reduced Cognitive Load
Instead of switching constantly between writing fixtures, designing POMs, and drafting tests, I could offload the mechanical parts and stay focused on higher‑level decisions.
What to Watch Out For
AI‑generated code still requires human review. Common pitfalls include:
- Brittle or overly specific selectors
- Missing edge cases
- Incorrect assumptions about application behavior
- Over‑generalized test flows when prompts are vague

The more specific the prompt, the better the output. Treat the AI as a fast junior engineer: helpful, but not a substitute for expertise.
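For the brittle-selector pitfall, even a crude heuristic can flag the worst offenders before human review. This is my own illustrative sketch, not a Copilot feature, and the patterns are examples rather than an exhaustive list:

```python
import re

# Review aid: flag selector patterns that tend to break when the DOM
# structure or the build's generated class names change.
BRITTLE_PATTERNS = [
    r":nth-child\(",       # position-dependent, breaks on reordering
    r"css-[0-9a-z]{4,}",   # hashed/generated class names
    r"^/html/",            # absolute XPath tied to full page structure
]

def looks_brittle(selector: str) -> bool:
    return any(re.search(p, selector) for p in BRITTLE_PATTERNS)
```

Run it over the selectors in an AI-generated draft and anything it flags is a candidate for a role-based or `data-test` locator instead.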
Conclusion
Using GitHub Copilot’s Instruction files, Skills, and Plugins to build a Playwright + Pytest automation framework is not about replacing engineers; it is about amplifying them. By combining AI‑generated scaffolding with human review, Playwright best practices, and thoughtful test design, you can dramatically accelerate framework setup without sacrificing quality.

For teams looking to scale automation quickly, this hybrid workflow turns what used to be a slow, manual process into a streamlined, efficient one. It’s a modern, practical approach to building reliable UI automation at speed.