<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aykut Denizci</title>
    <description>The latest articles on DEV Community by Aykut Denizci (@aykutde96).</description>
    <link>https://dev.to/aykutde96</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3628918%2F83d3318f-e7cc-43b2-ba16-8f12f434a1af.jpeg</url>
      <title>DEV Community: Aykut Denizci</title>
      <link>https://dev.to/aykutde96</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aykutde96"/>
    <language>en</language>
    <item>
      <title>Configuring Playwright MCP Like a Pro: Custom Headers, Cookies, and Smarter Agents</title>
      <dc:creator>Aykut Denizci</dc:creator>
      <pubDate>Wed, 26 Nov 2025 05:40:42 +0000</pubDate>
      <link>https://dev.to/aykutde96/configuring-playwright-mcp-like-a-pro-custom-headers-cookies-and-smarter-agents-237g</link>
      <guid>https://dev.to/aykutde96/configuring-playwright-mcp-like-a-pro-custom-headers-cookies-and-smarter-agents-237g</guid>
      <description>&lt;p&gt;How can you use Playwright MCP more effectively? How can you handle login scenarios? And how can you run Playwright MCP using your own browser profile? I hope this article helps answer questions like these :) Enjoy reading!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo59z94zjws5iqzgqoxnd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo59z94zjws5iqzgqoxnd.png" alt="Ai Generated Image" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We use Playwright MCP, but are we really using it efficiently? Does it cover all our cases? After finding myself thinking things like “It doesn’t support this” or “I don’t think it can do that,” I realized the real issue was that I wasn’t configuring it correctly. With proper configuration, I discovered that it can actually solve all of my problems. In this article, we’ll look at how to use MCP more effectively through the configurations we can provide. (You can find even more details in the &lt;a href="https://github.com/microsoft/playwright-mcp?tab=readme-ov-file#configuration" rel="noopener noreferrer"&gt;&lt;strong&gt;link&lt;/strong&gt;&lt;/a&gt;.)&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Config
&lt;/h4&gt;

&lt;p&gt;With a config.json file you create, you can provide most of the settings you normally define inside playwright.config.ts.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Browser args&lt;/li&gt;
&lt;li&gt;Extra HTTP headers&lt;/li&gt;
&lt;li&gt;Viewport size&lt;/li&gt;
&lt;li&gt;Permissions&lt;/li&gt;
&lt;li&gt;Bypassing CSP&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These and similar configurations can all be added through this JSON file.&lt;/p&gt;

&lt;p&gt;With these settings, MCP can also help you handle cases where the page behaves differently based on specific headers you pass. This allows the agents to analyze the page with those headers applied. An example config file would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "browser": {
    "launchOptions": {
      "args": [
        "--ignore-certificate-errors"
      ]
    },
    "contextOptions": {
      "extraHTTPHeaders": {
        "test-header": "true"
      },
      "permissions": ["geolocation"],
      "bypassCSP": true
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You also need to provide the location of this file inside your mcp.json, under the args section of the Playwright MCP configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    "playwright": {
      "command": "npx",
      "args": [
        "@playwright/mcp@latest",
        "--config=path/to/playwright-mcp.config.json"
      ]
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2. Storage State
&lt;/h4&gt;

&lt;p&gt;If you need cookies for a full and accurate page analysis, this part is exactly what you need. Anyone using Playwright is familiar with the concept of &lt;em&gt;storage state&lt;/em&gt;, which saves your session’s cookie values and lets your tests run with them. The same applies to MCP: by creating a JSON file and providing your cookie values there, you can run MCP with the session you want.&lt;/p&gt;

&lt;p&gt;In my opinion, the biggest advantage is eliminating the login step. By supplying token-based cookies, you prevent MCP from wasting time on login flows and let it directly analyze authenticated pages. An example storage state JSON file might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "cookies": [
    {
      "name": "testCookie",
      "value": "testValue123",
      "domain": ".testDomain.com",
      "path": "/",
      "expires": -1,
      "httpOnly": false,
      "secure": true,
      "sameSite": "Lax"
    },
    {
      "name": "token",
      "value": "testToken123.",
      "domain": ".testDomain.com",
      "path": "/",
      "expires": -1,
      "httpOnly": false,
      "secure": true,
      "sameSite": "Lax"
    }
  ],
  "origins": []
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
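&lt;p&gt;If you generate this file from a script instead of writing it by hand, a small helper keeps the entries consistent. The sketch below is my own illustration (the cookie names and domain are placeholders matching the example above), not part of Playwright MCP itself:&lt;/p&gt;

```javascript
// Sketch: build cookie entries in the storage-state shape shown above.
// Field defaults mirror the example; cookie names and domain are placeholders.
function makeCookie(name, value, domain, overrides = {}) {
  return {
    name,
    value,
    domain,
    path: '/',
    expires: -1,      // -1 marks a session cookie with no fixed expiry
    httpOnly: false,
    secure: true,
    sameSite: 'Lax',
    ...overrides,     // e.g. { httpOnly: true } for server-only cookies
  };
}

const storageState = {
  cookies: [
    makeCookie('testCookie', 'testValue123', '.testDomain.com'),
    makeCookie('token', 'testToken123', '.testDomain.com'),
  ],
  origins: [],
};
```

&lt;p&gt;You can then write the object to disk with fs.writeFileSync('storage-state.json', JSON.stringify(storageState, null, 2)) and point --storage-state at the result.&lt;/p&gt;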



&lt;p&gt;You’ll need to provide this file in the same place inside your mcp.json as well, just like the previous configuration. It should look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    "playwright": {
      "command": "npx",
      "args": [
        "@playwright/mcp@latest",
        "--storage-state=path/to/storage-state.json"
      ]
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  3. Init Script
&lt;/h4&gt;

&lt;p&gt;If you want MCP to run certain actions at the very beginning, or to show specific logs, alerts, or messages on the DOM under certain conditions, the init script will do exactly what you need. You can take a request’s curl command and have it executed at the start of the MCP run, or whenever the conditions you define are met. You can even customize it to display error messages directly on the UI, as shown in the example 😄&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1wb324we8p1mx8c0tpp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1wb324we8p1mx8c0tpp.png" alt="Init Script Error Message Example" width="510" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To use this, simply create an init-script.js file and provide its path in your mcp.json as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    "playwright": {
      "command": "npx",
      "args": [
        "@playwright/mcp@latest",
        "--isolated",
        "--init-script=path/to/init-script.js"
      ]
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
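&lt;p&gt;For reference, an init script might look like the sketch below. Everything in it is my own illustration, not from Playwright’s docs: the banner text, the styling, and the fetch-wrapping approach are all assumptions. It simply shows the kind of page-level hook an init script gives you, since it runs in every new page before the site’s own scripts:&lt;/p&gt;

```javascript
// Hypothetical init-script.js. The formatBanner helper builds the message;
// the browser-only block wraps window.fetch and surfaces failed requests
// (status >= 400) as a fixed banner on the DOM. All names are illustrative.
function formatBanner(url, status) {
  return `init-script: request failed with status ${status} for ${url}`;
}

if (typeof window !== 'undefined') {
  const origFetch = window.fetch.bind(window);
  window.fetch = async (...args) => {
    const res = await origFetch(...args);
    if (res.status >= 400) {
      const banner = document.createElement('div');
      banner.textContent = formatBanner(res.url, res.status);
      banner.style.cssText =
        'position:fixed;top:0;left:0;right:0;padding:8px;' +
        'background:#c0392b;color:#fff;z-index:99999';
      document.body.appendChild(banner);
    }
    return res;
  };
}
```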



&lt;h4&gt;
  
  
  4. Extension
&lt;/h4&gt;

&lt;p&gt;If you want MCP to run with your own browser settings, using your personal browser profile, this part is exactly what you need. After downloading and installing the extension (as shown in the &lt;a href="https://github.com/microsoft/playwright-mcp/blob/main/extension/README.md" rel="noopener noreferrer"&gt;&lt;strong&gt;link&lt;/strong&gt;&lt;/a&gt;), you can run your MCP tests on your own browser profile by adding the following configuration to your mcp.json.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    "playwright": {
      "command": "npx",
      "args": [
        "@playwright/mcp@latest",
        "--extension"
      ]
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After completing the setup, when you run MCP, it will launch a browser using your profile and ask for permission to access it. Once you approve that prompt, it will continue running with your own browser settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo75inygymcm4rlnxm6s4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo75inygymcm4rlnxm6s4.png" alt="Playwright Extension Permission" width="800" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/debbie-obrien/" rel="noopener noreferrer"&gt;Debbie O'Brien&lt;/a&gt; also has a great video about this extension on YouTube. I definitely recommend watching it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/uE0r51pneSA?si=opgO-bCizppC7uxK" rel="noopener noreferrer"&gt;https://youtu.be/uE0r51pneSA?si=opgO-bCizppC7uxK&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  5. Other Settings
&lt;/h4&gt;

&lt;p&gt;In addition to the features mentioned above, there are a few more settings that can be useful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Isolated:&lt;/strong&gt; Ensures that each test session starts in an isolated browser profile. Use --isolated.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Device:&lt;/strong&gt; Emulates different devices while MCP runs tests and analyzes pages. Example: --device=iPhone 15.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Headless:&lt;/strong&gt; If you don’t want a browser window opening while MCP works in the background, add --headless to your mcp.json.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally, you can also configure agents to perform the actions you want. The Playwright MCP server allows you to run JavaScript evaluate commands. For example, you can add a step like the following to the Planner agent’s MD file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;When you have finished manually testing, add a thick red border around the specific areas that have been tested&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
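&lt;p&gt;Under the hood, the agent can satisfy that instruction with a JavaScript evaluate call along these lines. This is a sketch of my own, not the exact code the agent produces; the selector list is hypothetical:&lt;/p&gt;

```javascript
// Sketch: outline the areas an agent has just tested with a thick red border.
// `doc` defaults to the page's document in a browser; passing it explicitly
// keeps the helper testable outside the DOM.
function outlineTested(selectors, doc = typeof document !== 'undefined' ? document : null) {
  let outlined = 0;
  if (!doc) return outlined; // no DOM available
  for (const sel of selectors) {
    for (const el of doc.querySelectorAll(sel)) {
      el.style.border = '4px solid red'; // the "thick red border" from the step
      outlined += 1;
    }
  }
  return outlined; // how many elements were marked
}
```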

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fac4gqdxt2aezadzek4ph.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fac4gqdxt2aezadzek4ph.png" alt="Playwright MCP JS Evaluate" width="800" height="927"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the end of the analysis, the browser view looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6hru52th3fbaoqyoy6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6hru52th3fbaoqyoy6u.png" alt="Result of the test page" width="800" height="134"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/debbie-obrien/" rel="noopener noreferrer"&gt;Debbie O'Brien&lt;/a&gt; also has a great video on this topic, where you can learn more details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/n0CFmm38o4Y?si=FxV00dNx1SHB_CJQ" rel="noopener noreferrer"&gt;https://youtu.be/n0CFmm38o4Y?si=FxV00dNx1SHB_CJQ&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you for reading this far. I hope you found it helpful 🙏&lt;/p&gt;

</description>
      <category>devtools</category>
      <category>webdev</category>
      <category>softwaretesting</category>
      <category>testautomation</category>
    </item>
    <item>
      <title>Playwright Agents</title>
      <dc:creator>Aykut Denizci</dc:creator>
      <pubDate>Fri, 14 Nov 2025 11:09:59 +0000</pubDate>
      <link>https://dev.to/aykutde96/playwright-agents-3if</link>
      <guid>https://dev.to/aykutde96/playwright-agents-3if</guid>
      <description>&lt;p&gt;In this article, I’ll talk about the Playwright agents introduced with Playwright version 1.56 and walk through the concept with examples.&lt;br&gt;&lt;br&gt;
Playwright 1.56 brings three types of agents into our workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Planner&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Generator&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Healer&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumnr6ryny828otdiv80w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumnr6ryny828otdiv80w.png" alt="Ai Generated Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To automatically generate these agents, you can run one of the following commands depending on the AI-powered editor you’re using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Visual Studio Code -&amp;gt; Works for Cursor as well
npx playwright init-agents --loop=vscode
# Claude Code
npx playwright init-agents --loop=claude
# Opencode
npx playwright init-agents --loop=opencode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Cursor users, you can convert the generated file into MDC format and move it under &lt;em&gt;.cursor/rules&lt;/em&gt; to continue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak3eoy1sdhxmma2ak60z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak3eoy1sdhxmma2ak60z.png" alt="Agents"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Planner&lt;/strong&gt; visits the page you need to test, analyzes it, and generates test cases in Markdown format. We will try these cases on Trendyol’s cart page; for that, we want it to add an item to the cart and navigate to the cart before starting the analysis.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F664%2F1%2Al1Fx1A3x9C2KmNl6trqjSA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F664%2F1%2Al1Fx1A3x9C2KmNl6trqjSA.png" alt="Planner Agent"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After analyzing the page, it creates the test plan in .md format, closes the browser, and completes the process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgw3g49h65bfr528yjgu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgw3g49h65bfr528yjgu.png" alt="Planner Agent Result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When reviewing the generated test plan, we can clearly see that it produces scenarios we actually need. It doesn’t just think functionally: when I tested it on a login page, it even generated security-related cases (such as XSS vulnerabilities and CSRF protection).&lt;/p&gt;

&lt;p&gt;Now, let’s take one of the cases created by the Planner and try to generate the test using the &lt;strong&gt;Generator&lt;/strong&gt; agent. For this example, I’m choosing the case that increases the product quantity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1v8spkkgcuujo17nvwk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1v8spkkgcuujo17nvwk.png" alt="Generator Agent"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fealmnziesoxty542nwsf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fealmnziesoxty542nwsf.png" alt="Generator Agent Result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The test was successfully created. It automatically adds the necessary commands I normally use in my basket tests (such as login and adding a random product to the cart) inside the beforeEach, following the exact structure I use in my other spec files. It also generates the test in the same style I use, for example by waiting for responses instead of relying on static waits. This means I don’t have to refactor the test to match my existing code standards.&lt;/p&gt;
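&lt;p&gt;That response-waiting pattern looks roughly like the sketch below. The endpoint fragment and test id are placeholders I made up, not Trendyol’s real ones:&lt;/p&gt;

```javascript
// Sketch of the "wait for the response, not a fixed timeout" style.
// The URL fragment and status check are hypothetical placeholders.
const isCartUpdate = (res) =>
  res.url().includes('/cart/update') && res.status() === 200;

// Inside a Playwright test the predicate would be used like:
//   const responsePromise = page.waitForResponse(isCartUpdate);
//   await page.getByTestId('quantity-increase').click();
//   await responsePromise; // proceed only once the cart really updated
```

&lt;p&gt;page.waitForResponse accepts exactly this kind of predicate, which is what makes the generated tests robust without static waits.&lt;/p&gt;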

&lt;p&gt;When I ran this test across the six projects defined in my Playwright config, five passed and one failed. To fix this, we’ll use the &lt;strong&gt;Healer&lt;/strong&gt; agent.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58p2mg29kfv7sciucfpx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58p2mg29kfv7sciucfpx.png" alt="Healer Agent"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once enabled, the Healer runs the tests across all projects and identifies the flaky parts; if it doesn’t catch the issue on the first attempt, it retries the failing test several times to reproduce the flaky behavior.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5v6e41cqfix605ezteb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5v6e41cqfix605ezteb.png" alt="Healer Agent Finding Issues"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then it applies sensible fixes to the issues it finds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftflgt4ehcin4qgtiqv6k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftflgt4ehcin4qgtiqv6k.png" alt="Fixed Issues"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here is the result.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flog46igdwv5h9yln1i84.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flog46igdwv5h9yln1i84.png" alt="Result"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Playwright Agents are evolving beyond simple tools that execute test code; they now actively contribute to test analysis, maintenance, and even scenario generation. Each agent has its own strengths. For example, the &lt;strong&gt;Healer&lt;/strong&gt; re-runs failing steps, analyzes UI changes, proposes fixes, and retries tests multiple times to detect flaky behavior. The &lt;strong&gt;Planner&lt;/strong&gt; can generate user scenarios we might not even think of and provides highly sensible additional checks for existing tests. The &lt;strong&gt;Generator&lt;/strong&gt; analyzes our existing tests, adds necessary preconditions automatically, and produces new tests that match our coding style. This leads to more consistent test suites and significantly reduces the need for manual adjustments.&lt;/p&gt;

&lt;p&gt;With all these capabilities combined, I can confidently say that Playwright Agents bring real value to my testing process in terms of both speed and quality.&lt;/p&gt;

&lt;p&gt;For more details, you can read Playwright’s official documentation or watch the YouTube video they shared.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://playwright.dev/docs/test-agents" rel="noopener noreferrer"&gt;https://playwright.dev/docs/test-agents&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/HLegcP8qxVY"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>playwright</category>
      <category>testautomation</category>
      <category>mcp</category>
      <category>softwaretesting</category>
    </item>
    <item>
      <title>Sharding in Playwright: Speeding Up Your Test Suites and CI Pipelines</title>
      <dc:creator>Aykut Denizci</dc:creator>
      <pubDate>Mon, 03 Nov 2025 10:07:41 +0000</pubDate>
      <link>https://dev.to/aykutde96/sharding-in-playwright-speeding-up-your-test-suites-and-ci-pipelines-303k</link>
      <guid>https://dev.to/aykutde96/sharding-in-playwright-speeding-up-your-test-suites-and-ci-pipelines-303k</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwwvzykg6tue3war9aqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwwvzykg6tue3war9aqn.png" alt="Ai Generated Image For Sharding Expression" width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, I’ll talk about why using &lt;strong&gt;sharding&lt;/strong&gt; in Playwright automation projects is so important, how it affects your test durations, and how you can use it effectively based on your project needs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://playwright.dev/docs/test-sharding" rel="noopener noreferrer"&gt;By default, &lt;strong&gt;Playwright&lt;/strong&gt; runs test files in parallel and strives for optimal utilization of CPU cores on your machine.&lt;br&gt;&lt;br&gt;
To achieve even greater parallelization, you can further scale Playwright test execution by running tests on multiple machines simultaneously — a process Playwright refers to as &lt;strong&gt;“sharding.”&lt;/strong&gt; &lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Without sharding, all tests are distributed among the workers within a single pod.&lt;br&gt;&lt;br&gt;
When sharding is enabled, the test suite is divided into multiple shards, and each shard runs its own workers on separate pods.&lt;br&gt;&lt;br&gt;
This allows true multi pod parallelism.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Workers&lt;/strong&gt; → parallel test executors within the same pod&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shards&lt;/strong&gt; → distribute the test suite across multiple pods&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To enable sharding, simply add the following option to the end of your test command.&lt;br&gt;&lt;br&gt;
Here, x represents the active shard index, and y represents the total number of shards:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--shard=x/y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you’re using sharding, the &lt;em&gt;fullyParallel&lt;/em&gt; parameter becomes even more important.&lt;br&gt;&lt;br&gt;
When this parameter is set to true, all your tests are divided and executed across the shards.&lt;br&gt;&lt;br&gt;
If it’s set to false, the distribution happens on a file basis — meaning entire test files are assigned to shards rather than individual tests.&lt;/p&gt;

&lt;p&gt;However, there’s an important nuance here: even when &lt;em&gt;fullyParallel&lt;/em&gt; is set to false, &lt;strong&gt;the same test file can still run on different shards if it’s executed under multiple projects defined in your Playwright configuration&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
For example, a &lt;em&gt;basket1.spec.ts&lt;/em&gt; file might run on one shard for the &lt;em&gt;project1&lt;/em&gt; project and on another shard for the &lt;em&gt;project2&lt;/em&gt; project.&lt;/p&gt;

&lt;p&gt;You can add sharding to your YAML configuration as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;parallel: 4
  script:
    - npx playwright test --shard=$CI_NODE_INDEX/$CI_NODE_TOTAL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Tip: If you’re using npm run or yarn run, you need to add an extra -- before your Playwright arguments.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm run test -- --shard=1/4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In our automation project, we built a &lt;strong&gt;login system&lt;/strong&gt; that assigns different users to each worker within a spec file to prevent them from interfering with one another.&lt;br&gt;&lt;br&gt;
However, this setup introduced &lt;strong&gt;flakiness&lt;/strong&gt; in our tests when running with Playwright’s &lt;em&gt;fullyParallel&lt;/em&gt; parameter — both in true and false modes.&lt;/p&gt;

&lt;p&gt;To overcome this issue, we started &lt;strong&gt;manually assigning spec files to specific shards&lt;/strong&gt;, giving us full control over how tests are distributed.&lt;br&gt;&lt;br&gt;
If you’re facing a similar problem, you can configure your YAML file as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;parallel:
    matrix:
      - SPEC_FILE: "basket1.spec.ts basket2.spec.ts"
      - SPEC_FILE: "basket3.spec.ts checkout.spec.ts checkout2.spec.ts"
      - SPEC_FILE: "checkout3.spec.ts basket4.spec.ts"
      - SPEC_FILE: "checkout4.spec.ts"  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When doing so, it’s important to keep &lt;strong&gt;the completion times of shards as close as possible&lt;/strong&gt; to optimize the overall pipeline duration.&lt;br&gt;&lt;br&gt;
If one spec file takes significantly longer than others, consider splitting it into multiple smaller specs and assigning them to different shards — this can lead to a noticeable time gain.&lt;/p&gt;

&lt;p&gt;For example, in our project, the checkout4.spec.ts file used to take much longer than other specs.&lt;br&gt;&lt;br&gt;
By splitting it into two separate specs and assigning them to different shards, we were able to &lt;strong&gt;significantly reduce the total test duration.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6hz436xph2tm0jhlsha.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6hz436xph2tm0jhlsha.png" alt="Test Step Before Sharding" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvju8p911i9x2awo3urje.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvju8p911i9x2awo3urje.png" alt="Test Step After Sharding" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After completing the sharding setup, the next important step is &lt;strong&gt;merging the reports&lt;/strong&gt; generated by the shards.&lt;br&gt;&lt;br&gt;
To merge the reports, you need to set the &lt;strong&gt;reporter type&lt;/strong&gt; in your Playwright configuration to &lt;strong&gt;blob&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The blob reporter is mergeable and can be converted into other reporting formats. It can include &lt;strong&gt;screenshots, traces, and other attachments&lt;/strong&gt;, making it ideal for shard-based parallel test runs.&lt;/p&gt;

&lt;p&gt;You can add the following setting to your configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export default defineConfig({
  testDir: './tests',
  reporter: process.env.CI ? 'blob' : 'html',
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To merge your reports, you can run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx playwright merge-reports --reporter html ./all-blob-reports
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, the --reporter parameter specifies &lt;strong&gt;which format the blob reports will be converted to&lt;/strong&gt;, and the path at the end points to the folder containing the &lt;strong&gt;blob reports&lt;/strong&gt; to merge.&lt;/p&gt;

&lt;p&gt;Additionally, if you have multiple reporting configurations or want to generate multiple report types from blob reports, you can create a separate config file and pass it to the merge-reports command.&lt;/p&gt;

&lt;p&gt;For example, in our project, we generate both &lt;strong&gt;HTML&lt;/strong&gt; and &lt;strong&gt;Allure&lt;/strong&gt; reports.&lt;br&gt;&lt;br&gt;
Here’s the merge.config.ts we use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export default {
  testDir: 'tests',
  reporter: [
    ['html', { open: 'never' }],
    ['allure-playwright', { outputFolder: 'allure-results' }],
    ['json', { outputFile: 'playwright-report/test-results.json' }],
  ],
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then use this config file in the merge step as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx playwright merge-reports --config merge.config.ts ./blob-report
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After completing the merging step, let’s look at &lt;strong&gt;how sharding improved our test durations&lt;/strong&gt; in the project.&lt;br&gt;&lt;br&gt;
In our automation project, we divide our tests into different schedules based on tags, for example schedule1, schedule2, and so on. The impact of our sharding structure on these schedules is shown in the table below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdatawrapper.dwcdn.net%2Fg5r04%2Ffull.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdatawrapper.dwcdn.net%2Fg5r04%2Ffull.png" alt="Impact of Sharding on Our Project (Table)" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for reading 🎭&lt;/p&gt;

</description>
      <category>playwright</category>
      <category>cicd</category>
      <category>softwaretesting</category>
      <category>testautomation</category>
    </item>
  </channel>
</rss>
