<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alperen Coşkun</title>
    <description>The latest articles on DEV Community by Alperen Coşkun (@muhendiskedibey).</description>
    <link>https://dev.to/muhendiskedibey</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1292911%2F7f77f92b-7b6a-4a7c-b5c7-d90b76d20b6f.jpeg</url>
      <title>DEV Community: Alperen Coşkun</title>
      <link>https://dev.to/muhendiskedibey</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/muhendiskedibey"/>
    <language>en</language>
    <item>
      <title>Is Playwright's sharding slowing you down? Meet "Pawdist"</title>
      <dc:creator>Alperen Coşkun</dc:creator>
      <pubDate>Thu, 11 Sep 2025 06:39:28 +0000</pubDate>
      <link>https://dev.to/muhendiskedibey/is-playwrights-sharding-slowing-you-down-meet-pawdist-iap</link>
      <guid>https://dev.to/muhendiskedibey/is-playwrights-sharding-slowing-you-down-meet-pawdist-iap</guid>
      <description>&lt;p&gt;If you've been using Playwright for a while on a large test suite, you've probably used the &lt;code&gt;--shard&lt;/code&gt; option to parallelize your tests across multiple machines or CI runners. At first, it seems like the perfect solution. But as your test suite grows, you start to notice a frustrating problem: some of your test runners finish in a reasonable amount of time, while others can take significantly longer. Ultimately, you're stuck waiting for the slowest one to complete.&lt;/p&gt;

&lt;p&gt;The main reason for this is how Playwright's sharding works: it's &lt;strong&gt;static&lt;/strong&gt;. It splits the test files into even chunks before the tests start and assigns each chunk to a specific machine or CI runner. For example, if you're sharding across 4 runners, it divides your tests into 4 predetermined groups.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Runner 1 runs the first quarter of tests
npx playwright test --shard=1/4

# Runner 2 runs the second quarter
npx playwright test --shard=2/4

# ...and so on
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach works perfectly if all your tests take exactly the same amount of time. But in the real world, that never happens. Some tests are short and simple, while others involve complex user flows and take much longer. This static division leads to a significant imbalance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Uneven Load Distribution:&lt;/strong&gt; One runner might get all the "easy" tests and finish early, sitting idle.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wasted Resources:&lt;/strong&gt; While one runner is idle, another is struggling with a long queue of "hard" tests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Longer Execution Times:&lt;/strong&gt; Your total test run time is dictated by your slowest shard, not the average.&lt;/li&gt;
&lt;/ul&gt;
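The imbalance above is easy to reproduce with a toy model. The sketch below (illustrative only: the durations are made up, and this is not Playwright's actual partitioning code) splits an ordered list of test durations into contiguous chunks, the way a static splitter that preserves file order would, and sums each shard's workload:

```typescript
// Hypothetical per-test durations in seconds, long tests first.
const durations = [25, 24, 22, 20, 3, 2, 2, 1];

// Split the ordered list into n contiguous chunks and return
// each chunk's total runtime.
function staticShardTimes(tests: number[], n: number): number[] {
  const size = Math.ceil(tests.length / n);
  const chunks = Array.from({ length: n }, (_, i) =>
    tests.slice(i * size, (i + 1) * size)
  );
  return chunks.map((chunk) => chunk.reduce((a, b) => a + b, 0));
}

const shardTimes = staticShardTimes(durations, 2);
console.log(shardTimes);              // [ 91, 8 ]
console.log(Math.max(...shardTimes)); // 91: the whole run takes this long
```

With the long tests clustered at the front, one shard carries 91 seconds of work while the other carries only 8, and the run is only as fast as the slower shard.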

&lt;h2&gt;A Real-World Example&lt;/h2&gt;

&lt;p&gt;To make this problem more concrete, let's look at a screenshot from a real GitLab CI pipeline where a suite of 53 tests was distributed across 4 runners.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbv9vqwxlfxf42ae09js.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbv9vqwxlfxf42ae09js.png" alt="GitLab CI pipeline example" width="559" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the test distribution is far from balanced:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Runner 1 (shard 1/4):&lt;/strong&gt; 46 seconds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runner 2 (shard 2/4):&lt;/strong&gt; 1 minute 45 seconds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runner 3 (shard 3/4):&lt;/strong&gt; 41 seconds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runner 4 (shard 4/4):&lt;/strong&gt; 1 minute 1 second&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The total execution time for our test suite is determined by the slowest runner, which is &lt;strong&gt;1 minute and 45 seconds&lt;/strong&gt;. Meanwhile, Runner 3 finished in just &lt;strong&gt;41 seconds&lt;/strong&gt;! This means one of our CI runners sat completely idle for over a minute, waiting for the others to catch up. This is an example of static sharding leading to wasted CI/CD resources and longer feedback cycles.&lt;/p&gt;

&lt;h2&gt;The Solution: Dynamic Test Distribution with Pawdist&lt;/h2&gt;

&lt;p&gt;I'd like to introduce &lt;strong&gt;Pawdist&lt;/strong&gt;, a high-performance, Rust-based dynamic test distributor I developed to solve this problem. Instead of pre-assigning tests, Pawdist uses a proven &lt;strong&gt;Manager-Worker&lt;/strong&gt; architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Manager:&lt;/strong&gt; It scans all your tests and creates a single work queue.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workers:&lt;/strong&gt; They connect to the manager and ask for a test. As soon as a worker finishes a test, it asks for the next one from the queue.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This way, no CI runner ever sits idle. A runner that quickly finishes a short test can immediately grab the next available one, helping to clear the queue faster. This ensures your resources are utilized much more efficiently until the very last test is complete.&lt;/p&gt;
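The pull-based queue can be sketched as a single-process simulation in TypeScript (hypothetical durations; the real Pawdist coordinates separate machines over the network, not array indices):

```typescript
// Simulate workers pulling tests from one shared queue and return
// the makespan: the time at which the last worker finishes.
function dynamicMakespan(tests: number[], workers: number): number {
  // freeAt[w] is the simulated time at which worker w becomes idle.
  const freeAt: number[] = new Array(workers).fill(0);
  const queue = [...tests];
  while (queue.length > 0) {
    // The next idle worker pulls the next test from the shared queue.
    const w = freeAt.indexOf(Math.min(...freeAt));
    const next = queue.shift();
    if (next === undefined) break;
    freeAt[w] += next;
  }
  return Math.max(...freeAt);
}

// Hypothetical per-test durations in seconds, long tests first.
console.log(dynamicMakespan([25, 24, 22, 20, 3, 2, 2, 1], 2)); // 50
```

With these durations, two workers pulling from a shared queue finish in 50 simulated seconds, close to the theoretical optimum of half the 99-second total, because no worker ever waits while work remains.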

&lt;h2&gt;Showdown: Pawdist vs. Playwright Sharding&lt;/h2&gt;

&lt;p&gt;To demonstrate the real-world impact, I created a sample Playwright project with 100 tests. I deliberately designed it to expose the weakness of static sharding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tests 1-50:&lt;/strong&gt; Intentionally long, taking 15-25 seconds each.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tests 51-100:&lt;/strong&gt; Intentionally short, taking 1-15 seconds each.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// A snippet from the test file to show the imbalance
import { test, expect } from '@playwright/test';

// ...

test('Distribution Test 4', async ({ page }) =&amp;gt; {
  await page.waitForTimeout(25000); // 25 seconds (long test)
  expect(true).toBe(true);
});

// ...

test('Distribution Test 77', async ({ page }) =&amp;gt; {
  await page.waitForTimeout(1000); // 1 second (short test)
  expect(true).toBe(true);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I ran this test suite using three different methods, keeping the total parallel count at 4 across all three methods.&lt;/p&gt;

&lt;h2&gt;Run 1: The Baseline (No Sharding)&lt;/h2&gt;

&lt;p&gt;First, I ran all 100 tests on a single machine using 4 parallel workers. This gives us our best-case-scenario time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foyu2mjjurso8u0pva5vg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foyu2mjjurso8u0pva5vg.png" alt=" " width="800" height="50"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The entire test suite finished in &lt;strong&gt;6 minutes and 3 seconds&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;Run 2: The Imbalance of Static Sharding&lt;/h2&gt;

&lt;p&gt;Next, I split the tests across two runners (simulating two CI machines), each running 2 parallel workers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyeu3z8hoi99i1f2pbc79.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyeu3z8hoi99i1f2pbc79.png" alt=" " width="800" height="107"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because Playwright splits tests by order, the result created a severe imbalance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Shard 1/2 (Tests 1-50):&lt;/strong&gt; Received all the long tests and took &lt;strong&gt;8 minutes and 36 seconds&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shard 2/2 (Tests 51-100):&lt;/strong&gt; Received all the short tests and finished in just &lt;strong&gt;3 minutes and 32 seconds&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The total execution time ballooned to 8 minutes and 36 seconds, dictated by the slowest shard.&lt;/p&gt;

&lt;h2&gt;Run 3: The Pawdist Solution (Dynamic Distribution)&lt;/h2&gt;

&lt;p&gt;Finally, I used Pawdist with two worker machines, each running 2 parallel Playwright workers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdacb5noelt3t6wq4nh2j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdacb5noelt3t6wq4nh2j.png" alt=" " width="800" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The logs clearly show the effect of dynamic distribution. Instead of being locked into a predefined group of tests, workers pull from a single, shared queue. When a worker finishes a test (whether short or long), it immediately requests the next test from the central queue. This ensures that even if one worker is tied up with a long test, another can complete several shorter tests in the meantime, balancing the load in real time.&lt;/p&gt;

&lt;p&gt;The entire test suite finished in &lt;strong&gt;6 minutes and 5 seconds&lt;/strong&gt;, nearly matching the ideal baseline and eliminating the massive imbalance caused by static sharding.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Method&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Total Execution Time&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Baseline (No Sharding)&lt;/td&gt;
&lt;td&gt;6m 3s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Playwright Sharding&lt;/td&gt;
&lt;td&gt;8m 36s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pawdist&lt;/td&gt;
&lt;td&gt;6m 5s&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The benchmark speaks for itself. Pawdist provides the scalability of distributed testing without the performance penalties of static sharding.&lt;/p&gt;

&lt;h2&gt;Ready to Speed Up Your Tests?&lt;/h2&gt;

&lt;p&gt;If you're looking to optimize your Playwright test execution times and make the most of your CI resources, give Pawdist a try!&lt;/p&gt;

&lt;p&gt;By switching to a dynamic distribution model, you can achieve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Faster Overall Execution:&lt;/strong&gt; Your suite finishes when the last test completes, not the slowest shard.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimal Resource Utilization:&lt;/strong&gt; No more idle CI runners waiting for others to catch up.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;True Dynamic Load Balancing:&lt;/strong&gt; Tests are assigned on-demand for maximum efficiency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For detailed installation and usage instructions, you can check out the project's comprehensive README on &lt;a href="https://github.com/muhendiskedibey/pawdist" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>playwright</category>
      <category>rust</category>
      <category>performance</category>
      <category>ci</category>
    </item>
    <item>
      <title>How to full screen a browser in Playwright?</title>
      <dc:creator>Alperen Coşkun</dc:creator>
      <pubDate>Sun, 25 Feb 2024 12:37:47 +0000</pubDate>
      <link>https://dev.to/muhendiskedibey/how-to-full-screen-a-browser-in-playwright-1np1</link>
      <guid>https://dev.to/muhendiskedibey/how-to-full-screen-a-browser-in-playwright-1np1</guid>
      <description>&lt;p&gt;One of the things that newcomers to Playwright may find strange at first is that even when the browser is full screen, the content displayed remains in a small area.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5y492w4y3ow178pkhiq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5y492w4y3ow178pkhiq.png" alt=" " width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The main reason is that Playwright's default device parameters define a fixed value for the viewport. The viewport is the area of the browser where page content is displayed; for example, if you go to &lt;a href="https://whatismyviewport.com/" rel="noopener noreferrer"&gt;whatismyviewport&lt;/a&gt;, you will see a value smaller than your screen resolution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygc4j0blatf04ykhvn10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygc4j0blatf04ykhvn10.png" alt=" " width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you look at the viewport resolution in Playwright's default device parameters, you will see that it is quite small. Because these values are constant, the area where the content is displayed stays the same size even when the browser is made full screen.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  "Desktop Chrome": {
    "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.6312.4 Safari/537.36",
    "screen": {
      "width": 1920,
      "height": 1080
    },
    "viewport": {
      "width": 1280,
      "height": 720
    },
    "deviceScaleFactor": 1,
    "isMobile": false,
    "hasTouch": false,
    "defaultBrowserType": "chromium"
  },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Source: &lt;a href="https://github.com/microsoft/playwright/blob/main/packages/playwright-core/src/server/deviceDescriptorsSource.json" rel="noopener noreferrer"&gt;deviceDescriptorsSource.json&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I researched this problem, and the solutions I found were either simply wrong or incomplete. After experimenting on my own, I arrived at the solution that worked best, which is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    {
      name: 'chromium',
      use: { 
        ...devices['Desktop Chrome'],
        deviceScaleFactor: undefined,
        viewport: null,
        launchOptions: {
          args: ['--start-maximized']
        },
      },
    },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the example above, I have added additional parameters for Chromium in the &lt;code&gt;playwright.config.ts&lt;/code&gt; file. The &lt;code&gt;--start-maximized&lt;/code&gt; argument in &lt;code&gt;launchOptions&lt;/code&gt; already maximizes the Chromium (and of course Chrome) browser window, but as I mentioned at the beginning of this article, the viewport resolution remains constant, so the area where the content is displayed is still small.&lt;/p&gt;

&lt;p&gt;For the viewport resolution to adapt to the window, the &lt;code&gt;viewport&lt;/code&gt; parameter must be set to &lt;code&gt;null&lt;/code&gt;. On its own, however, this is not enough, because you get the following error when you try to run the test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: "deviceScaleFactor" option is not supported with null "viewport"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is why I also set the &lt;code&gt;deviceScaleFactor&lt;/code&gt; parameter to &lt;code&gt;undefined&lt;/code&gt;. When you run the test with this final configuration, the browser opens maximized and the content fills the whole window, as it should.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1ng4b0zypemcyl3jrnz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1ng4b0zypemcyl3jrnz.png" alt=" " width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you found this article useful, please like it and share it with others who might benefit. Thank you!&lt;/p&gt;

</description>
      <category>playwright</category>
      <category>typescript</category>
      <category>testing</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
