<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tanmay Gupta</title>
    <description>The latest articles on DEV Community by Tanmay Gupta (@tanmay_gupta_2e3afbed80c7).</description>
    <link>https://dev.to/tanmay_gupta_2e3afbed80c7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3629927%2Ffb7b8a38-b343-43e4-a816-7d3ae9ca50cd.jpg</url>
      <title>DEV Community: Tanmay Gupta</title>
      <link>https://dev.to/tanmay_gupta_2e3afbed80c7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tanmay_gupta_2e3afbed80c7"/>
    <language>en</language>
    <item>
      <title>Why Your Playwright Tests Pass But Images Are Broken</title>
      <dc:creator>Tanmay Gupta</dc:creator>
      <pubDate>Sat, 28 Feb 2026 10:11:01 +0000</pubDate>
      <link>https://dev.to/tanmay_gupta_2e3afbed80c7/why-your-playwright-tests-pass-but-images-are-broken-3dgk</link>
      <guid>https://dev.to/tanmay_gupta_2e3afbed80c7/why-your-playwright-tests-pass-but-images-are-broken-3dgk</guid>
      <description>&lt;p&gt;Your tests are green. CI passed. You deploy.&lt;/p&gt;

&lt;p&gt;And then someone Slacks you: "Half the images on the product page are broken."&lt;/p&gt;

&lt;p&gt;You check the test run. Everything passed. No failures. No warnings. The suite did exactly what it was told to do.&lt;/p&gt;

&lt;p&gt;That's the gap nobody talks about.&lt;/p&gt;




&lt;h2&gt;
  
  
  Playwright doesn't catch what it's never asked to check
&lt;/h2&gt;

&lt;p&gt;Broken images don't show up as &lt;a href="https://testdino.com/blog/playwright-test-failure/" rel="noopener noreferrer"&gt;failing tests&lt;/a&gt;. They show up as passing tests. Playwright isn't wrong here. It checked what your assertions told it to check. Nothing more.&lt;/p&gt;

&lt;p&gt;Four things cause this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. A 404.&lt;/strong&gt; The CDN path changed, a file got deleted, or the &lt;code&gt;src&lt;/code&gt; attribute points at something that no longer exists. The browser renders a broken-image icon. The test moves on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. An invisible image.&lt;/strong&gt; CSS hides it, the container collapses, or something overlays it. &lt;code&gt;isVisible()&lt;/code&gt; can still come back &lt;code&gt;true&lt;/code&gt;: Playwright checks the element's bounding box and CSS visibility, not whether its pixels are actually painted where a user can see them. Users see nothing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. A 1x1 placeholder.&lt;/strong&gt; The image loads, but it's a pixel-sized fallback from a bad API response or a failed upload. &lt;code&gt;naturalWidth&lt;/code&gt; is 1. The browser is satisfied. You're not.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. A lazy-loaded image your test never reached.&lt;/strong&gt; If the test never scrolled far enough, the image never entered the viewport, so the browser never requested it and there was nothing to fail.&lt;/p&gt;

&lt;p&gt;Same result in your &lt;a href="https://testdino.com/blog/playwright-reporting/" rel="noopener noreferrer"&gt;Playwright report&lt;/a&gt; for all four: a green checkmark.&lt;/p&gt;
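&lt;p&gt;All four share a fix: measure what the user sees, not what the DOM contains. One way to sketch that is to keep the classification as a pure function; the scroll-and-collect step that would feed it from a live page is shown in comments, and the field names, selector, and timings are this sketch's own, not a Playwright API:&lt;/p&gt;

```typescript
// Pure classifier: decide from measurements whether an image is a
// user-visible failure. Field names are this sketch's own invention.
type ImgInfo = {
  src: string;
  naturalWidth: number; // 0 when the resource never loaded
  boxWidth: number;     // rendered size via getBoundingClientRect()
  boxHeight: number;
};

function isUserVisibleFailure(info: ImgInfo): boolean {
  if (info.naturalWidth === 0) return true; // case 1: failed load / 404
  if (info.naturalWidth === 1) return true; // case 3: 1x1 placeholder
  if (info.boxWidth === 0) return true;     // case 2: hidden or collapsed
  if (info.boxHeight === 0) return true;
  return false;
}

// Inside a test you would scroll first (case 4: trigger lazy loading),
// then collect measurements and feed them to the classifier:
//
//   await page.evaluate(async () => {
//     let prev = -1;
//     while (window.scrollY !== prev) {
//       prev = window.scrollY;
//       window.scrollBy(0, window.innerHeight);
//       await new Promise(r => setTimeout(r, 200));
//     }
//   });
//   const infos = await page.$$eval('img', imgs =>
//     imgs.map(img => {
//       const rect = img.getBoundingClientRect();
//       return { src: img.src, naturalWidth: img.naturalWidth,
//                boxWidth: rect.width, boxHeight: rect.height };
//     })
//   );
//   const failures = infos.filter(isUserVisibleFailure).map(i => i.src);
```

&lt;p&gt;The 200ms pause per scroll step is a guess; swap it for whatever your lazy-load strategy actually needs.&lt;/p&gt;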


&lt;h2&gt;
  
  
  What you can catch with Playwright today
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Network interception&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The most direct approach. Listen to every response, filter for image resource types, and flag anything that comes back with a 4xx or 5xx status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// global-setup.ts or a shared fixture
page.on('response', response =&amp;gt; {
  if (
    response.request().resourceType() === 'image' &amp;amp;&amp;amp;
    response.status() &amp;gt;= 400
  ) {
    console.error(`Broken image: ${response.url()} — ${response.status()}`);
  }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wire it into a global fixture and every test picks it up automatically. The catch: fixtures break quietly. Someone refactors the setup file, the listener disappears, and you won't know until users report broken images.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;naturalWidth check&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Catches images that failed to load entirely. Pair this with solid &lt;a href="https://testdino.com/blog/playwright-assertions/" rel="noopener noreferrer"&gt;Playwright assertions&lt;/a&gt; to keep the checks reliable.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const brokenImages = await page.$$eval('img', imgs =&amp;gt;
  imgs
    .filter(img =&amp;gt; img.naturalWidth === 0)
    .map(img =&amp;gt; img.src)
);

if (brokenImages.length &amp;gt; 0) {
  throw new Error(
    `${brokenImages.length} broken image(s) found:\n${brokenImages.join('\n')}`
  );
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Catches hard 404s. Misses the 1x1 placeholder case. Add a dimension check to cover those:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const suspiciousImages = await page.$$eval('img', imgs =&amp;gt;
  imgs
    .filter(img =&amp;gt; img.naturalWidth &amp;lt;= 1 || img.naturalHeight &amp;lt;= 1)
    .map(img =&amp;gt; ({ src: img.src, w: img.naturalWidth, h: img.naturalHeight }))
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Allure Playwright integration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you're using Allure as your &lt;a href="https://testdino.com/blog/playwright-reporting-tools/" rel="noopener noreferrer"&gt;Playwright reporting tool&lt;/a&gt;, you can attach image check findings directly into the report as step annotations or attachments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { allure } from 'allure-playwright';

test('check for broken images', async ({ page }) =&amp;gt; {
  await page.goto('/products');

  const brokenImages = await page.$$eval('img', imgs =&amp;gt;
    imgs.filter(img =&amp;gt; img.naturalWidth === 0).map(img =&amp;gt; img.src)
  );

  if (brokenImages.length &amp;gt; 0) {
    await allure.attachment(
      'Broken Images',
      Buffer.from(brokenImages.join('\n')),
      'text/plain'
    );
    throw new Error(`Found ${brokenImages.length} broken image(s)`);
  }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Allure renders the attachment inline when someone views the failed test. The reviewer sees the list of broken image URLs without opening a separate trace file. That's a genuine improvement in report readability.&lt;/p&gt;

&lt;p&gt;What Allure doesn't do is detect broken images on its own. The detection logic is yours. Allure surfaces what you push into it. If you don't write the check, nothing shows up regardless of which reporter you're using.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visual regression snapshot testing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The most complete approach. &lt;a href="https://testdino.com/blog/playwright-visual-testing/" rel="noopener noreferrer"&gt;Playwright visual testing&lt;/a&gt; uses &lt;code&gt;toHaveScreenshot()&lt;/code&gt; to compare the current render against a saved baseline.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test('product page images render correctly', async ({ page }) =&amp;gt; {
  await page.goto('/products');
  await page.waitForLoadState('networkidle');
  await expect(page).toHaveScreenshot('product-page.png');
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A pixel diff catches any image that changed, disappeared, or loaded incorrectly. The tradeoff is maintenance. Baselines drift between environments, font rendering differs between Linux CI and local macOS, and sensitivity thresholds need tuning. On a large team with frequent UI changes, managing baselines becomes a significant ongoing task.&lt;/p&gt;
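&lt;p&gt;A starting point for that tuning: Playwright lets you set screenshot-comparison tolerances once in &lt;code&gt;playwright.config.ts&lt;/code&gt; instead of per assertion. The numbers below are illustrative, not recommendations; every app needs its own:&lt;/p&gt;

```typescript
// playwright.config.ts — project-wide screenshot tolerances (example values).
import { defineConfig } from '@playwright/test';

export default defineConfig({
  expect: {
    toHaveScreenshot: {
      maxDiffPixelRatio: 0.01, // allow up to 1% of pixels to differ
      threshold: 0.2,          // per-pixel color tolerance, 0 (strict) to 1
    },
  },
});
```

&lt;p&gt;Per-test overrides still work by passing the same options to &lt;code&gt;toHaveScreenshot()&lt;/code&gt; directly.&lt;/p&gt;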




&lt;p&gt;&lt;strong&gt;Where all of this breaks down&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Setting it up once is not the hard part. Living with it at scale is.&lt;/p&gt;

&lt;p&gt;One CDN image breaks. 15 tests fail. Your Playwright &lt;a href="https://testdino.com/blog/playwright-html-reporter/" rel="noopener noreferrer"&gt;HTML report&lt;/a&gt; shows 15 separate entries with no indication they're connected. Someone on your team opens each &lt;a href="https://testdino.com/blog/playwright-trace-viewer/" rel="noopener noreferrer"&gt;Playwright trace&lt;/a&gt; one by one, checks the network tab each time, and pieces together that it's all the same 404. That's 45 minutes on a good day.&lt;/p&gt;

&lt;p&gt;Traces aren't fast even for a single failure. Download the zip, open the viewer locally, find the network panel, cross-reference with the screenshot. Useful, yes. Fast, no. At 40 failures you've just taken up someone's entire morning.&lt;/p&gt;

&lt;p&gt;The part that frustrates teams most: you can't tell if this is new. Did the image break today or has it been &lt;a href="https://testdino.com/blog/manage-playwright-flaky-tests/" rel="noopener noreferrer"&gt;flaky&lt;/a&gt; for two weeks? Did it start after last Tuesday's deploy? Both the Playwright HTML reporter and Allure focus on single-run visibility. Neither carries memory across runs, so even with solid detection logic, the reporting layer can't tell you whether this is a pattern or a one-off. Tracking tests across runs is where single-run reporters hit their ceiling.&lt;/p&gt;




&lt;h2&gt;
  
  
  What a reporting layer built for Playwright actually changes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://testdino.com/" rel="noopener noreferrer"&gt;TestDino&lt;/a&gt; is a Playwright-specific reporting and test management platform. Once your image assertions catch a failure, here's what's different.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In-platform traces, no downloading.&lt;/strong&gt; Screenshot, network logs, console output, and trace data are all in one panel per test. What used to take 10 minutes to piece together takes about 30 seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error grouping.&lt;/strong&gt; Those 15 tests failing from the same broken image show up as one issue in the Error Analysis tab, not 15. You look at it once, understand what broke, fix it once. See how &lt;a href="https://testdino.com/blog/playwright-debugging-guide/" rel="noopener noreferrer"&gt;Playwright debugging&lt;/a&gt; changes when failures are grouped by root cause.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI failure classification.&lt;/strong&gt; TestDino labels each failure as &lt;code&gt;Bug&lt;/code&gt;, &lt;code&gt;UI Change&lt;/code&gt;, &lt;code&gt;Flaky&lt;/code&gt;, or &lt;code&gt;Misc&lt;/code&gt;. For broken images, this routes the fix correctly without a triage meeting:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F11g67pegnnd34wg0q6tt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F11g67pegnnd34wg0q6tt.jpg" alt=" " width="560" height="209"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test history.&lt;/strong&gt; TestDino shows you the exact CI run where a test started failing. If your landing page hero image broke on Tuesday at 3pm, that's one click. No correlating GitHub Actions timestamps with deployment logs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tag analytics.&lt;/strong&gt; Tag image tests with &lt;code&gt;@visual&lt;/code&gt; or &lt;code&gt;@images&lt;/code&gt; and TestDino gives you pass/fail trend data for that group across every run. You stop finding out images are broken when users report them. See how &lt;a href="https://testdino.com/blog/playwright-reporting-metrics/" rel="noopener noreferrer"&gt;Playwright reporting&lt;/a&gt; metrics look when tracked over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub PR comments.&lt;/strong&gt; If an image assertion fails on a pull request, TestDino posts a test summary comment on that PR automatically. The developer who introduced the broken path sees it before the code merges. Production never sees it. The &lt;a href="https://testdino.com/blog/playwright-automation-checklist/" rel="noopener noreferrer"&gt;Playwright automation checklist&lt;/a&gt; covers how to wire this up end to end.&lt;/p&gt;




&lt;h2&gt;
  
  
  Quickstart fixture
&lt;/h2&gt;

&lt;p&gt;Drop this into your Playwright project and every test gets automatic image coverage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// fixtures/imageCheck.ts
import { test as base } from '@playwright/test';

export const test = base.extend({
  page: async ({ page }, use) =&amp;gt; {
    const brokenImages: string[] = [];

    page.on('response', response =&amp;gt; {
      if (
        response.request().resourceType() === 'image' &amp;amp;&amp;amp;
        response.status() &amp;gt;= 400
      ) {
        brokenImages.push(`${response.status()} — ${response.url()}`);
      }
    });

    await use(page);

    if (brokenImages.length &amp;gt; 0) {
      throw new Error(
        `Broken images detected:\n${brokenImages.join('\n')}`
      );
    }
  }
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use this test instead of Playwright's default in your spec files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// product-images.spec.ts
import { test } from '../fixtures/imageCheck';
import { expect } from '@playwright/test';

test('product page has no broken images', async ({ page }) =&amp;gt; {
  await page.goto('/products');
  await page.waitForLoadState('networkidle');
  // fixture handles the image check automatically
  // add your other assertions here
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also build a &lt;a href="https://testdino.com/blog/playwright-custom-reporter/" rel="noopener noreferrer"&gt;custom Playwright reporter&lt;/a&gt; that outputs broken image data directly into your CI pipeline logs if you want the findings surfaced at the runner level rather than test level.&lt;/p&gt;
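&lt;p&gt;A minimal sketch of what that reporter could look like. The shape follows the &lt;code&gt;Reporter&lt;/code&gt; interface from &lt;code&gt;@playwright/test/reporter&lt;/code&gt; (&lt;code&gt;onTestEnd&lt;/code&gt;, &lt;code&gt;onEnd&lt;/code&gt;); the types are loosened here so the filtering logic stands on its own, and the "Broken image" marker is just the string the quickstart fixture happens to throw:&lt;/p&gt;

```typescript
// broken-image-reporter.ts — sketch of a custom Playwright reporter.
// The types below are structural stand-ins for TestCase/TestResult from
// '@playwright/test/reporter'; only the fields used here are declared.
type MinimalTest = { title: string };
type MinimalResult = { status: string; errors: { message?: string }[] };

class BrokenImageReporter {
  // Collected log lines, printed once at the end of the run.
  readonly collected: string[] = [];

  onTestEnd(test: MinimalTest, result: MinimalResult) {
    if (result.status === 'passed') return;
    for (const err of result.errors) {
      const msg = err.message ?? '';
      if (msg.includes('Broken image')) {
        // Keep only the first line so CI logs stay greppable.
        this.collected.push(`[broken-image] ${test.title}: ${msg.split('\n')[0]}`);
      }
    }
  }

  onEnd() {
    // Surface findings at the runner level, not per test.
    for (const line of this.collected) console.log(line);
  }
}

export default BrokenImageReporter;
```

&lt;p&gt;Register it by file path in &lt;code&gt;playwright.config.ts&lt;/code&gt;, e.g. &lt;code&gt;reporter: [['list'], ['./broken-image-reporter.ts']]&lt;/code&gt;, and the lines land in the runner output alongside the built-in reporter.&lt;/p&gt;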




&lt;h2&gt;
  
  
  Where to start
&lt;/h2&gt;

&lt;p&gt;If your tests aren't catching broken images at all, start with the fixture above. 20 minutes of setup, every test in your suite gets coverage without touching individual spec files.&lt;/p&gt;

&lt;p&gt;If you're using Allure and want to enrich your reports with image check data, the &lt;code&gt;allure-playwright&lt;/code&gt; attachment approach earlier works well alongside the fixture. The fixture throws on failures. The attachment gives reviewers context without opening traces.&lt;/p&gt;

&lt;p&gt;If you're catching failures but triage is eating time, try &lt;a href="https://testdino.com/" rel="noopener noreferrer"&gt;TestDino&lt;/a&gt; in the &lt;a href="https://sandbox.testdino.com/" rel="noopener noreferrer"&gt;sandbox&lt;/a&gt;. Nothing changes in how you run tests. Connect it, run a suite, see the grouping and classification on your own data.&lt;/p&gt;

&lt;p&gt;The detection side of this is solvable with what Playwright gives you today. The part that actually costs teams time is everything that happens after a test fails. That's the part worth getting right.&lt;/p&gt;

</description>
      <category>playwright</category>
      <category>testing</category>
      <category>qa</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Why Playwright-CLI Beats MCP for AI‑Driven Browser Automation</title>
      <dc:creator>Tanmay Gupta</dc:creator>
      <pubDate>Mon, 16 Feb 2026 05:42:59 +0000</pubDate>
      <link>https://dev.to/tanmay_gupta_2e3afbed80c7/why-playwright-cli-beats-mcp-for-ai-driven-browser-automation-l7h</link>
      <guid>https://dev.to/tanmay_gupta_2e3afbed80c7/why-playwright-cli-beats-mcp-for-ai-driven-browser-automation-l7h</guid>
      <description>&lt;p&gt;Most browser + LLM setups still bolt MCP tools onto Playwright, so every click sends huge DOMs, accessibility trees, and logs back to the model. That wastes tokens, reduces useful context, and makes longer sessions unstable.&lt;/p&gt;

&lt;p&gt;Standard Playwright HTML reports also get hard to work with once you have more than a few dozen end‑to‑end tests, so teams struggle to spot real patterns behind flaky failures.&lt;/p&gt;

&lt;p&gt;The deep dive at &lt;a href="https://testdino.com/blog/playwright-cli/" rel="noopener noreferrer"&gt;https://testdino.com/blog/playwright-cli/&lt;/a&gt; explains how Microsoft’s playwright-cli keeps browser state outside the model and returns only small element references and YAML flows. It still runs through a normal &lt;code&gt;npx playwright test&lt;/code&gt;, with better reporting, so teams can debug faster with less noise.&lt;/p&gt;

</description>
      <category>playwright</category>
      <category>testing</category>
      <category>automation</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
