<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: artshllaku</title>
    <description>The latest articles on DEV Community by artshllaku (@artshllaku).</description>
    <link>https://dev.to/artshllaku</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2466506%2F77d48ce7-7ede-4e9e-bb06-fbaba1921fad.jpeg</url>
      <title>DEV Community: artshllaku</title>
      <link>https://dev.to/artshllaku</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/artshllaku"/>
    <language>en</language>
    <item>
      <title>Your Tests Are Passing. That Doesn't Mean They're Good</title>
      <dc:creator>artshllaku</dc:creator>
      <pubDate>Fri, 13 Mar 2026 23:21:36 +0000</pubDate>
      <link>https://dev.to/artshllaku/your-tests-are-passing-that-doesnt-mean-theyre-good-48a3</link>
      <guid>https://dev.to/artshllaku/your-tests-are-passing-that-doesnt-mean-theyre-good-48a3</guid>
      <description>&lt;p&gt;I've worked on projects with 85% test coverage where bugs still made it to production every week. The team would look at the coverage report, see green, and feel safe. But the tests were lying.&lt;/p&gt;

&lt;p&gt;Here's what was actually happening:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test('should process user data', async () =&amp;gt; {
  const result = await processUser(mockData);
  expect(result).toBeTruthy();
});
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;This test "covers" the processUser function. Coverage goes up. CI is green. But what does it actually prove? That the function returns something that isn't null, undefined, 0, or false. That's it. The function could return completely wrong data and this test would still pass.&lt;/p&gt;

&lt;p&gt;Or this one — common in E2E tests:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test('user signup flow', async ({ page }) =&amp;gt; {
  await page.goto('/signup');
  await page.fill('#email', 'test@test.com');
  await page.fill('#password', '12345678');
  await page.click('button[type="submit"]');
  await page.waitForURL('/dashboard');
});
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;It walks through the whole signup flow but never checks if the user was actually created, if the right page content loaded, or if the form shows errors for bad input. It checks that a redirect happened. That's all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Coverage tools can't catch this.&lt;/strong&gt; Coverage measures which lines of code ran during your test. It doesn't care if you actually verified anything.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's the difference between a good test and a bad one?
&lt;/h2&gt;

&lt;p&gt;A bad test runs your code and checks almost nothing. A good test runs your code and verifies that specific things happened correctly.&lt;/p&gt;

&lt;h4&gt;
  
  
  Bad:
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;expect(response).toBeTruthy();
expect(user).toBeDefined();
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h4&gt;
  
  
  Good:
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;expect(response.status).toBe(200);
expect(user.email).toBe('test@test.com');
expect(user.role).toBe('admin');
expect(user.createdAt).toBeInstanceOf(Date);
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Bad tests give you confidence that isn't real. You think your code works because tests pass. Then a customer reports a bug that your test suite should have caught — but didn't, because the assertions were too weak.&lt;/p&gt;

&lt;h3&gt;
  
  
  The real metrics that matter
&lt;/h3&gt;

&lt;p&gt;Instead of asking "how much of my code is tested?", start asking:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How many assertions does each test have?&lt;/strong&gt; A test with one weak assertion is barely a test.&lt;br&gt;
&lt;strong&gt;What kind of matchers am I using?&lt;/strong&gt; toBeTruthy() is almost always the wrong choice. toBe(), toEqual(), toContain() actually verify values.&lt;br&gt;
&lt;strong&gt;Am I testing edge cases?&lt;/strong&gt; The happy path working doesn't mean your code handles errors, empty inputs, or boundary values.&lt;br&gt;
&lt;strong&gt;Am I testing behavior or implementation?&lt;/strong&gt; Tests that break when you refactor are testing the wrong thing.&lt;/p&gt;

&lt;h3&gt;
  
  
  I built a tool to automate this
&lt;/h3&gt;

&lt;p&gt;I got tired of reviewing PRs and manually pointing out weak tests, so I built gapix — a CLI tool that analyzes your test quality automatically.&lt;/p&gt;

&lt;p&gt;It parses your test files using AST analysis, looks at every single assertion, categorizes your matchers, and gives each test file a quality score from 0 to 100.&lt;/p&gt;
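&lt;p&gt;The real analysis walks the parsed AST, but the assertion-density idea can be shown with a much cruder stand-in: a regex count of expect() calls per test block. This is purely illustrative and not how gapix works internally:&lt;/p&gt;

```javascript
// Crude illustration of assertion density: count expect() calls per test.
// gapix does this properly over the AST; a regex is enough to show the idea.
const source = `
test('weak', async () => {
  const result = await load();
  expect(result).toBeTruthy();
});
test('strong', () => {
  expect(user.email).toBe('a@b.com');
  expect(user.role).toBe('admin');
});`;

const testBlocks = source.split(/^test\(/m).slice(1);
const densities = testBlocks.map((block) => {
  const name = block.match(/'([^']+)'/)[1];
  const assertions = (block.match(/expect\(/g) || []).length;
  return { name, assertions };
});

for (const t of densities) {
  if (t.assertions >= 2) continue;
  console.log(`flagged: ${t.name} (${t.assertions} assertion)`);
}
```

&lt;p&gt;Even this toy version flags the first test; the AST version additionally knows which matcher each assertion uses and how strong it is.&lt;/p&gt;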

&lt;p&gt;&lt;code&gt;npx @artshllaku/gapix analyze ./src&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;What it checks:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Assertion density&lt;/strong&gt; — tests with zero or only one assertion get flagged&lt;br&gt;
&lt;strong&gt;Matcher strength&lt;/strong&gt; — using toBeTruthy() where you could check actual values&lt;br&gt;
&lt;strong&gt;Edge case coverage&lt;/strong&gt; — whether you're testing more than just the happy path&lt;br&gt;
&lt;strong&gt;Assertion categories&lt;/strong&gt; — equality checks, mock verifications, error handling, DOM assertions&lt;br&gt;
&lt;strong&gt;Overall quality grade&lt;/strong&gt; — Poor, Fair, Good, or Excellent for each file&lt;/p&gt;

&lt;p&gt;It generates an interactive HTML report you can open in the browser:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npx @artshllaku/gapix show-report&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You get a dark-themed dashboard showing every test file, its score, individual findings, and suggestions for improvement.&lt;/p&gt;

&lt;h3&gt;
  
  
  It works with any framework
&lt;/h3&gt;

&lt;p&gt;Jest, Vitest, Playwright, Cypress: it doesn't matter. If your tests are written in TypeScript or JavaScript, gapix can analyze them. It reads the AST, not framework-specific APIs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Optional AI analysis
&lt;/h3&gt;

&lt;p&gt;If you want deeper analysis, you can connect an AI provider (OpenAI or Ollama) and gapix will give you context-aware suggestions — not generic advice, but specific findings based on your actual code and what your tests are checking.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx @artshllaku/gapix set-provider openai
npx @artshllaku/gapix set-key sk-your-key
npx @artshllaku/gapix analyze ./src
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Without AI, it still runs full rule-based analysis using AST parsing. The AI just adds an extra layer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why this matters more than coverage
&lt;/h3&gt;

&lt;p&gt;I've seen this pattern too many times:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Team sets a coverage threshold (80%)&lt;/li&gt;
&lt;li&gt;Developers write tests to hit the number&lt;/li&gt;
&lt;li&gt;Tests become a checkbox, not a safety net&lt;/li&gt;
&lt;li&gt;Bugs get through because tests don't verify real behavior&lt;/li&gt;
&lt;li&gt;Team loses trust in the test suite&lt;/li&gt;
&lt;li&gt;People stop writing tests or start skipping them&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The fix isn't more tests. It's better tests. One well-written test with strong assertions is worth more than ten tests that just call functions and check &lt;code&gt;toBeTruthy()&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Get started
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run it once without installing
npx @artshllaku/gapix analyze ./src

# Or install globally
npm i -g @artshllaku/gapix
gapix analyze ./src
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;It's free, open source, and takes about 30 seconds to get your first report.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/artshllk/gapix" rel="noopener noreferrer"&gt;https://github.com/artshllk/gapix&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'd love your feedback. If you try it on your codebase and something doesn't work right or the suggestions aren't helpful, open an issue. I'm actively working on this and want it to be useful for real-world projects.&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>testing</category>
      <category>opensource</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Master Mobile Web Testing with Playwright: A Beginner’s Guide</title>
      <dc:creator>artshllaku</dc:creator>
      <pubDate>Wed, 19 Mar 2025 01:36:23 +0000</pubDate>
      <link>https://dev.to/artshllaku/master-mobile-web-testing-with-playwright-a-beginners-guide-2a9d</link>
      <guid>https://dev.to/artshllaku/master-mobile-web-testing-with-playwright-a-beginners-guide-2a9d</guid>
      <description>&lt;p&gt;Playwright is a powerful tool for automating browser testing, and it’s not just limited to desktop browsers. With Playwright, you can also test your web applications on mobile devices. Let me explain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Can Playwright Do for Mobile Testing?&lt;/strong&gt;&lt;br&gt;
Playwright allows you to emulate real mobile devices like smartphones and tablets. This means you can test how your website or web app behaves on different devices without needing the physical hardware. You can configure Playwright to simulate various device-specific behaviors, such as:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Screen size and viewport:&lt;/strong&gt; Test how your site looks on different screen resolutions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User agent:&lt;/strong&gt; Match the browser details of specific devices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Touch support:&lt;/strong&gt; Simulate touch interactions for mobile devices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Geolocation, locale, and timezone:&lt;/strong&gt; Test location-based features or language-specific content.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Permissions:&lt;/strong&gt; Check how your app handles permissions like notifications or camera access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Color scheme:&lt;/strong&gt; Test dark mode or light mode compatibility.&lt;/p&gt;
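&lt;p&gt;Taken together, these map onto options of Playwright's browser.newContext(). Here is a sketch of a context configuration exercising each one; the specific values (Paris coordinates, a French locale) are placeholders chosen for illustration:&lt;/p&gt;

```javascript
// Context options covering the behaviors listed above. The option names
// (viewport, hasTouch, geolocation, permissions, locale, timezoneId,
// colorScheme) come from Playwright's newContext() API; values are placeholders.
const contextOptions = {
  viewport: { width: 402, height: 874 },                 // screen size
  hasTouch: true,                                        // touch support
  geolocation: { latitude: 48.8584, longitude: 2.2945 }, // location features
  permissions: ['geolocation'],                          // permission handling
  locale: 'fr-FR',                                       // language content
  timezoneId: 'Europe/Paris',                            // timezone logic
  colorScheme: 'dark',                                   // dark mode
};

// Used as: const context = await browser.newContext(contextOptions);
console.log(Object.keys(contextOptions).join(', '));
```

&lt;p&gt;Passing this object to newContext() gives you a page that behaves like a dark-mode, French-locale touch device located in Paris.&lt;/p&gt;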

&lt;p&gt;While Playwright is a very flexible and powerful tool, it’s important to note that it only works for mobile websites, not native mobile apps. If you’re looking to test native apps, you’ll need tools like Appium. But for responsive web design and mobile web testing, Playwright is a fantastic choice.&lt;/p&gt;
&lt;h2&gt;
  
  
  How to Emulate Mobile Devices in Playwright
&lt;/h2&gt;

&lt;p&gt;Playwright comes with a wide range of pre-configured device profiles that you can use right out of the box. These profiles include popular devices like iPhones, iPads, and Android phones. You can find the full list of supported devices in the device descriptors source file in Playwright’s official GitHub repository.&lt;/p&gt;

&lt;p&gt;Here’s a quick example of how to set up mobile emulation in your Playwright configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { chromium, devices } = require(‘playwright’);
const iPhone = devices[‘iPhone 16 Pro’];

(async () =&amp;gt; {
 const browser = await chromium.launch();
 const context = await browser.newContext({
 …iPhone, // Emulates iPhone 16 Pro
 });
 const page = await context.newPage();
 await page.goto('https://yourwebsite.com');
 // Test code
 await browser.close();
})();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also specify the viewport and other settings manually if you don’t want to use a pre-configured device profile. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const context = await browser.newContext({
 viewport: { width: 402, height: 874 }, // iPhone 16 Pro dimensions
 userAgent: 'Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Mobile/15E148 Safari/604.1',
 hasTouch: true,
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Challenges and Limitations
&lt;/h2&gt;

&lt;p&gt;While Playwright is a great tool, there are a few challenges you might face:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No Native App Support:&lt;/strong&gt; As mentioned earlier, Playwright can’t test native mobile apps. It’s strictly for web applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real Device Behavior:&lt;/strong&gt; Emulation is great, but it’s not the same as testing on a real device. Some behaviors, like performance or specific hardware interactions, might differ.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dynamic Content:&lt;/strong&gt; If your website has dynamic content (e.g., ads or animations), it might behave differently on emulated devices compared to real ones.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use Playwright for Mobile Testing?
&lt;/h2&gt;

&lt;p&gt;Despite its limitations, Playwright is a powerful tool for developers and testers who want to:&lt;/p&gt;

&lt;p&gt;Ensure their website is responsive and works well on all screen sizes.&lt;/p&gt;

&lt;p&gt;Test cross-browser compatibility on mobile devices.&lt;/p&gt;

&lt;p&gt;Automate repetitive testing tasks and save time.&lt;/p&gt;

&lt;p&gt;Simulate complex user interactions, such as touch gestures or geolocation-based features.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Playwright is a powerful and easy-to-use tool for testing mobile websites. While it doesn’t completely replace testing on real devices, it’s a great way to find and fix responsiveness problems and make sure your website looks good on different screen sizes. If you’re already using Playwright for desktop testing, adding mobile emulation to your tests is a simple and smart choice.&lt;/p&gt;

&lt;p&gt;Give it a try, and let me know how it works for you! If you’ve faced any challenges or have tips to share, feel free to leave a comment below.&lt;/p&gt;

</description>
      <category>playwright</category>
      <category>testing</category>
      <category>frontend</category>
      <category>mobile</category>
    </item>
    <item>
      <title>Best E2E Automation Testing Practices</title>
      <dc:creator>artshllaku</dc:creator>
      <pubDate>Wed, 27 Nov 2024 12:39:45 +0000</pubDate>
      <link>https://dev.to/artshllaku/best-e2e-automation-testing-practices-27ii</link>
      <guid>https://dev.to/artshllaku/best-e2e-automation-testing-practices-27ii</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fislirr5ysg404rzxfvnl.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fislirr5ysg404rzxfvnl.jpeg" alt="Image description" width="800" height="457"&gt;&lt;/a&gt;&lt;br&gt;
This article shares insights and best practices for end-to-end (E2E) testing based on my hands-on experience. I started with minimal knowledge in this field, but over time, I learned the importance of building robust, reliable tests. Facing challenges like flaky tests and unstable pipelines taught me valuable lessons. My goal here is to go beyond the basics and offer strategies that reduce test maintenance, improve stability, and enhance readability in complex projects.&lt;/p&gt;

&lt;p&gt;Instead of reiterating what's already covered in official documentation, this guide focuses on practical techniques I've applied successfully in real-world projects. If you're new to E2E testing or want to deepen your understanding, I recommend exploring these resources alongside my experiences:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.cypress.io/app/core-concepts/best-practices" rel="noopener noreferrer"&gt;Official Cypress Best Practices Guide&lt;/a&gt;&lt;br&gt;
&lt;a href="https://playwright.dev/docs/best-practices" rel="noopener noreferrer"&gt;Official Playwright Best Practices Guide&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Defining Test Purpose and Scope&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;One of the first lessons I learned was the importance of clarity when starting a test. Ask yourself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What functionality am I testing?&lt;/li&gt;
&lt;li&gt;What is the expected outcome?&lt;/li&gt;
&lt;li&gt;What are the test's boundaries?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, when verifying an e-commerce application's checkout flow, define whether you're testing the ability to complete a purchase, inventory updates, or order confirmation emails. Narrowing your scope prevents unnecessary interactions and keeps tests focused.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Examples&lt;/strong&gt;&lt;br&gt;
Well-Defined Test Purpose: Test login functionality using valid credentials and verify successful redirection.&lt;/p&gt;

&lt;p&gt;Scope Control: Skip database checks if the goal is purely to validate UI behavior.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Using TypeScript for Stronger Tests&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Early on, I worked with JavaScript in my tests, but as my projects grew, I realized the benefits of TypeScript. Its type safety and IDE support significantly improve test maintainability by catching errors during development and enhancing code readability.&lt;/p&gt;

&lt;p&gt;Here’s a simple example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;interface UserCredentials {
  username: string;
  password: string;
}

const login = ({ username, password }: UserCredentials) =&amp;gt; {
  cy.get('[data-testid="username"]').type(username);
  cy.get('[data-testid="password"]').type(password);
  cy.get('[data-testid="login-button"]').click();
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using TypeScript ensures that my test inputs are always valid, especially in complex flows involving API responses or structured data. This consistency has saved me hours of debugging.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Writing Readable Tests&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Another lesson I learned the hard way is that tests need to be clear and intuitive for anyone on the team, not just developers. Avoid embedding unnecessary logic and focus on leveraging framework-specific syntax for simplicity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Examples&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;&lt;em&gt;❌ Complex Logic:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cy.get('.items').then(($items) =&amp;gt; {
  Array.from($items).forEach(item =&amp;gt; {
    if (item.innerText.includes('Special')) {
      cy.wrap(item).click();
    }
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;✅ Framework Features:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cy.get('.items')
  .contains('Special')
  .click();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second approach is not only cleaner but also leverages Cypress features, reducing the chances of flakiness due to minor UI changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Integrating E2E Tests with GitHub Actions&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;One of my most impactful contributions was automating E2E tests in the CI/CD pipeline using GitHub Actions. This ensures tests run with every push or pull request, catching issues early.&lt;/p&gt;

&lt;p&gt;Here’s an example of a workflow I’ve used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: CI

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm install

      - name: Run E2E Tests
        run: npm run test:e2e

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This workflow has helped maintain code quality while fostering a collaborative culture of continuous improvement.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Reducing Flakiness in Tests&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Flaky tests can be a nightmare. I've spent a good part of my career dealing with them, and here are some strategies that worked for me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Avoid Overlapping Tests:&lt;/strong&gt; Isolate execution contexts using before and after hooks to set up and tear down test data.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Keep Tests Small and Focused:&lt;/strong&gt; Testing a single functionality per test simplifies debugging and reduces complexity.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Regular Reviews:&lt;/strong&gt; Periodically refactor flaky tests and align them with current application behavior.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Example:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cy.intercept('POST', '/api/checkout', { statusCode: 200, body: { order: '12345' } });
cy.get('[data-testid="checkout-button"]').click();
cy.get('[data-testid="order-confirmation"]').should('contain', 'Order 12345');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Stubbing network requests like this has been key in controlling external dependencies and reducing test failures.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;By implementing these practices, I’ve significantly improved test reliability and maintainability in my projects. While advanced E2E testing requires balancing real-world interactions with stable test design, these lessons have been invaluable in my journey. I hope they help you too!&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>cypress</category>
      <category>playwright</category>
      <category>qa</category>
      <category>javascript</category>
    </item>
    <item>
      <title>WET vs. DRY: Testing Principles You Should Know</title>
      <dc:creator>artshllaku</dc:creator>
      <pubDate>Sun, 24 Nov 2024 10:48:34 +0000</pubDate>
      <link>https://dev.to/artshllaku/wet-vs-dry-testing-principles-you-should-know-4hob</link>
      <guid>https://dev.to/artshllaku/wet-vs-dry-testing-principles-you-should-know-4hob</guid>
      <description>&lt;p&gt;In software development, writing clear and maintainable tests is as crucial as writing the code itself. Two commonly discussed principles in this context are WET (Write Everything Twice) and DRY (Don’t Repeat Yourself).&lt;/p&gt;

&lt;p&gt;These principles help guide how we structure tests, balancing readability, maintainability, and efficiency. Let’s dive into what they mean, explore examples, and understand when to apply each approach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;📝 What is WET Testing?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;WET testing is a style where repetition in test cases is allowed. While often seen as less ideal, this approach can prioritize simplicity and clarity—particularly for straightforward tests.&lt;/p&gt;

&lt;p&gt;Pros of WET Tests:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simplicity: Easy to read and understand, especially for newcomers.&lt;/li&gt;
&lt;li&gt;Isolation: Each test stands on its own, avoiding dependencies.&lt;/li&gt;
&lt;li&gt;Quick to Write: Ideal for smaller projects or simpler scenarios.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example of WET Testing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;describe('Login Tests - WET', () =&amp;gt; {
  test('should allow user to login with valid credentials', async () =&amp;gt; {
    await page.goto('https://example.com/login');
    await page.fill('input[name="username"]', 'user1');
    await page.fill('input[name="password"]', 'password1');
    await page.click('button[type="submit"]');
    await expect(page).toHaveURL('https://example.com/dashboard');
  });

  test('should show an error with invalid credentials', async () =&amp;gt; {
    await page.goto('https://example.com/login');
    await page.fill('input[name="username"]', 'user1');
    await page.fill('input[name="password"]', 'wrongpassword');
    await page.click('button[type="submit"]');
    await expect(page).toHaveText('Invalid username or password');
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the login steps are repeated across tests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;✨ What is DRY Testing?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;DRY testing focuses on minimizing redundancy by abstracting shared logic into reusable functions or setups. This approach shines in complex or large projects.&lt;/p&gt;

&lt;p&gt;Pros of DRY Tests:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduced Redundancy: Centralizes logic, avoiding repetition.&lt;/li&gt;
&lt;li&gt;Ease of Maintenance: Changes only need to be made in one place.&lt;/li&gt;
&lt;li&gt;Cleaner Code: Focuses tests on behavior rather than setup.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example of DRY Testing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;describe('Login Tests - DRY', () =&amp;gt; {
  const login = async (username, password) =&amp;gt; {
    await page.goto('https://example.com/login');
    await page.fill('input[name="username"]', username);
    await page.fill('input[name="password"]', password);
    await page.click('button[type="submit"]');
  };

  test('should allow user to login with valid credentials', async () =&amp;gt; {
    await login('user1', 'password1');
    await expect(page).toHaveURL('https://example.com/dashboard');
  });

  test('should show an error with invalid credentials', async () =&amp;gt; {
    await login('user1', 'wrongpassword');
    await expect(page).toHaveText('Invalid username or password');
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, the login function centralizes the shared steps, making the tests cleaner and easier to maintain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;💡 When to Use WET vs. DRY?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From personal experience, choosing between WET and DRY depends on your project’s complexity and requirements.&lt;/p&gt;

&lt;p&gt;Use WET when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your tests are simple and isolated.&lt;/li&gt;
&lt;li&gt;The code is unlikely to change frequently.&lt;/li&gt;
&lt;li&gt;You prioritize clarity over abstraction.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use DRY when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have repeated logic across multiple tests.&lt;/li&gt;
&lt;li&gt;The codebase is large and maintainability is a concern.&lt;/li&gt;
&lt;li&gt;You need to refactor tests for efficiency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;🔑 Key Takeaways&lt;/strong&gt;&lt;br&gt;
While the DRY principle is generally preferred, WET tests have their place. Strive for a balance that enhances both clarity and maintainability. For smaller projects or straightforward scenarios, a WET approach might suffice. However, in larger, more complex test suites, adopting DRY can significantly improve your workflow.&lt;/p&gt;

&lt;p&gt;Ultimately, the goal is to write tests that are clear, maintainable, and efficient—whatever approach gets you there!&lt;/p&gt;

</description>
      <category>testing</category>
      <category>playwright</category>
      <category>javascript</category>
      <category>cypress</category>
    </item>
  </channel>
</rss>
