<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Michael burry</title>
    <description>The latest articles on DEV Community by Michael burry (@michael_burry_00).</description>
    <link>https://dev.to/michael_burry_00</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3596068%2F666b448d-3078-43c0-aa1b-73566f94cbde.png</url>
      <title>DEV Community: Michael burry</title>
      <link>https://dev.to/michael_burry_00</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/michael_burry_00"/>
    <language>en</language>
    <item>
      <title>Software Testing Life Cycle (STLC): From Manual Chaos to Automated Confidence with Keploy</title>
      <dc:creator>Michael burry</dc:creator>
      <pubDate>Mon, 20 Apr 2026 16:31:42 +0000</pubDate>
      <link>https://dev.to/michael_burry_00/software-testing-life-cycle-stlc-from-manual-chaos-to-automated-confidence-with-keploy-3peg</link>
      <guid>https://dev.to/michael_burry_00/software-testing-life-cycle-stlc-from-manual-chaos-to-automated-confidence-with-keploy-3peg</guid>
      <description>&lt;p&gt;Shipping reliable software quickly is one of the hardest challenges in modern development. The Software Testing Life Cycle (STLC) provides a structured approach to ensure quality, but in many teams, it still relies heavily on manual effort.&lt;/p&gt;

&lt;p&gt;As engineering teams move toward faster releases and continuous delivery, the traditional execution of STLC often becomes a bottleneck instead of an enabler.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is STLC?
&lt;/h2&gt;

&lt;p&gt;The Software Testing Life Cycle is a sequence of phases designed to validate software quality. It ensures that applications meet both functional and non-functional requirements before reaching users.&lt;/p&gt;

&lt;p&gt;The key phases include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Requirement Analysis&lt;/li&gt;
&lt;li&gt;Test Planning&lt;/li&gt;
&lt;li&gt;Test Case Development&lt;/li&gt;
&lt;li&gt;Test Environment Setup&lt;/li&gt;
&lt;li&gt;Test Execution&lt;/li&gt;
&lt;li&gt;Test Closure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each phase plays a role in ensuring software reliability, but the way these phases are executed has evolved significantly.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem with Traditional STLC
&lt;/h2&gt;

&lt;p&gt;In many teams, STLC still looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test cases are written manually&lt;/li&gt;
&lt;li&gt;Test data is created artificially&lt;/li&gt;
&lt;li&gt;Integration testing is complex and fragile&lt;/li&gt;
&lt;li&gt;Feedback cycles are slow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This creates friction between development and testing, especially in fast-paced environments where releases happen frequently.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Developer-First Approach to STLC
&lt;/h2&gt;

&lt;p&gt;Modern teams are shifting toward automation-first workflows where developers take more ownership of testing. Instead of writing test cases from scratch, they rely on real system behavior to drive testing.&lt;/p&gt;

&lt;p&gt;This is where Keploy introduces a different approach.&lt;/p&gt;

&lt;p&gt;Keploy allows developers to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Capture real API interactions&lt;/li&gt;
&lt;li&gt;Automatically generate test cases&lt;/li&gt;
&lt;li&gt;Create mocks without manual setup&lt;/li&gt;
&lt;li&gt;Run tests as part of the development workflow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This reduces the need for manual intervention and aligns testing closely with actual application usage.&lt;/p&gt;
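
&lt;p&gt;As a rough sketch of the workflow (based on Keploy's documented CLI; the app start command here is a placeholder and exact flags may vary by version):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Record real API interactions while the app is exercised
keploy record -c "npm start"

# Replay the captured interactions later as a test suite
keploy test -c "npm start" --delay 10
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Outbound calls made during recording (databases, third-party APIs) are captured as mocks, so replay does not require live dependencies.&lt;/p&gt;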




&lt;h2&gt;
  
  
  How Keploy Enhances Each STLC Phase
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Requirement Analysis
&lt;/h3&gt;

&lt;p&gt;Instead of relying only on documentation, developers can observe real API traffic and understand how the system behaves in production-like scenarios.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test Planning
&lt;/h3&gt;

&lt;p&gt;Test coverage is derived from actual usage patterns, removing guesswork and improving relevance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test Case Development
&lt;/h3&gt;

&lt;p&gt;Test cases are automatically generated from captured interactions. This eliminates repetitive scripting and reduces human error.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test Environment Setup
&lt;/h3&gt;

&lt;p&gt;Dependencies are handled through auto-generated mocks, making environments more stable and easier to replicate.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test Execution
&lt;/h3&gt;

&lt;p&gt;Tests can run continuously within development pipelines, providing faster feedback.&lt;/p&gt;
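
&lt;p&gt;In practice this can be a single pipeline step that replays the recorded suite on every push (a hypothetical CI script; the start command and flags are placeholders and vary by setup):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# ci-test.sh -- replay recorded API tests; a non-zero exit fails the build
set -e
keploy test -c "npm start" --delay 10
&lt;/code&gt;&lt;/pre&gt;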

&lt;h3&gt;
  
  
  Test Closure
&lt;/h3&gt;

&lt;p&gt;Failures are easier to analyze because they are based on real-world scenarios rather than synthetic test data.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Approach Works
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Reduces manual effort in test creation&lt;/li&gt;
&lt;li&gt;Improves test coverage using real data&lt;/li&gt;
&lt;li&gt;Speeds up feedback cycles&lt;/li&gt;
&lt;li&gt;Aligns testing with developer workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of being treated as a separate phase, testing becomes an integrated part of development.&lt;/p&gt;




&lt;h2&gt;
  
  
  STLC in Modern Development
&lt;/h2&gt;

&lt;p&gt;The concept of STLC remains important, but its execution must adapt to current engineering practices. Teams that continue to rely on manual-heavy processes often struggle with speed and scalability.&lt;/p&gt;

&lt;p&gt;A developer-driven testing approach makes STLC more efficient by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automating repetitive tasks&lt;/li&gt;
&lt;li&gt;Reducing maintenance overhead&lt;/li&gt;
&lt;li&gt;Enabling faster iterations&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The Software Testing Life Cycle is not outdated, but the traditional way of implementing it is no longer sufficient for modern development.&lt;/p&gt;

&lt;p&gt;By adopting tools like Keploy, teams can transform STLC into a faster, more reliable, and developer-friendly process. This shift helps organizations maintain quality without slowing down innovation.&lt;/p&gt;

</description>
      <category>stlc</category>
      <category>startup</category>
      <category>ai</category>
      <category>programming</category>
    </item>
    <item>
      <title>Test Data Management: The Missing Piece in Scalable Test Automation</title>
      <dc:creator>Michael burry</dc:creator>
      <pubDate>Mon, 20 Apr 2026 16:28:03 +0000</pubDate>
      <link>https://dev.to/michael_burry_00/test-data-management-the-missing-piece-in-scalable-test-automation-2pih</link>
      <guid>https://dev.to/michael_burry_00/test-data-management-the-missing-piece-in-scalable-test-automation-2pih</guid>
      <description>&lt;p&gt;Test Data Management (TDM) is one of the most overlooked aspects of modern software testing. Teams invest heavily in automation frameworks, CI/CD pipelines, and tooling—but often ignore the quality and reliability of the data powering those tests.&lt;/p&gt;

&lt;p&gt;Without the right data, even well-written test cases fail to deliver consistent results.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is Test Data Management?
&lt;/h2&gt;

&lt;p&gt;Test Data Management is the process of creating, managing, and maintaining data used in testing environments. It ensures that test cases run against realistic, consistent, and compliant datasets.&lt;/p&gt;

&lt;p&gt;A strong TDM strategy helps teams:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create reliable test scenarios&lt;/li&gt;
&lt;li&gt;Maintain data privacy and compliance&lt;/li&gt;
&lt;li&gt;Reduce test flakiness&lt;/li&gt;
&lt;li&gt;Improve debugging efficiency&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a deeper breakdown, this guide on test data management provides a practical overview of tools and workflows:&lt;br&gt;
&lt;a href="https://keploy.io/blog/community/test-data-management" rel="noopener noreferrer"&gt;https://keploy.io/blog/community/test-data-management&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Common Challenges Teams Face
&lt;/h2&gt;

&lt;p&gt;Most teams struggle with similar issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tests depend on shared or unstable staging data&lt;/li&gt;
&lt;li&gt;Data gets overwritten between test runs&lt;/li&gt;
&lt;li&gt;Sensitive production data is reused unsafely&lt;/li&gt;
&lt;li&gt;Engineers manually create and maintain datasets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These problems lead to flaky tests, slower releases, and increased maintenance overhead.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Traditional Approaches Don’t Scale
&lt;/h2&gt;

&lt;p&gt;Traditional TDM solutions focus on generating or masking data, but they still rely heavily on manual effort.&lt;/p&gt;

&lt;p&gt;Key limitations include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;High maintenance cost for datasets&lt;/li&gt;
&lt;li&gt;Difficulty keeping data in sync with production&lt;/li&gt;
&lt;li&gt;Limited coverage of real-world edge cases&lt;/li&gt;
&lt;li&gt;Time-consuming setup for every new feature&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As systems grow more complex, these approaches become harder to sustain.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Shift Toward Data from Real Usage
&lt;/h2&gt;

&lt;p&gt;A more effective approach is to generate test data from actual application behavior instead of manually creating it.&lt;/p&gt;

&lt;p&gt;This is where modern tools like Keploy introduce a different model.&lt;/p&gt;

&lt;p&gt;Instead of relying on synthetic datasets, Keploy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Captures real API traffic&lt;/li&gt;
&lt;li&gt;Automatically generates test cases&lt;/li&gt;
&lt;li&gt;Creates mocks and stubs based on real interactions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means your test data is derived from real-world usage, not assumptions.&lt;/p&gt;
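
&lt;p&gt;Concretely, the test data is the traffic itself. A minimal session might look like this (a Keploy CLI sketch; the app command and endpoint are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Record around the running app
keploy record -c "npm start"

# Each real request becomes a test case plus the mocks it needs
curl -s http://localhost:8080/api/orders/123
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The recorded request and its observed response become the dataset, so there is no hand-authored fixture to keep in sync.&lt;/p&gt;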




&lt;h2&gt;
  
  
  How This Improves Test Data Management
&lt;/h2&gt;

&lt;p&gt;Using real traffic as a source of truth solves several core TDM issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Eliminates the need for manual data creation&lt;/li&gt;
&lt;li&gt;Reduces inconsistencies between environments&lt;/li&gt;
&lt;li&gt;Improves test coverage with realistic scenarios&lt;/li&gt;
&lt;li&gt;Keeps test data up-to-date automatically&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach also aligns better with modern CI/CD workflows, where speed and reliability are critical.&lt;/p&gt;




&lt;h2&gt;
  
  
  Best Practices for Effective TDM
&lt;/h2&gt;

&lt;p&gt;To build a scalable test data strategy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Avoid relying on shared mutable datasets&lt;/li&gt;
&lt;li&gt;Use production-like data patterns whenever possible&lt;/li&gt;
&lt;li&gt;Automate data provisioning and cleanup&lt;/li&gt;
&lt;li&gt;Minimize manual intervention in test setup&lt;/li&gt;
&lt;li&gt;Prefer tools that generate data from real usage&lt;/li&gt;
&lt;/ul&gt;
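
&lt;p&gt;One simple way to automate provisioning and cleanup is to give every test run its own throwaway database (a Docker-based sketch; the container name, port, and credentials are arbitrary):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Fresh PostgreSQL instance for this run only
docker run -d --name tdm-test-db -e POSTGRES_PASSWORD=test -p 5433:5432 postgres:16

# ... run the suite against localhost:5433 ...

# Tear it down so no state leaks into the next run
docker rm -f tdm-test-db
&lt;/code&gt;&lt;/pre&gt;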




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Test Data Management is not just a supporting function—it directly impacts the reliability and scalability of your testing strategy.&lt;/p&gt;

&lt;p&gt;If your tests are unstable or difficult to maintain, the issue often lies in how your data is managed.&lt;/p&gt;

&lt;p&gt;Moving toward automated, real-world data generation can significantly reduce effort while improving test quality. Tools like Keploy represent this shift by removing the dependency on manually created datasets and aligning testing closer to actual user behavior.&lt;/p&gt;

&lt;p&gt;For a detailed understanding and practical examples, refer to:&lt;br&gt;
&lt;a href="https://keploy.io/blog/community/test-data-management" rel="noopener noreferrer"&gt;https://keploy.io/blog/community/test-data-management&lt;/a&gt;&lt;/p&gt;

</description>
      <category>tdm</category>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
    </item>
    <item>
      <title>API Testing Strategies: Building Reliable and Scalable APIs</title>
      <dc:creator>Michael burry</dc:creator>
      <pubDate>Mon, 20 Apr 2026 16:22:36 +0000</pubDate>
      <link>https://dev.to/michael_burry_00/api-testing-strategies-building-reliable-and-scalable-apis-44ln</link>
      <guid>https://dev.to/michael_burry_00/api-testing-strategies-building-reliable-and-scalable-apis-44ln</guid>
      <description>&lt;p&gt;In today’s API-first world, applications depend heavily on seamless communication between services. A single failing endpoint can break entire workflows. That’s why strong API testing strategies are essential for building reliable and scalable systems.&lt;/p&gt;

&lt;p&gt;This article explores practical strategies developers can implement to ensure high-quality APIs.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is an API Testing Strategy?
&lt;/h2&gt;

&lt;p&gt;An &lt;a href="https://keploy.io/blog/community/api-testing-strategies" rel="noopener noreferrer"&gt;API testing strategy&lt;/a&gt; is a structured approach to validate API functionality, performance, security, and reliability. It ensures that APIs behave correctly under different conditions and continue to perform as expected throughout their lifecycle.&lt;/p&gt;

&lt;p&gt;Unlike UI testing, API testing focuses on backend logic and data exchange, making it faster, more stable, and easier to automate.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why API Testing Matters
&lt;/h2&gt;

&lt;p&gt;APIs act as the backbone of modern applications. Without proper testing, even minor issues can lead to major failures.&lt;/p&gt;

&lt;p&gt;Effective API testing helps to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure accurate data exchange&lt;/li&gt;
&lt;li&gt;Detect bugs early in development&lt;/li&gt;
&lt;li&gt;Improve system performance&lt;/li&gt;
&lt;li&gt;Strengthen security&lt;/li&gt;
&lt;li&gt;Enable faster and safer deployments&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Core API Testing Strategies
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Start with Clear API Specifications
&lt;/h3&gt;

&lt;p&gt;Before writing tests, fully understand the API contract. This includes request formats, response structures, authentication methods, and error handling.&lt;/p&gt;

&lt;p&gt;Clear specifications prevent incorrect assumptions and improve test accuracy.&lt;/p&gt;




&lt;h3&gt;
  
  
  2. Test Across Multiple Layers
&lt;/h3&gt;

&lt;p&gt;A strong API testing approach includes different types of testing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Functional Testing – Ensures APIs behave as expected&lt;/li&gt;
&lt;li&gt;Integration Testing – Validates interaction between services&lt;/li&gt;
&lt;li&gt;Performance Testing – Measures speed and scalability&lt;/li&gt;
&lt;li&gt;Security Testing – Identifies vulnerabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Testing across layers ensures complete system coverage.&lt;/p&gt;




&lt;h3&gt;
  
  
  3. Automate API Testing
&lt;/h3&gt;

&lt;p&gt;Manual testing cannot keep up with modern development speed. Automation allows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Faster test execution&lt;/li&gt;
&lt;li&gt;Consistent validation&lt;/li&gt;
&lt;li&gt;Easy integration into CI/CD pipelines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Automated tests ensure reliability with every code change.&lt;/p&gt;




&lt;h3&gt;
  
  
  4. Validate More Than Status Codes
&lt;/h3&gt;

&lt;p&gt;Checking only HTTP status codes is not enough.&lt;/p&gt;

&lt;p&gt;A robust strategy includes validating:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Response payload (JSON/XML)&lt;/li&gt;
&lt;li&gt;Data correctness&lt;/li&gt;
&lt;li&gt;Headers and metadata&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Deep validation ensures APIs return meaningful and accurate data.&lt;/p&gt;
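
&lt;p&gt;For instance, a shell check can assert on the payload and headers as well as the code (hypothetical endpoint and fields; assumes &lt;code&gt;curl&lt;/code&gt; and &lt;code&gt;jq&lt;/code&gt; are available):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# The status code alone says the request worked, not that the data is right
status=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/api/users/42)
test "$status" = "200"

# Validate the payload: correct id, email actually present
curl -s http://localhost:8080/api/users/42 | jq -e '.id == 42 and .email != null'

# Validate headers: the response really is JSON
curl -s -D - -o /dev/null http://localhost:8080/api/users/42 | grep -qi "content-type: application/json"
&lt;/code&gt;&lt;/pre&gt;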




&lt;h3&gt;
  
  
  5. Test Edge Cases and Negative Scenarios
&lt;/h3&gt;

&lt;p&gt;Most failures happen in unexpected situations, not ideal ones.&lt;/p&gt;

&lt;p&gt;Make sure to test:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Invalid inputs&lt;/li&gt;
&lt;li&gt;Missing parameters&lt;/li&gt;
&lt;li&gt;Unauthorized access&lt;/li&gt;
&lt;li&gt;Large or malformed payloads&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Strong error handling improves API resilience.&lt;/p&gt;
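
&lt;p&gt;Negative scenarios are just as cheap to script. This sketch assumes a hypothetical endpoint that should reject unauthenticated access with a 401 and malformed JSON with a 400:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Missing auth token: expect rejection, not data
status=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/api/admin/users)
test "$status" = "401"

# Malformed payload: expect a clean 400, not a 500 crash
status=$(curl -s -o /dev/null -w "%{http_code}" -X POST -H "Content-Type: application/json" -d '{bad json' http://localhost:8080/api/users)
test "$status" = "400"
&lt;/code&gt;&lt;/pre&gt;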




&lt;h3&gt;
  
  
  6. Shift Testing Left
&lt;/h3&gt;

&lt;p&gt;Testing should begin early in the development lifecycle.&lt;/p&gt;

&lt;p&gt;Integrate API testing into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Development workflows&lt;/li&gt;
&lt;li&gt;CI/CD pipelines&lt;/li&gt;
&lt;li&gt;Pre-deployment checks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Early testing reduces bugs, saves time, and lowers costs.&lt;/p&gt;




&lt;h3&gt;
  
  
  7. Use Realistic Test Environments
&lt;/h3&gt;

&lt;p&gt;Testing APIs in environments similar to production ensures accurate results.&lt;/p&gt;

&lt;p&gt;Benefits include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-world behavior simulation&lt;/li&gt;
&lt;li&gt;Reduced deployment risks&lt;/li&gt;
&lt;li&gt;Reliable performance insights&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  8. Monitor APIs Continuously
&lt;/h3&gt;

&lt;p&gt;API testing doesn’t end after deployment.&lt;/p&gt;

&lt;p&gt;Continuous monitoring helps track:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Response times&lt;/li&gt;
&lt;li&gt;Error rates&lt;/li&gt;
&lt;li&gt;System performance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensures ongoing reliability and quick issue detection.&lt;/p&gt;




&lt;h2&gt;
  
  
  Advanced Testing Approaches
&lt;/h2&gt;

&lt;p&gt;To make testing more structured, teams often layer in approaches that go beyond basic functional checks: contract testing to validate compatibility between services, fault injection to exercise error handling, and load testing to confirm reliability under stress.&lt;/p&gt;

&lt;p&gt;These approaches ensure that no critical aspect of API testing is overlooked.&lt;/p&gt;




&lt;h2&gt;
  
  
  Common Mistakes to Avoid
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Testing only happy paths&lt;/li&gt;
&lt;li&gt;Ignoring edge cases&lt;/li&gt;
&lt;li&gt;Lack of automation&lt;/li&gt;
&lt;li&gt;Skipping security checks&lt;/li&gt;
&lt;li&gt;Poor test data management&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Avoiding these mistakes significantly improves API quality.&lt;/p&gt;




&lt;h2&gt;
  
  
  Modern API Testing with Automation
&lt;/h2&gt;

&lt;p&gt;Modern development teams are adopting automated API testing as a core part of their workflow. It enables faster releases, continuous validation, and better collaboration between teams.&lt;/p&gt;

&lt;p&gt;Tools like Keploy help generate test cases from real API traffic, reducing manual effort and improving test coverage. This makes it easier to maintain reliable APIs without slowing down development.&lt;/p&gt;

&lt;p&gt;To explore a deeper breakdown of these strategies, check out the detailed guide on API testing strategies on Keploy’s blog.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;API testing is no longer optional—it’s a critical part of modern software development. A well-defined API testing strategy ensures your applications remain stable, secure, and scalable as they grow.&lt;/p&gt;

&lt;p&gt;Investing in the right strategies today will save time, reduce bugs, and improve user experience in the long run.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>api</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>I Broke Prod 3 Times — Here's How Proper Retesting Would Have Saved Us</title>
      <dc:creator>Michael burry</dc:creator>
      <pubDate>Wed, 08 Apr 2026 12:56:02 +0000</pubDate>
      <link>https://dev.to/michael_burry_00/i-broke-prod-3-times-heres-how-proper-retesting-would-have-saved-us-hk9</link>
      <guid>https://dev.to/michael_burry_00/i-broke-prod-3-times-heres-how-proper-retesting-would-have-saved-us-hk9</guid>
      <description>&lt;p&gt;I've been in software for eight years. I've survived death marches, a startup pivot that rewrote half the codebase in six weeks, and a migration to microservices that nobody fully understood until it was already in production.&lt;/p&gt;

&lt;p&gt;But the three incidents I think about most aren't the big architectural disasters. They're the ones that started with a developer — sometimes me — saying: &lt;em&gt;"It's just a small fix. We already tested this. Ship it."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is the story of those three incidents, what actually went wrong, and how a proper retesting protocol would have stopped each one before it became a 2 AM Slack storm.&lt;/p&gt;

&lt;p&gt;If you want the structured playbook, here's a solid &lt;a href="https://keploy.io/blog/community/retesting-in-software-testing" rel="noopener noreferrer"&gt;retesting guide&lt;/a&gt; to bookmark. But if you want the human version — the version with the panic and the postmortems and the lessons that actually stuck — keep reading.&lt;/p&gt;




&lt;h2&gt;
  
  
  Incident #1: The "One-Line Fix" That Took Down Checkout
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What happened
&lt;/h3&gt;

&lt;p&gt;It was a Tuesday afternoon. A bug had been sitting in our backlog for two sprints — a minor formatting issue in how we displayed discount codes at checkout. Wrong case, nothing functional, just cosmetic. The ticket had been deprioritized twice because it wasn't affecting conversions.&lt;/p&gt;

&lt;p&gt;Then a customer-facing exec noticed it during a demo and suddenly it was P1.&lt;/p&gt;

&lt;p&gt;Our developer found the fix in about four minutes. Literally one line — a &lt;code&gt;.toLowerCase()&lt;/code&gt; call on the coupon input field. She tested it locally, it looked great, and we pushed it to production through our fast-track deploy process (which existed specifically for "low-risk" cosmetic fixes).&lt;/p&gt;

&lt;p&gt;Within 20 minutes, our error monitoring lit up. Checkout was failing for anyone who had a coupon applied.&lt;/p&gt;

&lt;p&gt;The root cause: our coupon validation logic upstream was case-sensitive. It expected codes in uppercase. The &lt;code&gt;.toLowerCase()&lt;/code&gt; fix made the UI display correctly, but broke the validation handshake. Coupons that were valid were now being rejected as invalid. Customers were losing discounts in the middle of checkout and abandoning.&lt;/p&gt;

&lt;p&gt;We rolled back in 40 minutes. The incident window was about an hour.&lt;/p&gt;

&lt;h3&gt;
  
  
  What proper retesting would have caught
&lt;/h3&gt;

&lt;p&gt;The fix was never tested against the full checkout flow — only the display behavior. A proper retest would have included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Boundary testing:&lt;/strong&gt; What happens when a valid uppercase coupon is entered after this change?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration verification:&lt;/strong&gt; Does the front-end input still communicate correctly with the validation service?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;End-to-end scenario:&lt;/strong&gt; Complete a checkout with a coupon applied.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The fix was cosmetic on the surface but touched an input field with downstream dependencies. Retesting only the visual output while ignoring the functional chain is how one-line fixes become one-hour outages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson:&lt;/strong&gt; There is no such thing as a "cosmetic" fix that touches user input. The blast radius of any change to an input field includes everything downstream of that field.&lt;/p&gt;




&lt;h2&gt;
  
  
  Incident #2: The Regression Nobody Ran
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What happened
&lt;/h3&gt;

&lt;p&gt;Six months later, different team, same pattern.&lt;/p&gt;

&lt;p&gt;We had a nasty bug in our notifications service — users weren't receiving email confirmations for certain account actions. It had been reported by a handful of users, confirmed by QA, and assigned to a senior engineer who tracked it to a race condition in our async job queue.&lt;/p&gt;

&lt;p&gt;The fix was genuinely complex. It took three days, two code reviews, and a solid round of unit testing before it was merged. QA verified the specific scenario from the bug report — the exact action that triggered the race condition — and it passed cleanly. Ticket closed. Sprint closed. Everyone went home.&lt;/p&gt;

&lt;p&gt;The following Monday we discovered that password reset emails had stopped working entirely.&lt;/p&gt;

&lt;p&gt;The notifications service powered both flows. The fix had resolved the race condition for account confirmations by changing how jobs were enqueued — but that change had altered behavior for the password reset flow in a way nobody had mapped out. Password reset emails had been silently failing since Friday's deploy.&lt;/p&gt;

&lt;p&gt;We caught it because a new employee tried to reset their password on their first day and got nothing. Not exactly the onboarding experience we aimed for.&lt;/p&gt;

&lt;h3&gt;
  
  
  What proper retesting would have caught
&lt;/h3&gt;

&lt;p&gt;The QA engineer verified the bug report scenario. Nobody ran a broader regression on the notifications service.&lt;/p&gt;

&lt;p&gt;What was missing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Component-level regression:&lt;/strong&gt; After fixing the queue logic, every feature that uses the notifications service should have been retested — not just the broken one.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependency mapping:&lt;/strong&gt; A quick audit of "what else calls this service?" before closing the ticket.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smoke test in staging:&lt;/strong&gt; A post-deploy smoke test covering core user flows (including password reset) would have surfaced this within minutes of Friday's deploy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The unit tests were thorough for the race condition. But unit tests don't catch integration-level regressions. The component was fixed; the system was broken.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson:&lt;/strong&gt; Retesting a bug fix means retesting the component, not just the scenario. Map your dependencies before you close the ticket.&lt;/p&gt;




&lt;h2&gt;
  
  
  Incident #3: We Tested in the Wrong Environment
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What happened
&lt;/h3&gt;

&lt;p&gt;This one is the most embarrassing, because by this point we had a retesting checklist. We had learned from the previous incidents. We were doing the thing.&lt;/p&gt;

&lt;p&gt;Except we weren't doing the thing in the right place.&lt;/p&gt;

&lt;p&gt;A bug had been reported where users on a specific legacy plan tier were getting incorrect pricing displayed on their dashboard. The pricing logic was in a configuration service that read from a database table. A developer found the issue — a missing condition in a query — fixed it, and QA tested it thoroughly in our staging environment. All plan tiers displayed correctly. Ticket verified. Deployed to production Friday afternoon.&lt;/p&gt;

&lt;p&gt;By Saturday morning, we had support tickets from enterprise customers — not the legacy tier, but our highest-value accounts — saying their pricing looked wrong.&lt;/p&gt;

&lt;p&gt;What had happened: our staging database was months out of date. Enterprise plan configurations that existed in production didn't exist in staging. The query fix was correct, but it had an unintended side effect on plan types that our staging data didn't include. We tested correctly in an environment that didn't reflect reality.&lt;/p&gt;

&lt;p&gt;The fix itself was straightforward, but repairing the trust of those enterprise customers took weeks.&lt;/p&gt;

&lt;h3&gt;
  
  
  What proper retesting would have caught
&lt;/h3&gt;

&lt;p&gt;The retesting process was sound. The environment was the problem.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Production-parity staging:&lt;/strong&gt; Our staging database needed to be refreshed with anonymized production data regularly — especially before testing anything that touches pricing or plan configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Edge case data coverage:&lt;/strong&gt; Any fix that touches multi-tier logic should be tested against a representative sample of all active configurations, not just the ones that happen to exist in staging.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pre-deploy validation gate:&lt;/strong&gt; A quick sanity check in a production-like environment before any pricing-related deploy, full stop.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We had a checklist. The checklist didn't include "verify the environment reflects production data." It does now.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson:&lt;/strong&gt; A perfect retesting process in an imperfect environment is still a broken process. Environment parity is not a DevOps nicety — it's a testing prerequisite.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Pattern Across All Three
&lt;/h2&gt;

&lt;p&gt;Looking back at these incidents, the surface-level causes are different — wrong scope, missed dependencies, wrong environment. But they all share the same root: &lt;strong&gt;we treated retesting as confirmation of the fix, not as verification of the system.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The developer fixed what was broken. QA confirmed it was fixed. Nobody asked: &lt;em&gt;what else could this have changed?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That question — "what else?" — is the difference between retesting and real retesting.&lt;/p&gt;

&lt;p&gt;Here's the mental model that changed how our team thinks about this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;A bug fix is a delta. Retesting is the process of understanding the full impact of that delta — not just the intended impact.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Every fix has:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The intended effect (the bug is gone).&lt;/li&gt;
&lt;li&gt;The potential unintended effects (what else the change touches).&lt;/li&gt;
&lt;li&gt;The environmental assumptions (does this hold in production, not just staging?).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Good retesting covers all three.&lt;/p&gt;




&lt;h2&gt;
  
  
  What We Changed After Incident #3
&lt;/h2&gt;

&lt;p&gt;After the third incident, we stopped treating retesting as a QA-phase activity and started treating it as a shared engineering responsibility. Here's what actually changed in our process:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developers now write regression tests as part of bug fixes.&lt;/strong&gt; Not a separate story, not a future sprint item — part of the same PR. If you fixed it, you prove it with a test that would have caught it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bug tickets now require a dependency field.&lt;/strong&gt; Before a fix goes to QA, the developer lists every component, service, or data model the fix touches. QA uses that list to scope the retest.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Staging data is refreshed before any pricing, billing, or configuration change.&lt;/strong&gt; Non-negotiable gate in our deploy checklist.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We run a smoke test suite on every production deploy.&lt;/strong&gt; Ten minutes, covers our twenty most critical user flows. It's caught three would-be incidents in the eight months since we introduced it.&lt;/p&gt;
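
&lt;p&gt;The suite itself is nothing fancy. Conceptually it is a loop over critical endpoints (a simplified sketch with placeholder paths and domain, not our actual script):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# smoke.sh -- run after every production deploy; any failure aborts
set -e
for path in /health /login /checkout /api/pricing; do
  status=$(curl -s -o /dev/null -w "%{http_code}" "https://example.com$path")
  test "$status" = "200"
done
&lt;/code&gt;&lt;/pre&gt;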

&lt;p&gt;None of this is revolutionary. It's the stuff every retesting guide recommends. The difference is that now we actually do it, because we remember what it felt like when we didn't.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Uncomfortable Truth About "Fast" Teams
&lt;/h2&gt;

&lt;p&gt;Here's the thing nobody says out loud: the pressure to skip retesting almost always comes from the top. Developers and QA engineers generally know when a fix needs more testing. They feel it. But when a manager is asking why a ticket isn't closed, or when a sprint is ending and the board needs to be cleared, the path of least resistance is to mark it done and hope.&lt;/p&gt;

&lt;p&gt;That hope is expensive. An hour of proper retesting costs an engineer an hour. An incident costs engineering hours, support hours, customer trust, and sometimes revenue.&lt;/p&gt;

&lt;p&gt;The math is not complicated. The organizational will to do the math is.&lt;/p&gt;

&lt;p&gt;If you're a team lead or an engineering manager reading this: the single most effective thing you can do for your production stability is to give your QA team explicit permission to slow down and retest properly. Make it a cultural norm that reopening a ticket for insufficient testing is a sign of diligence, not failure.&lt;/p&gt;

&lt;p&gt;The alternative is finding out at 2 AM.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If your team is building a retesting process from scratch or tightening up an existing one, this &lt;a href="https://keploy.io/blog/community/retesting-in-software-testing" rel="noopener noreferrer"&gt;retesting guide&lt;/a&gt; is worth the read. Fewer war stories, more frameworks — but the lessons rhyme.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>devops</category>
      <category>agile</category>
      <category>software</category>
    </item>
    <item>
      <title>How I Set Up Integration Tests for a Node.js + PostgreSQL App (with Zero Flakiness)</title>
      <dc:creator>Michael burry</dc:creator>
      <pubDate>Wed, 08 Apr 2026 10:31:55 +0000</pubDate>
      <link>https://dev.to/michael_burry_00/how-i-set-up-integration-tests-for-a-nodejs-postgresql-app-with-zero-flakiness-23k6</link>
      <guid>https://dev.to/michael_burry_00/how-i-set-up-integration-tests-for-a-nodejs-postgresql-app-with-zero-flakiness-23k6</guid>
      <description>&lt;p&gt;I spent three weeks being haunted by a test suite that passed locally and failed in CI. Not sometimes — randomly. A different test each time. No stack trace that made sense. Pure chaos.&lt;/p&gt;

&lt;p&gt;After way too much coffee and one very long Saturday, I figured out the root cause: my integration tests were sharing database state, spinning up connections that weren't being closed, and relying on mock data that didn't reflect how PostgreSQL actually behaves.&lt;/p&gt;

&lt;p&gt;This is the guide I wish I had back then. By the end, you'll have a Node.js + PostgreSQL integration test setup that is isolated, fast, deterministic, and doesn't randomly implode in your CI pipeline.&lt;/p&gt;

&lt;p&gt;Let's build it from scratch.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We're Actually Testing
&lt;/h2&gt;

&lt;p&gt;Before we write a single line of code, let's be clear about what integration testing means in this context.&lt;/p&gt;

&lt;p&gt;Unit tests check a function in isolation — you mock the database, mock the HTTP client, mock everything. Integration tests check that your code works with &lt;strong&gt;real dependencies&lt;/strong&gt;. That means a real PostgreSQL instance, real queries, real connection pooling behavior.&lt;/p&gt;

&lt;p&gt;The problem most people run into: they treat integration tests like unit tests. They share a single DB connection across test files. They don't clean up between tests. They hardcode ports. Then they wonder why the tests are flaky.&lt;/p&gt;

&lt;p&gt;Here's the stack we'll use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Node.js&lt;/strong&gt; (Express API)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt; (via &lt;code&gt;pg&lt;/code&gt; pool)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Jest&lt;/strong&gt; (test runner)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testcontainers&lt;/strong&gt; (spins up a real Postgres Docker container per test suite)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supertest&lt;/strong&gt; (HTTP assertion)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my-app/
├── src/
│   ├── app.js           # Express app
│   ├── db.js            # DB connection pool
│   └── routes/
│       └── users.js     # User routes
├── tests/
│   └── integration/
│       ├── setup.js     # Test DB setup/teardown
│       └── users.test.js
├── package.json
└── jest.config.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 1 — Install Dependencies
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;express pg
npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--save-dev&lt;/span&gt; jest supertest testcontainers @testcontainers/postgresql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why Testcontainers? Because it spins up a &lt;strong&gt;real, isolated PostgreSQL instance&lt;/strong&gt; inside Docker for each test suite, then tears it down when done. No shared state. No "but it works on my machine." Every test run starts clean.&lt;/p&gt;

&lt;p&gt;The only prerequisite: Docker must be running on your machine and in CI.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 2 — The App We're Testing
&lt;/h2&gt;

&lt;p&gt;Keep it simple. A users API with two endpoints — create a user and fetch all users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;src/db.js&lt;/code&gt;&lt;/strong&gt; — connection pool factory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Pool&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;pg&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getPool&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;pool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Pool&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DB_HOST&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;localhost&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;parseInt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DB_PORT&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;5432&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DB_NAME&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;myapp&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DB_USER&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;postgres&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DB_PASSWORD&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;postgres&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;max&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;idleTimeoutMillis&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;30000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;closePool&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;end&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nx"&gt;pool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;getPool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;closePool&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the &lt;code&gt;closePool()&lt;/code&gt; function. This is not optional. If you don't close the pool at the end of your tests, Jest hangs forever because open DB connections keep the Node process alive.&lt;/p&gt;
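&lt;p&gt;The lazy-singleton-plus-reset pattern is easy to verify on its own. Here is a toy version with a fake pool object standing in for &lt;code&gt;pg.Pool&lt;/code&gt;, to show why &lt;code&gt;closePool()&lt;/code&gt; must also null the reference: the next &lt;code&gt;getPool()&lt;/code&gt; call has to re-read the environment variables that the test setup will overwrite.&lt;/p&gt;

```javascript
// Toy version of the db.js pattern using a fake pool instead of pg.Pool,
// to show why closePool() must reset the singleton to null.
let pool = null;
let created = 0;

function getPool() {
  if (!pool) {
    created += 1;
    pool = {
      id: created,
      host: process.env.DB_HOST || 'localhost',
      ended: false,
      end: async function () { this.ended = true; },
    };
  }
  return pool;
}

async function closePool() {
  if (pool) {
    await pool.end(); // release connections so Node can exit
    pool = null;      // next getPool() re-reads env vars
  }
}
```

&lt;p&gt;Without the &lt;code&gt;pool = null&lt;/code&gt; line, a second suite would silently reuse the first container's connection settings.&lt;/p&gt;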

&lt;p&gt;&lt;strong&gt;&lt;code&gt;src/routes/users.js&lt;/code&gt;&lt;/strong&gt; — user routes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;getPool&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../db&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;router&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Router&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// GET /users — fetch all users&lt;/span&gt;
&lt;span class="nx"&gt;router&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getPool&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SELECT id, name, email, created_at FROM users ORDER BY created_at DESC&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Error fetching users:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Internal server error&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// POST /users — create a user&lt;/span&gt;
&lt;span class="nx"&gt;router&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;email&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;name and email are required&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// Basic email format check&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;emailRegex&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sr"&gt;/^&lt;/span&gt;&lt;span class="se"&gt;[^\s&lt;/span&gt;&lt;span class="sr"&gt;@&lt;/span&gt;&lt;span class="se"&gt;]&lt;/span&gt;&lt;span class="sr"&gt;+@&lt;/span&gt;&lt;span class="se"&gt;[^\s&lt;/span&gt;&lt;span class="sr"&gt;@&lt;/span&gt;&lt;span class="se"&gt;]&lt;/span&gt;&lt;span class="sr"&gt;+&lt;/span&gt;&lt;span class="se"&gt;\.[^\s&lt;/span&gt;&lt;span class="sr"&gt;@&lt;/span&gt;&lt;span class="se"&gt;]&lt;/span&gt;&lt;span class="sr"&gt;+$/&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;emailRegex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Invalid email format&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getPool&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;INSERT INTO users (name, email) VALUES ($1, $2) RETURNING id, name, email, created_at&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;201&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// PostgreSQL unique constraint violation&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;code&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;23505&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;409&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Email already exists&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Error creating user:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Internal server error&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;router&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;code&gt;src/app.js&lt;/code&gt;&lt;/strong&gt; — Express app (exported so Supertest can use it without starting a server):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;usersRouter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./routes/users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;usersRouter&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 3 — The Integration Test Setup
&lt;/h2&gt;

&lt;p&gt;This is the most important file. The &lt;code&gt;setup.js&lt;/code&gt; handles spinning up PostgreSQL in Docker, running your schema migrations, setting environment variables so the app connects to the test DB, and tearing everything down afterward.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;tests/integration/setup.js&lt;/code&gt;&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;PostgreSqlContainer&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@testcontainers/postgresql&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Pool&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;pg&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;setupTestDatabase&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Spin up a real PostgreSQL instance in Docker&lt;/span&gt;
  &lt;span class="c1"&gt;// Each test suite gets its own isolated database&lt;/span&gt;
  &lt;span class="nx"&gt;container&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PostgreSqlContainer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;postgres:15-alpine&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;withDatabase&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;testdb&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;withUsername&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;testuser&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;withPassword&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;testpass&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="c1"&gt;// Point the app to this container&lt;/span&gt;
  &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DB_HOST&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getHost&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DB_PORT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getMappedPort&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5432&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DB_NAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getDatabase&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DB_USER&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getUsername&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DB_PASSWORD&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getPassword&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="c1"&gt;// Create a pool directly to run migrations&lt;/span&gt;
  &lt;span class="nx"&gt;pool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Pool&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getHost&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getMappedPort&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5432&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getDatabase&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getUsername&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getPassword&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="c1"&gt;// Run schema — in production you'd use a migration tool like Flyway or node-pg-migrate&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`
    CREATE TABLE IF NOT EXISTS users (
      id SERIAL PRIMARY KEY,
      name VARCHAR(255) NOT NULL,
      email VARCHAR(255) NOT NULL UNIQUE,
      created_at TIMESTAMP DEFAULT NOW()
    )
  `&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;teardownTestDatabase&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;end&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stop&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Wipe all rows between tests — faster than dropping/recreating tables&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;clearDatabase&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;TRUNCATE TABLE users RESTART IDENTITY CASCADE&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;setupTestDatabase&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;teardownTestDatabase&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;clearDatabase&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why &lt;code&gt;TRUNCATE ... RESTART IDENTITY CASCADE&lt;/code&gt; instead of &lt;code&gt;DELETE FROM&lt;/code&gt;?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;TRUNCATE&lt;/code&gt; is much faster than &lt;code&gt;DELETE&lt;/code&gt; on large tables and resets the auto-increment sequence, so your &lt;code&gt;id&lt;/code&gt; values are predictable (&lt;code&gt;1, 2, 3...&lt;/code&gt;) across tests. &lt;code&gt;CASCADE&lt;/code&gt; also truncates any tables whose foreign keys reference &lt;code&gt;users&lt;/code&gt;, so referential integrity never blocks the wipe.&lt;/p&gt;
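The single-table `clearDatabase` above is all this schema needs, but once more tables appear it helps to reset them in one statement. A minimal sketch — the `buildTruncateSql` helper name is hypothetical, not part of the article's code:

```javascript
// Hypothetical helper: build one TRUNCATE statement covering several tables,
// so a single round trip wipes them all and resets their sequences.
function buildTruncateSql(tables) {
  if (!Array.isArray(tables) || tables.length === 0) {
    throw new Error('expected a non-empty array of table names');
  }
  return `TRUNCATE TABLE ${tables.join(', ')} RESTART IDENTITY CASCADE`;
}

// A multi-table clearDatabase could then be written as:
// await pool.query(buildTruncateSql(['users', 'orders']));
```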




&lt;h2&gt;
  
  
  Step 4 — Writing the Integration Tests
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;tests/integration/users.test.js&lt;/code&gt;&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;supertest&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../src/app&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;closePool&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../src/db&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;setupTestDatabase&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;teardownTestDatabase&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;clearDatabase&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./setup&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nf"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Users API — Integration Tests&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

  &lt;span class="c1"&gt;// Runs once before all tests in this file&lt;/span&gt;
  &lt;span class="nf"&gt;beforeAll&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;setupTestDatabase&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;60000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// 60s timeout — Docker pull can take a moment first run&lt;/span&gt;

  &lt;span class="c1"&gt;// Runs once after all tests complete&lt;/span&gt;
  &lt;span class="nf"&gt;afterAll&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;closePool&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;          &lt;span class="c1"&gt;// Close the app's connection pool&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;teardownTestDatabase&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="c1"&gt;// Stop the Docker container&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="c1"&gt;// Runs before each individual test — wipes DB state&lt;/span&gt;
  &lt;span class="nf"&gt;beforeEach&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;clearDatabase&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="c1"&gt;// ─── GET /users ──────────────────────────────────────────────&lt;/span&gt;

  &lt;span class="nf"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;GET /users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;returns an empty array when no users exist&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

      &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toEqual&lt;/span&gt;&lt;span class="p"&gt;([]);&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="nf"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;returns all users ordered by created_at descending&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// Seed two users directly into the DB&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Alice&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;alice@example.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Bob&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;bob@example.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

      &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toHaveLength&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="c1"&gt;// Bob was created last, should appear first&lt;/span&gt;
      &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Bob&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Alice&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="c1"&gt;// ─── POST /users ─────────────────────────────────────────────&lt;/span&gt;

  &lt;span class="nf"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;POST /users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;creates a user and returns 201 with the created record&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Charlie&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;charlie@example.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

      &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;201&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toMatchObject&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;any&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Charlie&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;charlie@example.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;created_at&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;any&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="nf"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;returns 400 when name is missing&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;noname@example.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

      &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toMatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/name/i&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="nf"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;returns 400 when email format is invalid&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Dave&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;not-an-email&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

      &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toMatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/email/i&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="nf"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;returns 409 when email already exists — tests real DB unique constraint&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// First insert succeeds&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Eve&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;eve@example.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

      &lt;span class="c1"&gt;// Second insert with same email hits PostgreSQL unique constraint&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Eve Again&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;eve@example.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

      &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;409&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toMatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/already exists/i&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the 409 test — this is exactly the kind of thing a unit test with mocks &lt;strong&gt;cannot&lt;/strong&gt; catch reliably. The unique constraint lives in PostgreSQL. You either test against a real database or you're guessing.&lt;/p&gt;
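For context, here is one way a route handler might translate that constraint violation into a 409. This is a sketch, not the article's actual handler; the `mapDbErrorToHttp` name is hypothetical. node-postgres exposes PostgreSQL's SQLSTATE on `err.code`, and `23505` is `unique_violation`:

```javascript
// Illustrative sketch: translate a node-postgres error into an HTTP status.
// PostgreSQL reports duplicate keys with SQLSTATE 23505 (unique_violation),
// which node-postgres surfaces as err.code.
function mapDbErrorToHttp(err) {
  if (err && err.code === '23505') {
    return { status: 409, body: { error: 'Email already exists' } };
  }
  return { status: 500, body: { error: 'Internal server error' } };
}

// Inside an Express route this would look roughly like:
// } catch (err) {
//   const { status, body } = mapDbErrorToHttp(err);
//   res.status(status).json(body);
// }
```

The integration test exercises the whole chain: the real constraint fires in PostgreSQL, the driver surfaces the code, and the handler maps it.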




&lt;h2&gt;
  
  
  Step 5 — Jest Config
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;jest.config.js&lt;/code&gt;&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;testEnvironment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;node&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;testMatch&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;**/tests/integration/**/*.test.js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;testTimeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Docker container startup&lt;/span&gt;
  &lt;span class="na"&gt;maxWorkers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;      &lt;span class="c1"&gt;// Run test files sequentially — prevents port conflicts&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;maxWorkers: 1&lt;/code&gt; setting is important. Each test file spins up its own Docker container, which is already isolated. Running files in parallel can exhaust Docker resources and cause unpredictable failures — exactly the kind of flakiness we're trying to eliminate.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 6 — Running the Tests
&lt;/h2&gt;

&lt;p&gt;Add scripts to &lt;code&gt;package.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"test:integration"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"jest --config jest.config.js"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"test:integration:watch"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"jest --config jest.config.js --watch"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm run &lt;span class="nb"&gt;test&lt;/span&gt;:integration
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first run pulls the &lt;code&gt;postgres:15-alpine&lt;/code&gt; image from Docker Hub, which takes 30–60 seconds. Every run after that uses the cached image and starts in about 3–5 seconds.&lt;/p&gt;

&lt;p&gt;Expected output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt; PASS  tests/integration/users.test.js
  Users API — Integration Tests
    GET /users
      ✓ returns an empty array when no users exist (48ms)
      ✓ returns all users ordered by created_at descending (61ms)
    POST /users
      ✓ creates a user and returns 201 with the created record (42ms)
      ✓ returns 400 when name is missing (12ms)
      ✓ returns 400 when email format is invalid (11ms)
      ✓ returns 409 when email already exists (39ms)

Test Suites: 1 passed, 1 total
Tests:       6 passed, 6 total
Time:        8.3s
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 7 — CI/CD with GitHub Actions
&lt;/h2&gt;

&lt;p&gt;The setup above works locally. Here's how to make it work in GitHub Actions — no extra configuration needed since Testcontainers handles Docker automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;.github/workflows/integration-tests.yml&lt;/code&gt;&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Integration Tests&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;main&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;develop&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;main&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;integration-tests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up Node.js&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-node@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;node-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;20'&lt;/span&gt;
          &lt;span class="na"&gt;cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;npm'&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install dependencies&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm ci&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run integration tests&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm run test:integration&lt;/span&gt;
        &lt;span class="c1"&gt;# No need to manually start Postgres — Testcontainers handles it&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. GitHub Actions runners have Docker installed by default. Testcontainers detects it automatically.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Three Rules That Eliminated My Flakiness
&lt;/h2&gt;

&lt;p&gt;Looking back, every flaky test I ever had came from violating one of these:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rule 1 — Never share state between tests.&lt;/strong&gt; Use &lt;code&gt;beforeEach&lt;/code&gt; with &lt;code&gt;TRUNCATE&lt;/code&gt; to reset. A test that passes because the previous test seeded data is a test that will randomly fail when you reorder files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rule 2 — Always close your connections.&lt;/strong&gt; Call &lt;code&gt;closePool()&lt;/code&gt; in &lt;code&gt;afterAll&lt;/code&gt;. Open connections = hanging Jest process = CI timeout = false failure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rule 3 — Test against real PostgreSQL, not in-memory fakes.&lt;/strong&gt; SQLite and in-memory databases are not PostgreSQL. Unique constraints, specific data types, &lt;code&gt;RETURNING&lt;/code&gt; clauses, transaction isolation — all of these behave subtly differently. Mock them and you're testing a fiction.&lt;/p&gt;




&lt;h2&gt;
  
  
  Taking It Further: Automated Mock Generation with Keploy
&lt;/h2&gt;

&lt;p&gt;The setup above is solid for testing your own API endpoints. But in real applications you have &lt;strong&gt;external dependencies&lt;/strong&gt; — third-party APIs, payment services, email providers. You can't spin those up in Docker.&lt;/p&gt;

&lt;p&gt;The traditional answer is to write mocks manually. The problem: manual mocks drift from reality. The service changes its response format, your mock doesn't, your tests keep passing, production breaks.&lt;/p&gt;

&lt;p&gt;This is where &lt;a href="https://keploy.io/blog/community/integration-testing-a-comprehensive-guide" rel="noopener noreferrer"&gt;Keploy&lt;/a&gt; takes a different approach. Instead of writing mocks by hand, Keploy &lt;strong&gt;records real API traffic&lt;/strong&gt; during development or staging runs, then replays those recorded interactions as deterministic stubs during testing. Your mocks are always based on real data, not what you thought the API would return.&lt;/p&gt;

&lt;p&gt;For a Node.js + PostgreSQL app like the one we built here, Keploy captures the actual DB queries and external calls during a real run, then replays them in CI without needing a live database or live external services at all. It's the closest thing to testing against production without actually hitting production.&lt;/p&gt;

&lt;p&gt;If you want to understand the full picture of what integration testing is, the different types (top-down, bottom-up, sandwich), and how to fit it into a CI/CD pipeline, I'd recommend reading &lt;a href="https://keploy.io/blog/community/integration-testing-a-comprehensive-guide" rel="noopener noreferrer"&gt;Keploy's comprehensive integration testing guide&lt;/a&gt; — it covers the theory behind everything we implemented here.&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Here's what we built:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A real Express + PostgreSQL app&lt;/li&gt;
&lt;li&gt;Integration tests using &lt;strong&gt;Testcontainers&lt;/strong&gt; (real Docker-based Postgres per suite)&lt;/li&gt;
&lt;li&gt;Proper &lt;strong&gt;setup/teardown&lt;/strong&gt; with &lt;code&gt;beforeAll&lt;/code&gt;, &lt;code&gt;afterAll&lt;/code&gt;, &lt;code&gt;beforeEach&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Fast state reset with &lt;strong&gt;TRUNCATE RESTART IDENTITY&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Proper connection pool cleanup to prevent hanging Jest processes&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;GitHub Actions&lt;/strong&gt; CI config that works without any extra setup&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result: a test suite that behaves identically on your laptop, your colleague's laptop, and your CI server. No more random failures. No more "works on my machine."&lt;/p&gt;

&lt;p&gt;If you have questions or a different approach that's worked well for you, drop it in the comments — always curious to hear how other teams handle this.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>node</category>
      <category>postgres</category>
      <category>devops</category>
    </item>
    <item>
      <title>Integration Testing: The Complete Developer’s Guide to Strategy, Tools, and Modern Best Practices</title>
      <dc:creator>Michael burry</dc:creator>
      <pubDate>Wed, 25 Feb 2026 12:30:11 +0000</pubDate>
      <link>https://dev.to/michael_burry_00/integration-testing-the-complete-developers-guide-to-strategy-tools-and-modern-best-practices-2360</link>
      <guid>https://dev.to/michael_burry_00/integration-testing-the-complete-developers-guide-to-strategy-tools-and-modern-best-practices-2360</guid>
      <description>&lt;p&gt;Modern software systems are no longer monolithic. They are distributed, API-driven, cloud-native, and composed of multiple services, databases, third-party integrations, queues, and front-end applications. While unit tests validate individual components, they cannot guarantee that modules work together correctly.&lt;/p&gt;

&lt;p&gt;That’s where integration testing becomes mission-critical.&lt;/p&gt;

&lt;p&gt;In this in-depth guide, we’ll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What &lt;a href="https://keploy.io/blog/community/integration-testing-a-comprehensive-guide" rel="noopener noreferrer"&gt;integration testing&lt;/a&gt; really means for developers&lt;/li&gt;
&lt;li&gt;Types and approaches&lt;/li&gt;
&lt;li&gt;Integration testing in microservices &amp;amp; cloud-native systems&lt;/li&gt;
&lt;li&gt;CI/CD integration strategy&lt;/li&gt;
&lt;li&gt;Top integration testing tools&lt;/li&gt;
&lt;li&gt;Companies providing integration testing solutions&lt;/li&gt;
&lt;li&gt;Real-world challenges and best practices&lt;/li&gt;
&lt;li&gt;How modern tools like Keploy simplify integration testing&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What Is Integration Testing?
&lt;/h2&gt;

&lt;p&gt;Integration testing is a software testing phase where individual modules or services are combined and tested as a group to verify their interactions.&lt;/p&gt;

&lt;p&gt;Instead of testing functions in isolation (like unit testing), integration testing validates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API communication&lt;/li&gt;
&lt;li&gt;Database interactions&lt;/li&gt;
&lt;li&gt;Service-to-service calls&lt;/li&gt;
&lt;li&gt;External system integrations&lt;/li&gt;
&lt;li&gt;Message queue workflows&lt;/li&gt;
&lt;li&gt;Data consistency across layers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In modern systems, integration testing often includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;REST API validation&lt;/li&gt;
&lt;li&gt;GraphQL communication&lt;/li&gt;
&lt;li&gt;Database writes and reads&lt;/li&gt;
&lt;li&gt;Event-driven messaging (Kafka, RabbitMQ)&lt;/li&gt;
&lt;li&gt;Third-party service calls&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Why Integration Testing Matters More Than Ever
&lt;/h2&gt;

&lt;p&gt;Today’s architectures rely heavily on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Microservices&lt;/li&gt;
&lt;li&gt;Cloud infrastructure&lt;/li&gt;
&lt;li&gt;Serverless functions&lt;/li&gt;
&lt;li&gt;Third-party SaaS APIs&lt;/li&gt;
&lt;li&gt;Payment gateways&lt;/li&gt;
&lt;li&gt;Identity providers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A small mismatch between two services can cause:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data corruption&lt;/li&gt;
&lt;li&gt;Failed transactions&lt;/li&gt;
&lt;li&gt;Broken authentication&lt;/li&gt;
&lt;li&gt;Inconsistent states&lt;/li&gt;
&lt;li&gt;Silent production failures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Unit tests won’t catch:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Incorrect API contracts&lt;/li&gt;
&lt;li&gt;Serialization/deserialization issues&lt;/li&gt;
&lt;li&gt;Timeout problems&lt;/li&gt;
&lt;li&gt;Schema mismatches&lt;/li&gt;
&lt;li&gt;Network-related failures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Integration testing fills this gap.&lt;/p&gt;




&lt;h2&gt;
  
  
  Types of Integration Testing
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Big Bang Integration Testing
&lt;/h3&gt;

&lt;p&gt;All modules are integrated at once and tested together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple approach&lt;/li&gt;
&lt;li&gt;Suitable for small systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hard to isolate failures&lt;/li&gt;
&lt;li&gt;Debugging becomes difficult&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  2. Incremental Integration Testing
&lt;/h3&gt;

&lt;p&gt;Modules are integrated step-by-step.&lt;/p&gt;

&lt;h4&gt;
  
  
  a) Top-Down Integration
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;High-level modules tested first&lt;/li&gt;
&lt;li&gt;Uses stubs for lower-level modules&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  b) Bottom-Up Integration
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Lower-level modules tested first&lt;/li&gt;
&lt;li&gt;Uses drivers for higher modules&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  c) Sandwich (Hybrid) Integration
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Combines top-down and bottom-up approaches&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  3. Contract Testing (Modern Integration Testing)
&lt;/h3&gt;

&lt;p&gt;Popular in microservices architecture.&lt;/p&gt;

&lt;p&gt;Validates API contracts between services to ensure compatibility.&lt;/p&gt;

&lt;p&gt;Tools like Pact help verify that consumers and providers agree on request/response formats.&lt;/p&gt;




&lt;h2&gt;
  
  
  Integration Testing in Microservices Architecture
&lt;/h2&gt;

&lt;p&gt;Microservices add complexity:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Independent deployments&lt;/li&gt;
&lt;li&gt;Separate databases&lt;/li&gt;
&lt;li&gt;Distributed transactions&lt;/li&gt;
&lt;li&gt;Asynchronous communication&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Integration testing here must validate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API gateway routing&lt;/li&gt;
&lt;li&gt;Inter-service communication&lt;/li&gt;
&lt;li&gt;Event-driven workflows&lt;/li&gt;
&lt;li&gt;Database integrity&lt;/li&gt;
&lt;li&gt;Circuit breaker handling&lt;/li&gt;
&lt;li&gt;Retry mechanisms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without integration testing, microservices can fail silently across boundaries.&lt;/p&gt;




&lt;h2&gt;
  
  
  Integration Testing in CI/CD Pipelines
&lt;/h2&gt;

&lt;p&gt;In modern DevOps workflows, integration tests must run automatically inside CI pipelines.&lt;/p&gt;

&lt;h3&gt;
  
  
  Typical Flow:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Code pushed&lt;/li&gt;
&lt;li&gt;Unit tests run&lt;/li&gt;
&lt;li&gt;Services spun up (Docker)&lt;/li&gt;
&lt;li&gt;Integration tests executed&lt;/li&gt;
&lt;li&gt;Reports generated&lt;/li&gt;
&lt;li&gt;Deployment decision made&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Tools commonly used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker Compose&lt;/li&gt;
&lt;li&gt;Kubernetes test environments&lt;/li&gt;
&lt;li&gt;GitHub Actions&lt;/li&gt;
&lt;li&gt;GitLab CI&lt;/li&gt;
&lt;li&gt;Jenkins&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Integration testing should be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fast&lt;/li&gt;
&lt;li&gt;Deterministic&lt;/li&gt;
&lt;li&gt;Environment-independent&lt;/li&gt;
&lt;li&gt;Automated&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Popular Integration Testing Tools
&lt;/h2&gt;

&lt;p&gt;Below are widely used tools in the developer ecosystem.&lt;/p&gt;




&lt;h3&gt;
  
  
  1. Keploy
&lt;/h3&gt;

&lt;p&gt;Keploy is a modern API testing and integration testing platform designed specifically for developers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Records real API calls&lt;/li&gt;
&lt;li&gt;Generates test cases automatically&lt;/li&gt;
&lt;li&gt;Creates mocks for dependencies&lt;/li&gt;
&lt;li&gt;Works seamlessly in CI/CD&lt;/li&gt;
&lt;li&gt;Ideal for backend and microservices&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Keploy eliminates manual test writing and ensures production-like integration testing.&lt;/p&gt;




&lt;h3&gt;
  
  
  2. Postman
&lt;/h3&gt;

&lt;p&gt;Primarily used for API testing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API request validation&lt;/li&gt;
&lt;li&gt;Environment management&lt;/li&gt;
&lt;li&gt;Collection runner&lt;/li&gt;
&lt;li&gt;Newman CLI for CI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Good for API-level integration testing but limited for full microservices flows.&lt;/p&gt;




&lt;h3&gt;
  
  
  3. SoapUI (by SmartBear)
&lt;/h3&gt;

&lt;p&gt;Provided by &lt;strong&gt;SmartBear&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Strong for SOAP and REST integration testing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best For:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enterprise API testing&lt;/li&gt;
&lt;li&gt;Complex integrations&lt;/li&gt;
&lt;li&gt;Load testing&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  4. REST Assured
&lt;/h3&gt;

&lt;p&gt;Java-based integration testing library.&lt;/p&gt;

&lt;p&gt;Commonly used in backend projects.&lt;/p&gt;

&lt;p&gt;Works well with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;JUnit&lt;/li&gt;
&lt;li&gt;TestNG&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  5. Cypress
&lt;/h3&gt;

&lt;p&gt;Primarily an end-to-end tool but can validate integrations in frontend + backend flows.&lt;/p&gt;




&lt;h3&gt;
  
  
  6. Selenium
&lt;/h3&gt;

&lt;p&gt;Provided by &lt;strong&gt;SeleniumHQ&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Mostly UI testing, but often part of integration testing for full workflows.&lt;/p&gt;




&lt;h3&gt;
  
  
  7. Pact
&lt;/h3&gt;

&lt;p&gt;Consumer-driven contract testing tool.&lt;/p&gt;

&lt;p&gt;Best for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Microservices&lt;/li&gt;
&lt;li&gt;API contract validation&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  8. Testcontainers
&lt;/h3&gt;

&lt;p&gt;Allows running real databases and services inside Docker during integration tests.&lt;/p&gt;

&lt;p&gt;Supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PostgreSQL&lt;/li&gt;
&lt;li&gt;MySQL&lt;/li&gt;
&lt;li&gt;Kafka&lt;/li&gt;
&lt;li&gt;Redis&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  9. JMeter (Apache)
&lt;/h3&gt;

&lt;p&gt;Provided by &lt;strong&gt;Apache Software Foundation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Primarily performance testing, but also used for integration validation under load.&lt;/p&gt;




&lt;h2&gt;
  
  
  Companies Providing Integration Testing Solutions
&lt;/h2&gt;

&lt;p&gt;Many companies specialize in integration testing services or tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. SmartBear
&lt;/h3&gt;

&lt;p&gt;Provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SoapUI&lt;/li&gt;
&lt;li&gt;ReadyAPI&lt;/li&gt;
&lt;li&gt;API automation tools&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  2. Tricentis
&lt;/h3&gt;

&lt;p&gt;Offers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enterprise test automation&lt;/li&gt;
&lt;li&gt;Integration and regression testing&lt;/li&gt;
&lt;li&gt;Tosca platform&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  3. Micro Focus
&lt;/h3&gt;

&lt;p&gt;Provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;UFT (Unified Functional Testing)&lt;/li&gt;
&lt;li&gt;Enterprise integration testing solutions&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  4. IBM
&lt;/h3&gt;

&lt;p&gt;Provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IBM Rational Test tools&lt;/li&gt;
&lt;li&gt;Integration testing frameworks&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  5. Accenture
&lt;/h3&gt;

&lt;p&gt;Offers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enterprise QA services&lt;/li&gt;
&lt;li&gt;Integration validation for large-scale systems&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  6. Infosys
&lt;/h3&gt;

&lt;p&gt;Provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Digital testing services&lt;/li&gt;
&lt;li&gt;API and integration testing&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  7. TCS (Tata Consultancy Services)
&lt;/h3&gt;

&lt;p&gt;Offers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;End-to-end integration testing&lt;/li&gt;
&lt;li&gt;Cloud-native testing&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Challenges in Integration Testing
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Environment Setup Complexity
&lt;/h3&gt;

&lt;p&gt;Spinning up multiple services is difficult.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Flaky Tests
&lt;/h3&gt;

&lt;p&gt;Network timeouts, race conditions, unstable test environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Slow Execution
&lt;/h3&gt;

&lt;p&gt;Integration tests are slower than unit tests.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Data Management
&lt;/h3&gt;

&lt;p&gt;Managing test data consistency is challenging.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. External Dependencies
&lt;/h3&gt;

&lt;p&gt;Third-party APIs may fail or rate-limit.&lt;/p&gt;




&lt;h2&gt;
  
  
  Best Practices for Integration Testing
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Use Realistic Test Environments
&lt;/h3&gt;

&lt;p&gt;Prefer containers over mocks when possible.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Automate Everything
&lt;/h3&gt;

&lt;p&gt;Integration tests should run automatically in CI.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Keep Tests Deterministic
&lt;/h3&gt;

&lt;p&gt;Avoid dependency on external unstable services.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Use Contract Testing
&lt;/h3&gt;

&lt;p&gt;Prevent API breaking changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Isolate Test Data
&lt;/h3&gt;

&lt;p&gt;Use seeded databases.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Monitor Integration Failures
&lt;/h3&gt;

&lt;p&gt;Track patterns in CI logs.&lt;/p&gt;




&lt;h2&gt;
  
  
  Integration Testing vs Other Testing Types
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Testing Type&lt;/th&gt;
&lt;th&gt;Focus&lt;/th&gt;
&lt;th&gt;Scope&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Unit Testing&lt;/td&gt;
&lt;td&gt;Individual functions&lt;/td&gt;
&lt;td&gt;Small&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Integration Testing&lt;/td&gt;
&lt;td&gt;Module interactions&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;System Testing&lt;/td&gt;
&lt;td&gt;Entire application&lt;/td&gt;
&lt;td&gt;Large&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;End-to-End Testing&lt;/td&gt;
&lt;td&gt;Full workflow&lt;/td&gt;
&lt;td&gt;Very Large&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Integration testing acts as a bridge between unit testing and &lt;a href="https://keploy.io/blog/community/end-to-end-testing-guide" rel="noopener noreferrer"&gt;full system testing&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Integration Testing for Modern Tech Stack
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Backend Frameworks
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Spring Boot&lt;/li&gt;
&lt;li&gt;Node.js&lt;/li&gt;
&lt;li&gt;Django&lt;/li&gt;
&lt;li&gt;.NET&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Databases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;PostgreSQL&lt;/li&gt;
&lt;li&gt;MongoDB&lt;/li&gt;
&lt;li&gt;MySQL&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Messaging
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kafka&lt;/li&gt;
&lt;li&gt;RabbitMQ&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cloud
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;AWS&lt;/li&gt;
&lt;li&gt;Azure&lt;/li&gt;
&lt;li&gt;GCP&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Integration testing must validate these connections reliably.&lt;/p&gt;




&lt;h2&gt;
  
  
  Example Integration Testing Strategy for Microservices
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Unit test every service&lt;/li&gt;
&lt;li&gt;Use contract testing for APIs&lt;/li&gt;
&lt;li&gt;Use Testcontainers for real DB&lt;/li&gt;
&lt;li&gt;Use Keploy to record and replay production calls&lt;/li&gt;
&lt;li&gt;Run integration tests in CI&lt;/li&gt;
&lt;li&gt;Block deployment if integration fails&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This layered approach ensures production stability.&lt;/p&gt;




&lt;h2&gt;
  
  
  Future of Integration Testing
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AI-generated test cases&lt;/li&gt;
&lt;li&gt;Automatic mock generation&lt;/li&gt;
&lt;li&gt;Production traffic replay&lt;/li&gt;
&lt;li&gt;Real-time CI insights&lt;/li&gt;
&lt;li&gt;Shift-left testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Modern tools are making integration testing developer-first rather than QA-only.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Integration testing is no longer optional in distributed systems.&lt;/p&gt;

&lt;p&gt;It ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API reliability&lt;/li&gt;
&lt;li&gt;Service compatibility&lt;/li&gt;
&lt;li&gt;Data consistency&lt;/li&gt;
&lt;li&gt;Production stability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For modern DevOps teams, integration testing must be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automated&lt;/li&gt;
&lt;li&gt;Containerized&lt;/li&gt;
&lt;li&gt;CI/CD integrated&lt;/li&gt;
&lt;li&gt;Developer-friendly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Platforms like &lt;a href="https://keploy.io/" rel="noopener noreferrer"&gt;Keploy&lt;/a&gt; are redefining integration testing by automating API test generation and reducing manual effort, making it easier for developer communities to adopt strong integration testing practices.&lt;/p&gt;

&lt;p&gt;If you are building microservices, APIs, or distributed applications, investing in a robust integration testing strategy is one of the smartest decisions you can make.&lt;/p&gt;

</description>
      <category>softwaretesting</category>
      <category>testing</category>
      <category>ai</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Building Reliable Software Through Smart Testing Strategies</title>
      <dc:creator>Michael burry</dc:creator>
      <pubDate>Fri, 30 Jan 2026 11:11:55 +0000</pubDate>
      <link>https://dev.to/michael_burry_00/building-reliable-software-through-smart-testing-strategies-6k0</link>
      <guid>https://dev.to/michael_burry_00/building-reliable-software-through-smart-testing-strategies-6k0</guid>
      <description>&lt;p&gt;In today’s fast paced digital world, software quality plays a major role in user trust and business success. Modern applications are complex, often built using multiple services, APIs, and platforms. A small failure in one part of the system can affect the entire user experience.&lt;/p&gt;

&lt;p&gt;To prevent such issues, development teams rely on well structured testing strategies. By combining smoke testing, functional testing, integration testing, and end to end testing, organizations can ensure that their products remain stable, scalable, and reliable.&lt;/p&gt;




&lt;h2&gt;
  
  
  Understanding the Role of Software Testing
&lt;/h2&gt;

&lt;p&gt;Software testing is more than finding bugs. It is a continuous process that validates whether an application meets business requirements and technical standards. Each testing layer serves a specific purpose and contributes to overall system quality.&lt;/p&gt;

&lt;p&gt;Rather than depending on a single testing method, successful teams use a balanced approach that covers different risk areas.&lt;/p&gt;




&lt;h2&gt;
  
  
  Smoke Testing: The First Line of Defense
&lt;/h2&gt;

&lt;p&gt;Smoke testing is performed after a new build is deployed. Its main purpose is to verify that critical features are working before deeper testing begins.&lt;/p&gt;

&lt;p&gt;Typical smoke tests include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application startup verification&lt;/li&gt;
&lt;li&gt;User login validation&lt;/li&gt;
&lt;li&gt;Core navigation checks&lt;/li&gt;
&lt;li&gt;Basic data processing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By identifying major failures early, &lt;a href="https://keploy.io/blog/community/developers-guide-to-smoke-testing-ensuring-basic-functionality" rel="noopener noreferrer"&gt;smoke testing&lt;/a&gt; saves time and prevents unstable builds from moving forward.&lt;/p&gt;




&lt;h2&gt;
  
  
  Functional Testing: Ensuring Feature Accuracy
&lt;/h2&gt;

&lt;p&gt;Once basic stability is confirmed, teams move to functional testing. This stage focuses on validating that each feature behaves according to specifications.&lt;/p&gt;

&lt;p&gt;Functional testing helps verify:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Form submissions&lt;/li&gt;
&lt;li&gt;Search functionality&lt;/li&gt;
&lt;li&gt;Payment workflows&lt;/li&gt;
&lt;li&gt;Notification systems&lt;/li&gt;
&lt;li&gt;User profile management&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This process ensures that every component performs as expected from a user perspective.&lt;/p&gt;




&lt;h2&gt;
  
  
  Integration Testing: Strengthening System Connections
&lt;/h2&gt;

&lt;p&gt;While individual features may work well on their own, problems often appear when systems interact. &lt;a href="https://keploy.io/blog/community/integration-testing-a-comprehensive-guide" rel="noopener noreferrer"&gt;Integration testing&lt;/a&gt; focuses on validating communication between modules, services, and databases.&lt;/p&gt;

&lt;p&gt;It helps detect issues such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Incorrect data exchange&lt;/li&gt;
&lt;li&gt;API failures&lt;/li&gt;
&lt;li&gt;Authentication mismatches&lt;/li&gt;
&lt;li&gt;Configuration errors&lt;/li&gt;
&lt;li&gt;Service dependency problems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By testing these connections, teams reduce the risk of system-wide failures.&lt;/p&gt;




&lt;h2&gt;
  
  
  End-to-End Testing: Validating Real User Journeys
&lt;/h2&gt;

&lt;p&gt;End-to-end testing evaluates complete user workflows across the application. It simulates real-world scenarios from start to finish, ensuring that all components work together seamlessly.&lt;/p&gt;

&lt;p&gt;Common end-to-end test cases include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User registration and onboarding&lt;/li&gt;
&lt;li&gt;Product browsing and checkout&lt;/li&gt;
&lt;li&gt;Order processing and tracking&lt;/li&gt;
&lt;li&gt;Account updates and support requests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This testing layer confirms that the application delivers a smooth and reliable user experience.&lt;/p&gt;




&lt;h2&gt;
  
  
  Building a Balanced Testing Strategy
&lt;/h2&gt;

&lt;p&gt;A strong testing framework combines all major testing types into a unified process.&lt;/p&gt;

&lt;p&gt;An effective testing flow usually follows this order:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Smoke testing verifies basic stability&lt;/li&gt;
&lt;li&gt;Functional testing validates feature behavior&lt;/li&gt;
&lt;li&gt;Integration testing confirms system connections&lt;/li&gt;
&lt;li&gt;End-to-end testing checks complete workflows&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This layered approach improves coverage and minimizes blind spots.&lt;/p&gt;




&lt;h2&gt;
  
  
  Automation and Continuous Testing
&lt;/h2&gt;

&lt;p&gt;As applications scale, manual testing becomes inefficient. Automation plays a vital role in maintaining consistency and speed.&lt;/p&gt;

&lt;p&gt;Key advantages of automated testing include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Faster release cycles&lt;/li&gt;
&lt;li&gt;Continuous feedback&lt;/li&gt;
&lt;li&gt;Reduced human error&lt;/li&gt;
&lt;li&gt;Improved test coverage&lt;/li&gt;
&lt;li&gt;Better CI pipeline integration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By integrating automated tests into development workflows, teams can detect issues early and respond quickly.&lt;/p&gt;




&lt;h2&gt;
  
  
  Managing Test Data and Environments
&lt;/h2&gt;

&lt;p&gt;Reliable testing depends on stable data and environments. Poor management can lead to inconsistent results.&lt;/p&gt;

&lt;p&gt;Best practices include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using isolated test databases&lt;/li&gt;
&lt;li&gt;Resetting environments regularly&lt;/li&gt;
&lt;li&gt;Maintaining clean test datasets&lt;/li&gt;
&lt;li&gt;Controlling configuration changes&lt;/li&gt;
&lt;li&gt;Monitoring dependency availability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These practices help maintain test accuracy.&lt;/p&gt;
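&lt;p&gt;A small sketch of the isolation practices above, assuming an in-memory sqlite database is an acceptable stand-in for the real one: each test gets a fresh database with clean seed data, and teardown resets the environment automatically.&lt;/p&gt;

```python
# Isolated test database sketch using sqlite3 from the standard library:
# every use gets a fresh in-memory database, so runs never share state.

import sqlite3
from contextlib import contextmanager

@contextmanager
def fresh_test_db():
    """Yield an isolated database seeded with a clean dataset."""
    conn = sqlite3.connect(":memory:")       # isolated, never shared
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")  # clean seed data
    try:
        yield conn
    finally:
        conn.close()                         # environment reset after the test

with fresh_test_db() as db:
    rows = db.execute("SELECT name FROM users").fetchall()
    assert rows == [("alice",)]              # seed data is present and nothing else
```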




&lt;h2&gt;
  
  
  Common Challenges in Software Testing
&lt;/h2&gt;

&lt;p&gt;Despite best efforts, teams often face obstacles that affect testing quality.&lt;/p&gt;

&lt;p&gt;Some common challenges include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintaining outdated test scripts&lt;/li&gt;
&lt;li&gt;Handling unstable test environments&lt;/li&gt;
&lt;li&gt;Managing complex dependencies&lt;/li&gt;
&lt;li&gt;Balancing speed and quality&lt;/li&gt;
&lt;li&gt;Limited testing resources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overcoming these issues requires continuous improvement and collaboration.&lt;/p&gt;




&lt;h2&gt;
  
  
  Measuring Testing Effectiveness
&lt;/h2&gt;

&lt;p&gt;To improve testing processes, organizations should track meaningful performance indicators.&lt;/p&gt;

&lt;p&gt;Important metrics include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test execution time&lt;/li&gt;
&lt;li&gt;Defect detection rate&lt;/li&gt;
&lt;li&gt;Production bug frequency&lt;/li&gt;
&lt;li&gt;Coverage of critical workflows&lt;/li&gt;
&lt;li&gt;Issue resolution time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These insights support data-driven decision making.&lt;/p&gt;
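&lt;p&gt;As a concrete example, defect detection rate is often computed as the share of all defects caught before production (definitions vary by team):&lt;/p&gt;

```python
# One way to compute a defect detection rate from per-release defect counts.
# This uses the common "share caught before production" definition; teams vary.

def defect_detection_rate(found_in_testing, found_in_production):
    total = found_in_testing + found_in_production
    return 0.0 if total == 0 else found_in_testing / total

# 45 bugs caught by tests, 5 escaped to production
rate = defect_detection_rate(45, 5)
print(f"{rate:.0%}")  # 90%
```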




&lt;h2&gt;
  
  
  Future Trends in Software Testing
&lt;/h2&gt;

&lt;p&gt;Software testing continues to evolve alongside technology.&lt;/p&gt;

&lt;p&gt;Emerging trends include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Traffic-based testing&lt;/li&gt;
&lt;li&gt;Contract testing&lt;/li&gt;
&lt;li&gt;AI-assisted test automation&lt;/li&gt;
&lt;li&gt;Observability-driven validation&lt;/li&gt;
&lt;li&gt;Service virtualization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These innovations help teams manage increasing system complexity.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Delivering high-quality software requires more than writing good code. It demands a thoughtful and structured testing approach.&lt;/p&gt;

&lt;p&gt;By combining smoke testing, functional testing, integration testing, and end-to-end testing, teams can build reliable systems that meet user expectations and business goals.&lt;/p&gt;

&lt;p&gt;A balanced testing strategy reduces risk, improves confidence, and supports long-term product success.&lt;/p&gt;

</description>
      <category>software</category>
      <category>testing</category>
      <category>automation</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Integration vs. E2E &amp; System Testing — A Practical Testing Pyramid Playbook (with Real CI Pipelines)</title>
      <dc:creator>Michael burry</dc:creator>
      <pubDate>Wed, 14 Jan 2026 13:20:46 +0000</pubDate>
      <link>https://dev.to/michael_burry_00/integration-vs-e2e-system-testing-a-practical-testing-pyramid-playbook-with-real-ci-pipelines-1del</link>
      <guid>https://dev.to/michael_burry_00/integration-vs-e2e-system-testing-a-practical-testing-pyramid-playbook-with-real-ci-pipelines-1del</guid>
      <description>&lt;p&gt;As software systems grow more distributed, most failures no longer come from a single function or class. They happen when services interact, data flows across boundaries, or assumptions break between components.&lt;/p&gt;

&lt;p&gt;That’s why teams struggle to balance integration tests, end-to-end (E2E) tests, and system tests. Used incorrectly, they slow CI pipelines and reduce trust in test results. Used correctly, they provide fast feedback and strong release confidence.&lt;/p&gt;

&lt;p&gt;This article explains how these test types differ, when to use each for maximum ROI, and how real teams structure their CI pipelines around them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Testing Pyramid (As It Works in Real Teams)
&lt;/h2&gt;

&lt;p&gt;The classic testing pyramid looks simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unit tests at the base&lt;/li&gt;
&lt;li&gt;Integration tests in the middle&lt;/li&gt;
&lt;li&gt;End-to-end tests at the top&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But many real teams accidentally flip this pyramid—relying heavily on E2E tests and skipping &lt;a href="https://keploy.io/blog/community/integration-testing-a-comprehensive-guide" rel="noopener noreferrer"&gt;integration testing&lt;/a&gt;. The result is slow feedback, flaky builds, and late bug discovery.&lt;/p&gt;

&lt;p&gt;Let’s break down each layer with real examples and CI usage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integration Testing: Where Most ROI Comes From
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What Integration Tests Validate
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;API-to-API communication&lt;/li&gt;
&lt;li&gt;Service ↔ database interactions&lt;/li&gt;
&lt;li&gt;Message brokers, caches, and external dependencies&lt;/li&gt;
&lt;li&gt;Request/response contracts and error handling&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Order Service → Payment Service&lt;/li&gt;
&lt;li&gt;Auth Service → User Database&lt;/li&gt;
&lt;li&gt;API → Kafka → Consumer&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Strengths
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;High signal for backend regressions&lt;/li&gt;
&lt;li&gt;Faster than E2E tests&lt;/li&gt;
&lt;li&gt;Catches contract and data issues early&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Limitations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Requires careful dependency isolation&lt;/li&gt;
&lt;li&gt;Not a replacement for full user-journey validation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Best Used When
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You have microservices or API-heavy backends&lt;/li&gt;
&lt;li&gt;Production bugs usually occur at service boundaries&lt;/li&gt;
&lt;li&gt;You need fast, reliable CI feedback&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;In practice:&lt;/strong&gt;&lt;br&gt;
Integration tests form the &lt;strong&gt;spine of backend confidence&lt;/strong&gt;.&lt;/p&gt;
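&lt;p&gt;A sketch of the Order Service → Payment Service case, with the payment dependency isolated behind a small fake as the "careful dependency isolation" point above suggests. The class and method names are illustrative, not a real API.&lt;/p&gt;

```python
# Integration-style sketch: an order service exercised against a fake payment
# service at the boundary. Names are hypothetical, for illustration only.

class FakePaymentService:
    """Stands in for the real payment dependency at the service boundary."""
    def charge(self, amount_cents):
        if amount_cents <= 0:
            return {"status": "rejected"}
        return {"status": "ok", "charged": amount_cents}

class OrderService:
    def __init__(self, payments):
        self.payments = payments

    def place_order(self, amount_cents):
        receipt = self.payments.charge(amount_cents)
        # contract check: the order service relies on this response shape
        return receipt["status"] == "ok"

def test_order_service_charges_payment():
    svc = OrderService(FakePaymentService())
    assert svc.place_order(1999) is True   # happy path across the boundary
    assert svc.place_order(0) is False     # error handling across the boundary

test_order_service_charges_payment()
```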
&lt;h2&gt;
  
  
  &lt;a href="https://keploy.io/blog/community/end-to-end-testing-guide" rel="noopener noreferrer"&gt;End-to-End Testing&lt;/a&gt;: Validate Critical User Paths
&lt;/h2&gt;
&lt;h3&gt;
  
  
  What E2E Tests Validate
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Full user journeys across UI, backend, and infrastructure&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;User signs up → logs in → places order → receives confirmation&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Strengths
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Closest to real user behavior&lt;/li&gt;
&lt;li&gt;Confirms wiring across the entire stack&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Limitations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Slow execution&lt;/li&gt;
&lt;li&gt;High maintenance cost&lt;/li&gt;
&lt;li&gt;Fragile due to UI and environment changes&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Best Used When
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Covering revenue-critical flows&lt;/li&gt;
&lt;li&gt;Running smoke tests post-deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key rule:&lt;/strong&gt;&lt;br&gt;
If E2E tests dominate your CI pipeline, your feedback loop will suffer.&lt;/p&gt;
&lt;h2&gt;
  
  
  System Testing: Release Readiness, Not Developer Feedback
&lt;/h2&gt;
&lt;h3&gt;
  
  
  What System Tests Validate
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The entire application as a single deployed unit&lt;/li&gt;
&lt;li&gt;Functional and non-functional behavior&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Load handling under peak traffic&lt;/li&gt;
&lt;li&gt;Security and auth across modules&lt;/li&gt;
&lt;li&gt;SLA and reliability checks&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Strengths
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Closest to production conditions&lt;/li&gt;
&lt;li&gt;Strong release confidence&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Limitations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Slow&lt;/li&gt;
&lt;li&gt;Environment-heavy&lt;/li&gt;
&lt;li&gt;Not suitable for frequent CI runs&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Best Used When
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Before major releases&lt;/li&gt;
&lt;li&gt;In staging or pre-production environments&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Side-by-Side Comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Test Type&lt;/th&gt;
&lt;th&gt;Speed&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;th&gt;Flakiness&lt;/th&gt;
&lt;th&gt;Primary Goal&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Integration&lt;/td&gt;
&lt;td&gt;Fast&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Low–Medium&lt;/td&gt;
&lt;td&gt;Validate service interactions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;End-to-End&lt;/td&gt;
&lt;td&gt;Slow&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;Validate user journeys&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;System&lt;/td&gt;
&lt;td&gt;Very Slow&lt;/td&gt;
&lt;td&gt;Very High&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Validate release readiness&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h2&gt;
  
  
  Real CI Pipeline Examples (Production Patterns)
&lt;/h2&gt;
&lt;h3&gt;
  
  
  1. Pull Request CI — Fast Developer Feedback
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Goal:&lt;/strong&gt; Catch breaking changes early&lt;br&gt;
&lt;strong&gt;Runs on:&lt;/strong&gt; Every PR&lt;br&gt;
&lt;strong&gt;Time budget:&lt;/strong&gt; 5–15 minutes&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stages&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Lint &amp;amp; static analysis&lt;/li&gt;
&lt;li&gt;Unit tests&lt;/li&gt;
&lt;li&gt;Integration tests (isolated dependencies)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Using &lt;strong&gt;&lt;a href="https://github.com/features/actions" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt;&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PR CI&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Unit tests&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;make test-unit&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Integration tests&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;make test-integration&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why this works&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No shared environments&lt;/li&gt;
&lt;li&gt;Deterministic failures&lt;/li&gt;
&lt;li&gt;Fast merge confidence&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Main Branch CI — Regression Protection
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Goal:&lt;/strong&gt; Validate merged code before release&lt;br&gt;
&lt;strong&gt;Runs on:&lt;/strong&gt; &lt;code&gt;main&lt;/code&gt; / &lt;code&gt;develop&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Time budget:&lt;/strong&gt; 20–40 minutes&lt;/p&gt;

&lt;p&gt;Using &lt;strong&gt;Jenkins&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight groovy"&gt;&lt;code&gt;&lt;span class="n"&gt;pipeline&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;stages&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Build'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;steps&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'make build'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Unit Tests'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;steps&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'make test-unit'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Integration Tests'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;steps&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'make test-integration'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'E2E Smoke Tests'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;steps&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'make test-e2e-smoke'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key design choice&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Only &lt;strong&gt;smoke-level&lt;/strong&gt; E2E tests&lt;/li&gt;
&lt;li&gt;Integration tests catch most regressions&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Nightly CI — System Validation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Goal:&lt;/strong&gt; Validate full system behavior&lt;br&gt;
&lt;strong&gt;Runs on:&lt;/strong&gt; Nightly / scheduled&lt;br&gt;
&lt;strong&gt;Time budget:&lt;/strong&gt; 1–3 hours&lt;/p&gt;

&lt;p&gt;Using &lt;strong&gt;GitLab CI&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;system_tests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./deploy-staging.sh&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./run-system-tests.sh&lt;/span&gt;
  &lt;span class="na"&gt;only&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;schedules&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Not used for PR feedback&lt;/li&gt;
&lt;li&gt;Focused on readiness, not correctness&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Where Teams Lose ROI
&lt;/h2&gt;

&lt;p&gt;Common real-world anti-patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Running full E2E tests on every PR&lt;/li&gt;
&lt;li&gt;Using shared staging environments in CI&lt;/li&gt;
&lt;li&gt;Treating system tests as regression tests&lt;/li&gt;
&lt;li&gt;Skipping integration tests entirely&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These patterns slow delivery and erode trust in pipelines.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Practical Testing Pyramid Playbook
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unit tests&lt;/strong&gt;&lt;br&gt;
Fast, cheap, local correctness&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integration tests (core layer)&lt;/strong&gt;&lt;br&gt;
Service interactions, contracts, data flows&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Minimal E2E tests&lt;/strong&gt;&lt;br&gt;
Critical user paths only&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;System tests&lt;/strong&gt;&lt;br&gt;
Release confidence, not daily feedback&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If a test runs frequently, it must be &lt;strong&gt;fast and deterministic&lt;/strong&gt;.&lt;br&gt;
If it validates production readiness, it belongs &lt;strong&gt;outside PR CI&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Strong testing strategies aren’t about more tests—they’re about &lt;strong&gt;placing the right tests at the right layer&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Integration tests deliver the best speed-to-confidence ratio&lt;/li&gt;
&lt;li&gt;E2E tests protect critical workflows&lt;/li&gt;
&lt;li&gt;System tests ensure release readiness&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Teams that follow this pyramid ship faster, debug less, and trust their CI pipelines.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>productivity</category>
      <category>cicd</category>
      <category>integration</category>
    </item>
    <item>
      <title>End-to-End Testing in Modern Software: A Practical Guide for Developers</title>
      <dc:creator>Michael burry</dc:creator>
      <pubDate>Tue, 06 Jan 2026 14:31:40 +0000</pubDate>
      <link>https://dev.to/michael_burry_00/end-to-end-testing-in-modern-software-a-practical-guide-for-developers-1g3</link>
      <guid>https://dev.to/michael_burry_00/end-to-end-testing-in-modern-software-a-practical-guide-for-developers-1g3</guid>
      <description>&lt;p&gt;Modern applications are built very differently than they were a few years ago. Instead of single codebases, teams now work with microservices, APIs, cloud infrastructure, and third-party dependencies. While this architecture enables faster development, it also increases the risk of failures that are difficult to detect early.&lt;/p&gt;

&lt;p&gt;Many bugs don’t come from broken functions but from broken workflows. This is where end-to-end testing becomes essential.&lt;/p&gt;




&lt;h2&gt;
  
  
  Understanding End-to-End Testing
&lt;/h2&gt;

&lt;p&gt;End-to-end testing validates how a system behaves from a user’s point of view. It checks whether complete workflows work as expected across all layers of the application, rather than focusing on individual components.&lt;/p&gt;

&lt;p&gt;A typical workflow might include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User actions from a UI or client&lt;/li&gt;
&lt;li&gt;API requests across multiple services&lt;/li&gt;
&lt;li&gt;Business logic execution&lt;/li&gt;
&lt;li&gt;Database operations&lt;/li&gt;
&lt;li&gt;External integrations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;End-to-end tests ensure that all of these parts work together correctly under realistic conditions.&lt;/p&gt;

&lt;p&gt;In real-world systems, &lt;strong&gt;&lt;a href="https://keploy.io/blog/community/end-to-end-testing-guide" rel="noopener noreferrer"&gt;e2e testing&lt;/a&gt;&lt;/strong&gt; helps teams verify that critical user journeys function reliably across frontend interfaces, backend services, APIs, and infrastructure components.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Developers Can’t Rely Only on Unit Tests
&lt;/h2&gt;

&lt;p&gt;Unit tests are great for validating logic quickly and catching regressions early. However, they operate in isolation and rely heavily on mocks and assumptions.&lt;/p&gt;

&lt;p&gt;Even integration tests, while useful, often validate limited interactions in controlled environments. They may not reflect real production behavior, where configuration issues, data inconsistencies, and network failures occur.&lt;/p&gt;

&lt;p&gt;End-to-end testing addresses this gap by validating the system as a whole. It answers the most important question:&lt;/p&gt;

&lt;p&gt;Does the application actually work for users?&lt;/p&gt;




&lt;h2&gt;
  
  
  Real Issues E2E Testing Helps Uncover
&lt;/h2&gt;

&lt;p&gt;End-to-end tests are especially effective at detecting problems such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Broken service-to-service communication&lt;/li&gt;
&lt;li&gt;Incorrect API contracts&lt;/li&gt;
&lt;li&gt;Authentication and permission issues&lt;/li&gt;
&lt;li&gt;Data inconsistencies across systems&lt;/li&gt;
&lt;li&gt;Misconfigured environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These issues are difficult to identify without running tests that exercise the full application flow.&lt;/p&gt;
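&lt;p&gt;For instance, an incorrect API contract can be caught with a simple response-shape check. The expected shape here is written out by hand; real teams often use a schema library or a contract-testing tool instead.&lt;/p&gt;

```python
# Hand-rolled response-shape check, a minimal stand-in for schema validation.

def matches_contract(payload, contract):
    """True when payload has every field in contract with the right type."""
    return all(
        key in payload and isinstance(payload[key], typ)
        for key, typ in contract.items()
    )

user_contract = {"id": int, "email": str, "active": bool}

good = {"id": 7, "email": "ada@example.com", "active": True}
bad = {"id": "7", "email": "ada@example.com"}   # wrong type, missing field

assert matches_contract(good, user_contract)
assert not matches_contract(bad, user_contract)
```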




&lt;h2&gt;
  
  
  Common Challenges With End-to-End Testing
&lt;/h2&gt;

&lt;p&gt;Despite its value, end-to-end testing is often misunderstood or misused.&lt;/p&gt;

&lt;p&gt;One challenge is test stability. Because e2e tests depend on multiple services and environments, failures may occur due to infrastructure issues rather than real defects.&lt;/p&gt;

&lt;p&gt;Another issue is execution time. Running full workflows takes longer than running unit or integration tests, making it impractical to run large e2e suites on every commit.&lt;/p&gt;

&lt;p&gt;Teams that succeed with end-to-end testing usually:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Limit coverage to critical user paths&lt;/li&gt;
&lt;li&gt;Avoid testing edge cases already covered elsewhere&lt;/li&gt;
&lt;li&gt;Run e2e tests as part of release validation rather than constant feedback loops&lt;/li&gt;
&lt;/ul&gt;
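&lt;p&gt;The "critical paths only" idea can be sketched by tagging tests with a layer and selecting which layers run in a given stage. Real projects usually do this with pytest markers (for example &lt;code&gt;pytest -m e2e&lt;/code&gt;); this shows just the selection logic.&lt;/p&gt;

```python
# Layer-tagging sketch: run only the e2e layer during release validation,
# leaving faster layers for the constant feedback loop.

def layer(name):
    """Decorator that tags a test function with its testing layer."""
    def wrap(fn):
        fn.layer = name
        return fn
    return wrap

@layer("unit")
def test_price_math():
    assert 2 * 3 == 6

@layer("e2e")
def test_checkout_journey():
    assert True  # placeholder for a full critical-path user journey

def select(tests, wanted):
    return [t for t in tests if t.layer == wanted]

all_tests = [test_price_math, test_checkout_journey]
release_suite = select(all_tests, "e2e")  # run only in release validation
print([t.__name__ for t in release_suite])
```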




&lt;h2&gt;
  
  
  When End-to-End Testing Matters Most
&lt;/h2&gt;

&lt;p&gt;Not every feature requires an end-to-end test. However, some scenarios benefit greatly from it.&lt;/p&gt;

&lt;p&gt;End-to-end testing is especially valuable when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Releasing user-facing features&lt;/li&gt;
&lt;li&gt;Deploying changes across multiple services&lt;/li&gt;
&lt;li&gt;Introducing new integrations&lt;/li&gt;
&lt;li&gt;Migrating infrastructure or architecture&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By focusing on high-impact workflows, teams can gain confidence without maintaining overly large test suites.&lt;/p&gt;




&lt;h2&gt;
  
  
  E2E Testing as Part of a Balanced Strategy
&lt;/h2&gt;

&lt;p&gt;The most effective testing strategies combine multiple layers of validation.&lt;/p&gt;

&lt;p&gt;A healthy setup usually includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unit tests for fast, frequent feedback&lt;/li&gt;
&lt;li&gt;Integration tests for validating service interactions&lt;/li&gt;
&lt;li&gt;End-to-end tests for system-level confidence&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This layered approach reduces risk while keeping testing efficient and maintainable.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;As systems become more distributed and interconnected, testing complete workflows becomes increasingly important. End-to-end testing provides visibility into how real users experience the product and helps teams catch issues before they reach production.&lt;/p&gt;

&lt;p&gt;When applied thoughtfully, it complements other testing practices and plays a key role in delivering reliable software.&lt;/p&gt;

</description>
      <category>e2e</category>
      <category>testing</category>
      <category>softwaredevelopment</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Integration Testing: Definition, How-to, Examples</title>
      <dc:creator>Michael burry</dc:creator>
      <pubDate>Mon, 05 Jan 2026 20:09:22 +0000</pubDate>
      <link>https://dev.to/michael_burry_00/integration-testing-definition-how-to-examples-1nmd</link>
      <guid>https://dev.to/michael_burry_00/integration-testing-definition-how-to-examples-1nmd</guid>
      <description>&lt;p&gt;Imagine organizing a large event. The venue, catering, invitations, and audio system all work perfectly on their own. But when the event begins, everything must come together seamlessly. If check-in fails, food is delayed, or the sound system breaks, the entire experience suffers.&lt;/p&gt;

&lt;p&gt;This is where integration testing becomes essential. Integration testing verifies that different parts of a software system such as services, APIs, databases, and external systems work correctly together. Even when individual modules pass unit tests, issues like data mismatches, communication failures, or configuration errors often surface only when components interact.&lt;/p&gt;

&lt;p&gt;In this article, we’ll explain what integration testing is, why it matters, and how to implement it effectively. We’ll cover its types, benefits, best practices, and real-world examples to help you apply integration testing with confidence in modern software systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Integration Testing?
&lt;/h2&gt;

&lt;p&gt;Integration testing in &lt;a href="https://keploy.io/blog/community/testing-methodologies-in-software-testing" rel="noopener noreferrer"&gt;software testing&lt;/a&gt; focuses on validating interactions between different parts of an application. These parts may be internal modules or external systems such as third-party APIs and services. The goal is to ensure that the complete system behaves correctly when its components are connected.&lt;/p&gt;

&lt;p&gt;In the testing pyramid, integration testing sits between unit testing and end-to-end testing. After verifying individual units in isolation, integration testing ensures those units communicate correctly before moving on to full user-flow validation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Is Integration Testing Crucial in Modern Software Development?
&lt;/h2&gt;

&lt;p&gt;As applications become more distributed and feature-rich, integration testing ensures that all the systems and modules work together. Whether you’re dealing with monolithic apps or &lt;a href="https://keploy.io/blog/community/getting-started-with-microservices-testing" rel="noopener noreferrer"&gt;microservices architectures&lt;/a&gt;, integration testing plays a key role in validating data flow, module interactions, and overall functionality.&lt;/p&gt;

&lt;p&gt;Here are the key benefits of integration testing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Identifying Bugs Linked to Module Interactions&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Many bugs arise from how components interact with each other. For example, data mismatches or API failures may only surface when two modules communicate. Integration testing helps catch these errors early.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Validating Data Flow&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Integration testing ensures that data passed between components remains consistent and accurately flows from one module to another. For example, when an API sends data to a database, integration testing ensures that the data is processed correctly and remains intact.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Mitigating Production Risk&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
By identifying integration issues early, integration testing helps prevent larger failures once the application is in production. This is crucial in preventing disruptions to users and maintaining smooth operations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improving System Reliability&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Effective integration tests ensure that the combined system performs as expected under different scenarios. Integration testing helps validate the system’s resilience and ensures that modules work well in tandem.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How Integration Testing Fits in the Software Development Cycle&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In the software development cycle, integration testing sits between unit testing and system testing.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxa1oqs05sm4bo84bbruc.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxa1oqs05sm4bo84bbruc.webp" alt="software development cycle" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unit Testing&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Focuses on testing individual components or functions in isolation, ensuring each unit works as expected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integration Testing&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Tests how components or modules interact, ensuring they work together as intended.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;System Testing&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Ensures that the entire system works as a whole, including testing performance, security, and user experience.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While unit tests are quick and targeted, integration tests validate the interactions between components. They provide the next level of confidence that the system will behave as expected when all pieces come together.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How to Write Effective Integration Tests&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Writing integration tests requires careful planning, preparation, and execution. Here’s a step-by-step approach:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5slegc3quq6rbyxz8p64.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5slegc3quq6rbyxz8p64.webp" alt="UI to API to DB" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Define the Scope of Integration Tests&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Clarify which components will be tested together (e.g., API + front-end, service + database, UI + backend API).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Prepare Test Data &amp;amp; Environment&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Use realistic datasets, mock data, or test environments (e.g., Docker containers) to simulate real-world conditions without affecting production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Design Comprehensive Test Cases&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Define the test inputs, expected results, preconditions, and cleanup. This helps in validating specific interactions, error handling, and data flow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automate Test Execution&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Automate tests using frameworks like JUnit, pytest, or Keploy, and integrate them into &lt;a href="https://keploy.io/blog/community/how-cicd-is-changing-the-future-of-software-development" rel="noopener noreferrer"&gt;CI/CD pipelines&lt;/a&gt; to ensure tests run with every code change.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Verify Results&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Look at status codes, check payload correctness, and monitor side effects (like emails sent or database changes).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cleanup &amp;amp; Teardown&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Ensure that all test data is cleared, keeping the test environment consistent for future runs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
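&lt;p&gt;The steps above can be sketched in a pytest-style Python test, using an in-memory SQLite database as the test environment. The &lt;code&gt;register_user&lt;/code&gt; service and its schema are hypothetical stand-ins for your own components:&lt;/p&gt;

```python
import sqlite3

# Hypothetical service under test: registers a user and persists it to a
# database (the integration point between service logic and storage).
def register_user(conn, email):
    if "@" not in email:
        raise ValueError("invalid email")
    conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

def test_register_user_persists_to_db():
    # Prepare test data & environment: an in-memory DB simulates the real one
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    try:
        # Execute & verify: the service and the database agree on the result
        assert register_user(conn, "a@example.com") == 1
        row = conn.execute("SELECT email FROM users").fetchone()
        assert row[0] == "a@example.com"
    finally:
        # Cleanup & teardown keeps the environment consistent for future runs
        conn.close()

# pytest would discover this automatically; here we run it directly
test_register_user_persists_to_db()
```

&lt;p&gt;Because each run gets a fresh in-memory database, the test is isolated and the teardown step is trivial, which is exactly the consistency the checklist above asks for.&lt;/p&gt;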

&lt;h3&gt;
  
  
  &lt;strong&gt;How Integration Testing Works in Action&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In practice, integration testing involves connecting modules in a controlled environment. Here's an overview:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bootstrapping&lt;/strong&gt;: Initialize the modules, mocking external dependencies if needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test Execution&lt;/strong&gt;: Trigger scenarios that initiate interactions, such as API requests or UI actions that call APIs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Logging &amp;amp; Observation&lt;/strong&gt;: Capture logs, metrics, and traces to monitor for errors or performance issues during the test.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Assertion &amp;amp; Reporting&lt;/strong&gt;: Use assertions to compare expected vs. actual results, providing detailed reports for debugging.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
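&lt;p&gt;A compact Python illustration of this four-step flow, with a hypothetical &lt;code&gt;OrderService&lt;/code&gt; whose inventory dependency is mocked using the standard library:&lt;/p&gt;

```python
from unittest.mock import Mock

# Hypothetical module: an OrderService that depends on an inventory client.
class OrderService:
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, sku, qty):
        if not self.inventory.reserve(sku, qty):   # the integration point
            return {"status": "rejected"}
        return {"status": "accepted", "sku": sku, "qty": qty}

# 1. Bootstrapping: initialize the module, mocking the external dependency
inventory = Mock()
inventory.reserve.return_value = True
service = OrderService(inventory)

# 2. Test execution: trigger a scenario that initiates the interaction
result = service.place_order("ABC-123", 2)

# 3. Logging & observation: the mock records exactly how it was called
inventory.reserve.assert_called_once_with("ABC-123", 2)

# 4. Assertion & reporting: compare expected vs. actual results
assert result == {"status": "accepted", "sku": "ABC-123", "qty": 2}
```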

&lt;h3&gt;
  
  
  &lt;strong&gt;What Does Integration Testing Involve?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Interface Compatibility&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Ensures that all teams share a common understanding of method signatures, data formats, and endpoints. For example, when APIs communicate with databases, teams must align on request formats and response schemas.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Integrity&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Validates that data transformations and transfers maintain meaning and structure. This is crucial for ensuring consistency and accuracy as data moves across components (e.g., from an API to a database).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;System Behavior&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
This step involves ensuring that workflows across modules achieve the expected business outcomes or user experience.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Performance Testing&lt;/strong&gt;: This is crucial, especially in high-traffic scenarios. For example, when APIs and databases work together under load, integration tests ensure that response times and throughput remain consistent as traffic increases.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;Error &amp;amp; Exception Handling&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Error handling involves testing scenarios where failures may occur, such as timeouts, retries, or system crashes. Integration testing ensures that your system handles failures gracefully, for example by retrying failed API calls or reverting to fallback procedures during communication breakdowns. This minimizes disruption and ensures a smooth user experience.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;
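&lt;p&gt;As a rough sketch of graceful failure handling, the retry-then-fallback pattern described above can be exercised like this; the flaky downstream service here is simulated:&lt;/p&gt;

```python
import time

def call_with_retry(primary, fallback, retries=3, delay=0.0):
    """Try `primary` up to `retries` times, then degrade to `fallback`."""
    for _attempt in range(retries):
        try:
            return primary()
        except TimeoutError:
            time.sleep(delay)  # back off before the next attempt
    return fallback()

# Simulate a downstream service that times out twice, then recovers
calls = {"n": 0}
def flaky_service():
    calls["n"] += 1
    if calls["n"] in (1, 2):
        raise TimeoutError
    return "live response"

def always_down():
    raise TimeoutError

# Retries absorb transient failures...
assert call_with_retry(flaky_service, lambda: "cached fallback") == "live response"

# ...and if the service never recovers, the fallback keeps the experience smooth
assert call_with_retry(always_down, lambda: "cached fallback") == "cached fallback"
```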

&lt;h2&gt;
  
  
  What Are the Key Steps in Integration Testing?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozx2y61fvkit25bzww4x.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozx2y61fvkit25bzww4x.webp" alt="Key Steps in Integration Testing" width="800" height="550"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Plan Strategy&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Identify the desired integration strategy (e.g., Big Bang, Bottom-Up). Record entry and exit criteria.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Design Test Cases&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Identify positive flows, boundary conditions, and failure modes for each integration point.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Setup Environment&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Provision test servers, containers, message brokers, and versioned test data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Execute Tests&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Execute automated scripts while gathering logs to track performance and errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Log &amp;amp; Track Defects&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Track issues in a defect management system (e.g., Jira) with detailed reproduction steps.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fix &amp;amp; Retest&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Developers resolve defects, and testers re-execute tests until criteria are met.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What Is the Purpose of an Integration Test?
&lt;/h2&gt;

&lt;p&gt;The overarching aim is to assess how the integrated modules function together. Specific checks can be grouped into three types:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh6xssl4e8wyrweyw6o1e.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh6xssl4e8wyrweyw6o1e.webp" alt="Venn Diagram of Integration Testing" width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Interface Compatibility&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Ensuring that call parameters, their definitions, and data formats match across module boundaries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Integrity:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Ensuring transformations and transfers maintain meaning and structure in the transaction.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;System Behavior&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Ensuring that workflows across modules achieve the expected business outcomes or user experience.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Key Types of Integration Testing
&lt;/h2&gt;

&lt;p&gt;There are several approaches to integration testing, each suited to different types of systems:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xt0uqw66zhp2f33lu5o.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xt0uqw66zhp2f33lu5o.webp" alt="types of integration testing" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Big-Bang Integration Testing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt;: All modules are integrated after unit testing is completed, and the entire system is tested at once.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Advantages&lt;/strong&gt;: Easy setup, no need to create intermediate tests or stubs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Disadvantages&lt;/strong&gt;: Difficult to pinpoint the root cause of failures, and if integration fails, it can block all work.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Bottom-Up Integration Testing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt;: Testing begins with the lowest-level modules and gradually integrates higher-level modules.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Advantages&lt;/strong&gt;: Provides granular testing of the underlying components before higher-level modules are built.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Disadvantages&lt;/strong&gt;: Requires the creation of driver modules for simulation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Top-Down Integration Testing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt;: Testing begins with the top-level modules, using stubs to simulate lower-level components.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Advantages&lt;/strong&gt;: Early validation of user-facing features and overall system architecture.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Disadvantages&lt;/strong&gt;: Lower-level modules are tested later in the process, delaying defect discovery.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Mixed (Sandwich) Integration Testing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt;: Combines top-down and bottom-up approaches to integrate and test components simultaneously from both ends.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Advantages&lt;/strong&gt;: Allows parallel integration, detecting defects at multiple levels early.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Disadvantages&lt;/strong&gt;: Requires careful planning to synchronize both testing strategies.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
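&lt;p&gt;To make the stub-based approaches concrete, here is a small top-down sketch in Python. &lt;code&gt;CheckoutFlow&lt;/code&gt; and &lt;code&gt;ShippingStub&lt;/code&gt; are illustrative names, not from any particular framework:&lt;/p&gt;

```python
# Top-down integration sketch: the top-level checkout flow is real, while the
# lower-level shipping module (not yet integrated) is replaced by a stub.

class ShippingStub:
    """Stands in for the lower-level shipping module."""
    def quote(self, weight_kg):
        return 5.00  # canned response, ignores the input

class CheckoutFlow:
    def __init__(self, shipping):
        self.shipping = shipping

    def total(self, subtotal, weight_kg):
        return round(subtotal + self.shipping.quote(weight_kg), 2)

# Validate the user-facing flow early, before shipping is fully built
flow = CheckoutFlow(ShippingStub())
assert flow.total(19.99, 1.2) == 24.99
```

&lt;p&gt;Later, the stub is swapped for the real shipping module and the same test re-runs unchanged, which is the main payoff of the top-down approach. A bottom-up run would invert this: a driver would exercise the real shipping module directly.&lt;/p&gt;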

&lt;h3&gt;
  
  
  &lt;strong&gt;Best Practices for Integration Testing&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Plan Early&lt;/strong&gt;: Start planning your integration tests during the design phase to ensure you have the right test cases in place.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Clear Test Cases&lt;/strong&gt;: Write clear and concise test cases that cover a variety of scenarios — including failure conditions and edge cases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automation&lt;/strong&gt;: Use automated testing tools (like Postman, JUnit, or Keploy) to speed up the process and run tests more frequently.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use Mock Data&lt;/strong&gt;: If possible, use &lt;a href="https://keploy.io/blog/community/a-technical-guide-to-test-mock-data-levels-tools-and-best-practices" rel="noopener noreferrer"&gt;mock data&lt;/a&gt; or services to simulate real interactions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance Testing&lt;/strong&gt;: Consider measuring response times and performance during integration testing, especially for high-volume applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Tools for Integration Testing
&lt;/h3&gt;

&lt;p&gt;Popular tools like Postman, JUnit, and Selenium cover many needs, but several more specialized tools and their use cases are worth knowing:&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1. Keploy&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe67mxncuryklh9rfzy3k.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe67mxncuryklh9rfzy3k.webp" alt="keploy" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt;: Keploy is an automation tool that helps developers generate integration tests by recording real user interactions and replaying them as test cases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use Case&lt;/strong&gt;: Ideal for automating &lt;strong&gt;API&lt;/strong&gt;, &lt;strong&gt;service&lt;/strong&gt;, and &lt;strong&gt;UI&lt;/strong&gt; integration tests with minimal manual effort.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Why It’s Useful&lt;/strong&gt;: Keploy saves time by automatically creating test cases and integrating them into &lt;strong&gt;CI/CD pipelines&lt;/strong&gt;, ensuring repeatability and reliability.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;2. SoapUI&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frng7u229fb5agstk0pzx.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frng7u229fb5agstk0pzx.webp" alt="SoapUI" width="800" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt;: SoapUI is a tool designed specifically for testing &lt;strong&gt;SOAP&lt;/strong&gt; and &lt;strong&gt;REST&lt;/strong&gt; web services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use Case&lt;/strong&gt;: Great for testing APIs that interact with multiple external systems and services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Why It’s Useful&lt;/strong&gt;: SoapUI supports functional, load, and security testing for APIs, ensuring comprehensive validation for service integration.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;3. Citrus&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1gc582mzlgarav4inwu.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1gc582mzlgarav4inwu.webp" alt="Citrus" width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt;: Citrus is designed for application integration testing in messaging applications and microservices.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use Case&lt;/strong&gt;: Perfect for validating asynchronous systems and message-based communication.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Why It’s Useful&lt;/strong&gt;: Citrus supports JMS, HTTP, and other protocols, providing a robust framework for testing message-based interactions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;4. Postman&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhl86ajims0cj2fbmuogt.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhl86ajims0cj2fbmuogt.webp" alt="Postman" width="800" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt;: Postman is a popular tool for &lt;a href="https://keploy.io/blog/community/everything-you-need-to-know-about-api-testing" rel="noopener noreferrer"&gt;API testing&lt;/a&gt;, enabling developers to send HTTP requests and validate responses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use Case&lt;/strong&gt;: Widely used for testing RESTful APIs and simulating real-world user requests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Why It’s Useful&lt;/strong&gt;: With its automation and workflow features, Postman ensures your APIs are robust and properly integrated into your applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Importance of Test Data Management&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Good &lt;a href="https://keploy.io/blog/community/7-best-test-data-management-tools-in-2024" rel="noopener noreferrer"&gt;test data management&lt;/a&gt; is key to reliable service integration testing. Use realistic data that accurately represents real-world scenarios. Here are some recommendations to promote test data consistency:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use Mock Data in Place of External Services&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
If external system services are unavailable, use mock data that simulates external services' behavior.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Consistency&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
For integration tests to be meaningful, the data utilized in those tests should remain consistent across tests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Anonymize Data&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
If using production data, always anonymize it to comply with privacy laws and regulations.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
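&lt;p&gt;A minimal sketch of the anonymization recommendation, assuming hypothetical field names: a one-way hash strips PII while keeping values consistent across test runs, which also serves the data-consistency goal above.&lt;/p&gt;

```python
import hashlib

# Fields treated as PII; names are illustrative, not a standard
PII_FIELDS = {"email", "name", "phone"}

def anonymize(record):
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            # one-way hash: same input always maps to the same token
            out[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            out[key] = value
    return out

prod_row = {"id": 42, "email": "jane@example.com", "plan": "pro"}
test_row = anonymize(prod_row)

assert test_row["id"] == 42 and test_row["plan"] == "pro"  # structure preserved
assert test_row["email"] != prod_row["email"]              # PII removed
assert anonymize(prod_row) == test_row                     # stable across runs
```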

&lt;h3&gt;
  
  
  &lt;strong&gt;Real-Life Case Studies&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;E-commerce Platform Example&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Integration tests ensure that different services in an &lt;a href="https://www.cs-cart.com/" rel="noopener noreferrer"&gt;e-commerce platform&lt;/a&gt; communicate properly. When a user adds an item to their cart and proceeds to checkout, integration tests ensure services like inventory management, payment gateways, and shipping services work seamlessly together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Healthcare Application Example&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
In a healthcare platform, integration tests ensure that patient registration data interacts correctly with the billing and appointment scheduling systems. Integration tests help ensure that when a patient registers, the system updates the appointment schedule and billing data in real-time.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Challenges &amp;amp; Solutions&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Managing External Dependencies&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Mocking tools or containerized environments can replicate the behavior of external dependencies, making testing more effective when services are unavailable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Governance&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Create realistic test data and reset it after each test to maintain consistency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Working with Asynchronous Systems&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: For message-driven or event-based systems, use tools like Citrus to manage message delivery and timing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Applications of Integration Testing
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w89f1qhkx8xsb97m9rh.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w89f1qhkx8xsb97m9rh.webp" alt="Application of Integration Testing" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Integration testing is a vital ingredient of contemporary software systems. When many components, services, or layers work with each other, it provides assurance that they are performing as expected. The areas below highlight situations where integration testing is most useful.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Microservices Architectures&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://keploy.io/blog/community/getting-started-with-microservices-testing" rel="noopener noreferrer"&gt;Microservices Testing&lt;/a&gt; generally refers to applications that distribute functionality among multiple deployable services that can be deployed independently. With integration tests in a microservice architecture, one can validate the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Reliable inter-service communication through either REST APIs or gRPC interfaces&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Proper messages are delivered through message queuing systems (e.g., Kafka or RabbitMQ)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Services can register and discover each other in a dynamic environment (e.g., Consul or Eureka)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: One test could verify that the order service actually calls the payments service, and that the payments service returns the expected response.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Client–Server Systems&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;For most traditional or modern client-server applications (e.g., web apps or mobile applications), integration tests can validate that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The frontend interface calls and communicates with backend APIs as expected&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data flows from a user’s client-side action through to the database, where the change is reflected correctly&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Authentication and session state are managed consistently across all layers of the system&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: Verify that the form submission from the web client is received by the server.&lt;/p&gt;
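&lt;p&gt;The form-submission example can be sketched with only the Python standard library: a tiny HTTP server stands in for the backend, and the test plays the web client. Both the client-visible response and the server-side state are asserted:&lt;/p&gt;

```python
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = {}  # server-side record of what the form handler saw

class FormHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        body = self.rfile.read(length).decode()
        received.update(urllib.parse.parse_qs(body))
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), FormHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Act as the web client: submit a form to the server
data = urllib.parse.urlencode({"username": "jane"}).encode()
url = "http://127.0.0.1:%d/submit" % server.server_port
resp = urllib.request.urlopen(url, data=data, timeout=5)

# Assert both sides of the integration: response and server-side state
assert resp.status == 200 and resp.read() == b"ok"
assert received["username"] == ["jane"]
server.shutdown()
```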

&lt;h4&gt;
  
  
  &lt;strong&gt;Third-Party Integrations&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Many applications rely on external services to provide core functionality. Integration tests can verify:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Thorough and valid consumption of external APIs (like Google Maps, OAuth, Stripe)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Correct handling of failures such as timeouts, malformed responses, and breaking changes introduced by new API versions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security and compliance when transmitting sensitive information.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: Ensure that if a third-party gateway payment fails, the application logs the failure and appropriately handles it.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Data Pipelines&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In systems whose primary job is data transformation and movement (such as ETL/ELT workflows), an integration test can confirm:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Proper sequencing and transformation of data across all processing stages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data integrity from the moment data is read from the source until it is stored or visualized.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Handling schema changes or missing data.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: Ensuring raw, unprocessed log data is cleaned, transformed appropriately, and loaded into the data warehouse.&lt;/p&gt;
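&lt;p&gt;A toy version of such a pipeline test, with a hypothetical log format and warehouse schema, might look like this. All stages run end to end against an in-memory warehouse, and malformed input is deliberately included:&lt;/p&gt;

```python
import sqlite3

raw_logs = [
    "2024-01-01|alice|200",
    "bad line",                 # malformed data the pipeline must handle
    "2024-01-01|bob|500",
]

def extract(lines):
    for line in lines:
        parts = line.split("|")
        if len(parts) == 3:     # skip rows that do not match the schema
            yield parts

def transform(rows):
    for day, user, status in rows:
        yield (day, user.strip().lower(), int(status))

def load(conn, rows):
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
    conn.commit()

# Run all stages together against an in-memory "warehouse"
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (day TEXT, user TEXT, status INTEGER)")
load(conn, transform(extract(raw_logs)))

count, errors = conn.execute(
    "SELECT COUNT(*), SUM(status >= 500) FROM events").fetchone()
assert count == 2      # the malformed line was dropped, not loaded
assert errors == 1     # data kept its meaning through every stage
```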

&lt;h2&gt;
  
  
  &lt;a href="https://keploy.io/blog/community/manual-vs-automation-testing" rel="noopener noreferrer"&gt;Manual Testing vs. Automated Testing&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxzpc3msju72dutqj861.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxzpc3msju72dutqj861.webp" alt="manual testing vs automated testing" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Manual Integration Testing&lt;/th&gt;
&lt;th&gt;Automated Integration Testing&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Repeatability&lt;/td&gt;
&lt;td&gt;Prone to human error, time-consuming&lt;/td&gt;
&lt;td&gt;Fast, consistent, and repeatable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Coverage&lt;/td&gt;
&lt;td&gt;Limited by the tester’s time&lt;/td&gt;
&lt;td&gt;Can cover many scenarios overnight&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Maintenance Effort&lt;/td&gt;
&lt;td&gt;Low initial setup, high ongoing cost&lt;/td&gt;
&lt;td&gt;High initial setup, low ongoing cost&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reporting&lt;/td&gt;
&lt;td&gt;Subjective, ad-hoc logs&lt;/td&gt;
&lt;td&gt;Structured logs, metrics, and dashboards&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Automated Testing&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
&lt;a href="https://keploy.io/blog/community/guide-to-automated-testing-tools-in-2025" rel="noopener noreferrer"&gt;Automated testing&lt;/a&gt; is well suited to repetitive, high-volume, and regression scenarios. It provides faster feedback, better scalability, and greater reliability than manual testing.&lt;/p&gt;

&lt;p&gt;Keploy improves automated service-level testing by capturing real user interactions to automatically generate test cases without writing them yourself.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why Choose Keploy for Integration Testing?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="http://keploy.io" rel="noopener noreferrer"&gt;Keploy&lt;/a&gt; revolutionizes integration testing by capturing real API traffic and automatically generating test cases from it. It mocks external systems, ensuring that the tests are repeatable and reliable, making integration testing easier and faster. With seamless CI/CD integration, Keploy ensures that your code is always validated before it reaches production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fij3q8lqvcwql0ak5f26k.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fij3q8lqvcwql0ak5f26k.webp" alt="Keploy Logo" width="654" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Key benefits of using Keploy for integration testing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Traffic-Based Test Generation&lt;/strong&gt;: Capture real user traffic and convert it into automated test cases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Mocking &amp;amp; Isolation&lt;/strong&gt;: Mock external systems to ensure repeatable, isolated tests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Regression Detection&lt;/strong&gt;: Automatically replay tests to detect integration issues with every code change.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CI/CD Integration&lt;/strong&gt;: Works seamlessly with GitHub Actions, Jenkins, and GitLab CI for continuous testing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Integration testing is crucial for ensuring that all components in your software application work as expected when combined. By following the best practices and utilizing tools like Keploy, you can streamline your testing process, detect issues early, and ensure your system is reliable.&lt;/p&gt;

&lt;p&gt;Whether you’re working with microservices or a monolithic architecture, integration testing helps ensure smooth communication and functionality across modules, ultimately improving the quality and reliability of your software.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;FAQs&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;How frequently should I run integration tests?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Integration tests should be run on every pull request in your CI pipeline and as part of nightly regression testing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Can integration tests replace unit tests?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
No, unit tests check individual units, while integration tests ensure that units work together.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;How does Keploy help with integration testing?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Keploy automates integration testing by recording real user interactions and generating tests, while mocking external systems to ensure repeatability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Is it appropriate to use mocks for external services?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Use real services when possible, but mocks are a great alternative when external services are unavailable or costly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;How do integration tests differ from E2E tests?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Integration tests check the interactions between modules, while end-to-end tests check entire user workflows across the system.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Reference: &lt;a href="https://keploy.io/blog/community/integration-testing-a-comprehensive-guide" rel="noopener noreferrer"&gt;Keploy.io&lt;/a&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>cicd</category>
      <category>automation</category>
      <category>software</category>
    </item>
    <item>
      <title>How AI Is Changing Integration, Functional, and End to End Testing</title>
      <dc:creator>Michael burry</dc:creator>
      <pubDate>Thu, 01 Jan 2026 08:09:22 +0000</pubDate>
      <link>https://dev.to/michael_burry_00/how-ai-is-changing-integration-functional-and-end-to-end-testing-4093</link>
      <guid>https://dev.to/michael_burry_00/how-ai-is-changing-integration-functional-and-end-to-end-testing-4093</guid>
      <description>&lt;p&gt;Software teams today are shipping faster than ever. Microservices, APIs, cloud infrastructure, and continuous deployment have become the norm. While this speed helps teams deliver value quickly, it also puts a lot of pressure on testing. Traditional automation struggles to keep up with constantly changing systems, flaky environments, and growing test maintenance costs.&lt;/p&gt;

&lt;p&gt;This is where AI powered testing tools are starting to make a real impact. Instead of relying only on static scripts, AI driven approaches focus on behavior, patterns, and real system usage. The result is smarter testing across integration testing, functional testing, and end to end testing.&lt;/p&gt;

&lt;p&gt;This article explores how AI is reshaping these three critical testing layers and what that means for modern development teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integration Testing in a Rapidly Changing System
&lt;/h2&gt;

&lt;p&gt;Integration testing focuses on verifying how different parts of a system work together. This includes service to service communication, API contracts, database interactions, and external dependencies. In modern architectures, even a small change in one service can break several integrations.&lt;/p&gt;

&lt;p&gt;Traditional integration tests are usually written manually and tightly coupled to implementation details. As APIs evolve or schemas change, these tests tend to break even when the system is still working correctly. Over time, teams spend more effort fixing tests than validating behavior.&lt;/p&gt;

&lt;p&gt;AI changes this approach by learning how services actually interact. Instead of relying only on predefined assertions, AI-driven tools analyze request and response patterns, detect anomalies, and generate integration test scenarios based on real traffic or observed behavior.&lt;/p&gt;

&lt;p&gt;This leads to better coverage of real-world use cases. It also reduces false failures caused by minor, non-breaking changes. Integration testing becomes more resilient and more aligned with how systems behave in production.&lt;/p&gt;
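&lt;p&gt;As a minimal illustration of that idea, here is a sketch in Python of a shape-based response comparison. The data and function names are invented for the example, not taken from any real tool; the point is that comparing key structure instead of exact values lets minor, non-breaking changes pass while still catching removed fields.&lt;/p&gt;

```python
# Illustrative sketch: compare the *shape* of a recorded response against a
# new one, so extra fields or changed volatile values do not fail the test.

def schema_keys(obj, prefix=""):
    """Flatten a JSON-like dict into a set of dotted key paths."""
    keys = set()
    for k, v in obj.items():
        path = f"{prefix}{k}"
        keys.add(path)
        if isinstance(v, dict):
            keys |= schema_keys(v, path + ".")
    return keys

def compatible(recorded, observed):
    """Observed response is compatible if every recorded key still exists."""
    return schema_keys(recorded).issubset(schema_keys(observed))

recorded = {"id": 42, "status": "paid", "meta": {"ts": "2026-01-01"}}
observed = {"id": 42, "status": "paid", "meta": {"ts": "2026-01-02"}, "new": 1}
print(compatible(recorded, observed))  # True: extra field and changed value do not fail
```

&lt;p&gt;A real traffic-based tool would record the baseline for you; the sketch only shows why structural comparison produces fewer false failures than exact-match assertions.&lt;/p&gt;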

&lt;h2&gt;
  
  
  Functional Testing Beyond Static Test Cases
&lt;/h2&gt;

&lt;p&gt;Functional testing ensures that features behave according to business requirements. It answers questions like whether a user can log in, place an order, or update a profile successfully. While functional testing is essential, maintaining large functional test suites is often painful.&lt;/p&gt;

&lt;p&gt;Manual test writing does not scale well, and scripted automation quickly becomes outdated as requirements change. Small UI or API changes can cause dozens of functional tests to fail even when the feature still works.&lt;/p&gt;

&lt;p&gt;AI-powered functional testing focuses on intent rather than exact steps. Instead of testing every click or response value rigidly, AI models understand expected outcomes and acceptable variations. They can generate functional test cases from requirements, user stories, or observed usage flows.&lt;/p&gt;

&lt;p&gt;Another advantage is stability. AI systems can recognize flaky behavior and adjust execution dynamically. This reduces noise in test results and helps teams focus on real functional issues instead of false alarms.&lt;/p&gt;

&lt;p&gt;As a result, functional testing becomes less about maintaining scripts and more about validating real business behavior.&lt;/p&gt;
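&lt;p&gt;To make "intent rather than exact steps" concrete, here is a hypothetical sketch in Python. The response shape and field names are illustrative, not a real framework API: the test states the business outcome (a login succeeded) as predicates, so acceptable variation in the response does not break it.&lt;/p&gt;

```python
# Illustrative sketch: a functional test phrased as an expected *outcome*
# rather than a step-by-step script with rigid value assertions.

def outcome_holds(response, expectations):
    """Each expectation is a (field, predicate) pair checked on the response."""
    return all(pred(response.get(field)) for field, pred in expectations)

# Simulated response from a login flow (names here are invented for the example).
login_response = {"status": "ok", "token": "abc123", "retries": 0}

login_succeeded = [
    ("status", lambda s: s == "ok"),
    ("token", lambda t: bool(t)),  # any non-empty token is acceptable
]

print(outcome_holds(login_response, login_succeeded))  # True
```

&lt;p&gt;Because the assertion is about the outcome, adding a field or changing the token format leaves the test green, which is the stability property the paragraph above describes.&lt;/p&gt;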

&lt;h2&gt;
  
  
  End-to-End Testing That Reflects Real User Journeys
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://keploy.io/blog/community/end-to-end-test-automation-guide" rel="noopener noreferrer"&gt;End to end testing&lt;/a&gt; validates complete workflows across the entire system. This includes frontend interactions, backend services, databases, and third party integrations. These tests provide high confidence but are also the most expensive to build and maintain.&lt;/p&gt;

&lt;p&gt;Traditional end-to-end testing often relies on long, fragile scripts that break whenever something changes in the UI or backend. Because of this, teams either limit their end-to-end coverage or avoid running these tests frequently.&lt;/p&gt;

&lt;p&gt;AI brings a different approach. Instead of scripting every path manually, AI can observe how users actually interact with the system and generate realistic end-to-end flows automatically. These flows reflect real usage patterns rather than idealized test scenarios.&lt;/p&gt;

&lt;p&gt;AI can also help with test data generation, environment variability, and failure analysis. When an end-to-end test fails, AI-based tools can analyze logs, network calls, and behavior patterns to identify the likely root cause. This saves significant debugging time.&lt;/p&gt;
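&lt;p&gt;A minimal sketch of what a journey-shaped test can look like, with invented step names and no real framework behind it: the flow is an ordered list of steps derived from usage, and a failure reports exactly which step of the journey broke, which is the starting point for automated root-cause hints.&lt;/p&gt;

```python
# Illustrative sketch: an end-to-end flow expressed as an ordered list of
# named steps over shared journey state, instead of one long fragile script.

def run_journey(steps, state):
    for name, step in steps:
        if not step(state):
            return f"failed at step: {name}"
    return "journey passed"

# Simulated steps for a shopping journey (names are invented for the example).
steps = [
    ("browse",   lambda s: s.setdefault("cart", []) is not None),
    ("add_item", lambda s: s["cart"].append("sku-1") is None),
    ("checkout", lambda s: len(s["cart"]) != 0),
]

print(run_journey(steps, {}))  # journey passed
```

&lt;p&gt;Keeping each step small and named makes the failure message itself diagnostic, rather than leaving the team to bisect a monolithic script.&lt;/p&gt;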

&lt;p&gt;With AI, end-to-end testing becomes more reliable, more representative of real users, and easier to maintain.&lt;/p&gt;

&lt;h2&gt;
  
  
  How AI Improves Test Maintenance and Developer Confidence
&lt;/h2&gt;

&lt;p&gt;One of the biggest challenges in testing is maintenance. Tests that require constant updates quickly lose trust. AI helps reduce this burden by adapting tests as systems evolve.&lt;/p&gt;

&lt;p&gt;Instead of failing immediately when something changes, AI-driven tests can evaluate whether the change actually affects expected behavior. This leads to fewer false positives and more meaningful feedback.&lt;/p&gt;
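&lt;p&gt;One simple way to picture that evaluation, sketched in Python with invented field names: before failing on a changed response, ignore fields known to be volatile (timestamps, request ids) and flag only differences in the fields that carry expected behavior.&lt;/p&gt;

```python
# Illustrative sketch: a diff that ignores volatile fields so a test fails
# only when behaviorally meaningful values change.

VOLATILE = {"timestamp", "request_id", "latency_ms"}  # assumed volatile fields

def meaningful_diff(old, new, volatile=VOLATILE):
    """Return the set of keys whose values changed, excluding volatile ones."""
    keys = (set(old) | set(new)) - volatile
    return {k for k in keys if old.get(k) != new.get(k)}

before = {"status": "ok", "total": 99, "timestamp": "t1"}
after_ = {"status": "ok", "total": 99, "timestamp": "t2"}
print(meaningful_diff(before, after_))  # set(): no meaningful change, test stays green
```

&lt;p&gt;AI-assisted tools can go further by learning which fields are volatile from observed traffic, but the principle is the same: separate noise from signal before failing the build.&lt;/p&gt;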

&lt;p&gt;For developers, this means faster feedback loops and higher confidence in test results. Tests become a safety net rather than a bottleneck. Teams can move faster without sacrificing quality.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Role of Testers in an AI-Driven Testing World
&lt;/h2&gt;

&lt;p&gt;AI does not eliminate the need for testers. Instead, it shifts their role. Testers spend less time writing and fixing scripts and more time focusing on test strategy, risk analysis, exploratory testing, and understanding user behavior.&lt;/p&gt;

&lt;p&gt;AI handles repetitive and data heavy tasks. Humans focus on judgment, creativity, and business context. This collaboration leads to better quality outcomes than either approach alone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AI-powered testing is changing how teams approach integration, functional, and end-to-end testing. By focusing on behavior, patterns, and real usage, AI reduces maintenance effort and increases test reliability.&lt;/p&gt;

&lt;p&gt;As systems continue to grow in complexity, static testing approaches will struggle to keep up. Teams that adopt AI-driven testing early will be better positioned to ship faster, catch real issues earlier, and maintain confidence in their software quality.&lt;/p&gt;

</description>
      <category>e2e</category>
      <category>testing</category>
      <category>automation</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Agile vs Waterfall: A Practical Guide for Modern Development Teams</title>
      <dc:creator>Michael burry</dc:creator>
      <pubDate>Mon, 29 Dec 2025 11:01:48 +0000</pubDate>
      <link>https://dev.to/michael_burry_00/agile-vs-waterfall-a-practical-guide-for-modern-development-teams-551j</link>
      <guid>https://dev.to/michael_burry_00/agile-vs-waterfall-a-practical-guide-for-modern-development-teams-551j</guid>
      <description>&lt;p&gt;Choosing a development methodology is one of the most important decisions a team makes before starting a project. The way work is planned, built, tested, and delivered depends heavily on this choice. &lt;a href="https://keploy.io/blog/community/agile-vs-waterfall-methodology-guide" rel="noopener noreferrer"&gt;Agile Vs Waterfall&lt;/a&gt; is a comparison every developer eventually encounters, especially when moving between startups, enterprises, or different engineering cultures.&lt;/p&gt;

&lt;p&gt;This article breaks down both methodologies from a practical engineering perspective and helps you decide which one fits your project and team.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is the Waterfall Model
&lt;/h2&gt;

&lt;p&gt;Waterfall is a linear and sequential approach to software development. Each phase of the project flows into the next, starting with requirements and ending with deployment and maintenance. Once a phase is completed, the team does not go back unless there is a major revision.&lt;/p&gt;

&lt;p&gt;Waterfall works best when requirements are clearly defined from the beginning. Teams spend significant time documenting specifications, architecture, and acceptance criteria before writing any code. This structure makes progress easy to track and reduces ambiguity, but it also limits flexibility when changes appear later in the lifecycle.&lt;/p&gt;

&lt;p&gt;From a developer perspective, Waterfall often means long development phases followed by testing near the end. Bugs or design issues discovered late can be expensive to fix, especially if they affect earlier decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Agile Development
&lt;/h2&gt;

&lt;p&gt;Agile is an iterative and incremental approach focused on delivering small, working pieces of software frequently. Instead of waiting months for a final release, teams work in short cycles called sprints and continuously improve the product based on feedback.&lt;/p&gt;

&lt;p&gt;Agile emphasizes collaboration between developers, testers, product managers, and stakeholders. Requirements are treated as evolving rather than fixed. This allows teams to respond quickly to changing user needs or technical challenges.&lt;/p&gt;

&lt;p&gt;For developers, Agile usually means faster feedback, more frequent releases, and closer alignment with product goals. It also requires strong communication and discipline, since less upfront documentation means decisions must be clearly shared within the team.&lt;/p&gt;

&lt;h2&gt;
  
  
  Agile vs Waterfall: Core Differences
&lt;/h2&gt;

&lt;p&gt;The biggest difference between Agile and Waterfall is how change is handled. Waterfall assumes stability and resists change once development begins. Agile expects change and builds processes around adapting quickly.&lt;/p&gt;

&lt;p&gt;Delivery is another key difference. Waterfall delivers the product at the end of the cycle, while Agile delivers usable features continuously. Testing in Waterfall typically happens after development, whereas Agile integrates testing throughout each sprint.&lt;/p&gt;

&lt;p&gt;Documentation also differs significantly. Waterfall relies on detailed documentation upfront. Agile prioritizes working software and collaboration, using documentation only where it adds value.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Waterfall Makes Sense
&lt;/h2&gt;

&lt;p&gt;Waterfall is still relevant for certain types of projects. It works well when requirements are fixed, scope is clearly defined, and compliance or regulatory approvals are required. Examples include financial systems, government applications, and large-scale infrastructure projects.&lt;/p&gt;

&lt;p&gt;In these environments, predictability and documentation are more important than speed or flexibility. Teams benefit from knowing exactly what needs to be built and when.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Agile Is the Better Choice
&lt;/h2&gt;

&lt;p&gt;Agile is ideal for products with evolving requirements or unclear initial scope. Most modern web applications, SaaS platforms, and internal tools benefit from Agile because user feedback and market conditions change frequently.&lt;/p&gt;

&lt;p&gt;Agile allows developers to ship early, learn from real usage, and reduce the risk of building the wrong thing. It also supports continuous integration and continuous delivery practices, which are common in modern engineering teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hybrid Approaches in Real-World Teams
&lt;/h2&gt;

&lt;p&gt;Many teams today do not follow pure Agile or pure Waterfall. Instead, they use hybrid approaches. High level planning and architecture may follow a Waterfall style, while development and testing are done iteratively using Agile practices.&lt;/p&gt;

&lt;p&gt;This approach helps teams maintain long term direction while still adapting to change during implementation. It is especially common in larger organizations transitioning from traditional models to more modern workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Choosing between Agile and Waterfall is not about which methodology is better overall. It is about picking the right approach for your project, team, and constraints. Waterfall offers structure and predictability. Agile provides flexibility and faster feedback.&lt;/p&gt;

&lt;p&gt;The most effective development teams understand both models and apply them thoughtfully rather than following one rigidly. By aligning methodology with real world needs, teams can deliver better software with fewer surprises.&lt;/p&gt;

</description>
      <category>sdlc</category>
      <category>agile</category>
      <category>waterfall</category>
      <category>software</category>
    </item>
  </channel>
</rss>
