<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mohamed Tahri</title>
    <description>The latest articles on DEV Community by Mohamed Tahri (@metahris).</description>
    <link>https://dev.to/metahris</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3580910%2F534e0445-22a4-462f-b13f-7d0ecd8dcff8.jpeg</url>
      <title>DEV Community: Mohamed Tahri</title>
      <link>https://dev.to/metahris</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/metahris"/>
    <language>en</language>
    <item>
      <title>Snapshot Testing in Python with pytest-verify — Part 2: Async Support</title>
      <dc:creator>Mohamed Tahri</dc:creator>
      <pubDate>Sat, 25 Oct 2025 17:05:59 +0000</pubDate>
      <link>https://dev.to/metahris/snapshot-testing-in-python-with-pytest-verify-part-2-async-support-1fi9</link>
      <guid>https://dev.to/metahris/snapshot-testing-in-python-with-pytest-verify-part-2-async-support-1fi9</guid>
      <description>&lt;p&gt;In the &lt;a href="https://dev.to/metahris/snapshot-testing-in-python-with-pytest-verify-1bgo"&gt;previous article&lt;/a&gt;, we explored how &lt;a href="https://github.com/metahris/pytest-verify" rel="noopener noreferrer"&gt;pytest-verify&lt;/a&gt; makes snapshot testing effortless for structured data — JSON, YAML, XML, DataFrames, and more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But what if your tests are asynchronous?&lt;/strong&gt;&lt;br&gt;
Modern Python apps rely heavily on async I/O — think of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Async web frameworks like FastAPI, aiohttp, or Quart&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Async database drivers (asyncpg, motor)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Message brokers and streaming systems&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Until now, testing async functions meant juggling plugins and decorators.&lt;br&gt;
Not anymore 🚀&lt;/p&gt;
&lt;h2&gt;Introducing Native Async Support&lt;/h2&gt;

&lt;p&gt;Starting from &lt;strong&gt;pytest-verify v1.2.0&lt;/strong&gt;, you can snapshot async test functions directly — no wrappers, no extra ceremony.&lt;/p&gt;

&lt;p&gt;Whether your test is async or simply returns a coroutine, &lt;strong&gt;@verify_snapshot&lt;/strong&gt; now handles both seamlessly.&lt;/p&gt;
&lt;h2&gt;⚙️ Setup&lt;/h2&gt;

&lt;p&gt;1. Install the async extras:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install pytest-verify[async]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2. Enable async mode in your pytest configuration (in either &lt;strong&gt;pytest.ini&lt;/strong&gt; or &lt;strong&gt;pyproject.toml&lt;/strong&gt;):&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;pyproject.toml&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[tool.pytest.ini_options]
asyncio_mode = "auto"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;pytest.ini&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[pytest]
asyncio_mode = auto
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Example 1 — Async REST API Snapshot&lt;/h2&gt;

&lt;p&gt;Let’s simulate a lightweight async API call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import asyncio
from pytest_verify import verify_snapshot

async def get_user():
    await asyncio.sleep(0.1)
    return {"id": 123, "name": "Mohamed", "country": "France"}

@verify_snapshot()
async def test_async_user_snapshot():
    """Ensure async API output stays stable."""
    user = await get_user()
    return user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ On first run → baseline snapshot created.&lt;br&gt;
✅ On next runs → automatic comparison with full diff view if anything changes.&lt;/p&gt;
&lt;h2&gt;Example 2 — Async Data Transformation&lt;/h2&gt;

&lt;p&gt;You can also snapshot results from async pipelines or background tasks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from pytest_verify import verify_snapshot


async def compute_metrics():
    return {"accuracy": 99.94, "loss": 0.102}


@verify_snapshot(abs_tol=0.1)
async def test_async_data_pipeline():
    """Verify numeric tolerance in async output."""
    return await compute_metrics()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ pytest-verify waits for the coroutine, serializes the returned dict, and applies your configured tolerances automatically — no extra setup.&lt;/p&gt;
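&lt;p&gt;Conceptually, supporting both shapes comes down to checking whether the test call produced a coroutine. Here is a minimal sketch of that dispatch logic — illustrative only, not pytest-verify’s actual implementation:&lt;/p&gt;

```python
import asyncio
import functools

def snapshot_aware(func):
    # Illustrative sketch only -- not pytest-verify's real code.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        # Covers both `async def` tests and sync tests returning a coroutine.
        if asyncio.iscoroutine(result):
            result = asyncio.run(result)
        return result  # the real decorator would serialize and compare here
    return wrapper

@snapshot_aware
async def async_style():
    await asyncio.sleep(0)
    return {"ok": True}

@snapshot_aware
def coroutine_returning_style():
    async def inner():
        return {"ok": True}
    return inner()

print(async_style())               # {'ok': True}
print(coroutine_returning_style()) # {'ok': True}
```

&lt;p&gt;Either way, the decorator ends up with a plain value it can snapshot.&lt;/p&gt;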

&lt;h2&gt;Example 3 — Async Ignore Fields and Tolerances&lt;/h2&gt;

&lt;p&gt;You can combine async support with all snapshot options:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from pytest_verify import verify_snapshot
import asyncio, random

@verify_snapshot(
    ignore_fields=["$.meta.timestamp"],
    abs_tol_fields={"$.metrics.latency": 1.0}
)
async def test_async_snapshot_with_ignores():
    await asyncio.sleep(0.05)
    return {
        "meta": {"timestamp": "2025-10-26T09:00:00Z"},
        "metrics": {"latency": random.uniform(99, 101)},
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ This test ignores the volatile timestamp field&lt;br&gt;
✅ Allows ±1.0 drift for latency values&lt;br&gt;
✅ Works perfectly under async execution&lt;/p&gt;
&lt;h2&gt;Wrapping Up — Async Testing Made Effortless&lt;/h2&gt;

&lt;p&gt;In modern Python, async is everywhere — your tests shouldn’t lag behind.&lt;br&gt;
With &lt;strong&gt;pytest-verify &amp;gt;= v1.2.0&lt;/strong&gt;, you can now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Snapshot async APIs and coroutine results directly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use all features — ignore fields, tolerances, diff viewer — in async mode.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Keep your test suite consistent and declarative.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No extra plugins. No decorator juggling. Just pure, powerful snapshot testing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;💡 Final Thought&lt;/h2&gt;

&lt;p&gt;If your tests look like this 👇:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;await some_async_func()
assert ...
assert ...
assert ...
assert ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now replace them with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@verify_snapshot()
async def test_async_output():
    return await some_async_func()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cleaner, safer, and instantly snapshot-aware.&lt;/p&gt;

&lt;p&gt;If you find this new async support useful, give &lt;a href="https://github.com/metahris/pytest-verify" rel="noopener noreferrer"&gt;pytest-verify&lt;/a&gt; a ⭐ on GitHub and&lt;br&gt;
share your feedback!&lt;/p&gt;

</description>
      <category>python</category>
      <category>pytest</category>
      <category>snapshot</category>
      <category>testing</category>
    </item>
    <item>
      <title>Snapshot Testing in Python with pytest-verify</title>
      <dc:creator>Mohamed Tahri</dc:creator>
      <pubDate>Fri, 24 Oct 2025 09:15:39 +0000</pubDate>
      <link>https://dev.to/metahris/snapshot-testing-in-python-with-pytest-verify-1bgo</link>
      <guid>https://dev.to/metahris/snapshot-testing-in-python-with-pytest-verify-1bgo</guid>
<description>&lt;h2&gt;💡 Why Snapshot Testing Matters&lt;/h2&gt;

&lt;p&gt;When you work with &lt;strong&gt;APIs&lt;/strong&gt;, &lt;strong&gt;machine learning models&lt;/strong&gt;, &lt;strong&gt;data pipelines&lt;/strong&gt;, or &lt;strong&gt;configuration files&lt;/strong&gt;, your Python tests often deal with &lt;strong&gt;large structured outputs&lt;/strong&gt; — JSON, YAML, XML, DataFrames, etc.&lt;/p&gt;

&lt;p&gt;Keeping track of every single field with traditional assertions quickly becomes a nightmare:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    assert data["status"] == "ok"
    assert data["count"] == 200
    assert data["users"][0]["id"] == 123
    assert data["users"][0]["active"] is True
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of chasing fragile asserts, what if you could just &lt;em&gt;snapshot&lt;/em&gt; your API’s response and automatically detect meaningful changes?&lt;/p&gt;

&lt;p&gt;That’s what &lt;strong&gt;&lt;a href="https://pypi.org/project/pytest-verify/" rel="noopener noreferrer"&gt;pytest-verify&lt;/a&gt;&lt;/strong&gt; does.&lt;/p&gt;

&lt;h2&gt;🔍 Introducing pytest-verify&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/metahris/pytest-verify" rel="noopener noreferrer"&gt;pytest-verify&lt;/a&gt; is a lightweight extension to &lt;code&gt;pytest&lt;/code&gt; that&lt;br&gt;
&lt;strong&gt;automatically saves and compares your test outputs&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Instead of asserting &lt;strong&gt;field-by-field&lt;/strong&gt;, you just return the object,&lt;br&gt;
and &lt;code&gt;@verify_snapshot&lt;/code&gt; does the rest.&lt;/p&gt;

&lt;p&gt;It detects the data type (JSON, YAML, XML, etc.), serializes it,&lt;br&gt;
creates a &lt;code&gt;.expected&lt;/code&gt; snapshot, and compares future test runs to it.&lt;/p&gt;

&lt;p&gt;If something changes — you get a &lt;strong&gt;clear unified diff&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;⚙️ Installation&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install pytest-verify
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;🧠 How It Works&lt;/h2&gt;

&lt;p&gt;The decorator &lt;code&gt;@verify_snapshot&lt;/code&gt; automatically:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Detects the data format based on your test’s return type.&lt;/li&gt;
&lt;li&gt;Serializes it to a stable format (JSON, YAML, XML, etc.).&lt;/li&gt;
&lt;li&gt;Saves a baseline &lt;code&gt;.expected&lt;/code&gt; file on first run.&lt;/li&gt;
&lt;li&gt;Compares future runs against that baseline.&lt;/li&gt;
&lt;li&gt;Displays a unified diff when something changes.&lt;/li&gt;
&lt;/ol&gt;
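&lt;p&gt;The whole cycle can be sketched in a few lines — a toy version that assumes JSON-serializable output and ignores tolerances:&lt;/p&gt;

```python
import json
import tempfile
from pathlib import Path

def verify(name, result, snapshot_dir):
    """Toy save-or-compare cycle (illustrative only)."""
    path = Path(snapshot_dir) / f"{name}.expected.json"
    serialized = json.dumps(result, indent=2, sort_keys=True)
    if not path.exists():
        path.write_text(serialized)        # first run: create the baseline
        return True
    return path.read_text() == serialized  # later runs: compare

snap_dir = tempfile.mkdtemp()
print(verify("demo", {"status": "ok"}, snap_dir))   # True  (baseline created)
print(verify("demo", {"status": "ok"}, snap_dir))   # True  (matches baseline)
print(verify("demo", {"status": "bad"}, snap_dir))  # False (change detected)
```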

&lt;p&gt;On first run, it creates a snapshot file such as:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;__snapshots__/test_weather_api_snapshot.expected.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;On subsequent runs, it compares and prints a diff if the result&lt;br&gt;
has changed beyond your tolerances or ignored fields.&lt;/p&gt;
&lt;h2&gt;🌦 Example 1 — Snapshot Testing an API Response&lt;/h2&gt;

&lt;p&gt;Let’s say you’re testing a REST API endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests

def fetch_user_data():
    response = requests.get("https://api.example.com/users/42")
    return response.json()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you print it out, you get something like this 👇:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "user": {"id": 42, "name": "Ayoub", "role": "admin"},
  "meta": {"timestamp": "2025-10-24T12:00:00Z", "api_version": "v3.4"},
  "metrics": {"latency": 152.4, "success_rate": 99.9}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Perfect. Now let’s write a snapshot test for it.&lt;/p&gt;

&lt;p&gt;1. Basic API Snapshot&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from pytest_verify import verify_snapshot

@verify_snapshot()
def test_user_api_snapshot():
    from myapp.api import fetch_user_data
    return fetch_user_data()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 On the first run, this creates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;__snapshots__/test_user_api_snapshot.expected.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;with the formatted API response saved inside.&lt;br&gt;
On future runs, it compares automatically — no asserts required.&lt;/p&gt;

&lt;p&gt;2. Ignoring Dynamic Fields&lt;/p&gt;

&lt;p&gt;A day later, the API changes the timestamp and ID.&lt;br&gt;
Same structure, different values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "user": {"id": 1051, "name": "Ayoub", "role": "admin"},
  "meta": {"timestamp": "2025-10-25T10:05:00Z", "api_version": "v3.4"},
  "metrics": {"latency": 153.0, "success_rate": 99.9}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your test breaks — but should it?&lt;/p&gt;

&lt;p&gt;Let’s tell &lt;code&gt;pytest-verify&lt;/code&gt; to ignore fields that are expected to change:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@verify_snapshot(ignore_fields=["$.user.id", "$.meta.timestamp"])
def test_user_api_snapshot_ignore_fields():
    from myapp.api import fetch_user_data
    return fetch_user_data()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Now your snapshot ignores the dynamic fields while still catching real structure or data changes.&lt;/p&gt;

&lt;p&gt;3. Handling Numeric Drift with Global Tolerances&lt;/p&gt;

&lt;p&gt;Let’s say the backend metrics fluctuate a bit between runs.&lt;/p&gt;

&lt;p&gt;New response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "user": {"id": 42, "name": "Ayoub", "role": "admin"},
  "meta": {"timestamp": "2025-10-24T12:10:00Z", "api_version": "v3.4"},
  "metrics": {"latency": 152.9, "success_rate": 99.89}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tiny differences like these shouldn’t fail your test.&lt;br&gt;
This is where global tolerances come in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@verify_snapshot(
    ignore_fields=["$.meta.timestamp"],
    abs_tol=1.0,
    rel_tol=0.01
)
def test_user_api_snapshot_with_global_tolerance():
    from myapp.api import fetch_user_data
    return fetch_user_data()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ This allows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Any numeric field to vary by ±1.0 (abs_tol)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Or by up to 1% difference (rel_tol)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You don’t need to list every field — the tolerance applies globally to all numeric values.&lt;/p&gt;
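&lt;p&gt;This “pass if within either tolerance” rule mirrors the semantics of Python’s own &lt;code&gt;math.isclose&lt;/code&gt; — assumed here for illustration; check the pytest-verify docs for the plugin’s exact comparison rule:&lt;/p&gt;

```python
import math

# math.isclose passes when the difference is within EITHER the absolute
# or the relative tolerance (assumed to mirror the plugin's behavior).
assert math.isclose(152.4, 152.9, abs_tol=1.0, rel_tol=0.01)     # latency drift
assert math.isclose(99.9, 99.89, abs_tol=1.0, rel_tol=0.01)      # tiny rate change
assert not math.isclose(99.9, 109.9, abs_tol=1.0, rel_tol=0.01)  # real regression
```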

&lt;p&gt;4. Field-Specific Tolerances&lt;/p&gt;

&lt;p&gt;Now imagine you want finer control — maybe latency can fluctuate more than success rate.&lt;/p&gt;

&lt;p&gt;You can define per-field tolerances using JSONPath-like syntax:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@verify_snapshot(
    ignore_fields=["$.meta.timestamp"],
    abs_tol_fields={"$.metrics.latency": 0.5},
    rel_tol_fields={"$.metrics.success_rate": 0.005}
)
def test_user_api_snapshot_field_tolerances():
    from myapp.api import fetch_user_data
    return fetch_user_data()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Only metrics.latency allows ±0.5 difference&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Only metrics.success_rate allows 0.5% relative variation&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All other fields must match exactly.&lt;/p&gt;

&lt;p&gt;5. Complex JSON with Wildcards&lt;/p&gt;

&lt;p&gt;Now picture a microservice returning a full system report:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "services": [
    {"name": "auth", "uptime": 99.98, "latency": 210.5, "debug": "ok"},
    {"name": "billing", "uptime": 99.92, "latency": 315.7, "debug": "ok"}
  ],
  "meta": {"timestamp": "2025-10-25T11:00:00Z"}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can mix ignore fields, wildcards, and numeric tolerances easily:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@verify_snapshot(
    ignore_fields=["$.services[*].debug", "$.meta.timestamp"],
    abs_tol_fields={"$.services[*].latency": 1.0},
    rel_tol_fields={"$.services[*].uptime": 0.01}
)
def test_service_health_report():
    return {
        "services": [
            {"name": "auth", "uptime": 99.97, "latency": 211.3, "debug": "ok"},
            {"name": "billing", "uptime": 99.90, "latency": 314.9, "debug": "ok"},
        ],
        "meta": {"timestamp": "2025-10-25T11:30:00Z"},
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Wildcards ([*]) apply tolerance rules to every item in the list.&lt;/p&gt;
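&lt;p&gt;To see why wildcards work, here is a tiny resolver for &lt;code&gt;$.a[*].b&lt;/code&gt;-style paths — a simplified sketch; the plugin’s real JSONPath handling may be richer:&lt;/p&gt;

```python
def resolve(path, doc):
    """Collect every value matched by a '$.a[*].b'-style path."""
    nodes = [doc]
    for part in path.lstrip("$.").replace("[*]", ".*").split("."):
        matched = []
        for node in nodes:
            if part == "*":
                matched.extend(node)      # fan out over every list item
            else:
                matched.append(node[part])
        nodes = matched
    return nodes

report = {"services": [{"name": "auth", "latency": 211.3},
                       {"name": "billing", "latency": 314.9}]}
print(resolve("$.services[*].latency", report))  # [211.3, 314.9]
```

&lt;p&gt;Each matched value then gets its tolerance rule applied independently.&lt;/p&gt;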

&lt;h2&gt;YAML Snapshot Testing&lt;/h2&gt;

&lt;p&gt;YAML files are everywhere — from CI pipelines and Helm charts to deployment manifests.&lt;br&gt;&lt;br&gt;
They’re also &lt;strong&gt;prone to drift&lt;/strong&gt;: values change slightly, orders shift, and formatting differences cause false positives.&lt;/p&gt;

&lt;p&gt;1. Simple Example — Kubernetes Deployment Snapshot&lt;/p&gt;

&lt;p&gt;Here’s a basic test for a Kubernetes deployment YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from pytest_verify import verify_snapshot

@verify_snapshot(ignore_order_yaml=True)
def test_kubernetes_deployment_yaml():
    return """
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: user-service
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: user-service
        spec:
          containers:
            - name: user-service
              image: registry.local/user-service:v1.2
              ports:
                - containerPort: 8080
    """
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ This saves the deployment structure as a .expected.yaml snapshot.&lt;br&gt;
On future runs, it automatically detects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;if you changed the number of replicas,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;switched the container image,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;or modified any key fields.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ The flag ignore_order_yaml=True makes it order-insensitive,&lt;br&gt;
so switching the order of YAML keys or list items won’t trigger false diffs.&lt;/p&gt;

&lt;p&gt;2. CI/CD Pipeline Config Example with Tolerances and Ignores&lt;/p&gt;

&lt;p&gt;Now let’s test something closer to a real DevOps setup, like a CI pipeline YAML.&lt;/p&gt;

&lt;p&gt;Imagine your CI config (.gitlab-ci.yml) evolves frequently:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stages:
  - build
  - test
  - deploy

variables:
  TIMEOUT: 60
  RETRIES: 3

build:
  stage: build
  script:
    - docker build -t myapp:${CI_COMMIT_TAG:-latest} .
  tags: ["docker"]

test:
  stage: test
  script:
    - pytest --maxfail=1 --disable-warnings -q
  allow_failure: false

deploy:
  stage: deploy
  script:
    - ./scripts/deploy.sh
  environment: production
  when: manual

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can snapshot this configuration, allowing minor numeric drift (like timeouts or retry limits changing slightly) and ignoring volatile fields (like tags or environment metadata).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@verify_snapshot(
    ignore_order_yaml=True,
    ignore_fields=["$.variables.RETRIES", "$.deploy.environment"],
    abs_tol_fields={"$.variables.TIMEOUT": 5},
)
def test_cicd_pipeline_yaml_snapshot():
    return """
    stages:
      - build
      - test
      - deploy

    variables:
      TIMEOUT: 62
      RETRIES: 3

    build:
      stage: build
      script:
        - docker build -t myapp:${CI_COMMIT_TAG:-latest} .
      tags: ["docker"]

    test:
      stage: test
      script:
        - pytest --maxfail=1 --disable-warnings -q
      allow_failure: false

    deploy:
      stage: deploy
      script:
        - ./scripts/deploy.sh
      environment: staging
      when: manual
    """

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Here’s what happens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;ignore_order_yaml=True — key order won’t break the snapshot&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ignore_fields=["$.variables.RETRIES", "$.deploy.environment"] — ignores fields that are expected to differ between environments&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;abs_tol_fields={"$.variables.TIMEOUT": 5} — allows ±5 seconds of drift for timeout settings&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is exactly what you want when managing evolving CI/CD configs or Helm charts — detect real changes, but ignore noise.&lt;/p&gt;

&lt;h2&gt;XML Snapshot Testing&lt;/h2&gt;

&lt;p&gt;1. Simple Example — Invoice Report&lt;/p&gt;

&lt;p&gt;Here’s a basic XML test that verifies invoice data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from pytest_verify import verify_snapshot

@verify_snapshot()
def test_invoice_xml_snapshot():
    return """
    &amp;lt;Invoices&amp;gt;
        &amp;lt;Invoice id="INV-001"&amp;gt;
            &amp;lt;Customer&amp;gt;EDF&amp;lt;/Customer&amp;gt;
            &amp;lt;Total&amp;gt;4590.25&amp;lt;/Total&amp;gt;
            &amp;lt;Date&amp;gt;2025-10-25&amp;lt;/Date&amp;gt;
        &amp;lt;/Invoice&amp;gt;
        &amp;lt;Invoice id="INV-002"&amp;gt;
            &amp;lt;Customer&amp;gt;Cegos&amp;lt;/Customer&amp;gt;
            &amp;lt;Total&amp;gt;3120.10&amp;lt;/Total&amp;gt;
            &amp;lt;Date&amp;gt;2025-10-25&amp;lt;/Date&amp;gt;
        &amp;lt;/Invoice&amp;gt;
    &amp;lt;/Invoices&amp;gt;
    """
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ On first run, this saves a .expected.xml snapshot under &lt;code&gt;__snapshots__/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;On the next run, pytest-verify will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Parse both XML documents structurally.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Compare tags, attributes, and values.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Show a clear diff if anything changes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now imagine the system recalculates taxes overnight:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Invoices&amp;gt;
    &amp;lt;Invoice id="INV-001"&amp;gt;
        &amp;lt;Customer&amp;gt;EDF&amp;lt;/Customer&amp;gt;
        &amp;lt;Total&amp;gt;4590.75&amp;lt;/Total&amp;gt;
        &amp;lt;Date&amp;gt;2025-10-26&amp;lt;/Date&amp;gt;
    &amp;lt;/Invoice&amp;gt;
    &amp;lt;Invoice id="INV-002"&amp;gt;
        &amp;lt;Customer&amp;gt;Cegos&amp;lt;/Customer&amp;gt;
        &amp;lt;Total&amp;gt;3120.15&amp;lt;/Total&amp;gt;
        &amp;lt;Date&amp;gt;2025-10-26&amp;lt;/Date&amp;gt;
    &amp;lt;/Invoice&amp;gt;
    &amp;lt;GeneratedAt&amp;gt;2025-10-26T08:30:00Z&amp;lt;/GeneratedAt&amp;gt;
&amp;lt;/Invoices&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Different totals (by a few cents) and a new generation timestamp?&lt;br&gt;
Let’s not fail the test for that.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@verify_snapshot(
    ignore_fields=["//GeneratedAt", "//Invoice/Date"],
    abs_tol_fields={"//Invoice/Total": 0.5}
)
def test_invoice_xml_with_tolerance():
    return """
    &amp;lt;Invoices&amp;gt;
        &amp;lt;Invoice id="INV-001"&amp;gt;
            &amp;lt;Customer&amp;gt;EDF&amp;lt;/Customer&amp;gt;
            &amp;lt;Total&amp;gt;4590.75&amp;lt;/Total&amp;gt;
            &amp;lt;Date&amp;gt;2025-10-26&amp;lt;/Date&amp;gt;
        &amp;lt;/Invoice&amp;gt;
        &amp;lt;Invoice id="INV-002"&amp;gt;
            &amp;lt;Customer&amp;gt;Cegos&amp;lt;/Customer&amp;gt;
            &amp;lt;Total&amp;gt;3120.15&amp;lt;/Total&amp;gt;
            &amp;lt;Date&amp;gt;2025-10-26&amp;lt;/Date&amp;gt;
        &amp;lt;/Invoice&amp;gt;
        &amp;lt;GeneratedAt&amp;gt;2025-10-26T08:30:00Z&amp;lt;/GeneratedAt&amp;gt;
    &amp;lt;/Invoices&amp;gt;
    """

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Here’s what this test does:&lt;/p&gt;

&lt;p&gt;ignore_fields=["//GeneratedAt", "//Invoice/Date"]&lt;br&gt;
→ Ignores date/time fields that change daily.&lt;/p&gt;

&lt;p&gt;abs_tol_fields={"//Invoice/Total": 0.5}&lt;br&gt;
→ Allows a small numeric drift (±0.5) on totals — perfect for rounding or currency conversions.&lt;/p&gt;

&lt;p&gt;Even if you add new invoices or minor numeric updates, the test stays stable and shows a clean, colorized diff for real structure or data changes.&lt;/p&gt;

&lt;p&gt;2. Advanced Example — Mixed Tolerances &amp;amp; Wildcards&lt;/p&gt;

&lt;p&gt;Here’s how it looks for something larger, like a shipment report:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@verify_snapshot(
    ignore_fields=[
        "//ReportGeneratedAt",
        "/Shipments/*/TrackingID"
    ],
    abs_tol_fields={
        "/Shipments/*/Weight": 0.1
    },
    rel_tol_fields={
        "/Shipments/*/Cost": 0.02
    }
)
def test_shipment_xml_report():
    return """
    &amp;lt;ShipmentsReport&amp;gt;
        &amp;lt;ReportGeneratedAt&amp;gt;2025-10-26T08:30:00Z&amp;lt;/ReportGeneratedAt&amp;gt;
        &amp;lt;Shipments&amp;gt;
            &amp;lt;Shipment id="SHP-001"&amp;gt;
                &amp;lt;TrackingID&amp;gt;XYZ123&amp;lt;/TrackingID&amp;gt;
                &amp;lt;Weight&amp;gt;12.45&amp;lt;/Weight&amp;gt;
                &amp;lt;Cost&amp;gt;52.00&amp;lt;/Cost&amp;gt;
            &amp;lt;/Shipment&amp;gt;
            &amp;lt;Shipment id="SHP-002"&amp;gt;
                &amp;lt;TrackingID&amp;gt;ABC987&amp;lt;/TrackingID&amp;gt;
                &amp;lt;Weight&amp;gt;8.10&amp;lt;/Weight&amp;gt;
                &amp;lt;Cost&amp;gt;39.90&amp;lt;/Cost&amp;gt;
            &amp;lt;/Shipment&amp;gt;
        &amp;lt;/Shipments&amp;gt;
    &amp;lt;/ShipmentsReport&amp;gt;
    """

    """

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Explanation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;//ReportGeneratedAt → recursive ignore for global timestamps&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;/Shipments/*/TrackingID → wildcard ignore for all TrackingID elements under any shipment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;/Shipments/*/Weight → absolute tolerance (±0.1) for weight variations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;/Shipments/*/Cost → relative tolerance (±2%) for cost differences&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;💡 Perfect for ERP exports, financial feeds, or shipment data where minor numeric or date drifts are normal, but structure or logical changes must be caught.&lt;/p&gt;

&lt;h2&gt;DataFrame Snapshot Testing&lt;/h2&gt;

&lt;p&gt;When validating transformations or ETL jobs, comparing large datasets by hand is painful.&lt;br&gt;&lt;br&gt;
Snapshot testing lets you &lt;strong&gt;lock in expected data outputs&lt;/strong&gt; — and automatically detect meaningful changes later.&lt;/p&gt;

&lt;p&gt;With &lt;code&gt;pytest-verify&lt;/code&gt;, you can snapshot entire &lt;code&gt;pandas.DataFrame&lt;/code&gt;s and compare them &lt;em&gt;structurally&lt;/em&gt; and &lt;em&gt;numerically&lt;/em&gt;, with support for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ignored columns,&lt;/li&gt;
&lt;li&gt;Absolute and relative tolerances,&lt;/li&gt;
&lt;li&gt;CSV-based diff storage for readability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Simple Example — Aggregated Sales Report&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s say you have a pipeline that aggregates daily sales:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pandas as pd
from pytest_verify import verify_snapshot

@verify_snapshot()
def test_sales_dataframe_snapshot():
    data = {
        "region": ["North", "South", "West"],
        "total_sales": [1025.0, 980.0, 1100.5],
        "transactions": [45, 40, 52],
    }
    return pd.DataFrame(data)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ On first run, it will create a baseline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;__snapshots__/test_sales_dataframe_snapshot.expected.csv

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On the next run, it will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;compare the same DataFrame’s numeric and textual columns,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;show a readable diff if anything changes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Now imagine minor numeric drift&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your ETL job reruns with slightly different rounding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data = {
    "region": ["North", "South", "West"],
    "total_sales": [1025.3, 979.8, 1100.7],
    "transactions": [45, 40, 52],
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without tolerance, this would fail — but those changes are meaningless.&lt;br&gt;
Let’s fix that:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@verify_snapshot(
    ignore_columns=["last_updated"],
    abs_tol=0.5,
    rel_tol=0.02
)
def test_etl_dataframe_with_tolerance():
    # Imagine this is the output of a real ETL job
    data = {
        "region": ["North", "South", "West"],
        "total_sales": [1025.3, 979.8, 1100.7],
        "transactions": [45, 40, 52],
        "last_updated": ["2025-10-25T10:30:00Z"] * 3,
    }
    return pd.DataFrame(data)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ What’s happening here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;ignore_columns=["last_updated"] → dynamic timestamps are ignored.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;abs_tol=0.5 → numeric values can differ by ±0.5.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;rel_tol=0.02 → also allows a 2% proportional drift (good for scaled data).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;NumPy Snapshot Testing&lt;/h2&gt;

&lt;p&gt;Machine learning and scientific computations rarely produce &lt;strong&gt;exactly&lt;/strong&gt; the same floats across environments or library versions.&lt;br&gt;&lt;br&gt;
Snapshot testing with tolerance control lets you verify your numeric logic without being too strict about minor floating-point differences.&lt;/p&gt;

&lt;p&gt;Let’s say your model predicts normalized probabilities:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np
from pytest_verify import verify_snapshot

@verify_snapshot()
def test_numpy_array_snapshot():
    # Output of a model or a simulation
    return np.array([0.12345, 0.45678, 0.41977])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ This creates a .expected.json snapshot with the array serialized to a list:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
  0.12345,
  0.45678,
  0.41977
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now imagine your model runs on another machine (with a different BLAS/LAPACK library), and the new output is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.array([0.1235, 0.4567, 0.4198])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Mathematically the same — but your tests fail.&lt;br&gt;
Let's fix that:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@verify_snapshot(abs_tol=1e-3, rel_tol=1e-3)
def test_numpy_with_tolerance():
    # Example: predictions from a stochastic model
    return np.array([0.1235, 0.4567, 0.4198])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Explanation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;abs_tol=1e-3&lt;/code&gt; allows an absolute drift of 0.001&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;rel_tol=1e-3&lt;/code&gt; allows small relative variations (e.g., a 0.1% change on large values)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means any tiny numeric jitter is ignored,&lt;br&gt;
while larger drifts (like 0.01 or 1%) still fail and trigger a diff.&lt;/p&gt;
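&lt;p&gt;You can sanity-check those bounds outside the test suite with NumPy's own closeness helper, which applies the same idea (&lt;code&gt;|a - b| &amp;lt;= atol + rtol * |b|&amp;lt;/code&gt; is how &lt;code&gt;np.allclose&lt;/code&gt; decides):&lt;/p&gt;

```python
import numpy as np

expected = np.array([0.12345, 0.45678, 0.41977])
fresh    = np.array([0.1235, 0.4567, 0.4198])   # tiny floating-point jitter

# Same tolerances as the decorator: the jitter passes
jitter_ok = bool(np.allclose(fresh, expected, atol=1e-3, rtol=1e-3))

# A 0.01 drift on every value is no longer within tolerance
drift_ok = bool(np.allclose(expected + 0.01, expected, atol=1e-3, rtol=1e-3))

print(jitter_ok, drift_ok)   # True False
```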
&lt;h2&gt;
  
  
  Pydantic &amp;amp; Dataclasses Snapshot Testing
&lt;/h2&gt;

&lt;p&gt;When testing business logic, it’s common to work with structured models — like API responses defined with &lt;strong&gt;Pydantic&lt;/strong&gt; or internal objects using &lt;strong&gt;dataclasses&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;pytest-verify&lt;/code&gt; handles both natively:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically detects &lt;code&gt;BaseModel&lt;/code&gt; or &lt;code&gt;@dataclass&lt;/code&gt; types&lt;/li&gt;
&lt;li&gt;Serializes them to JSON&lt;/li&gt;
&lt;li&gt;Compares snapshots with full support for ignored fields and tolerances.&lt;/li&gt;
&lt;/ul&gt;
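&lt;p&gt;If you're curious how that detection can work, the standard conversions both libraries expose are enough. The sketch below is an illustration, not &lt;code&gt;pytest-verify&lt;/code&gt;'s internal code, and it assumes Pydantic v2's &lt;code&gt;model_dump&lt;/code&gt; (v1 exposed &lt;code&gt;.dict()&lt;/code&gt; instead):&lt;/p&gt;

```python
import dataclasses

def to_jsonable(obj):
    """Convert a dataclass instance or Pydantic model to plain JSON data."""
    if dataclasses.is_dataclass(obj) and not isinstance(obj, type):
        return dataclasses.asdict(obj)          # @dataclass instance
    if hasattr(obj, "model_dump"):              # Pydantic v2 BaseModel
        return obj.model_dump(mode="json")
    if hasattr(obj, "dict"):                    # Pydantic v1 fallback
        return obj.dict()
    return obj                                  # already JSON-friendly

@dataclasses.dataclass
class Order:
    order_id: int
    customer: str

print(to_jsonable(Order(1234, "Mohamed")))   # {'order_id': 1234, 'customer': 'Mohamed'}
```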

&lt;p&gt;&lt;strong&gt;1. Testing a Pydantic API Response&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s say you have a model describing a user profile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from pydantic import BaseModel
from pytest_verify import verify_snapshot

class User(BaseModel):
    id: int
    name: str
    country: str
    last_login: str
    score: float

@verify_snapshot(ignore_fields=["id", "last_login"])
def test_pydantic_user_snapshot():
    """Ensure the API response remains stable except dynamic fields."""
    return User(
        id=101,
        name="Ayoub",
        country="France",
        last_login="2025-10-25T14:23:00Z",
        score=98.42
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ On the first run, you’ll get:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;__snapshots__/test_pydantic_user_snapshot.expected.json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, if the API later returns a different &lt;code&gt;id&lt;/code&gt; or &lt;code&gt;last_login&lt;/code&gt; timestamp, those fields are ignored and the snapshot still passes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Using Dataclasses for Business Logic&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you use dataclasses for domain models or DTOs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from dataclasses import dataclass
from pytest_verify import verify_snapshot

@dataclass
class Order:
    order_id: int
    customer: str
    total: float
    updated_at: str

@verify_snapshot(ignore_fields=["updated_at"])
def test_dataclass_order_snapshot():
    """Validate order structure stays stable."""
    return Order(order_id=1234, customer="Mohamed", total=249.99, updated_at="2025-10-25T12:00:00Z")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ On first run → baseline created.&lt;br&gt;
If you later change field names or structure → the diff will highlight the mismatch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Adding Field-Level Tolerances&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@verify_snapshot(
    abs_tol_fields={"$.total": 0.5},  # allow ±0.5 on total
    ignore_fields=["$.updated_at"]
)
def test_dataclass_order_tolerance():
    return Order(order_id=1234, customer="Mohamed", total=250.20, updated_at="2025-10-25T12:05:00Z")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Wrapping Up — Snapshot Testing, Evolved
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Traditional tests assert &lt;strong&gt;values&lt;/strong&gt;.  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Snapshot tests assert &lt;strong&gt;intent&lt;/strong&gt; — they capture what your output &lt;em&gt;should&lt;/em&gt; look like, and let you evolve confidently.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With &lt;strong&gt;pytest-verify&lt;/strong&gt;, you can snapshot everything that matters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ JSON &amp;amp; YAML — configs, APIs, and structured data
&lt;/li&gt;
&lt;li&gt;🧩 XML — ERP feeds, reports, and system exports
&lt;/li&gt;
&lt;li&gt;📊 DataFrames — ETL jobs and analytics pipelines
&lt;/li&gt;
&lt;li&gt;🔢 NumPy arrays — ML results and scientific computations
&lt;/li&gt;
&lt;li&gt;🧱 Pydantic &amp;amp; Dataclasses — stable schemas and domain models
&lt;/li&gt;
&lt;li&gt;✍️ Text or Binary — templates, logs, or compiled assets
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every snapshot is reproducible, human-readable, and version-controlled.&lt;br&gt;&lt;br&gt;
When something changes, you see exactly &lt;em&gt;what&lt;/em&gt; and &lt;em&gt;where&lt;/em&gt; — no more blind “assert equality” blocks.&lt;/p&gt;

&lt;h2&gt;
  
  
  💡 Final Thoughts
&lt;/h2&gt;

&lt;p&gt;If you’ve ever run into this question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Did this change actually break something or just shift a float?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then &lt;code&gt;pytest-verify&lt;/code&gt; is your new best friend.&lt;br&gt;&lt;br&gt;
It brings clarity and precision — one snapshot at a time.&lt;/p&gt;

&lt;p&gt;If you find &lt;code&gt;pytest-verify&lt;/code&gt; useful, give it a ⭐ on GitHub and share your feedback!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check out Part 2&lt;/strong&gt;: &lt;a href="https://dev.to/metahris/snapshot-testing-in-python-with-pytest-verify-part-2-async-support-1fi9"&gt;Snapshot Testing in Python with pytest-verify — Part 2: Async Support&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>pytest</category>
      <category>snapshot</category>
      <category>testing</category>
    </item>
  </channel>
</rss>
