<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kevi</title>
    <description>The latest articles on DEV Community by Kevi (@kevi019).</description>
    <link>https://dev.to/kevi019</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3438719%2F2d3d1a3b-be2e-44b8-8b3c-a8df4f0665fa.png</url>
      <title>DEV Community: Kevi</title>
      <link>https://dev.to/kevi019</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kevi019"/>
    <language>en</language>
    <item>
      <title>Stop Guessing Why Your Flaky Tests Fail or Pass</title>
      <dc:creator>Kevi</dc:creator>
      <pubDate>Sun, 05 Apr 2026 17:51:19 +0000</pubDate>
      <link>https://dev.to/kevi019/stop-guessing-what-caused-your-flaky-tests-fail-or-pass-51ee</link>
      <guid>https://dev.to/kevi019/stop-guessing-what-caused-your-flaky-tests-fail-or-pass-51ee</guid>
      <description>&lt;p&gt;Flaky tests don’t fail when you expect them to.&lt;br&gt;
They fail when you can least afford it.&lt;/p&gt;

&lt;p&gt;One moment everything is green, the next your CI pipeline is red — and then, magically, it passes on rerun.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❌ ❌ ✅ → Passed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So… what just happened?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Was it a network issue?&lt;/li&gt;
&lt;li&gt;Timing? State leakage?&lt;/li&gt;
&lt;li&gt;The classic detached DOM?&lt;/li&gt;
&lt;li&gt;Maybe the fixture didn’t return a value?&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Problem: We Only See the Final Outcome
&lt;/h2&gt;

&lt;p&gt;Most test reports show you &lt;strong&gt;only the final result&lt;/strong&gt;. Or you install a bunch of plugins that scrape all the XMLs for you, show multiple tests with the same title, and leave you clicking through each one to figure out which ran first.&lt;/p&gt;

&lt;p&gt;If a test fails twice and passes on the third attempt, all you see is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TestCheckoutFlow → Rerun
TestCheckoutFlow → Rerun
TestCheckoutFlow → Rerun
TestCheckoutFlow → PASSED
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That “pass” hides everything that matters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why did it fail initially?&lt;/li&gt;
&lt;li&gt;What changed between attempts?&lt;/li&gt;
&lt;li&gt;Is this a real bug or just instability?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 You’re left guessing, or spending too much time scrolling to find which attempt ran first.&lt;/p&gt;

&lt;p&gt;And guessing doesn’t scale — especially in CI. Nor do you want to write a log scraper on top of it.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Flaky Tests Actually Do to Your System
&lt;/h2&gt;

&lt;p&gt;Flaky tests aren’t just annoying. They slowly break your engineering system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You start ignoring failures (“just rerun it”) because nobody has time to dig through the logs&lt;/li&gt;
&lt;li&gt;Confidence in CI drops&lt;/li&gt;
&lt;li&gt;Real, uncaught bugs get buried under noise&lt;/li&gt;
&lt;li&gt;Debugging becomes reactive instead of intentional&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Over time, your test suite stops being a safety net and becomes background noise.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Missing Piece: Attempt-Level Visibility
&lt;/h2&gt;

&lt;p&gt;The root issue is simple:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We don’t see what happened in each retry.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When you use tools like &lt;code&gt;pytest-rerunfailures&lt;/code&gt;, retries happen silently. The final result is shown — the journey is lost.&lt;/p&gt;
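
&lt;p&gt;For context, this is how retries are typically switched on with &lt;code&gt;pytest-rerunfailures&lt;/code&gt; (the &lt;code&gt;--reruns&lt;/code&gt; and &lt;code&gt;--reruns-delay&lt;/code&gt; flags come from its docs; the numbers here are just an example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install pytest-rerunfailures
pytest --reruns 2 --reruns-delay 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Each failing test gets up to two extra attempts with a one-second pause in between, and by default only the final outcome surfaces in the report.&lt;/p&gt;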

&lt;p&gt;But that journey is where the bug lives.&lt;/p&gt;

&lt;p&gt;I had a test that consistently failed on reruns. Since I had no control over the data at that point, the reruns only told me that multiple records had been created and the test failed. But there wouldn’t have been multiple records on the first attempt, so why did it fail in the first place?&lt;/p&gt;




&lt;h2&gt;
  
  
  A Better Way: Capture Every Attempt
&lt;/h2&gt;

&lt;p&gt;To solve this, we shipped an update to &lt;code&gt;pytest-html-plus&lt;/code&gt; that records every attempt of a test, not just the final one.&lt;/p&gt;

&lt;p&gt;Instead of:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TestLogin → Passed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TestLogin  → Passed
 ├── Attempt 1 → Failed (TimeoutError)
 ├── Attempt 2 → Failed (Element not found)
 └── Attempt 3 → Passed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can answer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is it a timing issue?&lt;/li&gt;
&lt;li&gt;Is the UI not ready?&lt;/li&gt;
&lt;li&gt;Is an API inconsistent?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 No more guessing. Just evidence.&lt;/p&gt;




&lt;h2&gt;
  
  
  Real Example: A “Passing” Test That Was Actually Broken
&lt;/h2&gt;

&lt;p&gt;I had a test that looked harmless:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❌ ❌ ❌ → Passed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Normally, I would move on.&lt;/p&gt;

&lt;p&gt;But with attempt-level logs, the story changed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Attempt 1 → API response delayed - but created a partial record&lt;/li&gt;
&lt;li&gt;Attempt 2 → UI rendered partially - but did not find that full record&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This wasn’t a passing test.&lt;/p&gt;

&lt;p&gt;This was a &lt;strong&gt;race condition waiting to hit production&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  But Won’t This Make Reports Noisy?
&lt;/h2&gt;

&lt;p&gt;Yes — if done poorly.&lt;/p&gt;

&lt;p&gt;Logging every attempt can easily clutter reports, especially in large test suites.&lt;/p&gt;

&lt;p&gt;So we designed it to stay &lt;strong&gt;clean by default&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Attempts are &lt;strong&gt;collapsible&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Logs are grouped per attempt&lt;/li&gt;
&lt;li&gt;You expand only what you need&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 You get detail &lt;em&gt;without&lt;/em&gt; sacrificing readability.&lt;/p&gt;




&lt;h2&gt;
  
  
  Plug &amp;amp; Play with Pytest
&lt;/h2&gt;

&lt;p&gt;Getting started is simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;pytest-html-plus
pytest &lt;span class="nt"&gt;--html&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;report.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0loit6s6pie311tmx5h8.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0loit6s6pie311tmx5h8.gif" alt="Working example" width="400" height="207"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTML reports with attempt breakdown&lt;/li&gt;
&lt;li&gt;JSON output for automation&lt;/li&gt;
&lt;li&gt;Flaky test detection&lt;/li&gt;
&lt;li&gt;Clear visibility into retries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No major setup. No workflow changes. No third-party plugins needed.&lt;/p&gt;




&lt;h2&gt;
  
  
  When This Helps the Most
&lt;/h2&gt;

&lt;p&gt;This is especially useful if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You use retries in CI (&lt;code&gt;pytest-rerunfailures&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Tests pass locally but fail in CI&lt;/li&gt;
&lt;li&gt;You suspect timing, async, or state issues&lt;/li&gt;
&lt;li&gt;You’ve ever said “it passed on rerun, so it’s fine”&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Mindset Shift
&lt;/h2&gt;

&lt;p&gt;Stop asking:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Did the test pass?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Start asking:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“How did the test pass?”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Because that’s where the real bugs hide.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;Flaky tests don’t just waste time —&lt;br&gt;
they slowly &lt;strong&gt;erode trust in your system&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And once trust is gone,&lt;br&gt;
your tests stop protecting you.&lt;/p&gt;

&lt;p&gt;If you can see every attempt,&lt;br&gt;
you can fix the root cause — not just silence the symptom.&lt;/p&gt;




&lt;p&gt;If you want to try it out:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/reporterplus/pytest-html-plus" rel="noopener noreferrer"&gt;https://github.com/reporterplus/pytest-html-plus&lt;/a&gt; / &lt;a href="https://pypi.org/project/pytest-html-plus/" rel="noopener noreferrer"&gt;https://pypi.org/project/pytest-html-plus/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Would love to hear how you’re dealing with flaky tests in your setup.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>python</category>
      <category>pytest</category>
      <category>playwright</category>
    </item>
    <item>
      <title>pytest-html-plus VS Code — A Must-Have Extension If You Run Pytests</title>
      <dc:creator>Kevi</dc:creator>
      <pubDate>Sat, 24 Jan 2026 08:54:06 +0000</pubDate>
      <link>https://dev.to/kevi019/pytest-html-plus-vs-code-a-must-have-extension-if-you-run-pytests-e3h</link>
      <guid>https://dev.to/kevi019/pytest-html-plus-vs-code-a-must-have-extension-if-you-run-pytests-e3h</guid>
      <description>&lt;h2&gt;
  
  
  The problem no one talks about
&lt;/h2&gt;

&lt;p&gt;HTML reports, even our own &lt;code&gt;pytest-html-plus&lt;/code&gt;, are usually opened like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download artifact&lt;/li&gt;
&lt;li&gt;Unzip&lt;/li&gt;
&lt;li&gt;Open HTML&lt;/li&gt;
&lt;li&gt;Scroll&lt;/li&gt;
&lt;li&gt;Switch back to code&lt;/li&gt;
&lt;li&gt;Switch back to browser&lt;/li&gt;
&lt;li&gt;Repeat&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It works — but it breaks flow. When you’re debugging, context switching is the real tax.&lt;/p&gt;

&lt;h2&gt;
  
  
  That’s exactly where &lt;a href="https://marketplace.visualstudio.com/items?itemName=reporterplus.pytest-html-plus-vscode" rel="noopener noreferrer"&gt;pytest-html-plus-vscode&lt;/a&gt; fits in.
&lt;/h2&gt;

&lt;p&gt;Reports should live where you work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your reports should be in the same place where you are fixing the bugs, no matter where your tests ran.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The VS Code extension lets you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Load reports directly inside the editor&lt;/li&gt;
&lt;li&gt;View structured test results without leaving your workspace&lt;/li&gt;
&lt;li&gt;Jump between failures and code faster&lt;/li&gt;
&lt;li&gt;Treat test reports as part of development, not an external artifact&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is especially powerful when debugging CI failures locally.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this extension exists
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://marketplace.visualstudio.com/items?itemName=reporterplus.pytest-html-plus-vscode" rel="noopener noreferrer"&gt;extension&lt;/a&gt; was not built to “preview HTML”.&lt;/p&gt;

&lt;p&gt;It was built to answer one question quickly:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What failed, and what do I need to look at next?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Everything else is secondary.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What you actually gain
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Faster feedback loops&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You no longer lose momentum opening files, switching apps, or searching logs.&lt;/p&gt;

&lt;p&gt;Failures (listed in the sidebar) → context (jump directly to the failing line) → fix.&lt;/p&gt;

&lt;p&gt;All inside VS Code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. CI reports, locally&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The extension works beautifully with archived reports from CI.&lt;/p&gt;

&lt;p&gt;Download the artifact, open it in VS Code, and you’re immediately back in problem-solving mode — &lt;strong&gt;not setup mode.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Because the reporter and extension are designed together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The data structure is predictable&lt;/li&gt;
&lt;li&gt;Reports load faster&lt;/li&gt;
&lt;li&gt;UI decisions are intentional, not generic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This tight coupling is what makes the experience smooth instead of clunky.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who should install this?
&lt;/h2&gt;

&lt;p&gt;If you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run Pytest regularly&lt;/li&gt;
&lt;li&gt;Debug from CI logs&lt;/li&gt;
&lt;li&gt;Care about flow state&lt;/li&gt;
&lt;li&gt;Live in VS Code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This extension isn’t “nice to have”.&lt;/p&gt;

&lt;p&gt;It becomes one of those tools you forget you installed — until you work on a machine without it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The bigger picture
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;pytest-html-plus&lt;/code&gt; and the &lt;a href="https://marketplace.visualstudio.com/items?itemName=reporterplus.pytest-html-plus-vscode" rel="noopener noreferrer"&gt;VS Code extension&lt;/a&gt; are not separate tools.&lt;/p&gt;

&lt;p&gt;They’re two halves of the same idea:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Testing doesn’t end when Pytest finishes.&lt;br&gt;
It ends when you understand the result.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Final note
&lt;/h2&gt;

&lt;p&gt;You don’t need more tools.&lt;br&gt;
You need fewer tools that work together.&lt;/p&gt;

&lt;p&gt;That’s the philosophy behind both the reporter and the extension.&lt;/p&gt;

</description>
      <category>python</category>
      <category>pytest</category>
      <category>programming</category>
      <category>testing</category>
    </item>
    <item>
      <title>From Repeating Boilerplate to One-Click Insights: My Journey with pytest-html-plus</title>
      <dc:creator>Kevi</dc:creator>
      <pubDate>Tue, 30 Sep 2025 05:41:56 +0000</pubDate>
      <link>https://dev.to/kevi019/from-repeating-boilerplate-to-one-click-insights-my-journey-with-pytest-html-plus-1d49</link>
      <guid>https://dev.to/kevi019/from-repeating-boilerplate-to-one-click-insights-my-journey-with-pytest-html-plus-1d49</guid>
      <description>&lt;p&gt;When you’re running tests across multiple branches and environments, one question always comes up:&lt;/p&gt;

&lt;p&gt;“Okay, but which commit and which env did this report come from?”&lt;/p&gt;

&lt;p&gt;At first, we were using pytest-html. It’s a solid plugin — if you take the time to write the basic hooks in conftest.py, it gives you a clean HTML report out of the box. But even after that, there was one big missing piece: execution metadata.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No branch info.&lt;/li&gt;
&lt;li&gt;No commit ID.&lt;/li&gt;
&lt;li&gt;No environment label.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Basically, the report looked fine, but without context it was just a wall of green/red.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Boilerplate Phase
&lt;/h2&gt;

&lt;p&gt;Like most devs, we hacked around it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Added hooks in conftest.py&lt;/li&gt;
&lt;li&gt;Pulled in env vars for branch/commit&lt;/li&gt;
&lt;li&gt;Injected them manually into the report header&lt;/li&gt;
&lt;/ul&gt;
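
&lt;p&gt;The glue looked roughly like this in every repo (a sketch only; the env var names are ours, and &lt;code&gt;config._metadata&lt;/code&gt; is the older pytest-metadata convention that pytest-html reads — newer versions use a stash instead):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# conftest.py: copy-pasted into every project
import os
import subprocess

def _git(*args):
    return subprocess.check_output(("git",) + args, text=True).strip()

def pytest_configure(config):
    # pytest-html renders whatever ends up in this metadata dict
    metadata = getattr(config, "_metadata", None)
    if metadata is not None:
        metadata["Branch"] = os.environ.get("CI_BRANCH") or _git("rev-parse", "--abbrev-ref", "HEAD")
        metadata["Commit"] = os.environ.get("CI_COMMIT") or _git("rev-parse", "--short", "HEAD")
        metadata["Environment"] = os.environ.get("TEST_ENV", "local")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;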

&lt;p&gt;It worked. The first time, it even felt kind of cool — “hey, now our report shows branch and commit!”&lt;/p&gt;

&lt;p&gt;But then came the second project.&lt;br&gt;
And the third.&lt;/p&gt;

&lt;p&gt;Every repo needed the same glue code. If we fixed one thing, we had to fix it everywhere. The overhead became annoying. Honestly, it felt like I was solving the same problem again and again instead of writing tests.&lt;/p&gt;

&lt;p&gt;After repeating the same boilerplate across projects, I started looking around.&lt;/p&gt;

&lt;p&gt;Allure was the first obvious option — but honestly, it felt like way too much setup and bloat just to view something as basic as branch, commit, or env metadata, and I didn’t want to ask my manager to set up Java just to open a report. I didn’t want dashboards, databases, and 10 extra moving parts. I just wanted a simple HTML report with context.&lt;/p&gt;

&lt;p&gt;Then Gemini suggested pytest-html-plus, and the screenshot showed the exact execution metadata I needed. I assumed it would have to be configured, but…&lt;/p&gt;

&lt;p&gt;It just… had the metadata baked in.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Branch name&lt;/li&gt;
&lt;li&gt;Commit ID&lt;/li&gt;
&lt;li&gt;Environment pulled from CLI args&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All sitting neatly at the top of the same single-page HTML I was already used to. No extra config, no repetitive hooks, no “copy this snippet to every repo.” &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We also removed all of our report-generation hooks from conftest.py&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The best part: being a self-contained HTML file, it was easily shareable, and our business stakeholders needed no setup to view it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Changed Things
&lt;/h2&gt;

&lt;p&gt;It sounds small, but it completely changed how we shared reports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No more digging through CI logs to see which run this was.&lt;/li&gt;
&lt;li&gt;No more maintaining copy-pasted conftest logic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Just open the HTML, and all the context is right there.&lt;/p&gt;

&lt;p&gt;For me, it turned the report into something I could actually trust at a glance.&lt;/p&gt;

&lt;p&gt;If you’re already on pytest-html and tired of wiring up metadata by hand, give &lt;a href="https://github.com/reporterplus/pytest-html-plus" rel="noopener noreferrer"&gt;pytest-html-plus&lt;/a&gt; a look. For us, it was the difference between “yet another report” and “a report with real context.”&lt;/p&gt;

</description>
      <category>programming</category>
      <category>python</category>
      <category>testing</category>
      <category>cicd</category>
    </item>
    <item>
      <title>I stopped using plugins to merge JUnit XMLs in Pytest (pytest-html-plus)</title>
      <dc:creator>Kevi</dc:creator>
      <pubDate>Tue, 16 Sep 2025 05:09:23 +0000</pubDate>
      <link>https://dev.to/kevi019/i-stopped-writing-scripts-to-merge-junit-xmls-in-pytest-19j1</link>
      <guid>https://dev.to/kevi019/i-stopped-writing-scripts-to-merge-junit-xmls-in-pytest-19j1</guid>
      <description>&lt;p&gt;For the longest time, I thought generating XML reports from Pytest was just… annoying.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run tests with -n for parallel workers → I’d get four or more separate XML files, and more as the suite grew.&lt;/li&gt;
&lt;li&gt;Retry a flaky test → extra XMLs scattered around.&lt;/li&gt;
&lt;li&gt;Then I’d have to either write my own merge script or dig up some plugin to stitch them back together. (Not to mention the relative vs. absolute XML paths, the file-not-found errors, and generating the blobs.)&lt;/li&gt;
&lt;li&gt;Finally, I’d spend time debugging why my CI job wasn’t uploading the “right” XML to TestRail.&lt;/li&gt;
&lt;/ul&gt;
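
&lt;p&gt;To give an idea of what that merge step meant, this is the kind of throwaway script I used to carry around (a sketch of my own glue code, not anything from a plugin):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# merge_junit.py: naively stitch per-worker JUnit files back together
import glob
import xml.etree.ElementTree as ET

merged = ET.Element("testsuites")
for path in glob.glob("reports/junit-*.xml"):
    root = ET.parse(path).getroot()
    # some tools emit a &amp;lt;testsuites&amp;gt; root, others a bare &amp;lt;testsuite&amp;gt;
    suites = root.findall("testsuite") if root.tag == "testsuites" else [root]
    for suite in suites:
        merged.append(suite)

ET.ElementTree(merged).write("combined.xml", encoding="utf-8", xml_declaration=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And even then it didn’t deduplicate retried tests; that part needed yet more code.&lt;/p&gt;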

&lt;p&gt;All of this felt like extra work just to hand my test management tool a single file, and for the DevOps folks, who are already overloaded, it would be a P3 at best.&lt;/p&gt;

&lt;p&gt;While searching for something that would automatically generate and merge, I stumbled upon this underrated tool called &lt;code&gt;pytest-html-plus&lt;/code&gt;, and I was surprised that it quietly solved this problem for me.&lt;/p&gt;

&lt;p&gt;All I had to do was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pytest --generate-xml --xml-report=testrail.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And it gave me one combined XML, even with parallel runs and retries. No extra merge step, no custom code.&lt;/p&gt;

&lt;p&gt;The XML already included logs, stdout/stderr, flaky-test information, screenshots, etc. Uploading to my test management tool was finally a single step instead of three, and it cut down my syncs with the DevOps team, since all they needed was the TestRail command to upload the XML.&lt;/p&gt;

&lt;p&gt;If you are using pytest and trying to link your tests with a test management tool, try this before reaching for other CI plugins.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/reporterplus/pytest-html-plus" rel="noopener noreferrer"&gt;https://github.com/reporterplus/pytest-html-plus&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Phew!&lt;/p&gt;

</description>
      <category>pytest</category>
      <category>python</category>
      <category>programming</category>
      <category>testing</category>
    </item>
    <item>
      <title>Discovered how easy traceability in tests can become with smart (not just good) reporting tools: pytest-html-plus</title>
      <dc:creator>Kevi</dc:creator>
      <pubDate>Wed, 03 Sep 2025 08:51:50 +0000</pubDate>
      <link>https://dev.to/kevi019/discovered-how-easy-can-traceability-in-test-become-with-good-reporting-tools-418m</link>
      <guid>https://dev.to/kevi019/discovered-how-easy-can-traceability-in-test-become-with-good-reporting-tools-418m</guid>
      <description>&lt;p&gt;One of the challenges in test automation is keeping track of where each test belongs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Which JIRA ticket does it cover?&lt;/li&gt;
&lt;li&gt;Is there a linked test case ID in Testmo or Notion?&lt;/li&gt;
&lt;li&gt;Can I see bug reports linked directly from failing tests?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Normally, this means a lot of manual mapping, plugins, or custom scripts. And with bloated reporting tools it’s more navigation than instant lookup, plus enabling it usually means configuring their own custom markers.&lt;/p&gt;

&lt;p&gt;I was surprised to see that in pytest-html-plus it’s already built in, and it fits naturally into the single-page report.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;The HTML report comes with a search bar. Just type (a partial or full match) and tests are instantly filtered by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test names&lt;/li&gt;
&lt;li&gt;Linked external IDs (JIRA-123, DOC-456, etc.)&lt;/li&gt;
&lt;li&gt;Custom URLs or keywords&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That means if you tagged your tests with a JIRA issue or a test case ID, you can just type that ID in the report and immediately see all related tests.&lt;/p&gt;

&lt;p&gt;You don’t need any custom logic. Just use standard pytest.mark decorators:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pytest

@pytest.mark.jira("https://jira.example.com/browse/PROJ-123")
@pytest.mark.testcase("https://testcases.example.com/case/5678")
def test_login():
    assert True
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it.&lt;br&gt;
When you generate the report, these links show up next to your test case, clickable, so you can jump directly to JIRA, Notion, or your test case system.&lt;/p&gt;
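
&lt;p&gt;One small note: pytest warns about unknown marks, so if you use custom marks like &lt;code&gt;jira&lt;/code&gt; and &lt;code&gt;testcase&lt;/code&gt;, it’s worth registering them. This is standard pytest configuration, nothing plugin-specific:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# pytest.ini
[pytest]
markers =
    jira(url): link a test to its JIRA issue
    testcase(url): link a test to an external test case
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;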
&lt;h2&gt;
  
  
  Tracing Coverage
&lt;/h2&gt;

&lt;p&gt;Searching for a linked ID (like PROJ-123) shows you all tests tied to that reference.&lt;/p&gt;

&lt;p&gt;Before I could even ask “what about the cases we may have missed tracking?”, I spotted the filter for unlinked tests (the “Show Untracked” filter), which shows which tests aren’t mapped to any external system yet. This was a game changer: I could easily tag all the test cases with the relevant JIRA and bug IDs, and it has become a new metric in my company.&lt;/p&gt;

&lt;p&gt;This also makes it easy to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;See how many test cases we have automated&lt;/li&gt;
&lt;li&gt;Check test automation coverage during release cycles&lt;/li&gt;
&lt;li&gt;Group results by feature or ticket&lt;/li&gt;
&lt;li&gt;Spot untracked cases that need linking&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Why I Like This
&lt;/h2&gt;

&lt;p&gt;No extra setup. No additional plugins. No scripts to maintain.&lt;br&gt;
It’s just pytest.mark + run tests → open report → search and navigate. &lt;strong&gt;We did not need a huge test management tool to do this job.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Traceability is one of those things that’s usually painful, but here it feels natural.&lt;/p&gt;

&lt;p&gt;That’s all — clean, built-in traceability without extra effort.&lt;/p&gt;

&lt;p&gt;You can checkout the report by installing and running your pytest&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install pytest-html-plus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Frankly, installing it is all it takes to generate this report, so I haven’t listed any pytest commands; you can run them as usual.&lt;/p&gt;

&lt;p&gt;Check out the plugin and the repo here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/reporterplus/pytest-html-plus" rel="noopener noreferrer"&gt;https://github.com/reporterplus/pytest-html-plus&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pypi.org/project/pytest-html-plus/" rel="noopener noreferrer"&gt;https://pypi.org/project/pytest-html-plus/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>pytest</category>
      <category>programming</category>
      <category>testing</category>
    </item>
    <item>
      <title>No More “How to Create Pytest HTML Reports” or “How to Email Test Reports”</title>
      <dc:creator>Kevi</dc:creator>
      <pubDate>Sat, 16 Aug 2025 09:40:15 +0000</pubDate>
      <link>https://dev.to/kevi019/no-more-how-to-create-pytest-html-reports-or-how-to-send-email-55j2</link>
      <guid>https://dev.to/kevi019/no-more-how-to-create-pytest-html-reports-or-how-to-send-email-55j2</guid>
      <description>&lt;p&gt;Today I ran into something funny — I needed a decent HTML report for my pytest run, and my first thought was:&lt;br&gt;
another half-hour lost to figuring out pytest-html configs.&lt;/p&gt;

&lt;p&gt;Usually, that means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installing &lt;code&gt;pytest-html&lt;/code&gt; or &lt;code&gt;allure-pytest&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Setting up .ini or passing a long list of CLI flags (or even installing Java in allure’s case)&lt;/li&gt;
&lt;li&gt;Making sure people on other machines can actually open the report&lt;/li&gt;
&lt;li&gt;Writing a little conftest.py hook to add metadata&lt;/li&gt;
&lt;li&gt;Trying not to break CI/CD in the process&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Time spent? Way too much, just for a simple report.&lt;/p&gt;

&lt;p&gt;And in my case, there was another catch — I was running tests in parallel with pytest-xdist.&lt;/p&gt;

&lt;p&gt;If you’ve done that before, you know you’ll need yet another merger plugin to stitch all the XMLs together.&lt;/p&gt;

&lt;p&gt;So I looked around for that plugin but found pytest-html-plus.&lt;/p&gt;

&lt;p&gt;I didn’t even read the docs at first — I just installed it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install pytest-html-plus
pytest -n auto
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Done.&lt;/p&gt;

&lt;p&gt;That’s it.&lt;br&gt;
…and it just worked.&lt;br&gt;
Full HTML report, already pretty, self-contained, no setup, no config.&lt;/p&gt;

&lt;p&gt;What surprised me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It handled parallel runs without breaking the report (even with xdist)&lt;/li&gt;
&lt;li&gt;Added metadata like branch, commit, environment, and timestamp automatically&lt;/li&gt;
&lt;li&gt;I could copy logs and errors with one click&lt;/li&gt;
&lt;li&gt;The report was shareable as-is without worrying about dependencies&lt;/li&gt;
&lt;li&gt;I didn’t have to touch a single test file — no extra hooks or decorators&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Honestly, the whole “report” part of my workflow went from “ugh, I’ll do it later” to “done in 5 seconds”, even in CI.&lt;/p&gt;

&lt;p&gt;If you’ve been wrestling with HTML reports in pytest — especially with parallel runs — this was a nice surprise.&lt;/p&gt;

&lt;p&gt;Even better, I could &lt;a href="https://pytest-html-plus.readthedocs.io/en/main/cli/send-email.html" rel="noopener noreferrer"&gt;email&lt;/a&gt; the test report just by passing &lt;code&gt;--plus-email&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://pypi.org/project/pytest-html-plus/" rel="noopener noreferrer"&gt;https://pypi.org/project/pytest-html-plus/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>pytest</category>
      <category>python</category>
      <category>testing</category>
    </item>
  </channel>
</rss>
