<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mihir Shinde</title>
    <description>The latest articles on DEV Community by Mihir Shinde (@byteframe).</description>
    <link>https://dev.to/byteframe</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3883273%2Fb5e4faf5-1081-4776-8fdf-aaf7ef45f74b.jpg</url>
      <title>DEV Community: Mihir Shinde</title>
      <link>https://dev.to/byteframe</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/byteframe"/>
    <language>en</language>
    <item>
      <title>How to Fix Flaky Tests in GitHub Actions</title>
      <dc:creator>Mihir Shinde</dc:creator>
      <pubDate>Fri, 17 Apr 2026 00:36:33 +0000</pubDate>
      <link>https://dev.to/byteframe/how-to-fix-flaky-tests-in-github-actions-3e32</link>
      <guid>https://dev.to/byteframe/how-to-fix-flaky-tests-in-github-actions-3e32</guid>
      <description>&lt;p&gt;You know the drill: CI goes red, you check the logs, the failure looks unrelated to your changes. You hit re-run. It passes. You merge. And the cycle repeats tomorrow.&lt;/p&gt;

&lt;p&gt;This guide covers the six most common patterns behind flaky tests in GitHub Actions and gives you concrete fixes for each. Not theories — actual code changes and configuration updates you can apply today.&lt;/p&gt;

&lt;h2&gt;Before you start fixing&lt;/h2&gt;

&lt;p&gt;The first step is knowing which tests are flaky and how often they fail. If you're guessing based on Slack complaints, you're working blind. &lt;a href="https://www.kleore.com" rel="noopener noreferrer"&gt;Kleore&lt;/a&gt; analyzes your CI history and ranks every flaky test by failure rate and cost — so you fix the worst ones first.&lt;/p&gt;

&lt;h2&gt;1. Timing &amp;amp; race conditions&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Symptom:&lt;/strong&gt; Test passes locally, fails intermittently in CI. Often involves UI tests, async operations, or anything that waits for a condition to become true.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Root cause:&lt;/strong&gt; GitHub Actions runners have variable performance. A 2-core runner under load is slower than your M3 MacBook. Tests that assume operations complete within a specific window break when the runner is under pressure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Replace fixed waits with condition-based polling. For E2E tests with Playwright or Cypress, use their built-in auto-waiting mechanisms instead of explicit sleeps. For backend tests, poll with exponential backoff rather than sleeping.&lt;/p&gt;
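&lt;p&gt;A minimal sketch of condition-based polling with exponential backoff in plain JavaScript (the helper name &lt;code&gt;waitFor&lt;/code&gt; and its defaults are illustrative, not from any particular library):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Poll until `condition` returns true, backing off exponentially.
// Fails with a clear error instead of hanging forever.
async function waitFor(condition, { timeoutMs = 10_000, initialDelayMs = 50 } = {}) {
  const deadline = Date.now() + timeoutMs;
  let delay = initialDelayMs;
  while (Date.now() &amp;lt; deadline) {
    if (await condition()) return;
    await new Promise((resolve) =&amp;gt; setTimeout(resolve, delay));
    delay = Math.min(delay * 2, 1000); // cap the backoff at 1s
  }
  throw new Error(`Condition not met within ${timeoutMs}ms`);
}

// In a test, instead of `await sleep(2000)`:
// await waitFor(async () =&amp;gt; (await getJobStatus(id)) === 'done');
&lt;/code&gt;&lt;/pre&gt;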

&lt;h2&gt;2. Shared mutable state&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Symptom:&lt;/strong&gt; Test passes in isolation (run with &lt;code&gt;it.only&lt;/code&gt;) but fails when run with the full suite.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Root cause:&lt;/strong&gt; Tests share a database, in-memory store, filesystem, or global variable. Test A writes data that Test B doesn't expect, or Test A forgets to clean up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Isolate test state completely. If you're using a shared test database, consider running each test file in its own database schema or using containers.&lt;/p&gt;
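&lt;p&gt;One way to get per-worker isolation, sketched for Jest (which really does expose &lt;code&gt;JEST_WORKER_ID&lt;/code&gt;; the &lt;code&gt;db&lt;/code&gt; client and the schema-naming convention are assumptions to adapt to your own setup):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// jest.setup.js — give each parallel worker its own schema
const workerId = process.env.JEST_WORKER_ID || '1';
const schema = `test_worker_${workerId}`;

beforeAll(async () =&amp;gt; {
  await db.query(`CREATE SCHEMA IF NOT EXISTS ${schema}`);
  await db.query(`SET search_path TO ${schema}`); // Postgres-style
});

afterAll(async () =&amp;gt; {
  await db.query(`DROP SCHEMA ${schema} CASCADE`);
});
&lt;/code&gt;&lt;/pre&gt;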

&lt;h2&gt;3. External service dependencies&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Symptom:&lt;/strong&gt; Tests fail with network timeouts, 503 errors, or rate-limit responses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Root cause:&lt;/strong&gt; Your tests make real HTTP calls to APIs you don't control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Mock at the HTTP boundary, not the function level. Use MSW or similar tools to intercept HTTP at the network level.&lt;/p&gt;
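&lt;p&gt;A sketch using MSW's v2 API (the endpoint and payload are made up for illustration):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// test/server.js — intercept HTTP at the network boundary
import { http, HttpResponse } from 'msw';
import { setupServer } from 'msw/node';

const server = setupServer(
  http.get('https://api.example.com/users', () =&amp;gt;
    HttpResponse.json([{ id: 1, name: 'Ada' }])
  )
);

beforeAll(() =&amp;gt; server.listen({ onUnhandledRequest: 'error' }));
afterEach(() =&amp;gt; server.resetHandlers());
afterAll(() =&amp;gt; server.close());
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Setting &lt;code&gt;onUnhandledRequest: 'error'&lt;/code&gt; makes any unmocked call fail loudly instead of silently hitting the real network.&lt;/p&gt;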

&lt;h2&gt;4. Environment differences&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Symptom:&lt;/strong&gt; Tests pass on macOS, fail on Linux. Or pass with Node 20, fail with Node 22.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Root cause:&lt;/strong&gt; Assumptions baked into tests about the OS, timezone, locale, filesystem behavior, or available system resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Pin your CI environment explicitly. Always set &lt;code&gt;TZ=UTC&lt;/code&gt;, use a .node-version file, and normalize filesystem paths.&lt;/p&gt;
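&lt;p&gt;In a GitHub Actions workflow, pinning looks something like this (fragment; job structure omitted):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# .github/workflows/test.yml (fragment)
env:
  TZ: UTC
  LANG: en_US.UTF-8

steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-node@v4
    with:
      node-version-file: '.node-version'  # one source of truth for Node version
&lt;/code&gt;&lt;/pre&gt;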

&lt;h2&gt;5. Port &amp;amp; resource conflicts&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Symptom:&lt;/strong&gt; EADDRINUSE errors, database connection failures, or file lock errors — especially when tests run in parallel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Use dynamic port allocation and unique database names per test worker.&lt;/p&gt;
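&lt;p&gt;In Node, the simplest form of dynamic allocation is binding to port 0 and letting the OS choose (sketch assumes an Express-style &lt;code&gt;app&lt;/code&gt;):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Port 0 tells the OS to assign any free port, so parallel workers never collide
const server = app.listen(0);
const { port } = server.address();
const baseUrl = `http://127.0.0.1:${port}`;
// ...run requests against baseUrl, then server.close() in teardown
&lt;/code&gt;&lt;/pre&gt;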

&lt;h2&gt;6. Test order dependency&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Symptom:&lt;/strong&gt; Tests pass when run in the default order, fail when randomized.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Root cause:&lt;/strong&gt; Test A sets up state that Test B implicitly depends on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Enable test randomization (&lt;code&gt;--randomize&lt;/code&gt; in Jest, &lt;code&gt;sequence.shuffle&lt;/code&gt; in Vitest) to catch these issues early. Make every test self-contained.&lt;/p&gt;
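&lt;p&gt;In Vitest, for example, shuffling is a one-line config change (fragment; pair it with &lt;code&gt;sequence.seed&lt;/code&gt; when you need to reproduce a specific order):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// vitest.config.ts (fragment)
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    sequence: {
      shuffle: true, // run tests in random order to flush out hidden coupling
    },
  },
});
&lt;/code&gt;&lt;/pre&gt;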

&lt;h2&gt;The meta-fix: Retry as a band-aid, not a cure&lt;/h2&gt;

&lt;p&gt;Retry logic hides the problem. A test that fails 30% of the time and gets three automatic retries will appear to pass about 99% of the time, since all four attempts fail only 0.3&lt;sup&gt;4&lt;/sup&gt; ≈ 0.8% of the time. Meanwhile you're paying for up to three extra runs of the same test and masking the underlying issue.&lt;/p&gt;

&lt;p&gt;Retry to unblock your team today. Fix the root cause this sprint.&lt;/p&gt;

&lt;h2&gt;How to prioritize which tests to fix first&lt;/h2&gt;

&lt;p&gt;Prioritize by failure frequency, blast radius, cost per failure, and fix complexity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Or let &lt;a href="https://www.kleore.com" rel="noopener noreferrer"&gt;Kleore&lt;/a&gt; do the prioritization for you.&lt;/strong&gt; Kleore analyzes your GitHub Actions history and ranks every flaky test by failure rate, cost, and impact. You get a prioritized list with dollar amounts — so you know exactly where to start.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/apps/kleore" rel="noopener noreferrer"&gt;Scan my repos — free&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Also read: &lt;a href="https://www.kleore.com/blog/what-are-flaky-tests" rel="noopener noreferrer"&gt;What Are Flaky Tests?&lt;/a&gt; | &lt;a href="https://www.kleore.com/blog/flaky-test-cost" rel="noopener noreferrer"&gt;How Much Do Flaky Tests Actually Cost?&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>github</category>
      <category>testing</category>
      <category>cicd</category>
      <category>devops</category>
    </item>
    <item>
      <title>How Much Do Flaky Tests Actually Cost?</title>
      <dc:creator>Mihir Shinde</dc:creator>
      <pubDate>Fri, 17 Apr 2026 00:10:34 +0000</pubDate>
      <link>https://dev.to/byteframe/how-much-do-flaky-tests-actually-cost-1mjh</link>
      <guid>https://dev.to/byteframe/how-much-do-flaky-tests-actually-cost-1mjh</guid>
      <description>&lt;p&gt;When teams talk about the cost of flaky tests, they usually start with CI minutes. That's the visible part — the line item on your GitHub bill. But CI compute is maybe 10% of the real cost. The other 90% is human time, delayed shipping, and the slow erosion of engineering culture.&lt;/p&gt;

&lt;p&gt;Let's break it down with real numbers.&lt;/p&gt;

&lt;h2&gt;Layer 1: CI compute&lt;/h2&gt;

&lt;p&gt;This is the easy math. Every time a flaky test causes a re-run, you're paying for the same CI job twice.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Example team&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Average CI run duration&lt;/td&gt;
&lt;td&gt;12 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Flaky-caused re-runs per week&lt;/td&gt;
&lt;td&gt;40&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Wasted CI minutes per week&lt;/td&gt;
&lt;td&gt;480 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GitHub Actions cost per minute&lt;/td&gt;
&lt;td&gt;$0.008&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Monthly CI waste&lt;/td&gt;
&lt;td&gt;~$17/month (480 min/week × ~4.3 weeks × $0.008)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;$17 a month? That's nothing, right? That's the trap. CI compute is cheap enough that nobody escalates it. But it's the tip of the iceberg.&lt;/p&gt;

&lt;h2&gt;Layer 2: Developer time&lt;/h2&gt;

&lt;p&gt;This is where the real money goes. Every flaky failure triggers a human response:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developer sees red CI badge on their PR&lt;/li&gt;
&lt;li&gt;Opens CI logs, scrolls through output&lt;/li&gt;
&lt;li&gt;Tries to figure out if the failure is real or flaky&lt;/li&gt;
&lt;li&gt;Decides to re-run (or asks a teammate)&lt;/li&gt;
&lt;li&gt;Waits for the re-run to finish&lt;/li&gt;
&lt;li&gt;Resumes their previous work — but the context switch already happened&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Research on context switching (Gloria Mark's studies at UC Irvine) shows it takes an average of about 23 minutes to regain deep focus after an interruption. Even if the investigation itself takes only 5 minutes, the true cost per interruption is closer to 30 minutes of productive time.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Example team&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Flaky interruptions per week&lt;/td&gt;
&lt;td&gt;40&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Context-switch cost per interruption&lt;/td&gt;
&lt;td&gt;30 min&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Total developer hours lost per week&lt;/td&gt;
&lt;td&gt;20 hours&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Average fully-loaded eng cost&lt;/td&gt;
&lt;td&gt;$85/hour&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Monthly developer time waste&lt;/td&gt;
&lt;td&gt;~$6,800/month&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
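&lt;p&gt;Spelled out, the table's math is:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;40 interruptions/week × 30 min = 1,200 min = 20 hours/week
20 hours/week × $85/hour       = $1,700/week
$1,700/week × 4 weeks          ≈ $6,800/month
&lt;/code&gt;&lt;/pre&gt;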

&lt;p&gt;That's over 100x the CI compute cost. And this is for a modest team with a moderate flaky test problem. A team of 30 engineers with a bad flaky test culture can easily burn $20,000+/month in lost productivity.&lt;/p&gt;

&lt;h2&gt;Layer 3: Shipping velocity&lt;/h2&gt;

&lt;p&gt;Flaky tests don't just waste time — they slow down how fast you ship.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PRs stay open longer. A PR that gets a flaky red build sits in review limbo.&lt;/li&gt;
&lt;li&gt;Merge conflicts compound. Longer PR lifetimes mean more merge conflicts.&lt;/li&gt;
&lt;li&gt;Deploys batch up. When teams can't merge quickly, changes pile up into larger, riskier deploys.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Layer 4: Trust erosion&lt;/h2&gt;

&lt;p&gt;This is the most dangerous cost because it's invisible until it's catastrophic.&lt;/p&gt;

&lt;p&gt;When tests are unreliable, developers develop a reflex: "It's probably just flaky." This is rational behavior given unreliable signals. But it means real failures get ignored too.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Phase&lt;/th&gt;
&lt;th&gt;Behavior&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Phase 1&lt;/td&gt;
&lt;td&gt;Team re-runs flaky tests and reports them in Slack&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phase 2&lt;/td&gt;
&lt;td&gt;Team re-runs without reporting — it's just background noise&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phase 3&lt;/td&gt;
&lt;td&gt;Team merges with red CI, assuming flakiness&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phase 4&lt;/td&gt;
&lt;td&gt;A real bug slips through. "We thought it was flaky."&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phase 5&lt;/td&gt;
&lt;td&gt;Production incident. Post-mortem identifies eroded CI trust as root cause.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;The total picture&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Cost layer&lt;/th&gt;
&lt;th&gt;Monthly cost&lt;/th&gt;
&lt;th&gt;Visibility&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CI compute&lt;/td&gt;
&lt;td&gt;~$17&lt;/td&gt;
&lt;td&gt;On your bill&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Developer time&lt;/td&gt;
&lt;td&gt;$6,800&lt;/td&gt;
&lt;td&gt;Hidden&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Shipping velocity&lt;/td&gt;
&lt;td&gt;$???&lt;/td&gt;
&lt;td&gt;Invisible&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Trust erosion&lt;/td&gt;
&lt;td&gt;$???&lt;/td&gt;
&lt;td&gt;Invisible until incident&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$7,000–$25,000+/month&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The irony: the cost that shows up on your bill (CI minutes) is the smallest component.&lt;/p&gt;

&lt;h2&gt;What can you actually do about it?&lt;/h2&gt;

&lt;p&gt;Step one is visibility. You can't fix what you can't see. Most teams have no idea how many flaky tests they have, which ones are the worst, or what they cost.&lt;/p&gt;

&lt;p&gt;That's the gap &lt;a href="https://www.kleore.com" rel="noopener noreferrer"&gt;Kleore&lt;/a&gt; fills. It connects to your GitHub repos, analyzes your CI run history, and gives you a ranked list of every flaky test — with dollar costs attached. No configuration, no test framework changes, no new CLI tools. Just the data you need to start making decisions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/apps/kleore" rel="noopener noreferrer"&gt;Scan my repos — free →&lt;/a&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>cicd</category>
      <category>github</category>
      <category>devops</category>
    </item>
    <item>
      <title>GitHub Actions Slow? What's Actually Wasting Your Time</title>
      <dc:creator>Mihir Shinde</dc:creator>
      <pubDate>Thu, 16 Apr 2026 23:02:01 +0000</pubDate>
      <link>https://dev.to/byteframe/github-actions-slow-whats-actually-wasting-your-time-4elh</link>
      <guid>https://dev.to/byteframe/github-actions-slow-whats-actually-wasting-your-time-4elh</guid>
      <description>&lt;p&gt;Your GitHub Actions pipeline takes 20 minutes. Your team runs it 50 times a day. That's 16 hours of CI compute daily — and most of it is waste. Developers context-switch while waiting, merge queues back up, and by the end of the week your team has lost an entire engineer's worth of productive time to a slow pipeline.&lt;/p&gt;

&lt;p&gt;The fix isn't "buy bigger runners." It's eliminating the waste that's already in your pipeline. Here are the five biggest time wasters and how to fix each one.&lt;/p&gt;

&lt;h2&gt;The hidden cost of slow CI&lt;/h2&gt;

&lt;p&gt;Slow CI doesn't just waste compute. It creates a cascade of productivity losses that compound across your team:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Developer wait time&lt;/strong&gt;: A developer waiting 20 minutes for CI is not coding. They're checking Slack, reading Hacker News, or starting a second task that creates costly context-switching when CI finishes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context switching&lt;/strong&gt;: Studies show it takes 23 minutes to fully refocus after a context switch. A 20-minute CI wait often creates a 43-minute productivity gap.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Merge queue bottlenecks&lt;/strong&gt;: When CI takes 20 minutes, your merge queue can process 3 PRs per hour at most (serially). With a team of 10 developers, PRs stack up and block each other.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment velocity&lt;/strong&gt;: Slow CI means fewer deployments per day, which means larger batch sizes, which means more risk per deploy. It's a vicious cycle.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The math is simple: if your CI takes 20 minutes and you have 10 developers, optimizing it to 8 minutes saves 2 hours of developer wait time per day. At $150/hour loaded engineering cost, that's $300/day or $78,000/year.&lt;/p&gt;

&lt;h2&gt;Time waster #1: Flaky test reruns&lt;/h2&gt;

&lt;p&gt;This is the single biggest source of CI waste, and it's the one most teams underestimate. When a flaky test fails, developers re-run the entire pipeline. That re-run wastes 100% of the compute — you're running the same tests again just to get a different roll of the dice.&lt;/p&gt;

&lt;p&gt;The numbers are staggering. In our analysis of 10,000 GitHub Actions workflow runs, we found that 15-25% of CI compute is wasted on flaky test reruns. That means if you spend $10,000/month on GitHub Actions, $1,500 to $2,500 is literally burned on re-running tests that aren't actually broken.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix&lt;/strong&gt;: Identify and quarantine flaky tests. You can't fix what you can't measure. Start by identifying which tests are flaky, then quarantine them so they don't block CI while you fix the root causes.&lt;/p&gt;
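&lt;p&gt;One lightweight quarantine pattern in GitHub Actions (the job layout and the &lt;code&gt;quarantine&lt;/code&gt; path pattern are illustrative): move known-flaky tests into a separate job that still reports results but can't block the merge.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx jest --testPathIgnorePatterns=quarantine

  quarantined-tests:
    runs-on: ubuntu-latest
    continue-on-error: true  # red here never blocks the PR
    steps:
      - uses: actions/checkout@v4
      - run: npx jest --testPathPattern=quarantine
&lt;/code&gt;&lt;/pre&gt;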

&lt;h2&gt;Time waster #2: No dependency caching&lt;/h2&gt;

&lt;p&gt;Every CI run that starts with &lt;code&gt;npm install&lt;/code&gt; or &lt;code&gt;pip install -r requirements.txt&lt;/code&gt; from scratch is downloading the same packages over and over. For a typical Node.js project, this wastes 1-3 minutes per run. Multiply that by 50 runs/day and you're losing 1-2.5 hours daily.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix&lt;/strong&gt;: Use &lt;code&gt;actions/cache&lt;/code&gt; or the caching built into setup actions like &lt;code&gt;actions/setup-node&lt;/code&gt;. Pro tip: &lt;code&gt;npm ci&lt;/code&gt; is faster than &lt;code&gt;npm install&lt;/code&gt; in CI because it installs straight from the lockfile instead of re-resolving the dependency tree.&lt;/p&gt;
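&lt;p&gt;For Node, the setup action handles caching for you (workflow fragment):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-node@v4
    with:
      node-version: '20'
      cache: 'npm'   # caches the npm download cache, keyed on package-lock.json
  - run: npm ci
&lt;/code&gt;&lt;/pre&gt;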

&lt;h2&gt;Time waster #3: Running all tests on every PR&lt;/h2&gt;

&lt;p&gt;If a PR only changes a README file, there's no reason to run your entire test suite. Yet most teams configure their pipeline to run everything on every push. For large monorepos, this wastes enormous amounts of compute.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix&lt;/strong&gt;: Use path filters and affected test detection.&lt;/p&gt;
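&lt;p&gt;A minimal path filter (adjust the globs to your repo layout):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;on:
  pull_request:
    paths:
      - 'src/**'
      - 'package*.json'
      - '.github/workflows/**'   # always re-run when CI config changes
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;There's also &lt;code&gt;paths-ignore&lt;/code&gt; for the inverse, e.g. skipping CI on docs-only changes.&lt;/p&gt;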

&lt;h2&gt;Time waster #4: Sequential jobs that could be parallel&lt;/h2&gt;

&lt;p&gt;Many teams structure their pipeline as a linear chain: lint → type-check → unit tests → integration tests → e2e tests. Lint and tests don't depend on each other — they can run simultaneously.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix&lt;/strong&gt;: Parallelize independent jobs. With parallel sharding, you can cut total wall time from 19 minutes to ~4 minutes. You pay for more compute-minutes, but your developers get feedback nearly 5x faster.&lt;/p&gt;
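&lt;p&gt;Sketch of the restructured workflow: independent jobs fan out, and the test suite is sharded with a matrix (Jest's &lt;code&gt;--shard&lt;/code&gt; flag shown here; Playwright and Vitest have equivalents):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci &amp;amp;&amp;amp; npm run lint
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1, 2, 3, 4]   # four shards run side by side
    steps:
      - uses: actions/checkout@v4
      - run: npm ci &amp;amp;&amp;amp; npx jest --shard=${{ matrix.shard }}/4
&lt;/code&gt;&lt;/pre&gt;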

&lt;h2&gt;Time waster #5: Oversized Docker images&lt;/h2&gt;

&lt;p&gt;If your CI builds Docker images, the image size directly impacts build time, push time, and pull time. A 2GB image takes minutes to push and pull. Most of that size is build dependencies that aren't needed at runtime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix&lt;/strong&gt;: Multi-stage builds with slim base images.&lt;/p&gt;
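&lt;p&gt;A generic multi-stage sketch for a Node service (paths and scripts are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Stage 1: build with the full toolchain
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: ship only runtime artifacts on a slim base
FROM node:20-slim
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/server.js"]
&lt;/code&gt;&lt;/pre&gt;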

&lt;h2&gt;Quick wins checklist&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;✅ Enable dependency caching — 5 minutes to set up, saves 1-3 minutes per run&lt;/li&gt;
&lt;li&gt;✅ Parallelize lint/typecheck/test — 15 minutes to restructure your workflow&lt;/li&gt;
&lt;li&gt;✅ Add path filters — 10 minutes to add &lt;code&gt;paths&lt;/code&gt; to your workflow trigger&lt;/li&gt;
&lt;li&gt;✅ Shard your test suite — 20 minutes to set up matrix strategy&lt;/li&gt;
&lt;li&gt;✅ Identify and quarantine flaky tests — 5 minutes to install &lt;a href="https://www.kleore.com" rel="noopener noreferrer"&gt;Kleore&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;✅ Use multi-stage Docker builds — 30 minutes, cuts image size 50-90%&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.kleore.com/blog/github-actions-ci-optimization" rel="noopener noreferrer"&gt;kleore.com/blog/github-actions-ci-optimization&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>github</category>
      <category>cicd</category>
      <category>testing</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
