<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Harsh Thakkar</title>
    <description>The latest articles on DEV Community by Harsh Thakkar (@harsh0369).</description>
    <link>https://dev.to/harsh0369</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3856978%2Ff9074d18-9097-4004-80d1-1af7ea7b70e4.jpg</url>
      <title>DEV Community: Harsh Thakkar</title>
      <link>https://dev.to/harsh0369</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/harsh0369"/>
    <language>en</language>
    <item>
      <title>Why Your AWS CI/CD Pipeline May Be Slower Than It Should Be (Mine Was Too)</title>
      <dc:creator>Harsh Thakkar</dc:creator>
      <pubDate>Mon, 13 Apr 2026 06:39:30 +0000</pubDate>
      <link>https://dev.to/harsh0369/why-your-aws-cicd-pipeline-maybe-slower-than-it-should-be-mine-was-too-2c8h</link>
      <guid>https://dev.to/harsh0369/why-your-aws-cicd-pipeline-maybe-slower-than-it-should-be-mine-was-too-2c8h</guid>
      <description>&lt;p&gt;It was one of those days where nothing was technically broken… but everything felt off.&lt;/p&gt;

&lt;p&gt;Deployments were going through. Pipelines were green. No alarms screaming.&lt;br&gt;
And yet every push took forever.&lt;/p&gt;

&lt;p&gt;I remember staring at the screen after triggering a simple change. A tiny config tweak. Something that should’ve gone through in a couple of minutes. Instead, I watched my AWS pipeline crawl… step by step… like it had all the time in the world.&lt;/p&gt;

&lt;p&gt;For a YAML change.&lt;/p&gt;




&lt;p&gt;At the time, I told myself, &lt;em&gt;“Yeah, CI/CD pipelines are just slow sometimes.”&lt;/em&gt;&lt;br&gt;
That was my first mistake.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The lie we tell ourselves&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If your pipeline works, you stop questioning it.&lt;/p&gt;

&lt;p&gt;That’s what I did.&lt;/p&gt;

&lt;p&gt;I had a pretty standard setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code pushed → CodePipeline triggers&lt;/li&gt;
&lt;li&gt;CodeBuild runs tests + build&lt;/li&gt;
&lt;li&gt;Artifacts go to S3&lt;/li&gt;
&lt;li&gt;Deploy via CodeDeploy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nothing exotic. No weird hacks. It looked clean.&lt;/p&gt;

&lt;p&gt;But under the surface, it was quietly inefficient in ways I didn’t notice for months.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The moment it clicked&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One afternoon, I had to deploy 5 times in a row (😅).&lt;br&gt;
Same pipeline. Same steps. Same wait… every time.&lt;/p&gt;

&lt;p&gt;That’s when it hit me:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I was spending more time waiting for my pipeline than actually coding.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And worse… I had accepted it.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Where the time was actually going&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I finally sat down and traced a single run end-to-end. Not casually. Properly.&lt;/p&gt;

&lt;p&gt;And yeah… it was uncomfortable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. CodeBuild was doing way too much&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I had bundled everything into one build phase:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;install dependencies&lt;/li&gt;
&lt;li&gt;run tests&lt;/li&gt;
&lt;li&gt;build artifacts&lt;/li&gt;
&lt;li&gt;package everything&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Seemed efficient, right?&lt;/p&gt;

&lt;p&gt;Except… every single run started from scratch.&lt;/p&gt;

&lt;p&gt;No caching.&lt;/p&gt;

&lt;p&gt;So even if I changed one line, it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;reinstalled node modules&lt;/li&gt;
&lt;li&gt;rebuilt layers&lt;/li&gt;
&lt;li&gt;redid everything like it had never seen my project before&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That alone was eating 6–8 minutes.&lt;/p&gt;

&lt;p&gt;What I didn’t realize at the time:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Stateless builds are great… until they’re unnecessarily stateless.&lt;/p&gt;
&lt;/blockquote&gt;
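
&lt;p&gt;For what it’s worth, a buildspec already gives you phase boundaries for free. A hypothetical sketch of splitting that single monolithic step (the commands are placeholders for whatever your project actually runs):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# buildspec.yml (illustrative; commands are placeholders)
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 18
    commands:
      - npm ci              # dependency install isolated in its own phase
  pre_build:
    commands:
      - npm test            # fail fast, before any artifact is built
  build:
    commands:
      - npm run build
  post_build:
    commands:
      - npm run package     # hypothetical packaging script, kept separate
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Separate phases don’t make the work faster by themselves, but they make it obvious which step is eating the time.&lt;/p&gt;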




&lt;p&gt;&lt;strong&gt;2. I ignored caching because it felt &lt;em&gt;“optional”&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS makes caching in CodeBuild possible, but not exactly obvious.&lt;/p&gt;

&lt;p&gt;I skipped it initially because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It adds config complexity&lt;/li&gt;
&lt;li&gt;Cache invalidation is annoying&lt;/li&gt;
&lt;li&gt;“It’s fine for now”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Classic.&lt;/p&gt;

&lt;p&gt;When I finally enabled caching for dependencies (node_modules, pip, etc.), build times dropped almost immediately.&lt;/p&gt;

&lt;p&gt;Not dramatically. But noticeably.&lt;/p&gt;

&lt;p&gt;Still… caching comes with trade-offs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sometimes stale dependencies sneak in&lt;/li&gt;
&lt;li&gt;Debugging weird build issues becomes harder&lt;/li&gt;
&lt;li&gt;You need to think about cache keys (which I initially didn’t 😅)&lt;/li&gt;
&lt;/ul&gt;
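
&lt;p&gt;Enabling it is mostly a &lt;code&gt;cache&lt;/code&gt; block in the buildspec; a minimal sketch, assuming a Node project (paths depend on your stack):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# buildspec.yml fragment (illustrative)
version: 0.2

phases:
  install:
    commands:
      - npm ci --prefer-offline   # reuses cached packages when they’re present

cache:
  paths:
    - 'node_modules/**/*'         # saved after the build, restored before the next one
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The buildspec only declares what to save; the CodeBuild project itself also needs caching turned on (S3 or local cache) for any of this to take effect.&lt;/p&gt;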




&lt;p&gt;&lt;strong&gt;3. Serial execution everywhere&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This one hurt a bit.&lt;/p&gt;

&lt;p&gt;My pipeline stages were strictly linear:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Build → Test → Package → Deploy&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;No parallelism. No optimization.&lt;/p&gt;

&lt;p&gt;Even independent steps were waiting on each other.&lt;/p&gt;

&lt;p&gt;Looking back, I could’ve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run tests in parallel with certain build steps&lt;/li&gt;
&lt;li&gt;Split pipelines by service instead of monolith builds&lt;/li&gt;
&lt;li&gt;Avoid blocking everything for one slow task&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But I didn’t. Because linear pipelines are easy to reason about.&lt;/p&gt;

&lt;p&gt;And sometimes… we choose simplicity over speed without realizing the cost.&lt;/p&gt;
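
&lt;p&gt;For the record, CodePipeline actions in the same stage run in parallel when they share a &lt;code&gt;RunOrder&lt;/code&gt;. A rough CloudFormation fragment (all names are made up):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Stage fragment from a pipeline template (illustrative)
- Name: Verify
  Actions:
    - Name: UnitTests
      RunOrder: 1               # same RunOrder, so these two actions run in parallel
      ActionTypeId: { Category: Build, Owner: AWS, Provider: CodeBuild, Version: '1' }
      Configuration: { ProjectName: my-test-project }   # hypothetical project
      InputArtifacts: [ { Name: SourceOutput } ]
    - Name: Lint
      RunOrder: 1
      ActionTypeId: { Category: Build, Owner: AWS, Provider: CodeBuild, Version: '1' }
      Configuration: { ProjectName: my-lint-project }   # hypothetical project
      InputArtifacts: [ { Name: SourceOutput } ]
&lt;/code&gt;&lt;/pre&gt;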




&lt;p&gt;&lt;strong&gt;4. Artifact handling was... lazy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I was passing around large artifacts between stages.&lt;br&gt;
Bigger than they needed to be.&lt;/p&gt;

&lt;p&gt;Stuff that didn’t even change between runs was getting repackaged and uploaded again.&lt;/p&gt;

&lt;p&gt;It wasn’t obvious at first. But S3 upload + download latency adds up.&lt;/p&gt;

&lt;p&gt;Especially when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You compress everything every time&lt;/li&gt;
&lt;li&gt;You don’t separate static vs dynamic assets&lt;/li&gt;
&lt;li&gt;You treat artifacts like a dumping ground&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In hindsight, this was just… sloppy engineering.&lt;/p&gt;
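
&lt;p&gt;The simplest fix on the buildspec side is just being explicit about what an artifact contains; a sketch with illustrative paths:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# buildspec.yml fragment (illustrative)
artifacts:
  files:
    - dist/**/*        # only the built output, not the whole workspace
    - appspec.yml
  exclude-paths:
    - dist/**/*.map    # skip anything the deploy stage never reads
&lt;/code&gt;&lt;/pre&gt;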




&lt;p&gt;&lt;strong&gt;5. Over-triggering pipelines&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This one was subtle.&lt;/p&gt;

&lt;p&gt;Every push triggered the full pipeline, even for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;README changes&lt;/li&gt;
&lt;li&gt;minor config tweaks&lt;/li&gt;
&lt;li&gt;non-deployable updates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I was burning compute time (and patience) on changes that didn’t need deployment.&lt;/p&gt;

&lt;p&gt;A simple filter or conditional trigger would’ve helped.&lt;/p&gt;

&lt;p&gt;But I didn’t add it until much later.&lt;/p&gt;
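
&lt;p&gt;One way to do that, assuming a V2 pipeline with a CodeStar Connections source, is a push filter on the pipeline trigger (paths are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# CloudFormation fragment (illustrative; requires PipelineType: V2)
Triggers:
  - ProviderType: CodeStarSourceConnection
    GitConfiguration:
      SourceActionName: Source
      Push:
        - FilePaths:
            Excludes:
              - 'README.md'
              - 'docs/**'     # docs-only pushes no longer start the pipeline
&lt;/code&gt;&lt;/pre&gt;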




&lt;p&gt;&lt;strong&gt;What changed after all this&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not overnight. And not perfectly.&lt;/p&gt;

&lt;p&gt;But gradually:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I split heavy builds into smaller, more focused steps&lt;/li&gt;
&lt;li&gt;Added caching (carefully… and with some regret during debugging 😅)&lt;/li&gt;
&lt;li&gt;Introduced conditional triggers&lt;/li&gt;
&lt;li&gt;Reduced artifact size and duplication&lt;/li&gt;
&lt;li&gt;Parallelized what I could without making things unreadable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And the result?&lt;/p&gt;

&lt;p&gt;My pipeline dropped from ~18 minutes to around 6–8 minutes on average.&lt;/p&gt;

&lt;p&gt;Still not blazing fast. But acceptable.&lt;/p&gt;

&lt;p&gt;More importantly, it felt under control.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The part nobody talks about&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Faster pipelines aren’t free.&lt;/p&gt;

&lt;p&gt;Every optimization introduces trade-offs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Caching → faster builds, harder debugging&lt;/li&gt;
&lt;li&gt;Parallelism → speed, but more complexity&lt;/li&gt;
&lt;li&gt;Smaller artifacts → better performance, but more structure required&lt;/li&gt;
&lt;li&gt;Conditional triggers → efficiency, but risk of missing deployments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There’s no perfect setup.&lt;/p&gt;

&lt;p&gt;Just… intentional ones.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What I’d do differently now&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If I were starting fresh:&lt;/p&gt;

&lt;p&gt;I wouldn’t aim for the perfect pipeline.&lt;/p&gt;

&lt;p&gt;I’d aim for visibility first.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Measure each stage early&lt;/li&gt;
&lt;li&gt;Understand where time goes&lt;/li&gt;
&lt;li&gt;Optimize only what actually hurts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because honestly…&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Most pipelines aren’t slow because of AWS.&lt;br&gt;
They’re slow because of decisions we stopped questioning.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;strong&gt;Final thought&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If your pipeline feels slow, it probably is.&lt;/p&gt;

&lt;p&gt;And if you’ve gotten used to it… that’s the real problem.&lt;/p&gt;

&lt;p&gt;I did too.&lt;/p&gt;

&lt;p&gt;Until one day I couldn’t ignore it anymore.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>devops</category>
      <category>automation</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Why Most AWS-Based Developer Toolchains Fail After 6 Months (And What I Changed)</title>
      <dc:creator>Harsh Thakkar</dc:creator>
      <pubDate>Sun, 05 Apr 2026 06:37:18 +0000</pubDate>
      <link>https://dev.to/harsh0369/why-most-aws-based-developer-toolchains-fail-after-6-months-and-what-i-changed-2j2g</link>
      <guid>https://dev.to/harsh0369/why-most-aws-based-developer-toolchains-fail-after-6-months-and-what-i-changed-2j2g</guid>
      <description>&lt;p&gt;This is a story of an internal organizational project that taught me more than any practical roadmap ever could...&lt;/p&gt;

&lt;p&gt;The first time it broke, it wasn’t even during a deploy.⚠️&lt;/p&gt;

&lt;p&gt;It was a random Wednesday afternoon. No traffic spike, no big release, nothing dramatic. Just a Slack message from a backend team:&lt;br&gt;
“It seems builds are taking like 25 minutes now. Did something change?”&lt;/p&gt;

&lt;p&gt;Nothing had changed. That was the problem.😐&lt;/p&gt;




&lt;p&gt;Six months earlier, I had proudly stitched together what I thought was a clean AWS-native developer toolchain. Code went into GitHub, triggered a pipeline, and flowed through build, test, and deploy, everything nicely wired together with managed services. Minimal servers, maximum “cloud-native elegance.”&lt;/p&gt;

&lt;p&gt;It felt… modern.✨&lt;/p&gt;

&lt;p&gt;For about three months.&lt;/p&gt;

&lt;p&gt;Then things started getting weird in small ways. Not failures. Friction.&lt;/p&gt;

&lt;p&gt;Build times creeping up. Logs harder to trace. Random permission errors that fixed themselves if you retried. Nobody panicked, because individually, each issue was… tolerable.😬&lt;/p&gt;

&lt;p&gt;But collectively, the system was rotting.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The illusion I bought into&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At the time, I genuinely believed this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“If we use more managed services, we’ll have less to worry about.”💡&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In hindsight, that wasn’t wrong. It was just incomplete.&lt;/p&gt;

&lt;p&gt;What I didn’t realize was that I was trading operational burden for cognitive burden. And the latter is sneakier.&lt;/p&gt;

&lt;p&gt;Because when something breaks in a traditional setup, you at least know where to look.&lt;/p&gt;

&lt;p&gt;When it breaks across five AWS services glued together by IAM roles and implicit triggers… good luck 😅&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The day it actually failed 💥&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We had a hotfix that needed to go out quickly. Nothing major, just a small patch to fix a data validation issue.&lt;/p&gt;

&lt;p&gt;Pipeline triggered. Build started.▶️&lt;/p&gt;

&lt;p&gt;Then it hung.⛔&lt;/p&gt;

&lt;p&gt;No error. Just… stuck.😶&lt;/p&gt;

&lt;p&gt;We checked logs. Partial logs. Because the logs were split across services. One part in build logs, one part in deployment logs, some events in CloudWatch, some not showing up at all.&lt;/p&gt;

&lt;p&gt;After 40 minutes, another team member manually redeployed from their machine.&lt;/p&gt;

&lt;p&gt;It worked. ✅&lt;/p&gt;

&lt;p&gt;That was the moment I knew the system had failed not because it crashed, but because no one trusted it anymore.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Where things actually went wrong&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It wasn’t a single bad decision. It was a series of reasonable ones.&lt;/p&gt;

&lt;p&gt;That’s what makes this tricky.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. We optimized for setup, not longevity 🏗️&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Early on, everything was fast to set up. Click here, configure that, connect this trigger.&lt;/p&gt;

&lt;p&gt;In hindsight, we built something that was easy to create but hard to understand.&lt;/p&gt;

&lt;p&gt;There’s a difference.&lt;/p&gt;

&lt;p&gt;After a few months, nobody remembered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;which service triggered what&lt;/li&gt;
&lt;li&gt;why certain permissions existed&lt;/li&gt;
&lt;li&gt;what would break if we changed something small&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Including me.🙋‍♂️&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. We let IAM complexity spiral 🌀&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At first, permissions were tight. Thoughtful.&lt;/p&gt;

&lt;p&gt;Then came edge cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“just add this permission for now”&lt;/li&gt;
&lt;li&gt;“we’ll clean this up later”&lt;/li&gt;
&lt;li&gt;“it’s blocking the pipeline”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We never cleaned it up.&lt;/p&gt;

&lt;p&gt;Six months in, we had roles that nobody fully understood. Some were over-permissive, others randomly failed due to missing access.⚠️&lt;/p&gt;

&lt;p&gt;The worst part? Failures weren’t consistent.&lt;/p&gt;

&lt;p&gt;Retrying sometimes “fixed” things. That’s dangerous: it teaches people to ignore root causes.🚫&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Debugging became archaeology&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This one hurt the most.😓&lt;/p&gt;

&lt;p&gt;To debug a single pipeline run, we had to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;jump between multiple dashboards&lt;/li&gt;
&lt;li&gt;correlate timestamps manually&lt;/li&gt;
&lt;li&gt;guess which service dropped the signal&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There was no single narrative of “what happened.”&lt;/p&gt;

&lt;p&gt;Just fragments.&lt;/p&gt;

&lt;p&gt;I remember thinking: why is this harder than debugging a monolith on a single server?&lt;/p&gt;

&lt;p&gt;That question stuck with me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. We overcomposed the system 🧱&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At some point, we crossed a line from modular to fragmented.&lt;/p&gt;

&lt;p&gt;Every small concern became its own piece:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;build&lt;/li&gt;
&lt;li&gt;test&lt;/li&gt;
&lt;li&gt;artifact storage&lt;/li&gt;
&lt;li&gt;deploy orchestration&lt;/li&gt;
&lt;li&gt;notifications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Individually, each piece made sense.👍&lt;/p&gt;

&lt;p&gt;Together, they formed a system that had too many moving parts to reason about.&lt;/p&gt;

&lt;p&gt;What I didn’t realize at the time:&lt;br&gt;
&lt;strong&gt;Every boundary you introduce is also a failure point.⚠️&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What I changed (and what felt wrong at first)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The fixes weren’t glamorous. Some even felt like a step backward.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I reduced the number of services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This was controversial internally.&lt;/p&gt;

&lt;p&gt;Instead of chaining multiple AWS services, I consolidated parts of the pipeline into fewer components, even if that meant slightly more responsibility in one place.&lt;/p&gt;

&lt;p&gt;Less “cloud-native purity.”&lt;br&gt;
More predictability.&lt;/p&gt;

&lt;p&gt;And honestly? Things got easier to debug almost immediately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I started designing for failure visibility&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not prevention. Visibility.🔍&lt;/p&gt;

&lt;p&gt;We added:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;clearer, centralized logging (not perfect, just better)&lt;/li&gt;
&lt;li&gt;explicit failure points instead of silent retries&lt;/li&gt;
&lt;li&gt;fewer “magic” triggers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I stopped trying to make everything seamless.&lt;/p&gt;

&lt;p&gt;Because seamless systems are hard to inspect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I treated IAM as code, not configuration 💻&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This was a big shift.&lt;/p&gt;

&lt;p&gt;Instead of tweaking permissions ad hoc, we started defining them more explicitly and reviewing changes like actual code.&lt;/p&gt;

&lt;p&gt;It slowed us down in the short term.&lt;/p&gt;

&lt;p&gt;But it removed that creeping uncertainty of “who can do what anymore?”&lt;/p&gt;
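
&lt;p&gt;Concretely, “IAM as code” just meant roles lived in a template and permission changes went through review like any other diff. A minimal hypothetical sketch:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# CloudFormation fragment (role and bucket names are placeholders)
BuildRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal: { Service: codebuild.amazonaws.com }
          Action: sts:AssumeRole
    Policies:
      - PolicyName: artifact-access
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action: [ 's3:GetObject', 's3:PutObject' ]
              Resource: 'arn:aws:s3:::my-artifact-bucket/*'   # scoped, not '*'
&lt;/code&gt;&lt;/pre&gt;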

&lt;p&gt;&lt;strong&gt;I accepted a bit of duplication📄&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Earlier, I tried to DRY everything out across pipelines and environments.&lt;/p&gt;

&lt;p&gt;Now? Some duplication stays.&lt;/p&gt;

&lt;p&gt;Why?&lt;/p&gt;

&lt;p&gt;Because over-abstraction in infrastructure makes things harder to reason about when something breaks.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Clarity &amp;gt; cleverness.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Every time.✔️&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The uncomfortable truth😬&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most AWS-based developer toolchains don’t fail because AWS is unreliable.&lt;/p&gt;

&lt;p&gt;They fail because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;they become too abstract&lt;/li&gt;
&lt;li&gt;too distributed&lt;/li&gt;
&lt;li&gt;too “smart”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And nobody owns the full picture anymore.&lt;/p&gt;

&lt;p&gt;It’s not a tooling problem. It’s a design mindset problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I’d do differently from day one&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If I had to rebuild everything:&lt;/p&gt;

&lt;p&gt;I’d start with a simple question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“When this breaks at 2 AM, how quickly can someone understand what happened?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Not “how scalable is it”&lt;br&gt;
Not “how serverless is it”&lt;br&gt;
Not “how elegant is it”&lt;/p&gt;

&lt;p&gt;Just that.🎯&lt;/p&gt;

&lt;p&gt;Because six months in, that’s the only thing that really matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One last thing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;People love to say:&lt;br&gt;
“Use managed services so you can focus on business logic.”&lt;/p&gt;

&lt;p&gt;I still agree with that.&lt;/p&gt;

&lt;p&gt;But there’s a hidden cost.&lt;/p&gt;

&lt;p&gt;You’re not eliminating complexity.&lt;br&gt;
You’re relocating it.&lt;/p&gt;

&lt;p&gt;And if you’re not careful…&lt;br&gt;
you’ll end up debugging a system that nobody fully understands.&lt;/p&gt;

&lt;p&gt;Including the person who built it.🙃&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>devops</category>
      <category>architecture</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
