<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tessa Kriesel</title>
    <description>The latest articles on DEV Community by Tessa Kriesel (@tessak22).</description>
    <link>https://dev.to/tessak22</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F142543%2F391aa73e-bb8f-4e21-b9bf-b69fccad57f1.jpg</url>
      <title>DEV Community: Tessa Kriesel</title>
      <link>https://dev.to/tessak22</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tessak22"/>
    <language>en</language>
    <item>
      <title>How to Measure Time-to-Value for Your API (Without a Data Team or Custom Analytics Build)</title>
      <dc:creator>Tessa Kriesel</dc:creator>
      <pubDate>Wed, 11 Mar 2026 19:37:03 +0000</pubDate>
      <link>https://dev.to/builtfordevs/how-to-measure-time-to-value-for-your-api-without-a-data-team-or-custom-analytics-build-5029</link>
      <guid>https://dev.to/builtfordevs/how-to-measure-time-to-value-for-your-api-without-a-data-team-or-custom-analytics-build-5029</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1501139083538-0139583c060f%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDF8fHRpbWV8ZW58MHx8fHwxNzczMTg5MTIxfDA%26ixlib%3Drb-4.1.0%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1501139083538-0139583c060f%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDF8fHRpbWV8ZW58MHx8fHwxNzczMTg5MTIxfDA%26ixlib%3Drb-4.1.0%26q%3D80%26w%3D2000" alt="How to Measure Time-to-Value for Your API (Without a Data Team or Custom Analytics Build)" width="2000" height="1333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You don't know your TTV. I'm willing to bet on it.&lt;/p&gt;

&lt;p&gt;Not because you're not tracking things. You probably have a dashboard full of signups, API calls, session lengths, and something labeled "activation rate" that nobody has ever cleanly defined. You're tracking plenty. You're just not tracking the right thing.&lt;/p&gt;

&lt;p&gt;Here's why that matters: developers who don't reach value fast leave fast. And you won't know they were ever close until they're already gone.&lt;/p&gt;

&lt;h2&gt;The Problem With Activity Metrics&lt;/h2&gt;

&lt;p&gt;Most API founders measure activity and call it progress. Sound familiar?&lt;/p&gt;

&lt;p&gt;You end up tracking things like time to first API call, number of calls in the first session, or completion of an onboarding checklist. These metrics tell you a developer showed up and touched your product. They don't tell you whether that developer got what they came for.&lt;/p&gt;

&lt;p&gt;Here's where it breaks down. The developer who made 3 API calls and churned looks identical in your data to the developer who made 3 API calls and became a power user. Until you look at whether they hit your Value Moment.&lt;/p&gt;

&lt;p&gt;You're measuring motion, not progress. That's the real problem.&lt;/p&gt;

&lt;h2&gt;What Is a Value Moment?&lt;/h2&gt;

&lt;p&gt;Your Value Moment is the specific interaction where a developer gets undeniable proof that your API does what they need it to do.&lt;/p&gt;

&lt;p&gt;Not "they called the endpoint." Not "they saw a response." The moment they got a result that made them think: &lt;em&gt;okay, this actually works for my use case.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It looks different for every product:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For a payments API, it's the first successful test transaction in their dev environment&lt;/li&gt;
&lt;li&gt;For a speech-to-text API, it's the first transcript that returns with over 95% accuracy on their test audio&lt;/li&gt;
&lt;li&gt;For an AI completions API, it's the first response that matches their expected output format&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Value Moment is not generic. It's specific to what your product actually does. And it's almost always one concrete event, not a sequence.&lt;/p&gt;

&lt;p&gt;Your TTV is the gap between signup and that moment. That's it.&lt;/p&gt;

&lt;h2&gt;The Value Moment Method: Define It, Track It, Fix It&lt;/h2&gt;

&lt;p&gt;You can define your Value Moment and start measuring TTV in an afternoon. Here's exactly how.&lt;/p&gt;

&lt;h3&gt;Step 1: Define Your Value Moment&lt;/h3&gt;

&lt;p&gt;Pull up data on your best-retained developers, the ones who've been active 60+ days. What did they do in the first 24 hours that churned developers almost never did?&lt;/p&gt;

&lt;p&gt;You're looking for one identifiable event. If you can't access the data yet, interview three to five developers who successfully integrated your API. Ask them: "What was the moment you knew this was going to work?" Their answers will cluster around one or two specific interactions.&lt;/p&gt;

&lt;p&gt;That cluster is your Value Moment. Write it down in plain language: "A developer successfully [specific action] with our API." That sentence is the foundation of everything else.&lt;/p&gt;
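
&lt;p&gt;If you can pull the event data, the comparison is small enough to script. Here's a minimal sketch in Python (the cohorts and event names are entirely hypothetical) that ranks each first-24-hour event by how strongly it separates retained from churned developers:&lt;/p&gt;

```python
from collections import Counter

# Hypothetical first-24-hour event logs as (developer_id, event_name)
# pairs. Cohorts and event names are illustrative only.
retained_events = [
    ("dev1", "signup"), ("dev1", "first_call"), ("dev1", "test_tx_success"),
    ("dev2", "signup"), ("dev2", "first_call"), ("dev2", "test_tx_success"),
    ("dev3", "signup"), ("dev3", "test_tx_success"),
]
churned_events = [
    ("dev4", "signup"), ("dev4", "first_call"),
    ("dev5", "signup"),
]

def event_rates(events):
    """Share of developers in a cohort who fired each event at least once."""
    devs = {d for d, _ in events}
    counts = Counter(e for _, e in set(events))
    return {e: n / len(devs) for e, n in counts.items()}

retained = event_rates(retained_events)
churned = event_rates(churned_events)

# Rank events by the gap in reach between the two cohorts; the top of
# the list is your Value Moment candidate.
gaps = sorted(
    ((e, retained[e] - churned.get(e, 0.0)) for e in retained),
    key=lambda pair: pair[1],
    reverse=True,
)
print("strongest separator:", gaps[0][0])
```

&lt;p&gt;On real data you'd also want a minimum cohort size before trusting the ranking; three to five developers per cohort is enough to form a hypothesis, not a conclusion.&lt;/p&gt;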

&lt;h3&gt;Step 2: Identify the Signal&lt;/h3&gt;

&lt;p&gt;Your Value Moment needs to map to something your system actually logs: an event, a response code, a specific API call.&lt;/p&gt;

&lt;p&gt;For most APIs this is straightforward. If your Value Moment is "first successful test transaction," your signal is probably &lt;code&gt;transaction.success&lt;/code&gt; with a test-mode flag. If it's "first accurate transcript returned," your signal might be a response with a confidence score above your threshold.&lt;/p&gt;

&lt;p&gt;If your system doesn't log the signal you need, that's a one-line tracking addition, not an analytics overhaul. Add the event, tag it as your Value Moment signal, move on.&lt;/p&gt;
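
&lt;p&gt;The "one-line tracking addition" really can be one line at the call site. A minimal sketch, assuming a generic &lt;code&gt;track()&lt;/code&gt; helper standing in for whatever analytics call you already make (the names and properties here are illustrative, not any specific vendor's API):&lt;/p&gt;

```python
import time

EVENTS = []  # stand-in for your real analytics sink

def track(event_name, **props):
    """Record one event; in production this would be your analytics SDK call."""
    EVENTS.append({"event": event_name, "ts": time.time(), **props})

def on_transaction_success(user_id, test_mode):
    if test_mode:
        # The one-line addition: tag the event as your Value Moment signal
        # so downstream queries can filter on it directly.
        track("transaction.success", user_id=user_id, value_moment=True)
```

&lt;p&gt;The tag is the point: once every Value Moment fires with the same property, measuring TTV is a filter, not a project.&lt;/p&gt;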

&lt;h3&gt;Step 3: Measure the Gap&lt;/h3&gt;

&lt;p&gt;TTV = time from signup to first Value Moment signal.&lt;/p&gt;

&lt;p&gt;That's the whole formula. You're not averaging session lengths or weighting engagement scores. You're measuring one gap: how long does it take a developer to get their first undeniable proof your API works?&lt;/p&gt;

&lt;p&gt;Track it for every new developer who signs up. Over time you'll have a distribution. What's the median? What's the 75th percentile? And the number that really matters: what percentage of developers hit the Value Moment at all within their first 7 days?&lt;/p&gt;
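
&lt;p&gt;Once the signal is logged, the whole analysis fits in a few lines. A sketch with made-up numbers, where each entry is hours from signup to the first signal and &lt;code&gt;None&lt;/code&gt; means the developer never hit it:&lt;/p&gt;

```python
from statistics import median, quantiles

# Hypothetical cohort: hours from signup to first Value Moment signal,
# None for developers who never reached it. Numbers are illustrative.
ttv_by_dev = [0.3, 4.0, 26.0, None, 1.5, None]

ttv_hours = [t for t in ttv_by_dev if t is not None]
hit_7d = sum(1 for t in ttv_hours if 168.0 > t)  # 7 days is 168 hours

print("median TTV (h):", median(ttv_hours))
print("75th percentile (h):", quantiles(ttv_hours, n=4)[2])
print("hit the Value Moment within 7 days:", hit_7d / len(ttv_by_dev))
```

&lt;p&gt;Note the denominator on that last ratio: it counts every signup, not just the developers who eventually got there. That's what makes it the number that explains retention.&lt;/p&gt;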

&lt;p&gt;That last number explains your retention curve better than anything else in your dashboard.&lt;/p&gt;

&lt;h2&gt;What to Do With Your TTV Number&lt;/h2&gt;

&lt;p&gt;Your TTV number tells you two things immediately.&lt;/p&gt;

&lt;p&gt;First, whether your onboarding is actually working. If the median TTV for developers who stay long-term is 18 minutes and your current median is 4 hours, that gap is your onboarding problem. Not your product. Your onboarding.&lt;/p&gt;

&lt;p&gt;Second, where your intervention window is. Developers who don't hit the Value Moment in their first session almost never come back. That's not a judgment call, it's a pattern. Your job is to get them there before that window closes.&lt;/p&gt;

&lt;p&gt;From here you optimize by working backward from the Value Moment. What's the friction between signup and that first signal? Where do developers drop off before they get there? Those are the specific things to fix. Not "improve the docs" or "add more examples" in the abstract, but the precise steps between where developers start and where they need to get.&lt;/p&gt;

&lt;h2&gt;See Your TTV Before It Shows Up in Your Churn Report&lt;/h2&gt;

&lt;p&gt;Here's the bottom line: defining your Value Moment takes an afternoon. Watching TTV data accumulate manually, correlating it against retention, turning it into something you can actually act on? That takes a lot longer.&lt;/p&gt;

&lt;p&gt;That's what Built for Devs handles for you. Sign up, define your Value Moment, and as new developers flow through your onboarding, you'll see your TTV in your dashboard automatically. No custom analytics pipeline required.&lt;/p&gt;

&lt;p&gt;You don't need a data team to start. You need to define the right moment, then let the tool do the tracking.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://builtfor.dev/score?ref=blog.builtfor.dev" rel="noopener noreferrer"&gt;Start measuring your developer TTV →&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbalq27aweul7l6ea071.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbalq27aweul7l6ea071.png" alt="How to Measure Time-to-Value for Your API (Without a Data Team or Custom Analytics Build)" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;View of Built for Devs Dashboard&lt;/em&gt;&lt;/p&gt;

</description>
      <category>api</category>
      <category>tools</category>
      <category>devrel</category>
    </item>
    <item>
      <title>Stop asking developers what they think. Watch what they do.</title>
      <dc:creator>Tessa Kriesel</dc:creator>
      <pubDate>Tue, 10 Mar 2026 17:55:12 +0000</pubDate>
      <link>https://dev.to/builtfordevs/stop-asking-developers-what-they-think-watch-what-they-do-4k9b</link>
      <guid>https://dev.to/builtfordevs/stop-asking-developers-what-they-think-watch-what-they-do-4k9b</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1530210124550-912dc1381cb8%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDI2fHx0aGlua2luZ3xlbnwwfHx8fDE3NzMxMTA0MzV8MA%26ixlib%3Drb-4.1.0%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1530210124550-912dc1381cb8%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDI2fHx0aGlua2luZ3xlbnwwfHx8fDE3NzMxMTA0MzV8MA%26ixlib%3Drb-4.1.0%26q%3D80%26w%3D2000" alt="Stop asking developers what they think. Watch what they do." width="2000" height="1334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ask a developer how your onboarding went and they'll probably say it was fine. Maybe a little long. Could use better docs. You take notes, you nod, you move on.&lt;/p&gt;

&lt;p&gt;What they won't tell you: they almost quit twice, they couldn't find the API reference without Googling, and the moment they hit your pricing page they had to go back and re-read the docs before they trusted you enough to continue.&lt;/p&gt;

&lt;p&gt;That version of events lives in their behavior. Not in their answers.&lt;/p&gt;

&lt;h2&gt;The problem isn't your feedback. It's what feedback can't capture.&lt;/h2&gt;

&lt;p&gt;You're getting feedback. You're talking to users. You're doing the things the startup playbook says to do. But the signal you're collecting is optimized for your comfort, not your understanding.&lt;/p&gt;

&lt;p&gt;When a developer answers "was that confusing?", they don't replay the experience. They construct a memory of it—filtered through what they think you want to hear, how competent they want to appear, and what they've already forgotten. They won't tell you your onboarding took 47 minutes and they almost quit three times. They'll say it was "a little long, but fine." They self-censor because they've moved on and don't remember the exact sticking points.&lt;/p&gt;

&lt;p&gt;That's not dishonesty. That's just how memory works.&lt;/p&gt;

&lt;p&gt;The enemy isn't your users. It's the method you're using to understand them.&lt;/p&gt;

&lt;h2&gt;Behavior tells you what answers can't&lt;/h2&gt;

&lt;p&gt;Your existing users have context. They've learned your product, forgiven its rough edges, built workarounds. They're not starting from scratch—they're showing you what a loyalist does, not what a new developer does. Your network is worse. They want to be helpful. They'll push through friction that would make a stranger quit.&lt;/p&gt;

&lt;p&gt;What you need is a developer who actually matches the profile of someone you're trying to win—not a friend, not a power user, not someone already in your orbit. The friction that matters is what happens before they care about you. Before they've decided to give you the benefit of the doubt.&lt;/p&gt;

&lt;p&gt;What you see when that developer tries your product for the first time will not match what your feedback sessions told you.&lt;/p&gt;

&lt;p&gt;You'll see where they actually get stuck—not where they said they got stuck. You'll see the moment they almost quit. You'll see the exact point where your product either earns their trust or loses it. And you'll see things you never thought to ask about, because you didn't know they were problems.&lt;/p&gt;

&lt;p&gt;One founder we worked with discovered this the hard way. Their onboarding requested full Google Drive permissions when all they actually needed was access to a single spreadsheet. Feedback sessions didn't surface it. But when they started watching real developers go through the flow—and listening to what those developers said out loud—they found something no form had captured: developers were genuinely annoyed. Many said they would never grant those permissions. Nobody had written that in a feedback session because nobody asks "how do you feel about our OAuth scope?"&lt;/p&gt;

&lt;p&gt;Most founders who go from asking to watching describe the same experience: they thought they knew their onboarding flow. They'd walked through it dozens of times. Then they watched a real developer do it and saw three failure points they'd never once noticed—because they already knew where everything was.&lt;/p&gt;

&lt;p&gt;You know your product too well to use it naively. A developer who's never seen it before doesn't. That's exactly why their experience is the one worth watching.&lt;/p&gt;

&lt;h2&gt;The cost of collecting the wrong signal&lt;/h2&gt;

&lt;p&gt;With limited runway, every product bet has a price. The features you ship in the next two months either move retention or they don't. If those decisions are grounded in self-reported feedback, you're making expensive guesses dressed up as research.&lt;/p&gt;

&lt;p&gt;There's a specific kind of founder regret that comes from shipping a big feature, watching it go unused, and then finding out through a casual conversation that the feature developers actually needed was something much simpler. Something they never mentioned in a feedback session because they didn't think you'd want to hear it, or they'd already found a workaround and stopped thinking about it.&lt;/p&gt;

&lt;p&gt;That regret is avoidable. Not by doing more research. By doing a different kind.&lt;/p&gt;

&lt;h2&gt;The signal you can't collect any other way&lt;/h2&gt;

&lt;p&gt;The only way to get honest signal is to watch a developer who's never seen your product try to use it—without context, without guidance, without anyone from your team in the room. No prompts. No hand-holding. Just a real developer and your product.&lt;/p&gt;

&lt;p&gt;The problem is finding them. Not just any developer—one who actually looks like the developer you're trying to win. Your network is limited. Your existing users aren't starting from zero. There's no obvious place to find the right people with zero prior exposure—so most founders never get there at all.&lt;/p&gt;

&lt;p&gt;That's the problem &lt;a href="https://builtfor.dev/?ref=blog.builtfor.dev" rel="noopener noreferrer"&gt;Built for Devs&lt;/a&gt; solves. We match real developers who fit the profile of your target user, bring them to your product for the first time, and capture everything—every click, every hesitation, every moment they say "this makes no sense" out loud. That data powers a findings report that tells you precisely where the experience breaks and what to fix first. Not a list of opinions. A prioritized action plan grounded in real behavior.&lt;/p&gt;

&lt;p&gt;The discomfort of watching someone struggle through something you built—that's the data you've been missing.&lt;/p&gt;

</description>
      <category>startup</category>
      <category>tools</category>
      <category>devrel</category>
    </item>
    <item>
      <title>Why Developers Aren't Coming Back After Their First Session</title>
      <dc:creator>Tessa Kriesel</dc:creator>
      <pubDate>Mon, 09 Mar 2026 14:50:57 +0000</pubDate>
      <link>https://dev.to/builtfordevs/why-developers-arent-coming-back-after-their-first-session-1f3b</link>
      <guid>https://dev.to/builtfordevs/why-developers-arent-coming-back-after-their-first-session-1f3b</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1584105154398-99080f81b8ee%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDF8fGNsb3NlZCUyMGRvb3J8ZW58MHx8fHwxNzczMDY2ODk4fDA%26ixlib%3Drb-4.1.0%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1584105154398-99080f81b8ee%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDF8fGNsb3NlZCUyMGRvb3J8ZW58MHx8fHwxNzczMDY2ODk4fDA%26ixlib%3Drb-4.1.0%26q%3D80%26w%3D2000" alt="Why Developers Aren't Coming Back After Their First Session" width="2000" height="1333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You've fixed the tutorial. You've cleaned up the docs. You've killed the bugs early users reported. Developers still aren't coming back.&lt;/p&gt;

&lt;p&gt;If this is your situation, you're probably running the same troubleshooting loop most founders run: something about the product isn't clicking. So you improve the product. You add a better onboarding flow. You write a getting-started guide. You watch the metrics. Nothing changes.&lt;/p&gt;

&lt;p&gt;Here's what I keep seeing across dev tool companies stuck in this loop: the diagnosis is wrong. First-session churn usually isn't a product problem. It's a stage problem. And if you're measuring product metrics to fix a stage problem, you're going to keep shipping features that don't move retention.&lt;/p&gt;

&lt;h2&gt;The Onboarding Loop That Goes Nowhere&lt;/h2&gt;

&lt;p&gt;Most founders respond to first-session churn by improving onboarding. This makes sense on the surface—developers aren't coming back, so something about their first experience isn't working, so you fix the first experience.&lt;/p&gt;

&lt;p&gt;The problem is that "first experience" and "first session" aren't the same thing. The onboarding flow is one slice of the first session. But a developer's first session sits inside a journey that started before they ever opened your product, and that journey continues through stages that have nothing to do with your tutorial.&lt;/p&gt;

&lt;p&gt;When you only measure what happens inside your product during session one, you're measuring a fraction of what actually determines whether someone comes back. You can optimize that fraction indefinitely and never touch the part that matters.&lt;/p&gt;

&lt;p&gt;I call this stage blindness: diagnosing developer churn with product metrics when the drop-off is happening at the journey level.&lt;/p&gt;

&lt;h2&gt;Where Developers Actually Exit&lt;/h2&gt;

&lt;p&gt;There's a framework I use to think about how developers move through any tool's ecosystem. Five stages: Discover, Evaluate, Learn, Build, Scale.&lt;/p&gt;

&lt;p&gt;Most first-session exits happen in Evaluate and Learn. Not Build.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Discover&lt;/strong&gt; is how they found you—search, a recommendation, a conference talk. If they're in your product, they've cleared Discover.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Evaluate&lt;/strong&gt; is the decision stage. Before a developer commits to learning anything, they're asking a question you probably can't see on your dashboard: &lt;em&gt;is this actually for me?&lt;/em&gt; They're scanning docs, reading changelog entries, checking the GitHub repo, looking at the community, reviewing pricing. They're trying to answer the question fast, before they invest real time. If your product doesn't give them a fast, clear answer, they're gone—and they look like a session-one exit, but the real exit happened during Evaluate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learn&lt;/strong&gt; is where developers start building skills, not building product. They're running the quickstart. They're working through examples. They're hitting friction and deciding whether the friction is temporary (learning curve) or permanent (bad product fit). If the gap between your quickstart and real-world usage is too big, or if the docs assume knowledge they don't have, they exit here. Again, looks like session-one churn. Actually a stage failure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build&lt;/strong&gt; is where most onboarding optimization is aimed. More tutorials, better error messages, cleaner UI. The problem: if developers are exiting in Evaluate and Learn, they never make it to Build. Improving Build doesn't help.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scale&lt;/strong&gt; is retention at depth—developers who've integrated your tool and are now spreading it to their team or expanding usage. You don't have a Scale problem if you're seeing first-session exits. But you won't get there without clearing the earlier stages.&lt;/p&gt;

&lt;h2&gt;What Stage Blindness Costs You&lt;/h2&gt;

&lt;p&gt;When you can't see which stage developers are exiting, you default to shipping. More features, more onboarding improvements, more docs. I've watched companies spend a quarter iterating on their Build experience when their real problem was an Evaluate stage that left developers unsure whether the tool was even meant for them.&lt;/p&gt;

&lt;p&gt;The compounding cost: every iteration cycle you spend in the wrong stage is a cycle you're not spending on the right one. And with 4-6 months of runway, you don't have many cycles left.&lt;/p&gt;

&lt;p&gt;The other cost is signal distortion. When you're measuring product engagement during session one and attributing poor retention to product quality, you're training yourself to look in the wrong place. The feedback loop reinforces the wrong diagnosis.&lt;/p&gt;

&lt;h2&gt;How to Diagnose Which Stage Is Failing&lt;/h2&gt;

&lt;p&gt;You can't improve what you can't see. The starting point is mapping your developer journey with enough specificity to identify where the drop-off actually happens.&lt;/p&gt;

&lt;p&gt;Start with Evaluate. What does a developer experience between finding your product and deciding to invest time in it? What questions are they trying to answer? How fast can they answer them? If you don't know, your Evaluate stage might be invisible to you—which is exactly the problem.&lt;/p&gt;

&lt;p&gt;Then look at Learn. What's the gap between your quickstart and a real use case? Where does that first "wait, this doesn't work the way I expected" moment happen? That's usually where Learn exits occur.&lt;/p&gt;

&lt;p&gt;If you want a fast read on where your developer journey is failing, take the Developer Adoption Score at &lt;a href="https://builtfor.dev/score?ref=blog.builtfor.dev" rel="noopener noreferrer"&gt;builtfor.dev/score&lt;/a&gt;. Free, about 5 minutes, and it gives you an actual score on your journey—not a generic 'improve your onboarding' recommendation. Want to go deeper? Our product offering tracks real developer behavior across every stage so you can see exactly where the drop-off happens.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2tudh7jukqk8r05p8ms.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2tudh7jukqk8r05p8ms.png" alt="Why Developers Aren't Coming Back After Their First Session" width="800" height="483"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Developer Journey Tracking in the Pro Plan&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;The Real Question to Ask&lt;/h2&gt;

&lt;p&gt;The next time a developer doesn't come back after session one, the question isn't "what was wrong with their first experience?" The question is "which stage failed them?"&lt;/p&gt;

&lt;p&gt;Those are different questions with different answers. And until you can ask the right one, the onboarding loop keeps going.&lt;/p&gt;

&lt;p&gt;You've already tried fixing the product. That's not what's broken.&lt;/p&gt;

</description>
      <category>startup</category>
      <category>tooling</category>
      <category>devrel</category>
    </item>
    <item>
      <title>The 3 Signals That Tell You Whether to Rebuild or Improve Your Dev Tool Onboarding</title>
      <dc:creator>Tessa Kriesel</dc:creator>
      <pubDate>Mon, 09 Mar 2026 04:57:13 +0000</pubDate>
      <link>https://dev.to/builtfordevs/the-3-signals-that-tell-you-whether-to-rebuild-or-improve-your-dev-tool-onboarding-2iim</link>
      <guid>https://dev.to/builtfordevs/the-3-signals-that-tell-you-whether-to-rebuild-or-improve-your-dev-tool-onboarding-2iim</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1709593729967-f490f19a7a96%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDY1fHxzdG9wbGlnaHR8ZW58MHx8fHwxNzczMDMyMTA5fDA%26ixlib%3Drb-4.1.0%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1709593729967-f490f19a7a96%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDY1fHxzdG9wbGlnaHR8ZW58MHx8fHwxNzczMDMyMTA5fDA%26ixlib%3Drb-4.1.0%26q%3D80%26w%3D2000" alt="The 3 Signals That Tell You Whether to Rebuild or Improve Your Dev Tool Onboarding" width="2000" height="1333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most teams are having the wrong argument. The debate isn't "should we rebuild"—it's whether you've looked at the right data first.&lt;/p&gt;

&lt;p&gt;I've pulled more than a few product teams out of this exact trap—one that burned a quarter rebuilding its onboarding only to see activation barely move, another that iterated for six months on a flow with a structural problem from the start, adding tooltips and reorganizing steps while the actual issue sat untouched two layers deeper.&lt;/p&gt;

&lt;p&gt;Here's what both teams had in common: they made the rebuild vs. improve call based on instinct and internal pressure, not signal.&lt;/p&gt;

&lt;p&gt;That's fixable.&lt;/p&gt;

&lt;h2&gt;The real problem with this decision&lt;/h2&gt;

&lt;p&gt;When activation flatlines, the conversation usually goes one of two ways.&lt;/p&gt;

&lt;p&gt;The first: "We've been incrementally improving this for months. It's time to rebuild." This feels decisive. But it's often a reaction to frustration, not evidence. Rebuilding satisfies the internal need to feel like you're taking bold action. It doesn't automatically address the root cause.&lt;/p&gt;

&lt;p&gt;The second: "A full rebuild is too expensive right now. Let's keep iterating." This feels prudent. But if the current flow has a structural problem—a wrong assumption about what developers need to experience first, or a sequence that doesn't match how they actually think—iterating around it will only produce marginal gains. You're polishing something that needs to be torn down.&lt;/p&gt;

&lt;p&gt;Neither instinct is wrong, exactly. They just aren't grounded in what the data can actually tell you.&lt;/p&gt;

&lt;p&gt;There are three signals that cut through the argument. I've seen them hold up across enough developer products to trust them. And checking all three will probably take less time than your next sprint planning meeting.&lt;/p&gt;

&lt;h2&gt;Signal 1: Where in the journey is the drop-off happening?&lt;/h2&gt;

&lt;p&gt;This one requires you to be honest about what "activation" actually means for your product. Not sign-up. Not email confirmation. The moment a developer has experienced enough of your product to understand whether it solves their problem.&lt;/p&gt;

&lt;p&gt;Now look at your drop-off data. Where are developers leaving?&lt;/p&gt;

&lt;p&gt;If developers are dropping off before they've had the chance to reach that value moment—before the first API call, before the first integration runs, before they've gotten any output at all—that's usually a rebuild signal. The problem isn't that your steps are confusing or your copy is unclear. The problem is that the path to value is too long, or the framing at the front end is creating the wrong expectations, or developers aren't self-selecting correctly before they start.&lt;/p&gt;

&lt;p&gt;The instinct in this situation is to improve the steps right before the drop-off. But drop-off location is telling you something different: it's showing you where the gap between developer expectation and product reality became too wide to cross. If that gap opens before developers have experienced core value at all, improving the surrounding steps won't close it. The architecture is wrong—the flow is asking developers to commit before they understand what they're committing to.&lt;/p&gt;

&lt;p&gt;That's not an iteration problem. That's a structural one.&lt;/p&gt;

&lt;p&gt;If your drop-off is concentrated in the middle or later stages—after developers have had initial contact with the core value, but before they've completed the full flow—that's more often an improvement opportunity. The structure is right. There's friction you haven't removed yet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The signal:&lt;/strong&gt; Drop-off concentrated before value exposure means rebuild. Drop-off concentrated after value exposure means improve.&lt;/p&gt;

&lt;h2&gt;Signal 2: Does completing onboarding predict activation?&lt;/h2&gt;

&lt;p&gt;This one surprises teams every time they run it. Pull two cohorts: developers who completed your onboarding flow, and developers who skipped it and went straight to your docs. Compare activation rates.&lt;/p&gt;

&lt;p&gt;If developers who completed onboarding activate at significantly higher rates than those who skipped it, your flow is doing something useful. The signal-to-noise ratio is there. You have a structure worth improving.&lt;/p&gt;

&lt;p&gt;If the difference is small—or if developers who skipped onboarding activate at similar or better rates—that's a different story. It means your onboarding isn't providing something developers couldn't find themselves. They're working around it. That's a structural indictment, not an execution one.&lt;/p&gt;

&lt;p&gt;When developers route around your designed path and still succeed, it usually means the flow is optimized around internal assumptions about what developers need—not around what they actually do to reach value. The flow reflects how your team thinks about the product, not how developers experience it. You can improve that flow indefinitely. You're improving the wrong thing.&lt;/p&gt;

&lt;p&gt;The variation worth watching: if high-ICP developers skip onboarding and succeed, but lower-ICP developers complete it and still don't activate, your onboarding might be serving the wrong segment. That's a rebuild with a different diagnosis—you're building onboarding for developers who don't need it, instead of for the ones who do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The signal:&lt;/strong&gt; Completion predicts activation means improve. Completion and activation are uncorrelated means rebuild.&lt;/p&gt;

&lt;h2&gt;
  
  
  Signal 3: Has friction migrated without completion rates improving?
&lt;/h2&gt;

&lt;p&gt;This is the one that tells you whether you've been iterating in circles.&lt;/p&gt;

&lt;p&gt;Pull the last three iterations of your onboarding. For each one: where was the primary drop-off point, and what was the overall completion rate?&lt;/p&gt;

&lt;p&gt;If the friction point has moved between iterations—step 4 became step 5 became step 7—but the completion rate hasn't meaningfully improved, you've been relocating friction, not removing it. Developers are hitting the same resistance; it's just showing up in a different place now.&lt;/p&gt;
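&lt;p&gt;One way to sketch this check, assuming you've recorded the primary drop-off step and the completion rate for each shipped iteration (the numbers and the improvement threshold below are illustrative):&lt;/p&gt;

```typescript
// One row per shipped onboarding iteration; numbers are illustrative, not real data.
interface IterationStats {
  primaryDropStep: number; // which step most developers abandoned at
  completionRate: number;  // overall flow completion, 0..1
}

// If the friction point moved but completion didn't meaningfully improve,
// iteration has been relocating the problem, not removing it.
// The 5-point improvement threshold is an assumption; pick your own bar.
function diagnose(history: IterationStats[]): "rebuild" | "improve" {
  const first = history[0];
  const last = history[history.length - 1];
  const frictionMoved = first.primaryDropStep !== last.primaryDropStep;
  const improved = last.completionRate - first.completionRate > 0.05;
  return frictionMoved && !improved ? "rebuild" : "improve";
}

// Step 4 became step 5 became step 7, completion stayed flat: rebuild.
diagnose([
  { primaryDropStep: 4, completionRate: 0.31 },
  { primaryDropStep: 5, completionRate: 0.30 },
  { primaryDropStep: 7, completionRate: 0.32 },
]); // "rebuild"
```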

&lt;p&gt;This is the clearest rebuild signal in the data. It tells you there's a load-bearing structural issue that iteration hasn't touched. The friction isn't in the steps. It's in the assumption the steps are built on.&lt;/p&gt;

&lt;p&gt;When the problem location moves but completion doesn't improve, you haven't solved anything—you've relocated the resistance. Eventually teams stop noticing that the friction point keeps shifting because each iteration feels like progress. It's not. You're rebuilding the same wall in a different spot.&lt;/p&gt;

&lt;p&gt;If friction is migrating but completion is improving—say, from 30% to 45% to 55%—that's a healthy iteration signal. Keep going. The structure is sound and the changes are working.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The signal:&lt;/strong&gt; Friction migrates, completion stays flat means rebuild. Friction migrates, completion improves means improve.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the three signals combine
&lt;/h2&gt;

&lt;p&gt;You rarely need all three to point in the same direction to get clarity. Here's how I use them:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Two or more signals point to rebuild:&lt;/strong&gt; Stop iterating. You're spending engineering time on the wrong problem. Rebuild means rethinking the journey architecture—what the flow is trying to accomplish, in what order, for which developer. Not just a new UI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One signal points to rebuild, others are inconclusive:&lt;/strong&gt; Dig deeper before committing. One anomaly is worth investigating; it might be segment-specific or explained by another variable. But don't dismiss it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;All signals point to improve:&lt;/strong&gt; Good news—you have a structural foundation worth building on. The work ahead is meaningful friction removal, clearer copy, better error handling, tighter time-to-value within the existing architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No signals point to rebuild and activation is still flat:&lt;/strong&gt; Look upstream. The problem might not be in your onboarding at all. It might be in how developers arrive—the expectations your marketing and content are creating before they sign up. Fixing the flow won't fix a positioning mismatch.&lt;/p&gt;

&lt;h2&gt;
  
  
  One more thing before you start
&lt;/h2&gt;

&lt;p&gt;The instinct call usually costs you a quarter. Occasionally it costs two. The three signals cost an afternoon with your analytics.&lt;/p&gt;

&lt;p&gt;That's worth doing before you walk into the sprint planning meeting and announce a rebuild that might be the wrong call—or commit to another round of improvements on a flow that needs to be torn down.&lt;/p&gt;

&lt;p&gt;If you want to know where your developer journey is actually breaking, the &lt;a href="https://builtfor.dev/score?ref=blog.builtfor.dev" rel="noopener noreferrer"&gt;Developer Adoption Score&lt;/a&gt; is a free tool that evaluates your current experience across all five stages. It takes about ten minutes and gives you a starting point for this kind of diagnosis.&lt;/p&gt;

&lt;p&gt;The debate teams have about rebuild vs. improve is usually the right debate. They're just having it without the data that would make it productive.&lt;/p&gt;

</description>
      <category>startup</category>
      <category>tooling</category>
      <category>devrel</category>
    </item>
    <item>
      <title>Why Developers Drop Off Before Their First API Call (And How to Fix It)</title>
      <dc:creator>Tessa Kriesel</dc:creator>
      <pubDate>Sun, 08 Mar 2026 04:05:16 +0000</pubDate>
      <link>https://dev.to/builtfordevs/why-developers-drop-off-before-their-first-api-call-and-how-to-fix-it-h8a</link>
      <guid>https://dev.to/builtfordevs/why-developers-drop-off-before-their-first-api-call-and-how-to-fix-it-h8a</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1591522811280-a8759970b03f%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDV8fHRoaW5rfGVufDB8fHx8MTc3Mjk0MjUxNnww%26ixlib%3Drb-4.1.0%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1591522811280-a8759970b03f%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDV8fHRoaW5rfGVufDB8fHx8MTc3Mjk0MjUxNnww%26ixlib%3Drb-4.1.0%26q%3D80%26w%3D2000" alt="Why Developers Drop Off Before Their First API Call (And How to Fix It)" width="2000" height="1333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You've watched developers sign up, poke around, and disappear—all before making a single API call. You've checked the docs. You've improved the getting-started guide. Rewritten the quickstart. Added tooltips. The drop-off didn't move.&lt;/p&gt;

&lt;p&gt;So you add more docs. Better docs. You hire someone to clean up the copy. You obsess over the first-run experience. Still nothing.&lt;/p&gt;

&lt;p&gt;Here's what's actually happening: you've been fixing the wrong thing.&lt;/p&gt;

&lt;p&gt;Developer drop-off before the first API call is almost never a documentation problem. It's a design assumption problem. And the assumption at the center of it is one almost every dev tool founder makes without realizing it.&lt;/p&gt;

&lt;p&gt;You designed your onboarding for the developer who already knows why they're there.&lt;/p&gt;

&lt;p&gt;That developer is rare. Most of the people coming through your sign-up flow don't know yet. They're evaluating. They showed up with a vague problem and a browser tab full of competitors. They're not looking for instructions. They're looking for a reason to stay.&lt;/p&gt;

&lt;p&gt;Your onboarding has no answer for that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Signs Your Onboarding Is Optimized for the Wrong Developer
&lt;/h2&gt;

&lt;p&gt;Not all drop-off looks the same. Before we talk about what to fix, be honest about where you are. Check every statement that's true for your product right now.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your onboarding starts with "let's get you set up" rather than "here's what you'll be able to do"&lt;/li&gt;
&lt;li&gt;The getting-started guide assumes the developer has already chosen your product&lt;/li&gt;
&lt;li&gt;You have zero content in the first-run experience that explains why a developer should keep going&lt;/li&gt;
&lt;li&gt;Your quickstart requires credentials or configuration before showing value&lt;/li&gt;
&lt;li&gt;You can see where developers drop off in your funnel, but you don't know why&lt;/li&gt;
&lt;li&gt;Developers who ask questions in your Discord or Slack often say things like "I'm not sure if this is the right tool for me"&lt;/li&gt;
&lt;li&gt;Your product analytics show sign-ups climbing but first API calls staying flat&lt;/li&gt;
&lt;li&gt;You've improved documentation but seen no improvement in activation&lt;/li&gt;
&lt;li&gt;You've never watched a developer who doesn't know your product use it for the first time&lt;/li&gt;
&lt;li&gt;Most of your feedback comes from developers who made it through—not from the ones who left&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;If you checked 6 or more:&lt;/strong&gt; Your onboarding is designed for the committed developer. Everyone still evaluating you is walking into a wall.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you checked 3 to 5:&lt;/strong&gt; You've got a mixed experience. Some developers find what they need. Others don't, and they leave quietly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you checked 2 or fewer:&lt;/strong&gt; You're either genuinely strong here, or you haven't looked closely enough yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Problem: You're Starting the Conversation in the Middle
&lt;/h2&gt;

&lt;p&gt;Most dev tool founders assume that someone who signs up has made a decision. They haven't. Sign-up is curiosity, not commitment.&lt;/p&gt;

&lt;p&gt;When a developer lands in your product for the first time, here's their actual mental state: "I've heard of this. I have a problem that might fit. I have maybe 20 minutes to figure out if this is worth my time." They're not ready for your step-by-step quickstart. They're asking one question first: &lt;em&gt;Is this for me?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If your onboarding can't answer that question in the first two minutes, they leave. Not because they couldn't find the docs. Because they couldn't see themselves in your product.&lt;/p&gt;

&lt;p&gt;This is the assumption that kills activation: that showing people how to use your product is more important than showing them why they should. You've front-loaded instructions for a decision they haven't made yet.&lt;/p&gt;

&lt;p&gt;I see this constantly. Founders spend months refining the technical accuracy of their quickstart, and almost no time on the moment before it—the moment where a developer decides whether to read a single word of it.&lt;/p&gt;

&lt;p&gt;Once you lose that developer, they're gone. They're not coming back. They've already concluded that your product isn't what they need. You can try to pull them back with email sequences, but unless you know exactly why they left, you're guessing at a decision they've already made.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to Fix (Starting This Week)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Build an onboarding fork for the evaluating developer
&lt;/h3&gt;

&lt;p&gt;The developer who knows why they're there needs a fast path to their first API call. The developer who's still deciding needs something different: proof that this is worth their time before they do a single technical step.&lt;/p&gt;

&lt;p&gt;Add a branch. At the start of your onboarding flow, let developers self-select: "I have a specific use case and I'm ready to build" vs. "I'm evaluating whether this is the right tool." The second path isn't a watered-down version of the first. It's a completely different experience—one designed to answer &lt;em&gt;Is this for me?&lt;/em&gt; before asking anything of the developer.&lt;/p&gt;

&lt;p&gt;This takes more work to build. It also directly addresses the moment where most of your drop-off is happening.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Move your value proof before your credential ask
&lt;/h3&gt;

&lt;p&gt;Count the steps in your current onboarding flow before a developer sees something working. Every step you ask of a developer before delivering proof of value is a step where they can leave.&lt;/p&gt;

&lt;p&gt;The goal is to invert the sequence. Value first, configuration second. If you can't show a developer what your product does without asking them to set things up first, figure out how to sandbox it. Interactive demos, pre-populated environments, example outputs they can explore before touching a single credential—any of these buys you trust before you ask for effort.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Add an activation question to your first-run experience
&lt;/h3&gt;

&lt;p&gt;Right now you're tracking behavior but not intent. You can see where developers drop off. You can't see what they were trying to do when they did.&lt;/p&gt;

&lt;p&gt;Add a single question early in the onboarding flow—not a survey, one question: "What are you trying to build?" or "What problem brought you here?" You don't need to use the answer to gate anything. You need it to understand which type of developer is coming through your door, and to make them feel seen. A developer who's asked what they're trying to accomplish feels like the product was made for them. That feeling matters more than most founders think.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Go watch a developer use your product for the first time
&lt;/h3&gt;

&lt;p&gt;Not someone on your team. Not someone in your network who's heard your pitch. Find a developer who matches your ICP but has never heard of you, and watch them use your product. Don't help. Don't explain. Just watch.&lt;/p&gt;

&lt;p&gt;You will see the exact moment they get confused, lose confidence, or stop believing the product is for them. That moment is your real drop-off problem. It's invisible in your funnel data and completely visible in a single session.&lt;/p&gt;

&lt;p&gt;Most founders avoid this because it's uncomfortable. It's also the single fastest way to understand what's actually happening between sign-up and activation.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Keep This From Recurring
&lt;/h2&gt;

&lt;p&gt;Fixing the immediate symptoms isn't enough. The underlying problem is that you have no ongoing signal from developers who didn't make it through. Your feedback loops are built around the people who survived your onboarding, which means they're systematically blind to the experience of everyone who didn't.&lt;/p&gt;

&lt;p&gt;Build a process to capture signal from the right population. That means:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exit questions at drop-off points.&lt;/strong&gt; When a developer goes inactive in the first session without reaching activation, trigger a short question: "Was there something you were looking for that you couldn't find?" Not a survey. One question, low friction. Even a 5% response rate gives you patterns you don't have now.&lt;/p&gt;
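&lt;p&gt;A minimal sketch of that trigger logic, kept as a pure helper so it's easy to test; the ten-minute idle threshold and the option names are assumptions, not a prescription:&lt;/p&gt;

```typescript
// Pure decision helper: is it time to show the one-question exit prompt?
// The 10-minute idle threshold and these option names are illustrative assumptions.
function shouldAskExitQuestion(opts: {
  activated: boolean;      // did they reach the activation event?
  alreadyAsked: boolean;   // never ask twice
  lastActivityMs: number;  // timestamp of their last interaction
  nowMs: number;
  idleThresholdMs?: number;
}): boolean {
  const idle = opts.nowMs - opts.lastActivityMs;
  return (
    !opts.activated &&
    !opts.alreadyAsked &&
    idle >= (opts.idleThresholdMs ?? 10 * 60 * 1000)
  );
}
```

&lt;p&gt;Wire this to whatever activity events you already track. The important part isn't the mechanism; it's asking one question, once, only of developers who went idle before activating.&lt;/p&gt;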

&lt;p&gt;&lt;strong&gt;Regular first-time user sessions.&lt;/strong&gt; Make it a recurring practice, not a one-off. Every six weeks, watch one developer who matches your ICP go through your onboarding cold. The patterns you see across sessions will tell you more about your activation problem than your analytics alone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Track intent alongside behavior.&lt;/strong&gt; Your funnel shows you what developers did. Intent data—what they said they were trying to do—tells you why the behavior happened. Both are necessary. Right now most dev tool founders only have one of them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Shortest Path to Answers
&lt;/h2&gt;

&lt;p&gt;If you want to understand your drop-off problem faster than any of this, there's one move that accelerates everything: get structured feedback from developers who match your ICP and have never heard of you.&lt;/p&gt;

&lt;p&gt;Not your current users. Not your network. Developers in the evaluation mindset—exactly the state of mind of the people leaving your product before their first API call.&lt;/p&gt;

&lt;p&gt;Everything above tells you what to fix. Unbiased evaluation sessions tell you which fixes matter most, in which order, for the specific developer segment you're losing.&lt;/p&gt;

&lt;p&gt;That's the sequence. Know who's leaving and why. Then fix it.&lt;/p&gt;

</description>
      <category>api</category>
      <category>tooling</category>
      <category>devrel</category>
    </item>
    <item>
      <title>Developer Journey Map: 5 Stages to Reduce Dev Churn</title>
      <dc:creator>Tessa Kriesel</dc:creator>
      <pubDate>Sat, 07 Mar 2026 00:54:00 +0000</pubDate>
      <link>https://dev.to/builtfordevs/developer-journey-map-5-stages-to-reduce-dev-churn-g4n</link>
      <guid>https://dev.to/builtfordevs/developer-journey-map-5-stages-to-reduce-dev-churn-g4n</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1619468129361-605ebea04b44%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDF8fG1hcHxlbnwwfHx8fDE3NzI5MzEyNTN8MA%26ixlib%3Drb-4.1.0%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1619468129361-605ebea04b44%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDF8fG1hcHxlbnwwfHx8fDE3NzI5MzEyNTN8MA%26ixlib%3Drb-4.1.0%26q%3D80%26w%3D2000" alt="Developer Journey Map: 5 Stages to Reduce Dev Churn" width="2000" height="1333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You've fixed your docs. Added a quickstart. Shipped features based on feedback. And retention is still flat.&lt;/p&gt;

&lt;p&gt;The problem isn't your product. It's where you stopped looking.&lt;/p&gt;

&lt;p&gt;Most teams treat Build like the finish line. They optimize the hell out of getting developers from sign-up to first API call, then wonder why those developers don't stick around. The developer journey has five stages—and if you're only investing in the middle ones, you're leaking developers at both ends without knowing it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Five Stages
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Discover&lt;/strong&gt; — First impression. Developers are barely paying attention yet. The only question they're asking: do you get me? If your messaging doesn't signal that immediately, they're gone before they ever consider you.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;What belongs here: website, inspiring content, social presence, developer referrals, community.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Evaluate&lt;/strong&gt; — Now they're deliberate. Comparing you to alternatives, checking pricing, looking for social proof. This is where most companies lose developers silently—no drop-off notification, no signal. They just move on.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;What belongs here: docs, quickstart guide, GitHub presence, pricing clarity, social proof.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learn&lt;/strong&gt; — Hands-on testing. Time to first meaningful use matters here. So does confidence. The question isn't just "does this work?"—it's "can I make this work?" Developers who aren't sure they can succeed with your product won't stick around to find out.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;What belongs here: product access, tutorials, use case content, community where they can find answers fast.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build&lt;/strong&gt; — They're executing. Speed to MVP matters. So does support when they get stuck. These are your highest-intent users—they've cleared every hurdle to get here. Treat them like it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;What belongs here: reference guide, build shortcuts, changelog, responsive support.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scale&lt;/strong&gt; — This is where retained developers live. They're invested. They want to give feedback, influence your roadmap, and tell their peers. Developers are word-of-mouth marketing at the highest level of trust—but only if you've built something worth talking about at this stage. Most teams never do.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;What belongs here: case studies, beta programs, feedback and technical advisory board, developer partnerships.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Most Companies Actually Invest
&lt;/h2&gt;

&lt;p&gt;Evaluate, Learn, and maybe Build. That's it.&lt;/p&gt;

&lt;p&gt;Scale gets almost nothing. It's treated like a passive outcome—"if they stay, they stay." It's not. It requires the same active investment as every other stage. The developers who reach Scale are your most valuable asset. Ignoring them is one of the most expensive mistakes a dev tool company can make.&lt;/p&gt;

&lt;p&gt;Sound familiar?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Problem: You Can't Fix What You Can't See
&lt;/h2&gt;

&lt;p&gt;Mapping the journey is step one. Measurement is where it gets real.&lt;/p&gt;

&lt;p&gt;Most teams know their sign-up rate. Almost none can tell you what's happening between stages—where developers are stalling, why they drop off, what's actually moving them forward.&lt;/p&gt;

&lt;p&gt;Without behavioral signal across all five stages, you're making product, content, and DevRel decisions based on incomplete information. You might be losing developers in Evaluate because your docs aren't trustworthy. Or in Scale because there's no community to plug into. You won't know until you look.&lt;/p&gt;

&lt;p&gt;The journey map isn't a one-time strategy exercise. It's a living measurement framework.&lt;/p&gt;

&lt;h2&gt;
  
  
  Start Here
&lt;/h2&gt;

&lt;p&gt;For each of the five stages, answer three questions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What does the developer need to move forward?&lt;/li&gt;
&lt;li&gt;What do you have in place today?&lt;/li&gt;
&lt;li&gt;Where are the gaps?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You'll find them fast. The gaps in your Scale stage will probably surprise you.&lt;/p&gt;

&lt;p&gt;That's exactly what Built for Devs helps you do—map your developer journey, track behavioral signals across all five stages, and make decisions based on what's actually happening. &lt;a href="https://builtfor.dev/score?ref=blog.builtfor.dev" rel="noopener noreferrer"&gt;Get your free Developer Adoption Score&lt;/a&gt; and see where your gaps are today.&lt;/p&gt;

</description>
      <category>startup</category>
      <category>tooling</category>
      <category>devrel</category>
    </item>
    <item>
      <title>Feature Roadmaps Don't Drive Market Fit—Positive Experiences Do</title>
      <dc:creator>Tessa Kriesel</dc:creator>
      <pubDate>Sun, 01 Mar 2026 23:46:00 +0000</pubDate>
      <link>https://dev.to/builtfordevs/feature-roadmaps-dont-drive-market-fit-positive-experiences-do-24k6</link>
      <guid>https://dev.to/builtfordevs/feature-roadmaps-dont-drive-market-fit-positive-experiences-do-24k6</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1498637841888-108c6b723fcb%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDJ8fHJvYWRtYXB8ZW58MHx8fHwxNzcyOTI3MjI5fDA%26ixlib%3Drb-4.1.0%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1498637841888-108c6b723fcb%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDJ8fHJvYWRtYXB8ZW58MHx8fHwxNzcyOTI3MjI5fDA%26ixlib%3Drb-4.1.0%26q%3D80%26w%3D2000" alt="Feature Roadmaps Don't Drive Market Fit—Positive Experiences Do" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every founder obsesses over their feature roadmap. But the most successful dev tool founders I know—the ones who actually hit PMF—don't prioritize features at all. They prioritize the sequence of developer "aha moments." While you're debating whether to build that advanced dashboard or API versioning, they're mapping out the emotional journey that turns a skeptical developer into an advocate. The difference isn't what they build. It's when they reveal it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Feature Trap: Why Your Roadmap is Killing PMF
&lt;/h2&gt;

&lt;p&gt;Here's what happens in every dev tool Slack channel: "Users are asking for X feature." "Competitor Y just shipped Z." "Let's add it to the roadmap."&lt;/p&gt;

&lt;p&gt;Sound familiar?&lt;/p&gt;

&lt;p&gt;You're not alone. Most founders build their roadmap exactly this way—competitor analysis plus user requests equals priority. It feels logical. It feels data-driven.&lt;/p&gt;

&lt;p&gt;But here's the brutal truth: developers abandon your tool after signup not because you lack features, but because you're solving for the wrong moment in their journey.&lt;/p&gt;

&lt;p&gt;The problem is you're building for the developer who's already convinced your tool is worth their time. But that developer doesn't exist yet.&lt;/p&gt;

&lt;p&gt;Instead, you're facing a developer who's burned by broken promises, skeptical of new tools, and protective of their workflow. They've been hurt before. They don't trust you yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hidden Cost of Feature-First Thinking
&lt;/h2&gt;

&lt;p&gt;Developers don't abandon tools because they lack features. They abandon tools because they can't figure out if those features actually solve their problem—and they give up trying.&lt;/p&gt;

&lt;p&gt;Think about the last dev tool you adopted. You didn't start by exploring every feature. You started with one specific pain point. You needed a quick win to justify spending more time. Only after that first success did you explore deeper functionality.&lt;/p&gt;

&lt;p&gt;But here's what most founders miss: developers need multiple small wins before they trust a new tool enough to integrate it into their actual workflow. Each win builds confidence. Each win reduces the perceived risk of adoption.&lt;/p&gt;

&lt;p&gt;The feature-first approach front-loads complexity. You're asking developers to evaluate your entire value proposition before they've experienced any value at all. That's cognitive overload, not product-market fit.&lt;/p&gt;

&lt;p&gt;The result? Developers sign up, get overwhelmed, and vanish. You blame it on onboarding or messaging. But the real issue is sequence—you're revealing your value in the wrong order.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Moment-First Framework: How PMF Winners Actually Build
&lt;/h2&gt;

&lt;p&gt;The founders who crack PMF think differently. They don't ask "What features should we build?" They ask "What sequence of moments will turn a skeptical developer into an advocate?"&lt;/p&gt;

&lt;p&gt;Here's their framework:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Map the emotional journey from skepticism to advocacy.&lt;/strong&gt;  Developers don't go from "never heard of you" to "integrated into my CI/CD pipeline" in one step. They move through predictable emotional states: Skeptical → Curious → Convinced → Trusting → Advocating.&lt;/p&gt;

&lt;p&gt;Your job isn't to build features. Your job is to design the moments that move developers through this progression.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Identify the sequence of small wins that build developer confidence.&lt;/strong&gt;  Every successful dev tool follows the same pattern: Quick Win → Trust Signal → Integration Moment.&lt;/p&gt;

&lt;p&gt;The Quick Win proves immediate value with minimal investment. It's the 'wow, this actually works' moment—the one that happens in under 5 minutes.&lt;/p&gt;

&lt;p&gt;The Trust Signal demonstrates the tool won't break their existing workflow. It's proof that your tool plays nicely with their stack, their team, their process.&lt;/p&gt;

&lt;p&gt;The Integration Moment is when they commit to using your tool in production. It's the point of no return—and it only happens after you've earned their trust.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sequence these moments deliberately.&lt;/strong&gt;  Most founders reveal everything at once. Winners reveal just enough to create the next moment. They hide complexity until developers are ready for it.&lt;/p&gt;

&lt;p&gt;This isn't about dumbing down your product. It's about respecting how developers actually adopt tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your 30-Day Moment Mapping Sprint
&lt;/h2&gt;

&lt;p&gt;Ready to stop building features and start building moments? Here's your implementation plan:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 1: Interview 10 developers about their tool adoption journey.&lt;/strong&gt;  Don't ask about your product. Ask about the last dev tool they adopted successfully—the one they actually use. What was their first interaction? When did they decide to keep using it? What almost made them quit?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 2: Map your current user flow against emotional states.&lt;/strong&gt;  Walk through your signup flow, onboarding, and first-use experience. At each step, ask: "What is the developer feeling right now? Skeptical? Confused? Confident?" Identify where you're creating friction instead of momentum.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 3: Identify your 3 core moments and sequence them.&lt;/strong&gt;  What's your Quick Win? What's your Trust Signal? What's your Integration Moment? Design each moment to build on the previous one. Hide everything else.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 4: Test the new flow with 5 developers.&lt;/strong&gt;  Watch them use your tool. Don't explain anything. Just observe. Are they experiencing the moments you designed? Where do they get stuck? Where do they light up?&lt;/p&gt;

&lt;p&gt;The founders who achieve PMF don't build better features. They build better moments. They understand that developers need emotional progression—not feature progression.&lt;/p&gt;

&lt;p&gt;Your roadmap shouldn't be a list of capabilities. It should be a sequence of experiences that transforms skeptics into advocates.&lt;/p&gt;

&lt;p&gt;Stop building features. Start building moments.&lt;/p&gt;

</description>
      <category>startup</category>
      <category>tooling</category>
      <category>devrel</category>
    </item>
    <item>
      <title>How I Used Claude Code's Multi-Agent Orchestration and Laravel to Rebuild a Backend Overnight</title>
      <dc:creator>Tessa Kriesel</dc:creator>
      <pubDate>Sun, 15 Feb 2026 19:24:31 +0000</pubDate>
      <link>https://dev.to/tessak22/how-i-used-claude-codes-multi-agent-orchestration-and-laravel-to-rebuild-a-backend-overnight-174</link>
      <guid>https://dev.to/tessak22/how-i-used-claude-codes-multi-agent-orchestration-and-laravel-to-rebuild-a-backend-overnight-174</guid>
      <description>&lt;p&gt;Last night at 8:50 PM, I sat down to rebuild a product's backend from scratch. By 11 PM, I had a complete Laravel API with auth, CRUD, real-time WebSockets, and chat. By 1:50 AM, the existing React Native frontend was wired up and running against it locally. Deployed to the cloud the next morning.&lt;/p&gt;

&lt;p&gt;That's about two hours for the entire backend. Five hours total including frontend integration.&lt;/p&gt;

&lt;p&gt;Here's how multi-agent orchestration, Laravel, and solid documentation made that possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Was Building
&lt;/h2&gt;

&lt;p&gt;The product is a real-time sports analytics platform. Live play-by-play, game scores, chat, coach dashboards, player profiles, the whole deal. The original product was built on a monolithic Python script with a FastAPI backend. It worked, but it wasn't built to scale.&lt;/p&gt;

&lt;p&gt;I decided to tear down the backend and rebuild it on Laravel. The existing React Native frontend just needed to be rewired to the new API. And I needed it done fast, because we're shipping in August.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Three-Tool Workflow
&lt;/h2&gt;

&lt;p&gt;I ended up using different Claude tools for different parts of the process. I'd done some research beforehand on how to work with Claude effectively, and the workflow that emerged worked well enough that I want to break it down.&lt;/p&gt;

&lt;p&gt;I used three:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Claude in the browser&lt;/strong&gt; for strategic planning. I fed it the CEO's product documentation, the existing codebase context, and our technical requirements. It generated a phased build plan: six phases covering everything from auth to deployment prep. The plan wasn't vague; it was specific enough to hand directly to Claude Code as prompts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Claude Code&lt;/strong&gt; for execution. Each phase of the build plan became a prompt. Claude Code scaffolded the Laravel backend phase by phase: migrations, models, controllers, API resources, broadcasting events, chat with profanity filtering, security hardening. Six phases in about two hours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-agent orchestration&lt;/strong&gt; for the messy middle. This is where it got interesting.&lt;/p&gt;

&lt;h2&gt;
  
  
  18 Agents Fixing Errors Simultaneously
&lt;/h2&gt;

&lt;p&gt;After the backend was built and the frontend was wired up, I had 175 TypeScript errors across 33 files. The old Python API returned flat objects with string IDs. The new Laravel API returned nested resources with integer IDs. Every screen that touched game data, team data, or player data was broken.&lt;/p&gt;

&lt;p&gt;Instead of fixing them one file at a time, I spun up multiple Claude Code agents in parallel. At one point, 18 agents were running simultaneously, each tackling a different batch of errors. One agent handled coach dashboard screens. Another fixed player stat field names. Another rewired the scores screens from flat fields to nested team objects.&lt;/p&gt;

&lt;p&gt;They all worked against the same codebase without stepping on each other, because each agent had a scoped task. The errors went from 175 to 0 across eight batches.&lt;/p&gt;
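&lt;p&gt;The pattern that makes this safe is simple: partition the work into disjoint batches so no two workers ever touch the same file. Here's a loose sketch of that idea in plain Python; the batch names and &lt;code&gt;fix_batch&lt;/code&gt; function are hypothetical stand-ins, not Claude Code's actual API:&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical batches: each worker owns a disjoint set of files,
# so parallel workers never step on each other's changes.
batches = {
    "coach-dashboard": ["CoachHome.tsx", "CoachNotes.tsx"],
    "player-stats": ["PlayerProfile.tsx", "StatTable.tsx"],
    "scores": ["Scores.tsx", "GameDetail.tsx"],
}

def fix_batch(name, files):
    # Stand-in for an agent resolving the type errors in its files.
    return (name, len(files))

with ThreadPoolExecutor(max_workers=len(batches)) as pool:
    futures = [pool.submit(fix_batch, n, f) for n, f in batches.items()]
    results = dict(f.result() for f in futures)
```

&lt;p&gt;Because the batches are disjoint, the outcome is the same as running them sequentially, just faster.&lt;/p&gt;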

&lt;p&gt;That kind of parallelism just isn't possible with a single-threaded workflow, human or AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Laravel Was the Cheat Code
&lt;/h2&gt;

&lt;p&gt;The original backend was a monolithic Python script. One file doing everything. Translating that into Laravel was almost unfairly easy.&lt;/p&gt;

&lt;p&gt;Laravel gives you so much out of the box. Sanctum for API auth with token management. Eloquent for database models with relationships. API Resources for consistent JSON response shaping. Broadcasting with Reverb for WebSockets. Built-in rate limiting, validation, and middleware. Even the profanity filter for chat was straightforward with Laravel's pipeline pattern.&lt;/p&gt;

&lt;p&gt;What would've been weeks of custom plumbing in the old stack became &lt;code&gt;php artisan make:model&lt;/code&gt;, define relationships, write a resource class, done. The CRUD patterns practically wrote themselves, and Claude Code was incredibly effective at generating Laravel code because the framework's conventions are so well-defined.&lt;/p&gt;

&lt;p&gt;Six phases. About two hours. A complete API with auth, CRUD, game stats, play-by-play engine, real-time broadcasting, chat, and deployment prep.&lt;/p&gt;

&lt;h2&gt;
  
  
  Documentation Made the AI Work
&lt;/h2&gt;

&lt;p&gt;The reason Claude's build plan was so accurate on the first pass? The founder's documentation was thorough.&lt;/p&gt;

&lt;p&gt;The founder had detailed specs for every feature. Field definitions for game data. User role descriptions. Chat behavior rules. Stat categories and how they should be grouped. When I fed that into Claude in the browser alongside the existing codebase, the AI had enough context to generate a build plan that actually mapped to reality.&lt;/p&gt;

&lt;p&gt;Garbage in, garbage out applies to AI workflows too. If the product documentation had been vague, I'd have spent half my time clarifying requirements instead of building.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Full Timeline
&lt;/h2&gt;

&lt;p&gt;Here's how the night actually went:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Time&lt;/th&gt;
&lt;th&gt;What Happened&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;8:50 PM&lt;/td&gt;
&lt;td&gt;Created the Laravel project, started Phase 1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;9:32 PM&lt;/td&gt;
&lt;td&gt;Auth system complete (Sanctum, login, register, roles)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10:00 PM&lt;/td&gt;
&lt;td&gt;Core CRUD, search, dashboard, coach notes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10:15 PM&lt;/td&gt;
&lt;td&gt;Game stats engine, play-by-play, data export&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10:31 PM&lt;/td&gt;
&lt;td&gt;Real-time broadcasting, chat, profanity filter&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10:54 PM&lt;/td&gt;
&lt;td&gt;Security hardening, error tracking, deployment prep&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11:00 PM&lt;/td&gt;
&lt;td&gt;Backend done. Six phases in ~2 hours.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11:42 PM&lt;/td&gt;
&lt;td&gt;Brought in Claude Code for frontend integration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1:50 AM&lt;/td&gt;
&lt;td&gt;Frontend wired to backend, full stack running locally&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Next morning&lt;/td&gt;
&lt;td&gt;Deployed backend to Laravel Cloud, frontend to Vercel&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Push-to-deploy on both platforms. The team could pull up the app on their phones by the time church was over.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd Tell Other Developers
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use AI at every layer, but different AI for each layer.&lt;/strong&gt; Browser-based Claude is great for planning and strategy. Claude Code is great for execution. Multi-agent orchestration is great for parallel problem-solving. Don't try to do everything in one tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pick a framework with strong conventions.&lt;/strong&gt; Laravel's opinionated structure made it trivially easy for Claude Code to generate correct, idiomatic code. The more predictable your framework, the more effective AI-assisted development becomes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invest in documentation before you start building.&lt;/strong&gt; The single biggest accelerator wasn't the AI itself, it was the quality of the input. Thorough product docs meant the build plan was right on the first try. No back-and-forth. No "actually, I meant this."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let the agents scope themselves.&lt;/strong&gt; When I ran 18 agents simultaneously, Claude Code broke the work into batches on its own, each agent handling a specific set of files and error types. They didn't overlap. I didn't have to manually assign work, I just pointed it at the problem and it parallelized the fix.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don't be afraid of ambitious timelines.&lt;/strong&gt; A complete backend rebuild in one night sounds absurd. But with the right tools, the right framework, and solid documentation, it's just a series of well-scoped tasks executed in parallel. The ceiling on what one developer can ship in a session has fundamentally changed.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm Tessa Kriesel, CEO of &lt;a href="https://www.builtfor.dev/" rel="noopener noreferrer"&gt;Built for Devs&lt;/a&gt; and part-time CTO at a startup. By day I help developer tools with product adoption and go-to-market. By night, apparently, I rebuild backends with 18 AI agents running at once. Find me online if you want to talk developer products or multi-agent orchestration.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>laravel</category>
      <category>ai</category>
      <category>productivity</category>
      <category>learning</category>
    </item>
    <item>
      <title>Why the Next AWS Outage Will Cost You More Than the Last One (And What to Do About It)</title>
      <dc:creator>Tessa Kriesel</dc:creator>
      <pubDate>Thu, 05 Feb 2026 23:08:37 +0000</pubDate>
      <link>https://dev.to/controlplane/why-the-next-aws-outage-will-cost-you-more-than-the-last-one-and-what-to-do-about-it-508j</link>
      <guid>https://dev.to/controlplane/why-the-next-aws-outage-will-cost-you-more-than-the-last-one-and-what-to-do-about-it-508j</guid>
      <description>&lt;p&gt;When AWS US-EAST-1 went dark on October 20, 2025, over 3,500 companies across 60 countries went down with it.&lt;/p&gt;

&lt;p&gt;Not because their code was broken. Because their architecture was.&lt;/p&gt;

&lt;p&gt;Here's what happened: a race condition in DynamoDB's DNS management system triggered a cascade that took down everything depending on it. Auth services. Routing layers. Even companies running in &lt;em&gt;other&lt;/em&gt; AWS regions discovered their "multi-region" setups had hidden dependencies on US-EAST-1.&lt;/p&gt;

&lt;p&gt;If you watched that unfold from your incident Slack channel, you already know 100% uptime is a myth. The real question isn't whether your infrastructure will fail. It's whether your architecture keeps serving traffic while the hyperscaler figures it out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spoiler: most architectures don't.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Math Nobody Wants to Talk About&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Availability is a simple ratio:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Availability = MTBF / (MTBF + MTTR)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Most engineering teams obsess over MTBF (how do we prevent failures?). That's the wrong question. The October outage lasted &lt;strong&gt;15 hours&lt;/strong&gt;. AWS's own SLA guarantees 99.99% for most services, which allows roughly 52 minutes of downtime &lt;em&gt;per year&lt;/em&gt;. Fifteen hours blew past that in a single incident.&lt;/p&gt;
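&lt;p&gt;The arithmetic behind those budgets is easy to verify yourself. A quick sketch, using a 365-day year (8,760 hours):&lt;/p&gt;

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

def annual_downtime_minutes(availability):
    """Yearly downtime budget implied by an availability target."""
    return (1 - availability) * HOURS_PER_YEAR * 60

three_nines = annual_downtime_minutes(0.999)    # ~525.6 min (~8.76 hours)
four_nines  = annual_downtime_minutes(0.9999)   # ~52.56 min
five_nines  = annual_downtime_minutes(0.99999)  # ~5.26 min

# A 15-hour outage is 900 minutes: roughly 17 years of a
# four-nines budget burned in a single incident.
budget_years = (15 * 60) / four_nines
```

&lt;p&gt;That's why MTTR, not MTBF, dominates the equation once outages stretch into hours.&lt;/p&gt;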

&lt;p&gt;For large enterprises, unplanned downtime now costs an average of $2 million per hour. Not because servers are expensive. Because revenue stops, customer trust erodes, and under regulations like DORA (fully implemented in 2025), financial institutions face actual penalties for failing to demonstrate resilience by design.&lt;/p&gt;

&lt;p&gt;Here's what the nines actually look like in practice:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Availability&lt;/th&gt;
&lt;th&gt;Annual Downtime&lt;/th&gt;
&lt;th&gt;What It Actually Takes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;99.9% (three nines)&lt;/td&gt;
&lt;td&gt;8.76 hours&lt;/td&gt;
&lt;td&gt;Single cloud, good ops team&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;99.99% (four nines)&lt;/td&gt;
&lt;td&gt;52.56 minutes&lt;/td&gt;
&lt;td&gt;Redundancy within one provider&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;99.999% (five nines)&lt;/td&gt;
&lt;td&gt;5.26 minutes&lt;/td&gt;
&lt;td&gt;Cross-cloud failover, zero single points of failure&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;See the jump from four nines to five? That's a tenfold cut in allowable downtime, and you don't get it from better ops discipline. That's a fundamentally different architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;You've Already Crossed the Complexity Horizon&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Your backend isn't complicated like a jet engine where cause and effect are linear. It's complex like a biological system.&lt;/p&gt;

&lt;p&gt;A DNS hiccup triggers aggressive retry loops across thousands of microservices. That saturates your database connection pool. Your load balancer marks an entire region as down. One small thing breaks, and suddenly everything breaks in ways nobody predicted.&lt;/p&gt;

&lt;p&gt;Systems theorists call this the Complexity Horizon: the point where interdependencies are so dense that cascading failure isn't a risk to mitigate. It's a mathematical certainty to plan for.&lt;/p&gt;

&lt;p&gt;Three patterns made the October outage as devastating as it was. Sound familiar?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Thundering Herd.&lt;/strong&gt; A core service hiccupped. Thousands of client applications entered aggressive retry loops simultaneously, creating a self-inflicted DDoS that prevented the system from ever stabilizing. The fix couldn't deploy because the problem kept feeding itself.&lt;/p&gt;
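&lt;p&gt;The standard client-side defense against a thundering herd is exponential backoff with jitter: each retry waits longer, and the randomness spreads clients out so they can't re-synchronize. A minimal sketch (the base delay and cap are illustrative, not prescriptive):&lt;/p&gt;

```python
import random

def backoff_delay(attempt, base=0.5, cap=30.0):
    """Full-jitter exponential backoff: the wait grows with each
    attempt, and uniform jitter keeps clients from retrying in
    lockstep after a shared failure."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

# In a retry loop you'd sleep(backoff_delay(attempt)) between tries.
delays = [backoff_delay(a) for a in range(6)]
```

&lt;p&gt;Without the jitter term, every client computes the same schedule and the herd simply stampedes on a timer.&lt;/p&gt;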

&lt;p&gt;&lt;strong&gt;The IAM Lockout.&lt;/strong&gt; The engineers who needed to fix the problem couldn't authenticate to their own systems. Why? The identity layer was part of the failure chain. The people with the keys were locked outside with everyone else.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monoculture Risk.&lt;/strong&gt; Three providers control 63% of global cloud infrastructure. A power issue in one Virginia data center cascaded into a global economic disruption in minutes. Virginia. One state. Global impact.&lt;/p&gt;

&lt;p&gt;Every one of these patterns stems from the same root cause: deep dependency on a single provider's infrastructure stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Real Decision Most Teams Are Avoiding&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;After every major outage, the playbook is the same. Better monitoring. Tighter runbooks. More chaos engineering.&lt;/p&gt;

&lt;p&gt;Those are all fine. But here's the problem—they're optimizations within the same architecture that just failed you.&lt;/p&gt;

&lt;p&gt;The real decision is structural: &lt;strong&gt;Do you keep bolting resilience onto a single-cloud foundation, or do you put an orchestration layer between your code and the infrastructure?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let me explain myself.&lt;/p&gt;

&lt;p&gt;This is the same evolution that played out with email. There was a time when every company employed Exchange Server engineers (at least two, because if one was out, you needed redundancy). Email was a solved problem being re-solved by every organization individually. At enormous cost.&lt;/p&gt;

&lt;p&gt;Then Google and Microsoft offered email as a service. You paid by the mailbox and never thought about it again. The Exchange Server engineers didn't disappear. The good ones moved up the stack to work on problems that actually differentiated their business.&lt;/p&gt;

&lt;p&gt;Cloud infrastructure is at that exact inflection point right now.&lt;/p&gt;

&lt;p&gt;Every company delivering digital services is hiring platform engineering teams to stitch together the same backend concerns: secrets management, service discovery, mutual TLS, geo-routing, logging, metrics, tracing, observability. The cloud gives you building blocks (Kubernetes as a service, object storage, managed databases), but the integration work between those primitives and production-ready software? That's on you. Every single time.&lt;/p&gt;

&lt;p&gt;That's a massive amount of duplicated effort across the entire industry. And it's the reason most organizations can't get past four nines. They're spending all their engineering budget rebuilding the same plumbing instead of investing in the architecture that would actually change the math.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Here's What Actually Changes the Math&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Getting to five nines (5.26 minutes of downtime per year) requires three things that are nearly impossible when you're locked into a single cloud provider:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Instant cross-cloud failover.&lt;/strong&gt; When AWS goes down, your workloads need to be serving from GCP or Azure within seconds. Not hours. Not "we'll spin up a DR environment." Actually serving production traffic from another provider without missing a beat. That's what turns a 15-hour outage into a non-event for your customers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zero hidden single points of failure.&lt;/strong&gt; Your identity layer. Your DNS. Your routing. None of it can depend on the provider that's currently on fire. This requires a genuine abstraction layer, not just multi-region deployments that secretly phone home to a single control plane.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Portability without rearchitecting.&lt;/strong&gt; If moving off a provider requires months of engineering work, you don't have resilience. You have a very expensive backup plan you'll never actually execute under pressure.&lt;/p&gt;

&lt;p&gt;This is the problem Control Plane was built to solve.&lt;/p&gt;

&lt;p&gt;The platform provides a single orchestration layer across AWS, Azure, GCP, Oracle, and on-prem infrastructure. Your code deploys once and runs anywhere. When a provider goes down, traffic shifts automatically: no manual intervention, no runbooks, no 3 AM pages.&lt;/p&gt;

&lt;p&gt;We call it the non-stick layer. Your workloads aren't welded to any single provider, so the cost of moving—for resilience, cost optimization, or avoiding lock-in—drops to near zero.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Part Your CFO Will Actually Care About&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Resilience alone is a hard budget conversation. "Spend more money so that when something bad happens, it's less bad" is a tough sell. I get it.&lt;/p&gt;

&lt;p&gt;But here's what most teams miss: the architecture that delivers five-nines resilience also fundamentally changes your cost structure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You stop paying for idle compute.&lt;/strong&gt; Traditional cloud billing charges you for full VMs whether you're using 100% of the CPU or 3%. Control Plane bills in millicores (thousandths of a vCPU). You pay for the actual compute your workload consumes, not the full machine sitting there mostly idle. Customers see 40-60% savings on cloud compute. That's real money.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You get reserved instance pricing without the commitment.&lt;/strong&gt; Instead of locking into a three-year contract to get a reasonable per-core rate, Control Plane offers on-demand pricing lower than what most providers charge for reserved instances. No commitment. Fractional billing. The math just works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You shrink or redeploy your platform engineering team.&lt;/strong&gt; The median platform engineer costs $180-220K fully loaded. Most mid-size companies employ 4-10 of them to maintain the backend plumbing that Control Plane provides out of the box. That's $700K to $2.2M per year in labor spent re-solving solved problems. Before you even factor in the opportunity cost of what those engineers could be building instead.&lt;/p&gt;

&lt;p&gt;Add it up: lower compute costs, no lock-in premiums, and a platform engineering team that can finally work on the product instead of the plumbing.&lt;/p&gt;

&lt;p&gt;The resilience is almost a bonus.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What You Should Actually Do Next&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The October outage wasn't an anomaly. It was a preview. As AI workloads grow and backend complexity increases, the cascades will get worse. Here's how to get ahead of the next one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accept that outages are inevitable and design for recovery speed.&lt;/strong&gt; Your competitive advantage isn't preventing failures. It's your Resilience Velocity—how fast your architecture recovers without human intervention. Invest in automated failover, not bigger ops teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Eliminate monoculture risk at the architecture level.&lt;/strong&gt; Multi-region isn't multi-cloud. If your "redundancy" strategy lives entirely within one provider's ecosystem, you're diversified in geography but not in risk. True resilience means your workloads can run on any provider and switch between them automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stop rebuilding solved infrastructure.&lt;/strong&gt; Every month your platform team spends maintaining secrets management, service mesh, and observability tooling is a month they're not spending on the product your customers are paying for. The same pattern that moved email from on-prem Exchange to managed services is coming for backend infrastructure. The companies that make that shift early will ship faster, spend less, and sleep better.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit your hidden dependencies.&lt;/strong&gt; After October, dozens of companies discovered their "multi-cloud" setups had hidden dependencies on US-EAST-1 for auth or routing. Map every service your infrastructure depends on and ask: if this goes down, do we go down with it?&lt;/p&gt;

&lt;p&gt;The Complexity Horizon isn't something you overcome. It's something you architect around.&lt;/p&gt;

&lt;p&gt;The companies that weathered October without a scratch weren't the ones with the biggest ops teams. They were the ones whose architecture made the provider outage irrelevant.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Control Plane delivers production-grade backend infrastructure across every major cloud provider, with automatic cross-cloud failover, fractional compute billing, and built-in secrets management, service mesh, and observability.&lt;/em&gt; &lt;a href="https://dub.sh/cp-docs" rel="noopener noreferrer"&gt;&lt;em&gt;See how it works →&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>productivity</category>
      <category>architecture</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>What a fantastic event this was.</title>
      <dc:creator>Tessa Kriesel</dc:creator>
      <pubDate>Mon, 09 Dec 2024 00:47:31 +0000</pubDate>
      <link>https://dev.to/tessak22/what-a-fantastic-event-this-was-4ofj</link>
      <guid>https://dev.to/tessak22/what-a-fantastic-event-this-was-4ofj</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/dthompsondev" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F359729%2Fd5de48f7-efc3-4f1c-8566-4d946fc3724d.jpg" alt="dthompsondev"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/dthompsondev/financial-post-mortem-for-the-commit-your-code-conference-57of" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Financial Post Mortem For The Commit Your Code Conference&lt;/h2&gt;
      &lt;h3&gt;Danny Thompson ・ Dec 9 '24&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#webdev&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#javascript&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#opensource&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#career&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
    </item>
    <item>
      <title>DevRel is Sales</title>
      <dc:creator>Tessa Kriesel</dc:creator>
      <pubDate>Wed, 07 Feb 2024 22:49:58 +0000</pubDate>
      <link>https://dev.to/tessak22/devrel-is-sales-4djj</link>
      <guid>https://dev.to/tessak22/devrel-is-sales-4djj</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1605814573621-0513c34a0d58%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDUzfHxzaG9ja2VkfGVufDB8fHx8MTcwNzMzOTk1N3ww%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1605814573621-0513c34a0d58%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wxMTc3M3wwfDF8c2VhcmNofDUzfHxzaG9ja2VkfGVufDB8fHx8MTcwNzMzOTk1N3ww%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D2000" alt="DevRel is Sales" width="2000" height="1333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You read that right. DevRel &lt;strong&gt;is&lt;/strong&gt; sales.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hkeqej9fn5pm5rfvc9g.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hkeqej9fn5pm5rfvc9g.gif" alt="DevRel is Sales" width="498" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before you jump to conclusions, let me explain myself. Have you ever given a conference presentation or demo, had a developer ask you questions afterward, and later that same developer became a customer?&lt;/p&gt;

&lt;p&gt;Who influenced that sale? 🤨&lt;/p&gt;

&lt;p&gt;Are we traditional sales? Absolutely not. However, DevRel's efforts and relations do contribute to the company's total revenue. I know the true skeptics are ready to fight me in the comments, but DevRel teams are influential to a company's revenue. The whole premise of developer relations is to leverage developer peers to engage with developers because they don't want anything to do with a salesperson or a marketing campaign.&lt;/p&gt;

&lt;p&gt;We deliver conference presentations and host events. We create meaningful content that solves technical problems. We make friends everywhere we go and have a strong network. These are all things that developer relations practitioners do every day, and these efforts attract developers. Therefore, they attract paying customers too. The difference is that developer relations needs to stay authentic to its relationship &amp;amp; community-building roots to gain developers' trust. Measuring a DevRel team with a sales quota is bound to fail, but reporting how DevRel influenced the company's revenue is just smart business.&lt;/p&gt;

&lt;h2&gt;
  
  
  Are you reporting revenue influenced?
&lt;/h2&gt;

&lt;p&gt;The problem isn't whether DevRel is or isn't sales, though. The problem is that most developer relations teams aren't reporting their influence on revenue. You can tell me all day long that DevRel teams aren't measured by revenue, but which teams have we seen laid off the most this year in tech? DevRel.&lt;/p&gt;

&lt;p&gt;My pre-tech career was in business, so I know if a function or team doesn't contribute to the bottom line, it will be cut when times get tough. And honestly, it's bound to get questioned every step of the way even when times are good. Sound familiar, DevRel practitioners, getting questioned every step of the way? It's because we're not properly reporting the full extent of our impact on the organization.&lt;/p&gt;

&lt;h3&gt;
  
  
  DQLs to Revenue-Influenced
&lt;/h3&gt;

&lt;p&gt;If your team is reporting DQLs (DevRel Qualified Leads), reporting revenue influenced should be fairly easy. Track down where each lead went and whether a sale ever surfaced. Before you know it, you have a list of deals that your DevRel team influenced. Who's just a cost center now?!&lt;/p&gt;

&lt;p&gt;If you're not tracking DQLs, you can still capture a revenue-influenced data point, but the work is a little harder. You'll need a deeper understanding of your efforts' impact and whether they contributed to a deal. In many DevRel roles, I would occasionally play a sales engineer: I'd be asked to join a call or event where I could use my lovely personality and tech chops to win over a prospective customer. Efforts like these are exactly what to capture as revenue influenced.&lt;/p&gt;

&lt;p&gt;We may also find ourselves converting a developer to a customer by the end of a week-long conference. You're just being a great developer advocate and supporting a peer, but that's all impact that needs to be properly reported.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you're doing DevRel well, every function of your org should &lt;em&gt;feel&lt;/em&gt; your impact, but your executives should &lt;strong&gt;see it&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Okay, okay, DevRel isn't sales in a traditional sense, but we surely influence revenue, and we should ensure these metrics are seen and known across the company.&lt;/p&gt;

</description>
      <category>devrel</category>
    </item>
    <item>
      <title>Navigating the DevRel Job Market</title>
      <dc:creator>Tessa Kriesel</dc:creator>
      <pubDate>Tue, 16 Aug 2022 19:35:29 +0000</pubDate>
      <link>https://dev.to/tessak22/navigating-the-devrel-job-market-26k4</link>
      <guid>https://dev.to/tessak22/navigating-the-devrel-job-market-26k4</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1484981138541-3d074aa97716%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DMnwxMTc3M3wwfDF8c2VhcmNofDE2fHxjYXJlZXIlMjBtYXJrZXR8ZW58MHx8fHwxNjYwNjc4MDMy%26ixlib%3Drb-1.2.1%26q%3D80%26w%3D2000" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimages.unsplash.com%2Fphoto-1484981138541-3d074aa97716%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DMnwxMTc3M3wwfDF8c2VhcmNofDE2fHxjYXJlZXIlMjBtYXJrZXR8ZW58MHx8fHwxNjYwNjc4MDMy%26ixlib%3Drb-1.2.1%26q%3D80%26w%3D2000" alt="Navigating the DevRel Job Market" width="2000" height="1333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Developer Relations &lt;em&gt;aka DevRel&lt;/em&gt; is an ever-changing landscape as the industry becomes more widely known and evolves. The tech industry in general has recently seen a shift in hiring priorities and a growing number of layoffs, and I've watched many of my DevRel peers unexpectedly find themselves laid off and looking for new opportunities.&lt;/p&gt;

&lt;p&gt;With that being said, I still believe our industry will continue growing at exponential rates as more founders &amp;amp; leaders begin to see the impact DevRel can have when companies target developers as one of their primary audiences. If you’re finding yourself on the job market and considering a role in DevRel, or a shift from a previous role, I’ve pulled together a few resources to help you navigate your DevRel career search.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where to Find DevRel Jobs
&lt;/h2&gt;

&lt;p&gt;I wrote a piece last year with a great &lt;a href="https://www.commonroom.io/blog/where-to-look-for-devrel-jobs/" rel="noopener noreferrer"&gt;list of places to look for DevRel jobs&lt;/a&gt;, including the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.commonroom.io/uncommon/jobs/" rel="noopener noreferrer"&gt;Uncommon job board&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://airtable.com/shr4xdmpFPWnRxf5q" rel="noopener noreferrer"&gt;DevRel Collective&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://devrelweekly.com/" rel="noopener noreferrer"&gt;DevRel Weekly newsletter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.devrelx.com/jobs" rel="noopener noreferrer"&gt;DevRelX jobs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developerrelations.com/jobs" rel="noopener noreferrer"&gt;DeveloperRelations.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://communitymanagerjobs.co/" rel="noopener noreferrer"&gt;Community Manager jobs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’ve also found that &lt;a href="https://www.linkedin.com/jobs/search/?currentJobId=3051864235&amp;amp;keywords=developer%20relations" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; has improved its keywords and search results around Developer Relations and found many key roles there too.&lt;/p&gt;

&lt;h2&gt;DevRel Compensation&lt;/h2&gt;

&lt;p&gt;Developer Relations compensation varies greatly depending on the role you serve, your experience, and your industry focus. I’ve found that DevRel compensation can be comparable to engineering roles, for both engineers and engineering managers. In my opinion, DevRel practitioners need comprehensive skill sets, and their compensation should match their capabilities. Most DevRel practitioners have some engineering experience plus skills in areas such as public speaking, social media, technical writing, and education, making them jacks-of-all-trades (or Jills).&lt;/p&gt;

&lt;p&gt;Two great compensation surveys have been conducted in the last year that can help you better understand salary ranges.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.commonroom.io/blog/2022-developer-relations-compensation-report/" rel="noopener noreferrer"&gt;Common Room 2022 DevRel Compensation Report&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/bffjossy/devrel-collective-2021-salary-survey-introduction-40j8"&gt;DevRel Collective 2021 Salary Report&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;DevRel Roles&lt;/h2&gt;

&lt;p&gt;Developer Relations includes a variety of roles and career paths that shift based on what a company and team need to succeed. Below, I’ll break down a few typical titles you may see while searching job boards and sites.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developer Advocate&lt;/li&gt;
&lt;li&gt;Developer Experience aka DX Engineer&lt;/li&gt;
&lt;li&gt;(Technical) Community Manager&lt;/li&gt;
&lt;li&gt;(Technical) Program Manager&lt;/li&gt;
&lt;li&gt;DevRel Engineer&lt;/li&gt;
&lt;li&gt;(Technical) Outreach Manager&lt;/li&gt;
&lt;li&gt;Technical Writer / Editor&lt;/li&gt;
&lt;li&gt;Head of Developer Relations&lt;/li&gt;
&lt;li&gt;Manager, Developer Advocacy&lt;/li&gt;
&lt;li&gt;Manager, Developer Experience&lt;/li&gt;
&lt;li&gt;Manager, Developer Education&lt;/li&gt;
&lt;li&gt;Manager, Developer Marketing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;New to DevRel? If so, some of these titles may seem foreign to you. You can leverage a slide deck I created called &lt;a href="https://docs.google.com/presentation/d/1zRpQ9jcenlCF82pqb2Aj_yUS8aOnAR26H9rafXP34FM/edit?usp=sharing" rel="noopener noreferrer"&gt;What is DevRel&lt;/a&gt; to learn more about where these roles may fit within the team.&lt;/p&gt;

&lt;p&gt;If you’re curious about learning more about DevRel team levels, there’s a &lt;a href="https://slack.engineering/defining-a-career-path-for-developer-relations/" rel="noopener noreferrer"&gt;great post from Bear Douglas at Slack&lt;/a&gt; you should check out.&lt;/p&gt;

&lt;h2&gt;Additional Research&lt;/h2&gt;

&lt;p&gt;If you’re looking for additional insights into DevRel roles, compensation, hiring practices, and more, here are a few resources worth checking out.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.stateofdeveloperrelations.com/" rel="noopener noreferrer"&gt;State of Developer Relations&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://layoffs.fyi/" rel="noopener noreferrer"&gt;Layoffs.fyi&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.teamblind.com/search/developer%20relations" rel="noopener noreferrer"&gt;Blind&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://devrelweekly.com/" rel="noopener noreferrer"&gt;DevRel Weekly&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developeravocados.net/" rel="noopener noreferrer"&gt;Developer Avocados&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.devrelx.com/" rel="noopener noreferrer"&gt;DevRelX&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developerrelations.com/" rel="noopener noreferrer"&gt;DeveloperRelations.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://twitter.com/erinmikail/status/1558511704987910145?s=20&amp;amp;t=bh8gQEbQ3eZKHR_sxA_Gog" rel="noopener noreferrer"&gt;Airtable template for tracking your job search&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Interested in learning more about how to build your personal brand and find success in a DevRel career? Check out my upcoming mastermind group starting in September.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffk5jkcgwswdz45t9xqhv.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffk5jkcgwswdz45t9xqhv.jpeg" alt="Navigating the DevRel Job Market" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;DevRel Mastermind: Finding DevRel Career Success&lt;/h3&gt;

&lt;p&gt;Join a 4-week DevRel Mastermind intended to help DevRel practitioners build their brand, land their dream job, &amp;amp; find career success.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://lu.ma/devrel-career-success" rel="noopener noreferrer"&gt;Learn More&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devrel</category>
    </item>
  </channel>
</rss>
