<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Joel C</title>
    <description>The latest articles on DEV Community by Joel C (@jlinco).</description>
    <link>https://dev.to/jlinco</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F788863%2Fe42e71cf-88b9-4119-8903-bed9efd261e4.jpeg</url>
      <title>DEV Community: Joel C</title>
      <link>https://dev.to/jlinco</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jlinco"/>
    <language>en</language>
    <item>
      <title>When the Lead Is the Bottleneck</title>
      <dc:creator>Joel C</dc:creator>
      <pubDate>Thu, 26 Mar 2026 18:51:24 +0000</pubDate>
      <link>https://dev.to/jlinco/when-the-lead-is-the-bottleneck-49j0</link>
      <guid>https://dev.to/jlinco/when-the-lead-is-the-bottleneck-49j0</guid>
      <description>&lt;p&gt;There's a specific kind of meeting that engineers learn to dread.&lt;br&gt;
Not the ones where things are going wrong - those are expected, they're manageable, they're part of the job. The ones I mean are the meetings where things are clearly going wrong, someone in the room says exactly what needs to be said, and nothing changes. The concern is heard, noted, and set aside. The decision was already made before anyone walked in.&lt;/p&gt;

&lt;p&gt;I've been in that meeting. I've raised the right concern, watched it land, watched it get dismissed, and then watched the person who dismissed it repeat it back minutes later as their own reasoning for doing the opposite. And I've sat with the quiet frustration of knowing that the DevOps engineer in the same meeting saw it too, agreed, and was overruled just the same.&lt;/p&gt;

&lt;h3&gt;
  
  
  What do you do with that?
&lt;/h3&gt;

&lt;p&gt;For a while, you keep trying. You document. You find better ways to make the case. You hope that the evidence will eventually be loud enough that it can't be ignored. Sometimes it works. Often it doesn't. And in the case I'm describing, the evidence eventually got very loud indeed — in the form of a system that stopped working and costs that had climbed well past what anyone wanted to admit.&lt;br&gt;
That's what this post is about. Not just bad technical decisions, but the leadership dynamic that makes them inevitable.&lt;/p&gt;




&lt;h3&gt;
  
  
  The Lead Who Couldn't Be Wrong
&lt;/h3&gt;

&lt;p&gt;There's a particular failure mode in technical leadership that's hard to name without sounding like you're just complaining about someone you didn't like. So let me try to be precise about what I mean.&lt;br&gt;
The lead I'm describing wasn't incompetent in the conventional sense. He could talk about architecture. He had opinions. He made decisions. What he couldn't do, or wouldn't do, was hold those decisions lightly. Couldn't entertain the possibility that a call made three months ago might need revisiting now that the system had taught us something new. Couldn't hear a concern from his team without experiencing it as a challenge to his authority.&lt;/p&gt;

&lt;p&gt;At one point, I asked him directly to walk me through the reasoning behind a particular technical decision. His response was that I couldn't question those decisions.&lt;br&gt;
Not that the reasoning was complex. Not that it was documented elsewhere. Not that this wasn't the right time. Just: you can't question this.&lt;/p&gt;

&lt;p&gt;That's not technical leadership. That's positional authority dressed up as technical leadership. And the difference matters enormously, because one of them makes systems better and one of them makes systems fragile.&lt;/p&gt;




&lt;h3&gt;
  
  
  Ego Is an Architectural Decision
&lt;/h3&gt;

&lt;p&gt;Here's something that took me a while to articulate: the way a lead responds to being challenged is itself an architectural input.&lt;/p&gt;

&lt;p&gt;When a lead creates an environment where concerns can't be raised, what they're actually doing is removing a feedback loop from the system. And feedback loops, the ability to sense when something isn't working and correct for it, are not optional features in complex systems. They're load-bearing. Take them out and the system becomes brittle. It can hold together for a while on momentum and good fortune, but it loses the ability to adapt. And systems that can't adapt don't survive contact with reality for very long.&lt;/p&gt;

&lt;p&gt;This is what I mean when I say ego is an architectural decision. Every time a lead shuts down a question, dismisses a concern, or makes themselves the single point through which all technical truth must pass, they're making a choice about how information flows through the team. They're deciding, in effect, that their own judgment is more reliable than the collective intelligence of the people closest to the work.&lt;/p&gt;

&lt;p&gt;Sometimes that's true. Mostly it isn't. And the cost of being wrong about it isn't just a bad meeting — it's a system designed around blind spots.&lt;/p&gt;




&lt;h3&gt;
  
  
  What Good Actually Looks Like
&lt;/h3&gt;

&lt;p&gt;I want to be careful here not to paint a picture where the lesson is simply "be humble", because humility without substance is just another kind of performance. What I've seen work, in the teams and leads worth learning from, looks more like this:&lt;/p&gt;

&lt;p&gt;They separate the decision from the ego. A good lead can say "I made that call, the evidence now suggests it was wrong, let's revisit it" without experiencing it as a personal collapse. The decision was wrong. They're not.&lt;/p&gt;

&lt;p&gt;They know what they don't know. The DevOps engineer in that meeting understood the infrastructure constraints at a level the lead didn't. A good lead uses that. They don't override it.&lt;/p&gt;

&lt;p&gt;They stay close to the problem. The further a lead gets from the actual work, the code, the errors, the logs, the cost reports, the more they start making decisions based on the architecture in their head rather than the system in production. These two things diverge faster than most people expect.&lt;/p&gt;

&lt;p&gt;They treat questions as information. When an engineer asks why a decision was made, that's not insubordination. That's someone trying to understand the system well enough to work in it effectively. The right response is an explanation, not a shutdown.&lt;/p&gt;

&lt;p&gt;None of this is complicated in theory. In practice, it requires something that's genuinely difficult: the ability to hold authority and uncertainty at the same time. To be the person responsible for the direction without needing to be the person who is always right.&lt;/p&gt;




&lt;h3&gt;
  
  
  For the Engineers in the Room
&lt;/h3&gt;

&lt;p&gt;If you're not the lead, if you're the one raising the concern and watching it go nowhere, this part is for you.&lt;br&gt;
Document everything. Not out of paranoia, but because clear written records of what was raised, when, and what the response was serve everyone when the reckoning eventually comes. And it usually comes.&lt;/p&gt;

&lt;p&gt;Find your allies. In the situation I described, I wasn't alone in that meeting. The DevOps engineer saw the same thing I saw. That matters, not because two voices are always louder than one, but because it confirms your read on the situation and keeps you grounded when the environment starts making you doubt yourself.&lt;/p&gt;

&lt;p&gt;Know when you've done what you can. There's a point where you've made the case as clearly as it can be made, through every appropriate channel, and the decision still isn't changing. Recognising that point, and deciding what to do with it, is one of the harder professional skills to develop. Staying indefinitely in a situation where good work goes nowhere and good judgment gets overridden has a cost that tends to show up slowly and then all at once.&lt;/p&gt;

&lt;p&gt;And write it down. Not just for the legal record, though that matters too. Write it down because experiences like these are genuinely instructive, and the detail fades faster than you expect. The clearer you can be about what happened, what you tried, and what the outcome was, the more useful it becomes to you, and eventually to others.&lt;/p&gt;




&lt;h3&gt;
  
  
  The Thing About Systems
&lt;/h3&gt;

&lt;p&gt;Systems tend to reflect the people who build them. Not always in obvious ways, but the assumptions, the blind spots, the things that were never questioned, they end up embedded in the architecture. In the choice of tools, in the way services talk to each other, in the decisions that were made once and never revisited.&lt;/p&gt;

&lt;p&gt;A system built by someone who couldn't be questioned will have unquestioned assumptions running through it. And those assumptions are fine until the moment they aren't, at which point they're very expensive to find and very difficult to remove.&lt;br&gt;
This is why the human side of technical work isn't a soft topic. It's not the optional extra you get to once the real engineering is done. It is engineering. The way a team communicates, challenges each other, surfaces concerns, and makes decisions together — that's part of the system. It shows up in the output.&lt;/p&gt;

&lt;p&gt;The lead who shut down questions didn't just create a difficult work environment. He created a fragile system. Those two things were the same decision, made over and over, in meeting after meeting, until the system ran out of room to absorb the cost.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://dev.to/jlinco/part-1-serverless-is-not-a-silver-bullet-what-lambdas-are-actually-for-3ja3"&gt;Part 1: Serverless Is Not a Silver Bullet — Understanding What Lambdas Are Actually For&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://dev.to/jlinco/microservices-doesnt-mean-lambda-everything-4339"&gt;Part 2: Microservices Doesn't Mean Lambda Everything&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Part 3: When the Lead Is the Bottleneck&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This series started with a technical post about Lambda functions. It was always going to end here — because the technical decisions and the human ones were never really separate.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>techleadership</category>
      <category>systemdesign</category>
      <category>buildinpublic</category>
      <category>software</category>
    </item>
    <item>
      <title>Microservices Doesn't Mean Lambda Everything</title>
      <dc:creator>Joel C</dc:creator>
      <pubDate>Wed, 18 Mar 2026 08:21:22 +0000</pubDate>
      <link>https://dev.to/jlinco/microservices-doesnt-mean-lambda-everything-4339</link>
      <guid>https://dev.to/jlinco/microservices-doesnt-mean-lambda-everything-4339</guid>
      <description>&lt;p&gt;Here's a scenario that might be familiar.&lt;/p&gt;

&lt;p&gt;A team is building on a microservices architecture. Things are moving. There's momentum. Then a long-running background process starts failing — timeouts, cost spikes, instability. Someone raises the concern: &lt;em&gt;Lambda is ephemeral. It's designed for short, simple, one-time tasks. It runs, finishes, goes back to sleep. A long-running daily job is the wrong fit.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The lead's response: &lt;em&gt;No, that's not what Lambda is for&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Then, in the same breath, he repeated the same explanation back: Lambda is ephemeral, it handles short, quick tasks. And he used it as the justification for keeping the Lambda in place. The DevOps engineer in the room, who shared the same concern, was instructed to make it work regardless.&lt;/p&gt;

&lt;p&gt;I was in that room. And what followed was predictable in hindsight: costs climbed, the system became increasingly fragile, and eventually the whole thing came down.&lt;/p&gt;

&lt;p&gt;Not because microservices was the wrong pattern. But because the team had confused an architectural philosophy with a compute mandate, and the person with decision-making authority wasn't willing to separate the two, even when the reasoning to do so came out of his own mouth.&lt;/p&gt;

&lt;p&gt;That distinction is what this post is about.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Microservices Actually Means
&lt;/h2&gt;

&lt;p&gt;Microservices is an architectural pattern built around one core idea: breaking a system into small, independently deployable services, each responsible for a specific business capability, each able to be developed, scaled, and maintained on its own.&lt;/p&gt;

&lt;p&gt;The benefits are real when applied correctly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Services can be scaled independently based on their specific load&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Teams can own and deploy individual services without coordinating a monolith release&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A failure in one service doesn't have to cascade across the entire system&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Different services can use different technologies where appropriate&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Notice what's absent from that list: any mention of how those services are computed. Microservices says nothing about Lambda, containers, VMs, or bare metal. It describes boundaries and responsibilities, not runtime infrastructure.&lt;/p&gt;
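&lt;p&gt;To make that separation concrete, here's a minimal sketch (all names illustrative, not from any real codebase) of a single microservice capability whose core logic is compute-agnostic: the same function can sit behind a Lambda handler or a long-lived container's HTTP route, because the pattern only dictates the boundary, not the runtime.&lt;/p&gt;

```python
# Illustrative sketch: one "order status" capability, two runtime adapters.
# The business logic itself knows nothing about Lambda or containers.

def get_order_status(order_id: str) -> dict:
    """Core service logic: pure and runtime-agnostic.
    A real service would query its own order store here."""
    return {"order_id": order_id, "status": "shipped"}

# Adapter 1: wired up as an AWS Lambda handler behind API Gateway.
def lambda_handler(event, context):
    order_id = event["pathParameters"]["order_id"]
    return {"statusCode": 200, "body": get_order_status(order_id)}

# Adapter 2: the same logic exposed from a long-lived container
# (e.g. a Flask or FastAPI route); only the thin adapter changes.
def http_route(order_id: str) -> dict:
    return get_order_status(order_id)
```

&lt;p&gt;Swapping the adapter never touches the service boundary, which is the point: "microservice" constrains the former, not the latter.&lt;/p&gt;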

&lt;p&gt;This is where teams slip.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Conflation Problem
&lt;/h2&gt;

&lt;p&gt;When a team goes serverless-first alongside a microservices architecture, there's a seductive logic that takes hold: each microservice is a function, Lambda runs functions, therefore each microservice should be a Lambda.&lt;/p&gt;

&lt;p&gt;This sounds coherent. It is not.&lt;/p&gt;

&lt;p&gt;A microservice is a unit of business capability with its own data, its own API, its own deployment lifecycle. A Lambda function is a specific compute primitive with specific constraints. It is stateless, ephemeral, time-limited. These are not the same thing, and they don't map cleanly onto each other.&lt;/p&gt;

&lt;p&gt;Some microservices are perfectly suited for Lambda: lightweight, event-driven, short-lived operations. An authentication token validator. A webhook processor. A notification dispatcher. These fit the Lambda model well.&lt;/p&gt;

&lt;p&gt;Others are not. A service that runs a nightly data reconciliation job. A service that processes large files sequentially over an extended period. A service that maintains a persistent connection. Forcing these into Lambda doesn't make your architecture more "micro". It makes it more fragile, more expensive, and harder to debug.&lt;/p&gt;
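&lt;p&gt;For a sense of what "working against the architecture" looks like in code, here's a hypothetical sketch of the workaround teams reach for when a long job is forced into Lambda: watch the clock, checkpoint a cursor, and re-invoke before the 15-minute limit. Every name here is illustrative.&lt;/p&gt;

```python
import time

# Hypothetical sketch of a long job squeezed into Lambda: the function
# must watch the clock, persist a cursor, and hand off to a fresh
# invocation before the hard 15-minute limit kills it.

TIME_LIMIT_SECONDS = 15 * 60
SAFETY_MARGIN_SECONDS = 60

def process_batch(cursor):
    """Placeholder for one unit of work; returns the next cursor,
    or None when the dataset is exhausted."""
    return None

def chunked_handler(event, context):
    cursor = event.get("cursor", 0)
    start = time.monotonic()
    while cursor is not None:
        if time.monotonic() - start > TIME_LIMIT_SECONDS - SAFETY_MARGIN_SECONDS:
            # Out of runway: in a real system you'd persist the cursor
            # and re-invoke the function (e.g. via boto3) to continue.
            return {"resume_from": cursor}
        cursor = process_batch(cursor)
    return {"done": True}
```

&lt;p&gt;None of that machinery exists because the problem needs it; it exists because the compute choice does. A container or a scheduled job simply runs until it finishes.&lt;/p&gt;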

&lt;p&gt;The pattern should serve the problem. When it starts working the other way around, you're no longer doing architecture. You are cosplaying at scale.&lt;/p&gt;




&lt;h2&gt;
  
  
  Complexity Is a Cost
&lt;/h2&gt;

&lt;p&gt;One of the underappreciated principles of system design is that complexity has a price, and that price compounds over time.&lt;br&gt;
Microservices, done well, manage complexity by isolating it. Each service owns its domain cleanly, and the interactions between services are well-defined. Done poorly, microservices multiply complexity: more services means more network calls, more failure points, more observability challenges, more deployment overhead.&lt;/p&gt;

&lt;p&gt;The decision to adopt microservices should come with an honest accounting of that overhead. It makes sense when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The system is large enough that different parts genuinely need to scale independently&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multiple teams need to work autonomously without stepping on each other&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Different components have meaningfully different reliability or performance requirements&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It adds cost without proportional benefit when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The system is early-stage and the domain isn't well understood yet&lt;/li&gt;
&lt;li&gt;The team is small and the coordination overhead outweighs the autonomy gains&lt;/li&gt;
&lt;li&gt;The services are so tightly coupled that deploying one still requires deploying others&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Starting with a well-structured monolith and extracting services as genuine need emerges is often the more pragmatic path. It's less exciting to say. It's more honest.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Principle Underneath All of This
&lt;/h2&gt;

&lt;p&gt;Every architectural decision is a trade-off. Microservices trades simplicity for scalability and autonomy. Lambda trades control and longevity for speed and cost-efficiency at the right scale. These are good trades in the right contexts.&lt;/p&gt;

&lt;p&gt;The failure mode isn't choosing one of these patterns. It's choosing them without understanding what you're trading, and then refusing to revisit that choice when the evidence suggests it isn't working.&lt;/p&gt;

&lt;p&gt;What made the meeting I described particularly costly wasn't just the wrong technical call. It was that the correct reasoning was present in the room, understood well enough to be articulated, and still didn't change the outcome. When that happens, the problem is no longer technical. It's structural. And structural problems tend to be more expensive.&lt;/p&gt;

&lt;p&gt;Good architecture is not a set of decisions made once at the beginning of a project. It's an ongoing conversation between what the system needs and what the current design provides. When that conversation stops, when the architecture becomes fixed and unquestionable, the system starts accumulating the cost of that silence.&lt;/p&gt;

&lt;p&gt;Sometimes that cost is technical debt. Sometimes it's a failed deployment. Sometimes it's the whole thing coming down.&lt;/p&gt;




&lt;h2&gt;
  
  
  What This Looks Like in Practice
&lt;/h2&gt;

&lt;p&gt;A few questions worth asking before committing to any architectural pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What problem is this pattern solving for this system, right now?&lt;/li&gt;
&lt;li&gt;What are the constraints and costs this pattern introduces?&lt;/li&gt;
&lt;li&gt;Are we using this because it's the right fit, or because it's what we know?&lt;/li&gt;
&lt;li&gt;What would need to be true for this decision to be wrong? Are we watching for it?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That last question is the most important one. An architecture you can't question is an architecture you can't improve.&lt;/p&gt;

&lt;p&gt;Part 3 covers what happens when the person responsible for asking those questions is the one refusing to ask them.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is part of an ongoing series on tech decisions, architecture, and the human dynamics that shape both.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>architecture</category>
      <category>lambda</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Part 1: Serverless Is Not a Silver Bullet. What Lambdas Are Actually For</title>
      <dc:creator>Joel C</dc:creator>
      <pubDate>Tue, 10 Mar 2026 17:00:00 +0000</pubDate>
      <link>https://dev.to/jlinco/part-1-serverless-is-not-a-silver-bullet-what-lambdas-are-actually-for-3ja3</link>
      <guid>https://dev.to/jlinco/part-1-serverless-is-not-a-silver-bullet-what-lambdas-are-actually-for-3ja3</guid>
      <description>&lt;p&gt;There's a particular kind of technical mistake that's easy to make and expensive to fix. It doesn't come from ignorance, exactly. It comes from taking something that works well in one context and applying it everywhere because it's available, familiar, and fits the current architecture on paper.&lt;/p&gt;

&lt;p&gt;Lambdas are a good example of this.&lt;/p&gt;

&lt;p&gt;I've seen this mistake made in production, at cost, by people who should have known better. But before I get into that, let's establish the foundation.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS Lambda?
&lt;/h2&gt;

&lt;p&gt;Lambda is Amazon's serverless compute offering. The concept is elegant: you write a function, you define its triggers, and AWS handles everything else. No server provisioning, no infrastructure management, no paying for idle time. The function runs, does what it has to do, and then disappears.&lt;/p&gt;

&lt;p&gt;That last part is vital: &lt;strong&gt;it disappears&lt;/strong&gt;. Lambdas are ephemeral by design. They spin up to handle a task and shut down when they are done. This isn't a limitation, it's the point. It's what makes them powerful in the right context.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Lambda Really Shines
&lt;/h2&gt;

&lt;p&gt;The sweet spot for Lambda is tasks that are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Short-lived&lt;/strong&gt;: the task needs to complete within a bounded time&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Event-driven&lt;/strong&gt;: something happens (a trigger) and a function responds&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Intermittent&lt;/strong&gt;: the task does not need to be running constantly&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stateless&lt;/strong&gt;: the task does not need to remember anything between calls&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Classic examples: image processing after an upload to S3, sending notifications triggered by a user action, responding to an API Gateway request. These are tasks that happen, finish, and move on. Lambda handles them cleanly and cheaply.&lt;/p&gt;
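&lt;p&gt;As a rough illustration of the first of those examples, here's what an S3-triggered handler tends to look like, assuming the standard S3 event notification shape; the actual processing step is a placeholder.&lt;/p&gt;

```python
# Sketch of an S3-upload-triggered Lambda: short-lived, event-driven,
# stateless. The event shape below follows the standard S3 notification
# format; the work itself is a placeholder.

def s3_handler(event, context):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # A real handler would fetch the object and, say, generate a
        # thumbnail; here we just record what would have been processed.
        results.append(f"processed s3://{bucket}/{key}")
    return results
```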

&lt;h2&gt;
  
  
  Where It Breaks Down
&lt;/h2&gt;

&lt;p&gt;Lambda has hard constraints that don't bend regardless of how you configure it. This is where teams get into trouble:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;15-minute maximum execution time&lt;/strong&gt;. If your task runs longer, Lambda kills it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cold starts&lt;/strong&gt;. When a function hasn't been invoked recently, there's a startup delay. For latency-sensitive workflows, this matters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pricing model&lt;/strong&gt;. You're billed per invocation and per GB-second of compute time. For short, infrequent tasks this is excellent. For long-running or high-frequency workloads, costs compound fast.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Statelessness&lt;/strong&gt;. Every invocation starts fresh. If your process needs to maintain state across a long operation, you're working against the architecture.&lt;/li&gt;
&lt;/ul&gt;
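&lt;p&gt;The pricing point is easy to sanity-check with back-of-envelope arithmetic. The rates below are approximate (check current AWS pricing for your region), but the shape of the curve is what matters: per-GB-second billing is nearly free for short, infrequent tasks and compounds quickly for sustained, high-frequency work.&lt;/p&gt;

```python
# Back-of-envelope Lambda cost model. Rates are approximate and
# illustrative only; always check current AWS pricing.

PRICE_PER_GB_SECOND = 0.0000166667  # approx. x86 rate at time of writing
PRICE_PER_REQUEST = 0.0000002

def monthly_cost(invocations: int, duration_s: float, memory_gb: float) -> float:
    compute = invocations * duration_s * memory_gb * PRICE_PER_GB_SECOND
    requests = invocations * PRICE_PER_REQUEST
    return compute + requests

# Short, infrequent task: 100k invocations/month at 200 ms and 128 MB
# comes to a few cents. Exactly what Lambda is priced for.
cheap = monthly_cost(100_000, 0.2, 0.125)

# Sustained high-frequency workload: 50M invocations/month at 1 s and
# 1 GB runs to hundreds of dollars, before comparing it to one small server.
heavy = monthly_cost(50_000_000, 1.0, 1.0)
```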

&lt;p&gt;Let's consider a long-running background job: one that needs to run daily, process a large dataset, maintain context between executions, and potentially run for much longer than 15 minutes.&lt;/p&gt;

&lt;p&gt;Lambda is the wrong tool for this. Not because it can't be coerced into that scenario with workarounds, but because you'd be fighting the design of the service at every step, increasing both your costs and the fragility of your application.&lt;/p&gt;

&lt;p&gt;There are tools purpose-built for this kind of workload: EC2 instances, ECS tasks, containerised services, or even a simple scheduled job on a properly configured server. These options exist precisely because sustained, stateful, long-running compute is a fundamentally different problem from ephemeral, event-driven execution.&lt;/p&gt;
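&lt;p&gt;By way of contrast, here's a hedged sketch of what that purpose-built alternative looks like: a plain worker process, scheduled externally (cron, a systemd timer, or EventBridge kicking off an ECS task), that holds its context in memory and runs as long as the job takes. All names are illustrative.&lt;/p&gt;

```python
# Sketch of a long-running nightly job as an ordinary process: no
# 15-minute ceiling, and state lives in memory for the whole run.

def load_dataset():
    """Placeholder for streaming a large dataset."""
    return range(5)

def reconcile(item, state):
    """Stateful work: context accumulates across the entire run."""
    state["processed"] += 1
    return state

def run_nightly_job():
    state = {"processed": 0}
    for item in load_dataset():
        state = reconcile(item, state)
    return state

if __name__ == "__main__":
    # Scheduling is external (cron, or EventBridge -> ECS RunTask), so
    # the process itself is free to run for hours if the data demands it.
    print(run_nightly_job())
```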

&lt;h2&gt;
  
  
  The Mistake Is in the Reasoning, Not the Tool
&lt;/h2&gt;

&lt;p&gt;Lambda is a well-designed service. The issue is never the tool itself. It's the reasoning (or lack thereof) behind the decision to use it.&lt;/p&gt;

&lt;p&gt;"We're using Lambda because we're serverless first" is a strategy.&lt;/p&gt;

&lt;p&gt;"We're using Lambda because we're using Lambda" is not.&lt;/p&gt;

&lt;p&gt;When the justification for a technical decision is the decision itself, that's a signal worth paying attention to. Good system design is not just about asking &lt;em&gt;whether we can use this tool&lt;/em&gt;, but &lt;em&gt;whether we should, here, for this specific requirement.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I learned this the expensive way, watching a system architect itself into a corner because no one was allowed to ask that second question.&lt;/p&gt;




</description>
      <category>techleadership</category>
      <category>aws</category>
      <category>serverless</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Starting Something I Should Have Started Years Ago</title>
      <dc:creator>Joel C</dc:creator>
      <pubDate>Mon, 02 Mar 2026 15:26:34 +0000</pubDate>
      <link>https://dev.to/jlinco/starting-something-i-should-have-started-years-ago-13h9</link>
      <guid>https://dev.to/jlinco/starting-something-i-should-have-started-years-ago-13h9</guid>
      <description>&lt;p&gt;After over a decade in the tech space, I'm finally doing what I've always said I would do: I'm writing about it.&lt;/p&gt;

&lt;p&gt;This won't be just about the technical stuff, though there'll be plenty of that. This will cover the real stuff. The human side of tech work. The contracts that seemed fine until they weren't. The tools that promised everything and delivered half. Clients and employers who taught me what professionalism should look like, and the ones who taught me what it doesn't. Red flags I missed, and the ones I learned to spot.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Now?
&lt;/h3&gt;

&lt;p&gt;Honestly, two reasons. First, there's a lot to talk about, which is funny, because I'm not someone who talks much. And second, I'm currently in a situation that made one thing very clear: if I don't share what I've learned, someone else is going to learn these lessons the hard way. That's a waste.&lt;/p&gt;

&lt;h3&gt;
  
  
  What to expect
&lt;/h3&gt;

&lt;p&gt;I'll be writing about tech tools and architecture decisions: what worked, what didn't and why. About working across borders and cultures in remote-first environments. About the human side of this industry: the contracts, clients, red flags and stuff that didn't make it into job descriptions. And when the time is right, there's a story I'm going to tell that I think a lot of people in this space need to hear.&lt;/p&gt;

&lt;p&gt;If you've been in tech for a while and have stories and lessons you think will benefit others, do reach out. Who knows, we could collaborate. If you're early in your career and have questions about navigating this space, feel free to ask. Let's make this useful.&lt;/p&gt;

&lt;p&gt;I'm making this public so I actually follow through. You know, accountability, right?&lt;/p&gt;

&lt;p&gt;Let's see where this goes.&lt;/p&gt;

</description>
      <category>softwaredevelopment</category>
      <category>techcareers</category>
      <category>remotework</category>
      <category>lessons</category>
    </item>
  </channel>
</rss>
