<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Matt Hogg</title>
    <description>The latest articles on DEV Community by Matt Hogg (@mrmatthogg).</description>
    <link>https://dev.to/mrmatthogg</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F341679%2Fcfd1819a-efa4-4cfd-a57a-bc9e642b3e7b.jpg</url>
      <title>DEV Community: Matt Hogg</title>
      <link>https://dev.to/mrmatthogg</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mrmatthogg"/>
    <language>en</language>
    <item>
      <title>Vibe Coding Is Bad News For Good Ideas</title>
      <dc:creator>Matt Hogg</dc:creator>
      <pubDate>Tue, 17 Mar 2026 00:00:00 +0000</pubDate>
      <link>https://dev.to/mrmatthogg/vibe-coding-is-bad-news-for-good-ideas-1j8g</link>
      <guid>https://dev.to/mrmatthogg/vibe-coding-is-bad-news-for-good-ideas-1j8g</guid>
      <description>&lt;p&gt;There’s something about reckless “AI” use, and vibe coding in particular, that’s bothered me for reasons I’ve never quite been able to put my finger on until now. Indulge me as I rant about our total disregard for constraints and the good ideas that come from them.&lt;/p&gt;

&lt;p&gt;I’ve noticed that the creator of &lt;a href="https://claude.com/product/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;, &lt;a href="https://borischerny.com/about/" rel="noopener noreferrer"&gt;Boris Cherny&lt;/a&gt;, is mentioned relentlessly on social media by users with feedback. Cherny has a developer relations role to play, yes, but he’s also actively developing a product. And, boy, do folks have opinions on that product!&lt;/p&gt;

&lt;p&gt;I find it cringe that any ol’ terminal jockey can come in from the cold and pitch him their very particular edge case. Cherny likely disagrees with me. Not only does he respond and engage with the feedback, he’s also &lt;a href="https://www.threads.com/@amyleecodes/post/DVU5KEOEVg2" rel="noopener noreferrer"&gt;automated the triage—and fixes—for anything posted on Threads, Slack, or GitHub&lt;/a&gt;. OK, then.&lt;/p&gt;

&lt;p&gt;I don’t think this is sound engineering or product development. Vibe coders mistake &lt;em&gt;productivity&lt;/em&gt; for &lt;em&gt;product&lt;/em&gt;. I worry about “AI” boosters bragging about the former instead of focusing on the latter. Lately it feels like it’s more about the quantity of ideas shipped, and not the quality of ideas considered.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This is yet another reminder that using AI doesn’t make the product better, when you don’t have a team that is customer-obsessed / product obsessed.&lt;/p&gt;
&lt;p&gt;And a team that is customer/product-obsessed without AI (or very little AI) will still run laps around one with AI…&lt;/p&gt;
&lt;cite&gt;— &lt;a href="https://bsky.app/profile/gergely.pragmaticengineer.com/post/3mh67inq3tc27" rel="noopener noreferrer"&gt;gergely.pragmaticengineer.com&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;Should an implementation be this fast and easy? Should execution be this deceptively cheap? (Psst, it’s &lt;a href="https://www.wheresyoured.at/anthropic-is-bleeding-out/" rel="noopener noreferrer"&gt;subsidized&lt;/a&gt;.) It’s not just that we puny &lt;a href="https://hbr.org/2026/03/when-using-ai-leads-to-brain-fry" rel="noopener noreferrer"&gt;humans suffer “brain fry”&lt;/a&gt; and can’t keep up cognitively. Throughout the creative process, eliminating the role of constraints and limitations is a tragic misstep.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I have talked to quite a few vibe coding companies who are proud of their super rapid development.&lt;/p&gt;
&lt;p&gt;None of them seemed to know what “product development strategy” means or what will happen after you paid them for the first delivery.&lt;/p&gt;
&lt;cite&gt;— &lt;a href="https://www.threads.com/@mistakenotmy/post/DVEjyyXDUAv" rel="noopener noreferrer"&gt;mistakenotmy&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;Constraints and creativity go hand in hand. &lt;a href="https://en.wikipedia.org/wiki/Leonardo_da_Vinci" rel="noopener noreferrer"&gt;Leonardo da Vinci&lt;/a&gt; proclaimed, “Art lives from constraints and dies from freedom.” &lt;a href="https://en.wikipedia.org/wiki/Orson_Welles" rel="noopener noreferrer"&gt;Orson Welles&lt;/a&gt; was once heard to say, “The enemy of art is the absence of limitations.” Serial entrepreneur &lt;a href="https://en.wikipedia.org/wiki/Biz_Stone" rel="noopener noreferrer"&gt;Biz Stone&lt;/a&gt; wrote, “Constraint inspires creativity.”&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://en.wikipedia.org/wiki/Green_Eggs_and_Ham" rel="noopener noreferrer"&gt;Green Eggs and Ham&lt;/a&gt;&lt;/em&gt; exists because Seuss’ editor bet him $50 that he couldn’t write an engaging children’s book using only 50 distinct words. It’s now one of the most famous children’s books in the English-speaking world.&lt;/p&gt;

&lt;p&gt;Of course, we’re talking about tech that’s predicated on consuming &lt;em&gt;everything&lt;/em&gt; and then generating &lt;em&gt;anything&lt;/em&gt; we ask for. Shame on me for being so naive, eh? “AI” isn’t remotely interested in constraints. “AI” doesn’t encourage restraint in its use.&lt;/p&gt;

&lt;p&gt;When I see vibe coders running wild and loving it, I feel like Dr. Ian Malcolm in Jurassic Park, pounding on the table and whispering, “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should!”&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/_oNgyUAEv0Q"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;For one thing, not every idea is worth pursuing. The vast majority of ideas will be tepid, dumb, dangerous, or hardly worth the effort in a hundred little ways. What does it mean to never say no, fire up a dozen simultaneous agents while you head off to the gym, and build a mediocre app that does everything?&lt;/p&gt;

&lt;p&gt;If the effort is zero and you implement every single idea then I’d argue you don’t actually have any meaningful ideas. You have no vision, and what you’re building is neither refined nor intentional. What are you making? Do you know why?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;your org rarely has good ideas. ideas being expensive to implement was actually helping&lt;/p&gt;
&lt;cite&gt;— &lt;a href="https://twitter.com/thdxr/status/2022574719694758147" rel="noopener noreferrer"&gt;thdxr&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;For another thing, people are way too excited about optimizing for a bottleneck that never existed. Our coding capacity was never a problem. I’ve previously argued that &lt;a href="https://matthogg.fyi/developers-your-job-is-not-to-write-code/" rel="noopener noreferrer"&gt;a developer’s job is not to write code&lt;/a&gt; but this isn’t what I meant! We have many duties before and after the part where we’re typing, and none of that’s solved with vibe-coded slop.&lt;/p&gt;

&lt;p&gt;The irony is that when we trivialize the technical effort then the entire project is at existential risk. Vibe coding makes things far worse, and we don’t yet know the impact of relying on it as we have been. The developers on your team serve as an intentional bulkhead against bad ideas, and you don’t have a viable business without that. &lt;a href="https://www.linkedin.com/posts/chrisfitkin_ai-makes-devs-10x-more-efficient-bs-activity-7434266290397196288-TTBW/" rel="noopener noreferrer"&gt;Christopher Fitkin&lt;/a&gt; muses:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When building was expensive, that friction acted as a filter (saving companies tons). Now more mediocre ideas get built… and costs tick up, up, up.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I’ve read some effusive posts about vibe coding from former colleagues where the thinly veiled insinuation is, “Just LOOK at everything I can do now that I don’t have developers getting in my way!”&lt;/p&gt;

&lt;p&gt;My god, is that what you thought of me and my teams all this time…? We’re indeed a constraint. But do you see us as &lt;em&gt;impediments&lt;/em&gt; rather than fellow problem solvers? On behalf of every developer I’ve ever worked with, fuck that. Do you think we want to be Debbie Downers when we start chewing on your ask and a million questions and details rise to the surface? Have you ever thought that perhaps it’s the sign of a truly engaged developer who gives a shit? That maybe constraints matter?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The number of times I’ve had jr devs or sr pms basically say how hard could it be? Guys, the code is the easy part. What not to code is the part we get paid for.&lt;/p&gt;
&lt;cite&gt;— &lt;a href="https://www.threads.com/@dave_minnigerode/post/DUyJdJ5EYNX" rel="noopener noreferrer"&gt;dave_minnigerode&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;I’m genuinely saddened to think we’re no longer able to entertain even a tiny bit of restraint and collective self-discipline. It’s a shame that we now see constraints as obstacles that have plagued us instead of conditions that shape better solutions.&lt;/p&gt;

&lt;p&gt;I consider software engineering to be an act of &lt;em&gt;curation&lt;/em&gt;—why not ship one awesome thing rather than 10 tired clichés smothered in defects, security holes, and accessibility problems? I think aviator slash writer &lt;a href="https://en.wikipedia.org/wiki/Antoine_de_Saint-Exup%C3%A9ry" rel="noopener noreferrer"&gt;Antoine de Saint-Exupéry&lt;/a&gt; said it best:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We’re far from perfection; we’re in the midst of a mass delusion. People are &lt;a href="https://craigmod.com/essays/software_bonkers/" rel="noopener noreferrer"&gt;software bonkers&lt;/a&gt; and proud of it. I’ve seen former colleagues now openly ponder if maybe traditional requirements are obsolete. Wut. You’ve heard of requirements, right? You know, also known as &lt;em&gt;constraints&lt;/em&gt;?!?&lt;/p&gt;

&lt;p&gt;Look, I can totally understand how vibe coding can feel empowering. It’s novel tech that seemingly bends space and time to provide you with skills and throughput you’d never have otherwise. That is indeed futuristic!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Everyone who ever cornered a developer at a party to tell them about a great app idea is now able to vibe code it and discover for themselves why the developers they talked to all said the idea wouldn’t work.&lt;/p&gt;
&lt;cite&gt;— &lt;a href="https://bsky.app/profile/glenatron.bsky.social/post/3mexsk57bl22i" rel="noopener noreferrer"&gt;glenatron&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;But a lot of this is brute force marketing and investor razzle dazzle, obscuring the fact that “AI” is patently bad at good ideas. We’re still going to need humans to supply those.&lt;/p&gt;

&lt;p&gt;I’m waiting to see a truly good idea come out of vibe coding. All this limitless firepower and you’ve &lt;a href="https://craigmod.com/essays/software_bonkers/" rel="noopener noreferrer"&gt;reinvented accounting software&lt;/a&gt;. Yay? Replacing the SaaS product that annoys you the most isn’t innovation. Making yet another to-do app isn’t novel. Building data dashboards doesn’t make you a renegade. Generating your own bespoke software is good &lt;em&gt;for you&lt;/em&gt; but it’s very likely not a good idea.&lt;/p&gt;

&lt;p&gt;It’s almost as if the lack of constraints isn’t helpful.&lt;/p&gt;

&lt;p&gt;It’s entirely possible I’m wrong. Maybe what it means to create software is radically changing. I have serious reservations about code quality and &lt;a href="https://nolanlawson.com/2026/02/07/we-mourn-our-craft/" rel="noopener noreferrer"&gt;craft&lt;/a&gt;, but OK. I’ll cross that bridge when I come to it. &lt;a href="https://andrewmurphy.io/blog/the-five-stages-of-losing-our-craft" rel="noopener noreferrer"&gt;I’ll grieve and I’ll adapt&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But other things will change along the way. Right now I’m far more concerned that we’re also fundamentally redefining what it means to just, like, have an idea. This feels like a pretty big prerequisite to all other paradigm shifts that “AI” might signal. I hope we get it right. I don’t have any solutions here. Well, maybe one—I’m sure as hell not going to ask any LLMs for their ideas.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>watercooler</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Wherein I Find Myself Writing About Writing</title>
      <dc:creator>Matt Hogg</dc:creator>
      <pubDate>Sat, 06 Dec 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/mrmatthogg/wherein-i-find-myself-writing-about-writing-37cc</link>
      <guid>https://dev.to/mrmatthogg/wherein-i-find-myself-writing-about-writing-37cc</guid>
      <description>&lt;p&gt;Most everyone finds writing to be challenging (especially those who say they enjoy it). This is because writing is an intentional, thoughtful act. This is as it should be, but “AI” has recently exacerbated the misbelief that writing is simply an output. In fact, it’s a creative process that has merit in and of itself.&lt;/p&gt;

&lt;p&gt;For some time I’ve been sitting on the idea of writing about my personal rules for the essays I publish. I use this space to work out ideas that get stuck in my head a little too long. I tend to focus on the human side of software development, topics that should be evergreen rather than ephemeral or trendy.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Your writing is a reflection of your thinking.&lt;/p&gt;
&lt;cite&gt;– &lt;a href="https://www.threads.com/@rands/post/DL-rNjHS2Qz" rel="noopener noreferrer"&gt;rands&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;I’m kind of breaking that last rule by writing this essay (and &lt;a href="https://matthogg.fyi/wherein-i-find-myself-concerned-about-sparkles/" rel="noopener noreferrer"&gt;it’s not the first time&lt;/a&gt;). As much as I’ve been waiting for the “AI” bubble to pop, I just can’t ignore the damage it’s inflicting before that day arrives.&lt;/p&gt;

&lt;p&gt;It’s precisely because I interpret the act of writing as human thought made tangible, after considerable effort, that I find “AI” so inappropriate and unqualified for the task. &lt;strong&gt;Writing is thinking.&lt;/strong&gt; As long as machines are incapable of the latter they should never do the former.&lt;/p&gt;

&lt;h2&gt;
  
  
  Unfit For The Job
&lt;/h2&gt;

&lt;p&gt;When we understand the mechanics of Large Language Models (LLMs) it’s evident they’re not thinking. LLMs can do certain things well enough, but we have to &lt;a href="https://seldo.com/posts/what-ive-learned-about-writing-ai-apps-so-far" rel="noopener noreferrer"&gt;know their quirks and limitations and tailor our use accordingly&lt;/a&gt;. Laurie Voss reminds us that LLMs are “good at transforming text into less text” which is nowhere near the same thing as writing:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Is what you’re doing taking a large amount of text and asking the LLM to convert it into a smaller amount of text? Then it’s probably going to be great at it. If you’re asking it to convert into a roughly equal amount of text it will be so-so. If you’re asking it to create more text than you gave it, forget about it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There’s no magic here. If I ask an LLM to produce an entire document out of thin air it doesn’t actually do that—it needs excessive prompting and data. The most original writing any LLM has ever produced is &lt;em&gt;still derivative&lt;/em&gt;. By definition an LLM will never produce a &lt;em&gt;wholly original&lt;/em&gt; idea.&lt;/p&gt;

&lt;p&gt;Most of us aren’t inventing cold fusion or faster-than-light travel but I’d like to think at least some of my energy at work is spent on unique or novel ideas. I won’t be getting that from a document my coworker passes to me while mentioning they used “AI” to write it. Laurie Voss again:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;LLMs only know what you tell them. Give it a short prompt and ask for long text and you will get endless waffle, drivel, pointless rambling, and hallucinations. There is no way to get an LLM to perform the thought necessary to write something for you. You have to do the thinking. To get an LLM to write something good you have to give it a prompt so long you might as well have just written the thing yourself.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I’m not abdicating this much responsibility (i.e., thinking) to a machine that’s unfit for the job. I likely wouldn’t do so even if the effort were near zero. As it is I have to micro-manage the LLM, validate its output, and also ignore the ethical, environmental, political or economic impacts of it all? Great tool, thank you!&lt;/p&gt;

&lt;p&gt;We devalue those jobs traditionally centered around writing—copywriters, translators, journalists, and even developers—only to discover later that these tasks require true skill and &lt;a href="https://www.nbcnews.com/tech/tech-news/humans-hired-to-fix-ai-slop-rcna225969" rel="noopener noreferrer"&gt;humans have to be brought in to fix the machines’ work&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Most people assume LLMs to be more sophisticated than they are (i.e., they turn “text into less text”) because our general perception is that writing—almost all writing—is a chore or an end product. We ignore the complexity—and value—of &lt;em&gt;the process itself&lt;/em&gt; and just want a document in our hands as soon as possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Side Effects May Include…
&lt;/h2&gt;

&lt;p&gt;The so-called promise of “AI” really capitalizes on this notion and incentivizes us to optimize for the wrong thing. This is affecting us in many ways that have only recently become apparent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We really don’t have consensus on its utility.&lt;/strong&gt; The most powerful people in society like “AI” more than the rest of us. A recent study by Dayforce found that &lt;a href="https://www.businessinsider.com/executives-adopting-ai-higher-rates-than-workers-research-2025-10" rel="noopener noreferrer"&gt;87% of executives are using “AI” regularly while only 27% of employees are&lt;/a&gt;. Holy shit! We know that executives are somewhat divorced from reality and don’t really understand the day-to-day of their own businesses. &lt;a href="https://hbr.org/2025/11/leaders-assume-employees-are-excited-about-ai-theyre-wrong" rel="noopener noreferrer"&gt;Most leaders aren’t even aware this gap exists&lt;/a&gt; at all!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workslop is on the rise and we hate each other for it.&lt;/strong&gt; The term “workslop” is truly inspired. However, the more we use “AI” to create passable facsimiles of written work &lt;a href="https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity" rel="noopener noreferrer"&gt;the more everyone else has to deal with it&lt;/a&gt;. Productivity suffers and resentment festers. The irony is your workslop is consuming &lt;em&gt;everyone else’s&lt;/em&gt; time and energy. Even worse, if you do this to your coworkers they’ll struggle with how to tell you to knock it off. It erodes mutual trust and respect. Workers haven’t been this annoyed with each other since we started reheating fish in the company microwave for lunch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“AI” is altering how we communicate.&lt;/strong&gt; When it comes to language, we’re screwed no matter what. We mimic an LLM’s style (accidentally or not) just because it’s part of the zeitgeist now. We avoid the use of certain punctuation or turns of phrase to distinguish ourselves from the machines. Either way, we’ve ceded territory that belonged to us. I, for one, will have my precious em dashes and Oxford commas pried from my cold, dead hands.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Only AI uses emdash”&lt;/p&gt;
&lt;p&gt;Sorry babe, some of us are just literate&lt;/p&gt;
&lt;cite&gt;– &lt;a href="https://www.threads.com/@ewdatsgross/post/DRyDxHIDg5v" rel="noopener noreferrer"&gt;ewdatsgross&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;It’s fucking lazy.&lt;/strong&gt; I’m sorry, it just is. I can’t tiptoe around this point. There’s the &lt;a href="https://gobetweenlab.com/posts/opinion-zone/laziness/" rel="noopener noreferrer"&gt;good kind of lazy&lt;/a&gt; and there’s just plain ol’ lazy. Know the difference.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“AI” makes us feel inadequate.&lt;/strong&gt; When “AI” appears in our software toolbars and panels there’s a heavy insinuation that I don’t know how to do my fucking job. Nowadays I’m not even granted the dignity of a blank canvas. A new document comes pre-loaded with “AI” calls to action (e.g., “Generate document”, “Help me write…”, &lt;a href="https://matthogg.fyi/wherein-i-find-myself-concerned-about-sparkles/" rel="noopener noreferrer"&gt;sparkles everywhere&lt;/a&gt;). The iconoclastic designer &lt;a href="https://www.mikemonteiro.com/" rel="noopener noreferrer"&gt;Mike Monteiro&lt;/a&gt; examines this (and more) with zinger after zinger in his talk &lt;a href="https://www.youtube.com/watch?v=zH2dFXDMwe4" rel="noopener noreferrer"&gt;“How to draw an orange”&lt;/a&gt;…&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/zH2dFXDMwe4"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;Every single human being has an intrinsic capacity to create—write, draw, sing, dance, glue things to other things—from a very young age. Sadly, a great many people “grow up” and lose this over time. It never goes away entirely, however, and we can reclaim it. It’s just hard-earned.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing Friction Can Be Cool, Man
&lt;/h2&gt;

&lt;p&gt;I don’t always know where I stand, down to the last detail, on appropriate “AI” use but I agree with this sentiment from &lt;a href="https://frankchimero.com/blog/2025/beyond-the-machine/" rel="noopener noreferrer"&gt;Frank Chimero&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Where do I stand in relation to the machine—above it, beside it, under it? Each position carries a different kind of power dynamic. To be above is to steer, beside is to collaborate, below is to serve.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Well, if we want to be “above it” that doesn’t come for free. Any degree of creative autonomy (not just writing) comes at some cost but that’s the point. We have to work those muscles continually. We have to do some hard work. A colleague of mine, Jenny, believes we should be &lt;a href="https://phirephoenix.com/blog/2025-10-11/friction" rel="noopener noreferrer"&gt;choosing friction&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you can push a button and get a screenplay or a symphony or a painting at the cost of a nominal subscription fee that does not begin to cover the true expense of this technology to the world, if you did not have to at least subconsciously face your mortality and decide that the pursuit of this piece of art is what you want to spend your finite time on, if your desire to speak is not strong enough to overcome the friction of learning how to speak, is it something that needed to be said?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Bravo! Writing is a deliberate decision that reflects and refines our thinking. There’s no optimization for that. Even that work email you’re composing presumably needs to be sent, and therefore friction is a prerequisite. But the process itself has value because proper space was reserved to &lt;em&gt;just think&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Neuroscientist here… creativity is a process. A process. A FUCKING PROCESS. Not an output.&lt;/p&gt;
&lt;p&gt;Nobody is gatekeeping the process – AI simply invites you to skip to the end, to the output. Which, as mentioned, is not creativity.&lt;/p&gt;
&lt;cite&gt;– &lt;a href="https://www.threads.com/@drrachelbarr/post/DROQfSojFaS" rel="noopener noreferrer"&gt;drrachelbarr&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;“AI” can really muddy the waters. An LLM can’t think and therefore it’s not a tool that’ll encourage us to think for ourselves, at least not by default. Still, the grifters will pretend otherwise. In the meantime we’re left &lt;a href="https://www.nplusonemag.com/issue-51/the-intellectual-situation/large-language-muddle/" rel="noopener noreferrer"&gt;wringing our hands and having multiple existential crises&lt;/a&gt; about it. Cultural norms and beliefs are forming around this, slowly, and not all of them are favorable. We enjoy calling it “slop” for a reason! This feels to me like a very good opportunity to make “AI” utterly gross and uncool.&lt;/p&gt;

&lt;h2&gt;
  
  
  Do Not Fuck With Miyazaki
&lt;/h2&gt;

&lt;p&gt;We can take our cue from the great director and animator &lt;a href="https://en.wikipedia.org/wiki/Hayao_Miyazaki" rel="noopener noreferrer"&gt;Hayao Miyazaki&lt;/a&gt;. In March 2025, OpenAI boasted about its new image generation capabilities by encouraging people to turn their selfies into anime characters in Miyazaki’s famous style. Everybody did it, had a chuckle, and it was offensive.&lt;/p&gt;

&lt;p&gt;Miyazaki’s animation is done by hand. His films require hundreds of thousands of drawings each. He wears a goddamn apron to work! The results are &lt;em&gt;magical&lt;/em&gt; and &lt;a href="https://www.youtube.com/watch?v=zvY-SlHuDSo" rel="noopener noreferrer"&gt;this is precisely what makes him one of our greatest storytellers&lt;/a&gt;. Of course, “AI” devoured his entire body of work and now you can skip over all that fluff. Output over process, right? This is the kind of shortcut that Miyazaki has spent his entire career resisting on principle.&lt;/p&gt;

&lt;p&gt;Some years ago a few young upstarts showed Miyazaki an AI model that could automate movement for computer-generated figures. There’s a deadly pause before he responds to the demo and absolutely &lt;em&gt;murders them&lt;/em&gt; with his words. You can watch for yourself—here’s the infamous video.&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/ngZ0K3lWKRc"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;“Well, we would like to build machine that can draw pictures like humans do.” I’m sorry, what was that? Did they really say that in front of the king himself?!? Get the fuck outta here.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Only Humans Can Do
&lt;/h2&gt;

&lt;p&gt;Another director, Bong Joon Ho, &lt;a href="https://deadline.com/2025/11/bong-joon-ho-jceline-song-jenna-ortega-ai-marrakech-1236630981/" rel="noopener noreferrer"&gt;recently offered another very direct opinion&lt;/a&gt; that, um, straddles the line:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;My official answer is, AI is good because it’s the very beginning of the human race finally seriously thinking about what only humans can do. But my personal answer is, I’m going to organize a military squad, and their mission is to destroy AI.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I’m not sure about the military squad but I agree that the silver lining could be a greater awareness of our human potential. I’m hopeful but we have to outlast the CEOs, grifters, boosters, and cheapskates to get there.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“AI is clearly sticking around so you have to get used to it” wrong. I don’t have to get used to shit. I am a practiced hater and I can keep this going for decades if I am required to&lt;/p&gt;
&lt;cite&gt;– &lt;a href="https://bsky.app/profile/cuteosphere.bsky.social/post/3m6xlho5rnk2k" rel="noopener noreferrer"&gt;cuteosphere&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;Contrary to what my chosen field and university degree might imply, I very much prefer working with people over machines. In my experience the most challenging part of creativity in group settings is plain ol’ communication, and much of that is written. Let’s avoid shortcuts in our writing that will do more harm than good.&lt;/p&gt;

&lt;p&gt;As for my personal writing on this site you should consider this essay, and many more to come, as my act of protest against the “AI” hype. I will happily labor over each word, sentence, and paragraph. Is it hard work? Yep. Is it expedient? Nope. Is it worthwhile? Maybe don’t ask anybody on Hacker News or Reddit. But I’m not writing for them, am I? I’m writing for me and my very human mind.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>writing</category>
      <category>motivation</category>
    </item>
    <item>
      <title>My Working Principles For Managing Ego, Empathy, And Humility</title>
      <dc:creator>Matt Hogg</dc:creator>
      <pubDate>Sun, 23 Nov 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/mrmatthogg/my-working-principles-for-managing-ego-empathy-and-humility-5b2k</link>
      <guid>https://dev.to/mrmatthogg/my-working-principles-for-managing-ego-empathy-and-humility-5b2k</guid>
      <description>&lt;p&gt;&lt;em&gt;This essay is half of a 2-part series about ego, empathy, and humility for developers and technical leaders. For thoughts on why these principles matter at work, you might also enjoy &lt;a href="https://matthogg.fyi/a-unified-theory-of-ego-empathy-and-humility-at-work/" rel="noopener noreferrer"&gt;A Unified Theory Of Ego, Empathy, And Humility At Work&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The ego can be an amorphous thing. Empathy and humility are lofty ideals. So, it can seem daunting to figure out how to manage them at work. The good news is that there are many tools in our toolkit to choose from and iterate on. The payoff, big or small, will be immediate.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://matthogg.fyi/a-unified-theory-of-ego-empathy-and-humility-at-work/" rel="noopener noreferrer"&gt;my previous essay on the subject&lt;/a&gt; I kept it theoretical. I made the point that our egos are burdens while empathy and humility are &lt;strong&gt;tools for the pursuit of information in the name of solving problems as a group&lt;/strong&gt;. In this essay I’ll outline the principles that have worked for me when following this premise.&lt;/p&gt;

&lt;p&gt;This advice isn’t just for coping with feelings or emotional stress. While that alone would be nice enough, these principles are also &lt;em&gt;actionable rules for effectiveness&lt;/em&gt; through managing your ego and focusing on empathy and humility. The principles are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Questions before statements&lt;/li&gt;
&lt;li&gt;Strong opinions weakly held&lt;/li&gt;
&lt;li&gt;Be your own &lt;del&gt;worst&lt;/del&gt; first critic&lt;/li&gt;
&lt;li&gt;Act like an apprentice&lt;/li&gt;
&lt;li&gt;We are not alone&lt;/li&gt;
&lt;li&gt;Accentuate the positive&lt;/li&gt;
&lt;li&gt;Always take the high road&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Questions Before Statements
&lt;/h2&gt;

&lt;p&gt;Starting with a question instead of a statement will set the tone for the entire discussion that follows. The former will get you key information from the outset while the latter has the potential to put people on the defensive.&lt;/p&gt;

&lt;p&gt;I like to open as many conversations as I can with a question that essentially asks, “What am I missing here?” I can eventually make my point but it will be better informed when I do. And that’s if I still need to make my point—the right question will often satisfy both sides and negate the need entirely. And, no, leading or rhetorical questions don’t count! Try to ask genuine, open-ended (i.e., not yes/no) questions.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;someone once told me “emotional intelligence is just the art of not saying the first thing your ego wants to” and honestly that just rewired my brain.&lt;/p&gt;
&lt;cite&gt;– &lt;a href="https://www.threads.com/@simiianand/post/DQzsArPkkW_" rel="noopener noreferrer"&gt;simiianand&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;When doing code review, for instance, a system like &lt;a href="https://conventionalcomments.org/" rel="noopener noreferrer"&gt;Conventional Comments&lt;/a&gt; favors this approach and makes it a truly collaborative exercise. The pull request author has slightly more context than the reviewers (i.e., it’s their work) so ask questions instead of making dogmatic statements and blocking the pull request outright. I’ve found this so effective that I’ve even used similar “prefixes” when speaking face-to-face.&lt;/p&gt;
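&lt;p&gt;For illustration, here is what a few review comments in the Conventional Comments style might look like. The format prefixes each comment with a label (e.g., &lt;code&gt;question&lt;/code&gt;, &lt;code&gt;suggestion&lt;/code&gt;, &lt;code&gt;praise&lt;/code&gt;) and optional decorations like &lt;code&gt;(non-blocking)&lt;/code&gt;; the specific code being discussed here is hypothetical:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;question: What led us to sort the list before filtering it?

suggestion (non-blocking): We could filter first and skip the sort
entirely. Curious what you think; no need to change it for this PR.

praise: The test coverage on the edge cases here is great.
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Notice how each comment opens with a question or an invitation rather than a verdict, which keeps the author’s defenses down while still surfacing the information the reviewer needs.&lt;/p&gt;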

&lt;p&gt;This approach also helps us keep an open mind when &lt;a href="https://matthogg.fyi/legacy-code-may-be-the-friend-we-havent-met-yet/" rel="noopener noreferrer"&gt;working with legacy code&lt;/a&gt;. There’s a big difference in utility between complaining, “This code is shit!” versus asking an old-timer, “What led us to implement it this way?”&lt;/p&gt;

&lt;p&gt;This is adjacent to the idea of &lt;a href="https://en.wikipedia.org/wiki/Active_listening" rel="noopener noreferrer"&gt;active listening&lt;/a&gt; where the goal is “listening to understand” rather than jumping in to make your point. When you do get a chance to speak, all the better if your response is a &lt;em&gt;clarifying question&lt;/em&gt; rather than a counterpoint.&lt;/p&gt;

&lt;h2&gt;Strong Opinions Weakly Held&lt;/h2&gt;

&lt;p&gt;It’s much easier to default to questions before statements when you have &lt;a href="https://web.archive.org/web/20130626002837/https://saffo.com/02008/07/26/strong-opinions-weakly-held/" rel="noopener noreferrer"&gt;strong opinions weakly held&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This concept, first described by Paul Saffo, is a heuristic for turning an imperfect or incomplete conclusion into something better. We convert uncertainty into certainty but with less bias. If you accept the premise from &lt;a href="https://matthogg.fyi/a-unified-theory-of-ego-empathy-and-humility-at-work/" rel="noopener noreferrer"&gt;my previous essay&lt;/a&gt; that everything is bigger than you then this is the uncertainty you’re trying to root out.&lt;/p&gt;

&lt;p&gt;To do so we must genuinely challenge our own assumptions. No opinion should be so sacred that it can’t be amended when we uncover new information. When we actively try to prove ourselves wrong we can prevent the tight coupling of our identities with our work (a.k.a. ego). Code review is the perfect opportunity to practice this—you’re not defined by your code. Code review helps you become a better programmer, if you let it.&lt;/p&gt;

&lt;p&gt;At the end of the day we’re all looking for the best ideas. That’s not likely to come entirely from within every single time, is it?&lt;/p&gt;

&lt;h2&gt;Be Your Own &lt;del&gt;Worst&lt;/del&gt; First Critic&lt;/h2&gt;

&lt;p&gt;Saffo actively looked for moments to try and prove himself wrong and make his stance more robust. It’s a good idea to be the &lt;em&gt;first critic&lt;/em&gt; of your own output—emails, meetings, documentation, pull requests, and so on. You don’t have to be cruel to yourself, but if you poke just a little bit your communications become significantly more efficient.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Was chatting with a Senior Staff Eng[ineer] about how to become better at writing. We agreed that you need to be your own critic first to write well. Again, I saw a similar opinion when chatting with an ex-Meta Distinguished Engineer … He said, “To write well, you need to read well.”&lt;/p&gt;
&lt;cite&gt;– &lt;a href="https://www.threads.com/@ryanlpeterman/post/DLU-9T1MSpq" rel="noopener noreferrer"&gt;ryanlpeterman&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;Before you smash the “send” button ask yourself, “Is what I’ve typed out true?” It’s fine to be subjective so long as all parties recognize it. Otherwise, a little due diligence is in order. Checking yourself before someone else does can greatly improve—or even prevent—your message. To this end I personally make it a habit to draft most of what I write “offline” first, including seemingly inconsequential Slack or Teams messages.&lt;/p&gt;

&lt;p&gt;This is harder to do for verbal communication but it’s still worthwhile. In meetings you can take notes while someone else is speaking and try to formulate your thoughts. On video calls you can open a new browser tab and quickly look something up. And you can always pause a moment before answering a question—the silence won’t kill you.&lt;/p&gt;

&lt;p&gt;Besides fact-checking yourself it’s also useful to consider what comes next. Whatever you send or say, what’s the most likely response you’ll get? Typically it’ll be a follow-up question or some sort of task. Try to anticipate this, revise your message to account for it, and cut out at least one back-and-forth. One of these statements is more empathic than the other:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;“We have a problem!!!”&lt;/li&gt;
&lt;li&gt;“We have a problem, but we’re going to…”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Lastly, I can’t stress enough that “AI” will not adequately assess or correct your writing for you. This is a &lt;em&gt;purposeful, thoughtful act&lt;/em&gt; and nothing is gained by farming it out to a machine that lacks context or emotion.&lt;/p&gt;

&lt;h2&gt;Act Like An Apprentice&lt;/h2&gt;

&lt;p&gt;Most great developers I’ve met share a common trait in that they love learning new things. Whether it’s tinkering in their off hours or rising to a new challenge at work, it demonstrates humility. I think we should be transparent about this with our coworkers and act like apprentices out in the open.&lt;/p&gt;

&lt;p&gt;There will always be something we don’t know. Our skill set is never perfectly complete. Something that’s trendy today could be obsolete tomorrow. It’s best to be candid about it. Others will appreciate your curiosity and might even help you on your particular learning journey.&lt;/p&gt;

&lt;p&gt;You can act like an apprentice no matter how senior you are, too. There’s a lot to learn from peers and those junior to you, no matter how much experience you think you have. For example, I perfected &lt;a href="https://matthogg.fyi/a-technical-interview-doesnt-have-to-suck/" rel="noopener noreferrer"&gt;my hiring philosophy&lt;/a&gt; quite late in my career, based on the insights of great coworkers. They were simply better at it than me and my eyes were open enough to recognize it.&lt;/p&gt;

&lt;p&gt;The pursuit of learning isn’t just about skills. You should be actively &lt;em&gt;learning about yourself&lt;/em&gt;, too. Seek out constructive feedback and look at criticism as an opportunity to improve. We learn more when we’re wrong (and told about it) than when we think we’re right.&lt;/p&gt;

&lt;h2&gt;We Are Not Alone&lt;/h2&gt;

&lt;p&gt;The whole point of practicing empathy and humility is that you don’t work alone in a vacuum.&lt;/p&gt;

&lt;p&gt;This means that all of us are leading each other by example &lt;em&gt;whether we realize it or not&lt;/em&gt;. So, it’s best to model the behavior we’d like to see and do so as intentionally as possible. This applies to leaders and non-leaders alike. The more we show ownership, take responsibility, admit mistakes, ask for help, or offer support the more others will do the same.&lt;/p&gt;

&lt;p&gt;The result will be a &lt;a href="https://en.wikipedia.org/wiki/Prosocial_behavior" rel="noopener noreferrer"&gt;prosocial&lt;/a&gt; environment that fosters psychological safety. Everyone feels comfortable participating, asking questions, and putting forth ideas. We don’t do this just to be nice. This is how we effectively solve problems in group settings. I’ve seen firsthand the stark difference between my department, where people felt safe with each other, and other departments where people absolutely did not. More importantly, my people could see the difference, too.&lt;/p&gt;

&lt;p&gt;Once you have this safety net, you must use it! &lt;a href="https://dev.to/adegiamb/youre-not-alone-you-have-an-army-328e"&gt;As my friend Anthony says&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You’re not alone. You have teammates, leaders, tools, and experts—an entire army ready to move with you.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Leaders need to do a little more when modeling empathy and humility. Make sure you’re the one speaking the least. Show vulnerability. Be self-deprecating. Trust your people. &lt;a href="https://conventionalcomments.org/communication/" rel="noopener noreferrer"&gt;Replace “you” with “we”&lt;/a&gt;. Foster inclusion by looking for underrepresented voices and make sure everyone at the table has a chance to be heard. Lastly, find peers at your level that you can commiserate with when you need it.&lt;/p&gt;

&lt;h2&gt;Accentuate The Positive&lt;/h2&gt;

&lt;p&gt;Managing group dynamics isn’t straightforward. This is especially difficult for a group that may be challenged, stressed, or otherwise under pressure. When we’re heads down trying to get work done it’s very easy to focus on what’s going wrong and not on what’s gone right.&lt;/p&gt;

&lt;p&gt;Make it a point to pause often and celebrate victories together, no matter how small they may seem. Often you’ll have to remind somebody of their own accomplishments and break them out of their tunnel vision. A little recognition can go a long way.&lt;/p&gt;

&lt;p&gt;This isn’t something we should rely on leaders alone to do. Anyone can recognize a win when they see it, even if it’s strictly from their own point of view. These moments should be celebrated, too. If anything, gratitude might even land better coming from a peer than from a leader!&lt;/p&gt;

&lt;p&gt;You shouldn’t ignore setbacks or failures, of course. They need to be acknowledged openly so that lessons can be taken from them. That in itself is a positive thing.&lt;/p&gt;

&lt;p&gt;The golden rule is to &lt;a href="https://www.victory-strategies.com/wisdom/praiseinpublic" rel="noopener noreferrer"&gt;praise in public but coach in private&lt;/a&gt;. Positive comments should be frequent and seen by the entire group while criticism should be held for 1-on-1 settings. Even well-meaning feedback could be embarrassing if it’s given in front of others, and it won’t stick. If you actually want somebody to correct for something then coach them in private.&lt;/p&gt;

&lt;p&gt;Research suggests &lt;a href="https://dev.to/chenmike/building-empathy-as-a-software-developer-3leb"&gt;the ideal ratio of positive comments to criticism should be 5-to-1&lt;/a&gt;. You don’t have to walk around waving pompoms all day, but it’s good to be mindful of this balance. Again, it’s not just to be nice. When it comes time to give criticism it’ll be better received because you’ve already worked to build trust and respect with them. An emphasis on the positive mitigates the negative before it arrives.&lt;/p&gt;

&lt;h2&gt;Always Take The High Road&lt;/h2&gt;

&lt;p&gt;The sad truth about this essay is that there’s no guarantee that anybody else will play ball with you. We can’t presume that anyone we work with is similarly interested in acting with empathy or humility. &lt;em&gt;You must put away your ego even if nobody else is doing the same&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This means avoiding politics, pettiness, excessive gossip, or improper backchannels. When someone is throwing their ego around, you’ll be tempted to match them and fight fire with fire. Do not engage! Do not stoop to their level! Never take the bait! Hold the line! &lt;em&gt;Always take the high road!&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“It costs nothing to be kind!”&lt;/p&gt;
&lt;p&gt;I’m sorry but what are you talking about, it’s the most expensive and dangerous thing in the world. That’s why people who remain kind in all circumstances are such heroes, they’re giving of themselves and voluntarily making themselves vulnerable.&lt;/p&gt;
&lt;cite&gt;– &lt;a href="https://bsky.app/profile/jasonkpargin.bsky.social/post/3m3y5izuc3k2b" rel="noopener noreferrer"&gt;jasonkpargin&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;I don’t give this advice lightly and I can’t sugarcoat it—&lt;strong&gt;it will be exhausting&lt;/strong&gt;—but it’s worth it in the end. It will sometimes feel unfair in practice but empathy and humility always win. Use them even when nobody else is—&lt;em&gt;especially&lt;/em&gt; when nobody else is.&lt;/p&gt;




&lt;p&gt;I’ve evolved these tips, tricks, and habits over many years. My progression has been so gradual that it wasn’t until recently that I fully realized there might be a &lt;a href="https://matthogg.fyi/a-unified-theory-of-ego-empathy-and-humility-at-work/" rel="noopener noreferrer"&gt;unified theory&lt;/a&gt; hiding underneath my hard-earned habits.&lt;/p&gt;

&lt;p&gt;I hope somebody finds this insightful in some way, but don’t take my word for it. I’m humble enough to recognize that what’s worked for me may not work for you. That’s the beauty of it. Just start by recognizing that everything is bigger than you and we’re all trying to solve problems together. See what techniques you can then come up with to leverage that awareness.&lt;/p&gt;

&lt;p&gt;Don’t be afraid to try something or make mistakes. Despite all of the above, I still get it wrong sometimes. I’ve misspoken, had tiny outbursts, and overlooked a detail someone was putting right in front of my face. It’s OK. Humans are not perfectly known quantities, and that includes ourselves.&lt;/p&gt;

&lt;p&gt;Trust in your genuine attempts to improve yourself, and trust that others around you will recognize the same. My final piece of advice, therefore, is to make sure you extend some empathy to yourself, too. Do your best, and good luck!&lt;/p&gt;

</description>
      <category>management</category>
      <category>career</category>
      <category>motivation</category>
      <category>leadership</category>
    </item>
    <item>
      <title>A Unified Theory Of Ego, Empathy, And Humility At Work</title>
      <dc:creator>Matt Hogg</dc:creator>
      <pubDate>Sun, 23 Nov 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/mrmatthogg/a-unified-theory-of-ego-empathy-and-humility-at-work-50l1</link>
      <guid>https://dev.to/mrmatthogg/a-unified-theory-of-ego-empathy-and-humility-at-work-50l1</guid>
      <description>&lt;p&gt;&lt;em&gt;This essay is half of a 2-part series about ego, empathy, and humility for developers and technical leaders. For advice on how to execute the ideas in this essay, you might also enjoy &lt;a href="https://matthogg.fyi/my-working-principles-for-managing-ego-empathy-and-humility/" rel="noopener noreferrer"&gt;My Working Principles For Managing Ego, Empathy, And Humility&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In our daily lives empathy and humility are obvious virtues we aspire to. They keep our egos in check. Less obvious is that they’re practical skills in the workplace, too. I think, for developers and technical leaders in particular, that the absence of ego is the best way to further our careers and do great work.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://www.psychologytoday.com/us/blog/theory-of-knowledge/202105/what-is-the-ego" rel="noopener noreferrer"&gt;the simplest of terms&lt;/a&gt; the ego is the characteristic of personhood that enables us to practice self-reflection, self-awareness, and accountability for the actions or decisions we take.&lt;/p&gt;

&lt;p&gt;However, the ego also motivates us to reframe our perception of the world in whatever way keeps us centered in it. Each of us is perpetually driven to justify our place in the world. This &lt;em&gt;constant self-justification&lt;/em&gt; is like an engine that idles for our entire lives, and it requires constant fine-tuning. When it runs amok this is what we call a “big” ego.&lt;/p&gt;

&lt;h2&gt;Breaking News! Developers Have Egos!&lt;/h2&gt;

&lt;p&gt;I’m not thinking only of the 10x engineer stereotype, although I’ve worked with such folks in the past. Ego is more nuanced than that. Besides the most arrogant developer in the room throwing their weight around, our egos manifest in hundreds of ways that are much harder to detect.&lt;/p&gt;

&lt;p&gt;As developers we’re more susceptible to letting our egos run free. The nature of our work is so technical that to others it can seem obscure, arcane, or even magical. Sometimes we don’t do enough to actively dispel that notion—and just like that half the work of self-justification is already done for us.&lt;/p&gt;

&lt;p&gt;Very often it’s not intentional. The simplest example is the overuse of jargon and acronyms. We all do it, but as &lt;a href="https://clearleft.com/thinking/letters-of-exclusion" rel="noopener noreferrer"&gt;Jeremy Keith explains&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Still, I get why initialisms run rampant in technical discussions. You can be sure that most discussions of particle physics would be incomprehensible to outsiders, not necessarily because of the concepts, but because of the terminology.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Simply mashing a few letters together can be empowering for ourselves while being exclusionary for others. It’s an artifact—albeit a small one—of our egos. We know what the technobabble means. Our justified place in the universe is maintained.&lt;/p&gt;

&lt;p&gt;Sometimes we express our egos more deliberately. Developers have a clear tendency towards gatekeeping. For most, it’s an honest mistake. There’s a fine line between holding others to a certain expectation versus actively keeping people on the outside. When we see ourselves doing this we can correct it easily enough.&lt;/p&gt;

&lt;p&gt;Sadly there are developers who seemingly like to gatekeep. They get to feel like wizards in their towers with their dusty books and potions. But, it’s actually self-limiting. Gatekeeping by definition means you’re fixed in place and never moving, standing guard for eternity.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Gatekeeping is always weak, unless you’ve been hired to literally guard a gate.&lt;/p&gt;
&lt;cite&gt;– &lt;a href="https://twitter.com/shanselman/status/1361469401317269505" rel="noopener noreferrer"&gt;shanselman&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;My point is our egos can “leak” in so many ways that it takes diligence to catch it, let alone correct it. The following is a short, incomplete list of typical statements we as developers might say or hear at work. If you parse them more precisely, each one is an attempt at self-justification:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“That’s the way we’ve always done it.”&lt;/li&gt;
&lt;li&gt;“It’s not that complicated! You just…”&lt;/li&gt;
&lt;li&gt;“Yeah, I should be able to finish this in a day.”&lt;/li&gt;
&lt;li&gt;“This legacy codebase is an absolute disaster.”&lt;/li&gt;
&lt;li&gt;“Assign it to me. Nobody else will be able to fix it.”&lt;/li&gt;
&lt;li&gt;“You can’t be a senior dev. You don’t know anything about…”&lt;/li&gt;
&lt;li&gt;“Ugh, our morning standup is so useless.”&lt;/li&gt;
&lt;li&gt;“This feature is too important to assign to the junior dev.”&lt;/li&gt;
&lt;li&gt;“We should start using this new tool in our pipeline.”&lt;/li&gt;
&lt;li&gt;“We should never use that new tool in our pipeline.”&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Everything Is Bigger Than You&lt;/h2&gt;

&lt;p&gt;The ego is concerned with the self but very easily becomes something harmful in the absence of new information or context. Indeed, the ego nudges us to self-justify so much that one could argue it actively &lt;em&gt;resists&lt;/em&gt; new information when left unchecked.&lt;/p&gt;

&lt;p&gt;You may have read one of the example statements above with some familiarity and thought, “But what if I’m right?”&lt;/p&gt;

&lt;p&gt;To which I’d say: OK, but should that be your default stance? Why might you feel the need to immediately start a conversation with a self-justification? There are ways to adjust our approach, make our points, and accept new information all at the same time.&lt;/p&gt;

&lt;p&gt;In any interaction—be it a meeting, Slack thread, or water cooler conversation—we must remember that &lt;strong&gt;the matter at hand is bigger than us in ways we don’t yet understand&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This is a simple enough heuristic but we need the skills to gain that understanding. We need empathy and humility. Empathy is the ability to &lt;a href="https://www.psychologytoday.com/us/blog/speaking-in-tongues/202012/what-is-empathy" rel="noopener noreferrer"&gt;recognize and comprehend what someone else is thinking or feeling&lt;/a&gt;. Humility is a resistance to our &lt;a href="https://www.psychologytoday.com/us/blog/brainsnacks/201501/the-paradoxical-power-of-humility" rel="noopener noreferrer"&gt;“competitive reflexes”&lt;/a&gt; through the practice of emotional neutrality and vulnerability. Both serve to counteract the ego.&lt;/p&gt;

&lt;p&gt;To make these concepts more actionable I find it simpler to define them in terms of the purposes they serve. Specifically…&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Empathy is how we &lt;em&gt;gather new information&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Humility is how we &lt;em&gt;allow information to change our behavior&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This framing also helps remind us what empathy and humility &lt;em&gt;are not&lt;/em&gt;. It’s not about putting yourself in another’s shoes, as the saying goes. It’s not about being submissive or a pushover. It’s not about altruism or self-sacrifice. We can easily practice empathy and humility without it ever being at our own expense.&lt;/p&gt;

&lt;h2&gt;The Pursuit Of Information&lt;/h2&gt;

&lt;p&gt;I don’t know about you but I go to work to solve problems, be creative, and build shit. I can’t think of a single instance where an unruly ego solved anything I’ve worked on. Ego just makes an existing challenge worse. Solutions require information I don’t have yet.&lt;/p&gt;

&lt;p&gt;Empathy and humility are usually top of mind during situations of pain or distress, but they’re really aspects of emotional intelligence that should be activated at all times. Once you adjust your mindset to treat them as basic tools for the &lt;em&gt;pursuit of information&lt;/em&gt; you’ll see opportunities to leverage them everywhere.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Early in my career, I thought my job was to find bugs and enforce standards. But when I started asking, ‘What’s blocking you from delivering quality?’ everything changed.&lt;/p&gt;
&lt;cite&gt;– &lt;a href="https://bsky.app/profile/kato-coaching.com/post/3lhsq42f2jt2a" rel="noopener noreferrer"&gt;kato-coaching.com&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;Developers can apply this mindset with almost anybody they come into contact with. Fellow developers, naturally. But also less technical teammates (e.g., QAs, designers, product owners, stakeholders) who have their own unique skills and context that our success depends on. And of course our users should be at the center of every problem we’re working to solve. Lastly, even executives and upper management have some insight to offer if you dare (&lt;a href="https://matthogg.fyi/how-to-care-about-your-job-when-it-doesnt-care-about-you/" rel="noopener noreferrer"&gt;but only up to a certain point&lt;/a&gt;).&lt;/p&gt;

&lt;h2&gt;“Be Curious, Not Judgmental”&lt;/h2&gt;

&lt;p&gt;I’ve been waiting years for a chance to work &lt;a href="https://en.wikipedia.org/wiki/Ted_Lasso" rel="noopener noreferrer"&gt;Ted Lasso&lt;/a&gt; into one of my essays. Today’s the day, readers.&lt;/p&gt;

&lt;p&gt;The titular character is such an archetype for leadership that my jaw hit the floor when I first watched the show. The example Ted sets has spawned &lt;a href="https://www.google.com/search?q=ted+lasso+leadership" rel="noopener noreferrer"&gt;countless think pieces about leadership and management&lt;/a&gt;. Suffice it to say he exhibits all of my principles over the series’ 34 episodes. He’s empathy and humility sporting a mustache. He’s the absence of ego personified.&lt;/p&gt;

&lt;p&gt;I highly recommend watching the show, but to get a taste, this 5-minute clip is worth your time. This is the famous “darts scene”…&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://www.youtube.com/embed/3S16b-x5mRA"&gt;&lt;/iframe&gt;&lt;/p&gt;

&lt;p&gt;There’s a common and derisive attitude that qualities like empathy or humility are signs of weakness. You have to get all up in your feelings. Ew! But they require enormous reserves of strength, patience, and determination. It’s those who follow their egos who are weak.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Empathy is exhausting because you have to feel in both directions.&lt;/p&gt;
&lt;cite&gt;– &lt;a href="https://bsky.app/profile/rands.bsky.social/post/3m54opmkby22z" rel="noopener noreferrer"&gt;rands&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;Letting your ego take control is the easiest thing in the world. Just ask any toddler throwing a temper tantrum. Resisting those impulses and remaining calm, on the other hand, has been a virtue humanity has aspired to for thousands of years. As the Roman emperor and &lt;a href="https://en.wikipedia.org/wiki/Stoicism" rel="noopener noreferrer"&gt;Stoic&lt;/a&gt; philosopher &lt;a href="https://en.wikipedia.org/wiki/Marcus_Aurelius" rel="noopener noreferrer"&gt;Marcus Aurelius&lt;/a&gt; wrote: “The nearer a man comes to a calm mind, the closer he is to strength.”&lt;/p&gt;

&lt;h2&gt;You’re Neither Ted Lasso Nor A Roman Emperor&lt;/h2&gt;

&lt;p&gt;The practice of empathy, humility, and keeping your ego in check will &lt;strong&gt;test you daily&lt;/strong&gt;. The feedback I’ve received the most from my coworkers is that I’m extraordinarily calm and even-keeled in any situation—even situations where I’d be right to freak out.&lt;/p&gt;

&lt;p&gt;Is that just naturally my personality? Maybe in part, but &lt;em&gt;remaining calm is a choice&lt;/em&gt;. I’m actively choosing to favor solutions over my own ego. To my colleagues past and present I confess to you now that any time you’ve seen me calm, cool, and collected I was very likely &lt;strong&gt;internally screaming&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If this sounds like a lot of work you might be wondering if it’s worth it. I think it is. At the very least your coworkers and colleagues will like you better. That’s no small thing.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;That people like working with you is a skill I see many technical people ignore yet it is often the biggest thing holding them back in their careers.&lt;/p&gt;
&lt;cite&gt;– &lt;a href="https://www.threads.com/@carnage4life/post/DQSCcRiEjLg" rel="noopener noreferrer"&gt;carnage4life&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;In all seriousness, the positive feedback I receive most often about the developers I manage comes when they’ve demonstrated empathy and humility while dialing back their egos. This is because they’re people we can work with—literally. Nobody wants to work with a narcissist or a rock star. Nobody is materially impressed by &lt;a href="https://matthogg.fyi/developers-your-job-is-not-to-write-code/" rel="noopener noreferrer"&gt;how many lines of code we wrote&lt;/a&gt;, or how fast we wrote it.&lt;/p&gt;

&lt;p&gt;When people want to work with us—or even look forward to it—that means we have trust and respect. We’ll be on proper footing for working effectively as a group to solve problems. For developers this looks like coaching a junior developer, hopping on a quick call to pair with somebody, or understanding the business value of the next backlog item. For leaders this looks like people who feel empowered to do their work, who can proactively identify issues, or who can rally and adapt when circumstances change.&lt;/p&gt;

&lt;p&gt;Anybody can do this! I can’t think of any other career advice that’s as universal as empathy and humility. Everybody is capable of, at any point in their lives, small yet impactful improvements.&lt;/p&gt;

&lt;p&gt;So remember—watch your ego and look for opportunities to leverage empathy and humility in the pursuit of information so that you can solve problems together.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://matthogg.fyi/my-working-principles-for-managing-ego-empathy-and-humility/" rel="noopener noreferrer"&gt;my next essay on this subject&lt;/a&gt; I’ll get into the practical. What I like about this advice is that, while there’s much we can do, we don’t have to do it all to see some benefit. We can pick and choose and try something out. We can take our time and grow. Nobody’s perfect, not even Ted Lasso. Even if we take after a character like &lt;a href="https://ted-lasso.fandom.com/wiki/Roy_Kent" rel="noopener noreferrer"&gt;Roy Kent&lt;/a&gt; we can still call that a win. Just watch the show, OK?&lt;/p&gt;

</description>
      <category>management</category>
      <category>career</category>
      <category>motivation</category>
      <category>leadership</category>
    </item>
    <item>
      <title>How To Care About Your Job When It Doesn't Care About You</title>
      <dc:creator>Matt Hogg</dc:creator>
      <pubDate>Mon, 23 Jun 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/mrmatthogg/how-to-care-about-your-job-when-it-doesnt-care-about-you-43o4</link>
      <guid>https://dev.to/mrmatthogg/how-to-care-about-your-job-when-it-doesnt-care-about-you-43o4</guid>
      <description>&lt;p&gt;We all want to do awesome things and make an impact at work. However, what we call “work” is a relationship between employer and employee that’s inherently and persistently designed to benefit the former over the latter. How do we meaningfully contribute, earn a living, and maybe even enjoy ourselves when the organization simply does not care about us?&lt;/p&gt;

&lt;p&gt;It’s harsh, but that’s capitalism, baby! Any organization (“org”) exists for a purpose that’s greater than the individuals within it (e.g., profit, public service). We are a means to that end. Same goes for anything we personally get out of the arrangement be it salary, experience, foosball tables, or free lunches.&lt;/p&gt;

&lt;p&gt;This precarious and lopsided power dynamic manifests itself in our day-to-day in a variety of weird ways we often take for granted. Even if it never gets to something as extreme as layoffs or restructuring we still have to navigate this as best we can.&lt;/p&gt;

&lt;p&gt;We need to do our best work possible all while one tiny fact looms in the back of our minds. At the end of the day &lt;em&gt;we’re just line items on somebody’s spreadsheet.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Hundreds Of People In A Trenchcoat&lt;/h2&gt;

&lt;p&gt;One prevailing assumption that crumbles under the slightest scrutiny is that an org is a cohesive entity. We all know deep down it’s merely a collective made up of people, and yet we overlook this all the time. Heaven help us all if the company refers to itself as family. Ugh!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If an employer ever says “We’re like a family here” what they mean is they’re going to ruin you psychologically&lt;/p&gt;
&lt;cite&gt;– &lt;a href="https://twitter.com/KevinFarzad/status/1196483462934351872" rel="noopener noreferrer"&gt;Kevin Farzad&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;The head of the collective—executive leadership—isn’t a single brain. It could be 3, 5, or 7 brains all trying to act like a cohesive unit. But if you personally had 7 brains inside your head how do you think you’d come across to others? &lt;em&gt;Companies are not rational actors&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Anything an org does is just an emergent property of humans trying to work together. Organizations, especially large ones, are not optimizing for complete or perfect decisions. They’re optimizing for as much alignment and consensus as they can get to keep moving (sometimes by fiat).&lt;/p&gt;

&lt;p&gt;By necessity these decisions will always be suboptimal and that will infuriate workers who care about what they do. Most of these decisions &lt;em&gt;will not make sense&lt;/em&gt; because we can’t interpret a collective action as if it came from an individual. Rich Gilbert nails it in his great video &lt;a href="https://www.youtube.com/watch?v=B-nIHrXbig4" rel="noopener noreferrer"&gt;“Caring Less About Work can get us what we really want”&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you’re going to work and you’re getting frustrated and you’re intelligent and you’re looking around at all the dummies around that are not making the right decision, that is by design.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We can plead, “How could they DO this?” or complain, “This is a dumb decision,” but executive leadership really is at arm’s length from us and our work. Leadership lacks the detail we regularly see in the course of our work; they’re dealing in the abstract. Actions from leadership will therefore often feel extreme, sudden, swift, half-baked, reactive, fickle, or temperamental.&lt;/p&gt;

&lt;p&gt;The ultimate effect is that the org doesn’t care about us. One could even argue that it cannot care about us. An org is operating on another plane of reality entirely. Whatever it “does” says nothing about the individuals along for the ride. We’re better off not trying to interpret its actions too closely, lest we break our brains.&lt;/p&gt;

&lt;h2&gt;Corporate Dick Moves&lt;/h2&gt;

&lt;p&gt;Most of the time &lt;a href="https://en.wikipedia.org/wiki/Hanlon's_razor" rel="noopener noreferrer"&gt;Hanlon’s razor&lt;/a&gt; explains the dysfunction we see, but the nature of orgs can also provide temptation or cover for some willful disregard and gaslighting.&lt;/p&gt;

&lt;p&gt;A good example of gaslighting might be a lack of true upward mobility. Does the company promote often? Or does it say it will and then take forever to actually do it? How about raises—do they even happen, and how are they calculated? Does the company promote from within or does it fill higher positions with new joiners?&lt;/p&gt;

&lt;p&gt;That’s the carrot, but the stick is far worse! We’re always on the chopping block whether we know it or not. Not even management or the CTO is safe. Layoffs are ever-present and come in several forms.&lt;/p&gt;

&lt;p&gt;If we’re unlucky enough to work somewhere that does &lt;a href="https://en.wikipedia.org/wiki/Vitality_curve" rel="noopener noreferrer"&gt;stack ranking&lt;/a&gt; (e.g., Microsoft, Meta, Shopify) we can kinda sorta at least see it coming. Yay? Despite &lt;a href="https://www.reddit.com/r/technology/comments/1ftxvr3/comment/lpvhupb/" rel="noopener noreferrer"&gt;all evidence&lt;/a&gt; &lt;a href="https://www.vanityfair.com/news/business/2012/08/microsoft-lost-mojo-steve-ballmer" rel="noopener noreferrer"&gt;to the contrary&lt;/a&gt; CEOs absolutely love this shit. They definitely do not care who’s let go and for those who remain—until the next quarter—they don’t care about the paranoia or distrust it causes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When I interviewed at Meta, they had me do interviews with four engineers, seemingly chosen at random from within the company. The last guy had been there by far the longest. I asked him “What’s it like to work there?” and he said “Oh, working here will rewire your brain.”&lt;/p&gt;
&lt;cite&gt;– &lt;a href="https://bsky.app/profile/3psboyd.bsky.social/post/3lghflu5g722o" rel="noopener noreferrer"&gt;Matt Boyd&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;Maybe stack ranking is too cumbersome… Probably best to just do layoffs arbitrarily. The company doesn’t even need to be in trouble! &lt;a href="https://www.cnbc.com/2025/05/13/microsoft-is-cutting-3percent-of-workers-across-the-software-company.html" rel="noopener noreferrer"&gt;Entirely profitable companies can do layoffs&lt;/a&gt;, often more than once and with impunity. It’s a great flex for shareholders—so trendy that the stock price might go down if CEOs don’t do it. WTF? As &lt;a href="https://www.anildash.com//2025/04/19/ai-first-is-the-new-return-to-office/" rel="noopener noreferrer"&gt;Anil Dash notes&lt;/a&gt; this is more than just cutting costs:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Big tech CEOs and VCs really love performing for each other. We know they hang out in group chats like high schoolers, preening and sending each other texts, each trying to make sure they’re all wearing the latest fashions, whether it’s a gold chain or a MAGA hat or just repeating a phrase that they heard from another founder.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The “return to office” push is another such trend. There’s no positive interpretation for it that has any basis in our post-2020 reality. Even supposing it’s not a “quiet layoff” it’s still not good. Executives continue to poke this bear—up to a full &lt;a href="https://www.cnbc.com/2024/09/16/amazon-jassy-tells-employees-to-return-to-office-five-days-a-week.html" rel="noopener noreferrer"&gt;5 days a week in a physical office&lt;/a&gt;—and when we go along with it the org knows it can do almost anything it wants to us.&lt;/p&gt;

&lt;p&gt;And perhaps the darkest trend in recent times is the holy grail for CEOs—not hiring people in the first place.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;when a leader says human workers are a source of “bottlenecks”, and that automation is the source of “solutions”, pay very, very, very close attention&lt;/p&gt;
&lt;cite&gt;– &lt;a href="https://follow.ethanmarcotte.com/@beep/114422525209914527" rel="noopener noreferrer"&gt;Ethan Marcotte&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;If people think “AI” can meaningfully replace human developers, we’re in for a reckoning very soon. For one, we need junior developers, and senior developers cannot exist in a vacuum. A generational talent gap is coming. For another, worsening software quality will erode and ruin companies. We’re left to ride this wave until orgs finally comprehend that &lt;a href="https://matthogg.fyi/developers-your-job-is-not-to-write-code/" rel="noopener noreferrer"&gt;developers do more than just produce code&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;We’re Cogs In The Machine&lt;/h2&gt;

&lt;p&gt;Ahem, that last section got a little negative! However, a sober acknowledgment of the circumstances around us is the first step in loving our jobs.&lt;/p&gt;

&lt;p&gt;Yes, really. We can throw up our hands and lament that we’re just cogs in the machine, but I prefer to turn that idiom on its head a bit. It’s an apt metaphor.&lt;/p&gt;

&lt;p&gt;I’m just one gear of many. I have limited capabilities as do my neighbouring gears—I can rotate clockwise or counterclockwise. When I do something my neighbours react in turn, and vice versa. We come in different sizes and shapes resulting in differences in torque transmission, rotational speed, and so on.&lt;/p&gt;

&lt;p&gt;My direct neighbours are very obviously cooperating with me as our teeth are interlocking. Conversely, &lt;em&gt;the further away a gear is from me the less direct, and evident, our influence on each other becomes&lt;/em&gt;. The overall machine &lt;em&gt;from my point of view&lt;/em&gt; begins to act strangely, resists me, or jams up completely.&lt;/p&gt;

&lt;p&gt;An actual machine built this way would certainly be useless. The point is there’s little sense in struggling over the distant cogs, but there’s plenty to focus on right around us.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/tD0aFZkFrFA"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;The concentric &lt;a href="https://learningloop.io/glossary/circles-of-influence" rel="noopener noreferrer"&gt;circles of concern, influence, and control&lt;/a&gt;, popularized by &lt;a href="https://en.wikipedia.org/wiki/Stephen_Covey" rel="noopener noreferrer"&gt;Stephen Covey&lt;/a&gt;, form a famous framework for personal effectiveness and decision-making that translates extremely well to the context of an org. The org overall is our &lt;em&gt;circle of concern&lt;/em&gt;—it completely surrounds us but is so far removed that we can only observe and adapt to it in relatively minor ways.&lt;/p&gt;

&lt;h2&gt;Think Globally, Act Locally&lt;/h2&gt;

&lt;p&gt;What this means is we can put the overall organization and its shenanigans in the back of our minds while we &lt;em&gt;optimize our local relationships&lt;/em&gt; and maximize our job satisfaction with real people and meaningful tasks. This is within our &lt;em&gt;circle of influence&lt;/em&gt;. Namely, this is our manager, our reports, and our peers or teammates.&lt;/p&gt;

&lt;p&gt;We should figure out what our manager needs in order to perform their job and then become their ally towards those causes. This feedback loop builds trust very quickly and we’ll feel more empowered. Our boss also acts as a “canary in the coal mine”—if they’re comfortable in the org maybe we can be, too.&lt;/p&gt;

&lt;p&gt;Quick note: the advice above only works if our boss is a true leader and has our backs. If they’re driven by ego or more interested in licking executives’ boots, fuck 'em.&lt;/p&gt;

&lt;p&gt;If we ourselves have direct reports, then we &lt;a href="https://en.wikipedia.org/wiki/Servant_leadership" rel="noopener noreferrer"&gt;practice servant leadership&lt;/a&gt; and put as much energy into these relationships as we can. Listen to them every day, figure out what they need, and collaborate with them on those actions. I know &lt;a href="https://matthogg.fyi/my-manager-readme/" rel="noopener noreferrer"&gt;I became the leader I am today thanks to my reports&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Obviously we spend most of our time with our fellow developers, designers, architects, QAs, product managers, analysts, and so on. These are the people, on a daily basis, that we want to impress and be impressed by. Ideally, they’re just plain fun to hang out with, too!&lt;/p&gt;

&lt;p&gt;Never underestimate how much job satisfaction can be derived from the mutual respect and admiration among people who like each other and are operating at the top of their game.&lt;/p&gt;

&lt;p&gt;This is the real company that we work for, as it were. It’s a subset of the overall org that we can readily interpret because none of it is abstract. It also means &lt;em&gt;we don’t have to withdraw or disengage&lt;/em&gt; under the crushing weight of organizational nonsense. There will always be a venue where we can practice core principles—empathy, support, excellence, collaboration, camaraderie—and that means we can still grow our careers and our networks.&lt;/p&gt;

&lt;h2&gt;Yep, You’re A Resource—Valuable And Finite&lt;/h2&gt;

&lt;p&gt;Once we’ve drawn our local boundaries we know &lt;em&gt;where&lt;/em&gt; to best apply ourselves. However, we also need to understand our individual boundaries, or rather &lt;em&gt;how much&lt;/em&gt; we apply ourselves. This is within our &lt;em&gt;circle of control&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;We start by &lt;em&gt;not volunteering at work&lt;/em&gt;. We already have a job that we get paid for, so anything beyond that is extra work and in case I haven’t been clear so far—the org won’t care.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/scrZlK9oi5c"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Despite our best intentions, such work will be thankless and the &lt;a href="https://en.wikipedia.org/wiki/Diminishing_returns" rel="noopener noreferrer"&gt;law of diminishing returns&lt;/a&gt; will slap us in the face so hard we’ll get whiplash. We can do ourselves some serious lasting damage if we don’t exercise a healthy level of self-discipline.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you push yourself to your limits and burn out for a company, you are trading years of your future productivity for minor gains in the present.&lt;/p&gt;
&lt;cite&gt;– &lt;a href="https://bsky.app/profile/maxnichols.bsky.social/post/3ljlk6bjdok2x" rel="noopener noreferrer"&gt;Max Nichols&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;This isn’t the same as coasting, holding back, or phoning it in. There are plenty of ways to do a lot of good work within our circle of control. Indeed, it’s the only thing we truly control—applying our skills and abilities and getting direct satisfaction from that.&lt;/p&gt;

&lt;p&gt;There will always be times where we feel an urge to do more than asked, show some initiative, or simply seek appreciation from those beyond our circle of influence. We can indulge these urges, if we want, but we must be mindful. Whenever we’re doing extra work it should be clear to us why we’re doing it.&lt;/p&gt;

&lt;p&gt;The simplest (and least likely) reason to “volunteer” ourselves is because the org compensates us for it. What? It could happen! Maybe.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Telling my boss “yes,” when they asked if I speak Spanish fluently and following it up with, “but not for free,” has been the highlight of my day, so far.&lt;/p&gt;
&lt;cite&gt;– &lt;a href="https://twitter.com/lenubienne/status/1534232243593437184" rel="noopener noreferrer"&gt;Nani 🏹&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;Alternatively, we can “compensate” ourselves. We can always do extra work as long as we’re getting something out of it. This runs the gamut from acquiring a new skill to optimizing an annoying workflow bottleneck to securing our team’s delivery of a long-awaited feature.&lt;/p&gt;

&lt;p&gt;Whatever the reason or result, any extra work is &lt;em&gt;a courtesy we extend&lt;/em&gt; to ourselves or our circle of influence. We should always &lt;em&gt;assume that only we (or our teammates) will appreciate it&lt;/em&gt;. If that’s not enough to justify the effort then we don’t do it.&lt;/p&gt;

&lt;p&gt;The threshold for what constitutes extra work (or whether we should do it) is personal, and the calculation will vary from person to person. Some can tolerate a great deal if the pay is good, while others can’t. Some will risk burning themselves out doing small favors for everybody, and others won’t.&lt;/p&gt;

&lt;p&gt;To each their own, I say. We shouldn’t feel guilty or think less of ourselves when we see that somebody else’s circle of control is different than our own. We can only define it for ourselves and work happily within those confines.&lt;/p&gt;

&lt;h2&gt;Gee, Thanks…?&lt;/h2&gt;

&lt;p&gt;We should be careful about counting any vague, top-down forms of recognition as fair exchange for extra work or validation that we’re doing a good job.&lt;/p&gt;

&lt;p&gt;If we’re doing great work everyone will know because our circle of influence (manager, reports, peers) will be singing our praises. Word will spread. We don’t need to actively seek out credit or accolades by trying to do more.&lt;/p&gt;

&lt;p&gt;Either way, the slippery slope here is that an understanding of our contributions loses valuable detail at the org level. By definition, &lt;em&gt;the wider our reputation the less control we have over its interpretation&lt;/em&gt;. The org doesn’t care about us, so when all is said and done even the brightest star at the company will still be a resource.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you always deliver, they’ll always lean on you. And the more efficient you are? The more you get exploited.&lt;/p&gt;
&lt;cite&gt;– &lt;a href="https://bsky.app/profile/thisisworkwell.bsky.social/post/3lkvjst5kaq2q" rel="noopener noreferrer"&gt;WorkWell&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;This perverse form of “recognition” could backfire on us in several ways. We might become the subject matter expert for some arcane legacy system or a new feature that requires constant maintenance. We risk getting typecast as “The Only Person Who Knows How To Do X”. We’ll get saddled with an additional workload or a new role we don’t actually have the bandwidth for.&lt;/p&gt;

&lt;p&gt;Again, we might actually be OK with such recognition—some could be legitimate career opportunities—but let’s not fool ourselves into thinking it’s true gratitude on the part of the org.&lt;/p&gt;

&lt;h2&gt;Job Seeking As A Hobby&lt;/h2&gt;

&lt;p&gt;However much we ingratiate ourselves with the org nothing is guaranteed. Despite the many YouTubers who naively pontificate on how to become recession-proof or dodge layoffs whenever the org flinches, no single company is going to save us. We have to be ready to save ourselves, and at any moment.&lt;/p&gt;

&lt;p&gt;This means &lt;em&gt;job seeking on a casual and continual basis&lt;/em&gt;. Treat it like a hobby. Yes, even if we have a job already. And yes, even if we love that job.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/JcQ7Np4D-Rs"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Think of the activities we do almost exclusively when we’re between jobs and do them on a smaller scale more regularly. Keeping our resume up to date, for instance, is far easier if we document our accomplishments as they happen.&lt;/p&gt;

&lt;p&gt;We can also practice other activities like staying in touch with colleagues, researching companies and their cultures, attending meetups or networking events, and even doing interviews!&lt;/p&gt;

&lt;p&gt;A low-key job search can help in a few ways. For one, should we suddenly find ourselves unemployed we’re better prepared. We’ve already done some of the legwork that’d feel much harder to do in the moment. Any practice we’ve done also comes across as confidence to potential recruiters and interviewers.&lt;/p&gt;

&lt;p&gt;Second, it’s a chance for introspection and self-awareness. We’ll come to know more clearly what we like or dislike about our current job. It will also help to identify what we’re capable of and what we expect of ourselves or others. Useful for future jobs, yes, but this understanding makes us better at our current job, too.&lt;/p&gt;

&lt;p&gt;If nothing else, it might just cheer us up from time to time. I’ve had more than one Very Bad Day that found me cruising jobs on LinkedIn during my lunch break. And at one particularly toxic company I had a draft resignation letter open for a couple of months. Whenever the job wore me down I’d edit it a bit more. You know, as a treat! By the time I actually sent that letter it was bulletproof and very satisfying.&lt;/p&gt;

&lt;p&gt;Ultimately, proactive job seeking like this provides us with &lt;em&gt;confidence, autonomy, and options&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;Sorry, One More Thing About Executives&lt;/h2&gt;

&lt;p&gt;I’ve tried very hard to describe organizational behavior on its own terms and not as the deliberate intent of specific people.&lt;/p&gt;

&lt;p&gt;Just like us, executives have a role to play as part of the org, too. They may benefit from the system more than we do but they’re subjected to it all the same. Your CEO might actually be a nice human being. Most of them are! Elon Musk, on the other hand, is a demon wearing a meat suit. It’s the billionaires—and leaders who emulate them—that we need to worry about.&lt;/p&gt;

&lt;p&gt;So, I don’t have any useful advice for Musk, Bezos, Zuckerberg, or Altman. For anyone else in a leadership position I can only hope you’re looking into every nook and cranny of your org, seeing the people working for you, and doing right by them as best you can.&lt;/p&gt;

&lt;h2&gt;You Do You&lt;/h2&gt;

&lt;p&gt;I’ll admit I started writing this while in a somewhat bitter mood. However, it was a good exercise in helping me firm up some positive and actionable lessons. I find it reassuring that how I navigate an org doesn’t have to contradict &lt;a href="https://matthogg.fyi/my-manager-readme/" rel="noopener noreferrer"&gt;my personal principles or how I work&lt;/a&gt;. In fact, they’re the very tools I need.&lt;/p&gt;

&lt;p&gt;I firmly believe that we can find a way to exist—flourish, even—without the org eroding our personal values. We don’t want to be naive, but that doesn’t mean we just give in to cynicism.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I hate my job. The work sucks. The people suck. The pay sucks.&lt;br&gt;
&lt;em&gt;looks up and sees motivational poster on wall&lt;/em&gt;&lt;br&gt;
Well this changes everything&lt;/p&gt;
&lt;cite&gt;– &lt;a href="https://twitter.com/BuckyIsotope/status/663909140855582720" rel="noopener noreferrer"&gt;Dr. Bucky Isotope, PhD BOFA Economics&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;The steps seem simple enough. Make peace with the fact that any org is an irrational entity that always exists on the periphery. Focus our attention and effort on our circle of influence. Maintain rigorous and healthy personal boundaries. Always be ready to move on.&lt;/p&gt;

&lt;p&gt;It sucks that we have to do so even if we like where we work or feel like we’re doing well. It’s human nature to worry less about our well-being when the risks or costs are not evident or immediate. However, things are lovely until suddenly they’re not. &lt;em&gt;Protect yourself&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;My promise to you is that there will be far more good days than bad days. Ultimately, the fact that your job doesn’t care about you is liberating! It’s not personal. You do you.&lt;/p&gt;

&lt;p&gt;My promise to myself is that I’m going to spend less time accommodating the company and more time empowering myself. Less time worried about what the CEO thinks and more time worried about how my coworkers feel. Less energy trying to save the world and more time simply being proud of myself and my work.&lt;/p&gt;

</description>
      <category>management</category>
      <category>career</category>
      <category>motivation</category>
      <category>leadership</category>
    </item>
    <item>
      <title>Wherein I Find Myself Concerned About Sparkles</title>
      <dc:creator>Matt Hogg</dc:creator>
      <pubDate>Wed, 15 May 2024 00:00:00 +0000</pubDate>
      <link>https://dev.to/mrmatthogg/wherein-i-find-myself-concerned-about-sparkles-h63</link>
      <guid>https://dev.to/mrmatthogg/wherein-i-find-myself-concerned-about-sparkles-h63</guid>
      <description>&lt;p&gt;This trend of using sparkles (✨) as the visual metaphor when slapping “AI” onto our products has been bothering me lately. It’s not just sketchy to pass off large language models (LLMs) as something magical. There are also deeper implications here about how design plays into Big Tech’s desired narratives.&lt;/p&gt;

&lt;p&gt;You’ve seen it, I’m sure. Every company under the sun has aligned on the sparkles to represent their “AI” features: OpenAI, Apple, Google, Notion, Miro, Atlassian, Spotify, Adobe, Grammarly, Zoom, Wix, and more. All the cool kids are doing it.&lt;/p&gt;

&lt;p&gt;The emoji itself is decades old and is basically a &lt;a href="https://www.youtube.com/watch?v=g-pG79LOtMw" rel="noopener noreferrer"&gt;cultural export of Japanese mobile provider NTT Docomo&lt;/a&gt;. Ever since, it’s proven to be surprisingly flexible and adaptable—used to convey a wide variety of sentiments like sarcasm, emphasis, positivity, delight, wonder, and magic. Somewhere &lt;a href="https://bootcamp.uxdesign.cc/the-unstoppable-rise-of-spark-as-ais-iconic-symbol-ca663162cccc" rel="noopener noreferrer"&gt;between 2016 and 2020 this “magic” symbolism was first applied to tech products&lt;/a&gt; and that’s where we find ourselves today.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Dad, why do we use sparkles as the icon for AI?”&lt;br&gt;
“Because ChatGPT came out and designers were amazed, so they chose an icon to show how amazed they were.”&lt;br&gt;
“So the icon isn’t about the AI, but people’s reaction to it?”&lt;br&gt;
“Yep.”&lt;br&gt;
“What about when the next amazing thing comes out?”&lt;/p&gt;
&lt;cite&gt;— &lt;a href="https://twitter.com/hobdaydesign/status/1738508168978743341" rel="noopener noreferrer"&gt;Anthony Hobday&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;Behind this visual decoration there are still computers doing things for us, albeit inside a non-deterministic black box. It’s fine to be excited about new technology but not at the expense of fully understanding what it can and can’t do. It’s a shame because “AI” is quite good at some things but saving the world is not one of them. Unfortunately capitalism requires outsized and unending growth rather than modest gains over time. As &lt;a href="https://www.linkedin.com/pulse/ai-iconography-does-sparkle-jordan-rothe/" rel="noopener noreferrer"&gt;Jordan Rothe&lt;/a&gt; puts it:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In short, people expect AI to be magic. While it’s a truly remarkable technology that I anticipate will impact our world in huge ways we haven’t even imagined, it has it’s own limitations and biases and requires tremendous work and data to function; it ain’t magic.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I use quotes around “AI” because they’re in fact &lt;a href="https://en.wikipedia.org/wiki/Large_language_model" rel="noopener noreferrer"&gt;LLMs&lt;/a&gt; and not actually “intelligent” in the sense Big Tech would have you believe. They’re &lt;a href="https://en.wikipedia.org/wiki/Stochastic_parrot" rel="noopener noreferrer"&gt;stochastic parrots&lt;/a&gt;—an achievement in autocomplete perhaps but not an artificial brain by any means.&lt;/p&gt;

&lt;p&gt;I’ve had to repeatedly explain to more than one excited layperson that they’re projecting an ability onto a system that literally cannot do what they’re excited about (e.g., &lt;em&gt;“Can you believe what ChatGPT can do?!?”&lt;/em&gt;).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It’s a profound powerful valuable array of serious tools and we’re determined to embrace it with the same rush and juvenility as “web3”&lt;/p&gt;
&lt;cite&gt;— &lt;a href="https://twitter.com/tomfgoodwin/status/1782412025294970922" rel="noopener noreferrer"&gt;Tom Goodwin&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are numerous indications that &lt;a href="https://www.wheresyoured.at/peakai/" rel="noopener noreferrer"&gt;current progress in “AI” may already be peaking&lt;/a&gt;. Also, rumors of &lt;a href="https://www.wheresyoured.at/bubble-trouble/" rel="noopener noreferrer"&gt;“model collapse”&lt;/a&gt; are brewing—a shocking but entirely predictable knock-on effect of how LLMs work. This essay isn’t specifically about whether “AI” is a scam, but it kinda is (if we keep selling it this way).&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/BFzphfgwv8E"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;With a better understanding of how “AI” works and its true utility, there are far better and more accurate symbols out there. We could’ve chosen robots (🤖), dice (🎲), or even slot machines (🎰).&lt;/p&gt;

&lt;p&gt;Better yet, why not—and stop me if you’ve heard this before—&lt;a href="https://www.fastcompany.com/91030156/how-the-sparkle-emoji-took-over-ai" rel="noopener noreferrer"&gt;depict the actions these fancy buttons actually perform&lt;/a&gt; for the user? From the article:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;But since hitting the vague sparkles leads to a different action on each service, Saffar wonders whether it could prevent users from creating a mental model of how the product works, set unrealistic expectations, and leave them confused and annoyed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I’d argue that nobody has that mental model, not even the very people selling this technology. We’re not using a visual metaphor for what the “AI” can do but &lt;em&gt;for what we want everyone to hope it will do&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;For most traditional UI icons, the visual metaphor is predetermined and easy to interpret. For instance, the “save” icon used to be a floppy disk (god, I’m old) because that’s where the software would put your file. The path from symbol to action was a very straight line. For sparkles that path is a labyrinth.&lt;/p&gt;

&lt;p&gt;I know I’m ranting about this cute little icon but that’s because it’s more than just a user experience (UX) challenge.&lt;/p&gt;

&lt;p&gt;When is an icon just an icon? Never! &lt;a href="https://www.linkedin.com/pulse/design-always-political-paola-amparan/" rel="noopener noreferrer"&gt;Design is political&lt;/a&gt;. There’s always an underlying agenda that puts forth a certain ideology or bias, whether we realize it or not. It’s propaganda, but for what? As &lt;a href="http://thepoliticsofdesign.com/about-the-book" rel="noopener noreferrer"&gt;Ruben Pater writes in his book&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The political system in which the designer works and lives cannot be disconnected from the design she/he creates. A political ideology is continuously being produced and communicated through design. Acknowledging this can give designers more agency in their practice to either serve or subvert the status quo.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Technology is political, too. Much has been said for years about the biases baked into algorithms or models by way of the programmers who created them. The energy consumption required to mine Bitcoin or summarize an article into bullet points rivals that of small countries. Big Tech is downright giddy about the jobs that can be replaced by “AI”.&lt;/p&gt;

&lt;p&gt;I have to wonder what narrative is being given to us, and how. At best, it’s misguided techno-optimism in the name of progress. At worst, it’s a deliberate grift. But either way, &lt;a href="https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_what_it_does" rel="noopener noreferrer"&gt;the purpose of a system is what it does&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;These trade-offs are apparently worth it for enough people because it’s happening as I write this. But let’s not dwell on the consequences, right? We can’t neatly summarize all of that nuance in a cute little icon anyway.&lt;/p&gt;

&lt;p&gt;Technical illiteracy, marketing hype, copyright infringement, content theft, disinformation, rampant capitalism, environmental damage, societal upheaval, ethics in tech, the future of work—those fun little sparkles are doing a lot of heavy lifting!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>design</category>
      <category>ux</category>
      <category>watercooler</category>
    </item>
    <item>
      <title>My Manager README</title>
      <dc:creator>Matt Hogg</dc:creator>
      <pubDate>Fri, 03 May 2024 00:00:00 +0000</pubDate>
      <link>https://dev.to/mrmatthogg/my-manager-readme-4o46</link>
      <guid>https://dev.to/mrmatthogg/my-manager-readme-4o46</guid>
      <description>&lt;p&gt;This README is an attempt to succinctly explain who I am, how I work, and my personality quirks. For the reader I hope it helps expedite any potential interaction between us. I also hope it helps me triangulate and refine how I present myself in work settings.&lt;/p&gt;

&lt;h2&gt;In The Beginning...&lt;/h2&gt;

&lt;p&gt;I've been working in web development since 1999. I was a "full-stack developer" before it was even a phrase—writing "spaghetti code" in &lt;a href="https://en.wikipedia.org/wiki/ColdFusion_Markup_Language" rel="noopener noreferrer"&gt;ColdFusion&lt;/a&gt; or &lt;a href="https://en.wikipedia.org/wiki/PHP" rel="noopener noreferrer"&gt;PHP&lt;/a&gt; to connect to databases in &lt;a href="https://en.wikipedia.org/wiki/Microsoft_Access" rel="noopener noreferrer"&gt;Microsoft Access&lt;/a&gt; while creating gaudy UIs in &lt;a href="https://en.wikipedia.org/wiki/Adobe_Flash" rel="noopener noreferrer"&gt;Flash&lt;/a&gt;. We were called "webmasters" back then.&lt;/p&gt;

&lt;p&gt;My original passion is frontend development. CSS is my all-time favorite programming language (yes, it's a programming language). My expertise and curiosity are primarily client-side—HTML, accessibility, CSS, responsive design, making delightful UIs and clicky things, etc.&lt;/p&gt;

&lt;p&gt;Over my career I've earned responsibility and authority by giving a damn. This means asking questions, showing an interest in a particular problem space, understanding the value or impact of the work, thinking about process for myself or my teammates, and communicating well. I firmly believe exceeding expectations in this way is what's made me a great developer.&lt;/p&gt;

&lt;h2&gt;This Is Me Now&lt;/h2&gt;

&lt;p&gt;It's this feedback loop—exceed expectations, gain more responsibility, exceed new expectations, and repeat—that's put me in leadership and management roles since 2015.&lt;/p&gt;

&lt;p&gt;I use my own development background as a force multiplier to support, grow, protect, and defend my reports. Because I've been there myself I'm uniquely qualified to coach developers or guide them through any challenge. By combining my experience with the philosophy of &lt;a href="https://en.wikipedia.org/wiki/Servant_leadership" rel="noopener noreferrer"&gt;servant leadership&lt;/a&gt; I firmly believe investing in the individual is what's best for the organization, too.&lt;/p&gt;

&lt;h2&gt;What I Do For You&lt;/h2&gt;

&lt;p&gt;As a leader there are numerous things you can expect me to do for you without hesitation. I feel it's important to list them because people can often talk themselves out of asking for help when they shouldn't. I can...&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provide context and perspective about the company at large.&lt;/li&gt;
&lt;li&gt;Advise on your individual or team priorities.&lt;/li&gt;
&lt;li&gt;Gather and relay feedback on your performance.&lt;/li&gt;
&lt;li&gt;Identify and coach for career growth opportunities.&lt;/li&gt;
&lt;li&gt;Offer technical guidance by way of pair programming.&lt;/li&gt;
&lt;li&gt;Help with &lt;a href="https://en.wikipedia.org/wiki/Rubber_duck_debugging" rel="noopener noreferrer"&gt;rubber duck&lt;/a&gt; debugging.&lt;/li&gt;
&lt;li&gt;Participate in code reviews.&lt;/li&gt;
&lt;li&gt;Find peer support from other developers, teams, and departments.&lt;/li&gt;
&lt;li&gt;Communicate with stakeholders or adjacent parties.&lt;/li&gt;
&lt;li&gt;Rally on production issues.&lt;/li&gt;
&lt;li&gt;Hire your teammates.&lt;/li&gt;
&lt;li&gt;Organize regular, recurring 1-on-1 conversations.&lt;/li&gt;
&lt;li&gt;Celebrate your accomplishments in public settings.&lt;/li&gt;
&lt;li&gt;Listen to your complaints or rants.&lt;/li&gt;
&lt;li&gt;Offer my opinions on anything from coding to process.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;My Values And Principles&lt;/h2&gt;

&lt;p&gt;These are the values and principles that I've accumulated through my career and inform my daily interactions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I work for you.&lt;/strong&gt; If you think I'm a good leader then you should see the people I work for—my reports! Any success I might enjoy as a leader is a direct reflection of my reports, the feedback loops I maintain with each of them, and their capacity to execute. I am here to help.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proactive communication.&lt;/strong&gt; I speak candidly, but factually, and I try to anticipate what people want before they want it. I'm liberal with my status updates and "FYI" messages. When I take an action item I'll tell you when I expect to finish. When you ask me a question I'll provide any supporting links, documents, or other material that I have. When I think my response is likely to generate a follow-up question then I'll answer it before you even have to ask.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We're doing hard things.&lt;/strong&gt; I'll be asking you to do novel or difficult work. If our problems were simple then they'd already be solved and we'd be bored or unemployed. So, let's embrace it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I love helping developers grow.&lt;/strong&gt; This means I'll challenge you to &lt;a href="https://matthogg.fyi/developers-your-job-is-not-to-write-code/" rel="noopener noreferrer"&gt;do more than just write code&lt;/a&gt; (or code that you're used to). If I ask something of you that you've never done before remember that it's because I have every reason to believe you can do it. &lt;a href="https://www.kalzumeus.com/2015/10/30/developing-in-stockfighter-with-no-trading-experience/" rel="noopener noreferrer"&gt;Patrick McKenzie&lt;/a&gt; said it best:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Every great developer you know got there by solving problems they were unqualified to solve until they actually did it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Perfection is unlikely.&lt;/strong&gt; I aim for steady improvement, iteration, and evolution towards an ideal. Progress might be slow or modest—if we're trending ever upward then I'm happy. If we're not but we know what to do about it, I'm also happy. &lt;a href="https://toolshed.com/" rel="noopener noreferrer"&gt;Andy Hunt&lt;/a&gt; said this about software, but I think it can easily apply more generally:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;No one in the brief history of computing has ever written a piece of perfect software. It's unlikely that you'll be the first.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Zen detachment.&lt;/strong&gt; I regularly locate myself within &lt;a href="https://positivepsychology.com/circles-of-influence/" rel="noopener noreferrer"&gt;the circles of concern, influence, and control&lt;/a&gt;. Living in the circle of concern is certainly uncomfortable but it's far better than mistakenly thinking I'm in the circle of control! I jokingly refer to the former as achieving "zen detachment"—knowing what I don't directly control can actually reduce my stress. The latter is just borrowing trouble.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's do it.&lt;/strong&gt; I'm often inclined to "just do it." I'm typically in favor of launching semi-aggressive MVPs, rolling forward, or tolerating short term frustration and growing pains if it means we're on a path to iterate and improve. I don't like to &lt;a href="https://en.wikipedia.org/wiki/Meta_Platforms#History" rel="noopener noreferrer"&gt;move fast and break things&lt;/a&gt; but I'm certainly more pragmatic and less risk-averse than most people. I rely on my coworkers to push back if I'm being reckless.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Work/life balance.&lt;/strong&gt; As hard as we might work, we're not performing heart transplants or rescuing people from burning buildings. The firmer the boundaries between work and life the better I can focus on both. I care very little about when you clock in/out, flex time, appointments, or time off as long as your work gets done and your team knows where you are.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Make 'em laugh.&lt;/strong&gt; A sense of humor is actually helpful for getting work done. It's not just about cracking jokes—although I absolutely will do so at the slightest provocation—but also the ability to stop, take a breath, and loosen up a tiny bit. I like how &lt;a href="https://twitter.com/daisyowl/status/841802094361235456" rel="noopener noreferrer"&gt;@daisyowl&lt;/a&gt; sums up the absurdity of being self-serious all the time:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;if you ever code something that "feels like a hack but it works," just remember that a CPU is literally a rock that we tricked into thinking&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Empathy and humility are assets.&lt;/strong&gt; These aren't just personality traits but "skills" that make me better at my job. Ego will always get in the way of solving problems. Empathy and humility are the basis of any interaction I start with someone. Likewise, I assume positive intent from others. If somebody fails to meet that standard I see no reason to lower myself to that level or fight ego with ego. Empathy is not an infinite resource for me but when I'm tapped that's only discussed behind closed doors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strong opinions, weakly held.&lt;/strong&gt; &lt;a href="https://web.archive.org/web/20130626002837/https://saffo.com/02008/07/26/strong-opinions-weakly-held/" rel="noopener noreferrer"&gt;Paul Saffo's mantra&lt;/a&gt; is useful for starting with uncertainty and arriving at a conclusion. No opinion is so absolute that it can't be amended in light of new information or fresh perspective. I actively seek out such information to prove my "strong opinions" wrong or see where they start to bend.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Speak up!&lt;/strong&gt; If I don't say something then I shouldn't be surprised when nothing improves. I make observations, offer suggestions, or ask questions when I have them. They might be dumb. Sometimes I'm genuinely being an idiot, but otherwise it's a tactic. I'm using &lt;a href="https://en.wikipedia.org/wiki/Socratic_questioning" rel="noopener noreferrer"&gt;Socratic questioning&lt;/a&gt; to explore solutions and ideas that otherwise could be missed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical hiring is buggy.&lt;/strong&gt; I have an &lt;a href="https://matthogg.fyi/a-technical-interview-doesnt-have-to-suck/" rel="noopener noreferrer"&gt;unorthodox interviewing philosophy&lt;/a&gt; based on my experiences when hiring people. It's served me well but it's certainly not the industry standard nor to everyone's liking.&lt;/p&gt;

&lt;h2&gt;My Communication Style&lt;/h2&gt;

&lt;p&gt;Communication is a core skill and therefore my communication style merits particular emphasis here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inbox Zero.&lt;/strong&gt; I'm at "Inbox Zero" all day every (business) day. You'll typically get an acknowledgment or response from me within 30-60 minutes. Yes, really. A non-zero unread message count stresses me out more than any lack of focus. I don't recommend this for others, but years ago I made the trade-off to prioritize unblocking my reports even if that means a lot of &lt;a href="https://en.wikipedia.org/wiki/Human_multitasking" rel="noopener noreferrer"&gt;context switching&lt;/a&gt; for me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Transparency and clarity.&lt;/strong&gt; I tell you what I know when I know it. I'm candid about what I don't know and what's uncertain or subject to change. I distinguish objective facts from my own personal speculation or interpretation. If I need something from you I state it clearly or otherwise indicate it's "FYI" only.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Asynchronous messaging over meetings over email.&lt;/strong&gt; I prefer asynchronous messaging (e.g., Slack) 95% of the time because I can get you a response faster than any other method. When that won't work, meetings or ad hoc calls are more effective (i.e., larger groups or Slack threads that approach 99 replies). I do not like emails.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Loud and clear.&lt;/strong&gt; I overcommunicate by default. This means repeating the same thing across multiple venues (e.g., Jira, Confluence, email, Slack channels, pull requests). My messaging will be as public as possible for maximum visibility. I'm also vocal with my compliments and gratitude. So far nobody has told me to shut up!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Meetings.&lt;/strong&gt; I appreciate properly organized meetings and decline invites when I can't determine their value. I may have &lt;a href="https://matthogg.fyi/meetings-am-i-right/" rel="noopener noreferrer"&gt;written an essay on this topic&lt;/a&gt; because I'm a big nerd.&lt;/p&gt;

&lt;h2&gt;Personality Quirks&lt;/h2&gt;

&lt;p&gt;I'm not a corporate robot, so you should probably be aware of certain personality traits or behaviors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I'm an introvert.&lt;/strong&gt; I can talk a good game but my social battery drains quickly. I tend to be more forthcoming with people I know well over time. It's also why asynchronous and written communication comes more naturally to me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If there's a joke to be made I'll probably do it.&lt;/strong&gt; It's very hard for me to resist making a joke once it pops into my brain. Puns and dad jokes are the lowest hanging fruit but that doesn't stop me. I'm also playful, sarcastic, and self-deprecating in most meetings. Chat sidebars in video calls were a godsend for me because I can wisecrack without derailing the meeting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I use a lot of GIFs and emoji.&lt;/strong&gt; Like, a lot. A picture is worth 1,000 words, after all. Also, it's GIF with a hard "G" and I won't be taking questions at this time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spelling and grammar matter.&lt;/strong&gt; One wrong character can alter the entire meaning of a sentence. But it's more than that—I'm the guy who edits his Slack messages after the fact because the innocuous typo is staring me in the face and I can't focus until it's corrected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Watch my language.&lt;/strong&gt; I use a lot of idioms, analogies, and metaphors in conversation. They are great shorthand for complex concepts but I do tend to mix them or take them too far. Worse than that, using them presumes a cultural or linguistic background you might not have. If my meaning is ever unclear you'd be correct to stop me and ask that I explain it again.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I think out loud.&lt;/strong&gt; Writing allows me to collect and prepare my thoughts. When I'm speaking I'm often thinking out loud and on the fly. When I do this I'm aware I have a tendency to look around the room and often past you while my brain is working (this is more obvious in person and less so on video calls). I often search on the spot for just the right word which can be disruptive mid-sentence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I love completing tasks.&lt;/strong&gt; There are few sensations better than the dopamine hit when I complete a task and cross it off my to-do list. Like, literally the act of crossing it off my list makes me giddy. By extension, I can get pretty enthusiastic about other people's tasks, too.&lt;/p&gt;

&lt;h2&gt;Some Exceptions May Apply&lt;/h2&gt;

&lt;p&gt;I do my best with all of the above but I'm only human. I will make mistakes or present the occasional contradiction. I appreciate when people tell me where I'm falling short or being inconsistent.&lt;/p&gt;

&lt;p&gt;If we're undergoing an emergency or existential threat that affects our company, teams, or product then the laws of nature may no longer apply. I will swiftly—and temporarily—put aside any of the above during such times. Ideals are nice but they're useless if I don't have a job or a website to work on.&lt;/p&gt;

&lt;h2&gt;The End...?&lt;/h2&gt;

&lt;p&gt;This is a living document that undergoes regular revisions as I myself change over time. If you ever find a discrepancy between this README and my actual behavior, I invite you to please reach out and help me improve it. Thank you!&lt;/p&gt;

</description>
      <category>management</category>
      <category>leadership</category>
      <category>career</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Meetings, Am I Right?</title>
      <dc:creator>Matt Hogg</dc:creator>
      <pubDate>Sat, 20 Apr 2024 00:00:00 +0000</pubDate>
      <link>https://dev.to/mrmatthogg/meetings-am-i-right-5aj1</link>
      <guid>https://dev.to/mrmatthogg/meetings-am-i-right-5aj1</guid>
      <description>&lt;p&gt;Meetings aren't evil in and of themselves. They're fundamentally just people talking, right? And yet we hate meetings with a contempt that often borders on irrational rage. Probably because it's people talking.&lt;/p&gt;

&lt;p&gt;Meetings have always been frustrating but I feel like it's become more pronounced in recent years since the rise of remote work during the pandemic.&lt;/p&gt;

&lt;p&gt;Technology has made it that much easier to "hop on a call" and suddenly bring our coworkers into our home offices, one meeting immediately after another and without end. We don't even have to get out of our chairs and move to another meeting room! Video calls also carry a cognitive load that is &lt;a href="https://ideas.ted.com/zoom-fatigue-is-real-heres-why-video-calls-are-so-draining/" rel="noopener noreferrer"&gt;literally exhausting&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Can you hop on a call?”&lt;br&gt;
Can YOU hop off a cliff?&lt;/p&gt;
&lt;cite&gt;— &lt;a href="https://www.threads.net/@ohnoshetwitnt/post/C2Q-cFOg9cM" rel="noopener noreferrer"&gt;The Volatile Mermaid&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;It's reached such a critical mass that &lt;a href="https://www.npr.org/2023/02/15/1156804295/shopify-delete-meetings-zoom-virtual-productivity" rel="noopener noreferrer"&gt;companies like Shopify felt compelled to "delete" almost all meetings&lt;/a&gt; sight unseen!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Naturally, as a tech company, Shopify wrote code to do this. A bot went into everyone's calendars and purged all recurring meetings with three or more people, giving them that time back.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Shopify's heart may be in the right place but I can't help but feel they might be looking past the actual problem here.&lt;/p&gt;

&lt;p&gt;In a &lt;a href="https://matthogg.fyi/developers-your-job-is-not-to-write-code/" rel="noopener noreferrer"&gt;previous essay&lt;/a&gt; I advised that "showing contempt or disdain for meetings" is something that developers should try to &lt;em&gt;avoid&lt;/em&gt;. Now, I knew it would touch a lot of developers' nerves—ramming a developer into a meeting when they're deep in code is an unforgivable offense.&lt;/p&gt;

&lt;p&gt;Of course, context switching is hard for everyone. As a result, everyone hates meetings. I get it, but I'm not personally opposed to meetings and I don't think they're intrinsically bad. Nobody is going to catch me using one of those &lt;a href="https://fellow.app/tools/meeting-cost-calculator/" rel="noopener noreferrer"&gt;meeting cost calculators&lt;/a&gt; to smugly point out how my precious time is being wasted.&lt;/p&gt;

&lt;p&gt;However, I do think it's &lt;em&gt;extremely easy&lt;/em&gt; to run meetings poorly and when it happens the perceived damage can be lasting. Meetings—organizing or attending them—aren't a casual, passive act. They require real effort.&lt;/p&gt;

&lt;h2&gt;It Takes All Kinds&lt;/h2&gt;

&lt;p&gt;Some kinds of meetings can't or shouldn't be avoided. Their utility is genuine and we can safely dismiss these as their purpose is self-evident. This includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One-on-ones between managers and reports.&lt;/li&gt;
&lt;li&gt;Agile ceremonies—as long as everybody understands their purpose!&lt;/li&gt;
&lt;li&gt;Planning or designing a technical solution.&lt;/li&gt;
&lt;li&gt;An existential crisis for the company (i.e., the site is on fire).&lt;/li&gt;
&lt;li&gt;When a Slack or email thread has gone on far too long (i.e., the opposite of "this meeting could've been an email").&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, it's just &lt;a href="https://randsinrepose.com/archives/the-seven-meetings-you-hate/" rel="noopener noreferrer"&gt;all of the other kinds of meetings&lt;/a&gt; we have to watch out for. How do we identify and attend high-value meetings and also organize them ourselves? Here's what I do.&lt;/p&gt;

&lt;h2&gt;How To Attend A Meeting (Or Not)&lt;/h2&gt;

&lt;p&gt;When a meeting invite arrives in my inbox there's only one question I need to ask myself: &lt;em&gt;will attending this meeting bring me value or will I bring value to the meeting?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If the answer is "no" or "I'm not sure" then I decline the meeting. It's as simple as that.&lt;/p&gt;

&lt;p&gt;If the meeting has no agenda, appears with less than a day's notice, the organizer obviously didn't check my schedule, or the organizer made no mention of it beforehand then that's gonna be a "no" from me—I haven't been convinced of this meeting's value for myself or others.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If I decline your meeting, it means one of two things: (1) I don’t know why I was invited, or (2) I know why, but I am not buying it.&lt;/p&gt;
&lt;cite&gt;— &lt;a href="https://mastodon.social/@rands/111291212295278915" rel="noopener noreferrer"&gt;rands&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;How politely I decline is at my discretion. If I care to, I'll let the organizer know I'm double-booked at that time or request more context before I can accept the invite. More often than not, however, I just decline and go about my day. If somebody &lt;em&gt;really&lt;/em&gt; needs me there, they'll find me.&lt;/p&gt;

&lt;p&gt;Furthermore, this value assessment doesn't stop with the invite. It extends to the meeting itself. Once I've agreed to attend, I must then be present, stay focused and avoid multi-tasking. Before the pandemic I made it a point to &lt;em&gt;never&lt;/em&gt; bring my laptop to any meeting. That's not possible these days, so it takes even more concentration to have an effective meeting.&lt;/p&gt;

&lt;p&gt;Conversely, I can also "decline" mid-meeting—I can politely leave early once I've extracted or contributed the value that was needed. Better yet, we should all sign off as soon as we've heard somebody suggest we "take that offline" two or more times.&lt;/p&gt;

&lt;h2&gt;How To Organize Your Meetings&lt;/h2&gt;

&lt;p&gt;OK, I've drawn some boundaries and I'm defending my calendar, but what about my own meetings? My advice is to &lt;em&gt;embellish&lt;/em&gt; (the invite) and &lt;em&gt;empathize&lt;/em&gt; (with the attendees).&lt;/p&gt;

&lt;h3&gt;Embellish The Invite&lt;/h3&gt;

&lt;p&gt;By "embellish" I mean that I put as much into the invite as I can to set expectations and then I over-communicate the invite itself.&lt;/p&gt;

&lt;p&gt;This starts with the meeting title itself. I summarize as best I can and avoid dreadful non-titles like "Quick Chat" that tell us nothing. And a small pet peeve of mine—use proper grammar!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Is it okay to block my calendar off with a meeting title “leave me alone”&lt;/p&gt;
&lt;cite&gt;— &lt;a href="https://www.threads.net/@smoothieking/post/C4OSF7ZP2-O" rel="noopener noreferrer"&gt;Smoothie King&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;p&gt;Next is the agenda, of course. For smaller meetings I'll at least summarize what we're going to talk about. One sentence is better than a blank description. For more involved meetings I'll outline all the topics to cover. Bonus points when I can timebox each agenda item, again, to set expectations for attendees. Anything on the agenda should lead to a &lt;em&gt;result&lt;/em&gt;—a question answered, an action item created, and so on.&lt;/p&gt;

&lt;p&gt;If I have any links or documents that serve as homework or pre-read material, I include them. I want people in my meetings to hit the ground running and start prepared. There's also a chance someone will pre-read and address part of the agenda ahead of time, thereby shortening my meeting.&lt;/p&gt;

&lt;p&gt;My final embellishment is after the invite is sent. I'll advise the same people in Slack to watch for the invite and offer additional context. This might seem excessive but it demonstrates that the meeting, and its attendees, are important to me. And again, there's always a chance I'll start a discussion that gets me what I need and negates the meeting entirely.&lt;/p&gt;

&lt;h3&gt;Empathize With The Attendees&lt;/h3&gt;

&lt;p&gt;Demonstrating empathy at work is a true skill, and that includes when I'm organizing meetings.&lt;/p&gt;

&lt;p&gt;For instance, the simplest way to show others that I'm trying to be respectful of their time is to use the scheduling/availability tool of my calendar to find a time that works for those involved. Why compound the pressure on my attendees by double-booking them when it only takes me a few minutes to see if they're even free?&lt;/p&gt;

&lt;p&gt;Additionally, I prefer to book time that's adjacent to attendees' existing meetings. This means I'm not adding a context-switching burden for people by dropping a meeting right in the middle of an otherwise empty afternoon, for example.&lt;/p&gt;

&lt;p&gt;I'm also very clear about who's a required versus optional attendee and why—heaven is a meeting invite someone knows they can decline without any guilt.&lt;/p&gt;

&lt;p&gt;Where applicable, I remind people when a meeting will be recorded. If someone is unable to attend this can give them some relief that they won't miss anything. And quite frankly, there are many meetings I wish I could've watched later at 2X speed if I'd had the chance!&lt;/p&gt;

&lt;p&gt;Lastly—and this is a pet peeve of mine—I don't "give people their time back" if the meeting concludes early. Humans don't talk like that! I just thank people for their time and end the meeting.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;omg thanks for ending the meeting 4 minutes early and "giving me some time back" -- now I can finally pursue my passions&lt;/p&gt;
&lt;cite&gt;— &lt;a href="https://twitter.com/sablaah/status/1572250827003252738" rel="noopener noreferrer"&gt;sarah&lt;/a&gt;&lt;/cite&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Sorry, Not Sorry...?&lt;/h2&gt;

&lt;p&gt;I wrote this with some hesitation because it feels a little bit facile. I'm not saying anything groundbreaking here. If you've read this far and think all of this is self-evident—I agree!&lt;/p&gt;

&lt;p&gt;The fact remains, however, that for years I've regularly coached people or listened to them complain about their unruly calendars. So, the problem does exist and must be confronted. This is what's worked for me.&lt;/p&gt;

&lt;p&gt;Until such time as everyone everywhere magically creates perfect meetings we need to learn how to distinguish the good from the bad, establish some healthy boundaries, and perhaps—gasp!—even talk to each other.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>career</category>
      <category>process</category>
      <category>meetings</category>
    </item>
    <item>
      <title>Developers, Your Job Is Not To Write Code</title>
      <dc:creator>Matt Hogg</dc:creator>
      <pubDate>Fri, 16 Feb 2024 00:00:00 +0000</pubDate>
      <link>https://dev.to/mrmatthogg/developers-your-job-is-not-to-write-code-22kd</link>
      <guid>https://dev.to/mrmatthogg/developers-your-job-is-not-to-write-code-22kd</guid>
      <description>&lt;p&gt;&lt;em&gt;This essay is adapted from a lightning talk that I originally presented to coworkers in 2019.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The advice I've consistently given to developers who want to grow might seem counter-intuitive, but it's always held true for me—the first step to being a great developer is recognizing that your job is not to write code.&lt;/p&gt;

&lt;p&gt;How do I know this to be true? Well, I've never spent a single minute of a one-on-one with a developer talking about code that they've written! That's just not what I spend energy on when I'm coaching someone. The path to becoming more senior is not through better coding.&lt;/p&gt;

&lt;p&gt;Furthermore, your stakeholders—brace yourself!—don't give a shit about your clean code, the developer experience, tooling, or any other tinkering you might do. They'll be equally satisfied with a massive, complex feature or a single line change in a configuration file. All that matters is that you've delivered value.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1607856517109387264-3" src="https://platform.twitter.com/embed/Tweet.html?id=1607856517109387264"&gt;
&lt;/iframe&gt;&lt;/p&gt;

&lt;p&gt;Similarly, when working on a team you could be the Greatest Developer In The World but if nobody likes working with you or understands what you're doing every day then you're essentially a human paperweight.&lt;/p&gt;

&lt;p&gt;So, there must be many more skills required of you than just typing into your favorite IDE. Thinking that "developers write code" is like thinking "carpenters hammer nails"—it's simplistic, naive, and a trap.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1516834079798349824-417" src="https://platform.twitter.com/embed/Tweet.html?id=1516834079798349824"&gt;
&lt;/iframe&gt;&lt;/p&gt;

&lt;h2&gt;A Trap Hiding In Plain Sight&lt;/h2&gt;

&lt;p&gt;There are obvious signs you might be stuck in this "developers write code" trap. Some of these might apply at one time or another and that's OK as long as you're self-aware. But the more of this you exhibit on a regular basis, the more work you'll have to put in to extricate yourself. These signs include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Showing contempt or disdain for meetings.&lt;/li&gt;
&lt;li&gt;Not checking, acknowledging, or responding to messages.&lt;/li&gt;
&lt;li&gt;Not participating in "non-essential" conversations or social activities.&lt;/li&gt;
&lt;li&gt;Not tolerating interruptions or context-switching.&lt;/li&gt;
&lt;li&gt;Apologizing for delays in planned coding tasks.&lt;/li&gt;
&lt;li&gt;Measuring your output solely by counting pull requests.&lt;/li&gt;
&lt;li&gt;Not asking questions or thinking critically about the work.&lt;/li&gt;
&lt;li&gt;Not contributing suggestions, ideas, or solutions.&lt;/li&gt;
&lt;li&gt;Displaying a lack of curiosity in your problem domain.&lt;/li&gt;
&lt;li&gt;Worrying about your work at the expense of your teammates' work.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These can be fairly common developer stereotypes. The perception from other people that "developers write code" is also part of the trap. Let's hope you don't work for a certain "genius" CEO who &lt;a href="https://eloncodereview.com/" rel="noopener noreferrer"&gt;asks you to print out your code&lt;/a&gt; for his own personal review!&lt;/p&gt;

&lt;p&gt;To truly succeed you'll have to spot these anti-patterns and correct these assumptions for &lt;em&gt;both yourself and others&lt;/em&gt;. You really have to own your narrative or someone else will do it for you.&lt;/p&gt;

&lt;h2&gt;Why It's A Trap&lt;/h2&gt;

&lt;p&gt;The "developers write code" trap is detrimental to your company, your team, and especially you. It will limit you and ironically make you a &lt;em&gt;worse&lt;/em&gt; developer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Short-term thinking (e.g., "What's my next coding task?") means you're running in place.&lt;/li&gt;
&lt;li&gt;You're more likely to be emotionally invested in your code which limits your learning opportunities or potential solutions the next time that code needs to be touched.&lt;/li&gt;
&lt;li&gt;You'll be typecast (or extinct) as a specialist developer—remember when we had Flash developers...?&lt;/li&gt;
&lt;li&gt;Burnout is more likely because you're a passenger and not an active participant in the work or its destiny.&lt;/li&gt;
&lt;li&gt;You won't be given the novel and interesting challenges within your team, technical or otherwise.&lt;/li&gt;
&lt;li&gt;There will be fewer career paths available to you because you want to keep your fingers on the keyboard.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;So, What Is Your Job Then?&lt;/h2&gt;

&lt;p&gt;If your job isn't to write code, then what is it? Fair question, and the answer is simple...&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your job is to solve problems.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yep, that's it.&lt;/p&gt;

&lt;p&gt;Obviously, we do solve &lt;em&gt;some problems&lt;/em&gt; by writing code. But we also solve problems by reading code, testing code, debugging code, or deleting code (my personal favorite).&lt;/p&gt;

&lt;p&gt;But wait, there's more! All of these things are your job as a developer and serve to solve a problem. Go ahead and read through this. I'll wait.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Organizing and attending social events&lt;/li&gt;
&lt;li&gt;Volunteering&lt;/li&gt;
&lt;li&gt;Suggesting or enacting changes in process&lt;/li&gt;
&lt;li&gt;Interviewing candidates and reviewing resumes&lt;/li&gt;
&lt;li&gt;Attending conferences&lt;/li&gt;
&lt;li&gt;Being someone's &lt;a href="https://en.wikipedia.org/wiki/Rubber_duck_debugging" rel="noopener noreferrer"&gt;rubber duck&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Showing people cool stuff you found&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Eating_your_own_dog_food" rel="noopener noreferrer"&gt;Eating your own dog food&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Writing documentation&lt;/li&gt;
&lt;li&gt;Mentoring and coaching others&lt;/li&gt;
&lt;li&gt;Attending meetings&lt;/li&gt;
&lt;li&gt;Debugging or tracing code&lt;/li&gt;
&lt;li&gt;Chatting in the kitchen/hallway/etc.&lt;/li&gt;
&lt;li&gt;Testing code&lt;/li&gt;
&lt;li&gt;Testing in production&lt;/li&gt;
&lt;li&gt;Prototyping&lt;/li&gt;
&lt;li&gt;Giving demos and presentations&lt;/li&gt;
&lt;li&gt;Googling shit&lt;/li&gt;
&lt;li&gt;Estimating effort&lt;/li&gt;
&lt;li&gt;Doing lightning talks&lt;/li&gt;
&lt;li&gt;Reading code (PRs)&lt;/li&gt;
&lt;li&gt;Attending training&lt;/li&gt;
&lt;li&gt;Deleting code&lt;/li&gt;
&lt;li&gt;Designing systems and solutions&lt;/li&gt;
&lt;li&gt;Finding or reporting bugs&lt;/li&gt;
&lt;li&gt;Troubleshooting&lt;/li&gt;
&lt;li&gt;Brainstorming&lt;/li&gt;
&lt;li&gt;Reading Stack Overflow&lt;/li&gt;
&lt;li&gt;Asking questions&lt;/li&gt;
&lt;li&gt;Researching&lt;/li&gt;
&lt;li&gt;Being a subject matter expert&lt;/li&gt;
&lt;li&gt;Keeping up with Slack/email&lt;/li&gt;
&lt;li&gt;Unblocking teammates&lt;/li&gt;
&lt;li&gt;Talking to stakeholders&lt;/li&gt;
&lt;li&gt;Caring about the users&lt;/li&gt;
&lt;li&gt;Writing code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most of these would typically be considered "soft skills," which is a term I despise. They're all equal skills. I don't position writing, communicating, empathizing, helping, or thinking in second place after the so-called "hard" skills.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/LBYxKUCgRwk"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;You need all of it to be &lt;a href="https://skamille.medium.com/an-incomplete-list-of-skills-senior-engineers-need-beyond-coding-8ed4a521b29f" rel="noopener noreferrer"&gt;effective at your job and climb the ranks&lt;/a&gt;. As &lt;a href="https://agileotter.blogspot.com/2014/09/programming-is-mostly-thinking.html" rel="noopener noreferrer"&gt;Tim Ottinger puts it&lt;/a&gt; programming is "mostly thinking" and your code is "just the residue of the work". &lt;a href="https://www.karlsutt.com/articles/communicating-effectively-as-a-developer/" rel="noopener noreferrer"&gt;Karl Sutt also has great communication advice&lt;/a&gt; that demonstrates how &lt;em&gt;empathy itself is a useful skill&lt;/em&gt; requiring genuine effort:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Writing effectively is a superpower, there is no denying it. As a software engineer, you write &lt;strong&gt;a lot&lt;/strong&gt;. Most of the writing you do is for computers. Businesses, however, consist of people.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Go Do Your Job&lt;/h2&gt;

&lt;p&gt;We all know developers in their default state would much rather be left alone to write code. I've been there, too. Coding is fun! It's comfortable! But I dare say it's too comfortable. Frankly, if you're not challenging yourself you're just doing something that &lt;em&gt;anybody else&lt;/em&gt; could be doing in your place.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1591067274492141570-613" src="https://platform.twitter.com/embed/Tweet.html?id=1591067274492141570"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-1591067274492141570-613');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=1591067274492141570&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;p&gt;I encourage you to broaden your understanding of what a "great developer" is and then meet that definition.&lt;/p&gt;

&lt;p&gt;Remember that your productivity and contributions are measured by more than just code output. Recognize that there's a long list of skills that are equally valuable and necessary and then pick the next one to get better at. Juniors and seniors alike can do this. There will always be another skill to polish.&lt;/p&gt;

&lt;p&gt;Once you realize that you're more than just your code you'll be well on your way to becoming a better developer.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>career</category>
      <category>motivation</category>
      <category>skills</category>
    </item>
    <item>
      <title>A Technical Interview Doesn't Have To Suck!</title>
      <dc:creator>Matt Hogg</dc:creator>
      <pubDate>Tue, 30 Aug 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/mrmatthogg/a-technical-interview-doesnt-have-to-suck-45ej</link>
      <guid>https://dev.to/mrmatthogg/a-technical-interview-doesnt-have-to-suck-45ej</guid>
      <description>&lt;p&gt;Countless technical interviews are conducted every day that are little more than theatrical self-sabotage for everyone involved. To say that this process is deeply, truly, fundamentally flawed is an understatement. We need to do better.&lt;/p&gt;

&lt;p&gt;What kinds of technical interviews am I talking about? The list of offenses is endless: whiteboard tests in pseudocode, hours-long (or days-long!) take-home tests, sorting algorithm questions, &lt;a href="https://en.wikipedia.org/wiki/Fizz_buzz" rel="noopener noreferrer"&gt;FizzBuzz&lt;/a&gt;, &lt;a href="https://leetcode.com/" rel="noopener noreferrer"&gt;LeetCode&lt;/a&gt;, asking a stranger to write working code in 20 minutes while you stare at them, asking "gotcha" coding questions with a single correct answer, puzzles, riddles, brain teasers... The mind boggles!&lt;/p&gt;

&lt;p&gt;Candidates dread interviews like these. Interviewers aren't exactly proud of themselves, either. At least, they &lt;em&gt;really&lt;/em&gt; shouldn't be. Technical interviewing has become so dysfunctional that &lt;a href="https://news.ncsu.edu/2020/07/tech-job-interviews-anxiety/" rel="noopener noreferrer"&gt;it's better at assessing a candidate's anxiety than their skills&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/rqEHkFYB9qg"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;So, folks, what the hell are we doing? And can we please get our shit together? Read on.&lt;/p&gt;

&lt;h2&gt;First, An Anecdote&lt;/h2&gt;

&lt;p&gt;A couple of years ago I was interviewing for an engineering manager role at a local Toronto startup. A part of this interview involved a coding exercise where I was asked to find the intersection of 2 arrays.&lt;/p&gt;

&lt;p&gt;While working out the solution using ES6 array methods like &lt;code&gt;filter()&lt;/code&gt; and &lt;code&gt;includes()&lt;/code&gt; the interviewers suggested I scrap that approach and do it &lt;em&gt;without&lt;/em&gt; ES6. OK... I did my best to oblige with some nested &lt;code&gt;for&lt;/code&gt; loops. I was then quizzed on the computational cost of both approaches.&lt;/p&gt;
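&lt;p&gt;For the curious, here's roughly how that exercise plays out in both styles. This is my own reconstruction of the kind of code involved, not the actual interview answer:&lt;/p&gt;

```javascript
// Intersection using ES6 array methods: filter() scans the first array,
// and includes() scans the second for each element, so the cost is O(n * m).
const intersectES6 = (a, b) => a.filter((x) => b.includes(x));

// The same result with plain nested loops, as the interviewers requested.
// The computational cost is identical, O(n * m); the ES6 version is
// simply shorter and more readable.
function intersectLoops(a, b) {
  const result = [];
  for (const x of a) {
    for (const y of b) {
      if (x === y) {
        result.push(x);
        break;
      }
    }
  }
  return result;
}

// A Set lookup drops the cost to O(n + m), which is usually the answer
// a "computational cost" question is fishing for.
const intersectSet = (a, b) => {
  const lookup = new Set(b);
  return a.filter((x) => lookup.has(x));
};
```

&lt;p&gt;All three produce the same output; the only thing the no-ES6 constraint changed was how long it took to type.&lt;/p&gt;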

&lt;p&gt;Once we finished I asked the interviewers if it was perhaps a style convention of the company to avoid ES6 in favor of hyper-optimized code. I was floored when they told me, "Oh, no, not at all! All of the developers use ES6 freely. We go with whatever works and is fastest for the developer."&lt;/p&gt;

&lt;p&gt;We wasted 45 minutes evaluating things that &lt;em&gt;didn't matter&lt;/em&gt;! Why was an engineering manager candidate with 20 years of experience writing basic JavaScript code? Why did the interviewers specify constraints that they themselves don't even follow?&lt;/p&gt;

&lt;p&gt;Maybe I shouldn't have been that surprised. Throughout my career I've been in many a bad interview:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I once withdrew from applying at a billion-dollar company when they wanted to schedule a total of &lt;em&gt;17 hours&lt;/em&gt; of interviews!&lt;/li&gt;
&lt;li&gt;I once did a take-home test that I was told would then be discussed with me in our interview—it was never mentioned again &lt;em&gt;and&lt;/em&gt; I had to do an additional whiteboard exercise!&lt;/li&gt;
&lt;li&gt;When I interviewed for my first job out of university I was given a written (!) test which I completed in a room alone. I did get the job, though!&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Red Flags, Red Flags Everywhere&lt;/h2&gt;

&lt;p&gt;We know instinctively that the typical technical interview process is a minefield of issues because we've all experienced it. Any one of these problems alone would be toxic enough, but when they're combined it's a disaster.&lt;/p&gt;

&lt;h3&gt;A Poor Predictor&lt;/h3&gt;

&lt;p&gt;First of all, many technical interview techniques simply aren't predictive of a candidate's actual job performance. I mean, how could they be?&lt;/p&gt;

&lt;p&gt;We can't adequately assess somebody in just an hour or two. To correct for this &lt;a href="https://www.bbc.com/worklife/article/20210727-the-rise-of-never-ending-job-interviews" rel="noopener noreferrer"&gt;interviews are starting to get longer&lt;/a&gt;. This is, of course, more expensive for the company to do and also sours potential applicants.&lt;/p&gt;

&lt;p&gt;Regardless, we're also getting the format wrong. Solving a problem on a whiteboard is indeed a skill, for instance, but &lt;a href="https://www.freecodecamp.org/news/why-is-hiring-broken-it-starts-at-the-whiteboard-34b088e5a5db" rel="noopener noreferrer"&gt;as Quincy Larson points out&lt;/a&gt; it has "almost nothing to do with modern software development."&lt;/p&gt;

&lt;h3&gt;Nervous Candidates&lt;/h3&gt;

&lt;p&gt;Second, candidates are primed to expect difficult interviews and are therefore nervous. This is bad because we’re not interviewing these candidates at their best. Candidates also feel compelled to furiously study or cram for the &lt;em&gt;interview itself&lt;/em&gt; and not the job. It's an industry of its very own.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1482097995512369152-647" src="https://platform.twitter.com/embed/Tweet.html?id=1482097995512369152"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-1482097995512369152-647');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=1482097995512369152&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;p&gt;This is because we make them jump through hoops, obsess about the Almighty Whiteboard, give them "fun" (read: &lt;a href="https://iaap-journals.onlinelibrary.wiley.com/doi/10.1111/apps.12163" rel="noopener noreferrer"&gt;narcissistic and sadistic&lt;/a&gt;) &lt;a href="https://twitter.com/emollick/status/1387550833395576835" rel="noopener noreferrer"&gt;brain teasers&lt;/a&gt;, or push all applicants (junior, senior, or rock star) through the same funnel &lt;a href="https://medium.com/@evnowandforever/f-you-i-quit-hiring-is-broken-bb8f3a48d324" rel="noopener noreferrer"&gt;whether it makes sense or not&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;Nothing Like The Actual Job&lt;/h3&gt;

&lt;p&gt;Third, technical interviews aren't representative of our day-to-day activities or workplace cultures. We wouldn't trust most code cranked out by a coworker in 20 minutes, for instance.&lt;/p&gt;

&lt;p&gt;Fortunately, in our day-to-day we have many guardrails in place to mitigate such a thing (e.g., code review, team leads to pair with, user stories with acceptance criteria, etc.). A candidate doesn't have access to those amenities, and the interview is more unfair and &lt;em&gt;adversarial&lt;/em&gt; as a result.&lt;/p&gt;

&lt;p&gt;Just imagine if a coworker actually implemented a sorting algorithm on company time! We'd rightly question their sanity. They should Google it and then use a library, and that's what we'd advise them to do. Why does the typical interview not do the same?&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1514423070345342979-882" src="https://platform.twitter.com/embed/Tweet.html?id=1514423070345342979"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-1514423070345342979-882');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=1514423070345342979&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;p&gt;For candidates, the very format of most interviews can distort and obscure the company's true work culture. We then lose a vital mechanism for assessing them—would we &lt;em&gt;appreciate&lt;/em&gt; working with this person?&lt;/p&gt;

&lt;h3&gt;Most Companies Are Not Big Tech&lt;/h3&gt;

&lt;p&gt;Fourth, many interview practices aren't a good fit for most companies. We adopt many a "best practice" from the &lt;a href="https://en.wikipedia.org/wiki/Big_Tech" rel="noopener noreferrer"&gt;big tech companies&lt;/a&gt;—FANG, FAANG, MAMAA, or whatever we're calling them today—whether it's a good idea or not. The same goes for technical interviews.&lt;/p&gt;

&lt;p&gt;Google interviews the way they do because they have to do it &lt;em&gt;at scale&lt;/em&gt;—that's hiring many tens of thousands of people a year. The vast majority of companies don't have this problem, and for them &lt;a href="https://twitter.com/GergelyOrosz/status/1460915713175179268" rel="noopener noreferrer"&gt;the approach should be different&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;There's an entire cottage industry writing about FAANG's hiring processes. If you're a startup or even a small company, don't give this much credence because likely the decision they made doesn't apply to you. And the decisions they didn't make is because it doesn't scale. This is their weakness and your advantage.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;Our Privilege Is Showing&lt;/h3&gt;

&lt;p&gt;Finally—and this is the most damning—technical interviews can be racist, ableist, or sexist. There's a &lt;a href="https://twitter.com/GergelyOrosz/status/1464233082010144769" rel="noopener noreferrer"&gt;bias in place for developers who are women, from an underrepresented group, or who have performance anxiety&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Candidates in these categories already have to overcompensate just to be seen as &lt;em&gt;equals&lt;/em&gt;. They can be great at what they do but also have to navigate the prejudices of the strangers interviewing them. This can be exhausting or damaging for the candidates and they'll freeze up. Meanwhile, any interviewer unaware of their own bias and privilege will conclude the candidate isn't a good hire.&lt;/p&gt;

&lt;h2&gt;Why We Hire Determines How We Hire&lt;/h2&gt;

&lt;p&gt;That is &lt;em&gt;a lot&lt;/em&gt; that we're enabling or perpetuating without fully realizing it! To be clear, anybody who's ever conducted an interview has been complicit in this—myself included.&lt;/p&gt;

&lt;p&gt;Much about technical interviewing &lt;em&gt;feels&lt;/em&gt; correct, to interviewers and candidates alike, even though it's superficial. Developers tend to prefer objectivity, clear outcomes, and subjects that are easily categorized into neat little buckets. So, when it comes to interviewing developers we just want to know if they can write the damn software or not, right?&lt;/p&gt;

&lt;p&gt;There are many people who cling to technical interviews because of a belief that &lt;a href="https://blog.codinghorror.com/why-cant-programmers-program/" rel="noopener noreferrer"&gt;many candidates can't write code&lt;/a&gt;. Like, at all. As in, it's "disturbing and appalling" and a "slap in the face to anyone who writes software for a living." Clearly these imposters must be stopped! &lt;a href="http://www.kegel.com/academy/getting-hired.html" rel="noopener noreferrer"&gt;Dan Kegel writes&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A surprisingly large fraction of applicants, even those with masters' degrees and PhDs in computer science, fail during interviews when asked to carry out basic programming tasks. For example, I've personally interviewed graduates who can't answer "Write a loop that counts from 1 to 10" or "What's the number after F in hexadecimal?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This hasn't been my experience. I'd respectfully disagree and posit that using the interview to test "basic" programming ability isn't a wise use of time. My daily rate is better spent on other activities, but that's just me.&lt;/p&gt;

&lt;p&gt;This problem could likely be solved with a phone screen by a recruiter who has a list of rudimentary questions. Instead, we've over-corrected for the problem of assessing competence and aggressively applied that solution to &lt;em&gt;every&lt;/em&gt; candidate.&lt;/p&gt;

&lt;p&gt;I'm not saying we should do away with coding tests entirely. I wouldn't be so bold as to suggest anything quite so radical. What I &lt;em&gt;would&lt;/em&gt; be so bold as to suggest, however, is that maybe we don't need to be so fucking perfect.&lt;/p&gt;

&lt;p&gt;For instance, if you're interviewing a junior how much should you truly expect them to know coming in? They're more likely to learn on the job. Your company should provide the space and support for them to grow, and consequently your hiring process should reflect that. If your company &lt;em&gt;doesn't&lt;/em&gt; support growth, I might need to write a different essay...&lt;/p&gt;

&lt;p&gt;And for seniors, we don't need to thoroughly test their programming ability at a basic level. Their many years' experience, glowing references, or a goddamn &lt;em&gt;conversation&lt;/em&gt; with them serves as a reasonable heuristic for that. Do we genuinely think senior applicants are liars and frauds? All of them?&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1498659651562336261-302" src="https://platform.twitter.com/embed/Tweet.html?id=1498659651562336261"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-1498659651562336261-302');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=1498659651562336261&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;p&gt;When hiring, the goal is to find valuable additions to your team. That's basically it. As &lt;a href="https://dev.to/cher/interviewing-as-a-software-engineer-sucks-73b"&gt;Cher Scarlett rightly points out&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The thing you're looking for, realistically, is the candidate who shows you what you're missing, and that can rarely be figured from a coding exercise, even if it's not done live.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The goal isn't to defend your team from fakers and imposters, or to find the perfect candidate. Doing so only leads to hiring the people who most closely match your expectations—and your implicit biases. This directly influences the interview format and results in a monoculture within your team.&lt;/p&gt;

&lt;p&gt;As hiring managers, we should recognize that we've had a good technical interview when we're &lt;em&gt;surprised&lt;/em&gt; by a candidate instead of merely satisfied. Did the candidate tick all of the boxes on your stupid checklist? Or, did you genuinely &lt;em&gt;enjoy&lt;/em&gt; meeting them?&lt;/p&gt;

&lt;h2&gt;Principles For Better Technical Interviews&lt;/h2&gt;

&lt;p&gt;To foster that kind of serendipity I much prefer to diversify my technical interviewing toolkit by adhering to these 3 high-level principles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Comfort&lt;/li&gt;
&lt;li&gt;No-code&lt;/li&gt;
&lt;li&gt;Realism&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Comfort&lt;/h3&gt;

&lt;p&gt;Do what you can at all times to make a candidate comfortable. I can't stress enough how important it is to set expectations and be clear with the candidate about what they're going to experience.&lt;/p&gt;

&lt;p&gt;It starts with the invitation to interview. I always indicate who's attending the interview, including their names and titles. I make sure to emphasize that it won't be a whiteboard or live coding exercise. For remote interviews, I strongly suggest the candidate try out our video conference software ahead of time including the camera, microphone, and screen-sharing functionality.&lt;/p&gt;

&lt;p&gt;All of this serves to put the candidate at ease as much as possible. A nervous candidate is not a valuable interview because they're not properly equipped to demonstrate their skills and abilities. I'm not just being kind to the candidate, I'm also &lt;em&gt;maximizing the accuracy of the interview&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;During the interview I also take steps to put the candidate at ease. This means reminding them that they're doing great, providing psychological safety, or making space for them to take quick mental breaks.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1546797773290446848-783" src="https://platform.twitter.com/embed/Tweet.html?id=1546797773290446848"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-1546797773290446848-783');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=1546797773290446848&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;p&gt;The best way to provide breaks is to split the interview into multiple segments, and in between them I ask if they have any questions for me. At the very least I'll set aside extra time at the end for questions—I'm often surprised how much the candidates' questions reveal about them!&lt;/p&gt;

&lt;h3&gt;No-code&lt;/h3&gt;

&lt;p&gt;This essay has really been building up to make this exact point, hasn't it? It's perhaps no surprise, but we've got to stop pinning all of our hopes on performative coding exercises.&lt;/p&gt;

&lt;p&gt;If I only evaluate a candidate's coding abilities then all I know about them is that they can code and little else.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/jTkMEPbl3Yg"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;I ultimately don’t care how well someone can simply write code, to be honest. What I do care about: if they're confident in their abilities and aware of their weaknesses, if they show an attention to detail, if they demonstrate problem-solving skills, if they care about the user experience, if they're willing to learn, and so on.&lt;/p&gt;

&lt;p&gt;Poorly executed technical interviews don’t assess for these qualities because they're digging in the wrong place. Instead I prefer to rely on realism.&lt;/p&gt;

&lt;h3&gt;Realism&lt;/h3&gt;

&lt;p&gt;I like a technical interview to be as close to reality as I can get. This is easier said than done. It means balancing the temptation to inspect a candidate at the finest granularity against the need to see how they'd actually perform in the real world.&lt;/p&gt;

&lt;p&gt;I don't want to invigilate a coding exam. I'd much rather look for my next great coworker. I just want to be real and stop with the posturing. We're not inverting binary trees or implementing algorithms from scratch.&lt;/p&gt;

&lt;p&gt;All of us look up shit on Google &lt;em&gt;hourly&lt;/em&gt;. We talk to our teammates when we're blocked, and help each other get unblocked. We brainstorm problems together and debate possible solutions. We troubleshoot. We apply critical thinking to prioritize work and estimate its effort. And sometimes we throw our hands up in the air and say, "I have no damn clue."&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1379195256630095872-857" src="https://platform.twitter.com/embed/Tweet.html?id=1379195256630095872"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-1379195256630095872-857');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=1379195256630095872&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;p&gt;So, I have conversations with candidates on these terms—and I assess their programming ability for free! A candidate will necessarily give me insight into their coding skills over the course of the discussion, and I avoid staring at an IDE and acting as an overpaid human linter.&lt;/p&gt;

&lt;p&gt;An additional benefit of realism is that candidates are also evaluating me and my company. Sometimes deliberately so. I need to present the company's culture and ways of working as a differentiator in a vast ocean of coding interviews.&lt;/p&gt;

&lt;h2&gt;An Interviewer's Toolkit&lt;/h2&gt;

&lt;p&gt;There are numerous ways to avoid pure coding interviews and their shortcomings. Recall I used the word "toolkit" earlier. I recommend having a variety of interview activities at the ready. To name a few:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Present the candidate with a bug to troubleshoot together. This can be staged for the candidate, or an actual bug that nobody on your team has fixed yet.&lt;/li&gt;
&lt;li&gt;Brainstorm the implementation of a new feature. This can be done with design mockups or with an existing feature of your website or application. The candidate plays the role of a dev on the team while the interviewers are role-playing as product managers and the like. Get the candidate to plot out the work needed, the trade-offs of certain choices, the complexities, and so on.&lt;/li&gt;
&lt;li&gt;Ask the candidate to open Dev Tools for your website or application and poke around. Let them go wherever their curiosity takes them. The trick to this activity—and what makes it challenging for all involved—is that it's open-ended by design. Take note of what the candidate is drawn to, how they use the tools, how they ask and even answer their own questions, and so on—sometimes they'll even discover a bug! Good candidates can really shine here when they come prepared with notes because they already took a peek before the interview.&lt;/li&gt;
&lt;li&gt;Give the candidate a &lt;a href="https://fulcrum.lever.co/a-better-way-to-interview-software-engineers-fa9b5d2b5316" rel="noopener noreferrer"&gt;mock pull request&lt;/a&gt; and ask them to conduct a code review. They can share their screen and go through it line-by-line and either speak or type their commentary. Interviewers may do some role-play as the “author” of the pull request.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For folks used to typical technical interviews, activities like these will be &lt;em&gt;so frustratingly close&lt;/em&gt; to coding only to stop short. This is intentional. Sorry, not sorry. Remember the guiding principles are comfort, no-code, and realism.&lt;/p&gt;

&lt;p&gt;It's also good advice to &lt;a href="https://laurieontech.com/posts/interviews/" rel="noopener noreferrer"&gt;mix and match or otherwise tailor the interview to the candidate or the role being filled&lt;/a&gt;. Having a pool of interview activities to draw from makes this easier.&lt;/p&gt;

&lt;p&gt;Lastly, at the end of any interview I make sure to ask, "What feedback do you have for us about our interview process?"&lt;/p&gt;

&lt;p&gt;This question allows me to iterate and improve on how I interview, but also gives me a glimpse into competitors' interview processes. I'm often surprised at how freely candidates will compare the current interview to others they've done, and what they point out will reveal a lot about themselves.&lt;/p&gt;

&lt;h2&gt;The Only Winning Move Is Not To Play&lt;/h2&gt;

&lt;p&gt;So, that's how I see things, but it's been hard-earned. We've dug ourselves a very deep hole, as an industry. It's an uphill battle for any individual to improve things just within their own organization.&lt;/p&gt;

&lt;p&gt;As hiring managers, we can only hope our companies give us the leeway to improve the process. For myself, I've arrived at these techniques over years. In some of my roles, I've done this in bits and pieces where there have been gaps. In some roles I've even subverted or danced around heavy-handed or prescriptive HR departments...&lt;/p&gt;

&lt;p&gt;A good company is one that asks how &lt;em&gt;you&lt;/em&gt; as the hiring manager want to handle it and supports your efforts.&lt;/p&gt;

&lt;p&gt;For jobseekers and candidates, it's much more challenging to move the needle. How do you point out the absurdity of it all without getting shown the door? There are constructive ways to decline a technical interview, and you might even come out of it looking smart!&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1461774310868062208-926" src="https://platform.twitter.com/embed/Tweet.html?id=1461774310868062208"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-1461774310868062208-926');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=1461774310868062208&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;p&gt;As a jobseeker, if a particular opportunity smells "off" for any reason be prepared to declare that &lt;a href="https://daedtech.com/hiring-is-broken/" rel="noopener noreferrer"&gt;hiring is broken&lt;/a&gt; and stand up for yourself. Tell that hiring manager or recruiter &lt;a href="https://no.whiteboard.codes/" rel="noopener noreferrer"&gt;why you don't want to participate in their technical interview&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The worst that'll happen is a hiring manager will choose not to move forward. But maybe a polite refusal will make them take notice and accommodate. Better still, if enough of the talent pool makes proper technical interviewing a factor in their job search, the industry will have no choice but to follow suit.&lt;/p&gt;

&lt;p&gt;So, demand better! Hiring managers should elevate their candidates. Jobseekers should draw boundaries that both protect their well-being and foster their potential. Perhaps then we'll all meet in the middle and find the process truly works for everyone involved.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>career</category>
      <category>interview</category>
      <category>management</category>
    </item>
    <item>
      <title>Hacks Are Fine</title>
      <dc:creator>Matt Hogg</dc:creator>
      <pubDate>Thu, 08 Jul 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/mrmatthogg/hacks-are-fine-5891</link>
      <guid>https://dev.to/mrmatthogg/hacks-are-fine-5891</guid>
      <description>&lt;p&gt;&lt;em&gt;This essay has been adapted from a lightning talk that I originally presented to coworkers in 2019. It was intended as an absurd, tongue-in-cheek thought experiment, but I'd like to think there could possibly be some truth to it...&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There's a slight problem with the standard definition of a hack. It says more about why you &lt;em&gt;wouldn't&lt;/em&gt; want to use one than why you &lt;em&gt;might&lt;/em&gt;. What if—now hear me out—hacks are fine?&lt;/p&gt;

&lt;p&gt;A "hack" (also known as a "kludge") is &lt;a href="https://en.wikipedia.org/wiki/Kludge#Computer_science" rel="noopener noreferrer"&gt;defined by Wikipedia&lt;/a&gt; as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A solution to a problem, the performance of a task, or a system fix which is inefficient, inelegant ("hacky"), or even incomprehensible, but which somehow works.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A definition like this will prime readers a certain way by using more negative words than positive. Also, that little bit of bewilderment at the very end really gets me—&lt;em&gt;somehow&lt;/em&gt; hacks work!?! Gosh! My goodness!&lt;/p&gt;

&lt;p&gt;So, let's consider several reasons why we maybe shouldn't be surprised by hacks as viable, successful solutions.&lt;/p&gt;

&lt;h2&gt;Excitement&lt;/h2&gt;

&lt;p&gt;Hacks can be exciting! Nobody wants a protagonist who has all the time and resources they need at their disposal. That's not good storytelling. Just ask MacGyver here.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/EKyCh_WuV6M"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Now, I'm only half-joking. You could find yourself working for a company, or on a particular project, where you don't have all of the resources or support you'll need to be effective.&lt;/p&gt;

&lt;p&gt;This will force you to get creative and think outside the box. And after a few such challenges in your career, you'll be a better developer because of it.&lt;/p&gt;

&lt;h2&gt;Pragmatism&lt;/h2&gt;

&lt;p&gt;When it comes to prototypes, A/B tests, and confirming hypotheses about your product, the best way to deliver effectively is actually by writing the fastest, shittiest code you can.&lt;/p&gt;

&lt;p&gt;And sometimes you just have to &lt;em&gt;ship it&lt;/em&gt;, even for production code. There's often a deadline, at which point nobody really cares that you're a code poet. A hack could be the path of least resistance.&lt;/p&gt;

&lt;p&gt;A certain amount of technical or product debt isn't, in and of itself, a bad thing. It's an &lt;em&gt;artifact&lt;/em&gt; of a decision that was made to get to an end result. It's evidence of a compromise that was made, or a trade-off that (I hope!) was carefully weighed.&lt;/p&gt;

&lt;p&gt;How you track that debt and pay it down is another question, of course...&lt;/p&gt;

&lt;h2&gt;Critical Thinking&lt;/h2&gt;

&lt;p&gt;Speaking of trade-offs, hacks are an opportunity for critical thinking. A responsible hack means you've considered things like cost versus benefit or risk versus reward. That's an important skill, and one you only learn over time.&lt;/p&gt;

&lt;p&gt;At the very least, in the short-term a hack might shake loose the ideal solution to a problem you've been struggling with all day. You'll think about it when you go home that evening and maybe even lose some sleep over it. But, in mulling over the hack you'd really prefer to avoid in the morning, you just might come back with a better solution!&lt;/p&gt;

&lt;h2&gt;YAGNI&lt;/h2&gt;

&lt;p&gt;You also need to consider the likelihood that You Aren't Gonna Need It (a.k.a. &lt;a href="https://martinfowler.com/bliki/Yagni.html" rel="noopener noreferrer"&gt;YAGNI&lt;/a&gt;). A simpler solution is almost always the better solution, if you can pull it off. Sometimes that might feel like a hack.&lt;/p&gt;

&lt;p&gt;I can't be sure, but I think Ron Jeffries might be &lt;a href="https://ronjeffries.com/xprog/articles/practices/pracnotneed/" rel="noopener noreferrer"&gt;against premature implementation&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Resist that impulse, every time. Always implement things when you actually need them, never when you just foresee that you need them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7t9sxrvrqhbzttjv2o74.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7t9sxrvrqhbzttjv2o74.png" alt="A 3-panel web comic by xkcd where the character asks somebody out of frame to pass the salt, only to wait 20 minutes while they develop a generalized system to pass arbitrary condiments because they think it'll save time in the long run." width="550" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Trying to predict the future of your code is ambitious. The odds are low that you'll end up being correct. So, it might be more prudent to hedge your bets with a conservative stop-gap measure that costs you less up front.&lt;/p&gt;

&lt;h2&gt;Impermanence&lt;/h2&gt;

&lt;p&gt;In a similar vein, very little of your hard work will live forever. Software can have a very short shelf life whether it's due to redesigns, refactors, business closures or acquisitions, startup pivots, and so on.&lt;/p&gt;

&lt;p&gt;I once did some quick math on a napkin and surveyed my previous 9 jobs going all the way back to the late 90s. It's probable that only about 20% of anything I'd ever worked on still existed. Obviously, output from my very first job is gone—so long, &lt;a href="https://en.wikipedia.org/wiki/Adobe_ColdFusion" rel="noopener noreferrer"&gt;ColdFusion&lt;/a&gt;!—but so is stuff I worked on just a few years before.&lt;/p&gt;

&lt;p&gt;The point being, if perhaps there's a hack in production that you're &lt;em&gt;really&lt;/em&gt; not proud of you might not have to worry for very long!&lt;/p&gt;

&lt;h2&gt;Durability&lt;/h2&gt;

&lt;p&gt;On the other hand, your hack could last a lifetime.&lt;/p&gt;

&lt;p&gt;If that hacky code is working and delivering value then &lt;a href="https://matthogg.fyi/legacy-code-may-be-the-friend-we-havent-met-yet/" rel="noopener noreferrer"&gt;you've created legacy code&lt;/a&gt;. Congratulations!&lt;/p&gt;

&lt;p&gt;But if that hack doesn't work perfectly, well, I still wouldn't worry much. We can all think of that bug, maintenance task, or tech debt issue that sits in a Jira backlog for months or even years, right? A codebase will always have issues that ultimately never get looked at, which raises the question: &lt;em&gt;if you never fix it, was it ever broken?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-501718937420455937-90" src="https://platform.twitter.com/embed/Tweet.html?id=501718937420455937"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-501718937420455937-90');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=501718937420455937&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;p&gt;For instance, I can tell you that one of Canada's most popular websites contains a self-identified hack that's been executed by hundreds of thousands of visitors daily for at least 7 years and counting.&lt;/p&gt;

&lt;p&gt;For fun, I recommend searching for terms like &lt;code&gt;hack&lt;/code&gt; or &lt;code&gt;FIXME&lt;/code&gt; hidden in the comments of your codebases and see what you find.&lt;/p&gt;
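&lt;p&gt;If you want to try this on your own codebase, a quick &lt;code&gt;grep&lt;/code&gt; is all it takes. This is just a sketch using standard GNU grep flags, and the marker words are ones I'd personally look for; adjust the list to your team's habits:&lt;/p&gt;

```shell
# Recursively search the current directory for self-identified hacks.
#   -r  recurse into subdirectories
#   -n  print line numbers
#   -i  ignore case
#   -E  extended regex, so we can list several markers at once
grep -rniE "hack|fixme|kludge|todo" . 2>/dev/null || true
```

&lt;p&gt;The &lt;code&gt;|| true&lt;/code&gt; keeps the command from failing in a squeaky-clean repo (I've yet to find one).&lt;/p&gt;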

&lt;h2&gt;It's Not Just Software&lt;/h2&gt;

&lt;p&gt;Speaking of durable hacks, we can broaden the original definition beyond just software. We're surrounded by &lt;em&gt;many&lt;/em&gt; hacks which we take for granted every day.&lt;/p&gt;

&lt;p&gt;Advances in science and engineering often progress by way of hacks. The Apollo moon landings are a great example—to get to the moon we literally &lt;em&gt;threw away an entire Saturn V rocket every time&lt;/em&gt;. Oh, and if you expected to come home you had to crash a tiny module into the ocean and be rescued. And yet, there are footprints on the moon!&lt;/p&gt;

&lt;p&gt;And let's not forget what a central processing unit (CPU) truly is:&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-841802094361235456-424" src="https://platform.twitter.com/embed/Tweet.html?id=841802094361235456"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-841802094361235456-424');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=841802094361235456&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;p&gt;The scientific theory of evolution by natural selection is the ultimate hack. Evolution does not prematurely optimize! Natural selection works with whatever's at hand and the smallest change that works well enough is the winner. You, reading this right now, are the result of an innumerable series of tiny hacks.&lt;/p&gt;

&lt;p&gt;That said, the human body is still a hot mess:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Our eyes are wired backwards.&lt;/li&gt;
&lt;li&gt;Our spines were never meant to be vertical.&lt;/li&gt;
&lt;li&gt;We put food and air down the same hole.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And then there's childbirth. The pelvis and birth canal are narrow because we grew to enjoy walking on 2 feet. However, our brains are also big and, like, really smart. So, babies come out when they do because their heads wouldn't fit if they waited any longer! Babies also have soft skulls to facilitate this, and can only come out head-first and facing backwards. Consequently, newborns still develop as if they're in the womb—the so-called 4th trimester.&lt;/p&gt;

&lt;h2&gt;OK, OK, But The Software!&lt;/h2&gt;

&lt;p&gt;Sorry! That was a bit of a tangent.&lt;/p&gt;

&lt;p&gt;If you employ a hack, don't be so ashamed. Don't be too proud, either. Above all, don't be lazy—be certain and deliberate about &lt;em&gt;why&lt;/em&gt; you're using a hack. Remember there are times you shouldn't use one (e.g., extending or tacking onto an existing hack).&lt;/p&gt;

&lt;p&gt;If you discover a hack, try to reserve judgement. Consider why it might be there... Is the developer going for a moon shot? Do they think they're MacGyver? Is YAGNI a consideration? Use the hack as an opportunity for dialogue and empathic code review.&lt;/p&gt;

&lt;p&gt;So, I guess I'm ultimately pleading for ambivalence and nuance—hacks can be evil, lazy, clever, useful, or anything in between. Hacks are just... fine. But since hacks are all around us, and constantly so, it's to our benefit to accept their utility and consider them an essential skill in any developer's bag of tricks. Maybe.&lt;/p&gt;

&lt;p&gt;&lt;small&gt;Images, in order of appearance, are courtesy of &lt;a href="https://xkcd.com/974/" rel="noopener noreferrer"&gt;xkcd&lt;/a&gt;.&lt;/small&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>career</category>
      <category>motivation</category>
      <category>watercooler</category>
    </item>
    <item>
      <title>Legacy Code May Be The Friend We Haven't Met Yet</title>
      <dc:creator>Matt Hogg</dc:creator>
      <pubDate>Sun, 06 Jun 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/mrmatthogg/legacy-code-may-be-the-friend-we-haven-t-met-yet-152e</link>
      <guid>https://dev.to/mrmatthogg/legacy-code-may-be-the-friend-we-haven-t-met-yet-152e</guid>
      <description>&lt;p&gt;In web development it’s a common trope that working with legacy code is considered boring, painful, or even beneath us. Only the shiniest new framework or tech stack will do! The truth, however, is that legacy code is impossible to avoid. But don’t worry, that’s actually a good thing.&lt;/p&gt;

&lt;p&gt;React has been all the rage for about 7 years, but it's in use on only &lt;a href="https://w3techs.com/technologies/details/js-react" rel="noopener noreferrer"&gt;2% of all websites&lt;/a&gt;. The story’s the same for Vue and others. For all their hype, the opportunity to work somewhere that’s using React or Vue in the wild is incredibly niche and privileged!&lt;/p&gt;

&lt;h2&gt;You Can Run, But You Can’t Hide&lt;/h2&gt;

&lt;p&gt;WordPress, on the other hand, serves up &lt;a href="https://w3techs.com/technologies/details/cm-wordpress" rel="noopener noreferrer"&gt;42% of the web&lt;/a&gt;. jQuery is an outcast nowadays, but is still found on a staggering &lt;a href="https://w3techs.com/technologies/details/js-jquery" rel="noopener noreferrer"&gt;78% of all websites&lt;/a&gt;. Or maybe you’ve heard of COBOL? About &lt;a href="https://blog.codinghorror.com/cobol-everywhere-and-nowhere/" rel="noopener noreferrer"&gt;80% of the code running the world&lt;/a&gt;—that’s payroll systems, ATMs, traffic lights, and so on—is written in COBOL. That’s over &lt;em&gt;220 billion lines&lt;/em&gt; of COBOL, and nobody ever talks about it.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1203430051615662080-665" src="https://platform.twitter.com/embed/Tweet.html?id=1203430051615662080"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-1203430051615662080-665');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=1203430051615662080&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;p&gt;Hell, if one wanted to make truly &lt;em&gt;serious&lt;/em&gt; money it’d be far better to be &lt;a href="https://www.forbes.com/sites/tomtaulli/2020/07/13/cobol-language-call-it-a-comeback/" rel="noopener noreferrer"&gt;a COBOL programmer yanked out of retirement&lt;/a&gt;. React seems downright quaint by comparison.&lt;/p&gt;

&lt;p&gt;Given numbers like these, it’s quite impossible to shape our careers to avoid legacy code.&lt;/p&gt;

&lt;p&gt;Any given company might be splitting their aging monolith into microservices, stitching a bunch of microservices back together again, migrating tech stacks or CMS platforms, acquiring smaller companies and their code, relying heavily on some random internal tool that a developer hacked up as a favor to a coworker, and so on.&lt;/p&gt;

&lt;p&gt;Even the youngest of tech startups will have a problem with legacy code—they just won’t realize it right away. The greenest of greenfield projects will quickly wither.&lt;/p&gt;

&lt;p&gt;So, legacy code is everywhere and runs for years (if not decades) making trillions of dollars a year and delivering value to users. The fact that it does so, and without fanfare, should give us pause. If legacy code is so successful, why do we have such contempt for it?&lt;/p&gt;

&lt;h2&gt;Too Scary, Too Profitable&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=afZGZOL6Fr8" rel="noopener noreferrer"&gt;Dylan Beattie has the perfect definition for legacy code&lt;/a&gt; to help us answer this: &lt;em&gt;“Legacy code is code that’s too scary to update and too profitable to delete.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/afZGZOL6Fr8"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;He also makes the observation that writing new code is more exciting than reading old code. And &lt;a href="https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/" rel="noopener noreferrer"&gt;Joel Spolsky expands on that idea&lt;/a&gt; when he claims, “It’s harder to read code than to write it.” He goes on:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Programmers are, in their hearts, architects, and the first thing they want to do when they get to a site is to bulldoze the place flat and build something grand. We’re not excited by incremental renovation: tinkering, improving, planting flower beds.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s quite the combination, isn’t it? We’re expected to poke at code that’s “too scary to update” and might fall over if we look at it the wrong way. But... We didn’t write this code... Can’t we work on this killer feature instead? Can we delete this code and start over? This isn’t the luxury and glamor of web development that we were promised!&lt;/p&gt;

&lt;p&gt;Let’s be frank—everyone in our line of work can write net new code. That’s the bare minimum expected of us, in this humble author’s opinion. It’s a far more valuable and marketable skill to read, interpret, and understand code. This is the challenge that legacy code puts to us.&lt;/p&gt;

&lt;p&gt;And, as is often the case when something is scary, this says more about ourselves than it does about the code. We need to face this head on, stop projecting our foibles onto inanimate codebases, and genuinely improve ourselves and the industry.&lt;/p&gt;

&lt;h2&gt;Challenge Accepted&lt;/h2&gt;

&lt;p&gt;Legacy code really is an opportunity and a proving ground for us to perfect many skills and talents. To list just a handful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Useful and efficient troubleshooting techniques.&lt;/li&gt;
&lt;li&gt;A heightened sense for patterns or antipatterns.&lt;/li&gt;
&lt;li&gt;New languages or tech stacks that they don’t teach in school.&lt;/li&gt;
&lt;li&gt;Strategies for pragmatic, effective unit testing.&lt;/li&gt;
&lt;li&gt;Battle-hardened architectural decisions and strategies.&lt;/li&gt;
&lt;li&gt;Smarter or more robust bug fixing skills.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s not to say any of the above comes easily. We just might have to break the code to understand it—deliberately or otherwise. And likely more than once. And sometimes on production.&lt;/p&gt;

&lt;p&gt;And yet, we can probably trust legacy code more than we’d guess! &lt;a href="https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/" rel="noopener noreferrer"&gt;Joel Spolsky again&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The idea that the new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It &lt;em&gt;is&lt;/em&gt; patently absurd, isn’t it? There’s a hubris that’s plainly on display and driving our contempt for legacy code. Even worse, it could be silently influencing the technical strategy for many organizations. Spolsky one last time:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;[W]hen you start from scratch there is absolutely no reason to believe that you are going to do a better job than you did the first time. First of all, you probably don’t even have the same programming team that worked on version one, so you don’t actually have “more experience”. You’re just going to make most of the old mistakes again, and introduce some new problems that weren’t in the original version.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Which brings us to one of the most important skills that legacy code can teach us: &lt;em&gt;empathy&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;We must remember that almost every codebase we might encounter was without a doubt built with the best of intentions. Decisions were made at a given point in time according to the constraints, assumptions, and best information available. But it never feels like that many years later, does it? While very easy to overlook, understanding these circumstances can help us better interpret the code we're looking at.&lt;/p&gt;

&lt;p&gt;And there's a corollary here that might be worrisome to any of us who've tried to tailor our careers to avoid any encounters with legacy code.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1361314511043629059-774" src="https://platform.twitter.com/embed/Tweet.html?id=1361314511043629059"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-1361314511043629059-774');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=1361314511043629059&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;p&gt;There are rockstar developers creating legacy code &lt;em&gt;right this very moment&lt;/em&gt;. It doesn't matter if that code is lovingly crafted using the latest tech stack and best practices—in a few years a group of total strangers will beg their manager to print out the entire codebase and set it on fire.&lt;/p&gt;

&lt;p&gt;Right now, somebody somewhere is cursing a line of code that one of us has written. So, whether we're inheriting it or &lt;em&gt;producing&lt;/em&gt; it, we can't escape legacy code.&lt;/p&gt;

&lt;h2&gt;Innovate Or... Why?&lt;/h2&gt;

&lt;p&gt;Much is done in the name of innovation. We trip over ourselves chasing the shiny and new on a never-ending conveyor belt of frameworks and tooling. And then we snipe at each other on Twitter to justify the energy we've spent staying up to date, or we feel guilty for failing to keep up.&lt;/p&gt;

&lt;p&gt;Companies undertake massive overhauls of their codebases, also for innovation's sake. They may fall short of their ambitions or ultimately succeed—only to do it all again years later.&lt;/p&gt;

&lt;p&gt;So do we embrace legacy code with open arms and avoid modernizing? Or vice versa? No, of course not. But at this point it should be obvious that when it comes to code—old &lt;em&gt;or&lt;/em&gt; new—there are many influences at play and most of them aren't technical.&lt;/p&gt;

&lt;p&gt;Let's recognize that legacy code and unbridled innovation are at opposite ends of a spectrum, and that we can learn, improve, and extract value from the gap in between. Legacy code will always be with us and a healthy attitude about it favors developers, companies, and the entire field of web development.&lt;/p&gt;

</description>
      <category>career</category>
      <category>webdev</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
