<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nick Humrich</title>
    <description>The latest articles on DEV Community by Nick Humrich (@nhumrich).</description>
    <link>https://dev.to/nhumrich</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F937308%2Fa504f549-c1df-4b14-862c-61426722d799.jpg</url>
      <title>DEV Community: Nick Humrich</title>
      <link>https://dev.to/nhumrich</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nhumrich"/>
    <language>en</language>
    <item>
      <title>AI will replace everyone but you</title>
      <dc:creator>Nick Humrich</dc:creator>
      <pubDate>Tue, 07 Oct 2025 15:30:50 +0000</pubDate>
      <link>https://dev.to/nhumrich/ai-will-replace-everyone-but-you-34h9</link>
      <guid>https://dev.to/nhumrich/ai-will-replace-everyone-but-you-34h9</guid>
      <description>&lt;p&gt;Two years ago, artificial intelligence jumped ahead in usefulness, thanks to the release of generalized LLMs via ChatGPT. Since then, its powers have grown to allow LLMs to do all sorts of things, from building applications and generating images to even making entire movies. &lt;/p&gt;

&lt;p&gt;As is common with any new technological release, there is a crowd of people sowing fear, uncertainty, and doubt. Don't get me wrong, there are times in history when the warning bells probably should be sounded, but is this one of them? As I have looked more into the claims people are making about AI replacing jobs, I have started to notice a trend. &lt;/p&gt;

&lt;p&gt;See if you can notice that same trend as you read some of these summarized testimonials from people I know. (reworded for comedic effect)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Product managers&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Dude, dev and UX is over. I can vibe-code an entire UX and app without a dev or UX team. But PMs? They are here to stay. AI is so bad at Product Management. That's the real skill. Maybe devs who learn PM skills will survive. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Frontend engineers&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Dude, UX is over. I can create working mocks using AI, then easily turn those into a working frontend without needing anyone to design flows for me. But frontend dev is here to stay; the AI-generated frontend code is so sloppy. I will be fixing slop created by these UX designers for years. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;UX Designers&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Dude, dev is over, especially frontend. I can create mocks in Figma, then turn that into working software without a dev. But UX designers? They are here to stay; AI is so bad at knowing good UX design. It just takes random concepts and slings them together, nothing is cohesive. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Backend engineers&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Dude -- PM, UX, and front-end dev are over. I can use AI to come up with product ideas, then have it help me design and implement a front-end. But backend code? It's here to stay. AI is so bad at actually implementing good, scalable, secure practices. All this AI slop people are generating is going to create a lot of problems. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Doctors&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Dude, a lot of people might be out of a job, including lawyers. But doctors? Here to stay. This AI gets so many things wrong and makes mistakes at an intern level. AI is really hurting medicine right now. Now let me ask AI about this medical law real quick...&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Lawyers&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Dude, AI is going to put everyone out of a job. It can code. It can diagnose medical issues. It can even invent new medications. There really isn't anything AI can't do. Well, except for law. LOL, it's so bad at knowing precedent and writing legal briefs. Lawyers might be the only job left in this post-AI boom.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Plumbers&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;lol, we aren't going anywhere.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Wait, what!?&lt;/h2&gt;

&lt;p&gt;What's going on here? Every single person seems to feel comfortable in their own job but, in the same breath, thinks everyone else's job is at stake. It's as if every single knowledge worker is saying, "AI will replace everyone. Well, except for me, obviously."&lt;/p&gt;

&lt;p&gt;Luckily, people have already done all the psychological analysis needed to help explain why this is happening. This effect is created by a mixture of two things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AI is average at everything. It turns out you are below average at most things in your life, so a tool that suddenly lets you perform at least at an average level feels incredible. You are usually completely okay operating at an "average" level for things you don't consider a core competency. &lt;/li&gt;
&lt;li&gt;The Gell-Mann Amnesia Effect.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Gell-Mann Amnesia Effect&lt;/h2&gt;

&lt;p&gt;The Gell-Mann Amnesia effect is the tendency to discount the newspaper stories about subjects you know well, yet somehow trust every other story. It was first explained like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.&lt;/p&gt;

&lt;p&gt;In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;— Michael Crichton, "Why Speculate?" (2002)&lt;/p&gt;

&lt;p&gt;Or, more briefly, there is Erwin Knoll's law of media accuracy:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Everything you read in the newspapers is absolutely true except for the rare story of which you happen to have firsthand knowledge."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Both of these quotes point at the same thing: we can accurately notice that AI is only average when we have first-hand knowledge of the thing AI is doing. We can see the holes, the problems. In an area where average is not sufficient for us, AI breaks down. But when we ourselves are below average, we get "amnesia" and somehow assume AI must be excellent at that thing. &lt;br&gt;
Now, both of these quotes are about "media", but I think the effect applies here too. If, for whatever reason, you are still hung up on that, here is another one for you: &lt;/p&gt;

&lt;h3&gt;Sokol's paradox&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;It is more difficult to know what one doesn't know than what one does.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What we are getting at here is that we tend to forget that we ourselves lack knowledge and wisdom in areas where we have not put forth much effort. AI is significantly better than we are at these things. The combination of being really fast and more accurate than ourselves certainly feels magical. But we then incorrectly extrapolate from that. Your ability to do a task jumped from a 2/10 to a 5/10 immediately: "AI did that in only 2 years! Extrapolating, sure, it will get to 10/10 rapidly."&lt;/p&gt;

&lt;p&gt;But, when we ourselves are at an 8/10, and we see AI jump to 5/10, we say, "oh, that's nice... but 5-&amp;gt;6 is very hard. 6-&amp;gt;7, even harder. And 7-&amp;gt;8? Well, that's a large jump."&lt;/p&gt;

&lt;p&gt;The correct way to extrapolate is to say, "Hmm, AI isn't great at this one thing I am great at. Perhaps it's not actually great at anything. Perhaps I am just terrible at the things I think AI is great at."&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How Organizational Complexity Scales</title>
      <dc:creator>Nick Humrich</dc:creator>
      <pubDate>Thu, 26 Sep 2024 05:38:47 +0000</pubDate>
      <link>https://dev.to/nhumrich/how-organizational-complexity-scales-4j7f</link>
      <guid>https://dev.to/nhumrich/how-organizational-complexity-scales-4j7f</guid>
      <description>&lt;p&gt;This is a follow-up to a previous post: &lt;a href="https://dev.to/nhumrich/microservices-the-worst-technical-decision-you-will-ever-make-jff"&gt;https://dev.to/nhumrich/microservices-the-worst-technical-decision-you-will-ever-make-jff&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the previous post, I mentioned that one con of a monolithic architecture is higher organizational cost. A buddy of mine asked me to explain "organizational cost" and give some examples. Well, I am finally writing this post over a year later. Sorry, Devin.&lt;/p&gt;

&lt;p&gt;To talk about organizational complexity (and more specifically, how it changes with microservice versus monolith approaches), I first want to bring up a story that may be all too familiar. &lt;br&gt;
The other day, I was organizing my home office and had a lot of extra cables that had piled up from various devices. There were several cables and related peripherals that I wanted to save. Naturally, I threw them all into a single bin. The very next day, I needed a cable while at my company's office. The IT department had all the cables neatly sorted, so I could find the cable I needed quickly. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8lxqgyv2akddfot1ig6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8lxqgyv2akddfot1ig6.jpg" alt="Neatly sorted cabled into multiple bins" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It would take much longer to find a specific cable in my one bin full of cables. It would also, however, take longer to put them away. Especially now that they are all in a single bin, it would take more effort to organize them than to leave them alone. This is, literally, organizational complexity. You will also notice that both situations are probably correct for their use case. I don't need a fully organized set of shelves for my home office, because only I use them, and I grab cables rarely. The IT office at my work probably has people asking for cables every day. If they had a single giant bin full of cables, it would take them all day to find all the cables they needed. &lt;/p&gt;

&lt;p&gt;It is somewhat hard to define or quantify this organizational complexity, but comparing the two solutions, we can see there is a cost associated with the task that would not be there if we had "organized" things differently. &lt;/p&gt;

&lt;h1&gt;Time to garden&lt;/h1&gt;

&lt;p&gt;Now, let's imagine that we have a community garden, about 5 acres of land. Different people help each other plant all varieties of plants and vegetables. At first, everyone helps share the load. Most of the land is unused, but the garden keeps growing. The garden is quite a success, and more and more people start planting in it. Weeds continue to grow naturally. As weeds affect all plants, most people pitch in and weed, but there are always a couple of weeds that don't quite feel worth getting rid of. As the garden keeps growing, some weeds start to affect only certain spots of land. There are now so many weeds, and so much used land, that gardeners start to realize their weeding efforts might be futile. There is no way a single gardener can possibly weed everything themselves. They start to focus on only their plants. If the weeds are in their patch of land, they weed it; otherwise, it gets left untouched. &lt;/p&gt;

&lt;p&gt;Something else funny starts to happen. The plants start to cross-pollinate, and weird hybrid plants start to pop up. People are not sure whose plants they are, since they could belong to multiple people. &lt;br&gt;
As people continue to plant more, it becomes harder to know who planted what where. It becomes obvious that unless someone takes over and controls the rate at which weeds and hybrid plants grow, it will become a mess. The city comes in and starts prioritizing which weeds and hybrid plants to deal with. Since people are generally confused about whose plants are whose, and whose land is whose, it takes more and more city officials to keep weeds and hybrids under control. They spend a majority of their time telling gardeners which weeds to pull. &lt;/p&gt;

&lt;p&gt;Eventually, the city officials realize that this is taking too much time, and they aren't getting any other initiatives done. They spray-paint lines on the ground and assign each gardener a plot of land. Each gardener is responsible for their own plot. The city no longer cares as much about individual weeds. Gardeners are free to let their own plot fill up with weeds as much as they want. This causes conflict because some gardeners do a superb job keeping their plot clean and free of weeds, but the weeds from the plot next to them keep creeping over. City officials decide to ask another city how they deal with this problem. The other city has installed fences and roads between the assigned plots of land. This prevents weeds from creeping over, and it gives very clear ownership. The downside is that it makes it much harder for gardeners to work together and share water, as they have to install awkward passages between their fences. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6tuezwzqy7m156d5wouu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6tuezwzqy7m156d5wouu.jpg" alt="Image description" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;But enough of the metaphors...&lt;/h1&gt;

&lt;p&gt;As you likely realized, this story is not really about gardens. It is a representation of how software teams function. When a growing dev team continues to work in a single code base, the lines between bugs and features get blurred. With multiple initiatives happening in the company, it's no longer feasible for devs to work in all areas. They start to specialize around certain products/features. Teams working on different features start to step on each other's toes. Certain bugs and technical debt come up, and it becomes less clear which teams should work on what. Leadership starts drawing lines of ownership. Ownership reduces communication overhead, because instead of a leader telling people what they should work on, it becomes clear who should work on something based on who owns it. However, these "lines" are just suggestions. It becomes far too easy to reach across the line and grab something that you need. This overcomplicates code and continues to blur the lines. &lt;/p&gt;

&lt;p&gt;Microservices are a very heavy-handed approach that essentially forces boundaries and prevents blurred ownership lines. Crossing boundaries is difficult, so people do it less often. More importantly, ownership can almost never be misunderstood. So the amount of time spent cross-communicating is far less. &lt;/p&gt;

&lt;h1&gt;Now some actual examples&lt;/h1&gt;

&lt;p&gt;Here are some real examples I have seen and how they can play out in each scenario.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You need to upgrade your tool/language/framework/thing. Let's say, for now, it's a version of Node. In a monolith, if we want to update Node, we just need to do it once! This sounds easy and simple, especially compared to the microservices approach, where we would have to do it for each microservice, dramatically increasing the amount of work. However, the problem with the monolith approach is that it touches everything. Upgrading is a very dangerous thing, as every feature that exists could potentially be affected. To be safe, it's probably better to make sure that every single team has a chance to test their things for breakages. There also isn't a way to test this type of change behind a feature toggle. This means that there are now two branches being kept up to date. As the rest of the engineering team keeps working on features, the person working on the upgrade has to keep merging things in over and over, potentially continuing to resolve merge conflicts. Another option is to "stop the world" and make the entire department work on the upgrade, so there are no conflicts. This is a hard problem to organize. It's organizational complexity. In the microservices world, even though you have to "do more work", coordinating that work is very easy. Each team upgrades their own service when they get around to it. One team might do it this week; another team might do it 3 months from now, after their big deadline. They don't have to coordinate or communicate with any other team, so the whole project becomes pretty easy. The communication overhead is near zero. &lt;/li&gt;
&lt;li&gt;You have a bug in production that is affecting a lot of users. You don't know exactly what is causing the issue; you just know it's at a specific endpoint. You ask the team that owns that endpoint to look at it, but the bug is in a specific library they haven't touched. Before you know it, half the department is in a war room discussing who might know what changed, rather than actually fixing the bug. Finally, they find the root cause, and the person who made the change fixes the issue. In the meantime, you have paid half the department to fix this issue. In a microservices world, you notify the owners of the endpoint. They realize there is a 500 coming from another service. That other service's team has already been notified because of the 500. They already have a fix in place. More than half the department wasn't even aware anything happened. &lt;/li&gt;
&lt;li&gt;You have a feature you want to build. The team you think should own it is working on other, higher-priority projects. You assign it to a different team because "it doesn't really matter too much" if a different team builds it. They spend a lot of time ramping up, and they build it in a way the owning team wouldn't approve of, which is a problem because long term they will be the owners. In a microservices world, the team you want to work on it is too busy, so it goes on the backlog and doesn't get worked on immediately. Either that, or the ramp-up time and boundaries will be obvious, and you make the decision whether it's worth having another team work on it, knowing there are barriers to entry and that you need to keep the owning team involved. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, all of these are extreme examples. You certainly can organize teams to function better. At the end of the day, a perfectly organized monolith and perfectly designed microservices look exactly the same. The difference is that one is about building a culture of expectations, and the other is about enforcement. From a culture point of view, that makes microservices sound a bit heavy-handed. However, it's more of an acceptance of reality. Pressures in the business will force bad decisions. If you don't believe me, ask yourself if you have ever had technical debt. &lt;br&gt;
Microservices are an intentional approach to prevent the business from ever putting you in a spot where sacrificing good organization is even considered. They force certain boundaries, despite business pressure. &lt;br&gt;
In all situations, those boundaries prevent you from taking on too much technical debt. The difference is that, in some cases, you might need the debt, and in other cases, the debt wasn't truly needed. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“I have been asked what I mean by “word of honor.” I will tell you. Place me behind prison walls—walls of stone ever so high, ever so thick, reaching ever so far into the ground—there is a possibility that in some way or another I might be able to escape; but stand me on the floor and draw a chalk line around me and have me give my word of honor never to cross it. Can I get out of that circle? No, never! I’d die first.”&lt;br&gt;
― Karl G. Maeser &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What Karl got wrong is that he didn't consider the fact that someone else might push him. "Whoops, sorry mate," and now Karl has left the circle. Intentions do not accurately predict future results.&lt;/p&gt;

&lt;p&gt;At the end of the day, the only difference between microservice proponents and the rest is that microservice proponents don't trust anyone to stay inside the lines. This has nothing to do with trust in the individuals themselves; rather, it's because they recognize that mistakes happen. Business pressures will cause people to violate sound principles, even if subconsciously. Some mistakes are so costly that you prevent humans from being able to make them, even if you trust the individuals completely. If you transfer $1 million to a bank, you hire armed guards. Why don't you just ask your most trusted friend? Or do it yourself? Because the issue isn't your trust in the person's character; it's the trust that nothing outside their control will happen. &lt;/p&gt;

&lt;p&gt;Microservice architectures reduce organizational complexity by enforcing boundaries with electric fences. If you feel your organization is disciplined enough to respect boundaries at all costs without the need for microservices, then you should probably avoid microservices. But for the rest of us, microservices are the inevitable architecture. &lt;/p&gt;

&lt;p&gt;Do you have the discipline to have many cooks in a large kitchen? Or would you prefer to have many kitchens? As for me, I think there is such a thing as too many cooks. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>REST is dead</title>
      <dc:creator>Nick Humrich</dc:creator>
      <pubDate>Sun, 04 Aug 2024 04:28:08 +0000</pubDate>
      <link>https://dev.to/nhumrich/rest-is-dead-1p4f</link>
      <guid>https://dev.to/nhumrich/rest-is-dead-1p4f</guid>
      <description>&lt;p&gt;Nothing feels so ubiquitous as REST. With how widely adopted REST is, its surprising how little of a standard there actually is. Everyone seems to have their own understanding of what REST is, and they pick and choose principles from it, ignoring the rest. This frustration of REST has led a lot to wonder what out there is better. A lot of people have started to look into GraphQL as a potential replacement. &lt;/p&gt;

&lt;p&gt;Hopefully, in this article, I can help you know when to choose REST, when to choose GraphQL, and when to choose neither. &lt;/p&gt;

&lt;h1&gt;Why did REST win?&lt;/h1&gt;

&lt;p&gt;Sometimes in order to understand the nature of a thing, you have to understand the mindset of the person who made the thing. Or, in other words: history matters. REST became king in a world where two things were happening:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The age of APIs. Lots of services started to offer APIs: weather data, geological data, etc. It became a standard way for programmers to share complex information.&lt;/li&gt;
&lt;li&gt;SOAP was king at the time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The main premise of REST is in the name: State Transfer. REST was designed for applications where the API (or backend) was primarily a facade over a database. The API was a cacheable way for lots of clients to access said database. The beauty of REST is that each individual object has its own URL (e.g. books/1234), so it's very easy to cache objects to take load off your server. This is why REST encourages IDs in the URL.&lt;br&gt;
If you have a public API, REST is still probably a great pattern for you. By public, I mean the data is the same no matter the user. For example: weather data, cat pictures, or even &lt;a href="https://swapi.dev/" rel="noopener noreferrer"&gt;Star Wars information&lt;/a&gt;.&lt;/p&gt;
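&lt;p&gt;To make the caching point concrete, here is a minimal Python sketch (the endpoint and response shape are hypothetical, not from any real API) of why one-URL-per-object is so cache-friendly: the URL alone works as the cache key.&lt;/p&gt;

```python
# Hypothetical sketch: per-object URLs make the URL itself the cache key,
# which is what browsers and CDNs exploit with REST resources.
cache = {}

def fetch_book(book_id):
    url = f"https://api.example.com/books/{book_id}"  # one object, one URL
    if url in cache:
        # A CDN or browser performs this same lookup before hitting the server.
        return cache[url]
    response = {"id": book_id, "title": "A Book"}  # stand-in for the HTTP call
    cache[url] = response  # in real HTTP, Cache-Control headers drive this
    return response

first = fetch_book(1234)   # misses the cache, "hits the server"
second = fetch_book(1234)  # served entirely from the cache
```

&lt;p&gt;In real deployments the cache lives in the CDN or browser rather than in application code; the point is only that REST's ID-in-the-URL convention gives every object a stable, cacheable address.&lt;/p&gt;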

&lt;p&gt;REST also won because it wasn't SOAP. SOAP is an older RPC standard. The main problem with SOAP is that it tied itself to XML. XML was complex and, as such, became a huge security vector. As security concerns around XML rose, JavaScript also gained popularity. These two things led people to prefer JSON-based APIs over XML ones. REST wrongly became synonymous with "JSON-based API", or more accurately, "not XML". &lt;/p&gt;
&lt;h1&gt;What about GraphQL?&lt;/h1&gt;

&lt;p&gt;With the realization that REST doesn't actually mean anything, Facebook decided to go back to the drawing board and invent their own standard from the ground up. The result? GraphQL. The main advantage that draws people to GraphQL is that it is actually a standard. There is only one way to do GraphQL. You cannot "reinvent" GraphQL, as all the existing clients would break. There are a bunch of other "reasons" people claim to like GraphQL, but those &lt;em&gt;actually&lt;/em&gt; are all part of REST. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr88anztxob0dgu2dkdw5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr88anztxob0dgu2dkdw5.png" alt="Image description" width="800" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"GraphQL allows you to take 10 requests, and make it a single request. Less data fetching": &lt;em&gt;Actually&lt;/em&gt; there are some REST standards that allow you to include sub-data or related data to objects such as &lt;a href="https://stateless.co/hal_specification.html" rel="noopener noreferrer"&gt;HAL&lt;/a&gt;, but most implementations would allow you to set an "expanded" field in the query params. EST certainly allows you to do this if you wanted to implement it, way easier than migrating to GraphQL&lt;/li&gt;
&lt;li&gt;"GraphQL allows you to make a single request smaller by only sending the fields I am interested in": &lt;em&gt;Actually,&lt;/em&gt; many implementations of REST allow you to pass which fields you want back in the query param in order to shorten the request size. Also, this isn't nearly as big of a deal in an http/2+ world. &lt;/li&gt;
&lt;li&gt;"GraphQL allows you to specific how many objects you want back" (for example &lt;code&gt;first: 2&lt;/code&gt;): &lt;em&gt;Actually&lt;/em&gt; almost all mature implementations of REST allow you to set page sizes for pagination and give custom sort parameters. &lt;/li&gt;
&lt;li&gt;"GraphQL allows devs to discover api and object contracts": &lt;em&gt;Actually&lt;/em&gt; REST is supposed to have this with both HATEOAS and the "OPTIONS" method. However, I will give props to GraphQL on this one because most people don't implement this in REST. However, with the recent popularity of OpenAPI (swagger), it feels like this one is becoming less of a concern.&lt;/li&gt;
&lt;li&gt;"GraphQL has better tooling": Wait... hold up... installing a client and having to "discover" an API is better than clicking on a link in a browser? Sorry, its hard to beat the ease of REST.&lt;/li&gt;
&lt;/ul&gt;
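&lt;p&gt;As a concrete illustration of the first three points, here is a small Python sketch of a REST URL builder. The parameter names (&lt;code&gt;fields&lt;/code&gt;, &lt;code&gt;expand&lt;/code&gt;, &lt;code&gt;page_size&lt;/code&gt;) are illustrative conventions, not part of any REST standard.&lt;/p&gt;

```python
from urllib.parse import urlencode

# Illustrative sketch: query params giving a REST endpoint the field
# selection, expansion, and result limiting usually credited to GraphQL.
def build_url(base, fields=None, expand=None, page_size=None):
    params = {}
    if fields:
        params["fields"] = ",".join(fields)    # only send back these fields
    if expand:
        params["expand"] = ",".join(expand)    # inline related sub-objects
    if page_size:
        params["page_size"] = page_size        # cap how many objects return
    return f"{base}?{urlencode(params)}" if params else base

url = build_url("https://api.example.com/dogs",
                fields=["id", "name"], expand=["owner"], page_size=2)
```

&lt;p&gt;The server still has to honor these parameters, of course, but that is ordinary endpoint code, not a new protocol.&lt;/p&gt;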

&lt;p&gt;I looked up a bunch of other benefits that people claim for GraphQL and, honestly, I can't find a single pro of GraphQL that can't also be true of REST. &lt;br&gt;
That is, except for one thing:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GraphQL is actually a standard&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is one thing GraphQL has going for it. &lt;/p&gt;

&lt;p&gt;That being said, GraphQL does feel like a good fit for basic applications where you want the API to act like an ORM so you can keep 90% of your business logic in the frontend. &lt;/p&gt;
&lt;h1&gt;Cons of GraphQL&lt;/h1&gt;

&lt;p&gt;Everything comes with tradeoffs, and that includes GraphQL. The catch with GraphQL, however, is that it comes with many cons but only one pro: standards. &lt;/p&gt;

&lt;p&gt;So, here are some things about GraphQL you have to consider before adopting it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A single endpoint: GraphQL makes all requests through a single endpoint. This makes it hard to track metrics/logs on a "per request" level. Datadog, Grafana, etc. all expect different "endpoints" to have different URLs. Pretty much no tool will work out of the box for tracking usage/metrics. There are many other cons to sharing a single endpoint, but the main point is: {tool} does not support it out of the box, so suddenly you find yourself reimplementing things that would otherwise be 5 minutes of effort. &lt;/li&gt;
&lt;li&gt;Performance: Similar to above, there is no easy way to know the performance of a single "endpoint" because a single request is a custom "query" by the client.&lt;/li&gt;
&lt;li&gt;Throttling: If you expose your API to third parties, some might abuse it. With a REST API, you can add throttling, either globally or per endpoint. With GraphQL, a single request could be a massive hit to performance, whereas another client could make hundreds of simple, fast requests. So there is no easy way to throttle clients.&lt;/li&gt;
&lt;li&gt;Only a read standard: More on this below, but GraphQL has no say/standards around how writes should work, only reads.&lt;/li&gt;
&lt;li&gt;Security: Because every field can be requested, you have to check permissions on every single field instead of just at the endpoint level. &lt;/li&gt;
&lt;li&gt;The N+1 problem: GraphQL is essentially an ORM for the frontend. It allows clients to ask for queries that otherwise might not be very performant, such as a nested query that triggers one database fetch per parent object. &lt;/li&gt;
&lt;/ul&gt;
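&lt;p&gt;The security bullet above is easiest to see in code. Here is a hedged Python sketch (the field names and roles are made up) of the per-field authorization a GraphQL resolver ends up doing, where a REST endpoint could authorize once for the whole response.&lt;/p&gt;

```python
# Hypothetical per-field permission table; a REST endpoint could instead
# check a single permission for the whole response shape.
FIELD_PERMISSIONS = {
    "name": "viewer",
    "email": "admin",
    "salary": "admin",
}

def resolve(user_role, requested_fields, record):
    result = {}
    for field in requested_fields:
        required = FIELD_PERMISSIONS.get(field, "admin")  # default: locked down
        if user_role == "admin" or required == "viewer":
            result[field] = record[field]
        # fields the role cannot see are silently dropped (or could error)
    return result

record = {"name": "Ada", "email": "ada@example.com", "salary": 100}
visible = resolve("viewer", ["name", "salary"], record)  # salary is dropped
```

&lt;p&gt;Every new field in the schema adds a row to a table like this, and every resolver has to consult it, which is the per-field burden the bullet describes.&lt;/p&gt;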

&lt;p&gt;But let's talk about something else. In all my years writing RESTful APIs and doing contract negotiation, there have been a lot of debates on how to do various things in the API. Having standards would have been nice; they would have resolved a lot of discussions quickly. There are three very common, unresolved debates in the REST world:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Where do you put IDs? What about nested objects? Is it &lt;code&gt;owners/12442/dogs/89203&lt;/code&gt; or &lt;code&gt;dogs/89203&lt;/code&gt;? What goes in the URL versus the body on PUTs (updates)?&lt;/li&gt;
&lt;li&gt;How do you update nested objects? For example, say you have a person with an "addresses" attribute. How do you update it? Does a PATCH replace the whole list with the one I provided? How do you "add" to a child list without idempotency/race-condition issues?&lt;/li&gt;
&lt;li&gt;How do you deal with non-CRUD business logic? Say, for example, you want an API for "trigger upload". This doesn't actually return an object. It's not "state transfer"; it's "create side effects", which breaks out of the REST model.&lt;/li&gt;
&lt;/ol&gt;
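Debate #2 is easy to demonstrate: given the same PATCH body, two servers can reasonably disagree about what happens to a nested list. A sketch of the two interpretations, with hypothetical names and plain dicts standing in for stored resources:

```python
# Sketch of debate #2: two plausible semantics for
#   PATCH /people/1  {"addresses": [{"id": "a2", "city": "Oslo"}]}
# All names here are hypothetical, for illustration only.

person = {
    "id": "1",
    "name": "bob",
    "addresses": [{"id": "a1", "city": "Denver"}],
}

def patch_replace(resource: dict, patch: dict) -> dict:
    """Interpretation A: PATCH replaces each listed field wholesale."""
    return {**resource, **patch}

def patch_append(resource: dict, patch: dict) -> dict:
    """Interpretation B: list fields are appended to, not replaced.
    Not idempotent: retrying a timed-out request adds duplicates."""
    out = dict(resource)
    for key, value in patch.items():
        if isinstance(value, list) and isinstance(out.get(key), list):
            out[key] = out[key] + value
        else:
            out[key] = value
    return out

patch = {"addresses": [{"id": "a2", "city": "Oslo"}]}
print(len(patch_replace(person, patch)["addresses"]))  # 1
print(len(patch_append(person, patch)["addresses"]))   # 2
```

Neither interpretation is "the" REST answer, which is exactly why the debate never resolves.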

&lt;p&gt;GraphQL certainly resolves #1, because the standard says that all objects live in a global namespace, with IDs on each object and child objects nested below. However, #2 isn't resolved at all by GraphQL; you still have to have that discussion. And #3 is only resolved by GraphQL in a roundabout way. See, we have been talking about GraphQL's advantage being "standards", but GraphQL is only a query language. It only standardizes the reads (GETs in REST land). There is actually no standard at all for how to handle updates, creates, or deletes. That lack of standards actually makes GraphQL more palatable. A lot of REST people get too caught up in adhering to REST principles and end up overcomplicating things. So GraphQL's benefit here is not standards, but the lack thereof. By removing standards, the philosophical debate ceases to exist, and most devs just fall back to RPC.&lt;/p&gt;
&lt;h1&gt;
  
  
  So what's ideal?
&lt;/h1&gt;

&lt;p&gt;Most APIs these days are "private" APIs. Private meaning the API returns data unique to the currently logged-in user. If you get "books" and I get "books", we will get different lists, because it's our own data. &lt;br&gt;
REST was not built for private APIs, and GraphQL introduces some concerns of its own. &lt;br&gt;
If we were to take these ideas together and try to build the "perfect" system for a private API, what would that look like?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We would need a clear way to get the schema of an endpoint and discover its shape: a discoverable, strongly typed contract&lt;/li&gt;
&lt;li&gt;We need an endpoint for each action&lt;/li&gt;
&lt;li&gt;No IDs in the URL at all. A URL represents an "endpoint": a 1:1 mapping between handler and URL.&lt;/li&gt;
&lt;li&gt;A standard way to handle updates, not just reads&lt;/li&gt;
&lt;li&gt;A way to handle actions beyond CRUD&lt;/li&gt;
&lt;li&gt;Uses serialization native to the browser/JavaScript&lt;/li&gt;
&lt;li&gt;Is usable in any language/cross-platform&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I only know of one existing standard that checks all these boxes: SOAP&lt;/p&gt;

&lt;p&gt;Okay, okay, I am only partially joking. SOAP is in fact the only standard I know of that checks these boxes, but we should probably add one more requirement:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Doesn't use XML&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are a couple of modern tools that come close but don't quite check all the boxes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;tRPC&lt;/strong&gt;: Only usable in TypeScript, and doesn't provide any standards around updates&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;gRPC&lt;/strong&gt;: Not browser-native, and doesn't provide any standard around updates&lt;/p&gt;
&lt;h1&gt;
  
  
  Let's Over-Engineer
&lt;/h1&gt;

&lt;p&gt;Since there isn't really a good off-the-shelf product that checks all the boxes, let's do what engineers do: invent our own! But hold on: if we are going to go through the effort of inventing our own, let's consider some other things as well. &lt;/p&gt;

&lt;p&gt;So far, we have discussed REST and GraphQL. REST is very good at state transfer, and GraphQL is good at modeling relational data. However, most of the applications I write these days have only one client: the frontend. &lt;br&gt;
Also, the "backends for frontends" design pattern would encourage the frontend API and an external API to be different anyway. So, I would like to design a standard specifically for front-end consumption. &lt;/p&gt;

&lt;p&gt;When using an API from the frontend, state-transfer-based APIs can lead to too much business logic living on the frontend. For example, if you want to add a new "song" and assign it to a "composer" and a "producer", you would have to create the song, then add the relationships afterward. If one of those steps fails, the frontend now has to handle it. Or, you can keep the business logic on the backend and build a "remote function" specifically for that operation, &lt;code&gt;createSong()&lt;/code&gt;, which optionally takes a composer ID and a producer ID. This pattern is RPC. It encourages us to create functions specific to user behavior, rather than making the frontend control business logic and modify state itself. &lt;/p&gt;

&lt;p&gt;When writing a front-end application, the one main concern every frontend deals with is state. What needs to happen and be updated when a user clicks a button? Or when a user deletes a thing?&lt;br&gt;
If we were to use GraphQL or REST, the frontend would be responsible for knowing what to even look for. For example, let's say you delete an author. When the author is deleted, all of their books are deleted as well. For the frontend to know that the books are deleted, it would have to specifically ask for the new list of books. The thing about deleted things is that there is no way to get a list of them, so instead you have to fetch the full list of books, diff it against local state, and apply the diff. &lt;br&gt;
It might not be too hard to know that books are deleted when an author is. But now imagine that a couple months later, we add an audiobook feature. For the frontend to correctly update state for audiobooks, it now has to go back and modify the original delete-author logic to also check for audiobooks. What the frontend ultimately wants to know in this situation is, "what are the side effects of this action?" &lt;/p&gt;

&lt;p&gt;This means there are two use cases for frontend APIs. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Gathering initial state. When a frontend component initially loads, it needs to go to the backend to gather all the data.&lt;/li&gt;
&lt;li&gt;Update state. When a user takes an action, the frontend needs to potentially update multiple components. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We can design our APIs around this. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;RPC calls for gathering initial state, which can provide additional information and relationships. Essentially the equivalent of a "GET" in REST.&lt;/li&gt;
&lt;li&gt;RPC calls for taking action. Unlike in REST, these return nothing by default. Instead of returning a specific object, they return a list of "events". The backend generates one event for every side effect of the action. This way, the backend is responsible for telling the frontend what needs to change. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you would like to see a spec I am working on for this type of thing, &lt;a href="https://github.com/nhumrich/STNT" rel="noopener noreferrer"&gt;go here&lt;/a&gt;. Otherwise, here is an example of a response with events.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POST /authors.delete
{"id": "12345"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// response
{"_events": [
   {"author.deleted": {"id": "12345", "name": "bob", ...}},
   {"book.deleted": {"id": "1111", "title": "Musings of a software engineer", ...}},
   {"audiobook.deleted": {"id": "1000", "title": "Hills to die on", ...}}
]}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
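One way a client might consume such a response is to fold each event into its local state generically, keyed by object type, so that new side effects (like the audiobook feature) require no frontend changes at all. A sketch, with a hypothetical store shape:

```python
# Sketch: applying an "_events" list to frontend-style local state.
# The store shape (type -> id -> object) is hypothetical.

store = {
    "author": {"12345": {"id": "12345", "name": "bob"}},
    "book": {"1111": {"id": "1111", "title": "Musings of a software engineer"}},
    "audiobook": {"1000": {"id": "1000", "title": "Hills to die on"}},
}

def apply_events(store: dict, events: list[dict]) -> None:
    """Fold backend-reported side effects into local state."""
    for event in events:
        for name, payload in event.items():
            obj_type, verb = name.rsplit(".", 1)  # "book.deleted" -> ("book", "deleted")
            table = store.setdefault(obj_type, {})
            if verb == "deleted":
                table.pop(payload["id"], None)
            else:  # "created" / "updated": upsert the payload
                table[payload["id"]] = payload

response = {"_events": [
    {"author.deleted": {"id": "12345", "name": "bob"}},
    {"book.deleted": {"id": "1111", "title": "Musings of a software engineer"}},
    {"audiobook.deleted": {"id": "1000", "title": "Hills to die on"}},
]}
apply_events(store, response["_events"])
```

Because the loop dispatches on the event name rather than a hardcoded list of types, adding audiobooks on the backend "just works" on the client.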



&lt;p&gt;As for the GETs (or &lt;code&gt;list&lt;/code&gt;), here is an example of how you can get child relationships:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POST /authors.list
{"_include": ["books", "audiobooks"]}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// response
{"authors": [
    {"id": "12345", "name": "bob", ...},
    {"id": "22341", "name": "sally", ...}
],
"_includes": {
    "books": [
        {"id": "1111", "title": "Musings of a software engineer", ...},
        {"id": "1010", "title": "Things on my shelf", ...}
    ],
    "audiobooks": [
        {"id": "1000", "title": "Hills to die on", ...}
    ]
}}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>api</category>
      <category>restapi</category>
      <category>graphql</category>
      <category>rpc</category>
    </item>
    <item>
      <title>Microservices: The Worst Technical Decision You Will Ever Make</title>
      <dc:creator>Nick Humrich</dc:creator>
      <pubDate>Tue, 27 Jun 2023 04:18:35 +0000</pubDate>
      <link>https://dev.to/nhumrich/microservices-the-worst-technical-decision-you-will-ever-make-jff</link>
      <guid>https://dev.to/nhumrich/microservices-the-worst-technical-decision-you-will-ever-make-jff</guid>
      <description>&lt;p&gt;The debate of microservices vs monoliths seems to be getting some traction. Most trends happen in a cycle. Fanny packs were awesome; everyone was wearing them. Then they were lame, and you would get made fun of for wearing them. Now, they are back in. &lt;br&gt;
Microservices and Monoliths seem to follow a similar trend. Microservices were really awesome, and everyone was trying them. Now, there is a "self-correcting" of the market and those who have tried microservices are pivoting back to monoliths. The honeymoon of Docker and Kubernetes is over, and people are starting to realize that microservices are complex. &lt;br&gt;
Microservices have the ability to take what may be a fairly simple application, and turn it into a distributed system. The requirement for writing features goes from "will it work" to "which letter do we drop in CAP". These are much more difficult problems. &lt;/p&gt;

&lt;p&gt;There really isn't any technical decision you can make that complicates a codebase/application more than adopting microservices. &lt;br&gt;
Microservices are complex -- once you adopt them, you start thinking about different things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service discovery&lt;/li&gt;
&lt;li&gt;Isolating Failures&lt;/li&gt;
&lt;li&gt;Circuit Breakers&lt;/li&gt;
&lt;li&gt;Data Consistency&lt;/li&gt;
&lt;li&gt;Separating Tests&lt;/li&gt;
&lt;li&gt;Independent deployability&lt;/li&gt;
&lt;li&gt;Backwards Compatibility&lt;/li&gt;
&lt;li&gt;API Security (zero trust)&lt;/li&gt;
&lt;li&gt;Standards and contracts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A lot of these things never need to be considered when deploying as a monolith. If we were to chart the technical complexity of each solution, it would probably look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqy1e2nd4albd1aex23k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqy1e2nd4albd1aex23k.png" alt="microservices have high technical complexity while monoliths are low"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why are microservices popular?
&lt;/h2&gt;

&lt;p&gt;One major question worth asking is, "If microservices are so bad, why are they popular?" The answer is that they solve a different problem. Microservices have become popular because they are designed to solve organizational problems, not technical ones. Another way to say this is that microservices are an &lt;strong&gt;accepted tradeoff&lt;/strong&gt; for solving a non-technical problem: people. &lt;/p&gt;

&lt;p&gt;No discussion of microservices is ever complete without a mention of &lt;a href="https://en.wikipedia.org/w/index.php?title=Conway%27s_law" rel="noopener noreferrer"&gt;Conway's law&lt;/a&gt;. That is because Conway's law is at the heart of why microservices matter. If you don't understand Conway's law, you do not understand microservices. Martin Fowler, a well-respected software architect, claims that this law is perhaps the only "law" in software that is &lt;em&gt;always&lt;/em&gt; true. &lt;a href="https://martinfowler.com/bliki/ConwaysLaw.html" rel="noopener noreferrer"&gt;In a recent article on the topic,&lt;/a&gt; he said:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;[Conway's Law is] Important enough to affect every system I've come across, and powerful enough that you're doomed to defeat if you try to fight it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you are unfamiliar with Conway's law, it states that a software design will eventually mirror the organizational structure of the people who built it. If you have a messy organization, you will have a messy software structure. &lt;br&gt;
Microservices are an architectural choice that intentionally complicates technology, in order to force a better, clearer communication structure. In other words, they are a technical implementation to an organizational problem; not a "solution" to a technical problem. &lt;/p&gt;

&lt;p&gt;Let's add the organizational complexity to our graph:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqh14h8rekme6abdtxpww.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqh14h8rekme6abdtxpww.png" alt="graph of organizational complexity for microservices and monolith, where organizational complexity is higher for a monolith"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The astute reader might notice that the "total" complexity of microservices is higher. &lt;/p&gt;

&lt;h2&gt;
  
  
  Growth Factor
&lt;/h2&gt;

&lt;p&gt;The one thing that should hopefully be clear by now is that people who say microservices are better and people who say monoliths are better are both correct. The problem is, they are talking about different problems. &lt;br&gt;
Monoliths are technically easy, but harder to scale organizationally. &lt;br&gt;
Microservices are technically difficult, but easier to scale organizationally.&lt;/p&gt;

&lt;p&gt;As an organization grows, both technical complexity and organizational complexity increase. Where you are in your growth as a technology and company will determine where on the cross-section of "perfect solution" you stand. The growth rate of technical complexity over time is linear:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9v1jo1zz87ejmed76lg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9v1jo1zz87ejmed76lg.png" alt="basic graph showing linear growth"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As your product becomes larger, the complexity grows with it. Organizational complexity however grows exponentially: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5e9gx6t95lhbzj1obopo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5e9gx6t95lhbzj1obopo.png" alt="basic graph showing exponential growth"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When teams adopt and practice microservices, that complexity flips. Organizational becomes linear, and technical becomes exponential. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrfw67ug1cluo4i4i3r0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrfw67ug1cluo4i4i3r0.png" alt="Image description"&gt;&lt;/a&gt; &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tud3rfjnfffuhlc9eir.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tud3rfjnfffuhlc9eir.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Software abstraction
&lt;/h2&gt;

&lt;p&gt;On the surface, the two appear to be relatively similar in overall complexity, with different problems. In order to accurately see how things might shake out, we must consider the ability of improvements to compound over time.&lt;br&gt;
Software is complex because everything you do sits on top of hundreds of layers of abstraction. We can program so efficiently these days because decades of other developers have abstracted hard things for us. This means that as software becomes more and more complex, the complexity we experience as developers doesn't actually increase as fast, since abstractions hold the complexity down.&lt;/p&gt;

&lt;p&gt;Organizational complexity, however, cannot be abstracted. Many of the same problems that plagued companies 50 years ago are still problems today. Organizational problems can only be fixed by learned behavior. No technology can solve a behavioral problem.&lt;/p&gt;

&lt;p&gt;As your company grows and learns to adapt to microservices, the technical complexity over time won't actually be exponential, since many of the complex parts become abstracted. Likewise, as the industry continues to improve, a lot of those complexities will make their way into libraries and common practices. &lt;/p&gt;

&lt;p&gt;While microservices might be a technical hurdle for your team, they are a much better long-term solution than a technically simple, organizationally complex monolith. It's probably best to think of this whole debate not as one versus the other, but as a spectrum. It doesn't have to be one or the other; you can have a mixed architecture.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>architecture</category>
      <category>programming</category>
    </item>
    <item>
      <title>The colors of the async rainbow</title>
      <dc:creator>Nick Humrich</dc:creator>
      <pubDate>Tue, 28 Mar 2023 18:25:27 +0000</pubDate>
      <link>https://dev.to/nhumrich/the-colors-of-the-async-rainbow-5anf</link>
      <guid>https://dev.to/nhumrich/the-colors-of-the-async-rainbow-5anf</guid>
      <description>&lt;p&gt;Asynchronous programming has become increasingly re-popular in recent years. It allows a different mental model on concurrency. The syntax most commonly used to implement asynchronous programming the &lt;code&gt;async&lt;/code&gt;/&lt;code&gt;await&lt;/code&gt; syntax. This refers to the use of keywords such as "async" and "await" to denote asynchronous functions.&lt;/p&gt;

&lt;p&gt;Some have argued that &lt;code&gt;async&lt;/code&gt; and &lt;code&gt;await&lt;/code&gt; add complexity, often referred to as "&lt;a href="https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/"&gt;colored functions&lt;/a&gt;". The main argument is that you can't call async functions from sync ones. This makes it frustrating when a function is labeled &lt;code&gt;async&lt;/code&gt; and you are currently writing a non-async function: that async function essentially becomes "off limits". This is a legitimate argument, but one I consider an acceptable trade-off for adopting async/await. &lt;/p&gt;

&lt;p&gt;While the arguments against colored functions are sound, I actually still think colored functions are good. That is to say, the benefits outweigh the costs: colored functions are a useful tool for developers, particularly when it comes to remote calls and input/output (I/O) operations.&lt;/p&gt;

&lt;p&gt;There is a famous blog post (which I have linked to several times before) that compares the cost of different computation primitives. &lt;br&gt;
The post is called &lt;a href="https://blog.codinghorror.com/the-infinite-space-between-words/"&gt;The Infinite Space Between Words by Jeff Atwood&lt;/a&gt;. The main premise is turning CPU time into human time, so it's easier to compare time differences. For example, say we have to load data from memory, which takes 120 ns; on our human scale, let's say that takes 6 minutes. Now, we need to get the same data from a database. Assuming the database is in the same datacenter and is relatively fast, that would be about 3 ms. On our human scale, that is 3 months! &lt;/p&gt;

&lt;p&gt;I work remotely, and sometimes I ask my coworker Gwen a question via slack/email/whatever. She usually responds in 6 minutes, and I think that is a great response time! Normally, when I ask Gwen a question, I just work synchronously on tasks, since her average response time is low. I have another coworker, Skyler, who averages a 3-month response time. If you were me, and knew that was his average response time, you would not work synchronously. That is to say, you wouldn't wait until he responds before continuing to do any work at all. Instead, you would move on to some other task and pick up the current task once you get a response (if you even wait at all).&lt;/p&gt;

&lt;p&gt;I could follow the same pattern of asynchronously working with my 6-minute coworker, but at some point, constantly switching between tasks, only to pick it back up 6 minutes later gets exhausting. There is a cost to constant context switching, so instead, I opt for just waiting for those 6 minutes of my time. &lt;/p&gt;

&lt;p&gt;If we crank it up a notch: my boss needs me to complete a specific task. My boss works similarly to me, and needs to know whether the task will be done in 10 minutes or 3 months. The problem is, it depends on which co-worker I will be coordinating with. And that difference changes how I think about the workload entirely. In fact, the difference between working with my 3-month coworker and working with &lt;em&gt;both&lt;/em&gt; of my coworkers is essentially meaningless. As soon as you add in my 3-month coworker, he is my bottleneck, and no other factor contributes to total time in any meaningful way. &lt;/p&gt;



&lt;p&gt;The analogy here is important when it comes to asynchronous programming. When we make an external call, to another service or a database, we are impacting our overall time for that function. So much so, that it is often necessary to know when a function will do something remotely. &lt;/p&gt;

&lt;p&gt;Say we have the following function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;get_users_name&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function gives us the &lt;code&gt;name&lt;/code&gt; of a user object. Does it do this by using the existing object? Or does it do this by reaching out to a database/service somewhere? It's impossible to know without just &lt;em&gt;knowing&lt;/em&gt;. However, if the function is &lt;code&gt;async&lt;/code&gt;, then we would have a pretty good idea that it is going remotely somewhere. And that difference changes how we model our code, and how we think about this function call. &lt;/p&gt;
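A minimal sketch of that difference in Python (the `User` type and the database stand-in are hypothetical): the `async` variant advertises in its signature that it may leave the process, while the sync variant implies a local, in-memory answer.

```python
# Sketch: the same lookup with and without an explicit "color".
# User and the sleep-based "database call" are hypothetical stand-ins.
import asyncio
from dataclasses import dataclass

@dataclass
class User:
    id: str
    name: str

def get_users_name(user: User) -> str:
    """Sync: by convention, answers from the object already in memory."""
    return user.name

async def get_users_name_remote(user_id: str) -> str:
    """Async: the signature itself warns callers this may be slow."""
    await asyncio.sleep(0)  # stand-in for a real database round trip
    return "bob"

user = User(id="12345", name="bob")
print(get_users_name(user))                         # bob
print(asyncio.run(get_users_name_remote(user.id)))  # bob
```

Both return the same value; the signal is entirely in the call site, where `await` forces the caller to acknowledge that a 3-ms (3-month) operation may be happening.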

&lt;p&gt;If we use threads or even green threads instead, then this function still has a color. It's still making a remote call. That color is just invisible to us until we inspect it. Async is an explicit color, whereas green threads are an implicit color. Non-async is like invisible ink. It's still there, you just can't see it until you look with a black light. Another way to think about &lt;code&gt;async&lt;/code&gt; is as a "type" for a remote function. &lt;/p&gt;

&lt;p&gt;Yes, remote functions have a color. They are viral. Any function that calls them inherits that color. I believe that the color being explicit is a useful tool that helps you reason about your program's behavior. By denoting remote calls and I/O operations with colored functions, developers can quickly identify the most time-consuming parts of their code and optimize them accordingly. So the next time you see a colored function in your code, don't be afraid of it; embrace it as a useful tool for optimizing your program's performance.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>No, you won't be replaced by AI</title>
      <dc:creator>Nick Humrich</dc:creator>
      <pubDate>Fri, 17 Mar 2023 04:39:13 +0000</pubDate>
      <link>https://dev.to/nhumrich/no-you-wont-be-replaced-by-ai-4eh4</link>
      <guid>https://dev.to/nhumrich/no-you-wont-be-replaced-by-ai-4eh4</guid>
      <description>&lt;p&gt;Artificial intelligence (AI) has taken the world by storm, and its impact on the software development industry is undeniable. The recent news and development of Large Language Model (LLM)  algorithms and other forms of AI have made people realize that computers are beginning to code. Some have speculated that AI will eventually replace developers altogether, rendering their skills and expertise obsolete. There is a lot of fear, uncertainty, and doubt around this. As LLMs become more powerful, and AI assistants like copilot start to write more code, developers fear if they, in fact, are still relevant. Will the demand for one of the most advanced skills of the modern age suddenly go away? No. It won't.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbm2oupqndbki6zq1276i.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbm2oupqndbki6zq1276i.jpg" alt="Image of human evolution from monkeys at the beginning to robots at the end" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Programming Evolution
&lt;/h2&gt;

&lt;p&gt;In 1972, Dennis Ritchie released C. As a successor to B, C was one of the first widely used "human-readable" programming languages, and it changed the world. For the first time, programmers no longer needed to think in machine code. They could think about data and instructions overall, and let the compiler turn them into machine code. Up until that point, programmers had to know the instruction set of the specific CPU they were programming on; they had to understand how the hardware worked. Further down the road, C++, and Java in turn, also revolutionized the software development industry by allowing developers to write code more efficiently and with greater ease. The development of all these higher-level languages sparked the question of whether they would eventually replace the need for developers, as assembly language was gradually phased out of use. It is true that developers today write code at a significantly faster rate than developers in the days before C. Higher-level programming languages have certainly changed the game! In fact, while C was once considered a "high-level programming language", it is often referred to today as "low-level". C is now being replaced by Rust, and is starting to go the way of assembly. What was once our herald is now our legacy. &lt;/p&gt;

&lt;p&gt;The reality is that high-level programming languages did not eliminate the need for developers. Look around: there are more developers and "code-adjacent" jobs than ever before. As software becomes easier to write, more difficult analytical problems surface. New achievements unlock new challenges. As computers got better and cheaper, we started storing more data. Eventually, we had so much data that we needed to create new ways to deal with it: new databases, new hardware. This follows the theory of induced demand. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4rfsp24r56cec6bb24x.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4rfsp24r56cec6bb24x.jpg" alt="lots of electrical towers in the sunset" width="800" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Induced demand is a concept in economics where increased supply can lead to increased demand. The best way to look at this is, over the years, electricity has gotten cheaper and cheaper. And yet, governments and companies have never spent more on electricity than now. As prices fall, their budgets increase. Induced demand says that as things such as electricity become cheaper (or, more readily available), then more people use it as it becomes less cost-prohibitive, and demand actually increases beyond the original level.&lt;/p&gt;

&lt;p&gt;We can see this same pattern through software. When software was in its infancy, only governments and large organizations could afford it. As software became cheaper to produce, we saw more and more solutions being solved by software, and more users than before. As our ability to produce high-quality software improves, so does demand. When the cost per line of code goes down, the desire to purchase code goes up.&lt;/p&gt;

&lt;h3&gt;
  
  
  So, if cost goes down, will my salary drop?
&lt;/h3&gt;

&lt;p&gt;We have been talking about the cost of software going down over time, but have developer salaries dropped over the years? No, they have continued to grow. If we were to compare you to electricity as mentioned above, the cost of electricity goes down, but the budget for power goes up. In other words, you are the power plant. As code becomes easier to write, you don't get paid less, you produce more, thus, increasing your own value further. Improvements in the industry improve your output, thus increasing your value. &lt;/p&gt;

&lt;h2&gt;
  
  
  Code is not the "bottleneck"
&lt;/h2&gt;

&lt;p&gt;Just as high-level programming languages did not replace the need for critical thinking and problem-solving abilities, AI will not replace skills that are required to develop complex software applications. The ability to analyze complex problems, develop innovative solutions, and collaborate with others is a key aspect of software development that cannot be replaced by any AI. AI can certainly automate certain tasks, but it cannot replace the creativity and intuition that are required to develop innovative software solutions. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Writing code is simply a means to an end, not an end in itself.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Software development is not just about writing code, but about creating solutions to complex problems. Writing code is simply a means to an end, not an end in itself. The ability to solve complex problems and create innovative solutions is a critical aspect of software development. Programming requires creativity, insight, and intuition, all of which are uniquely human characteristics that cannot be replicated by machines.&lt;/p&gt;

&lt;p&gt;That is to say that writing software is a process of designing simple interfaces for complex, robust solutions. It's a process of exploring. Of debugging. Of empathy. &lt;/p&gt;

&lt;h2&gt;
  
  
  You have a brain
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftji8fn6buz0escuqupfv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftji8fn6buz0escuqupfv.jpg" alt="human brain" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AI does not think. It simply produces the most statistically likely response based on previous training. The inherent flaw with AI is that it can only make decisions as good as the training data it has.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I should think that you Jedi would have more respect for the difference between knowledge and wisdom.&lt;br&gt;
   ―Dexter Jettster&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Amazon did a study where they created an AI platform to screen resumes. They eventually had to scrap it because the AI was shown to be biased towards males.&lt;a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G" rel="noopener noreferrer"&gt;1&lt;/a&gt; The problem with their resume screener was that it used current Amazon employee resumes as training data. Amazon's workforce was already skewed male. They were actually aware of that problem, which is why they created the AI in the first place. But AI cannot perform better than its training data. &lt;/p&gt;

&lt;p&gt;Now think about your code. Is it perfect? Is it flawless? Does it pass all the test cases? Is it bug-free the first time, every time? If you answered yes to these, please reach out, because I will hire you in a flash. The reality is that software is buggy. Like, really buggy. If you look at open source projects, chances are you will notice that the more stars a project has, the more bug reports it has. Software is often so buggy that, as software engineers, we have backlogs and ticket systems with labels such as "won't fix": issues so hairy, or so rare, that we decide they aren't worth our time. The training data for AI is bug-laden software. The quality of the code AI writes can never be better than the quality of the software it is given.&lt;/p&gt;

&lt;h2&gt;
  
  
  You are safe
&lt;/h2&gt;

&lt;p&gt;Might your job change as AI becomes more prevalent? Yes. Might you have to learn some new ways of working, new languages, new skills? Yes. Might you have to increase the amount of analytical and intuitive work you do every day? Yes. &lt;br&gt;
But the need for a "programmer" is not going away.&lt;br&gt;
Just as no one could have anticipated assembly going the way of the dodo, it's too early to tell what will happen to programming as we know it today. But your job, your livelihood, your brain are not obsolete. They will be needed more than ever.  &lt;/p&gt;




&lt;p&gt;For my next article on AI "replacing people" see: &lt;a href="https://dev.to/nhumrich/ai-will-replace-everyone-but-you-34h9"&gt;https://dev.to/nhumrich/ai-will-replace-everyone-but-you-34h9&lt;/a&gt;  &lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>career</category>
    </item>
    <item>
      <title>Quality vs Speed</title>
      <dc:creator>Nick Humrich</dc:creator>
      <pubDate>Mon, 06 Mar 2023 21:51:09 +0000</pubDate>
      <link>https://dev.to/nhumrich/quality-vs-speed-20pl</link>
      <guid>https://dev.to/nhumrich/quality-vs-speed-20pl</guid>
      <description>&lt;p&gt;In software engineering, there has been an ongoing debate regarding the relationship between quality and speed. Some argue that focusing too much on speed can compromise quality, while others believe that quality and speed are inherently linked. In this article, we will explore why quality and speed of software engineering are coupled and how this relationship impacts the software development process.&lt;/p&gt;

&lt;p&gt;Quality and speed are both crucial components of successful software development. Quality refers to the extent to which the software meets its intended purpose, is reliable, efficient, and maintainable. Speed, on the other hand, refers to the ability to deliver software quickly and efficiently. While these two concepts may seem at odds, they are in fact interdependent, and focusing on one can positively impact the other.&lt;/p&gt;

&lt;p&gt;Firstly, quality directly affects speed. Software that is poorly written and difficult to maintain will inevitably lead to delays and increased development time. When a team is faced with fixing bugs and maintaining subpar code, it slows down the entire development process. This can also impact the speed of future projects, as developers may need to devote time to fixing the same issues repeatedly.&lt;/p&gt;

&lt;p&gt;In contrast, well-written, maintainable code can actually speed up the development process. Clean, organized code is easier to understand, which makes it easier to build on and maintain. This allows developers to focus on developing new features rather than fixing bugs or deciphering code. By prioritizing quality and building a strong foundation, developers can create a more efficient development process and ultimately deliver software faster.&lt;/p&gt;

&lt;p&gt;Secondly, speed can positively impact quality. In today's fast-paced technological landscape, speed is often a critical component of success. Being able to deliver software quickly and efficiently can give businesses a competitive edge. When teams are working under tight deadlines and need to deliver quickly, they are forced to prioritize the most important features and functionality. This can help to reduce unnecessary complexity and ensure that the software is lean and efficient.&lt;/p&gt;

&lt;p&gt;Furthermore, fast delivery can help teams receive timely feedback, which is essential for maintaining quality. Early feedback allows developers to identify issues and make adjustments before the product is complete. This can help to catch bugs, improve usability, and ensure that the software meets the needs of its intended users. By prioritizing speed and delivering quickly, teams can receive feedback earlier in the development process and ultimately build higher quality software.&lt;/p&gt;

&lt;p&gt;Quality and speed are not mutually exclusive. Focusing on one can positively impact the other, and both are crucial components of successful software development. By prioritizing quality and building a strong foundation, developers can create a more efficient development process and ultimately deliver software faster. At the same time, fast delivery can help teams receive timely feedback, which is essential for maintaining quality. By understanding the interdependent relationship between quality and speed, developers can create software that is both high-quality and delivered quickly.&lt;/p&gt;

</description>
      <category>codequality</category>
      <category>speed</category>
      <category>devrel</category>
    </item>
    <item>
      <title>The Evolution of Disorder in Large Software Projects</title>
      <dc:creator>Nick Humrich</dc:creator>
      <pubDate>Tue, 31 Jan 2023 16:08:37 +0000</pubDate>
      <link>https://dev.to/nhumrich/the-evolution-of-disorder-in-large-software-projects-jab</link>
      <guid>https://dev.to/nhumrich/the-evolution-of-disorder-in-large-software-projects-jab</guid>
      <description>&lt;p&gt;If you have ever worked on a large software project, you are likely all-too-familiar with technical debt. Technical debt, in my experience, is synonymous with "messy work environment." In other words, the tools/code/architecture is built in such a way, that doing certain tasks is not optimal because there is a mess that needs to be cleaned up. As you build these projects, and notice this mess, you start to make decisions between cleaning up the mess, or leaving it there. In some cases, you may even choose to make the mess worse. &lt;/p&gt;

&lt;p&gt;My life dealing with messes and disorder began at a very young age. As a child, I always had a messy, disorganized room. I would have very few patches of open floor to even step on in order to leave my room. Today, this habit often carries over to my work desk. My desk in my office regularly has things all over it. I occasionally clean up my desk and make it look really nice, but it never lasts long; it seems to stay clean for only a couple of days. &lt;br&gt;
While having a discussion about this phenomenon with a friend of mine, I was introduced to Broken Windows Theory. &lt;/p&gt;

&lt;p&gt;Broken Windows theory says that crime is more likely to happen in neighborhoods/houses that have broken windows. The concept is that, subconsciously, we (or miscreants) believe clean, orderly houses are more likely to be monitored. The theory was created by social scientists James Q. Wilson and George L. Kelling after hearing a story from Philip Zimbardo, a Stanford psychologist.&lt;br&gt;
In his story, Zimbardo left two unmarked cars without license plates parked in public lots in different cities. One car had the hood up and doors open; it was trashed within days, while the other car was left untouched. Zimbardo then smashed a window of the untouched car, and within hours, that car was destroyed too. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wLZHAMTQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/82cvu2q2mawxdgdacqn9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wLZHAMTQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/82cvu2q2mawxdgdacqn9.jpg" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kelling and Wilson explain the theory like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Social psychologists and police officers tend to agree that if a window in a building is broken and is left unrepaired, all the rest of the windows will soon be broken. This is as true in nice neighborhoods as in rundown ones. Window-breaking does not necessarily occur on a large scale because some areas are inhabited by determined window-breakers whereas others are populated by window-lovers; rather, one un-repaired broken window is a signal that no one cares, and so breaking more windows costs nothing. (It has always been fun.)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The emphasis for me is the last section: "one un-repaired broken window is a signal that no one cares, and so breaking more windows costs nothing."&lt;/p&gt;

&lt;p&gt;Upon learning about this theory, suddenly my messy desk made sense. Once I leave one thing on my desk, my brain is willing to leave yet another thing on my desk. Over time, cleaning up any single thing becomes irrelevant; I may as well leave it, since it won't affect the overall state of disorder. &lt;/p&gt;

&lt;h2&gt;
  
  
  Back to our regularly scheduled program
&lt;/h2&gt;

&lt;p&gt;I have also come to notice this same behavior in software. As a team works on a feature, there are often discussions about technical debt. What is the priority of the feature? Is it worth taking on technical debt to ship it faster? One such item, for example, is automated tests: you can save some time before release if you skip tests this time. Tests are just one form of technical quality, but there are many other factors that also get put aside.&lt;/p&gt;

&lt;p&gt;Every time you choose to add technical debt, it reinforces that taking on technical debt is an acceptable lever to pull when under pressure of some form. In other words, if you already have a broken window, what's one more? One poor quality feature leads to another, and another, and another. Eventually, your building runs out of windows to break, and your only answer is to burn it down and start building it over again. &lt;/p&gt;

&lt;p&gt;Broken window theory also teaches us that the rate of disorder is exponential. As more technical debt enters a project, the rate at which it takes on more debt increases. If you have ever worked on a project that was later considered "legacy," you will know that there is no faster way to kill a software project than to call it "legacy." When a software project reaches legacy status, all windows are ripe for breaking and fixing them becomes pointless. &lt;/p&gt;

&lt;h2&gt;
  
  
  What to do about it
&lt;/h2&gt;

&lt;p&gt;Learning about broken window theory often leads to "zero-tolerance" policies. In policing, that means ticketing every minor infraction so that communities never reach the point of even minor cosmetic disorder. However, zealotry is not the solution to broken windows. The solution is care and training. In other words, we don't ticket people for having broken windows, but we can encourage people to fix windows once broken. Likewise, preventing all technical debt is not the solution. Being a zealot about tests and quality will actually slow teams down significantly, as perfect quality is unobtainable. A balance must be struck, and what that balance is depends a lot on the maturity of the idea you are working on.&lt;br&gt;
I once heard -- though unfortunately, I don't remember where -- that fidelity should match maturity.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The fidelity of a design should never exceed the maturity of the idea/project&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In other words, the quality of your current feature should never be greater than the maturity of that feature. If you are unsure of the feature and its impact, you should not invest in much quality. But if a project is very mature, then your changes should have high quality. &lt;br&gt;
We can see an example by comparing chefs in cooking shows with chefs in a fancy restaurant. If you have ever watched a competitive cooking show, such as MasterChef, you will see chefs make meals without ever cleaning up. In these shows, the chefs have one to three hours to cook a couple of dishes. Time is tight, and scalability is low. Cleaning up, or preventing messes, is the lowest item on their priority list. They will keep cooking no matter the mess they create, with the only exception being if the mess prevents their next immediate task from happening. &lt;/p&gt;

&lt;p&gt;In contrast, chefs in a fancy restaurant have every reason to care about cleanliness. First, if the restaurant is located in America, there are actual laws that require the kitchen to maintain a certain level of cleanliness. Second, a clean kitchen ensures that future meals can be cooked smoothly. As dirty dishes pile up and countertops get dirty, future meals cannot be prepared in a timely manner. For these reasons, cleaning the kitchen is often part of the cooking process. &lt;/p&gt;

&lt;p&gt;Similarly, our teams need to find the right balance between quality and speed. Quality is only in conflict with speed in the short term; the two are always tightly coupled in the long run. &lt;/p&gt;

&lt;p&gt;Perhaps it's time for you to do an inventory of your own codebase. How much disorder do you have? Does it seem to be getting worse or better over time? Are there specific repos/projects that you feel more inclined to clean up than others? Do you have any practices in place to metaphorically replace broken windows?&lt;/p&gt;

</description>
      <category>software</category>
      <category>architecture</category>
      <category>teams</category>
    </item>
    <item>
      <title>Why I don't like UUIDs</title>
      <dc:creator>Nick Humrich</dc:creator>
      <pubDate>Mon, 10 Oct 2022 16:21:58 +0000</pubDate>
      <link>https://dev.to/nhumrich/why-i-dont-like-uuids-5d9n</link>
      <guid>https://dev.to/nhumrich/why-i-dont-like-uuids-5d9n</guid>
      <description>&lt;p&gt;A lot of people use UUIDs — most specifically, UUIDv4 – to generate ID's. After all, a UUID is an ID that is universally unique. And UUIDv4 is a library that exist in essentially every language, since it's a standard. UUID's are great because they are essentially guaranteed to be unique, across a worldwide dataset, without needing to “check for existence” for the ID. The UUID standard is so common, even databases support them out of the box. &lt;/p&gt;

&lt;p&gt;So why then do I not like them? Three reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Since they are so ubiquitous, it's now impossible to tell them apart from any other ID&lt;/li&gt;
&lt;li&gt;They are needlessly long&lt;/li&gt;
&lt;li&gt;They can't be sorted.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Ubiquitous
&lt;/h2&gt;

&lt;p&gt;UUIDv4 has reached the point where lots of tooling and products use them. For example, while I was personally dealing with an error in a production web app, I was trying to get more information. The request itself had a correlation ID (a UUIDv4), a tracing ID (a UUIDv4), the ID of the object itself (a UUIDv4), and the error was assigned a specific URL for debugging, which was a UUIDv4. There were UUIDv4s everywhere. It becomes hard to know what an ID is supposed to represent. It would have been nice to see an ID and recognize what it was supposed to be the ID of.&lt;/p&gt;

&lt;p&gt;To fix this, I like to prepend my IDs with a signifier such as &lt;code&gt;r_&lt;/code&gt; for a request, so that I know anything that starts with &lt;code&gt;r_&lt;/code&gt; is a request. I then do this for each object type. &lt;/p&gt;
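&lt;p&gt;As a minimal sketch of this idea (the prefix table below is hypothetical; pick whatever short signifiers fit your own object types):&lt;/p&gt;

```python
import uuid

# Hypothetical type-to-prefix table; the post only specifies r_ for requests.
PREFIXES = {"request": "r_", "user": "u_"}

def prefixed_uuid(kind):
    """Return a UUIDv4 string tagged with a short type signifier."""
    return PREFIXES[kind] + str(uuid.uuid4())

print(prefixed_uuid("request"))  # e.g. r_7b48b4b7-341d-4ad8-93e1-ed5ebc910f7d
```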

&lt;h2&gt;
  
  
  Needlessly long
&lt;/h2&gt;

&lt;p&gt;The goal of a UUID is to generate an ID with enough entropy that the chance of a collision (generating the same ID twice) is low, even across a massive dataset. Due to the low probability of collision, you can generate a UUID without needing to handle duplicate IDs in your code, and without having to use incrementing IDs. The number of possible UUIDs generated from UUIDv4 is 3.4e+38. We will call this number the "entropy of UUIDv4".&lt;/p&gt;

&lt;p&gt;Here is an example of UUIDv4:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;7b48b4b7–341d-4ad8–93e1-ed5ebc910f7d&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can read the UUIDv4 spec if you care to learn what these numbers/letters represent, but here is a quick summary to understand how we calculate entropy. A UUIDv4 has: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;4 dashes, always in the same place&lt;/li&gt;
&lt;li&gt;5 sections of hex-encoded bytes (4 bytes, 2 bytes, 2 bytes, 2 bytes, 6 bytes)&lt;/li&gt;
&lt;li&gt;Total of 36 characters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So we have 36 characters, of which 32 are random hex characters (0-9 or a-f). That gives 16 possibilities for each of 32 slots: 16 * 16 * 16... and so on, 32 times, or 16^32 = 3.4e+38. (This even ignores that a few of the characters aren't entirely random.)&lt;/p&gt;
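&lt;p&gt;The arithmetic can be sanity-checked in a couple of lines:&lt;/p&gt;

```python
# 32 random slots, 16 possible hex characters each.
hex_entropy = 16 ** 32
print(f"{hex_entropy:.1e}")      # 3.4e+38
print(hex_entropy == 2 ** 128)   # True: same count as 128 fully random bits
```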

&lt;p&gt;In the spec, a UUIDv4 carries 122 random bits, not 36 characters; the 36 characters are just a string representation of those bits. But in practice, everyone converts a UUID to a string and keeps it as a string. Therefore, there is some wasted space in a UUIDv4 string. First, it has 4 characters that are completely useless: the dashes. Second, using only 0-9 and a-f in each slot ignores 20 other possible alphanumeric characters per slot. &lt;/p&gt;

&lt;p&gt;If we instead used all 36 alphanumeric characters (0-9, a-z) per slot, we could get better entropy with only 25 characters: 36^25 ≈ 8e38.&lt;/p&gt;

&lt;p&gt;We could also add uppercase letters, and after dropping a few easily-confused characters we are left with an alphabet of 58 characters, for better entropy than UUIDv4 with only 22 characters: 58^22 ≈ 6.2e38.&lt;/p&gt;

&lt;p&gt;A 22 character random string would look like &lt;code&gt;YOf41WCV7KmVUYN56CB8Bi&lt;/code&gt; and still have less chance of collision than a UUIDv4. &lt;/p&gt;
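&lt;p&gt;A sketch of how such a string might be generated. The exact 58-character alphabet is an assumption on my part (all 62 alphanumerics minus the look-alikes 0, O, I, and l, as in base58); the important part is the character count:&lt;/p&gt;

```python
import secrets
import string

# Assumed 58-character alphabet: 62 alphanumerics minus four look-alikes.
ALPHABET = "".join(c for c in string.digits + string.ascii_letters if c not in "0OIl")

def short_id(length=22):
    # secrets (not random) for IDs, since they may end up security-relevant
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(short_id())           # 22 characters, e.g. YfW41uCV7KmVUYN56CB8Bi
print(58 ** 22 > 16 ** 32)  # True: more entropy than UUIDv4's string form
```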

&lt;p&gt;But is that level of entropy even needed? &lt;/p&gt;

&lt;p&gt;UUIDv4 is designed to have VERY high entropy. But if we are generating IDs for things that will never number more than a billion, or even a trillion, that level of entropy is unneeded. &lt;br&gt;
You would have to generate about 2.71 quintillion UUIDv4s before having a good chance of a collision, or 103 trillion UUIDv4s before having a 1/1,000,000,000 chance of a collision. Most systems do not need that level of entropy. &lt;/p&gt;

&lt;p&gt;For example, if we use 0-9 and a-z in a 16-character random string, you would have to generate about 100 million IDs before having a 1 in a billion chance of a collision. &lt;/p&gt;
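&lt;p&gt;These collision figures follow from the standard birthday-bound approximation, which is easy to check yourself:&lt;/p&gt;

```python
# Birthday-bound approximation: with n IDs drawn uniformly from N possibilities,
# p(collision) ≈ n^2 / (2N), valid while p is small.
def collision_probability(n, possibilities):
    return n * n / (2 * possibilities)

N = 36 ** 16  # 16 characters drawn from 0-9, a-z
print(collision_probability(100_000_000, N))  # ~6e-10, about 1 in a billion
```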
&lt;h2&gt;
  
  
  Can't be sorted
&lt;/h2&gt;

&lt;p&gt;UUIDv4s consist almost entirely of random bits. They can't be sorted, because any form of sorting wouldn't make sense. When using IDs for objects in a database, it can be helpful to sort by ID for faster indexing or pagination purposes. But when you use UUIDv4, you cannot sort by the ID. &lt;br&gt;
One way to make IDs sortable is to include a timestamp in the ID, and then sort by the timestamp. If you use a unix timestamp, you get a number such as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;1665199900&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;and that number can be converted to a 7-character alphanumeric string using base32, or a 6-character string with base64 if you want to use upper and lower case letters (or 8 characters for either if you include the &lt;code&gt;=&lt;/code&gt; padding).&lt;br&gt;
Note: for sortability to remain intact after encoding, you need an alternative to the standard base32/64 alphabets, because the standards put numbers at the end, and we need numbers at the beginning for lexicographic sorting. One such implementation is &lt;a href="https://www.crockford.com/base32.html"&gt;Crockford's base32&lt;/a&gt;. &lt;/p&gt;
&lt;h2&gt;
  
  
  Putting it all together
&lt;/h2&gt;

&lt;p&gt;If we want to have sortable, short, recognizable IDs, we can do it by combining all three techniques.&lt;br&gt;
Since we are going to include timestamps in our IDs, we need very little entropy in the actual random portion. &lt;br&gt;
If two or more IDs are generated within the same tenth of a millisecond, they will not sort in generation order, but at that resolution I do not think that is an issue.&lt;/p&gt;

&lt;p&gt;This gives us the following string representation:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;XX_TTTTTTTTTRRRR&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Where:&lt;br&gt;
 &lt;code&gt;X&lt;/code&gt; = 1 or 2 characters to help identify the object type&lt;br&gt;
 &lt;code&gt;T&lt;/code&gt; = 9 characters representing a 1/10th-millisecond epoch timestamp&lt;br&gt;
 &lt;code&gt;R&lt;/code&gt; = 4 random characters (36^4 ≈ 1.7 million combinations within each 1/10th of a millisecond, which is very low collision odds for 1000 items in that window)&lt;/p&gt;

&lt;p&gt;Here is some sample code showing how you could generate it in Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;random&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;string&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;time&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;base32_crockford&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;b32&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;gen_id&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prefix&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;''&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;ts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;10_000&lt;/span&gt;
    &lt;span class="n"&gt;time_section&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;b32&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;random_section&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;''&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ascii_lowercase&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;digits&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nb"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;prefix&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;time_section&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;random_section&lt;/span&gt;

&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;gen_id&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'rq_'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An example from the above code:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;rq_1j5hf5bk2nqg2&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;vs our original:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;7b48b4b7–341d-4ad8–93e1-ed5ebc910f7d&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>programming</category>
    </item>
  </channel>
</rss>
