<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Manuel Holzrichter</title>
    <description>The latest articles on DEV Community by Manuel Holzrichter (@krippke).</description>
    <link>https://dev.to/krippke</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1267957%2F8ddee3e7-1b9a-41c1-96c5-0e5bb16edc4e.png</url>
      <title>DEV Community: Manuel Holzrichter</title>
      <link>https://dev.to/krippke</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/krippke"/>
    <language>en</language>
    <item>
      <title>Why I never tell AI what to do</title>
      <dc:creator>Manuel Holzrichter</dc:creator>
      <pubDate>Tue, 28 Apr 2026 13:39:00 +0000</pubDate>
      <link>https://dev.to/krippke/why-i-never-tell-ai-what-to-do-1384</link>
      <guid>https://dev.to/krippke/why-i-never-tell-ai-what-to-do-1384</guid>
      <description>&lt;p&gt;A few weeks ago, I had a problem I thought I understood. I had a solution in my head, clean and complete. I sat down, opened the AI, and described the task in careful detail. Inputs, outputs, structure, edge cases. I asked it to implement what I had described. A minute later I had a confident, well-structured answer back. Names were good. Tests were there. The shape looked right.&lt;/p&gt;

&lt;p&gt;I started reading. Halfway through I noticed something off. A small assumption the AI had made, perfectly reasonable, that quietly contradicted a constraint I had never written down. The constraint was obvious to me. It was not obvious to anyone else, and I had not put it on the page. I tried to patch the result. The patch broke the next thing. By the end of the afternoon I had thrown most of it away and started over.&lt;/p&gt;

&lt;p&gt;The output was not the problem. The problem was that I had told the AI what to do before checking that we agreed on what the problem was. I had skipped the part where I find out whether the other side of the conversation actually understands the situation. That afternoon, I changed how I work with AI. I stopped giving instructions. I started asking questions.&lt;/p&gt;

&lt;h2&gt;The flip&lt;/h2&gt;

&lt;p&gt;The change is small in the moment and large over a week of work. Instead of opening with "do X," I open by describing the problem and asking the AI what approaches it would consider. I read the answer. I push back where my context says push back. I ask follow-up questions about the parts I do not yet trust. Only once we agree on direction do I let it commit to anything substantial.&lt;/p&gt;

&lt;p&gt;The first few minutes look slower. The afternoon looks faster. The week looks much faster. Same task, same tool, different first move.&lt;/p&gt;

&lt;h2&gt;The two assumptions&lt;/h2&gt;

&lt;p&gt;The method only works if you hold two things in your head at the same time, and most people drop one of them.&lt;/p&gt;

&lt;p&gt;The first: &lt;strong&gt;the AI knows more than me.&lt;/strong&gt; Across the breadth of software engineering, it has read more code, seen more patterns, and been exposed to more trade-offs than I will encounter in my career. On any topic where I have not spent serious time, its judgement is probably broader than mine.&lt;/p&gt;

&lt;p&gt;The second: &lt;strong&gt;I know more about my context than the AI ever will.&lt;/strong&gt; The constraints I have not written down. The team agreements that live in old chat threads. The legacy decisions that look weird until you know who made them and why. The customer who reacts badly to one specific phrase. None of this is in the model's training data, and none of it gets in until I put it there.&lt;/p&gt;

&lt;p&gt;Both are almost always true at the same time, and they pull in different directions. Treating the AI as a search engine wastes the first one. Treating it as an oracle ignores the second. The working pattern I have settled on is built to honor both. I lean on the AI for what it knows. I do not lean on it for what only I know.&lt;/p&gt;

&lt;h2&gt;Questions reveal understanding; instructions assume it&lt;/h2&gt;

&lt;p&gt;The cleanest analogy I have for this is working with a trainee. When I want to know what a trainee has actually understood, I do not hand them an instruction and watch the result. By the time the result is in front of me, the signal is muddy. Did they get it right because they understood, or because the instruction was tight enough that any careful person would have produced the same thing? Did they get it wrong because they misunderstood the task, or because they misunderstood one specific word I used?&lt;/p&gt;

&lt;p&gt;So I ask questions instead. From different angles. "How would you approach this?" "What would you watch out for?" "Where do you see the risk?" Their answers tell me, very quickly, where the gap is and what context I need to share before they start.&lt;/p&gt;

&lt;p&gt;With an AI it is the same move, but the asymmetry runs the other way. I am not testing whether the AI knows the field. I am testing whether its understanding of &lt;em&gt;my situation&lt;/em&gt; is good enough that I can safely lean on its broader knowledge. Questions are how I find out. Instructions skip that step and hope for the best. Hope is not a working method.&lt;/p&gt;

&lt;h2&gt;Direction before detail&lt;/h2&gt;

&lt;p&gt;The most expensive mistake I make with AI is not a wrong answer. It is reviewing a detailed answer while the direction is still open.&lt;/p&gt;

&lt;p&gt;When the direction is settled, detail review is fast. You know what you are looking for. You know what should be there and what should not be. The eye lands on the wrong things quickly because the right things have a clear shape.&lt;/p&gt;

&lt;p&gt;When the direction is still open, detail review is a trap. Important shaping decisions hide inside what looks like noise. A default value, a phrasing, a chosen abstraction, a quietly skipped concern. Each one is small enough to slide past a reviewer who is busy parsing whether the surface looks correct. And once one of them slides past, everything downstream is built on top of it. By the time the misread surfaces, you are not fixing a line. You are throwing the result away.&lt;/p&gt;

&lt;p&gt;So I do not let the AI go deep until the direction is locked. I shape the high-level concept with questions. I check that the AI is heading where I want it to head. I correct early, when correcting is cheap and the result is still small. Only then do I fire it up. Detail review after a direction lock is a different activity from detail review before one. The first is verification. The second is archaeology.&lt;/p&gt;

&lt;h2&gt;Bad output is a mirror&lt;/h2&gt;

&lt;p&gt;The hardest habit to retrain was my reaction to bad AI output. My first instinct used to be frustration with the model. The output was wrong, the AI was the one that produced it, the source of the problem felt obvious.&lt;/p&gt;

&lt;p&gt;It was almost never the source of the problem.&lt;/p&gt;

&lt;p&gt;When the AI produces something visibly off, the cause is rarely a capability gap. Far more often, it is a context gap on my side. I described the situation badly because I had not understood it well enough myself. The unwritten constraint stayed unwritten. The trade-off I had quietly resolved in my head never made it into the prompt. The AI made a reasonable guess where I had left a hole, and the guess was wrong because the hole was where the most important information should have been.&lt;/p&gt;

&lt;p&gt;Now I treat bad AI output as a diagnostic. If I cannot get a useful answer out of the AI, I almost certainly cannot produce a clean answer myself. The exercise of describing the problem well enough that the AI can engage with it is the same exercise as understanding the problem well enough to solve it. When that exercise fails, the failure is information. It tells me where to go think.&lt;/p&gt;

&lt;h2&gt;Stop telling. Start asking.&lt;/h2&gt;

&lt;p&gt;The pattern fits in one sentence. Stop telling the AI what to do. Start asking what it would do.&lt;/p&gt;

&lt;p&gt;What changes is not the AI. The AI is the same on both sides of the change. What changes is which of your two advantages you are using. Instructions lean on your context and waste the AI's knowledge. Questions lean on the AI's knowledge and force you to be honest about your context. The output gets better. And, quietly, so does your own thinking - because you cannot ask a good question about a problem you do not understand.&lt;/p&gt;

&lt;p&gt;Next time you sit down to use AI for real work, resist the urge to instruct. Describe the problem. Ask what it would do. Read the answer like you would read a colleague's first sketch. Push back where your context says push back. Agree on direction before you agree on detail.&lt;/p&gt;

&lt;p&gt;Then, only then, let it loose.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>development</category>
      <category>productivity</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Refactoring My Brain: Why Writing Became My Favorite IDE (Yes, Really!)</title>
      <dc:creator>Manuel Holzrichter</dc:creator>
      <pubDate>Tue, 06 May 2025 09:30:00 +0000</pubDate>
      <link>https://dev.to/krippke/refactoring-my-brain-why-writing-became-my-favorite-ide-yes-really-3jen</link>
      <guid>https://dev.to/krippke/refactoring-my-brain-why-writing-became-my-favorite-ide-yes-really-3jen</guid>
      <description>&lt;p&gt;Hey fellow coders and keyboard ninjas!&lt;/p&gt;

&lt;p&gt;We all know the feeling, right? Deep in the trenches of code, fifth cup of coffee kicking in, the screen a hypnotic kaleidoscope of brackets and semicolons. We &lt;em&gt;love&lt;/em&gt; solving problems with code. So much so, that we sometimes forget... well, everything else. Especially writing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk13hys8h1us6jjmhzo1o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk13hys8h1us6jjmhzo1o.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For a long time, writing felt like that annoying linter warning you just want to ignore with &lt;code&gt;// eslint-disable-next-line&lt;/code&gt;. "Why should I write down what I'm doing? The code is the ultimate truth! Comments are for newbies, and nobody reads docs anyway!" Sound familiar? I was fully bought into the "code-is-self-documenting" cult. Spoiler alert: it wasn't self-documenting. And that cost me time. A lot of time.&lt;/p&gt;

&lt;h2&gt;The Bug in the Thinking Process: When Phrasing Falters&lt;/h2&gt;

&lt;p&gt;I noticed something: Whenever I tried to put an idea or concept into words (even just for myself) and found myself stumbling, it wasn't just a lack of vocabulary. It was a bug report for my own thought process! A clunky phrase, a sentence that felt like spaghetti code – that was a clear indicator: &lt;strong&gt;The thought itself hadn't finished compiling yet.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This struggle for the right words forces you to debug the idea. You have to clarify variables (terms), define functions (connections), and rethink the architecture (the structure of the thought). Until it feels "right." And lo and behold: If you can &lt;em&gt;write&lt;/em&gt; an idea down clearly and understandably, chances are pretty darn good that it's conceptually sound. It's like a passing unit test for your brain, &lt;em&gt;before&lt;/em&gt; you even commit a single line of code.&lt;/p&gt;

&lt;h2&gt;From Code-First Junkie to Concept-Writer (A Conversion Story)&lt;/h2&gt;

&lt;p&gt;My old workflow: Problem -&amp;gt; Caffeine -&amp;gt; Vague idea in head -&amp;gt; Hands on keyboard -&amp;gt; Hammer away until it (supposedly) worked. The problem? Often, I only realized my fundamental assumption was flawed when I was already deep in dependency hell, trying to tame the last obscure edge-case monster. Finding and fixing these conceptual errors late in the process felt like trying to replace a house's foundation while you're already putting on the roof tiles. Time cost: Enormous. Frustration level: &lt;code&gt;Integer.MAX_VALUE&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Today, I do it differently. Problem -&amp;gt; Caffeine -&amp;gt; &lt;strong&gt;Concept in Text Form!&lt;/strong&gt; I try to sketch out the solution first on half a page of prose. And this is where the magic happens: Where can't I articulate something clearly? &lt;em&gt;Those&lt;/em&gt; are the weak spots. I iterate on this text, refining the phrasing until it reads logically and completely. You can refactor half a page of text in minutes. A complex codebase? Don't ask...&lt;/p&gt;

&lt;h2&gt;LLMs: My New Pair-Programming Partner (Who Can Type Fast)&lt;/h2&gt;

&lt;p&gt;This "Write-First" principle has proven invaluable, especially when dealing with our new friends, the Large Language Models. LLMs aren't magic crystal balls that conjure code out of thin air (even if it sometimes feels like it). They're more like an extremely well-read, lightning-fast intern. What they do brilliantly: Extract relevant knowledge from a massive dataset (basically, the internet) and tailor it to our &lt;em&gt;specific&lt;/em&gt; requests.&lt;/p&gt;

&lt;p&gt;My approach: I &lt;em&gt;write&lt;/em&gt; the high-level concept, the architecture, the core logic – essentially the blueprint and the important structural calculations. Then I hand this clear, thought-out plan to the LLM and say, "Okay, now paint in the details. Generate the boilerplate code, research these API specifics, draft the documentation structure." The "painting" is often the time-consuming part. By providing the clear structures, I let the LLM handle the grunt work. Efficiency boost? Definitely! But only because the groundwork – the clear thinking and writing – was already done. Without a clear prompt, you often just get eloquent nonsense back. Garbage In, Garbage Out applies to AI too.&lt;/p&gt;

&lt;h2&gt;Why Your &lt;code&gt;System.out.println("Meeting outcome captured!");&lt;/code&gt; Isn't Enough&lt;/h2&gt;

&lt;p&gt;Let's be honest: What's discussed in a meeting is often ancient history the moment the last participant leaves the room (or the Zoom call). "Didn't we agree to do it differently?" Sound familiar? The spoken word is fleeting, like an unsaved buffer. And it scales terribly. Try getting ten people on the same page by telling each one the story individually.&lt;/p&gt;

&lt;p&gt;Writing solves this. A well-formulated document, a clear concept, an architecture sketch in text form – that's persistent. It can be shared. It enables asynchronous collaboration. New team members can get up to speed. Decisions are traceable. It's like a well-maintained Git repository for thoughts.&lt;/p&gt;

&lt;h2&gt;Free Your Mind: &lt;code&gt;git commit -m "Thought process checkpointed"&lt;/code&gt;&lt;/h2&gt;

&lt;p&gt;Our brain is amazing, but it's not a multi-core marvel with infinite RAM when it comes to active contexts. Keeping every thought, every open task, every vague idea in your head eats up mental capacity. You know that feeling when a detail for Project X keeps you awake at 3 AM, even though you're supposed to be working on Project Y?&lt;/p&gt;

&lt;p&gt;Writing it down is like a &lt;code&gt;git commit&lt;/code&gt; for your thoughts. As soon as you've written down an idea, a plan, or a problem to the point where you know you can pick up the thread later, your brain can let go. It trusts that the information is safe. That frees up space! It's like closing 50 browser tabs because you know the links are saved in your bookmarks. Written thoughts free up your head and make you ready for the next challenge – or at least for a more restful sleep.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9nn70n904k3euwip22uc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9nn70n904k3euwip22uc.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;From Thought-Chaos to Concept-Canvas&lt;/h2&gt;

&lt;p&gt;An unspoken, unwritten idea is like a ghost. It floats around, vague, undefined. You spin your wheels mentally, not really getting anywhere. Formulating and writing down that idea is like the first brushstroke on a blank canvas. It's the &lt;code&gt;mkdir my-new-project &amp;amp;&amp;amp; cd my-new-project&lt;/code&gt; moment for your creativity.&lt;/p&gt;

&lt;p&gt;You define the basic structures. You give the idea form. And suddenly, you see not only what's there, but also what's missing. You create a framework within which new, more detailed thoughts can unfold. Without that first step, the canvas stays empty, and the idea remains just a fleeting notion.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2zozpm4bzoe8qujov7g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2zozpm4bzoe8qujov7g.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;





</description>
      <category>roleof</category>
      <category>workflow</category>
      <category>productivity</category>
      <category>programming</category>
    </item>
    <item>
      <title>The role of iterating</title>
      <dc:creator>Manuel Holzrichter</dc:creator>
      <pubDate>Sun, 04 Feb 2024 11:00:00 +0000</pubDate>
      <link>https://dev.to/krippke/the-role-of-iterating-lea</link>
      <guid>https://dev.to/krippke/the-role-of-iterating-lea</guid>
      <description>&lt;p&gt;Experience software development through my eyes. From the initial hesitant steps to today's successes, my story is marked by insights, setbacks, and crucial turning points. Explore how I evolved from an inexperienced developer to an agile thinker, grasping the significance of small, iterative steps along the way.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.manuel-holzrichter.de%2Fassets%2Fimages%2Fbusy-developer.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.manuel-holzrichter.de%2Fassets%2Fimages%2Fbusy-developer.png" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My first major project began two years after I started in software development. Together with my project manager, we discussed the tasks at hand, and I gradually implemented the features. At that time, without the use of &lt;a href="https://www.manuel-holzrichter.de/2024/01/11/the-role-of-tests.html" rel="noopener noreferrer"&gt;automated tests&lt;/a&gt; and their benefits, the problems with this approach only became apparent relatively late in the process.&lt;/p&gt;

&lt;p&gt;Difficulties always arose when the developed features were first encountered by future users. It quickly became apparent that assumptions made at the start of the project had either been misunderstood or not fully taken into account. The consequences of this lack of information led to extensive rework, which had not been accounted for in the original project schedule. What does unplanned work mean? Right, a stressful time with lots of overtime.&lt;/p&gt;

&lt;p&gt;The overtime was motivation enough to avoid these problems in my future work. My diagnosis was deceptively simple, but in retrospect wrong: features had been implemented based on the initial problem statement and requirements, and once end users started working with them, it became apparent that certain aspects had not been considered in the planning phase. So, to avoid this, the planning phase became more intensive and detailed to ensure that nothing was overlooked. While these efforts addressed the most significant planning errors over the course of subsequent projects, the result was essentially the same: once users started utilizing the features, additional aspects emerged, leading to extensive adjustments.&lt;/p&gt;

&lt;p&gt;Over time, the idea that it is not possible to know everything in advance became firmly established.&lt;br&gt;
Software development is like walking in a fog: we can't see far, and certainly not the end. We have to take small steps, reassessing each time what we have encountered and which path we want to take. But what does this look like in day-to-day development?&lt;/p&gt;

&lt;p&gt;A new approach emerged: instead of implementing all the necessary functionality at once, we focused on implementing only what was absolutely necessary to deliver the core value. This partial solution was presented to end users, and the insights gained from their feedback were fed directly into the next phase of development.&lt;/p&gt;

&lt;p&gt;Up to that point we had followed a model in which a single central development instance was set up at the start of the project and kept running for as long as the customer operated the resulting applications on their premises. This became a problem as the number of presentations to end users grew: every demo meant that development on the central instance had to come to a temporary halt, which we could not afford in the long run. So we decided that each developer would implement their functionality locally and then gradually integrate it into the central development instance. However, setting up a local installation had become so involved that this was considered too costly: it was not practical for a developer to spend four hours setting up a local development environment just to develop a small feature.&lt;/p&gt;

&lt;p&gt;The clear new goal was: it should be possible to create a local development environment with just one command. To my surprise, this was achieved relatively quickly, and over time we began to see the positive effects. It was now effortless to spin up a demo instance or run tests with the client at short notice. Many tasks that had previously seemed cumbersome could now be performed with ease, because an application at the latest development stage was only a simple command away.&lt;/p&gt;
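&lt;p&gt;To make this concrete, here is a minimal sketch of what such a one-command setup can look like, using Docker Compose. The service names and images are illustrative assumptions, not taken from the original project:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# docker-compose.yml -- hypothetical sketch of a one-command dev environment
# Start everything with: docker compose up
services:
  app:
    build: .            # build the application from the local Dockerfile
    ports:
      - "8080:8080"     # expose the app on localhost:8080
    depends_on:
      - db              # start the database container first
  db:
    image: postgres:16  # a throwaway database, recreated on each run
    environment:
      POSTGRES_PASSWORD: dev-only
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The specific tooling is beside the point: any mechanism that reduces "set up a development environment" to a single command shortens every iteration that follows.&lt;/p&gt;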

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.manuel-holzrichter.de%2Fassets%2Fimages%2Fcelebrating-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.manuel-holzrichter.de%2Fassets%2Fimages%2Fcelebrating-2.png" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This experience made me realise that it is worth optimising every element that lengthens a development iteration. The faster I can complete an iteration, the more efficient I can be and the more I can achieve in my working time. With that in mind, I focused on the most time-consuming and resource-intensive elements of a development iteration first.&lt;/p&gt;

&lt;p&gt;I am happy to report that overtime is no longer a part of my daily routine. By optimising the efficiency of our development iterations, we are able to seamlessly integrate customer insights into the next iteration without the need for extensive time resources. These advances allow us to navigate through the fog in small steps and effortlessly change direction when needed.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;The evolution of my approach to software projects has changed significantly over time. Initially, the focus was on intensive planning, but the realisation that it is impossible to know everything in advance led to a more agile way of working.&lt;/p&gt;

&lt;p&gt;The introduction of an iterative approach, where only what is needed is implemented, makes it possible to respond to end-user feedback at an early stage and adapt development accordingly. Moving to local development environments with a single command has not only made the developers' work easier, but has also enabled more flexible presentation and testing with customers.&lt;/p&gt;

&lt;p&gt;The key finding is that the continuous improvement of processes and the optimisation of time-consuming aspects of development iterations lead to a more efficient way of working. This has not only eliminated the need for overtime, but also improved the ability to adapt to customer requirements. The analogy of walking through the fog illustrates that you cannot see everything in advance, but you can move forward in small steps and change direction flexibly.&lt;/p&gt;

</description>
      <category>experience</category>
      <category>story</category>
      <category>personal</category>
      <category>roleof</category>
    </item>
    <item>
      <title>The role of tests</title>
      <dc:creator>Manuel Holzrichter</dc:creator>
      <pubDate>Sun, 28 Jan 2024 19:34:59 +0000</pubDate>
      <link>https://dev.to/krippke/the-role-of-tests-3n7h</link>
      <guid>https://dev.to/krippke/the-role-of-tests-3n7h</guid>
      <description>&lt;p&gt;Welcome to my personal journey of 12 years as a software developer. In this retrospective, I would like to share my development from the beginning until today, with a special focus on the transformative role of automated tests. The term 'test' in the title refers to the very automated tests that have significantly influenced the way I work and the quality of my software. Let's look at the highs and lows of this journey together, how I have overcome challenges and what I have learnt along the way.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.manuel-holzrichter.de%2Fassets%2Fimages%2Fstruggling-developer.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.manuel-holzrichter.de%2Fassets%2Fimages%2Fstruggling-developer.jpg" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I started my journey as a developer 12 years ago. Looking back, my approach to developing code components was pretty disastrous. Based on a verbal description of a function, I started thinking about how I could implement it as a program. After the initial considerations, I started creating the components straight away. As soon as there were no more syntax errors to be found, I compiled the entire software project and started it locally on my PC. I navigated through the user interface to the place where the feature was used and clicked around. If an error occurred, I checked the log files, analysed the stack trace and tried to fix the error. This cycle of building code, compiling software, navigating the UI and troubleshooting was repeated until the feature seemed to work and no more errors occurred.&lt;/p&gt;

&lt;p&gt;After a few months of experience in developing and operating our application, I recognised the potential for improvement. Many of the errors were due to a library behaving differently than expected. The idea was to expand our knowledge of the libraries in use in order to reduce false assumptions. I started to study the source code of each library and understand how they work. A few months later, it became clear that the work was paying off. Fewer errors were occurring, and when they did occur, they could be identified and fixed more quickly.&lt;/p&gt;

&lt;p&gt;This improvement made another problem more obvious: the level of knowledge in our team varied widely. In order to support each other across projects, every colleague had to know all the assumptions and the behaviour of the libraries in use. Long handover meetings were not effective, and much of what was discussed was quickly forgotten. The implementation often contained errors that the responsible colleague then had to correct during the test phase. Having a colleague help out ultimately multiplied the implementation time; in retrospect, the conclusion was always the same: I wish I had done it all myself. One realisation grew from this experience: we have to assume that people will make mistakes.&lt;/p&gt;

&lt;p&gt;My projects always started with a customer problem. If it turned out that the problem could best be solved by software, it landed on my desk with a supposedly ready-made solution. Sentences like "Do it this way and that way and we'll have solved the problem" reassured me at first. The consultant knew exactly what was needed. Over the years, however, I started to break out in a cold sweat whenever I heard something like that. Why, you ask? I'll try to illustrate this with the following example:&lt;/p&gt;

&lt;p&gt;The customer has an appointment with our consultant. During this one-hour appointment, the customer explains their problem. The consultant has a proposal that sounds like it could solve the problem. A few days later, I have a meeting with the consultant in which the basic aspects of the required software are explained to me. Based on this information, I start with the development. Over the next three months, I have short meetings with the consultant from time to time to check on progress, always with the same result: it's going in the right direction, keep going. The day comes when all the required functions have been implemented and manually tested. The software works. The solution is presented to the customer. After two minutes, the customer says: "That's not at all what I need."&lt;/p&gt;

&lt;p&gt;This extreme example is meant to illustrate one thing: at the end of a project, the software will not be what was envisioned at the start. The extent of this deviation can be influenced, but that is not the subject of this article. The software must be able to change. Changes will come; they are the norm.&lt;/p&gt;

&lt;p&gt;It is precisely this changeability that my original way of developing software could not support. With every change, I would have to know what impact it would have on the other components of the software. Since humans are quite forgetful, mistakes will happen sooner or later. To be on the safe side, all aspects of the software would have to be retested manually after every change. That in turn requires the expected behaviour to be documented, along with descriptions of what is to be tested and how. Not to mention the time and effort all this work would involve.&lt;/p&gt;

&lt;p&gt;Over the years, I had repeatedly read about different kinds of automated tests. Unit tests, integration tests and end-to-end tests were theoretical concepts for me, nothing I came into contact with in my daily work. From time to time I tried to create automated tests for my components, but it always failed because I didn't know how to write tests within our software. Looking back, I can now say that the obstacle to unit tests was our software architecture: no boundaries were defined, and components were overloaded with responsibilities. Almost the entire software had to be initialised for a single unit test. We'll save the topic of software architecture for another post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.manuel-holzrichter.de%2Fassets%2Fimages%2Fcelebrating.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.manuel-holzrichter.de%2Fassets%2Fimages%2Fcelebrating.jpg" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Almost eight years after I started developing software, I was finally able to write tests. The positive effects quickly became apparent. After eight years without tests, you have gained enough experience to appreciate the benefits of automated tests:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tests are the definition of behaviour:&lt;/strong&gt;
When I create a component, I assign certain behaviours to it. For each expected behaviour, I write a test to ensure that the component does what I expect it to do. In this way, I record my thoughts at the time of creating the component in the form of the tests. Questions like "What was the task of this component again?" no longer arise. I can look at the tests and see what the component is supposed to do.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tests lead to a better API:&lt;/strong&gt;
Back then, I often only realised that a component's API was difficult to use when I came to use the component later. Since I started writing tests, I notice while creating the component whether the API is easy to use. I am motivated to make the API user-friendly because my own tests force me to use it myself.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tests as a guide for adjustments:&lt;/strong&gt;
When modifying existing software, the developer does not necessarily need comprehensive knowledge of how the software works in detail. The definitions that determine the behaviour of the components are comprehensively documented by tests. If a modification violates an original assumption, the failing test points the developer directly to the problem. The developer gains confidence that they are not breaking anything.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tests uncover conceptual problems early on:&lt;/strong&gt;
Software development is a binary field: something either works or it doesn't. When turning concepts and ideas, the products of human thought, into working software, conceptual hurdles become apparent at an early stage. Identifying these problems early significantly reduces the time required for implementation. Writing tests plays a crucial role in verifying that the components interact smoothly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tests help with scaling:&lt;/strong&gt;
Software without tests assumes that every developer involved knows the software in detail. As a result, a new developer needs a lot of time before they can make meaningful code contributions. Software with tests allows less experienced developers to contribute, because the required behaviour is ensured by tests. The existence of tests lets me define work packages and hand them over to colleagues with little overhead. A colleague's results can be evaluated quickly through their tests, and adjustments can be made with minimal effort if necessary.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tests lead to a better software architecture:&lt;/strong&gt;
Writing tests for software with a poor architecture is very time-consuming. If changing one component forces me to adapt hundreds or thousands of tests, my motivation to improve the design grows. If you find it difficult to write a test for a component, you will think about how to do it better. This often makes responsibilities clearer. Tests for software with a good architecture can be created quickly and easily.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tests are the basis for agile working:&lt;/strong&gt;
Software requirements change constantly, and automated tests help with adapting, fixing and extending the software. A system consists of thousands of components, each with its own responsibilities. Without automated tests, the only alternative is manual testing, which is simply not feasible in practice.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tests help to understand the behaviour of a library or framework:&lt;/strong&gt;
When I use a component from a library today, I write a few tests to check my assumptions. If my knowledge of the component is insufficient and my assumptions are wrong, the test fails. This gives me the opportunity to adjust my assumptions by reading the documentation or the source code. As soon as the test passes, I can assume that I have built up a sufficiently correct understanding. This also pays off over the library's life cycle within the software: if a newer version behaves differently than the original, a test fails and I, as the developer, can deal with it. This sense of security helps me keep the versions of the libraries I use up to date.&lt;/li&gt;
&lt;/ul&gt;
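
&lt;p&gt;To make the last point concrete, here is a minimal sketch of such a "learning test". The language and library are my choice for illustration, since the post names no stack: the assertions encode my assumptions about Python's built-in &lt;code&gt;urllib.parse&lt;/code&gt; module.&lt;/p&gt;

```python
# A "learning test": pinning down my assumptions about a library's behaviour.
# If any assertion fails, my mental model of the library is wrong (or the
# behaviour changed after an upgrade) and I know exactly where to look.
from urllib.parse import urlparse, urljoin

def test_urlparse_splits_scheme_host_path_and_query():
    parts = urlparse("https://dev.to/krippke?tab=posts")
    assert parts.scheme == "https"
    assert parts.netloc == "dev.to"
    assert parts.path == "/krippke"
    assert parts.query == "tab=posts"

def test_urljoin_replaces_only_the_last_path_segment():
    # My assumption: joining a relative reference replaces the final
    # segment of the path, not the whole path.
    result = urljoin("https://dev.to/krippke/post-1", "post-2")
    assert result == "https://dev.to/krippke/post-2"

if __name__ == "__main__":
    test_urlparse_splits_scheme_host_path_and_query()
    test_urljoin_replaces_only_the_last_path_segment()
    print("all assumptions hold")
```

&lt;p&gt;If a library upgrade ever changes one of these behaviours, the failing test points straight at the outdated assumption instead of letting it surface as a bug in production.&lt;/p&gt;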

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;To summarise, I can say that automated tests have not only led to higher code quality, but also to a more sustainable and efficient way of developing software. The insights gained have revolutionised my approach to projects and contribute significantly to my understanding of good software architecture. The decision to establish testing as an integral part of my development process proved to be the key to successful and future-orientated software development.&lt;/p&gt;

</description>
      <category>experience</category>
      <category>story</category>
      <category>personal</category>
      <category>roleof</category>
    </item>
  </channel>
</rss>
