<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Michael Larson</title>
    <description>The latest articles on DEV Community by Michael Larson (@mrlarson2007).</description>
    <link>https://dev.to/mrlarson2007</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F7932%2FZfvonHKC.jpg</url>
      <title>DEV Community: Michael Larson</title>
      <link>https://dev.to/mrlarson2007</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mrlarson2007"/>
    <language>en</language>
    <item>
      <title>I Wasted a Day on My AI's Bad Idea, And It Was Worth It</title>
      <dc:creator>Michael Larson</dc:creator>
      <pubDate>Sun, 03 Aug 2025 02:44:03 +0000</pubDate>
      <link>https://dev.to/mrlarson2007/i-wasted-a-day-on-my-ais-bad-idea-and-it-was-worth-it-4bao</link>
      <guid>https://dev.to/mrlarson2007/i-wasted-a-day-on-my-ais-bad-idea-and-it-was-worth-it-4bao</guid>
      <description>&lt;p&gt;It started with a YouTube tab open. I was watching the Pragmatic Engineer interview with Kent Beck, and something he said I agreed with:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Test driven development (TDD) is a “superpower” when working with AI agents. AI agents can (and do!) introduce regressions. An easy way to ensure this does not happen is to have unit tests for the codebase.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://newsletter.pragmaticengineer.com/p/tdd-ai-agents-and-coding-with-kent" rel="noopener noreferrer"&gt;Kent Beck - Pragmatic Engineer Podcast&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I see this time and time again in my own use of AI coding agents. At times they are extremely helpful and get the task done as asked; other times they go off the rails and end up breaking everything. The only tool I know of to help keep them on track is TDD, the "superpower" Kent Beck talked about during his interview.&lt;/p&gt;

&lt;p&gt;After watching the interview and noodling on how to keep agents on track, I created aiswarm, a tool designed to manage multiple AI agents, each with a specific persona and task, to automate software development. With GitHub Copilot as my pair programmer, I wanted to see if I could build a tool that would help me wrangle these agents, keep them focused, and maybe make my coding life a little easier.&lt;/p&gt;


&lt;h3&gt;
  
  
  LOSING THE THREAD: CONTEXT WINDOWS AND WANDERING AGENTS
&lt;/h3&gt;

&lt;p&gt;If you’ve spent any time with AI agents, you know the pain: the longer the context window grows, the more likely it is that instructions from the start are slowly forgotten. I’ve seen it with every model I’ve tried, even Claude Sonnet 4, currently the best model for coding. At first things are great: the agent writes a failing test, makes the test pass, and I give refactoring suggestions or do the refactoring myself. Life is great. But as my coding session drags on, the agent starts to forget, and I have to constantly remind it to follow instructions. A common command is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Please remember to always follow TDD as instructed in the copilot_instructions.md file!"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It's maddening at times! I knew that if I had a tool that could provide guard rails for the AI agent, I might have better success getting it to stick with TDD. I also knew many people had started experimenting with the idea of AI agent swarms. Could I marry those two things together?&lt;/p&gt;

&lt;h3&gt;
  
  
  THE PYTHON MCP FIASCO
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fcr8iv440r0afh6f670.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fcr8iv440r0afh6f670.png" alt="A cartoon developer conducting an orchestra of chaotic AI robots" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fueled by too much nitro cold brew coffee and the optimistic belief that I could get it done in a couple of hours, I started with an idea. My plan was to use GitHub Copilot to help me build an MCP tool that could call my CLI agent of choice, Gemini CLI, for code review. I had already created a pre-commit git hook that called Gemini CLI to review code before each commit. The problem was that it was far too slow when the diff was large. Even small diffs took time, because a new Gemini CLI instance was initialized on each commit, adding at least several seconds of lag every time I committed code.&lt;/p&gt;
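&lt;p&gt;For the curious, here is a minimal sketch of what that hook logic might look like. This is a hypothetical reconstruction, not the actual script: the &lt;code&gt;gemini -p&lt;/code&gt; invocation and the prompt wording are my assumptions.&lt;/p&gt;

```python
# Hypothetical sketch of a pre-commit review hook like the one described above.
# Assumes a `gemini` executable on PATH; the flag and prompt are illustrative.
import subprocess

REVIEW_PROMPT = (
    "You are a code reviewer. Review this diff for bugs and risky changes:\n\n"
)

def staged_diff() -> str:
    """Return the diff of what is about to be committed."""
    result = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True
    )
    return result.stdout

def build_review_command(diff: str) -> list[str]:
    """Build a one-shot, non-interactive Gemini CLI review command."""
    return ["gemini", "-p", REVIEW_PROMPT + diff]

def review_staged_changes() -> None:
    diff = staged_diff()
    if diff.strip():
        # A fresh Gemini CLI process initializes on every commit,
        # which is exactly where the several seconds of lag came from.
        subprocess.run(build_review_command(diff))
```

&lt;p&gt;Saved as &lt;code&gt;.git/hooks/pre-commit&lt;/code&gt; and made executable, something like this runs on every commit, so each commit pays the full CLI startup cost.&lt;/p&gt;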

&lt;p&gt;I wanted a way to run the code review on demand instead of on every git commit. The first experiment: could I turn this into an MCP tool using Python? I thought, "This should be pretty easy to get working in a couple of hours... I could even have the tool direct the agent to write a failing test, make the test pass, then refactor!" If I wrapped the CLI in an MCP tool, I could call all of these commands natively in my GitHub Copilot agent chat. This sounded like a great idea, and of course GitHub Copilot agreed as well.&lt;/p&gt;

&lt;p&gt;The issues started to pile up. GitHub Copilot, knowing less than I did about MCP tools and how they work, agreed with the plan and started to code. Up until we actually called the agent CLI, things were going great. Then it was time to finally integrate with Gemini CLI, and we kept running into an issue where I could not redirect stdout/stderr. I had GitHub Copilot create a simple CLI that called Gemini and redirected stdout/stderr, and that worked, so it was clear something in the MCP tool was blocking stdout/stderr. GitHub Copilot spent hours trying different fixes, but nothing worked.&lt;/p&gt;

&lt;h3&gt;
  
  
  SIMPLICITY WINS
&lt;/h3&gt;

&lt;p&gt;By the end of the day, I was tired and frustrated. I was frustrated with my AI agent, and frustrated that I seemed to have wasted an entire day trying to get something to work that had no hope of working in the first place. I was frustrated that, as usual, the AI just agreed with my ideas and went along with them. I posted my frustrations to LinkedIn that day:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;My vibe coding experiment failed yesterday. With everyone experimenting with multiple AI agents, I thought I could vibe code some automation script in Python that would make my life easier spawning agents. At first, things were going fine; then the agent started to make assumptions on what I wanted, which didn't match mine. Then, it forgot to correctly update our local pip package while testing and got stuck in a loop trying to fix a non-existent bug. I ended up with a very simple Python script but wasted so much time trying to get the AI to fix all its mistakes. These are the times I know I have a strong future ahead of me as a software engineer. How can anyone say AI agents can operate with no human involvement or supervision? Communicating intent to other humans is hard; getting an LLM to "understand" us is even harder.&lt;/p&gt;

&lt;p&gt;Michael Larson - LinkedIn&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As mentioned in the post, I wanted to try one more thing before I went to bed. What if, instead of trying to get the MCP tool working, I just started a Gemini CLI session with a starting prompt file? Within 15 minutes I had a working prototype in Python!&lt;/p&gt;
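&lt;p&gt;The prototype boiled down to something like this. Again a hypothetical sketch: the file names and the Gemini CLI &lt;code&gt;-i&lt;/code&gt; flag (start an interactive session seeded with an initial prompt) are assumptions on my part.&lt;/p&gt;

```python
# Hypothetical sketch of the 15-minute pivot: no MCP wrapper, just spawn
# Gemini CLI seeded with the contents of a starting prompt file.
import subprocess
from pathlib import Path

def build_launch_command(prompt_file: str) -> list[str]:
    """Build the command that starts Gemini CLI with a persona/task prompt."""
    return ["gemini", "-i", Path(prompt_file).read_text()]

def launch_agent(prompt_file: str) -> subprocess.Popen:
    """Spawn the agent as its own process so several personas can run at once."""
    return subprocess.Popen(build_launch_command(prompt_file))
```

&lt;p&gt;Because each agent is just a child process with its own starting prompt, spawning several personas side by side falls out almost for free.&lt;/p&gt;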

&lt;h3&gt;
  
  
  CONTINUING THE PIVOT
&lt;/h3&gt;

&lt;p&gt;The next day I was more motivated: I had a working idea and was no longer blocked by the MCP issues that had plagued me the day before. I also decided to switch to C#, since it is more familiar to me and I can build in C# faster than I can in Python, so I deleted all of the code from the previous day and started over.&lt;/p&gt;

&lt;p&gt;This time the agent understood me, and in a couple of hours I had a working prototype. I was able to have an agent implement a very small coding task, get that code reviewed, fix any errors found, and then move on to the next task. It was nice to see agents in a more confined context no longer going off the rails.&lt;/p&gt;

&lt;h3&gt;
  
  
  THE RESULTS
&lt;/h3&gt;

&lt;p&gt;With this setup, the agents stuck to their persona prompts and instructions and, crucially, to TDD. Sometimes the code wasn’t perfect, but it was easy to catch mistakes, either through my own review or my code review agent. My goal seemed accomplished: the agent stayed on task and stuck to TDD when writing new code!&lt;/p&gt;

&lt;h3&gt;
  
  
  THE NOT-SO-GREAT PARTS
&lt;/h3&gt;

&lt;p&gt;Of course, not everything is perfect. The process is still quite manual. For example, while a planning agent can drop an instruction file for a task, the aiswarm tool currently only feeds in the persona context when an agent starts. This means I have to manually tell the agent, "Your instructions are in this file..." to get it to work. It knows its role but has no idea what the immediate task is. A key improvement would be to enhance the tool to pass in both the persona and the task instructions, allowing the agent to start working without user intervention. Another issue: since I do not start Gemini CLI in YOLO mode, it asks for confirmation on almost every change. Since we are working in a git worktree, I would rather let the agents proceed and see what happens.&lt;/p&gt;

&lt;p&gt;When an agent finishes its task in a separate git worktree, there’s no automated way to merge that work back into the main branch. Furthermore, there's no mechanism for a potential planning agent to know when other agents have completed their tasks and can be closed. Sometimes Copilot tries to take over tasks I’d rather delegate. I’m thinking about writing a &lt;code&gt;copilot_instructions.md&lt;/code&gt; to clarify the workflow and keep everyone (and every agent) on track.&lt;/p&gt;

&lt;h3&gt;
  
  
  A CUSTOMIZABLE, AUTOMATED WORKFLOW
&lt;/h3&gt;

&lt;p&gt;My vision for aiswarm is to create a fully automated yet highly customizable development workflow. The default process would look something like this: a planner agent breaks down a feature into tasks, then launches an implementor agent to complete each task using strict TDD. Once the implementor is finished, a code-reviewing agent automatically inspects the work for quality and adherence to standards. Finally, the system prompts me for a final look. Once I give the green light, the planner agent merges the code from the git worktree back into the main branch.&lt;/p&gt;

&lt;p&gt;Beyond this default path, I want to build in flexibility. There should be an interactive mode for more hands-on collaboration with the agents. I also envision a "human-led" mode, where I write the initial code and then call in a specialized code-review agent for feedback—like having an on-demand code reviewer.&lt;/p&gt;

&lt;p&gt;The ultimate goal is to allow users to define their own workflows, mixing and matching agents and approval steps to fit their personal tastes and project needs. It’s about creating a system that gives the agents guard rails while allowing humans the flexibility to define their own workflows.&lt;/p&gt;

&lt;p&gt;I also want an init command that will allow the tool to create a local configuration folder with templates and instructions on how to use it.&lt;/p&gt;

&lt;h3&gt;
  
  
  CONCLUSION
&lt;/h3&gt;

&lt;p&gt;Building the aiswarm project was fun, even though at times I felt like tearing my hair out when GitHub Copilot did unexpected things. I love being able to experiment and try different ideas. The day I spent trying to build my first idea felt like a failure at first, but that failure taught me what not to build and gave me the pivot I needed to build the tool I have today. As Kent Beck says, keep experimenting with these tools!&lt;/p&gt;

&lt;p&gt;Please check out my tool &lt;a href="https://github.com/mrlarson2007/aiswarm" rel="noopener noreferrer"&gt;here on GitHub&lt;/a&gt; and tell me what you think!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tdd</category>
      <category>developertools</category>
      <category>vibecoding</category>
    </item>
    <item>
      <title>Which speeds up development more: AI Coding Agents or Pair Programming?</title>
      <dc:creator>Michael Larson</dc:creator>
      <pubDate>Wed, 23 Jul 2025 00:17:50 +0000</pubDate>
      <link>https://dev.to/mrlarson2007/which-speeds-up-development-more-ai-coding-agents-or-pair-programming-5030</link>
      <guid>https://dev.to/mrlarson2007/which-speeds-up-development-more-ai-coding-agents-or-pair-programming-5030</guid>
      <description>&lt;p&gt;AI is transforming software engineering, but how much does it really speed up development? Big tech companies claim that AI can boost code output by 30% or more. But does that mean teams are actually 30% more efficient? Are features reaching customers 30% faster? Let's dig in and find out.&lt;/p&gt;

&lt;p&gt;My knowledge of queue theory and the theory of constraints made me skeptical. Sure, developers might code faster with AI, but what about bottlenecks elsewhere in the pipeline? Could speeding up coding actually slow things down downstream? And what about tried-and-true practices like pair programming and trunk-based development: are they even faster than using pull requests and feature branches? I decided to put these ideas to the test with a simulation, using GitHub Copilot and some queue theory tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;Here's how I set up the simulation:&lt;/p&gt;

&lt;h3&gt;
  
  
  General Key Assumptions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Infinite backlog of tickets: Developers always have something to work on, and tickets are independent (no merge conflicts).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Each ticket takes about one day to complete.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After finishing a ticket, a developer immediately picks up a new one.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AI-assisted scenarios benefit from a 30% speedup in coding time.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Key Assumptions for Pull Request Scenarios
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Every ticket is submitted for review.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;About 75% of PRs get feedback that requires rework, with the time to fix randomly distributed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once rework is done, the code goes back into the review queue.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Key Assumptions for Pairing and Trunk-Based Development Scenarios
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The team uses trunk-based development, so there is no review queue; pairs push code directly to main.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The pairs wait about 6 minutes for automated tests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If defects are found, the pair fixes them before moving on.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pair programming has 60% the defect rate of solo work.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup lets us compare how different workflows—traditional PR, AI-enhanced PR, pair programming, and AI-enhanced pairs—affect lead times and throughput, with all other variables held constant.&lt;/p&gt;
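&lt;p&gt;To make the setup concrete, here is a toy Monte Carlo version of these assumptions. It is a simplified sketch, not the actual simulation in the repo linked at the end; the queue-wait and rework-time distributions are made up for illustration.&lt;/p&gt;

```python
# Toy Monte Carlo sketch of the scenario assumptions above. The constants
# mirror the stated setup; the uniform distributions are my guesses.
import random

CODE_HOURS = 8.0      # each ticket is about one day of coding
REWORK_PROB = 0.75    # 75% of PRs come back with feedback to address
AI_SPEEDUP = 0.30     # AI-assisted coding is 30% faster

def pr_lead_time(rng, ai=False):
    """Solo dev + pull request: review queue plus possible rework round-trips."""
    total = CODE_HOURS * ((1.0 - AI_SPEEDUP) if ai else 1.0)
    while True:
        total += rng.uniform(1, 4)        # hours waiting in the review queue
        if REWORK_PROB > rng.random():
            total += rng.uniform(0.5, 2)  # address feedback, then re-queue
        else:
            return total

def pair_lead_time(rng, ai=False):
    """Pair + trunk-based development: no review queue, just CI and fixes."""
    total = CODE_HOURS * ((1.0 - AI_SPEEDUP) if ai else 1.0) + 0.1  # ~6 min tests
    if 0.6 * REWORK_PROB > rng.random():  # pairs: 60% of the solo defect rate
        total += rng.uniform(0.5, 2)      # fix defects before moving on
    return total

def mean_lead_time(scenario, ai=False, n=10_000, seed=7):
    rng = random.Random(seed)
    return sum(scenario(rng, ai) for _ in range(n)) / n
```

&lt;p&gt;Even this crude model reproduces the qualitative pattern in the results: removing the review queue cuts lead time far more than an AI coding speedup does on its own.&lt;/p&gt;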

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;After running the simulation for 60 business days, here are the results for each scenario:&lt;/p&gt;

&lt;h3&gt;
  
  
  Results for Pull Request Workflow
&lt;/h3&gt;

&lt;p&gt;A total of 246 tickets were completed, with a mean lead time of about 23 hours. This serves as our baseline for everything else.&lt;/p&gt;

&lt;h3&gt;
  
  
  Results for Pull Request + AI Workflow
&lt;/h3&gt;

&lt;p&gt;Now our team gets an AI coding tool they love and starts using it right away, making them 30% faster at coding than before. This results in about a 23% increase in the number of tickets completed, but our lead time is still 23 hours. We also notice a lot more PRs waiting for review, and the wait time for feedback increases by 2.6 times! So while we're getting more done, it's putting increasing pressure on our pull request queue and the resources we have allocated for reviewing code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Results for Pair Programming + Trunk-Based Development Workflow
&lt;/h3&gt;

&lt;p&gt;Now imagine the team looks at their PR queue and sees that lead times are still high. They decide to pair up instead of working alone, and agree to trust each pair to push code directly into the main branch. Each pair is committed to fixing any build breaks immediately before moving on to other work. In this scenario, they complete 240 tasks, but their lead time drops to about 9 hours, and the time spent in rework is reduced by 79% compared to our baseline.&lt;/p&gt;

&lt;h3&gt;
  
  
  Results for Pair Programming + Trunk-Based Development + AI Workflow
&lt;/h3&gt;

&lt;p&gt;Our team now re-introduces AI coding tools, with the same 30% speedup as our other AI-enhanced scenario. Their lead time falls to 7 hours, and their rework time drops by 82% compared to our baseline. They also complete 316 tickets, the most of all the scenarios tested.&lt;/p&gt;

&lt;h3&gt;
  
  
  Overall Results
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu40d49i3haacbykz2w5e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu40d49i3haacbykz2w5e.png" alt="Overall results table comparing total tickets, mean lead time, rework cycles, PR queue time, and total rework time across the four scenarios" width="800" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Scenario&lt;/th&gt;&lt;th&gt;Total Tickets&lt;/th&gt;&lt;th&gt;Mean Lead Time (hours)&lt;/th&gt;&lt;th&gt;Avg Rework Cycles&lt;/th&gt;&lt;th&gt;Avg PR Queue Time&lt;/th&gt;&lt;th&gt;Total Rework Time&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Traditional PR&lt;/td&gt;&lt;td&gt;246&lt;/td&gt;&lt;td&gt;23.06&lt;/td&gt;&lt;td&gt;2.47&lt;/td&gt;&lt;td&gt;0.68&lt;/td&gt;&lt;td&gt;1259.30&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;AI-Enhanced PR&lt;/td&gt;&lt;td&gt;303&lt;/td&gt;&lt;td&gt;23.04&lt;/td&gt;&lt;td&gt;2.86&lt;/td&gt;&lt;td&gt;2.45&lt;/td&gt;&lt;td&gt;1369.03&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Pairing &amp;amp; TBD&lt;/td&gt;&lt;td&gt;240&lt;/td&gt;&lt;td&gt;9.29&lt;/td&gt;&lt;td&gt;0.65&lt;/td&gt;&lt;td&gt;0.00&lt;/td&gt;&lt;td&gt;269.06&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;AI-Enhanced Pairs&lt;/td&gt;&lt;td&gt;316&lt;/td&gt;&lt;td&gt;7.01&lt;/td&gt;&lt;td&gt;0.60&lt;/td&gt;&lt;td&gt;0.00&lt;/td&gt;&lt;td&gt;218.96&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;h2&gt;
  
  
  Implications
&lt;/h2&gt;

&lt;p&gt;So, what does all this mean for teams thinking about adopting AI coding tools?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AI coding tools alone won’t solve workflow bottlenecks. If your process is slowed down by reviews, handoffs, or queues, speeding up the coding step won’t make a big difference. The simulation showed that even with a generous 30% speedup, lead times barely changed when the review queue was still present. This makes sense—queue theory tells us that if we increase the arrival rate but do nothing to address the service rate, backlog will continue to increase.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pair programming offers both speed and predictability. By working together and skipping the review queue, the scenario that used pair programming and trunk-based development saw much faster and more consistent lead times. This means less waiting, more predictability, and a smoother flow of work.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Defect rates matter, but workflow matters more. While pair programming reduced defect rates in the simulation, the biggest gains came from removing bottlenecks. AI did not reduce defects in this model, and in some real-world studies, it may even increase rework.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optimize the process before investing in new tools. If you want to ship faster, focus on removing queues and unnecessary handoffs. Once your workflow is streamlined, then consider how tools like AI can help you go even faster.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
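&lt;p&gt;The first point is queue theory in miniature. Under a simple M/M/1 model, mean time in the system is W = 1 / (μ − λ), so pushing the arrival rate up against a fixed review capacity makes waits blow up. The numbers below are illustrative assumptions, not outputs of the simulation:&lt;/p&gt;

```python
# Illustrative M/M/1 example (assumed numbers): fixed review capacity,
# rising PR arrival rate. Mean time in system: W = 1 / (mu - lambda).

def mm1_time_in_system(arrival_rate, service_rate):
    """Mean time a job spends waiting plus being served in an M/M/1 queue."""
    if arrival_rate >= service_rate:
        return float("inf")  # the backlog grows without bound
    return 1.0 / (service_rate - arrival_rate)

# Reviewers clear 10 PRs/day; the team opens 8 PRs/day.
baseline = mm1_time_in_system(8, 10)       # 0.5 days per PR
# A 30% coding speedup pushes arrivals to 10.4 PRs/day, past review capacity:
# the queue never drains.
with_ai = mm1_time_in_system(8 * 1.3, 10)  # infinite
```

&lt;p&gt;This is why speeding up only the coding step can leave lead times flat or worse: the arrival rate rises while the service rate of the review queue stays put.&lt;/p&gt;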

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The recent &lt;a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/" rel="noopener noreferrer"&gt;METR research study&lt;/a&gt; shows that while AI can speed up several tasks developers do day to day, rework from fixing mistakes made by AI can consume more time than is saved elsewhere. When you also look at how effective pair programming and trunk-based development are at reducing lead time and rework, the sweet spot seems to be combining AI with pair programming.&lt;/p&gt;

&lt;p&gt;So often when we're working alone, we lose track of time and think, "Let me try one more thing, it's going to work this time!" But we end up wasting hours. This also happens with AI coding assistants: "If I prompt it one more time, then everything will be fixed!" With a human pairing partner, our pair can help us see the bigger picture and pull us out before we waste time on a dead end.&lt;/p&gt;

&lt;p&gt;If you want to use AI tools, experiment and use DORA metrics to see if they're really helping your developers. Listen to their frustrations and joys using AI tools. Adopt practices such as trunk-based development and pair programming. Use AI to help automate manual reviews that slow work and cause handoffs. Bottom line: don't just assume AI is going to be a miracle cure and fix all your problems. Experiment, but also pair this with good technical practices and observability.&lt;/p&gt;

&lt;p&gt;Feel free to look at my simulation &lt;a href="https://github.com/mrlarson2007/AI-Pairing-Lead-Time-Simulation" rel="noopener noreferrer"&gt;here on GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>development</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Interviews are hard</title>
      <dc:creator>Michael Larson</dc:creator>
      <pubDate>Fri, 24 Jul 2020 06:51:00 +0000</pubDate>
      <link>https://dev.to/mrlarson2007/interviews-are-hard-239i</link>
      <guid>https://dev.to/mrlarson2007/interviews-are-hard-239i</guid>
      <description>&lt;p&gt;I recently joined Microsoft last March as a Senior Software Engineer on Visual Studio org. The road to getting that job offer was a long and hard road that I wanted to briefly share my experience and what so that others that are out their looking right now might learn something, or at least some encouragement.&lt;/p&gt;

&lt;h2&gt;
  
  
  Starting my job search
&lt;/h2&gt;

&lt;p&gt;A year and a half ago, after two more members of my team got laid off, it was obvious it was time to leave my employer at the time. I loved the team, but it was clear the company was going into cost-cutting mode and was not a good place to be long term. So, I decided to start looking. At first it was slow going because I had to create an updated resume, make improvements to my LinkedIn profile, and start practicing for the dreaded technical interviews.&lt;/p&gt;

&lt;p&gt;Eventually I got my resume updated, went over my LinkedIn profile adding some more keywords to it, and started to practice some coding problems. I started to apply to a few places.&lt;/p&gt;

&lt;h2&gt;
  
  
  Interview Rounds
&lt;/h2&gt;

&lt;p&gt;I was able to get interviews at a few companies, some because of a referral, most through recruiters contacting me on LinkedIn. Since I was still working, and married, it was hard to find the time to prepare and schedule interviews, but I somehow managed to practice a little and make time to interview.&lt;/p&gt;

&lt;p&gt;My early attempts were not successful. At one place, an interviewer never really talked to me while I went off the deep end on the question he gave me. I had a feeling that despite doing well in all the other rounds, it would go nowhere, and I was right. While I felt I did well in other interviews, I didn't get any offers. Here is a sampling of some of the feedback I got:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;My coding skills were not strong&lt;/li&gt;
&lt;li&gt;Not senior enough for the senior software engineer position&lt;/li&gt;
&lt;li&gt;While they felt I was strong technically, they thought I would not be happy there&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After getting that feedback, I knew one thing: interviewers only get 45 minutes to an hour to talk to you and see you work. If you have a bad day, are nervous, forget something, get a bad interviewer, or any of a host of other reasons, you can get rejected, and that really is no reflection on your skills or experience.&lt;/p&gt;

&lt;p&gt;I spent time thinking back on some of the behavioral questions I had answered, how I could have answered them better, and how I could have shown more of my past experience. I also cut back on the time I was spending practicing programming problems. I had been spending too much time on technical questions, and I realized I didn't really want to work somewhere that gave me brain teasers requiring hours of study and having no practical application to the day-to-day work I would be doing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting offers
&lt;/h2&gt;

&lt;p&gt;After applying those lessons, and taking a break from interviewing, I scheduled another round of interviews, and this time I got offers back! At all the places I went, the questions were not brain teasers but more practical questions, the interviewers had real two-way conversations with me, and I was able to bring out a lot of my experience and how I could apply it to the job I was interviewing for. I eventually decided to take the job at Microsoft because they agreed to let me work from home and stay close to my extended family. I also did not feel like commuting anymore and was excited about the work I would be doing there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons learned
&lt;/h2&gt;

&lt;p&gt;I think the most important lesson was never to judge how skilled or "senior" I am as an engineer based on interview feedback. The fact is that interviewing is hard for everyone, and having people solve problems on a physical or virtual whiteboard is a flawed process. In fact, North Carolina State University and Microsoft found that the classic whiteboard interviews widely used today measure how well we deal with interview anxiety more than our actual skills (&lt;a href="https://news.ncsu.edu/2020/07/tech-job-interviews-anxiety/"&gt;https://news.ncsu.edu/2020/07/tech-job-interviews-anxiety/&lt;/a&gt;). So even if you have to play the game, never think that you're a bad engineer just because you failed an interview at BigTechCo.&lt;/p&gt;

&lt;p&gt;Second, I realized, as stated above, that I only have a short window to showcase my experience and skills. You can easily get into tangents that distract the interviewer and rob you of precious time. There is also one method I still have not mastered: the Situation, Task, Action &amp;amp; Result (STAR) method of answering questions. It is a good idea to think about common behavioral questions and how you can use your past experience to answer them using STAR. Even when I didn't get it quite right, I still found it valuable, since it kept me from getting trapped in tangents or other areas I did not want to talk about.&lt;/p&gt;

&lt;p&gt;I think the most important lesson, for when the tables are turned and it is my turn to interview someone, is to first show empathy and remember that the person I am interviewing is nervous. Hard problems or other brain-teaser questions might be tempting to use, but they would not really tell me much about the person I am interviewing and may even exclude an entire class of applicants. There are better ways of interviewing; we can do better, and it really starts with each person doing the interviews.&lt;/p&gt;

&lt;p&gt;I hope this helps, especially now with so many people looking for work or just joining the tech workforce. You’re not alone!&lt;/p&gt;

</description>
      <category>career</category>
      <category>development</category>
      <category>interview</category>
      <category>hiring</category>
    </item>
    <item>
      <title>How do you start speaking at conferences? </title>
      <dc:creator>Michael Larson</dc:creator>
      <pubDate>Thu, 13 Jun 2019 17:16:18 +0000</pubDate>
      <link>https://dev.to/mrlarson2007/how-do-you-start-speaking-at-conferences-3d75</link>
      <guid>https://dev.to/mrlarson2007/how-do-you-start-speaking-at-conferences-3d75</guid>
      <description>&lt;p&gt;This year my goal is to at least give a lightning talk or longer at software engineering confrence of some sort. I am sure some of you out there have similar goals so wanted to start a disscussion to help all of us! If you are already giving talks how did you get started? What are some good conferences to try? Would trying local meetups be a good idea?&lt;/p&gt;

</description>
      <category>disscuss</category>
      <category>careerdevelopment</category>
      <category>engineering</category>
      <category>publicspeaking</category>
    </item>
    <item>
      <title>Getting Trapped as an Expert Beginner </title>
      <dc:creator>Michael Larson</dc:creator>
      <pubDate>Sat, 11 May 2019 06:56:51 +0000</pubDate>
      <link>https://dev.to/mrlarson2007/getting-trapped-as-an-expert-beginner-bgg</link>
      <guid>https://dev.to/mrlarson2007/getting-trapped-as-an-expert-beginner-bgg</guid>
      <description>&lt;p&gt;I really enjoyed reading this article:&lt;br&gt;
&lt;a href="https://daedtech.com/how-developers-stop-learning-rise-of-the-expert-beginner/"&gt;https://daedtech.com/how-developers-stop-learning-rise-of-the-expert-beginner/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The big takeaway from this article is how important it is to stay humble to prevent getting stuck as an expert beginner. I do think it's important to take pride in your work and have confidence, and many are dealing with imposter syndrome. On the other hand, there are many at the other extreme who have such a big ego they can never be wrong or listen to others. If we want to grow as developers, it's important to stay humble and be willing to listen to good ideas regardless of where they come from.&lt;/p&gt;

&lt;p&gt;This reminds me of this Bible proverb:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Pride is before a crash, And a haughty spirit before stumbling.&lt;br&gt;
Proverbs 16:18&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So remember: stay humble, keep listening, and keep learning. It's the best way to keep growing! Do you have any examples where humility helped you learn or grow? What do you think of the points made in the article above?&lt;/p&gt;

</description>
      <category>community</category>
      <category>career</category>
      <category>discuss</category>
      <category>personaldevelopment</category>
    </item>
    <item>
      <title>Cost of Code Ownership</title>
      <dc:creator>Michael Larson</dc:creator>
      <pubDate>Tue, 07 May 2019 20:27:38 +0000</pubDate>
      <link>https://dev.to/mrlarson2007/cost-of-code-ownership-3ekb</link>
      <guid>https://dev.to/mrlarson2007/cost-of-code-ownership-3ekb</guid>
      <description>&lt;p&gt;If you spent any time buying a car, I am sure you have done your research and asked: How much will this car cost to own? Will I finance the car with a loan from the manufacture or use a bank or credit union? How much will the insurance on this car cost? How much will it cost to maintain it? What is the deprecation? And many more questions. All of this factors into the total cost of owning a car.&lt;/p&gt;

&lt;p&gt;While we may spend a lot of time figuring out what the cost of owning a car is, have you ever stopped and thought, “The code I write each day, how much does it cost me? How much does it cost my team? How much does it cost the organization I write it for?” These are important questions we must face. If you look at the industry in general, often we prize the mythical “10x Dev” that can run circles around everyone else. The guy or gal that can get things done and write lots and lots of code. But should we be optimizing for this? Rewarding people for it?&lt;/p&gt;

&lt;h2&gt;
  
  
  Code should be viewed as a liability
&lt;/h2&gt;

&lt;p&gt;I am reminded of a famous quote from Edsger Dijkstra:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Why did he say this? One reason comes to mind: complexity. As we write more and more code, things get more complex. We see this all the time in the industry with greenfield projects. Have you ever wondered why startups can just get things done and run circles around larger companies? Why is everything so much easier in a greenfield project? The reason is that we don’t have all the baggage of an existing code base. Before we write a single line of code, our opportunities are endless. There is nothing to try to understand. There is nothing to integrate with. There is nothing to test. It’s all new and exciting!&lt;/p&gt;

&lt;p&gt;Fast forward a few years and what often happens? As we add more and more code, things get harder to understand. We can no longer keep a working model of the system in our heads. And if we have added new people to the project, this only compounds the complexity. Many of us, myself included, are learning as we go, which means we might not even know how to write code that fights this complexity, and we accidentally make things worse. Thus even skilled devs armed with the best engineering practices face growing complexity, simply because the system gets larger with every line of code we write.&lt;/p&gt;

&lt;h2&gt;
  
  
  Accidental complexity
&lt;/h2&gt;

&lt;p&gt;This takes us to another point: if just the act of adding code creates complexity, bad code makes the situation even worse. Dan North had this to say:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I call false dichotomy. I've said elsewhere the goal of software development is to "sustainably minimise lead time to business impact". You can't do anything sustainably without taking engineering seriously. Poor code quality is only non-user visible in the short term.&lt;br&gt;
— Daniel Terhorst-North (@tastapod) March 4, 2019&lt;br&gt;
&lt;a href="https://twitter.com/tastapod/status/1102570605986103297"&gt;https://twitter.com/tastapod/status/1102570605986103297&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Bad code quality does not just affect the developers; it affects the business. If the complexity is not managed and minimized through good software engineering practices, both lead times and throughput slow to a crawl. The once lightning-fast startup grinds to a halt as features take longer and longer to add. This costs the business money because of all the missed opportunities. It also starts to create a rift in organizations, as managers try to “fix” the issue by issuing deadlines and ultimatums. The developers get stressed and want to leave the toxic situation, and the business suffers even more as they start leaving for greener pastures. This creates a powerful feedback loop: either new people are thrown at the problem, or the team is forced by management to start cutting corners on testing and other practices that ensure quality. The few experienced people that remain end up fixing the messes others make, training all the new people, or both.&lt;/p&gt;

&lt;h2&gt;
  
  
  What can we do?
&lt;/h2&gt;

&lt;p&gt;At this point it might seem like a hopeless situation: as we add more code, things will just get worse. But there is a lot we can do. Here are a few things we will go over now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Everyone needs to be aware of the state of the code&lt;/li&gt;
&lt;li&gt;Uphold software engineering best practices&lt;/li&gt;
&lt;li&gt;Do not optimize for code production, focus on outcomes&lt;/li&gt;
&lt;li&gt;Remove features that no longer bring value, delete dead code&lt;/li&gt;
&lt;li&gt;Balance is needed&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Everyone needs to be aware of the state of the code.
&lt;/h3&gt;

&lt;p&gt;The business needs a clear understanding of what shape each major area of the code is in. Far too often in agile circles, features are all treated the same. Feature1 and Feature2 are ranked only in terms of value to the business, while the complexity of adding either of them is not well understood by management. Clear and open communication is needed here, and we need clear and understandable ways to communicate the state of each major area of the code to the business. Michael Feathers gives some good ideas about how to communicate the level of technical debt to the business in this talk &lt;a href="https://youtu.be/7hL6g1aTGvo"&gt;here&lt;/a&gt;. One idea: use hypothetical features you would add to each major area of the code base and track the devs' estimates for them over time. As you change the system, you can see how those changes affect the estimates.&lt;/p&gt;

&lt;p&gt;Along with this clear communication, both developers and management need to take a hard look at why we ended up with a mess in the first place. Is management giving unrealistic deadlines? Are managers telling developers to cut corners to get things finished? Are the software developers giving overly rosy estimates? At the end of the day, organizational and team culture have a powerful effect on the kind of code software developers write. If you want to make this better, you have to have the right culture in place!&lt;/p&gt;

&lt;h3&gt;
  
  
  Uphold best engineering practices
&lt;/h3&gt;

&lt;p&gt;As already mentioned, the worst thing we can do when under schedule pressure is start cutting corners. Managers might think that stopping refactoring, cutting back on unit tests, and other such measures will help get features out the door. While this may work in the very short term, such decisions can have a lasting impact on the project. I can't say this better than Mark Seemann:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I've also often observed managers professing preference for a particular developer who 'gets things done.'&lt;/p&gt;

&lt;p&gt;What that developer often does is to declare a lot of tasks to be 'done', leaving others to clean up his/her mess.&lt;br&gt;
— Mark Seemann (@ploeh) March 3, 2019&lt;br&gt;
&lt;a href="https://twitter.com/ploeh/status/1102301536443600897"&gt;https://twitter.com/ploeh/status/1102301536443600897&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What is the answer? Test driven development is so important here. First, it ensures that we have a good suite of unit tests. This in turn gives us the safety net needed to refactor. Often I see developers afraid to touch certain areas of the code and unwilling to refactor them. Why? The code is hard to understand and has few or no unit tests, which makes it easy to break. So we must be disciplined about unit testing, and the one thing I have seen help with this is test driven development. Because we put tests first and depend on them when making changes, developers are incentivized to ensure first that their code is testable, and second that the tests are quick.&lt;/p&gt;
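&lt;p&gt;To make that test-first loop concrete, here is a minimal sketch in Python using the standard &lt;code&gt;unittest&lt;/code&gt; module. &lt;code&gt;total_price&lt;/code&gt; and its tests are hypothetical names made up for illustration; in practice the tests are written first and fail (red), then just enough code is written to make them pass (green), and then you refactor with the tests as the safety net.&lt;/p&gt;

```python
import unittest

# Production code, written only after the tests below existed and failed.
# (Hypothetical example: total_price is a made-up function for illustration.)
def total_price(prices, discount=0.0):
    """Sum a list of prices and apply a fractional discount."""
    return sum(prices) * (1.0 - discount)

# In TDD these tests come first: red, then green, then refactor.
# Run with: python -m unittest <module_name>
class TotalPriceTests(unittest.TestCase):
    def test_sums_prices(self):
        self.assertEqual(total_price([1.0, 2.0, 3.0]), 6.0)

    def test_applies_discount(self):
        self.assertAlmostEqual(total_price([10.0, 10.0], discount=0.1), 18.0)
```

&lt;p&gt;Because the function was shaped by its tests, it is trivial to call in isolation, which is part of what keeps the suite fast.&lt;/p&gt;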

&lt;p&gt;But why do we need to spend so much time refactoring? For the same reason we need drafts in writing. The first draft is rarely our best work, and it often takes a few revisions before we come up with something that is not only better, but easier to understand. The same is true of our code. Often the first draft is not our best work; we were just focusing on getting the code to work! Now we need time to make it understandable and maintainable.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Software development productivity has little correlation to how fast you can produce code.&lt;/p&gt;

&lt;p&gt;It's closer related to the total cost of ownership of that code.&lt;/p&gt;

&lt;p&gt;Some code, you can write in one hour, and then proceed to waste days or months maintaining and troubleshooting.&lt;br&gt;
— Mark Seemann (@ploeh) March 1, 2019&lt;br&gt;
&lt;a href="https://twitter.com/ploeh/status/1101442797436055552"&gt;https://twitter.com/ploeh/status/1101442797436055552&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Remember what Mark Seemann is saying here: just because we can churn out code fast does not mean we want to or should. We need time to refactor and revise our code. If we just rush through things, there is a good chance we will end up creating code that no one understands, increasing the complexity, and creating more work for our fellow teammates.&lt;/p&gt;

&lt;p&gt;We also have to consider the architecture of our system. If we have a small team, it might make sense to have a mono repo everyone works out of to keep the complexity low. If we have several teams, we may want to pursue a micro-service architecture that lets each team focus on the parts of the system they own, with well-defined contracts for communicating with the services other teams own. Look at your situation and take the time to pick the solution that will help minimize the complexity you're facing.&lt;/p&gt;

&lt;p&gt;One final point to keep in mind: focus on your needs today, not what might happen in the future. Often I see developers get caught up in making code handle a thousand different cases when we really only care about one of them right now. Write only the code you need to solve the problem at hand today. Don’t try to anticipate future needs that may never come. If you guess wrong, you end up with a lot of dead code that only confuses people.&lt;/p&gt;

&lt;h3&gt;
  
  
  Do not optimize for code production, focus on outcomes
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Do not optimize for code production, ever.&lt;/strong&gt; Why such a strong statement? Goodhart's law states:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/Goodhart%27s_law"&gt;https://en.wikipedia.org/wiki/Goodhart%27s_law&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This means that once a measurement is used to evaluate anyone's performance, it will eventually become useless, because people will start to optimize for the measurement. So if we reward developers for the lines of code they produce in a day, or the number of commits they make, we naturally encourage them to produce more code. They will do this even if it harms the project or destroys teamwork and pairing, and they might not even be aware they are doing it. History is filled with such examples. This is also called the cobra effect, and if you want to hear what happened when the British tried this in India, you can listen to the Freakonomics podcast on it &lt;a href="http://freakonomics.com/podcast/the-cobra-effect-a-new-freakonomics-radio-podcast/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Focus on outcomes for the business and the problems you are trying to solve, and think about ways to measure them when you release your software. At the end of the day, we are trying to help someone solve problems with the software we are creating. If you want to get started in the right direction, Jez Humble gives a lot of great ideas in his GOTO conference talk &lt;a href="https://www.youtube.com/watch?v=2zYxWEZ0gYg"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Remove features that no longer bring value, delete dead code
&lt;/h3&gt;

&lt;p&gt;We don’t want to be code hoarders, trying to keep every line of code and leaving commented-out code throughout our code base. Source control is our friend; we can get this code back if we need it later. The important thing is that we remove dead code. Less code is less complexity for everyone.&lt;/p&gt;

&lt;p&gt;At the same time, we need to constantly take a long, hard look at the feature set of our application. The natural progression of most teams is to just keep adding feature after feature, but each of these features adds to the complexity of the code. Remember that up to 2/3 of our features will either do nothing or have a negative impact on the business. Thus the business shouldn’t be hoarding features either. We need to be constantly thinking and analyzing: "Which features do I need? Which can I safely get rid of?" If we don't do this, we will eventually choke on features that no one is using, or worse, that are costing us lots of money.&lt;/p&gt;

&lt;p&gt;In this article &lt;a href="https://michaelfeathers.typepad.com/michael_feathers_blog/2011/05/the-carrying-cost-of-code-taking-lean-seriously.html"&gt;here&lt;/a&gt;, Michael Feathers proposes a radical solution: code should self-destruct and get deleted after a few months. This, he argues, would force teams and businesses to really consider what's important and find ways to keep only the software that users want or need.&lt;/p&gt;

&lt;h3&gt;
  
  
  Balance is needed
&lt;/h3&gt;

&lt;p&gt;In the end we must remember that this is software engineering. Everything has trade-offs, and balance is needed; there are extremes on either end. We are not arguing here that all complexity is bad. We can trade complexity in one area for an advantage elsewhere. The point is that we cannot let these decisions happen randomly or give them little thought. If we put practices in place to help us manage complexity, we can help ourselves and our teams make better software and deliver it with more agility. I hope this article is of use to you. Remember that we all need to think about the cost of our code, and that we can manage it!&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>legacycode</category>
      <category>technicaldebt</category>
      <category>programming</category>
    </item>
    <item>
      <title>Common Myths and Misconceptions of Test Driven Development</title>
      <dc:creator>Michael Larson</dc:creator>
      <pubDate>Fri, 06 Jul 2018 06:44:41 +0000</pubDate>
      <link>https://dev.to/mrlarson2007/common-myths-and-misconceptions-of-test-driven-development-jcn</link>
      <guid>https://dev.to/mrlarson2007/common-myths-and-misconceptions-of-test-driven-development-jcn</guid>
      <description>&lt;p&gt;This is my first ever post on dev.to! I have been thinking about doing this for some time, I finally had a good enough idea to share with everyone and the time. Now lets get down to the point. I love using test driven development have been using it for several years. I think it is a technique that every developer should at least be familiar with and understand what problems it is good at solving. &lt;/p&gt;

&lt;p&gt;This past week I read this article: &lt;a href="https://blog.usejournal.com/lean-testing-or-why-unit-tests-are-worse-than-you-think-b6500139a009"&gt;https://blog.usejournal.com/lean-testing-or-why-unit-tests-are-worse-than-you-think-b6500139a009&lt;/a&gt;. While I liked the idea that we should take a careful look at how we test and what kinds of tests we use, the author seemed very negative towards TDD, and the hostility to TDD and unit testing in general continued in the comment section. I have read similar blog posts and articles in the past, and these experiences differ greatly from what I have seen with unit testing and TDD. It is clear that there are misconceptions and myths out there about TDD. Some come from well-meaning people who, in their zeal to teach others, leave a bad taste in people's mouths. Others come from misunderstanding what TDD is. So today I want to go over these misconceptions and myths and help people see that TDD does not deserve the bad rap many give it.&lt;/p&gt;

&lt;h2&gt;
  
  
  You must have 100% code coverage!
&lt;/h2&gt;

&lt;p&gt;I think where most people get this idea is from the first rule of TDD: you must start from a failing unit test, and only when you have a failing test are you allowed to write any production code. Many take this to mean you must have 100% code coverage, because you can't have code without tests! But this is wrong. There are cases where TDD is not a good fit. One example is working on the front end GUI of an application. Some code is really hard to test, but really easy and trivial to write. In those cases we would not write a unit test, and instead isolate that hard-to-test code so we can test everything else.&lt;/p&gt;

&lt;p&gt;This article from Martin Fowler about test coverage (&lt;a href="https://martinfowler.com/bliki/TestCoverage.html"&gt;https://martinfowler.com/bliki/TestCoverage.html&lt;/a&gt;) makes some good points. High code coverage does not mean your code is high quality. If we make code coverage the goal instead of a tool, we can end up with a high number of useless tests that push our coverage numbers up but do little to improve things, and may in the end make things worse. In the article, Martin Fowler makes this point:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Like most aspects of programming, testing requires thoughtfulness. TDD is a very useful, but certainly not sufficient, tool to help you get good tests. If you are testing thoughtfully and well, I would expect a coverage percentage in the upper 80s or 90s. I would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  You don't need to do any up front design!
&lt;/h2&gt;

&lt;p&gt;Many have heard that if you follow TDD, all your coding problems will be solved! Just follow the steps, and your code will come out well designed with no need for any upfront design! Others have complained that TDD causes damage because, in the pursuit of testability, the code ends up with too many layers, or they feel forced to do things just for the sake of the tests.&lt;/p&gt;

&lt;p&gt;The truth is you can write well designed software with or without TDD. Following TDD does not automatically mean your code will be well designed, nor that it will be poorly designed. You are the developer/engineer. When writing any code, we must spend some time upfront thinking about how we are going to organize the code, how the major parts of the application will be structured, and how they will communicate with each other.&lt;/p&gt;

&lt;p&gt;What TDD is good at is that once you at least have an idea of where you want to go, it guides you along the way and lets you test out those ideas before you end up writing a lot of code. It also forces you to constantly review your own work and watch for emerging patterns that might alter the design you originally came up with. You may even realize that your design is not going to work and you need to start over.&lt;/p&gt;

&lt;p&gt;Many of us in the TDD camp promote the use of dependency injection, which lets us control what dependencies are used in a test and inject mocks or stubs into the code we are testing so that the tests run fast. But is this a requirement of TDD? No, the only requirement is that the tests run fast and that you can run them reliably on your own computer. There is nothing in TDD that says we can't have integration or other kinds of tests in the mix. The point is you are in control of how you want to design and lay out your code. TDD just allows you to constantly reflect on and refactor your code.&lt;/p&gt;
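&lt;p&gt;As a quick illustration of that kind of dependency injection, here is a hypothetical sketch in Python. &lt;code&gt;Greeter&lt;/code&gt; and its injected &lt;code&gt;clock&lt;/code&gt; are made-up names for illustration; the point is that because the dependency is passed in rather than hard-coded, a test can substitute a stub and never touches the real system clock.&lt;/p&gt;

```python
import unittest
from unittest.mock import Mock

# Hypothetical example: Greeter receives its clock as a constructor
# argument (dependency injection), so tests can swap in a stub.
class Greeter:
    def __init__(self, clock):
        self._clock = clock  # injected dependency

    def greeting(self):
        # Uses whatever clock it was given: a real clock in production,
        # a stub in tests.
        hour = self._clock.current_hour()
        return "Good morning" if hour < 12 else "Good afternoon"

class GreeterTests(unittest.TestCase):
    def test_morning_greeting(self):
        stub_clock = Mock()
        stub_clock.current_hour.return_value = 9  # fixed, fast, reliable
        self.assertEqual(Greeter(stub_clock).greeting(), "Good morning")

    def test_afternoon_greeting(self):
        stub_clock = Mock()
        stub_clock.current_hour.return_value = 15
        self.assertEqual(Greeter(stub_clock).greeting(), "Good afternoon")
```

&lt;p&gt;Because the stub returns a fixed value instantly, the tests stay fast and deterministic, which is exactly what keeps a TDD suite runnable on every change.&lt;/p&gt;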

&lt;p&gt;Another misconception here is that you don't need to design your tests. This is wrong as well; you need to put the same thought and care into your test code that you put into your production code. Robert Martin wrote a great blog post on the need for design in tests (&lt;a href="http://blog.cleancoder.com/uncle-bob/2017/03/03/TDD-Harms-Architecture.html"&gt;http://blog.cleancoder.com/uncle-bob/2017/03/03/TDD-Harms-Architecture.html&lt;/a&gt;). Here is a great quote from the article:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Yes. That’s right. Tests need to be designed. Principles of design apply to tests just as much as they apply to regular code. Tests are part of the system; and they must be maintained to the same standards as any other part of the system.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Robert Martin goes on to make the point that many demos of TDD show a direct one-to-one relationship between the unit tests and the production code. In fact, many who start with TDD emulate this kind of testing. The issue is that if we go down this road, we end up with code that is highly coupled to its tests, which makes it painful to change things later. So tests, just like our production code, require careful thought to avoid these issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  If I use TDD I only need unit tests!
&lt;/h2&gt;

&lt;p&gt;Many feel that if you are using TDD, you don't need integration tests, or any other kinds of tests for that matter; you will be just fine with unit tests. The kinds of tests we write while using TDD help a lot with logical issues and ensure the code works as you, the developer, expect it to work. What they do not do is reveal issues when you hook everything up together. They do not show you that you are missing a requirement, or that you made a wrong assumption about an existing requirement.&lt;/p&gt;

&lt;p&gt;Those who have been using TDD for some time know what it is good at and what it is not, and recognize that unit tests are just one tool in the toolbox; we must also have acceptance tests, integration tests, and end-to-end tests to really flush out all the possible issues in an application. Any one of these kinds of tests can be overused and abused. It is up to us as engineers to choose the right test for the job at hand. We are the ones shipping the code, and it is up to us to put the right level of testing in place so that when we change something, it does not break something else and we are confident things work.&lt;/p&gt;

&lt;p&gt;Here is a good example of why reliance on only unit tests is not a good thing from &lt;em&gt;Growing Object-Oriented Software, Guided by Tests&lt;/em&gt; by Steve Freeman and Nat Pryce:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Nat was once brought onto a project that had been using TDD since its inception. The team had been writing acceptance tests to capture requirements and show progress to their customer representatives. They had been writing unit tests for the classes of the system, and the internals were clean and easy to change. They had been making great progress, and the customer representatives had signed off all the implemented features on the basis of the passing acceptance tests.&lt;/p&gt;

&lt;p&gt;But the acceptance tests did not run end-to-end -- they instantiated the system internal objects and directly invoked their methods. The application actually did nothing at all. Its entry point contained only a single comment:&lt;/p&gt;

&lt;p&gt;&lt;br&gt;
&lt;code&gt;//TODO implement this&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Side point: that is a great book that explains how to use all levels of testing in a real-world software project. A good read!&lt;/p&gt;

&lt;h2&gt;
  
  
  If you don't use TDD you are a bad developer/engineer!
&lt;/h2&gt;

&lt;p&gt;Because of the zeal of many TDD proponents, many people get a bad taste in their mouths when they see them cheerleading TDD. The fact is that yes, you can write well tested and well designed code without TDD; I just think it is harder to do without it. By the same token, you can still write a badly designed, confusing mess of code while following TDD.&lt;/p&gt;

&lt;p&gt;Also consider that it takes a lot of time and practice to get comfortable with TDD. Many devs have been working for years without it. If I came along and forced them to start using it, they would be way out of their comfort zone and make many mistakes. For me personally, it took a few years before things clicked. Some devs really do not have the time for this, for various reasons. I think if you do have the time, it is well worth the investment. At the same time, we need to be realists and recognize that there is no one true way to code. There was a time I started to think that way, but I have realized that all that really matters is the outcome. Is the code easy to understand? Is the code well tested where it needs to be, so I am not afraid to change it? Can we easily change it if we need to? This is the real world, and we have to be reasonable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I hope this clears up some misunderstandings about TDD. Many have a bad taste from how others forced it upon them or how they were introduced to it. And in our zeal to let others know how awesome TDD is, we sometimes forget everything else that the TDD community has learned through experience and from others. Remember: TDD is just a tool, as useful as any other tool we use to write code.&lt;/p&gt;

</description>
      <category>tdd</category>
      <category>programming</category>
      <category>testing</category>
    </item>
  </channel>
</rss>
