<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rebellion Software</title>
    <description>The latest articles on DEV Community by Rebellion Software (@rebellion_software).</description>
    <link>https://dev.to/rebellion_software</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3413769%2F4c1be630-987c-4821-a1eb-bf9f425d895c.jpg</url>
      <title>DEV Community: Rebellion Software</title>
      <link>https://dev.to/rebellion_software</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rebellion_software"/>
    <language>en</language>
    <item>
      <title>Fear of Executing</title>
      <dc:creator>Rebellion Software</dc:creator>
      <pubDate>Thu, 18 Sep 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/rebellion_software/fear-of-executing-bbo</link>
      <guid>https://dev.to/rebellion_software/fear-of-executing-bbo</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The fear of executing on a plan is one of the most dangerous mindsets that can creep into development teams. You've probably seen it before: meetings that lead to more meetings, endless research, what-ifs, and unknown unknowns. It ends in a paralyzing fear of making a plan and executing on it. Often, this fear leads to abandoned refactorings or feature improvements that get tossed into the 'what could have been' graveyard. Realistically, this failure falls on leadership and senior developers, and getting through that mindset and pushing forward requires a collective effort to do the hard things. It's often said in the military that no plan survives first contact; the same logic applies to software development. You can make a plan, but the first unknown you hit can render it ineffective. You need to move past that fear and learn how to evaluate, adapt, and change course.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does this fear manifest?
&lt;/h2&gt;

&lt;p&gt;There are a few clear warning signs that this mindset is taking hold. The most obvious is when teams stop making small changes in legacy code because it "simply needs to be rewritten from scratch", which will rarely happen without a substantial benefit to the business or client. This idea of needing to start over exists because developers are afraid to change the legacy code; instead of researching, creating a plan, and executing that plan, the task goes straight to "impossible". Realistically, you can make small, controlled changes hidden behind feature flags, or use sprout methods with sensible fallbacks in case of errors. There are many patterns that make refactoring safer and more reliable. The most important thing is end-to-end test coverage, and that is often missing in the dark corners of the map that your developers are afraid to change. Still, this needs to be a change from the top down; "it needs to be rewritten and all hope is lost" can't be an acceptable answer. There needs to be a concerted effort to demonstrate that a path can be planned, managed, and executed.&lt;/p&gt;
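
&lt;p&gt;To make the "small controlled changes" concrete, here is a minimal sketch (in Python, with illustrative names) of a change hidden behind a feature flag with a sensible fallback: the new path runs only when the flag is on, and any error falls back to the legacy behavior.&lt;/p&gt;

```python
# Hypothetical sketch: a small legacy change guarded by a feature flag,
# falling back to the old path on any error. All names are illustrative.

def legacy_total(order_lines):
    # Existing behavior we must preserve.
    total = 0
    for line in order_lines:
        total += line["price"] * line["qty"]
    return total

def new_total(order_lines):
    # The refactored path we want to try out in production.
    return sum(line["price"] * line["qty"] for line in order_lines)

def total(order_lines, flags):
    if flags.get("use_new_total", False):
        try:
            return new_total(order_lines)
        except Exception:
            # Sensible fallback: never let the experiment break the user.
            pass
    return legacy_total(order_lines)

lines = [{"price": 10, "qty": 2}, {"price": 5, "qty": 1}]
print(total(lines, {"use_new_total": True}))  # 25
```

&lt;p&gt;Because both paths return the same result, the flag can be rolled out gradually and flipped off instantly if anything goes wrong.&lt;/p&gt;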

&lt;p&gt;Another, more subtle indicator is how developers talk about these areas of code: the minefields filled with unknown unknowns in the dark corners of the legacy swamp, the tasks that "will take 10 points just to figure out". Usually, the reluctance to work in an area is where the fear starts: the mountain of impossible obstacles that developers don't want to deal with. It's essential to acknowledge this and begin countering the growing mindset of fear and the perception of impossible changes. Do exercises with developers, like talking them through the steps of what needs to happen, and ask questions about the task itself, not in the "do you understand the feature?" way. Try asking things like "I am a database wrapper that needs to process a query; how do I accomplish that?" Lead them through what they are doing in a hybrid of whiteboarding and pseudo-coding to get them thinking and seeing that this isn't impossible. Create a plan with them and ensure they execute it. When the unexpected arises, reevaluate with them and adjust the plan, but keep moving forward.&lt;/p&gt;

&lt;p&gt;When this mindset goes unchecked long enough, it can slow work on tech debt, refactoring, and improvements to a standstill. I've seen it end with months of meetings about refactoring code that went nowhere, only to result in more meetings. In the end, nothing got refactored, the code stayed the same, and developers learned it was OK to be afraid to even attempt an area. Those areas of code decay further, and the fog of war thickens around them as time goes on, compounding the fear of making the change. One of my favorite quotes on software development is from Tanya Reilly:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When you join a new company, most of the big picture is completely unknown to you. A big part of starting a new job is building context, learning how your new organization works, and uncovering everyone's goals. Think of it like the fog of war in a video game, where you can't see what awaits you in the parts of the map you haven't explored yet. As you scout around, you clear the fog and get a better picture of the terrain, learning what's surrounding you and whether there are wolves coming to bother your villagers. You can set out to uncover the obscured parts in all three of the maps and find ways to make that information easy for other people to understand. For instance: Your locator map can help you make sure the teams you work with really understand their purpose in the organization, who their customers are, and how their work affects other people. Your topographical map can help highlight the friction and gaps between teams and open up the paths of communication. Your treasure map can help you make sure everyone knows exactly what they're trying to achieve and why. You'll be able to clear some parts of the map through everyday learning, but you'll need to deliberately set out to clear other parts. A core theme of this chapter is how important it is to know things: to have continual context and a sense of what's going on. Knowing things takes both skill and opportunity, and you might need to work at it for a while before you start seeing what you're not seeing.&lt;sup&gt;1&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Although Reilly's quote is more focused on joining a new company as a Staff Engineer, the idea itself is relevant to understanding the fear of executing. The unknowns, the minefields, the lack of coverage, all of it is fog of war that allows that fear to creep in. The only way to get rid of that is to explore, lead, and discover the map.&lt;/p&gt;

&lt;h2&gt;
  
  
  Easy wins and big refactors
&lt;/h2&gt;

&lt;p&gt;Often, this fear causes developers to overlook the easy wins: the things they can change in controlled ways that make an impact. These are good places to start building confidence and buying credit with leadership or clients who may be growing frustrated. Look for things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Database queries running in loops: can they be pulled out and run beforehand in one large query?&lt;/li&gt;
&lt;li&gt;Inefficient queries: legacy code often has poorly adapted queries that have changed with time. Do they need changes or better index coverage?&lt;/li&gt;
&lt;li&gt;Caching: would caching data help reduce latency or increase performance?&lt;/li&gt;
&lt;/ul&gt;
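
&lt;p&gt;As a sketch of the first item, here is a hypothetical before-and-after (Python with SQLite; table and column names are illustrative) showing a query-per-iteration loop replaced with one batched query:&lt;/p&gt;

```python
import sqlite3

# Hypothetical example data; in a real codebase this would be the
# application's existing database connection and schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "ada"), (2, "linus"), (3, "grace")])

user_ids = [1, 3]

# Before: one round trip per id (the classic N+1 pattern).
names_slow = []
for uid in user_ids:
    row = conn.execute("SELECT name FROM users WHERE id = ?", (uid,)).fetchone()
    names_slow.append(row[0])

# After: one query for the whole batch.
placeholders = ", ".join("?" for _ in user_ids)
rows = conn.execute(
    f"SELECT name FROM users WHERE id IN ({placeholders}) ORDER BY id",
    user_ids,
).fetchall()
names_fast = [r[0] for r in rows]

print(names_slow == names_fast)  # True
```

&lt;p&gt;The behavior is identical, which makes this kind of change easy to verify with existing tests while cutting round trips from N to one.&lt;/p&gt;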

&lt;p&gt;These are relatively small, focused changes that can have a compounding impact on legacy code. The best part of these types of changes is that they show developers you can make a plan, execute it, and get real results. They are small, quick wins that buy social credit and may only be a handful of points or hours of work in the long run.&lt;/p&gt;

&lt;p&gt;To tackle larger changes, you will want to be more deliberate and use refactoring patterns that have been well documented and discussed. There are clear paths and strategies, some of which we wrote about in &lt;a href="https://dev.to/rebellion_software/refactoring-legacy-code-1gg3"&gt;Refactoring Legacy Code&lt;/a&gt;. The key is leading the team and showing them it is possible. Utilize proven methods, create a plan, and execute it effectively. Your approach will most likely shift along the way, but planning and preparing beforehand limits the scope of the change and the likelihood of massive deviations from the plan. You have to see it through; that is the most critical part. If you end up in meeting inception or a half-completed refactor, it reinforces the fear of executing, and you'll hear the "the last time we tried this, it blew up" rhetoric. Make a plan, execute the plan, adapt, and complete the change.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The fear of making changes in legacy code can slow down teams, cost clients, and increase the time to live for product improvements. As leaders, we should be vigilant for the early signs of this mindset taking hold and intervene as soon as possible. We need to lead from the front and show teams that complicated changes in legacy code can be successfully completed and that even if you execute on a plan and it falls apart, you haven't failed. You adapt, evolve, and overcome.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;The Staff Engineer's Path - Tanya Reilly&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>leadership</category>
      <category>code</category>
      <category>development</category>
    </item>
    <item>
      <title>Mentoring: Problem Solving Mental Models</title>
      <dc:creator>Rebellion Software</dc:creator>
      <pubDate>Tue, 09 Sep 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/rebellion_software/mentoring-problem-solving-mental-models-2c9g</link>
      <guid>https://dev.to/rebellion_software/mentoring-problem-solving-mental-models-2c9g</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Problem solving is a critical skill all developers must learn as they progress through their careers. One of the big things I focus on as a mentor is arming them with tools and techniques that help them become higher-level thinkers. These techniques should help them see things from a different perspective and begin to unravel the unknown unknowns that sometimes come with old legacy code. They will help junior developers move out of that task-focused junior mindset and begin thinking more like senior developers.&lt;/p&gt;

&lt;p&gt;Depending on where a developer is in their journey, the best way to introduce these models will vary. I've had teams of borderline seniors where this was a single training session, but you can also break it down and introduce the ideas slowly.&lt;/p&gt;

&lt;h2&gt;
  
  
  80/20 Rule
&lt;/h2&gt;

&lt;p&gt;The 80/20 rule says that 80% of your results or effects come from 20% of your work. This suggests that sometimes the smaller changes or improvements will have the biggest impact, and those should be the focus. It's one of those tools that can help prioritize and triage defects and the approach to fixing them. It can also help rationalize the stop-the-bleeding hotfix you put in compared to the perfect solution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;

&lt;p&gt;If you give your kid 10 toys, they will probably focus on 20% of them and ignore the other 80%. The 2 toys they decide to play with have the biggest impact, or the same impact as the other 8 combined.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Optimization: if your code is having performance issues, you may find that 20% of the critical code is causing 80% of the performance issues. By focusing on and optimizing that 20%, you can drastically improve the overall performance of your codebase.&lt;/li&gt;
&lt;li&gt;Defects: the idea here is that 20% of your code will likely cause 80% of your defects. Focus on finding and fixing that 20% and you will increase the reliability of your code.&lt;/li&gt;
&lt;li&gt;Features: to users, 20% of the feature set provides 80% of the value. Figure out what those features are and focus on them.&lt;/li&gt;
&lt;/ul&gt;
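
&lt;p&gt;One practical way to find that 20% is to profile before optimizing. This sketch (function names are illustrative) uses Python's built-in cProfile to surface the hot spot instead of guessing:&lt;/p&gt;

```python
import cProfile
import io
import pstats

# Hypothetical workload: one deliberately quadratic function (the "20%"
# causing most of the runtime) next to a cheap one.

def hot_path(n):
    return sum(i * j for i in range(n) for j in range(n))

def cold_path(n):
    return sum(range(n))

def workload():
    hot_path(300)
    cold_path(300)

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Sort by cumulative time; the report points straight at hot_path.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print("hot_path" in stream.getvalue())  # True
```

&lt;p&gt;The point is the workflow, not the tool: measure, find the small slice of code doing most of the damage, and spend your effort there.&lt;/p&gt;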

&lt;p&gt;The 80/20 split is really just a guideline; it might not always be that clear cut, but it helps visualize the idea. The most important thing is to find the smallest changes or improvements you can make that create the biggest impact or value. These are some ways to discover where the value is and prioritize time. Oftentimes, fixing or improving one of those 20% features can buy you enough social credit with users to spend more time fixing other issues; it shows progress and a commitment to improving. Understanding this principle is a big part of the shift in thinking that marks a developer becoming more senior.&lt;/p&gt;

&lt;h2&gt;
  
  
  Inverted Thinking
&lt;/h2&gt;

&lt;p&gt;Look at things from the opposite perspective. In development, it means that the solution to a problem is often found by inverting the problem. Essentially, to solve a problem, you think about how to create it instead of how to solve it. For a defect, this means don't think about fixing the defect; think about how you would create it and work from there. This generally leads developers to reason about what they know could cause the problem rather than the never-ending mountain of what-ifs. It's a more focused and narrow approach, but it can lead to faster resolution by starting narrow and going broader if needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;

&lt;p&gt;Your sandwich tastes terrible. Instead of thinking about what to add to make it less terrible, think about how you would create such a terrible sandwich and solve the problem from that angle.&lt;/p&gt;

&lt;p&gt;Another way to think about it is to think about the things you &lt;em&gt;do not&lt;/em&gt; want on your sandwich and create a sandwich from the remaining items.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Debugging: instead of thinking about what the correct code is or why the code didn't work right, think about how you would create the same defect. Oftentimes this leads to exploring different possibilities, which can determine the root cause more efficiently.&lt;/li&gt;
&lt;li&gt;Optimization: instead of focusing on what to add or change to optimize code, think about how you would slow it down, or what you could remove to achieve the same effect. For example: this SQL query is slow; what could I do to intentionally create a slow query?&lt;/li&gt;
&lt;li&gt;UX: instead of thinking about what you need to add to make your UX better, think about what you can remove or simplify.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Second Order Thinking
&lt;/h2&gt;

&lt;p&gt;Second order thinking is about the "and then..." situation. It forces you to consider the long- and short-term side effects of your decision and use that additional context to make your choice. Be careful not to wander into the never-ending what-if minefield on this one. Long term should mean the next year or two: what can we reasonably anticipate based on business needs, direction, and what we know? Getting caught in what-ifs is paralyzing and usually fruitless. You protect your code from the 5-year unknowns by ensuring it is adaptable, maintained, and documented.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;

&lt;p&gt;There is a gigantic bowl of ice cream in front of you. You know that eating all of it will make you happy right now, so you decide to eat all of it. Then you think about the "and then" scenarios, wonder if this will make you sick or make you miss your dinner, and decide not to eat all the ice cream.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Refactoring: forces you to think about how your changes will play out long term. Are your changes maintainable, will they be easy for other devs to work with and understand, do they introduce complexity?&lt;/li&gt;
&lt;li&gt;Software Architecture: forces you to think beyond the initial requirements of the project and take into consideration scalability, how flexible the system is to changes or updates, and how well the system will grow.&lt;/li&gt;
&lt;li&gt;Performance: will making a change to one area for a small performance boost add complexity or hinder other parts of the code?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is really about thinking through the consequences of your changes, beyond the immediate bug fix or improvement.&lt;/p&gt;

&lt;h2&gt;
  
  
  First Principle Thinking
&lt;/h2&gt;

&lt;p&gt;First principle thinking is about understanding something from its most basic building blocks or core functionality. It involves questioning assumptions about the system, breaking down problems into smaller pieces, and finding solutions by getting to the bottom of a problem. This is critical for sprint planning and backlog refinement: asking these questions to get to the core of the problem helps define features and gets questions out of the way earlier, clearing the path for success during development.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;

&lt;p&gt;You have a cookie and you eat it. Instead of asking for more cookies so you are no longer cookie-less, you ask questions about how to make cookies: what the ingredients are, how those ingredients are combined, why they are combined that way, until you get to the basic building blocks of making a cookie. In practice, this is often done with "Why" questions.&lt;/p&gt;

&lt;p&gt;Identify the problem, break it down into pieces, ask questions or "Why...", and create solutions that start from the building blocks up.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Problem Solving: encourages you to break problems down into smaller pieces and ask questions about those parts; instead of looking at the problem as a whole, you focus on the core principles and requirements.&lt;/li&gt;
&lt;li&gt;Algorithm Design: forces you to analyze the problem before deciding on an algorithm or on how to optimize one.&lt;/li&gt;
&lt;li&gt;Design Patterns: forces you to think about what problems a pattern solves, how it solves them, and whether it can be adapted to fit your problem. This gets you thinking about the principles behind design patterns instead of just applying them everywhere.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall this helps you dig deeper, question your assumptions, and understand the fundamentals or core parts of your project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Occam's Razor
&lt;/h2&gt;

&lt;p&gt;Basically, when there are multiple explanations or solutions, the simplest one is usually the best. This looks to solve problems with easy, uncomplicated solutions that avoid unnecessary complexity. If nothing else, this one should be the standard. Unnecessary complexity is toxic in a codebase and does nothing but increase cognitive load, decrease maintainability, and decrease DevEx. You should strive to keep your code as simple as you can, and this is the perfect reminder for that. The simple solution is usually the best.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;

&lt;p&gt;Your computer won't turn on. You could think the PSU is dead, the surge strip is bad, the motherboard is fried, but it could be as simple as the power is unplugged.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Debugging: when it comes to defects, look for the simple explanation first; instead of assuming it is a huge, complex issue, check the basics. Think typos, incorrect variables, missing data, etc. Most of the time it is something simple.&lt;/li&gt;
&lt;li&gt;Writing Code: encourages you to write simple, clean code. Keeping this in mind can help you avoid complexity and design simple solutions that are easier to understand and maintain.&lt;/li&gt;
&lt;li&gt;System Design: helps keep systems free of unneeded complexity, making them easier to maintain and keeping points of failure low.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Type 1 and Type 2 Decisions
&lt;/h2&gt;

&lt;p&gt;This is more about understanding that the choices you make generally fall into one of two categories. Understanding this helps you classify which decisions might need more time or extra care to make.&lt;/p&gt;

&lt;p&gt;Type 1 decisions are decisions that cannot be reversed or cannot be reversed without a huge amount of effort. These are the decisions to take your time on and gather as much context about as possible. An example of this is NoSQL vs SQL databases or building your application in PHP vs Node. Fundamental architecture choices are Type 1 decisions.&lt;/p&gt;

&lt;p&gt;Type 2 decisions are ones that can be easily changed. Some examples of this are libraries or using Fetch vs Axios etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  Things to try and avoid
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Man with the hammer
&lt;/h3&gt;

&lt;p&gt;The man with a hammer sees everything as a nail. This says don't let your decisions be biased by the tools you have or by the tools you favor. Research and look for the best solution even if you are not comfortable with it.&lt;/p&gt;

&lt;h3&gt;
  
  
  First Conclusion Bias
&lt;/h3&gt;

&lt;p&gt;Don't settle for your first conclusion; normally, when we find a solution, our brain gets stuck on it and ignores other possible answers. When you come to a conclusion, write it down and walk away to do something else and clear your head. Then come back, reevaluate, and force yourself to think of alternate solutions or answers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;These tools can help your junior developers start thinking more like seniors and at the project level. They provide options for better troubleshooting and design, and strategies to increase their performance and skill set. Focusing on these skills is just as important as code and technology, if not more so. The fundamentals of programming translate between languages, but soft skills can be tricky, and not developing them can slow career progression significantly.&lt;/p&gt;

</description>
      <category>mentoring</category>
    </item>
    <item>
      <title>Refactoring Legacy Code</title>
      <dc:creator>Rebellion Software</dc:creator>
      <pubDate>Tue, 26 Aug 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/rebellion_software/refactoring-legacy-code-1gg3</link>
      <guid>https://dev.to/rebellion_software/refactoring-legacy-code-1gg3</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Refactoring code is almost always met with fear and anxiety. The truth is, we should be making small changes in our code that improve it each time we work, but we all know that never happens. Instead, code deteriorates and becomes technical debt that eventually needs a massive overhaul. The dreaded refactor or migration, a mountain of work no one wants to own. I've heard everything from "we don't know how it works" to "we don't want the outages" when the topic has come up. Refactoring code is not always smooth sailing, but there are plenty of examples to learn from, and a simple three-step process is available. Discover, stabilize, and execute are the three steps to updating and refactoring your code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Discovery
&lt;/h2&gt;

&lt;p&gt;This phase helps to plan and organize the pieces and approach for migrating the code. This will help you surface how much work you actually have to plan and make time for. A key element to examine is test coverage, including the extent to which it exists and whether it is sufficient. The goal of a refactor or migration is to move the code without changing the behavior the user anticipates. As Martin Fowler stated:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Refactoring is the process of changing a software system in a way that does not alter the external behavior of the code yet improves its internal structure. It is a disciplined way to clean up code that minimizes the chances of introducing bugs. In essence, when you refactor, you are improving the design of the code after it has been written.&lt;sup&gt;1&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Understanding that the goal is to change the software in a way that does not impact behavior means that you can rely on unit tests, integration tests, and end-to-end tests as a metric for success. Thus, evaluating test coverage and where it falls short is a critical step for refactoring.&lt;/p&gt;

&lt;p&gt;Additionally, review any internal wikis for helpful information and context on the system and its objectives. Often, there are resources explaining potential pitfalls and hazards to watch out for. You should also review informal channels, such as chats, commit messages, and code comments. Commit history is a fascinating thing to examine because it reveals the evolution of the file's current state and any other files that may be related to or frequently updated alongside the current file you are viewing. This also lets you determine the change frequency for that file.&lt;/p&gt;

&lt;p&gt;In some cases, it becomes evident that this file is prone to errors and is tightly coupled with other files. I've actually used this method to locate files that required specific reviewers on pull requests before they were approved. They had high change frequencies, a high number of commit messages mentioning bugs or defects, and were tightly coupled with each other. The extra scrutiny reduced defects in those files by over 30%.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Tornhill also describes a method for pinpointing tightly coupled modules in your program by looking at sets of files modified within the same commit. To depict this idea, let's say we have three files, superheroes.js, supervillains.js, and sidekicks.js. In a subset of our commits, we have the following changes: commit one modifies both superheroes.js and sidekicks.js; commit two modifies all three files; commit three again modifies superheros.js and sidekicks.js; and commit four only touches superheroes.js. From this subset of our version history, depicted in Table 3-3, we notice that of four commits, three of them modified both superheroes.js and sidekicks.js. This insinuates that some kind of coupling between these two files exists.&lt;sup&gt;2&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;
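
&lt;p&gt;The counting Tornhill describes can be sketched in a few lines: tally how often each pair of files appears in the same commit. The commit data below is the illustrative example from the quote; in practice you would parse something like &lt;code&gt;git log --name-only&lt;/code&gt;:&lt;/p&gt;

```python
from collections import Counter
from itertools import combinations

# Illustrative commit history matching the quoted example. Each entry is
# the list of files touched by one commit.
commits = [
    ["superheroes.js", "sidekicks.js"],
    ["superheroes.js", "sidekicks.js", "supervillains.js"],
    ["superheroes.js", "sidekicks.js"],
    ["superheroes.js"],
]

# Count every pair of files that changed together in a commit.
pair_counts = Counter()
for files in commits:
    for pair in combinations(sorted(files), 2):
        pair_counts[pair] += 1

top_pair, count = pair_counts.most_common(1)[0]
print(top_pair, count)  # ('sidekicks.js', 'superheroes.js') 3
```

&lt;p&gt;A high co-change count is the signal: three of four commits touched both files, which suggests hidden coupling worth investigating during discovery.&lt;/p&gt;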

&lt;p&gt;Reviewing lines of code, complexity metrics, and bug reports can all play a big part in discovery as well. Additionally, AI and LLMs make analyzing large sections of code a much easier task. You can include the files you are looking at as context and ask the LLM to summarize what the code does, locate edge cases, evaluate test coverage, and suggest improvements.&lt;/p&gt;

&lt;p&gt;By the end of discovery, you should have a document outlining all the essential information related to what you are refactoring. You should understand the code and have a good idea of potential pitfalls or landmines. The following quote from Tanya Reilly is focused on joining a new company, but it actually explains discovery really well:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When you join a new company, most of the big picture is completely unknown to you. A big part of starting a new job is building context, learning how your new organization works, and uncovering everyone's goals. Think of it like the fog of war in a video game, where you can't see what awaits you in the parts of the map you haven't explored yet. As you scout around, you clear the fog and get a better picture of the terrain, learning what's surrounding you and whether there are wolves coming to bother your villagers. You can set out to uncover the obscured parts in all three of the maps and find ways to make that information easy for other people to understand. For instance: Your locator map can help you make sure the teams you work with really understand their purpose in the organization, who their customers are, and how their work affects other people. Your topographical map can help highlight the friction and gaps between teams and open up the paths of communication. Your treasure map can help you make sure everyone knows exactly what they're trying to achieve and why. You'll be able to clear some parts of the map through everyday learning, but you'll need to deliberately set out to clear other parts. A core theme of this chapter is how important it is to know things: to have continual context and a sense of what's going on. Knowing things takes both skill and opportunity, and you might need to work at it for a while before you start seeing what you're not seeing.&lt;sup&gt;3&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Think about discovery in the same way: if you don't fully explore the map, you don't know what you're getting into. You have to clear the fog of war and understand the terrain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Caution&lt;/strong&gt;: If you reach this point and think you won't have time to complete the refactor, then you &lt;strong&gt;&lt;em&gt;must&lt;/em&gt;&lt;/strong&gt; narrow the scope. An incomplete or abandoned refactor is far worse than leaving the code as is. If the hill gets too big, narrow the scope and tackle the refactor or migration in smaller pieces.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stabilize
&lt;/h2&gt;

&lt;p&gt;At this point, you have a solid understanding of the code and a clear plan. Stabilizing is the intermediate step that allows you to execute that plan. In this phase, you focus on getting the code to a point where it is safe to migrate or completely refactor. You can, and should, perform small refactors in place in this phase, making minor changes or adjustments that will ensure the larger migration or refactor is successful. The first thing you should prioritize is bringing the test suite up to a level that ensures you can validate that behavior is not changed or lost.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We can increase our confidence that nothing has changed by writing a suite of tests (unit, integration, end to end), and we should not seriously consider moving forward with any refactoring effort until we've established sufficient test coverage.&lt;sup&gt;2&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Tests are critical to ensuring that you are not breaking, changing, or completely removing functionality. This should be your first step in stabilizing the code and increasing test coverage.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Whenever I do refactoring, the first step is always the same. I need to ensure I have a solid set of tests for that section of code.&lt;sup&gt;1&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There is a strong possibility that you will need to modify the code to increase test coverage. An example of this is legacy PHP applications that use static method calls. These are not easily mocked and create dependencies that, in some cases, need to be broken to increase testing. You may need to adjust static calls or other dependencies. These changes should be kept to a minimum while allowing you to increase test coverage. After test coverage is raised, you can modify these areas more if needed.&lt;/p&gt;
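
&lt;p&gt;The same dependency-breaking idea, sketched in Python rather than PHP (class names are illustrative): the hard-coded dependency becomes an injected one, with the legacy behavior as the default, so a fake can stand in during tests:&lt;/p&gt;

```python
# Hypothetical sketch of breaking a hard-coded dependency for testability.

class Mailer:
    def send(self, to, body):
        # Stands in for the real, untestable side effect.
        raise RuntimeError("would talk to a real SMTP server")

class SignupService:
    # Before, the legacy version constructed Mailer() internally, making it
    # impossible to test without sending real mail. After, the dependency
    # is injected, with the legacy behavior kept as the default.
    def __init__(self, mailer=None):
        self.mailer = mailer or Mailer()

    def register(self, email):
        self.mailer.send(email, "Welcome!")
        return True

class FakeMailer:
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

fake = FakeMailer()
service = SignupService(mailer=fake)
print(service.register("a@example.com"), len(fake.sent))  # True 1
```

&lt;p&gt;The change to production code is minimal (a default argument), which keeps the risk low while unlocking the test coverage you need.&lt;/p&gt;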

&lt;blockquote&gt;
&lt;p&gt;Dependency is one of the most critical problems in software development. Much legacy code work involves breaking dependencies so that change can be easier.&lt;sup&gt;4&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Additionally, you should release these changes behind feature flags or an A/B system. This lets you roll out adjustments gradually, monitor them, and react to defects and issues in a controlled manner. Two standard techniques for making these smaller changes are Sprout Methods and Sprout Classes&lt;sup&gt;4&lt;/sup&gt;: new methods or classes created alongside existing legacy code to gradually move functionality into a more stable state. For methods, you create a new, clean method and call it from the existing legacy method; if needed, you can add logging to trace every path calling that method and confirm they work correctly. Eventually, the new method replaces the legacy one. A Sprout Class works the same way but gradually replaces an entire class. There are also Wrap Methods and Wrap Classes, which follow the decorator pattern: they intercept legacy calls, safely add functionality before and after the call, and delegate to the original method or class&lt;sup&gt;4&lt;/sup&gt;. Working Effectively with Legacy Code covers all of these techniques in detail and is well worth reading.&lt;/p&gt;

&lt;p&gt;At this point, your legacy code should have sufficient test coverage, have been refactored in place, and have been exercised by real users behind feature flags or A/B tests. It is possible to stop here; you may not always need the complete migration or refactor, and that is okay. If the code is stable, no longer causing problems, and its technical debt has been paid down, you may have already reached your goal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Execute
&lt;/h2&gt;

&lt;p&gt;This final step is executing the full refactor or migration. It consists of consolidating all your discovery and stabilization efforts and pushing through to the end of the project. Your code should be so well tested and stable by now that this step is straightforward. If you are moving to a new framework, your stabilization work has already given you the tests you need, identified the dependencies that had to be broken, and mapped out the dangerous parts of the code. You have likely already found and fixed some bugs while refactoring in place, and you have a deep understanding of the codebase.&lt;/p&gt;

&lt;p&gt;You should follow the same steps here, releasing changes in small chunks under feature flags or through A/B testing. Still, you will likely encounter fewer issues making changes at this point. The workflow should involve making small changes, committing, and testing. This makes it easier to review, change, and fix if something goes wrong. Working on small, functional pieces will make the final refactor or migration far easier than trying to do it all in a single massive commit.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;First, committing small, incremental changes makes it much easier to author great code. By pushing bite-sized commits, you can get relevant feedback early and often from your tooling (e.g., integration tests running on a server through continuous integration). If you push a wide breadth of changes infrequently, you risk needing to wade through and fix a heap of test failures.&lt;/p&gt;

&lt;p&gt;Second, reverting a small commit is much easier than reverting a big one. If something goes wrong, whether during development or well after the code has been deployed, reverting a small commit allows you to carefully extract only the offending change.&lt;/p&gt;

&lt;p&gt;Third, because concise commits tend to be sufficiently focused, you'll also be able to write better, more precise commit messages.&lt;/p&gt;

&lt;p&gt;Finally, it is nearly impossible for a teammate to review the entirety of the modified code adequately. &lt;sup&gt;2&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Work small and commit frequently; you'll be glad you did at the end. Each live push should be the smallest complete feature possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Refactors and migrations can be intimidating, and often are. If you follow a structured approach built on these three phases, much of the guesswork is removed and you can discover the landmines upfront. This approach can help you successfully migrate and stabilize your codebase. As an added bonus, you will unlock a massive treasure chest of context and documentation about what your code is doing, what to look out for, and which parts of the code rely on each other.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Refactoring (Addison-Wesley Signature Series) - Martin Fowler&lt;/li&gt;
&lt;li&gt;Refactoring at Scale - Maude Lemaire&lt;/li&gt;
&lt;li&gt;The Staff Engineer's Path - Tanya Reilly&lt;/li&gt;
&lt;li&gt;Working Effectively with Legacy Code - Michael Feathers&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>refactoring</category>
    </item>
    <item>
      <title>AI Is Not the End of Developers</title>
      <dc:creator>Rebellion Software</dc:creator>
      <pubDate>Wed, 20 Aug 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/rebellion_software/ai-is-not-the-end-of-developers-1gcf</link>
      <guid>https://dev.to/rebellion_software/ai-is-not-the-end-of-developers-1gcf</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;There is a growing concern that AI will replace developers or sharply reduce the need for human ones. This idea overlooks the fundamental constraints of current AI/LLM capabilities, and it disregards the fact that humans are not subject to the same limitations. The notion of replacing developers entirely, or suddenly turning one developer into five, is still science fiction. Anyone telling you otherwise is ignoring the simple fact that code at scale still demands nuance, context, creativity, a shifting perspective on the entire project, and the intuition of human developers. These skills remain out of reach for LLMs. They cannot stop and shift focus to the broader project, making complex decisions on the fly while understanding business needs and objectives. They don't understand the nuances of large projects, how features interact with or complement each other, or why a specific pattern makes more sense for a project. For all of these things, they still need human developers.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Changing Landscape
&lt;/h2&gt;

&lt;p&gt;The job of a developer is changing; there is no valid argument against that. As AI becomes increasingly present and undeniable in engineering, ignoring it will leave you behind. That's not to say you'll be unemployable; there will still be agencies and smaller shops behind the adoption curve, just as there always are. Still, you will be behind in the industry sense. AI is reshaping how developers work, automating processes, documentation, and feature building, which increases productivity. None of this eliminates the need for humans to design and build systems, guide AI agents, and make complex decisions based on context, business objectives, and intuition. AI accelerates changes in how developers work and in the skill sets they need, but it does not make human engineers obsolete&lt;sup&gt;1&lt;/sup&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The role of the human developer, he said, becomes to guide and direct the A.I. agents — “the conductor of an A.I.-empowered orchestra.”&lt;sup&gt;1&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Without human intervention, LLMs fall back on the average of their training. They do not produce new, innovative approaches no one has seen before; that still requires human ingenuity. This is exactly why they work so well for smaller, well-planned features and repetitive tasks: they fall back to the average, safe answer. Not one that blows your mind, but one that gets the job done predictably.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Time is Now
&lt;/h2&gt;

&lt;p&gt;The Future of Jobs report still indicates that AI and software-related jobs will experience rapid growth, and Worldmetrics predicts that over 500,000 new software jobs will be created in 2025&lt;sup&gt;2&lt;/sup&gt;. We are still at a point where AI makes good developers better and more efficient, not obsolete. Demand will therefore rise as companies push product velocity to match the added productivity, until the market eventually reaches equilibrium.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Learn computational thinking. For every field X there's going to be a computational X. If you figure out how to think about things computationally and you know the best tools, then you're in good shape.”&lt;/p&gt;

&lt;p&gt;Wolfram, known for his deep understanding of software and large language models, makes it clear: the edge always belongs to the human who understands the system, not the system itself.&lt;sup&gt;2&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Learning software development now still sets you up for success, and focusing on how to maximize your potential with AI will pay dividends. The Codesmith article says it best: "AI won’t replace developers, but a developer using AI will"&lt;sup&gt;2&lt;/sup&gt;. As the agentic systems developers use become more complex, there will still be a need for developers who understand those systems to guide them and keep them on track. If anything, growing AI use creates new jobs and new fields to specialize in. It also pushes junior developers to think more like senior developers and to learn the higher-level thinking and planning required for large codebases.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The truth is that the role of the programmer, in line with just about every other professional role, will change. Routine, low-level tasks such as customizing boilerplate code and checking for coding errors will increasingly be done by machines.&lt;/p&gt;

&lt;p&gt;But that doesn’t mean basic coding skills won’t still be important. Even if humans are using AI to create code, it’s critical that we can understand it and step in when it makes mistakes or does something dangerous. This shows that humans with coding skills will still be needed to meet the requirement of having a “human-in-the-loop”. This is essential for safe and ethical AI, even if its use is restricted to very basic tasks.&lt;/p&gt;

&lt;p&gt;This means entry-level coding jobs don’t vanish, but instead transition into roles where the ability to automate routine work and augment our skills with AI becomes the bigger factor in the success or failure of a newbie programmer.&lt;sup&gt;3&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AI is not coming to replace developers. It will create a demand for new skill sets and augment developers, allowing them to become more productive and efficient. It will create new jobs in building and managing complex agentic systems and tools. For now, it still needs human intuition, creativity, and the ability to zoom in and out of complex projects and problems to make decisions and plan features based on context, domain knowledge, and business objectives.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://www.nytimes.com/2025/02/20/business/ai-coding-software-engineers.html" rel="noopener noreferrer"&gt;A.I. Is Prompting an Evolution, Not Extinction, for Coders - The New York Times&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.codesmith.io/blog/why-ai-wont-replace-coders" rel="noopener noreferrer"&gt;AI Can’t Replace Coders Say Tech Leaders - Codesmith&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.forbes.com/sites/bernardmarr/2025/08/04/myth-or-reality-will-ai-replace-computer-programmers/" rel="noopener noreferrer"&gt;Myth Or Reality: Will AI Replace Computer Programmers? - Forbes&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>Managing AI by Understanding How Your Team Thinks</title>
      <dc:creator>Rebellion Software</dc:creator>
      <pubDate>Mon, 11 Aug 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/rebellion_software/managing-ai-by-understanding-how-your-team-thinks-30g5</link>
      <guid>https://dev.to/rebellion_software/managing-ai-by-understanding-how-your-team-thinks-30g5</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Depending on whom you talk to, you will get a wide range of answers on whether AI coding or agentic coding is worth the time. Some developers swear by it, and others swear it off entirely. I'm not talking about vibe coders, either; I'm talking about true-to-form junior or senior developers. You often get mixed opinions that vary based on their experience. Some of this can be attributed to how they interact with AI and their anticipated outcome, but what type of thinker they are also plays a huge role.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Types of Thinkers
&lt;/h2&gt;

&lt;p&gt;The distinction between different types of thinkers was popularized by Rob Walling in The SaaS Playbook&lt;sup&gt;1&lt;/sup&gt;. Walling identifies three distinct levels of thinkers: task-level, project-level, and owner-level. Which level a person operates at directly shapes how they perceive new technology, tools, and advances, and how they evaluate and use them. This is incredibly important to remember: the kind of thinker your developers are can greatly influence how they evaluate a tool and form their first impressions.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Most Young Developers Think, the Task Thinker
&lt;/h2&gt;

&lt;p&gt;Task thinkers tend to approach issues at a very low level and don't think past the feature or bug they are currently working on. These are generally more junior or entry-level developers who are not yet thinking at the "big picture" level. This is fine; it's where we all begin, and a key sign that a developer is ready to take on more senior roles is that they start to grow out of this level. Rob Walling describes this level as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Task-level thinkers are team members who focus on their current or next task. They might be early in their career or get overwhelmed with more than a few sequential tasks on their plate. Most of us begin our careers as task-level thinkers because prioritizing many complex, interrelated tasks is often not a natural ability.&lt;sup&gt;1&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The important takeaway is that task thinkers focus on the task at hand and have not yet learned how to manage and prioritize complex, related tasks. That skill is critical when working with AI coding tools: how well you understand the tasks, and how well you can describe them and break them down into small, manageable features and instructions, greatly impacts the quality of code you get back. It is the difference between telling an AI to "make a park" and giving it detailed instructions on how to make a tree. Developers need to understand the project as a whole at a high level and how all the pieces fit together, while also being able to break features down into smaller tasks: building the tree.&lt;/p&gt;

&lt;p&gt;A key difference here is that task-level thinkers generally view these tools as an instrument to immediately solve a problem, fix a bug, or build a feature. They may find AI tools overwhelming or just bad because of how this shapes their interaction with the tool. So why is this a problem? Why does this task-level thinking cause a problem with the results you will see from AI?&lt;/p&gt;

&lt;h3&gt;
  
  
  How Agentic AI Works
&lt;/h3&gt;

&lt;p&gt;First of all, agentic AI differs from AI in general in that it is an architecture that lets AI use memory, learning, and decision-making to achieve a goal. To accomplish this, a defined sequence of steps takes place. First comes the perception module, which takes input and processes it into a structured format to pass on to the next module&lt;sup&gt;2&lt;/sup&gt;: essentially, taking the "build me a park" prompt and turning it into whatever structure the second module needs. If your developers' prompts are that bare-bones, there won't be much to pass on to the reasoning module.&lt;/p&gt;

&lt;p&gt;The reasoning module is the "brain" of the system, typically an LLM. It takes this structured input and uses chain-of-thought reasoning to break it into small subtasks and action items&lt;sup&gt;2&lt;/sup&gt;. This step is critical because it forms the step-by-step problem-solving that leads to correct and helpful output. This is why your first prompts matter so much: the better structured and planned they are, the better the entire outcome of the build, because they give the LLM the context it needs to break the problem into small subtasks and a strong history to look back on. For example, which do you think generates a better output, and which do you think the task-level thinker uses:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I need to implement user authentication for my React app using Firebase Auth. 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I need to implement user authentication for my React app using Firebase Auth. 

Requirements:
- Email/password and Google OAuth sign-in
- Protected routes that redirect to login
- User context that persists across app
- Logout functionality
- Loading states during auth operations

Technical constraints:
- Using React 18 with TypeScript
- React Router v6 for routing  
- Tailwind for styling

Please create:
1. AuthContext with provider
2. Custom hooks for auth operations
3. ProtectedRoute component
4. Login/signup forms with validation
5. Integration with existing routing

Include error handling and ensure type safety throughout.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The task-level thinker likely uses the first example.&lt;/p&gt;

&lt;p&gt;The reasoning module passes these instructions to the action module, which is responsible for executing the plan and interacting with any needed tools&lt;sup&gt;2&lt;/sup&gt;. It will only perform as well as its action plan allows; the action plan depends on the context it is given from the input; and the input depends on the user and the level of thinker they are. Taking a realistic look at where agentic AI is today, it is barely a step above base-level AI, which processes an input and returns an output with no memory or decision-making. The Vellum blog describes this level as where most AI is today:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;At this stage, AI isn’t just responding—it’s executing. It can decide to call external tools, fetch data, and incorporate results into its output. This is where AI stops being a glorified autocomplete and actually does something. This agent can make execution decisions (e.g., “Should I look this up?”). The system decides when to retrieve data from APIs, query search engines, pull from databases, or reference memory. But the moment AI starts using tools, things get messy. It needs some kind of built-in BS detector—otherwise, it might just confidently hallucinate the wrong info. Most AI apps today live at this level. It’s a step toward agency, but still fundamentally reactive—only acting when triggered, with some orchestration sugar on top. It also doesn't have any iterative refinement—if it makes a mistake, it won’t self-correct.&lt;sup&gt;3&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Project Level Thinkers: Where Things Begin to Change
&lt;/h2&gt;

&lt;p&gt;The next step is a project-level thinker. This is where most senior or almost senior developers are. They are seeing projects at that higher organizational level and understand how they fit into the whole. They can take intricate features and effectively break them into smaller tasks and instructions for junior developers. Rob Walling describes this level as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Project-level thinkers look ahead weeks or months and juggle multiple priorities. They often rely on team members to complete work that's combined into a single deliverable. Project-level thinkers have advanced systems in place to track the myriad moving parts needed to successfully complete a project.&lt;sup&gt;1&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;They think at a higher level and can break a feature down into small instruction sets for junior team members to complete. Think about that for a second: they are doing exactly what the LLM needs right now. LLMs are junior, task-level thinkers that thrive on context, instructions, and rules. They need that information to be successful, just like a junior or entry-level developer does. This level of thinker is probably more excited about using agentic AI because it lets them work more efficiently, manage multiple tasks, and automate processes. Breaking projects down is already second nature to them, so they likely follow that same process with AI tools and get better results.&lt;/p&gt;

&lt;h2&gt;
  
  
  Critical Soft Skills are Still the Answer
&lt;/h2&gt;

&lt;p&gt;We'll mostly skip the third level here: the owner level, someone thinking months or years in advance and changing the path of a company&lt;sup&gt;1&lt;/sup&gt;. Between the task and project levels, the major difference is the degree to which each can break down tasks as parts of a whole. The way you get there is experience and time, which is why it remains critical to mentor junior developers. They need to keep developing that skill set and the critical thinking that enables project-level thinking. There is some fear that AI tools will erode our critical thinking and our ability to think at the project level&lt;sup&gt;4&lt;/sup&gt;. With that in mind, it may be worth having junior developers turn the AI off from time to time.&lt;/p&gt;

&lt;p&gt;It will continue to be important to give junior developers, or even more senior task-level thinkers, time to grow and learn how to see things from the project level. The ability to see a project and break that down into smaller tasks and instructions is a critical skill that becomes even more important when working with AI. Developers who haven't yet mastered this skill are probably not crazy about AI. They probably say things like "it just started coding the whole project", "it took me hours to fix what it did", or "it didn't build what I wanted". There is some nuance to models and which are better for specific tasks, but what is true with all of them is that the quality of the output depends on the quality of the input, your first handful of prompts.&lt;/p&gt;

&lt;p&gt;Mentor and coach these developers along and help them get out of the task-level thinking box. Talk them through a large feature and ask for input on how they would break tasks down, what the challenges and constraints should be, define the scope, and what domain knowledge is needed to complete the feature. Let them take the reins and guide them through. It takes time, but it will help them and you in the long run.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AI isn't magic. How your developers think can greatly impact their experience and opinion about working with AI. Luckily, the same skills we have relied on to mentor developers into senior roles and help them advance their careers still apply. Understanding how your developers think is no different now than it was five years ago, but realizing it could be impacting their success or opinion on AI tools is new. Take the time to mentor them and get them thinking at the project level, and even if they still hate AI, you at least have them that much closer to being a senior or project-level thinker.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://saasplaybook.com" rel="noopener noreferrer"&gt;The SaaS Playbook - Rob Walling&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://labs.adaline.ai/p/the-5-levels-of-agentic-ai" rel="noopener noreferrer"&gt;The 5 levels of Agentic AI - Adaline Labs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.vellum.ai/blog/levels-of-agentic-behavior" rel="noopener noreferrer"&gt;The Six Levels of Agentic Behavior - Vellum Blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://htec.com/insights/blogs/is-ai-making-us-dumb/" rel="noopener noreferrer"&gt;Critical Time for Critical Thinking: Is AI Making Us Dull? - HTEC Blog&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>Harness AI by Looking to the Past</title>
      <dc:creator>Rebellion Software</dc:creator>
      <pubDate>Tue, 05 Aug 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/rebellion_software/harness-ai-by-looking-to-the-past-4old</link>
      <guid>https://dev.to/rebellion_software/harness-ai-by-looking-to-the-past-4old</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Software development is changing rapidly, and with it, what it means to be a developer. With new LLM-powered tools emerging constantly, keeping up can feel overwhelming. Many developers have tried AI once or twice, only to be disappointed when it produced code that needed extensive fixes. Others expect magic: "I told it to build a feature and it went crazy."&lt;/p&gt;

&lt;p&gt;Here's the reality: AI tools are like junior developers with no context about your project, codebase, or tech stack. Until you teach them these things, they'll write code like a lost junior with no mentor. The good news? The same development methods that made teams efficient a decade ago can make working with AI more productive today. Those practices that helped teams of junior developers succeed also work with AI, and managing this workflow with a manager's mindset is key.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Power of Small, Well-Defined Features in Software Development
&lt;/h2&gt;

&lt;p&gt;When considering methods that make teams more effective, two stand out: Feature Driven Development (FDD) and Minimal Marketable Features (MMF). Both emphasize small, well-defined features with focused scope. These approaches improve communication and reduce risk by making it easier to adapt to changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Feature Driven Development
&lt;/h3&gt;

&lt;p&gt;FDD breaks large features into smaller, narrowly-scoped pieces. This limits scope creep, increases productivity, and improves team focus&lt;sup&gt;1&lt;/sup&gt;. While it requires more planning time, the trade-off pays off in faster development. I've seen this firsthand when a feature expected to take three sprints was completed in less than one.&lt;/p&gt;

&lt;p&gt;FDD offers several benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Smaller features are easier to refactor when requirements change&lt;/li&gt;
&lt;li&gt;Teams get greater visibility into code and issues earlier&lt;/li&gt;
&lt;li&gt;Communication improves across the team&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As Bob Stanke writes:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Feature Driven Development places a strong emphasis on communication and collaboration between team members. This can help to improve the flow of information between team members and ensure that everyone is working towards a common goal&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Key ideas from FDD:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Narrowly defined scope that prevents scope creep and increases focus&lt;/li&gt;
&lt;li&gt;Small iterative changes that are easier to refactor&lt;/li&gt;
&lt;li&gt;Increased importance of preplanning and project management&lt;/li&gt;
&lt;li&gt;Increased communication and collaboration working towards a common goal&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Minimal Marketable Features
&lt;/h3&gt;

&lt;p&gt;MMF is similar to FDD but emphasizes incremental value delivery by breaking complex features into smaller deliverable pieces&lt;sup&gt;2&lt;/sup&gt;. It focuses on delivering small units that provide value in steps, helping teams prioritize what provides the most impact for customers.&lt;/p&gt;

&lt;p&gt;Like FDD, MMF increases adaptability to changes and feedback while enhancing collaboration. It also reduces costs by limiting what's being built at any time, minimizing the risk of scope creep and costly errors.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;At its core, MMF is about breaking down complex software requirements into smaller, manageable chunks. Each MMF should be self-contained and independently usable, allowing developers to deliver value incrementally.&lt;sup&gt;2&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Key takeaways from MMF:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Narrowly defined scope that emphasizes small usable deliverables&lt;/li&gt;
&lt;li&gt;Easier to refactor and adapt to changes&lt;/li&gt;
&lt;li&gt;Increased planning and collaboration&lt;/li&gt;
&lt;li&gt;Reduces scope creep and waste through planning and incremental deliverables&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These methodologies share similar principles focused on structured approaches to increase performance. Now consider what developers often say about AI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"It tried to code the whole project at once"&lt;/li&gt;
&lt;li&gt;"It didn't build what I wanted"&lt;/li&gt;
&lt;li&gt;"I wasted time fixing bugs and bad code"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sound familiar?&lt;/p&gt;

&lt;h2&gt;
  
  
  Prompt-Driven Development: Bringing the Same Principles to AI Collaboration
&lt;/h2&gt;

&lt;p&gt;When developing with AI, three critical truths emerge:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Context is key - the AI only knows what you tell it or what it can find&lt;/li&gt;
&lt;li&gt;It's a collaboration, not magic&lt;/li&gt;
&lt;li&gt;Communication remains just as important as with human teammates&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Looking at common developer complaints about AI, it becomes clear there's a process problem - just like when you ask a junior developer to build a complex feature with no guidance.&lt;/p&gt;

&lt;p&gt;Prompt-driven development is about communication and collaboration. It's working with what resembles a junior developer and managing their workflow. Your communication should include essential context like language, framework, and error messages - exactly what a new developer would need.&lt;/p&gt;

&lt;p&gt;The Prompt Engineering Playbook for Programmers states this perfectly:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Always assume the AI knows nothing about your project beyond what you provide…Specificity and context make the difference between vague suggestions and precise, actionable solutions.&lt;sup&gt;3&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This brings us to a fundamental truth: output quality directly correlates with prompt quality. Like the difference between a vague 10-point ticket and a well-defined, narrowly-scoped feature.&lt;/p&gt;

&lt;p&gt;The Playbook continues:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Detail and direction are your friends. Provide the scenario, the symptoms, and then ask pointed questions. The difference between a flailing 'it doesn't work, help!' prompt and a surgical debugging prompt is night and day.&lt;sup&gt;3&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Just as well-planned features can be developed almost on autopilot, well-crafted prompts dramatically improve AI code generation. Focusing on narrowly-scoped pieces makes AI-generated code easier to review, refactor, and adjust - just like with human teams. Clear instructions, examples, context, restrictions, and expected results transform the output compared to vague requests like "Make me a contact page."&lt;/p&gt;

&lt;p&gt;From "Mastering Amazon Q Developer Part 1":&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Your success…depends directly on how well you communicate with it. The difference between a vague request and a well-structured prompt can be the difference between wasted time and a productivity breakthrough.&lt;sup&gt;4&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The quality of information you receive directly correlates with the quality of the information you provide.&lt;sup&gt;4&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Good prompting techniques increase productivity, reduce technical debt, and shorten debugging time&lt;sup&gt;5&lt;/sup&gt;. Simply put:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Good prompt engineering is the difference between AI being your 10x productivity multiplier and a technical debt generator&lt;sup&gt;5&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As Yash Poojary noted in "I Rebuilt Sparkle in 14 Days with AI":&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;There's a saying that you're the average of the five people you spend the most time with. The same goes for code. Your code is the average of the first five prompts you feed into your editor. If those prompts are scattered or vague, the model will drift. But if you set a strong foundation early—clear structure, naming, logic—the model starts acting like a teammate instead of a guesser.&lt;sup&gt;6&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Remember the key principles from FDD and MMF? All eight points remain true when coding with AI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Narrow, well-defined scopes produce better outputs&lt;/li&gt;
&lt;li&gt;Better planning and prompting yield better code&lt;/li&gt;
&lt;li&gt;Scope creep is controlled through narrowing features&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By applying these proven development principles to AI collaboration, your AI-assisted code will continuously improve.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices for Working with AI in Development
&lt;/h2&gt;

&lt;p&gt;By now, you've likely recognized the parallels between managing AI and managing a team of developers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use clear, specific prompts with well-defined outcomes&lt;/li&gt;
&lt;li&gt;Include context and examples in your prompts&lt;/li&gt;
&lt;li&gt;Narrow the scope - don't ask for a park, ask for a tree&lt;/li&gt;
&lt;li&gt;Review, iterate, communicate, and establish rules&lt;/li&gt;
&lt;li&gt;Treat AI-generated code like work from a new junior dev - review everything critically&lt;/li&gt;
&lt;li&gt;Avoid vagueness - less context means worse output&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Beyond these basics, experiment with different AI models to discover their strengths and weaknesses. Using the right model for specific tasks can dramatically improve results.&lt;/p&gt;

&lt;p&gt;Two additional strategies have significantly improved my AI collaboration:&lt;/p&gt;

&lt;p&gt;First, instruct the AI to aim for 95% confidence in its work, asking clarifying questions when needed. This creates stronger collaboration through back-and-forth communication, dramatically improving output quality.&lt;/p&gt;

&lt;p&gt;Second, give the AI permission to acknowledge uncertainty. Models sometimes get so focused on completing tasks that they'll replace problematic code with placeholders or remove it entirely. When you explicitly give them permission to stop and ask questions, they're more likely to seek clarification rather than produce flawed code.&lt;/p&gt;
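&lt;p&gt;As a concrete illustration, both strategies can be baked into your project rules. The wording below is my own - adapt it to your setup:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Confidence
Aim for 95% confidence before making changes. If you are below that
threshold, stop and ask clarifying questions instead of guessing.

## Uncertainty
It is acceptable to say "I don't know." Never replace code you do not
understand with placeholders or delete it; pause and ask instead.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;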

&lt;h3&gt;
  
  
  TLDR For Rules
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Be Specific&lt;/strong&gt; : Instead of "write clean code," specify "use meaningful variable names, limit functions to 20 lines, include JSDoc comments for all public methods."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Include Examples&lt;/strong&gt; : Show the AI exactly what good looks like in your codebase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set Boundaries&lt;/strong&gt; : Clearly state what the AI should and shouldn't do (e.g., "Never modify database schemas without explicit approval").&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update Regularly&lt;/strong&gt; : Refine rules based on what works and what doesn't in practice.&lt;/p&gt;

&lt;h3&gt;
  
  
  TLDR For Chat Prompts
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Front-load Context&lt;/strong&gt; : Put the most important information first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Be Action-Oriented&lt;/strong&gt; : Use verbs like "implement," "refactor," "optimize," "debug."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Specify Output Format&lt;/strong&gt; : Tell the AI how you want the response structured.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Include Relevant Code&lt;/strong&gt; : Paste existing code that the AI needs to understand or work with.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ask for Explanations&lt;/strong&gt; : Request reasoning behind implementation choices when learning.&lt;/p&gt;

&lt;h3&gt;
  
  
  Examples
&lt;/h3&gt;

&lt;p&gt;Instead of "fix this bug," try this approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are a senior developer on a project using [tech stack info].
I am experiencing [specific problem] in [location/context].
Expected behavior: [what should happen]
Current behavior: [what actually happens]
Relevant code: [paste code if applicable]
Please identify the issue and provide a fix.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives the AI much more context than "my app is crashing on the product list, fix it." For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are a senior React developer.

I'm getting a "Cannot read property 'map' of undefined" error in my React component. 

Current behavior: App crashes when ProductList renders
Expected behavior: Should display empty state when no products

Component code:

const ProductList = () =&amp;gt; {
  const [products, setProducts] = useState();

  useEffect(() =&amp;gt; {
    fetchProducts().then(setProducts);
  }, []);

  return (
    &amp;lt;div&amp;gt;
      {products.map(product =&amp;gt; (
        &amp;lt;ProductCard key={product.id} product={product} /&amp;gt;
      ))}
    &amp;lt;/div&amp;gt;
  );
};

Please identify the issue and provide a robust solution that handles loading states and potential fetch failures.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
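&lt;p&gt;For illustration only (the article leaves the fix to the AI): the likely root cause is that &lt;code&gt;products&lt;/code&gt; starts as &lt;code&gt;undefined&lt;/code&gt;, so &lt;code&gt;products.map&lt;/code&gt; throws on the first render. A common fix is a safe default plus explicit loading and error states. Here is a framework-free JavaScript sketch of that pattern - the helper names are mine, not from the article:&lt;/p&gt;

```javascript
// Sketch of the defensive pattern behind the fix: default the list to [],
// and model loading/error states explicitly so render code never touches
// undefined data. Helper names are illustrative, not part of any API.

// Initial state: empty list, loading flag set, no error yet.
function createProductListState() {
  return { products: [], loading: true, error: null };
}

// Pure render helper: returns a text description of what the UI shows.
function renderProductList(state) {
  if (state.loading) return "Loading...";
  if (state.error) return "Error: " + state.error;
  if (state.products.length === 0) return "No products found";
  return state.products.map((p) => "ProductCard(" + p.id + ")").join(", ");
}
```

&lt;p&gt;In the React component itself, the equivalent change is &lt;code&gt;useState([])&lt;/code&gt; plus separate loading/error state, with an early return for each case before the &lt;code&gt;map&lt;/code&gt; call.&lt;/p&gt;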



&lt;p&gt;For small features and additions, this format works well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are a senior developer on a project using [tech stack info].
I need to [specific feature] for [context]. 
Requirements: [list key requirements]
Constraints: [technical limitations]
Please [specific action] and ensure [quality criteria].
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
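&lt;p&gt;To make the template concrete, here is a filled-in version for the contact-page example mentioned earlier. The project details (React 18, the &lt;code&gt;/api/contact&lt;/code&gt; endpoint) are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are a senior developer on a project using React 18 and Tailwind CSS.
I need to add a contact form to the marketing site's /contact page.
Requirements: name, email, and message fields; client-side validation;
submit through the existing /api/contact endpoint.
Constraints: no new dependencies; match the styling of existing form components.
Please implement the form as a single component and ensure validation
errors are announced to screen readers.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;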



&lt;p&gt;For larger features, include more context in a separate file with numbered tasks. For code reviews and refactoring:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are a senior developer on a project using [tech stack info].
Please review/refactor this [code type]:
[paste code]
Focus on: [specific aspects like performance, readability, maintainability]
Maintain: [what should stay the same]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For project rules or feature specifications, these templates are effective:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# [Project Name] - [Technology Stack]

## Project Context
[Brief description of what you're building, target users, main objectives]

## Tech Stack
- [Framework/Library]: [Version]
- [Language]: [Version] 
- [Database]: [Version]
- [Other key dependencies]

## Coding Standards
- [Naming conventions]
- [Code formatting preferences]
- [Comment/documentation style]
- [File organization patterns]

## Architecture Patterns
- [Design patterns to use]
- [Folder structure preferences]
- [Component/module organization]

## Quality Requirements
- [Testing expectations]
- [Performance standards]
- [Accessibility requirements]
- [Security considerations]

## Output Preferences
- [How you want code structured]
- [Documentation expectations]
- [Error handling approach]

## Constraints
- [What to avoid]
- [Technical limitations]
- [Business requirements]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For feature specifications:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I need to implement [specific feature] for [project context].

Requirements:
- [Key requirement 1]
- [Key requirement 2]  
- [Key requirement 3]

Technical constraints:
- [Framework/library versions]
- [Performance requirements]
- [Browser/device support]

Please create:
1. [Specific deliverable 1]
2. [Specific deliverable 2]
3. [Specific deliverable 3]

Additional considerations:
- [Error handling needs]
- [Testing requirements]
- [Documentation needs]

Success criteria: [How I'll know it's working correctly]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For complex debugging:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I'm experiencing [specific problem] in [location/component/function].

Expected behavior: [What should happen]
Current behavior: [What actually happens]
Error message (if any): [Exact error text]

Relevant code:
[Paste problematic code here]

Context:
- [When does this occur]
- [Steps to reproduce]
- [Environment details if relevant]

Please identify the root cause and provide a fix that [specific requirements for the solution].
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, a versatile template for general AI interactions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Role
You are a senior developer working in [tech stack]

## Tone
This is optional, but useful for content/copy

## Instructions/subtasks
A numbered list of tasks you can refer to and instructions

## Rules
A numbered list of rules that should be followed

## Examples/Context
Output examples, directory structure, relevant files etc.

## Additional context
Any additional needed context

## Hallucinations/Off-Ramp
Set expectations on certainty and explain that the expected behavior is to stop and ask questions when needed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By applying time-tested team development patterns to AI collaboration, you can dramatically improve your AI-generated code. Focus on well-defined, narrow scopes. Communicate clearly. Collaborate effectively. Invest time in planning.&lt;/p&gt;

&lt;p&gt;Just as with a junior developer, this process takes patience but yields impressive results. The practices that make development teams successful - Feature Driven Development, Minimal Marketable Features, clear communication, and thorough planning - apply just as well to AI collaboration.&lt;/p&gt;

&lt;p&gt;Try these techniques in your next AI coding session. The results may surprise you.&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://www.bobstanke.com/blog/feature-driven-development" rel="noopener noreferrer"&gt;Feature-Driven Development: Pros, Cons, and How It Compares to Scrum - Bob Stanke&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://teamhub.com/blog/understanding-minimum-marketable-features-mmf-in-software-development/" rel="noopener noreferrer"&gt;Understanding Minimum Marketable Features (MMF) in Software Development - TeamHub Blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://addyo.substack.com/p/the-prompt-engineering-playbook-for" rel="noopener noreferrer"&gt;The Prompt Engineering Playbook for Programmers - Addyo Substack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/devops/mastering-amazon-q-developer-part-1-crafting-effective-prompts/" rel="noopener noreferrer"&gt;Mastering Amazon Q Developer Part 1: Crafting Effective Prompts - AWS Blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://tech-stack.com/blog/what-is-prompt-engineering/" rel="noopener noreferrer"&gt;What is Prompt Engineering and Why It Matters for Generative AI - Tech Stack Blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://every.to/source-code/i-rebuilt-sparkle-in-14-days-with-ai" rel="noopener noreferrer"&gt;I Rebuilt Sparkle in 14 Days with AI - Yash Poojary&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>prompting</category>
      <category>featuredrivendevelop</category>
      <category>coding</category>
    </item>
  </channel>
</rss>
