In the era before AI-assisted coding, my workflow for any feature followed this pattern:
I would analyze the business problem. Even if the Product Manager had already spent time on it, I would read the documentation and ask clarifying questions. This established a foundational understanding of the problem.
After some back-and-forth discussion, I would begin planning the implementation. This deepened my understanding of the problem.
I would review the existing codebase to identify established patterns and determine what I could reuse. This strengthened my familiarity with the codebase.
If no existing pattern applied, I would research similar scenarios and evaluate design patterns to find the best fit for the problem. This reinforced my coding practices and potentially uncovered new solutions.
Finally, I would start implementing. As I coded, I would continuously consider improvements and alternative approaches. This increased my familiarity with both the problem and the solution.
After completing this process, I could often recall the implementation details and logic from memory during team discussions. If a bug arose, I could usually deduce its cause without immediately inspecting the code, often because I recognized an edge case I had overlooked during implementation.
Overall, this process helped me learn more, retain more knowledge, and perform more of the work myself. These were actually the most fun parts of the process. Today, I spend much of my time reviewing code. However, reviewing is not the same as writing it. As the saying goes in mathematics, you cannot learn simply by reading a textbook; you must engage with the material and put pen to paper.
Maybe times have changed, and I do not even need to know all those details. But then it makes me wonder: am I redundant in this process?
Some people might point out that you bring taste and judgment. But what stops a non-developer from exercising those same skills? They only need to ask the AI for alternatives and pick the best option based on their own understanding.
There are still a few areas where AI falls short, especially anything involving integration, whether with hardware devices or multiple systems stitched together. But these are mostly missing bridges (e.g., AI cannot click buttons on a hardware device or check multiple systems at once), and such tasks are limited in number. Software engineers working in novel fields might also not feel redundant, but those people are few and far between.
This makes me lean toward "redundant" as the answer for most dev jobs today. The only way forward seems to be moving to the next level, i.e., truly being an engineer (working with systems that do not exist yet) instead of a mechanic or developer (working with known systems).