Not long ago, most of my working day was spent writing code.
Today, a large part of it is spent writing… prompts.
Ever since AI appeared in programming, I’ve been a big supporter of it. At first, it was mostly simple features like autocomplete, suggesting more or less accurately what I needed. Then came generating unit tests, writing simple functions through an agent, and similar things.
On a daily basis I work with a solution that contains hundreds of projects, each with hundreds of source files. Some of those files are hardcore legacy code, often thousands of lines long. Because of that, I started using AI for a new kind of task — describing what a given class or project actually does, or locating places in the code that might interest me so I can inject my own implementation.
Today I catch myself mainly writing prompts rather than code. I review the output produced by AI, and if I modify it, it’s usually just a few lines. The actual implementation mostly comes down to writing the prompt correctly, which an agent can then implement much faster than I could myself.
My role is to review the results, test the changes manually, and sometimes write one or two additional prompts that slightly adjust the solution. I rarely need to write code if I have good instructions and I know what I want.
When it comes to code review, it often means reading a report generated by AI, approving a few remarks that are actually relevant to the project, and rejecting the rest. Then I jump on a call with a colleague and we discuss some of the details.
AI has had a significant impact on the way I work. I now spend much more time thinking about how something should work before implementing it. More time on conceptual work. Of course, part of it might simply be professional maturity that comes with experience. In the past I would just start writing code and modify it along the way. Today it’s quite satisfying to delegate the implementation itself to artificial intelligence and get a result within a few minutes, a dozen or so at most.
The transition from one way of working to another was completely smooth. It wasn’t an “aha” moment or an overnight switch. Over time I simply started using it more and more, building new tools that automate parts of my workflow. At some point I realized that I had moved from being a Senior .NET Developer to something closer to a Senior Prompt Engineer :D
A few things definitely contributed to that shift. MCP (Model Context Protocol) certainly played a big role. But beyond the different factors that make AI genuinely useful and capable of doing real work, I see a big difference in the way prompts themselves are written.
The foundation of the AI world is the prompt.
At the beginning I talked to AI almost like I would talk to a colleague at work, giving it plenty of room to guess and interpret what I meant. Today, every prompt that instructs AI to generate code is created with the help of a dedicated agent designed specifically for that purpose. In its own way, it keeps the more “creative” tendencies or hallucinations in check.
With this approach, GitHub Copilot has a much harder time drifting away from the path I want it to follow. It stopped adding code in classes I have no intention of modifying, it gets lost less often, and it’s much easier for me to land on a good solution almost immediately.
What still feels a bit annoying is the manual flow. It usually means talking to one agent that generates a prompt, then copy-pasting it into another one, and so on. Often the first agent needs a fairly extensive description of the task’s context — something that theoretically AI could just read on its own.
My goal is to connect everything into a single flow: writing everything in the Copilot window inside Visual Studio, where it would be instructed to transform my input into a well-structured, professional prompt that I would only need to approve.
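The single flow described above can be sketched roughly like this. To be clear, this is a hypothetical illustration, not a real Copilot or Visual Studio API: `refine`, `implement`, and `approve` are stand-ins for the prompt-writing agent, the coding agent, and my own approval step.

```python
from typing import Callable, Optional

def run_pipeline(task: str,
                 refine: Callable[[str], str],
                 implement: Callable[[str], str],
                 approve: Callable[[str], bool]) -> Optional[str]:
    """Chain the two agents into one flow: turn the raw task into a
    structured prompt, let the human approve it, then hand it to the
    coding agent. Returns the coding agent's output, or None if the
    prompt was rejected."""
    structured_prompt = refine(task)      # first agent: prompt writer
    if not approve(structured_prompt):    # human stays in the loop
        return None
    return implement(structured_prompt)   # second agent: code generator

# Stub lambdas stand in for the real agent calls:
result = run_pipeline(
    "Add retry logic to the payment client",
    refine=lambda t: f"Task: {t}\nConstraints: do not touch other classes.",
    implement=lambda p: f"// code generated for:\n// {p.splitlines()[0]}",
    approve=lambda p: True,
)
```

The point of the sketch is only that approval sits between the two agents, replacing today's copy-paste step with a single pipeline.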
Maybe after working like this for some time I’ll eventually reach a point where I can trust the implementation plan proposed by AI from the very beginning.
The difference between prompts I write myself and those created by an agent is quite significant. The agent often adds various warnings and instructions: what the coding agent should not do and what it should do. It lays everything out clearly, step by step.
For me, most of these things usually seemed obvious. I simply don’t have the habit of writing instructions that precisely. Because of that, I often ended up going down the wrong path — AI would follow a direction different from the one I intended.
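The structure the agent produces can be imitated with a small helper. This is my own sketch of that shape, not the agent's actual output format: explicit "do" and "don't" lists plus ordered steps, the things I never bothered to spell out myself.

```python
def build_structured_prompt(task: str,
                            dos: list[str],
                            donts: list[str],
                            steps: list[str]) -> str:
    """Assemble a coding prompt with explicit guardrails, mirroring
    the layout a prompt-writing agent tends to produce."""
    lines = [f"## Task\n{task}", "## You MUST"]
    lines += [f"- {d}" for d in dos]
    lines.append("## You MUST NOT")
    lines += [f"- {d}" for d in donts]
    lines.append("## Steps")
    lines += [f"{i}. {s}" for i, s in enumerate(steps, 1)]
    return "\n".join(lines)

prompt = build_structured_prompt(
    task="Add a null check in the order handler",
    dos=["keep the change inside OrderHandler"],
    donts=["modify any other class", "reformat unrelated code"],
    steps=["locate the handler method", "add the check", "update the unit test"],
)
```

Writing the prohibitions down explicitly is exactly what keeps the coding agent from wandering into classes it has no business touching.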
Today, what I bring into this workflow is direction. Decision-making. Connecting solutions within a broader context. I need to know what I want. And in the end, validation of results and evaluation of what actually makes sense.
Statistically, I usually close an implementation within a few iterations. Sometimes fewer, sometimes more. The more precisely I describe the task, the better the results. If I approach the planning carefully, or if the task itself is relatively simple, the implementation often finishes in a single iteration and I get exactly what I wanted.
Now I’m going to say something that many people might disagree with — especially those who are strongly attached to the role of the expert.
I have the impression that remembering various technical nuances is becoming less important. Copilot already has that knowledge built in.
More and more I find myself thinking at a higher level — how to connect things, which algorithm or design pattern will solve a problem — rather than how exactly to implement something or which method to call.
Even though this trend clearly moves toward the total automation of a developer’s work, for now I’m not particularly worried about AI replacing me.
I use it a lot, and I see how much effort still goes into guiding it properly and how precisely I need to explain the expected outcome.
Without that — at least for now — AI struggles when it comes to working in large organizations and massive codebases.
Maybe at some point in the future I’ll revise that opinion.