Originally posted on https://proofpocket.com/blog/software-dev-in-ai-era
When I was building the first version of Proof Pocket, I wrote every line myself. Not because I had to, but because I needed to actually understand the domain — encryption, offline storage, what it means for a user to trust an app with sensitive documents. That understanding would have been much harder to gain if I had just prompted my way through it.
I think about that a lot when I see how AI-driven development is being sold right now.
We're moving complexity, not reducing it
A lot of AI-driven software development feels heavily influenced by marketing. We are being pushed toward vibe coding and faster delivery. Everyone keeps saying we will be left behind if we do not adopt it now. The promise is attractive: build more, ship faster, reduce the manual work.
But AI-generated results are often inconsistent and non-deterministic. Code review has always been one of the harder parts of software development, and now we are getting even more code to review. In some cases, we are not reducing complexity. We are just moving it somewhere else.
I also think real product understanding will become a serious problem. Developers usually understand the product better when they actually build parts of it themselves. If most of the implementation is delegated to AI, it becomes easier to lose context. And without context, it is much harder to make good technical and product decisions.
Code was rarely the bottleneck
This is subjective, but in many projects I have worked on, writing code was not always the thing that slowed us down the most.
Very often, the real blockers were decisions, unexpected pivots, unclear requirements, and changing direction in the middle of the work. Many times, I deleted my own code simply because the requirements changed.
In many teams, not enough attention is given to business analysis, diagrams, process descriptions, and proper planning before the coding starts. Decision-making is crucial and hard, and it is often the part that gets skipped.
This does not apply to every case. Startups are different. When you are doing quick experiments, validating ideas, or building an MVP, speed matters a lot. But I would separate those situations from larger, more mature projects.
We need different processes, not one universal fix
I do not think there will be one perfect process for AI-assisted development. We probably need several different ways of building products, depending on the size, risk, and maturity of the project.
It is much easier to go into "YOLO mode" for a small product or startup. But for a big company with multiple products, especially critical ones like financial or medical systems, that approach is simply not acceptable.
Even before AI, I felt that being too attached to Scrum could become a rabbit hole. Imagine a small startup product estimated for two or three months of work. Having all the usual meetings, developing every tiny task independently, and then testing each one manually can waste a lot of time.
For this kind of project, I would rather divide the product into main workflows. Instead of treating every small part of the user account area as a separate delivery, I would group the whole user account and profile flow together — from account creation to filling in and editing profile data.
This bigger workflow can still be divided into smaller subtasks for easier code review. But I would not send every tiny part to QA separately. I would rather deliver the whole workflow, let QA test it manually as one complete section, gather all bugs, report them together, and then iterate.
With this approach, you can divide the app into a few real use cases and deliver them one after another. This makes even more sense for a startup or MVP, where the product should usually do one thing well instead of trying to become a super app from the beginning.
Someone could say that QA will have a lot to check at the end. But I think it is often easier to focus on a complete section of the app, write proper test cases for it, and test the whole flow without constantly switching between unrelated tasks and meetings.
This is also how Proof Pocket was handled. The first version was just simple onboarding and document encryption. No polished UI, no backups, no peer-to-peer transfers, no categories. It was supposed to address one simple problem: I have a confidential document on my phone, I want to keep it safely encrypted, and I want easy offline access to it.
But what about bigger projects?
Core domain still needs a human
I think the main domain should still be handled by software developers, with AI acting as an assistant. The code should be well known and understood by the developers, especially if they are expected to participate in decision-making meetings and understand the real business behind the product.
A good analogy is Domain-Driven Design. DDD teaches us to describe the main domain and subdomains carefully. Not every part of the system has the same importance. Some parts are core to the business, while others are just supporting or generic functionality.
This is where AI can be very useful. We can use AI to generate boilerplate, implement non-critical parts, help find bugs, audit code, or build features where deep business knowledge is not required.
For example, I do not always need to know every detail of how a table or view is rendered. But I do need to know what data should be presented, why it matters, and how it affects the user or the business process.
But for the core domain, I would be very careful. If the most important part of the business logic is generated without proper understanding, the team may move fast at the beginning and pay for it later.
Maintenance vs new features
AI is quite good when the task is to add another module, another layer, or another isolated feature. But reality is often more complicated than that.
Many legacy projects contain hard dependencies, hidden assumptions, and years of workarounds. New bugs can easily be introduced when existing code is changed. Sometimes the code looks strange only because the historical context is missing. Without that context, AI may "fix" something that was actually protecting the system from another problem.
From my experience, letting AI maintain a large legacy codebase is a no-go. AI can be very helpful when finding the source of an issue — reading stack traces, analyzing logs, pointing out suspicious places in the code. But when it tries to fix the issue automatically, it often introduces new problems.
We can build self-healing loops, automation, extra tests, and verification pipelines. But in most cases, the effort is not worth it.
For maintenance, I prefer to use AI in assisted mode. Let it help with debugging, finding the issue, deciphering native stack traces, and pointing out possible causes. But the actual bug fix should be done carefully by a human.
For new features, especially in a project with a good modular architecture, we can give AI a bit more freedom. If the boundaries are clear and the codebase has a proper test harness, AI can be very effective while still keeping the code reasonably clean.
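One way to picture what a "clear boundary" looks like in practice: define the contract the rest of the app depends on, and let any implementation, human-written or AI-generated, be reviewed and tested against that contract. This is only an illustrative sketch; the names (DocumentStore, InMemoryStore) are hypothetical and not from any real codebase.

```python
from typing import Protocol


class DocumentStore(Protocol):
    """The boundary: the rest of the app codes against this contract."""

    def save(self, doc_id: str, data: bytes) -> None: ...
    def load(self, doc_id: str) -> bytes: ...


class InMemoryStore:
    """One implementation of the contract; an AI-generated alternative
    (say, an encrypted on-disk store) would be judged the same way."""

    def __init__(self) -> None:
        self._docs: dict[str, bytes] = {}

    def save(self, doc_id: str, data: bytes) -> None:
        self._docs[doc_id] = data

    def load(self, doc_id: str) -> bytes:
        return self._docs[doc_id]


def roundtrip(store: DocumentStore) -> bytes:
    """A contract test: any implementation must pass this, which is what
    makes delegating the implementation comparatively safe."""
    store.save("doc-1", b"secret")
    return store.load("doc-1")
```

The point is not the pattern itself but the review surface it creates: when the boundary is explicit, you can accept or reject generated code by running it against the contract instead of re-deriving the whole design.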
Testing and AI
One thing I observe now is that when it becomes easy to add tons of tests with AI, people just do it.
At first, this sounds great. More tests, better coverage, fewer bugs. But in practice, many of those tests are useless AI slop that only slows down the CI pipeline without adding much real confidence.
On the other hand, AI makes it much easier to add missing tests for parts of the system that we never had time to cover before. That is a real benefit.
Please do not flood your project with useless tests just because AI wants 100% coverage or because it can generate hundreds of test cases in a few seconds. Balance matters.
Tests should protect real behavior. They should document important assumptions. They should make refactoring safer. They should not exist only to satisfy a coverage number.
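The difference is easy to show with a toy example. Below, the first test pins down a real behavior and documents an assumption (discounts round down, in the customer's favor), so it fails if that rule silently changes. The second test merely executes the code to bump coverage and would pass no matter what the rounding rule is. This is a minimal illustration with made-up names, not code from any real project.

```python
def apply_discount(price_cents: int, percent: int) -> int:
    """Apply a percentage discount, rounding down to whole cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price_cents * (100 - percent) // 100


def test_discount_rounds_down_in_customers_favor():
    # Protects real behavior: 10% off 999 is 899.1, and we document
    # the assumption that we round DOWN, not to the nearest cent.
    assert apply_discount(999, 10) == 899


def test_discount_runs():
    # Coverage padding: exercises the line, asserts nothing about the
    # rounding rule, and keeps passing even if the behavior changes.
    apply_discount(999, 10)
```

A hundred generated tests of the second kind inflate the coverage number without making a single refactoring safer; one test of the first kind does.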
Where does that leave us?
I wonder what the future of software development will look like. I genuinely do not know.
AI will definitely change the way we build software. It already has. But I do not think the future is just about replacing developers with prompts.
Developers will still need to understand the product, the business domain, the architecture, and the trade-offs. AI will help us move faster, automate boring work, and explore solutions more quickly. But the gap between what you can build and what you actually understand is where things will go wrong.
But the responsibility for the product will still stay with humans.
At least, I hope so.