This article reflects my personal experience over 8 months of intensive AI-assisted development across multiple projects. Your mileage may vary, but the principles should translate across languages and frameworks.
I presented this topic at Moldova DevCon 2025, and I want to share the whole story here.
This is my journey in pair software development with AI. It is not an advertisement or promotion for any of the products or services I'm going to talk about; it is a personal opinion. I'll cover some of the challenges and gains I've encountered during these coding sessions.
Why I Started This Journey
Eight months ago, I was that developer who rolled his eyes at AI coding tools. "Just fancy autocomplete," I thought. "Real developers don't need assistance." Basically, I didn't want to pay for something I didn't believe would be useful to me. Then we decided to give it a try. We have PCI DSS certification, so we can't just go with any AI; we'd need something more or less self-hosted, and nobody wants to support that kind of infrastructure, so we decided to give Amazon Q Developer a go. At that time I had an overwhelming backlog of code reviews, with more coming.
More out of desperation than curiosity, I installed Amazon Q Developer. Not because I believed in it, but because I was running out of time and options. What happened next wasn't magical or revolutionary - it was just... what I expected. Reviewing changes without any understanding of the context or domain produced nothing useful; almost every review came back as "Looks good to me." But the assistant's ability to use the glab console tool to fetch the changes - and to figure out on its own how to use it - was impressive, to say the least. That pushed me to research, or rather to learn, how to use it properly. Small things came first. The AI would suggest a method name that was better than what I was about to type. It would catch a null reference before I did. It would generate test cases I hadn't thought of.
But the real moment came when I got my hands on Kiro from Amazon. The spec-driven flow made me as excited as an eight-year-old on his birthday. I would describe a problem, and the AI would suggest an approach. It would create a design document, then I would refine the idea, and it would flesh out the technical documentation. We were actually having a conversation about code. Then, once everything looked right, it would generate a step-by-step implementation list. That's when I knew something fundamental had changed in how software gets built.
This is not about AI replacing developers. This is about developers and AI working together in ways that make both more effective. And yes, there are pitfalls, frustrations, and moments when you want to throw your laptop out the window. But there are also moments of genuine partnership that I never expected to experience with code.
Fail Fast
Fail fast, also sometimes termed fail often or fail cheap, is a business management concept and theory of organizational psychology that argues businesses should encourage employees to use a trial-and-error process to quickly determine and assess the long-term viability of a product or strategy and move on, cutting losses rather than continuing to invest in a doomed approach. - Wikipedia.
This concept is also familiar to developers: we try to exit a function as soon as the result is obvious. At the grooming or decomposition stage, when we are only thinking about how something should work and have no practical experience yet, it's hard to estimate the outcome of a new architecture or piece of functionality. A fail-fast approach helps us iterate quickly and test theories before we invest a huge amount of time in them. This is where AI code assistants, and especially agents with tool support, can provide huge help.
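In code, that first idea is the familiar guard-clause pattern. A minimal, generic sketch in C# - the names and numbers are made up for illustration:

```csharp
public static class Pricing
{
    // Fail fast in miniature: exit the function as soon as the result is obvious,
    // keeping the "happy path" for the end.
    public static decimal ApplyDiscount(decimal total, int itemCount, bool eligible)
    {
        if (total <= 0m) return 0m;        // nothing to charge
        if (itemCount == 0) return total;  // nothing to discount
        if (!eligible) return total;       // this customer gets no discount

        return total * 0.9m;               // happy path: 10% off
    }
}
```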
Testing a theory we have in mind can be quick and dirty. We don't need clean code or an onion-layered architecture to get a proof of concept. We need it working, and we need it now.
Recently, I wanted to test if a new caching strategy would improve our API response times. Instead of spending days implementing a full solution, I asked the AI agent to create a quick prototype with cache integration. In 30 minutes, I had a working proof of concept that clearly showed the performance benefits. This validated the approach before committing to a full implementation.
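For illustration, here is roughly the shape that prototype took - a minimal C# sketch with a throwaway in-memory cache. The service and method names are invented for this post, not taken from the real system:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Quick-and-dirty cache prototype: no eviction policy, no stampede protection -
// just enough to measure whether caching helps at all.
public class CachedProductService
{
    private readonly ConcurrentDictionary<int, (DateTime CachedAt, string Value)> _cache = new();
    private static readonly TimeSpan Ttl = TimeSpan.FromMinutes(5);

    public async Task<string> GetProductAsync(int id)
    {
        // Serve from the cache while the entry is still fresh.
        if (_cache.TryGetValue(id, out var entry) && DateTime.UtcNow - entry.CachedAt < Ttl)
            return entry.Value;

        var value = await LoadFromDatabaseAsync(id); // the slow call we want to avoid
        _cache[id] = (DateTime.UtcNow, value);
        return value;
    }

    private static async Task<string> LoadFromDatabaseAsync(int id)
    {
        await Task.Delay(200); // stand-in for the real query
        return $"product-{id}";
    }
}
```

Good enough to time a few hundred requests with and without the cache, and nothing more.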
The Fear Of A Blank Page
The fear of a blank page is a common experience for many writers and artists. This anxiety often stems from the pressure to create something perfect right from the start. It can lead to procrastination and self-doubt, making it difficult to begin any creative work.
I have it too. I often think about a new pet project and what I'd build, and then... I just don't want to write a huge amount of code just to get started. This is especially true for web projects or API hosts of some kind. I need to configure logging the way I like it, plus metrics, health checks, authorization and authentication, and so on. The process makes me tired in advance, and I don't want to do it. I've tried using templates, but languages and frameworks change so fast that this approach isn't viable for me either.
AI code agents can help overcome this fear - or rather, in my case, procrastination. They can create a starter project with everything I like and require, and do it faster, while I keep thinking about how I want to build the main logic of the application.
What used to take me 2-3 hours of boilerplate setup now takes 15-20 minutes. The AI can scaffold a complete web project with structured logging, health check endpoints, authentication setup, containerization configuration, and basic database operations. While the AI handles the infrastructure, I can focus on the business logic that actually matters.
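To give a feel for it, here is a condensed sketch of the kind of starter the agent generates for me. It's an ASP.NET Core minimal API - an assumption about the stack for the sake of the example, not a recommendation:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Structured (JSON) logging to the console.
builder.Logging.ClearProviders();
builder.Logging.AddJsonConsole();

// Health checks plus auth plumbing, wired up but not yet configured.
builder.Services.AddHealthChecks();
builder.Services.AddAuthentication();
builder.Services.AddAuthorization();

var app = builder.Build();

app.MapHealthChecks("/health");
app.MapGet("/", () => "Hello, scaffolded world!");

app.Run();
```

The real scaffold is of course larger (Dockerfile, database access, proper auth configuration), but this is the skeleton I no longer type by hand.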
What to Choose?
There are a few options right now: IDE-integrated agents or separate console agents. What to choose is a matter of taste. I've used both, and I prefer IDE-integrated ones, because I can see what they are doing and review the code during generation. While the AI generates one part of the project, I can review another.
It's hard to find an IDE these days that doesn't have AI integration of some kind.
My favorites are Amazon Kiro and VS Code Copilot. I get a separate window with the AI agent's ongoing monologue, and a window with files showing me the diff. If I spot that things are going wrong, I can stop the process, correct my prompt, and restart earlier.
I've found it's a much better experience if you use version control with your project. First of all, it gives you peace of mind while code generation is running: you won't lose anything that was pushed to the repo. Second, you can revert safely and fully, then iterate again.
Always work with a clean git state before starting AI sessions. Commit your working changes first.
Context
This is a fight between giving as much as possible and receiving what you expected. When you give an LLM everything you have, it has too much freedom and doesn't understand how to deliver what you want. This is the usual problem between software developers and product owners or business owners: people can't read each other's minds, yet we assume our train of thought can be processed by an LLM exactly as we expect. That's not the case so far. Maybe in the future we'll get AGI and it will do even better than the prompt it was given, but not now. Not yet. We still need to provide some context and set some boundaries.
In my experience, a big codebase is too much for an agent for now. I can't get meaningful results from an LLM on one of our monorepo projects with 250K+ LOC, custom NuGet packages, and dependencies. It's beyond the agent's capabilities. But when I start a new project from scratch, I consistently get acceptable results. Yes, I still have to polish them, but they're usable.
Fortunately, modern AI agents support rule sets that impose constraints. This is very useful for enforcing principles like DRY.
Context Strategy:
- Small focused sessions: Work on one feature/component at a time
- Clear boundaries: Define what files/folders the AI should focus on
- Progressive disclosure: Start with high-level architecture, then drill down
- Rule sets help: use modern language features, follow established patterns, include documentation, use validation libraries, and follow company naming conventions (see the sample rule file below)
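As an example, here is what a hypothetical rule file might look like. In Kiro these live as markdown steering files in the project, and other tools have similar mechanisms, such as Copilot instructions files; treat the exact location and wording as an illustration, not a spec:

```markdown
# Coding conventions

- Target the latest LTS release; prefer modern language features.
- Follow the repository's existing folder layout: one feature per folder.
- Every public method gets doc comments.
- Validate all external input with our validation library before use.
- Naming: PascalCase for types and public members, camelCase for locals.
- Tests live next to the feature they cover, one test class per production class.
```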
When AI Gets It Wrong: Debugging Together
AI isn't perfect. It makes mistakes, especially with complex business logic, framework-specific edge cases, security considerations, and performance optimizations.
The AI once generated a database query that looked correct but had performance issues: it fetched data in a loop instead of using proper joins, making multiple database calls where one would suffice. After I explained the performance problem, it corrected the approach to a single optimized query with the proper relationships included.
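To make the pattern recognizable, here is a sketch of the before and after using EF Core. Order, Customer, and AppDbContext are hypothetical names, not code from our project:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public static class OrderQueries
{
    // Before: one extra query per order - the classic N+1 problem.
    public static async Task<List<Order>> GetOrdersSlow(AppDbContext db)
    {
        var orders = await db.Orders.ToListAsync();
        foreach (var order in orders)
        {
            order.Customer = await db.Customers
                .SingleAsync(c => c.Id == order.CustomerId);
        }
        return orders;
    }

    // After: a single query; Include() is translated into a JOIN.
    public static Task<List<Order>> GetOrdersFast(AppDbContext db) =>
        db.Orders.Include(o => o.Customer).ToListAsync();
}
```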
This is how I usually approach debugging with AI:
- Describe the unexpected behavior clearly
- Share error messages and stack traces
- Explain the expected vs. actual behavior
- Let AI propose solutions, then review together
And remember: AI does not learn new things between sessions. Any findings from a debugging session should be stored somewhere. That can be a separate .md file you add to the context next time or, much better, an update to the project's general documentation.
Code Quality and Maintainability
AI is very good at following rules, especially those related to code quality and maintainability. Specifying rules and guidelines helps the AI generate code that is consistent, readable, and maintainable.
What AI does well:
- Consistent naming conventions. Following naming conventions improves code readability and maintainability, and agents usually do this on their own: they were trained on large codebases, and many projects follow the same conventions.
- Proper exception handling patterns. That said, AI sometimes over-engineers solutions, creating a custom exception type even when it isn't necessary, which leads to confusion and unnecessary complexity (see the sketch after this list).
- Comprehensive documentation. AI is great at writing documentation for existing code, especially code it wrote itself during the same session.
- Following established patterns in the codebase. If you provide examples of how to do something, the AI will follow them religiously. This is very useful if you have specific preferences for file locations, test arrangement, and other conventions.
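Here is the over-engineering pattern I mean, as a sketch with hypothetical names:

```csharp
using System;

public static class ProductValidation
{
    // What the AI sometimes produces: a custom exception that adds nothing.
    public class ProductIdNullException : Exception
    {
        public ProductIdNullException() : base("Product id must not be null.") { }
    }

    public static void ValidateOverEngineered(string? productId)
    {
        if (productId is null)
            throw new ProductIdNullException();
    }

    // What I actually want: the built-in exception says the same thing,
    // and every .NET developer already knows how to handle it.
    public static void Validate(string? productId)
    {
        ArgumentNullException.ThrowIfNull(productId);
    }
}
```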
What needs human review:
- Business logic correctness. This is the Achilles' heel of AI-generated code. The AI can generate code that follows patterns and conventions, but it doesn't always get the business logic right.
- Security implications. Generated code is not always secure; sometimes it is vulnerable to common attacks.
- Performance considerations. Nowadays many developers don't think about performance optimization - it's easier to add more memory and CPU than to optimize code - and LLMs usually struggle with it too.
- Architectural decisions. Deciding when to use which pattern, or how to structure dependencies, is very hard for AI. It's better to leave these decisions to humans.
Testing and AI: A Complex Relationship
AI excels at generating unit tests but struggles with the nuances of integration testing. Mocking classes and services is a no-brainer for AI, but integration testing requires an understanding of the system's architecture and dependencies. AI can help with setting up mocks and stubs, but it's important to make sure the tests cover all relevant scenarios and edge cases.
Strengths:
- Comprehensive test coverage for public methods
- Edge case identification
- Mock setup and teardown
- Test data generation
AI-generated tests follow standard patterns, with proper arrange, act, and assert phases. They include validation for edge cases like invalid inputs and expected exception handling, and the test structure is clean and consistently follows naming conventions.
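For a concrete picture, here is the style of test I typically get back - an xUnit sketch in C#, where DiscountCalculator is a made-up subject under test, included only so the example compiles:

```csharp
using System;
using Xunit;

// Hypothetical subject under test.
public class DiscountCalculator
{
    public decimal Calculate(decimal orderAmount)
    {
        if (orderAmount < 0m)
            throw new ArgumentOutOfRangeException(nameof(orderAmount));
        return orderAmount > 100m ? orderAmount * 0.9m : orderAmount;
    }
}

public class DiscountCalculatorTests
{
    [Fact]
    public void Calculate_AppliesTenPercentDiscount_ForOrdersOver100()
    {
        // Arrange
        var calculator = new DiscountCalculator();

        // Act
        var total = calculator.Calculate(200m);

        // Assert
        Assert.Equal(180m, total);
    }

    [Fact]
    public void Calculate_Throws_ForNegativeAmount()
    {
        var calculator = new DiscountCalculator();

        Assert.Throws<ArgumentOutOfRangeException>(() => { calculator.Calculate(-1m); });
    }
}
```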
Limitations:
- Doesn't understand complex business workflows
- Struggles with database integration tests
- Can't validate UI behavior
- Integration tests in general are difficult for AI to generate
The Economics of AI Development
Let's talk about the elephant in the room: the cost. When you hear "AI," you might think of massive, expensive infrastructure projects. The reality of AI-assisted development is far more down-to-earth. For an individual developer or a small team, the monthly subscription for a top-tier AI assistant is often less than the cost of a few hours of a developer's time. When you factor in the productivity gains—less time spent on boilerplate, faster debugging, and quicker prototyping—the return on investment becomes obvious very quickly. It's not an extravagant expense; it's a force multiplier.
However, the real cost isn't always measured in money. The biggest concern, and a valid one, is data security. Sending your proprietary source code to a third-party server is a non-starter for any organization with serious security or compliance obligations, like the PCI DSS certification we have. This is where you need to be selective. You can't just use any free or consumer-grade tool. The solution is to opt for enterprise-grade services like GitHub Copilot for Business or Amazon Q Developer, which offer commitments around data privacy, ensuring your code isn't used for training their public models and is kept confidential. The cost is slightly higher, but it's the necessary price for secure and responsible AI adoption.
Team Dynamics: Introducing AI to Your Development Process
Bringing an AI assistant into a development team isn't just a technical change; it's a cultural one. You can't just send out a memo and expect everyone to embrace it. Here’s how to do it right:
Start with a Pilot Group: Find a few enthusiastic developers who are open to experimenting. Let them be the pioneers. Their successes and learnings will become the best internal marketing you could ask for.
Establish Clear Guidelines: Don't let it be a free-for-all. Create a simple "How-To" guide: how to write effective prompts, what tasks it's best for (and what it's bad at), and most importantly, what not to share with the AI (e.g., API keys, customer data, security vulnerabilities).
Reframe it as Pair Programming: Encourage the team to think of the AI as a junior pair programmer. It’s fast and knows the syntax, but it lacks domain context and needs senior oversight. This framing helps manage expectations and reduces the fear of replacement.
Evolve Your Code Reviews: AI-generated code still needs human review. The focus of code reviews will shift from catching syntax errors to validating business logic, architectural alignment, and potential security holes. The question is no longer "Does it work?" but "Is it the right solution?"
Create a Feedback Loop: Set up a dedicated chat channel where team members can share wins, funny failures, and effective prompts. This builds a collaborative learning culture and helps everyone get better at using the tool together.
Address the fear head-on: AI isn't here to take jobs, it's here to take away the tedious, repetitive parts of the job, freeing up developers to focus on the complex, creative problem-solving that humans do best.
Actionable Takeaways
If you're going to start your own journey with an AI coding partner, here are the key takeaways from my experience:
Start with a Clean Slate: Always commit your changes to version control before starting a session with an AI agent. It's your ultimate undo button and safety net.
Be the Architect, Let AI Be the Builder: You define the "what" and the "why." Let the AI handle the "how" for the boilerplate and routine code. Your primary role is to provide direction and review the output.
Master the Art of Context: Don't dump your entire codebase on the AI. Give it focused, relevant information for the task at hand. Treat it like a junior developer you're onboarding to a specific feature.
Trust No One: AI makes mistakes. Your expertise is the final quality gate. Review every significant piece of generated code for correctness, security, and performance.
Use AI to "Fail Faster": Got a new idea? Use the AI to build a quick and dirty proof-of-concept. You can validate an approach in minutes or hours, not days.
Document Your Debugging Sessions: When you and the AI solve a problem, update your project's documentation. This becomes part of the context for future sessions, effectively "teaching" the AI your project's nuances.
Introduce It Collaboratively: When bringing AI to your team, do it thoughtfully. Start small, provide guidelines, and foster a culture of shared learning.
Conclusion
AI in software development isn't about replacement—it's about amplification. It amplifies your productivity, your ability to explore ideas quickly, and your capacity to handle complex projects. But it also amplifies your mistakes if you don't maintain critical thinking and code review discipline.
The future belongs to developers who can effectively collaborate with AI while maintaining their core engineering skills. Start experimenting now, but remember: the goal isn't to write less code, it's to solve more problems.
Final Advice: Treat AI as the best junior developer you've ever worked with — extremely capable, incredibly fast, and afflicted with amnesia. It needs guidance, context, and oversight. With that mindset, you'll build a productive partnership that enhances rather than replaces your capabilities.


