Hi everyone! I'm Gentian Elmazi, a software engineer with over ten years of experience and co-founder of Infinitcode.com, and this is the first post in my #buildinpublic series. At our company, we partner with businesses across industries, outsourcing the development of web applications powered by deep tech like AI, NLP, and blockchain.
Our Established Standards
From day one, we’ve enforced maintainable, readable, and scalable code. We adopt layered architectural patterns with dependency injection for modularity and testability. For source control, we follow GitHub Flow:
`feature/*` branch → Pull request → Peer review → Merge into `dev` → Production release
The Bottleneck We Hit
As our client base and project scope grew, our senior engineers spent roughly 40% of their time on code reviews. Constantly context-switching across multiple repositories began to erode our strict standards—and that led to production issues we simply couldn’t ignore.
The Promise (and Pitfalls) of AI
When AI assistants promised 30–40% productivity boosts, we were eager to adopt them. Yet out-of-the-box AI snippets often:
- Violated our company rules
- Introduced inconsistent patterns
- Missed critical edge cases
Piloting Infinitcode.ai’s Code Reviewer
We decided to take a leap of faith and built an internal AI code-reviewer—now in public alpha as Infinitcode.ai—to help streamline our workflow. Within days, we saw:
35% Productivity Gain
Automated PR summaries and inline suggestions let senior engineers focus on high-value reviews, shrinking review cycles from days to hours.

30% Performance Improvement
The AI flagged inefficient loops and unnecessary computations that we'd normally catch only after profiling in production (a representative sketch follows below).

15+ Security Bugs Caught
Critical vulnerabilities surfaced early in pull requests, not weeks after deployment.

120+ Typos & Style Violations Fixed
Consistent code formatting and improved documentation boosted overall readability.
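To make the performance point concrete, here is a hypothetical sketch, assuming TypeScript and made-up `User`/`Order` types rather than any client code, of the kind of inefficient loop the reviewer flags: a nested O(n·m) scan replaced by an O(n+m) `Map` lookup.

```typescript
interface User { id: string; name: string; }
interface Order { userId: string; total: number; }

// Before: scans the whole user list for every order, O(n * m).
function attachNamesSlow(orders: Order[], users: User[]) {
  return orders.map((order) => ({
    ...order,
    userName: users.find((user) => user.id === order.userId)?.name,
  }));
}

// After: build the lookup table once, then each order is O(1), O(n + m) overall.
function attachNamesFast(orders: Order[], users: User[]) {
  const nameById = new Map<string, string>(users.map((user) => [user.id, user.name]));
  return orders.map((order) => ({ ...order, userName: nameById.get(order.userId) }));
}
```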
Example Security Fix
⚠️ Risk: Non-cryptographic `UUIDv4` generation in bulk operations risked collisions under high-concurrency loads.
🔧 Fix: Implemented `crypto.randomUUID()` with batch-safe collision checks, ensuring unique identifiers even when processing thousands of records per second.
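To show what such a fix can look like, here's a minimal sketch assuming Node.js, where `crypto.randomUUID()` is backed by a CSPRNG; the `generateBatchIds` helper is purely illustrative, not our production code.

```typescript
import { randomUUID } from "node:crypto";

// Illustrative only: generate `count` IDs with a batch-safe collision check.
// A Set silently drops duplicates, so the loop regenerates on the
// (astronomically unlikely) collision instead of shipping a duplicate ID.
function generateBatchIds(count: number): string[] {
  const ids = new Set<string>();
  while (ids.size < count) {
    ids.add(randomUUID());
  }
  return [...ids];
}

const batch = generateBatchIds(10_000);
console.log(batch.length); // 10000 unique identifiers
```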
Even with a decade of hands-on coding, I likely would’ve missed this subtle bug until it hit production—so having AI catch it early was truly a game-changer.
The charts below illustrate the 35% productivity and 30% performance improvements we achieved, plus the dozens of security bugs and 120+ typos our AI reviewer caught, demonstrating its measurable impact on our workflow.
Key Features That Helped Us
Multi-Model Integration
DeepSeek's backbone matched our use cases, letting us switch models as needed.

Custom Rulesets
Uploading our linting and security policies ensured AI suggestions aligned perfectly with our coding standards (see the sketch after this list for the kind of policy we mean).

Rapid Support & Iteration
Working directly with the team allowed us to fine-tune the tool within hours, not weeks.
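As a rough illustration of what we mean by a ruleset, here is the kind of ESLint flat config a team might upload; Infinitcode.ai's actual upload format may differ, so treat this as a sketch only.

```typescript
// eslint.config.js — an illustrative linting policy, not Infinitcode.ai's native format.
export default [
  {
    files: ["src/**/*.ts"],
    rules: {
      eqeqeq: ["error", "always"],            // no loose equality
      complexity: ["warn", { max: 10 }],      // flag overly complex functions
      "no-unused-vars": "error",              // dead code is a review smell
      "max-lines-per-function": ["warn", 80], // keep units small enough to review
    },
  },
];
```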
Lessons Learned & Next Steps
AI Empowers, Doesn't Replace
Developers remain central: AI handles routine checks so humans tackle design and architecture.

Iterate on Your Rules
Continuously refine your AI's rulesets and model settings as your codebase evolves.

Measure & Visualize
Track metrics (productivity gains, bug catch rate, review times) to demonstrate ROI and keep stakeholders aligned; a small example of one such metric follows below.
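As a quick, hypothetical illustration of the "measure" step (the `PullRequest` shape and `avgReviewCycleHours` helper below are made up for this sketch), you can compute average review cycle time straight from PR timestamps and compare it before and after adopting an AI reviewer.

```typescript
// Hypothetical metric helper: average hours from a PR being opened to merged.
interface PullRequest {
  openedAt: Date;
  mergedAt: Date;
}

function avgReviewCycleHours(prs: PullRequest[]): number {
  if (prs.length === 0) return 0;
  const totalMs = prs.reduce(
    (sum, pr) => sum + (pr.mergedAt.getTime() - pr.openedAt.getTime()),
    0,
  );
  return totalMs / prs.length / 3_600_000; // milliseconds per hour
}
```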
I’d love to hear how AI is reshaping your code reviews—share your experiences or questions in the comments below!
Also, feel free to try Infinitcode.ai while it's in alpha. Your feedback is greatly appreciated!
Top comments (10)
That's interesting. I'd like to hear more about the processes your AI runs. How can you ensure the AI produces code that is really "necessary"? I've used different AIs, and the code they produced had added complexity; I think that's a problem you and your company will soon face: a lot of code with added complexity that just creates additional overhead.
For example, if you want to generate a simple CRUD project with Claude, it will probably write some extra code; it may do extra work for a simple sum when we only want a sum function.
Anyway, I don't think AI should be introduced into the code anytime soon. It should be responsible for correcting "spelling errors".
Thank you for your feedback. This is my first #buildinpublic thread, which I'll be continuing in the upcoming series, and I'm still learning how to explain the process in written form.
In the section "The Promise (and Pitfalls) of AI" I describe exactly your example: when we used ChatGPT and other LLMs, they would often start to "fantasize", and for a small job they would definitely get out of context.
The difference with our AI tool is that the LLM itself has more context from the files changed in the PR, and its purpose is not to write code but to analyze what you have written, review it, and suggest changes.
So the tool is more of an assistant that helps a senior software engineer do better, faster code reviews and ship to production faster.
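To make that concrete, here is a hypothetical sketch, not our actual pipeline, of how a review bot can collect a PR's changed files as context for the LLM, assuming the GitHub REST API via Octokit:

```typescript
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// Hypothetical: fetch the files changed in a PR so the model reviews
// real diffs instead of guessing at code it has never seen.
async function getPrContext(owner: string, repo: string, pullNumber: number) {
  const { data: files } = await octokit.rest.pulls.listFiles({
    owner,
    repo,
    pull_number: pullNumber,
  });
  return files.map((file) => ({ path: file.filename, patch: file.patch ?? "" }));
}
```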
Another thing the tool does quite well is actually correcting spelling errors; here's an example. Due to this typo, our API endpoint wasn't working as expected, and this critical bug was found by the tool itself.
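The actual typo from our codebase isn't shown here, but as a purely hypothetical stand-in (the Express route below is made up), a one-character slip in a route path is enough to 404 every request:

```typescript
import express from "express";

const app = express();

// Hypothetical: the typo "usres" meant every call to /api/users/:id returned 404.
// app.get("/api/usres/:id", (req, res) => res.json({ id: req.params.id }));

// Fixed route:
app.get("/api/users/:id", (req, res) => res.json({ id: req.params.id }));

app.listen(3000);
```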
Thank you for the useful information you provided. I think your work is admirable. And I agree: building an AI to review code without interfering in code creation and production can be very useful and even good. Another question I have: is this open source?
Of course, happy to hear your feedback as well. Currently it's in alpha and it's free; you can try it at infinitcode.ai/ We are going to open source some very useful pieces by next month, and I will make sure to continue my #buildinpublic series.
Huge respect for building your own AI code reviewer and sharing these real numbers. I’ve seen similar muscle-memory gains using custom rulesets and multi-model AI at scale - makes me wonder where the human/AI handoff should stop!
Thank you for the feedback, it means a lot. As for the handoff, my personal opinion is that AI still needs infrastructure to perform as expected. Most AIs write code but cannot run/test it properly, so I think the next big wave is at the infrastructure level. But it's coming fast :)
Pretty cool seeing someone actually use AI to spot real bugs instead of just talking about it all day.
Kinda makes me wonder: do you think there's a limit to how much AI should handle in reviews before we start missing the human side?
Thank you for the feedback. Well, we are builders, and we have to try things before we talk about them 😂
The tool itself is more of an assistant to a human code reviewer rather than a replacement.
The purpose of the tool is to give the reviewer a clear view of what is happening, as well as to do an initial review of what might have been missed across the key pillars.
In the end, the responsibility to merge the changes always stays with the reviewer.
Feel free to try it; we value feedback a lot at this stage. You're also welcome to join our Discord community.
Also, it will always be free for open source projects.
Just to add here: infinitcode.ai/ will always be free for open source projects and smaller teams <3
We are built on open source, and we must support it.
Also, a few open source tools from us are going public, which I will cover in the next post of the #buildinpublic series.
Good information given