I am currently building my new portfolio site with extremely heavy use of AI (Claude + Copilot). Probably around 90% of the code, documentation, tests, and even some of the architectural thinking have been accelerated or directly generated by AI tools.
And I’m completely fine with that — the same way I’m completely fine pasting my own ideas, sweat, and experience into a prompt as bullet points and thoroughly revising the result before publishing, as with this very article. As times change, tools change, and people should move with them and add human value to the ever-improving toolset.
But the question still comes up (both from others and occasionally from myself): "Am I cheating?"
## My Honest Answer
No, I’m not cheating.
I use AI aggressively for speed, but I follow a deliberate, systematic process to stay in full control and maintain ownership of the code.
Every generated piece goes through structured review:
- I maintain detailed TODO lists in Markdown files for every major section and component.
- I write tests (Vitest) for critical data factories and components.
- I force myself to understand and be able to explain every function and major decision.
- I regularly push back on AI suggestions that feel off, overly clever, or not aligned with existing codebase patterns.
- I treat AI output as a strong first draft — never the final product.
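To make the testing bullet concrete, here is a hedged sketch of the kind of check I mean for a data factory. The `makeProject` factory and its fields are illustrative stand-ins, not the real sections-api code; in the repo this would live in a `*.test.ts` file using Vitest's `describe`/`it`/`expect`, but plain functions keep the sketch self-contained:

```typescript
// Hypothetical data factory, shaped loosely like a portfolio section entry.
type Project = { slug: string; title: string; tags: string[] };

function makeProject(overrides: Partial<Project> = {}): Project {
  return { slug: 'demo-project', title: 'Demo Project', tags: [], ...overrides };
}

// The invariant the test pins down: every factory output must be render-safe.
function assertValidProject(p: Project): void {
  if (!/^[a-z0-9-]+$/.test(p.slug)) throw new Error(`bad slug: ${p.slug}`);
  if (p.title.trim().length === 0) throw new Error('empty title');
}

assertValidProject(makeProject());
assertValidProject(makeProject({ slug: 'ai-review', tags: ['ai'] }));
```

The point is not the specific fields — it's that generated factories get an executable contract, so a later AI edit can't silently break them.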
This review discipline is something I’m actually proud of. It’s how I make sure the massive speed AI gives me doesn’t come at the cost of depth or long-term maintainability.
## Where Does AI Really Help?
- Boilerplate → It removes boring boilerplate and lets me move extremely fast.
- New learning discoveries → It surfaces patterns and solutions I might not have considered immediately.
- Consistency → It helps me maintain consistency across a large number of components and even documentation.
A quick note about code documentation, though — I personally do not care whether documentation is written with AI or not, as long as it helps and as long as it is well written. I really don't. An article may not belong in this same category, depending on the premise, but as far as documentation is concerned, I am totally fine with using AI. In other words, if you produce great documentation and you used AI for it, I can only say thank you!
## A Concrete Example From My Own Project
When I asked the AI to silence Iconify’s “icon loaded online” warnings, it suggested importing large JSON packages (@iconify-json/solar, @iconify-json/logos, @iconify-json/simple-icons) and writing a pickIcons() helper. It compiled and the warnings disappeared — technically “correct.”
Before — what AI suggested:
```ts
// icon-sets.extra.ts (AI version)
// (import paths for addCollection/IconifyJSON assume the @iconify/react wrapper;
// adjust to whichever Iconify package your project uses)
import { addCollection } from '@iconify/react';
import type { IconifyJSON } from '@iconify/types';
import { icons as logosIcons } from '@iconify-json/logos';
import { icons as solarIcons } from '@iconify-json/solar';

function pickIcons(collection: IconifyJSON, names: string[]): IconifyJSON {
  return {
    ...collection,
    icons: Object.fromEntries(names.map((n) => [n, collection.icons[n]])),
  };
}

addCollection(pickIcons(logosIcons, ['angular-icon', 'vue', 'react', ...]));
addCollection(pickIcons(solarIcons, ['code-bold', 'settings-bold', ...]));
```
This, however, was the wrong solution.
The codebase I used already had a clean and established pattern in icon-sets.minimals.ts: inline SVG strings, no extra package imports, explicit width/height control. Copilot / Claude didn’t see or follow that pattern — it reached for the general Iconify documentation approach instead of reading the existing code first.
I caught it though. I pushed back, and we rewrote it together properly: no heavy imports, only the exact icons we use, explicit dimensions, and clear comments explaining the rules for anyone who adds icons later.
After — the correct pattern already in the codebase:
```ts
// icon-sets.extra.ts (correct version)
// Rule: inline SVG body only. For the logos: collection, always include explicit
// width + height, because register-icons.ts forces the collection default to
// 24×24, which clips non-square paths.
addCollection({
  prefix: 'logos',
  width: 24,
  height: 24,
  icons: {
    'angular-icon': {
      width: 256,
      height: 271,
      body: '<path fill="#E23237" d="M0 134.5L11.5 ..." />...',
    },
    // ... one entry per icon, exactly the ones we use
  },
});
```
The AI's version: ~18 MB of JSON. The correct version: ~75 KB of inline strings. 247× smaller — and it follows the pattern that was already there.
This is exactly why the review process exists.
Here’s the actual difference:
| Package | Raw size | Icons available | Icons we actually use |
|---|---|---|---|
| @iconify-json/solar | 6.3 MB | 7,404 | 105 |
| @iconify-json/logos | 7.3 MB | 2,091 | 10 |
| @iconify-json/simple-icons | 4.6 MB | 3,693 | 3 |
| Total | 18.2 MB | 13,188 | 118 |
The correct approach used 118 inline SVG strings across two registration files, weighing only around 75 KB in total. And that, my friends, is an outstanding 247× difference in payload. And that's not a small number.
Another issue Claude pointed out to me after my additional analysis was a defect hiding in the AI's "lazy" approach: logos icons had a 256×256 viewBox, but register-icons.ts forced all non-carbon collections to a 24×24 default. The pickIcons() approach copied only body and dropped the metadata, so icons like logos:typescript-icon would have silently rendered with the wrong dimensions at any non-standard render size.
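To see why dropping the metadata is a silent bug, here is a simplified sketch of the dimension fallback (per-icon size, then collection default). The real resolution happens inside Iconify; this only mirrors the idea, and the names are illustrative:

```typescript
// Simplified model of Iconify's size resolution: an icon without its own
// width/height inherits the collection defaults.
type IconRecord = { body: string; width?: number; height?: number };
type Collection = {
  width?: number;
  height?: number;
  icons: Record<string, IconRecord>;
};

function resolveSize(col: Collection, name: string): { width: number; height: number } {
  const icon = col.icons[name];
  return {
    width: icon.width ?? col.width ?? 16,
    height: icon.height ?? col.height ?? 16,
  };
}

// register-icons.ts forces non-carbon collections to a 24×24 default:
const forced: Collection = {
  width: 24,
  height: 24,
  icons: {
    // pickIcons() kept only `body`, so the original 256×256 viewBox is gone:
    'typescript-icon': { body: '<path d="..." />' },
  },
};

resolveSize(forced, 'typescript-icon'); // falls back to 24×24, clipping the path
```

With the explicit per-icon `width`/`height` restored (as in the correct version above), the fallback never fires and the paths render at their true aspect ratio.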
## An Audit Proved My Point
Even once the inline SVG pattern was in place, I kept asking questions and had the AI audit the codebase to re-confirm that every icon was registered. Just to be on the safe side.
A second scan revealed the real usage: 105 solar icons (not 4). The first pass had also missed 5 logos icons because they were stored as string values in TypeScript data objects rather than JSX props.
So I went further and asked AI to handle both patterns (JSX attributes + string literals) with a whitelist of known Iconify prefixes. The 5 missing icons were finally added, the tests now pass against the correct 118-icon count, and the site is better for it.
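The two-pattern scan can be sketched roughly like this. A single quoted-string regex actually covers both JSX attributes and string literals; the prefix whitelist is what keeps false positives out. Prefixes and sample code here are illustrative, not the real audit script:

```typescript
// Hedged sketch of the icon audit: find "prefix:name" strings in source,
// whether they appear as JSX props or as values in TypeScript data objects,
// and keep only whitelisted Iconify prefixes.
const KNOWN_PREFIXES = ['solar', 'logos', 'simple-icons'];

function findIconNames(source: string): string[] {
  // Matches quoted strings shaped like prefix:icon-name.
  const pattern = /["'`]([a-z0-9-]+:[a-z0-9-]+)["'`]/g;
  const found = new Set<string>();
  for (const match of source.matchAll(pattern)) {
    const [prefix] = match[1].split(':');
    if (KNOWN_PREFIXES.includes(prefix)) found.add(match[1]);
  }
  return [...found].sort();
}

const sample = `
  <Iconify icon="solar:code-bold" />
  const skill = { icon: 'logos:vue', label: 'Vue' };
`;
findIconNames(sample); // → ['logos:vue', 'solar:code-bold']
```

In the real repo this would walk the source tree and diff the result against the registered 118-icon set inside a Vitest test.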
And I like to believe this is exactly the kind of thing a senior developer catches that AI alone misses. Maybe you don't even need to be a senior developer for that — just one who cares.
## My Rules for This Project
- AI is allowed to draft, but I must review, understand, and be able to defend every important piece.
- Critical parts (like the sections-api data layer) get extra scrutiny and manual refinement.
- I will eventually go through the entire codebase and make sure I can explain every function and design decision without referring back to the AI conversation.
- When AI produces a wrong pattern and I correct it, I write a regression test so the mistake can never happen again.
- This portfolio is my test project: use AI aggressively to ship fast, but then do the deeper work to truly own the result.
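For the regression-test rule, here is a hypothetical sketch of how the Iconify mistake gets pinned down so it cannot return. The helper and file contents are illustrative; in the real repo a Vitest test would read icon-sets.extra.ts from disk:

```typescript
// Hypothetical regression guard: once the heavy @iconify-json imports were
// removed, a test asserts they never come back.
function checkIconSource(source: string): string[] {
  const problems: string[] = [];
  if (/@iconify-json\//.test(source)) {
    problems.push('heavy @iconify-json import reintroduced');
  }
  return problems;
}

// Literals stand in for the actual file contents here.
const goodFile = `addCollection({ prefix: 'logos', width: 24, height: 24, icons: {} });`;
const badFile = `import { icons } from '@iconify-json/solar';`;

checkIconSource(goodFile); // → []
checkIconSource(badFile);  // → ['heavy @iconify-json import reintroduced']
```

A one-line regex is not sophisticated, but that's the point: the cheapest possible test that makes a corrected AI mistake structurally impossible to repeat.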
## Disclosure: Tools Used for This Article
Because I talk a lot in this post about staying in full control and owning the work, I want to be transparent here too.
I wrote every single idea, story, example, and opinion in this article myself. The Iconify story, the audit experience, my rules, and all the feelings around “Am I cheating?” are 100% mine.
I did use AI assistance (Claude + Copilot) while writing and refining this post itself — mainly for light structural suggestions, tightening some paragraphs, and improving flow. I reviewed every change and kept full ownership of the final text.
This is the same approach I use in the actual codebase: AI helps me move faster, but I stay the one who understands, reviews, and takes responsibility for everything that ships.
I believe this is the honest and professional way to do it.
I’m not ashamed of using AI heavily. I’m actually quite proud of the system I built around it to make sure I stay a better engineer, not a worse one.
The goal isn’t to write every line myself. The goal is to ship high-quality work while keeping my skills and understanding sharp.
That’s the balance I’m aiming for.
What about you? Did you recognize yourself anywhere above? Have you built something similar with heavy AI use that you can talk about? If so, please do drop your experience in the comments — it will make me happy, and I do respond.