Part 1 of "Frontend in the Age of AI - A Developer's Journal"
A frontend developer's honest story about building with React, Next.js, and the OpenAI API - and the bigger question it raised along the way.
There is a moment every developer knows well. You are building something, you get stuck on something small - a hover effect, a loading state, a form with validation - and you open a new tab. For me, that used to mean heading straight to the Angular Material component documentation. Back in 2020, when I was first learning Angular and working on a real project, official documentation was my go-to reference for understanding how components were structured, how they behaved, and how to implement them correctly in a real codebase.
Fast forward to today. I am learning React and Next.js, and something feels completely different. Not just the framework but the entire way of learning and building has shifted. And building my own AI Component Generator made that shift impossible to ignore.
What I Built - And Why
The app is simple on the surface: you describe a React component in plain English, hit generate, and get back production-ready TypeScript code using Tailwind CSS, complete with accessibility features.
You can type something like:
"A primary action button with icon and loading state. Include hover effects and accessibility features."
And within seconds, you get a fully structured component with proper TypeScript interfaces, useState hooks, focus handlers, ARIA attributes and all. No copy-pasting from Stack Overflow. No digging through docs. Just describe what you want, and it appears.
I also added example prompts for common components like Buttons, Cards, Modals, and Forms, so beginners can click and experiment without feeling lost about how to describe things.
You can try it yourself here - AI Component Generator
The tech stack was straightforward: React and Next.js for the frontend, the OpenAI API doing the heavy lifting in the backend, and Tailwind CSS for styling the app itself.
But honestly? I didn't build this because I needed a component generator. I built it to learn, to understand how API keys work, how to talk to an LLM from a real application, and to push myself beyond tutorials into building something real. And in doing that, I accidentally gave myself the most important lesson of my frontend journey so far.
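To make "talking to an LLM from a real application" concrete, here is a simplified sketch of the kind of request body an app like this sends from the server side. The model name and prompt strings are placeholders, and `buildChatRequest` is an illustrative helper, not the app's actual code:

```typescript
// Illustrative sketch of the server-side request an app like this sends.
// The API key stays on the server (read from an environment variable),
// never in browser code. Model name and prompts are placeholders.
interface ChatMessage {
  role: "system" | "user";
  content: string;
}

function buildChatRequest(systemPrompt: string, userPrompt: string) {
  return {
    model: "gpt-4o-mini", // placeholder model name
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: userPrompt },
    ] as ChatMessage[],
    temperature: 0.2, // low temperature: more deterministic code output
  };
}

// In a Next.js route handler, this body would be POSTed to
// https://api.openai.com/v1/chat/completions with the header
// Authorization: `Bearer ${process.env.OPENAI_API_KEY}`.
```

Keeping the call server-side is the whole point of the Next.js backend here: the browser never sees the key, only your own API route.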
The Hardest Parts Were Not What I Expected
When I started, I assumed the hardest part would be getting the UI right or managing state in Next.js. It wasn't.
Prompt quality was brutal. Getting the OpenAI API to consistently return clean, working, well-structured React code took a lot of iteration. The difference between a vague system prompt and a precise one is the difference between getting messy JavaScript and getting TypeScript with proper interfaces. I had to learn how to instruct an AI, which is a completely different skill from writing code.
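To show what "precise" means in practice, here is a minimal sketch of the kind of system prompt constraints that made the difference. The exact wording is illustrative, not the prompt my app ships with:

```typescript
// Illustrative only: a system prompt that pins down the output format.
// A vague prompt like "write a React component" invites messy JavaScript;
// spelling out every constraint is what gets consistent TypeScript back.
function buildSystemPrompt(): string {
  return [
    "You are a senior React developer.",
    "Return ONLY a single TypeScript React component. No prose, no markdown fences.",
    "Requirements:",
    "- Define a typed props interface for every prop.",
    "- Style exclusively with Tailwind CSS utility classes.",
    "- Include ARIA attributes and keyboard focus handling.",
    "- Use function components and hooks; no class components.",
  ].join("\n");
}
```

Every one of those bullet points exists because an earlier, looser version of the prompt produced output that violated it.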
API limits were a real constraint. Rate limits, token limits, response timeouts - these aren't things you think about when you're doing tutorials. They become very real when you're making live API calls from a UI that a real person is clicking. I had to think about error states, loading states, and what happens when the API doesn't respond the way you expect.
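Here is a hedged sketch of what handling those failure modes can look like: a wrapper that times out slow generations and maps rate limiting to a distinct error the UI can display. `generateWithTimeout` is an illustrative helper (with an injectable `fetchImpl` so the error paths can be exercised without a live API), not the app's actual code:

```typescript
// Illustrative sketch: wrapping a generation call so the UI can show
// distinct loading, rate-limited, and timeout/error states.
type FetchLike = (url: string, init: RequestInit) => Promise<Response>;

async function generateWithTimeout(
  url: string,
  body: unknown,
  timeoutMs: number,
  fetchImpl: FetchLike = fetch
): Promise<{ ok: true; text: string } | { ok: false; error: string }> {
  // Abort the request if the API hangs past the deadline.
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetchImpl(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
      signal: controller.signal,
    });
    if (res.status === 429) return { ok: false, error: "rate-limited" };
    if (!res.ok) return { ok: false, error: `http-${res.status}` };
    return { ok: true, text: await res.text() };
  } catch {
    // Covers both the abort (timeout) and network failures.
    return { ok: false, error: "timeout-or-network" };
  } finally {
    clearTimeout(timer);
  }
}
```

Returning a discriminated union instead of throwing keeps the React side simple: the component just branches on `ok` to decide which state to render.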
These weren't just technical problems. They were design problems. And solving them taught me more about building real applications than any tutorial had.
The Bigger Realisation - And This One Hit Different
Here's something I keep coming back to when I think about AI coding tools:
We are no longer just writing code. We are verifying it.
When I used to rely on Angular's official docs, I was learning the pattern. I understood every line I wrote because I had to understand it to find it. Now, I can describe a component in a sentence and get 50 lines of working code back in 3 seconds. And here's the uncomfortable truth - if I didn't know React well enough to read that code critically, I would have no idea if it was good, bad, or broken.
The generated code works. In every component I've tested, the output runs correctly. But you still need to know:
- Where to place the generated file in your project structure
- Which file to import it from and how to call it
- Whether the TypeScript interfaces make sense for your use case
- If the Tailwind classes are appropriate or need overriding
- If the accessibility implementation is genuinely correct or just looks correct
The AI doesn't know your project. It doesn't know your design system, your existing component library, or your naming conventions. It gives you a strong starting point, but a developer without foundational knowledge can't tell the difference between a great starting point and a broken one.
This is an emerging skill that I think will matter more and more: the ability to evaluate AI-generated code critically.
From Official Docs to LLMs - How Fast We've Come
I think a lot about 2020 me, spending an afternoon on the Angular documentation trying to understand how to implement a single component with the right lifecycle methods. And I think about the fact that a beginner today can describe that same component in plain English and have it generated instantly.
That's genuinely astonishing. And I find myself fascinated rather than worried, because the fundamentals haven't disappeared; they have just moved upstream. You need to know React not to write every line of it yourself, but to know what should be written.
The docs aren't dead either, by the way. I still read them. But now I use them differently - to verify, to understand why something works, not just to find what to type.
What This Means for You as a Beginner
If you are just starting out with React or Next.js, here is my honest take:
Don't skip the fundamentals to get to the AI tools faster. The fundamentals are what make the AI tools actually useful to you. If you can't read a React component and understand what it's doing, generating one doesn't help you - it just gives you code you don't understand, which is arguably worse than no code.
But do experiment with LLMs as a learning tool. Ask them to explain the code they generate. Ask them to show you alternatives. Ask them why they made a specific choice. Used this way, they accelerate learning instead of replacing it.
And build something. Even something small like my component generator. Building a real project - one where you have to make decisions, hit real constraints, and solve unexpected problems - will teach you things no tutorial can.
This Is Just Part One
This blog is the intro to a series I am writing about the intersection of frontend development and AI tools - from building this component generator, to exploring what different LLMs produce for UI generation, to the evolving question of what it means to be a frontend developer in this moment.
Because the tech is moving fast. And the most useful thing I can do - for myself and for you - is document what it actually feels like to be in the middle of it, learning in real time.
More coming soon. And if you are building something similar or experimenting with LLMs in your own frontend work - I'd love to hear about it.
Built with React, Next.js, OpenAI API, and TypeScript. Written by a frontend developer who is genuinely excited about where all of this is going.
