Mellina Yonashiro
My AI Workflow as a Frontend Engineer

I decided to compile and share all the ways I currently use AI for software development, because for some time it was unclear to me how people were actually using it. I wouldn’t say there is one correct or most effective way to do it, just as I believe there is no single way to do software engineering (there’s always a trade-off conversation when discussing engineering strategies). But feel free to take some ideas from here.

So, in this article, I will write about the ways I use AI chatbots, agents, and other tools for my work and personal/hobby coding projects. For some more context on what I currently use: Claude Code in the command line, the Claude chatbot (with some integrations, such as GitHub and Notion), and some Gemini models in Roo Code (a VS Code extension). I will focus on how I use them for engineering work, not so much for my personal life. I will also not discuss how companies may be implementing AI in their workflows.


Investigation & Debugging

  • Reading and parsing error messages from logs
  • Debugging frontend runtime errors via console log

It was common practice for me to (try to) read the error message to identify where and why it occurred. If I didn’t understand, I’d paste the text in a search engine and dive deep into StackOverflow. Now, I paste the logs into a chat to speed up the research and solution process. This is especially valuable when the logs are long and hard for a human to read.

The frontend debugging workflow works quite differently. Components break at runtime - for example, a piece of state is not passed down correctly. This is hard to catch during development, because the issue only appears when interacting in the browser, when the code is already compiled and running. When this happens, the agent acts more like a coding partner: it asks me to add (or adds itself) console.log() calls in strategic places in the code, then asks me what the dev tools console prints. With that extra information, it eventually understands where the problem is. This proceeds iteratively until we reach a solution.
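To make the loop concrete, here is a minimal, hypothetical sketch (not tied to any real framework or codebase) of the kind of strategically placed log an agent might ask for: the child logs the props it actually receives, which exposes a prop-shape mismatch that only shows up at runtime.

```javascript
// Hypothetical example: a child renders a user's name, but the parent
// passes the wrong prop shape. A console.log at the component boundary
// reveals the mismatch.

function UserBadge(props) {
  // Log what actually arrives, not what we assume arrives.
  console.log('UserBadge props:', JSON.stringify(props));
  return `Hello, ${props.user?.name ?? 'unknown'}!`;
}

// Parent passes { username } while the child expects { user: { name } }.
const broken = UserBadge({ username: 'Mellina' });
console.log(broken); // the log above shows why this falls back to 'unknown'

// Once the agent spots the mismatch, the fix is to pass the expected shape.
const fixed = UserBadge({ user: { name: 'Mellina' } });
console.log(fixed);
```

Pasting the logged output back into the chat is exactly the iterative step described above: each round narrows down where the data diverges from expectations.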


Codebase Understanding & Quality

  • Looking for inconsistencies in the codebase
  • Writing unit tests and mocks
  • Comparing changes with the main branch to catch regressions

Inconsistencies in a codebase are easy to miss and hard to prioritize. An agentic coding tool with access to the project makes them easy to find and fix. Most of the time, I don’t want it to act right away, as the codebase is huge - it can help with creating a plan, though. I have it read the project code, assess opportunities to improve, and produce a planning document. It’s important to keep patterns, naming conventions, and architecture consistent, for both bots and humans.

Writing unit tests is a must, but a time-consuming task - especially writing mocks. In the frontend, we also have a hard time finding the right selector (a reference to a DOM element), because design systems create complex HTML structures when rendering components. AI works wonderfully for both. It can even write the whole test by scanning the function or component.
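As an illustration of the mock-writing part, here is a hypothetical, framework-free example (the function and mock names are my own, not from any real project): the function under test takes its fetch dependency as a parameter, so a test can inject a mock instead of hitting the network.

```javascript
// Hypothetical function under test: fetches a user and returns the name.
// Taking `fetchFn` as a parameter makes it trivially mockable.
async function fetchUserName(userId, fetchFn) {
  const response = await fetchFn(`/api/users/${userId}`);
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  const body = await response.json();
  return body.name;
}

// Mock that records calls and returns a canned payload - the tedious
// boilerplate AI is good at generating.
function makeFetchMock(payload) {
  const calls = [];
  const mock = async (url) => {
    calls.push(url);
    return { ok: true, status: 200, json: async () => payload };
  };
  mock.calls = calls;
  return mock;
}

// A minimal "test" without a framework.
(async () => {
  const fetchMock = makeFetchMock({ name: 'Ada' });
  const name = await fetchUserName(42, fetchMock);
  console.log(name === 'Ada' && fetchMock.calls[0] === '/api/users/42'
    ? 'PASS' : 'FAIL');
})();
```

In a real suite this would live in Jest or Vitest, but the shape is the same: the slow part is writing the mock's structure, and that is exactly what the agent drafts from reading the function.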

In addition to assisting with unit tests, I can compare my changes with the main branch and have it check whether something I wrote could affect another part of the codebase (cause a regression), or whether it’s inconsistent in code style or patterns.


Active Development

  • Performing migrations and package upgrades
  • Dealing with breaking changes
  • Developing new features based on the current architecture
  • Jira ticket descriptions to pull/merge requests

Usually, projects have many dependencies, and these dependencies release updates very often. Dependabot (in GitHub), which has existed for a while now, already helped with minor package updates. Now, agentic coding can help deal with major upgrades and breaking changes. It can handle tedious tasks such as renaming parameters or replacing logic that has become obsolete.
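A sketch of the "renaming parameters" case, with an entirely made-up library: imagine a dependency's `formatDate()` renamed its option from `fmt` to `pattern` in a major version. The tedious work an agent can automate is rewriting every call site; a temporary compatibility shim like this keeps old call sites working while the migration lands.

```javascript
// Stand-in for the upgraded library API (the hypothetical v2 expects `pattern`).
function formatDateV2(date, { pattern }) {
  if (pattern === undefined) throw new Error('missing required option: pattern');
  return pattern.replace('YYYY', String(date.getFullYear()));
}

// Shim that translates the old v1 option name to the new one,
// so call sites can be migrated incrementally.
function formatDateCompat(date, options) {
  const { fmt, ...rest } = options;
  return formatDateV2(date, fmt !== undefined ? { ...rest, pattern: fmt } : rest);
}

// Old-style call site still works through the shim.
console.log(formatDateCompat(new Date(2024, 0, 1), { fmt: 'YYYY' })); // prints "2024"
```

In practice I'd have the agent rewrite the call sites directly and delete the shim once the codebase compiles and the tests pass - the shim is just a way to split a large breaking change into reviewable steps.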

Moreover, one of the most powerful uses is creating new product features. It can take a feature description (or Jira ticket description) and implement it. Of course, agentic coding works better if your codebase is prepared for automation - if you’ve written proper documentation and context, and if it uses consistent patterns, architecture, and coding style. It also helps if your instructions are clear (specifying which files to change, with example files).


Documentation & Communication

  • Writing codebase documentation
  • Creating Jira tickets

AI is great at reading large chunks of data and summarizing. With that, I have it read the entire codebase and create documentation, separated by subject, such as security, observability, and accessibility. These documents tell developers which practices and patterns we currently use, and where configurations live. There are also links to external resources where necessary, which the agent itself can access.

In a different direction, I use it to write comprehensive Jira tickets and PR descriptions for others to read. I usually have no problem writing them, but there are days when I’m just mentally tired. When this happens, I write some notes in draft format and ask AI to turn them into a readable piece of text for humans (and machines) to understand.


Planning & Review

  • Code reviews (GitHub-integrated)
  • Brainstorming and looking for improvements

Having AI act as an additional reviewer, integrated into GitHub, for example, doesn't replace human review, but it catches things: inconsistencies, edge cases, logic that's technically correct but fragile, and unused parameters. Sometimes it is not correct, but most of the time it at least gives you some food for thought. What’s interesting in this interaction is that sometimes one model writes the feature code and another reviews it (for example, Claude writes the code, and Gemini reviews it). I’m looking forward to a future where I read some epic interaction between robots about which approach is best.

Brainstorming sessions are looser but useful. Sometimes you don't need a solution; you need to learn the pros and cons of an option. For example, would it make sense to change from Yarn to Bun? What are the trade-offs in this case? Or, how could a certain for loop be more performant? These are real questions I have already asked (with more details, of course).
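The loop-performance question usually comes back as a trade-off explanation. A typical (illustrative, not from my actual codebase) answer looks like this: a loop that checks membership with `Array.includes` is O(n·m), while converting the lookup list to a `Set` once makes each check O(1).

```javascript
const blocked = ['spam', 'bot', 'test'];
const users = ['ada', 'bot', 'grace', 'spam', 'linus'];

// O(n*m): includes() scans `blocked` for every user.
const slow = users.filter((u) => !blocked.includes(u));

// O(n + m): build the Set once, then each lookup is constant time.
const blockedSet = new Set(blocked);
const fast = users.filter((u) => !blockedSet.has(u));

console.log(slow); // [ 'ada', 'grace', 'linus' ]
console.log(fast); // same result, better asymptotics on large inputs
```

The trade-off part of the conversation matters as much as the code: for tiny arrays the `Set` buys nothing and adds a line of indirection, which is exactly the kind of nuance I want from a brainstorming session rather than a blind rewrite.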


Experimentation

  • Me as a human learning new stuff
  • Vibe coding (hobby projects)
  • Agent development

People are using AI for many different things, and I am also doing some experimentation myself. I have been using it to keep me up to date on the latest technologies - frontend is a field that is constantly transforming, and it is historically hard to keep up. While good practices stay mostly the same, there are new tools everywhere - state management libraries, observability systems, frameworks that challenge the status quo.

Also, there are some hobby projects I want to implement that are outside my field of expertise. For those, I use the (in)famous vibe coding to build projects from scratch. I’m pretty impressed by the outcome: using Python, we developed a web scraper customized to my own needs.

Lastly, I’m in the stage where I’m experimenting with implementing an agent myself. I’m still unsure in which direction I will take this, but I’m planning on something basic at first, just to understand the implementation process: token usage, integration, and how to connect to existing MCPs.

What I've Learned

Information about how to use AI is all over the place, but overall, what I’ve understood is that written communication is very important when prompting. I’ve always liked to write. I’ve kept journals and written stories - on paper, on my computer, or in a notes app on my phone while on the tram. I don’t consider myself a writer, but I like doing it as a hobby. Because of that, I find it easier to write prompts and direct AI toward the output I want. I can be direct when I want something specific, and I start a conversation with open questions when I want to do creative tasks. If you are good at communication or have some background knowledge of how these models work, you will most likely get better outcomes.
