It’s been more than four years since I last posted here. I never planned to stop
writing about Elixir or making tutorials for so long, but life got tough, and a
mix of challenges wore me out. I burned out and started to question everything.
Even so, there were good moments in those years, though I only noticed them after
hitting a low point and reaching out for help.
I want to get back to writing on this blog because I really enjoyed it.
There’s a lot more Elixir content out there now, and the Elixir community—and tech
in general—has changed a lot in the past four years. The biggest shift has been the
rise of AI. At first, I saw AI as a threat that could take my job. But after using it,
reading about it, and learning by doing, my perspective changed. Now, I see AI
as a tool that boosts my productivity and creativity.
With this new outlook, I wanted to share my recent experience using AI in my
daily work and personal projects. After four years away from the blog, I barely remembered
it was built with Next.js and had many outdated dependencies. My first job was
to migrate it to a Phoenix project. I’ve done migrations like this before, but
I didn’t want to spend too much time on it. Since it was the holiday season and
I was on vacation, I also wanted to focus on healthy activities and spending time
with family and friends. That’s why using AI seemed like the perfect fit.
Desired end state
To explain my goal, let me describe a similar migration I had already done for TalentoIT’s blog.
That project is a simple Phoenix LiveView app that uses NimblePublisher to pull markdown
files from a folder and build a list of post structs at compile time. There are
handy functions for accessing posts, and you only need a couple of live views to list
and display them. For my own site, I wanted the Phoenix version to look and feel like the current one,
but with the latest version of Tailwind. This meant moving all the Next.js components
to Phoenix components and switching from Tailwind 3 to 4. Once I knew what I wanted,
all that was left was to open the terminal and get started.
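To make that end state concrete, here’s a minimal sketch of the NimblePublisher setup described above. The module name matches TalentoIT’s blog, but the paths, options, and accessor functions are illustrative assumptions rather than the actual code:

```elixir
defmodule TalentoItBlog do
  # NimblePublisher reads the markdown files and builds one struct per post
  # at compile time by calling TalentoItBlog.Post.build/3 for each file.
  use NimblePublisher,
    build: TalentoItBlog.Post,
    from: "posts/**/*.md",
    as: :posts,
    highlighters: []

  # @posts is populated at compile time, so these lookups are free at runtime.
  @posts Enum.sort_by(@posts, & &1.date, {:desc, Date})

  def all_posts, do: @posts

  def get_post_by_slug!(slug) do
    Enum.find(@posts, &(&1.slug == slug)) ||
      raise "post with slug=#{slug} not found"
  end
end
```

With functions like these, the live views only need to call `all_posts/0` to render the listing and `get_post_by_slug!/1` to render a single post.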
Preparing the new project
The first thing I did was to create a new, simple Phoenix application:
```bash
mix phx.new bigardone_dev --module BigardoneDev --no-dashboard --no-ecto --no-gettext --no-mailer
```
This generated the project scaffold and installed all the dependencies. My agentic
coding tool of choice is Claude Code, and to give it (or any other agentic tool)
valuable information about the project and its dependencies, I also added usage_rules
to the list of dependencies, as shown below. This library gathers usage rules from Elixir
itself and from the dependencies installed in your project into one or more files that
you can later reference from your coding agent of choice, giving it some basic guidance
on how to do specific tasks. As of today, Phoenix, Ecto, and many other popular libraries
support it, so I’ve been using it lately.
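In practice that just means one extra entry in the deps list. A minimal sketch, assuming the library is only needed at development time; check Hex for the current version before copying this:

```elixir
# mix.exs — the version requirement here is illustrative.
defp deps do
  [
    # ...the dependencies generated by mix phx.new...
    {:usage_rules, "~> 0.1", only: [:dev]}
  ]
end
```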
Before generating the rules, though, I started Claude Code and ran the /init command
to generate a basic CLAUDE.md file for the project, giving Claude some initial context.
With the CLAUDE file in place, I ran the following mix task to generate the usage rules:
```bash
mix usage_rules.sync CLAUDE.md --all --link-to-folder docs --link-style at
```
This command does two things:
- It creates a bunch of Markdown files in the /docs folder with specific guidelines on using all supported libraries and basic Elixir features.
- It updates CLAUDE.md by adding a new section that references all these files, so it knows where to find the information when needed.
With the setup complete, I was now ready to have Claude help me with the first task: a little cleaning.
A little vibe coding
Instead of following the agentic methodology I’ve been using for the last few months,
which I’ll describe in a bit, I decided to have Claude one-shot a simple task and see
how it went. The task was to remove all DaisyUI references from the project.
Since I wanted exactly the same style as the original version, I didn’t need it,
and Phoenix actually ships with a lot of DaisyUI-related code out of the box.
I jumped into Claude Code and typed the following prompt:
```
Remove all DaisyUI-related code from the project
```
Claude started doing its thing, and after a short time, it responded with the success message:
```
⏺ I've successfully removed all DaisyUI-related code from the project. Here's what was done:
Files removed:
- assets/vendor/daisyui.js
- assets/vendor/daisyui-theme.js
Files modified:
1. assets/css/app.css - Removed daisyUI plugin imports and theme definitions
2. lib/bigardone_dev_web/components/core_components.ex - Replaced all daisyUI classes with Tailwind CSS equivalents:
- toast, alert, alert-info, alert-error → standard Tailwind positioning and color classes
- btn, btn-primary, btn-soft → Tailwind button styles
- checkbox, select, textarea, input, and their error variants → Tailwind form styles
- table, table-zebra → Tailwind table styles
- list, list-row, list-col-grow → Tailwind list styles
- Updated moduledoc to remove daisyUI reference
3. lib/bigardone_dev_web/components/layouts.ex - Replaced navbar, btn-ghost, btn-primary with Tailwind classes and removed the theme_toggle component
4. lib/bigardone_dev_web/controllers/page_html/home.html.heex - Replaced badge, rounded-box, bg-base-*, text-base-content, fill-base-content classes with Tailwind equivalents and removed <Layouts.theme_toggle />
5. lib/bigardone_dev_web/components/layouts/root.html.heex - Removed the theme toggle JavaScript
6. CLAUDE.md - Removed daisyUI references from architecture overview and asset pipeline sections
All tests pass, and the build compiles successfully.
```
The output looked accurate. To double-check, I started the Phoenix server, opened
the browser, and looked for any errors. Everything worked fine. Great!
Here’s the commit with all the changes. Next, there was something else I wanted
to check that’s important for any LLM-related task: the current context usage.
So I ran /context to see how Claude’s context window was doing after the task:
```
> /context
⎿
Context Usage
⛁ ⛀ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ claude-opus-4-5-20251101 · 174k/200k tokens (87%)
⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛀ ⛁ ⛁
⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ System prompt: 3.1k tokens (1.6%)
⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ System tools: 16.0k tokens (8.0%)
⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ MCP tools: 14.4k tokens (7.2%)
⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ Custom agents: 481 tokens (0.2%)
⛁ ⛁ ⛁ ⛁ ⛁ ⛀ ⛶ ⛶ ⛶ ⛶ ⛁ Memory files: 7.8k tokens (3.9%)
⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛝ ⛝ ⛝ ⛁ Messages: 87.3k tokens (43.7%)
⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛶ Free space: 26k (12.9%)
⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ Autocompact buffer: 45.0k tokens (22.5%)
```
This check showed it was already at 87%, so continuing with this session or using
the same method to migrate the whole app wasn’t a good idea.
A little context engineering
Nacho, the CTO at Timeline, the company I recently joined, has been guiding
the team to embrace AI and agentic coding to boost our creativity and productivity.
As he once pointed out, like it or not, this is the reality we live in now,
and if we want to be full-stack engineers, working efficiently with AI is part of the equation,
just like having backend, frontend, or database skills. In one of his talks, he shared a great video with our team
about advanced context engineering for coding agents. It really stuck with me and
changed how I think about using coding agents. The main point is that you need to
pay close attention to the context you give the LLM and how you manage its context
window. If there’s not much space left, the LLM is more likely to make mistakes or
hallucinate. The video also explains a method for better managing context by using
sub-agents and purposefully compacting it. Instead of using something like /compact,
where you don’t have full control over what the agent keeps or removes, the speakers
suggest a different approach with three steps:
- Research: In this step, you ask the agent to gather information to achieve a given task, understand the problem, and explain how the system currently works. The output is a Markdown file in a specific format that contains all key findings, code references, open questions the LLM might have, and related details. Once the file is created, you have to review it thoroughly and iterate on it, making any necessary changes to ensure it is as accurate as possible.
- Planning: Once the research step is finished and the document is good enough, you start the planning phase. In this step, the LLM reads the research document and generates a step-by-step implementation plan for the changes to be made. Again, the output is another Markdown file containing the detailed implementation plan, which you have to review carefully and make any necessary changes.
- Implementation: Once the implementation plan is as good as you want it to be, you give it to the LLM to execute, reviewing the changes as they are implemented and being ready for any issues it may encounter.
For each of the steps, there are some specific commands and agents that you can find
in HumanLayer’s repository. To maximize context window efficiency, you run each
command in a fresh session, and each command takes the output of the previous one
as its starting context. This gives you far more control over what information
the agent uses than the /compact command does.
What I like most about this approach is that it feels like working with another teammate.
This teammate helps me gather information, spot key points I might have missed,
and come up with alternative solutions to the problem. Also, because you need to
be careful with the documents from the first two steps, it’s easier to understand
both the problem and the solution before starting the implementation.
The method is flexible, too. Even if you don’t want the agent to do the coding,
you can still use the research and planning documents to share with your team in
a pull request and work on them together.
The research
I’ve been using this methodology for a few months, so I had all the commands and
agents in my ~/.claude folder. I jumped into the terminal, started Claude Code,
and typed the following:
```
/research_codebase I have the following Next.js blog in ~/projects/bigardone/bigardone.dev,
and another Phoenix blog using static content in ~/projects/bigardone/talento_it/talento_it_blog.
Create a research document to make this project behave like talento_it_blog, but with the layout, style, and
contents from the Next.js blog.
```
And the magic began:
```
⏺ I'll research all three codebases to understand how to combine talento_it_blog's static content approach with the Next.js blog's layout and content. Let me spawn parallel agents to explore each codebase.
⏺ 3 Explore agents finished (ctrl+o to expand)
├─ Explore current Phoenix project · 0 tool uses
│ ⎿ Done
├─ Explore talento_it_blog static content · 0 tool uses
│ ⎿ Done
└─ Explore Next.js blog layout/content · 0 tool uses
⎿ Done
...
...
... (Shortened for brevity)
```
After a few minutes, it finished the research and generated the document:
```
⏺ Research Complete
I've created a comprehensive research document at:
thoughts/shared/research/2025-12-19_11-24-29_static-blog-migration.md
Key Findings
talento_it_blog Architecture (to replicate):
- Uses NimblePublisher for compile-time static content from markdown files
- Posts stored in /posts/ with YYYY-MM-DD-slug.md naming
- Post struct with reading time calculation, categories, tags
- LiveViews for home listing and post detail
- Hot reload for markdown files in development
...
...
... (Shortened for brevity)
```
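The “Post struct with reading time calculation” finding maps to the build module that NimblePublisher calls for each markdown file. Here’s a minimal sketch of what such a module can look like; the field names, filename parsing, and the 200-words-per-minute figure are assumptions, not the actual code:

```elixir
defmodule BigardoneDev.Blog.Post do
  @enforce_keys [:slug, :title, :body, :date]
  defstruct [:slug, :title, :body, :date, :reading_time, tags: []]

  # NimblePublisher calls build/3 at compile time with the filename,
  # the parsed frontmatter attributes, and the rendered body.
  def build(filename, attrs, body) do
    # Filenames follow the YYYY-MM-DD-slug.md convention.
    [year, month, day, slug] =
      filename |> Path.basename(".md") |> String.split("-", parts: 4)

    date = Date.from_iso8601!("#{year}-#{month}-#{day}")

    struct!(__MODULE__,
      [slug: slug, date: date, body: body, reading_time: reading_time(body)] ++
        Map.to_list(attrs)
    )
  end

  # Rough estimate: word count divided by ~200 words per minute, minimum 1.
  defp reading_time(body) do
    words = body |> String.split(~r/\s+/, trim: true) |> length()
    max(div(words, 200), 1)
  end
end
```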
I opened the research file and read through it carefully. It looked good. It covered the
three projects and their key findings, like current technologies, dependencies,
and code structure. It also listed how many posts I had, their frontmatter format,
and the current UI layout, including colors and fonts. There was even a short
implementation plan and some open questions to tackle before moving forward.
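One finding I double-checked was the hot reload for markdown files. NimblePublisher already tracks each file as an @external_resource, so the module recompiles when a post changes; to make the browser refresh too, the posts folder typically goes into the live reload patterns in config/dev.exs, roughly like this (a sketch, with the patterns as assumptions):

```elixir
# config/dev.exs
config :bigardone_dev, BigardoneDevWeb.Endpoint,
  live_reload: [
    patterns: [
      ~r"posts/.*\.md$",
      ~r"lib/bigardone_dev_web/(live|components)/.*\.(ex|heex)$"
    ]
  ]
```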
When I checked the context, there was still plenty of space left, so I asked it
to make a few changes here and there.
```
/context
⎿
Context Usage
⛁ ⛀ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ claude-opus-4-5-20251101 · 91k/200k tokens (46%)
⛀ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁
⛁ ⛁ ⛁ ⛁ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛁ System prompt: 3.1k tokens (1.6%)
⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛁ System tools: 16.2k tokens (8.1%)
⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛁ Custom agents: 481 tokens (0.2%)
⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛁ Memory files: 7.8k tokens (3.9%)
⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛁ Messages: 18.5k tokens (9.3%)
⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛝ ⛝ ⛝ ⛶ Free space: 109k (54.4%)
⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ Autocompact buffer: 45.0k tokens (22.5%)
⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝
```
After making a few small changes, I was happy with the document and felt it was
good enough for the first version of the migration.
The implementation plan
With the research document ready, I started a new session in Claude Code and ran the following command:
```
/create_implementation_plan for thoughts/shared/research/2025-12-19_11-24-29_static-blog-migration.md
```
This is the most important part. By starting a new session with a fresh context
window and giving it the document we want to use, instead of using the previous
session’s cluttered context, we get a more accurate implementation plan.
After I hit enter, Claude Code got to work:
```
⏺ I'll read the research document you've referenced to understand what we're implementing.
⏺ Read(thoughts/shared/research/2025-12-19_11-24-29_static-blog-migration.md)
⎿ Read 460 lines
⏺ This is a comprehensive research document. Let me now spawn parallel research tasks to gather more context from the referenced codebases before creating the implementation plan.
⏺ Read(~/projects/bigardone/talento_it/talento_it_blog/lib/talento_it_blog.ex)
⎿ Read 49 lines
⏺ Read(~/projects/bigardone/talento_it/talento_it_blog/lib/talento_it_blog/post.ex)
⎿ Read 68 lines
⏺ Read(~/projects/bigardone/talento_it/talento_it_blog/lib/talento_it_blog_web/live/home_live.ex)
⎿ Read 74 lines
...
...
... (Shortened for brevity)
```
It started by reading a lot of files from the two existing projects and then asked
me about the open questions in the research document:
```
⏺ Based on my research of all three codebases, I have a comprehensive understanding of what needs to be done. Before I write the detailed implementation plan, I have a few questions to clarify:
Questions that my research couldn't answer:
1. Frontmatter format: The Next.js blog uses YAML frontmatter, while talento_it_blog uses Elixir map syntax. Should I:
- Keep YAML and add yaml_elixir dependency (easier migration, less conversion work)
- Convert all 100+ posts to Elixir map format (more consistent with Phoenix ecosystem)
2. Scope for initial version: The research document lists several optional features. For the initial implementation, should I include:
- Related posts section on post detail page?
- Tag filtering on the blog listing page?
- RSS feed?
3. URL structure: The Next.js blog uses /blog/YYYY/MM/DD/slug. Should I keep this exact structure for SEO continuity, or simplify to /blog/:slug?
⏺ User answered Claude's questions:
⎿ · Which frontmatter format should we use for the markdown posts? → YAML (Recommended)
· Which features should be included in the initial implementation? → Core only (Recommended)
· What URL structure should we use for blog posts? → /blog/YYYY/MM/DD/slug (Recommended)
```
To keep things simple, I went with the recommended options. I wanted this to be
a quick migration with just the basics, and I could always add more features later.
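For reference, the URL structure I kept maps naturally to Phoenix router params. Here’s a sketch of what the routes can look like; the LiveView module names are assumptions:

```elixir
# lib/bigardone_dev_web/router.ex
scope "/", BigardoneDevWeb do
  pipe_through :browser

  live "/blog", BlogLive.Index, :index
  live "/blog/:year/:month/:day/:slug", BlogLive.Show, :show
end
```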
After a few minutes, the implementation plan was ready to review. When I read it,
I decided to tweak it a bit and add automated browser testing with the Playwright MCP
server instead of the manual testing suggested in the plan. So, I started a new session and ran:
```
Continue with the implementation plan of thoughts/shared/plans/static-blog-migration.md, and update it to use the playwright mcp for visual verifications
```
This made Claude Code read the current implementation plan and make the necessary
updates to the success criteria sections:
```
⏺ I'll read the existing plan file to understand what needs to be updated.
⏺ Read(thoughts/shared/plans/static-blog-migration.md)
⎿ Read 996 lines
⏺ Now I understand the plan. I need to update the success criteria sections to use the Playwright MCP for visual verifications instead of manual checks. Let me update the plan to incorporate browser automation for visual testing.
⏺ Update(thoughts/shared/plans/static-blog-migration.md)
⎿ Added 4 lines, removed 2 lines
117 - [ ] Project compiles: `mix compile`
118 - [ ] Server starts: `mix phx.server`
119
120 -#### Manual Verification:
121 -- [ ] Page renders with Montserrat font visible in browser dev tools
120 +#### Visual Verification (Playwright MCP):
121 +- [ ] Navigate to `http://localhost:4000` using `browser_navigate`
122 +- [ ] Take snapshot with `browser_snapshot` to verify page loads
123 +- [ ] Use `browser_evaluate` to check font-family: `() => getComputedStyle(document.body).fontFamily` should include "Montserrat"
124
125 ---
126
... (Shortened for brevity)
```
Now the plan was ready to be put into action.
The implementation
After reviewing the first two documents, it was time to let Claude Code start the migration.
Ideally, I could have simply run /implement_plan thoughts/shared/plans/static-blog-migration.md
and had everything migrate perfectly. However, the plan consisted of six different phases
and wasn’t perfect, so the coding agent would probably run into issues here and there
and the context window would keep growing. Instead, I followed a simple recommendation
Nacho gave us: run each phase in a separate session, so each one starts with a fresh context window. So I ran:
```
/implement_plan Phase 1 of thoughts/shared/plans/static-blog-migration.md
```
This approach meant I had to do a bit more work, but that was okay. I ran the
same command for each phase and let it run while I trained, cooked, or went for a walk.
From time to time, I checked my laptop to review each phase, ask for changes if needed,
and commit the code. When the last phase was done, the blog was fully migrated, worked like a charm, and looked
just like the old one. Awesome!
Conclusion
Migrating this blog had been on my to-do list for a long time, but because of my personal situation,
I didn’t have the time or motivation to do it. The same goes for some other
personal projects I’ve put off. But using AI as a coding partner
has brought back my motivation, especially with the approach I described here,
where I guide the process to get what I want. If you’re in a similar spot, short on time,
energy, or motivation for a side project, and you want to code but aren’t excited about using AI,
I suggest giving it a try. Let it help you enjoy coding again. What small project could you bring
back with an AI teammate? Take the first step and see how AI can spark your passion for coding.
Happy (agentic) coding!