DEV Community

nyaomaru


Building a Portfolio Site with FSD, LangChain, Remix, and AI

Hey folks!

“Lately I feel like gravity got stronger — or maybe I’m just gaining weight 😂”
Anyway, I’m @nyaomaru, a frontend engineer!

In my previous article, I gave a quick overview of Feature-Sliced Design (FSD).

This time, I’ll walk through how I applied FSD to a Remix app and built a portfolio site with an AI-powered terminal UI using LangChain.

👉 Live site: https://portfolio-nyaomaru.vercel.app/

👉 Sample repo: https://github.com/nyaomaru/nyaomaru-portfolio-sample

Let’s dive in!

 

🎬 Requirements

So… what kind of portfolio site should we build?

Of course, you can go with whatever style you like.
For me, I wanted to try a terminal-style self-introduction site.

Why? Because when I was using Claude recently, I realized:

Getting answers in a “terminal-like UI” feels special.
It’s like typing commands and receiving AI responses — kind of cool, right?

But just a static terminal would be boring, so I wanted to let AI handle responses flexibly.

And since it’s a portfolio, I also needed:

  • A profile page to showcase skills and OSS projects
  • An articles page linking to my Zenn posts

So the structure looked like this:

```
pages/
  top/       # Terminal-style intro; AI understands and replies
  profile/   # Showcase skills and OSS
  articles/  # Links to articles
```

 

🖋️ Tech Stack

Choosing the stack is always fun, right? I love it.

This time I went with Remix (officially recommended by FSD, and I just wanted to try it).

For UI I used shadcn/ui (Tailwind-based). Super fast to scaffold components via the CLI, AI-friendly, and honestly… I’m a fan ❤️.

API? Just a simple POST to OpenAI with fetch.
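As a minimal sketch of what that "simple POST with fetch" could look like, here's a helper that builds the request against OpenAI's chat completions endpoint. The model name is a placeholder assumption, not what the actual site uses:

```typescript
// Hypothetical helper: builds the request the route would POST to OpenAI.
// The model name below is a placeholder, not necessarily what the site uses.
export function buildChatRequest(apiKey: string, question: string) {
  return {
    url: 'https://api.openai.com/v1/chat/completions',
    init: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model: 'gpt-4o-mini', // placeholder model name
        messages: [{ role: 'user', content: question }],
      }),
    },
  };
}
```

You'd then pass `url` and `init` straight to `fetch` inside a Remix action.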

To make it more interesting, I wrapped it with LangChain to run a small chain (preprocess → LLM → postprocess).

Here’s the stack:

```
Remix + Vite + shadcn + fetch + LangChain
```

LangChain

I could have just called the API directly, but LangChain made it easier to manage context and chaining logic.

The idea: preprocess profile docs → generate context → feed to model.

Flow:

```
question
  ↓
getProfileDocs
  ↓
keyword match + embedding match
  ↓
context
  ↓
RunnableSequence(prompt + model)
  ↓
answer
```
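The flow above can be sketched with plain functions — retrieval and the LLM step are stubbed here, since the real chain (shown in the implementation) delegates those to LangChain:

```typescript
// A step is just a function from input to output.
type Step<A, B> = (input: A) => B;

// Compose two steps left to right, like a tiny RunnableSequence.
function pipe<A, B, C>(f: Step<A, B>, g: Step<B, C>): Step<A, C> {
  return (input) => g(f(input));
}

// Preprocess: attach retrieved context to the question (retrieval stubbed).
const preprocess: Step<string, { question: string; context: string }> = (
  question
) => ({ question, context: 'nyaomaru is a frontend engineer' });

// The LLM step is stubbed; the real app calls OpenAI via LangChain here.
const answer: Step<{ question: string; context: string }, string> = ({
  question,
  context,
}) => `Answering "${question}" using context: ${context}`;

const chain = pipe(preprocess, answer);
```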

Implementation:

```typescript
import { MemoryVectorStore } from 'langchain/vectorstores/memory';
import { RunnableSequence } from '@langchain/core/runnables';
// getProfileDocs, getEmbeddings, getRelatedProfileChunks, profileQAPrompt,
// and getChatModel are project-local helpers.

export async function makeProfileQAChain(apiKey: string, question: string) {
  const docs = await getProfileDocs();

  // Build an in-memory vector store over the profile docs.
  const vectorStore = await MemoryVectorStore.fromDocuments(
    docs,
    getEmbeddings(apiKey)
  );

  // Embedding-based retrieval.
  const retriever = vectorStore.asRetriever();
  const embeddingMatches = await retriever.invoke(question);

  // Keyword-based retrieval, then merge and deduplicate both result sets.
  const keywordMatches = getRelatedProfileChunks(question, docs);
  const combinedContext = [
    ...embeddingMatches.map((d) => d.pageContent),
    ...keywordMatches,
  ];

  const context = Array.from(new Set(combinedContext)).join('\n');

  // prompt → model, piped as a LangChain RunnableSequence.
  const prompt = profileQAPrompt;
  const model = getChatModel(apiKey);
  const chain = RunnableSequence.from([prompt, model]);

  return chain.invoke({ question, context });
}
```
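The `getRelatedProfileChunks` helper isn't shown in the post; a naive keyword-overlap version might look like this (purely a guess at the idea, not the actual code):

```typescript
// Hypothetical sketch of getRelatedProfileChunks: score each profile chunk
// by how many words from the question it contains, keep the top matches.
interface ProfileDoc {
  pageContent: string;
}

export function getRelatedProfileChunks(
  question: string,
  docs: ProfileDoc[],
  limit = 3
): string[] {
  const words = question.toLowerCase().split(/\W+/).filter(Boolean);
  return docs
    .map((doc) => ({
      text: doc.pageContent,
      // Score = number of question words found in the chunk.
      score: words.filter((w) => doc.pageContent.toLowerCase().includes(w))
        .length,
    }))
    .filter((d) => d.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((d) => d.text);
}
```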

👉 Official docs: https://js.langchain.com/docs/introduction/

 

📃 Design

I also wanted to practice applying FSD:

```
app/        # Remix router
pages/      # top / profile / articles screens
widgets/    # terminal widget, header
features/   # OpenAI calls with LangChain
entities/   # (skipped for this small project)
shared/     # shadcn components, utilities
```

No entities/ this time — forcing it in would overcomplicate things.
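FSD's core dependency rule is that a layer may only import from layers below it. As a toy illustration (not part of the project — slice-level public-API rules are a separate concern), that rule can be expressed in a few lines:

```typescript
// FSD layer order used in this project, top to bottom.
const layers = ['app', 'pages', 'widgets', 'features', 'entities', 'shared'] as const;
type Layer = (typeof layers)[number];

// A module may import only from strictly lower layers.
export function canImport(from: Layer, to: Layer): boolean {
  return layers.indexOf(to) > layers.indexOf(from);
}
```

So `pages` may pull in a `widgets` component, but `shared` must never reach up into `features`.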

And yes: I documented all requirements + design in Markdown, so I could feed them into AI tools later.
This “write before coding” step really helps when collaborating with LLMs.

 

🔨 Prep for Development

Now the fun part: coding.

I used Cursor as my AI partner.
I placed all design notes into .cursor/rules/*.mdc to serve as prompts:

  • architecture.mdc → redefine FSD rules clearly
  • components_guidelines.mdc → enforce shadcn use
  • coding_standards.mdc → general coding rules
  • portfolio_plan.mdc → project-specific design
  • test.mdc → testing guidelines

That way, AI has the same “shared context” as humans do.
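The actual rule files aren't in the post, but to give the flavor, architecture.mdc might contain something like:

```markdown
# Architecture rules (FSD)

- Layers, top to bottom: app → pages → widgets → features → entities → shared
- A layer may import only from layers below it.
- Each slice exposes a public API via its `index.ts`; never deep-import.
- New UI goes through shadcn components in `shared/` first.
```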

 

🧑‍💻 Development

When asking Cursor for implementation, I found it best to request feature-level units like:

  • pages/articles
  • widgets/header
  • features/terminal

rather than “build the whole screen” requests, which tend to drift.

Of course, weird things still happen:

  • AI randomly adds new buttons nobody asked for
  • Function names turn poetic (meltMemoriesIntoOneTruth 😅)
  • Its shadcn CLI commands are outdated, so I copy-paste from the docs myself

Lesson: LLMs rarely nail it in one shot.
Treat them like juniors — give feedback, adjust, and iterate.

Sometimes it’s faster to fix small errors yourself instead of prompting. Balance is key.

 

✨ Finishing Touch

For polishing, I often used VSCode + Copilot.
For small refactors, /jsdoc-style custom commands were quicker than re-prompting Cursor.

Example custom command in .github/prompts/jsdoc.prompt.md:

```markdown
---
mode: edit
---

- Write JSDoc in Japanese
- No examples needed
- Put explanation in body instead of description
```

Then just select code, hit Cmd+i, type /jsdoc, and boom.
Reusable and shareable with the team.

Finally, deploy to Vercel (don’t forget to set your API keys).

Done! 🚀

 

Summary

FSD is an LLM-friendly architecture.

If you let AI code without structure, you’ll quickly end up with unreadable, fragile spaghetti.
That’s what I call AI-generated technical debt:

  • Copying poor patterns at scale
  • Producing code optimized for machines, not humans

By setting up design + prompts beforehand, you can reduce that debt —
and more importantly, make collaboration with AI fun. 😎

 

Bonus

Repo again: https://github.com/nyaomaru/nyaomaru-portfolio-sample

Remember: design → implement → redesign → refactor is a healthy cycle.
There’s no absolute “correct” design, so keep it incremental.

 

Next Up

Next, I’ll introduce one of my lightweight OSS projects: divider ✂️

It’s basically “a smarter split()” for JavaScript/TypeScript —
handling strings, arrays, multiple delimiters, fixed chunks, and more.

Stay tuned for a hands-on walkthrough! 🚀

 

✂️ That’s the gist!

Have you tried combining FSD with AI tools? I’d love to hear your experiences in the comments!
