Live demo: arunkushwaha.xyz
Source code: Arun-One_page_Portfolio
Most developer portfolios look the same. A photo, a list of projects, a contact form, and a short bio. They work. They also vanish from memory about five seconds later.
I wanted mine to feel more like a product than a resume. Something a visitor could explore, not just scroll past. So I built ARUN_OS v1.0, a single-page React portfolio with a cyberpunk look, an interactive browser terminal, a voice-enabled chatbot, a canvas-based animated avatar, and a loading screen tied to real asset loading.
This is a look at the interesting parts of the build and the decisions behind them. 🙂
Why I did not want a template
My resume already covers production pipelines and backend performance work.
A static portfolio does not show any of that well.
So I wanted the portfolio itself to prove I can build interactive software, not only describe it. That pushed me toward a frontend that behaves more like a small product, with loading states, asset management, overlays, audio playback, frame-based animation, and an intent-resolution layer, all running client-side in the browser.
Basically, I wanted a portfolio that does not feel like homework. 😅
The stack
Built with React, Vite, Tailwind, Framer Motion, Three.js, Web Speech API, and Canvas API.
No backend. No CMS. No LLM API calls. Just a frontend trying to feel alive.
The terminal
There is a floating terminal button in the bottom-left corner. Click it, and you get a CLI that behaves like a shell session. You can type commands like whoami, ls, projects, search redis, open github, or cd skills, and it responds with section jumps, links, or structured output.
The whole thing runs from a command registry. Each command returns an array of typed output lines:
```javascript
const registry = {
  whoami: {
    desc: 'Show full profile dossier',
    run: () => [
      { type: 'header', content: `// SUBJECT: ${profile.name.toUpperCase()}` },
      { type: 'info', content: `ROLE: ${profile.role}` },
      { type: 'info', content: `LOCA: ${profile.location}` },
      { type: 'response', content: `BIOD: ${profile.summary}` },
      { type: 'link', content: `CONT: ${profile.email}` },
    ],
  },
  search: {
    desc: 'Query portfolio database',
    run: (args) => {
      const query = args.join(' ').toLowerCase();
      const results = [
        ...projects.filter(p =>
          p.title.toLowerCase().includes(query) ||
          p.description.toLowerCase().includes(query)
        ).map(p => `Project: ${p.title}`),
        ...Object.values(skills).flatMap(s => s.items)
          .filter(sk => sk.toLowerCase().includes(query))
          .map(sk => `Skill: ${sk}`),
      ].slice(0, 5);
      return results.length
        ? results.map(r => ({ type: 'highlight', content: `MATCH: ${r}` }))
        : [{ type: 'response', content: 'NO_MATCHES_FOUND' }];
    },
  },
  // ... 15+ more commands
};
```
It supports tab completion, command history with arrow keys, section aliases like exp for experience, and a matrix command that toggles a Matrix rain overlay. There is even a Konami code easter egg that triggers a glitch chaos animation. Because of course there is. 😄
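Tab completion falls out of the registry almost for free: complete against its keys. A minimal sketch, where the command list stands in for `Object.keys(registry)` and the helper name is illustrative:

```javascript
// Illustrative command list; in the real terminal this would be
// Object.keys(registry).
const commands = ['whoami', 'ls', 'projects', 'pwd', 'search', 'matrix'];

// A unique prefix completes in place; an ambiguous or empty match
// returns the candidates so the terminal can display them.
function completeCommand(partial, available = commands) {
  const matches = available.filter((cmd) => cmd.startsWith(partial.toLowerCase()));
  return matches.length === 1 ? matches[0] : matches;
}
```

With this shape, `completeCommand('wh')` completes straight to `whoami`, while an ambiguous prefix hands back the options for display.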
The main idea here was simple. The terminal reads from the same portfolio.js file as the rest of the site. So when I update my content, the terminal stays accurate without extra work.
The chatbot
It is not a generic chat widget. It is a portfolio-specific assistant with intent resolution, typed message playback, animated avatar frames, and optional voice output.
When someone asks, “What are Arun’s projects?” or “Tell me about the PDF pipeline,” it resolves the input against a keyword-scored knowledge base:
```javascript
function resolvePortfolioIntent(rawText, knowledgeBase) {
  const normalizedInput = normalizeText(rawText);

  const commandMatch = knowledgeBase.quickCommands.find((cmd) =>
    normalizedInput.includes(normalizeText(cmd.command))
  );
  if (commandMatch) {
    return knowledgeBase.entriesById[commandMatch.id] ?? knowledgeBase.fallback;
  }

  const bestMatch = knowledgeBase.entries
    .filter((entry) => entry.id !== 'fallback')
    .map((entry) => ({
      entry,
      score: scoreKeywords(normalizedInput, entry.keywords),
    }))
    .sort((a, b) => b.score - a.score)[0];

  return bestMatch?.score > 0 ? bestMatch.entry : knowledgeBase.fallback;
}
```
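The snippet leans on two helpers, normalizeText and scoreKeywords, that are not shown above. Hypothetical implementations, assuming a simple contains-count scoring scheme (the real versions may differ):

```javascript
// Assumed normalization: lowercase, strip punctuation, collapse whitespace.
function normalizeText(text) {
  return text
    .toLowerCase()
    .replace(/[^a-z0-9\s]/g, ' ')
    .replace(/\s+/g, ' ')
    .trim();
}

// Assumed scoring: one point per keyword found in the normalized input.
function scoreKeywords(normalizedInput, keywords) {
  return keywords.reduce(
    (score, keyword) =>
      normalizedInput.includes(normalizeText(keyword)) ? score + 1 : score,
    0
  );
}
```

Under this scheme, "Tell me about the PDF pipeline" scores 2 against an entry keyed on `['pdf', 'pipeline']`, so that entry wins the sort.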
No API calls. No LLM. It is a local intent resolver that maps user input to structured portfolio answers. Each intent entry defines reply text, follow-up suggestions, and an audio asset ID.
The voice system tries a pre-recorded .mp3 first. If that file is missing, it falls back to the browser's SpeechSynthesis API with a tuned voice preference. The browser gets a little dramatic here, but it behaves.
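That fallback chain can be sketched like this; the asset map, error handling, and voice preference here are assumptions, not the app's real tuning:

```javascript
// Pure decision: play a recorded clip if one exists for this reply,
// otherwise fall back to browser speech synthesis.
function choosePlayback(replyId, assets) {
  return replyId in assets
    ? { mode: 'audio', src: assets[replyId] }
    : { mode: 'synthesis' };
}

// Browser side of the sketch (asset paths are placeholders).
function speak(replyId, text, assets) {
  const choice = choosePlayback(replyId, assets);
  if (choice.mode === 'audio') {
    // If the file 404s or playback is blocked, degrade to synthesis anyway.
    new Audio(choice.src).play().catch(() => speakWithSynthesis(text));
  } else {
    speakWithSynthesis(text);
  }
}

function speakWithSynthesis(text) {
  const utterance = new SpeechSynthesisUtterance(text);
  const voices = speechSynthesis.getVoices();
  // Assumed preference: any en-US voice, else whatever the browser offers.
  utterance.voice = voices.find((v) => v.lang === 'en-US') ?? voices[0] ?? null;
  speechSynthesis.speak(utterance);
}
```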
The avatar is a canvas-rendered frame sequence, 50 WebP frames for the chatbot character. When the bot is idle, it loops the opening frames. When it speaks, it plays a different range. When voice is off, it transitions to a human avatar using the hero's idle frames. Close the chatbot, and it reverse-animates back to frame 0 before unmounting.
That part took more patience than pride. 😮‍💨
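The frame logic reduces to a small state machine. A sketch with made-up frame ranges (the real ones are tuned per animation):

```javascript
// Assumed ranges over the 50-frame sequence; 'closing' plays in reverse
// back to frame 0 before the component unmounts.
const FRAME_RANGES = {
  idle:     { start: 0,  end: 14, loop: true  },
  speaking: { start: 15, end: 49, loop: true  },
  closing:  { start: 49, end: 0,  loop: false },
};

// Advance one frame within a range, looping or clamping as configured.
// A requestAnimationFrame loop calls this and draws the frame to canvas.
function nextFrame(current, { start, end, loop }) {
  if (current === end) return loop ? start : end;
  return current + (end >= start ? 1 : -1);
}
```

The nice property is that "reverse back to frame 0 on close" is just another range, not a special code path.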
The loading screen
I wanted the loading screen to load real things, not fake a progress bar for two seconds and call it design.
The portfolio uses about 130 images, including avatar frames for the hero, hero movement, and chatbot, plus audio files. These are split into two phases:
```javascript
// Phase 1 (Critical): Loaded during loading screen — blocks until done
const CRITICAL_IMAGE_SETS = [
  { key: 'heroIdle', folder: 'avatar', count: 30 },
  { key: 'heroMove', folder: 'avatar-move', count: 50 },
];

// Phase 2 (Deferred): Loaded silently after the page renders
const DEFERRED_IMAGE_SETS = [
  { key: 'chatbotAvatar', folder: 'avatar-chat-bot', count: 50 },
];
```
Phase 1 loads during the loading screen, so the progress ring reflects actual image loading. Phase 2 starts a second after the page mounts and preloads the chatbot avatar and audio in the background. By the time a user opens the chatbot, the assets are already cached.
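A sketch of that preloader, assuming numbered WebP frames under a per-set folder (the real paths and naming may differ):

```javascript
// Build the frame URLs for one set; the file naming is an assumption.
function frameUrls({ folder, count }) {
  return Array.from({ length: count }, (_, i) => `/frames/${folder}/${i}.webp`);
}

// Preload a set, reporting fractional progress so the ring reflects
// real loading. Errors still count as "settled" so the bar never stalls.
function preloadSet(set, onProgress) {
  const urls = frameUrls(set);
  let settled = 0;
  return Promise.all(
    urls.map(
      (src) =>
        new Promise((resolve) => {
          const img = new Image();
          img.onload = img.onerror = () => {
            settled += 1;
            onProgress(settled / urls.length);
            resolve(img);
          };
          img.src = src;
        })
    )
  );
}

// Phase 1 blocks the loading screen; Phase 2 fires quietly after mount:
// await Promise.all(CRITICAL_IMAGE_SETS.map((s) => preloadSet(s, setProgress)));
// setTimeout(() => DEFERRED_IMAGE_SETS.forEach((s) => preloadSet(s, () => {})), 1000);
```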
The loading screen also uses a background video with dynamic brightness and contrast filters that intensify as progress increases. The whole thing feels like the site is powering up instead of waiting around.
Which is the vibe I wanted. Not a fake spinner doing theatre. 😅
Data-driven everything
Almost all content on the site comes from one file: src/data/portfolio.js. Profile info, skills, project details, work experience, and education all live there.
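The shape of that file looks roughly like this; the values below are placeholders, and the field names are inferred from how the rest of this article uses them:

```javascript
// Hypothetical shape of src/data/portfolio.js (placeholder values only).
const profile = {
  name: 'Arun Kushwaha',
  role: 'Full-Stack Developer',      // placeholder
  location: 'India',                 // placeholder
  summary: 'Short bio goes here.',   // placeholder
  email: 'hello@example.com',        // placeholder
  hero: { tagline: 'ARUN_OS v1.0' }, // read by the hero section
};

const projects = [
  { title: 'ARUN_OS Portfolio', description: 'This site.' }, // placeholder
];

const skills = {
  backend: { items: ['Node.js', 'Redis'] }, // placeholder
};
```

Every consumer — hero, terminal, chatbot knowledge base, project cards — imports from this one module, which is what makes the single source of truth work.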
That decision had a nice ripple effect:
- The hero section reads from profile.hero
- The terminal's whoami command reads from profile
- The chatbot's knowledge base is generated from the same exports
- Project cards, experience timelines, and skill grids all use the same arrays
If I add a new project to portfolio.js, it shows up in the project section, becomes queryable in the terminal, and the chatbot can answer questions about it. No syncing. No duplicate copy. No “why is this stale” headache.
That alone saved me from future me, who is always one missed update away from chaos.
Lessons I would pass along
Treat heavy components as lazy-loaded modules. The Three.js background, chatbot, terminal, and Matrix rain are all React.lazy() imports. The core page renders first, and the heavier bits load after.
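The setup is small. A sketch, with assumed component paths:

```jsx
// Sketch of the lazy-loading setup; the paths are assumptions.
import { lazy, Suspense } from 'react';

const Terminal = lazy(() => import('./components/Terminal'));
const Chatbot = lazy(() => import('./components/Chatbot'));
const MatrixRain = lazy(() => import('./components/MatrixRain'));

function App() {
  return (
    <Suspense fallback={null}>
      {/* Core page content renders immediately; these chunks stream in after. */}
      <Terminal />
      <Chatbot />
      <MatrixRain />
    </Suspense>
  );
}
```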
Coordinate overlays carefully. When the chatbot opens, the page gets inert, aria-hidden, pointer-events-none, and a blur filter. The body scroll is locked with position: fixed and restored on close. Getting that right took more time than building the chatbot.
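A sketch of that open/close coordination, assuming a single wrapper element around the page content (the class names are Tailwind's; the structure is illustrative):

```javascript
// Pure helper: the body styles that freeze scroll at a given offset.
function scrollLockStyles(scrollY) {
  return { position: 'fixed', top: `-${scrollY}px`, width: '100%' };
}

// On open: hide the page from assistive tech and pointer input, blur it,
// and lock body scroll at the current position.
function lockPage(root) {
  const scrollY = window.scrollY;
  root.setAttribute('inert', '');
  root.setAttribute('aria-hidden', 'true');
  root.classList.add('pointer-events-none', 'blur-sm');
  Object.assign(document.body.style, scrollLockStyles(scrollY));
  return scrollY; // keep this for restore
}

// On close: undo everything and jump back to the saved scroll position.
function unlockPage(root, scrollY) {
  root.removeAttribute('inert');
  root.removeAttribute('aria-hidden');
  root.classList.remove('pointer-events-none', 'blur-sm');
  document.body.style.position = '';
  document.body.style.top = '';
  window.scrollTo(0, scrollY);
}
```

The negative top offset is the trick: position: fixed alone would snap the page to the top, so the saved offset keeps the content visually in place while locked.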
Do not fake your loading screen. If your app loads assets, tie the progress bar to real progress. Users notice when a loading animation is cosmetic fluff.
Separate content from components early. A single data file for portfolio content saves hours later.
Frontend work turns into systems work fast. Once you have audio playback, canvas animation, scroll-driven positioning, overlays, and keyboard listeners all running together, you are not “just making a website” anymore.
Try it
- 🌐 Live demo: arunkushwaha.xyz
- 📦 Source code: GitHub
If you are building your own portfolio and want it to feel like more than a page, ask one question:
What would make this feel like something I built, not something I filled in?
That question changed everything for me.