This is a submission for the New Year, New You Portfolio Challenge Presented by Google AI
🎮 About
I am a software craftsman and enthusiast, but also a gamer and sci-fi fan, and this portfolio brings those worlds together. The goal was to build an old-school portfolio with a space-station theme, driven by fully configurable data rather than static content.
Since I also enjoy adding a bit of fun and gamification, I combined the idea of an AI chatbot with a terminal interface and created a space-station terminal that supports both system and portfolio commands.
💻 Portfolio
GitHub Repository: https://github.com/Gramli/gramli-portfolio-2026/tree/master
Portfolio URL: https://daniel-balcarek-portfolio-768859394911.europe-west1.run.app
👷 How I Built It
I took this challenge as an opportunity to improve my skills in writing quality prompts and collaborating with AI agents during development, as well as to finally get hands‑on experience with Google Cloud Run. That is why the portfolio is fully “vibe‑coded” using Gemini models.
- For drafting and experimentation, I used Google AI Studio
- For code completions, I used agent mode in my local development IDE
Tech Stack
🎨 Frontend
Angular / TypeScript
I chose Angular simply because I enjoy working with the framework. The frontend is fully data-driven, implements fuzzy-matching logic for the Job Fit Analyzer feature, and delivers the experience through both a classic UI and a gamified terminal-style interface.
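To give a feel for that fuzzy matching, here is a minimal sketch assuming a normalize-then-edit-distance approach. The function names and the threshold are illustrative, not the actual implementation:

```typescript
// Hypothetical sketch of skill fuzzy matching; names and threshold are illustrative.

/** Normalize a skill name: lowercase, strip separators, drop a trailing "js" ("ReactJS" -> "react"). */
function normalizeSkill(skill: string): string {
  return skill.toLowerCase().replace(/[^a-z0-9]/g, '').replace(/js$/, '');
}

/** Classic Levenshtein edit distance between two strings. */
function levenshtein(a: string, b: string): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)),
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

/** Two skills match if their normalized forms are identical or within one edit of each other. */
function skillsMatch(required: string, owned: string, maxDistance = 1): boolean {
  const r = normalizeSkill(required);
  const o = normalizeSkill(owned);
  return r === o || levenshtein(r, o) <= maxDistance;
}

// skillsMatch('ReactJS', 'React') -> true: both normalize to "react"
```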
⚙️ Backend
.NET / C#
Creating a .NET Cloud Run function had been on my to-do list for a long time, so this choice felt natural. The function acts as a proxy for communication with Gemini models and centrally manages prompts for specific actions. This allows the frontend to send only contextual data while the proxy assembles the final prompt before forwarding the request to the model.
Example of backend-defined system prompts:
```csharp
private static string GetSystemInstruction(AiProxyRequest request)
{
    return request.Type?.ToLowerInvariant() switch
    {
        "job-parse" => "You are a professional HR data extraction engine.\n\n" +
            "Task: Extract structured data from the Job Description into a strict JSON format.\n\n" +
            "Output Schema:\n" +
            "{\n" +
            " \"requiredSkills\": [\"string\"],\n" +
            " \"niceToHaveSkills\": [\"string\"],\n" +
            " \"yearsExperience\": numberOrNull,\n" +
            " \"keyResponsibilities\": [\"string\"],\n" +
            " \"industryDomains\": [\"string\"]\n" +
            "}\n\n" +
            "Strict Rules:\n" +
            "1. Extract ONLY explicitly stated requirements. Do NOT infer missing skills.\n" +
            "2. Return RAW JSON only. Do NOT use Markdown code blocks (```json).\n" +
            "3. If a field is not found, return null or [].\n" +
            "4. Ensure valid JSON syntax.",
        "chat" => "You are the AI interface for Daniel Balcarek's portfolio.\n" +
            "Directive: Answer visitor queries using ONLY the provided context.\n\n" +
            "Rules:\n" +
            "1. Use ONLY the data in [CONTEXT]. Do not use external knowledge.\n" +
            "2. If the answer is not in [CONTEXT], reply exactly: 'Data segment not found in archives.'\n" +
            "3. Keep answers concise (max 2-3 sentences) and professional.\n" +
            "4. Do not make up facts.\n\n" +
            $"[CONTEXT]\n{request.ContextData ?? "{}"}\n[END CONTEXT]",
        _ => "You are a helpful AI assistant."
    };
}
```
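From the frontend's perspective, a request then carries only the action type and the contextual data. Here is a minimal sketch of such a call, assuming a plain `fetch` against the Cloud Run function; the endpoint path and payload shape are my illustration, not the exact production contract:

```typescript
// Hypothetical frontend call to the proxy; endpoint and field names are illustrative.
interface AiProxyRequest {
  type: 'job-parse' | 'chat';
  prompt: string;       // the user's question or the raw job description
  contextData?: string; // serialized portfolio data, used by the 'chat' type
}

async function askProxy(request: AiProxyRequest): Promise<string> {
  const response = await fetch('/api/ai-proxy', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(request),
  });
  if (!response.ok) {
    throw new Error(`Proxy request failed: ${response.status}`);
  }
  return response.text();
}

// Example: the terminal's `ai` command ships only the question plus portfolio context.
// askProxy({ type: 'chat', prompt: 'Which cloud does Daniel use?', contextData: portfolioJson });
```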
🌌 What I'm Most Proud Of
Google Cloud Run
I finally became familiar with the Google Cloud Platform, which was something I had wanted to do for a long time—and it was quite a journey:
- figuring out where my deployed service actually lives (deployments, APIs…)
- understanding why I was being billed so much for Gemini usage (turns out I was using a paid Gemini 3 model)
- debugging why my C# service worked locally but failed in the cloud
- accidentally committing an API key again—and learning how to revoke it immediately
- and many more lessons learned along the way :)
Overall, once I understood the basics, Google Cloud Platform turned out to be very approachable. I also want to highlight how easy deployment from a GitHub repository was—it really took just a few clicks.
Prompting
For every larger feature, I created structured and detailed prompts. The results from the Gemini models were impressive. Spending more time on prompt design before coding can save a significant amount of time later. I am proud of how I improved my prompt‑engineering skills, and I created several prompts that I can reuse in future projects.
AI Integration
I finally learned how to integrate AI into a website and discovered that it is actually quite straightforward. I use AI in the portfolio in two main ways:
- **AI chatbot about the portfolio:** You can ask questions about the portfolio data using a dedicated `ai [question]` command. The `ai` command forwards the request and relevant contextual metadata to a proxy layer, which assembles a structured prompt and invokes a Gemini model to generate the response.
- **Job Fit Analyzer:** The Job Fit Analyzer combines AI-based parsing with a custom scoring engine to simulate how a technical recruiter reviews a profile. First, it uses an LLM to break a raw job description down into structured data, separating required skills, years of experience, and key responsibilities. It then compares these requirements against my portfolio using fuzzy string matching (so it understands that "ReactJS" and "React" are the same) and scans my actual project descriptions to verify practical experience. The entire system is fully configurable via a JSON file, allowing me to tweak strictness and weighting without touching code, all documented in a handy Markdown guide. The final match score isn't random; it is calculated using a weighted algorithm that awards bonus points for skills I have actually used in projects, so the result reflects real engineering capability. A simplified sketch of the scoring idea follows below.
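To make the weighting concrete, here is a minimal sketch of how such a score could be computed. The weights, field names, and bonus value are my own illustration rather than the actual config, and a simple normalized comparison stands in for the fuzzy matcher sketched in the Frontend section:

```typescript
// Hypothetical weighted match score; all weights and bonuses are illustrative.
const norm = (s: string) => s.toLowerCase().replace(/[^a-z0-9]/g, '');
const matches = (a: string, b: string) => norm(a) === norm(b);

interface ScoringConfig {
  requiredWeight: number;   // e.g. 0.7 - required skills dominate the score
  niceToHaveWeight: number; // e.g. 0.3
  projectBonus: number;     // e.g. 0.05 per required skill proven in a project
}

function matchScore(
  required: string[],
  niceToHave: string[],
  mySkills: string[],
  projectSkills: string[], // skills mentioned in actual project descriptions
  config: ScoringConfig,
): number {
  // Fraction of a requirement list covered by my skills.
  const coverage = (needed: string[]) =>
    needed.length === 0
      ? 1
      : needed.filter(req => mySkills.some(mine => matches(req, mine))).length / needed.length;

  let score =
    config.requiredWeight * coverage(required) +
    config.niceToHaveWeight * coverage(niceToHave);

  // Bonus points for required skills backed by real project experience.
  for (const req of required) {
    if (projectSkills.some(p => matches(req, p))) {
      score += config.projectBonus;
    }
  }
  return Math.min(1, score); // clamp at 100%
}
```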