I Built a Free AI Portfolio Chatbot — Here's How You Can Too (10 Minutes, No Server)
A few weeks ago I was looking at my portfolio site and thinking: this is just a static page anyone could have made in 2015.
I wanted something that actually talks to visitors. Something that answers questions about my thesis, explains my projects, and feels alive. But every solution I found either needed a paid server, left the API key sitting in the browser for anyone to steal, or took three days to set up properly.
So I built my own — and then open-sourced it so you can have it too.
What It Does
It's a personal portfolio site with an AI chatbot built in. The assistant knows everything about you — your research, your projects, your skills, your contact info — and answers questions about them in a natural, conversational way. It auto-detects the language the visitor writes in and responds accordingly.
The Stack — And Why It's All Free
The part I'm most proud of is that this costs exactly $0/month to run, forever.
GitHub Pages hosts the static frontend. No server, no Docker, no deployment pipeline to maintain.
Cloudflare Workers sits between the browser and the AI. This is the key piece most people miss — your API key should never be in your frontend code. Anyone can open DevTools and read it. The Worker keeps the key on the server side, completely out of reach.
Groq API with LLaMA 3.3 70B handles the actual AI. Groq's free tier is fast enough for a personal portfolio and generous enough that you'll never hit the limit under normal traffic.
Browser → GitHub Pages (your site)
↓
Cloudflare Worker ← API key lives here, safely
↓
Groq + LLaMA 3.3 70B
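To make that middle layer concrete, here's a stripped-down sketch of what such a Worker looks like. It isn't the full worker.js from the repo (that one also carries the persona and language handling), and it assumes the page posts a { messages } array in the usual chat format; the point is simply that the key comes from a Worker secret and never touches the browser:

export default {
  async fetch(request, env) {
    if (request.method !== "POST") {
      return new Response("Method not allowed", { status: 405 });
    }
    // Assumes the page sends { messages: [...] } in OpenAI chat format.
    const { messages } = await request.json();

    // Only the Worker ever sees GROQ_API_KEY; it's injected as a secret.
    const groqResponse = await fetch("https://api.groq.com/openai/v1/chat/completions", {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${env.GROQ_API_KEY}`, // set with `wrangler secret put`
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "llama-3.3-70b-versatile", // Groq's LLaMA 3.3 70B id; check their model list
        messages,
      }),
    });

    return new Response(await groqResponse.text(), {
      status: groqResponse.status,
      headers: { "Content-Type": "application/json" },
    });
  },
};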
The Part That Makes It Actually Usable
There's one file you edit: config.js.
const CONFIG = {
  owner: {
    name: "Your Name",
    title: "Graduate Student in EEE",
    university: "Your University",
    email: "your@email.com",
    bio: "A sentence or two about yourself.",
    avatar: "👨‍💻",
  },
  research: {
    thesis: {
      title: "Your Thesis Title",
      keywords: ["AI", "Power Systems", "IoT"],
      abstract: "What your research is about...",
      status: "In Progress",
    },
  },
  projects: [
    {
      name: "Your Project Name",
      description: "What it does.",
      tech: ["Python", "ESP32", "React"],
      github: "https://github.com/you/project",
    },
  ],
  skills: {
    programming: ["Python", "JavaScript", "MATLAB"],
    tools: ["Git", "Linux", "Docker"],
  },
  social: {
    github: "https://github.com/yourusername",
    linkedin: "https://linkedin.com/in/yourprofile",
  },
  apiEndpoint: "https://your-worker.your-subdomain.workers.dev/chat",
};
The AI reads this file as its knowledge base. Add a project here, the chatbot knows about it. Update your bio, the chatbot reflects it. No retraining, no embedding pipeline, no vector database. Just a config object.
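Concretely, the simplest way to wire that up is to flatten the object into a system prompt and send it along with every chat request. The helper below is an illustration of the idea rather than a copy of the repo's code, but the shape is the same:

// Flatten CONFIG into a system prompt for the model.
function buildSystemPrompt(config) {
  const projects = config.projects
    .map((p) => `- ${p.name}: ${p.description} (${p.tech.join(", ")}) ${p.github}`)
    .join("\n");

  return [
    `You are the portfolio assistant for ${config.owner.name}, ${config.owner.title} at ${config.owner.university}.`,
    `Bio: ${config.owner.bio}`,
    `Thesis: "${config.research.thesis.title}" (${config.research.thesis.status}). ${config.research.thesis.abstract}`,
    `Projects:\n${projects}`,
    `Skills: ${config.skills.programming.join(", ")}. Tools: ${config.skills.tools.join(", ")}.`,
    `Contact: ${config.owner.email}. Only answer questions about this person and their work.`,
  ].join("\n\n");
}

Change config.js, push, and the very next prompt already carries the new facts.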
Setting It Up
Step 1 — Fork the repo and clone it
git clone https://github.com/YOUR-USERNAME/make-your-own-ai-assistant.git
Step 2 — Fill in config.js with your information
This is the only file you touch on the frontend side.
Step 3 — Get a free Groq API key
Head to console.groq.com, sign up (no credit card), and create an API key.
Step 4 — Deploy the Cloudflare Worker
npm install -g wrangler
wrangler login
cd cloudflare-workers
wrangler deploy
# Store your key securely — it never appears in your code
wrangler secret put GROQ_API_KEY
After deploy you'll get a URL like https://my-bot.your-subdomain.workers.dev. Copy it into the apiEndpoint field in config.js.
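For reference, the Worker's wrangler.toml only needs a handful of lines. The values below are placeholders (the repo ships its own file); the name field is what becomes the first part of that workers.dev URL:

name = "my-bot"                    # → https://my-bot.<your-subdomain>.workers.dev
main = "worker.js"                 # the Worker's entry point
compatibility_date = "2024-01-01"  # any recent date works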
Step 5 — Enable GitHub Pages
git add .
git commit -m "my portfolio chatbot"
git push
Go to your repo → Settings → Pages → Source: GitHub Actions.
In about a minute, it's live at https://YOUR-USERNAME.github.io/make-your-own-ai-assistant/.
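In case you want to see (or tweak) what runs under that "GitHub Actions" source setting, a minimal static-site Pages workflow looks roughly like this (action versions current as of writing):

name: Deploy to GitHub Pages
on:
  push:
    branches: [main]
permissions:
  contents: read
  pages: write
  id-token: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: github-pages
    steps:
      - uses: actions/checkout@v4            # pull the repo contents
      - uses: actions/configure-pages@v5     # set up the Pages deployment
      - uses: actions/upload-pages-artifact@v3
        with:
          path: .                            # the whole repo is the static site
      - uses: actions/deploy-pages@v4        # publish the artifact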
What It Looks Like Live
The bilingual detection is simple — it checks for Turkish characters and switches the response language automatically. If you want to add more languages, there's a single function in worker.js to extend.
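The heart of it is a check along these lines. This is a simplified version of the idea, and the extension point is exactly what you'd guess: one more character class (or keyword list) per language.

// If the message contains Turkish-specific letters, answer in Turkish.
function detectLanguage(message) {
  const turkishChars = /[çğıöşüÇĞİÖŞÜ]/;
  return turkishChars.test(message) ? "Turkish" : "English";
}

// Adding, say, German or a Cyrillic-script language is one more test:
//   /[äöüßÄÖÜ]/       → "German"
//   /[\u0400-\u04FF]/ → "Russian"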
A Few Things I'd Do Differently
Rate limiting. The current setup doesn't throttle requests, so technically someone could spam your Worker and drain your Groq free tier. For most personal portfolios this won't matter, but it's on the roadmap.
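If you want a stopgap before a proper fix lands, even a naive per-IP counter inside the Worker helps. This is a sketch rather than something in the repo, and since it lives in memory it resets whenever the isolate is recycled; a real version would use KV or a Durable Object:

const hits = new Map();
const LIMIT = 20;         // max requests per window, per IP
const WINDOW_MS = 60_000; // one minute

function rateLimited(request) {
  const ip = request.headers.get("CF-Connecting-IP") || "unknown";
  const now = Date.now();
  const entry = hits.get(ip) || { count: 0, start: now };
  if (now - entry.start > WINDOW_MS) {
    entry.count = 0;
    entry.start = now;
  }
  entry.count += 1;
  hits.set(ip, entry);
  return entry.count > LIMIT; // caller answers with a 429 when this is true
}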
Streaming responses. Right now the AI waits until the full response is ready before sending it. Streaming would make it feel much snappier. Groq supports it — it's just not wired in yet.
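The change itself is small once it's wired in: ask Groq for a streamed completion and forward the bytes untouched. Roughly (again, a sketch, not code that exists in the repo yet; the frontend would also need to read the stream instead of waiting for a single JSON reply):

// Inside the Worker's fetch handler, with `messages` already parsed:
const upstream = await fetch("https://api.groq.com/openai/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${env.GROQ_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "llama-3.3-70b-versatile",
    messages,
    stream: true, // the OpenAI-compatible streaming flag, which Groq honors
  }),
});

// Pass the server-sent-event stream straight through so the browser
// can render tokens as they arrive.
return new Response(upstream.body, {
  headers: { "Content-Type": "text/event-stream" },
});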
A demo GIF in the README. I always find projects easier to evaluate when I can see them moving. I'll add one soon.
If you build something with this and run into any of these limitations, open an issue or PR. This is genuinely open source — not "open source but actually don't touch it."
Links
- GitHub: github.com/erendogan83/make-your-own-ai-assistant
- Live demo: eren-ai-assistant.pages.dev
If it saves you time, a star on the repo goes a long way. And if you build your own version, I'd genuinely like to see it — drop the link in the comments.



