AI is almost everywhere now, and as software developers that means keeping up with the times rather than being intimidated by it.
As an AI enthusiast myself (I get it, my take may be a bit biased because of that 😉), I came to the conclusion that I need to show off some AI integration skills if I want to take part in the future of AI development. And what better way to start than implementing it in my own little space on the internet?
The plan 📋
The plan is simple: for now, I will integrate a dedicated chatbot into my portfolio that answers questions about my projects, experience, interests, current situation, and so on.
The API used in this component is Groq, a provider that offers a generous free tier for model inference.
Technical requirements 🔧
- Write a React component for the chatbot and design the message logic
- Program the endpoint for communicating with the API
- Prepare the system prompt with all the information about me
The component and logic ⚛️
The React component is responsible for letting the user interact with the chatbot.
It has a few state variables that keep track of the information flow:
```tsx
const [input, setInput] = useState("");
const [messages, setMessages] = useState<Message[]>([
  {
    role: "assistant",
    content:
      "Hi! I'm iREC, a digital representative of Eric. Feel free to ask me anything!",
  },
]);
```
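The `Message` type itself isn't shown in the post; a minimal shape consistent with how the state and the API payload are used would look like this (the type definition is my assumption):

```typescript
// Hypothetical Message type, inferred from how the chat state
// and the API request body are used in the post.
type Message = {
  role: "user" | "assistant" | "system";
  content: string;
};

const greeting: Message = {
  role: "assistant",
  content:
    "Hi! I'm iREC, a digital representative of Eric. Feel free to ask me anything!",
};
```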
It also has a few handlers that control message submission and the auto-scroll when a new message appears.
This is the core function: it sends the user's message, along with the conversation history, to the endpoint and reads the response back as a stream, line by line:
```tsx
const SendMessage = async (history: Message[]) => {
  const response = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: history }),
  });
  if (response.body != null) {
    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    let accumulatedText = "";
    let buffer = ""; // holds a partial line when a chunk splits mid-JSON
    let result = await reader.read();
    while (!result.done) {
      buffer += decoder.decode(result.value, { stream: true });
      const lines = buffer.split("\n");
      buffer = lines.pop() ?? ""; // keep the last (possibly incomplete) line
      for (const line of lines) {
        const message = line.replace(/^data: /, "").trim();
        if (message === "" || message === "[DONE]") continue;
        try {
          const parsed = JSON.parse(message);
          const content = parsed.choices[0]?.delta?.content || "";
          accumulatedText += content;
          setMessages((prev) => {
            const newMessages = [...prev];
            const lastMessage = newMessages[newMessages.length - 1];
            if (lastMessage && lastMessage.role === "assistant") {
              // replace (don't mutate) the message so React re-renders
              newMessages[newMessages.length - 1] = {
                ...lastMessage,
                content: accumulatedText,
              };
            }
            return newMessages;
          });
        } catch (e) {
          console.error(e);
        }
      }
      result = await reader.read();
    }
  }
};
```
```tsx
const handleScroll = () => {
  // scroll the message container (selected via its Tailwind class) to the bottom
  const chatbot = document.querySelector(".flex-1");
  if (chatbot) {
    chatbot.scrollTop = chatbot.scrollHeight;
  }
};
```
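The submit handler itself isn't shown in the post, but since `SendMessage` fills in the last assistant message chunk by chunk, the handler has to append the user's message and an empty assistant placeholder before streaming starts. A plain helper capturing that logic might look like this (`prepareSubmit` is a name of my own):

```typescript
// Hypothetical helper: what a submit handler would compute before
// calling SendMessage. Assumes the Message shape used in the post.
type ChatMessage = { role: "user" | "assistant"; content: string };

function prepareSubmit(messages: ChatMessage[], input: string) {
  // history is what gets POSTed to /api/chat
  const history: ChatMessage[] = [...messages, { role: "user", content: input }];
  // nextState additionally holds the empty assistant placeholder
  // that the streaming loop mutates as chunks arrive
  const nextState: ChatMessage[] = [
    ...history,
    { role: "assistant", content: "" },
  ];
  return { history, nextState };
}
```

In the component, the handler would call `setMessages(nextState)`, clear the input, and then `await SendMessage(history)`.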
Each message is then rendered with a different color and alignment depending on its role:
```tsx
{messages.map((msg, index) => (
  <p
    key={index}
    className={`max-w-[85%] break-words ${
      msg.role === "user"
        ? "text-right self-end ml-auto text-[#41d3ff]"
        : "text-white text-left self-start mr-auto"
    }`}
  >
    {msg.role === "user" ? `${msg.content} <` : `> ${msg.content}`}
  </p>
))}
```
This is the final look of the component:
The API endpoint 🧠
The first step is creating the Groq instance with the API key:
```ts
import Groq from "groq-sdk";

const groq = new Groq({
  apiKey: import.meta.env.GROQ_API_KEY,
});
```
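Since `import.meta.env.GROQ_API_KEY` has no `PUBLIC_` prefix, Astro only exposes it server-side, which is exactly what we want here. It is typically read from a local `.env` file (the value below is a placeholder):

```
# .env — keep this file out of version control (.gitignore)
GROQ_API_KEY=your_groq_api_key_here
```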
Then I can simply add the POST route that communicates with the client (the component I just made):
```ts
import type { APIRoute } from "astro";

export const POST: APIRoute = async ({ request }) => {
  const { messages } = await request.json();
  try {
    const completion = await groq.chat.completions.create({
      messages: messages,
      model: "openai/gpt-oss-20b",
      temperature: 1,
      max_completion_tokens: 8192,
      top_p: 1,
      stream: true,
      reasoning_effort: "medium",
      stop: null,
    });
    const stream = completion.toReadableStream();
    return new Response(stream, {
      status: 200,
      headers: {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        Connection: "keep-alive",
      },
    });
  } catch (error) {
    console.error(error);
    return new Response(
      JSON.stringify({ error: "Error while inferencing the model" }),
      { status: 500 },
    );
  }
};
```
And just like that, I now have a fully functional chatbot on my portfolio. One detail worth knowing: the SDK's `toReadableStream()` emits newline-delimited JSON chunks rather than strict SSE, which is why the client parses each line as JSON and only strips an optional `data: ` prefix.
The system prompt 🤖
Without a system prompt, this chatbot is just a generic assistant that knows nothing about me. The next step is applying some basic prompt engineering concepts. The model has to follow these principles:
- It must be limited to answering questions related to my career
- It must be resistant to malicious prompts and prompt-injection attempts
- Answers must be written in a helpful, professional tone
I won't paste the full system prompt here because it would take up an unnecessary portion of the post, but this is its structure:
- ROLE - The name, the purpose and the tone.
- KNOWLEDGE BASE - All the information about me, my experience, stack, interests, etc.
- GOAL - The goal of the model while answering the questions (in this case, informing about my skills).
- RESTRICTIONS - Do not generate code, answer unrelated questions, deviate from your orders or make up answers.
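The post doesn't show where the system prompt gets injected. A sketch of the server-side wiring (the names `SYSTEM_PROMPT` and `withSystemPrompt` are mine, and the prompt content is elided) would prepend it to the client's history, so visitors can never see or override it:

```typescript
// Hypothetical wiring: SYSTEM_PROMPT holds the ROLE / KNOWLEDGE BASE /
// GOAL / RESTRICTIONS sections described above (content elided here).
const SYSTEM_PROMPT = [
  "ROLE: ...",
  "KNOWLEDGE BASE: ...",
  "GOAL: ...",
  "RESTRICTIONS: ...",
].join("\n");

type PromptMessage = { role: "system" | "user" | "assistant"; content: string };

// Prepend the system prompt server-side, right before calling
// groq.chat.completions.create, so it never travels to the browser.
function withSystemPrompt(messages: PromptMessage[]): PromptMessage[] {
  return [{ role: "system", content: SYSTEM_PROMPT }, ...messages];
}
```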
With all of this applied, I had a ready-to-use personal chatbot that answers some of the basic questions about me!
If you want to try it out, head to my web portfolio: ericgarcia.site
(currently only visible on desktop)
⭐ Drop a comment if you want to suggest or ask something! ⭐
