<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Roy amit</title>
    <description>The latest articles on DEV Community by Roy amit (@roy_amit).</description>
    <link>https://dev.to/roy_amit</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2603081%2F5b7e3df3-9b2a-480d-b513-f5cfe6e0e80a.png</url>
      <title>DEV Community: Roy amit</title>
      <link>https://dev.to/roy_amit</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/roy_amit"/>
    <language>en</language>
    <item>
      <title>Building an AI-Powered Portfolio: A Developer's Journey</title>
      <dc:creator>Roy amit</dc:creator>
      <pubDate>Thu, 12 Feb 2026 21:14:18 +0000</pubDate>
      <link>https://dev.to/roy_amit/building-an-ai-powered-portfolio-a-developers-journey-16a1</link>
      <guid>https://dev.to/roy_amit/building-an-ai-powered-portfolio-a-developers-journey-16a1</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Traditional portfolios have a fundamental problem: they assume visitors will actually read them. We craft paragraphs about our experience, build grids of project cards, and hope that recruiters or potential collaborators will take the time to scroll through everything we've written. Most don't.&lt;/p&gt;

&lt;p&gt;This project started with a different premise. What if a portfolio could actively engage with visitors? Instead of presenting information and hoping it gets consumed, what if it could have a conversation?&lt;/p&gt;

&lt;p&gt;This article documents the journey of building an AI-powered portfolio from scratch—the architectural decisions, the technical challenges, and the lessons learned along the way.&lt;/p&gt;




&lt;h2&gt;
  
  
  Defining the Experience Before the Technology
&lt;/h2&gt;

&lt;p&gt;One principle guided the early development: design the experience first, then figure out how to build it.&lt;/p&gt;

&lt;p&gt;Before writing any backend code, the entire user interface was built. A chat window as the central element. A sidebar for structured navigation. A greeting banner with quick-action buttons for common questions. A command palette for keyboard-oriented users. Even the streaming text effect that would later show AI responses being generated in real-time.&lt;/p&gt;

&lt;p&gt;This approach served two purposes. First, it forced clarity about what the product should feel like before getting lost in implementation details. Second, it created concrete requirements for the AI system. The interface specified what the backend needed to deliver.&lt;/p&gt;

&lt;p&gt;The decision to commit entirely to dark mode came from a focus on consistency. Maintaining two color schemes requires double the design effort for every new component. The dark aesthetic also aligned better with the developer-focused positioning of the portfolio.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Challenge of Grounded Responses
&lt;/h2&gt;

&lt;p&gt;The first backend implementation was straightforward: FastAPI as the server framework, LangChain for AI orchestration, and OpenAI's GPT as the language model. Within a single development session, there was a working chatbot with a defined persona.&lt;/p&gt;

&lt;p&gt;The problem emerged immediately. The chatbot could hold a conversation, but it didn't know anything factual about the portfolio owner. When asked about specific projects, it would generate plausible-sounding but fictional descriptions. When asked about technical skills, it would list generic developer competencies rather than actual expertise.&lt;/p&gt;

&lt;p&gt;This is a common pitfall with language models. They're excellent at generating fluent text but have no mechanism for distinguishing between what they've been trained on and what the developer wants them to know.&lt;/p&gt;

&lt;p&gt;The solution was Retrieval-Augmented Generation, commonly called RAG. Instead of relying on the model's training data, this approach provides relevant context with each question. The implementation involved several components: documents containing actual portfolio information (resume, project descriptions, skills, bio), a process to convert these documents into searchable embeddings, a vector database to store and query these embeddings, and a modified prompt structure that includes retrieved context before asking for a response.&lt;/p&gt;
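&lt;p&gt;As a minimal sketch of that last component, this is roughly what the context-augmented prompt step looks like. The function name and wording are illustrative, not the project's actual code:&lt;/p&gt;

```python
# Illustrative sketch of RAG prompt assembly, assuming retrieval has
# already returned the most relevant chunks. All names are hypothetical.

def build_augmented_prompt(question, retrieved_chunks):
    """Prepend retrieved portfolio facts so the model answers from them."""
    context = "\n\n".join(f"- {chunk}" for chunk in retrieved_chunks)
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_augmented_prompt(
    "Which map service did Space-Ease use?",
    ["Space-Ease integrated Google Maps for geolocation and navigation."],
)
```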

&lt;p&gt;The initial vector store was FAISS, a library that handles similarity search efficiently. For a prototype, it worked well. The chatbot could now answer questions about real projects, cite actual technologies used, and provide accurate information about experience and education.&lt;/p&gt;
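&lt;p&gt;Under the hood, a flat index performs a brute-force similarity search. A toy version with two-dimensional vectors (real embeddings have hundreds of dimensions) shows the computation:&lt;/p&gt;

```python
import math

# Brute-force cosine similarity search, the computation a flat FAISS
# index performs. The 2-d vectors and texts are toy data for illustration.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, store, k=2):
    """store is a list of (text, vector) pairs; return the k closest texts."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

store = [
    ("Built Space-Ease with Next.js", [0.9, 0.1]),
    ("Studied computer science", [0.1, 0.9]),
    ("Deployed projects on Vercel", [0.8, 0.2]),
]
result = top_k([1.0, 0.0], store)
```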




&lt;h2&gt;
  
  
  Reconsidering the Frontend Framework
&lt;/h2&gt;

&lt;p&gt;The frontend began as a Vite and React application. Vite's fast hot module replacement made development pleasant, and the React ecosystem provided all necessary UI components.&lt;/p&gt;

&lt;p&gt;As the project matured, several limitations became apparent. Search engine optimization required manual implementation of meta tags and careful attention to server-side rendering—which Vite doesn't provide natively. Image optimization needed external tooling. The Open Graph images for social sharing were particularly problematic, requiring workarounds that felt fragile.&lt;/p&gt;

&lt;p&gt;Next.js offered built-in solutions for each of these problems. Its App Router architecture provided clear patterns for organizing code. Server Components could handle data fetching without sending unnecessary JavaScript to the client. The Image component automated optimization. The metadata API made SEO straightforward.&lt;/p&gt;

&lt;p&gt;The migration required restructuring the entire application. Component paths changed, the boundary between server and client code needed explicit definition, and responsive layouts that worked in the previous setup broke in ways that required investigation. The transition happened across three separate development branches over multiple days.&lt;/p&gt;

&lt;p&gt;The investment was worthwhile. The codebase became more organized, and problems that previously required workarounds became trivial.&lt;/p&gt;




&lt;h2&gt;
  
  
  Building a Real-Time Experience
&lt;/h2&gt;

&lt;p&gt;A critical insight about chat interfaces is that perceived speed matters as much as actual speed. Users waiting for a complete response experience that wait as delay. Users watching text appear progressively experience the same duration as responsiveness.&lt;/p&gt;

&lt;p&gt;Implementing streaming introduced significant complexity. The backend needed to generate responses as Server-Sent Events, sending partial content as it became available. The frontend needed to receive these events, accumulate the partial content, and render it progressively—all while handling edge cases like dropped connections, race conditions between messages, and proper scroll behavior.&lt;/p&gt;
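&lt;p&gt;The wire format itself is simple; the complexity lives around it. A hedged sketch of the event framing, where the payload shape ("delta", "[DONE]") is an assumption rather than the app's actual API:&lt;/p&gt;

```python
import json

# Sketch of Server-Sent Events framing: each partial token becomes a
# "data:" line followed by a blank line, and a sentinel marks completion.
# In a FastAPI backend, a StreamingResponse would wrap a generator like
# this with the text/event-stream media type.

def sse_events(token_stream):
    for token in token_stream:
        yield f"data: {json.dumps({'delta': token})}\n\n"
    yield "data: [DONE]\n\n"

events = list(sse_events(["Hel", "lo", "!"]))
```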

&lt;p&gt;The initial implementation had numerous bugs. Messages would sometimes disappear mid-stream when React re-renders occurred. Loading indicators would persist incorrectly. Multiple messages arriving in quick succession would cause rendering anomalies.&lt;/p&gt;

&lt;p&gt;Resolving these issues required rethinking the state management approach. A centralized React Context replaced the previous pattern of passing state through component props. A dedicated hook encapsulated the typewriter rendering logic. The streaming handler was rebuilt to properly manage partial messages and their accumulation.&lt;/p&gt;
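&lt;p&gt;The core of the rebuilt handler is an accumulator that keys partial text by message id, so a re-render or an interleaved second stream cannot drop content. The real logic lives in a React hook; this small Python model is only meant to make the idea concrete:&lt;/p&gt;

```python
# Model of the streaming accumulator: partial deltas are keyed by message
# id and finalized only on completion. Class and method names are illustrative.

class StreamAccumulator:
    def __init__(self):
        self.partial = {}

    def on_delta(self, message_id, delta):
        # Append to this message's buffer without touching other streams.
        self.partial[message_id] = self.partial.get(message_id, "") + delta

    def on_done(self, message_id):
        # Finalize and clear the buffer for this message only.
        return self.partial.pop(message_id, "")

acc = StreamAccumulator()
acc.on_delta("m1", "Hello, ")
acc.on_delta("m2", "Second ")   # a second stream arriving mid-flight
acc.on_delta("m1", "world")
final = acc.on_done("m1")
```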




&lt;h2&gt;
  
  
  Evolving from Retrieval to Agency
&lt;/h2&gt;

&lt;p&gt;The RAG implementation could answer questions, but it couldn't take actions. When visitors asked for a resume, the system could describe its contents but couldn't actually provide it.&lt;/p&gt;

&lt;p&gt;The next architectural evolution transformed the chatbot into an agent with tools. Rather than a single pipeline that always performed retrieval and generation, the system gained a decision-making layer. The AI evaluates each request and selects the appropriate tool: searching the knowledge base for informational questions, or triggering the email system for resume requests.&lt;/p&gt;
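&lt;p&gt;In the real system the model itself selects the tool; a rule-based stand-in (with hypothetical tool names) makes the control flow visible:&lt;/p&gt;

```python
# Toy version of the agent's decision layer. The production system lets the
# model choose; this rule-based router only illustrates the dispatch shape.

def search_knowledge_base(query):
    return f"searched: {query}"

def send_resume(email):
    return f"resume sent to {email}"

TOOLS = {"search": search_knowledge_base, "email_resume": send_resume}

def route(request):
    # Resume requests with a provided address trigger the email tool;
    # everything else falls back to knowledge-base search.
    if "resume" in request["text"].lower() and request.get("email"):
        return TOOLS["email_resume"](request["email"])
    return TOOLS["search"](request["text"])

result = route({"text": "Can I get your resume?", "email": "a@b.com"})
```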

&lt;p&gt;This changed the interaction model significantly. A visitor could now request a resume, provide their email address, and actually receive the document. The AI wasn't simulating an action—it was performing one.&lt;/p&gt;

&lt;p&gt;Getting the tool selection to work reliably required extensive prompt engineering. Early versions would invoke tools incorrectly, attempt searches for topics outside the knowledge base, or lose context across multiple messages in a conversation. Each failure mode required analysis to understand why the model made that decision, then prompt refinements to guide better choices.&lt;/p&gt;

&lt;p&gt;The critical improvements came from being explicit about decision criteria. Rather than hoping the model would infer when to use each tool, the system prompt provides clear conditions and examples for each scenario.&lt;/p&gt;




&lt;h2&gt;
  
  
  Addressing Security and Cost Constraints
&lt;/h2&gt;

&lt;p&gt;Traditional web applications have predictable costs. Servers run whether they receive one request or one thousand. AI applications are different—each request carries a direct cost for model inference.&lt;/p&gt;

&lt;p&gt;This creates a vulnerability that standard development practices don't address. Without proper safeguards, a malicious script could generate thousands of requests and produce substantial API bills. Even without malicious intent, bugs in client code could create request loops that drain budgets quickly.&lt;/p&gt;

&lt;p&gt;The solution involved multiple defensive layers. Rate limiting capped requests per session at sustainable levels. Input validation rejected requests that appeared designed to consume excessive tokens. Token counting before API calls enforced cost guardrails. These measures feel paranoid until you consider the consequences of not having them.&lt;/p&gt;
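&lt;p&gt;Two of those layers are easy to sketch in isolation. The limits and the four-characters-per-token heuristic below are illustrative, not the production values:&lt;/p&gt;

```python
import time

# Sketch of a per-session sliding-window rate limit plus a rough token
# estimate checked before any paid API call. All thresholds are assumptions.

WINDOW_SECONDS = 60
MAX_REQUESTS = 10
MAX_TOKENS = 1000

def allow_request(history, now=None):
    """history: mutable list of request timestamps for one session."""
    now = time.monotonic() if now is None else now
    # Drop timestamps that have aged out of the window, then check capacity.
    history[:] = [t for t in history if now - t < WINDOW_SECONDS]
    if len(history) >= MAX_REQUESTS:
        return False
    history.append(now)
    return True

def estimate_tokens(text):
    # Crude heuristic: roughly four characters per token for English text.
    return max(1, len(text) // 4)

def within_budget(text):
    return estimate_tokens(text) <= MAX_TOKENS
```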




&lt;h2&gt;
  
  
  Production Database Architecture
&lt;/h2&gt;

&lt;p&gt;The FAISS vector store served well for development but had significant limitations for production. As an in-memory store, it couldn't persist across server restarts. In serverless deployment environments, it couldn't be shared across function instances. Every cold start required regenerating embeddings—a process that consumed both time and API credits.&lt;/p&gt;

&lt;p&gt;PostgreSQL with the pgvector extension provided a production-ready alternative. Since the project already used PostgreSQL (hosted on Neon) for other persistence needs, adding vector search capability to the same database reduced infrastructure complexity. Embeddings became persistent and shared across all instances.&lt;/p&gt;
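&lt;p&gt;The schema change is small. A hypothetical version of the table and query, written as psycopg-style SQL strings; the names and the 1536 dimension (which matches OpenAI's smaller embedding models) are assumptions:&lt;/p&gt;

```python
# Hypothetical pgvector setup. Table and column names are illustrative.

CREATE_SQL = """
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE IF NOT EXISTS documents (
    id        bigserial PRIMARY KEY,
    content   text NOT NULL,
    embedding vector(1536)
);
"""

# pgvector's cosine_distance(a, b) ranks rows; smaller means more similar.
QUERY_SQL = """
SELECT content
FROM documents
ORDER BY cosine_distance(embedding, %s::vector)
LIMIT %s;
"""
```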

&lt;p&gt;Chat history presented a separate persistence challenge. Conversations needed to survive page refreshes and browser sessions. Visitors returning the next day should see their previous interactions. Redis, hosted on Upstash in a serverless configuration, provided the solution. The ephemeral nature of chat history aligned well with Redis's strengths: fast reads and writes, automatic expiration for old conversations, and minimal cost when idle.&lt;/p&gt;

&lt;p&gt;Docker Compose brought these services together for local development. A single command starts PostgreSQL and Redis with identical configuration to production. This eliminated the category of bugs that arise from development-production environment mismatches.&lt;/p&gt;




&lt;h2&gt;
  
  
  Crafting the First-Person Voice
&lt;/h2&gt;

&lt;p&gt;An unexpected challenge emerged in how the AI referred to its owner. Despite instructions to represent the portfolio owner directly, the model persistently used third-person constructions: "Roy Amit is a developer who specializes in..." rather than "I'm a developer who specializes in..."&lt;/p&gt;

&lt;p&gt;This might seem like a minor stylistic issue, but it fundamentally affected the user experience. A portfolio AI that speaks about its owner feels like a biographical assistant. One that speaks as the owner feels like a digital twin—a more personal and engaging interaction.&lt;/p&gt;

&lt;p&gt;The fix required explicit examples in the system prompt demonstrating the expected voice. Rather than abstract instructions to "speak in first person," the prompt now includes concrete examples of correct and incorrect responses for common question types. Edge cases like "Are you a real person?" received specific handling to maintain the persona without being misleading.&lt;/p&gt;
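&lt;p&gt;A condensed, illustrative version of those prompt additions (the wording here is not the production prompt):&lt;/p&gt;

```python
# Illustrative few-shot voice examples appended to the system prompt.

VOICE_EXAMPLES = """
Speak AS Roy, in the first person.

Correct:   "I'm a developer who specializes in full-stack web apps."
Incorrect: "Roy Amit is a developer who specializes in full-stack web apps."

If asked "Are you a real person?", stay in persona without misleading:
"I'm an AI assistant that represents Roy's portfolio and speaks in his voice."
"""
```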




&lt;h2&gt;
  
  
  The Final Layer: Visual Identity and Social Presence
&lt;/h2&gt;

&lt;p&gt;The last development phase focused on details that elevate a project to a product.&lt;/p&gt;

&lt;p&gt;Rich templates replaced plain text for common interactions. When visitors ask for an introduction, they receive a formatted card with a professional illustration, structured sections, and smoothly streaming text—not a plain paragraph.&lt;/p&gt;

&lt;p&gt;A branded splash screen provides a polished first impression. The portfolio's logo animates into view as the application loads, setting a professional tone before any interaction begins.&lt;/p&gt;

&lt;p&gt;Open Graph images ensure the portfolio makes a good impression when shared on social platforms. This seemingly simple requirement exposed an interesting edge case: Vercel generates unique URLs for preview deployments, and these URLs are protected by authentication. Social media crawlers couldn't access the preview images, resulting in broken cards on LinkedIn and Twitter. The solution involved configuring the application to always reference the production domain for social images, regardless of the deployment context.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The finished portfolio demonstrates an alternative to static presentation. Visitors can ask questions, explore projects through conversation, and receive documents directly. The AI provides accurate information because it retrieves from curated sources rather than generating from training data. It speaks with a consistent voice because that voice was deliberately crafted.&lt;/p&gt;

&lt;p&gt;Whether this approach suits every developer portfolio is debatable. For technical roles that value innovation and AI familiarity, the format itself communicates something about capabilities.&lt;/p&gt;

&lt;p&gt;The complete implementation is available at &lt;a href="https://royamit.vercel.app" rel="noopener noreferrer"&gt;royamit.vercel.app&lt;/a&gt;, with source code on &lt;a href="https://github.com/royamit1/Portfolio" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>portfolio</category>
      <category>ai</category>
      <category>nextjs</category>
      <category>python</category>
    </item>
    <item>
      <title>How we built "Space-Ease" using Next.js</title>
      <dc:creator>Roy amit</dc:creator>
      <pubDate>Sun, 22 Dec 2024 21:40:29 +0000</pubDate>
      <link>https://dev.to/roy_amit/space-ease-rent-your-space-park-with-ease-21bg</link>
      <guid>https://dev.to/roy_amit/space-ease-rent-your-space-park-with-ease-21bg</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Urban parking is a struggle many of us know too well. You’ve been there: circling the block endlessly, paying a fortune, or giving up and parking miles away. The idea behind Space-Ease was simple: connect drivers searching for parking with private parking owners willing to rent out their unused spaces. Turning it into a real app? That was a whole other adventure, full of trial, error, and figuring things out as we went.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Journey: From Mobile to a Web App
&lt;/h2&gt;

&lt;p&gt;Initially, we envisioned Space-Ease as a mobile app and started building it in Android Studio with Java. It felt like a natural choice, especially since we wanted something accessible on smartphones. As we kept going, we realized the process wasn’t as flexible as we’d hoped. Java made some UI interactions tricky, and building dynamic, real-time features turned out to be more challenging than we expected.&lt;/p&gt;

&lt;p&gt;Seeing these limitations, we decided to pivot to a web application. This transition gave us more freedom to create a flexible app that could work across platforms. Modern web technologies offered the adaptability we needed to experiment and improve as we went.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tackling the Backend&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When we started building the app, one of our first decisions was choosing a database. Postgres stood out right away: it’s reliable, fast, and works seamlessly with PostGIS, which was a lifesaver for running the geospatial queries we needed to find parking spots nearby.&lt;/p&gt;
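&lt;p&gt;For anyone curious what such a geospatial query looks like, here is a hypothetical radius search with PostGIS; the table and column names are made up, and ST_DWithin on geography values measures distance in metres:&lt;/p&gt;

```python
# Hypothetical PostGIS query: all spots within a radius of a point.
# Placeholders are longitude, latitude, radius in metres (psycopg style).

NEARBY_SQL = """
SELECT id, address
FROM parking_spots
WHERE ST_DWithin(
    location::geography,
    ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
    %s
);
"""
```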

&lt;p&gt;With the database sorted, the next challenge was figuring out how to work with it in the code. We initially went with Drizzle, a lightweight ORM that seemed like a good match for our needs. Its API was straightforward, and it made basic tasks like defining tables and running simple queries easy. But once we started dealing with more complex relationships and queries, things got tricky. Even following the basic documentation gave us errors, and searching online didn’t help because there just wasn’t much of a community around it. On top of that, Drizzle’s most recent release was already a few months old with no updates since, so we were effectively stuck.&lt;/p&gt;

&lt;p&gt;Eventually, we decided to switch to Prisma. It was a bit of work to migrate everything, but it turned out to be a great decision. Switching to Prisma made it easier to handle complex queries, and the auto-generated client saved us a lot of time. The detailed documentation and active community were a game-changer when we ran into issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Building the Frontend&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For the frontend, we needed something that could handle both performance and flexibility. After exploring options, we landed on Next.js. It gave us the ability to handle both server-side rendering and client-side interactivity in one framework. This hybrid setup was key in delivering a fast and responsive experience, especially when we were aiming to make the app feel seamless and intuitive.&lt;/p&gt;

&lt;p&gt;React was our choice for the UI because it made building the interface straightforward. It allowed us to design a clean, interactive layout without getting bogged down by complexity. Since we were focusing on mobile-first design, we turned to Tailwind CSS for its utility-first approach, which made styling efficient and consistent across the app. We also used Shadcn/UI to help with visual consistency and accessibility.&lt;/p&gt;

&lt;p&gt;One of the most important parts of Space-Ease is the interactive map, where users can view and manage parking spaces in real time. For this, we integrated Google Maps, which was essential for geolocation and navigation. It worked well with what we needed and helped make the map experience smooth and reliable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Managing Real-Time Updates&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Keeping parking availability data up to date was a major hurdle. Parking is always changing, and we needed a way to reflect those changes in real time without slowing the app down. After experimenting with different solutions, we turned to React Query. It provided efficient caching and synchronization, which helped keep the app responsive while making sure the data stayed fresh without adding unnecessary complexity.&lt;/p&gt;

&lt;p&gt;On the backend, Supabase was a key part of the solution. It integrated smoothly with Postgres, enabling real-time updates, which was essential for keeping the parking data accurate. It also simplified user authentication, saving us time and effort as we focused on scaling the app.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Developing Space-Ease was an iterative process filled with challenges and valuable lessons. From redefining our platform strategy to carefully selecting the right tools for each task, every decision helped shape the app into what it is today.&lt;br&gt;
What started as a way to address a personal frustration has grown into a project we are proud to share.&lt;/p&gt;

&lt;p&gt;If you’re interested, you can:&lt;br&gt;
Check out the code on &lt;a href="https://github.com/royamit1/space-ease" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;br&gt;
Try out the live app yourself on &lt;a href="https://space-ease.vercel.app/" rel="noopener noreferrer"&gt;Vercel&lt;/a&gt;.&lt;br&gt;
Watch a detailed explanation of the problem, our solution, and the app in action in this &lt;a href="https://www.youtube.com/watch?v=q1-8qjWmIoQ&amp;amp;ab_channel=RoiNir" rel="noopener noreferrer"&gt;video&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We’d love to hear your thoughts, suggestions, or even just stories about your own urban parking struggles!&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>prisma</category>
      <category>typescript</category>
      <category>database</category>
    </item>
  </channel>
</rss>
