<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Enzo Junior Vezzaro</title>
    <description>The latest articles on DEV Community by Enzo Junior Vezzaro (@enzo_junior_vezzaro).</description>
    <link>https://dev.to/enzo_junior_vezzaro</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1999340%2F08d8fa45-c76e-41da-b548-f8dd94aa6bbd.png</url>
      <title>DEV Community: Enzo Junior Vezzaro</title>
      <link>https://dev.to/enzo_junior_vezzaro</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/enzo_junior_vezzaro"/>
    <language>en</language>
    <item>
      <title>Vibe Coding Adventures: Day 1 — HealthLens AI</title>
      <dc:creator>Enzo Junior Vezzaro</dc:creator>
      <pubDate>Tue, 01 Apr 2025 12:50:49 +0000</pubDate>
      <link>https://dev.to/enzo_junior_vezzaro/-vibe-coding-adventures-day-1-healthlens-ai-7a</link>
      <guid>https://dev.to/enzo_junior_vezzaro/-vibe-coding-adventures-day-1-healthlens-ai-7a</guid>
      <description>&lt;p&gt;Here, we are creating, learning, and improving. For my first project, I decided to start with something “lite” — an AI-powered health checker for analyzing medical results.&lt;/p&gt;

&lt;p&gt;After diving deep into research and brainstorming tons of ideas, I landed on something exciting — an AI assistant that helps users understand their health exams in simple terms. This isn’t a doctor, just a smart tool that breaks down medical jargon so you actually know what your results say.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zkvi8wecwcubihydkmm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zkvi8wecwcubihydkmm.png" alt="HealthLens AI" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What was the goal of this project?
&lt;/h2&gt;

&lt;p&gt;I wanted to explore the capabilities of new AI systems in understanding unstructured data from health exams. My goal was to test the limits of these systems and see how they perform in terms of accuracy, efficiency, and even their ability to interpret images and health results.&lt;/p&gt;

&lt;p&gt;Let’s say your doctor asks you to get some tests done — like a blood test or a physical examination. When you receive the results, they’re written in complex medical jargon that’s hard to understand. That’s where this tool comes in! It doesn’t replace your doctor, but it helps you better interpret your exam results, giving you a clearer picture of what’s going on.&lt;/p&gt;

&lt;p&gt;It’s an exciting concept — not easy to pull off well, but a great way to explore the limitations and potential of AI in this field. &lt;strong&gt;Let’s dive deeper!&lt;/strong&gt; 🚀&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Let’s start building!&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As part of this series, the goal is to determine whether Vibe Coding is a viable approach to building software. And so far, I have to say — I’m pretty impressed with what I was able to accomplish in just 2–3 hours of Vibe Coding, plus another hour of improvements and refactoring.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Let’s start from the beginning.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;First, I began by prompting ChatGPT with my idea. I worked alongside ChatGPT to refine the perfect prompt, starting with broad questions and iterating until I had a final version that looked like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Develop an AI tool that allows users to upload images or documents of their medical tests (e.g., blood work, X-rays, lab reports). The AI should analyze these documents and provide clear, understandable explanations of the results. It should identify key findings, explain medical terminology, and offer personalized recommendations based on the data, such as lifestyle adjustments or next steps for treatment. The AI should act as an educational resource, helping users interpret their health diagnostics and empowering them to make informed decisions about their health.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After this first step, I used one of my favorite Vibe Coding tools, &lt;a href="https://lovable.dev/" rel="noopener noreferrer"&gt;Lovable.dev&lt;/a&gt;, which is simply incredible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hnmslcr75xathh7zo1c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hnmslcr75xathh7zo1c.png" alt="Lovable.dev interface" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By having the AI build the skeleton of the project, I learned a few things. First, this tool is fantastic for building impressive frontend applications with clean, well-structured &lt;a href="https://tailwindcss.com/" rel="noopener noreferrer"&gt;Tailwind&lt;/a&gt; CSS styling.&lt;/p&gt;

&lt;p&gt;The end result is a visually appealing interface with smooth animations and interactions, all created by the AI based on our instructions, with most of the desired functionalities already in place. However, there’s a downside — the styling itself. Apparently, you can’t choose just any style you want. The AI is heavily calibrated to use Tailwind as much as possible, giving all projects built in Lovable a similar look and feel. This is great for rapid prototyping but not very flexible for custom designs or projects with specific styling requirements. There might be ways to customize it, such as integrating Figma or using custom CSS, but I haven’t tried that yet. I’ll be posting about it in a few days.&lt;/p&gt;

&lt;p&gt;Another challenge with Lovable is that sometimes the AI simply doesn’t know what’s wrong. As the project grows in complexity, the AI struggles to understand what’s happening, especially when dealing with TypeScript-related issues.&lt;/p&gt;

&lt;p&gt;Once I reached the AI’s limits in building the application, I moved everything to GitHub and started refining the skeleton on my own.&lt;/p&gt;

&lt;h2&gt;
  
  
  It’s Time to Code!
&lt;/h2&gt;

&lt;p&gt;At this point, I had a boilerplate — a solid skeleton of my application — which I cloned onto my machine to start working on it.&lt;/p&gt;

&lt;p&gt;Since this project is primarily a learning experience for me, I didn’t spend too much time on custom work. My main goal is to understand the limitations of Vibe Coding — where the AI reaches its limits and where human intervention becomes necessary.&lt;/p&gt;

&lt;p&gt;For me, that moment came as the application grew in complexity (after about 10–15 prompts). The AI eventually hit a wall, unable to continue on its own. That’s where I had to step in and take control.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3l7660olivh7cglpyv4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3l7660olivh7cglpyv4.png" alt="VS Code" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Filling in the Missing Pieces
&lt;/h2&gt;

&lt;p&gt;At this stage, my main focus shifted to implementing the missing functionalities that would bring the project to life. These included:&lt;/p&gt;

&lt;p&gt;✅ Image Recognition — Enabling the AI to analyze uploaded medical documents and images. &lt;br&gt;
✅ Text-Based AI Interactions — Generating plain-language explanations with AI. &lt;br&gt;
✅ Multiple AI Providers — Expanding compatibility beyond OpenAI to include other providers like Groq, ensuring flexibility and better performance across different use cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integrating OpenAI’s Vision API&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first step was integrating OpenAI’s Vision API to process and analyze uploaded images. This allowed the AI to extract text and identify key visual elements from medical documents, such as blood test results, X-rays, or lab reports. The goal was to make the AI capable of understanding the content and providing a user-friendly summary.&lt;/p&gt;

&lt;p&gt;This was an exciting milestone because it transformed the project from a simple text-based assistant into a more dynamic tool that could handle real-world medical documents. Up next, I focused on enhancing AI interactions and expanding the model compatibility! 🚀&lt;/p&gt;

&lt;p&gt;This is the code for interacting with OpenAI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export const analyzeImage = async (imageFile: File): Promise&amp;lt;AnalysisData&amp;gt; =&amp;gt; {
  try {
    // Convert file to base64
    const base64Image = await fileToBase64(imageFile);

    // Send to OpenAI Vision API
    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [
        {
          role: "system",
          content: "You are a medical expert that analyzes medical documents. Extract all relevant information from the image including test names, values, reference ranges, and provide explanations in plain language. Return your analysis in a structured format."
        },
        {
          role: "user",
          content: [
            { type: "text", text: "Analyze this medical document and extract key information:" },
            { type: "image_url", image_url: { url: base64Image } }
          ]
        }
      ],
      max_tokens: 1500,
    });

    // Process the AI response
    const aiResponse = response.choices[0].message.content;
    console.log("OpenAI Response:", aiResponse);

    // For now, we'll return mock data since we need to parse the AI response
    // In a real implementation, you would parse the AI's text response into structured data
    return processMedicalData(aiResponse);
  } catch (error) {
    console.error("Error analyzing image with OpenAI:", error);
    throw new Error("Failed to analyze the document. Please try again.");
  }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This approach worked well for the image recognition part, but unfortunately, I spent too much time trying to parse the response into a usable JSON format for the app. &lt;em&gt;Ultimately, this approach wasn’t working.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Of course, this was my mistake — I wasn’t aware that OpenAI allows you to pass schemas, as explained in their &lt;a href="https://platform.openai.com/docs/guides/structured-outputs?api-mode=chat" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;. Next time, I’ll make sure to take advantage of this feature and implement it properly.&lt;/p&gt;
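&lt;p&gt;For the record, using that feature is mostly a matter of adding a &lt;code&gt;response_format&lt;/code&gt; field to the request. A sketch of what the request body could look like (the schema fields here are illustrative placeholders, not the app’s real schema):&lt;/p&gt;

```typescript
// Sketch of a Chat Completions request using structured outputs.
// The schema below is a simplified placeholder, not the app's real one.
function buildStructuredRequest(prompt: string) {
  return {
    model: "gpt-4o",
    messages: [{ role: "user", content: prompt }],
    // Constrains the reply to JSON matching the schema,
    // so no fragile string parsing is needed afterwards.
    response_format: {
      type: "json_schema",
      json_schema: {
        name: "medical_analysis",
        strict: true,
        schema: {
          type: "object",
          properties: {
            summary: { type: "string" },
            findings: { type: "array", items: { type: "string" } },
          },
          required: ["summary", "findings"],
          additionalProperties: false,
        },
      },
    },
  };
}
```

&lt;p&gt;With this in place, the message content comes back as JSON that can be parsed directly.&lt;/p&gt;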

&lt;p&gt;At some point during development, after battling with JSON formatting issues for far too long, I decided to abandon the idea and move forward to the next task: integrating &lt;strong&gt;Groq&lt;/strong&gt; (I ended up using the &lt;strong&gt;llama-3.2-11b-vision-preview&lt;/strong&gt; model for vision and &lt;strong&gt;llama-3.3-70b-versatile&lt;/strong&gt; for text, both accessed through the Groq API).&lt;/p&gt;

&lt;p&gt;This part was particularly interesting to me. I set up Groq as a separate service, making it possible to switch between providers simply by changing a boolean in the config file. However, as I started working on this integration, I quickly realized I was running into the same issue I had with OpenAI — &lt;strong&gt;JSON formatting problems&lt;/strong&gt;.&lt;/p&gt;
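&lt;p&gt;The toggle itself is nothing fancy. Roughly (the names here are hypothetical; the real code wires up the actual API clients):&lt;/p&gt;

```typescript
// Hypothetical sketch of the boolean provider toggle described above.
const config = { useGroq: true };

// Stand-ins for the two provider services; both expose the same signature.
async function analyzeWithOpenAI(image: string) {
  return "analyzed by OpenAI: " + image;
}
async function analyzeWithGroq(image: string) {
  return "analyzed by Groq: " + image;
}

// Flip config.useGroq and the rest of the app is unaffected.
const analyzeImage = config.useGroq ? analyzeWithGroq : analyzeWithOpenAI;
```

&lt;p&gt;Because both services share the same signature, the rest of the app only ever sees one analyze function and doesn’t care which provider is behind it.&lt;/p&gt;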

&lt;blockquote&gt;
&lt;p&gt;At this point, I was pretty frustrated, but never defeated.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It was time to go back to the docs. And soon enough, I found the solution: &lt;strong&gt;JSON schemas&lt;/strong&gt; — something very similar to OpenAI’s, but designed for Groq.&lt;/p&gt;

&lt;p&gt;This was a game-changer, but I had to make &lt;strong&gt;two queries&lt;/strong&gt; to get it right:&lt;/p&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;Vision&lt;/strong&gt; — Recognize and extract information from the image.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const base64Image = await fileToBase64(imageFile);

    const messagePrompt = `You are an expert medical document analyzer. Your task is to extract and structure all key findings, vital signs, and any other medically relevant data from a document that I will give you after this prompt. 

    **Ensure that all extracted information follows this structured format** and is as complete as possible:

    1. **Document Type**: Identify the type of medical document (e.g., Lab Report, Medical Summary, Radiology Report, etc.).
    2. **Date**: Extract the document's date in YYYY-MM-DD format.
    3. **Summary**: Provide a brief but clear summary of the document’s contents with all the information about the patient. Use a markdown format for this.
    4. **Findings**: Extract all measurable medical findings, including:
      - **Name** (e.g., Heart Rate, Blood Pressure, Glucose Level)
      - **Value** (e.g., 72, 120/80, 98.6)
      - **Unit** (e.g., bpm, mmHg, mg/dL, °F)
      - **Reference Range** (if available; otherwise, return "N/A")
      - **Status** (must be one of: "normal", "abnormal", "critical")
      - **Explanation** (e.g., "within normal range", "elevated", "low", "requires immediate attention")

    5. **Medical Terms &amp;amp; Definitions**: Identify any key medical terms found in the document and provide a brief explanation.
    6. **Recommendations**:
      - If the document contains specific medical recommendations, extract and include them.
      - If no explicit recommendations are present, analyze the findings and provide expert medical advice based on the data. Ensure the advice is **medically sound**, considering possible risks and follow-up actions.

    **Rules:**
    - Do not omit any relevant data. If a required value is missing, return "N/A".
    - Do not use anything outside the document. Don't add anything, just use what's on the document.
    - Ensure all findings are properly labeled and categorized.
    - Maintain a structured, machine-readable format.
    - Recommendations must be evidence-based and logically derived from the extracted findings.`

    // Prepare the Groq API request body with the system and user prompts
    const requestBody = {
      messages: [
        {
          role: "user",
          content: [
            {
              type: 'text',
              text: messagePrompt
            },
            {
              type: "image_url",
              image_url: {
                url: base64Image
              }
            }
          ]
        }
      ],
      model: "llama-3.2-11b-vision-preview",
      temperature: 1,
      max_completion_tokens: 1024,
      top_p: 1,
      stream: false,
      stop: null
    };

    // Send the image to Groq API for analysis
    const response = await axios.post(API_URL, requestBody, {
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${GROQ_VISION_API_KEY}`
      }
    });

    // Process the AI response
    const aiResponse = response.data;
    console.log("Initial Groq AI Response:", aiResponse);

Response: 

"The document you provided appears to be a **Health Insurance Physical Exam**. Here is the structured data extracted from the document:

**Document Type**: Health Insurance Physical Exam
**Date**: 1995/06/15
**Summary**: Bernie Dickerson, a 28-year-old male, had a regular physical examination. He is in good health and has no reported medical conditions.

**Findings**:

* **Name**: Blood Pressure
* **Value**: 140/90 mmHg
* **Unit**: mmHg
* **Reference Range**: N/A
* **Status**: Normal
* **Explanation**: Within normal range

* **Name**: Heart Rate
* **Value**: 78 bpm
* **Unit**: bpm
* **Reference Range**: N/A
* **Status**: Normal
* **Explanation**: Within normal range

* **Name**: Respiratory Rate
* **Value**: 16 breaths/min
* **Unit**: breaths/min
* **Reference Range**: N/A
* **Status**: Normal
* **Explanation**: Within normal range

**Medical Terms &amp;amp; Definitions**:

* **Bernie Dickerson**: The patient's name.
* **Global Health Inc.**: The insurance provider.
* **GH4567890**: The policy number.
* **GH4567890**: The medical history policy number.

**Recommendations**: None. The patient has no reported medical conditions or findings outside of the normal physiological range for his age. Therefore, no specific medical recommendations can be made. It is recommended that the patient continue with regular check-ups to monitor his health."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;2️⃣ &lt;strong&gt;Query the AI again&lt;/strong&gt; to get the correct JSON format.&lt;/p&gt;

&lt;p&gt;Once I had the initial information extracted, I made a second query to structure it properly in a usable JSON format, which allowed me to proceed with the integration on the app.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const jsonSchema = {
      type: "object",
      properties: {
        documentType: { type: "string" },
        date: { type: "string", format: "date" },
        summary: { type: "string" },
        findings: {
          type: "array",
          items: {
            type: "object",
            properties: {
              name: { type: "string" },
              value: { type: "string" },
              unit: { type: "string" },
              referenceRange: { type: "string" },
              status: { type: "string", enum: ["normal", "abnormal", "critical"] },
              explanation: { type: "string" }
            },
            required: ["name", "value", "unit", "status", "explanation"]
          }
        },
        terms: {
          type: "array",
          items: {
            type: "object",
            properties: {
              term: { type: "string" },
              definition: { type: "string" }
            },
            required: ["term", "definition"]
          }
        },
        recommendations: {
          type: "array",
          items: { type: "string" }
        }
      },
      required: ["documentType", "date", "summary", "findings"]
    };

    const jsonFormatRequest = {
      messages: [
        { 
          role: "system", 
          content: `Convert the following extracted medical data into a structured JSON format according to this schema:\n${JSON.stringify(jsonSchema)}` 
        },
        { 
          role: "user", 
          content: JSON.stringify(aiResponse) 
        }
      ],
      model: "llama-3.3-70b-versatile",
      temperature: 0,
      stream: false,
      response_format: { type: "json_object" },
    };

    const jsonResponse = await axios.post(API_URL, jsonFormatRequest, {
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${GROQ_VISION_API_KEY}`
      }
    });

    const formattedData = jsonResponse.data;
    return processMedicalData(formattedData);

{
  "documentType": "Health Insurance Physical Exam",
  "date": "1995-06-15",
  "summary": "Bernie Dickerson, a 28-year-old male, had a regular physical examination. He is in good health and has no reported medical conditions.",
  "findings": [
    {
      "name": "Blood Pressure",
      "value": "140/90",
      "unit": "mmHg",
      "referenceRange": "N/A",
      "status": "normal",
      "explanation": "Within normal range"
    },
    {
      "name": "Heart Rate",
      "value": "78",
      "unit": "bpm",
      "referenceRange": "N/A",
      "status": "normal",
      "explanation": "Within normal range"
    },
    {
      "name": "Respiratory Rate",
      "value": "16",
      "unit": "breaths/min",
      "referenceRange": "N/A",
      "status": "normal",
      "explanation": "Within normal range"
    }
  ],
  "terms": [
    {
      "term": "Bernie Dickerson",
      "definition": "The patient's name."
    },
    {
      "term": "Global Health Inc.",
      "definition": "The insurance provider."
    },
    {
      "term": "GH4567890",
      "definition": "The policy number and medical history policy number."
    }
  ],
  "recommendations": [
    "None. The patient has no reported medical conditions or findings outside of the normal physiological range for his age. Therefore, no specific medical recommendations can be made. It is recommended that the patient continue with regular check-ups to monitor his health."
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;At this point, I had everything in place to finish this project. Once I had all the AI components integrated, I just needed to make a few tweaks to glue everything together and fix some minor issues in the code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;But the end result is actually quite amazing! The way everything came together exceeded my expectations.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo80b1ynt9ka8eqsumexp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo80b1ynt9ka8eqsumexp.png" alt="Before Uploading Exams" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rb1mkpwdghtiny0rx75.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rb1mkpwdghtiny0rx75.png" alt="After Uploading Exams" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I mean, this is something I did in just a few hours, with some help from AI. I’d say that as the first day of my &lt;strong&gt;“Vibe Coding Adventures”&lt;/strong&gt;, it’s been quite a ride. I gained a lot of insight and new knowledge from this experience.&lt;/p&gt;

&lt;p&gt;First of all, I learned more about the limits of these AI systems. &lt;strong&gt;&lt;em&gt;They can’t do whatever you want, and they can’t go as far as you might hope.&lt;/em&gt;&lt;/strong&gt; There are boundaries that you need to be aware of. Second, these systems aren’t completely consistent yet — they still sometimes hallucinate. But I’m sure that with a few more tweaks to the code, and by using more powerful AI models, we can overcome this limitation. And third, AI is not magic — it’s code. Once we’re done with our AI friend, as developers, we can keep improving and growing the software as we see fit.&lt;/p&gt;

&lt;p&gt;In conclusion, this has been a pretty interesting and exciting exercise. I gained a lot of knowledge about these systems and how they’re implemented in real-world scenarios. I’ll keep increasing the difficulty of my projects to continue pushing the limits of these AIs. For now, I’m pretty impressed with what I was able to accomplish with these tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That’s all for today — peace out! ✌️&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To see the live project go to this &lt;a href="https://health-ai-vibe-coding.netlify.app/" rel="noopener noreferrer"&gt;website&lt;/a&gt;. 👨🏽‍💻&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/EnzoVezzaro/health-lens-ai" rel="noopener noreferrer"&gt;Github project&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>vibecoding</category>
      <category>softwaredevelopment</category>
      <category>ai</category>
    </item>
    <item>
      <title>Vibe Coding Adventures: 100 days of experiments</title>
      <dc:creator>Enzo Junior Vezzaro</dc:creator>
      <pubDate>Tue, 01 Apr 2025 12:49:15 +0000</pubDate>
      <link>https://dev.to/enzo_junior_vezzaro/vibe-coding-adventures-100-days-of-experiments-db1</link>
      <guid>https://dev.to/enzo_junior_vezzaro/vibe-coding-adventures-100-days-of-experiments-db1</guid>
      <description>

&lt;p&gt;&lt;strong&gt;Hi everyone,&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Lately, I’ve been absolutely fascinated by what’s happening in AI. Not only is everything moving incredibly fast, but the improvements in technology are becoming more and more noticeable.&lt;/p&gt;

&lt;p&gt;This is an exciting time to be in software engineering! 👨🏽‍💻&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Vibe Coding?
&lt;/h2&gt;

&lt;p&gt;The term “vibe coding” was introduced by Andrej Karpathy, an AI engineer formerly at Tesla and OpenAI. In early 2025, he shared his thoughts on it in a tweet that resonated with many developers:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“There’s a new kind of coding I call ‘vibe coding’, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like ‘decrease the padding on the sidebar by half’ because I’m too lazy to find it. I ‘Accept All’ always, I don’t read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while. Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away. It’s not too bad for throwaway weekend projects, but still quite amusing. I’m building a project or webapp, but it’s not really coding — I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In simpler terms, &lt;em&gt;vibe coding&lt;/em&gt; involves using AI to create software by simply describing what you want. Instead of writing code yourself, you tell the AI what you’re aiming to build, and it generates the code. If problems come up, you describe them, and the AI makes adjustments.&lt;/p&gt;

&lt;p&gt;This is not a formal coding methodology; rather, it represents a cultural shift powered by recent advances in AI. While the term may sound playful, it describes a real change: even non-programmers can now create functional software by collaborating with AI instead of coding everything from scratch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why am I trying this?
&lt;/h2&gt;

&lt;p&gt;Technology is evolving at an exponential rate, and we need to adapt. That’s why I’m diving into this experiment — to explore just how far this new way of building software can go.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It’s a new paradigm that we’ll need to adapt to.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Throughout history, humans have developed technologies to improve productivity. More often than not, these advancements become so efficient that they redefine the work itself. I believe that’s exactly what’s happening with AI: it’s a new tool (or maybe a toy, for now) that enhances our productivity, and one day, it may become the standard way to develop software.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is Vibe Coding good for software engineering?
&lt;/h2&gt;

&lt;p&gt;That depends on who you ask. Some engineers argue that it will lead to an avalanche of messy, unmaintainable code, requiring more engineers to fix issues than if the software had been built from scratch. Others see it as a way to generate a solid &lt;em&gt;boilerplate&lt;/em&gt; — a near-complete foundation that accelerates development and can be refined over time.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Both perspectives have merit.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Personally, I believe AI will eventually write better code than 90% of developers. But that doesn’t mean humans will be out of the loop. Quite the opposite — we’ll need even more people to manage, refine, and optimize the massive output these AI systems can generate. It’s just a matter of time and exponential improvement.&lt;/p&gt;

&lt;h2&gt;
  
  
  Another reason I’m doing this?
&lt;/h2&gt;

&lt;p&gt;As a software engineer with over 10 years of experience, I’ve never seen the industry as uncertain as it is now. Companies seem unsure about what AI is truly capable of or what role human developers should play in an AI-driven future.&lt;/p&gt;

&lt;p&gt;I’ve been job-hunting for five months now — longer than I’ve ever been without work since I started coding professionally at 23 (I’m 35 now). My impression? Many companies are &lt;em&gt;interviewing&lt;/em&gt; but not actually hiring. It feels like they’re creating a list of candidates while they try to figure out whether AI can replace human developers. In the meantime, they’re downsizing or keeping teams as they are, waiting to see if the AI hype is real.&lt;/p&gt;

&lt;p&gt;That’s why I started this project. I need to keep my mind busy, and I don’t want to fall into the &lt;em&gt;side-project hell&lt;/em&gt; trap again. So, I figured — why not try something new and exciting? Hopefully, I’ll learn a lot along the way, and I’ll be documenting everything on Medium so others can learn too.&lt;/p&gt;

&lt;h2&gt;
  
  
  So, What’s this project about?
&lt;/h2&gt;

&lt;p&gt;This is my attempt to showcase the power of these new AI-driven coding systems.&lt;/p&gt;

&lt;p&gt;For the next 100 days, I’ll be &lt;em&gt;Vibe Coding&lt;/em&gt; 100 apps — one per day — making small improvements as I go. Each project will be deployed, and I’ll post daily write-ups explaining what I built.&lt;/p&gt;

&lt;p&gt;It’s an exciting chance for me to experiment, grow, and dive into the cutting-edge technologies that are truly reshaping the way we build software.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let’s have some fun! 🚀&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
