<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: jimmyshoe85</title>
    <description>The latest articles on DEV Community by jimmyshoe85 (@jimmyshoe85).</description>
    <link>https://dev.to/jimmyshoe85</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2990255%2F682879e3-e33c-4ca4-9896-6de35620f4d7.png</url>
      <title>DEV Community: jimmyshoe85</title>
      <link>https://dev.to/jimmyshoe85</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jimmyshoe85"/>
    <language>en</language>
    <item>
      <title>The Fourth Leg of the Stool: JavaScript</title>
      <dc:creator>jimmyshoe85</dc:creator>
      <pubDate>Sun, 07 Sep 2025 05:40:02 +0000</pubDate>
      <link>https://dev.to/jimmyshoe85/the-fourth-leg-of-the-stool-javascript-3ggj</link>
      <guid>https://dev.to/jimmyshoe85/the-fourth-leg-of-the-stool-javascript-3ggj</guid>
      <description>&lt;p&gt;If you've stayed with me through API, Markdown, and JSON, you're almost there. The last leg is JavaScript, and I know what you're thinking. "This is where I get lost. This is where it becomes too technical for me."&lt;/p&gt;

&lt;p&gt;I get it. When I first heard the word JavaScript, I pictured lines of incomprehensible code that only engineers could understand. But here's what changed my mind: I realized I didn't need to become a programmer. I just needed to understand what JavaScript does and let the AI write it for me.&lt;/p&gt;

&lt;p&gt;Think of it this way. You know how to drive a car without understanding how the engine works, right? JavaScript is the same. You don't need to know how it works. You just need to know what it can do for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  What JavaScript Actually Does
&lt;/h2&gt;

&lt;p&gt;JavaScript is the part that makes things happen on a webpage or in your course. When you click a button and something changes on the screen, that's JavaScript working. When a form calculates a total, that's JavaScript. When feedback appears after you answer a question, that's JavaScript too.&lt;/p&gt;

&lt;p&gt;In our four-legged stool, JavaScript is the delivery driver. The API opened the door to the AI. Markdown organized your knowledge. JSON put the AI's response in neat, labeled boxes. Now JavaScript takes those boxes and puts the contents exactly where your learners can see them.&lt;/p&gt;

&lt;p&gt;The magic is this: the AI can write JavaScript for you. You just have to tell it what you want to happen.&lt;/p&gt;

&lt;h2&gt;
  
  
  Let Me Show You How Simple This Really Is
&lt;/h2&gt;

&lt;p&gt;I'm going to walk you through a real example, step by step, so you can see how this works in practice. No programming experience required.&lt;/p&gt;

&lt;p&gt;Let's say you're building a safety course about lockout/tagout procedures. You have a question where learners choose the first step in the process. Instead of showing everyone the same "incorrect" message, you want the AI to give each person feedback that addresses their specific mistake.&lt;/p&gt;

&lt;p&gt;Here's Sarah, one of your learners. She picks "Remove the energy source first" when the right answer is "Verify zero energy state first." What happens next?&lt;/p&gt;

&lt;p&gt;Behind the scenes, your course talks to the AI. It says something like: "Sarah chose to remove the energy source first, but the correct answer was to verify zero energy state first. Give her feedback that helps her understand why verification comes before removal."&lt;/p&gt;

&lt;p&gt;The AI thinks about it. It looks at your knowledge base (written in Markdown) about common lockout/tagout mistakes. It knows that people often want to jump straight to disconnecting things because that feels like progress.&lt;/p&gt;

&lt;p&gt;The AI responds with organized information (in JSON format):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Feedback:&lt;/strong&gt; "Not quite, Sarah. You identified an important step, but safety requires verifying zero energy before any removal."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hint:&lt;/strong&gt; "Think about what you need to confirm before touching any equipment."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Should she try again:&lt;/strong&gt; Yes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explanation:&lt;/strong&gt; "Verification prevents accidents from stored energy like compressed springs or residual pressure."&lt;/li&gt;
&lt;/ul&gt;
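&lt;p&gt;Seen as the raw JSON your course actually receives, that same response might look like this (the field names here are illustrative; they match whatever labels you asked the AI to use):&lt;/p&gt;

```json
{
  "feedback": "Not quite, Sarah. You identified an important step, but safety requires verifying zero energy before any removal.",
  "hint": "Think about what you need to confirm before touching any equipment.",
  "retry": true,
  "explanation": "Verification prevents accidents from stored energy like compressed springs or residual pressure."
}
```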

&lt;p&gt;Now JavaScript takes over. It reads that organized information and puts each piece where it belongs in your course. Sarah sees personalized feedback on her screen. The hint appears. The "Try Again" button shows up.&lt;/p&gt;

&lt;p&gt;Here's the beautiful part: you never wrote JavaScript. You told the AI what you wanted, and it wrote the JavaScript for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You Actually Tell the AI
&lt;/h2&gt;

&lt;p&gt;This is where it gets practical. When you need JavaScript, you talk to the AI the same way you'd talk to an assistant. You might say:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I have a course built in Storyline. When the AI gives me feedback, it comes back in a format with four pieces: feedback text, a hint, whether they should retry, and an explanation. I need to put the feedback text into a variable called vFeedback, the hint into vHint, the retry decision into vShowRetry, and the explanation into vExplanation. Can you write JavaScript code that does this?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The AI understands exactly what you need. It writes something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;updateCourseWithFeedback&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;player&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;GetPlayer&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nx"&gt;player&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;SetVar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;vFeedback&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;feedback&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;player&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;SetVar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;vHint&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;hint&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;player&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;SetVar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;vShowRetry&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;retry&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;player&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;SetVar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;vExplanation&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;explanation&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You copy that code into your authoring tool, and it works. That's it.&lt;/p&gt;
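&lt;p&gt;One detail worth knowing: the AI's reply arrives as plain text, so it gets parsed into an object before a function like the one above can use it. Here is a minimal sketch (the field names are illustrative and must match whatever you asked the AI to return):&lt;/p&gt;

```javascript
// The AI's reply arrives as a raw JSON string.
var rawReply = '{"feedback": "Not quite, Sarah.", "hint": "Verify first.", "retry": true, "explanation": "Stored energy is the hazard."}';

// Parsing turns that string into an object with named fields.
var response = JSON.parse(rawReply);

// response.feedback, response.hint, response.retry, and response.explanation
// now line up with the four SetVar calls in the function above:
// updateCourseWithFeedback(response);
```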

&lt;h2&gt;
  
  
  Why This Isn't Overwhelming
&lt;/h2&gt;

&lt;p&gt;Look at what just happened. You didn't learn to code. You didn't study syntax or memorize commands. You explained what you needed in plain English, and the AI translated that into working code.&lt;/p&gt;

&lt;p&gt;The JavaScript itself is just four lines that say "take this information and put it over there." It's like giving someone directions to your house. The AI knows how to write the directions. You just know where things should go.&lt;/p&gt;

&lt;p&gt;This is why JavaScript isn't scary anymore. It's not about becoming a programmer. It's about becoming someone who can clearly explain what they want to happen.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Real Example You Could Try This Week
&lt;/h2&gt;

&lt;p&gt;Let me give you something concrete you could actually build. Pick any course you're working on that has multiple-choice questions. Find one where you currently show the same feedback to everyone who gets it wrong.&lt;/p&gt;

&lt;p&gt;Here's what you'd do:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Tell the AI what you want. "When someone picks the wrong answer, I want you to give them feedback that explains why their specific choice was incorrect, plus a hint for their next attempt."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Set up your course variables. In Storyline, you might create vFeedback and vHint. In other tools, you'd create whatever variables you use for displaying text.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Ask the AI to write the JavaScript. "Write JavaScript code that takes feedback and hint from an AI response and puts them into my Storyline variables vFeedback and vHint."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Copy the code into your course and test it.&lt;/p&gt;

&lt;p&gt;That's a complete, working example of AI-powered personalized feedback. And you built it by having a conversation with the AI, not by learning to program.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Opens Up
&lt;/h2&gt;

&lt;p&gt;Once you see this work, you start to realize what's possible. You're not limited to the feedback messages you write by hand anymore. You can have the AI generate explanations, hints, encouragement, or even custom practice problems based on what each learner needs.&lt;/p&gt;

&lt;p&gt;You can log which learners are struggling with which concepts. You can send reports to managers. You can update learner records in your LMS. You can build dashboards that show real-time progress across your organization.&lt;/p&gt;

&lt;p&gt;But it all starts with understanding this simple truth: JavaScript moves information from one place to another. The AI can write it. You just need to know what you want to move and where you want it to go.&lt;/p&gt;

&lt;h2&gt;
  
  
  You're Ready for This
&lt;/h2&gt;

&lt;p&gt;I've watched hundreds of trainers and instructional designers cross this bridge. They start exactly where you are, thinking JavaScript is too technical. Then they try one small example. They see it work. They realize the AI is doing the heavy lifting.&lt;/p&gt;

&lt;p&gt;Sarah, the learner in our example, doesn't know there's JavaScript running behind her personalized feedback. She just knows the course feels smarter, more responsive, more helpful. That's what good technology does. It disappears into the experience.&lt;/p&gt;

&lt;p&gt;The same thing will happen for you. Once you understand that JavaScript is just the delivery mechanism, and that AI can write it for you, it stops being intimidating. It becomes a tool.&lt;/p&gt;

&lt;p&gt;You've made it through all four legs of the stool. API gave you access to AI capabilities you couldn't reach before. Markdown gave your knowledge the structure that AI needs to work with it effectively. JSON gave you a way to get predictable, organized responses from AI. And JavaScript gives you the ability to take those responses and put them exactly where your learners will see them.&lt;/p&gt;

&lt;p&gt;You're not just ready to use AI anymore. You're ready to build with it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Your Next Step
&lt;/h2&gt;

&lt;p&gt;You now understand the foundation. The question isn't whether you can do this—it's whether you're ready to stop being limited by other people's defaults and start building learning experiences that work the way you want them to.&lt;/p&gt;

&lt;p&gt;If you are, here's exactly where to begin...&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Reality Check: Are We Building for Comfort or for Change?</title>
      <dc:creator>jimmyshoe85</dc:creator>
      <pubDate>Sun, 07 Sep 2025 05:36:53 +0000</pubDate>
      <link>https://dev.to/jimmyshoe85/the-reality-check-are-we-building-for-comfort-or-for-change-480b</link>
      <guid>https://dev.to/jimmyshoe85/the-reality-check-are-we-building-for-comfort-or-for-change-480b</guid>
      <description>&lt;p&gt;Look at the adoption numbers and you can see a pattern. The LMS and SCORM are everywhere. They are the definition of mainstream. In the Guild's 2025 State of the Industry survey, more than 80 percent of organizations reported using LMS platforms as the backbone of their learning technology ecosystem. SCORM tracking was just as dominant. These systems are stable, familiar, and deeply embedded.&lt;/p&gt;

&lt;p&gt;xAPI, on the other hand, tells a different story. It was launched more than a decade ago with the promise of expanding tracking beyond SCORM. In theory, it could capture any kind of learning experience, inside or outside the LMS. In practice, Guild research shows adoption hovering around 20 percent. A majority of those who have implemented xAPI report using it only for pilot projects or limited use cases. Despite its technical strength, it has never crossed the gap into widespread use.&lt;/p&gt;

&lt;p&gt;That gap is not about whether xAPI works. It is about comfort. SCORM was familiar. LMS reporting was entrenched. xAPI required new workflows and new thinking, and the field largely resisted. The lesson is clear: technology that does not align with established habits and tools struggles to gain traction, even if it offers more capability.&lt;/p&gt;

&lt;p&gt;Now set that beside how we are using AI. To make sense of the current state, it helps to think in terms of four levels of maturity for L&amp;amp;D and marketing:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Level 1 is AI as Assistant.&lt;/strong&gt; This is the chatbot in the browser, used for brainstorming, summaries, or drafting. According to the Guild's 2025 survey, more than half of respondents experimenting with AI place it in this category, with 55 percent reporting use for content generation and 42 percent for image creation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Level 2 is AI as Personal Tools.&lt;/strong&gt; These are small helpers that fit your workflow, like an API script that auto-summarizes a transcript or a bot that tags survey results. Only a minority of respondents reported building custom tools, with figures under 20 percent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Level 3 is AI Embedded in Products.&lt;/strong&gt; The AI shows up inside the tools your learners or customers already use, such as an LMS that personalizes practice questions or a CRM that drafts emails. Guild data shows fewer than 15 percent reporting any form of AI personalization inside platforms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Level 4 is AI as Agent or Automation.&lt;/strong&gt; At this level, AI coordinates across tools, sets goals, and completes steps with minimal input. Fewer than 10 percent reported anything resembling multi-step automation.&lt;/p&gt;

&lt;p&gt;Most teams are living at Level 1, maybe reaching into Level 2. We open a chatbot to brainstorm, summarize, or draft. Some of us have built small helpers that automate routine tasks. A few vendors are starting to push Level 3, embedding AI inside tools we already know. Almost no one is building for Level 4, where agents coordinate across tools and take on work with minimal input.&lt;/p&gt;

&lt;p&gt;Here is the danger. Are we repeating the same pattern? Are we bolting AI into the tools we are most comfortable with? When we talk about AI-driven LMS dashboards, AI-powered SCORM modules, or xAPI tracking for AI activities, we are really talking about familiar moves. They feel safe. They look familiar. But those same moves kept xAPI from ever crossing the adoption gap.&lt;/p&gt;

&lt;p&gt;The truth is, AI does not need the LMS. It does not need SCORM. It does not need xAPI. Those systems were designed for tracking, compliance, and uniformity. That focus brought with it certain constraints in how we design and deploy learning. They existed because we lacked the ability to personalize, adapt, and automate at scale. Now we have that ability. Forcing AI into those containers limits what it can do.&lt;/p&gt;

&lt;p&gt;Think about the authoring tools that dominate our field, like Articulate Storyline and Adobe Captivate. They were built for a world where course design had to live inside an LMS and flow through SCORM or xAPI. But today, I can go to a service like Lovable, prompt my way to a front end, connect it to GitHub for version control, link it to Supabase for data, deploy through Vercel, secure a domain name, and even tie it to Stripe for payments. In short, I can build the equivalent of my own LMS and learning module through prompts, and get eighty percent of the way there before handing it to a developer. Do not think for a second that others are not seeing the same opening. The learning industry is primed for disruption, and companies outside our space already know it. OpenAI and Gemini are already experimenting. The writing is on the wall: whatever gatekeeping is left around developing and deploying learning will face serious challenges in the near future.&lt;/p&gt;

&lt;p&gt;The Guild's data shows that while interest in AI is high, most reported use cases fall into surface-level categories: content generation, brainstorming, and image creation. These are Level 1 activities. Very few organizations report using AI for adaptive pathways, integrated personalization, or cross-system automation. In other words, we are following the same path as xAPI—lots of potential, little adoption beyond the early experiments.&lt;/p&gt;

&lt;p&gt;We also need to acknowledge some realities. In highly regulated industries, compliance is not negotiable. Audit trails and defensible data are survival tools, and no AI system has yet proven itself ready to replace them. At the same time, adoption patterns tell us something important: organizations cling to familiar workflows. xAPI never went mainstream not because it was weak, but because it asked for too much change. Those are serious counterpoints, and they deserve attention.&lt;/p&gt;

&lt;p&gt;But even if those points are true, they do not change the larger trajectory. Comfort has logic behind it, but it is not enough. Compliance may explain why some systems remain, but it should not set the ceiling for what we can build. Adoption history shows how habits shape decisions, but it also shows how those habits can trap us. Scale and integration are real strengths of incumbents, but they are no longer exclusive. SaaS tools outside L&amp;amp;D have already shown they can grow faster and integrate more deeply than most LMS vendors.&lt;/p&gt;

&lt;p&gt;So this is the reality check. Are we preparing for Level 4 maturity, where AI acts as an agent and reshapes how learning actually happens? Or are we content to stay in Levels 2 and 3, patching AI into our existing stack because it feels safer?&lt;/p&gt;

&lt;p&gt;The adoption numbers for xAPI are a warning. Just because a technology makes sense does not mean it gets used. Adoption happens when workflows change. If we are serious about AI in L&amp;amp;D, we cannot settle for comfort. We have to design for possibility.&lt;/p&gt;

&lt;p&gt;That is why the four legs of the stool matter. API, Markdown, JSON, and JavaScript are not about keeping us inside old systems. They are about giving us the foundation to build outside them. The question is whether we will take that step.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Third Leg of the Stool: JSON</title>
      <dc:creator>jimmyshoe85</dc:creator>
      <pubDate>Sun, 07 Sep 2025 05:34:35 +0000</pubDate>
      <link>https://dev.to/jimmyshoe85/the-third-leg-of-the-stool-json-593k</link>
      <guid>https://dev.to/jimmyshoe85/the-third-leg-of-the-stool-json-593k</guid>
      <description>&lt;p&gt;By now, you've heard me talk about the four-legged stool of AI growth. We started with API, which gave us control. Then we added Markdown, which gave our knowledge structure. Now we're ready for the third leg: JSON.&lt;/p&gt;

&lt;p&gt;If the first two legs made sense to you, take a breath. JSON sounds technical, but it's not a new language you have to learn. It's simply a way of giving the AI a template to fill in, the same way you give learners a form with spaces for their name, their score, and their feedback. Think of JSON as that form written in plain text.&lt;/p&gt;

&lt;p&gt;If you've ever worked in Excel or Google Sheets, you already know the concept. Each column is a field name. Each cell holds a value. JSON is the same thing, just expressed in text so the AI can follow it exactly.&lt;/p&gt;

&lt;p&gt;Here's a simple example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Lockout/Tagout Basics"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"objective"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Identify when and how to apply LOTO procedures"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"estimated_minutes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In plain English, we've said: give me a course with a title, an objective, and an estimated time. The AI doesn't get to wander. It has to fill in those boxes. That's what we mean by structured response.&lt;/p&gt;

&lt;p&gt;Now let's bring it closer to home. In most authoring tools you've probably used variables. A variable is just a container that holds text, a number, or a yes/no value. JSON works well with variables. You tell the AI, through the API, to return its answer in a JSON format. Then your course can map each JSON field into the matching variable. Suddenly the course isn't static. The AI is filling in the variables for you.&lt;/p&gt;

&lt;p&gt;Think about a feedback screen. Normally, you'd type out every message by hand. With JSON, you can tell the AI to send back three fields: feedback, next hint, and retry (true or false). Your course reads those values and displays the right message. It's still your design. The AI is just populating the fields.&lt;/p&gt;

&lt;p&gt;Here's what that could look like in JSON:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"feedback"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Correct. You selected the right PPE for aluminum MIG."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"next_hint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Before you adjust voltage, check your wire speed."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"retry"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And in your authoring tool, those values would drop neatly into variables you already use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;vFeedback&lt;/strong&gt; (text)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;vHint&lt;/strong&gt; (text)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;vRetry&lt;/strong&gt; (true/false)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You don't have to worry about the technical connection yet. For now, just see how the fields in JSON line up one-to-one with the variables in your tool. That's the bridge. In the next step, we'll look at how to actually pass that data back and forth.&lt;/p&gt;

&lt;p&gt;This is where JSON shows its strength. Markdown let you shape ideas into clear sections. JSON goes a step further by locking those ideas into specific boxes. Because the boxes are predictable, you can reuse the output anywhere: in your course, in a database, or in a report. One format that stays consistent across uses.&lt;/p&gt;

&lt;p&gt;Here's another example outside of courses. When people write prompts for generating images or video, JSON can be used to organize the details: subject, setting, camera angle, style. Laying it out this way forces you to be mindful of the parts, and it makes it easier to scale. Want twenty consistent images? Change the subject field, keep the rest the same. The structure does the heavy lifting. &lt;/p&gt;

&lt;p&gt;For example, you might ask for an image with a simple line like: a welder standing on the factory floor with sparks flying in the background. That is freeform text. Now compare it to a structured JSON version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"subject"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"welder"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"setting"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"factory floor"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"detail"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sparks flying in background"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"composition"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"medium shot"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"camera_angle"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"eye level"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"style"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"realistic photo"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first approach gives the AI room to interpret. The second makes sure every element has a place. That is the added clarity JSON brings.&lt;/p&gt;
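&lt;p&gt;The earlier claim about twenty consistent images is easy to sketch in a few lines of JavaScript (a hypothetical helper; the field names match the JSON above):&lt;/p&gt;

```javascript
// A base prompt shared by every image; only the subject changes.
var basePrompt = {
  setting: "factory floor",
  detail: "sparks flying in background",
  composition: "medium shot",
  camera_angle: "eye level",
  style: "realistic photo"
};

var subjects = ["welder", "machinist", "electrician"];

// One structured prompt per subject, everything else held constant.
var prompts = subjects.map(function (subject) {
  return Object.assign({ subject: subject }, basePrompt);
});
```

The structure does the heavy lifting: every prompt stays consistent except the one field you chose to vary.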

&lt;p&gt;If you think about it, much of today's prompt-engineering training is doing the same thing, just not in JSON format. Frameworks like RISE, TAG, and ACE are meant to guide best practices for prompting the LLM. They provide structure. JSON expresses that same idea in a format that makes it easy for different platforms to communicate.&lt;/p&gt;

&lt;p&gt;If you understood the first two articles, you can handle this one as well. JSON is a way to tell the AI, "fill in these boxes so I can trust what comes back." It might look like code, but it's just structure. When you reach that point, your AI work shifts from trial and error to something you can rely on.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Second Leg of the Stool: Markdown</title>
      <dc:creator>jimmyshoe85</dc:creator>
      <pubDate>Sun, 07 Sep 2025 05:32:29 +0000</pubDate>
      <link>https://dev.to/jimmyshoe85/the-second-leg-of-the-stool-markdown-237f</link>
      <guid>https://dev.to/jimmyshoe85/the-second-leg-of-the-stool-markdown-237f</guid>
      <description>&lt;p&gt;When I talk about the four-legged stool of AI growth, the first leg is API. The second is Markdown. Each leg gives you more balance, more stability, and more control over how you use AI in your work.&lt;br&gt;
Markdown sounds like another piece of jargon, but it's really just a way of writing text with simple rules for structure. If you've ever outlined a Word document with headings, subheadings, body text, and bullets, you already understand the idea. Markdown does the same thing, but it does it in plain text, which makes it both easier for people to read and easier for an AI to understand.&lt;br&gt;
Here's why this matters. Most company knowledge isn't structured in a way that AI considers useful. It lives in long documents, slide decks, or emails that are hard to parse. When you move that same knowledge into Markdown, you are giving it shape. The AI can see where one idea stops and another begins. That makes its answers more accurate because it is not guessing at boundaries.&lt;br&gt;
The benefit doesn't stop there. Because Markdown is structured, you can also request that the AI return output in Markdown. Once you have that, you can convert the exact same file into a Word document, a Google Doc, or even an HTML page for a website. One source file can travel into whatever format you need.&lt;br&gt;
For example, a short piece of Markdown text like this:&lt;br&gt;
markdown# Lesson Title&lt;/p&gt;

&lt;h2&gt;
  
  
  Objective
&lt;/h2&gt;

&lt;p&gt;Explain the purpose of the training.&lt;/p&gt;

&lt;h2&gt;
  
  
  Steps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Step one&lt;/li&gt;
&lt;li&gt;Step two&lt;/li&gt;
&lt;li&gt;Step three&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This can be exported as a Word document with headings, or instantly published as a clean webpage in HTML. The structure stays the same; the presentation shifts to fit the audience.&lt;/p&gt;

&lt;p&gt;This also connects to how large language models learn from data. When knowledge is broken into pieces, the system creates embeddings, which are like fingerprints for each chunk of text. If you chop by word count alone, you risk cutting in the middle of a sentence and losing meaning. With Markdown, the chunks naturally follow the sections. That makes retrieval stronger because the boundaries make sense.&lt;/p&gt;

&lt;p&gt;So Markdown isn't just another format. It is a way to bring order to your knowledge so both you and the AI can use it more effectively. It gives you one clean source that can be reshaped into many outputs. That's the power of the second leg of the stool. And once you are steady on it, the rest of the stool begins to feel a lot more secure.&lt;/p&gt;
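&lt;p&gt;The section-based chunking described above can be sketched in a few lines of JavaScript (a simplified illustration, not a production chunker):&lt;/p&gt;

```javascript
// Sketch: split a Markdown document into chunks at its headings, so each
// embedding covers one complete section instead of an arbitrary word-count window.
var doc = [
  "# Lesson Title",
  "",
  "## Objective",
  "",
  "Explain the purpose of the training.",
  "",
  "## Steps",
  "",
  "- Step one",
  "- Step two",
  "- Step three"
].join("\n");

// Break before every "## " heading; each piece is a coherent section.
var chunks = doc.split(/\n(?=## )/);
```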

</description>
    </item>
    <item>
      <title>The First Leg of the Stool: API</title>
      <dc:creator>jimmyshoe85</dc:creator>
      <pubDate>Sun, 07 Sep 2025 05:29:56 +0000</pubDate>
      <link>https://dev.to/jimmyshoe85/the-first-leg-of-the-stool-api-2ca5</link>
      <guid>https://dev.to/jimmyshoe85/the-first-leg-of-the-stool-api-2ca5</guid>
      <description>&lt;p&gt;When people ask me how to grow in AI, I tell them it comes down to four fundamentals. Think of it as a stool. If one leg is missing, the whole thing wobbles. The legs are API, markdown files, JSON, and JavaScript. None of them are exotic or reserved for engineers. They're just the practical foundations that let you move from being a passive user to someone who can actually build with AI.&lt;/p&gt;

&lt;p&gt;Let's start with the first leg: &lt;strong&gt;API&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Here's the honest truth. When I talk to folks in L&amp;amp;D or marketing, a lot of them don't really know what an API is. They nod along politely, but if you pressed them, they couldn't explain it. And I don't blame them. Most of us didn't go into this field to learn tech jargon. So let's clear the fog.&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;API&lt;/strong&gt;, or &lt;strong&gt;Application Programming Interface&lt;/strong&gt;, is a set of rules and connections that lets two pieces of software talk to each other. It's not a program by itself, and it doesn't "do" the work. It defines the pathway so one system can request something from another and get a predictable response back. That predictability is the key. Without APIs, every system would need a custom-built translation layer, and nothing would fit together.&lt;/p&gt;

&lt;p&gt;Think about ordering food through DoorDash. The app doesn't make your meal. It passes the order, in the exact format the restaurant's system expects, and gets back a confirmation. Or take checking the weather on your phone. Your app isn't reading satellite data directly. It makes a structured request to a weather service, and that service replies in a format the app understands. That handoff happens because of an API.&lt;/p&gt;

&lt;p&gt;The same is true for AI. The chatbot you're using right now is built on top of an API. The chatbot is a polished surface designed to make things easy. The API is the real doorway into the kitchen. That's where you get the full set of ingredients, not just what's printed on the menu. In the context of learning systems, think of ChatGPT as the interface sitting between you and your LMS. It makes the experience approachable, but you're still playing with the settings someone else chose. The API, on the other hand, is your way to connect directly into that LMS and control how learning content is delivered, tracked, and adapted.&lt;/p&gt;

&lt;p&gt;And here's why that matters for us. Staying in the chatbot feels safe, but it limits what you can do. You're stuck with someone else's defaults. Once you step into the API, you start to see new possibilities. You can adjust how the AI behaves, automate repetitive work, and connect it directly into your tools. You stop bouncing between tabs and start building flows that actually match your day-to-day.&lt;/p&gt;

&lt;p&gt;I've seen what happens when people cross that bridge. They stop thinking of AI as a gimmick and start using it as part of their craft. A trainer builds a helpdesk that answers common learner questions. A marketer wires up a script that organizes survey results in minutes instead of hours. These aren't grand enterprise systems. They're simple pilots that show you what's possible once you take the first step.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Working with AI: The Four Skills That Change Everything</title>
      <dc:creator>jimmyshoe85</dc:creator>
      <pubDate>Sun, 07 Sep 2025 05:08:45 +0000</pubDate>
      <link>https://dev.to/jimmyshoe85/from-ai-user-to-ai-builder-the-four-skills-that-change-everything-3hei</link>
      <guid>https://dev.to/jimmyshoe85/from-ai-user-to-ai-builder-the-four-skills-that-change-everything-3hei</guid>
      <description>&lt;p&gt;When people ask me how to grow in AI, I see the same look in their eyes. It's excitement mixed with overwhelm. They know AI is important. They've probably tried ChatGPT or Claude, maybe even used it to brainstorm ideas or draft some content. But they're stuck in that familiar place—using someone else's tool with someone else's settings, wondering how to move beyond the basics.&lt;/p&gt;

&lt;p&gt;I get it. You open a chatbot, type in a prompt, and get back something useful. It feels like magic at first. But after a while, you start bumping into limitations. The responses aren't quite what you need. You can't save the workflow for next time. You can't connect it to your actual work systems. You're stuck copying and pasting between tabs, feeling like there should be a better way.&lt;/p&gt;

&lt;p&gt;There is a better way. But it requires shifting from being someone who uses AI to someone who builds with AI. That shift isn't as big as you think, and it doesn't require becoming a programmer.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Four-Legged Stool
&lt;/h2&gt;

&lt;p&gt;I think of AI growth like a four-legged stool. If one leg is missing, the whole thing wobbles. The legs are &lt;strong&gt;API&lt;/strong&gt;, &lt;strong&gt;Markdown&lt;/strong&gt;, &lt;strong&gt;JSON&lt;/strong&gt;, and &lt;strong&gt;JavaScript&lt;/strong&gt;. None of them are exotic or reserved for engineers. They're just the practical foundations that let you move from being a passive user to someone who can actually build with AI.&lt;/p&gt;

&lt;p&gt;Here's what each leg gives you:&lt;/p&gt;

&lt;h3&gt;
  
  
  API opens the door
&lt;/h3&gt;

&lt;p&gt;Instead of being limited to what someone else put in a chatbot interface, you get direct access to the AI's capabilities. You can control how it behaves, automate repetitive tasks, and connect it to your existing tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  Markdown brings order to your knowledge
&lt;/h3&gt;

&lt;p&gt;Most company information lives in long documents and presentations that AI struggles to parse effectively. Markdown gives your content structure that both you and AI can work with cleanly.&lt;/p&gt;

&lt;h3&gt;
  
  
  JSON gives you precision
&lt;/h3&gt;

&lt;p&gt;Instead of getting back paragraphs of text you have to interpret, you can ask AI to return information in specific, predictable formats that your systems can use directly.&lt;/p&gt;
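&lt;p&gt;For instance, you might ask the AI to reply with a JSON string like the one below, which JavaScript can parse straight into an object. The field names here are just one shape you could request; they aren't fixed by any standard.&lt;/p&gt;

```javascript
// An AI response requested in a specific JSON shape
// (these fields are hypothetical; you define the shape you want).
const aiResponse = '{"title": "Onboarding Basics", "duration_minutes": 15, "topics": ["safety", "tools", "teams"]}';

// JSON.parse turns the text into data your systems can use directly.
const lesson = JSON.parse(aiResponse);
console.log(lesson.title);         // "Onboarding Basics"
console.log(lesson.topics.length); // 3
```

&lt;p&gt;No interpreting paragraphs, no copying and pasting. The data arrives ready to use.&lt;/p&gt;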

&lt;h3&gt;
  
  
  JavaScript makes things happen
&lt;/h3&gt;

&lt;p&gt;It's the bridge between AI's responses and the places your learners or customers actually see them—whether that's in a course, on a webpage, or in an app.&lt;/p&gt;

&lt;p&gt;Together, these four skills move you from someone who prompts AI in a browser to someone who can integrate AI into real workflows. You stop bouncing between disconnected tools and start building experiences that actually match how you work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters Now
&lt;/h2&gt;

&lt;p&gt;The L&amp;amp;D and marketing fields are at a crossroads. We can keep bolting AI onto our existing systems—AI-powered LMS dashboards, smarter SCORM modules, better analytics—or we can recognize that AI opens up entirely new possibilities for how learning and engagement actually happen.&lt;/p&gt;

&lt;p&gt;I've seen what happens when people make this shift. A trainer builds a help desk that answers learner questions automatically. A marketer creates a system that personalizes email campaigns based on real behavior patterns. An instructional designer develops adaptive learning paths that adjust in real time.&lt;/p&gt;

&lt;p&gt;These aren't massive enterprise projects requiring teams of developers. They're small, practical applications built by people who learned to work directly with AI instead of through someone else's interface.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You'll Learn
&lt;/h2&gt;

&lt;p&gt;Over the next five posts, I'm going to walk you through each leg of the stool. We'll start with API because it's the foundation that unlocks everything else. Then we'll add Markdown to structure your knowledge, JSON to organize AI responses, and JavaScript to connect everything together.&lt;/p&gt;

&lt;p&gt;In the final post, I'll show you exactly how to build your first AI-powered project—something small and practical that you can complete in an afternoon but that demonstrates all four concepts working together.&lt;/p&gt;

&lt;p&gt;Each post builds on the previous ones, but I'll keep the examples concrete and the explanations grounded in real work you're probably already doing. You won't need to learn programming languages or master complex frameworks. You'll just need to understand what each tool does and how to ask AI to help you use it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who This Is For
&lt;/h2&gt;

&lt;p&gt;This series is for the overwhelmed colleague who knows AI is important but doesn't know where to start. You've probably experimented with AI tools, but you're ready to do more than just prompt and hope. You want to integrate AI into your actual work processes, not just use it for brainstorming sessions.&lt;/p&gt;

&lt;p&gt;Maybe you're in L&amp;amp;D and you're tired of building the same static courses over and over. Maybe you're in marketing and you want to personalize experiences at scale. Maybe you're in operations and you see opportunities to automate routine tasks that eat up your time.&lt;/p&gt;

&lt;p&gt;If you've ever thought "there has to be a better way to do this," and you're willing to learn four practical skills that don't require a computer science degree, this series is for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Series Will Give You
&lt;/h2&gt;

&lt;p&gt;These five posts are about removing the intimidation. AI isn't magic, and it isn't reserved for engineers. It's a set of tools that work together in predictable ways. Once you understand how those tools connect, you'll have options you don't have now.&lt;/p&gt;

&lt;p&gt;Will you become an AI expert after reading these posts? No. But you'll understand the landscape well enough to make informed decisions about what's worth pursuing and what's worth skipping. You'll know when someone is overselling a solution and when they're pointing toward something genuinely useful.&lt;/p&gt;

&lt;p&gt;Most importantly, you'll have a foundation. Whether you use it to build small automation scripts, to evaluate AI tools more effectively, or to have better conversations with technical colleagues, you'll be starting from solid ground instead of guessing.&lt;/p&gt;

&lt;p&gt;Not everyone who reads this series will become a builder. That's fine. But everyone who reads it will stop seeing AI as a black box. And that understanding, by itself, is valuable in a world where AI is increasingly part of how work gets done.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>My Real Vibe Coding Workflow: Claude Projects + Bolt + VS Code (No Cursor Required)</title>
      <dc:creator>jimmyshoe85</dc:creator>
      <pubDate>Sat, 31 May 2025 03:23:26 +0000</pubDate>
      <link>https://dev.to/jimmyshoe85/my-real-vibe-coding-workflow-claude-projects-bolt-vs-code-no-cursor-required-4863</link>
      <guid>https://dev.to/jimmyshoe85/my-real-vibe-coding-workflow-claude-projects-bolt-vs-code-no-cursor-required-4863</guid>
      <description>&lt;p&gt;After months of testing different AI coding tools, I've settled into a workflow that actually works for me—and it might work for you too, especially if you're not a traditional developer. Everyone's talking about &lt;strong&gt;Cursor&lt;/strong&gt; and &lt;strong&gt;Windsurf&lt;/strong&gt;, but here's the thing: I'm already paying $20 for Claude, so why add another subscription when I can build full-stack apps with what I already have?&lt;/p&gt;

&lt;p&gt;Let me walk you through how I built a complete wine course platform using &lt;strong&gt;Claude Projects&lt;/strong&gt;, &lt;strong&gt;Bolt&lt;/strong&gt; (free tier), and plain &lt;strong&gt;VS Code&lt;/strong&gt;. No fancy AI editors required.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Core Philosophy: Start with Requirements, Not Code
&lt;/h2&gt;

&lt;p&gt;Most people jump straight into coding when they want to build something. I've learned that starting with a solid &lt;strong&gt;Product Requirements Document (PRD)&lt;/strong&gt; saves hours of back-and-forth prompting later.&lt;/p&gt;

&lt;p&gt;For my wine course platform, I told Claude exactly what I wanted: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A landing page for three courses &lt;em&gt;(Pairing Wines 101, Seven Noble Grapes, and Wines of Piedmont)&lt;/em&gt; &lt;/li&gt;
&lt;li&gt;An accordion interface that forces sequential completion&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Detail:&lt;/strong&gt; I specifically asked for "a PRD for an AI app builder, not for a software engineer."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This distinction matters. When you tell Claude you're giving instructions to an AI tool versus a human developer, it adjusts the language and specificity accordingly. The result was a comprehensive document that both Bolt and Lovable could understand immediately.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Claude Projects for Planning
&lt;/h2&gt;

&lt;p&gt;I start every project in &lt;strong&gt;Claude Projects&lt;/strong&gt;—not because I need the knowledge base initially, but because it gives me consistent context as I develop. My first prompt is always asking for that PRD.&lt;/p&gt;

&lt;p&gt;The PRD Claude generated included everything I needed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Color schemes&lt;/li&gt;
&lt;li&gt;Accordion functionality with locked progression&lt;/li&gt;
&lt;li&gt;Assessment requirements&lt;/li&gt;
&lt;li&gt;Content management structure
&lt;/li&gt;
&lt;li&gt;User authentication needs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Having this upfront meant I could feed the same clear instructions to different tools and get consistent results.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Testing with AI Builders (Bolt vs Lovable)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxkgl0rzaplp6hagyps5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxkgl0rzaplp6hagyps5.png" alt="Hero section of wine app"&gt;&lt;/a&gt;&lt;br&gt;
Here's where my workflow differs from the typical "pick one tool and stick with it" approach. I take my PRD to both &lt;strong&gt;Bolt&lt;/strong&gt; and &lt;strong&gt;Lovable&lt;/strong&gt; to see which one nails the initial design better.&lt;/p&gt;

&lt;p&gt;For this wine course project:&lt;/p&gt;
&lt;h3&gt;
  
  
  Bolt's Results
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;✅ Cleaner UI with better visual hierarchy&lt;/li&gt;
&lt;li&gt;✅ Nice animations&lt;/li&gt;
&lt;li&gt;✅ Spot-on icons&lt;/li&gt;
&lt;li&gt;✅ More polished overall design&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Lovable's Results
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;✅ Better functionality implementation&lt;/li&gt;
&lt;li&gt;✅ More reliable sequential course logic&lt;/li&gt;
&lt;li&gt;❌ Weaker design aesthetic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Decision:&lt;/strong&gt; I chose to move forward with Bolt's version because I figured I could prompt my way to better functionality, but starting with good design as the foundation felt more valuable.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 3: Adding Authentication (The Tricky Part)
&lt;/h2&gt;

&lt;p&gt;Both tools handled &lt;strong&gt;Supabase&lt;/strong&gt; integration differently. When I prompted for authentication setup, Bolt connected to Supabase but didn't automatically create the necessary database tables. Lovable had similar issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The solution was manual database setup&lt;/strong&gt; through Supabase's SQL editor. I had to run the table creation scripts myself, but once that was done, authentication worked perfectly.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Pro tip:&lt;/strong&gt; Always check your Supabase dashboard after prompting for database integration. Don't assume the tables were created automatically.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  Step 4: Moving to GitHub and VS Code
&lt;/h2&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/HNct7eXh2Ck"&gt;
  &lt;/iframe&gt;
&lt;br&gt;
Once I had a working foundation from Bolt, I connected both versions to &lt;strong&gt;GitHub repositories&lt;/strong&gt;. This is where the real development begins.&lt;/p&gt;

&lt;p&gt;Instead of using Cursor or Windsurf, I clone the repository into VS Code using the standard Git workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Copy the GitHub repository URL&lt;/li&gt;
&lt;li&gt;Open VS Code, hit &lt;code&gt;Shift + Ctrl + P&lt;/code&gt;, select "Git: Clone"&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;npm install&lt;/code&gt; to pull in all dependencies&lt;/li&gt;
&lt;li&gt;Create the necessary &lt;code&gt;.env&lt;/code&gt; files for Supabase connection&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;npm run dev&lt;/code&gt; to start local development&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The beauty of this approach is that I'm working with &lt;strong&gt;standard tools&lt;/strong&gt; that every developer uses, not proprietary AI editors that lock me into specific workflows.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 5: Claude Projects for Ongoing Development
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6js0ylfvl1ghj6p2how6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6js0ylfvl1ghj6p2how6.png" alt="Screenshot of Claude Project connection to Github"&gt;&lt;/a&gt;&lt;br&gt;
Here's where &lt;strong&gt;Claude Projects&lt;/strong&gt; really shines. Once I have the codebase connected as project knowledge, I can have Claude:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Evaluate code quality&lt;/li&gt;
&lt;li&gt;Suggest optimizations
&lt;/li&gt;
&lt;li&gt;Help implement new features&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, when I connected my wine course repository, Claude immediately flagged potential performance issues like unnecessary re-renders and suggested image optimization strategies. Having this kind of architectural insight upfront prevents technical debt later.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Development Loop That Actually Works
&lt;/h2&gt;

&lt;p&gt;My ongoing workflow looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Make changes locally&lt;/strong&gt; in VS Code based on Claude's suggestions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test the changes&lt;/strong&gt; to ensure they work as expected&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Commit and push&lt;/strong&gt; to GitHub with descriptive commit messages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Refresh the Claude Projects knowledge&lt;/strong&gt; to pull in the latest code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Start a new thread&lt;/strong&gt; for the next feature or improvement&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; Claude doesn't have memory across threads, so keeping your project knowledge updated allows you to start fresh conversations without losing context.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  When I Add Features (Real Example: About Page)
&lt;/h2&gt;

&lt;p&gt;Let me show you exactly how I added an About page to demonstrate the workflow:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My prompt:&lt;/strong&gt; "I need an About page for the wine course platform. Make it professional but approachable, highlighting expertise in wine education."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Claude's response was perfect:&lt;/strong&gt; it gave me the complete component code AND the implementation instructions. It told me exactly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Where to create the file (&lt;code&gt;src/pages/AboutPage.tsx&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;How to add the route to &lt;code&gt;App.tsx&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key is Claude shows you both &lt;strong&gt;what to build&lt;/strong&gt; and &lt;strong&gt;how to integrate it&lt;/strong&gt;. For someone learning to code, this educational aspect is invaluable.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why This Works Better Than Cursor (For Me)
&lt;/h2&gt;

&lt;p&gt;Don't get me wrong—Cursor is powerful. But for my use case, this workflow has several advantages:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Advantage&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cost efficiency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;I'm already paying for Claude Pro. Why add another subscription?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Learning value&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Working in standard VS Code helps me understand the underlying code structure better&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Flexibility&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;I can switch between different AI tools (Claude, ChatGPT, etc.) without being locked in&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Debugging skills&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;When something breaks, I learn to read error messages and understand the codebase&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h2&gt;
  
  
  The Current State: A Working App
&lt;/h2&gt;

&lt;p&gt;After following this workflow, I have a fully functional wine course platform with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ User authentication through Supabase&lt;/li&gt;
&lt;li&gt;✅ Sequential course progression with locked content&lt;/li&gt;
&lt;li&gt;✅ Responsive design that works across devices&lt;/li&gt;
&lt;li&gt;✅ Database integration for user progress tracking&lt;/li&gt;
&lt;li&gt;✅ Clean, maintainable code structure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The app isn't perfect—there are still improvements to make around course data management and admin functionality—but it's a solid foundation that I can iterate on.&lt;/p&gt;
&lt;h2&gt;
  
  
  Who This Workflow Works For
&lt;/h2&gt;

&lt;p&gt;This approach particularly suits people who:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Want to &lt;strong&gt;learn how code actually works&lt;/strong&gt;, not just generate it&lt;/li&gt;
&lt;li&gt;Are building &lt;strong&gt;on a budget&lt;/strong&gt; with existing AI subscriptions&lt;/li&gt;
&lt;li&gt;Prefer &lt;strong&gt;understanding their stack&lt;/strong&gt; over having everything automated&lt;/li&gt;
&lt;li&gt;Need the &lt;strong&gt;flexibility to switch&lt;/strong&gt; between different AI tools&lt;/li&gt;
&lt;li&gt;Want &lt;strong&gt;standard development practices&lt;/strong&gt; that translate to team environments&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  When Cursor Might Be Better
&lt;/h3&gt;

&lt;p&gt;If you're a professional developer who values &lt;strong&gt;speed over learning&lt;/strong&gt;, Cursor might be the better fit. But if you're building side projects, learning to code, or working solo, this workflow gives you more control and understanding for less money.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Free Tool Strategy
&lt;/h2&gt;

&lt;p&gt;One final note about my approach: I use &lt;strong&gt;Bolt's free tier&lt;/strong&gt; (5 prompts) just to get the initial design and basic functionality right. Then I transfer everything to &lt;strong&gt;Claude Projects&lt;/strong&gt; where I have unlimited conversations as part of my existing subscription.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Hybrid Approach Benefits:
├── Bolt's excellent design capabilities
├── Claude's superior reasoning and code analysis  
└── No multiple premium AI coding subscriptions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This hybrid approach means I get the best of both worlds without paying for multiple premium AI coding subscriptions.&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The result?&lt;/strong&gt; I can build complete full-stack applications using tools I already pay for, with a workflow that actually teaches me something along the way.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;What's your current vibe coding setup? I'd love to hear if you've found workflows that work better for your situation.&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Quick Reference: My Tech Stack
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Claude Projects&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Planning, code analysis, ongoing development&lt;/td&gt;
&lt;td&gt;$20/month (existing)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Bolt&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Initial design and prototyping&lt;/td&gt;
&lt;td&gt;Free tier (5 prompts)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;VS Code&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Local development environment&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GitHub&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Version control and repository hosting&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Supabase&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Backend and authentication&lt;/td&gt;
&lt;td&gt;Free tier&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Key Workflow Commands
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clone repository&lt;/span&gt;
git clone &lt;span class="o"&gt;[&lt;/span&gt;repository-url]

&lt;span class="c"&gt;# Install dependencies  &lt;/span&gt;
npm &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="c"&gt;# Start development server&lt;/span&gt;
npm run dev

&lt;span class="c"&gt;# VS Code: Open command palette&lt;/span&gt;
Shift + Ctrl + P
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      &lt;title&gt;Everyone: 'Adopt AI or Get Left Behind!' Me: 'Yeah... That's Not Actually True'&lt;/title&gt;
      <dc:creator>jimmyshoe85</dc:creator>
      <pubDate>Tue, 22 Apr 2025 03:40:42 +0000</pubDate>
      <link>https://dev.to/jimmyshoe85/everyone-adopt-ai-or-get-left-behind-me-yeah-thats-not-actually-true-48ok</link>
      <guid>https://dev.to/jimmyshoe85/everyone-adopt-ai-or-get-left-behind-me-yeah-thats-not-actually-true-48ok</guid>
      <description>&lt;h1&gt;
  
  
  Balanced AI Adoption: Beyond the "Fall Behind" Narrative
&lt;/h1&gt;

&lt;p&gt;I'm an AI enthusiast, but there's one claim I hear constantly that I don't fully agree with: "Adopt AI immediately, or risk falling irreversibly behind." It sounds logical on the surface. After all, we've seen it happen in other industries. Think about car companies that stubbornly stuck to small sedans while the public flocked to SUVs. They didn't just fall behind; some nearly vanished. Similarly, consider BlackBerry's failure to transition from physical keyboards to touchscreens, ultimately losing out to more innovative competitors like Apple.&lt;/p&gt;

&lt;h2&gt;
  
  
  But I don't think the analogy neatly applies to AI. Here's why:
&lt;/h2&gt;

&lt;p&gt;First, AI isn't stable or predictable yet. What we call "game changing" today could easily become obsolete in mere months. I've seen too many videos declaring an end to OpenAI or Midjourney only to have the same YouTuber declare a resurrection with the next update. This stuff is evolving fast. Attempting to master a specific tool or framework right now may be more transient than transformative.&lt;/p&gt;

&lt;p&gt;The truth is you probably won't "fall behind" by pausing briefly. In fact, waiting strategically for the inflection point might position you far better than jumping onto every hype-driven iteration. Google, for example, wasn't the first search engine on the scene. However, by clearly recognizing what users needed from a search engine, they quickly built a superior and enduring product.&lt;/p&gt;

&lt;h2&gt;
  
  
  Instead of obsessing over specific tech stacks or rapidly shifting tools, now is the ideal time to double down on timeless, human strengths:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Build your own taste&lt;/strong&gt;. Don't stress about mastering every tool. What really matters is developing a strong point of view. AI just turns the volume up on your judgment, your creative instincts, your style. The sharper your voice, the more you'll get out of whatever tool you're using.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Figure out how you like to work.&lt;/strong&gt; Don't chase LLM releases and benchmarks. Build your workflow then experiment. Get good at prompting, remixing, and refining. That skill carries over no matter what tool you're using.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Know where AI ends and &lt;em&gt;you&lt;/em&gt; begin.&lt;/strong&gt; These tools are fast and efficient, sure. But they don't have gut instinct. They don't have taste, intuition, or a sense of timing. That's your lane. If you've vibe coded, then you have probably found out that there are times when the AI doesn't know what it is doing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  But...
&lt;/h2&gt;

&lt;p&gt;Let me be clear about something: while the fear of "falling behind" is overstated, there is value in measured experimentation. Companies that test early can discover hidden advantages, subtle challenges, and powerful use-cases that late adopters may struggle to replicate swiftly.&lt;/p&gt;

&lt;p&gt;The bottom line? Don't panic. Despite the hype, not every update is a game-changer or requires your attention. Go build something, be curious, and follow people who do interesting things with AI. AND... it's OK if you decide to have a glass of wine, watch the AI hype train pass by, and jump on later. DM me for wine suggestions.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Manufacturing Skills Gap: Current State and Future Outlook</title>
      <dc:creator>jimmyshoe85</dc:creator>
      <pubDate>Sat, 12 Apr 2025 00:16:21 +0000</pubDate>
      <link>https://dev.to/jimmyshoe85/the-manufacturing-skills-gap-current-state-and-future-outlook-2hmg</link>
      <guid>https://dev.to/jimmyshoe85/the-manufacturing-skills-gap-current-state-and-future-outlook-2hmg</guid>
      <description>&lt;h2&gt;
  
  
  Executive Summary
&lt;/h2&gt;

&lt;p&gt;The United States manufacturing sector is experiencing a persistent "skills gap" – a shortage of qualified workers for open skilled trade positions – even as the industry expands. Coming out of the pandemic, manufacturing employment has rebounded above 12.7 million (slightly higher than pre-2020 levels). Yet companies across all manufacturing sectors report difficulty finding talent, from entry-level operators to experienced technicians. &lt;/p&gt;

&lt;p&gt;This labor shortfall is not only an HR problem but an economic one: one study estimated that 2.1 million manufacturing jobs could go unfilled by 2030, potentially costing the U.S. economy $1 trillion in lost output that year alone. This report examines the current state of this skills gap in 2025, including demand vs. supply imbalances, regional and occupational impacts, and education pipeline trends, and explores how industry initiatives and new technologies are being leveraged to close the gap.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;2025: Current State of the Manufacturing Skills Gap&lt;/li&gt;
&lt;li&gt;Regional and Sector Impacts&lt;/li&gt;
&lt;li&gt;Occupations Most Impacted&lt;/li&gt;
&lt;li&gt;Talent Pipeline: Trade School and Community College Trends&lt;/li&gt;
&lt;li&gt;Attracting New Talent: Which Trades Are Easier or Harder to Fill?&lt;/li&gt;
&lt;li&gt;Industry Initiatives and Technology Strategies to Bridge the Gap&lt;/li&gt;
&lt;li&gt;Outlook to 2035: Projections and Future Scenarios&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  2025: Current State of the Manufacturing Skills Gap
&lt;/h2&gt;

&lt;p&gt;As of early 2025, U.S. manufacturing faces a significant mismatch between labor supply and demand. Job openings in manufacturing remain elevated, even after some post-pandemic cooling. For instance, there were 622,000 unfilled manufacturing jobs nationally in January 2024, and about 462,000 openings still in January 2025.&lt;/p&gt;

&lt;p&gt;This means hundreds of thousands of factory positions are going unfilled at any given time. By mid-2024 there were signs of temporary relief – July 2024 was the first month since 2021 in which the number of unemployed manufacturing workers slightly exceeded the number of job vacancies – but overall the labor market remains extremely tight.&lt;/p&gt;

&lt;p&gt;Nationwide, the ratio of unemployed persons to job openings has hovered around 0.9 (i.e. fewer than one available worker per opening) in recent months, and mid-decade demographic trends (low population growth and labor participation, plus retirements) suggest labor shortages will persist.&lt;/p&gt;

&lt;h2&gt;
  
  
  Regional and Sector Impacts
&lt;/h2&gt;

&lt;p&gt;The skills gap is widespread across the country, though some regions feel it more acutely. Virtually every state has more manufacturing jobs open than workers to fill them. In states with robust manufacturing growth and very low unemployment, the imbalance is extreme. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;New Hampshire had only 28 available workers for every 100 manufacturing job openings as of late 2023&lt;/li&gt;
&lt;li&gt;Nebraska had just 39 per 100 openings&lt;/li&gt;
&lt;li&gt;North Carolina about 55 per 100 openings&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even large states like Texas and Michigan report chronic shortages of skilled tradespeople in their manufacturing hubs. Generally, the Midwest and South – heartlands of automotive, aerospace, and heavy industry – report difficulty replacing an aging skilled workforce. At the same time, fast-growing manufacturing states in the Mountain West and Southeast (with many new factories breaking ground) are scrambling to attract enough trade workers to staff those operations.&lt;/p&gt;

&lt;p&gt;The breadth of the shortage is notable: it spans all major manufacturing sectors (from metal fabrication to food processing) and affects both durable goods and nondurable goods producers. Even after accounting for a recent dip in demand for manufactured goods in late 2024, manufacturers still cannot find enough qualified employees in many locales.&lt;/p&gt;

&lt;h2&gt;
  
  
  Occupations Most Impacted
&lt;/h2&gt;

&lt;p&gt;A range of skilled trade and production roles are in short supply. Notably, skilled maintenance technicians, welders, machinists, and industrial electricians are among the hardest positions to fill, according to industry surveys.&lt;/p&gt;

&lt;p&gt;For example, the United States faces an acute welding shortage – the average welder is nearing retirement age, and the American Welding Society estimates a gap of over 400,000 welders by the mid-2020s. This shortage is starkly illustrated by demographics: for every five welders retiring or leaving the trade, only about one new person is entering it.&lt;/p&gt;

&lt;p&gt;Similarly, employers report difficulty finding CNC machinists and tool-and-die makers, critical roles for precision manufacturing. Many veteran machinists are retiring after decades on the job, and too few younger workers are replacing them – the current shortage of CNC machine operators is roughly three times worse than it was a decade ago.&lt;/p&gt;

&lt;p&gt;Industrial maintenance mechanics (who keep complex factory equipment running) are also in high demand as plants become more automated; these roles require both mechanical know-how and digital skills, a combination that is relatively scarce. In addition, licensed electricians (needed for both facility maintenance and equipment installation) rank among the top ten hardest-to-fill occupations nationwide.&lt;/p&gt;

&lt;p&gt;Even entry-level production operators and assemblers are hard to hire in some regions. The challenge cuts across skill levels: companies need more workers in every category, "from entry-level associates to skilled production workers to engineers." In short, the entire manufacturing talent pipeline is under strain, with certain skilled trades reaching crisis levels of unfilled positions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Talent Pipeline: Trade School and Community College Trends
&lt;/h2&gt;

&lt;p&gt;A major factor in the skills gap is the pipeline of new skilled workers entering trades via education and training programs. Over the past decade, traditional training pathways for manufacturing trades have not kept pace with industry needs. Graduation data shows that while the number of bachelor's degrees in the U.S. grew significantly from 2011 to 2022, the number of associate degrees (two-year programs often tied to technical and industrial skills) was essentially flat.&lt;/p&gt;

&lt;p&gt;In other words, the output of community colleges in high-skill trades has stagnated. This is concerning because associate programs in fields like advanced manufacturing technology, welding technology, or engineering technology are key to producing technicians and mid-level skilled workers.&lt;/p&gt;

&lt;p&gt;Encouragingly, the tide may be starting to turn post-pandemic. After years of declining interest, enrollment in vocational and trade programs is rebounding. Fall 2023 saw a 16% jump in the number of students enrolled in vocational-focused programs at community colleges – reaching the highest level since tracking began in 2018. Overall, trade school enrollment grew ~4.9% from 2020 to 2023, reversing pre-2020 declines. During the same period, traditional university enrollment fell about 0.6%.&lt;/p&gt;

&lt;p&gt;However, not all skilled trade disciplines are benefitting equally from this renewed interest. Enrollment data shows that some trades programs have become much more popular than others:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Education Pathway&lt;/th&gt;
&lt;th&gt;Enrollment Trend (Post-2020)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Trade/vocational programs (overall)&lt;/td&gt;
&lt;td&gt;+4.9% (2020–2023 growth)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vocational programs @ community colleges&lt;/td&gt;
&lt;td&gt;+16% (increase in 2023 vs. 2022)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Construction trade programs&lt;/td&gt;
&lt;td&gt;+23% (2023 vs. 2022)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HVAC &amp;amp; auto maintenance programs&lt;/td&gt;
&lt;td&gt;+7% (2023 vs. 2022)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;University (4-year college) enrollment&lt;/td&gt;
&lt;td&gt;–0.6% (2020–2023 decline)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These trends show a promising uptick in interest for trades, but from a relatively low base. Prior to 2020, many skilled trade schools saw annual enrollment declines around 4%. Now there is growth, but even a few percentage points per year may not be enough to compensate for the wave of retirements hitting manufacturing.&lt;/p&gt;

&lt;p&gt;It's also notable that a few specialized sectors dominate trade education enrollment – for example, cosmetology (beauty and wellness) and healthcare technician programs together account for hundreds of thousands of students, whereas programs explicitly focused on manufacturing skills (like welding, machining, or mechatronics) have far fewer students.&lt;/p&gt;

&lt;p&gt;The bottom line: interest in skilled trades careers is rising, but the training pipeline is still not producing talent at the volume needed to meet 100% of manufacturing labor demand.&lt;/p&gt;

&lt;h2&gt;
  
  
  Attracting New Talent: Which Trades Are Easier or Harder to Fill?
&lt;/h2&gt;

&lt;p&gt;Not all skilled trades positions face the same level of recruitment challenge. Some trades are benefiting from improved perceptions and strong wage growth, making it a bit easier to entice new talent, while others continue to struggle with an image or awareness problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  Welders
&lt;/h3&gt;

&lt;p&gt;Despite high demand and decent pay, welding remains a hard-to-fill trade. The work can be physically demanding (involving heat, heavy equipment, and sometimes odd hours), and historically it has not been promoted to students as a desirable career. As a result, the average U.S. welder is in their mid-50s, and retirements are outpacing new entrants by a 5-to-1 ratio. This makes welding one of the most acute shortages.&lt;/p&gt;

&lt;h3&gt;
  
  
  Industrial Maintenance &amp;amp; Mechatronics
&lt;/h3&gt;

&lt;p&gt;This field – which includes maintaining and repairing factory machinery, robotics, and automated systems – is crucial in advanced manufacturing. It requires a blend of mechanical skill and computer/electrical know-how. Many manufacturers report maintenance techs are very difficult to hire, as the role traditionally was learned through years of on-the-job training and apprenticeships, which have declined.&lt;/p&gt;

&lt;p&gt;One advantage is that these roles are increasingly high-tech and problem-solving oriented, which can appeal to younger workers who enjoy technology. Some companies have had success attracting talent by rebranding maintenance roles as "industrial automation technician" or "mechatronics specialist," highlighting the tech aspect.&lt;/p&gt;

&lt;h3&gt;
  
  
  Electricians and Plumbers
&lt;/h3&gt;

&lt;p&gt;Skilled trades like electrical work and plumbing (often categorized under construction trades but also critical for manufacturing facilities) have generally seen strong interest from new workers relative to other trades. These occupations are well-known, offer clear apprenticeship/journeyman paths, and enjoy a reputation for good pay and job security.&lt;/p&gt;

&lt;p&gt;Despite this popularity, there is still a shortage – the construction and manufacturing boom means demand outstrips supply of licensed electricians, and a wave of retirements looms here too.&lt;/p&gt;

&lt;h3&gt;
  
  
  Machinists and Tool-and-Die Makers
&lt;/h3&gt;

&lt;p&gt;These precision manufacturing roles are among the hardest to fill, largely due to a lack of visibility. Many young people simply don't know what a machinist or toolmaker does – these jobs aren't often mentioned in high school career counseling. They also carry the outdated stigma of being "dirty factory jobs," a picture that is increasingly inaccurate: modern machining centers are often clean, high-tech environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  "New collar" tech roles
&lt;/h3&gt;

&lt;p&gt;On a positive note, manufacturers have found it easier to attract talent into some emerging roles that combine tech with hands-on work. For example, positions like robotics technician, 3D printing specialist, or CNC programmer often appeal to younger workers who have grown up with computers and electronics. These roles are often seen as cool, cutting-edge jobs in advanced manufacturing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Industry Initiatives and Technology Strategies to Bridge the Gap
&lt;/h2&gt;

&lt;p&gt;To combat the skills gap, manufacturers, educators, and policymakers are deploying a range of initiatives. Broadly, strategies fall into two categories: workforce development programs (to grow and upskill the human talent pipeline) and technology solutions (to mitigate the gap by increasing productivity or automating certain tasks).&lt;/p&gt;

&lt;h3&gt;
  
  
  Enhancing Workforce Development and Training
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Apprenticeships and Earn-and-Learn Programs:&lt;/strong&gt; Many companies have revived apprenticeship programs to train the next generation of skilled trades workers. These programs, often in partnership with unions or community colleges, allow trainees to earn a wage while learning on the job under mentorship.&lt;/p&gt;

&lt;p&gt;A notable example is the FAME program (Federation for Advanced Manufacturing Education), originally founded by Toyota and now spread to dozens of manufacturers via the Manufacturing Institute. FAME is a 2-year earn-and-learn model producing multi-skilled technicians, and it has expanded to 40+ chapters in 16 states.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Partnerships with Schools and Community Colleges:&lt;/strong&gt; Companies are increasingly collaborating with local educational institutions to align curricula with industry needs. This "talent ecosystem" approach involves manufacturers providing equipment, guest instructors, or internship opportunities to high schools, trade schools, and community colleges.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Upskilling Current Workers:&lt;/strong&gt; Retaining and retraining existing employees is another crucial tactic. Given the difficulty in hiring externally, manufacturers are focusing on upskilling their incumbent workforce – teaching new skills to current employees so they can fill more advanced roles.&lt;/p&gt;

&lt;p&gt;According to a 2024 industry survey, 56% of manufacturers said they are providing internal upskilling to help address talent shortages. Many large firms have established in-house training academies or "universities."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Improving Perception and Outreach:&lt;/strong&gt; Changing the image of manufacturing careers is a long-term strategy. Industry associations and companies sponsor outreach initiatives such as Manufacturing Day (plant tours and demos each October), social media campaigns highlighting young people in trades, and engagement with K-12 schools (like robotics competitions or skilled trades summer camps).&lt;/p&gt;

&lt;p&gt;There is also a push to diversify the talent pool: recruiting more women, minorities, and veterans into manufacturing. Programs like Women in Manufacturing and Heroes MAKE America (for transitioning military veterans) are helping broaden the workforce.&lt;/p&gt;

&lt;h3&gt;
  
  
  Embracing Automation, AI, and Advanced Technology
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Industrial Automation &amp;amp; Robotics:&lt;/strong&gt; Automating repetitive or labor-intensive tasks can alleviate the need for as many manual workers. Many manufacturers are accelerating investments in robotics – from robotic welders in fabrication shops to autonomous guided vehicles (AGVs) for material handling in warehouses.&lt;/p&gt;

&lt;p&gt;For example, faced with the welder shortage, some companies have deployed robotic welding cells that one operator can oversee, effectively doing the work of multiple welders. Similarly, machine shops are adding robotic machine tending systems to run CNC machines lights-out (overnight with minimal human intervention).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advanced Analytics and AI for Workforce Management:&lt;/strong&gt; Manufacturers are also using software to optimize how they utilize the talent they do have. Workforce management systems help with smarter scheduling, demand forecasting, and skill tracking.&lt;/p&gt;

&lt;p&gt;By digitizing skills matrices and employee certifications, companies can quickly identify internal skill gaps and coordinate training or hiring to fill them. Some are using AI-based scheduling that can accommodate workers' preferences (improving work-life balance) while ensuring shifts are covered efficiently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI, Simulation, and XR for Training:&lt;/strong&gt; Another technology approach addresses the skills gap by speeding up how quickly new or existing employees can be trained. Manufacturers are exploring extended reality (XR) – including virtual reality and augmented reality – to enhance training programs.&lt;/p&gt;

&lt;p&gt;For example, VR training modules can simulate a welding environment or a machine repair task in a safe, controlled virtual space, allowing trainees to practice and build skills faster. Augmented reality can overlay step-by-step instructions or diagrams onto a technician's field of view when servicing complex equipment, effectively lowering the skill threshold needed for certain tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Process Improvement and Intelligent Operations:&lt;/strong&gt; Beyond direct automation of labor, manufacturers are adopting Industry 4.0 technologies (IoT sensors, data analytics, predictive maintenance, etc.) to improve productivity and reduce reliance on brute-force labor.&lt;/p&gt;

&lt;p&gt;For instance, predictive maintenance systems can foresee equipment failures and schedule repairs proactively, meaning maintenance teams (which are understaffed) can work more efficiently and avoid catastrophic breakdowns that would require all hands on deck.&lt;/p&gt;

&lt;h2&gt;
  
  
  Outlook to 2035: Projections and Future Scenarios
&lt;/h2&gt;

&lt;p&gt;Looking ahead to 2035, will the manufacturing skills gap narrow, or will it persist (or even widen)? Based on current trends and projections, experts anticipate that the challenge will remain significant over the next decade, though its nature may evolve.&lt;/p&gt;

&lt;h3&gt;
  
  
  Size of the Gap
&lt;/h3&gt;

&lt;p&gt;If no major changes occur in the talent pipeline, the gap could stay extremely large. Deloitte and The Manufacturing Institute's latest talent study (2024) forecasts that the U.S. manufacturing industry will need to hire about 3.8 million workers between 2024 and 2033 to support growth and replace retirees. Of those, approximately 1.9 million jobs could remain unfilled if the industry doesn't solve the skills and applicant gaps.&lt;/p&gt;

&lt;p&gt;That implies roughly half of the openings might go unstaffed. Extrapolating to 2035, and considering the trajectory of baby boomer retirements, it's plausible that on the order of 2+ million positions in manufacturing might be unfilled a decade from now.&lt;/p&gt;

&lt;h3&gt;
  
  
  Continued Retirement Wave
&lt;/h3&gt;

&lt;p&gt;The demographic headwinds will intensify through the 2020s. Nearly one-third of the manufacturing workforce is 55+ today. By 2035, those individuals will be 65-75 years old – most will have retired. That represents millions of skilled tradespeople exiting.&lt;/p&gt;

&lt;p&gt;For perspective, Deloitte's analysis attributed 2.8 million of the 2024–2033 job openings to retirements alone. We can expect a similar magnitude of retirements in the early 2030s. This will especially hit skilled trades that skew older (e.g. many tool &amp;amp; die makers, master welders, senior maintenance techs).&lt;/p&gt;

&lt;h3&gt;
  
  
  Impact of Technology by 2035
&lt;/h3&gt;

&lt;p&gt;The next decade will likely see accelerated automation in manufacturing, partly out of necessity. By 2035, we can expect that many rote and physically intensive tasks in factories will be largely automated or assisted by machines. This could alleviate shortages in some entry-level roles.&lt;/p&gt;

&lt;p&gt;However, new skills gaps may emerge. The demand for human labor will shift toward overseeing and maintaining automated systems, programming machines, and other higher-skilled roles. This could create a gap in high-tech skills – e.g. not enough robotics technicians or data analysts for smart factories – if training doesn't keep up.&lt;/p&gt;

&lt;p&gt;We might also see more AI-driven worker augmentation: by 2035, AI copilots could be common on the shop floor (for instance, an AI assistant that a technician can consult for repair guidance). This would lower skill barriers for certain jobs, potentially allowing less-experienced workers to perform tasks that used to require 20-year veterans.&lt;/p&gt;

&lt;h3&gt;
  
  
  Education and Pipeline in 10 Years
&lt;/h3&gt;

&lt;p&gt;The recent uptick in trade school enrollment is an encouraging sign for 2035. If that momentum continues, the pipeline of new tradespeople should improve. Let's say trade/vocational program enrollment grows 5-7% annually – by 2030, that could produce a significantly larger graduating class of welders, machinists, etc.&lt;/p&gt;

&lt;p&gt;There is also a broader trend toward "skills-based hiring" in the economy, which might benefit trades. More emphasis on apprenticeships, certificates, and on-the-job skills (versus degrees) could channel more people into vocational paths.&lt;/p&gt;

&lt;p&gt;Overall, most analysts believe the manufacturing talent pipeline will improve incrementally, but not explosively, by 2035. Thus there will likely still be a shortfall, albeit maybe a somewhat reduced one if enrollment and training trends stay positive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As we approach 2035, the manufacturing skills gap is expected to persist as a critical challenge. Current forecasts suggest that, absent major changes, millions of manufacturing jobs will remain unfilled over the next decade.&lt;/p&gt;

&lt;p&gt;The gap will be driven largely by retirements outpacing new entrants, alongside the creation of new roles from industry growth and reshoring efforts. Automation and AI will help mitigate the shortfall but will also shift the skills needed from hands-on trades to more tech-centric roles.&lt;/p&gt;

&lt;p&gt;The hope is that the many initiatives now in motion – from revitalizing trade education, to apprenticeship expansions, to aggressive technology adoption – will bear fruit over the coming decade. If they do, the gap in 2035 could be smaller in proportion than it is in 2025, allowing U.S. manufacturing to sustain its growth.&lt;/p&gt;

&lt;p&gt;If they don't, the industry could be constrained by labor shortages, potentially missing out on productive capacity and economic gains (recall the projected $1 trillion cost of the gap by 2030 if unaddressed). In the years leading up to 2035, we can expect to see even more creative solutions aimed at finally closing the skills gap and securing the future of American manufacturing.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Master These Basics Before You Vibe Code With AI</title>
      <dc:creator>jimmyshoe85</dc:creator>
      <pubDate>Fri, 04 Apr 2025 01:27:07 +0000</pubDate>
      <link>https://dev.to/jimmyshoe85/master-these-basics-before-you-vibe-code-with-ai-11g</link>
      <guid>https://dev.to/jimmyshoe85/master-these-basics-before-you-vibe-code-with-ai-11g</guid>
      <description>&lt;p&gt;Vibe coding is all the rage, and honestly, I don't know if the term makes sense, but maybe I'm not familiar enough with the origin of it to make the connection. However, based on my understanding, it is coding with AI with no concern for the technical details and focus on the creative aspect of developing. Sounds amazing until you put it into practice.&lt;/p&gt;

&lt;p&gt;Experienced developers understand this, which is why vibe coding can work for them. But if you're like me (a non-developer), you need a few basic tools in place before you go off "vibing with the AI." Otherwise it's a recipe for disaster, and I have a graveyard folder full of unfinished projects that Windsurf destroyed. I abandoned them because I hadn't yet suffered enough to build a foundation first.&lt;/p&gt;

&lt;p&gt;What follows is the foundation I use to code with AI. I'm one of those people who likes to work in Claude or ChatGPT and copy/paste into VS Code. If I use Bolt or Lovable, I get to a certain point, then export to GitHub and finish everything with a chatbot and copy/paste. This works for me because I don't mind putzing through the code to find the lines to update. But I firmly believe that if you're not familiar with code, the following guidelines are the best place to start if you want to code with AI.&lt;/p&gt;




&lt;h2&gt;
  
  
  Give the LLM Some Ground Rules
&lt;/h2&gt;

&lt;p&gt;One of the first things I discovered is that large language models (LLMs) love to get overly elaborate. If you don't give them guardrails, they'll produce complicated, bloated code that can leave you wondering how it all went off track. So I set my own "Code Guidelines for Collaboration."&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Start Simple&lt;/strong&gt;: Don't demand a Cadillac of features when a bicycle will do.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stay Under 200 Lines&lt;/strong&gt;: I shoot for files under 200 lines of code, with small, focused functions, when possible.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Skip the Fancy Frameworks&lt;/strong&gt;: Vanilla JavaScript, HTML, and CSS can often get the job done just fine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When the LLM has a clear outline—like "No frameworks," "Keep functions small," or "Explain your choices"—it's less likely to go off on a tangent and produce a giant mess.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Actionable Insight&lt;/strong&gt;: Before you even start generating code, write your own "rules of engagement." It's like giving the AI a map of where you do and don't want to go. You will learn the word "refactoring." And you will discover how bad AI is at doing it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Embrace a Little Terminal Knowledge
&lt;/h2&gt;

&lt;p&gt;Yes, you can use tools like Cursor to automate basic terminal commands. But do yourself a favor: learn enough to set up a Python environment, run npm install for JavaScript packages, and push to a Git remote. Doing these steps manually a few times helps you understand what's happening behind the scenes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Basic Terminal Commands&lt;/strong&gt;: Know how to navigate folders, create files, and run scripts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Python Environments&lt;/strong&gt;: If you're using Python, venv and pip install are commands you will use every time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Git and GitHub&lt;/strong&gt;: Branching, committing, and reverting are not optional skills. Trust me, AI can break code faster than you can blink, so version control will be your safety net.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Actionable Insight&lt;/strong&gt;: For your next mini-project, do all the setup yourself. It'll build your confidence for when you really need to fix something at the command line.&lt;/p&gt;
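&lt;p&gt;As a rough sketch of what that first-time setup looks like at the terminal (the folder name is just an example, and the Windows activation path differs as noted):&lt;/p&gt;

```shell
# Create a project folder and step inside it
mkdir my-first-app
cd my-first-app

# Python: create and activate an isolated environment for this project
python3 -m venv .venv
. .venv/bin/activate            # on Windows: .venv\Scripts\activate
python -m pip --version         # pip now points inside .venv

# Git: start tracking the project from day one
git init
git status                      # see what Git sees
```

&lt;p&gt;From here, pip install and npm install put packages where this project (and only this project) can see them.&lt;/p&gt;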




&lt;h2&gt;
  
  
  Version Control Will Save Your Sanity
&lt;/h2&gt;

&lt;p&gt;I can't stress this enough: always, always use Git. AI might be your best friend, but it's also prone to wrecking a working app with a single "misinterpreted" line of code. If you have a clean commit history, you can revert to a stable version without redoing all your work.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new branch for each new feature or experiment.&lt;/li&gt;
&lt;li&gt;Merge back to main only when you're confident.&lt;/li&gt;
&lt;li&gt;Don't forget to push frequently—tools like GitHub, GitLab, or Bitbucket will handle the rest.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Actionable Insight&lt;/strong&gt;: Even if you're a solo coder, treat your project like a team effort. Branch, commit, merge. You'll thank yourself later.&lt;/p&gt;
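&lt;p&gt;The branch-commit-merge loop above can be sketched like this; the repo, file, and branch names are throwaway examples so the commands have something to act on:&lt;/p&gt;

```shell
# One-time demo setup: a tiny repo with a first commit
mkdir demo-app
cd demo-app
git init
git config user.email "you@example.com"
git config user.name "You"
echo "hello" > index.html
git add .
git commit -m "Initial commit"
git branch -M main              # make sure the default branch is called main

# 1. Branch for each new feature or experiment
git checkout -b feature/contact-form
echo "form goes here" >> index.html
git add .
git commit -m "Add contact form"

# 2. Merge back to main only when you're confident
git checkout main
git merge feature/contact-form

# 3. If the AI wrecked the branch instead, just throw it away:
#    git checkout main
#    git branch -D feature/contact-form
```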




&lt;h2&gt;
  
  
  Skip the Frameworks (At First)
&lt;/h2&gt;

&lt;p&gt;React, Vue, Svelte, or Angular—these are all awesome for large-scale applications. But if you're just starting to dip your toes into coding, frameworks can add layers of complexity you don't need.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Vanilla JavaScript&lt;/strong&gt;: Perfect for learning the ropes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node.js or Python for the backend&lt;/strong&gt;: Both are solid, widely used, and there's a ton of AI-generated help available.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTML and CSS&lt;/strong&gt;: Don't overlook the basics. A well-structured, simple page is often all you need.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Actionable Insight&lt;/strong&gt;: Try building a quick one-page app in plain HTML/JS first. If you really need a framework later, you'll have a better idea why and how to use it.&lt;/p&gt;
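&lt;p&gt;To make "plain HTML/JS first" concrete, here is a minimal to-do list with no framework at all: state in a plain array, one render function, and nothing else. The element IDs are made up for the example, and the logic is kept in a pure function so it's easy to test and easy for an AI to modify safely.&lt;/p&gt;

```javascript
// State: just a plain array of strings
let todos = [];

// Pure logic: no DOM involved, so it can be tested on its own
function addTodo(list, text) {
  const trimmed = text.trim();
  if (trimmed === "") return list;   // ignore empty input
  return list.concat(trimmed);       // return a new array, don't mutate
}

// Rendering: rebuild the visible list from state (browser only)
function render() {
  const ul = document.getElementById("todo-list");
  ul.innerHTML = "";
  for (const t of todos) {
    const li = document.createElement("li");
    li.textContent = t;
    ul.appendChild(li);
  }
}

// Wire up the form only when we're actually on a page
if (typeof document !== "undefined") {
  document.getElementById("todo-form").addEventListener("submit", (e) => {
    e.preventDefault();
    const input = document.getElementById("todo-input");
    todos = addTodo(todos, input.value);
    input.value = "";
    render();
  });
}
```

&lt;p&gt;If you later decide you need React or Vue, this same state-plus-render idea maps straight onto how those frameworks think.&lt;/p&gt;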




&lt;h2&gt;
  
  
  Minimal Backend: Supabase, Airtable, or an S3 Bucket
&lt;/h2&gt;

&lt;p&gt;When you're ready to store data, skip the mammoth databases—just start with something lightweight and approachable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Supabase&lt;/strong&gt;: Great if you're comfortable with Postgres-like databases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Airtable&lt;/strong&gt;: Offers a simple spreadsheet-like interface and an accessible API.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 Bucket&lt;/strong&gt;: A bit tougher to configure, but a good skill to have under your belt for object storage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Actionable Insight&lt;/strong&gt;: Pick one service, set up a small test project (like a simple contact form), and get comfortable with how to read/write data. You'll be a lot more confident once you see it in action.&lt;/p&gt;
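&lt;p&gt;To show what "read/write data" looks like in practice, here is a sketch against Airtable's REST API, which reads records with a GET request to a base/table URL plus a bearer token. The base ID, table name, and key below are placeholders, and the network call itself is left commented out so no real credentials are needed:&lt;/p&gt;

```javascript
// Build the request details for reading rows from an Airtable table.
function airtableRequest(baseId, tableName, apiKey) {
  return {
    url: "https://api.airtable.com/v0/" + baseId + "/" + encodeURIComponent(tableName),
    options: {
      method: "GET",
      headers: { Authorization: "Bearer " + apiKey },
    },
  };
}

// Placeholder values; in a real app the key belongs on a server, never in the page
const req = airtableRequest("appXXXXXXXXXXXXXX", "Contacts", "MY_SECRET_KEY");

// In the browser or Node 18+, fetching the rows is then one call:
// fetch(req.url, req.options)
//   .then((res) => res.json())
//   .then((data) => console.log(data.records));
```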




&lt;h2&gt;
  
  
  Use a Serverless Function to Protect API Keys
&lt;/h2&gt;

&lt;p&gt;For anyone working in tools like Rise, Storyline, or Captivate, you might want to integrate AI. But you can't just paste your API key directly into your eLearning course code—somebody could easily steal it. That's where a serverless function (like Vercel's serverless endpoints) or a minimal backend in Python on Render.com comes in handy.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Vercel&lt;/strong&gt;: Seamlessly hosts Node.js projects, auto-deploys from GitHub, and pairs well with Next.js if you ever decide to scale up.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Render.com&lt;/strong&gt;: A no-nonsense option for Python. Similar push-to-deploy model.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Actionable Insight&lt;/strong&gt;: Follow a basic tutorial on spinning up a serverless function. Even if you don't get everything right the first time, it's a skill that immediately pays off—especially when you want to keep private keys private.&lt;/p&gt;
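&lt;p&gt;Here is a minimal sketch of what such a serverless function can look like in a Vercel-style Node.js project. The file path, environment variable name, and upstream URL are all placeholders; the actual AI call is commented out and replaced with a stub so the shape of the pattern is visible without a real key:&lt;/p&gt;

```javascript
// api/chat.js - a Vercel-style serverless function (sketch).
// The secret lives in an environment variable on the server,
// so it never appears in your eLearning course's client code.
async function handler(req, res) {
  if (req.method !== "POST") {
    return res.status(405).json({ error: "Use POST" });
  }

  const apiKey = process.env.MY_AI_API_KEY; // set in the hosting dashboard
  if (!apiKey) {
    return res.status(500).json({ error: "Server key not configured" });
  }

  // Forward the user's prompt to the AI provider (URL is a placeholder):
  // const upstream = await fetch("https://api.example.com/v1/chat", {
  //   method: "POST",
  //   headers: { Authorization: "Bearer " + apiKey },
  //   body: JSON.stringify({ prompt: req.body.prompt }),
  // });
  // const data = await upstream.json();

  const data = { reply: "stubbed response" }; // stand-in so the sketch runs
  return res.status(200).json(data);
}

module.exports = handler;
```

&lt;p&gt;Your course code then POSTs to /api/chat on your own domain, and the key never leaves the server.&lt;/p&gt;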




&lt;h2&gt;
  
  
  Hosting Made Easy
&lt;/h2&gt;

&lt;p&gt;Once you get the hang of committing to GitHub, hooking up a host is a breeze:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Vercel&lt;/strong&gt;: Perfect for Node.js. Commits to main automatically trigger deploys.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Render.com&lt;/strong&gt;: Great if you need a Python backend. Also monitors your repo for updates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Actionable Insight&lt;/strong&gt;: Don't wait for a big project. Deploy small test apps and get comfortable with the workflow. The confidence boost from seeing your app live on the internet is huge.&lt;/p&gt;




&lt;h2&gt;
  
  
  Practice Makes It Less Overwhelming
&lt;/h2&gt;

&lt;p&gt;This might seem like a lot. But after five or six mini-projects, you'll see the pattern:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Spin up a simple prototype with vanilla tech.&lt;/li&gt;
&lt;li&gt;Add a dash of Node.js or Python on the backend (or a serverless function).&lt;/li&gt;
&lt;li&gt;Keep everything in Git with frequent commits.&lt;/li&gt;
&lt;li&gt;Deploy to Vercel or Render.com.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Actionable Insight&lt;/strong&gt;: Commit to creating a handful of personal or hobby projects, like a small to-do app, a simple chatbot, or a mini eLearning module. Repetition cements the workflow.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;The promise of 'vibe coding' sounds amazing. Until you put it into practice. AI can do incredible things, but it won't magically produce production-ready apps just because you write good prompts. Without understanding basic architecture or how to guide the AI, you'll end up with unfinished projects. Just like my graveyard folder.&lt;/p&gt;

&lt;p&gt;What separates successful AI-assisted development from frustrating dead ends isn't better prompts. It's having enough foundational knowledge to recognize when the AI is heading down a problematic path.&lt;/p&gt;

&lt;p&gt;These basic skills will transform your relationship with AI. Version control saves your sanity. Terminal commands give you independence. Simple architectures provide structure. The AI works for you, not the other way around. And like any employee, it needs clear direction from someone who knows where they're going.&lt;/p&gt;

&lt;p&gt;So before you vibe code, build your foundation. Your future self will thank you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources to Get Started
&lt;/h2&gt;

&lt;p&gt;If you'd like to see these principles in action, I've created a few helpful resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://project-reports2.vercel.app/" rel="noopener noreferrer"&gt;Project Showcase&lt;/a&gt; - See examples of what I've built using this approach&lt;/li&gt;
&lt;li&gt;Reading: &lt;a href="https://www.hatica.io/blog/git-commands-for-developers/" rel="noopener noreferrer"&gt;Git Commands for Developers&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These real-world examples demonstrate that you don't need a computer science degree to create functional applications with AI assistance.&lt;/p&gt;

</description>
      <category>beginners</category>
    </item>
    <item>
      <title>Me vs GPT-4o Image Generation</title>
      <dc:creator>jimmyshoe85</dc:creator>
      <pubDate>Sat, 29 Mar 2025 22:50:56 +0000</pubDate>
      <link>https://dev.to/jimmyshoe85/me-vs-gpt-4o-image-generation-1ij</link>
      <guid>https://dev.to/jimmyshoe85/me-vs-gpt-4o-image-generation-1ij</guid>
      <description>&lt;h1&gt;
  
  
  Can GPT-4o Replace a Human in Photoshop?
&lt;/h1&gt;

&lt;p&gt;Once again you find me going up against new technology. GPT-4o dropped this week, and let’s be honest—it’s impressive. Fast, multimodal, and now capable of generating incredible images, most impressively images with flawless text. It can recreate art styles and remix photos.&lt;/p&gt;

&lt;p&gt;But I had a question about Photoshop-style manipulation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can GPT-4o take raw, mismatched source images and create a cinematic, story-rich composite as well as a human who knows how to work in Photoshop?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There is a lot that goes into photomanipulation: lighting logic, composition, perspective, edge blending, ambient detail. And I wanted to see if GPT-4o, with all its intelligence, could replicate the nuanced decision-making that happens when you build a complex image by hand.&lt;/p&gt;

&lt;p&gt;So I made a sci-fi abduction scene. Built it manually in Photoshop from a pile of parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A grimy urban alleyway
&lt;/li&gt;
&lt;li&gt;A 70s-era rusted-out car
&lt;/li&gt;
&lt;li&gt;A woman mid-jump, arms open
&lt;/li&gt;
&lt;li&gt;A trash pile, a few flickering lights, and a classic flying saucer
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Image 1: Detailed Instructions
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsewnekogbg9iapdxcgt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsewnekogbg9iapdxcgt.png" alt="GPT-4o generated with detailed prompt" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt&lt;/strong&gt;: Scene Overview&lt;br&gt;&lt;br&gt;
A dramatic urban night scene set in a dark, narrow alley. The atmosphere is eerie and cinematic, with a strong contrast between shadows and a vibrant beam of light. A human figure is being abducted by a UFO, caught mid-air in a glowing tractor beam.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Environment &amp;amp; Setting&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Location&lt;/strong&gt;: Gritty alleyway between brick buildings with wet, grimy pavement
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time of Day&lt;/strong&gt;: Nighttime, dimly lit except for a prominent blue beam of light
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lighting&lt;/strong&gt;: Blue-white spotlight from the UFO above casts a circular glow around the abductee. Surroundings are illuminated in cyan and teal hues, with orange light spill near the garage and right-side wall
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Foreground Details&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Abductee&lt;/strong&gt;: Woman dressed in black athletic wear, barefoot, levitating mid-air
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Street&lt;/strong&gt;: Broken pavement, puddles reflecting light, scattered trash
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Car&lt;/strong&gt;: Old rusty with broken headlights, graffiti reads “DOPE$”
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Background&lt;/strong&gt;: Brick buildings, garage doors, utility wires, "children at play" sign
&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Lighting Effects&lt;/strong&gt;: Beam cuts through mist and darkness, detailed reflections
&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;UFO Design&lt;/strong&gt;: Classic saucer-style, ring of blue lights underneath, metallic finish
&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Color Palette&lt;/strong&gt;: Cool tones—teal, cyan, electric blue
&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  GPT-4o Output
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;What it got right:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The beam lighting is genuinely well done—nice rim light, bounce on the ground, and glow on surrounding surfaces
&lt;/li&gt;
&lt;li&gt;Floating papers were a great inferred touch
&lt;/li&gt;
&lt;li&gt;Pose of the woman feels natural and cinematic
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;But here’s the issue:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Depth&lt;/strong&gt;: The alley flattens out fast—more like a set than a real place
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scale&lt;/strong&gt;: The UFO feels small and unthreatening, proportions are slightly off
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Image 2: Limited Instructions + Blue Grade
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fox7hyorwnrcnmr4xdkne.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fox7hyorwnrcnmr4xdkne.png" alt="GPT-4ogenerated with minimal prompt" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt&lt;/strong&gt;:  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Use the images to create a dramatic scene of an UFO abduction... make it look like a sci-fi movie scene with dramatic lighting and a blue color grade film&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  GPT-4o Output
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;What it nailed:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Genuinely cinematic
&lt;/li&gt;
&lt;li&gt;Great use of color contrast—amber light vs. cyan beam
&lt;/li&gt;
&lt;li&gt;Trash, car, and signage were included
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Where it loses me:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scale&lt;/strong&gt;: UFO too close and too small, subject too large for the space
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Depth&lt;/strong&gt;: Better than the first, but still more like a backdrop
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Image 3: Vague Prompt / Just Scene Description
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fakwnfuw2bcwjwo9k5y91.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fakwnfuw2bcwjwo9k5y91.png" alt="Image description" width="800" height="1200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt&lt;/strong&gt;:  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Use these images to create a cinematic sci-fi scene of an alien abduction&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  GPT-4o Output
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;What it got right:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Best sense of place—buildings visible in the background, lived-in environment
&lt;/li&gt;
&lt;li&gt;Strong composition and balance
&lt;/li&gt;
&lt;li&gt;Effective lighting and atmospheric balance
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;But then...&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scale&lt;/strong&gt;: Once again, the UFO feels too compact
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration&lt;/strong&gt;: The abductee isn’t color-matched or lit properly to fit the environment
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Verdict
&lt;/h2&gt;

&lt;p&gt;The obvious strength of OpenAI’s new model is its understanding of language. That’s what really separates GPT-4o from the rest right now. I ran the exact same prompt across Recraft, MidJourney, and Flux—and while &lt;strong&gt;Flux&lt;/strong&gt; came closest, &lt;strong&gt;none&lt;/strong&gt; matched the scene comprehension or compositional awareness that GPT-4o delivered.&lt;/p&gt;

&lt;p&gt;Yes, speed and rate limits are still a thing. But I expect that to smooth out as OpenAI scales capacity. What’s more important is that GPT-4o image generation actually feels like the version of AI art we’ve been waiting for—where visual storytelling and language finally start to merge in a meaningful way.&lt;/p&gt;




&lt;h2&gt;
  
  
  My AI Image Gen Wishlist
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scene-aware Storyboarding&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
I want to prompt across scenes, like building a storyboard. Let me describe 5 different shots and generate them in sequence while keeping consistency in the setting, lighting, and tone.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Character Anchoring&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Give me a way to define a character once—through text, image, or a quick builder, and then just use an &lt;code&gt;@name&lt;/code&gt; tag to drop them into new scenes. No more re-describing facial features or outfits every time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Personal Style Library&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Let me upload reference images and train a mini-style model. I should be able to say “Use my noir style” or “Give this the same tone as my cyberpunk alley series.” Consistent tone and grade shouldn’t have to be reinvented with every prompt.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Overall, I'm good with this model.&lt;/p&gt;

&lt;p&gt;And just for laughs, here are the outputs from other image models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Flux 1.1&lt;/strong&gt; – Closest to GPT-4o&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhisgcl9mkm3sanv5mz81.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhisgcl9mkm3sanv5mz81.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Recraft&lt;/strong&gt; – Pure trash output
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70w1k1stjonui0ym686e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70w1k1stjonui0ym686e.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MidJourney&lt;/strong&gt; – Best composition, but colors slightly off&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgo8vffzaulmpjhdur79c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgo8vffzaulmpjhdur79c.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
<title>OpenAI's Responses API with Structured Output (Integrating it with Make.com)</title>
      <dc:creator>jimmyshoe85</dc:creator>
      <pubDate>Sat, 29 Mar 2025 19:13:53 +0000</pubDate>
      <link>https://dev.to/jimmyshoe85/openais-responses-api-with-structured-ouput-integrating-it-with-makecom-43da</link>
      <guid>https://dev.to/jimmyshoe85/openais-responses-api-with-structured-ouput-integrating-it-with-makecom-43da</guid>
      <description>&lt;p&gt;Let’s talk about something that’s flying under the radar but has huge implications—especially if you're building automation workflows: &lt;strong&gt;OpenAI's new Responses API&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is the Responses API?
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;Responses API&lt;/strong&gt; is OpenAI’s most advanced and future-focused interface for working with their models (like GPT-4o or GPT-3.5 Turbo). It consolidates and expands on previous APIs—especially the Completions and Assistants APIs—into a single, powerful toolset.&lt;/p&gt;

&lt;p&gt;Here’s what it brings to the table:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Text Generation&lt;/strong&gt; – Like the classic Completions API: simple prompt, smart response.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stateful Conversations&lt;/strong&gt; – Maintains memory across turns, similar to the Assistants API.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multimodal Input&lt;/strong&gt; – Supports both images and text, ideal for GPT-4o’s capabilities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Native Tools Integration&lt;/strong&gt; – Direct access to:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;File Search&lt;/strong&gt; (RAG-style document querying)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Web Browsing&lt;/strong&gt; (live info retrieval)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Function Calling&lt;/strong&gt; – Trigger external APIs or tools based on the model’s output.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structured Outputs (My Favorite)&lt;/strong&gt; – Use a JSON Schema to &lt;strong&gt;force&lt;/strong&gt; the model to return data in a predefined structure.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  From Assistants to Responses: A Welcome Evolution
&lt;/h3&gt;

&lt;p&gt;If you’ve worked with the Assistants API, you know it was already a step up—especially for stateful interaction and tool integration. I built most of my earlier automation projects on Assistants, and while it never got the love it deserved, it was incredibly useful.&lt;/p&gt;

&lt;p&gt;But OpenAI is moving forward. The Responses API will &lt;strong&gt;replace&lt;/strong&gt; the Assistants API—and honestly, that’s a good thing.&lt;/p&gt;

&lt;p&gt;Why? Because the Responses API gives you all the control that made Assistants valuable, while making the process &lt;em&gt;easier&lt;/em&gt;, &lt;em&gt;more capable&lt;/em&gt;, and &lt;em&gt;far more flexible&lt;/em&gt;. It’s less rigid, more powerful, and deeply customizable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Spotlight: Structured Outputs
&lt;/h3&gt;

&lt;p&gt;The game-changer feature for automation is &lt;strong&gt;Structured Output&lt;/strong&gt;—and this is where things get exciting.&lt;/p&gt;

&lt;p&gt;Structured Outputs allow you to define a strict &lt;strong&gt;JSON Schema&lt;/strong&gt;, which forces the AI to return data in a consistent and machine-readable format. This matters not just for clarity, but because it makes the response immediately useful in other systems—like a CRM, a training module, or a dashboard.&lt;/p&gt;

&lt;p&gt;Let’s look at a side-by-side example of freeform vs structured output in a "Wine Tutor" bot scenario.&lt;/p&gt;

&lt;h4&gt;
  
  
  Input Data
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"setting"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Harvest Dinner"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"scenario"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"You're attending an exclusive harvest dinner..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Best Grape"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Cabernet Sauvignon"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Region"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Napa Valley"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Pairing Rationale"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A bold Napa Valley Cabernet Sauvignon..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Student Response"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"I think a Pinot from Oregon might be the best option..."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Freeform AI Response
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Ah, my dear aspirant... For choosing a Pinot Noir, rather than the crowning jewel of a Napa Valley Cabernet Sauvignon, I must award you a mere 7 points... You did catch the red category, so an additional 2 points to you... Your rationale, however, demonstrated a grasp... deserving of a 3 for its attempt... Remember this: At events where richness abounds, let Napa Valley's Cabernet Sauvignon—bold like the sun and full of grace—be your guiding star."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Trying to extract structured values (like total score, grape recommendation, or rationale quality) from that paragraph reliably? Not fun. You’d likely have to build a custom parser, use regex, or rely on brittle keyword matching—none of which is scalable or reliable in a production automation environment. Worse, any slight variation in how the AI phrases its response could completely break your downstream logic. That’s a risky bet when you need consistent data for powering other systems like CRMs, LMS platforms, or analytics dashboards.&lt;/p&gt;

&lt;h4&gt;
  
  
  Structured Output (Easy to Parse and Use)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"feedback"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Ah, my dear aspirant... let Napa Valley's Cabernet Sauvignon—bold like the sun and full of grace—be your guiding star."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"grapeScore"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"justificationScore"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"totalScore"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"bestGrape"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Cabernet Sauvignon"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"bestRegion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Napa Valley"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
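A schema is what enforces that shape. Here's a hedged sketch of a JSON Schema matching the structured output above — field names mirror the example, while the `required` and `additionalProperties` settings follow OpenAI's strict-mode requirements (verify the exact rules against their current docs):

```javascript
// Sketch of a JSON Schema forcing the wine-tutor output shape above.
// Strict structured outputs generally require every property listed
// in "required" and additionalProperties set to false.
const wineTutorSchema = {
  type: "object",
  properties: {
    feedback: { type: "string" },
    grapeScore: { type: "integer" },
    justificationScore: { type: "integer" },
    totalScore: { type: "integer" },
    bestGrape: { type: "string" },
    bestRegion: { type: "string" }
  },
  required: [
    "feedback", "grapeScore", "justificationScore",
    "totalScore", "bestGrape", "bestRegion"
  ],
  additionalProperties: false // strict mode rejects extra keys
};

module.exports = wineTutorSchema;
```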



&lt;p&gt;Now that we understand structured data and the Responses API, the question left to answer is how to use this in an automation tool like Make.com.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implementing in Make.com: The HTTP Module &amp;amp; Double Parsing
&lt;/h3&gt;

&lt;p&gt;So how do you do this in Make.com? As of now, there isn’t a dedicated Responses API module. Instead, we use the &lt;strong&gt;HTTP - Make a request&lt;/strong&gt; module.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Setup:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;URL:&lt;/strong&gt; &lt;code&gt;https://api.openai.com/v1/responses&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Method:&lt;/strong&gt; &lt;code&gt;POST&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Headers:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Authorization&lt;/code&gt;: &lt;code&gt;Bearer YOUR_API_KEY&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Content-Type&lt;/code&gt;: &lt;code&gt;application/json&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;OpenAI-Beta&lt;/code&gt;: &lt;code&gt;assistants=v2&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Body Type:&lt;/strong&gt; &lt;code&gt;Raw&lt;/code&gt;, &lt;code&gt;JSON&lt;/code&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;Payload:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;model&lt;/code&gt;: &lt;code&gt;gpt-4o&lt;/code&gt; or &lt;code&gt;gpt-3.5-turbo&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;input&lt;/code&gt;: Your prompt (e.g., "I am eating {{40.meal}}. What wine pairs best?")&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;format&lt;/code&gt;: Use &lt;code&gt;json_schema&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;schema&lt;/code&gt;: Your defined JSON structure&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;instructions&lt;/code&gt;: Guide the model (e.g., "Return feedback, scores, and pairings.")&lt;/li&gt;
&lt;/ul&gt;
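Those pieces assemble into one HTTP request. The sketch below shows the shape as a plain function — `buildResponsesRequest` is a name I made up for illustration, and the exact nesting of the schema fields has shifted across API versions, so treat the payload keys as illustrative and check OpenAI's current docs:

```javascript
// Sketch: assemble the request Make.com's HTTP module will send.
// Payload keys are illustrative; verify nesting against OpenAI's
// current Responses API reference before relying on it.
function buildResponsesRequest(apiKey, input, schema) {
  return {
    url: "https://api.openai.com/v1/responses",
    method: "POST",
    headers: {
      "Authorization": "Bearer " + apiKey,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      model: "gpt-4o",
      input: input,
      instructions: "Return feedback, scores, and pairings.",
      text: {
        format: { type: "json_schema", name: "wine_tutor", schema: schema }
      }
    })
  };
}
```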

&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;Double Parse:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;First, parse the response's &lt;code&gt;data&lt;/code&gt; object.&lt;/li&gt;
&lt;li&gt;Then, extract the &lt;code&gt;output[].content[].text&lt;/code&gt; field and parse it again to get actual JSON data.&lt;/li&gt;
&lt;/ul&gt;
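In code, the double parse looks like this — a sketch using a trimmed mock of the response envelope, since the real body carries many more fields:

```javascript
// Sketch of the "double parse": the API returns JSON whose output text
// field is itself a JSON string, so it must be parsed twice.
function doubleParse(rawBody) {
  const envelope = JSON.parse(rawBody);            // first parse: the envelope
  const text = envelope.output[0].content[0].text; // still a string here
  return JSON.parse(text);                         // second parse: your data
}

// Trimmed mock of what the HTTP module hands back:
const rawBody = JSON.stringify({
  output: [{
    content: [{ text: '{"bestGrape":"Cabernet Sauvignon","totalScore":10}' }]
  }]
});
```

After `doubleParse(rawBody)` you're holding a plain object with `bestGrape`, `totalScore`, and so on, ready to map into later modules.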

&lt;ol start="4"&gt;
&lt;li&gt;&lt;strong&gt;Use the Data:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;After parsing, you'll have clean variables: &lt;code&gt;feedback&lt;/code&gt;, &lt;code&gt;wine&lt;/code&gt;, &lt;code&gt;region&lt;/code&gt;, &lt;code&gt;totalScore&lt;/code&gt;, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Power Unleashed
&lt;/h3&gt;

&lt;p&gt;Why this matters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reliability&lt;/strong&gt; – No more guesswork when parsing AI output.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplicity&lt;/strong&gt; – Eliminate brittle regex and string searches.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficiency&lt;/strong&gt; – Focus on building logic, not fixing parsing issues.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Control&lt;/strong&gt; – You dictate exactly what you want and how.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So while the Assistants API had its moment, the Responses API—especially when paired with Structured Outputs and Make.com—is a massive leap forward. It may take a few extra steps, but the payoff is clean, predictable, and scalable automation.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/MLXOL1v9cJc"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>automation</category>
      <category>openai</category>
    </item>
  </channel>
</rss>
