<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Julien Doussot</title>
    <description>The latest articles on DEV Community by Julien Doussot (@julien_doussot).</description>
    <link>https://dev.to/julien_doussot</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3063180%2F191130ea-92fc-4265-bac2-0a919398fa81.png</url>
      <title>DEV Community: Julien Doussot</title>
      <link>https://dev.to/julien_doussot</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/julien_doussot"/>
    <language>en</language>
    <item>
      <title>Prompt Debugging Is the New Stack Trace</title>
      <dc:creator>Julien Doussot</dc:creator>
      <pubDate>Tue, 20 May 2025 08:46:30 +0000</pubDate>
      <link>https://dev.to/julien_doussot/prompt-debugging-is-the-new-stack-trace-2hie</link>
      <guid>https://dev.to/julien_doussot/prompt-debugging-is-the-new-stack-trace-2hie</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;What breaking AI workflows taught me about the future of engineering.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;In traditional software development, when something breaks, you look at the logs. You dig into the stack trace, inspect variables, step through the debugger, isolate the bug.&lt;/p&gt;

&lt;p&gt;In AI-powered applications, that world is gone.&lt;/p&gt;

&lt;p&gt;When an LLM fails, there’s often no crash, no error, and no stack trace. Just a subtly wrong response. A hallucination. A weird behavior that’s technically correct but totally wrong for your user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Debugging becomes conversational.&lt;/strong&gt; And that changes everything.&lt;/p&gt;




&lt;h2&gt;
  
  
  From Code Bugs to Prompt Bugs
&lt;/h2&gt;

&lt;p&gt;In our product &lt;a href="https://linkeme.ai" rel="noopener noreferrer"&gt;Linkeme&lt;/a&gt;, we rely on prompts to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate social media content&lt;/li&gt;
&lt;li&gt;Choose relevant CTAs&lt;/li&gt;
&lt;li&gt;Compose visual overlays&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And yet, some of the most frustrating bugs we faced early on didn’t come from bad code — but from poorly constructed prompts.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Generate a LinkedIn post for this article: 'AI agents will replace internal tools.'"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sounds fine? Not really. The model didn’t know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Who the target audience was&lt;/li&gt;
&lt;li&gt;What tone to use&lt;/li&gt;
&lt;li&gt;Whether to include hashtags, emojis, or links&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The output was generic and missed the mark. So we debugged it — not with breakpoints, but with iterations.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Prompt Debugging Looks Like
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Add role and tone:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"You are a social media expert. Generate a LinkedIn post for B2B founders. Tone: bold but professional."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Add formatting instructions:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"...Include 3 lines max, use a punchy hook, and end with a CTA."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Add context examples:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Here are 2 good posts for reference: [...]"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We iterated until the model consistently produced good results. That’s prompt debugging.&lt;/p&gt;
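
&lt;p&gt;The iterations above can be sketched as a layered prompt builder, where each debugging pass adds one more layer of guidance to the same base instruction. The function and field names are illustrative, not Linkeme’s actual code:&lt;/p&gt;

```python
# Sketch of prompt debugging as layered guidance. Each iteration adds
# a layer (role, tone, formatting rules, examples) to the base task.
# Names and fields are illustrative, not Linkeme's actual schema.

def build_prompt(task, role=None, tone=None, format_rules=None, examples=None):
    """Assemble a prompt from layered guidance, skipping missing layers."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    parts.append(task)
    if tone:
        parts.append(f"Tone: {tone}.")
    if format_rules:
        parts.append(" ".join(format_rules))
    if examples:
        parts.append("Here are good posts for reference: " + " | ".join(examples))
    return "\n".join(parts)

# Iteration 1: bare task (the version that produced generic output)
v1 = build_prompt(
    "Generate a LinkedIn post for this article: "
    "'AI agents will replace internal tools.'"
)

# Iteration 3: role, tone, formatting rules, and examples all added
v3 = build_prompt(
    "Generate a LinkedIn post for B2B founders about: "
    "'AI agents will replace internal tools.'",
    role="a social media expert",
    tone="bold but professional",
    format_rules=["Include 3 lines max.", "Use a punchy hook.", "End with a CTA."],
    examples=["Example post A", "Example post B"],
)
```

&lt;p&gt;Each failed generation points at the missing layer, the same way a stack frame points at the failing call.&lt;/p&gt;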




&lt;h2&gt;
  
  
  Building Tools for Prompt QA
&lt;/h2&gt;

&lt;p&gt;After spending too much time manually testing prompts, we built our own tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prompt versioning&lt;/strong&gt; — Every prompt change is tracked like a commit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt test cases&lt;/strong&gt; — Each key prompt has expected inputs and outputs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Failure reporting&lt;/strong&gt; — Human validators can tag a failed generation and auto-rollback to the last good version.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring with PostHog&lt;/strong&gt; — To track usage and spot regressions.&lt;/li&gt;
&lt;/ul&gt;
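
&lt;p&gt;A minimal sketch of how prompt versioning, test cases, and auto-rollback could fit together; this is a toy structure for illustration, not our real tooling:&lt;/p&gt;

```python
# Toy sketch of prompt versioning with rollback to the last version
# whose test cases all passed. Not Linkeme's actual tooling.

class PromptRegistry:
    def __init__(self):
        self.versions = []  # list of (prompt_text, passed) tuples

    def commit(self, prompt_text, test_cases, judge):
        """Record a prompt version and whether it passes its test cases.

        judge(prompt, case) returns True when the generation for `case`
        is acceptable; in practice a human validator or an LLM grader.
        """
        passed = all(judge(prompt_text, case) for case in test_cases)
        self.versions.append((prompt_text, passed))
        return passed

    def current(self):
        """Latest version that passed all its tests: the rollback target."""
        for prompt_text, passed in reversed(self.versions):
            if passed:
                return prompt_text
        return None

# Toy judge: accept only prompts that specify a tone.
def toy_judge(prompt, case):
    return "Tone:" in prompt

registry = PromptRegistry()
registry.commit("Generate a LinkedIn post. Tone: bold.", ["case1"], toy_judge)
registry.commit("Generate a LinkedIn post.", ["case1"], toy_judge)  # regression
```

&lt;p&gt;The point of the design is that a bad commit never becomes the serving prompt: &lt;code&gt;current()&lt;/code&gt; always resolves to the last known-good version.&lt;/p&gt;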

&lt;p&gt;LLM development requires building a new stack for non-deterministic outputs.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Prompt Debugging Is the New Core Skill
&lt;/h2&gt;

&lt;p&gt;If you work with AI, prompt engineering is not a phase — it’s a new layer in your software stack.&lt;/p&gt;

&lt;p&gt;It’s less about logic, more about &lt;em&gt;guidance&lt;/em&gt;.&lt;br&gt;
Less about syntax, more about &lt;em&gt;semantics&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;And it’s becoming a core engineering skill.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;You don’t debug LLM apps the way you debug JavaScript or Python.&lt;br&gt;
You test intentions, tweak contexts, and optimize outputs.&lt;/p&gt;

&lt;p&gt;If that sounds frustrating — well, welcome to the future.&lt;/p&gt;

&lt;p&gt;But once you embrace it, it’s surprisingly powerful.&lt;/p&gt;

&lt;p&gt;Your new debugger is not a terminal. It’s a chat.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;We’re exploring more around LLM-powered dev at &lt;a href="https://easylab.ai" rel="noopener noreferrer"&gt;easylab.ai&lt;/a&gt; — and prompt versioning is just the beginning.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>promptengineering</category>
      <category>llm</category>
      <category>startup</category>
    </item>
    <item>
      <title>How I Went From Sorting CVs to Building a Full AI-Powered SaaS With Zero Traditional Code</title>
      <dc:creator>Julien Doussot</dc:creator>
      <pubDate>Fri, 09 May 2025 15:05:48 +0000</pubDate>
      <link>https://dev.to/julien_doussot/how-i-went-from-sorting-cvs-to-building-a-full-ai-powered-saas-with-zero-traditional-code-105g</link>
      <guid>https://dev.to/julien_doussot/how-i-went-from-sorting-cvs-to-building-a-full-ai-powered-saas-with-zero-traditional-code-105g</guid>
      <description>&lt;p&gt;Let’s get one thing straight: &lt;strong&gt;I’m not a developer.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I started in telecom. Built a successful infrastructure business. Sold it. Then bought a consulting firm that placed experts into banks and corporate environments. I didn’t expect to get into software — I was managing people.&lt;/p&gt;

&lt;p&gt;What I expected even less was how quickly I’d hit a wall.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pain Wasn't Technical — It Was Repetitive
&lt;/h2&gt;

&lt;p&gt;By 2022, I was drowning in recruiting work. Dozens of CVs daily, Excel sheets, calendar invites, formatting issues, interview follow-ups. If you’ve ever run an operational company without a dedicated product team, you know the feeling: everything feels duct-taped.&lt;/p&gt;

&lt;p&gt;So I started automating.&lt;/p&gt;

&lt;p&gt;At first, I used &lt;strong&gt;&lt;a href="https://www.make.com" rel="noopener noreferrer"&gt;Make.com&lt;/a&gt;&lt;/strong&gt; (then Integromat). I wired together flows to read CVs, extract info, rank candidates, and trigger reminders. I used &lt;strong&gt;Evernote&lt;/strong&gt; for notes, &lt;strong&gt;Google Sheets&lt;/strong&gt; for storage, and &lt;strong&gt;GPT-3&lt;/strong&gt; for summaries and scoring. It wasn’t pretty, but it saved hours.&lt;/p&gt;
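
&lt;p&gt;In spirit, the flow looked something like this Python sketch; the real logic lived in Make.com scenarios, with GPT-3 doing the summarizing and scoring, and the rules below are invented stand-ins:&lt;/p&gt;

```python
# Sketch of the CV-sorting flow, redone in Python for illustration.
# In the real setup, extraction and scoring were GPT-3 calls wired
# together in Make.com; these stand-in rules are invented.

def extract_info(cv_text):
    """Stand-in for the extraction step (GPT-3 in the real flow)."""
    return {"text": cv_text, "year_mentions": cv_text.count("20")}

def score(candidate, keywords):
    """Stand-in for LLM scoring: keyword hits plus an experience proxy."""
    hits = sum(1 for k in keywords if k.lower() in candidate["text"].lower())
    return hits * 10 + candidate["year_mentions"]

def rank(cvs, keywords):
    """Extract, score, and sort candidates, best first."""
    candidates = [extract_info(cv) for cv in cvs]
    return sorted(candidates, key=lambda c: score(c, keywords), reverse=True)

ranked = rank(
    ["Java developer, 2015-2020, banking", "Gardener, 2019"],
    keywords=["banking", "java"],
)
```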

&lt;p&gt;There was no front-end. Just logic.&lt;/p&gt;

&lt;p&gt;I didn't call it “vibe coding” back then — but that's exactly what it was: orchestrating AI and tools to &lt;em&gt;build&lt;/em&gt; without code, just intent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Clients Started Asking Questions
&lt;/h2&gt;

&lt;p&gt;By mid-2023, something strange started happening.&lt;/p&gt;

&lt;p&gt;Clients were no longer impressed by our ability to place consultants. They were fascinated by the internal tools we were using.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Wait — you’re doing all that without a dev team?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s when I realized the software &lt;em&gt;was&lt;/em&gt; the product.&lt;/p&gt;

&lt;p&gt;So I gave it a name. I rebuilt it. And I prepared to commercialize it.&lt;/p&gt;

&lt;p&gt;That’s when &lt;strong&gt;&lt;a href="https://linkeme.ai" rel="noopener noreferrer"&gt;Linkeme&lt;/a&gt;&lt;/strong&gt; was born.&lt;/p&gt;

&lt;h2&gt;
  
  
  From MVP to V1: Built with AI, by AI
&lt;/h2&gt;

&lt;p&gt;I didn’t start with an idea. I started with a problem: &lt;em&gt;"I spend too much time managing content and candidate workflows."&lt;/em&gt; That was it.&lt;/p&gt;

&lt;p&gt;When I moved to productize Linkeme, I rebuilt the entire stack using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://bolt.new" rel="noopener noreferrer"&gt;Bolt.new&lt;/a&gt;&lt;/strong&gt; for front-end UI and logic (a spinoff from StackBlitz)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Make.com&lt;/strong&gt; for backend logic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPT-4.1&lt;/strong&gt; and &lt;strong&gt;Claude&lt;/strong&gt; for copywriting, parsing, and decision-making&lt;/li&gt;
&lt;li&gt;A custom internal agent to rank post ideas&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;CTA engine&lt;/strong&gt; that adapts length/tone for each platform&lt;/li&gt;
&lt;li&gt;A visual composition layer that &lt;strong&gt;auto-generates illustrations&lt;/strong&gt; with title, subtitle, and brand overlay&lt;/li&gt;
&lt;li&gt;Publishing logic across &lt;strong&gt;LinkedIn, Twitter, Instagram, Facebook&lt;/strong&gt; — with zero manual input&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Give Linkeme a URL or a topic, and it outputs brand-compliant, multi-platform content — &lt;em&gt;fully automated&lt;/em&gt;.&lt;/p&gt;
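
&lt;p&gt;To make the per-platform adaptation concrete, a hypothetical CTA engine could look like the following; the character limits and tone labels are made up for illustration:&lt;/p&gt;

```python
# Hypothetical sketch of a CTA engine that adapts length and tone per
# platform. Limits and tone labels are invented for illustration.

PLATFORM_RULES = {
    "linkedin":  {"max_chars": 3000, "tone": "professional"},
    "twitter":   {"max_chars": 280,  "tone": "punchy"},
    "instagram": {"max_chars": 2200, "tone": "casual"},
    "facebook":  {"max_chars": 2000, "tone": "friendly"},
}

def adapt_cta(base_cta, platform):
    """Trim the CTA to the platform limit and tag the requested tone."""
    rules = PLATFORM_RULES[platform]
    return {
        "platform": platform,
        "tone": rules["tone"],
        "text": base_cta[:rules["max_chars"]],
    }

posts = [adapt_cta("Read the full story on linkeme.ai", p) for p in PLATFORM_RULES]
```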

&lt;h2&gt;
  
  
  Version 2.0: Scalable Infrastructure, Same Vibe
&lt;/h2&gt;

&lt;p&gt;As traction grew, we scaled.&lt;/p&gt;

&lt;p&gt;We rebuilt the architecture using &lt;strong&gt;serverless AWS&lt;/strong&gt; — Lambda, Step Functions, S3. We added &lt;strong&gt;Chargebee&lt;/strong&gt; for billing and made the system multi-tenant.&lt;/p&gt;

&lt;p&gt;But we never lost the vibe coding mindset. Using &lt;strong&gt;Cline.dev&lt;/strong&gt; inside VS Code, we orchestrated LLM agents to build workflows, detect edge cases, and self-refactor code.&lt;/p&gt;

&lt;p&gt;The same AI that created content now built Linkeme itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Changed for Me
&lt;/h2&gt;

&lt;p&gt;Before Linkeme, I managed consultants.&lt;/p&gt;

&lt;p&gt;Now, I run &lt;strong&gt;&lt;a href="https://easylab.ai" rel="noopener noreferrer"&gt;Easylab AI&lt;/a&gt;&lt;/strong&gt; — an AI-native automation agency helping teams replace repetitive workflows with intelligent agents.&lt;/p&gt;

&lt;p&gt;We build tools like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://sandrahr.com" rel="noopener noreferrer"&gt;SandraHR&lt;/a&gt;&lt;/strong&gt; — automated recruitment with LLM-based analysis&lt;/li&gt;
&lt;li&gt;Custom internal platforms using AI orchestration&lt;/li&gt;
&lt;li&gt;Zero-code/low-code systems for SMEs that can’t afford dev teams&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All with the same philosophy: &lt;strong&gt;no traditional code&lt;/strong&gt;, just design, orchestration, and validation.&lt;/p&gt;

&lt;h2&gt;
  
  
  So, Is This The Future?
&lt;/h2&gt;

&lt;p&gt;No — not for everything. But for most operational software, dashboards, internal tools, automation flows? &lt;strong&gt;Absolutely yes.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You describe what you want.&lt;/li&gt;
&lt;li&gt;The agents figure out how.&lt;/li&gt;
&lt;li&gt;You validate the result.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We don’t do pull requests anymore. We review AI output.&lt;/p&gt;

&lt;p&gt;We don’t write frontends. We instruct layouts and flows.&lt;/p&gt;

&lt;p&gt;And it works.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;If you're building with AI, or if you want to — start with a pain, not a stack.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That’s what Linkeme was. That’s why it still works.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://linkeme.ai" rel="noopener noreferrer"&gt;linkeme.ai&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy to answer questions or dive deeper into our architecture. DM open.&lt;/p&gt;

</description>
      <category>nocode</category>
      <category>startup</category>
      <category>automation</category>
      <category>ai</category>
    </item>
    <item>
      <title>From Skepticism to System: How We Transitioned to Full-AI Dev.</title>
      <dc:creator>Julien Doussot</dc:creator>
      <pubDate>Fri, 18 Apr 2025 15:51:58 +0000</pubDate>
      <link>https://dev.to/julien_doussot/from-skepticism-to-system-how-we-transitioned-to-full-ai-dev-59dc</link>
      <guid>https://dev.to/julien_doussot/from-skepticism-to-system-how-we-transitioned-to-full-ai-dev-59dc</guid>
      <description>&lt;p&gt;I’ll be honest: when the idea was first proposed that we could shift &lt;em&gt;all&lt;/em&gt; of our code production to AI, I wasn’t entirely convinced. Like many, I saw the potential in tools like Copilot and ChatGPT—but to imagine entire features, components, even products being built from end to end by AI agents? That seemed like a stretch.&lt;/p&gt;

&lt;p&gt;And yet, here we are.&lt;/p&gt;

&lt;p&gt;Since October 2024, we’ve been running Easylab AI on a &lt;strong&gt;fully AI-assisted development stack&lt;/strong&gt;, where human engineers no longer write production code themselves. Everything we build is developed by agents, powered by LLMs like &lt;strong&gt;Claude 3.7&lt;/strong&gt; and &lt;strong&gt;DeepSeek GPT 4.1&lt;/strong&gt;, orchestrated by prompt engineers, product designers, and dev strategists.&lt;/p&gt;

&lt;p&gt;Let me walk you through how we got here—and what we’ve learned along the way.&lt;/p&gt;

&lt;p&gt;It started with a first project: &lt;strong&gt;&lt;a href="https://linkeme.ai" rel="noopener noreferrer"&gt;Linkeme&lt;/a&gt;&lt;/strong&gt;. A SaaS we designed for SMEs, Linkeme helps automate and optimize their social media communications by combining audience insights, AI-driven scheduling, and generative content creation. We chose it because it was ambitious—but also self-contained. If it failed, no real damage. If it worked, it would prove the value of full-AI dev.&lt;/p&gt;

&lt;p&gt;We gave it a spec, routed it through our early AI stack, and let the agents build. The first version ran… kind of. The logic was there, but fragile. Edge cases were missed, error handling was inconsistent, and the LLM misunderstood some of our intent. But despite the bugs, what struck us was the &lt;strong&gt;raw velocity&lt;/strong&gt;. It took hours—not days—to get a first working version.&lt;/p&gt;

&lt;p&gt;We iterated. Hard.&lt;/p&gt;

&lt;p&gt;We started building internal tools to guide prompts, track agent output, inject context, and debug reasoning paths. We learned which models were good for what: Claude for logic and planning, DeepSeek for clean code generation, others for tests or documentation. We designed role-based agents, and built structured workflows around them. We enforced standards. We forced ourselves to slow down—just enough to design the orchestration layer properly.&lt;/p&gt;
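
&lt;p&gt;That model split can be captured in a small routing table; the identifiers below are placeholders, not our actual configuration:&lt;/p&gt;

```python
# Sketch of routing tasks to the model that suits them, following the
# split described above. Model identifiers are placeholders.

MODEL_FOR_TASK = {
    "planning":      "claude-primary",
    "logic":         "claude-primary",
    "codegen":       "deepseek-coder",
    "tests":         "generic-model",
    "documentation": "generic-model",
}

def route(task_type):
    """Pick a model for a task type, defaulting to the generic model."""
    return MODEL_FOR_TASK.get(task_type, "generic-model")
```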

&lt;p&gt;There were failures. Some releases went sideways. There were moments where we thought, “Okay, maybe this was too much, too fast.” But each time, the lessons stuck—and the system got better.&lt;/p&gt;

&lt;p&gt;Within weeks, we had restructured our delivery pipeline. Our engineers stopped writing components manually and instead began managing tasks as &lt;strong&gt;intentful orchestrators&lt;/strong&gt;. They designed specs, prompted agents, reviewed code, validated logic, and shipped. They became system designers, reviewers, QA partners, and meta-programmers. And they got &lt;em&gt;very&lt;/em&gt; good at it.&lt;/p&gt;

&lt;p&gt;Today, we can say with confidence: our engineering team doesn’t just use AI—they build &lt;em&gt;through&lt;/em&gt; AI. They operate on another level. They move faster, think bigger, and have developed a skillset that I believe will define the next decade of engineering: &lt;strong&gt;the ability to shape autonomous systems that build autonomously&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Is it perfect? No. We still hit edge cases. Prompting is an art, not a science. And we’ve had to invest a lot into internal tooling to keep the whole orchestration layer stable and explainable. But the benefits are undeniable.&lt;/p&gt;




&lt;h3&gt;
  
  
  What we’ve gained
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;We deliver features in 10–20% of the time it used to take
&lt;/li&gt;
&lt;li&gt;Our engineers are less fatigued and more intellectually engaged
&lt;/li&gt;
&lt;li&gt;We’ve built a scalable internal knowledge architecture through reusable agents
&lt;/li&gt;
&lt;li&gt;We’ve created a culture of systems thinking, experimentation, and speed&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What’s still challenging
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Prompting isn’t trivial. It requires training and intuition
&lt;/li&gt;
&lt;li&gt;LLMs hallucinate logic if instructions are vague
&lt;/li&gt;
&lt;li&gt;Multi-agent orchestration is fragile without the right sequence and fallback mechanisms
&lt;/li&gt;
&lt;li&gt;We had to build internal tools to track agent decisions and outputs&lt;/li&gt;
&lt;/ul&gt;
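
&lt;p&gt;On the fallback point, a minimal retry-with-fallback wrapper gives the idea; the agent names are hypothetical:&lt;/p&gt;

```python
# Minimal sketch of a fallback chain for agent calls: try the primary
# agent, then fall down an ordered list when a call raises or returns
# nothing usable. Agent names are hypothetical.

def run_with_fallback(task, agents):
    """agents: ordered list of (name, callable) pairs."""
    errors = []
    for name, agent in agents:
        try:
            result = agent(task)
            if result:  # reject empty output, not just exceptions
                return {"agent": name, "result": result, "errors": errors}
        except Exception as exc:
            errors.append((name, str(exc)))
    raise RuntimeError(f"All agents failed: {errors}")

def flaky_primary(task):
    raise TimeoutError("model overloaded")

def steady_backup(task):
    return f"handled: {task}"

outcome = run_with_fallback("generate tests", [
    ("claude-primary", flaky_primary),
    ("deepseek-backup", steady_backup),
])
```

&lt;p&gt;Keeping the error trail in the result is the part that matters: it turns a silent model swap into something a human can audit later.&lt;/p&gt;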

&lt;h3&gt;
  
  
  How it works in practice
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;A spec is written in natural language
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;bolt.new&lt;/code&gt; generates the base skeleton
&lt;/li&gt;
&lt;li&gt;The “Back-end Builder” agent expands logic and integrates with data models
&lt;/li&gt;
&lt;li&gt;Claude 3.7 refines the API logic and writes tests
&lt;/li&gt;
&lt;li&gt;A “QA Validator” agent audits edge cases and coverage
&lt;/li&gt;
&lt;li&gt;A human reviews, validates security, and deploys
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This entire process usually takes 1–2 days.&lt;/p&gt;
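
&lt;p&gt;Those six steps can be sketched as a simple sequential pipeline, with each stage function standing in for the real tool or agent:&lt;/p&gt;

```python
# Sketch of the delivery pipeline above as a sequential chain of
# stages. Each stage is a placeholder for the real tool or agent
# (bolt.new, the Back-end Builder, Claude 3.7, the QA Validator).

def stage(name):
    """Placeholder stage: records that the named step ran."""
    def run(artifact):
        return artifact + [name]
    return run

PIPELINE = [
    stage("spec written in natural language"),
    stage("bolt.new generates base skeleton"),
    stage("Back-end Builder expands logic and data models"),
    stage("Claude 3.7 refines API logic and writes tests"),
    stage("QA Validator audits edge cases and coverage"),
    stage("human review, security validation, deploy"),
]

def run_pipeline():
    artifact = []
    for step in PIPELINE:
        artifact = step(artifact)
    return artifact

trace = run_pipeline()
```

&lt;p&gt;The human stage stays last on purpose: every artifact that ships has passed through a person, whatever the agents did before.&lt;/p&gt;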


&lt;p&gt;We're still learning. Still improving. But today, I believe Easylab AI has one of the most forward-leaning dev cultures you’ll find in Europe. And it all started with a test project we were prepared to let fail.&lt;/p&gt;

&lt;p&gt;If you’re considering a similar shift—or if you’ve already tried something like this—I’d love to hear your take. How far are you pushing AI in your own dev workflow? Where are the limits? Where are the surprises?&lt;/p&gt;

&lt;p&gt;Let’s open the conversation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.easylab.ai" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;Discover how Easylab AI builds with agents and LLMs&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://linkeme.ai" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;Check out Linkeme – our first fully AI-developed SaaS&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
