<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hamza KONTE</title>
    <description>The latest articles on DEV Community by Hamza KONTE (@hamza_konte_04e855f6449da).</description>
    <link>https://dev.to/hamza_konte_04e855f6449da</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3823414%2F4c8f3c47-b138-4872-b260-d2caa4e7b897.jpeg</url>
      <title>DEV Community: Hamza KONTE</title>
      <link>https://dev.to/hamza_konte_04e855f6449da</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hamza_konte_04e855f6449da"/>
    <language>en</language>
    <item>
      <title>A colleague told me my prompt was garbage. He was right.</title>
      <dc:creator>Hamza KONTE</dc:creator>
      <pubDate>Sat, 14 Mar 2026 05:53:42 +0000</pubDate>
      <link>https://dev.to/hamza_konte_04e855f6449da/a-colleague-told-me-my-prompt-was-garbage-he-was-right-59j4</link>
      <guid>https://dev.to/hamza_konte_04e855f6449da/a-colleague-told-me-my-prompt-was-garbage-he-was-right-59j4</guid>
      <description>&lt;p&gt;Role, constraints, examples, output format. All in one paragraph. The model has to guess what's what.&lt;/p&gt;

&lt;p&gt;I kept rewriting my prompts, re-running, tweaking one sentence, re-running again. Ten tries to get something decent. Not because the model was bad. Because my instructions were a mess.&lt;/p&gt;

&lt;h2&gt;The problem nobody talks about&lt;/h2&gt;

&lt;p&gt;When you write a prompt, you think in categories. You know which part is the role, which part is a constraint, which part is an example. But the model doesn't. It sees one block of text and has to infer the boundaries.&lt;/p&gt;

&lt;p&gt;Sometimes it gets it right. Sometimes it treats your example as an instruction. Sometimes it ignores your output format because it blended into the context paragraph above it.&lt;/p&gt;

&lt;p&gt;The fix isn't a better model. It's showing the model where each piece starts and ends.&lt;/p&gt;
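One common way to mark those boundaries, independent of any tool, is explicit delimiters around each section. A generic sketch (the tag names here are illustrative, not flompt's actual block types):

```xml
<role>You are a senior Python reviewer.</role>
<constraints>Keep feedback under 200 words. No style nitpicks.</constraints>
<example>Input: a function with a mutable default argument. Output: flag it, explain why, show the fix.</example>
<output_format>A bulleted list of issues, ordered by severity.</output_format>
```

With boundaries like these, the model no longer has to guess whether a sentence is an instruction, an example, or context.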

&lt;h2&gt;So I built flompt&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://flompt.dev" rel="noopener noreferrer"&gt;flompt&lt;/a&gt; pre-interprets your prompt before the model does.&lt;/p&gt;

&lt;p&gt;You paste your prompt, and an AI model splits it into typed blocks: role, audience, objective, constraints, examples, chain of thought, output format, response style, language. You see exactly how your instructions will be read.&lt;/p&gt;

&lt;p&gt;If the interpretation is wrong, you fix the blocks. Then you compile and get a structured prompt that the model parses correctly on the first try.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Same model. Same task. Night and day.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;What it looks like&lt;/h2&gt;

&lt;p&gt;&lt;iframe src="https://www.youtube.com/embed/hFVTnnw9wIU"&gt;&lt;/iframe&gt;&lt;/p&gt;

&lt;h2&gt;Three ways to use it&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Web app&lt;/strong&gt; at &lt;a href="https://flompt.dev" rel="noopener noreferrer"&gt;flompt.dev&lt;/a&gt;. No account, no install. Paste, decompose, edit, compile.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Browser extension&lt;/strong&gt; for Chrome and Firefox. Adds an "Enhance" button right inside ChatGPT, Claude, and Gemini. You structure your prompt in a sidebar without leaving the conversation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MCP server&lt;/strong&gt; for Claude Code. Your agents can call &lt;code&gt;decompose_prompt()&lt;/code&gt; and &lt;code&gt;compile_prompt()&lt;/code&gt; programmatically:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude mcp add &lt;span class="nt"&gt;--transport&lt;/span&gt; http &lt;span class="nt"&gt;--scope&lt;/span&gt; user flompt https://flompt.dev/mcp/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
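Under MCP's HTTP transport, a tool invocation is a JSON-RPC 2.0 `tools/call` request. A minimal sketch of the payload an agent would send; the tool name comes from above, but the `prompt` argument key is my assumption, not flompt's documented schema:

```python
import json

# JSON-RPC 2.0 envelope used by MCP's tools/call method.
# Tool name "decompose_prompt" is from the post; the "prompt"
# argument key is an assumed name for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "decompose_prompt",
        "arguments": {"prompt": "Write a friendly onboarding email for new users."},
    },
}

print(json.dumps(request, indent=2))
```

In practice Claude Code builds and sends this envelope for you once the server is registered; you never write it by hand.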



&lt;h2&gt;50+ templates&lt;/h2&gt;

&lt;p&gt;Don't want to start from scratch? There are 50+ templates built from the best prompts on GitHub. Code review, blog post, sales pitch, data analysis, lesson plan. Load one, tweak the blocks, compile.&lt;/p&gt;

&lt;h2&gt;Stack&lt;/h2&gt;

&lt;p&gt;React 18 + TypeScript + React Flow + Zustand (frontend), FastAPI + Python 3.12 (backend), Caddy (reverse proxy). 10 languages supported.&lt;/p&gt;

&lt;p&gt;Free. Open-source. MIT.&lt;/p&gt;

&lt;p&gt;Try it: &lt;a href="https://flompt.dev" rel="noopener noreferrer"&gt;flompt.dev&lt;/a&gt;&lt;br&gt;
Repo: &lt;a href="https://github.com/Nyrok/flompt" rel="noopener noreferrer"&gt;https://github.com/Nyrok/flompt&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you find it useful, a star on the repo would mean a lot. Solo project, every star helps with visibility.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>promptengineering</category>
      <category>opensource</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
