<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Onyedikachi Ejim</title>
    <description>The latest articles on DEV Community by Onyedikachi Ejim (@kachi).</description>
    <link>https://dev.to/kachi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1225384%2Fe63a1fa0-1c49-41dd-8cd8-ecfd39918ca9.jpeg</url>
      <title>DEV Community: Onyedikachi Ejim</title>
      <link>https://dev.to/kachi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kachi"/>
    <language>en</language>
    <item>
      <title>From Prompts to Programs: The Promise and Problem of AI-Generated Code</title>
      <dc:creator>Onyedikachi Ejim</dc:creator>
      <pubDate>Wed, 28 Jan 2026 11:40:56 +0000</pubDate>
      <link>https://dev.to/kachi/from-prompts-to-programs-the-promise-and-problem-of-ai-generated-code-9a4</link>
      <guid>https://dev.to/kachi/from-prompts-to-programs-the-promise-and-problem-of-ai-generated-code-9a4</guid>
      <description>&lt;p&gt;Over the course of my AI engineering journey (20+ days and counting), I’ve seen just how many possibilities exist when you start working closely with large language models.&lt;/p&gt;

&lt;p&gt;At first glance, LLMs don’t seem that magical.&lt;/p&gt;

&lt;p&gt;You send a prompt -&amp;gt; tokens are generated -&amp;gt; text comes back.&lt;/p&gt;

&lt;p&gt;We’ve been doing some version of this for years now. Just better models, better refinement, better UX.&lt;/p&gt;

&lt;p&gt;But things get really interesting when you stop treating an LLM as just a text generator and start embedding it inside a system.&lt;/p&gt;

&lt;h2&gt;When LLMs Stop Talking and Start Doing&lt;/h2&gt;

&lt;p&gt;The real power shows up when an LLM’s output is no longer the final result, but an instruction for something else to happen.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate code&lt;/li&gt;
&lt;li&gt;Trigger workflows&lt;/li&gt;
&lt;li&gt;Transform files&lt;/li&gt;
&lt;li&gt;Call tools&lt;/li&gt;
&lt;li&gt;Execute logic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you let model outputs drive actions, you open the door to a completely different class of applications.&lt;/p&gt;

&lt;p&gt;That shift hit me hard around Day 10 of my AI engineering journey, when we covered code generation with structured outputs.&lt;/p&gt;

&lt;h2&gt;Structured Output: Forcing the Model to Behave&lt;/h2&gt;

&lt;p&gt;The idea was simple:&lt;/p&gt;

&lt;p&gt;Instead of letting the model return any text, you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define a structure (schema, format, contract)&lt;/li&gt;
&lt;li&gt;Tell the model exactly what the output must look like&lt;/li&gt;
&lt;li&gt;Reject anything that doesn’t comply&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now you’re not just “asking for code”; you’re constraining how code is generated.&lt;/p&gt;
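&lt;p&gt;The define-a-contract-and-reject idea can be sketched in a few lines. This is a minimal illustration, not any specific library’s API; the schema fields here are hypothetical, and I’m assuming the model was told to reply in JSON:&lt;/p&gt;

```python
import json

# Hypothetical contract: the model must return a JSON object with
# exactly these fields and types. Anything else is rejected.
SCHEMA = {"language": str, "code": str, "explanation": str}

def validate_output(raw: str) -> dict:
    """Parse model output and reject anything that breaks the contract."""
    data = json.loads(raw)  # non-JSON output fails right here
    if set(data) != set(SCHEMA):
        raise ValueError(f"unexpected fields: {sorted(set(data) ^ set(SCHEMA))}")
    for field, expected in SCHEMA.items():
        if not isinstance(data[field], expected):
            raise ValueError(f"{field!r} must be {expected.__name__}")
    return data

# A compliant response passes...
ok = validate_output('{"language": "python", "code": "print(1)", "explanation": "demo"}')
print(ok["language"])

# ...while a response that smuggles in an extra field gets rejected.
try:
    validate_output('{"language": "python", "code": "x", "explanation": "y", "extra": 1}')
except ValueError as e:
    print("rejected:", e)
```

&lt;p&gt;The point isn’t the specific fields; it’s that non-conforming output never reaches the rest of the system.&lt;/p&gt;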

&lt;p&gt;As I went through the lessons and tasks, my brain immediately jumped to a bigger idea.&lt;/p&gt;

&lt;h2&gt;The “What If” Moment&lt;/h2&gt;

&lt;p&gt;What if I built a system where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A user describes a problem in plain English&lt;/li&gt;
&lt;li&gt;The system has no prebuilt feature for that problem&lt;/li&gt;
&lt;li&gt;The LLM generates code on the fly based on the request&lt;/li&gt;
&lt;li&gt;The code runs and solves a real-world task&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;A user uploads an Excel file and says:&lt;/p&gt;

&lt;p&gt;“I want this reorganized, grouped, and summarized in a specific way.”&lt;/p&gt;

&lt;p&gt;My app doesn’t support this feature at all.&lt;/p&gt;

&lt;p&gt;But instead of saying “Sorry, not supported”, the system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Interprets the request&lt;/li&gt;
&lt;li&gt;Generates a custom script&lt;/li&gt;
&lt;li&gt;Runs it&lt;/li&gt;
&lt;li&gt;Returns the result&lt;/li&gt;
&lt;/ul&gt;
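&lt;p&gt;That interpret → generate → run → return loop can be sketched as a tiny pipeline. Everything here is illustrative: &lt;code&gt;call_llm&lt;/code&gt; is a hypothetical stand-in for a real model client, and the “generated” script is hard-coded so the sketch runs on its own:&lt;/p&gt;

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model client; returns generated code."""
    # In a real system this would hit an LLM API with a code-generation prompt.
    return "result = 0\nfor row in rows:\n    result = result + row['amount']"

def handle_request(user_request: str, rows: list[dict]) -> object:
    # 1. Interpret: turn the plain-English request into a constrained prompt.
    prompt = f"Write Python that operates on `rows` to satisfy: {user_request}"
    # 2. Generate: ask the model for a custom script.
    script = call_llm(prompt)
    # 3. Run: execute it in a deliberately tiny namespace (NOT real isolation).
    namespace = {"rows": rows}
    exec(script, {"__builtins__": {}}, namespace)
    # 4. Return: hand the computed value back to the user.
    return namespace["result"]

total = handle_request("sum the amount column", [{"amount": 2}, {"amount": 3}])
print(total)  # 5
```

&lt;p&gt;The emptied &lt;code&gt;__builtins__&lt;/code&gt; is brevity, not security; that gap is exactly what the rest of this post is about.&lt;/p&gt;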

&lt;p&gt;That felt… powerful.&lt;br&gt;
Almost too powerful.&lt;/p&gt;

&lt;h2&gt;And Then Security Enters the Room&lt;/h2&gt;

&lt;p&gt;That excitement didn’t last long 😅&lt;/p&gt;

&lt;p&gt;Because the next question immediately became:&lt;/p&gt;

&lt;p&gt;How do you make this safe?&lt;/p&gt;

&lt;p&gt;Once you allow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dynamic code generation&lt;/li&gt;
&lt;li&gt;Execution based on user input&lt;/li&gt;
&lt;li&gt;Open-ended instructions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’re basically inviting abuse.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompt injection&lt;/li&gt;
&lt;li&gt;Code injection&lt;/li&gt;
&lt;li&gt;Escaping sandboxes&lt;/li&gt;
&lt;li&gt;Resource exhaustion&lt;/li&gt;
&lt;li&gt;Unintended file access&lt;/li&gt;
&lt;li&gt;System manipulation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And that’s just the obvious stuff.&lt;/p&gt;
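&lt;p&gt;To make one of those threats concrete: resource exhaustion has a simple first-line mitigation, which is running generated code in a separate process with a hard time limit. This is a minimal sketch of that one idea, not a sandbox; real isolation needs containers, syscall filters, and more:&lt;/p&gt;

```python
import subprocess
import sys

def run_with_timeout(code: str, timeout_s: float = 2.0) -> str:
    """Run untrusted code in a child process with a hard wall-clock limit.

    A timeout kills the child, so an infinite loop can't hang the server.
    This bounds runtime only; it does nothing about file or network access.
    """
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return proc.stdout
    except subprocess.TimeoutExpired:
        return "<killed: exceeded time budget>"

print(run_with_timeout("print(2 + 2)"))            # well-behaved code runs
print(run_with_timeout("while True: pass", 0.5))   # runaway code gets killed
```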

&lt;h2&gt;Guardrails Everywhere… and the Cost of Them&lt;/h2&gt;

&lt;p&gt;Naturally, I started thinking about defenses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompt guardrails&lt;/li&gt;
&lt;li&gt;Input validation&lt;/li&gt;
&lt;li&gt;Keyword blocking&lt;/li&gt;
&lt;li&gt;Delimiters and escaping&lt;/li&gt;
&lt;li&gt;Schema enforcement&lt;/li&gt;
&lt;li&gt;Allowlists&lt;/li&gt;
&lt;li&gt;Sandboxing&lt;/li&gt;
&lt;li&gt;Adversarial testing&lt;/li&gt;
&lt;/ul&gt;
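&lt;p&gt;The allowlist idea from that list can be sketched with Python’s &lt;code&gt;ast&lt;/code&gt; module: statically inspect generated code and reject anything that imports a module outside an approved set, before it ever runs. The allowlist here is illustrative:&lt;/p&gt;

```python
import ast

ALLOWED_MODULES = {"math", "statistics", "csv"}  # illustrative allowlist

def check_imports(code: str) -> list[str]:
    """Return the names of disallowed modules imported by `code`."""
    tree = ast.parse(code)
    violations = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            violations += [a.name for a in node.names if a.name not in ALLOWED_MODULES]
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.module not in ALLOWED_MODULES:
                violations.append(node.module)
    return violations

print(check_imports("import math\nprint(math.sqrt(2))"))   # [] -> clean
print(check_imports("import os\nos.remove('data.db')"))    # ['os'] -> reject
```

&lt;p&gt;And even this is incomplete: a static check like this can be dodged via &lt;code&gt;__import__&lt;/code&gt; or string tricks, which is why it’s one layer among many, not a defense on its own.&lt;/p&gt;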

&lt;p&gt;But the more I thought about it, the clearer something became:&lt;/p&gt;

&lt;p&gt;Every layer of protection limits the model’s freedom.&lt;/p&gt;

&lt;p&gt;And here’s the uncomfortable truth I ran into:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you already know exactly what code can be generated,&lt;br&gt;
and exactly how it should behave,&lt;br&gt;
why not just write the code yourself?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The only scenario where this system truly makes sense is the most dangerous one:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You don’t know what code will be generated&lt;/li&gt;
&lt;li&gt;The schema is created dynamically&lt;/li&gt;
&lt;li&gt;Guards are applied dynamically&lt;/li&gt;
&lt;li&gt;Code is generated and executed without prior knowledge of the steps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s where the real value is.&lt;br&gt;
And that’s also where the real risk lives.&lt;/p&gt;

&lt;h2&gt;The Hidden Cost: Validation at Scale&lt;/h2&gt;

&lt;p&gt;Another thought hit me while learning about prompt injection attacks.&lt;/p&gt;

&lt;p&gt;There are so many of them.&lt;br&gt;
I’ve already seen more than 10, and I can think of even more.&lt;/p&gt;

&lt;p&gt;Each one adds:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Another check&lt;/li&gt;
&lt;li&gt;Another regex&lt;/li&gt;
&lt;li&gt;Another condition&lt;/li&gt;
&lt;li&gt;Another validation pass&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now imagine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;20+ validations per request&lt;/li&gt;
&lt;li&gt;Multiple users hitting your system simultaneously&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What does that do to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Latency?&lt;/li&gt;
&lt;li&gt;Cost?&lt;/li&gt;
&lt;li&gt;Complexity?&lt;/li&gt;
&lt;li&gt;Reliability?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where risk prioritization starts to matter more than perfection.&lt;/p&gt;

&lt;h2&gt;The Big Takeaway (So Far)&lt;/h2&gt;

&lt;p&gt;What I’m enjoying most about this journey is how every lesson leads to another question.&lt;/p&gt;

&lt;p&gt;You start with:&lt;/p&gt;

&lt;p&gt;“Can we do this?”&lt;/p&gt;

&lt;p&gt;Then quickly move to:&lt;/p&gt;

&lt;p&gt;“Should we do this?”&lt;br&gt;
“At what cost?”&lt;br&gt;
“And for whom?”&lt;/p&gt;

&lt;p&gt;LLMs don’t just force you to think about intelligence —&lt;br&gt;
they force you to think about systems, trade-offs, and responsibility.&lt;/p&gt;

&lt;p&gt;And honestly?&lt;br&gt;
That’s what’s making AI engineering genuinely exciting for me.&lt;/p&gt;

&lt;p&gt;If you’re building systems where models don’t just respond, but act, security isn’t an add-on.&lt;/p&gt;

&lt;p&gt;It’s the design.&lt;/p&gt;

&lt;p&gt;And I’m still learning how to get that balance right.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>python</category>
      <category>security</category>
    </item>
    <item>
      <title>My first post</title>
      <dc:creator>Onyedikachi Ejim</dc:creator>
      <pubDate>Mon, 04 Dec 2023 06:00:43 +0000</pubDate>
      <link>https://dev.to/kachi/my-first-post-292</link>
      <guid>https://dev.to/kachi/my-first-post-292</guid>
      <description>&lt;p&gt;What a journey this is going to be.&lt;/p&gt;

</description>
      <category>webdev</category>
    </item>
  </channel>
</rss>
