<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Samuel Omisakin</title>
    <description>The latest articles on DEV Community by Samuel Omisakin (@focus1010).</description>
    <link>https://dev.to/focus1010</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3881376%2Ffd66502f-e7e2-4f85-a7ec-216e4ac01259.jpeg</url>
      <title>DEV Community: Samuel Omisakin</title>
      <link>https://dev.to/focus1010</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/focus1010"/>
    <language>en</language>
    <item>
      <title>Developers are shipping AI agents without any oversight mechanisms, I'm building a pattern library to fix that</title>
      <dc:creator>Samuel Omisakin</dc:creator>
      <pubDate>Thu, 16 Apr 2026 01:32:10 +0000</pubDate>
      <link>https://dev.to/focus1010/developers-are-shipping-ai-agents-without-any-oversight-mechanisms-im-building-a-pattern-library-3pbj</link>
      <guid>https://dev.to/focus1010/developers-are-shipping-ai-agents-without-any-oversight-mechanisms-im-building-a-pattern-library-3pbj</guid>
      <description>&lt;p&gt;Six months ago I started paying attention to how developers talk about AI agents they've built and shipped. Something came up again and again.&lt;/p&gt;

&lt;p&gt;They'd describe what the agent does. They'd mention a bug or an unexpected output. Then someone would ask: "What stops it from doing that again?" And the answer, more often than not, was some version of: "I haven't really thought about that."&lt;/p&gt;

&lt;p&gt;This isn't negligence. It's a tooling gap. There's a lot of writing about AI safety at the research level — papers on alignment, interpretability, RLHF. There's almost nothing at the level of: "Here is a code pattern you can add to your agent today that reduces the chance it does something you didn't intend."&lt;/p&gt;

&lt;p&gt;So I'm building that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The project is called AI Oversight Patterns.&lt;/strong&gt; It's an open-source catalog of software engineering patterns for maintaining human control over AI agents. Each pattern targets a specific failure mode that shows up in real deployments. Each one comes with a description of when to use it, a Python implementation, and a breakdown of failure modes and tradeoffs.&lt;/p&gt;

&lt;p&gt;Three patterns are live right now:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Human Approval Gate&lt;/strong&gt; — Before executing any irreversible action (send email, delete record, submit payment), the agent generates a plain-language summary of what it's about to do and waits for a human yes/no. The agent proposes. The human decides.&lt;/p&gt;
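
&lt;p&gt;As a rough illustration (a hedged sketch, not the library's actual API; &lt;code&gt;approval_gate&lt;/code&gt; and its parameters are made up for this post), the gate can be as small as a function that summarizes the action and blocks on a yes/no:&lt;/p&gt;

```python
# Minimal sketch of a Human Approval Gate. All names here are illustrative.
def approval_gate(action_name, params, ask=input):
    """Summarize a proposed action and block until a human answers yes/no."""
    summary = f"About to run '{action_name}' with {params}. Proceed? [y/n] "
    answer = ask(summary).strip().lower()
    return answer in ("y", "yes")

def send_email(to, body):
    print(f"email sent to {to}")

# Irreversible actions run only after explicit approval.
# (ask is stubbed with a lambda so the example doesn't block on stdin.)
if approval_gate("send_email", {"to": "user@example.com"}, ask=lambda _: "y"):
    send_email("user@example.com", "hello")
```

The point is that the gate sits between the agent's proposal and the side effect, so a "no" means the action simply never executes.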

&lt;p&gt;&lt;strong&gt;Action Scope Limiter&lt;/strong&gt; — At startup, you define a whitelist of what the agent is allowed to do. That list is enforced in code, not just in the system prompt. If an action isn't on the list, it can't happen. No amount of clever prompting changes that.&lt;/p&gt;
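
&lt;p&gt;A minimal sketch of the idea (function and exception names are assumptions for illustration, not the repo's API): the whitelist lives in code, so the check runs no matter what the prompt says:&lt;/p&gt;

```python
# Illustrative Action Scope Limiter. Names are made up for this example.
ALLOWED_ACTIONS = {"search_docs", "draft_reply"}

class ActionNotAllowed(Exception):
    pass

def execute(action_name, handler_table, **kwargs):
    """Run an action only if it is on the startup whitelist."""
    if action_name not in ALLOWED_ACTIONS:
        raise ActionNotAllowed(f"{action_name!r} is not whitelisted")
    return handler_table[action_name](**kwargs)

handlers = {"search_docs": lambda query: f"results for {query}",
            "delete_record": lambda record_id: "deleted"}

print(execute("search_docs", handlers, query="refunds"))

# "delete_record" has a handler, but it is not on the whitelist,
# so no prompt output can ever reach it:
try:
    execute("delete_record", handlers, record_id=7)
except ActionNotAllowed as err:
    print(err)
```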

&lt;p&gt;&lt;strong&gt;Audit Log Checkpoint&lt;/strong&gt; — Before every action, the agent writes a structured log entry: what it's about to do, why it chose that action, what alternatives it considered, and how confident it is. Append-only. Useful for debugging, for compliance, and for improving the system over time.&lt;/p&gt;
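
&lt;p&gt;One way this might look (the field names and the JSONL format are my assumptions, not necessarily what the repo uses): append one structured JSON line per decision before acting:&lt;/p&gt;

```python
# Hedged sketch of an Audit Log Checkpoint. Field names are illustrative.
import json, time

def log_checkpoint(path, action, reason, alternatives, confidence):
    """Append one JSON line per decision; the file is never rewritten."""
    entry = {
        "ts": time.time(),
        "action": action,
        "reason": reason,
        "alternatives": alternatives,
        "confidence": confidence,
    }
    with open(path, "a") as fh:  # opened in append mode: append-only by construction
        fh.write(json.dumps(entry) + "\n")
    return entry

entry = log_checkpoint("audit.jsonl", "draft_reply",
                       "user asked for a summary",
                       ["escalate_to_human"], 0.82)
print(entry["action"])
```

One JSON object per line keeps the log greppable and lets you replay an agent's decision trail after the fact.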

&lt;p&gt;I'm planning 20 patterns total. The remaining 17 cover things like rollback checkpoints, confidence threshold pauses, blast radius limiters, multi-agent scope boundaries, and graceful uncertainty escalation.&lt;/p&gt;

&lt;p&gt;The goal is not to make AI agents slower or more annoying to build. It's to give developers a concrete reference for the specific moments where adding a checkpoint is worth the friction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/Focus1010/ai-oversight-patterns" rel="noopener noreferrer"&gt;https://github.com/Focus1010/ai-oversight-patterns&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;I'm also running a short survey for people building with LLM APIs. Three questions about whether you've added anything like this to your agents and whether a reference like this would be useful. Responses are helping me prioritize which patterns to build first.&lt;/p&gt;

&lt;p&gt;If you're building with LLMs — especially anything agentic — I'd appreciate your input: &lt;a href="https://dev.to/focus1010/quick-question-for-people-building-with-llm-apis-3-questions-2-min-3mf9"&gt;https://dev.to/focus1010/quick-question-for-people-building-with-llm-apis-3-questions-2-min-3mf9&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;A few things I'm genuinely curious about from people who've shipped agents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have you ever had an agent do something unintended in production? What happened?&lt;/li&gt;
&lt;li&gt;Do you think about oversight when you build agents, or does it feel like overkill for your use case?&lt;/li&gt;
&lt;li&gt;Is there a failure mode you've personally encountered that isn't covered by the three patterns above?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy to discuss any of this in the comments.&lt;/p&gt;




</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Need all hands on this short survey</title>
      <dc:creator>Samuel Omisakin</dc:creator>
      <pubDate>Thu, 16 Apr 2026 00:39:22 +0000</pubDate>
      <link>https://dev.to/focus1010/need-all-hand-on-this-short-survey-25ml</link>
      <guid>https://dev.to/focus1010/need-all-hand-on-this-short-survey-25ml</guid>
      <description>&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/focus1010/quick-question-for-people-building-with-llm-apis-3-questions-2-min-3mf9" class="crayons-story__hidden-navigation-link"&gt;Quick question for people building with LLM APIs (3 questions, 2 min)&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/focus1010" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3881376%2Ffd66502f-e7e2-4f85-a7ec-216e4ac01259.jpeg" alt="focus1010 profile" class="crayons-avatar__image" width="460" height="460"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/focus1010" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Samuel Omisakin
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Samuel Omisakin
                
              
              &lt;div id="story-author-preview-content-3507471" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/focus1010" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3881376%2Ffd66502f-e7e2-4f85-a7ec-216e4ac01259.jpeg" class="crayons-avatar__image" alt="" width="460" height="460"&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Samuel Omisakin&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://dev.to/focus1010/quick-question-for-people-building-with-llm-apis-3-questions-2-min-3mf9" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Apr 16&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/focus1010/quick-question-for-people-building-with-llm-apis-3-questions-2-min-3mf9" id="article-link-3507471"&gt;
          Quick question for people building with LLM APIs (3 questions, 2 min)
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/ai"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;ai&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/openai"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;openai&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/claude"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;claude&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/productivity"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;productivity&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/focus1010/quick-question-for-people-building-with-llm-apis-3-questions-2-min-3mf9" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="24" height="24"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;1&lt;span class="hidden s:inline"&gt; reaction&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/focus1010/quick-question-for-people-building-with-llm-apis-3-questions-2-min-3mf9#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            2 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


</description>
    </item>
    <item>
      <title>Quick question for people building with LLM APIs (3 questions, 2 min)</title>
      <dc:creator>Samuel Omisakin</dc:creator>
      <pubDate>Thu, 16 Apr 2026 00:37:37 +0000</pubDate>
      <link>https://dev.to/focus1010/quick-question-for-people-building-with-llm-apis-3-questions-2-min-3mf9</link>
      <guid>https://dev.to/focus1010/quick-question-for-people-building-with-llm-apis-3-questions-2-min-3mf9</guid>
      <description>&lt;p&gt;I'm building an open-source reference called AI Oversight Patterns, a catalog of software patterns for keeping humans in control of AI agents. Things like approval gates before irreversible actions, action whitelists, audit logs, that kind of thing.&lt;/p&gt;

&lt;p&gt;Before I go further, I want to make sure I'm solving a real gap and not just something that seems important to me. Three quick questions:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Are you currently building or maintaining an application that uses an LLM API (OpenAI, Anthropic, Gemini, etc.)?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Yes, actively building&lt;/li&gt;
&lt;li&gt;Yes, it's in production&lt;/li&gt;
&lt;li&gt;Was building, now paused&lt;/li&gt;
&lt;li&gt;No, but planning to&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Have you implemented any mechanism specifically to keep humans in control of what your AI agent can do? For example: an approval step before a sensitive action, a whitelist of what the agent is allowed to do, a log of what the agent decided and why.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Yes, I have something like this&lt;/li&gt;
&lt;li&gt;No, I haven't thought about it much&lt;/li&gt;
&lt;li&gt;No, I thought about it but it felt like overkill&lt;/li&gt;
&lt;li&gt;I rely on the model's training to self-limit&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. If a public GitHub repo existed with 20 documented patterns like these, each with a code example and a description of failure modes, would you use it?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Yes, I'd use it as a reference&lt;/li&gt;
&lt;li&gt;Maybe, depends on the quality&lt;/li&gt;
&lt;li&gt;Probably not, I'd build my own approach&lt;/li&gt;
&lt;li&gt;I don't think oversight mechanisms are necessary for my use case&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Drop your answers in the comments. Any extra context is also welcome; I'm especially curious about the "I thought about it but felt like overkill" responses.&lt;/p&gt;

&lt;p&gt;Repo (work in progress): &lt;a href="https://github.com/Focus1010/ai-oversight-patterns" rel="noopener noreferrer"&gt;https://github.com/Focus1010/ai-oversight-patterns&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openai</category>
      <category>claude</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
