DEV Community

Samuel Omisakin

Quick question for people building with LLM APIs (3 questions, 2 min)

I'm building an open-source reference called AI Oversight Patterns: a catalog of software patterns for keeping humans in control of AI agents. Think approval gates before irreversible actions, action whitelists, and audit logs.
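To make the kind of pattern I mean concrete, here's a minimal sketch combining all three ideas: a whitelist of allowed actions, an approval gate for irreversible ones, and an audit log of every decision. All names here (`Overseer`, `ALLOWED_ACTIONS`, etc.) are hypothetical, not from the repo:

```python
# Hypothetical sketch: whitelist + approval gate + audit log.
from dataclasses import dataclass, field

ALLOWED_ACTIONS = {"read_file", "send_email", "delete_record"}
REQUIRES_APPROVAL = {"send_email", "delete_record"}  # irreversible actions


@dataclass
class Overseer:
    audit_log: list = field(default_factory=list)

    def execute(self, action: str, approve) -> str:
        # Whitelist: refuse anything not explicitly allowed.
        if action not in ALLOWED_ACTIONS:
            self.audit_log.append((action, "blocked: not whitelisted"))
            return "blocked"
        # Approval gate: ask a human before an irreversible action runs.
        if action in REQUIRES_APPROVAL and not approve(action):
            self.audit_log.append((action, "denied by human"))
            return "denied"
        # Audit log: record what was done and why it was allowed.
        self.audit_log.append((action, "executed"))
        return "executed"
```

In a real agent, `approve` would be a UI prompt or a ticketing hook rather than a callback, but the control flow is the point: the model proposes, the gate disposes, and everything lands in the log.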

Before I go further, I want to make sure I'm solving a real gap and not just something that seems important to me. Three quick questions:

1. Are you currently building or maintaining an application that uses an LLM API (OpenAI, Anthropic, Gemini, etc.)?

  • Yes, actively building
  • Yes, it's in production
  • Was building, now paused
  • No, but planning to

2. Have you implemented any mechanism specifically to keep humans in control of what your AI agent can do? For example: an approval step before a sensitive action, a whitelist of what the agent is allowed to do, a log of what the agent decided and why.

  • Yes, I have something like this
  • No, I haven't thought about it much
  • No, I thought about it but it felt like overkill
  • I rely on the model's training to self-limit

3. If a public GitHub repo existed with 20 documented patterns like these, each with a code example and a description of failure modes, would you use it?

  • Yes, I'd use it as a reference
  • Maybe, depends on the quality
  • Probably not, I'd build my own approach
  • I don't think oversight mechanisms are necessary for my use case

Drop your answers in the comments. Any extra context is also welcome; I'm especially curious about the "I thought about it but it felt like overkill" responses.

Repo (work in progress): https://github.com/Focus1010/ai-oversight-patterns
