Give AI Agents a "Human Supervisor"
The age of AI Agents has arrived.
They promise a future of automation for everything.
But we all harbor an unspoken fear.
What if the AI "thinks wrong"?
Could it send a "destructive" email to the entire company without permission?
Could it mistakenly delete key files from the production environment?
This fear of "loss of control" is the final hurdle preventing AI Agents from transitioning from toys to production tools.
Today, a bridge is rising above the chasm.
An open-source project called humanlayer brings us the ultimate solution.
It's not about restricting AI.
It's about taming AI.
What exactly is it?
humanlayer is an open-source JavaScript SDK.
It aims to integrate a "human-in-the-loop" workflow seamlessly into your AI Agent.
Essentially, it installs a "brake pedal" and a "human approval stage" for your AI Agent.
This means that before executing any high-risk or irreversible operation, the AI must stop.
It must first seek human permission.
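Before looking at the SDK, the underlying pattern itself is simple: wrap a dangerous tool so it cannot run until a human says yes. Here is a minimal, SDK-free sketch in TypeScript; the `requireApproval` wrapper and `askHuman` approver are illustrative names, not the humanlayer API.

```typescript
// A minimal sketch of the human-in-the-loop pattern, independent of any SDK.
// `Approver` is a hypothetical stand-in for a real approval channel
// (Slack, email, a CLI prompt, ...).
type Approver = (summary: string) => Promise<boolean>;

// Wrap a dangerous tool so it only runs after a human approves it.
function requireApproval<A extends unknown[], R>(
  name: string,
  tool: (...args: A) => Promise<R>,
  askHuman: Approver
): (...args: A) => Promise<R> {
  return async (...args: A) => {
    const approved = await askHuman(
      `The agent wants to call ${name}(${JSON.stringify(args)}). Approve?`
    );
    if (!approved) {
      throw new Error(`Human rejected call to ${name}`);
    }
    return tool(...args);
  };
}

// Example: a "dangerous" tool plus an auto-approving stub approver.
const deleteFile = async (path: string) => `deleted ${path}`;
const alwaysYes: Approver = async () => true;

const safeDeleteFile = requireApproval("delete_file", deleteFile, alwaysYes);
```

Swapping `alwaysYes` for a real channel turns this stub into an actual approval gate; the rest of the agent code never changes.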
GitHub project link: https://github.com/humanlayer/humanlayer
Why is it a game-changer?
We've all fantasized about fully autonomous AI.
But the reality is that current AI Agents are unreliable.
Letting an "intern" press the company's "nuclear button" would be catastrophic.
The underlying logic of humanlayer is a shift from "complete trust" to "verifiable trust".
First, you define which tools are "dangerous", such as send_email or delete_file.
Next, when the AI Agent decides to use one of these tools, humanlayer automatically intercepts the operation.
It will not execute immediately.
Then, it will send a request to the human approver through the channel you've configured.
This request will clearly state: "The AI wants to send this email, the content is as follows, do you approve?"
Finally, the code will only continue to execute after a human clicks "Approve".
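The request/approve round-trip described above can be modeled as a tiny in-memory approval queue. This is a toy sketch of the mechanism, not the humanlayer API; all names here are illustrative.

```typescript
// A toy model of the round-trip: the agent posts a request and waits,
// the human lists pending requests and clicks "Approve" or "Reject".
interface ApprovalRequest {
  id: number;
  message: string;
  resolve: (approved: boolean) => void;
}

class ApprovalQueue {
  private nextId = 1;
  private pending = new Map<number, ApprovalRequest>();

  // Agent side: post a request and get a promise that resolves
  // only when a human decides.
  request(message: string): { id: number; decision: Promise<boolean> } {
    const id = this.nextId++;
    const decision = new Promise<boolean>((resolve) => {
      this.pending.set(id, { id, message, resolve });
    });
    return { id, decision };
  }

  // Human side: see what is waiting for approval.
  list(): string[] {
    return [...this.pending.values()].map((r) => `#${r.id}: ${r.message}`);
  }

  // Human side: approve or reject, unblocking the agent.
  decide(id: number, approved: boolean): void {
    const req = this.pending.get(id);
    if (!req) throw new Error(`No pending request #${id}`);
    this.pending.delete(id);
    req.resolve(approved);
  }
}
```

A real implementation would deliver the request over Slack, email, or a web UI instead of an in-memory map, but the control flow is the same: the agent blocks on `decision` until a human calls `decide`.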
The pain of manually building a complex approval workflow in the past is now encapsulated into an extremely simple API.
import { HumanLayer, Assistant } from "humanlayer-sdk";

// Initialization
const humanlayer = new HumanLayer({ apiKey: "..." });
const assistant = new Assistant({ humanlayer });

// Run your Agent
await assistant.run("Send my boss the weekly report email", {
  tools: {
    send_email: {
      // Mark this tool as requiring human approval
      human_approval: true,
      func: async ({ to, subject, body }) => {
        // ... email sending logic ...
      },
    },
  },
});
With that, the reins of trust are back in your hands.
How to get started quickly?
First, you need to install this library.
In your terminal, enter the following command:
npm install humanlayer-sdk
After installation, the next step is to obtain your API key and start writing code.
With just a few lines of code, you can equip your AI Agent with this powerful "human supervisor".
It supports all major AI models and frameworks.
An era of safer, more reliable, and truly production-ready AI Agents has arrived.
Objectively speaking, humanlayer is not intended to replace AI's autonomy.
It aims to add a safety guardrail to AI's autonomy.
It allows us to confidently apply AI Agents to more critical and higher-value business scenarios.
It allows us to move from "fear" of AI to "mastery" of AI.
Now go to GitHub and give it a star! This is not only an endorsement of the project, but also a vote for a more responsible and reliable way of building AI!
What scenarios, besides email and file operations, do you think AI Agents should absolutely not "act on their own" in?
Share your thoughts in the comments!