Kevin
Placet: An Open Source Human-in-the-Loop Platform for AI Agents and Automation Workflows

AI agents are getting better every week. They write code, analyze data, generate reports, moderate content, and propose infrastructure changes. In many cases that output is good enough to act on directly.

But for most business-critical workflows a human still needs to be in the loop (HITL). Not because the AI is unreliable, but because accountability, context, and final judgment still matter.

The question is: where does that human review actually happen?

In most teams the answer is Slack, Teams, Telegram, or email. Someone builds a bot, it sends a message, attaches some context, asks for a thumbs-up. It works, barely. But these tools were designed for human-to-human communication and they were never built for structured agent-human collaboration.

I built Placet to fix that.


What is Placet?

📸 Image: Screenshot of the Placet web UI.

Placet (Latin for "it pleases" or "approved") is a self-hostable, open source inbox purpose-built for human-in-the-loop (HITL) workflows. It provides:

  • A REST API that any agent, script, or automation tool can call via standard HTTP
  • A web UI where humans review messages, respond to structured requests, and annotate files
  • A plugin system for rendering custom message types inside sandboxed iframes
  • A webhook and long-polling system to deliver review responses back to the agent

The design philosophy is simple: be the cURL of human interaction. If your tool can make an HTTP request, it can integrate with Placet. No SDK required, no framework coupling.

📸 Image: Architecture diagram showing an AI agent on the left sending a POST request to the Placet API, the web UI in the center where a human reviews and responds, and a webhook callback flowing back to the agent on the right. Clean, three-component view.


Why Not Just Use Slack or Telegram?

This question comes up every time. The short answer: an approval button in Slack is a hack, not a feature.

| Problem | Slack / Telegram / Teams | Placet |
|---|---|---|
| Structured approval with styled buttons | Button text only, no visual hierarchy | Primary / Danger / Default button styles |
| Multi-field form submission | Impossible without a custom app | Native form review type (12 field types) |
| Rich file previews inline | Images only, limited context | PDF, DOCX, XLSX, MP4, audio, code, SVG all inline |
| Image annotation | Not possible | Canvas overlay: pen, arrow, rectangle, text |
| Review expiry with webhook callback | Manual workaround required | Built-in, configurable per review |
| Delivery status tracking | Not available | WhatsApp-style: sent → delivered → agent_received |
| Agent status heartbeat | Not available | 4 states with full history timeline |
| Self-hosted, no cloud dependency | SaaS only or complex setup | One `docker compose up` |
| Open source | No | Yes |

I personally replaced my Telegram-based approval flows with Placet and the difference was immediate. Review context stays in one place, responses are structured JSON instead of freeform text, and I can annotate AI-generated images without switching to another tool.


Core Concepts

Agents and Channels

Every integration is an agent: an entity that holds an API key and has its own chat channel in the UI. You can have as many agents as you want: one per LangChain workflow, one per CI/CD pipeline, one per cron job.

Each agent gets:

  • Its own isolated channel (like a dedicated chat thread)
  • A configurable webhook URL for receiving review responses
  • An optional avatar and description for identification
  • A status heartbeat system with full history

The Five Review Types

Placet ships with five built-in review primitives:

| Type | When to use | Response shape |
|---|---|---|
| Approval | Binary or small set of choices (approve/reject) | `{ selectedOption, comment? }` |
| Selection | Single or multi-select from a list of items | `{ selectedIds: [...] }` |
| Form | Structured data entry with multiple typed fields | `{ fieldName: value, ... }` |
| Text Input | Open-ended freeform response with optional markdown preview | `{ text: "..." }` |
| Freeform | Custom JSON, rendered and submitted by a plugin | any JSON |

All review types support:

  • expiresInSeconds or expiresAt (default 24 hours, max 36 hours)
  • Per-message webhook callbacks
  • Long-polling via GET /api/v1/reviews/:id/wait
  • A review:expired webhook callback when the timer runs out
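To make the expiry option concrete, here is a minimal Python sketch that assembles such a request. The payload shape follows the examples later in this post, but where exactly expiresInSeconds sits in the body is my assumption; verify it against the API reference at docs.placet.io:

```python
import json

# Sketch: an approval review with an explicit expiry. Field names follow
# this post; the exact placement of expiresInSeconds in the body is an
# assumption to check against the official API reference.
def build_approval_message(channel_id, text, options, expires_in_seconds=3600):
    return {
        "channelId": channel_id,
        "text": text,
        "review": {
            "type": "approval",
            "payload": {"options": options},
        },
        "expiresInSeconds": expires_in_seconds,  # default 24 hours, max 36 hours
    }

payload = build_approval_message(
    "your-agent-id",
    "Rotate the staging credentials?",
    [
        {"id": "rotate", "label": "Rotate", "style": "primary"},
        {"id": "skip", "label": "Skip", "style": "default"},
    ],
    expires_in_seconds=900,  # give the reviewer 15 minutes
)
print(json.dumps(payload, indent=2))
```

If the reviewer does not respond within the window, the review:expired callback fires instead of a response.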

How Agents Receive Responses

When a human responds to a review, the agent can receive it via one of three connection types:

  1. Webhook callback: Placet POSTs the response to your configured URL
  2. Long-polling: The agent waits on GET /api/v1/reviews/:id/wait for up to 30 seconds
  3. WebSocket: Subscribe to real-time events via Socket.io (e.g. review:responded, review:expired, message:created)

The WebSocket connection is particularly useful for agents that want to stay permanently connected and react instantly without the overhead of repeated polling. Here is a minimal example using the Socket.io client:

```javascript
import { io } from 'socket.io-client';

const socket = io('https://your-placet-instance.com', {
  auth: { token: 'hp_your_api_key' },
});

socket.on('review:responded', (event) => {
  const { messageId, channelId, response } = event;
  console.log(`Review ${messageId} completed:`, response);
});

socket.on('review:expired', (event) => {
  console.log(`Review ${event.messageId} expired without a response`);
});
```

Webhook Resolution: Three Layers

When a human responds to a review, Placet resolves where to send the callback in a fixed priority chain:

| Priority | Source | How to set it |
|---|---|---|
| 1 (highest) | Message-level webhook | Pass `webhookUrl` in the `POST /api/v1/messages` body |
| 2 | Agent-level default webhook | Set once in the agent settings panel |
| 3 | Legacy inline callback | A `callback` field inside the review payload (backwards compatibility) |

The message-level override exists because real pipelines are rarely that uniform. A single agent might dispatch review requests from multiple concurrent LangChain runs, each needing its response routed somewhere different: a per-run callback URL, a short-lived ngrok tunnel, a specific Lambda invocation. Passing webhookUrl per message solves this cleanly without spinning up a new agent for each run.
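In practice the override is a single extra field on the message body. A Python sketch, where the callback host and run-ID scheme are purely illustrative and not part of the Placet API:

```python
import json

# Sketch: routing each run's callback to its own URL via the per-message
# webhookUrl field (priority 1 in the resolution chain). The
# callbacks.example.com host and run-ID scheme are illustrative only.
def message_for_run(channel_id, run_id, text, review):
    return {
        "channelId": channel_id,
        "text": text,
        "review": review,
        # Overrides the agent-level default webhook for this message only.
        "webhookUrl": f"https://callbacks.example.com/runs/{run_id}",
    }

msg = message_for_run(
    "your-agent-id",
    "run-42",
    "Approve step 3 of this run?",
    {
        "type": "approval",
        "payload": {
            "options": [
                {"id": "continue", "label": "Continue", "style": "primary"},
                {"id": "abort", "label": "Abort", "style": "danger"},
            ]
        },
    },
)
print(json.dumps(msg, indent=2))
```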

WebSocket events (review:responded, review:expired) always fire in parallel with the HTTP callback, regardless of which tier is active. When a webhook call fails, the message flips to a webhook_failed delivery status, visible as a red indicator in the UI. A single click retries delivery without touching the review state.

Push-Only vs Bidirectional Channels

Not every workflow needs a free-text chat box. Placet channels support two practical communication patterns, and which one applies is determined by how the agent is configured.

Push-only (watch mode): The agent sends messages to the inbox and may request structured responses. Humans respond exclusively through the built-in review UI: clicking an approval button, selecting options, filling a form, drawing annotations. The free-text message input is present in the UI but serves no purpose for the agent, because there is no webhook to receive unstructured user messages. This is the right pattern for automated pipelines where the agent controls the agenda and the human is there to gate-keep specific decision points.

Bidirectional (chat mode): When the agent has a webhook configured, human-typed messages (stored with senderType: "user") are forwarded to that same webhook in real time, alongside review responses and delivery events. The agent can react to free-text input, ask follow-up questions, or run a full conversational loop. The LangChain example in the repository demonstrates this pattern: the agent pauses mid-task to ask a question, the human types an answer in the chat box, and the agent continues with the new context.

The distinction matters when you are designing a workflow. A production deploy gate needs only structured approval buttons. A research assistant that takes mid-run guidance needs the full chat loop. Placet does not force a choice: configure the webhook and you get both structured reviews and free-text input from the same channel.
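When a channel is bidirectional, one webhook endpoint receives both kinds of traffic. Here is a minimal Python dispatch sketch; it assumes the payload exposes the senderType and review response fields described above, and the real event envelope may differ, so check the API reference:

```python
# Sketch: dispatching incoming webhook payloads on a bidirectional channel.
# Assumes the payload carries senderType and a review.response as described
# in this post; the actual event envelope may differ.
def dispatch(payload):
    if payload.get("senderType") == "user":
        # Free-text message a human typed into the channel.
        return ("chat", payload.get("text"))
    response = (payload.get("review") or {}).get("response")
    if response is not None:
        # Structured review response: approval click, form submit, ...
        return ("review", response)
    # Delivery receipts, expiry events, anything else.
    return ("other", None)

print(dispatch({"senderType": "user", "text": "use the smaller model"}))
# → ('chat', 'use the smaller model')
print(dispatch({"review": {"response": {"selectedOption": "approve"}}}))
# → ('review', {'selectedOption': 'approve'})
```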

Upcoming Integrations

Beyond the REST API and WebSocket, two more connection types are actively in development:

| Integration | Status | What it will enable |
|---|---|---|
| MCP server | In development | Claude, Cursor, and other MCP-compatible agents can call Placet tools (send message, request approval, wait for response) natively without an HTTP wrapper |
| n8n node | In development | Native Placet node for n8n workflows: trigger on review response, send messages, request approvals directly from the n8n canvas |
| Make.com module | In development | Same native integration for Make (formerly Integromat) automation scenarios |

Once the MCP server ships, agents running in Claude or any MCP-compatible runtime will be able to integrate with Placet with zero REST boilerplate. The n8n and Make.com integrations will make Placet accessible to no-code automation builders without writing a single line of code.


The API in Practice

Sending a message is a single HTTP call. No SDK, no special client library required:

```bash
curl -X POST https://your-placet-instance.com/api/v1/messages \
  -H "Authorization: Bearer hp_your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"channelId": "your-agent-id", "text": "Analysis complete.", "status": "success"}'
```

Adding a human approval request takes a few more fields:

```bash
curl -X POST https://your-placet-instance.com/api/v1/messages \
  -H "Authorization: Bearer hp_your-key-here" \
  -H "Content-Type: application/json" \
  -d '{
    "channelId": "your-agent-id",
    "text": "Deploy v2.1 to production?",
    "review": {
      "type": "approval",
      "payload": {
        "options": [
          {"id": "deploy", "label": "Deploy", "style": "primary"},
          {"id": "cancel", "label": "Cancel", "style": "danger"}
        ]
      }
    }
  }'
```

From Python, using requests:

```python
import requests

BASE_URL = "https://your-placet-instance.com"
API_KEY = "hp_your-api-key"
CHANNEL_ID = "your-agent-id"

resp = requests.post(
    f"{BASE_URL}/api/v1/messages",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "channelId": CHANNEL_ID,
        "text": "Please review the attached report.",
        "status": "warning",
        "review": {
            "type": "approval",
            "payload": {
                "options": [
                    {"id": "approve", "label": "Approve", "style": "primary"},
                    {"id": "reject",  "label": "Reject",  "style": "danger"},
                ],
            },
        },
    },
).json()

# Long-poll for the human response (synchronous, max 30 s per call)
review = requests.get(
    f"{BASE_URL}/api/v1/reviews/{resp['id']}/wait",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=35,  # slightly above the server's 30 s long-poll window
).json()

print(review["message"]["review"]["response"]["selectedOption"])  # "approve" or "reject"
```

The full API reference is available at docs.placet.io.


Agent Status Heartbeat

Beyond sending messages, agents can report their current operational status so humans have a live health dashboard across all running workflows:

```bash
curl -X POST http://localhost:3001/api/v1/status/ping \
  -H "Authorization: Bearer hp_your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"status": "busy", "message": "Processing 847 records from the data pipeline"}'
```

The four status values are active, busy, error, and offline. The UI shows the current badge next to the agent name and maintains a full history timeline, which is useful for debugging why an agent went silent or when a pipeline stalled.
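A Python agent might wrap the ping endpoint in a small helper like this. The requests usage mirrors the earlier example; the client-side status validation is my own addition, not something the API requires:

```python
import requests

# The four status values documented above. Rejecting anything else
# client-side is my own addition, not an API requirement.
VALID_STATUSES = {"active", "busy", "error", "offline"}

def ping(base_url, api_key, status, message=None):
    """Report the agent's operational status via POST /api/v1/status/ping."""
    if status not in VALID_STATUSES:
        raise ValueError(f"unknown status: {status!r}")
    body = {"status": status}
    if message is not None:
        body["message"] = message
    return requests.post(
        f"{base_url}/api/v1/status/ping",
        headers={"Authorization": f"Bearer {api_key}"},
        json=body,
        timeout=10,
    )
```

An agent loop would then call ping(...) when it starts a new phase of work, and ping(..., "offline") on shutdown, so the history timeline in the UI reflects the full lifecycle.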


The Plugin System

One of Placet's more distinctive features is its plugin system. You can define custom message renderers as static HTML files loaded in sandboxed iframes. All built-in review types use this same system internally; there is no special-casing.

A plugin is a directory with three files. That is all it takes:

```text
packages/plugins/my-plugin/
  plugin.json   - manifest: name, version, input schema, HTTP permissions
  render.html   - the UI: plain HTML + CSS + JS, no build step required
  icon.svg      - optional icon shown in the Settings UI
```

The plugin receives message data from the host via postMessage and submits responses the same way. Outbound HTTP requests are proxied server-side with a per-plugin domain allowlist, so plugins can call external APIs without exposing credentials to the browser or opening SSRF vectors.

Plugins are decoupled from the review system. A plugin controls how a message is rendered; a review controls whether user input is required. You can use either independently or combine them on the same message.

Two example plugins are included in the repository to use as a starting point:

| Plugin | What it does | Source |
|---|---|---|
| form-submit | Renders a dynamic form and POSTs the response to a configurable webhook URL | `packages/plugins/form-submit` |
| kroki-diagram | Renders Mermaid, PlantUML, D2, Graphviz, and more via a Kroki server | `packages/plugins/kroki-diagram` |

File Handling

Placet treats files as first-class citizens of the review workflow. Supported formats are previewed inline without any application switching.

| Category | Formats |
|---|---|
| Images | JPG, PNG, GIF, WebP, SVG |
| Video | MP4, WebM, MOV (inline player) |
| Audio | MP3, WAV, OGG, M4A (inline player) |
| Documents | PDF, DOCX, ODT |
| Spreadsheets | XLSX, XLS, ODS, CSV |
| Presentations | PPTX |
| Code / Text | 40+ languages with Shiki syntax highlighting |
| Markdown | GitHub Flavored Markdown rendered inline |

Image annotation is built directly into the review flow. When an agent generates images, diagrams, or screenshots, the human reviewer can open an annotation canvas in-chat and draw with pen, arrows, rectangles, and text labels. The annotated image is saved back into the conversation. No external markup tool needed.

Additional file features: JWT-based share links (1-hour expiry), bulk ZIP download, full-text search in the file browser, and presigned S3-compatible uploads via MinIO.


Self-Hosting in Three Minutes

Prerequisites: Git, Node.js 22+, Docker with Docker Compose v2, 2 GB RAM.

```bash
git clone https://github.com/placet-io/placet.git
cd placet
cp .env.example .env
make setup
```

make setup installs dependencies, builds all packages, starts the full Docker Compose stack (PostgreSQL + MinIO + backend + frontend), runs database migrations, and creates the initial user. Everything runs locally with zero cloud dependencies.

Services available after setup:

| Service | URL |
|---|---|
| Frontend | http://localhost:3000 |
| Backend API | http://localhost:3001 |
| API docs | https://docs.placet.io |
| MinIO Console | http://localhost:9001 |

Default login: admin@placet.local / changeme (configurable in .env)

Once you are in:

  1. Go to Settings → API Keys and create a key
  2. Go to Settings → Agents and create an agent
  3. Send your first message:
```bash
curl -X POST http://localhost:3001/api/v1/messages \
  -H "Authorization: Bearer hp_your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello from my agent!", "status": "success"}'
```
  4. Open the agent channel in the UI and your message appears in real time.

For production deployments, a Traefik overlay (docker-compose.traefik.yml) is included and handles automatic HTTPS via Let's Encrypt.


How I Use Placet in My Own Workflows

I run Placet as the central review layer across several of my personal and business automation workflows. Here are three concrete examples:

Document approval pipeline. An AI pipeline processes incoming data, generates a weekly summary PDF, and sends it to Placet with an approval request before distribution. I open the rendered PDF directly in the chat, annotate sections if needed, and click "Approve" or "Hold for Revision." The pipeline receives the structured response and acts on it.

CI/CD production gating. Pipeline steps that affect production infrastructure gate on a Placet approval before continuing. I review a summary of what is about to happen, approve or reject, and the pipeline proceeds or aborts accordingly. This replaced a fragile Telegram bot that had no audit trail and broke regularly after API changes.

LangChain agents with mid-run questions. LangChain agents running multi-step tasks hit decision points where they genuinely need human judgment. They call the Placet API, present the question with full context, and wait. When I respond, they continue. The full conversation history that led to the question is visible in the chat.

The advantage over Telegram or Slack is not just the UI quality. It is having all reviews in one structured place, with delivery receipts, history, and a consistent response schema that the downstream code can rely on.


Open Source: Contributions Are Very Welcome

Placet is open source and contributions of any size are genuinely appreciated.

Whether you are:

  • Fixing a typo in the docs
  • Reporting a bug you hit while integrating Placet into your workflow
  • Suggesting a missing feature or integration
  • Submitting a pull request for a new review type, plugin, or API capability
  • Just dropping a GitHub star

All of it matters. Bug reports are contributions too, and they are often the most valuable ones because they come from real usage.

The repo is at github.com/placet-io/placet. If you are building something on top of Placet or using it in your own workflows, I would love to hear about it. Open an issue, start a discussion, or reach out directly.


Closing

Human-in-the-loop workflows are not a temporary workaround before full AI autonomy arrives. As agents become more capable and more autonomous, the seams where humans and agents interact become more important, not less. Those seams deserve better tooling than a Telegram bot.

Placet is my attempt at building that tooling in the open. It is early, it is opinionated, and it is actively developed.

