This is a submission for the Notion MCP Challenge
## What I Built
I built a Governed MCP-Based AI Agent System where real-world actions are executed through tools — but always under strict policy control.
Instead of focusing only on what agents can do, this system enforces what they are allowed to do — and what must be blocked.
## Core Idea
Use MCP as the capability layer and Actra as the governance layer:
- MCP exposes real tools (Notion workspace actions)
- The AI agent selects and invokes these tools
- Actra evaluates every tool call before execution
This creates a system where:
Capability is separated from control.
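As a minimal sketch, that separation can look like a wrapper that gates every tool call behind a policy check. The `GovernedClient` and `Policy` types below are illustrative stand-ins, not the actual Actra or MCP SDK APIs:

```typescript
// Illustrative sketch only: these types stand in for the real
// Actra / MCP SDK APIs to show capability separated from control.
type ToolCall = { name: string; arguments: Record<string, unknown> };
type Decision = { effect: "allow" | "block"; rule?: string };
type Policy = (call: ToolCall) => Decision;

class GovernedClient {
  constructor(
    // The capability layer: actually executes the tool (MCP).
    private execute: (call: ToolCall) => Promise<unknown>,
    // The governance layer: evaluates every call before execution.
    private policies: Policy[],
  ) {}

  async callTool(call: ToolCall): Promise<unknown> {
    for (const policy of this.policies) {
      const decision = policy(call);
      if (decision.effect === "block") {
        // The agent keeps the capability, but this call is denied.
        throw new Error(`Blocked by rule: ${decision.rule}`);
      }
    }
    return this.execute(call);
  }
}
```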
## How It Works (in practice)
In the demo:
- The agent connects to Notion via MCP
- It discovers available tools: `notion-search`, `notion-get-users`, `notion-create-pages`
- The agent attempts to execute actions
### Step 1 — Uncontrolled Agent (Baseline)

No policy enforcement ❌. The agent executes tools freely:
- search works
- user data can be accessed
- write operations are possible
The agent has full power — with no guardrails.
### Step 2 — Actra-Governed Agent
Actra is introduced as an in-process policy engine.
Every tool call is evaluated before execution.
### What Gets Enforced

1. **Input validation**
   - Empty search → blocked
   - Rule: `block_empty_search`
2. **Context-based control**
   - `safe_mode = true` → writes blocked
   - Rule: `block_writes_in_safe_mode`
The agent still knows about the tool — but cannot execute it.
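The logic of these two rules can be mirrored as plain predicates. This is a simplified sketch of what gets checked, not Actra's actual implementation (Actra evaluates the declarative YAML rules shown later):

```typescript
// Simplified sketch: the two demo rules expressed as plain checks.
// Actra evaluates declarative YAML rules; this only mirrors their logic.
type Verdict = { effect: "allow" | "block"; rule?: string };

function evaluateCall(
  tool: string,
  args: Record<string, unknown>,
  snapshot: { safe_mode: boolean },
): Verdict {
  // Rule block_empty_search: input validation on the action itself.
  if (tool === "notion-search" && args.query === "") {
    return { effect: "block", rule: "block_empty_search" };
  }
  // Rule block_writes_in_safe_mode: context-based control via a snapshot flag.
  if (tool === "notion-create-pages" && snapshot.safe_mode === true) {
    return { effect: "block", rule: "block_writes_in_safe_mode" };
  }
  return { effect: "allow" };
}
```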
## What Makes This Different
Most AI systems:
- rely on prompts or heuristics
- enforce rules inconsistently
- lack clear visibility into decisions
This system:
- enforces policies deterministically at runtime
- separates decision-making from control
- provides explicit reasoning for every block
## What This Enables
- Safe AI agents for real-world workflows
- Controlled access to sensitive operations
- Clear auditability of decisions
- Policy-driven execution instead of implicit behavior
## Core Insight
MCP gives agents capability.
Actra decides whether that capability can be used.
This transforms AI agents from "systems that can act" into systems that can act — safely, predictably, and under control.
## Policies in Action

Instead of executing AI actions blindly, the system evaluates every decision against policies like:
- ❌ Block sending sensitive data externally
- ❌ Restrict unsafe API calls
- ❌ Prevent unauthorized actions
- ✅ Allow only whitelisted operations
This turns Notion + AI from a productivity tool into a safe execution environment for real-world workflows.
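A deny-by-default allow-list, for example, takes only a few lines. This is a hypothetical sketch: the tool names come from the demo, but the function itself is illustrative:

```typescript
// Hypothetical deny-by-default allow-list. Only whitelisted
// operations may execute; everything else is rejected.
const ALLOWED_TOOLS = new Set(["notion-search", "notion-get-users"]);

function isAllowed(tool: string): boolean {
  return ALLOWED_TOOLS.has(tool);
}
```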
## Video Demo
## Show us the code
https://github.com/getactra/notion-mcp-governed-agent
### Repo Structure

```text
.
├── LICENSE
│   └── Project license
├── auth
│   ├── callback.ts
│   │   └── Handles OAuth redirect/callback after user authentication
│   ├── exchange.ts
│   │   └── Exchanges authorization code for access/refresh tokens
│   ├── metadata.ts
│   │   └── Fetches auth provider metadata (endpoints, configs)
│   ├── pkce.ts
│   │   └── Implements PKCE (Proof Key for Code Exchange) helpers
│   ├── register.ts
│   │   └── Actra MCP client registration with auth provider (client_id, etc.)
│   ├── state.ts
│   │   └── Saves OAuth state
│   └── url.ts
│       └── Builds authorization URLs for the login flow
├── mcp
│   └── client.ts
│       └── MCP (Model Context Protocol) client wrapper;
│           handles communication with the MCP server/service
├── package.json
│   └── Project dependencies, scripts, and metadata
├── test-step1.ts
│   └── Initial test/setup (MCP connection)
├── test-step2.ts
│   └── Next step test
├── test-step3.ts
│   └── Intermediate flow test
├── test-step4.ts
│   └── Loads Notion MCP tools
├── test-step5-unsafe-agent.ts
│   └── Demonstrates an agent without safeguards
│       (to show the risks of accessing Notion without safeguards)
└── test-step6-actra-governed-agent.ts
    └── Agent with the Actra governance layer
        (adds rules, constraints, and safety controls)
```
### Example Policy
```yaml
version: 1
rules:
  # Block writes in safe mode
  - id: block_writes_in_safe_mode
    scope:
      action: notion-create-pages
    when:
      subject:
        domain: snapshot
        field: safe_mode
      operator: equals
      value:
        literal: true
    effect: block
  # Block empty search
  - id: block_empty_search
    scope:
      action: notion-search
    when:
      subject:
        domain: action
        field: query
      operator: equals
      value:
        literal: ""
    effect: block
```
## How I Used Notion MCP
Notion MCP acts as the execution layer between an AI agent and real-world actions.
Instead of just reading data, the agent can:
- discover available tools
- execute operations (search, fetch, create, update)
- interact with a live Notion workspace
### Role of MCP in this system

In my setup, MCP is responsible for:

- **Tool discovery**: `notion-search`, `notion-get-users`, `notion-create-pages`
- **Tool execution**: `client.callTool({ name, arguments })`
- **Standardizing agent capabilities**: every action becomes a structured tool call
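Roughly, the agent side looks like the sketch below. The `McpClient` interface is a simplified stand-in for the real SDK client (which exposes `listTools()` and `callTool({ name, arguments })` in this general shape), kept minimal so the example stays self-contained:

```typescript
// Simplified stand-in for an MCP client. The real SDK client exposes
// listTools() and callTool({ name, arguments }) with this general shape.
interface McpClient {
  listTools(): Promise<{ tools: { name: string }[] }>;
  callTool(req: { name: string; arguments: Record<string, unknown> }): Promise<unknown>;
}

async function discoverAndSearch(client: McpClient, query: string): Promise<string[]> {
  // Tool discovery: every capability is a named, structured tool.
  const { tools } = await client.listTools();
  const names = tools.map((t) => t.name);

  // Tool execution: the agent invokes a discovered tool by name.
  if (names.includes("notion-search")) {
    await client.callTool({ name: "notion-search", arguments: { query } });
  }
  return names;
}
```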
### What This Enables (and Why It’s Risky)
With MCP alone:
- the agent can read workspace data
- the agent can modify content
- the agent can access users and metadata
In Step 5 (the uncontrolled agent), there is no policy enforcement ❌: the agent executes tools freely. This means the agent has full capability, but no control.
### Adding Actra (What Changes)
With Actra layered on top:
- every tool call becomes a policy-evaluated action
- execution is conditionally allowed or blocked
- decisions are deterministic and explainable
In Step 6 (the governed agent):

❌ Blocked by Actra (rule: `block_get_users`)
The agent still has capability — but no longer has unrestricted power
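Each decision can be surfaced as a structured, explainable record. The field names below are assumptions for illustration, not Actra's actual output format; only the rule id matches the demo output:

```typescript
// Illustrative audit record; field names are assumptions, not
// Actra's real output format. The rule id matches the demo output.
type AuditEntry = { tool: string; effect: "allow" | "block"; rule: string | null };

function govern(tool: string, blocked: Record<string, string>): AuditEntry {
  // If a rule matches, the call is blocked and the rule id is recorded,
  // so every denial carries its own explanation.
  const rule = blocked[tool] ?? null;
  return { tool, effect: rule ? "block" : "allow", rule };
}
```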
### What This Unlocks

- Without MCP: Notion is just a UI or a database
- With MCP: Notion becomes a programmable execution surface
- With MCP + Actra: it becomes a governed AI system

Actions are validated, controlled, and auditable.
## Architecture

```text
Notion (Workspace / Tools)
          ↓
MCP Layer (Tool Discovery + Execution)
          ↓
AI Agent
          ↓
Actra Runtime (Policy Evaluation Engine)
          ↓
Allowed / Blocked
          ↓
Tool Execution (or Denied)
```
## Key Insight
MCP gives agents power.
Actra decides how that power is used.
## Why This Matters
As AI agents become more powerful, governance becomes critical.
This project shows that:
- you don’t need heavy infra
- you don’t need external policy services
You can enforce deterministic, auditable control directly inside your application.
## Future Work
- Role-based policies (team / org level)
- Policy simulation + testing UI inside Notion
- Full MCP-native agent orchestration
- Audit logs and explainability dashboards
## Closing Thought
Everyone is building AI agents.
Very few are thinking about control, safety, and governance.
This project is a step toward making AI systems not just powerful — but trustworthy.