DEV Community

Ko Takahashi


What an 87-Year-Old Festival Elder Taught Me About AI Agent Governance — Implemented in Rust

Festival Governance and AI Agents

How I Met an 87-Year-Old Festival Elder

In the fall of 2024, I met an 87-year-old festival elder in a small mountain village near Kyoto. He oversees an autumn festival with a 1,300-year history — over 30 neighborhood associations participate, involving thousands of people in a massive decentralized operation. No central server. No Slack channels.

"A festival doesn't run because someone gives orders. It works because everyone knows their role."

That one sentence fundamentally changed how I design AI agent systems.

The Governance Crisis of Modern AI Agents

In 2026, AI agents are everywhere. But serious problems are emerging.

Problem 1: Runaway Autonomy

# A common AI agent failure pattern
agent.execute("optimize costs")
# → Agent shuts down all servers
# → "Costs optimized (reduced to zero)"

Problem 2: No Escalation

Most AI agent frameworks offer only a single blunt fallback: "ask a human when confused." The festival elder taught me something far more nuanced — graduated escalation.

Problem 3: Lost Context

Agents forget the "why" behind past decisions. Festivals preserve centuries of "why we do things this way" through oral tradition and customs.

Principle 1: Bounded Autonomy

Each neighborhood association has full autonomy within its zone — but with clear boundaries. Here's how I implement this in Rust:

use std::collections::HashMap;
use chrono::{DateTime, Utc};

/// Applying festival "roles and responsibility boundaries" to AI agents
#[derive(Debug, Clone)]
pub struct BoundedAutonomy {
    pub resource_bounds: ResourceBounds,
    pub temporal_bounds: TemporalBounds,
    pub permission_scope: PermissionScope,
}

#[derive(Debug, Clone)]
pub struct ResourceBounds {
    pub cost_ceiling: f64,
    pub current_cost: f64,
    pub max_api_calls_per_minute: u32,
    pub max_memory_mb: u64,
    pub writable_stores: Vec<String>,
    pub max_records_per_action: usize,
}

#[derive(Debug, Clone)]
pub struct TemporalBounds {
    pub max_task_duration: std::time::Duration,
    pub session_expiry: DateTime<Utc>,
    pub cooldown_period: std::time::Duration,
}

#[derive(Debug, Clone)]
pub struct PermissionScope {
    pub readable: Vec<String>,
    pub writable: Vec<String>,
    pub executable: Vec<String>,
    pub denied: Vec<String>,
}

impl BoundedAutonomy {
    pub fn validate_action(&self, action: &AgentAction) -> Result<(), GovernanceError> {
        if self.resource_bounds.current_cost + action.estimated_cost 
            > self.resource_bounds.cost_ceiling {
            return Err(GovernanceError::CostCeilingExceeded {
                current: self.resource_bounds.current_cost,
                requested: action.estimated_cost,
                ceiling: self.resource_bounds.cost_ceiling,
            });
        }
        if Utc::now() > self.temporal_bounds.session_expiry {
            return Err(GovernanceError::SessionExpired);
        }
        if self.permission_scope.denied.iter()
            .any(|d| action.target_resource.starts_with(d)) {
            return Err(GovernanceError::PermissionDenied {
                resource: action.target_resource.clone(),
                reason: "Resource is in denied list".to_string(),
            });
        }
        Ok(())
    }
}

#[derive(Debug, Clone)]
pub struct AgentAction {
    pub name: String,
    pub target_resource: String,
    pub estimated_cost: f64,
    pub risk_level: RiskLevel,
}

#[derive(Debug, Clone)]
pub enum RiskLevel { Low, Medium, High, Critical }

#[derive(Debug)]
pub enum GovernanceError {
    CostCeilingExceeded { current: f64, requested: f64, ceiling: f64 },
    SessionExpired,
    PermissionDenied { resource: String, reason: String },
    EscalationRequired { level: String, context: String },
}
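To sanity-check the boundary logic, here is a stripped-down standalone version of the cost-ceiling test. The `Budget` type and the numbers are mine, purely for illustration — a sketch of the idea, not part of the framework:

```rust
/// Minimal standalone sketch of the cost-ceiling check in validate_action.
struct Budget {
    ceiling: f64,
    spent: f64,
}

impl Budget {
    /// An action is allowed only if it fits under the remaining budget.
    fn allows(&self, estimated: f64) -> bool {
        self.spent + estimated <= self.ceiling
    }
}

fn main() {
    let b = Budget { ceiling: 10.0, spent: 7.5 };
    assert!(b.allows(2.0));   // 9.5 <= 10.0: within bounds
    assert!(!b.allows(3.0));  // 10.5 > 10.0: rejected before execution
    println!("cost ceiling enforced");
}
```

The point is that the check happens *before* the action runs — the "optimize costs by deleting everything" agent from Problem 1 never gets the chance.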

Principle 2: Graduated Escalation

When problems arise at a festival, you don't go straight to the elder. The squad leader handles it first; if they can't, it goes to the association head, then to the committee, and only then to the elder.

#[derive(Debug, Clone, PartialEq, PartialOrd)]
pub enum EscalationLevel {
    SelfResolve,
    PeerConsult,
    OrchestratorReport,
    HumanEscalation,
    EmergencyHalt,
}

/// A record of a past escalation; consulted by calculate_risk below.
#[derive(Debug, Clone)]
pub struct EscalationEvent {
    pub context: String,
    pub resolution: Option<String>, // None = never resolved (counts as a failure)
}

/// Everything the engine needs to judge a single action.
#[derive(Debug, Clone)]
pub struct ActionContext {
    pub action: AgentAction,
    pub environment: HashMap<String, String>,
}

pub struct EscalationEngine {
    pub thresholds: HashMap<String, f64>,
    pub history: Vec<EscalationEvent>,
}

impl EscalationEngine {
    pub fn calculate_level(&self, ctx: &ActionContext) -> EscalationLevel {
        let risk = self.calculate_risk(ctx);
        match risk {
            s if s < 0.2 => EscalationLevel::SelfResolve,
            s if s < 0.5 => EscalationLevel::PeerConsult,
            s if s < 0.75 => EscalationLevel::OrchestratorReport,
            s if s < 0.95 => EscalationLevel::HumanEscalation,
            _ => EscalationLevel::EmergencyHalt,
        }
    }

    fn calculate_risk(&self, ctx: &ActionContext) -> f64 {
        let base = match ctx.action.risk_level {
            RiskLevel::Low => 0.1,
            RiskLevel::Medium => 0.3,
            RiskLevel::High => 0.6,
            RiskLevel::Critical => 0.9,
        };
        let fail_rate = self.history.iter()
            .filter(|e| e.context.contains(&ctx.action.name))
            .filter(|e| e.resolution.is_none())
            .count() as f64 / self.history.len().max(1) as f64;
        (base + fail_rate * 0.3).min(1.0)
    }
}
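The risk-to-level mapping is the heart of the engine, so here is a standalone sketch of just that ladder — thresholds copied from calculate_level above, everything else simplified for illustration:

```rust
/// Mirrors EscalationLevel from the framework, stripped to the essentials.
#[derive(Debug, PartialEq)]
enum Level {
    SelfResolve,
    PeerConsult,
    OrchestratorReport,
    HumanEscalation,
    EmergencyHalt,
}

/// Same thresholds as calculate_level: risk in [0, 1] maps to five rungs.
fn level_for(risk: f64) -> Level {
    match risk {
        s if s < 0.2 => Level::SelfResolve,
        s if s < 0.5 => Level::PeerConsult,
        s if s < 0.75 => Level::OrchestratorReport,
        s if s < 0.95 => Level::HumanEscalation,
        _ => Level::EmergencyHalt,
    }
}

fn main() {
    assert_eq!(level_for(0.1), Level::SelfResolve);       // squad handles it
    assert_eq!(level_for(0.6), Level::OrchestratorReport); // association head
    assert_eq!(level_for(0.97), Level::EmergencyHalt);     // stop the float
    println!("escalation ladder behaves as described");
}
```

Note that humans sit at the *fourth* rung, not the first — most problems are absorbed lower down, exactly as they are at the festival.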

Principle 3: Contextual Memory

The most remarkable aspect of festivals is how they preserve centuries of institutional knowledge. In code, that means recalling past episodes that best match the current context, weighted toward recent memories but never discarding the old ones.

/// One remembered decision: what the world looked like, what happened, when.
#[derive(Debug, Clone)]
pub struct Episode {
    pub context: HashMap<String, String>,
    pub outcome: Outcome,
    pub timestamp: DateTime<Utc>,
}

#[derive(Debug, Clone)]
pub enum Outcome {
    Success,
    Failure { reason: String },
}

/// A standing rule distilled from many episodes ("why we do it this way").
#[derive(Debug, Clone)]
pub struct Rule {
    pub description: String,
}

#[derive(Debug, Clone)]
pub struct ContextualMemory {
    pub episodes: Vec<Episode>,
    pub rules: Vec<Rule>,
    pub max_capacity: usize,
}

impl ContextualMemory {
    pub fn recall(&self, ctx: &HashMap<String, String>, limit: usize) -> Vec<&Episode> {
        let mut scored: Vec<_> = self.episodes.iter()
            .map(|ep| (self.relevance(ep, ctx), ep)).collect();
        scored.sort_by(|a, b| b.0.partial_cmp(&a.0)
            .unwrap_or(std::cmp::Ordering::Equal));
        scored.into_iter().take(limit).map(|(_, ep)| ep).collect()
    }

    fn relevance(&self, ep: &Episode, ctx: &HashMap<String, String>) -> f64 {
        let mut score = 0.0;
        for (k, v) in ctx {
            if ep.context.get(k).map_or(false, |ev| ev == v) { 
                score += 1.0; 
            }
        }
        let age = (Utc::now() - ep.timestamp).num_days() as f64;
        score * (0.3 + 0.7 * (-age / 365.0).exp())
    }
}
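The recency factor in relevance() deserves a closer look: it decays exponentially but has a floor of 0.3, so centuries-old lore is discounted yet never vanishes. A standalone check of that weighting (the helper name `recency_weight` is mine):

```rust
/// The recency term from relevance(): 0.3 + 0.7 * exp(-age_days / 365).
fn recency_weight(age_days: f64) -> f64 {
    0.3 + 0.7 * (-age_days / 365.0).exp()
}

fn main() {
    let today = recency_weight(0.0);     // 1.0: full weight for fresh episodes
    let year = recency_weight(365.0);    // ~0.56: one year halves the bonus
    let decade = recency_weight(3650.0); // ~0.30: old lore hits the floor
    assert!((today - 1.0).abs() < 1e-9);
    assert!(year > 0.55 && year < 0.56);
    assert!(decade > 0.3 && decade < 0.31);
    println!("fresh = {today:.2}, one year = {year:.2}, decade = {decade:.2}");
}
```

Without the 0.3 floor, pure exponential decay would eventually erase exactly the kind of rare, ancient "why" that festivals work hardest to keep.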

Putting It All Together: MatsuriGovernanceFramework

/// What govern_action hands back when no error is raised.
#[derive(Debug)]
pub enum ActionResult {
    Proceed { action: String, confidence: f64 },
    NeedsReview { action: String, reason: String },
}

pub struct MatsuriGovernanceFramework {
    pub autonomy: BoundedAutonomy,
    pub escalation: EscalationEngine,
    pub memory: ContextualMemory,
}

impl MatsuriGovernanceFramework {
    pub fn govern_action(&mut self, action: AgentAction, ctx: ActionContext)
        -> Result<ActionResult, GovernanceError>
    {
        // Step 1: Boundary check
        self.autonomy.validate_action(&action)?;
        // Step 2: Recall past experiences
        let episodes = self.memory.recall(&ctx.environment, 5);
        let failures: Vec<_> = episodes.iter()
            .filter(|ep| matches!(ep.outcome, Outcome::Failure { .. }))
            .collect();
        // Step 3: Determine escalation level
        let mut level = self.escalation.calculate_level(&ctx);
        if !failures.is_empty() {
            level = match level {
                EscalationLevel::SelfResolve => EscalationLevel::PeerConsult,
                EscalationLevel::PeerConsult => EscalationLevel::OrchestratorReport,
                other => other,
            };
        }
        // Step 4: Act based on escalation level
        match level {
            EscalationLevel::SelfResolve | EscalationLevel::PeerConsult =>
                Ok(ActionResult::Proceed { action: action.name, confidence: 0.8 }),
            EscalationLevel::OrchestratorReport =>
                Ok(ActionResult::NeedsReview { action: action.name, reason: "Review required".into() }),
            _ => Err(GovernanceError::EscalationRequired {
                level: "human".into(),
                context: format!("'{}' needs approval", action.name),
            }),
        }
    }
}
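Step 3 hides an important asymmetry: a history of failures bumps escalation *up* one level, but nothing ever bumps it down. A standalone sketch of that rule (`Level` and `bump_on_failure` are simplified stand-ins for the framework types):

```rust
/// Simplified stand-in for EscalationLevel.
#[derive(Debug, Clone, PartialEq)]
enum Level {
    SelfResolve,
    PeerConsult,
    OrchestratorReport,
    HumanEscalation,
    EmergencyHalt,
}

/// Past failures in similar contexts raise caution by exactly one rung;
/// levels at OrchestratorReport and above are left where they are.
fn bump_on_failure(level: Level, past_failures: usize) -> Level {
    if past_failures == 0 {
        return level;
    }
    match level {
        Level::SelfResolve => Level::PeerConsult,
        Level::PeerConsult => Level::OrchestratorReport,
        other => other,
    }
}

fn main() {
    assert_eq!(bump_on_failure(Level::SelfResolve, 2), Level::PeerConsult);
    assert_eq!(bump_on_failure(Level::HumanEscalation, 2), Level::HumanEscalation);
    assert_eq!(bump_on_failure(Level::SelfResolve, 0), Level::SelfResolve);
    println!("failure history escalates cautiously, never de-escalates");
}
```

This is the festival pattern again: once something has gone wrong, the squad leader gets told — but a clean track record alone never lets an agent skip a rung downward.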

Summary

Festival Principle       | AI Agent Design      | Rust Implementation
-------------------------|----------------------|---------------------------------
Neighborhood zones       | Bounded Autonomy     | ResourceBounds + PermissionScope
Graduated reporting      | Graduated Escalation | EscalationEngine + RiskLevel
Oral tradition & customs | Contextual Memory    | Episode + Rule + recall()
Elder's final judgment   | Human-in-the-loop    | EscalationLevel::HumanEscalation

The elder told me: "The festival has lasted 1,300 years not because anyone is in charge. It works because the system itself is what's remarkable."

AI agents are no different. No single AI needs to be omniscient. The right boundaries, graduated escalation, and accumulated contextual memory — together, these create truly trustworthy AI agent systems.


Ko Takahashi — Entrepreneur, Philosopher, Engineer. CEO of Jon & Coo Inc., Lead Architect of Matsuri Platform. Exploring the fusion of Japanese culture and technology. ko-takahashi.jp
