Narnaiezzsshaa Truong

EIOC as a Detection Model: From Framework to Code

What if emotional manipulation in UX had a runtime detector?

EIOC (Emotion–Intent–Outcome–Context) started as an explanatory lens—a way to analyze why certain interfaces feel coercive and others feel aligned. But frameworks that only explain aren't enough. We need frameworks that detect.

This post walks through building EIOC as a formal detection model: typed, executable, configurable, and auditable.

At detection time, you're not philosophizing. You're mapping a concrete interaction into a structured EIOCObservation, then running it through rules.


The Four Axes

Every interaction (screen, flow, message, nudge) can be scored across four axes:

  • E — Emotion: What emotional state is being targeted or amplified?
  • I — Intent: What is the system's operative intent toward the user?
  • O — Outcome: What is the likely user outcome (short/long-term)?
  • C — Context: What constraints and power asymmetries shape this moment?

For detection, each axis needs:

  • Dimensions: Sub-questions you can score
  • Scales: Categorical tags mapped to scores (a small scoring sketch follows this list)
  • Thresholds/Patterns: Combinations that constitute "fearware", "manipulative", "aligned", etc.
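
As one way to read "scales" concretely, each categorical tag can carry a numeric score for aggregation or ranking. This is an illustrative sketch only; the detector built below matches on categorical values directly and never needs these numbers, and the specific score values are assumptions rather than part of the framework:

# Illustrative only: a possible scoring scale for emotion tags.
# Positive values = supportive framing, negative values = pressuring framing.
# The rules below never consume these scores; they exist for aggregation.
EMOTION_SCALE = {
    "empowerment": +2,
    "care": +1,
    "trust": +1,
    "neutral": 0,
    "scarcity": -1,
    "guilt": -1,
    "shame": -2,
    "fear": -2,
}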

1. The EIOC Schema

First, define the vocabulary:

from dataclasses import dataclass
from enum import Enum
from typing import List, Optional, Dict, Any


class EmotionTarget(Enum):
    NEUTRAL = "neutral"
    FEAR = "fear"
    SCARCITY = "scarcity"
    GUILT = "guilt"
    SHAME = "shame"
    TRUST = "trust"
    CARE = "care"
    EMPOWERMENT = "empowerment"


class IntentType(Enum):
    SUPPORTIVE = "supportive"
    NEUTRAL = "neutral"
    COERCIVE = "coercive"
    EXPLOITATIVE = "exploitative"


class OutcomeType(Enum):
    USER_BENEFIT = "user_benefit"
    PLATFORM_BENEFIT = "platform_benefit"
    MUTUAL_BENEFIT = "mutual_benefit"
    USER_HARM = "user_harm"
    UNKNOWN = "unknown"


class ContextRisk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    UNKNOWN = "unknown"

Now the observation container:

@dataclass
class EIOCObservation:
    """A structured interpretation of a user-facing interaction."""

    # Identification
    interaction_id: str
    description: str

    # EIOC axes
    emotion_target: EmotionTarget
    intent_type: IntentType
    outcome_type: OutcomeType
    context_risk: ContextRisk

    # Audit trail
    evidence: Optional[Dict[str, Any]] = None
    tags: Optional[List[str]] = None

This gives you a clean, typed container for the interpretation layer—how your auditor or analysis process sees a moment.

Extensions to consider (a sketch follows this list):

  • emotion_intensity: int — scale from -2 to +2
  • journey_stage: str — when in the user flow this occurs
  • user_segment: str — vulnerable users, new users, high-risk context
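
A minimal sketch of how these optional fields might be bolted onto the schema, assuming you want to keep the core container unchanged (the subclass name and example values are illustrative):

@dataclass
class ExtendedEIOCObservation(EIOCObservation):
    """EIOCObservation plus optional audit metadata (illustrative extension)."""
    emotion_intensity: int = 0           # -2 (strongly negative) to +2 (strongly positive)
    journey_stage: Optional[str] = None  # e.g. "onboarding", "checkout", "account_deletion"
    user_segment: Optional[str] = None   # e.g. "new_user", "vulnerable", "high_risk"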

2. Detection Logic: Turning EIOC into Rules

Think of EIOC detection as a small rule engine that:

  1. Normalizes an interaction into EIOCObservation
  2. Applies a list of DetectionRule instances
  3. Returns classifications + rationales

2.1 Findings and Severity

class FindingSeverity(Enum):
    INFO = "info"
    WARNING = "warning"
    CRITICAL = "critical"


@dataclass
class DetectionFinding:
    """The output of a triggered rule."""
    rule_id: str
    severity: FindingSeverity
    label: str
    description: str
    recommendation: Optional[str] = None

2.2 Rule Interface

from abc import ABC, abstractmethod


class DetectionRule(ABC):
    """Base class for all EIOC detection rules."""
    rule_id: str
    label: str
    description: str

    @abstractmethod
    def evaluate(self, obs: EIOCObservation) -> Optional[DetectionFinding]:
        """Evaluate an observation. Return a finding if the rule triggers."""
        ...

2.3 Example: Fearware Coercion Rule

Here's where the fearware analysis becomes executable:

class FearwareCoercionRule(DetectionRule):
    """Detects fearware-style manipulation patterns."""

    rule_id = "FW-001"
    label = "Fearware-style coercion"
    description = (
        "Flags interactions that intentionally target fear/scarcity "
        "with coercive intent and non-beneficial or unknown user outcomes, "
        "especially in high-risk contexts."
    )

    def evaluate(self, obs: EIOCObservation) -> Optional[DetectionFinding]:
        fear_emotions = {
            EmotionTarget.FEAR, 
            EmotionTarget.SCARCITY, 
            EmotionTarget.GUILT, 
            EmotionTarget.SHAME
        }
        coercive_intents = {
            IntentType.COERCIVE, 
            IntentType.EXPLOITATIVE
        }
        harmful_outcomes = {
            OutcomeType.PLATFORM_BENEFIT, 
            OutcomeType.USER_HARM, 
            OutcomeType.UNKNOWN
        }
        risky_contexts = {
            ContextRisk.MEDIUM, 
            ContextRisk.HIGH
        }

        if (
            obs.emotion_target in fear_emotions
            and obs.intent_type in coercive_intents
            and obs.outcome_type in harmful_outcomes
            and obs.context_risk in risky_contexts
        ):
            return DetectionFinding(
                rule_id=self.rule_id,
                severity=FindingSeverity.CRITICAL,
                label=self.label,
                description=(
                    "This interaction weaponizes fear/scarcity under coercive intent, "
                    "with unclear or harmful user outcomes in a medium/high-risk context."
                ),
                recommendation=(
                    "Re-architect this moment to remove fear-based leverage "
                    "and restore user agency."
                )
            )
        return None

3. The Detector

A simple orchestrator that runs all rules:

class EIOCDetector:
    """Orchestrates EIOC rule evaluation."""

    def __init__(self, rules: List[DetectionRule]):
        self.rules = rules

    def evaluate(self, obs: EIOCObservation) -> List[DetectionFinding]:
        """Run all rules against an observation."""
        findings: List[DetectionFinding] = []
        for rule in self.rules:
            finding = rule.evaluate(obs)
            if finding is not None:
                findings.append(finding)
        return findings

Usage Example

# Initialize detector with rules
rules = [
    FearwareCoercionRule(),
    # Add more rules here...
]
detector = EIOCDetector(rules=rules)

# Create an observation
obs = EIOCObservation(
    interaction_id="retention_flow_001",
    description=(
        "Account deletion flow shows: 'Your friends will lose access "
        "to your updates' with red, urgent styling."
    ),
    emotion_target=EmotionTarget.FEAR,
    intent_type=IntentType.COERCIVE,
    outcome_type=OutcomeType.PLATFORM_BENEFIT,
    context_risk=ContextRisk.HIGH,
    evidence={
        "screenshot": "s3://audits/retention_flow_001.png",
        "copy": "Your friends will lose access to your updates"
    },
    tags=["account_deletion", "retention_flow"]
)

# Run detection
findings = detector.evaluate(obs)

# Output results
for f in findings:
    print(f"{f.severity.value.upper()} [{f.rule_id}] {f.label}")
    print(f"{f.description}")
    if f.recommendation:
        print(f"  → Recommendation: {f.recommendation}")

Output:

CRITICAL [FW-001] Fearware-style coercion
This interaction weaponizes fear/scarcity under coercive intent, with unclear or harmful user outcomes in a medium/high-risk context.
  → Recommendation: Re-architect this moment to remove fear-based leverage and restore user agency.

4. Making It Configurable

Non-engineers (UX auditors, ethical reviewers, policy teams) need to tweak rules without editing Python. Externalize to YAML:

4.1 Rule Configuration

# eioc_rules.yaml
rules:
  - id: FW-001
    label: "Fearware-style coercion"
    description: >
      Flags interactions that target fear/scarcity with coercive intent
      and non-beneficial user outcomes in high-risk contexts.
    severity: critical
    emotion_target_in: ["fear", "scarcity", "guilt", "shame"]
    intent_type_in: ["coercive", "exploitative"]
    outcome_type_in: ["platform_benefit", "user_harm", "unknown"]
    context_risk_in: ["medium", "high"]

  - id: FW-002
    label: "Ambiguous nudge"
    description: >
      Flags interactions with unclear intent and unknown outcomes.
    severity: warning
    intent_type_in: ["neutral", "coercive"]
    outcome_type_in: ["unknown"]
    context_risk_in: ["medium", "high"]

4.2 Generic Configurable Rule

import yaml


@dataclass
class ConfigurableRule(DetectionRule):
    """A rule defined by configuration rather than code."""

    rule_id: str
    label: str
    description: str
    severity: FindingSeverity
    criteria: Dict[str, List[str]]

    def evaluate(self, obs: EIOCObservation) -> Optional[DetectionFinding]:
        # Map observation to comparable strings
        obs_values = {
            "emotion_target": obs.emotion_target.value,
            "intent_type": obs.intent_type.value,
            "outcome_type": obs.outcome_type.value,
            "context_risk": obs.context_risk.value,
        }

        # Check all criteria
        for field, allowed_values in self.criteria.items():
            if field not in obs_values:
                continue
            if obs_values[field] not in allowed_values:
                return None

        return DetectionFinding(
            rule_id=self.rule_id,
            severity=self.severity,
            label=self.label,
            description=self.description,
        )


def load_rules_from_yaml(path: str) -> List[DetectionRule]:
    """Load detection rules from a YAML configuration file."""
    with open(path, "r") as f:
        data = yaml.safe_load(f)

    rules: List[DetectionRule] = []

    for r in data["rules"]:
        # Extract criteria fields (any key ending in "_in"); strip only the
        # trailing suffix so a field name containing "_in" elsewhere isn't mangled
        criteria = {
            k[: -len("_in")]: v
            for k, v in r.items()
            if k.endswith("_in")
        }

        rule = ConfigurableRule(
            rule_id=r["id"],
            label=r["label"],
            description=r["description"],
            severity=FindingSeverity(r["severity"]),
            criteria=criteria,
        )
        rules.append(rule)

    return rules

4.3 Loading and Running

# Load rules from config
rules = load_rules_from_yaml("eioc_rules.yaml")

# Initialize detector
detector = EIOCDetector(rules=rules)

# Now non-engineers can add/modify rules via YAML
# without touching the detection engine

What This Enables

  • UX Reviews: Score interactions before launch
  • Design PRs: Attach EIOC findings to review
  • Dashboards: Aggregate findings across product
  • Audits: Evidence-backed compliance reports
  • CI/CD Gates: Block deploys with CRITICAL findings (sketched below)
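
As one illustration of the CI/CD row, a minimal gate might run the detector over whatever observations a pipeline collects and fail the build on any CRITICAL finding. This is a sketch, not a prescribed integration; the gate function name is hypothetical, and how observations are collected and serialized is left open:

import sys

def ci_gate(detector: EIOCDetector, observations: List[EIOCObservation]) -> None:
    """Illustrative CI gate: exit non-zero if any observation triggers a CRITICAL finding."""
    critical = [
        finding
        for obs in observations
        for finding in detector.evaluate(obs)
        if finding.severity is FindingSeverity.CRITICAL
    ]
    for finding in critical:
        print(f"CRITICAL [{finding.rule_id}] {finding.label}: {finding.description}")
    if critical:
        sys.exit(1)  # non-zero exit blocks the deploy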

The Bigger Picture

Fearware didn't disappear—it evolved into dark patterns, manipulative nudges, and "growth hacks" that exploit the same emotional levers with better typography.

EIOC as a detection model gives us a way to:

  • Name the manipulation (schema)
  • Detect it systematically (rules)
  • Configure detection without code (YAML)
  • Audit with evidence (findings)

Philosophy becomes infrastructure. Framework becomes tool.


Related reading:

  • Fearware as the Anti-Pattern of EIOC
  • Designing Beyond Fearware
  • The Echoes of Fearware in Modern UX

EIOC is part of ongoing research into emotional logic, informational integrity, and ethical system design.
