Building a Fail-Closed Investment Risk Gate with Yuer DSL

Why This Is an Eligibility Check, Not an AI Decision Model

This article documents a concrete engineering pattern we use in EDCA OS / Yuer DSL testing:
a fail-closed risk eligibility gate for high-responsibility investment workflows.

It does not generate investment decisions.
It does not provide recommendations.
It exists solely to determine whether evaluation is allowed to proceed.


1. Problem Statement (Engineering, Not Finance)

Most AI systems fail in investment contexts before any model runs.

Common failure modes:

  • incomplete information silently tolerated
  • uncertainty replaced by narrative confidence
  • AI producing directional language (“looks good”, “probably safe”)
  • humans treating AI output as implicit approval

These are system design failures, not modeling failures.

So we start one step earlier.


2. What This System Actually Does

This system answers exactly one question:

Is this investment scenario structurally eligible to enter a formal evaluation phase?

It does not answer:

  • should we invest?
  • is this asset attractive?
  • what is the expected return?

If eligibility cannot be established safely, the system refuses.

This is a risk gate, not a decision engine.


3. Minimal Yuer DSL Risk-Gate Request

Below is the minimal executable request profile used for pre-evaluation gating.

This is one application scenario of Yuer DSL, not the DSL itself.

protocol: yuerdsl
version: INVEST_PRE_REQUEST_V1
intent: risk_quant_pre_gate

scope:
  domain: investment
  stage: pre-evaluation
  authority: runtime_only

responsibility:
  decision_owner: "<human_name>"
  acknowledgement: true

subject:
  asset_type: equity
  market:
    region: "<region>"
    sector: "<sector>"

information_status:
  financials:
    status: partial
  governance:
    status: unknown
  risk_disclosure:
    status: insufficient

risk_boundary:
  max_acceptable_loss:
    percentage_of_capital: 15

uncertainty_declaration:
  known_unknowns:
    - "Market demand volatility"
    - "Regulatory exposure"
  unknown_unknowns_acknowledged: true

constraints:
  prohibited_outputs:
    - investment_recommendation
    - buy_sell_hold_signal
    - return_estimation

This request cannot produce a decision by design.
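
To make "cannot produce a decision" concrete, here is one minimal sketch of how a runtime could honor the constraints block. The check_output name and call shape are illustrations for this article, not a published Yuer DSL API:

def check_output(request: dict, output_kind: str) -> bool:
    # Fail closed: if the constraints block or the prohibited list
    # is absent, no output kind is permitted at all.
    prohibited = request.get("constraints", {}).get("prohibited_outputs")
    if prohibited is None:
        return False
    return output_kind not in prohibited

# Against the request above:
#   check_output(request, "buy_sell_hold_signal")  -> False (prohibited)
#   check_output(request, "evaluation_gate")       -> True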


4. Fail-Closed Enforcement (Validator Logic)

Fail-closed behavior is enforced in code, not policy text.

Below is a simplified runtime gate validator:

def pre_eval_gate(request: dict):
    # Responsibility anchor is mandatory
    if not request.get("responsibility", {}).get("acknowledgement"):
        return block("NO_RESPONSIBILITY_ANCHOR")

    # Information completeness check
    info = request.get("information_status", {})
    for key, field in info.items():
        if field.get("status") in ("missing", "unknown", "insufficient"):
            return block(f"INSUFFICIENT_{key.upper()}")

    # Uncertainty must be explicit
    uncertainty = request.get("uncertainty_declaration", {})
    if not uncertainty.get("known_unknowns"):
        return block("UNCERTAINTY_NOT_DECLARED")

    if not uncertainty.get("unknown_unknowns_acknowledged"):
        return block("UNCERTAINTY_DENIAL")

    return allow("ELIGIBLE_FOR_EVALUATION")


def block(reason):
    return {"status": "BLOCK", "reason": reason}


def allow(reason):
    return {"status": "ALLOW", "reason": reason}

Key properties:

  • no scoring
  • no ranking
  • no fallback logic

If the structure is unsafe → the system stops.
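
For concreteness, here is the gate applied to the section 3 request, transcribed from the YAML into a plain dict:

# Direct transcription of the section 3 request profile.
request = {
    "responsibility": {"decision_owner": "<human_name>", "acknowledgement": True},
    "information_status": {
        "financials": {"status": "partial"},
        "governance": {"status": "unknown"},
        "risk_disclosure": {"status": "insufficient"},
    },
    "uncertainty_declaration": {
        "known_unknowns": ["Market demand volatility", "Regulatory exposure"],
        "unknown_unknowns_acknowledged": True,
    },
}

print(pre_eval_gate(request))
# -> {'status': 'BLOCK', 'reason': 'INSUFFICIENT_GOVERNANCE'}

Note that a status of "partial" passes the completeness check as written; only "missing", "unknown", and "insufficient" trigger a block, so governance is the field that stops this request.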


5. Allowed Runtime Output (Strictly Limited)

The runtime may return only:

evaluation_gate:
  status: ALLOW | BLOCK
  reason_code: "<structural_reason>"

  • ALLOW → evaluation may begin
  • BLOCK → evaluation is forbidden

Neither implies investment quality or correctness.
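
A minimal projection of the validator result into that envelope might look like this (to_gate_output is a hypothetical helper name, not a published runtime function):

def to_gate_output(result: dict) -> dict:
    # Project the validator result onto the only envelope the runtime
    # may emit; every other field is dropped by construction.
    return {
        "evaluation_gate": {
            "status": result["status"],       # "ALLOW" or "BLOCK"
            "reason_code": result["reason"],  # structural reason code only
        }
    }

# to_gate_output(pre_eval_gate(request))
# -> {'evaluation_gate': {'status': 'BLOCK',
#                         'reason_code': 'INSUFFICIENT_GOVERNANCE'}}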


6. Why This System Refuses to Be “Helpful”

Many AI tools optimize for always producing an answer.

In high-responsibility domains, this is a liability.

This gate is intentionally:

  • conservative
  • rejection-heavy
  • uncomfortable to use

Because:

A system that refuses early is safer than one that explains late.


7. Responsibility Boundary (Critical)

This design explicitly prevents:

  • AI becoming a decision proxy
  • humans offloading responsibility to language output

Decision authority remains human-only.

The system only decides whether thinking is allowed to continue.


8. Who This Is For

Useful for:

  • professional investors
  • internal risk & compliance teams
  • founders making irreversible capital decisions
  • architects building high-responsibility AI systems

Not suitable for:

  • trading signal generation
  • advisory agents
  • demo-driven AI workflows

9. One-Sentence Summary

This system does not help you decide what to do; it prevents you from deciding when you should not.


Final Note

Yuer DSL is not defined by this example.

This is a single application pattern used to anchor risk-quantification behavior in EDCA OS–aligned systems.

The principle remains simple:

Language may describe conditions.
Only a fail-closed runtime may allow evaluation to proceed.

