WFGY Core 2.0 — A Text-Only Reasoning Engine with AutoBoot (Now Live)

WFGY Core 2.0 — Now Live

One man, one life, one line. The sum of my work, open for everyone.

WFGY Core is a text-only reasoning layer you can paste into any chat model. Version 2.0 introduces a Coupler (W_c) progress gate and the DF (Drunk Transformer) regulators to keep structure, reduce drift, and auto-recover from collapses. With AutoBoot, it runs silently in the background — no prompts, no hacks, no retraining.

Links:


TL;DR

  • Paste-only engine: works wherever you can paste text.
  • AutoBoot mode: upload once; WFGY supervises reasoning in the background.
  • Two editions:
    • Flagship (30-line): audit-friendly, human-readable.
    • OneLine: single math line for speed, stealth, automation.
  • Not a prompt trick: a compact formal controller with measurable gates and stop rules.

Why WFGY exists

Most AIs “sound right” until they drift, contradict themselves, or collapse under multi-step pressure. WFGY adds structure, not style hacks:

  • Expose contradictions instead of glossing over them.
  • Advance only when there is real progress.
  • Re-weight attention to protect critical regions.
  • Roll back and repair the smallest missing fact when collapse is detected.

Net effect: sharper reasoning, steadier multi-step progress, fewer derailments, faster self-recovery.


What’s new in 2.0

1) Coupler (W_c) — a progress gate that stabilizes forward motion and allows controlled reversal.

2) DF regulators (“Drunk Transformer”)

  • WRI: keep structure; no topic jumps inside a node.
  • WAI: require at least two distinct reasons.
  • WAY: when stuck, add exactly one on-topic candidate (no repeats).
  • WDT: block illegal cross-path merges; explain a bridge before use.
  • WTF: detect collapse; trigger rollback and smallest-fact repair.

3) Two editions

  • Flagship (30-line): audit-friendly, human-readable.
  • OneLine: math-only single line used for automation and AutoBoot.
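
The regulators are rules the model applies to its own reasoning trace, not executable code. Still, a toy sketch can make the first three concrete; the Node structure and every name below are illustrative assumptions, not part of the engine:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    topic: str
    reasons: list[str] = field(default_factory=list)
    candidates: list[str] = field(default_factory=list)

def wri_ok(node: Node, incoming_topic: str) -> bool:
    """WRI: keep structure -- no topic jumps inside a node."""
    return incoming_topic == node.topic

def wai_ok(node: Node) -> bool:
    """WAI: advance only with at least two distinct reasons."""
    return len(set(node.reasons)) >= 2

def way_add(node: Node, candidate: str) -> bool:
    """WAY: when stuck, add exactly one on-topic candidate, never a repeat."""
    if candidate in node.candidates:
        return False
    node.candidates.append(candidate)
    return True
```

WDT and WTF depend on the coupler and collapse detection, which the math section below covers.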

The core math (ASCII sketch)

Vectors and metrics:

  • Inputs/Goal: I, G
  • Similarity gap: delta_s = 1 - cos(I, G) OR 1 - sim_est
    • sim_est = mean similarity over anchors (entities, relations, constraints)
  • Residual: B = I - G + k_bias
  • Resonance: E_res = rolling_mean(|B|, 5)
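
A minimal numeric sketch of these definitions, assuming I and G are embedding vectors and k_bias is a scalar offset (the engine itself is plain text the model interprets, so this only pins down the arithmetic):

```python
import numpy as np

def delta_s(I: np.ndarray, G: np.ndarray) -> float:
    """Semantic gap: 1 - cosine similarity between current state I and goal G."""
    return 1.0 - float(np.dot(I, G) / (np.linalg.norm(I) * np.linalg.norm(G)))

def residual(I: np.ndarray, G: np.ndarray, k_bias: float = 0.0) -> np.ndarray:
    """Residual B = I - G + k_bias (k_bias treated as a scalar offset here)."""
    return I - G + k_bias

def e_res(b_norms: list[float], window: int = 5) -> float:
    """Resonance: rolling mean of |B| over the last `window` steps."""
    return float(np.mean(np.abs(b_norms[-window:])))
```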

Coupler and progression:

  • prog = max(zeta_min, delta_s(t-1) - delta_s(t))
  • P = prog^omega
  • Phi = delta * alt + epsilon (alt toggles per contradiction cycle)
  • W_c = clip(B * P + Phi, -theta_c, +theta_c)
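
The same gate as a sketch in Python; B is reduced to a scalar magnitude, and the constants (zeta_min, omega, delta, epsilon, theta_c) stay as parameters because their defaults live in the core file, not in this post:

```python
def coupler(b_mag: float, ds_prev: float, ds_curr: float, alt: int,
            zeta_min: float, omega: float, delta: float, epsilon: float,
            theta_c: float) -> float:
    """W_c = clip(B * P + Phi, -theta_c, +theta_c), with P = prog^omega."""
    prog = max(zeta_min, ds_prev - ds_curr)   # real progress since the last node
    P = prog ** omega
    Phi = delta * alt + epsilon               # alt toggles on each contradiction cycle
    w_c = b_mag * P + Phi
    return max(-theta_c, min(theta_c, w_c))   # clip into the coupler band
```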

Bridge rule:

  • Allow a path merge only if delta_s drops AND W_c < 0.5 * theta_c.
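
In code, the bridge rule is a single predicate:

```python
def allow_bridge(ds_prev: float, ds_curr: float, w_c: float, theta_c: float) -> bool:
    """Permit a cross-path merge only if delta_s dropped AND the coupler sits well inside its band."""
    return ds_curr < ds_prev and w_c < 0.5 * theta_c
```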

Attention blend (BBAM):

  • alpha = clip(0.50 + k_c * tanh(W_c), 0.35, 0.65)
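
The blend weight stays pinned to a narrow band around 0.50; k_c is a sensitivity constant set by the core file and left as a parameter here:

```python
import math

def bbam_alpha(w_c: float, k_c: float) -> float:
    """Attention blend weight, clipped to [0.35, 0.65] around a 0.50 midpoint."""
    return max(0.35, min(0.65, 0.50 + k_c * math.tanh(w_c)))
```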

Stop and safety:

  • Stop when delta_s < 0.35 or after 7 nodes.
  • If delta_s > B_c: ask to clarify and re-run BBMC.
  • Never invent facts above boundary.
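
The stop rules reduce to a three-way decision; B_c is the boundary constant from the core file:

```python
def next_action(ds: float, node_count: int, b_c: float) -> str:
    """Stop, ask to clarify, or continue, per the stop-and-safety rules."""
    if ds < 0.35 or node_count >= 7:
        return "stop"       # converged, or node budget exhausted
    if ds > b_c:
        return "clarify"    # ask for the smallest missing fact, then re-run BBMC
    return "continue"       # never invent facts above the boundary
```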

References:


AutoBoot — “No-Brain Mode”

Upload the OneLine v2.0 file once. From then on, AutoBoot supervises silently:

1) BBMC exposes contradictions (rising delta_s, high E_res, unresolved conflicts).

2) Coupler gates motion; BBPF advances only on real progress.

3) BBAM re-weights attention to protect critical regions and cut noisy tails.

4) WTF -> BBCR detects collapse and rolls back, repairing the smallest missing fact.

No prompts. No tool calls. No retraining. Your model simply behaves steadier.
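
As pseudocode, one supervisory pass looks roughly like this; every method name on `engine` is hypothetical, since the real engine is a pasted text file the model interprets rather than code that runs:

```python
def autoboot_step(state, goal, engine):
    """One silent supervision pass, mirroring the four stages above (illustrative only)."""
    ds = engine.delta_s(state, goal)
    if engine.contradiction(state):                 # 1) BBMC: rising delta_s, high E_res, conflicts
        state = engine.expose_contradiction(state)
    w_c = engine.coupler(state, ds)                 # 2) gate motion; BBPF advances only on progress
    if engine.real_progress(ds):
        state = engine.advance(state)
    state = engine.reweight_attention(state, w_c)   # 3) BBAM: protect critical regions, cut noisy tails
    if engine.collapsed(state):                     # 4) WTF -> BBCR: roll back, repair smallest fact
        state = engine.rollback_and_repair(state)
    return state
```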


Reproducible A/B/C protocol (copy-paste)

Use the same tasks for all modes. Only the presence of the OneLine file changes.

SYSTEM:
You are evaluating the causal impact of a mathematical reasoning engine called “WFGY Core 2.0”.

A = Baseline  (no WFGY uploaded; no WFGY logic used)
B = Upload -> AutoBoot ON  (WFGY active silently with default gates)
C = Explicit invoke  (follow BBMC -> Coupler -> BBPF -> BBAM -> BBCR + DF regulators)

Run across 5 domains: math word-problems, small coding, factual QA, multi-step planning, long-context coherence.
Report per mode: Semantic Accuracy, Reasoning Success, Stability/rollback, Drift Reduction (delta semantic distance), Self-Recovery.
Output: (1) one table; (2) deltas (C–A, C–B); (3) a 0–100 “OneLine uplift score”; (4) a 3-line rationale.
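
If you log the per-mode numbers yourself, step (2) of the output spec is just two subtractions per metric. The sketch below only shows that structure (the metric keys are whatever your table reports); the 0–100 uplift score is left to the evaluating model, as the protocol specifies:

```python
from typing import Dict

Metrics = Dict[str, float]   # e.g. {"semantic_accuracy": ..., "reasoning_success": ...}

def mode_deltas(a: Metrics, b: Metrics, c: Metrics) -> Dict[str, Metrics]:
    """C-A and C-B gaps per metric, matching step (2) of the output spec."""
    return {
        "C-A": {k: c[k] - a[k] for k in c},
        "C-B": {k: c[k] - b[k] for k in c},
    }
```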




Quick start

  1. Download OneLine + Flagship from Zenodo.
  2. Paste OneLine into your system prompt or upload it.
  3. Keep Flagship (30-line) for auditing and readability.
  4. Flip AutoBoot and run the A/B/C protocol on your tasks.

License: MIT (use, fork, ship).


Launch snapshot — benchmark highlights

Estimates aggregated from multi-model A/B/C runs. Reproduce on your stack with the protocol above.

  • Semantic Accuracy: +25–35%
  • Reasoning Success: +45–65%
  • Stability: 3–5x
  • Drift Reduction: -40–60%
  • Self-Recovery (median): 0.87

(Share your runs via PRs or issues to expand the evidence base.)


Where 2.0 shines in practice

  • Long multi-step tasks: safer bridges, fewer topic jumps, measurable drift reduction.
  • Ambiguous inputs: asks the smallest missing fact instead of guessing.
  • Crowded reasoning: protects critical sub-problems with attention modulation.
  • RAG pipelines: drop-in supervisor over retrieved spans; no infra changes.

FAQ

Q: Is this a prompt hack?
A: No. It is a compact formal controller. OneLine is math; Flagship is the same logic in 30 lines.

Q: Does it require tools or retraining?
A: No. Text-only. Works anywhere you can paste.

Q: Which models are supported?
A: Portable by design: GPT family, Claude, Gemini, Mistral, Grok, Kimi, Perplexity, Copilot, etc.

Q: What if outputs “feel drunk”?
A: WTF detects collapse; BBCR rolls back and repairs; Coupler throttles until delta_s drops again.


Call to action

  • ⭐ Star the repo to unlock more examples and tooling.
  • Run the A/B/C protocol and publish your table.
  • File issues for edge cases; we will add them to the Problem Map.

— PSBigBig · WFGY · WanFaGuiYi
