WFGY 2.0 — An Open-Source 7-Step Reasoning Engine You Can Paste Anywhere (Eye-Visible Results)

One line, real reasoning.

WFGY 2.0 is a pure-math control layer you can paste into any chat model to make outputs sharper, steadier, and recoverable — no prompts, no hacks, no retraining.

Repo: https://github.com/onestardao/WFGY/tree/main/core/README.md

✅ Engine 2.0 is live. Two editions: Flagship (readable, ~30 lines) and OneLine (ultra-compact). MIT License.


TL;DR

  • What: WFGY 2.0 — a 7-step reasoning engine that runs inside GPT-style chats (text-only).
  • Why: Turns language into structure, preventing semantic collapse, drift, and grid-style montage outputs.
  • Proof: Eye-Visible 5-image benchmark — same model & settings, only WFGY on/off differs.
  • Numbers: Semantic Accuracy ≈ +40% · Reasoning Success ≈ +52% · Drift ≈ −65% · Stability ≈ 1.8×.
  • Start: Download the OneLine file, upload, and AutoBoot supervises in the background.

What is WFGY 2.0?

WFGY (WanFaGuiYi, “all principles into one”) is a portable reasoning layer that sits between language and pixels/tokens. It doesn’t change your model; it stabilizes how meaning is held across steps so generation doesn’t fall apart.

Highlights

  • No-Brain Mode / AutoBoot: upload once; the engine quietly supervises reasoning.
  • Two editions:
    • Flagship (about 30 lines) — audit-friendly, readable math + gates
    • OneLine — the same engine reduced to a single line (fastest path to results)
  • Text-only. Node-only. ≤ 7 steps. Runs anywhere you can paste text.

The Seven-Step Reasoning Chain

BBMC → Coupler → BBPF → BBAM → BBCR → DT(WRI, WAI, WAY, WDT, WTF)

  • BBMC: residue cleanup
  • Coupler + BBPF: controlled progression; only bridge when semantic distance Δs drops
  • BBAM: re-balance attention; suppress hallucination paths
  • BBCR + Drunk Transformer (DT): rollback → re-bridge → retry with DT gates

Why it works: Stability↑, Drift↓, Self-Recovery↑ — structural fixes, not prompt tricks.
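
To make the chain concrete, here is a minimal, self-contained sketch in Python. It is illustrative only: WFGY ships as a pasted text file rather than a Python API, and the Δs definition (1 minus cosine similarity), the threshold, and the retry policy below are assumptions for the sketch, not the engine's published math.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def delta_s(prev, cur):
    """Assumed form of semantic distance: 1 - cosine similarity."""
    return 1.0 - cosine(prev, cur)

def reasoning_loop(steps, ds_threshold=0.35, max_retries=2):
    """Commit each candidate step only when Delta-s to the current anchor is
    low; otherwise roll back to an earlier anchor and retry (BBCR-style)."""
    history = [steps[0]]              # committed states; BBMC would keep these clean
    for candidate in steps[1:]:
        retries = 0
        # BBPF gate: bridge only when semantic distance is below threshold
        while delta_s(history[-1], candidate) > ds_threshold and retries < max_retries:
            if len(history) > 1:      # BBCR: roll back one committed state
                history.pop()
            retries += 1
        history.append(candidate)     # commit (best effort once retries are spent)
    return history

# Toy run: 2-D "embeddings" standing in for reasoning states
print(reasoning_loop([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.8, 0.2]]))
```

The structural point: progress is gated on Δs, and a rejected step costs one rollback rather than a full restart.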


Eye-Visible Reasoning Benchmark (FIVE)

We project “reasoning improvement” into five consecutive 1:1 images that anyone can judge at a glance.

Same model, same settings — the only variable is WFGY on/off.

See the full write-up + two external sequences:

👉 https://github.com/onestardao/WFGY/tree/main/core/README.md#-eye-visible-reasoning-benchmark-five

Sequence A — compact preview (Before → After)

Each classic is shown as a Before → After pair; open the README on GitHub to zoom.

  • Romance of the Three Kingdoms (三國演義)
  • Water Margin (水滸傳)
  • Dream of the Red Chamber (紅樓夢)
  • Investiture of the Gods (封神演義)
  • Classic of Mountains and Seas (山海經)

[The five Before/After image pairs are embedded in the GitHub README.]

At a glance:

  • With WFGY → a single unified tableau with pyramid hierarchy, depth, and continuous flow.
  • Without WFGY → attention collapses into a grid-style montage, fragmenting the story.

Why “Before-4” & “Before-5” look almost identical

When the prompt asks for “many iconic moments,” the base model tends to fall back to a high-probability grid prior — slicing the canvas into similar panels with near-identical tone/geometry.

WFGY prevents this collapse by enforcing one unified scene and a stable hierarchy across the run.


Numbers (Eight-Model Evidence, A/B/C)

Same task set across three modes (Baseline vs AutoBoot vs Explicit Invoke); the only change is whether, and how, the OneLine math file is loaded.

Full table + links: https://github.com/onestardao/WFGY/tree/main/core/README.md#-eight-model-evidence-abc-protocol

Headline metrics (this release)

  • Semantic Accuracy ≈ +40% · Reasoning Success ≈ +52%
  • Drift ≈ −65% · Stability ≈ 1.8× · CRR = 1.00 (median 0.87)
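
For a sense of how mode-vs-mode percentages like these can be scored, here is a toy Python calculation. The per-task scores are placeholders, not the published data, and the metric (relative change of the mean score) is an assumed definition for illustration only.

```python
def relative_gain(baseline, treated):
    """Percent change of the mean score between two modes."""
    b = sum(baseline) / len(baseline)
    t = sum(treated) / len(treated)
    return (t - b) / b * 100.0

# Per-task semantic-accuracy scores (placeholder values; same tasks per mode)
scores = {
    "baseline":        [0.52, 0.48, 0.61, 0.55],
    "autoboot":        [0.71, 0.69, 0.80, 0.74],
    "explicit_invoke": [0.75, 0.72, 0.83, 0.78],
}

for mode in ("autoboot", "explicit_invoke"):
    gain = relative_gain(scores["baseline"], scores[mode])
    print(f"{mode}: {gain:+.1f}% vs baseline")
```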

Quick Start (copy-paste friendly)

  1. Download from the core page:
    • WFGY_Core_Flagship_v2.0.txt (readable)
    • WFGY_Core_OneLine_v2.0.txt (ultra-compact)
  2. Upload the file into your chat. AutoBoot turns on silently.
  3. (Optional) Explicit Invoke: follow the seven-step chain to supervise generation.
  4. Verify checksums (MD5/SHA1/SHA256) — links next to each file.

Direct section: https://github.com/onestardao/WFGY/tree/main/core/README.md#-downloads
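
Step 4 needs nothing beyond the Python standard library. A minimal sketch, assuming you verify SHA-256; the expected digest below is a placeholder you replace with the value published next to each file:

```python
import hashlib

def file_digest(path, algo="sha256", chunk=1 << 16):
    """Stream a file through hashlib and return the hex digest."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

expected = "<paste the SHA256 published on the core page>"
actual = file_digest("WFGY_Core_OneLine_v2.0.txt")
print("OK" if actual == expected else f"MISMATCH: {actual}")
```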


Why this matters

Prompts describe intent. WFGY holds intent.

By inserting a reasoning chain that monitors Δs and rolls back before collapse, WFGY converts language structure into controllable generation — across models, without touching your infra.
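
There is no Python API for Δs; the engine runs inside the chat. Still, the "roll back before collapse" idea can be sketched as a trend check on a per-step Δs trace, with the threshold and window here chosen purely for illustration:

```python
def should_rollback(ds_trace, threshold=0.6, window=3):
    """Flag a rollback when Delta-s has risen across the last `window` steps
    and the latest value is within 80% of the collapse threshold."""
    if len(ds_trace) < window:
        return False
    recent = ds_trace[-window:]
    rising = all(a < b for a, b in zip(recent, recent[1:]))
    return rising and recent[-1] >= 0.8 * threshold

trace = [0.20, 0.31, 0.44, 0.52]   # placeholder per-step Delta-s values
print(should_rollback(trace))      # True: intervene before the hard breach
```

Watching the slope rather than waiting for a hard breach is what makes recovery cheap.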


Who should try this

  • Builders & researchers who need reliable reasoning (math/code/long context/vision)
  • RAG teams seeking observable, recoverable pipelines
  • Creators tired of grid-like collage outputs

Links & CTA

  • Star the repo to unlock more features & experiments
  • 🧪 Run the Eye-Visible benchmark and share your images
  • 🐞 Report issues — we fix them in the open

Start here: https://github.com/onestardao/WFGY/tree/main/core/README.md
