minh đức nguyễn

Designing OCP: a governed runtime built around observe → match → commit

Most software systems allow side effects to happen throughout execution: read here, write there, mutate state, call external systems, and then rely on discipline, tooling, logs, and debugging to reconstruct what actually happened.

I wanted to explore a different model.

That is why I designed OCP: a governed runtime and execution model built around an observe → match → commit flow, where side effects are meant to be explicit, constrained, and committed through a clearer boundary instead of happening as ambient operations throughout the system.

This is not a claim that OCP is a finished answer, a universal replacement, or even a mature programming language in the broad sense people usually expect. I am presenting it as a design-led technical experiment around a question I think is worth testing seriously:

What changes if determinism, replayability, and effect governance are treated as first-class runtime constraints from the beginning?


The problem I wanted to explore

A lot of software complexity comes from the gap between:

  • what the program appears to mean,
  • what the runtime actually does,
  • and what the outside world observes after side effects happen.

In conventional systems, side effects are often easy to perform but harder to reason about afterward. Once reads, writes, external calls, and mutations are spread throughout execution, the burden shifts to tracing, debugging, logs, discipline, and post hoc reconstruction.

That works, but it also creates friction:

  • replay becomes harder,
  • auditability becomes weaker,
  • state transitions become less legible,
  • and answering “what exactly happened?” becomes more expensive than it should be.

OCP is an attempt to push against that direction.


The core idea

At a high level, OCP is organized around a simple conceptual flow:

  1. Observe

    Gather the facts or inputs the system is allowed to see.

  2. Match

    Evaluate structure, conditions, and possible transitions in a constrained way.

  3. Commit

    Make effects happen through an explicit, governed commit step.

The point is not just aesthetic structure. The point is to make side effects feel less like arbitrary ambient operations and more like explicit runtime events with a clearer boundary and stronger control surface.

That design is intended to improve three things:

  • determinism
  • replayability
  • governance of effects and state transitions

In other words, I want execution to be easier to inspect, reason about, replay, and constrain.
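To make the flow concrete, here is a minimal sketch of the observe → match → commit shape in Python. This is not OCP's actual syntax or API; every name here (`Observation`, `Commit`, `step`) is hypothetical and exists only to illustrate the boundary: observation gathers permitted facts, matching is pure evaluation, and effects happen only inside the commit step.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical names, not OCP's real API: this only illustrates
# the observe -> match -> commit shape described above.

@dataclass(frozen=True)
class Observation:
    facts: dict  # the inputs this step is allowed to see

@dataclass(frozen=True)
class Commit:
    effect: str   # a named, explicit effect
    payload: dict  # data the effect carries

def step(observe: Callable[[], Observation],
         match: Callable[[Observation], Optional[Commit]],
         commit: Callable[[Commit], None]) -> None:
    obs = observe()        # 1. Observe: gather permitted facts
    decision = match(obs)  # 2. Match: pure evaluation, no side effects
    if decision is not None:
        commit(decision)   # 3. Commit: effects become real only here

# Example: a thermostat-style rule committing a single named effect.
log = []
step(
    observe=lambda: Observation({"temp": 31}),
    match=lambda o: Commit("cool", {"target": 25}) if o.facts["temp"] > 28 else None,
    commit=lambda c: log.append((c.effect, c.payload)),
)
print(log)  # [('cool', {'target': 25})]
```

The design choice the sketch tries to surface: because `match` returns a value instead of performing effects, everything that changes the world passes through one inspectable point.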


Why this was worth building

I was not interested in creating “yet another syntax experiment.”

What interested me was the runtime model itself.

I wanted to see what happens if the system is shaped from the start around questions like these:

  • Can state transitions be made more legible?
  • Can effects be forced through narrower, more explicit gates?
  • Can replay, debugging, and audit become stronger properties of the model rather than add-on tooling?
  • Can a runtime be designed around controlled collapse from observation into committed world change?

That is the line of inquiry behind OCP.


What OCP is trying to be

OCP is trying to be a design-led governed execution model for explicit observation, constrained matching, and controlled commitment of effects.

The emphasis is not on maximizing arbitrary freedom at every point in execution.

The emphasis is on making the runtime model more disciplined and structurally inspectable.

That means OCP is intentionally concerned with questions like:

  • what the system is allowed to observe,
  • how possible transitions are selected,
  • when effects are permitted to become real,
  • and how those transitions can be replayed or audited later.
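The first of those questions, what the system is allowed to observe, can be sketched as an observation scope. Again, this is an illustrative assumption, not OCP's actual mechanism: a step declares the fields it may see, and the runtime hands it only those.

```python
# Hypothetical sketch of an observation scope: fields outside
# `allowed` are simply invisible to the step that observes.
def scoped_observation(world_state: dict, allowed: set) -> dict:
    return {k: v for k, v in world_state.items() if k in allowed}

world = {"temp": 31, "secret_token": "xyz", "mode": "auto"}
obs = scoped_observation(world, {"temp", "mode"})
print(obs)  # {'temp': 31, 'mode': 'auto'}
```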

This is also why OCP is not best understood as “just syntax.”

The design lives primarily at the level of runtime semantics, execution structure, and effect governance.


What OCP is not

It is worth being explicit here.

OCP is not currently being presented as:

  • a finished production system,
  • a drop-in replacement for mainstream languages,
  • or proof that every kind of software should be forced into this model.

It is also not being presented as a mature, general-purpose programming language already proven across the full range of computation people normally expect from that label.

What is strongest in OCP today is the governed runtime and execution side:

  • structured observe/match/commit flow,
  • replay-oriented execution,
  • policy-gated effects,
  • and a more explicit model of state transition boundaries.

That is the part I believe is most real and most worth evaluating right now.
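Two of those properties, replay-oriented execution and policy-gated effects, can be sketched together. The sketch below is my own illustration under stated assumptions, not OCP's implementation: effects pass through a policy allow-list before being journaled, and because matching is pure, re-running the same recorded observations reproduces the same journal.

```python
# Hypothetical sketch: policy-gated commits plus replay from a
# recorded observation log. Names are illustrative, not OCP's API.

ALLOWED_EFFECTS = {"write_db", "emit_event"}  # the policy: an allow-list

def gated_commit(effect, payload, journal):
    if effect not in ALLOWED_EFFECTS:
        raise PermissionError(f"effect {effect!r} is not permitted by policy")
    journal.append((effect, payload))  # committed effects are journaled

def run(observations, match, journal):
    # match is pure and effects flow only through gated_commit, so
    # re-running the same observation log yields the same journal.
    for obs in observations:
        decision = match(obs)
        if decision is not None:
            gated_commit(*decision, journal)

recorded = [{"user": "a", "action": "save"}, {"user": "b", "action": "noop"}]
rule = lambda o: ("write_db", o) if o["action"] == "save" else None

first, replay = [], []
run(recorded, rule, first)
run(recorded, rule, replay)
print(first == replay)  # True: identical journals on replay
```

The journal doubles as the audit trail: "what exactly happened?" becomes a question you answer by reading committed effects, not by reconstructing logs.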

The useful question is not “is this revolutionary?”

The useful question is:

Does this runtime model create real technical value, or does it mostly add conceptual ceremony?

That is the standard I think it should be judged by.


On authorship and AI-assisted implementation

I want to be direct about this.

OCP is a design-led, human-directed, AI-assisted project.

I am not presenting it as a hand-written solo codebase built line by line in the traditional way. My role is closer to this:

  • defining the original design intent,
  • shaping the model and constraints,
  • setting quality bars,
  • evaluating outputs,
  • rejecting weak directions,
  • and steering the system toward coherence.

In other words, the intellectual ownership is in the design, structure, philosophy, validation criteria, and direction of the project, while AI is used as an implementation partner.

I think that distinction should be stated openly rather than hidden.

I also understand that this comes with a real cost: AI-assisted implementation can produce code that is harder to read, harder to trust, and harder to evaluate cleanly in public. That criticism is fair, and I do not want to dodge it.

For me, the more important question is still this:

Does the artifact hold up at the level of model, documentation, examples, and runtime behavior?

That is the pressure I want OCP to face.


Current state of the project

OCP already has a public GitHub repository, and the current work is focused on making the project concrete, legible, and testable as a real artifact rather than just an idea.

The priorities are straightforward:

  • a repository structure outsiders can navigate,
  • documentation that explains the model clearly,
  • examples that show the execution shape,
  • and a presentation that makes the runtime constraints understandable.

I am less interested in pretending it is finished than in making sure it is concrete enough to be evaluated honestly.

GitHub:

https://github.com/DucHaiten/OCP


The kind of feedback I actually want

I am not looking for generic encouragement.

The most useful criticism would be on questions like these:

  1. Is the core execution model understandable from the documentation and examples?
  2. Does observe → match → commit produce meaningful advantages, or mostly extra structure?
  3. Where does the model feel technically disciplined, and where does it feel over-constrained?
  4. Which existing runtimes, orchestration systems, or language/runtime models should OCP be compared against more directly?
  5. If you were skeptical, which part would you challenge first: semantics, ergonomics, implementation strategy, or use-case fit?

That is the level of pressure I want the project under.


Why I am sharing it publicly now

Because design ideas harden.

And once they harden too early, they become harder to challenge, harder to refine, and easier to defend for emotional reasons rather than technical ones.

I would rather put OCP in front of people while it can still be criticized at the level that matters:
the model, the runtime assumptions, the explicit constraints, the public evidence, and the claimed value.

If the model is weak, I want that exposed.

If it is promising but misframed, I want that exposed too.

If the implementation and presentation fail to communicate the real idea, that is worth learning early as well.


Closing

OCP is an attempt to explore a stricter governed runtime model centered on:

  • explicit observation,
  • constrained matching,
  • and controlled commitment of effects.

Its value, if it has any, will not come from branding.

It will come from whether this structure leads to clearer execution, better replayability, stronger auditability, and more legible state transitions in practice.

If that sounds interesting, take a look at the repository and tell me where the model is strong, where it is weak, and where it is simply adding cost without enough return.

That is the conversation I want.

GitHub:

https://github.com/DucHaiten/OCP
