<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nischal Nepal</title>
    <description>The latest articles on DEV Community by Nischal Nepal (@n-is).</description>
    <link>https://dev.to/n-is</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3658079%2F8e3ba621-262a-48cf-bf79-18f4834ebc76.png</url>
      <title>DEV Community: Nischal Nepal</title>
      <link>https://dev.to/n-is</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/n-is"/>
    <language>en</language>
    <item>
      <title>Death of "Vibe Coding": Engineering Intent in the Age of Autonomous Agents</title>
      <dc:creator>Nischal Nepal</dc:creator>
      <pubDate>Sat, 17 Jan 2026 16:09:37 +0000</pubDate>
      <link>https://dev.to/n-is/the-death-of-the-prompt-engineering-intent-in-the-age-of-autonomous-agents-2852</link>
      <guid>https://dev.to/n-is/the-death-of-the-prompt-engineering-intent-in-the-age-of-autonomous-agents-2852</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbumqli9zkt6k7d37rm4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbumqli9zkt6k7d37rm4.jpg" alt="From Vibe Coding to Engineering Intent" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The industry is currently intoxicated by "Vibe Coding", the practice of using natural language to coax code out of Large Language Models. For hobbyists, this is a revolution. For CTOs and Senior Architects building enterprise-grade systems, it is a liability.&lt;/p&gt;

&lt;p&gt;After deep integration cycles with high-reasoning agents like &lt;strong&gt;Google’s Antigravity&lt;/strong&gt; and &lt;strong&gt;Anthropic’s Claude Code&lt;/strong&gt;, I’ve observed a critical failure point: developers are over-indexing on &lt;strong&gt;Guardrails&lt;/strong&gt; (negative constraints) and under-indexing on &lt;strong&gt;Architecture&lt;/strong&gt; (structural intent).&lt;/p&gt;

&lt;p&gt;If we want AI agents to produce production-ready code, we must shift our primary artifact from the source code to the &lt;strong&gt;Source of Truth&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. Architecture as the "Structural Glue"
&lt;/h2&gt;

&lt;p&gt;In traditional development, the &lt;code&gt;architecture.md&lt;/code&gt; was often a stale document that trailed behind the implementation. In an agentic workflow, it becomes the &lt;strong&gt;Command-and-Control&lt;/strong&gt; layer.&lt;/p&gt;

&lt;p&gt;Most "Agentic" failures stem from a lack of context. When you give an agent a task without a rigid architectural anchor, it defaults to the most statistically probable path, which is rarely the path required for your specific system.&lt;/p&gt;

&lt;p&gt;At our level, we must treat the &lt;strong&gt;Architecture as the Glue&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Consensus Model:&lt;/strong&gt; The architecture is the only document that requires high-bandwidth human intervention. It defines the "What" and the "How" (patterns, data flow, and capability-centric design).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deterministic Boundaries:&lt;/strong&gt; By defining clear interfaces and capabilities, a concept championed by the &lt;strong&gt;Universal API Specification (UAS)&lt;/strong&gt;, we collapse the agent's reasoning search space.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When the "Glue" is strong, the implementation becomes a predictable byproduct.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. From Guardrails to Spec Engineering
&lt;/h2&gt;

&lt;p&gt;Guardrails are a low-leverage defensive strategy. They are "negative constraints" designed to stop the agent from breaking things.&lt;/p&gt;

&lt;p&gt;The high-leverage move is &lt;strong&gt;Spec Engineering&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In my workflow, I utilize a &lt;strong&gt;Recursive Spec Generation&lt;/strong&gt; model:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The AI-Human Architect Consensus&lt;/strong&gt; defines the high-level structural constraints in &lt;code&gt;architecture.md&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Agent&lt;/strong&gt;, operating within the context of that architecture, is tasked with generating a detailed &lt;strong&gt;Implementation Spec&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Engineer&lt;/strong&gt; reviews the Spec first, and the Code only much later.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If the Spec is correct, the code is a commodity. We use &lt;strong&gt;Claude Code’s Plan Mode&lt;/strong&gt; and &lt;strong&gt;Antigravity’s Manager View&lt;/strong&gt; to enforce this hierarchy. The agent is never allowed to write code until it has successfully "interviewed" the architect and produced a spec that aligns with the project’s DNA.&lt;/p&gt;

&lt;p&gt;By providing a high-fidelity manifest or a robust &lt;code&gt;architecture.md&lt;/code&gt;, you aren't trying to "teach" the agent how to code; you are defining the &lt;strong&gt;Search Space&lt;/strong&gt;. You are giving the agent a clear destination and letting its massive reasoning compute find the most efficient path there.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. Validating the Paradigm Shift
&lt;/h2&gt;

&lt;p&gt;We are not just theorizing; we are witnessing a convergence of industry thought leaders and the fundamental laws of AI scaling.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub’s Spec-Kit (2025):&lt;/strong&gt; Early telemetry from &lt;a href="https://github.com/github/spec-kit" rel="noopener noreferrer"&gt;&lt;strong&gt;Spec-Kit&lt;/strong&gt;&lt;/a&gt; suggests that agentic workflows anchored by explicit specifications reduce "logic drift" by over 60% compared to traditional zero-shot prompting.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Thoughtworks Tech Radar (2025):&lt;/strong&gt; The &lt;a href="https://www.thoughtworks.com/radar/techniques/summary/spec-driven-development" rel="noopener noreferrer"&gt;&lt;strong&gt;Thoughtworks Tech Radar&lt;/strong&gt;&lt;/a&gt; has officially recognized &lt;strong&gt;Spec-driven Development (SDD)&lt;/strong&gt; as a vital technique. They highlight that by shifting the focus from implementation to specification, we create a "contract" that AI agents can fulfill with far higher reliability than ad-hoc prompting.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Universal API Specification (UAS):&lt;/strong&gt; A critical piece of this validation puzzle is the &lt;a href="https://github.com/nixitlabs/uas" rel="noopener noreferrer"&gt;&lt;strong&gt;UAS&lt;/strong&gt;&lt;/a&gt;. UAS moves us away from brittle, protocol-centric instructions toward &lt;strong&gt;Capability-Centric&lt;/strong&gt; manifests. In essence, it validates that the "How" never drifts from the "What."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code as a Byproduct:&lt;/strong&gt; Pioneers like &lt;a href="https://tessl.io/" rel="noopener noreferrer"&gt;&lt;strong&gt;Tessl&lt;/strong&gt;&lt;/a&gt; and &lt;a href="https://kiro.dev/" rel="noopener noreferrer"&gt;&lt;strong&gt;Kiro&lt;/strong&gt;&lt;/a&gt; are building the infrastructure for this "AI-Native" era. Their work validates a radical truth: &lt;strong&gt;Code is a transient asset.&lt;/strong&gt; In an agentic workflow, the implementation is just a temporary state. The real intellectual property is the "Glue," which lives in your &lt;code&gt;architecture.md&lt;/code&gt; and your specifications. As Tessl argues, we are moving toward a future where we "program" via specs, and the code is merely an ephemeral artifact generated to satisfy them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Bitter Lesson:&lt;/strong&gt; Why does this work? It follows &lt;a href="http://www.incompleteideas.net/IncIdeas/BitterLesson.html" rel="noopener noreferrer"&gt;&lt;strong&gt;Rich Sutton’s "Bitter Lesson"&lt;/strong&gt;&lt;/a&gt;. The history of AI shows that attempts to bake "human knowledge" (in our case, micro-managing guardrails and manual coding rules) into a system are eventually outperformed by general methods that leverage computation. The "Bitter Lesson" for engineers is that our manual code-smithing is less valuable than our ability to architect the objective.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Addy Osmani Principle:&lt;/strong&gt; Osmani’s recent work on &lt;a href="https://addyosmani.com/blog/good-spec/" rel="noopener noreferrer"&gt;&lt;strong&gt;spec writing for AI agents&lt;/strong&gt;&lt;/a&gt; highlights that, for senior leaders, the "Developer" is being replaced by the &lt;strong&gt;"Architect of Intent."&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  4. Building the Autonomous Enterprise
&lt;/h2&gt;

&lt;p&gt;For technology leaders, the goal is not to have engineers write code faster; it is to build systems that are &lt;strong&gt;self-documenting and self-correcting&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;By adopting a &lt;strong&gt;Capability-Centric&lt;/strong&gt; approach, as seen in the &lt;strong&gt;UAS framework&lt;/strong&gt;, we decouple business logic from transport protocols. This allows agents to work on the "logical surface" of the application, ensuring that the final output matches the verifiable integrity of the original manifest.&lt;/p&gt;




&lt;h2&gt;
  
  
  The New Stack
&lt;/h2&gt;

&lt;p&gt;The new engineering stack isn't just Python, TypeScript, or Go. It is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;architecture.md&lt;/code&gt;:&lt;/strong&gt; The Immutable Glue.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent-Generated Specs:&lt;/strong&gt; The Executable Maps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High-Reasoning Agents:&lt;/strong&gt; The Execution Engines.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As leaders, we must move our teams away from "prompting" and toward "architecting." The future of software isn't just written; it's specified.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>architecture</category>
      <category>antigravity</category>
      <category>programming</category>
    </item>
    <item>
      <title>The "Distributed State Tax" and Why We Don't Always Need Redis: Introducing C3</title>
      <dc:creator>Nischal Nepal</dc:creator>
      <pubDate>Fri, 12 Dec 2025 06:22:13 +0000</pubDate>
      <link>https://dev.to/n-is/the-distributed-state-tax-and-why-we-dont-always-need-redis-introducing-c3-12a8</link>
      <guid>https://dev.to/n-is/the-distributed-state-tax-and-why-we-dont-always-need-redis-introducing-c3-12a8</guid>
      <description>&lt;p&gt;One of the hardest lessons in software architecture is learning that every decoupling creates a new communication cost. We call this the "&lt;strong&gt;Distributed State Tax&lt;/strong&gt;".&lt;/p&gt;

&lt;p&gt;In a monolithic application, sharing data is essentially free. It’s just a memory lookup. But as soon as you split that application into microservices, you are "billed" for every piece of shared data. The currency isn't just latency; it's complexity. A simple question like "Is this user online?" or "Is this feature enabled?" transforms from a nanosecond variable check into a distributed systems problem involving networks, serialization, and failure modes.&lt;/p&gt;

&lt;p&gt;For years, the industry has presented us with a binary approach to solve this. Whether you are a Junior Developer building your first service or a Principal Architect designing a platform, you have likely been forced to choose between two imperfect paths:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Remote Procedure Call ("Just Ask the Source")&lt;/strong&gt;: You make an API call to the source of truth for every request.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Trap&lt;/em&gt;: This is easy to implement but creates "Chatty Services." Your network becomes a bottleneck, latency spikes, and if the source service goes down, everyone goes down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Caching Infrastructure ("Just Add Redis")&lt;/strong&gt;: You deploy a dedicated cache cluster like Redis.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Trap&lt;/em&gt;: This fixes performance but introduces "Infrastructure Bloat." You are now managing a complex distributed system just to read a simple value. If you treat Redis as a cache, you face the Dual-Write Problem (database saves, cache fails, data drifts). If you treat Redis as a primary store (to avoid dual-writes), you must now manage persistence, backups, and high availability for a whole new database engine.&lt;/p&gt;

&lt;p&gt;I believe there is a third way. We can stop treating caching as additional infrastructure and start treating it as a protocol rooted in the database we already use.&lt;/p&gt;

&lt;p&gt;Enter &lt;strong&gt;C3 (Coherent Cluster Cache)&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Theoretical Vision: The Database IS the OS
&lt;/h2&gt;

&lt;p&gt;The architecture of C3 is not just about saving money on Redis instances; it is rooted in the concept of DBOS (Database-Oriented Operating System). The thesis is simple: modern relational databases (like PostgreSQL) are powerful enough to handle low-level coordination concerns like messaging and inter-process communication.&lt;/p&gt;

&lt;p&gt;C3 also draws inspiration from Tuple Spaces and the Linda language from the 1980s. In this model, processes don't message each other directly; they generate "tuples" into a shared abstract space.&lt;/p&gt;

&lt;p&gt;In the C3 architecture, PostgreSQL acts as that persistent Tuple Space. By standardizing the schema and invalidation signals at the persistence layer, we can create a "Serverless Caching" architecture where the database acts as the coordination engine.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is C3?
&lt;/h2&gt;

&lt;p&gt;C3 is a Go library that implements the Database-Centric Cache Protocol (DCCP). It provides a two-tiered caching strategy that delivers the read performance of local memory with the consistency guarantees of a distributed cache.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Tiered Architecture
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;L1 Cache (In-Memory)&lt;/strong&gt;: An LRU map inside your application's memory. This serves the "hot path" with nanosecond-level latency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;L2 Cache (PostgreSQL)&lt;/strong&gt;: The shared source of truth. If data isn't in L1, C3 fetches it from the &lt;code&gt;cache_data&lt;/code&gt; table in Postgres.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5oay1c655sw6vl6yu5xu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5oay1c655sw6vl6yu5xu.png" alt="Architecture with C3 at a glance" width="473" height="455"&gt;&lt;/a&gt;&lt;/p&gt;
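&lt;p&gt;As a rough illustration of the tiering, the read path looks something like the sketch below. This is not C3's actual API: the &lt;code&gt;TieredCache&lt;/code&gt; type and the &lt;code&gt;loadL2&lt;/code&gt; hook are invented names, and the Postgres fetch is stubbed out as a plain function.&lt;/p&gt;

```go
package main

// Minimal sketch of a two-tier read path in the spirit of C3.
// TieredCache and loadL2 are illustrative names, not the library's API;
// in the real library the L1 tier is an LRU map and the L2 fetch would be
// a SELECT against the cache_data table in Postgres.
type TieredCache struct {
	l1     map[string][]byte               // hot in-process tier
	loadL2 func(key string) ([]byte, bool) // stand-in for the Postgres fetch
}

// Get serves from L1 when possible and falls back to L2 on a miss,
// populating L1 so subsequent reads stay in local memory.
func (c *TieredCache) Get(key string) ([]byte, bool) {
	if v, ok := c.l1[key]; ok {
		return v, true // L1 hit: no network round trip
	}
	v, ok := c.loadL2(key)
	if ok {
		c.l1[key] = v
	}
	return v, ok
}
```

&lt;p&gt;A production version would bound the L1 map (LRU eviction) and guard it with a mutex; both are omitted here for brevity.&lt;/p&gt;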

&lt;p&gt;But the real magic isn't in the storage; it's in the Coherency Engine.&lt;/p&gt;

&lt;h2&gt;
  
  
  How We Achieve Coherency
&lt;/h2&gt;

&lt;p&gt;The hardest part of distributed caching is invalidation. C3 solves this by leveraging PostgreSQL's &lt;code&gt;ACID&lt;/code&gt; guarantees and its native &lt;code&gt;LISTEN/NOTIFY&lt;/code&gt; subsystem.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The Atomic Write Path
&lt;/h3&gt;

&lt;p&gt;When you set a value in C3, you aren't just writing to a database; you are initiating a transaction.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: The library writes the key/value pair to the L2 table (&lt;code&gt;INSERT ... ON CONFLICT DO UPDATE&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: Within the same transaction, it executes &lt;code&gt;pg_notify('cache_invalidate', key)&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;: The transaction commits.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This guarantees Atomicity. The data update and the invalidation signal happen together. If the transaction fails, no signal is sent. This eliminates the "Cache Drift" common in dual-write systems where an app writes to the DB and then fails to update Redis.&lt;/p&gt;
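&lt;p&gt;The steps above can be sketched in Go with the executor abstracted behind a tiny interface so the ordering is visible. The &lt;code&gt;Execer&lt;/code&gt; interface and &lt;code&gt;SetAtomic&lt;/code&gt; function are invented for illustration; in practice this would run against a &lt;code&gt;*sql.Tx&lt;/code&gt; from Go's &lt;code&gt;database/sql&lt;/code&gt;.&lt;/p&gt;

```go
package main

// Execer is a simplified stand-in for *sql.Tx (database/sql's Exec also
// returns a sql.Result, dropped here for brevity).
type Execer interface {
	Exec(query string, args ...any) error
}

// SetAtomic performs Steps 1 and 2 of the write path on one transaction;
// Step 3 (COMMIT) is the caller's responsibility. Because pg_notify is
// queued inside the transaction, the signal is delivered only on commit.
func SetAtomic(tx Execer, key string, value []byte) error {
	// Step 1: upsert into the shared L2 table.
	if err := tx.Exec(
		`INSERT INTO cache_data (key, value) VALUES ($1, $2)
		 ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value`,
		key, value,
	); err != nil {
		return err
	}
	// Step 2: queue the invalidation signal in the SAME transaction.
	return tx.Exec(`SELECT pg_notify('cache_invalidate', $1)`, key)
}
```

&lt;p&gt;If either statement fails, the caller rolls back and neither the data nor the signal is ever visible, which is exactly the property the dual-write pattern lacks.&lt;/p&gt;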

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbz424wnf8uabkyh29qyz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbz424wnf8uabkyh29qyz.png" alt="Dual Write vs Atomic Write" width="346" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Generative Communication
&lt;/h3&gt;

&lt;p&gt;Every service instance running C3 maintains a background &lt;code&gt;NotificationListener&lt;/code&gt; connected to the database. When Service A updates a user profile, the database broadcasts the &lt;code&gt;cache_invalidate&lt;/code&gt; signal. Services B, C, and D receive this signal and perform a surgical invalidation of that specific key in their local L1 memory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm1gf1ofnyts1nfi4i4v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm1gf1ofnyts1nfi4i4v.png" alt="Coherency Flow" width="354" height="186"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Resilience and Fail-Safes
&lt;/h3&gt;

&lt;p&gt;In IoT and Cloud systems, network partitions are a reality. What happens if the listener disconnects?&lt;/p&gt;

&lt;p&gt;C3 implements a strict fail-safe. If the connection to the database is lost, the library automatically purges the entire L1 cache. This ensures that a partitioned node "fails secure," serving higher-latency data (L2 misses) rather than serving stale, incorrect data.&lt;/p&gt;
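&lt;p&gt;The listener's core logic, covering both surgical invalidation and the fail-secure purge, can be sketched as below. The &lt;code&gt;Event&lt;/code&gt; type and &lt;code&gt;RunListener&lt;/code&gt; function are invented for this sketch; the real library feeds this loop from PostgreSQL's &lt;code&gt;LISTEN/NOTIFY&lt;/code&gt; connection rather than a plain channel.&lt;/p&gt;

```go
package main

// Event is a simplified notification: either an invalidation for one key,
// or a signal that the database connection was lost.
type Event struct {
	Key          string
	Disconnected bool
}

// RunListener applies coherency events to the local L1 map. On a lost
// connection it fails secure: the entire L1 is purged, so a partitioned
// node serves L2 misses instead of stale data.
func RunListener(events <-chan Event, l1 map[string][]byte) {
	for ev := range events {
		if ev.Disconnected {
			for k := range l1 { // purge everything: "fail secure"
				delete(l1, k)
			}
			continue
		}
		delete(l1, ev.Key) // surgical invalidation of one key
	}
}
```

&lt;p&gt;A real implementation would also resubscribe and resynchronize after reconnecting, since notifications sent during the outage are lost.&lt;/p&gt;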

&lt;h2&gt;
  
  
  Operational Simplicity &amp;amp; Observability
&lt;/h2&gt;

&lt;p&gt;As an architect, I value tools that are easy to run and even easier to maintain. Compared to a Service Mesh, which requires managing control planes and sidecars, C3 requires only a library import and a connection string.&lt;/p&gt;

&lt;p&gt;However, we didn't skimp on visibility. C3 comes with built-in OpenTelemetry instrumentation. It emits metrics for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;L1 vs. L2 Hit Rates&lt;/strong&gt;: To tune your memory allocation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Coherency Events&lt;/strong&gt;: To track invalidation volume.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tracing&lt;/strong&gt;: Verifying that L1 hits have zero network span.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  When Should You Use C3?
&lt;/h2&gt;

&lt;p&gt;C3 is not a replacement for Kafka or a high-throughput job queue. However, it excels where Redis is overkill.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use C3 if:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;You want Durability by default&lt;/strong&gt;: Unlike a typical Redis setup, C3 backs everything to Postgres. If your service restarts, the data is still there.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;You want "Zero Ops"&lt;/strong&gt;: You don't want to manage, patch, or scale a separate Redis cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Your workload is Read-Heavy&lt;/strong&gt;: User profiles, feature flags, configuration, and session data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;You work in a Polyglot Environment&lt;/strong&gt;: Any language that can talk to Postgres can participate in the protocol.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We often over-engineer our systems because we fear inconsistency. We deploy massive infrastructure to solve problems that could be handled by the database we already have.&lt;/p&gt;

&lt;p&gt;C3 validates the hypothesis that for read-heavy, transient state, the database itself is the most efficient coordinator. It allows us to build "Smart Clients" that maximize local resources while maintaining a coherent view of the world.&lt;/p&gt;

&lt;p&gt;The Database-Centric Cache Protocol (DCCP) is fundamentally language-agnostic, and we believe this pattern is too valuable to stay within the boundaries of a single language ecosystem. We are actively looking for contributors to help replicate the C3 library in Python (c3-py), TypeScript/Node.js, Rust, and Java.&lt;/p&gt;

&lt;p&gt;It’s time to pay less tax on our distributed state. Check out the repository and the protocol spec—let's build a coherent future together.&lt;/p&gt;

&lt;p&gt;For those interested in the code, the C3 library is open-sourced and available for Go applications, complete with E2E tests validating the coherency protocols.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/nixitlabs/c3" rel="noopener noreferrer"&gt;C3 Repo at Github&lt;/a&gt;&lt;/p&gt;

</description>
      <category>distributedsystems</category>
      <category>database</category>
      <category>microservices</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
