<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Daniel Davis</title>
    <description>The latest articles on DEV Community by Daniel Davis (@jackcolquitt).</description>
    <link>https://dev.to/jackcolquitt</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2227928%2F59d4b239-61b9-48be-89f3-a246f7680346.PNG</url>
      <title>DEV Community: Daniel Davis</title>
      <link>https://dev.to/jackcolquitt</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jackcolquitt"/>
    <language>en</language>
    <item>
      <title>A New Era of Determinism</title>
      <dc:creator>Daniel Davis</dc:creator>
      <pubDate>Tue, 20 Jan 2026 04:54:11 +0000</pubDate>
      <link>https://dev.to/jackcolquitt/a-new-era-of-determinism-2cn</link>
      <guid>https://dev.to/jackcolquitt/a-new-era-of-determinism-2cn</guid>
      <description>&lt;p&gt;A cosmic ray strikes a computer chip at 30,000 feet. A single bit flips. The aircraft's navigation system fails. &lt;/p&gt;

&lt;p&gt;This isn’t the intro to the next Final Destination movie. This happens. In the fall of 2025, increased radiation at cruising altitudes led to a documented rise in Single Event Upsets (SEUs) in aircraft systems, prompting Airbus to issue safety advisories and corrective actions. &lt;/p&gt;

&lt;p&gt;Before you panic: nothing bad happened. How? For seventy years, we’ve built systems that had to be right every time. We called this determinism. If you put in A, you always got B. No exceptions. No surprises. Not even for freak storms of cosmic radiation.&lt;/p&gt;

&lt;p&gt;But what do we do now that we’re using AI to put vast amounts of human intelligence into machines? Human intelligence is not deterministic. It estimates, compares, and guesses — usually correctly, but sometimes not. This creates a problem. How do you trust something with your life that only works most of the time?&lt;/p&gt;

&lt;p&gt;The answer is not to make AI deterministic the old way. That is impossible. The answer is to look at deterministic engineering in a new way. This article explains why.&lt;/p&gt;

&lt;p&gt;First, we will look at how engineers built deterministic systems in the past. Then we will see why even those systems fail when physics intervenes. Next, we will examine why AI cannot work like old systems. After that, we will explore what cybersecurity learned when perfect defense became impossible. Finally, we will see how context graphs offer a new kind of determinism—one based on outcomes, not processes.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Old Determinism&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Engineers worshiped determinism. They had to.&lt;/p&gt;

&lt;p&gt;When you fly an airplane full of people, the math must be exact. When you run a bank's ledger, the totals must match. When you control a nuclear reactor, the sensors must read true. Get it wrong once and people die or fortunes vanish. So engineers built systems that behaved the same way every single time. Same input, same output. Always.&lt;/p&gt;

&lt;p&gt;They used redundancy. They used verification. They used formal proofs. They tested millions of scenarios. They eliminated randomness wherever they found it. They succeeded, mostly. The computers that landed humans on the moon in 1969 worked because every calculation was deterministic. The software that controls your car's airbags deploys them in milliseconds because the logic never varies.&lt;/p&gt;

&lt;p&gt;This approach defined computing for decades. It became the foundation of trust. If a system was deterministic, you could trust it. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;When the Universe Interferes&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A chip is just silicon with electrical charges representing ones and zeros. A high-energy particle can knock an electron loose. A one becomes a zero. A zero becomes a one. Engineers call this a single event upset, or SEU. The chip is not broken. It just changed state. One bit flipped. That is all it takes.&lt;/p&gt;
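
&lt;p&gt;To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of a single event upset: one XOR toggles one bit, and a stored value silently changes.&lt;/p&gt;

```python
def flip_bit(word: int, bit: int) -> int:
    """Simulate a single event upset: XOR toggles exactly one bit."""
    return word ^ (2 ** bit)

altitude_ft = 30000                     # value held in memory
corrupted = flip_bit(altitude_ft, 14)   # a cosmic ray strikes bit 14

print(altitude_ft)  # 30000
print(corrupted)    # 13616 -- the chip is not broken; one bit changed state
```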

&lt;p&gt;In the fall of 2025, solar storms increased cosmic radiation at high altitudes. Aircraft experienced single event upsets at a rate of sixty errors per hour per gigabyte of memory in systems critical to the safe operation of the aircraft. One flight suffered a failure in the flight control computer. Yet, nothing bad happened.&lt;/p&gt;

&lt;p&gt;These incidents get to the heart of the question: what is determinism? The flight control computer may have produced errors because of the cosmic radiation, but other systems compensated with redundancy and error correction, keeping the aircraft resilient in the face of catastrophe. &lt;/p&gt;
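
&lt;p&gt;Redundancy of this kind can be sketched in a few lines. The triple modular redundancy below is a simplified illustration, not any avionics vendor's actual design: three redundant units report a value, and a majority vote means a single corrupted copy cannot change the outcome.&lt;/p&gt;

```python
from collections import Counter

def majority_vote(readings):
    """Return the value reported by a strict majority of redundant units."""
    value, count = Counter(readings).most_common(1)[0]
    if count * 2 > len(readings):  # strict majority required
        return value
    raise RuntimeError("no majority: escalate to a backup system or a human")

# Three redundant flight computers; one suffers a bit flip.
print(majority_vote([30000, 30000, 13616]))  # 30000 -- the upset is outvoted
```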

&lt;p&gt;Individual aircraft systems exhibited non-deterministic behavior, yet the overall system, the aircraft, still achieved the desired outcome: landing safely. At the level of that goal, the aircraft behaved deterministically. &lt;/p&gt;

&lt;p&gt;If even our most carefully engineered systems cannot escape uncertainty, the challenge becomes far greater when uncertainty is intrinsic by design.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Probabilistic Machine&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AI does not work like traditional software. It does not follow explicit rules. It learns patterns from data. It makes predictions shaped by its training datasets and by how the training engineers chose to reward the model for its responses. Fundamentally, these techniques introduce uncertainty.&lt;/p&gt;

&lt;p&gt;An AI model might identify fraud with 95% accuracy. It might translate text correctly 98% of the time. It might diagnose disease better than most doctors. But it cannot give you 100% certainty. The math does not allow it. Neural networks are statistical models. They deal in probabilities, not absolutes.&lt;/p&gt;

&lt;p&gt;This troubles people who grew up trusting deterministic systems. If an AI cannot guarantee correctness, how can you deploy it in critical systems? How can you trust it with medical decisions, financial transactions, or security judgments? The instinct is to demand that AI become deterministic like old software.&lt;/p&gt;

&lt;p&gt;That instinct is wrong. Consider the flight computers failing under cosmic radiation. We could conclude that the software needs a total rethink (and Airbus did issue corrective actions, but that is a separate issue), yet the aircraft still operated safely. The redundancy, error detection, and resiliency achieved the desired outcome.&lt;/p&gt;

&lt;p&gt;The solution is not to try to make AI deterministic through rigid standards and endless testing and human reviews. The solution is to rethink what we mean by trust.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Outcome-Based Determinism&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Traditional determinism focused on process. If you control every step, you control the result. This works when you can enumerate every possible input and define every correct output. It fails when the problem space is too large or too complex to fully specify.&lt;/p&gt;

&lt;p&gt;AI operates in exactly these spaces. You cannot write rules for every possible email spam message. You cannot list every variant of every cyber attack. You cannot define every normal behavior pattern for millions of users. The combinatorics defeat you. So you build a system that learns to recognize patterns.&lt;/p&gt;

&lt;p&gt;But you can still achieve determinism at a different level. Not process determinism—outcome determinism. You accept that the AI's internal operations are probabilistic. But you ensure that the system as a whole produces reliable, auditable, explainable results that you can verify and correct.&lt;/p&gt;

&lt;p&gt;This is the shift. Old determinism said: "This code always executes the same way." New determinism says: "This system always achieves the required outcome, even if the path varies." It is determinism at the level of goals, not operations.&lt;/p&gt;
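
&lt;p&gt;One way to picture outcome determinism is a guard around a probabilistic component: the inner call may vary from run to run, but the wrapper only releases results that pass explicit checks, retrying or escalating otherwise. A hedged sketch, where the &lt;code&gt;model&lt;/code&gt; and &lt;code&gt;is_acceptable&lt;/code&gt; callables are hypothetical placeholders:&lt;/p&gt;

```python
def deterministic_outcome(model, prompt, is_acceptable, retries=3):
    """Wrap a probabilistic model so the system's outcome is reliable:
    verify each candidate, retry on failure, escalate when retries run out."""
    for attempt in range(retries):
        candidate = model(prompt)     # probabilistic step: output may vary
        if is_acceptable(candidate):  # deterministic step: outcome is checked
            return candidate
    raise RuntimeError("no acceptable outcome: escalate to human review")

# Toy demonstration with a 'model' that misbehaves on its first try.
outputs = iter(["garbage", "42"])
result = deterministic_outcome(lambda p: next(outputs), "q",
                               is_acceptable=str.isdigit)
print(result)  # 42
```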

&lt;p&gt;How do you build that? The cybersecurity industry figured it out by necessity.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Security Lesson&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;For decades, cybersecurity focused on prevention. Build a firewall. Block malicious traffic. Prevent intrusions. Stop the attack before it starts. This approach assumed you could enumerate threats and defend against each one. It assumed perfect prevention was possible.&lt;/p&gt;

&lt;p&gt;It was not. Attackers evolve too fast. New exploits appear daily. Zero-day vulnerabilities are, by definition, unknown until exploited. The attack surface grows as systems become more complex and connected. Perfect prevention became impractical, then impossible, then irrelevant.&lt;/p&gt;

&lt;p&gt;Out of necessity, the cybersecurity industry has shifted to prioritize detection and response. Assume attackers will get in. Assume perfect defense is impossible. Instead, focus on detecting anomalies quickly and responding effectively. Reduce time to detection. Limit damage. Learn and adapt. This has worked better because it matches reality.&lt;/p&gt;

&lt;p&gt;But detection has its own problem. How do you detect something you have never seen before? Traditional detection looks for known signatures or known patterns. New attacks have no signatures. Novel behaviors have no established patterns. You need a different approach.&lt;/p&gt;

&lt;p&gt;You need context.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Context Graphs: Building Trust with Structure&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Since correctness cannot be formally proven for AI systems, we have to shift our mindset about trust. Instead of expecting an AI system to always produce a single correct answer, we must ensure that it consistently produces an &lt;em&gt;acceptable&lt;/em&gt; outcome. Unlike classical determinism, input A will not always yield output B. Instead, correctness exists within a range of possibilities.&lt;/p&gt;

&lt;p&gt;Vector embeddings made AI useful for information retrieval. Feed documents into a model, and it produces numerical representations of meaning. When a user asks a question, the system retrieves nearby embeddings and provides them to the AI. But this approach has limits.&lt;/p&gt;

&lt;p&gt;Vectors lose contextual structure.&lt;/p&gt;

&lt;p&gt;A document becomes a point in high-dimensional space. You know it is close to other points, but you do not know &lt;em&gt;how&lt;/em&gt; they relate. Is this person an employee of that company, a customer, or a competitor? The vector cannot tell you. The relationship is gone. When the AI generates an answer, you cannot trace which facts came from where or how they were connected.&lt;/p&gt;

&lt;p&gt;Graph structures preserve what vectors discard.&lt;/p&gt;

&lt;p&gt;Entities remain distinct. Relationships remain explicit. A person connects to a company through an employment relationship. That company connects to a transaction through a time-stamped event. Each link has properties. Each entity has attributes. The structure is visible and inspectable.&lt;/p&gt;
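
&lt;p&gt;As a rough illustration (plain Python dicts standing in for a graph database; all names and values invented), a structure like this keeps entities, attributes, and typed relationships inspectable:&lt;/p&gt;

```python
# Entities keep their own attributes.
entities = {
    "alice":   {"type": "Person", "title": "Analyst"},
    "acme":    {"type": "Company", "sector": "Finance"},
    "txn-001": {"type": "Transaction", "amount": 1200},
}

# Relationships are explicit, typed, and carry properties of their own.
edges = [
    {"from": "alice", "rel": "EMPLOYED_BY", "to": "acme",
     "props": {"start": "2024-03-01", "role": "Analyst"}},
    {"from": "acme", "rel": "INITIATED", "to": "txn-001",
     "props": {"timestamp": "2026-01-15T09:30:00Z"}},
]

# The structure is queryable: how exactly does alice relate to acme?
rels = [e["rel"] for e in edges if e["from"] == "alice" and e["to"] == "acme"]
print(rels)  # ['EMPLOYED_BY']
```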

&lt;p&gt;Domain-specific graphs add another layer of meaning. A cybersecurity graph understands what an IP address is, what normal access patterns look like, and what escalation means. A medical graph understands symptoms, diagnoses, and treatments. This is not generic data storage. It is organized knowledge shaped by domain semantics.&lt;/p&gt;

&lt;p&gt;Reification takes this further.&lt;/p&gt;

&lt;p&gt;Relationships themselves become first-class entities. An “employed by” relationship is no longer just an edge—it has properties such as start date, end date, role, and confidence score. When an AI uses this relationship to generate an answer, the system records it. Context becomes linked to outcomes. The system knows which facts led to which conclusions.&lt;/p&gt;

&lt;p&gt;Over time, the system learns what works. Context patterns that lead to desired outcomes are prioritized. Patterns that lead to errors or poor results are weighted down. The graph evolves based on results, not just on what data exists. It becomes tuned to outcomes.&lt;/p&gt;
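
&lt;p&gt;The outcome-tuning described above can be as simple as a running score per context pattern, nudged toward 1 on success and toward 0 on failure. This is a toy update rule for illustration, not a claim about any specific product:&lt;/p&gt;

```python
def update_weight(weight, success, rate=0.2):
    """Exponential moving average: reward patterns that lead to good outcomes."""
    target = 1.0 if success else 0.0
    return weight + rate * (target - weight)

w = 0.5  # neutral starting score for a context pattern
for outcome in [True, True, False, True]:
    w = update_weight(w, outcome)
print(round(w, 3))  # patterns with mostly good outcomes drift upward
```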

&lt;p&gt;This structure also enables powerful anomaly detection.&lt;/p&gt;

&lt;p&gt;Graphs reveal patterns, and deviations from those patterns stand out. A user who normally accesses three systems suddenly accesses twenty. A transaction flow that usually takes five steps suddenly takes two. An entity that should be highly connected appears isolated. These are not known attack signatures. They are structural anomalies.&lt;/p&gt;
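
&lt;p&gt;A structural anomaly check can be extremely simple. The sketch below (illustrative threshold and invented numbers) flags an entity whose current connection count deviates sharply from its baseline:&lt;/p&gt;

```python
def is_anomalous(baseline, current, ratio=3.0):
    """Flag when activity jumps well beyond the established pattern."""
    return current > baseline * ratio

baseline_systems = 3   # user normally accesses three systems
current_systems = 20   # today the same user accessed twenty

print(is_anomalous(baseline_systems, current_systems))  # True
```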

&lt;p&gt;Crucially, the system can intervene before a bad outcome occurs.&lt;/p&gt;

&lt;p&gt;When the graph shows something unusual—context being used in unexpected ways, relationships forming that should not exist, or confidence scores dropping below thresholds—the system stops. It does not guess. It does not proceed with low-confidence outputs. It asks for human judgment.&lt;/p&gt;

&lt;p&gt;This is trust through structure.&lt;/p&gt;

&lt;p&gt;The AI remains probabilistic internally, but the graph makes its reasoning transparent and auditable. Outcomes trace back to context. Anomalies trigger intervention. The system learns from results and adapts its priorities. Determinism reappears—not at the level of computation, but at the level of outcomes.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The New Contract&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We are entering a new era. One where systems are intelligent but don’t operate in the world of absolutes. Where correctness is probabilistic but outcomes are still trustworthy. Where determinism means reliable results, not rigid processes.&lt;/p&gt;

&lt;p&gt;This era requires new tools. Context graphs provide the transparency that makes AI auditable. They bridge the gap between probabilistic intelligence and human oversight. They let us detect novel threats by understanding relationships, not just matching signatures.&lt;/p&gt;

&lt;p&gt;The cosmic ray that flips a bit teaches us that perfect determinism was always an illusion. The universe intrudes. Physics intervenes. Complexity defeats enumeration. What matters is not preventing every error, but detecting and correcting them quickly.&lt;/p&gt;

&lt;p&gt;AI systems will make mistakes. That is the only absolute. But with the right architecture—one built on context graphs and outcome-based determinism—those mistakes become visible, understandable, and fixable. That is the new determinism. Not perfection, but accountability. Not control, but transparency.&lt;/p&gt;

&lt;p&gt;Trust is not about eliminating uncertainty. It is about managing it well. The old era tried to eliminate uncertainty through rigid control. It failed when reality was too complex. The new era accepts uncertainty and manages it through visibility, verification, and rapid correction.&lt;/p&gt;

&lt;p&gt;The machines will keep learning. The universe will keep interfering. Our job is not to make systems that never fail. Our job is to make systems that fail gracefully, visibly, and correctably. That is determinism for an uncertain age. That is trust for the era of intelligence.&lt;/p&gt;

&lt;p&gt;The cosmic ray still flips the bit. But now we see it happen. We understand why. And we fix it before the plane goes down.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>opensource</category>
      <category>rag</category>
    </item>
    <item>
      <title>Context Graphs: Reification not Decision Traces</title>
      <dc:creator>Daniel Davis</dc:creator>
      <pubDate>Fri, 09 Jan 2026 02:06:00 +0000</pubDate>
      <link>https://dev.to/jackcolquitt/context-graphs-reification-not-decision-traces-33pc</link>
      <guid>https://dev.to/jackcolquitt/context-graphs-reification-not-decision-traces-33pc</guid>
      <description>&lt;p&gt;There has been no shortage of commentary around the recent wave of &lt;em&gt;“context graph mania.”&lt;/em&gt; From Foundation Capital’s original article from Jaya Gupta and Ashu Garg, “AI’s Trillion Dollar Opportunity: Context graphs” to my &lt;a href="https://x.com/TrustSpooky/status/2006481858289361339" rel="noopener noreferrer"&gt;Context Graph Manifesto&lt;/a&gt; (there’s been so much attention, I even made a YouTube video, &lt;a href="https://www.youtube.com/watch?v=gZjlt5WcWB4" rel="noopener noreferrer"&gt;What is a Context Graph?&lt;/a&gt;), one topic in particular has attracted outsized attention: &lt;strong&gt;decision traces&lt;/strong&gt;. When I first encountered the term, I bristled—and I still do. That said, I understand what people are trying to accomplish when they talk about decision traces. The problem is that &lt;em&gt;decision&lt;/em&gt; is the wrong word.&lt;/p&gt;

&lt;p&gt;What this is really about is &lt;strong&gt;reification&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Reif-what?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Don’t worry—we’ll get there. But first, we need to talk about decisions.&lt;/p&gt;

&lt;p&gt;Computer systems don’t make decisions.&lt;/p&gt;

&lt;p&gt;There, I said it.&lt;/p&gt;

&lt;p&gt;Before you click away in disgust or angrily scroll to the comments to tell me I’m an idiot (and if you read all the way through and still feel compelled to do that, fair enough), let me explain what I mean.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;What Is a Decision?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Decisions are a human construct, and they are surprisingly intangible. It sounds simple, but what &lt;em&gt;is&lt;/em&gt; a decision, really? At what exact moment do you “decide” to do something? An action is easier to observe, but how do we identify the precise tipping point that caused it? Can we honestly say that a single thought or piece of information led directly to an action?&lt;/p&gt;

&lt;p&gt;Stanford researcher Robert Sapolsky argues that humans have no free will at all. In his view, decisions are an illusion—one that obscures the cumulative effect of years of biology, environment, and external forces shaping our thoughts and behaviors.&lt;/p&gt;

&lt;p&gt;From a behavioral economics perspective, decisions are inseparable from &lt;strong&gt;incentives, goals, and starting conditions&lt;/strong&gt;. People often point to simplified game theory examples, like the prisoner’s dilemma, where outcomes appear to hinge on a discrete choice: betray the other prisoner or remain silent. While useful as an introduction, these scenarios are poor representations of the real world.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Limits of Game Theory&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As game theory scales into more complex systems, a dominant strategy tends to emerge: &lt;strong&gt;copycat behavior&lt;/strong&gt;. If you’ve ever been told “don’t overthink it,” you may have unknowingly received advice straight out of game theory. In many scenarios, the optimal move is simply to mirror the first actor.&lt;/p&gt;

&lt;p&gt;We see this repeatedly in emerging industries. The innovator is often not the long-term winner. Why? Because the innovator bears the cost of uncertainty and market education, while the second mover waits, observes, and copies once the heavy lifting is done.&lt;/p&gt;

&lt;p&gt;But copycat dynamics aren’t the only flaw.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Starting Conditions Matter&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Nobel Prize–winning behavioral economist Daniel Kahneman exposed a deeper limitation of classical game theory through &lt;strong&gt;prospect theory&lt;/strong&gt;: it ignores starting conditions.&lt;/p&gt;

&lt;p&gt;Consider the prisoner’s dilemma again, but with context. One prisoner is 18 years old. The other is 80 and terminally ill, with less than a year to live. If both stay silent, they each receive two years in prison. If one betrays the other, the betrayer walks free and the other gets twenty years.&lt;/p&gt;

&lt;p&gt;For the 18-year-old, two years may be painful but manageable; twenty years is catastrophic. For the 80-year-old, both outcomes are unacceptable—they do not want to die in prison. Their incentives and constraints are fundamentally different. Once starting conditions are considered, the “choices” no longer mean the same thing.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Why This Matters for Computer Systems&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If we relied on game theory alone, we might be tempted to say that computer systems make decisions. After all, they evaluate options and select outcomes. But prospect theory makes it clear that what we call a decision is deeply shaped by human goals, incentives, and conditions—none of which computers possess.&lt;/p&gt;

&lt;p&gt;You might argue that we can program a system to favor certain choices. But where does that bias come from? Not the system itself. It originates with the human designer. The system is merely executing a logic structure created by someone else’s goals, incentives, and starting conditions.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Reification and Context Graphs&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Does it really matter if &lt;em&gt;decision&lt;/em&gt; is the wrong word? Maybe not. I’m certainly guilty of saying, “don’t let the truth get in the way of good marketing.” Are we just splitting semantic hairs?&lt;/p&gt;

&lt;p&gt;In a way, yes—but that’s exactly the point of context graphs.&lt;/p&gt;

&lt;p&gt;The need for context graphs in LLM systems arises because language models struggle to disambiguate meaning when information is removed from its original context. Context graphs allow us to retrieve the &lt;em&gt;right&lt;/em&gt; contextual signals to guide the model’s interpretation. This idea is not new. And for knowledge graph enthusiasts, that brings us back to &lt;strong&gt;reification&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;What Is Reification?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;There are many complex definitions of reification. The simplest—and best—I’ve found is this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reification is a technique for representing statements about statements.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At first glance, that sounds unhelpful. I had the same reaction. But stick with me.&lt;/p&gt;

&lt;p&gt;Consider a simple fact:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Fred -&amp;gt; hasLegs -&amp;gt; 4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now suppose we want to capture that &lt;em&gt;Mark told me&lt;/em&gt; that Fred has four legs. We could create an entirely new statement, but ideally we want to relate that assertion back to the original fact in the context graph.&lt;/p&gt;

&lt;p&gt;That’s reification.&lt;/p&gt;

&lt;p&gt;One approach is to introduce a new node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Statement1] -&amp;gt; hasSubject -&amp;gt; Fred
[Statement1] -&amp;gt; hasPredicate -&amp;gt; hasLegs
[Statement1] -&amp;gt; hasObject -&amp;gt; 4
[Statement1] -&amp;gt; assertedBy -&amp;gt; Mark
[Statement1] -&amp;gt; assertedDate -&amp;gt; "2026-01-08"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This works—but there’s a problem. The reified statement isn’t directly connected to the original statement. Querying becomes cumbersome, requiring reconstruction of the original fact from its components.&lt;/p&gt;
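
&lt;p&gt;The cost shows up as soon as you query. In the sketch below (plain Python standing in for a graph query language), recovering the original fact means reassembling it from separate properties of the statement node:&lt;/p&gt;

```python
# Classic RDF reification: the statement is exploded into components.
statements = [
    {"id": "Statement1", "subject": "Fred", "predicate": "hasLegs",
     "object": 4, "assertedBy": "Mark", "assertedDate": "2026-01-08"},
]

# To answer "how many legs does Fred have, and who said so?" we must
# rebuild the triple from its parts instead of reading it directly.
facts = [(s["subject"], s["predicate"], s["object"], s["assertedBy"])
         for s in statements
         if s["subject"] == "Fred" and s["predicate"] == "hasLegs"]
print(facts)  # [('Fred', 'hasLegs', 4, 'Mark')]
```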

&lt;p&gt;This is where property graph enthusiasts start smiling.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Property Graphs and RDF 1.2&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;One of the key differences between RDF graphs and property graphs is that property graphs allow &lt;strong&gt;properties on edges&lt;/strong&gt;. Using a property graph, the same information can be represented as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Fred -&amp;gt; hasLegs -&amp;gt; 4
          └&amp;gt; assertedBy: Mark
          └&amp;gt; assertedDate: "2026-01-08"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is cleaner, more intuitive, and far easier to query.&lt;/p&gt;

&lt;p&gt;On December 5, 2025, the W3C released a working draft of &lt;strong&gt;RDF 1.2&lt;/strong&gt;, which introduces timely improvements around reification. One major addition is the ability to treat a relationship itself as an object:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;&amp;lt; Fred -&amp;gt; hasLegs -&amp;gt; 4 &amp;gt;&amp;gt; assertedBy -&amp;gt; Mark
&amp;lt;&amp;lt; Fred -&amp;gt; hasLegs -&amp;gt; 4 &amp;gt;&amp;gt; assertedDate -&amp;gt; 2026-01-08
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach closely resembles property graphs and quad-based models (S, P, O, G), where &lt;em&gt;G&lt;/em&gt; acts as an identifier or context. As discussed in the &lt;em&gt;Context Graph Manifesto&lt;/em&gt;, there is no single correct approach. With RDF 1.2, the choice between RDF and property graphs increasingly comes down to preference and tooling rather than capability.&lt;/p&gt;
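
&lt;p&gt;The quad idea can be sketched directly: each asserted triple gets an identifier G, and statements about the statement attach to that identifier, mirroring what RDF 1.2 triple terms enable. This is an illustrative data model, not the API of any particular triple store:&lt;/p&gt;

```python
# (S, P, O, G): the fourth element names the statement itself.
quads = [("Fred", "hasLegs", 4, "stmt-1")]

# Metadata hangs off G, so the original fact stays intact and queryable.
about_statements = {
    "stmt-1": {"assertedBy": "Mark", "assertedDate": "2026-01-08"},
}

s, p, o, g = quads[0]
print(s, p, o, "assertedBy:", about_statements[g]["assertedBy"])
```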




&lt;h2&gt;
  
  
  &lt;strong&gt;Reification as a System of Record&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;While “decision trace” may be a misnomer, the term &lt;strong&gt;system of record&lt;/strong&gt; gets much closer to what context graphs are actually enabling: auditable records of system behavior, data flows, and outputs.&lt;/p&gt;

&lt;p&gt;The reason this is often framed in terms of “decisions” becomes clearer when you consider governance.&lt;/p&gt;

&lt;p&gt;In mature organizations, decision-making authority is formalized through governance structures. Certain roles are empowered to make certain decisions, with escalation paths for higher-impact outcomes. Governance often evokes eye rolls—it’s associated with bureaucracy, slowness, and red tape.&lt;/p&gt;

&lt;p&gt;But governance exists for a reason.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Real Reason Records Matter&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you’ve worked as a senior executive, compliance officer, or corporate lawyer, you already know the answer: &lt;strong&gt;liability&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;More charitably, governance demonstrates that an organization has fulfilled its &lt;strong&gt;duty of care&lt;/strong&gt;—the legal obligation to act with reasonable care to avoid foreseeable harm. And harm doesn’t have to be physical. Violating an SLA or breaching a contract qualifies. When AWS US-East-1 goes down for hours, lawsuits follow.&lt;/p&gt;

&lt;p&gt;Enterprises need systems of record not just to assign internal blame (though that happens), but to defend themselves when users claim harm. Lacking records can itself be interpreted as failing duty of care.&lt;/p&gt;

&lt;p&gt;Imagine a litigator arguing: &lt;em&gt;“The defendant doesn’t even know how their system works. They have no records showing reasonable precautions. How could they possibly know they weren’t at fault?”&lt;/em&gt; Most juries would nod along, and the defense team would soon be encouraging their client to settle.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;From Black Boxes to Auditability&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Today, AI systems are largely black boxes. Reification gives us a path toward transparency.&lt;/p&gt;

&lt;p&gt;In RAG systems, provenance can be attached directly to retrieved context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;&amp;lt;:dataset-common-crawl-2024 :containsData :source-wikipedia-en&amp;gt;&amp;gt;
  :includeDate "2024-02-01T00:00:00Z"^^xsd:dateTime ;
  :recordCount 6500000 ;
  :dataQualityScore 0.94 ;
  :licenseType "CC-BY-SA-3.0" ;
  :preprocessingPipeline :pipeline-cleaner-v3 ;
  :duplicatesRemoved 125000 ;
  :piiFiltered true ;
  :approvedBy :data-governance-team ;
  :auditTrail :audit-log-20240201 .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inference events can be captured the same way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;:model-gpt4-mini :generated :response-abc123&amp;gt;&amp;gt;
  :timestamp "2025-01-07T14:32:11Z"^^xsd:dateTime ;
  :inputTokens 450 ;
  :outputTokens 280 ;
  :latencyMs 1250 ;
  :temperature 0.7 ;
  :topP 0.9 ;
  :modelVersion "gpt4-mini-v1.2.3" ;
  :requestId "req-xyz-789" ;
  :userId :user-john-doe ;
  :sessionId "session-2025-01-07-14" ;
  :containsSensitiveInfo false ;
  :moderationScore 0.02 .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even policy enforcement becomes part of the graph:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;&amp;lt;:deployment-prod-gpt4-mini :hasConstraint :policy-no-medical-advice&amp;gt;&amp;gt;
  :policyEffectiveDate "2024-07-01T00:00:00Z"^^xsd:dateTime ;
  :policyVersion "v2.1" ;
  :enforcementLevel "strict" ;
  :enforcedBy :guardrail-system-v3 ;
  :violationCount 0 ;
  :lastReviewDate "2024-12-01T00:00:00Z"^^xsd:dateTime ;
  :nextReviewDate "2025-06-01T00:00:00Z"^^xsd:dateTime ;
  :approvedBy :legal-compliance-team .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Reification allows us to bind system behavior directly to the data and context that produced it. This creates an auditable trail and opens the door to a more precise—and less anthropomorphic—notion of AI “memory.”&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Closing the Loop on Memory&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;“Memory” is nearly as problematic a term as “decision.” Human memory is deeply flawed, and modeling AI systems after it has always felt misguided.&lt;/p&gt;

&lt;p&gt;Instead of asking how to give AI memory, we should ask what we’re trying to accomplish. Do we want to store every token in a conversation? What happens when histories exceed context windows by orders of magnitude?&lt;/p&gt;

&lt;p&gt;Context graphs naturally evolve into layered systems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Grounding layers built from curated knowledge
&lt;/li&gt;
&lt;li&gt;A system-of-record layer capturing system behavior
&lt;/li&gt;
&lt;li&gt;Synthetic grounding layers derived from model outputs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Separating these layers is essential. It allows us to measure &lt;strong&gt;context drift&lt;/strong&gt;—how far synthetic grounding diverges from original ground truth. In some cases, evolution is expected. In others, deviation is a failure. The system-of-record layer is what allows us to observe, measure, and correct for this drift.&lt;/p&gt;
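
&lt;p&gt;Context drift can be quantified with something as blunt as set overlap between layers. A minimal sketch using Jaccard distance over triples, with invented layer contents:&lt;/p&gt;

```python
def jaccard_drift(ground_truth, synthetic):
    """0.0 means the layers agree exactly; 1.0 means total divergence."""
    union = ground_truth.union(synthetic)
    if not union:
        return 0.0
    overlap = ground_truth.intersection(synthetic)
    return 1.0 - len(overlap) / len(union)

grounding = {("Fred", "hasLegs", 4), ("Fred", "isA", "Dog")}
synthetic = {("Fred", "hasLegs", 4), ("Fred", "isA", "Cat")}

print(jaccard_drift(grounding, synthetic))  # drift from one disputed triple
```

A real system would weight triples by confidence and recency, but the principle is the same: the system-of-record layer gives you two concrete sets to compare.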




&lt;h2&gt;
  
  
  &lt;strong&gt;Putting It Into Practice&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;At &lt;a href="https://github.com/trustgraph-ai/trustgraph" rel="noopener noreferrer"&gt;TrustGraph&lt;/a&gt;, our initial focus has been on building the tooling and infrastructure for production-grade grounding layers: context graphs that can be deployed anywhere, with any model, under full user control.&lt;/p&gt;

&lt;p&gt;Now that &lt;a href="https://github.com/trustgraph-ai/trustgraph" rel="noopener noreferrer"&gt;TrustGraph&lt;/a&gt; is in production with real users, we see the next phase clearly. The foundation is in place. Reification transforms context graphs from static knowledge stores into auditable, learning systems.&lt;/p&gt;

&lt;p&gt;TrustGraph 2.0 is just coming into view on the horizon—and as always, it will be free and open source.&lt;/p&gt;

&lt;h2&gt;
  
  
  For more information:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Documentation: &lt;a href="https://docs.trustgraph.ai" rel="noopener noreferrer"&gt;https://docs.trustgraph.ai&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GitHub Repository: &lt;a href="https://github.com/trustgraph-ai/trustgraph" rel="noopener noreferrer"&gt;https://github.com/trustgraph-ai/trustgraph&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Discord Community: &lt;a href="https://discord.gg/sQMwkRz5GX" rel="noopener noreferrer"&gt;https://discord.gg/sQMwkRz5GX&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Website: &lt;a href="https://trustgraph.ai" rel="noopener noreferrer"&gt;https://trustgraph.ai&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>opensource</category>
      <category>rag</category>
    </item>
    <item>
      <title>The Context Graph Manifesto</title>
      <dc:creator>Daniel Davis</dc:creator>
      <pubDate>Wed, 31 Dec 2025 21:45:00 +0000</pubDate>
      <link>https://dev.to/jackcolquitt/the-context-graph-manifesto-2a8m</link>
      <guid>https://dev.to/jackcolquitt/the-context-graph-manifesto-2a8m</guid>
      <description>&lt;p&gt;When &lt;a href="https://x.com/cybermaggedon" rel="noopener noreferrer"&gt;Mark Adams&lt;/a&gt; and I (&lt;a href="https://x.com/TrustSpooky" rel="noopener noreferrer"&gt;Daniel Davis&lt;/a&gt;) began working on what has become &lt;a href="https://trustgraph.ai" rel="noopener noreferrer"&gt;TrustGraph&lt;/a&gt; over 2 years ago, we knew that graph structures would be instrumental in realizing the potential of AI technology, specifically LLMs. &lt;/p&gt;

&lt;p&gt;I’ve never been particularly fond of the term RAG. In fact, we’ve always shied away from labeling TrustGraph a “RAG platform” because it does so much more than that. And that’s always been the point - to realize the potential of LLMs, you need more than just RAG.&lt;/p&gt;

&lt;p&gt;And no, I’m not about to say all you need are graphs. I’ve never been fond of the term GraphRAG either (especially after Microsoft co-opted the term), because again, you need more than just graph structures. &lt;/p&gt;

&lt;p&gt;Context Engineering has always felt like a better fit to me, but it just hasn’t seemed to gain traction, until perhaps now? My attention was grabbed by my friend &lt;a href="https://x.com/KirkMarple" rel="noopener noreferrer"&gt;Kirk Marple&lt;/a&gt;’s recent post, &lt;a href="https://x.com/KirkMarple/status/2003944353342149021?" rel="noopener noreferrer"&gt;The Context Layer AI Agents Actually Need&lt;/a&gt;, which was written in response to a post by &lt;a href="https://x.com/JayaGup10" rel="noopener noreferrer"&gt;Jaya Gupta&lt;/a&gt; and &lt;a href="https://x.com/ashugarg" rel="noopener noreferrer"&gt;Ashu Garg&lt;/a&gt; of &lt;a href="https://foundationcapital.com/" rel="noopener noreferrer"&gt;Foundation Capital&lt;/a&gt; that you might have read - &lt;a href="https://foundationcapital.com/context-graphs-ais-trillion-dollar-opportunity/" rel="noopener noreferrer"&gt;AI’s trillion-dollar opportunity: Context graphs&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;So... What is a Context Graph?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Considering that we've been working on this very problem for a bit and TrustGraph has been &lt;a href="https://github.com/trustgraph-ai/trustgraph" rel="noopener noreferrer"&gt;open source&lt;/a&gt; for nearly eighteen months, I feel qualified to offer a definition. Put simply, a context graph is a triples-based representation of data optimized for use with AI. That seems simple enough, right? That's what I thought when I started on the knowledge graph journey years ago as well...&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Ambiguity Problem: What is a Knowledge Graph?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Good question. No, seriously—sit around a table of knowledge graph experts, ask that question, and just wait for the arguments to begin. And no, I'm very serious about the arguments part. The knowledge graph community is extremely evangelical about what the "right way" is to do things. Imagine discussing politics on Twitter except with obscure references to information theory and linguistics.&lt;/p&gt;

&lt;p&gt;Perhaps the biggest promoter of the term has been &lt;a href="https://x.com/prathle" rel="noopener noreferrer"&gt;Philip Rathle&lt;/a&gt; from &lt;a href="https://neo4j.com" rel="noopener noreferrer"&gt;Neo4j&lt;/a&gt;, which offers the best-known graph database system for storing knowledge graphs. But here's where the confusion starts: Is a knowledge graph something you store, or is it how you store something? It's not just a knowledge graph—it's also a graph database. That distinction matters, but the boundaries are blurry.&lt;/p&gt;

&lt;p&gt;Despite Philip's best efforts, the term "knowledge graph" remains ambiguous. Neo4j's messaging emphasizes being a "graph database." The word "knowledge" itself is slippery. People have a good grasp of information and data, but what is knowledge? Is it the result of enriching data? Is knowledge when you can take action on data? These questions don't have clean answers, which is part of why the conversations get so heated.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;It's Only a Model&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Ultimately, a knowledge graph is a data model—how you organize data. There are many ways to do this at many levels of scale. I could derail this discussion by going the Lakehouse route and talking about Apache Iceberg, dbt, and file/object stores. That's data too, right? Sure, when you're talking about Petabytes (or more commonly Exabytes now). And perhaps that's the most confusing part of all: the same terms get applied in different ways depending on the context.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Data in 3 Parts: The Triple&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;It's all about the triple:&lt;/p&gt;

&lt;p&gt;Subject → Predicate → Object&lt;/p&gt;

&lt;p&gt;That's it. When people talk about knowledge graphs, they're generally talking about a collection of triples. A triple represents a relationship between two data points. As late as the 1990s, it was still common to use “verbs” instead of “predicates”. For instance, if Alice is the mother of Bob, you might express it with verbs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Alice → has → child
&lt;/li&gt;
&lt;li&gt;Bob → is → child&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Or with a predicate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Alice → isMotherOf → Bob&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are many ways to model this relationship, but predicates allow you to model more information in a single triple. In the "verbs" example, Alice and Bob aren't directly connected and would require additional triples to robustly connect them. But again, there are many ways to do this—hence the evangelical arguments.&lt;/p&gt;
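&lt;p&gt;The two modeling styles above can be sketched directly, treating a graph as a set of plain (subject, predicate, object) tuples. The &lt;code&gt;connected&lt;/code&gt; helper is an illustrative name, not a standard API:&lt;/p&gt;

```python
# Triples as plain (subject, predicate, object) tuples; a graph is a set of them.

verb_style = {
    ("Alice", "has", "child"),
    ("Bob", "is", "child"),
}

predicate_style = {
    ("Alice", "isMotherOf", "Bob"),
}

def connected(graph, a, b):
    """True if a and b appear together in any single triple."""
    return any(a in (s, o) and b in (s, o) for (s, p, o) in graph)

print(connected(verb_style, "Alice", "Bob"))       # False: no triple links them directly
print(connected(predicate_style, "Alice", "Bob"))  # True: one triple carries the whole relationship
```

&lt;p&gt;This is exactly the trade-off described above: the verb style needs extra triples before Alice and Bob are connected at all, while the richer predicate packs the relationship into one.&lt;/p&gt;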

&lt;h2&gt;
  
  
  &lt;strong&gt;From Verbs to Predicates&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As I often say, these concepts aren’t new. The term “predicate” comes from predicate logic, which has its origins in the 19th century. The use of predicates in the knowledge graph sense appeared in the 1960s with the rise of “semantic networks”. The rise of the internet and the dream of the semantic web then took predicates “mainstream”.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Semantic Web and RDF&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As the internet grew in scale during the 1990s and 2000s, technologists began asking: how can autonomous systems exchange information with each other? Sound familiar? That question is foundational to modern interoperability challenges like &lt;a href="https://x.com/MCP_Community" rel="noopener noreferrer"&gt;MCP&lt;/a&gt; and A2A—except those approaches treat it as if it's new, when in fact the semantic web community was thinking about this in the 1990s. I’ve personally worked with interoperable networking “protocols” in the aerospace industry that have their origins in the 1960s and are still in use today. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.w3.org/TR/rdf-schema/" rel="noopener noreferrer"&gt;RDF&lt;/a&gt; (the Resource Description Framework) adds structure to the triple concept by introducing classes, types, ranges, and strict syntax rules. The semantic web aimed to make knowledge representation mainstream by providing a standard framework that anyone could use. What many find odd about RDF is its use of URIs (Uniform Resource Identifiers)—often in the form of URLs. Why use web addresses as identifiers? The vision was interoperability: having globally unique identifiers ensured that two systems using the same URI would be referring to the same entity. Do the URLs themselves matter? No. That's a fair source of confusion.&lt;/p&gt;

&lt;p&gt;RDF supports multiple serialization formats. RDF/XML follows XML structure but is an absolute eyesore for humans. N-Triples is just a list of triples with required URIs for subjects and predicates—simpler, but still painful to read. For those who like JSON, there's JSON-LD. The most human-readable format is Turtle, which is elegant but syntactically sensitive with its indentation and whitespace requirements.&lt;/p&gt;
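&lt;p&gt;As a rough illustration of the simplest of those formats, here is our family triple emitted as an N-Triples line. The example.org URIs are made-up placeholders, not a real vocabulary:&lt;/p&gt;

```python
# Emit one triple as an N-Triples line: absolute URIs in angle brackets,
# one triple per line, terminated by " ." -- the least readable but most
# mechanical RDF serialization. The example.org URIs are placeholders.

def to_ntriples(subject: str, predicate: str, obj: str) -> str:
    return f"<{subject}> <{predicate}> <{obj}> ."

line = to_ntriples(
    "http://example.org/Alice",
    "http://example.org/isMotherOf",
    "http://example.org/Bob",
)
print(line)
```

&lt;p&gt;Turtle expresses the same triple far more compactly once prefixes are declared, which is why humans prefer it despite the stricter syntax.&lt;/p&gt;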

&lt;p&gt;RDF is incredibly mature and robust. However, learning it independently is nearly impossible — the very definition of tribal knowledge. Without Mark Adams, co-founder of TrustGraph, writing &lt;a href="https://docs.trustgraph.ai/guides/knowledge-graphs" rel="noopener noreferrer"&gt;RDF guides&lt;/a&gt; specifically for me, I would never have figured it out. Accurate RDF tutorials are hard to find, and many online articles are either wrong or skewed by singular perspectives.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Understanding the RDF Stack&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When people talk about "RDF," they're usually referring to more than just the basic RDF standard. RDF itself defines triples, but it's layered with complementary standards. RDFS (RDF Schema) adds types, properties, and structural constraints on top. &lt;a href="https://www.w3.org/TR/owl-ref/" rel="noopener noreferrer"&gt;OWL&lt;/a&gt; (Web Ontology Language) is an extension of RDFS that adds rich ontology capabilities. In practice, people don't say "I'm using RDF and RDFS"—they think of this as a single ecosystem. When we say "RDF," we typically mean this entire layered collection of technologies, each building on the previous one.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Property Graphs&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before I start this discussion, please fasten your seatbelt and take a deep breath. No malice is intended—I'm just trying to help explain some very confusing concepts. Yes, I'm about to dive into some of those evangelical arguments.&lt;/p&gt;

&lt;p&gt;While a triple is a subject, predicate, and object (S, P, O), other terminology is commonly used in graph work: nodes, edges, and arcs. A node is a subject. When two nodes are connected, the relationship between them is an edge (sometimes called an arc, more commonly in European literature). For the object in a triple, it gets complicated: objects can be properties (literal values linked to a node) or relationships (links to other nodes).&lt;/p&gt;

&lt;p&gt;Here's where property graphs and RDF fundamentally diverge: Property graphs strictly differentiate between properties (connections to literal values) and relationships (connections to nodes). It's a clean distinction. In RDF, you could use OWL to specify how things relate and RDFS range declarations to define what types of objects are permitted, providing much more flexibility than property graphs allow. RDF is more powerful, but property graphs are easier to understand.&lt;/p&gt;

&lt;p&gt;Another key difference: property graphs allow properties on edges. You can model something similar in RDF, but edge properties are a simple way of doing something that becomes quite complex in RDF. This is a genuine advantage of the property graph approach.&lt;/p&gt;
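&lt;p&gt;The property-graph distinction between node properties, relationships, and properties on edges can be sketched with plain dictionaries. These are toy structures for illustration, not any particular database's internal model:&lt;/p&gt;

```python
# A toy property graph: nodes carry literal properties, and edges are
# first-class records that can carry properties of their own
# (e.g. "since") -- the feature that is awkward to model in plain RDF.

nodes = {
    "alice": {"label": "Person", "name": "Alice"},
    "bob":   {"label": "Person", "name": "Bob"},
}

edges = [
    {"from": "alice", "to": "bob", "type": "MOTHER_OF", "since": 1990},
]

def neighbors(node_id, edge_type):
    """IDs of nodes reachable from node_id over edges of the given type."""
    return [e["to"] for e in edges if e["from"] == node_id and e["type"] == edge_type]

print(neighbors("alice", "MOTHER_OF"))  # Bob is reachable over MOTHER_OF
print(edges[0]["since"])                # a property stored on the edge itself
```

&lt;p&gt;In RDF, attaching &lt;code&gt;since&lt;/code&gt; to the relationship would require reifying the edge into its own resource with several additional triples.&lt;/p&gt;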

&lt;p&gt;As for standards: RDF developed as a modular, layered set of standards—competing ideas were tested, and the best ones emerged through real-world usage. Property graphs lack a formal standard, though &lt;a href="https://opencypher.org/" rel="noopener noreferrer"&gt;Cypher&lt;/a&gt; (from Neo4j) became a de facto standard through widespread adoption. Other property graphs implemented it with variants. Very recently, this real-world usage influenced the development of an ISO standard, &lt;a href="https://www.iso.org/standard/76120.html" rel="noopener noreferrer"&gt;GQL&lt;/a&gt;. Unlike the modular RDF ecosystem, Cypher and GQL function more like single standards without the layered development that has been so productive in RDF's evolution.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Ontologies&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;There's been considerable talk about ontologies recently. We launched &lt;a href="https://www.reddit.com/r/KnowledgeGraph/comments/1p54i8e/ontologydriven_ai/?utm_source=share&amp;amp;utm_medium=web3x&amp;amp;utm_name=web3xcss&amp;amp;utm_term=1&amp;amp;utm_content=share_button" rel="noopener noreferrer"&gt;custom ontology features in TrustGraph&lt;/a&gt;, and some have even used the term "OntologyRAG." But what exactly is an ontology?&lt;/p&gt;

&lt;p&gt;To understand ontologies, it helps to differentiate four related but distinct concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Vocabularies: Human-readable definitions of words
&lt;/li&gt;
&lt;li&gt;Taxonomies: Human-readable hierarchies and definitions for domain-specific terms
&lt;/li&gt;
&lt;li&gt;Schemas: Machine-readable representations of data for storage and retrieval
&lt;/li&gt;
&lt;li&gt;Ontologies: Machine-readable definitions of terms, hierarchies of those terms, and their relationships&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;OWL (Web Ontology Language) is one of the most common languages for defining ontologies: an extension of RDF designed for building structured taxonomies. &lt;a href="https://www.w3.org/TR/skos-reference/" rel="noopener noreferrer"&gt;SKOS&lt;/a&gt; (Simple Knowledge Organization System) is another interesting vocabulary that focuses more on concepts than OWL does but never achieved widespread adoption. &lt;a href="http://Schema.org" rel="noopener noreferrer"&gt;Schema.org&lt;/a&gt; is perhaps the best-known ontology: a direct extension of the semantic web that attempts to create a granular taxonomy for all types of information featured on websites.&lt;/p&gt;

&lt;p&gt;Ontologies are fundamentally a semantic web concept, born from the vision of interoperable information exchange. This doesn't mean they're only useful with triplestores—you can use ontologies with property graphs as well. The distinction is about the origin of the concepts and their primary use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;There's No Single "Right Way"&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;There's no "right way" to do any of this. You can use RDF or property graphs to store the same information. This point is crucial: no matter what any "expert" claims, you can store the same information as a triplestore, property graph, or even as joined tables. The choice is about what fits your use case, your team's expertise, and your operational requirements.&lt;/p&gt;

&lt;p&gt;In fact, the default graph store in TrustGraph is Apache Cassandra. I remember when I first told Philip Rathle from Neo4j that our default graph store is Cassandra—I genuinely think he thought I was joking. He was even more skeptical when I mentioned that one of our users has over a billion nodes and edges loaded in Cassandra with TrustGraph (don’t worry Philip, I know Neo4j will always be #1 in your heart). What's remarkable: this user could have used Neo4j instead. TrustGraph builds graphs as triplestores with RDF in Cassandra or translates them for storage in Neo4j. Does it matter that one is a triplestore and the other is a property graph? The agents we build don't seem to care.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Machine Readable vs. Human Readable&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;While we marvel at the generative capabilities of LLMs, perhaps the biggest disruption is their ability to work with both human-readable and machine-readable data. An LLM can understand text, images, software code, complex schemas, and ontologies. Not only can it understand them, but it can output responses in combinations of all these formats.&lt;/p&gt;

&lt;p&gt;Information systems are no longer bound by building custom retrieval algorithms for specific schemas and ontologies. An LLM can generate both the structure and the retrieval logic dynamically. This raises an interesting question: are there reasons to store machine-readable data with human-readable data?&lt;/p&gt;

&lt;p&gt;This is where our experimental work became revealing. We tested various context structures—CSVs, symbol-based representations like "-&amp;gt;", bulleted lists, numbered lists. Surely, with more concise structures, LLM outputs would improve, right? Wrong. Providing context in structured formats like Cypher or RDF improved responses despite the token overhead. Why? Because the structure itself carries information. When an LLM encounters Cypher or RDF (which it can read fluently), the structure encodes information about what is a node, what is a property, what is a relationship. There's inherent meaning in the syntax itself.&lt;/p&gt;
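&lt;p&gt;To make that comparison concrete, here is one way the same facts might be rendered in two of the context formats described above. The exact prompt formats we tested varied; these renderings are illustrative:&lt;/p&gt;

```python
# Render the same triples two ways for an LLM prompt: a terse arrow format
# versus a Turtle-like structured format. The structured version costs more
# tokens, but the syntax itself signals what is an entity vs. a relationship.

facts = [
    ("Alice", "isMotherOf", "Bob"),
    ("Bob", "worksFor", "Acme"),
]

def arrow_format(triples):
    return "\n".join(f"{s} -> {p} -> {o}" for s, p, o in triples)

def turtle_like(triples):
    # Prefixed-name style resembling Turtle; ":" marks a default namespace.
    return "\n".join(f":{s} :{p} :{o} ." for s, p, o in triples)

print(arrow_format(facts))
print()
print(turtle_like(facts))
```

&lt;p&gt;In our testing, formats like the second consistently produced better responses than the terser first one, despite the extra tokens.&lt;/p&gt;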

&lt;h2&gt;
  
  
  &lt;strong&gt;Decades of Mature Graph Algorithms&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before we discard decades of knowledge, we should acknowledge the mature graph retrieval algorithms waiting to be leveraged: graph traversal depth optimization, clustering analysis, density calculations, outlier detection, and much more. These techniques establish relationships in data from the graph structure itself. Should we be surprised that LLMs already seem to be doing this intuitively?&lt;/p&gt;
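&lt;p&gt;One of those classic techniques, depth-limited traversal from a seed entity, takes only a few lines over a triple set. This is a sketch of the general algorithm, not TrustGraph's retrieval code:&lt;/p&gt;

```python
from collections import deque

# Depth-limited breadth-first traversal over a set of triples: collect every
# triple reachable from a seed entity within max_depth hops. Tuning max_depth
# is the "graph traversal depth optimization" knob for retrieval.

triples = {
    ("Alice", "isMotherOf", "Bob"),
    ("Bob", "worksFor", "Acme"),
    ("Acme", "locatedIn", "Springfield"),
}

def subgraph(seed, max_depth):
    seen_nodes, found = {seed}, set()
    frontier = deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for s, p, o in triples:
            if node in (s, o):
                found.add((s, p, o))
                for nxt in (s, o):
                    if nxt not in seen_nodes:
                        seen_nodes.add(nxt)
                        frontier.append((nxt, depth + 1))
    return found

print(subgraph("Alice", 1))       # only Alice's direct edge
print(len(subgraph("Alice", 2)))  # one more hop also pulls in Bob's employer
```

&lt;p&gt;Clustering, density, and outlier techniques build on exactly this kind of traversal, which is why it would be wasteful to rediscover them from scratch.&lt;/p&gt;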

&lt;h2&gt;
  
  
  &lt;strong&gt;The Frontier: Temporal Context&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As I discussed earlier this year on the &lt;a href="https://www.youtube.com/@howaiisbuilt" rel="noopener noreferrer"&gt;How AI Is Built&lt;/a&gt; podcast (&lt;a href="https://www.youtube.com/watch?v=VpFVAE3L1nk" rel="noopener noreferrer"&gt;Temporal RAG: Embracing Time for Smarter, Reliable Knowledge Graphs&lt;/a&gt; from Feb 13, 2025) with &lt;a href="https://x.com/nicolaygerold" rel="noopener noreferrer"&gt;Nicolay Gerold&lt;/a&gt;, temporal relationships are the next frontier for understanding data. While uncomfortable to confront, the concept of "truth" is often murky. One way of establishing ground truth is to find an observation that remains constant: that data point always was and always will be. But can you establish truth from a single observation?&lt;/p&gt;

&lt;p&gt;When we begin to observe how data changes over time, we can assess whether information is "fresh" or "stale." Our instinct is to assume newer data is more trustworthy. Yet that's not always the case. Consider a contemporary example: UFO/UAP research.&lt;/p&gt;

&lt;p&gt;When I was growing up in the 1980s, the subject of UFOs and aliens was taboo. Even with shows like &lt;em&gt;The X-Files&lt;/em&gt; on prime time in the mid 1990s, being a fan guaranteed being labeled "the weird one." Today, we have documentaries like &lt;a href="https://en.wikipedia.org/wiki/The_Age_of_Disclosure" rel="noopener noreferrer"&gt;&lt;em&gt;The Age of Disclosure&lt;/em&gt;&lt;/a&gt; where current government officials openly discuss the topic. The culture has shifted from dismissing the subject as fringe to openly considering whether the government will eventually acknowledge it.&lt;/p&gt;

&lt;p&gt;But here's the puzzle: the data hasn't actually changed much. The observations documented decades ago in painstakingly researched books are largely the same observations being discussed today. Does repeated observation over 50+ years establish fact? When asking an LLM to analyze this information, should we prioritize 50-year-old data that still appears "fresh" and corroborated, or newer data that lacks observational confirmation? Freshness and recency are not the same as accuracy and precision. Just because data is old and obscure doesn't mean it's no longer valid.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;A New Paradigm for Interoperability&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;MCP and A2A set out to achieve a noble goal—interoperability. History confirms this has never been simple. My personal experience with interoperable systems has taught me that no matter how noble the goal, designing interoperable standards that can evolve and not balloon in complexity to the point of being a burden is a nearly impossible balance. Just look at the semantic web's unrealized promise. Yet LLMs provide a new opportunity: they enable us to work with dynamic ontologies as never before.&lt;/p&gt;

&lt;p&gt;Previously, ontologies needed to be static so that retrieval algorithms could be built to understand them. LLMs can "read" and understand ontologies dynamically—as we've demonstrated with our recent ontology capabilities in TrustGraph. Perhaps LLMs will finally enable the vision of the semantic web, but with slightly different data structures and more flexible implementation patterns.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Progression: From RAG to Context Graphs&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The AI journey we're on follows a clear progression:&lt;/p&gt;

&lt;p&gt;1. LLMs can answer questions from their training data&lt;/p&gt;

&lt;p&gt;2. RAG appears: Realizing that LLM training data alone is insufficient, we stuff prompts with chunks of text retrieved via semantic similarity search over vector embeddings&lt;/p&gt;

&lt;p&gt;3. GraphRAG emerges: Moving beyond text chunks and semantic similarity search alone, we use flexible knowledge representations that capture rich relationships between entities and concepts, and that can be navigated and refined for better control&lt;/p&gt;

&lt;p&gt;4. Ontology RAG: We take control over what gets loaded into graphs, using structured ontologies to annotate relationships with finer granularity, improving precision and recall at retrieval time&lt;/p&gt;

&lt;p&gt;This progression is revealing. Step 3 (GraphRAG) makes minimal use of existing graph algorithms. Step 4 pulls ontologies from the toolbox. We've barely scratched the surface of what graph tooling can do.&lt;/p&gt;

&lt;p&gt;This is where we are today. What comes next?&lt;/p&gt;

&lt;p&gt;5. Information retrieval analytics tuned to different data types: We develop specialized retrieval strategies for temporal data, accuracy-sensitive data, anomalies, clustering, and other domain-specific information retrieval challenges&lt;/p&gt;

&lt;p&gt;6. Self-describing information stores: Information systems that carry metadata about their own structure, allowing retrieval algorithms to adapt automatically to the information they encounter&lt;/p&gt;

&lt;p&gt;7. Dynamic information retrieval strategies: LLMs can derive complete information retrieval strategies for information types they've never seen before, generalizing from learned patterns&lt;/p&gt;

&lt;p&gt;8. Closing the loop to enable autonomous learning: The system reingests its own outputs, annotating the generated data with metadata that adjusts how that new information is retrieved relative to “old” data, and can even adjust the “old” structures themselves. This is the holy grail: a truly autonomous system that can learn&lt;/p&gt;

&lt;p&gt;Context graphs represent the visions that so many information theorists dedicated their lives to pursuing. The opportunity is enormous.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Age of Building&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As Kirk and I discussed in our recent &lt;a href="https://youtu.be/W6m_BzaedUc" rel="noopener noreferrer"&gt;2025 State of RAG&lt;/a&gt; podcast, we both believe the promised innovations are coming, just not as quickly as the hype train predicts. LLMs are an example of both forces in action: they have achieved high levels of maturity incredibly quickly, but that very speed has left a void in how to realize their potential. Enter Context Graphs.&lt;/p&gt;

&lt;p&gt;If we look to AI leaders like &lt;a href="https://x.com/ilyasut" rel="noopener noreferrer"&gt;Ilya Sutskever&lt;/a&gt; and &lt;a href="https://x.com/ylecun" rel="noopener noreferrer"&gt;Yann LeCun&lt;/a&gt;, both have moved on from LLMs to chase the “next big thing” in AI with ventures that are very much designed as long-term research organizations. When will that next big thing arrive? It will likely require quantum computing to hit scale, which remains a gigantic question mark. Most current quantum computing still blends quantum approaches with varying amounts of classical computing (the way we’ve been doing computing since the invention of the transistor). &lt;/p&gt;

&lt;p&gt;Or perhaps it won’t be “one thing” that is the enabler. It rarely ever is. LLMs skyrocketed to maturity on the back of availability of data, rapidly increasing compute power, and a massive influx of capital. Will context graphs be a critical enabler to the next big thing in AI? We think so.&lt;/p&gt;

&lt;h2&gt;
  
  
  For more information:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Documentation: &lt;a href="https://docs.trustgraph.ai" rel="noopener noreferrer"&gt;https://docs.trustgraph.ai&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GitHub Repository: &lt;a href="https://github.com/trustgraph-ai/trustgraph" rel="noopener noreferrer"&gt;https://github.com/trustgraph-ai/trustgraph&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Discord Community: &lt;a href="https://discord.gg/sQMwkRz5GX" rel="noopener noreferrer"&gt;https://discord.gg/sQMwkRz5GX&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Website: &lt;a href="https://trustgraph.ai" rel="noopener noreferrer"&gt;https://trustgraph.ai&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>opensource</category>
      <category>rag</category>
    </item>
    <item>
      <title>The 2025 State of RAG</title>
      <dc:creator>Daniel Davis</dc:creator>
      <pubDate>Tue, 30 Dec 2025 17:57:00 +0000</pubDate>
      <link>https://dev.to/jackcolquitt/the-2025-state-of-rag-59mj</link>
      <guid>https://dev.to/jackcolquitt/the-2025-state-of-rag-59mj</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/W6m_BzaedUc"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://x.com/TrustSpooky" rel="noopener noreferrer"&gt;Daniel Davis&lt;/a&gt; of &lt;a href="https://trustgraph.ai" rel="noopener noreferrer"&gt;TrustGraph&lt;/a&gt; and &lt;a href="https://x.com/kirkmarple" rel="noopener noreferrer"&gt;Kirk Marple&lt;/a&gt; from &lt;a href="https://graphlit.com" rel="noopener noreferrer"&gt;Graphlit&lt;/a&gt; revisit their predictions from their &lt;a href="https://dev.to/trustgraph/the-2024-state-of-rag-podcast-559b"&gt;2024 State of RAG&lt;/a&gt; podcast and make predictions for 2026.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>rag</category>
      <category>mcp</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Part 3: How TrustGraph's Knowledge Cores End the Memento Nightmare</title>
      <dc:creator>Daniel Davis</dc:creator>
      <pubDate>Mon, 28 Apr 2025 16:22:54 +0000</pubDate>
      <link>https://dev.to/trustgraph/part-3-how-trustgraphs-knowledge-cores-end-the-memento-nightmare-36ai</link>
      <guid>https://dev.to/trustgraph/part-3-how-trustgraphs-knowledge-cores-end-the-memento-nightmare-36ai</guid>
      <description>&lt;p&gt;In &lt;a href="https://blog.trustgraph.ai/p/the-memento-problem-with-ai-memory" rel="noopener noreferrer"&gt;Parts 1&lt;/a&gt; and &lt;a href="https://blog.trustgraph.ai/p/why-your-ai-is-stuck-in-a-memento-loop" rel="noopener noreferrer"&gt;2&lt;/a&gt;, we exposed the dangerous flaw in most current AI "memory": like Leonard Shelby in Memento, our systems often operate on disconnected fragments, unable to form the interconnected knowledge needed for reliable reasoning. We saw how this reliance on context-stripped, relationship-blind, and provenance-oblivious data dooms AI to a cycle of confident errors and hallucinations, just as Leonard's fragmented note system led him down dangerous paths.&lt;/p&gt;

&lt;p&gt;So, how do we break the loop? How do we give AI the ability to truly know, not just recall fragments? The answer isn't a slightly better system of Polaroids and notes. The answer is to build the integrated, structured understanding Leonard tragically lacked: a Knowledge Core.&lt;/p&gt;

&lt;p&gt;This is precisely what TrustGraph, the AI Provisioning Platform, delivers through its advanced TrustRAG engine. It moves beyond the limitations of fragmented recall by architecting genuine knowledge:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Mapping the Connections (Solving Relationship Blindness): Unlike Leonard staring at isolated clues, TrustGraph automatically builds a Knowledge Graph (KG). It doesn't just store facts; it explicitly maps the relationships between them (e.g., "Person X works for Company Y," "Event A caused Event B"). This Knowledge Graph is the coherent narrative structure Leonard couldn't form – the understanding of how things connect.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Delivering Contextualized Scenes (Solving Context Stripping): Leonard reviewed one Polaroid at a time, losing the big picture. TrustRAG uses a hybrid retrieval process. Vector search identifies relevant starting points within the Knowledge Graph, but then TrustRAG traverses the graph connections, constructing a subgraph of related entities and relationships. Instead of isolated fragments, the LLM receives a connected scene – a relevant slice of the knowledge core with inherent local context.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Verifying the Clues (Addressing Provenance Oblivion): Leonard couldn't be sure when or why he wrote his notes. TrustGraph's Knowledge Graph architecture is designed to incorporate provenance metadata directly with the facts and relationships it stores (source, timestamp, reliability). TrustRAG can then leverage this, allowing the AI to weigh information based on its origins, escaping the trap of treating all retrieved fragments as equally trustworthy.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Escaping the Memento Loop: The Power of a Knowledge Core
&lt;/h3&gt;

&lt;p&gt;By building and utilizing this structured Knowledge Core, TrustGraph fundamentally changes AI capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enables Reliable Reasoning: Provides the interconnected facts and explicit relationships needed for complex reasoning, synthesis, and understanding causality – tasks impossible for Leonard (and fragment-based AI).&lt;/li&gt;
&lt;li&gt;Dramatically Reduces Hallucinations: Grounding responses in a verifiable graph of knowledge, potentially weighted by provenance, significantly reduces the chance of fabricating connections or asserting baseless claims.&lt;/li&gt;
&lt;li&gt;Offers Explainable Insight: The retrieved subgraph itself acts as an explanation, showing how the AI arrived at its context based on the knowledge core's structure – unlike Leonard's often opaque leaps of faith.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Provisioning Reliable Knowledge, Not Just Infrastructure
&lt;/h3&gt;

&lt;p&gt;TrustGraph isn't just a concept. It's an AI Provisioning Platform that containerizes the entire intelligent system – the LLMs, the necessary tools, and the essential TrustRAG Knowledge Cores – allowing you to reliably provision this complete, knowledgeable AI stack anywhere (Cloud, On-Prem, Edge). We're providing the robust, managed infrastructure for knowledge that Leonard's fragile system lacked.&lt;/p&gt;

&lt;p&gt;Stop building AI condemned to relive Leonard Shelby's nightmare. Stop provisioning systems based on fragmented recall and start delivering applications grounded in genuine understanding.&lt;/p&gt;

&lt;p&gt;Give your AI the gift of coherent memory. Build with a Knowledge Core.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Explore &lt;a href="https://trustgraph.ai" rel="noopener noreferrer"&gt;TrustGraph&lt;/a&gt; on &lt;a href="https://github.com/trustgraph-ai/trustgraph" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; and see how we structure knowledge&lt;/li&gt;
&lt;li&gt;Read the &lt;a href="https://github.com/trustgraph-ai/trustgraph?tab=readme-ov-file#-trustrag" rel="noopener noreferrer"&gt;TrustRAG&lt;/a&gt; documentation for technical details&lt;/li&gt;
&lt;li&gt;Join our &lt;a href="https://discord.gg/sQMwkRz5GX" rel="noopener noreferrer"&gt;community&lt;/a&gt; and discuss the future of AI knowledge&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Provision AI that knows. Provision it with TrustGraph.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aiops</category>
      <category>opensource</category>
      <category>rag</category>
    </item>
    <item>
      <title>Part 2: Why Your AI is Stuck in a Memento Loop</title>
      <dc:creator>Daniel Davis</dc:creator>
      <pubDate>Sat, 26 Apr 2025 22:19:42 +0000</pubDate>
      <link>https://dev.to/trustgraph/part-2-why-your-ai-is-stuck-in-a-memento-loop-ilf</link>
      <guid>https://dev.to/trustgraph/part-2-why-your-ai-is-stuck-in-a-memento-loop-ilf</guid>
      <description>&lt;p&gt;In &lt;a href="https://blog.trustgraph.ai/p/the-memento-problem-with-ai-memory" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt;, we likened today's typical AI "memory" to the plight of Leonard Shelby in Memento: brilliant at accessing isolated fragments (Polaroids, notes, tattoos) but unable to weave them into the coherent tapestry of true knowledge. He remembers that he has a note, but not necessarily the reliable why or the how it connects to everything else. Now, let's diagnose why popular RAG approaches, inherently create this dangerous, fragmented reality for our AI.&lt;/p&gt;

&lt;p&gt;Imagine Leonard's investigation. His "database" consists of disconnected snapshots and cryptic assertions. When he tries to solve a problem ("Who is John G?"), he shuffles through these fragments, looking for clues that feel related. This is strikingly similar to how typical RAG approaches use “memory”:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The Polaroid Snapshot (Context Stripping): Just as Leonard's Polaroids capture only a single moment divorced from what came before or after, document chunking for vectorization strips vital context. A retrieved sentence saying "Project Titan deadline is critical" loses the surrounding discussion about why it's critical, who set it, or what happens if it's missed. The AI gets the snapshot, not the scene.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cryptic Notes &amp;amp; Missing Links (Relationship Blindness): Leonard's notes might say "Meet Natalie" and "Don't believe Dodd's lies." Vector search can find documents mentioning "Natalie" and documents mentioning "Dodd," but like Leonard, it lacks the explicit map connecting them. Does Natalie know Dodd? Is she part of the lies? The relationships aren't inherently encoded in the vector similarity. Finding similar topics doesn't mean understanding their causal or structural connection, leaving the AI to guess these critical links.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Trusting Faded Ink (Provenance Oblivion): Leonard must trust his fragmented notes, even if they were written under duress, based on misinformation, or are simply outdated. Standard RAG often does the same, treating all retrieved text fragments as equally valid assertions. It frequently lacks a robust mechanism to track provenance – the source, timestamp, or reliability score of the information. An old, debunked "fact" retrieved via vector similarity looks just as convincing to the LLM as a fresh, verified one.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
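&lt;p&gt;The "Polaroid snapshot" problem is easy to reproduce. The sketch below is illustrative only: the document text is invented, and a bag-of-words cosine stands in for real embeddings. It chunks a document by sentence and retrieves the most similar chunk for a "why" question, and the actual reasons never make it into the retrieved fragment.&lt;/p&gt;

```python
from collections import Counter
from math import sqrt

# Invented example document; each sentence becomes an isolated chunk,
# mimicking a naive chunk-and-embed pipeline.
DOCUMENT = (
    "The board escalated Project Titan after the audit. "
    "Project Titan deadline is critical. "
    "If it slips, the vendor contract is void."
)
chunks = [s.strip().rstrip(".") + "." for s in DOCUMENT.split(". ") if s.strip()]

def cosine(a, b):
    """Bag-of-words cosine similarity, standing in for embedding similarity."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

query = "Why is the Project Titan deadline critical?"
best = max(chunks, key=lambda c: cosine(query, c))
print(best)  # the bare assertion, stripped of the "why" held by its neighbors
```

&lt;p&gt;The neighboring sentences that explain why the deadline matters (the audit, the voided contract) score lower on similarity and are silently dropped, which is exactly the context stripping and relationship blindness described above.&lt;/p&gt;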

&lt;p&gt;The Leonard Shelby Effect in AI:&lt;/p&gt;

&lt;p&gt;When AI operates with only these disconnected, context-stripped, relationship-blind, and provenance-oblivious fragments, its reasoning becomes dangerously flawed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Hallucinating Connections: Like Leonard assuming connections between unrelated clues, the LLM invents relationships between text fragments simply because they were retrieved together.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Contradictory Actions: Acting on conflicting "facts" because it can't verify which source or connection is trustworthy or current.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Inability to Synthesize: Unable to build a larger picture or draw reliable conclusions because the foundational links between data points are missing or inferred incorrectly.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We are building AI systems trapped in a Memento loop: forever re-reading fragmented clues, capable of impressive recall but incapable of forming the durable, interconnected knowledge needed for reliable reasoning and true understanding. They are architecturally destined to make potentially disastrous mistakes based on an incomplete and untrustworthy view of their "world."&lt;/p&gt;

&lt;p&gt;If we want AI to escape this loop, we need to fundamentally change how we provide information. We need to move beyond retrieving isolated Polaroids and start building systems that can understand the whole, interconnected story.&lt;/p&gt;

&lt;p&gt;How do we provide that interconnected narrative? How do we build AI memory that understands relationships and provenance? Stay tuned for Part 3 where we reveal the architecture for true AI knowledge.&lt;/p&gt;

&lt;p&gt;Have you seen an AI confidently stitch together unrelated facts like Leonard building a flawed theory? Let us know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🌟 &lt;a href="https://trustgraph.ai" rel="noopener noreferrer"&gt;TrustGraph&lt;/a&gt; on &lt;a href="https://github.com/trustgraph-ai/trustgraph" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; 🚢 &lt;/li&gt;
&lt;li&gt;Join the &lt;a href="https://discord.gg/sQMwkRz5GX" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; 👋 &lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>aiops</category>
      <category>opensource</category>
      <category>rag</category>
    </item>
    <item>
      <title>Part 1: The Memento Problem with AI Memory</title>
      <dc:creator>Daniel Davis</dc:creator>
      <pubDate>Fri, 25 Apr 2025 21:58:31 +0000</pubDate>
      <link>https://dev.to/trustgraph/part-1-the-memento-problem-with-ai-memory-4ge7</link>
      <guid>https://dev.to/trustgraph/part-1-the-memento-problem-with-ai-memory-4ge7</guid>
      <description>&lt;p&gt;We're drowning in takes about AI "memory." RAG is hailed as the silver bullet, promising intelligent systems that learn and retain information. But let's be brutally honest: most implementations are building agents that are drowning in data and suffocating from a lack of knowledge.&lt;/p&gt;

&lt;p&gt;These systems excel at retrieving fragments – isolated data points plucked from documents and observations stripped of their origins. Ask it a question, and it surfaces a text snippet that looks relevant. This feels like memory - like recall.&lt;/p&gt;

&lt;p&gt;But it isn't knowledge.&lt;/p&gt;

&lt;p&gt;Real knowledge isn't just storing data points - it's understanding their context, their provenance (where did this information come from? is it reliable?), and their relationships with other data points. Human memory builds interconnected information networks while current AI "memory" approaches just hoard disconnected digital Post-it notes. We are mistaking the retrieval of isolated assertions for the synthesis of contextualized understanding.&lt;/p&gt;

&lt;p&gt;Think of Leonard Shelby in Christopher Nolan's film Memento. Suffering from anterograde amnesia, Leonard can't form new memories. To function, he relies on a system of Polaroids, handwritten notes, and even tattoos – externalized fragments representing supposed facts about his world and his mission to find his wife's killer.&lt;/p&gt;

&lt;p&gt;Today's RAG systems often operate eerily like Leonard. They receive a query and consult their "Polaroids" – the vector embeddings of text chunks. They retrieve the chunk that seems most relevant based on similarity, a fragment like "Don't believe his lies" or "Find John G." Unfortunately, like Leonard, these RAG systems lack the overarching context and the relationships between these fragments. They don't inherently know how the note about John G. relates to the warning about lies, or the sequence of events that led to these assertions being recorded.&lt;/p&gt;

&lt;p&gt;And this fragmentation is where disaster strikes. Leonard, working only with disconnected clues, makes fatal misinterpretations. He trusts the wrong people, acts on incomplete information, and is manipulated because he cannot form a cohesive, interconnected understanding of his reality. His "memory," composed of isolated data points, leads him not to truth, but deeper into confusion, madness, and catastrophe.&lt;/p&gt;

&lt;p&gt;An AI that can quote a source but doesn't inherently grasp how that source connects to related concepts or whether that source is trustworthy isn't remembering – it's echoing fragments, just like Leonard reading his own fragmented notes.&lt;/p&gt;

&lt;p&gt;This fundamental flaw leads to confident hallucinations, an inability to reason deeply about causality, and systems that can be misled. We're building articulate regurgitators, not truly knowledgeable thinkers.&lt;/p&gt;

&lt;p&gt;We need to stop celebrating glorified search indices as "memory" and start demanding systems capable of building actual knowledge. Until then, we're just building better mimics, doomed to repeat the mistakes born from disconnected understanding.&lt;/p&gt;

&lt;p&gt;Next time in Part 2: We dissect why this fragment-recall approach fundamentally breaks down when AI needs to reason, synthesize, or understand causality.&lt;/p&gt;

&lt;p&gt;Does your AI feel like it knows things, or just recalls text like Leonard Shelby reading his notes? Reach out to us below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🌟 TrustGraph on &lt;a href="https://github.com/trustgraph-ai/trustgraph" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; 🧠 &lt;/li&gt;
&lt;li&gt;Join the &lt;a href="https://discord.gg/sQMwkRz5GX" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; 👋 &lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>rag</category>
      <category>memory</category>
    </item>
    <item>
      <title>The Symphony of the AI System</title>
      <dc:creator>Daniel Davis</dc:creator>
      <pubDate>Mon, 14 Apr 2025 20:33:53 +0000</pubDate>
      <link>https://dev.to/trustgraph/the-symphony-of-the-ai-system-4abh</link>
      <guid>https://dev.to/trustgraph/the-symphony-of-the-ai-system-4abh</guid>
      <description>&lt;h2&gt;
  
  
  Beyond the Monolith: Why the Future of AI is a Symphony, Not a Soloist
&lt;/h2&gt;

&lt;p&gt;For years, the science fiction dream and much of the AI hype cycle have revolved around a singular goal: building the one, giant Artificial General Intelligence (AGI). A single, monolithic model capable of learning, reasoning, and solving any problem like a human. It's a captivating vision, but is it the only path forward? Or even the right one for practical, powerful, and responsible AI?&lt;/p&gt;

&lt;p&gt;It's not. The pursuit of a single, all-encompassing model overlooks the messy, beautiful complexity of intelligence itself and ignores the profound limitations inherent in monolithic approaches. The true future of advanced machine intelligence lies not in a singular soloist, but in a symphony of tightly interconnected, specialized software components working in harmony.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Cracks in the Monolith
&lt;/h3&gt;

&lt;p&gt;Trying to build a single AI model to "solve" human-level intelligence faces immense hurdles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Garbage In, Garbage Out: Training such models requires unfathomable amounts of data, plus human intervention to evaluate the quality of inputs and outputs, an evaluation process subject to individual biases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Brittleness &amp;amp; Lack of Nuance: A single model can struggle with specialized tasks outside its core training distribution. It's the ultimate "jack of all trades, master of none," potentially failing when encountering edge cases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Operational Nightmares: Deploying, managing, updating, and securing a single, gigantic model across diverse environments (cloud, on-prem, edge) is incredibly complex and inefficient. How do you provide fine-grained updates or tailored capabilities?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Explainability &amp;amp; Auditability Black Holes: Understanding why a monolithic model made a specific decision can be nearly impossible, hindering trust, debugging, and crucial safety checks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Concentration of Power &amp;amp; Risk: Placing all intelligent capabilities into a single entity creates immense concentrations of power and systemic risk.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Rise of the System: Intelligence as an Interconnected Network
&lt;/h3&gt;

&lt;p&gt;Nature offers a better blueprint. The human brain isn't a homogenous blob; it's a highly specialized, interconnected system of regions communicating dynamically. Complex tasks emerge from the coordinated activity of these specialized parts. Similarly, the future of advanced AI lies in building systems that mirror this principle.&lt;/p&gt;

&lt;p&gt;Imagine an AI architecture that functions less like a single giant brain and more like a biological nervous system – what we might call a Synaptic Automation System. This system possesses key characteristics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Modular Expertise: Instead of one model knowing everything, the system leverages specialized "Intelligent Cores" – components encapsulating deep expertise, algorithms, or processing for specific domains. These cores are the seeds of adaptable skill.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dynamic Synthesis &amp;amp; Deployment: The system doesn't just run pre-built applications. Based on the available Cores and the task at hand, it dynamically generates and deploys the necessary processing modules on the fly. Think of it assembling a specialized task force exactly when needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Emergent Learning &amp;amp; Adaptation: Faced with unique situations, the system doesn't just rely on past training. It can generate custom learning modules to analyze new data, identify patterns, and evolve its understanding over time through integrated feedback loops, constantly refining its capabilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Inherent Connectivity &amp;amp; Communication: Like synapses firing, components constantly communicate, sharing context and triggering actions across the system. This allows for holistic reasoning and complex workflow execution far beyond simple pipelines.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Transparency &amp;amp; Trust: Crucially, because the system generates plans and modules dynamically, it can also be designed to make these processes transparent. The 'reasoning' behind an automated workflow can be audited, allowing for verification, compliance, and crucial safety checks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Safety First: Built-in mechanisms constantly monitor the system's actions, detecting potential harms or deviations from desired boundaries, enabling adaptive responses to ensure responsible operation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Universal Presence: This entire intelligent system isn't locked to specific hardware. It's designed as a fabric that can be deployed consistently across any cloud, bare-metal servers, or edge devices, bringing intelligence wherever it's needed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  TrustGraph: Embodying the Synaptic Vision
&lt;/h3&gt;

&lt;p&gt;This isn't just theory. Platforms like TrustGraph are pioneering this Synaptic Automation System approach. By focusing on dynamically connecting modular Intelligent Cores, synthesizing processes on demand, enabling continuous learning through feedback, ensuring auditability and safety, and running universally across infrastructures, TrustGraph demonstrates the power of this interconnected model over the monolithic dream.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Symphony Takes the Stage
&lt;/h3&gt;

&lt;p&gt;The future of impactful AI won't be a single, monolithic oracle attempting to know everything. It will be a dynamic, adaptable, and interconnected system – a symphony of specialized components working together seamlessly. This approach offers a path towards more scalable, resilient, trustworthy, and ultimately more powerful machine intelligence capable of tackling the world's complex challenges. It’s time to move beyond the monolith and embrace the power of the network.&lt;/p&gt;

&lt;p&gt;🌟 TrustGraph on &lt;a href="https://github.com/trustgraph-ai/trustgraph" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; 🧠 &lt;/p&gt;

&lt;p&gt;Join the &lt;a href="https://discord.gg/sQMwkRz5GX" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; 👋 &lt;/p&gt;

&lt;p&gt;Watch tutorials on &lt;a href="https://www.youtube.com/@TrustGraphAI?sub_confirmation=1" rel="noopener noreferrer"&gt;YouTube&lt;/a&gt; 📺️ &lt;/p&gt;

</description>
      <category>ai</category>
      <category>aiops</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Stop Thinking AI Agents, Start Engineering Autonomous Knowledge Operations</title>
      <dc:creator>Daniel Davis</dc:creator>
      <pubDate>Wed, 09 Apr 2025 19:24:21 +0000</pubDate>
      <link>https://dev.to/trustgraph/stop-thinking-ai-agents-start-engineering-autonomous-knowledge-operations-20mj</link>
      <guid>https://dev.to/trustgraph/stop-thinking-ai-agents-start-engineering-autonomous-knowledge-operations-20mj</guid>
      <description>&lt;h2&gt;
  
  
  Beyond the Buzz: Why Autonomous Knowledge Operations Matters More Than Just AI Agents
&lt;/h2&gt;

&lt;p&gt;The tech world has been ablaze with talk of AI agents. We see demos of agents booking flights, writing code snippets, or summarizing articles. It's exciting, capturing the imagination with glimpses of AI performing tasks previously requiring human operations. But as we move from demos to deployment, simply thinking in terms of "agents" falls short.&lt;/p&gt;

&lt;p&gt;The real paradigm shift isn't just about creating smarter tools (agents); it's about building systems capable of continuous, reliable, and goal-directed operations that are powered by deep contextual understanding. This is the philosophy of TrustGraph’s &lt;a href="https://github.com/trustgraph-ai/trustgraph" rel="noopener noreferrer"&gt;Autonomous Knowledge Operations&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's the Difference? Isn't an Agent Autonomous?
&lt;/h3&gt;

&lt;p&gt;An AI Agent, in its common definition today, is often:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Task-Oriented: Designed to perform a specific, often short-lived task (e.g., answer a question, draft an email).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reactive: Primarily responds to direct input or triggers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Component-Level: Can be thought of as a sophisticated function call or a smart script.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Potentially Isolated &amp;amp; Knowledge-Poor: Might operate with limited context or struggle to access and reason over the complex web of information within an enterprise.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While powerful, these agents often lack the deep knowledge integration, robustness, persistence, and manageability needed for mission-critical business functions. Running a complex business process isn't like asking an agent to write a poem; it requires continuous awareness, adaptation, reliability, and critically, intelligent use of relevant knowledge.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/trustgraph-ai/trustgraph" rel="noopener noreferrer"&gt;Autonomous Knowledge Operations&lt;/a&gt; is a broader, more systemic approach where autonomy is directly fueled by intelligent information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Goal-Oriented &amp;amp; Continuous: Focused on achieving and maintaining a desired state or objective over time. Action is driven by understanding the goal within its knowledge context.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Proactive, Persistent &amp;amp; Knowledge-Driven: Actively monitors, plans, and acts by constantly interpreting its environment through a rich knowledge base. It runs continuously, learning and adapting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;System-Level: Encompasses not just agents but the entire infrastructure, knowledge pipelines (RAG, KG, VectorDBs), integration points, and feedback loops required for sustained, intelligent operation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fueled by Deep Knowledge &amp;amp; Context: Leverages rich, relevant, and timely information drawn from enterprise sources. This requires sophisticated RAG pipelines with both vector databases and knowledge graphs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Observable &amp;amp; Manageable: Designed with built-in monitoring, logging, tracing, and controls to ensure reliability, understand the knowledge-driven behavior, and allow for intervention or adjustments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reliable &amp;amp; Scalable: Built on enterprise-grade infrastructure capable of handling failures, scaling resources, and meeting performance demands for both computation and knowledge processing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why This Shift in Thinking Matters
&lt;/h3&gt;

&lt;p&gt;Focusing solely on "agents" leads to several potential pitfalls in enterprise adoption:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The "Demo-to-Production" Gap: Cool agent demos often bypass the hard parts: robust knowledge integration, error handling, scalability, security, and monitoring needed for real-world value.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Context Starvation: Agents without deep, structured context – the kind derived from integrated Knowledge Graphs combined with Vector DBs – struggle with complex reasoning and nuanced tasks common in business. This is a knowledge access problem.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Infrastructure Nightmare: Managing dozens of agents and their disparate, potentially inconsistent knowledge sources, ensuring reliability, and providing consistent data access is an operational burden.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lack of Trust: How do you monitor, debug, or guarantee the performance of agents acting on potentially incomplete or misunderstood information? Observability into the knowledge retrieval and reasoning process is non-negotiable.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Building for &lt;a href="https://github.com/trustgraph-ai/trustgraph" rel="noopener noreferrer"&gt;Autonomous Knowledge Operations&lt;/a&gt;: The TrustGraph Philosophy
&lt;/h3&gt;

&lt;p&gt;This is precisely the philosophy behind TrustGraph. We realized that the conversation needed to evolve beyond just the agent itself to encompass the entire knowledge-driven system. TrustGraph is an &lt;a href="https://github.com/trustgraph-ai/trustgraph" rel="noopener noreferrer"&gt;Autonomous Knowledge Operations&lt;/a&gt; Platform designed to provide the foundational elements missing from simple agent frameworks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Enterprise-Grade Infrastructure: It provides the scalable, reliable backend needed to run operations continuously, managing both computation and knowledge flows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integrated RAG (KG + VectorDB): It automates the deployment of sophisticated RAG pipelines, acknowledging that deep context and reliable autonomy stem from leveraging both semantic similarity (vectors) and structured relationships (knowledge graphs).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Unified LLM Access: It abstracts the complexity of dealing with multiple LLM providers, allowing the system to focus on applying the best reasoning to the available knowledge.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Full Observability Stack: It builds in logging, metrics, and tracing from the ground up, including insights into the RAG process, because trusting autonomous systems requires understanding how they arrive at decisions based on knowledge.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By focusing on the knowledge-driven operation rather than just the agent, we can build systems that don't just perform tasks but achieve persistent business outcomes reliably, efficiently, and intelligently.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Future is Systemic and Knowledge-Rich
&lt;/h3&gt;

&lt;p&gt;AI agents are a vital component of the future. But the true transformation lies in weaving these components into robust, knowledge-aware, observable, and continuous Autonomous Knowledge Operations. This requires a shift in mindset and tooling – moving from building smart tools to engineering intelligent, self-managing systems powered by deep understanding. That's the future we're building towards with TrustGraph.&lt;/p&gt;

&lt;p&gt;🌟 TrustGraph on &lt;a href="https://github.com/trustgraph-ai/trustgraph" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; 🚀 &lt;/p&gt;

&lt;p&gt;Join the &lt;a href="https://discord.gg/sQMwkRz5GX" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; 👋 &lt;/p&gt;

&lt;p&gt;Watch tutorials on &lt;a href="https://www.youtube.com/@TrustGraphAI?sub_confirmation=1" rel="noopener noreferrer"&gt;YouTube&lt;/a&gt; 📺️ &lt;/p&gt;

</description>
      <category>ai</category>
      <category>aiops</category>
      <category>opensource</category>
    </item>
    <item>
      <title>How-to Use AI to See Your Data in 3D</title>
      <dc:creator>Daniel Davis</dc:creator>
      <pubDate>Mon, 30 Dec 2024 22:28:01 +0000</pubDate>
      <link>https://dev.to/trustgraph/how-to-use-ai-to-see-your-data-in-3d-f7h</link>
      <guid>https://dev.to/trustgraph/how-to-use-ai-to-see-your-data-in-3d-f7h</guid>
      <description>&lt;p&gt;We all know the struggle. You're drowning in massive amounts of unstructured data, trying to make sense of the complex web of relationships hidden within.  &lt;/p&gt;

&lt;p&gt;That's where &lt;a href="https://github.com/trustgraph-ai/trustgraph" rel="noopener noreferrer"&gt;TrustGraph's&lt;/a&gt; Data Workbench 3D visualizer comes in. Forget clunky interfaces and limited perspectives. We're bringing the power of 3D visualization to your data analysis, giving you an intuitive, immersive, and interactive way to uncover hidden patterns and relationships.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why 3D Matters for AI Developers
&lt;/h2&gt;

&lt;p&gt;Everyone knows the power of GraphRAG. But everyone also knows the limitations of data visualizations in 2D. With 3D, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Uncover Hidden Relationships&lt;/strong&gt;: In a 2D graph, nodes can clutter the screen, obscuring vital connections. 3D lets you see the true depth of your data, revealing relationships you might otherwise miss.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Intuitive Navigation&lt;/strong&gt;: Our brains are wired to understand spatial relationships. Navigating a 3D visualization is naturally more intuitive than panning and zooming across a flat surface.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced Understanding&lt;/strong&gt;: The spatial layout of nodes in 3D can reveal clusters, hierarchies, and anomalies that are hard to spot in 2D.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interactive Data&lt;/strong&gt;: Let’s be honest, a 3D visualization looks cool! Impress stakeholders with compelling and interactive visualizations that show the hidden gems of wisdom in your data.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Started: Your 3D Data Journey
&lt;/h2&gt;

&lt;p&gt;Before we can use the 3D visualizer, we must first deploy TrustGraph.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1:  Setting Up (If you haven't already)
&lt;/h3&gt;

&lt;p&gt;Navigate to the &lt;a href="https://config-ui.demo.trustgraph.ai/" rel="noopener noreferrer"&gt;Configuration Portal&lt;/a&gt; to select all the components for your build.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feeyuvgndjp4pykkf75q0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feeyuvgndjp4pykkf75q0.jpg" alt="TrustGraph Configuration Portal" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow all the instructions on the Finish Deployment tab to get TrustGraph running. The Data Workbench will be accessible at port &lt;code&gt;8888&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Data Ingestion
&lt;/h3&gt;

&lt;p&gt;Navigate to the Data Workbench now running on port &lt;code&gt;8888&lt;/code&gt;. Use the Data Loader 📂 to upload your documents. TrustGraph supports &lt;code&gt;.pdf&lt;/code&gt;, &lt;code&gt;.txt&lt;/code&gt;, and &lt;code&gt;.md&lt;/code&gt; files. The data extraction agents will automatically process the files, extracting key information to build the knowledge graph and vector embeddings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1d8kqzid0e2fxe1ekop3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1d8kqzid0e2fxe1ekop3.jpg" alt="TrustGraph Data Workbench Loader" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Query the Data through AI Chat
&lt;/h3&gt;

&lt;p&gt;No need for &lt;code&gt;Cypher&lt;/code&gt; or &lt;code&gt;SPARQL&lt;/code&gt; queries! Once your data is processed, you can perform GraphRAG queries with natural language in the System Chat. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozc5815gr9up6xdzrb3g.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozc5815gr9up6xdzrb3g.jpg" alt="TrustGraph Data Workbench Chat" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The System Chat will query both the knowledge graph and vector embeddings to generate a response. In addition, the Data Workbench will display a set of nodes from the knowledge graph on the left side of the screen. Clicking any one of these nodes will allow you to explore semantic relationships.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflhj6rqsh9xx9wh6itz1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflhj6rqsh9xx9wh6itz1.jpg" alt="TrustGraph Data Workbench Explorer" width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Explore Your Data in 3D
&lt;/h3&gt;

&lt;p&gt;Now comes the fun part! Once you’ve selected a node, you can generate a 3D visualization either by clicking GRAPH VIEW in the Data Explorer window or by selecting Data Visualizer from the Data Workbench set of tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjw62bdnnd8i29z1o56lx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjw62bdnnd8i29z1o56lx.jpg" alt="TrustGraph Data Workbench 3D" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once in the 3D visualizer, you can interact with the data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Zoom, Pan, and Rotate&lt;/strong&gt;: Use your mouse to explore the space. Zoom in to examine clusters in detail or rotate to gain a new perspective on relationships.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Click and Drag&lt;/strong&gt;: Click and drag on an individual node to reshape the graph.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node Exploration&lt;/strong&gt;: Click on individual nodes to see additional related nodes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Relationship Explorer&lt;/strong&gt;: Click on a relationship connecting two nodes to see a pulse travel along it, showing the direction of the semantic relationship.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Ready to Explore Your Data in 3D?
&lt;/h2&gt;

&lt;p&gt;Ditch analysis in 2D, embrace the third dimension, and unlock the hidden potential of your unstructured data with TrustGraph’s 3D Visualizer.&lt;/p&gt;

&lt;p&gt;Give it a try today! Include the Data Workbench in your build through the Configuration Portal and experience the future of data analysis.&lt;/p&gt;

&lt;p&gt;We're excited to see how you leverage this powerful tool to push the boundaries of AI! Join the TrustGraph community and help shape the future of data analysis!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/trustgraph-ai/trustgraph" rel="noopener noreferrer"&gt;Launch TrustGraph from GitHub&lt;/a&gt; 🚀 &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://discord.gg/sQMwkRz5GX" rel="noopener noreferrer"&gt;Join the Discord&lt;/a&gt; 👋 &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/@TrustGraph?sub_confirmation=1" rel="noopener noreferrer"&gt;Watch tutorials on YouTube&lt;/a&gt; 📺️ &lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>rag</category>
      <category>aiops</category>
      <category>open</category>
    </item>
    <item>
      <title>The Future of Agentic Systems Podcast</title>
      <dc:creator>Daniel Davis</dc:creator>
      <pubDate>Fri, 15 Nov 2024 15:19:04 +0000</pubDate>
      <link>https://dev.to/trustgraph/the-future-of-data-engineering-with-llms-podcast-3ce8</link>
      <guid>https://dev.to/trustgraph/the-future-of-data-engineering-with-llms-podcast-3ce8</guid>
      <description>&lt;p&gt;The founders of &lt;a href="https://github.com/trustgraph-ai/trustgraph" rel="noopener noreferrer"&gt;TrustGraph&lt;/a&gt;, discuss their journeys with big data, knowledge graphs, and data engineering. Knowledge graphs are hard to learn - no matter what Mark says, and he gives everyone a crash course on them, why querying graphs is tricky, and what makes for reliable data services. The conversation ends with a discussion of what makes for "explainable AI" and the future of AI security.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aiops</category>
      <category>opensource</category>
      <category>dataengineering</category>
    </item>
    <item>
      <title>The 2024 State of RAG Podcast</title>
      <dc:creator>Daniel Davis</dc:creator>
      <pubDate>Thu, 07 Nov 2024 16:25:11 +0000</pubDate>
      <link>https://dev.to/trustgraph/the-2024-state-of-rag-podcast-559b</link>
      <guid>https://dev.to/trustgraph/the-2024-state-of-rag-podcast-559b</guid>
      <description>&lt;p&gt;&lt;a href="https://x.com/trustspooky" rel="noopener noreferrer"&gt;Daniel Davis&lt;/a&gt; of &lt;a href="https://github.com/trustgraph-ai/trustgraph" rel="noopener noreferrer"&gt;TrustGraph&lt;/a&gt; and &lt;a href="https://x.com/kirkmarple" rel="noopener noreferrer"&gt;Kirk Marple&lt;/a&gt; from &lt;a href="https://graphlit.com" rel="noopener noreferrer"&gt;Graphlit&lt;/a&gt; discuss the 2024 state of RAG. Whether it's RAG, GraphRAG, or HybridRAG, a lot has changed since the term has become ubiquitous in AI. Where are we, where are we going, and where should be going are all answered in this discussion.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>rag</category>
      <category>aiops</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
