<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Matheus Pereira</title>
    <description>The latest articles on DEV Community by Matheus Pereira (@matheus_pereira_532646ca6).</description>
    <link>https://dev.to/matheus_pereira_532646ca6</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3621851%2Fa3d5b745-3861-420e-82c4-60d56b73aa33.jpg</url>
      <title>DEV Community: Matheus Pereira</title>
      <link>https://dev.to/matheus_pereira_532646ca6</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/matheus_pereira_532646ca6"/>
    <language>en</language>
    <item>
      <title>Breaking the Black Box Without Sacrificing Performance</title>
      <dc:creator>Matheus Pereira</dc:creator>
      <pubDate>Mon, 09 Feb 2026 18:27:23 +0000</pubDate>
      <link>https://dev.to/matheus_pereira_532646ca6/breaking-the-black-box-without-sacrificing-performance-4a0n</link>
      <guid>https://dev.to/matheus_pereira_532646ca6/breaking-the-black-box-without-sacrificing-performance-4a0n</guid>
      <description>&lt;p&gt;Deep learning has a transparency problem.&lt;/p&gt;

&lt;p&gt;Modern neural networks achieve impressive results across vision, language, and decision-making tasks, yet they often fail at answering a basic question: why was this decision made? This opacity limits trust, auditability, and adoption in real-world, high-stakes systems.&lt;/p&gt;

&lt;p&gt;Over the years, the community has explored two main directions. On one side, highly accurate black-box models with post-hoc explanations. On the other, fully interpretable models that attempt to force reasoning into human-defined concepts, often at the cost of performance.&lt;/p&gt;

&lt;p&gt;Both approaches have limitations.&lt;/p&gt;

&lt;p&gt;This tension motivated a question that guided my recent work:&lt;/p&gt;

&lt;p&gt;Is it possible to constrain deep learning models for interpretability without destroying their expressive power?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem With Forcing Interpretability Everywhere&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Concept-based approaches, such as Concept Bottleneck Models, enforce predictions through human-interpretable concepts. When successful, they provide clear and intuitive explanations. However, they also assume that all relevant information can be expressed using predefined concepts.&lt;/p&gt;

&lt;p&gt;In practice, this assumption rarely holds.&lt;/p&gt;

&lt;p&gt;Many discriminative features (textures, subtle shapes, spatial correlations) do not map cleanly to human language. Forcing all information through concepts often leads to excessive compression and, consequently, a significant drop in predictive accuracy.&lt;/p&gt;

&lt;p&gt;The issue is not interpretability itself, but where and how it is enforced.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Hybrid Perspective&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of trying to eliminate the black box, I explored a different approach:&lt;br&gt;
make it explicit, bounded, and structurally separated from what we can explain.&lt;/p&gt;

&lt;p&gt;This idea led to HGC-Net (Hybrid Guided Concept Network).&lt;/p&gt;

&lt;p&gt;The core design principle is simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Part of the model is trained to represent human-defined, interpretable concepts&lt;/li&gt;
&lt;li&gt;Another part remains unconstrained, capturing residual information necessary for performance&lt;/li&gt;
&lt;li&gt;The final prediction uses both components&lt;/li&gt;
&lt;li&gt;Interpretability is enforced selectively, not globally&lt;/li&gt;
&lt;/ul&gt;
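
&lt;p&gt;To make the split concrete, here is a minimal, purely illustrative sketch of such a hybrid forward pass. The class, weights, and activation choices are my own assumptions for illustration, not HGC-Net's actual implementation:&lt;/p&gt;

```python
import math

# Illustrative sketch (not the paper's code): a concept head whose
# outputs are supervised against human-defined concepts, a residual
# head that stays unconstrained, and a classifier that consumes both.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

class HybridGuidedNet:
    def __init__(self, concept_w, residual_w, head_w):
        self.concept_w = concept_w    # rows supervised with concept labels
        self.residual_w = residual_w  # rows left unconstrained
        self.head_w = head_w          # classifier over both parts

    def forward(self, x):
        # Interpretable part: each unit is tied to a named concept.
        concepts = [sigmoid(dot(w, x)) for w in self.concept_w]
        # Bounded black-box part: explicit, but not forced into language.
        residual = [math.tanh(dot(w, x)) for w in self.residual_w]
        logits = [dot(w, concepts + residual) for w in self.head_w]
        return logits, concepts  # concepts are exposed for explanation

net = HybridGuidedNet(
    concept_w=[[1.0, 0.0], [0.0, 1.0]],
    residual_w=[[0.5, -0.5]],
    head_w=[[1.0, 1.0, 1.0], [1.0, 0.0, -1.0]],
)
logits, concepts = net.forward([2.0, -1.0])
```

&lt;p&gt;The point of this shape is that the classifier sees both parts, but only the concept activations are offered as an explanation; when they carry little signal, the model can say so instead of inventing one.&lt;/p&gt;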

&lt;p&gt;&lt;strong&gt;What This Enables&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This architectural separation leads to an important behavioral shift:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When a human-defined concept applies, the model exposes it clearly&lt;/li&gt;
&lt;li&gt;When no known concept applies, the model does not fabricate explanations&lt;/li&gt;
&lt;li&gt;Performance remains close to a standard convolutional baseline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In qualitative analyses, this distinction becomes explicit. Some predictions are fully explainable through concepts. Others are correct but rely on latent representations, with the model being honest about the absence of a semantic explanation.&lt;/p&gt;

&lt;p&gt;Rather than pretending to be fully interpretable, the system makes the limits of interpretability visible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why This Matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Interpretability should not mean oversimplification.&lt;/p&gt;

&lt;p&gt;Forcing explanations where none exist can be just as misleading as offering no explanation at all. A model that acknowledges the boundary between what is explainable and what remains latent is often more trustworthy than one that claims total transparency.&lt;/p&gt;

&lt;p&gt;HGC-Net treats interpretability as a structural property, not a post-hoc add-on. By deciding where semantic constraints belong, we can preserve both accountability and performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Read the Full Paper&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The complete technical details, experiments, and qualitative analyses are available in the full preprint:&lt;/p&gt;

&lt;p&gt;Zenodo (DOI): &lt;a href="https://doi.org/10.5281/zenodo.18508972" rel="noopener noreferrer"&gt;10.5281/zenodo.18508972&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The paper includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the full architectural formulation&lt;/li&gt;
&lt;li&gt;quantitative comparisons with baselines&lt;/li&gt;
&lt;li&gt;qualitative examples of instance-level explanations&lt;/li&gt;
&lt;li&gt;links to executable experiments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Closing Thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Breaking the black box does not require eliminating it.&lt;/p&gt;

&lt;p&gt;Sometimes, the most honest solution is to draw clear boundaries, explain what can be explained, and explicitly expose what cannot. Hybrid semantic bottlenecks offer one possible path in that direction, and this work is an ongoing exploration of how far that idea can go.&lt;/p&gt;

&lt;p&gt;If you’re interested in interpretability, concept-based learning, or trustworthy AI, I’d be glad to hear your thoughts.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>programming</category>
    </item>
    <item>
      <title>Legacy-First Design (LFD): Designing Software That Still Makes Sense Over Time</title>
      <dc:creator>Matheus Pereira</dc:creator>
      <pubDate>Sun, 11 Jan 2026 16:58:51 +0000</pubDate>
      <link>https://dev.to/matheus_pereira_532646ca6/legacy-first-design-lfd-designing-software-that-still-makes-sense-over-time-4ak6</link>
      <guid>https://dev.to/matheus_pereira_532646ca6/legacy-first-design-lfd-designing-software-that-still-makes-sense-over-time-4ak6</guid>
      <description>&lt;p&gt;Most software architecture discussions focus on how to change faster:&lt;br&gt;
how to refactor safely, adopt new technologies, or evolve systems continuously.&lt;/p&gt;

&lt;p&gt;Much less attention is given to a different, and uncomfortable, question:&lt;/p&gt;

&lt;p&gt;What happens when change slows down… or stops entirely?&lt;/p&gt;

&lt;p&gt;In practice, many long-lived software systems do not fail because of performance or scalability issues. They fail because, over time, they lose conceptual coherence. Business rules leak into infrastructure, architectural decisions become irreversible, and eventually the system is rewritten rather than understood.&lt;/p&gt;

&lt;p&gt;This post introduces Legacy-First Design (LFD), a conceptual architectural methodology that treats time as a first-class constraint rather than an implicit assumption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Legacy-First Design?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Legacy-First Design (LFD) is not a framework, language, or architectural pattern.&lt;/p&gt;

&lt;p&gt;It operates at the architectural decision level, asking architects and engineers to explicitly distinguish between what must endure and what may change over time.&lt;/p&gt;

&lt;p&gt;Instead of optimizing primarily for short-term delivery or technological convenience, LFD prioritizes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;preservation of system identity&lt;/li&gt;
&lt;li&gt;resistance to technological obsolescence&lt;/li&gt;
&lt;li&gt;long-term conceptual clarity&lt;/li&gt;
&lt;li&gt;survivability beyond active maintenance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At its core, LFD reframes the architectural question from:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“How should we structure software to evolve faster?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;to:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“What must remain valid even when evolution stops?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Survival in Abandoned State&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the central ideas proposed by LFD is Survival in Abandoned State.&lt;/p&gt;

&lt;p&gt;Most architectural approaches implicitly assume continuous maintenance. LFD does not.&lt;/p&gt;

&lt;p&gt;It treats prolonged absence of maintenance as a predictable phase of the software lifecycle, not an exceptional failure. Under this perspective, a system that collapses conceptually when abandoned has failed architecturally — even if it once worked correctly.&lt;/p&gt;

&lt;p&gt;Designing for survival in abandoned state means building systems that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;remain understandable without their original creators&lt;/li&gt;
&lt;li&gt;preserve core business meaning&lt;/li&gt;
&lt;li&gt;continue to make sense even after long periods of inactivity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How does LFD relate to existing approaches?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Legacy-First Design does not attempt to replace approaches such as Clean Architecture, Domain-Driven Design, or Hexagonal Architecture.&lt;/p&gt;

&lt;p&gt;Instead, it introduces a temporal governance layer that these approaches often leave implicit.&lt;/p&gt;

&lt;p&gt;LFD is less concerned with how a system is structured and more concerned with what is allowed to change over time, and what must be protected from change.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The paper&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The complete, canonical definition of Legacy-First Design (LFD) is presented in a short technical paper:&lt;/p&gt;

&lt;p&gt;📄 Legacy-First Design (LFD): A Temporal Approach to Software Sustainability&lt;br&gt;
👉 &lt;a href="https://zenodo.org/records/18208444" rel="noopener noreferrer"&gt;Read it on Zenodo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The paper explicitly positions LFD as a conceptual and normative methodology, discusses its relationship with existing architectural approaches, and acknowledges its limitations.&lt;/p&gt;

&lt;p&gt;It is intentionally concise and meant to provoke discussion rather than prescribe implementation details.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this perspective matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Software systems rarely fail because of a single bad decision.&lt;br&gt;
They fail because of accumulated decisions that did not account for time.&lt;/p&gt;

&lt;p&gt;Legacy-First Design does not promise to eliminate complexity or conflict. It proposes something more demanding:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Designing systems that remain coherent when time does its work.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you are working with long-lived systems, legacy codebases, or architectural sustainability, this perspective may be worth exploring.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>design</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Is the Monolith Dead? Introducing MQ-AGI: A Modular, Neuro-Symbolic Architecture for Scalable AI</title>
      <dc:creator>Matheus Pereira</dc:creator>
      <pubDate>Thu, 20 Nov 2025 23:56:47 +0000</pubDate>
      <link>https://dev.to/matheus_pereira_532646ca6/is-the-monolith-dead-introducing-mq-agi-a-modular-neuro-symbolic-architecture-for-scalable-ai-54o9</link>
      <guid>https://dev.to/matheus_pereira_532646ca6/is-the-monolith-dead-introducing-mq-agi-a-modular-neuro-symbolic-architecture-for-scalable-ai-54o9</guid>
      <description>&lt;p&gt;The "Wall" of Monolithic LLMs&lt;br&gt;
We all love GPT-4 and Claude. They are marvels of engineering. But if you've tried to build complex, long-term applications with them, you've hit the wall:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amnesia: The context window is a band-aid, not a memory.&lt;/li&gt;
&lt;li&gt;Black Box: You can't debug why it hallucinated.&lt;/li&gt;
&lt;li&gt;Inefficiency: Asking a 1T parameter model to do simple arithmetic is like using a flamethrower to light a candle.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I believe the next step in AI isn't just "bigger models", but better architecture.&lt;/p&gt;

&lt;p&gt;For the past few months, I've been working on a blueprint to solve these bottlenecks. Today, I'm releasing the preprint of MQ-AGI (Modular Quantum-Orchestrated Artificial General Intelligence).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is MQ-AGI?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;MQ-AGI is a conceptual framework that moves away from the single giant neural network. Instead, it proposes an "Orchestrated Brain" topology inspired by cognitive science (Global Workspace Theory). It breaks down the monolith into specialized components coordinated by a central core. Think of it as Microservices for Intelligence. Here is the high-level topology, built from four core components:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The 4 Core Components&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Domain Expert Networks (DENs) - "System 1"&lt;/strong&gt;&lt;br&gt;
Instead of one model that tries to know Physics, Law, and Cooking all at once (and gets confused), MQ-AGI uses specialized, independent networks.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dev Benefit: You can update the Python Expert without breaking the History Expert. No catastrophic forgetting.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Global Integrator Network (GIN) - "System 2"&lt;/strong&gt;&lt;br&gt;
This is the orchestrator. It acts like the Prefrontal Cortex. It receives inputs from the experts, resolves conflicts (e.g., Safety vs. Efficiency), and maintains the train of thought.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It solves the Binding Problem: fusing disparate data types into a coherent concept.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Quantum-Inspired Core (The Routing Engine)&lt;/strong&gt; &lt;br&gt;
This is the heavy engineering part. How do you route a prompt to the perfect combination of 5 experts out of 1,000?Using a standard classifier is &lt;em&gt;O(N)&lt;/em&gt;. Finding the optimal coalition is a combinatorial nightmare.MQ-AGI models this as a Hamiltonian Energy Minimization problem. We use physics-inspired algorithms (Tensor Networks) to find the "Ground State" (lowest energy/conflict) path.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Note: This runs on GPUs today (via simulation) but is ready for future QPU hardware.&lt;/li&gt;
&lt;/ul&gt;
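
&lt;p&gt;To see why the energy framing helps, here is a toy sketch of coalition selection as energy minimization. The energy function, weights, and brute-force solver below are illustrative assumptions of mine; the paper's formulation uses Hamiltonians solved with tensor-network methods:&lt;/p&gt;

```python
import itertools

# Illustrative sketch: expert selection cast as energy minimization
# over binary coalition vectors. "relevance" rewards activating an
# expert for this prompt; "conflict" penalizes activating overlapping
# or contradictory experts together.

def coalition_energy(bits, relevance, conflict):
    on = [i for i, b in enumerate(bits) if b]
    gain = sum(relevance[i] for i in on)
    clash = sum(conflict[i][j] for i in on for j in on if i != j)
    return clash - gain  # lower energy = better coalition ("ground state")

def route(relevance, conflict):
    n = len(relevance)
    # Brute force stands in for the tensor-network / annealing solver;
    # it is exponential in n, which is exactly why smarter solvers matter.
    states = itertools.product([0, 1], repeat=n)
    return min(states, key=lambda s: coalition_energy(s, relevance, conflict))

relevance = [0.9, 0.8, 0.1]                        # experts 0 and 1 fit the prompt
conflict = [[0, 1.5, 0], [1.5, 0, 0], [0, 0, 0]]   # but 0 and 1 clash
best = route(relevance, conflict)                  # picks 0 and 2, skips the clash
```

&lt;p&gt;Even in this toy, the ground state is not the top-k most relevant experts: the clash term pushes the solver to a cheaper coalition, which is the behavior a plain &lt;em&gt;O(N)&lt;/em&gt; classifier cannot express.&lt;/p&gt;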

&lt;p&gt;&lt;strong&gt;4. DREAM Memory Protocol&lt;/strong&gt; &lt;br&gt;
An AGI needs to remember you. MQ-AGI integrates DREAM (my previous work on episodic memory).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It replaces the raw "context window" with a Self-Pruning Vector Store.&lt;/li&gt;
&lt;li&gt;Adaptive TTL: Memories that you access often live longer. Useless noise is deleted.&lt;/li&gt;
&lt;li&gt;Dual Output: The system generates a user response AND an internal memory summary in parallel.&lt;/li&gt;
&lt;/ul&gt;
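
&lt;p&gt;A hedged sketch of what an adaptive-TTL, self-pruning store could look like. The field names and the TTL formula are my own assumptions for illustration, not DREAM's published specification:&lt;/p&gt;

```python
import time
import operator

# Illustrative sketch: each memory carries a time-to-live that grows
# with access frequency, and a prune pass drops whatever has expired.

class SelfPruningStore:
    def __init__(self, base_ttl=60.0, bonus_per_hit=30.0):
        self.base_ttl = base_ttl
        self.bonus_per_hit = bonus_per_hit
        self.items = {}  # key: dict(value, hits, last_access)

    def put(self, key, value):
        self.items[key] = dict(value=value, hits=0, last_access=time.time())

    def get(self, key):
        item = self.items[key]
        item["hits"] += 1  # frequent access extends lifetime
        item["last_access"] = time.time()
        return item["value"]

    def ttl(self, item):
        # Adaptive TTL: hot memories live longer, noise expires quickly.
        return self.base_ttl + self.bonus_per_hit * item["hits"]

    def prune(self, now=None):
        now = now or time.time()
        self.items = {
            k: it for k, it in self.items.items()
            if operator.le(now - it["last_access"], self.ttl(it))
        }

store = SelfPruningStore(base_ttl=1.0, bonus_per_hit=5.0)
store.put("hot", "accessed often")
store.put("noise", "never read again")
store.get("hot")                    # one hit: TTL grows to 6.0 seconds
store.prune(now=time.time() + 3.0)  # "noise" (TTL 1.0) is pruned
```

&lt;p&gt;In a real system the same idea would sit on top of a vector store rather than a dict, but the lifecycle is the point: retention is earned by access, not granted by a fixed context window.&lt;/p&gt;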

&lt;p&gt;&lt;strong&gt;Why Open Source this Blueprint?&lt;/strong&gt;&lt;br&gt;
I believe we need more architectural diversity in AI research. We cannot leave the future of AGI solely in the hands of closed labs building bigger monoliths.&lt;/p&gt;

&lt;p&gt;I have released the full technical paper, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The mathematical formalization (Hamiltonians, Free Energy Principle).&lt;/li&gt;
&lt;li&gt;Critical feasibility analysis (addressing QRAM and latency bottlenecks).&lt;/li&gt;
&lt;li&gt;Implementation roadmap.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is available as an Open Access Preprint on Zenodo.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Links &amp;amp; Resources&lt;/strong&gt;&lt;br&gt;
Read the Full Paper (PDF): &lt;a href="https://zenodo.org/records/17654543" rel="noopener noreferrer"&gt;MQ-AGI on Zenodo&lt;/a&gt;&lt;br&gt;
Read about the Memory Protocol: &lt;a href="https://zenodo.org/records/17619917" rel="noopener noreferrer"&gt;DREAM on Zenodo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’d love to hear your thoughts on the Orchestrator Topology. Do you think modularity is the key to System 2 reasoning? Let's discuss in the comments!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
