<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Gelu Vac</title>
    <description>The latest articles on DEV Community by Gelu Vac (@geluvac).</description>
    <link>https://dev.to/geluvac</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2465993%2Fbca890a9-109e-4078-9680-415b1e3f573c.jpg</url>
      <title>DEV Community: Gelu Vac</title>
      <link>https://dev.to/geluvac</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/geluvac"/>
    <language>en</language>
    <item>
      <title>Deliberate Hybrid Design: Building Systems That Gracefully Fall Back from AI to Deterministic Logic</title>
      <dc:creator>Gelu Vac</dc:creator>
      <pubDate>Mon, 06 Apr 2026 13:52:12 +0000</pubDate>
      <link>https://dev.to/geluvac/deliberate-hybrid-design-building-systems-that-gracefully-fall-back-from-ai-to-deterministic-logic-1mna</link>
      <guid>https://dev.to/geluvac/deliberate-hybrid-design-building-systems-that-gracefully-fall-back-from-ai-to-deterministic-logic-1mna</guid>
      <description>&lt;p&gt;In the last decade, Artificial Intelligence has moved from experimental labs into the backbone of mainstream products. From recommendation systems and fraud detection to code assistants and autonomous vehicles, machine learning (ML) models now influence critical decisions at scale. Yet as powerful as these systems are, they are also inherently probabilistic and occasionally unreliable. Anyone who has wrestled with a large language model (LLM) that confidently produces incorrect information knows this firsthand.&lt;br&gt;
This unpredictability is not a flaw - it’s a feature of statistical models. But it does create a serious engineering challenge: how do we build dependable products on top of inherently uncertain components?&lt;br&gt;
The answer, increasingly, lies in a design philosophy that is both pragmatic and strategic: &lt;strong&gt;deliberate hybrid design&lt;/strong&gt;. Rather than treating AI as a standalone “brain,” we integrate it with deterministic, rule-based components - and ensure there is a &lt;strong&gt;graceful fallback path&lt;/strong&gt; when AI fails, is uncertain, or is not required.&lt;br&gt;
This article explores how to apply this philosophy in real-world software systems, why it matters, and how to design AI-enabled solutions that are robust, explainable, and trustworthy.&lt;/p&gt;

&lt;h2&gt;The Myth of AI Supremacy&lt;/h2&gt;

&lt;p&gt;The rise of AI has created a strong narrative: if Machine Learning can outperform humans on a task, why not let it run everything? But this mindset - “AI-first” or “AI-only” - can be dangerous. It leads to brittle architectures, opaque decision-making, and poor user experiences.&lt;br&gt;
Consider a few examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Autonomous driving&lt;/strong&gt;: State-of-the-art perception systems detect objects and predict trajectories. But when conditions are unclear - fog, sensor failure, or conflicting signals - the system must hand over control or switch to rule-based safety protocols.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fraud detection&lt;/strong&gt;: Machine learning models flag suspicious transactions with impressive accuracy. Yet final decisions often depend on deterministic business rules (e.g., legal compliance thresholds) or &lt;strong&gt;require human review&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chatbots and copilots&lt;/strong&gt;: Generative AI can draft messages or code snippets. But when the confidence score is low or the consequences are high (e.g., financial transactions, medical recommendations), &lt;strong&gt;the system should defer to validated templates or manual input&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In each case, &lt;strong&gt;the most reliable solutions are hybrid&lt;/strong&gt; - combining probabilistic inference with deterministic control. This isn’t an admission of weakness; it’s a design choice that maximizes robustness.&lt;/p&gt;

&lt;h2&gt;Why Hybrid Systems Win&lt;/h2&gt;

&lt;p&gt;There are several compelling reasons to design AI systems with deterministic components and fallback paths:&lt;/p&gt;

&lt;h4&gt;a. Reliability and Safety&lt;/h4&gt;

&lt;p&gt;AI models can fail in unpredictable ways: edge cases, data drift, adversarial input, or low-confidence predictions. Deterministic logic provides a safety net, ensuring that critical operations continue under known, tested conditions.&lt;/p&gt;

&lt;h4&gt;b. Explainability and Compliance&lt;/h4&gt;

&lt;p&gt;Regulated domains - healthcare, finance, law - require transparent decision-making. A hybrid approach allows us to explain final decisions even if an ML model contributed part of the reasoning.&lt;/p&gt;

&lt;h4&gt;c. Performance and Efficiency&lt;/h4&gt;

&lt;p&gt;AI is computationally expensive. In many scenarios, we can use simple rules to handle routine cases and reserve AI for ambiguous or high-value decisions. This not only improves performance but also reduces costs.&lt;/p&gt;

&lt;h4&gt;d. User Trust&lt;/h4&gt;

&lt;p&gt;Users are more likely to trust a system that admits uncertainty and defers when appropriate. &lt;strong&gt;Graceful fallback demonstrates design maturity and builds credibility over time&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;The Architecture of Deliberate Hybrid Design&lt;/h2&gt;

&lt;p&gt;Designing hybrid systems is not an afterthought - it requires planning at the architectural level. A deliberate hybrid system typically includes four layers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Sensing / Input Layer&lt;/strong&gt; – Collects data or user input.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Layer&lt;/strong&gt; – Performs probabilistic inference, classification, prediction, or generation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deterministic Layer&lt;/strong&gt; – Enforces rules, policies, and business logic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fallback / Escalation Layer&lt;/strong&gt; – Defines what happens when AI output is unreliable or ambiguous.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s explore each in more detail.&lt;/p&gt;
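&lt;p&gt;As a rough illustration, the four layers can be sketched as composable functions. All names here are invented, and the keyword-matching "model" is a stub - a real system would plug in an actual classifier and richer business rules.&lt;/p&gt;

```python
def sensing_layer(raw):
    """Sensing / Input Layer: collect and normalize input."""
    return raw.strip().lower()

def ai_layer(text):
    """AI Layer: stand-in for probabilistic inference, returns (label, confidence)."""
    # A real model call goes here; this stub just flags the word "refund".
    if "refund" in text:
        return ("billing", 0.92)
    return ("unknown", 0.30)

def deterministic_layer(label):
    """Deterministic Layer: enforce rules and policies on the AI's output."""
    allowed = {"billing", "shipping", "account"}
    return label if label in allowed else None

def fallback_layer():
    """Fallback / Escalation Layer: safe default when AI output is unreliable."""
    return "general_queue"

def route(raw, threshold=0.8):
    text = sensing_layer(raw)
    label, confidence = ai_layer(text)
    if confidence >= threshold:
        validated = deterministic_layer(label)
        if validated is not None:
            return validated
    return fallback_layer()

print(route("I want a refund"))  # billing
print(route("asdf qwerty"))      # general_queue
```

&lt;p&gt;Note that the deterministic layer sits between the model and the user: even a high-confidence prediction must pass the rules before it is acted on.&lt;/p&gt;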

&lt;h3&gt;&lt;em&gt;Confidence as a First-Class Citizen&lt;/em&gt;&lt;/h3&gt;

&lt;p&gt;A foundational principle in hybrid design is &lt;strong&gt;confidence scoring&lt;/strong&gt;. Whether you’re dealing with a classifier, a recommender, or a large language model (LLM), &lt;strong&gt;you need a way to quantify uncertainty&lt;/strong&gt;. This could be a probability score, entropy measure, thresholded similarity, or custom metric.&lt;br&gt;
Once you have a confidence signal, you can define thresholds that trigger deterministic behavior. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High confidence&lt;/strong&gt;: Accept AI output automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Medium confidence&lt;/strong&gt;: Pass output through a rule-based validation layer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low confidence&lt;/strong&gt;: Trigger fallback (e.g., human review, default rule, or simpler algorithm).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This tiered approach allows systems to dynamically adjust their behavior based on how “sure” they are - a cornerstone of graceful fallback.&lt;/p&gt;
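&lt;p&gt;The tiering above can be captured in a few lines. The threshold values below are hypothetical placeholders - in practice they must be calibrated against your model's validation data.&lt;/p&gt;

```python
# Hypothetical tier thresholds -- calibrate these per model and use case.
HIGH = 0.90
MEDIUM = 0.60

def dispatch(output, confidence):
    """Route an AI output based on how 'sure' the model is."""
    if confidence >= HIGH:
        return ("accept", output)    # use AI output directly
    if confidence >= MEDIUM:
        return ("validate", output)  # pass through rule-based checks
    return ("fallback", None)        # human review, default rule, or simpler algorithm

print(dispatch("approve", 0.95))  # ('accept', 'approve')
print(dispatch("approve", 0.70))  # ('validate', 'approve')
print(dispatch("approve", 0.20))  # ('fallback', None)
```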

&lt;h3&gt;&lt;em&gt;Multi-Layer Decision Logic&lt;/em&gt;&lt;/h3&gt;

&lt;p&gt;In many applications, decisions don’t have to be binary (AI vs. rules). Instead, they can flow through multiple layers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Pre-filter with rules&lt;/strong&gt;: Before invoking AI, use deterministic logic to discard irrelevant input or enforce hard constraints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Apply AI model&lt;/strong&gt;: Perform classification, prediction, or generation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validate with rules&lt;/strong&gt;: Post-process the AI output using deterministic checks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fallback or escalate&lt;/strong&gt;: If validation fails, fallback to a safe default, prompt for human review, or invoke a simpler model.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For example, a medical triage chatbot might:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Validate that symptoms fall within known categories.&lt;/li&gt;
&lt;li&gt;Use an AI model to suggest possible causes.&lt;/li&gt;
&lt;li&gt;Apply clinical rules to flag life-threatening risks.&lt;/li&gt;
&lt;li&gt;Escalate to a human doctor if the risk is high or the model is uncertain.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This layered flow ensures that each decision is made at the right level of complexity and accountability.&lt;/p&gt;
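&lt;p&gt;A minimal sketch of that triage flow, with a stub standing in for the real model - the symptom lists, thresholds, and names are all illustrative:&lt;/p&gt;

```python
KNOWN_SYMPTOMS = {"headache", "fever", "chest pain", "cough"}
RED_FLAGS = {"chest pain"}  # clinical rule: always escalate

def triage(symptoms, model, threshold=0.75):
    # 1. Pre-filter with rules: reject input outside known categories.
    unknown = [s for s in symptoms if s not in KNOWN_SYMPTOMS]
    if unknown:
        return ("escalate", f"unrecognized symptoms: {unknown}")
    # 2. Apply AI model: suggest a cause with a confidence score.
    cause, confidence = model(symptoms)
    # 3. Validate with rules: hard clinical constraints trump the model.
    if RED_FLAGS.intersection(symptoms):
        return ("escalate", "red-flag symptom present")
    # 4. Fallback or escalate on low confidence.
    if threshold > confidence:
        return ("escalate", "model uncertain")
    return ("suggest", cause)

# Stub standing in for real inference.
def stub_model(symptoms):
    return ("viral infection", 0.88 if "fever" in symptoms else 0.40)

print(triage(["fever", "cough"], stub_model))  # ('suggest', 'viral infection')
print(triage(["chest pain"], stub_model))      # ('escalate', 'red-flag symptom present')
```

&lt;p&gt;The ordering matters: the red-flag rule fires even when the model is confident, so safety-critical checks can never be out-voted by a prediction.&lt;/p&gt;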

&lt;h3&gt;&lt;em&gt;Human-in-the-Loop as a Design Pattern&lt;/em&gt;&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Graceful fallback&lt;/em&gt; doesn’t always mean reverting to rules - sometimes, it means involving humans strategically. “Human-in-the-loop” (HITL) systems use people to validate or override AI decisions in critical moments.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In document review, AI can classify and prioritize, while humans verify.&lt;/li&gt;
&lt;li&gt;In autonomous systems, humans can assume control when conditions are unclear.&lt;/li&gt;
&lt;li&gt;In support chatbots, unresolved conversations can seamlessly escalate to human agents.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key is designing &lt;strong&gt;handoffs that feel natural&lt;/strong&gt; - not as emergency patches, but as integral parts of the user experience.&lt;/p&gt;

&lt;h2&gt;Case Studies: Hybrid Design in Practice&lt;/h2&gt;

&lt;h3&gt;Case Study 1: E-commerce Recommendation Engine&lt;/h3&gt;

&lt;p&gt;A major retailer built a product recommendation system powered by deep learning. However, they faced two issues: legal restrictions on personalized offers in certain regions and customer complaints about irrelevant suggestions.&lt;/p&gt;

&lt;p&gt;The solution was a hybrid pipeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Stage 1&lt;/strong&gt;: Business rules filtered out restricted categories and enforced legal constraints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stage 2&lt;/strong&gt;: The AI model generated ranked recommendations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stage 3&lt;/strong&gt;: A rule-based validator ensured diversity and compliance before display.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fallback&lt;/strong&gt;: If the AI model produced low-confidence results, a deterministic “bestsellers” list was shown instead.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;: higher user satisfaction, full regulatory compliance, and reduced reliance on expensive model inference.&lt;/p&gt;
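&lt;p&gt;A simplified sketch of that three-stage pipeline with its bestseller fallback - the product data, diversity rule, and confidence threshold are all invented for illustration:&lt;/p&gt;

```python
RESTRICTED = {"alcohol", "tobacco"}        # assumed legal restrictions
BESTSELLERS = ["mug", "notebook", "lamp"]  # deterministic fallback list

def recommend(user, catalog, model, min_conf=0.7, min_diversity=2):
    # Stage 1: business rules filter out restricted categories up front.
    eligible = [p for p in catalog if p["category"] not in RESTRICTED]
    # Stage 2: the AI model ranks the eligible products.
    ranked, confidence = model(user, eligible)
    # Stage 3: rule-based validation (here: require category diversity).
    categories = {p["category"] for p in ranked}
    if confidence >= min_conf and len(categories) >= min_diversity:
        return [p["name"] for p in ranked]
    # Fallback: deterministic bestsellers when confidence or diversity fails.
    return BESTSELLERS

# Stub model standing in for real ranking inference.
def stub_model(user, products):
    return (products, 0.9 if len(products) >= 2 else 0.3)

catalog = [
    {"name": "wine", "category": "alcohol"},
    {"name": "mug", "category": "kitchen"},
    {"name": "lamp", "category": "home"},
]
print(recommend("u1", catalog, stub_model))  # ['mug', 'lamp']
```

&lt;p&gt;Because the restricted-category filter runs before inference, the model never even sees products it is not allowed to recommend - compliance is structural, not probabilistic.&lt;/p&gt;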

&lt;h3&gt;Case Study 2: Industrial Predictive Maintenance&lt;/h3&gt;

&lt;p&gt;An industrial IoT platform used machine learning to predict equipment failures. But false positives were costly, and false negatives were dangerous.&lt;/p&gt;

&lt;p&gt;The hybrid solution combined:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rules&lt;/strong&gt;: Safety-critical thresholds (e.g., temperature &amp;gt; 120°C) triggered immediate shutdown.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI&lt;/strong&gt;: Predictive models forecasted potential failures based on historical data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fallback&lt;/strong&gt;: If predictions were low-confidence or conflicted with sensor data, the system defaulted to conservative rule-based safety actions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Result: downtime was reduced without compromising safety - and operators trusted the system more.&lt;/p&gt;
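&lt;p&gt;The rules-first ordering is the crux of this design: the safety threshold is checked before the model is consulted at all. A minimal sketch, assuming a single temperature sensor and a probability-style failure forecast (all names and thresholds other than the 120°C limit from the case study are invented):&lt;/p&gt;

```python
SHUTDOWN_TEMP_C = 120.0  # hard safety threshold from the case study

def maintenance_action(sensor_temp_c, failure_prob, confidence, min_conf=0.8):
    # Rules first: safety-critical thresholds bypass the model entirely.
    if sensor_temp_c > SHUTDOWN_TEMP_C:
        return "immediate_shutdown"
    # AI next: act on the forecast only when the model is confident.
    if confidence >= min_conf:
        return "schedule_maintenance" if failure_prob > 0.5 else "continue"
    # Fallback: conservative rule-based action under low confidence.
    return "manual_inspection"

print(maintenance_action(130.0, 0.1, 0.99))  # immediate_shutdown
print(maintenance_action(80.0, 0.7, 0.9))    # schedule_maintenance
print(maintenance_action(80.0, 0.7, 0.4))    # manual_inspection
```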

&lt;h2&gt;Patterns for Graceful Fallback&lt;/h2&gt;

&lt;p&gt;Here are some proven design patterns you can apply in your own systems:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Default Response Pattern&lt;/strong&gt;: Provide a deterministic “safe answer” if AI is uncertain.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rule-Gated AI Pattern&lt;/strong&gt;: Use rules to constrain AI input/output within acceptable boundaries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Confidence Escalation Pattern&lt;/strong&gt;: Route low-confidence cases to human review or secondary logic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shadow Mode Pattern&lt;/strong&gt;: Run AI in parallel with existing rule-based systems, comparing outputs before full deployment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tiered Complexity Pattern&lt;/strong&gt;: Start with simple logic; escalate to more complex (AI) methods only when needed.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: These patterns are not mutually exclusive - they often work best in combination.&lt;/p&gt;
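&lt;p&gt;The Shadow Mode Pattern is worth a concrete sketch: the rule-based system keeps serving production traffic while the AI candidate runs alongside it, and disagreements are logged for offline analysis. Everything below, including the toy scoring rules, is illustrative:&lt;/p&gt;

```python
import logging

def shadow_mode(rule_system, ai_system, log=logging.getLogger("shadow")):
    """Serve the rule-based answer; run AI in parallel and log disagreements."""
    def decide(request):
        served = rule_system(request)   # production path stays rule-based
        candidate = ai_system(request)  # AI runs in shadow, never served
        if candidate != served:
            log.info("shadow mismatch for %r: rules=%r ai=%r",
                     request, served, candidate)
        return served
    return decide

# Toy systems standing in for real implementations.
rules = lambda r: "approve" if r["score"] >= 700 else "deny"
model = lambda r: "approve" if r["score"] >= 650 else "deny"

decide = shadow_mode(rules, model)
print(decide({"score": 680}))  # deny  (rules win; the mismatch is logged)
```

&lt;p&gt;Once the logged mismatch rate is low enough - and the mismatches that remain favor the model - the AI path can be promoted to serve traffic, typically behind the Confidence Escalation Pattern above.&lt;/p&gt;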

&lt;h2&gt;Organizational Mindset: Engineering for Uncertainty&lt;/h2&gt;

&lt;p&gt;Building hybrid systems is not just a technical challenge - it’s a cultural one. It requires teams to shift their mindset:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From “&lt;strong&gt;AI will solve it&lt;/strong&gt;” to “&lt;strong&gt;AI is a tool, not an oracle&lt;/strong&gt;”;&lt;/li&gt;
&lt;li&gt;From “&lt;strong&gt;deterministic vs. probabilistic&lt;/strong&gt;” to “&lt;strong&gt;deterministic and probabilistic&lt;/strong&gt;”;&lt;/li&gt;
&lt;li&gt;From “&lt;strong&gt;one-size-fits-all&lt;/strong&gt;” to “&lt;strong&gt;context-aware decision flows&lt;/strong&gt;”.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This mindset influences everything: architecture, testing, DevOps, product design, and even how you communicate capabilities to users.&lt;br&gt;
It also means measuring success differently. Instead of chasing model accuracy alone, &lt;strong&gt;focus on system reliability, user trust, and graceful degradation&lt;/strong&gt; under uncertainty.&lt;/p&gt;

&lt;h2&gt;Conclusion: Hybrid Is the Future&lt;/h2&gt;

&lt;p&gt;The era of AI-only systems is ending. As we integrate machine learning (ML) into mission-critical workflows, &lt;strong&gt;robustness matters as much as intelligence&lt;/strong&gt;. &lt;em&gt;Deliberate hybrid design&lt;/em&gt; - the thoughtful fusion of probabilistic models, deterministic logic, and human judgment - is how we get there.&lt;br&gt;
The best systems of the future will not be those that rely on AI blindly. They will be those that &lt;strong&gt;understand when to trust the model, when to fall back, and when to escalate&lt;/strong&gt;. They will embrace uncertainty not as a weakness, but as a design constraint.&lt;br&gt;
And they will succeed because of it.&lt;/p&gt;

&lt;h3&gt;Key Takeaways&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;AI is powerful but inherently uncertain; deterministic logic provides safety, transparency, and reliability.&lt;/li&gt;
&lt;li&gt;Design fallback paths intentionally - don’t bolt them on after failures occur.&lt;/li&gt;
&lt;li&gt;Use confidence scoring, layered decision flows, and human-in-the-loop mechanisms.&lt;/li&gt;
&lt;li&gt;Measure system success in terms of robustness, trust, and graceful degradation - not just model performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In software architecture, as in aviation, the most advanced autopilot is only as good as its ability to hand control back to a human pilot. Deliberate hybrid design ensures that when - not if - AI falters, the system continues to serve users reliably and safely.&lt;/p&gt;

&lt;h2&gt;Key references&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;“Hierarchical Fallback Architecture for High Risk Online Machine Learning Inference” – Gustavo Polleti et al. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Proposes a hierarchical fallback architecture for robustness in ML systems; includes fallback modes when inference fails. &lt;/li&gt;
&lt;li&gt;This aligns with the idea of switching from a neural path to fallback logic under failure conditions. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://arxiv.org/html/2501.17834v1" rel="noopener noreferrer"&gt;https://arxiv.org/html/2501.17834v1&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;“Hybrid AI Reasoning: Integrating Rule-Based Logic with LLMs” (Preprint) &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Explores blending deterministic rule logic with transformer-based models. &lt;/li&gt;
&lt;li&gt;Mentions dual-stream frameworks and fallback or validation layers. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.preprints.org/manuscript/202504.1453/v1" rel="noopener noreferrer"&gt;https://www.preprints.org/manuscript/202504.1453/v1&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;“Modular Design Patterns for Hybrid Learning and Reasoning Systems” &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Surveys many hybrid architectures, laying out patterns for mixing symbolic/rule and statistical (neural) systems. &lt;/li&gt;
&lt;li&gt;Useful for seeing how fallback or integration can appear in practical systems. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://arxiv.org/abs/2102.11965" rel="noopener noreferrer"&gt;https://arxiv.org/abs/2102.11965&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;“Taxonomy of Hybrid Architectures Involving Rule-Based Systems” &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Based on clinical decision systems in healthcare. Shows how rule-based logic is used alongside ML. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://pubmed.ncbi.nlm.nih.gov/37355025/" rel="noopener noreferrer"&gt;https://pubmed.ncbi.nlm.nih.gov/37355025/&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;“Hybrid Neuro-Symbolic Learning and Reasoning for Resilient Systems” &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mentions fallback actions conceived as “Force Minimal Operation” in hybrid systems. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.sciencedirect.com/science/article/pii/S0960148125020658" rel="noopener noreferrer"&gt;https://www.sciencedirect.com/science/article/pii/S0960148125020658&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;“Unlocking the Potential of Generative AI through Neuro-symbolic Architectures” &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Systematic study of architectures that integrate symbolic and neural components. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://arxiv.org/html/2502.11269v1" rel="noopener noreferrer"&gt;https://arxiv.org/html/2502.11269v1&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Bibliography&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Amershi, S. et al. (2019). Guidelines for Human-AI Interaction. CHI ’19. &lt;a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2019/01/Guidelines-for-Human-AI-Interaction-camera-ready.pdf" rel="noopener noreferrer"&gt;https://www.microsoft.com/en-us/research/wp-content/uploads/2019/01/Guidelines-for-Human-AI-Interaction-camera-ready.pdf&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;Varshney, K. R. (2017). Engineering Safety in Machine Learning. 2017 Information Theory and Applications Workshop. &lt;a href="https://ieeexplore.ieee.org/document/7888195" rel="noopener noreferrer"&gt;https://ieeexplore.ieee.org/document/7888195&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;Sculley, D. et al. (2015). Hidden Technical Debt in Machine Learning Systems. NeurIPS. &lt;a href="https://proceedings.neurips.cc/paper_files/paper/2015/file/86df7dcfd896fcaf2674f757a2463eba-Paper.pdf" rel="noopener noreferrer"&gt;https://proceedings.neurips.cc/paper_files/paper/2015/file/86df7dcfd896fcaf2674f757a2463eba-Paper.pdf&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;Ribeiro, M. T. et al. (2016). "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD. &lt;a href="https://arxiv.org/abs/1602.04938" rel="noopener noreferrer"&gt;https://arxiv.org/abs/1602.04938&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;Aston, J. (2024). The evolution of hybrid AI: where deterministic and statistical approaches meet. Capgemini. &lt;a href="https://www.capgemini.com/be-en/insights/expert-perspectives/the-evolution-of-hybrid-aiwhere-deterministic-and-probabilistic-approaches-meet/" rel="noopener noreferrer"&gt;https://www.capgemini.com/be-en/insights/expert-perspectives/the-evolution-of-hybrid-aiwhere-deterministic-and-probabilistic-approaches-meet/&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;Lakshmanan, V. et al. (2020). Machine Learning Design Patterns. O’Reilly. &lt;a href="https://www.oreilly.com/library/view/machine-learning-design/9781098115777/" rel="noopener noreferrer"&gt;https://www.oreilly.com/library/view/machine-learning-design/9781098115777/&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;Microsoft (2022). Responsible AI Standard v2. (Sections on fallback and human oversight). &lt;a href="https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/documents/Microsoft-Responsible-AI-Standard-General-Requirements.pdf" rel="noopener noreferrer"&gt;https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/documents/Microsoft-Responsible-AI-Standard-General-Requirements.pdf&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
