<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vignesh Durai</title>
    <description>The latest articles on DEV Community by Vignesh Durai (@vigneshjd).</description>
    <link>https://dev.to/vigneshjd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3635374%2F2aa1ce0d-d344-49f8-a6d6-dc8c0e8bbda5.jpg</url>
      <title>DEV Community: Vignesh Durai</title>
      <link>https://dev.to/vigneshjd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vigneshjd"/>
    <language>en</language>
    <item>
      <title>Demystifying AI Serving for Java Developers: Apache Camel + TensorFlow Explained</title>
      <dc:creator>Vignesh Durai</dc:creator>
      <pubDate>Mon, 19 Jan 2026 14:02:41 +0000</pubDate>
      <link>https://dev.to/vigneshjd/demystifying-ai-serving-for-java-developers-apache-camel-tensorflow-explained-46dp</link>
      <guid>https://dev.to/vigneshjd/demystifying-ai-serving-for-java-developers-apache-camel-tensorflow-explained-46dp</guid>
      <description>&lt;p&gt;Apache Camel and TensorFlow usually show up in a Java developer’s work in very different ways. Camel is familiar: it routes messages, manages APIs, and moves data between systems. TensorFlow, on the other hand, often seems distant, tied to notebooks, Python scripts, and training loops outside the JVM.&lt;/p&gt;

&lt;p&gt;It’s easy to overlook that these two technologies connect not during training, but during serving. When models are seen as long-running services instead of experiments, the gap between them gets much smaller. The main question shifts from “how do I run AI?” to “how do I integrate another service?”&lt;/p&gt;

&lt;p&gt;This change in perspective is important.&lt;/p&gt;

&lt;h2&gt;
  
  
  From model artifacts to callable services
&lt;/h2&gt;

&lt;p&gt;In most production systems, models aren’t retrained all the time. They’re trained somewhere else, packaged, and then deployed to answer the same question repeatedly. TensorFlow’s serving tools are built for this. Rather than putting model logic inside applications, trained models are exported and made available through stable endpoints.&lt;/p&gt;
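&lt;p&gt;As a concrete sketch: TensorFlow Serving exposes a REST predict endpoint of the form &lt;code&gt;/v1/models/{name}:predict&lt;/code&gt; that accepts a JSON body of instances. From the integration side, that is just JSON over HTTP. The snippet below only builds the request payload; the model name, port, and input shape are placeholders for illustration.&lt;/p&gt;

```python
import json

def predict_payload(instances):
    """Build the JSON body TensorFlow Serving's REST API expects:
    a top-level "instances" list, one entry per input."""
    return json.dumps({"instances": instances})

# Hypothetical model input: one 4-feature vector
body = predict_payload([[5.1, 3.5, 1.4, 0.2]])
# This body would be POSTed to something like
# http://tf-serving:8501/v1/models/my_model:predict
```

&lt;p&gt;The endpoint details here are assumptions; the point is the wire format. To the caller, the served model looks like any other JSON-over-HTTP service.&lt;/p&gt;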

&lt;p&gt;For Java developers, this setup quickly feels familiar. An AI model that takes a request and returns a response acts like any other backend service. It has inputs and outputs, latency, possible failures, and can be versioned, monitored, or replaced.&lt;/p&gt;

&lt;p&gt;At this stage, Camel doesn’t need to understand machine learning. It just needs to do what it does best: connect different systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where ready-made models quietly fit in
&lt;/h2&gt;

&lt;p&gt;A common misconception is that AI serving always needs custom models built from scratch. In reality, many teams start with pretrained, widely available models that already solve common problems well enough.&lt;/p&gt;

&lt;p&gt;Image classification is a good example. Models developed using large, general image datasets are often used to give basic labels to images. These labels aren’t perfect, but they provide a useful signal. In integration, that signal can help tag content, guide routing, or trigger other processes. The model itself stays a black box behind a service boundary.&lt;/p&gt;

&lt;p&gt;Object detection works in a similar way. Instead of asking “what is this image?”, the model answers “what objects are here, and about where?” Even if the results aren’t exact, they can add new metadata to messages. For Camel, this enrichment is just like calling any other external service.&lt;/p&gt;
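&lt;p&gt;A minimal sketch of that enrichment step, with the message and detection shapes as assumptions rather than any real API:&lt;/p&gt;

```python
def enrich_message(message, detections, min_score=0.5):
    """Attach detected object labels as message metadata,
    keeping only detections above a confidence floor.
    The message stays usable even when nothing is detected."""
    labels = sorted({d["label"] for d in detections if d["score"] >= min_score})
    enriched = dict(message)
    enriched["detected_objects"] = labels
    return enriched

msg = {"body": "photo-123.jpg"}
hits = [{"label": "car", "score": 0.91}, {"label": "tree", "score": 0.32}]
enrich_message(msg, hits)  # adds detected_objects: ["car"]
```

&lt;p&gt;The enriched metadata can then drive routing or downstream processing, exactly as headers from any other enrichment service would.&lt;/p&gt;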

&lt;p&gt;Text models often fit even more naturally into integration flows. Pretrained text classifiers, frequently built on transformer architectures, detect sentiment, topic, or intent in short texts. Their outputs aren’t treated as absolute truth; instead, they give helpful hints for deciding where a message should go next.&lt;/p&gt;
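&lt;p&gt;That hint-not-verdict idea can be stated in a few lines. The threshold and the fallback label below are illustrative choices, not anything prescribed by Camel or TensorFlow:&lt;/p&gt;

```python
def route_hint(label, confidence, threshold=0.8):
    """Turn a classifier output into a routing hint rather than a verdict:
    confident predictions route directly, uncertain ones go to review."""
    if confidence >= threshold:
        return label
    return "needs-review"

route_hint("billing", 0.93)   # "billing"
route_hint("billing", 0.41)   # "needs-review"
```

&lt;p&gt;In a Camel route, this is the predicate behind a content-based router: the model supplies the label and confidence, and the integration layer decides what those values mean.&lt;/p&gt;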

&lt;p&gt;These examples aren’t about the specific model design. What matters is that the models can be packaged once, served continuously, and reused, without leaking ML-specific concerns into the rest of the system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Camel’s role at the boundary, not the center
&lt;/h2&gt;

&lt;p&gt;Camel’s main value in this setup is handling the details around AI calls. It shapes requests to fit what the model expects, decides when to call the model, and manages slow responses, failures, or fallback options if inference isn’t available.&lt;/p&gt;
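&lt;p&gt;Camel expresses this with error handlers and circuit breakers; the shape of the pattern, stripped of Camel’s API, is roughly the following. The retry count and the rule-based fallback are assumptions for illustration:&lt;/p&gt;

```python
def call_with_fallback(infer, message, fallback, retries=2):
    """Try the inference service a bounded number of times;
    on repeated failure, fall back to a default rather than
    failing the whole route."""
    for _ in range(retries + 1):
        try:
            return infer(message)
        except Exception:
            continue
    return fallback(message)

# A hypothetical flaky service and a trivial fallback
def flaky(msg):
    raise TimeoutError("inference unavailable")

call_with_fallback(flaky, {"text": "hi"}, lambda m: "default-route")
# returns "default-route"
```

&lt;p&gt;The design choice worth noticing: the fallback lives in the integration layer, so the route keeps a defined behavior even when the model is down.&lt;/p&gt;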

&lt;p&gt;At this point, AI serving feels less unusual. The same patterns apply as with any other external service: content-based routing, enrichment, throttling, and retries. The model provides the intelligence, but the integration layer keeps control.&lt;/p&gt;

&lt;p&gt;Many developers find this separation comforting. The model can change on its own, the routes stay easy to read, and the whole system remains understandable. &lt;/p&gt;

&lt;h2&gt;
  
  
  A mental model that tends to stick
&lt;/h2&gt;

&lt;p&gt;It helps to think of served models as translators or classifiers, not as decision-makers. They don’t control the workflow—they just provide a signal.&lt;/p&gt;

&lt;p&gt;Camel is where that signal gets interpreted in context. If a classification is slightly unsure, it doesn’t have to stop the process—it can just guide it. Over time, this makes systems feel more flexible and less fragile.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AI serving doesn’t ask Java developers to ignore their instincts. In fact, it rewards them. Treating models as services and integrations as key design elements fits well with how large systems are usually built.&lt;/p&gt;

&lt;p&gt;Apache Camel and TensorFlow work together not because they’re from the same ecosystem, but because they respect the same boundary: intelligence on one side, orchestration on the other. When teams keep that boundary clear, AI stops being disruptive and becomes just another, though powerful, part of the infrastructure.&lt;/p&gt;

&lt;p&gt;That’s often when it becomes truly useful.&lt;/p&gt;

</description>
      <category>apachecamel</category>
      <category>tensorflow</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>The Art of Small Images: Practical Techniques for Shaving Hundreds of MB Off AI and Java Containers</title>
      <dc:creator>Vignesh Durai</dc:creator>
      <pubDate>Sat, 20 Dec 2025 23:34:16 +0000</pubDate>
      <link>https://dev.to/vigneshjd/the-art-of-small-images-practical-techniques-for-shaving-hundreds-of-mb-off-ai-and-java-containers-4m2n</link>
      <guid>https://dev.to/vigneshjd/the-art-of-small-images-practical-techniques-for-shaving-hundreds-of-mb-off-ai-and-java-containers-4m2n</guid>
      <description>&lt;p&gt;Most teams know the feeling: the container finally works, models load, the JVM starts, and endpoints respond. But then someone points out the image size—sometimes eight hundred megabytes or more. While this isn’t surprising, the large size still causes problems. It slows local development, puts pressure on CI pipelines, and quietly shapes how systems evolve.&lt;br&gt;
Eventually, another question comes up: not just whether the container runs, but whether it really needs to be this large.&lt;br&gt;
This isn’t about following strict rules or aiming for the smallest possible image. It’s about making thoughtful choices in container design. Small decisions can add up and save hundreds of megabytes, all while keeping things clear and reliable.&lt;/p&gt;

&lt;h2&gt;
  
  
  When “It Works” Becomes the Baseline
&lt;/h2&gt;

&lt;p&gt;AI and Java containers often grow large for understandable reasons. Machine learning stacks need native libraries, Python wheels with compiled extensions, CUDA dependencies, and tools for testing. Java images might include full JDKs, debugging tools, and leftover build artifacts that were once helpful but never cleaned up.&lt;br&gt;
Many of these containers are built quickly to meet deadlines, with the main goal of getting things working, which is normal for most projects. The problem appears later, when those early choices become the standard way of doing things.&lt;br&gt;
Container bloat usually isn’t caused by one big mistake. It often happens because of convenience, like installing extra system packages just in case, keeping build tools for debugging, or adding duplicate dependencies. Each choice seems fine alone, but together they add up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Layers Tell Stories—If You Read Them
&lt;/h2&gt;

&lt;p&gt;Try thinking of container layers as telling a story, not just as technical details. Each layer should answer questions like: Why is this here? When was it needed? Is it still needed?&lt;br&gt;
Multi-stage builds are often recommended as a best practice, but their main benefit is separating different purposes. Build-time dependencies, such as compilers, package managers, and test frameworks, are not the same as runtime libraries. Mixing these roles makes images larger than they need to be.&lt;br&gt;
For Java containers, this often means compiling in one stage and running in another, moving only the needed runtime files to the final image. For AI workloads, it can involve installing Python dependencies and downloading models in a builder stage, then copying just the site-packages, model files, and required binaries into a smaller base image.&lt;br&gt;
This approach leads to images that are not just smaller, but also easier to understand.&lt;/p&gt;
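&lt;p&gt;For a Java service, that split can look like the sketch below. The image tags and build command are placeholders, not a prescription:&lt;/p&gt;

```dockerfile
# Build stage: full JDK and build tooling, discarded after the build
FROM eclipse-temurin:21-jdk AS build
WORKDIR /app
COPY . .
RUN ./mvnw -q -DskipTests package

# Runtime stage: JRE only, carrying just the artifact
FROM eclipse-temurin:21-jre
WORKDIR /app
COPY --from=build /app/target/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

&lt;p&gt;Nothing from the build stage survives except what is explicitly copied, which is exactly the separation of purposes described above.&lt;/p&gt;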

&lt;h2&gt;
  
  
  Base Images as Architectural Decisions
&lt;/h2&gt;

&lt;p&gt;Teams often use base images as defaults. Ubuntu feels safe, and full distributions are familiar. However, the base image you choose affects everything that follows.&lt;br&gt;
Many teams notice that moving from a general-purpose OS image to a focused runtime image can greatly reduce container size. Distroless and slim images remove shells, package managers, and documentation that production containers rarely need. Alpine-based images have their own trade-offs, especially with native library compatibility, since Alpine uses the musl C standard library instead of the glibc used by Ubuntu, Debian, and Fedora, so teams need to check their actual dependencies when picking a base image. The point isn’t to avoid these images, but to understand what they include. Choose runtime-only variants and make sure CUDA, driver, and framework versions match, so nothing extra is added.&lt;br&gt;
The goal isn’t to make the image as small as possible. It’s about being intentional with your choices.&lt;/p&gt;
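&lt;p&gt;The same intent applied to a Python AI image, again as an illustrative sketch rather than a recipe, with the tag and paths as assumptions:&lt;/p&gt;

```dockerfile
# Builder: resolve and compile dependencies once
FROM python:3.12-slim AS builder
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Runtime: slim base plus only the installed packages and model files
FROM python:3.12-slim
COPY --from=builder /install /usr/local
COPY model/ /app/model/
COPY serve.py /app/
WORKDIR /app
CMD ["python", "serve.py"]
```

&lt;p&gt;The runtime stage never sees pip’s download cache or any build-only tooling, which is often where hundreds of megabytes hide.&lt;/p&gt;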

&lt;h2&gt;
  
  
  Dependency Discipline Over Dependency Hoarding
&lt;/h2&gt;

&lt;p&gt;Another pattern shows up when teams review their dependencies. Many containers still include libraries that were helpful during testing but were never removed. In Python, transitive dependencies can quietly increase the image size. In Java, unused modules often stay on the classpath.&lt;br&gt;
Some teams find it helpful to rebuild their dependency lists from scratch, starting with what the application really uses instead of what has built up over time. Others use tools to visualize dependency trees and decide what is still needed.&lt;br&gt;
The key is less about the tools and more about the mindset. Containers reward discipline by making extra baggage easy to spot.&lt;/p&gt;

&lt;h2&gt;
  
  
  Small Optimizations, Compounding Effects
&lt;/h2&gt;

&lt;p&gt;It’s tempting to look for a single big fix. In practice, reducing image size usually comes from many small changes, like cleaning package caches, reordering layers for better reuse, stripping symbols from binaries, or using runtime flags to skip extra components.&lt;br&gt;
Each change might seem minor by itself, but together they make a big impact. Saving a few dozen megabytes at a time adds up, and soon the container feels lighter, both in size and in purpose.&lt;br&gt;
Many developers say that once a team adopts this approach, it becomes the standard. Smaller images become the expectation, not just a one-off achievement.&lt;/p&gt;
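&lt;p&gt;Two of those small changes in Dockerfile form: ordering layers so dependency installation is cached across code-only changes, and skipping the package cache entirely:&lt;/p&gt;

```dockerfile
# Dependency manifests first: this layer is reused when only code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application code last, so edits invalidate only this layer
COPY . .
```

&lt;p&gt;Neither line is dramatic on its own, but together they shrink the image and speed up every rebuild after the first.&lt;/p&gt;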

&lt;h2&gt;
  
  
  A Different Kind of Craft
&lt;/h2&gt;

&lt;p&gt;Making containers smaller isn’t just about appearances or meeting requirements. It shows respect for build systems, runtime environments, and the engineers who will use these images later on.&lt;br&gt;
Smaller images are usually easier to understand. They make assumptions clear, encourage curiosity, and help systems feel intentional rather than accidental.&lt;br&gt;
So, cutting hundreds of megabytes from AI and Java containers isn’t just about better performance. It’s a design habit that values patience, curiosity, and the willingness to rethink old decisions that no longer fit.&lt;br&gt;
Perhaps the real skill is knowing when it’s time to review what already seems good enough.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>machinelearning</category>
      <category>containerapps</category>
    </item>
    <item>
      <title>Why Python Isn’t Enough: What Enterprises Miss When They Think of AI Only as a Data Science Problem</title>
      <dc:creator>Vignesh Durai</dc:creator>
      <pubDate>Sun, 14 Dec 2025 16:55:26 +0000</pubDate>
      <link>https://dev.to/vigneshjd/why-python-isnt-enough-what-enterprises-miss-when-they-think-of-ai-only-as-a-data-science-problem-hlb</link>
      <guid>https://dev.to/vigneshjd/why-python-isnt-enough-what-enterprises-miss-when-they-think-of-ai-only-as-a-data-science-problem-hlb</guid>
      <description>&lt;p&gt;In many organizations exploring AI, a common scene appears: a few data scientists with open notebooks, using Python libraries and training models. On the surface, it looks like progress. Code runs, accuracy improves, and it feels like something intelligent is happening.&lt;br&gt;
But after several months, the impact often seems limited.&lt;br&gt;
This is not because Python is lacking. Python is the main language for modern AI work for good reasons. It is expressive, flexible, and has a strong ecosystem that supports experimentation. However, when organizations assume AI is just data science, and data science is just Python, they often miss what is really needed to make AI valuable.&lt;br&gt;
The real gap is not technical skill, but perspective.&lt;/p&gt;

&lt;h2&gt;
  
  
  When AI Is Framed as a Notebook Activity
&lt;/h2&gt;

&lt;p&gt;For many teams, AI starts as an analytical task. They ask a question, collect data, train a model, and discuss the results. This process is similar to academic research, where it often works well.&lt;br&gt;
Problems arise when this approach is used without changes in real-world production settings.&lt;br&gt;
Notebooks are made for exploration, not for long-term use. They support trying new ideas rather than building lasting solutions. This is intentional, but it can shape how people think about AI. If AI is seen only as code and models in Python, it is easy to believe that once the model works, the main work is done.&lt;br&gt;
Many experts notice that this is where problems start. Models that seem strong on their own can have trouble in real systems. Inputs may be late, incomplete, or not quite right. Outputs often need to be interpreted, managed, or checked before they are useful. These challenges are not about the model’s quality alone, but they decide if AI adds value or not.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Overlooked Work Around the Model
&lt;/h2&gt;

&lt;p&gt;People often talk about AI as if the model is the whole system. In reality, the model is just one part of a longer process with many decisions, dependencies, and responsibilities.&lt;br&gt;
Think about what happens before a model gets data. Information needs to be collected, cleaned, filtered, and matched to the model’s training assumptions. After the model makes predictions, results often need to be combined with rules, limits, or checked by people. These steps can lead to further actions, reviews, or explanations. While these tasks may not seem like “AI work,” they affect how reliable and useful the results are.&lt;br&gt;
If organizations focus only on building models in Python, they may ignore these other important steps. This does not cause sudden failure, but it can slowly reduce trust. Systems may act unpredictably, ownership can become unclear, and teams may be unsure about using results they do not fully understand or control.&lt;br&gt;
This does not mean data science is not enough. Instead, it shows that AI sits at the crossroads of analytics, software engineering, and how organizations are structured. Python is great for one part of this, but it does not cover everything.&lt;/p&gt;
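&lt;p&gt;The work around the model can be made concrete with a small wrapper. Everything here is an assumption for illustration: the validation rule, the threshold, and the model interface itself:&lt;/p&gt;

```python
def score_with_guardrails(record, model, threshold=0.7):
    """Validate input, run the model, then apply a business rule:
    low-confidence results are flagged for human review instead of
    being acted on automatically."""
    if "amount" not in record:
        return {"decision": "rejected-input", "score": None}
    score = model(record)
    if score >= threshold:
        return {"decision": "auto", "score": score}
    return {"decision": "human-review", "score": score}

# A stand-in model for illustration
result = score_with_guardrails({"amount": 120}, lambda r: 0.55)
# result["decision"] is "human-review"
```

&lt;p&gt;The model is one line of this function; the rest is the system work that determines whether its output can be trusted and acted on.&lt;/p&gt;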

&lt;h2&gt;
  
  
  Shifting From Models to Systems
&lt;/h2&gt;

&lt;p&gt;Many teams eventually realize that AI is not just a feature, but a system capability. It changes, can become less effective, and interacts with its environment in ways that regular code does not.&lt;br&gt;
This new way of thinking changes the questions leaders ask. They start to look beyond just model performance and consider how models are tracked, how decisions are recorded, and how problems are found. Reliability, explainability, and adaptability become real, practical issues.&lt;br&gt;
These questions are not just for data scientists. They also need skills from platform engineering, product management, and operations. Teams need a common language to work together. Python is still important, but it is not the only tool needed.&lt;br&gt;
Some organizations add new processes and tools to their current workflows. Others change how they organize AI work completely. In both cases, progress usually comes from understanding what Python can and cannot do, not from replacing it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Lessons That Emerge
&lt;/h2&gt;

&lt;p&gt;Certain patterns come up often when teams go through this change. One is that AI decisions are rarely made in isolation; they are part of larger processes that need careful design. Another is the need for clear responsibility—knowing who manages the model over time and who steps in when things change.&lt;br&gt;
People are also starting to value the non-technical parts of AI systems. Good communication, clear documentation, and shared understanding can be just as important as advanced algorithms. Sometimes, a model that delivers slightly lower accuracy but behaves consistently and transparently proves more valuable in real-world use than a higher-performing model that produces confusing or unpredictable outcomes for its users.&lt;br&gt;
These are not strict rules, but lessons that appear when AI is used in daily work instead of just experiments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Beyond the Comfort of Familiar Tools
&lt;/h2&gt;

&lt;p&gt;Python will continue to be a key part of AI work for a long time. Its importance is not decreasing. What is changing is the idea that Python alone can handle all of enterprise AI.&lt;br&gt;
If organizations see AI only as a data science task, they may miss the factors that help AI work well in complex settings. Models do not work alone. They are part of systems shaped by people, processes, and limits that code by itself cannot solve.&lt;br&gt;
The best AI results often come when teams look at the bigger picture. Instead of just asking how to make better models, they ask how AI fits into their existing systems.&lt;br&gt;
With this wider view, Python is still important, but it is not the only part of the story.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>softwareengineering</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Why Leaders Are Looking Beyond MLOps Toward Intelligence-Driven Operations</title>
      <dc:creator>Vignesh Durai</dc:creator>
      <pubDate>Tue, 09 Dec 2025 03:35:17 +0000</pubDate>
      <link>https://dev.to/vigneshjd/why-leaders-are-looking-beyond-mlops-toward-intelligence-driven-operations-4p5o</link>
      <guid>https://dev.to/vigneshjd/why-leaders-are-looking-beyond-mlops-toward-intelligence-driven-operations-4p5o</guid>
      <description>&lt;p&gt;Across many technical leadership conversations, there’s a sense that MLOps alone no longer captures the full complexity of modern AI systems. For years, discussions about AI maturity have circled the same milestones: experiment, train, deploy, monitor, repeat. The narrative often frames machine learning pipelines like assembly lines—steady, predictable, and mostly concerned with keeping the machinery well-oiled. Yet the landscape around those pipelines continues to shift. Models are no longer isolated artifacts; they sit inside systems that behave less like factories and more like living ecosystems.&lt;br&gt;
As teams explore this new terrain, a quiet idea has been emerging in discussions, research meetups, and architectural sketches on whiteboards: the notion that we may be heading toward something broader than MLOps or even model orchestration. A number of practitioners have begun referring to it, informally, as Intelligence Operations—not as a formal discipline, but as a direction of thinking that tries to account for how AI behaves when it’s deeply woven into complex software environments.&lt;br&gt;
This article is an attempt to map the edges of that idea, not to define it. Definitions tend to age quickly in this field; patterns, however, linger.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Moment When Pipelines Start Feeling Too Small
&lt;/h2&gt;

&lt;p&gt;If you talk to engineers who have been building ML systems for a while, a familiar story tends to surface. A team begins with training workflows, optimizes them, introduces CI/CD for models, then adds monitoring, drift checks, lineage tracking, and eventually some form of orchestration. Everything appears under control—until the system needs to interact with unpredictable real-world inputs.&lt;br&gt;
A model might rely on data that shifts without warning. A prompt-based component may require adjustments based on user behavior rather than algorithmic metrics. A routing layer may need to decide which model, or which combination of models, should handle which query. And suddenly, the idea of “operations” stretches beyond pipelines.&lt;br&gt;
This is often the point where teams realize they’re not just running models. They’re managing intelligence flows—systems that sense, interpret, and respond, sometimes in ways that don’t fit neatly within a DAG or a versioned artifact.&lt;br&gt;
Some engineers describe this shift as moving from maintaining machinery to curating an evolving environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Orchestration Isn’t the Final Step
&lt;/h2&gt;

&lt;p&gt;Many people use orchestration as shorthand for “coordination of tasks.” In classical data engineering, that usually means scheduling jobs or linking steps in a process. In AI systems, orchestration grows into something more fluid. It needs to adapt, negotiate, and occasionally improvise.&lt;br&gt;
Several teams report that once they integrate large models or multi-model strategies, orchestration tools start carrying a burden they weren’t originally designed for. The system doesn’t just move data and models around; it mediates between different forms of intelligence—statistical, symbolic, retrieval-based, prompt-based. It decides how they interact and when to override them.&lt;br&gt;
This is where the idea of Intelligence Operations starts taking shape in practice, even if no one is using the phrase explicitly. It describes a pattern where orchestration is only one part of a broader system that includes interpretation, context management, feedback loops, and dynamic policy boundaries.&lt;br&gt;
Instead of thinking, “How do we deploy this model consistently?” the question becomes, “How do we ensure that all intelligent components interact responsibly and coherently over time?”&lt;/p&gt;

&lt;h2&gt;
  
  
  The Shape of Intelligence Operations (As It Appears Today)
&lt;/h2&gt;

&lt;p&gt;Since this isn’t a formal discipline, it doesn’t come with a checklist. But some recurring tendencies are noticeable in the way advanced AI systems are now designed.&lt;br&gt;
One common observation is that teams are beginning to treat intelligence as a composite structure rather than a single model. They build flows where retrieval steps influence generation, where smaller models filter or summarize for larger ones, or where a simple rule engine acts as the guardrail around a complex reasoning module.&lt;br&gt;
Another recurring pattern is the elevation of feedback—not in the narrow sense of supervised labels, but in the broader sense of “how the system learns from its own behavior.” Logs, human review moments, preference signals, and policy violations often form feedback loops that guide how the system adapts. Intelligence Operations, in this view, becomes partly about tending those loops so the system does not drift into unhelpful or unpredictable territory.&lt;br&gt;
There is also a growing focus on context governance, especially as models rely more on retrieved knowledge, user inputs, or chain-of-thought mechanisms. Some engineers describe this as shaping the “cognitive boundary” of the system. Not restricting capability, but defining conditions under which the system should reason, recall, transform, or refuse.&lt;br&gt;
None of this replaces MLOps. It builds on top of it, just as MLOps once built on traditional DevOps.&lt;/p&gt;
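&lt;p&gt;One of those composite shapes, a cheap rule layer guarding a more expensive reasoning component, can be sketched minimally. The blocked topics and the model stub are invented for illustration:&lt;/p&gt;

```python
BLOCKED_TOPICS = {"credentials", "payment-card"}

def guarded_answer(query, topic, reason_fn):
    """Rule engine in front of a reasoning module: policy checks
    run first, and the model is only consulted when the query is
    inside the allowed boundary."""
    if topic in BLOCKED_TOPICS:
        return "refused-by-policy"
    return reason_fn(query)

guarded_answer("how do I reset it?", "credentials", lambda q: "model answer")
# "refused-by-policy"
```

&lt;p&gt;The guardrail defines the cognitive boundary described above: it doesn’t restrict what the model can do, only the conditions under which it is asked to do it.&lt;/p&gt;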

&lt;h2&gt;
  
  
  Living Systems Require Living Practices
&lt;/h2&gt;

&lt;p&gt;A useful way to think about the transition is to look at how teams respond when a model’s behavior diverges from expectations. In classical pipelines, the instinct is to retrain. In modern systems, retraining is only one of several levers.&lt;br&gt;
Some teams experiment with prompt adjustments, context selection rules, gating mechanisms, or small-scale patches that influence behavior without altering the underlying weights. Others introduce multiple reasoning steps and allow the system to critique its own output before returning a result. These may sound like small implementation details, but together they suggest that intelligence in production is no longer static or monolithic. It’s more like a conversation between components.&lt;br&gt;
When a system behaves more like a conversation, the operational mindset naturally shifts as well. You stop thinking only about artifacts and start thinking about interactions.&lt;br&gt;
Many architects describe this as a kind of stewardship—guiding the system rather than merely deploying it. The vocabulary might change in the future, but the sentiment seems to be spreading.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Intelligence Operations Could Become
&lt;/h2&gt;

&lt;p&gt;If this direction continues, Intelligence Operations might eventually resemble a synthesis of several existing disciplines: MLOps, data engineering, model governance, prompt management, retrieval tuning, and human oversight patterns. But it may also introduce practices that do not exist yet, especially as models gain more autonomy in making low-level decisions.&lt;br&gt;
One interpretation is that Intelligence Operations could evolve into a practice focused on coherence—ensuring that the system behaves consistently across components, contexts, and time. Another interpretation is that it may act as the bridge between human intention and machine reasoning, translating goals into adjustable, observable behaviors.&lt;br&gt;
Whatever shape it takes, it seems to emerge from a shared recognition that models alone are no longer the story. The system that surrounds them—the flow of intelligence—is where much of the engineering and design energy is now going.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Closing Reflection
&lt;/h2&gt;

&lt;p&gt;If MLOps helped teams industrialize machine learning, and orchestration helped them structure AI workflows, Intelligence Operations hints at a broader shift: the move from managing artifacts to shaping behaviors. It approaches AI not as a set of tasks to automate but as an ecology to guide.&lt;br&gt;
Perhaps that is the natural next step. As models become more capable and systems more interconnected, the real challenge isn’t building intelligence—it’s cultivating it.&lt;br&gt;
Not with rigid tools or grand theories, but with the kind of steady, thoughtful engineering that grows from paying attention to how systems evolve, and from being willing to adjust the environment when they do.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mlops</category>
      <category>machinelearning</category>
      <category>softwareengineering</category>
    </item>
  </channel>
</rss>
