<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Gorkem Ercan</title>
    <description>The latest articles on DEV Community by Gorkem Ercan (@gorkem).</description>
    <link>https://dev.to/gorkem</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1408438%2F3faaf290-6e8c-4e59-bde6-f949637347c0.jpg</url>
      <title>DEV Community: Gorkem Ercan</title>
      <link>https://dev.to/gorkem</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gorkem"/>
    <language>en</language>
    <item>
      <title>Securing MCP: Applying Lessons Learned from the Language Server Protocol</title>
      <dc:creator>Gorkem Ercan</dc:creator>
      <pubDate>Fri, 28 Mar 2025 18:37:07 +0000</pubDate>
      <link>https://dev.to/jozu/securing-mcp-applying-lessons-learned-from-the-language-server-protocol-338</link>
      <guid>https://dev.to/jozu/securing-mcp-applying-lessons-learned-from-the-language-server-protocol-338</guid>
      <description>&lt;p&gt;I was deeply involved with the Language Server Protocol (&lt;a href="https://microsoft.github.io/language-server-protocol/" rel="noopener noreferrer"&gt;LSP&lt;/a&gt;) from its earliest days at Red Hat, one of the instrumental organizations in driving LSP adoption. During that time, I contributed to several key implementations, including the second-ever language server—the &lt;a href="https://github.com/eclipse-jdtls/eclipse.jdt.ls" rel="noopener noreferrer"&gt;Java Language Server&lt;/a&gt;—and the widely adopted &lt;a href="https://github.com/redhat-developer/yaml-language-server" rel="noopener noreferrer"&gt;YAML Language Server&lt;/a&gt;. These projects became benchmarks for reliability and widespread adoption in developer communities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why MCP Matters
&lt;/h2&gt;

&lt;p&gt;Given my experience with LSP, I’m enthusiastic about the growing interest in the Model Context Protocol (&lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;MCP&lt;/a&gt;). However, I am concerned that the valuable lessons learned from LSP are not being effectively applied to MCP.&lt;/p&gt;

&lt;p&gt;When LSP emerged, it transformed programming language tooling. Specifically, it allowed language experts to implement sophisticated, language-specific intelligence consistently across different IDEs and editors. LSP created an abstraction enabling the same compiler development teams to directly support any IDE or editor.&lt;/p&gt;

&lt;p&gt;MCP provides an analogous abstraction between AI tools and agents and their computing environments. However, the type of abstraction provided by LSP—deep, specialized programming language expertise—is significantly more complex to integrate and replicate compared to the API interactions primarily targeted by MCP. This difference currently makes MCP’s value proposition lower than that of LSP, which raises ongoing questions about whether MCP provides substantial value beyond existing APIs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Critical Risks with Current MCP Implementations
&lt;/h2&gt;

&lt;p&gt;Unfortunately, MCP carries forward several critical shortcomings that were also issues with LSP. One significant oversight with LSP was the lack of standardized packaging. Visual Studio Code—the hero product driving LSP adoption—provided its own method for packaging extensions, but this approach was not easily transferable to other platforms. The absence of standardized, secure packaging made LSP implementations vulnerable to supply chain attacks. Even VS Code’s extension packaging was not originally designed with supply chain security in mind, proving &lt;a href="https://www.techradar.com/pro/security/vscode-extensions-pulled-over-security-risks-but-millions-of-users-have-already-installed" rel="noopener noreferrer"&gt;vulnerable&lt;/a&gt; at times.&lt;/p&gt;

&lt;p&gt;The risk is even greater with MCP because of its broader potential access to, and integration with, critical systems. Organizations face significant security risks if they adopt MCP implementations directly from third-party sources without a robust packaging solution that includes secure attestations and digital signatures.&lt;/p&gt;

&lt;p&gt;Additionally, LSP was designed to operate in single-user desktop environments without built-in multi-tenancy, a choice that simplifies implementation but limits use in cloud environments. The lack of multi-tenancy poses a much larger challenge for MCP, as MCP implementations are more likely to run in multi-tenant environments requiring robust authentication and authorization.&lt;/p&gt;

&lt;p&gt;Without addressing these critical issues related to packaging, secure supply chains, multi-tenancy, authentication, and authorization, the overall value and viability of MCP will continue to be questioned.&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://jozu.com/" rel="noopener noreferrer"&gt;Jozu&lt;/a&gt;, we are uniquely positioned to address these critical MCP adoption challenges. With extensive experience gained from pioneering work on LSP and our development of &lt;a href="https://kitops.org/" rel="noopener noreferrer"&gt;KitOps&lt;/a&gt;—a proven open-source solution trusted by enterprises for securely packaging and deploying AI/ML workloads—we are prepared to solve MCP’s most pressing security and packaging issues. Partnering with us will help your organization significantly reduce exposure to supply chain risks while accelerating secure MCP adoption.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your Opportunity: Become a Design Partner
&lt;/h2&gt;

&lt;p&gt;We’re currently seeking a limited number of design partners to join us in shaping the future of MCP. As a design partner, you’ll gain exclusive access to our solution, have direct influence on product direction, and receive expert guidance on securely implementing MCP in your organization.&lt;/p&gt;

&lt;p&gt;Spots are limited—&lt;a href="mailto:gorkeml@jozu.com"&gt;contact us&lt;/a&gt; today to secure your position.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>programming</category>
    </item>
    <item>
      <title>The KitOps Methodology</title>
      <dc:creator>Gorkem Ercan</dc:creator>
      <pubDate>Fri, 14 Mar 2025 16:29:53 +0000</pubDate>
      <link>https://dev.to/jozu/the-kitops-methodology-4c15</link>
      <guid>https://dev.to/jozu/the-kitops-methodology-4c15</guid>
      <description>&lt;p&gt;In the ever-evolving world of AI and machine learning, the path from model conception to deployment is full of challenges. The KitOps methodology is designed to guide teams through this complex journey with a focus on security, reproducibility, and transparency. The KitOps methodology streamlines the entire AI lifecycle by offering a unified, OCI-compliant framework that bridges the gap between development, packaging, and deployment. This approach not only simplifies collaboration but also empowers teams to innovate without sacrificing clarity or security. KitOps creates an environment where data scientist, a DevOps engineer, or an application developer, KitOps helps bridge the gap between model creation, versioning, and operationalization, all while maintaining transparency, security, and modularity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Core Principles
&lt;/h2&gt;

&lt;p&gt;Secure, Immutable Versioning and Provenance&lt;br&gt;
At the heart of KitOps is the idea of secure, immutable versioning. Each model version is encapsulated as a single, immutable entity that includes code, data, documentation, and configurations. This guarantees:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consistency:&lt;/strong&gt; Every component of a given model version is stored together, ensuring full reproducibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traceability:&lt;/strong&gt; Comprehensive attestations and provenance details make it easy to track changes and verify the authenticity of each model version.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integrity and Accountability:&lt;/strong&gt; Immutability prevents unauthorized modifications and supports compliance with DevSecOps best practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Separation of Concerns
&lt;/h2&gt;

&lt;p&gt;KitOps advocates for a clear separation between model artifacts and infrastructure dependencies. This principle helps teams maintain:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Modularity:&lt;/strong&gt; Models remain independent units, which simplifies updates and reduces the risk of conflicts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simplicity:&lt;/strong&gt; Teams can focus on improving models without being entangled in infrastructure-level complexities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enhanced Maintenance:&lt;/strong&gt; Updating models and infrastructure independently prevents unintended breakage and simplifies long-term maintenance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Components
&lt;/h2&gt;

&lt;h3&gt;
  
  
  ModelKit
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;ModelKit&lt;/strong&gt; is an OCI-compliant packaging format that contains all the essential artifacts of the AI/ML model lifecycle. This includes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Datasets:&lt;/strong&gt; Comprehensive collections of training, validation, and test data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code:&lt;/strong&gt; All logic required for model training, inference, and deployment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configurations:&lt;/strong&gt; Environment variables, hyperparameters, and deployment settings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documents:&lt;/strong&gt; Detailed records and guides related to the model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model Artifacts:&lt;/strong&gt; Serialized model weights and associated metadata.&lt;/p&gt;

&lt;p&gt;This standardized packaging ensures that models can be easily shared, audited, and redeployed, fostering a collaborative and transparent workflow.&lt;/p&gt;

&lt;h3&gt;
  
  
  OCI Registry
&lt;/h3&gt;

&lt;p&gt;An OCI Registry, compatible with Open Container Initiative standards, serves as a centralized repository for storing and distributing OCI artifacts like ModelKits and container images. Its benefits include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Standardization:&lt;/strong&gt; Consistent management and access to model artifacts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration:&lt;/strong&gt; Direct compatibility with common CI/CD, MLOps, and DevOps tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security:&lt;/strong&gt; Hardened storage and secure artifact transmission, enhancing overall supply chain integrity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kitfile
&lt;/h3&gt;

&lt;p&gt;The Kitfile is a YAML-based configuration file that precisely defines the contents of a ModelKit. With a Kitfile, teams can ensure:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repeatability:&lt;/strong&gt; Consistent model packages across different environments and teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Governance:&lt;/strong&gt; A clear and auditable record of the artifacts included in each ModelKit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simplicity:&lt;/strong&gt; One central place to specify datasets, code, configurations, and documentation artifacts.&lt;/p&gt;
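&lt;p&gt;As a concrete sketch, a minimal Kitfile might look like the following. The field layout follows the KitOps documentation, but the package name and artifact paths here are hypothetical:&lt;/p&gt;

```yaml
manifestVersion: "1.0"
package:
  name: churn-predictor            # hypothetical project name
  version: 1.0.0
  description: Example customer-churn model package
model:
  name: churn-model
  path: ./model.joblib             # serialized model weights
datasets:
  - name: training
    path: ./data/train.csv
code:
  - path: ./src                    # training and inference code
docs:
  - path: ./README.md
```

&lt;p&gt;Because the Kitfile enumerates every artifact, anyone pulling the resulting ModelKit receives exactly the datasets, code, and configuration that were packaged together.&lt;/p&gt;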

&lt;h3&gt;
  
  
  Kit CLI and PyKitOps
&lt;/h3&gt;

&lt;p&gt;The Kit CLI and the PyKitOps library are powerful tools that enable users to create, manage, run, and deploy ModelKits. Whether you are packaging a new model for development or deploying an existing model into production, these tools simplify your workflow and accelerate your innovation cycle.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Create or Generate a Kitfile:&lt;/strong&gt;&lt;br&gt;
Begin by specifying which documents, code, datasets, configurations, and serialized model weights should be included. Early stages might focus on datasets and code, while production-ready models include comprehensive elements such as weights, validation data, API code, and even infrastructure-as-code recipes like Terraform scripts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Package the ModelKit:&lt;/strong&gt;&lt;br&gt;
Use the &lt;code&gt;kit pack&lt;/code&gt; command to bundle your Kitfile's artifacts into a ModelKit. This package acts as a single source of truth, simplifying collaboration, auditing, and distribution among stakeholders.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Push to a Registry:&lt;/strong&gt;&lt;br&gt;
Push your ModelKit to an OCI-compatible registry (e.g., Jozu Hub) to store, manage, and share it securely. This ensures that your team—across various regions and environments—has consistent and secure access to the model artifacts.&lt;/p&gt;
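&lt;p&gt;The pack-and-push flow above can be sketched with the Kit CLI. The &lt;code&gt;kit pack&lt;/code&gt;, &lt;code&gt;kit push&lt;/code&gt;, and &lt;code&gt;kit pull&lt;/code&gt; commands are part of the Kit CLI, but the registry address, repository, and tag below are placeholders:&lt;/p&gt;

```shell
# Bundle the artifacts listed in ./Kitfile into a tagged ModelKit
kit pack . -t registry.example.com/my-org/churn-predictor:v1.0.0

# Push the ModelKit to an OCI-compatible registry
kit push registry.example.com/my-org/churn-predictor:v1.0.0

# On another machine or pipeline stage, retrieve the same ModelKit
kit pull registry.example.com/my-org/churn-predictor:v1.0.0
```

&lt;p&gt;Because the tag identifies an immutable package, every environment that pulls it works from the same model, data, and configuration.&lt;/p&gt;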

&lt;p&gt;&lt;strong&gt;Use Automated Processing:&lt;/strong&gt;&lt;br&gt;
Leverage automation to handle the ModelKit for various tasks such as deployment, training, evaluation, or integration into downstream applications. Automated pipelines ensure consistency and rapid iteration, allowing teams to quickly adapt models to evolving requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of the KitOps Methodology
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Efficiency:&lt;/strong&gt; Streamlined management of artifacts and distribution processes reduces friction and accelerates innovation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security and Compliance:&lt;/strong&gt; Strong governance, auditing, and immutability measures ensure that every change is traceable and compliant with industry standards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability:&lt;/strong&gt; As models, datasets, and related resources expand, KitOps scales gracefully, maintaining uniform standards and practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The KitOps methodology represents a modern, secure, and reliable approach to managing AI/ML assets. By pairing well-defined artifacts with standardized tooling—supported by OCI registries and the Kit CLI—teams can confidently develop, test, share, and deploy models. In an era where rapid iteration and continuous improvement are key, KitOps not only enhances technical efficiency but also nurtures a culture of innovation and accountability.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>machinelearning</category>
      <category>programming</category>
      <category>learning</category>
    </item>
    <item>
      <title>Unifying Documentation and Provenance for AI and ML: A Developer’s Guide to Navigating the Chaos</title>
      <dc:creator>Gorkem Ercan</dc:creator>
      <pubDate>Thu, 17 Oct 2024 14:32:58 +0000</pubDate>
      <link>https://dev.to/jozu/unifying-documentation-and-provenance-for-ai-and-ml-a-developers-guide-to-navigating-the-chaos-1n1o</link>
      <guid>https://dev.to/jozu/unifying-documentation-and-provenance-for-ai-and-ml-a-developers-guide-to-navigating-the-chaos-1n1o</guid>
      <description>&lt;p&gt;In the fast-paced, constantly evolving world of artificial intelligence (AI) and machine learning (ML), you might expect there to be a well-defined standard for something as critical as model documentation. Yet, the current reality is far from expectation. While AI model documentation tools like Model Cards were meant to streamline accountability and transparency, we’ve instead landed in a fragmented space that lacks consistency.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is a Model Card?
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;Model Card&lt;/strong&gt; is a standardized documentation framework designed to provide essential information about a machine learning (ML) model, including its attributes, performance metrics, and ethical considerations. Model Cards help developers, researchers, and end-users better understand the model's intended use, its limitations, and any potential risks or biases associated with it. This documentation aims to improve transparency, accountability, and trust in AI and ML systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Information in a Model Card:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Model Overview&lt;/strong&gt;: A description of the model, its architecture, and its intended use case.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: Detailed metrics on how the model performs across different datasets, environments, or user demographics.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ethical Considerations&lt;/strong&gt;: Information on potential biases in the model and any fairness or safety concerns.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Training Data&lt;/strong&gt;: Description of the data used to train the model, including its provenance, size, and any preprocessing steps.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limitations&lt;/strong&gt;: Clear details about where and how the model should not be used, including scenarios where it might fail.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Model Cards were introduced in a 2019 paper by &lt;strong&gt;Margaret Mitchell&lt;/strong&gt; and her collaborators at Google AI. The idea emerged from the recognition that machine learning models, especially those deployed in real-world applications, often have far-reaching ethical and societal implications. Without clear and transparent documentation, these models can be misused or misunderstood, potentially leading to harmful outcomes, such as biased predictions or unfair decision-making processes.&lt;/p&gt;

&lt;p&gt;The paper proposed Model Cards as a way to address these challenges, by offering a standardized and accessible format for documenting models. It drew inspiration from &lt;strong&gt;nutrition labels&lt;/strong&gt;, which provide clear and consistent information to consumers about the contents of food products. Similarly, Model Cards are intended to serve as "nutrition labels" for ML models, offering critical details in a standardized and understandable format. In reality, it’s a bit more complex. &lt;/p&gt;

&lt;h3&gt;
  
  
  The Model Card Maze
&lt;/h3&gt;

&lt;p&gt;Model Cards, in theory, are straightforward. They’re designed to offer clear, standardized documentation on the attributes, performance, and ethical considerations of machine learning models. The idea behind them is solid—a one-size-fits-all tool for explaining how models work and their implications.&lt;/p&gt;

&lt;p&gt;However, in practice, Model Cards have taken on multiple forms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://huggingface.co/spaces/huggingface/Model_Cards_Writing_Tool" rel="noopener noreferrer"&gt;&lt;strong&gt;HuggingFace&lt;/strong&gt;&lt;/a&gt; uses YAML frontmatter and Markdown for its Model Cards.
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/model-cards.html#model-cards-json-schema" rel="noopener noreferrer"&gt;&lt;strong&gt;AWS SageMaker&lt;/strong&gt;&lt;/a&gt; employs a JSON schema.
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://report.verifyml.com/model_card" rel="noopener noreferrer"&gt;&lt;strong&gt;VerifyML&lt;/strong&gt;&lt;/a&gt; has its own unique spin on the format.
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/tensorflow/model-card-toolkit/blob/main/model_card_toolkit/schema/v0.0.2/model_card.schema.json" rel="noopener noreferrer"&gt;&lt;strong&gt;Google&lt;/strong&gt;&lt;/a&gt;? They follow a different JSON schema entirely.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And that’s not even touching on the &lt;a href="https://arxiv.org/abs/1810.03993" rel="noopener noreferrer"&gt;original Model Card proposal from the foundational paper&lt;/a&gt;. The variation across platforms is not just about different tastes or minor tweaks. These differences reflect deeper structural and intent-based divergences. HuggingFace's Markdown-driven simplicity is very different from SageMaker's JSON schema-based precision, and that disparity matters. Developers trying to adhere to best practices for AI accountability are left grappling with a lack of coherence.&lt;/p&gt;
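&lt;p&gt;To make the divergence concrete, a HuggingFace-style card is a Markdown file with machine-readable metadata in YAML frontmatter. The field names below follow HuggingFace's model card metadata conventions; the model, dataset, and metric values are illustrative:&lt;/p&gt;

```markdown
---
license: apache-2.0
language: en
tags:
  - text-classification
datasets:
  - imdb
metrics:
  - accuracy
---

# Model Card for my-sentiment-model

## Intended Use
A short description of supported use cases and known limitations.
```

&lt;p&gt;SageMaker and the Google toolkit express the same kind of information as JSON validated against a schema, which is why cards rarely transfer cleanly from one platform to another.&lt;/p&gt;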

&lt;h3&gt;
  
  
  Model Cards Are More Than Just Documentation
&lt;/h3&gt;

&lt;p&gt;These differences aren’t just a matter of aesthetics. Model Cards play a critical role in ensuring compliance with a growing web of AI regulations, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;EU AI Act&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;NIST AI Risk Management Framework (RMF)&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ISO 42001&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These regulations require robust documentation, and without a unified standard, developers are left to navigate this growing regulatory minefield without clear guidance. The result? Increased risk of non-compliance, and potentially, the perpetuation of biased or unsafe AI systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  SBOMs: A Glimmer of Hope for Standardization
&lt;/h3&gt;

&lt;p&gt;But all is not lost. Amid the chaos, there’s a promising development: &lt;strong&gt;SBOM formats&lt;/strong&gt; (Software Bill of Materials) like &lt;strong&gt;SPDX 3.0&lt;/strong&gt; and &lt;strong&gt;CycloneDX&lt;/strong&gt;. While not originally created for AI, these formats have started to incorporate AI models and datasets. This is a crucial step forward because SBOMs are a logical solution to providing the standardization that Model Cards are currently lacking, and they are already commonplace in software development practices.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why SBOMs Matter for AI
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Comprehensive Coverage&lt;/strong&gt;: SBOMs can include both models and data, giving developers a more complete view of their AI systems.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standardization&lt;/strong&gt;: With a unified format like SPDX 3.0 or CycloneDX, we could bridge the gap left by the fragmented Model Card landscape.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provenance and Trust&lt;/strong&gt;: SBOMs offer a way to trace the lineage of AI models—what they do, where they came from, how they were trained, and under what conditions they should be used.&lt;/li&gt;
&lt;/ul&gt;
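&lt;p&gt;As an illustration, CycloneDX (since spec version 1.5) defines a &lt;code&gt;machine-learning-model&lt;/code&gt; component type with an embedded &lt;code&gt;modelCard&lt;/code&gt; object. The sketch below uses field names from that schema with hypothetical values:&lt;/p&gt;

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "machine-learning-model",
      "name": "churn-predictor",
      "version": "1.0.0",
      "modelCard": {
        "modelParameters": {
          "task": "classification"
        },
        "considerations": {
          "ethicalConsiderations": [
            {
              "name": "Training-data bias",
              "mitigationStrategy": "Re-balanced sampling across demographics"
            }
          ]
        }
      }
    }
  ]
}
```

&lt;p&gt;Because the same document can also list software dependencies and datasets, a single SBOM can describe a model and its full supply chain together.&lt;/p&gt;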

&lt;h3&gt;
  
  
  A Path Forward
&lt;/h3&gt;

&lt;p&gt;The inclusion of AI models in SBOM standards like SPDX 3.0 and CycloneDX is a critical advancement. If these formats gain widespread adoption, they could provide the transparency and accountability that the AI industry so desperately needs. This isn’t just about technical improvements—embracing SBOMs is a moral imperative to ensure that AI is developed and deployed ethically and transparently.&lt;/p&gt;

&lt;p&gt;In the end, the future of AI documentation depends on our ability to standardize and unify our approaches. It’s time for the industry to rally around SBOMs and adopt standards like SPDX 3.0 and CycloneDX, before the lack of coherence in documentation leads us down a risky path.&lt;/p&gt;

&lt;p&gt;Let’s not wait for regulations, like the &lt;a href="https://jozu.com/blog/10-mlops-tools-that-comply-with-the-eu-ai-act" rel="noopener noreferrer"&gt;EU AI Act&lt;/a&gt;, to force our hand. The time to act is now.&lt;/p&gt;

</description>
      <category>security</category>
      <category>programming</category>
      <category>ai</category>
      <category>devops</category>
    </item>
    <item>
      <title>Why Devs Love Open Source KitOps–Tales from the ML Trenches</title>
      <dc:creator>Gorkem Ercan</dc:creator>
      <pubDate>Thu, 29 Aug 2024 16:51:12 +0000</pubDate>
      <link>https://dev.to/jozu/why-devs-love-open-source-kitops-tales-from-the-ml-trenches-23pa</link>
      <guid>https://dev.to/jozu/why-devs-love-open-source-kitops-tales-from-the-ml-trenches-23pa</guid>
      <description>&lt;p&gt;In the world of AI/ML there are a lot of puff pieces singing the latest technical innovation. Most of the time, these innovations aren’t being used outside of a cadre of scientists who have adopted. In contrast, we put together this article to share how a real user at a real company is using KitOps - and explain, in stark terms, why KitOps is the only solution that meets their needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Folly of TARs and S3
&lt;/h2&gt;

&lt;p&gt;First, let's address the elephant in the room: TAR files. Yes, they’re handy little bundles of joy, squeezing your artifacts into neat, portable packages. But that’s where the honeymoon ends. One user, Niklas, an engineer at a German federal technology company, broke it down for us with the kind of brutal honesty that only comes from experience: “&lt;em&gt;S3 and GitLFS are like the wild west—anything goes, and that’s precisely the problem, because both fall short.&lt;/em&gt;”&lt;br&gt;
&lt;strong&gt;Not Tamper-Proof:&lt;/strong&gt; Without immutability, there's no guarantee your artifacts haven’t been tampered with. Good luck explaining that to your compliance officer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lack of Auditability:&lt;/strong&gt; When it’s time to trace a decision back to its source, TARs and S3 aren’t much help. New AI regulation, Niklas points out, "requires securing the integrity and authenticity of release artifacts." How’s that supposed to happen when your artifacts are floating around, unchained and unverified?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No Easy Tagging:&lt;/strong&gt; Champion vs. challenger models, semantic versioning—good luck implementing those without the right tools. TARs don’t do it, and neither does S3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Poor Metadata Handling:&lt;/strong&gt; Sure, your artifact might have a name, but does it tell the whole story? What about the additional metadata that’s crucial for downstream processes?&lt;br&gt;
&lt;strong&gt;Inconsistent Supply Chain:&lt;/strong&gt; "Everything is in Artifactory," Niklas notes. "So why not store ML artifacts there too?" It’s about consistency, and that’s not something TARs or S3 can deliver.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Power of KitOps
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kitops.ml" rel="noopener noreferrer"&gt;KitOps&lt;/a&gt;, by contrast, doesn’t just store your artifacts—it puts them into your existing OCI registry (DockerHub, Quay.io, Artifactory, GitHub Container Registry) which has already passed security vetting and is covered by enterprise-grade authentication and authorization. Now they’re guarded like a hawk, while the &lt;a href="https://kitops.ml/docs/modelkit/intro.html" rel="noopener noreferrer"&gt;KitOps ModelKit&lt;/a&gt; format ensures that every byte is accounted for, every version is tagged and tracked, and every artifact is as immutable as Mount Everest. This isn’t just about meeting compliance—it’s about ensuring that your AI models are trustworthy, reliable, and secure from the get-go.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why KitOps?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Tamper-Proof:&lt;/strong&gt; With KitOps, your artifacts are locked down, hashed, and immutably stored. No more waking up in cold sweats wondering if something was altered on the sly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Auditability:&lt;/strong&gt; Every artifact comes with a complete history, ensuring that when the auditors come knocking, you’re not scrambling for answers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tagging and Versioning:&lt;/strong&gt; With built-in support for champion vs. challenger models, semantic versioning, and more, KitOps makes it easy to manage complex ML workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Elegant Bundling:&lt;/strong&gt; KitOps doesn’t just store your artifacts—it bundles them with all the metadata you need, ensuring that every deployment is consistent and reliable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consistency Across the Supply Chain:&lt;/strong&gt; By storing everything in Artifactory, KitOps ensures that your AI/ML workflows are as seamless as the rest of your DevOps processes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scaling Up
&lt;/h2&gt;

&lt;p&gt;Of course, all of this wouldn’t mean much if KitOps couldn’t scale. But as Niklas explains, that’s not an issue. His team might be small now—just 10-15 data scientists, engineers, and SREs working on five predictive ML models—but they’re growing. And as they do, KitOps will scale with them, ensuring that their workflows remain smooth, secure, and consistent, no matter how many models they deploy. That’s why it’s being adopted by some of the largest government agencies, science labs, and global technology companies in the world.&lt;/p&gt;

&lt;h3&gt;
  
  
  Predictive ML, Not LLMs
&lt;/h3&gt;

&lt;p&gt;It’s worth noting that Niklas’s team isn’t diving into the deep end of LLMs just yet—they’re focused on predictive ML. But whether you’re deploying LLMs, fine-tuning them, or just managing a handful of predictive models, KitOps has you covered. In fact, if you’re working with LLMs, KitOps makes even more sense, since the number and size of project artifacts only grow.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;If you’re serious about AI/ML, and you’re tired of wrestling with tools that promise the world but deliver a mess, it’s time to give KitOps a look. It’s not just about storing your artifacts—it’s about ensuring they’re secure, auditable, and ready for deployment at a moment’s notice. Because in this game, anything less is a risk you can’t afford to take.&lt;/p&gt;

&lt;p&gt;If you have questions about integrating KitOps with your team, join the conversation on &lt;a href="https://discord.com/invite/Tapeh8agYy" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; and &lt;a href="https://kitops.ml/docs/quick-start.html" rel="noopener noreferrer"&gt;start&lt;/a&gt; using KitOps today!&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>programming</category>
      <category>devops</category>
      <category>opensource</category>
    </item>
    <item>
      <title>The Expected Cooling of the Generative AI Hype</title>
      <dc:creator>Gorkem Ercan</dc:creator>
      <pubDate>Mon, 26 Aug 2024 13:48:07 +0000</pubDate>
      <link>https://dev.to/gorkem/the-expected-cooling-of-the-generative-ai-hype-4lln</link>
      <guid>https://dev.to/gorkem/the-expected-cooling-of-the-generative-ai-hype-4lln</guid>
      <description>&lt;p&gt;Environmental changes have always been catalysts for evolutionary shifts. The rise of Large Language Models (LLMs) like ChatGPT has ignited the birth of a new technological ecosystem. But, as with any seismic change, the initial response is often wildly exaggerated, driven by those who don't fully grasp the nuances—a phenomenon we now dub the &lt;em&gt;AI Hype&lt;/em&gt;. For those of us who've seen such waves before, it was clear from the start: this was a hype cycle, and like all hype cycles, it had to run its course. Now, the signs are undeniable—&lt;a href="https://pitchbook.com/news/articles/generative-ai-seed-funding-drops" rel="noopener noreferrer"&gt;the hype is cooling down.&lt;/a&gt; But what's next for AI?&lt;/p&gt;

&lt;p&gt;No technology, no matter how revolutionary, can thrive without delivering real value. And here's the truth: we haven’t yet cracked the code on generating consistent value for enterprises from AI. The next phase of AI's evolution won't be about shiny new algorithms or eye-catching demos; it will be about the nitty-gritty work of building standards, developing techniques, and creating frameworks that enable AI and ML to integrate into the fabric of business seamlessly and safely. We've begun to see &lt;a href="https://www.investopedia.com/4-key-takeaways-from-walmart-earnings-call-8695732" rel="noopener noreferrer"&gt;pockets of success&lt;/a&gt; and early experiments that hint at the financial benefits AI/ML can offer. But as the hype bubble bursts, there's always a danger that the technology itself will be blamed for the inevitable disappointments—rather than the non-specialists who inflated expectations to begin with.&lt;/p&gt;

&lt;p&gt;The reality is clear: AI/ML works. It has proven, valuable applications and should not be discarded just because it was overhyped. Now, more than ever, it’s time to rally behind standards, support open-source initiatives, and back companies that are focused on streamlining the adoption of AI and ML. This is how we accelerate the next phase of AI's evolution, turning what was once hype into sustainable, real-world impact.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>Announcing the Preview Release for Jozu Hub</title>
      <dc:creator>Gorkem Ercan</dc:creator>
      <pubDate>Wed, 24 Jul 2024 12:53:19 +0000</pubDate>
      <link>https://dev.to/jozu/announcing-the-preview-release-for-jozu-hub-jld</link>
      <guid>https://dev.to/jozu/announcing-the-preview-release-for-jozu-hub-jld</guid>
      <description>&lt;p&gt;The benefits that AI will bring to enterprises will manifest as a gradual transformation rather than a sudden change. Much like transformative technologies of the past, such as cloud computing, mobile, or the internet, organizations that can integrate these technologies at a fundamental level will see significant advantages. Similar to its predecessors, AI as a transformative technology requires infrastructure, tooling, and practices to support its adoption.&lt;/p&gt;

&lt;p&gt;At &lt;a href="//jozu.com"&gt;Jozu&lt;/a&gt;, our goal is to empower organizations to deploy and operate AI applications in production. We have open-sourced &lt;a href="//kitops.ml"&gt;KitOps&lt;/a&gt;, which includes ModelKits—an OCI-compliant package containing everything needed to integrate with a model or deploy it to production—and the Kit CLI, which manages ModelKits on any OCI-compliant repository. Since launching KitOps a few months ago, we have experienced significant interest, as evidenced by the increasing download numbers, which have surpassed a thousand weekly downloads. We have also partnered with several companies in the US and Europe to help them adopt KitOps into their AI/ML pipelines.&lt;/p&gt;

&lt;p&gt;One recurring piece of feedback we have received is that while organizations appreciate using their existing OCI registries, they would prefer a ModelKit-first registry experience. Today, we are launching the preview of Jozu Hub, our SaaS registry with a ModelKit-first experience. Jozu Hub is designed to work with ModelKits and provide essential information about a ModelKit, such as datasets and models, at your fingertips. It allows you to see the differences between ModelKit versions and easily spot essential security information, such as provenance.&lt;/p&gt;

&lt;p&gt;Today’s launch is just the beginning. We have more features in the works, including private repositories, richer search options, and inference container images. We have not forgotten organizations that cannot use our SaaS service: we will also release a version of Jozu Hub that you can run in your own infrastructure and that works with your existing OCI registry.&lt;/p&gt;

&lt;p&gt;If you are interested in the ModelKit-first experience of &lt;a href="//jozu.com/hub"&gt;Jozu Hub&lt;/a&gt;, visit our beta release at &lt;a href="//jozu.ml"&gt;jozu.ml&lt;/a&gt;. If you are interested in storing your own ModelKits, &lt;a href="//jozu.com/hub"&gt;sign up for early access&lt;/a&gt;. If you are an organization that would prefer to run your own Jozu Hub, contact us at &lt;a href="mailto:info@jozu.com"&gt;info@jozu.com&lt;/a&gt;, and in the meantime, sign up for early access to test the features.&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>news</category>
      <category>programming</category>
      <category>ai</category>
    </item>
    <item>
      <title>KitOps Release v0.2–Introducing Dev Mode and the ability to chain ModelKits</title>
      <dc:creator>Gorkem Ercan</dc:creator>
      <pubDate>Wed, 01 May 2024 12:25:41 +0000</pubDate>
      <link>https://dev.to/jozu/kitops-release-v02-introducing-dev-mode-and-the-ability-to-chain-modelkits-4n39</link>
      <guid>https://dev.to/jozu/kitops-release-v02-introducing-dev-mode-and-the-ability-to-chain-modelkits-4n39</guid>
      <description>&lt;p&gt;Welcome KitOps v0.2! This update brings two major features for working with LLMs, as well as numerous smaller enhancements. We are excited to introduce KitOps’ Dev mode - making it possible to test LLMs locally (even if you don’t have an internet connection or a GPU). &lt;/p&gt;

&lt;p&gt;We also introduce model parts to Kit so you can “chain” ModelKits, simplifying the building of things like adapters for common open source LLMs. These were both features requested by the community 💝&lt;/p&gt;

&lt;p&gt;Here's what you need to know about the new additions:&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing KitOps Dev Mode
&lt;/h2&gt;

&lt;p&gt;You can try Kit’s new dev mode with the &lt;code&gt;kit dev&lt;/code&gt; command, which initializes and launches a local development portal for testing various large language models (LLMs). The &lt;code&gt;kit dev&lt;/code&gt; command uses the contents of the Kitfile. This initial version of dev mode includes a user-friendly chat and prompt interface, as well as an OpenAI-compatible API so you can seamlessly integrate an LLM into your applications.&lt;/p&gt;

&lt;p&gt;Getting started is simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Unpack a base LLM
kit unpack ghcr.io/jozu-ai/llama3:8B-text-q4_0 -d ./my-ai-project

# Launch the developer portal
kit dev ./my-ai-project 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running these commands, you will receive a URL to access the portal through your browser.&lt;/p&gt;
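&lt;p&gt;Because the portal exposes an OpenAI-compatible API, you can also talk to the model from the command line or from application code. The sketch below uses &lt;code&gt;curl&lt;/code&gt; against a chat completions endpoint; the host, port, and exact path here are assumptions, so substitute the URL that &lt;code&gt;kit dev&lt;/code&gt; prints for your session:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical endpoint; use the URL printed by `kit dev`
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;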

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2sw1w5s3hwkbxr744xj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2sw1w5s3hwkbxr744xj.png" alt="KitOps Dev Mode" width="800" height="629"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Currently, dev mode is only available on macOS, although we plan to expand it to additional platforms and include more inference runtimes and utilities for models, data, and code. &lt;a href="https://github.com/jozu-ai/kitops/issues/new/choose" rel="noopener noreferrer"&gt;File an issue in our GitHub repository&lt;/a&gt; telling us which platform we should tackle next, or how to improve the &lt;code&gt;kit dev&lt;/code&gt; command in general - we love community feedback!&lt;/p&gt;

&lt;h2&gt;
  
  
  More Powerful Model Packaging with Model Parts and Referencing
&lt;/h2&gt;

&lt;p&gt;This release also introduces model parts, a feature that brings even more flexibility to ModelKits. Now you can reference other ModelKits as the base for your ModelKit, or package a more complex AI/ML project into multiple ModelKits, each focused on different models or model parts.&lt;/p&gt;

&lt;p&gt;For example, to package a LoRA adapter that you have fine-tuned from the Llama3 base model, your Kitfile would be structured as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model:
  name: my fine-tuned llama3
  path: ghcr.io/jozu-ai/llama3:8B-instruct-q4_0
  parts:
    - path: ./lora-adapter.gguf
      type: lora-adapter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration instructs the &lt;code&gt;kit pack&lt;/code&gt; command to package only the LoRA adapter, while the &lt;code&gt;kit unpack&lt;/code&gt; and &lt;code&gt;kit pull&lt;/code&gt; commands will retrieve both your ModelKit and the base model from the referenced ModelKit - so users have everything they need from one command.&lt;/p&gt;
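&lt;p&gt;As a rough sketch (the registry and repository names below are placeholders), publishing and consuming such an adapter ModelKit could look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Package the adapter; the base model is referenced, not copied
kit pack . -t registry.example.com/my-team/llama3-lora:latest

# Publish it
kit push registry.example.com/my-team/llama3-lora:latest

# Consumers retrieve the adapter plus the referenced base model in one step
kit pull registry.example.com/my-team/llama3-lora:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;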

&lt;p&gt;Model parts and referencing provide a flexible way to manage and distribute even complex models - for use cases like LoRA adapters, projectors, new parameter sets, and more. This feature is also helpful for enterprises that want to keep a library of base models, adapters, and even embedding or integration code pre-packaged in ModelKits for development teams to reference. This can be a great way to provide approved AI/ML packages and “guardrails” for teams as they begin to build with AI/ML.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Started Now
&lt;/h2&gt;

&lt;p&gt;We encourage you to explore these new features and take advantage of the other improvements in this release, including bug fixes, documentation enhancements, and performance optimizations. As always, the latest release is available on the KitOps project &lt;a href="https://github.com/jozu-ai/kitops/releases" rel="noopener noreferrer"&gt;releases&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/jozu-ai/kitops" rel="noopener noreferrer"&gt;Try KitOps v0.2&lt;/a&gt; today and see how these new capabilities can enhance your AI development workflow!&lt;/p&gt;

&lt;p&gt;For support or to join the KitOps community, &lt;a href="https://discord.gg/dEyuTFaMC2" rel="noopener noreferrer"&gt;check out the KitOps Discord server&lt;/a&gt; and &lt;a href="https://github.com/jozu-ai/kitops" rel="noopener noreferrer"&gt;star the KitOps GitHub repo to support the project&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Strategies for Tagging ModelKits</title>
      <dc:creator>Gorkem Ercan</dc:creator>
      <pubDate>Fri, 19 Apr 2024 13:57:52 +0000</pubDate>
      <link>https://dev.to/jozu/strategies-for-tagging-modelkits-208j</link>
      <guid>https://dev.to/jozu/strategies-for-tagging-modelkits-208j</guid>
      <description>&lt;p&gt;&lt;a href="https://kitops.ml/docs/modelkit/intro.html" rel="noopener noreferrer"&gt;ModelKits&lt;/a&gt;, much like other OCI artifacts, can be identified using tags that are comprehensible to humans. This blog explores various strategies for effectively &lt;a href="https://kitops.ml/docs/cli/cli-reference.html#kit-tag" rel="noopener noreferrer"&gt;tagging your ModelKits&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multiple tags
&lt;/h2&gt;

&lt;p&gt;A ModelKit can carry multiple tags, and for good reason. When you create a ModelKit, you typically want a tag that identifies its contents. For instance, in &lt;code&gt;llama-2:7b-chat-q8_0&lt;/code&gt;, the tag &lt;code&gt;7b-chat-q8_0&lt;/code&gt; conveys the parameter size, variant, and quantization of the Llama 2 model. If a ModelKit exclusively contains data, it might be tagged as &lt;code&gt;categorization:sales-data-2023&lt;/code&gt;. These tags define the artifacts a user expects to receive. If a ModelKit includes both a model and significant datasets, it can carry multiple tags that delineate its contents. By convention, such tags are considered immutable.&lt;/p&gt;

&lt;p&gt;Besides content identification, tags can also signify different stages, environments, and other characteristics of a ModelKit, such as &lt;code&gt;latest&lt;/code&gt;, &lt;code&gt;production&lt;/code&gt;, or &lt;code&gt;challenger&lt;/code&gt;. By convention, such tags are expected to be mutable. &lt;/p&gt;
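&lt;p&gt;As an illustration (the registry and repository names are placeholders, and the exact behavior of re-pointing a tag may vary by registry), the &lt;code&gt;kit tag&lt;/code&gt; command lets you attach a mutable stage tag alongside an immutable content tag:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Add a mutable stage tag alongside the content-identifying tag
kit tag registry.example.com/models/llama-2:7b-chat-q8_0 \
    registry.example.com/models/llama-2:production

# Later, promote a different build by applying the same stage tag again
kit tag registry.example.com/models/llama-2:7b-chat-q5_0 \
    registry.example.com/models/llama-2:production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;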

&lt;h2&gt;
  
  
  Mutable vs Immutable
&lt;/h2&gt;

&lt;p&gt;Tags are mutable by nature. Some registries, like ECR, allow configuring tag mutability at the repository level. However, because a repository often needs to hold both mutable and immutable tags, enforcing immutability at that level is impractical. Therefore, tags should not be relied upon when immutable references are required.&lt;/p&gt;

&lt;p&gt;Instead, each ModelKit carries a content-addressable identifier called a digest, which is immutable. A ModelKit's digest is a 64-character hex-encoded SHA-256 hash of its contents, computed over every artifact: code, models, data, and configuration. Changing any of these would result in a new digest. &lt;/p&gt;

&lt;p&gt;You can pull a ModelKit by digest and verify that its contents match that digest. The digest is the canonical ID of a ModelKit, derived from its contents. &lt;/p&gt;
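&lt;p&gt;For example (the digest below is a placeholder; use the real digest reported by your registry), pulling by digest pins the exact contents:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Pull by immutable digest instead of a mutable tag
kit pull registry.example.com/models/llama-2@sha256:&lt;digest&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;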

&lt;h2&gt;
  
  
  Comparison with Git Tags
&lt;/h2&gt;

&lt;p&gt;If you're familiar with Git, you may be thinking at this point that a digest is like a commit SHA. That's a fair comparison: a Git commit SHA is a 40-character hex-encoded SHA-1 hash of the contents of a given state of the source code, and changing a file's contents or metadata results in a new commit SHA.&lt;/p&gt;

&lt;p&gt;Git also has a concept of tags. When you want to share a particular state of the source code with others, you can apply a tag like &lt;code&gt;1.0.0&lt;/code&gt;, and Git maps that tag to a commit SHA. ModelKit tags are similar: instead of dealing with a long hex-encoded digest, you can use a tag like &lt;code&gt;:7b-q4_0&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Both Git tags and ModelKit tags are mutable. In both cases, anyone with permission to push to the repo can also update and push a tag. This could be malicious, but usually it's a well-meaning mistake. If you need to know exactly what you are getting, and immutability is truly required, digests are the only way.&lt;/p&gt;

&lt;p&gt;Some of you may be wondering: what about Git branches? Git branches also point to commits; when you commit something on a branch, the branch moves to point to the new commit. But this is really just a convention supported by Git tooling. ModelKits do not have a concept of branches, but you can think of tags like &lt;code&gt;latest&lt;/code&gt; that are expected to move as a similar convention.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Effective tagging of ModelKits not only facilitates ease of identification and organization but also enhances the manageability of different versions and configurations of these artifacts. Whether leveraging mutable tags for operational flexibility or immutable digests for ensuring integrity, the thoughtful application of tagging strategies ensures that ModelKits can be seamlessly integrated and reliably referenced within any type of workflow. Remember, while tags offer convenience and adaptability, digests provide the cornerstone of trust and verification in the lifecycle of a ModelKit. Adopting these practices will empower your teams to maintain a robust, efficient, and secure AI/ML project management system.&lt;/p&gt;

</description>
      <category>mlops</category>
      <category>devops</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Introducing the New GitHub Action for using Kit CLI on MLOps pipelines</title>
      <dc:creator>Gorkem Ercan</dc:creator>
      <pubDate>Fri, 05 Apr 2024 19:15:59 +0000</pubDate>
      <link>https://dev.to/jozu/introducing-the-new-github-action-for-using-kit-cli-on-mlops-pipelines-21ia</link>
      <guid>https://dev.to/jozu/introducing-the-new-github-action-for-using-kit-cli-on-mlops-pipelines-21ia</guid>
      <description>&lt;p&gt;One of the goals of KitOps is streamlining AI integration with your existing DevOps practices. Our latest contribution to this evolving landscape is a GitHub Action that simplifies integrating the Kit CLI into your existing CI/CD toolset. This development is particularly exciting for those looking to leverage ModelKits, our OCI-compliant AI project packaging, directly within their CI/CD pipelines.&lt;/p&gt;

&lt;p&gt;With the &lt;a href="https://github.com/marketplace/actions/setup-kit-cli" rel="noopener noreferrer"&gt;Setup Kit CLI&lt;/a&gt; action, using Kit as part of your GitHub workflows is effortless. You can add the latest version of the CLI to your workflow simply by including the following step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;steps:
  - uses: jozu-ai/gh-kit-setup@v1.0.0
    id: install_kit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
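&lt;p&gt;From there, a minimal workflow sketch might pack and push a ModelKit on each run. The registry, repository, secret names, and login flags below are placeholders and assumptions; adapt them to your environment:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;steps:
  - uses: actions/checkout@v4
  - uses: jozu-ai/gh-kit-setup@v1.0.0
    id: install_kit
  - name: Pack and push ModelKit
    run: |
      kit login registry.example.com -u ${{ secrets.REGISTRY_USER }} -p ${{ secrets.REGISTRY_TOKEN }}
      kit pack . -t registry.example.com/my-team/my-model:${{ github.sha }}
      kit push registry.example.com/my-team/my-model:${{ github.sha }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;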



&lt;p&gt;&lt;a href="https://kitops.ml" rel="noopener noreferrer"&gt;KitOps&lt;/a&gt; uniquely addresses a critical challenge in AI/ML development: the versioning and management of all project artifacts—including code, models, and data—under a unified framework. This integrated approach ensures that each component of an AI project is tracked and managed with precision, eliminating the common pitfalls associated with fragmented artifact management.&lt;/p&gt;

&lt;p&gt;ModelKits, being OCI compliant, provide a standardized way to package AI projects, ensuring compatibility across your existing infrastructure. This standardization is vital for teams looking to deploy AI solutions in diverse operational landscapes without getting bogged down by compatibility issues.&lt;/p&gt;

&lt;p&gt;To begin taking advantage of the streamlined AI project management KitOps offers, simply incorporate the Setup Kit CLI action into your GitHub workflows. For an in-depth understanding of all the features this action provides, visit the detailed documentation linked above.&lt;/p&gt;

&lt;p&gt;As we continue to innovate and improve KitOps, your feedback is invaluable to us. Whether you're encountering challenges, have suggestions for new features, or simply want to share your success stories, we're all ears. You can provide feedback on our &lt;a href="https://github.com/jozu-ai/kitops" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt; or our &lt;a href="https://discord.gg/Tapeh8agYy" rel="noopener noreferrer"&gt;Discord channel&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>devops</category>
      <category>ai</category>
      <category>github</category>
      <category>tooling</category>
    </item>
  </channel>
</rss>
