<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Juan Manuel Barea Martínez</title>
    <description>The latest articles on DEV Community by Juan Manuel Barea Martínez (@juan_manuelbareamartne).</description>
    <link>https://dev.to/juan_manuelbareamartne</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2557046%2Fe9df4f77-71a0-4b42-82f8-01ceedf932cd.jpeg</url>
      <title>DEV Community: Juan Manuel Barea Martínez</title>
      <link>https://dev.to/juan_manuelbareamartne</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/juan_manuelbareamartne"/>
    <language>en</language>
    <item>
      <title>Securing an inference service with Authorino</title>
      <dc:creator>Juan Manuel Barea Martínez</dc:creator>
      <pubDate>Mon, 01 Dec 2025 09:20:01 +0000</pubDate>
      <link>https://dev.to/juan_manuelbareamartne/securing-an-inference-service-with-authorino-4g4p</link>
      <guid>https://dev.to/juan_manuelbareamartne/securing-an-inference-service-with-authorino-4g4p</guid>
      <description>&lt;p&gt;In previous posts, I walked through the process of deploying LLMs using &lt;a href="https://medium.com/gitconnected/deploying-an-llm-using-vllm-on-production-with-kubernetes-90e0bf225448" rel="noopener noreferrer"&gt;vLLM&lt;/a&gt; and &lt;a href="https://medium.com/gitconnected/deploying-an-llm-with-ollama-on-kubernetes-1975acfc4a2b" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt; across various production environments, addressing challenges such as storage, scaling, and performance.&lt;/p&gt;

&lt;p&gt;However, there was one thing I intentionally left out: &lt;strong&gt;security&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Security is one of the most overlooked aspects when designing AI systems. We get excited about running powerful models and optimizing performance, but we often assume “everything will be fine” on the security side.&lt;/p&gt;

&lt;p&gt;For LLM and inference workloads, security is even more critical. These services consume significant resources, expose sensitive data, and, without proper protection, become easy targets for misuse, leaks, or cost explosions.&lt;/p&gt;

&lt;p&gt;In my latest article, I explain how to secure an inference service using Authorino + Envoy, adding authentication layers that protect your API without complicating your pipeline.&lt;/p&gt;

&lt;p&gt;Read the full article on &lt;a href="https://medium.com/gitconnected/securing-an-inference-service-with-authorino-d4d2d62cf554" rel="noopener noreferrer"&gt;Securing an inference service with Authorino&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>softwareengineering</category>
      <category>ai</category>
      <category>aiops</category>
    </item>
    <item>
      <title>Deploying an LLM using vLLM on Production with Kubernetes</title>
      <dc:creator>Juan Manuel Barea Martínez</dc:creator>
      <pubDate>Wed, 05 Nov 2025 08:01:49 +0000</pubDate>
      <link>https://dev.to/juan_manuelbareamartne/deploying-an-llm-using-vllm-on-production-with-kubernetes-22g2</link>
      <guid>https://dev.to/juan_manuelbareamartne/deploying-an-llm-using-vllm-on-production-with-kubernetes-22g2</guid>
      <description>&lt;p&gt;While most companies focus on building better models, adding new features, and providing new services, many seem to overlook one crucial step: deploying those models through a proper inference service.&lt;/p&gt;

&lt;p&gt;Doing so not only addresses key security concerns but also gives better control over the data your models use.&lt;/p&gt;

&lt;p&gt;In my second post about deploying models in production environments, I explain how to deploy an LLM on Kubernetes using vLLM and discuss the main challenges and how to overcome them.&lt;/p&gt;

&lt;p&gt;If you’d like to dig deeper, check it out here: &lt;a href="https://levelup.gitconnected.com/deploying-an-llm-using-vllm-on-production-with-kubernetes-90e0bf225448" rel="noopener noreferrer"&gt;https://levelup.gitconnected.com/deploying-an-llm-using-vllm-on-production-with-kubernetes-90e0bf225448&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>software</category>
      <category>devops</category>
    </item>
    <item>
      <title>Deploying an LLM with Ollama on Kubernetes</title>
      <dc:creator>Juan Manuel Barea Martínez</dc:creator>
      <pubDate>Thu, 16 Oct 2025 07:09:22 +0000</pubDate>
      <link>https://dev.to/juan_manuelbareamartne/deploying-an-llm-with-ollama-on-kubernetes-3j75</link>
      <guid>https://dev.to/juan_manuelbareamartne/deploying-an-llm-with-ollama-on-kubernetes-3j75</guid>
      <description>&lt;p&gt;I'm writing a blog post series about how to create a production-grade AI Stack environment.&lt;/p&gt;

&lt;p&gt;The first post is about deploying an LLM in Kubernetes environments where GPUs are not available using Ollama: &lt;a href="https://levelup.gitconnected.com/deploying-an-llm-with-ollama-on-kubernetes-1975acfc4a2b" rel="noopener noreferrer"&gt;https://levelup.gitconnected.com/deploying-an-llm-with-ollama-on-kubernetes-1975acfc4a2b&lt;/a&gt;&lt;/p&gt;

</description>
      <category>llm</category>
      <category>ai</category>
      <category>software</category>
    </item>
    <item>
      <title>10 Tips to Become a Successful Software Engineer in 2025</title>
      <dc:creator>Juan Manuel Barea Martínez</dc:creator>
      <pubDate>Sat, 20 Sep 2025 16:40:05 +0000</pubDate>
      <link>https://dev.to/juan_manuelbareamartne/10-tips-to-become-a-successful-software-engineer-in-2025-3944</link>
      <guid>https://dev.to/juan_manuelbareamartne/10-tips-to-become-a-successful-software-engineer-in-2025-3944</guid>
      <description>&lt;p&gt;After years of learning (and making mistakes), I put together 10 practical pieces of advice that helped me grow as an engineer — and that can help juniors navigate today’s tough tech landscape.&lt;/p&gt;

&lt;p&gt;From reading the right books, contributing to open source, and learning to love testing, to building your personal brand and embracing AI as just another tool — this list is a roadmap for continuous growth.&lt;/p&gt;

&lt;p&gt;Check out the full article here: &lt;a href="https://medium.com/@juanmabareamartinez/a81bf504759d" rel="noopener noreferrer"&gt;https://medium.com/@juanmabareamartinez/a81bf504759d&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whether you’re just starting out or already experienced, I’d love to hear: what’s the best piece of advice you’ve received in your career?&lt;/p&gt;

</description>
      <category>programming</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>career</category>
    </item>
    <item>
      <title>MyAIProfile: make AI sound like you, not like everyone else</title>
      <dc:creator>Juan Manuel Barea Martínez</dc:creator>
      <pubDate>Mon, 01 Sep 2025 09:32:36 +0000</pubDate>
      <link>https://dev.to/juan_manuelbareamartne/myaiprofile-make-ai-sound-like-you-not-like-everyone-else-1j2k</link>
      <guid>https://dev.to/juan_manuelbareamartne/myaiprofile-make-ai-sound-like-you-not-like-everyone-else-1j2k</guid>
      <description>&lt;p&gt;I usually don’t use AI to generate content beyond light editing and corrections, but this time I made an exception.&lt;/p&gt;

&lt;p&gt;I asked an AI to write an article fully guided by my MyAIProfile file, and the result is content that actually sounds like me, not like a generic template.&lt;/p&gt;

&lt;p&gt;In the article, I talk about why AI-generated content often feels soulless, how MyAIProfile helps preserve your own voice, the role of the generator in creating your profile automatically, and a real example of how to put it into practice.&lt;/p&gt;

&lt;p&gt;👉 Read the full article here &lt;a href="https://medium.com/@juanmabareamartinez/7eb989a6eccd" rel="noopener noreferrer"&gt;https://medium.com/@juanmabareamartinez/7eb989a6eccd&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>writing</category>
      <category>programming</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Designing a Resilient Go Client for ArgoCD with Auto Token Renewal</title>
      <dc:creator>Juan Manuel Barea Martínez</dc:creator>
      <pubDate>Mon, 21 Jul 2025 07:02:55 +0000</pubDate>
      <link>https://dev.to/juan_manuelbareamartne/designing-a-resilient-go-client-for-argocd-with-auto-token-renewal-323k</link>
      <guid>https://dev.to/juan_manuelbareamartne/designing-a-resilient-go-client-for-argocd-with-auto-token-renewal-323k</guid>
      <description>&lt;p&gt;Struggling with OAuth token expiration in your Go clients?&lt;/p&gt;

&lt;p&gt;I recently ran into this challenge while integrating with ArgoCD, and I decided to tackle it with a simple and reusable solution, making token renewal automatic and transparent for the client.&lt;/p&gt;

&lt;p&gt;In this post, I walk through the problem, explore common approaches, and show a clean way to handle it while keeping your code flexible and maintainable.&lt;/p&gt;

&lt;p&gt;If you're working with APIs that require OAuth, this might save you some headaches.&lt;/p&gt;

&lt;p&gt;Check it out and let me know what you think!&lt;br&gt;
&lt;a href="https://medium.com/@juanmabareamartinez/designing-a-resilient-go-client-for-argocd-with-auto-token-renewal-d990e9d5480d" rel="noopener noreferrer"&gt;https://medium.com/@juanmabareamartinez/designing-a-resilient-go-client-for-argocd-with-auto-token-renewal-d990e9d5480d&lt;/a&gt;&lt;/p&gt;

</description>
      <category>go</category>
      <category>programming</category>
      <category>softwareengineering</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How to Use TDD with AI Tools Like Cursor</title>
      <dc:creator>Juan Manuel Barea Martínez</dc:creator>
      <pubDate>Mon, 14 Jul 2025 08:12:08 +0000</pubDate>
      <link>https://dev.to/juan_manuelbareamartne/how-to-use-tdd-with-ai-tools-like-cursor-4jal</link>
      <guid>https://dev.to/juan_manuelbareamartne/how-to-use-tdd-with-ai-tools-like-cursor-4jal</guid>
      <description>&lt;p&gt;I’ve been exploring how AI can make Test-Driven Development (TDD) faster and more practical.&lt;/p&gt;

&lt;p&gt;In my latest post, I walk through a hands-on example using Cursor to write tests, implement logic, and refactor, all through a TDD workflow.&lt;/p&gt;

&lt;p&gt;It’s a real look at how AI can assist in the way we build reliable software.&lt;/p&gt;

&lt;p&gt;Curious about how TDD and AI can work together?&lt;br&gt;
&lt;a href="https://medium.com/@juanmabareamartinez/how-to-use-tdd-with-ai-tools-like-cursor-d41253e4b62e" rel="noopener noreferrer"&gt;https://medium.com/@juanmabareamartinez/how-to-use-tdd-with-ai-tools-like-cursor-d41253e4b62e&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>tdd</category>
    </item>
    <item>
      <title>Write your own local Copilot with Ollama and VSCode</title>
      <dc:creator>Juan Manuel Barea Martínez</dc:creator>
      <pubDate>Mon, 30 Jun 2025 12:35:33 +0000</pubDate>
      <link>https://dev.to/juan_manuelbareamartne/write-your-own-local-copilot-with-ollama-and-vscode-27p0</link>
      <guid>https://dev.to/juan_manuelbareamartne/write-your-own-local-copilot-with-ollama-and-vscode-27p0</guid>
      <description>&lt;p&gt;🚀 Build your own local Copilot with VS Code and Ollama!&lt;/p&gt;

&lt;p&gt;In my latest article, I demonstrate how to run a language model locally in VS Code, without relying on the cloud or risking data leaks, for fast and private coding assistance.&lt;/p&gt;

&lt;p&gt;Perfect for developers who want full control over their AI tools.&lt;/p&gt;

&lt;p&gt;Check it out here: &lt;a href="https://medium.com/@juanmabareamartinez/write-your-own-local-copilot-with-ollama-and-vscode-38092575a33a" rel="noopener noreferrer"&gt;https://medium.com/@juanmabareamartinez/write-your-own-local-copilot-with-ollama-and-vscode-38092575a33a&lt;/a&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>llm</category>
      <category>vscode</category>
    </item>
    <item>
      <title>The Real Cost of AI: What We’re Giving Up</title>
      <dc:creator>Juan Manuel Barea Martínez</dc:creator>
      <pubDate>Mon, 23 Jun 2025 07:37:21 +0000</pubDate>
      <link>https://dev.to/juan_manuelbareamartne/the-real-cost-of-ai-what-were-giving-up-834</link>
      <guid>https://dev.to/juan_manuelbareamartne/the-real-cost-of-ai-what-were-giving-up-834</guid>
      <description>&lt;p&gt;What are we losing in exchange for smarter machines?&lt;br&gt;
AI is accelerating faster than ever, but we rarely stop to ask: at what cost?&lt;/p&gt;

&lt;p&gt;In this post, I explore how artificial intelligence is reshaping not just our tools, but our minds, ethics, identity, and even the meaning of life and death.&lt;br&gt;
Concepts like digital immortality or subscription-based intelligence aren’t just theoretical anymore; they’re becoming real-world trends.&lt;/p&gt;

&lt;p&gt;Read the full article here: &lt;a href="https://medium.com/@juanmabareamartinez/the-real-cost-of-ai-what-were-giving-up-382981ee141c" rel="noopener noreferrer"&gt;https://medium.com/@juanmabareamartinez/the-real-cost-of-ai-what-were-giving-up-382981ee141c&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’d love to hear your thoughts. Are we in control of AI, or are we becoming its product?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>ethics</category>
      <category>machinelearning</category>
      <category>llm</category>
    </item>
    <item>
      <title>The Open Source Beginner Contributor Guide</title>
      <dc:creator>Juan Manuel Barea Martínez</dc:creator>
      <pubDate>Mon, 16 Jun 2025 07:24:56 +0000</pubDate>
      <link>https://dev.to/juan_manuelbareamartne/the-open-source-beginner-contributor-guide-e5a</link>
      <guid>https://dev.to/juan_manuelbareamartne/the-open-source-beginner-contributor-guide-e5a</guid>
      <description>&lt;p&gt;Open source is not just about code, it's about community, collaboration, and growth.&lt;br&gt;
In this article, I share practical advice for those who want to start contributing to open-source projects but aren’t sure where to begin. Whether you're a student, junior dev, or simply curious, this guide is for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/@juanmabareamartinez/the-open-source-beginner-contributor-guide-9667af9a08ac" rel="noopener noreferrer"&gt;https://medium.com/@juanmabareamartinez/the-open-source-beginner-contributor-guide-9667af9a08ac&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you’ve ever thought about contributing to open source, I’d love for you to check it out.&lt;br&gt;
And if you’re already a contributor, feel free to share your tips in the comments!&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>codenewbie</category>
      <category>community</category>
      <category>career</category>
    </item>
    <item>
      <title>Model Context Protocol — the USB‑C of tooling for LLMs</title>
      <dc:creator>Juan Manuel Barea Martínez</dc:creator>
      <pubDate>Mon, 09 Jun 2025 07:21:44 +0000</pubDate>
      <link>https://dev.to/juan_manuelbareamartne/model-context-protocol-the-usb-c-of-tooling-for-llms-5dfd</link>
      <guid>https://dev.to/juan_manuelbareamartne/model-context-protocol-the-usb-c-of-tooling-for-llms-5dfd</guid>
      <description>&lt;p&gt;How do we make LLMs more useful, composable, and secure?&lt;br&gt;
The Model Context Protocol (MCP) proposes a modular and open way to connect models, tools, and data, just like USB-C did for hardware.&lt;/p&gt;

&lt;p&gt;If you're building with LLMs, this might change how you think about tool integration.&lt;/p&gt;

&lt;p&gt;Read here: &lt;a href="https://medium.com/@juanmabareamartinez/model-context-protocol-the-usb-c-of-tooling-for-llms-dd59b2c0bedc" rel="noopener noreferrer"&gt;https://medium.com/@juanmabareamartinez/model-context-protocol-the-usb-c-of-tooling-for-llms-dd59b2c0bedc&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>llama</category>
      <category>programming</category>
    </item>
    <item>
      <title>Llama-Stack: The Developer Framework for LLM Workflows</title>
      <dc:creator>Juan Manuel Barea Martínez</dc:creator>
      <pubDate>Tue, 03 Jun 2025 16:39:06 +0000</pubDate>
      <link>https://dev.to/juan_manuelbareamartne/llama-stack-the-developer-framework-for-llm-workflows-2fcj</link>
      <guid>https://dev.to/juan_manuelbareamartne/llama-stack-the-developer-framework-for-llm-workflows-2fcj</guid>
      <description>&lt;p&gt;Building applications with LLMs often becomes messy — orchestration, evaluation, modularity...&lt;/p&gt;

&lt;p&gt;In this post, I introduce &lt;strong&gt;Llama-Stack&lt;/strong&gt;, a powerful open-source framework developed by Meta. It’s designed to tackle one of the biggest challenges in generative AI: building, evaluating, and deploying production-ready LLM applications with flexibility and scalability.&lt;/p&gt;

&lt;p&gt;Want to dive deeper? 👉 &lt;a href="https://medium.com/@juanmabareamartinez/llama-stack-the-developer-framework-for-the-future-of-ai-29855c9f97ad" rel="noopener noreferrer"&gt;Read the full post&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>development</category>
      <category>softwaredevelopment</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
