<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sam Morris</title>
    <description>The latest articles on DEV Community by Sam Morris (@cupofpython).</description>
    <link>https://dev.to/cupofpython</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2979611%2Fd271a416-65f2-48dd-86f3-de5c24a3cd16.png</url>
      <title>DEV Community: Sam Morris</title>
      <link>https://dev.to/cupofpython</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/cupofpython"/>
    <language>en</language>
    <item>
      <title>Shift as far left as you can... but do you trust the shifter?</title>
      <dc:creator>Sam Morris</dc:creator>
      <pubDate>Mon, 23 Feb 2026 19:26:11 +0000</pubDate>
      <link>https://dev.to/cupofpython/shift-as-far-left-as-you-can-but-you-trust-the-shifter-2j47</link>
      <guid>https://dev.to/cupofpython/shift-as-far-left-as-you-can-but-you-trust-the-shifter-2j47</guid>
      <description>&lt;p&gt;Across every customer conversation I have had with global enterprises, a few key themes keep coming up.&lt;/p&gt;

&lt;p&gt;a. Developer experience.&lt;br&gt;
b. The value of disrupting the status quo.&lt;br&gt;
c. Contribution to the sphere of software best practices.&lt;br&gt;
d. Trust.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trust. Trust. Trust.&lt;/strong&gt; It's paramount to our relationships, and it's the best-case scenario. We trust you because of your track record. We trust you because you are credible. We trust you because we exchange our dollars for your backing.&lt;/p&gt;

&lt;p&gt;Then there is the concept of Zero Trust, or rather Never Trust, Always Verify. The way to build trust is to evaluate how risk is managed and incidents are handled. Since bad things happen... how many calories do you put into preventing them from occurring?&lt;/p&gt;

&lt;p&gt;I would say that blind trust in any tool has not yet been achieved in our digital landscape, and probably should never occur. Rather, there is a type of pseudo-trust that we all currently work towards. The "shift left" movement for security, which pushes scanning and exploitation analysis earlier in the SDLC, does one thing very well: it provides transparency and notification. By surfacing documented insecure practices in your code, or CVEs in your dependencies, as early as possible, you can spend time upfront preventing the truly costly outcome we are all afraid of: an exploit.&lt;/p&gt;

&lt;p&gt;So, we increase the transparency of our software. We generate build-time (or cook-time) SBOMs to document the recipe for the dishes we serve our customers. We scan code, we run attack simulations with dynamic analysis, and we do everything we can early and often in our IDEs and our staging environments. We also control how code is introduced, with two approvers on a PR, blocked commits, and more. And we do our best to enforce it all.&lt;/p&gt;

&lt;p&gt;There is one missing piece of the puzzle here, and it's securing the supply chain. And by one piece, I mean a million smaller pieces comprising a puzzle we are still building and understanding. How do we handle the puzzle pieces crafted by people all over the world, held to standards and practices that you and your organization did not define, within the open source stratosphere?&lt;/p&gt;

&lt;p&gt;Authenticity and integrity. Those concepts help us establish trust. Again, never trust, always verify lets us proceed cautiously, grounded in the truth. They allow us to consider: &lt;strong&gt;did this dish, which I have a recipe for and which is now at my table, come from the restaurant I am sitting in&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;This is where I pivot to pushing for checking not just your own homework, but the homework behind all the pieces that make up the larger picture. Tools like cosign allow for keyless cryptographic signing of files tied to an identity. Identity is paramount here. It's one thing to know your dish is from a restaurant. But it's another to be able to verify it's from Olive Garden. Knowing it should be Olive Garden, you can check whether the dish is from there, and decline it if not (perhaps you only want free bread if it's a delicious breadstick).&lt;/p&gt;

&lt;p&gt;I urge you to verify not just that your base container images are signed (anyone can authenticate and sign an image), but that they are &lt;strong&gt;signed by a source you trust&lt;/strong&gt;.&lt;/p&gt;
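
&lt;p&gt;To make that concrete, here is roughly what keyless signing and identity-pinned verification look like with cosign. The image and identity values below are placeholders, not a real publisher:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Keyless signing: cosign binds the signature to your OIDC identity
cosign sign registry.example.com/team/app:1.0.0

# Verification should fail unless BOTH the identity and its issuer match what you expect
cosign verify registry.example.com/team/app:1.0.0 \
  --certificate-identity dev@example.com \
  --certificate-oidc-issuer https://accounts.google.com
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The pairing is the point: a valid signature from the wrong identity should fail your check just like no signature at all.&lt;/p&gt;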

&lt;p&gt;Now, what is a trusted source? Today it may look more like a denylist than an allowlist: reject anything that was not vetted by me and my team, or anything publishable by an anonymous actor on the public internet. One day, as signatures become the standard, it could function as an allowlist. But for now, we should operate from a place of scrutiny until cryptographic signatures become the norm.&lt;/p&gt;

&lt;p&gt;To make this easier, I provide an example of how to verify base images in multi-stage Dockerfiles against trusted providers. Did this image get signed by xyz provider? If yes, pass. If not, fail. &lt;/p&gt;
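
&lt;p&gt;The logic boils down to something like the sketch below. The identity regexp and issuer are placeholders for whoever you actually trust; the real implementation lives in the repo linked at the end of this post:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# For each base image in the Dockerfile, require a signature from a trusted identity.
# (A fuller version would also skip FROM lines that reference earlier build stages.)
grep -i '^FROM' Dockerfile | awk '{print $2}' | while read -r image; do
  cosign verify "$image" \
    --certificate-identity-regexp '.*@trusted-provider.example' \
    --certificate-oidc-issuer https://token.actions.githubusercontent.com \
    || { echo "FAIL: $image lacks a trusted signature"; exit 1; }
done
&lt;/code&gt;&lt;/pre&gt;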

&lt;p&gt;As OCI image signatures become more of a reality than an aspirational best practice, I want to expand this to look deeper into the Dockerfiles of Dockerfiles: analyzing not just the immediate base image of a Dockerfile for a signature, but the base image of that base image, and so on. The key thing to remember is that you have a software supply CHAIN, and you must look beyond the most immediate link.&lt;/p&gt;

&lt;p&gt;tl;dr: we have pseudo-trust, we should always operate from zero trust, sign your images, know whose signature to check for, consume responsibly, tread optimistically.&lt;/p&gt;

&lt;p&gt;Try it out! Here's Integrity-Check on GitHub: &lt;a href="https://github.com/cupofpython/integrity-check" rel="noopener noreferrer"&gt;https://github.com/cupofpython/integrity-check&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>containers</category>
      <category>github</category>
      <category>cosign</category>
    </item>
    <item>
      <title>Building a bot to talk to my cats</title>
      <dc:creator>Sam Morris</dc:creator>
      <pubDate>Wed, 26 Mar 2025 23:36:25 +0000</pubDate>
      <link>https://dev.to/cupofpython/building-a-bot-to-talk-to-my-cats-oh8</link>
      <guid>https://dev.to/cupofpython/building-a-bot-to-talk-to-my-cats-oh8</guid>
      <description>&lt;h2&gt;
  
  
  I ❤️ my cats 🐱
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc2mjo4daflh5sxdestcj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc2mjo4daflh5sxdestcj.jpg" alt=" " width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Have you ever looked at your pet and asked yourself, what exactly is going on inside that head? What do you need when you cry out – food, water, attention? Maybe you just want to ask how they are doing and know they are okay. My partner, who travels often for work, texted me asking how our kitten was doing after an emergency vet visit, and followed up with “I wish I could just text him.” And boom, the idea for CatBot, an AI-assisted chatbot that replies as your furry friend, was born.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does this relate to Docker... or K8s? 🐳
&lt;/h2&gt;

&lt;p&gt;I decided to try to make this idea a reality. At first I thought about the building blocks: I would need a frontend chat-like interface to build a profile for the cat, and then I would need some mechanism to take in my conversation and build a prompt for some large language model. These would serve as disparate services, and each service could run in a Docker container. Those containers need to run somewhere, and the whole application needs to be accessible from the internet so my partner and I can both reach it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 1: The Code - Logic, Modules, and More
&lt;/h2&gt;

&lt;p&gt;I started with the frontend. I am by no means a frontend guru, so I gratefully accepted the assistance of Claude from Anthropic to generate some JavaScript and CSS for me. The primary part of the application code that I needed to modify was handling the input to the chatbot. I built a new string containing not just the input, but also context telling the model to respond in the style of a cat with the traits entered when the cat profile is created. Then, I sent a POST request with these parameters to my running server, which listens on a different port.&lt;/p&gt;
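
&lt;p&gt;In spirit, that handler looked something like the sketch below. The endpoint name matches the execute endpoint I mention later in this post, but the profile shape here is illustrative rather than the exact code:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Wrap the user's message in the cat's persona before sending it to the backend
async function sendToCat(userMessage, catProfile) {
  const prompt = "You are a cat named " + catProfile.name +
    " with these traits: " + catProfile.traits.join(", ") +
    ". Stay in character and reply to: " + userMessage;

  // The backend listens on a different port (5001) than the frontend (3000)
  const response = await fetch("http://localhost:5001/execute", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: prompt }),
  });
  return response.json();
}
&lt;/code&gt;&lt;/pre&gt;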

&lt;h3&gt;
  
  
  Frontend and backend? Not a power couple
&lt;/h3&gt;

&lt;p&gt;I wanted to decouple my frontend and backend code, but still package both into one container, since a. I wanted to avoid additional networking configuration (which didn’t happen...), and b. the backend serviced one request, which was to take an “execute the command” call and talk to a different container running an LLM. So, I created a separate server file and modified my startup script so that &lt;code&gt;npm start&lt;/code&gt; not only launched the frontend, but also ran &lt;code&gt;node server.js&lt;/code&gt;, starting my server listening on port 5001 for requests to execute a command. This would also come in handy later, since my Dockerfile’s final line runs &lt;code&gt;npm start&lt;/code&gt; upon starting a container.&lt;/p&gt;
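
&lt;p&gt;The package.json change amounted to something like this; &lt;code&gt;react-scripts start&lt;/code&gt; is a stand-in for whatever actually serves your frontend:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "scripts": {
    "start": "node server.js &amp;amp; react-scripts start"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The single ampersand backgrounds the API server so the frontend process can hold the container's foreground.&lt;/p&gt;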

&lt;h3&gt;
  
  
  My container will be calling your container 😎
&lt;/h3&gt;

&lt;p&gt;Initially, I knew I needed to talk to a running Docker container from another container. More specifically, I wanted to execute a command, such as &lt;code&gt;ollama run llama3.2&lt;/code&gt;, on a running container. So naturally, I figured I would need a way to exec into the container and run this command. I found Dockerode, a Node module that lets you manage and manipulate containers. I set it up, instantiated Docker, and got the container by its expected name, which I had specified in my Docker Compose file. Then, I referenced the container, essentially exec'ed into it, and ran &lt;code&gt;ollama run llama3.2&lt;/code&gt; with &lt;code&gt;{my prompt}&lt;/code&gt; appended, to start up the Ollama model and run my prompt against it remotely.&lt;/p&gt;
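
&lt;p&gt;For the curious, the Dockerode version looked roughly like this; the container name is whatever Compose assigns, so treat the details as a sketch:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const Docker = require("dockerode");

// Dockerode talks to the Docker daemon through its socket file
const docker = new Docker({ socketPath: "/var/run/docker.sock" });

async function runInLlmContainer(prompt) {
  // "catbot-llm-1" is a placeholder for the name set in the Compose file
  const container = docker.getContainer("catbot-llm-1");

  // The remote equivalent of exec'ing in and running the model with the prompt
  const exec = await container.exec({
    Cmd: ["ollama", "run", "llama3.2", prompt],
    AttachStdout: true,
    AttachStderr: true,
  });
  const stream = await exec.start({});
  // ...then collect the stream into a string and return it
}
&lt;/code&gt;&lt;/pre&gt;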

&lt;p&gt;The Dockerode solution for working with the running LLM container seemed perfect at the time, and it worked when I was doing fully local testing, but Dockerode did not play nicely when I ran this implementation on a minikube cluster. It needed to be instantiated with access to the Docker socket file, which is how it communicates with the Docker daemon. However, Kubernetes does not use Docker as its container runtime, even though it can run containers built from Docker images.&lt;/p&gt;

&lt;h3&gt;
  
  
  Breaking up with Dockerode 💔
&lt;/h3&gt;

&lt;p&gt;So, I ended up scrapping Dockerode completely, since my end goal was to deploy this application on a Kubernetes cluster. Instead, I used axios, a promise-based HTTP client for NodeJS, to make a POST call to my LLM container. Using a promise-based client was critical for dealing with timing issues I ran into early on, when my app would sometimes fail to get a response from the LLM in time and display some error text. Initially, I pointed to localhost as my host, since my LLM container was running on port 11434 on my machine. This worked great when I ran my Node app as a container, or even ran &lt;code&gt;npm start&lt;/code&gt; on my local machine; I could still interact with my LLM container at localhost:11434. However, this did not work in the Kubernetes implementation, since my containers were no longer port mapped to a port on my local machine. This is where the Kubernetes magic comes in.&lt;/p&gt;
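
&lt;p&gt;Here is the shape of that axios call against the Ollama API. The host lookup is the part that changes between local and cluster runs; the environment variable is my own convention, not Ollama's:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const axios = require("axios");

// Locally this is the port-mapped container; in-cluster it becomes a service URL
const host = process.env.LLM_HOST || "http://localhost:11434";

async function askCat(prompt) {
  const res = await axios.post(host + "/api/generate", {
    model: "llama3.2",
    prompt: prompt,
    stream: false, // one pile of text; stream: true is the word-by-word option I mention later
  });
  return res.data.response;
}
&lt;/code&gt;&lt;/pre&gt;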

&lt;h2&gt;
  
  
  Part 2: Spinning it Up - Local Dev Startup → Minikube → EKS
&lt;/h2&gt;

&lt;p&gt;I talk a lot about using containers for local development. The container I always used was an LLM container that I pulled from the &lt;a href="https://hub.docker.com/catalogs/gen-ai" rel="noopener noreferrer"&gt;Docker Hub official AI image registry&lt;/a&gt;. I initially started dev work by just running &lt;code&gt;npm start&lt;/code&gt; to get my app running and test connecting to a container, and then I got more savvy with my approach by leveraging &lt;a href="https://docs.docker.com/compose/" rel="noopener noreferrer"&gt;Docker Compose&lt;/a&gt;. Docker Compose allowed me to automatically spin up my containers, set up port mapping, and even run a &lt;code&gt;post_start&lt;/code&gt; command on my LLM container to pull the correct model.&lt;/p&gt;
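
&lt;p&gt;My compose.yaml looked something like this. The &lt;code&gt;post_start&lt;/code&gt; hook needs a reasonably recent Compose release, and the service names and ports here mirror the ones in this post rather than my exact file:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;services:
  app:
    build: .
    ports:
      - "3000:3000"   # frontend
      - "5001:5001"   # backend server
  llm:
    image: ollama/ollama
    ports:
      - "11434:11434"
    post_start:
      - command: ollama pull llama3.2
&lt;/code&gt;&lt;/pre&gt;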

&lt;h3&gt;
  
  
  Docker Compose... meet your Bridge
&lt;/h3&gt;

&lt;p&gt;Docker Compose was extremely useful for this use case, since I had expected names for containers and ports that were hardcoded in my code. However, I eventually wanted to deploy this application in a way that was accessible beyond just my machine. I attempted to follow a guide for deploying a Docker Compose-orchestrated application to Amazon Elastic Container Service (ECS) via the &lt;a href="https://github.com/docker-archive/compose-cli/blob/main/INSTALL.md" rel="noopener noreferrer"&gt;Docker Compose CLI&lt;/a&gt;, only to find that the CLI command had been deprecated. Our general guidance is to use Docker Compose for bootstrapping your development efforts and Kubernetes for production applications. So, I asked around and found out about Docker Compose Bridge – and it did the trick, for the most part.&lt;/p&gt;
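
&lt;p&gt;Using it is pleasantly boring. From memory, the flow is roughly the following; the Compose Bridge docs linked below have the exact flags:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Translate compose.yaml into Kubernetes manifests (written under ./out by default)
compose-bridge convert

# Apply the generated manifests to whatever cluster kubectl currently points at
kubectl apply -k out/overlays/desktop
&lt;/code&gt;&lt;/pre&gt;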

&lt;p&gt;&lt;a href="https://docs.docker.com/compose/bridge/" rel="noopener noreferrer"&gt;Docker Compose Bridge&lt;/a&gt; takes your compose.yaml file and translates it into Kubernetes manifests. This saved me significant time translating my Compose file into the finicky manifests, all by running a quick command. I did have to make some initial tweaks to my manifests, most notably:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In my model deployment, I added a lifecycle section to execute a command after the container starts to pull the correct model from Ollama (sketched just after this list)&lt;/li&gt;
&lt;li&gt;In my server deployment, I set the imagePullPolicy to Always, since I was working off a “hotfix image” (which I begrudgingly tagged as &lt;code&gt;:latest&lt;/code&gt;, against best practice, in order to develop rapidly… the classic warning of “don’t do that in prod” applies here 😅).&lt;/li&gt;
&lt;/ul&gt;
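
&lt;p&gt;The lifecycle addition is a standard Kubernetes &lt;code&gt;postStart&lt;/code&gt; hook, roughly:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "ollama pull llama3.2"]
&lt;/code&gt;&lt;/pre&gt;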

&lt;p&gt;Then, I ran &lt;code&gt;kubectl apply&lt;/code&gt; and saw my pods spin up! Compose Bridge even exposed some LoadBalancer services for each container so my pods could communicate with each other. Things were looking swell.&lt;/p&gt;

&lt;p&gt;And yet, whoever said “learn networking before Kubernetes” was a smart LinkedIn influencer, because wow, I ran into some issues. To keep things short for you all, I discovered a few key problems:&lt;/p&gt;

&lt;h3&gt;
  
  
  Ah networking... my fickle friend
&lt;/h3&gt;

&lt;p&gt;Remember how my frontend and the backend that handles requests were coupled into one container? This isn’t typically best practice, but you could argue these logical units can live in the same container. Since I implemented it this way, I needed to expose both my frontend port (3000) and the port my server was running on (5001). This meant updating the EXPOSE instruction in my Dockerfile to cover both ports, and listing both ports in my server deployment and service manifests as well. Initially I was able to execute commands on the server running in the same container by pointing to localhost:5001, but this fell apart in the EKS deployment, which required more specific networking configuration and the use of services. I spent a lot of time testing connections by exec’ing into my app containers and running curl commands against the execute endpoint, troubleshooting why I could get a response inside the container but not in the UI. That led me to explicitly declare every port a running container listens on in Kubernetes.&lt;/p&gt;
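
&lt;p&gt;Concretely, "explicitly stating all ports" meant listing both of them everywhere a port can be declared: &lt;code&gt;EXPOSE 3000 5001&lt;/code&gt; in the Dockerfile, and something like the following in the Service manifest (names here are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: catbot-server
spec:
  type: LoadBalancer
  selector:
    app: catbot-server
  ports:
    - name: frontend
      port: 3000
      targetPort: 3000
    - name: api
      port: 5001
      targetPort: 5001
&lt;/code&gt;&lt;/pre&gt;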

&lt;p&gt;Once I had explicitly defined all my ports, I could start deploying to something internet-facing: a cloud-based Kubernetes cluster. I had been doing my testing on a single-node cluster using Minikube, so I decided to deploy to Amazon EKS.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS time! 💃
&lt;/h3&gt;

&lt;p&gt;I followed a &lt;a href="https://medium.com/@tamerbenhassan/deploying-a-simple-application-using-eks-step-by-step-guide-512b1559a7bd" rel="noopener noreferrer"&gt;Medium&lt;/a&gt; article on how to set up a cluster, which involved a few steps: creating an IAM user with various permissions related to EC2, CloudFormation, IAM, and EKS; installing the EKS CLI; running a few commands to get the cluster up and running; and creating an IAM policy for the cluster nodes themselves.&lt;/p&gt;
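
&lt;p&gt;If you want the one-liner version, cluster creation boils down to something like this with eksctl (assuming that's the CLI in question); every value here is a placeholder:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Creates the control plane plus a managed node group -- this takes a while
eksctl create cluster --name catbot --region us-east-1 --node-type t3.large --nodes 2
&lt;/code&gt;&lt;/pre&gt;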

&lt;p&gt;Once my cluster was up, and I had given myself, as the root user, permission to view information in the AWS console (LOL), I switched context from Minikube to my EKS cluster. Then I was able to run the same &lt;code&gt;kubectl apply&lt;/code&gt; command provided by the &lt;a href="https://docs.docker.com/compose/bridge/usage/" rel="noopener noreferrer"&gt;Docker Compose Bridge documentation&lt;/a&gt; to spin up my app. This led to two final problems:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I still had connection issues even though my server port was exposed. &lt;/li&gt;
&lt;li&gt;When I was able to connect, my LLM container was slow to respond.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For point one, I updated my request to use the exposed service URL from EKS as the host, since my server was now reachable through a service on port 5001 rather than on localhost. This had to be called out explicitly in my code, as I was no longer port mapping to my local machine.&lt;/p&gt;
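
&lt;p&gt;Finding that host is one command away; the EXTERNAL-IP column of the LoadBalancer service is what replaces localhost in the code (the service name is a placeholder):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;kubectl get svc catbot-server
&lt;/code&gt;&lt;/pre&gt;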

&lt;p&gt;For point two, I updated my manifests with explicit resource requests for CPU and memory. By running &lt;code&gt;kubectl top pod&lt;/code&gt; while a request was being served, I could see the pods’ consumption and bump those requests up in the manifest. This sped up the response time, but I noticed there was still a lag, which I traced in part to not leveraging the Ollama API’s stream feature to display responses one word at a time instead of as one pile of text.&lt;/p&gt;
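
&lt;p&gt;The manifest change itself is small. The numbers below are illustrative; size them from whatever &lt;code&gt;kubectl top pod&lt;/code&gt; reports under load:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;resources:
  requests:
    cpu: "2"
    memory: 4Gi
&lt;/code&gt;&lt;/pre&gt;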

&lt;h3&gt;
  
  
  So uh... did you get it working yet?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxao4p9113qv1guosd6a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxao4p9113qv1guosd6a.png" alt="CatBot app screenshot showing a conversation with the cat" width="800" height="1731"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And just like that, my CatBot app was up and running! I successfully used Docker images, Docker Compose, Docker Compose Bridge, Minikube, and Amazon EKS to talk to my cats. Get a life, am I right? Just go upstairs and talk to them, Sam! Just kidding. 😂&lt;/p&gt;

&lt;p&gt;Tune in next time where I dive into optimizing the process – automating my builds in a CI/CD pipeline, using proper versioning for my images, leveraging Docker Build Cloud for faster builds, and more.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>ai</category>
      <category>docker</category>
      <category>eks</category>
    </item>
  </channel>
</rss>
