<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Arun Rao</title>
    <description>The latest articles on DEV Community by Arun Rao (@arun12415).</description>
    <link>https://dev.to/arun12415</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3807900%2F84dfb78d-0f69-4d2e-97ab-8c0654487c59.png</url>
      <title>DEV Community: Arun Rao</title>
      <link>https://dev.to/arun12415</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/arun12415"/>
    <language>en</language>
    <item>
      <title>Mr. Chapra Milk: A Serverless Farm-to-Table Subscription Engine</title>
      <dc:creator>Arun Rao</dc:creator>
      <pubDate>Tue, 14 Apr 2026 12:19:49 +0000</pubDate>
      <link>https://dev.to/arun12415/mr-chapra-milk-a-serverless-farm-to-table-subscription-engine-125e</link>
      <guid>https://dev.to/arun12415/mr-chapra-milk-a-serverless-farm-to-table-subscription-engine-125e</guid>
      <description>&lt;p&gt;The Project Description&lt;br&gt;
The Problem: Local dairy businesses often struggle with manual order management and lack a digital presence to reach health-conscious customers.&lt;/p&gt;

&lt;p&gt;The Solution: I built Mr. Chapra Milk, a full-stack web application that automates dairy subscriptions. Its serverless design scales elastically, so it can serve 10 or 10,000 customers without slowing down.&lt;/p&gt;

&lt;p&gt;Tech Stack Highlights:&lt;/p&gt;

&lt;p&gt;Frontend: Responsive UI built with Tailwind CSS for a premium, modern feel.&lt;/p&gt;

&lt;p&gt;Compute: AWS Lambda (Node.js) to handle logic without managing servers (Serverless).&lt;/p&gt;

&lt;p&gt;API Layer: AWS API Gateway with custom REST endpoints and CORS security.&lt;/p&gt;

&lt;p&gt;Database: Amazon DynamoDB for lightning-fast NoSQL data storage.&lt;/p&gt;

&lt;p&gt;Hosting: CI/CD pipeline set up via AWS Amplify.&lt;/p&gt;

&lt;p&gt;Please check out the website and share your feedback.&lt;/p&gt;

&lt;p&gt;Live: &lt;a href="https://main.d33etlmrr49ug1.amplifyapp.com/" rel="noopener noreferrer"&gt;https://main.d33etlmrr49ug1.amplifyapp.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>productivity</category>
      <category>career</category>
    </item>
    <item>
      <title>Strands Temporal Agent: Building an AI-Powered Docker Monitor with Temporal, AWS Bedrock &amp; Ollama</title>
      <dc:creator>Arun Rao</dc:creator>
      <pubDate>Wed, 01 Apr 2026 10:15:51 +0000</pubDate>
      <link>https://dev.to/arun12415/strands-temporal-agent-building-an-ai-powered-docker-monitor-with-temporal-aws-bedrock-ollama-4acl</link>
      <guid>https://dev.to/arun12415/strands-temporal-agent-building-an-ai-powered-docker-monitor-with-temporal-aws-bedrock-ollama-4acl</guid>
      <description>&lt;p&gt;Introduction&lt;br&gt;
What if you could monitor your Docker containers just by typing plain English commands like “show nginx logs” or “is redis healthy?” — and have the system figure out the rest automatically?&lt;/p&gt;

&lt;p&gt;That’s exactly what I built with Strands Temporal Agent — an AI-powered Docker container health monitoring system that combines local LLMs, cloud AI, and fault-tolerant workflows to make container management as simple as having a conversation.&lt;/p&gt;

&lt;p&gt;In this post, I’ll walk you through what I built, the tech stack I used, the challenges I hit, and what I learned along the way.&lt;/p&gt;

&lt;p&gt;What is Strands Temporal Agent?&lt;br&gt;
Strands Temporal Agent is a Docker monitoring agent that:&lt;/p&gt;

&lt;p&gt;Accepts natural language commands from the user&lt;br&gt;
Uses AI to parse and route those commands to the right Docker operation&lt;br&gt;
Executes operations like status checks, health monitoring, log retrieval, and container restarts&lt;br&gt;
Handles failures automatically with retry policies and exponential backoff powered by Temporal&lt;/p&gt;

&lt;p&gt;Instead of memorizing Docker CLI commands, you just type what you want — and the system handles the rest.&lt;/p&gt;

&lt;p&gt;The Tech Stack&lt;br&gt;
Here’s everything I used to build this project:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;&lt;th&gt;Tool&lt;/th&gt;&lt;th&gt;Purpose&lt;/th&gt;&lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;&lt;td&gt;Python&lt;/td&gt;&lt;td&gt;Core application logic&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Temporal&lt;/td&gt;&lt;td&gt;Fault-tolerant workflow orchestration&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Docker&lt;/td&gt;&lt;td&gt;Container management&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Ollama + LLaMA 3&lt;/td&gt;&lt;td&gt;Local LLM for AI orchestration&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Amazon Bedrock&lt;/td&gt;&lt;td&gt;Cloud AI capabilities via AWS&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;AWS CLI + IAM&lt;/td&gt;&lt;td&gt;Secure AWS authentication&lt;/td&gt;&lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;System Architecture&lt;br&gt;
The system works in three layers:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;User Input (natural language)
        ↓
AI Orchestrator Activity
(parses intent → generates operation plan)
        ↓
Temporal Workflow
(executes operations with retry policies)
        ↓
Docker Activities
(status / health / logs / restart)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;When you type “show nginx logs”, here’s what happens behind the scenes:&lt;/p&gt;

&lt;p&gt;The input goes to ai_orchestrator_activity&lt;br&gt;
The AI parses it and returns a plan: logs:nginx:100&lt;br&gt;
Temporal’s workflow engine picks up the plan&lt;br&gt;
get_container_logs_activity executes with retry logic&lt;br&gt;
The result is returned to you&lt;/p&gt;

&lt;p&gt;Step 1 — Setting Up the Environment&lt;br&gt;
The first step was getting all the tools installed and configured on my local Windows machine.&lt;/p&gt;

&lt;p&gt;Ollama + LLaMA 3&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Install Ollama from https://ollama.com
ollama pull llama3
ollama run llama3
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Docker&lt;/p&gt;

&lt;p&gt;Downloaded Docker Desktop from docker.com and verified the installation:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;docker --version
docker run hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Python Environment&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;python -m venv venv
venv\Scripts\activate
pip install temporalio docker boto3
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;AWS CLI + IAM Setup&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Install the AWS CLI, then configure it with IAM credentials
aws configure
# Enter: Access Key ID, Secret Access Key, Region
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Step 2 — Building the Temporal Activities&lt;br&gt;
Temporal is the backbone of this project. It ensures that even if something fails midway, the workflow retries automatically without losing progress.&lt;/p&gt;

&lt;p&gt;I built 5 core activities:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AI Orchestrator Activity: parses natural language and returns an operation plan.&lt;/li&gt;
&lt;/ol&gt;

&lt;pre&gt;&lt;code&gt;@activity.defn
async def ai_orchestrator_activity(task: str) -&amp;gt; str:
    """Parses natural language into Docker operation plan."""
    # e.g. "show nginx logs" → "logs:nginx:100"
    # e.g. "restart postgres" → "restart:postgres"
    # e.g. "is redis healthy?" → "health:redis"
&lt;/code&gt;&lt;/pre&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Get Container Status Activity&lt;/li&gt;
&lt;/ol&gt;

&lt;pre&gt;&lt;code&gt;@activity.defn
async def get_container_status_activity(filter_by: str = None) -&amp;gt; str:
    """Returns status of all or filtered containers."""
&lt;/code&gt;&lt;/pre&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Check Container Health Activity&lt;/li&gt;
&lt;/ol&gt;

&lt;pre&gt;&lt;code&gt;@activity.defn
async def check_container_health_activity(container_name: str = None) -&amp;gt; str:
    """Checks health of specific or all running containers."""
&lt;/code&gt;&lt;/pre&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Get Container Logs Activity&lt;/li&gt;
&lt;/ol&gt;

&lt;pre&gt;&lt;code&gt;@activity.defn
async def get_container_logs_activity(container_name: str, lines: int = 100) -&amp;gt; str:
    """Retrieves last N lines of container logs."""
&lt;/code&gt;&lt;/pre&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Restart Container Activity&lt;/li&gt;
&lt;/ol&gt;

&lt;pre&gt;&lt;code&gt;@activity.defn
async def restart_container_activity(container_name: str) -&amp;gt; str:
    """Restarts a specified container."""
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Step 3 — The Temporal Workflow&lt;br&gt;
The workflow ties everything together. It calls the AI orchestrator first, then executes each operation in the plan:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;@workflow.defn
class DockerMonitorWorkflow:
    @workflow.run
    async def run(self, task: str) -&amp;gt; str:
        # Step 1: Get AI-generated operation plan
        plan = await workflow.execute_activity(
            ai_orchestrator_activity,
            task,
            start_to_close_timeout=timedelta(seconds=15),
            retry_policy=RetryPolicy(maximum_attempts=2)
        )
        # Step 2: Execute each operation
        results = []
        for operation_spec in plan.split(','):
            result = await self._execute_operation(operation_spec)
            results.append(result)
        return "\n\n".join(results)
&lt;/code&gt;&lt;/pre&gt;
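&lt;p&gt;The workflow dispatches each comma-separated plan entry to &lt;code&gt;_execute_operation&lt;/code&gt;, which the post doesn't show. As a rough sketch (my reconstruction under the plan format described here, not the project's actual code), an entry like &lt;code&gt;logs:nginx:100&lt;/code&gt; can be split into an operation name and its arguments before an activity is chosen:&lt;/p&gt;

```python
# Hypothetical helper (not from the repo): split a plan entry such as
# "logs:nginx:100" into the operation name and its argument list.
def parse_operation(spec: str):
    """Split 'op:arg1:arg2' into (op, [args])."""
    parts = spec.strip().split(":")
    return parts[0], parts[1:]

print(parse_operation("logs:nginx:100"))  # ('logs', ['nginx', '100'])
print(parse_operation("health:redis"))    # ('health', ['redis'])
print(parse_operation("status"))          # ('status', [])
```

&lt;p&gt;From there, a simple mapping from operation name to activity function is enough to drive the dispatch.&lt;/p&gt;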

&lt;p&gt;Each activity has its own &lt;strong&gt;retry policy&lt;/strong&gt; tuned to the operation type. For example, restarts get 5 retry attempts with 30-second timeouts, while status checks only need 3 attempts with 10-second timeouts.&lt;/p&gt;

&lt;p&gt;Step 4 — The AI Parser&lt;br&gt;
This was the most interesting part to build. The AI orchestrator needed to understand varied natural language inputs and map them to structured operation strings. For example:&lt;/p&gt;
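&lt;p&gt;To make the exponential backoff behaviour concrete, here is a tiny standalone sketch (illustrative only; &lt;code&gt;backoff_schedule&lt;/code&gt; is my own name, not part of Temporal's API) of how the retry intervals grow under a policy like the ones above:&lt;/p&gt;

```python
# Illustrative only: the waits produced by exponential backoff,
# interval_n = initial * coefficient ** (n - 1), as in Temporal's RetryPolicy.
def backoff_schedule(initial_s: float, coefficient: float, max_attempts: int):
    """Return the wait (in seconds) before each retry after the first attempt."""
    return [initial_s * coefficient ** n for n in range(max_attempts - 1)]

# 5 attempts with a 1-second initial interval and 2x coefficient
print(backoff_schedule(1.0, 2.0, 5))  # [1.0, 2.0, 4.0, 8.0]
```

&lt;p&gt;Temporal also caps these intervals with a maximum interval setting, so long-running retries don't grow without bound.&lt;/p&gt;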



&lt;p&gt;"show nginx logs"          → "logs:nginx:100"&lt;br&gt;
"analyze nginx logs"       → "logs:nginx:100"&lt;br&gt;
"is redis healthy?"        → "health:redis"&lt;br&gt;
"check nginx health and show logs" → "health:nginx,logs:nginx:100"&lt;br&gt;
"restart my postgres container"    → "restart:postgres"&lt;br&gt;
The key challenge was building a stop-words filter that ignores filler words and correctly extracts the container name:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;STOP_WORDS = {
    "restart", "container", "the", "please", "a", "an",
    "can", "you", "my", "show", "logs", "log", "check",
    "health", "healthy", "analyze", "fetch", "get", ...
}

def find_container(words):
    for w in words:
        if w and w not in STOP_WORDS and not w.isdigit():
            return w
    return None
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Challenges I Faced&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;SyntaxError — Unterminated Triple-Quoted String&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The most frustrating bug was a SyntaxError caused by leftover code fragments from an earlier version of the file. The old LLM-based orchestrator code wasn't fully removed, leaving orphaned text that Python couldn't parse. Lesson learned: always verify syntax with:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;python -c "import ast; ast.parse(open('file.py').read()); print('Syntax OK')"
&lt;/code&gt;&lt;/pre&gt;
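&lt;p&gt;The same check can live in a small helper (a sketch of my own, using only the standard library's &lt;code&gt;ast&lt;/code&gt; module) so it can run from a pre-commit hook or a test suite:&lt;/p&gt;

```python
import ast

# Sketch: programmatic version of the one-liner above. Catches errors
# like an unterminated triple-quoted string before the code ever runs.
def check_syntax(source: str) -> str:
    try:
        ast.parse(source)
        return "Syntax OK"
    except SyntaxError as e:
        return f"SyntaxError at line {e.lineno}: {e.msg}"

print(check_syntax("x = 1"))        # Syntax OK
print(check_syntax('s = """oops'))  # reports the unterminated string
```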

&lt;p&gt;&lt;strong&gt;2. Dead Code — Unreachable Combined Block&lt;/strong&gt;&lt;br&gt;
My original parser had a COMBINED section for handling queries like &lt;em&gt;"check nginx health and show logs"&lt;/em&gt; — but it was placed after standalone health and log blocks that always returned first. The combined block was completely unreachable. I fixed this by merging all three into a single unified block.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Stop-Words Too Narrow&lt;/strong&gt;&lt;br&gt;
Words like &lt;code&gt;"analyze"&lt;/code&gt; and &lt;code&gt;"fetch"&lt;/code&gt; weren't in my stop-words list, so they were being grabbed as container names. &lt;em&gt;"Analyze nginx logs"&lt;/em&gt; was routing to &lt;code&gt;logs:analyze:100&lt;/code&gt; instead of &lt;code&gt;logs:nginx:100&lt;/code&gt;. The fix was expanding the stop-words set with common action verbs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Error Strings Being Executed as Operations&lt;/strong&gt;&lt;br&gt;
When no container name was found, my parser returned &lt;code&gt;"Error: logs requires a container name"&lt;/code&gt;. The workflow then tried to execute &lt;code&gt;"error"&lt;/code&gt; as an operation name, returning &lt;code&gt;Unknown operation: error&lt;/code&gt;. The fix was to fall back to &lt;code&gt;"status"&lt;/code&gt; instead of returning an error string.&lt;/p&gt;

&lt;p&gt;The Result&lt;br&gt;
After all fixes, the system works smoothly:&lt;/p&gt;
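&lt;p&gt;Fix number 4 can be sketched in isolation (hypothetical, simplified names; the real parser is larger): when no candidate survives the stop-words filter, return the safe &lt;code&gt;status&lt;/code&gt; operation rather than an error string the workflow would try to execute:&lt;/p&gt;

```python
# Hypothetical, simplified version of the fix for challenge 4.
STOP_WORDS = {"show", "logs", "log", "the", "my", "check", "health", "restart"}

def plan_logs(words):
    """Build a logs operation, falling back to 'status' when no container is found."""
    container = next(
        (w for w in words if w and w not in STOP_WORDS and not w.isdigit()),
        None,
    )
    # Fall back to a broad status check rather than an "Error: ..." string
    # that the workflow would try to execute as an operation name.
    return f"logs:{container}:100" if container else "status"

print(plan_logs(["show", "nginx", "logs"]))  # logs:nginx:100
print(plan_logs(["show", "logs"]))           # status
```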



&lt;pre&gt;&lt;code&gt;Enter task: show nginx logs
→ logs:nginx:100 executed ✅

Enter task: is redis healthy?
→ health:redis executed ✅

Enter task: restart postgres
→ restart:postgres executed ✅

Enter task: check nginx health and show logs
→ health:nginx,logs:nginx:100 executed ✅
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The Temporal UI at localhost:8233 shows every workflow execution with full event history, activity timelines, inputs, outputs, and retry attempts — making debugging incredibly easy.&lt;/p&gt;

&lt;p&gt;Key Takeaways&lt;br&gt;
Temporal is a game-changer for reliability. Writing retry logic manually is error-prone and tedious. Temporal handles it declaratively — you just define the policy and it takes care of the rest.&lt;/p&gt;

&lt;p&gt;Local LLMs are surprisingly capable. Running LLaMA 3 locally via Ollama gave me a fully offline AI layer with no API costs and no latency from cloud round-trips.&lt;/p&gt;

&lt;p&gt;NLP parsing is harder than it looks. Even simple rule-based parsing has edge cases. Real user input is messy — abbreviations, typos, extra words, unexpected word order. Always test with varied inputs.&lt;/p&gt;

&lt;p&gt;Always verify syntax programmatically. A one-liner AST check would have saved me an hour of debugging.&lt;/p&gt;

&lt;p&gt;What’s Next&lt;br&gt;
Scheduled health checks using Temporal’s cron scheduling&lt;br&gt;
Alerting when a container goes unhealthy&lt;br&gt;
Web dashboard to visualize container status in real time&lt;br&gt;
Kubernetes support — extend beyond Docker to K8s pods&lt;br&gt;
Source Code&lt;br&gt;
The full source code is available on GitHub: &lt;a href="https://github.com/Arun12415/strands-temporal-agents.git" rel="noopener noreferrer"&gt;https://github.com/Arun12415/strands-temporal-agents.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Final Thoughts&lt;br&gt;
Strands Temporal Agent started as a learning project and turned into something genuinely useful. If you’re exploring Temporal, local LLMs, or Docker automation — I hope this post gives you a solid starting point.&lt;/p&gt;

&lt;p&gt;Have questions or suggestions? Drop them in the comments — I’d love to hear your thoughts!&lt;/p&gt;

&lt;p&gt;Tags: #Python #Docker #AWS #Temporal #LLM #Ollama #AmazonBedrock #DevOps #AI #LLaMA&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>docker</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>DevOps AI Platform: Jenkins + Docker + Grafana + SonarQube</title>
      <dc:creator>Arun Rao</dc:creator>
      <pubDate>Thu, 05 Mar 2026 12:23:40 +0000</pubDate>
      <link>https://dev.to/arun12415/devops-ai-platform-jenkins-docker-grafana-sonarqube-3k2o</link>
      <guid>https://dev.to/arun12415/devops-ai-platform-jenkins-docker-grafana-sonarqube-3k2o</guid>
      <description>&lt;p&gt;I Didn't Just Build the App — I Deployed It Like a Real DevOps Team&lt;br&gt;
Let me be honest with you.&lt;br&gt;
I got tired of watching developers Google the same DevOps questions over and over. How do I write a Dockerfile? What does this Terraform block actually do? Where do I even start with a CI/CD pipeline?&lt;br&gt;
So I decided to build something that could just... answer those questions.&lt;/p&gt;

&lt;p&gt;The idea was simple. What if there was an AI assistant that spoke fluent DevOps? Not a generic chatbot — something that actually understands what you're trying to do when you're stuck at 2am trying to containerize your app.&lt;br&gt;
That became the DevOps AI Platform. You ask it something like "give me a Docker Compose file for Node.js and PostgreSQL" and it gives you something you can actually use — not a vague explanation, a real working example.&lt;br&gt;
The whole thing runs locally on Windows inside Docker, so there's zero cloud cost and zero complicated setup. Clone it, run it, use it.&lt;br&gt;
Then I Asked Myself a Harder Question&lt;br&gt;
Once the app was working, I sat back and thought — okay, but what would an actual DevOps team do with this?&lt;br&gt;
That question changed everything.&lt;br&gt;
Because any team worth their salt wouldn't just run the app. They'd have automated pipelines, containerized builds, live monitoring, and code quality checks. So I went back and built all of that around my own platform.&lt;/p&gt;

&lt;p&gt;The CI/CD Pipeline&lt;br&gt;
I wired up Jenkins with GitHub so that every single code push automatically kicks off a pipeline. It pulls the code, runs the build, packages everything into a Docker image, pushes it to Docker Hub, and deploys the updated container.&lt;br&gt;
No clicking. No manual steps. Push code, walk away, it handles the rest.&lt;/p&gt;

&lt;p&gt;Containerization&lt;br&gt;
The app lives inside Docker. Every version gets built into an image and stored on Docker Hub. That means I can roll back instantly if something breaks, and anyone can pull and run the exact same environment I'm using. No "works on my machine" problems.&lt;/p&gt;

&lt;p&gt;Code Quality&lt;br&gt;
Before anything ships, SonarQube runs a full static analysis pass. It catches bugs, security issues, code smells, duplicated logic — the stuff that looks fine today but bites you six months later. It's become one of those tools I didn't know I needed until I couldn't imagine working without it.&lt;/p&gt;

&lt;p&gt;Monitoring&lt;br&gt;
Once everything was running, I wanted to actually see what was happening. I set up Prometheus to collect metrics and Grafana to display them on live dashboards — CPU, memory, request rates, all of it.&lt;br&gt;
It's the difference between flying blind and actually knowing your system is healthy.&lt;/p&gt;

&lt;p&gt;Tech used:&lt;br&gt;
AI Assistant: OpenAI API&lt;br&gt;
Version Control: GitHub&lt;br&gt;
CI/CD: Jenkins&lt;br&gt;
Containers: Docker + Docker Hub&lt;br&gt;
Code Quality: SonarQube&lt;br&gt;
Metrics: Prometheus&lt;br&gt;
Dashboards: Grafana&lt;br&gt;
Environment: Windows (local) / Linux&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/Arun12415/devops-ai.git" rel="noopener noreferrer"&gt;https://github.com/Arun12415/devops-ai.git&lt;/a&gt;&lt;br&gt;
Portfolio: &lt;br&gt;
&lt;a href="http://arun-cloud-portfolio-2026.s3-website.ap-south-1.amazonaws.com/" rel="noopener noreferrer"&gt;http://arun-cloud-portfolio-2026.s3-website.ap-south-1.amazonaws.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>devops</category>
      <category>career</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
