Why I'm Sharing This Journey
Hi! I'm Nazmur, a Junior Linux Administrator who's decided to level up. Like many in operations, I've realized that the future belongs to those who can bridge traditional sysadmin work with modern DevOps practices and emerging AI technologies.
This is the first post in my journey from Linux admin → DevOps Engineer → AI/ML Engineer.
Where I Started
As a Linux admin, my daily work involves:
- Managing and monitoring Linux servers
- Writing bash scripts to automate repetitive tasks
- Troubleshooting system issues
- Ensuring uptime and reliability
It's solid work, but I kept asking myself: "How can I do this faster? More efficiently? What's next?"
The Three Pillars I'm Focusing On
1. DevOps & Cloud
- Kubernetes - Container orchestration
- Docker - Containerization
- CI/CD - GitHub Actions, Jenkins
- Infrastructure as Code - Terraform
- Cloud Platforms - AWS, Azure
2. Generative AI & Agentic AI
- RAG (Retrieval-Augmented Generation) - Building context-aware AI systems
- LangChain & LlamaIndex - AI agent frameworks
- LLM Integration - Working with GPT, open-source models
- Prompt Engineering - Getting the best from AI
3. Python Deep Dive
I already know Python basics, but now I'm focusing on:
- Automation scripts for infrastructure
- API integrations for AI agents
- Building tools that bridge ops and AI
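To make the "bridge ops and AI" idea concrete, here's a minimal sketch (my own illustrative example, using only the standard library) of the kind of automation helper I mean. A function like this can later be exposed to an LLM-based agent as a tool:

```python
import shutil

def disk_usage_report(path: str = "/", warn_pct: float = 80.0) -> str:
    """Return a one-line disk usage summary for `path`.

    Illustrative sketch: small, well-described functions like this
    are exactly what agent frameworks let an LLM call as "tools".
    """
    usage = shutil.disk_usage(path)
    pct = usage.used / usage.total * 100
    status = "WARN" if pct >= warn_pct else "OK"
    free_gib = usage.free // 2**30
    return f"{status}: {path} is {pct:.1f}% full ({free_gib} GiB free)"

print(disk_usage_report("/"))
```

The `warn_pct` threshold and the exact output format are arbitrary choices here; the point is a plain function with a docstring an agent can reason about.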
What I've Built So Far
Week 1-2: Kubernetes Homelab
I set up k3s on an old laptop to get hands-on:
```bash
# Install k3s
curl -sfL https://get.k3s.io | sh -

# Check my cluster
kubectl get nodes
```
Result: A working Kubernetes cluster in my living room!
Week 3: First RAG Agent
I built a simple RAG (Retrieval-Augmented Generation) agent that:
- Ingests documents
- Creates embeddings
- Answers questions based on the documents
Here's a snippet:
```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Load documents
loader = TextLoader("my_docs.txt")
documents = loader.load()

# Split into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000)
docs = text_splitter.split_documents(documents)

# Create vector store
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(docs, embeddings)
```
It's basic, but it works! Next step: adding memory and tool use.
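Under the hood, retrieval is just nearest-neighbour search over embedding vectors. Here's a dependency-free toy sketch (my own illustration, not the LangChain pipeline above) that mimics the idea with bag-of-words counts and cosine similarity — real systems use dense model embeddings and an index like FAISS instead:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words frequency vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / ((na * nb) or 1)

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "k3s is a lightweight Kubernetes distribution",
    "FAISS stores embeddings for fast similarity search",
    "Terraform describes infrastructure as code",
]
print(retrieve("what is kubernetes?", chunks))
```

Swapping `embed` for a real embedding model and `retrieve` for a FAISS index is conceptually all the LangChain snippet above does — which is why starting simple made the framework much less mysterious to me.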
My Learning Resources
| Topic | Resources |
| --- | --- |
| Kubernetes | KodeKloud, K8s Official Docs |
| AI/LLM | LangChain Docs, DeepLearning.AI |
| DevOps | TechWorld with Nana, Adrian Cantrill |
What's Next
Next week:
- Deploy a real app on my K8s cluster
- Add memory to my RAG agent
- Write a detailed Terraform setup guide
By end of month:
- Full CI/CD pipeline with GitHub Actions
- Deploy AI agent as a web service
- Document everything (this helps others and solidifies my learning)
Why This Matters to You
If you're also transitioning from sysadmin to DevOps or AI, here's what I'm learning:
- Your Linux skills are invaluable — cloud is just Linux at scale
- Start with a homelab — break things without fear
- Build publicly — share what you learn (like this post!)
- Focus on fundamentals — containers, networking, automation
Let's Connect
I'll be posting weekly about:
- Kubernetes deep dives
- AI agent experiments
- DevOps project builds
- Career transition lessons
Follow along if you're on a similar journey!
📦 GitHub: github.com/nazmur96
💼 LinkedIn: linkedin.com/in/nazmur96
From servers to agents — building the future, one line of code at a time.
If this post helped you, leave a comment or reaction! I'd love to hear about your journey too.