<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Femi A</title>
    <description>The latest articles on DEV Community by Femi A (@deckmaster101).</description>
    <link>https://dev.to/deckmaster101</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3925118%2Fb568144c-2d7d-48ca-97bb-7e2d6f0d8ba6.jpg</url>
      <title>DEV Community: Femi A</title>
      <link>https://dev.to/deckmaster101</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/deckmaster101"/>
    <language>en</language>
    <item>
      <title>Build a Kubernetes Log Pipeline in 5 Minutes (No YAML Required)</title>
      <dc:creator>Femi A</dc:creator>
      <pubDate>Mon, 11 May 2026 14:06:29 +0000</pubDate>
      <link>https://dev.to/deckmaster101/build-a-kubernetes-log-pipeline-in-5-minutes-no-yaml-required-11l2</link>
      <guid>https://dev.to/deckmaster101/build-a-kubernetes-log-pipeline-in-5-minutes-no-yaml-required-11l2</guid>
      <description>&lt;p&gt;If you've ever tried to set up proper logging in a Kubernetes cluster, you know the drill: research log collectors, write a DaemonSet manifest, figure out the right volume mounts for &lt;code&gt;/var/log/pods&lt;/code&gt;, configure a parsing pipeline, set up a destination, test it, break it, fix it, and eventually ship something that mostly works but nobody wants to touch again.&lt;/p&gt;

&lt;p&gt;It doesn't have to be that complicated.&lt;/p&gt;

&lt;p&gt;In this guide, we'll go from a bare Kubernetes cluster to a fully structured log pipeline — collecting, parsing, and routing logs — in about five minutes. No YAML to write. No VRL to learn. Just a working pipeline.&lt;/p&gt;

&lt;h2&gt;What we're building&lt;/h2&gt;

&lt;p&gt;By the end of this tutorial, you'll have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A lightweight agent running as a DaemonSet on your cluster (~20–50 MB RAM, &amp;lt;1% CPU)&lt;/li&gt;
&lt;li&gt;Automatic collection of all pod logs (stdout/stderr)&lt;/li&gt;
&lt;li&gt;Structured parsing — Kubernetes metadata (pod name, namespace, container) attached to every log line&lt;/li&gt;
&lt;li&gt;A destination of your choice receiving clean, structured JSON&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The agent handles log collection using &lt;a href="https://vector.dev" rel="noopener noreferrer"&gt;Vector&lt;/a&gt; under the hood. The control plane handles configuration. &lt;strong&gt;Your logs never leave your infrastructure&lt;/strong&gt; until they hit the destination you choose.&lt;/p&gt;

&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A Kubernetes cluster (any flavour — EKS, GKE, AKS, k3s, minikube, kind)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubectl&lt;/code&gt; configured and pointing at the cluster&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://transformd.co/register" rel="noopener noreferrer"&gt;Transformd account&lt;/a&gt; (the free tier works fine for this)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's it. No Helm charts to customise, no CRDs to install.&lt;/p&gt;
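
&lt;p&gt;Before moving on, it's worth a ten-second sanity check that &lt;code&gt;kubectl&lt;/code&gt; is pointed at the cluster you think it is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Which cluster am I about to install into?
kubectl config current-context

# Are the nodes reachable and Ready?
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;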




&lt;h2&gt;Step 1: Create an infrastructure&lt;/h2&gt;

&lt;p&gt;Log into &lt;a href="https://transformd.co" rel="noopener noreferrer"&gt;Transformd&lt;/a&gt; and click &lt;strong&gt;New Infrastructure&lt;/strong&gt;. Give it a name — something like &lt;code&gt;staging-cluster&lt;/code&gt; or &lt;code&gt;dev-k8s&lt;/code&gt;. Select &lt;strong&gt;Kubernetes&lt;/strong&gt; as the type. You'll get an infrastructure token. Copy it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;What's an infrastructure?&lt;/strong&gt; An infrastructure in Transformd represents a deployment target — a cluster, a set of VMs, a single server. Each infrastructure gets its own agent and its own set of pipelines.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Step 2: Install the agent&lt;/h2&gt;

&lt;p&gt;The infrastructure page shows a one-line install command. For Kubernetes, it's a &lt;code&gt;kubectl apply&lt;/code&gt; that deploys the agent as a DaemonSet:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://transformd.co/install/&amp;lt;your-token&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That single command creates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;code&gt;transformd&lt;/code&gt; namespace&lt;/li&gt;
&lt;li&gt;A DaemonSet running the agent on every node&lt;/li&gt;
&lt;li&gt;A ServiceAccount with read-only access to pods and namespaces (for metadata enrichment)&lt;/li&gt;
&lt;li&gt;Volume mounts for &lt;code&gt;/var/log/pods&lt;/code&gt; (read-only)&lt;/li&gt;
&lt;/ul&gt;
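
&lt;p&gt;You can confirm all of that landed with standard &lt;code&gt;kubectl&lt;/code&gt; commands. The namespace comes from the installer; the label selector below is an assumption — check your pods' actual labels with &lt;code&gt;--show-labels&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# One agent pod per node, all Running
kubectl get daemonset,pods -n transformd

# If the dashboard doesn't show "Connected", the agent's own logs
# usually say why (label assumed — verify with --show-labels)
kubectl logs -n transformd -l app=transformd-agent --tail=20
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;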

&lt;p&gt;The agent connects to the control plane over an outbound encrypted tunnel. &lt;strong&gt;No inbound ports, no LoadBalancer, no ingress rules needed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After a few seconds, your infrastructure should show as "Connected" in the dashboard.&lt;/p&gt;

&lt;h2&gt;Step 3: Create a pipeline&lt;/h2&gt;

&lt;p&gt;Click into your infrastructure and hit &lt;strong&gt;New Pipeline&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The source is already configured: Kubernetes pod logs. The agent automatically collects stdout/stderr from every pod in the cluster and attaches Kubernetes metadata (pod name, namespace, container name, node name, labels).&lt;/p&gt;

&lt;p&gt;Now add a transform. Click the &lt;strong&gt;+&lt;/strong&gt; button and choose &lt;strong&gt;Parse JSON&lt;/strong&gt;. Many Kubernetes applications emit their logs as JSON strings — this transform parses that string into structured fields. If a log line isn't valid JSON, it passes through unchanged (no crash, no data loss).&lt;/p&gt;
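
&lt;p&gt;To make that concrete, here's roughly what Parse JSON does to a single event — a sketch only, since the exact field names and nesting depend on your pipeline:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Before: the app's JSON arrives as one opaque string
{"namespace": "shop", "pod": "checkout-7d4f", "message": "{\"level\":\"error\",\"msg\":\"payment timeout\",\"order_id\":4412}"}

# After: the string becomes queryable fields alongside the Kubernetes metadata
{"namespace": "shop", "pod": "checkout-7d4f", "level": "error", "msg": "payment timeout", "order_id": 4412}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;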

&lt;h2&gt;Step 4: Add a destination&lt;/h2&gt;

&lt;p&gt;Click &lt;strong&gt;Add Destination&lt;/strong&gt; and pick where you want your logs to land:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Destination&lt;/th&gt;
&lt;th&gt;Best for&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Grafana Loki&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Teams already using the Grafana stack&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Elasticsearch&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Teams with existing ELK infrastructure&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AWS S3&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cost-effective long-term storage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AWS CloudWatch&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AWS-native teams&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Datadog&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Filter and transform before sending to Datadog&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Kafka&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Event streaming architectures&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Splunk&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Enterprise SIEM and log analysis&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Enter your destination's connection details (URL, auth token, index name — whatever the destination needs). The agent connects directly to your destination from your infrastructure. &lt;strong&gt;The control plane never sees the connection credentials&lt;/strong&gt; — they're stored encrypted on the agent.&lt;/p&gt;

&lt;h2&gt;Step 5: Deploy and verify&lt;/h2&gt;

&lt;p&gt;Hit &lt;strong&gt;Deploy&lt;/strong&gt;. The control plane pushes the pipeline configuration to your agent over the encrypted tunnel. Within a few seconds, logs start flowing.&lt;/p&gt;

&lt;p&gt;You can verify in two ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;In Transformd:&lt;/strong&gt; The pipeline view shows live metrics — events in, events out, errors, throughput. You'll see numbers ticking up immediately.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;In your destination:&lt;/strong&gt; Check your Loki/Elasticsearch/S3 for incoming structured log entries. They should have Kubernetes metadata (namespace, pod, container) as top-level fields.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;That's it.&lt;/strong&gt; You now have a production-grade Kubernetes log pipeline. Every pod in your cluster is sending structured, parsed logs to your destination. It took one &lt;code&gt;kubectl&lt;/code&gt; command and a few clicks.&lt;/p&gt;
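
&lt;p&gt;If Loki is your destination, for example, a quick spot check from the command line with Grafana's &lt;code&gt;logcli&lt;/code&gt; looks like this (the address is a placeholder, and the &lt;code&gt;namespace&lt;/code&gt; label assumes your pipeline emits it as a Loki label):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Pull the five most recent lines from one namespace
logcli --addr=https://loki.example.com query '{namespace="default"}' --limit=5
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;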




&lt;h2&gt;What makes this different&lt;/h2&gt;

&lt;p&gt;If you've used Fluentd, Fluent Bit, or raw Vector configs before, you might be wondering what Transformd actually does differently. A few things:&lt;/p&gt;

&lt;h3&gt;Your logs never leave your network&lt;/h3&gt;

&lt;p&gt;Transformd uses a split architecture. The control plane manages pipeline configuration — it never touches your log data. The agent runs on your nodes, processes logs locally, and sends them straight to your destination. This matters for compliance (GDPR, SOC 2, HIPAA) and for cost (no per-GB ingest fees from a middleman).&lt;/p&gt;

&lt;h3&gt;No YAML, no VRL, no config files&lt;/h3&gt;

&lt;p&gt;The pipeline builder is visual. You pick transforms from 80+ templates (parsing, filtering, enrichment, redaction, etc.), configure parameters via form fields, and preview the output against live data before deploying. The platform generates the VRL (Vector Remap Language) code for you.&lt;/p&gt;

&lt;h3&gt;Autopilot discovery&lt;/h3&gt;

&lt;p&gt;After installing the agent, Transformd's Autopilot can scan your cluster and discover every log source automatically — every pod, every namespace, every log format. It even suggests which transforms to apply. You can go from zero to full-cluster coverage in a couple of minutes.&lt;/p&gt;

&lt;h3&gt;Flat pricing, no ingest fees&lt;/h3&gt;

&lt;p&gt;Traditional log platforms charge per GB of ingest. With Transformd, pricing is based on the number of infrastructures, not data volume. The free tier gives you 1 infrastructure with unlimited pipelines.&lt;/p&gt;




&lt;h2&gt;What to do next&lt;/h2&gt;

&lt;p&gt;Now that you have a working pipeline, here are some things worth trying:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Filter noisy namespaces.&lt;/strong&gt; Add a "Filter" transform to drop logs from &lt;code&gt;kube-system&lt;/code&gt; or other noisy namespaces. You'll immediately reduce your log volume (and your destination bill) without losing anything you care about.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parse application-specific formats.&lt;/strong&gt; If you have services logging in non-JSON formats (Nginx access logs, syslog, custom formats), add a "Parse with regex" or "Parse key-value" transform. You can have multiple transforms in a pipeline, and they're applied in order.&lt;/p&gt;
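
&lt;p&gt;As an example, a standard Nginx access log line already carries plenty of structure — a regex transform just gives it names (the field names below are whatever you choose in your pattern):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Raw line
203.0.113.7 - - [11/May/2026:14:02:11 +0000] "GET /api/orders HTTP/1.1" 200 512

# After a regex parse
{"client": "203.0.113.7", "method": "GET", "path": "/api/orders", "status": 200, "bytes": 512}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;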

&lt;p&gt;&lt;strong&gt;Route different logs to different destinations.&lt;/strong&gt; Security audit logs to your SIEM, application logs to Loki, debug logs to &lt;code&gt;/dev/null&lt;/code&gt;. You can add up to 20 destinations to a single pipeline with per-destination filters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set up alerts.&lt;/strong&gt; Transformd can fire alerts when specific conditions appear in your log stream — error rate spikes, crash loops, absence of expected logs, slow requests. Alerts route to Slack, PagerDuty, Microsoft Teams, or webhooks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Download a Grafana dashboard.&lt;/strong&gt; Transformd ships pre-built Grafana dashboards for Kubernetes, Linux, and Docker. Download the JSON from the infrastructure overview page, import into Grafana, and you have instant log visibility.&lt;/p&gt;
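
&lt;p&gt;If you'd rather script the import than click through the Grafana UI, the dashboard JSON can be pushed to Grafana's HTTP API — note that the endpoint wants the dashboard wrapped in an envelope, not the raw export (the host, token, and filename below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Wrap the export as {"dashboard": ..., "overwrite": true}; null the id,
# since ids are instance-specific and a stale one makes the import fail
jq '{dashboard: (. + {id: null}), overwrite: true}' transformd-k8s-dashboard.json \
  | curl -s -X POST https://grafana.example.com/api/dashboards/db \
      -H "Authorization: Bearer $GRAFANA_TOKEN" \
      -H "Content-Type: application/json" \
      --data-binary @-
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;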




&lt;h2&gt;FAQ&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Does this work with managed Kubernetes (EKS, GKE, AKS)?&lt;/strong&gt;&lt;br&gt;
Yes. The agent runs as a DaemonSet and reads pod logs from the node filesystem, so it works identically on managed and self-hosted clusters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What resources does the agent use?&lt;/strong&gt;&lt;br&gt;
Typically ~20–50 MB RAM and &amp;lt;1% CPU. We recommend 100m CPU request / 500m limit and 128Mi memory request / 512Mi limit. It's based on Vector, which is written in Rust and extremely efficient.&lt;/p&gt;
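
&lt;p&gt;If you want to pin those numbers explicitly, &lt;code&gt;kubectl set resources&lt;/code&gt; can patch the DaemonSet in place. The DaemonSet name below is a guess — check &lt;code&gt;kubectl get daemonset -n transformd&lt;/code&gt; for the real one:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Apply the recommended requests/limits to the agent (name assumed)
kubectl set resources daemonset/transformd-agent -n transformd \
  --requests=cpu=100m,memory=128Mi \
  --limits=cpu=500m,memory=512Mi
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;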

&lt;p&gt;&lt;strong&gt;What if the control plane goes down?&lt;/strong&gt;&lt;br&gt;
Your pipelines keep running. The agent operates on its last-known configuration. You lose the ability to make changes until the control plane recovers, but log collection and routing continue uninterrupted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does Transformd see my logs?&lt;/strong&gt;&lt;br&gt;
No. The agent processes logs on your infrastructure and sends them directly to your destination. The control plane only handles pipeline configuration and health metrics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is there a free tier?&lt;/strong&gt;&lt;br&gt;
Yes — 1 infrastructure, unlimited pipelines, 2 team members, no credit card required. &lt;a href="https://transformd.co/register" rel="noopener noreferrer"&gt;Sign up here&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Transformd is a log pipeline platform built on &lt;a href="https://vrl.dev" rel="noopener noreferrer"&gt;VRL&lt;/a&gt; — the transform engine used by Datadog, Cloudflare, and AWS. &lt;a href="https://transformd.co/register" rel="noopener noreferrer"&gt;Try it free&lt;/a&gt; or &lt;a href="https://transformd.co/blog" rel="noopener noreferrer"&gt;read more on the blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>logging</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
