<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hemanth</title>
    <description>The latest articles on DEV Community by Hemanth (@hemanth_22799aec3766938fd).</description>
    <link>https://dev.to/hemanth_22799aec3766938fd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1527654%2Fd63425a1-d6fa-4a9b-9648-a53e42bf6988.png</url>
      <title>DEV Community: Hemanth</title>
      <link>https://dev.to/hemanth_22799aec3766938fd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hemanth_22799aec3766938fd"/>
    <language>en</language>
    <item>
      <title>How I Ship systemd Logs to CloudWatch for $0 (Django + Celery on EC2)</title>
      <dc:creator>Hemanth</dc:creator>
      <pubDate>Tue, 21 Apr 2026 18:32:46 +0000</pubDate>
      <link>https://dev.to/hemanth_22799aec3766938fd/how-i-ship-systemd-logs-to-cloudwatch-for-0-django-celery-on-ec2-1844</link>
      <guid>https://dev.to/hemanth_22799aec3766938fd/how-i-ship-systemd-logs-to-cloudwatch-for-0-django-celery-on-ec2-1844</guid>
      <description>&lt;p&gt;Running Django and Celery as systemd services on EC2 and tired of SSH-ing in to debug? Here's the exact setup I used to ship logs to CloudWatch Logs for free, without touching a single production service. Real commands, real configs, real gotchas included.&lt;/p&gt;

&lt;p&gt;I was SSH-ing into production to debug. Every. Single. Time.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;journalctl -u gunicorn.service -f&lt;/code&gt; was my monitoring stack. It worked until I needed to see what happened three hours ago on a Celery task that silently failed. SSH in, scroll up, hope journald hadn't rotated it yet.&lt;/p&gt;

&lt;p&gt;I needed logs off the server, queryable, and retained without paying $40/month for a logging SaaS I'd barely use. So I set up CloudWatch Logs with the Amazon CloudWatch Agent. Total monthly cost: $0.&lt;/p&gt;

&lt;p&gt;This is exactly how I did it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the CloudWatch Agent Actually Does
&lt;/h2&gt;

&lt;p&gt;Before installing anything, understand the mechanism.&lt;/p&gt;

&lt;p&gt;The agent is not a real-time forwarder. Every 5 seconds it wakes up, reads whatever new bytes have appeared in your log file since its last pass (it stores a byte offset per file), compresses them, and ships one HTTPS batch to CloudWatch.&lt;/p&gt;

&lt;p&gt;Your application has zero awareness the agent exists. It reads from outside your process, like &lt;code&gt;tail -f&lt;/code&gt; does. Django doesn't know. Daphne doesn't know. Celery doesn't know.&lt;/p&gt;

&lt;p&gt;Resource profile on a production EC2: roughly 0.1% CPU at steady state with a tiny spike every 5 seconds on flush, 35-60 MB RAM fixed regardless of log volume, and a few KB of compressed network per flush. Not worth worrying about.&lt;/p&gt;

&lt;p&gt;Logs show up in CloudWatch within 5-15 seconds of being written. Not real-time, near-real-time. Good enough for debugging a failed task after the fact.&lt;/p&gt;
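&lt;p&gt;The read loop is easy to picture with a toy sketch. This is not the agent's actual code, just the stored-offset-plus-delta idea in plain shell (the file paths are throwaway examples):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;LOG=/tmp/demo.log
STATE=/tmp/demo.offset
echo "first line" &amp;gt; "$LOG"
echo 0 &amp;gt; "$STATE"

ship_new_bytes() {
  local offset new_size
  offset=$(cat "$STATE")
  new_size=$(wc -c &amp;lt; "$LOG")
  if [ "$new_size" -gt "$offset" ]; then
    # read only the bytes appended since the last pass, like the agent does
    tail -c +"$((offset + 1))" "$LOG"
    echo "$new_size" &amp;gt; "$STATE"
  fi
}

ship_new_bytes                    # prints "first line"
echo "second line" &amp;gt;&amp;gt; "$LOG"
ship_new_bytes                    # prints only "second line"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Kill the loop and restart it and nothing is lost or duplicated, because the offset lives on disk. That's also why the agent survives its own restarts cleanly.&lt;/p&gt;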

&lt;h2&gt;
  
  
  The Cost Reality
&lt;/h2&gt;

&lt;p&gt;CloudWatch Logs free tier per account per month: 5 GB ingestion, 5 GB storage.&lt;/p&gt;

&lt;p&gt;A typical Django/Celery setup at &lt;code&gt;INFO&lt;/code&gt; log level generates roughly 80-100 MB/month. With 7-day retention, only about 20 MB sits in storage at any given moment. The free tier covers this comfortably.&lt;/p&gt;

&lt;p&gt;The only thing that'll push you over is leaving Django at &lt;code&gt;DEBUG&lt;/code&gt; level in production. That multiplies log volume 10-50x instantly. Keep Django at &lt;code&gt;INFO&lt;/code&gt;, Celery at &lt;code&gt;ERROR&lt;/code&gt;. You won't see a bill.&lt;/p&gt;
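&lt;p&gt;If you want to sanity-check the headroom yourself, the arithmetic is trivial (the 100 MB figure is the high end of my rough estimate above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ingest_mb_per_month=100        # Django/Celery at INFO, high end of the estimate
free_tier_mb=$((5 * 1024))     # 5 GB of free ingestion per month
headroom=$((free_tier_mb / ingest_mb_per_month))
echo "${headroom}x headroom"   # prints "51x headroom"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A 10x &lt;code&gt;DEBUG&lt;/code&gt; blowup still fits inside the free tier; a 50x one doesn't.&lt;/p&gt;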

&lt;h2&gt;
  
  
  Why Not Point the Agent Directly at journald
&lt;/h2&gt;

&lt;p&gt;Both my services, Daphne and Celery, log through journald. The obvious config would be a &lt;code&gt;journald&lt;/code&gt; collector block in the agent JSON:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"journald"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"collect_list"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"units"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"gunicorn.service"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"log_group_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/prod/daphne"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I tried this first. The agent threw:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;E! Invalid Json input schema.
Under path : /logs/logs_collected | Error : Additional property journald is not allowed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Agent version &lt;code&gt;1.300064&lt;/code&gt; doesn't support the journald collector in its config schema. The docs don't make this obvious upfront.&lt;/p&gt;

&lt;p&gt;So before starting the setup, understand the actual flow you're building:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwkycaglp4tg6029tosm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwkycaglp4tg6029tosm.jpg" alt="Flowchart showing how systemd service logs flow from journald through a piping service and log file on disk to the CloudWatch Agent and finally into AWS CloudWatch Logs on an EC2 instance" width="800" height="689"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Two hops instead of one, but completely non-destructive. You don't touch your production services at all during setup. journald still captures everything in parallel, so you keep local history and get CloudWatch shipping simultaneously.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1: IAM Role on the EC2
&lt;/h2&gt;

&lt;p&gt;The agent needs AWS credentials to call the CloudWatch API. Attach an IAM role to the EC2 instance directly — not access keys in a config file.&lt;/p&gt;

&lt;p&gt;Create a role in IAM with EC2 as the trusted entity and attach the &lt;code&gt;CloudWatchAgentServerPolicy&lt;/code&gt; managed policy. Then, on your EC2 instance, choose Actions &amp;gt; Security &amp;gt; Modify IAM role and attach it.&lt;/p&gt;

&lt;p&gt;Verify it worked from the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://169.254.169.254/latest/meta-data/iam/info | python3 &lt;span class="nt"&gt;-m&lt;/span&gt; json.tool
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see your &lt;code&gt;InstanceProfileArn&lt;/code&gt; in the output. That IP &lt;code&gt;169.254.169.254&lt;/code&gt; is the EC2 Instance Metadata Service — a link-local address only reachable from inside the instance itself. The agent hits this on startup to get temporary credentials automatically. No keys, no secrets files sitting on disk.&lt;/p&gt;
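&lt;p&gt;One caveat: newer instances often have IMDSv2 enforced, in which case the plain &lt;code&gt;curl&lt;/code&gt; above gets a &lt;code&gt;401&lt;/code&gt;. The session-token variant works in both modes (same endpoint, just a token handshake first):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/info | python3 -m json.tool
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;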

&lt;h2&gt;
  
  
  Step 2: Install the Agent
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
&lt;span class="nb"&gt;sudo &lt;/span&gt;dpkg &lt;span class="nt"&gt;-i&lt;/span&gt; amazon-cloudwatch-agent.deb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Create the Log Files
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /var/log/app
&lt;span class="nb"&gt;sudo touch&lt;/span&gt; /var/log/app/daphne.log /var/log/app/celery.log
&lt;span class="nb"&gt;sudo chown &lt;/span&gt;ubuntu:www-data /var/log/app/daphne.log /var/log/app/celery.log
&lt;span class="nb"&gt;sudo chmod &lt;/span&gt;644 /var/log/app/daphne.log /var/log/app/celery.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Write the Agent Config
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo tee&lt;/span&gt; /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
{
  "agent": {
    "run_as_user": "cwagent"
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/app/daphne.log",
            "log_group_name": "/prod/daphne",
            "log_stream_name": "{instance_id}",
            "retention_in_days": 7
          },
          {
            "file_path": "/var/log/app/celery.log",
            "log_group_name": "/prod/celery",
            "log_stream_name": "{instance_id}",
            "retention_in_days": 7
          },
          {
            "file_path": "/var/log/nginx/access.log",
            "log_group_name": "/prod/nginx-access",
            "log_stream_name": "{instance_id}",
            "retention_in_days": 7
          },
          {
            "file_path": "/var/log/nginx/error.log",
            "log_group_name": "/prod/nginx-error",
            "log_stream_name": "{instance_id}",
            "retention_in_days": 7
          }
        ]
      }
    },
    "force_flush_interval": 5
  }
}
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;retention_in_days&lt;/code&gt; set here means the agent creates the log group with that retention automatically. No manual console clicks needed.&lt;/p&gt;
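&lt;p&gt;Once the agent has run, you can confirm the groups and retention landed as configured (assumes the AWS CLI is available wherever you run this):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws logs describe-log-groups \
  --log-group-name-prefix /prod/ \
  --query 'logGroups[*].[logGroupName,retentionInDays]' \
  --output table
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;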

&lt;h2&gt;
  
  
  Step 5: Create the Piping Services
&lt;/h2&gt;

&lt;p&gt;One small systemd service per application. Each one runs &lt;code&gt;journalctl -f&lt;/code&gt; for that unit and appends output to the log file the agent reads.&lt;/p&gt;

&lt;p&gt;For Daphne:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/systemd/system/daphne-log-pipe.service &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
[Unit]
Description=Pipe gunicorn journald logs to file
After=gunicorn.service
BindsTo=gunicorn.service

[Service]
ExecStart=/bin/bash -c "journalctl -u gunicorn.service -f --no-pager -o short-iso &amp;gt;&amp;gt; /var/log/app/daphne.log"
Restart=always
RestartSec=3
User=ubuntu

[Install]
WantedBy=multi-user.target
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Celery:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/systemd/system/celery-log-pipe.service &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
[Unit]
Description=Pipe celery journald logs to file
After=celery.service
BindsTo=celery.service

[Service]
ExecStart=/bin/bash -c "journalctl -u celery.service -f --no-pager -o short-iso &amp;gt;&amp;gt; /var/log/app/celery.log"
Restart=always
RestartSec=3
User=ubuntu

[Install]
WantedBy=multi-user.target
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;BindsTo=gunicorn.service&lt;/code&gt; means if the main service stops, the piping service stops with it. Clean dependency, no orphaned processes.&lt;/p&gt;

&lt;p&gt;Enable and start:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;daphne-log-pipe.service celery-log-pipe.service
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start daphne-log-pipe.service celery-log-pipe.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify lines are flowing into the files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sleep &lt;/span&gt;10 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-5&lt;/span&gt; /var/log/app/daphne.log &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-5&lt;/span&gt; /var/log/app/celery.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see actual request lines from Daphne and task output from Celery. If the files are empty after 10 seconds, check &lt;code&gt;systemctl status daphne-log-pipe.service&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Start the Agent
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-a&lt;/span&gt; fetch-config &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-m&lt;/span&gt; ec2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt; file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-s&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl &lt;span class="nt"&gt;-a&lt;/span&gt; status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"running"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"starttime"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-04-13T05:39:11+00:00"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"configstatus"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"configured"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.300064.1b1344"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 7: Verify Logs Are in CloudWatch
&lt;/h2&gt;

&lt;p&gt;From AWS CloudShell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws logs filter-log-events &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--log-group-name&lt;/span&gt; /prod/daphne &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; us-west-2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--limit&lt;/span&gt; 5 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--output&lt;/span&gt; table &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'events[*].message'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your actual API request lines should appear. If the console shows &lt;code&gt;storedBytes: 0&lt;/code&gt;, wait 2-3 minutes and refresh. The agent flushes every 5 seconds but CloudWatch takes a moment to index.&lt;/p&gt;
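&lt;p&gt;If you're on AWS CLI v2, &lt;code&gt;aws logs tail&lt;/code&gt; is a friendlier way to run the same check, and its follow mode behaves much like &lt;code&gt;journalctl -f&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws logs tail /prod/daphne --since 10m --follow --region us-west-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;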

&lt;h2&gt;
  
  
  Where Logs Live Now
&lt;/h2&gt;

&lt;p&gt;Three places simultaneously, independently:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Location&lt;/th&gt;
&lt;th&gt;Retention&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;journald on EC2&lt;/td&gt;
&lt;td&gt;capped (set via journald.conf)&lt;/td&gt;
&lt;td&gt;fast local debugging&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;/var/log/app/*.log&lt;/code&gt; on EC2&lt;/td&gt;
&lt;td&gt;logrotate weekly&lt;/td&gt;
&lt;td&gt;agent reads from here&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CloudWatch Logs&lt;/td&gt;
&lt;td&gt;7 days&lt;/td&gt;
&lt;td&gt;off-server, queryable&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The agent is a photocopier. It never deletes, never moves, never touches the original log. Stop the agent tomorrow and your EC2 logs are completely unaffected. If your EC2 dies, CloudWatch still has the last 7 days.&lt;/p&gt;
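&lt;p&gt;The table above mentions logrotate for &lt;code&gt;/var/log/app/*.log&lt;/code&gt;; without it those files grow forever. A sketch of the config I'd drop in &lt;code&gt;/etc/logrotate.d/&lt;/code&gt; (the counts are mine to pick). The important flag is &lt;code&gt;copytruncate&lt;/code&gt;: both the &lt;code&gt;journalctl ... &amp;gt;&amp;gt;&lt;/code&gt; redirect and the agent hold the file open, so a rename-based rotation would leave them writing to and reading from the rotated file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/var/log/app/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;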

&lt;h2&gt;
  
  
  Cleaning This Up Later
&lt;/h2&gt;

&lt;p&gt;The piping services are temporary scaffolding. Once you've confirmed everything is stable over a few days, do the cleaner version during a low-traffic window.&lt;/p&gt;

&lt;p&gt;Add these lines directly to your gunicorn and celery service files under &lt;code&gt;[Service]&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="py"&gt;StandardOutput&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;append:/var/log/app/daphne.log&lt;/span&gt;
&lt;span class="py"&gt;StandardError&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;append:/var/log/app/daphne.log&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Reload and restart both services, verify logs still flow, then remove the piping services:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl stop daphne-log-pipe.service celery-log-pipe.service
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl disable daphne-log-pipe.service celery-log-pipe.service
&lt;span class="nb"&gt;sudo rm&lt;/span&gt; /etc/systemd/system/daphne-log-pipe.service
&lt;span class="nb"&gt;sudo rm&lt;/span&gt; /etc/systemd/system/celery-log-pipe.service
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Direct write, one less moving part. The reason I didn't do this on day one is that it requires a production service restart. Validate the full pipeline first, clean up after.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You Can Build on Top
&lt;/h2&gt;

&lt;p&gt;With logs in CloudWatch, the real value starts.&lt;/p&gt;

&lt;p&gt;Metric filters let you turn log patterns into metrics. Every line matching &lt;code&gt;" 500 "&lt;/code&gt; in your daphne logs increments an &lt;code&gt;HTTP5xxErrors&lt;/code&gt; counter. Every &lt;code&gt;ERROR&lt;/code&gt; in Celery increments a &lt;code&gt;CeleryTaskFailures&lt;/code&gt; counter. From there you can build dashboards, set alarms, get SNS notifications when error rates spike.&lt;/p&gt;
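&lt;p&gt;A metric filter is one CLI call. Here's a sketch for the Celery case; the filter name, metric name, and namespace are just names I picked, not anything standard:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws logs put-metric-filter \
  --log-group-name /prod/celery \
  --filter-name celery-task-failures \
  --filter-pattern '"ERROR"' \
  --metric-transformations \
    metricName=CeleryTaskFailures,metricNamespace=App,metricValue=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;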

&lt;p&gt;Logs Insights is the other useful piece. It's a SQL-like query language for your log groups:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;fields @timestamp, @message
| filter @message like /api&lt;span class="se"&gt;\/&lt;/span&gt;auth&lt;span class="se"&gt;\/&lt;/span&gt;login/
| filter @message like &lt;span class="s2"&gt;"403"&lt;/span&gt;
| stats count&lt;span class="o"&gt;()&lt;/span&gt; by bin&lt;span class="o"&gt;(&lt;/span&gt;5m&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That query shows login failures per 5-minute window. Took 30 seconds to write, runs in 2 seconds. Try doing that with &lt;code&gt;journalctl&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The piping service pattern isn't the cleanest architecture forever, but it got logs off the server without a single production restart on day one. If you're still SSH-ing into EC2 to read logs, this setup takes about 30 minutes and the free tier covers most production workloads. Do it before you actually need it at 2 AM.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>linux</category>
      <category>python</category>
    </item>
  </channel>
</rss>
