<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rajan Vavadia</title>
    <description>The latest articles on DEV Community by Rajan Vavadia (@rajanvavadia).</description>
    <link>https://dev.to/rajanvavadia</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3380735%2F9887c35f-761f-412f-9fbc-c65f6a186b63.jpg</url>
      <title>DEV Community: Rajan Vavadia</title>
      <link>https://dev.to/rajanvavadia</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rajanvavadia"/>
    <language>en</language>
    <item>
      <title>ELK Stack Setup for Centralized Log Management &amp; Monitoring</title>
      <dc:creator>Rajan Vavadia</dc:creator>
      <pubDate>Wed, 08 Apr 2026 11:42:01 +0000</pubDate>
      <link>https://dev.to/addwebsolutionpvtltd/elk-stack-setup-for-centralized-log-management-monitoring-11l0</link>
      <guid>https://dev.to/addwebsolutionpvtltd/elk-stack-setup-for-centralized-log-management-monitoring-11l0</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“The goal is to turn data into information, and information into insight.” - Carly Fiorina&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Stage 1: Elasticsearch - The Search &amp;amp; Storage Engine&lt;/li&gt;
&lt;li&gt;Stage 2: Logstash - The Data Processing Pipeline&lt;/li&gt;
&lt;li&gt;Stage 3: Kibana - The Visualization Layer&lt;/li&gt;
&lt;li&gt;Stage 4: Filebeat - The Lightweight Log Shipper&lt;/li&gt;
&lt;li&gt;Connecting the Pieces - End-to-End Data Flow&lt;/li&gt;
&lt;li&gt;Practical Setup (Step-by-step)&lt;/li&gt;
&lt;li&gt;Troubleshooting Common Issues&lt;/li&gt;
&lt;li&gt;Quotes&lt;/li&gt;
&lt;li&gt;FAQs&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Introduction
&lt;/h2&gt;

&lt;p&gt;When your application runs on a single server, tailing a log file is enough. When it runs across multiple servers, containers, or microservices - you need centralized logging. Scattered logs across dozens of servers make debugging slow, error correlation impossible, and incident response reactive instead of proactive.&lt;/p&gt;

&lt;p&gt;This guide walks through setting up a production-ready ELK stack (Elasticsearch, Logstash, Kibana) with Filebeat for centralized log collection, processing, and visualization. The setup covers a real-world scenario: a Java Spring Boot application running on one server, with the ELK stack on a separate server.&lt;/p&gt;

&lt;p&gt;The approach uses a &lt;strong&gt;four-component architecture:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Filebeat (Application Server): Lightweight agent that tails log files and ships them to Logstash.&lt;/li&gt;
&lt;li&gt;Logstash (ELK Server): Receives raw logs, parses and transforms them, and forwards structured data to Elasticsearch.&lt;/li&gt;
&lt;li&gt;Elasticsearch (ELK Server): Stores, indexes, and makes logs searchable in near real-time.&lt;/li&gt;
&lt;li&gt;Kibana (ELK Server): Web UI for searching, visualizing, and building dashboards from log data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why centralized logging?&lt;/strong&gt;&lt;br&gt;
Manually SSH-ing into each server and grepping through log files does not scale. Centralized logging solves this by aggregating all logs into a single searchable location. &lt;br&gt;
You get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Single pane of glass for all application and infrastructure logs.&lt;/li&gt;
&lt;li&gt;Real-time search across millions of log entries in milliseconds.&lt;/li&gt;
&lt;li&gt;Correlation of events across services and servers by timestamp.&lt;/li&gt;
&lt;li&gt;Alerting on error patterns before users report issues.&lt;/li&gt;
&lt;li&gt;Retention and compliance with configurable index lifecycle policies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Component Responsibilities&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzp9p4c79403j4tu2xf2q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzp9p4c79403j4tu2xf2q.png" alt=" " width="537" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why separate servers?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Resource isolation:&lt;/strong&gt; Elasticsearch is memory-hungry. Running it on the application server competes with your app for RAM and CPU.&lt;br&gt;
&lt;strong&gt;- Independent scaling:&lt;/strong&gt; You can scale the ELK server (more RAM, bigger disk) without touching production application servers.&lt;br&gt;
&lt;strong&gt;- Security boundary:&lt;/strong&gt; The ELK server can sit in a private subnet, accessible only to internal services and authorized users.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Stage 1: Elasticsearch - The Search &amp;amp; Storage Engine
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;2.1 What is Elasticsearch?&lt;/strong&gt;&lt;br&gt;
Elasticsearch is a distributed search and analytics engine built on Apache Lucene. In the ELK stack, it serves as the storage and search backend - every log line that Logstash processes ends up as a document in an Elasticsearch index.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.2 Installation&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Import the Elasticsearch GPG key
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg

# Add the APT repository
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list

# Install Elasticsearch
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;2.3 Configuration&lt;/strong&gt;&lt;br&gt;
The main configuration file is /etc/elasticsearch/elasticsearch.yml:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Cluster and node identity
cluster.name: my-cluster
node.name: node-1

# Data and log paths
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

# Network - bind to all interfaces for external access
network.host: 0.0.0.0
http.host: 0.0.0.0

# Discovery - single node (no cluster formation)
discovery.type: single-node

# Security - disable for internal/dev setups
xpack.security.enabled: false
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: false
xpack.security.transport.ssl:
  enabled: false
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Configuration settings explained:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fem1va7kcqe4w5mfpga6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fem1va7kcqe4w5mfpga6u.png" alt=" " width="535" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security note:&lt;/strong&gt; xpack.security.enabled: false is acceptable for internal/development setups behind a firewall. For production environments exposed to the internet, always enable security with TLS certificates and user authentication.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.4 Start and Enable&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;2.5 Verify&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;curl http://localhost:9200
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Expected response:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "name": "node-1",
  "cluster_name": "my-cluster",
  "version": {
    "number": "8.x.x"
  },
  "tagline": "You Know, for Search"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;2.6 Memory Considerations&lt;/strong&gt;&lt;br&gt;
Elasticsearch defaults to a 1GB heap (-Xms1g -Xmx1g). For production:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set the heap to 50% of available RAM, but never more than 31GB (to stay within compressed OOPs).&lt;/li&gt;
&lt;li&gt;Edit /etc/elasticsearch/jvm.options.d/heap.options:&lt;br&gt;
&lt;code&gt;-Xms2g&lt;/code&gt;&lt;br&gt;
&lt;code&gt;-Xmx2g&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Ensure the system has enough RAM for both the JVM heap and the filesystem cache (Lucene relies heavily on the OS page cache).&lt;/li&gt;
&lt;/ul&gt;
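&lt;p&gt;The sizing rule above is simple enough to state as code. A minimal sketch (illustrative Python, not part of the stack; the 31GB cap is the compressed-OOPs ceiling mentioned above):&lt;/p&gt;

```python
def recommended_heap_gb(total_ram_gb: float) -> float:
    """Half of available RAM, capped at 31 GB to stay within compressed OOPs."""
    return min(total_ram_gb / 2, 31.0)

# A 64 GB machine hits the 31 GB ceiling; an 8 GB machine gets a 4 GB heap,
# leaving the other half of RAM for the OS page cache.
big = recommended_heap_gb(64)
small = recommended_heap_gb(8)
```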

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdgfd8phjdnstox1snwx8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdgfd8phjdnstox1snwx8.png" alt=" " width="536" height="132"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Stage 2: Logstash - The Data Processing Pipeline
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;3.1 What is Logstash?&lt;/strong&gt;&lt;br&gt;
Logstash is a server-side data processing pipeline that ingests data from multiple sources, transforms it, and sends it to Elasticsearch. It sits between Filebeat and Elasticsearch, adding structure to raw log lines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.2 Installation&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo apt-get install -y logstash
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;3.3 Pipeline Configuration&lt;/strong&gt;&lt;br&gt;
Logstash pipelines are defined in /etc/logstash/conf.d/. Each pipeline has three sections: input, filter, and output.&lt;br&gt;
Create /etc/logstash/conf.d/boardgame.conf:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;input {
  beats {
    port =&amp;gt; 5044
  }
}

filter {
  # Parse Spring Boot log format
  grok {
    match =&amp;gt; { "message" =&amp;gt; "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel} %{GREEDYDATA:logmessage}" }
  }
}

output {
  elasticsearch {
    hosts =&amp;gt; ["localhost:9200"]
    index =&amp;gt; "boardgame-logs-%{+YYYY.MM.dd}"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Pipeline sections explained:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Input - Where data comes from:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;input {
  beats {
    port =&amp;gt; 5044
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxark7w6og256clyj5mj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxark7w6og256clyj5mj.png" alt=" " width="536" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Filter - How data is transformed:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;filter {
  grok {
    match =&amp;gt; { "message" =&amp;gt; "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel} %{GREEDYDATA:logmessage}" }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ften9gk99e9hu2slwlwce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ften9gk99e9hu2slwlwce.png" alt=" " width="538" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The grok filter parses unstructured log lines into structured fields (timestamp, loglevel, logmessage). This enables filtering by log level in Kibana (e.g., show only ERROR logs).&lt;/p&gt;
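&lt;p&gt;To make the transformation concrete, the same split can be approximated with a plain regular expression (an illustrative Python sketch - Logstash's grok patterns are more permissive than this):&lt;/p&gt;

```python
import re

# Rough equivalent of %{TIMESTAMP_ISO8601} %{LOGLEVEL} %{GREEDYDATA}
pattern = re.compile(
    r"(\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2})\s+"   # timestamp
    r"(TRACE|DEBUG|INFO|WARN|ERROR|FATAL)\s+"        # loglevel
    r"(.*)"                                          # logmessage
)

m = pattern.match("2026-03-11 06:37:55 ERROR Something went wrong")
timestamp, loglevel, logmessage = m.groups()
# loglevel is now a structured field you could filter on in Kibana
```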

&lt;p&gt;&lt;strong&gt;Output - Where data goes:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;output {
  elasticsearch {
    hosts =&amp;gt; ["localhost:9200"]
    index =&amp;gt; "boardgame-logs-%{+YYYY.MM.dd}"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf73dgxhsyznncijxevd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf73dgxhsyznncijxevd.png" alt=" " width="536" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why daily indices?&lt;/strong&gt; Daily indices make retention management simple - delete old indices by date. They also improve search performance because Elasticsearch can skip entire indices when querying a specific time range.&lt;/p&gt;
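&lt;p&gt;As a sketch of why the date suffix helps, retention becomes a plain date comparison (illustrative Python only - in practice the ILM policy in Section 7.5 does this for you):&lt;/p&gt;

```python
from datetime import date, timedelta

def expired_indices(index_names, today, retention_days=30):
    """Return the boardgame-logs-YYYY.MM.dd indices whose date suffix
    is older than the retention cutoff - each could be dropped with a
    single DELETE request."""
    cutoff = today - timedelta(days=retention_days)
    out = []
    for name in index_names:
        y, m, d = name.rsplit("-", 1)[1].split(".")
        if cutoff > date(int(y), int(m), int(d)):
            out.append(name)
    return out

names = ["boardgame-logs-2026.03.11", "boardgame-logs-2026.01.05"]
# With today = 2026-03-11 and 30-day retention, only the January index expires
```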

&lt;p&gt;&lt;strong&gt;3.4 Start and Enable&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo systemctl daemon-reload
sudo systemctl enable logstash
sudo systemctl start logstash
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;3.5 Verify&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Check that Logstash is listening on port 5044
sudo ss -tlnp | grep 5044

# Check Logstash logs for pipeline startup
sudo journalctl -u logstash --no-pager -n 20
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Look for Pipeline started {"pipeline.id"=&amp;gt;"main"} in the logs.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Stage 3: Kibana - The Visualization Layer
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;4.1 What is Kibana?&lt;/strong&gt;&lt;br&gt;
Kibana is the web interface for the ELK stack. It connects to Elasticsearch and provides tools for searching logs (Discover), building visualizations (charts, graphs, maps), and creating dashboards for real-time monitoring.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.2 Installation&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo apt-get install -y kibana
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;4.3 Configuration&lt;/strong&gt;&lt;br&gt;
Edit /etc/kibana/kibana.yml:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Bind to all interfaces for external access
server.host: "0.0.0.0"

# Connect to Elasticsearch over plain HTTP
elasticsearch.hosts: ["http://localhost:9200"]
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Critical configuration points:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft31gdko0evuu2z2ewcfi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft31gdko0evuu2z2ewcfi.png" alt=" " width="536" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Common pitfall:&lt;/strong&gt; If Elasticsearch has xpack.security.http.ssl.enabled: false but Kibana is configured with https:// in elasticsearch.hosts, Kibana will fail to connect with “Unable to retrieve version information from Elasticsearch”. Always match the protocol.&lt;/p&gt;

&lt;p&gt;Settings to remove or comment out when SSL is disabled:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Comment out or remove these lines
# elasticsearch.ssl.certificateAuthorities: [/path/to/ca.crt]
# elasticsearch.username: "kibana_system"
# elasticsearch.password: "pass"
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;4.4 Start and Enable&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo systemctl daemon-reload
sudo systemctl enable kibana
sudo systemctl start kibana
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Kibana takes 30–60 seconds to fully initialize.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.5 Verify&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;curl http://localhost:5601/api/status
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Expected: {"status":{"overall":{"level":"available"}}} - the level should be available, not unavailable.&lt;/p&gt;
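&lt;p&gt;A monitoring script can key off that field. A minimal sketch (illustrative Python; the dict shape mirrors the /api/status response above):&lt;/p&gt;

```python
def kibana_available(status: dict) -> bool:
    """True when Kibana's /api/status JSON reports overall level 'available'."""
    return status.get("status", {}).get("overall", {}).get("level") == "available"

healthy = {"status": {"overall": {"level": "available"}}}
degraded = {"status": {"overall": {"level": "unavailable"}}}
# kibana_available(healthy) is True; kibana_available(degraded) is False
```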

&lt;p&gt;&lt;strong&gt;4.6 Create a Data View&lt;/strong&gt;&lt;br&gt;
Once data is flowing, create a data view so Kibana knows which indices to query:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open http://&amp;lt;elk-server-ip&amp;gt;:5601 in a browser.&lt;/li&gt;
&lt;li&gt;Navigate to Stack Management &amp;gt; Data Views (under Kibana section).&lt;/li&gt;
&lt;li&gt;Click Create data view.&lt;/li&gt;
&lt;li&gt;Fill in the fields:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9qn4bpi9xpwcqc02w2pt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9qn4bpi9xpwcqc02w2pt.png" alt=" " width="535" height="108"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now go to Discover (under Analytics) to search and explore your logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9jbbxazfjeo9xc8w6vk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9jbbxazfjeo9xc8w6vk.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.7 AWS Security Group&lt;/strong&gt;&lt;br&gt;
If running on AWS EC2, ensure the security group allows inbound traffic:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0je9eq8pgadlzacfacjo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0je9eq8pgadlzacfacjo.png" alt=" " width="536" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Never expose port 9200 to the public internet unless Elasticsearch security is enabled with TLS and authentication.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Logs are the immune system of your infrastructure - they tell you when something is wrong before it becomes a crisis.” - DevOps principle&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  5. Stage 4: Filebeat - The Lightweight Log Shipper
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;5.1 What is Filebeat?&lt;/strong&gt;&lt;br&gt;
Filebeat is a lightweight log shipper that runs on the application server. It tails log files, handles log rotation, tracks read positions (so it never sends duplicate lines), and ships logs to Logstash or Elasticsearch with minimal resource overhead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5.2 Why Filebeat instead of sending directly to Logstash?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faph4qwaku3cx7i77kfpl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faph4qwaku3cx7i77kfpl.png" alt=" " width="537" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Filebeat decouples your application from the logging pipeline. If Logstash or Elasticsearch goes down, Filebeat queues events and retries automatically. Your application keeps writing to its log file without interruption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5.3 Installation (on the Application Server)&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Uses the same Elastic repository added earlier
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y filebeat
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;5.4 Configuration&lt;/strong&gt;&lt;br&gt;
Edit /etc/filebeat/filebeat.yml:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/ubuntu/Boardgame/target/app.log
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'
  multiline.negate: true
  multiline.match: after

output.logstash:
  hosts: ["&amp;lt;elk-server-ip&amp;gt;:5044"]

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~

logging.level: info
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Configuration explained:&lt;br&gt;
Input section:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsh55ewofbyjh7ybk6wzp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsh55ewofbyjh7ybk6wzp.png" alt=" " width="540" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Multiline settings (critical for Java/Spring Boot stack traces):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz47xuf0vcqg9zs5pn9lj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz47xuf0vcqg9zs5pn9lj.png" alt=" " width="539" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This ensures that a multi-line Java stack trace is treated as a single log event, not dozens of separate lines:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;2026-03-11 06:37:55 ERROR Something went wrong
java.lang.NullPointerException
    at com.example.Service.process(Service.java:42)
    at com.example.Controller.handle(Controller.java:15)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Without the multiline config, each line of the stack trace becomes a separate Elasticsearch document - making it impossible to correlate errors.&lt;/p&gt;
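&lt;p&gt;The grouping rule is easy to sketch: any line matching multiline.pattern starts a new event, and everything else is appended to the previous one (illustrative Python, not Filebeat's actual implementation):&lt;/p&gt;

```python
import re

NEW_EVENT = re.compile(r"^\d{4}-\d{2}-\d{2}")  # same idea as multiline.pattern

def group_events(lines):
    """Lines starting with a date begin a new event; continuation lines
    (e.g. stack-trace frames) are appended to the previous event."""
    events = []
    for line in lines:
        if NEW_EVENT.match(line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

raw = [
    "2026-03-11 06:37:55 ERROR Something went wrong",
    "java.lang.NullPointerException",
    "    at com.example.Service.process(Service.java:42)",
    "2026-03-11 06:37:56 INFO Recovered",
]
# group_events(raw) yields 2 events: the full stack trace, then the INFO line
```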

&lt;p&gt;&lt;strong&gt;Output section:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3ikg4a1gvaosipkbc0y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3ikg4a1gvaosipkbc0y.png" alt=" " width="536" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Processors:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fnzz2za241yhshna9z1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fnzz2za241yhshna9z1.png" alt=" " width="536" height="164"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;YAML formatting rules (common source of errors):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcl5wkwylunhomvhmstmr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcl5wkwylunhomvhmstmr.png" alt=" " width="537" height="209"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5.5 Start and Enable&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Clear the old registry so Filebeat re-reads files from the beginning
sudo rm -rf /var/lib/filebeat/registry

sudo systemctl daemon-reload
sudo systemctl enable filebeat
sudo systemctl start filebeat
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;5.6 Verify&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Check Filebeat status
sudo systemctl status filebeat

# Check logs - look for "Harvester started"
sudo journalctl -u filebeat --no-pager -n 20
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Key indicators in the logs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5zmhi84e6as4n5e0j18.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5zmhi84e6as4n5e0j18.png" alt=" " width="537" height="212"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Connecting the Pieces - End-to-End Data Flow
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;6.1 The Complete Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;[Spring Boot App]
       │
       │ writes logs to disk
       ▼
[/home/ubuntu/Boardgame/target/app.log]
       │
       │ Filebeat tails the file
       ▼
[Filebeat] ──── port 5044 ────► [Logstash]
                                     │
                                     │ grok filter parses log lines
                                     ▼
                              [Elasticsearch]
                              index: boardgame-logs-2026.03.11
                                     │
                                     │ Kibana queries the index
                                     ▼
                                 [Kibana]
                              http://&amp;lt;elk-server-ip&amp;gt;:5601
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmf618ruhvk1s0kvzwn6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmf618ruhvk1s0kvzwn6f.png" alt=" " width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6.2 Verifying Each Link&lt;/strong&gt;&lt;br&gt;
Test each component in order, from bottom to top:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Elasticsearch is running and accessible&lt;br&gt;
curl &lt;a href="http://localhost:9200" rel="noopener noreferrer"&gt;http://localhost:9200&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Logstash is listening for Beats input&lt;br&gt;
sudo ss -tlnp | grep 5044&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kibana can reach Elasticsearch&lt;br&gt;
curl &lt;a href="http://localhost:5601/api/status" rel="noopener noreferrer"&gt;http://localhost:5601/api/status&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Filebeat can reach Logstash (from the application server)&lt;br&gt;
telnet &amp;lt;elk-server-ip&amp;gt; 5044&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data is actually in Elasticsearch&lt;br&gt;
curl &lt;a href="http://localhost:9200/_cat/indices?v" rel="noopener noreferrer"&gt;http://localhost:9200/_cat/indices?v&lt;/a&gt; | grep boardgame&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check document count&lt;br&gt;
curl &lt;a href="http://localhost:9200/boardgame-logs-*/_count" rel="noopener noreferrer"&gt;http://localhost:9200/boardgame-logs-*/_count&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  7. Practical Setup (Step-by-step)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;7.1 Server Requirements&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5klb3utps0mw74tahu2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5klb3utps0mw74tahu2.png" alt=" " width="535" height="180"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7.2 ELK Server Setup (in order)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 1: Add the Elastic repository&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
sudo apt-get update
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Step 2: Install all three components&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo apt-get install -y elasticsearch logstash kibana
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Step 3: Configure Elasticsearch (sudo nano /etc/elasticsearch/elasticsearch.yml)&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;network.host: 0.0.0.0
discovery.type: single-node
xpack.security.enabled: false
xpack.security.http.ssl.enabled: false
xpack.security.transport.ssl.enabled: false
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Step 4: Configure the Logstash pipeline (sudo nano /etc/logstash/conf.d/boardgame.conf) - add the input (beats, port 5044), filter (grok), and output (elasticsearch) sections from Section 3.3.&lt;/p&gt;

&lt;p&gt;Step 5: Configure Kibana (sudo nano /etc/kibana/kibana.yml), removing or commenting out any SSL certificate lines&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Step 6: Start the services&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo systemctl enable elasticsearch logstash kibana
sudo systemctl start elasticsearch logstash kibana
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;7.3 Application Server Setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 1: Install Filebeat&lt;br&gt;
sudo apt-get install -y filebeat&lt;/p&gt;

&lt;p&gt;Step 2: Configure Filebeat&lt;br&gt;
sudo nano /etc/filebeat/filebeat.yml&lt;br&gt;
Replace entire file with clean config (see Section 6.4)&lt;/p&gt;
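&lt;p&gt;The full config is in Section 6.4; its overall shape looks like this (the log path and regex here are examples - adjust them for your app, and note the single quotes around multiline.pattern):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/boardgame/*.log
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'
  multiline.negate: true
  multiline.match: after

output.logstash:
  hosts: ["&amp;lt;ELK_SERVER_IP&amp;gt;:5044"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;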

&lt;p&gt;Step 3: Clear registry and start&lt;br&gt;
sudo rm -rf /var/lib/filebeat/registry&lt;br&gt;
sudo systemctl enable filebeat&lt;br&gt;
sudo systemctl start filebeat&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7.4 Kibana Data View Setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Verify data arrived in Elasticsearch (replace &amp;lt;ELK_SERVER_IP&amp;gt; with your ELK server's address, or use localhost when running this on the ELK server itself)&lt;br&gt;
curl http://&amp;lt;ELK_SERVER_IP&amp;gt;:9200/_cat/indices?v | grep boardgame&lt;/p&gt;

&lt;p&gt;Then in the Kibana UI:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Stack Management &amp;gt; Data Views &amp;gt; Create data view&lt;/li&gt;
&lt;li&gt;Name: Boardgame Logs, Index pattern: boardgame-logs-*, Timestamp: @timestamp&lt;/li&gt;
&lt;li&gt;Save the data view&lt;/li&gt;
&lt;li&gt;Go to Discover to explore your logs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;7.5 Optional: Index Lifecycle Management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For production, configure automatic index cleanup to prevent disk from filling up:&lt;br&gt;
Create an ILM policy that deletes indices older than 30 days&lt;br&gt;
curl -X PUT "&lt;a href="http://localhost:9200/_ilm/policy/boardgame-logs-policy" rel="noopener noreferrer"&gt;http://localhost:9200/_ilm/policy/boardgame-logs-policy&lt;/a&gt;" -H 'Content-Type: application/json' -d'&lt;br&gt;
{&lt;br&gt;
  "policy": {&lt;br&gt;
    "phases": {&lt;br&gt;
      "hot": {&lt;br&gt;
        "actions": {&lt;br&gt;
          "rollover": {&lt;br&gt;
            "max_size": "5gb",&lt;br&gt;
            "max_age": "1d"&lt;br&gt;
          }&lt;br&gt;
        }&lt;br&gt;
      },&lt;br&gt;
      "delete": {&lt;br&gt;
        "min_age": "30d",&lt;br&gt;
        "actions": {&lt;br&gt;
          "delete": {}&lt;br&gt;
        }&lt;br&gt;
      }&lt;br&gt;
    }&lt;br&gt;
  }&lt;br&gt;
}'&lt;/p&gt;
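&lt;p&gt;The policy only takes effect once it is attached to your indices. One way is through an index template (a sketch; the template name here is arbitrary):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X PUT "http://localhost:9200/_index_template/boardgame-logs-template" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["boardgame-logs-*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "boardgame-logs-policy"
    }
  }
}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;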

&lt;h2&gt;
  
  
  8. Troubleshooting Common Issues
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;8.1 Kibana shows “Unable to retrieve version information from Elasticsearch”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cause: Protocol mismatch - Kibana is using https:// but Elasticsearch has SSL disabled.&lt;br&gt;
Fix:&lt;br&gt;
Check current Kibana config&lt;br&gt;
sudo grep "elasticsearch.hosts" /etc/kibana/kibana.yml&lt;/p&gt;

&lt;p&gt;Fix: change https to http&lt;br&gt;
elasticsearch.hosts: ["&lt;a href="http://localhost:9200%22" rel="noopener noreferrer"&gt;http://localhost:9200"&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;Comment out any SSL certificate lines&lt;br&gt;
elasticsearch.ssl.certificateAuthorities: [...]&lt;/p&gt;

&lt;p&gt;sudo systemctl restart kibana&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8.2 Filebeat shows harvester.open_files: 0&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cause: Filebeat config is malformed (duplicate sections, wrong indentation, or double-quoted regex).&lt;br&gt;
Fix:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure filebeat.yml has no duplicate top-level keys.&lt;/li&gt;
&lt;li&gt;Use single quotes for multiline.pattern (YAML treats \d in double quotes as an escape sequence).&lt;/li&gt;
&lt;li&gt;Top-level keys (filebeat.inputs:, output.logstash:, processors:) must start at column 0.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Validate the config&lt;br&gt;
sudo filebeat test config -c /etc/filebeat/filebeat.yml&lt;/p&gt;

&lt;p&gt;Clear registry and restart&lt;br&gt;
sudo rm -rf /var/lib/filebeat/registry&lt;br&gt;
sudo systemctl restart filebeat&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8.3 No boardgame-logs-* indices in Elasticsearch&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cause: Filebeat cannot reach Logstash on port 5044.&lt;br&gt;
Fix:&lt;br&gt;
From the application server, test connectivity (replace &amp;lt;ELK_SERVER_IP&amp;gt; with your ELK server's address)&lt;br&gt;
telnet &amp;lt;ELK_SERVER_IP&amp;gt; 5044&lt;/p&gt;

&lt;p&gt;If connection refused - open port 5044 in the ELK server's security group&lt;br&gt;
If connection times out - check if Logstash is running&lt;br&gt;
sudo systemctl status logstash&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktjli79a9bl69xpo0mfr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktjli79a9bl69xpo0mfr.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;8.4 Kibana Data View shows “No data streams, indices, or index aliases match”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cause: No data has been ingested yet, or the index pattern is wrong.&lt;br&gt;
Fix:&lt;br&gt;
List all indices&lt;br&gt;
curl &lt;a href="http://localhost:9200/_cat/indices?v" rel="noopener noreferrer"&gt;http://localhost:9200/_cat/indices?v&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check the exact index name and match your pattern accordingly&lt;br&gt;
If index is "boardgame-logs-2026.03.11", pattern should be "boardgame-logs-*"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8.5 Logstash is running but not receiving data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cause: Logstash pipeline failed to start, or the config file has syntax errors.&lt;br&gt;
Fix:&lt;br&gt;
Test the Logstash config&lt;br&gt;
sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/&lt;/p&gt;

&lt;p&gt;Check Logstash logs&lt;br&gt;
sudo journalctl -u logstash --no-pager -n 30&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“You can’t manage what you can’t measure.” - Peter Drucker&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  9. FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q1. Why use the ELK stack instead of cloud-native logging (CloudWatch, Stackdriver)?&lt;/strong&gt; &lt;br&gt;
ELK gives you full control over data retention, parsing rules, and costs. Cloud logging services charge per GB ingested, which becomes expensive at scale. A self-hosted ELK stack has no license fee for the free tier - you only pay for the infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q2. Why Filebeat instead of sending logs directly from the application?&lt;/strong&gt; &lt;br&gt;
Filebeat decouples your application from the logging pipeline. If Logstash or Elasticsearch goes down, Filebeat queues events and retries. Your application keeps running without blocking on log delivery. Filebeat also uses ~10–30 MB RAM versus Logstash’s 500 MB+, making it ideal for application servers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q3. Can I skip Logstash and send Filebeat directly to Elasticsearch?&lt;/strong&gt; &lt;br&gt;
Yes. Set output.elasticsearch instead of output.logstash in Filebeat. However, you lose the ability to parse and transform logs with grok filters. For simple use cases (no parsing needed), direct shipping is fine.&lt;/p&gt;
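&lt;p&gt;For that direct-shipping case, the Filebeat output section would look roughly like this (note that overriding the index name also requires setting setup.template.name and setup.template.pattern):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output.elasticsearch:
  hosts: ["http://&amp;lt;ELK_SERVER_IP&amp;gt;:9200"]
  index: "boardgame-logs-%{+yyyy.MM.dd}"

setup.template.name: "boardgame-logs"
setup.template.pattern: "boardgame-logs-*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;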

&lt;p&gt;&lt;strong&gt;Q4. Why daily indices (boardgame-logs-2026.03.11) instead of a single index?&lt;/strong&gt; &lt;br&gt;
Daily indices enable simple retention management (delete indices older than N days), improve search performance (Elasticsearch skips irrelevant time ranges), and make index management operations (backup, restore) more granular.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q5. How much disk space does the ELK stack need?&lt;/strong&gt; &lt;br&gt;
Rough estimate: 1 GB of raw logs produces ~1.5–2 GB of Elasticsearch data (due to indexing overhead). For 100 MB/day of logs with 30-day retention, budget ~6 GB for Elasticsearch data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q6. Why must multiline.pattern use single quotes in YAML?&lt;/strong&gt; &lt;br&gt;
YAML interprets backslash sequences in double-quoted strings (\d becomes an invalid escape). Single quotes treat the content literally, preserving the regex pattern for Filebeat.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q7. How do I add more application servers?&lt;/strong&gt; &lt;br&gt;
Install Filebeat on each server and point it to the same Logstash endpoint. Add a fields section to distinguish servers:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/myapp/*.log
  fields:
    server_name: web-02
    environment: production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Q8. Is this setup production-ready as described?&lt;/strong&gt; &lt;br&gt;
For internal use, yes. For public-facing production, enable Elasticsearch security (xpack.security.enabled: true), use TLS certificates for all inter-component communication, put Kibana behind a reverse proxy with authentication, and configure Index Lifecycle Management for automatic cleanup.&lt;/p&gt;
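&lt;p&gt;As an illustration of the reverse-proxy point, a minimal nginx config with basic auth in front of Kibana might look like this (a sketch; it assumes an /etc/nginx/.htpasswd file already exists and that kibana.example.com is your hostname):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80;
    server_name kibana.example.com;

    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        # Kibana listens on port 5601 by default
        proxy_pass http://localhost:5601;
        proxy_set_header Host $host;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;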

&lt;h2&gt;
  
  
  10. Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Centralized logging transforms debugging from “SSH into each server and grep” to “search once, find everywhere.”&lt;/li&gt;
&lt;li&gt;Filebeat belongs on the application server, not the ELK server. It is lightweight (~10–30 MB RAM) and handles backpressure gracefully.&lt;/li&gt;
&lt;li&gt;Logstash’s grok filter turns unstructured log lines into structured, searchable fields (timestamp, loglevel, logmessage).&lt;/li&gt;
&lt;li&gt;Protocol mismatch (https:// vs http://) between Kibana and Elasticsearch is the most common setup failure. Always match the protocol to Elasticsearch’s actual SSL configuration.&lt;/li&gt;
&lt;li&gt;YAML formatting causes most Filebeat config errors - use single quotes for regex, no duplicate keys, no leading spaces on top-level keys.&lt;/li&gt;
&lt;li&gt;Daily indices (boardgame-logs-%{+YYYY.MM.dd}) simplify retention management and improve query performance.&lt;/li&gt;
&lt;li&gt;Security is not optional in production - enable xpack.security, use TLS, and restrict port access via security groups.&lt;/li&gt;
&lt;li&gt;Test connectivity bottom-up: Elasticsearch first, then Logstash, then Kibana, then Filebeat.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  11. Conclusion
&lt;/h2&gt;

&lt;p&gt;Setting up the ELK stack is not just about installing four packages - it is about building a reliable data pipeline that turns scattered log files into searchable, visualizable insights. Each component has a clear role: Filebeat ships, Logstash transforms, Elasticsearch stores, and Kibana visualizes.&lt;/p&gt;

&lt;p&gt;The most common failures are not in the software itself, but in the configuration glue between components: protocol mismatches between Kibana and Elasticsearch, YAML formatting errors in Filebeat, unopened firewall ports between servers, and missing runtime dependencies.&lt;/p&gt;

&lt;p&gt;By following the step-by-step approach in this guide - installing bottom-up (Elasticsearch → Logstash → Kibana → Filebeat), verifying each component before moving to the next, and understanding why each configuration setting exists - you can set up a production-grade centralized logging system that scales from a single application to dozens of services.&lt;/p&gt;

&lt;p&gt;Once the pipeline is flowing, the real value begins: building dashboards for error rates, setting up alerts for anomalies, and turning your logs from an afterthought into your first line of defense.&lt;/p&gt;

&lt;p&gt;About the Author: &lt;em&gt;Rajan is a DevOps Engineer at &lt;a href="https://www.addwebsolution.com/" rel="noopener noreferrer"&gt;AddWebSolution&lt;/a&gt;, specializing in infrastructure automation, CI/CD pipeline optimization, and seamless deployments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>elk</category>
      <category>elasticsearch</category>
      <category>logstash</category>
      <category>kibana</category>
    </item>
    <item>
      <title>Code Quality Checks and Deployment with GitHub Actions</title>
      <dc:creator>Rajan Vavadia</dc:creator>
      <pubDate>Wed, 18 Mar 2026 12:09:37 +0000</pubDate>
      <link>https://dev.to/addwebsolutionpvtltd/code-quality-checks-and-deployment-with-github-actions-3p5d</link>
      <guid>https://dev.to/addwebsolutionpvtltd/code-quality-checks-and-deployment-with-github-actions-3p5d</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“The XP philosophy is to start where you are now and move towards the ideal. From where you are now, could you improve a little bit?” - Kent Beck&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Overview: Two-Workflow Strategy (Quality Gate + Deploy)&lt;/li&gt;
&lt;li&gt;Workflow 1: Code Quality Checks (PR → stg)&lt;/li&gt;
&lt;li&gt;Workflow 2: Deploy (push → stg)&lt;/li&gt;
&lt;li&gt;Practical Setup (Step-by-step)&lt;/li&gt;
&lt;li&gt;FAQs&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Introduction
&lt;/h2&gt;

&lt;p&gt;This document explains a practical GitHub Actions setup that enforces Flutter code quality checks on every pull request to the stg branch and deploys the Flutter web build to AWS S3 whenever code is pushed to stg.&lt;/p&gt;

&lt;p&gt;The approach uses two separate workflows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code Quality Checks: runs on pull_request events (stg)&lt;/li&gt;
&lt;li&gt;Deploy (stg): runs on push events to stg&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2. Overview: Two-Workflow Strategy (Quality Gate + Deploy)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Developers open a Pull Request targeting stg.&lt;/li&gt;
&lt;li&gt;GitHub Actions runs Flutter clean → pub get → analyze → test.&lt;/li&gt;
&lt;li&gt;If checks pass, the PR is safe to merge. If checks fail, a Rocket.Chat alert is sent with the build log.&lt;/li&gt;
&lt;li&gt;After merge/push to stg, a separate Deploy workflow builds Flutter web and syncs build/web to an S3 bucket.&lt;/li&gt;
&lt;li&gt;Deploy success/failure is reported to Rocket.Chat.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why separate workflows?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PR checks run in the PR context and are ideal for gating merges.&lt;/li&gt;
&lt;li&gt;Deploy runs only on trusted branch pushes, avoiding accidental deployments from feature branches.&lt;/li&gt;
&lt;li&gt;Clearer troubleshooting: you can tell instantly whether a failure is quality-related or deployment-related.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  3. Workflow 1: Code Quality Checks (PR → stg)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;3.1 What it does&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Triggers when a Pull Request targets branch stg.&lt;/li&gt;
&lt;li&gt;Ensures only one run per PR branch using concurrency (cancels older in-progress runs when a new commit is pushed).&lt;/li&gt;
&lt;li&gt;Runs Flutter quality steps and writes a readable log file (cq.log).&lt;/li&gt;
&lt;li&gt;Uploads cq.log as an artifact (kept for 7 days).&lt;/li&gt;
&lt;li&gt;Notifies Rocket.Chat on success or failure (failure attempts to upload cq.log to the room).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3.2 YAML: Code Quality Checks&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Code Quality Checks

on:
  pull_request:
    branches: [stg]

concurrency:
  group: code-quality-${{ github.head_ref }}
  cancel-in-progress: true

env:
  SITE_URL:      ${{ secrets.SITE_URL }}
  ENV_NAME:      ${{ secrets.ENV_NAME }}
  RC_BASE_URL:   ${{ secrets.RC_BASE_URL }}
  RC_ROOM_ID:    ${{ secrets.RC_ROOM_ID }}
  RC_USER_ID:    ${{ secrets.RC_USER_ID }}
  RC_AUTH_TOKEN:  ${{ secrets.RC_AUTH_TOKEN }}

jobs:
  code-quality:
    name: Flutter Analyze &amp;amp; Test
    runs-on: ubuntu-latest
    timeout-minutes: 30

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Flutter
        uses: subosito/flutter-action@v2
        with:
          flutter-version: '3.29.2'

      - name: Run code quality checks
        id: quality
        run: |
          set -e
          LOG="cq.log"

          echo "════════════════════════════════════════════════════" &amp;gt; "$LOG"
          echo " Code Quality Checks Started"                        &amp;gt;&amp;gt; "$LOG"
          echo "═══════════════════════════════════════════════════════" &amp;gt;&amp;gt; "$LOG"

          echo ""                                                      &amp;gt;&amp;gt; "$LOG"
          echo " STEP 1: flutter clean"                              &amp;gt;&amp;gt; "$LOG"
          echo "───────────────────────────────────────────────────────" &amp;gt;&amp;gt; "$LOG"
          flutter clean &amp;gt;&amp;gt; "$LOG" 2&amp;gt;&amp;amp;1

          echo ""                                                      &amp;gt;&amp;gt; "$LOG"
          echo " STEP 2: flutter pub get"                            &amp;gt;&amp;gt; "$LOG"
          echo "───────────────────────────────────────────────────────" &amp;gt;&amp;gt; "$LOG"
          flutter pub get &amp;gt;&amp;gt; "$LOG" 2&amp;gt;&amp;amp;1

          echo ""                                                      &amp;gt;&amp;gt; "$LOG"
          echo " STEP 3: flutter analyze"                            &amp;gt;&amp;gt; "$LOG"
          echo "───────────────────────────────────────────────────────" &amp;gt;&amp;gt; "$LOG"
          flutter analyze &amp;gt;&amp;gt; "$LOG" 2&amp;gt;&amp;amp;1

          echo ""                                                      &amp;gt;&amp;gt; "$LOG"
          echo " STEP 4: flutter test"                               &amp;gt;&amp;gt; "$LOG"
          echo "───────────────────────────────────────────────────────" &amp;gt;&amp;gt; "$LOG"
          flutter test &amp;gt;&amp;gt; "$LOG" 2&amp;gt;&amp;amp;1

          echo ""                                                      &amp;gt;&amp;gt; "$LOG"
          echo "═══════════════════════════════════════════════════════" &amp;gt;&amp;gt; "$LOG"
          echo "All checks passed!"                                  &amp;gt;&amp;gt; "$LOG"
          echo "═══════════════════════════════════════════════════════" &amp;gt;&amp;gt; "$LOG"

      - name: Upload build log
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: code-quality-log
          path: cq.log
          retention-days: 7

      - name: Notify Rocket.Chat (success)
        if: success()
        run: |
          REPO_URL="${{ github.event.repository.html_url }}"
          COMMIT_URL="${REPO_URL}/commit/${{ github.sha }}"
          AUTHOR="${{ github.event.pull_request.user.login }}"

          MSG_TEXT="*Code Quality Checks Passed*

          *Site-URL:* ${SITE_URL}
          *PR Title:* ${{ github.event.pull_request.title }}
          *PR URL:* ${{ github.event.pull_request.html_url }}
          *Environment:* ${ENV_NAME}

          *on PR:* ${{ github.head_ref }} → ${{ github.base_ref }}
          *Git Commit:* ${COMMIT_URL}
          *Git Author:* ${AUTHOR}"
          jq -n \
            --arg roomId "${RC_ROOM_ID}" \
            --arg text "$MSG_TEXT" \
            '{roomId: $roomId, text: $text}' | \
          curl -sS -X POST "${RC_BASE_URL}/api/v1/chat.postMessage" \
            -H "X-User-Id: ${RC_USER_ID}" \
            -H "X-Auth-Token: ${RC_AUTH_TOKEN}" \
            -H "Content-Type: application/json" \
            -d @- || echo "Notification failed, continuing"

      - name: Notify Rocket.Chat (failure)
        if: failure()
        run: |
          REPO_URL="${{ github.event.repository.html_url }}"
          COMMIT_URL="${REPO_URL}/commit/${{ github.sha }}"
          AUTHOR="${{ github.event.pull_request.user.login }}"

          MSG_TEXT="*Code Quality Checks Failed*

          *Site-URL:* ${SITE_URL}
          *PR Title:* ${{ github.event.pull_request.title }}
          *PR URL:* ${{ github.event.pull_request.html_url }}
          *Environment:* ${ENV_NAME}

          *on PR:* ${{ github.head_ref }} → ${{ github.base_ref }}
          *Git Commit:* ${COMMIT_URL}
          *Git Author:* ${AUTHOR}

           *Full build log attached below:*"

          UPLOAD_RESP=$(curl -sS -X POST "${RC_BASE_URL}/api/v1/rooms.media/${RC_ROOM_ID}" \
            -H "X-User-Id: ${RC_USER_ID}" \
            -H "X-Auth-Token: ${RC_AUTH_TOKEN}" \
            -F "file=@cq.log") || true

          FILE_URL=$(echo "$UPLOAD_RESP" | jq -r '.file.url // empty')

          if [ -n "$FILE_URL" ]; then
            jq -n \
              --arg rid "${RC_ROOM_ID}" \
              --arg msg "$MSG_TEXT" \
              --arg title "📄 BUILD LOG (cq.log) - Click to download" \
              --arg link "$FILE_URL" \
              --arg desc "Contains detailed output of all build steps" \
              '{message: {rid: $rid, msg: $msg, attachments: [{title: $title, title_link: $link, text: $desc, collapsed: false}]}}' | \
            curl -sS -X POST "${RC_BASE_URL}/api/v1/chat.sendMessage" \
              -H "X-User-Id: ${RC_USER_ID}" \
              -H "X-Auth-Token: ${RC_AUTH_TOKEN}" \
              -H "Content-Type: application/json" \
              -d @-
          else
            jq -n \
              --arg roomId "${RC_ROOM_ID}" \
              --arg text "$MSG_TEXT" \
              '{roomId: $roomId, text: $text}' | \
            curl -sS -X POST "${RC_BASE_URL}/api/v1/chat.postMessage" \
              -H "X-User-Id: ${RC_USER_ID}" \
              -H "X-Auth-Token: ${RC_AUTH_TOKEN}" \
              -H "Content-Type: application/json" \
              -d @-
          fi || echo "Failure notification failed"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3.3 How logs + artifacts work&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The workflow writes all step output to cq.log so your team has one clean, shareable log file.&lt;/li&gt;
&lt;li&gt;Upload build log runs with if: always(), so the artifact is saved even when analyze/test fails.&lt;/li&gt;
&lt;li&gt;retention-days: 7 keeps the artifact for one week to reduce storage and keep logs relevant.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3.4 Rocket.Chat notifications (success/failure)&lt;/strong&gt;&lt;br&gt;
Two messages are sent:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Success: posts a summary (PR title, PR URL, environment, branch and commit details).&lt;/li&gt;
&lt;li&gt;Failure: tries to upload cq.log to the room, then posts a message with a link to the uploaded file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If uploading fails, it still posts the failure summary so you are notified.&lt;br&gt;
Prerequisites on the runner:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;jq must be available (it is available by default on ubuntu-latest runners).&lt;/li&gt;
&lt;li&gt;Rocket.Chat API base URL, user id, auth token, and room id must be configured as secrets.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  4. Workflow 2: Deploy (push → stg)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;4.1 What it does&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Triggers on push to stg (typically after PR merge).&lt;/li&gt;
&lt;li&gt;Builds Flutter web (build/web output).&lt;/li&gt;
&lt;li&gt;Configures AWS credentials on the runner.&lt;/li&gt;
&lt;li&gt;Syncs build/web to S3 bucket using aws s3 sync --delete.&lt;/li&gt;
&lt;li&gt;Notifies Rocket.Chat on success/failure with commit and author details.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4.2 YAML: Deploy (stg)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy (stg)

on:
  push:
    branches: [stg]

concurrency:
  group: deploy-stg
  cancel-in-progress: false

permissions:
  contents: read

env:
  SITE_URL:      ${{ secrets.SITE_URL }}
  ENV_NAME:      ${{ secrets.ENV_NAME }}
  RC_BASE_URL:   ${{ secrets.RC_BASE_URL }}
  RC_ROOM_ID:    ${{ secrets.RC_ROOM_ID }}
  RC_USER_ID:    ${{ secrets.RC_USER_ID }}
  RC_AUTH_TOKEN:  ${{ secrets.RC_AUTH_TOKEN }}

jobs:
  build-and-deploy:
    name: Build &amp;amp; Deploy to AWS S3
    runs-on: ubuntu-latest
    timeout-minutes: 30

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Flutter
        uses: subosito/flutter-action@v2
        with:
          flutter-version: '3.29.2'

      - name: Flutter clean
        run: flutter clean

      - name: Flutter build web
        run: |
          flutter build web \
            --dart-define=FLUTTER_WEB_USE_SKIA=false \
            --pwa-strategy=none

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Deploy to S3
        run: |
          aws s3 sync build/web s3://${{ secrets.AWS_S3_BUCKET }} \
            --delete

      - name: Notify Rocket.Chat (success)
        if: success()
        run: |
          REPO_URL="${{ github.event.repository.html_url }}"
          COMMIT_URL="${REPO_URL}/commit/${{ github.sha }}"
          AUTHOR=$(git log -1 --pretty=%an)

          MSG_TEXT="*Deployment Successful*

          *Site-URL:* ${SITE_URL}
          *Environment:* ${ENV_NAME}

          *Branch:* ${{ github.ref_name }}
          *Git Commit:* ${COMMIT_URL}
          *Git Author:* ${AUTHOR}"

          jq -n \
            --arg roomId "${RC_ROOM_ID}" \
            --arg text "$MSG_TEXT" \
            '{roomId: $roomId, text: $text}' | \
          curl -sS -X POST "${RC_BASE_URL}/api/v1/chat.postMessage" \
            -H "X-User-Id: ${RC_USER_ID}" \
            -H "X-Auth-Token: ${RC_AUTH_TOKEN}" \
            -H "Content-Type: application/json" \
            -d @- || echo "Deployment notification failed"

      - name: Notify Rocket.Chat (failure)
        if: failure()
        run: |
          REPO_URL="${{ github.event.repository.html_url }}"
          COMMIT_URL="${REPO_URL}/commit/${{ github.sha }}"
          AUTHOR=$(git log -1 --pretty=%an)

          MSG_TEXT="*Deployment Failed*

          *Site-URL:* ${SITE_URL}
          *Environment:* ${ENV_NAME}

          *Branch:* ${{ github.ref_name }}
          *Git Commit:* ${COMMIT_URL}
          *Git Author:* ${AUTHOR}"

          jq -n \
            --arg roomId "${RC_ROOM_ID}" \
            --arg text "$MSG_TEXT" \
            '{roomId: $roomId, text: $text}' | \
          curl -sS -X POST "${RC_BASE_URL}/api/v1/chat.postMessage" \
            -H "X-User-Id: ${RC_USER_ID}" \
            -H "X-Auth-Token: ${RC_AUTH_TOKEN}" \
            -H "Content-Type: application/json" \
            -d @- || echo "Deployment failure notification failed"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;“Beware of bugs in the above code; I have only proved it correct, not tried it.” - Donald Knuth&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;4.3 AWS S3 deployment notes&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3 bucket must exist and the AWS credentials must have permission to sync to it.&lt;/li&gt;
&lt;li&gt;aws s3 sync --delete removes files in S3 that no longer exist in build/web (prevents stale assets).&lt;/li&gt;
&lt;li&gt;If you use CloudFront, consider adding an invalidation step (optional) after sync.&lt;/li&gt;
&lt;li&gt;If your site is a Single Page App, configure S3/CloudFront to route 404s to index.html.&lt;/li&gt;
&lt;/ul&gt;
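&lt;p&gt;If you do front the bucket with CloudFront, an invalidation step after the sync could look like this (CLOUDFRONT_DISTRIBUTION_ID is an assumed secret name):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Invalidate CloudFront cache
        run: |
          aws cloudfront create-invalidation \
            --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }} \
            --paths "/*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;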

&lt;h2&gt;
  
  
  5. Practical Setup (Step-by-step)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;5.1 Create workflow files&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In your repo, create folder: .github/workflows/&lt;/li&gt;
&lt;li&gt;Add file: code-quality.yml (paste Workflow 1 YAML).&lt;/li&gt;
&lt;li&gt;Add file: deploy.yml (paste Workflow 2 YAML).&lt;/li&gt;
&lt;li&gt;Commit and push to GitHub.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5.2 Configure GitHub secrets&lt;/strong&gt;&lt;br&gt;
Go to: Repository → Settings → Secrets and variables → Actions → New repository secret&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SITE_URL (example: &lt;a href="https://stg.example.com" rel="noopener noreferrer"&gt;https://stg.example.com&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;ENV_NAME (example: stg)&lt;/li&gt;
&lt;li&gt;RC_BASE_URL (example: &lt;a href="https://chat.example.com" rel="noopener noreferrer"&gt;https://chat.example.com&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;RC_ROOM_ID (Rocket.Chat room id)&lt;/li&gt;
&lt;li&gt;RC_USER_ID (Rocket.Chat API user id)&lt;/li&gt;
&lt;li&gt;RC_AUTH_TOKEN (Rocket.Chat API token)&lt;/li&gt;
&lt;li&gt;AWS_ACCESS_KEY_ID&lt;/li&gt;
&lt;li&gt;AWS_SECRET_ACCESS_KEY&lt;/li&gt;
&lt;li&gt;AWS_REGION (example: ap-south-1)&lt;/li&gt;
&lt;li&gt;AWS_S3_BUCKET (bucket name only, without s3://)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5.3 Set up branch protection (recommended)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In GitHub: Settings → Branches → Add branch protection rule for stg.&lt;/li&gt;
&lt;li&gt;Enable 'Require a pull request before merging'.&lt;/li&gt;
&lt;li&gt;Enable 'Require status checks to pass before merging'.&lt;/li&gt;
&lt;li&gt;Select the workflow check name that appears for the PR (Flutter Analyze &amp;amp; Test / Code Quality Checks).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5.4 Verify end-to-end flow&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a feature branch and open a PR to stg.&lt;/li&gt;
&lt;li&gt;Confirm the Code Quality Checks workflow starts automatically.&lt;/li&gt;
&lt;li&gt;If it passes, merge the PR to stg.&lt;/li&gt;
&lt;li&gt;Confirm Deploy (stg) runs on the push event and uploads the new web build to S3.&lt;/li&gt;
&lt;li&gt;Check the Rocket.Chat channel for the success messages.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5.5 Common improvements (optional but practical)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add caching (Flutter pub cache) to speed up builds.&lt;/li&gt;
&lt;li&gt;Run dart format --set-exit-if-changed . for formatting checks (the older flutter format command has been removed in recent Flutter releases) and tighten rules in analysis_options.yaml for stricter linting.&lt;/li&gt;
&lt;li&gt;Use AWS OIDC instead of long-lived AWS keys (more secure) if your org supports it.&lt;/li&gt;
&lt;li&gt;Add environment protection for stg deployments (manual approval for deploy job).&lt;/li&gt;
&lt;/ul&gt;
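&lt;p&gt;For the caching suggestion, subosito/flutter-action supports a built-in cache input - a sketch of the adjusted Setup Flutter step:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Setup Flutter
        uses: subosito/flutter-action@v2
        with:
          flutter-version: '3.29.2'
          cache: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;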

&lt;p&gt;&lt;strong&gt;Practical Demonstration (Images Explained)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Modify the README file on the feature branch.&lt;br&gt;
In this demonstration, the README.md file was updated with the text "Addwebsolution" and the changes were committed and pushed to the feature-dev branch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7358dvxok9hib075vkkq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7358dvxok9hib075vkkq.png" alt=" " width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Create a Pull Request from feature-dev to the stg Branch&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofuazrsz8k3a4rpfvwfa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofuazrsz8k3a4rpfvwfa.png" alt=" " width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once the Pull Request is created, the Code Quality Checks workflow is automatically triggered.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faq3i99aglf5kihsh13dp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faq3i99aglf5kihsh13dp.png" alt=" " width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upon successful completion of the GitHub Actions pipeline, a Rocket.Chat notification is dispatched to the configured channel.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkytavfdjmt4thrz7jdt2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkytavfdjmt4thrz7jdt2.png" alt=" " width="800" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The screenshot below shows the Pull Request in a mergeable state: the Code Quality status check has passed (indicated by the green checkmark), and the "Merge pull request" button is now enabled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqul461u2k5pzusxbdm5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqul461u2k5pzusxbdm5.png" alt=" " width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Click the "Confirm Merge" button to merge the Pull Request into stg. This push event automatically triggers the Deploy workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nvsc3mzsn4druusnezv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nvsc3mzsn4druusnezv.png" alt=" " width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Deploy (stg) pipeline is automatically triggered, and the deployment job begins execution.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqsx7rximjfk66zuetq08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqsx7rximjfk66zuetq08.png" alt=" " width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The deployment pipeline has completed successfully: the Flutter web build artifacts have been synced to the configured AWS S3 bucket.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf1pmd6bv3lwy9xetwqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf1pmd6bv3lwy9xetwqq.png" alt=" " width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Rocket.Chat notification confirms the successful deployment, including details such as site URL, environment name, branch, commit hash, and author.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkw12ipwqxbep24bpa0q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkw12ipwqxbep24bpa0q.png" alt=" " width="800" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code Quality and Deployment Pipeline Failure Scenario&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Introduce a deliberate syntax error in main.dart (e.g., a missing closing parenthesis) and push the changes to the feature branch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2bmq4tvtu2obda1ma47o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2bmq4tvtu2obda1ma47o.png" alt=" " width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A new Pull Request is created targeting the stg branch, which automatically triggers the Code Quality Checks workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3hohcjziyu6fafcuvbr4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3hohcjziyu6fafcuvbr4.png" alt=" " width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; The Code Quality Checks workflow fails due to the syntax error: in this example, print("hello" is missing its closing parenthesis. The flutter analyze step detects the issue, and the pipeline exits with a non-zero status code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3estl709r9cczzle51u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3estl709r9cczzle51u.png" alt=" " width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A failure notification is sent to Rocket.Chat with the build log attached. Since the Code Quality status check has failed, the Pull Request cannot be merged, and the Deploy workflow is never triggered, effectively preventing broken code from reaching the stg environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7v5kyub7v8ir8sj3dqz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7v5kyub7v8ir8sj3dqz.png" alt=" " width="800" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This practical demonstration validates the end-to-end CI/CD pipeline behavior across both success and failure scenarios. When code quality checks pass, the Pull Request is merged into the stg branch and the Deploy workflow automatically builds and publishes the Flutter web application to AWS S3, with a success notification delivered to Rocket.Chat. Conversely, when a syntax error or lint violation is detected, the Code Quality Checks workflow fails, the Pull Request is blocked from merging, and the Deploy workflow is never invoked, ensuring that only validated, production-ready code reaches the staging environment. This two-workflow strategy enforces a reliable quality gate at the PR level while keeping deployment isolated to trusted branch pushes.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;“Continuous Integration is not a tool. It is a practice.” - Common CI principle (team reminder)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  8. FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q1. Why does Code Quality run on pull_request but Deploy runs on push?&lt;/strong&gt;&lt;br&gt;
A. pull_request is best for quality gates before merging, while push to stg is best for deployments because it limits deploys to the stg branch history (usually protected).&lt;/p&gt;
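&lt;p&gt;As a sketch, the two trigger blocks might look like this (the file names are illustrative; the branch name follows this setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# code-quality.yml - runs on PRs targeting stg
on:
  pull_request:
    branches: [stg]

# deploy.yml - runs only on pushes to stg (i.e., after a merge)
on:
  push:
    branches: [stg]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;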

&lt;p&gt;&lt;strong&gt;Q2. What does concurrency do in Code Quality Checks?&lt;/strong&gt;&lt;br&gt;
A. It groups runs by the PR branch name (github.head_ref). If a new commit is pushed to the same PR branch, the older run is canceled.&lt;/p&gt;
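&lt;p&gt;A minimal concurrency block that produces this behavior:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;concurrency:
  group: code-quality-${{ github.head_ref }}
  cancel-in-progress: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;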

&lt;p&gt;&lt;strong&gt;Q3. Why upload cq.log if GitHub Actions already has logs?&lt;/strong&gt;&lt;br&gt;
A. cq.log is a single consolidated file you can share, attach to chats, and keep as an artifact. It also makes Rocket.Chat upload easy.&lt;/p&gt;
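&lt;p&gt;One way to capture and publish such a log (the step names and analyze command are assumptions, not the exact workflow):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Run Flutter analyze
  shell: bash   # explicit bash enables pipefail, so a failed analyze still fails the step
  run: flutter analyze 2&amp;gt;&amp;amp;1 | tee cq.log

- name: Upload consolidated log
  if: always()
  uses: actions/upload-artifact@v4
  with:
    name: cq-log
    path: cq.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;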

&lt;p&gt;&lt;strong&gt;Q4. What if Rocket.Chat notification fails?&lt;/strong&gt;&lt;br&gt;
A. The workflow prints a warning and continues. Your CI status still reflects pass/fail correctly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q5. What permissions are needed for S3 deploy?&lt;/strong&gt;&lt;br&gt;
A. The AWS identity used in GitHub Actions needs at minimum s3:ListBucket and s3:PutObject/DeleteObject permissions for the target bucket.&lt;/p&gt;
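&lt;p&gt;A minimal IAM policy sketch for such a deploy identity (the bucket name is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::YOUR_BUCKET"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::YOUR_BUCKET/*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;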

&lt;p&gt;&lt;strong&gt;Q6. How do I confirm the build is deployed correctly?&lt;/strong&gt;&lt;br&gt;
A. Check the S3 bucket objects' last-modified timestamps and open the SITE_URL in a browser. If CloudFront is used, confirm cache invalidation or TTL behavior.&lt;/p&gt;

&lt;h2&gt;
  
  
  9. Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;PR quality checks prevent broken code from entering stg.&lt;/li&gt;
&lt;li&gt;Concurrency prevents duplicate CI runs and saves time.&lt;/li&gt;
&lt;li&gt;Artifacts (cq.log) make debugging faster and help the team collaborate.&lt;/li&gt;
&lt;li&gt;Deploy is isolated to stg pushes, which is safer and easier to audit.&lt;/li&gt;
&lt;li&gt;Rocket.Chat alerts keep the team informed without opening GitHub every time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  10. Conclusion
&lt;/h2&gt;

&lt;p&gt;This setup provides a clean CI/CD workflow for a Flutter web project: enforce quality on pull requests and deploy only after stg updates. With proper secrets, branch protections, and clear notifications, your team gets fast feedback and reliable deployments.&lt;/p&gt;

&lt;p&gt;About the Author: &lt;em&gt;Rajan is a DevOps Engineer at &lt;a href="https://www.addwebsolution.com/" rel="noopener noreferrer"&gt;AddWebSolution&lt;/a&gt;, specializing in infrastructure automation, optimizing CI/CD pipelines, and ensuring seamless deployments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>codequality</category>
      <category>automation</category>
      <category>githubactions</category>
    </item>
    <item>
      <title>Node.js Application with CI/CD GitLab Pipeline on AWS EC2</title>
      <dc:creator>Rajan Vavadia</dc:creator>
      <pubDate>Thu, 26 Feb 2026 12:01:06 +0000</pubDate>
      <link>https://dev.to/addwebsolutionpvtltd/nodejs-application-with-cicd-gitlab-pipeline-on-aws-ec2-2kk9</link>
      <guid>https://dev.to/addwebsolutionpvtltd/nodejs-application-with-cicd-gitlab-pipeline-on-aws-ec2-2kk9</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“Automation is the key to speed and reliability in modern software development.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Architecture Overview&lt;/li&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;CI/CD Workflow (Step-by-Step)&lt;/li&gt;
&lt;li&gt;GitLab Pipeline Configuration&lt;/li&gt;
&lt;li&gt;Deployment Process on AWS EC2&lt;/li&gt;
&lt;li&gt;Security Best Practices&lt;/li&gt;
&lt;li&gt;Interesting Facts &amp;amp; Statistics&lt;/li&gt;
&lt;li&gt;FAQs&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Introduction
&lt;/h2&gt;

&lt;p&gt;Continuous Integration and Continuous Deployment (CI/CD) is a modern development practice that automates the process of building, testing, and deploying applications. This document explains how to set up a CI/CD pipeline for a Node.js application using GitLab CI/CD and deploy it automatically to an AWS EC2 instance.&lt;br&gt;
The goal is to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automate deployment&lt;/li&gt;
&lt;li&gt;Reduce manual errors&lt;/li&gt;
&lt;li&gt;Improve development speed&lt;/li&gt;
&lt;li&gt;Ensure reliable releases&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  2. Architecture Overview
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Backend: Node.js&lt;/li&gt;
&lt;li&gt;Version Control: GitLab&lt;/li&gt;
&lt;li&gt;CI/CD Tool: GitLab Pipeline&lt;/li&gt;
&lt;li&gt;Server: AWS EC2 (Ubuntu)&lt;/li&gt;
&lt;li&gt;Process Manager: PM2&lt;/li&gt;
&lt;li&gt;SSH Authentication: Secure Key-based login&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;High-level Flow:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Developer pushes code to GitLab repository.&lt;/li&gt;
&lt;li&gt;GitLab pipeline triggers automatically.&lt;/li&gt;
&lt;li&gt;Pipeline installs dependencies and builds project.&lt;/li&gt;
&lt;li&gt;GitLab connects to EC2 via SSH.&lt;/li&gt;
&lt;li&gt;Code is pulled on EC2 server.&lt;/li&gt;
&lt;li&gt;Application restarts using PM2.&lt;/li&gt;
&lt;li&gt;Nginx routes HTTP traffic to the Node.js app&lt;/li&gt;
&lt;/ol&gt;
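&lt;p&gt;The Nginx step above typically relies on a reverse-proxy server block along these lines (the domain and port 3000 are placeholders for your app):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;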
&lt;h2&gt;
  
  
  3. Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before setting up CI/CD, ensure the following:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitLab Setup&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitLab repository created&lt;/li&gt;
&lt;li&gt;Branches (dev/stage/prod) configured&lt;/li&gt;
&lt;li&gt;GitLab Runner enabled (shared runner works)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS EC2 Setup&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ubuntu EC2 instance running&lt;/li&gt;
&lt;li&gt;Node.js &amp;amp; npm installed&lt;/li&gt;
&lt;li&gt;Git installed on server&lt;/li&gt;
&lt;li&gt;SSH access configured&lt;/li&gt;
&lt;li&gt;PM2 installed globally:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install pm2 -g
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;SSH Key Setup&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate SSH key on local system:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ssh-keygen -t rsa -b 4096
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Add public key to EC2:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ~/.ssh/authorized_keys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
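&lt;p&gt;One common way to append the public key to that file (the user name and host are placeholders for your EC2 instance):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Copies ~/.ssh/id_rsa.pub into the remote ~/.ssh/authorized_keys
ssh-copy-id -i ~/.ssh/id_rsa.pub ubuntu@YOUR_EC2_PUBLIC_IP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;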


&lt;p&gt;&lt;strong&gt;Add the private key in GitLab&lt;/strong&gt; under &lt;strong&gt;GitLab → Settings → CI/CD → Variables&lt;/strong&gt;, creating the following variables:&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;SSH_PRIVATE_KEY&lt;/li&gt;
&lt;li&gt;SSH_HOST&lt;/li&gt;
&lt;li&gt;SSH_USER&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  4. CI/CD Workflow (Step-by-Step)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Developer Pushes Code&lt;/strong&gt;&lt;br&gt;
Developer pushes code to the GitLab branch (e.g., staging or production).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Pipeline Triggered&lt;/strong&gt;&lt;br&gt;
GitLab detects changes and starts pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Install Dependencies&lt;/strong&gt;&lt;br&gt;
Pipeline installs Node.js packages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: SSH Connection&lt;/strong&gt;&lt;br&gt;
GitLab pipeline connects to AWS EC2 via SSH.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Deployment&lt;/strong&gt;&lt;br&gt;
On EC2 server:&lt;br&gt;
Pull latest code&lt;br&gt;
Install dependencies&lt;br&gt;
Restart Node app with PM2&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Live Deployment&lt;/strong&gt;&lt;br&gt;
Application updated automatically without manual login.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“CI/CD turns deployment from a risky event into a routine process.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  5. GitLab Pipeline Configuration
&lt;/h2&gt;

&lt;p&gt;Create &lt;strong&gt;.gitlab-ci.yml&lt;/strong&gt; in the project root:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stages:
  - production

deploy_to_ec2:
  stage: production
  image: alpine:latest
  only:
    - prd
  before_script:
    - apk add --no-cache openssh
    - mkdir -p ~/.ssh
    - cp "$SSH_PRIVATE_KEY" ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - ssh-keyscan -H "$SSH_HOST" &amp;gt;&amp;gt; ~/.ssh/known_hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;script:
   - |
     ssh "$SSH_USER@$SSH_HOST" &amp;lt;&amp;lt; 'EOF'
       set -e


       echo "---------- Checking Directory ---------------"
       cd "$PATH_DIR"


       echo "---------------- Load NVM ----------------"
       export NVM_DIR="$HOME/.nvm"
       [ -s "$NVM_DIR/nvm.sh" ] &amp;amp;&amp;amp; . "$NVM_DIR/nvm.sh"
       [ -s "$NVM_DIR/bash_completion" ] &amp;amp;&amp;amp; . "$NVM_DIR/bash_completion"


       echo "--------Node Version:-----------"
       node -v || echo "Node not found"


       echo "----------NPM Version:--------------"
       npm -v || echo "npm not found"


       echo "----------------- Git Pull ------------------"
       git pull origin "$PRD_BRANCH"


       echo "----------------- npm install ----------------"
       npm install


       echo "----------------- Restart PM2 ----------------"
       pm2 restart all


       echo "---------------- Deployment Completed ----------------"
     EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  6. Deployment Process on AWS EC2
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;On EC2 server,&lt;/strong&gt; clone the project the first time:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone
cd project
npm install
pm2 start app.js --name node-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;After CI/CD setup:&lt;/strong&gt;&lt;br&gt;
Deployment becomes automatic on every push.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical Demonstration (Images Explained)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1. BEFORE:&lt;/strong&gt; Original Login Page&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This shows the original application running at &lt;a href="https://poc-addweb-app.addwebprojects.com" rel="noopener noreferrer"&gt;https://poc-addweb-app.addwebprojects.com&lt;/a&gt; with the title "Login Page". This is the state before making any code changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3ezh3t4fyzuhf7xo5dw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3ezh3t4fyzuhf7xo5dw.png" alt=" " width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2. Making Changes &amp;amp; Pushing Code&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This terminal screenshot shows the developer workflow.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gmbkhcwl0n3txuhf8te.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gmbkhcwl0n3txuhf8te.png" alt=" " width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The pushed commit is reflected in the GitLab repository (shown as "6 minutes ago").&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faj95usowla8q7cd9x0tm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faj95usowla8q7cd9x0tm.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3. GitLab Pipelines Dashboard&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This shows the Pipelines page in GitLab with successful deployments:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7n4204ofa3uyd3v2p5cd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7n4204ofa3uyd3v2p5cd.png" alt=" " width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The highlighted row #2330658746 is the most recent deployment that was triggered automatically when code was pushed to the prd branch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjyz4txpemh64iwybp50q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjyz4txpemh64iwybp50q.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The pipeline deployment was successful. See the image below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4. AFTER: Updated Login Page&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6uqglu3gxbfeo2jqkpb5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6uqglu3gxbfeo2jqkpb5.png" alt=" " width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This shows the application after successful deployment. The title has changed from "Login Page" to "Addweb Login Page" - confirming the CI/CD pipeline worked correctly!&lt;/p&gt;

&lt;h2&gt;
  
  
  GitLab Pipeline Failure Scenario
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1. Intentionally Introduce an Error&lt;/strong&gt;&lt;br&gt;
To test pipeline failure behavior, we intentionally modified the package.json file by adding an invalid dependency:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnml7corj13jqfy32laye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnml7corj13jqfy32laye.png" alt=" " width="800" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example change in package.json:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"package-does-not-exist-3232": "2.1.0"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This package does not exist in the npm registry. The purpose was to simulate a real-world mistake such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Typo in package name&lt;/li&gt;
&lt;li&gt;Incorrect dependency version&lt;/li&gt;
&lt;li&gt;Invalid module added by mistake&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 2. Commit and Push the Wrong Code&lt;/strong&gt;&lt;br&gt;
After modifying package.json, the changes were committed and pushed to the prd branch:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak8tfn6isox2jrko6ce2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak8tfn6isox2jrko6ce2.png" alt=" " width="800" height="225"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git commit -m "We mentioned the wrong package name in package.json."
git push origin prd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Since our GitLab Pipeline is configured to run automatically on the prd branch, this push immediately triggered a new pipeline execution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3. GitLab Pipeline Triggered Automatically&lt;/strong&gt;&lt;br&gt;
As expected, GitLab Pipelines started running automatically as soon as the code was pushed.&lt;br&gt;
In the Pipelines dashboard we can see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A new pipeline execution was created: #2330702561&lt;/li&gt;
&lt;li&gt;Its status is shown as “Failed”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feo99q9hkv1sbxgzao9bc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feo99q9hkv1sbxgzao9bc.png" alt=" " width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This confirms that the CI/CD automation is working correctly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4. Pipeline Execution Failed&lt;/strong&gt;&lt;br&gt;
During pipeline execution, the following command was executed on the server:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxeyo760n5xcl7xjminr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxeyo760n5xcl7xjminr.png" alt=" " width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;npm install&lt;/strong&gt;&lt;br&gt;
Because we added a non-existent package, the installation failed with this error:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm error 404 Not Found - GET https://registry.npmjs.org/package-does-not-exist-3232 - Not found
npm error 404  The requested resource 'package-does-not-exist-3232@2.1.0' could not be found or you do not have permission to access it.
npm error 404
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;As a result:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The pipeline step stopped&lt;/li&gt;
&lt;li&gt;Deployment process was aborted&lt;/li&gt;
&lt;li&gt;Application was NOT restarted&lt;/li&gt;
&lt;li&gt;Previous working version remained intact&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If any step in the CI/CD pipeline fails, the deployment automatically stops. This protects production from broken or unstable code.&lt;/p&gt;
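&lt;p&gt;This fail-fast behavior comes from the set -e line at the top of the remote script: the shell aborts at the first failing command, so pm2 restart all never runs if npm install fails. A tiny illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# With set -e, the shell stops at the first failing command:
sh -c 'set -e; echo "npm install ok"; false; echo "pm2 restarted"'
# prints "npm install ok", then exits non-zero; the restart line never runs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;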

&lt;p&gt;&lt;strong&gt;“First automate, then optimize.”&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Security Best Practices
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Use SSH keys instead of passwords&lt;/li&gt;
&lt;li&gt;Restrict EC2 security group (only required ports)&lt;/li&gt;
&lt;li&gt;Store secrets in GitLab CI/CD variables&lt;/li&gt;
&lt;li&gt;Disable root login on EC2&lt;/li&gt;
&lt;li&gt;Use environment variables for API keys&lt;/li&gt;
&lt;li&gt;Enable firewall (UFW)&lt;/li&gt;
&lt;/ul&gt;
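&lt;p&gt;For the UFW suggestion above, a minimal setup sketch (the application profiles assume OpenSSH and Nginx are installed):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ufw allow OpenSSH
sudo ufw allow 'Nginx Full'
sudo ufw enable
sudo ufw status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;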

&lt;h2&gt;
  
  
  8. Interesting Facts &amp;amp; Statistics
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Industry research (such as the DORA State of DevOps reports) finds that teams with automated CI/CD deploy far more frequently than teams deploying manually.&lt;/li&gt;
&lt;li&gt;Automated pipelines substantially reduce deployment failures by catching errors before release.&lt;/li&gt;
&lt;li&gt;GitLab CI/CD supports Auto DevOps for full automation.&lt;/li&gt;
&lt;li&gt;AWS EC2 powers millions of applications worldwide.&lt;/li&gt;
&lt;li&gt;The large majority of DevOps teams now run CI/CD pipelines in production.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  9. FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q1: Why use CI/CD for Node.js?&lt;/strong&gt;&lt;br&gt;
→ It automates testing and deployment, saving time and reducing errors.&lt;br&gt;
&lt;strong&gt;Q2: Why GitLab CI/CD?&lt;/strong&gt;&lt;br&gt;
→ GitLab provides built-in CI/CD with repositories, making setup easier.&lt;br&gt;
&lt;strong&gt;Q3: Why use PM2?&lt;/strong&gt;&lt;br&gt;
→ PM2 keeps Node.js apps running and supports auto restart.&lt;br&gt;
&lt;strong&gt;Q4: Can we deploy multiple environments?&lt;/strong&gt;&lt;br&gt;
→ Yes, create separate branches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;dev&lt;/li&gt;
&lt;li&gt;staging&lt;/li&gt;
&lt;li&gt;production&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Q5: Is EC2 safe for production?&lt;/strong&gt;&lt;br&gt;
→ Yes, if proper security (SSH keys, firewall, updates) is applied.&lt;/p&gt;

&lt;h2&gt;
  
  
  10. Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;CI/CD automates build and deployment.&lt;/li&gt;
&lt;li&gt;GitLab pipeline integrates easily with AWS EC2.&lt;/li&gt;
&lt;li&gt;SSH keys ensure secure deployment.&lt;/li&gt;
&lt;li&gt;PM2 manages Node.js processes efficiently.&lt;/li&gt;
&lt;li&gt;Automated deployment saves time and reduces downtime.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  11. Conclusion
&lt;/h2&gt;

&lt;p&gt;Implementing CI/CD for a Node.js application using GitLab and AWS EC2 significantly improves development workflow and deployment reliability.&lt;br&gt;
With automated pipelines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developers can focus on coding&lt;/li&gt;
&lt;li&gt;Deployments become faster&lt;/li&gt;
&lt;li&gt;Errors are minimized&lt;/li&gt;
&lt;li&gt;Applications stay updated continuously&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CI/CD is no longer optional; it is a standard practice for modern, scalable applications.&lt;/p&gt;

&lt;p&gt;About the Author: &lt;em&gt;Rajan is a DevOps Engineer at &lt;a href="https://www.addwebsolution.com/" rel="noopener noreferrer"&gt;AddWebSolution&lt;/a&gt;, specializing in infrastructure automation, optimizing CI/CD pipelines, and ensuring seamless deployments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>ci</category>
      <category>continuousdeployment</category>
      <category>automation</category>
    </item>
    <item>
      <title>Why Developer Experience (DX) Matters in DevOps</title>
      <dc:creator>Rajan Vavadia</dc:creator>
      <pubDate>Fri, 28 Nov 2025 07:01:00 +0000</pubDate>
      <link>https://dev.to/addwebsolutionpvtltd/why-developer-experience-dx-matters-in-devops-18k6</link>
      <guid>https://dev.to/addwebsolutionpvtltd/why-developer-experience-dx-matters-in-devops-18k6</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“Great DevOps isn’t built on powerful tools; it’s built on developers who have the freedom and clarity to use them well.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;What Is Developer Experience (DX)?&lt;/li&gt;
&lt;li&gt;Why DX Matters in DevOps&lt;/li&gt;
&lt;li&gt;Key Elements of a Great Developer Experience&lt;/li&gt;
&lt;li&gt;How DX Impacts DevOps Success&lt;/li&gt;
&lt;li&gt;Common Challenges That Hurt Developer Experience&lt;/li&gt;
&lt;li&gt;Strategies to Improve DX in DevOps&lt;/li&gt;
&lt;li&gt;Tooling &amp;amp; Automation That Enhance DX&lt;/li&gt;
&lt;li&gt;Interesting Facts &amp;amp; Statistics&lt;/li&gt;
&lt;li&gt;FAQs&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;As DevOps becomes the backbone of modern software delivery, organizations are increasingly recognizing that successful DevOps doesn’t start with tools; it starts with people. More specifically, it begins with developers, their workflows, and their overall experience.&lt;br&gt;
Developer Experience (DX) is now emerging as a core focus for DevOps teams who want to accelerate delivery, minimize friction, reduce cognitive load, and create an environment where developers can actually do what they do best: build great software.&lt;br&gt;
This guide explains why DX matters, how it affects DevOps outcomes, and the steps organizations can take to create a frictionless, productive environment for engineering teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Developer Experience (DX)?
&lt;/h2&gt;

&lt;p&gt;Developer Experience (DX) refers to how developers feel while interacting with tools, processes, documentation, and systems throughout the development lifecycle. A good DX means developers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get fast feedback&lt;/li&gt;
&lt;li&gt;Spend less time fixing environment issues&lt;/li&gt;
&lt;li&gt;Have clear documentation and automated workflows&lt;/li&gt;
&lt;li&gt;Can focus on creativity rather than repetitive tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DX isn’t about giving developers “more perks”; it’s about improving efficiency, reducing friction, and enabling innovation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why DX Matters in DevOps
&lt;/h2&gt;

&lt;p&gt;Developer Experience and DevOps are deeply connected. DevOps aims to shorten the development lifecycle and improve collaboration through automation and cultural transformation. DX ensures developers actually enjoy, and succeed within, that system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How DX Elevates DevOps:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reduces Cognitive Load:&lt;/strong&gt; Developers aren’t overwhelmed by complex systems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Boosts Productivity:&lt;/strong&gt; Simple workflows and automation accelerate development.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improves Collaboration:&lt;/strong&gt; Better tools and documentation reduce miscommunication.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Faster Releases:&lt;/strong&gt; Efficient pipelines enable quicker delivery without burnout.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Higher-Quality Code:&lt;/strong&gt; When devs have clarity and confidence, bugs decrease.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short: &lt;strong&gt;Better DX = Happier Devs = Better Software + Faster Delivery.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Elements of a Great Developer Experience
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Documentation That Actually Helps&lt;/strong&gt;&lt;br&gt;
→ Clear, updated, concise documentation saves time and confusion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Fast Feedback Loops&lt;/strong&gt;&lt;br&gt;
→ Slow builds or testing cycles kill motivation. Fast pipelines keep devs moving.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Automation of Repetitive Tasks&lt;/strong&gt;&lt;br&gt;
→ CI/CD, linting, testing, and deployments should be automated, not manual.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Easy-to-Use Tooling&lt;/strong&gt;&lt;br&gt;
→ Tools need to be intuitive, consistent, and integrated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Stable and Reproducible Environments&lt;/strong&gt;&lt;br&gt;
→ “Works on my machine” should never happen in 2025.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Smooth Onboarding&lt;/strong&gt;&lt;br&gt;
→ New developers should get up to speed in hours, not weeks.&lt;/p&gt;

&lt;h2&gt;
  
  
  How DX Impacts DevOps Success
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Higher Deployment Frequency:&lt;/strong&gt; Developers ship code faster when pipelines and tools are optimized.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lower Change Failure Rates:&lt;/strong&gt; Less friction means fewer mistakes and smoother deployments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Faster Mean Time to Recovery (MTTR):&lt;/strong&gt; Clear observability and tooling help developers fix issues quickly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stronger Developer Retention:&lt;/strong&gt; Good DX reduces burnout and increases job satisfaction.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;“Developer Experience turns processes into productivity and friction into flow.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Common Challenges That Hurt Developer Experience
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Overly complex CI/CD pipelines&lt;/li&gt;
&lt;li&gt;Poor or outdated documentation&lt;/li&gt;
&lt;li&gt;Inconsistent environments or configuration drift&lt;/li&gt;
&lt;li&gt;Slow builds, tests, or deployments&lt;/li&gt;
&lt;li&gt;Multiple disconnected tools&lt;/li&gt;
&lt;li&gt;Lack of automation&lt;/li&gt;
&lt;li&gt;No feedback or monitoring tools for developers&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These issues don’t just slow teams down; they directly affect morale and product quality.&lt;/p&gt;

&lt;h2&gt;
  
  
  Strategies to Improve DX in DevOps
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Standardize Toolchains:&lt;/strong&gt; Use shared pipelines, templates, and tools across teams.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shift Automation Left:&lt;/strong&gt; Automate tests, security scans, code reviews, and quality checks early.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integrate Observability for Developers:&lt;/strong&gt; Give developers dashboards, logs, metrics, and alerts they understand.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplify CI/CD Pipelines:&lt;/strong&gt; Reduce unnecessary steps and optimize for speed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provide Self-Service Platforms:&lt;/strong&gt; Let developers request infrastructure or run deployments without waiting for ops.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reduce Manual Work:&lt;/strong&gt; Automate everything repetitive: builds, tests, tagging, deployments, and more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create a Culture of Continuous Improvement:&lt;/strong&gt; Regularly gather developer feedback and implement changes.&lt;/li&gt;
&lt;/ol&gt;
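&lt;p&gt;The “shift automation left” strategy above can be sketched as a small GitHub Actions workflow that runs checks on every push (the npm scripts are illustrative and assume a Node.js project):&lt;/p&gt;

```yaml
# Hypothetical .github/workflows/ci.yml; script names are placeholders.
name: ci
on: [push]

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npm run lint                    # static checks on every push
      - run: npm test                        # unit tests before any merge
      - run: npm audit --audit-level=high    # dependency security scan
```

&lt;p&gt;Because the checks run automatically on every push, developers get feedback in minutes instead of discovering problems at release time.&lt;/p&gt;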

&lt;h2&gt;
  
  
  Tooling &amp;amp; Automation That Enhance DX
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Actions / GitLab CI:&lt;/strong&gt; Frictionless automation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backstage:&lt;/strong&gt; Developer portals for unified experiences&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform / Pulumi:&lt;/strong&gt; IaC that makes infrastructure predictable&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;k9s / Lens:&lt;/strong&gt; Developer-friendly Kubernetes tools&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trunk.io / SonarQube:&lt;/strong&gt; Automated code quality&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slack / Teams integrations:&lt;/strong&gt; Real-time pipeline and deployment updates&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internal Developer Platforms (IDPs):&lt;/strong&gt; One-stop hubs for tooling and workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When tools are designed with DX in mind, developers naturally become more efficient.&lt;/p&gt;

&lt;h2&gt;
  
  
  Interesting Facts &amp;amp; Statistics
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Organizations that invest in Developer Experience see up to 40% faster lead times. Source: &lt;a href="https://platformengineering.org/blog/how-to-measure-developer-productivity-and-platform-roi-a-complete-framework-for-platform-engineers" rel="noopener noreferrer"&gt;Developer Experience&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Poor DX contributes to over 60% of developer burnout, according to industry surveys. Source: &lt;a href="https://www.usehaystack.io/blog/83-of-developers-suffer-from-burnout-haystack-analytics-study-finds" rel="noopener noreferrer"&gt;DX contributes&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Teams with high DX ratings experience 3x fewer production incidents. Source: &lt;a href="https://www.gartner.com/en/software-engineering/topics/developer-experience" rel="noopener noreferrer"&gt;DX rating experience&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;More than 70% of developers claim that slow CI/CD pipelines negatively impact their productivity. Source: &lt;a href="https://www.atlassian.com/software/compass/resources/state-of-developer-2024" rel="noopener noreferrer"&gt;Developers claim&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;“When developers struggle with their environment, innovation slows. When the experience improves, everything else accelerates.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q1: Is DX only about tools?&lt;/strong&gt;&lt;br&gt;
No. Tools are important, but DX also includes culture, documentation, onboarding, and processes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q2: How do you measure Developer Experience?&lt;/strong&gt;&lt;br&gt;
Common metrics include build times, deployment frequency, onboarding duration, and developer satisfaction surveys.&lt;/p&gt;
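&lt;p&gt;As a toy sketch of one such metric (the timestamp format and weekly window are assumptions, not a standard), deployment frequency can be derived from release timestamps:&lt;/p&gt;

```python
from datetime import datetime

def deployment_frequency(timestamps):
    """Average deployments per week over the observed period (illustrative helper)."""
    if len(timestamps) == 0:
        return 0.0
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    span_days = (times[-1] - times[0]).days
    weeks = max(span_days / 7, 1)  # treat short windows as one week
    return len(times) / weeks

# Three weekly deployments over a two-week span average 1.5 per week.
print(deployment_frequency(["2025-01-01", "2025-01-08", "2025-01-15"]))
```

&lt;p&gt;Tracking a number like this over time, alongside build duration and survey scores, makes DX improvements measurable rather than anecdotal.&lt;/p&gt;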

&lt;p&gt;&lt;strong&gt;Q3: What’s the difference between UX and DX?&lt;/strong&gt;&lt;br&gt;
UX focuses on end-users; DX focuses on developers interacting with systems and tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q4: Does improving DX cost a lot?&lt;/strong&gt;&lt;br&gt;
Not necessarily. Many improvements involve optimizing workflows, not buying new tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q5: Who is responsible for DX?&lt;/strong&gt;&lt;br&gt;
DevOps teams, platform engineers, and engineering leaders collaboratively shape DX.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Developer Experience is essential for high-performing DevOps teams.&lt;/li&gt;
&lt;li&gt;Good DX reduces friction, accelerates releases, and improves quality.&lt;/li&gt;
&lt;li&gt;Automation, documentation, and streamlined tooling are core DX pillars.&lt;/li&gt;
&lt;li&gt;Investing in DX leads to happier developers and more successful products.&lt;/li&gt;
&lt;li&gt;DX isn’t optional in 2025; it’s a competitive advantage.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Developer Experience is no longer a “nice to have”; it’s at the heart of successful DevOps culture. When developers have intuitive tools, fast feedback, reliable systems, and streamlined workflows, innovation becomes effortless. Organizations that prioritize DX not only improve productivity but also attract and retain top talent.&lt;br&gt;
In a world where speed, quality, and security matter more than ever, great Developer Experience is the foundation of great DevOps.&lt;/p&gt;

&lt;p&gt;About the Author: &lt;em&gt;Rajan is a DevOps Engineer at &lt;a href="https://www.addwebsolution.com/" rel="noopener noreferrer"&gt;AddWebSolution&lt;/a&gt;, specializing in infrastructure automation, CI/CD pipeline optimization, and seamless deployments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devex</category>
      <category>devops</category>
      <category>cicd</category>
      <category>devopsbestpractices</category>
    </item>
    <item>
      <title>Serverless vs Containers: The Real Battle for Modern Deployments</title>
      <dc:creator>Rajan Vavadia</dc:creator>
      <pubDate>Fri, 24 Oct 2025 07:39:20 +0000</pubDate>
      <link>https://dev.to/addwebsolutionpvtltd/serverless-vs-containers-the-real-battle-for-modern-deployments-25nd</link>
      <guid>https://dev.to/addwebsolutionpvtltd/serverless-vs-containers-the-real-battle-for-modern-deployments-25nd</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“Containers are the new virtual machines, but with better portability and efficiency.” – Kelsey Hightower&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Understanding Serverless&lt;/li&gt;
&lt;li&gt;Understanding Containers&lt;/li&gt;
&lt;li&gt;Key Differences: Serverless vs Containers&lt;/li&gt;
&lt;li&gt;Advantages and Limitations of Each&lt;/li&gt;
&lt;li&gt;When to Choose Serverless&lt;/li&gt;
&lt;li&gt;When to Choose Containers&lt;/li&gt;
&lt;li&gt;Interesting Facts &amp;amp; Statistics&lt;/li&gt;
&lt;li&gt;FAQs&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;li&gt;Cost, Scaling, and Operational Considerations&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Introduction
&lt;/h2&gt;

&lt;p&gt;Modern software development is evolving rapidly. With microservices, cloud-native apps, and event-driven architectures, organizations are seeking deployment solutions that are scalable, flexible, and cost-efficient.&lt;br&gt;
Two paradigms dominate the conversation today: serverless computing and containers. Both promise agility, but they differ fundamentally in philosophy, architecture, and operational approach. Understanding these differences is key to making the right technology decisions for your organization.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Understanding Serverless
&lt;/h2&gt;

&lt;p&gt;Serverless computing, often called Function-as-a-Service (FaaS), allows developers to run discrete functions without managing servers. The cloud provider handles provisioning, scaling, and server maintenance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Characteristics of Serverless:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Event-driven and stateless by default.&lt;/li&gt;
&lt;li&gt;Automatic scaling based on demand.&lt;/li&gt;
&lt;li&gt;Pay-per-use pricing model.&lt;/li&gt;
&lt;li&gt;Ideal for APIs, background tasks, and unpredictable workloads.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Examples:&lt;/strong&gt; AWS Lambda, Azure Functions, Google Cloud Functions.&lt;/p&gt;
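&lt;p&gt;The FaaS model above can be illustrated with a minimal AWS Lambda-style handler in Python (the event shape here is hypothetical; real events depend on the trigger):&lt;/p&gt;

```python
import json

def handler(event, context):
    # The platform invokes this function per event; no server management needed.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

&lt;p&gt;Locally, the same function can be exercised by calling it with a plain dict; in the cloud, provisioning, scaling, and invocation are handled entirely by the provider.&lt;/p&gt;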

&lt;h2&gt;
  
  
  3. Understanding Containers
&lt;/h2&gt;

&lt;p&gt;Containers package an application along with its dependencies, ensuring consistent behavior across environments. They provide portability and control over runtime, and are typically orchestrated using tools like Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Characteristics of Containers:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lightweight, isolated environments.&lt;/li&gt;
&lt;li&gt;Consistent across development, testing, and production.&lt;/li&gt;
&lt;li&gt;Suitable for long-running services and microservices.&lt;/li&gt;
&lt;li&gt;Require orchestration for large-scale deployments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Examples:&lt;/strong&gt; Docker, Kubernetes, OpenShift.&lt;/p&gt;
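&lt;p&gt;Packaging an app together with its dependencies can be sketched with a minimal, hypothetical Dockerfile for a Node.js service (base tag, port, and entry file are placeholders):&lt;/p&gt;

```dockerfile
# Illustrative image definition; adjust names and versions for your project.
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

&lt;p&gt;Built once with &lt;code&gt;docker build&lt;/code&gt;, the resulting image runs identically across development, testing, and production hosts, which is exactly the consistency containers promise.&lt;/p&gt;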

&lt;blockquote&gt;
&lt;p&gt;“Serverless is the next step in the evolution of cloud computing, enabling more agile and cost-effective application development.” – Adrian Cockcroft&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  4. Key Differences: Serverless vs Containers
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fei1sbtxl0f7q0bei8xek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fei1sbtxl0f7q0bei8xek.png" alt=" " width="631" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Advantages and Limitations of Each
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Serverless Advantages:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simplified operations, minimal infrastructure management.&lt;/li&gt;
&lt;li&gt;Cost-efficient for unpredictable workloads.&lt;/li&gt;
&lt;li&gt;Rapid deployment and scaling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Serverless Limitations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cold start latency.&lt;/li&gt;
&lt;li&gt;Execution time limits.&lt;/li&gt;
&lt;li&gt;Higher vendor lock-in.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Containers Advantages:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Full control over environment and dependencies.&lt;/li&gt;
&lt;li&gt;Portability across cloud providers.&lt;/li&gt;
&lt;li&gt;Excellent for complex, long-running applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Containers Limitations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Requires DevOps expertise.&lt;/li&gt;
&lt;li&gt;More operational overhead than serverless.&lt;/li&gt;
&lt;li&gt;Scaling requires orchestration and monitoring tools.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  6. When to Choose Serverless
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Choose serverless when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your workload is event-driven (e.g., API requests, message processing).&lt;/li&gt;
&lt;li&gt;Traffic is unpredictable or highly variable.&lt;/li&gt;
&lt;li&gt;Rapid iteration is needed, and operational simplicity is a priority.&lt;/li&gt;
&lt;li&gt;Short-lived tasks are dominant (e.g., notifications, image processing).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  7. When to Choose Containers
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Choose containers when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need control over runtime, dependencies, or networking.&lt;/li&gt;
&lt;li&gt;Applications are long-running or stateful.&lt;/li&gt;
&lt;li&gt;You want portability between clouds or on-premise environments.&lt;/li&gt;
&lt;li&gt;Managing microservices at scale is a priority.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  8. Interesting Facts &amp;amp; Statistics
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Serverless adoption: roughly 40% of companies run production workloads on serverless platforms. Source: &lt;a href="https://www.precedenceresearch.com/serverless-computing-market" rel="noopener noreferrer"&gt;40% of companies&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Container usage: over 90% of organizations use containers for microservices. Source: &lt;a href="https://thenewstack.io/why-90-of-microservices-still-ship-like-monoliths" rel="noopener noreferrer"&gt;90% of organizations&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Cost savings: serverless can reduce compute costs by up to 50% for intermittent workloads. Source: &lt;a href="https://www.databricks.com/blog/cost-savings-serverless-compute-notebooks-jobs-and-pipelines" rel="noopener noreferrer"&gt;Serverless&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Scaling speed: serverless functions scale near-instantly, while containers scale within seconds. Source: &lt;a href="https://www.cloudzero.com/blog/serverless-vs-containers" rel="noopener noreferrer"&gt;Serverless and Containers&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;“Containers provide a consistent environment from development to production, making it easier to manage applications at scale.” – Joe Beda&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  9. FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q1: Can serverless and containers be used together?&lt;/strong&gt;&lt;br&gt;
 Yes, hybrid architectures often combine containers for core services and serverless for event-driven tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q2: Which is more cost-effective?&lt;/strong&gt;&lt;br&gt;
 Serverless is cost-efficient for sporadic workloads; containers may be cheaper for predictable, long-running applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q3: Do containers require DevOps expertise?&lt;/strong&gt;&lt;br&gt;
 Yes, effective container deployment typically requires orchestration knowledge and monitoring practices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q4: What about vendor lock-in?&lt;/strong&gt;&lt;br&gt;
 Serverless has higher risk due to cloud-specific APIs. Containers offer greater portability across clouds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q5: Are serverless functions stateless?&lt;/strong&gt;&lt;br&gt;
 Yes, serverless functions are stateless by design, though state can be stored externally (e.g., databases, object storage).&lt;/p&gt;

&lt;h2&gt;
  
  
  10. Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Serverless: Best for event-driven, short-lived tasks, and variable workloads.&lt;/li&gt;
&lt;li&gt;Containers: Best for complex, long-running applications requiring control and portability.&lt;/li&gt;
&lt;li&gt;Hybrid deployments: Combine the strengths of both paradigms for flexibility, cost efficiency, and scalability.&lt;/li&gt;
&lt;li&gt;Operational planning: Understanding scaling, cost, and orchestration needs is critical for both approaches.&lt;/li&gt;
&lt;li&gt;Future trend: Modern enterprises are increasingly leveraging hybrid models to optimize cloud workloads.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  11. Cost, Scaling, and Operational Considerations
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost:&lt;/strong&gt; Serverless pay-per-use pricing suits intermittent workloads; containers can be cheaper for steady, long-running services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scaling:&lt;/strong&gt; Serverless scales automatically and near-instantly (with cold-start latency); containers scale within seconds but require orchestration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operations:&lt;/strong&gt; Serverless minimizes infrastructure management; containers demand DevOps expertise, monitoring, and orchestration tooling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lock-in and portability:&lt;/strong&gt; Serverless carries higher vendor lock-in risk; containers remain portable across clouds and on-premise environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  12. Conclusion
&lt;/h2&gt;

&lt;p&gt;There is no universal winner in the battle of serverless vs containers. The choice depends on workload patterns, operational requirements, and organizational expertise.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Serverless:&lt;/strong&gt; Best for rapid development, event-driven tasks, and variable traffic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Containers:&lt;/strong&gt; Best for complex applications requiring control, portability, and long-running processes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hybrid deployments:&lt;/strong&gt; Often provide the best of both worlds, combining scalability, cost efficiency, and flexibility.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ultimately, understanding the strengths and limitations of both allows organizations to build modern, resilient, and scalable applications that align with business goals.&lt;/p&gt;

&lt;p&gt;About the Author: &lt;em&gt;Rajan is a DevOps Engineer at &lt;a href="https://www.addwebsolution.com/" rel="noopener noreferrer"&gt;AddWebSolution&lt;/a&gt;, specializing in infrastructure automation, CI/CD pipeline optimization, and seamless deployments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cloudcomputing</category>
      <category>devops</category>
      <category>cloudarchitecture</category>
      <category>serverless</category>
    </item>
    <item>
      <title>What's the Importance of Collaboration in DevOps</title>
      <dc:creator>Rajan Vavadia</dc:creator>
      <pubDate>Wed, 17 Sep 2025 06:37:54 +0000</pubDate>
      <link>https://dev.to/addwebsolutionpvtltd/whats-the-importance-of-collaboration-in-devops-28a7</link>
      <guid>https://dev.to/addwebsolutionpvtltd/whats-the-importance-of-collaboration-in-devops-28a7</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“Culture eats strategy for breakfast. In DevOps, collaboration is the culture that drives everything else.” – Peter Drucker&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Why Collaboration is Central to DevOps&lt;/li&gt;
&lt;li&gt;Key Benefits of Collaboration in DevOps&lt;/li&gt;
&lt;li&gt;Challenges to Effective Collaboration&lt;/li&gt;
&lt;li&gt;Strategies to Improve Collaboration&lt;/li&gt;
&lt;li&gt;Real-World Impact&lt;/li&gt;
&lt;li&gt;Interesting Facts &amp;amp; Statistics&lt;/li&gt;
&lt;li&gt;FAQs&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;DevOps is more than just a set of practices or tools; it is a cultural shift that combines development and operations into a unified approach. At the core of this culture lies collaboration, which ensures that teams work together to deliver high-quality software faster and more reliably.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Collaboration is Central to DevOps
&lt;/h2&gt;

&lt;p&gt;Traditional IT environments kept developers and operations in silos, leading to delays, conflicts, and inefficiency. Collaboration in DevOps bridges this gap by creating a culture of shared responsibility, continuous communication, and joint problem-solving. Without collaboration, DevOps cannot achieve its true purpose.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Benefits of Collaboration in DevOps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Breaking Down Silos:&lt;/strong&gt; Encourages transparency and teamwork across departments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Faster Delivery:&lt;/strong&gt; Enables shorter development cycles and quicker releases&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved Quality:&lt;/strong&gt; Continuous integration and monitoring reduce bugs and downtime&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shared Accountability:&lt;/strong&gt; Teams own both success and failure together&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Increased Innovation:&lt;/strong&gt; Cross-functional collaboration leads to creative solutions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better Problem-Solving:&lt;/strong&gt; Issues are resolved faster when multiple perspectives are combined.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;“DevOps is not a goal, but a never-ending process of continual improvement.” – Jez Humble&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Challenges to Effective Collaboration
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cultural Resistance:&lt;/strong&gt; Teams may be hesitant to adopt new practices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Communication Barriers:&lt;/strong&gt; Lack of clear communication can slow progress.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Blame Culture:&lt;/strong&gt; Pointing fingers instead of solving problems weakens collaboration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool Overload:&lt;/strong&gt; Relying only on tools without cultural alignment limits effectiveness.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Strategies to Improve Collaboration
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Promote open communication and daily stand-ups.&lt;/li&gt;
&lt;li&gt;Foster a culture of trust and shared responsibility.&lt;/li&gt;
&lt;li&gt;Provide cross-training opportunities between development and operations.&lt;/li&gt;
&lt;li&gt;Implement effective collaboration tools (Slack, Jira, Git, CI/CD pipelines).&lt;/li&gt;
&lt;li&gt;Encourage continuous feedback and learning.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Real-World Impact
&lt;/h2&gt;

&lt;p&gt;Studies show that organizations with strong DevOps collaboration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy software 46 times more frequently.&lt;/li&gt;
&lt;li&gt;Recover from failures 96 times faster.&lt;/li&gt;
&lt;li&gt;Achieve lead times for changes that are hundreds of times shorter.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These statistics highlight that collaboration is not just a cultural preference; it’s a business necessity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Interesting Facts &amp;amp; Statistics
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;DevOps adoption has grown rapidly, with over 83% of IT leaders reporting its implementation in their organizations. Source: &lt;a href="https://tsttechnology.io/blog/devops-statistics" rel="noopener noreferrer"&gt;83% of IT leaders&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;High-performing DevOps teams spend 50% less time fixing security issues due to better collaboration and integration.&lt;/li&gt;
&lt;li&gt;Effective collaboration in DevOps reduces unplanned work by 22%, freeing time for innovation. Source: &lt;a href="https://www.biztechcs.com/blog/top-7-business-benefits-of-devops/" rel="noopener noreferrer"&gt;Unplanned work by 22%&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;“The most important part of DevOps is communication, not tools.” – Patrick Debois&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q1: Why is collaboration important in DevOps?&lt;/strong&gt;&lt;br&gt;
 Because it ensures faster delivery, better quality, and stronger innovation through teamwork and shared responsibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q2: Can tools replace collaboration in DevOps?&lt;/strong&gt;&lt;br&gt;
 No. Tools support collaboration, but cultural alignment and teamwork are essential.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q3: How can teams start improving collaboration?&lt;/strong&gt;&lt;br&gt;
 By promoting transparency, breaking silos, and creating feedback loops.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Collaboration is the backbone of DevOps success.&lt;/li&gt;
&lt;li&gt;It enables faster delivery, higher quality, and continuous improvement.&lt;/li&gt;
&lt;li&gt;Cultural change is just as important as adopting new tools.&lt;/li&gt;
&lt;li&gt;Real-world results prove that collaboration drives efficiency and business growth.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Collaboration is the backbone of DevOps. It enables speed, quality, and innovation while building a culture of shared success. Without collaboration, DevOps becomes just another set of tools. With it, organizations can achieve true digital transformation.&lt;/p&gt;

&lt;p&gt;About the Author: &lt;em&gt;Rajan is a DevOps Engineer at &lt;a href="https://www.addwebsolution.com/devops-consulting" rel="noopener noreferrer"&gt;AddWebSolution&lt;/a&gt;, specializing in infrastructure automation, CI/CD pipeline optimization, and seamless deployments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>continuousdelivery</category>
      <category>agile</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>How to Use Feature Flags to Deploy Safely in Production</title>
      <dc:creator>Rajan Vavadia</dc:creator>
      <pubDate>Mon, 01 Sep 2025 06:09:35 +0000</pubDate>
      <link>https://dev.to/addwebsolutionpvtltd/how-to-use-feature-flags-to-deploy-safely-in-production-2he1</link>
      <guid>https://dev.to/addwebsolutionpvtltd/how-to-use-feature-flags-to-deploy-safely-in-production-2he1</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"Feature flags decouple deployment from release giving teams the power to ship code without shipping risk."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;The Problem: Risky Production Deployments&lt;/li&gt;
&lt;li&gt;How Feature Flags Make Deployments Safer&lt;/li&gt;
&lt;li&gt;Feature Flags and Continuous Delivery&lt;/li&gt;
&lt;li&gt;Interesting Stats&lt;/li&gt;
&lt;li&gt;Real-World Impacts&lt;/li&gt;
&lt;li&gt;FAQs&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Introduction
&lt;/h2&gt;

&lt;p&gt;In today’s fast-paced software landscape, organizations must balance two critical demands: the need to innovate quickly and the need to maintain stability in production environments. Traditional release strategies often force teams to choose between speed and safety.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feature flags&lt;/strong&gt; (also known as feature toggles) offer a solution to this dilemma. They allow teams to ship code to production without exposing new features to users immediately, making it possible to test in production, perform gradual rollouts, and quickly mitigate risk. With feature flags, teams decouple code deployment from feature release, enabling safer, faster, and more controlled delivery processes.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. The Problem: Risky Production Deployments
&lt;/h2&gt;

&lt;p&gt;Historically, production deployments have been risky, high-stress events. Issues include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;All-or-nothing releases:&lt;/strong&gt; New code is pushed live for all users at once, increasing the blast radius of bugs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rollback complexity:&lt;/strong&gt; Fixing bad releases requires reverting code or hotfixing, which is time-consuming and error-prone.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment inconsistencies:&lt;/strong&gt; Staging rarely matches production perfectly; bugs can slip through unnoticed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manual coordination:&lt;/strong&gt; Releases often require tight coordination between development, QA, and operations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User disruption:&lt;/strong&gt; Customers may experience downtime or broken functionality if something goes wrong.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These challenges slow down innovation and put unnecessary pressure on development and ops teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. How Feature Flags Make Deployments Safer
&lt;/h2&gt;

&lt;p&gt;Feature flags mitigate the above issues by allowing you to control feature exposure independently of code deployment. Here’s how:&lt;br&gt;
&lt;strong&gt;3.1 Progressive Rollouts&lt;/strong&gt;&lt;br&gt;
Instead of launching a feature to all users at once, you can release it to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A small percentage of users&lt;/li&gt;
&lt;li&gt;Internal staff or QA teams&lt;/li&gt;
&lt;li&gt;Specific user segments based on location, plan, or behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This allows you to monitor impact in real time and stop rollout if issues are detected.&lt;br&gt;
&lt;strong&gt;3.2 Instant Rollback&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If something goes wrong, disabling the flag immediately removes the feature from production without redeploying. This minimizes user impact and buys your team time to investigate.&lt;/p&gt;
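&lt;p&gt;The progressive rollout and instant rollback described above can be sketched in a few lines of Python. This is a minimal, hypothetical in-memory sketch: the &lt;em&gt;FLAGS&lt;/em&gt; store and function names are illustrative only, and real systems delegate this to a flag service such as LaunchDarkly or Unleash.&lt;/p&gt;

```python
import hashlib

# Hypothetical in-memory flag store; real systems use a flag service
# (LaunchDarkly, Unleash, Flagsmith) or a database-backed config.
FLAGS = {"new_checkout": {"enabled": True, "rollout_percentage": 5}}

def bucket_for(user_id: str) -> int:
    """Hash the user ID into a stable bucket from 0 to 99 so rollouts are sticky."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(flag_name: str, user_id: str) -> bool:
    """Check the kill switch first, then the progressive-rollout percentage."""
    flag = FLAGS.get(flag_name)
    if flag is None or not flag["enabled"]:
        return False  # flag missing or switched off: the instant-rollback path
    return flag["rollout_percentage"] > bucket_for(user_id)
```

&lt;p&gt;Hashing the user ID keeps the rollout sticky: the same user stays in (or out of) the 5% cohort as you raise the percentage, and flipping &lt;em&gt;enabled&lt;/em&gt; to false removes the feature for everyone without a redeploy.&lt;/p&gt;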

&lt;p&gt;&lt;strong&gt;3.3 A/B Testing and Experimentation&lt;/strong&gt;&lt;br&gt;
Flags enable A/B or multivariate testing in production. You can measure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Conversion rates&lt;/li&gt;
&lt;li&gt;User engagement&lt;/li&gt;
&lt;li&gt;System performance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Based on real user data, you can then choose the most effective feature variant.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.4 Environment-Specific Flags&lt;/strong&gt;&lt;br&gt;
Flags allow enabling or disabling features across different environments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enabled in staging and QA for testing&lt;/li&gt;
&lt;li&gt;Disabled in production until ready&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3.5 Targeted Releases&lt;/strong&gt;&lt;br&gt;
Release features only to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enterprise clients&lt;/li&gt;
&lt;li&gt;Beta testers&lt;/li&gt;
&lt;li&gt;Specific geographic regions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This gives business and product teams flexibility in go-to-market strategies.&lt;/p&gt;
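&lt;p&gt;A targeted release check can be sketched the same way. The rule names and user fields below are hypothetical; commercial flag tools let you express these as segment conditions in a dashboard instead of code.&lt;/p&gt;

```python
# Hypothetical targeting rules; flag tools (LaunchDarkly, Unleash, Flagsmith)
# manage these as user segments rather than hard-coded sets.
TARGET_RULES = {
    "new_checkout": {
        "plans": {"enterprise"},
        "regions": {"us", "ca"},
        "beta_testers": {"user_42"},
    }
}

def is_targeted(flag_name: str, user: dict) -> bool:
    """Return True if the user matches any targeting rule for this flag."""
    rules = TARGET_RULES.get(flag_name)
    if rules is None:
        return False
    return (
        user.get("plan") in rules["plans"]
        or user.get("region") in rules["regions"]
        or user.get("id") in rules["beta_testers"]
    )
```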

&lt;p&gt;&lt;strong&gt;3.6 Safer Testing in Production&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Since the code is already in production behind a flag, it’s easier to test features in a real-world environment without endangering all users.&lt;br&gt;
&lt;strong&gt;3.7 Increased Developer Confidence&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Developers can deploy without fear of breaking things because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Features are off by default&lt;/li&gt;
&lt;li&gt;Rollbacks are simple&lt;/li&gt;
&lt;li&gt;Testing can happen in production safely&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;"With feature flags, turning off a broken feature is as easy as flipping a switch no redeploy required."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  4. Feature Flags and Continuous Delivery
&lt;/h2&gt;

&lt;p&gt;Feature flags are a cornerstone of Continuous Delivery (CD). Together, they allow for faster, more reliable software releases.&lt;br&gt;
&lt;strong&gt;4.1 Decoupled Deploy and Release&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code can be deployed to production at any time&lt;/li&gt;
&lt;li&gt;Business can control when users actually see the feature&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4.2 Trunk-Based Development&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developers work on a single main branch&lt;/li&gt;
&lt;li&gt;Features are gated by flags, preventing incomplete code from impacting users&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4.3 Shorter Feedback Loops&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Feature behavior can be monitored and iterated on in production&lt;/li&gt;
&lt;li&gt;Real user data improves decision-making&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4.4 Safer Refactoring&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rewrites or large changes can be flagged and rolled out gradually&lt;/li&gt;
&lt;li&gt;This reduces risk from technical debt cleanups&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4.5 Reduced Lead Time for Changes&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code moves from development to production faster&lt;/li&gt;
&lt;li&gt;Flags allow safe validation and control over exposure&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5. Interesting Stats
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;84% of elite teams use feature flags for controlled rollouts
Source:  &lt;a href="https://fullscale.io/blog/feature-flags-implementation-guide" rel="noopener noreferrer"&gt;feature flags&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Teams using flags deploy 10x more often than those that don’t 
Source: &lt;a href="https://www.featbit.co/articles2025/feature-flag-based-development-2025" rel="noopener noreferrer"&gt;flags based deployment&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Feature flags reduce production incidents by 63% (DORA Report)
Source: &lt;a href="https://incident.io/hubs/dora/dora-metrics-change-failure-rate" rel="noopener noreferrer"&gt;Feature flags reduce&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  6. Real-World Impacts
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scenario Without Feature Flags:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You deploy a new checkout flow to all users.&lt;/li&gt;
&lt;li&gt;Unexpected issues cause failures.&lt;/li&gt;
&lt;li&gt;You scramble to roll back code or patch bugs under pressure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Scenario With Feature Flags:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The new checkout flow is behind a flag.&lt;/li&gt;
&lt;li&gt;You enable it for 5% of traffic.&lt;/li&gt;
&lt;li&gt;Error rates increase slightly, so you turn off the flag.&lt;/li&gt;
&lt;li&gt;The issue is contained, users aren’t affected, and the team debugs in peace.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example: Spotify&lt;/strong&gt;&lt;br&gt;
Spotify uses feature flags for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Continuous experimentation&lt;/li&gt;
&lt;li&gt;Testing new UI components&lt;/li&gt;
&lt;li&gt;Rolling out features by region or user segment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This lets them innovate fast while protecting the user experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example: Facebook&lt;/strong&gt;&lt;br&gt;
Facebook uses flags extensively for A/B testing and gradual rollouts, allowing them to validate changes in production at scale.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The safest place to test software is production if you're using feature flags wisely."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  7. FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q: Are feature flags only for large teams or enterprises?&lt;/strong&gt;&lt;br&gt;
 A: No. Startups benefit even more, since small teams can least afford risky deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Do feature flags create technical debt?&lt;/strong&gt;&lt;br&gt;
 A: Yes, if not managed. Regularly audit and remove stale flags. Many tools support flag lifecycle management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: How do feature flags affect performance?&lt;/strong&gt;&lt;br&gt;
 A: With proper implementation, impact is minimal. Use optimized SDKs or caching.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: What tools are available for managing feature flags?&lt;/strong&gt;&lt;br&gt;
 A: LaunchDarkly, Unleash, Split.io, Flagsmith, and homegrown systems using config files or databases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: How are feature flags different from config toggles?&lt;/strong&gt;&lt;br&gt;
 A: Feature flags are dynamic, environment-specific, and often user-targeted. Config toggles are static and generally global.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Feature flags decouple deployment from release&lt;/li&gt;
&lt;li&gt;They enable safer, controlled, and reversible changes&lt;/li&gt;
&lt;li&gt;Progressive rollout and instant rollback reduce risk&lt;/li&gt;
&lt;li&gt;Essential for Continuous Delivery and trunk-based development&lt;/li&gt;
&lt;li&gt;Must be managed to avoid flag debt&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  9. Conclusion
&lt;/h2&gt;

&lt;p&gt;Feature flags are not just a deployment strategy—they’re a risk mitigation tool, a business enabler, and a developer productivity booster. By decoupling code delivery from feature exposure, they give teams full control over when and how new functionality is introduced.&lt;br&gt;
When used effectively, feature flags empower teams to move fast, test in production safely, and recover from issues instantly. Whether you're a startup looking to reduce deployment anxiety or an enterprise scaling experimentation, feature flags are a foundational practice for modern, safe, and efficient software delivery.&lt;/p&gt;

&lt;p&gt;About the Author: &lt;em&gt;Rajan is a DevOps Engineer at &lt;a href="https://www.addwebsolution.com/" rel="noopener noreferrer"&gt;AddWebSolution&lt;/a&gt;, specializing in infrastructure automation, optimizing CI/CD pipelines, and ensuring seamless deployments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>featureflags</category>
      <category>safedeployments</category>
      <category>continuousdelivery</category>
      <category>devops</category>
    </item>
    <item>
      <title>Monitoring Your App with Prometheus and Grafana</title>
      <dc:creator>Rajan Vavadia</dc:creator>
      <pubDate>Fri, 08 Aug 2025 06:53:05 +0000</pubDate>
      <link>https://dev.to/addwebsolutionpvtltd/monitoring-your-app-with-prometheus-and-grafana-3p97</link>
      <guid>https://dev.to/addwebsolutionpvtltd/monitoring-your-app-with-prometheus-and-grafana-3p97</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"If you can't measure it, you can't improve it." Peter Drucker&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Why Monitoring Matters&lt;/li&gt;
&lt;li&gt;Overview of Prometheus and Grafana&lt;/li&gt;
&lt;li&gt;Step-by-Step Setup Guide&lt;/li&gt;
&lt;li&gt;Interesting Facts &amp;amp; Statistics&lt;/li&gt;
&lt;li&gt;FAQs&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Introduction
&lt;/h2&gt;

&lt;p&gt;In today’s fast-paced development environment, real-time application monitoring isn’t optional, it's essential. Prometheus and Grafana are two powerful open-source tools that together provide deep insights into your system’s health and performance.&lt;br&gt;
This blog walks you through how to monitor your application effectively using Prometheus for metrics collection and Grafana for visualization and alerting.&lt;/p&gt;
&lt;h2&gt;
  
  
  2. Why Monitoring Matters
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Early Issue Detection:&lt;/strong&gt; Identify anomalies before they become outages.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance Tuning:&lt;/strong&gt; Understand resource consumption and bottlenecks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SLAs and Uptime:&lt;/strong&gt; Meet service level commitments with actionable insights.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DevOps Best Practice:&lt;/strong&gt; Enables observability in CI/CD workflows.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  3. Overview of Prometheus and Grafana
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Prometheus&lt;/strong&gt;&lt;br&gt;
An open-source systems monitoring and alerting toolkit, ideal for recording metrics in a time-series database.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pull-based data scraping (uses exporters)&lt;/li&gt;
&lt;li&gt;Powerful query language (PromQL)&lt;/li&gt;
&lt;li&gt;Alert manager support&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Grafana&lt;/strong&gt;&lt;br&gt;
A data visualization and monitoring platform that turns raw metrics into insightful dashboards.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Customizable dashboards&lt;/li&gt;
&lt;li&gt;Alerts and notifications&lt;/li&gt;
&lt;li&gt;Rich plugin ecosystem&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;"Monitoring is a key to building reliable systems." Charity Majors&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  4. Step-by-Step Setup Guide
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker or Linux system (Ubuntu/CentOS)&lt;/li&gt;
&lt;li&gt;Basic networking knowledge&lt;/li&gt;
&lt;li&gt;Application with exposed metrics (Node Exporter, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;1. Install Prometheus&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With Docker:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d \
  --name=prometheus \
  -p 9090:9090 \
  -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;prometheus.yml sample:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Install Node Exporter (Optional for server metrics)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d \
  -p 9100:9100 \
  --name=node-exporter \
  prom/node-exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
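&lt;p&gt;Node Exporter covers host-level metrics. For your own application metrics, the app exposes a /metrics endpoint in the Prometheus text exposition format, which Prometheus then scrapes. The sketch below uses only the Python standard library to illustrate the format; in practice you would use the official prometheus_client library, and the metric name and port here are illustrative.&lt;/p&gt;

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUEST_COUNT = 0  # your application increments this as it serves traffic

def render_metrics() -> bytes:
    """Render one counter in the Prometheus text exposition format."""
    lines = [
        "# HELP app_requests_total Total requests handled by the app.",
        "# TYPE app_requests_total counter",
        "app_requests_total {}".format(REQUEST_COUNT),
    ]
    return ("\n".join(lines) + "\n").encode()

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            body = render_metrics()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# To serve on port 9101, then add that port as a target in scrape_configs:
# HTTPServer(("", 9101), MetricsHandler).serve_forever()
```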



&lt;p&gt;&lt;strong&gt;3. Install Grafana&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d \
  -p 3000:3000 \
  --name=grafana \
  grafana/grafana

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Access: &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Default credentials: admin / admin&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Connect Prometheus to Grafana&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add Data Source → Select &lt;strong&gt;Prometheus&lt;/strong&gt; → Enter &lt;a href="http://prometheus:9090" rel="noopener noreferrer"&gt;http://prometheus:9090&lt;/a&gt; (the prometheus hostname resolves only if both containers share a Docker network; otherwise use your host’s IP)
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Create Dashboards&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use Grafana Templates or Create Custom Panels&lt;/li&gt;
&lt;li&gt;Import pre-built dashboards from the Grafana dashboard library&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5. Interesting Facts &amp;amp; Statistics
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;90% of system outages could be mitigated or avoided with proactive monitoring. Source: &lt;a href="https://www.tabtree.in/Blog/it-solutions/reduce-downtime-by-90-with-proactive-it-monitoring/" rel="noopener noreferrer"&gt;ProactiveMonitoring&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Prometheus is the #1 CNCF monitoring tool adopted by Kubernetes users. Source: &lt;a href="https://www.cncf.io/blog/2022/03/08/cloud-native-observability-microsurvey-prometheus-leads-the-way-but-hurdles-remain-to-understanding-the-health-of-systems/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Grafana has over 20M+ users worldwide and integrates with 60+ data sources. Source: &lt;a href="https://grafana.com/about/press/2023/06/13/grafana-ships-v10-on-10-year-anniversary-as-it-surpasses-20-million-users" rel="noopener noreferrer"&gt;Grafana 20M&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Real-time metrics can reduce Mean Time to Resolution (MTTR) by up to 60%. Source: &lt;a href="https://www.quinnox.com/case-study/qinfinite-event-intelligence-reduce-mttr-manufacturing/" rel="noopener noreferrer"&gt;Mean Time to Resolution (MTTR)&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;"Observability is how you understand your system in production." Cindy Sridharan&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  6. FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q1: Is Prometheus suitable for large-scale monitoring?&lt;/strong&gt;&lt;br&gt;
 Yes, it’s scalable via federation and sharding for enterprise-scale monitoring.&lt;br&gt;
&lt;strong&gt;Q2: What’s the difference between Prometheus and Grafana?&lt;/strong&gt;&lt;br&gt;
 Prometheus collects and stores metrics. Grafana visualizes them and enables alerting.&lt;br&gt;
&lt;strong&gt;Q3: Can Grafana use other data sources?&lt;/strong&gt;&lt;br&gt;
 Absolutely! Grafana supports Elasticsearch, InfluxDB, Loki, MySQL, and more.&lt;br&gt;
&lt;strong&gt;Q4: How do I set alerts in Grafana?&lt;/strong&gt;&lt;br&gt;
 Go to the dashboard panel → Click “Alert” → Define conditions → Add notifications (Email, Slack, etc.)&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Prometheus + Grafana = Powerful full-stack monitoring solution.&lt;/li&gt;
&lt;li&gt;Easy to set up with Docker and extensible for custom metrics.&lt;/li&gt;
&lt;li&gt;Helps reduce downtime and improve performance visibility.&lt;/li&gt;
&lt;li&gt;Crucial for DevOps, SRE, and cloud-native environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  8. Conclusion
&lt;/h2&gt;

&lt;p&gt;Modern applications require modern monitoring. With Prometheus handling metrics collection and Grafana delivering rich dashboards and alerts, you gain full observability into your application’s health. Whether you’re a solo developer or a large enterprise, this stack empowers proactive operations, performance tuning, and rapid troubleshooting. &lt;/p&gt;

&lt;p&gt;About the Author: &lt;em&gt;Rajan is a DevOps Engineer at &lt;a href="https://www.addwebsolution.com/" rel="noopener noreferrer"&gt;AddWebSolution&lt;/a&gt;, specializing in infrastructure automation, optimizing CI/CD pipelines, and ensuring seamless deployments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>prometheus</category>
      <category>grafana</category>
      <category>devops</category>
    </item>
    <item>
      <title>CI/CD Pipeline with GitHub Actions: Deploy Your App in Minutes</title>
      <dc:creator>Rajan Vavadia</dc:creator>
      <pubDate>Wed, 23 Jul 2025 06:41:31 +0000</pubDate>
      <link>https://dev.to/addwebsolutionpvtltd/cicd-pipeline-with-github-actions-deploy-your-app-in-minutes-4knn</link>
      <guid>https://dev.to/addwebsolutionpvtltd/cicd-pipeline-with-github-actions-deploy-your-app-in-minutes-4knn</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"Move fast and break nothing. CI/CD is the seatbelt that lets us accelerate safely."— Charity Majors, CTO at Honeycomb.io&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;What is CI/CD &lt;/li&gt;
&lt;li&gt;Getting Started with GitHub Actions&lt;/li&gt;
&lt;li&gt;Creating Your First GitHub Actions Workflow&lt;/li&gt;
&lt;li&gt;Auto-Deploy to Production (Example: Docker + Nginx)&lt;/li&gt;
&lt;li&gt;Key Stats &amp;amp; Interesting Facts&lt;/li&gt;
&lt;li&gt;FAQs&lt;/li&gt;
&lt;li&gt;Key Takeaways&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. Introduction
&lt;/h2&gt;

&lt;p&gt;Shipping code faster, safer, and more consistently — that’s the DevOps dream. And with GitHub Actions, it’s now easier than ever to build powerful CI/CD pipelines directly within your GitHub repository — no Jenkins, no external CI tools, just YAML and your code.&lt;br&gt;
In this blog, you'll learn how to create a CI/CD pipeline using GitHub Actions to test, build, and deploy your application with minimal configuration — in just minutes, not hours.&lt;/p&gt;
&lt;h2&gt;
  
  
  2. What is CI/CD (And Why It Matters)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CI (Continuous Integration):&lt;/strong&gt; Automatically tests and merges code changes into the main branch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CD (Continuous Deployment):&lt;/strong&gt; Automatically pushes those changes to your production or staging environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Together, they:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prevent bugs before they go live&lt;/li&gt;
&lt;li&gt;Speed up release cycles&lt;/li&gt;
&lt;li&gt;Increase team productivity&lt;/li&gt;
&lt;li&gt;Reduce human error&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of CI/CD as your invisible assistant — always building, testing, and shipping code in the background.&lt;/p&gt;
&lt;h2&gt;
  
  
  3. Getting Started with GitHub Actions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A GitHub repository&lt;/li&gt;
&lt;li&gt;A simple app (Node.js, Python, or Dockerized app)&lt;/li&gt;
&lt;li&gt;SSH/FTP access to your server, or Docker + Nginx environment&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  4. Creating Your First GitHub Actions Workflow
&lt;/h2&gt;

&lt;p&gt;Let’s create a basic CI/CD workflow to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install dependencies&lt;/li&gt;
&lt;li&gt;Run tests&lt;/li&gt;
&lt;li&gt;Build Docker image&lt;/li&gt;
&lt;li&gt;Deploy to remote server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;GitHub Actions Directory&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a .github/workflows folder in your repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; .github/workflows
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;.github/workflows/deploy.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CI/CD Pipeline&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build-and-deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout code&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v3&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up Node.js&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-node@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;node-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;18'&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install dependencies&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm install&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run tests&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm test&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build Docker image&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker build -t my-app .&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy to server via SSH&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;appleboy/ssh-action@v1.0.0&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.HOST }}&lt;/span&gt;
          &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.USERNAME }}&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SSH_KEY }}&lt;/span&gt;
          &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;docker stop my-app || true&lt;/span&gt;
            &lt;span class="s"&gt;docker rm my-app || true&lt;/span&gt;
            &lt;span class="s"&gt;docker run -d -p 80:3000 --name my-app my-app&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Add Secrets in GitHub&lt;/strong&gt;&lt;br&gt;
Go to your &lt;strong&gt;repo → Settings → Secrets and variables → Actions&lt;/strong&gt; and add:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HOST → your server IP&lt;/li&gt;
&lt;li&gt;USERNAME → SSH username&lt;/li&gt;
&lt;li&gt;SSH_KEY → your private key (paste the content)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5. Auto-Deploy to Production (Example with Docker + Nginx)
&lt;/h2&gt;

&lt;p&gt;If you have Nginx configured as a reverse proxy on your server, this action will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build and push the Docker container&lt;/li&gt;
&lt;li&gt;Restart the service via SSH&lt;/li&gt;
&lt;li&gt;Update your live site within seconds of every push&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;You can enhance it further using:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Slack notifications&lt;/li&gt;
&lt;li&gt;Docker Hub or GitHub Container Registry&lt;/li&gt;
&lt;li&gt;Rollback logic with tagged deployments&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  6. Key Stats &amp;amp; Interesting Facts
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Over 94 million GitHub Actions workflows run every month
Source: &lt;a href="https://www.sciencedirect.com/science/article/abs/pii/S0164121223002224#:~:text=Among%20its%20main%20features%2C%20it,workflows%20up%2Dto%2Ddate." rel="noopener noreferrer"&gt;GitHub Action workflows&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GitHub Actions usage grew by over 150% in 2024
Source: &lt;a href="https://www.aidoos.com/blog/GitHub-2024-Octoverse-Report-AI-Python-and-a-New-Global-Developer-Landscape/?srsltid=AfmBOopKNNL36PhpSRbUrBr1SG6zCFfxq1FuKNq9RlTzeZgUF5KrcAYi&amp;amp;utm" rel="noopener noreferrer"&gt; Actions Usage&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GitHub Actions is free for public repos, and the free plan includes 2,000 minutes/month for private repos
Source: &lt;a href="https://prismic.io/blog/gitlab-vs-github#featurerich-free-plans" rel="noopener noreferrer"&gt;GitHub offer free 2000 Minutes&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  7. FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q: Is GitHub Actions free?&lt;/strong&gt;&lt;br&gt;
A: Yes, for public repositories. Private repos have generous free tier limits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Can I deploy to multiple environments (staging, prod)?&lt;/strong&gt;&lt;br&gt;
A: Absolutely. You can use if: conditionals or separate workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Does it work with non-GitHub servers?&lt;/strong&gt;&lt;br&gt;
A: Yes. You can deploy to any server via SSH, FTP, Docker, Kubernetes, etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;GitHub Actions enables native, integrated CI/CD with minimal config.&lt;/li&gt;
&lt;li&gt;It supports any stack: Node.js, Python, Go, Docker, Kubernetes, and more.&lt;/li&gt;
&lt;li&gt;Secrets management is built-in for secure deployments.&lt;/li&gt;
&lt;li&gt;You can deploy to any environment — local, cloud, or hybrid.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  9. Conclusion
&lt;/h2&gt;

&lt;p&gt;Gone are the days of bulky CI servers and manual deployment scripts. With GitHub Actions, you can streamline your entire DevOps workflow inside your repository.&lt;br&gt;
Whether you're a solo developer or managing a team, CI/CD is no longer optional — it's essential. And with GitHub Actions, it's also accessible.&lt;/p&gt;

&lt;p&gt;About the Author: &lt;em&gt;Rajan is a DevOps Engineer at &lt;a href="https://www.addwebsolution.com/" rel="noopener noreferrer"&gt;AddWebSolution&lt;/a&gt;, specializing in infrastructure automation, optimizing CI/CD pipelines, and ensuring seamless deployments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>githubactions</category>
      <category>cicdpipeline</category>
      <category>devops</category>
      <category>ci</category>
    </item>
  </channel>
</rss>
