<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tran Huynh An Duy (Andy)</title>
    <description>The latest articles on DEV Community by Tran Huynh An Duy (Andy) (@andylovecloud).</description>
    <link>https://dev.to/andylovecloud</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3626756%2Fa9cc9476-6b34-4231-9f9f-a63897107a13.jpeg</url>
      <title>DEV Community: Tran Huynh An Duy (Andy)</title>
      <link>https://dev.to/andylovecloud</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/andylovecloud"/>
    <language>en</language>
    <item>
      <title>Masking sensitive data for Jenkins server by Mask Passwords plugin</title>
      <dc:creator>Tran Huynh An Duy (Andy)</dc:creator>
      <pubDate>Thu, 18 Dec 2025 13:58:57 +0000</pubDate>
      <link>https://dev.to/andylovecloud/masking-sensitive-data-for-jenkins-server-by-mask-passwords-plugin-4k4j</link>
      <guid>https://dev.to/andylovecloud/masking-sensitive-data-for-jenkins-server-by-mask-passwords-plugin-4k4j</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;The goal of this guide is to automatically mask passwords and other sensitive information coming from build parameters. &lt;/p&gt;

&lt;h2&gt;
  
  
  1. Current situation
&lt;/h2&gt;

&lt;p&gt;During a pipeline build, sensitive data (a user ID, a password, etc.) is often embedded directly in the script. Anyone who opens the Console Output after the job runs can read it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fundz314gg51csxdqksce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fundz314gg51csxdqksce.png" alt="Example-situation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Solution
&lt;/h2&gt;

&lt;h3&gt;
  
  
  2.1 Install the Mask Passwords plugin
&lt;/h3&gt;

&lt;p&gt;From the Jenkins dashboard, go to Manage Jenkins → Plugins → Available plugins → type "Mask Passwords" in the search bar → select the check box and click Install.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmuk4r3porbq5ghep1qwp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmuk4r3porbq5ghep1qwp.png" alt="Plugin installation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then remember to select the checkbox "Restart Jenkins when installation is complete and no jobs are running".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2jrqwfmezyh0ne3pijf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2jrqwfmezyh0ne3pijf.png" alt="Restart"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2.2 Configure the plugin
&lt;/h3&gt;

&lt;p&gt;After the installation is completed, select the job you would like to apply masking to → click &lt;strong&gt;Configuration&lt;/strong&gt; → &lt;strong&gt;Environment&lt;/strong&gt; → select &lt;strong&gt;Mask passwords and regexes&lt;/strong&gt; → define the variables with the values you would like to mask.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi39bm4wt52a7p8ro6ft9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi39bm4wt52a7p8ro6ft9.png" alt="installed"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to &lt;strong&gt;Build steps&lt;/strong&gt; → replace the hard-coded values in the execute-shell script with the variables defined above.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dvhjkb0msji4vpi4oms.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dvhjkb0msji4vpi4oms.png" alt="Build steps"&gt;&lt;/a&gt;&lt;/p&gt;
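&lt;p&gt;A sketch of the adjusted execute-shell step, assuming the plugin defines variables named APP_USER and APP_PASSWORD (both names are illustrative):&lt;/p&gt;

```shell
# The values come from "Mask passwords and regexes"; the plugin replaces
# the value of APP_PASSWORD with ******** in the Console Output.
echo "Deploying as user $APP_USER"
echo "Password has ${#APP_PASSWORD} characters"
```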

&lt;p&gt;Then run the job again and compare the Console Output. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg4pk99rxd52qaooiw6a5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg4pk99rxd52qaooiw6a5.png" alt="Result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://plugins.jenkins.io/mask-passwords/" rel="noopener noreferrer"&gt;Mask Passwords | Jenkins plugin&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Video: masking credentials in the Jenkins console output&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/yCYPz2P1S3c"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>jenkins</category>
      <category>linux</category>
      <category>devops</category>
    </item>
    <item>
      <title>Enable HTTPS for Jenkins on SUSE by using Apache httpd Reverse Proxy with Existing SSL Certificate</title>
      <dc:creator>Tran Huynh An Duy (Andy)</dc:creator>
      <pubDate>Thu, 18 Dec 2025 12:39:02 +0000</pubDate>
      <link>https://dev.to/andylovecloud/enable-https-for-jenkins-on-suse-by-using-apache-httpd-reverse-proxy-with-existing-ssl-certificate-3hgm</link>
      <guid>https://dev.to/andylovecloud/enable-https-for-jenkins-on-suse-by-using-apache-httpd-reverse-proxy-with-existing-ssl-certificate-3hgm</guid>
      <description>&lt;h2&gt;
  
  
  1. Prerequisites
&lt;/h2&gt;

&lt;p&gt;SUSE Linux server with Jenkins already installed and running on port 8080.&lt;/p&gt;

&lt;p&gt;A domain name pointing to your server (e.g., abc.com).&lt;/p&gt;

&lt;p&gt;An SSL certificate that has already been issued (e.g., a company certificate: .crt + .key files, and possibly a CA bundle).&lt;/p&gt;

&lt;p&gt;Root or &lt;code&gt;sudo&lt;/code&gt; privileges.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Install Apache httpd
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo zypper refresh
sudo zypper install apache2 apache2-utils
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enable and start Apache:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl enable apache2
sudo systemctl start apache2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check status:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;systemctl status apache2&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Enable Required Apache Modules
&lt;/h2&gt;

&lt;p&gt;Apache needs the &lt;code&gt;proxy&lt;/code&gt;, &lt;code&gt;proxy_http&lt;/code&gt;, &lt;code&gt;ssl&lt;/code&gt;, and &lt;code&gt;headers&lt;/code&gt; modules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod ssl
sudo a2enmod headers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that the module is enabled:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apache2ctl -M | grep ssl&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If you don’t see &lt;code&gt;ssl_module (shared)&lt;/code&gt;, then enable it:&lt;/p&gt;

&lt;p&gt;Edit &lt;code&gt;/etc/sysconfig/apache2&lt;/code&gt;, find the &lt;code&gt;APACHE_MODULES=&lt;/code&gt; line and add &lt;code&gt;ssl&lt;/code&gt;, for example:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;APACHE_MODULES="... proxy proxy_http headers ssl ..."&lt;/code&gt;&lt;/p&gt;
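&lt;p&gt;If you prefer editing from the shell, a sed one-liner such as the following appends the modules in place (a sketch; it assumes the quoted &lt;code&gt;APACHE_MODULES="..."&lt;/code&gt; form shown above, so back up the file first):&lt;/p&gt;

```shell
# Append the required modules to the APACHE_MODULES line in place.
CONF=/etc/sysconfig/apache2
sudo cp "$CONF" "$CONF.bak"
sudo sed -i 's/^APACHE_MODULES="\([^"]*\)"/APACHE_MODULES="\1 proxy proxy_http headers ssl"/' "$CONF"
```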

&lt;p&gt;Also ensure &lt;code&gt;APACHE_SERVER_FLAGS&lt;/code&gt; includes &lt;code&gt;SSL&lt;/code&gt; so that the SSL virtual hosts are actually activated, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;APACHE_SERVER_FLAGS="SSL"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then restart Apache to reload the modules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart apache2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Place Your SSL Certificate
&lt;/h2&gt;

&lt;p&gt;Copy your certificate and key files into a secure directory:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/etc/pki/tls/certs/abc.com.crt&lt;/code&gt;  → your certificate&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/etc/pki/tls/private/abc.com.key&lt;/code&gt;   → your private key&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/etc/pki/tls/certs/wildcard.abc.com_ca_bundle.crt&lt;/code&gt;  (optional, if provided by CA)&lt;/p&gt;
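&lt;p&gt;Before wiring the files into Apache, it is worth checking that the certificate and private key actually belong together. A common sanity check (paths follow the examples above) compares their moduli; the two digests must be identical:&lt;/p&gt;

```shell
# Both commands must print the same MD5 digest, otherwise the key does
# not match the certificate and Apache will refuse to start.
openssl x509 -noout -modulus -in /etc/pki/tls/certs/abc.com.crt | openssl md5
openssl rsa  -noout -modulus -in /etc/pki/tls/private/abc.com.key | openssl md5
```

&lt;p&gt;Also restrict the key's permissions, e.g. &lt;code&gt;sudo chmod 600 /etc/pki/tls/private/abc.com.key&lt;/code&gt;.&lt;/p&gt;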

&lt;h2&gt;
  
  
  5. Configure Apache VirtualHost for Jenkins
&lt;/h2&gt;

&lt;p&gt;5.1 Create or edit a config file for the Jenkins service:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/apache2/vhosts.d/jenkins.conf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add this configuration (replace plm-jenkins-dev.abc.com with your domain):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;VirtualHost *:80&amp;gt;    

ServerName plm-jenkins-dev.abc.com

Redirect permanent / https://plm-jenkins-dev.abc.com/

 &amp;lt;/VirtualHost&amp;gt;  

&amp;lt;VirtualHost *:443&amp;gt;    

ServerName plm-jenkins-dev.konecranes.com
SSLEngine on    

SSLCertificateFile /etc/pki/tls/certs/wildcard.abc.com.crt
SSLCertificateKeyFile /etc/pki/tls/private/wildcard.abc.com.key  
SSLCertificateChainFile /etc/pki/tls/certs/wildcard.abc.com_ca_bundle.crt  

ProxyRequests     Off    

ProxyPreserveHost On    

AllowEncodedSlashes NoDecode     

&amp;lt;Proxy http://localhost:8080/jenkins*&amp;gt;
   Require all granted
&amp;lt;/Proxy&amp;gt;

   ProxyPass         /jenkins http://localhost:8080/jenkins nocanon
   ProxyPassReverse  /jenkins http://localhost:8080/jenkins
   RequestHeader set X-Forwarded-Proto "https"
   RequestHeader set X-Forwarded-Port "443"
&amp;lt;/VirtualHost&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5.2 Edit the main Apache config file to set the global server name:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/apache2/httpd.conf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add this line near the top of the file:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ServerName plm-jenkins-dev.abc.com&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Restart Apache
&lt;/h2&gt;

&lt;p&gt;Verify the previous configuration:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apache2ctl configtest&lt;/code&gt; → you should see &lt;code&gt;Syntax OK&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Restart the service&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo systemctl restart apache2&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Configure Jenkins
&lt;/h2&gt;

&lt;p&gt;Ensure Jenkins is aware it’s behind HTTPS.&lt;/p&gt;

&lt;p&gt;7.1 Open Jenkins config file:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/sysconfig/jenkins&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add this line if missing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;JENKINS_ARGS="--webroot=/var/cache/$NAME/war --httpPort=$HTTP_PORT --prefix=/jenkins"
Environment="JENKINS_PREFIX=/jenkins"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;7.2 Open the Jenkins systemd override file (from Jenkins 2.3xx the change must be applied here):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo systemctl edit jenkins&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add the lines below near the top of the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Service]
Environment="JENKINS_PREFIX=/jenkins"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then restart the service&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl daemon-reload
sudo systemctl restart jenkins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;7.3 Inside Jenkins UI → Manage Jenkins → Configure System → set Jenkins URL:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;https://plm-jenkins-dev.abc.com/jenkins&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;7.4 Restart Jenkins:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo systemctl restart jenkins&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Verify
&lt;/h2&gt;

&lt;p&gt;Open in a browser:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://plm-jenkins-dev.abc.com/jenkins" rel="noopener noreferrer"&gt;https://plm-jenkins-dev.abc.com/jenkins&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;You should see Jenkins running securely with your SSL certificate.&lt;/p&gt;

&lt;p&gt;(Optional) Block direct port 8080 access:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo firewall-cmd --permanent --remove-port=8080/tcp
sudo firewall-cmd --reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>jenkins</category>
      <category>devops</category>
      <category>linux</category>
    </item>
    <item>
      <title>Jenkins installation in SUSE server</title>
      <dc:creator>Tran Huynh An Duy (Andy)</dc:creator>
      <pubDate>Thu, 18 Dec 2025 11:57:00 +0000</pubDate>
      <link>https://dev.to/andylovecloud/jenkins-installation-in-suse-server-85m</link>
      <guid>https://dev.to/andylovecloud/jenkins-installation-in-suse-server-85m</guid>
      <description>&lt;h2&gt;
  
  
  1.1 Install prerequisites and the Jenkins service
&lt;/h2&gt;

&lt;p&gt;To install Jenkins, Java must be installed first.&lt;/p&gt;

&lt;p&gt;Below is a script that installs the prerequisites on a server running SUSE Linux Enterprise Server. After you have received the handover from the server team, connect to the server over SSH (e.g., with PuTTY) and run the script below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: If you cannot start from the 1st step in the script below, you might need help from the local service team to install steps 1 to 5 from their side. After that, you can continue from section 1.2.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#jenkins-install.sh

#1.Update system
sudo zypper up

#2.Install java
sudo zypper install -y java-21-openjdk java-21-openjdk-devel

#3.Add the Jenkins repo
sudo zypper addrepo -f https://pkg.jenkins.io/opensuse-stable/ jenkins

#4.Install Jenkins 
sudo zypper install -y jenkins

#5.Enable Jenkins service
sudo systemctl enable jenkins
sudo systemctl start jenkins
sudo systemctl status jenkins


#6.Check that port 8080 is open on the server (Local Address:Port)
sudo ss -tlpun

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: if you get a Permission denied error, see the Troubleshooting section below.&lt;/p&gt;

&lt;h2&gt;
  
  
  1.2. Configuration for Jenkins
&lt;/h2&gt;

&lt;p&gt;The Jenkins server is now reachable via the server's IP address on the default port, 8080.&lt;/p&gt;

&lt;p&gt;Example: &lt;a href="http://Server-IP-address:8080/" rel="noopener noreferrer"&gt;http://Server-IP-address:8080/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then enter the administrator password stored at &lt;code&gt;/var/lib/jenkins/secrets/initialAdminPassword&lt;/code&gt;. You can read it with the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /var/lib/jenkins/secrets/initialAdminPassword 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foal6y478yhaj48dc5pih.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foal6y478yhaj48dc5pih.png" alt="Getting Started" width="800" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select Install suggested plugins&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42hjioxbu7eihk4c8j8j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42hjioxbu7eihk4c8j8j.png" alt="Install suggested plugins" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wait until the plugin installation is completed &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2n39pnxdsqpwd6mplq5f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2n39pnxdsqpwd6mplq5f.png" alt="plugin installation is completed" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create the first admin user&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qp2sdrn12huiybaykfw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qp2sdrn12huiybaykfw.png" alt="1st Admin user" width="800" height="756"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Start using Jenkins&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpv99y6nmhifxdh5l4d9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpv99y6nmhifxdh5l4d9.png" alt="Start using Jenkins" width="788" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the public IP address or domain name of the server (including port 8080)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zs14t4sh50995ytr5v9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zs14t4sh50995ytr5v9.png" alt="public IP address" width="800" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Home screen of Jenkins&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrkhw9gyy9ftvdhwa3qm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrkhw9gyy9ftvdhwa3qm.png" alt="Home screen of Jenkins" width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1.3 Change Jenkins's working folder
&lt;/h2&gt;

&lt;p&gt;To change the current working folder from &lt;strong&gt;/var/lib/jenkins&lt;/strong&gt; to &lt;strong&gt;/opt/devops/jenkins&lt;/strong&gt;, &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjpxgr3hub3bxdy589ra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjpxgr3hub3bxdy589ra.png" alt="Change Jenkins's working folder" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;do the following steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#1. Stop Jenkins service
sudo systemctl stop jenkins

#2. Create the new directory
sudo mkdir -p /opt/devops/jenkins

#3. Grant permission for jenkins user to new folder
sudo chown -R jenkins:jenkins /opt/devops/jenkins

#4. Copy existing Jenkins data (including jobs, plugins, ...)
sudo rsync -avzh /var/lib/jenkins/ /opt/devops/jenkins/

#5. Update Jenkins configuration
sudo nano /etc/sysconfig/jenkins

##5.1 Find JENKINS_HOME="/var/lib/jenkins" -&amp;gt; change to JENKINS_HOME="/opt/devops/jenkins"
##5.2 Find WorkingDirectory=/var/lib/jenkins -&amp;gt; WorkingDirectory=/opt/devops/jenkins (optional, if present)

##5.3 If step 5.1 does not work, also edit /usr/lib/systemd/system/jenkins.service and replace the two parameters as in steps 5.1 and 5.2.

#6. Reload systemd:
sudo systemctl daemon-reload

#7. Start Jenkins again
sudo systemctl start jenkins

#8. Verify new home directory
ps aux | grep jenkins

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result can be verified with step 8 above.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fne3y7zromdpcq4wwuuvc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fne3y7zromdpcq4wwuuvc.png" alt="checked by the step" width="800" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;or inside &lt;strong&gt;Jenkins UI → Manage Jenkins → System Information → Environment Variables → JENKINS_HOME&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwl6rpv8u318iogo9dfk8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwl6rpv8u318iogo9dfk8.png" alt="Verify in UI" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cross-check that the existing jobs are present in the new folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls /opt/devops/jenkins/jobs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
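&lt;p&gt;For a stronger check than eyeballing the listing, a recursive diff between the old and new job folders (paths as above) should produce no output:&lt;/p&gt;

```shell
# No output means every job file was copied over identically.
diff -rq /var/lib/jenkins/jobs /opt/devops/jenkins/jobs
```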



&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;p&gt;If you face one of the problems below while installing the prerequisites or Jenkins:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctqesjj8trsgs3dcr89b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctqesjj8trsgs3dcr89b.png" alt="Issue 1" width="727" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;or&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycmy5zkhobd7h008nkmj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycmy5zkhobd7h008nkmj.png" alt="Issue 2" width="800" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Solution&lt;/u&gt;&lt;/strong&gt;: contact the local service team that provided access to your ADM account or your Infodba account.&lt;/p&gt;

</description>
      <category>jenkins</category>
      <category>linux</category>
      <category>devops</category>
    </item>
    <item>
      <title>Day 15: Decoupling and Connecting - Mastering AWS VPC Peering with Terraform</title>
      <dc:creator>Tran Huynh An Duy (Andy)</dc:creator>
      <pubDate>Wed, 10 Dec 2025 11:37:42 +0000</pubDate>
      <link>https://dev.to/andylovecloud/day-15-decoupling-and-connecting-mastering-aws-vpc-peering-with-terraform-3cjk</link>
      <guid>https://dev.to/andylovecloud/day-15-decoupling-and-connecting-mastering-aws-vpc-peering-with-terraform-3cjk</guid>
      <description>&lt;p&gt;Today, we successfully implemented VPC Peering using Terraform, a foundational skill for anyone managing complex cloud infrastructure. This mini-project demonstrated how to securely connect separate network environments across different regions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is VPC Peering?
&lt;/h2&gt;

&lt;p&gt;VPC peering is a networking connection between two Virtual Private Clouds (VPCs) that enables instances in either VPC to communicate with each other privately, as if they were on the same network.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is this necessary?
&lt;/h2&gt;

&lt;p&gt;In enterprise environments, you often need to share resources or data securely between different departments, environments (like Dev and Prod), or even AWS accounts. For example, a web server located in one VPC might need to securely retrieve data from a backend database residing in a separate VPC.&lt;br&gt;
Without VPC peering, that data connection would have to traverse the Internet Gateway, exposing the traffic to the public internet—a major security risk. VPC peering allows this communication to occur over a private AWS backbone, ensuring security and low latency.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Non-Negotiable Rules of Peering
&lt;/h2&gt;

&lt;p&gt;Before establishing any connection, two rules are critical:&lt;br&gt;
&lt;strong&gt;1. Non-Overlapping CIDR Ranges&lt;/strong&gt;: The IP address ranges (CIDRs) of the two VPCs must not overlap. For our project, we used 10.0.0.0/16 for the primary VPC and 10.1.0.0/16 for the secondary VPC.&lt;br&gt;
&lt;strong&gt;2. Bidirectional Connection&lt;/strong&gt;: VPC peering is not a single connection; it requires a bidirectional setup. You must initiate a connection from VPC A to VPC B, and then establish a corresponding connection from VPC B back to VPC A.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9l553z15v8g7po2z06n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9l553z15v8g7po2z06n.png" alt="VPC Peering" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Terraform Implementation: Multi-Region Setup
&lt;/h2&gt;

&lt;p&gt;Our project involved provisioning VPCs in different AWS regions (US East 1 and US West 2). To manage resources across these distinct regions in a single Terraform configuration, we utilized aliases in our provider definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Define Primary Provider (US East 1)
provider "aws" {
  region = var.primary_region 
  alias  = "primary"
}

# Define Secondary Provider (US West 2)
provider "aws" {
  region = var.secondary_region 
  alias  = "secondary"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Establishing the Connection
&lt;/h2&gt;

&lt;p&gt;We used the &lt;code&gt;aws_vpc_peering_connection&lt;/code&gt; resource to create the connection request (A to B) and then an &lt;code&gt;aws_vpc_peering_connection_accepter&lt;/code&gt; resource to accept the request (B to A, the acceptor).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# VPC Peering Connection Request (Primary to Secondary)
resource "aws_vpc_peering_connection" "primary_to_secondary" {
  provider      = aws.primary
  vpc_id        = aws_vpc.primary_vpc.id
  peer_vpc_id   = aws_vpc.secondary_vpc.id
  peer_region   = var.secondary_region
  auto_accept   = false # The destination must explicitly accept
}

# VPC Peering Connection Acceptor (Secondary accepts Primary's request)
resource "aws_vpc_peering_connection_accepter" "secondary_acceptor" {
  provider                  = aws.secondary
  vpc_peering_connection_id = aws_vpc_peering_connection.primary_to_secondary.id
  auto_accept               = true # Auto-accept the incoming request
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Crucial Routing Step
&lt;/h2&gt;

&lt;p&gt;Creating the peering connection is only half the battle. To allow traffic to flow, we must update the Route Tables in both VPCs. Each VPC's route table must be configured to send traffic destined for the peer VPC's CIDR range (e.g., 10.1.0.0/16) to the newly created VPC peering connection ID.&lt;br&gt;
This crucial step, combined with correctly configured security groups (allowing ICMP/Ping traffic between the VPC CIDR blocks), ensures successful private connectivity.&lt;/p&gt;
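&lt;p&gt;Continuing the sketch above (resource names carried over from the earlier snippets; &lt;code&gt;main_route_table_id&lt;/code&gt; assumes each VPC uses its default route table), the two routes might look like:&lt;/p&gt;

```hcl
# Primary VPC: send traffic for the secondary CIDR through the peering link
resource "aws_route" "primary_to_secondary" {
  provider                  = aws.primary
  route_table_id            = aws_vpc.primary_vpc.main_route_table_id
  destination_cidr_block    = "10.1.0.0/16"
  vpc_peering_connection_id = aws_vpc_peering_connection.primary_to_secondary.id
}

# Secondary VPC: the mirror route back to the primary CIDR
resource "aws_route" "secondary_to_primary" {
  provider                  = aws.secondary
  route_table_id            = aws_vpc.secondary_vpc.main_route_table_id
  destination_cidr_block    = "10.0.0.0/16"
  vpc_peering_connection_id = aws_vpc_peering_connection.primary_to_secondary.id
}
```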
&lt;h2&gt;
  
  
  Advanced Caveat: Transitive Peering
&lt;/h2&gt;

&lt;p&gt;During the project, we encountered a vital architectural lesson: Transitive Peering does not work.&lt;/p&gt;

&lt;p&gt;If you connect:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;VPC A to VPC B&lt;/li&gt;
&lt;li&gt;VPC B to VPC C&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Traffic will not automatically flow from VPC A to VPC C. To enable A and C to communicate privately, you must create a dedicated VPC peering connection between A and C as well. This is a key consideration when designing larger, interconnected networks.&lt;/p&gt;



&lt;p&gt;By successfully provisioning and configuring VPC peering, we established secure, private communication between two instances running in separate regions, a vital step toward building complex, enterprise-grade infrastructure with Terraform.&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/WGt000THDmQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Day 14: Real-World Terraform - Hosting a Static Website with S3 and CloudFront</title>
      <dc:creator>Tran Huynh An Duy (Andy)</dc:creator>
      <pubDate>Tue, 09 Dec 2025 09:33:00 +0000</pubDate>
      <link>https://dev.to/andylovecloud/day-14-real-world-terraform-hosting-a-static-website-with-s3-and-cloudfront-59eo</link>
      <guid>https://dev.to/andylovecloud/day-14-real-world-terraform-hosting-a-static-website-with-s3-and-cloudfront-59eo</guid>
      <description>&lt;p&gt;Having covered the fundamental concepts of Terraform, from file structure and type constraints to functions and data sources, it’s time to apply that knowledge to a real-world project: hosting a static website using a private S3 bucket backed by a CloudFront Distribution. We will build this entire, production-ready architecture using Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why We Use CloudFront in Front of S3
&lt;/h2&gt;

&lt;p&gt;While S3 buckets can host static websites directly, this approach is problematic in production. Direct S3 serving often incurs higher data transfer and storage costs, requires the bucket to be publicly accessible (a major security risk), and makes the website vulnerable to DoS attacks.&lt;br&gt;
To solve these challenges, we introduce Amazon CloudFront, a Content Delivery Network (CDN).&lt;br&gt;
CloudFront creates Edge Locations—points of presence geographically close to your users worldwide. These edge locations serve two key purposes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Speed and Cost&lt;/strong&gt;: Edge locations cache frequently accessed files (like HTML, images, and CSS). When a user requests a file, it is served instantly from the nearest edge location rather than traversing continents to reach the origin S3 bucket. This reduces latency and lowers data transfer costs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Security&lt;/strong&gt;: By using CloudFront, we can keep the S3 bucket private. Only the CloudFront distribution is granted access to fetch files from the S3 bucket, ensuring users cannot access the bucket directly.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Secure Architecture
&lt;/h2&gt;

&lt;p&gt;Our goal is to provision an architecture that provides secure access to static files.&lt;br&gt;
Architectural Flow: We need to establish authentication and authorization between the CloudFront Distribution and the private S3 bucket.&lt;br&gt;
• Origin Access Control (OAC): This resource acts as the identity management system that allows only the specified CloudFront Distribution to access the private S3 bucket. This is the recommended, modern approach, replacing the deprecated Origin Access Identity (OAI).&lt;br&gt;
• Bucket Policy: We must attach a bucket policy to the S3 bucket to authorize the CloudFront distribution to perform necessary actions, such as Get and List files, which are required for caching at the edge locations.&lt;/p&gt;
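&lt;p&gt;As a sketch of how these two pieces fit together (the bucket and distribution resource names here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudfront_origin_access_control" "site_oac" {
  name                              = "site-oac"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

# Allow only this CloudFront distribution to read from the private bucket
resource "aws_s3_bucket_policy" "allow_cloudfront" {
  bucket = aws_s3_bucket.site.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "cloudfront.amazonaws.com" }
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.site.arn}/*"
      Condition = {
        StringEquals = { "AWS:SourceArn" = aws_cloudfront_distribution.cdn.arn }
      }
    }]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;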
&lt;h2&gt;
  
  
  Diagram of the Concept:
&lt;/h2&gt;

&lt;p&gt;Imagine users accessing the site globally. Instead of hitting the centralized S3 bucket directly, they hit the closest CloudFront edge location. If the file is cached (TTL, or Time To Live, default is around 24 hours), it’s served immediately. If not, the edge location securely retrieves the file from the private S3 bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fggsy2a7xat7fymywrx7f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fggsy2a7xat7fymywrx7f.png" alt="Project concept"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Terraform Implementation: Key Resources
&lt;/h2&gt;

&lt;p&gt;To realize this architecture, we define several Terraform resources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;S3 Bucket &amp;amp; Public Access Block:&lt;/strong&gt; A private aws_s3_bucket is created, and we use the aws_s3_bucket_public_access_block resource to ensure public access is entirely disabled.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Origin Access Control (OAC)&lt;/strong&gt;: Defines the secure identity for CloudFront interaction.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 Bucket Policy&lt;/strong&gt;: This resource grants the OAC identity s3:GetObject and other necessary permissions using a complex JSON structure, which must be encoded using the jsonencode() function in Terraform.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Uploading Files&lt;/strong&gt;: Instead of manually uploading static files, we use the aws_s3_object resource combined with iteration functions to upload everything in the local www directory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CloudFront Distribution&lt;/strong&gt;: This resource defines the global CDN, referencing the OAC ID and setting cache behaviors (e.g., allowing only GET and HEAD methods).&lt;/li&gt;
&lt;/ol&gt;
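&lt;p&gt;For item 4, a minimal sketch of uploading the local www directory with fileset() iteration (bucket name assumed from the earlier resources):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_object" "site_files" {
  # One object per file found under ./www (recursively)
  for_each = fileset("${path.module}/www", "**")

  bucket = aws_s3_bucket.site.id
  key    = each.value
  source = "${path.module}/www/${each.value}"
  etag   = filemd5("${path.module}/www/${each.value}") # re-upload on content change
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In practice you would also set content_type per file (typically via a small MIME-type lookup map) so browsers render the files correctly.&lt;/p&gt;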

&lt;p&gt;This comprehensive approach allows us to deploy secure, high-performance static websites without manually touching the AWS console. While we implemented the core components, follow-up tasks include adding Route 53 DNS records, using an SSL certificate from ACM, and configuring custom error pages for a complete production solution.&lt;/p&gt;



&lt;p&gt;Watch the video "14/30 - Host A Static Website In AWS S3 And Cloudfront (using terraform)" from &lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt;: &lt;br&gt;


  &lt;iframe src="https://www.youtube.com/embed/bK6RimAv2nQ"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Day 13: Decoupling Your Infrastructure - Mastering Terraform Data Sources</title>
      <dc:creator>Tran Huynh An Duy (Andy)</dc:creator>
      <pubDate>Tue, 09 Dec 2025 09:16:24 +0000</pubDate>
      <link>https://dev.to/andylovecloud/day-13-decoupling-your-infrastructure-mastering-terraform-data-sources-p7i</link>
      <guid>https://dev.to/andylovecloud/day-13-decoupling-your-infrastructure-mastering-terraform-data-sources-p7i</guid>
      <description>&lt;p&gt;We’ve spent the last few days mastering expressions and functions to make our code reusable. Today, we tackle a critical concept for enterprise environments: Data Sources.&lt;/p&gt;

&lt;p&gt;Data Sources allow your Terraform configuration to read information about resources outside of your current configuration—meaning resources that already exist in your AWS environment but were not created by this specific Terraform code. This ability to reference pre-existing components is crucial for decoupling and sharing infrastructure across multiple teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Data Sources? The Need for Decoupling
&lt;/h2&gt;

&lt;p&gt;When provisioning new infrastructure, you often rely on shared or existing components. For example, if you need to provision an EC2 instance, you need several pieces of external information:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. AMI ID&lt;/strong&gt;: The Amazon Machine Image (AMI) is required for the instance, but the AMI itself is not created by your configuration; it is published and maintained by Amazon or other vendors. You don't want to hardcode the AMI ID, which changes with every new release; you want to fetch the latest one dynamically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Shared VPC/Subnets&lt;/strong&gt;: In an enterprise setting, infrastructure like Virtual Private Clouds (VPCs) and subnets are often pre-provisioned and shared among development, QA, and DevOps teams. When creating new resources, you must reference these existing network components rather than creating new ones.&lt;/p&gt;

&lt;p&gt;Data Sources solve this by fetching these details dynamically, eliminating the need for manual intervention or hardcoding IDs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frllv52zpvlt2bqew0170.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frllv52zpvlt2bqew0170.png" alt="Terraform data sources"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How Data Sources Work: The Syntax
&lt;/h3&gt;

&lt;p&gt;To use a Data Source, you use the data keyword followed by the resource type (e.g., aws_vpc) and a local name you define:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_vpc" "vpc_name" {
  // configuration (filters) to find the specific VPC
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The data source then provides outputs (like ID, CIDR block, etc.) that your resources can reference.&lt;/p&gt;

&lt;h3&gt;
  
  
  Case Study: Referencing Existing Resources
&lt;/h3&gt;

&lt;p&gt;Here is how we use Data Sources to pull information about an existing VPC, a subnet, and the latest Linux AMI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Finding the Shared VPC and Subnet&lt;/strong&gt;&lt;br&gt;
Instead of hardcoding the VPC ID, we use filters to look up the default VPC based on its Name tag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Code Example (VPC Data Source):
data "aws_vpc" "vpc_name" {
  filter {
    name   = "tag:Name"
    values = ["default"] // Assumes the default VPC is tagged 'default' [7]
  }
}

// Data Source for Subnet within the shared VPC
data "aws_subnet" "shared" {
  filter {
    name   = "tag:Name"
    values = ["subnet A"] // Finds the subnet tagged 'subnet A' [8]
  }
  vpc_id = data.aws_vpc.vpc_name.id // Reference the ID found by the VPC data source [8]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this configuration, we successfully filter existing resources in the AWS environment based on tags. We haven't created the VPC or subnet, yet we are correctly referencing the existing ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Finding the Latest AMI ID&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We use the aws_ami data source to fetch the most recent Amazon Linux 2 image:&lt;br&gt;
Code Example (AMI Data Source):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_ami" "linux2" {
  most_recent = true // Ensures we get the latest release [10]
  owners      = ["amazon"] // Owned by Amazon, not us [10]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-gp2"] // Uses a wildcard filter for the name [10]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Provisioning the EC2 Instance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Finally, we use the outputs from these data sources in our resource definition:&lt;br&gt;
Code Example (EC2 Instance using Data Source Outputs):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "example_instance" {
  instance_type = "t2.micro" 

  // Use the AMI ID retrieved by the data source
  ami           = data.aws_ami.linux2.id 

  // Use the Subnet ID retrieved by the data source
  subnet_id     = data.aws_subnet.shared.id 
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This demonstrates how Data Sources provide the necessary external IDs (AMI ID, Subnet ID) without hardcoding, allowing the instance to be provisioned correctly using existing, shared infrastructure.&lt;/p&gt;




&lt;p&gt;Data Sources are fundamental to creating flexible and maintainable configurations, especially in environments where infrastructure management is shared across multiple teams. Thanks &lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/MSr67lWCyD8"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Day 12: The Power of Reusability (Part 2) - Mastering Advanced Terraform Functions</title>
      <dc:creator>Tran Huynh An Duy (Andy)</dc:creator>
      <pubDate>Mon, 08 Dec 2025 13:01:16 +0000</pubDate>
      <link>https://dev.to/andylovecloud/day-12-the-power-of-reusability-part-2-mastering-advanced-terraform-functions-24i9</link>
      <guid>https://dev.to/andylovecloud/day-12-the-power-of-reusability-part-2-mastering-advanced-terraform-functions-24i9</guid>
      <description>&lt;p&gt;This is Day 12, a direct continuation of our deep dive into Terraform's built-in functions. If Day 11 focused on basic string and collection manipulation, today we unlock the true power of HCL by mastering validation, type conversion, numeric calculations, and file handling.&lt;br&gt;
Understanding these functions is essential for building robust, secure, and highly dynamic infrastructure configurations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7540fehzot8odrx1iyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7540fehzot8odrx1iyd.png" alt="Advance Terraform Functions" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  1. Validation Functions: Enforcing Constraints
&lt;/h2&gt;

&lt;p&gt;Validation functions ensure that the input values provided by the user meet specific criteria before Terraform attempts to execute a plan. This prevents failures later in the provisioning process.&lt;br&gt;
Validation rules are placed directly within the variable declaration, using a validation block which contains a condition and an error_message.&lt;/p&gt;
&lt;h3&gt;
  
  
  A. Checking Length and Logic
&lt;/h3&gt;

&lt;p&gt;You can combine the length() function with logical operators (&amp;amp;&amp;amp; for logical AND) to enforce length constraints on input strings, such as ensuring an instance type name is between 2 and 20 characters.&lt;br&gt;
Code Example (Length Validation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "instance_type" {
  default = "t2.micro"

  validation {
    condition     = length(var.instance_type) &amp;gt;= 2 &amp;amp;&amp;amp; length(var.instance_type) &amp;lt;= 20
    error_message = "Instance type must be between 2 and 20 characters."
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  B. Regular Expressions and can()
&lt;/h3&gt;

&lt;p&gt;For more complex pattern matching, the can() function, combined with regex() (regular expression), allows you to define stringent requirements, such as ensuring an instance type starts with 't2' or 't3'.&lt;/p&gt;
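&lt;p&gt;A minimal sketch of such a rule (the pattern and error message here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "instance_type" {
  default = "t2.micro"

  validation {
    # can() returns true only if the regex match succeeds
    condition     = can(regex("^t[23]\\.", var.instance_type))
    error_message = "Instance type must start with t2 or t3."
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;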
&lt;h3&gt;
  
  
  C. String Endings (endswith) and Sensitive Data
&lt;/h3&gt;

&lt;p&gt;The endswith() validation function ensures a string meets a naming convention, such as requiring a backup variable name to end with _backup.&lt;br&gt;
Furthermore, you can flag variables as sensitive by setting sensitive = true in the declaration. This prevents the value from being printed in logs or standard output during terraform plan and apply. While this is a critical security feature, remember that the value is still saved in plain text in the state file, so the state file itself must be protected.&lt;/p&gt;
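&lt;p&gt;A minimal sketch of an endswith() rule (variable name illustrative; endswith() is available in Terraform v1.5 and later):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "backup_name" {
  type    = string
  default = "daily_backup"

  validation {
    condition     = endswith(var.backup_name, "_backup")
    error_message = "The name must end with _backup."
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;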
&lt;h2&gt;
  
  
  2. Type Conversion and Collection Functions
&lt;/h2&gt;

&lt;p&gt;Type conversion functions allow you to change the structure of data, which is often necessary when manipulating collections.&lt;/p&gt;
&lt;h3&gt;
  
  
  A. Combining and Making Unique (concat and toset)
&lt;/h3&gt;

&lt;p&gt;While the concat() function combines two or more lists, the resulting list may contain duplicate values. To eliminate duplicates and ensure a collection of only unique values, you must convert the list into a set using the toset() function.&lt;br&gt;
Code Example (Removing Duplicates):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "user_locations" {
  type    = list(string)
  default = ["us-east-1", "us-east-1", "us-west-1"] 
}
variable "default_location" {
  type    = list(string)
  default = ["us-west-2"]
}

locals {
  all_locations   = concat(var.user_locations, var.default_location) 
  # Result: ["us-east-1", "us-east-1", "us-west-1", "us-west-2"] (Duplicates remain)

  unique_locations = toset(local.all_locations)
  # Result: {"us-east-1", "us-west-1", "us-west-2"} (Set removes duplicates)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Numeric Functions and Spread Operators
&lt;/h2&gt;

&lt;p&gt;Numeric functions handle mathematical operations. Functions like sum(), max(), and min() are designed to work on numbers, but when operating on complex collections like tuples or lists, they require additional iteration or the use of a spread operator.&lt;/p&gt;

&lt;h3&gt;
  
  
  Calculating Absolute Values and Totals
&lt;/h3&gt;

&lt;p&gt;The abs() function returns the absolute value of a number (converting negative values to positive). To apply abs() to every element in a list (like a list of monthly costs where negative values represent credits), you must use a for loop to iterate through the collection.&lt;/p&gt;
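&lt;p&gt;For example (the cost values here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  monthly_costs = [200, -100, 75, -50] # negative values represent credits

  # Apply abs() to every element with a for expression
  positive_cost = [for cost in local.monthly_costs : abs(cost)]
  # Result: [200, 100, 75, 50]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;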

&lt;h3&gt;
  
  
  The Spread Operator (...)
&lt;/h3&gt;

&lt;p&gt;To use aggregate functions like max() or min() on a list of numbers, you must use the spread operator (...) after the list variable. This operator tells Terraform to treat the elements of the list as individual numbers, allowing the function to work correctly.&lt;br&gt;
Code Example (Calculating Max Cost):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  # Assuming positive_cost is a list of absolute values like [200, 100, 75, 50]
  max_cost = max(local.positive_cost...)
  # The '...' spreads the list elements, treating them as arguments: max(200, 100, 75, 50)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This technique also allows for calculating the average cost, which involves dividing the total sum by the length of the list.&lt;/p&gt;
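&lt;p&gt;A quick sketch of that calculation (values illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  positive_cost = [200, 100, 75, 50]

  total_cost   = sum(local.positive_cost)                       # 425
  average_cost = local.total_cost / length(local.positive_cost) # 106.25
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;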

&lt;h2&gt;
  
  
  4. File and Date/Time Functions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  A. File Handling
&lt;/h3&gt;

&lt;p&gt;Terraform can interact with local files. The fileexists() function checks if a file is present. You can read the contents of a file using the file() function, and if the file contains structured data (like JSON), you can use jsondecode() to parse it into an accessible HCL map.&lt;/p&gt;
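&lt;p&gt;A small sketch combining all three (the file name is hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  config_path = "${path.module}/settings.json"

  # Parse the JSON file into a map only if it exists; fall back to an empty map
  settings = fileexists(local.config_path) ? jsondecode(file(local.config_path)) : {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;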

&lt;h3&gt;
  
  
  B. Date and Time
&lt;/h3&gt;

&lt;p&gt;The timestamp() function returns the current timestamp. The formatdate() function allows you to reformat this timestamp string into a desired structure (e.g., YYYY-MM-DD), which is useful for creating unique, time-based resource names.&lt;/p&gt;
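&lt;p&gt;For example (names are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  # Reformat the current timestamp into a date string
  build_date  = formatdate("YYYY-MM-DD", timestamp())
  bucket_name = "app-logs-${local.build_date}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;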




&lt;p&gt;Mastering these advanced functions solidifies your ability to write highly dynamic, reusable, and secure configurations, moving you beyond simple resource declarations. Thanks &lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt;&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ZYCCu9rZkU8"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>devops</category>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Day 11: The Power of Reusability - Mastering Terraform Functions (Part 1)</title>
      <dc:creator>Tran Huynh An Duy (Andy)</dc:creator>
      <pubDate>Mon, 08 Dec 2025 12:35:04 +0000</pubDate>
      <link>https://dev.to/andylovecloud/day-11-the-power-of-reusability-mastering-terraform-functions-part-1-1666</link>
      <guid>https://dev.to/andylovecloud/day-11-the-power-of-reusability-mastering-terraform-functions-part-1-1666</guid>
      <description>&lt;p&gt;Welcome to Day 11 of our 30 Days of AWS Terraform series! After structuring our projects and mastering Meta Arguments, we are now diving into a core concept that enables highly efficient and reusable infrastructure code: Terraform Functions.&lt;br&gt;
If you're new to programming concepts, a function is simply a tool that makes life easier by allowing you to reuse code repeatedly. Instead of writing the same four lines of code 10 times, you wrap them once inside a function (like sum) and call that function whenever needed, passing different inputs each time.&lt;/p&gt;

&lt;p&gt;Crucially, Terraform is not a full-fledged programming language; it is the HashiCorp Configuration Language (HCL). Because of this, you cannot create your own custom functions in Terraform; you can only use the rich library of inbuilt functions provided by Terraform.&lt;br&gt;
These functions are categorized into types such as string, numeric, collection, type conversion, and more. Today, we explore some of the most practical and frequently used functions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwf3snwpkudx4gd73dj8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwf3snwpkudx4gd73dj8.png" alt="Terraform function p1" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  1. String Manipulation Functions
&lt;/h2&gt;

&lt;p&gt;String functions are used to clean, format, and modify text inputs, ensuring they meet specific infrastructure requirements (like S3 bucket naming conventions).&lt;/p&gt;
&lt;h3&gt;
  
  
  Lower, Upper, Trim, and Replace
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;• lower() and upper()&lt;/strong&gt;: These convert a string to all lowercase or all uppercase characters, respectively.&lt;br&gt;
    ◦ Example: lower("HELLO WORLD") returns "hello world".&lt;br&gt;
• &lt;strong&gt;trim()&lt;/strong&gt;: Removes the specified characters from the start and end of a string.&lt;br&gt;
    ◦ Example: trim("fAWSTFf", "f") returns "AWSTF"; note that spaces (or any other characters) are kept unless they are included in the set of characters to trim.&lt;br&gt;
• &lt;strong&gt;replace()&lt;/strong&gt;: Substitutes an occurrence of a character or substring with a new character.&lt;br&gt;
    ◦ Example: replace("hello world", " ", "-") returns "hello-world".&lt;br&gt;
A powerful technique is nesting functions to perform multiple operations in one line. For instance, to ensure a project name is lowercase and uses hyphens instead of spaces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Code Example (Nesting lower() and replace()):
locals {
  # First, replace spaces with hyphens; then, convert the result to lowercase.
  formatted_project_name = lower(
    replace(var.project_name, " ", "-")
  )
}

output "formatted_name" {
  value = local.formatted_project_name 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If var.project_name is "Project Alpha Resource", the output becomes "project-alpha-resource". This is crucial for handling naming constraints (like those for S3 buckets, which must be lowercase and cannot contain spaces).&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Collection Functions: Managing Groups
&lt;/h2&gt;

&lt;p&gt;Collection functions help manage groups of data (like lists and maps) for tasks such as calculating length or combining inputs.&lt;br&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Length, Concat, and Merge
&lt;/h3&gt;

&lt;p&gt;
• &lt;strong&gt;length()&lt;/strong&gt;: Calculates the number of elements in a list.&lt;br&gt;
• &lt;strong&gt;concat()&lt;/strong&gt;: Combines two or more lists or tuples into a single list.&lt;br&gt;
• &lt;strong&gt;merge()&lt;/strong&gt;: Combines two or more maps (key-value pairs) into a single map. If duplicate keys exist, the last provided map takes precedence.&lt;br&gt;
The merge() function is excellent for combining default tags and environment-specific tags:&lt;/p&gt;

&lt;p&gt;Code Example (Merging Maps):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# variables.tf contains var.default_tags and var.environment_tags (both are maps)

locals {
  new_tag = merge(
    var.default_tags, 
    var.environment_tags
  )
}

resource "aws_s3_bucket" "first_s3" {
  # ... bucket configuration ...
  tags = local.new_tag # This applies all merged tags
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This single tag block now contains all key-value pairs from both input maps.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Lookup Function: Dynamic Environment Selection
&lt;/h2&gt;

&lt;p&gt;The lookup() function is vital for selecting configuration values based on an input key, often used to determine environment-specific settings.&lt;br&gt;
The Structure:&lt;br&gt;
The lookup function requires three arguments: lookup(map, key, default).&lt;br&gt;
&lt;strong&gt;1. Map&lt;/strong&gt;: The entire collection of data (e.g., instance sizes for Dev, Staging, Prod).&lt;br&gt;
&lt;strong&gt;2. Key&lt;/strong&gt;: The value you are looking up (e.g., the current environment name).&lt;br&gt;
&lt;strong&gt;3. Default&lt;/strong&gt;: A fallback value if the key is not found (e.g., t2.micro).&lt;br&gt;
Code Example (Instance Sizing):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "instance_sizes" {
  type = map(string)
  default = {
    dev     = "t2.micro"
    staging = "t3.small"
    prod    = "t3.large"
  }
}
variable "environment" {
  default = "prod"
}

locals {
  instance_size = lookup(
    var.instance_sizes, 
    var.environment, 
    "t2.micro" # Default value if key is missing
  )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If var.environment is set to "prod", the lookup function returns "t3.large". If it is set to an undefined environment like "production", it returns the default value, "t2.micro".&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Iteration and Splitting Data
&lt;/h2&gt;

&lt;h3&gt;
  
  
  split() and for Expressions
&lt;/h3&gt;

&lt;p&gt;
When inputs are provided as a single comma-separated string, you must convert them into a list for iteration. The split() function takes a separator (e.g., comma) and a string, returning a list of substrings.&lt;br&gt;
The for expression (different from the for_each meta argument) then allows you to iterate through that list to create a structured collection of output values, such as security group rules.&lt;br&gt;
Code Example (Splitting String into List and Iterating):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "allowed_ports" {
  default = "80,443,8080" # This is a single string
}

locals {
  port_list = split(",", var.allowed_ports) # Output: ["80", "443", "8080"]

  sg_rules = {
    for port in local.port_list : port =&amp;gt; {
      name        = "port-${port}"
      description = "Allow traffic on port ${port}"
    }
  }
}

# The 'sg_rules' local is now a map of rules, structured for resource definition.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This process converts a simple string into a complex map of security group rules, demonstrating how functions and expressions combine for powerful data manipulation.&lt;/p&gt;




&lt;p&gt;Mastering these functions is the key to writing concise, repeatable, and maintainable Terraform configurations. Thanks &lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/-dKsmU4Z1hM"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>devops</category>
      <category>awschallenge</category>
      <category>30dayschallenge</category>
    </item>
    <item>
      <title>Day 10: Mastering Terraform Expressions - Conditional Logic, Iteration, and Collection</title>
      <dc:creator>Tran Huynh An Duy (Andy)</dc:creator>
      <pubDate>Wed, 03 Dec 2025 18:35:39 +0000</pubDate>
      <link>https://dev.to/andylovecloud/day-10-mastering-terraform-expressions-conditional-logic-iteration-and-collection-4igd</link>
      <guid>https://dev.to/andylovecloud/day-10-mastering-terraform-expressions-conditional-logic-iteration-and-collection-4igd</guid>
      <description>&lt;p&gt;This is Day 10, and we are diving deep into Terraform Expressions. Expressions are a fundamental concept, providing powerful, concise syntax that helps avoid rewriting code repeatedly. While similar in function to what we will later cover in Terraform functions, expressions offer immediate utility in making your configurations dynamic and reusable.&lt;br&gt;
We focus today on three key expression types: Conditional, Dynamic Block, and Splat.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                ┌───────────────────────────────────────────┐
                │   Terraform Expressions                   │
                │   Simplify Configurations &amp;amp; Reduce Repetition │
                └───────────────────────────────────────────┘
                                │
   ┌────────────────────────────┼────────────────────────────┐
   │                            │                            │
   ▼                            ▼                            ▼
┌─────────────────┐      ┌────────────────────┐       ┌────────────────────┐
│ Conditional      │      │ Dynamic Blocks     │       │ Splat Expressions  │
│ Expressions      │      │ (nested iteration) │       │ (retrieve multiple │
│ (true/false logic)│     │                    │       │ attributes)        │
└─────────────────┘      └────────────────────┘       └────────────────────┘
   │                        │                            │
   ▼                        ▼                            ▼
┌─────────────────┐   ┌────────────────────┐       ┌────────────────────┐
│ Syntax:          │   │ Syntax:            │       │ Syntax:            │
│ condition ?      │   │ dynamic "block" {  │       │ resource.*.attr    │
│ true : false     │   │   for_each = list  │       │                    │
└─────────────────┘   │   content {...}     │       └────────────────────┘
                      └────────────────────┘
   │                        │                            │
   ▼                        ▼                            ▼
┌─────────────────┐   ┌────────────────────┐       ┌────────────────────┐
│ Example:         │   │ Example:           │       │ Example:           │
│ EC2 instance     │   │ Security Group     │       │ Collect EC2 IDs    │
│ type selection   │   │ ingress rules      │       │ into a list        │
└─────────────────┘   └────────────────────┘       └────────────────────┘


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  1. Conditional Expressions: True or False Logic
&lt;/h2&gt;

&lt;p&gt;A conditional expression is a simple, one-line piece of syntax that functions like an if/else statement found in most programming languages. It evaluates a condition and returns one of two specified values (the true value or the false value).&lt;/p&gt;

&lt;p&gt;The Structure:&lt;br&gt;
The conditional expression is always written as: condition ? true_value : false_value. The colon (:) separates the true value from the false value.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code Example (Environment Selection):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is most commonly used to set resource attributes based on environment variables. For instance, we can specify that if the environment is dev, a smaller instance type is used; otherwise, a larger one is selected.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "environment" {
  type    = string
  default = "staging" // Will select the false value
}

resource "aws_instance" "example" {
  // If var.environment == "dev" is TRUE, use T2 micro.
  // If FALSE (e.g., staging/prod), use T3 micro.
  instance_type = var.environment == "dev" ? "t2.micro" : "t3.micro"

  // ... other configuration ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As demonstrated in the demo, changing var.environment from dev to anything else (like staging or prod) immediately switches the instance type in the execution plan.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Dynamic Blocks: Nested Iteration
&lt;/h2&gt;

&lt;p&gt;Dynamic blocks are essential when you need to write a nested block with multiple values within a resource definition. They help avoid manual repetition of configuration. This is frequently used for managing repeated elements, such as ingress or egress rules within an AWS Security Group.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Structure and Iteration:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A dynamic block starts with the dynamic keyword, followed by a name (e.g., ingress). Inside, it uses a for_each argument to iterate over a collection (often a complex list of objects, like var.ingress_rules). The resource definitions must be placed within a content block.&lt;br&gt;
When iterating, you access the attributes of the current element using the dynamic block name (e.g., ingress) and the special iterator object (value): ingress.value.from_port.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code Example (Security Group Rules)&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;If we define two ingress rules (one for HTTP on port 80, one for HTTPS on port 443) in a list of objects variable, the dynamic block iterates through the list:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Example Variable (List of Objects containing port, protocol, etc.) [8, 14]

resource "aws_security_group" "ingress_rule" {
  // ... other attributes ...

  dynamic "ingress" {
    for_each = var.ingress_rules // Iterates through the list of rules [11]

    content {
      from_port   = ingress.value.from_port [12] 
      to_port     = ingress.value.to_port [12]
      protocol    = ingress.value.protocol [12]
      cidr_blocks = ingress.value.cidr_blocks [12]
    }
  }
}
// This single block generates multiple ingress rules in the plan [15].
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Splat Expressions: Retrieving Multiple Attributes
&lt;/h2&gt;

&lt;p&gt;The splat expression is another powerful one-liner used to retrieve a list of attributes from a set of resources. This is most useful when dealing with resources that were created using the count meta-argument, meaning they exist as a collection.&lt;br&gt;
The splat expression uses the star operator (*).&lt;br&gt;
The Structure:&lt;br&gt;
The expression is written as: resource_list.*.attribute.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code Example (Collecting Instance IDs)&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;If we use count = 2 to create two EC2 instances (aws_instance.example), the splat expression retrieves the ID attribute for both instances and collects them into a list:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  // Collects the 'id' attribute from all instances named 'example'
  all_instance_ids = aws_instance.example.*.id [16-18]
}

output "instances" {
  value = local.all_instance_ids [19]
  // Value will be known after apply [19].
}
This efficiently creates a list containing [id_of_instance_0, id_of_instance_1].

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;Mastering these expressions allows you to write truly dynamic and concise Infrastructure as Code. Make sure you complete the hands-on practice in the Day 10 GitHub repository from &lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt; to solidify your learning.&lt;br&gt;


  &lt;iframe src="https://www.youtube.com/embed/R4ShnFDJwI8"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
      <category>30dayschallenge</category>
    </item>
    <item>
      <title>Day 09 - Mastering Terraform Lifecycle Rules for Secure and Controlled Deployments</title>
      <dc:creator>Tran Huynh An Duy (Andy)</dc:creator>
      <pubDate>Wed, 03 Dec 2025 17:34:18 +0000</pubDate>
      <link>https://dev.to/andylovecloud/day-09-mastering-terraform-lifecycle-rules-for-secure-and-controlled-deployments-37kb</link>
      <guid>https://dev.to/andylovecloud/day-09-mastering-terraform-lifecycle-rules-for-secure-and-controlled-deployments-37kb</guid>
      <description>&lt;p&gt;Today, we are tackling a foundational yet powerful topic: Terraform Lifecycle Rules. These are meta-arguments provided by Terraform that do not configure the resource itself (like setting an AMI ID), but instead control how the resource behaves when it is created, destroyed, or updated.&lt;br&gt;
Lifecycle rules are essential tools that improve security, enhance infrastructure manageability, and prevent accidental resource deletions or modifications.&lt;br&gt;
Here is a breakdown of the key lifecycle rules and how they empower your infrastructure code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                ┌───────────────────────────────────────────┐
                │   Lifecycle Rules in Terraform            │
                │   Improve Security &amp;amp; Manageability        │
                │   Prevent Accidental Deletions            │
                └───────────────────────────────────────────┘
                                │
   ┌────────────────────────────┼────────────────────────────┐
   │                            │                            │
   ▼                            ▼                            ▼
┌──────────────────┐      ┌──────────────────┐         ┌──────────────────┐
│ Controlling      │      │ Managing         │         │ Enforcing        │
│ Destruction &amp;amp;    │      │ External         │         │ Dependencies     │
│ Downtime         │      │ Changes          │         │ (replace_...)    │
└──────────────────┘      └──────────────────┘         └──────────────────┘
   │                         │                            │
   ▼                         ▼                            ▼
┌──────────────────┐      ┌──────────────────┐         ┌──────────────────┐
│ prevent_destroy  │      │ ignore_changes   │         │ replace_triggered│
│ (no accidental   │      │ (allow ops       │         │ (force EC2       │
│ deletions)       │      │ updates)         │         │ replacement)     │
└──────────────────┘      └──────────────────┘         └──────────────────┘
   │
   ▼
┌──────────────────────────────┐
│ create_before_destroy        │
│ (minimize downtime by        │
│ provisioning new before old) │
└──────────────────────────────┘

                                ▼
                      ┌──────────────────────────────┐
                      │ Validations                  │
                      │ precondition / postcondition │
                      │ (check assumptions &amp;amp; enforce │
                      │ compliance after creation)   │
                      └──────────────────────────────┘

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  1. Controlling Destruction and Downtime
&lt;/h2&gt;

&lt;p&gt;Terraform offers precise control over when and how a resource is terminated, preventing costly mistakes and minimizing service interruptions.&lt;/p&gt;

&lt;h3&gt;
  
  
  A. Preventing Accidental Deletion (prevent_destroy)
&lt;/h3&gt;

&lt;p&gt;Imagine managing a critical S3 bucket or database instance using Terraform. If someone accidentally runs terraform destroy, or if a configuration change implicitly requires the resource to be deleted, you want a safety net.&lt;br&gt;
The prevent_destroy = true rule blocks any deletion of the associated resource. The destruction will fail unless the rule is explicitly updated back to false.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Code Example: Protecting a Critical Asset
resource "aws_s3_bucket" "my_critical_bucket" {
  bucket = "my-audit-logs-bucket"

  lifecycle {
    prevent_destroy = true 
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  B. Minimizing Downtime (create_before_destroy)
&lt;/h3&gt;

&lt;p&gt;When you update certain properties of a resource (like changing the AMI ID of an EC2 instance), Terraform often must destroy the old resource and create a new one to apply the change.&lt;br&gt;
By default, the old resource might be destroyed first, leading to downtime. Setting create_before_destroy = true reverses this order: the new resource is created and fully provisioned before the previous one is destroyed. This ensures minimal downtime for your application.&lt;br&gt;
If this rule is set to false, and the old resource is destroyed first, users will experience downtime. Furthermore, if the attempt to create the new resource fails (due to a bad request or unauthorized image, as shown in the demo), the existing service is lost without a replacement.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Code Example: Ensuring Continuity
resource "aws_instance" "example" {
  ami           = var.ami_id
  instance_type = var.instance_type

  lifecycle {
    create_before_destroy = true // New resource created before old one destroyed
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Managing External Changes (ignore_changes)
&lt;/h2&gt;

&lt;p&gt;Often, specific resource attributes might need to be modified by external processes (e.g., auto-scaling mechanisms or manual operational changes) without Terraform constantly trying to revert those external changes.&lt;br&gt;
The ignore_changes rule allows you to specify attributes that Terraform should ignore during subsequent planning and applying cycles.&lt;br&gt;
A common example is an Auto Scaling Group (ASG). If the desired_capacity is set to 2 in Terraform, but an operations team manually scales it to 1, Terraform would normally try to revert it back to 2. By using ignore_changes, Terraform knows not to revert external updates to that specific field.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Code Example: Allowing External Updates
resource "aws_autoscaling_group" "asg" {
  desired_capacity = 2 
  // ... other configuration ...

  lifecycle {
    ignore_changes = [desired_capacity]
  }
}
// Terraform will not attempt to revert desired_capacity changes made outside its configuration [10].
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Enforcing Dependencies (replace_triggered_by)
&lt;/h2&gt;

&lt;p&gt;While Terraform handles implicit dependencies (when one resource uses an output of another), sometimes you need to enforce that a change in Resource A forces a replacement of Resource B, even if Resource B doesn't directly reference Resource A.&lt;br&gt;
The replace_triggered_by rule creates this explicit forced dependency. For instance, if you make a change to a Security Group (SG) rule, you might want your associated EC2 instance to be replaced with a new one that inherits the updated SG configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Code Example: Forcing Recreation
resource "aws_instance" "ec2_instance" {
  // ... configuration ...
  lifecycle {
    replace_triggered_by = [aws_security_group.app_sg.id] 
  }
}
// Any change to the app_sg security group will trigger the replacement of this EC2 instance [11].
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Validations (precondition and postcondition)
&lt;/h2&gt;

&lt;p&gt;Finally, lifecycle rules offer validation checks before or after a resource operation.&lt;br&gt;
• &lt;strong&gt;Precondition&lt;/strong&gt;: Validates assumptions before a resource is created (e.g., checking if the region specified for deployment is allowed).&lt;br&gt;
• &lt;strong&gt;Postcondition&lt;/strong&gt;: Validates resource attributes after it has been created (e.g., ensuring a newly created S3 bucket has a mandatory compliance tag for audit purposes). If the condition fails, an error message is thrown, blocking execution.&lt;/p&gt;
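<p>As a hedged sketch of both checks (the variable names, the <code>allowed_regions</code> list, and the <code>compliance</code> tag key are illustrative, not from the lesson):<br>
</p>

<div class="highlight js-code-highlight">
<pre class="highlight plaintext"><code>resource "aws_s3_bucket" "audit" {
  bucket = "my-audit-logs-bucket"
  tags   = { compliance = "audit" }

  lifecycle {
    precondition {
      // Validate an assumption BEFORE the resource is created
      condition     = contains(var.allowed_regions, var.region)
      error_message = "Deployment region is not in the allowed list."
    }
    postcondition {
      // Validate the resource's attributes AFTER it has been created
      condition     = contains(keys(self.tags), "compliance")
      error_message = "Bucket is missing the mandatory compliance tag."
    }
  }
}
</code></pre>

</div>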



&lt;p&gt;Mastering these rules provides a layer of operational security and control that goes beyond basic resource definition. &lt;br&gt;
Watch the video "Day 9 - AWS Terraform Lifecycle Rules Explained" from &lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt;: &lt;br&gt;


  &lt;iframe src="https://www.youtube.com/embed/60tOSwpvldY"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>terraform</category>
      <category>30dayschallenge</category>
    </item>
    <item>
      <title>Day 8: Mastering Terraform Meta Arguments - The Power of Control and Iteration</title>
      <dc:creator>Tran Huynh An Duy (Andy)</dc:creator>
      <pubDate>Tue, 02 Dec 2025 12:51:06 +0000</pubDate>
      <link>https://dev.to/andylovecloud/day-8-mastering-terraform-meta-arguments-the-power-of-control-and-iteration-467h</link>
      <guid>https://dev.to/andylovecloud/day-8-mastering-terraform-meta-arguments-the-power-of-control-and-iteration-467h</guid>
      <description>&lt;p&gt;Welcome back to the 30 Days of AWS Terraform challenge! We are diving into a crucial topic today: &lt;strong&gt;Meta Arguments&lt;/strong&gt;. While standard resource arguments (like bucket or region) are provided by the specific provider (e.g., AWS), Meta Arguments are supplied by Terraform itself. They provide powerful ways to implement advanced logic, control deployment flow, and avoid writing external scripts.&lt;/p&gt;

&lt;p&gt;Today, we focus on three essential meta arguments: &lt;code&gt;depends_on&lt;/code&gt;, &lt;code&gt;count&lt;/code&gt;, and &lt;code&gt;for_each&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Controlling Sequence with depends_on
&lt;/h2&gt;

&lt;p&gt;When you define multiple resources in a Terraform file, how does Terraform know the order in which to provision them? While Terraform often detects implicit dependencies (e.g., if Resource B uses an output from Resource A), sometimes you need to enforce a specific creation order; this is known as an explicit dependency.&lt;br&gt;
The &lt;strong&gt;depends_on&lt;/strong&gt; meta argument establishes this explicit dependency. It ensures that Resource A is fully provisioned and healthy before Terraform proceeds to create Resource B.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu02b2vlfhcwfjatbo3t6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu02b2vlfhcwfjatbo3t6.png" alt="Controlling Sequence with depends_on"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scenario Flow: Suppose you need a second resource (bucket_two) to only be created after the primary resource (bucket_one) is fully operational:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Resource 1: Primary Bucket (Created First)
resource "aws_s3_bucket" "bucket_one" {
  // configuration details...
}

// Resource 2: Dependent Bucket (Waits for Resource 1)
resource "aws_s3_bucket" "bucket_two" {
  depends_on = [
    aws_s3_bucket.bucket_one
  ]
  // configuration details...
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As demonstrated in our hands-on session, Terraform executes the creation of &lt;code&gt;bucket_one&lt;/code&gt; first. Only once its creation is complete does it proceed with &lt;code&gt;bucket_two&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Iteration for Lists using count
&lt;/h2&gt;

&lt;p&gt;If you need to create multiple, identical resources such as 10 S3 buckets or 5 EC2 instances, you don't need to write the resource block 10 or 5 times. Instead, you use the count meta argument.&lt;br&gt;
count works well when iterating over a variable defined as a list. Lists are ordered collections accessed by a numerical index starting at zero.&lt;br&gt;
We can set count dynamically using the &lt;code&gt;length()&lt;/code&gt; function on a list variable. To reference each element during the iteration, we use &lt;code&gt;count.index&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7vf6ro43efc8fvfaljp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7vf6ro43efc8fvfaljp.png" alt=" Iteration for Lists using count"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Code Example (Creating Multiple Buckets from a List):
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "bucket_names" {
  type = list(string)
  default = ["my-unique-bucket-1", "my-unique-bucket-2"]
}

resource "aws_s3_bucket" "bucket_one" {
  // Sets count to 2, based on the list size
  count = length(var.bucket_names)

  // Iterates through the list using the index (0, 1, 2...)
  bucket = var.bucket_names[count.index]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  3. Iteration for Sets and Maps using for_each
&lt;/h2&gt;

&lt;p&gt;While count is excellent for lists, it won't work effectively with sets or maps because they lack fixed indexes. For these scenarios, we use &lt;code&gt;for_each&lt;/code&gt;, which acts like a specialized for loop.&lt;br&gt;
&lt;code&gt;for_each&lt;/code&gt; iterates through the elements of the set or map. When using &lt;code&gt;for_each&lt;/code&gt;, we reference the elements using the special each object.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Sets&lt;/strong&gt;: In a set of strings, there is no key/value distinction. Therefore, each.key and each.value both refer to the content of the element.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Maps&lt;/strong&gt;: Maps are defined by distinct key-value pairs (like JSON syntax). When iterating a map, each.key accesses the map key (e.g., name), and each.value accesses the corresponding map value (e.g., Piyush).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dnlp68olyt4t226n7ph.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dnlp68olyt4t226n7ph.png" alt="Iteration for Sets and Maps "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Code Example (Creating Buckets from a Set):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "bucket_name_set" {
  type = set(string)
  default = ["set-bucket-a", "set-bucket-b"]
}

resource "aws_s3_bucket" "bucket_two" {
  // Iterates through every element in the set
  for_each = var.bucket_name_set

  // In a set, key and value are the same
  bucket = each.value 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Mastering these meta arguments allows you to write concise, reusable, and powerful Terraform configurations that efficiently manage dependency and repetition.&lt;/p&gt;




&lt;p&gt;Ready to solidify your understanding? Make sure to complete the practical exercises and tasks available in the Day 8 GitHub repository from &lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/XMMsnkovNX4"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>aws</category>
      <category>tutorial</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Day 7: Mastering Terraform Type Constraints - The Secret to Clean Variables</title>
      <dc:creator>Tran Huynh An Duy (Andy)</dc:creator>
      <pubDate>Sun, 30 Nov 2025 19:47:08 +0000</pubDate>
      <link>https://dev.to/andylovecloud/day-7-mastering-terraform-type-constraints-the-secret-to-clean-variables-39n9</link>
      <guid>https://dev.to/andylovecloud/day-7-mastering-terraform-type-constraints-the-secret-to-clean-variables-39n9</guid>
      <description>&lt;p&gt;On the &lt;a href="https://dev.to/andylovecloud/day-06-organizing-your-infrastructure-as-code-for-your-project-4nko"&gt;day 06&lt;/a&gt;, we optimized our workflow by structuring our root module into separate files like &lt;code&gt;main.tf&lt;/code&gt; and &lt;code&gt;variables.tf&lt;/code&gt;. Today, we dive deeper into the world of variables, focusing specifically on &lt;strong&gt;Type Constraints&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Type constraints define the type of value that a variable can store. Understanding these types is crucial for creating robust and predictable infrastructure as code configurations. Variables are broadly categorized as either primitive (simple) or complex (multi-value).&lt;/p&gt;

&lt;h2&gt;
  
  
  The Primitives: Simple and Straightforward
&lt;/h2&gt;

&lt;p&gt;Primitive types are the standard building blocks you use every day:&lt;br&gt;
&lt;strong&gt;1. String&lt;/strong&gt;: Used for text, strings must be enclosed in double quotes (e.g., name = "Piyush").&lt;br&gt;
&lt;strong&gt;2. Number&lt;/strong&gt;: Used for numerical values (e.g., setting the count argument for an EC2 instance).&lt;br&gt;
&lt;strong&gt;3. Boolean (Bool)&lt;/strong&gt;: Represents truth values (true or false). Arguments like monitoring in an EC2 instance configuration often expect a boolean value.&lt;br&gt;
If you do not explicitly set a type constraint, Terraform defaults the variable to the special type any.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg92jxnrflyegsdg4ozgh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg92jxnrflyegsdg4ozgh.png" alt="Primitives"&gt;&lt;/a&gt;&lt;/p&gt;
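<p>As a quick sketch (the variable names and default values here are our own illustrations), the three primitive types are declared like this:<br>
</p>

<div class="highlight js-code-highlight">
<pre class="highlight plaintext"><code>variable "name" {
  type    = string   // text, always in double quotes
  default = "Piyush"
}

variable "instance_count" {
  type    = number   // numeric values, e.g. for count
  default = 2
}

variable "monitoring" {
  type    = bool     // true or false
  default = true
}
</code></pre>

</div>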
&lt;h2&gt;
  
  
  Navigating Complex Types: Storing Multiple Values
&lt;/h2&gt;

&lt;p&gt;Complex types are designed to store multiple values in a single variable. These require careful handling, as they are accessed either by index (position) or by key (name).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filo5q6o63mh0aamrlv7m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filo5q6o63mh0aamrlv7m.png" alt="Navigating Complex Types"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Lists and Sets
&lt;/h3&gt;

&lt;p&gt;Both Lists and Sets store a sequence of values, all of the same data type (e.g., list(string) or set(string)).&lt;br&gt;
• &lt;strong&gt;List&lt;/strong&gt;: An ordered collection where the sequence is fixed. Lists are accessed by their index, starting at zero. Critically, lists allow duplicate values.&lt;br&gt;
• &lt;strong&gt;Set&lt;/strong&gt;: An unordered collection that does not allow duplicate values. Because the order is not fixed, sets cannot be accessed directly by their index. If you need to access a specific element in a set, you must first convert it to a list using a function like &lt;code&gt;tolist()&lt;/code&gt;.&lt;/p&gt;
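<p>A minimal sketch of the difference (the names and values are illustrative):<br>
</p>

<div class="highlight js-code-highlight">
<pre class="highlight plaintext"><code>variable "azs" {
  type    = list(string)
  default = ["us-east-1a", "us-east-1a", "us-east-1b"] // ordered, duplicates allowed
}

variable "team_names" {
  type    = set(string)
  default = ["dev", "ops"] // unordered, no duplicates
}

locals {
  first_az   = var.azs[0]                // lists: access by index, starting at zero
  first_team = tolist(var.team_names)[0] // sets: convert to a list before indexing
}
</code></pre>

</div>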
&lt;h3&gt;
  
  
  2. Maps and Objects
&lt;/h3&gt;

&lt;p&gt;Maps and Objects handle data as key-value pairs.&lt;br&gt;
• &lt;strong&gt;Map&lt;/strong&gt;: A collection of key-value pairs where all values must share the same data type, such as map(string). Maps are ideal for defining resource tags (e.g., tags = { environment = "dev" }). Maps are accessed using the key (e.g., var.tags.environment).&lt;br&gt;
• &lt;strong&gt;Object&lt;/strong&gt;: A more flexible structure than a map. Objects allow you to define key-value pairs where each key can have a different data type (e.g., region = string, instance_count = number). This is useful for grouping related configuration metadata. Objects are accessed via the key, just like a map (e.g., var.config.region).&lt;/p&gt;
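<p>Sketching both side by side (the keys and values are illustrative):<br>
</p>

<div class="highlight js-code-highlight">
<pre class="highlight plaintext"><code>variable "tags" {
  type    = map(string)              // all values share one data type
  default = { environment = "dev" }
}

variable "config" {
  type = object({                    // each key can have its own data type
    region         = string
    instance_count = number
  })
  default = {
    region         = "us-east-1"
    instance_count = 2
  }
}

// Both are accessed by key:
// var.tags.environment yields "dev"
// var.config.region    yields "us-east-1"
</code></pre>

</div>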
&lt;h3&gt;
  
  
  3. Tuples
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;Tuple&lt;/strong&gt; is an ordered sequence designed to hold multiple different data types (e.g., a number, a string, and another number). The sequence of elements matters, as the values provided must match the data types defined by their position. Like lists, tuples are accessed by index.&lt;br&gt;
By correctly specifying these type constraints, you not only improve readability but also ensure Terraform expects and receives the necessary data format for your infrastructure configuration.&lt;/p&gt;
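<p>The tuple described above can be sketched like this (the element values are illustrative):<br>
</p>

<div class="highlight js-code-highlight">
<pre class="highlight plaintext"><code>variable "mixed" {
  type    = tuple([number, string, bool])
  default = [1, "web", true] // each position must match its declared type
}

locals {
  server_role = var.mixed[1] // accessed by index, like a list: "web"
}
</code></pre>

</div>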



&lt;p&gt;Ready to put this into practice? Make sure you complete the hands-on exercise for Day 7 from &lt;a class="mentioned-user" href="https://dev.to/piyushsachdeva"&gt;@piyushsachdeva&lt;/a&gt; to solidify your understanding of how to access and manage these complex variables!&lt;br&gt;


  &lt;iframe src="https://www.youtube.com/embed/gu2oCJ9DQiQ"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>terraform</category>
      <category>30dayschallenge</category>
    </item>
  </channel>
</rss>
