<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yusuf Giwa</title>
    <description>The latest articles on DEV Community by Yusuf Giwa (@speedygamer12).</description>
    <link>https://dev.to/speedygamer12</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F848977%2F1b00b558-46da-4be0-84f6-6d9f96967935.jpg</url>
      <title>DEV Community: Yusuf Giwa</title>
      <link>https://dev.to/speedygamer12</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/speedygamer12"/>
    <language>en</language>
    <item>
      <title>The engine, then the Pilot</title>
      <dc:creator>Yusuf Giwa</dc:creator>
      <pubDate>Thu, 07 Jul 2022 22:59:24 +0000</pubDate>
      <link>https://dev.to/speedygamer12/the-engine-then-the-pilot-4nda</link>
      <guid>https://dev.to/speedygamer12/the-engine-then-the-pilot-4nda</guid>
      <description>&lt;p&gt;In a monolithic architecture, all the layers of our applications are built into one single artifact. In this case, each service offered by the application is essentially a function call away from another service. This facilitates the networking and communications between the layers of the applications. However, things can get rocky when it comes scaling a part of the application, it is essentially the same cost as scaling the whole application. It becomes difficult to upgrade a layer of the application without causing failure in another unrelated layer.&lt;br&gt;
Micro-service gives us the ability to re-architect our applications into different services such that they can be built, deployed and scaled independently. The services can communicate using an Application Programming Interface, API. &lt;/p&gt;

&lt;p&gt;To achieve a microservice architecture, we can use container images as our artifact.&lt;br&gt;
Container images are convenient artifacts for software development. We specify all the dependencies needed to execute code inside a container in a text file called a Dockerfile; building this file creates our Docker image. The Docker image is a largely static file that can be deployed on any host without changes.&lt;br&gt;
Container images readily serve as the engine driving each service, and they accelerate our ability to build applications. &lt;/p&gt;
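&lt;p&gt;As a minimal sketch, a Dockerfile for a hypothetical Node.js service (the file names &lt;code&gt;package.json&lt;/code&gt; and &lt;code&gt;index.js&lt;/code&gt; and the port are assumptions for illustration) might look like this:&lt;/p&gt;

```dockerfile
# Base image: everything below is layered on top of it
FROM node:18-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package.json .
RUN npm install

# Copy the application code and declare how to start it
COPY index.js .
EXPOSE 3000
CMD ["node", "index.js"]
```

&lt;p&gt;Running &lt;code&gt;docker build -t my-service .&lt;/code&gt; turns this file into an image that can then be deployed on any host.&lt;/p&gt;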

&lt;p&gt;Finally, we need a way to deploy, scale and update these containers. This is where Kubernetes comes in.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftspvls9enc76dobsqm9l.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftspvls9enc76dobsqm9l.jpeg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes is an orchestration system for containers. We can create a Kubernetes cluster to run our containers, deploy an existing app to the cluster, expose application ports, scale applications and update them. &lt;br&gt;
Kubernetes helps us get our applications in front of our customers with maximum up-time using declarative configuration and automation. All we need to do is describe the desired state of the system using objects like pods and services in a YAML file. A Kubernetes controller observes the current state of the cluster and brings the system to the specified state. We can do all of this from the terminal using kubectl, the Kubernetes command-line tool.&lt;br&gt;
Kubernetes is like a pilot for our workload, while we set the destination.&lt;/p&gt;
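&lt;p&gt;As an illustrative sketch (the name &lt;code&gt;my-service&lt;/code&gt; and the image tag are assumptions), the desired state of a small deployment could be described like this:&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3                 # desired state: three copies of the pod
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: my-service:1.0
          ports:
            - containerPort: 3000
```

&lt;p&gt;Applying it with &lt;code&gt;kubectl apply -f deployment.yaml&lt;/code&gt; hands the desired state to the controllers, which then work to make the cluster match it.&lt;/p&gt;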

&lt;p&gt;Now that we are convinced about containers and container orchestration, the next step is to have fun building Docker images and managing container orchestration with services like Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS). We can also venture into extending the kubectl binary with plugins for authentication. Finally, we can establish fine-grained access on our clusters by using self-managed options like kOps and kubeadm.&lt;br&gt;
If all of this excites you, you should read this Google article on different generations of containers &lt;a href="https://queue.acm.org/detail.cfm?id=2898444" rel="noopener noreferrer"&gt;here&lt;/a&gt;. You might also want to check &lt;a href="https://dev.to/speedygamer12/i-explained-googles-lesson-on-their-container-management-systems-so-you-can-have-fun-reading-43lj"&gt;this&lt;/a&gt; out, where I explain some technical terms from the Google article. &lt;/p&gt;

&lt;p&gt;In our microservice architecture, the container image serves as our engine, while Kubernetes is the pilot. Happy learning!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>kubernetes</category>
      <category>microservices</category>
    </item>
    <item>
      <title>I explained googles lesson on their container management systems so you can have fun reading - 1</title>
      <dc:creator>Yusuf Giwa</dc:creator>
      <pubDate>Thu, 30 Jun 2022 22:55:57 +0000</pubDate>
      <link>https://dev.to/speedygamer12/i-explained-googles-lesson-on-their-container-management-systems-so-you-can-have-fun-reading-43lj</link>
      <guid>https://dev.to/speedygamer12/i-explained-googles-lesson-on-their-container-management-systems-so-you-can-have-fun-reading-43lj</guid>
      <description>&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt; : These are majorly notes and summaries i made while reading &lt;a href="https://queue.acm.org/detail.cfm?id=2898444"&gt;this article&lt;/a&gt; - "Borg, Omega, and Kubernetes: Lessons learned from three container-management systems over a decade".&lt;/p&gt;

&lt;p&gt;I strongly recommend that everyone using container systems give it a read.&lt;/p&gt;

&lt;p&gt;The FreeBSD (Berkeley Software Distribution) jail mechanism is an implementation of OS-level virtualisation which allows us to partition a FreeBSD-derived computer into independent mini-systems sharing the same kernel with little overhead.&lt;/p&gt;

&lt;p&gt;DRAM (Dynamic Random Access Memory) is a type of semiconductor memory used for the program code and data a computer needs to function. DRAM lets the computer access large amounts of information quickly.&lt;/p&gt;

&lt;p&gt;The L3 cache exists to back up the L1 and L2 caches and improve their performance. In multi-core processors the L3 cache is usually shared among cores, while L1 and L2 are dedicated to each core.&lt;/p&gt;

&lt;p&gt;Containerization started with chroot (an OS-level virtualisation mechanism in the kernel) —&amp;gt; then FreeBSD jails extended the idea to other contexts, such as process ID namespaces —&amp;gt; then Solaris enhanced these features, which live on in Linux control groups today.&lt;/p&gt;

&lt;p&gt;0) &lt;strong&gt;Google's resource isolation improved utilization.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Borg uses containers to co-locate latency-sensitive, user-facing jobs and batch jobs on the same machines. User-facing jobs reserve more resources than they typically need so they can absorb spikes and fail-over; batch jobs can consume that spare capacity in the meantime.&lt;/p&gt;

&lt;p&gt;This isolation is not perfect, since containers cannot prevent interference in resources that the OS kernel does not manage, such as the L3 cache and memory bandwidth. Containers also need an additional layer of security in the cloud.&lt;/p&gt;

&lt;p&gt;Containers are now more than isolation: they include an image, which is the packaged make-up of the apps in the container. The Midas Package Manager (MPM) is used by Google to build and deploy images. The relationship between the isolation mechanism and MPM mirrors that between the Docker daemon and a container registry.&lt;/p&gt;

&lt;p&gt;1) &lt;strong&gt;A container is essentially the runtime isolation and the image.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Containerization transforms the data center from being machine-oriented to application-oriented. It abstracts the details of the machine and OS away from the app developer and the deployment infrastructure.&lt;/p&gt;

&lt;p&gt;The shift of the management API from machine-oriented to application-oriented improves introspection and deployment.&lt;/p&gt;

&lt;p&gt;Decoupling the image from the OS makes it possible to provide similar environments in development and production. &lt;/p&gt;

&lt;p&gt;2) &lt;strong&gt;To make this abstraction work, we need a container image that encapsulates all of an app's dependencies into a package that can be deployed into the container. That way, the only remaining local external dependency is on the Linux kernel system-call interface.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Linux kernel is the core interface between the computer's hardware and its processes, and system calls are how a program enters the kernel to request work from it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Combining 1 and 2, we can say that chroot, cgroups and namespaces provide the runtime isolation and improve resource utilization, while the container image isolates the app from the OS.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;3) &lt;strong&gt;This isolation is not perfect. Applications can still be exposed to churn in the OS via socket options, arguments to ioctl calls and /proc.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A socket is one endpoint of a two-way communication link between two programs running on a network. A socket is bound to a port number so that the TCP layer can identify the application to which data is destined.&lt;/p&gt;

&lt;p&gt;Endpoint = IP address + Port number.&lt;/p&gt;

&lt;p&gt;A server runs on a specific computer and has a socket bound to a port number. The server waits, listening on the socket for a client to make a request.&lt;br&gt;
The client tries to rendezvous with the server on that machine and port. To identify itself to the server, it binds to a local port number, assigned by the system, that it will use during this connection. On acceptance, the server gets a new socket bound to the same local port, with its remote endpoint set to the address and port of the client. This new socket serves the requests of the connected client, while the original socket keeps listening for new connections. The client and server then communicate by writing to and reading from their sockets.&lt;br&gt;
&lt;a href="https://docs.oracle.com/javase/tutorial/networking/sockets/definition.html"&gt;Read on sockets&lt;/a&gt;. &lt;br&gt;
The Docker daemon can listen for Docker Engine API requests via three socket types: unix, tcp and fd.&lt;/p&gt;
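&lt;p&gt;The rendezvous described above can be sketched in a few lines of Python, using only the standard library: a server socket bound to a port, a client that connects, and a new per-connection socket on the server side that echoes what the client writes.&lt;/p&gt;

```python
import socket
import threading

# Server: bind a socket to an endpoint (IP address + port number) and listen.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()  # endpoint = IP address + port number

def serve_once():
    conn, addr = server.accept()   # new socket dedicated to this client
    data = conn.recv(1024)
    conn.sendall(b"echo: " + data)
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# Client: the system assigns it a local port implicitly on connect.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
client.sendall(b"hello")
reply = client.recv(1024)
print(reply.decode())              # echo: hello
client.close()
t.join()
server.close()
```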

&lt;p&gt;The input/output control system call, ioctl(), manipulates the underlying device parameters of special files, e.g. the terminal. ioctl() receives the following arguments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;fd must be an open file descriptor. &lt;/li&gt;
&lt;li&gt;The second argument is a device-dependent request code. &lt;/li&gt;
&lt;li&gt;The third argument is an untyped pointer to memory.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;/proc is a virtual file-system created on the fly when the system boots and dissolved at shutdown. &lt;br&gt;
If you run: &lt;code&gt;stat /proc&lt;/code&gt; &lt;br&gt;
you will see it doesn’t have any size. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lXBioC9J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0lj8emgvexu7dcvvs6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lXBioC9J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0lj8emgvexu7dcvvs6u.png" alt="Image description" width="714" height="167"&gt;&lt;/a&gt; &lt;br&gt;
The same holds for all of its subdirectories.&lt;/p&gt;

&lt;p&gt;Despite the flaws mentioned in 3, isolation and dependency reduction have proved effective for Google: they run only containers, which means they maintain a small number of OS versions and need only a small staff to maintain them.&lt;/p&gt;

&lt;p&gt;There are many ways to build these images. In Borg, program binaries are statically linked at build time to known library versions available company-wide. This is still imperfect, because an upgrade to the base image can affect running applications: the image is installed once and shared by multiple programs.&lt;/p&gt;

&lt;p&gt;Binaries are compiled code: they allow programs to be installed without having to compile the source code.&lt;/p&gt;

&lt;p&gt;Libraries are collections of standard programs and subroutines (think macros?) that are available for use. &lt;/p&gt;

&lt;p&gt;Modern containers require explicit user commands to share image data between containers, which addresses the issue of implicitly shared dependencies.&lt;/p&gt;

&lt;p&gt;Remote Procedure Call (RPC) is a software communication protocol that one program can use to request a service from a program located on another computer on a network, without having to understand the network's details. In other words, it lets a program execute a procedure in a different address space without explicitly coding the details of the remote interaction.  &lt;/p&gt;

&lt;p&gt;The Borg Naming System (BNS) name for each task includes the cell name, job name and task number. This name is written into a highly available file in Chubby, which is then used by the RPC system to find task endpoints.&lt;/p&gt;

&lt;p&gt;Chubby is a highly available and persistent distributed lock service and file-system. It manages locks for resources and stores configuration information for various distributed services in Google's cluster environments. &lt;/p&gt;

&lt;p&gt;As a distributed lock service, Chubby provides reliable, low-volume storage for loosely coupled systems along with a coarse-grained locking mechanism.&lt;/p&gt;

&lt;p&gt;A Chubby cell contains five servers called replicas. The replicas elect a master using a distributed consensus protocol (Paxos).&lt;/p&gt;

&lt;p&gt;The master lease is the period during which replicas that have voted for a master promise not to elect another.&lt;/p&gt;

&lt;p&gt;Files and directories in the Chubby namespace are known as nodes. Every node has its own metadata.&lt;/p&gt;

&lt;p&gt;Handles act as file descriptors for opened nodes; a handle includes check digits, a sequence number and mode information.&lt;/p&gt;

&lt;p&gt;Chubby implements reader-writer locks on nodes. &lt;/p&gt;

&lt;p&gt;Distributed locking is complex and therefore costly. A lock's state can be described by a byte string called a sequencer, which other services can use to check that the lock is still held.&lt;/p&gt;

&lt;p&gt;Round-robin is an arrangement for choosing all elements in a group equally: they take turns in a logical order, such as top to bottom.&lt;/p&gt;
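&lt;p&gt;A round-robin rotation is easy to sketch in Python (the worker names here are made up for illustration):&lt;/p&gt;

```python
from itertools import cycle, islice

# Three workers take turns receiving requests, top to bottom, then repeat.
workers = ["worker-a", "worker-b", "worker-c"]
assignments = list(islice(cycle(workers), 7))  # route 7 requests
print(assignments)
```

&lt;p&gt;The seven requests land on worker-a, worker-b, worker-c, worker-a, worker-b, worker-c, worker-a.&lt;/p&gt;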

&lt;p&gt;Berkeley’s algorithm is a method of synchronizing clock values in a distributed system without an external reference clock. It assumes that no node has an accurate time source, so a coordinator polls the other nodes and averages their clock values.&lt;/p&gt;

&lt;p&gt;Happy Learning, I hope you have fun reading the paper.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>containers</category>
      <category>docker</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Ansible - Cool</title>
      <dc:creator>Yusuf Giwa</dc:creator>
      <pubDate>Tue, 21 Jun 2022 21:44:46 +0000</pubDate>
      <link>https://dev.to/speedygamer12/ansible-cool-5432</link>
      <guid>https://dev.to/speedygamer12/ansible-cool-5432</guid>
      <description>&lt;p&gt;We have decided that automating repetitive task is important in having consistent delivery of services. Configuring our newly provisioned servers is one of such tasks. However there are not much human-readable configuration solutions out there.&lt;/p&gt;

&lt;p&gt;In simple terms, Ansible is an open-source configuration tool for configuring servers after deployment. It completes automation tasks with a blueprint known as an Ansible playbook. A playbook is a YAML file (e.g. "ansible-main.yml") that executes tasks on specified machines. The tasks are split into a separate folder named "roles", with a sub-folder per role (titled with the role's name) containing a "main.yml" file. There is also an inventory file (e.g. "inventory.txt") listing the address of each server the playbook will run against. &lt;/p&gt;

&lt;p&gt;This is what a typical ansible directory looks like:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HYSwYGiU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6mgac9j3ztm225vqaayo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HYSwYGiU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6mgac9j3ztm225vqaayo.png" alt="Image description" width="849" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can run an Ansible playbook with the command:&lt;br&gt;
&lt;code&gt;ansible-playbook main.yml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1fY1hVC3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/84bpp7uh9p5sbvxknh75.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1fY1hVC3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/84bpp7uh9p5sbvxknh75.png" alt="Image description" width="566" height="320"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;This is the result of running a simple Ansible playbook on a local machine. &lt;br&gt;
The code can be found on GitHub &lt;a href="https://github.com/speedygamer12/blog-ansible"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Ansible is declarative in that its tasks describe the desired state of the computer, and it ensures the machine ends up in the state that was specified. However, it also lets us run tasks in a specific order, which makes it imperative; it is safe to say it mixes both. &lt;br&gt;
During setup, the Ansible runner gathers facts about the target node, so it knows the current state of files (everything in Unix is a file :)). While it runs, it therefore knows whether a specific file exists on the machine and whether it is already in the final state. This makes Ansible idempotent.&lt;/p&gt;

&lt;p&gt;Ansible executes tasks using its modules.&lt;br&gt;
Package management: &lt;br&gt;
Ansible has modules for different package managers to install, remove and upgrade packages on the target node. Some of these modules are apt, yum, dnf and zypper; the module names match their corresponding package managers. &lt;br&gt;
For example, we can use the apt module to install nodejs and npm on our machine. &lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
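&lt;p&gt;A task for this might look like the following sketch (the play name, host group and option choices are assumptions for illustration; the apt module options are from the Ansible docs):&lt;/p&gt;

```yaml
---
- name: Install Node.js and npm       # hypothetical playbook for illustration
  hosts: web
  become: yes                         # run the tasks with elevated privileges
  tasks:
    - name: Install nodejs
      apt:
        name: nodejs
        state: latest
        update_cache: yes             # refresh the apt cache first

    - name: Install npm
      apt:
        name: npm
        state: latest
```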


&lt;p&gt;Here we used the apt module to install the latest versions of nodejs and npm. The runner needs admin privileges on the target node to perform these operations; the directive “become: yes” lets us run such tasks with elevated privileges.&lt;/p&gt;

&lt;p&gt;Other popular modules on ansible are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;copy: copies a file from the local or remote machine to a location on the target remote machine.&lt;/li&gt;
&lt;li&gt;shell: executes shell commands on nodes.&lt;/li&gt;
&lt;li&gt;file: sets attributes of files on the target node.&lt;/li&gt;
&lt;li&gt;git: deploys software (or files) from git checkouts.
&lt;a href="https://docs.ansible.com/ansible/2.4/core_maintained.html"&gt;Here&lt;/a&gt; is a list of core Ansible modules.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ansible has all these cool features and can be used to configure more than just servers: it can configure any machine with an SSH interface or a callable API, such as load-balancers and other networking tools. Personally, I like to include my Ansible playbook as a job in a CircleCI pipeline to configure the infrastructure I have just deployed. Doing this ensures I get consistent configurations across as many servers as I want.&lt;/p&gt;

&lt;p&gt;I found this well written article &lt;a href="https://spacelift.io/blog/ansible-tutorial"&gt;Ansible Tutorial for Beginners: Ultimate Playbook &amp;amp; Examples&lt;/a&gt;. You should definitely give it a read for more in-depth examples on Ansible. &lt;/p&gt;

&lt;p&gt;The idea is to automate as much as we can, till we can boast of a perfect deployment pipeline. Happy Learning!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>webdev</category>
      <category>startup</category>
      <category>programming</category>
    </item>
    <item>
      <title>Overview of continuous Delivery</title>
      <dc:creator>Yusuf Giwa</dc:creator>
      <pubDate>Tue, 14 Jun 2022 22:39:42 +0000</pubDate>
      <link>https://dev.to/speedygamer12/overview-of-continuous-delivery-4jch</link>
      <guid>https://dev.to/speedygamer12/overview-of-continuous-delivery-4jch</guid>
      <description>&lt;p&gt;In a bid to deliver software solutions to costumers efficiently, companies are looking to inculcate techniques and practices that will improve the quality of their code and optimize the time and resources required for deployment. A popular approach these days is continuous delivery.&lt;/p&gt;

&lt;p&gt;Continuous delivery is an approach where value is delivered frequently through automated deployments. It is often seen as a philosophy that facilitates techniques letting organizations build, test, and prepare code changes for release to production rapidly and efficiently. These techniques can be split into continuous integration and continuous deployment, and continuous delivery practices can be translated directly into economic terms or metrics.&lt;/p&gt;

&lt;p&gt;Continuous integration is a practice where developers merge working copies into a shared mainline several times a day. This includes operations such as building artifacts, running tests and performing code analysis.&lt;br&gt;
Continuous deployment is a practice where teams produce and release value in short cycles. This includes operations such as deployment, verification and promotion to production.&lt;/p&gt;

&lt;p&gt;These are some good practices to build a continuous delivery pipeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fail fast: the quicker we get to an error, the fewer resources are wasted.&lt;/li&gt;
&lt;li&gt;Measure code quality: There should be metrics to measure the quality of code in our pipeline such as Lead time to production, and roll back rate.&lt;/li&gt;
&lt;li&gt;Only one road leads to production: all potential loopholes around the pipeline should be eliminated so that every change goes through it.&lt;/li&gt;
&lt;li&gt;Maximum automation: We should automate tasks as much as possible to reduce human error and increase speed of delivery.&lt;/li&gt;
&lt;/ul&gt;
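&lt;p&gt;The practices above can be sketched as a pipeline configuration. This is a hypothetical CircleCI config (the job names, images and deploy script are made up for illustration): cheap checks run first so we fail fast, and deployment is only reachable through them.&lt;/p&gt;

```yaml
version: 2.1
workflows:
  build-and-deploy:
    jobs:
      - lint-and-test                 # fail fast: cheap checks run first
      - deploy:
          requires: [lint-and-test]   # only one road leads to production
jobs:
  lint-and-test:
    docker:
      - image: cimg/node:16.20
    steps:
      - checkout
      - run: npm ci
      - run: npm test
  deploy:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run: ./scripts/deploy.sh      # hypothetical deploy script
```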

&lt;p&gt;It should be noted that these continuous delivery practices are not an absolute standard but a true north that helps the organisation improve steadily.&lt;br&gt;
CI/CD can be achieved with configuration tools like Ansible or Chef, pipeline services like CircleCI or Jenkins, and version control tools like Git.&lt;br&gt;
Happy learning!&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>startup</category>
      <category>circleci</category>
      <category>git</category>
    </item>
    <item>
      <title>Cloud Formation good practices</title>
      <dc:creator>Yusuf Giwa</dc:creator>
      <pubDate>Tue, 07 Jun 2022 22:11:57 +0000</pubDate>
      <link>https://dev.to/speedygamer12/cloud-formation-good-practices-2dio</link>
      <guid>https://dev.to/speedygamer12/cloud-formation-good-practices-2dio</guid>
      <description>&lt;p&gt;Using AWS Cloud formation templates to deploy our infrastructure such that it can be easily reproduced consistently and it can also be incorporated in an automation scripts is increasingly becoming the default. There are lots of positives to Infrastructure as a code. However, when deploying huge infrastructures, the templates can become unreadable or even lead to security leaks. In these article we will be looking at some practices we can incorporate in our templates to help us mitigate these issues and facilitate readability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using Parameters:&lt;/strong&gt;&lt;br&gt;
This means we will not be hard-coding values in the templates; declaring parameters facilitates reusability of our template. &lt;br&gt;
Parameters are declared at the beginning of the template, under the "Parameters" section, and referenced in the "Resources" section. Their values are stored in a parameter file, usually in JSON format, from which they are read during the creation or update of our infrastructure. &lt;br&gt;
Parameters can also be grouped based on resource types, i.e. server-group, network-group, etc., which aids readability.&lt;/p&gt;
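&lt;p&gt;As a small sketch (the parameter and resource names and the AMI id are made up), a parameterized template fragment might look like this:&lt;/p&gt;

```yaml
# template.yml (fragment)
Parameters:
  EnvironmentName:
    Type: String
    Description: Prefix for resource names
  InstanceTypeParam:
    Type: String
    Default: t3.micro

Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceTypeParam   # no hard-coded instance type
      ImageId: ami-00000000000000000         # placeholder AMI id
      Tags:
        - Key: Name
          Value: !Sub "${EnvironmentName}-web"
```

&lt;p&gt;with a parameter file such as &lt;code&gt;[{"ParameterKey": "EnvironmentName", "ParameterValue": "dev"}]&lt;/code&gt; passed at deploy time via &lt;code&gt;--parameters file://params.json&lt;/code&gt;.&lt;/p&gt;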

&lt;p&gt;&lt;strong&gt;Using Dynamic References:&lt;/strong&gt;&lt;br&gt;
It is usually good practice to store values in external services, for reasons such as version control and security.&lt;br&gt;
With dynamic references we can pull in external values, such as secrets from AWS Secrets Manager or plain-text values from the AWS Systems Manager Parameter Store. A maximum of 60 dynamic references can be used in each template. Dynamic references follow the pattern: &lt;br&gt;
&lt;code&gt;{{resolve:service-name:reference-key}}&lt;/code&gt;&lt;br&gt;
where the service names are ssm, ssm-secure and secretsmanager.&lt;br&gt;
&lt;em&gt;Note that it is not advisable to use dynamic references for values that are part of a resource's primary identifier, as this could leak the secret when the resource is referenced.&lt;/em&gt; &lt;br&gt;
You can read more on dynamic references in the &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html"&gt;AWS documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using Mappings and Conditions:&lt;/strong&gt;&lt;br&gt;
Many values depend on the region or on the pipeline stage, and it can be a hassle to get the right value for each region the template will be used in, or for each deployment stage such as development or production. The Mappings section of the template consists of key-value pair declarations, so that values can be looked up by referencing a key.&lt;/p&gt;

&lt;p&gt;The Conditions section of the template contains statements that control whether entities are created or how they are configured. The statements are attached to the relevant resources with the optional "Condition" property.&lt;/p&gt;

&lt;p&gt;Mappings and Conditions can also be combined: if a condition is satisfied, we can configure a resource by looking up a value with a key chosen by that condition. One condition can influence several resources in the template, since the same key name can appear in different mapping declarations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cross-stack Referencing&lt;/strong&gt;&lt;br&gt;
We might decide to create different templates for different resource groups or levels. These resources will still need to refer to each other; for example, we will need to reference the &lt;em&gt;VpcId&lt;/em&gt; created in our network stack when creating the security group of a server. This is where cross-stack referencing comes into play: we can reference a resource created in one stack from another stack.&lt;br&gt;
We export these values in the Outputs section of our template, and they can then be referenced using the intrinsic &lt;em&gt;Fn::ImportValue&lt;/em&gt; function. &lt;/p&gt;
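&lt;p&gt;A minimal sketch of the VpcId case above (the stack, resource and export names are made up): the network stack exports the VPC id, and the server stack imports it.&lt;/p&gt;

```yaml
# network-stack template (fragment): export the VPC id
Outputs:
  VpcId:
    Value: !Ref MainVPC                     # MainVPC defined in Resources
    Export:
      Name: network-stack-VpcId

# server-stack template (fragment): import it for a security group
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow web traffic
      VpcId: !ImportValue network-stack-VpcId
```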

&lt;p&gt;We have now discussed several CloudFormation tricks and practices. They are only useful when applied in the right contexts; otherwise we end up with redundant parts in our template that do not produce the desired result.&lt;br&gt;
I attempted to incorporate some of these practices &lt;a href="https://github.com/speedygamer12/Udacity-devops-highly-available-app-infrastructure/blob/main/README.md"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Happy learning and building.🛠️&lt;/p&gt;

</description>
      <category>iac</category>
      <category>devops</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
