<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rahimah Sulayman</title>
    <description>The latest articles on DEV Community by Rahimah Sulayman (@rahimah_dev).</description>
    <link>https://dev.to/rahimah_dev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3673037%2Fd084c1b1-6c80-43de-8b42-c48bd0cad2f8.png</url>
      <title>DEV Community: Rahimah Sulayman</title>
      <link>https://dev.to/rahimah_dev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rahimah_dev"/>
    <language>en</language>
    <item>
      <title>From Code to Clouds: Hosting a Professional Resume on GitHub Pages(2)</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Sat, 02 May 2026 22:55:35 +0000</pubDate>
      <link>https://dev.to/rahimah_dev/from-code-to-clouds-hosting-a-professional-resume-on-github-pages2-3poi</link>
      <guid>https://dev.to/rahimah_dev/from-code-to-clouds-hosting-a-professional-resume-on-github-pages2-3poi</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In &lt;em&gt;Part 1&lt;/em&gt;, we successfully moved the resume from a &lt;em&gt;local editor to a live URL&lt;/em&gt;. But an empty repository is like a house without a front door: functional, yet inaccessible to those looking in. In this second &lt;strong&gt;installment, we’re going back into the terminal to master the art of the README&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I’ll show you how to turn a folder of code into a polished, technical portfolio that speaks for itself&lt;/em&gt;. Whether you’re a Cloud aspirant or a DevOps enthusiast, your documentation is your handshake. Let’s make it count.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Navigating and Initializing the README&lt;/strong&gt;&lt;br&gt;
In the previous guide, we set up the environment. Now, we return to the terminal to add the "front door" of our project: the &lt;strong&gt;README.md file&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Changing Directory&lt;/strong&gt;: We start by using &lt;code&gt;cd ~/Desktop/website&lt;/code&gt; to ensure we are working inside the correct project folder.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating the File&lt;/strong&gt;: The &lt;code&gt;touch README.md&lt;/code&gt; command is used to generate a blank Markdown file. Using the &lt;code&gt;.md&lt;/code&gt; extension is crucial as it tells GitHub to render this as a formatted document rather than just plain text.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Opening the Editor&lt;/strong&gt;: To add content, we use the &lt;code&gt;vi README.md&lt;/code&gt; command. This opens the Vi text editor directly in the terminal; it's a lightweight, powerful tool that every &lt;strong&gt;Cloud and DevOps&lt;/strong&gt; professional should be comfortable using.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnadic6jz0012srjr0hr2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnadic6jz0012srjr0hr2.png" alt="location" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;
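
&lt;p&gt;The three commands above form a single repeatable sequence. Here is a minimal sketch that runs in a throwaway directory (a stand-in for the guide's &lt;code&gt;~/Desktop/website&lt;/code&gt; folder); since &lt;code&gt;vi&lt;/code&gt; is interactive, it is left as a comment:&lt;/p&gt;

```shell
# Sketch in a throwaway directory standing in for ~/Desktop/website
project=$(mktemp -d)
cd "$project"

# Generate the blank Markdown file; the .md extension tells GitHub
# to render it as a formatted document rather than plain text
touch README.md

ls -l README.md    # confirm the empty file now exists

# Next, open it to add content (interactive, so not run here):
#   vi README.md
```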

&lt;p&gt;&lt;strong&gt;Step 2: Inspecting and Verifying Content&lt;/strong&gt;&lt;br&gt;
Once you’ve finished editing in &lt;code&gt;Vi&lt;/code&gt;, it’s best practice to verify that your content was saved exactly as you intended.&lt;/p&gt;

&lt;p&gt;We use &lt;code&gt;cat README.md&lt;/code&gt; to display the entire contents of the file directly in the terminal. This &lt;em&gt;check&lt;/em&gt; ensures there are no typos or formatting errors in our Markdown syntax before we push.&lt;/p&gt;

&lt;p&gt;The output shows a structured, professional layout:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Project Title&lt;/em&gt;: Using # for a clear H1 heading.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Live Links&lt;/em&gt;: Providing immediate access to the hosted resume.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Visual Elements&lt;/em&gt;: Using emojis (🚀, 🛠️, 📖) to make the documentation modern and readable.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Technical Milestones&lt;/em&gt;: A numbered list summarizing the core achievements of the project.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Command Reference&lt;/em&gt;: Using backticks (`) for code formatting, creating a "cheat sheet" for anyone who forks the repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qbgmjt8vilvfw4av0f9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qbgmjt8vilvfw4av0f9.png" alt="readme" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;
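
&lt;p&gt;Putting those five elements together, here is a minimal skeleton of such a README written straight from the terminal. The title, placeholder URL, and wording below are illustrative, not the actual file from the screenshot:&lt;/p&gt;

```shell
# Build a skeleton README.md line by line; every value is a placeholder.
{
  echo '# Cloud Resume on GitHub Pages'
  echo
  echo '🚀 **Live Site:** https://YOUR-USERNAME.github.io/YOUR-REPO/'
  echo
  echo '## 🛠️ Technical Milestones'
  echo '1. Initialized a local Git repository'
  echo '2. Pushed the resume to GitHub'
  echo '3. Enabled GitHub Pages hosting'
  echo
  echo '## 📖 Command Reference'
  echo '- `git add README.md` stages the file'
  echo '- `git commit -m "message"` records the snapshot'
  echo '- `git push origin master` syncs to GitHub'
} > README.md

cat README.md   # verify the contents, as in Step 2
```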

&lt;p&gt;&lt;strong&gt;Step 3: The Version Control Lifecycle (Stage &amp;amp; Commit)&lt;/strong&gt;&lt;br&gt;
In this step, we move our new documentation through the Git lifecycle. It’s not enough to just create the file; we have to tell &lt;code&gt;Git&lt;/code&gt; to track it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;git status&lt;/strong&gt; (The Pulse Check): Before doing anything, I ran &lt;code&gt;git status&lt;/code&gt;. You can see &lt;code&gt;README.md&lt;/code&gt; highlighted in &lt;strong&gt;red&lt;/strong&gt; under &lt;strong&gt;Untracked files&lt;/strong&gt;. This confirms that &lt;code&gt;Git&lt;/code&gt; sees a new file but isn't yet managing it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;git add README.md (Staging)&lt;/strong&gt;: This command moves the file into the staging area. The terminal warning about &lt;em&gt;LF will be replaced by CRLF&lt;/em&gt; is a common occurrence in Git Bash on Windows; it’s just Git managing how line endings are handled between different operating systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;git commit (The Snapshot)&lt;/strong&gt;: By running &lt;code&gt;git commit -m "add README with project overview and commands"&lt;/code&gt;, we officially record the change. &lt;em&gt;The output "1 file changed, 30 insertions" serves as a receipt, confirming that our 30 lines of professional documentation are now part of the project's history&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjr032bwyk1qsupxmej6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjr032bwyk1qsupxmej6.png" alt="commit" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;
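
&lt;p&gt;The whole stage-and-commit cycle from Step 3 can be sketched end to end. This sketch runs in a throwaway repository with an inline identity so it works even on a freshly configured machine; in the real project you would already be inside your own repo:&lt;/p&gt;

```shell
# Throwaway repository so the sketch is self-contained
repo=$(mktemp -d)
cd "$repo"
git init -q

echo '# My Resume Project' > README.md

git status          # README.md shows in red under "Untracked files"
git add README.md   # stage the file (moves it into the staging area)
git status          # README.md now shows in green, ready to commit

# The -c flags supply an identity inline, in case git config was never run
git -c user.name='demo' -c user.email='demo@example.com' \
    commit -m 'add README with project overview and commands'

git log --oneline   # the commit is now part of the project history
```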

&lt;p&gt;&lt;strong&gt;Step 4: The Final Push (Syncing to the Cloud)&lt;/strong&gt;&lt;br&gt;
This is the bridge between our local work and our public profile.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;git push origin master&lt;/strong&gt;: This command tells &lt;code&gt;Git&lt;/code&gt; to take the committed changes &lt;strong&gt;(the new README)&lt;/strong&gt; and upload them to the &lt;strong&gt;origin (GitHub)&lt;/strong&gt; on the master branch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Enumerating objects: 4, done"&lt;/strong&gt;: Git is gathering the files you changed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Writing objects... 1.19 KiB"&lt;/strong&gt;: This confirms the actual data transfer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Confirmation&lt;/strong&gt;: The line &lt;code&gt;bb98894..e6d1410 master -&amp;gt; master&lt;/code&gt; is the technical "all clear." It shows your local branch is now perfectly synced with GitHub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk26tozpdwgti86ohhpt6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk26tozpdwgti86ohhpt6.png" alt="pushed" width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;
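
&lt;p&gt;If you want to rehearse the push before touching GitHub, you can point &lt;code&gt;origin&lt;/code&gt; at a local bare repository that stands in for the remote. This is a practice sketch only; in the real workflow, origin is your GitHub repository URL:&lt;/p&gt;

```shell
# A bare repository acting as a local stand-in for GitHub
remote=$(mktemp -d)
git init -q --bare "$remote"

# A working repository with one commit on master (git 2.28+ for -b)
work=$(mktemp -d)
cd "$work"
git init -q -b master
echo '# Resume' > README.md
git add README.md
git -c user.name='demo' -c user.email='demo@example.com' \
    commit -q -m 'add README'

# Wire up "origin" and push, exactly as in the real workflow; the
# output includes the same "Enumerating objects" and "Writing
# objects" lines described above
git remote add origin "$remote"
git push origin master
```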

&lt;p&gt;&lt;strong&gt;Step 5: Configuring the Global Gateway (GitHub Pages)&lt;/strong&gt;&lt;br&gt;
Once the code is safely in the cloud, you have to tell GitHub how to serve it to the public.&lt;/p&gt;

&lt;p&gt;Navigating to the &lt;strong&gt;Settings &amp;gt; Pages&lt;/strong&gt; tab is where the hosting magic happens.&lt;/p&gt;

&lt;p&gt;I set the source to &lt;strong&gt;Deploy from a branch&lt;/strong&gt; and selected the master branch and / (root) folder. This tells the &lt;code&gt;GitHub&lt;/code&gt; server to look for our &lt;code&gt;index.html&lt;/code&gt; file exactly where we saved it.&lt;/p&gt;

&lt;p&gt;Once saved, GitHub generates a custom URL (e.g., &lt;a href="https://rahimahisah17.github.io/Git-Basics101/" rel="noopener noreferrer"&gt;https://rahimahisah17.github.io/Git-Basics101/&lt;/a&gt;). &lt;br&gt;
Seeing that "Your site is live" message is the ultimate green light for a developer!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak0savv7fwx8t70qt5ml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak0savv7fwx8t70qt5ml.png" alt="git" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykqdiixmm6oj02yyzl1e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykqdiixmm6oj02yyzl1e.png" alt="git" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Documentation is the bridge between writing code and building a career. A project without a &lt;strong&gt;README&lt;/strong&gt; is just a folder; a project with a &lt;strong&gt;README&lt;/strong&gt; is a story of your technical journey.&lt;/p&gt;

</description>
      <category>github</category>
      <category>devops</category>
      <category>git</category>
      <category>documentation</category>
    </item>
    <item>
      <title>Kubernetes Building Blocks(1)</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Fri, 24 Apr 2026 16:01:47 +0000</pubDate>
      <link>https://dev.to/rahimah_dev/kubernetes-building-blocks1-815</link>
      <guid>https://dev.to/rahimah_dev/kubernetes-building-blocks1-815</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;If you liken Kubernetes to an ocean, those individual drops that make up the ocean are the core building blocks: &lt;code&gt;Namespaces&lt;/code&gt;, &lt;code&gt;Pods&lt;/code&gt;, &lt;code&gt;ReplicaSets&lt;/code&gt;, &lt;code&gt;Deployments&lt;/code&gt;, &lt;code&gt;Labels&lt;/code&gt;, etc.&lt;br&gt;
Our focus here will be on the first two mentioned.&lt;br&gt;
As a professional working across Data Analytics and Cloud Engineering, I’ve found that the best way to master these concepts isn't just by reading documentation, but by using the &lt;em&gt;Build, See, Destroy&lt;/em&gt; methodology. This approach allows you to experiment fearlessly, visualize the cluster's internal logic, and clean up after yourself.&lt;/p&gt;

&lt;p&gt;In this post, we are going to move from a &lt;strong&gt;blank slate&lt;/strong&gt; Minikube cluster to an orchestrated environment, exploring both the fast-paced command line and the bird’s-eye view of the Kubernetes Dashboard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Scenario&lt;/strong&gt;: The Isolated Web Fleet&lt;br&gt;
Imagine you are a DevOps Engineer tasked with deploying a fleet of Nginx web servers for a new project. However, the cluster is shared with other teams, so you can't just dump your resources into the default space.&lt;br&gt;
&lt;strong&gt;Tasks&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Carve out a Virtual Sandbox: Create a dedicated &lt;code&gt;Namespace&lt;/code&gt; to keep our project isolated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy the Fleet: Use both Imperative (quick CLI) and Declarative (&lt;code&gt;YAML&lt;/code&gt;/&lt;code&gt;JSON&lt;/code&gt;) methods to launch five Nginx pods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Inspect &amp;amp; Troubleshoot: Go under the hood to check &lt;code&gt;IP addresses&lt;/code&gt;, &lt;code&gt;node placements&lt;/code&gt;, and handle the inevitable errors that come with cluster management.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Visualize the Result: Launch the &lt;code&gt;Minikube Dashboard&lt;/code&gt; to confirm 100% health of our fleet.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Namespaces
&lt;/h2&gt;

&lt;p&gt;If multiple users and teams use the same Kubernetes cluster, we can partition the cluster into virtual sub-clusters using &lt;code&gt;Namespaces&lt;/code&gt;. The names of the resources/objects created inside a Namespace must be unique within it, &lt;em&gt;but not across Namespaces in the cluster&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Checking if the Cluster is Ready&lt;/strong&gt;&lt;br&gt;
Before we can start building with &lt;strong&gt;Namespaces&lt;/strong&gt; and &lt;strong&gt;Pods&lt;/strong&gt;, we need to ensure our local environment is up and running. Using &lt;code&gt;Minikube&lt;/code&gt;, we can quickly verify the health of our cluster by running &lt;strong&gt;minikube status&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To list all the Namespaces, we can run the following command:&lt;br&gt;
&lt;strong&gt;$ kubectl get namespaces&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadquiryfib94y5ao8cdr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadquiryfib94y5ao8cdr.png" alt="namespaces" width="800" height="331"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;A Namespace is to a Kubernetes cluster as a ResourceGroup is to an Azure Subscription&lt;/em&gt;. &lt;br&gt;
Generally, Kubernetes creates four Namespaces out of the box: &lt;code&gt;kube-system&lt;/code&gt;, &lt;code&gt;kube-public&lt;/code&gt;, &lt;code&gt;kube-node-lease&lt;/code&gt;, and &lt;code&gt;default&lt;/code&gt;. The &lt;code&gt;kube-system&lt;/code&gt; Namespace contains the objects created by the Kubernetes system, mostly the control plane agents. The &lt;code&gt;default&lt;/code&gt; Namespace contains the objects and resources created by administrators and developers, and objects are assigned to it by default unless another Namespace name is provided by the user. &lt;code&gt;kube-public&lt;/code&gt; is a special Namespace, which is unsecured and readable by anyone, used for special purposes such as exposing public (non-sensitive) information about the cluster. &lt;br&gt;
The newest Namespace is &lt;code&gt;kube-node-lease&lt;/code&gt;, which holds node lease objects used for node heartbeat data. &lt;br&gt;
&lt;em&gt;Good practice, however, is to create additional Namespaces, as desired, to virtualize the cluster and isolate users, developer teams, applications, or tiers.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Organizing Your Cluster by Creating Your Own Namespace&lt;/strong&gt;&lt;br&gt;
Think of Namespaces as virtual folders within your cluster. While Kubernetes provides a default namespace, &lt;em&gt;it’s best practice to create your own to keep your projects isolated and organized&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;In this step, I move from viewing the cluster to actively managing it by running the command:&lt;br&gt;
&lt;strong&gt;$ kubectl create namespace new-namespace-name&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl create namespace rahimah&lt;/strong&gt;&lt;br&gt;
This is an imperative command: I'm telling the Kubernetes API directly to create this resource right now. The confirmation &lt;code&gt;namespace/rahimah created&lt;/code&gt; is the cluster acknowledging the new logical boundary.&lt;/p&gt;

&lt;p&gt;Namespaces are one of the most desired features of Kubernetes, securing its lead against competitors, as they provide a solution to the &lt;strong&gt;multi-tenancy&lt;/strong&gt; requirement of today's enterprise development teams.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8dstr87aqacqvp5oodk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8dstr87aqacqvp5oodk.png" alt="createns" width="800" height="95"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Verifying our work is key, so run &lt;strong&gt;$ kubectl get namespaces&lt;/strong&gt; again. You can see the new namespace &lt;code&gt;rahimah&lt;/code&gt; is now &lt;code&gt;Active&lt;/code&gt; alongside the system-generated ones. It’s now a ready-to-use &lt;code&gt;sandbox&lt;/code&gt; where we can deploy our pods without cluttering the rest of the cluster.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Namespaces are great for multi-tenancy. If you are working in a team, giving each developer or project their own namespace prevents naming collisions, meaning you can have a pod named web-server in Namespace A and another web-server in Namespace B without any conflict!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofmwuuxsf7j61jzhl7q5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofmwuuxsf7j61jzhl7q5.png" alt="newnamespace" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;
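
&lt;p&gt;A quick sketch of that multi-tenancy idea, assuming a running Minikube cluster (the namespace names &lt;code&gt;team-a&lt;/code&gt; and &lt;code&gt;team-b&lt;/code&gt; are illustrative, not part of this walkthrough):&lt;/p&gt;

```shell
# Two isolated sandboxes inside the same cluster
kubectl create namespace team-a
kubectl create namespace team-b

# The same pod name can live in both without any conflict
kubectl run web-server --image=nginx:1.22.1 --namespace=team-a
kubectl run web-server --image=nginx:1.22.1 --namespace=team-b

# Each namespace lists only its own web-server
kubectl get pods --namespace=team-a
kubectl get pods --namespace=team-b
```

&lt;p&gt;Note that these commands require a live cluster, so run them only after &lt;strong&gt;minikube status&lt;/strong&gt; reports the cluster as healthy.&lt;/p&gt;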

&lt;h2&gt;
  
  
  Pods
&lt;/h2&gt;

&lt;p&gt;A &lt;code&gt;Pod&lt;/code&gt; is the &lt;strong&gt;smallest&lt;/strong&gt; Kubernetes workload object. It is the unit of deployment in Kubernetes, which represents a single instance of the application. A Pod is a &lt;strong&gt;logical collection&lt;/strong&gt; of one or more containers, enclosing and isolating them to ensure that they:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are scheduled together on the same host with the Pod.&lt;/li&gt;
&lt;li&gt;Share the same network namespace, meaning that they share a single IP address originally assigned to the Pod.&lt;/li&gt;
&lt;li&gt;Have access to mount the same external storage (volumes) and other common dependencies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below is an example of a stand-alone Pod object's definition manifest in &lt;code&gt;YAML&lt;/code&gt; format, without an operator. This represents the declarative method to define an object, and can serve as a template for a much more complex Pod definition manifest if desired:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls7lhyv0sfirdfpmf95i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls7lhyv0sfirdfpmf95i.png" alt="manifest" width="375" height="396"&gt;&lt;/a&gt;&lt;/p&gt;
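
&lt;p&gt;For readers following along in text form, the manifest in the screenshot can be reconstructed from the fields discussed in this post (name &lt;code&gt;nginx-pod&lt;/code&gt;, image &lt;code&gt;nginx:1.22.1&lt;/code&gt;, container port 80); treat this as a sketch and compare it against the image:&lt;/p&gt;

```yaml
# rahimah.yaml -- a stand-alone Pod definition (declarative method)
apiVersion: v1           # API version for the core Pod object
kind: Pod                # the object type being created
metadata:
  name: nginx-pod        # the resource's name
spec:
  containers:
  - name: nginx
    image: nginx:1.22.1  # pulled from Docker Hub
    ports:
    - containerPort: 80  # the port nginx listens on inside the container
```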

&lt;p&gt;&lt;strong&gt;Step 1: Moving to Declarative by Defining Your First Pod&lt;/strong&gt;&lt;br&gt;
While &lt;code&gt;kubectl run&lt;/code&gt; is great for quick tests, real-world Kubernetes relies on Manifests. These YAML files act as the &lt;em&gt;source of truth&lt;/em&gt; for your infrastructure, allowing you to version control and share your configurations easily.&lt;/p&gt;

&lt;p&gt;The Screenshot: Preparing the Manifest&lt;br&gt;
In this sequence, I’m setting up the workspace and defining the blueprint for our application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Setting the Scene&lt;/strong&gt;&lt;br&gt;
I created a dedicated directory &lt;code&gt;cluster&lt;/code&gt; and a new file &lt;code&gt;rahimah.yaml&lt;/code&gt; to keep the project organized.&lt;/p&gt;

&lt;p&gt;What the manifest contains:&lt;br&gt;
&lt;strong&gt;apiVersion &amp;amp; kind&lt;/strong&gt;: Tell Kubernetes which API version (&lt;code&gt;v1&lt;/code&gt;) and object type (&lt;code&gt;Pod&lt;/code&gt;) we are creating.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt;: This is where we name our resource (nginx-pod).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;spec&lt;/strong&gt;: This is the most critical part. It defines the desired state, specifically, that we want one container running the nginx:1.22.1 image on port 80.&lt;br&gt;
&lt;strong&gt;KAMS&lt;/strong&gt; is a handy mnemonic for the four required parts of a manifest: &lt;code&gt;kind&lt;/code&gt;, &lt;code&gt;apiVersion&lt;/code&gt;, &lt;code&gt;metadata&lt;/code&gt;, and &lt;code&gt;spec&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Tip: YAML is extremely sensitive to indentation! If you're off by even one space, the Kubernetes API will reject your file. I always recommend using a code editor with a YAML linting extension to catch these &lt;em&gt;invisible&lt;/em&gt; errors before you hit the terminal. For this walkthrough, I used the &lt;code&gt;vi&lt;/code&gt; editor.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;apiVersion&lt;/code&gt; field must specify &lt;code&gt;v1&lt;/code&gt; for the Pod object definition. The second required field is &lt;code&gt;kind&lt;/code&gt; specifying the Pod object type. The third required field &lt;code&gt;metadata&lt;/code&gt;, holds the object's name and optional labels and annotations. The fourth required field &lt;code&gt;spec&lt;/code&gt; marks the beginning of the block defining the desired state of the Pod object (also named the &lt;code&gt;PodSpec&lt;/code&gt;). Our Pod creates a single container running the nginx:1.22.1 image pulled from a &lt;strong&gt;container image registry&lt;/strong&gt;, in this case from &lt;code&gt;Docker Hub&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;The above definition manifest, if stored in a &lt;code&gt;rahimah.yaml&lt;/code&gt; file, is loaded into the cluster to run the desired &lt;code&gt;Pod&lt;/code&gt; and its associated container image.&lt;/p&gt;

&lt;p&gt;Before creating the pod, I'll save the YAML file in a directory, for organization, then open it with &lt;code&gt;vi&lt;/code&gt; for verification.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldddazinlyu5ikvnno6g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldddazinlyu5ikvnno6g.png" alt="directory" width="800" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dj79c686rwgqxmdxh6g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dj79c686rwgqxmdxh6g.png" alt="vi" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Reviewing the Manifest&lt;/strong&gt;&lt;br&gt;
Before we send our instructions to the &lt;code&gt;Kubernetes API&lt;/code&gt;, it’s always a good habit to &lt;em&gt;peek&lt;/em&gt; inside the file one last time. This ensures that our indentation is correct and that we are deploying the exact version of the image we intended.&lt;/p&gt;

&lt;p&gt;I inspect the file with the &lt;code&gt;cat&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdg0swm14eoxe94axl1x9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdg0swm14eoxe94axl1x9.png" alt="cat" width="800" height="306"&gt;&lt;/a&gt;&lt;br&gt;
This is important because in a real-world &lt;code&gt;DevOps&lt;/code&gt; environment, you might be managing dozens of &lt;code&gt;YAML&lt;/code&gt; files. Running &lt;code&gt;cat&lt;/code&gt; allows you to verify the &lt;strong&gt;metadata&lt;/strong&gt; and &lt;strong&gt;spec&lt;/strong&gt; fields without opening a full text editor.&lt;/p&gt;

&lt;p&gt;NOTE: If you're following along, notice the &lt;code&gt;containerPort: 80&lt;/code&gt;. This doesn't actually &lt;em&gt;open&lt;/em&gt; the port to the outside world yet (you'll need a Service for that later) but it tells Kubernetes which port the application inside the container is listening on.&lt;/p&gt;
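
&lt;p&gt;As a preview of that later step, here is a sketch of exposing the pod with a Service, assuming a running cluster (the NodePort type is just one illustrative choice, not part of this guide's tasks):&lt;/p&gt;

```shell
# containerPort: 80 only documents where nginx listens inside the pod;
# a Service is what actually routes traffic to it (covered later)
kubectl expose pod nginx-pod --port=80 --type=NodePort
kubectl get service nginx-pod   # shows the port assigned on the node
```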


&lt;p&gt;&lt;strong&gt;Step 4: From Blueprint to Reality&lt;/strong&gt;&lt;br&gt;
This is where we move from local files to actual running resources. In this step, I demonstrate both the &lt;code&gt;Declarative&lt;/code&gt; and &lt;code&gt;Imperative&lt;/code&gt; methods of pod creation.&lt;/p&gt;

&lt;p&gt;The screenshot shows the deployment of the &lt;code&gt;Nginx pods&lt;/code&gt;.&lt;br&gt;
In this image, I am populating the cluster with &lt;em&gt;three&lt;/em&gt; distinct pods using two different techniques.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl create -f rahimah.yaml&lt;/strong&gt;: This is the &lt;code&gt;Declarative&lt;/code&gt; approach. We are telling Kubernetes to &lt;em&gt;create whatever is defined in this file&lt;/em&gt;. It’s the professional standard for production environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl run nginx-pod2 --image=nginx:1.22.1 --port=80&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;$ kubectl run nginx-pod3 --image=nginx:1.22.1 --port=80&lt;/strong&gt;&lt;br&gt;
These are &lt;code&gt;Imperative&lt;/code&gt; commands: fast one-liners that are perfect for &lt;code&gt;Build, See, Destroy&lt;/code&gt; learning or quick debugging.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlg9msrdz0c3j84o3vvj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlg9msrdz0c3j84o3vvj.png" alt="3pods" width="800" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice the consistent feedback from the cluster: &lt;strong&gt;pod/nginx-pod created&lt;/strong&gt;. &lt;br&gt;
Whether you use a complex &lt;code&gt;JSON/YAML&lt;/code&gt; file or a simple &lt;code&gt;CLI&lt;/code&gt; command, the &lt;code&gt;Kubernetes API&lt;/code&gt; processes the request and schedules the workload onto a node.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Verifying the Workload&lt;/strong&gt;&lt;br&gt;
The most satisfying part of any Kubernetes project is seeing that &lt;strong&gt;Running&lt;/strong&gt; status. This is where we confirm that the cluster has successfully pulled the images, allocated resources, and started our containers.&lt;/p&gt;

&lt;p&gt;The screenshot shows the &lt;strong&gt;healthy&lt;/strong&gt; Pod fleet.&lt;br&gt;
In this final view, we run the ultimate "truth" command: &lt;strong&gt;kubectl get pods&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31qip00d8yhz1hoo0r4k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31qip00d8yhz1hoo0r4k.png" alt="getpods" width="800" height="128"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;READY 1/1&lt;/strong&gt;: This indicates that the container inside the pod has passed its readiness check and is prepared to handle traffic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STATUS Running&lt;/strong&gt;: This confirms the pod is active on a node. If you see &lt;code&gt;ContainerCreating&lt;/code&gt; or &lt;code&gt;ErrImagePull&lt;/code&gt; here, you know something went wrong during the startup process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RESTARTS 0&lt;/strong&gt;: A zero here is a sign of stability. It means your application hasn't crashed or entered a &lt;code&gt;CrashLoopBackOff&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;However, when in need of a starter definition manifest, knowing how to generate one can be a life-saver. The imperative command, with the additional key flags &lt;code&gt;--dry-run=client&lt;/code&gt; and the &lt;code&gt;yaml&lt;/code&gt; output, can generate the definition template instead of running the Pod; the template is then stored in the nginx-pod.yaml file. The following is a multi-line command that should be selected in its entirety for copy/paste (including the backslash character "\").&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;$ kubectl run nginx-pod --image=nginx:1.22.1 --port=80 \&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;--dry-run=client -o yaml &amp;gt; nginx-pod.yaml&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjoi8vnebvluyja74gmc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjoi8vnebvluyja74gmc.png" alt="yaml" width="800" height="46"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I ran the &lt;strong&gt;ls&lt;/strong&gt; command to confirm the file's presence in that location,&lt;br&gt;
then I created a pod from the &lt;code&gt;YAML&lt;/code&gt; file and ran &lt;strong&gt;get pods&lt;/strong&gt; to list the pods present in the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp543qevymnkmkwudeqka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp543qevymnkmkwudeqka.png" alt="pods" width="800" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The command above generates a definition manifest in &lt;code&gt;YAML&lt;/code&gt;, but we can generate a &lt;code&gt;JSON&lt;/code&gt; definition file just as easily with:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl run nginx-pod --image=nginx:1.22.1 --port=80 \&lt;br&gt;
--dry-run=client -o json &amp;gt; nginx-pod.json&lt;/strong&gt;&lt;/p&gt;
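
&lt;p&gt;&lt;em&gt;A trimmed sketch of the equivalent JSON definition; the real dry-run output also includes defaults such as dnsPolicy and an empty status block:&lt;/em&gt;&lt;/p&gt;

```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "nginx-pod",
    "labels": { "run": "nginx-pod" }
  },
  "spec": {
    "containers": [
      {
        "name": "nginx-pod",
        "image": "nginx:1.22.1",
        "ports": [ { "containerPort": 80 } ]
      }
    ],
    "restartPolicy": "Always"
  }
}
```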

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpwa3w5rcej34rwuvr5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpwa3w5rcej34rwuvr5c.png" alt="catjson" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then I used &lt;code&gt;vi&lt;/code&gt; to edit the &lt;code&gt;json&lt;/code&gt; file that created pod4, adapting it for pod5, and ran &lt;strong&gt;cat&lt;/strong&gt; again to verify the changes took effect.&lt;/p&gt;
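
&lt;p&gt;&lt;em&gt;If you prefer a non-interactive edit, &lt;code&gt;sed&lt;/code&gt; can make the same rename. This is only a sketch: the file below is a hypothetical, trimmed stand-in for the real manifest, and the pod4/pod5 names mirror the ones used above.&lt;/em&gt;&lt;/p&gt;

```shell
# Work in a throwaway directory so nothing real is overwritten
cd "$(mktemp -d)"

# Hypothetical minimal stand-in for the generated JSON manifest
printf '{"apiVersion":"v1","kind":"Pod","metadata":{"name":"nginx-pod4"}}\n' | tee nginx-pod4.json

# Non-interactive alternative to editing the name in vi
sed 's/nginx-pod4/nginx-pod5/' nginx-pod4.json | tee nginx-pod5.json

# Verify the change took effect, as cat did above
cat nginx-pod5.json
```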

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcyicwlxpycxijj199roy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcyicwlxpycxijj199roy.png" alt="newfile" width="800" height="624"&gt;&lt;/a&gt;&lt;br&gt;
I created a new pod using the &lt;code&gt;apply&lt;/code&gt; verb, then ran the &lt;code&gt;get pods&lt;/code&gt; and &lt;code&gt;get pods -o wide&lt;/code&gt; commands.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsokdxdw8dza4vs5sql8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsokdxdw8dza4vs5sql8.png" alt="get pods" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE: Compare the output of get pods with get pods -o wide: the wide format adds each pod's IP address and the node it is running on.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Both the &lt;code&gt;YAML&lt;/code&gt; and &lt;code&gt;JSON&lt;/code&gt; definition files can serve as templates, or can be loaded into the cluster respectively as such:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl create -f nginx-pod.yaml&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;$ kubectl create -f nginx-pod.json&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Deep Dive&lt;/strong&gt;&lt;br&gt;
Sometimes, a simple &lt;strong&gt;kubectl get pods&lt;/strong&gt; isn't enough. When you need to know exactly what is happening inside a &lt;code&gt;Pod&lt;/code&gt;, like which node it's running on, its IP address, or its lifecycle events, you need to use the &lt;strong&gt;describe&lt;/strong&gt; command.&lt;/p&gt;

&lt;p&gt;The screenshot shows the &lt;strong&gt;anatomy&lt;/strong&gt; of a running Pod.&lt;br&gt;
Here I'm running &lt;strong&gt;kubectl describe pods&lt;/strong&gt;, and the output is a goldmine of information.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Node Information&lt;/strong&gt;: You can see this pod is scheduled on the &lt;code&gt;minikube&lt;/code&gt; node at IP 192.168.49.2.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Container Details&lt;/strong&gt;: It confirms we are using the &lt;code&gt;nginx:1.22.1 image&lt;/code&gt; and that the container is officially in the &lt;strong&gt;Running&lt;/strong&gt; state.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conditions&lt;/strong&gt;: Notice the &lt;strong&gt;True&lt;/strong&gt; values for &lt;em&gt;Initialized&lt;/em&gt;, &lt;em&gt;Ready&lt;/em&gt;, and &lt;strong&gt;PodScheduled&lt;/strong&gt;. This is the &lt;em&gt;checklist&lt;/em&gt; Kubernetes uses to ensure the pod is healthy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IP Address&lt;/strong&gt;: Each pod gets its own unique internal IP (in this case, 10.244.0.3).&lt;/p&gt;

&lt;p&gt;NOTE: If your pod is stuck in &lt;code&gt;Pending&lt;/code&gt; or &lt;code&gt;CrashLoopBackOff&lt;/code&gt;, always scroll to the very bottom of the describe output. The Events section will tell you the exact reason, whether it’s a &lt;strong&gt;failed pull&lt;/strong&gt;, &lt;strong&gt;a lack of memory&lt;/strong&gt;, or a &lt;strong&gt;configuration error&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7: The Apply Warning&lt;/strong&gt;&lt;br&gt;
As you saw earlier, switching between &lt;code&gt;kubectl run&lt;/code&gt; and &lt;code&gt;kubectl apply&lt;/code&gt; can trigger a warning message.&lt;/p&gt;

&lt;p&gt;Recall the Screenshot &lt;strong&gt;Missing Annotation Warning&lt;/strong&gt;&lt;br&gt;
In this image, I used &lt;code&gt;kubectl apply&lt;/code&gt; on a pod that was originally created with a simple run command.&lt;/p&gt;

&lt;p&gt;What the Warning Means: Kubernetes is saying: "I don't see the 'last-applied-configuration' note on this pod." You don't actually have to do anything; Kubernetes automatically &lt;code&gt;patches&lt;/code&gt; the pod by adding that annotation so it can track future declarative changes.&lt;/p&gt;

&lt;p&gt;Hence, once you move to a file-based workflow (using &lt;code&gt;YAML&lt;/code&gt; or &lt;code&gt;JSON&lt;/code&gt;), stick with &lt;code&gt;kubectl apply&lt;/code&gt;. It makes your deployments much more predictable and stable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sr8axskg02fyq2n1gp4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sr8axskg02fyq2n1gp4.png" alt="allpods" width="800" height="572"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20788mfpg5q0g1z40471.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20788mfpg5q0g1z40471.png" alt="allpods" width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8: Using the describe Command for Advanced Inspection&lt;/strong&gt;&lt;br&gt;
When a simple list isn't enough, we need to look closer. The &lt;code&gt;kubectl describe&lt;/code&gt; command is your magnifying glass for everything happening inside a resource.&lt;/p&gt;

&lt;p&gt;The screenshots below show the anatomy of a running Pod as I inspect all the pods. The output provides critical data that isn't visible in a standard list:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Placement&lt;/strong&gt;: You can see exactly which Node (in this case, our &lt;code&gt;minikube VM&lt;/code&gt;) is hosting the pod.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Networking&lt;/strong&gt;: Each pod is assigned its own internal IP address (like 10.244.0.3).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lifecycle Conditions&lt;/strong&gt;: Notice the "True" status for &lt;code&gt;Initialized&lt;/code&gt;, &lt;code&gt;Ready&lt;/code&gt;, and &lt;code&gt;ContainersReady&lt;/code&gt;. This is the checklist Kubernetes uses to confirm the pod is healthy and capable of serving traffic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Events&lt;/strong&gt;: While not visible in every crop, the bottom of this output logs every action the cluster took, &lt;em&gt;from pulling the image to starting the container&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Tip: If your pod status is &lt;strong&gt;Pending&lt;/strong&gt;, use &lt;code&gt;describe&lt;/code&gt;. It will often tell you if the cluster is out of memory or if it can't find a node that fits your requirements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8il6x4odoax1byol3m2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8il6x4odoax1byol3m2.png" alt="singlepod" width="800" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hv1slno8y72hxif89g9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hv1slno8y72hxif89g9.png" alt="crash" width="800" height="254"&gt;&lt;/a&gt;&lt;br&gt;
Always verify cluster health before deployment to minimize downtime.&lt;br&gt;
&lt;strong&gt;Step 9: The GUI Perspective (Launching the Minikube Dashboard)&lt;/strong&gt;&lt;br&gt;
Sometimes a visual overview is exactly what you need to see the &lt;em&gt;big picture&lt;/em&gt; of your cluster.&lt;/p&gt;

&lt;p&gt;In the screenshot below I am accessing the Dashboard by running the command &lt;code&gt;minikube dashboard&lt;/code&gt;. This automates several complex steps for you:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enabling the Addon&lt;/strong&gt;: It ensures the dashboard components are active.&lt;br&gt;
&lt;strong&gt;Launching the Proxy&lt;/strong&gt;: It creates a secure tunnel between your local machine and the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Opening the UI&lt;/strong&gt;: It provides a local URL that opens directly in your browser.&lt;br&gt;
The dashboard isn't just for looking, you can use it to edit &lt;code&gt;YAML&lt;/code&gt; files, scale your &lt;code&gt;deployments&lt;/code&gt;, and view real-time &lt;code&gt;logs&lt;/code&gt; without typing a single &lt;code&gt;kubectl&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bdpq3q2jbxuyo415j60.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bdpq3q2jbxuyo415j60.png" alt="dashboard" width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 10: Exploring the Kubernetes Dashboard&lt;/strong&gt;&lt;br&gt;
After spending so much time in the terminal, there is nothing quite like seeing your cluster come to life in a web browser. The &lt;code&gt;Minikube Dashboard&lt;/code&gt; provides a real-time, graphical view of your workloads, and it’s an incredible tool for both beginners and advanced users.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbg2kb1px62odu0377c9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbg2kb1px62odu0377c9.png" alt="dashbrdbrws" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The solid green circle shows that 100% of our desired pods are healthy and running.&lt;/p&gt;

&lt;p&gt;The Pod List shows all five of our Nginx pods (nginx-pod through nginx-pod5) lined up perfectly.&lt;/p&gt;

&lt;p&gt;The Metadata at a Glance: Without typing a single command, we can see the internal IP addresses, the image versions (nginx:1.22.1), and even how long each pod has been alive.&lt;/p&gt;

&lt;p&gt;The Sidebar: Notice the menu on the left. This is where you can explore more advanced building blocks like &lt;code&gt;ConfigMaps&lt;/code&gt;, &lt;code&gt;Secrets&lt;/code&gt;, and &lt;code&gt;Storage Classes&lt;/code&gt; as you progress in your journey.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 11: Powering Down&lt;/strong&gt;&lt;br&gt;
Once you’ve finished your lab session, it’s best practice to stop your local cluster to save your machine's battery and CPU.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5lx60n23t8fiuwhvjez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5lx60n23t8fiuwhvjez.png" alt="minikube stop" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The screenshot shows the transition from an active environment to a clean stop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ minikube stop&lt;/strong&gt;: This gracefully powers down the Minikube virtual machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ minikube status&lt;/strong&gt;: Confirming the shutdown. You can see the &lt;code&gt;host&lt;/code&gt;, &lt;code&gt;kubelet&lt;/code&gt;, and &lt;code&gt;apiserver&lt;/code&gt; are all now in a Stopped state.&lt;/p&gt;

&lt;p&gt;Your work isn't lost. The next time you run &lt;code&gt;minikube start&lt;/code&gt;, your namespaces and manifests will be right where you left them.&lt;/p&gt;

&lt;p&gt;Before advancing to more complex application deployment and management methods, become familiar with Pod operations with additional commands such as:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl apply -f nginx-pod.yaml&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;$ kubectl get pods&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;$ kubectl get pod nginx-pod -o yaml&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;$ kubectl get pod nginx-pod -o json&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;$ kubectl describe pod nginx-pod&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;$ kubectl delete pod nginx-pod&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Bridging the Gap Between Code and Infrastructure&lt;/strong&gt;&lt;br&gt;
Building this fleet of &lt;code&gt;Nginx pods&lt;/code&gt; was more than just a technical exercise, it was a demonstration of how a structured "Build, See, Destroy" approach ensures infrastructure reliability. By moving from imperative CLI commands to declarative &lt;code&gt;YAML&lt;/code&gt; and &lt;code&gt;JSON&lt;/code&gt; manifests, we create a system that is version-controlled, repeatable, and scalable—the core pillars of a modern &lt;code&gt;DevOps&lt;/code&gt; culture.&lt;/p&gt;

&lt;p&gt;For me, the real power of Kubernetes lies in its self-healing nature and resource isolation. Whether it’s troubleshooting a missing client certificate or using the Dashboard to verify cluster health, the goal remains the same: ensuring high availability and operational excellence for the end user.&lt;/p&gt;

&lt;p&gt;As the tech landscape continues to evolve, mastering these foundational building blocks is what allows us to build the resilient, sovereign digital infrastructures of tomorrow.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>minikube</category>
      <category>devops</category>
      <category>beginners</category>
    </item>
    <item>
      <title>From Code to Clouds: Hosting a Professional Resume on GitHub Pages</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Thu, 23 Apr 2026 00:38:47 +0000</pubDate>
      <link>https://dev.to/rahimah_dev/from-code-to-clouds-hosting-a-professional-resume-on-github-pages-37ch</link>
      <guid>https://dev.to/rahimah_dev/from-code-to-clouds-hosting-a-professional-resume-on-github-pages-37ch</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Imagine an employer or recruiter landing on your profile. Instead of a standard, static PDF, they are greeted with a &lt;em&gt;fast, responsive, and live-hosted website&lt;/em&gt; that showcases your professional journey with precision. It doesn’t just show them your experience, it proves you have the technical initiative to build, manage, and deploy your own digital footprint.&lt;/p&gt;

&lt;p&gt;In this article, I’ll walk you through how I took a professional Pilot’s resume from a local code editor to a &lt;strong&gt;live URL&lt;/strong&gt; using HTML, CSS, and &lt;strong&gt;GitHub Pages&lt;/strong&gt;, including specific terminal hurdles I encountered along the way.&lt;br&gt;
Learning Objectives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure my global Git identity.&lt;/li&gt;
&lt;li&gt;Initialize and manage a clean local workspace.&lt;/li&gt;
&lt;li&gt;Troubleshoot common terminal errors like &lt;em&gt;fatal: not a git repository.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Deploy a live project using GitHub Pages.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
To follow along with this tutorial, you will need &lt;strong&gt;Git Bash&lt;/strong&gt; installed on your Windows machine. You can download the official installer directly from the Git website:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://git-scm.com/download/win" rel="noopener noreferrer"&gt;Download Git for Windows&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Setting the Foundation&lt;/strong&gt;&lt;br&gt;
Every successful deployment starts with identity. Before &lt;code&gt;Git&lt;/code&gt; can track your progress, it needs to &lt;em&gt;know who is behind the code&lt;/em&gt;. I started by configuring my global credentials in the terminal to ensure every commit was correctly attributed to my profile.&lt;/p&gt;

&lt;p&gt;Using &lt;code&gt;git config --list&lt;/code&gt; is a quick way to verify your setup and avoid &lt;em&gt;Permission Denied&lt;/em&gt; errors later in the workflow.&lt;/p&gt;
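
&lt;p&gt;&lt;em&gt;If you haven't configured your identity yet, the setup looks roughly like this. The name, email, and throwaway HOME are placeholders for demonstration; on your own machine you would skip the HOME line and use your real details:&lt;/em&gt;&lt;/p&gt;

```shell
# Use a throwaway HOME so this demo does not touch your real config
export HOME="$(mktemp -d)"

# Set your identity once per machine (values here are placeholders)
git config --global user.name "Your Name"
git config --global user.email "you@example.com"

# Verify the setup
git config --list
```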

&lt;p&gt;&lt;em&gt;NOTE: Only two lines of output appeared because I have configured my environment before this exercise&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxozdxwxpxge8qypjmss.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxozdxwxpxge8qypjmss.png" alt="config" width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Building the Workspace&lt;/strong&gt;&lt;br&gt;
Organization is &lt;strong&gt;key&lt;/strong&gt; to a smooth deployment. Instead of working out of a cluttered root directory, I used the terminal to create a dedicated space for my project. By navigating to the Desktop and using &lt;code&gt;mkdir&lt;/code&gt; and &lt;code&gt;touch&lt;/code&gt;, I established a clean environment for my source files.&lt;/p&gt;

&lt;p&gt;The Workflow:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cd&lt;/code&gt;: Navigating to the right environment.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mkdir website&lt;/code&gt;: Creating an isolated container for the project.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;touch index.html&lt;/code&gt;: Generating the entry point for my live site.&lt;/p&gt;
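
&lt;p&gt;&lt;em&gt;Put together, the workspace setup is just three commands. In this sketch a temporary directory stands in for the Desktop so you can run it anywhere:&lt;/em&gt;&lt;/p&gt;

```shell
cd "$(mktemp -d)"       # stand-in for cd ~/Desktop
mkdir website           # an isolated folder for the project
cd website
touch index.html        # the entry point GitHub Pages will serve
ls                      # confirm the file exists
```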

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuyg63p8rqt4rs258jj2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuyg63p8rqt4rs258jj2.png" alt="buildingwksp" width="800" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Bridging Local to Remote&lt;/strong&gt;&lt;br&gt;
With the local code ready, the next step was creating a &lt;strong&gt;remote repository&lt;/strong&gt; on GitHub. This is where the magic of &lt;code&gt;Hosting&lt;/code&gt; begins. By clicking that &lt;strong&gt;New&lt;/strong&gt; button, I created a central hub (Git-Basics101) that would eventually serve my resume to the world. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;NOTE: Creating a remote repository is like setting up a destination for a flight, you need a clear target before you can push your data into the clouds&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjh57nwwcmzp35s5gc3ou.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjh57nwwcmzp35s5gc3ou.png" alt="createrepo" width="800" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then I copied the HTTPS URL to use in the terminal.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0lx5ei63yxao0b9scdi0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0lx5ei63yxao0b9scdi0.png" alt="copyhttps" width="800" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Troubleshooting the Workflow&lt;/strong&gt;&lt;br&gt;
The Git discovery error occurred when I tried to link my remote: &lt;em&gt;I hit the fatal: not a git repository error&lt;/em&gt;. This was a great reminder that &lt;code&gt;git init&lt;/code&gt; must be the very first step before any remote connections can be made.&lt;/p&gt;
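
&lt;p&gt;&lt;em&gt;The corrected sequence looks like this. The repository URL is a placeholder; substitute your own username. A temporary directory stands in for the project folder:&lt;/em&gt;&lt;/p&gt;

```shell
cd "$(mktemp -d)"       # stand-in for the project folder
git init                # must run before any remote commands

# Register the remote copied from GitHub (placeholder username)
git remote add origin https://github.com/YOUR-USERNAME/Git-Basics101.git

git remote -v           # verify the remote is registered
```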

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3v0uio86jf9m1qad30b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3v0uio86jf9m1qad30b.png" alt="errorngitinit" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Mastering the Editor&lt;/strong&gt;&lt;br&gt;
To ensure the final product was polished and free of errors, I utilized the &lt;code&gt;Vim (vi)&lt;/code&gt; editor directly within the terminal. This allowed me to inspect the &lt;em&gt;index.html&lt;/em&gt; file in a raw environment.&lt;br&gt;
I used the &lt;code&gt;cat&lt;/code&gt; command to verify my code before it goes live. &lt;br&gt;
&lt;em&gt;NOTE: This resume is not real, but for demo purpose only.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F973y3f3g34udda2hhev6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F973y3f3g34udda2hhev6.png" alt="vim" width="800" height="570"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7: Staging for Deployment&lt;/strong&gt;&lt;br&gt;
Here am moving files from &lt;em&gt;untracked&lt;/em&gt; zone to the &lt;em&gt;staged&lt;/em&gt; zone. By using &lt;code&gt;git status&lt;/code&gt;, I could see exactly what Git was watching. But before that, I used the &lt;code&gt;ls&lt;/code&gt; to verify that the file is in that particular location in my local environment. Initially, my index.html was in &lt;em&gt;red&lt;/em&gt; (untracked), but with a quick &lt;code&gt;git add .&lt;/code&gt;, it turned &lt;em&gt;green&lt;/em&gt;, ready and waiting to be committed.&lt;/p&gt;

&lt;p&gt;This visual confirmation in the terminal is the developer's &lt;strong&gt;safety check&lt;/strong&gt; to ensure only the right files are being sent to the server.&lt;/p&gt;
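
&lt;p&gt;&lt;em&gt;The red-to-green transition can be reproduced with &lt;code&gt;git status --short&lt;/code&gt;, which marks untracked files with ?? and staged ones with A. A minimal sketch in a throwaway repository:&lt;/em&gt;&lt;/p&gt;

```shell
cd "$(mktemp -d)"
git init -q
touch index.html

git status --short      # prints: ?? index.html  (untracked)
git add .
git status --short      # prints: A  index.html  (staged)
```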

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frob1plin4aegxb6rsrxq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frob1plin4aegxb6rsrxq.png" alt="staging" width="800" height="665"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8: Sealing the Version&lt;/strong&gt;&lt;br&gt;
With the files staged, it was time to create a permanent &lt;strong&gt;snapshot&lt;/strong&gt; of my work. Running &lt;code&gt;git commit -m "create index.html"&lt;/code&gt; acted as the official seal for this &lt;strong&gt;version&lt;/strong&gt; of the project.&lt;/p&gt;

&lt;p&gt;The terminal output showing &lt;em&gt;"1 file changed" and "130 insertions"&lt;/em&gt; is more than just text, it’s a receipt of my progress. By providing a clear, descriptive message, I’ve ensured that any future collaborator (or even my future self) understands &lt;strong&gt;exactly what this change was for&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59vlull8i7c32p5s2dkx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59vlull8i7c32p5s2dkx.png" alt="commit" width="800" height="106"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 9: Overcoming the "Origin" Obstacle with Authentication&lt;/strong&gt;&lt;br&gt;
One of the most common hurdles for any developer is connecting a local project to a remote server. As seen in my terminal, I initially hit a &lt;em&gt;fatal: origin does not appear to be a git repository error&lt;/em&gt;. This happened because my local folder hadn't been &lt;em&gt;introduced&lt;/em&gt; to GitHub yet.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: I resolved this by explicitly adding the remote origin URL and authenticating. It was another reminder that Git needs a clear map of where your code is supposed to go before it can start the journey.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2vytk8ru6gedvovgbff.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2vytk8ru6gedvovgbff.png" alt="push" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the modern Git workflow, security is paramount. GitHub requires a Personal Access Token (PAT) instead of a regular password. So I navigated to my &lt;code&gt;Developer Settings&lt;/code&gt; to generate a &lt;code&gt;Classic&lt;/code&gt; token. This token acts as a secure key, allowing my terminal to communicate safely with my GitHub account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5nrqyjpdlvjioedlyim.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5nrqyjpdlvjioedlyim.png" alt="PAT" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 10: The Successful Push&lt;/strong&gt;&lt;br&gt;
With the connection established, I ran the final command: &lt;code&gt;git push -u origin master&lt;/code&gt;. Seeing those lines of code: &lt;em&gt;Enumerating objects, Counting objects, and finally the URL to the remote repository&lt;/em&gt;—is the ultimate &lt;strong&gt;mission accomplished&lt;/strong&gt; moment for a developer.&lt;/p&gt;

&lt;p&gt;The code was no longer just on my laptop, it was officially live in the cloud.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqow588wen4i81v0s1q1o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqow588wen4i81v0s1q1o.png" alt="master" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 11: Activating GitHub Pages&lt;/strong&gt;&lt;br&gt;
With the code successfully pushed to the &lt;strong&gt;Git-Basics101 repository&lt;/strong&gt;, the final piece of the puzzle was turning on the hosting.&lt;/p&gt;

&lt;p&gt;I navigated to the &lt;strong&gt;Settings&lt;/strong&gt; tab of my repository, selected the Pages menu on the left, and set the build source to the &lt;strong&gt;master branch&lt;/strong&gt; and saved the change. Within seconds, GitHub provided a &lt;strong&gt;live link&lt;/strong&gt;. This transition from a local index.html file to a globally accessible URL is the ultimate goal of any web deployment project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdkh5mff63c2lfydckn5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdkh5mff63c2lfydckn5.png" alt="master" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw735z28s98v5cor58prn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw735z28s98v5cor58prn.png" alt="success" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yh36jhob0otpfn2qqj6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yh36jhob0otpfn2qqj6.png" alt="live" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building this resume was more than just a coding exercise; it was a lesson in the modern developer's workflow. Errors aren't failures; they are the terminal's way of teaching you the correct sequence of operations.&lt;/p&gt;

&lt;p&gt;The result? A professional, responsive resume that is ready for the world to see.&lt;/p&gt;

</description>
      <category>github</category>
      <category>webdev</category>
      <category>html</category>
      <category>deployment</category>
    </item>
    <item>
      <title>Installing Local Kubernetes Clusters</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Fri, 17 Apr 2026 21:37:51 +0000</pubDate>
      <link>https://dev.to/rahimah_dev/my-kubernetes-mastery-journey-installing-local-kubernetes-clusters-176</link>
      <guid>https://dev.to/rahimah_dev/my-kubernetes-mastery-journey-installing-local-kubernetes-clusters-176</guid>
      <description>&lt;p&gt;Now that we have familiarized ourselves with the default &lt;strong&gt;minikube start&lt;/strong&gt; command, let's dive deeper into Minikube to understand some of its more advanced features.&lt;/p&gt;

&lt;p&gt;By default, the &lt;strong&gt;minikube start&lt;/strong&gt; command selects a &lt;code&gt;driver&lt;/code&gt;: isolation software, such as a hypervisor (e.g., VirtualBox) or a container runtime, &lt;em&gt;if one or more are installed on the host workstation&lt;/em&gt;. In addition, it downloads the latest Kubernetes version components. With the selected driver software it provisions a single &lt;strong&gt;VM&lt;/strong&gt; named &lt;code&gt;minikube&lt;/code&gt; (with a hardware profile of CPUs=2, Memory=6GB, Disk=20GB) or a container (Docker) to host the default single-node, all-in-one Kubernetes cluster. Once the node is provisioned, minikube bootstraps the Kubernetes control plane (with the default &lt;code&gt;kubeadm&lt;/code&gt; tool) and installs the latest version of the default container runtime, Docker, which will serve as the running environment for the containerized applications we will deploy to the Kubernetes cluster. &lt;br&gt;
The &lt;strong&gt;minikube start&lt;/strong&gt; command generates a default minikube cluster with the specifications described above, and it stores these specs so that we can restart the default cluster whenever desired. The object that stores the specifications of our cluster is called a &lt;code&gt;profile&lt;/code&gt;.&lt;/p&gt;
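&lt;p&gt;The defaults described above can also be requested explicitly. A minimal sketch (every flag here is optional; minikube chooses equivalent values on its own):&lt;/p&gt;

```shell
# Spell out the default profile explicitly: driver, CPUs, memory, disk.
# Values mirror the defaults described above; adjust them to resize the node.
minikube start --driver=virtualbox --cpus=2 --memory=6g --disk-size=20g
```

&lt;p&gt;Swapping &lt;code&gt;--driver=virtualbox&lt;/code&gt; for &lt;code&gt;--driver=docker&lt;/code&gt; provisions a container instead of a VM, as noted above.&lt;/p&gt;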

&lt;p&gt;As Minikube matures, so do its features and capabilities. With the introduction of profiles, Minikube allows users to create custom reusable clusters that can all be managed from a single command line client.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;minikube profile&lt;/strong&gt; command allows us to view the status of all our clusters in a table-formatted output. &lt;br&gt;
Now we'll start Minikube, specifying the driver, which is Docker in this case.&lt;br&gt;
Wait for it to finish! You'll see a message like "Done! kubectl is now configured." Once you see that, you can try another Kubernetes command.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foy8cklosb5qyjul1shgh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foy8cklosb5qyjul1shgh.png" alt="mini3start" width="800" height="332"&gt;&lt;/a&gt;&lt;br&gt;
Now we'll run &lt;code&gt;minikube status&lt;/code&gt;. Once Minikube is "Running," you have a tiny one-node Kubernetes cluster alive on your machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6m2x209ctp8i2s9rby8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6m2x209ctp8i2s9rby8b.png" alt="status" width="800" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We'll check by running the &lt;strong&gt;kubectl get nodes&lt;/strong&gt; command; if you see a node named minikube with a status of &lt;strong&gt;Ready&lt;/strong&gt;, you officially have a Kubernetes cluster running on your laptop:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqv587x2jss3skofbz1bb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqv587x2jss3skofbz1bb.png" alt="getnodes" width="749" height="117"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;minikube stop&lt;/strong&gt;: With this command, we can stop Minikube. It stops all applications running in the cluster, safely stops the cluster and the Minikube VM, and preserves our work until we decide to start the cluster once again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpvygy2lba8rqjkzg2t2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpvygy2lba8rqjkzg2t2.png" alt="stop" width="800" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;minikube status&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7f62nfjfceq7ac40u79.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7f62nfjfceq7ac40u79.png" alt="statusagain" width="796" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Assuming we have created only the default &lt;code&gt;minikube&lt;/code&gt; cluster, we could list the properties that define the default profile with:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuhqleifz44l5srfyqts2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuhqleifz44l5srfyqts2.png" alt="profilelist" width="800" height="149"&gt;&lt;/a&gt;&lt;br&gt;
This table presents the columns associated with the default properties such as the profile name: &lt;strong&gt;minikube&lt;/strong&gt;, the isolation driver: &lt;strong&gt;VirtualBox&lt;/strong&gt;, the container runtime: &lt;strong&gt;Docker&lt;/strong&gt;, the Kubernetes version: &lt;strong&gt;v1.28.3&lt;/strong&gt;, the status of the cluster - &lt;strong&gt;running or stopped&lt;/strong&gt;. The table also displays the &lt;strong&gt;number of nodes&lt;/strong&gt;: 1 by default, the &lt;strong&gt;private IP address&lt;/strong&gt; of the minikube cluster's control plane VirtualBox VM, and the &lt;strong&gt;secure port&lt;/strong&gt; that exposes the API Server to cluster control plane components, agents and clients: 8443. &lt;/p&gt;

&lt;p&gt;To create a brand-new cluster with 2 nodes named &lt;code&gt;lab-cluster&lt;/code&gt;, you use the --nodes flag. Since the original single-node cluster is stopped (as seen in the screenshot above), we'll bring the multi-node setup up as a separate profile. We'll run the multi-node command.&lt;/p&gt;
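&lt;p&gt;A minimal sketch of that multi-node command (the profile name and node count match the text above):&lt;/p&gt;

```shell
# Create a separate 2-node profile named lab-cluster on the Docker driver;
# the default single-node "minikube" profile is left untouched.
minikube start -p lab-cluster --nodes 2 --driver=docker
```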

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1tx5q93zvz6088ltoqi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1tx5q93zvz6088ltoqi.png" alt="2nodes" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the cluster starts, we'll use the next three commands to see the difference: &lt;strong&gt;kubectl get nodes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgngvrsdtgf0x2xyqavs2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgngvrsdtgf0x2xyqavs2.png" alt="getnodes" width="800" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;minikube profile list&lt;/strong&gt;: Using this command, you can check the Minikube Profiles and see both the original cluster and the new 2-node cluster side-by-side.&lt;br&gt;
The minikube profile list command shows the two separate &lt;code&gt;slots&lt;/code&gt; you have created on your machine:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;lab-cluster&lt;/code&gt;: This is the active cluster. It is running on the docker driver with 2 nodes and currently has a status of &lt;strong&gt;OK&lt;/strong&gt;. The asterisk (*) in the ACTIVE_PROFILE column indicates that any &lt;code&gt;kubectl&lt;/code&gt; commands run right now will target this cluster.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;minikube&lt;/code&gt;: This is the original &lt;code&gt;single-node&lt;/code&gt; cluster. It is currently Stopped, meaning it isn't consuming any RAM or CPU, but its configuration and any data it had are safely saved.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpe8mbywvguxru1olhrnl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpe8mbywvguxru1olhrnl.png" alt="pl" width="800" height="112"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;kubectl get nodes -o wide&lt;/strong&gt;: This gives a detailed view: you can see which node is the &lt;code&gt;Control Plane&lt;/code&gt; (the brain) and which is the &lt;code&gt;Worker&lt;/code&gt; (the muscle).&lt;br&gt;
This command shows the details of the nodes inside your active lab-cluster profile:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;lab-cluster&lt;/code&gt; (control-plane): This is the &lt;code&gt;brain&lt;/code&gt; of your cluster. It manages the state, schedules applications, and handles the API.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;lab-cluster-m02&lt;/code&gt;: This is your second node. In a multi-node setup, this acts as a Worker node where your actual application containers (Pods) will run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ready Status&lt;/strong&gt;: Both nodes are Ready, meaning they are healthy and communicating with each other.&lt;br&gt;
And the &lt;strong&gt;-o wide&lt;/strong&gt; flag gives you deeper technical insights:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Internal-IP&lt;/strong&gt;: Your nodes have unique internal addresses (192.168.58.2 and 192.168.58.3) to talk to each other.&lt;br&gt;
&lt;strong&gt;OS-Image&lt;/strong&gt;: They are running Debian GNU/Linux 12 inside their Docker containers.&lt;br&gt;
&lt;strong&gt;Container-Runtime&lt;/strong&gt;: They are using Docker v1.35.1 to actually spin up the containers.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgr1q9mm386dkzgzhrfz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgr1q9mm386dkzgzhrfz.png" alt="getsnodewide" width="800" height="48"&gt;&lt;/a&gt;&lt;br&gt;
The role of the second node is labeled as &lt;code&gt;none&lt;/code&gt; because it wasn't specified during creation.&lt;br&gt;
Now we'll stop the &lt;code&gt;lab-cluster&lt;/code&gt; and start the &lt;code&gt;minikube&lt;/code&gt; profile.&lt;br&gt;
This is known as &lt;strong&gt;switching contexts&lt;/strong&gt;, and it's a skill worth mastering.&lt;br&gt;
If you want to go back to your first cluster, you don't need to delete anything. You just switch the &lt;code&gt;Active&lt;/code&gt; pointer:&lt;/p&gt;

&lt;p&gt;Stop current: &lt;strong&gt;minikube stop -p lab-cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxmvw3h6a74bygkounxi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxmvw3h6a74bygkounxi.png" alt="stoplabcl" width="800" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Switch &amp;amp; Start: &lt;strong&gt;minikube start -p minikube&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqkgbj9di9y85en119da.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqkgbj9di9y85en119da.png" alt="startmin" width="800" height="190"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Advanced Minikube features
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;minikube profile list&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxabm71iajfo6hrkiyq0y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxabm71iajfo6hrkiyq0y.png" alt="minikubeprofilelist" width="800" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In order to set the profile to &lt;code&gt;lab-cluster&lt;/code&gt;, we'll use the command: &lt;strong&gt;minikube profile lab-cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg7esor478x23cvvy0sjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg7esor478x23cvvy0sjc.png" alt="settolabcluster" width="800" height="68"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then start Minikube again using &lt;strong&gt;minikube start&lt;/strong&gt;.&lt;br&gt;
When it is time to run the cluster again, simply run the &lt;strong&gt;minikube start&lt;/strong&gt; command (the driver option is not required), and it will restart the earlier bootstrapped Minikube cluster.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frdz7xt94m2w6m2tee0sa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frdz7xt94m2w6m2tee0sa.png" alt="mini3start" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, since I want the terminal output to look organized and show &lt;code&gt;worker&lt;/code&gt;, I can manually assign the role using the label command in the VS Code terminal.&lt;/p&gt;
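&lt;p&gt;A sketch of that label command, assuming the node name &lt;code&gt;lab-cluster-m02&lt;/code&gt; from the earlier output; &lt;code&gt;node-role.kubernetes.io/worker&lt;/code&gt; is the conventional label key that populates the ROLES column:&lt;/p&gt;

```shell
# Tag the second node so `kubectl get nodes` shows "worker" under ROLES.
kubectl label node lab-cluster-m02 node-role.kubernetes.io/worker=worker
```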

&lt;p&gt;Then change context to &lt;code&gt;lab-cluster&lt;/code&gt; (to make &lt;code&gt;lab-cluster&lt;/code&gt; the target cluster) and run the &lt;code&gt;get nodes&lt;/code&gt; command.&lt;br&gt;
The second node's role is now &lt;code&gt;worker&lt;/code&gt;!&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyomhgt8rwnllnd8gdwye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyomhgt8rwnllnd8gdwye.png" alt="getnodes" width="800" height="172"&gt;&lt;/a&gt;&lt;/p&gt;
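&lt;p&gt;The context switch itself can be done with kubectl directly, since Minikube registers each profile as a kubeconfig context of the same name:&lt;/p&gt;

```shell
# Point kubectl at the lab-cluster context, then confirm the node roles.
kubectl config use-context lab-cluster
kubectl get nodes
```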

&lt;p&gt;Run the &lt;strong&gt;minikube profile list&lt;/strong&gt; command; the profile will now be set to &lt;code&gt;lab-cluster&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bysprkthvnfdh3juvsn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8bysprkthvnfdh3juvsn.png" alt="profilelist" width="800" height="166"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run &lt;strong&gt;kubectl config view&lt;/strong&gt;; it gives detailed information about the cluster and its nodes.&lt;br&gt;
The &lt;code&gt;kubeconfig&lt;/code&gt; includes the API Server's endpoint: https://127.0.0.1:49687, and the minikube user's client authentication key and certificate data.&lt;/p&gt;

&lt;p&gt;With &lt;code&gt;kubectl&lt;/code&gt; installed, we can display information about the Minikube Kubernetes cluster with the &lt;strong&gt;kubectl cluster-info&lt;/strong&gt; command:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F92y5wcga9as8ko14i7cn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F92y5wcga9as8ko14i7cn.png" alt="view" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the &lt;strong&gt;kubectl cluster-info&lt;/strong&gt; command; it shows the IP address the cluster is running at.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0yqaitr2w2hnb7gf8nr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0yqaitr2w2hnb7gf8nr.png" alt="kubectl" width="800" height="66"&gt;&lt;/a&gt;&lt;br&gt;
&lt;code&gt;Kubernetes master&lt;/code&gt; is running at https://127.0.0.1:49687&lt;br&gt;
KubeDNS is running at https://127.0.0.1:49687/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy&lt;/p&gt;

&lt;h2&gt;
  
  
  The Kubernetes Dashboard
&lt;/h2&gt;

&lt;p&gt;The Kubernetes Dashboard provides a &lt;strong&gt;web-based user interface&lt;/strong&gt; for Kubernetes cluster management. &lt;code&gt;Minikube&lt;/code&gt; installs the Dashboard as an &lt;code&gt;addon&lt;/code&gt;, but it is disabled by default. Before using the Dashboard, we must enable the Dashboard &lt;code&gt;addon&lt;/code&gt; together with the &lt;code&gt;metrics-server&lt;/code&gt; addon, a helper addon designed to collect usage metrics from the Kubernetes cluster. To access the Dashboard from &lt;code&gt;Minikube&lt;/code&gt;, we can use the &lt;strong&gt;minikube dashboard&lt;/strong&gt; command, which opens a new tab in our web browser displaying the Kubernetes Dashboard, but only after we list the addons, enable the required ones, and verify their state:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;$ minikube addons list
$ minikube addons enable metrics-server
$ minikube addons enable dashboard
$ minikube addons list
$ minikube dashboard
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Run &lt;strong&gt;minikube addons list&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvrf6x4ye4st0gimo9g4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvrf6x4ye4st0gimo9g4.png" alt="addonlist" width="800" height="582"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To enable the metrics-server addon, run &lt;strong&gt;minikube addons enable metrics-server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frkwsqcsz8zbdsfbjx2po.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frkwsqcsz8zbdsfbjx2po.png" alt="enableser" width="800" height="123"&gt;&lt;/a&gt;&lt;br&gt;
Verify that the metrics-server is now enabled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pb671f2p7fnfvvaynoc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pb671f2p7fnfvvaynoc.png" alt="enabled" width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run &lt;strong&gt;minikube addons enable dashboard&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famg2mifzt9tredzxyxip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famg2mifzt9tredzxyxip.png" alt="dashboard" width="800" height="167"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Verify that the dashboard is enabled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwou7i642v774f9z3xei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwou7i642v774f9z3xei.png" alt="dashenabled" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the &lt;strong&gt;minikube dashboard&lt;/strong&gt; command, and a URL is displayed that opens a new browser tab when clicked.&lt;br&gt;
Or you can simply run &lt;strong&gt;minikube dashboard --url&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqz115e8b4mvvcop8dqzp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqz115e8b4mvvcop8dqzp.png" alt="dashhttp" width="800" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The dashboard is empty as expected.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffv336omcnq6w2wejal0i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffv336omcnq6w2wejal0i.png" alt="nothing to view" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So we'll create one pod using this command: &lt;strong&gt;kubectl run my-first-pod --image=nginx&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m53swtp6vh3lvyusa9b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m53swtp6vh3lvyusa9b.png" alt="onepodcommand" width="800" height="133"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Verify that it's now displayed on the dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgi7whzqb3c894shje98.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgi7whzqb3c894shje98.png" alt="1pod" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the &lt;code&gt;logs&lt;/code&gt; command for my-first-pod.&lt;/p&gt;
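&lt;p&gt;A sketch of that command:&lt;/p&gt;

```shell
# Print the nginx container's log from the pod created earlier;
# add -f to stream it live instead of printing once.
kubectl logs my-first-pod
```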

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffy3ndvebed76xcuew917.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffy3ndvebed76xcuew917.png" alt="logs" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Viewing logs can also be done on the dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnezgc6w1hbsgq33sa0n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnezgc6w1hbsgq33sa0n.png" alt="fromdash" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;APIs with kubectl proxy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When we issue the &lt;code&gt;kubectl proxy&lt;/code&gt; command, kubectl authenticates with the API Server on the control plane node and makes services available on the default proxy port 8001.&lt;/p&gt;

&lt;p&gt;First, we issue the &lt;code&gt;kubectl proxy&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl proxy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Starting to serve on 127.0.0.1:8001&lt;/p&gt;

&lt;p&gt;It locks the terminal for as long as the proxy is running, unless we run it in the background (with kubectl proxy &amp;amp;).&lt;/p&gt;

&lt;p&gt;When kubectl proxy is running, we can send requests to the API over the localhost on the default proxy port 8001 (from another terminal, since the proxy locks the first terminal when running in foreground):&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ curl &lt;a href="http://localhost:8001/" rel="noopener noreferrer"&gt;http://localhost:8001/&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
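&lt;p&gt;Beyond the API root, the proxy exposes the standard Kubernetes API paths as well; for example:&lt;/p&gt;

```shell
# Query the API Server through the local proxy (from a second terminal).
curl http://localhost:8001/version
curl http://localhost:8001/api/v1/namespaces/default/pods
```

&lt;p&gt;The first returns the server's version info as JSON; the second lists the pods in the default namespace, including my-first-pod.&lt;/p&gt;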

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffh20r4x5weqli3wqkb4q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffh20r4x5weqli3wqkb4q.png" alt="curl failed" width="800" height="108"&gt;&lt;/a&gt;&lt;br&gt;
But it worked on the browser:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tmp4vdyxw0rfxkfuurk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tmp4vdyxw0rfxkfuurk.png" alt=":8001 on browser" width="800" height="662"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So we'll use another terminal because the proxy is now locked in the first terminal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feq7salavv5qtgrk90bjg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feq7salavv5qtgrk90bjg.png" alt="didnotfail" width="800" height="425"&gt;&lt;/a&gt;&lt;br&gt;
This works!&lt;/p&gt;

&lt;p&gt;I stopped the clusters from the dashboard and verified using the command:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fphcc5hb5zl04bwb1i8lh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fphcc5hb5zl04bwb1i8lh.png" alt="veriyfromcmd" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Mastering &lt;code&gt;Minikube&lt;/code&gt; is about more than just starting a cluster; it’s about creating a reliable, reproducible environment that mirrors the complexities of the cloud. By moving beyond default settings and embracing multi-node profiles, you transition from a student of Kubernetes to an engineer capable of architecting resilient systems.&lt;/p&gt;

&lt;p&gt;As you continue building, remember that a well-organized local environment is the foundation of a successful deployment pipeline. Whether you are assigning worker roles in the CLI or monitoring pod health on the dashboard, these skills ensure that your infrastructure is as robust as the code running on it.&lt;/p&gt;

&lt;p&gt;Happy &lt;em&gt;Kube-ing!&lt;/em&gt; Post by rahimah_dev&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>minikube</category>
      <category>infrastructure</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Mastering Azure Monitor: Deployment and Configuration</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Thu, 16 Apr 2026 19:36:28 +0000</pubDate>
      <link>https://dev.to/rahimah_dev/mastering-azure-monitor-deployment-and-configuration-45nl</link>
      <guid>https://dev.to/rahimah_dev/mastering-azure-monitor-deployment-and-configuration-45nl</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;While working on a comprehensive deployment of Azure Monitor, I hit a common but frustrating wall: the dreaded &lt;code&gt;SubscriptionIsOverQuotaForSku&lt;/code&gt; error. Instead of stopping, I pivoted, re-engineering my deployment across Korea Central and East US to maintain uptime and visibility (since we're in a learning environment).&lt;/p&gt;

&lt;p&gt;In this post, I’ll walk you through how I deployed a hybrid environment featuring &lt;em&gt;Windows Server&lt;/em&gt; (IIS), &lt;em&gt;Linux&lt;/em&gt; (Ubuntu), and a &lt;em&gt;SQL-backed Web App&lt;/em&gt;, all while configuring the &lt;strong&gt;observability&lt;/strong&gt; layers needed to keep a modern enterprise running. &lt;/p&gt;

&lt;p&gt;Here’s a scenario that would make the tasks below urgent: an insurance firm has just suffered a minor &lt;code&gt;brute-force&lt;/code&gt; attack because a junior dev left a virtual machine open to the entire internet, and the CTO orders an immediate &lt;code&gt;lockdown&lt;/code&gt; of all infrastructure.&lt;/p&gt;

&lt;p&gt;My Task: I changed the &lt;strong&gt;RDP&lt;/strong&gt; Source to &lt;strong&gt;My IP&lt;/strong&gt; and manually configured Inbound Security Rules for HTTP (Port 80).&lt;/p&gt;

&lt;p&gt;The necessity: This is a critical security task. By restricting RDP access to only my specific IP address, I effectively &lt;em&gt;closed the front door&lt;/em&gt; to hackers. &lt;/p&gt;

&lt;p&gt;This exercise should take approximately &lt;strong&gt;30&lt;/strong&gt; minutes to complete.&lt;/p&gt;
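&lt;p&gt;The effect of that lockdown is easy to reason about with Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module: a /32 source prefix (which is what &lt;strong&gt;My IP&lt;/strong&gt; writes into the rule) matches exactly one address, while the default &lt;strong&gt;Any&lt;/strong&gt; source matches the whole address space. A toy illustration (the addresses are documentation examples, not real ones, and this is a sketch of the idea, not Azure's implementation):&lt;/p&gt;

```python
import ipaddress

def source_allows(rule_prefix, client_ip):
    """Return True if client_ip falls inside the rule's source prefix."""
    return ipaddress.ip_address(client_ip) in ipaddress.ip_network(rule_prefix)

# "Any" source: the entire IPv4 space, so any attacker matches.
assert source_allows("0.0.0.0/0", "203.0.113.7")

# "My IP" source: a single /32, so only that one address matches.
assert source_allows("198.51.100.10/32", "198.51.100.10")
assert not source_allows("198.51.100.10/32", "203.0.113.7")
```

&lt;p&gt;This is exactly why the RDP change below is a one-line fix with an outsized security payoff.&lt;/p&gt;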

&lt;h2&gt;
  
  
  Prepare your bring-your-own-subscription (BYOS)
&lt;/h2&gt;

&lt;p&gt;This set of lab exercises assumes that you have global administrator permissions to an Azure subscription.&lt;/p&gt;

&lt;p&gt;1.In the Azure Portal Search Bar, enter &lt;strong&gt;Resource Groups&lt;/strong&gt; and select &lt;strong&gt;Resource groups&lt;/strong&gt; from the list of results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf9x23gnt2qb6vs3cj6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf9x23gnt2qb6vs3cj6u.png" alt="rg" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2.On the &lt;strong&gt;Resource Groups&lt;/strong&gt; page, select &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5binfzzx6sbc9xo5ypxp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5binfzzx6sbc9xo5ypxp.png" alt="create" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.On the &lt;strong&gt;Create a Resource Group&lt;/strong&gt; page, select your subscription and enter the name &lt;code&gt;rg-alpha&lt;/code&gt;. Set the region to East US, choose &lt;strong&gt;Review + Create&lt;/strong&gt;, and then choose &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcp9n4fg94cx4vms0fvpy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcp9n4fg94cx4vms0fvpy.png" alt="name" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzjjifuxd0g6bg8dke49.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzjjifuxd0g6bg8dke49.png" alt="create" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;NOTE: This set of exercises assumes that you choose to deploy in the East US Region, but you can change this to another region if you choose. Just remember that each time you see East US mentioned in these instructions you will need to substitute the region you have chosen&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create App Log Examiners security group
&lt;/h2&gt;

&lt;p&gt;In this exercise, you create an &lt;code&gt;Entra&lt;/code&gt; ID security group.&lt;/p&gt;

&lt;p&gt;1.In the Azure Portal Search Bar, enter &lt;strong&gt;Microsoft Entra ID&lt;/strong&gt; (formerly Azure Active Directory) and select it from the list of results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzeu8stl66nkblpmpsvef.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzeu8stl66nkblpmpsvef.png" alt="Entra" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2.On the &lt;strong&gt;Default Directory&lt;/strong&gt; page, select &lt;strong&gt;+ Add&lt;/strong&gt;, then &lt;strong&gt;Groups&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxbntzjwqfwmqn5v5mrx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxbntzjwqfwmqn5v5mrx.png" alt="grps" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.On the &lt;strong&gt;New Group&lt;/strong&gt; page, provide the values in the following table and choose &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Group type&lt;/td&gt;
&lt;td&gt;Security&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Group name&lt;/td&gt;
&lt;td&gt;App Log Examiners&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Group description&lt;/td&gt;
&lt;td&gt;App Log Examiners&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faafuialpwngvurakf7oy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faafuialpwngvurakf7oy.png" alt="grps" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy and configure WS-VM1
&lt;/h2&gt;

&lt;p&gt;In this exercise, you deploy and configure a Windows Server virtual machine.&lt;/p&gt;

&lt;p&gt;1.In the Azure Portal Search Bar, enter &lt;strong&gt;Virtual Machines&lt;/strong&gt; and select &lt;strong&gt;Virtual Machines&lt;/strong&gt; from the list of results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fww0h86oi7axfzzd2f1cn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fww0h86oi7axfzzd2f1cn.png" alt="vm" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2.On the &lt;strong&gt;Virtual Machines&lt;/strong&gt; page, choose &lt;strong&gt;Create&lt;/strong&gt; and select &lt;strong&gt;Azure Virtual Machine&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6gg6zr8l14tvxrbdmrv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6gg6zr8l14tvxrbdmrv.png" alt="azurevm" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.On the &lt;strong&gt;Basics&lt;/strong&gt; page of the Create A Virtual Machine wizard, select the following settings and then choose &lt;strong&gt;Review + Create&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Subscription&lt;/td&gt;
&lt;td&gt;Your subscription&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resource Group&lt;/td&gt;
&lt;td&gt;rg-alpha&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Virtual machine name&lt;/td&gt;
&lt;td&gt;WS-VM1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Region&lt;/td&gt;
&lt;td&gt;East US&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Availability options&lt;/td&gt;
&lt;td&gt;No infrastructure redundancy required&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Security type&lt;/td&gt;
&lt;td&gt;Standard&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image&lt;/td&gt;
&lt;td&gt;Windows Server 2022 Datacenter: Azure Edition – x64 Gen2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VM architecture&lt;/td&gt;
&lt;td&gt;x64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Size&lt;/td&gt;
&lt;td&gt;Standard_D4s_v3 – 4 vcpus, 16 GiB memory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Administrator account&lt;/td&gt;
&lt;td&gt;prime&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Password&lt;/td&gt;
&lt;td&gt;[Select a unique secure password] P@ssw0rdP@ssw0rd&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Inbound ports&lt;/td&gt;
&lt;td&gt;RDP 3389&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbnjb4e06bh0b2brx4mz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbnjb4e06bh0b2brx4mz.png" alt="create" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdk1jgzx0l1jphhat7q9m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdk1jgzx0l1jphhat7q9m.png" alt="create" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8hu77vvfn7vindwxaa6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8hu77vvfn7vindwxaa6.png" alt="create" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4.Review the settings and select &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06blxoig8d13pway0q0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06blxoig8d13pway0q0r.png" alt="create" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5.Wait for the deployment to complete, then choose &lt;strong&gt;Go to resource&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8s5v6s48aqdykxfca3xl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8s5v6s48aqdykxfca3xl.png" alt="gtr" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6.On the &lt;strong&gt;WS-VM1 properties&lt;/strong&gt; page, choose &lt;strong&gt;Networking&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqmbgyjfv2i6hieu4jyo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqmbgyjfv2i6hieu4jyo.png" alt="ntwk" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;7.On the &lt;strong&gt;Networking&lt;/strong&gt; page, select the &lt;code&gt;RDP&lt;/code&gt; rule.&lt;/p&gt;

&lt;p&gt;8.On the RDP rule pane, change the Source to &lt;strong&gt;My IP address&lt;/strong&gt; and choose &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuj2ugsdnshrvy7zskpz3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuj2ugsdnshrvy7zskpz3.png" alt="rdp" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This restricts incoming RDP connections to the IP address you’re currently using.&lt;br&gt;
9.On the &lt;strong&gt;Networking&lt;/strong&gt; page, choose &lt;strong&gt;Add inbound port rule&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9jc58rwndrnefahhrgz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9jc58rwndrnefahhrgz.png" alt="portrules" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkc93rgn5ytbgsm811xj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkc93rgn5ytbgsm811xj.png" alt="inboundrules" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;10.On the &lt;strong&gt;Add inbound security rule&lt;/strong&gt; page, configure the following settings and choose &lt;strong&gt;Add&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Source&lt;/td&gt;
&lt;td&gt;Any&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Source port ranges&lt;/td&gt;
&lt;td&gt;*&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Destination&lt;/td&gt;
&lt;td&gt;Any&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Service&lt;/td&gt;
&lt;td&gt;HTTP&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Action&lt;/td&gt;
&lt;td&gt;Allow&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Priority&lt;/td&gt;
&lt;td&gt;310&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Name&lt;/td&gt;
&lt;td&gt;AllowAnyHTTPInbound&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqn6t9v5f9a5cutskswc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqn6t9v5f9a5cutskswc.png" alt="rules" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;11.On the &lt;strong&gt;WS-VM1&lt;/strong&gt; page, choose &lt;strong&gt;Connect&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h47o8t2gznzzsh4s6o0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h47o8t2gznzzsh4s6o0.png" alt="connect" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;12.Under Native RDP, choose &lt;strong&gt;Select&lt;/strong&gt;.&lt;br&gt;
13.On the &lt;strong&gt;Native RDP&lt;/strong&gt; page, choose &lt;strong&gt;Download RDP file&lt;/strong&gt; and then open the file. Opening the RDP file opens the Remote Desktop Connection dialog box.&lt;/p&gt;

&lt;p&gt;14.On the &lt;strong&gt;Windows Security&lt;/strong&gt; dialog box, choose &lt;strong&gt;More Choices&lt;/strong&gt; and then choose &lt;strong&gt;Use a different account&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;15.Enter the username as .\prime and the password as the secure password you chose in Step 3, and choose &lt;strong&gt;OK&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehswqzxxhm70r1rtrjob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehswqzxxhm70r1rtrjob.png" alt="securitybox" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;16.When signed into the Windows Server virtual machine, right-click the &lt;strong&gt;Start&lt;/strong&gt; button and then choose &lt;strong&gt;Windows PowerShell (Admin)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;17.At the elevated PowerShell prompt, type the following command and press &lt;strong&gt;Enter&lt;/strong&gt;: &lt;code&gt;Install-WindowsFeature Web-Server -IncludeAllSubFeature -IncludeManagementTools&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfylj0v7ckfc91xbu6it.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfylj0v7ckfc91xbu6it.png" alt="Installing" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;18.When the installation completes, run the following command to change to the web server root directory: &lt;code&gt;cd c:\inetpub\wwwroot\&lt;/code&gt;&lt;br&gt;
19.Run the following command to download the sample page (&lt;code&gt;wget&lt;/code&gt; is PowerShell’s built-in alias for &lt;code&gt;Invoke-WebRequest&lt;/code&gt;): &lt;code&gt;wget &lt;a href="https://raw.githubusercontent.com/Azure-Samples/html-docs-hello-world/master/index.html" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/Azure-Samples/html-docs-hello-world/master/index.html&lt;/a&gt; -OutFile index.html&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4vvc0a6yoqf6gi0kxyz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4vvc0a6yoqf6gi0kxyz.png" alt="complete" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy and configure LX-VM2
&lt;/h2&gt;

&lt;p&gt;In this exercise, you deploy and configure a Linux virtual machine.&lt;/p&gt;

&lt;p&gt;1.In the Azure Portal Search Bar, enter &lt;strong&gt;Virtual Machines&lt;/strong&gt; and select &lt;strong&gt;Virtual Machines&lt;/strong&gt; from the list of results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbinra28zxr2wynji4qk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbinra28zxr2wynji4qk.png" alt="createvm" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2.On the &lt;strong&gt;Virtual Machines&lt;/strong&gt; page, choose &lt;strong&gt;Create&lt;/strong&gt; and select &lt;strong&gt;Azure Virtual Machine&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cyo2pct3bd9dct6g878.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cyo2pct3bd9dct6g878.png" alt="vmcreate" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.On the &lt;strong&gt;Basics&lt;/strong&gt; page of the Create A Virtual Machine wizard, select the following settings and then choose &lt;strong&gt;Review + Create&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Subscription&lt;/td&gt;
&lt;td&gt;Your subscription&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resource Group&lt;/td&gt;
&lt;td&gt;rg-alpha&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Virtual machine name&lt;/td&gt;
&lt;td&gt;Linux-VM2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Region&lt;/td&gt;
&lt;td&gt;East US&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Availability options&lt;/td&gt;
&lt;td&gt;No infrastructure redundancy required&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Security type&lt;/td&gt;
&lt;td&gt;Standard&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image&lt;/td&gt;
&lt;td&gt;Ubuntu Server 20.04 LTS – x64 Gen2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VM architecture&lt;/td&gt;
&lt;td&gt;x64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Size&lt;/td&gt;
&lt;td&gt;Standard_D2s_v3 – 2 vcpus, 8 GiB memory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Authentication type&lt;/td&gt;
&lt;td&gt;Password&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Username&lt;/td&gt;
&lt;td&gt;Prime&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Password&lt;/td&gt;
&lt;td&gt;[Select a unique secure password] P@ssw0rdP@ssw0rd&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Public inbound ports&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsnjl7vyffzwyit4w1py6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsnjl7vyffzwyit4w1py6.png" alt="vm" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxmklw6swvr2kunbatpi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxmklw6swvr2kunbatpi.png" alt="vm" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4.Review the information and choose &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8z1ubdj80epwr3ctm9pc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8z1ubdj80epwr3ctm9pc.png" alt="review" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5.After the VM deploys, open the &lt;strong&gt;VM properties&lt;/strong&gt; page and choose &lt;strong&gt;Extensions + Applications&lt;/strong&gt; under &lt;strong&gt;Settings&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxh49kqiwonnrtt6czm3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxh49kqiwonnrtt6czm3.png" alt="xtensn" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6.Choose &lt;strong&gt;Add&lt;/strong&gt; and select the &lt;strong&gt;Network Watcher Agent for Linux&lt;/strong&gt;. Choose &lt;strong&gt;Next&lt;/strong&gt; and then choose &lt;strong&gt;Review and Create&lt;/strong&gt;. Choose &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdodt1quf5191htzmhcst.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdodt1quf5191htzmhcst.png" alt="nightwatcher" width="800" height="755"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6qi2ml8pfv4giwk9l7qd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6qi2ml8pfv4giwk9l7qd.png" alt="create" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;NOTE: The installation and configuration of the OmsAgentForLinux extension will be performed in Exercise 2 after the Log Analytics workspace is created&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy a web app with an SQL Database
&lt;/h2&gt;

&lt;p&gt;1.Ensure that you’re signed into the Azure Portal.&lt;br&gt;
2.In your browser, open a new browser tab and navigate to &lt;a href="https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-sql-database" rel="noopener noreferrer"&gt;https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-sql-database&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.On the GitHub page, choose &lt;strong&gt;Deploy to Azure&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kz7z0d3p6z6zlgnqrec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kz7z0d3p6z6zlgnqrec.png" alt="github" width="800" height="404"&gt;&lt;/a&gt;&lt;br&gt;
4.A new tab opens. If necessary, re-sign into Azure with the account that has Global Administrator privileges.&lt;br&gt;
5.On the &lt;strong&gt;Basics&lt;/strong&gt; page, select &lt;strong&gt;Edit template&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0n5dkswppph6c0dhaj96.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0n5dkswppph6c0dhaj96.png" alt="edittemplate" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6.In the template editor, delete the contents of lines 158 to 174 inclusive and delete the “,” on line 157. Choose &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwp1wqeidifryfici56na.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwp1wqeidifryfici56na.png" alt="deletelines" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;7. On the &lt;strong&gt;Basics&lt;/strong&gt; page, provide the following information and choose &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Subscription&lt;/td&gt;
&lt;td&gt;Your subscription&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resource Group&lt;/td&gt;
&lt;td&gt;rg-alpha&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Region&lt;/td&gt;
&lt;td&gt;East US&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sku Name&lt;/td&gt;
&lt;td&gt;F1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sku Capacity&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sql Administrator Login&lt;/td&gt;
&lt;td&gt;prime&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sql Administrator Login Password&lt;/td&gt;
&lt;td&gt;Select a unique, secure password (this walkthrough uses P@ssw0rdP@ssw0rd)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
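&lt;p&gt;If you prefer scripting this deployment instead of clicking through the portal, the table values can be captured in an ARM parameters file and passed to the Azure CLI. This is only a sketch: the parameter names below follow the table headings and may not match the template exactly, so verify them against the template before use.&lt;/p&gt;

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "skuName": { "value": "F1" },
    "skuCapacity": { "value": 1 },
    "sqlAdministratorLogin": { "value": "prime" },
    "sqlAdministratorLoginPassword": { "value": "P@ssw0rdP@ssw0rd" }
  }
}
```

&lt;p&gt;It could then be deployed with &lt;code&gt;az deployment group create --resource-group rg-alpha --template-file azuredeploy.json --parameters @azuredeploy.parameters.json&lt;/code&gt; (file names here are illustrative).&lt;/p&gt;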

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqkdzcxaf3xxhov8l7yb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqkdzcxaf3xxhov8l7yb.png" alt="create" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmwpolareylqbeh4wqzd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmwpolareylqbeh4wqzd.png" alt="error" width="800" height="382"&gt;&lt;/a&gt;&lt;br&gt;
Quota limits are often region-specific. If you are hitting a zero-limit in one location, another might have availability.&lt;br&gt;
Go back to the &lt;strong&gt;Basics&lt;/strong&gt; tab and try switching the Region to a major hub like East US, West US 2, or North Europe.&lt;/p&gt;

&lt;p&gt;In my case, Korea Central has recently been a reliable alternative when I hit subscription roadblocks in other regions.&lt;br&gt;
Hence, I changed the resource group to &lt;code&gt;rg-alpha2&lt;/code&gt; and the Region to &lt;code&gt;Korea Central&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;NOTE: If you absolutely need that specific region and size, you have to ask Microsoft to "unlock" it for you by submitting a quota increase request&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyierzmc315x95y5tt058.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyierzmc315x95y5tt058.png" alt="change" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;8. Review the information presented and select &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekino22uv804ty0fi5wx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekino22uv804ty0fi5wx.png" alt="create" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;9. After the deployment completes, choose &lt;strong&gt;Go to resource group&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz88h2pvbej24lqn3y4vl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz88h2pvbej24lqn3y4vl.png" alt="gtd" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7e1vqrzwuqau6o74ekmb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7e1vqrzwuqau6o74ekmb.png" alt="overview" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy a Linux web app
&lt;/h2&gt;

&lt;p&gt;1. Ensure that you’re signed in to the Azure Portal.&lt;br&gt;
2. Open a new browser tab and navigate to &lt;a href="https://learn.microsoft.com/en-us/samples/azure/azure-quickstart-templates/webapp-basic-linux/" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/samples/azure/azure-quickstart-templates/webapp-basic-linux/&lt;/a&gt;&lt;br&gt;
3. On the GitHub page, choose &lt;strong&gt;Deploy to Azure&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3m8v9y1ohvcvkr2lv2a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3m8v9y1ohvcvkr2lv2a.png" alt="deploytoazure" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4. On the &lt;strong&gt;Basics&lt;/strong&gt; page, provide the following information and choose &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Subscription&lt;/td&gt;
&lt;td&gt;Your subscription&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resource Group&lt;/td&gt;
&lt;td&gt;rg-alpha&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Region&lt;/td&gt;
&lt;td&gt;East US&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Web app Name&lt;/td&gt;
&lt;td&gt;AzureLinuxAppWXYZ (assign a random number to the final four characters of the name)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sku&lt;/td&gt;
&lt;td&gt;S1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Linux Fx version&lt;/td&gt;
&lt;td&gt;php|7.4&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
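&lt;p&gt;As with the first template, these inputs can be expressed as an ARM parameters file for CLI deployment. A sketch only: the parameter names are my assumption based on the table and should be checked against the quickstart template itself.&lt;/p&gt;

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "webAppName": { "value": "AzureLinuxAppWXYZ" },
    "sku": { "value": "S1" },
    "linuxFxVersion": { "value": "php|7.4" }
  }
}
```

&lt;p&gt;Remember that the web app name must be globally unique, so replace the WXYZ suffix with your own random characters.&lt;/p&gt;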

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3od68147m5u002qc5urh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3od68147m5u002qc5urh.png" alt="details" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5. Review the information and choose &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj51oaqqpdnxcfz9ekn01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj51oaqqpdnxcfz9ekn01.png" alt="error" width="800" height="385"&gt;&lt;/a&gt;&lt;br&gt;
Another quota error surfaced here, so I changed the resource group and Region again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam57vvfehduj826idf1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam57vvfehduj826idf1d.png" alt="change" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqe50mm8m52o9ab68rl9q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqe50mm8m52o9ab68rl9q.png" alt="create" width="800" height="497"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Deployment failed&lt;/strong&gt; because the web app name already exists; web app names must be globally unique across Azure. Click &lt;strong&gt;Redeploy&lt;/strong&gt; and edit the name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdnpqqzd1mz1fcy641sbk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdnpqqzd1mz1fcy641sbk.png" alt="failed to deploy" width="800" height="440"&gt;&lt;/a&gt;&lt;br&gt;
6. Once the deployment is complete, choose &lt;strong&gt;Go to resource&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwf7vi997sto7d9o0rapx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwf7vi997sto7d9o0rapx.png" alt="resource" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The "Invisible Skill of Cloud Engineering&lt;/strong&gt;&lt;br&gt;
Setting up a multi-tier environment on Azure is more than a checklist of installations, it is a lesson in resiliency. This project challenged me to manage a &lt;strong&gt;hybrid&lt;/strong&gt; ecosystem—bridging Windows Server management, Linux observability, and SQL-backed web applications, all while navigating real-world infrastructure constraints like subscription quotas and regional limitations.&lt;/p&gt;

</description>
      <category>powershell</category>
      <category>devops</category>
      <category>linux</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>KUBERNETES - Deploying a Standalone Application 1</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Sun, 05 Apr 2026 16:38:43 +0000</pubDate>
      <link>https://dev.to/rahimah_dev/deploying-a-standalone-application-1-le0</link>
      <guid>https://dev.to/rahimah_dev/deploying-a-standalone-application-1-le0</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes&lt;/strong&gt; has a reputation for being a &lt;strong&gt;wall of YAML&lt;/strong&gt;, but it doesn't have to start that way. If you’re looking for a visual, hands-on way to understand how &lt;em&gt;Pods, Deployments, and Services&lt;/em&gt; actually interact, you’re in the right place. Today, we’re firing up the &lt;code&gt;Minikube Dashboard&lt;/code&gt; to deploy a &lt;strong&gt;standalone web server&lt;/strong&gt; with just a few clicks. By the end of this post, you won't just have an &lt;code&gt;Nginx server&lt;/code&gt; running, you'll understand the &lt;code&gt;labels&lt;/code&gt; and &lt;code&gt;selectors&lt;/code&gt; that hold the entire K8s ecosystem together.&lt;/p&gt;

&lt;p&gt;Learning Objectives&lt;/p&gt;

&lt;p&gt;By the end of this series, you should be able to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy an application from the dashboard.&lt;/li&gt;
&lt;li&gt;Deploy an application from a &lt;code&gt;YAML&lt;/code&gt; file using &lt;code&gt;kubectl&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Expose a service using &lt;code&gt;NodePort&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Access the application from outside the &lt;code&gt;Minikube&lt;/code&gt; cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deploying an Application Using the Dashboard (1)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's learn how to deploy an &lt;code&gt;nginx webserver&lt;/code&gt; using the nginx container image from Docker Hub.&lt;/p&gt;

&lt;p&gt;Start &lt;code&gt;Minikube&lt;/code&gt; and verify that it is running. Run this command first:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ minikube start&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftinwuuw0lqdld8a9h3xu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftinwuuw0lqdld8a9h3xu.png" alt="ministart" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then verify &lt;strong&gt;Minikube&lt;/strong&gt; status:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ minikube status&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fungyk6pz99tm8sf7wn0a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fungyk6pz99tm8sf7wn0a.png" alt="ministatus" width="800" height="149"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Start the &lt;strong&gt;Minikube&lt;/strong&gt; Dashboard. To access the Kubernetes &lt;strong&gt;Web UI&lt;/strong&gt;, we need to run the following command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ minikube dashboard&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Running this command will open up a browser with the Kubernetes &lt;strong&gt;Web UI&lt;/strong&gt;, which we can use to manage containerized applications. By default, the dashboard is connected to the default Namespace. Therefore, all the operations will be performed inside the default Namespace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyk5nzsrx8crkrlc9fuu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyk5nzsrx8crkrlc9fuu.png" alt="dashboard" width="800" height="402"&gt;&lt;/a&gt;&lt;br&gt;
Deploying an Application - Accessing the Dashboard&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; If the browser does not open a new tab displaying the Dashboard as expected, check the output in your terminal, as it may display a link for the Dashboard (together with some error messages). Copy and paste that link into a new browser tab. Depending on your terminal's features, you may be able to simply click or right-click the link to open it directly in the browser.&lt;/p&gt;

&lt;p&gt;The link may look similar to:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://127.0.0.1:40235/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/" rel="noopener noreferrer"&gt;http://127.0.0.1:40235/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Chances are that the only difference is the PORT number, which above is 40235. Your port number may be different.&lt;br&gt;
After a logout/login or a reboot of your workstation the expected behavior may be observed (where the minikube dashboard command directly opens a new tab in your browser displaying the Dashboard)&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploying an Application Using the Dashboard (2)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Deploy a webserver using the nginx image. From the dashboard, click on the &lt;code&gt;+&lt;/code&gt; symbol at the top right corner of the Dashboard. That will open the create interface as seen below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijz47xvi473p7d629so9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijz47xvi473p7d629so9.png" alt="plus" width="800" height="382"&gt;&lt;/a&gt;&lt;br&gt;
Create a New Application - Interface&lt;/p&gt;

&lt;p&gt;From there, we can create an application using valid YAML/JSON configuration data, from a definition manifest file, or manually from the &lt;strong&gt;Create from form&lt;/strong&gt; tab. Click on the Create from form tab and provide the following application details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The application name is &lt;code&gt;web-dash&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The container image to use is &lt;code&gt;nginx&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The replica count, or the number of Pods, is 1.&lt;/li&gt;
&lt;li&gt;Service is External, Port 8080, Target port 80, Protocol TCP.&lt;/li&gt;
&lt;li&gt;Namespace is &lt;code&gt;default&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
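&lt;p&gt;For context, the form fields above map to a declarative manifest similar to the following. This is a rough sketch of what the Dashboard generates, not its exact output; in particular, the resource names and the interpretation of "External" as a LoadBalancer Service are assumptions on my part:&lt;/p&gt;

```yaml
# Deployment: 1 nginx replica labeled k8s-app: web-dash
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-dash
  namespace: default
  labels:
    k8s-app: web-dash
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: web-dash
  template:
    metadata:
      labels:
        k8s-app: web-dash
    spec:
      containers:
        - name: web-dash
          image: nginx
          ports:
            - containerPort: 80
---
# Service: exposes port 8080 externally, forwarding to container port 80
apiVersion: v1
kind: Service
metadata:
  name: web-dash
  namespace: default
  labels:
    k8s-app: web-dash
spec:
  type: LoadBalancer   # assumed equivalent of the form's "External" option
  ports:
    - port: 8080
      targetPort: 80
      protocol: TCP
  selector:
    k8s-app: web-dash
```

&lt;p&gt;We will work with manifests like this directly in the YAML-based deployment later in the series.&lt;/p&gt;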

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fou8266ys3iqv8ghpdvcc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fou8266ys3iqv8ghpdvcc.png" alt="deploying" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm27nvqn4zsquolwh03nc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm27nvqn4zsquolwh03nc.png" alt="deploying" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff72o2p5cd93i05itlapv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff72o2p5cd93i05itlapv.png" alt="deploying" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fslx65qjk3p6rsxuh23p6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fslx65qjk3p6rsxuh23p6.png" alt="deploy" width="800" height="383"&gt;&lt;/a&gt;&lt;br&gt;
Deploy a Containerized Application - Interface&lt;/p&gt;

&lt;p&gt;If we click on &lt;strong&gt;Show Advanced Options&lt;/strong&gt;, we can specify options such as Labels, Namespace, Resource Requests, etc. By default, the Label is set to the application name. In our example, the k8s-app: web-dash Label is set on all objects created by this Deployment: &lt;code&gt;Pods&lt;/code&gt; and &lt;code&gt;Services&lt;/code&gt; (when exposed).&lt;/p&gt;

&lt;p&gt;By clicking on the Deploy button, we trigger the deployment. As expected, the Deployment web-dash will create a ReplicaSet (web-dash-74d8bd488f), which will eventually create 1 Pod replica (web-dash-74d8bd488f-dwbzz).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; If any issues are encountered with the simple nginx image name, use the fully qualified name docker.io/library/nginx in the Container Image field (or try the k8s.gcr.io/nginx URL instead).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; The resource names are unique and are provided for illustrative purposes only. The resources in your clusters and dashboards will display different names, but the naming structure follows the same convention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploying an Application Using the Dashboard (3)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once we create the web-dash Deployment, we can use the resource navigation panel from the left side of the Dashboard to display details of &lt;strong&gt;Deployments&lt;/strong&gt;, &lt;strong&gt;ReplicaSets&lt;/strong&gt;, and &lt;strong&gt;Pods&lt;/strong&gt; in the default Namespace.&lt;/p&gt;

&lt;p&gt;From the Dashboard we can display individual objects’ properties by simply clicking the object’s name. From the commands menu symbol (the vertical three-dots) at the far right, we can easily manage their state. Scale the Deployment up to a higher number of &lt;strong&gt;replicas&lt;/strong&gt; and observe the additional Pods spin up, or scale it down to fewer replicas. Try deleting one of the individual Pods of the Deployment. What do you notice after a few seconds? We can even delete the Deployment, an action that results in all its Pod replicas being terminated. But for now, let’s keep the Deployment so we can analyze it further.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24gr1zxbm7uesquptohx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24gr1zxbm7uesquptohx.png" alt="depore" width="800" height="382"&gt;&lt;/a&gt;&lt;br&gt;
Dashboard displaying Deployments, Pods, and ReplicaSets&lt;/p&gt;

&lt;p&gt;The resources displayed by the Dashboard match one-to-one the resources displayed from the &lt;code&gt;CLI&lt;/code&gt; via &lt;code&gt;kubectl&lt;/code&gt;. List the Deployments. We can list all the Deployments in the default Namespace using the &lt;code&gt;kubectl&lt;/code&gt; get deployments command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl get deployments&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;List the ReplicaSets. We can list all the ReplicaSets in the default Namespace using the kubectl get replicasets command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl get replicasets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;List the Pods. We can list all the Pods in the default namespace using the kubectl get pods command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl get pods&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;List Deployment, ReplicaSet and Pod with a single command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl get deploy,rs,po&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01t9030zz0kvc12nvyun.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01t9030zz0kvc12nvyun.png" alt="resources" width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exploring Labels and Selectors (1)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Earlier, we saw that labels and selectors play an important role in logically grouping a subset of objects to perform operations. Let's take a closer look at them.&lt;/p&gt;

&lt;p&gt;Display the Pod's details. We can look at an object's details using the kubectl describe command. In the following example, you can see a Pod's description:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl describe pod web-dash-6bf994f6&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yow8r43jk8mq6edsqiw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yow8r43jk8mq6edsqiw.png" alt="describe" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ymyng2oawfx977p90cj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ymyng2oawfx977p90cj.png" alt="describe" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8gjor0xgwz59cqwfnlv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8gjor0xgwz59cqwfnlv.png" alt="describe" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exploring Labels and Selectors (2)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;List the Pods, along with their attached Labels. With the &lt;code&gt;-L&lt;/code&gt; option to the kubectl get pods command, we add extra columns in the output to list Pods with their attached Label keys and their values. In the following example, we are listing Pods with the Label keys k8s-app and label2:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl get pods -L k8s-app,label2&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All of the Pods are listed, as each Pod has the Label key k8s-app with value set to web-dash. We can see that in the K8S-APP column. As none of the Pods have the &lt;strong&gt;label2&lt;/strong&gt; Label key, no values are listed under the LABEL2 column.&lt;/p&gt;

&lt;p&gt;Select the Pods with a given Label. To use a selector with the kubectl get pods command, we can use the &lt;code&gt;-l&lt;/code&gt; option. In the following example, we are selecting all the Pods that have the k8s-app Label key set to value web-dash:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl get pods -l k8s-app=web-dash&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21hpsk0qd5gzp1knv75v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21hpsk0qd5gzp1knv75v.png" alt="pods" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the example above, we listed all the Pods we created, as all of them have the k8s-app Label key set to value web-dash.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Try using k8s-app=webserver as the Selector&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl get pods -l k8s-app=webserver&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffjjoqoqc7o9us27hndht.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffjjoqoqc7o9us27hndht.png" alt="no resources" width="800" height="437"&gt;&lt;/a&gt;&lt;br&gt;
No resources found.&lt;br&gt;
&lt;em&gt;As expected, no Pods are listed.&lt;/em&gt;&lt;/p&gt;
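&lt;p&gt;The mechanism behind these queries is simple: Labels are arbitrary key/value pairs in an object's metadata, and a Selector is just a match condition against them. Schematically (a generic illustration, not the exact manifest of our Pods):&lt;/p&gt;

```yaml
# Labels live under metadata on any Kubernetes object
metadata:
  labels:
    k8s-app: web-dash   # key: value pair attached to the object

# A Service spec (or a kubectl -l query) selects by matching those pairs
spec:
  selector:
    k8s-app: web-dash   # matches only objects carrying this exact pair
```

&lt;p&gt;Because the webserver query asked for a key/value pair no object carries, the empty result is exactly what the selector model predicts.&lt;/p&gt;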

&lt;p&gt;&lt;strong&gt;Deploying an Application Using the CLI (1)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To deploy an application using the CLI, let's first delete the Deployment we created earlier.&lt;/p&gt;

&lt;p&gt;One method to delete the Deployment we created earlier is from the Dashboard, from the Deployment’s commands menu. Another method is using the kubectl delete command. Next, we are deleting the web-dash Deployment we created earlier:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl delete deployments web-dash&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Deleting a Deployment also deletes the ReplicaSet and the Pods it created:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl get replicasets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;$ kubectl get pods&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhe093jyy1im391x7jn0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhe093jyy1im391x7jn0.png" alt="delete" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this first installment, we covered Visual Orchestration &amp;amp; Lifecycle Basics and established a rock-solid foundation for managing containerized applications. We moved beyond simple container execution and began exploring the automated world of Kubernetes orchestration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core Competencies Achieved&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Cluster Lifecycle Management:&lt;/strong&gt;&lt;br&gt;
Initiated and verified the local Kubernetes environment using minikube start and status. This confirmed the control plane, Kubelet, and API server were operational.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GUI-Driven Deployment:&lt;/strong&gt; Leveraged the Kubernetes Dashboard to deploy a standalone Nginx application. This demonstrated the "Create from Form" workflow, which simplifies resource definition for those new to the ecosystem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource Hierarchy Identification:&lt;/strong&gt; Observed the relationship between Deployments, ReplicaSets, and Pods. We verified how a single Deployment instruction automatically handles the creation of underlying ReplicaSets to ensure the desired state of our Pods.&lt;/p&gt;

&lt;p&gt;See you in the next part!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>minikube</category>
      <category>cloudnative</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Master the Linux Terminal for Modern Data Analytics</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Mon, 30 Mar 2026 13:56:35 +0000</pubDate>
      <link>https://dev.to/rahimah_dev/master-the-linux-terminal-for-modern-data-analytics-him</link>
      <guid>https://dev.to/rahimah_dev/master-the-linux-terminal-for-modern-data-analytics-him</guid>
      <description>&lt;h2&gt;
  
  
  INTRODUCTION
&lt;/h2&gt;

&lt;p&gt;In the high-stakes world of Data Analytics, your tools should never be your bottleneck. Most analysts can build a dashboard, but the elite 1% know how to handle data where it actually lives, that is, the &lt;strong&gt;Command Line&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Imagine a 10GB CSV file that crashes Excel on sight. While others wait for their GUI to load, &lt;em&gt;the modern analyst uses the Linux Terminal to slice, filter, and audit millions of rows in milliseconds&lt;/em&gt;. &lt;br&gt;
As a Data Analyst, I’ve realized that the CLI isn't just an &lt;strong&gt;'extra' skill&lt;/strong&gt;; it is the engine of efficiency in 2026. Whether you are prepping raw data for Power BI or automating file workflows, mastering these 'black screen' secrets is how you move from being a passenger to being the pilot of your data infrastructure.&lt;/p&gt;

&lt;p&gt;Welcome to the world of Linux! Think of the Linux file system as an upside-down tree: everything grows from a single point at the very top. What is the root directory? The root directory is the starting point of the entire Linux file system. &lt;br&gt;
&lt;em&gt;Every single file, folder, and drive on your computer is contained within it&lt;/em&gt;.&lt;br&gt;
It is represented by a single &lt;code&gt;forward slash&lt;/code&gt;: /.&lt;br&gt;
It has no parent directory; it is the absolute top level.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;To get started, you have three ways to use these exact Linux commands on a Windows machine&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. WSL&lt;/strong&gt; &lt;br&gt;
WSL (Windows Subsystem for Linux) is a literal Linux system living inside your Windows computer. It’s what almost all developers use today.&lt;/p&gt;

&lt;p&gt;How to get it: Open your Windows Terminal and type &lt;code&gt;wsl --install&lt;/code&gt;.&lt;br&gt;
This installs a real Linux distribution that runs directly on your machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Git Bash&lt;/strong&gt; &lt;br&gt;
If you install Git for Windows, it comes with &lt;code&gt;Git Bash&lt;/code&gt;. It’s a small emulator that lets you use Linux commands to navigate your Windows folders.&lt;/p&gt;

&lt;p&gt;In Git Bash, your C: drive is usually mapped to /c/.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. PowerShell&lt;/strong&gt; &lt;br&gt;
PowerShell actually has "aliases" for some Linux commands to make life easier for people moving between systems. It serves as a translator.&lt;br&gt;
&lt;strong&gt;NOTE&lt;/strong&gt;: Windows and Linux speak different "languages", though there are some similarities.&lt;br&gt;
Windows uses PowerShell or Command Prompt (CMD), where the root is usually C:. Linux uses the Bash shell, where the root is /.&lt;/p&gt;

&lt;p&gt;I'll be using Git Bash; it is one of the most popular ways for developers to run Linux commands on a Windows computer, and I already have it in VS Code.&lt;/p&gt;

&lt;p&gt;When you open Git Bash in VS Code, you are essentially running a "mini Linux environment" that can see your Windows files.&lt;/p&gt;

&lt;h2&gt;
  
  
  HOW TO USE IT
&lt;/h2&gt;

&lt;p&gt;Open the Terminal: In VS Code, press &lt;code&gt;Ctrl + `&lt;/code&gt; (the backtick key). Or &lt;strong&gt;select View, then Terminal&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0113x46xwnkybutuwfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0113x46xwnkybutuwfd.png" alt="view" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Select Git Bash&lt;/strong&gt;: In the top-right corner of the terminal pane, click the dropdown arrow (usually says "powershell" or "cmd") and select &lt;code&gt;Git Bash&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ps98ymc1vsmvuaaswwo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ps98ymc1vsmvuaaswwo.png" alt="gitbash" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to explore these right now in your terminal, here is how you can use those commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Jump to Root: Type &lt;code&gt;cd /&lt;/code&gt; to move to the very top.&lt;/li&gt;
&lt;li&gt;See Where You Are: Type &lt;code&gt;pwd&lt;/code&gt; (Print Working Directory). It should just show /.&lt;/li&gt;
&lt;li&gt;Look Around: Type &lt;code&gt;ls&lt;/code&gt; to see all the folders (like bin, etc, and home) living inside the root. &lt;em&gt;These aren't your Windows C: drive folders, they are the virtual Linux-style folders Git Bash creates to make your commands work&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6xqubo5bsoeye7di9r8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6xqubo5bsoeye7di9r8.png" alt="root directory" width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;
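&lt;p&gt;If you'd like to run the whole exploration in one go, here is the same sequence as a copy-paste-able snippet (it works the same in Git Bash, WSL, or any Linux shell):&lt;/p&gt;

```shell
# Jump to the root of the file system and look around
cd /
pwd    # prints the current location: /
ls     # lists the top-level folders (bin, etc, home, ...)
```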

&lt;p&gt;&lt;em&gt;This is the most important part for a beginner. Git Bash "mounts" your Windows drives inside the root&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;To go to your C: Drive, type: &lt;strong&gt;cd /c/&lt;/strong&gt;&lt;br&gt;
To go to your Desktop, type: &lt;strong&gt;cd /c/Users/YourUsername/Desktop&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Change directory to the desktop&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;NOTE: My computer uses the username &lt;code&gt;Admin&lt;/code&gt; inside the folders, even though my machine name is &lt;code&gt;RAHIMAH-ISAH&lt;/code&gt;. Linux is very literal about where things are stored.&lt;br&gt;
So &lt;strong&gt;cd /c/Users/Admin/Desktop&lt;/strong&gt; is the full "map" to my Desktop.&lt;/p&gt;

&lt;p&gt;where:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;cd: Change Directory.&lt;/li&gt;
&lt;li&gt;/c/: This is your C: Drive.&lt;/li&gt;
&lt;li&gt;Desktop: This is your destination.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frytov9pog4efen7kxamw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frytov9pog4efen7kxamw.png" alt="desktop" width="800" height="72"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To move from my current location into the &lt;code&gt;My_Analytics&lt;/code&gt; folder on the Desktop, I'll use the cd (change directory) command:&lt;br&gt;
&lt;strong&gt;cd My_Analytics&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffv9ms20s6mxn764mbrhg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffv9ms20s6mxn764mbrhg.png" alt="My_analytics" width="800" height="141"&gt;&lt;/a&gt;&lt;/p&gt;
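&lt;p&gt;The same absolute-path navigation works anywhere. Here is a sketch that uses a throwaway directory as a stand-in for the Desktop (the &lt;code&gt;mktemp&lt;/code&gt; scratch path is an assumption for demo purposes, so the run doesn't depend on my Windows folder layout):&lt;/p&gt;

```shell
# Create a scratch area that plays the role of /c/Users/Admin/Desktop
demo_desktop=$(mktemp -d)
mkdir "$demo_desktop/My_Analytics"   # the folder we want to reach
cd "$demo_desktop/My_Analytics"      # jump straight there with an absolute path
pwd                                  # prints the full path, ending in My_Analytics
```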

&lt;p&gt;&lt;strong&gt;Create a "test" folder&lt;/strong&gt;&lt;br&gt;
To create a new folder (directory), use the &lt;code&gt;mkdir&lt;/code&gt; (make directory) command:&lt;br&gt;
&lt;strong&gt;mkdir test&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But first, move back up to the parent directory: &lt;code&gt;cd ..&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyeku1rzkwecm96lzseo3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyeku1rzkwecm96lzseo3.png" alt="testfolder" width="800" height="141"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to see if your new folder was actually created, type &lt;code&gt;ls&lt;/code&gt;. It will list everything in your current location, and you should see test appearing in the list.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgnt9ieh6s4og9n63dfz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgnt9ieh6s4og9n63dfz.png" alt="LS" width="800" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Out of curiosity, as a beginner you're allowed to check the Desktop too&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvihtadnaldazttyr3o95.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvihtadnaldazttyr3o95.png" alt="Desktop" width="734" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's learn how to use these, try this "Real World" sequence in the VS Code Git Bash:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Go to your User folder&lt;/strong&gt;: &lt;strong&gt;cd ~&lt;/strong&gt; (The tilde ~ is a Linux shortcut for your home).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a project folder&lt;/strong&gt;: &lt;strong&gt;mkdir my-linux-practice2&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enter the folder&lt;/strong&gt;: &lt;strong&gt;cd my-linux-practice2&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdh4c44wk45dmfu2h0fz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdh4c44wk45dmfu2h0fz.png" alt="mkdir" width="800" height="187"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a blank file&lt;/strong&gt;: &lt;strong&gt;touch notes.txt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Verify it's there: &lt;strong&gt;ls&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynzgf76zlf5kjundl1za.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynzgf76zlf5kjundl1za.png" alt="blank file" width="800" height="150"&gt;&lt;/a&gt;&lt;/p&gt;
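&lt;p&gt;Here is that "Real World" sequence as one snippet (safe to re-run: the &lt;code&gt;-p&lt;/code&gt; flag stops &lt;code&gt;mkdir&lt;/code&gt; from complaining if the folder already exists):&lt;/p&gt;

```shell
cd ~                         # the tilde is a shortcut for your home directory
mkdir -p my-linux-practice2  # create the project folder (-p: ignore if it exists)
cd my-linux-practice2        # enter it
touch notes.txt              # create a blank file
ls                           # verify notes.txt is there
```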

&lt;p&gt;I used "-" instead of "_" in the folder name.&lt;br&gt;
Let's make the correction together.&lt;/p&gt;

&lt;p&gt;In Linux, we use the mv command (short for move) to rename files and folders.&lt;/p&gt;

&lt;p&gt;Since I am currently on the Desktop, I can "move" the folder from the old name to the new name.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Correction Command&lt;/strong&gt;&lt;br&gt;
Type this and press Enter:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;mv linux-practice linux_practice&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;How it works&lt;/em&gt;&lt;br&gt;
The mv command follows a simple logic:&lt;br&gt;
&lt;strong&gt;mv [old_name] [new_name]&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;linux-practice&lt;/em&gt;: The folder as it exists now.&lt;br&gt;
&lt;em&gt;linux_practice&lt;/em&gt;: What you want it to be named.&lt;br&gt;
I did not remember the exact folder name, so I used the &lt;strong&gt;ls&lt;/strong&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F13xx41ydpc6khjzr990g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F13xx41ydpc6khjzr990g.png" alt="ls" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And then verified the name change by running &lt;strong&gt;ls&lt;/strong&gt; command.&lt;/p&gt;
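&lt;p&gt;You can rehearse the rename safely first. This sketch uses a scratch directory so nothing real gets touched:&lt;/p&gt;

```shell
cd "$(mktemp -d)"                  # scratch directory, nothing real gets renamed
mkdir linux-practice               # the mistyped folder name
mv linux-practice linux_practice   # mv old_name new_name renames in place
ls                                 # only linux_practice should be listed
```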

&lt;p&gt;&lt;strong&gt;Create a message&lt;/strong&gt;&lt;br&gt;
We are going to use the &lt;code&gt;echo&lt;/code&gt; command. It literally "echoes" whatever you type back to you, but we are going to use a special symbol &lt;code&gt;&amp;gt;&lt;/code&gt; to tell it to &lt;code&gt;echo&lt;/code&gt; into a file instead.&lt;/p&gt;

&lt;p&gt;Remember to enter the directory containing the file if you aren't already there:&lt;br&gt;
&lt;strong&gt;cd /c/Users/Admin/Desktop&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;echo "Hello from the Linux terminal!" &amp;gt; notes.txt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Read your file&lt;/strong&gt;&lt;br&gt;
Now, let's see if the file actually contains that message.&lt;/p&gt;

&lt;p&gt;Type this and press Enter:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;cat notes.txt&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;cat (short for concatenate)&lt;/code&gt; is the standard way to quickly read the contents of a file in the terminal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdrguvxd9su91tf4u57v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdrguvxd9su91tf4u57v.png" alt="echo" width="800" height="124"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What just happened?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;echo "text": Prepared the message.&lt;/li&gt;
&lt;li&gt;&amp;gt;: This is called a Redirect. It took the message that would normally print on the screen and "poured" it into the file.&lt;/li&gt;
&lt;li&gt;cat: Showed you the result.&lt;/li&gt;
&lt;/ul&gt;
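&lt;p&gt;The write-then-read round trip, in one snippet you can run in a scratch directory:&lt;/p&gt;

```shell
cd "$(mktemp -d)"                                  # scratch directory
echo "Hello from the Linux terminal!" > notes.txt  # the redirect pours the text into the file
cat notes.txt                                      # reads it back
```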

&lt;p&gt;Since we've already used &lt;code&gt;&amp;gt;&lt;/code&gt; to create the file, let's learn how to add a second line to it without deleting the first one.&lt;/p&gt;

&lt;p&gt;Step 1: Go back into your folder (if you left it):&lt;br&gt;
&lt;strong&gt;cd /c/Users/Admin/Desktop/notes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 2: Add a new line (use two symbols &amp;gt;&amp;gt;):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&amp;gt; = Overwrites the file (deletes old stuff).&lt;/li&gt;
&lt;li&gt;&amp;gt;&amp;gt; = Appends (adds to the bottom).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;echo "This is my second line!" &amp;gt;&amp;gt; notes.txt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Read it (check your spelling!)&lt;br&gt;
&lt;strong&gt;cat notes.txt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncuyn3bqhlcfj4ua1pqv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncuyn3bqhlcfj4ua1pqv.png" alt="secondline" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use echo with &amp;gt;&amp;gt; to add another line:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;echo "This is my third line!" &amp;gt;&amp;gt; notes.txt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Verify the result:&lt;br&gt;
If you run &lt;code&gt;cat notes.txt&lt;/code&gt; now, you should see:&lt;/p&gt;

&lt;p&gt;Hello from the Linux terminal!&lt;br&gt;
This is my second line!&lt;br&gt;
This is my third line!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;:&lt;br&gt;
In Linux, &lt;em&gt;case sensitivity is everything&lt;/em&gt;. If you create a file named Notes.txt (with a capital N) and then try to read notes.txt (with a lowercase n), the terminal thinks they are two completely different files.&lt;br&gt;
When you type a command like cat and press Enter, the terminal will seem to "freeze." This is because cat without a filename waits for you to type something into it.&lt;/p&gt;

&lt;p&gt;Whenever a command gets stuck like that, press &lt;code&gt;Ctrl + C&lt;/code&gt; on your keyboard to kill the process and get your prompt back.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hidden files&lt;/strong&gt;&lt;br&gt;
In Linux, you can make a file "hidden" just by starting its name with a &lt;strong&gt;period (.)&lt;/strong&gt;. These are usually used for important system settings that you don't want to see cluttering your folders.&lt;/p&gt;

&lt;p&gt;Step 1: Create a hidden file (in my folder &lt;code&gt;my-linux-practice2&lt;/code&gt;)&lt;br&gt;
Type this and press Enter:&lt;br&gt;
&lt;strong&gt;touch .secret_note.txt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 2: Try to find it with a normal &lt;code&gt;ls&lt;/code&gt;&lt;br&gt;
Type this:&lt;br&gt;
&lt;strong&gt;ls&lt;/strong&gt;&lt;br&gt;
(Notice that it doesn't show up, even though it's there.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscxl324l18s491pttosq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscxl324l18s491pttosq.png" alt="secretfile" width="800" height="152"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Reveal the hidden files&lt;br&gt;
To see everything (including hidden files), you need to add a "flag" to your command.&lt;br&gt;
Type this:&lt;br&gt;
&lt;strong&gt;ls -a&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-a&lt;/code&gt;: stands for "all".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0r1za65jrkz7aaiaaxf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0r1za65jrkz7aaiaaxf.png" alt="toseeall" width="800" height="130"&gt;&lt;/a&gt;&lt;br&gt;
&lt;code&gt;.&lt;/code&gt;: This represents the current directory you are in.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;..&lt;/code&gt;: This represents the parent directory (one level up).&lt;/p&gt;

&lt;p&gt;&lt;code&gt;.secret_note.txt&lt;/code&gt;: The brand new hidden file!&lt;/p&gt;
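&lt;p&gt;The hide-and-reveal experiment as one snippet, run in a scratch directory:&lt;/p&gt;

```shell
cd "$(mktemp -d)"          # scratch directory
touch .secret_note.txt     # a leading dot hides the file
ls                         # prints nothing here: hidden files are skipped
ls -a                      # -a (all) reveals ., .., and .secret_note.txt
```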

&lt;p&gt;As a Data Analyst and Cloud Engineer, I see these "dot files" (like .git or .env) all the time in professional work. Knowing how to find them using &lt;code&gt;ls -a&lt;/code&gt; is a critical skill.&lt;/p&gt;

&lt;p&gt;To delete files in Linux, we use the &lt;code&gt;rm&lt;/code&gt; command (short for remove).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Be careful&lt;/strong&gt;: unlike Windows, there is no "Recycle Bin" in the Linux terminal. Once you delete a file with this command, it is gone for good!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Deletion Command&lt;/strong&gt;&lt;br&gt;
Type this and press Enter:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;rm .secret_note.txt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Verify it is gone&lt;br&gt;
Since this was a hidden file, a normal ls wouldn't have shown it anyway. To be 100% sure it’s deleted, you need to use the "all" flag again.&lt;/p&gt;

&lt;p&gt;Type this:&lt;br&gt;
&lt;strong&gt;ls -a&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flir6c4vlv9nnbsc5nwyw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flir6c4vlv9nnbsc5nwyw.png" alt="delete" width="800" height="124"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What you should see: notes.txt, plus the system markers &lt;code&gt;.&lt;/code&gt; and &lt;code&gt;..&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;What should be missing: &lt;code&gt;.secret_note.txt&lt;/code&gt;.&lt;/p&gt;
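&lt;p&gt;Create-then-delete, start to finish, in a scratch directory so the real &lt;code&gt;notes.txt&lt;/code&gt; is untouched:&lt;/p&gt;

```shell
cd "$(mktemp -d)"                 # scratch directory
touch notes.txt .secret_note.txt  # one visible file, one hidden file
rm .secret_note.txt               # permanent: the terminal has no Recycle Bin
ls -a                             # only ., .., and notes.txt remain
```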

&lt;p&gt;While working, for example, with Azure and Power BI, you'll often have folders full of data files. If you ever need to delete an entire folder and everything inside it, you have to add a "recursive" flag:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;rm -r folder_name&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Warning&lt;/strong&gt;: Never type rm -rf /. This tells Linux to "Force Delete Everything starting from the Root," which would erase your entire operating system!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copying is another fundamental skill&lt;/strong&gt;, especially when you want to create backups of your scripts or data reports before you make changes.&lt;/p&gt;

&lt;p&gt;In Linux, we use the &lt;code&gt;cp&lt;/code&gt; (copy) command.&lt;/p&gt;

&lt;p&gt;Step 1: Create a simple copy&lt;br&gt;
Let's take the existing &lt;code&gt;notes.txt&lt;/code&gt; and create a backup called &lt;code&gt;backup.txt&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Type this and press Enter:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;cp notes.txt backup.txt&lt;/strong&gt;&lt;br&gt;
Step 2: Verify the copy&lt;br&gt;
Now, let's confirm that you have two separate files.&lt;/p&gt;

&lt;p&gt;Type this:&lt;br&gt;
&lt;strong&gt;ls&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You should see both notes.txt and backup.txt listed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyl4xy5egnwx2udhd0nca.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyl4xy5egnwx2udhd0nca.png" alt="copy" width="800" height="125"&gt;&lt;/a&gt;&lt;/p&gt;
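&lt;p&gt;A quick way to convince yourself the copy is a real, independent file (scratch directory again):&lt;/p&gt;

```shell
cd "$(mktemp -d)"           # scratch directory
echo "important data" > notes.txt
cp notes.txt backup.txt     # cp source destination
cat backup.txt              # identical contents to notes.txt
```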

&lt;p&gt;&lt;strong&gt;Copy into a new folder&lt;/strong&gt;&lt;br&gt;
Now let's get a bit more organized. Let's create a "logs" folder and copy the file into it.&lt;/p&gt;

&lt;p&gt;Step 1:&lt;br&gt;
Create the folder:&lt;br&gt;
&lt;strong&gt;mkdir logs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 2:&lt;br&gt;
Copy the file into the folder:&lt;br&gt;
&lt;strong&gt;cp notes.txt logs/&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 3:&lt;br&gt;
Check inside the logs folder:&lt;br&gt;
&lt;strong&gt;ls logs&lt;/strong&gt;&lt;br&gt;
This tells ls to look specifically inside the logs directory without you having to &lt;code&gt;cd&lt;/code&gt; into it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4e0c9fu4ch8ghy969my6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4e0c9fu4ch8ghy969my6.png" alt="copytofolder" width="800" height="233"&gt;&lt;/a&gt;&lt;br&gt;
Now the logs folder is no longer empty, it contains the notes.txt file.&lt;/p&gt;
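&lt;p&gt;The three steps above, runnable as one snippet in a scratch directory:&lt;/p&gt;

```shell
cd "$(mktemp -d)"    # scratch directory
touch notes.txt
mkdir logs           # the destination folder
cp notes.txt logs/   # trailing slash makes the directory target explicit
ls logs              # peek inside logs without cd-ing into it
```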

&lt;p&gt;&lt;strong&gt;The "Dot" Trick&lt;/strong&gt;&lt;br&gt;
If you are already inside a folder and want to copy a file from somewhere else into your current spot, you use a &lt;strong&gt;period&lt;/strong&gt; . (which means "here").&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example (Try this on your own)&lt;/strong&gt;:&lt;br&gt;
cp /c/Users/Admin/Desktop/important.txt .&lt;br&gt;
(This translates to: "Copy important.txt from the Desktop to here.")&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copying a folder&lt;/strong&gt;&lt;br&gt;
Since we are copying a folder (the logs folder) instead of a single file, we need to use a special flag. In Linux, if you try to copy a folder without this flag, the terminal will give you an error saying "omitting directory."&lt;/p&gt;

&lt;p&gt;To copy a directory and everything inside it, we use &lt;strong&gt;-r&lt;/strong&gt; (which stands for recursive).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Copy Directory Commands&lt;/strong&gt;&lt;br&gt;
Step 1: Copy the logs folder to a new name&lt;br&gt;
Type this and press Enter:&lt;br&gt;
&lt;strong&gt;cp -r logs logs_backup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 2: Verify both folders exist&lt;br&gt;
Type this:&lt;br&gt;
&lt;strong&gt;ls -F&lt;/strong&gt;&lt;br&gt;
(The -F flag is a neat trick: it adds a &lt;code&gt;/&lt;/code&gt; to the end of folder names so you can easily tell them apart from files!)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktibb7g5q8tbmdn7izzn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktibb7g5q8tbmdn7izzn.png" alt="copyfolder" width="800" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Copy a file from one folder to another&lt;br&gt;
Let's practice moving things between folders without leaving your current spot. Let's copy the file inside logs into logs_backup but give it a new name.&lt;/p&gt;

&lt;p&gt;Type this:&lt;br&gt;
&lt;strong&gt;cp logs/notes.txt logs_backup/archive_copy.txt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 4: Check the contents of the backup folder&lt;br&gt;
&lt;strong&gt;ls logs_backup&lt;/strong&gt;&lt;br&gt;
You should now see both &lt;code&gt;notes.txt&lt;/code&gt; and &lt;code&gt;archive_copy.txt&lt;/code&gt; inside that folder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqhrfqufxicazbs13ta3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqhrfqufxicazbs13ta3.png" alt="backup" width="800" height="121"&gt;&lt;/a&gt;&lt;/p&gt;
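&lt;p&gt;The whole recursive-copy exercise, runnable end to end in a scratch directory:&lt;/p&gt;

```shell
cd "$(mktemp -d)"                               # scratch directory
mkdir logs
touch logs/notes.txt
cp -r logs logs_backup                          # -r copies the folder and its contents
cp logs/notes.txt logs_backup/archive_copy.txt  # copy a file under a new name
ls logs_backup                                  # notes.txt and archive_copy.txt
```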

&lt;p&gt;&lt;strong&gt;The "Tab" Trick&lt;/strong&gt;&lt;br&gt;
To avoid typos (like lowercase vs. uppercase), try this:&lt;br&gt;
Type cd lin and then press the Tab key on your keyboard. Git Bash will automatically finish the word linux_practice for you! It’s like magic and prevents almost all errors. You can try it with the first few letters of your file and folder names.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating Multiple folders&lt;/strong&gt;&lt;br&gt;
Creating multiple folders at once is a huge time-saver for any &lt;strong&gt;Data Analyst&lt;/strong&gt;. Instead of typing &lt;code&gt;mkdir&lt;/code&gt; five separate times, we can do it in one single line.&lt;/p&gt;

&lt;p&gt;In Linux, there are two ways to do this: the Simple List and the Brace Expansion (the "Pro" way).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method 1: The Simple List&lt;/strong&gt;&lt;br&gt;
You can simply type &lt;code&gt;mkdir&lt;/code&gt; followed by all the names you want, separated by spaces.&lt;br&gt;
&lt;strong&gt;mkdir Jan Feb Mar Apr May Jun&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F502rwly07adc0ik21mli.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F502rwly07adc0ik21mli.png" alt="multiplefolders" width="800" height="319"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Result: 6 new folders appear instantly.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method 2: Brace Expansion {} (The Power Move)&lt;/strong&gt;&lt;br&gt;
This is how engineers create hundreds of folders in a second. It uses curly brackets to tell Linux: "Take this prefix and attach all these options to it."&lt;/p&gt;

&lt;p&gt;Try creating six months of data folders like this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;mkdir month_{1..6}&lt;/strong&gt;&lt;br&gt;
Result: You will get month_1, month_2, up to month_6.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv97fow779heh5cc1uqiq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv97fow779heh5cc1uqiq.png" alt="method2" width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;
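&lt;p&gt;Both methods side by side, in a scratch directory:&lt;/p&gt;

```shell
cd "$(mktemp -d)"               # scratch directory
mkdir Jan Feb Mar Apr May Jun   # Method 1: a simple space-separated list
mkdir month_{1..6}              # Method 2: brace expansion, month_1 ... month_6
ls                              # twelve new folders
```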

&lt;p&gt;&lt;strong&gt;Method 3: Nested Folders (The -p Flag)&lt;/strong&gt;&lt;br&gt;
Sometimes you want to create a folder inside a folder that doesn't exist yet (like a file path). If you just try mkdir Project/Data, it will fail. You need the -p (parents) flag.&lt;/p&gt;

&lt;p&gt;Try this to build a full project structure in one go:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;mkdir -p Analytics_Project/{customer,date,region,price}&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdzptjnsmtobvi2kpavp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdzptjnsmtobvi2kpavp.png" alt="third" width="800" height="70"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What happened?&lt;/strong&gt; &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It created the main folder Analytics_Project.&lt;/li&gt;
&lt;li&gt;Inside it, it created 4 sub-folders: customer, date, region, and price.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To see your beautiful new structure without clicking around your Desktop, use the "Recursive List" command:&lt;br&gt;
&lt;strong&gt;ls -R Analytics_Project&lt;/strong&gt;&lt;/p&gt;
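&lt;p&gt;You can see exactly why the -p flag matters with a two-line experiment (Demo_Project is a throwaway name for illustration):&lt;/p&gt;

```shell
# Without -p, the parent folder must already exist, so this first call fails:
mkdir Demo_Project/data || echo "failed as expected"

# With -p, the whole chain is created in one go:
mkdir -p Demo_Project/{data,reports}

# Recursive list shows the tree we just built.
ls -R Demo_Project
```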

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbi49zgwim5z28ovipbkf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbi49zgwim5z28ovipbkf.png" alt="list" width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhv7ed30x5fi1vhzbvj5x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhv7ed30x5fi1vhzbvj5x.png" alt="checking" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating multiple folders, knowing &lt;em&gt;how to clean them up&lt;/em&gt; is just as important. In Linux, there are two main ways to delete a directory, depending on whether it has files inside it or not.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method 1: The "Safety" Way (rmdir)&lt;/strong&gt;&lt;br&gt;
If a folder is completely empty, you use rmdir (remove directory). This is safe because Linux will refuse to run the command if there is even one tiny file inside, preventing accidental data loss.&lt;/p&gt;

&lt;p&gt;Try it on one of your empty month folders:&lt;br&gt;
&lt;strong&gt;rmdir month_1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method 2: The "Force" Way (rm -r)&lt;/strong&gt;&lt;br&gt;
If the folder has files, scripts, or other folders inside it, rmdir won't work. You must use the rm command with the -r (recursive) flag. This tells Linux to go "inside" the folder and delete everything first, then delete the folder itself.&lt;/p&gt;

&lt;p&gt;Try it on your Analytics_Project folder:&lt;br&gt;
&lt;strong&gt;rm -r Analytics_Project&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Critical Warning for Cloud Engineers and Data Analysts&lt;/strong&gt;&lt;br&gt;
In Linux, there is no "Recycle Bin." Once you run rm -r, that data is gone forever.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The "Danger" Command&lt;/strong&gt;:&lt;br&gt;
You will often see &lt;code&gt;rm -rf&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-r&lt;/code&gt;: Recursive (deletes folders).&lt;br&gt;
&lt;code&gt;-f&lt;/code&gt;: Force (doesn't ask "Are you sure?").&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Always double-check your current location with pwd before running a recursive delete&lt;/strong&gt;.&lt;/p&gt;
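&lt;p&gt;One way to build that habit is to wrap the delete in a tiny guard that prints your location first and refuses absolute paths. This is just a sketch of the idea, not a standard tool:&lt;/p&gt;

```shell
# A cautious wrapper around rm -r: show where we are, refuse absolute paths.
safe_rm_r() {
  pwd                                  # the double-check the warning asks for
  case "$1" in
    /*) echo "refusing absolute path: $1"; return 1 ;;
  esac
  rm -r -- "$1"
}

# Try it on a throwaway folder with a file inside.
mkdir -p demo_trash/sub
touch demo_trash/sub/file.txt
safe_rm_r demo_trash
```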

&lt;p&gt;&lt;strong&gt;The "Real-life" Challenge&lt;/strong&gt;&lt;br&gt;
We are going to simulate a real-world task: Organizing a project*&lt;em&gt;.&lt;br&gt;
Step 1: Create the Workspace&lt;br&gt;
Create a main project folder and two sub-folders in one line:&lt;br&gt;
**mkdir -p My_Analytics/{raw_data,final_reports}&lt;/em&gt;*&lt;/p&gt;

&lt;p&gt;Step 2:Create a "Data" file&lt;br&gt;
Let’s create a dummy data file inside the raw_data folder:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;touch My_Analytics/raw_data/sales_2026.csv&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Copy the file (The "Backup" Move)&lt;br&gt;
Before you edit data, you should always keep a copy. Use cp (copy):&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;cp My_Analytics/raw_data/sales_2026.csv My_Analytics/raw_data/sales_backup.csv&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Step 4: Move the file (The "Organization" Move)&lt;br&gt;
Now, let's pretend you finished your analysis. Move the original file to the final_reports folder using mv (move):&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;mv My_Analytics/raw_data/sales_2026.csv My_Analytics/final_reports/&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Check Your Work&lt;br&gt;
Use the Tree view (or recursive list) to see your organized project:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ls -R My_Analytics&lt;/strong&gt;&lt;/p&gt;
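&lt;p&gt;All four steps of the challenge, pasted together as one block you can drop straight into Git Bash:&lt;/p&gt;

```shell
# Step 1: workspace with two sub-folders.
mkdir -p My_Analytics/{raw_data,final_reports}

# Step 2: dummy data file.
touch My_Analytics/raw_data/sales_2026.csv

# Step 3: back up before editing.
cp My_Analytics/raw_data/sales_2026.csv My_Analytics/raw_data/sales_backup.csv

# Step 4: move the finished file into final_reports.
mv My_Analytics/raw_data/sales_2026.csv My_Analytics/final_reports/

# Verify: the original should now live in final_reports,
# with the backup still in raw_data.
ls -R My_Analytics
```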

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mtl8j0z60dnb9el05dg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mtl8j0z60dnb9el05dg.png" alt="reallife" width="800" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since you've already learned how to create, move, and delete folders, the next "superpower" for a Data Analyst is being able to see what is inside a file without needing to open a heavy application like Excel or Notepad.&lt;/p&gt;

&lt;p&gt;Imagine you just downloaded a massive dataset from an Azure storage bucket. You need to know if it's the right data before you start your analysis.&lt;/p&gt;

&lt;p&gt;Practice 1: Creating a "Data" File&lt;br&gt;
First, let's create a file with some actual content inside it so we have something to look at.&lt;/p&gt;

&lt;p&gt;Type this command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;echo -e "ID,Name,Sales\n1,Rahimah,500\n2,Ibrahim,750\n3,Dickson,300" &amp;gt; sales_data.csv&lt;/strong&gt;&lt;br&gt;
What this does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;echo prints text.&lt;/li&gt;
&lt;li&gt;-e allows for "new lines" (\n).&lt;/li&gt;
&lt;li&gt;&amp;gt; saves that text into a new file called sales_data.csv.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Practice 2: The "Peeking" Commands&lt;br&gt;
Now, let's look at the data using three different tools.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The cat Command (Concatenate)
This dumps the entire file onto your screen.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;cat sales_data.csv&lt;/strong&gt;&lt;br&gt;
Use this when the file is small (like a configuration file).&lt;/p&gt;

&lt;p&gt;2. The head Command&lt;br&gt;
This only shows the first few lines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;head -n 2 sales_data.csv&lt;/strong&gt;&lt;br&gt;
Use this when you have a 1-million-row CSV and just want to see the column headers.&lt;/p&gt;

&lt;p&gt;3. The tail Command&lt;br&gt;
This shows the very end of the file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;tail -n 1 sales_data.csv&lt;/strong&gt;&lt;br&gt;
Use this when you want to see the most recent entry in a log file.&lt;/p&gt;

&lt;p&gt;Practice 3: The "Search" Power (grep)&lt;br&gt;
This is the command Data Analysts use most. It searches for a specific word inside a file. Suppose you only want to see the sales for "Ibrahim".&lt;/p&gt;

&lt;p&gt;Type this:&lt;br&gt;
&lt;strong&gt;grep "Ibrahim" sales_data.csv&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;: It will ignore everything else and only show you the row for Ibrahim.&lt;/p&gt;
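&lt;p&gt;Here is the whole peek-and-search flow in one runnable block. (I use printf and tee here instead of echo -e and &gt;; printf behaves more consistently across shells, and tee writes the file while also showing it, but the idea is identical.)&lt;/p&gt;

```shell
# Build the sample CSV without opening an editor.
printf 'ID,Name,Sales\n1,Rahimah,500\n2,Ibrahim,750\n3,Dickson,300\n' | tee sales_data.csv

# Peek: whole file, first two lines, last line.
cat sales_data.csv
head -n 2 sales_data.csv
tail -n 1 sales_data.csv

# Search: only the row containing "Ibrahim".
grep "Ibrahim" sales_data.csv
```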

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4ilhqxm96x6gj9vp2e1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4ilhqxm96x6gj9vp2e1.png" alt="head" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It actually created a .csv file. Amazing!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F131lwcyboqut90ewqcog.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F131lwcyboqut90ewqcog.png" alt="excel" width="800" height="661"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgt7jyioilbdjrk5t9rl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgt7jyioilbdjrk5t9rl.png" alt="excel" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this matters for you&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Analysts love finding needles in haystacks&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;Imagine searching a 2GB file for a specific Transaction ID. In Excel, your computer freezes. In the CLI, grep 'TXN_9984' data.csv finds it in milliseconds. &lt;br&gt;
When you are working as a Data Analyst, you might have thousands of logs. Instead of scrolling through them, you use &lt;code&gt;grep "Error"&lt;/code&gt; to find exactly where something went wrong in your Azure pipeline.&lt;/p&gt;

&lt;p&gt;Final Command for today: &lt;strong&gt;history&lt;/strong&gt;&lt;br&gt;
Want to see a list of everything you've done in this session?&lt;/p&gt;

&lt;p&gt;Type this in your Git Bash:&lt;br&gt;
&lt;strong&gt;history&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6vmj8wzzgozhy4lo6kwo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6vmj8wzzgozhy4lo6kwo.png" alt="history" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  CONCLUSION
&lt;/h2&gt;

&lt;p&gt;Mastering the Linux Terminal is more than just learning a list of commands; it is about adopting a mindset of efficiency and automation. In an era where data volumes are exploding and cloud infrastructure is the standard, the ability to navigate a server, audit a massive CSV, or automate a directory structure is what separates a traditional analyst from a modern data professional.&lt;/p&gt;

&lt;p&gt;As you move from the GUI to the CLI, you aren't just changing how you interact with your computer—you are expanding your capacity to handle "Big Data" that others simply cannot touch. Whether you are building pipelines in Azure, managing repositories on GitHub, or cleaning raw data for Power BI, the terminal is the bridge that connects your analytical skills to the global tech ecosystem.&lt;/p&gt;

&lt;p&gt;The Challenge&lt;br&gt;
Don't let these commands sit idle. Open Git Bash today and navigate to your projects using only your keyboard.&lt;/p&gt;

</description>
      <category>dataanalytics</category>
      <category>linux</category>
      <category>bash</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Creating Azure Resources via Azure CLI: Part 3</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Thu, 19 Mar 2026 15:28:22 +0000</pubDate>
      <link>https://dev.to/rahimah_dev/creating-azure-resources-via-azure-cli-part-3-237e</link>
      <guid>https://dev.to/rahimah_dev/creating-azure-resources-via-azure-cli-part-3-237e</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In Part 3 of this series, I continue building a fully functional Azure environment using only the Azure CLI, expanding on the resources created in earlier parts. This phase focuses on working with storage, securing sensitive data, and implementing operational best practices.&lt;/p&gt;

&lt;p&gt;You’ll see how to create and interact with a storage account, upload and manage files in Blob Storage, securely handle secrets using Azure Key Vault, and explore cost management strategies. Along the way, &lt;em&gt;I also highlight real-world challenges like RBAC permission barriers and subscription limitations&lt;/em&gt;, and I'll show you how to navigate them effectively as a cloud engineer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Storage Account &amp;amp; Upload Files
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create the Storage Account&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This stores a globally unique name in a variable, then creates the storage account in Azure.&lt;/p&gt;

&lt;p&gt;This is very important because storage accounts provide the scalable backend object storage required for storing logs, backups, container apps, and static assets.&lt;/p&gt;
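&lt;p&gt;Storage account names must be globally unique across all of Azure (3–24 lowercase letters and digits). If you want to avoid collisions without inventing names by hand, a date-plus-random suffix works in Git Bash; this naming scheme is just an example, not an Azure requirement:&lt;/p&gt;

```shell
# Compose a unique-ish, all-lowercase storage account name.
# $RANDOM is a bash built-in (0-32767); date supplies yymmdd.
SUFFIX="$(date +%y%m%d)$RANDOM"
STORAGE_NAME="labstorage${SUFFIX}"
echo "$STORAGE_NAME"
```

&lt;p&gt;The prefix plus a 6-digit date and up to 5 random digits stays under the 24-character limit.&lt;/p&gt;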

&lt;p&gt;&lt;strong&gt;Reliability&lt;/strong&gt; - &lt;em&gt;Locally Redundant Storage (LRS)&lt;/em&gt; keeps 3 synchronized copies of your data across multiple fault domains in a single data center.&lt;br&gt;
Protection: Guards against hardware failures (e.g., disk crash).&lt;br&gt;
Limitation: If the entire data center goes down, your data may be lost.&lt;br&gt;
Use case: Cheapest option, good for non-critical data.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Zone-Redundant Storage (ZRS)&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;Reliability&lt;/strong&gt; - It replicates your data across multiple availability zones in the same region.&lt;br&gt;
Protection: Survives a data center failure (since zones are separate).&lt;br&gt;
Advantage: Higher availability than LRS.&lt;br&gt;
Use case: Applications that need high availability within a region.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Geo-Redundant Storage (GRS)&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;Reliability&lt;/strong&gt; - It copies your data to a secondary region far away from the primary region.&lt;br&gt;
Protection: Survives regional outages (e.g., natural disasters).&lt;br&gt;
Bonus: The RA-GRS variant allows read access to the secondary region.&lt;br&gt;
Use case: Critical data requiring disaster recovery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick Comparison&lt;/strong&gt;&lt;br&gt;
LRS: single data center; low protection.&lt;br&gt;
ZRS: multiple zones in the same region; medium protection.&lt;br&gt;
GRS: a second, distant region; high protection.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Run the following command block to create a storage account&lt;/em&gt;:&lt;br&gt;
&lt;strong&gt;$STORAGE_NAME="labstoragefeb26"&lt;br&gt;
az storage account create &lt;br&gt;
--name $STORAGE_NAME &lt;br&gt;
--resource-group azurecli-lab-rg&lt;br&gt;
--kind StorageV2 &lt;br&gt;
--location koreacentral &lt;br&gt;
--sku Standard_LRS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fulajmv12b9twry00oot9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fulajmv12b9twry00oot9.png" alt="storageaccount" width="800" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsyyo46uo4lrndrb3etg0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsyyo46uo4lrndrb3etg0.png" alt="sa" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkj7bpdemcurkocels7m4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkj7bpdemcurkocels7m4.png" alt="sa" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2q6s0oqtb0o6nv0honx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2q6s0oqtb0o6nv0honx.png" alt="sa" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2462svlqhf0418neclu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2462svlqhf0418neclu.png" alt="sa" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create a Blob Container&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This creates a logical folder/bucket inside the storage account.&lt;/p&gt;

&lt;p&gt;It's needed because you cannot upload blobs directly to the storage account root; blobs must live inside a container.&lt;/p&gt;

&lt;p&gt;Security — containers provide access boundaries allowing RBAC segmentation.&lt;/p&gt;

&lt;p&gt;To upload a file into an Azure Blob Storage container, use the &lt;strong&gt;az storage blob upload&lt;/strong&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;az storage container create &lt;br&gt;
--name lab-files &lt;br&gt;
--account-name $STORAGE_NAME &lt;br&gt;
--auth-mode login&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;auth-mode login&lt;/code&gt; : This tells Azure to use your current &lt;em&gt;az login credentials&lt;/em&gt; rather than an access key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbjrjt8277c4cbpgjykk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbjrjt8277c4cbpgjykk.png" alt="blobcontainer" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Upload a file&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This pushes a local text file to Azure and stores it as a blob.&lt;br&gt;
It's needed because Blob Storage is Azure's most common mechanism for handling files (like images, docs, and backups).&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Operational Excellence — automated asset and artifact uploads.&lt;br&gt;
Run:&lt;br&gt;
&lt;strong&gt;az storage blob upload &lt;br&gt;
--account-name $STORAGE_NAME &lt;br&gt;
--container-name lab-files &lt;br&gt;
--name sample.txt &lt;br&gt;
--file "C:\Users\Admin\Documents\New folder\sample.txt"&lt;br&gt;
--auth-mode login&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkw09shss94ugnmtfsmn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkw09shss94ugnmtfsmn.png" alt="blobupload" width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: This error shows that I hit a permissions wall.&lt;/p&gt;

&lt;p&gt;The error "You do not have the required permissions" happens because, in Azure, being the "Owner" of a subscription doesn't automatically give you the right to upload data inside a storage account when using &lt;code&gt;auth-mode login&lt;/code&gt;. You need a specific Data Plane role.&lt;br&gt;
The solution is to assign the "Storage Blob Data Contributor" Role&lt;br&gt;
You need to give yourself permission to handle the actual data inside the blobs. &lt;br&gt;
&lt;strong&gt;RBAC (Role-Based Access Control)&lt;/strong&gt;: Azure separates "Management" (creating the storage account) from "Data" (uploading files).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Storage Blob Data Contributor role is exactly what the error message is asking for so you can upload, read, and delete blobs&lt;/em&gt;.&lt;br&gt;
If you decide to go the role-assignment route, note that propagation takes about 1–2 minutes after running the command; role assignments in Azure can take a moment to "settle" across the global network.&lt;br&gt;
I will go with the alternative (the "Key" method) instead.&lt;br&gt;
I don't want to deal with roles right now, so I can bypass this by using the storage account's &lt;strong&gt;Access Key&lt;/strong&gt; instead of my login.&lt;br&gt;
Run this command first to get the key:&lt;br&gt;
&lt;strong&gt;$ACCOUNT_KEY=$(az storage account keys list --account-name $STORAGE_NAME --query "[0].value" -o tsv)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Then, Upload using the key:&lt;br&gt;
&lt;strong&gt;az storage blob upload &lt;br&gt;
--account-name $STORAGE_NAME &lt;br&gt;
--account-key $ACCOUNT_KEY &lt;br&gt;
--container-name lab-files &lt;br&gt;
--name sample.txt &lt;br&gt;
--file "C:\Users\Admin\Documents\New folder\sample.txt"&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cexcbgbd5joivit3dcb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cexcbgbd5joivit3dcb.png" alt="blobupload" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That success message is exactly what I was looking for! The &lt;strong&gt;100.0000%&lt;/strong&gt; progress bar and the JSON output confirm that your file, &lt;code&gt;sample.txt&lt;/code&gt;, has been successfully uploaded to the lab-files container in Azure.&lt;/p&gt;

&lt;p&gt;After the upload using the Access Key method, the next logical step (and good practice) is to confirm the file is visible in the cloud. This takes us to the next step.&lt;br&gt;
&lt;strong&gt;Step 4: List blobs in the container&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This queries the Azure storage API for the contents of the container.&lt;/p&gt;

&lt;p&gt;It's needed as a verification step to ensure your push succeeded.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Operational Excellence — automated verification.&lt;br&gt;
Run this command to list all blobs in your container:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;az storage blob list &lt;br&gt;
--account-name $STORAGE_NAME &lt;br&gt;
--account-key $ACCOUNT_KEY &lt;br&gt;
--container-name lab-files &lt;br&gt;
--output table&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftob5hzjktndpn8rn6xdp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftob5hzjktndpn8rn6xdp.png" alt="verify" width="800" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If this was just a test, you should know how to remove the file to keep your storage environment clean. To achieve this, run:&lt;br&gt;
&lt;strong&gt;az storage blob delete &lt;br&gt;
--account-name $STORAGE_NAME &lt;br&gt;
--account-key $ACCOUNT_KEY &lt;br&gt;
--container-name lab-files &lt;br&gt;
--name sample.txt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By using the $ACCOUNT_KEY, I &lt;em&gt;bypassed&lt;/em&gt; the complicated RBAC permissions (roles) that were blocking me earlier. &lt;br&gt;
While roles are safer for large teams, using the key is the fastest way to get things done in a personal lab environment such as this one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Store Secrets in Azure Key Vault
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create a Key Vault&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This provisions an Azure Key Vault instance.&lt;/p&gt;

&lt;p&gt;It's needed because credentials, connection strings, certificates, and API keys must never be hard-coded. They belong in a Key Vault.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Security — secure secret storage.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Recall&lt;/em&gt; that a Key Vault stores certificates, keys and secrets.&lt;br&gt;
 Also note that key vault names must be globally unique, just like storage accounts.&lt;br&gt;
Run this command block:&lt;br&gt;
&lt;strong&gt;$KV_NAME="labkvrahfeb26"&lt;br&gt;
az keyvault create &lt;br&gt;
  --name $KV_NAME &lt;br&gt;
  --resource-group azurecli-lab-rg &lt;br&gt;
  --location koreacentral &lt;br&gt;
  --enable-rbac-authorization false&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcw0fv44olsdnpy2qcgfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcw0fv44olsdnpy2qcgfd.png" alt="kvcreate" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2y2kwhqn3xffjsbhmu2d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2y2kwhqn3xffjsbhmu2d.png" alt="create" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpqdm7p4s0qr3k3poqmo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpqdm7p4s0qr3k3poqmo.png" alt="created" width="800" height="343"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 2: Store a secret&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It ingests the &lt;code&gt;db-password&lt;/code&gt; secret securely.&lt;/p&gt;

&lt;p&gt;It's needed because it provides safe retrieval instead of storing cleartext credentials locally.&lt;/p&gt;

&lt;p&gt;Security — ensuring robust secrets lifecycle.&lt;br&gt;
Run this command:&lt;br&gt;
&lt;strong&gt;az keyvault secret set &lt;br&gt;
  --vault-name $KV_NAME &lt;br&gt;
  --name db-password &lt;br&gt;
  --value 'SuperSecure@pass123'&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgecv67zfqgzcjxmvtvus.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgecv67zfqgzcjxmvtvus.png" alt="az kv" width="800" height="386"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 3: Retrieve the secret&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It fetches the decrypted plaintext secret value securely, using your currently authenticated user.&lt;/p&gt;

&lt;p&gt;It proves the CLI can securely obtain values from Key Vault.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Security — programmatic retrieval over TLS.&lt;br&gt;
Run the following command:&lt;br&gt;
&lt;strong&gt;az keyvault secret show &lt;br&gt;
  --vault-name $KV_NAME &lt;br&gt;
  --name db-password &lt;br&gt;
  --query value &lt;br&gt;
  --output tsv&lt;/strong&gt; &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9592xzocqqp49p79zmbb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9592xzocqqp49p79zmbb.png" alt="secretshow" width="800" height="164"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 4: Assign the VM a Managed Identity to access the vault&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This step configures Azure AD to grant the VM identity-based permissions to extract secrets.&lt;/p&gt;

&lt;p&gt;It's needed because it allows background services in the VM to get the secret later without logging in themselves.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Security — Zero-credential deployment utilizing Managed Identities.&lt;br&gt;
Run this command:&lt;br&gt;
&lt;strong&gt;az vm identity assign &lt;br&gt;
  --resource-group azurecli-lab-rg &lt;br&gt;
  --name lab-vm &lt;br&gt;
$PRINCIPAL_ID=$(az vm show &lt;br&gt;
  --resource-group azurecli-lab-rg &lt;br&gt;
  --name lab-vm &lt;br&gt;
  --query identity.principalId &lt;br&gt;
  --output tsv)&lt;br&gt;
az role assignment create &lt;br&gt;
  --role 'Key Vault Secrets User' &lt;br&gt;
  --assignee $PRINCIPAL_ID &lt;br&gt;
  --scope $(az keyvault show --name $KV_NAME --query id --output tsv)&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fis6qlaygpprowtsjesgg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fis6qlaygpprowtsjesgg.png" alt="managedidentity" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsg0drht1svcsjumd9dy0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsg0drht1svcsjumd9dy0.png" alt="mid" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitor Costs &amp;amp; Set a Budget Alert
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Get your subscription ID&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This queries the active subscription ID programmatically.&lt;/p&gt;

&lt;p&gt;Why It's Needed&lt;br&gt;
Required when creating budgets and alerts so they explicitly target the currently active subscription.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Cost Optimization — understanding which account you are billing towards.&lt;br&gt;
Run: &lt;strong&gt;$SUB_ID = az account show --query id --output tsv&lt;br&gt;
echo "Subscription: $SUB_ID"&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgblk7azbsczrw7f6e92a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgblk7azbsczrw7f6e92a.png" alt="subID" width="800" height="69"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create a $10 monthly budget with an alert at 80%&lt;/strong&gt;&lt;br&gt;
This sets a strict ceiling for consumption using native Azure limits.&lt;/p&gt;

&lt;p&gt;Why It's Needed&lt;br&gt;
Setting alerts &lt;em&gt;prevents surprise billing&lt;/em&gt; caused by rogue or misconfigured compute resources running unmonitored.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Cost Optimization — preventative guard-rails ensuring fiscal control.&lt;br&gt;
Setting a budget is the "responsible" side of being a &lt;strong&gt;Cloud Engineer&lt;/strong&gt;. It proves you aren't just building things; you're also handling the &lt;strong&gt;Cost Management&lt;/strong&gt; aspect of the cloud, which is a major focus for businesses in 2026.&lt;br&gt;
Run:&lt;br&gt;
&lt;strong&gt;az consumption budget create `&lt;br&gt;
  --budget-name lab-budget `&lt;br&gt;
  --amount 10 `&lt;br&gt;
  --category Cost `&lt;br&gt;
  --time-grain Monthly `&lt;br&gt;
  --start-date (Get-Date -Format "yyyy-MM-01") `&lt;br&gt;
  --end-date 2026-12-31 `&lt;br&gt;
  --resource-group azurecli-lab-rg `&lt;br&gt;
  --notifications '[{"enabled":true,"operator":"GreaterThan","threshold":80,"contactEmails":["&lt;a href="mailto:you@example.com"&gt;you@example.com&lt;/a&gt;"]}]'&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw73q9okcv9cwrlxpamkg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw73q9okcv9cwrlxpamkg.png" alt="unrecognizedargument:notification" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The output shows notification errors, so we'll run another command. This version avoids the JSON/notification issues that broke earlier.&lt;br&gt;
&lt;strong&gt;$subId = az account show --query id -o tsv&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;az consumption budget create `&lt;br&gt;
  --budget-name "lab-budget" `&lt;br&gt;
  --amount 10 `&lt;br&gt;
  --category Cost `&lt;br&gt;
  --time-grain Monthly `&lt;br&gt;
  --start-date "2026-03-01" `&lt;br&gt;
  --end-date "2026-12-31" `&lt;br&gt;
  --subscription $subId&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h3tymvscx9l763i0amn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h3tymvscx9l763i0amn.png" alt="solution" width="800" height="195"&gt;&lt;/a&gt;&lt;br&gt;
This displays an RBACAccessDenied error, but this screenshot:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8gulealr7ek3wwz8z5g4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8gulealr7ek3wwz8z5g4.png" alt="ownership" width="800" height="354"&gt;&lt;/a&gt; &lt;br&gt;
confirms ownership of the subscription.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4mxwhqxheubylqmzxym.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4mxwhqxheubylqmzxym.png" alt="invalid budget config" width="800" height="296"&gt;&lt;/a&gt;&lt;br&gt;
The screenshot above shows an Invalid budget configuration error.&lt;/p&gt;

&lt;p&gt;The CLI keeps failing with different error types.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhxkd0pxlzf9guoi0r9f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhxkd0pxlzf9guoi0r9f.png" alt="Invalid budget config" width="800" height="398"&gt;&lt;/a&gt;&lt;br&gt;
I ran into another Invalid budget configuration error.&lt;/p&gt;

&lt;p&gt;I confirmed the subscription is active and enabled:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyouaaav0kte93acwy0o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyouaaav0kte93acwy0o.png" alt="enabled sub" width="800" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3vcvseoe1k44n4tovly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3vcvseoe1k44n4tovly.png" alt="active" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I ran another command which finally confirmed the root cause.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfd80jrrgxh8vjcisnr8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfd80jrrgxh8vjcisnr8.png" alt="confirmation" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the output:&lt;br&gt;
&lt;strong&gt;"quotaId": "FreeTrial_2014-09-01",&lt;br&gt;
"spendingLimit": "On"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This means I am using a Free Trial subscription with the spending limit ON.&lt;br&gt;
Why the budget creation keeps failing:&lt;br&gt;
Azure does NOT allow budget creation via CLI for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Free Trial&lt;/strong&gt; subscriptions&lt;/li&gt;
&lt;li&gt;Subscriptions with spending &lt;strong&gt;limit ON&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s why I kept getting Invalid budget configuration and&lt;br&gt;
RBACAccessDenied (a misleading error).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Important insight&lt;/em&gt;&lt;br&gt;
Since the subscription already HAS a built-in spending cap, it automatically shuts everything down when the credits finish.&lt;br&gt;
So Azure assumes “You don’t need a budget — we already limit your spending.”&lt;/p&gt;
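&lt;p&gt;In a reusable lab script, this restriction is worth guarding against explicitly instead of letting the CLI fail with a misleading error. A hypothetical sketch (the variable name and its sample value are illustrative; in practice the value would come from the same subscription details that showed &lt;code&gt;"spendingLimit": "On"&lt;/code&gt; above):&lt;/p&gt;

```shell
# Hypothetical guard: skip budget creation when the subscription has a
# spending limit on, since Azure rejects budget creation there.
SPENDING_LIMIT="On"   # sample value for illustration; read it from your subscription details
if [ "$SPENDING_LIMIT" = "On" ]; then
  echo "Spending limit is on: skipping budget creation."
else
  echo "No spending limit: budget creation should be possible."
fi
```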

&lt;p&gt;These are the available options: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OPTION 1 — Use the Azure Portal to create the budget manually; this sometimes works even when the CLI/API is blocked.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Go to:&lt;br&gt;
Cost Management → Budgets → + Add&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OPTION 2 — Upgrade the subscription &lt;strong&gt;(guaranteed solution)&lt;/strong&gt;.
Click “Upgrade” at the top of the portal; this removes the spending limit and converts the subscription to Pay-As-You-Go.
This allows budgets and alerts, with full CLI support.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The ONLY blocker is the Free Trial restriction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommendation&lt;/strong&gt;: For learning (especially Azure CLI labs), upgrade the subscription; otherwise, you’ll keep hitting hidden limitations like this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Check current resource group costs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This creates a Log Analytics workspace to ingest usage metrics and performance logs later on.&lt;/p&gt;

&lt;p&gt;Why It's Needed&lt;br&gt;
Provides a unified overview and is an essential monitoring dependency for true production readiness.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Operational Excellence — creating the hub for telemetry.&lt;br&gt;
Run:&lt;br&gt;
&lt;strong&gt;az monitor log-analytics workspace create `&lt;br&gt;
  --resource-group azurecli-lab-rg `&lt;br&gt;
  --workspace-name lab-logs `&lt;br&gt;
  --location koreacentral&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm3aj73pl8qvkrbz6101.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm3aj73pl8qvkrbz6101.png" alt="resourcegrpcost" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjk6dwq16abtsm2tzuin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjk6dwq16abtsm2tzuin.png" alt="rgc" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Clean Up &amp;amp; Document Your Work
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Delete the resource group (and everything in it)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It removes the resource group and cascades the deletion to every child resource attached within it (network, compute, and data alike).&lt;/p&gt;

&lt;p&gt;Why it's Needed&lt;br&gt;
The ultimate power move of organized resource management. &lt;em&gt;Cloud instances incur hourly charges; immediate destruction preserves free-tier credits&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
&lt;strong&gt;Cost Optimization&lt;/strong&gt; — decommission what you no longer use.&lt;br&gt;
Run:&lt;br&gt;
&lt;strong&gt;az group delete --name azurecli-lab-rg --yes --no-wait&lt;br&gt;
az group list --output table&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The first line of the command deletes ALL resources in the group — VM, VNet, Storage, Key Vault. The second line&lt;br&gt;
verifies the deletion (wait a few minutes, then check).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbg1d7xlyb0yoghsydtbh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbg1d7xlyb0yoghsydtbh.png" alt="delete" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create a project folder and write a README&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It scaffolds standard markdown files documenting everything accomplished here.&lt;/p&gt;

&lt;p&gt;Why It's Needed&lt;br&gt;
It ensures recruiters see exactly what was executed instead of an empty claim regarding Cloud expertise.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Operational Excellence — automated documentation.&lt;br&gt;
Run the following commands:&lt;br&gt;
&lt;strong&gt;mkdir azure-cli-lab; cd azure-cli-lab&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;This creates a new directory (folder)&lt;/em&gt;&lt;br&gt;
(The semicolon (;) is the statement separator in my PowerShell version. It tells the computer to finish the first task and then start the second one.)&lt;/p&gt;
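&lt;p&gt;For comparison, in bash the same chain is usually written so that the second command runs only if the first succeeded (a small sketch using the same lab folder name):&lt;/p&gt;

```shell
# bash equivalent of the PowerShell "mkdir azure-cli-lab; cd azure-cli-lab":
# `&&` runs `cd` only if `mkdir` succeeded; `;` would run it regardless.
mkdir -p azure-cli-lab && cd azure-cli-lab && pwd
```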

&lt;p&gt;&lt;strong&gt;git init&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Initializes a new Git repository in the current directory&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4cmxpe0fx74iyigkwoc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4cmxpe0fx74iyigkwoc.png" alt="git" width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;To create the README.md file and write its content in one step, run this PowerShell "here-string" block:&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;@'&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Azure CLI Cloud Lab
&lt;/h1&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;A complete Azure environment using only the Azure CLI — no portal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources Created
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Resource Group (azurecli-lab-rg in East US)&lt;/li&gt;
&lt;li&gt;Virtual Network (10.0.0.0/16) with Subnet (10.0.1.0/24)&lt;/li&gt;
&lt;li&gt;NSG with SSH (22) and HTTP (80) rules&lt;/li&gt;
&lt;li&gt;Ubuntu VM (Standard_B1s) with Nginx installed&lt;/li&gt;
&lt;li&gt;Storage Account with blob container&lt;/li&gt;
&lt;li&gt;Key Vault with secret &amp;amp; managed identity&lt;/li&gt;
&lt;li&gt;Cost Budget at $10/month with 80% alert&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Commands
&lt;/h2&gt;

&lt;p&gt;az group create, az vm create, az network vnet create, az storage account create, az keyvault create&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;How to provision a full Azure environment from the CLI&lt;/li&gt;
&lt;li&gt;VNet + NSG = the network security foundation&lt;/li&gt;
&lt;li&gt;Key Vault + Managed Identity = zero-credential secret management&lt;/li&gt;
&lt;li&gt;Always delete resources after a lab to avoid charges&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;'@ | Set-Content -Path "README.md"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flv3ihm8kbb743yw53u7g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flv3ihm8kbb743yw53u7g.png" alt="readme" width="800" height="500"&gt;&lt;/a&gt;.&lt;br&gt;
&lt;em&gt;The error you see in my screenshot resulted when I used a bash command instead of PowerShell.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now that I've "built" the file, let's "see" it. Run this command to read it back in your terminal:&lt;br&gt;
&lt;strong&gt;Get-Content README.md&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkal7992070rcn01cn6fl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkal7992070rcn01cn6fl.png" alt="readme" width="800" height="447"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Notice the README.md is in the azure-cli-lab folder&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Commit and push to GitHub&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This pushes your locally created lab notes to an externally hosted repository.&lt;/p&gt;

&lt;p&gt;Why It's Needed&lt;br&gt;
A standard version-control workflow for real-world projects and portfolio sharing.&lt;/p&gt;

&lt;p&gt;Pillar Connection&lt;br&gt;
Operational Excellence — tracking version history in remote repos.&lt;br&gt;
Run the following commands:&lt;br&gt;
&lt;strong&gt;git add .&lt;/strong&gt;&lt;br&gt;
Stages all changed files for the next commit.&lt;br&gt;
If you want to be specific, you can name the file: &lt;strong&gt;git add README.md&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;git commit -m 'docs: Azure CLI cloud lab — full environment from scratch'&lt;/strong&gt;&lt;br&gt;
Creates a new commit with all staged changes and the message after -m&lt;br&gt;
&lt;strong&gt;git branch -M main&lt;/strong&gt;&lt;br&gt;
Renames the current branch to main (-M forces the rename even if a main branch already exists).&lt;br&gt;
&lt;strong&gt;git remote add origin https://github.com/YOUR_USERNAME/azure-cli-lab.git&lt;/strong&gt;&lt;br&gt;
Connects your local repository to the remote GitHub repository, naming it origin&lt;br&gt;
&lt;strong&gt;git push -u origin main&lt;/strong&gt;&lt;br&gt;
Uploads your local commits to the remote repository.&lt;br&gt;
The goal is actually sending the box to the cloud.&lt;/p&gt;

&lt;p&gt;The -u "links" your local folder to the GitHub folder forever, so next time you only have to type &lt;code&gt;git push&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fype6osm4d1s3hzqx81kc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fype6osm4d1s3hzqx81kc.png" alt="gitopens" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4w9fwlgwc55emnqcl7r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4w9fwlgwc55emnqcl7r.png" alt="git" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6uwk5o5u2kvo1v7pzs5u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6uwk5o5u2kvo1v7pzs5u.png" alt="verificationcode" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7sasrwyhslncrnpt0qm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7sasrwyhslncrnpt0qm.png" alt="complete" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need to make sure there is a "landing pad" waiting for your code on the internet.&lt;br&gt;
Think of it like this: your terminal knows what to send, but GitHub doesn't know where to put it yet.&lt;br&gt;
&lt;strong&gt;Step 1: Create the "Landing Pad" (GitHub Website)&lt;/strong&gt;&lt;br&gt;
Before running the next command, you need to do this manually in your browser:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to github.com.&lt;/li&gt;
&lt;li&gt;Click the + icon in the top right and select &lt;strong&gt;New repository&lt;/strong&gt;.
Name it exactly &lt;strong&gt;azure-cli-lab&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: Do not check "Initialize this repository with a README" (because we already created one in your terminal).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fij1s5xq1tvepjvl39ntn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fij1s5xq1tvepjvl39ntn.png" alt="repo" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scroll to the page bottom and Click &lt;strong&gt;Create repository&lt;/strong&gt;.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fanmogubo3r7906frnu3m.png" alt="createrepo" width="800" height="528"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 2: The "Git Log" Check (Terminal)&lt;/strong&gt;&lt;br&gt;
While setting that up, let's verify that the current branch 'main' already has commits by running this command:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;git log --oneline&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80hjaa0we45cqo9gw795.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80hjaa0we45cqo9gw795.png" alt="boxispacked" width="800" height="454"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 3: Connect and Push&lt;/strong&gt;&lt;br&gt;
Once the GitHub repository is created on the website, I'll run these two final commands to finish the lab:&lt;/p&gt;

&lt;p&gt;Connect your computer to the web address (replace YOUR_USERNAME with your own):&lt;br&gt;
&lt;em&gt;git remote add origin &lt;a href="https://github.com/rahimahisah17/azure-cli-lab.git" rel="noopener noreferrer"&gt;https://github.com/rahimahisah17/azure-cli-lab.git&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Upload the files:&lt;br&gt;
&lt;em&gt;git push -u origin main&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftwnj4j1w77ueb1kz2bq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftwnj4j1w77ueb1kz2bq.png" alt="worked" width="800" height="274"&gt;&lt;/a&gt;&lt;br&gt;
The latest screenshot shows a total success. I've officially "pushed" my code from my local machine to the cloud. &lt;strong&gt;Seeing * [new branch] main -&amp;gt; main is the final green light for any developer.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Writing objects&lt;/em&gt;: 100% (3/3) means all 3 parts of the Git snapshot &lt;strong&gt;(the files, the folder info, and the message)&lt;/strong&gt; were uploaded.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;branch main&lt;/code&gt; set up to track 'origin/main' means your computer and GitHub are now "synced." Next time you change your README, you only have to type &lt;strong&gt;git push&lt;/strong&gt;, no extra settings needed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Correction to README
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;I realized I used East US instead of Korea Central. I must also state that I failed to create the budget, and give the reason.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Update the README.md File&lt;/strong&gt;&lt;br&gt;
This command uses a "here-string" to overwrite the existing file with the new location (Korea Central) and a note about the subscription limitations.&lt;br&gt;
Run this block:&lt;br&gt;
&lt;strong&gt;@'&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Azure CLI Cloud Lab
&lt;/h1&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;A complete Azure environment using only the Azure CLI — no portal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources Created
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Resource Group (azurecli-lab-rg in Korea Central)&lt;/li&gt;
&lt;li&gt;Virtual Network (10.0.0.0/16) with Subnet (10.0.1.0/24)&lt;/li&gt;
&lt;li&gt;NSG with SSH (22) and HTTP (80) rules&lt;/li&gt;
&lt;li&gt;Ubuntu VM (Standard_B1s) with Nginx installed&lt;/li&gt;
&lt;li&gt;Storage Account with blob container&lt;/li&gt;
&lt;li&gt;Key Vault with secret &amp;amp; managed identity&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;[!IMPORTANT]&lt;br&gt;
&lt;strong&gt;Cost Budget Note:&lt;/strong&gt; The $10 monthly budget failed to implement in this specific environment due to Azure subscription limitations (e.g., Free Tier or specific tenant restrictions).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Key Commands
&lt;/h2&gt;

&lt;p&gt;az group create, az vm create, az network vnet create, az storage account create, az keyvault create&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Regional differences: Migrated deployment to Korea Central.&lt;/li&gt;
&lt;li&gt;API Constraints: Budgeting tools are restricted on certain subscription types.&lt;/li&gt;
&lt;li&gt;Always delete resources after a lab to avoid charges.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;'@ | Set-Content -Path "README.md"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yd0ozod1npdt5nzenu4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yd0ozod1npdt5nzenu4.png" alt="stage" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Update Your Resource Group Location&lt;/strong&gt;&lt;br&gt;
Since I decided to change the location to Korea Central, I ran this to update my Azure environment to match the new documentation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;az group update --name "azurecli-lab-rg" --set location="koreacentral"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozwuexa0raoaybj1n56m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozwuexa0raoaybj1n56m.png" alt="ameend" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Amend and Force Push to GitHub&lt;/strong&gt;&lt;br&gt;
Since I already pushed a version of this project, I will "amend" the previous commit so the history stays clean and professional.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;git add README.md&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Overwrite the last commit message&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;git commit --amend -m "docs: update location to Korea Central and note budget limit"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Force push to the cloud&lt;/strong&gt;&lt;br&gt;
(This is required because we are changing history that was already uploaded.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;git push origin main --force&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4500trdsbf9gob5xqjjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4500trdsbf9gob5xqjjc.png" alt="git" width="800" height="122"&gt;&lt;/a&gt;&lt;/p&gt;
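&lt;p&gt;A note on safety: plain &lt;code&gt;--force&lt;/code&gt; overwrites whatever is on the remote, no questions asked. When rewriting history that was already pushed, &lt;code&gt;--force-with-lease&lt;/code&gt; is a gentler alternative (a suggestion on my part, not the command used above): it refuses to overwrite remote commits your local clone has not yet seen.&lt;/p&gt;

```shell
# Safer variant of the force push: fails instead of clobbering commits
# that were added to the remote since your last fetch.
git push --force-with-lease origin main
```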

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this part of the project, I successfully extended my Azure CLI lab by implementing storage, security, and operational workflows. I created a storage account and container, uploaded and verified files, and explored two authentication approaches: RBAC and access keys. I also set up Azure Key Vault to &lt;strong&gt;securely store and retrieve secrets&lt;/strong&gt;, and configured a managed identity for secure, credential-free access.&lt;/p&gt;

&lt;p&gt;While attempting to implement &lt;strong&gt;cost monitoring&lt;/strong&gt;, I encountered Azure subscription limitations that prevented budget creation via the CLI; this is an important real-world insight into how Free Trial subscriptions behave.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Overall, this phase reinforced key cloud principles of secure data handling, identity-based access, cost awareness, and environment cleanup. It also demonstrated that beyond just running commands, understanding Azure’s underlying constraints and design decisions is critical for building reliable, production-ready solutions.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>azurecli</category>
      <category>devops</category>
      <category>cloudcomputing</category>
      <category>azure</category>
    </item>
    <item>
      <title>Data-Driven Energy Insights: Analyzing National Fuel Markets with Power BI &amp; DAX</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Mon, 16 Mar 2026 17:35:13 +0000</pubDate>
      <link>https://dev.to/rahimah_dev/data-driven-energy-insights-analyzing-national-fuel-markets-with-power-bi-dax-3f1c</link>
      <guid>https://dev.to/rahimah_dev/data-driven-energy-insights-analyzing-national-fuel-markets-with-power-bi-dax-3f1c</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why Fuel Market Data Matters&lt;/strong&gt;&lt;br&gt;
In a global economy, understanding energy consumption and fuel distribution is more than just looking at numbers, it’s about identifying economic patterns and infrastructure needs. I recently completed a deep-dive analysis into the National Fuel Market in Argentina, transforming raw datasets into an interactive, actionable intelligence report.&lt;/p&gt;

&lt;p&gt;This project wasn't just about visualization, it was about building a robust data model that could handle complex regional variables and provide clear insights for stakeholders.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Technical Challenge
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;From Raw Data to Insights&lt;/strong&gt;&lt;br&gt;
Every data project starts with a hurdle. For this analysis, the focus was on ensuring data integrity across various provinces and fuel types.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Data Architecture &amp;amp; Modeling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I implemented a &lt;code&gt;Star Schema&lt;/code&gt; to keep the report performant despite the dataset's size. By separating fact tables (sales and prices) from dimension tables (geography, time, and fuel categories), I ensured that the report remains scalable for future data updates. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdu89s37jzv9k6bddf8f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdu89s37jzv9k6bddf8f.png" alt="model" width="800" height="607"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Advanced Analytics with DAX&lt;/strong&gt;&lt;br&gt;
To go beyond basic arithmetic, I utilized DAX (Data Analysis Expressions) to create dynamic measures. These allowed for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Year-over-Year (YoY) Growth: Tracking how consumption shifted across different quarters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Regional Market Share: Identifying which provinces dominated specific fuel categories.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Price Volatility Tracking: Visualizing how price fluctuations impacted sales volume.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. I customized a wireframe using PowerPoint&lt;/strong&gt;.&lt;br&gt;
Within the wireframe, I inserted shapes, resized them, and added fill colors of my choice. I imported the icons from &lt;code&gt;Flaticon&lt;/code&gt;. &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpzere20nfyvpmwaihr7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpzere20nfyvpmwaihr7.png" alt="ppt" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. I used a high-end UI/UX design&lt;/strong&gt; &lt;br&gt;
This demonstrated the effective use of buttons and bookmarks to enhance interactivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Insights Discovered
&lt;/h2&gt;

&lt;p&gt;The data revealed several compelling trends that would be vital for any policy-maker or private stakeholder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3668p94ptlqhfs0co02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3668p94ptlqhfs0co02.png" alt="Images" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnt1fdhapwcp3n5zwifd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnt1fdhapwcp3n5zwifd.png" alt="Images" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Regional Concentration&lt;/strong&gt;: Highlighting specific hubs where infrastructure investment would yield the highest ROI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consumption Shifts&lt;/strong&gt;: Identifying the transition points between traditional fuels and emerging alternatives.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Market Resilience&lt;/strong&gt;: How various regions reacted to pricing shifts over the analyzed period.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Interactive Experience
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Static reports only tell half the story. To truly explore the data, I’ve published the full interactive version of the dashboard&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.novypro.com/profile_about/1770707382971x906585397690459400?Popup=memberProject&amp;amp;Data=1772657215286x758082894723003800" rel="noopener noreferrer"&gt;View the &lt;strong&gt;Interactive&lt;/strong&gt; Report on NovyPro&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rahimahisah17/National-Fuel-Market-Analysis" rel="noopener noreferrer"&gt;Explore the Full Technical Repository on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Turning Data into Strategy&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;As a Data Analyst, my goal is always to bridge the gap between technical complexity and business strategy. This project reinforced the importance of clean modeling and the power of interactive storytelling in data&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>businessintelligence</category>
      <category>energysector</category>
      <category>portfolioproject</category>
      <category>dataanalysis</category>
    </item>
    <item>
      <title>Creating Azure Resources via Azure CLI: Part 2</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Thu, 12 Mar 2026 15:52:12 +0000</pubDate>
      <link>https://dev.to/rahimah_dev/creating-azure-resources-via-azure-cli-part-2-10m9</link>
      <guid>https://dev.to/rahimah_dev/creating-azure-resources-via-azure-cli-part-2-10m9</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Adaptability is the core of DevOps. After navigating subscription-level constraints in our previous VM setup, it became clear that understanding 'where' and 'how' you deploy is just as vital as the 'what.' &lt;br&gt;
In this second part of our Azure CLI series, we apply those hard-won lessons to build a faster, more agile deployment workflow. From &lt;strong&gt;optimizing&lt;/strong&gt; locations to &lt;strong&gt;selecting&lt;/strong&gt; the right VM sizes on the fly, this guide provides a professional blueprint for mastering Azure resources through the command line.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Here, I skipped installing the Azure CLI and went straight to verification&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Verify the installation&lt;/strong&gt;.&lt;br&gt;
To verify the installation, run the &lt;strong&gt;az --version&lt;/strong&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9e3jqap1vmqe4a5k9h79.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9e3jqap1vmqe4a5k9h79.png" alt="version" width="800" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This displays the Azure CLI version in use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Login&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run the Azure command &lt;strong&gt;az login&lt;/strong&gt;. This opens a browser for interactive authentication and sets your active Azure subscription context.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76i5zuys0fcdl8lg8hf4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76i5zuys0fcdl8lg8hf4.png" alt="login" width="800" height="468"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Notice&lt;/em&gt; I selected my account and then &lt;strong&gt;Continue&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwbfdpqc34gl3uedxk40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwbfdpqc34gl3uedxk40.png" alt="login" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note I did not have to sign in because the account was set in the previous exercise, using the &lt;strong&gt;az account set&lt;/strong&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;: To confirm my active subscription, I entered 1, because it is the only active subscription I have. If you have more than one subscription, enter the number that corresponds to the subscription you intend to use.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5vemrkoq38s4yercu1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5vemrkoq38s4yercu1x.png" alt="setacct" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;We can go ahead and provision the Resource Group now&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Resource Group
&lt;/h2&gt;

&lt;p&gt;In this section, we aim to create a resource group to act as the logical container for the entire lab environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: &lt;strong&gt;Set a variable for the resource group&lt;/strong&gt;. &lt;br&gt;
Store the &lt;strong&gt;resource group name&lt;/strong&gt; and &lt;strong&gt;region&lt;/strong&gt; in PowerShell variables. This is highly recommended: it prevents typos throughout the rest of the lab and makes the script easily reusable.&lt;/p&gt;

&lt;p&gt;$RG = "azurecli-lab-rg"&lt;br&gt;
This sets the PowerShell variable RG so later commands can reference it with &lt;strong&gt;$RG&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;$LOCATION = "koreacentral"&lt;br&gt;
This sets the PowerShell variable LOCATION so later commands can reference it with &lt;strong&gt;$LOCATION&lt;/strong&gt;.&lt;/p&gt;
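&lt;p&gt;&lt;em&gt;If you are following along in Bash (for example in Azure Cloud Shell) instead of PowerShell, the same setup looks like this minimal sketch; note that Bash drops the leading $ on assignment and forbids spaces around the equals sign&lt;/em&gt;:&lt;/p&gt;

```shell
# Lab-wide variables (Bash syntax; in PowerShell write: $RG = "azurecli-lab-rg")
RG="azurecli-lab-rg"
LOCATION="koreacentral"

# Every later command can now reference $RG and $LOCATION, e.g.:
echo "az group create --name $RG --location $LOCATION"
```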

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: &lt;strong&gt;Create the Resource Group&lt;/strong&gt;&lt;br&gt;
Create a named resource group in &lt;strong&gt;Korea Central&lt;/strong&gt; this time around. All resources in this lab will be placed here for easy cleanup.&lt;/p&gt;

&lt;p&gt;This is needed because Azure requires every resource to live inside a resource group. &lt;em&gt;Resource groups make it easy to manage, monitor, and delete everything together at the end of the lab&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Operational Excellence — grouping related resources together is a best practice for &lt;strong&gt;manageability&lt;/strong&gt; and &lt;strong&gt;cost tracking&lt;/strong&gt;.&lt;br&gt;
Run the command: &lt;strong&gt;az group create --name $RG --location $LOCATION&lt;/strong&gt;.&lt;br&gt;
&lt;em&gt;This creates a resource group called "$RG" — a logical container for all the Azure resources in this lab&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7n1rpfv3xjjqgse604a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7n1rpfv3xjjqgse604a.png" alt="created" width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clearly, it was created: the properties are listed, and &lt;em&gt;notice&lt;/em&gt; the &lt;strong&gt;ProvisioningState&lt;/strong&gt; reads &lt;strong&gt;"Succeeded"&lt;/strong&gt;.&lt;/p&gt;
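&lt;p&gt;&lt;em&gt;As an optional check (a sketch, assuming $RG is still set in your session), you can query just the provisioning state instead of reading the full JSON output&lt;/em&gt;:&lt;/p&gt;

```shell
# Print only the provisioning state of the resource group
az group show --name $RG --query properties.provisioningState --output tsv
# This should print: Succeeded
```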

&lt;h2&gt;
  
  
  Build a Virtual Network (VNet) &amp;amp; Subnet
&lt;/h2&gt;

&lt;p&gt;This is aimed at creating a secure private network for your Azure resources to communicate on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: &lt;strong&gt;Create the Virtual Network&lt;/strong&gt;&lt;br&gt;
Here, you will create a &lt;strong&gt;Virtual Network&lt;/strong&gt; with a broad 10.0.0.0/16 IP address space.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is necessary because VMs and other infrastructure need a secure, isolated private network to communicate with each other.&lt;br&gt;
Creating an isolated network boundary is the foundational step of cloud security&lt;/em&gt;.&lt;br&gt;
Run the command: &lt;strong&gt;az network vnet create --address-prefix 10.0.0.0/16 --resource-group $RG --name lab-vnet --location $LOCATION&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87x6iyqeksyn0dx9md1r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87x6iyqeksyn0dx9md1r.png" alt="vnet" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;It took just 4 seconds for this virtual network to provision. That's amazing!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: &lt;strong&gt;Create a Subnet&lt;/strong&gt;&lt;br&gt;
This carves out a &lt;strong&gt;smaller 10.0.1.0/24 piece (subnet)&lt;/strong&gt; of the VNet specifically for your VMs.&lt;/p&gt;

&lt;p&gt;Why this is relevant&lt;br&gt;
&lt;em&gt;Segmenting networks allows you to apply different routing and firewall rules to different types of resources&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This aspect of security is known as network segmentation.&lt;br&gt;
Run the CLI command: &lt;strong&gt;az network vnet subnet create --resource-group $RG --vnet-name lab-vnet --name lab-subnet --address-prefix 10.0.1.0/24&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6oad23duj9vs5p6mh9v2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6oad23duj9vs5p6mh9v2.png" alt="subnet" width="800" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;: &lt;strong&gt;Create a Network Security Group (NSG)&lt;/strong&gt;&lt;br&gt;
A Network Security Group acts as a virtual firewall.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is important because without an NSG attached, Microsoft allows no inbound traffic but allows all outbound traffic. We need an NSG to poke specific holes in the firewall.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Controlling traffic flow with firewalls is a basic security requirement&lt;/strong&gt;.&lt;br&gt;
Run the command: &lt;strong&gt;az network nsg create --resource-group $RG --name lab-nsg&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpbssz5rvs8yilpb3xlt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpbssz5rvs8yilpb3xlt.png" alt="nsg" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt;: &lt;strong&gt;Open port 22 (SSH) &amp;amp; 80 (HTTP)&lt;/strong&gt;&lt;br&gt;
Let's add inbound rules allowing SSH (port 22) and HTTP (port 80) access from the internet.&lt;br&gt;
The reason for this action is that you'll need SSH to log in and configure the server, and HTTP so users can view the web page.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This explicitly defines inbound access, following the principle of least privilege&lt;/em&gt;.&lt;br&gt;
&lt;strong&gt;First&lt;/strong&gt;, ensure your terminal knows what $RG is by running this line first: &lt;strong&gt;$RG="azurecli-lab-rg"&lt;/strong&gt;&lt;br&gt;
Then &lt;strong&gt;create the SSH rule&lt;/strong&gt; using this command:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;az network nsg rule create `
  --resource-group $RG `
  --nsg-name lab-nsg `
  --name AllowSSH `
  --priority 1000 `
  --destination-port-ranges 22 `
  --access Allow `
  --protocol Tcp `
  --direction Inbound
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Wait for the first command to finish. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxe02vlm2qnweemvt8ua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxe02vlm2qnweemvt8ua.png" alt="ssh" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then &lt;strong&gt;create the HTTP rule&lt;/strong&gt; by running this separate block:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;az network nsg rule create `
  --resource-group $RG `
  --nsg-name lab-nsg `
  --name AllowHTTP `
  --priority 1010 `
  --destination-port-ranges 80 `
  --access Allow `
  --protocol Tcp `
  --direction Inbound
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Notice the backtick (`) serves as a line-continuation character in PowerShell because the command is long. Leave one space before the backtick, then press Enter&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5vgv83o5pw4xyzywljc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5vgv83o5pw4xyzywljc.png" alt="http" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5&lt;/strong&gt;: &lt;strong&gt;Attach NSG to Subnet&lt;/strong&gt;&lt;br&gt;
This enforces the firewall rules (NSG) at the subnet boundary.&lt;/p&gt;

&lt;p&gt;It's needed because applying the NSG to the subnet ensures that &lt;strong&gt;any VM&lt;/strong&gt; created in that subnet automatically inherits those exact firewall rules, thereby &lt;strong&gt;protecting the entire subnet&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Security — subnet-level application of security controls.&lt;br&gt;
Run this block of commands:&lt;br&gt;
&lt;strong&gt;az network vnet subnet update &lt;code&gt;&lt;br&gt;
  --resource-group $RG &lt;/code&gt;&lt;br&gt;
  --vnet-name lab-vnet &lt;code&gt;&lt;br&gt;
  --name lab-subnet &lt;/code&gt;&lt;br&gt;
  --network-security-group lab-nsg&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Remember to run the resource group variable first if you restart the terminal&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuelp57knhmu7ftlaa3s6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuelp57knhmu7ftlaa3s6.png" alt="nsg" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Provision a Linux Virtual Machine
&lt;/h2&gt;

&lt;p&gt;Here, you will create an Ubuntu VM with a public IP inside your VNet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: &lt;strong&gt;Allocate a Public IP&lt;/strong&gt;&lt;br&gt;
This allocates a static public IP address in Azure.&lt;/p&gt;

&lt;p&gt;It's needed because without a public IP, the VM can only be accessed internally through the VNet or a VPN. You need this to reach your web server from your browser.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Reliability — using a Static IP ensures the address does not change upon reboot.&lt;/em&gt;&lt;br&gt;
Run this command:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;az network public-ip create `
  --resource-group $RG `
  --name lab-public-ip `
  --allocation-method Static `
  --sku Basic
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwg77ck2ifoffq46lkka1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwg77ck2ifoffq46lkka1.png" alt="nsg" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Notice that due to an error, I had to switch the SKU to "Standard". Pay attention to error messages&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkq7nhxn0sv72gijqqibc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkq7nhxn0sv72gijqqibc.png" alt="nsg" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: &lt;strong&gt;Create the VM&lt;/strong&gt;&lt;br&gt;
This creates a burstable B-series Ubuntu VM with auto-generated SSH keys and connects it to the existing subnet and firewall.&lt;/p&gt;

&lt;p&gt;It's necessary because this is the actual cloud compute instance that will run your web application code.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Performance Efficiency — selecting the appropriately sized VM for your workload (a small B-series size for dev/test)&lt;/em&gt;.&lt;br&gt;
Run this command:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;az vm create `
  --resource-group azurecli-lab-rg `
  --name lab-vm `
  --image Ubuntu2204 `
  --size Standard_B2s_v2 `
  --location koreacentral `
  --admin-username azureuser `
  --generate-ssh-keys `
  --vnet-name lab-vnet `
  --subnet lab-subnet `
  --public-ip-address lab-public-ip `
  --nsg lab-nsg
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sqm5r1di3wu0e6ipxj2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sqm5r1di3wu0e6ipxj2.png" alt="vm" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Notice that this time around, I did not have to change anything because I am now aware of the limitations that come with the subscription, and the &lt;strong&gt;PowerState reads "running"&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;: &lt;strong&gt;Retrieve the public IP&lt;/strong&gt;&lt;br&gt;
This will filter the Azure API response to return just the IP address string.&lt;/p&gt;

&lt;p&gt;It's important because you'll need this IP to SSH into the machine and to test the web application.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Operational Excellence — automated retrieval of resource attributes avoids manual portal lookups&lt;/em&gt;.&lt;br&gt;
Run the command:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;az network public-ip show `
  --resource-group azurecli-lab-rg `
  --name lab-public-ip `
  --query ipAddress `
  --output tsv
&lt;/code&gt;&lt;/pre&gt;
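&lt;p&gt;&lt;em&gt;A handy follow-on (a Bash sketch, assuming the resources above exist): capture the address in a variable so the SSH step can reuse it without copy-pasting&lt;/em&gt;:&lt;/p&gt;

```shell
# Capture the public IP once and reuse it later (--output tsv strips JSON quotes)
IP=$(az network public-ip show --resource-group azurecli-lab-rg \
     --name lab-public-ip --query ipAddress --output tsv)
echo "Connect with: ssh azureuser@$IP"
```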

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubbpgzkwd031tx03dcmb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubbpgzkwd031tx03dcmb.png" alt="Ip" width="800" height="451"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Notice the error occurred because I mistakenly omitted the "-rg" suffix&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt;: &lt;strong&gt;Verify the VM is running&lt;/strong&gt;&lt;br&gt;
This queries the VM status and displays it in a clean table format.&lt;/p&gt;

&lt;p&gt;It's needed because you always verify provisioning success before attempting connections.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Operational Excellence — verification and monitoring&lt;/em&gt;.&lt;br&gt;
Run the command:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;az vm show `
  --resource-group azurecli-lab-rg `
  --name lab-vm `
  --show-details `
  --query '{Name:name, State:powerState, IP:publicIps}' `
  --output table
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsa9bj99nrxmu00sy5a1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsa9bj99nrxmu00sy5a1.png" alt="running" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5&lt;/strong&gt;: &lt;strong&gt;SSH into your VM &amp;amp; install Nginx&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This step logs into the VM over the internet via SSH, installs the Nginx package using APT, and starts the service.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Why It's Needed&lt;/em&gt;&lt;br&gt;
A fresh VM is blank. Nginx serves as the web server to test our HTTP port 80 firewall rule.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Pillar Connection&lt;/em&gt;&lt;br&gt;
Operational Excellence — bootstrap scripts or user data are typically used to automate this step.&lt;/p&gt;
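&lt;p&gt;&lt;em&gt;The screenshots below walk through this step interactively; as a rough sketch (assuming the azureuser account created above, with YOUR_PUBLIC_IP standing in for the address retrieved in Step 3), the commands look like this&lt;/em&gt;:&lt;/p&gt;

```shell
# Connect to the VM (substitute your own public IP from Step 3)
ssh azureuser@YOUR_PUBLIC_IP

# Then, on the VM: install and start the Nginx web server
sudo apt update
sudo apt install -y nginx
sudo systemctl status nginx   # confirm the service is active (running)
```

&lt;p&gt;&lt;em&gt;Browsing to http://YOUR_PUBLIC_IP afterwards should show the Nginx welcome page, confirming the port 80 rule works&lt;/em&gt;.&lt;/p&gt;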

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5kjebxyw491umseykci.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5kjebxyw491umseykci.png" alt="nginx" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F34k8w71ec4t5tq6zzlh9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F34k8w71ec4t5tq6zzlh9.png" alt="ubuntu" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr07lgx3s199rqd9cuj8r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr07lgx3s199rqd9cuj8r.png" alt="ubuntu" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjhb1j55xbxqf4o076rk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjhb1j55xbxqf4o076rk.png" alt="ubuntu" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fic4bvpmj5wa46cl0ac10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fic4bvpmj5wa46cl0ac10.png" alt="ubuntu" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can go ahead and verify the provisioned resources in the Azure Portal. This will help you appreciate resource creation via the Azure CLI; it's very fast and efficient&lt;/em&gt;.&lt;/p&gt;
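&lt;p&gt;&lt;em&gt;When you finish the lab, a single command tears everything down, which is exactly why we grouped all resources at the start (a sketch; deletion is irreversible, so double-check the group name first)&lt;/em&gt;:&lt;/p&gt;

```shell
# Delete the resource group and everything inside it, without waiting
az group delete --name azurecli-lab-rg --yes --no-wait
```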

&lt;h2&gt;
  
  
  Conclusion: Your Turn to Command the Cloud
&lt;/h2&gt;

&lt;p&gt;Stepping out of the comfort zone of the Azure Portal and into the CLI is more than just a technical shift; it’s a mindset shift. By following this guide, you’ve moved from being a "user" of the cloud to someone who truly "architects" it.&lt;/p&gt;

&lt;p&gt;Don't be discouraged if you hit errors along the way. Every "SkuNotAvailable" or "InvalidParameter" is just a signal that you're learning how the machine actually thinks. The more you practice, the more these commands will feel like a second language.&lt;/p&gt;

&lt;p&gt;I’d love to hear from you! Let's go to the comment section👇👇&lt;/p&gt;

</description>
      <category>cloudcomputing</category>
      <category>infrastructure</category>
      <category>azurecli</category>
      <category>automation</category>
    </item>
    <item>
      <title>Microsoft Azure Management Tasks: Manage Tags and Locks</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Thu, 12 Mar 2026 10:57:23 +0000</pubDate>
      <link>https://dev.to/rahimah_dev/microsoft-azure-management-tasks-manage-tags-and-locks-475c</link>
      <guid>https://dev.to/rahimah_dev/microsoft-azure-management-tasks-manage-tags-and-locks-475c</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In a fast-paced cloud environment, visibility and protection are the difference between seamless operations and costly downtime. As a cloud-focused professional, I understand that managing resources isn't just about deployment; it's about governance. &lt;br&gt;
&lt;em&gt;This project demonstrates my ability to implement Azure Resource Governance by applying strategic metadata via tags for cost center tracking and enforcing Resource Locks to prevent accidental deletions of mission-critical infrastructure&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;If you’ve completed the previous exercises (in the Microsoft Azure Management Tasks series), you’ve added a subnet to a virtual network, made changes to a virtual machine, and worked with an Azure storage account. The final set of tasks for this guided project focuses on working with &lt;strong&gt;tags&lt;/strong&gt; and &lt;strong&gt;resource locks&lt;/strong&gt; to help manage and monitor your environment. During this exercise you’ll go back into each of the areas you’ve already worked in to add tags, locks, or a combination of both.&lt;/p&gt;

&lt;p&gt;This exercise should take approximately &lt;strong&gt;5&lt;/strong&gt; minutes to complete.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;Pleased with your progress so far, the Azure admin hopes that you can wrap a few things up to help with monitoring and protecting resources. They want to know that someone can’t accidentally get rid of the virtual machine that’s running as an FTP server, and they want a quick way to see what department is using resources and the resource’s purpose.&lt;/p&gt;

&lt;h2&gt;
  
  
  Manage tags and locks on VMs
&lt;/h2&gt;

&lt;p&gt;Adding tags to resources is a quick way to be able to group and organize resources. Tags can be added at different levels, giving you the ability to organize and group resources at a level that makes sense for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add tags to a virtual machine
&lt;/h2&gt;

&lt;p&gt;You’ll start by adding a pair of tags to the virtual machine. One tag will be to identify the purpose of the virtual machine and the other will be to indicate the department the machine supports.&lt;/p&gt;

&lt;p&gt;1.Login to Microsoft Azure at &lt;a href="https://portal.azure.com" rel="noopener noreferrer"&gt;https://portal.azure.com&lt;/a&gt;&lt;br&gt;
2.From the Azure portal home page, in the search box, enter &lt;strong&gt;virtual machines&lt;/strong&gt;.&lt;br&gt;
3.Select &lt;strong&gt;virtual machines&lt;/strong&gt; under services.&lt;br&gt;
4.Select the &lt;strong&gt;guided-project-vm&lt;/strong&gt; virtual machine.&lt;br&gt;
5.From the menu pane, select &lt;strong&gt;Tags&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4j6mirngbc9o3h5a7nw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4j6mirngbc9o3h5a7nw.png" alt="tags" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6.On one line for &lt;strong&gt;Name&lt;/strong&gt; enter &lt;code&gt;Department&lt;/code&gt; and for &lt;strong&gt;Value&lt;/strong&gt; enter &lt;code&gt;Customer Service&lt;/code&gt;&lt;br&gt;
7.On the next line, for &lt;strong&gt;Name&lt;/strong&gt; enter &lt;code&gt;Purpose&lt;/code&gt; and for &lt;strong&gt;Value&lt;/strong&gt; enter &lt;code&gt;FTP Server&lt;/code&gt;.&lt;br&gt;
8.Select &lt;strong&gt;Apply&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7igsbv3w2r766pgoe64s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7igsbv3w2r766pgoe64s.png" alt="apply" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;
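&lt;p&gt;If you prefer the terminal, the same tags can be applied with the Azure CLI. A hedged sketch, assuming the VM sits in the &lt;code&gt;guided-project-rg&lt;/code&gt; resource group and you have already signed in with &lt;code&gt;az login&lt;/code&gt;:&lt;/p&gt;

```shell
# Look up the VM's resource ID, then merge the two tags into its tag set.
vm_id=$(az vm show --resource-group guided-project-rg \
  --name guided-project-vm --query id --output tsv)
az tag update --resource-id "$vm_id" --operation Merge \
  --tags Department="Customer Service" Purpose="FTP Server"
```

&lt;p&gt;The &lt;code&gt;Merge&lt;/code&gt; operation keeps any tags already on the resource; &lt;code&gt;Replace&lt;/code&gt; would overwrite them.&lt;/p&gt;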

&lt;p&gt;While you’re working on the virtual machine, it’s a great time to add a resource lock.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add a resource lock to a VM
&lt;/h2&gt;

&lt;p&gt;1.If necessary, expand the &lt;strong&gt;Settings&lt;/strong&gt; submenu.&lt;br&gt;
2.Select &lt;strong&gt;Locks&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe68tbe2iht3bt88seotd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe68tbe2iht3bt88seotd.png" alt="locks" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.Select &lt;strong&gt;+ Add&lt;/strong&gt;.&lt;br&gt;
4.For the name, enter &lt;code&gt;VM-delete-lock&lt;/code&gt;.&lt;br&gt;
5.For the &lt;strong&gt;Lock type&lt;/strong&gt;, select &lt;strong&gt;Delete&lt;/strong&gt;.&lt;br&gt;
6.You may enter a note to help remind you why you created the lock.&lt;br&gt;
7.Select &lt;strong&gt;OK&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0ue2dgb0nm6aac6oakj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0ue2dgb0nm6aac6oakj.png" alt="deletelock" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;
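&lt;p&gt;The equivalent lock can be created from the Azure CLI. Note that the portal's &lt;strong&gt;Delete&lt;/strong&gt; lock type is called &lt;code&gt;CanNotDelete&lt;/code&gt; in the CLI (a sketch, assuming the same resource group as above):&lt;/p&gt;

```shell
# Add a delete lock so the FTP server VM can't be removed accidentally.
az lock create --name VM-delete-lock \
  --lock-type CanNotDelete \
  --resource-group guided-project-rg \
  --resource-name guided-project-vm \
  --resource-type Microsoft.Compute/virtualMachines \
  --notes "FTP server - do not delete"
```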

&lt;p&gt;That’s it. Now the VM is protected from deletion and has tags assigned to help track use. Time to move onto the network.&lt;/p&gt;

&lt;p&gt;1.Select &lt;strong&gt;Home&lt;/strong&gt; to return to the Azure portal home page.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add tags to network resources
&lt;/h2&gt;

&lt;p&gt;1.From the Azure portal home page, in the search box, enter &lt;strong&gt;virtual networks&lt;/strong&gt;.&lt;br&gt;
2.Select &lt;strong&gt;virtual networks&lt;/strong&gt; under services.&lt;br&gt;
3.Select the &lt;strong&gt;guided-project-vnet&lt;/strong&gt; network.&lt;br&gt;
4.From the menu pane, select &lt;strong&gt;Tags&lt;/strong&gt;.&lt;br&gt;
&lt;strong&gt;Note&lt;/strong&gt;: Notice that now you can select an existing tag to apply or add a new tag. You can also select an existing name or value and create something new in the other field.&lt;/p&gt;

&lt;p&gt;5.For the &lt;strong&gt;Name&lt;/strong&gt; select &lt;strong&gt;Department&lt;/strong&gt;.&lt;br&gt;
6.For the &lt;strong&gt;Value&lt;/strong&gt; enter &lt;code&gt;IT&lt;/code&gt;.&lt;br&gt;
7.Select &lt;strong&gt;Apply&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fted73iuopdze9tyodz6i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fted73iuopdze9tyodz6i.png" alt="vnet tags" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now both the VNet and VM have been organized.&lt;/p&gt;
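&lt;p&gt;Tags pay off when you query by them. For example, with the Azure CLI you can list every resource belonging to a department (a sketch; the tag name and value come from the steps above):&lt;/p&gt;

```shell
# List all resources tagged Department=IT, showing name and type.
az resource list --tag Department=IT \
  --query "[].{name:name, type:type}" --output table
```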

&lt;p&gt;Congratulations! You’ve completed this exercise.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By successfully implementing tags and resource locks, I have ensured that the environment is not only organized for departmental billing and monitoring but also hardened against human error. &lt;br&gt;
&lt;em&gt;These foundational governance tasks are essential for maintaining a secure, scalable, and professional Azure footprint&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>azureadmin</category>
      <category>infrastructure</category>
      <category>resourcelocks</category>
    </item>
    <item>
      <title>Microsoft Azure Management Tasks: Control storage access</title>
      <dc:creator>Rahimah Sulayman</dc:creator>
      <pubDate>Thu, 12 Mar 2026 10:43:14 +0000</pubDate>
      <link>https://dev.to/rahimah_dev/microsoft-azure-management-tasks-control-storage-access-566k</link>
      <guid>https://dev.to/rahimah_dev/microsoft-azure-management-tasks-control-storage-access-566k</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In modern cloud environments, storing files is only part of the job; controlling who can access them, and how, is just as critical. Organizations rely heavily on secure and scalable storage solutions to share data across teams, applications, and services. &lt;br&gt;
&lt;em&gt;In &lt;strong&gt;Microsoft Azure&lt;/strong&gt;, storage accounts, containers, and file shares provide powerful ways to manage data while maintaining strict control over access&lt;/em&gt;. &lt;br&gt;
In this exercise, you’ll complete several tasks related to managing a storage account: creating storage containers and file shares, uploading files, managing access tiers, and securely controlling access using shared access signatures.&lt;/p&gt;

&lt;p&gt;This exercise should take approximately &lt;strong&gt;12&lt;/strong&gt; minutes to complete.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;The Azure admin wants you to get more familiar with storage accounts, containers, and file shares. They anticipate needing to share an increasing number of files and need someone who is skilled using these services. They’ve given you a task of creating a storage container and a file share and uploading files to both locations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a storage container
&lt;/h2&gt;

&lt;p&gt;1.Login to Microsoft Azure at &lt;a href="https://portal.azure.com" rel="noopener noreferrer"&gt;https://portal.azure.com&lt;/a&gt;&lt;br&gt;
2.From the Azure portal home page, in the search box, enter &lt;strong&gt;storage accounts&lt;/strong&gt;.&lt;br&gt;
3.Select &lt;strong&gt;storage accounts&lt;/strong&gt; under services.&lt;br&gt;
4.Select the storage account you created in the &lt;strong&gt;Prepare&lt;/strong&gt; exercise. The storage account &lt;strong&gt;name&lt;/strong&gt; is the hyperlink to the storage account. &lt;em&gt;(Note: it should be associated with the resource group &lt;strong&gt;guided-project-rg&lt;/strong&gt;).&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uo5f9osii5dpszrfcqp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uo5f9osii5dpszrfcqp.png" alt="storageacct" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5.On the storage account blade, under the &lt;strong&gt;Data storage&lt;/strong&gt; submenu, select &lt;strong&gt;Containers&lt;/strong&gt;.&lt;br&gt;
6.Select &lt;strong&gt;+ Add container&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvtf8c20i75ugfz3fmr8w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvtf8c20i75ugfz3fmr8w.png" alt="container" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;7.In the &lt;strong&gt;Name&lt;/strong&gt; field, enter &lt;code&gt;storage-container&lt;/code&gt;.&lt;br&gt;
8.Select &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Great! With a storage container created, you can upload a blob to the container. Locate a picture that you can upload, either on your computer or from the internet, and save it locally to make uploading easier.&lt;/p&gt;
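&lt;p&gt;The container creation, and the upload you’re about to do, can also be scripted with the Azure CLI. A sketch; &lt;code&gt;mystorageacct&lt;/code&gt; and &lt;code&gt;photo.png&lt;/code&gt; are placeholder names for your own storage account and image:&lt;/p&gt;

```shell
STORAGE_ACCOUNT=mystorageacct   # placeholder: use your account name

# Create the container, then upload a local image as a block blob.
# --auth-mode login signs requests with your Azure AD identity.
az storage container create --name storage-container \
  --account-name "$STORAGE_ACCOUNT" --auth-mode login
az storage blob upload --container-name storage-container \
  --name photo.png --file ./photo.png \
  --account-name "$STORAGE_ACCOUNT" --auth-mode login
```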

&lt;p&gt;&lt;strong&gt;Upload a file to the storage container&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1.Select the storage container you just created. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9f6hjrs5yb4643ocykf3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9f6hjrs5yb4643ocykf3.png" alt="container" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2.Select &lt;strong&gt;Upload&lt;/strong&gt; and upload the file you prepared. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9icyqx2uliyi7h4ux9jl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9icyqx2uliyi7h4ux9jl.png" alt="upload" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.Once the file is ready for upload, select &lt;strong&gt;Upload&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;With the file uploaded, notice that the Access tier is displayed. Since this file was uploaded just for testing, it doesn’t need to be assigned to the &lt;strong&gt;Hot&lt;/strong&gt; access tier. In the next few steps, you’ll change the access tier for the file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Change the access tier
&lt;/h2&gt;

&lt;p&gt;1.Select the file you just uploaded (the file name is a hyperlink).&lt;br&gt;
2.Select &lt;strong&gt;Change tier&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsyslmb6qnqz1ydqcqz7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsyslmb6qnqz1ydqcqz7.png" alt="changetier" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.Select &lt;strong&gt;Cold&lt;/strong&gt;.&lt;br&gt;
4.Select &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuiduyarba33nkfqy5iwv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuiduyarba33nkfqy5iwv.png" alt="cold" width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: You just changed the access tier for an individual blob or file. To change the default access tier for all blobs within the storage account, you could change it at the storage account level.&lt;/p&gt;
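&lt;p&gt;Both tier changes have CLI equivalents, one per scope. A sketch, reusing the placeholder account name; note that the &lt;code&gt;Cold&lt;/code&gt; tier requires a reasonably recent Azure CLI version:&lt;/p&gt;

```shell
# Change the tier of a single blob...
az storage blob set-tier --container-name storage-container \
  --name photo.png --tier Cold \
  --account-name "$STORAGE_ACCOUNT" --auth-mode login

# ...or set the default access tier for the whole storage account.
az storage account update --name "$STORAGE_ACCOUNT" \
  --resource-group guided-project-rg --access-tier Cold
```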

&lt;p&gt;5.Select &lt;strong&gt;Home&lt;/strong&gt; to return to the Azure portal home page.&lt;/p&gt;

&lt;p&gt;Good job! You’ve successfully uploaded a storage blob and changed the access tier from Hot to Cold. Next, you’ll work with file shares.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a file share
&lt;/h2&gt;

&lt;p&gt;1.From the Azure portal home page, in the search box, enter &lt;strong&gt;storage accounts&lt;/strong&gt;.&lt;br&gt;
2.Select &lt;strong&gt;storage accounts&lt;/strong&gt; under services.&lt;br&gt;
3.Select the storage account you created in the &lt;strong&gt;Prepare&lt;/strong&gt; exercise. The storage account &lt;strong&gt;name&lt;/strong&gt; is the hyperlink to the storage account. &lt;em&gt;(Note: it should be associated with the resource group &lt;strong&gt;guided-project-rg&lt;/strong&gt;.)&lt;/em&gt;&lt;br&gt;
4.On the storage account blade, under the &lt;strong&gt;Data storage&lt;/strong&gt; submenu, select &lt;strong&gt;File shares&lt;/strong&gt;.&lt;br&gt;
5.Select + &lt;strong&gt;File share&lt;/strong&gt;.&lt;br&gt;
6.On the Basics tab, in the name field enter &lt;code&gt;file-share&lt;/code&gt;.&lt;br&gt;
7.On the &lt;strong&gt;Backup&lt;/strong&gt; tab, uncheck &lt;strong&gt;Enable backup&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84hyt3mql8x09d7jatba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84hyt3mql8x09d7jatba.png" alt="uncheck" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;8.Select &lt;strong&gt;Review + create&lt;/strong&gt;.&lt;br&gt;
9.Select &lt;strong&gt;Create&lt;/strong&gt;.&lt;br&gt;
10.Once the file share is created, select &lt;strong&gt;Upload&lt;/strong&gt;.&lt;br&gt;
11.Upload the same file you uploaded to blob storage, or a different file; it’s up to you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pliinaroxrd5rfhnb55.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pliinaroxrd5rfhnb55.png" alt="newupload" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;12.Select &lt;strong&gt;Home&lt;/strong&gt; to return to the Azure portal home page.&lt;/p&gt;
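&lt;p&gt;For completeness, the file share steps have a CLI equivalent too. A sketch; &lt;code&gt;az storage share-rm&lt;/code&gt; works through the management plane, and the file upload here authenticates with the storage account key rather than Azure AD:&lt;/p&gt;

```shell
# Create the SMB file share (backup is not enabled by this command).
az storage share-rm create --name file-share \
  --storage-account "$STORAGE_ACCOUNT" \
  --resource-group guided-project-rg

# Upload the same file to the share.
az storage file upload --share-name file-share \
  --source ./photo.png \
  --account-name "$STORAGE_ACCOUNT"
```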

&lt;p&gt;The next piece of the puzzle is figuring out a way to control access to the files that have been uploaded. Azure has many ways to control access to files, including role-based access control. In this scenario, the Azure admin wants you to use shared access signature (SAS) tokens and keys.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a shared access signature token
&lt;/h2&gt;

&lt;p&gt;1.From the Azure portal home page, in the search box, enter &lt;strong&gt;storage accounts&lt;/strong&gt;.&lt;br&gt;
2.Select &lt;strong&gt;storage accounts&lt;/strong&gt; under services.&lt;br&gt;
3.Select the storage account you created in the &lt;strong&gt;Prepare&lt;/strong&gt; exercise.&lt;br&gt;
4.On the storage account blade, select &lt;strong&gt;Storage browser&lt;/strong&gt;.&lt;br&gt;
5.Expand &lt;strong&gt;Blob containers&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Blob container is another name for the storage containers. Items uploaded to a storage container are called &lt;strong&gt;blobs&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;6.Select the storage container you created earlier, &lt;strong&gt;storage-container&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci5saoiowtmwjfx5byqs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci5saoiowtmwjfx5byqs.png" alt="blob" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;7.Select the ellipses (three dots) on the end of the line for the image you uploaded. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywmzx3yogwagrjpl3bcz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywmzx3yogwagrjpl3bcz.png" alt="ellipse" width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;8.Select &lt;strong&gt;Generate SAS&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: When you generate a shared access signature, you set the duration. Once the duration is over, the link stops working. The &lt;strong&gt;Start&lt;/strong&gt; automatically populates with the current date and time.&lt;/p&gt;

&lt;p&gt;9.Set &lt;strong&gt;Signing method&lt;/strong&gt; to &lt;strong&gt;Account key.&lt;/strong&gt;&lt;br&gt;
10.Set &lt;strong&gt;Signing key&lt;/strong&gt; to &lt;strong&gt;Key 1&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: There are two signing keys available. You can choose either one, and you can create SAS tokens with different durations.&lt;/p&gt;

&lt;p&gt;11.Set &lt;strong&gt;Stored access policy&lt;/strong&gt; to &lt;strong&gt;None&lt;/strong&gt;.&lt;br&gt;
12.Set &lt;strong&gt;Permissions&lt;/strong&gt; to &lt;strong&gt;Read&lt;/strong&gt;.&lt;br&gt;
13.Enter a custom start and expiry time or leave the defaults. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F305yts7q802pwgtzq7yn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F305yts7q802pwgtzq7yn.png" alt="SAS" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;14.Set &lt;strong&gt;Allowed protocols&lt;/strong&gt; to &lt;strong&gt;HTTPS only&lt;/strong&gt;.&lt;br&gt;
15.Select &lt;strong&gt;Generate SAS token and URL&lt;/strong&gt;.&lt;br&gt;
16.Copy the &lt;strong&gt;Blob SAS URL&lt;/strong&gt; and paste it in another window or tab of your browser. It should display the image you uploaded. Keep this tab or window open.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa65x78rqckwhrmnu8gbo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa65x78rqckwhrmnu8gbo.png" alt="url" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: You can configure SAS tokens for file shares and blob containers using the same process.&lt;/p&gt;

&lt;p&gt;17.Select &lt;strong&gt;Home&lt;/strong&gt; to return to the Azure portal home page.&lt;/p&gt;
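&lt;p&gt;The same read-only, HTTPS-only SAS can be generated from the CLI. A sketch; the expiry timestamp is an example value, and without &lt;code&gt;--auth-mode login&lt;/code&gt; the command signs with the account key:&lt;/p&gt;

```shell
# Generate a read-only SAS URL for the blob, valid until the expiry time.
az storage blob generate-sas --container-name storage-container \
  --name photo.png \
  --permissions r --https-only \
  --expiry 2026-04-12T00:00Z \
  --account-name "$STORAGE_ACCOUNT" \
  --full-uri --output tsv
```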

&lt;p&gt;With the SAS token created, anyone with that link can access the file for the duration that was set when you created the SAS token. However, controlling access to a resource or file is about more than just granting access. It’s also about being able to &lt;strong&gt;revoke access&lt;/strong&gt;. To revoke access with a SAS token, you need to invalidate the token. You invalidate the token by rotating the key that was used.&lt;/p&gt;
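&lt;p&gt;Why does rotating the key invalidate the token? The SAS signature is an HMAC-SHA256 of the token’s fields, keyed by the account key, so the service can only verify signatures made with a key it still holds. A minimal illustration with &lt;code&gt;openssl&lt;/code&gt;; the string-to-sign below is simplified, not Azure’s exact format:&lt;/p&gt;

```shell
# Simplified SAS fields (illustrative, not Azure's real string-to-sign).
fields='sp=r;st=2026-03-12T00:00Z;se=2026-04-12T00:00Z;sr=b'

# Sign the same fields with the old key and with a rotated key.
old_sig=$(printf '%s' "$fields" | openssl dgst -sha256 -hmac "old-account-key" -binary | base64)
new_sig=$(printf '%s' "$fields" | openssl dgst -sha256 -hmac "new-account-key" -binary | base64)

# The signatures differ, so tokens signed with the old key no longer verify.
if [ "$old_sig" != "$new_sig" ]; then echo "old SAS token is now invalid"; fi
```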

&lt;h2&gt;
  
  
  Rotate access keys
&lt;/h2&gt;

&lt;p&gt;1.From the Azure portal home page, in the search box, enter &lt;strong&gt;storage accounts&lt;/strong&gt;.&lt;br&gt;
2.Select &lt;strong&gt;storage accounts&lt;/strong&gt; under services.&lt;br&gt;
3.Select the storage account you created in the &lt;strong&gt;Prepare&lt;/strong&gt; exercise.&lt;br&gt;
4.Expand the &lt;strong&gt;Security + networking&lt;/strong&gt; submenu.&lt;br&gt;
5.Select &lt;strong&gt;Access keys&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgf86876nas2qfhfsq60h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgf86876nas2qfhfsq60h.png" alt="access" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6.For Key 1, select &lt;strong&gt;Rotate key&lt;/strong&gt;.&lt;br&gt;
7.Read and then acknowledge the warning about regenerating the access key by selecting &lt;strong&gt;Yes&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftj3t9b5wq9zxoarroxy0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftj3t9b5wq9zxoarroxy0.png" alt="rotatekey" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;8.Once you see the success message for rotating the access key, go back to the window or tab you used to check the SAS token and refresh the page. You should receive an authentication failed error.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2f3k110plt3q7b6c4iwp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2f3k110plt3q7b6c4iwp.png" alt="authentification" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;
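&lt;p&gt;Key rotation is itself a one-liner in the CLI (a sketch; &lt;code&gt;primary&lt;/code&gt; corresponds to the portal’s key1):&lt;/p&gt;

```shell
# Regenerate the primary access key; every SAS token signed with it
# stops working as soon as the new key is in place.
az storage account keys renew --account-name "$STORAGE_ACCOUNT" \
  --resource-group guided-project-rg --key primary
```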

&lt;p&gt;Congratulations! You’ve completed this exercise. Return to Microsoft Learn to continue the guided project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaway
&lt;/h2&gt;

&lt;p&gt;By creating a storage container, uploading files, configuring file shares, and generating shared access signature tokens, you’ve learned how to manage and secure storage resources in &lt;strong&gt;Microsoft Azure&lt;/strong&gt;. You also explored how to revoke access by rotating access keys, an essential security practice. These hands-on tasks highlight the importance of balancing accessibility with strong access control when managing cloud-based storage systems.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>cloudcomputing</category>
      <category>blobstorage</category>
      <category>cloudsecurity</category>
    </item>
  </channel>
</rss>
