<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yasir Rehman</title>
    <description>The latest articles on DEV Community by Yasir Rehman (@theyasirr).</description>
    <link>https://dev.to/theyasirr</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1010045%2F7ac54fe4-2328-4580-a38f-6ea5e9fe2273.jpg</url>
      <title>DEV Community: Yasir Rehman</title>
      <link>https://dev.to/theyasirr</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/theyasirr"/>
    <language>en</language>
    <item>
      <title>Comprehensive Guide to 50+ Kubernetes Resources</title>
      <dc:creator>Yasir Rehman</dc:creator>
      <pubDate>Mon, 20 Jan 2025 21:56:08 +0000</pubDate>
      <link>https://dev.to/theyasirr/comprehensive-guide-to-50-kubernetes-resources-34k</link>
      <guid>https://dev.to/theyasirr/comprehensive-guide-to-50-kubernetes-resources-34k</guid>
      <description>&lt;p&gt;Kubernetes is a powerful container orchestration platform that enables easy deployment, management, and scaling of applications. At the heart of Kubernetes are its resources—the fundamental building blocks that define and manage the behavior of your applications, networking, and storage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why are they important?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;- Foundation of Kubernetes operations:&lt;/strong&gt; Every Kubernetes action involves resources, making them essential for mastering the platform.&lt;br&gt;
&lt;strong&gt;- Customizable and extensible:&lt;/strong&gt; These resources can be tailored to meet your application's unique requirements, ensuring flexibility and scalability.&lt;br&gt;
&lt;strong&gt;- Interconnected system:&lt;/strong&gt; Understanding how these resources interact helps you design robust and reliable systems in Kubernetes.&lt;/p&gt;

&lt;p&gt;This article provides a clear overview of each resource, including:&lt;br&gt;
✅ Resource Name&lt;br&gt;
✅ Short Names&lt;br&gt;
✅ API Version&lt;br&gt;
✅ Namespaced&lt;br&gt;
✅ A brief explanation of what each resource does!&lt;br&gt;
Find an easy-to-read document on my LinkedIn post about "&lt;a href="https://www.linkedin.com/feed/update/urn:li:activity:7286632802219945984/" rel="noopener noreferrer"&gt;Comprehensive Guide to 50+ Kubernetes Resources&lt;/a&gt;".&lt;/p&gt;

&lt;h2&gt;
  
  
  1. APIServices
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: APIService&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: apiregistration.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Registers an extension (aggregated) API server with the main Kubernetes API server so that its APIs are served as part of the cluster's API.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. CertificateSigningRequests
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: CertificateSigningRequest (csr)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: certificates.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Requests a signed certificate from a Certificate Authority within the Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. ClusterRoleBindings
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: ClusterRoleBinding&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: rbac.authorization.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Grants permissions to users, service accounts, or other groups at the cluster level by binding them to a ClusterRole.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. ClusterRoles
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: ClusterRole&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: rbac.authorization.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Defines a set of permissions that can be granted to users or groups at the cluster level.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. ComponentStatuses
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: ComponentStatus (cs)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Provides information about the health and status of core control-plane components. Deprecated in recent Kubernetes versions.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. ConfigMaps
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: ConfigMap (cm)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Stores non-sensitive key-value data that can be mounted as environment variables or volumes in containers.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. ControllerRevisions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: ControllerRevision&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: apps/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Stores immutable snapshots of controller state, used by DaemonSets and StatefulSets to roll back to previous revisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. CronJobs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: CronJob (cj)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: batch/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Schedules jobs to run periodically on a specified schedule (e.g., daily, weekly).&lt;/p&gt;

&lt;h2&gt;
  
  
  9. CSIDrivers
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: CSIDriver&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: storage.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Represents a Container Storage Interface (CSI) driver, which allows for the integration of third-party storage systems with Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  10. CSINodes
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: CSINode&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: storage.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Represents the registration of a CSI driver on a specific node within the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  11. CSIStorageCapacities
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: CSIStorageCapacity&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: storage.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Publishes the storage capacity available to a CSI driver in a given topology segment, helping the scheduler place pods that use volumes.&lt;/p&gt;

&lt;h2&gt;
  
  
  12. CustomResourceDefinitions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: CustomResourceDefinition (crd, crds)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: apiextensions.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Defines custom resources that extend the Kubernetes API, allowing users to create and manage their own application-specific objects.&lt;/p&gt;

&lt;h2&gt;
  
  
  13. DaemonSets
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: DaemonSet (ds)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: apps/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Ensures that a copy of a pod runs on every node (or a selected subset of nodes) in the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  14. Deployments
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: Deployment (deploy)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: apps/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Manages the deployment and updates of pods and ReplicaSets. Provides features like rolling updates, rollback, and health checks.&lt;/p&gt;

&lt;h2&gt;
  
  
  15. Endpoints
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: Endpoints (ep)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Represents the current endpoints (IP addresses and ports) for a Service.&lt;/p&gt;

&lt;h2&gt;
  
  
  16. EndpointSlices
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: EndpointSlice&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: discovery.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Divides large sets of endpoints into smaller subsets for efficient service discovery in Kubernetes. Improves scalability and performance for services with many pods.&lt;/p&gt;

&lt;h2&gt;
  
  
  17. Events
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: Event (ev)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Records events that occur within the Kubernetes cluster, such as pod creation, deletion, and scheduling failures.&lt;/p&gt;

&lt;h2&gt;
  
  
  18. FlowSchemas
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: FlowSchema&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: flowcontrol.apiserver.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Defines a set of rules for limiting the rate of API requests to the Kubernetes API server.&lt;/p&gt;

&lt;h2&gt;
  
  
  19. HorizontalPodAutoscalers
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: HorizontalPodAutoscaler (hpa)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: autoscaling/v2&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Automatically scales the number of pods in a Deployment or ReplicaSet based on observed CPU utilization or other metrics.&lt;/p&gt;
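&lt;p&gt;A sketch of an HPA using the stable &lt;code&gt;autoscaling/v2&lt;/code&gt; API, targeting a hypothetical Deployment named &lt;code&gt;web&lt;/code&gt;:&lt;/p&gt;

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa             # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web               # placeholder target
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale when average CPU exceeds 70%
```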

&lt;h2&gt;
  
  
  20. IngressClasses
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: IngressClass&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: networking.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Defines a set of configuration parameters that can be used by Ingress controllers to configure their behavior.&lt;/p&gt;

&lt;h2&gt;
  
  
  21. Ingresses
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: Ingress (ing)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: networking.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.&lt;/p&gt;
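&lt;p&gt;An illustrative Ingress that routes traffic for a hypothetical host to a backend Service named &lt;code&gt;web&lt;/code&gt; (host, class, and names are placeholders):&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress          # placeholder name
spec:
  ingressClassName: nginx    # assumes an NGINX ingress controller
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```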

&lt;h2&gt;
  
  
  22. Jobs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: Job&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: batch/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Creates one or more pods and ensures that a specified number of them complete successfully. Often used for one-time tasks or batch processing.&lt;/p&gt;
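&lt;p&gt;A minimal Job manifest for a one-off task (the name, image, and command are placeholders):&lt;/p&gt;

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task        # placeholder name
spec:
  backoffLimit: 3           # retry failed pods up to 3 times
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: task
          image: busybox:1.36
          command: ["sh", "-c", "echo done"]
```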

&lt;h2&gt;
  
  
  23. LimitRanges
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: LimitRange (limits)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Defines minimum and maximum resource limits for containers that are created in a namespace.&lt;/p&gt;

&lt;h2&gt;
  
  
  24. LocalSubjectAccessReview
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: LocalSubjectAccessReview&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: authorization.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Checks whether a user or group has specific permissions within a particular namespace.&lt;/p&gt;

&lt;h2&gt;
  
  
  25. MutatingWebhookConfigurations
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: MutatingWebhookConfiguration&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: admissionregistration.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Defines a set of webhooks that are called to modify objects before they are created or modified in the Kubernetes API server.&lt;/p&gt;

&lt;h2&gt;
  
  
  26. Namespaces
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: Namespace (ns)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Provides a way to divide a single cluster into multiple virtual clusters, isolating resources and permissions.&lt;/p&gt;

&lt;h2&gt;
  
  
  27. NetworkPolicies
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: NetworkPolicy (netpol)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: networking.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Controls network traffic between pods within a namespace and between pods and external entities.&lt;/p&gt;
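&lt;p&gt;A common starting point is a default-deny policy; this sketch blocks all inbound traffic to every pod in its namespace:&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # placeholder name
spec:
  podSelector: {}              # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress                  # deny all ingress; add allow rules separately
```

&lt;p&gt;Note that NetworkPolicies only take effect when the cluster's network plugin supports them.&lt;/p&gt;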

&lt;h2&gt;
  
  
  28. Nodes
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: Node (no)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Represents a worker machine in the Kubernetes cluster, where pods are scheduled and executed.&lt;/p&gt;

&lt;h2&gt;
  
  
  29. PersistentVolumeClaims
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: PersistentVolumeClaim (pvc)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Represents a request for persistent storage by a user. PVCs describe the desired characteristics of the storage (e.g., size, access modes).&lt;/p&gt;

&lt;h2&gt;
  
  
  30. PersistentVolumes
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: PersistentVolume (PV)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Represents a piece of persistent storage in the cluster that has been provisioned by an administrator or dynamically provisioned using a storage class.&lt;/p&gt;

&lt;h2&gt;
  
  
  31. PodDisruptionBudgets
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: PodDisruptionBudget (pdb)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: policy/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Protects applications from voluntary disruptions (e.g., node drains during maintenance) by ensuring that a minimum number of pods of a given type remains available.&lt;/p&gt;

&lt;h2&gt;
  
  
  32. Pods
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: Pod (po)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: The smallest and most atomic unit in Kubernetes. Represents a running container or a set of co-located containers that share resources and a common lifecycle.&lt;/p&gt;

&lt;h2&gt;
  
  
  33. PodTemplates
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: PodTemplate&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: A blueprint for creating pods. Used within higher-level objects like Deployments, ReplicaSets, and Jobs.&lt;/p&gt;

&lt;h2&gt;
  
  
  34. PriorityClasses
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: PriorityClass (pc)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: scheduling.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Defines a priority level for pods, influencing their scheduling decisions. Higher priority pods are more likely to be scheduled on available nodes.&lt;/p&gt;

&lt;h2&gt;
  
  
  35. PriorityLevelConfigurations
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: PriorityLevelConfiguration&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: flowcontrol.apiserver.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Defines the concurrency limits and queuing behavior applied to API requests that FlowSchemas assign to a priority level; part of API Priority and Fairness.&lt;/p&gt;

&lt;h2&gt;
  
  
  36. Profiles
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: Profile&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: autoscaling.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Defines a set of resource requests and limits that can be applied to pods within a namespace. Used by horizontal pod autoscalers to determine scaling boundaries.&lt;/p&gt;

&lt;h2&gt;
  
  
  37. ReplicaSets
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: ReplicaSet (rs)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: apps/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Ensures that a specified number of pod replicas are running at any given time. Often used as a building block for higher-level controllers like Deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  38. ReplicationControllers
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: ReplicationController (rc)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: An older mechanism for ensuring a fixed number of pod replicas. Largely superseded by ReplicaSets.&lt;/p&gt;

&lt;h2&gt;
  
  
  39. ResourceQuotas
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: ResourceQuota (quota)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Limits the amount of resources (CPU, memory, storage) that can be consumed by users or teams within a namespace.&lt;/p&gt;
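&lt;p&gt;An illustrative ResourceQuota capping total requests and pod count in a namespace (name and limits are placeholders):&lt;/p&gt;

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota           # placeholder name
spec:
  hard:
    requests.cpu: "4"        # total CPU requested across the namespace
    requests.memory: 8Gi
    pods: "20"               # maximum number of pods
```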

&lt;h2&gt;
  
  
  40. RoleBindings
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: RoleBinding&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: rbac.authorization.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Grants permissions to users, service accounts, or other groups by binding them to a specific Role.&lt;/p&gt;

&lt;h2&gt;
  
  
  41. Roles
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: Role&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: rbac.authorization.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Defines a set of permissions that can be granted to users or groups within a namespace.&lt;/p&gt;
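&lt;p&gt;Roles are typically paired with RoleBindings; this sketch grants a hypothetical service account read access to pods (all names are placeholders):&lt;/p&gt;

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader           # placeholder name
rules:
  - apiGroups: [""]          # "" refers to the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods            # placeholder name
subjects:
  - kind: ServiceAccount
    name: app-sa             # placeholder service account
    namespace: default       # placeholder namespace
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```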

&lt;h2&gt;
  
  
  42. RuntimeClasses
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: RuntimeClass&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: node.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Specifies the runtime environment (e.g., container runtime, security settings) for containers within a pod.&lt;/p&gt;

&lt;h2&gt;
  
  
  43. Secrets
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: Secret&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Stores sensitive information securely within the Kubernetes cluster, such as passwords, API keys, and certificates.&lt;/p&gt;

&lt;h2&gt;
  
  
  44. SelfSubjectAccessReview
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: SelfSubjectAccessReview&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: authorization.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Allows a user or service account to check whether they themselves can perform a given action.&lt;/p&gt;

&lt;h2&gt;
  
  
  45. SelfSubjectRulesReview
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: SelfSubjectRulesReview&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: authorization.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Allows a user or service account to list the set of actions they can perform within a given namespace.&lt;/p&gt;

&lt;h2&gt;
  
  
  46. ServiceAccounts
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: ServiceAccount (sa)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Represents a service account within a namespace, which can be used to authenticate and authorize access to Kubernetes resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  47. Services
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: Service (svc)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Defines a logical set of Pods and a policy for accessing them. Enables services to be discovered and accessed by other services within or outside the cluster.&lt;/p&gt;
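&lt;p&gt;A minimal Service manifest routing port 80 to pods labelled &lt;code&gt;app: web&lt;/code&gt; (names, labels, and ports are placeholders):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                  # placeholder name
spec:
  selector:
    app: web                 # matches pods carrying this label
  ports:
    - port: 80               # port exposed by the Service
      targetPort: 8080       # port the container listens on
```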

&lt;h2&gt;
  
  
  48. StatefulSets
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: StatefulSet (sts)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: apps/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: Yes&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Manages the deployment and scaling of stateful applications that require stable, persistent identifiers for each pod.&lt;/p&gt;

&lt;h2&gt;
  
  
  49. StorageClasses
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: StorageClass (sc)&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: storage.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Provides a way for administrators to define and manage different types of storage (e.g., cloud storage, local storage) that can be used by PersistentVolumes.&lt;/p&gt;

&lt;h2&gt;
  
  
  50. SubjectAccessReview
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: SubjectAccessReview&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: authorization.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Checks whether a particular user, group, or service account can perform a given action.&lt;/p&gt;

&lt;h2&gt;
  
  
  51. TokenReview
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: TokenReview&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: authentication.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Validates an authentication token and returns information about the user associated with the token.&lt;/p&gt;

&lt;h2&gt;
  
  
  52. ValidatingWebhookConfigurations
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: ValidatingWebhookConfiguration&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: admissionregistration.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Defines a set of webhooks that are called to validate objects before they are created or modified in the Kubernetes API server.&lt;/p&gt;

&lt;h2&gt;
  
  
  53. VolumeAttachments
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;: VolumeAttachment&lt;br&gt;
&lt;strong&gt;API Version&lt;/strong&gt;: storage.k8s.io/v1&lt;br&gt;
&lt;strong&gt;Name-spaced&lt;/strong&gt;: No (Cluster-scoped)&lt;br&gt;
&lt;strong&gt;Explanation&lt;/strong&gt;: Records the attachment of a specified volume to a specific node. This object is created and managed automatically by the Kubernetes system.&lt;/p&gt;

&lt;p&gt;To discover the full list of resources available in your Kubernetes cluster, use the &lt;code&gt;kubectl api-resources&lt;/code&gt; command. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remember&lt;/strong&gt; that production clusters often include add-on components (e.g., Istio, Traefik, Prometheus, Grafana, Fluentd, Falco, etc.) that extend the core set of resources listed here.&lt;/p&gt;

&lt;p&gt;Find an easy-to-read document on my LinkedIn post about "&lt;a href="https://www.linkedin.com/feed/update/urn:li:activity:7286632802219945984/" rel="noopener noreferrer"&gt;Comprehensive Guide to 50+ Kubernetes Resources&lt;/a&gt;".&lt;/p&gt;

&lt;p&gt;I post about DevOps, cloud-native, and compassionate leadership. You can reach out to me on &lt;a href="https://www.linkedin.com/in/yasirrehman1/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>webdev</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>Backing Up and Restoring the System—Best Practices</title>
      <dc:creator>Yasir Rehman</dc:creator>
      <pubDate>Sun, 08 Dec 2024 16:06:13 +0000</pubDate>
      <link>https://dev.to/theyasirr/backing-up-and-restoring-the-system-best-practices-5f1p</link>
      <guid>https://dev.to/theyasirr/backing-up-and-restoring-the-system-best-practices-5f1p</guid>
      <description>&lt;p&gt;&lt;strong&gt;Backup&lt;/strong&gt; is the process of copying and storing data securely to prevent loss.&lt;br&gt;
&lt;strong&gt;Recovery&lt;/strong&gt; is the process of restoring this data after a failure or disaster.&lt;/p&gt;

&lt;h2&gt;
  
  
  How is it important?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prevents data loss:&lt;/strong&gt; Protects against accidental deletion, hardware failures, or cyberattacks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ensures business continuity:&lt;/strong&gt; Minimizes downtime during disasters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance:&lt;/strong&gt; Some industries mandate regular backups for security and audit purposes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Consider &lt;strong&gt;RTO&lt;/strong&gt; and &lt;strong&gt;RPO&lt;/strong&gt; factors to strategize backup and recovery.&lt;/p&gt;

&lt;h2&gt;
  
  
  RTO
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Recovery Time Objective (RTO)&lt;/strong&gt; is the maximum acceptable duration your system can be down after a failure before it impacts business operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How can I calculate RTO for my systems?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Identify critical systems and processes&lt;/em&gt; for your business operations.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Perform a Business Impact Analysis (BIA)&lt;/em&gt; including financial losses, operational disruptions, and reputational damage.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Determine Maximum Tolerable Downtime (MTD)&lt;/em&gt; of each critical system before it causes significant harm.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Evaluate recovery resources&lt;/em&gt; (personnel, technology, and processes) required to restore each critical system.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Calculate recovery time&lt;/em&gt; to acquire or create the necessary resources and complete the recovery process for each system.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Set the RTO&lt;/em&gt; by weighing the estimated recovery time against the MTD for each critical system.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, if a critical application has an MTD of 4 hours and recovery takes 2 hours, you might set an RTO of 3 hours: long enough to complete recovery while staying safely within the MTD.&lt;/p&gt;

&lt;h2&gt;
  
  
  RPO
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Recovery Point Objective (RPO)&lt;/strong&gt; is the maximum acceptable amount of data loss measured in time (e.g. last 5 mins, 1 hour).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How can I calculate RPO for my systems?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Identify critical data&lt;/em&gt; that your business operations depend on and that cannot be lost without significant impact.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Assess data change rate&lt;/em&gt;: determine how frequently your critical data changes.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Evaluate data loss tolerance&lt;/em&gt;: determine how much data loss your business can tolerate. This is often measured in time (e.g., hours or minutes).&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Set backup frequency&lt;/em&gt;: based on your data change rate and loss tolerance, decide how often you need to perform backups.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Implement&lt;/em&gt; the backup strategy and regularly &lt;em&gt;test&lt;/em&gt; it to ensure it meets your RPO requirements.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, if your critical data changes every hour and you can tolerate up to 2 hours of data loss, your RPO would be 2 hours, meaning you need to back up your data at least every 2 hours.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of backups
&lt;/h2&gt;

&lt;p&gt;Some common types of backups are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Full backup:&lt;/em&gt; A complete copy of all data. Longer time, more storage needed.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Incremental backup:&lt;/em&gt; Backs up only the data that has changed since the last full or incremental backup. Faster, uses less storage.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Differential backup:&lt;/em&gt; Backs up all data that has changed since the last full backup. Balances speed and storage usage.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Methods of backups
&lt;/h2&gt;

&lt;p&gt;Some common methods for backing up the systems are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Manual backup:&lt;/em&gt; Human intervention is required to initiate and monitor the backup process.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Local automated backup:&lt;/em&gt; Backups are automated and stored locally on the same device or network.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Remote automated backup:&lt;/em&gt; Backups are automated and stored remotely, often in a cloud-based storage solution.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Recommendations
&lt;/h2&gt;

&lt;p&gt;A few recommendations for backing up and restoring the system are:&lt;br&gt;
&lt;strong&gt;The 3-2-1 Rule:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3 total copies of your data&lt;/li&gt;
&lt;li&gt;2 different storage types (e.g., local and cloud)&lt;/li&gt;
&lt;li&gt;1 copy offsite&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This rule is also known as “the rule of three”. Depending on your needs, consider variations like 3-1-2, 3-2-2, or 3-2-3.&lt;/p&gt;
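&lt;p&gt;As a rough sketch of the 3-2-1 flow, the script below archives a data directory into two separate targets (a second local medium and an "offsite" location, here simulated with throwaway paths so it runs anywhere; in practice the targets would be a different disk and a remote store):&lt;/p&gt;

```shell
#!/bin/sh
# Minimal 3-2-1 sketch: copy 1 is the live data itself,
# copy 2 a local archive, copy 3 an offsite replica.
# All paths here are demo placeholders created under mktemp.
set -eu
WORK=$(mktemp -d)
SRC="$WORK/data"; LOCAL="$WORK/local"; OFFSITE="$WORK/offsite"
mkdir -p "$SRC" "$LOCAL" "$OFFSITE"
echo "example" > "$SRC/file.txt"

ARCHIVE="backup-$(date +%Y%m%d).tar.gz"
tar -czf "$LOCAL/$ARCHIVE" -C "$WORK" data    # copy 2: local archive
cp "$LOCAL/$ARCHIVE" "$OFFSITE/$ARCHIVE"      # copy 3: offsite replica
echo "wrote $LOCAL/$ARCHIVE and $OFFSITE/$ARCHIVE"
```

&lt;p&gt;In a real setup the offsite step would push to object storage or a remote host, and the script would run on a schedule with restore tests alongside it.&lt;/p&gt;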

&lt;p&gt;&lt;strong&gt;Continuous backup check&lt;/strong&gt;&lt;br&gt;
Regularly test your backups to ensure they are functioning correctly, and that data can be restored without issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Clear and documented backup strategy&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define responsibilities: Who handles backups?&lt;/li&gt;
&lt;li&gt;Set schedules: When do backups occur?&lt;/li&gt;
&lt;li&gt;Test recovery plans: Practice restoring to ensure readiness during disasters.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's it for today!&lt;/p&gt;

&lt;p&gt;You can find a document on my LinkedIn post about "&lt;a href="https://www.linkedin.com/feed/update/urn:li:activity:7271547157055225856/" rel="noopener noreferrer"&gt;Backing Up and Restoring the System&lt;/a&gt;".&lt;/p&gt;

&lt;p&gt;I post about DevOps, engineering excellence, and compassionate leadership. You can reach out to me on &lt;a href="https://www.linkedin.com/in/yasirrehman1/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloud</category>
      <category>architecture</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Understanding Kubernetes Services</title>
      <dc:creator>Yasir Rehman</dc:creator>
      <pubDate>Sun, 01 Dec 2024 21:25:57 +0000</pubDate>
      <link>https://dev.to/theyasirr/kubernetes-services-simplified-627</link>
      <guid>https://dev.to/theyasirr/kubernetes-services-simplified-627</guid>
      <description>&lt;p&gt;A &lt;strong&gt;Kubernetes Service&lt;/strong&gt; is an abstract way to expose an application running on a cluster. It provides a stable network endpoint for your application, even if the underlying pods are constantly changing.&lt;/p&gt;

&lt;p&gt;Kubernetes offers the following &lt;strong&gt;types of Services&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;ClusterIP&lt;/li&gt;
&lt;li&gt;NodePort&lt;/li&gt;
&lt;li&gt;LoadBalancer&lt;/li&gt;
&lt;li&gt;ExternalName&lt;/li&gt;
&lt;li&gt;Headless&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  ClusterIP
&lt;/h2&gt;

&lt;p&gt;The ClusterIP service is the default Kubernetes service type. It assigns a cluster-internal IP address to the Service. Ideal for internal communication between Pods.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qm2a0mf9ezzlha9qf4i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qm2a0mf9ezzlha9qf4i.png" alt="ClusterIP Service in Kubernetes" width="800" height="819"&gt;&lt;/a&gt;&lt;/p&gt;
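&lt;p&gt;As a minimal sketch (names, labels, and ports below are hypothetical), a ClusterIP Service manifest looks like:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # hypothetical name
spec:
  type: ClusterIP       # default type; may be omitted
  selector:
    app: my-app         # routes traffic to Pods with this label
  ports:
    - port: 80          # Service port inside the cluster
      targetPort: 8080  # container port on the Pods
```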

&lt;h2&gt;
  
  
  NodePort
&lt;/h2&gt;

&lt;p&gt;A NodePort service exposes the application on a specific port on each node's IP address. It forwards traffic from that port to the underlying pods. Useful for development and testing, allowing access without a full load balancer setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkl248jg7x6hmqac03k9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkl248jg7x6hmqac03k9.jpg" alt="NodePort Service in Kubernetes" width="800" height="682"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  LoadBalancer
&lt;/h2&gt;

&lt;p&gt;A LoadBalancer service creates an external load balancer that routes traffic to the service's Pods. It is typically used in cloud environments where load balancing is supported. Ideal for production applications that need to handle high traffic efficiently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66xsr93ulqeq54ghcyxl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66xsr93ulqeq54ghcyxl.png" alt="Loadbalancer Service in Kubernetes" width="800" height="781"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ExternalName
&lt;/h2&gt;

&lt;p&gt;An ExternalName service allows you to connect to services that are outside your Kubernetes cluster by mapping the Service to an external DNS name. Useful for integrating with external APIs, and it lets Kubernetes applications interact with resources that are not managed within the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn64si0iorsw0be19zfx2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn64si0iorsw0be19zfx2.png" alt="External Service in Kubernetes" width="800" height="663"&gt;&lt;/a&gt;&lt;/p&gt;
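&lt;p&gt;A Service of type &lt;code&gt;ExternalName&lt;/code&gt; (the manifest form of the service type described above) needs no selector; the name and external host below are hypothetical:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-api             # hypothetical name
spec:
  type: ExternalName
  externalName: api.example.com  # DNS name outside the cluster (hypothetical)
```

&lt;p&gt;Cluster-internal DNS lookups of this Service return a CNAME record pointing at the external name.&lt;/p&gt;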

&lt;h2&gt;
  
  
  Headless
&lt;/h2&gt;

&lt;p&gt;A Headless service is a service without a ClusterIP. It enables direct access to the Pods without load balancing. Useful for stateful applications where you want to connect directly to specific Pods.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqo9fr0xmtxircxwvhnb4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqo9fr0xmtxircxwvhnb4.png" alt="Headless Service in Kubernetes" width="800" height="681"&gt;&lt;/a&gt;&lt;/p&gt;
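&lt;p&gt;A headless Service is declared by setting &lt;code&gt;clusterIP: None&lt;/code&gt;; this sketch uses hypothetical names and ports:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-db          # hypothetical name
spec:
  clusterIP: None      # "None" makes the Service headless
  selector:
    app: my-db
  ports:
    - port: 5432
```

&lt;p&gt;DNS queries for a headless Service return the individual Pod IPs rather than a single virtual IP, which is why it suits stateful workloads.&lt;/p&gt;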

&lt;p&gt;That's it for today!&lt;/p&gt;

&lt;p&gt;Link to my LinkedIn post on "&lt;a href="https://www.linkedin.com/posts/yasirrehman1_services-in-kubernetes-activity-7263694949966073856-_-6q?utm_source=share&amp;amp;utm_medium=member_desktop" rel="noopener noreferrer"&gt;Understanding Kubernetes Services&lt;/a&gt;". &lt;/p&gt;

&lt;p&gt;I post about DevOps, engineering excellence, and compassionate leadership. You can reach out to me on &lt;a href="https://www.linkedin.com/in/yasirrehman1/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloudnative</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Docker Mastery: A Comprehensive Guide for Beginners and Pros</title>
      <dc:creator>Yasir Rehman</dc:creator>
      <pubDate>Fri, 07 Jun 2024 21:12:20 +0000</pubDate>
      <link>https://dev.to/theyasirr/docker-mastery-a-comprehensive-guide-for-beginners-and-pros-2p18</link>
      <guid>https://dev.to/theyasirr/docker-mastery-a-comprehensive-guide-for-beginners-and-pros-2p18</guid>
      <description>&lt;p&gt;Docker is a powerful platform that simplifies the creation, deployment, and management of applications within lightweight, portable containers. It allows developers to package applications and their dependencies into a standardized unit for seamless development and deployment. Docker enhances efficiency, scalability, and collaboration across different environments, making it an essential tool for modern software development and DevOps practices.&lt;br&gt;
We'll delve into every aspect of Docker, from installation and configuration to mastering images, storage, networking, and security.&lt;/p&gt;
&lt;h2&gt;
  
  
  Installation and configuration
&lt;/h2&gt;

&lt;p&gt;Basic guides for installing Docker Community Edition (CE) on CentOS and Ubuntu are given below.&lt;br&gt;
&lt;strong&gt;Install Docker CE on CentOS&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Install the required packages:&lt;br&gt;
&lt;code&gt;sudo yum install -y yum-utils device-mapper-persistent-data lvm2&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add Docker CE yum repository:&lt;br&gt;
&lt;code&gt;sudo yum-config-manager --add-repo  https://download.docker.com/linux/centos/docker-ce.repo&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the Docker CE packages:&lt;br&gt;
&lt;code&gt;sudo yum install -y docker-ce-18.09.5 docker-ce-cli-18.09.5 containerd.io&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Start and enable Docker Service:&lt;br&gt;
&lt;code&gt;sudo systemctl start docker&lt;br&gt;
sudo systemctl enable docker&lt;/code&gt;&lt;br&gt;
Add the user to the docker group to grant permission to run Docker commands. The change takes effect at the user's next login.&lt;br&gt;
&lt;code&gt;sudo usermod -a -G docker USER&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Installing Docker CE on Ubuntu&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Install the required packages:&lt;br&gt;
&lt;code&gt;sudo apt-get update&lt;br&gt;
sudo apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the Docker repo's GNU Privacy Guard (GPG) key:&lt;br&gt;
&lt;code&gt;curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the Docker Ubuntu repository:&lt;br&gt;
&lt;code&gt;sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install packages:&lt;br&gt;
&lt;code&gt;sudo apt-get install -y docker-ce=5:18.09.5~3-0~ubuntu-bionic docker-ce-cli=5:18.09.5~3-0~ubuntu-bionic containerd.io&lt;/code&gt;&lt;br&gt;
Add the user to the docker group to grant permission to run Docker commands. The change takes effect at the user's next login.&lt;br&gt;
&lt;code&gt;sudo usermod -a -G docker USER&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Selecting a storage driver&lt;/strong&gt;&lt;br&gt;
A storage driver is a pluggable driver that handles internal storage for containers. The default driver for CentOS and Ubuntu systems is overlay2. &lt;br&gt;
To determine the current storage driver:&lt;br&gt;
&lt;code&gt;docker info | grep "Storage"&lt;/code&gt;&lt;br&gt;
One way to select a different storage driver is to pass the &lt;code&gt;--storage-driver&lt;/code&gt; flag over to the Docker daemon. The recommended method to set the storage driver is using the Daemon Config file.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create or edit the Daemon config file:&lt;br&gt;
&lt;code&gt;sudo vi /etc/docker/daemon.json&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the storage driver value:&lt;br&gt;
&lt;code&gt;"storage-driver": "overlay2"&lt;/code&gt;&lt;br&gt;
Remember to restart Docker after any changes, and then check the status.&lt;br&gt;
&lt;code&gt;sudo systemctl restart docker&lt;br&gt;
sudo systemctl status docker&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
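&lt;p&gt;For reference, the fragment above sits inside a complete &lt;code&gt;/etc/docker/daemon.json&lt;/code&gt;, e.g.:&lt;/p&gt;

```json
{
  "storage-driver": "overlay2"
}
```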

&lt;p&gt;&lt;strong&gt;Running a container&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;docker run IMAGE[:TAG] [COMMAND] [ARGS]&lt;/code&gt;&lt;br&gt;
&lt;code&gt;IMAGE&lt;/code&gt;: Specifies the image to run a container.&lt;br&gt;
&lt;code&gt;COMMAND and ARGS&lt;/code&gt;: Run a command inside the container. &lt;br&gt;
&lt;code&gt;TAG&lt;/code&gt;: Specifies the image tag or version&lt;br&gt;
&lt;code&gt;-d&lt;/code&gt;: Runs the container in detached mode.&lt;br&gt;
&lt;code&gt;--name NAME&lt;/code&gt;: Gives the container a specified name instead of the usual randomly assigned name.&lt;br&gt;
&lt;code&gt;--restart RESTART&lt;/code&gt;: Specifies when Docker should automatically restart the container. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;no&lt;/code&gt; (default): Never restart the container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;on-failure&lt;/code&gt;: Only if the container fails (exits with a non-zero exit code).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;always&lt;/code&gt;: Always restart the container whether it succeeds or fails. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;unless-stopped&lt;/code&gt;: Always restart the container whether it succeeds or fails, and on daemon startup unless the container is manually stopped.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;-p HOST_PORT:CONTAINER_PORT&lt;/code&gt;: Publish a container's port. The HOST_PORT is the port that listens on the host machine, and traffic to that port is mapped to the CONTAINER_PORT on the container. &lt;br&gt;
&lt;code&gt;--memory MEMORY&lt;/code&gt;: Set a hard limit on memory usage. &lt;br&gt;
&lt;code&gt;--memory-reservation MEMORY&lt;/code&gt;: Set a soft limit on memory usage.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -d --name nginx --restart unless-stopped -p 8080:80 --memory 500M --memory-reservation 256M nginx:latest&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Some of the commands for managing running containers are: &lt;br&gt;
&lt;code&gt;docker ps&lt;/code&gt;: List running containers. &lt;br&gt;
&lt;code&gt;docker ps -a&lt;/code&gt;: List all containers, including stopped containers. &lt;br&gt;
&lt;code&gt;docker container stop CONTAINER&lt;/code&gt; (alias: &lt;code&gt;docker stop&lt;/code&gt;): Stop a running container. &lt;br&gt;
&lt;code&gt;docker container start CONTAINER&lt;/code&gt; (alias: &lt;code&gt;docker start&lt;/code&gt;): Start a stopped container. &lt;br&gt;
&lt;code&gt;docker container rm CONTAINER&lt;/code&gt; (alias: &lt;code&gt;docker rm&lt;/code&gt;): Delete a container (it must be stopped first).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Upgrading the Docker Engine&lt;/strong&gt;&lt;br&gt;
Stop the Docker service:&lt;br&gt;
&lt;code&gt;sudo systemctl stop docker&lt;/code&gt;&lt;br&gt;
Install the required version of docker-ce and docker-ce-cli:&lt;br&gt;
&lt;code&gt;sudo apt-get install -y docker-ce=&amp;lt;new version&amp;gt; docker-ce-cli=&amp;lt;new version&amp;gt;&lt;/code&gt;&lt;br&gt;
Verify the current version&lt;br&gt;
&lt;code&gt;docker version&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Image creation, management, and registry
&lt;/h2&gt;

&lt;p&gt;An image is an executable package containing all the software needed to run a container.&lt;br&gt;
Run a container using an image with:&lt;br&gt;
&lt;code&gt;docker run IMAGE&lt;/code&gt;&lt;br&gt;
Download an image with:&lt;br&gt;
&lt;code&gt;docker pull IMAGE&lt;br&gt;
docker image pull IMAGE&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Images and containers use a layered file system. Each layer contains only the differences from the previous layer.&lt;br&gt;
View file system layers in an image with: &lt;br&gt;
&lt;code&gt;docker image history IMAGE&lt;/code&gt;&lt;br&gt;
A Dockerfile is a file that defines a series of directives and is used to build an image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Use the official Nginx base image
FROM nginx:latest

# Set an environment variable
ENV MY_VAR=my_value

# Copy custom configuration file to container
COPY nginx.conf /etc/nginx/nginx.conf

# Run some commands during the build process
RUN apt-get update &amp;amp;&amp;amp; apt-get install -y curl

# Expose port 80 for incoming traffic
EXPOSE 80

# Start Nginx server when the container starts
CMD ["nginx", "-g", "daemon off;"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Build an image:&lt;br&gt;
&lt;code&gt;docker build -t TAG_NAME DOCKERFILE_LOCATION&lt;/code&gt;&lt;br&gt;
Dockerfile directives:&lt;br&gt;
&lt;code&gt;FROM&lt;/code&gt;: Specifies the base image to use for the Docker image being built. It defines the starting point for the image and can be any valid image available on Docker Hub or a private registry.&lt;br&gt;
&lt;code&gt;ENV&lt;/code&gt;: Sets environment variables within the image. These variables are accessible during the build process and when the container is running.&lt;br&gt;
&lt;code&gt;COPY or ADD&lt;/code&gt;: Copies files and directories from the build context (the directory where the Dockerfile is located) into the image. COPY is generally preferred for simple file copying, while ADD supports additional features such as unpacking archives.&lt;br&gt;
&lt;code&gt;RUN&lt;/code&gt;: Executes commands during the build process. You can use RUN to install dependencies, run scripts, or perform any other necessary tasks.&lt;br&gt;
&lt;code&gt;EXPOSE&lt;/code&gt;: Informs Docker that the container will listen on the specified network ports at runtime. It does not publish the ports to the host machine or make the container accessible from outside.&lt;br&gt;
&lt;code&gt;CMD or ENTRYPOINT&lt;/code&gt;: Specifies the command to run when a container is started from the image. CMD provides default arguments that can be overridden, while ENTRYPOINT specifies a command that cannot be overridden.&lt;br&gt;
&lt;code&gt;WORKDIR&lt;/code&gt;: Sets the working directory for any subsequent RUN, CMD, ENTRYPOINT, COPY, or ADD instructions.&lt;br&gt;
&lt;code&gt;STOPSIGNAL&lt;/code&gt;: Sets a custom signal that will be used to stop the container process.&lt;br&gt;
&lt;code&gt;HEALTHCHECK&lt;/code&gt;: Sets a command that the Docker daemon will use to check whether the container is healthy.&lt;/p&gt;
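&lt;p&gt;A short, hypothetical fragment illustrating &lt;code&gt;HEALTHCHECK&lt;/code&gt; alongside &lt;code&gt;ENTRYPOINT&lt;/code&gt; and &lt;code&gt;CMD&lt;/code&gt;:&lt;/p&gt;

```dockerfile
FROM nginx:latest

# The daemon runs this check periodically; a non-zero exit marks the
# container unhealthy (assumes curl is available in the image)
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost/ || exit 1

# ENTRYPOINT fixes the executable; CMD supplies default arguments
# that `docker run IMAGE <args>` can override
ENTRYPOINT ["nginx"]
CMD ["-g", "daemon off;"]
```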

&lt;p&gt;A multi-stage build in a Dockerfile is a technique used to create more efficient and smaller Docker images. It involves defining multiple stages within the Dockerfile, each with its own set of instructions and dependencies.&lt;br&gt;
An example Dockerfile containing a multi-stage build definition is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build stage
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /app

# Copy and restore project dependencies
COPY *.csproj .
RUN dotnet restore

# Copy the entire project and build
COPY . .
RUN dotnet build -c Release --no-restore

# Publish the application
RUN dotnet publish -c Release -o /app/publish --no-restore

# Runtime stage
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS runtime
WORKDIR /app
COPY --from=build /app/publish .

# Expose the required port
EXPOSE 80

# Set the entry point for the application
ENTRYPOINT ["dotnet", "YourApplication.dll"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Managing images&lt;/strong&gt;&lt;br&gt;
Some key commands for image management are:&lt;br&gt;
List images on the system:&lt;br&gt;
&lt;code&gt;docker image ls&lt;/code&gt;&lt;br&gt;
List images on the system including intermediate images:&lt;br&gt;
&lt;code&gt;docker image ls -a&lt;/code&gt;&lt;br&gt;
Get detailed information about an image:&lt;br&gt;
&lt;code&gt;docker image inspect &amp;lt;IMAGE&amp;gt;&lt;/code&gt;&lt;br&gt;
Delete an image:&lt;br&gt;
&lt;code&gt;docker rmi &amp;lt;IMAGE&amp;gt;&lt;br&gt;
docker image rm &amp;lt;IMAGE&amp;gt;&lt;br&gt;
docker image rm -f &amp;lt;IMAGE&amp;gt;&lt;/code&gt;&lt;br&gt;
An image can only be deleted if no containers or other image tags reference it. Find and delete dangling or unused images:&lt;br&gt;
&lt;code&gt;docker image prune&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker registries&lt;/strong&gt;&lt;br&gt;
Docker Registry serves as a centralized repository for storing and sharing Docker images. Docker Hub is the default, publicly available registry managed by Docker. By utilizing the registry image, we can set up and manage our own private registry at no cost.&lt;br&gt;
Run a simple registry:&lt;br&gt;
&lt;code&gt;docker run -d -p 5000:5000 --restart=always --name registry registry:2&lt;/code&gt;&lt;br&gt;
Upload an image to a registry:&lt;br&gt;
&lt;code&gt;docker push &amp;lt;IMAGE&amp;gt;:&amp;lt;TAG&amp;gt;&lt;/code&gt;&lt;br&gt;
Download an image from a registry:&lt;br&gt;
&lt;code&gt;docker pull &amp;lt;IMAGE&amp;gt;:&amp;lt;TAG&amp;gt;&lt;/code&gt;&lt;br&gt;
Login to a registry:&lt;br&gt;
&lt;code&gt;docker login REGISTRY_URL&lt;/code&gt;&lt;br&gt;
There are two authentication methods for connecting to a private registry with an untrusted or self-signed certificate:&lt;br&gt;
&lt;strong&gt;Secure&lt;/strong&gt;: This involves adding the registry's public certificate to the /etc/docker/certs.d/ directory.&lt;br&gt;
&lt;strong&gt;Insecure&lt;/strong&gt;: This method entails adding the registry to the insecure-registries list in the daemon.json file or passing it to dockerd using the --insecure-registry flag.&lt;/p&gt;
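&lt;p&gt;For example, the insecure method corresponds to a &lt;code&gt;daemon.json&lt;/code&gt; entry like the following (the registry address is hypothetical):&lt;/p&gt;

```json
{
  "insecure-registries": ["registry.example.com:5000"]
}
```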

&lt;p&gt;&lt;strong&gt;Storage and volumes&lt;/strong&gt;&lt;br&gt;
The storage driver controls how images and containers are stored and managed on your Docker host. Docker supports several storage drivers, using a pluggable architecture.&lt;br&gt;
&lt;strong&gt;overlay2&lt;/strong&gt;: Preferred for all Linux distributions&lt;br&gt;
&lt;strong&gt;fuse-overlayfs&lt;/strong&gt;: Preferred only for running Rootless Docker (not Ubuntu or Debian 10)&lt;br&gt;
&lt;strong&gt;vfs&lt;/strong&gt;: Intended for testing purposes, and for situations where no copy-on-write filesystem can be used.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Storage models&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Filesystem storage:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data is stored in the form of regular files on the host disk&lt;/li&gt;
&lt;li&gt;Efficient use of memory&lt;/li&gt;
&lt;li&gt;Inefficient with write-heavy workloads&lt;/li&gt;
&lt;li&gt;Used by overlay2&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Block Storage:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stores data in blocks using special block storage devices&lt;/li&gt;
&lt;li&gt;Efficient with write-heavy workloads&lt;/li&gt;
&lt;li&gt;Used by btrfs and zfs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Object Storage:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stores data in an external object-based store&lt;/li&gt;
&lt;li&gt;Applications must be designed to use object-based storage.&lt;/li&gt;
&lt;li&gt;Flexible and scalable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Configuring the overlay2 storage driver&lt;/strong&gt;&lt;br&gt;
Stop Docker service:&lt;br&gt;
&lt;code&gt;sudo systemctl stop docker&lt;/code&gt;&lt;br&gt;
Create or edit the Daemon config file:&lt;br&gt;
&lt;code&gt;sudo vi /etc/docker/daemon.json&lt;/code&gt;&lt;br&gt;
Add/edit the storage driver value:&lt;br&gt;
&lt;code&gt;"storage-driver": "overlay2"&lt;/code&gt;&lt;br&gt;
Remember to restart Docker after any changes, and then check the status.&lt;br&gt;
&lt;code&gt;sudo systemctl restart docker&lt;br&gt;
sudo systemctl status docker&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Volumes&lt;/strong&gt;&lt;br&gt;
There are two different types of data mounts on Docker: &lt;br&gt;
&lt;strong&gt;Bind Mount&lt;/strong&gt;: Mounts a specific directory on the host to the container. It is useful for sharing configuration files and other data between the container and the host. &lt;br&gt;
&lt;strong&gt;Named Volume&lt;/strong&gt;: Mounts a directory to the container, but Docker controls the location of the volume on disk dynamically.&lt;br&gt;
There are different syntaxes for adding bind mounts or volumes to containers:&lt;br&gt;
&lt;em&gt;-v syntax&lt;/em&gt;&lt;br&gt;
Bind mount: The source begins with a forward slash "/", which makes this a bind mount.&lt;br&gt;
&lt;code&gt;docker run -v /opt/data:/tmp nginx&lt;/code&gt;&lt;br&gt;
Named volume: The source is just a string, which means this is a volume. It will be created automatically if no volume exists with the provided name. &lt;br&gt;
&lt;code&gt;docker run -v my-vol:/tmp nginx&lt;/code&gt;&lt;br&gt;
&lt;em&gt;--mount syntax&lt;/em&gt;&lt;br&gt;
Bind mount (&lt;code&gt;type=bind&lt;/code&gt; is required here, since &lt;code&gt;--mount&lt;/code&gt; defaults to a volume): &lt;br&gt;
&lt;code&gt;docker run --mount type=bind,source=/opt/data,destination=/tmp nginx&lt;/code&gt;&lt;br&gt;
Named volume: &lt;br&gt;
&lt;code&gt;docker run --mount source=my-vol,destination=/tmp nginx&lt;/code&gt;&lt;br&gt;
We can mount the same volume to multiple containers, allowing them to share data. We can also create and manage volumes by ourselves without running a container. &lt;/p&gt;

&lt;p&gt;Some common and useful commands: &lt;br&gt;
&lt;code&gt;docker volume create VOLUME&lt;/code&gt;: Creates a volume. &lt;br&gt;
&lt;code&gt;docker volume ls&lt;/code&gt;: Lists volumes. &lt;br&gt;
&lt;code&gt;docker volume inspect VOLUME&lt;/code&gt;: Inspects a volume. &lt;br&gt;
&lt;code&gt;docker volume rm VOLUME&lt;/code&gt;: Deletes a volume.&lt;/p&gt;
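&lt;p&gt;As a sketch of sharing a volume between containers (this assumes a running Docker daemon; names and paths are illustrative):&lt;/p&gt;

```shell
# Create a named volume and write to it from one container
docker volume create shared-data
docker run --rm -v shared-data:/data alpine sh -c 'echo hello > /data/msg'

# Read the same data from a second container
docker run --rm -v shared-data:/data alpine cat /data/msg

# Clean up
docker volume rm shared-data
```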

&lt;p&gt;&lt;strong&gt;Image Cleanup&lt;/strong&gt;&lt;br&gt;
Check Docker's disk usage:&lt;br&gt;
&lt;code&gt;docker system df&lt;/code&gt;&lt;br&gt;
&lt;code&gt;docker system df -v&lt;/code&gt;&lt;br&gt;
Delete unused or dangling images:&lt;br&gt;
&lt;code&gt;docker image prune&lt;br&gt;
docker image prune -a&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Docker networking
&lt;/h2&gt;

&lt;p&gt;The Docker Container Networking Model (CNM) is a conceptual model that describes the components and concepts of Docker networking. &lt;br&gt;
The CNM defines the following components: &lt;br&gt;
&lt;strong&gt;Sandbox&lt;/strong&gt;: An isolated unit containing all networking components associated with a single container. &lt;br&gt;
&lt;strong&gt;Endpoint&lt;/strong&gt;: Connects one sandbox to one network. &lt;br&gt;
&lt;strong&gt;Network&lt;/strong&gt;: A collection of endpoints that can communicate with each other. &lt;br&gt;
&lt;strong&gt;Network Driver&lt;/strong&gt;: A pluggable driver that provides a specific implementation of the CNM. &lt;br&gt;
&lt;strong&gt;IPAM Driver&lt;/strong&gt;: Provides IP address management. Allocates and assigns IP addresses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Built-In Network Drivers&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Host&lt;/strong&gt;: This driver connects the container directly to the host's networking stack. It provides no isolation between containers or between containers and the host.&lt;br&gt;
&lt;code&gt;docker run --net host nginx&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Bridge&lt;/strong&gt;: This driver uses virtual bridge interfaces to establish connections between containers running on the same host.&lt;br&gt;
&lt;code&gt;docker network create --driver bridge my-bridge-net &lt;br&gt;
docker run -d --network my-bridge-net nginx&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Overlay&lt;/strong&gt;: This driver uses a routing mesh to connect containers across multiple Docker hosts, usually in a Docker swarm.&lt;br&gt;
&lt;code&gt;docker network create --driver overlay my-overlay-net &lt;br&gt;
docker service create --network my-overlay-net nginx&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;MACVLAN&lt;/strong&gt;: This driver connects containers directly to the host's network interfaces but uses a special configuration to provide isolation.&lt;br&gt;
&lt;code&gt;docker network create -d macvlan --subnet 192.168.0.0/24 --gateway 192.168.0.1 -o parent=eth0 my-macvlan-net &lt;br&gt;
docker run -d --net my-macvlan-net nginx&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;None&lt;/strong&gt;: This driver provides sandbox isolation, but it does not provide any implementation for networking between containers or between containers and the host.&lt;br&gt;
&lt;code&gt;docker run --net none -d nginx&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating a Docker Bridge network&lt;/strong&gt;&lt;br&gt;
Bridge is the default driver, so any network created without specifying a driver will be a bridge network.&lt;br&gt;
Create a bridge network. &lt;br&gt;
&lt;code&gt;docker network create my-net&lt;/code&gt;&lt;br&gt;
Run a container on the bridge network. &lt;br&gt;
&lt;code&gt;docker run -d --network my-net nginx&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;By default, containers and services on the same network can communicate with each other simply by using their container or service names. Docker provides DNS resolution on the network that allows this to work. &lt;br&gt;
Supply a network alias to provide an additional name by which a container or service is reached.&lt;br&gt;
&lt;code&gt;docker run -d --network my-net --network-alias my-nginx-alias nginx&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Some useful commands for when one must interact with Docker networks are:&lt;br&gt;
&lt;code&gt;docker network ls&lt;/code&gt;: Lists networks.&lt;br&gt;
&lt;code&gt;docker network inspect NETWORK&lt;/code&gt;: Inspects a network.&lt;br&gt;
&lt;code&gt;docker network connect CONTAINER NETWORK&lt;/code&gt;: Connects a container to a network.&lt;br&gt;
&lt;code&gt;docker network disconnect CONTAINER NETWORK&lt;/code&gt;: Disconnects a container from a network.&lt;br&gt;
&lt;code&gt;docker network rm NETWORK&lt;/code&gt;: Deletes a network.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating a Docker Overlay Network&lt;/strong&gt;&lt;br&gt;
Create an overlay network: &lt;br&gt;
&lt;code&gt;docker network create --driver overlay NETWORK_NAME&lt;/code&gt;&lt;br&gt;
Create a service that uses the network: &lt;br&gt;
&lt;code&gt;docker service create --network NETWORK_NAME IMAGE&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Troubleshooting&lt;/strong&gt;&lt;br&gt;
View container logs:&lt;br&gt;
&lt;code&gt;docker logs CONTAINER&lt;/code&gt;&lt;br&gt;
View logs for all tasks of a service:&lt;br&gt;
&lt;code&gt;docker service logs SERVICE&lt;/code&gt;&lt;br&gt;
View Docker daemon logs:&lt;br&gt;
&lt;code&gt;sudo journalctl -u docker&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We can use the nicolaka/netshoot image to perform network troubleshooting. It comes packaged with a variety of useful networking-related tools. We can inject a container into another container's networking sandbox for troubleshooting purposes.&lt;br&gt;
&lt;code&gt;docker run --network container:CONTAINER_NAME nicolaka/netshoot&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuring Docker to Use External DNS&lt;/strong&gt;&lt;br&gt;
Set the system-wide default DNS for Docker containers in daemon.json: &lt;br&gt;
&lt;code&gt;{ &lt;br&gt;
"dns": ["8.8.8.8"] &lt;br&gt;
}&lt;/code&gt;&lt;br&gt;
Set the DNS for an individual container. &lt;br&gt;
&lt;code&gt;docker run --dns 8.8.4.4 IMAGE&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Signing Images and Enabling Docker Content Trust&lt;/strong&gt;&lt;br&gt;
Docker Content Trust (DCT) is a feature that allows us to sign images and verify signatures before running them. Enable Docker Content Trust by setting an environment variable: &lt;br&gt;
&lt;code&gt;DOCKER_CONTENT_TRUST=1&lt;/code&gt;&lt;br&gt;
With Docker Content Trust enabled, the system will not run images that are unsigned or whose signature is invalid.&lt;br&gt;
Sign and push an image with: &lt;br&gt;
&lt;code&gt;docker trust sign IMAGE:TAG&lt;/code&gt;&lt;br&gt;
With DOCKER_CONTENT_TRUST=1, &lt;code&gt;docker push&lt;/code&gt; automatically signs the image before pushing it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Default Docker Engine Security&lt;/strong&gt;&lt;br&gt;
Basic Docker security concepts:&lt;br&gt;
Docker uses namespaces to isolate container processes from one another and the host. This prevents an attacker from affecting or gaining control of other containers or the host if they manage to gain control of one container. &lt;br&gt;
The Docker daemon must run with root access. Before allowing anyone to interact with the daemon, be aware of this. It could be used to gain access to the entire host. &lt;br&gt;
Docker leverages Linux capabilities to assign granular permissions to container processes. For example, listening on a low port (below 1024) usually requires a process to run as root, but Docker uses Linux capabilities to allow a container to listen on port 80 without running as root.&lt;/p&gt;
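&lt;p&gt;For instance, capabilities can also be tightened explicitly (illustrative; assumes a running Docker daemon):&lt;/p&gt;

```shell
# Drop all capabilities, then add back only the one needed to bind low ports
docker run -d --cap-drop ALL --cap-add NET_BIND_SERVICE nginx
```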

&lt;p&gt;&lt;strong&gt;Securing the Docker Daemon HTTP Socket&lt;/strong&gt;&lt;br&gt;
Generate a certificate authority and server certificates for the Docker server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl genrsa -aes256 -out ca-key.pem 4096` 

`openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem -subj "/C=US/ST=Texas/L=Keller/O=Linux Academy/OU=Content/CN=$HOSTNAME" openssl genrsa -out server-key.pem 4096 `

`openssl req -subj "/CN=$HOSTNAME" -sha256 -new -key server-key.pem -out server.csr \ echo subjectAltName = DNS:$HOSTNAME,IP:,IP:127.0.0.1 &amp;gt;&amp;gt; extfile.cnf `

`echo extendedKeyUsage = serverAuth &amp;gt;&amp;gt; extfile.cnf `

`openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf`
Generate client certificates:
`openssl genrsa -out key.pem 4096 

openssl req -subj '/CN=client' -new -key key.pem -out client.csr 

echo extendedKeyUsage = clientAuth &amp;gt; extfile-client.cnf 

openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem \
-CAcreateserial -out cert.pem -extfile extfile-client.cnf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set appropriate permissions on the certificate files:&lt;br&gt;
&lt;code&gt;chmod -v 0400 ca-key.pem key.pem server-key.pem&lt;/code&gt;&lt;br&gt;
&lt;code&gt;chmod -v 0444 ca.pem server-cert.pem cert.pem&lt;/code&gt;&lt;br&gt;
Configure the Docker host to use tlsverify mode with the certificates created earlier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vi /etc/docker/daemon.json

{
 "tlsverify": true,
 "tlscacert": "/home/user/ca.pem",
 "tlscert": "/home/user/server-cert.pem",
 "tlskey": "/home/user/server-key.pem"
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Edit the Docker service file: find the line that begins with ExecStart and change the &lt;code&gt;-H&lt;/code&gt; option so the daemon listens on a TCP socket.&lt;br&gt;
&lt;code&gt;sudo vi /lib/systemd/system/docker.service&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ExecStart=/usr/bin/dockerd -H=0.0.0.0:2376 --containerd=/run/containerd/containerd.sock&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo systemctl daemon-reload&lt;/code&gt;&lt;br&gt;
&lt;code&gt;sudo systemctl restart docker&lt;/code&gt;&lt;br&gt;
Copy the CA cert and client certificate files to the client machine.&lt;br&gt;
On the client machine, configure the client to connect to the remote Docker daemon securely:&lt;br&gt;
&lt;code&gt;mkdir -pv ~/.docker&lt;/code&gt;&lt;br&gt;
&lt;code&gt;cp -v {ca,cert,key}.pem ~/.docker&lt;/code&gt; &lt;br&gt;
&lt;code&gt;export DOCKER_HOST=tcp://:2376 DOCKER_TLS_VERIFY=1&lt;/code&gt;&lt;br&gt;
Test the connection: &lt;br&gt;
&lt;code&gt;docker version&lt;/code&gt;&lt;/p&gt;
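
&lt;p&gt;Instead of environment variables, the same secure connection can be made with per-command flags; as in the export above, substitute your Docker host's address into the &lt;code&gt;-H&lt;/code&gt; value:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker --tlsverify \
  --tlscacert=~/.docker/ca.pem \
  --tlscert=~/.docker/cert.pem \
  --tlskey=~/.docker/key.pem \
  -H=tcp://:2376 version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;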

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, mastering Docker transforms your development workflow by streamlining installation, configuration, image management, storage, networking, and security. This guide equips you with essential knowledge and practical skills, enabling you to build, ship, and run applications efficiently. Embrace Docker's power to elevate your container management to the next level.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>webdev</category>
      <category>devops</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Maximizing Value: A Guide to Cost Optimization in Microsoft Azure</title>
      <dc:creator>Yasir Rehman</dc:creator>
      <pubDate>Wed, 27 Mar 2024 22:19:01 +0000</pubDate>
      <link>https://dev.to/theyasirr/maximizing-value-a-guide-to-cost-optimization-in-microsoft-azure-2989</link>
      <guid>https://dev.to/theyasirr/maximizing-value-a-guide-to-cost-optimization-in-microsoft-azure-2989</guid>
      <description>&lt;p&gt;Cost optimization is essential for businesses to ensure they are getting the most value from their Azure investments. By optimizing costs, organizations can effectively manage their budgets, allocate resources efficiently, and ultimately improve their bottom line. With the right strategies in place, businesses can achieve significant savings while maintaining optimal performance and scalability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost optimization fundamentals
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Balance is everything&lt;/strong&gt;&lt;br&gt;
Achieving cost optimization requires striking a delicate balance between performance, reliability, and cost. It involves identifying areas where resources are underutilized or overprovisioned and making adjustments to optimize efficiency without sacrificing quality.&lt;br&gt;
&lt;strong&gt;The cloud shift in process&lt;/strong&gt;&lt;br&gt;
Cloud cost management requires a different mindset compared to on-premises IT. Traditional upfront capital expenditure is replaced by a pay-as-you-go model, demanding continuous monitoring and optimization strategies.&lt;br&gt;
&lt;strong&gt;Financial Operations (FinOps)&lt;/strong&gt;&lt;br&gt;
FinOps is a methodology that combines financial and operational processes to optimize cloud costs continuously. It involves collaboration between finance, operations, and engineering teams to align cloud spending with business objectives and drive accountability across the organization.&lt;br&gt;
&lt;strong&gt;FinOps goal&lt;/strong&gt;&lt;br&gt;
The primary goal of FinOps is to enable organizations to understand and control their cloud spending effectively. By implementing FinOps practices, businesses can gain visibility into their cloud usage, identify cost optimization opportunities, and implement strategies to reduce waste and unnecessary spending.&lt;/p&gt;

&lt;p&gt;FinOps advocates for a continuous improvement approach. This involves setting clear goals, measuring and analyzing costs, identifying optimization opportunities, implementing solutions, and monitoring progress.&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure Cloud constructs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Management Groups, Subscriptions, and Resource Groups&lt;/strong&gt;&lt;br&gt;
Organizing Azure resources into management groups, subscriptions, and resource groups provides a hierarchical structure for managing access, policies, and costs. This allows organizations to enforce governance policies, track spending, and optimize resource utilization effectively.&lt;br&gt;
&lt;strong&gt;Using Tags&lt;/strong&gt;&lt;br&gt;
Tags provide a flexible mechanism for categorizing and organizing Azure resources. By applying tags consistently across resources, organizations can gain insights into cost allocation, track spending by department or project, and implement targeted cost optimization strategies.&lt;br&gt;
&lt;strong&gt;Policy and Role-Based Access Control&lt;/strong&gt;&lt;br&gt;
Implementing policies and role-based access control (RBAC) helps organizations enforce compliance requirements, control access to resources, and prevent unauthorized spending. By defining policies and roles, businesses can ensure that only authorized users have access to resources and that they adhere to predefined cost optimization guidelines.&lt;br&gt;
&lt;strong&gt;Infrastructure as Code&lt;/strong&gt;&lt;br&gt;
Infrastructure as code (IaC) enables organizations to define and manage Azure resources programmatically. By automating the deployment and configuration of resources, businesses can improve consistency, reduce errors, and optimize costs by eliminating manual intervention and streamlining resource provisioning.&lt;br&gt;
&lt;strong&gt;Budgets&lt;/strong&gt;&lt;br&gt;
Setting and managing budgets in Azure allows organizations to monitor and control their cloud spending effectively. By establishing budget thresholds and alerts, businesses can proactively identify and address potential cost overruns, track spending against targets, and optimize resource usage to stay within budget constraints.&lt;/p&gt;

&lt;p&gt;Azure cloud constructs provide a foundation for organized and accountable resource management, ultimately contributing to optimized Azure cloud costs.&lt;/p&gt;
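
&lt;p&gt;As a small illustration, tags can be applied from the Azure CLI; the resource group name and tag values below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Tag a resource group so its costs can be grouped in cost analysis
az group update --name my-rg --tags department=finance project=webshop

# Apply the same tags to every resource inside the group
az resource list --resource-group my-rg --query "[].id" --output tsv \
  | xargs -I {} az resource tag --ids {} --tags department=finance project=webshop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;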

&lt;h2&gt;
  
  
  Azure tooling
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Azure Monitoring&lt;/strong&gt;&lt;br&gt;
Azure monitoring provides visibility into the performance, availability, and usage of Azure resources. By monitoring key metrics and trends, organizations can identify areas for optimization, troubleshoot issues, and ensure optimal performance and cost efficiency.&lt;br&gt;
&lt;strong&gt;Azure Pricing Calculator&lt;/strong&gt;&lt;br&gt;
Estimate costs upfront for various Azure resources and configurations using the Azure Pricing Calculator. This helps plan your cloud migration and optimize resource selection for future deployments.&lt;br&gt;
&lt;strong&gt;Azure Advisor&lt;/strong&gt;&lt;br&gt;
Azure Advisor offers personalized recommendations for optimizing Azure resources based on best practices and usage patterns. By following Advisor recommendations, businesses can improve resource utilization, reduce costs, and enhance the overall efficiency of their Azure environment.&lt;br&gt;
&lt;strong&gt;Azure Advisor Cost Optimization Workbook&lt;/strong&gt;&lt;br&gt;
The Azure Advisor cost optimization workbook provides a comprehensive view of cost-saving opportunities across Azure subscriptions. By analyzing cost recommendations and implementing optimization actions, organizations can achieve significant cost savings while maintaining performance and reliability.&lt;br&gt;
&lt;strong&gt;Azure Monitor Insights&lt;/strong&gt;&lt;br&gt;
Azure Monitor Insights provides actionable insights into Azure resource usage, performance, and cost trends. By analyzing monitoring data and identifying optimization opportunities, businesses can make informed decisions to improve efficiency and reduce costs in their Azure environment.&lt;br&gt;
&lt;strong&gt;Microsoft Cost Management&lt;/strong&gt;&lt;br&gt;
Microsoft Cost Management offers a centralized platform for managing and optimizing Azure costs. By leveraging Cost Management features, organizations can track spending, analyze cost drivers, implement cost-saving strategies, and optimize resource usage to achieve maximum value from their Azure investments.&lt;br&gt;
&lt;strong&gt;Power BI Cost Management App&lt;/strong&gt;&lt;br&gt;
The Power BI Cost Management app provides interactive dashboards and reports for visualizing and analyzing Azure cost data. By using Power BI, businesses can gain deeper insights into their cloud spending, identify cost optimization opportunities, and make data-driven decisions to improve cost efficiency.&lt;/p&gt;

&lt;p&gt;Azure offers a variety of tools for managing and optimizing cloud resources. Azure Monitor provides visibility into resource performance and usage. Azure Advisor offers recommendations for improving efficiency and reducing costs. Azure Monitor Insights and Cost Management tools along with Power BI help analyze resource data and make informed decisions to optimize your Azure environment. Additionally, the Azure Pricing Calculator helps estimate costs upfront, allowing you to plan your cloud migration and optimize resource selection for future deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimization of workloads
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;General Guidance&lt;/strong&gt;&lt;br&gt;
Optimizing workload involves identifying inefficiencies and implementing strategies to improve resource utilization and reduce costs. By following general guidance and best practices, organizations can achieve significant cost savings while maintaining performance and reliability in their Azure environment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Plan for the future&lt;/strong&gt;: Invest time upfront to understand your requirements and strategic direction. This ensures you architect the optimal solution that meets your needs without overspending.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimize the development environment&lt;/strong&gt;: Utilize the most cost-effective resources for development environments. Restrict resource types and ensure they are stopped when not in use. However, be cautious not to create a testing environment that deviates significantly from production, as this can lead to performance issues later.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Evaluate regional costs&lt;/strong&gt;: Azure offers varying pricing across different regions. Evaluate the cost implications of deploying resources in specific regions based on your needs and data residency requirements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Leverage development benefits&lt;/strong&gt;: Take advantage of Visual Studio Subscriptions for development purposes, but avoid using them for production workloads to control costs. Utilize Dev/Test Pricing and free tier resources offered by Azure for development and testing activities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Governance is key&lt;/strong&gt;: Implement strong governance standards to manage resource provisioning and usage. This helps prevent accidental overspending and ensures resources are aligned with business needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Find and remove unused resources&lt;/strong&gt;: Regularly identify and remove unused resources like idle VMs or unattached disks. However, exercise caution to avoid deleting critical resources accidentally.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shift right&lt;/strong&gt;: As you move your workloads to platform as a service (PaaS) and embrace serverless offerings, you benefit from a pay-per-use model, where you only pay for the work performed by the service. This can significantly reduce your cloud expenditures.&lt;/li&gt;
&lt;/ul&gt;
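
&lt;p&gt;"Find and remove unused resources" can be partly automated with the Azure CLI; these queries are a sketch of the idea:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List VMs that are not currently running (candidates for deallocation or removal)
az vm list -d --query "[?powerState!='VM running'].{name:name, state:powerState}" --output table

# List managed disks that are not attached to any VM
az disk list --query "[?managedBy==null].{name:name, size:diskSizeGb}" --output table

# List public IP addresses that are not associated with anything
az network public-ip list --query "[?ipConfiguration==null].name" --output table
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;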

&lt;p&gt;&lt;strong&gt;Specific workload approach&lt;/strong&gt;&lt;br&gt;
Each workload requires a tailored approach to cost optimization based on its unique requirements and usage patterns. By analyzing workload characteristics and implementing targeted optimization strategies, businesses can optimize costs effectively without compromising performance or reliability.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Virtual Machines&lt;/strong&gt;: Optimizing virtual machines involves rightsizing instances, implementing auto-scaling, and leveraging reserved instances or Azure Hybrid Benefit to reduce costs while meeting performance requirements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Azure Kubernetes Service&lt;/strong&gt;: For Azure Kubernetes Service (AKS), optimizing costs involves rightsizing node pools, leveraging spot instances, and optimizing cluster configurations to improve resource utilization and reduce costs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;App Services&lt;/strong&gt;: Optimizing App Services involves implementing auto-scaling, optimizing instance sizes, and leveraging Azure Hybrid Benefit or reserved instances to reduce costs while ensuring scalability and performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage Accounts&lt;/strong&gt;: Optimizing storage accounts involves implementing data lifecycle management policies, leveraging tiered storage, and optimizing data access patterns to reduce storage costs while meeting data retention and access requirements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Managed Disks&lt;/strong&gt;: For managed disks, optimizing costs involves rightsizing disk sizes, implementing auto-scaling, and leveraging reserved capacity to reduce costs while ensuring optimal performance and availability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Databases&lt;/strong&gt;: Optimizing databases involves rightsizing database instances, implementing auto-scaling, and leveraging reserved capacity or Azure Hybrid Benefit to reduce costs while maintaining performance and reliability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;App Gateway&lt;/strong&gt;: Optimizing Azure Application Gateway involves rightsizing instances, optimizing configuration settings, and leveraging reserved capacity or Azure Hybrid Benefit to reduce costs while ensuring optimal performance and scalability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Log Analytics Workspaces&lt;/strong&gt;: Optimizing log analytics workspaces involves managing retention policies, optimizing data ingestion and query performance, and leveraging reserved capacity to reduce costs while meeting compliance and operational requirements.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Optimizing workloads involves finding inefficiencies and making your resources work best for you. This can be done through general best practices or by taking a specific approach depending on the workload type. Here's a breakdown for common Azure workloads: VMs, AKS clusters, App Services, Storage, and Databases. Each has its own optimization techniques like rightsizing, auto-scaling, and leveraging cost-saving options.&lt;/p&gt;

&lt;h2&gt;
  
  
  Financial strategies
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Azure Hybrid Benefit&lt;/strong&gt;&lt;br&gt;
Azure Hybrid Benefit allows organizations to use on-premises licenses to run certain Microsoft software on Azure virtual machines, reducing costs by up to 40%.&lt;br&gt;
&lt;strong&gt;Reserved Instances&lt;/strong&gt;&lt;br&gt;
Reserved Instances enable organizations to reserve virtual machines or Azure SQL Database capacity for one or three years, offering significant cost savings compared to pay-as-you-go pricing.&lt;br&gt;
&lt;strong&gt;Azure Savings Plan&lt;/strong&gt;&lt;br&gt;
Azure Savings Plan provides flexible pricing for compute services such as virtual machines, Azure Kubernetes Service, and Azure App Service, offering up to 65% savings compared to pay-as-you-go pricing. With Azure Savings Plan, organizations commit to a fixed hourly spend for a one- or three-year term and receive discounted rates on the usage covered by that commitment.&lt;br&gt;
&lt;strong&gt;Reserved Instances vs Azure Savings Plan&lt;/strong&gt;&lt;br&gt;
While both Reserved Instances and Azure Savings Plan offer significant cost savings, they differ in terms of flexibility and pricing model. Choosing between reserved instances and Azure savings plans depends on your specific workload predictability:&lt;br&gt;
&lt;em&gt;Reserved Instances:&lt;/em&gt; Ideal for workloads with consistent and predictable resource consumption. You get the highest discount (up to 72%) but sacrifice some flexibility.&lt;br&gt;
&lt;em&gt;Azure Savings Plan:&lt;/em&gt; More flexible option suitable for workloads with variable compute usage. You get a lower discount (up to 65%) but can benefit from applying it across different compute resources.&lt;/p&gt;

&lt;p&gt;Azure offers financial strategies to reduce costs. Use existing licenses for VMs with Azure Hybrid Benefit, reserve resources for upfront discounts with Reserved Instances, or commit to a spending level for broader discounts with Azure Savings Plan.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Cost optimization in the cloud is a dynamic and ongoing process. By understanding and leveraging the tools and mechanisms available within Azure, organizations can achieve a balance between cost, performance, and scalability, driving greater business value and innovation. Implementing a culture of cost awareness and accountability, such as FinOps, and utilizing Azure's cost management features, can lead to significant savings and a more efficient use of cloud resources.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>cloud</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Kubernetes: Designing and Building Applications</title>
      <dc:creator>Yasir Rehman</dc:creator>
      <pubDate>Sat, 08 Jul 2023 18:39:12 +0000</pubDate>
      <link>https://dev.to/theyasirr/kubernetes-designing-and-building-applications-222c</link>
      <guid>https://dev.to/theyasirr/kubernetes-designing-and-building-applications-222c</guid>
      <description>&lt;p&gt;This article provides various aspects of application design and development in Kubernetes, covering topics such as container images, running jobs and cron jobs, building multi-container pods, utilizing Init containers, leveraging volumes and their types, and utilizing persistent volumes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Container Image
&lt;/h2&gt;

&lt;p&gt;An image is a self-contained file that encapsulates all the essential software and executables required to execute a container. It serves as a compact and portable package that enables you to deploy and operate your application across numerous containers.&lt;br&gt;
&lt;strong&gt;Docker&lt;/strong&gt; is one tool that you use to create your own images. A &lt;strong&gt;Dockerfile&lt;/strong&gt; defines what is contained in the image. The &lt;em&gt;docker build&lt;/em&gt; command builds an image using the Dockerfile.&lt;br&gt;
Check out my article "&lt;a href="https://dev.to/theyasirr/demystifying-docker-images-a-comprehensive-guide-to-creation-management-and-registry-4b35"&gt;Docker: Image Creation, Management, and Registry&lt;/a&gt;" to learn how to create and manage Docker images.&lt;/p&gt;
&lt;h2&gt;
  
  
  Running Jobs and CronJobs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Jobs&lt;/strong&gt; are specifically designed to execute containerized tasks and ensure their successful completion. They can also be used to run multiple Pods in parallel.&lt;br&gt;
A sample manifest of a Job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  template:
    spec:
      containers:
      - name: my-container
        image: my-image:latest
        command: ["echo", "Hello, Kubernetes!"]
      restartPolicy: Never
  backoffLimit: 3
  activeDeadlineSeconds: 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Job object is part of the Kubernetes &lt;em&gt;batch&lt;/em&gt; API group. The &lt;em&gt;restartPolicy&lt;/em&gt; is set to "&lt;em&gt;Never&lt;/em&gt;" so failed containers are not restarted in place; instead, the Job controller creates replacement Pods. The &lt;em&gt;backoffLimit&lt;/em&gt; of 3 means the Job will be retried at most 3 times in case of failures. The &lt;em&gt;activeDeadlineSeconds&lt;/em&gt; is the maximum time Kubernetes allows the Job to run.&lt;/p&gt;
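
&lt;p&gt;Assuming the manifest above is saved as example-job.yaml, the Job can be run and inspected with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f example-job.yaml

# Watch the Job until it reports completions
kubectl get jobs --watch

# Read the output of the Pod the Job created
kubectl logs job/example-job
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;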

&lt;p&gt;&lt;strong&gt;CronJobs&lt;/strong&gt; are designed to execute Jobs at regular intervals based on a predefined schedule. One CronJob object is like one line of a crontab (cron table) file on a Unix system.&lt;br&gt;
A sample manifest of a CronJob:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-container
            image: my-image:latest
            command: ["echo", "Hello, Kubernetes CronJob!"]
          restartPolicy: OnFailure
      backoffLimit: 3
      activeDeadlineSeconds: 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;schedule&lt;/em&gt; field uses a cron expression to define the desired schedule for running the Job. It represents minute, hour, day of the month, month, and day of the week (from left to right).&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;restartPolicy&lt;/em&gt; for a Job or CronJob Pod must be &lt;em&gt;OnFailure&lt;/em&gt; or &lt;em&gt;Never&lt;/em&gt;.&lt;/p&gt;
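
&lt;p&gt;Assuming the CronJob manifest above is applied, its schedule can be checked, and a one-off run triggered from its template, with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Show the schedule and the time of the last run
kubectl get cronjob example-cronjob

# Trigger a single run immediately without waiting for the schedule
kubectl create job manual-run --from=cronjob/example-cronjob
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;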

&lt;h2&gt;
  
  
  Building Multi-Container Pods
&lt;/h2&gt;

&lt;p&gt;Multi-Container Pods are Pods that consist of multiple containers collaborating within a single unit. They involve the inclusion of more than one container within a shared Pod.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8i4tfzo338uzdkw33hv2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8i4tfzo338uzdkw33hv2.png" alt="Multi-Container Pods" width="800" height="746"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An example manifest of a multi-container Pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  volumes:
    - name: shared-volume
      emptyDir: {}
  containers:
    - name: main-container
      image: main-image:latest
      volumeMounts:
        - name: shared-volume
          mountPath: /shared-data
    - name: sidecar-container
      image: sidecar-image:latest
      volumeMounts:
        - name: shared-volume
          mountPath: /shared-data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Multi-container Pods are recommended for scenarios where containers require tight coupling and the need to share resources like network and storage volumes.&lt;/p&gt;

&lt;p&gt;It is advisable to utilize multi-container Pods selectively, focusing on situations where close collaboration and resource sharing between containers are necessary.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-Container Design Patterns
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa48y6z1kk11ftvvczj1u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa48y6z1kk11ftvvczj1u.png" alt="Multi-Container Design Patterns" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;&lt;u&gt;Sidecar pattern&lt;/u&gt;&lt;/strong&gt;, a sidecar container works alongside the main container to provide assistance or perform complementary tasks. One example is when the main container serves files from a shared volume, and the sidecar container periodically updates those files.&lt;br&gt;
In the &lt;strong&gt;&lt;u&gt;Ambassador pattern&lt;/u&gt;&lt;/strong&gt;, an ambassador container acts as a proxy for network traffic to and/or from the main container. One example is when the main container needs to communicate with a database, and the ambassador container proxies the traffic to different databases based on the environment.&lt;br&gt;
In the &lt;strong&gt;&lt;u&gt;Adapter pattern&lt;/u&gt;&lt;/strong&gt;, an adapter container is responsible for transforming the output of the main container in some way. One example is when the main container produces log data in a non-standard format without timestamps, and the adapter container transforms the data by adding timestamps and converting it into a standard format.&lt;/p&gt;
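
&lt;p&gt;A minimal sketch of the Sidecar pattern, with illustrative image names: an nginx container serves files from a shared volume while a sidecar periodically refreshes them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: sidecar-example
spec:
  volumes:
    - name: html
      emptyDir: {}
  containers:
    - name: web
      image: nginx:latest
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
    - name: content-refresher
      image: busybox:latest
      command: ["sh", "-c", "while true; do date &amp;gt; /html/index.html; sleep 30; done"]
      volumeMounts:
        - name: html
          mountPath: /html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
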
&lt;h2&gt;
  
  
  Using Init containers
&lt;/h2&gt;

&lt;p&gt;An Init container is designed to fulfill a specific task before the main container within a Pod is initiated. It operates on a one-time basis, executing its designated job and then concluding its execution. Once the Init container completes its task, the main container within the Pod is initiated and begins its execution.&lt;br&gt;
An example manifest of a Pod using Init container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: main-container
      image: main-image:latest
      command: ["sh", "-c", "echo Main container started; sleep 10; echo Main container completed"]
      ports:
        - containerPort: 8080
          protocol: TCP
  initContainers:
    - name: init-container
      image: init-image:latest
      command: ["sh", "-c", "echo Init container started; sleep 5; echo Init container completed"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few use-cases of Init containers are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Separate image&lt;/strong&gt;: Init containers offer the capability to execute start-up tasks using a distinct image, which may involve software not included or required by the main container's image.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Delay startup&lt;/strong&gt;: Init containers provide a means to defer the start of the main container until specific prerequisites are satisfied.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: Init containers offer a secure approach to executing sensitive start-up processes, such as consuming secrets, in isolation from the main container.&lt;/li&gt;
&lt;/ul&gt;
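
&lt;p&gt;The "delay startup" case is commonly implemented by polling for a dependency; a sketch, where the service name my-service is a placeholder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  initContainers:
    - name: wait-for-service
      image: busybox:latest
      command: ["sh", "-c", "until nslookup my-service; do echo waiting; sleep 2; done"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;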

&lt;h2&gt;
  
  
  Exploring Volumes
&lt;/h2&gt;

&lt;p&gt;A volume serves as a storage solution external to the container's file system, offering additional storage capabilities for containers.&lt;br&gt;
&lt;strong&gt;Volume vs VolumeMounts&lt;/strong&gt;&lt;br&gt;
&lt;u&gt;Volume:&lt;/u&gt; Specified within the Pod configuration, the volume defines the specifics regarding the storage location and configuration for data within a Pod.&lt;br&gt;
&lt;u&gt;VolumeMounts:&lt;/u&gt; Configured within the container specification, volumeMounts associate a volume with a particular container and determine the directory path at which the volume data will be accessible during runtime.&lt;/p&gt;
&lt;h2&gt;
  
  
  Volume types
&lt;/h2&gt;

&lt;p&gt;The volume type determines where and how data storage is handled. There are many &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes/#volume-types" rel="noopener noreferrer"&gt;volume types&lt;/a&gt;, but a few important ones are:&lt;br&gt;
&lt;strong&gt;hostPath&lt;/strong&gt;&lt;br&gt;
Data is stored in a specific location directly on the host file system on the Kubernetes node where the Pod is running.&lt;br&gt;
An example manifest of Pod using hostPath volume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pod
spec:
  containers:
    - name: example-container
      image: example-image:latest
      volumeMounts:
        - name: data-volume
          mountPath: /app/data
  volumes:
    - name: data-volume
      hostPath:
        path: /data
        type: Directory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;hostPath&lt;/em&gt; volume has further four types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;u&gt;Directory&lt;/u&gt;: Mounts an existing directory on the host.&lt;/li&gt;
&lt;li&gt;
&lt;u&gt;DirectoryOrCreate&lt;/u&gt;: Mounts a directory on the host, creating it if it does not exist.&lt;/li&gt;
&lt;li&gt;
&lt;u&gt;File&lt;/u&gt;: Mounts an existing single file on the host.&lt;/li&gt;
&lt;li&gt;
&lt;u&gt;FileOrCreate&lt;/u&gt;: Mounts a single file on the host, creating it if it does not exist.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;emptyDir&lt;/strong&gt;&lt;br&gt;
Data is stored in an automatically managed location on the host file system. Data is deleted if the Pod is deleted.&lt;br&gt;
An example manifest of a Pod using &lt;em&gt;emptyDir&lt;/em&gt; volume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: emptydir-pod
spec:
  containers:
    - name: example-container
      image: example-image:latest
      volumeMounts:
        - name: data-volume
          mountPath: /app/data
  volumes:
    - name: data-volume
      emptyDir: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;persistentVolumeClaim&lt;/strong&gt;&lt;br&gt;
Data is stored using a &lt;em&gt;persistentVolume&lt;/em&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Using PersistentVolume
&lt;/h2&gt;

&lt;p&gt;A PersistentVolume provides the ability to abstract storage details from Pods, treating storage as a consumable resource without directly exposing the underlying storage implementation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PersistentVolume vs PersistentVolumeClaim&lt;/strong&gt;&lt;br&gt;
A &lt;u&gt;PersistentVolume (PV)&lt;/u&gt; defines an abstract storage resource that is available for consumption by Pods. It encompasses specific information regarding the type and capacity of storage that it provides.&lt;br&gt;
A &lt;u&gt;PersistentVolumeClaim (PVC)&lt;/u&gt; declares the need for storage and specifies the required characteristics, such as storage type. It automatically binds to an available PersistentVolume (PV) that fulfills the defined requirements. Once bound, the PVC can be mounted within a Pod like any other volume.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3zz3iu2womw9fn0docw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3zz3iu2womw9fn0docw.png" alt="PersistentVolumeClaim binding with PersistentVolume" width="800" height="400"&gt;&lt;/a&gt;&lt;br&gt;
An example of a Pod manifest YAML file that includes a PersistentVolumeClaim (PVC) and PersistentVolume (PV):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: pvc-pod
spec:
  containers:
    - name: example-container
      image: example-image:latest
      volumeMounts:
        - name: data-volume
          mountPath: /app/data
  volumes:
    - name: data-volume
      persistentVolumeClaim:
        claimName: my-pvc
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example demonstrates the usage of PVC and PV in a Pod. The PVC is bound to an available PV that meets the specified requirements, and the PV is mounted as a volume within the Pod.&lt;/p&gt;
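
&lt;p&gt;After applying a manifest like the one above, you can verify that the claim has bound (the commands below assume the example names &lt;em&gt;my-pv&lt;/em&gt; and &lt;em&gt;my-pvc&lt;/em&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pv my-pv
kubectl get pvc my-pvc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Both should report a STATUS of Bound once the claim has matched the volume.&lt;/p&gt;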

&lt;p&gt;By focusing on container images, running jobs and cron jobs, building multi-container pods, utilizing Init containers, exploring volumes and their types, and leveraging persistent volumes, developers can create robust and resilient applications in the Kubernetes ecosystem. Adopting these approaches and understanding the underlying concepts will empower developers to unlock the full potential of Kubernetes for their application deployments.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Docker: Image Creation, Management, and Registry</title>
      <dc:creator>Yasir Rehman</dc:creator>
      <pubDate>Sun, 25 Jun 2023 22:29:57 +0000</pubDate>
      <link>https://dev.to/theyasirr/demystifying-docker-images-a-comprehensive-guide-to-creation-management-and-registry-4b35</link>
      <guid>https://dev.to/theyasirr/demystifying-docker-images-a-comprehensive-guide-to-creation-management-and-registry-4b35</guid>
      <description>&lt;h2&gt;
  
  
  Docker Images
&lt;/h2&gt;

&lt;p&gt;A Docker image is a self-contained bundle that includes all the necessary software required to run a container. It serves as an executable package, encapsulating the application code, dependencies, and system libraries, ensuring consistency and portability across different environments.&lt;/p&gt;

&lt;p&gt;Run a container using an image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; docker run &amp;lt;IMAGE&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Download an image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; docker pull &amp;lt;IMAGE&amp;gt;
 docker image pull &amp;lt;IMAGE&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Docker utilizes a layered file system for its images and containers. Each layer within the file system contains only the modifications and differences from the preceding layer, allowing for efficient storage and sharing of resources. &lt;br&gt;
View file system layers in an image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; docker image history &amp;lt;IMAGE&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Components of a Dockerfile
&lt;/h2&gt;

&lt;p&gt;A Dockerfile is a file that provides instructions for building a Docker image. It contains a series of directives that define the steps and configuration needed to create the image.&lt;/p&gt;

&lt;p&gt;An example of Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Use the official Nginx base image
FROM nginx:latest

# Set an environment variable
ENV MY_VAR=my_value

# Copy custom configuration file to container
COPY nginx.conf /etc/nginx/nginx.conf

# Run some commands during the build process
RUN apt-get update &amp;amp;&amp;amp; apt-get install -y curl

# Expose port 80 for incoming traffic
EXPOSE 80

# Start Nginx server when the container starts
CMD ["nginx", "-g", "daemon off;"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Build an image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; docker build -t TAG_NAME DOCKERFILE_LOCATION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Dockerfile directives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;FROM:&lt;/strong&gt; Specifies the base image to use for the Docker image being built. It defines the starting point for the image and can be any valid image available on Docker Hub or a private registry.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ENV:&lt;/strong&gt; Sets environment variables within the image. These variables are accessible during the build process and when the container is running.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;COPY or ADD:&lt;/strong&gt; Copies files and directories from the build context (the directory where the Dockerfile is located) into the image. COPY is generally preferred for simple file copying, while ADD supports additional features such as unpacking archives.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RUN:&lt;/strong&gt; Executes commands during the build process. You can use RUN to install dependencies, run scripts, or perform any other necessary tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EXPOSE:&lt;/strong&gt; Informs Docker that the container will listen on the specified network ports at runtime. It does not actually publish the ports to the host machine or make the container accessible from outside.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CMD or ENTRYPOINT:&lt;/strong&gt; Specifies the command to run when a container is started from the image. CMD provides default arguments that can be overridden at run time, while ENTRYPOINT specifies the executable itself, which run-time arguments do not replace (it can only be changed with the --entrypoint flag).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WORKDIR:&lt;/strong&gt; Sets the working directory for any subsequent RUN, CMD, ENTRYPOINT, COPY, or ADD instructions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;STOPSIGNAL:&lt;/strong&gt; Sets a custom signal that will be used to stop the container process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HEALTHCHECK:&lt;/strong&gt; Sets a command that will be used by the Docker daemon to check whether the container is healthy.&lt;/li&gt;
&lt;/ul&gt;
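
&lt;p&gt;To see how CMD and ENTRYPOINT interact, consider this minimal sketch (the image and argument text are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ENTRYPOINT fixes the executable; CMD supplies default arguments
FROM alpine:latest
ENTRYPOINT ["echo"]
CMD ["hello from CMD"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running the image without arguments prints the CMD default; any arguments passed to docker run replace only the CMD portion, leaving the ENTRYPOINT intact.&lt;/p&gt;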

&lt;p&gt;A multi-stage build in a Dockerfile is a technique used to create more efficient and smaller Docker images. It involves defining multiple stages within the Dockerfile, each with its own set of instructions and dependencies.&lt;/p&gt;

&lt;p&gt;An example Dockerfile containing a multi-stage build definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build stage
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /app

# Copy and restore project dependencies
COPY *.csproj .
RUN dotnet restore

# Copy the entire project and build
COPY . .
RUN dotnet build -c Release --no-restore

# Publish the application
RUN dotnet publish -c Release -o /app/publish --no-restore

# Runtime stage
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS runtime
WORKDIR /app
COPY --from=build /app/publish .

# Expose the required port
EXPOSE 80

# Set the entry point for the application
ENTRYPOINT ["dotnet", "YourApplication.dll"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Managing images
&lt;/h2&gt;

&lt;p&gt;Some key commands for image management are:&lt;/p&gt;

&lt;p&gt;List images on the system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker image ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List images on the system including intermediate images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker image ls -a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get detailed information about an image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker image inspect &amp;lt;IMAGE&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Delete an image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker rmi &amp;lt;IMAGE&amp;gt;
docker image rm &amp;lt;IMAGE&amp;gt;
docker image rm -f &amp;lt;IMAGE&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An image can only be deleted if no containers or other image tags reference it. The -f flag forces deletion even when the image is still referenced. &lt;br&gt;
Find and delete dangling or unused images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker image prune
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Docker registries
&lt;/h2&gt;

&lt;p&gt;Docker Registry serves as a centralized repository for storing and sharing Docker images. Docker Hub is the default, publicly available registry managed by Docker.&lt;br&gt;
By utilizing the registry image, we have the ability to set up and manage our own private registry at no cost.&lt;/p&gt;

&lt;p&gt;Run a simple registry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d -p 5000:5000 --restart=always --name registry registry:2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Upload an image to a registry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push &amp;lt;IMAGE&amp;gt;:&amp;lt;TAG&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
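
&lt;p&gt;To push to a self-hosted registry such as the one started above, the image must first be tagged with the registry's host and port (the image name my-image is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag my-image:latest localhost:5000/my-image:latest
docker push localhost:5000/my-image:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;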



&lt;p&gt;Download an image from a registry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull &amp;lt;IMAGE&amp;gt;:&amp;lt;TAG&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Login to a registry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker login REGISTRY_URL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are two authentication methods for connecting to a private registry with an untrusted or self-signed certificate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Secure:&lt;/strong&gt; This involves adding the registry's public certificate to the /etc/docker/certs.d/ directory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Insecure:&lt;/strong&gt; This method entails adding the registry to the insecure-registries list in the daemon.json file or passing it to dockerd using the --insecure-registry flag.&lt;/li&gt;
&lt;/ul&gt;
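
&lt;p&gt;For the insecure method, the daemon.json entry (usually at /etc/docker/daemon.json) might look like the following, where the registry address is an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "insecure-registries": ["registry.example.com:5000"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The Docker daemon must be restarted for the change to take effect.&lt;/p&gt;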

&lt;p&gt;By leveraging the power of Docker, developers and organizations can streamline application deployment, improve scalability, and enhance overall software development processes. With the knowledge gained from this comprehensive overview, you're well-equipped to harness the full potential of Docker images and drive innovation in your containerized workflows.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Kubernetes Architectural Overview</title>
      <dc:creator>Yasir Rehman</dc:creator>
      <pubDate>Sun, 25 Jun 2023 17:49:18 +0000</pubDate>
      <link>https://dev.to/theyasirr/kubernetes-architectural-overview-1a66</link>
      <guid>https://dev.to/theyasirr/kubernetes-architectural-overview-1a66</guid>
      <description>&lt;p&gt;Kubernetes is a popular open-source platform used to manage containerized applications. It provides a flexible and scalable way to deploy, manage, and automate containerized workloads. A Kubernetes cluster consists of a control plane and worker nodes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjyroo29ck9gzzpd44o3u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjyroo29ck9gzzpd44o3u.png" alt="K8s Architectural Overview"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Control Plane
&lt;/h2&gt;

&lt;p&gt;The Kubernetes control plane is composed of several components that are responsible for overseeing and managing the cluster on a global scale. Its main function is to govern the operations of the entire cluster. While the control plane components can be deployed on any machine within the cluster, it is common practice to run them on dedicated controller machines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fial0ubcjc16cc4rrpump.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fial0ubcjc16cc4rrpump.png" alt="K8s Architecture - Control Plane"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;kube-apiserver&lt;/strong&gt; serves the Kubernetes API, the primary interface for interacting with the control plane and the cluster as a whole. When you interact with your Kubernetes cluster, you will typically do so through the Kubernetes API.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Etcd&lt;/strong&gt; serves as the underlying data storage for the Kubernetes cluster, functioning as a reliable and highly available storage solution for all cluster-related data. It is responsible for storing and maintaining the cluster's state information.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;kube-scheduler&lt;/strong&gt; is responsible for the task of scheduling, which involves the selection of an appropriate node within the cluster for running containers. It ensures that containers are allocated to available nodes based on factors such as resource requirements and availability, optimizing the distribution of workloads across the cluster.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;kube-controller-manager&lt;/strong&gt; consolidates multiple controller utilities into a single process. These controllers perform various automated tasks within the cluster, facilitating efficient management and operation. By running multiple controllers within a unified framework, the kube-controller-manager streamlines the execution of these automation tasks in a cohesive manner.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;cloud-controller-manager&lt;/strong&gt; serves as a bridge between Kubernetes and diverse cloud platforms, facilitating seamless integration and management of cloud resources within the Kubernetes environment. This component is utilized specifically when leveraging cloud-based resources alongside Kubernetes, enabling efficient utilization of cloud services in conjunction with Kubernetes capabilities.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The control plane components can be deployed on either a single server or multiple servers, depending on the desired configuration. In order to achieve high availability for the cluster, it is recommended to run these components on multiple servers. This ensures redundancy and fault tolerance, enhancing the overall resilience of the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Worker nodes
&lt;/h2&gt;

&lt;p&gt;Kubernetes nodes serve as the execution environment for containers managed by the cluster. A cluster can comprise any number of nodes, each responsible for hosting and managing containers. The nodes consist of multiple components that oversee container operations on the machine and establish communication with the control plane. These components ensure proper orchestration and coordination of containers within the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9t1732cc2v93a0nqk99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9t1732cc2v93a0nqk99.png" alt="K8s architecture - worker node"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;kubelet&lt;/strong&gt; acts as the Kubernetes agent deployed on every node within the cluster. It establishes communication with the control plane and ensures the execution of containers on its respective node, following instructions provided by the control plane. Additionally, the kubelet is responsible for reporting container status and transmitting relevant data pertaining to containers back to the control plane.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;kube-proxy&lt;/strong&gt; serves as a network proxy within the Kubernetes cluster. It runs on each node, and takes on specific responsibilities associated with facilitating networking between containers and services.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;container runtime&lt;/strong&gt; is an independent software component that Kubernetes relies on to execute containers on the underlying machine. It is not inherently integrated into Kubernetes itself. Instead, Kubernetes supports multiple implementations of container runtimes. These runtime implementations are responsible for the actual execution and management of containers.&lt;/li&gt;
&lt;/ul&gt;
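
&lt;p&gt;On a running cluster you can list the worker nodes, along with the kubelet version and container runtime each node reports:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes -o wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;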

&lt;p&gt;In conclusion, Kubernetes architecture consists of a robust control plane comprising various components such as kube-apiserver, kube-scheduler, kube-controller-manager, and cloud-controller-manager. These components work together to manage the cluster, schedule containers, handle automation tasks, and interface with cloud platforms. On the worker nodes, kubelet and kube-proxy play essential roles in managing and executing containers while ensuring seamless networking. This distributed architecture enables the efficient orchestration and scalability of containerized applications in Kubernetes clusters.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Azure Architecture Fundamentals: Azure subscriptions and management groups</title>
      <dc:creator>Yasir Rehman</dc:creator>
      <pubDate>Wed, 24 May 2023 16:26:33 +0000</pubDate>
      <link>https://dev.to/theyasirr/azure-architecture-fundamentals-azure-subscriptions-and-management-groups-5g2h</link>
      <guid>https://dev.to/theyasirr/azure-architecture-fundamentals-azure-subscriptions-and-management-groups-5g2h</guid>
      <description>&lt;p&gt;Azure Architecture Fundamentals:&lt;br&gt;
Part 1: &lt;a href="https://dev.to/theyasirr/azure-architecture-fundamentals-overview-of-azure-subscriptions-management-groups-and-resources-1ko"&gt;Overview of Azure subscriptions, management groups, and resources&lt;/a&gt;&lt;br&gt;
Part 2: &lt;a href="https://dev.to/theyasirr/azure-architecture-fundamentals-azure-regions-availability-zones-and-region-pairs-22k3"&gt;Azure regions, availability zones, and region pairs&lt;/a&gt;&lt;br&gt;
Part 3: &lt;a href="https://dev.to/theyasirr/azure-architecture-fundamentals-azure-resources-and-azure-resource-manager-o9h"&gt;Azure resources and Azure Resource Manager&lt;/a&gt;&lt;br&gt;
Part 4: &lt;a href="https://dev.to/theyasirr/azure-architecture-fundamentals-azure-subscriptions-and-management-groups-5g2h"&gt;Azure subscriptions and management groups&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To get started with Azure, one of your first steps will be to create at least one Azure subscription. You'll use it to create your cloud-based resources in Azure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure subscriptions
&lt;/h2&gt;

&lt;p&gt;Using Azure requires an Azure subscription. A subscription provides you with authenticated and authorized access to Azure products and services. It also allows you to provision resources. An Azure subscription is a logical unit of Azure services that links to an Azure account, which is an identity in Azure Active Directory (Azure AD) or in a directory that Azure AD trusts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zvlqacqe3xmbeaaf3ac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zvlqacqe3xmbeaaf3ac.png" alt="Diagram showing Azure subscriptions using authentication and authorization to access Azure accounts."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An account can have one subscription or multiple subscriptions that have different billing models and to which you apply different access-management policies. You can use Azure subscriptions to define boundaries around Azure products, services, and resources. There are two types of subscription boundaries that you can use:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Billing boundary:&lt;/strong&gt; This subscription type determines how an Azure account is billed for using Azure. You can create multiple subscriptions for different types of billing requirements. Azure generates separate billing reports and invoices for each subscription so that you can organize and manage costs.&lt;br&gt;
&lt;strong&gt;Access control boundary:&lt;/strong&gt; Azure applies access-management policies at the subscription level, and you can create separate subscriptions to reflect different organizational structures. &lt;/p&gt;

&lt;h2&gt;
  
  
  Create additional Azure subscriptions
&lt;/h2&gt;

&lt;p&gt;You might want to create additional subscriptions for resource or billing management purposes. For example, you might choose to create additional subscriptions to separate:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Environments:&lt;/strong&gt; When managing your resources, you can choose to create subscriptions to set up separate environments for development and testing, security, or to isolate data for compliance reasons. This design is particularly useful because resource access control occurs at the subscription level.&lt;br&gt;
&lt;strong&gt;Organizational structures:&lt;/strong&gt; You can create subscriptions to reflect different organizational structures. For example, you could limit a team to lower-cost resources, while allowing the IT department a full range. This design allows you to manage and control access to the resources that users create within each subscription.&lt;br&gt;
&lt;strong&gt;Billing:&lt;/strong&gt; You might want to also create additional subscriptions for billing purposes. Because costs are first aggregated at the subscription level, you might want to create subscriptions to manage and track costs based on your needs. For instance, you might want to create one subscription for your production workloads and another subscription for your development and testing workloads.&lt;br&gt;
You might also need additional subscriptions because of:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Subscription limits:&lt;/strong&gt; Subscriptions are bound to some hard limitations. For example, the maximum number of Azure ExpressRoute circuits per subscription is 10. Those limits should be considered as you create subscriptions on your account. If there's a need to go over those limits in particular scenarios, you might need additional subscriptions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Customize billing to meet your needs
&lt;/h2&gt;

&lt;p&gt;If you have multiple subscriptions, you can organize them into invoice sections. Each invoice section is a line item on the invoice that shows the charges incurred that month. For example, you might need a single invoice for your organization but want to organize charges by department, team, or project.&lt;/p&gt;

&lt;p&gt;Depending on your needs, you can set up multiple invoices within the same billing account by creating additional billing profiles. Each billing profile has its own monthly invoice and payment method.&lt;/p&gt;

&lt;p&gt;The following diagram shows an overview of how billing is structured. If you've previously signed up for Azure or if your organization has an Enterprise Agreement, your billing might be set up differently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu28dbalhryf0tz9s11jj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu28dbalhryf0tz9s11jj.png" alt="Flowchart-style diagram showing an example of setting up a billing structure where different groups like marketing or development have their own Azure subscription that rolls up into a larger company-paid Azure billing account."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure management groups
&lt;/h2&gt;

&lt;p&gt;If your organization has many subscriptions, you might need a way to efficiently manage access, policies, and compliance for those subscriptions. Azure management groups provide a level of scope above subscriptions. You organize subscriptions into containers called management groups and apply your governance conditions to the management groups. All subscriptions within a management group automatically inherit the conditions applied to the management group. Management groups give you enterprise-grade management at a large scale no matter what type of subscriptions you might have. All subscriptions within a single management group must trust the same Azure AD tenant.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hierarchy of management groups and subscriptions&lt;/strong&gt;&lt;br&gt;
You can build a flexible structure of management groups and subscriptions to organize your resources into a hierarchy for unified policy and access management. The following diagram shows an example of creating a hierarchy for governance by using management groups.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnb8y2kbmpsibapxqe8a4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnb8y2kbmpsibapxqe8a4.png" alt="Diagram showing an example of a management group hierarchy tree."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important facts about management groups&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;10,000 management groups can be supported in a single directory.&lt;/li&gt;
&lt;li&gt;A management group tree can support up to six levels of depth. This limit doesn't include the root level or the subscription level.&lt;/li&gt;
&lt;li&gt;Each management group and subscription can support only one parent.&lt;/li&gt;
&lt;li&gt;Each management group can have many children.&lt;/li&gt;
&lt;li&gt;All subscriptions and management groups are within a single hierarchy in each directory.&lt;/li&gt;
&lt;/ul&gt;
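
&lt;p&gt;A hierarchy like the one above can be sketched with the Azure CLI (the group names are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a management group, then a child group beneath it
az account management-group create --name contoso-root
az account management-group create --name contoso-dev --parent contoso-root

# Move a subscription into the child group
az account management-group subscription add --name contoso-dev --subscription "My Subscription"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;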

&lt;p&gt;Credits: &lt;a href="https://www.microsoft.com/" rel="noopener noreferrer"&gt;Microsoft&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>career</category>
      <category>beginners</category>
      <category>devops</category>
    </item>
    <item>
      <title>Azure Architecture Fundamentals: Azure resources and Azure Resource Manager</title>
      <dc:creator>Yasir Rehman</dc:creator>
      <pubDate>Wed, 24 May 2023 16:09:37 +0000</pubDate>
      <link>https://dev.to/theyasirr/azure-architecture-fundamentals-azure-resources-and-azure-resource-manager-o9h</link>
      <guid>https://dev.to/theyasirr/azure-architecture-fundamentals-azure-resources-and-azure-resource-manager-o9h</guid>
      <description>&lt;p&gt;Azure Architecture Fundamentals:&lt;br&gt;
Part 1: &lt;a href="https://dev.to/theyasirr/azure-architecture-fundamentals-overview-of-azure-subscriptions-management-groups-and-resources-1ko"&gt;Overview of Azure subscriptions, management groups, and resources&lt;/a&gt;&lt;br&gt;
Part 2: &lt;a href="https://dev.to/theyasirr/azure-architecture-fundamentals-azure-regions-availability-zones-and-region-pairs-22k3"&gt;Azure regions, availability zones, and region pairs&lt;/a&gt;&lt;br&gt;
Part 3: &lt;a href="https://dev.to/theyasirr/azure-architecture-fundamentals-azure-resources-and-azure-resource-manager-o9h"&gt;Azure resources and Azure Resource Manager&lt;/a&gt;&lt;br&gt;
Part 4: &lt;a href="https://dev.to/theyasirr/azure-architecture-fundamentals-azure-subscriptions-and-management-groups-5g2h"&gt;Azure subscriptions and management groups&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have a subscription, you're ready to start creating resources and storing them in resource groups. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource:&lt;/strong&gt; A manageable item that's available through Azure. Virtual machines (VMs), storage accounts, web apps, databases, and virtual networks are examples of resources.&lt;br&gt;
&lt;strong&gt;Resource group:&lt;/strong&gt; A container that holds related resources for an Azure solution. The resource group includes resources that you want to manage as a group. You decide which resources belong in a resource group based on what makes the most sense for your organization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure resource groups
&lt;/h2&gt;

&lt;p&gt;Resource groups are a fundamental element of the Azure platform. A resource group is a logical container for resources deployed on Azure. These resources are anything you create in an Azure subscription like VMs, Azure Application Gateway instances, and Azure Cosmos DB instances. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All resources must be in a resource group, and a resource can only be a member of a single resource group. &lt;/li&gt;
&lt;li&gt;Many resources can be moved between resource groups with some services having specific limitations or requirements to move. Resource groups can't be nested. &lt;/li&gt;
&lt;li&gt;Before any resource can be provisioned, you need a resource group for it to be placed in.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Logical grouping&lt;/strong&gt;&lt;br&gt;
Resource groups exist to help manage and organize your Azure resources. By placing resources of similar usage, type, or location in a resource group, you can bring order and organization to the resources you create in Azure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwojbe4x0raxgstfr549c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwojbe4x0raxgstfr549c.png" alt="Conceptual image showing a resource group box with a function, VM, database, and app included." width="508" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Life cycle&lt;/strong&gt;&lt;br&gt;
If you delete a resource group, all resources contained within it are also deleted. Organizing resources by life cycle can be useful in nonproduction environments, where you might try an experiment and then dispose of it. Resource groups make it easy to remove a set of resources all at once.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Authorization&lt;/strong&gt;&lt;br&gt;
Resource groups are also a scope for applying role-based access control (RBAC) permissions. You can ease administration and limit access to allow only what's needed by applying RBAC permissions to a resource group.&lt;/p&gt;
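&lt;p&gt;For example, a read-only role assignment scoped to a single resource group might look like this with the Azure CLI (the user, group name, and subscription ID are placeholders):&lt;/p&gt;

```shell
# Subscription ID used to build the scope path (placeholder value).
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"

# Grant the built-in Reader role on everything in one resource group.
az role assignment create \
  --assignee someone@example.com \
  --role "Reader" \
  --scope "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/my-demo-rg"
```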

&lt;h2&gt;
  
  
  Azure Resource Manager
&lt;/h2&gt;

&lt;p&gt;Azure Resource Manager is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. You use management features like access control, locks, and tags to secure and organize your resources after deployment.&lt;/p&gt;

&lt;p&gt;When a user sends a request from any of the Azure tools, APIs, or SDKs, Resource Manager receives the request. It authenticates and authorizes the request. Resource Manager sends the request to the Azure service, which takes the requested action. Because all requests are handled through the same API, you see consistent results and capabilities in all the different tools.&lt;/p&gt;

&lt;p&gt;The following image shows the role Resource Manager plays in handling Azure requests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfn4d3bjkzcxizjho3zi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfn4d3bjkzcxizjho3zi.png" alt="Diagram showing a Resource Manager request model." width="594" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All capabilities that are available in the Azure portal are also available through PowerShell, the Azure CLI, REST APIs, and client SDKs. Functionality initially released through APIs will be represented in the portal within 180 days of initial release.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits of using Resource Manager&lt;/strong&gt;&lt;br&gt;
With Resource Manager, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manage your infrastructure through declarative templates rather than scripts. A Resource Manager template is a JSON file that defines what you want to deploy to Azure.&lt;/li&gt;
&lt;li&gt;Deploy, manage, and monitor all the resources for your solution as a group, rather than handling these resources individually.&lt;/li&gt;
&lt;li&gt;Redeploy your solution throughout the development life cycle and have confidence your resources are deployed in a consistent state.&lt;/li&gt;
&lt;li&gt;Define the dependencies between resources so they're deployed in the correct order.&lt;/li&gt;
&lt;li&gt;Apply access control to all services because RBAC is natively integrated into the management platform.&lt;/li&gt;
&lt;li&gt;Apply tags to resources to logically organize all the resources in your subscription.&lt;/li&gt;
&lt;li&gt;Clarify your organization's billing by viewing costs for a group of resources that share the same tag.&lt;/li&gt;
&lt;/ul&gt;
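&lt;p&gt;A minimal template sketch deploying a single storage account (the account name, location, and &lt;code&gt;apiVersion&lt;/code&gt; are illustrative) looks like this:&lt;/p&gt;

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "name": "mydemostorage123",
      "location": "eastus",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

&lt;p&gt;You would deploy it into a resource group with &lt;code&gt;az deployment group create --resource-group my-demo-rg --template-file template.json&lt;/code&gt;.&lt;/p&gt;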

&lt;p&gt;Credits: &lt;a href="https://www.microsoft.com/" rel="noopener noreferrer"&gt;Microsoft&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>cloud</category>
      <category>beginners</category>
      <category>devops</category>
    </item>
    <item>
      <title>Azure Architecture Fundamentals: Azure regions, availability zones, and region pairs</title>
      <dc:creator>Yasir Rehman</dc:creator>
      <pubDate>Wed, 24 May 2023 15:38:22 +0000</pubDate>
      <link>https://dev.to/theyasirr/azure-architecture-fundamentals-azure-regions-availability-zones-and-region-pairs-22k3</link>
      <guid>https://dev.to/theyasirr/azure-architecture-fundamentals-azure-regions-availability-zones-and-region-pairs-22k3</guid>
      <description>&lt;p&gt;Azure Architecture Fundamentals:&lt;br&gt;
Part 1: &lt;a href="https://dev.to/theyasirr/azure-architecture-fundamentals-overview-of-azure-subscriptions-management-groups-and-resources-1ko"&gt;Overview of Azure subscriptions, management groups, and resources&lt;/a&gt;&lt;br&gt;
Part 2: &lt;a href="https://dev.to/theyasirr/azure-architecture-fundamentals-azure-regions-availability-zones-and-region-pairs-22k3"&gt;Azure regions, availability zones, and region pairs&lt;/a&gt;&lt;br&gt;
Part 3: &lt;a href="https://dev.to/theyasirr/azure-architecture-fundamentals-azure-resources-and-azure-resource-manager-o9h"&gt;Azure resources and Azure Resource Manager&lt;/a&gt;&lt;br&gt;
Part 4: &lt;a href="https://dev.to/theyasirr/azure-architecture-fundamentals-azure-subscriptions-and-management-groups-5g2h"&gt;Azure subscriptions and management groups&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In part 1, you learned about Azure resources and resource groups. Resources are created in regions, which are different geographical locations around the globe that contain Azure datacenters.&lt;/p&gt;

&lt;p&gt;Azure is made up of datacenters located around the globe. When you use a service or create a resource such as an SQL database or virtual machine (VM), you're using physical equipment in one or more of these locations. These specific datacenters aren't exposed to users directly. Instead, Azure organizes them into regions. As you'll see later in this post, some of these regions offer availability zones, which are different Azure datacenters within that region.&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure regions
&lt;/h2&gt;

&lt;p&gt;A region is a geographical area on the planet that contains at least one but potentially multiple datacenters that are nearby and networked together with a low-latency network. Azure intelligently assigns and controls the resources within each region to ensure workloads are appropriately balanced.&lt;/p&gt;

&lt;p&gt;When you deploy a resource in Azure, you'll often need to choose the region where you want your resource deployed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Some services or VM features are only available in certain regions, such as specific VM sizes or storage types. There are also some global Azure services that don't require you to select a particular region, such as Azure Active Directory, Azure Traffic Manager, and Azure DNS.&lt;/p&gt;

&lt;p&gt;A few examples of regions are West US, Canada Central, West Europe, Australia East, and Japan West. Here's a view of all the available regions as of June 2020. [&lt;a href="https://learn.microsoft.com/en-us/training/azure-fundamentals/azure-architecture-fundamentals/media/regions-small-be724495.png" rel="noopener noreferrer"&gt;Image link&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fifx3uf78jt09td1ee6rc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fifx3uf78jt09td1ee6rc.png" alt="Global map of available Azure regions as of June 2020"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why are regions important?&lt;/strong&gt;&lt;br&gt;
Azure has more global regions than any other cloud provider. These regions give you the flexibility to bring applications closer to your users no matter where they are. Global regions provide better scalability and redundancy. They also preserve data residency for your services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Special Azure regions
&lt;/h2&gt;

&lt;p&gt;Azure has specialized regions that you might want to use when you build out your applications for compliance or legal purposes. A few examples include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;US DoD Central, US Gov Virginia, US Gov Iowa and more:&lt;/strong&gt; These regions are physical and logical network-isolated instances of Azure for U.S. government agencies and partners. These datacenters are operated by screened U.S. personnel and include extra compliance certifications.&lt;br&gt;
&lt;strong&gt;China East, China North, and more:&lt;/strong&gt; These regions are available through a unique partnership between Microsoft and 21Vianet, whereby Microsoft doesn't directly maintain the datacenters.&lt;/p&gt;

&lt;p&gt;Regions are what you use to identify the location for your resources. There are two other terms you should also be aware of: geographies and availability zones.&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure availability zones
&lt;/h2&gt;

&lt;p&gt;You want to ensure your services and data are redundant so you can protect your information if there's a failure. When you host your infrastructure on-premises, setting up your own redundancy requires that you create duplicate hardware environments. Azure can help make your app highly available through availability zones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is an availability zone?&lt;/strong&gt;&lt;br&gt;
Availability zones are physically separate datacenters within an Azure region. Each availability zone is made up of one or more datacenters equipped with independent power, cooling, and networking. An availability zone is set up to be an isolation boundary: if one zone goes down, the others continue working. Availability zones are connected through high-speed, private fiber-optic networks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fslb18op8k7q9u88ny4xi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fslb18op8k7q9u88ny4xi.png" alt="Diagram showing three datacenters connected in a single Azure region representing an availability zone."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supported regions&lt;/strong&gt;&lt;br&gt;
Not every region has support for availability zones. For an updated list, see &lt;a href="https://learn.microsoft.com/en-us/azure/availability-zones/az-region" rel="noopener noreferrer"&gt;Regions that support availability zones in Azure&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use availability zones in your apps&lt;/strong&gt;&lt;br&gt;
You can use availability zones to run mission-critical applications and build high availability into your application architecture by co-locating your compute, storage, networking, and data resources within a zone and replicating them in other zones. Keep in mind that there can be a cost to duplicating your services and transferring data between zones.&lt;/p&gt;

&lt;p&gt;Availability zones are primarily for VMs, managed disks, load balancers, and SQL databases. The following categories of Azure services support availability zones:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zonal services:&lt;/strong&gt; You pin the resource to a specific zone (for example, VMs, managed disks, IP addresses).&lt;br&gt;
&lt;strong&gt;Zone-redundant services:&lt;/strong&gt; The platform replicates automatically across zones (for example, zone-redundant storage, SQL Database).&lt;br&gt;
&lt;strong&gt;Non-regional services:&lt;/strong&gt; Services are always available from Azure geographies and are resilient to zone-wide and region-wide outages.&lt;/p&gt;

&lt;p&gt;Check the documentation to determine which elements of your architecture you can associate with an availability zone.&lt;/p&gt;
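&lt;p&gt;As an illustration of a zonal service, the Azure CLI lets you pin a VM to a specific zone at creation time (the names and image alias are placeholders, and the target region must support availability zones):&lt;/p&gt;

```shell
# Create a VM pinned to availability zone 1 of the resource group's region.
az vm create \
  --resource-group my-demo-rg \
  --name my-zonal-vm \
  --image Ubuntu2204 \
  --zone 1
```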

&lt;h2&gt;
  
  
  Azure region pairs
&lt;/h2&gt;

&lt;p&gt;Availability zones are created by using one or more datacenters. There's a minimum of three zones within a single region. It's possible that a disaster could cause an outage large enough to affect even two datacenters, so Azure also creates region pairs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is a region pair?&lt;/strong&gt;&lt;br&gt;
Each Azure region is always paired with another region within the same geography (such as US, Europe, or Asia) at least 300 miles away. This approach allows for the replication of resources (such as VM storage) across a geography to help reduce the likelihood of interruptions from catastrophic events such as natural disasters, civil unrest, power outages, or physical network outages that affect multiple zones at once. If a region in a pair were affected by a natural disaster, services would automatically fail over to the other region in its pair.&lt;/p&gt;

&lt;p&gt;Examples of region pairs in Azure are West US paired with East US, and Southeast Asia paired with East Asia.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yk2x1l4ljktf1a0nf8k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yk2x1l4ljktf1a0nf8k.png" alt="Diagram showing relationship between geography, region pair, region, and datacenter."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because the pair of regions is directly connected and far enough apart to be isolated from regional disasters, you can use them to provide reliable services and data redundancy. Some services offer automatic geo-redundant storage by using region pairs.&lt;/p&gt;
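&lt;p&gt;For example, geo-redundant storage (GRS) replicates your data to the paired region automatically. A rough Azure CLI sketch (the account name, group, and location are placeholders):&lt;/p&gt;

```shell
# Create a storage account whose data is copied to the paired region.
az storage account create \
  --name mydemostorage123 \
  --resource-group my-demo-rg \
  --location eastus \
  --sku Standard_GRS
```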

&lt;p&gt;More advantages of region pairs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If an extensive Azure outage occurs, one region out of every pair is prioritized to make sure at least one is restored as quickly as possible for applications hosted in that region pair.&lt;/li&gt;
&lt;li&gt;Planned Azure updates are rolled out to paired regions one region at a time to minimize downtime and risk of application outage.&lt;/li&gt;
&lt;li&gt;Data continues to reside within the same geography as its pair (except for Brazil South) for tax- and law-enforcement jurisdiction purposes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Having a broadly distributed set of datacenters allows Azure to provide a high guarantee of availability.&lt;/p&gt;

&lt;p&gt;Credits: &lt;a href="https://www.microsoft.com/" rel="noopener noreferrer"&gt;Microsoft&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>cloud</category>
      <category>beginners</category>
      <category>devops</category>
    </item>
    <item>
      <title>Azure Architecture Fundamentals: Overview of Azure subscriptions, management groups, and resources</title>
      <dc:creator>Yasir Rehman</dc:creator>
      <pubDate>Wed, 24 May 2023 15:14:58 +0000</pubDate>
      <link>https://dev.to/theyasirr/azure-architecture-fundamentals-overview-of-azure-subscriptions-management-groups-and-resources-1ko</link>
      <guid>https://dev.to/theyasirr/azure-architecture-fundamentals-overview-of-azure-subscriptions-management-groups-and-resources-1ko</guid>
      <description>&lt;p&gt;Azure Architecture Fundamentals:&lt;br&gt;
Part 1: &lt;a href="https://dev.to/theyasirr/azure-architecture-fundamentals-overview-of-azure-subscriptions-management-groups-and-resources-1ko"&gt;Overview of Azure subscriptions, management groups, and resources&lt;/a&gt;&lt;br&gt;
Part 2: &lt;a href="https://dev.to/theyasirr/azure-architecture-fundamentals-azure-regions-availability-zones-and-region-pairs-22k3"&gt;Azure regions, availability zones, and region pairs&lt;/a&gt;&lt;br&gt;
Part 3: &lt;a href="https://dev.to/theyasirr/azure-architecture-fundamentals-azure-resources-and-azure-resource-manager-o9h"&gt;Azure resources and Azure Resource Manager&lt;/a&gt;&lt;br&gt;
Part 4: &lt;a href="https://dev.to/theyasirr/azure-architecture-fundamentals-azure-subscriptions-and-management-groups-5g2h"&gt;Azure subscriptions and management groups&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's say that you work as a developer for a successful company, and your company's Chief Technology Officer recently decided to adopt Azure as the cloud computing platform. You're currently in the planning stages for the migration. Before you begin the migration process, you decide to study Azure concepts, resources, and terminology to ensure your migration is a success.&lt;/p&gt;

&lt;p&gt;As part of your research, you need to learn the organizing structure for resources in Azure, which has four levels: management groups, subscriptions, resource groups, and resources.&lt;/p&gt;

&lt;p&gt;The following image shows the top-down hierarchy of organization for these levels.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fh7bnf9bm3ahguu4h1q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fh7bnf9bm3ahguu4h1q.png" alt="Hierarchy for objects in Azure"&gt;&lt;/a&gt;Screenshot of the hierarchy for objects in Azure.&lt;/p&gt;

&lt;p&gt;Having seen the top-down hierarchy of organization, let's describe each of those levels from the bottom up:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt; Resources are instances of services that you create, like virtual machines, storage, or SQL databases.&lt;br&gt;
&lt;strong&gt;Resource groups:&lt;/strong&gt; Resources are combined into resource groups. Resource groups act as a logical container into which Azure resources like web apps, databases, and storage accounts are deployed and managed.&lt;br&gt;
&lt;strong&gt;Subscriptions:&lt;/strong&gt; A subscription groups together user accounts and the resources that have been created by those user accounts. For each subscription, there are limits or quotas on the amount of resources that you can create and use. Organizations can use subscriptions to manage costs and the resources that are created by users, teams, or projects.&lt;br&gt;
&lt;strong&gt;Management groups:&lt;/strong&gt; These groups help you manage access, policy, and compliance for multiple subscriptions. All subscriptions in a management group automatically inherit the conditions applied to the management group.&lt;/p&gt;
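&lt;p&gt;To make the top of the hierarchy concrete, here's a rough Azure CLI sketch that creates a management group and moves a subscription into it (the group name and subscription ID are placeholders, and you need appropriate permissions):&lt;/p&gt;

```shell
# Create a management group.
az account management-group create --name my-mg

# Move an existing subscription under it; the subscription then
# inherits the conditions applied to the management group.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
az account management-group subscription add \
  --name my-mg \
  --subscription "$SUBSCRIPTION_ID"
```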

&lt;p&gt;Credits: &lt;a href="https://www.microsoft.com" rel="noopener noreferrer"&gt;Microsoft&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>beginners</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
