<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Grigor Khachatryan</title>
    <description>The latest articles on DEV Community by Grigor Khachatryan (@grigorkh).</description>
    <link>https://dev.to/grigorkh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F78943%2F519049d4-de20-4cdb-a952-fd7f3517165f.jpeg</url>
      <title>DEV Community: Grigor Khachatryan</title>
      <link>https://dev.to/grigorkh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/grigorkh"/>
    <language>en</language>
    <item>
      <title>Advanced Docker Networking: A Complete Guide</title>
      <dc:creator>Grigor Khachatryan</dc:creator>
      <pubDate>Sat, 05 Oct 2024 04:00:35 +0000</pubDate>
      <link>https://dev.to/grigorkh/advanced-docker-networking-a-complete-guide-pel</link>
      <guid>https://dev.to/grigorkh/advanced-docker-networking-a-complete-guide-pel</guid>
      <description>&lt;p&gt;&lt;em&gt;Docker&lt;/em&gt; has revolutionized the way we develop, ship, and run applications. While many are familiar with containerizing applications, networking within Docker remains a topic that can be daunting for both newcomers and seasoned developers. This guide aims to demystify Docker networking, offering insights that cater to all levels of expertise.&lt;/p&gt;




&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction to Docker Networking&lt;/li&gt;
&lt;li&gt;
Docker Network Drivers

&lt;ul&gt;
&lt;li&gt;Bridge Networks&lt;/li&gt;
&lt;li&gt;Host Networks&lt;/li&gt;
&lt;li&gt;Overlay Networks&lt;/li&gt;
&lt;li&gt;Macvlan Networks&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
Custom Network Configurations

&lt;ul&gt;
&lt;li&gt;Creating a User-Defined Bridge Network&lt;/li&gt;
&lt;li&gt;Connecting Containers to Multiple Networks&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
Advanced Networking Concepts

&lt;ul&gt;
&lt;li&gt;Network Namespaces&lt;/li&gt;
&lt;li&gt;IP Address Management (IPAM)&lt;/li&gt;
&lt;li&gt;Exposing and Publishing Ports&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Best Practices and Tips&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;





&lt;h2&gt;
  
  
  1. Introduction to Docker Networking
&lt;/h2&gt;

&lt;p&gt;Docker networking allows containers to communicate with each other, the host system, and external networks. Understanding how Docker handles networking is crucial for building scalable and efficient applications.&lt;/p&gt;

&lt;p&gt;By default, Docker uses the &lt;strong&gt;bridge&lt;/strong&gt; network driver, creating an isolated network namespace for each container. However, Docker provides several network drivers and options to customize networking to suit various application needs.&lt;/p&gt;





&lt;h2&gt;
  
  
  2. Docker Network Drivers
&lt;/h2&gt;

&lt;p&gt;Docker offers multiple network drivers, each suited for different use cases.&lt;/p&gt;


&lt;h3&gt;
  
  
  Bridge Networks
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Default network type&lt;/strong&gt; for standalone containers.&lt;/li&gt;
&lt;li&gt;Containers on the same user-defined bridge network can communicate using IP addresses or container names; the default bridge network only supports communication by IP address (or legacy links).&lt;/li&gt;
&lt;li&gt;Suitable for applications running on a single host.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a new bridge network&lt;/span&gt;
docker network create my-bridge-network
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h3&gt;
  
  
  Host Networks
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Containers share the host's network stack.&lt;/li&gt;
&lt;li&gt;No network isolation between the container and the host.&lt;/li&gt;
&lt;li&gt;Offers performance benefits by reducing network overhead.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Run a container using the host network&lt;/span&gt;
docker run &lt;span class="nt"&gt;--network&lt;/span&gt; host my-image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h3&gt;
  
  
  Overlay Networks
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Used for &lt;strong&gt;swarm services&lt;/strong&gt; and multi-host networking.&lt;/li&gt;
&lt;li&gt;Enables containers running on different Docker hosts to communicate securely.&lt;/li&gt;
&lt;li&gt;In swarm mode, Docker coordinates overlay networking internally; an external key-value store such as Consul or etcd is only needed for legacy standalone (non-swarm) overlay networks.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create an overlay network&lt;/span&gt;
docker network create &lt;span class="nt"&gt;-d&lt;/span&gt; overlay my-overlay-network
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
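
&lt;p&gt;Note that the command above assumes the host is already part of a swarm. On a fresh host, you would first initialize swarm mode; the &lt;code&gt;--attachable&lt;/code&gt; flag additionally allows standalone containers to join the overlay network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Initialize swarm mode (makes this host a manager)&lt;/span&gt;
docker swarm init

&lt;span class="c"&gt;# Create an overlay network that standalone containers can attach to&lt;/span&gt;
docker network create -d overlay --attachable my-overlay-network
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;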




&lt;h3&gt;
  
  
  Macvlan Networks
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Assigns a &lt;strong&gt;MAC address&lt;/strong&gt; to each container, making it appear as a physical device on the network.&lt;/li&gt;
&lt;li&gt;Containers can be directly connected to the physical network.&lt;/li&gt;
&lt;li&gt;Useful for legacy applications requiring direct network access.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a macvlan network&lt;/span&gt;
docker network create &lt;span class="nt"&gt;-d&lt;/span&gt; macvlan &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--subnet&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;192.168.1.0/24 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--gateway&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;192.168.1.1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;parent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;eth0 my-macvlan-network
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
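
&lt;p&gt;You can then attach a container with a static address from that subnet (the address below is illustrative). Keep in mind that, by default, the host itself cannot reach containers on a macvlan network directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Run a container with a fixed IP on the macvlan network&lt;/span&gt;
docker run -d --network my-macvlan-network --ip 192.168.1.100 my-image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;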







&lt;h2&gt;
  
  
  3. Custom Network Configurations
&lt;/h2&gt;

&lt;p&gt;Customizing Docker networks allows for greater control over container communication and network policies.&lt;/p&gt;


&lt;h3&gt;
  
  
  Creating a User-Defined Bridge Network
&lt;/h3&gt;

&lt;p&gt;User-defined bridge networks provide better isolation and enable advanced networking features like service discovery.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a user-defined bridge network&lt;/span&gt;
docker network create my-custom-network

&lt;span class="c"&gt;# Run containers on the custom network&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; container1 &lt;span class="nt"&gt;--network&lt;/span&gt; my-custom-network my-image
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; container2 &lt;span class="nt"&gt;--network&lt;/span&gt; my-custom-network my-image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
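
&lt;p&gt;Containers on a user-defined network can resolve each other by name through Docker's embedded DNS. A quick way to verify this (assuming the image includes &lt;code&gt;ping&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# container1 can reach container2 by name&lt;/span&gt;
docker exec container1 ping -c 1 container2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;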




&lt;h3&gt;
  
  
  Connecting Containers to Multiple Networks
&lt;/h3&gt;

&lt;p&gt;Containers can be connected to multiple networks, allowing them to communicate across different network segments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Connect an existing container to a network&lt;/span&gt;
docker network connect additional-network container1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
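
&lt;p&gt;You can verify which networks a container is attached to, or detach it again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# List the networks container1 is attached to&lt;/span&gt;
docker inspect -f '{{json .NetworkSettings.Networks}}' container1

&lt;span class="c"&gt;# Disconnect the container from a network&lt;/span&gt;
docker network disconnect additional-network container1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;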







&lt;h2&gt;
  
  
  4. Advanced Networking Concepts
&lt;/h2&gt;

&lt;p&gt;Diving deeper into Docker networking reveals concepts that offer enhanced control and flexibility.&lt;/p&gt;


&lt;h3&gt;
  
  
  Network Namespaces
&lt;/h3&gt;

&lt;p&gt;Each Docker container runs in its own network namespace, isolating its network environment. Understanding namespaces helps in troubleshooting and customizing network configurations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Inspect a container's network namespace&lt;/span&gt;
docker inspect &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s1"&gt;'{{ .NetworkSettings.SandboxKey }}'&lt;/span&gt; container_name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
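
&lt;p&gt;With the sandbox key, you can enter the container's network namespace from the host and inspect it with standard Linux tooling (requires root; &lt;code&gt;nsenter&lt;/code&gt; is part of util-linux):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# List the container's network interfaces from the host&lt;/span&gt;
nsenter --net=$(docker inspect -f '{{ .NetworkSettings.SandboxKey }}' container_name) ip addr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;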




&lt;h3&gt;
  
  
  IP Address Management (IPAM)
&lt;/h3&gt;

&lt;p&gt;Docker's IPAM driver manages IP address allocation for networks. Custom IPAM drivers can be used for specific networking requirements.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a network with a specific subnet and gateway&lt;/span&gt;
docker network create &lt;span class="nt"&gt;--subnet&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;172.20.0.0/16 &lt;span class="nt"&gt;--gateway&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;172.20.0.1 my-net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
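
&lt;p&gt;You can also constrain which addresses Docker hands out within a subnet using &lt;code&gt;--ip-range&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Only allocate container IPs from a sub-range of the subnet&lt;/span&gt;
docker network create --subnet=172.20.0.0/16 --ip-range=172.20.240.0/20 my-ranged-net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;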




&lt;h3&gt;
  
  
  Exposing and Publishing Ports
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Exposing Ports&lt;/strong&gt;: Documents the ports a container listens on (via &lt;code&gt;EXPOSE&lt;/code&gt;); it does not publish them to the host.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Publishing Ports&lt;/strong&gt;: Maps a container port to a host port, making it accessible externally.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Publish a container's port to the host&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:80 my-image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
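
&lt;p&gt;The distinction is visible in practice: &lt;code&gt;EXPOSE&lt;/code&gt; in a Dockerfile is metadata only, while &lt;code&gt;-p&lt;/code&gt; (or &lt;code&gt;-P&lt;/code&gt;, which publishes all exposed ports to random host ports) makes a port reachable from outside. The container name below is illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Publish all EXPOSEd ports to random host ports&lt;/span&gt;
docker run -d --name web -P my-image

&lt;span class="c"&gt;# Show which host ports were assigned&lt;/span&gt;
docker port web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;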







&lt;h2&gt;
  
  
  5. Best Practices and Tips
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use User-Defined Networks&lt;/strong&gt;: Avoid the default bridge network for better isolation and name resolution.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Leverage DNS&lt;/strong&gt;: Docker provides built-in DNS for container name resolution within user-defined networks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitor Network Performance&lt;/strong&gt;: Use tools like &lt;code&gt;docker stats&lt;/code&gt; and network plugins to monitor and optimize performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secure Your Networks&lt;/strong&gt;: Implement network policies and use overlay networks with encryption for secure communication.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clean Up Unused Networks&lt;/strong&gt;: Regularly remove unused networks to prevent clutter.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Remove an unused network&lt;/span&gt;
docker network &lt;span class="nb"&gt;rm &lt;/span&gt;my-unused-network
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
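
&lt;p&gt;To clean up in bulk, &lt;code&gt;docker network prune&lt;/code&gt; removes all networks not used by at least one container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# List networks, then remove all unused ones&lt;/span&gt;
docker network ls
docker network prune
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;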







&lt;h2&gt;
  
  
  6. Conclusion
&lt;/h2&gt;

&lt;p&gt;Docker networking is a powerful feature that, when understood and utilized correctly, can significantly enhance the scalability and performance of containerized applications. Whether you're just starting out or looking to optimize complex deployments, mastering Docker networking opens up a world of possibilities.&lt;/p&gt;

&lt;p&gt;Remember, the key to effectively using Docker networking lies in understanding your application's requirements and choosing the right tools and configurations to meet those needs.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>networking</category>
      <category>devops</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Docker Compose vs Kubernetes: When to Use What</title>
      <dc:creator>Grigor Khachatryan</dc:creator>
      <pubDate>Sat, 05 Oct 2024 03:55:55 +0000</pubDate>
      <link>https://dev.to/grigorkh/docker-compose-vs-kubernetes-when-to-use-what-3l90</link>
      <guid>https://dev.to/grigorkh/docker-compose-vs-kubernetes-when-to-use-what-3l90</guid>
      <description>&lt;p&gt;In the ever-growing world of containerization, two major players dominate the scene: Docker Compose and Kubernetes. Both are powerful tools, but they serve different purposes depending on the scale and complexity of the project. In this article, we will explore the key differences, use cases, and best practices for using Docker Compose and Kubernetes, helping you decide which tool is best suited for your needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Basics
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Docker Compose
&lt;/h3&gt;

&lt;p&gt;Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to manage your services through a simple YAML file, configuring networks, volumes, and dependencies. With a single command, your entire application can be spun up locally, making Docker Compose an excellent choice for development environments and smaller-scale applications.&lt;/p&gt;
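
&lt;p&gt;As an illustration, a hypothetical two-service stack might look like this (service names and images are examples only); &lt;code&gt;docker compose up -d&lt;/code&gt; starts the whole stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# docker-compose.yml
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    depends_on:
      - redis
  redis:
    image: redis:alpine
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;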

&lt;h3&gt;
  
  
  Kubernetes
&lt;/h3&gt;

&lt;p&gt;Kubernetes, often referred to as K8s, is an open-source container orchestration platform designed for automating deployment, scaling, and managing containerized applications across clusters of machines. While Kubernetes is more complex and requires a steeper learning curve, it offers a powerful suite of features that make it the go-to tool for managing large-scale, production-level applications.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Differences Between Docker Compose and Kubernetes
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Complexity&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker Compose:&lt;/strong&gt; Simple to set up and manage, making it ideal for smaller projects or development environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes:&lt;/strong&gt; More complex, with a higher learning curve, but designed for large-scale, production deployments.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker Compose:&lt;/strong&gt; Limited scalability, best suited for smaller environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes:&lt;/strong&gt; Highly scalable and capable of managing thousands of containers across clusters of machines.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Service Discovery and Load Balancing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker Compose:&lt;/strong&gt; Basic service discovery and load balancing, sufficient for smaller setups.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes:&lt;/strong&gt; Built-in service discovery, load balancing, and routing, making it ideal for managing complex networks.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;High Availability and Fault Tolerance&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker Compose:&lt;/strong&gt; Minimal built-in support for high availability and fault tolerance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes:&lt;/strong&gt; Offers robust high availability and self-healing features.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Resource Management&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker Compose:&lt;/strong&gt; Basic resource management (CPU and memory limits can be specified).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes:&lt;/strong&gt; Advanced resource management with options like auto-scaling, resource quotas, and limits.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Updates and Rollbacks&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker Compose:&lt;/strong&gt; Manual updates and rollbacks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes:&lt;/strong&gt; Supports automated rolling updates and easy rollbacks with minimal downtime.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  When to Use Docker Compose
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Ideal Use Cases:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Local Development:&lt;/strong&gt; Docker Compose makes it easy to set up local environments quickly, ensuring consistency across your team.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Small to Medium Applications:&lt;/strong&gt; If your application has fewer services and doesn't require complex scaling or high availability, Docker Compose is a great choice.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prototyping and Testing:&lt;/strong&gt; Its simplicity allows for rapid prototyping and testing, making it an excellent tool for validating new ideas.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simple CI/CD Pipelines:&lt;/strong&gt; Docker Compose can integrate easily into CI/CD pipelines for straightforward application deployments.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Best Practices for Docker Compose:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Version control your Docker Compose files to keep track of changes.&lt;/li&gt;
&lt;li&gt;Use environment variables to adapt configurations across different environments.&lt;/li&gt;
&lt;li&gt;Leverage YAML anchors and aliases to keep your configuration DRY (Don’t Repeat Yourself).&lt;/li&gt;
&lt;li&gt;Implement health checks to ensure all services are up and running as expected.&lt;/li&gt;
&lt;/ul&gt;
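
&lt;p&gt;For instance, YAML anchors let several services share common settings without repetition (a sketch with hypothetical service names; extension fields like &lt;code&gt;x-default-env&lt;/code&gt; require Compose file format 3.4+):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# Shared settings defined once via an anchor, reused via aliases
x-default-env: &amp;default-env
  LOG_LEVEL: info
  TZ: UTC

services:
  api:
    image: my-api:latest
    environment: *default-env
  worker:
    image: my-worker:latest
    environment: *default-env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;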




&lt;h2&gt;
  
  
  When to Use Kubernetes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Ideal Use Cases:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Large-Scale Production Deployments:&lt;/strong&gt; If you're managing a large-scale application with hundreds or thousands of containers across multiple machines, Kubernetes provides the orchestration you need.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Microservices Architectures:&lt;/strong&gt; For applications built on microservices, Kubernetes excels in handling complex service discovery and load balancing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High Availability Requirements:&lt;/strong&gt; Kubernetes offers built-in fault tolerance and self-healing features that ensure your application is highly available.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-Scaling Needs:&lt;/strong&gt; If your application requires auto-scaling based on demand or resource utilization, Kubernetes is the better option.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Cloud and Hybrid Cloud Deployments:&lt;/strong&gt; Kubernetes provides a consistent platform across different cloud providers, making it perfect for multi-cloud and hybrid cloud environments.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Best Practices for Kubernetes:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Use declarative configuration through YAML files for better maintainability.&lt;/li&gt;
&lt;li&gt;Implement proper resource requests and limits to optimize performance.&lt;/li&gt;
&lt;li&gt;Use namespaces to organize your resources and manage different environments.&lt;/li&gt;
&lt;li&gt;Regularly monitor and log your Kubernetes clusters to maintain system health.&lt;/li&gt;
&lt;li&gt;Utilize Helm charts to manage complex Kubernetes applications and simplify deployments.&lt;/li&gt;
&lt;/ul&gt;
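
&lt;p&gt;For example, resource requests and limits are declared per container in the pod spec (names and values here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: my-api:latest
      resources:
        requests:
          cpu: "250m"
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "512Mi"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;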




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Both Docker Compose and Kubernetes are essential tools in the containerization ecosystem, but they serve different purposes depending on your project’s needs. &lt;strong&gt;Docker Compose&lt;/strong&gt; shines in local development, small applications, and testing scenarios, offering simplicity and speed. &lt;strong&gt;Kubernetes&lt;/strong&gt;, on the other hand, is the go-to solution for large-scale production environments, offering advanced features like auto-scaling, service discovery, and fault tolerance.&lt;/p&gt;

&lt;p&gt;Many teams use both tools: Docker Compose for local development and testing, and Kubernetes for production. By understanding the strengths and limitations of each, you can make an informed decision to enhance your development workflow and improve operational efficiency.&lt;/p&gt;

&lt;p&gt;Whether you use Docker Compose, Kubernetes, or both, the key is to leverage these tools effectively to ensure your applications are robust, scalable, and easy to manage.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>webdev</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Docker Best Practices: Security</title>
      <dc:creator>Grigor Khachatryan</dc:creator>
      <pubDate>Mon, 30 Jan 2023 09:11:24 +0000</pubDate>
      <link>https://dev.to/grigorkh/docker-best-practices-security-1b3</link>
      <guid>https://dev.to/grigorkh/docker-best-practices-security-1b3</guid>
      <description>&lt;p&gt;Docker is a popular tool for creating and managing containerized applications. However, as with any technology, there are security best practices that should be followed to ensure that your applications and systems remain secure. In this article, we will discuss some of the best practices for securing Docker containers and applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Use official images
&lt;/h2&gt;

&lt;p&gt;One of the best ways to ensure the security of your Docker containers is to use official images from trusted sources. Official images are those that have been created and maintained by the upstream software vendor or a trusted third-party organization. These images are typically more secure than those that have been created by individuals or untrusted sources, as they are regularly updated and patched to address known vulnerabilities.&lt;/p&gt;

&lt;p&gt;For example, when running a containerized version of the Apache web server, you should use the official Apache image from Docker Hub rather than a version created by an individual or an untrusted organization. This makes it far more likely that the image you are using is up-to-date and patched against known vulnerabilities.&lt;/p&gt;
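
&lt;p&gt;Enabling Docker Content Trust makes the client verify image signatures before pulling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Refuse to pull images that are not signed&lt;/span&gt;
DOCKER_CONTENT_TRUST=1 docker pull httpd:2.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;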

&lt;h2&gt;
  
  
  2. Minimize the attack surface
&lt;/h2&gt;

&lt;p&gt;Another important aspect of securing Docker containers is to minimize the attack surface. This can be achieved by running only the necessary processes and services within a container and by using minimalistic images. Additionally, it is important to keep the number of open ports to a minimum and to use network segmentation to limit the exposure of your containers to the internet.&lt;/p&gt;

&lt;p&gt;For example, instead of running a full-fledged Linux distribution within a container, you can use a minimalistic image that includes only the necessary components for your application. Additionally, you can use network segmentation to limit the number of open ports and to restrict access to your container from the internet.&lt;/p&gt;
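
&lt;p&gt;A multi-stage build is a common way to achieve this: compile with the full toolchain, then ship only the resulting binary on a minimal base image (the Go application below is a hypothetical example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dockerfile"&gt;&lt;code&gt;# Build stage: full toolchain
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /app

# Runtime stage: minimal image with just the binary
FROM alpine:3.19
COPY --from=build /app /app
EXPOSE 8080
ENTRYPOINT ["/app"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;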

&lt;h2&gt;
  
  
  3. Keep your images and containers up-to-date
&lt;/h2&gt;

&lt;p&gt;Keeping your images and containers up-to-date is another important aspect of securing Docker. This includes updating your images to the latest version and patching any known vulnerabilities. Additionally, it is important to regularly update the software and libraries within your containers to ensure that they are free from known vulnerabilities.&lt;/p&gt;

&lt;p&gt;For example, if you are running an older version of the Apache web server within a container, update the image to the latest version and rebuild the container so that patched software and libraries are included.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Use container orchestration
&lt;/h2&gt;

&lt;p&gt;Container orchestration tools, such as Kubernetes, can help you to manage and secure your Docker containers at scale. These tools provide features such as automatic scaling, automatic updates, and automatic failover, which can help to ensure that your containers are always running and are free from known vulnerabilities. Additionally, container orchestration tools can be configured to automatically update your images and containers to the latest version and to patch any known vulnerabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Use a container firewall
&lt;/h2&gt;

&lt;p&gt;Another important aspect of securing Docker is to use a container firewall. Container firewalls can be configured to restrict incoming and outgoing network traffic and to limit access to your containers from the internet. Additionally, container firewalls can be configured to automatically block any traffic that is deemed to be malicious.&lt;/p&gt;

&lt;p&gt;For example, a firewall policy can restrict which containers are reachable from the internet and automatically block traffic from known malicious IP addresses or traffic that attempts to exploit known vulnerabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Use a vulnerability scanner
&lt;/h2&gt;

&lt;p&gt;A vulnerability scanner is a tool that scans your images and containers for known vulnerabilities. Scanners can be configured to run automatically and to alert you to any issues they find. Note that scanners detect and report vulnerabilities; remediation, such as rebuilding an image on a patched base, is a separate step that you can automate in your CI/CD pipeline. Some examples of vulnerability scanners include Aqua Security’s Trivy, Sysdig Secure, and StackRox.&lt;/p&gt;

&lt;p&gt;For example, you can use a vulnerability scanner such as Trivy to scan your images and containers for known vulnerabilities. If the scanner finds any, it will alert you and report the affected packages and the versions that fix them, so you can patch or rebuild the image.&lt;/p&gt;
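
&lt;p&gt;Running Trivy against a local image is a single command, and it can gate CI builds on severity (assumes Trivy is installed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Scan a local image for known CVEs&lt;/span&gt;
trivy image my-image

&lt;span class="c"&gt;# Fail the build if HIGH or CRITICAL vulnerabilities are found&lt;/span&gt;
trivy image --severity HIGH,CRITICAL --exit-code 1 my-image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;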

&lt;p&gt;In conclusion, securing Docker containers and applications requires a multi-faceted approach. By following best practices such as using official images, minimizing the attack surface, keeping your images and containers up-to-date, using container orchestration, using a container firewall, and using a vulnerability scanner, you can significantly reduce the risk of vulnerabilities and attacks on your containerized applications. It is important to regularly review and update your security practices, as new vulnerabilities and attack vectors are discovered and the threat landscape changes.&lt;/p&gt;

&lt;p&gt;For example, CVE-2019-5736, disclosed in February 2019, was a vulnerability in runc (the container runtime used by Docker) that allowed a malicious container to overwrite the host runc binary and execute arbitrary code as root. It was patched quickly, but it highlights the importance of keeping your images and containers up-to-date and using vulnerability scanners to detect and remediate known vulnerabilities.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>security</category>
      <category>cybersecurity</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Running a GraphQL endpoint with Serverless</title>
      <dc:creator>Grigor Khachatryan</dc:creator>
      <pubDate>Mon, 30 Jan 2023 09:07:54 +0000</pubDate>
      <link>https://dev.to/grigorkh/running-a-graphql-endpoint-with-serverless-4038</link>
      <guid>https://dev.to/grigorkh/running-a-graphql-endpoint-with-serverless-4038</guid>
      <description>&lt;p&gt;&lt;strong&gt;GraphQL?&lt;/strong&gt;&lt;br&gt;
In case you are not familiar with it, GraphQL is a data query language created by Facebook. GraphQL allows the client to select ad-hoc data. This differentiates it from REST API’s that expose pre-set data structures. &lt;a href="https://dev.to/grigorkh/what-is-graphql-4n9j"&gt;More here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Serverless?&lt;/strong&gt;&lt;br&gt;
The serverless pattern encourages focusing development on well-defined units of business logic, without premature optimization decisions about how that logic is deployed or scaled.&lt;/p&gt;
&lt;h3&gt;
  
  
  Setting up the Serverless Framework
&lt;/h3&gt;

&lt;p&gt;Before we start, make sure you have Node.js and npm installed on your machine. You can check this by running the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node &lt;span class="nt"&gt;-v&lt;/span&gt;
npm &lt;span class="nt"&gt;-v&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, install the Serverless Framework globally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; serverless
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have the Serverless Framework installed, let’s create a new Serverless project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;serverless create &lt;span class="nt"&gt;--template&lt;/span&gt; aws-nodejs &lt;span class="nt"&gt;--path&lt;/span&gt; graphql-serverless
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create a new Serverless project using the AWS Node.js template in a directory named graphql-serverless.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing the Required Libraries
&lt;/h3&gt;

&lt;p&gt;To run a GraphQL endpoint, we need to install the following libraries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;apollo-server-lambda graphql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;apollo-server-lambda&lt;/code&gt; is a library that makes it easy to run an Apollo GraphQL server on AWS Lambda. &lt;code&gt;graphql&lt;/code&gt; is the library that provides the GraphQL implementation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Writing the GraphQL Schema
&lt;/h3&gt;

&lt;p&gt;The next step is to write the GraphQL schema. This schema defines the types and the operations that can be performed on these types.&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;schema.js&lt;/code&gt; file in the &lt;code&gt;graphql-serverless&lt;/code&gt; directory with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;gql&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;apollo-server-lambda&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;typeDefs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;gql&lt;/span&gt;&lt;span class="s2"&gt;`
  type Query {
    hello: String
  }
`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;typeDefs&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a simple schema that defines a single query, &lt;code&gt;hello&lt;/code&gt;, which returns a string.&lt;/p&gt;

&lt;h3&gt;
  
  
  Writing the GraphQL Resolvers
&lt;/h3&gt;

&lt;p&gt;Next, we need to write the resolvers. Resolvers are the functions that implement the operations defined in the schema.&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;resolvers.js&lt;/code&gt; file in the &lt;code&gt;graphql-serverless&lt;/code&gt; directory with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;resolvers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;Query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;hello&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;__&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Hello, Serverless!&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;resolvers&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This resolver implements the &lt;code&gt;hello&lt;/code&gt; operation from the schema and returns the string &lt;code&gt;"Hello, Serverless!"&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up the Apollo Server
&lt;/h3&gt;

&lt;p&gt;Finally, we need to set up the Apollo Server.&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;handler.js&lt;/code&gt; file in the &lt;code&gt;graphql-serverless&lt;/code&gt; directory with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ApolloServer&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;apollo-server-lambda&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;typeDefs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./schema&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;resolvers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./resolvers&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ApolloServer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="nx"&gt;typeDefs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;resolvers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;exports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;graphqlHandler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createHandler&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This sets up the Apollo Server using the &lt;code&gt;typeDefs&lt;/code&gt; and &lt;code&gt;resolvers&lt;/code&gt; from the previous sections.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring the Serverless Function
&lt;/h3&gt;

&lt;p&gt;Now that we have the GraphQL endpoint set up, we need to configure the Serverless function to run it.&lt;/p&gt;

&lt;p&gt;Update the &lt;code&gt;serverless.yml&lt;/code&gt; file in the &lt;code&gt;graphql-serverless&lt;/code&gt; directory with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;graphql-serverless&lt;/span&gt;
&lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws&lt;/span&gt;
  &lt;span class="na"&gt;runtime&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nodejs18.x&lt;/span&gt;

&lt;span class="na"&gt;functions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;graphql&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;handler.graphqlHandler&lt;/span&gt;
    &lt;span class="na"&gt;events&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;graphql&lt;/span&gt;
          &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;post&lt;/span&gt;

&lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;serverless-apollo-middleware&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configures a single Serverless function named &lt;code&gt;graphql&lt;/code&gt; that is triggered by an HTTP POST request to the &lt;code&gt;/graphql&lt;/code&gt; endpoint. The handler created by &lt;code&gt;apollo-server-lambda&lt;/code&gt; works with the Serverless Framework directly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploying the Serverless Function
&lt;/h3&gt;

&lt;p&gt;The final step is to deploy the Serverless function to AWS. Run the following command in the &lt;code&gt;graphql-serverless&lt;/code&gt; directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;serverless deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will deploy the function to AWS and return the URL of the GraphQL endpoint.&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing the GraphQL Endpoint
&lt;/h3&gt;

&lt;p&gt;You can use a tool like &lt;a href="https://insomnia.rest/" rel="noopener noreferrer"&gt;Insomnia&lt;/a&gt; or &lt;a href="https://www.postman.com/" rel="noopener noreferrer"&gt;Postman&lt;/a&gt; to test the GraphQL endpoint.&lt;/p&gt;

&lt;p&gt;Create a new POST request to the URL of the GraphQL endpoint with the following query in the request body:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight graphql"&gt;&lt;code&gt;&lt;span class="k"&gt;query&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;hello&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Send the request and you should receive the following response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"hello"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Hello, Serverless!"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Continuous Integration and Deployment (CI/CD) using GitHub Actions
&lt;/h3&gt;

&lt;p&gt;To automate the deployment process, we can set up a CI/CD pipeline using GitHub Actions. GitHub Actions allows you to automate your workflow by creating custom actions that run whenever a specific event occurs in your GitHub repository.&lt;/p&gt;

&lt;p&gt;Create a new file named &lt;code&gt;.github/workflows/deploy.yml&lt;/code&gt; in your GitHub repository with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;

&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;AWS_REGION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;
  &lt;span class="na"&gt;AWS_PROFILE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout code&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure AWS credentials&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v1&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;aws-access-key-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ACCESS_KEY_ID }}&lt;/span&gt;
          &lt;span class="na"&gt;aws-secret-access-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_SECRET_ACCESS_KEY }}&lt;/span&gt;
          &lt;span class="na"&gt;aws-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ env.AWS_REGION }}&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy to AWS&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;anothrNick/serverless-action@v1&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This action is triggered whenever a push is made to the main branch of the GitHub repository. It uses the &lt;code&gt;anothrNick/serverless-action&lt;/code&gt; action to deploy the Serverless function to AWS.&lt;/p&gt;

&lt;p&gt;Store your AWS credentials as repository secrets named &lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; and &lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt; (under Settings → Secrets in your GitHub repository). The workflow reads them via the &lt;code&gt;secrets&lt;/code&gt; context, so the actual values never appear in the code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In this article, we showed how to run a GraphQL endpoint with Serverless on AWS Lambda. We covered the basics of setting up an Apollo Server and deploying it as a Serverless function. We also showed how to automate the deployment process using GitHub Actions, allowing for a seamless CI/CD pipeline.&lt;/p&gt;

&lt;p&gt;By using Serverless, you can take advantage of the scalability and cost-effectiveness of AWS Lambda while keeping the development process simple and streamlined. The combination of GraphQL and Serverless provides a powerful platform for building and deploying modern web applications.&lt;/p&gt;

&lt;p&gt;With the code and steps outlined in this article, you should be able to get started with running a GraphQL endpoint with Serverless and continue building upon this foundation for your own projects.&lt;/p&gt;

</description>
      <category>python</category>
      <category>automation</category>
      <category>sre</category>
      <category>codenewbie</category>
    </item>
    <item>
      <title>GraphQL vs REST</title>
      <dc:creator>Grigor Khachatryan</dc:creator>
      <pubDate>Mon, 30 Jan 2023 08:54:12 +0000</pubDate>
      <link>https://dev.to/grigorkh/graphql-vs-rest-2ind</link>
      <guid>https://dev.to/grigorkh/graphql-vs-rest-2ind</guid>
      <description>&lt;p&gt;GraphQL and REST are two popular API architectures for building web applications. Both have been widely used for building modern applications and have their own advantages and disadvantages. In this article, we will compare and contrast GraphQL and REST, explore the benefits of each, and provide code examples to illustrate the differences.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is GraphQL?&lt;/strong&gt; GraphQL is a query language for APIs that was developed internally at Facebook in 2012 and open-sourced in 2015. It provides a more efficient and flexible alternative to traditional REST APIs by allowing clients to request exactly the data they need, reducing the risk of over-fetching or under-fetching data. GraphQL also includes a type system that defines the data that can be fetched and the operations that can be performed, making it easier for clients and servers to understand the API.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is REST?&lt;/strong&gt; REST, or Representational State Transfer, is an architectural style for building web services that was first described by Roy Fielding in 2000. REST APIs use HTTP methods (such as GET, POST, PUT, and DELETE) to perform operations on resources. REST APIs typically return all the data associated with a resource, which can result in over-fetching data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code Examples
&lt;/h3&gt;

&lt;p&gt;Let’s look at a simple code example to see the difference between a GraphQL API and a REST API.&lt;/p&gt;

&lt;p&gt;Here is an example of a simple GraphQL query to retrieve information about a user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query {
  user(id: 1) {
    name
    email
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here is an equivalent REST API call to retrieve the same information:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /users/1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the GraphQL query allows you to specify exactly what data you want to retrieve, while the REST API call returns all the data associated with the user resource.&lt;/p&gt;
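&lt;p&gt;For illustration, the REST response for &lt;code&gt;/users/1&lt;/code&gt; might bundle every field stored on the user resource, even though the client only needed the name and email (the extra fields shown here are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "id": 1,
  "name": "Jane Doe",
  "email": "jane@example.com",
  "address": "1 Main St",
  "created_at": "2022-05-01T12:00:00Z",
  "last_login": "2023-01-29T08:00:00Z"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;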

&lt;h3&gt;
  
  
  Benefits of GraphQL
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Flexibility: GraphQL allows clients to request exactly the data they need, reducing the risk of over-fetching or under-fetching data. This makes it easier to optimize performance and reduce network overhead.&lt;/li&gt;
&lt;li&gt;Efficiency: GraphQL allows clients to fetch multiple resources in a single request, reducing the number of round trips to the server. This can greatly improve performance, especially for mobile devices with slow or unreliable network connections.&lt;/li&gt;
&lt;li&gt;Type System: GraphQL includes a type system that defines the data that can be fetched and the operations that can be performed, making it easier for clients and servers to understand the API.&lt;/li&gt;
&lt;li&gt;Versioning: GraphQL APIs are typically versionless; new fields and types can be added to the schema without breaking existing clients, and old fields can be deprecated, avoiding the need for explicit API versions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Benefits of REST
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Simple: REST is a simple and well-established architecture that provides a straightforward way to build APIs.&lt;/li&gt;
&lt;li&gt;Backwards Compatible: REST APIs are typically designed to be backwards compatible, which means that new versions of the API can be introduced without breaking existing clients.&lt;/li&gt;
&lt;li&gt;Widely Adopted: REST is a widely adopted architecture and has a large community of developers and tools that support it.&lt;/li&gt;
&lt;li&gt;Caching: REST APIs can be cached, reducing network overhead and improving performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  N+1 Problem
&lt;/h3&gt;

&lt;p&gt;The N+1 problem is a common issue in REST API design, where retrieving a resource and its related data requires multiple round trips. For example, to retrieve information about a user and their posts, you might need to make two separate API calls: one to retrieve the user and another to retrieve the list of posts (and, for a list of N users, one additional call per user).&lt;/p&gt;

&lt;p&gt;In GraphQL, related data can be retrieved in a single API call, avoiding the N+1 problem and improving performance. For example, the following GraphQL query retrieves information about a user and their posts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query {
  user(id: 1) {
    name
    email
    posts {
      title
      body
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Overfetching and Underfetching
&lt;/h3&gt;

&lt;p&gt;Overfetching occurs when an API returns more data than a client needs, which can result in increased network overhead and slower performance. Underfetching occurs when an API does not return enough data for a client to complete its task, which can result in additional API calls and reduced performance.&lt;/p&gt;

&lt;p&gt;In REST, overfetching and underfetching can be a common issue as the API often returns a fixed set of data for each resource. In GraphQL, the client has control over what data is retrieved, reducing the risk of overfetching or underfetching.&lt;/p&gt;

&lt;h3&gt;
  
  
  Backward Compatibility
&lt;/h3&gt;

&lt;p&gt;Backward compatibility is an important aspect of API design, as it allows an API to evolve without breaking existing clients. REST APIs are typically versioned explicitly (for example, with a &lt;code&gt;/v2/&lt;/code&gt; URL prefix), while GraphQL favors a versionless approach: new fields and types can be added to the schema, and old ones deprecated, without breaking existing clients. This makes it easier to evolve the API over time.&lt;/p&gt;
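&lt;p&gt;In practice this evolution is usually expressed with GraphQL’s built-in &lt;code&gt;@deprecated&lt;/code&gt; directive rather than a versioned URL. A sketch (the field names are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type User {
  name: String @deprecated(reason: "Use fullName instead.")
  fullName: String
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;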

&lt;h3&gt;
  
  
  Benefits of Schema and Type System
&lt;/h3&gt;

&lt;p&gt;The schema and type system in GraphQL provide a clear and concise definition of the data that can be fetched and the operations that can be performed. This makes it easier for clients and servers to understand the API and reduces the risk of misunderstandings or errors. In addition, the type system helps to ensure that the data returned from the API is consistent and accurate.&lt;/p&gt;
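&lt;p&gt;For example, a small schema makes the contract explicit: &lt;code&gt;user&lt;/code&gt; must be queried with an &lt;code&gt;ID&lt;/code&gt;, and the non-null markers (&lt;code&gt;!&lt;/code&gt;) guarantee which fields can never be missing. The types shown are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Post {
  title: String!
  body: String
}

type User {
  id: ID!
  name: String!
  posts: [Post!]!
}

type Query {
  user(id: ID!): User
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;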

&lt;p&gt;In conclusion, GraphQL and REST are two popular API architectures that each have their own advantages and disadvantages. GraphQL provides a more efficient and flexible alternative to REST, while REST is a simple and well-established architecture that is widely adopted. Ultimately, the choice between GraphQL and REST will depend on the specific needs of the project and the preferences of the development team.&lt;/p&gt;

</description>
      <category>graphql</category>
      <category>api</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>About Rust Programming Language</title>
      <dc:creator>Grigor Khachatryan</dc:creator>
      <pubDate>Mon, 30 Jan 2023 08:48:38 +0000</pubDate>
      <link>https://dev.to/grigorkh/about-rust-programming-language-36ac</link>
      <guid>https://dev.to/grigorkh/about-rust-programming-language-36ac</guid>
      <description>&lt;p&gt;Rust is a modern programming language that has become increasingly popular in recent years. It is a statically typed, multi-paradigm language that focuses on speed, reliability, and safety. Rust was created by Graydon Hoare in 2006 and was officially released in 2010. Over the years, Rust has gained a reputation as a language that provides developers with low-level control, strong memory safety guarantees, and excellent performance.&lt;/p&gt;

&lt;p&gt;Rust is designed to be an alternative to traditional systems programming languages such as C and C++. These languages provide low-level control and access to the underlying hardware, but they also come with a high risk of bugs, crashes, and security vulnerabilities. Rust aims to provide the same level of low-level control and performance, but with improved memory safety and reliability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Memory Safety in Rust:
&lt;/h3&gt;

&lt;p&gt;Memory safety is a critical aspect of software development, and it is one of the areas where Rust excels. Unlike C and C++, Rust has built-in memory safety guarantees that prevent common programming errors such as null pointer dereferences and buffer overflows. This is achieved through a combination of strict type checking, ownership, and borrowing rules.&lt;/p&gt;

&lt;p&gt;In Rust, each value has a unique owner, and the owner is responsible for freeing the memory when the value is no longer needed. When a value is shared, Rust uses a borrowing system to ensure that the value can only be used in a controlled and safe manner. This helps to prevent bugs and crashes that can occur when multiple parts of a program attempt to access the same memory location at the same time.&lt;/p&gt;
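&lt;p&gt;A minimal sketch of the ownership and borrowing rules described above (the function name is illustrative):&lt;br&gt;
&lt;/p&gt;

```rust
// Ownership: each value has exactly one owner. Borrowing lets other
// code read the value without taking ownership of it.
fn calc_len(s: &str) -> usize {
    s.len() // read-only access through a borrowed reference
}

fn main() {
    let s = String::from("hello"); // `s` owns the heap allocation

    let len = calc_len(&s); // immutable borrow; `s` stays valid afterwards
    println!("{} has length {}", s, len);

    let moved = s; // ownership moves to `moved`;
                   // using `s` after this line would not compile
    println!("{}", moved);
} // `moved` goes out of scope here and the memory is freed automatically
```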

&lt;h3&gt;
  
  
  Concurrency in Rust:
&lt;/h3&gt;

&lt;p&gt;Another area where Rust excels is concurrency. Concurrent programming involves running multiple parts of a program in parallel, allowing for faster and more efficient processing. Rust supports concurrent programming through its ownership and borrowing model, which helps developers write efficient and correct parallel code.&lt;/p&gt;

&lt;p&gt;Rust’s ownership and borrowing system ensures that multiple parts of a program cannot mutate the same data simultaneously: a value can have many read-only references or one mutable reference, but not both at once, which rules out data races at compile time. Additionally, the standard library provides synchronization primitives such as &lt;code&gt;Mutex&lt;/code&gt;, &lt;code&gt;Arc&lt;/code&gt;, and channels, allowing developers to write safe and efficient concurrent code.&lt;/p&gt;
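&lt;p&gt;A small sketch using the standard library’s &lt;code&gt;thread&lt;/code&gt;, &lt;code&gt;Arc&lt;/code&gt;, and &lt;code&gt;Mutex&lt;/code&gt; types (the function name is illustrative):&lt;br&gt;
&lt;/p&gt;

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n` threads that each increment a shared counter.
// Arc provides shared ownership across threads, and Mutex makes the
// mutation exclusive; without them, this sharing would not compile.
fn parallel_count(n: usize) -> usize {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1; // exclusive access while locked
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap(); // wait for every thread to finish
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    println!("count = {}", parallel_count(4));
}
```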

&lt;h3&gt;
  
  
  Performance in Rust:
&lt;/h3&gt;

&lt;p&gt;Rust is known for its excellent performance, and it is often compared to C and C++. Rust’s low-level control and zero-cost abstractions help to ensure that programs run quickly and efficiently. Additionally, Rust has no garbage collector and minimal runtime overhead, making it a great choice for high-performance systems.&lt;/p&gt;

&lt;p&gt;Rust’s performance is also helped by its built-in concurrency support, which allows developers to write efficient parallel code. With the rise of multi-core processors, concurrent programming is becoming increasingly important for achieving high performance, and Rust provides the tools and features necessary to build high-performance systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Syntax and Ecosystem:
&lt;/h3&gt;

&lt;p&gt;Rust’s syntax is similar to C++, making it easier for C++ developers to learn. This is an important consideration, as it makes it easier for developers to transition from one language to another, and it helps to ensure that the knowledge and experience gained in one language can be easily applied to another.&lt;/p&gt;

&lt;p&gt;Rust also has a growing ecosystem, with many popular libraries and tools available. This ecosystem is supported by a large and active community of developers, who contribute to the development of the language and its libraries. This makes it easy for developers to find the tools and resources they need to build their projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion:
&lt;/h3&gt;

&lt;p&gt;Rust is a modern programming language that provides developers with the speed, reliability, and safety they need to build high-performance systems. Whether you’re building a game, a web service, or a system utility, Rust is a great choice for your next project. Its combination of low-level control, memory safety, and concurrency support makes it well suited for systems programming, while its growing ecosystem and active community provide developers with the tools and resources they need to be productive.&lt;/p&gt;

&lt;p&gt;Additionally, Rust’s syntax and design philosophy make it a language that is easy to learn and use, while its performance and reliability make it a language that is well suited for production use. Whether you’re a seasoned systems programmer or just starting out, Rust is a language that you should consider for your next project.&lt;/p&gt;

&lt;p&gt;In conclusion, Rust is a fast, safe, and reliable programming language that provides developers with the low-level control and performance they need, while also providing the memory safety and concurrency support they need to build robust and scalable systems. With its growing ecosystem, active community, and excellent performance, Rust is a language that is worth exploring for any programmer looking to build high-performance systems.&lt;/p&gt;

</description>
      <category>gratitude</category>
    </item>
    <item>
      <title>Dockerfile: ADD vs COPY</title>
      <dc:creator>Grigor Khachatryan</dc:creator>
      <pubDate>Thu, 17 Mar 2022 07:23:39 +0000</pubDate>
      <link>https://dev.to/grigorkh/dockerfile-add-vs-copy-2k0l</link>
      <guid>https://dev.to/grigorkh/dockerfile-add-vs-copy-2k0l</guid>
      <description>&lt;p&gt;&lt;code&gt;COPY&lt;/code&gt; and &lt;code&gt;ADD&lt;/code&gt; are both &lt;code&gt;Dockerfile&lt;/code&gt; instructions that serve similar purposes. They let you copy files from a specific location into a Docker image.&lt;/p&gt;

&lt;h2&gt;
  
  
  COPY
&lt;/h2&gt;

&lt;p&gt;The COPY instruction copies new files or directories from &lt;code&gt;&amp;lt;src&amp;gt;&lt;/code&gt; and adds them to the filesystem of the container at the path &lt;code&gt;&amp;lt;dest&amp;gt;&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;COPY has two forms:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY [--chown=&amp;lt;user&amp;gt;:&amp;lt;group&amp;gt;] &amp;lt;src&amp;gt;... &amp;lt;dest&amp;gt;
COPY [--chown=&amp;lt;user&amp;gt;:&amp;lt;group&amp;gt;] ["&amp;lt;src&amp;gt;",... "&amp;lt;dest&amp;gt;"] (this form is required for paths containing whitespace)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  ADD
&lt;/h2&gt;

&lt;p&gt;ADD has two forms:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ADD [--chown=&amp;lt;user&amp;gt;:&amp;lt;group&amp;gt;] &amp;lt;src&amp;gt;... &amp;lt;dest&amp;gt;
ADD [--chown=&amp;lt;user&amp;gt;:&amp;lt;group&amp;gt;] ["&amp;lt;src&amp;gt;",... "&amp;lt;dest&amp;gt;"] (this form is required for paths containing whitespace)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Dockerfile best practice for copying from a URL
&lt;/h2&gt;

&lt;p&gt;Docker suggests that it is often not efficient to copy from a URL using &lt;code&gt;ADD&lt;/code&gt;, and it is &lt;a href="https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#add-or-copy"&gt;best practice&lt;/a&gt; to use other strategies to include the required remote files.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;COPY&lt;/code&gt; only supports the basic copying of local files into the container, while ADD has some features (like local-only tar extraction and remote URL support) that are not immediately obvious. Consequently, the best use for &lt;code&gt;ADD&lt;/code&gt; is local tar file auto-extraction into the image, as in &lt;code&gt;ADD rootfs.tar.xz /.&lt;/code&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;— &lt;a href="https://docs.docker.com/"&gt;Dockerfile Best Practices&lt;/a&gt;&lt;br&gt;
Because image size matters, using ADD to fetch packages from remote URLs is strongly discouraged; you should use &lt;code&gt;curl&lt;/code&gt; or &lt;code&gt;wget&lt;/code&gt; instead. That way you can delete the files you no longer need after they’ve been extracted and you don’t have to add another layer in your image. For example, you should avoid doing things like:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ADD http://example.com/big.tar.xz /usr/src/things/
RUN tar -xJf /usr/src/things/big.tar.xz -C /usr/src/things
RUN make -C /usr/src/things all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And instead, do something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RUN mkdir -p /usr/src/things \
    &amp;amp;&amp;amp; curl -SL http://example.com/big.tar.xz \
    | tar -xJC /usr/src/things \
    &amp;amp;&amp;amp; make -C /usr/src/things all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For other items (files, directories) that do not require &lt;code&gt;ADD&lt;/code&gt;’s tar auto-extraction capability, you should always use &lt;code&gt;COPY&lt;/code&gt;.&lt;/p&gt;
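&lt;p&gt;A typical &lt;code&gt;Dockerfile&lt;/code&gt; therefore uses &lt;code&gt;COPY&lt;/code&gt; for everything except local tar auto-extraction (the base image and paths below are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:16
WORKDIR /app

# COPY for plain files and directories
COPY package.json package-lock.json ./
RUN npm ci
COPY . .

# ADD only when its tar auto-extraction is actually wanted, e.g.:
# ADD rootfs.tar.xz /
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;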

&lt;h2&gt;
  
  
  Like to learn?
&lt;/h2&gt;

&lt;p&gt;Follow me on &lt;a href="https://twitter.com/grigorkh"&gt;twitter&lt;/a&gt; where I post all about the latest and greatest AI, DevOps, VR/AR, Technology, and Science! Connect with me on &lt;a href="https://www.linkedin.com/in/grigorkh"&gt;LinkedIn&lt;/a&gt; too!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>tutorial</category>
      <category>100daysofcode</category>
    </item>
    <item>
      <title>Fix: tzdata hangs during Docker image build</title>
      <dc:creator>Grigor Khachatryan</dc:creator>
      <pubDate>Mon, 07 Mar 2022 16:24:39 +0000</pubDate>
      <link>https://dev.to/grigorkh/fix-tzdata-hangs-during-docker-image-build-4o9m</link>
      <guid>https://dev.to/grigorkh/fix-tzdata-hangs-during-docker-image-build-4o9m</guid>
<description>&lt;p&gt;During the installation of certain packages, Ubuntu usually installs the tzdata package; it is a common dependency of PHP and Python packages. The problem is that it hangs, waiting for user input to continue the installation. That is fine on a normal system, but not when we are using Docker and trying to build images (it hangs, or even throws errors on newer versions of Ubuntu). We will reproduce the situation and then fix it.&lt;/p&gt;

&lt;p&gt;To reproduce the hanging situation, we can use this Docker image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ubuntu:20.04
RUN apt update
RUN apt install -y tzdata
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here are the logs that we see in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Step 1/3 : FROM ubuntu:20.04
 ---&amp;gt; 1e4467b07108
Step 2/3 : RUN apt update
 ---&amp;gt; Using cache
 ---&amp;gt; 174ce3e1bb84
Step 3/3 : RUN apt install -y tzdata
...
Configuring tzdata
------------------

Please select the geographic area in which you live. Subsequent configuration
questions will narrow this down by presenting a list of cities, representing
the time zones in which they are located.

  1. Africa      4. Australia  7. Atlantic  10. Pacific  13. Etc
  2. America     5. Arctic     8. Europe    11. SystemV
  3. Antarctica  6. Asia       9. Indian    12. US
Geographic area: 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here it hangs, waiting for us to enter data; even after you enter a region, the process will not resume.&lt;/p&gt;

&lt;p&gt;To fix this, we need to add two lines to our Dockerfile. We will create a variable called &lt;code&gt;$TZ&lt;/code&gt; to hold our timezone, and then create the &lt;code&gt;/etc/timezone&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ubuntu:20.04

ENV TZ=Asia/Dubai
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime &amp;amp;&amp;amp; echo $TZ &amp;gt; /etc/timezone

RUN apt update
RUN apt install -y tzdata
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After building the image, we will see this output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Step 1/5 : FROM ubuntu:20.04
 ---&amp;gt; 1e4467b07108
Step 2/5 : ENV TZ=Asia/Dubai
 ---&amp;gt; Using cache
 ---&amp;gt; 7f4c85bd0d3e
Step 3/5 : RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime &amp;amp;&amp;amp; echo $TZ &amp;gt; /etc/timezone
 ---&amp;gt; Using cache
 ---&amp;gt; f6f784dfbad5
Step 4/5 : RUN apt update
 ---&amp;gt; Using cache
 ---&amp;gt; 5b1b5617eaa5
Step 5/5 : RUN apt install -y tzdata
 ---&amp;gt; Running in e71a917a9b6b
Current default time zone: 'Asia/Dubai'
Local time is now:      Tue Aug  4 12:14:55 +04 2020.
Universal Time is now:  Tue Aug  4 08:14:55 UTC 2020.
Run 'dpkg-reconfigure tzdata' if you wish to change it.

Removing intermediate container e71a917a9b6b
 ---&amp;gt; 3d29f4e8f7eb
Successfully built 3d29f4e8f7eb
Successfully tagged tzdata:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So it used the timezone that we provided, and nothing hangs.&lt;/p&gt;
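&lt;p&gt;As a side note, a widely used complementary technique (a sketch, not part of the fix above) is to tell apt to run fully non-interactively by setting &lt;code&gt;DEBIAN_FRONTEND=noninteractive&lt;/code&gt; on the install command, which suppresses all interactive prompts, not only tzdata’s:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RUN DEBIAN_FRONTEND=noninteractive apt install -y tzdata
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Scoping the variable to a single &lt;code&gt;RUN&lt;/code&gt; instruction (rather than using &lt;code&gt;ENV&lt;/code&gt;) keeps it from leaking into the final image.&lt;/p&gt;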

&lt;p&gt;Here is the list of timezones from which you can pick yours:&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/List_of_tz_database_time_zones"&gt;List of tz database time zones&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Like to learn?
&lt;/h2&gt;

&lt;p&gt;Follow me on &lt;a href="https://twitter.com/grigorkh"&gt;twitter&lt;/a&gt; where I post all about the latest and greatest AI, DevOps, VR/AR, Technology, and Science! Connect with me on &lt;a href="https://www.linkedin.com/in/grigorkh"&gt;LinkedIn&lt;/a&gt; too!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>tutorial</category>
      <category>linux</category>
    </item>
    <item>
      <title>Access services in k8s that are not exposed publicly</title>
      <dc:creator>Grigor Khachatryan</dc:creator>
      <pubDate>Sat, 05 Mar 2022 20:57:21 +0000</pubDate>
      <link>https://dev.to/grigorkh/access-services-in-k8s-that-are-not-exposed-publicly-3bii</link>
      <guid>https://dev.to/grigorkh/access-services-in-k8s-that-are-not-exposed-publicly-3bii</guid>
<description>&lt;p&gt;Services running in Kubernetes are not exposed publicly by default, so no one can access them from outside the cluster. To access such services, we have a few options that are secure and will not open security holes in our systems and services. One of them is an Ingress Controller or API Gateway, which most service meshes use; it essentially maps domains and subdomains to services.&lt;/p&gt;

&lt;p&gt;Another option is the Kubernetes CLI (&lt;strong&gt;kubectl&lt;/strong&gt;), with which you can bind any service or pod/container port to your localhost and access the private service locally.&lt;/p&gt;

&lt;p&gt;Let’s assume we have a service called &lt;strong&gt;customer-dashboard&lt;/strong&gt; and we want to access it using Kubernetes CLI (kubectl).&lt;/p&gt;

&lt;h2&gt;
  
  
  Connecting to Pod
&lt;/h2&gt;

&lt;p&gt;If we want to connect to a Pod/Container, we need to know the exact name of the Pod. To find it, we can run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pod | grep customer-dashboard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output will look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;customer-dashboard-7945c779f4-27c5k     1/1   Running   0   4d
customer-dashboard-7945c779f4-885h9     1/1   Running   0   4d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that you know the pod name, you can bind a specific pod port to your localhost using this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward pod/customer-dashboard-7945c779f4-27c5k 8080:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see this output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Forwarding from 127.0.0.1:8080 -&amp;gt; 80
Forwarding from [::1]:8080 -&amp;gt; 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first port is the one to which you want to bind the service on localhost (so you can access it using 127.0.0.1:8080 or localhost:8080); the second port is the pod/container port number.&lt;/p&gt;

&lt;p&gt;If you need to open a few ports, you can do the same for another port: open a new terminal tab and type &lt;strong&gt;kubectl port-forward pod/customer-dashboard-7945c779f4-27c5k 8081:81&lt;/strong&gt; to bind port 81 to port 8081 on your localhost as well.&lt;/p&gt;
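&lt;p&gt;Alternatively (assuming the same pod name), &lt;code&gt;kubectl port-forward&lt;/code&gt; accepts several port pairs in a single invocation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward pod/customer-dashboard-7945c779f4-27c5k 8080:80 8081:81
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;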

&lt;h2&gt;
  
  
  Connecting to Service
&lt;/h2&gt;

&lt;p&gt;Connecting to a Service (an internal K8s load balancer) is almost the same as connecting to a pod, but in this case you need the service name instead of the Pod name. To get the service name, type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get service | grep customer-dashboard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and you will get the service name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;customer-dashboard  ClusterIP 10.24.2.124  &amp;lt;none&amp;gt;   80/TCP,81/TCP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To bind it to your localhost type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward service/customer-dashboard 8080:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see this output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Forwarding from 127.0.0.1:8080 -&amp;gt; 80
Forwarding from [::1]:8080 -&amp;gt; 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Like to learn?
&lt;/h2&gt;

&lt;p&gt;Follow me on &lt;a href="https://twitter.com/grigorkh"&gt;twitter&lt;/a&gt; where I post all about the latest and greatest AI, DevOps, VR/AR, Technology, and Science! Connect with me on &lt;a href="https://www.linkedin.com/in/grigorkh"&gt;LinkedIn&lt;/a&gt; too!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to Install Docker on Ubuntu 21.10</title>
      <dc:creator>Grigor Khachatryan</dc:creator>
      <pubDate>Fri, 04 Mar 2022 14:16:33 +0000</pubDate>
      <link>https://dev.to/grigorkh/how-to-install-docker-on-ubuntu-2110-3aeo</link>
      <guid>https://dev.to/grigorkh/how-to-install-docker-on-ubuntu-2110-3aeo</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Docker is an application that makes it simple and easy to run application processes in a container, which are like virtual machines, only more portable, more resource-friendly, and more dependent on the host operating system.&lt;br&gt;
In this tutorial, you’ll learn how to install and use it on an existing installation of Ubuntu 21.10.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: Docker requires a 64-bit version of Ubuntu as well as a kernel version equal to or greater than 3.10. The default 64-bit Ubuntu 21.10 server meets these requirements.&lt;/p&gt;
&lt;/blockquote&gt;
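&lt;p&gt;If in doubt, you can check your machine’s kernel version and architecture like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;uname -r                  # kernel version, e.g. 5.13.0-30-generic
dpkg --print-architecture # e.g. amd64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;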
&lt;h2&gt;
  
  
  Installing Docker
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: All the commands in this tutorial should be run as a non-root user. If root access is required for the command, it will be preceded by sudo.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The Docker installation package available in the official Ubuntu 21.10 repository may not be the latest version. To get the latest and greatest version, install Docker from the official Docker repository. This section shows you how to do just that.&lt;/p&gt;

&lt;p&gt;First, add the GPG key for the official Docker repository to the system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the Docker repository to APT sources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo bash -c 'echo "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" &amp;gt; /etc/apt/sources.list.d/docker-ce.list'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
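&lt;p&gt;Note that &lt;code&gt;apt-key&lt;/code&gt; is deprecated on newer Ubuntu releases. A keyring-based alternative to the two steps above (a sketch following Docker’s currently documented approach) looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list &amp;gt; /dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;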



&lt;p&gt;Next, update the package database with the Docker packages from the newly added repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure you are about to install from the Docker repo instead of the default Ubuntu 21.10 repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-cache policy docker-ce
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-ce:
  Installed: (none)
  Candidate: 5:20.10.12~3-0~ubuntu-impish
  Version table:
     5:20.10.12~3-0~ubuntu-impish 500
        500 https://download.docker.com/linux/ubuntu impish/stable amd64 Packages
     5:20.10.11~3-0~ubuntu-impish 500
        500 https://download.docker.com/linux/ubuntu impish/stable amd64 Packages
     5:20.10.10~3-0~ubuntu-impish 500
        500 https://download.docker.com/linux/ubuntu impish/stable amd64 Packages
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Ubuntu 21.10. The docker-ce version number might be different.&lt;br&gt;
Finally, install Docker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install -y docker-ce
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should be similar to the following, showing that the service is active and running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-02-28 06:54:43 UTC; 8s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 2398 (dockerd)
      Tasks: 8
     Memory: 29.9M
        CPU: 375ms
     CGroup: /system.slice/docker.service
             └─2398 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Feb 28 06:54:43 ubuntu dockerd[2398]: time="2022-02-28T06:54:43.547798752Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 28 06:54:43 ubuntu dockerd[2398]: time="2022-02-28T06:54:43.547839103Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/conta&amp;gt;
Feb 28 06:54:43 ubuntu dockerd[2398]: time="2022-02-28T06:54:43.547857684Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 28 06:54:43 ubuntu dockerd[2398]: time="2022-02-28T06:54:43.601928785Z" level=info msg="Loading containers: start."
Feb 28 06:54:43 ubuntu dockerd[2398]: time="2022-02-28T06:54:43.803682067Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. D&amp;gt;
Feb 28 06:54:43 ubuntu dockerd[2398]: time="2022-02-28T06:54:43.912000602Z" level=info msg="Loading containers: done."
Feb 28 06:54:43 ubuntu dockerd[2398]: time="2022-02-28T06:54:43.935127854Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12
Feb 28 06:54:43 ubuntu dockerd[2398]: time="2022-02-28T06:54:43.935361734Z" level=info msg="Daemon has completed initialization"
Feb 28 06:54:43 ubuntu systemd[1]: Started Docker Application Container Engine.
Feb 28 06:54:43 ubuntu dockerd[2398]: time="2022-02-28T06:54:43.978811872Z" level=info msg="API listen on /run/docker.sock"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. Have fun!&lt;/p&gt;
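&lt;p&gt;To verify that the client and daemon work end to end, you can run the standard test image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker run hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If everything is set up correctly, Docker pulls the image and prints a short confirmation message.&lt;/p&gt;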

&lt;h2&gt;
  
  
  Like to learn?
&lt;/h2&gt;

&lt;p&gt;Follow me on &lt;a href="https://twitter.com/grigorkh"&gt;twitter&lt;/a&gt; where I post all about the latest and greatest AI, DevOps, VR/AR, Technology, and Science! Connect with me on &lt;a href="https://www.linkedin.com/in/grigorkh"&gt;LinkedIn&lt;/a&gt; too!&lt;/p&gt;

</description>
      <category>ubuntu</category>
      <category>docker</category>
      <category>devops</category>
      <category>linux</category>
    </item>
    <item>
      <title>Resilience Engineering – Don't Be Afraid to Show Your Vulnerable Side!</title>
      <dc:creator>Grigor Khachatryan</dc:creator>
      <pubDate>Wed, 10 Feb 2021 13:02:54 +0000</pubDate>
      <link>https://dev.to/grigorkh/resilience-engineering-don-t-be-afraid-to-show-your-vulnerable-side-58mf</link>
      <guid>https://dev.to/grigorkh/resilience-engineering-don-t-be-afraid-to-show-your-vulnerable-side-58mf</guid>
<description>&lt;p&gt;Every software developer’s primary goal is to come up with a practical, intuitive, and robust product: a platform or service lots of people can use without any major issue. The problem is that what happens out there with real users is a lot more, well, chaotic than in the controlled environment developers initially work in. That’s why more and more devs have been using specialized techniques to test out their handiwork and ensure optimal reliability. &lt;/p&gt;

&lt;p&gt;Resilience engineering is a practice within Site Reliability Engineering (SRE), closely related to Chaos Engineering. If you’re having trouble wrapping your head around all these terms, don’t worry, we’ll cover each aspect separately and then show you the real magic behind properly applied resilience engineering. &lt;/p&gt;

&lt;p&gt;First, here’s a short history lesson. &lt;/p&gt;

&lt;h2&gt;
  
  
  Where Did SRE Come from?
&lt;/h2&gt;

&lt;p&gt;SRE dates back to almost two full decades ago when a ragtag group of Google devs tried to find a way to improve the reliability of the company’s sites and keep them working smoothly as they grew. Needless to say, these guys were so effective that &lt;a href="https://sre.google/books/"&gt;their techniques and strategies&lt;/a&gt; were turned into an IT subset of its own. &lt;/p&gt;

&lt;p&gt;It’s an important part of modern DevOps and helps bridge the gap between the initial framework created by developers and the highly practical concerns of real-life system administration. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Advent of Chaos Engineering
&lt;/h2&gt;

&lt;p&gt;These days, it’s not easy to see an issue coming a mile away and address it in advance to keep a company’s cloud-based platform up and running. And with even just 10-20 minutes of downtime, large corporations stand to lose a lot of potential business, as well as their brand equity. Enter the creatively destructive art of Chaos Engineering. &lt;/p&gt;

&lt;p&gt;Think of it as handing the keys to your finely-tuned sedan to a rally driver to run it through its paces on the track and see what breaks first when the system is pushed to its limits. &lt;br&gt;
If you do this on your first batch of sedans, you can go back to the shop and tweak out all the little glitches and potential weak spots, ensuring that the cars run like greased lightning for miles without breaking down when you actually start driving people in them. &lt;/p&gt;

&lt;p&gt;The first example of this approach was Netflix’s aptly named &lt;a href="https://www.gremlin.com/chaos-monkey/"&gt;Chaos Monkey&lt;/a&gt;, back in 2010, followed by the Simian Army a year later. It was simple, but it got the job done: it simulated a server failure by shutting down instances at random.  &lt;/p&gt;

&lt;h2&gt;
  
  
  The Practice of Resilience Engineering
&lt;/h2&gt;

&lt;p&gt;Resilience Engineering is all about building systems that can adapt and automatically take the best course of action when common issues occur. Any inadequacies found through testing are ironed out before the system can become truly resilient. &lt;/p&gt;

&lt;h3&gt;
  
  
  The Flow of a Basic Chaos Experiment
&lt;/h3&gt;

&lt;p&gt;There are several &lt;a href="https://www.infoq.com/articles/chaos-engineering-security-networking/"&gt;basic steps to testing&lt;/a&gt; out the system’s vulnerabilities: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define a baseline measurement for the system when things are running smoothly.&lt;/li&gt;
&lt;li&gt;Come up with an idea of a potential failure.&lt;/li&gt;
&lt;li&gt;Test for said failure on a small enough scale so as not to disrupt the whole system but still get measurable data you can act on.&lt;/li&gt;
&lt;li&gt;Proceed to compare any issues that have popped up with the baseline performance.&lt;/li&gt;
&lt;li&gt;Scale up your tests if no issues were found initially.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What happens with the system is often not the same as what the developers hypothesized would happen, making it an excellent learning opportunity. &lt;/p&gt;

&lt;p&gt;The most common issues distributed systems are tested for are server failures – either a single server simply not responding, the network working periodically and crashing, or an entire host of servers going out. &lt;/p&gt;

&lt;h3&gt;
  
  
  The Ideal System Response to Shoot For
&lt;/h3&gt;

&lt;p&gt;A resilient and well-put-together system will have a quick answer for the most common issues outlined above. For instance, if the cloud provider no longer permits access to a CPU, for whatever reason, the system should respond by connecting to the next best thing. &lt;/p&gt;

&lt;p&gt;Also, if the entire network of servers in a particular time zone goes out, the system should look for servers in another region. If the number of users hits a peak in a very short time, the system should naturally scale up and start using more servers to compensate.  &lt;/p&gt;

&lt;h3&gt;
  
  
  New Technologies Allow for Scalability and Automation, Cutting Down on Human Interventions
&lt;/h3&gt;

&lt;p&gt;With the &lt;a href="https://www.bmc.com/blogs/what-is-kubernetes/"&gt;advent of Kubernetes&lt;/a&gt;, continuous delivery has been made much easier. The system’s response can be automated – the necessity for human intervention goes down dramatically, and the whole system experiences less downtime as a result. The ability to quickly scale up with sudden bursts of user traffic is incredibly important for cloud-based services. &lt;br&gt;
Imagine giants like Netflix or Blizzard being unable to accommodate all the new users logging on, especially now that everyone is online throughout the day. Any amount of downtime would lead to hordes of unsatisfied customers ready to move on to other services that understand the power of continuous delivery. &lt;/p&gt;

&lt;h2&gt;
  
  
  Resilient Systems are Reliable and Competitive Systems
&lt;/h2&gt;

&lt;p&gt;While we may have had a little bit of fun flexing our bad pun muscles in the title, the fact is that some companies really are afraid to look for vulnerabilities and address them early on. It’s tempting to start believing the myth of the flawless cloud environment, where all the servers work all the time, there’s no latency, and the number of users using your service barely fluctuates.&lt;/p&gt;

&lt;p&gt;However, the reality of it is that if something can go wrong, it eventually will, and chances are you’re not going to be ready for it. Well, if you play it smart and invest in resilience engineering, you’ll save yourself a whole lot of headache down the line.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>chaosengineering</category>
      <category>sre</category>
    </item>
    <item>
      <title>What is Cloud-Native Computing and How CNCF Contributes to Industry</title>
      <dc:creator>Grigor Khachatryan</dc:creator>
      <pubDate>Wed, 09 Dec 2020 16:41:21 +0000</pubDate>
      <link>https://dev.to/grigorkh/what-is-cloud-native-computing-and-how-cncf-contributes-to-industry-10bi</link>
      <guid>https://dev.to/grigorkh/what-is-cloud-native-computing-and-how-cncf-contributes-to-industry-10bi</guid>
      <description>&lt;p&gt;&lt;a href="https://thenewstack.io/what-is-cloud-native-and-why-does-it-matter/"&gt;Cloud computing&lt;/a&gt; has become the leading method for scaling up workloads and growing businesses at a steady rate. It allows companies to build and run scalable applications in dynamic environments known as clouds. &lt;br&gt;
Cloud technologies allow integration of multiple systems, offering a new platform designed to enable easy management and detailed reporting. With an emphasis on automation, these services allow engineers to make huge changes quickly and effectively. The &lt;a href="https://en.wikipedia.org/wiki/Cloud_native_computing"&gt;Cloud-Native Computing&lt;/a&gt; Foundation, or CNCF, is making a push to create an open-source ecosystem that gives all users access to new technologies and helps them improve their platforms.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Importance of Cloud Computing
&lt;/h2&gt;

&lt;p&gt;Scaling up and growing a business requires expanding servers and other technologies designed to handle large amounts of data. However, with ultra-high internet speeds and massive amounts of data generated by websites, extracting the right data can take a long time and a lot of money.&lt;br&gt;
That’s why more and more businesses choose to migrate their websites, data analytics, and other business details onto cloud services. These services are designed to allow fast data analytics and results with automation features designed to speed things up.&lt;br&gt;
The process of migrating data to cloud services provides all kinds of benefits that can help you make better business decisions in the future. Here’s a quick overview of the things you’ll get by using cloud technologies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access to new markets, customer base, and previously untargeted areas;&lt;/li&gt;
&lt;li&gt;The ability to deliver more value to your customers by understanding their needs better;&lt;/li&gt;
&lt;li&gt;The means to directly influence your customer’s behavior;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, even though cloud services improve resilience and allow better scalability, you need the right management to enjoy the benefits listed above. You have to start small and scale the application up as the need presents itself. Companies such as Twitter and Google used this method to scale their operations globally and find the right management solutions to keep everything going.&lt;/p&gt;

&lt;p&gt;The thing is - cloud technologies are only as useful and beneficial as the managers behind them. If your company doesn’t have the right technologies, training methods, and management needs, you won’t be able to get the most out of this new technology. You must make sure that all internal systems work together, and only then will you be able to scale up your cloud services.&lt;/p&gt;

&lt;h2&gt;
  
  
  CNCF Making The Transition Much Smoother
&lt;/h2&gt;

&lt;p&gt;As mentioned, making the switch from traditional systems to cloud services can be a difficult, time-consuming task that depends on the volume of the data. You have to tune your entire system with the new, modern distributed system environments that can help you scale things up almost indefinitely. &lt;br&gt;
That’s where &lt;a href="https://thenewstack.io/cncf-survey-snapshot-tech-adoption-in-the-cloud-native-world/"&gt;CNCF can help you a lot&lt;/a&gt;. Its primary goal is to allow merging multiple projects in one native cloud space, such as orchestration services, containers, microservices, etc. These services are then moved to one cloud, allowing seamless integration with current IT solutions already in use. As a result, the migration process becomes much faster and easier to manage.&lt;br&gt;
CNCF offers a set of comprehensive solutions that include features such as container runtime, container orchestration &amp;amp; networking, service development, and management. In other words, it allows full integration with tools such as Kubernetes, CNI, OpenTracing, Prometheus, gRPC, and many others. CNCF can help you to get in tune with the new technologies faster, speeding up the transition from classic systems to cloud services.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Benefits of Using CNCF
&lt;/h2&gt;

&lt;p&gt;Most major CSPs are already a part of CNCF. They are working together to create new standards for the latest cloud-native technologies and implement them into their CSP platforms. They are also trying to make the integration process as simple as possible, allowing other businesses and enterprises to make the switch to the cloud standard without too much hassle. In other words, &lt;a href="https://www.prnewswire.com/news-releases/cloud-native-computing-foundation-continues-efforts-to-drive-cloud-native-adoption-with-application-focused-new-members-300419183.html"&gt;CNCF makes direct upgrades&lt;/a&gt; and migration to cloud services much more comfortable than they ever were.&lt;br&gt;
Since all major cloud service providers are already a part of the CNCF open standards, companies can use these new technologies to speed up the cloud-merging process and get constant updates with new cloud technologies in the future. &lt;br&gt;
Some cloud-native technologies, including containers, service meshes, orchestration, etc., are already available as services provided by various clouds. With a pay-as-you-go model, you can scale your business whenever the need arises. &lt;br&gt;
The entire process allows enterprises to try out new ideas and methods to improve growth by developing new applications all within the cloud itself. You won’t have to invest in new ideas and setups since it’s an open-source type of deal. If your ideas turn out to be useful, the cloud will validate them, allowing you to scale to new applications without changing your existing systems. That is only possible because the entire application is built within the cloud. &lt;br&gt;
By coupling CNCF and CSPs, the real potential of cloud computing technologies will become apparent. It will allow large enterprises to improve sales, drive growth, and create more stable systems within this &lt;a href="https://www.infoworld.com/article/3281046/what-is-cloud-native-the-modern-way-to-develop-software.html"&gt;new dynamic environment&lt;/a&gt; called cloud computing. &lt;/p&gt;

&lt;h2&gt;
  
  
  Changing The Way We Create Software Solutions Forever
&lt;/h2&gt;

&lt;p&gt;Like it or not, the need for cloud services has been growing steadily over the past few years. With new technologies reaching the market, and vast amounts of data needed to scale any business and promote growth in the online environment, CNCF will make things much faster and more manageable.&lt;br&gt;
By offering an open-source software stack where companies can work together to improve existing systems and develop new technologies that promote efficiency, enterprises will increase their reach and income without spending massive amounts of money on potential software solutions. &lt;br&gt;
The bottom line is that CNCF allows the industry’s top developers, users, and vendors to put their knowledge together and create new technologies that make growth and scaling easier than ever before. &lt;/p&gt;

</description>
      <category>cncf</category>
      <category>cloudnative</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
