<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ramkumar Jayakumar</title>
    <description>The latest articles on DEV Community by Ramkumar Jayakumar (@rkj180220).</description>
    <link>https://dev.to/rkj180220</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F583870%2F5a6365bf-7358-445f-9f10-6bada7a164c5.jpg</url>
      <title>DEV Community: Ramkumar Jayakumar</title>
      <link>https://dev.to/rkj180220</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rkj180220"/>
    <language>en</language>
    <item>
      <title>Getting the Actual Client IP When Using Application Load Balancer (ALB) in AWS Lambda</title>
      <dc:creator>Ramkumar Jayakumar</dc:creator>
      <pubDate>Sat, 20 Jul 2024 12:37:40 +0000</pubDate>
      <link>https://dev.to/rkj180220/getting-the-actual-client-ip-when-using-application-load-balancer-alb-in-aws-lambda-4fcn</link>
      <guid>https://dev.to/rkj180220/getting-the-actual-client-ip-when-using-application-load-balancer-alb-in-aws-lambda-4fcn</guid>
      <description>&lt;p&gt;When I was new to AWS, I faced an interesting challenge while working on a task to digitally sign a document, which required the client's IP as part of the e-signature. Initially, I was thrilled when the implementation seemed to work perfectly the first time. However, my excitement was short-lived. During testing, I noticed that the same IP address was being returned, even when I accessed the application from different machines. It was then that I realized the IP address I was receiving was not the actual client IP but the IP of the load balancer.&lt;/p&gt;

&lt;p&gt;This discovery led me down a path of investigation and learning. I had to dig deeper to understand what was happening and how to retrieve the real client IP. In this blog, I will share my experience and provide a comprehensive guide on how to achieve this using AWS Lambda and Python, ensuring you can accurately capture the client’s IP address when using an Application Load Balancer (ALB).&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding the Challenge
&lt;/h3&gt;

&lt;p&gt;When a client makes a request to your application through an ALB, the load balancer acts as an intermediary. Consequently, the IP address your application sees is that of the ALB, not the client's. To address this, the ALB appends the client's IP to the &lt;code&gt;X-Forwarded-For&lt;/code&gt; HTTP header. This header can contain multiple IP addresses if the request has passed through multiple proxies.&lt;/p&gt;

&lt;p&gt;Here's what we need to handle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Extract the Client IP&lt;/strong&gt;: Retrieve and parse the &lt;code&gt;X-Forwarded-For&lt;/code&gt;  header.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Handle Multiple IPs&lt;/strong&gt;: Ensure that we get the correct client IP even when multiple proxies are involved.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Security Consideration
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;X-Forwarded-For&lt;/code&gt; header should be used with caution due to potential security risks: clients can send arbitrary values in it, so an entry is trustworthy only if it was appended by a system you control and trust, such as your own load balancer or proxy. Otherwise, the reported client IP may be spoofed.&lt;/p&gt;
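&lt;p&gt;Since clients can prepend arbitrary values to this header, a safer pattern than blindly taking the leftmost entry is to walk from the right-hand end, skipping the entries appended by hops you trust. A minimal Python sketch; the trusted-hop count depends entirely on your own network layout and is an assumption here:&lt;/p&gt;

```python
def client_ip_from_xff(xff_header, trusted_hops=1):
    """Pick the closest IP that was appended by a trusted hop.

    Entries are appended left-to-right as the request passes each proxy,
    so the rightmost `trusted_hops` entries came from infrastructure we
    control; the entry just before them is the most reliable client IP.
    """
    ips = [ip.strip() for ip in xff_header.split(",") if ip.strip()]
    if not ips:
        return None
    if len(ips) > trusted_hops:
        return ips[-(trusted_hops + 1)]
    return ips[0]  # fewer entries than expected: fall back to the leftmost
```

&lt;p&gt;With one trusted hop in front of the application, the second-to-last entry wins; with no extra proxies, the list collapses to the single client entry.&lt;/p&gt;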

&lt;h3&gt;
  
  
  Choosing the Right Tools
&lt;/h3&gt;

&lt;h4&gt;
  
  
  AWS Lambda and Python
&lt;/h4&gt;

&lt;p&gt;AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. Python, with its simplicity and readability, is an excellent choice for handling this task within a Lambda function.&lt;/p&gt;

&lt;h4&gt;
  
  
  Key Components
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Lambda Function&lt;/strong&gt;: The core function that processes incoming requests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Application Load Balancer (ALB)&lt;/strong&gt;: The load balancer that forwards requests to the Lambda function.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Implementation Details
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Setting Up AWS Lambda with ALB
&lt;/h4&gt;

&lt;p&gt;First, ensure your Lambda function is set up and integrated with an ALB. Follow AWS's official guide if needed: &lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/lambda-functions.html" rel="noopener noreferrer"&gt;Using Lambda functions as targets for Application Load Balancer.&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Lambda Function Code
&lt;/h4&gt;

&lt;p&gt;Let's dive into the Python code for the Lambda function. This function will extract the client's IP address from the X-Forwarded-For header.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json

def lambda_handler(event, context):
    # Extract the 'X-Forwarded-For' header
    x_forwarded_for = event['headers'].get('x-forwarded-for')

    if x_forwarded_for:
        # The first IP in the list is the client's IP; strip surrounding whitespace
        client_ip = x_forwarded_for.split(',')[0].strip()
    else:
        # Fallback if the header is not present; use safe lookups to avoid KeyError
        client_ip = event.get('requestContext', {}).get('identity', {}).get('sourceIp', 'unknown')

    # Log the client IP
    print(f"Client IP: {client_ip}")

    # Respond with the client IP
    return {
        'statusCode': 200,
        'body': json.dumps({'client_ip': client_ip})
    }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Explanation
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Extract the Header: Retrieve the X-Forwarded-For header from the incoming request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Parse the Header: Take the first IP, which represents the client's original IP.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fallback Mechanism: Use the source IP from the request context if the header is not present.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Logging and Response: Log and return the client's IP for verification.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Example Request and Response
&lt;/h4&gt;

&lt;p&gt;Request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "headers": {
        "x-forwarded-for": "203.0.113.195, 70.41.3.18, 150.172.238.178"
    },
    "requestContext": {
        "identity": {
            "sourceIp": "70.41.3.18"
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "client_ip": "203.0.113.195"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Identifying the actual client IP in an AWS Lambda function behind an ALB requires careful handling of the &lt;code&gt;X-Forwarded-For&lt;/code&gt; header. This approach ensures accurate IP logging and enhances the application's ability to personalize and secure user interactions.&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html" rel="noopener noreferrer"&gt;AWS ALB Documentation&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/python-handler.html" rel="noopener noreferrer"&gt;Python in AWS Lambda&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For" rel="noopener noreferrer"&gt;HTTP Headers Explained&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>python</category>
      <category>serverless</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Coding Efficiency: Designing a High-Performance Data Import System with Excel Integration</title>
      <dc:creator>Ramkumar Jayakumar</dc:creator>
      <pubDate>Wed, 20 Mar 2024 10:20:24 +0000</pubDate>
      <link>https://dev.to/rkj180220/coding-efficiency-designing-a-high-performance-data-import-system-with-excel-integration-3ng5</link>
      <guid>https://dev.to/rkj180220/coding-efficiency-designing-a-high-performance-data-import-system-with-excel-integration-3ng5</guid>
      <description>&lt;h2&gt;
  
  
  Introduction: Bridging the Data Gap - Efficiently Importing Data from Other Systems
&lt;/h2&gt;

&lt;p&gt;In today's digital world, our applications often need to work together. Imagine you have a customer relationship management (CRM) system that stores all your customer information, but you also have a separate system for tracking sales and orders. To get a complete picture of your business, you might need to import data (like customer details) from one system into the other. This process of bringing data into a system from an external source is called a data import.&lt;/p&gt;

&lt;p&gt;Data imports can be crucial for making informed decisions. For example, by combining your customer information with your sales data, you can analyze trends and identify your most valuable customers. But how do you actually move this data between systems?&lt;/p&gt;

&lt;p&gt;There are several ways to approach data migration, each with its own advantages and limitations.  This blog post will explore how you can design a data import system using a familiar tool: Microsoft Excel.&lt;/p&gt;

&lt;p&gt;We'll delve into strategies for handling large datasets efficiently, ensuring data accuracy, and creating a user-friendly experience for the import process. By leveraging the right techniques, you can unlock the power of Excel import and bridge the data gap between your systems seamlessly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Navigating the Maze of Data Migration
&lt;/h3&gt;

&lt;p&gt;Moving data between different systems can feel like navigating a maze. There are several approaches you can take, each with its own pros and cons. Let's explore some common methods for data import:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Manual Creation&lt;/strong&gt;: Imagine having to type in all your customer information one by one from a spreadsheet into your CRM system. This is manual data creation, and while it works for very small datasets, it's time-consuming, error-prone, and not scalable for large amounts of data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Custom Scripts&lt;/strong&gt;: For more complex data transfers, you might consider writing custom scripts or programs. This approach offers a lot of flexibility, but scripts can be inefficient at scale and time-consuming to develop and maintain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API Integration&lt;/strong&gt;: Many applications offer APIs (Application Programming Interfaces) that allow them to talk to each other.  Imagine an API as a special translator between your CRM system and your sales data system. By using the API, you can build a connection that lets you import data automatically.  However, using APIs often requires development effort for both sides and may not be available for all systems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Excel Import&lt;/strong&gt;: Simple and ubiquitous, but raises concerns about scalability and data integrity, especially for high volumes.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Excel Import: Reimagined for Efficiency
&lt;/h2&gt;

&lt;p&gt;While Excel import is a familiar and accessible solution for bringing data into your system, it can face challenges when dealing with large datasets or complex data validation needs.  Imagine you have a massive customer list with thousands of entries in an Excel spreadsheet.  Traditionally, importing this data might take a long time or lead to errors if the data isn't formatted correctly.&lt;/p&gt;

&lt;p&gt;Here's where we can reimagine Excel import for efficiency:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High-Volume Processing&lt;/strong&gt;:  By leveraging specialized techniques, we can optimize the import process to handle large datasets without sacrificing performance or system stability.  This might involve splitting the data into smaller chunks for processing or using efficient algorithms for data validation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Robust Validation and Error Handling&lt;/strong&gt;: Data accuracy is crucial.  Our system can perform thorough checks on your Excel data to ensure it meets the requirements of your target system.  For example, it can verify data types (like numbers or dates) and identify any inconsistencies or missing information.  This helps catch errors early on and prevents inaccurate data from entering your system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Streamlined Workflow&lt;/strong&gt;: The data import process shouldn't be a hassle.  We can design a user-friendly interface that guides you through each step, from configuring the import to uploading your Excel file.  This makes it easy for anyone, even beginners, to import data efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
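&lt;p&gt;The chunked processing idea above is simple to sketch: split the rows into fixed-size batches so each database round-trip handles many records at once. The chunk size of 500 below is illustrative, not a measured recommendation:&lt;/p&gt;

```python
def chunk_rows(rows, chunk_size=500):
    """Yield successive fixed-size slices of `rows` so each bulk
    insert touches the database once per chunk, not once per row."""
    for start in range(0, len(rows), chunk_size):
        yield rows[start:start + chunk_size]
```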

&lt;p&gt;By incorporating these features, we can transform Excel import from a simple data transfer tool into a robust and efficient system.  Imagine being able to import your large customer list with confidence, knowing the system will handle the process smoothly and ensure the accuracy of your data.&lt;/p&gt;

&lt;h2&gt;
  
  
  System Design
&lt;/h2&gt;

&lt;p&gt;Let's delve into the design section for an efficient data import system using Excel, addressing key concerns such as:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Dynamic Field Mapping: Empowering User Flexibility
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flo5kgqulvb93jmvf04c6.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flo5kgqulvb93jmvf04c6.jpeg" alt="Field Mapping"&gt;&lt;/a&gt;&lt;br&gt;
One of the key challenges in data import systems is accommodating the diverse structure of data from different sources. This is where dynamic field mapping plays a pivotal role. Here's a closer look at how it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;User-Driven Configuration&lt;/strong&gt;: Instead of relying on fixed mappings, users are empowered to configure the import process based on their specific needs. Upon initiating a data import task, the system prompts users to define mappings between the columns in the Excel file and corresponding fields in the destination system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Flexible Mapping Interface&lt;/strong&gt;: To ensure user-friendly interaction, the system provides an intuitive interface for mapping configuration. Users are presented with a list of available fields in the destination system, allowing them to easily match them with the columns in the Excel file. Additionally, the interface supports dynamic suggestions and auto-completion to streamline the mapping process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Customizable Mapping Profiles&lt;/strong&gt;: Recognizing that users may need to import data from multiple sources with varying structures, the system supports the creation of customizable mapping profiles. Users can define and save multiple mapping configurations, each tailored to a specific source or import scenario. This flexibility allows for efficient handling of diverse data formats without the need for repetitive mapping tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Validation and Error Handling&lt;/strong&gt;: To ensure data accuracy and integrity, the system performs validation checks during the mapping configuration phase. This includes verifying the consistency of data types, detecting potential mismatches between source and destination fields, and flagging any inconsistencies or conflicts for user review. Comprehensive error handling mechanisms are in place to guide users in resolving mapping issues effectively.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To ensure seamless configuration and mapping between Excel data and the database, a structured approach leveraging MySQL is adopted. Here's how the database schema facilitates the process:&lt;/p&gt;

&lt;p&gt;1. &lt;strong&gt;Import Configuration Table&lt;/strong&gt;: This table stores various import configurations available in the system, enabling users to manage different import scenarios effectively.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE `import_configuration` (
  `import_configuration_id` int UNSIGNED NOT NULL,
  `configuration_name` varchar(55) NOT NULL,
  `created_at` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `is_deleted` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`import_configuration_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;2. &lt;strong&gt;Configuration Mapping Table&lt;/strong&gt;: This table establishes mappings between Excel columns and database fields for each import configuration. It defines column titles, mandatory fields, sort order, and default values, facilitating accurate data mapping.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE &lt;code&gt;import_mapping_columns&lt;/code&gt; (&lt;br&gt;
  &lt;code&gt;import_mapping_column_id&lt;/code&gt; int UNSIGNED NOT NULL,&lt;br&gt;
  &lt;code&gt;import_configuration_id&lt;/code&gt; int UNSIGNED NOT NULL,&lt;br&gt;
  &lt;code&gt;import_column_name&lt;/code&gt; varchar(55) DEFAULT NULL,&lt;br&gt;
  &lt;code&gt;import_column_title&lt;/code&gt; varchar(100) NOT NULL,&lt;br&gt;
  &lt;code&gt;is_mandatory&lt;/code&gt; enum('0','1') NOT NULL,&lt;br&gt;
  &lt;code&gt;sort_order&lt;/code&gt; int NOT NULL,&lt;br&gt;
  &lt;code&gt;default_value&lt;/code&gt; json DEFAULT NULL,&lt;br&gt;
  &lt;code&gt;is_deleted&lt;/code&gt; tinyint(1) NOT NULL DEFAULT '0',&lt;br&gt;
  PRIMARY KEY (&lt;code&gt;import_mapping_column_id&lt;/code&gt;),&lt;br&gt;
  FOREIGN KEY (&lt;code&gt;import_configuration_id&lt;/code&gt;) REFERENCES &lt;code&gt;import_configuration&lt;/code&gt;(&lt;code&gt;import_configuration_id&lt;/code&gt;)&lt;br&gt;
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
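&lt;p&gt;At import time, a saved mapping profile built from these tables boils down to a dictionary from Excel column titles to destination fields. A hypothetical Python sketch of applying one mapping to one row (the field names are illustrative, not taken from the schema above):&lt;/p&gt;

```python
def apply_mapping(excel_row, mapping, mandatory_fields=()):
    """Translate one Excel row (a dict keyed by column title) into a
    destination record, flagging any missing mandatory fields."""
    record, errors = {}, []
    for column_title, dest_field in mapping.items():
        record[dest_field] = excel_row.get(column_title)
    for field in mandatory_fields:
        if record.get(field) in (None, ""):
            errors.append(f"missing mandatory field: {field}")
    return record, errors
```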
&lt;h3&gt;
  
  
  2. File Upload Phase: Streamlining Data Entry
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxu2eexsoxrypda0gck2o.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxu2eexsoxrypda0gck2o.jpeg" alt="File Upload Phase"&gt;&lt;/a&gt;&lt;br&gt;
The file upload phase initiates the import process, ensuring seamless integration of Excel data into the system. Here's a concise overview of this crucial phase:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configuration Preparation&lt;/strong&gt;: Users begin by configuring the import process, selecting the appropriate import configuration from the predefined options. This step ensures that the system knows how to interpret the incoming Excel data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Excel Upload&lt;/strong&gt;: Once configured, users upload the Excel file containing the data to be imported. The system validates the file format and size to ensure compatibility and prevent data loss.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Storage in S3&lt;/strong&gt;: The uploaded file is securely stored in an Amazon S3 bucket, providing scalability, reliability, and easy access for subsequent processing steps.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Background Task Creation&lt;/strong&gt;: Concurrently, the system creates a background task or job to manage the import process efficiently. This task orchestrates the various steps involved in reading, validating, and importing the Excel data into the system.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
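&lt;p&gt;The format and size checks performed before the file reaches S3 can be sketched as a small pre-flight function. The accepted extensions and the 10 MB limit are assumptions for illustration, not values from this design:&lt;/p&gt;

```python
ALLOWED_EXTENSIONS = (".xlsx", ".xls")   # assumed accepted formats
MAX_SIZE_BYTES = 10 * 1024 * 1024        # assumed 10 MB limit

def validate_upload(filename, size_bytes):
    """Reject incompatible files before storing them in S3 or
    queueing a background import task."""
    if not filename.lower().endswith(ALLOWED_EXTENSIONS):
        return False, "unsupported file format"
    if size_bytes > MAX_SIZE_BYTES:
        return False, "file exceeds size limit"
    return True, "ok"
```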

&lt;h3&gt;
  
  
  3. Excel Reading and Temporary Record Creation: Ensuring Data Integrity
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwz7yaa630639eqrnj8a.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwz7yaa630639eqrnj8a.jpeg" alt="Excel Reading and Temporary Record Creation: Ensuring Data Integrity"&gt;&lt;/a&gt;&lt;br&gt;
In the Excel Reading and Temporary Record Creation phase, the system processes the uploaded Excel file, validates its contents, and creates temporary records for further validation. Here's a detailed breakdown of this crucial step:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CRON Picks up the Background Task&lt;/strong&gt;: The scheduled task responsible for managing the import process retrieves the Excel file from the designated Amazon S3 bucket. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Updating Task Status&lt;/strong&gt;: To prevent duplicate processing, the system updates the status of the background task to "in progress." This prevents multiple instances from picking up the same task simultaneously, maintaining data integrity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Heading Validation&lt;/strong&gt;: The system verifies the headers of the Excel file to ensure they match the expected format defined in the import configuration. Any discrepancies are flagged for user attention, preventing data misinterpretation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Excel Reading&lt;/strong&gt;: Using specialized libraries or tools, the system reads the data from the Excel file, extracting each row as a separate dataset for further processing. This step ensures accurate data extraction while handling large volumes efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Row Validation&lt;/strong&gt;: Each row of data undergoes validation against predefined rules and constraints specified in the import configuration. Mandatory fields, data types, and format checks are performed to identify any inconsistencies or errors. Hash maps or MD5 hashes of row contents can be used to detect duplicate entries within the Excel file itself.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Splitting into Chunks&lt;/strong&gt;: To optimize performance and manage memory usage, the dataset is split into manageable chunks for bulk insertion into the database. This ensures efficient processing, especially with large datasets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bulk Insertion&lt;/strong&gt;: Validated data is bulk-inserted into temporary tables within the database, segregating valid records from exceptions. This step lays the groundwork for further validation and refinement of the imported data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Within System Validations&lt;/strong&gt;: Concurrently, the system performs additional validations within the database environment, such as duplicate checks and cross-referencing against existing records. This ensures data consistency and integrity at every stage of the import process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Temporary Records Creation&lt;/strong&gt;: Validated records are stored as temporary entries within the database, providing a snapshot of the imported data for subsequent review and refinement. Exception records are also retained for further investigation and correction.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By meticulously executing the Excel Reading and Temporary Record Creation phase, the system establishes a solid foundation for data integrity, enabling accurate and reliable integration of external data into the system.&lt;/p&gt;
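&lt;p&gt;The in-file duplicate check mentioned above (hashing each row so lookups stay constant-time) can be sketched in Python; the PHP snippets below are from the import implementation itself, while this sketch only illustrates the hashing idea:&lt;/p&gt;

```python
import hashlib

def find_duplicate_rows(rows):
    """Return indices of rows whose contents repeat an earlier row.
    Hashing each row keeps the check O(1) per row, O(n) overall."""
    seen, duplicates = set(), []
    for index, row in enumerate(rows):
        digest = hashlib.md5("|".join(str(v) for v in row).encode()).hexdigest()
        if digest in seen:
            duplicates.append(index)
        else:
            seen.add(digest)
    return duplicates
```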


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    public function map($row): array&lt;br&gt;
    {&lt;br&gt;
        $this-&amp;gt;trimValues($row);
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    $row['failed_columns'] = array();
    $row['validation_message'] = array();
    $row['import_status'] = 1;

    $this-&amp;amp;gt;setDefaultValues($row);
    $this-&amp;amp;gt;validate($row)

    $row['import_status'] = count($row['failed_columns']) &amp;amp;gt; 0 ? 0 : 1;
    $row['failed_columns'] = json_encode($row['failed_columns']);
    $row['validation_message'] = !empty($row['validation_message']) ? implode(", ", $row['validation_message']) : null;

    return $row;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// remove white spaces and non-breaking spaces&lt;br&gt;
    protected function trimValues(&amp;amp;$row): void&lt;br&gt;
    {&lt;br&gt;
        $row = array_map(function ($value) {&lt;br&gt;
            // For numeric values accept zero.&lt;br&gt;
            if ($value == 0) {&lt;br&gt;
                return $value;&lt;br&gt;
            }&lt;br&gt;
            return !empty(trim($value)) ? trim(str_replace("\xc2\xa0", '', $value)) : NULL;&lt;br&gt;
        }, $row);&lt;br&gt;
    }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public function registerEvents(): array&lt;br&gt;
    {&lt;br&gt;
        return [&lt;br&gt;
            AfterImport::class =&amp;gt; function () {&lt;br&gt;
                $this-&amp;gt;updateImportDetails();&lt;br&gt;
                $this-&amp;gt;updateTimesheetImportCount();&lt;br&gt;
            },&lt;br&gt;
        ];&lt;br&gt;
    }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Insert the chunked Data&lt;br&gt;
public function array(array $array)&lt;br&gt;
{&lt;br&gt;
    ImportDetails::insert($array);&lt;br&gt;
    return $array;&lt;br&gt;
}&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    public function chunkSize(): int&lt;br&gt;
    {&lt;br&gt;
        return $this-&amp;gt;chunk_size;&lt;br&gt;
    }&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  4. Verification &amp;amp; Real Record Creation: Validating and Finalizing Imported Data
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80ok77e0l7cpoohl6ko9.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80ok77e0l7cpoohl6ko9.jpeg" alt="Verification &amp;amp; Real Record Creation: Validating and Finalizing Imported Data"&gt;&lt;/a&gt;&lt;br&gt;
After the temporary records are created from the Excel data, the system proceeds to verify and finalize the imported data. This phase ensures that only accurate and validated records are permanently stored in the system. Here's a detailed explanation of this crucial step:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Validate Imported Records&lt;/strong&gt;: All temporary records generated from the Excel data undergo thorough validation checks. This includes verifying data consistency, ensuring adherence to business rules, and detecting any anomalies or discrepancies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fix Exception Records&lt;/strong&gt;: If any validation errors are identified during the verification process, the system prompts users to review and rectify the exception records. Users can correct data inaccuracies or provide missing information to ensure completeness and accuracy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Background Task for Real Record Generation&lt;/strong&gt;: Once the imported data passes validation, a background task is initiated to generate real records from the validated entries. This task orchestrates the finalization process, ensuring data integrity before permanent storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CRON Picks Up the Task&lt;/strong&gt;: The scheduled task manager periodically picks up the task for real record creation, ensuring timely processing and minimal delay in data integration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Validate for Duplicate Records&lt;/strong&gt;: Before creating real records in the database, the system performs a final check to ensure that no duplicate entries exist. This prevents redundancy and maintains data consistency within the system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Construct Insertion Query&lt;/strong&gt;: Based on the validated temporary records, the system constructs an optimized insertion query using efficient bulk-insert techniques. This minimizes database overhead and maximizes performance during the final data insertion step.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bulk Insertion of Real Records&lt;/strong&gt;: The validated and deduplicated records are bulk-inserted into the system's database tables, finalizing the data import process. This step ensures that only accurate and verified data is permanently stored for use within the application.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
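&lt;p&gt;The "construct insertion query" step can be sketched as building one parameterized multi-row INSERT, so the final write hits the database once per chunk rather than once per record. Table and column names here are illustrative:&lt;/p&gt;

```python
def build_bulk_insert(table, columns, records):
    """Build a parameterized multi-row INSERT statement plus its
    flattened parameter list for a single database round-trip."""
    placeholders = "(" + ", ".join(["%s"] * len(columns)) + ")"
    sql = (
        f"INSERT INTO {table} ({', '.join(columns)}) VALUES "
        + ", ".join([placeholders] * len(records))
    )
    params = [record[col] for record in records for col in columns]
    return sql, params
```

&lt;p&gt;Passing the placeholders and parameters separately keeps the query safe from injection while still inserting every record in one statement.&lt;/p&gt;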

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fggrh49q8uyqcc9att3yd.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fggrh49q8uyqcc9att3yd.jpeg" alt="Real Record Creation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By meticulously verifying and finalizing imported data, the system ensures data accuracy, integrity, and consistency, laying the groundwork for effective data-driven decision-making and business operations. This phase marks the culmination of the data import process, ensuring that the system is equipped with reliable and trustworthy information for optimal performance and functionality.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In the dynamic landscape of data management, the seamless integration of external data into our systems is crucial. Throughout this journey, we've delved into the intricacies of designing a high-performance data import system, focusing on the versatility of Excel and the efficiency of our approach.&lt;/p&gt;

&lt;p&gt;From configuring import settings to finalizing real records, each phase of the data import process is pivotal in ensuring accuracy, integrity, and efficiency. Our commitment to optimization extends beyond functionality to include minimizing database hits and connections, while also striving to maintain O(n) time complexity.&lt;/p&gt;

&lt;p&gt;Through dynamic field mapping and a structured database schema, users can tailor import configurations to their needs, fostering flexibility and customization. This empowers organizations to navigate the complexities of data migration with confidence.&lt;/p&gt;

&lt;p&gt;During file upload, we prioritize compatibility and security, laying the groundwork for efficient processing. Subsequent phases, from Excel reading to temporary record creation and verification, ensure meticulous validation with minimal database interactions.&lt;/p&gt;

&lt;p&gt;As we conclude, it's clear that a well-designed data import system enhances operational efficiency and underpins informed decision-making. By embracing innovative solutions and optimization strategies, we navigate the complexities of data migration, driving growth and innovation in the digital age while ensuring minimal database overhead and optimal performance. Together, let's continue harnessing the power of efficient data integration to unlock new possibilities and opportunities.&lt;br&gt;
Feel free to share your ideas on how you would process data imports.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>systemdesign</category>
      <category>dataimports</category>
      <category>excel</category>
    </item>
    <item>
      <title>General Docker Troubleshooting, Best Practices &amp; Where to Go From Here</title>
      <dc:creator>Ramkumar Jayakumar</dc:creator>
      <pubDate>Fri, 19 Jan 2024 13:00:00 +0000</pubDate>
      <link>https://dev.to/rkj180220/general-docker-troubleshooting-best-practices-where-to-go-from-here-1gj6</link>
      <guid>https://dev.to/rkj180220/general-docker-troubleshooting-best-practices-where-to-go-from-here-1gj6</guid>
      <description>&lt;h2&gt;
  
  
  Tackling Common Issues: No Space and Sluggish Containers
&lt;/h2&gt;

&lt;p&gt;Addressing common issues is crucial for maintaining a seamless Docker experience. Let's highlight the importance and break down each of the upcoming sections:&lt;/p&gt;

&lt;h3&gt;
  
  
  No Space Issue:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Image Cleanup:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;To remove unused images:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; docker rmi &amp;lt;img1-id&amp;gt; &amp;lt;img2-id&amp;gt; &amp;lt;img3-id&amp;gt; ..
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Execute system prune:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; docker system prune
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This command removes stopped containers, dangling images, unused networks, and the build cache in one sweep.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Container Performance:
&lt;/h3&gt;

&lt;p&gt;1. &lt;strong&gt;Monitoring Container Stats:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Use the &lt;code&gt;docker stats&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; docker stats &amp;lt;name/id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Streams live resource-usage statistics (CPU, memory, network, and disk I/O) for the container.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2. &lt;strong&gt;Inspecting Container Details:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Explore detailed information in JSON format:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; docker inspect &amp;lt;container name/id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;ul&gt;
&lt;li&gt;To page through the lengthy output with &lt;code&gt;less&lt;/code&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;  docker inspect &amp;lt;container name/id&amp;gt; | less
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;3. &lt;strong&gt;Viewing Running Processes:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker top &amp;lt;container name/id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Reveals running processes inside the container.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Docker Best Practices
&lt;/h2&gt;

&lt;p&gt;Adhering to best practices is paramount for ensuring security and stability within your Docker environment:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Image Security:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use verified images for enhanced security.&lt;/li&gt;
&lt;li&gt;Scan unverified images with container image scanners such as Clair, Trivy, or Dagda.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Avoid Using Latest:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Specify image versions to maintain stability.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;User Privileges:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Utilize non-root users within containers for increased security.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
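&lt;p&gt;As a quick illustration of these practices (the image name and tag below are examples, and Trivy is assumed to be installed):&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt; # Scan an image for known vulnerabilities before using it
 trivy image nginx:1.25

 # Pull a pinned version instead of the mutable 'latest' tag
 docker pull nginx:1.25

 # Run the container as a non-root user
 docker run --rm --user 1000:1000 nginx:1.25 id
&lt;/code&gt;&lt;/pre&gt;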

&lt;h2&gt;
  
  
  Where to Go Next
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Docker Compose:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Multi-Container Apps:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker excels for single-application deployment.&lt;/li&gt;
&lt;li&gt;For apps with multiple components, use Docker Compose to define containers and their relationships in a single Compose file (typically &lt;code&gt;docker-compose.yml&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Getting Started:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start containers with:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; docker-compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
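&lt;p&gt;For a taste of what a Compose file looks like, here is a minimal, hypothetical example with a web server and a database (service names and images are illustrative):&lt;/p&gt;

&lt;pre class="highlight yaml"&gt;&lt;code&gt; # docker-compose.yml
 version: "3.8"
 services:
   web:
     image: nginx:1.25
     ports:
       - "8080:80"
   db:
     image: postgres:16
     environment:
       POSTGRES_PASSWORD: example
&lt;/code&gt;&lt;/pre&gt;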

&lt;h3&gt;
  
  
  Kubernetes:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Scaling for Production:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker eases single-system container deployment.&lt;/li&gt;
&lt;li&gt;Kubernetes addresses the challenges of managing hundreds or thousands of containers in production.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes Challenges:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Overcoming difficulties in linking Docker networks across hosts.&lt;/li&gt;
&lt;li&gt;Managing containers across multiple hosts.&lt;/li&gt;
&lt;li&gt;Lack of built-in solutions for container migration between hosts.&lt;/li&gt;
&lt;li&gt;Production concerns like load balancing and securing traffic are challenging with Docker client alone.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Container Orchestrations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tools like VMware’s vCenter and Rundeck build on orchestrators for easy scaling, container movement, and traffic routing.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Planet-Scale Container Orchestrator - Kubernetes:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes automates deployment, scaling, and management of containerized applications.&lt;/li&gt;
&lt;li&gt;It's designed for distributed systems, running components across multiple machines.&lt;/li&gt;
&lt;li&gt;Enables auto-scaling and URL-based traffic routing, and is often described as a platform for building platforms.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;Navigating Docker troubleshooting, adopting best practices, and exploring advanced orchestration tools are essential steps in mastering containerization. Whether you choose Docker Compose for multi-container simplicity or venture into the vast landscape of Kubernetes, each path unlocks new possibilities for deploying and managing your applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fww8sh54qhgn6dzh5yuvm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fww8sh54qhgn6dzh5yuvm.png" alt="Containerization Evolution" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Docker Documentation. &lt;a href="https://docs.docker.com/engine/reference/commandline/system_prune/" rel="noopener noreferrer"&gt;Docker system prune.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Docker Documentation. &lt;a href="https://docs.docker.com/storage/" rel="noopener noreferrer"&gt;Manage data in containers.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Clair. &lt;a href="https://github.com/quay/clair" rel="noopener noreferrer"&gt;Vulnerability Static Analysis for Containers.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Trivy. &lt;a href="https://github.com/aquasecurity/trivy" rel="noopener noreferrer"&gt;A Simple and Comprehensive Vulnerability Scanner for Containers.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Dagda. &lt;a href="https://github.com/eliasgranderubio/dagda" rel="noopener noreferrer"&gt;A tool to perform static analysis of known vulnerabilities, trojans, viruses, malware &amp;amp; other malicious threats in Docker images/containers.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Docker Compose Documentation. &lt;a href="https://docs.docker.com/compose/" rel="noopener noreferrer"&gt;Overview of Docker Compose.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Kubernetes Documentation. &lt;a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/" rel="noopener noreferrer"&gt;Kubernetes Basics.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;VMware. &lt;a href="https://www.vmware.com/products/vcenter-server.html" rel="noopener noreferrer"&gt;vCenter Server.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Rundeck Documentation. &lt;a href="https://docs.rundeck.com/docs/manual/introduction.html" rel="noopener noreferrer"&gt;Introduction to Rundeck.&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>tutorial</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Create Containers and Interaction in Docker</title>
      <dc:creator>Ramkumar Jayakumar</dc:creator>
      <pubDate>Wed, 17 Jan 2024 13:00:00 +0000</pubDate>
      <link>https://dev.to/rkj180220/create-containers-and-interaction-in-docker-3hac</link>
      <guid>https://dev.to/rkj180220/create-containers-and-interaction-in-docker-3hac</guid>
      <description>&lt;h2&gt;
  
  
  Building Containers: The Detailed Way
&lt;/h2&gt;

&lt;p&gt;Containers in Docker are born from container images: compressed, pre-packaged file systems that bundle your app, its environment, and its configuration instructions. Let's embark on the journey of container creation, starting with the meticulous approach.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Container Using 'docker container create'
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Understanding Images:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Images are compressed file systems with instructions on how to start your app.&lt;/li&gt;
&lt;li&gt;Docker Hub serves as a default repository for images.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Command to Create Container:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker container create &lt;span class="nt"&gt;--help&lt;/span&gt;
   docker container create hello-world:linux
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;create&lt;/code&gt; command doesn't start the container; it merely generates it.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Listing Containers:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker ps
   docker ps &lt;span class="nt"&gt;--all&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The status "0 exit code (exited)" indicates successful execution.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="4"&gt;
&lt;li&gt;
&lt;strong&gt;Starting the Container:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker container start &amp;lt;container_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Check logs using &lt;code&gt;docker logs &amp;lt;container_id&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Choosing the detailed way may be preferable in scenarios where you want granular control over container creation, particularly when specific configurations or setups are required.&lt;/p&gt;

&lt;h3&gt;
  
  
  Shortcut: 'docker run' Command
&lt;/h3&gt;

&lt;p&gt;On the flip side, the 'docker run' command offers a more efficient and convenient way to create and run containers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Single Command Magic:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker run &amp;lt;image_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This command combines create, start, and attach actions.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Note:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;code&gt;docker ps&lt;/code&gt; to list containers and retrieve IDs.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Experimenting with both methods is encouraged. The 'docker run' command is often favored for its simplicity, making it an excellent choice for quick development iterations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dockerfile and Image Creation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Dockerfile Basics
&lt;/h3&gt;

&lt;p&gt;Introducing Dockerfile basics is essential for efficient image creation. Let's explain each component succinctly:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Example Dockerfile:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;   FROM ubuntu:latest
   LABEL maintainer="Your Name"
   USER nobody
   COPY . /app
   RUN apt-get update &amp;amp;&amp;amp; apt-get install -y curl bash
   USER nobody
   ENTRYPOINT ["curl"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Explanation:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;FROM ubuntu:latest&lt;/code&gt;: Specifies the base image as the latest version of Ubuntu.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;LABEL maintainer="Your Name"&lt;/code&gt;: Adds metadata to the image, specifying the maintainer.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;USER nobody&lt;/code&gt;: Sets the default user for subsequent commands to 'nobody'.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;COPY . /app&lt;/code&gt;: Copies the contents of the current directory to the '/app' directory in the image.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RUN apt-get update &amp;amp;&amp;amp; apt-get install -y curl bash&lt;/code&gt;: Updates package lists and installs 'curl' and 'bash'.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;USER nobody&lt;/code&gt;: Switches back to the 'nobody' user.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ENTRYPOINT ["curl"]&lt;/code&gt;: Defines the default executable for the container as 'curl'.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Building an Image:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker build &lt;span class="nt"&gt;-t&lt;/span&gt; &amp;lt;image-name&amp;gt; &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Use &lt;code&gt;-t&lt;/code&gt; to tag an image.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="3"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Intermediary Images:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker generates an intermediate image for each instruction in the Dockerfile, which enables layer caching on rebuilds.&lt;/li&gt;
&lt;li&gt;The final image is the stack of these layers, tagged with the name you supplied.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Running the Image:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker run &amp;lt;image-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Removing Images
&lt;/h3&gt;

&lt;p&gt;To maintain a clean and efficient system, it's crucial to remove unused images:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To remove an image:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; docker rmi &amp;lt;image_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Use &lt;code&gt;docker images&lt;/code&gt; to list images.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Advanced Container Operations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Container Naming
&lt;/h3&gt;

&lt;p&gt;Let's walk through a few advanced container operations, starting with naming:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Assign a name to your container:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; &amp;lt;container_name&amp;gt; &amp;lt;image_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Port Binding
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Bind ports for network access:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; &amp;lt;server_name&amp;gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 5001:5000 &amp;lt;image_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Access the server at &lt;a href="http://localhost:5001" rel="noopener noreferrer"&gt;http://localhost:5001&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Volume Mounting
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Preserve data outside containers using volumes:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--entrypoint&lt;/span&gt; sh &lt;span class="nt"&gt;-v&lt;/span&gt; /tmp/container:/tmp ubuntu &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"echo 'Hello There.' &amp;gt; /tmp/file &amp;amp;&amp;amp; cat /tmp/file"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Map a folder on your machine to a folder in the container.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Container Registries
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Understanding Registries:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Container image registries track and store container images.&lt;/li&gt;
&lt;li&gt;Docker Hub is the default registry.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pushing Images to Docker Hub:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker login
   docker tag &amp;lt;image_to_rename&amp;gt; username/new_image_name
   docker push username/new_image_name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Deleting Images in Docker Hub:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Delete images through the Docker Hub settings in the browser.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Mastering container creation and interaction forms the backbone of Docker. Whether you choose the detailed or shortcut approach, understanding the nuances empowers you to efficiently manage and deploy your applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/reference/run/" rel="noopener noreferrer"&gt;Docker Documentation. Docker run reference.&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/reference/builder/" rel="noopener noreferrer"&gt;Docker Documentation. Dockerfile reference.&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/reference/commandline/cli/" rel="noopener noreferrer"&gt;Docker Documentation. Docker CLI documentation.&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/storage/" rel="noopener noreferrer"&gt;Docker Documentation. Manage data in containers.&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/compose/compose-file/" rel="noopener noreferrer"&gt;Docker Documentation. Docker Compose overview.&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>webdev</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Anatomy of Docker</title>
      <dc:creator>Ramkumar Jayakumar</dc:creator>
      <pubDate>Mon, 15 Jan 2024 13:00:00 +0000</pubDate>
      <link>https://dev.to/rkj180220/anatomy-of-docker-4a4j</link>
      <guid>https://dev.to/rkj180220/anatomy-of-docker-4a4j</guid>
      <description>&lt;p&gt;In this section, we delve into the intricate components that form the backbone of Docker. Understanding the anatomy of Docker is essential for harnessing its true power in containerization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Container Components:
&lt;/h3&gt;

&lt;p&gt;Docker operates on the fusion of Linux namespaces and control groups, providing a unique and isolated environment for applications. Let's simplify this complex structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Linux Namespace:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Linux kernel feature providing different "views" of your system to applications.&lt;/li&gt;
&lt;li&gt;Offers a unique and isolated instance of system resources, allowing independent operation of processes or containers.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Linux Control Group (cgroup):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Another Linux kernel feature restricting how much hardware each process can use.&lt;/li&gt;
&lt;li&gt;Utilized for monitoring and restricting CPU, network, disk bandwidth, and memory consumption.&lt;/li&gt;
&lt;li&gt;Can also assign disk quotas, though Docker does not expose this capability.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe06hmff96ykniuc26irx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe06hmff96ykniuc26irx.png" alt="Anatomy of Docker"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Note:&lt;/strong&gt; While Docker is native to Linux and some newer versions of Windows, containers can run on any OS. Docker is compatible with Windows Server, and with the introduction of the Windows Subsystem for Linux (WSL) from the Windows 10 Anniversary Update onwards, performance on Windows has significantly improved.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Linux Kernel's Eight Namespaces:
&lt;/h3&gt;

&lt;p&gt;Let's briefly explore the eight namespaces in the Linux kernel that provide isolation for processes, networking, interprocess communication, and more:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;PID Namespace (pid):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Isolates the process ID number space, ensuring processes in different namespaces have distinct PIDs.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Network Namespace (net):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides isolation in terms of network resources.&lt;/li&gt;
&lt;li&gt;Enables each namespace to have its own network interfaces, routing tables, and firewall rules.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Mount Namespace (mnt):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Isolates the set of mount points seen by a group of processes.&lt;/li&gt;
&lt;li&gt;Ensures each namespace has its own filesystem hierarchy.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;IPC Namespace (ipc):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Isolates interprocess communication resources.&lt;/li&gt;
&lt;li&gt;Each namespace has its own System V IPC, message queues, and semaphore arrays.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;UTS Namespace (uts):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Isolates system identifiers like hostname and NIS domain name.&lt;/li&gt;
&lt;li&gt;Enables each namespace to have its own hostname and domain name.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;User Namespace (user):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides isolation for user and group IDs.&lt;/li&gt;
&lt;li&gt;Allows different namespaces to have different user and group mappings.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Cgroup Namespace (cgroup):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Introduced in Linux kernel 4.6.&lt;/li&gt;
&lt;li&gt;Enables isolation for control groups, allowing each namespace to have its own set of cgroups.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Time Namespace (time):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Introduced in Linux kernel 5.6.&lt;/li&gt;
&lt;li&gt;Ensures each namespace has its own perception of time.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
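&lt;p&gt;You can see namespace isolation in action even without Docker. The following sketch (assuming a Linux host with util-linux's &lt;code&gt;unshare&lt;/code&gt; and root privileges) changes the hostname inside a fresh UTS namespace without touching the host:&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt; # Change the hostname inside a new UTS namespace
 sudo unshare --uts sh -c 'hostname demo-ns &amp;amp;&amp;amp; hostname'

 # Back on the host, the hostname is unchanged
 hostname
&lt;/code&gt;&lt;/pre&gt;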

&lt;h3&gt;
  
  
  Docker Advantages:
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Dockerfiles:
&lt;/h4&gt;

&lt;p&gt;Dockerfiles simplify configuring and packaging apps and their environments.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Users instruct Docker on container configuration through Dockerfiles.&lt;/li&gt;
&lt;li&gt;Docker utilizes these Dockerfiles to package apps and their environments into images.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Example:
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;

&lt;p&gt;&lt;span class="c"&gt;# Dockerfile example (more details in the next blog)&lt;/span&gt;&lt;br&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu:latest&lt;/span&gt;&lt;br&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; nginx&lt;br&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["nginx", "-g", "daemon off;"]&lt;/span&gt;&lt;/p&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  Sharing Images:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Docker Hub, a global repository, facilitates easy image sharing.&lt;/li&gt;
&lt;li&gt;Users can utilize alternative registries like AWS Elastic Container Registry (ECR), Google Container Registry (GCR), Quay.io, Harbor, Azure Container Registry (ACR), etc., and share images effortlessly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Docker CLI:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Docker CLI makes starting apps in containers a breeze.&lt;/li&gt;
&lt;li&gt;No worries about UID mappings, network interfaces, or Docker configuration files.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Docker Limitations:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Docker relies on Linux namespaces and control groups, making it technically native to Linux and some newer versions of Windows.&lt;/li&gt;
&lt;li&gt;Containers can run on any OS, but images are bound to the OS they were built for (a Linux image needs a Linux kernel to run). Workarounds exist for these limitations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Docker Alternatives:
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Container Runtime Interface (CRI):
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes introduced the CRI, enabling developers to create their own container runtimes.&lt;/li&gt;
&lt;li&gt;Examples: CRI-O, runc, and AWS Firecracker.&lt;/li&gt;
&lt;li&gt;Podman by Red Hat offers highly secure, rootless containers.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Docker Machine:
&lt;/h4&gt;

&lt;p&gt;Docker Machine serves as a workaround for Mac and Windows users, allowing them to run Docker on their systems. However, if you are using Linux, there are specific distributions optimized to support container runtimes more efficiently.&lt;/p&gt;

&lt;h5&gt;
  
  
  Docker Machine for Mac/Windows:
&lt;/h5&gt;

&lt;p&gt;Designed as a workaround for non-Linux environments.&lt;br&gt;
Utilizes VirtualBox to create virtual machines running Docker.&lt;br&gt;
Offers a solution for those operating systems lacking native Docker support.&lt;/p&gt;
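&lt;p&gt;For reference, a typical Docker Machine workflow looked like this (the VM name is an example, VirtualBox is assumed to be installed, and the tool is now considered legacy):&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt; # Create a VirtualBox-backed VM running the Docker daemon
 docker-machine create --driver virtualbox default

 # Point the local Docker client at the new VM
 eval "$(docker-machine env default)"
 docker ps
&lt;/code&gt;&lt;/pre&gt;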

&lt;h5&gt;
  
  
  Linux Distributions Optimized for Containers:
&lt;/h5&gt;

&lt;p&gt;When using Docker on Linux, consider distributions optimized for container runtimes. Examples include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ubuntu: A widely used and well-supported Linux distribution with good compatibility for Docker.&lt;/li&gt;
&lt;li&gt;Alpine Linux: Known for its lightweight nature, making it an excellent choice for containerized applications.&lt;/li&gt;
&lt;li&gt;Fedora CoreOS: A minimal OS designed explicitly for running containerized applications at scale.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Performance Considerations:
&lt;/h5&gt;

&lt;p&gt;Docker Machine on Mac and Windows may exhibit slower performance due to the virtualization layer and dependency on VirtualBox.&lt;br&gt;
On Linux, choosing a distribution optimized for containers can enhance Docker's performance and resource utilization.&lt;br&gt;
When selecting a Linux distribution for Docker, it's essential to consider factors like compatibility, resource efficiency, and the level of support provided for container runtimes. Each distribution may have its strengths, so choose the one that aligns best with your specific requirements.&lt;/p&gt;

&lt;h4&gt;
  
  
  Docker Desktop:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;A more refined solution introduced in 2016.&lt;/li&gt;
&lt;li&gt;Smaller and faster VM running on Apple's native hypervisor or Hyper-V on Windows.&lt;/li&gt;
&lt;li&gt;Improved GUI for configuration, mounting volumes, and exposing network ports.&lt;/li&gt;
&lt;li&gt;Addresses the limitations of Docker Machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion:
&lt;/h3&gt;

&lt;p&gt;Understanding Docker's anatomy is crucial for harnessing its power. From Linux namespaces to Dockerfiles, we've peeled back the layers. Stay tuned for the next blog, where we'll dissect Dockerfiles and dive deeper into crafting containerized environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://lwn.net/Articles/531114/#:~:text=The%20namespaces,instance%20of%20the%20global%20resource" rel="noopener noreferrer"&gt;Linux Documentation. Namespaces in operation, part 1: namespaces overview.&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/reference/builder/" rel="noopener noreferrer"&gt;Docker Documentation. Dockerfile reference.&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/" rel="noopener noreferrer"&gt;Kubernetes Documentation. Container runtimes.&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://podman.io/" rel="noopener noreferrer"&gt;Podman Documentation. Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System.&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.redhat.com/en/technologies/cloud-computing/openshift/red-hat-openshift-kubernetes" rel="noopener noreferrer"&gt;RedHat vs Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>tutorial</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Why Docker?</title>
      <dc:creator>Ramkumar Jayakumar</dc:creator>
      <pubDate>Sat, 13 Jan 2024 12:55:40 +0000</pubDate>
      <link>https://dev.to/rkj180220/why-docker-4hpn</link>
      <guid>https://dev.to/rkj180220/why-docker-4hpn</guid>
      <description>&lt;p&gt;Have you ever found yourself frustrated with the "It works on my machine" dilemma during your software development journey? Despite careful testing in various environments, your application misbehaves elsewhere, falling victim to the common challenges of software engineering.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Landscape of Solutions:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configuration Management:&lt;/strong&gt;&lt;br&gt;
Tools like Chef, Ansible, and Puppet tackle the issue by letting developers describe, in a configuration language, precisely what machines need to run your app.&lt;br&gt;
&lt;em&gt;Drawback: Requires in-depth knowledge of the underlying OS and hardware.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Virtual Machines as Code (Vagrant):&lt;/strong&gt;&lt;br&gt;
HashiCorp’s Vagrant empowers developers to script entire virtual machines for their apps.&lt;br&gt;
&lt;em&gt;Drawback: Involves understanding the virtual machine's hardware requirements and often configuring the OS before app installation. Heavy, relatively slow, and demands inconvenient configurations.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;A Paradigm Shift: Docker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Docker?&lt;/strong&gt;&lt;br&gt;
Amidst these challenges, Docker emerges as a game-changer. It empowers developers to package their applications into images that seamlessly run on containers. These containers act as virtualized operating systems, configured with just enough resources to run your app.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Virtual Machines vs Containers:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Virtual Machines:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run on a platform called Hypervisor.&lt;/li&gt;
&lt;li&gt;Hypervisor emulates hardware operations, translating them to real hardware operations on hosts.&lt;/li&gt;
&lt;li&gt;Takes up a lot of space.&lt;/li&gt;
&lt;li&gt;Requires OS installation and configuration.&lt;/li&gt;
&lt;li&gt;Can run multiple apps simultaneously but can't directly interact with the host securely.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Containers:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run on container runtimes.&lt;/li&gt;
&lt;li&gt;Runtimes work with the OS to allocate hardware and copy files, running like any other app on the system.&lt;/li&gt;
&lt;li&gt;Do not emulate hardware, don't need to "boot up," and use the same hardware and OS.&lt;/li&gt;
&lt;li&gt;Do not require OS installation, enabling quick startup.&lt;/li&gt;
&lt;li&gt;Take up less space, allowing more apps to run.&lt;/li&gt;
&lt;li&gt;Run only one app at a time by design but can interact with the host, posing security challenges (mostly resolved).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu02h24v7yf2ayluvn67x.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu02h24v7yf2ayluvn67x.jpg" alt="Containers vs VMs" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Docker Advantage:&lt;/strong&gt;&lt;br&gt;
Docker streamlines the development process by creating images from lightweight configuration files, detailing everything your app needs to run. Unlike traditional virtual machines, containers hide hardware details, ensuring consistent app performance across different environments.&lt;/p&gt;
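&lt;p&gt;As a concrete (and hypothetical) illustration of such a lightweight configuration file, a Dockerfile for a small Node.js app might look like the following. The base image and file names here are assumptions for the sketch, not from any specific project.&lt;/p&gt;

```dockerfile
# Hypothetical example: package a Node.js app into an image.
FROM node:20-alpine          # small base image with Node.js preinstalled
WORKDIR /app
COPY package*.json ./
RUN npm ci                   # install pinned dependencies
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]    # command the container runs on start
```

&lt;p&gt;Building this once with docker build produces an image that behaves the same on any machine where Docker is installed.&lt;/p&gt;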

&lt;p&gt;&lt;strong&gt;Portability and Efficiency:&lt;/strong&gt;&lt;br&gt;
Build your app once, and if the machine can run Docker, your app will run consistently, irrespective of its location. Docker brings efficiency to the development process, enabling you to build, deploy, and scale your applications quickly, safely, and cost-effectively.&lt;/p&gt;

&lt;p&gt;In a nutshell, Docker leverages images and containers to offer a universal solution—run your apps anywhere with consistent behavior. Say goodbye to the chaos of environment discrepancies, and embrace the power to build, deploy, and scale your applications effortlessly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this beginner's guide to Docker, we'll unravel the world of containerization, exploring how Docker empowers developers to craft, share, and run applications seamlessly. Let's decode Docker and unlock its potential for your development endeavors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/" rel="noopener noreferrer"&gt;Docker Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.chef.io/" rel="noopener noreferrer"&gt;Chef Software Inc. Chef Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.vagrantup.com/docs" rel="noopener noreferrer"&gt;HashiCorp. Vagrant Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>docker</category>
      <category>tutorial</category>
      <category>softwaredevelopment</category>
      <category>containers</category>
    </item>
    <item>
      <title>Persistent Data Grids in Angular: A Comprehensive Guide</title>
      <dc:creator>Ramkumar Jayakumar</dc:creator>
      <pubDate>Wed, 20 Dec 2023 10:15:57 +0000</pubDate>
      <link>https://dev.to/rkj180220/persistent-data-grids-in-angular-a-comprehensive-guide-4jbf</link>
      <guid>https://dev.to/rkj180220/persistent-data-grids-in-angular-a-comprehensive-guide-4jbf</guid>
      <description>&lt;p&gt;Ever wondered how web applications keep your search data and grid configurations intact as you navigate through pages? This seamless experience is thanks to data grid implementations—a cornerstone of modern web development.&lt;/p&gt;

&lt;p&gt;Data grids are essential tools, revolutionizing the presentation and interaction with vast sets of information. Picture online banking platforms allowing effortless transaction filtering or project management tools dynamically updating tasks without a page refresh. These real-world scenarios underscore the power of data grids, offering users intuitive controls for customization, sorting, and real-time updates.&lt;/p&gt;

&lt;p&gt;This blog explores the intricacies of achieving persistent storage for data grids in Angular, eliminating the need for manual data synchronization. Join us as we delve into ensuring a seamless user experience where apps effortlessly retain grid configurations and search data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Challenge
&lt;/h2&gt;

&lt;p&gt;Before delving into the technical details, let's briefly outline the key elements of an application's data grids. Typically, a data grid consists of an external filter that searches a database and a grid that displays the retrieved records. Two critical pieces of information need to be preserved:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Search Data&lt;/strong&gt;: Retaining all the searches made in external filters whenever the page is visited.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Grid Configuration&lt;/strong&gt;: Storing options like selected columns, column width, and column order, which should persist across visits.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The next important thing is how long this data must be stored. Deciding when to retain or discard the data depends on the context. Search data is often considered temporary, while grid configurations are expected to persist across user sessions.&lt;br&gt;
So, the search data should be stored for a particular session against a data grid, while grid configurations are stored against a data grid and the user ID.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing the Right Storage Option
&lt;/h2&gt;

&lt;p&gt;Now that we've identified what data to store and when, let's explore the options available to achieve persistent storage. Angular offers an array of options for persistence, each with its own strengths and quirks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;LocalStorage&lt;/strong&gt;: Persists even after the tab is closed, ideal for simple settings. Performance can be a concern for large chunks of data, since every read and write involves stringifying and parsing it.&lt;br&gt;
&lt;em&gt;Example: User preferences like language or theme persist across logins.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SessionStorage&lt;/strong&gt;: Similar to LocalStorage, but data vanishes when the browser window closes.&lt;br&gt;
&lt;em&gt;Example: Shopping cart contents during a single shopping session.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;IndexedDB&lt;/strong&gt;: A more powerful local database with richer object storage capabilities. While IndexedDB offers powerful storage capabilities, its complexity and overhead, encompassing factors like error handling challenges, potential performance limitations, and reduced compatibility across browsers and modes, make it a less straightforward choice for simple persistent data grid storage.&lt;br&gt;
&lt;em&gt;Example: Offline data persistence for complex applications.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cookies&lt;/strong&gt;: Small data packages sent between the browser and server, useful for tracking user sessions. However, privacy concerns, a size limit of roughly 4 KB, and the overhead of being sent with every request make them a less ideal choice.&lt;br&gt;
&lt;em&gt;Example: Remembering login credentials for returning users.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Angular Routes&lt;/strong&gt;: Embed data directly in the URL, making it bookmarkable and SEO-friendly; useful for simple configurations.&lt;br&gt;
&lt;em&gt;Example: Sharing specific content configurations with a link.&lt;br&gt;
&lt;code&gt;/analytics/user-report?start_date=2023-10-01&amp;amp;end_date=2023-12-19&lt;/code&gt; (shows specific date range for analysis)&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Angular Services&lt;/strong&gt;: Services act as data managers, making your app cleaner, more organized, and easier to work with. Their key advantages are code reusability, abstraction, and synchronization; the trade-off is that data is lost on page refresh or tab closure.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Remember&lt;/em&gt;&lt;/strong&gt;: The best choice depends on your application's needs. Consider data size, persistence duration, security implications, and potential limitations.&lt;/p&gt;
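&lt;p&gt;One way to keep that decision reversible is to put the chosen option behind a single interface. The sketch below is illustrative (the class and method names are my own, not from a specific library): a thin wrapper over a Storage-like backend lets session-scoped search data and long-lived grid configuration share one API. An in-memory stand-in keeps the example self-contained; in a browser you would pass window.sessionStorage or window.localStorage instead.&lt;/p&gt;

```typescript
// Illustrative sketch: one wrapper, pluggable Storage-like backends.
interface StorageLike {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// In-memory stand-in for the example; a browser would use
// window.sessionStorage (search data) or window.localStorage (grid config).
class MemoryStorage implements StorageLike {
  private store = new Map<string, string>();
  getItem(key: string): string | null {
    return this.store.has(key) ? this.store.get(key)! : null;
  }
  setItem(key: string, value: string): void {
    this.store.set(key, value);
  }
}

// Serializes values to JSON on write and parses them back on read.
class GridStore {
  constructor(private backend: StorageLike) {}
  save(key: string, value: unknown): void {
    this.backend.setItem(key, JSON.stringify(value));
  }
  load<T>(key: string): T | null {
    const raw = this.backend.getItem(key);
    return raw === null ? null : (JSON.parse(raw) as T);
  }
}

// Session-scoped search data gets its own backend instance.
const searchStore = new GridStore(new MemoryStorage());
searchStore.save('grid_name:search', { name: null, customer: [1234] });
const restored = searchStore.load<{ name: string | null; customer: number[] }>(
  'grid_name:search'
);
```

&lt;p&gt;Swapping the backend is then a one-line change, so the storage decision can be revisited later without touching the grid components.&lt;/p&gt;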

&lt;h2&gt;
  
  
  Storage Options Implementation
&lt;/h2&gt;

&lt;p&gt;In my implementation, I've opted for a hybrid approach, utilizing both local storage and a backend database to store different types of data. Here's a glimpse of the structure used for storing grid configurations.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
    "grid_name": {
        "external_filters": {
            "name": null,
            "customer": [ 1234, 1235],
            "email": null,
            "some_boolean_filter": false,
            "created_at": "2023-12-13T18:30:00.000Z"
        },
        "columns_config": [
            // Individual Column Configurations
            {
                "field": "name",
                "width": 120,
                "title": "Name",
                "orderIndex": 1
            },
            {
                "field": "amount",
                "width": 100,
                "title": "Amount(₹)",
                "cssClass": "text-end",
                "orderIndex": 3
            },
            {
                "field": "email",
                "width": 100,
                "title": "Email",
                "hidden": true,
                "orderIndex": 2
            }

        ], 
        "state": {
            // Grid State Information
            "skip": 0,
            "take": 10,
            "sort": []
        }
    }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
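&lt;p&gt;The structure above can also be given a type, so misspelled fields are caught at compile time. Here is one possible typing; the interface names are my own sketch, mirroring the JSON example rather than any published API.&lt;/p&gt;

```typescript
// Types mirroring the JSON structure shown above (illustrative names).
interface ColumnConfig {
  field: string;
  width: number;
  title: string;
  orderIndex: number;
  cssClass?: string; // optional styling hook, e.g. "text-end"
  hidden?: boolean;  // column exists but is not displayed
}

interface GridState {
  skip: number;      // paging offset
  take: number;      // page size
  sort: unknown[];   // active sort descriptors
}

interface GridSettings {
  external_filters: Record<string, unknown>;
  columns_config: ColumnConfig[];
  state: GridState;
}

// A value conforming to the documented structure.
const settings: GridSettings = {
  external_filters: { name: null, customer: [1234, 1235] },
  columns_config: [
    { field: 'name', width: 120, title: 'Name', orderIndex: 1 },
    { field: 'email', width: 100, title: 'Email', hidden: true, orderIndex: 2 },
  ],
  state: { skip: 0, take: 10, sort: [] },
};
```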
&lt;h2&gt;
  
  
  Saving Grid Configurations
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Functional Breakdown
&lt;/h3&gt;

&lt;p&gt;Let's get into the details of how and when we save the data. First, we break each operation into a separate function so that it can be reused. We will have the following functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;saveGridSettings&lt;/strong&gt;: This function builds a data structure containing column configurations, grid state, and external filter data. It then saves this data to LocalStorage.&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

saveGridSettings() {
    const external_filters = this.searchForm.value;
    const grid_config = {
        contractor_list: {
            state: this.grid_state,
            external_filters: external_filters,
            columns_config: this.columnsConfig
        }
    };
    this.persistentStorageService.setGridSettings('grid_name', grid_config);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;setGridSettings(grid_key: string, gridConfig: any): void {&lt;br&gt;
    localStorage.setItem(grid_key, JSON.stringify(gridConfig));&lt;br&gt;
}&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
- **saveColumnConfigSettings**: This function saves the user-specific column configuration for the current grid to the backend via an API call. It's triggered when the "Apply" button is clicked.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;saveColumnConfigSettings(event: any) {
    const input_data = {
        grid: 'grid_name',
        columns_config: event
    };
    this.saveGridSettings();

    this.persistentStorageService.saveColumnConfigSettings(input_data).subscribe({
        next: (response: any) =&amp;gt; {
            this.toast.showSuccess(response.message);
            this.getColumnConfigSettings();
            this.contractor_list_grid_loading = false;
        },
        error: (error: any) =&amp;gt; {
            this.toast.showError(error.message);
        }
    });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
- **getColumnConfigSettings**: This function retrieves the user-specific column configuration for the current grid from the backend. It's used during component initialization.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;getColumnConfigSettings() {
    const input_data = {
        grid: 'grid_name'
    };

    this.persistentStorageService.getColumnConfigSettings(input_data).subscribe({
        next: (response: any) =&amp;gt; {
            if (response &amp;amp;&amp;amp; response['columns_config'].length !== 0) {
                this.mapGridSettings(response['columns_config']);
            } else {
                this.initialGridSettings();
            }
        },
        error: (error: any) =&amp;gt; {
            this.toast.showError(error.message);
            this.initialGridSettings();
        }
    });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
- **initializeGridRememberSettings**: This function checks for existing grid settings in LocalStorage. If unavailable, it fetches the user's column configuration via API or uses the default configuration if none exists.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ngOnInit(): void {
    // Other initializations
    this.initializeGridRememberSettings();
}

initializeGridRememberSettings() {
    if (!!this.savedState) {
        this.mapGridSettings();
    } else {
        this.getColumnConfigSettings();
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
### Application Flow

![Persistent data grids in angular workflow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jkk20kcat00bfuzl2hdq.png)

- **User interaction**: When the user applies filters or modifies the grid layout, the updated state (including external filters, column configurations, and grid state) is saved to LocalStorage with the help of state change events.

- **Local Storage cache**: This cached data ensures quick restoration of configurations when the user revisits the page.

- **API calls**: Saving of user-specific column configurations to the backend happens only when the user explicitly confirms with the "Apply" button, triggering the saveColumnConfigSettings function.

- **Component initialization**: During initial page load, the initializeGridRememberSettings function checks for LocalStorage data. If unavailable, it retrieves the user's configuration via API or uses the default configuration.

Overall, this approach leverages LocalStorage for efficient caching and utilizes the backend for persistent, user-specific configuration storage.

## Conclusion

Choosing the right approach for persistent storage in Angular data grids involves careful consideration of your application's requirements. In my case, LocalStorage with a backend database for grid configurations provided a flexible and scalable solution.

However, LocalStorage is not without drawbacks. Performance concerns arise due to the parsing and stringification processes involved. Instead, leveraging an Angular service to store grid data can optimize performance by avoiding these processes. This comes at the cost of losing data upon tab closure. Ultimately, the choice depends on your specific needs.

To further enhance this solution, consider maintaining a queue in LocalStorage with a maximum capacity of 10 data grids. This minimizes storage size by evicting older entries, assuming users navigating beyond 10 pages are unlikely to revisit those specific searches.

Remember, thorough analysis and planning before diving into code is crucial for ensuring a robust and efficient solution for persistent data grids in your Angular application. Happy coding!

## Further Resources

- What are data grids? [Introducing In-Memory Data Grid — Hazelcast IMDG | by Dina Bogdan | The Startup | Medium](https://medium.com/swlh/introducing-in-memory-data-grid-hazelcast-imdg-1f2af2be5344)

- For a deeper dive into Angular Routes, see [Angular Tutorial](https://www.mygreatlearning.com/blog/angular-routing-tutorial/).

- Learn more about LocalStorage and SessionStorage [Beginner's Guide to Web Storage](https://levelup.gitconnected.com/beginners-guide-to-web-storage-a88b6e5b522e)

- A Developer's Guide to HTTP Cookies: [cookies - Mozilla | MDN](https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/cookies)

- Introducing IndexedDB: [IndexedDB API](https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
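&lt;p&gt;The bounded queue of grids suggested in the conclusion could be sketched as follows. This is a rough, in-memory illustration under my own naming (GridSettingsQueue is not from the original code): a Map preserves insertion order, so re-saving a grid moves it to the back and the oldest entry is evicted once the cap is exceeded. A real implementation would additionally mirror the Map into LocalStorage.&lt;/p&gt;

```typescript
// Illustrative sketch: cap stored grid settings at MAX_GRIDS entries.
const MAX_GRIDS = 10;

class GridSettingsQueue {
  private grids = new Map<string, unknown>();

  set(gridKey: string, config: unknown): void {
    this.grids.delete(gridKey);     // refresh position if already present
    this.grids.set(gridKey, config);
    if (this.grids.size > MAX_GRIDS) {
      // Map iterates in insertion order, so the first key is the oldest.
      const oldest = this.grids.keys().next().value as string;
      this.grids.delete(oldest);    // evict the least recently saved grid
    }
  }

  get(gridKey: string): unknown | undefined {
    return this.grids.get(gridKey);
  }

  size(): number {
    return this.grids.size;
  }
}

const queue = new GridSettingsQueue();
for (let i = 0; i < 12; i++) {
  queue.set(`grid_${i}`, { state: { skip: 0, take: 10 } });
}
// After 12 saves, grid_0 and grid_1 have been evicted; 10 remain.
```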

</description>
      <category>angular</category>
      <category>storage</category>
      <category>softwaredevelopment</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
