<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sandeep Kumar Seeram</title>
    <description>The latest articles on DEV Community by Sandeep Kumar Seeram (@sandeepseeram).</description>
    <link>https://dev.to/sandeepseeram</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F422976%2Fb7d6014d-7b70-4563-a455-91e020069030.jpeg</url>
      <title>DEV Community: Sandeep Kumar Seeram</title>
      <link>https://dev.to/sandeepseeram</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sandeepseeram"/>
    <language>en</language>
    <item>
      <title>Grafana Faro</title>
      <dc:creator>Sandeep Kumar Seeram</dc:creator>
      <pubDate>Fri, 04 Oct 2024 10:57:14 +0000</pubDate>
      <link>https://dev.to/sandeepseeram/grafana-faro-3478</link>
      <guid>https://dev.to/sandeepseeram/grafana-faro-3478</guid>
      <description>&lt;p&gt;&lt;strong&gt;Grafana Faro&lt;/strong&gt; is a web SDK developed by Grafana Labs for frontend application observability. It is part of Grafana's broader observability stack and helps developers gain insights into the performance, health, and behavior of their web applications by collecting frontend metrics and logs. Faro can be integrated into web applications to monitor and troubleshoot client-side issues, enhancing the full visibility of an application's performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features of Grafana Faro:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Real-time Frontend Monitoring: Faro tracks key performance indicators (KPIs) for web applications such as page load times, user interactions, and JavaScript errors.&lt;/p&gt;

&lt;p&gt;Error Reporting: It captures JavaScript errors in real time, allowing you to see where and why failures occur in the browser.&lt;/p&gt;

&lt;p&gt;User Interaction Tracking: Monitors user events like clicks, form submissions, and other interactions to help identify areas of poor user experience or bugs in user flows.&lt;/p&gt;

&lt;p&gt;Session Tracking: Helps trace how users navigate through the application, making it easier to debug or track down performance issues based on session data.&lt;/p&gt;

&lt;p&gt;Integration with Grafana: Faro can send the data it collects to Grafana Cloud or a self-hosted Grafana setup, where the collected metrics, logs, and traces can be visualized and monitored.&lt;/p&gt;

&lt;p&gt;Lightweight SDK: Designed to have minimal overhead, ensuring it doesn’t slow down the performance of your web application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Common Use Cases for Grafana Faro:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Frontend Error Monitoring: Developers can track JavaScript errors in real time, helping to quickly fix bugs that affect the user experience.&lt;/p&gt;

&lt;p&gt;Performance Analytics: Track frontend performance metrics like page load times, Time to First Byte (TTFB), and other user-centric performance measures.&lt;/p&gt;

&lt;p&gt;User Experience Monitoring: Collect and analyze data on user interactions, helping to optimize workflows and ensure smooth user experiences.&lt;/p&gt;

&lt;p&gt;Session Replay: Track and replay user sessions to understand user behaviors and reproduce issues reported by users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Grafana Faro Works:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Installation: Grafana Faro is typically installed via npm or a CDN link in a web application.&lt;/p&gt;

&lt;p&gt;Configuration: After installing the SDK, you configure it to collect specific metrics, errors, or events. You can choose what data to track, such as performance metrics, user interactions, and errors.&lt;/p&gt;
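A minimal configuration sketch (assumes the @grafana/faro-web-sdk npm package; the collector URL and app name below are placeholders, not real endpoints):

```javascript
// Wire up Faro once, early in the app's startup code.
import { initializeFaro } from "@grafana/faro-web-sdk";

const faro = initializeFaro({
  // Placeholder collector endpoint; use your Grafana Cloud / Alloy URL here.
  url: "https://collector.example.com/collect",
  app: { name: "my-web-app", version: "1.0.0" },
});

// Besides the automatic instrumentation, events can be pushed manually:
faro.api.pushError(new Error("checkout failed"));
faro.api.pushLog(["checkout started"]);
```

Once initialized, the default instrumentations collect uncaught errors, web-vitals measurements, and console logs on their own; the manual API is for application-specific events.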

&lt;p&gt;Data Collection: Faro collects the specified data from users interacting with your web app. It can gather performance metrics like page load speed, errors, and user activity, and send this data to Grafana.&lt;/p&gt;

&lt;p&gt;Visualization and Alerts: The data can be integrated into Grafana, where you can visualize trends, set up alerts, and build dashboards to monitor frontend performance over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example Workflow:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Track frontend errors: By integrating Faro into a web app, you can automatically detect JavaScript errors that occur on the client side and send those logs to Grafana for analysis.&lt;/p&gt;

&lt;p&gt;Monitor page load performance: Use Faro to track how quickly pages load for users and visualize those metrics in Grafana to identify potential performance bottlenecks.&lt;/p&gt;

&lt;p&gt;Session Monitoring: Monitor individual user sessions to identify areas where users might be experiencing poor performance or encountering bugs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Full Observability Stack: Combined with other Grafana observability tools like Loki (for logs), Tempo (for traces), and Prometheus (for metrics), Faro offers a comprehensive view of both backend and frontend performance.&lt;/p&gt;

&lt;p&gt;Unified Dashboards: Developers and operations teams can see frontend and backend performance data together in Grafana dashboards, which makes it easier to track down issues across the stack.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who Should Use Grafana Faro?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Frontend Developers: To monitor and improve the performance and reliability of their web applications.&lt;/p&gt;

&lt;p&gt;SREs (Site Reliability Engineers): To gain a complete understanding of how frontend performance affects the overall system's reliability and user experience.&lt;/p&gt;

&lt;p&gt;DevOps Engineers: To get visibility into frontend and backend applications in a single platform.&lt;/p&gt;

</description>
      <category>grafana</category>
      <category>fullstack</category>
      <category>observability</category>
      <category>frontend</category>
    </item>
    <item>
      <title>What is a Slowloris attack?</title>
      <dc:creator>Sandeep Kumar Seeram</dc:creator>
      <pubDate>Sun, 23 Jun 2024 08:29:03 +0000</pubDate>
      <link>https://dev.to/sandeepseeram/what-is-a-slowloris-attack-50d</link>
      <guid>https://dev.to/sandeepseeram/what-is-a-slowloris-attack-50d</guid>
      <description>&lt;p&gt;A Slowloris attack is a type of denial-of-service (DoS) attack that targets web servers by exhausting their connection capacity. This attack, often referred to as a slow HTTP DoS attack, takes advantage of how web servers manage connections, making them unable to handle legitimate requests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Origins of the Slowloris Attack&lt;/strong&gt;&lt;br&gt;
The name "Slowloris" comes from a tool created by Robert "RSnake" Hansen in 2009, named after the slow-moving primate, the slow loris. This tool demonstrated how an attacker could use slow HTTP requests to overwhelm a server. The technique has been used in significant real-world incidents, such as attacks on Iranian government websites following the 2009 presidential election.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Does a Slowloris Attack Work?&lt;/strong&gt;&lt;br&gt;
Web servers can be either thread-based (e.g., Apache, Microsoft IIS) or event-based (e.g., Nginx, lighttpd). Thread-based servers handle fewer connections than event-based servers. For instance, Apache can handle 150 connections by default, whereas Nginx can manage 512.&lt;/p&gt;

&lt;p&gt;A server keeps a connection open until it receives all HTTP headers and the complete body of a request, or until it times out. Apache, for example, has a default timeout of 300 seconds. An attacker exploiting this can send numerous incomplete HTTP requests, keeping the connections open and preventing the server from accepting new ones.&lt;/p&gt;
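As an illustration of the mechanism (this sketch only builds strings; it opens no connections), the request Slowloris holds open looks like an ordinary GET whose terminating blank line never arrives:

```javascript
// A complete HTTP request ends with a blank line ("\r\n\r\n").
// Slowloris never sends it, and instead trickles one junk header at a
// time to keep resetting the server's read timeout.

function partialRequest(host) {
  // Opening lines of a GET request, deliberately left unterminated.
  return `GET / HTTP/1.1\r\nHost: ${host}\r\nUser-Agent: Mozilla/5.0\r\n`;
}

function keepAliveHeader() {
  // One more junk header, sent every few seconds on each held connection.
  return `X-a: ${Math.floor(Math.random() * 5000)}\r\n`;
}

const req = partialRequest("victim.example.com");
console.log(req.endsWith("\r\n\r\n")); // false: the request is never completed
```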

&lt;p&gt;&lt;strong&gt;Variations: Slow HTTP POST Attack&lt;/strong&gt;&lt;br&gt;
A variation of the Slowloris attack is the slow HTTP POST attack. Instead of GET requests, it uses POST requests, sending data very slowly to keep the connection alive and evade timeout protections. This method is harder to detect and mitigate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Detecting Slowloris Attacks&lt;/strong&gt;&lt;br&gt;
Detecting a Slowloris attack can be challenging as it uses legitimate-looking requests. Monitoring for patterns such as numerous long-duration connections, partial HTTP requests, and high server resource usage is essential. Intrusion detection systems (IDS) often miss these attacks, so continuous monitoring is crucial.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mitigating Slowloris Attacks on Apache Servers&lt;/strong&gt;&lt;br&gt;
Apache servers are common targets for slow HTTP DoS attacks. Here are three effective mitigation techniques:&lt;/p&gt;

&lt;p&gt;Using the mod_reqtimeout Module:&lt;/p&gt;

&lt;p&gt;This module sets time limits for receiving HTTP request headers and bodies. If the client doesn't send data within the set time, the server responds with a 408 REQUEST TIMEOUT error.&lt;/p&gt;

&lt;p&gt;Apache configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;IfModule mod_reqtimeout.c&amp;gt;
  RequestReadTimeout header=20-40,MinRate=500 body=20-40,MinRate=500
&amp;lt;/IfModule&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the mod_qos Module:&lt;/p&gt;

&lt;p&gt;This module allows assigning different priorities to HTTP requests. It limits the number of connections per IP and enforces minimum data transfer rates.&lt;/p&gt;

&lt;p&gt;Apache configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;IfModule mod_qos.c&amp;gt;
  QS_ClientEntries 100000
  QS_SrvMaxConnPerIP 50
  MaxClients 256
  QS_SrvMaxConnClose 180
  QS_SrvMinDataRate 150 1200
&amp;lt;/IfModule&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the mod_security Module:&lt;/p&gt;

&lt;p&gt;This web application firewall (WAF) can block IPs generating multiple 408 responses, indicating potential slow HTTP DoS attacks.&lt;/p&gt;

&lt;p&gt;Apache configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SecRule RESPONSE_STATUS "@streq 408" "phase:5,t:none,nolog,pass,setvar:ip.slow_dos_counter=+1,expirevar:ip.slow_dos_counter=60,id:'1234123456'"
SecRule IP:SLOW_DOS_COUNTER "@gt 5" "phase:1,t:none,log,drop,msg:'Client Connection Dropped due to high number of slow DoS alerts',id:'1234123457'"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Combining these methods with additional protections like load balancers, reverse proxies, and rate limiting can significantly enhance server resilience against Slowloris attacks.&lt;/p&gt;

&lt;p&gt;By understanding and implementing these strategies, you can better protect your web servers from the disruptive impact of Slowloris and similar slow HTTP attacks.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Java Message Service (JMS)</title>
      <dc:creator>Sandeep Kumar Seeram</dc:creator>
      <pubDate>Sun, 14 May 2023 14:43:57 +0000</pubDate>
      <link>https://dev.to/sandeepseeram/java-message-service-jms-2h8k</link>
      <guid>https://dev.to/sandeepseeram/java-message-service-jms-2h8k</guid>
      <description>&lt;p&gt;Java Message Service (JMS) is a messaging API that provides a standard way for Java applications to send and receive messages. JMS is a loosely coupled messaging system, which means that the sender and receiver of a message do not need to be running at the same time. JMS is also a reliable messaging system, which means that messages are not lost or corrupted.&lt;/p&gt;

&lt;p&gt;JMS is used in a variety of applications, including:&lt;/p&gt;

&lt;p&gt;• Enterprise application integration (EAI)&lt;br&gt;
• Business-to-business (B2B) integration&lt;br&gt;
• Web services&lt;br&gt;
• Cloud computing&lt;/p&gt;

&lt;p&gt;JMS provides two messaging domains:&lt;/p&gt;

&lt;p&gt;• Point-to-point messaging&lt;br&gt;
• Publish-subscribe messaging&lt;/p&gt;

&lt;p&gt;In point-to-point messaging, there is a one-to-one relationship between the sender and receiver of a message. In publish-subscribe messaging, there is a one-to-many relationship between the sender and receiver of a message.&lt;/p&gt;
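The difference between the two domains can be sketched with a toy in-memory broker (plain JavaScript for illustration; this is not the JMS API): a queue delivers each message to exactly one consumer, while a topic delivers every message to all subscribers:

```javascript
// Toy broker illustrating the two JMS messaging domains.
class Queue {
  constructor() { this.consumers = []; this.next = 0; }
  subscribe(fn) { this.consumers.push(fn); }
  send(msg) {
    // Point-to-point: each message goes to exactly one consumer (round-robin).
    const fn = this.consumers[this.next % this.consumers.length];
    this.next += 1;
    fn(msg);
  }
}

class Topic {
  constructor() { this.subscribers = []; }
  subscribe(fn) { this.subscribers.push(fn); }
  publish(msg) {
    // Publish-subscribe: every subscriber receives every message.
    for (const fn of this.subscribers) fn(msg);
  }
}

const received = { a: [], b: [] };
const queue = new Queue();
queue.subscribe((m) => received.a.push(m));
queue.subscribe((m) => received.b.push(m));
queue.send("order-1"); // delivered only to consumer a
queue.send("order-2"); // delivered only to consumer b

const seen = { a: [], b: [] };
const topic = new Topic();
topic.subscribe((m) => seen.a.push(m));
topic.subscribe((m) => seen.b.push(m));
topic.publish("price-update"); // both subscribers see it
```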

&lt;p&gt;JMS provides a number of features that make it a powerful messaging API, including:&lt;/p&gt;

&lt;p&gt;• Message persistence&lt;br&gt;
• Message acknowledgment&lt;br&gt;
• Message delivery guarantees&lt;br&gt;
• Message filtering&lt;br&gt;
• Message expiration&lt;br&gt;
• Message transformation&lt;/p&gt;

&lt;p&gt;JMS is a mature and widely used messaging API. It is supported by a variety of messaging vendors, including IBM, Oracle, and Red Hat.&lt;/p&gt;

&lt;p&gt;Here are some of the benefits of using JMS:&lt;/p&gt;

&lt;p&gt;• Loosely coupled messaging: The sender and receiver of a message do not need to be running at the same time.&lt;br&gt;
• Reliable messaging: Messages are not lost or corrupted.&lt;br&gt;
• Standardized API: JMS is a standard API, which means that Java applications can communicate with any JMS provider.&lt;br&gt;
• Vendor-neutral: JMS is a vendor-neutral API, which means that Java applications can communicate with any JMS provider without being locked into a particular vendor.&lt;br&gt;
• Mature and widely used: JMS is a mature and widely used messaging API, which means that there is a large body of knowledge and expertise available.&lt;/p&gt;

&lt;p&gt;A Java program that sends messages via JMS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import javax.jms.*;
import javax.naming.*;

public class JmsMessageSender {
    public static void main(String[] args) throws NamingException, JMSException {
        // Set up the JNDI context to access the JMS provider
        Context context = new InitialContext();
        ConnectionFactory connectionFactory = (ConnectionFactory) context.lookup("ConnectionFactory");
        Destination destination = (Destination) context.lookup("queue/MyQueue");

        // Create a JMS connection and session
        Connection connection = connectionFactory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Create a JMS message producer
        MessageProducer producer = session.createProducer(destination);

        // Create a text message and send it
        TextMessage message = session.createTextMessage("Hello, world!");
        producer.send(message);

        // Clean up resources
        producer.close();
        session.close();
        connection.close();
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A Java program that receives messages from the JMS queue named “MyQueue”:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import javax.jms.*;
import javax.naming.*;

public class JmsMessageReceiver {
    public static void main(String[] args) throws NamingException, JMSException {
        // Set up the JNDI context to access the JMS provider
        Context context = new InitialContext();
        ConnectionFactory connectionFactory = (ConnectionFactory) context.lookup("ConnectionFactory");
        Destination destination = (Destination) context.lookup("queue/MyQueue");

        // Create a JMS connection and session
        Connection connection = connectionFactory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Create a JMS message consumer
        MessageConsumer consumer = session.createConsumer(destination);

        // Start the connection
        connection.start();

        // Receive messages until none arrive within the timeout
        while (true) {
            Message message = consumer.receive(1000);
            if (message == null) {
                break; // no message received for one second; stop consuming
            }
            if (message instanceof TextMessage) {
                TextMessage textMessage = (TextMessage) message;
                System.out.println("Received message: " + textMessage.getText());
            } else {
                System.out.println("Received message of unsupported type: " + message.getClass().getName());
            }
        }

        // Clean up resources
        consumer.close();
        session.close();
        connection.close();
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>java</category>
      <category>jms</category>
      <category>pubsub</category>
    </item>
    <item>
      <title>Getting started on your Zero Trust journey</title>
      <dc:creator>Sandeep Kumar Seeram</dc:creator>
      <pubDate>Tue, 07 Feb 2023 07:37:37 +0000</pubDate>
      <link>https://dev.to/sandeepseeram/getting-started-on-your-zero-trust-journey-3la6</link>
      <guid>https://dev.to/sandeepseeram/getting-started-on-your-zero-trust-journey-3la6</guid>
      <description>&lt;p&gt;Zero Trust is an information security model based on the assumption that all actors, both internal and external, are untrusted and should never be allowed access to resources without explicit authorization. The ultimate goal is to prevent unauthorized access to sensitive information, no matter where it is stored or accessed.&lt;/p&gt;

&lt;p&gt;The core components of a Zero Trust Security Model are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-Factor Authentication (MFA):&lt;/strong&gt; Using MFA enables organizations to ensure that only authorized individuals can access their networks and systems, by requiring multiple forms of identification to authenticate users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Access Control:&lt;/strong&gt; Access control is the process of restricting access to a system based on user roles and privileges. Access control can also be used to prevent unauthorized access to sensitive data and systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Least Privilege:&lt;/strong&gt; The principle of least privilege stipulates that users and applications should only have access to the resources they need to perform their job. This means organizations should grant only the minimum access to their systems and data that each role requires.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Microsegmentation:&lt;/strong&gt; Microsegmentation is a security strategy that divides a network into small segments to help protect critical resources from unauthorized access. By segmenting the network, organizations can reduce the risk of a security breach by limiting the attack surface.&lt;/p&gt;

&lt;p&gt;In addition to these core components, Zero Trust models should include other security measures such as endpoint security, security monitoring, patch management, and encryption. When used together, these measures can help mitigate the risk of a data breach and protect the organization from malicious actors. &lt;/p&gt;

&lt;p&gt;Getting started on your Zero Trust journey is a complex task, but it is a necessary one. The steps to begin your journey are outlined below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Determine Your Security Goals:&lt;/strong&gt; Before you begin your Zero Trust journey, it's important to take the time to identify your security goals, such as ensuring compliance, reducing risk, or protecting sensitive data. This will help you to identify the areas that need the most attention and set the guidelines for your journey.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Review Your Existing Security Solutions:&lt;/strong&gt; After you have identified your security goals, it's important to review your existing security solutions. This includes assessing your current network, perimeter, and identity access management (IAM) system and determining how they can be modified to better meet your needs. This is an important step as it will help you to identify any areas that are not adequately protected or that may need to be supplemented with additional security measures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implement Multi-factor Authentication (MFA):&lt;/strong&gt; Implementing multi-factor authentication (MFA) is an essential part of any Zero Trust strategy. MFA requires users to provide multiple factors of authentication, such as a password and a code sent to their phones, before they are granted access. This added layer of security can help to protect your data from malicious actors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Utilize User and Entity Behavior Analytics (UEBA):&lt;/strong&gt; User and entity behavior analytics (UEBA) is another tool that can help you to protect your data. UEBA is a form of machine learning that looks for anomalies in user behavior. If unusual activity is detected, the system can alert administrators, who can then investigate to determine if the activity is malicious or not. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implement Risk-Based Access Policy:&lt;/strong&gt; Risk-based access policy is another key element of a Zero Trust strategy. This policy looks at the user, the device, and the context of the transaction in order to determine the level of access that should be granted. &lt;/p&gt;
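A hypothetical sketch of such a policy (the signal names and weights below are invented for illustration; real products use far richer signals):

```javascript
// Score user, device, and context signals, then map the total onto an
// access level. Weights are illustrative only.
function riskScore({ knownDevice, trustedNetwork, unusualHour, newGeo }) {
  let score = 0;
  if (!knownDevice) score += 40;    // unmanaged or unknown device
  if (!trustedNetwork) score += 20; // off the corporate network
  if (unusualHour) score += 15;     // outside normal working hours
  if (newGeo) score += 25;          // sign-in from a new location
  return score;
}

function accessDecision(score) {
  if (score >= 60) return "deny";
  if (score >= 30) return "require-mfa";
  return "allow";
}

const score = riskScore({ knownDevice: true, trustedNetwork: false, unusualHour: true, newGeo: false });
console.log(score, accessDecision(score)); // 35 "require-mfa"
```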

</description>
      <category>zerotrust</category>
      <category>security</category>
    </item>
    <item>
      <title>Horizontal partitioning (Sharding) on MongoDB - Cloud-Native Data Patterns</title>
      <dc:creator>Sandeep Kumar Seeram</dc:creator>
      <pubDate>Sun, 25 Sep 2022 07:52:11 +0000</pubDate>
      <link>https://dev.to/sandeepseeram/horizontal-partitioning-sharding-on-mongodb-cloud-native-data-patterns-lk6</link>
      <guid>https://dev.to/sandeepseeram/horizontal-partitioning-sharding-on-mongodb-cloud-native-data-patterns-lk6</guid>
      <description>&lt;p&gt;In many large-scale solutions, data is divided into partitions that can be managed and accessed separately. Partitioning can improve scalability, reduce contention, and optimize performance. It can also provide a mechanism for dividing data by usage pattern. &lt;/p&gt;

&lt;p&gt;For example, you can archive older data in cheaper data storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Designing partitions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are three typical strategies for partitioning data:&lt;/p&gt;

&lt;p&gt;Horizontal partitioning (often called sharding). In this strategy, each partition is a separate data store, but all partitions have the same schema. Each partition is known as a shard and holds a specific subset of the data, such as all the orders for a specific set of customers.&lt;/p&gt;

&lt;p&gt;Vertical partitioning. In this strategy, each partition holds a subset of the fields for items in the data store. The fields are divided according to their pattern of use. For example, frequently accessed fields might be placed in one vertical partition and less frequently accessed fields in another.&lt;/p&gt;

&lt;p&gt;Functional partitioning. In this strategy, data is aggregated according to how it is used by each bounded context in the system. For example, an e-commerce system might store invoice data in one partition and product inventory data in another.&lt;/p&gt;

&lt;p&gt;In this article, we will take a step-by-step approach to dividing data into partitions (horizontal partitioning, also known as sharding) so that it can be accessed and managed separately. &lt;/p&gt;

&lt;p&gt;Technologies used: MongoDB, Docker, Docker-Compose &lt;/p&gt;

&lt;p&gt;Creating a Shard on MongoDB: We will use MongoDB to implement sharding. Sharding in MongoDB is implemented with groups of MongoDB instances called clusters. We will need a configuration server, which holds the information about the different shards in the cluster, and a router, which is responsible for routing client requests to the appropriate backend shards. &lt;/p&gt;

&lt;p&gt;We will use the Docker Compose file below, with scripts defining the MongoDB configuration server, three shards, and a router. &lt;/p&gt;

&lt;p&gt;version: "2"&lt;br&gt;
services:&lt;br&gt;
  # Configuration server&lt;br&gt;
  config:&lt;br&gt;
    image: mongo&lt;br&gt;
    command: mongod --configsvr --replSet configserver --port 27017&lt;br&gt;
    volumes:&lt;br&gt;
      - ./scripts:/scripts&lt;br&gt;
  # Shards&lt;br&gt;
  shard1:&lt;br&gt;
    image: mongo&lt;br&gt;
    command: mongod --shardsvr --replSet shard1 --port 27018&lt;br&gt;
    volumes:&lt;br&gt;
      - ./scripts:/scripts&lt;br&gt;
  shard2:&lt;br&gt;
    image: mongo&lt;br&gt;
    command: mongod --shardsvr --replSet shard2 --port 27019&lt;br&gt;
    volumes:&lt;br&gt;
      - ./scripts:/scripts&lt;br&gt;
  shard3:&lt;br&gt;
    image: mongo&lt;br&gt;
    command: mongod --shardsvr --replSet shard3 --port 27020&lt;br&gt;
    volumes:&lt;br&gt;
      - ./scripts:/scripts&lt;br&gt;
  # Router&lt;br&gt;
  router:&lt;br&gt;
    image: mongo&lt;br&gt;
    command: mongos --configdb configserver/config:27017 --bind_ip_all --port 27017&lt;br&gt;
    ports:&lt;br&gt;
      - "27017:27017"&lt;br&gt;
    volumes:&lt;br&gt;
      - ./scripts:/scripts&lt;br&gt;
    depends_on:&lt;br&gt;
      - config&lt;br&gt;
      - shard1&lt;br&gt;
      - shard2&lt;br&gt;
      - shard3&lt;/p&gt;

&lt;p&gt;The init-config.js script initializes the single configuration server replica set: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Initializes the config server
rs.initiate({
  _id: 'configserver',
  configsvr: true,
  version: 1,
  members: [
    {
      _id: 0,
      host: 'config:27017',
    },
  ],
});
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Execute the script inside the config container:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose exec config sh -c "mongosh --port 27017 &amp;lt; /scripts/init-config.js"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Next, we can configure the three shards by running the three shard scripts. Just like the previous initialization command, this one will initialize the MongoDB shards.&lt;/p&gt;

&lt;p&gt;Shard configuration files: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Initialize shard1
rs.initiate({
  _id: 'shard1',
  version: 1,
  members: [{ _id: 0, host: 'shard1:27018' }],
});

// Initialize shard2
rs.initiate({
  _id: 'shard2',
  version: 1,
  members: [{ _id: 0, host: 'shard2:27019' }],
});

// Initialize shard3
rs.initiate({
  _id: 'shard3',
  version: 1,
  members: [{ _id: 0, host: 'shard3:27020' }],
});
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Initialize shard 1:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose exec shard1 sh -c "mongosh --port 27018 &amp;lt; /scripts/init-shard1.js"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Initialize shard 2:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose exec shard2 sh -c "mongosh --port 27019 &amp;lt; /scripts/init-shard2.js"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Initialize shard 3:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose exec shard3 sh -c "mongosh --port 27020 &amp;lt; /scripts/init-shard3.js"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;With the three shards configured, we now have to register them with the router. &lt;/p&gt;

&lt;p&gt;The init-router.js script registers the shards with the router: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Initialize the router with shards
sh.addShard('shard1/shard1:27018');
sh.addShard('shard2/shard2:27019');
sh.addShard('shard3/shard3:27020');
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Configure the router with this command: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose exec router sh -c "mongosh &amp;lt; /scripts/init-router.js"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Verification: &lt;/p&gt;

&lt;p&gt;To verify the status of the sharded cluster, we can get a shell inside the router container:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose exec router mongosh
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Once we get the mongos&amp;gt; prompt, we can run the sh.status() command. &lt;/p&gt;

&lt;p&gt;Under the shards field the configuration should look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;shards:
        {  "_id" : "shard1",  "host" : "shard1/shard1:27018",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/shard2:27019",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/shard3:27020",  "state" : 1 }
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Type exit to exit from the router container.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configure hashed sharding&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With the MongoDB sharded cluster running, we need to enable sharding for a database and then shard a specific collection in that database.&lt;/p&gt;

&lt;p&gt;MongoDB provides two strategies for sharding collections: hashed sharding and range-based sharding.&lt;/p&gt;

&lt;p&gt;Hashed sharding uses a hashed index of a single field as the shard key to partition the data. Range-based sharding can use multiple fields as the shard key and divides the data into adjacent ranges based on the shard-key values.&lt;/p&gt;

&lt;p&gt;The first thing that we need to do is to enable sharding on our database. &lt;/p&gt;

&lt;p&gt;We will be running all commands from within the router container. &lt;/p&gt;

&lt;p&gt;Run docker-compose exec router mongosh to get the MongoDB shell.&lt;/p&gt;

&lt;p&gt;Let's enable sharding in a database called test:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sh.enableSharding('test')
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Finally, we can configure how we want to shard specific collections inside the database. We will configure sharding for the customertable collection:&lt;/p&gt;

&lt;p&gt;sh.shardCollection("test.customertable", { title : "hashed" } )&lt;br&gt;
Type exit to exit the container.&lt;/p&gt;

&lt;p&gt;With sharding enabled, you can start importing your data and see how it gets distributed between the different shards. &lt;/p&gt;

</description>
      <category>cloudnative</category>
      <category>mongodb</category>
      <category>docker</category>
    </item>
    <item>
      <title>Hybrid Identity</title>
      <dc:creator>Sandeep Kumar Seeram</dc:creator>
      <pubDate>Fri, 03 Jul 2020 11:30:59 +0000</pubDate>
      <link>https://dev.to/sandeepseeram/hybrid-identity-f45</link>
      <guid>https://dev.to/sandeepseeram/hybrid-identity-f45</guid>
      <description>&lt;p&gt;Hybrid Identity&lt;br&gt;
Businesses are now challenged to deal with increased workforce mobility and the rise of technology avenues in the market to better serve customers and partners. The prime goal for any business is to protect the assets (digital &amp;amp; physical) and make them securely accessible to customers, partners, vendors and employees. Identity and Access Management (IAM) has been a strong security pillar over the years providing these safe guards. Now, new IAM architecture concepts are rapidly evolving. One such concept is “Hybrid Identity” &lt;br&gt;
Implementing IAM strategy is always proven to be a well-planned, tested and executed business function. With cloud environments growing, many businesses moving their applications to cloud and making them accessible to wider audience brings in the concepts like Single Sign On (SSO), Multi-Factor Authentication (MFA), Self Service Password Management, Device Management etc. The goal is to not only provide latency-free, secure authentication capabilities to the applications, but also to provide rich user experience. &lt;br&gt;
I noticed; the infrastructure hosting directory servers and integrations is well hardened and any change to existing infrastructure requires lot of change approvals and reviews. With this, any modernization plans for the current IAM infrastructure will be a time prone activity. Well, Cloud Identity is rapidly growing on other hand giving businesses the right set of interfaces and integrations to manage all their identity needs without the overhead of managing the directory infrastructure. Everything, Identity Authentication, Authorization, Auditing, Access Management happens in the cloud. Being a pay as you go solution, Cloud Identity is well adopted among the startups and digital service businesses. &lt;br&gt;
Majority of medium and large enterprises still tend to develop applications that depend on a federated on-prem authentication. Applications hosted in cloud and authentication requests sent to and forth to on-prem directories for authentication is not a good idea for scalability. So, the “Hybrid Identity” emerged - &lt;br&gt;
The concept of Hybrid Identity, originated years back with cloud providers offering hybrid identity capabilities for customers on their cloud hosted identity directories. With Hybrid Identity, businesses have full control to over identity management and carry authentication and authorization functions either on on-prem or in the cloud depending on their application requirements. Hybrid Identity also promoted a new concept called “Bring Your Own Identity” BYOI – your application will start accepting identities from trusted third-parties.  Here is an example of Cisco application for customers and partners accepting identities from trusted third-parties. &lt;/p&gt;

&lt;p&gt;Microsoft Azure AD is a Microsoft offering that provides both cloud and hybrid identity solutions. There are three deployment models for Hybrid Identity: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LACA-LBH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tvournx1m35xzmj0fydb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LACA-LBH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tvournx1m35xzmj0fydb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>identity</category>
      <category>cloud</category>
      <category>security</category>
      <category>authentication</category>
    </item>
  </channel>
</rss>
