<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Franklin Thaker</title>
    <description>The latest articles on DEV Community by Franklin Thaker (@franklinthaker).</description>
    <link>https://dev.to/franklinthaker</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1485757%2F42457441-58d8-49a9-a0e0-c78e36623bbe.jpeg</url>
      <title>DEV Community: Franklin Thaker</title>
      <link>https://dev.to/franklinthaker</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/franklinthaker"/>
    <language>en</language>
    <item>
      <title>Avoiding console.log in Production: Best Practices for Robust Logging</title>
      <dc:creator>Franklin Thaker</dc:creator>
      <pubDate>Sun, 08 Dec 2024 05:26:02 +0000</pubDate>
      <link>https://dev.to/franklinthaker/avoiding-consolelog-in-production-best-practices-for-robust-logging-5me</link>
      <guid>https://dev.to/franklinthaker/avoiding-consolelog-in-production-best-practices-for-robust-logging-5me</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Logging is crucial for debugging and monitoring applications, but improper logging can lead to performance issues, security vulnerabilities, and cluttered output. In this article, we'll explore why console.log should be avoided in production and provide best practices using examples.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why You Should Avoid console.log in Production&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Performance Overhead&lt;/strong&gt;
-&amp;gt; Logging a million messages in the loop below took around 46 seconds on my system.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;console.time("with -&amp;gt; console.log");
for (let i = 0; i &amp;lt; 1000000; i++) {
    console.log(`Iteration number: ${i}`);
}
console.timeEnd("with -&amp;gt; console.log");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This loop logs a message a million times, causing performance degradation.&lt;/p&gt;

&lt;p&gt;-&amp;gt; The same loop without logging took around 1 ms on my system.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;console.time("without -&amp;gt; console.log");
for (let i = 0; i &amp;lt; 1000000; i++) {
}
console.timeEnd("without -&amp;gt; console.log");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Security Risks&lt;/strong&gt;
Logging sensitive information can expose it to unintended parties, as in this example that logs raw credentials:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const userCredentials = { username: 'john_doe', password: 's3cr3t' };
console.log(userCredentials);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cluttered Logs&lt;/strong&gt;
Frequent logging can overwhelm the console, making it difficult to find relevant information.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function processOrder(order) {
  console.log('Processing order:', order);
  // Order processing logic here
  console.log('Order processed successfully');
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Best Practices for Logging in Production&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use a Proper Logging Library&lt;/strong&gt;
Libraries like winston, pino, or log4js provide structured logging with log levels; morgan covers HTTP request logging specifically.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const pino = require('pino');
const logger = pino();

function processOrder(order) {
  logger.info({ order }, 'Processing order');
  // Order processing logic here
  logger.info('Order processed successfully');
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Log Sensitive Information Securely&lt;/strong&gt;
Avoid logging sensitive data directly; log only the non-sensitive fields you actually need.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const userCredentials = { username: 'john_doe', password: 's3cr3t' };
logger.info({ username: userCredentials.username }, 'User logged in');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Implement Conditional Logging&lt;/strong&gt;
Gate verbose logging behind an environment check so it never runs in production.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const isProduction = process.env.NODE_ENV === 'production';

function log(message) {
  if (!isProduction) {
    console.log(message);
  }
}

log('This message will only appear in development');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
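The conditional-logging idea above generalizes naturally to log levels. Below is a minimal, dependency-free sketch of a leveled logger (a stand-in for what libraries like pino or winston provide; the level names and the JSON output shape are illustrative assumptions, not any library's API):

```javascript
// Minimal leveled logger: messages below the configured
// threshold are skipped entirely.
const LEVELS = { debug: 10, info: 20, warn: 30, error: 40 };

function createLogger(minLevel = process.env.LOG_LEVEL || 'info') {
  const threshold = LEVELS[minLevel];
  const log = (level, message) => {
    if (LEVELS[level] >= threshold) {
      console.log(JSON.stringify({ level, message, time: Date.now() }));
    }
  };
  return {
    debug: (m) => log('debug', m),
    info: (m) => log('info', m),
    warn: (m) => log('warn', m),
    error: (m) => log('error', m),
  };
}

const logger = createLogger('warn');
logger.debug('hidden in production'); // skipped, below the threshold
logger.error('disk full');            // emitted
```

In production you would set LOG_LEVEL to warn or error, so debug noise never reaches the console at all.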



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Log to a Server or External Service&lt;/strong&gt;
Ship important events to a central endpoint so they survive process restarts and remain searchable.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const axios = require('axios');

function logToServer(message) {
  axios.post('/api/log', { message })
    .catch(error =&amp;gt; console.error('Failed to send log:', error));
}

logToServer('This is an important event');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
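Sending one HTTP request per log line can be wasteful. A common refinement is to buffer messages and flush them in batches; the sketch below is dependency-free, and the send callback is a placeholder assumption for a real transport such as the axios POST above:

```javascript
// Buffer log messages and flush them in batches, so the
// transport (for example an HTTP POST) is called once per
// batch instead of once per message.
function createBatchLogger(send, batchSize = 10) {
  let buffer = [];
  return {
    log(message) {
      buffer.push(message);
      if (buffer.length >= batchSize) this.flush();
    },
    flush() {
      if (buffer.length === 0) return;
      send(buffer); // placeholder for a real HTTP call
      buffer = [];
    },
  };
}

// Example with a fake transport that just records batches.
const batches = [];
const batchLogger = createBatchLogger((msgs) => batches.push(msgs), 3);
['a', 'b', 'c', 'd'].forEach((m) => batchLogger.log(m));
batchLogger.flush(); // flush the remaining message on shutdown
```

A real implementation would also flush on a timer and on process exit, so buffered messages are not lost.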



&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Using console.log in production can lead to performance issues, security risks, and cluttered logs. By adopting proper logging practices with dedicated libraries and secure methodologies, you can ensure that your application is robust, maintainable, and secure.&lt;/p&gt;

</description>
      <category>logging</category>
      <category>javascript</category>
      <category>node</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Effortless Web Hosting with Caddy: A Beginner’s Guide</title>
      <dc:creator>Franklin Thaker</dc:creator>
      <pubDate>Sat, 23 Nov 2024 05:35:02 +0000</pubDate>
      <link>https://dev.to/franklinthaker/effortless-web-hosting-with-caddy-a-beginners-guide-4pg3</link>
      <guid>https://dev.to/franklinthaker/effortless-web-hosting-with-caddy-a-beginners-guide-4pg3</guid>
      <description>&lt;p&gt;In the ever-evolving world of web servers, &lt;strong&gt;Caddy&lt;/strong&gt; has emerged as a game-changer. Designed for simplicity and modern workflows, it offers automatic &lt;strong&gt;HTTPS&lt;/strong&gt;, straightforward configuration, and a developer-friendly approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Caddy Server?
&lt;/h2&gt;

&lt;p&gt;Caddy is a modern, open-source web server written in Go. It’s known for its ability to handle complex web hosting tasks effortlessly. Whether you’re hosting a static site, reverse proxying APIs, or managing secure connections, Caddy simplifies the process with features like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automatic HTTPS:&lt;/strong&gt; Get secure connections by default using Let's Encrypt.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Readable Configs:&lt;/strong&gt; Manage sites with a simple Caddyfile—no cryptic syntax required.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Built-in Reverse Proxy:&lt;/strong&gt; Handle API routing and load balancing without extra tools.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why Choose Caddy?
&lt;/h2&gt;

&lt;p&gt;Compared to traditional servers like Apache and Nginx, Caddy is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Easier to Use:&lt;/strong&gt; Minimal setup and automatic configurations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secure by Default:&lt;/strong&gt; HTTPS is enabled automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lightweight Yet Powerful:&lt;/strong&gt; Written in Go for speed and scalability.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Quick Setup
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Download the build for your OS -&amp;gt; &lt;a href="https://caddyserver.com/download" rel="noopener noreferrer"&gt;https://caddyserver.com/download&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a Caddyfile in your server's directory.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Example Caddyfile

localhost {
    reverse_proxy localhost:8080
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Start the Server:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;sudo caddy start&lt;/code&gt; -&amp;gt; run the server in the background&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sudo caddy stop&lt;/code&gt; -&amp;gt; stop the caddy server running in the background&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sudo caddy run&lt;/code&gt; -&amp;gt; run the server in the foreground&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  When to Use Caddy?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Static Websites:&lt;/strong&gt; Host portfolios, blogs, or landing pages with ease.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;APIs and Microservices:&lt;/strong&gt; Use built-in reverse proxying for seamless API management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTTPS Needs:&lt;/strong&gt; Say goodbye to manual SSL certificate management.&lt;/li&gt;
&lt;/ul&gt;
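For the static-site case, a Caddyfile can stay very short. The domain and web root below are placeholder assumptions; Caddy provisions and renews the HTTPS certificate for the domain automatically:

```caddyfile
# Serve a static site over HTTPS
example.com {
    root * /var/www/mysite
    file_server
    encode gzip
}
```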

&lt;h2&gt;
  
  
  Quick Example:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// app.js
const express = require('express');
const app = express();
const port = 8080;

app.get('/', (req, res) =&amp;gt; {
  res.send('Hello from Express server!');
});

app.listen(port, () =&amp;gt; {
  console.log(`Server is running on http://localhost:${port}`);
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Caddyfile

localhost {
    reverse_proxy localhost:8080
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;node app.js&lt;/code&gt; -&amp;gt; starts your backend server on port 8080&lt;br&gt;
&lt;code&gt;sudo caddy run&lt;/code&gt; -&amp;gt; starts Caddy, which serves requests at https://localhost (port 443) and proxies them to your backend&lt;/p&gt;

&lt;h2&gt;
  
  
  Extra tips
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;On Linux, once you download the binary from the official site, rename it to "caddy" and move it to "/usr/local/bin/".&lt;/li&gt;
&lt;li&gt;Then make it executable: &lt;code&gt;sudo chmod +x /usr/local/bin/caddy&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;After that, you can run "caddy" from anywhere on your system.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Caddy&lt;/strong&gt; server is perfect for developers looking for a modern, no-hassle web server. Whether you’re a beginner or a seasoned professional, its features and simplicity make it an excellent choice for most projects.&lt;/p&gt;

</description>
      <category>caddy</category>
      <category>nginx</category>
      <category>webdev</category>
      <category>beginners</category>
    </item>
    <item>
      <title>The Mind Games of Jigsaw: A list of His Most Famous Quotes</title>
      <dc:creator>Franklin Thaker</dc:creator>
      <pubDate>Sat, 09 Nov 2024 06:49:09 +0000</pubDate>
      <link>https://dev.to/franklinthaker/the-mind-games-of-jigsaw-a-list-of-his-most-famous-quotes-33if</link>
      <guid>https://dev.to/franklinthaker/the-mind-games-of-jigsaw-a-list-of-his-most-famous-quotes-33if</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Live or die. Make your choice.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;The rules are simple.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;I want to play a game.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Those who don’t appreciate life do not deserve life.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;Cherish your life.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;If you're good at anticipating the human mind, it leaves nothing to chance.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;Heed my warning.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;The choice is yours.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;Experience is a harsh teacher.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;You Can’t Help Them. They Have To Help Themselves.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;This Is Not Retribution. This Is A Reawakening.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;If You Find The Meaning Of Your Life, What You’ve Found Is Your Soul.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;When Faced With Death, Who Should Live Versus Who Will Live Are Two Entirely Separate Things.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;Congratulations, You Are Still Alive. Most People Are So Ungrateful To Be Alive... But Not You, Not Anymore.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;Every Day Of Your Working Life You've Given People News That They'll Die. Now, You Will Be The Cause Of Death.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Death Is A Surprise Party. Unless Of Course, You're Already Dead On The Inside.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;Game Over.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>jigsaw</category>
      <category>johnkramer</category>
      <category>quotes</category>
      <category>saw</category>
    </item>
    <item>
      <title>Mastering CRUD Operations with OpenSearch in Python: A Practical Guide</title>
      <dc:creator>Franklin Thaker</dc:creator>
      <pubDate>Sat, 21 Sep 2024 12:48:22 +0000</pubDate>
      <link>https://dev.to/franklinthaker/mastering-crud-operations-with-opensearch-in-python-a-practical-guide-53k7</link>
      <guid>https://dev.to/franklinthaker/mastering-crud-operations-with-opensearch-in-python-a-practical-guide-53k7</guid>
      <description>&lt;p&gt;&lt;strong&gt;OpenSearch&lt;/strong&gt;, an open-source alternative to &lt;strong&gt;Elasticsearch&lt;/strong&gt;, is a powerful search and analytics engine built to handle &lt;strong&gt;large datasets&lt;/strong&gt; with ease. In this blog, we’ll &lt;strong&gt;demonstrate&lt;/strong&gt; how to perform basic CRUD (Create, Read, Update, Delete) operations in &lt;strong&gt;OpenSearch&lt;/strong&gt; using Python.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.7+&lt;/li&gt;
&lt;li&gt;OpenSearch installed locally using Docker&lt;/li&gt;
&lt;li&gt;Familiarity with RESTful APIs&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 1: Setting Up OpenSearch Locally with Docker&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To get started, we need a &lt;strong&gt;local OpenSearch instance&lt;/strong&gt;. Below is a simple &lt;strong&gt;docker-compose.yml&lt;/strong&gt; file that spins up &lt;strong&gt;OpenSearch&lt;/strong&gt; and OpenSearch &lt;strong&gt;Dashboards&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3'
services:
  opensearch-test-node-1:
    image: opensearchproject/opensearch:2.13.0
    container_name: opensearch-test-node-1
    environment:
      - cluster.name=opensearch-test-cluster
      - node.name=opensearch-test-node-1
      - discovery.seed_hosts=opensearch-test-node-1,opensearch-test-node-2
      - cluster.initial_cluster_manager_nodes=opensearch-test-node-1,opensearch-test-node-2
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
      - "DISABLE_INSTALL_DEMO_CONFIG=true"
      - "DISABLE_SECURITY_PLUGIN=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-test-data1:/usr/share/opensearch/data
    ports:
      - 9200:9200
      - 9600:9600
    networks:
      - opensearch-test-net

  opensearch-test-node-2:
    image: opensearchproject/opensearch:2.13.0
    container_name: opensearch-test-node-2
    environment:
      - cluster.name=opensearch-test-cluster
      - node.name=opensearch-test-node-2
      - discovery.seed_hosts=opensearch-test-node-1,opensearch-test-node-2
      - cluster.initial_cluster_manager_nodes=opensearch-test-node-1,opensearch-test-node-2
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
      - "DISABLE_INSTALL_DEMO_CONFIG=true"
      - "DISABLE_SECURITY_PLUGIN=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-test-data2:/usr/share/opensearch/data
    networks:
      - opensearch-test-net

  opensearch-test-dashboards:
    image: opensearchproject/opensearch-dashboards:2.13.0
    container_name: opensearch-test-dashboards
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      - 'OPENSEARCH_HOSTS=["http://opensearch-test-node-1:9200","http://opensearch-test-node-2:9200"]'
      - "DISABLE_SECURITY_DASHBOARDS_PLUGIN=true"
    networks:
      - opensearch-test-net

volumes:
  opensearch-test-data1:
  opensearch-test-data2:

networks:
  opensearch-test-net:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Run the following command to bring up your OpenSearch instance:&lt;br&gt;
&lt;code&gt;docker-compose up&lt;/code&gt;&lt;br&gt;
OpenSearch will be accessible at &lt;a href="http://localhost:9200" rel="noopener noreferrer"&gt;http://localhost:9200&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 2: Setting Up the Python Environment&lt;/strong&gt;
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m venv .venv
source .venv/bin/activate
pip install opensearch-py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;We'll also structure our project as follows:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── interfaces.py
├── main.py
├── searchservice.py
├── docker-compose.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Step 3: Defining Interfaces and Resources (interfaces.py)&lt;/strong&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;In the interfaces.py file, we define our Resource and Resources classes. These will help us dynamically handle different resource types in OpenSearch (in this case, users).&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str

    def __post_init__(self) -&amp;gt; None:
        self.name = self.name.lower()

@dataclass
class Resources:
    users: Resource = field(default_factory=lambda: Resource("Users"))

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Step 4: CRUD Operations with OpenSearch (searchservice.py)&lt;/strong&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;In searchservice.py, we define an abstract class SearchService to outline the required operations. The HTTPOpenSearchService class then implements these CRUD methods, interacting with the OpenSearch client.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# coding: utf-8

import abc
import logging
import typing as t
from dataclasses import dataclass
from uuid import UUID

from interfaces import Resource, Resources
from opensearchpy import NotFoundError, OpenSearch

resources = Resources()


class SearchService(abc.ABC):
    def search(
        self,
        kinds: t.List[Resource],
        tenants_id: UUID,
        companies_id: UUID,
        query: t.Dict[str, t.Any],
    ) -&amp;gt; t.Dict[t.Literal["hits"], t.Dict[str, t.Any]]:
        raise NotImplementedError

    def delete_index(
        self,
        kind: Resource,
        tenants_id: UUID,
        companies_id: UUID,
        data: t.Dict[str, t.Any],
    ) -&amp;gt; None:
        raise NotImplementedError

    def index(
        self,
        kind: Resource,
        tenants_id: UUID,
        companies_id: UUID,
        data: t.Dict[str, t.Any],
    ) -&amp;gt; t.Dict[str, t.Any]:
        raise NotImplementedError

    def delete_document(
        self,
        kind: Resource,
        tenants_id: UUID,
        companies_id: UUID,
        document_id: str,
    ) -&amp;gt; t.Optional[t.Dict[str, t.Any]]:
        raise NotImplementedError

    def create_index(
        self,
        kind: Resource,
        tenants_id: UUID,
        companies_id: UUID,
        data: t.Dict[str, t.Any],
    ) -&amp;gt; None:
        raise NotImplementedError


@dataclass(frozen=True)
class HTTPOpenSearchService(SearchService):
    client: OpenSearch

    def _gen_index(
        self,
        kind: Resource,
        tenants_id: UUID,
        companies_id: UUID,
    ) -&amp;gt; str:
        return (
            f"tenant_{str(UUID(str(tenants_id)))}"
            f"_company_{str(UUID(str(companies_id)))}"
            f"_kind_{kind.name}"
        )

    def index(
        self,
        kind: Resource,
        tenants_id: UUID,
        companies_id: UUID,
        data: t.Dict[str, t.Any],
    ) -&amp;gt; t.Dict[str, t.Any]:
        self.client.index(
            index=self._gen_index(kind, tenants_id, companies_id),
            body=data,
            id=data.get("id"),
        )
        return data

    def delete_index(
        self,
        kind: Resource,
        tenants_id: UUID,
        companies_id: UUID,
    ) -&amp;gt; None:
        try:
            index = self._gen_index(kind, tenants_id, companies_id)
            if self.client.indices.exists(index):
                self.client.indices.delete(index)
        except NotFoundError:
            pass

    def create_index(
        self,
        kind: Resource,
        tenants_id: UUID,
        companies_id: UUID,
    ) -&amp;gt; None:
        body: t.Dict[str, t.Any] = {}
        self.client.indices.create(
            index=self._gen_index(kind, tenants_id, companies_id),
            body=body,
        )

    def search(
        self,
        kinds: t.List[Resource],
        tenants_id: UUID,
        companies_id: UUID,
        query: t.Dict[str, t.Any],
    ) -&amp;gt; t.Dict[t.Literal["hits"], t.Dict[str, t.Any]]:
        return self.client.search(
            index=",".join(
                [self._gen_index(kind, tenants_id, companies_id) for kind in kinds]
            ),
            body={"query": query},
        )

    def delete_document(
        self,
        kind: Resource,
        tenants_id: UUID,
        companies_id: UUID,
        document_id: str,
    ) -&amp;gt; t.Optional[t.Dict[str, t.Any]]:
        try:
            response = self.client.delete(
                index=self._gen_index(kind, tenants_id, companies_id),
                id=document_id,
            )
            return response
        except Exception as e:
            logging.error(f"Error deleting document: {e}")
            return None

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
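The per-tenant index-naming convention used by _gen_index can be exercised on its own. This standalone sketch reproduces the same scheme with the sample ids from main.py:

```python
# Reproduce the index-naming scheme from _gen_index:
# tenant_UUID_company_UUID_kind_NAME
from uuid import UUID

def gen_index(kind_name, tenants_id, companies_id):
    # UUID(str(...)) both validates and normalizes the ids
    return (
        f"tenant_{UUID(str(tenants_id))}"
        f"_company_{UUID(str(companies_id))}"
        f"_kind_{kind_name.lower()}"
    )

print(gen_index("Users",
                "f0835e2d-bd68-406c-99a7-ad63a51e9ef9",
                "bf58c749-c90a-41e2-b66f-6d98aae17a6c"))
```

Because the tenant and company ids are baked into the index name, every query is automatically scoped to one tenant's data.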



&lt;h2&gt;
  
  
  &lt;strong&gt;Step 5: Implementing CRUD in Main (main.py)&lt;/strong&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;In main.py, we demonstrate how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an &lt;strong&gt;index&lt;/strong&gt; in OpenSearch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Index documents&lt;/strong&gt; with sample user data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Search&lt;/strong&gt; for documents based on a query.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Delete&lt;/strong&gt; a document using its ID.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;main.py&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# coding=utf-8

import logging
import os
import typing as t
from uuid import uuid4

import searchservice
from interfaces import Resources
from opensearchpy import OpenSearch

resources = Resources()

logging.basicConfig(level=logging.INFO)

search_service = searchservice.HTTPOpenSearchService(
    client=OpenSearch(
        hosts=[
            {
                "host": os.getenv("OPENSEARCH_HOST", "localhost"),
                "port": os.getenv("OPENSEARCH_PORT", "9200"),
            }
        ],
        http_auth=(
            os.getenv("OPENSEARCH_USERNAME", ""),
            os.getenv("OPENSEARCH_PASSWORD", ""),
        ),
        use_ssl=False,
        verify_certs=False,
    ),
)

tenants_id: str = "f0835e2d-bd68-406c-99a7-ad63a51e9ef9"
companies_id: str = "bf58c749-c90a-41e2-b66f-6d98aae17a6c"
search_str: str = "frank"
document_id_to_delete: str = str(uuid4())

fake_data: t.List[t.Dict[str, t.Any]] = [
    {"id": document_id_to_delete, "name": "Franklin", "tech": "python,node,golang"},
    {"id": str(uuid4()), "name": "Jarvis", "tech": "AI"},
    {"id": str(uuid4()), "name": "Parry", "tech": "Golang"},
    {"id": str(uuid4()), "name": "Steve", "tech": "iOS"},
    {"id": str(uuid4()), "name": "Frank", "tech": "node"},
]

search_service.delete_index(
    kind=resources.users, tenants_id=tenants_id, companies_id=companies_id
)

search_service.create_index(
    kind=resources.users,
    tenants_id=tenants_id,
    companies_id=companies_id,
)

for item in fake_data:
    search_service.index(
        kind=resources.users,
        tenants_id=tenants_id,
        companies_id=companies_id,
        data=dict(tenants_id=tenants_id, companies_id=companies_id, **item),
    )

search_query: t.Dict[str, t.Any] = {
    "bool": {
        "must": [],
        "must_not": [],
        "should": [],
        "filter": [
            {"term": {"tenants_id.keyword": tenants_id}},
            {"term": {"companies_id.keyword": companies_id}},
        ],
    }
}
search_query["bool"]["must"].append(
    {
        "multi_match": {
            "query": search_str,
            "type": "phrase_prefix",
            "fields": ["name", "tech"],
        }
    }
)

search_results = search_service.search(
    kinds=[resources.users],
    tenants_id=tenants_id,
    companies_id=companies_id,
    query=search_query,
)

final_result = search_results.get("hits", {}).get("hits", [])
for item in final_result:
    logging.info(["Item -&amp;gt; ", item.get("_source", {})])

deleted_result = search_service.delete_document(
    kind=resources.users,
    tenants_id=tenants_id,
    companies_id=companies_id,
    document_id=document_id_to_delete,
)
logging.info(["Deleted result -&amp;gt; ", deleted_result])

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Step 6: Running the project&lt;/strong&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;docker compose up&lt;/code&gt;&lt;br&gt;
&lt;code&gt;python main.py&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Results&lt;/strong&gt;:
&lt;/h2&gt;

&lt;p&gt;It should print the matched documents and the result of the deletion.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 7: Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this blog, we’ve &lt;strong&gt;demonstrated&lt;/strong&gt; how to set up &lt;strong&gt;OpenSearch&lt;/strong&gt; locally using &lt;strong&gt;Docker&lt;/strong&gt; and perform basic &lt;strong&gt;CRUD&lt;/strong&gt; operations with &lt;strong&gt;Python&lt;/strong&gt;. &lt;strong&gt;OpenSearch&lt;/strong&gt; provides a powerful and scalable solution for managing and querying large datasets. While this guide focuses on integrating OpenSearch with &lt;strong&gt;dummy data&lt;/strong&gt;, in &lt;strong&gt;real-world applications&lt;/strong&gt;, OpenSearch is often used as a &lt;strong&gt;read-optimized store&lt;/strong&gt; for &lt;strong&gt;faster&lt;/strong&gt; data retrieval. In such cases, it is common to implement different &lt;strong&gt;indexing strategies&lt;/strong&gt; to ensure data consistency by updating both the primary database and &lt;strong&gt;OpenSearch concurrently&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This ensures that &lt;strong&gt;OpenSearch&lt;/strong&gt; remains in sync with your &lt;strong&gt;primary data&lt;/strong&gt; source, &lt;strong&gt;optimizing&lt;/strong&gt; both &lt;strong&gt;performance&lt;/strong&gt; and &lt;strong&gt;accuracy&lt;/strong&gt; in data retrieval.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;References&lt;/strong&gt;:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/FranklinThaker/opensearch-integration-example" rel="noopener noreferrer"&gt;https://github.com/FranklinThaker/opensearch-integration-example&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>opensearch</category>
      <category>python</category>
      <category>tutorial</category>
      <category>aws</category>
    </item>
    <item>
      <title>Unleashing MongoDB: Why Cursor-Based Pagination Outperforms Offset-Based Pagination Every Time!</title>
      <dc:creator>Franklin Thaker</dc:creator>
      <pubDate>Wed, 04 Sep 2024 13:20:48 +0000</pubDate>
      <link>https://dev.to/franklinthaker/unleashing-mongodb-why-cursor-based-pagination-outperforms-offset-based-pagination-every-time-4o30</link>
      <guid>https://dev.to/franklinthaker/unleashing-mongodb-why-cursor-based-pagination-outperforms-offset-based-pagination-every-time-4o30</guid>
      <description>&lt;p&gt;&lt;strong&gt;Pagination&lt;/strong&gt; is a critical part of any database operation when dealing with large datasets. It allows you to split data into manageable chunks, making it easier to browse, process, and display. MongoDB provides two common pagination methods: offset-based and cursor-based. While both methods serve the same purpose, they differ &lt;strong&gt;significantly&lt;/strong&gt; in performance and usability, especially as the dataset grows.&lt;/p&gt;

&lt;p&gt;Let's dive into the two approaches and see why cursor-based pagination often outperforms offset-based pagination.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. &lt;strong&gt;Offset-Based Pagination&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Offset-based pagination is straightforward. It retrieves a specific number of records starting from a given offset. For example, the first page might retrieve records 0-9, the second page retrieves records 10-19, and so on.&lt;/p&gt;

&lt;p&gt;However, this method has a significant drawback: as you move to higher pages, the query becomes slower. This is because the database needs to skip over the records from the previous pages, which involves scanning through them.&lt;/p&gt;

&lt;p&gt;Here’s the code for offset-based pagination:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function offset_based_pagination(params) {
  const { page = 5, limit = 100 } = params;
  const skip = (page - 1) * limit;
  const results = await collection.find({}).skip(skip).limit(limit).toArray();
  console.log(`Offset-based pagination (Page ${page}):`, results.length, "page", page, "skip", skip, "limit", limit);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  2. &lt;strong&gt;Cursor-Based Pagination&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Cursor-based pagination, also known as keyset pagination, relies on a unique identifier (e.g., an ID or timestamp) to paginate through the records. Instead of skipping a certain number of records, it uses the last retrieved record as the reference point for fetching the next set.&lt;/p&gt;

&lt;p&gt;This approach is more efficient because it avoids the need to scan the records before the current page. As a result, the query time remains consistent, regardless of how deep into the dataset you go.&lt;/p&gt;

&lt;p&gt;Here's the code for cursor-based pagination:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function cursor_based_pagination(params) {
  const { lastDocumentId, limit = 100 } = params;
  const query = lastDocumentId ? { documentId: { $gt: lastDocumentId } } : {};
  const results = await collection
    .find(query)
    .sort({ documentId: 1 })
    .limit(limit)
    .toArray();
  console.log("Cursor-based pagination:", results.length);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this example, &lt;strong&gt;lastDocumentId&lt;/strong&gt; is the ID of the last document from the previous page. When querying for the next page, the database fetches documents with an ID greater than this value, ensuring a seamless transition to the next set of records.&lt;/p&gt;
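&lt;p&gt;The query construction can be factored into a tiny helper. The following is an illustrative sketch (the &lt;code&gt;nextPageQuery&lt;/code&gt; name is chosen here, not taken from the code above); note it checks for &lt;code&gt;undefined&lt;/code&gt; explicitly so that a legitimate id of 0 is not mistaken for "no cursor":&lt;/p&gt;

```javascript
// Build the filter for the next page from the last document of the
// previous page. `lastDocumentId` is undefined on the very first page.
function nextPageQuery(lastDocumentId) {
  return lastDocumentId !== undefined
    ? { documentId: { $gt: lastDocumentId } }
    : {};
}

// First page: empty filter. Later pages: everything after the last seen id.
const firstPage = nextPageQuery(undefined); // {}
const nextPage = nextPageQuery(199);        // { documentId: { $gt: 199 } }
```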

&lt;h2&gt;
  
  
  3. &lt;strong&gt;Performance Comparison&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let’s see how these two methods perform with a large dataset.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function testMongoDB() {
    console.time("MongoDB Insert Time:");
    await insertMongoDBRecords();
    console.timeEnd("MongoDB Insert Time:");

  // Create an index on the documentId field
  await collection.createIndex({ documentId: 1 });
  console.log("Index created on documentId field");

  console.time("Offset-based pagination Time:");
  await offset_based_pagination({ page: 2, limit: 250000 });
  console.timeEnd("Offset-based pagination Time:");

  console.time("Cursor-based pagination Time:");
  await cursor_based_pagination({ lastDocumentId: 170000, limit: 250000 });
  console.timeEnd("Cursor-based pagination Time:");

  await client.close();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6iw29pgbqutk821boqi5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6iw29pgbqutk821boqi5.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In the performance test, you’ll notice that &lt;strong&gt;offset-based&lt;/strong&gt; pagination takes &lt;strong&gt;longer&lt;/strong&gt; as the page number &lt;strong&gt;increases&lt;/strong&gt;, whereas &lt;strong&gt;cursor-based&lt;/strong&gt; pagination &lt;strong&gt;remains consistent&lt;/strong&gt;, making it the better choice for &lt;strong&gt;large&lt;/strong&gt; datasets. This example also demonstrates the power of indexing: try removing the index and compare the results!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why Indexing is Important&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Without an index, MongoDB must perform a collection scan, examining every document in the collection to find the relevant data. This is inefficient, especially as your dataset grows. Indexes allow MongoDB to efficiently find the documents that match your query conditions, significantly speeding up query performance.&lt;/p&gt;

&lt;p&gt;In the context of cursor-based pagination, the index ensures that fetching the next set of documents (based on documentId) is quick and does not degrade in performance as more documents are added to the collection.&lt;/p&gt;
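&lt;p&gt;To build intuition for the difference, here is a small illustrative sketch in plain JavaScript (no MongoDB involved): a collection scan examines documents one by one, while an index behaves like a binary search over sorted keys, examining only a logarithmic number of entries.&lt;/p&gt;

```javascript
// Illustrative only: simulate a "collection" of one million documents whose
// documentId equals their position, then count how many documents each
// lookup strategy has to examine.
const docs = Array.from({ length: 1_000_000 }, (_, i) => ({ documentId: i }));

// Collection scan: inspect every document until the match is found.
function scanLookup(target) {
  let examined = 0;
  for (const d of docs) {
    examined += 1;
    if (d.documentId === target) break;
  }
  return examined;
}

// Index-style lookup: binary search over the sorted documentId values,
// roughly how a B-tree index narrows down to the matching key.
function indexedLookup(target) {
  let lo = 0;
  let hi = docs.length;
  let examined = 0;
  while (hi - lo > 1) {
    examined += 1;
    const mid = Math.floor((lo + hi) / 2);
    if (docs[mid].documentId > target) hi = mid;
    else lo = mid;
  }
  return { found: docs[lo].documentId === target, examined };
}

console.log("scan examined:", scanLookup(900000));
console.log("indexed examined:", indexedLookup(900000).examined);
```

&lt;p&gt;The scan examines roughly 900,000 entries for a document near the end, while the indexed lookup examines about 20, which is why cursor-based pagination on an indexed field stays fast at any depth.&lt;/p&gt;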

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;While offset-based pagination is easy to implement, it can become inefficient with large datasets due to the need to scan through records. Cursor-based pagination, on the other hand, provides a more scalable solution, keeping performance consistent regardless of the dataset size. If you are working with large collections in MongoDB, it’s worth considering cursor-based pagination for a smoother and faster experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Here's complete index.js for you to run locally:&lt;/strong&gt;
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { MongoClient } = require("mongodb");
const uri = "mongodb://localhost:27017";
const client = new MongoClient(uri);
client.connect();
const db = client.db("testdb");
const collection = db.collection("testCollection");

async function insertMongoDBRecords() {
  try {
    let bulkOps = [];

    for (let i = 0; i &amp;lt; 2000000; i++) {
      bulkOps.push({
        insertOne: {
          document: {
            documentId: i,
            name: `Record-${i}`,
            value: Math.random() * 1000,
          },
        },
      });

      // Execute every 10000 operations and reinitialize
      if (bulkOps.length === 10000) {
        await collection.bulkWrite(bulkOps);
        bulkOps = [];
      }
    }

    if (bulkOps.length &amp;gt; 0) {
      await collection.bulkWrite(bulkOps);
      console.log("🚀 Inserted final batch of", bulkOps.length, "records");
    }

    console.log("MongoDB Insertion Completed");
  } catch (err) {
    console.error("Error in inserting records", err);
  }
}

async function offset_based_pagination(params) {
  const { page = 5, limit = 100 } = params;
  const skip = (page - 1) * limit;
  const results = await collection.find({}).skip(skip).limit(limit).toArray();
  console.log(`Offset-based pagination (Page ${page}):`, results.length, "page", page, "skip", skip, "limit", limit);
}

async function cursor_based_pagination(params) {
  const { lastDocumentId, limit = 100 } = params;
  const query = lastDocumentId ? { documentId: { $gt: lastDocumentId } } : {};
  const results = await collection
    .find(query)
    .sort({ documentId: 1 })
    .limit(limit)
    .toArray();
  console.log("Cursor-based pagination:", results.length);
}

async function testMongoDB() {
  console.time("MongoDB Insert Time:");
  await insertMongoDBRecords();
  console.timeEnd("MongoDB Insert Time:");

  // Create an index on the documentId field
  await collection.createIndex({ documentId: 1 });
  console.log("Index created on documentId field");

  console.time("Offset-based pagination Time:");
  await offset_based_pagination({ page: 2, limit: 250000 });
  console.timeEnd("Offset-based pagination Time:");

  console.time("Cursor-based pagination Time:");
  await cursor_based_pagination({ lastDocumentId: 170000, limit: 250000 });
  console.timeEnd("Cursor-based pagination Time:");

  await client.close();
}

testMongoDB();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>pagination</category>
      <category>mongodb</category>
      <category>indexes</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Real-time Data Updates with Server-Sent Events (SSE) in Node.js</title>
      <dc:creator>Franklin Thaker</dc:creator>
      <pubDate>Fri, 05 Jul 2024 06:15:46 +0000</pubDate>
      <link>https://dev.to/franklinthaker/real-time-data-with-server-sent-events-sse-in-nodejs-4hba</link>
      <guid>https://dev.to/franklinthaker/real-time-data-with-server-sent-events-sse-in-nodejs-4hba</guid>
      <description>&lt;p&gt;In this blog post, we'll explore how to use Server-Sent Events (SSE) to push real-time data from a server to clients. We'll create a simple example using Node.js and Express to demonstrate how SSE works.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What are Server-Sent Events (SSE)?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Server-Sent Events (SSE) allow servers to push updates to the client over a single, long-lived HTTP connection. Unlike WebSockets, SSE is a unidirectional protocol where updates flow from server to client. This makes SSE ideal for live data feeds like news updates, stock prices, or notifications.&lt;/p&gt;
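&lt;p&gt;Under the hood, SSE is just a plain-text protocol: each event is one or more &lt;code&gt;data:&lt;/code&gt; lines, optionally preceded by &lt;code&gt;id:&lt;/code&gt; and &lt;code&gt;event:&lt;/code&gt; fields, and terminated by a blank line. The &lt;code&gt;sseFrame&lt;/code&gt; helper below is an illustrative sketch of that wire format, not part of the example app:&lt;/p&gt;

```javascript
// Format a payload as an SSE frame. Multi-line payloads become multiple
// "data:" lines; the trailing blank line marks the end of the event.
function sseFrame(data, { event, id } = {}) {
  let frame = "";
  if (id !== undefined) frame += "id: " + id + "\n";
  if (event !== undefined) frame += "event: " + event + "\n";
  for (const line of String(data).split("\n")) {
    frame += "data: " + line + "\n";
  }
  return frame + "\n";
}

console.log(sseFrame("hello"));                      // "data: hello\n\n"
console.log(sseFrame("a\nb", { event: "tick" }));    // event + two data lines
```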

&lt;h2&gt;
  
  
  &lt;strong&gt;Creating the Server&lt;/strong&gt;
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// app.js
const express = require("express");
const app = express();
const { v4 } = require("uuid");
let clients = [];

app.use(express.json());
app.use(express.static("./public"));

function sendDataToAllClients() {
  const value_to_send_to_all_clients = Math.floor(Math.random() * 1000) + 1;
  clients.forEach((client) =&amp;gt;
    client.response.write("data: " + value_to_send_to_all_clients + "\n\n")
  );
}

app.get("/subscribe", async (req, res) =&amp;gt; {
  const clients_id = v4();
  const headers = {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  };
  res.writeHead(200, headers);
  clients.push({ id: clients_id, response: res });

  // Close the connection when the client disconnects
  req.on("close", () =&amp;gt; {
    clients = clients.filter((c) =&amp;gt; c.id !== clients_id);
    console.log(`${clients_id} Connection closed`);
    res.end("OK");
  });
});

app.get("/data", (req, res) =&amp;gt; {
  sendDataToAllClients();
  res.send("Data sent to all subscribed clients.");
});

app.listen(80, () =&amp;gt; {
  console.log("Server is running on port 80");
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Code Explanation&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Express Setup:&lt;/strong&gt; We create an Express app and set up JSON parsing and static file serving.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client Management:&lt;/strong&gt; We maintain a list of connected clients.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSE Headers:&lt;/strong&gt; In the /subscribe endpoint, we set the necessary headers to establish an SSE connection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Send Data:&lt;/strong&gt; The &lt;strong&gt;sendDataToAllClients&lt;/strong&gt; function sends random data to all subscribed clients.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subscribe Endpoint:&lt;/strong&gt; Clients connect to this endpoint to receive real-time updates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Endpoint:&lt;/strong&gt; This endpoint triggers the &lt;strong&gt;sendDataToAllClients&lt;/strong&gt; function to send data.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Creating the Client&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Next, let's create a simple HTML page to subscribe to the server and display the real-time data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!-- index.html  --&amp;gt;
&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html lang="en"&amp;gt;
&amp;lt;head&amp;gt;
  &amp;lt;meta charset="UTF-8" /&amp;gt;
  &amp;lt;meta name="viewport" content="width=device-width, initial-scale=1.0" /&amp;gt;
  &amp;lt;title&amp;gt;SSE - Example (Server-Sent-Events)&amp;lt;/title&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
  &amp;lt;div id="data"&amp;gt;&amp;lt;/div&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;

&amp;lt;script&amp;gt;
  const subscription = new EventSource("/subscribe");

  // Default events
  subscription.addEventListener("open", () =&amp;gt; {
    console.log("Connection opened");
  });

  subscription.addEventListener("error", () =&amp;gt; {
    console.error("Subscription err'd");
    subscription.close();
  });

  subscription.addEventListener("message", (event) =&amp;gt; {
    console.log("Receive message", event);
    document.getElementById("data").innerHTML += `${event.data}&amp;lt;br&amp;gt;`;
  });
&amp;lt;/script&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Code Explanation
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;EventSource:&lt;/strong&gt; We create a new &lt;strong&gt;EventSource&lt;/strong&gt; object to subscribe to the /subscribe endpoint.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event Listeners:&lt;/strong&gt; We set up listeners for open, error, and message events.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Display Data:&lt;/strong&gt; When a message is received, we append the data to a div.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Running the Example&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Start the server:&lt;br&gt;
&lt;code&gt;node app.js&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open your browser and navigate to &lt;a href="http://localhost/subscribe"&gt;http://localhost/subscribe&lt;/a&gt;. [Don't close it]&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now, open another tab and navigate to &lt;a href="http://localhost/data"&gt;http://localhost/data&lt;/a&gt;; you should see random data printed on the screen in the other tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can subscribe from as many clients/tabs as you want; each time you navigate to &lt;a href="http://localhost/data"&gt;http://localhost/data&lt;/a&gt;, the same data is emitted to all the subscribed clients.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this post, we've seen how to use Server-Sent Events (SSE) to push real-time updates from a server to connected clients using Node.js and Express. SSE is a simple yet powerful way to add real-time capabilities to your web applications without the complexity of WebSockets.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Warning⚠️:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When not used over HTTP/2, SSE suffers from a &lt;strong&gt;limitation&lt;/strong&gt; to the &lt;strong&gt;maximum number of open connections&lt;/strong&gt;, which can be especially painful when opening multiple tabs, as the limit is per browser and is set to a very low number (6). The issue has been marked as "Won't fix" in &lt;a href="https://bugs.chromium.org/p/chromium/issues/detail?id=275955"&gt;Chrome&lt;/a&gt; and &lt;a href="https://bugzilla.mozilla.org/show_bug.cgi?id=906896"&gt;Firefox&lt;/a&gt;. This limit is per browser + domain, which means that you can open 6 SSE connections across all of the tabs to &lt;code&gt;www.example1.com&lt;/code&gt; and another 6 SSE connections to &lt;code&gt;www.example2.com&lt;/code&gt; (per &lt;a href="https://stackoverflow.com/questions/5195452/websockets-vs-server-sent-events-eventsource/5326159"&gt;Stackoverflow&lt;/a&gt;). When using HTTP/2, the maximum number of simultaneous HTTP streams is negotiated between the server and the client (defaults to 100).&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Resources:&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events"&gt;https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>javascript</category>
      <category>pubsub</category>
      <category>eventdriven</category>
      <category>node</category>
    </item>
    <item>
      <title>How to use SRI (Subresource Integrity) attribute in script tag to prevent modification of static resources ?</title>
      <dc:creator>Franklin Thaker</dc:creator>
      <pubDate>Sat, 29 Jun 2024 06:23:44 +0000</pubDate>
      <link>https://dev.to/franklinthaker/how-to-use-sri-subresource-integrtiy-attribute-in-script-tag-to-prevent-modification-of-static-resources--1h3a</link>
      <guid>https://dev.to/franklinthaker/how-to-use-sri-subresource-integrtiy-attribute-in-script-tag-to-prevent-modification-of-static-resources--1h3a</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Understanding Supply Chain Attacks&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Supply chain attacks involve compromising a third-party service or library to inject malicious code into websites that rely on it.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How Could This Be Prevented?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;One effective way to mitigate such risks is by using the Subresource Integrity (SRI) attribute in your HTML scripts and link tags. SRI allows browsers to verify that the fetched resources (like scripts or stylesheets) are delivered without unexpected manipulation.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Demonstration&lt;/strong&gt;
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#script.js
console.log("Hello World - Secured!");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#index.html
&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html lang="en"&amp;gt;
&amp;lt;head&amp;gt;
  &amp;lt;meta charset="UTF-8" /&amp;gt;
  &amp;lt;meta name="viewport" content="width=device-width, initial-scale=1.0" /&amp;gt;
  &amp;lt;script
    src="script.js"
    integrity="sha384-[YOUR-GENERATED-HASH]"
    crossorigin="anonymous"
  &amp;gt;&amp;lt;/script&amp;gt;
  &amp;lt;title&amp;gt;SUBRESOURCE INTEGRITY EXAMPLE&amp;lt;/title&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
  Hello World
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#app.js - to serve above files
const express = require('express');
const app = express();

app.use(express.static("./"));

app.listen(80, () =&amp;gt; {
  console.log('Server is running on http://localhost');
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Generating the Integrity Hash&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;openssl dgst -sha384 -binary script.js | openssl base64 -A&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  If you perform all the above steps correctly, the following working output should appear:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fca5xu9ns1bjny9iih7sp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fca5xu9ns1bjny9iih7sp.png" alt="WORKING OUTPUT IF ABOVE STEPS PERFORM CORRECTLY" width="448" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Now Let's try to modify script.js
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#script.js - modified
console.log("Hello World - Modified!");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Now try to open &lt;a href="http://localhost/index.html"&gt;http://localhost/index.html&lt;/a&gt;; you should see the following error:
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;None of the sha384 hashes in the integrity attribute match the content of the resource.&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  SO THE MODIFIED SCRIPT WON'T RUN AT ALL!!
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8s3qh5dj9iiizanc1lmj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8s3qh5dj9iiizanc1lmj.png" alt="SCRIPT WON'T RUN AFTER MODIFICATION" width="800" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>javascript</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>How to disable camera devices in Linux systems ?</title>
      <dc:creator>Franklin Thaker</dc:creator>
      <pubDate>Fri, 07 Jun 2024 13:47:03 +0000</pubDate>
      <link>https://dev.to/franklinthaker/how-to-disable-camera-devices-in-linux-systems--32cc</link>
      <guid>https://dev.to/franklinthaker/how-to-disable-camera-devices-in-linux-systems--32cc</guid>
      <description>&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lsusb
lsmod [to identify camera devices]

sudo nano /etc/modprobe.d/blacklist.conf

****add the following lines in above file****
##Disable webcam.
blacklist uvcvideo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;References: &lt;a href="https://forums.linuxmint.com/viewtopic.php?t=370960"&gt;https://forums.linuxmint.com/viewtopic.php?t=370960&lt;/a&gt;&lt;/p&gt;

</description>
      <category>linux</category>
      <category>devices</category>
      <category>ubuntu</category>
      <category>camera</category>
    </item>
    <item>
      <title>Auth0 integration - Node.js + ExpressJS</title>
      <dc:creator>Franklin Thaker</dc:creator>
      <pubDate>Tue, 04 Jun 2024 13:24:18 +0000</pubDate>
      <link>https://dev.to/franklinthaker/auth0-integration-nodejs-expressjs-54l0</link>
      <guid>https://dev.to/franklinthaker/auth0-integration-nodejs-expressjs-54l0</guid>
      <description>&lt;p&gt;This is a simple guide demonstrating backend Auth0 integration. There is no frontend involved: user sign-up, log-in, and log-out are all handled through the backend.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// index.js
require('dotenv').config();
const { auth, requiresAuth } = require("express-openid-connect");
const app = require("express")();

const config = {
  authRequired: false,
  auth0Logout: true,
  secret: process.env.CLIENT_SECRET,
  baseURL: "http://localhost:3000",
  clientID: process.env.CLIENT_ID,
  issuerBaseURL:`https://${process.env.AUTH0_TENANT}.auth0.com`,
};

// auth router attaches /login, /logout, and /callback routes to the baseURL
app.use(auth(config));

// req.isAuthenticated is provided from the auth router
app.get("/", (req, res) =&amp;gt; {
  res.send(req.oidc.isAuthenticated() ? "Logged in" : "Logged out");
});

app.get("/profile", requiresAuth(), (req, res) =&amp;gt; {
  res.send(JSON.stringify(req.oidc.user));
});

app.listen(3000);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Environment Variables&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To run this project, you will need to add the following environment variables to your .env file&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CLIENT_ID&lt;/strong&gt; -&amp;gt; Go to Auth0 -&amp;gt; Applications -&amp;gt; Settings -&amp;gt; Client ID&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AUTH0_TENANT&lt;/strong&gt; -&amp;gt; Go to Auth0 -&amp;gt; Applications -&amp;gt; Settings -&amp;gt; Domain&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CLIENT_SECRET&lt;/strong&gt; -&amp;gt; Run this command to generate the secret value: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;openssl rand -hex 32&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you are running on Windows, try running the command in Git Bash; it should work without installing Win64 OpenSSL.&lt;br&gt;
Also make sure to set this up in the Settings tab in Auth0:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Allowed Callback URLs:&lt;/strong&gt; &lt;a href="http://localhost:3000"&gt;http://localhost:3000&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Allowed Logout URLs:&lt;/strong&gt; &lt;a href="http://localhost:3000"&gt;http://localhost:3000&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;References&lt;br&gt;
&lt;a href="https://github.com/FranklinThaker/auth0-integration-nodejs"&gt;https://github.com/FranklinThaker/auth0-integration-nodejs&lt;/a&gt;&lt;br&gt;
&lt;a href="https://auth0.github.io/express-openid-connect/index.html"&gt;https://auth0.github.io/express-openid-connect/index.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>auth0</category>
      <category>express</category>
      <category>node</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How to use compression in Node.js server for better bandwidth ?</title>
      <dc:creator>Franklin Thaker</dc:creator>
      <pubDate>Wed, 15 May 2024 11:57:14 +0000</pubDate>
      <link>https://dev.to/franklinthaker/how-to-use-compression-in-nodejs-server-for-better-bandwidth--4hje</link>
      <guid>https://dev.to/franklinthaker/how-to-use-compression-in-nodejs-server-for-better-bandwidth--4hje</guid>
      <description>&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require("express");
const compression = require("compression");
const app = express();

app.set("etag", false);
app.use(compression());

app.get("/data", (req, res) =&amp;gt; {
  return res.json({
    message: "Hello, Axel Blaze, This is a test message. ".repeat(10000),
  });
});

app.listen(3000, function () {
  console.log("listening on 3000");
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How to start your server &amp;amp; check whether gzip compression is working!
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;DEBUG=compression node app.js&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tip
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Make sure you pass correct Request header i.e. Accept-Encoding: gzip&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Output examples:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;With Compression&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsz7g35w6nezprpwvkbgf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsz7g35w6nezprpwvkbgf.png" alt="With Compression" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Without Compression&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yg5qrrf3zwleiyxswxx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yg5qrrf3zwleiyxswxx.png" alt="Without Compression" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>compression</category>
      <category>express</category>
      <category>gzip</category>
      <category>node</category>
    </item>
    <item>
      <title>Uninterrupted Access to Dropbox API</title>
      <dc:creator>Franklin Thaker</dc:creator>
      <pubDate>Sun, 12 May 2024 05:12:36 +0000</pubDate>
      <link>https://dev.to/franklinthaker/uninterrupted-access-to-dropbox-api-4aja</link>
      <guid>https://dev.to/franklinthaker/uninterrupted-access-to-dropbox-api-4aja</guid>
      <description>&lt;p&gt;&lt;strong&gt;This guide will help you set up uninterrupted access to the Dropbox API using OAuth 2.0 and refresh tokens.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;1. Authorize Your App&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Visit the following URL to authorize your app and generate an authorization code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://www.dropbox.com/oauth2/authorize?client_id=YOUR_APP_KEY&amp;amp;response_type=code&amp;amp;token_access_type=offline
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;2. Use the Authorization Code&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Once you've authorized your app, you'll receive an authorization code. Use this code in your app codebase, replacing ACCESS_CODE_FROM_STEP_1.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;3. Start the Application&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Run your application (e.g., app.js). For the first time, the application will generate a refresh token.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;4. Store the Refresh Token&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The refresh token generated in Step 3 won't expire. Make sure to store this long-term token securely. In this example, we're storing it as REFRESH_TOKEN.&lt;/p&gt;
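&lt;p&gt;When a short-lived access token expires, the stored refresh token is exchanged at Dropbox's OAuth 2.0 token endpoint. The sketch below only builds that request (the &lt;code&gt;buildRefreshRequest&lt;/code&gt; helper and its parameter names are illustrative; the endpoint URL and form fields follow Dropbox's OAuth 2.0 documentation):&lt;/p&gt;

```javascript
// Assemble the token-refresh request. The returned object can be passed to
// fetch(); the JSON response contains a fresh access_token.
function buildRefreshRequest({ refreshToken, appKey, appSecret }) {
  return {
    url: "https://api.dropbox.com/oauth2/token",
    method: "POST",
    body: new URLSearchParams({
      grant_type: "refresh_token",
      refresh_token: refreshToken,
      client_id: appKey,
      client_secret: appSecret,
    }).toString(),
  };
}
```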

&lt;h2&gt;
  
  
  &lt;strong&gt;5. Restart the Application&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Restart your application. You can now use the upload functionality without any interruption.&lt;/p&gt;

&lt;p&gt;Ref: &lt;a href="https://github.com/FranklinThaker/Dropbox-API-Uninterrupted-Access"&gt;https://github.com/FranklinThaker/Dropbox-API-Uninterrupted-Access&lt;/a&gt;&lt;/p&gt;

</description>
      <category>dropbox</category>
      <category>javascript</category>
      <category>node</category>
      <category>unlimited</category>
    </item>
  </channel>
</rss>
