<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Pranjal Sharma</title>
    <description>The latest articles on DEV Community by Pranjal Sharma (@pranjal_sharma_38482a3041).</description>
    <link>https://dev.to/pranjal_sharma_38482a3041</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1647446%2Ff571a716-5ce9-40ff-aae1-7d49243c1659.jpg</url>
      <title>DEV Community: Pranjal Sharma</title>
      <link>https://dev.to/pranjal_sharma_38482a3041</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/pranjal_sharma_38482a3041"/>
    <language>en</language>
    <item>
      <title>Back-of-the-envelope Estimation System Design</title>
      <dc:creator>Pranjal Sharma</dc:creator>
      <pubDate>Thu, 27 Jun 2024 20:27:30 +0000</pubDate>
      <link>https://dev.to/pranjal_sharma_38482a3041/back-of-the-envelope-estimation-system-design-13a5</link>
      <guid>https://dev.to/pranjal_sharma_38482a3041/back-of-the-envelope-estimation-system-design-13a5</guid>
      <description>&lt;p&gt;Back-of-the-envelope estimation is a technique used to quickly approximate values and make rough calculations using simple arithmetic and basic assumptions.&lt;/p&gt;




&lt;h3&gt;
  
  
  Estimation Techniques
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1) Rule of Thumb →
&lt;/h4&gt;

&lt;p&gt;General principles applied to make good estimates, e.g., 1 user generates 1MB of data per day on social media.&lt;/p&gt;

&lt;h4&gt;
  
  
  2) Approximations →
&lt;/h4&gt;

&lt;p&gt;Rounding complex calculations to powers of 10 or 2 to simplify them and reach the estimates easily, e.g., 1 day ≈ 10^5 seconds.&lt;/p&gt;
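&lt;p&gt;As a quick Python sketch (illustrative only), the day-length approximation looks like this:&lt;/p&gt;

```python
# A day has 86,400 seconds; for back-of-the-envelope math we round to 10^5.
seconds_per_day = 24 * 60 * 60
approximation = 10**5

print(seconds_per_day)   # 86400
print(approximation)     # 100000, within roughly 16% of the exact value
```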

&lt;h4&gt;
  
  
  3) Breakdown and Aggregation →
&lt;/h4&gt;

&lt;p&gt;Breaking down bigger problems into smaller components, estimating each individually, and then aggregating the pieces to reach the result, e.g., Social media data = User Data + Multimedia Data + Metadata.&lt;/p&gt;

&lt;h4&gt;
  
  
  4) Sanity check →
&lt;/h4&gt;

&lt;p&gt;Finally, sanity-check that the estimates do not vary wildly from reality, e.g., the numbers obtained should roughly match real-life data.&lt;/p&gt;




&lt;h3&gt;
  
  
  Types of Estimations
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1) Load Estimations
&lt;/h4&gt;

&lt;p&gt;Example: designing a post-generation social media platform.&lt;br&gt;
Daily Active Users ( DAU ) → 100 Million&lt;br&gt;
Avg. Posts → 10 per user per day&lt;br&gt;
Total posts → 100 M * 10 = 1B post/day&lt;/p&gt;

&lt;p&gt;Hence the request rate = 1B posts / 10^5 seconds ≈ 10,000 req/sec.&lt;/p&gt;
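&lt;p&gt;The load arithmetic above can be sketched in Python (a rough sketch; the DAU and posts-per-user figures are the assumptions stated above):&lt;/p&gt;

```python
# Back-of-the-envelope load estimation with rounded numbers.
dau = 100_000_000           # daily active users (assumed)
posts_per_user = 10         # average posts per user per day (assumed)
seconds_per_day = 10**5     # 86,400 rounded for easy division

total_posts_per_day = dau * posts_per_user              # 1 billion posts/day
requests_per_second = total_posts_per_day // seconds_per_day

print(requests_per_second)  # 10000 req/sec
```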

&lt;h4&gt;
  
  
  2) Storage Estimations
&lt;/h4&gt;

&lt;p&gt;Twitter Storage&lt;br&gt;
DAU → 500 M&lt;br&gt;
1 user = 3 tweets (avg)/day&lt;br&gt;
1 tweet text ~ 250B&lt;br&gt;
1 photo ~ 200KB [10% contain a photo → 20KB avg/tweet]&lt;br&gt;
1 video ~ 3MB [5% contain a video → 150KB avg/tweet]&lt;/p&gt;

&lt;p&gt;Total storage/day ~ 1500M * (250B + 20KB + 150KB)&lt;br&gt;
                  ~ 375 GB + 30TB + 225TB ~ 255TB&lt;/p&gt;
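&lt;p&gt;The same storage estimate as a Python sketch (it assumes roughly 3MB per video so the 5% share averages out per tweet, and uses decimal units throughout):&lt;/p&gt;

```python
# Rough daily storage estimate for a Twitter-like service (decimal units).
dau = 500_000_000                        # daily active users (assumed)
tweets_per_user = 3                      # average tweets per user per day
tweets_per_day = dau * tweets_per_user   # 1.5 billion tweets/day

text_bytes = 250                 # per tweet
photo_avg = 0.10 * 200e3         # 10% of tweets carry a ~200KB photo
video_avg = 0.05 * 3e6           # 5% carry a ~3MB video (assumption)

total_bytes = tweets_per_day * (text_bytes + photo_avg + video_avg)
print(round(total_bytes / 1e12))  # ~255 TB/day
```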

&lt;h4&gt;
  
  
  3) Bandwidth requirements
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Estimate the daily amount of incoming data to the service.&lt;/li&gt;
&lt;li&gt;Estimate the daily amount of outgoing data from the service.&lt;/li&gt;
&lt;li&gt;Estimate the bandwidth in Gbps (Gigabits per second) by dividing the incoming and outgoing data by the number of seconds in a day.&lt;/li&gt;
&lt;/ul&gt;
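&lt;p&gt;The steps above translate directly into arithmetic; here the ~255TB/day figure from the storage estimate is reused as the incoming volume (an illustrative assumption):&lt;/p&gt;

```python
# Bandwidth estimate from a daily data volume.
incoming_bytes_per_day = 255e12   # ~255 TB/day ingested (assumed, from the storage estimate)
seconds_per_day = 10**5           # 86,400 rounded for quick math

incoming_bytes_per_sec = incoming_bytes_per_day / seconds_per_day
incoming_gbps = incoming_bytes_per_sec * 8 / 1e9   # bytes to bits, then to Gigabits

print(round(incoming_gbps, 1))  # ~20.4 Gbps
```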

&lt;h4&gt;
  
  
  4) Latency Estimation
&lt;/h4&gt;

&lt;p&gt;e.g., an API consists of REST Call 1 (50ms), REST Call 2 (100ms), and REST Call 3 (150ms).&lt;/p&gt;

&lt;p&gt;Total Latency → 50ms + 100ms + 150ms ~ 300ms [ if it is sequential ]&lt;br&gt;
              → max(50,100,150) ~ 150ms [ if it is parallel ]&lt;/p&gt;
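&lt;p&gt;The sequential vs. parallel rule can be sketched as:&lt;/p&gt;

```python
# Latency of an API composed of three REST calls (times in ms).
call_latencies_ms = [50, 100, 150]

sequential_ms = sum(call_latencies_ms)   # calls made one after another
parallel_ms = max(call_latencies_ms)     # concurrent calls; the slowest dominates

print(sequential_ms, parallel_ms)  # 300 150
```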

&lt;h4&gt;
  
  
  5) Resource Estimation
&lt;/h4&gt;

&lt;p&gt;1 req ~ 10ms of CPU time&lt;br&gt;
total req ~ 10000 req/sec&lt;br&gt;
total CPU time ~ 10000 * 10 = 100000 ms/sec&lt;br&gt;
1 CPU core can supply 1000 ms/sec&lt;br&gt;
Total CPU cores = 100000 / 1000 = 100&lt;/p&gt;
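&lt;p&gt;And the CPU-core arithmetic as a sketch:&lt;/p&gt;

```python
# CPU core estimate from per-request cost and request rate.
cpu_ms_per_request = 10       # each request needs ~10ms of CPU time (assumed)
requests_per_second = 10_000

cpu_ms_needed_per_sec = cpu_ms_per_request * requests_per_second  # 100,000 ms/sec
ms_per_core_per_sec = 1000    # one core supplies 1000ms of CPU time each second

cores_needed = cpu_ms_needed_per_sec // ms_per_core_per_sec
print(cores_needed)  # 100
```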

</description>
      <category>systemdesign</category>
      <category>capacityestimation</category>
    </item>
    <item>
      <title>Notion for NextJS CMS</title>
      <dc:creator>Pranjal Sharma</dc:creator>
      <pubDate>Fri, 21 Jun 2024 09:18:02 +0000</pubDate>
      <link>https://dev.to/pranjal_sharma_38482a3041/notion-for-nextjs-cms-1cjm</link>
      <guid>https://dev.to/pranjal_sharma_38482a3041/notion-for-nextjs-cms-1cjm</guid>
      <description>&lt;p&gt;Notion has become a popular productivity tool for individuals and teams. Did you know that Notion can also serve as a backend for your web applications? In this article, we’ll explore the benefits of using Notion as a backend for a Next.js application and demonstrate how to do it using the &lt;strong&gt;Notion API&lt;/strong&gt; and &lt;strong&gt;TypeScript&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before we get started, make sure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Notion account&lt;/li&gt;
&lt;li&gt;A Notion API key&lt;/li&gt;
&lt;li&gt;A Next.js project set up with TypeScript&lt;/li&gt;
&lt;li&gt;The @notionhq/client package installed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To get your Notion API key, go to your Notion integrations page, create a new integration, and copy the API key. To install the @notionhq/client package, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install @notionhq/client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 1: Create a Notion database
&lt;/h3&gt;

&lt;p&gt;First, let’s create a Notion database to store our blog posts. To do this, go to your Notion dashboard and create a new page. In the page properties, click the “Add a database” button, and select “Blog Posts” as the database template.&lt;/p&gt;

&lt;p&gt;This will create a database with the following properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Date&lt;/li&gt;
&lt;li&gt;Tags&lt;/li&gt;
&lt;li&gt;Title&lt;/li&gt;
&lt;li&gt;Content&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Feel free to customize the properties to suit your needs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Fetch data from Notion
&lt;/h3&gt;

&lt;p&gt;Next, let’s fetch the data from Notion using the Notion API. Create a new file called &lt;strong&gt;notion.ts&lt;/strong&gt; in your &lt;strong&gt;Next.js&lt;/strong&gt; project, and add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Client } from '@notionhq/client';


const notion = new Client({ auth: process.env.NOTION_API_KEY });
export async function getBlogPosts() {
  // Fail fast if the database ID is missing (also narrows the type for TypeScript).
  const databaseId = process.env.NOTION_DATABASE_ID;
  if (!databaseId) {
    throw new Error('NOTION_DATABASE_ID is not set');
  }
  const response = await notion.databases.query({
    database_id: databaseId,
  });
  return response.results.map((page) =&amp;gt; ({
    id: page.id,
    title: page.properties['Title'].title[0].text.content,
    date: page.properties['Date'].date.start,
    tags: page.properties['Tags'].multi_select.map((tag) =&amp;gt; tag.name),
    content: page.properties['Content'].rich_text[0].text.content,
  }));
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this code, we’re creating a new Client object from the @notionhq/client package, using the NOTION_API_KEY environment variable to authenticate. We’re also defining a function called getBlogPosts that retrieves the blog posts from Notion. We’re using the NOTION_DATABASE_ID environment variable to specify the ID of the Notion database we created earlier. Then, we’re using the databases.query method to retrieve the data from the database. The databases.query method returns an array of pages, where each page represents a blog post. We’re mapping over the results and extracting the relevant properties (title, date, tags, and content) from each page.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Create a Next.js API route
&lt;/h3&gt;

&lt;p&gt;Next, let’s create a Next.js API route to serve our blog posts. Create a new file called blog.ts in the pages/api directory, and add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { NextApiRequest, NextApiResponse } from 'next';
import { getBlogPosts } from '../../lib/notion';

...

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const posts = await getBlogPosts();
  res.status(200).json(posts);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code defines a new API route that retrieves the blog posts using the getBlogPosts function we defined earlier. We’re using the NextApiRequest and NextApiResponse types from Next.js to ensure type safety.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Display the blog posts on the frontend
&lt;/h3&gt;

&lt;p&gt;Finally, let’s display the blog posts on the frontend. Create a new file called &lt;strong&gt;index.tsx&lt;/strong&gt; in the pages directory, and add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { GetStaticProps } from 'next';
import { getBlogPosts } from '../lib/notion';

export const getStaticProps: GetStaticProps = async () =&amp;gt; {
  const posts = await getBlogPosts();
  return {
    props: {
      posts,
    },
  };
};
interface Post {
  id: string;
  title: string;
  date: string;
  tags: string[];
  content: string;
}
interface Props {
  posts: Post[];
}
export default function Home({ posts }: Props) {
  return (
    &amp;lt;div&amp;gt;
      {posts.map((post) =&amp;gt; (
        &amp;lt;div key={post.id}&amp;gt;
          &amp;lt;h2&amp;gt;{post.title}&amp;lt;/h2&amp;gt;
          &amp;lt;p&amp;gt;{post.date}&amp;lt;/p&amp;gt;
          &amp;lt;ul&amp;gt;
            {post.tags.map((tag) =&amp;gt; (
              &amp;lt;li key={tag}&amp;gt;{tag}&amp;lt;/li&amp;gt;
            ))}
          &amp;lt;/ul&amp;gt;
          &amp;lt;p&amp;gt;{post.content}&amp;lt;/p&amp;gt;
        &amp;lt;/div&amp;gt;
      ))}
    &amp;lt;/div&amp;gt;
  );
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this code, we’re using Next.js’s getStaticProps (typed with GetStaticProps) to fetch the blog posts at build time. We’re defining a new interface called Post to represent a single blog post, and another interface called Props to represent the props of our Home component. In the Home component, we’re using the map method to render each blog post as a div element, displaying the title, date, tags, and content of each post.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Run the application
&lt;/h3&gt;

&lt;p&gt;That’s it! You can now run your application using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>notion</category>
      <category>nextjs</category>
      <category>cms</category>
      <category>blog</category>
    </item>
    <item>
      <title>MsSQL on MacOs</title>
      <dc:creator>Pranjal Sharma</dc:creator>
      <pubDate>Thu, 20 Jun 2024 17:08:24 +0000</pubDate>
      <link>https://dev.to/pranjal_sharma_38482a3041/mssql-on-macos-2l3e</link>
      <guid>https://dev.to/pranjal_sharma_38482a3041/mssql-on-macos-2l3e</guid>
      <description>&lt;p&gt;MSSql database is easy to configure on a Windows System . For MacOs we need to take care of few steps to get it installed and run properly . &lt;/p&gt;

&lt;p&gt;Let's see which steps we need to follow →&lt;/p&gt;

&lt;h3&gt;
  
  
  01 : Download Docker
&lt;/h3&gt;

&lt;p&gt;Docker is a set of platform as a service products that use OS-level virtualization to deliver software in packages called containers. We would need it to run Microsoft SQL on Mac.&lt;/p&gt;

&lt;p&gt;→ Check docker version&lt;br&gt;
&lt;code&gt;$ docker --version&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;→ Download  and install docker from here 👉 &lt;a href="https://www.docker.com/products/docker-desktop/"&gt;Docker Desktop&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  02 : Download the MS SQL Server Image to Docker
&lt;/h3&gt;

&lt;p&gt;→ After that, you need to pull the SQL Server 2019 Linux container image from Microsoft Container Registry.&lt;/p&gt;

&lt;p&gt;[ &lt;em&gt;Make sure docker is running in background&lt;/em&gt; ]&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ sudo docker pull mcr.microsoft.com/mssql/server:2019-latest&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;→ Then you can run the docker images command and verify whether the docker image has been pulled successfully.&lt;/p&gt;
&lt;h3&gt;
  
  
  03 : Run the docker container
&lt;/h3&gt;

&lt;p&gt;→ Command to run the docker container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;🔥 Command to run the container 
docker run -d --name sql_server_demo -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=reallyStrongPwd123' -p 1433:1433 mcr.microsoft.com/mssql/server:2019-latest

🔥 Command for M1 Chip, please try this
docker run -e "ACCEPT_EULA=1" -e "MSSQL_SA_PASSWORD=reallyStrongPwd123" -e "MSSQL_PID=Developer" -e "MSSQL_USER=SA" -p 1433:1433 -d --name=sql mcr.microsoft.com/azure-sql-edge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;→ Make sure to put your own password in SA_PASSWORD.&lt;br&gt;
→ You can name your container after the --name flag.&lt;br&gt;
→ -d flag represents the detach mode that releases the terminal after you run the above command.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;→ Then run the &lt;code&gt;docker ps&lt;/code&gt; command to verify whether your container has started to run&lt;/p&gt;

&lt;p&gt;→ If your container stops a few seconds after it started, run the &lt;code&gt;docker ps -a&lt;/code&gt; command and &lt;code&gt;docker logs &amp;lt;container-id&amp;gt;&lt;/code&gt; to check what the errors are.&lt;/p&gt;

&lt;h3&gt;
  
  
  04 : Install the MS SQL CLI
&lt;/h3&gt;

&lt;p&gt;→ Next, you need to install &lt;strong&gt;sql-cli&lt;/strong&gt; via npm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npm install -g sql-cli
OR
$ sudo npm install -g sql-cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;→ &lt;a href="https://nodejs.org/en/download/package-manager"&gt;Link&lt;/a&gt; to install npm if not present .&lt;/p&gt;

&lt;h3&gt;
  
  
  05 : Test the Installation by Logging In
&lt;/h3&gt;

&lt;p&gt;→ Testing the mssql integration by logging in&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ mssql -u sa -p &amp;lt;your-password&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;→ If correctly done : &lt;code&gt;mssql&amp;gt;&lt;/code&gt; prompt will come up.&lt;/p&gt;

&lt;p&gt;→ Then run &lt;code&gt;select @@version&lt;/code&gt; to verify the connectivity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mssql -u sa -p reallyStrongPwd123
Connecting to localhost...done
sql-cli version 0.6.2
Enter ".help" for usage hints.
mssql&amp;gt; select @@version
--------------------------------------------------------------------
Microsoft SQL Server 2019 (RTM-CU15) (KB5008996) - 15.0.4198.2 (X64)
Jan 12 2022 22:30:08
Copyright (C) 2019 Microsoft Corporation
Developer Edition (64-bit) on Linux (Ubuntu 20.04.3 LTS) &amp;lt;X64&amp;gt;
1 row(s) returned
Executed in 1 ms
mssql&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  06 : [OPTIONAL] Download and install the GUI application - Azure Data Studio
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://learn.microsoft.com/en-us/sql/azure-data-studio/download-azure-data-studio?view=sql-server-ver15&amp;amp;tabs=redhat-install%2Credhat-uninstall"&gt;Azure Data Studio&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  07 : 😊 We are done! Stop the services once you're finished with your work
&lt;/h3&gt;

&lt;p&gt;→ &lt;code&gt;docker stop &amp;lt;container-id&amp;gt;&lt;/code&gt; to stop the docker container.&lt;/p&gt;

</description>
      <category>microsoft</category>
      <category>apple</category>
      <category>database</category>
      <category>macos</category>
    </item>
    <item>
      <title>ChatGPT Slack Bot</title>
      <dc:creator>Pranjal Sharma</dc:creator>
      <pubDate>Thu, 20 Jun 2024 16:38:20 +0000</pubDate>
      <link>https://dev.to/pranjal_sharma_38482a3041/chatgpt-slack-bot-3bfe</link>
      <guid>https://dev.to/pranjal_sharma_38482a3041/chatgpt-slack-bot-3bfe</guid>
      <description>&lt;p&gt;Slack is one of the most widely used communication tools for teams, and with the integration of OpenAI’s ChatGPT, it becomes an even more powerful tool. ChatGPT is a highly advanced language model that can generate human-like responses to a given prompt. In this blog, we will show you how to integrate ChatGPT with Slack and use it to answer questions and have conversations with your team members.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4tdko5kqq13m1jprek0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4tdko5kqq13m1jprek0.png" alt="The GPT Bot 🎉" width="800" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: the API has a higher uptime compared to the ChatGPT UI 😄&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  1. Register an app with Slack and gather all the tokens
&lt;/h3&gt;

&lt;p&gt;The first step is to register a new app on Slack and obtain the Slack Bot Token and the Slack App Token.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log in to your Slack workspace and Go to &lt;a href="https://api.slack.com/"&gt;Slack API website.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Click on &lt;strong&gt;“Create an app”&lt;/strong&gt; and select &lt;strong&gt;“From scratch”&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Give your app a name and select your Slack workspace.&lt;/li&gt;
&lt;li&gt;In Basic information &amp;gt; Add features and functionality. Click on “Permissions” and in Scopes add in Bot Token Scopes: &lt;a href="https://api.slack.com/scopes/app_mentions:read"&gt;app_mentions:read&lt;/a&gt; ; &lt;a href="https://api.slack.com/scopes/channels:history"&gt;channels:history&lt;/a&gt; ; &lt;a href="https://api.slack.com/scopes/channels:read"&gt;channels:read&lt;/a&gt; ; &lt;a href="https://api.slack.com/scopes/chat:write"&gt;chat:write&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;In settings, click on &lt;strong&gt;“Socket Mode”&lt;/strong&gt;, enable it and give the token a name. Copy the Slack Bot App Token.&lt;/li&gt;
&lt;li&gt;In &lt;u&gt;Basic information &amp;gt; Add features and functionality.&lt;/u&gt; Click on &lt;strong&gt;“Event Subscriptions”&lt;/strong&gt; and enable it. Furthermore in &lt;strong&gt;“Subscribe to bot events”&lt;/strong&gt; select “app_mention”. Save changes.&lt;/li&gt;
&lt;li&gt;Go to the &lt;strong&gt;“OAuth &amp;amp; Permissions”&lt;/strong&gt; section and install your app to your workspace.&lt;/li&gt;
&lt;li&gt;Copy the &lt;strong&gt;Slack Bot Token.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Get the OpenAI API key [ valid for a month for free users ]
&lt;/h3&gt;

&lt;p&gt;You need an OpenAI API key to integrate ChatGPT.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to OpenAI website.&lt;/li&gt;
&lt;li&gt;Go to the API keys section and create a new API key after you log in.&lt;/li&gt;
&lt;li&gt;Copy the API key.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Install necessary dependencies
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install openai
pip install slack-bolt 
pip install slack
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install these dependencies. Slack Bolt is a set of tools and libraries that allows developers to easily create Slack applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Run the application
&lt;/h3&gt;

&lt;p&gt;Fill in the first 3 tokens in this script with your tokens and run the application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SLACK_BOT_TOKEN = "xoxb-2196501177986-5475158173799-DTxoGAJMjSrqZ1UbKJQDRkYq"
SLACK_APP_TOKEN = "xapp-1-A05DZ02E7JT-5502278073649-6d0d2eabadfa2388189e2bd414393d764d87de68f4df7234f2e87a421eba9440"
OPENAI_API_KEY  = "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"
import os
import openai
from slack_bolt.adapter.socket_mode import SocketModeHandler
from slack import WebClient
from slack_bolt import App
# Event API &amp;amp; Web API
app = App(token=SLACK_BOT_TOKEN) 
client = WebClient(SLACK_BOT_TOKEN)
# This gets activated when the bot is tagged in a channel    
@app.event("app_mention")
def handle_message_events(body, logger):
    # Log message
    print(str(body["event"]["text"]).split("&amp;gt;")[1])
    # Create prompt for ChatGPT
    prompt = str(body["event"]["text"]).split("&amp;gt;")[1]
    # Let the user know that we are busy with the request 
    response = client.chat_postMessage(channel=body["event"]["channel"], 
                                       thread_ts=body["event"]["event_ts"],
                                       text=f"Hello LazyPay junkies !! :robot_face: \nThanks for your request, I'm on it!")
    # Check ChatGPT
    openai.api_key = OPENAI_API_KEY
    response = openai.Completion.create(
        engine="text-davinci-003",
        prompt=prompt,
        max_tokens=1024,
        n=1,
        stop=None,
        temperature=0.5).choices[0].text
    # Reply to thread 
    response = client.chat_postMessage(channel=body["event"]["channel"], 
                                       thread_ts=body["event"]["event_ts"],
                                       text=f"Here you go: \n{response}")
if __name__ == "__main__":
    SocketModeHandler(app, SLACK_APP_TOKEN).start()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And &lt;strong&gt;here we go&lt;/strong&gt;!! Add your bot to a channel from the integrations tab. 🚀&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>slack</category>
      <category>bot</category>
      <category>ai</category>
    </item>
    <item>
      <title>&gt;1 RDBMS in Spring Data JPA</title>
      <dc:creator>Pranjal Sharma</dc:creator>
      <pubDate>Thu, 20 Jun 2024 15:34:35 +0000</pubDate>
      <link>https://dev.to/pranjal_sharma_38482a3041/1-rdbms-in-spring-data-jpa-5ge4</link>
      <guid>https://dev.to/pranjal_sharma_38482a3041/1-rdbms-in-spring-data-jpa-5ge4</guid>
      <description>&lt;p&gt;This document deals with building the backend application that uses Spring Data JPA with multiple relational databases. &lt;br&gt;
For an example we will connect to &lt;strong&gt;MySQL + MSSQL&lt;/strong&gt; database.&lt;/p&gt;

&lt;h3&gt;
  
  
  The main task here is to separate the properties and configurations for all the databases that have to be integrated.
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Other JPA layers in the code remain the same as for a single integration. [ Repository + Entity ]
&lt;/h3&gt;

&lt;p&gt;[ Point to remember : define these in different packages for the different databases, as we will need them when defining the configs. ]&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdh9lvxivz89risy1ttm1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdh9lvxivz89risy1ttm1.png" alt="Sample Code Structure Snippet" width="640" height="1006"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For specific code refer &lt;a&gt;this&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sample Configurations in application properties
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring.datasource.url=jdbc:mysql://127.0.0.1/heimdall_db?useSSL=false
spring.datasource.username=root
spring.datasource.password=pranjal
spring.datasource.driverClassName=com.mysql.cj.jdbc.Driver

##SQL Server
sqlserver.datasource.url=jdbc:sqlserver://localhost;databaseName=jpa_test
sqlserver.datasource.username=sa
sqlserver.datasource.password=reallyStrongPwd123
sqlserver.datasource.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver

spring.jpa.database=default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Don't define other hibernate configurations specific to database here.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Defining Separate Config Classes for all the databases
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package com.sma.backend.multidb.config;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.orm.jpa.EntityManagerFactoryBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;


import javax.persistence.EntityManagerFactory;
import javax.sql.DataSource;

@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(entityManagerFactoryRef = "sqlServerEntityManagerFactory",
        transactionManagerRef = "sqlServerTransactionManager",
        basePackages = "com.sma.backend.multidb.database.sqlserver.repository")
public class SqlServerConfig {

    @Bean
    @ConfigurationProperties(prefix = "sqlserver.datasource")
    public DataSourceProperties sqlServerDataSourceProperties() {
        return new DataSourceProperties();
    }
    @Bean
    public DataSource sqlServerDataSource(@Qualifier("sqlServerDataSourceProperties") DataSourceProperties dataSourceProperties) {
        return dataSourceProperties.initializeDataSourceBuilder().build();
    }

    @Bean(name = "sqlServerEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean sqlServerEntityManagerFactory(@Qualifier("sqlServerDataSource") DataSource sqlServerDataSource, EntityManagerFactoryBuilder builder) {

        return builder.dataSource(sqlServerDataSource)
                .packages("com.sma.backend.multidb.database.sqlserver.domain")
                .persistenceUnit("sqlserver")
                .build();

    }

    @Bean
    public PlatformTransactionManager sqlServerTransactionManager(@Qualifier("sqlServerEntityManagerFactory")
                                                                              EntityManagerFactory factory) {
        return new JpaTransactionManager(factory);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  MySqlConfig
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package com.sma.backend.multidb.config;

@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(entityManagerFactoryRef = "mysqlEntityManagerFactory", transactionManagerRef = "mysqlTransactionManager", basePackages = {"com.sma.backend.multidb.database.mysql.repository"})

public class MySqlConfig {

    @Primary
    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSourceProperties mysqlDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Primary
    @Bean
    public DataSource mysqlDataSource(@Qualifier("mysqlDataSourceProperties") DataSourceProperties dataSourceProperties) {
        return dataSourceProperties.initializeDataSourceBuilder().build();
    }

    @Primary
    @Bean
    public LocalContainerEntityManagerFactoryBean mysqlEntityManagerFactory(@Qualifier("mysqlDataSource") DataSource hubDataSource, EntityManagerFactoryBuilder builder) {
        return builder.dataSource(hubDataSource).packages("com.sma.backend.multidb.database.mysql.domain")
                .persistenceUnit("mysql").build();
    }

    @Primary
    @Bean
    public PlatformTransactionManager mysqlTransactionManager(@Qualifier("mysqlEntityManagerFactory") EntityManagerFactory factory) {
        return new JpaTransactionManager(factory);
    }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  POINTS TO REMEMBER :
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;hibernate.dialect&lt;/strong&gt; → The dialect specifies the type of database used, so that Hibernate generates the appropriate SQL statements. To connect any Hibernate application to a database, the SQL dialect must be configured.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hence, to specify which dialect each database should use, we have to define separate values for this property.&lt;/p&gt;

&lt;p&gt;We can do that by passing this, along with all other database-specific properties, in a map tagged as &lt;strong&gt;properties&lt;/strong&gt; in &lt;code&gt;EntityManagerFactoryBuilder&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If running locally on a Mac, you have to use different ports to run both databases on localhost, as MSSQL needs Docker to run.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;I hope that this Blog Post helped you! If you have any questions, feel free to use the comment section! 💬&lt;/p&gt;

&lt;p&gt;Oh and if you want more content like this, follow me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/pj-iitk"&gt;Github&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/in/pj-iitk/"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>rdbms</category>
      <category>springboot</category>
      <category>mysql</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
