<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Abhishek Prajapati</title>
    <description>The latest articles on DEV Community by Abhishek Prajapati (@blacviking).</description>
    <link>https://dev.to/blacviking</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F607884%2F62c7f69a-bbba-421d-ab0e-a7cfe77b4fc1.jpg</url>
      <title>DEV Community: Abhishek Prajapati</title>
      <link>https://dev.to/blacviking</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/blacviking"/>
    <language>en</language>
    <item>
      <title>BookMyShow/Ticketmaster system design</title>
      <dc:creator>Abhishek Prajapati</dc:creator>
      <pubDate>Sat, 14 Jun 2025 09:25:34 +0000</pubDate>
      <link>https://dev.to/blacviking/bookmyshowticketmaster-system-design-5ccf</link>
      <guid>https://dev.to/blacviking/bookmyshowticketmaster-system-design-5ccf</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What is BookMyShow
&lt;/h4&gt;

&lt;p&gt;It is a platform that allows users to book tickets to events such as movies, concerts, and shows.&lt;/p&gt;

&lt;h4&gt;
  
  
  Steps to tackle this question
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Requirements&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Functional requirements: These are the core functionalities of our system&lt;/li&gt;
&lt;li&gt;Non-functional requirements: These are the qualities of our system&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;Entities&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
These are the core data models that help us understand what kind of data moves through our system&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;APIs&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
These are the endpoints that fulfill our functional requirements and let services communicate with each other&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;HLD&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
This design would satisfy our functional requirements&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deep Dive&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
This is where we fulfill our non-functional requirements&lt;/p&gt;&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;With the roadmap ready, let's get started.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Requirements
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Functional Requirements
&lt;/h4&gt;

&lt;p&gt;We want our system to fulfill the below core functionalities:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search for shows/movies/events in their city&lt;/li&gt;
&lt;li&gt;View the details of an event (this refers to all of the above)&lt;/li&gt;
&lt;li&gt;Book a ticket for the event&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Non-functional requirements
&lt;/h4&gt;

&lt;p&gt;Here we will try to decide the quality of our system&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We want consistency in our booking service since we don't want multiple people to book the same seats for an event&lt;/li&gt;
&lt;li&gt;Availability for searching an event&lt;/li&gt;
&lt;li&gt;Low latency for searching&lt;/li&gt;
&lt;li&gt;Scalability to handle surges, for example booking tickets to a famous concert (like the Coldplay concert in India, boy that was an experience)&lt;/li&gt;
&lt;li&gt;Prioritize reads &amp;gt;&amp;gt; writes since we would mostly be searching and viewing events rather than booking tickets&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  2. Entities
&lt;/h3&gt;

&lt;p&gt;We now decide which entities we are going to use in our system&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Event - Stores details about an event&lt;/li&gt;
&lt;li&gt;User - Stores details about the user&lt;/li&gt;
&lt;li&gt;Ticket - Stores information about an event reservation&lt;/li&gt;
&lt;li&gt;Performer - Stores details about the performer who is going to perform at an event&lt;/li&gt;
&lt;li&gt;Venue - Details about a particular venue, e.g. capacity, location, amenities, etc.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  3. APIs
&lt;/h3&gt;

&lt;p&gt;Now we are going to define the endpoints that are going to satisfy our functional requirements&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;View an event&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Request:
GET /events/:eventId

Response:
Event, Performer, Venue, Tickets[]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Search for an event&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Request:
GET /search?term={searchTerm}&amp;amp;location={locationOfEvent}&amp;amp;type={typeOfEvent}&amp;amp;...

Response:
Partial&amp;lt;Event&amp;gt;[]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We get a list of partial Event objects that give a basic idea of each event; if the user wants more details, clicking the event hits &lt;code&gt;/events/:eventId&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Book a ticket&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is a little tricky: since we don't want multiple users to book the same seats, we will use a two-phase booking flow.&lt;/p&gt;

&lt;p&gt;Phase 1 - we lock the seats for a limited time, say x minutes, so the user can complete the booking. If the booking fails or times out, we release the seats to other users.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Request:
POST /booking/reserve
header: JWT session token
body: {
    ticketId: "ad98384109ffe0"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
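&lt;p&gt;A minimal sketch of the reservation lock, assuming a Redis-style store (the key name and the 10-minute TTL are illustrative choices, not fixed by the design):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- reserve the ticket for userA only if no one else holds it; auto-expire after 10 min
SET ticket:ad98384109ffe0:lock userA NX EX 600
-- reply OK    -&gt; seat reserved, userA can proceed to /booking/confirm
-- reply (nil) -&gt; someone else holds the reservation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;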



&lt;p&gt;Phase 2 - User books the ticket and we mark the seats booked&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Reqeust:
POST /booking/confirm
header: JWT session token
body: {
    ticketId,
    paymentDetails (3rd party payment provider details)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have decided on the entities and APIs, let's get into the high-level design&lt;/p&gt;

&lt;h3&gt;
  
  
  4. High Level Design
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frn3ed8ygaplwc3ggkkjo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frn3ed8ygaplwc3ggkkjo.png" alt="HLD Design" width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Deep dive
&lt;/h3&gt;

&lt;p&gt;Now that we have satisfied the functional requirements, we can deep dive into the non-functional requirements.&lt;br&gt;
The areas of improvement are&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Improving the search speed which is the low latency search&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We introduce Elasticsearch for querying events; it stores events against tokenized terms (term: [event1, event2, ...])&lt;/p&gt;
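&lt;p&gt;Conceptually, the inverted index looks something like this (the terms and event IDs are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"coldplay" : [event_42, event_91]
"mumbai"   : [event_42, event_107]
"concert"  : [event_42, event_91, event_107]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A search for "coldplay concert" intersects the lists for its tokens, returning event_42 and event_91.&lt;/p&gt;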

&lt;p&gt;We could use the event CRUD service to update the data in both the database and Elasticsearch, but that would add complexity to the event CRUD service since it would be updating two systems&lt;/p&gt;

&lt;p&gt;To handle this update path better we will use change data capture (CDC). We first write to the database, and the database's change stream then propagates the update to Elasticsearch&lt;/p&gt;
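&lt;p&gt;The CDC path can be sketched as follows (Debezium and Kafka are example components here, not fixed choices):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Event CRUD service
      |  single write
      v
   Database (change log / WAL)
      |  CDC connector (e.g. Debezium) tails the log
      v
   Kafka topic
      |  sink consumer
      v
 Elasticsearch index
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;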

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf9e5jl0as24f9z2otgy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf9e5jl0as24f9z2otgy.png" alt="Option1 vs Option2" width="766" height="1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also cache popular events which will reduce the search time&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Scalability to handle surges from popular events&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Problem: if a user is checking seats for a popular event, a lot of people are hitting that event at the same time, which can lead to delayed responses or no response at all&lt;/p&gt;

&lt;p&gt;Possible Solutions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Long Polling&lt;/li&gt;
&lt;li&gt;Websockets&lt;/li&gt;
&lt;li&gt;SSE&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So, to tackle multiple users trying to book seats for a popular event, we can use a virtual queue: N users are inserted into the queue, and then something like SSE sends each user an event telling them they can now book their ticket.&lt;/p&gt;

&lt;p&gt;The waiting queue can be configured to be used only for popular events, so we don't pay the queueing cost every time.&lt;/p&gt;

&lt;p&gt;Updated HLD diagram&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzly0oaxdmx4mikrxw94.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzly0oaxdmx4mikrxw94.png" alt="Updated HLD Diagram" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's all folks! I hope you had a great time reading this blog and I hope to see you again. If you have any suggestions or questions, please comment down below.&lt;/p&gt;

</description>
      <category>systemdesign</category>
      <category>hld</category>
    </item>
    <item>
      <title>Tiny URL Design</title>
      <dc:creator>Abhishek Prajapati</dc:creator>
      <pubDate>Mon, 03 Mar 2025 17:02:00 +0000</pubDate>
      <link>https://dev.to/blacviking/tiny-url-design-411e</link>
      <guid>https://dev.to/blacviking/tiny-url-design-411e</guid>
      <description>&lt;h3&gt;
  
  
  What is a Tiny URL
&lt;/h3&gt;

&lt;p&gt;It's a service that takes a long url like &lt;code&gt;https://www.example.com/averylongpathoftheurlthatneedstobeshortened&lt;/code&gt; and converts it to something like &lt;code&gt;https://sho.rt/34ssd21&lt;/code&gt;. Now whenever someone visits the short URL they will be redirected to the original URL&lt;/p&gt;

&lt;h4&gt;
  
  
  Advantage
&lt;/h4&gt;

&lt;p&gt;With a short URL, it becomes easier to share and remember links (especially if you can define your own custom path)&lt;/p&gt;

&lt;h4&gt;
  
  
  Disadvantages
&lt;/h4&gt;

&lt;p&gt;The one problem that arises with this is the risk of not knowing where you are getting redirected to. This can easily be used to phish people.&lt;/p&gt;

&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Generate a short URL for a given long URL. Need to ensure they are unique. If not, then users can visit websites that they were not meant to see (like your portfolio)&lt;/li&gt;
&lt;li&gt;Be able to see the number of clicks of the short URL generated.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Performance Consideration
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;The median URL has around 10k clicks, and a popular URL can have more than a million clicks&lt;/li&gt;
&lt;li&gt;We may have to support 1 trillion short URLs at a time (Yes, its trillion)&lt;/li&gt;
&lt;li&gt;The average size of a URL record would be about 1 KB, so the total space required would be &lt;code&gt;1 trillion * 1KB = 1 PB&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;We will optimize for reads since most people would just be reading rather than generating&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Link generation
&lt;/h3&gt;

&lt;p&gt;In order to generate a shorter link for the provided long link, we can use a hash function which takes some parameters and generates a short URL for us&lt;/p&gt;

&lt;p&gt;Example hash function&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hash(long_url, user_id, upload_timestamp)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;long_url: The URL that needs to be shortened&lt;/li&gt;
&lt;li&gt;user_id: The user who is shortening the URL&lt;/li&gt;
&lt;li&gt;upload_timestamp: The time at which the short url is created&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now we need to accommodate a lot of URLs in order to avoid collision between the links, thus we need to determine the length of the short URL (this is the path of the URL i.e &lt;code&gt;https://www.sho.rt/&amp;lt;hash_function_output&amp;gt;&lt;/code&gt;)&lt;/p&gt;

&lt;p&gt;We will be using lowercase letters and digits to generate the short URL path, giving us 36 characters in total (10 digits + 26 letters).&lt;/p&gt;

&lt;p&gt;If the length of the short URL path is &lt;code&gt;n&lt;/code&gt; then we have &lt;code&gt;36^n&lt;/code&gt; possibilities. In our earlier assumption we have aimed to store 1 Trillion URLs.&lt;/p&gt;

&lt;p&gt;For n = 8&lt;br&gt;
Number of possibilities = 36 ^ 8 ~= 2.8 trillion&lt;/p&gt;

&lt;p&gt;With n = 8, we have enough possibilities.&lt;/p&gt;

&lt;p&gt;So our hashed URL would look something like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hash(long_url, user_id, upload_timestamp) = https://www.sho.rt/2fjh7rw6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The length of short url path &lt;code&gt;2fjh7rw6&lt;/code&gt; is 8&lt;/p&gt;

&lt;p&gt;Now, what should we do to deal with hash collisions? Since we store the data in a DB table, we can probe for the next available hash.&lt;/p&gt;
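&lt;p&gt;A quick sketch of probing on a collision (the hash values are made up for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hash(long_url, user_id, ts) = 2fjh7rw6   -- row already exists: collision
try 2fjh7rw7                             -- next candidate is free, store here
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;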

&lt;h3&gt;
  
  
  Assigning URLs (Writes)
&lt;/h3&gt;

&lt;p&gt;Now we will try to decide on strategies to store the generated short URL.&lt;/p&gt;

&lt;h4&gt;
  
  
  Replication
&lt;/h4&gt;

&lt;p&gt;We want to maximize our write throughput wherever possible, even though the overall system is optimized for reads.&lt;/p&gt;

&lt;p&gt;Now let's assume we are using multi leader replication. We have 2 masters.&lt;/p&gt;

&lt;p&gt;User 1 creates hash &lt;code&gt;abc&lt;/code&gt; for &lt;code&gt;xyz.com&lt;/code&gt; on Master 1, and User 2 creates the same hash &lt;code&gt;abc&lt;/code&gt; for &lt;code&gt;def.com&lt;/code&gt; on Master 2.&lt;/p&gt;

&lt;p&gt;After some time both masters sync up and discover that hash &lt;code&gt;abc&lt;/code&gt; exists on both of them. To resolve the conflict, the system uses a Last Write Wins policy: the value stored for &lt;code&gt;abc&lt;/code&gt; becomes &lt;code&gt;def.com&lt;/code&gt; (uploaded by User 2), and User 1's value &lt;code&gt;xyz.com&lt;/code&gt; is lost.&lt;/p&gt;

&lt;p&gt;Now User 3, who expects &lt;code&gt;xyz.com&lt;/code&gt; for hash &lt;code&gt;abc&lt;/code&gt;, gets &lt;code&gt;def.com&lt;/code&gt; instead, which leads to confusion.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpx4lieil3jui5mbxvbm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpx4lieil3jui5mbxvbm.png" alt="Wrong hashes being served" width="645" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above issue can cause a lot of confusion and thus we will go with &lt;code&gt;Single Leader Replication&lt;/code&gt; for our case&lt;/p&gt;

&lt;p&gt;With a single leader we won't have the data-loss issue described above. We do now have a single point of failure for writes, but since our system is read heavy, we should be fine with &lt;code&gt;Single leader replication&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Caching
&lt;/h4&gt;

&lt;p&gt;We could use a cache to speed up our writes by first writing to the cache and then flushing those values to the database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcal7aax6hxcg1sut4nh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcal7aax6hxcg1sut4nh.png" alt="Caching" width="682" height="138"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This approach has the same issue: the same hash can be generated on two different caches.&lt;/p&gt;

&lt;p&gt;So, using a cache would not make much sense here&lt;/p&gt;
&lt;h4&gt;
  
  
  Partitioning
&lt;/h4&gt;

&lt;p&gt;It is an important aspect of our design since we can use it to improve our reads and writes by distributing the load across different nodes&lt;/p&gt;

&lt;p&gt;Since the short URL is a hash, its values should be spread relatively evenly, so we can use &lt;code&gt;range&lt;/code&gt;-based partitioning on it.&lt;/p&gt;

&lt;p&gt;If a generated hash already exists on the node it maps to, we can use probing to find the next available hash. The advantage of probing is that similar hashes stay on the same node.&lt;/p&gt;

&lt;p&gt;We also need to keep track of the consistent hashing ring in a coordination service, which minimizes reshuffling when the cluster size changes.&lt;/p&gt;
&lt;h4&gt;
  
  
  Single Node (Master Node)
&lt;/h4&gt;

&lt;p&gt;In single leader replication we have a single write node. Even so, two users can end up generating the same hash. What should be done in that situation? We could use locking, but the hash entry does not exist in the DB yet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63l6dk1gl5t4ogcxtwwr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63l6dk1gl5t4ogcxtwwr.png" alt="Users writing new rows" width="696" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To deal with this we have some workarounds&lt;/p&gt;
&lt;h5&gt;
  
  
  Predicate Locks
&lt;/h5&gt;

&lt;p&gt;What are predicate locks? A predicate lock is a lock on all rows that satisfy some condition, including rows that don't exist yet.&lt;/p&gt;

&lt;p&gt;For example,&lt;br&gt;
We have the below query&lt;br&gt;
&lt;code&gt;SELECT * FROM urls WHERE shorturl = 'dgf4221d'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If user1 tries to run this query while user2 already holds a lock matching the same condition, then user1 needs to wait for user2 to finish.&lt;/p&gt;

&lt;p&gt;Now we can lock the rows that don't exist yet and then insert the required row. If another user wants to create the same entry they will not be able to do so, since there is a predicate lock on the row because of the first user.&lt;/p&gt;

&lt;p&gt;Example query to create a lock on the row (For Postgres)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- Check if the short URL exists (this will acquire a predicate lock)
SELECT * FROM urls WHERE shorturl = 'dgf4221d';

-- If no rows are found, insert
INSERT INTO urls (shorturl, longurl) VALUES ('dgf4221d', 'https://www.example.com/averylongpath');

COMMIT;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To speed up these queries we can index the short URL column, which makes the predicate query much faster (O(log n) instead of O(n))&lt;/p&gt;
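&lt;p&gt;For example, in Postgres the index could be created like this (the table and column names follow the earlier query and are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- B-tree index so WHERE shorturl = '...' becomes an O(log n) lookup
CREATE INDEX idx_urls_shorturl ON urls (shorturl);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;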

&lt;p&gt;We can also use a stored procedure to deal with hash collisions: if a user hits a collision, we probe and store the long URL under the next available hash.&lt;/p&gt;

&lt;h5&gt;
  
  
  Materializing conflicts
&lt;/h5&gt;

&lt;p&gt;We can pre-populate the DB with all possible rows so that users can lock onto a row when they are trying to write.&lt;/p&gt;

&lt;p&gt;Let's calculate the total space required to store all ~2.8 trillion possible short URLs&lt;/p&gt;

&lt;p&gt;No. of rows = 36^8 ~= 2.8 trillion&lt;br&gt;
Size of 1 character = 1 byte&lt;br&gt;
Length of each short URL = 8&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Total space required ~= 2.8 trillion * 1 byte * 8 characters ~= 22 TB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;~22 TB is not a lot of storage these days, considering even personal PCs ship with 1 TB drives&lt;/p&gt;

&lt;p&gt;When a user is updating an existing entry, the DB locks that row, and another user trying to update the same row has to wait&lt;/p&gt;

&lt;h4&gt;
  
  
  Engine Implementation
&lt;/h4&gt;

&lt;p&gt;We don't need range-based queries since we mostly query a single row while reading or writing, so a hash index would be much faster. We cannot use an in-memory DB since we have tens of terabytes of data.&lt;/p&gt;

&lt;p&gt;We have 2 choices - LSM tree + SSTable, and B-Tree&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;LSM Tree + SSTable&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;B-Tree&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Write Performance&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ &lt;strong&gt;High&lt;/strong&gt; (sequential writes, batched flushing)&lt;/td&gt;
&lt;td&gt;❌ &lt;strong&gt;Lower&lt;/strong&gt; (random disk writes for every insert/update)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Read Performance&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ &lt;strong&gt;Moderate&lt;/strong&gt; (requires merging multiple SSTables)&lt;/td&gt;
&lt;td&gt;✅ &lt;strong&gt;High&lt;/strong&gt; (direct lookup using tree traversal)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Write Amplification&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ &lt;strong&gt;Lower&lt;/strong&gt; (writes are buffered and flushed in bulk)&lt;/td&gt;
&lt;td&gt;❌ &lt;strong&gt;Higher&lt;/strong&gt; (writes propagate across multiple levels)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Read Amplification&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ &lt;strong&gt;Higher&lt;/strong&gt; (may need to scan multiple SSTables)&lt;/td&gt;
&lt;td&gt;✅ &lt;strong&gt;Lower&lt;/strong&gt; (direct path to data via tree traversal)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Storage Efficiency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ &lt;strong&gt;Better&lt;/strong&gt; (compaction reduces fragmentation)&lt;/td&gt;
&lt;td&gt;❌ &lt;strong&gt;Less Efficient&lt;/strong&gt; (fragmentation due to node splits)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Compaction Needed?&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ &lt;strong&gt;Yes&lt;/strong&gt; (merging SSTables to optimize reads)&lt;/td&gt;
&lt;td&gt;❌ &lt;strong&gt;No&lt;/strong&gt; (data is structured directly in a balanced tree)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Durability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ &lt;strong&gt;Yes&lt;/strong&gt; (WAL ensures no data loss before flushing to SSTables)&lt;/td&gt;
&lt;td&gt;✅ &lt;strong&gt;Yes&lt;/strong&gt; (data stored directly in the tree structure)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Concurrency Control&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ &lt;strong&gt;Better&lt;/strong&gt; (writes are append-only, reducing contention)&lt;/td&gt;
&lt;td&gt;❌ &lt;strong&gt;More Locking Needed&lt;/strong&gt; (modifies multiple nodes in-place)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Disk I/O&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ &lt;strong&gt;Optimized&lt;/strong&gt; (sequential writes, fewer random writes)&lt;/td&gt;
&lt;td&gt;❌ &lt;strong&gt;More I/O&lt;/strong&gt; (random writes and in-place updates)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Use Case&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;🔹 &lt;strong&gt;Write-heavy&lt;/strong&gt; workloads (NoSQL, Logs, Streaming data)&lt;/td&gt;
&lt;td&gt;🔹 &lt;strong&gt;Read-heavy&lt;/strong&gt; workloads (Relational Databases, Indexes)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Examples&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;🔹 Apache Cassandra, LevelDB, RocksDB, Bigtable&lt;/td&gt;
&lt;td&gt;🔹 MySQL (InnoDB), PostgreSQL, Oracle DB&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Considering all the above points and the requirements of our system, it makes a lot of sense to use a &lt;code&gt;BTree&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Database Choice
&lt;/h4&gt;

&lt;p&gt;So far we have made the following choices&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Single leader replication&lt;/li&gt;
&lt;li&gt;Partitioning&lt;/li&gt;
&lt;li&gt;B-Tree based&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Keeping all of this in mind, we can use a relational DB, and since we don't need distributed queries we can choose &lt;code&gt;MySQL&lt;/code&gt; for our case (PostgreSQL would also work).&lt;/p&gt;

&lt;h3&gt;
  
  
  Maximizing read speeds
&lt;/h3&gt;

&lt;p&gt;We can replicate data across multiple read replicas in order to handle the user traffic.&lt;br&gt;
We could get stale data from a replica, but we can fall back to the leader on a null result (at the cost of more load on the leader)&lt;/p&gt;

&lt;h4&gt;
  
  
  Handling Hotlinks
&lt;/h4&gt;

&lt;p&gt;There may be some links that are accessed by a large number of people as compared to the other links. These links can be called Hotlinks&lt;/p&gt;

&lt;p&gt;A caching layer can help mitigate a lot of load.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Caching layer can be scaled independently of DB&lt;/li&gt;
&lt;li&gt;Partitioning the cache by shortURL should lead to fewer cache misses&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Populating cache
&lt;/h4&gt;

&lt;p&gt;We can't push every link to the cache since we don't know which links will be popular. So what should the approach be?&lt;/p&gt;

&lt;p&gt;Use &lt;code&gt;write back&lt;/code&gt; - This will cause write conflicts since there could be multiple entries for the same short URL&lt;/p&gt;

&lt;p&gt;Use &lt;code&gt;Write through&lt;/code&gt; - If the cache goes down then we risk losing the shortURL data and this also slows down the write speeds since now we are waiting for the DB to finish the update as well.&lt;/p&gt;

&lt;p&gt;Use &lt;code&gt;Write around&lt;/code&gt; - We update the DB first, and on a cache miss we populate the cache as well. This increases cache misses initially, but eventually we get better reads.&lt;/p&gt;
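&lt;p&gt;The write-around read path, sketched (the cache key format and the use of a TTL are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET url:2fjh7rw6 from cache
  hit  -&gt; redirect to the cached long URL
  miss -&gt; read the long URL from the DB
          SET url:2fjh7rw6 in the cache (with a TTL)
          redirect
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;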

&lt;h3&gt;
  
  
  Analytics Solutions
&lt;/h3&gt;

&lt;p&gt;We will now discuss the ways to update the number of clicks on a particular shortURL&lt;/p&gt;

&lt;h4&gt;
  
  
  Naive Approach
&lt;/h4&gt;

&lt;p&gt;We keep a clicks counter per row and just increment it.&lt;/p&gt;

&lt;p&gt;But if 2 users click the same link at the same time, we can have a race condition: both read the counter and try to increment it simultaneously, which can lead to lost updates&lt;/p&gt;

&lt;p&gt;We could use a lock, or an atomic increment operation, per row.&lt;/p&gt;
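&lt;p&gt;An atomic increment avoids the read-modify-write race, since the DB applies it as a single statement (table and column names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;UPDATE urls SET clicks = clicks + 1 WHERE shorturl = '2fjh7rw6';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;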

&lt;h4&gt;
  
  
  Stream Processing
&lt;/h4&gt;

&lt;p&gt;We can place individual data somewhere that doesn't require grabbing locks and then aggregate later&lt;/p&gt;

&lt;p&gt;We can explore the below options&lt;br&gt;
DB - Relatively slow&lt;br&gt;
In-memory DB - Super fast but not durable&lt;br&gt;
Log-based message broker - We can write to a durable, append-only log that can be processed at a later point&lt;/p&gt;

&lt;p&gt;So we can place the click data in some sort of queue like Kafka&lt;/p&gt;

&lt;h4&gt;
  
  
  Click consumer
&lt;/h4&gt;

&lt;p&gt;We have the following options to process the clicks data from the queue&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;HDFS + Spark&lt;br&gt;
Batch jobs to aggregate the clicks data will be too infrequent since we would need to dump the clicks data to HDFS first and then process it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Flink&lt;br&gt;
Process each event individually in realtime, may send too many writes to the database depending on implementation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Spark streaming&lt;br&gt;
Processes events in configurable mini-batch sizes. This also does not put a lot of load on the DB unlike Flink&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Considering our scenario and how frequently we want to update the clicks data, we can choose &lt;code&gt;Spark Streaming&lt;/code&gt; since it processes the data in batches which does not put a lot of load on DB and also updates the DB frequently&lt;/p&gt;

&lt;p&gt;Stream consumer frameworks enable us to ensure exactly-once processing of events via checkpointing or queue offsets&lt;/p&gt;

&lt;h4&gt;
  
  
  Process Exactly once
&lt;/h4&gt;

&lt;p&gt;We are guaranteed that the queue and Spark Streaming will process each event exactly once, but when we try to update the DB with the click data we might face some issues.&lt;/p&gt;

&lt;p&gt;Say Spark Streaming wants to increase the click count by 100 and sends a request to the DB. The DB applies the change, but the Spark Streaming service goes down before it receives the acknowledgment. When the service comes back up, it has no record that the 100-click update succeeded, so it pushes the same update to the DB again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflm74q7rvbpj3wz15rtf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflm74q7rvbpj3wz15rtf.png" alt="Spark Streaming" width="699" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we have a couple of options for this&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Two-phase commit - This is too slow for our system since we would need a coordinator node to implement the 2-phase commit between the Spark Streaming service and the DB&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Idempotency keys - Every update carries an idempotency key to identify it; if the same key has already been applied, we reject the update&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
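&lt;p&gt;An idempotency key can be enforced with a uniqueness check in the same transaction as the counter update (the table and column names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BEGIN;
-- fails on the unique batch_id if this batch was already applied,
-- rolling back the whole transaction
INSERT INTO applied_batches (batch_id) VALUES ('batch-0017');
UPDATE urls SET clicks = clicks + 100 WHERE shorturl = '2fjh7rw6';
COMMIT;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;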

&lt;p&gt;This would scale poorly if there are multiple publishers for the same row, i.e. lots of updates from multiple Spark Streaming services to the same row&lt;/p&gt;

&lt;h5&gt;
  
  
  One Publisher per row
&lt;/h5&gt;

&lt;p&gt;Imagine our data is growing and we need to process all the click data. To handle the traffic we have added multiple Kafka queues and Spark Streaming consumers, but now, to update the data correctly, we would need to store multiple idempotency keys per row, which is not a good idea.&lt;/p&gt;

&lt;p&gt;Instead, we can partition our Kafka queues and Spark Streaming consumers by short URL. This ensures that only one consumer is publishing clicks for a given short URL at a time.&lt;/p&gt;

&lt;p&gt;Benefits&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fewer idempotency keys to store&lt;/li&gt;
&lt;li&gt;No need to grab locks on publish step&lt;/li&gt;
&lt;/ul&gt;
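&lt;p&gt;The "one publisher per row" idea can be sketched by hashing the short URL to pick a partition, so every click for the same URL lands with the same consumer (partition count and names are illustrative):&lt;/p&gt;

```python
# Sketch: route each short URL to one Kafka partition so a single consumer
# "owns" all clicks for that URL.
import hashlib

NUM_PARTITIONS = 8

def partition_for(short_url):
    # Deterministic hash, stable across processes (unlike built-in hash()).
    digest = hashlib.md5(short_url.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

# Every click for the same short URL maps to the same partition,
# so exactly one consumer publishes its counts.
print(partition_for("abc123") == partition_for("abc123"))  # True
```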

&lt;h3&gt;
  
  
  Deleting Expired Links
&lt;/h3&gt;

&lt;p&gt;We can run a relatively inexpensive batch job every &lt;code&gt;x&lt;/code&gt; hours to check for expired links.&lt;br&gt;
It only rarely needs to grab locks on rows that are commonly being read.&lt;/p&gt;
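&lt;p&gt;A rough sketch of such a cleanup job, using an in-memory SQLite table as a stand-in for the real DB (the schema and row names are illustrative):&lt;/p&gt;

```python
# Sketch of a periodic batch job that deletes expired links.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE links (short_url TEXT PRIMARY KEY, expires_at INTEGER)")
now = int(time.time())
conn.executemany(
    "INSERT INTO links VALUES (?, ?)",
    [("live1", now + 3600), ("dead1", now - 60), ("dead2", now - 7200)],
)

# One bulk delete per run instead of checking expiry on every read.
deleted = conn.execute("DELETE FROM links WHERE ? > expires_at", (now,)).rowcount
print(deleted)  # 2
```

&lt;p&gt;In production the &lt;code&gt;expires_at&lt;/code&gt; column would be indexed so each run scans only the expired range.&lt;/p&gt;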

&lt;h3&gt;
  
  
  High Level Diagram
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9flpixog8h91l7tv2y9f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9flpixog8h91l7tv2y9f.png" alt="HLD" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Now with this system we should be able to support the initial numbers that we have quoted. Do let me know what can be improved in this system.&lt;/p&gt;

&lt;h4&gt;
  
  
  Credits
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://youtu.be/5V6Lam8GZo4?list=PLjTveVh7FakJOoY6GPZGWHHl4shhDT8iV" rel="noopener noreferrer"&gt;https://youtu.be/5V6Lam8GZo4?list=PLjTveVh7FakJOoY6GPZGWHHl4shhDT8iV&lt;/a&gt;&lt;/p&gt;

</description>
      <category>mysql</category>
      <category>redis</category>
      <category>kafka</category>
      <category>spark</category>
    </item>
    <item>
      <title>Starting a new project</title>
      <dc:creator>Abhishek Prajapati</dc:creator>
      <pubDate>Thu, 29 Feb 2024 17:38:11 +0000</pubDate>
      <link>https://dev.to/blacviking/starting-a-new-project-2o4o</link>
      <guid>https://dev.to/blacviking/starting-a-new-project-2o4o</guid>
      <description>&lt;h2&gt;
  
  
  Starting again ...
&lt;/h2&gt;

&lt;p&gt;So I have decided to write again. The idea is to document how things grow over time; it is mostly a way of keeping track of things for myself. Back when I was into web hacking, I used to write a lot of writeups for the machines I hacked (on the Hack The Box platform), but I eventually left that and started working as a developer.&lt;/p&gt;

&lt;p&gt;This time I am going to write about everything I do in tech. So, with that out of the way, let's start with the topic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project
&lt;/h3&gt;

&lt;p&gt;I recently came across the problem of updating the packages in &lt;code&gt;package.json&lt;/code&gt;. It is quite a challenge when you have many packages and don't know which ones are out of date; the only option is to check each package manually.&lt;/p&gt;

&lt;p&gt;So, to solve this problem, I have decided to build a tool that shows people which packages are out of date and what the latest version of each package is.&lt;/p&gt;

&lt;p&gt;It is going to be mainly a frontend application where the user can upload a &lt;code&gt;package.json&lt;/code&gt; file and the app goes through each package and shows the user its latest version.&lt;/p&gt;

&lt;h3&gt;
  
  
  Next steps ...
&lt;/h3&gt;

&lt;p&gt;The next step is to ACTUALLY build this thing. This is not a big project, but more of a useful tool to have around (for some people at least). Even if someone decides to steal this idea, this post is going to be proof that &lt;code&gt;I came up with it first&lt;/code&gt;; although if someone has already built something like this, then they could say the same thing about me. Anyway, see you folks around.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Will think of the name later on&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Ngrok - From Localhost to Everywhere</title>
      <dc:creator>Abhishek Prajapati</dc:creator>
      <pubDate>Mon, 07 Jun 2021 18:45:35 +0000</pubDate>
      <link>https://dev.to/blacviking/ngrok-from-localhost-to-everywhere-2m2m</link>
      <guid>https://dev.to/blacviking/ngrok-from-localhost-to-everywhere-2m2m</guid>
      <description>&lt;p&gt;Last weekend my team &lt;code&gt;c4t_fl4g.txt&lt;/code&gt; participated in zh3r0 CTF and got 32nd rank overall, it was a huge achievement for our team since all the members are fairly new to the field. During this competition I came across a really cool tool, &lt;code&gt;ngrok&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;My very first impression was: where the hell was this all this time? The basic functionality of the tool is that it forwards your localhost server to the public internet, so you can skip the hassle of trying to host your website on any platform. Just start your server locally, start &lt;code&gt;ngrok&lt;/code&gt; for the same port, and boom, you have a website on the link provided by &lt;code&gt;ngrok&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to set it up
&lt;/h3&gt;

&lt;p&gt;It's a single command setup. First create an account on the &lt;a href="https://ngrok.com/" rel="noopener noreferrer"&gt;official page&lt;/a&gt;. You can download the executable from the official page. Once you have downloaded the executable, just run the below command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ngrok authtoken {your_authtoken}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;You can find your authtoken on your ngrok profile&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Running ngrok
&lt;/h2&gt;

&lt;p&gt;To run ngrok you just need to specify the protocol and the port number&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ngrok {protocl} {port_no}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;protocol ⇒ http, tcp, tls&lt;/p&gt;

&lt;p&gt;To get &lt;code&gt;https&lt;/code&gt; you can use its default port, i.e. 443.&lt;/p&gt;

&lt;p&gt;You can also specify a custom subdomain that you would like for the URL&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ngrok http -subdomain=noice 4444
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to set up authentication for your tunnel then you can do that too.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ngrok http -auth="username:password" 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ngrok also gives you the ability to forward servers that are not hosted locally. For example, if your website is hosted on 192.168.1.1, you can forward that too.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ngrok http 192.168.1.1:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ngrok also provides a web interface at &lt;code&gt;127.0.0.1:4040&lt;/code&gt; where you can see the requests being made to your server. It is quite handy when you want to analyze a request, especially when you are playing Capture the Flag competitions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This tool is fantastic when you just want to share your local project with somebody. It is also quite useful for CTF players as they don't need to have a server of their own when they want to host their exploit.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Don't be dumb on the internet&lt;/em&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>cybersecurity</category>
      <category>todayilearned</category>
    </item>
    <item>
      <title>Bug Bounty Journey</title>
      <dc:creator>Abhishek Prajapati</dc:creator>
      <pubDate>Tue, 18 May 2021 07:10:02 +0000</pubDate>
      <link>https://dev.to/blacviking/bug-bounty-journey-2h12</link>
      <guid>https://dev.to/blacviking/bug-bounty-journey-2h12</guid>
      <description>&lt;p&gt;So I have started doing bug bounty, and I wasn't able to write many blogs daily, looks like consistency took a terrible hit. I got busy with my work and need to invest more time to it, but that didn't stop me from doing bug bounty. It's been really tough so far, I did find something, but they were all just not a security threat. I changed a couple of targets and tried to find some bugs but unsuccessful so far. It's super annoying to not find anything there, but I'm still trying my best to find my first bug.&lt;/p&gt;

&lt;p&gt;I recently started writing write-ups on the CTFs that I participate in. Earlier I used to write on GitHub, but now I have decided to do my write-ups on &lt;code&gt;hashnode&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You can read my write-ups &lt;a href="https://blacviking.hashnode.dev/" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'm also developing a website to search for podcasts, but I'm having a little trouble with that. The Spotify API is really annoying but is also superb.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Don't be dumb on the internet&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Bug Bounty journey</title>
      <dc:creator>Abhishek Prajapati</dc:creator>
      <pubDate>Thu, 08 Apr 2021 18:45:23 +0000</pubDate>
      <link>https://dev.to/blacviking/bug-bounty-journey-4mik</link>
      <guid>https://dev.to/blacviking/bug-bounty-journey-4mik</guid>
      <description>&lt;h3&gt;
  
  
  Day 2
&lt;/h3&gt;

&lt;p&gt;I enumerated my target for subdomains in order to widen my attack surface and found quite a handful of them, but so far I couldn't find any low-hanging bug. I looked at the API and tried making some requests, but got a message saying that I am unauthorized (obviously). I tried a few ways to get past it, but I couldn't. I looked at another target and found something there, but the file which I thought was useful turned out to be empty; looks like I just got trolled by the person who made the website. Well, gonna try something new tomorrow.&lt;/p&gt;

&lt;p&gt;I also found this amazing video, which was recently released and presented at Nahamcon 2021.&lt;br&gt;
Check it out &lt;a href="https://www.youtube.com/watch?v=-PkK9DP5nec" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also found this amazing website where bug hunters can find other hunters to collaborate with. I haven't tried it myself, but it seems like a nice place to meet some bug bounty hunters.&lt;br&gt;
Check it out &lt;a href="https://findhunters.com/" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Don't be dumb on the internet&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Starting Bug bounty</title>
      <dc:creator>Abhishek Prajapati</dc:creator>
      <pubDate>Wed, 07 Apr 2021 17:49:06 +0000</pubDate>
      <link>https://dev.to/blacviking/starting-bug-bounty-g8g</link>
      <guid>https://dev.to/blacviking/starting-bug-bounty-g8g</guid>
      <description>&lt;h3&gt;
  
  
  Starting bug bounty as beginner
&lt;/h3&gt;

&lt;p&gt;I have been practicing cyber security related stuff for a year now.&lt;br&gt;
I initially started learning from a few courses on Udemy and then started doing some machines on Hack The Box for practice. It was really difficult at first, as I had zero knowledge about the tools and the methodology involved. The first 3 months were the most crucial, as I almost wanted to give up because it was so annoyingly difficult, but I somehow managed to pull through and made some real progress. I started practicing on different platforms in order to learn, and participated in CTFs, as they were so much fun (not initially though).&lt;/p&gt;

&lt;p&gt;So now I have decided to start bug bounty. I don't expect to strike gold straight away, but gotta try hard.&lt;/p&gt;

&lt;p&gt;Today is day 1 of my bug bounty journey.&lt;/p&gt;

&lt;p&gt;I found a suitable target and started looking for something interesting, although I was literally sweating since I didn't want to break any laws and get in trouble. It took me a while to learn what programs, scope, and priority mean, but I know them now.&lt;/p&gt;

&lt;p&gt;Haven't found anything interesting but I am gonna keep looking for my first bug.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Don't be dumb on the internet.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
