Mritunjay Singh

Node.js Interview Notes

Topics to Cover

  1. Node.js
  2. REST, gRPC/GraphQL, WebSockets
  3. Redis, SQL, MongoDB, PostgreSQL, DBMS
  4. Web security
  5. Next.js, React, AJAX, DOM, etc.
  6. S3, EC2, Docker, Kubernetes, Terraform, OpenTelemetry
  7. Moderation APIs, vector databases
  8. Projects
  9. Web server vs. app server

1. What is Node.js?

Node.js is a runtime environment. When JavaScript was first launched, it was only for the client side (the browser). That JavaScript has to be converted to machine code, which is the job of a JavaScript engine, and every browser (Chrome, Firefox, etc.) ships its own JS engine to run JS code.

"The best thing about Node.js is its single-threaded event loop architecture. In traditional architectures each request is processed in its own thread, or a new thread is created per request, which leads to huge memory consumption and the overhead of context switching. Even though thread context switches are less costly than process context switches, they still need to store stack pointers, registers, etc., and these operations happen on each blocking request. Node.js, by contrast, is single-threaded and non-blocking: it doesn't block its main thread. I/O operations are processed in the background and their results come back to the main thread through callbacks."

So, to run JavaScript outside the browser we need a runtime environment, and that runtime is Node.js: it provides the capability to run JavaScript outside the browser.

2. What is Express.js?

Express.js is a web framework built on top of Node.js that simplifies building web applications and APIs. It adds a layer of convenience with features like routing, middleware, and request/response handling.

3. What is the difference between a framework and a library?

The fundamental difference is about control. With a library, I'm in control - I call the library functions when I need them. For example, when I use Axios, I decide when to make HTTP requests and call axios.get().

With a framework, the framework is in control - it defines the structure and calls my code. For example, with Express.js, the framework handles the HTTP server, routing, and middleware pipeline, and calls my route handlers when requests come in. This is called inversion of control.

4. What is the Event Loop in Node.js?

The Event Loop in Node.js is the mechanism that allows Node.js to perform non-blocking I/O even though JavaScript is single-threaded. Node.js handles I/O tasks (file system access, database queries, HTTP requests) asynchronously: instead of waiting for an operation to complete, it registers a callback and moves on to the next task. The event loop continuously checks a queue of completed events and executes their corresponding callbacks.
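A minimal illustration of this behaviour, using Node's built-in fs module (the file being read is just this script itself):

const fs = require("fs");

console.log("1. start");

// Non-blocking: Node hands the read off to the background and moves on.
fs.readFile(__filename, "utf8", (err, data) => {
  if (err) throw err;
  console.log("3. read", data.length, "characters via the event loop callback");
});

console.log("2. end of synchronous code");
// Output order: 1, 2, 3 (the main thread never waits for the file read)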

5. Explain callbacks, promises, and async/await

A callback is a function that we pass as an argument to another function; it gets executed after some operation completes. The problem is that nesting many callbacks quickly leads to deeply indented, hard-to-read code ("callback hell").

To solve this problem, Promises were introduced. A Promise is an object that represents a future value. It has 3 states: Pending, Fulfilled, and Rejected. Promise chaining solved the callback-hell problem, and error handling also became better with the .catch() method.

async/await is syntactic sugar over Promises. It makes asynchronous code look synchronous, which improves readability significantly.

Callback example (nested callback):

app.get("/users", (req, res) => {
  db.collection("users").find().toArray((err, users) => {
    if (err) return res.status(500).send("DB error");
    res.json(users);
  });
});


Promise example:

app.get("/users", (req, res) => {
  db.collection("users").find().toArray()
    .then(users => res.json(users))
    .catch(err => res.status(500).send("DB error"));
});

Async/await example:

app.get("/users", async (req, res) => {
  try {
    const users = await db.collection("users").find().toArray();
    res.json(users);
  } catch (error) {
    res.status(500).send("DB error");
  }
});

6. What is closure in JavaScript?

A closure is when a function remembers variables from its outer scope, even after the outer function has finished executing.

function outer() {
  let counter = 0;

  function inner() {
    counter++; 
    return counter;
  }

  return inner;
}

const increment = outer();

console.log(increment()); // 1
console.log(increment()); // 2
console.log(increment()); // 3

7. What is middleware?

Middleware in Express.js consists of functions that sit between the incoming request and the outgoing response. They have access to three key things: the request object (req), the response object (res), and the next function, which passes control to the next middleware in the chain.

Middleware is used at several levels:

If you want to run something on all endpoints, you use application-level middleware, like app.use(express.json()), which parses incoming JSON request bodies.

When you want to run logic only on particular routes, you use router-level middleware. For example, for authentication you write an auth function and pass it as a parameter on your login/registration routes.

Then there is error-handling middleware, which is special because it takes 4 parameters: err, req, res, next. It handles any errors that occur during request processing - database errors, validation errors, or any unexpected issues. When any middleware calls next(error), control jumps directly to this error middleware.

One important thing: the order in which middleware is registered decides the order of execution (see the sketch below).
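A small sketch of all three levels (assuming express is installed; the route name and the auth check are made up for illustration):

const express = require("express");
const app = express();

// Application-level middleware: runs for every request
app.use(express.json());

// Router-level middleware: attached only to specific routes
function requireAuth(req, res, next) {
  if (!req.headers.authorization) {
    return res.status(401).json({ error: "Unauthorized" });
  }
  next(); // pass control to the next handler
}

app.get("/profile", requireAuth, (req, res) => {
  res.json({ message: "Protected data" });
});

// Error-handling middleware: note the 4 parameters
app.use((err, req, res, next) => {
  console.error(err);
  res.status(500).json({ error: "Something went wrong" });
});

app.listen(3000);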

8. How do you connect to databases in Node.js?

For database connections in Node.js, you can use raw drivers for maximum control and performance, or ORMs (for SQL databases) / ODMs (for NoSQL databases like MongoDB) for better developer experience and rapid development.

For example, when I use MongoDB as a database, I use Mongoose ODM, and when I use PostgreSQL or SQL databases, I use Sequelize ORM.

What they do is instead of directly talking to the database and writing full queries, they work as a translator - we use JavaScript objects and they convert it to SQL queries and return results as JavaScript objects. This makes development faster and code more maintainable, though with slight performance overhead.
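For example, a minimal Mongoose sketch (the connection string, schema, and query are placeholders for illustration):

const mongoose = require("mongoose");

// Define a schema and model instead of writing raw queries
const User = mongoose.model("User", new mongoose.Schema({
  name: String,
  email: { type: String, required: true, unique: true },
}));

async function main() {
  // Placeholder connection string
  await mongoose.connect("mongodb://localhost:27017/myapp");

  // Mongoose translates this into a MongoDB query and returns JS objects
  const users = await User.find({ name: "Alice" });
  console.log(users);
}

main().catch(console.error);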

9. How do you handle command line arguments in Node.js?

"Node.js provides process.argv array that contains command line arguments. The first two elements are the Node.js path and script path, and the rest are actual arguments. For basic use, I access process.argv[2], process.argv[3], etc. For more complex argument parsing, I use libraries like yargs or commander which provide features like named arguments, flags, and validation."

10. What are streams in Node.js?

"Streams are objects that handle reading or writing data piece by piece instead of loading everything into memory at once. There are four types: Readable (reading data), Writable (writing data), Duplex (both reading and writing), and Transform (modifying data while reading/writing). Streams are memory-efficient for handling large files and can be chained using pipes. They're perfect for file processing, HTTP requests/responses, and real-time data processing."

11. How do you handle file uploads in Node.js?

"I use the multer middleware which is built on top of busboy for handling multipart/form-data. Multer provides options for destination folder, filename customization, file size limits, and file type filtering. I can configure it for single file uploads with upload.single(), multiple files with upload.array(), or mixed fields with upload.fields(). For cloud storage, I integrate it with services like AWS S3 or Cloudinary."

12. How do you implement input validation?

"I implement input validation using libraries like Joi or express-validator. With Joi, I define validation schemas that specify data types, required fields, string lengths, and custom validation rules. I create middleware functions that validate request data before it reaches route handlers. If validation fails, I return appropriate error responses with detailed error messages. I also sanitize input data to prevent XSS and injection attacks."

13. How do you implement event-driven architecture?

Event-Driven Architecture is a design pattern where applications communicate through events instead of direct API calls. Components publish events when something important happens, and other components subscribe and react to those events asynchronously.

There are three main components:

1. Event Producer/Publisher:

Creates and publishes events when business logic changes occur
Example: OrderService publishes 'OrderCreated' event after saving order
Doesn't know who will consume the event - complete decoupling

2. Event Broker/Message Bus:

Central component that receives, stores, and routes events
Popular options: Kafka, RabbitMQ, AWS SQS
Handles delivery guarantees, persistence, and scaling
Acts like a post office - receives messages and delivers to subscribers

3. Event Consumer/Subscriber:

Services that listen to specific events and react accordingly
Example: EmailService subscribes to 'OrderCreated' to send confirmation emails
Process events asynchronously and independently

Let me explain the complete flow with an e-commerce example:

Step 1: User places order → OrderService processes the request → saves the order in the database

Step 2: OrderService detects a significant change → creates an OrderCreated event with the order details → publishes it to the 'order-events' topic in the broker

Step 3: Event broker receives the event → persists it for reliability → identifies all subscribers to the 'order-events' topic

Step 4: Broker delivers the event to multiple consumers simultaneously:

  • EmailService receives the event → sends a confirmation email
  • InventoryService receives the event → updates stock levels
  • PaymentService receives the event → processes payment
  • AnalyticsService receives the event → updates metrics

Step 5: Each consumer processes the event independently → sends an acknowledgment back to the broker → the broker marks the event as processed
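A minimal in-process sketch of this shape using Node's built-in EventEmitter as a stand-in for a real broker like Kafka or RabbitMQ (service names and the order payload are made up):

const { EventEmitter } = require("events");

// In production this would be Kafka/RabbitMQ; here an in-process
// EventEmitter stands in to show the publish/subscribe shape.
const broker = new EventEmitter();

// Consumers subscribe independently
broker.on("OrderCreated", (order) => {
  console.log(`EmailService: sending confirmation for order ${order.id}`);
});
broker.on("OrderCreated", (order) => {
  console.log(`InventoryService: reserving stock for order ${order.id}`);
});

// Producer publishes the event after saving the order;
// it doesn't know (or care) who is listening.
function createOrder(order) {
  // ...save the order to the database...
  broker.emit("OrderCreated", order);
}

createOrder({ id: 101, item: "keyboard" });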

14. DBMS vs RDBMS

DBMS (Database Management System)

Definition: A software that allows creation, management, and manipulation of databases.

Data Storage: Can store data in files, key-value pairs, documents, or tables.

Structure: Doesn't always enforce relationships between data.

Example Systems:

MongoDB (document-based)

Redis (key-value store)

Neo4j (graph DBMS)

RDBMS (Relational Database Management System)

Definition: A type of DBMS based on the relational model (E. F. Codd).

Data Storage: Data is stored in tables (rows & columns).

Structure: Enforces relationships between tables using primary keys & foreign keys.

Supports SQL (Structured Query Language).

Example Systems:

MySQL

PostgreSQL

Oracle

SQL Server




15. Candidate key vs Super key vs Primary key


1. Super Key

  • A set of one or more attributes that can uniquely identify a row in a table.
  • May contain extra attributes (not minimal).
  • Example:
  Table: Students(student_id, email, phone)  
  • {student_id} → uniquely identifies a student ✅
  • {email} → uniquely identifies a student ✅
  • {student_id, phone} → also uniquely identifies a student (extra column, but still a super key)

👉 Super key = any unique identifier (not necessarily minimal).


2. Candidate Key

  • A minimal super key → a super key with no redundant attributes.
  • Each candidate key is a potential choice for primary key.
  • Example:

    • {student_id} ✅ minimal
    • {email} ✅ minimal
    • {student_id, phone} ❌ not minimal (since student_id alone is enough)

👉 Candidate key = minimal unique identifier.


3. Primary Key

  • One chosen candidate key to uniquely identify rows.
  • Only one primary key per table (though it may be composite, i.e., made of multiple columns).
  • Example:

    • If we choose {student_id} → that becomes the primary key.

👉 Primary key = the selected candidate key.


Visual Hierarchy

Super Keys  →  Candidate Keys  →  Primary Key
(many)         (minimal ones)      (one chosen)

Example Table

student_id | email           | phone
-----------|-----------------|------
1          | alice@gmail.com | 12345
2          | bob@gmail.com   | 67890
  • Super Keys: {student_id}, {email}, {phone}, {student_id, email}, {student_id, phone}, {email, phone}, etc.
  • Candidate Keys: {student_id}, {email}, {phone}
  • Primary Key: Suppose we choose {student_id}

✅ Summary in one line:

  • Super Key: Any set of columns that uniquely identifies a row (may include extra attributes).
  • Candidate Key: A minimal super key (no extra attributes).
  • Primary Key: The chosen candidate key for the table (see the SQL sketch below).
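In SQL terms, a minimal sketch: the chosen candidate key becomes the PRIMARY KEY and the remaining candidate keys can be declared UNIQUE.

CREATE TABLE Students (
    student_id INT PRIMARY KEY,              -- chosen candidate key
    email      VARCHAR(100) UNIQUE NOT NULL, -- remaining candidate key
    phone      VARCHAR(15)  UNIQUE           -- remaining candidate key
);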

16. Normalization


What is Normalization?

👉 Normalization is the process of organizing data in a database to:

  • reduce data redundancy (duplicate data), and
  • improve data integrity (accuracy and consistency).

It involves breaking a large table into smaller, related tables and defining relationships between them.


Why is Normalization Important in DBMS?

  1. Removes Redundancy → avoids storing the same data in multiple places.
  • Example: A student's course name is stored in one place instead of being repeated in every record.
  2. Improves Data Integrity → ensures data is consistent and correct.
  • Example: If a course name changes, update it in one table only.
  3. Easier Maintenance → smaller, well-structured tables are easier to update, insert into, or delete from.

  4. Prevents Anomalies:

  • Insertion anomaly → can't insert data because other unrelated data is missing.
  • Update anomaly → updating in one place but forgetting in another causes inconsistency.
  • Deletion anomaly → deleting one record causes unintended loss of related data.
  5. Efficient Storage → saves space by avoiding duplicate storage.

Forms of Normalization (Normal Forms)

Each step removes a type of redundancy/anomaly:

  1. 1NF (First Normal Form) → no repeating groups, atomic values only.
  2. 2NF (Second Normal Form) → 1NF + no partial dependency (applies to composite keys).
  3. 3NF (Third Normal Form) → 2NF + no transitive dependency.
  4. BCNF (Boyce-Codd Normal Form) → a stronger version of 3NF.
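A small sketch of what this looks like in practice (table and column names are assumed for illustration): a wide table that repeats course details for every enrolled student is split into smaller related tables.

-- Unnormalized: course_name is repeated for every enrolled student
-- Enrollments(student_id, student_name, course_id, course_name)

-- Normalized: each fact is stored once
CREATE TABLE Students (
    student_id INT PRIMARY KEY,
    name       VARCHAR(50)
);

CREATE TABLE Courses (
    course_id   VARCHAR(10) PRIMARY KEY,
    course_name VARCHAR(50)
);

CREATE TABLE Enrollment (
    student_id INT REFERENCES Students(student_id),
    course_id  VARCHAR(10) REFERENCES Courses(course_id),
    PRIMARY KEY (student_id, course_id)
);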


Does Denormalization Improve Query Speed?

👉 Yes, sometimes: denormalization can improve query speed, but it comes with trade-offs.


How Denormalization Works

  • In normalization, data is spread across multiple tables (to reduce redundancy).
  • In denormalization, we merge some of these tables or duplicate some data to reduce the need for costly JOIN operations.

Why It Improves Query Speed

  1. Fewer Joins → queries don't need to fetch from multiple tables.
  2. Faster Reads → since related data is pre-combined, SELECT queries can run faster.
  3. Optimized for Analytics → reporting & BI systems often denormalize data into fact tables.

Trade-offs of Denormalization

  • Increased Redundancy → the same data is stored in multiple places.
  • Update Anomalies → updating one copy but forgetting others can cause inconsistency.
  • More Storage → duplicates use extra disk space.
  • Slower Writes → inserts/updates/deletes become more complex since data exists in multiple places.

Example

Normalized (Slower Read, Faster Write)

  • Students table
  • Courses table
  • Enrollment table 👉 Need a JOIN to find Alice's course.

Denormalized (Faster Read, Slower Write)

student_id | student_name | course_id | course_name
-----------|--------------|-----------|------------
1          | Alice        | C101      | DBMS
2          | Bob          | C102      | OOPS

👉 Single query, no JOINs → faster reads.
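The same question ("which course is Alice enrolled in?") in both designs; the denormalized table name StudentCourses is assumed for illustration:

-- Normalized: needs JOINs across three tables
SELECT s.name, c.course_name
FROM Students s
JOIN Enrollment e ON e.student_id = s.student_id
JOIN Courses c    ON c.course_id = e.course_id
WHERE s.name = 'Alice';

-- Denormalized: one wide table, no JOINs
SELECT student_name, course_name
FROM StudentCourses
WHERE student_name = 'Alice';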



Q: What is the difference between OLTP and OLAP?

Answer

OLTP stands for Online Transaction Processing. It is used for day-to-day operations like banking or e-commerce. It handles a large number of short transactions (insert, update, delete). OLTP databases are usually normalized to maintain consistency and avoid redundancy.

OLAP stands for Online Analytical Processing. It is used for analysis and reporting, like sales trends, forecasting, or dashboards. OLAP systems mainly run complex read-only queries on historical data. Databases here are usually denormalized (star schema, snowflake schema) for faster query performance.


"Types of SQL" is a very common interview question. SQL is divided into categories based on what the commands do.


Types of SQL Commands

1. DDL (Data Definition Language)

👉 Used to define and manage the structure of the database (tables, schemas).

  • Commands:

    • CREATE → create database objects (tables, views, etc.)
    • ALTER → modify the structure of objects
    • DROP → delete objects
    • TRUNCATE → remove all records (reset the table)
  • Example:

  CREATE TABLE Students (
      student_id INT PRIMARY KEY,
      name VARCHAR(50)
  );

2. DML (Data Manipulation Language)

👉 Used to manipulate data inside tables.

  • Commands:

    • INSERT → add records
    • UPDATE → modify records
    • DELETE → remove records
  • Example:

  INSERT INTO Students VALUES (1, 'Alice');
  UPDATE Students SET name = 'Bob' WHERE student_id = 1;

3. DQL (Data Query Language)

👉 Used to query data.

  • Command:

    • SELECT → retrieve data
  • Example:

  SELECT name FROM Students WHERE student_id = 1;

4. DCL (Data Control Language)

👉 Used to control access/permissions.

  • Commands:

    • GRANT → give permissions
    • REVOKE → remove permissions
  • Example:

  GRANT SELECT ON Students TO user1;
  REVOKE SELECT ON Students FROM user1;

5. TCL (Transaction Control Language)

👉 Used to manage transactions in a database.

  • Commands:

    • COMMIT → save changes
    • ROLLBACK → undo changes
    • SAVEPOINT → set a checkpoint within a transaction
  • Example:

  BEGIN;
  UPDATE Students SET name = 'Charlie' WHERE student_id = 1;
  ROLLBACK;  -- undo change

✅ Quick Interview Summary

  • DDL → Structure (CREATE, ALTER, DROP, TRUNCATE)
  • DML → Data (INSERT, UPDATE, DELETE)
  • DQL → Query (SELECT)
  • DCL → Permissions (GRANT, REVOKE)
  • TCL → Transactions (COMMIT, ROLLBACK, SAVEPOINT)


DELETE vs TRUNCATE vs DROP is another favorite DBMS/SQL interview question. Let's break it down clearly:


DELETE vs TRUNCATE vs DROP

1. DELETE

  • Purpose: Removes some or all rows from a table.
  • Type: DML (Data Manipulation Language).
  • WHERE Clause: ✅ Yes, can delete specific rows.
  • Rollback: ✅ Yes, changes can be rolled back (if inside a transaction).
  • Table Structure: Remains intact (only data is deleted).
  • Speed: Slower (logs each row deletion).

Example:

DELETE FROM Students WHERE student_id = 1;  -- deletes one row
DELETE FROM Students;  -- deletes all rows (but table remains)

2. TRUNCATE

  • Purpose: Removes all rows from a table.
  • Type: DDL (Data Definition Language).
  • WHERE Clause: ❌ Not allowed (removes everything).
  • Rollback: ⚠️ Depends on the DBMS → in some (like Oracle) it can't be rolled back; in others (like SQL Server inside a transaction) it can.
  • Table Structure: Remains intact.
  • Speed: Faster (deletes in bulk, minimal logging).
  • Resets AUTO_INCREMENT counters (if any).

Example:

TRUNCATE TABLE Students;  -- deletes all rows, resets identity

3. DROP

  • Purpose: Deletes the entire table (structure + data).
  • Type: DDL (Data Definition Language).
  • WHERE Clause: ❌ Not applicable.
  • Rollback: ❌ Cannot be rolled back (the table is gone).
  • Table Structure: Removed completely from the database.
  • Speed: Fastest (removes definition + data).

Example:

DROP TABLE Students;  -- deletes the table and its data permanently

✅ Quick Comparison Table

Feature        | DELETE              | TRUNCATE                | DROP
---------------|---------------------|-------------------------|----------------------------
Type           | DML                 | DDL                     | DDL
Removes        | Rows (selected/all) | All rows                | Whole table (data + schema)
WHERE          | ✅ Yes              | ❌ No                   | ❌ No
Rollback       | ✅ Yes              | ⚠️ Depends (usually No) | ❌ No
Structure      | Remains intact      | Remains intact          | Removed completely
Speed          | Slow (row by row)   | Faster (bulk)           | Fastest
Resets Auto ID | ❌ No               | ✅ Yes                  | ❌ Not applicable

✅ Interview one-liner answer:

  • DELETE → removes rows (can filter with WHERE; rollback possible).
  • TRUNCATE → removes all rows, keeps the table structure, is faster, and resets identity.
  • DROP → removes the entire table (structure + data).

Joins are one of the most commonly asked topics in DBMS/SQL interviews.


What is a Join?

A JOIN in SQL is used to combine rows from two or more tables based on a related column (usually a foreign key ↔ primary key relationship).

👉 Joins allow you to query data that is spread across multiple tables.


Types of Joins

1. INNER JOIN

  • Returns only the rows that have matching values in both tables.
SELECT s.student_id, s.name, c.course_name
FROM Students s
INNER JOIN Courses c
ON s.course_id = c.course_id;

✅ Only students enrolled in a valid course will be shown.


2. LEFT JOIN (or LEFT OUTER JOIN)

  • Returns all rows from the left table + matching rows from the right table.
  • If no match, NULL is returned for the right table columns.
SELECT s.student_id, s.name, c.course_name
FROM Students s
LEFT JOIN Courses c
ON s.course_id = c.course_id;

✅ Shows all students, even those who don't have a course assigned (course_name = NULL).


3. RIGHT JOIN (or RIGHT OUTER JOIN)

  • Returns all rows from the right table + matching rows from the left table.
SELECT s.student_id, s.name, c.course_name
FROM Students s
RIGHT JOIN Courses c
ON s.course_id = c.course_id;

✅ Shows all courses, even those with no students enrolled (student = NULL).


4. FULL JOIN (or FULL OUTER JOIN)

  • Returns all rows when there is a match in either left or right table.
  • If no match, fills with NULL.
SELECT s.student_id, s.name, c.course_name
FROM Students s
FULL OUTER JOIN Courses c
ON s.course_id = c.course_id;

✅ Shows all students and all courses, matching where possible, NULL otherwise.


5. CROSS JOIN

  • Returns the Cartesian product of both tables (every row of the left table × every row of the right).
SELECT s.name, c.course_name
FROM Students s
CROSS JOIN Courses c;

✅ If there are 10 students and 5 courses → 50 rows.


6. SELF JOIN

  • A table joins with itself (useful for hierarchical data, e.g., employees with managers).
SELECT e1.name AS Employee, e2.name AS Manager
FROM Employees e1
LEFT JOIN Employees e2
ON e1.manager_id = e2.employee_id;

Redis Interview Questions - Simple Answers

Q1: What is Redis?

Simple Answer: Redis is a super fast in-memory database that stores data in RAM instead of on the hard disk. It is used as a cache, a database, and a message queue. Data is stored as key-value pairs, e.g. "user:123" -> "john_doe". Redis stands for Remote Dictionary Server. Performance is extremely high because RAM is far faster than disk (roughly 1000x).

Q2: Is Redis just a cache?

Simple Answer: No! Redis is not just a cache - it is much more powerful than one. A plain cache only stores simple key-value pairs, while Redis also offers:

  • Support for multiple data types (strings, lists, sets, hashes, sorted sets)
  • Persistence - data can be saved to disk permanently
  • A pub/sub messaging system (like a WhatsApp broadcast)
  • Master-slave replication for backup
  • The ability to run Lua scripts

Q3: Does Redis persist data?

Simple Answer: Yes, but it is not completely guaranteed. Redis persists data in two ways:

  1. Snapshots (RDB) - takes a complete backup of memory every so often
  2. AOF - logs every write command to a file

The problem is that if Redis crashes between snapshots, data written after the last snapshot can be lost. This is a trade-off: perfect safety is compromised for speed. It does not offer PostgreSQL-level durability.

Q4: What's the advantage of Redis vs using memory?

Simple Answer: Local memory is faster, but Redis has its own advantages:

  • Shared Access: Multiple applications/servers can access the same data at the same time
  • Memory Efficiency: In languages like Java/Node.js a large heap slows down garbage collection; Redis handles the memory efficiently in a separate process
  • Persistence: Data can be recovered after a crash
  • Features: Plain memory only lets you store variables; Redis gives you lists, sets, pub/sub, atomic operations, and more
  • High Availability: Master-slave replication keeps a backup ready

Q5: When to use Redis Lists?

Simple Answer: Use Lists when you need ordered data that is added to or removed from the first/last position. A List behaves exactly like an array, but distributed. Perfect use cases:

  • Job Queues: queueing background tasks (email sending, image processing)
  • Activity Logs: tracking recent activities
  • Message Buffers: storing recent messages in chat applications
  • LIFO/FIFO operations: implementing a stack or a queue. Commands: LPUSH/RPUSH (add), LPOP/RPOP (remove), LRANGE (get a range) - see the sketch below.
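A small job-queue sketch, assuming the ioredis client (the queue name and payload are made up):

const Redis = require("ioredis");
const redis = new Redis(); // defaults to localhost:6379

async function demo() {
  // Producer: push jobs onto the right end of the list
  await redis.rpush("email-queue", JSON.stringify({ to: "a@b.com" }));

  // Consumer: pop jobs from the left end (FIFO)
  const job = await redis.lpop("email-queue");
  if (job) console.log("processing", JSON.parse(job));

  // Peek at up to 10 queued items without removing them
  console.log(await redis.lrange("email-queue", 0, 9));
}

demo().finally(() => redis.quit());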

Q6: When to use Redis Sets?

Simple Answer: Use Sets when you need to store unique values with fast lookups. A Set automatically removes duplicates. Best use cases:

  • Unique Visitors: tracking who visited the website today
  • Tags System: article tags, user interests
  • Access Control: user permissions (e.g. checking membership in an admin_users set)
  • Set Operations: finding common friends (intersection) or all friends (union). Membership checks are O(1); the same operation on a List is O(n). See the sketch below.
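A small sketch, again assuming ioredis (key names are made up):

const Redis = require("ioredis");
const redis = new Redis();

async function demo() {
  // Track today's unique visitors; duplicates are ignored automatically
  await redis.sadd("visitors:2024-01-01", "user:1", "user:2", "user:1");

  console.log(await redis.scard("visitors:2024-01-01"));               // 2
  console.log(await redis.sismember("visitors:2024-01-01", "user:1")); // 1 (O(1) lookup)

  // Common friends = set intersection
  await redis.sadd("friends:alice", "bob", "carol");
  await redis.sadd("friends:dave", "carol", "erin");
  console.log(await redis.sinter("friends:alice", "friends:dave"));    // [ 'carol' ]
}

demo().finally(() => redis.quit());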

Q7: When to use Redis over MongoDB?

Simple Answer: It depends on the use case, but Redis is the better choice when:

Choose Redis when:

  • Caching - MongoDB is far too slow for caching
  • Extreme performance is needed - Redis is memory-based, so it is extremely fast
  • The data is simple - no complex relationships required
  • You have time to design - Redis data structures need to be designed carefully up front

MongoDB is better when:

  • You need complex, SQL-like queries
  • You need easy horizontal scaling
  • You want to store JSON/BSON documents
  • You need ad-hoc queries created at runtime

Q8: How are Redis pipelining and transaction different?

Simple Answer:

Pipelining:

  • Sends multiple commands over the network at once (batching)
  • Saves network round-trips
  • Commands are not atomic - other clients' commands can slip in between
  • It is purely a performance optimization

Transactions (MULTI/EXEC):

  • Execute commands atomically
  • Guarantee that nothing interferes in between
  • Either all commands run or none do
  • Used for data consistency (see the sketch below)
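A minimal sketch of both, assuming ioredis (keys and amounts are made up):

const Redis = require("ioredis");
const redis = new Redis();

async function demo() {
  // Pipelining: one network round-trip, but NOT atomic -
  // other clients' commands may interleave between these.
  await redis.pipeline()
    .set("counter", 1)
    .incr("counter")
    .get("counter")
    .exec();

  // Transaction: queued with MULTI and executed atomically on EXEC -
  // no other command runs in between.
  await redis.multi()
    .decrby("account:alice", 100)
    .incrby("account:bob", 100)
    .exec();
}

demo().finally(() => redis.quit());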

Q9: Does Redis support transactions?

Simple Answer: Yes! Redis has transactions, but they are a bit different from SQL transactions.

Commands: MULTI (start), EXEC (execute), DISCARD (cancel), WATCH (conditional)

Guarantees:

  1. Isolation: All commands execute serially, with no interference in between
  2. Atomicity: Either all commands run or none of them do

Example: A bank transfer - the balance debit and credit must both happen atomically, so a crash in the middle never leaves a partial state.

Q10: How does Redis handle multiple clients?

Simple Answer: Redis is single-threaded with an event loop (like Node.js).

How it works:

  • Only one command executes at a time
  • Network I/O is non-blocking, so many clients can stay connected at once
  • Commands wait in a queue and are processed one by one
  • Atomicity comes for free because there is no parallel execution

Advantage: No locks, no race conditions, no complex synchronization needed. Simple and fast!

Q11: Difference between Redis replication and sharding?

Simple Answer:

  • Replication = the same data copied to multiple servers (for backup/availability)
  • Sharding = different data on different servers (for performance/capacity)

Q12: When to use Redis Hashes?

Simple Answer: Use Hashes when you want to store multiple field-value pairs under a single key - for example a user profile, where user:123 holds name, age, and email.

Q13: Use case for Sorted Set?

Simple Answer: Leaderboards! A Sorted Set stores members with a score and keeps them automatically sorted. Perfect for gaming scores and top-performer lists - see the sketch below.
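A minimal leaderboard sketch, assuming ioredis (the key and player names are made up):

const Redis = require("ioredis");
const redis = new Redis();

async function demo() {
  // Add players with their scores (score first, then member)
  await redis.zadd("leaderboard", 150, "alice", 320, "bob", 90, "carol");

  // Top 3 players, highest score first
  console.log(await redis.zrevrange("leaderboard", 0, 2, "WITHSCORES"));

  // A single player's rank (0-based, highest first)
  console.log(await redis.zrevrank("leaderboard", "alice"));
}

demo().finally(() => redis.quit());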

Q14: What is Pipelining and when to use?

Simple Answer: Sending multiple Redis commands together instead of one by one. Use it for bulk operations to save network round-trip time.

Q16: How to use multiple CPU cores?

Simple Answer: Redis is single-threaded, so it uses only one core. To use multiple cores you have to run multiple Redis instances on the same machine.

Q18: Why no rollbacks in Redis?

Simple Answer: Redis has no rollbacks because:

  • Command failures only happen due to programming errors
  • Rollback functionality would slow Redis down
  • It was left out to keep Redis simple and fast

Q19: What is AOF persistence?

Simple Answer: AOF means Append Only File. Every write operation is logged to a file, and on server restart the log is replayed to recover the data.

Q20: Check if key exists in Redis list?

Simple Answer: There is no direct way. The options are:

  • Use LREM to remove the element; if something was removed, it existed
  • Maintain a separate SET alongside the list
  • Loop over the whole list and check (slow)

Q21: Redis underlying data structures?

Simple Answer:

  • Strings = Dynamic C strings
  • Lists = Linked lists
  • Sets = Hash tables
  • Sorted Sets = Skip lists
  • Hashes = Hash tables

Q23: What if Redis runs out of memory?

Simple Answer:

  • Linux may kill the process (OOM killer)
  • Redis may crash
  • Or performance degrades badly
  • Solution: set maxmemory in the config (see the snippet below)
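A typical redis.conf snippet for this (values are illustrative):

# Cap memory usage at 256 MB
maxmemory 256mb

# When the cap is hit, evict least-recently-used keys instead of failing writes
maxmemory-policy allkeys-lru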

Q25: Is Redis durable (ACID)?

Simple Answer: No, Redis is not durable by default. It sacrifices durability for speed. In AOF mode it can become somewhat more durable, but at a performance cost.

