10 Backend Terms Every Frontend Developer Should Know

When a backend dev gives their update in the daily standup meeting, most frontend devs have no idea what they're talking about. They use a lot of jargon that sounds like gibberish to us. But not anymore! 😎

This post aims to tackle some of that jargon. Why should you learn it, you ask? First, you'll start to understand your product's architecture a bit better. Second, don't limit yourself to frontend: being an engineer means having a well-rounded understanding of other domains too.

Remember, backend devs aren’t trying to confuse anyone — they’re just speaking the language of their domain. Let’s break it down into simpler terms and demystify concepts like rate limiting, load balancing, proxies, and many more.

Ready? Let’s dive in! 💪

1. Rate Limiting

Rate limiting is a way to control the number of requests a client (like a user, app, or system) can make to a server within a specific period. Think of it as a traffic cop ensuring no one overuses or abuses the server's resources.

For example, a server may allow 100 requests per minute from a single client. If the client exceeds this limit, the server will reject additional requests, often with a 429 (Too Many Requests) response.

Usage of Rate Limiting:

  1. Prevent Abuse and Overload: Rate limiting ensures no single user or system overwhelms the server by sending excessive requests, which could crash the server.
  2. Control Costs: Helps avoid unexpectedly high resource usage (like bandwidth or compute power), which can be costly. For instance, limiting API calls to a free-tier user's account to avoid excessive use without payment.
  3. Enhance Security: Helps mitigate attacks like DDoS (Distributed Denial-of-Service) by blocking excessive requests from attackers.
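
Here's a minimal sketch of rate limiting in TypeScript, using a fixed-window counter (one of several strategies; the limit, window size, and client ID below are made up for illustration):

```typescript
// Fixed-window rate limiter: allow at most LIMIT requests
// per client per WINDOW_MS milliseconds.
const WINDOW_MS = 60_000; // 1-minute window
const LIMIT = 100;        // 100 requests per window

type Window = { start: number; count: number };
const windows = new Map<string, Window>();

function isAllowed(clientId: string, now = Date.now()): boolean {
  const w = windows.get(clientId);
  if (!w || now - w.start >= WINDOW_MS) {
    // First request in a fresh window for this client.
    windows.set(clientId, { start: now, count: 1 });
    return true;
  }
  w.count += 1;
  return w.count <= LIMIT; // over the limit -> respond with HTTP 429
}

// The 101st request inside one minute gets rejected.
for (let i = 1; i <= 101; i++) {
  if (!isAllowed("user-42")) console.log(`Request ${i}: 429 Too Many Requests`);
}
```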

2. Load Balancing

Load balancing is a way to distribute incoming requests or traffic across multiple servers to ensure no single server gets overwhelmed. Think of it as a traffic cop directing cars (requests) to different lanes (servers) to keep everything running smoothly.

For example, social media platforms like Instagram have millions of users accessing their servers simultaneously. Instead of sending all of those requests to one server, a load balancer spreads them across several servers.

Usage of Load Balancing:

  1. Avoid Overloading Servers: If all traffic goes to one server, it might crash due to excessive load. Load balancing prevents this by distributing requests evenly.
  2. Improve Performance and Speed: By spreading traffic, each server handles a manageable number of requests, leading to faster responses for users.
  3. Scale Applications: As traffic increases, you can add more servers to the pool, and the load balancer will automatically start distributing traffic to the new servers.
  4. Ensure High Availability: If one server fails, the load balancer redirects traffic to the other working servers, ensuring the website or service stays online.
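
To make this concrete, here's a tiny round-robin balancer sketch in TypeScript (round-robin is just one strategy; real load balancers also weigh server health and capacity, and the server addresses below are invented):

```typescript
// Round-robin: hand each incoming request to the next server
// in the pool, wrapping around at the end.
const servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]; // hypothetical pool
let next = 0;

function pickServer(): string {
  const server = servers[next];
  next = (next + 1) % servers.length;
  return server;
}

// Six requests get spread evenly: .1, .2, .3, .1, .2, .3
for (let i = 1; i <= 6; i++) {
  console.log(`Request ${i} -> ${pickServer()}`);
}
```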

3. Caching

Caching is a way to store frequently accessed data temporarily so that it can be retrieved quickly without repeatedly fetching it from the original source.

Think of it like keeping a frequently used recipe on your kitchen counter. Instead of searching for it in a cookbook every time you need it, you grab it from the counter, saving time and effort.

Most backend systems use caching to store frequently used database query results, API responses, or pre-rendered web pages. Redis is a popular choice among backend developers for caching data.

Usage of Caching:

  1. Speed Up Performance: Fetching data from a cache is much faster than getting it from the original source (e.g., a database or server).
  2. Reduce Load on Servers: By serving data from the cache, you reduce the number of requests to the backend, preventing it from being overwhelmed.
  3. Handle High Traffic: During peak usage, caching helps ensure the system can handle many users without slowing down.
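
A common pattern here is "cache-aside": check the cache first, and only hit the slow source on a miss. A minimal TypeScript sketch (the in-memory Map stands in for something like Redis, and queryDatabase is a made-up slow call):

```typescript
// Cache-aside with a TTL: serve from cache when possible,
// otherwise fetch from the source and remember the result.
const TTL_MS = 30_000;
const cache = new Map<string, { value: string; expires: number }>();

// Stand-in for a slow database query (hypothetical).
async function queryDatabase(userId: string): Promise<string> {
  await new Promise((r) => setTimeout(r, 200)); // simulate latency
  return `profile-of-${userId}`;
}

async function getUserProfile(userId: string): Promise<string> {
  const hit = cache.get(userId);
  if (hit && hit.expires > Date.now()) return hit.value; // fast path

  const value = await queryDatabase(userId); // slow path, once
  cache.set(userId, { value, expires: Date.now() + TTL_MS });
  return value;
}

// First call takes ~200ms; the second returns instantly from memory.
getUserProfile("42").then(() => getUserProfile("42"));
```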

4. CDN

A CDN (Content Delivery Network) is a network of servers distributed across different locations that work together to deliver content to users more quickly. It helps speed up the process of loading websites, images, videos, and other content by serving them from a server that is geographically closer to the user.

Imagine a library with books stored in different branches. Instead of always going to the central library to get a book, you can visit the closest branch and get the book faster. A CDN does something similar by storing copies of website content in multiple locations.

Usage of CDN:

  1. Faster Load Times: By serving content from a server closer to the user, a CDN reduces the distance data has to travel, leading to faster load times.
  2. Better Scalability and Reliability: CDNs make it easier for websites to scale by automatically handling a large number of users. If one server goes down, CDNs can route the traffic to another server without affecting the user experience.
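
You usually don't write CDN code yourself; instead, your origin server tells the CDN what it may cache via standard HTTP headers. A sketch using Node's built-in http module (the paths and max-age values are arbitrary):

```typescript
import { createServer } from "node:http";

// CDN edge servers respect standard HTTP caching headers, so the
// origin just has to say what is safe to cache and for how long.
createServer((req, res) => {
  if (req.url?.startsWith("/static/")) {
    // Static assets: let the CDN (and browsers) cache for one day.
    res.setHeader("Cache-Control", "public, max-age=86400");
  } else {
    // Dynamic content: always come back to the origin.
    res.setHeader("Cache-Control", "no-store");
  }
  res.end("hello from the origin server");
}).listen(3000);
```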

5. Microservices

Microservices is an architectural style where an application is divided into smaller, independent services, each responsible for a specific piece of functionality. Each microservice is like a small building block that focuses on one task and communicates with other services to create the full application.

Imagine a company where each department (sales, marketing, finance, etc.) works independently but collaborates to run the business. Similarly, in a microservices architecture, each service (like user management, payment processing, etc.) works on its own but interacts with others to make the whole system work.

The opposite of Microservices is Monolithic architecture, where all the functionality of an application is combined into a single, unified service or codebase.

Usage of Microservices:

  1. Scalability: Since each service is independent, you can scale them individually. For example, if the payment processing service gets more traffic, you can scale just that service without affecting the others.
  2. Flexibility and Technology Choices: Each microservice can be built with a different technology stack, allowing teams to choose the best tool for each job. For example, you could use Python for data analytics and Node.js for real-time chat without worrying about compatibility.
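
Here's a toy sketch of the idea in TypeScript (Node 18+ for the built-in fetch; the ports, routes, and data are all invented). Two services run independently and talk over HTTP instead of sharing code or a database:

```typescript
import { createServer } from "node:http";

// User service: owns everything about users, nothing else.
createServer((req, res) => {
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ id: 1, name: "Ada" }));
}).listen(4001);

// Order service: owns orders, and asks the user service for user
// details over HTTP instead of reaching into its database.
createServer(async (req, res) => {
  const user = await (await fetch("http://localhost:4001/users/1")).json();
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ orderId: 99, item: "keyboard", user }));
}).listen(4002);
```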

6. API Gateway

An API Gateway is a server that acts as an entry point for all client requests to your system. It routes those requests to the appropriate microservice, handles load balancing, authentication, caching, and other tasks. In simple terms, it’s like a doorman who directs visitors to the right department in a large building.

Imagine a large organization with many departments (microservices), and instead of going directly to each department, visitors (users) go through a central reception (API Gateway) that sends them to the right place.

Kong is a popular open-source API gateway used by backend developers.

Usage of API Gateway:

  1. Single Entry Point: It simplifies the interaction between clients and backend services by providing one entry point for all requests, instead of having clients talk to multiple microservices directly. Acting as a traffic controller, the gateway routes each request to the correct microservice based on the URL, method, or other factors.
  2. Logging and Monitoring: It collects logs and metrics for every request passing through, which helps with monitoring the system and debugging issues.
  3. Security: It can rate limit, load balance, and check authentication tokens on incoming requests.
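
A bare-bones gateway can be sketched in a few lines of TypeScript: match the URL prefix, forward the request, and stream back the response (the service addresses are invented, and real gateways like Kong add auth, rate limiting, and logging at this point):

```typescript
import { createServer, request } from "node:http";

// Map URL prefixes to internal services (hypothetical addresses).
const routes: Record<string, { host: string; port: number }> = {
  "/users": { host: "localhost", port: 4001 },
  "/orders": { host: "localhost", port: 4002 },
};

createServer((clientReq, clientRes) => {
  const prefix = Object.keys(routes).find((p) => clientReq.url?.startsWith(p));
  if (!prefix) {
    clientRes.statusCode = 404;
    return clientRes.end("No such service");
  }
  // Forward the request to the matching service and relay the reply.
  const upstream = request(
    {
      ...routes[prefix],
      path: clientReq.url,
      method: clientReq.method,
      headers: clientReq.headers,
    },
    (upstreamRes) => {
      clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(clientRes);
    }
  );
  clientReq.pipe(upstream);
}).listen(8080); // clients only ever talk to port 8080
```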

7. Webhook

A Webhook is a way for one application to send real-time updates or notifications to another application when a specific event happens. Instead of the receiving application repeatedly checking for updates (polling), the sending application automatically sends the information when needed.

For example, when something important happens (like a package delivery), the sender (delivery service) texts you (the recipient) immediately, instead of you having to constantly check the website for updates.

Usage of Webhooks:

  1. Real-Time Notifications: Webhooks provide instant updates, eliminating delays caused by periodic polling.
  2. Automation: Webhooks enable automatic workflows between different applications or systems.
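
Both sides of a webhook are just plain HTTP. Here's a sketch of the receiving endpoint and the sender's POST in TypeScript (Node 18+ for fetch; the URL and payload are invented):

```typescript
import { createServer } from "node:http";

// Receiver: expose an endpoint the other system can POST to the
// moment an event happens. No polling needed.
createServer((req, res) => {
  if (req.method === "POST" && req.url === "/webhooks/delivery") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      console.log("Delivery update:", JSON.parse(body));
      res.statusCode = 200; // acknowledge, or the sender may retry
      res.end();
    });
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(5000);

// Sender (normally a completely different system): fire a POST
// as soon as the event occurs.
fetch("http://localhost:5000/webhooks/delivery", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ packageId: "PKG-7", status: "delivered" }),
});
```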

8. Sharding

Sharding is a method of splitting a large database into smaller, more manageable pieces, called shards. Each shard holds a portion of the data and operates as an independent database.

Think of sharding as dividing a library's books into multiple sections based on genres. Instead of searching the entire library for a book, you only look in the relevant section, making the process faster and more efficient.

Usage of Sharding:

  1. Improved Performance: By dividing data across multiple shards, the workload is distributed. This reduces the load on any single database and allows queries to be processed faster.
  2. High Availability: If one shard goes down, only a portion of the data is affected. The rest of the system continues to work, making it more resilient.
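
The core of sharding is a deterministic rule mapping each key to a shard, so the same user's data always lands in the same place. A hash-based sketch in TypeScript (the shard names and hash are illustrative; production systems use stronger hashes or range-based schemes):

```typescript
// Hash-based sharding: a deterministic function maps each key
// to one of N shards.
const SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"];

function shardFor(userId: string): string {
  // Simple djb2-style string hash, kept to 32 bits.
  let hash = 5381;
  for (const ch of userId) hash = (hash * 33 + ch.charCodeAt(0)) >>> 0;
  return SHARDS[hash % SHARDS.length];
}

console.log(shardFor("alice")); // always the same shard for "alice"
console.log(shardFor("bob"));   // possibly a different shard
```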

9. Proxy

A Proxy is a server that acts as an intermediary between a client (like your browser) and another server (like a website). Instead of connecting directly to the server, your request goes through the proxy, which forwards it to the destination and then sends the response back to you.

Think of it like a middleman: if you want to send a message to someone, you give it to the middleman, who delivers it for you and brings back the reply.

Usage of Proxy servers:

  1. Privacy and Anonymity: A proxy hides your identity by masking your IP address. The destination server only sees the proxy's IP, not yours.
  2. Bypassing Restrictions: Proxies allow users to access content that might be blocked in their region by routing requests through a server in a different location.
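
A minimal forward proxy fits in a few lines of TypeScript with Node's http module: accept the client's request, relay it to the destination, and pipe the response back (a sketch for plain HTTP only; HTTPS proxying works differently):

```typescript
import { createServer, request } from "node:http";

createServer((clientReq, clientRes) => {
  // For a plain HTTP proxy, the client sends the full target URL.
  const target = new URL(clientReq.url ?? "");
  const upstream = request(
    {
      host: target.hostname,
      port: target.port || 80,
      path: target.pathname + target.search,
      method: clientReq.method,
      headers: clientReq.headers,
    },
    (upstreamRes) => {
      // The destination only ever saw the proxy's IP; relay its
      // response back to the original client.
      clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(clientRes);
    }
  );
  clientReq.pipe(upstream);
}).listen(8888);
// Try it: curl -x http://localhost:8888 http://example.com
```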

10. Message Queues

A Message Queue is a system used to send, store, and retrieve messages between different parts of an application (or between applications) in a reliable and organized way. Instead of sending messages directly, a queue acts like a post office:

  • One part of the system (the sender) places a message in the queue.
  • Another part (the receiver) picks it up whenever it’s ready to process it.

RabbitMQ and Apache Kafka are two popular message queue systems.

Usage of Message Queues:

  1. Decoupling Systems: A message queue allows different parts of a system to work independently. The sender and receiver don’t need to know about each other or work at the same time.
  2. Asynchronous Processing: Message queues let background tasks run without making the user wait. For example, when you upload a photo, the app instantly confirms the upload while resizing and optimization happen in the background.
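
To see the decoupling in action, here's a toy in-memory queue in TypeScript (real brokers like RabbitMQ and Kafka add persistence, acknowledgements, and delivery guarantees on top of this idea):

```typescript
// Producer puts messages on the queue and moves on; the consumer
// drains them whenever it's ready. Neither waits for the other.
type Message = { photoId: string; task: string };
const queue: Message[] = [];

// Producer: e.g. the upload handler confirms instantly, then enqueues work.
function enqueue(msg: Message): void {
  queue.push(msg);
  console.log("Queued:", msg);
}

// Consumer: a background worker picks up jobs at its own pace.
setInterval(() => {
  const msg = queue.shift();
  if (msg) console.log("Processing:", msg);
}, 1000);

enqueue({ photoId: "p1", task: "resize" });
enqueue({ photoId: "p1", task: "optimize" });
```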

Thank you for reading my post! 🙏 I hope you've learnt something today.

Please leave a reaction 🦄 or a comment 💬, so this post reaches other devs like you. 🌱👨‍💻

Preparing for Frontend interviews?

👉 Check out Frontend Camp

  • Popular interview questions
  • Popular system design questions (upcoming)
  • All for free! ✨

Join now!
