DEV Community

이관호(Gwanho LEE)

Backend Architecture Fundamentals (Part 1)

In this article, I summarize the core backend architecture concepts:

  • Load Balancing
  • Asynchronous Concurrency
  • Caching (Redis)
  • Database Scaling (Sharding & Replicas)
  • API Design
  • API Security
  • Monolithic vs Microservices Architecture

The goal is to understand why these patterns exist and how they help systems scale.


1. Load Balancing

When an application receives many user requests, sending all requests to a single server can quickly overwhelm it. A load balancer distributes incoming traffic across multiple servers so that no single server becomes overloaded.

This improves:

  • reliability
  • scalability
  • fault tolerance

If one server fails, the load balancer routes traffic to other healthy servers.

Example Architecture

          Users
            |
            v
      Load Balancer
            |
    +-------+-------+
    |       |       |
Server A Server B Server C

Example (NGINX Load Balancer)

http {
    upstream backend_servers {
        server app1.example.com;
        server app2.example.com;
        server app3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend_servers;
        }
    }
}

2. Asynchronous Concurrency

Modern backend systems must handle thousands of requests simultaneously. Blocking threads while waiting for database queries or network calls reduces performance.

Instead, many systems use asynchronous execution, allowing other tasks to run while waiting for I/O operations.

Rust commonly uses the Tokio runtime for asynchronous programming.

Example (Rust Async Server)

use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Bind a TCP listener to the local address.
    let listener = TcpListener::bind("127.0.0.1:8080").await?;

    loop {
        // Wait for an incoming connection; awaiting here does not
        // block other tasks on the runtime.
        let (_socket, _) = listener.accept().await?;

        // Spawn a lightweight task per connection so the accept loop
        // continues immediately instead of waiting for the handler.
        tokio::spawn(async move {
            println!("Handling new request");
        });
    }
}

3. Caching (Redis)

Many backend systems repeatedly fetch the same data (for example: product details or user sessions). Querying the database every time increases latency.

A cache stores frequently accessed data in memory, allowing extremely fast reads.

Redis is a popular in-memory key-value store used as a cache.

Typical Data Flow

        Client Request
              |
              v
         Redis Cache
              |
            (Miss)
              |
           Database

If the data exists in Redis, the system returns it immediately without querying the database.

Example (Redis in Rust)

use redis::Commands;

fn main() -> redis::RedisResult<()> {
    // Connect to a local Redis instance.
    let client = redis::Client::open("redis://127.0.0.1/")?;
    let mut con = client.get_connection()?;

    // `set` is generic over its return value, so an explicit
    // `()` annotation is needed for type inference.
    let _: () = con.set("user:1", "Tony")?;

    let name: String = con.get("user:1")?;

    println!("User name: {}", name);

    Ok(())
}

Redis dramatically reduces latency because data is served from memory rather than from disk.
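The read path in the diagram above is the cache-aside pattern: check the cache first, and on a miss load from the database and populate the cache for next time. A minimal sketch of that logic, using an in-memory `HashMap` as a stand-in for Redis and a hypothetical `load_from_db` helper:

```rust
use std::collections::HashMap;

// Stand-in for a real database query (hypothetical helper).
fn load_from_db(key: &str) -> String {
    format!("db-value-for-{}", key)
}

// Cache-aside read: serve from the cache on a hit, otherwise
// load from the database and populate the cache.
fn get_with_cache(cache: &mut HashMap<String, String>, key: &str) -> String {
    if let Some(value) = cache.get(key) {
        return value.clone(); // cache hit
    }
    let value = load_from_db(key); // cache miss
    cache.insert(key.to_string(), value.clone());
    value
}

fn main() {
    let mut cache = HashMap::new();
    let first = get_with_cache(&mut cache, "user:1");  // miss -> database
    let second = get_with_cache(&mut cache, "user:1"); // hit -> cache
    assert_eq!(first, second);
    println!("{}", second);
}
```

The same flow applies with Redis in place of the `HashMap`; the only extra concern in production is setting an expiry (TTL) so stale entries eventually drop out.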


4. Database Scaling

As applications grow, a single database may become a bottleneck.

Read Replicas

One primary database handles writes, while multiple replicas handle read requests.

                Write
     App --------------> Primary DB
                             |
                        Replication
                             |
                 +-----------+-----------+
                 |           |           |
             Replica1    Replica2    Replica3

This reduces load on the primary database.
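Application code typically routes each query by type: writes always go to the primary, while reads are spread across the replicas. A minimal sketch of that routing decision (the connection names are placeholders, not real connection strings):

```rust
// Read/write router over placeholder connection names.
struct DbRouter {
    primary: String,
    replicas: Vec<String>,
    next_replica: usize,
}

impl DbRouter {
    // Writes always target the primary.
    fn for_write(&self) -> &str {
        &self.primary
    }

    // Reads rotate through the replicas round-robin.
    fn for_read(&mut self) -> &str {
        let idx = self.next_replica % self.replicas.len();
        self.next_replica += 1;
        &self.replicas[idx]
    }
}

fn main() {
    let mut router = DbRouter {
        primary: "primary-db".to_string(),
        replicas: vec!["replica1".into(), "replica2".into(), "replica3".into()],
        next_replica: 0,
    };

    println!("write -> {}", router.for_write());
    println!("read  -> {}", router.for_read());
    println!("read  -> {}", router.for_read());
}
```

One caveat this sketch ignores: replication lags slightly behind the primary, so a read issued immediately after a write may not see the new data on a replica.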


Database Sharding

Sharding means splitting a large database into smaller databases called shards.

Example:

Shard 1 → Users 1 - 1,000,000

Shard 2 → Users 1,000,001 - 2,000,000

Shard 3 → Users 2,000,001 - 3,000,000

This distributes the workload across multiple database servers.
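The range-based split above comes down to a shard-selection function mapping a user ID to its shard. A sketch using the 1,000,000-users-per-shard ranges from the example (IDs are assumed to start at 1):

```rust
// Map a user ID to its shard under the range scheme above:
// shard 1 holds users 1..=1_000_000, shard 2 the next million, etc.
fn shard_for(user_id: u64) -> u64 {
    (user_id - 1) / 1_000_000 + 1
}

fn main() {
    assert_eq!(shard_for(1), 1);
    assert_eq!(shard_for(1_000_000), 1);
    assert_eq!(shard_for(1_000_001), 2);
    assert_eq!(shard_for(2_500_000), 3);
    println!("user 2,500,000 lives on shard {}", shard_for(2_500_000));
}
```

Range sharding keeps related IDs together but can concentrate new users on the newest shard; hash-based sharding (e.g. hashing the ID modulo the shard count) spreads load more evenly at the cost of losing range queries.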


5. API Design Principles

A well-designed API is predictable and easy to use.

Resource-Based Endpoints

Use nouns instead of verbs.

GET    /users
GET    /users/1
POST   /users
DELETE /users/1
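As an illustration only (a real service would use a web framework's router), the resource-based routes above can be dispatched with a simple match on method and path:

```rust
// Dispatch a request to a handler name based on method and path.
// This only illustrates the noun-based endpoint layout.
fn route(method: &str, path: &str) -> &'static str {
    match (method, path) {
        ("GET", "/users") => "list_users",
        ("POST", "/users") => "create_user",
        (m, p) if p.starts_with("/users/") => match m {
            "GET" => "get_user",
            "DELETE" => "delete_user",
            _ => "method_not_allowed",
        },
        _ => "not_found",
    }
}

fn main() {
    assert_eq!(route("GET", "/users"), "list_users");
    assert_eq!(route("DELETE", "/users/1"), "delete_user");
    println!("GET /users/1 -> {}", route("GET", "/users/1"));
}
```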

API Versioning

Versioning allows backward compatibility when APIs evolve.

/api/v1/users
/api/v2/users

6. API Security

Security is critical in backend systems.

Rate Limiting

Limit the number of requests a user can make in a given time period.

Example:

100 requests per minute per IP
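A minimal fixed-window limiter enforcing a per-IP request cap looks roughly like this (in-memory only; production deployments usually keep the counters in Redis so all servers share them):

```rust
use std::collections::HashMap;

// Fixed-window rate limiter: allow at most `limit` requests per IP
// within one window; all counters reset when a new window begins.
struct RateLimiter {
    limit: u32,
    window: u64,                  // current window number
    counts: HashMap<String, u32>, // requests per IP in this window
}

impl RateLimiter {
    fn new(limit: u32) -> Self {
        RateLimiter { limit, window: 0, counts: HashMap::new() }
    }

    // `window` would normally be computed as `now_secs / 60`;
    // it is passed in here to keep the sketch deterministic.
    fn allow(&mut self, ip: &str, window: u64) -> bool {
        if window != self.window {
            self.window = window; // new minute: reset all counters
            self.counts.clear();
        }
        let count = self.counts.entry(ip.to_string()).or_insert(0);
        if *count < self.limit {
            *count += 1;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut limiter = RateLimiter::new(100);
    for _ in 0..100 {
        assert!(limiter.allow("1.2.3.4", 0));
    }
    assert!(!limiter.allow("1.2.3.4", 0)); // 101st request blocked
    assert!(limiter.allow("1.2.3.4", 1));  // new window resets the count
}
```

Fixed windows are simple but allow bursts at window boundaries; sliding-window or token-bucket variants smooth this out.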


Firewalls

Firewalls act as traffic filters and block suspicious traffic before it reaches the server.


Authentication

Authentication ensures only authorized users access the system.

Common methods:

  • API Keys
  • OAuth
  • JWT (JSON Web Tokens)

Example concept:

// Placeholder check for illustration only; a real system verifies a
// signed token (e.g. a JWT signature) instead of comparing strings.
fn validate_token(token: &str) -> bool {
    token == "valid_token"
}

7. Monolithic vs Microservices Architecture

Monolithic Architecture

All application features exist in a single codebase.

Advantages:

  • simple architecture
  • easier initial development

Disadvantages:

  • harder to scale
  • difficult deployments
  • tightly coupled code

Microservices Architecture

Microservices divide the system into independent services.

Example services:

  • User Service
  • Payment Service
  • Order Service
  • Notification Service

Each service can scale and deploy independently.


Final keywords

  • high traffic
  • reliability
  • scalability
  • Load balancing
  • Async concurrency
  • Redis caching
  • Database sharding
  • Secure API design
  • Microservice architecture
