Designing a Flight Search Engine: Architecture, Caching, and Trade-offs

I was recently asked to design and build a flight search engine using the Amadeus API as part of a frontend assessment. While the visible requirement was to surface flight results and price trends through a user-facing interface, I chose to approach the problem as a complete system rather than a UI-only task.

To reflect how flight search products are built in practice, I introduced a server layer responsible for integrating a rate-limited external API, defining a stable internal data contract, and implementing a caching strategy that keeps latency low under repeated queries.

The system uses global, deterministic caching keyed by search parameters rather than per-user caching. This allows identical searches to share cached results, reducing redundant upstream requests while keeping response times consistent across the application.

This article documents the system design of Flight33, part of an ongoing series of applications I build under the “33” naming convention. The live application is available at Flight33.

Table of Contents

1. System Architecture
2. System Boundaries and Constraints
3. Data Flow and Responsibilities
4. Global Caching Strategy
5. Data Normalization and Contracts
6. Code Structure and Separation of Concerns
7. Client Responsibilities and State Management
8. Booking Flow Integration
9. Trade-offs and Limitations
10. Closing Thoughts

1. System Architecture

Flight33 is structured as a two-tier system with a clear separation between the client application and a server layer that mediates all access to external flight data.

The client is responsible for user interaction, search input, filtering, and presentation.
The server layer handles integration with the Amadeus API, response shaping, caching, and enforcement of system constraints.

The client does not communicate directly with the Amadeus API. All search requests are routed through the server layer, which acts as the single integration point for external flight data.

```
User
 ↓
Web Client (Next.js)
 ↓
Flight33 API
 ↓
Redis Cache
 ↓
Amadeus API
```

With this abstraction, rate limits, response inconsistencies, and caching behavior are handled once, in a controlled environment, rather than being distributed across the client. As a result, the client remains predictable and focused on delivering a responsive search experience.

2. System Boundaries and Constraints

The design of Flight33 is driven primarily by the constraints imposed by the Amadeus API and the expectations of a responsive search experience.

The Amadeus API is rate-limited, exhibits variable latency, and returns responses whose structure can vary depending on query shape. Treating it as a direct dependency of the client would couple user experience to these constraints and make system behavior harder to control.

To avoid this, all interaction with Amadeus is isolated behind the server layer. This establishes a clear boundary where external variability is handled explicitly, rather than leaking into the client through retries, conditional logic, or UI workarounds.

Within this boundary, the system enforces a small set of invariants:

  • External API access is centralized and controlled
  • Identical searches resolve deterministically
  • Cached data is shared globally rather than scoped to individual users
  • The client interacts only with normalized, stable data structures

3. Data Flow and Responsibilities

A flight search in Flight33 follows a deliberately constrained flow designed to minimize external API usage while keeping client interactions responsive.

A search begins when the client submits a set of parameters, including origin, destination, dates, and passenger configuration. These parameters are forwarded to the server layer as a single search request.

On receipt, the server layer performs three sequential steps:

  1. Cache resolution
    A deterministic cache key is generated from the full search payload. If a matching entry exists, the cached result is returned immediately.

  2. External fetch
    If no cached result is found, the server fetches data from the Amadeus API using the validated search parameters.

  3. Response shaping
    The raw response is normalized into a stable internal structure before being returned to the client and written to cache.
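A minimal sketch of these three steps, with the integration details passed in as dependencies; the helper names and types below are illustrative assumptions, not the actual Flight33 modules:

```typescript
// Sketch of the three-step server flow: cache resolution, external fetch,
// response shaping. Integration details are injected so the flow itself
// stays free of provider- and cache-specific code.
interface SearchParams {
  origin: string;
  destination: string;
  departureDate: string;
  returnDate?: string;
  adults: number;
}

interface FlightOffer {
  id: string;
  price: number;
  currency: string;
  segments: unknown[];
}

interface SearchDeps {
  readCache: (params: SearchParams) => Promise<FlightOffer[] | null>;
  writeCache: (params: SearchParams, offers: FlightOffer[]) => Promise<void>;
  fetchUpstream: (params: SearchParams) => Promise<unknown>;
  normalize: (raw: unknown) => FlightOffer[];
}

export async function handleSearch(
  params: SearchParams,
  deps: SearchDeps
): Promise<FlightOffer[]> {
  // 1. Cache resolution: a deterministic key is derived from the payload
  const cached = await deps.readCache(params);
  if (cached) return cached;

  // 2. External fetch: only reached on a cache miss
  const raw = await deps.fetchUpstream(params);

  // 3. Response shaping: normalize before returning and writing to cache
  const offers = deps.normalize(raw);
  await deps.writeCache(params, offers);
  return offers;
}
```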

Once the client receives the search results, no further network requests are required for filtering or pagination. All subsequent interaction operates on the in-memory dataset provided by the initial response.

4. Global Caching Strategy

Caching is central to how Flight33 controls latency and external API usage.

Search results are cached globally, not per user. A deterministic cache key is derived from the complete search payload, ensuring that identical searches resolve to the same cached entry regardless of who initiates them. This allows repeated queries to benefit from shared cache state rather than fragmenting cache usage across sessions.

The cache itself is backed by an in-memory data store (Redis) running alongside the server layer. Cached entries are written with a short time-to-live to balance freshness with performance, allowing the system to serve repeat searches quickly while avoiding long-lived stale data.
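A minimal sketch of this read-through pattern, assuming ioredis as the Redis client; the key format and the 10-minute TTL are illustrative choices rather than the exact production values:

```typescript
// Global, deterministic cache keyed by the search payload, with a short TTL.
import Redis from "ioredis";
import { createHash } from "node:crypto";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

// Sort top-level keys so equivalent searches produce the same key
// regardless of property order or which user issued them.
function cacheKey(params: Record<string, unknown>): string {
  const canonical = JSON.stringify(
    Object.fromEntries(Object.entries(params).sort(([a], [b]) => a.localeCompare(b)))
  );
  const digest = createHash("sha256").update(canonical).digest("hex");
  return `flight-search:${digest}`;
}

export async function getOrSet<T>(
  params: Record<string, unknown>,
  fetcher: () => Promise<T>,
  ttlSeconds = 600
): Promise<T> {
  const key = cacheKey(params);
  const hit = await redis.get(key);
  if (hit) return JSON.parse(hit) as T;

  const value = await fetcher();
  // "EX" sets the expiry in seconds, keeping cached entries short-lived.
  await redis.set(key, JSON.stringify(value), "EX", ttlSeconds);
  return value;
}
```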

Because caching is handled within the server layer, the client remains unaware of whether a response was served from cache or fetched upstream. This keeps client behavior consistent while allowing cache policies to evolve independently.

During development and deployment, the cache service runs alongside the server layer using Docker, allowing cache behavior to be tested under the same assumptions used in production. This avoids environment-specific logic and keeps cache semantics consistent across local and deployed setups.

5. Data Normalization and Contracts

Responses from the Amadeus API are not consumed directly by the client. The raw payload contains deeply nested structures, optional fields, and variations that depend on the search parameters.

Before any data is returned, responses are transformed into a normalized internal format. This step extracts only the fields required for search results, pricing, and filtering, and maps them into a predictable structure.

This normalization step serves two purposes:

  • It isolates the client from upstream response variability
  • It establishes a stable contract that the rest of the system can rely on

As a result, changes in the external API surface do not propagate directly to the client, and frontend logic does not need to defensively handle inconsistent data shapes.
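A simplified sketch of this normalization step; the field selection below is an assumption of roughly what gets extracted from the Amadeus flight-offer payload, not the full internal contract:

```typescript
// Illustrative normalization of a raw Amadeus flight-offers response into
// the flat, predictable shape the client consumes.
interface NormalizedOffer {
  id: string;
  price: { total: number; currency: string };
  itineraries: {
    durationIso: string;
    segments: { from: string; to: string; departureAt: string; carrier: string }[];
  }[];
}

export function normalizeOffers(raw: any): NormalizedOffer[] {
  const offers = Array.isArray(raw?.data) ? raw.data : [];
  return offers.map((offer: any) => ({
    id: String(offer.id),
    price: {
      total: Number(offer.price?.grandTotal ?? offer.price?.total ?? 0),
      currency: offer.price?.currency ?? "EUR",
    },
    itineraries: (offer.itineraries ?? []).map((it: any) => ({
      durationIso: it.duration ?? "",
      segments: (it.segments ?? []).map((seg: any) => ({
        from: seg.departure?.iataCode ?? "",
        to: seg.arrival?.iataCode ?? "",
        departureAt: seg.departure?.at ?? "",
        carrier: seg.carrierCode ?? "",
      })),
    })),
  }));
}
```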

6. Code Structure and Separation of Concerns

The server layer is structured around explicit responsibilities rather than generic layers or framework conventions.

Search handling is composed of three distinct concerns:

  • Request coordination: receiving search input, validating parameters, and managing the request lifecycle
  • Search execution: resolving cached results, invoking the external flight data provider when required, and aggregating responses
  • Data transformation: converting provider-specific payloads into a stable internal data contract

Caching logic is treated as a first-class concern rather than an implementation detail. Cache key generation, read/write behavior, and expiry policies are centralized and invoked explicitly as part of the search flow, rather than being hidden behind decorators or middleware.

External integrations, including the Amadeus client and the cache client, are isolated behind well-defined interfaces. This prevents upstream changes or integration-specific behavior from leaking into search logic or request handling.
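A minimal sketch of the kind of interfaces this isolation implies; the names are illustrative, not the actual module layout:

```typescript
// Narrow interfaces for the two external integrations. Search logic depends
// on these capabilities, never on the Amadeus SDK or the Redis client directly.
export interface FlightProvider {
  searchOffers(params: Record<string, string | number>): Promise<unknown>;
}

export interface SearchCache {
  read(key: string): Promise<string | null>;
  write(key: string, value: string, ttlSeconds: number): Promise<void>;
}
```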

7. Client Responsibilities and State Management

The client is intentionally limited to interaction and presentation concerns.

Its responsibilities include:

  • Collecting and validating search input
  • Triggering search execution
  • Rendering search results
  • Managing UI state such as loading, errors, filtering, and pagination

Once search results are returned, the client does not issue additional requests when filters change. All filtering and sorting operations are applied to the in-memory dataset returned by the initial search.

This approach ensures fast feedback during interaction while keeping network usage predictable and minimal.
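A sketch of that in-memory filtering as a pure function, with illustrative filter fields (maxPrice, nonStopOnly, carriers) standing in for the real filter state:

```typescript
// Derive the visible results from the dataset returned by the initial search
// and the current filter state, without issuing any network request.
interface Offer {
  price: number;
  stops: number;
  carrier: string;
}

interface Filters {
  maxPrice?: number;
  nonStopOnly?: boolean;
  carriers?: string[];
}

export function applyFilters(offers: Offer[], filters: Filters): Offer[] {
  return offers.filter((offer) => {
    if (filters.maxPrice !== undefined && offer.price > filters.maxPrice) return false;
    if (filters.nonStopOnly && offer.stops > 0) return false;
    if (filters.carriers && filters.carriers.length > 0 && !filters.carriers.includes(offer.carrier)) return false;
    return true;
  });
}
```

In a React client, a derivation like this would typically sit behind useMemo so visible results recompute only when the dataset or filter state changes.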

Flight 33 Search Results

8. Booking Flow Integration

Flight33 does not attempt to handle bookings directly.

Each search result exposes a booking action that redirects users to an external booking platform with the relevant search parameters pre-filled. This allows users to continue the booking process without the system managing reservations, payments, or ticketing flows.
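As a sketch, the booking action can be expressed as a plain redirect URL with the search parameters serialized into the query string; the target host and parameter names below are placeholders, not the actual partner integration:

```typescript
// Build the external booking redirect from the current search parameters.
interface BookingParams {
  origin: string;
  destination: string;
  departureDate: string; // YYYY-MM-DD
  adults: number;
}

export function bookingUrl(params: BookingParams): string {
  const query = new URLSearchParams({
    origin: params.origin,
    destination: params.destination,
    date: params.departureDate,
    adults: String(params.adults),
  });
  // Placeholder host; the real system points at the external booking platform.
  return `https://example-booking-platform.com/search?${query.toString()}`;
}
```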

This decision keeps the system focused on search and comparison while delegating transactional complexity to a platform designed for it.

Flight 33 Booking Interface

9. Trade-offs and Limitations

This design makes several deliberate trade-offs.

  • Search result sets are expected to remain within a manageable size
  • Client-side filtering assumes sufficient browser memory
  • Cached results tolerate short-lived staleness

If result volume or traffic patterns increased significantly, filtering and pagination would need to move server-side and be driven by parameters on each request, and the cache policies would need to be revisited.

10. Closing Thoughts

Flight33 was approached as a system design problem rather than a UI exercise.

By introducing a server layer to control external API access, define stable data contracts, and apply deterministic caching, the client remains predictable and responsive without compensating for upstream uncertainty.

Flight33 demonstrates that even a simple search-driven application lives or dies by how external data is handled. By centralizing API access, caching identical searches globally, and returning a stable dataset for client-side interaction, the system keeps both performance and behavior predictable.
