DEV Community

Kaushik Tank

You’re Losing 300ms Before Your API Even Runs (HTTPS Explained)

Prefer watching instead?

If you’d rather see this visually explained, you can watch the full breakdown here:


Your API is not slow… or is it?

You make an API call. It takes around 300 milliseconds.

You open your code and start optimizing. You reduce execution time, improve queries, clean up logic. Eventually, your business logic runs in under 50 milliseconds.

But the total response time? Still around 300 milliseconds.

At this point, it feels like something is off.

The reality is simple, but often overlooked: a large portion of that time is spent before your API logic even begins execution.

To understand where that time goes, we need to look at the full lifecycle of an HTTPS request.


The journey of a request

When you send an HTTPS request, it does not immediately reach your application. Before your server processes a single line of your code, several steps happen to establish a reliable and secure connection.

These steps are essential, but they also introduce latency. Most developers don’t see them, so they rarely think about them.

Let’s go through them one by one.


Step 1: TCP connection (establishing the link)

Before any data can be exchanged, the client and server must establish a connection. This is done using the TCP three-way handshake.

The process is straightforward:

  • The client sends a SYN packet to initiate a connection
  • The server responds with SYN-ACK
  • The client sends an ACK to confirm

At this point, the connection is established and both sides are ready to communicate.

However, it’s important to understand what has not happened yet.

No API request has been sent. No headers, no payload, nothing.

This step only ensures that both sides are reachable and ready.
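As a rough sketch, this handshake cost can be observed from Python by timing `socket.connect()`, which returns only once the three-way handshake completes. The throwaway local listener below stands in for a real server; over the internet, the number would be far larger:

```python
# Sketch: measuring TCP connection setup time against a local listener.
# In production you would measure against your real API host instead.
import socket
import threading
import time

def run_listener(server_sock: socket.socket) -> None:
    # Accept a single connection (the OS completes the handshake), then close.
    conn, _ = server_sock.accept()
    conn.close()

# Bind a throwaway server on an OS-assigned port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=run_listener, args=(server,), daemon=True).start()

# connect() returns only after SYN -> SYN-ACK -> ACK completes.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
start = time.perf_counter()
client.connect(("127.0.0.1", port))
tcp_setup_ms = (time.perf_counter() - start) * 1000
client.close()
server.close()

print(f"TCP handshake took {tcp_setup_ms:.3f} ms (loopback; expect far more over the internet)")
```

On loopback this is well under a millisecond; against a server on another continent, a single round trip alone can cost 100 ms or more.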


Step 2: TLS handshake (making it secure)

Since the request is made over HTTPS, the connection must be secured before any actual data is transmitted.

This is where the TLS handshake comes in.

During this phase, the client and server negotiate how communication will be encrypted. The server presents its TLS certificate (still commonly called an SSL certificate), which the client verifies to ensure it is talking to a trusted source.

They agree on a cipher suite and prepare for encrypted communication.

This process involves back-and-forth exchanges between the client and server, and each round trip adds latency. A full TLS 1.2 handshake needs two round trips before application data can flow; TLS 1.3 cuts this to one.

This is also the point where a significant portion of the total request time is spent.
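The cipher suites a Python client would offer during this negotiation can be inspected offline with the standard `ssl` module (the exact list depends on your Python and OpenSSL build):

```python
# Sketch: inspecting the cipher suites a client offers in the TLS handshake.
# No network is needed; this just reads the default client configuration.
import ssl

ctx = ssl.create_default_context()
ciphers = ctx.get_ciphers()  # list of dicts describing each offered suite

# Show a few of the offered suites and the protocol version they belong to.
for c in ciphers[:5]:
    print(c["name"], "-", c["protocol"])
```

The server picks one suite from this list, and that choice determines how the rest of the connection is encrypted.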


Step 3: Key exchange (establishing a secure channel)

After agreeing on the encryption method, both sides need to establish a shared secret key.

Depending on the TLS version, the client either sends encrypted key material (the older RSA key exchange) or both sides exchange Diffie-Hellman key shares. Either way, both sides derive a session key that will be used to encrypt and decrypt all further communication.

Once this step is complete, the connection becomes fully secure.

Only now is the system ready to safely transmit actual request data.
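As a toy illustration of the shared-secret idea: each side combines its own private value with the other side's public value, and both arrive at the same key without ever sending it over the wire. Real TLS uses elliptic-curve Diffie-Hellman with large, vetted parameters; the textbook numbers below are for readability only and are not secure.

```python
# Toy Diffie-Hellman: both sides derive the same secret without transmitting it.
import secrets

p, g = 23, 5  # textbook toy parameters; real TLS uses far larger groups

a = secrets.randbelow(p - 2) + 1   # client's private value (never sent)
b = secrets.randbelow(p - 2) + 1   # server's private value (never sent)

A = pow(g, a, p)  # client's public share, sent to the server
B = pow(g, b, p)  # server's public share, sent to the client

client_secret = pow(B, a, p)  # client combines its private value with B
server_secret = pow(A, b, p)  # server combines its private value with A

print(client_secret == server_secret)  # both sides hold the same session key material
```

An eavesdropper sees only `A` and `B`; recovering the secret from those is the hard problem the security rests on.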


Step 4: The actual request and response

With the connection established and secured, the client finally sends the HTTP request.

At this point, your application starts doing what you usually think about:

  • Parsing the request
  • Validating input
  • Authenticating the user
  • Executing business logic
  • Preparing and returning the response

In most well-optimized systems, this part is relatively fast.

The response is sent back through the same secure channel, and the connection may be closed or reused depending on configuration.
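The stages listed above can be sketched as a plain function pipeline. The request shape, the token check, and the greeting logic are all made up for illustration; a real service would have proper validation and authentication:

```python
# Sketch of the request-phase stages as plain functions (illustrative only).
import json

def parse(raw: bytes) -> dict:
    return json.loads(raw)

def validate(req: dict) -> dict:
    if "user" not in req:
        raise ValueError("missing 'user'")
    return req

def authenticate(req: dict) -> dict:
    if req.get("token") != "secret-token":   # placeholder auth check
        raise PermissionError("bad token")
    return req

def execute(req: dict) -> dict:
    return {"greeting": f"hello, {req['user']}"}  # the business logic

def handle(raw: bytes) -> bytes:
    # Parse -> validate -> authenticate -> execute -> serialize the response.
    return json.dumps(execute(authenticate(validate(parse(raw))))).encode()

print(handle(b'{"user": "kaushik", "token": "secret-token"}'))
```

Each stage here costs microseconds; it is everything before this pipeline runs that dominates the 300 ms.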


Where the time actually goes

If you break down the total latency of a typical request, the distribution often looks like this:

  • A large portion of time is spent in TCP and TLS setup
  • A smaller portion is spent in actual application logic

This is why you can optimize your code significantly and still see little change in overall response time.

You are improving the part that is already fast, while the majority of the delay happens elsewhere.


What about connection reuse?

Modern systems use techniques like HTTP keep-alive to reuse connections and reduce overhead.

This does help. If a connection remains open, subsequent requests can skip the TCP and TLS setup.

However, in real-world environments:

  • Connections are closed after periods of inactivity
  • Load balancers may terminate idle connections
  • Not every request benefits from reuse

Because of this, the overhead does not disappear entirely. It shows up frequently enough to impact performance.
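A minimal sketch of reuse in action, using Python's standard `http.client` against a throwaway local server. Plain HTTP keeps the demo self-contained; the reuse mechanics are the same for HTTPS, where a reused connection also skips the TLS handshake:

```python
# Sketch: two requests over one TCP connection via HTTP/1.1 keep-alive.
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from http.client import HTTPConnection

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 keeps connections open by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
first = conn.getresponse().read()
sock_after_first = conn.sock           # implementation detail, used only to check reuse

conn.request("GET", "/")               # no new TCP handshake happens here
second = conn.getresponse().read()
reused = conn.sock is sock_after_first

print(f"responses: {first!r}, {second!r}; connection reused: {reused}")
conn.close()
server.shutdown()
```

The second request pays none of the setup cost, which is exactly what keep-alive buys you while the connection stays open.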


A necessary trade-off

It’s important to recognize that this overhead exists for a reason.

HTTPS is designed for secure communication over an untrusted network. It ensures:

  • Data encryption
  • Server authenticity
  • Protection against interception

These guarantees come at a cost. The additional latency is the price paid for security and trust.


The real takeaway

When you look at API performance, it’s easy to focus only on application code. That’s the part you control directly, and the part you interact with every day.

But the full request lifecycle starts much earlier.

Before your application processes anything, the system has already spent time:

  • Establishing a connection
  • Negotiating security
  • Setting up encryption

If you ignore this part of the system, you are only seeing part of the picture.

Understanding this changes how you approach performance optimization. It shifts the focus from just writing faster code to understanding the entire path a request takes.


Final thought

Performance is not just about how fast your code runs.

It is about how efficiently the entire system works, from the moment a request is initiated to the moment a response is delivered.

Once you start thinking in terms of the full lifecycle, you begin to see where the real bottlenecks are.
