<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kaushik Tank</title>
    <description>The latest articles on DEV Community by Kaushik Tank (@kaushiktank).</description>
    <link>https://dev.to/kaushiktank</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F663482%2F5b2690f9-85be-43bf-be6e-4352b1ce8190.jpg</url>
      <title>DEV Community: Kaushik Tank</title>
      <link>https://dev.to/kaushiktank</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kaushiktank"/>
    <language>en</language>
    <item>
      <title>You’re Losing 300ms Before Your API Even Runs (HTTPS Explained)</title>
      <dc:creator>Kaushik Tank</dc:creator>
      <pubDate>Wed, 22 Apr 2026 17:06:58 +0000</pubDate>
      <link>https://dev.to/kaushiktank/youre-losing-300ms-before-your-api-even-runs-https-explained-3m3g</link>
      <guid>https://dev.to/kaushiktank/youre-losing-300ms-before-your-api-even-runs-https-explained-3m3g</guid>
<description>&lt;h2&gt;Prefer watching instead?&lt;/h2&gt;

&lt;p&gt;If you’d rather see this visually explained, you can watch the full breakdown here:&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://www.youtube.com/embed/UfXTblpCuZQ"&gt;&lt;/iframe&gt;&lt;/p&gt;




&lt;h2&gt;Your API is not slow… or is it?&lt;/h2&gt;

&lt;p&gt;You make an API call. It takes around 300 milliseconds.&lt;/p&gt;

&lt;p&gt;You open your code and start optimizing. You reduce execution time, improve queries, clean up logic. Eventually, your business logic runs in under 50 milliseconds.&lt;/p&gt;

&lt;p&gt;But the total response time? Still around 300 milliseconds.&lt;/p&gt;

&lt;p&gt;At this point, it feels like something is off.&lt;/p&gt;

&lt;p&gt;The reality is simple, but often overlooked: a large portion of that time is spent before your API logic even begins execution.&lt;/p&gt;

&lt;p&gt;To understand where that time goes, we need to look at the full lifecycle of an HTTPS request.&lt;/p&gt;




&lt;h2&gt;The journey of a request&lt;/h2&gt;

&lt;p&gt;When you send an HTTPS request, it does not immediately reach your application. Before your server processes a single line of your code, several steps happen to establish a reliable and secure connection.&lt;/p&gt;

&lt;p&gt;These steps are essential, but they also introduce latency. Most developers don’t see them, so they rarely think about them.&lt;/p&gt;

&lt;p&gt;Let’s go through them one by one.&lt;/p&gt;




&lt;h2&gt;Step 1: TCP connection (establishing the link)&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73k7iwlx3pavrond4pbe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73k7iwlx3pavrond4pbe.png" alt="TCP connection" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before any data can be exchanged, the client and server must establish a connection. This is done using the TCP three-way handshake.&lt;/p&gt;

&lt;p&gt;The process is straightforward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The client sends a SYN request to initiate a connection&lt;/li&gt;
&lt;li&gt;The server responds with SYN-ACK&lt;/li&gt;
&lt;li&gt;The client sends an ACK to confirm&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this point, the connection is established and both sides are ready to communicate.&lt;/p&gt;

&lt;p&gt;However, it’s important to understand what has not happened yet.&lt;/p&gt;

&lt;p&gt;No API request has been sent. No headers, no payload, nothing.&lt;/p&gt;

&lt;p&gt;This step only ensures that both sides are reachable and ready.&lt;/p&gt;
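&lt;p&gt;You can observe this step in isolation with a few lines of Python. This is a minimal loopback sketch using only the standard library; the local listener exists purely so the example is self-contained (in practice you would point it at your real API host):&lt;/p&gt;

```python
import socket
import time

# A throwaway local listener so the sketch is self-contained;
# in practice you would connect to your real API host.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
host, port = server.getsockname()

# Time only the TCP three-way handshake: create_connection()
# returns once the handshake has completed.
start = time.perf_counter()
conn = socket.create_connection((host, port))
handshake_ms = (time.perf_counter() - start) * 1000
print(f"TCP handshake took {handshake_ms:.3f} ms")

conn.close()
server.close()
```

&lt;p&gt;On loopback this is nearly instant; over a real network, the handshake costs one full round trip before a single byte of your request is sent.&lt;/p&gt;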




&lt;h2&gt;Step 2: TLS handshake (making it secure)&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvc7vv5galkqni6bjgrky.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvc7vv5galkqni6bjgrky.png" alt="TLS handshake" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since the request is made over HTTPS, the connection must be secured before any actual data is transmitted.&lt;/p&gt;

&lt;p&gt;This is where the TLS handshake comes in.&lt;/p&gt;

&lt;p&gt;During this phase, the client and server negotiate how communication will be encrypted. The server presents its TLS certificate (still widely called an SSL certificate), which the client verifies against trusted certificate authorities to ensure it is talking to the right server.&lt;/p&gt;

&lt;p&gt;They agree on a cipher suite and prepare for encrypted communication.&lt;/p&gt;

&lt;p&gt;This process involves multiple back-and-forth exchanges between the client and server, and each round trip adds latency: a full TLS 1.2 handshake costs two round trips before any application data flows, while TLS 1.3 reduces this to one.&lt;/p&gt;

&lt;p&gt;This is also the point where a significant portion of the total request time is spent.&lt;/p&gt;
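&lt;p&gt;You can measure the two setup phases separately with Python's standard library. This sketch assumes outbound network access, and "example.com" is just a stand-in for your own API host:&lt;/p&gt;

```python
import socket
import ssl
import time

# "example.com" is a stand-in; substitute your own API host.
host = "example.com"

# Phase 1: TCP three-way handshake only.
start = time.perf_counter()
raw_sock = socket.create_connection((host, 443), timeout=10)
tcp_ms = (time.perf_counter() - start) * 1000

# Phase 2: TLS handshake (certificate verification, cipher
# negotiation, key exchange) on top of the open TCP connection.
ctx = ssl.create_default_context()
start = time.perf_counter()
tls_sock = ctx.wrap_socket(raw_sock, server_hostname=host)
tls_ms = (time.perf_counter() - start) * 1000

negotiated = tls_sock.version()  # e.g. "TLSv1.3"
print(f"TCP handshake: {tcp_ms:.1f} ms")
print(f"TLS handshake: {tls_ms:.1f} ms ({negotiated})")

tls_sock.close()
```

&lt;p&gt;On most connections the TLS phase takes at least as long as the TCP phase, and often longer, since it involves more round trips plus certificate verification.&lt;/p&gt;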




&lt;h2&gt;Step 3: Key exchange (establishing a secure channel)&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcurrbsv327corxc3z8ob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcurrbsv327corxc3z8ob.png" alt="Key exchange" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After agreeing on the encryption method, both sides need to establish a shared secret key.&lt;/p&gt;

&lt;p&gt;Both sides exchange key material (in modern TLS, typically via an ephemeral Diffie-Hellman exchange) and independently derive the same session key, which is then used to encrypt and decrypt all further communication.&lt;/p&gt;

&lt;p&gt;Once this step is complete, the connection becomes fully secure.&lt;/p&gt;

&lt;p&gt;Only now is the system ready to safely transmit actual request data.&lt;/p&gt;
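&lt;p&gt;The idea of two parties independently deriving the same secret is easy to demonstrate. Below is a toy finite-field Diffie-Hellman sketch with deliberately tiny numbers, purely for illustration; real TLS uses large vetted groups or elliptic curves through a cryptography library, never hand-rolled code like this:&lt;/p&gt;

```python
import secrets

# Toy parameters for illustration only; never use in practice.
p = 0xFFFFFFFB  # a small prime (2**32 - 5)
g = 5           # generator

a = secrets.randbelow(p - 2) + 2   # client's private value (never sent)
b = secrets.randbelow(p - 2) + 2   # server's private value (never sent)

A = pow(g, a, p)   # public value the client sends
B = pow(g, b, p)   # public value the server sends

# Each side combines its own private value with the other's
# public value and arrives at the same session key.
client_key = pow(B, a, p)
server_key = pow(A, b, p)

assert client_key == server_key
print("shared secret established:", hex(client_key))
```

&lt;p&gt;Neither private value ever crosses the network, yet both sides end up with the same key; an eavesdropper who sees only the public values cannot feasibly recover it at realistic parameter sizes.&lt;/p&gt;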




&lt;h2&gt;Step 4: The actual request and response&lt;/h2&gt;

&lt;p&gt;With the connection established and secured, the client finally sends the HTTP request.&lt;/p&gt;

&lt;p&gt;At this point, your application starts doing what you usually think about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Parsing the request&lt;/li&gt;
&lt;li&gt;Validating input&lt;/li&gt;
&lt;li&gt;Authenticating the user&lt;/li&gt;
&lt;li&gt;Executing business logic&lt;/li&gt;
&lt;li&gt;Preparing and returning the response&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In most well-optimized systems, this part is relatively fast.&lt;/p&gt;

&lt;p&gt;The response is sent back through the same secure channel, and the connection may be closed or reused depending on configuration.&lt;/p&gt;




&lt;h2&gt;Where the time actually goes&lt;/h2&gt;

&lt;p&gt;If you break down the total latency of a typical request, the distribution often looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A large portion of time is spent in TCP and TLS setup&lt;/li&gt;
&lt;li&gt;A smaller portion is spent in actual application logic&lt;/li&gt;
&lt;/ul&gt;
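&lt;p&gt;To make this concrete, here is a back-of-the-envelope calculation with made-up but plausible numbers: a 50 ms round-trip time, a TLS 1.2 handshake, and well-optimized application code.&lt;/p&gt;

```python
# Hypothetical but typical numbers for a cross-region request.
rtt_ms = 50

tcp_handshake = 1 * rtt_ms   # SYN / SYN-ACK / ACK: one full RTT before data
tls_handshake = 2 * rtt_ms   # full TLS 1.2 handshake: two RTTs (TLS 1.3: one)
app_logic     = 50           # well-optimized business logic

setup = tcp_handshake + tls_handshake
total = setup + app_logic

print(f"Connection setup:  {setup} ms")
print(f"Application logic: {app_logic} ms")
print(f"Total: {total} ms ({setup / total:.0%} spent before your code runs)")
```

&lt;p&gt;With numbers like these, three quarters of the response time is gone before your first line of code runs, which is exactly why shaving your business logic from 50 ms to 25 ms barely moves the total.&lt;/p&gt;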

&lt;p&gt;This is why you can optimize your code significantly and still see little change in overall response time.&lt;/p&gt;

&lt;p&gt;You are improving the part that is already fast, while the majority of the delay happens elsewhere.&lt;/p&gt;




&lt;h2&gt;What about connection reuse?&lt;/h2&gt;

&lt;p&gt;Modern systems use techniques like HTTP keep-alive to reuse connections and reduce overhead.&lt;/p&gt;

&lt;p&gt;This does help. If a connection remains open, subsequent requests can skip the TCP and TLS setup.&lt;/p&gt;

&lt;p&gt;However, in real-world environments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connections are closed after periods of inactivity&lt;/li&gt;
&lt;li&gt;Load balancers may terminate idle connections&lt;/li&gt;
&lt;li&gt;Not every request benefits from reuse&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because of this, the overhead does not disappear entirely. It shows up frequently enough to impact performance.&lt;/p&gt;
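&lt;p&gt;You can observe connection reuse directly with Python's standard library. This is a self-contained sketch against a throwaway local HTTP/1.1 server; the point is that http.client keeps the underlying socket open between requests:&lt;/p&gt;

```python
import http.client
import http.server
import threading
import time

# Throwaway local server so the demo is self-contained; real
# measurements would target your actual API host.
class KeepAliveHandler(http.server.SimpleHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # HTTP/1.1 keeps connections open by default
    def log_message(self, *args):   # keep the demo output quiet
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), KeepAliveHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])

statuses, sock_ids = [], []
for i in (1, 2):
    start = time.perf_counter()
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()                     # drain the body so the connection can be reused
    elapsed_ms = (time.perf_counter() - start) * 1000
    statuses.append(resp.status)
    sock_ids.append(id(conn.sock))  # same socket object means TCP setup was skipped
    print(f"request {i}: {resp.status} in {elapsed_ms:.2f} ms")

conn.close()
server.shutdown()
```

&lt;p&gt;Against a real HTTPS endpoint the effect is much larger, because the reused connection skips the TLS handshake as well as the TCP one. The same idea is what a persistent requests.Session, or any connection pool, gives you.&lt;/p&gt;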




&lt;h2&gt;A necessary trade-off&lt;/h2&gt;

&lt;p&gt;It’s important to recognize that this overhead exists for a reason.&lt;/p&gt;

&lt;p&gt;HTTPS is designed for secure communication over an untrusted network. It ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data encryption&lt;/li&gt;
&lt;li&gt;Server authenticity&lt;/li&gt;
&lt;li&gt;Protection against interception&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These guarantees come at a cost. The additional latency is the price paid for security and trust.&lt;/p&gt;




&lt;h2&gt;The real takeaway&lt;/h2&gt;

&lt;p&gt;When you look at API performance, it’s easy to focus only on application code. That’s the part you control directly, and the part you interact with every day.&lt;/p&gt;

&lt;p&gt;But the full request lifecycle starts much earlier.&lt;/p&gt;

&lt;p&gt;Before your application processes anything, the system has already spent time:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Establishing a connection&lt;/li&gt;
&lt;li&gt;Negotiating security&lt;/li&gt;
&lt;li&gt;Setting up encryption&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you ignore this part of the system, you are only seeing a fraction of the picture.&lt;/p&gt;

&lt;p&gt;Understanding this changes how you approach performance optimization. It shifts the focus from just writing faster code to understanding the entire path a request takes.&lt;/p&gt;




&lt;h2&gt;Final thought&lt;/h2&gt;

&lt;p&gt;Performance is not just about how fast your code runs.&lt;/p&gt;

&lt;p&gt;It is about how efficiently the entire system works, from the moment a request is initiated to the moment a response is delivered.&lt;/p&gt;

&lt;p&gt;Once you start thinking in terms of the full lifecycle, you begin to see where the real bottlenecks are.&lt;/p&gt;

</description>
      <category>api</category>
      <category>webdev</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
