Lalit Vavdara

Originally published at python.plainenglish.io

How I Decreased API Response Time by 89.30% in Python

Make your APIs faster

API response time is a key factor to watch if you want to build fast, scalable applications.

While working on a client’s project, I had a task that required integrating a third-party API. This meant the client’s API response time would now also depend on that third-party API.

Everything works fine if we only need to make a single query to the API and return its result. Things start getting worse when we need to make multiple queries and return the results for each one.

To demonstrate how to make multiple requests efficiently, we will use the simple Datamuse rhyming-word API.
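To get a feel for the data we will be parsing, here is what a single query and its response look like (the word/score/numSyllables fields are what Datamuse returns; the exact values in the comment are illustrative, as actual scores vary):

import requests

# One query: words that rhyme with "hello"
response = requests.get("https://api.datamuse.com/words",
                        params={"rel_rhy": "hello", "max": 3})
print(response.json())
# e.g. [{"word": "fellow", "score": ..., "numSyllables": 2}, ...]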

Now suppose we want to find rhyming words for every word in ["hello", "mellow", "cat", "rat", "dog", "frog", "mouse", "sparrow", "man", "women"]. One might implement it like this:

import requests, time

words = ["hello", "mellow", "cat", "rat", "dog", "frog", "mouse", "sparrow", "man", "women"]

def make_req_synchronously(words_arr):
    final_res = []
    for word in words_arr:
        url = f"https://api.datamuse.com/words?rel_rhy={word}&max=100"
        response = requests.get(url)
        json_response = response.json()
        for item in json_response:
            rhyming_word = item.get("word", "")
            final_res.append({"word": word, "rhyming_word": rhyming_word})
    return final_res

without_async_start_time = time.time()
response = make_req_synchronously(words)
time_without_async = time.time() - without_async_start_time

print("total time for with synchronous execution >> ", time_without_async, " seconds")

You will see that this approach takes approximately 9.479 seconds to find rhyming words for every word in the list. Can you imagine opening a site that takes 10 seconds to load, just because someone wrote this horrible code in the backend?
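To see where the time goes, you can time each blocking call individually. Here is a quick diagnostic sketch (the per-word latency you see will depend on your network):

import time
import requests

words = ["hello", "mellow", "cat"]  # a short subset is enough to see the pattern

for word in words:
    start = time.perf_counter()
    requests.get(f"https://api.datamuse.com/words?rel_rhy={word}&max=100")
    elapsed = time.perf_counter() - start
    # Each call blocks until its response arrives, so the total run time
    # is roughly the sum of the individual latencies
    print(f"{word}: {elapsed:.2f} s")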

To better understand this, look at the following graph of how requests are made in the synchronous approach.

Synchronous Requests vs. Time Graph

As you can see, we make the first request (for the word "hello"), wait for it to return its result, append that to the list we want to return, and then repeat the same steps for every other word. This is an extremely time-consuming way of implementing it.

So, how can we improve this?

Well, that’s where async/await comes to the rescue. Let’s see how we can make multiple requests concurrently with the following Python code.

import asyncio
import aiohttp  # external library
import time

words = ["hello", "mellow", "cat", "rat", "dog", "frog", "mouse", "sparrow", "man", "women"]

def merge_lists(results_from_fc):
    # Flatten the per-word result lists into one combined list
    combined_list = []
    for li in results_from_fc:
        combined_list.extend(li)
    return combined_list


async def main():
    headers = {'content-type': 'application/json'}
    async with aiohttp.ClientSession(headers=headers) as session:
        tasks = []  # for storing all the tasks we will create in the next step
        for word in words:
            task = asyncio.ensure_future(get_rhyming_words(session, word))  # means get this process started and move on
            tasks.append(task)

        # .gather() will collect the result from every single task from tasks list
        # here we use await to wait till all the requests have been satisfied
        all_results = await asyncio.gather(*tasks)
        combined_list = merge_lists(all_results)
        return combined_list


async def get_rhyming_words(session, word):
    # max=100 matches the synchronous version, so the comparison stays fair
    url = f"https://api.datamuse.com/words?rel_rhy={word}&max=100"
    async with session.get(url) as response:
        result_data = await response.json()
        # Return the same shape as the synchronous version
        return [{"word": word, "rhyming_word": item.get("word", "")} for item in result_data]


async_func_start_time = time.time()
response2 = asyncio.run(main())  # main() takes no arguments; asyncio.run() drives the event loop for us
time_with_async = time.time() - async_func_start_time

print("\nTotal time with async/await execution >> ", time_with_async, " seconds")

print("\nTotal time with async/await execution >> ", time_with_async, " seconds")
Enter fullscreen mode Exit fullscreen mode

Here we create a Task for each request; a Task wraps a coroutine in Python. In this implementation we start a task but don’t wait for it to complete; we simply move on and start the next one.

Finally, after creating all the tasks, we wait until we have a result from each one, merge all the results using the merge_lists function, and return a combined list containing everything.
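If tasks and coroutines are new to you, here is a minimal, self-contained illustration with asyncio.sleep standing in for network latency; it shows why gather() finishes in roughly the time of the slowest task rather than the sum of all of them:

import asyncio
import time

async def fake_request(word, delay):
    await asyncio.sleep(delay)  # stands in for network latency
    return f"rhymes for {word}"

async def demo():
    start = time.perf_counter()
    results = await asyncio.gather(
        fake_request("hello", 1.0),
        fake_request("cat", 1.0),
        fake_request("dog", 1.0),
    )
    print(results)
    print(f"elapsed: {time.perf_counter() - start:.2f} s")  # ~1 s, not ~3 s

asyncio.run(demo())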

The time it takes to find rhyming words for all the words in the list is now approximately 1 second, which is far better than the previous approach: roughly (9.479 - 1) / 9.479 ≈ 89%, which is where the figure in the title comes from (the exact 89.30% depends on the measured async run time).

The requests vs. time graph now looks like this:

Asynchronous Requests vs. Time Graph

As you can see in the graph, by the time we initiate the final request, some of the earlier requests have already completed.
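One practical caveat the timing runs above do not show: by default, asyncio.gather() propagates the first exception raised by any task, so a single failed request (a timeout, a dropped connection) can abort the whole batch. Here is a minimal sketch of the return_exceptions=True pattern, which collects failures as values instead; get_words is a hypothetical stand-in for a fetcher like get_rhyming_words:

import asyncio
import aiohttp

words = ["hello", "mellow", "cat"]

async def get_words(session, word):
    url = f"https://api.datamuse.com/words?rel_rhy={word}&max=10"
    async with session.get(url) as response:
        response.raise_for_status()  # turn HTTP error statuses into exceptions
        return await response.json()

async def main():
    async with aiohttp.ClientSession() as session:
        tasks = [get_words(session, w) for w in words]
        # return_exceptions=True: failures come back as exception objects
        # instead of aborting the whole gather() call
        results = await asyncio.gather(*tasks, return_exceptions=True)
        for word, result in zip(words, results):
            if isinstance(result, Exception):
                print(f"{word!r} failed: {result}")
            else:
                print(f"{word!r}: {len(result)} rhymes found")

asyncio.run(main())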

Here is the complete Python code so you can compare both implementations.

import requests, time

words = ["hello", "mellow", "cat", "rat", "dog", "frog", "mouse", "sparrow", "man", "women"]

def make_req_synchronously(words_arr):
    final_res = []
    for word in words_arr:
        url = f"https://api.datamuse.com/words?rel_rhy={word}&max=100"
        response = requests.get(url)
        json_response = response.json()
        for item in json_response:
            rhyming_word = item.get("word", "")
            final_res.append({"word": word, "rhyming_word": rhyming_word})
    return final_res

without_async_start_time = time.time()
response = make_req_synchronously(words)
time_without_async = time.time() - without_async_start_time

print("Total time with synchronous execution >> ", time_without_async, " seconds")

import asyncio
import aiohttp  # external library


def merge_lists(results_from_fc):
    """Merge multiple lists into one combined list."""
    combined_list = []
    for li in results_from_fc:
        combined_list.extend(li)
    return combined_list


async def main():
    headers = {'content-type': 'application/json'}
    async with aiohttp.ClientSession(headers=headers) as session:
        tasks = []  # for storing all the tasks we will create in the next step
        for word in words:
            task = asyncio.ensure_future(get_rhyming_words(session, word))  # start this request and move on
            tasks.append(task)

        # .gather() collects the result from every single task in the tasks list;
        # await here waits until all the requests have been satisfied
        all_results = await asyncio.gather(*tasks)
        combined_list = merge_lists(all_results)
        return combined_list


async def get_rhyming_words(session, word):
    # max=100 matches the synchronous version, so the comparison stays fair
    url = f"https://api.datamuse.com/words?rel_rhy={word}&max=100"
    async with session.get(url) as response:
        result_data = await response.json()
        # Return the same shape as the synchronous version
        return [{"word": word, "rhyming_word": item.get("word", "")} for item in result_data]


async_func_start_time = time.time()
response2 = asyncio.run(main())  # asyncio.run() drives the event loop for us
time_with_async = time.time() - async_func_start_time

print("\nTotal time with async/await execution >> ", time_with_async, " seconds")

total_improvement = (time_without_async - time_with_async) / time_without_async * 100
print(f"\n{'*' * 100}\n{' ' * 32}Improved by {total_improvement:.2f} %\n{'*' * 100}")
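One last note before you copy this into production: Datamuse is forgiving, but many third-party APIs rate-limit their clients, and firing every request at once can get you throttled. Here is a minimal sketch of bounding concurrency with asyncio.Semaphore; the limit of 5 is an arbitrary illustration, so tune it to the API you are calling:

import asyncio
import aiohttp

words = ["hello", "mellow", "cat", "rat", "dog", "frog", "mouse", "sparrow", "man", "women"]

async def get_rhyming_words(session, word, semaphore):
    async with semaphore:  # caps how many requests are in flight at once
        url = f"https://api.datamuse.com/words?rel_rhy={word}&max=100"
        async with session.get(url) as response:
            return await response.json()

async def main():
    semaphore = asyncio.Semaphore(5)
    async with aiohttp.ClientSession() as session:
        tasks = [get_rhyming_words(session, w, semaphore) for w in words]
        return await asyncio.gather(*tasks)

results = asyncio.run(main())
print(f"fetched rhymes for {len(results)} words")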

Thank you for taking the time to read my blog.

