
Matt

Asynchronous HTTP requests in Python

In Python, you can make HTTP requests to an API with the third-party requests module
or with the standard library's urllib.request module.

However, requests and urllib.request are synchronous: only one HTTP call can be in flight at a time in a single thread. When you have to make many HTTP calls, synchronous code performs badly. To avoid this, you can use multi-threading (sketched briefly below) or, since Python 3.4, the asyncio module.
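To illustrate the multi-threading option, here is a minimal sketch (my illustration, not the threading version the post links to at the end; fetch_one, fetch_all_threaded, and max_workers=20 are assumed names and values):

from concurrent.futures import ThreadPoolExecutor

import requests

def fetch_one(url):
    # One blocking HTTP call; each call runs in its own worker thread.
    return requests.get(url).json()

def fetch_all_threaded(urls, max_workers=20):
    # map() submits one blocking call per URL to the pool and yields
    # results in order; list() waits for all of them to finish.
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(fetch_one, urls))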

Test case

In order to show the time difference between sync and async code, I made a script that reads a file of 500 city names and makes one HTTP call per city to an API to retrieve information about its location, population, and so on.

Sync code performance

Here is the synchronous version, using the requests module:

import requests

@timeit
def fetch_all(cities):
    responses = []
    # A single Session reuses the underlying TCP connection between calls.
    with requests.Session() as session:
        for city in cities:
            resp = session.get(f"https://geo.api.gouv.fr/communes?nom={city}&fields=nom,region&format=json&geometry=centr")
            responses.append(resp.json())
    return responses

Finished 'fetch_all' in 38.7053 secs
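The @timeit decorator used above is not shown in the post; a minimal sketch that produces this timing output could look like this:

import functools
import time

def timeit(func):
    """Print how long the decorated function took to run."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"Finished {func.__name__!r} in {elapsed:.4f} secs")
        return result
    return wrapper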

Async code performance

I used the aiohttp module for the async version, as the requests module does not support asyncio for now.

import asyncio

import aiohttp

async def fetch(session, url):
    """Execute an HTTP call asynchronously.
    Args:
        session: context for making the HTTP call
        url: URL to call
    Return:
        responses: a dict-like object containing the HTTP response
    """
    async with session.get(url) as response:
        resp = await response.json()
        return resp

async def fetch_all(cities):
    """Gather many HTTP calls made asynchronously.
    Args:
        cities: a list of city names (strings)
    Return:
        responses: a list of dict-like objects containing the HTTP responses
    """
    async with aiohttp.ClientSession() as session:
        tasks = []
        for city in cities:
            tasks.append(
                fetch(
                    session,
                    f"https://geo.api.gouv.fr/communes?nom={city}&fields=nom,region&format=json&geometry=centr",
                )
            )
        # return_exceptions=True keeps one failed call from cancelling the
        # rest; failures come back as exception objects in the results list.
        responses = await asyncio.gather(*tasks, return_exceptions=True)
        return responses

@timeit
def run(cities):
    responses = asyncio.run(fetch_all(cities))
    return responses


Finished 'run' in 3.0706 secs
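One caveat: this fires all 500 requests at once, which some APIs will throttle or reject. A common refinement (my addition, not from the post) caps concurrency with asyncio.Semaphore; the limit of 50 here is an arbitrary example:

import asyncio
import aiohttp

async def fetch_limited(semaphore, session, url):
    # The semaphore caps how many requests are in flight at once;
    # extra tasks wait here until a slot frees up.
    async with semaphore:
        async with session.get(url) as response:
            return await response.json()

async def fetch_all_limited(urls, limit=50):
    semaphore = asyncio.Semaphore(limit)
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_limited(semaphore, session, url) for url in urls]
        return await asyncio.gather(*tasks, return_exceptions=True)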

Conclusion

As you can see, the async version is much faster than the sync version: about 3 seconds instead of almost 39. So if your code makes multiple I/O calls, you should consider concurrency to improve performance. Keep in mind, though, that the asynchronous version requires more work.

If you want to see a threading version that works with the requests module, and also learn how to implement automatic retries and caching for your API calls, check out this tutorial.

Top comments (2)

Martin Breuss

Wondering the same as Alexander. Is there a feasible way to do this without any external libraries (e.g. using urllib.request for the HTTP calls, but making it async with asyncio)?

Most of what I can find online uses aiohttp or httpx, which are both external libraries.

Alexander Whillas

This uses aiohttp, which is not a native Python 3 module. Is it possible to do this with only native Python 3?
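For reference, a standard-library-only version is possible by running blocking urllib.request calls on worker threads with asyncio.to_thread (Python 3.9+). This is a sketch of that idea, not from the post or the commenters; it still uses threads under the hood rather than non-blocking sockets, because the standard library has no async HTTP client:

import asyncio
import json
import urllib.request

def fetch_blocking(url):
    # Plain stdlib HTTP call; blocks the thread it runs on.
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode())

async def fetch_all(urls):
    # asyncio.to_thread (Python 3.9+) runs each blocking call in a
    # worker thread, so the calls overlap without external libraries.
    tasks = [asyncio.to_thread(fetch_blocking, url) for url in urls]
    return await asyncio.gather(*tasks, return_exceptions=True)

# Usage: responses = asyncio.run(fetch_all(urls))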