Justin Ho

How To Use The Map Object In Javascript For Filtering API Results 🗺️

The map data structure and its sibling keyed collections are handy tools to have in your hypothetical pocket for Javascript development. Maps are key-value stores that let us access data by a unique key without iterating through every element (this is a key difference, hah), which makes them faster than an array for search and retrieval in certain scenarios.

For those of you who have only ever used indexed collections such as arrays or lists, you may be wondering why you would use this over a Javascript object.

The MDN documentation has a detailed chart comparing maps and objects, so take a look there for more in-depth information.

In summary, maps can offer more flexibility and performance thanks to their ability to use types other than strings and symbols as keys, as well as their optimizations for frequent additions and deletions.
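As a rough sketch of that flexibility (the variable names here are invented for illustration):

```javascript
// Maps allow keys of any type, expose a size property,
// and guarantee iteration in insertion order.
const stats = new Map();

stats.set(42, "a number key");          // number key, not coerced to "42"
stats.set({ id: 1 }, "an object key");  // objects can be keys too
stats.set("id", "a plain string key");

console.log(stats.size);    // 3
console.log(stats.get(42)); // "a number key"

// Frequent additions and deletions are what maps are optimized for.
stats.delete(42);
console.log(stats.size);    // 2
```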

Let's take a look at a real scenario where maps may come in handy.

A Realistic Scenario Consuming APIs

So I know I just bashed native objects, but the reality is that many APIs produce results in JSON, which is then transformed into, that's right, objects.

Now let's say we just retrieved multiple arrays of objects from an API endpoint, perhaps it's the list of users who commented on each of a certain blog's articles.

const userResults = [
    {
        "id": 1,
        "slug": "how-to-cook-chicken",
        "comments": [
            {
                "userid": 1,
                "name": "sally",
                // other fields
            },
            // other users
        ],
        // other fields
    },
    // other articles
];

We now have the data we want, but there may be duplicates. Some users may have commented on multiple articles, but we really have no guarantee.

Moving on, we may want to obtain some information about each of these users we have just retrieved and we will call another API endpoint with the user's id.

There are several ways to tackle this issue. The least effort is to just ignore it, bombard the API with duplicate requests, and rely on some form of cache, either on the endpoint or on our server, to minimize load times. (Please do not do this.)

Let's look at some real solutions. As is, we can merge our results and then sort the array of objects to weed out duplicates. There are numerous good solutions for removing duplicate objects in arrays, each with its own advantages and drawbacks. (Sets will not filter out duplicate objects, since two objects with identical fields are still distinct references.)
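To see why a Set falls short here, consider this minimal sketch with made-up comment objects:

```javascript
// Two literals with identical fields are still distinct references.
const comments = [
    { userid: 1, name: "sally" },
    { userid: 1, name: "sally" }, // same user, duplicate object
];

const attempted = new Set(comments);
console.log(attempted.size); // 2 — the Set kept both "duplicates"

// Sets only deduplicate primitives (and identical references):
const ids = new Set(comments.map(user => user.userid));
console.log(ids.size); // 1
```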

However, I'm here to offer an alternative using maps.

Most data are structured in some way, especially from an API. Rarely will you get results without some sort of a unique identifier or composite key. In this case, we have the user's id field. This is how we can use this structure to our advantage.

The Solution with Maps

As I mentioned at the start of the article, maps can only contain unique keys.

Our results have unique keys.

Putting this together, we can filter out duplicate results by inserting our results into a map using the id (or any unique key) as the key field and the entire object as the value.

Note that on a duplicate key the value is overwritten rather than skipped, so this assumes the duplicate values are exact copies.

/* Map<id, user> */
const uniqueUsers = new Map();

/**
 * Store users into a map object keyed by the user's id.
 * @param {Map} map - map to store in
 * @param {Array} users - user objects
 */
const filterUsersWithMap = (map, users) =>
    users.forEach(user => map.set(user.userid, user));

userResults.forEach(results =>
    filterUsersWithMap(uniqueUsers, results["comments"]));

Please note that while we have a guaranteed time complexity of O(n) (with regard to the users) for filtering our duplicates, we are also keeping a second copy of every unique user in memory, so our space usage grows by O(n). Watch out if you have memory constraints.
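Here is a self-contained sketch of the whole dedupe step with made-up sample data, mirroring the snippet above:

```javascript
// Sample API results: two articles, with one user commenting on both.
const userResults = [
    { id: 1, slug: "how-to-cook-chicken",
      comments: [{ userid: 1, name: "sally" }, { userid: 2, name: "bob" }] },
    { id: 2, slug: "how-to-roast-veggies",
      comments: [{ userid: 1, name: "sally" }] }, // sally commented twice
];

const uniqueUsers = new Map();
const filterUsersWithMap = (map, users) =>
    users.forEach(user => map.set(user.userid, user));

userResults.forEach(results =>
    filterUsersWithMap(uniqueUsers, results.comments));

console.log(uniqueUsers.size); // 2 — sally's duplicate was overwritten, not appended
```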

The Advantage of Using Map

We get several benefits by using a map here, the first being that it is an iterable, meaning we can still loop over it to fetch data from another API endpoint like so:

// values() yields the user objects we stored earlier,
// in the order they were inserted
[...uniqueUsers.values()].forEach(user => fetch(`/api/users/${user.userid}`));

Secondly, we have basically created an in-memory cache for fast access to users by their id! If we don't need a database, we can just append additional user data to the object value and always check for it before our API calls.

/**
 * Retrieve user data from API or cache.
 * @param {number} userId - user to query for
 * @param {string} detail - object field in user data
 * @param {Map} allUsers - in-memory store of users
 * @returns requested detail from the user object
 */
const getUserDetailById = (userId, detail, allUsers = new Map()) => {
    const cachedUser = allUsers.get(userId);
    if (cachedUser !== undefined && cachedUser.hasOwnProperty(detail)) {
        return cachedUser[detail];
    }
    // =================================
    // send API request for user details...
    // add data to a new user object...
    // =================================
    allUsers.set(userId, user); // the value at this key will be overwritten
    return user[detail];
};


I hope I was able to help someone out with their deduplication problem, because I know that I personally reach for hash maps (the technical term for the underlying map data structure) way too much as a space-time tradeoff.

I wanted to write about a real-world application of hash maps as there are plenty of articles explaining how hash maps work under the hood.

Finally, you can keep up with my coding tidbits on Twitter @justinhodev!



This technique is also scalable by moving the in-memory datastore onto an external server. You may have heard of Redis, a dedicated in-memory data store that can serve as a replacement for (more accurately, an extension of, since Redis offers far more features) our technique here if we want multiple Node servers to retrieve users concurrently.


A possible optimization when calling multiple API endpoints asynchronously is to run filterUsersWithMap as each request resolves, inserting our user data at the end of each API call instead of waiting for all promises to resolve first. (Since Javascript is single-threaded, each promise callback runs to completion, so map reads and writes from different callbacks cannot interleave mid-operation. Once we move to multiple processes or servers, Redis and other solutions do provide locking mechanisms.)
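That optimization can be sketched as follows; fetchArticleComments is a hypothetical stand-in for a real endpoint call, and the article ids and user names are made up:

```javascript
const uniqueUsers = new Map();

// Mocked API call: every article's comments include user-1 as a duplicate.
const fetchArticleComments = id =>
    Promise.resolve([
        { userid: id, name: `user-${id}` },
        { userid: 1, name: "user-1" },
    ]);

// Insert into the shared map as soon as each request resolves,
// rather than collecting all results and deduplicating at the end.
const requests = [1, 2, 3].map(articleId =>
    fetchArticleComments(articleId).then(comments =>
        comments.forEach(user => uniqueUsers.set(user.userid, user))));

Promise.all(requests).then(() => {
    console.log(uniqueUsers.size); // 3 — user-1's duplicates were merged on arrival
});
```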


Under the hood, the Javascript map is an example of the abstract data type hash map, which uses a hashing function on the key to determine where (in memory) to store our value objects. With this hashing function, we can on average achieve O(1) time complexity for search, insert, and delete operations, which has helped me tons during coding challenges.


Cover Photo by Марьян Блан | @marjanblan on Unsplash
