<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yue Su</title>
    <description>The latest articles on DEV Community by Yue Su (@yuesu).</description>
    <link>https://dev.to/yuesu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F516463%2Fd133d977-a0f7-4b8a-ae71-47b7c04d49ad.jpeg</url>
      <title>DEV Community: Yue Su</title>
      <link>https://dev.to/yuesu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yuesu"/>
    <language>en</language>
    <item>
      <title>Setting Up vGPU in Docker Container (with Specific Python Version)</title>
      <dc:creator>Yue Su</dc:creator>
      <pubDate>Fri, 28 Jul 2023 18:42:17 +0000</pubDate>
      <link>https://dev.to/yuesu/setting-up-vgpu-in-docker-container-with-specific-python-version-5ei6</link>
      <guid>https://dev.to/yuesu/setting-up-vgpu-in-docker-container-with-specific-python-version-5ei6</guid>
<description>&lt;p&gt;Our team has recently been building a backend service using the Django framework. Alongside it, we've also developed an in-house machine learning command-line utility. To run this tool efficiently, we use an instance with an NVIDIA GPU enabled. The challenge lies in setting up an environment where both Django and the utility are containerized with Docker, with the ability to harness the GPU inside the container. In addition, the tool requires a specific Python version (3.9 in our case) to run.&lt;/p&gt;

&lt;p&gt;This task was not as straightforward as we initially imagined. We could not just pull an official Python 3.9 image, install the CUDA driver, and expect it to run smoothly; after a few attempts, we found this to be a considerable challenge. We then took a different approach, opting to use an official nvidia/cuda image and install the Python version we needed. After several tweaks, it finally worked.&lt;/p&gt;

&lt;p&gt;Here are the main steps involved:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Ensure that the GPU driver is installed properly on the instance.&lt;/strong&gt; This is relatively easy; we followed this &lt;a href="https://docs.alliancecan.ca/wiki/Using_cloud_vGPUs#Preparation_of_a_VM_running_Debian11"&gt;guide&lt;/a&gt;. To check that everything is working as expected, use the &lt;code&gt;nvidia-smi&lt;/code&gt; command-line tool. If all is well, you will see the CUDA version and other related information. For additional guidance, refer to the official &lt;a href="https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html"&gt;CUDA installation guide&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.199.02   Driver Version: 470.199.02   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GRID V100D-8C       On   | 00000000:00:05.0 Off |                    0 |
| N/A   N/A    P0    N/A /  N/A |    560MiB /  8192MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use the base Dockerfile to build the backend.&lt;/strong&gt; It's crucial that the chosen image matches the CUDA driver version used in your environment; if they do not match, the container might throw errors. Also note that Ubuntu 20.04 ships with Python 3.8 by default, so if you do not set up the symbolic links correctly, packages will be installed for Python 3.8 instead, resulting in errors. We used the official &lt;a href="https://hub.docker.com/r/nvidia/cuda"&gt;NVIDIA CUDA Docker images&lt;/a&gt; as a base.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Use an official NVIDIA CUDA runtime as a parent image&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; nvidia/cuda:11.4.3-cudnn8-devel-ubuntu20.04&lt;/span&gt;

&lt;span class="c"&gt;# Set environment variables&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PYTHONUNBUFFERED 1&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; DEBIAN_FRONTEND=noninteractive&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; TZ=America/Toronto&lt;/span&gt;

&lt;span class="c"&gt;# Install Python and pip&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    python3.9 &lt;span class="se"&gt;\
&lt;/span&gt;    python3-pip &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/apt/lists/&lt;span class="k"&gt;*&lt;/span&gt;

&lt;span class="c"&gt;# Create symbolic links for Python and pip&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-sf&lt;/span&gt; /usr/bin/python3.9 /usr/bin/python &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-sf&lt;/span&gt; /usr/bin/python3.9 /usr/bin/python3 &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-sf&lt;/span&gt; /usr/bin/pip3 /usr/bin/pip

&lt;span class="c"&gt;# Set the working directory to /backend&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /backend&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; requirements.txt requirements.txt&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--upgrade&lt;/span&gt; pip
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use Docker Compose to manage our services.&lt;/strong&gt; Here is a base YAML file you can use if you're running Django as the backend. Make sure to configure the &lt;code&gt;runtime&lt;/code&gt; and &lt;code&gt;environment&lt;/code&gt; settings correctly for the container to properly utilize the GPU.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3'&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfile&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;8000:8000&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
    &lt;span class="na"&gt;runtime&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nvidia&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;NVIDIA_VISIBLE_DEVICES=all&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unless-stopped&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./:/backend&lt;/span&gt;
    &lt;span class="na"&gt;env_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.env&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="s"&gt;bash -c "python manage.py makemigrations&lt;/span&gt;
    &lt;span class="s"&gt;&amp;amp;&amp;amp; python manage.py migrate&lt;/span&gt;
    &lt;span class="s"&gt;&amp;amp;&amp;amp; python manage.py runserver 0.0.0.0:8000"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test your configuration.&lt;/strong&gt; If all the previous steps were successful, you should be able to run the &lt;code&gt;nvidia-smi&lt;/code&gt; command within the container and see the same output as on the host.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>A simple database modelling for a web-based messenger with Sequelize and postgresDB</title>
      <dc:creator>Yue Su</dc:creator>
      <pubDate>Mon, 28 Dec 2020 19:43:40 +0000</pubDate>
      <link>https://dev.to/yuesu/a-simple-database-modelling-for-a-web-based-messenger-with-sequelize-and-postgresdb-ch8</link>
      <guid>https://dev.to/yuesu/a-simple-database-modelling-for-a-web-based-messenger-with-sequelize-and-postgresdb-ch8</guid>
      <description>&lt;h2&gt;
  
  
  Goal
&lt;/h2&gt;

&lt;p&gt;This is a simplified database modelling example, which aims to provide a basic structure of the database and demonstrate data persistence.&lt;/p&gt;

&lt;p&gt;It will be used for an application designed to be a messenger clone, featuring real-time and offline messaging. All messages and conversations are stored in the database so that a registered user can retrieve them after logging in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tables and associations
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SZuzFTJ6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.dropbox.com/s/ds4w8rbkrxamqof/database.png%3Fraw%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SZuzFTJ6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.dropbox.com/s/ds4w8rbkrxamqof/database.png%3Fraw%3D1" alt="database" width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A User table for storing the username, email and hashed password.&lt;/li&gt;
&lt;li&gt;A Conversation table and a UserToConversation table for storing conversations and the many-to-many relationship between users and conversations.&lt;/li&gt;
&lt;li&gt;A Message table for storing messages, including the sender's id, the conversation id and the content.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A user can create many conversations, and a conversation can have many users; the UserToConversation table is used to store this mapping info.&lt;/p&gt;

&lt;p&gt;For example, when user_1 wants to have a group chat with user_2 and user_3, a Conversation record is created first, and three UserToConversation records are created subsequently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IyvxsMRG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.dropbox.com/s/8y6ms5uwmq1fk82/dbrelation.png%3Fraw%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IyvxsMRG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.dropbox.com/s/8y6ms5uwmq1fk82/dbrelation.png%3Fraw%3D1" alt="relationship" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Here, the "users" column in the Conversation table is a string for recording all the users' IDs in a conversation. It could be used for eliminating duplicated conversations.&lt;/p&gt;
&lt;/blockquote&gt;
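&lt;p&gt;One way to use that column (a sketch with a hypothetical helper name) is to build a canonical key from the participant IDs, so the same set of users always maps to the same string no matter who starts the chat:&lt;/p&gt;

```javascript
// Sketch: canonical "users" key for a conversation.
// Sorting numerically makes [3, 1, 2] and [2, 3, 1] produce the same key,
// which the unique "users" column can then reject as a duplicate.
function usersKey(userIds) {
  return [...userIds].sort((a, b) => a - b).join(",");
}
```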

&lt;h2&gt;
  
  
  Connecting Postgres with Sequelize
&lt;/h2&gt;

&lt;p&gt;I used to use &lt;a href="http://knexjs.org/"&gt;Knex&lt;/a&gt; for this type of work, but once I learned &lt;a href="https://sequelize.org/master/"&gt;Sequelize&lt;/a&gt;, I forgot about Knex right away, along with the trauma of setting up the Knex environment.&lt;/p&gt;

&lt;p&gt;File structure&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── models
│   ├── index.js
│   ├── addAssociations.js
│   ├── syncModels.js
│   ├── user.model.js
│   └── conversation.model.js
    .
    .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, to initialize the Sequelize instance, we can set up a 'models' folder with an index.js file like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { Sequelize } = require("sequelize");
const { addAssociations } = require("./addAssociations");
const { syncModels } = require("./syncModels");

const sequelize = new Sequelize(
  process.env.DB_NAME,
  process.env.DB_USERNAME,
  process.env.DB_PASSWORD,

  {
    host: process.env.DB_HOST,
    dialect: "postgres",
    operatorsAliases: false,

    pool: {
      max: 5,
      min: 0,
      acquire: 30000,
      idle: 10000,
    },
  }
);

const modelDefiners = [
  require("./user.model"),
  require("./conversation.model"),
  require("./message.model"),
  require("./userToConversation.model"),
];

for (const modelDefiner of modelDefiners) {
  modelDefiner(sequelize);
}

addAssociations(sequelize);
syncModels(sequelize);

//test the database connection
sequelize
  .authenticate()
  .then(() =&amp;gt; console.log("Postgres Connected!"))
  .catch((err) =&amp;gt; console.error(err));

module.exports = sequelize;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the connection is up, we can add tables and associations. Sequelize will take care of setting up the foreign keys.&lt;/p&gt;

&lt;p&gt;For users in 'user.model.js':&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { DataTypes } = require("sequelize");

module.exports = (sequelize) =&amp;gt; {
  sequelize.define("user", {
    username: {
      type: DataTypes.STRING,
      allowNull: false,
    },
    email: {
      type: DataTypes.STRING,
      allowNull: false,
      validate: { isEmail: true },
    },
    password: {
      type: DataTypes.STRING,
      allowNull: false,
    },
    photoURL: { type: DataTypes.STRING, allowNull: true },
  });
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For conversations in 'conversation.model.js':&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { DataTypes } = require("sequelize");

module.exports = (sequelize) =&amp;gt; {
  sequelize.define("conversation", {
    users: {
      type: DataTypes.STRING,
      unique: true,
    },
  });
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For UserToConversation in 'userToConversation.model.js':&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports = (sequelize) =&amp;gt; {
  sequelize.define("userToConversation", {});
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Messages in 'message.model.js':&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { DataTypes } = require("sequelize");

module.exports = (sequelize) =&amp;gt; {
  sequelize.define("message", {
    content: {
      type: DataTypes.STRING,
      allowNull: false,
    },
    currentChatReceiverId: {
      type: DataTypes.INTEGER,
      allowNull: false,
    },
  });
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For associations in 'addAssociations.js':&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function addAssociations(sequelize) {
  const { user, conversation, message, userToConversation } = sequelize.models;

  user.hasMany(userToConversation);
  userToConversation.belongsTo(user);

  conversation.hasMany(userToConversation);
  userToConversation.belongsTo(conversation);

  conversation.hasMany(message);
  message.belongsTo(conversation);

  user.hasMany(message);
  message.belongsTo(user);
}

module.exports = { addAssociations };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we need to sync the tables with the Postgres server in 'syncModels.js':&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const syncModels = async (sequelize) =&amp;gt; {
  const { user, conversation, message, userToConversation } = sequelize.models;

  try {
    await user.sync();
    await conversation.sync();
    await userToConversation.sync();
    await message.sync();
    console.log("synced");
  } catch (error) {
    console.error(error);
  }
};

module.exports = { syncModels };

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The modelling part is done, and the models will be used in the routes for querying data and so on.&lt;/p&gt;
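&lt;p&gt;As a sketch of that querying step (the helper name is hypothetical), a route handler could load a conversation's messages with the models defined above:&lt;/p&gt;

```javascript
// Sketch: fetch all messages in a conversation, oldest first.
// `models` is assumed to be `sequelize.models`.
function getMessages(models, conversationId) {
  return models.message.findAll({
    where: { conversationId },
    order: [["createdAt", "ASC"]],
  });
}
```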

&lt;p&gt;Thanks for reading.&lt;/p&gt;

</description>
      <category>database</category>
      <category>node</category>
    </item>
    <item>
      <title>Making concurrent API calls in Node</title>
      <dc:creator>Yue Su</dc:creator>
      <pubDate>Thu, 24 Dec 2020 01:30:01 +0000</pubDate>
      <link>https://dev.to/yuesu/making-concurrent-api-calls-in-node-3deg</link>
      <guid>https://dev.to/yuesu/making-concurrent-api-calls-in-node-3deg</guid>
      <description>&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;When building a backend API, it is common to fetch data from a third-party API, clean, format and merge it, and then forward it to the front-end.&lt;/p&gt;

&lt;p&gt;For instance, NASA's public API can be used to fetch the&lt;br&gt;
&lt;a href="https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY" rel="noopener noreferrer"&gt;APOD&lt;/a&gt; (Astronomy Picture of the Day) for any given date. However, it doesn't support fetching multiple photos across a range of dates. Now suppose we were asked to build a backend API that returns a list of APODs for a given number of days. What should we do?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.dropbox.com%2Fs%2Fq3wujx9y3zx6xlj%2Fconcurrent-call.png%3Fraw%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.dropbox.com%2Fs%2Fq3wujx9y3zx6xlj%2Fconcurrent-call.png%3Fraw%3D1" alt="API map"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My first thought was to generate an array containing a range of dates, then use a forEach method or a for loop to iterate through it, make the API calls one by one, push each response into a result array, and finally return the result to the front-end. Even though this would work, it doesn't align with the goal, which requires making the calls concurrently: awaiting each call inside a forEach or for loop processes them in order, not simultaneously, so it's slow and inefficient.&lt;/p&gt;

&lt;p&gt;After a little bit of research, I came across a library called &lt;a href="https://caolan.github.io/async/v3/" rel="noopener noreferrer"&gt;async&lt;/a&gt; that perfectly fulfills the requirement of the task. The async library provides various types of functions for working with asynchronous JavaScript. &lt;/p&gt;

&lt;p&gt;In this example, the method we will be using is &lt;a href="https://caolan.github.io/async/v3/docs.html#parallel" rel="noopener noreferrer"&gt;parallel&lt;/a&gt;, which is mainly for flow control:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;parallel(tasks, callback)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;It allows us to run a number of tasks in parallel, without waiting until the previous function has completed.&lt;/strong&gt; The results are passed to the callback as an array. &lt;/p&gt;
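&lt;p&gt;For comparison (this is an alternative sketch, not the async library's API), the same "start everything at once, collect results in input order" behaviour can be expressed with the built-in Promise.all:&lt;/p&gt;

```javascript
// Sketch: run an array of async task functions concurrently and collect
// their results in the order the tasks were given, like async.parallel.
function runAll(tasks) {
  return Promise.all(tasks.map((task) => task()));
}
```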

&lt;p&gt;Let's get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  The solution
&lt;/h2&gt;

&lt;p&gt;First, we need to make a helper function that takes the number of days as a parameter and returns an array of dates. NASA's API only accepts dates in the YYYY-MM-DD format, so, for example, if today's date is 2020-12-23 and the number of days is 6, the returned array will be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
  '2020-12-18',
  '2020-12-19',
  '2020-12-20',
  '2020-12-21',
  '2020-12-22',
  '2020-12-23'
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is  what the function looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function generatedates(numberOfDays) {
  const result = []
  const today = new Date()

  for (let i = 0; i &amp;lt; numberOfDays; i++) {
    let date = new Date(today)
    date.setDate(today.getDate() - i)
    let dd = date.getDate()
    let mm = date.getMonth() + 1
    let yyyy = date.getFullYear()

    if (dd &amp;lt; 10) {
      dd = "0" + dd
    }
    if (mm &amp;lt; 10) {
      mm = "0" + mm
    }
    date = yyyy + "-" + mm + "-" + dd
    result.unshift(date)
  }

  return result
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we need to add an endpoint to the node server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/api/photos
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The parallel function takes an array of functions as its first argument, so we can use the map method on the dates array to build that array of functions. Each function in the array fires an Axios call to the NASA API and gets the picture for that date.&lt;/p&gt;

&lt;p&gt;The second argument of the parallel function is a callback. Since the API calls return promises, the callback receives two arguments: a possible error, and the array of results.&lt;/p&gt;

&lt;p&gt;If we don't need to process the data further, we can simply pass it to the front-end. We can also iterate over the results to clean the data and extract only the information we need.&lt;/p&gt;
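&lt;p&gt;For instance (a sketch; the helper name is hypothetical, while &lt;code&gt;title&lt;/code&gt;, &lt;code&gt;date&lt;/code&gt; and &lt;code&gt;url&lt;/code&gt; are real fields of the APOD payload), the cleanup step could keep only a few fields of each photo:&lt;/p&gt;

```javascript
// Sketch: strip each APOD response down to the fields the front-end needs.
function cleanPhotos(photos) {
  return photos.map(({ title, date, url }) => ({ title, date, url }));
}
```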

&lt;p&gt;Here is the logic of the endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const URL = "https://api.nasa.gov/planetary/apod"

server.get("/api/photos", (req, res) =&amp;gt; {
  const days = req.query.days
  const dates = generateDates(days)

  const functionArray = dates.map((date) =&amp;gt; {
    return async function () {
      const data = await axios.get(`${URL}?api_key=${api_key}&amp;amp;date=${date}`)
      return data.data
    }
  })

  async.parallel(functionArray, (err, result) =&amp;gt; {
    if (err) return res.status(500).json({ error: err.message })
    res.status(200).json({ items: result.length, photos: result })
  })
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the user can make an API request to fetch any number of photos, such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//fetch photos of the past week
api/photos?days=7

//fetch photos of the past month
api/photos?days=30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the result will be shown as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "items": 6,
    "photos": [...]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;Check the GitHub repo for this example&lt;br&gt;
&lt;a href="https://github.com/yue-su/get-nasa-photo" rel="noopener noreferrer"&gt;Repo&lt;/a&gt;&lt;/p&gt;

</description>
      <category>node</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
