DEV Community

Sushant Nair

Here's how I achieved faster code runs for Docker Containers on the Jetson Nano (L4T)

Overview

When it comes to developing embedded systems, knowing some tips and tricks can go a long way in saving time and resources.

I recently worked on a project to develop an Autonomous Maneuvering Wheelchair (though, in the end, due to time and resource constraints, we ended up making a toy!). At its core is a Jetson Nano Developer Kit. It takes input video frames, produces a Depth Estimation of each frame by running it through a Depth Estimation Model from HuggingFace, uses the depth-estimated frame to decide whether to continue moving straight or to turn left or right when an obstacle lies ahead, and adjusts the speeds of the two motors it controls using the concept of Differential Drive. An Ultrasonic Sensor was used as well, but we won't discuss it here as it is not related.

One aspect of the software side is that the whole HuggingFace pipeline runs inside Docker. The Jetson Nano ships with Python 2 as default, and trying to install Python 3 and run Python 3-dependent programs in the same environment results in incompatibilities. We therefore used Docker containers to create a separate environment where only Python 3 was available, resolving the conflict. We later switched to a pre-built L4T image, but the discoveries made in our own attempt to create an image led us to something that was not easy to find on the internet and took quite a while to discover. And what is that thing? Docker Volume.

Old Method and associated Issues

Consider the typical process to build and run Docker containers. First the Dockerfile is written, and the image is built using the docker build command:

sudo docker build -t <image_name> -f <dockerfile_name> .

(be careful of the period at the end: it sets the build context to the current directory)

This step takes some time on the first run as it installs all the dependencies. The next step is to run the container using the docker run command:

sudo docker run --privileged -t -a stdout -a stderr <image_name>

However, the above command has an issue: every time it is run, it downloads the whole Depth Estimation Model from Hugging Face all over again. This uses large amounts of internet data, which can cost a lot on metered connections, and it takes a lot of time, especially if the connection is slow. Additionally, the Jetson Nano, being a small system, may hang (especially if many things are running on it). And lastly, this is an embedded system that is supposed to work in any condition: the wheelchair shouldn't need a WiFi connection to work! So how do we avoid downloading the model every time? Isn't there a way for the model to be downloaded only once, so that afterwards the downloaded model is simply loaded from disk and the frames are run through it to produce the Depth Estimated Frames? Turns out, Docker Volume is the solution.
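
The download-once idea can be sketched in plain Python: fetch the model only if a marker file is missing from a cache directory that outlives the container. Everything below (the function names, the marker file) is illustrative, not the actual project code.

```python
import os

def load_depth_model(cache_dir, download_fn, load_fn):
    """Download the model into cache_dir only if it is not already there,
    then load it from disk. download_fn and load_fn stand in for the real
    Hugging Face download/load calls."""
    marker = os.path.join(cache_dir, "model.ready")
    if not os.path.exists(marker):
        os.makedirs(cache_dir, exist_ok=True)
        download_fn(cache_dir)          # network hit: happens only once
        open(marker, "w").close()       # record that the download finished
    return load_fn(cache_dir)           # every later run takes this fast path
```

With a Docker Volume, cache_dir is the volume's mount point inside the container, so the "already downloaded" state survives across container runs.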

Procedure

Docker Volume is a feature of Docker which allows us to download, among other things, HuggingFace models once and for all, so that in the future they are simply loaded from disk instead of having to be downloaded again. Here are the steps to accomplish this:

Step 1: Build the container
FORMAT:
sudo docker build -t <image_name> -f <dockerfile_name> .
EXAMPLE:
sudo docker build -t dm0803245 -f MyDockerfile_DepthEst4 .

Step 2: The HF model is downloaded to a Docker Volume (inteldpthybridmidas_vol)
FORMAT:
sudo docker run --privileged -t -a stdout -a stderr -v <volume_name>:/<directory_path> <image_name>
EXAMPLE:

sudo docker run --privileged -t -a stdout -a stderr -v inteldpthybridmidas_vol:/models dm0803245
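
For the model to land on the mounted volume, the application inside the container must write its download cache to /models. One way to arrange that is to point the Hugging Face cache there in the Dockerfile. This is only a sketch: the base image, the HF_HOME environment variable usage, and the script name are assumptions, not our exact Dockerfile.

```dockerfile
FROM python:3.8-slim

# Point the Hugging Face cache at the volume's mount point, so the model
# downloaded by the app ends up on the persistent volume, not in the
# container's throwaway filesystem.
ENV HF_HOME=/models

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

CMD ["python3", "depth_estimation.py"]
```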

Step 3: List the containers; the next step uses the container ID rather than a name, since every docker run creates a new container and several containers can come from the same image
FORMAT:
sudo docker container ls -a

Step 4: Copy the output video to the Desktop using the container ID
FORMAT:
sudo docker cp <container_id>:/app/output.mp4 ~/Desktop
EXAMPLE:

sudo docker cp 9023296fba7b:/app/output.mp4 ~/Desktop
Please keep in mind the name of the output video file and enter commands accordingly.

Testing

Here, we shall check how the solution works.

Test 1: Run the same container (dm0803245) and see if the HF model is taken from the Docker Volume or not.
Step 1: Run the docker
FORMAT:
sudo docker run --privileged -t -a stdout -a stderr -v <volume_name>:/<directory_path> <image_name>
EXAMPLE:

sudo docker run --privileged -t -a stdout -a stderr -v inteldpthybridmidas_vol:/models dm0803245

Step 2: List the containers; the next step uses the container ID rather than a name, since every docker run creates a new container and several containers can come from the same image
FORMAT:
sudo docker container ls -a

Step 3: Copy the output video to the Desktop using the container ID
FORMAT:
sudo docker cp <container_id>:/app/output.mp4 ~/Desktop
EXAMPLE:
sudo docker cp be88d04f2db4:/app/output.mp4 ~/Desktop
Result: Test is successful. The model is taken from the Docker Volume inteldpthybridmidas_vol and not downloaded again.

Test 2: Build a new container and run it using the same Docker Volume where the model is saved.
Step 1: Build the container
FORMAT:
sudo docker build -t <image_name> -f <dockerfile_name> .
EXAMPLE:
sudo docker build -t dm0803246_test -f MyDockerfile_DepthEst4 .

Step 2: Run the container
FORMAT:
sudo docker run --privileged -t -a stdout -a stderr -v <volume_name>:/<directory_path> <image_name>
EXAMPLE:
sudo docker run --privileged -t -a stdout -a stderr -v inteldpthybridmidas_vol:/models dm0803246_test

Step 3: List the containers; the next step uses the container ID rather than a name, since every docker run creates a new container and several containers can come from the same image
FORMAT:
sudo docker container ls -a

Step 4: Copy the output video to the Desktop using the container ID
FORMAT:
sudo docker cp <container_id>:/app/output.mp4 ~/Desktop
EXAMPLE:
sudo docker cp e18e7bfdae4e:/app/output.mp4 ~/Desktop
Result: Test is successful. The model is taken from the Docker Volume inteldpthybridmidas_vol and not downloaded again, even though there was another build event and an entirely different container was used.

Footnotes

1: Delete containers once they are no longer required.
FORMAT: sudo docker rm <container_id>

2: The "false" in the images is nothing to worry about. Actually, you do need to be worried. The false indicates that GPU is not being used and CPU is being used instead. This method is excellent if you want to use CPU, but if you want GPU-Accelerated Computing, you need to use specially pre-built L4T Image for PyTorch, which is available in the official website of NVIDIA.
Refer:

Use GPU in Jetson Nano Ubuntu 18.04.6 - #2 by TomNVIDIA - Jetson Nano - NVIDIA Developer Forums

Installing Tensorflow on Jetson Nano Ubuntu 18.04 - #19 by system - Jetson Nano - NVIDIA Developer Forums

3: inteldpthybridmidas - this refers to Intel's DPT Hybrid Midas model on HuggingFace which we had used to make the Monocular Depth Estimation of the Input Video Frames.

4: Access the document

Thank you! Hope this was informative and useful.
