
Computer vision at the edge with Nvidia Jetson in 2 commands

A few days ago I explained the benefits of using the Pipeless computer vision framework to develop and deploy your applications. Among other advantages, you get multi-stream processing and dynamic configuration out-of-the-box. This means you can add, edit and remove streams on the fly, without restarting your program, as well as specify how those streams should be processed at the time of adding the stream.
In this post I will guide you through the commands you need to deploy a Pipeless application to an Nvidia Jetson device. This example has been tested on an Nvidia Jetson Xavier, but it should work with other models too.

Nvidia Jetson image - Pipeless computer vision framework

Walkthrough

First, install Pipeless on the Jetson device. Connect to the device via ssh and run the following command. Note that it will print some environment variables at the end that you need to export:

curl https://raw.githubusercontent.com/pipeless-ai/pipeless/main/install.sh | bash
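The exact variables depend on your installation, so copy the lines the installer prints rather than these. The snippet below only illustrates the general shape of those exports; the variable names and paths here are assumptions, not actual installer output:

```shell
# Illustrative only: the installer prints the exact lines for your device.
# These variable names and paths are assumptions, not real Pipeless output.
export PATH="$HOME/.pipeless:$PATH"
export GST_PLUGIN_PATH="$HOME/.pipeless:$GST_PLUGIN_PATH"
```

You may also want to add those exports to your `~/.bashrc` so they survive across ssh sessions.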

Then, the only other piece we need is to add our Pipeless stages. In this case, we will use the YOLOv8 example. You can learn more about Pipeless stages here, but in short, a stage is like a micro-pipeline. You can plug several stages one after the other dynamically when providing streams to Pipeless, so you can modify the processing behaviour per stream without changing your code and without restarting your application.
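To give a feel for what a stage contains: each hook is just a Python function that receives a frame and returns it. The sketch below is a hypothetical post-process hook; the dictionary keys ("original", "inference_output", "modified") are assumptions made for illustration, so check the YOLOv8 example you download below for the real frame layout.

```python
import numpy as np

def hook(frame, context):
    # Hypothetical post-process hook: paint each detected bounding box
    # onto a copy of the original frame. The dictionary keys used here
    # are illustrative; see the YOLOv8 example stage for the real ones.
    img = frame["original"].copy()
    for x1, y1, x2, y2 in frame.get("inference_output", []):
        img[int(y1):int(y2), int(x1):int(x2)] = (0, 255, 0)  # green fill
    frame["modified"] = img
    return frame
```

Pipeless invokes hooks like this once per frame, which is why each one only needs to describe what happens to a single frame.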

Let’s install some dependencies:

pip install opencv-python numpy ultralytics

Create the new project folder and download the YOLOv8 stage functions:

pipeless init my-project --template empty # Using the empty template we avoid the interactive shell
cd my-project
wget -O - https://github.com/pipeless-ai/pipeless/archive/main.tar.gz | tar -xz --strip=2 "pipeless-main/examples/yolo"

You can now start Pipeless:

pipeless start --stages-dir .

And provide a stream as follows:

pipeless add stream --input-uri "https://pipeless-public.s3.eu-west-3.amazonaws.com/cats.mp4" --output-uri "screen" --frame-path "yolo"

The above command assumes you have a display connected to the Jetson device to visualize the output stream. If you don’t have a display connected, you can change the output URI to write to a file (for example, a `file:///...` URI) or to point at a media server you may have.

And that’s all! Impressive, right?

You can find more examples in our documentation and learn how to create applications from scratch using Pipeless.

If you like the ease of creating and deploying computer vision applications with Pipeless, don’t forget to star our GitHub repository.

pipeless-ai / pipeless

An open-source computer vision framework to build and deploy apps in minutes.

Pipeless is an open-source framework that takes care of everything you need to develop and deploy computer vision applications in just minutes. That includes code parallelization, multimedia pipelines, memory management, model inference, multi-stream management, and more. Pipeless allows you to ship applications that work in real-time in minutes instead of weeks/months.

Pipeless is inspired by modern serverless technologies. You provide some functions and Pipeless takes care of executing them for new video frames and everything involved.

With Pipeless you create self-contained boxes that we call "stages". Each stage is a micro pipeline that performs a specific task. Then, you can combine stages dynamically per stream, allowing you to process each stream with a different pipeline without changing your code and without restarting the program. To create a stage you simply provide a pre-process function, a model and a post-process function.



