<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Josh Alphonse</title>
    <description>The latest articles on DEV Community by Josh Alphonse (@joshalphonse).</description>
    <link>https://dev.to/joshalphonse</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F338514%2F077c9908-81f2-45c6-ab94-d187a250c3fe.jpg</url>
      <title>DEV Community: Josh Alphonse</title>
      <link>https://dev.to/joshalphonse</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/joshalphonse"/>
    <language>en</language>
    <item>
      <title>BMF: Frame extraction acceleration- video similarity search with Pinecone</title>
      <dc:creator>Josh Alphonse</dc:creator>
      <pubDate>Fri, 10 May 2024 17:42:44 +0000</pubDate>
      <link>https://dev.to/bytedanceoss/bmf-frame-extraction-acceleration-video-similarity-search-with-pinecone-5e23</link>
      <guid>https://dev.to/bytedanceoss/bmf-frame-extraction-acceleration-video-similarity-search-with-pinecone-5e23</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;em&gt;TL;DR: This is a tutorial on how to create a video similarity search with BMF and &lt;a href="https://www.pinecone.io/"&gt;Pinecone&lt;/a&gt; from scratch. View this project's code on &lt;a href="https://github.com/Joshalphonse/BMF-video-similarity-search/blob/main/Video_Extraction.ipynb"&gt;github&lt;/a&gt; and test it out in a notebook like colab.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So you might have seen in my last blog post that I showed you how to accelerate video frame extraction using GPU's and Babit multimedia framework. In this blog we are going to improve upon our video frame extractor and create a video similarity search(Reverse video search) utlizing different RAG(Retrival Augemented Gerneation) concepts with &lt;a href="https://www.pinecone.io/"&gt;Pinecone&lt;/a&gt;, the vector database that will help us build knowledgeable AI. Pinecone is designed to perform vector searches effectively. You'll see throughout this blog how we extrapulate vectors from videos to make our search work like a charm. With Pinecone, you can quickly find items in a dataset that are most similar to a query vector, making it handy for tasks like recommendation engines, similar item search, or even detecting duplicate content. It's particularly well-suited for machine learning applications where you deal with high-dimensional data and need fast, accurate similarity search capabilities.&lt;br&gt;
Reverse video search works like reverse image search but uses a video to find other videos that are alike. Essentially, you use a video to look for matching ones. While handling videos is generally more complex and the accuracy might not be as good as with other models, the use of AI for video tasks is growing. Reverse video search is really good at finding videos that are connected and can make other video applications better.&lt;br&gt;
So why would you want to create a video similarity search app? &lt;/p&gt;
&lt;h2&gt;
  
  
  Here are some reasons:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Content Discovery: It enables users to find videos that are visually or contextually similar to what they're interested in, enhancing content discoverability on platforms like streaming services or stock footage libraries.&lt;/li&gt;
&lt;li&gt;Recommendation Systems: Enhances recommendation engines by suggesting content that is similar to a user's viewing history, thus improving user engagement and retention.&lt;/li&gt;
&lt;li&gt;Duplicate or Near-duplicate Detection: Helps in identifying copies or slight variations of the same video, which is useful for copyright enforcement or content management.&lt;/li&gt;
&lt;li&gt;Categorization and Tagging: Assists in automatically categorizing and tagging videos based on content, which can simplify content management and improve searchability.&lt;/li&gt;
&lt;li&gt;User-generated Content Moderation: Useful in moderating platforms where vector similarity can help identify potentially problematic content by comparing new uploads with known flagged videos.&lt;/li&gt;
&lt;li&gt;Video Analysis: In fields like surveillance, sports, or medical imaging, it can help in analyzing and identifying specific moments or objects in video sequences.&lt;/li&gt;
&lt;/ol&gt;
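&lt;p&gt;Under the hood, "most similar" usually means nearest by a metric such as cosine similarity between embedding vectors. Here is a minimal sketch with NumPy, using toy 4-dimensional vectors rather than real video embeddings:&lt;/p&gt;

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([1.0, 0.0, 1.0, 0.0])
candidates = {
    "video_a": np.array([0.9, 0.1, 1.1, 0.0]),  # nearly parallel to the query
    "video_b": np.array([0.0, 1.0, 0.0, 1.0]),  # orthogonal to the query
}
scores = {name: cosine_similarity(query, v) for name, v in candidates.items()}
best = max(scores, key=scores.get)  # "video_a" wins with a score near 1.0
```

&lt;p&gt;A vector database like Pinecone does exactly this comparison, just at scale and over millions of stored vectors.&lt;/p&gt;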

&lt;p&gt;Oh yeah, and of course a similarity search like the one we'll build in this blog! I took inspiration from the &lt;a href="https://milvus.io/docs/video_similarity_search.md"&gt;Milvus reverse video search notebook&lt;/a&gt; and decided to recreate it using technologies I prefer.&lt;br&gt;
The Babit Multimedia Framework brings all the great things we know and love about FFmpeg and amplifies them with multi-language support and GPU acceleration.&lt;br&gt;
Now, you might be familiar with other frame extraction methods using OpenCV, FFmpeg, or GStreamer. These are all great options. However, I'm choosing BMF for a few reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-language support: BMF supports Python, Go, and C++&lt;/li&gt;
&lt;li&gt;Full compatibility with FFmpeg: BMF is fully compatible with FFmpeg’s demuxing, decoding, filtering, encoding, and muxing capabilities, and its results are consistent with FFmpeg’s pts, duration, bitrate, fps, and other indicators. This makes it easy to quickly integrate FFmpeg capabilities into a project.&lt;/li&gt;
&lt;li&gt;Enhanced Support for NVIDIA GPUs to create enterprise ready GPU accelerated video pipelines

&lt;ul&gt;
&lt;li&gt;NVENC/NVDEC/GPU filters work out-of-box by inheriting abilities from FFmpeg.&lt;/li&gt;
&lt;li&gt;High performance frame processing is enabled by integration of CV-CUDA and customized CUDA kernels.&lt;/li&gt;
&lt;li&gt;AI inferencing can be easily integrated into video pipelines using TensorRT.&lt;/li&gt;
&lt;li&gt;Data moving between the CPU and GPU can be done with a simple call.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Alright, so that's more than just a few reasons, but you get the point! Now let's build a video similarity search.&lt;/p&gt;

&lt;h3&gt;
  The Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2b5n3obm4ardgxbpuug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2b5n3obm4ardgxbpuug.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1z646055cxwmavitmsj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1z646055cxwmavitmsj.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.9-3.10&lt;/li&gt;
&lt;li&gt;pinecone-client&lt;/li&gt;
&lt;li&gt;BabitMF-GPU&lt;/li&gt;
&lt;li&gt;torch&lt;/li&gt;
&lt;li&gt;torchvision&amp;gt;=0.12.0&lt;/li&gt;
&lt;li&gt;python-dotenv&lt;/li&gt;
&lt;li&gt;av&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Grab a video. I had a short video stored in a GitHub repo, but you can use a video stored on your system or elsewhere. BMF can handle any video format (FFmpeg compatibility!).&lt;/p&gt;
&lt;h3&gt;
  
  
  Inserting the dataset into a Pinecone index
&lt;/h3&gt;

&lt;p&gt;Let's start by inserting videos from our dataset into our Pinecone index. We do this so that our vector database has knowledge of the videos we will be comparing to the end user's video. This is a necessary starting point for our application.&lt;br&gt;
First, create an account on Pinecone and create your first index using Pinecone serverless. Pinecone is a fully managed vector database; you can use the CLI or the dashboard once you log in. Here's the quickstart guide that shows how to set it up: &lt;a href="https://docs.pinecone.io/guides/getting-started/quickstart"&gt;https://docs.pinecone.io/guides/getting-started/quickstart&lt;/a&gt;.&lt;/p&gt;
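&lt;p&gt;Index creation can also be scripted with the pinecone-client package. Below is a minimal sketch; note that the index name, the AWS cloud/region, and the dimension of 512 (the output size of ResNet18 with its final fully connected layer removed, which is what we embed with later) are my assumptions, so adjust them to your own setup:&lt;/p&gt;

```python
import os

# Assumed values: pick your own index name; 512 matches ResNet18's
# penultimate-layer output used for the embeddings in this tutorial.
INDEX_NAME = "video-embeddings"
EMBEDDING_DIM = 512

api_key = os.environ.get("PINECONE_API_KEY")
if api_key:  # only talk to Pinecone when credentials are configured
    from pinecone import Pinecone, ServerlessSpec

    pc = Pinecone(api_key=api_key)
    if INDEX_NAME not in pc.list_indexes().names():
        pc.create_index(
            name=INDEX_NAME,
            dimension=EMBEDDING_DIM,
            metric="cosine",
            spec=ServerlessSpec(cloud="aws", region="us-east-1"),
        )
```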

&lt;p&gt;&lt;code&gt;git clone https://github.com/Joshalphonse/Bmf-Huggingface.git&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install BMF with GPU capabilities&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!pip install -qU \
  pinecone-client \
  BabitMF-GPU \
  torch \
  torchvision&amp;gt;=0.12.0 \
  python-dotenv \
  av
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Download this video dataset or use your own.&lt;br&gt;
The data is organized as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;train: candidate videos, 20 classes, 10 videos per class (200 in total)&lt;/li&gt;
&lt;li&gt;test: query videos, same 20 classes as train data, 1 video per class (20 in total)&lt;/li&gt;
&lt;li&gt;reverse_video_search.csv: a csv file containing an id, path, and label for each video in train data
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;! curl -L https://github.com/towhee-io/examples/releases/download/data/reverse_video_search.zip -O
! unzip -q -o reverse_video_search.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
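&lt;p&gt;After unzipping, it's worth a quick sanity check that the expected 200 training videos and 20 query videos are actually present. A small helper (the &lt;code&gt;./train&lt;/code&gt; and &lt;code&gt;./test&lt;/code&gt; paths below assume the layout described above; adjust if yours differs):&lt;/p&gt;

```python
from pathlib import Path

def count_videos(root):
    """Count .mp4 files anywhere under a directory tree."""
    return sum(1 for _ in Path(root).rglob("*.mp4"))

# Assumed paths from the dataset layout described above.
for split, expected in [("./train", 200), ("./test", 20)]:
    if Path(split).exists():
        print(f"{split}: {count_videos(split)} videos (expected {expected})")
```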


&lt;p&gt;Put the files in a dataframe and convert them to a list&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;

&lt;span class="n"&gt;df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./reverse_video_search.csv&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;nrows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;#put the files in the dataframe
&lt;/span&gt;&lt;span class="n"&gt;video_paths&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;path&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;tolist&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c1"&gt;#convert df to python list
&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;video_paths&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;#check if the video paths
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure to import all of the necessary packages. Then create environment variables to manage your configuration; they will make your life a lot easier.&lt;br&gt;
Afterwards, load the CSV file from the dataset folder. I'm also limiting the list to 3 rows just to speed things up for demo purposes.&lt;br&gt;
We'll also load the pretrained ResNet model, because in the next steps we will use it to generate the vector embeddings.&lt;br&gt;
Lastly, in this code snippet, configure a preprocessing pipeline for images using PyTorch's &lt;code&gt;transforms&lt;/code&gt; module, which is often used in deep learning to prepare data before feeding it into a neural network.&lt;br&gt;
&lt;/p&gt;
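&lt;p&gt;One caveat on the environment variables: reading them with &lt;code&gt;os.environ["..."]&lt;/code&gt; raises a bare &lt;code&gt;KeyError&lt;/code&gt; when a variable is missing, which is confusing in a notebook. A tiny stdlib-only helper (my own convenience function, not part of BMF or Pinecone) fails with a clearer message:&lt;/p&gt;

```python
import os

def require_env(name):
    """Return the value of an environment variable, or fail with a clear message."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"Missing environment variable {name!r}. "
            "Add it to your .env file and load it with python-dotenv."
        )
    return value
```

&lt;p&gt;You can then write &lt;code&gt;PINECONE_API_KEY = require_env("PINECONE_API_KEY")&lt;/code&gt; and get a readable error instead of a stack trace ending in &lt;code&gt;KeyError&lt;/code&gt;.&lt;/p&gt;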

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;av&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pinecone&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Pinecone&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ServerlessSpec&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;torchvision.transforms&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;transforms&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;torchvision.models&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;models&lt;/span&gt;

&lt;span class="n"&gt;PINECONE_API_KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;PINECONE_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;PINECONE_ENVIRONMENT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;PINECONE_ENVIRONMENT&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;PINECONE_DATABASE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;PINECONE_DATABASE&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# Replace 'your_pinecone_api_key' with your actual Pinecone API key or use environment variables like I am here
&lt;/span&gt;&lt;span class="n"&gt;pc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Pinecone&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;PINECONE_API_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;environment&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;PINECONE_ENVIRONMENT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;PINECONE_DATABASE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;#load the CSV file
&lt;/span&gt;&lt;span class="n"&gt;csv_file&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./reverse_video_search.csv&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="n"&gt;df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;csv_file&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;nrows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;video_paths&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;path&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;tolist&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;video_paths&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;#check if the video paths
&lt;/span&gt;
&lt;span class="c1"&gt;#load a pretrained ResNet model
&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;models&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resnet18&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pretrained&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;eval&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;#remove the last fully connectected layer
&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Sequential&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nf"&gt;list&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;children&lt;/span&gt;&lt;span class="p"&gt;())[:&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="c1"&gt;# Define the preprocessing transforms
&lt;/span&gt;&lt;span class="n"&gt;preprocess&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;transforms&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Compose&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
    &lt;span class="n"&gt;transforms&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ToTensor&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="n"&gt;transforms&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Resize&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="mi"&gt;224&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;224&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="n"&gt;transforms&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Normalize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mean&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mf"&gt;0.485&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.456&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.406&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mf"&gt;0.229&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.224&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.225&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we have our dataset and our files ready to go, we will iterate over each video path and generate an embedding. I'm also using the &lt;a href="https://pypi.org/project/av/"&gt;av&lt;/a&gt; package to handle the video file, so we can open it and do the extraction.&lt;br&gt;
We then iterate over the frames of the video, preprocessing each frame with the &lt;code&gt;preprocess&lt;/code&gt; pipeline we defined above and generating an embedding for it with the pretrained ResNet model. These frame embeddings are stored in a list.&lt;br&gt;
Once all the frame embeddings have been collected, we calculate the average of the embeddings to get a single embedding that represents the entire video.&lt;br&gt;
Now all we have to do is use the Pinecone package to upsert (insert or update) the average video embedding to a Pinecone index, under the namespace 'video_embeddings'. The video path is used as the unique identifier for the embedding.&lt;br&gt;
&lt;/p&gt;
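&lt;p&gt;The averaging step is worth seeing in isolation: stacking the per-frame embeddings gives a &lt;code&gt;(num_frames, 512)&lt;/code&gt; array, and taking the mean along axis 0 collapses it to one 512-dimensional vector per video. Random dummy data stands in for real ResNet outputs here:&lt;/p&gt;

```python
import numpy as np

num_frames, dim = 30, 512           # 512 matches ResNet18's penultimate layer
rng = np.random.default_rng(0)
frame_embeddings = rng.normal(size=(num_frames, dim))  # stand-in for model outputs

avg_embedding = frame_embeddings.mean(axis=0)  # one vector for the whole video
```

&lt;p&gt;Averaging is a simple way to pool frames into a single vector; it loses temporal ordering, but for coarse similarity between short clips it works well.&lt;/p&gt;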

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Iterate over each video path and generate embeddings
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;video_path&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;video_paths&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Open the video file
&lt;/span&gt;    &lt;span class="n"&gt;video&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;av&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;video_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Get the first video stream
&lt;/span&gt;    &lt;span class="n"&gt;video_stream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;video&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;streams&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;video&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Initialize variables for storing embeddings
&lt;/span&gt;    &lt;span class="n"&gt;embeddings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

    &lt;span class="c1"&gt;# Iterate over the video frames
&lt;/span&gt;    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;video&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;video&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="c1"&gt;# Convert the frame to a numpy array
&lt;/span&gt;        &lt;span class="n"&gt;img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_ndarray&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;rgb24&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Preprocess the frame
&lt;/span&gt;        &lt;span class="n"&gt;img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;preprocess&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;unsqueeze&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Add batch dimension
&lt;/span&gt;
        &lt;span class="c1"&gt;# Generate embeddings using the ResNet model
&lt;/span&gt;        &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;no_grad&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
            &lt;span class="n"&gt;embedding&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;embedding&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;squeeze&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;numpy&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

        &lt;span class="c1"&gt;# Append the embedding to the list
&lt;/span&gt;        &lt;span class="n"&gt;embeddings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Convert the list of embeddings to a numpy array
&lt;/span&gt;    &lt;span class="n"&gt;embeddings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;embeddings&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Calculate the average embedding for the video
&lt;/span&gt;    &lt;span class="n"&gt;avg_embedding&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mean&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;embeddings&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;axis&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;avg_embedding&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Upsert the embedding to Pinecone
&lt;/span&gt;    &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;upsert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;vectors&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;video_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;avg_embedding&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tolist&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;video_embeddings&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Upserted embedding for video: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;video_path&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
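&lt;p&gt;One performance note: the loop above embeds every decoded frame, which gets slow for long videos. A common variation (my suggestion, not something the pipeline above requires) is to embed only every Nth frame; the index arithmetic is simple:&lt;/p&gt;

```python
def sampled_indices(total_frames, step):
    """Indices of the frames to keep when sampling every `step`-th frame."""
    if step < 1:
        raise ValueError("step must be >= 1")
    return list(range(0, total_frames, step))

# e.g. a 300-frame clip sampled every 30 frames -> 10 frames to embed
keep = sampled_indices(300, 30)
```

&lt;p&gt;Inside the decode loop you would enumerate the frames and skip any index not in &lt;code&gt;keep&lt;/code&gt;, cutting the embedding cost roughly by the sampling factor.&lt;/p&gt;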



&lt;p&gt;Now you can use either the Pinecone CLI or the dashboard to view the data we just upserted into your index. Check out the picture below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9qwuejzpxumeu0qobgx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9qwuejzpxumeu0qobgx.png" alt="Image description" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Searching For A Similar Video
&lt;/h3&gt;

&lt;p&gt;Install FFmpeg and related libraries. For this demo we can skip this step, because the FFmpeg libraries are already installed in the Google Colab environment.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt install ffmpeg&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;List the FFmpeg libraries. It is expected that related libraries such as libavcodec and libavformat are installed. The expected output is shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v4bfz0cot2ue25fmxpy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v4bfz0cot2ue25fmxpy.png" alt="Image description" width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt install libdw1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;dpkg -l | grep -i ffmpeg&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ffmpeg -version&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install the following package to show the BMF C++ logs in the Colab console; otherwise only Python logs are printed. This step is not necessary if you're not in a Colab or IPython notebook environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install wurlitzer
%load_ext wurlitzer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now import all of the dependencies listed below. The beginning of this process is the same as the data upsert above: use the Pinecone credentials we stored in a .env file and work with the pretrained ResNet18 model.&lt;br&gt;
The difference here is that we are finally using BMF for frame extraction.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;glob&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;torchvision.transforms&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;transforms&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;torchvision.models&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;models&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pinecone&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Pinecone&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;bmf&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;IPython&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;display&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;PIL&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Image&lt;/span&gt;

&lt;span class="n"&gt;PINECONE_API_KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;PINECONE_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;PINECONE_ENVIRONMENT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;PINECONE_ENVIRONMENT&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;PINECONE_DATABASE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;PINECONE_DATABASE&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# Replace 'your_pinecone_api_key' with your actual Pinecone API key or use environment variables like I am here
&lt;/span&gt;&lt;span class="n"&gt;pc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Pinecone&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;PINECONE_API_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;environment&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;PINECONE_ENVIRONMENT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;PINECONE_DATABASE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;models&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resnet18&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pretrained&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;eval&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="c1"&gt;#remove the last fully connectected layer
&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Sequential&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nf"&gt;list&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;children&lt;/span&gt;&lt;span class="p"&gt;())[:&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="n"&gt;preprocess&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;transforms&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Compose&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
    &lt;span class="n"&gt;transforms&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ToTensor&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="n"&gt;transforms&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Resize&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="mi"&gt;224&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;224&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="n"&gt;transforms&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Normalize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mean&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mf"&gt;0.485&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.456&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.406&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mf"&gt;0.229&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.224&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.225&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="n"&gt;input_video_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/content/linedancing.mp4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;output_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./extracted-images/simple_%03d.jpg&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;graph&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bmf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;dump_graph&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="n"&gt;video&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input_path&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;input_video_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;fps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;bmf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;video&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;video&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output_path&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;output_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;format&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;image2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;video_params&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;codec&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;jpg&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Frame extraction completed successfully.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error during frame extraction: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we load the extracted query frames and generate embeddings for them, which we will compare against the embeddings stored in our Pinecone index. &lt;/p&gt;

&lt;p&gt;Let me break it down for you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Load the extracted query frames: I used the &lt;code&gt;glob&lt;/code&gt; module to find all the file paths of the extracted query frames, which are stored in the query_frame_paths variable. These are individual frames extracted from the original video.&lt;/li&gt;
&lt;li&gt;Generate embeddings for each query frame: We then iterate over each query frame path, load the image with &lt;code&gt;cv2.imread&lt;/code&gt;, preprocess it with the &lt;code&gt;preprocess&lt;/code&gt; transform pipeline defined earlier, and generate an embedding for the frame with the pretrained model.&lt;/li&gt;
&lt;li&gt;Store the embeddings: The generated embeddings for each frame are stored in the &lt;code&gt;query_embeddings&lt;/code&gt; list.&lt;/li&gt;
&lt;li&gt;Calculate the average embedding: Once all the frame embeddings have been collected, we average them to get a single embedding that represents the entire set of query frames. This average embedding captures the overall visual content of the query, which is the main component of how our similarity search works.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Load the extracted query frames and generate embeddings
&lt;/span&gt;&lt;span class="n"&gt;query_frame_paths&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;glob&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;glob&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;output_path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%03d&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;query_embeddings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;frame_path&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;query_frame_paths&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;imread&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cvtColor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;COLOR_BGR2RGB&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;preprocess&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;unsqueeze&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Add batch dimension
&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;no_grad&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
        &lt;span class="n"&gt;embedding&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;embedding&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;squeeze&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;numpy&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;query_embeddings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;query_embeddings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query_embeddings&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;avg_query_embedding&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mean&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query_embeddings&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;axis&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
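&lt;p&gt;One detail worth double-checking here: ResNet18 without its final fully connected layer produces 512-dimensional features, so your Pinecone index must have been created with a matching dimension. The toy sketch below (3-dimensional vectors stand in for the real 512-dimensional ones) shows what the averaging step computes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Toy illustration of the averaging step: per-frame embeddings are
# averaged dimension-wise into one query vector. Real ResNet18 features
# are 512-dimensional; 3 dimensions are used here for brevity.
frame_embeddings = [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]]
avg = [sum(dim) / len(frame_embeddings) for dim in zip(*frame_embeddings)]
print(avg)  # [2.0, 2.0, 2.0]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;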



&lt;p&gt;Lastly, let's perform our similarity search with Pinecone. &lt;br&gt;
The &lt;code&gt;query&lt;/code&gt; method from Pinecone will be used to search for the most similar vectors to the &lt;code&gt;avg_query_embedding&lt;/code&gt; we created. The &lt;code&gt;top_k&lt;/code&gt; parameter is set to 5, which means the code retrieves the 5 closest matching vectors to the query (choose whatever number you'd like, depending on how many items you upserted into your database). The &lt;code&gt;include_metadata&lt;/code&gt; parameter is set to &lt;code&gt;True&lt;/code&gt;, so we also retrieve the metadata (in this case, the video file paths) associated with the matching vectors.&lt;br&gt;
This step is really straightforward. Pinecone has great documentation and an easy-to-use package.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Perform similarity search using Pinecone
&lt;/span&gt;&lt;span class="n"&gt;num_results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;  &lt;span class="c1"&gt;# Number of similar videos to retrieve
&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;vector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;avg_query_embedding&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tolist&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="n"&gt;top_k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;num_results&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;include_metadata&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;video_embeddings&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Print the most similar video paths
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;match&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;matches&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="n"&gt;video_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;match&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Similar video: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;video_path&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
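&lt;p&gt;Each match in the response carries its similarity score and the metadata stored at upsert time, not just the ID. The snippet below sketches how to pull those fields out; the hard-coded response dict is purely illustrative, mimicking the shape Pinecone returns:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Illustrative response shaped like Pinecone's query result; in the real
# pipeline this comes back from index.query(...) above.
results = {
    "matches": [
        {"id": "/content/linedancing.mp4", "score": 0.93,
         "metadata": {"video_path": "/content/linedancing.mp4"}},
    ]
}

for match in results["matches"]:
    meta = match.get("metadata", {})
    print(f"{match['id']} (score: {match['score']:.2f}, metadata: {meta})")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;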



&lt;h3&gt;
  
  
  And our result is....
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn322rf60h213wm1ujto6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn322rf60h213wm1ujto6.png" alt="Image description" width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Bonus
&lt;/h3&gt;

&lt;p&gt;Since I'm using a notebook and I don't want to use up a ton of memory, I also converted all the videos to GIFs so they're easier to view. So here's some bonus code for ya!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;video_to_gif&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;video_path&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;gif_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tmp_dir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;video_path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;][:&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;.gif&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;frames&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="n"&gt;cap&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;VideoCapture&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;video_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;cap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;isOpened&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
        &lt;span class="n"&gt;ret&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;ret&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;break&lt;/span&gt;
        &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cvtColor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;COLOR_BGR2RGB&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;frames&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fromarray&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="n"&gt;cap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;release&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;frames&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;save&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fp&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;gif_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;GIF&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;append_images&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;frames&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;:],&lt;/span&gt; &lt;span class="n"&gt;save_all&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;loop&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;gif_path&lt;/span&gt;

&lt;span class="c1"&gt;# Display the input video as a GIF
&lt;/span&gt;&lt;span class="n"&gt;html&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Query video &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;: &amp;lt;br/&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input_video_path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="n"&gt;query_gif&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;video_to_gif&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input_video_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;html_line&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;img src=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;gt; &amp;lt;br/&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query_gif&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;html&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;html_line&lt;/span&gt;
&lt;span class="n"&gt;html&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Top {} search results: &amp;lt;br/&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;num_results&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Display the similar videos as GIFs
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;path&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;match&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;match&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;matches&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]]:&lt;/span&gt;
    &lt;span class="n"&gt;gif_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;video_to_gif&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;html_line&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;img src=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; style=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;display:inline;margin:1px&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;gif_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;html&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;html_line&lt;/span&gt;

&lt;span class="n"&gt;display&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;HTML&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;html&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  You can do it too
&lt;/h3&gt;

&lt;p&gt;What I've shown you is a niche use case for BMF. Video frame extraction has plenty of uses beyond our example, and the framework offers a ton of features, especially for building video processing pipelines. Make sure you check out the &lt;a href="https://babitmf.github.io/docs/bmf/overview/"&gt;BMF documentation&lt;/a&gt; and try out some other example apps on the &lt;a href="https://babitmf.github.io/docs/bmf/quick_experience/"&gt;quick experience page&lt;/a&gt; for more. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Why Your Web Bundler Matters For Optimized WebGPU-Powered 3D Game Development</title>
      <dc:creator>Josh Alphonse</dc:creator>
      <pubDate>Fri, 10 May 2024 16:50:33 +0000</pubDate>
      <link>https://dev.to/joshalphonse/why-your-web-bundler-matters-for-optimized-webgpu-powered-3d-game-development-498h</link>
      <guid>https://dev.to/joshalphonse/why-your-web-bundler-matters-for-optimized-webgpu-powered-3d-game-development-498h</guid>
      <description>&lt;p&gt;Browser games have had quite the transformation throughout the years. From Flash to webGL we've seen gaming on the web be taken to new heights. Yet, with each step we take to advance, we seem to hit the same issues and deal with the bottleneck of creating the next big game that anyone can play no matter if they have a console or an expensive PC. The allure of browser-based games is expanding rapidly, not just among casual gamers but also within the AAA space. In this blog I want to share my experience of experimenting with different bundlers and how they affect my usage with WebGPU.&lt;br&gt;
Performance will continue to be one of the most vital specs for game developers. Thats really why we haven't really seen too many AAA games on the web. Usually these games require a lot of GPU/CPU resources that are difficult to generate games on the DOM. This is where WebGPU comes in. &lt;/p&gt;
&lt;h2&gt;
  
  
  WebGPU
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/WebGPU_API"&gt;WebGPU&lt;/a&gt; is an emerging standard that aims to provide modern 3D graphics and computation capabilities on the web. Unlike WebGL, which is a JavaScript API for rendering interactive 2D and 3D graphics within any compatible web browser without the use of plug-ins, WebGPU is designed from the ground up to provide an efficient, low-level interface to the graphics processing unit (GPU).&lt;/p&gt;

&lt;p&gt;Key features of WebGPU include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Low-level Access: WebGPU offers fine-grained control over GPU operations, which means developers can perform more sophisticated rendering and data manipulation. This access allows for more optimized graphics rendering and general computation on the web.&lt;/li&gt;
&lt;li&gt;Modern API Design: WebGPU is designed to interface with modern GPU features directly, similar to Vulkan, Metal, or DirectX 12. This means it can leverage capabilities that might not be available in older graphics APIs, enabling developers to create more graphically advanced and computationally intensive applications.&lt;/li&gt;
&lt;li&gt;Cross-Platform Compatibility: As a web standard, WebGPU is intended to work across multiple platforms and devices, abstracting away the platform-specific graphics APIs while providing unified access to GPU capabilities.&lt;/li&gt;
&lt;li&gt;Security: WebGPU is designed with modern web security standards in mind. It provides a more secure environment for executing GPU tasks, mitigating potential security risks that can arise from allowing web content to interact directly with the system's hardware resources.&lt;/li&gt;
&lt;li&gt;Efficiency: Better performance and efficiency are key goals of WebGPU. The API allows for pre-compilation and efficient management of GPU resources, minimizing overhead, and maximizing throughput.&lt;/li&gt;
&lt;li&gt;Compute Shaders: WebGPU supports compute shaders, which are programs that use the GPU for general computation tasks, not just graphics rendering. This expands the possible use cases to include scientific computing, machine learning, and other GPU-accelerated applications.&lt;/li&gt;
&lt;li&gt;Concurrency and Async: The API is designed to fully take advantage of the GPU's ability to perform parallel operations, with support for asynchronous operations to ensure a smoother experience by not blocking the main thread in complex computations and rendering tasks.&lt;/li&gt;
&lt;/ol&gt;
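To ground these features, here is what the very first step, getting hold of a device, looks like in practice. This is a minimal sketch assuming a browser with WebGPU enabled; initWebGPU is an illustrative name, not part of any library:

```javascript
// Minimal WebGPU bootstrap (a sketch; initWebGPU is an illustrative name).
// Requests an adapter and device, then configures a canvas for rendering.
async function initWebGPU(canvas) {
  if (!navigator.gpu) {
    throw new Error('WebGPU is not supported in this browser');
  }
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();
  const context = canvas.getContext('webgpu');
  context.configure({
    device,
    format: navigator.gpu.getPreferredCanvasFormat(),
  });
  return { device, context };
}
```

Everything downstream, including render pipelines and compute shaders, hangs off the `device` returned here.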

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0148cttw2hpl931wxq97.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0148cttw2hpl931wxq97.png" alt="Image description" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Choosing a web bundler
&lt;/h2&gt;

&lt;p&gt;WebGPU presents an exciting opportunity to harness modern graphics and computation capabilities directly within web browsers. In tandem, Rsbuild, a build tool based on Rspack, enables an effortless and efficient development workflow for web projects.&lt;br&gt;
Rsbuild is designed to supercharge the development process, offering out-of-the-box functionality, semantic build configuration, and an array of official plugins that smooth the path to adoption and immediate productivity gains. With Rsbuild's emphasis on easy configuration, performance, and a lightweight plugin system, developers can focus more on the creative aspects of their projects and less on the intricacies of the build process.&lt;br&gt;
Leveraging Rsbuild's zero-configuration setup and Rspack's fast Rust-based bundler, developers can establish an efficient pipeline for developing, bundling, and deploying WebGPU applications. &lt;br&gt;
Rsbuild also accommodates WebGPU's modularity and the developer-experience benefits it brings. The examples below cover a range of WebGPU functionality, such as rendering pipelines, compute operations, and asset management, while capitalizing on Rsbuild's rapid build times and optimized output.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;High-Performance Builds: Rsbuild integrates high-performance Rust-based tools such as Rspack and SWC, vital for WebGPU projects where build speed and efficiency are crucial for iterative testing and complex shader compilation.&lt;/li&gt;
&lt;li&gt;Universal Framework Support: Rsbuild's framework-agnostic nature lets developers integrate with various UI frameworks, making it easier to adopt WebGPU across diverse tech stacks.&lt;/li&gt;
&lt;li&gt;Consistent Artifacts: Stable build artifacts are paramount for WebGPU applications, where a small change can have a significant visual impact. Rsbuild ensures consistent results between development and production environments.&lt;/li&gt;
&lt;li&gt;Simplified Configuration: The zero-config start-up and semantic configuration approach streamlines setup for Orillusion projects, so developers can spend more time leveraging WebGPU's capabilities instead of wrestling with build issues.&lt;/li&gt;
&lt;li&gt;Plugin System: Rsbuild's plugin system lets developers tailor the build process to the specific needs of their WebGPU project.&lt;/li&gt;
&lt;li&gt;Asset Optimization: Essential for WebGPU projects with large textures or models, asset optimization plugins can automatically compress and manage assets to ensure optimal loading times and runtime performance. Plugins like the Assets Retry plugin and the Image Compress plugin fit here.&lt;/li&gt;
&lt;li&gt;Code Security: Protect your intellectual property with obfuscation plugins that add an extra layer of security, particularly important when deploying high-value WebGPU projects.&lt;/li&gt;
&lt;li&gt;Enhanced Development Workflow: Streamline development with plugins that support hot module replacement (HMR), code splitting, and live previews, all crucial for teams working on sophisticated WebGPU experiences.&lt;/li&gt;
&lt;li&gt;Cross-Platform Compatibility: Ensure that your WebGPU application runs smoothly across all supported browsers and platforms with plugins designed to handle the nuances of different environments.&lt;/li&gt;
&lt;/ol&gt;
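As a concrete sketch of how those plugins wire together, here is what an Rsbuild config might look like. Treat the plugin package names as an assumption (they match the official Rsbuild plugins at the time of writing) and verify them against the Rsbuild docs for your version:

```javascript
// rsbuild.config.mjs -- a minimal sketch, not a definitive setup.
import { defineConfig } from '@rsbuild/core';
import { pluginAssetsRetry } from '@rsbuild/plugin-assets-retry';
import { pluginImageCompress } from '@rsbuild/plugin-image-compress';

export default defineConfig({
  source: {
    entry: { index: './src/main.js' },
  },
  plugins: [
    pluginAssetsRetry(), // retry loading large 3D assets on flaky networks
    pluginImageCompress(), // compress textures and UI images at build time
  ],
});
```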

&lt;p&gt;Use case: Web LLM, which runs LLMs directly in the browser, powered by WebGPU.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbt3lvnim8ap8f4mrnjkj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbt3lvnim8ap8f4mrnjkj.png" alt="Image description" width="800" height="550"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Building 3D worlds:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://orillusion.com/"&gt;Orillusion&lt;/a&gt;: The Illusion of Complexity Simplified&lt;br&gt;
Orillusion is a comprehensive library leveraging WebGPU to offer a high-level abstraction for intricate 3D graphics rendering. It simplifies the development of immersive web experiences, making the power of WebGPU accessible without demanding a deep dive into the lower-level intricacies.&lt;br&gt;
Imagine developing a web application that features an interactive 3D model showcase, designed to be as performant as it is breathtaking. By integrating WebGPU for rendering, Rspack for bundling, and Orillusion for graphics abstraction, we can create an application that is both cutting-edge and efficient.&lt;/p&gt;


&lt;h2&gt;
  
  
  Step 1: Scaffolding the Project
&lt;/h2&gt;

&lt;p&gt;Scaffold a new Rsbuild project:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm create rsbuild@latest&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Next, install Orillusion:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm install @orillusion/core --save&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 2: Incorporating Orillusion for WebGPU Abstraction
&lt;/h2&gt;

&lt;p&gt;With your project scaffolded, import Orillusion to provide a high-level interface to WebGPU. Orillusion makes it easy to create, manipulate, and render 3D models without delving deep into WebGPU's API details.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Import Orillusion&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Scene&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Engine&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;MeshBuilder&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;orillusion&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Initialize the Orillusion Engine&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;engine&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Engine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;renderCanvas&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Create a simple 3D scene&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;scene&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Scene&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;engine&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;box&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;MeshBuilder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createBox&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;box&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nx"&gt;scene&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Crafting the 3D Experience
&lt;/h2&gt;

&lt;p&gt;Next, let's take it up a notch with a fuller sample that loads a skeleton-animated model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Object3D&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Scene3D&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Engine3D&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;AtmosphericComponent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;CameraUtil&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;webGPUContext&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;HoverCameraController&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;View3D&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;LitMaterial&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;MeshRenderer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;BoxGeometry&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;DirectLight&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;KelvinUtil&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Object3DUtil&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@orillusion/core&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Stats&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@orillusion/stats&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Sample_Skeleton&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;lightObj3D&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Object3D&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nl"&gt;scene&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Scene3D&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kr"&gt;private&lt;/span&gt; &lt;span class="nx"&gt;Ori&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;dat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;GUI&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;Engine3D&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;setting&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;shadow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;autoUpdate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="nx"&gt;Engine3D&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;setting&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;shadow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;updateFrameRate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="nx"&gt;Engine3D&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;setting&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;shadow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;shadowBound&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Engine3D&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scene&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Scene3D&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scene&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addComponent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Stats&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;sky&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scene&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addComponent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;AtmosphericComponent&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;camera&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;CameraUtil&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createCamera3DObject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scene&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="nx"&gt;camera&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;perspective&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Engine3D&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aspect&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.01&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;5000.0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;ctrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;camera&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;object3D&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addComponent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;HoverCameraController&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="nx"&gt;ctrl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setCamera&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="nx"&gt;ctrl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;maxDistance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;view&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;View3D&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="nx"&gt;view&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scene&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scene&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="nx"&gt;view&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;camera&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;camera&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="nx"&gt;Engine3D&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startRenderView&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;view&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;initScene&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scene&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="nx"&gt;sky&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;relativeTransform&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lightObj3D&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;transform&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;initScene&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scene&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Scene3D&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;// load model with skeleton animation&lt;/span&gt;
            &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;man&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Engine3D&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loadGltf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://cdn.orillusion.com/gltfs/CesiumMan/CesiumMan_compress.gltf&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="nx"&gt;man&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scaleX&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nx"&gt;man&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scaleY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nx"&gt;man&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scaleZ&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nx"&gt;scene&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addChild&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;man&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;// load model with skeleton animation&lt;/span&gt;
            &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;man&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Engine3D&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loadGltf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://cdn.orillusion.com/gltfs/CesiumMan/CesiumMan_compress.gltf&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="nx"&gt;man&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scaleX&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nx"&gt;man&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scaleY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nx"&gt;man&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scaleZ&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nx"&gt;scene&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addChild&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;man&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="cm"&gt;/******** floor *******/&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scene&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addChild&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Object3DUtil&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;GetSingleCube&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;355&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

        &lt;span class="cm"&gt;/******** light *******/&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lightObj3D&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Object3D&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
            &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lightObj3D&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lightObj3D&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lightObj3D&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;z&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lightObj3D&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rotationX&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;144&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lightObj3D&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rotationY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lightObj3D&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rotationZ&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;directLight&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lightObj3D&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addComponent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;DirectLight&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="nx"&gt;directLight&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lightColor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;KelvinUtil&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;color_temperature_to_rgb&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5355&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="nx"&gt;directLight&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;castShadow&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nx"&gt;directLight&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;intensity&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nx"&gt;scene&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addChild&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lightObj3D&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Sample_Skeleton&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will get something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1no8qj63zfe5r86hsj5r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1no8qj63zfe5r86hsj5r.png" alt="Image description" width="800" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Bringing together WebGPU, Rspack, and Orillusion creates an ecosystem that is more than the sum of its parts. Developers can forge web applications exhibiting both stunning visual richness and unparalleled performance. This potent combination caters to the evolving demands of users and paves the way for the next generation of immersive web experiences. As the web continues to evolve, these tools will undoubtedly be at the forefront, powering experiences we've yet to imagine.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Monorepos with Rspack Just Makes Things Easier</title>
      <dc:creator>Josh Alphonse</dc:creator>
      <pubDate>Tue, 06 Feb 2024 23:17:00 +0000</pubDate>
      <link>https://dev.to/bytedanceoss/monorepos-with-rspack-just-makes-things-easier-45l3</link>
      <guid>https://dev.to/bytedanceoss/monorepos-with-rspack-just-makes-things-easier-45l3</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6dptumUk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://bytedance.us.larkoffice.com/space/api/box/stream/download/asynccode/%3Fcode%3DMGU2NzFjNzE3Y2MyZjlhZGE2ODQzN2Q2ZDZlYjgzZGVfcm5QbW1JeVBVblVDaG1jUWdSOFlNVW5aNTUyQjR0WWZfVG9rZW46VmtOYWJhaEJXb0dZd1h4YnlJM3VZU2NNc2tJXzE3MDcyNjEzNDY6MTcwNzI2NDk0Nl9WNA" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6dptumUk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://bytedance.us.larkoffice.com/space/api/box/stream/download/asynccode/%3Fcode%3DMGU2NzFjNzE3Y2MyZjlhZGE2ODQzN2Q2ZDZlYjgzZGVfcm5QbW1JeVBVblVDaG1jUWdSOFlNVW5aNTUyQjR0WWZfVG9rZW46VmtOYWJhaEJXb0dZd1h4YnlJM3VZU2NNc2tJXzE3MDcyNjEzNDY6MTcwNzI2NDk0Nl9WNA" alt="" width="800" height="306"&gt;&lt;/a&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ll4Z7JJo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://bytedance.us.larkoffice.com/space/api/box/stream/download/asynccode/%3Fcode%3DZTliY2FlZWI2MzA3YmI1MWRiZTc4ZDcwZWI2OTNkYzdfTkpxVktMM2pndzZwQWhDQjNwRTBrMk5PS1dhdGhnZE5fVG9rZW46TUNnamJ2dEhab2RDSFZ4U0VxNnV4YWdrc0ZiXzE3MDcyNjEzNDc6MTcwNzI2NDk0N19WNA" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ll4Z7JJo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://bytedance.us.larkoffice.com/space/api/box/stream/download/asynccode/%3Fcode%3DZTliY2FlZWI2MzA3YmI1MWRiZTc4ZDcwZWI2OTNkYzdfTkpxVktMM2pndzZwQWhDQjNwRTBrMk5PS1dhdGhnZE5fVG9rZW46TUNnamJ2dEhab2RDSFZ4U0VxNnV4YWdrc0ZiXzE3MDcyNjEzNDc6MTcwNzI2NDk0N19WNA" alt="" width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;When it comes to web dev, it's hard to keep up with all the latest trends. I also can't come up with many examples of ecosystems that move faster than JavaScript's. It feels as if a new version of the iPhone is released every month. However, just because things are moving fast, doesn't mean you have to be left in the dust.&lt;/p&gt;

&lt;p&gt;Monorepos are one of the hottest additions to the scene, especially for web developers. One of the biggest challenges when building a full-stack app is managing complexity: the larger your application becomes, the more complex it gets, no matter how big your organization or team may be. You've probably experienced this struggle if you're building a product with many different front-end code bases that share backends and interfaces.&lt;/p&gt;

&lt;p&gt;The way organizations are solving this issue now is with monorepos. Monorepos help mitigate the confusion that comes with managing multiple repositories for multiple distinct projects/products by consolidating your projects into one repository. Contrary to the name, monorepos are not technically monolithic. A common misconception is that if you use a monorepo, every project in the repository has to be released on the same day. That actually isn't the case: just because we develop our code in the same place doesn't mean we have to deploy everything at the same time, nor to the same place. Since your code bases can live in one repository, common tasks like code sharing and refactoring become much easier, which significantly lowers the cost of creating libraries, microservices, and microfrontends. If you want true development flexibility, give this approach a try. This is why some of the biggest companies are adopting monorepos.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rspack
&lt;/h3&gt;

&lt;p&gt;So, where does &lt;a href="https://www.rspack.dev/"&gt;Rspack&lt;/a&gt; come into play here?&lt;/p&gt;

&lt;p&gt;As you may know, Rspack is a high-performance web bundler that offers interoperability with the webpack ecosystem, and it also works with build systems like &lt;a href="https://nx.dev/"&gt;Nx&lt;/a&gt; to build smart monorepos! Rspack brings several key benefits when used in conjunction with NX, primarily by enhancing the performance of the development and build processes. Here are a few ways Rspack can positively impact an NX-powered project:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Speeding up builds:&lt;/strong&gt; Rspack is known for its quick bundling capabilities. In complex NX workspaces with multiple applications and libraries, faster bundling translates to quicker build times. This is especially beneficial in large-scale projects, where build times can significantly impact developer productivity and continuous integration/deployment pipelines.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Reducing setup time:&lt;/strong&gt; Rspack requires less configuration out of the box than many other bundlers, which simplifies the setup process. In an NX workspace, this means you can get applications up and running more quickly, spending less time adjusting bundler configurations and more time developing features.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Efficient code splitting:&lt;/strong&gt; Rspack facilitates code splitting, so each page or feature can have its own bundle. Users only download the code necessary for the page they're visiting, improving load times and enhancing the user experience.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Providing Hot Module Replacement (HMR):&lt;/strong&gt; While NX manages the overall structure and dependencies in your monorepo, Rspack provides Hot Module Replacement for a smoother development experience. HMR applies changes to a running application without a full reload, preserving application state and speeding up development.&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;Although monorepos can solve a bunch of problems for applications of any size, there is one feature in particular that I'm going to highlight in this blog post: the ability to manage aliases with Rspack and Nx.&lt;/p&gt;

&lt;p&gt;Defining aliases with Rspack in an NX monorepo provides multiple benefits and solves several issues commonly faced in large-scale projects. Let's take a look before we dive into the tutorial:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Simplified Imports&lt;/strong&gt;: With aliases, you can avoid long and complicated relative paths that are hard to read and maintain. Aliases provide a clear, concise way to import modules.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Without alias... How do you even know where your functinos are?! Half the time in guesssing.
import utility from '../../../../libs/utils/src/utility';

// With alias
import utility from '@utils/utility';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Easier&lt;/strong&gt; &lt;strong&gt;Refactoring&lt;/strong&gt;: When you decide to rearrange your project's folder structure, you won't need to update every import statement. Instead, you only update the alias paths in the Rspack configuration.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Code Readability&lt;/strong&gt;: Aliases can help signify the intent or the origin of the imported module more clearly than relative paths.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Consistency Across Apps and Libs&lt;/strong&gt;: In a monorepo, you typically have multiple apps and libraries. Aliases ensure that every part of your project refers to shared libs in the same way.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Namespace&lt;/strong&gt; &lt;strong&gt;Clarity&lt;/strong&gt;: By using aliases that represent features or shared libraries (like &lt;code&gt;@feature-a&lt;/code&gt; or &lt;code&gt;@shared&lt;/code&gt;), you provide a clear namespace. This indicates that the import is from a shared source and not a local module.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Avoid File Path Errors&lt;/strong&gt;: As projects grow, relative paths become more prone to errors when files are moved or when the developer is unsure of the current file's depth in the directory structure.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Enhanced Autocompletion&lt;/strong&gt;: Many IDEs can provide improved autocompletion for imports when aliases are set up properly, making development faster and reducing typos.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Collaboration Enhancement&lt;/strong&gt;: When working with a team, aliases ensure that everyone is using the same paths to common resources, reducing the cognitive load of understanding where a file is located within the project hierarchy.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Integration with Build Tools&lt;/strong&gt;: Tools like Rspack understand these aliases and use them to resolve the actual file paths during the build process, ensuring the generated bundles reflect the same structure and readability as the source code.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Decoupling the Code&lt;/strong&gt;: Aliases can help in abstracting the actual file system paths, which means developers can think more about architecture and less about the file system.&lt;/li&gt;
&lt;/ol&gt;
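To build intuition for what a bundler does with the `@utils` alias from the import example above, here's a minimal sketch of alias-to-path rewriting. It's written in Python purely for illustration (Rspack's real resolver is far more involved), and `resolve_alias` and the `ALIASES` table are hypothetical names, not part of any bundler's API:

```python
import os

# Hypothetical alias table mirroring the '@utils' example above.
ALIASES = {"@utils": "libs/utils/src"}

def resolve_alias(import_path, aliases, root="."):
    """Rewrite an aliased import path to a real file-system path."""
    for alias, target in aliases.items():
        # Match the bare alias or any path underneath it.
        if import_path == alias or import_path.startswith(alias + "/"):
            suffix = import_path[len(alias):]
            return os.path.normpath(os.path.join(root, target + suffix))
    return import_path  # not aliased; leave untouched

print(resolve_alias("@utils/utility", ALIASES))  # libs/utils/src/utility
```

This is the whole trick: one lookup table replaces every deep relative path, which is why moving a library only requires editing the table.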




&lt;h3&gt;
  
  
  Tutorial
&lt;/h3&gt;

&lt;p&gt;Alright, so now that we have some base knowledge of what monorepos are, let's jump into a tutorial. This time around, we're going to set up a monorepo for an e-commerce store using NX and Rspack. We'll configure aliases to simplify module resolution, keeping our import statements clean and readable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Before we get started
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  At the minimum, a basic understanding of Javascript/Typescript and react&lt;/li&gt;
&lt;li&gt;  Node.js, yarn, or pnpm&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://nx.dev/getting-started/installation#installing-nx-globally"&gt;Install Nx globally&lt;/a&gt; depending on your package manager. I'm using &lt;code&gt;npm add --global nx@latest&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  Create a project with &lt;code&gt;npx create-nx-workspace myrspackapp --preset=@nx/rspack&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Step 1: Creating an NX Workspace
&lt;/h4&gt;

&lt;p&gt;First, we need to create a new NX workspace by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx create-nx-workspace monrepo-example --preset=@nx/rspack
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Follow the prompts to finish configuring the workspace. Once the setup is complete, navigate into your new workspace directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd monorepo-example
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 2: Adding Applications and Libraries
&lt;/h4&gt;

&lt;p&gt;With NX, you can have multiple apps and libraries co-existing in a single monorepo. Since we installed Nx globally, we can use the nx command to generate our libraries.&lt;/p&gt;

&lt;p&gt;For this example, let's add an application and two libraries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nx generate @nrwl/react:application store-front
nx generate @nrwl/react:library ui-elements --directory=shared
nx generate @nrwl/react:library product-services --directory=shared
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will set up a React application (&lt;code&gt;store-front&lt;/code&gt;) and two shared libraries (&lt;code&gt;ui-elements&lt;/code&gt; and &lt;code&gt;product-services&lt;/code&gt;) under the &lt;code&gt;libs/shared/&lt;/code&gt; directory.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3: Configuring Aliases in rspack.config.js
&lt;/h4&gt;

&lt;p&gt;Create a &lt;code&gt;rspack.config.js&lt;/code&gt; file at the root of your workspace if it doesn't exist, and define your aliases:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const path = require('path');

module.exports = (config) =&amp;gt; {
  config.resolve.alias = {
    ...config.resolve.alias,

    // For that beautiful UI:
    '@ui-elements': path.resolve(__dirname, 'libs/shared/ui-elements/src'),

    // For the business logic behind products:
    '@product-services': path.resolve(__dirname, 'libs/shared/product-services/src'),

    // Continue as your project grows...
  };
  return config;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 4: Using Aliases in Your Code
&lt;/h4&gt;

&lt;p&gt;Now that we have our aliases, we can use them within our applications and libraries. For example, in our &lt;code&gt;store-front&lt;/code&gt; app, we can now import from our shared libraries like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { CheckoutButton, StoreBanner } from '@ui-elements';
import { fetchAllProducts } from '@product-services';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 5: Building the Application
&lt;/h4&gt;

&lt;p&gt;To build your application with the new alias configurations, simply use the NX build command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nx build store-front
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NX will invoke Rspack with your custom configuration including the aliases, and you should see a successful build output.&lt;/p&gt;

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;By using aliases with Rspack in an NX monorepo, you're laying down a scalable, maintainable foundation that streamlines development workflows and helps manage complexity as your project grows. Keep an eye out for more content, and if you have any questions or want to join the community, find us on &lt;a href="https://github.com/web-infra-dev/rspack"&gt;Github&lt;/a&gt; and &lt;a href="https://discord.gg/4wXUpdrK2z"&gt;join us on our ByteDance Open Source Discord server!&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>BMF 📹 + Hugging Face🤗, The New Video Processing BFFs</title>
      <dc:creator>Josh Alphonse</dc:creator>
      <pubDate>Thu, 01 Feb 2024 01:27:15 +0000</pubDate>
      <link>https://dev.to/bytedanceoss/bmf-hugging-face-the-new-video-processing-bffs-59m8</link>
      <guid>https://dev.to/bytedanceoss/bmf-hugging-face-the-new-video-processing-bffs-59m8</guid>
      <description>&lt;p&gt;&lt;em&gt;TL;DR&lt;/em&gt;&lt;em&gt;if you want to test this tutorial before we start, try it out&lt;/em&gt; &lt;em&gt;&lt;a href="https://colab.research.google.com/drive/1eQxiZc2vZeyOggMoFle_b0xnblupbiXd?usp=sharing"&gt;here&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://huggingface.co/"&gt;Hugging Face&lt;/a&gt; has created a major shift in the AI community. It fuels cutting-edge open source machine learning/AI models and datasets. The Hugging Face community is thriving with great ideas and innovations to the point where the possibilities seem endless.&lt;/p&gt;

&lt;p&gt;Hugging Face is revolutionizing Natural Language Processing (NLP) with state-of-the-art solutions for tasks like translation, summarization, sentiment analysis, and contextual understanding. Its arsenal of pre-trained models makes it a robust platform for diverse NLP tasks, streamlining the integration of machine learning functionalities. Hugging Face simplifies the training, evaluation, and deployment of models with a user-friendly interface. The more I used Hugging Face in my own personal projects, the more I felt inspired to combine it with &lt;a href="https://babitmf.github.io/"&gt;Babit Multimedia Framework (BMF)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you're reading this and are not familiar with BMF, it's a cross-platform multimedia processing framework by ByteDance Open Source. Currently, BMF is used to process over 2 billion videos a day across multiple social media apps. Can this get complex? Yes, it sure can. However, in this article, I'll break it all down, so you know how to create unique experiences across any type of media platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why&lt;/strong&gt; &lt;strong&gt;BMF&lt;/strong&gt;&lt;strong&gt;?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;BMF stands out with its multilingual support, putting it ahead in the video processing game. BMF excels in various scenarios like video transcoding, editing, videography, and analysis. The integration of advanced technologies like Hugging Face with BMF is a game-changer for complex multimedia processing challenges.&lt;/p&gt;

&lt;p&gt;Before we get started with the tutorial, let me share with you some ideas I envision coming to life with BMF + Hugging Face:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Multimedia Content Analysis:&lt;/strong&gt; Leveraging Hugging Face's NLP models, BMF can delve deep into textual data associated with multimedia content, like subtitles or comments, for richer insights.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Accessibility:&lt;/strong&gt; NLP models can automatically generate video captions, enhancing accessibility for the hard-of-hearing or deaf community.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Content Categorization and Recommendation:&lt;/strong&gt; These models can sort multimedia content based on textual descriptions, paving the way for sophisticated recommendation systems.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Enhanced User Interaction:&lt;/strong&gt; Sentiment analysis on user comments can offer valuable insights into user engagement and feedback for content improvement.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What now?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Open Source AI is creating the building blocks of the future. Generative AI impacts all industries, and this leads me to think about how generative AI can impact the future of broadcasting and video processing. I experimented with BMF and Hugging Face to create the building blocks for a broadcasting service that uses AI to create unique experiences for viewers. So, enough about the background, let's get it going!&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What we'll build&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Follow along as we build a video processing pipeline with BMF that uses the &lt;a href="https://huggingface.co/runwayml/stable-diffusion-v1-5"&gt;runwayml/stable-diffusion-v1-5&lt;/a&gt; model to generate an image and overlay it on top of an encoded video. If that didn't make sense, don't worry, here's a picture for reference:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O_ajNTgr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://bytedance.us.larkoffice.com/space/api/box/stream/download/asynccode/%3Fcode%3DODQ1NDg1YWEyYjg0MjM1NjhkNWIzNTgzZTIyODkxNzFfZE5TNkJsNnRVWUtqakJPd2hnWlNhYmd1UG9LN0l3WmNfVG9rZW46UmE1aGJ4OFJYb0p5dkF4MXVTRHVqamZDc2tlXzE3MDY3NTA2MjQ6MTcwNjc1NDIyNF9WNA" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O_ajNTgr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://bytedance.us.larkoffice.com/space/api/box/stream/download/asynccode/%3Fcode%3DODQ1NDg1YWEyYjg0MjM1NjhkNWIzNTgzZTIyODkxNzFfZE5TNkJsNnRVWUtqakJPd2hnWlNhYmd1UG9LN0l3WmNfVG9rZW46UmE1aGJ4OFJYb0p5dkF4MXVTRHVqamZDc2tlXzE3MDY3NTA2MjQ6MTcwNjc1NDIyNF9WNA" alt="" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So why is this significant? The image of the panda is AI-generated, and combined with BMF, we can send it down a processing pipeline that places it on top of our video. Think about it! You could be building a video broadcasting service where, during live streams, you'd like to generate images from a simple prompt and display them for your audience. Or you could be using BMF to edit your videos and want to add some AI-generated art. This tutorial is just one example: BMF combined with models created by the Hugging Face community opens up a whole new world of possibilities.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Let's Get Started&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  A GPU (I'm using a Google Colab A100 GPU; V100 or T4 GPUs also work, just a bit slower)&lt;/li&gt;
&lt;li&gt;  Install &lt;a href="https://babitmf.github.io/docs/bmf/getting_started_yourself/install/#pip"&gt;BMFGPU&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  Python 3.9-3.10 (strictly required to work with BMF)&lt;/li&gt;
&lt;li&gt;  FFMPEG&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can find all the BMF installation docs &lt;a href="https://babitmf.github.io/docs/bmf/getting_started_yourself/install/#ffmpeg"&gt;here&lt;/a&gt;. The docs will highlight more system requirements if you decide to run things locally.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Getting Started&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Begin by ensuring that essential toolkits like Hugging Face Transformers and BMF are installed in your Python environment. Use pip for installation:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Initial Setup&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; First, we'll clone the following repository to get the video we want to process. (If you're coding along and want to use your own video, create your own repo with a short video file so you can clone it just like I did, or simply save the video to the directory you're working in.)
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/Joshalphonse/Bmf-Huggingface.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt; Install BabitMF-GPU to accelerate your video processing pipeline with BMF's GPU capabilities
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install BabitMF-GPU
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt; Install the following dependencies
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install requests diffusers transformers torch accelerate scipy safetensors moviepy Pillow tqdm numpy modelscope==1.4.2 open_clip_torch pytorch-lightning
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt; Install FFmpeg. The BMF framework uses FFmpeg's video decoders and encoders as built-in modules for video decoding and encoding, so you'll need a supported FFmpeg installation before using BMF. After installing, you can verify it with the commands that follow.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install ffmpeg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dpkg -l | grep -i ffmpeg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ffmpeg -version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The package below surfaces BMF's C++ logs in the Colab console; otherwise, only Python logs are printed. You can skip this step if you're not in a Colab or IPython notebook environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install wurlitzer
%load_ext wurlitzer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt; Add the directory of the GitHub repository we cloned to Python's module search path. We'll need this path later on.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import sys
sys.path.insert(0, '/content/Bmf-Huggingface')
print(sys.path)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Creating the Module&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now it's time for the fun part. We'll create a module to process the video. Here's the module I created; I'll break it down for you below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import bmf
from bmf import bmf_sync, Packet
from bmf import SubGraph
from diffusers import StableDiffusionPipeline
import torch

model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a photo of a panda eating waffles"
image = pipe(prompt).images[0]

image.save("panda_photo.png")

class video_overlay(SubGraph):

    def create_graph(self, option=None):
        # create source stream
        self.inputs.append('source')
        source_stream = self.graph.input_stream('source')
        # create overlay stream
        overlay_streams = []
        for (i, _) in enumerate(option['overlays']):
            self.inputs.append('overlay_' + str(i))
            overlay_streams.append(self.graph.input_stream('overlay_' + str(i)))

        # pre-processing for source layer
        info = option['source']
        output_stream = (
            source_stream.scale(info['width'], info['height'])
                .trim(start=info['start'], duration=info['duration'])
                .setpts('PTS-STARTPTS')
        )

        # overlay processing
        for (i, overlay_stream) in enumerate(overlay_streams):
            overlay_info = option['overlays'][i]

            # overlay layer pre-processing
            p_overlay_stream = (
                overlay_stream.scale(overlay_info['width'], overlay_info['height'])
                    .loop(loop=overlay_info['loop'], size=10000)
                    .setpts('PTS+%f/TB' % (overlay_info['start']))
            )

            # calculate overlay parameter
            x = 'if(between(t,%f,%f),%s,NAN)' % (overlay_info['start'],
                                                 overlay_info['start'] + overlay_info['duration'],
                                                 str(overlay_info['pox_x']))
            y = 'if(between(t,%f,%f),%s,NAN)' % (overlay_info['start'],
                                                 overlay_info['start'] + overlay_info['duration'],
                                                 str(overlay_info['pox_y']))
            if overlay_info['loop'] == -1:
                repeat_last = 0
                shortest = 1
            else:
                repeat_last = overlay_info['repeat_last']
                shortest = 1

            # do overlay
            output_stream = (
                output_stream.overlay(p_overlay_stream, x=x, y=y,
                                      repeatlast=repeat_last)
            )

        # finish creating graph
        self.output_streams = self.finish_create_graph([output_stream])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Code Breakdown:
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Importing Required Modules:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import bmf
from bmf import bmf_sync, Packet
from bmf import SubGraph
from diffusers import StableDiffusionPipeline
import torch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;bmf&lt;/code&gt; and its components are imported to harness the functionalities of the Babit Multimedia Framework for video processing tasks.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;SubGraph&lt;/code&gt; is a class in BMF, used to create a customizable processing node.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;StableDiffusionPipeline&lt;/code&gt; is imported from the &lt;code&gt;diffusers&lt;/code&gt; library that allows the generation of images using text prompts.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;torch&lt;/code&gt; is the PyTorch library used for machine learning applications, which Stable Diffusion relies on.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Configuring the Stable Diffusion Model:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  The Stable Diffusion model is loaded with the specified &lt;code&gt;model_id&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  The &lt;code&gt;torch_dtype&lt;/code&gt; parameter ensures the model uses lower precision to reduce memory usage.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;.to("cuda")&lt;/code&gt; moves the model to GPU for faster computation if CUDA is available.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Generating an Image Using Stable Diffusion:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;prompt = "a photo of a panda eating waffles"
image = pipe(prompt).images[0]
image.save("panda_photo.png")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  We then set a text prompt to generate an image of "a photo of a panda eating waffles".&lt;/li&gt;
&lt;li&gt;  The image is created and saved to "panda_photo.png".&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Defining a Custom BMF SubGraph for Video Overlay:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class video_overlay(SubGraph):
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;video_overlay&lt;/code&gt; class is derived from &lt;code&gt;SubGraph&lt;/code&gt;. This class will define a custom graph for video overlay operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Creating the Graph:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def create_graph(self, option=None):
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  The &lt;code&gt;create_graph&lt;/code&gt; method is where the actual graph (workflow) of the video and its overlays is constructed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Processing Source and Overlay Streams:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;self.inputs.append('source')
source_stream = self.graph.input_stream('source')
overlay_streams = []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  Registers input streams for the source and prepares a list of overlay input streams.&lt;/li&gt;
&lt;/ul&gt;
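The naming scheme is simple: one `source` stream plus one numbered input per overlay. Reproduced standalone (as an illustrative helper, not part of the BMF API), the names `create_graph` registers look like this:

```python
def register_inputs(num_overlays):
    # Mirrors the self.inputs.append(...) calls in create_graph:
    # one 'source' input, then 'overlay_0', 'overlay_1', ...
    return ["source"] + ["overlay_%d" % i for i in range(num_overlays)]

print(register_inputs(2))  # ['source', 'overlay_0', 'overlay_1']
```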

&lt;h4&gt;
  
  
  Scaling and Trimming Source Video:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;info = option['source']
output_stream = (
    source_stream.scale(info['width'], info['height']).trim(start=info['start'], duration=info['duration']).setpts('PTS-STARTPTS'))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  The source video is scaled and trimmed according to the specified &lt;code&gt;option&lt;/code&gt;. Adjustments are made for the timeline placement.&lt;/li&gt;
&lt;/ul&gt;
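Note that `scale(info['width'], info['height'])` forces the exact dimensions given in `option['source']`, which can distort a source with a different aspect ratio. If you want to preserve the aspect ratio, you could compute the target dimensions first with a small helper like this (a hypothetical utility, not part of BMF):

```python
def fit_within(src_w, src_h, max_w, max_h):
    # Pick the scale factor that fits the source inside the target box
    # without distorting it, then round down to integer pixel sizes.
    scale = min(max_w / src_w, max_h / src_h)
    return int(src_w * scale), int(src_h * scale)

print(fit_within(1920, 1080, 1280, 720))  # (1280, 720)
```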

&lt;h4&gt;
  
  
  Scaling and Looping Overlay Streams:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;p_overlay_stream = (
    overlay_stream.scale(overlay_info['width'], overlay_info['height']).loop(loop=overlay_info['loop'], size=10000).setpts('PTS+%f/TB' % (overlay_info['start'])))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  Each overlay is scaled and looped as needed, providing a dynamic and flexible overlay process.&lt;/li&gt;
&lt;/ul&gt;
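The two chains above map directly onto FFmpeg-style filter expressions. As a minimal sketch (the `source_chain` and `overlay_chain` helpers below are hypothetical illustrations, not part of the BMF API), here is how the scale/trim/setpts and scale/loop/setpts steps would look written out:

```python
def source_chain(info):
    # Mirrors: source_stream.scale(w, h).trim(start=..., duration=...)
    #          .setpts('PTS-STARTPTS')
    return (f"scale={info['width']}:{info['height']},"
            f"trim=start={info['start']}:duration={info['duration']},"
            "setpts=PTS-STARTPTS")

def overlay_chain(info):
    # Mirrors: overlay_stream.scale(w, h).loop(loop=..., size=10000)
    #          .setpts('PTS+%f/TB' % start)
    setpts = "setpts=PTS+%f/TB" % info["start"]
    return (f"scale={info['width']}:{info['height']},"
            f"loop=loop={info['loop']}:size=10000,{setpts}")

print(source_chain({"width": 1280, "height": 720, "start": 0, "duration": 10}))
```

`PTS-STARTPTS` rebases the trimmed clip so it starts at time zero, while `PTS+start/TB` delays an overlay so it appears at the right point on the shared timeline.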

&lt;h4&gt;
  
  
  Overlaying on the Source Stream:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output_stream = (
    output_stream.overlay(p_overlay_stream, x=x, y=y,
                          repeatlast=repeat_last))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  Overlays are added to the source stream at the calculated position and with the proper configuration. This allows multiple overlays to exist within the same timeframe without conflicts.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Finalizing the Graph:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;self.output_streams = self.finish_create_graph([output_stream])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  The final output streams are set, which concludes the creation of the graph. After this, it's time to encode the video and display it however we want.&lt;/li&gt;
&lt;/ul&gt;
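Putting the steps together, the order of operations inside `create_graph` can be modelled in plain Python. `build_overlay_plan` is a hypothetical walk-through helper (not BMF code) that returns the ordered list of operations the subgraph performs:

```python
def build_overlay_plan(option):
    # Trace the create_graph steps: scale/trim/setpts on the source,
    # then scale/loop/overlay per overlay, then finalize.
    plan = []
    src = option["source"]
    plan.append(("scale", src["width"], src["height"]))
    plan.append(("trim", src["start"], src["duration"]))
    plan.append(("setpts", "PTS-STARTPTS"))
    for i, ov in enumerate(option["overlays"]):
        plan.append(("scale_overlay", i, ov["width"], ov["height"]))
        plan.append(("loop_overlay", i, ov["loop"]))
        plan.append(("overlay", i, ov["pox_x"], ov["pox_y"]))
    plan.append(("finish",))
    return plan
```

Tracing the plan for a one-overlay option dict makes the execution order explicit before you run the real graph.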

&lt;h3&gt;
  
  
  &lt;strong&gt;Applying Hugging Face Model&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Let's add our image as an overlay to the video file, and break down each section of the code to explain how it works.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;input_video_path = "/content/Bmf-Huggingface/black_and_white.mp4"
logo_path = "/content/panda_photo.png"
output_path = "./complex_edit.mp4"
dump_graph = 0

duration = 10

overlay_option = {
    "dump_graph": dump_graph,
    "source": {
        "start": 0,
        "duration": duration,
        "width": 1280,
        "height": 720
    },
    "overlays": [
        {
            "start": 0,
            "duration": duration,
            "width": 300,
            "height": 200,
            "pox_x": 0,
            "pox_y": 0,
            "loop": 0,
            "repeat_last": 1
        }
    ]
}

my_graph = bmf.graph({
    "dump_graph": dump_graph
})

logo_1 = my_graph.decode({'input_path': logo_path})['video']

video1 = my_graph.decode({'input_path': input_video_path})

overlay_streams = list()
overlay_streams.append(bmf.module([video1['video'], logo_1], 'video_overlay', overlay_option, entry='__main__.video_overlay')[0])

bmf.encode(
    overlay_streams[0],
    video1['audio'],
    {"output_path": output_path}
    ).run()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Let's break this down too
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Defining Paths and Options:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;input_video_path = "/content/Bmf-Huggingface/black_and_white.mp4"
logo_path = "/content/panda_photo.png"
output_path = "./complex_edit.mp4"
dump_graph = 0
duration = 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;input_video_path&lt;/code&gt;: Specifies the file path to the input video.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;logo_path&lt;/code&gt;: File path to the image (logo) you want to overlay on the video.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;output_path&lt;/code&gt;: The file path where the edited video will be saved.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;dump_graph&lt;/code&gt;: A debugging tool in BMF that can be set to &lt;code&gt;1&lt;/code&gt; to visualize the graph but is set to &lt;code&gt;0&lt;/code&gt; here, meaning no graph will be dumped.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;duration&lt;/code&gt;: The duration in seconds for the overlay to be visible in the video.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Overlay Configuration:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;overlay_option = {
    "dump_graph": dump_graph,
    "source": {
        "start": 0,
        "duration": duration,
        "width": 1280,
        "height": 720
    },
    "overlays": [
        {
            "start": 0,
            "duration": duration,
            "width": 300,
            "height": 200,
            "pox_x": 0,
            "pox_y": 0,
            "loop": 0,
            "repeat_last": 1
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;overlay_option&lt;/code&gt;: A dictionary that defines the settings for the source video and the overlay.&lt;/li&gt;
&lt;li&gt;  For the source, the width and height you want to scale the video to, and when the overlay should start and end are specified.&lt;/li&gt;
&lt;li&gt;  For the overlays, detailed options such as position, size, and behavior (like &lt;code&gt;loop&lt;/code&gt; and &lt;code&gt;repeat_last&lt;/code&gt;) are defined.&lt;/li&gt;
&lt;/ul&gt;
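If you apply several logos, building this dictionary by hand gets repetitive. A small, hypothetical convenience helper (`make_overlay_option` is not part of BMF; it just assembles the same structure shown above) could look like this:

```python
def make_overlay_option(duration, overlays, width=1280, height=720, dump_graph=0):
    # Each entry in `overlays` is a (width, height, pox_x, pox_y) tuple.
    # All overlays share the source's start time and duration here.
    return {
        "dump_graph": dump_graph,
        "source": {"start": 0, "duration": duration,
                   "width": width, "height": height},
        "overlays": [
            {"start": 0, "duration": duration, "width": w, "height": h,
             "pox_x": x, "pox_y": y, "loop": 0, "repeat_last": 1}
            for (w, h, x, y) in overlays
        ],
    }

option = make_overlay_option(10, [(300, 200, 0, 0)])
```

The resulting dict is identical in shape to the `overlay_option` above, so it can be passed straight to the `video_overlay` module.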

&lt;h3&gt;
  
  
  Creating a BMF Graph:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my_graph = bmf.graph({"dump_graph": dump_graph
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;my_graph&lt;/code&gt; is an instance of BMF graph which sets up the processing graph (pipeline), with &lt;code&gt;dump_graph&lt;/code&gt; passed as an option.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Decoding the Logo and Video Streams:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;logo_1 = my_graph.decode({'input_path': logo_path})['video']
video1 = my_graph.decode({'input_path': input_video_path})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  The video and logo are loaded and decoded to be processed. This decoding extracts the video streams to be used in subsequent steps.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Creating Overlay Streams:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;overlay_streams = list()
overlay_streams.append(bmf.module([video1['video'], logo_1], 'video_overlay', overlay_option, entry='__main__.video_overlay')[0])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  An empty list &lt;code&gt;overlay_streams&lt;/code&gt; is created to hold the video layers.&lt;/li&gt;
&lt;li&gt;  The &lt;code&gt;bmf.module&lt;/code&gt; function is used to create an overlay module, where the source video and logo are processed using the &lt;code&gt;video_overlay&lt;/code&gt; class defined previously with the corresponding options.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Encoding the Final Output:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bmf.encode(
    overlay_streams[0],
    video1['audio'],
    {"output_path": output_path}
).run()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  The final video stream, with the overlay applied, and the original audio from the input video are encoded together into a new output file specified by &lt;code&gt;output_path&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  The &lt;code&gt;.run()&lt;/code&gt; method is called to execute the encoding process.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our final output should look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--idKWpg8V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://bytedance.us.larkoffice.com/space/api/box/stream/download/asynccode/%3Fcode%3DNTg4MjMwNjMwZWQ3NGJiYTFlNDJiOGM2YWEwMGJhNjZfYlY1Q0FlYUdLaGFZM1dPeUs2aEFxcUcyUWlHNElTSE5fVG9rZW46SXZka2JuQkhyb2lCMFR4aWNkUXVpaWtBc3RkXzE3MDY3NTA2MjQ6MTcwNjc1NDIyNF9WNA" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--idKWpg8V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://bytedance.us.larkoffice.com/space/api/box/stream/download/asynccode/%3Fcode%3DNTg4MjMwNjMwZWQ3NGJiYTFlNDJiOGM2YWEwMGJhNjZfYlY1Q0FlYUdLaGFZM1dPeUs2aEFxcUcyUWlHNElTSE5fVG9rZW46SXZka2JuQkhyb2lCMFR4aWNkUXVpaWtBc3RkXzE3MDY3NTA2MjQ6MTcwNjc1NDIyNF9WNA" alt="" width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it! We've explored a practical example of using Babit Multimedia Framework (BMF) for a video editing task, using AI to create an image we can overlay on a video. You now know how to set up a BMF graph, decode the input streams, create overlay modules, and finally encode the edited video with the overlay in place. In the future, I'll consider adding more AI models, like one that improves resolution or even one that creates a video from text. Through the power of BMF and Hugging Face open-source models, you can create complex video editing workflows with overlays that change dynamically over time, offering vast creative possibilities.&lt;/p&gt;

&lt;p&gt;Try it out on CoLab and tell us what you think:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://colab.research.google.com/drive/1eQxiZc2vZeyOggMoFle_b0xnblupbiXd?usp=sharing"&gt;https://colab.research.google.com/drive/1eQxiZc2vZeyOggMoFle_b0xnblupbiXd?usp=sharing&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://discord.gg/4wXUpdrK2z"&gt;Join us on our ByteDance Open Source Discord Server!&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>ffmpeg</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Leveraging GPU Acceleration in BMF for High-Performance Video Processing</title>
      <dc:creator>Josh Alphonse</dc:creator>
      <pubDate>Thu, 25 Jan 2024 01:26:05 +0000</pubDate>
      <link>https://dev.to/bytedanceoss/leveraging-gpu-acceleration-in-bmf-for-high-performance-video-processing-35p8</link>
      <guid>https://dev.to/bytedanceoss/leveraging-gpu-acceleration-in-bmf-for-high-performance-video-processing-35p8</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;GPU acceleration has become a cornerstone of high-performance multimedia processing. Babit Multimedia Framework (BMF) harnesses this power, offering exceptional speed and efficiency for video processing tasks. In this blog post, we'll explore how BMF utilizes GPU acceleration and provide practical examples to help you integrate this capability into your projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding GPU Acceleration in BMF
&lt;/h2&gt;

&lt;p&gt;BMF's architecture is designed to exploit the parallel processing capabilities of GPUs. This is crucial for tasks like video transcoding, real-time rendering, and applying complex filters or effects, where the computational intensity can be staggering.&lt;/p&gt;

&lt;p&gt;GPU acceleration is like a turbocharged engine in a sports car, propelling you forward at unimaginable speeds. It's all about doing more in less time. Imagine you're editing a video for your YouTube channel or streaming a live esports tournament; every millisecond counts. This is where BMF's GPU prowess shines, slicing through processing times like a hot knife through butter.&lt;/p&gt;

&lt;p&gt;BMF includes performance optimizations for heterogeneous CPU/GPU scenarios that many existing FFmpeg-based solutions lack, and enriches the processing pipeline. Taking compression and super-resolution scenarios as examples, measurements show BMF's total throughput increasing by roughly 15%.&lt;/p&gt;

&lt;p&gt;BMF's GPU codec support is inherited from FFmpeg: it uses NVENC, NVDEC, and other dedicated GPU hardware to accelerate video encoding and decoding, and FFmpeg's CUDA filters to accelerate image preprocessing, so there is no barrier for users already familiar with FFmpeg. At this stage, BMF supports GPU decoding, encoding, and one-to-many transcoding.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Benefits of GPU Acceleration:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; Speed: GPUs can process multiple operations simultaneously, drastically reducing processing time.&lt;/li&gt;
&lt;li&gt; Efficiency: Offloading intensive tasks to the GPU frees up the CPU for other operations, improving overall system performance.&lt;/li&gt;
&lt;li&gt; Scalability: As video resolutions and processing demands increase, GPUs can scale to meet these challenges.&lt;/li&gt;
&lt;/ol&gt;
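To make the speed claim concrete, here is a quick back-of-envelope model based on Amdahl's law (the numbers are illustrative, not BMF benchmarks): if a fraction &lt;code&gt;p&lt;/code&gt; of the pipeline is offloaded to a GPU that runs it &lt;code&gt;s&lt;/code&gt; times faster, the overall speedup is &lt;code&gt;1 / ((1 - p) + p / s)&lt;/code&gt;.

```python
def overall_speedup(offload_fraction, gpu_speedup):
    # Amdahl's-law estimate: the non-offloaded part still runs at CPU speed,
    # so it bounds the total gain no matter how fast the GPU is.
    return 1.0 / ((1.0 - offload_fraction) + offload_fraction / gpu_speedup)

# Offloading 90% of the work to a GPU that runs it 10x faster:
print(round(overall_speedup(0.9, 10.0), 2))  # → 5.26
```

This is why BMF pipelines try to keep as much of the workflow as possible (decode, filter, encode) on the GPU: the remaining CPU fraction quickly dominates otherwise.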

&lt;h2&gt;
  
  
  Setting Up GPU Acceleration in BMF
&lt;/h2&gt;

&lt;p&gt;Before diving into coding, ensure your environment is set up to leverage GPU capabilities. This typically involves installing the necessary GPU drivers and libraries, like CUDA for NVIDIA GPUs. The BMF documentation provides detailed setup instructions. You can use tools like Colab or your own hardware. BMF also runs on Windows, macOS, and Linux.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code Example: Basic GPU-Accelerated Video Processing
&lt;/h3&gt;

&lt;p&gt;Let's start with a simple example of GPU-accelerated video processing in BMF. This example assumes you have BMF and all necessary GPU libraries installed. If you haven't installed it yet, &lt;a href="https://babitmf.github.io/docs/bmf/getting_started_yourself/install/"&gt;click this link&lt;/a&gt; to install BMF for your system setup. You can also use tools like Colab. If you're using a GPU, just make sure you meet the hardware requirements.&lt;/p&gt;

&lt;p&gt;Prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Python 3.9&lt;/li&gt;
&lt;li&gt;  CMake&lt;/li&gt;
&lt;li&gt;  FFmpeg 4&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Python, C++, or Go experience&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Python code
&lt;/h2&gt;


&lt;p&gt;In this example, BMF calls the GPU codec functions for video transcoding. BMF largely follows FFmpeg's parameters; the hardware-acceleration options below are where the true magic happens.&lt;/p&gt;

&lt;p&gt;First, &lt;a href="https://babitmf.github.io/docs/bmf/getting_started_yourself/create_a_graph/"&gt;create a BMF graph&lt;/a&gt; and a decode module, specify the hardware accelerator parameter as &lt;code&gt;cuda&lt;/code&gt;, and decoding will run on the GPU.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;bmf&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test_gpu_transcode&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="nf"&gt;print &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Testing gpu transcoding......&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;input_video_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input.flv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;output_video_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output.mp4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="n"&gt;graph&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bmf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;video&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input_path&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;input_video_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;video_params&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;hwaccel&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cuda&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, apply CUDA filters to the decoded video stream. In BMF, multiple CUDA filters can be chained serially; in this case we use &lt;code&gt;scale_cuda&lt;/code&gt; and &lt;code&gt;yadif_cuda&lt;/code&gt;. Then we pass the audio &amp;amp; video streams to an encode module, specifying the codec as &lt;code&gt;h264_nvenc&lt;/code&gt; and the pixel format as &lt;code&gt;cuda&lt;/code&gt;. Once the entire pipeline is built, call &lt;code&gt;run()&lt;/code&gt; to start execution.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bmf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;video&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;video&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;ff_filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;scale_cuda&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1280&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;720&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;ff_filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;yadif_cuda&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;video&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;audio&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output_path&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;output_video_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;video_params&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;codec&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;h264_nvenc&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pix_fmt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cuda&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Full Code&lt;/strong&gt;
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;bmf&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test_gpu_transcode&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="nf"&gt;print &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Testing gpu transcoding......&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;input_video_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input.flv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;output_video_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output.mp4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="n"&gt;graph&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bmf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;video&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input_path&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;input_video_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;video_params&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;hwaccel&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cuda&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bmf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;video&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;video&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;ff_filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;scale_cuda&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1280&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;720&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;ff_filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;yadif_cuda&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;video&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;audio&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output_path&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;output_video_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;video_params&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;codec&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;h264_nvenc&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pix_fmt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cuda&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Advanced GPU-Accelerated Video Processing
&lt;/h2&gt;

&lt;p&gt;For more complex scenarios, BMF allows fine-tuning of GPU settings and integration with other GPU-accelerated libraries.&lt;/p&gt;

&lt;p&gt;BMF also introduces CV-CUDA-accelerated image preprocessing. To fully exploit the computing power of &lt;a href="https://developer.nvidia.com/cuda-toolkit"&gt;CUDA&lt;/a&gt;, BMF integrates &lt;a href="https://developer.nvidia.com/cv-cuda"&gt;CV-CUDA&lt;/a&gt;, an operator library developed by NVIDIA specifically for computer vision applications. At this stage it provides about 45 common high-performance operators. It offers rich C/C++/Python APIs, supports batched input of images of different sizes, enables zero-copy data exchange with other deep learning frameworks, and ships with application examples for a variety of scenarios.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CUDA operators you can use:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Blur&lt;/li&gt;
&lt;li&gt;  Crop&lt;/li&gt;
&lt;li&gt;  Flip&lt;/li&gt;
&lt;li&gt;  Gamma&lt;/li&gt;
&lt;li&gt;  Rotate&lt;/li&gt;
&lt;li&gt;  Scale
&lt;/li&gt;
&lt;/ul&gt;
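A preprocessing pipeline built from these operators can be sketched in plain Python before committing to a BMF graph. `build_preprocess_plan` below is a hypothetical validation helper (not part of BMF or CV-CUDA) that checks requested steps against the operator list above and returns them in execution order:

```python
# Operators from the list above, lowercased for case-insensitive lookup.
SUPPORTED_CVCUDA_OPS = {"blur", "crop", "flip", "gamma", "rotate", "scale"}

def build_preprocess_plan(steps):
    # `steps` is a sequence of (operator_name, params_dict) pairs.
    # Unknown operators are rejected early, before any GPU work starts.
    plan = []
    for name, params in steps:
        if name.lower() not in SUPPORTED_CVCUDA_OPS:
            raise ValueError(f"unsupported operator: {name}")
        plan.append({"op": name.lower(), "params": dict(params)})
    return plan

plan = build_preprocess_plan([("Scale", {"size": "1280x720"}),
                              ("Rotate", {"angle": 90})])
```

Failing fast on an unsupported operator name is cheaper than discovering the problem mid-transcode, which is the main point of planning the pipeline up front.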

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test_gpu_transcode&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;  &lt;span class="c1"&gt;# Start of function named 'test_gpu_transcode'
&lt;/span&gt;    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Testing GPU transcoding...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;    &lt;span class="c1"&gt;# Print out a string "Testing GPU transcoding..." in the console
&lt;/span&gt;
    &lt;span class="c1"&gt;# Variables containing the paths of the input video and the path to save the output video (transcoded one)
&lt;/span&gt;    &lt;span class="n"&gt;input_video_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input.flv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;    &lt;span class="c1"&gt;# Path to the video file we want to transcode
&lt;/span&gt;    &lt;span class="n"&gt;output_video_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output.mp4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;  &lt;span class="c1"&gt;# Path to save the output video
&lt;/span&gt;
    &lt;span class="c1"&gt;# Create a BMF graph to represent a series of processing operations
&lt;/span&gt;    &lt;span class="n"&gt;graph&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bmf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# Call the 'decode' function of the created BMF graph. Input is the video file pointed by 'input_video_path'.
&lt;/span&gt;    &lt;span class="c1"&gt;# Use hardware acceleration on the GPU to decode the video (hwaccel means hardware accelerator)
&lt;/span&gt;    &lt;span class="n"&gt;video&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input_path&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;input_video_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;video_params&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;hwaccel&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cuda&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="c1"&gt;# Use NVIDIA CUDA technology for hardware accelerated decoding
&lt;/span&gt;        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="c1"&gt;# Call the 'encode' function to encode the video and audio streams.
&lt;/span&gt;    &lt;span class="c1"&gt;# The input video stream is first processed by a GPU scale module to resize it to 1280x720 pixels.
&lt;/span&gt;    &lt;span class="c1"&gt;# The encoded video will be saved to the path pointed by 'output_video_path'.
&lt;/span&gt;    &lt;span class="c1"&gt;# Use NVIDIA NVENC technology for GPU accelerated encoding,
&lt;/span&gt;    &lt;span class="c1"&gt;# and 'pix_fmt' is set to 'cuda' to let the GPU to read in the processed frames directly from its own memory.
&lt;/span&gt;    &lt;span class="n"&gt;bmf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;video&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;video&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;module&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;scale_gpu&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;size&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1280x720&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}),&lt;/span&gt;    &lt;span class="c1"&gt;# Scaling the video to dimension of 1280x720 pixels using GPU
&lt;/span&gt;        &lt;span class="n"&gt;video&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;audio&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;    &lt;span class="c1"&gt;# Including the audio stream in the processed video
&lt;/span&gt;        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output_path&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;output_video_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="c1"&gt;# Path to save the output video
&lt;/span&gt;            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;video_params&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;codec&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;h264_nvenc&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="c1"&gt;# Use H.264 codec for video encoding with NVENC technology
&lt;/span&gt;                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pix_fmt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cuda&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;    &lt;span class="c1"&gt;# The input video frames are in GPU memory
&lt;/span&gt;            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;    &lt;span class="c1"&gt;# Execute the graph operations
&lt;/span&gt;
&lt;span class="c1"&gt;# Now Call the above defined function
&lt;/span&gt;&lt;span class="nf"&gt;test_gpu_transcode&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;    &lt;span class="c1"&gt;# Call the 'test_gpu_transcode' function to start the whole process
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Example: Integrating AI Models for Video Enhancement
&lt;/h3&gt;

&lt;p&gt;BMF's flexibility enables the integration of AI models for tasks like super-resolution or frame interpolation. For a walkthrough of integrating a super-resolution model, &lt;a href="https://colab.research.google.com/github/BabitMF/bmf/blob/master/bmf/demo/video_enhance/bmf-enhance-demo.ipynb"&gt;check out this example notebook&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Sorcery with BMF and GPU Acceleration
&lt;/h2&gt;

&lt;p&gt;Let's look at some real-world scenarios where GPU-accelerated BMF works its magic:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; The Live Sports Event: Picture a live sports broadcast. With BMF's GPU acceleration, you can stream high-definition, slow-motion replays almost instantaneously. It's like having the ability to freeze time and zoom in on that crucial game-winning goal.&lt;/li&gt;
&lt;li&gt; Hollywood films: In film editing, BMF with GPU acceleration is your special effects wizard. Render stunning visual effects in a fraction of the time, bringing dragons to life or creating epic space battles that look breathtakingly real.&lt;/li&gt;
&lt;li&gt; The Viral Video Sensation: For content creators, time is of the essence. GPU-accelerated BMF is like having a superpower to edit and render viral-worthy videos in record time, ensuring you hit the trends before they fade.&lt;/li&gt;
&lt;li&gt; The Gaming Livestream: In the gaming world, live streaming with real-time effects is key. With BMF's GPU acceleration, you can stream your gameplay with high-quality graphics and overlays, keeping your audience glued to their screens.&lt;/li&gt;
&lt;li&gt; The AI-Powered Masterpiece: Dive into the future with AI-enhanced video processing. From upscaling vintage film footage to crystal-clear quality to applying real-time face filters in a video chat, BMF's GPU acceleration makes it all possible, and at lightning speeds.&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;GPU acceleration in BMF opens up a world of possibilities for high-performance video processing. By leveraging the power of GPUs, developers can achieve remarkable speed and efficiency in multimedia applications. The examples provided are just a starting point -- the real potential lies in how you apply these capabilities to your unique projects.&lt;/p&gt;

&lt;p&gt;Remember, the key to successful implementation is understanding your specific processing requirements and how best to utilize BMF's GPU acceleration features to meet those needs.&lt;/p&gt;

</description>
      <category>gpu</category>
      <category>videoprocessing</category>
      <category>ffmpeg</category>
      <category>ai</category>
    </item>
    <item>
      <title>The fastest way to use code splitting</title>
      <dc:creator>Josh Alphonse</dc:creator>
      <pubDate>Thu, 25 Jan 2024 01:20:30 +0000</pubDate>
      <link>https://dev.to/bytedanceoss/the-fastest-way-to-use-code-splitting-1b42</link>
      <guid>https://dev.to/bytedanceoss/the-fastest-way-to-use-code-splitting-1b42</guid>
      <description>&lt;h2&gt;
  
  
  Performance and Code Splitting with Rspack
&lt;/h2&gt;

&lt;p&gt;Performance can be a big deal, and every optimization comes with a trade-off. As applications grow in complexity, the need for efficient resource loading becomes increasingly vital. Enter code splitting: a technique that may not be new to you, but one that has revolutionized content delivery on the web. At the forefront of this revolution is Rspack, my web bundler of choice, which excels at optimizing and packaging web applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Code Splitting?
&lt;/h2&gt;

&lt;p&gt;Code splitting is the process of breaking down a JavaScript bundle into smaller chunks that can be loaded on demand. This is crucial for improving load times, particularly in large-scale applications. Instead of downloading the entire JavaScript bundle upfront, users only download the necessary code for their current page or feature, significantly reducing the initial load time.&lt;/p&gt;

&lt;p&gt;In the context of Rspack, code splitting can be implemented using dynamic imports. Dynamic imports enable you to load JavaScript modules dynamically at runtime, rather than including them in the main bundle.&lt;/p&gt;
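&lt;p&gt;As a minimal sketch of what a dynamic import does at runtime (here I load Node's built-in &lt;code&gt;path&lt;/code&gt; module as a stand-in for an application chunk; the helper name is just for illustration):&lt;/p&gt;

```javascript
// import() returns a promise that resolves to the module's namespace.
// Nothing is loaded until this function is actually called, which is
// the signal a bundler uses to emit the target module as its own chunk.
async function loadOnDemand(specifier) {
  const mod = await import(specifier);
  return mod;
}

// 'node:path' stands in for an app module such as './Components/ProductList'.
loadOnDemand('node:path').then((path) => {
  console.log(typeof path.join); // "function"
});
```

In a bundled app, the argument would be a relative module path, and the bundler would turn that call site into a network request for a separate chunk.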

&lt;h3&gt;
  
  
  Code Splitting with Rspack
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Efficient Performance:&lt;/strong&gt; Rspack is a highly performant tool. Utilizing concepts like code splitting and tree shaking, it ensures that your web application loads faster by loading only what's needed. With HTTP/2 support, it can split your code into many pieces that load in parallel, drastically improving loading times.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Loaders and Plugins:&lt;/strong&gt; Rspack boasts a variety of loaders and plugins to make the development process smooth. Loaders preprocess files, allowing you to bundle any static resource, while plugins provide a wide range of solutions such as bundle optimization, environment variable injection, and HTML generation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Import Magic Comments:&lt;/strong&gt; Rspack takes advantage of Webpack’s import syntax to provide the developer with succinct control over the chunk names, which can be useful in debugging and provides a way to control caching via customized chunk names.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Webpack Compatibility:&lt;/strong&gt; Rspack aims to maintain a high level of compatibility with Webpack's plugin and loader ecosystem. Developers familiar with Webpack's configuration can easily set up and configure Rspack. Many of Webpack’s features, such as code splitting, dynamic imports, module federation, hot module replacement, among others, are supported by Rspack as well.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
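&lt;p&gt;The magic comments mentioned above look like this. A quick sketch: the comment is ignored at runtime but read by the bundler, which uses it to name the emitted chunk (here Node's built-in &lt;code&gt;util&lt;/code&gt; module stands in for an application module):&lt;/p&gt;

```javascript
// The /* webpackChunkName */ magic comment names the chunk the bundler
// emits for this dynamic import; at runtime it is just a comment.
async function loadUtils() {
  // 'node:util' stands in for a module like './Components/ProductList'.
  const utils = await import(/* webpackChunkName: "utils" */ 'node:util');
  return utils;
}

loadUtils().then((utils) => {
  console.log(typeof utils.format); // "function"
});
```

A named chunk ("utils.js" instead of a numeric id) is easier to spot in the Network tab and gives you a stable file name to reason about for caching.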

&lt;h4&gt;
  
  
  Implementing Code Splitting in Rspack
&lt;/h4&gt;

&lt;p&gt;There are three primary methods for code splitting in Rspack:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Entry Points:&lt;/strong&gt; Manually split code using the entry configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SplitChunksPlugin:&lt;/strong&gt; Use this plugin to deduplicate and split chunks, extracting shared modules into a new chunk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic Imports:&lt;/strong&gt; Using the &lt;code&gt;import()&lt;/code&gt; syntax for dynamic imports to split code within modules.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each method has its own configuration approach and use case, providing flexibility and control over how your assets are generated and managed. In this blog post, we are going to use the dynamic imports method.&lt;/p&gt;
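&lt;p&gt;For context on the SplitChunksPlugin route, here is a sketch of what that configuration might look like if you configure Rspack directly in an &lt;code&gt;rspack.config.js&lt;/code&gt;; the keys mirror webpack's &lt;code&gt;optimization.splitChunks&lt;/code&gt;, and the file paths and group name are illustrative:&lt;/p&gt;

```javascript
// rspack.config.js (sketch): extract everything imported from
// node_modules into a shared "vendors" chunk.
module.exports = {
  entry: { main: './src/index.jsx' },
  optimization: {
    splitChunks: {
      chunks: 'all', // apply to both initial and async chunks
      cacheGroups: {
        vendors: {
          test: /[\\/]node_modules[\\/]/, // match third-party modules
          name: 'vendors',
        },
      },
    },
  },
};
```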

&lt;p&gt;For detailed code examples and further explanation, you can refer to Rspack's official code splitting documentation.&lt;/p&gt;




&lt;h2&gt;
  
  
  Setting Up Rspack/Rsbuild
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Node.js&lt;/li&gt;
&lt;li&gt;Working knowledge of JavaScript and your framework of choice&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before diving into code splitting techniques, ensure you have Rspack installed. Rspack supports frameworks like Svelte, React, Vue, SolidJS, NestJS, and Modern.JS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm create rsbuild@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you run this command, let it load and choose a framework you want to work with. For this example, I'll be using React. Rspack and React work well together, and they both have some built-in features we'll discuss later in this article.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exploring Code Splitting Techniques in Rspack
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Project Structure
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F871bwbt2dur5idykitkw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F871bwbt2dur5idykitkw.png" alt="Image description" width="558" height="810"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's a breakdown of some of the files in our project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;pages/&lt;/code&gt; : This directory contains different pages of your app, such as &lt;code&gt;Home.jsx&lt;/code&gt; and &lt;code&gt;ProductList.jsx&lt;/code&gt;.

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;App.jsx&lt;/code&gt;: Where you define your primary routes and wrappers around your app.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;package.json&lt;/code&gt;: It contains metadata about the project, like the project name, version, dependencies, etc.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;rsbuild.config.mjs&lt;/code&gt;: The configuration file for configuring rsbuild settings.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Let's Begin
&lt;/h3&gt;

&lt;p&gt;Code splitting is a feature supported by Modern.js, and it works alongside Rspack by splitting code into different "chunks". This is a crucial optimization technique used when bundling large applications, and it works a bit differently than in other frameworks.&lt;/p&gt;

&lt;h4&gt;
  
  
  Define Routes
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;App.jsx&lt;/code&gt; will define the routing for your application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Suspense&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;lazy&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;BrowserRouter&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;Router&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Routes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Route&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Link&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react-router-dom&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Home&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;lazy&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./Components/Home&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ProductList&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;lazy&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./Components/ProductList&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;App&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Router&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Suspense&lt;/span&gt; &lt;span class="na"&gt;fallback&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Loading...&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;nav&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
            &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Link&lt;/span&gt; &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"/"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Home&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Link&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
            &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Link&lt;/span&gt; &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"/products"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Products&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Link&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;li&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;ul&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;nav&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Routes&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Route&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"/"&lt;/span&gt; &lt;span class="na"&gt;element&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Home&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Route&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"/products"&lt;/span&gt; &lt;span class="na"&gt;element&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ProductList&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Routes&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Suspense&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Router&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;App&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;React's lazy function is combined with Suspense to dynamically import the component for each route. When the Route is rendered, React will automatically load the chunk containing the corresponding component.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Home Component&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;Home.jsx&lt;/code&gt; file, define the component that &lt;code&gt;App.jsx&lt;/code&gt; imports with &lt;code&gt;React.lazy&lt;/code&gt;. It will be automatically split into its own chunk by Rspack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Components/Home.js
import React from 'react';

const Home = () =&amp;gt; {
  return &amp;lt;h1&amp;gt;Welcome to Our Online Store!&amp;lt;/h1&amp;gt;;
};

export default Home;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;ProductList Component&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// /Components/ProductList
import React from 'react';
import { Link } from 'react-router-dom';

const products = [
  // Dummy products data
  { id: 1, name: 'Product 1' },
  { id: 2, name: 'Product 2' },
  { id: 3, name: 'Product 3' },
];

const ProductList = () =&amp;gt; {
  return (
    &amp;lt;div&amp;gt;
      &amp;lt;h1&amp;gt;Product List&amp;lt;/h1&amp;gt;
      &amp;lt;ul&amp;gt;
        {products.map((product) =&amp;gt; (
          &amp;lt;li key={product.id}&amp;gt;
            &amp;lt;Link to={`/product/${product.id}`}&amp;gt;{product.name}&amp;lt;/Link&amp;gt;
          &amp;lt;/li&amp;gt;
        ))}
      &amp;lt;/ul&amp;gt;
    &amp;lt;/div&amp;gt;
  );
};

export default ProductList;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;index.js&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your &lt;code&gt;index.js&lt;/code&gt; file is the starting point of your application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React from 'react';
import ReactDOM from 'react-dom/client';
import App from './App';
import Home from './Components/Home';

console.log('index.js')

const root = ReactDOM.createRoot(document.getElementById('root'));
root.render(
  &amp;lt;React.StrictMode&amp;gt;
    &amp;lt;App /&amp;gt;
    &amp;lt;Home/&amp;gt;
  &amp;lt;/React.StrictMode&amp;gt;,
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;rsbuild.config.mjs&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { defineConfig } from '@rsbuild/core';
import { pluginReact } from '@rsbuild/plugin-react';

export default defineConfig({
  plugins: [pluginReact()],
  mode: 'development',
  entry: {
    index: './src/index.jsx',
  },
  output: {
    filename: '[name].bundle.js',
  },

});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this configuration React with Rspack will automatically split each page component (Home.jsx and ProductList.jsx) into its own chunk that gets loaded only when the corresponding route is navigated to using dynamic imports.&lt;/p&gt;

&lt;p&gt;You can see this behavior in the Network tab of your browser's developer tools when you navigate between the different pages of your app. When you switch from &lt;code&gt;/&lt;/code&gt; to &lt;code&gt;/products&lt;/code&gt;, for example, you will notice the browser loading a new JavaScript file for that page.&lt;/p&gt;

&lt;p&gt;Code splitting in React with Rspack enhances application performance by loading only necessary code chunks on demand. This way of splitting code ensures that the user only downloads the necessary code for the current page rather than all the code at once, substantially improving the load time of your application. Code splitting is just one piece of the optimization puzzle, but it's a significant one.&lt;/p&gt;

&lt;p&gt;Be sure to &lt;a href="https://discord.gg/4wXUpdrK2z"&gt;join us on our ByteDance Open Source Discord Server!&lt;/a&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>tutorial</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Deploying ByConity with Kubernetes: A Step-by-Step Guide</title>
      <dc:creator>Josh Alphonse</dc:creator>
      <pubDate>Tue, 03 Oct 2023 19:39:24 +0000</pubDate>
      <link>https://dev.to/joshalphonse/deploying-byconity-with-kubernetes-a-step-by-step-guide-4261</link>
      <guid>https://dev.to/joshalphonse/deploying-byconity-with-kubernetes-a-step-by-step-guide-4261</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsd57mc7sczp3gpq6n2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsd57mc7sczp3gpq6n2y.png" alt="Image description" width="300" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1haj2ixls9odknk5h95.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1haj2ixls9odknk5h95.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Introduction:&lt;/p&gt;

&lt;p&gt;ByConity is a powerful open-source data warehouse developed by ByteDance. It's designed for high-performance computing and utilizes a computing-storage separation architecture. For those who don't know, ByConity's origin can be traced back to ClickHouse DB. Initially, ByteDance was implementing ClickHouse as their main Data warehouse. As the business grew and enhanced performance was needed, the team decided to fork ClickHouse to start ByConity as we know it today. &lt;br&gt;
If you want to harness its capabilities within your Kubernetes cluster, this step-by-step guide will walk you through the deployment process. Whether you want to set up a local environment for testing or deploy it in your self-built Kubernetes cluster, we've got you covered.&lt;/p&gt;

&lt;p&gt;Why Kubernetes?&lt;br&gt;
Deploying ByConity with Kubernetes offers several advantages. Most tech enterprises require agility, cost-effectiveness, and superior performance to harness real-time data. Kubernetes applications exhibit versatility, capable of running seamlessly across diverse environments, from public clouds to isolated on-premise setups. The Kubernetes operator pattern simplifies intricate analytic infrastructure management, transforming it into easily controllable assets.&lt;br&gt;
To understand how Kubernetes works, I like to think of it as an orchestra...&lt;br&gt;
Imagine Kubernetes as the conductor of the orchestra, responsible for managing and coordinating various musical instruments (containers) to produce a harmonious symphony (your application). Let's break it down:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Conductor (Kubernetes Master): Kubernetes has a conductor, which is like the conductor of an orchestra. This conductor is the brain behind the operation, making high-level decisions and coordinating the entire performance.&lt;/li&gt;
&lt;li&gt;Musicians (Containers): Each musician in the orchestra corresponds to a container within Kubernetes. Each musician (container) has a specific role and plays a part in the overall composition.&lt;/li&gt;
&lt;li&gt;Sheet Music (Pods): Kubernetes groups containers together in units called "Pods." A Pod is like a piece of sheet music that defines which instruments (containers) should play together in harmony. The conductor (Kubernetes) ensures that the right combinations of instruments (containers) are playing together.&lt;/li&gt;
&lt;li&gt;Stage (Nodes): The stage represents the physical or virtual machines where the musicians (containers) perform. Kubernetes manages these stages (nodes) and assigns musicians (containers) to them based on availability and resource requirements.&lt;/li&gt;
&lt;li&gt;Instruments (Images): Musicians use instruments to create music. In Kubernetes, these instruments are represented by container images. Each musician (container) uses a specific instrument (image) to perform its part.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Kubernetes orchestra ensures efficient resource allocation, scalability, and high availability for your data warehouse. It allows you to easily manage and scale your data infrastructure, optimizing resource utilization and reducing operational complexity. This approach aligns well with modern DevOps practices, streamlining deployment, automation, and monitoring. By using Kubernetes, you can build a robust, flexible, and cost-effective data warehouse with ByConity. If you'd like to learn more about Kubernetes, visit &lt;a href="https://kubernetes.io"&gt;https://kubernetes.io&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;What can I use it for?&lt;br&gt;
Here are some use cases for deploying ByConity with Kubernetes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Live data statistics dashboard&lt;/li&gt;
&lt;li&gt;System link monitoring&lt;/li&gt;
&lt;li&gt;ETL calculation&lt;/li&gt;
&lt;li&gt;Real-time data access&lt;/li&gt;
&lt;li&gt;Behavior log analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And much more! Now, let's start our deployment method. For this post, we're going to assume you have some experience with Kubernetes. There are a few things we need to have ready first before we begin.&lt;/p&gt;

&lt;p&gt;How to Deploy ByConity Locally&lt;br&gt;
Before diving into the full deployment process, have all your prerequisites ready to go.&lt;/p&gt;

&lt;p&gt;Prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A managed Kubernetes service like Azure, Google, or DigitalOcean. I'll be using an AWS EKS cluster in this example. Choose the one that fits your or your team/organization's needs.&lt;/li&gt;
&lt;li&gt;Install and set up kubectl in your local environment: a command-line utility used for interacting with Kubernetes clusters.&lt;/li&gt;
&lt;li&gt;Install Helm in your local environment: a package manager for Kubernetes that simplifies the deployment and management of containerized applications, services, and resources in Kubernetes clusters.&lt;/li&gt;
&lt;li&gt;Install Kind and Docker.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the ByConity deployment documentation and head over to Deploy ByConity in Kubernetes. In this post, we're going to be using the local deployment demo version. If you have your own self-built Kubernetes cluster, you'll find those instructions in the same doc.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxr3rinq0gns5zjfpt44g.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxr3rinq0gns5zjfpt44g.PNG" alt="Image description" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start by cloning the ByConity deployment code from GitHub:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone git@github.com:ByConity/byconity-deploy.git
cd byconity-deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open up your IDE and let's check out the YAML file we'll be working with. In this example, we'll use values.yaml in the k8s folder (examples -&amp;gt; k8s -&amp;gt; values.yaml). This YAML-formatted file already has default values for ByConity. If you decide to deploy your own self-built cluster, you can declare variables to be passed into your templates. Afterwards, replace the storageClassName.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2lvlezhu81z8waam44x8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2lvlezhu81z8waam44x8.png" alt="Image description" width="800" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use Kind to configure a local Kubernetes cluster. Note that Kind is not intended for production use.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind create cluster --config examples/kind/kind-byconity.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Test to ensure the local Kind cluster is ready:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl cluster-info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Initialize the ByConity demo cluster:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install with fdb CRD first
helm upgrade --install --create-namespace --namespace byconity -f ./examples/k8s/values.yaml byconity ./chart/byconity --set fdb.enabled=false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqr6zdp4scpavfmniujgt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqr6zdp4scpavfmniujgt.png" alt="Image description" width="800" height="212"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Then install the FDB (FoundationDB) cluster:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install with fdb cluster
helm upgrade --install --create-namespace --namespace byconity -f ./examples/k8s/values.yaml byconity ./chart/byconity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3opkv3x1fg9gzxp0u3i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3opkv3x1fg9gzxp0u3i.png" alt="Image description" width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Wait until all the pods are ready:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n byconity get po
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ygoh3tovho3wsj3tnsf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ygoh3tovho3wsj3tnsf.png" alt="Image description" width="800" height="550"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrds19rals7iqg5iui7d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrds19rals7iqg5iui7d.png" alt="Image description" width="800" height="566"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Now you can try it out:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n byconity exec -it sts/byconity-server -- bash
root@byconity-server-0:/# clickhouse client
172.16.1.1 :)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's test out a query: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02e5a8lfcqf4s88ftpsn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02e5a8lfcqf4s88ftpsn.png" alt="Image description" width="800" height="218"&gt;&lt;/a&gt;&lt;/p&gt;
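&lt;p&gt;If you want a quick smoke test of your own from the &lt;code&gt;clickhouse client&lt;/code&gt; prompt, something like the following should work (a hypothetical example; the &lt;code&gt;CnchMergeTree&lt;/code&gt; engine name is taken from ByConity's docs, so adjust to your setup if needed):&lt;/p&gt;

```sql
-- Hypothetical smoke test: create a table, insert a row, read it back
CREATE DATABASE IF NOT EXISTS demo;
CREATE TABLE demo.events (id UInt64, name String) ENGINE = CnchMergeTree() ORDER BY id;
INSERT INTO demo.events VALUES (1, 'hello byconity');
SELECT * FROM demo.events;
```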

&lt;ol&gt;
&lt;li&gt;To remove ByConity from your Kubernetes cluster, use the following command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm uninstall --namespace byconity byconity

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By following these steps, you can deploy ByConity with Kubernetes and harness its powerful data warehousing capabilities within your own environment. &lt;br&gt;
Join our community on Discord to meet developers and stay updated with the latest releases. &lt;/p&gt;

</description>
      <category>dataengineering</category>
      <category>kubernetes</category>
      <category>datascience</category>
      <category>database</category>
    </item>
    <item>
      <title>WFH: Automated plant monitor! Part 1</title>
      <dc:creator>Josh Alphonse</dc:creator>
      <pubDate>Fri, 14 Aug 2020 20:47:37 +0000</pubDate>
      <link>https://dev.to/joshalphonse/wfh-automated-plant-monitor-part-1-2j6</link>
      <guid>https://dev.to/joshalphonse/wfh-automated-plant-monitor-part-1-2j6</guid>
      <description>&lt;p&gt;So I've been working from home for the last few months, and I've been looking to get into a new space in tech. To start things off, I crawled my way into IoT. I was amazed by all of the new IoT products coming out, and they inspired me to create my own!&lt;/p&gt;

&lt;p&gt;I remember back when I was in college, a professor of mine introduced the Raspberry Pi to my class. We built simple web servers, and honestly, I didn't revisit the device until now, and that was years ago! So here we are, now in 2020! I've always wanted to grow my own herbs, but I'm also lazy when it comes to maintaining plants. So I decided to build a plant monitor with a Raspberry Pi 4 to help. &lt;/p&gt;

&lt;p&gt;To get started with part 1 I collected a few items:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Raspberry Pi 4&lt;/li&gt;
&lt;li&gt;DHT11 humidity sensor&lt;/li&gt;
&lt;li&gt;Mouse&lt;/li&gt;
&lt;li&gt;Keyboard&lt;/li&gt;
&lt;li&gt;Monitor&lt;/li&gt;
&lt;li&gt;Power supply&lt;/li&gt;
&lt;li&gt;Python version 2.7 and up&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cool, so now that we have our supplies, let's boot up our Raspberry Pi and use the text editor of our choice. I'm using VS Code!&lt;/p&gt;

&lt;p&gt;The first step is to connect your DHT11 sensor to the correct pins on your Raspberry Pi. In our case, use pins 1, 4, and 6.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3kg88j8k3ra0uxx3qy82.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3kg88j8k3ra0uxx3qy82.jpg" alt="Alt Text" width="800" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, let's add some code!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import RPi.GPIO as GPIO
import Adafruit_DHT
import time 

dht_sensor = Adafruit_DHT.DHT11
dht_pin = 14

y1_channel = 21
GPIO.setmode(GPIO.BCM)
GPIO.setup(y1_channel, GPIO.IN)

while True:
    humidity, temperature = Adafruit_DHT.read_retry(dht_sensor, dht_pin)
    moisture_reading = GPIO.input(y1_channel)
    if moisture_reading == GPIO.LOW:
        moisture = "Sufficient Moisture."
        moisture_db = 1
    else:
        moisture = "Low moisture, irrigation needed"


    print("Sensor data: Humidity = {0:0.2f} % Temp = {1:0.2f} deg C moisture: {2}".format(humidity, temperature, moisture))



    time.sleep(10)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I installed packages to read the GPIO sensors. Keep in mind that these are digital sensors. If you have an analog sensor, you'll need an additional part (an analog-to-digital converter) to convert the signal to digital. &lt;/p&gt;
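&lt;p&gt;Since the moisture sensor's output is digital, the whole decision comes down to a low/high check. Here's a minimal sketch of that logic pulled out into a plain function you can run anywhere, no GPIO needed (the function name is my own, not part of the code above):&lt;/p&gt;

```python
def moisture_status(reading_is_low):
    """Map a digital moisture reading to a human-readable status.

    These soil sensors pull their digital output LOW when enough
    moisture is detected, so a LOW reading means the soil is wet enough.
    """
    if reading_is_low:
        return "Sufficient moisture."
    return "Low moisture, irrigation needed."

print(moisture_status(True))   # wet soil
print(moisture_status(False))  # dry soil, time to water
```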

&lt;p&gt;The code is pretty straightforward, but as we get to part two, things will start to pick up. &lt;/p&gt;

&lt;p&gt;Till next time! &lt;/p&gt;

</description>
      <category>iot</category>
      <category>python</category>
      <category>javascript</category>
      <category>api</category>
    </item>
    <item>
      <title>WFH DAW STYLE</title>
      <dc:creator>Josh Alphonse</dc:creator>
      <pubDate>Thu, 09 Apr 2020 20:34:20 +0000</pubDate>
      <link>https://dev.to/joshalphonse/wfh-daw-style-5fmk</link>
      <guid>https://dev.to/joshalphonse/wfh-daw-style-5fmk</guid>
      <description>&lt;p&gt;It's been about a month since we were sent to work from home because of the global pandemic caused by COVID-19. Businesses have shut down, developers have been laid off, and it's hard to get certain supplies. Although these are not the best circumstances, there are always things we can do to have fun while we're at home! Music is one of the best tools you can use to improve your mood during these gloomy times. This week I decided to take my passion for music a bit further and started working on my own DAW.&lt;/p&gt;

&lt;p&gt;A Digital Audio Workstation, or DAW for short, is commonly used by musicians, mix engineers, and producers to create their projects. There are a number of great programs like Pro Tools, Logic, and Ableton. I use them all, but they each cost hundreds of dollars. Some of us are stuck inside, forced out of work, and can't afford to buy a DAW. Kids also can't go to school, so why not help them pick up a new hobby? This led me to build my own web app called Algorhythm, a DAW for the web browser.&lt;br&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flizk8qf7qrnc1q7lhmbp.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flizk8qf7qrnc1q7lhmbp.gif" alt="Alt Text" width="500" height="281"&gt;&lt;/a&gt;&lt;br&gt;
For this project I'm using React.js, Redux, Rails, and some NPM packages (midi-sounds-react is where the instruments come from). Most DAWs come with stock instruments, MIDI capabilities, and a BPM counter, as those are some basic tools to get started.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Faw6jqtztcnnz4td7igms.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Faw6jqtztcnnz4td7igms.png" alt="Alt Text" width="800" height="485"&gt;&lt;/a&gt;&lt;br&gt;
So we have a drum machine that works off a grid, and a keyboard. Algorhythm comes packed with over 1,500 sounds thanks to midi-sounds-react.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Component&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Pads&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../Components/Pads&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Controls&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../Components/controls&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;// import Bpm from "../Components/Bpm";&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../App.css&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;MIDISounds&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;midi-sounds-react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Sequncer&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;Component&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;drumSnare&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;drumBass&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;drumHiHat&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;35&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;drumClap&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;24&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;pads&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;bpm&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;108&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;start&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;numPads&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;playing&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;position&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;selectedDrum&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;46&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;145&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;35&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;volume&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;mute&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;open1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;open2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;userPads&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[],&lt;/span&gt;
    &lt;span class="na"&gt;loaded&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The sequencer has a few rows on the grid, and we can initialize which 'pads' should be active within state. I also initialized some other parameters, like volume and BPM, to start the user with some default settings.&lt;/p&gt;
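&lt;p&gt;The core idea behind the grid is simple: each row is a drum track, each column is a step, and on every tick the sequencer fires the drums whose cell is set to 1. Here's a tiny Python sketch of that loop (my own illustration, not the app's actual React code; the drum ids just mirror the kinds of values stored in state above):&lt;/p&gt;

```python
def active_drums(pads, step, drums):
    """Return the drum ids that should fire on a given step (column)."""
    return [drums[row] for row, track in enumerate(pads) if track[step] == 1]

# Two tracks, four steps: a bass pattern and a hi-hat pattern.
pads = [
    [1, 0, 1, 0],  # bass on steps 0 and 2
    [1, 1, 1, 1],  # hi-hat on every step
]
drums = [5, 35]  # example ids, like the drumBass/drumHiHat values in state

for step in range(len(pads[0])):
    print("step", step, "->", active_drums(pads, step, drums))
```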

&lt;p&gt;The next feature I want to add is the ability for users to join different rooms or sessions with one another. It would be great for collaboration, especially during the quarantine. A record and export feature is in the works as well. I'm having a lot of fun with this project and I'm excited to see where it goes. &lt;/p&gt;

</description>
      <category>javascript</category>
      <category>music</category>
      <category>productivity</category>
    </item>
    <item>
      <title>A Few Lessons I've Learned As A Developer Advocate (So Far...)</title>
      <dc:creator>Josh Alphonse</dc:creator>
      <pubDate>Mon, 24 Feb 2020 20:44:35 +0000</pubDate>
      <link>https://dev.to/joshalphonse/a-few-lessons-i-ve-learned-as-a-developer-advocate-so-far-4b4n</link>
      <guid>https://dev.to/joshalphonse/a-few-lessons-i-ve-learned-as-a-developer-advocate-so-far-4b4n</guid>
      <description>&lt;p&gt;A year ago, I don't think I knew what developer relations was or how one even gets into this role. Although I haven't been in the role for long, so much has happened that has allowed me to learn quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  To Start
&lt;/h2&gt;

&lt;p&gt;My official role is developer advocate! For those reading this who don't know what developer advocacy entails, I'll fill you in. As a developer advocate, I have the privilege of building relationships and engaging with the dev community. This isn't a traditional engineering role, but trust me, developer advocates are engineers. You need a strong technical background, relevant experience, or even to have shipped an application to prod. &lt;/p&gt;

&lt;p&gt;I went to college and got a degree in computer science, but at that time I didn't feel a sense of community. I started to feel the community I wanted to be a part of when I attended a bootcamp for a few months. Developers come in all different shapes and sizes and come from unique backgrounds. Day by day, the tech industry is getting more diverse, and with that, so are our engineering needs and wants! Having a developer relations team is one of the hottest trends right now because the value of having one is obvious. Companies realize that it's going to take more than just documentation, forums, or blogs to communicate with their consumers. They need dedicated engineers who can be on the ground level to engage with consumers directly. Word of mouth is a powerful tool, and the way advocates communicate will shape a community.    &lt;/p&gt;

&lt;p&gt;Developer advocates act as the bridge that connects people to the product and vice versa. In practice, a developer advocate draws on three main disciplines. &lt;/p&gt;

&lt;p&gt;Those are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;engineering&lt;/li&gt;
&lt;li&gt;marketing&lt;/li&gt;
&lt;li&gt;community&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These three disciplines are used to fill any gaps the technical consumer may have with the product. So with that being said lets jump into the three lessons I've learned!&lt;/p&gt;

&lt;h2&gt;
  
  
  Build Something Real!
&lt;/h2&gt;

&lt;p&gt;Anyone who works in tech, from product managers to engineers, knows that this career will always require you to learn something new. The same goes for developer advocates. Just having an understanding of why your product works isn't enough. You have to advocate for developers, so you must gain an understanding of their experience. Keep building example applications, but make them interesting and relate them to yourself! I'm a musician, and I've built a number of applications themed around my passion. Your engineering skills come into play, as you must think outside the box and build something meaningful and engaging for your users! Build applications that have a purpose. Your developers will build accordingly. &lt;/p&gt;

&lt;h2&gt;
  
  
  Community Is KEY!
&lt;/h2&gt;

&lt;p&gt;Advocates shape, educate, and facilitate the community. When you're done making an example application, you can take off your engineering hat and put on your community cap. Make a blog post and share how you built the example application. Post the code, explain why you implemented certain functions, and try to use best practices! You can take it a step further by creating video content or putting together a presentation. All the content you create is valid material for your meetups and conferences. The way you interact with your community affects the marketing of the product you are representing. In this role, you become the face of your product's community, and you have the ability to brand yourself! &lt;/p&gt;

&lt;h2&gt;
  
  
  Be Visible!
&lt;/h2&gt;

&lt;p&gt;It's really convenient to just go straight home after work, or crawl to your apartment after a day at a conference. But it's your job to be seen, and there are no better places than meetups, conferences, and happy hours. Loosen up and go to a variety of meetups with different topics and audiences. You'll be around like-minded people, gain exposure, and make friends who can shift your perspective. Being with other developers can spark conversations that will help your journey. &lt;/p&gt;

&lt;h2&gt;
  
  
  Don't Be An A-Hole!
&lt;/h2&gt;

&lt;p&gt;Just be a good person! Simple as that. Although this role is very technical, you are definitely required to use more of your soft skills throughout your journey. Become that go-to person for your product and brand yourself!   &lt;/p&gt;

&lt;p&gt;I haven't had this role for long, but I am adjusting to wearing many hats, and I'm grateful to be where I am!&lt;/p&gt;

</description>
      <category>devrel</category>
    </item>
  </channel>
</rss>
