<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: AnthonyCvn</title>
    <description>The latest articles on DEV Community by AnthonyCvn (@anthonycvn).</description>
    <link>https://dev.to/anthonycvn</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1126177%2Fa1400a8a-e234-4c01-a643-bcd8c1287768.png</url>
      <title>DEV Community: AnthonyCvn</title>
      <link>https://dev.to/anthonycvn</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/anthonycvn"/>
    <language>en</language>
    <item>
      <title>Air-Gapped Drone Data Operations with Delayed Sync and Auditability</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Tue, 24 Feb 2026 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/air-gapped-drone-data-operations-with-delayed-sync-and-auditability-55ne</link>
      <guid>https://dev.to/reductstore/air-gapped-drone-data-operations-with-delayed-sync-and-auditability-55ne</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9aq3u4rhjg760xmpz7u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9aq3u4rhjg760xmpz7u.png" alt="Architecture for Air-Gapped Drone Data" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Drones in air-gapped environments produce a &lt;strong&gt;lot&lt;/strong&gt; of data (camera images, telemetry, logs, model outputs). Storing this data reliably on each drone and syncing it to a ground station later can be hard. &lt;strong&gt;ReductStore&lt;/strong&gt; makes this easier: it's a lightweight, time-series object store that works offline and replicates data when a connection is available.&lt;/p&gt;

&lt;p&gt;This guide explains a simple setup where each drone stores data locally with labels, replicates records to a ground station based on what it detects, and keeps a clear audit trail of what was captured and replicated.&lt;/p&gt;

&lt;p&gt;What we'll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#drone-to-ground-architecture" rel="noopener noreferrer"&gt;&lt;strong&gt;Drone-to-Ground Architecture&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#setting-up-the-drone-node" rel="noopener noreferrer"&gt;&lt;strong&gt;Setting Up the Drone Node&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#storing-drone-data-with-labels" rel="noopener noreferrer"&gt;&lt;strong&gt;Storing Drone Data with Labels&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#setting-up-selective-replication" rel="noopener noreferrer"&gt;&lt;strong&gt;Setting Up Selective Replication&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#querying-for-audit-reports" rel="noopener noreferrer"&gt;&lt;strong&gt;Querying for Audit Reports&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#why-this-setup-works-well-for-drones" rel="noopener noreferrer"&gt;&lt;strong&gt;Why This Setup Works Well for Drones&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Drone-to-Ground Architecture&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#drone-to-ground-architecture" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The architecture has three main components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Each drone runs a small ReductStore server&lt;/strong&gt; to save images and telemetry locally on disk (this lets the drone operate fully offline).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A ground station runs a ReductStore instance&lt;/strong&gt; that receives replicated data for analysis and archiving.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ReductStore replication tasks&lt;/strong&gt; copy data from drone to ground based on labels and conditions (e.g., only records flagged as anomalies, plus context around them).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7h80aubzv31qrce5xw5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7h80aubzv31qrce5xw5.png" alt="Drone Workflow" width="800" height="785"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each drone pushes its data to the ground whenever it is connected. If the network disconnects, replication continues when the drone reconnects. This approach provides offline capability, lets you decide which data to replicate, and keeps a clear record of what happened.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up the Drone Node&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#setting-up-the-drone-node" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Start by running ReductStore on the drone's companion computer. Here is a minimal &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;services:
  reductstore:
    image: reduct/store:latest
    ports:
      - &lt;span class="s2"&gt;"8383:8383"&lt;/span&gt;
    environment:
      RS_API_TOKEN: &amp;lt;DRONE_TOKEN&amp;gt;
      RS_BUCKET_1_NAME: mission-data
      RS_BUCKET_1_QUOTA_TYPE: FIFO
      RS_BUCKET_1_QUOTA_SIZE: 10000000000 &lt;span class="c"&gt;# 10 GB&lt;/span&gt;
    volumes:
      - ./data:/data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This starts a ReductStore server with a &lt;code&gt;mission-data&lt;/code&gt; bucket that uses FIFO retention. Old data is deleted only when the 10 GB limit is reached, so the drone always keeps as much history as possible.&lt;/p&gt;

&lt;p&gt;The FIFO quota is volume-based, not time-based: records are deleted oldest-first only when the bucket reaches its size limit, not after a fixed retention period. This matters for drones that may sit idle between missions.&lt;/p&gt;
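&lt;p&gt;Because retention is volume-based, the history window depends entirely on your ingest rate. A quick back-of-the-envelope sketch (the 1 MB/s rate below is an assumption for illustration, not a figure from this setup):&lt;/p&gt;

```python
def retention_hours(quota_bytes, ingest_bytes_per_second):
    """Rough estimate of how much history a FIFO bucket holds
    before the oldest records start being evicted."""
    return quota_bytes / ingest_bytes_per_second / 3600

# 10 GB quota at ~1 MB/s (camera frames plus telemetry): roughly 2.8 hours
hours = retention_hours(10_000_000_000, 1_000_000)
```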

&lt;p&gt;If you prefer Snap instead of Docker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;snap &lt;span class="nb"&gt;install &lt;/span&gt;reductstore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That starts a ReductStore server on port &lt;code&gt;8383&lt;/code&gt; by default. You can then create the bucket using the &lt;strong&gt;&lt;a href="https://github.com/reductstore/reduct-cli" rel="noopener noreferrer"&gt;Reduct CLI&lt;/a&gt;&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;reduct-cli &lt;span class="nb"&gt;alias &lt;/span&gt;add drone &lt;span class="nt"&gt;-L&lt;/span&gt; http://localhost:8383 &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;DRONE_TOKEN&amp;gt;"&lt;/span&gt;
reduct-cli bucket create drone/mission-data &lt;span class="nt"&gt;--quota-type&lt;/span&gt; FIFO &lt;span class="nt"&gt;--quota-size&lt;/span&gt; 10GB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Storing Drone Data with Labels&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#storing-drone-data-with-labels" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Use labels to tag every record with mission context. This is what makes selective replication and auditing possible later. Here is an example using the Python SDK:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;hashlib&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;reduct&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt;


&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:8383&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api_token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;DRONE_TOKEN&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mission-data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Read a camera frame
&lt;/span&gt;        &lt;span class="n"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;frame.jpg&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;checksum&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;hashlib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sha256&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;hexdigest&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;timestamp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1_000_000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# microseconds
&lt;/span&gt;
        &lt;span class="c1"&gt;# Write with mission labels
&lt;/span&gt;        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;camera&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;labels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mission_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;m-2026-02-24-01&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;platform_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;uav-07&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;anomaly&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;false&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;confidence&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.95&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;checksum&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;checksum&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="n"&gt;content_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;image/jpeg&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Write telemetry as a CSV batch
&lt;/span&gt;        &lt;span class="n"&gt;csv_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ts,lat,lon,alt,speed&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;csv_data&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1708771200000000,47.3769,8.5417,450.2,12.5&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;csv_data&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1708771201000000,47.3770,8.5418,451.0,12.8&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;telemetry&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;csv_data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
            &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;labels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mission_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;m-2026-02-24-01&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;platform_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;uav-07&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;anomaly&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;false&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;checksum&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;hashlib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sha256&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;csv_data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;()).&lt;/span&gt;&lt;span class="nf"&gt;hexdigest&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="n"&gt;content_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text/csv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;anomaly&lt;/code&gt; label is important: it lets the replication task decide what to sync based on what the drone actually sees. For example, if the drone detects something unusual (an object, a warning, a low confidence score), it sets &lt;code&gt;anomaly=true&lt;/code&gt;. The replication task can then automatically sync that record — plus the context around it.&lt;/p&gt;
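&lt;p&gt;The labeling logic on the drone can be as simple as a small helper that maps the detector's output to label strings. This is a sketch, not part of the SDK; the detection flag and the 0.5 confidence threshold are assumptions you would tune for your own model:&lt;/p&gt;

```python
def detection_labels(mission_id, platform_id, detected, confidence, threshold=0.5):
    """Build the label dict for a record. A record is flagged as an anomaly
    when the onboard model reports a detection or its confidence drops below
    the threshold. Label values are strings, matching the article's convention."""
    anomaly = detected or confidence < threshold
    return {
        "mission_id": mission_id,
        "platform_id": platform_id,
        "anomaly": "true" if anomaly else "false",
        "confidence": f"{confidence:.2f}",
    }

labels = detection_labels("m-2026-02-24-01", "uav-07", detected=False, confidence=0.95)
# labels["anomaly"] == "false"
```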

&lt;p&gt;The &lt;code&gt;checksum&lt;/code&gt; label gives you a simple way to verify data integrity during audits.&lt;/p&gt;
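&lt;p&gt;Verifying a record later is then a one-liner: recompute the SHA-256 of the payload (e.g., fetched with the SDK) and compare it to the stored label. A minimal sketch:&lt;/p&gt;

```python
import hashlib

def verify_record(payload, labels):
    """Return True when the payload's SHA-256 matches the checksum label
    written at capture time."""
    return hashlib.sha256(payload).hexdigest() == labels.get("checksum", "")

frame = b"frame-bytes"
stored = {"checksum": hashlib.sha256(frame).hexdigest()}
verify_record(frame, stored)        # True: payload unchanged
verify_record(b"tampered", stored)  # False: payload differs from checksum
```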

&lt;h2&gt;
  
  
  Setting Up Selective Replication&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#setting-up-selective-replication" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Once the drone connects to a trusted network, replication sends only the relevant records to the ground station. The simplest approach is to replicate based on a label, for example only records where the drone detected an anomaly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;reduct-cli &lt;span class="nb"&gt;alias &lt;/span&gt;add drone &lt;span class="nt"&gt;-L&lt;/span&gt; http://localhost:8383 &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;DRONE_TOKEN&amp;gt;"&lt;/span&gt;

reduct-cli replica create drone/mission-to-ground &lt;span class="se"&gt;\&lt;/span&gt;
    mission-data &lt;span class="se"&gt;\&lt;/span&gt;
    https://&amp;lt;GROUND_TOKEN&amp;gt;@&amp;lt;ground-address&amp;gt;/drone-data &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--when&lt;/span&gt; &lt;span class="s1"&gt;'{"&amp;amp;anomaly": {"$eq": "true"}}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a replication task that copies only records where &lt;code&gt;anomaly=true&lt;/code&gt; from the drone's &lt;code&gt;mission-data&lt;/code&gt; bucket to the ground station.&lt;/p&gt;

&lt;h3&gt;
  
  
  Replicating with context (before and after)&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#replicating-with-context-before-and-after" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;In many cases, you don't just want the anomaly record itself — you also want to see what happened &lt;strong&gt;before&lt;/strong&gt; it. ReductStore supports this with the &lt;code&gt;#ctx_before&lt;/code&gt; and &lt;code&gt;#ctx_after&lt;/code&gt; directives. For example, to replicate each anomaly record plus 30 seconds of data before it and 10 seconds after:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"&amp;amp;anomaly"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"$eq"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"true"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"#ctx_before"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"30s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"#ctx_after"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"10s"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is powerful for drone operations: imagine the drone's onboard model detects an unexpected object. ReductStore will replicate that record &lt;strong&gt;and&lt;/strong&gt; the 30 seconds of camera frames leading up to the detection, so the ground team can review what happened.&lt;/p&gt;
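&lt;p&gt;On the ground side, reviewing that context means querying the same window the replication condition used. A small helper (an illustration, not an SDK function) converts an anomaly timestamp into start/stop bounds in microseconds, which can then be passed to a time-range query:&lt;/p&gt;

```python
def context_window(anomaly_ts_us, before_s=30, after_s=10):
    """Translate a ctx_before/ctx_after-style window into start/stop
    timestamps (microseconds) for querying the records replicated
    around an anomaly."""
    return anomaly_ts_us - before_s * 1_000_000, anomaly_ts_us + after_s * 1_000_000

start, stop = context_window(1_708_771_200_000_000)
# start == 1_708_771_170_000_000, stop == 1_708_771_210_000_000
```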

&lt;p&gt;You can provision this directly in Docker using environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;services:
  reductstore:
    image: reduct/store:latest
    ports:
      - &lt;span class="s2"&gt;"8383:8383"&lt;/span&gt;
    environment:
      RS_API_TOKEN: &amp;lt;DRONE_TOKEN&amp;gt;
      RS_BUCKET_1_NAME: mission-data
      RS_BUCKET_1_QUOTA_TYPE: FIFO
      RS_BUCKET_1_QUOTA_SIZE: 10000000000
      RS_REPLICATION_1_NAME: mission-to-ground
      RS_REPLICATION_1_SRC_BUCKET: mission-data
      RS_REPLICATION_1_DST_BUCKET: drone-data
      RS_REPLICATION_1_DST_HOST: https://&amp;lt;ground-address&amp;gt;
      RS_REPLICATION_1_DST_TOKEN: &amp;lt;GROUND_TOKEN&amp;gt;
      RS_REPLICATION_1_WHEN: |
        &lt;span class="o"&gt;{&lt;/span&gt;
          &lt;span class="s2"&gt;"&amp;amp;anomaly"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$$&lt;/span&gt;&lt;span class="s2"&gt;eq"&lt;/span&gt;: &lt;span class="s2"&gt;"true"&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;,
          &lt;span class="s2"&gt;"#ctx_before"&lt;/span&gt;: &lt;span class="s2"&gt;"30s"&lt;/span&gt;,
          &lt;span class="s2"&gt;"#ctx_after"&lt;/span&gt;: &lt;span class="s2"&gt;"10s"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    volumes:
      - ./data:/data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this setup, the drone can operate fully offline. Replication runs automatically when a connection is available and waits when it's not. It's also possible to pause replication tasks if needed. And because context is included, the ground team always has enough data to understand what triggered the event.&lt;/p&gt;

&lt;h2&gt;
  
  
  Querying for Audit Reports&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#querying-for-audit-reports" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;After a mission, you can query the ground station to check what was captured and replicated. Here is a simple example that lists all records from a specific mission:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import asyncio
from reduct import Client


async def main():
    async with Client("https://&amp;lt;ground-address&amp;gt;", api_token="&amp;lt;GROUND_TOKEN&amp;gt;") as client:
        bucket = await client.get_bucket("drone-data")

        # Query all camera records from a specific mission
        async for record in bucket.query(
            "camera",
            when={"&amp;amp;mission_id": {"$eq": "m-2026-02-24-01"}},
        ):
            print(
                f"ts={record.timestamp}, "
                f"anomaly={record.labels.get('anomaly')}, "
                f"checksum={record.labels.get('checksum')}, "
                f"size={record.size}"
            )


asyncio.run(main())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives you a clear log of every record in that mission: timestamp, anomaly flag, checksum, and size. You can use this to verify that all expected data arrived on the ground side.&lt;/p&gt;

&lt;p&gt;To go further, compare the checksums on the drone with the ground side to confirm nothing was altered during transfer. You can also check the error logs of the replication task to see if any records failed to replicate.&lt;/p&gt;
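&lt;p&gt;Once you have collected the per-timestamp checksum labels from both sides (for example, with two queries like the one above), the comparison itself is simple set logic. A sketch of the audit step:&lt;/p&gt;

```python
def diff_checksums(drone, ground):
    """Compare per-timestamp checksum labels collected from the drone and
    ground-station buckets. Returns timestamps missing on the ground side
    and timestamps whose checksums do not match."""
    missing = sorted(ts for ts in drone if ts not in ground)
    mismatched = sorted(ts for ts in drone if ts in ground and drone[ts] != ground[ts])
    return missing, mismatched

# Example: one record never arrived, one was altered in transit
drone_side = {100: "aaa", 200: "bbb", 300: "ccc"}
ground_side = {100: "aaa", 300: "xxx"}
missing, mismatched = diff_checksums(drone_side, ground_side)
# missing == [200], mismatched == [300]
```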

&lt;h2&gt;
  
  
  Why This Setup Works Well for Drones&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#why-this-setup-works-well-for-drones" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Drones have specific constraints that general-purpose databases don't handle well. Here is what makes this setup practical:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Full offline operation.&lt;/strong&gt; Drones store everything locally and don't need a network connection during the mission. Data is safe on disk until sync happens.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic sync when connected.&lt;/strong&gt; When the drone lands or connects to a trusted network, replication picks up where it left off. No manual file transfers, no rsync scripts, no USB sticks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smart replication with context.&lt;/strong&gt; You don't have to sync everything. The replication task filters by labels and can include past records around each event using &lt;code&gt;#ctx_before&lt;/code&gt;. The ground team gets exactly what they need to understand what happened.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disk never fills up unexpectedly.&lt;/strong&gt; FIFO retention removes the oldest data only when the quota limit is reached. The drone always keeps as much history as possible without running out of space mid-mission.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easy auditing.&lt;/strong&gt; Every record has a timestamp, labels, and a checksum. After a mission, you can query the ground station and verify exactly what was captured and what was synced.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Store any file type.&lt;/strong&gt; Camera frames, telemetry CSV, logs, MCAP files, model outputs. Everything goes into the same system with the same interface.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Next Steps&lt;a href="https://www.reduct.store/blog/air-gapped-drone-data#next-steps" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;If you want to go deeper, check out these articles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://dev.to/reductstore/distributed-storage-in-mobile-robotics-1oe0"&gt;Distributed Storage in Mobile Robotics&lt;/a&gt;&lt;/strong&gt; for a similar setup with mobile robots and S3 cloud backend&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://dev.to/reductstore/how-to-store-and-manage-robotic-data-3ojp"&gt;How to Store and Manage Robotics Data&lt;/a&gt;&lt;/strong&gt; for a broader look at ReductStore features for robotics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://www.reduct.store/docs/guides/data-replication" rel="noopener noreferrer"&gt;Data Replication Guide&lt;/a&gt;&lt;/strong&gt; for the full documentation on replication tasks, filters, and modes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://www.reduct.store/docs/conditional-query" rel="noopener noreferrer"&gt;Conditional Query Reference&lt;/a&gt;&lt;/strong&gt; for all available conditional query operators you can use in replication filters and queries&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;I hope you found this article helpful! If you have any questions or feedback, don't hesitate to reach out on our &lt;a href="https://community.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community&lt;/strong&gt;&lt;/a&gt; forum.&lt;/p&gt;

</description>
      <category>aerospace</category>
      <category>robotics</category>
      <category>database</category>
    </item>
    <item>
      <title>Comparing Data Management Tools for Robotics</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Thu, 04 Dec 2025 09:26:57 +0000</pubDate>
      <link>https://dev.to/reductstore/comparing-data-management-tools-for-robotics-5a61</link>
      <guid>https://dev.to/reductstore/comparing-data-management-tools-for-robotics-5a61</guid>
      <description>&lt;p&gt;Modern robots collect a lot of data from sensors, cameras, logs, and system outputs. Managing this data well is important for debugging, performance tracking, and training machine learning models.&lt;/p&gt;

&lt;p&gt;Over the past few years, we've been building a storage system from scratch. As part of that work, we spoke with many robotics teams across different industries to understand their challenges with data management.&lt;/p&gt;

&lt;p&gt;Here's what we heard often:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Only a subset of what robots generate is actually useful&lt;/li&gt;
&lt;li&gt;Network connections are not always stable or fast&lt;/li&gt;
&lt;li&gt;On-device storage is limited (hard drive swaps are not practical)&lt;/li&gt;
&lt;li&gt;Teams rely on manual workflows with scripts and raw files&lt;/li&gt;
&lt;li&gt;It's hard to find and extract the right data later&lt;/li&gt;
&lt;li&gt;ROS bag files get large quickly and are difficult to manage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article, we compare four tools built to handle robotics data: &lt;strong&gt;ReductStore&lt;/strong&gt;, &lt;strong&gt;Foxglove&lt;/strong&gt;, &lt;strong&gt;Rerun&lt;/strong&gt;, and &lt;strong&gt;Heex&lt;/strong&gt;. We look at how they work, what they're good at, and which use cases they support.&lt;/p&gt;

&lt;p&gt;If you're working with robots and need to organize, stream, or store data more effectively, this overview should help.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Criteria for Comparison
&lt;/h2&gt;

&lt;p&gt;When picking a data tool for robotics, focus on these areas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Types&lt;/strong&gt;
Robotics is a large field with many sensor types. The tool should support the data you work with, such as:

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Telemetry:&lt;/em&gt; Lightweight (GPS, IMU, joints), ideal for monitoring.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Downsampled Data:&lt;/em&gt; Lower-rate images or lidar for incident review without high storage cost.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Full-Resolution:&lt;/em&gt; Raw sensor outputs for deep debugging or training. This is storage-intensive but essential for some applications.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Integration&lt;/strong&gt;
The tool should work with what you already use, like ROS, Grafana, MQTT, cloud platforms (S3, Azure, Google Cloud), and your development environment to avoid extra glue code and simplify workflows.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Performance and Scalability&lt;/strong&gt;
Data must move quickly (both locally and to the cloud). Large files or slow queries can block robots or delay analysis.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Ease of Use and APIs&lt;/strong&gt;
A simple UI and solid API support make it easier to automate, scale, and adapt the tool to different use cases.&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;Tool Overviews&lt;/h2&gt;

&lt;h3&gt;ReductStore&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03tsk44ak9or6allpkk1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03tsk44ak9or6allpkk1.png" alt="ReductStore Dashboard" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ReductStore&lt;/strong&gt; is a storage and streaming system designed for robotics data. It works both on the robot and in central storage (on-premise/self-hosted or in the cloud) with the same interface and SDKs (in Python, C++, Go, JavaScript/TypeScript, or Rust). That means your code stays the same whether you're reading local or remote data, or building a browser-based dashboard.&lt;/p&gt;

&lt;p&gt;To move data to the cloud, ReductStore uses &lt;strong&gt;conditional replication&lt;/strong&gt;. You can define rules to upload only certain records: by label, rule, or event. For example, replicate all incident data, or just 1 out of 10 entries for routine monitoring.&lt;/p&gt;

&lt;p&gt;ReductStore handles storage limits on edge devices with &lt;strong&gt;FIFO retention&lt;/strong&gt;. Old data is deleted only when the device is full. Each bucket can have different rules, so you can keep more images and less telemetry, for example.&lt;/p&gt;
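&lt;p&gt;As a sketch, per-bucket quotas can be provisioned through environment variables; the variable names below follow ReductStore's provisioning scheme, but the bucket identifiers and sizes are made up for illustration:&lt;/p&gt;

```yaml
# Hypothetical provisioning fragment: two buckets with different FIFO quotas,
# so image history is kept longer than telemetry before old data is dropped.
RS_BUCKET_IMAGES_NAME: camera-images
RS_BUCKET_IMAGES_QUOTA_TYPE: FIFO
RS_BUCKET_IMAGES_QUOTA_SIZE: 50GB
RS_BUCKET_TELEMETRY_NAME: telemetry
RS_BUCKET_TELEMETRY_QUOTA_TYPE: FIFO
RS_BUCKET_TELEMETRY_QUOTA_SIZE: 1GB
```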

&lt;p&gt;With an &lt;strong&gt;S3 backend&lt;/strong&gt;, ReductStore batches small records together before uploading. This cuts down the number of requests and lowers cloud storage costs. For observability, you can connect &lt;strong&gt;Grafana&lt;/strong&gt; to ReductStore to create dashboards with system metrics and sensor data. For MCAP files, ReductStore supports shareable query links that open directly in &lt;strong&gt;Foxglove v1/v2&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It also lets you &lt;strong&gt;filter or merge records server-side&lt;/strong&gt;. For example, you can pull all temperature readings above a threshold over a time range without downloading full datasets.&lt;/p&gt;
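&lt;p&gt;As a minimal sketch of such a server-side filter (the entry name and label are hypothetical; the condition follows ReductStore's conditional query syntax, where labels are referenced with an &amp;amp; prefix):&lt;/p&gt;

```python
# Condition in ReductStore's conditional query syntax: "&temperature"
# refers to a record label, "$gt" means "greater than". Only records
# whose temperature label exceeds 75 are returned by the server.
when = {"&temperature": {"$gt": 75}}

# With the Python SDK against a running server (sketch, not executed here):
#
#   async for record in bucket.query("sensor_logs", start, stop, when=when):
#       payload = await record.read_all()

print(when)
```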

&lt;p&gt;Want more technical detail? Check out &lt;a href="https://www.reduct.store/blog/database-for-robotics" rel="noopener noreferrer"&gt;&lt;strong&gt;The Missing Database for Robotics Is Out&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;Foxglove and MCAP&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkvdrb2pukv2h4k9jwpu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkvdrb2pukv2h4k9jwpu.png" alt="Foxglove Dashboard" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Foxglove&lt;/strong&gt; is a browser-based visualization and observability tool for robotics. It supports &lt;strong&gt;ROS 1, ROS 2&lt;/strong&gt;, and &lt;strong&gt;MCAP logs&lt;/strong&gt;, and handles data types like telemetry, camera feeds, lidar, and depth maps.&lt;/p&gt;

&lt;p&gt;It uses &lt;strong&gt;MCAP&lt;/strong&gt;, an open-source log format built for robotics, to store high-resolution data efficiently. You can explore MCAP files interactively in &lt;strong&gt;Foxglove Studio&lt;/strong&gt; or stream them programmatically.&lt;/p&gt;

&lt;p&gt;Foxglove provides an &lt;strong&gt;agent&lt;/strong&gt; that detects new MCAP files on the robot and uploads them to the cloud automatically. This requires robots to record short rosbag segments (typically a few minutes each), which are closed and rotated continuously.&lt;/p&gt;

&lt;p&gt;It integrates natively with &lt;strong&gt;ROS topics, services, and actions&lt;/strong&gt;, and offers &lt;strong&gt;WebSocket and REST APIs&lt;/strong&gt;. It also connects to major cloud providers like &lt;strong&gt;AWS, Azure,&lt;/strong&gt; and &lt;strong&gt;Google Cloud&lt;/strong&gt; for scalable storage.&lt;/p&gt;

&lt;p&gt;The interface is built for time-series and sensor data, with interactive 2D/3D views, plots, and drag-and-drop panels for quick setup and review.&lt;/p&gt;

&lt;h3&gt;Rerun&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2whlpk1u114zfrutvs3f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2whlpk1u114zfrutvs3f.png" alt="Rerun Dashboard" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rerun&lt;/strong&gt; is an open-source visualization solution for time-series and multimodal data. It supports data types like images, point clouds, lidar, depth maps, tensors, and other sensor streams.&lt;/p&gt;

&lt;p&gt;Its main strength is combining flexible logging with a fast, built-in 3D viewer designed for robotics and extended reality (XR) applications. For large datasets, Rerun provides a &lt;strong&gt;column-oriented API&lt;/strong&gt; to speed up ingestion and reduce memory usage. It also uses efficient internal structures to minimize allocations and optimize performance on edge devices.&lt;/p&gt;

&lt;p&gt;Rerun doesn't offer native ROS integration yet, but it can be used in ROS projects by adding custom logging to nodes.&lt;/p&gt;

&lt;p&gt;You can embed Rerun in &lt;strong&gt;Jupyter notebooks&lt;/strong&gt; or web pages, and use loggers for &lt;strong&gt;Python, Rust, and C++&lt;/strong&gt; to stream data into the viewer.&lt;/p&gt;

&lt;p&gt;The UI is built for &lt;strong&gt;real-time 3D exploration&lt;/strong&gt;, with overlays and live tracking that make it easy to inspect different data types in the same visual space.&lt;/p&gt;

&lt;h3&gt;Heex&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsc01tbs5lqwlhxkl70zz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsc01tbs5lqwlhxkl70zz.png" alt="Heex Dashboard" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Heex&lt;/strong&gt; is a data capture and review platform for autonomous systems that focuses on collecting only key moments, such as errors or specific events, instead of logging everything. This reduces bandwidth and storage needs while keeping important context.&lt;/p&gt;

&lt;p&gt;Robots using Heex record data continuously in short ROSbag segments. A small agent on the robot watches for triggers and uploads only selected segments to the cloud based on rules.&lt;/p&gt;

&lt;p&gt;A core feature is &lt;strong&gt;RDA (Resource and Data Automation)&lt;/strong&gt; for ROS 2, which automates what to record and when. Rules can be changed remotely without restarting the robot.&lt;/p&gt;

&lt;p&gt;Data is stored in &lt;strong&gt;ROSbag&lt;/strong&gt; and can be reviewed directly in the &lt;strong&gt;Heex dashboard&lt;/strong&gt;, which includes a built-in, open-source version of &lt;strong&gt;Foxglove&lt;/strong&gt;. This setup makes it easy to manage data across fleets and locations.&lt;/p&gt;

&lt;p&gt;Heex supports both &lt;strong&gt;ROS 1 and ROS 2&lt;/strong&gt;, and integrates with other systems through &lt;strong&gt;SDKs, APIs, and a CLI&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The interface includes customizable dashboards to monitor sensor data, errors, and system status. Timelines and streams are easy to navigate for quick analysis.&lt;/p&gt;

&lt;h2&gt;Comparative Analysis Table&lt;/h2&gt;

&lt;p&gt;To help visualize the differences between the tools, here is a comparison table summarizing their main characteristics:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Tool&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Core Focus&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Data Types&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Storage Strategy&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Visualization&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;ROS Integration&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Unique Features&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Time-series storage and streaming for robotics&lt;/td&gt;
&lt;td&gt;Telemetry, camera images, lidar, logs&lt;/td&gt;
&lt;td&gt;Local + cloud with same API (supports S3, FIFO retention, conditional replication)&lt;/td&gt;
&lt;td&gt;Grafana, Foxglove (via MCAP links)&lt;/td&gt;
&lt;td&gt;Integrated with ROS via extensions&lt;/td&gt;
&lt;td&gt;Filter/merge on server, batch uploads, topic-level control, efficient on edge&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Foxglove&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Visualization and observability for robotics logs&lt;/td&gt;
&lt;td&gt;MCAP logs (telemetry, lidar, camera, depth)&lt;/td&gt;
&lt;td&gt;ROSbag short segments, auto-upload with agent&lt;/td&gt;
&lt;td&gt;Foxglove Studio (2D/3D, timeline, plots)&lt;/td&gt;
&lt;td&gt;Native ROS 1 &amp;amp; 2&lt;/td&gt;
&lt;td&gt;Drag-and-drop views, real-time stream inspection, cloud integration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Rerun&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Real-time 3D visualization of multimodal time-series data&lt;/td&gt;
&lt;td&gt;Images, lidar, point clouds, tensors, metrics&lt;/td&gt;
&lt;td&gt;User-defined logging; logs streamed into viewer or embedded in notebooks&lt;/td&gt;
&lt;td&gt;Built-in viewer (3D overlays, tracking)&lt;/td&gt;
&lt;td&gt;Not native (custom logging)&lt;/td&gt;
&lt;td&gt;Column-oriented API, fast ingestion, selective logging, notebook/web integration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Heex&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Event-driven data capture for fleets of robots&lt;/td&gt;
&lt;td&gt;ROSbag (telemetry, images, lidar, metrics)&lt;/td&gt;
&lt;td&gt;Continuous recording, uploads filtered by event-based rules via onboard agent&lt;/td&gt;
&lt;td&gt;Built-in Foxglove in dashboard&lt;/td&gt;
&lt;td&gt;Native ROS 1 &amp;amp; 2&lt;/td&gt;
&lt;td&gt;RDA (automated capture rules), remote config, scalable fleet-wide dashboards&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Each tool addresses a different part of the robotics data workflow. &lt;strong&gt;ReductStore&lt;/strong&gt; is ideal for distributed storage across many robots, with selective replication to the cloud and flexible integration with tools like Grafana and Foxglove. &lt;strong&gt;Foxglove&lt;/strong&gt; excels at visualizing MCAP logs and ROS topics. &lt;strong&gt;Rerun&lt;/strong&gt; offers flexible, real-time 3D inspection for custom applications. &lt;strong&gt;Heex&lt;/strong&gt; focuses on capturing just the important moments for efficient fleet analysis.&lt;/p&gt;

&lt;p&gt;Choosing the right tool depends on what kind of data you collect, how you process it, and where you need it to go. In many cases, combining tools can give you the best of all worlds.&lt;/p&gt;




&lt;p&gt;Thanks for reading. I hope this article helps you choose the right tools and storage strategy for your robotics data.&lt;br&gt;
If you have questions or comments, feel free to visit the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community Forum&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>robotics</category>
      <category>ros</category>
    </item>
    <item>
      <title>Distributed Storage in Mobile Robotics</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Mon, 17 Nov 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/distributed-storage-in-mobile-robotics-1oe0</link>
      <guid>https://dev.to/reductstore/distributed-storage-in-mobile-robotics-1oe0</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1numadno34nlnfk2m0g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1numadno34nlnfk2m0g.png" alt="Distributed Storage in Mobile Robotics" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Mobile robots produce a &lt;strong&gt;lot&lt;/strong&gt; of data (camera images, IMU readings, logs, etc). Storing this data reliably on each robot and syncing it to the cloud can be hard. &lt;strong&gt;ReductStore&lt;/strong&gt; makes this easier: it's a lightweight, time-series object store built for robotics and industrial IoT. It stores binary blobs (images, logs, CSV sensor data, MCAP, JSON) with timestamps and labels so you can quickly find and query them later.&lt;/p&gt;

&lt;p&gt;This introductory guide describes a simple setup where each robot stores data locally and automatically syncs it to a cloud ReductStore instance backed by Amazon S3.&lt;/p&gt;

&lt;h2&gt;Edge-to-Cloud Architecture&lt;/h2&gt;

&lt;p&gt;The architecture has three main components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Each robot runs a small ReductStore server&lt;/strong&gt; to save images and IMU data locally on disk (this lets the robot operate offline).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A cloud ReductStore instance runs on a server (e.g., EC2)&lt;/strong&gt; and uses S3 for long-term storage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ReductStore replication tasks&lt;/strong&gt; copy data from the robots to the cloud based on labels, events, or rules (e.g., 1 record every minute).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each robot pushes its data to the cloud whenever it is connected to the network. This approach provides the robots with offline capability, allows you to decide which data to replicate, and easily scales to support many robots.&lt;/p&gt;

&lt;h2&gt;How Replication Works&lt;/h2&gt;

&lt;p&gt;ReductStore uses an &lt;strong&gt;append-only&lt;/strong&gt; replication model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The robot stores new data locally.&lt;/li&gt;
&lt;li&gt;ReductStore automatically detects new records.&lt;/li&gt;
&lt;li&gt;It sends them to the cloud in batches (or streams large files).&lt;/li&gt;
&lt;li&gt;If the network disconnects, replication continues when the robot reconnects.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can replicate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;everything&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;or only specific sensors&lt;/li&gt;
&lt;li&gt;or only records with certain labels&lt;/li&gt;
&lt;li&gt;or based on rules (e.g., 1 record every S seconds or every N records)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This can be configured per robot using environment variables (provisioning), the web console, or the CLI (as shown in this guide).&lt;/p&gt;

&lt;h2&gt;Cloud ReductStore With S3 Backend&lt;/h2&gt;

&lt;p&gt;ReductStore supports storing all records directly in S3. It keeps a local cache for fast access and batches many small blobs into larger blocks to save on S3 costs.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;By batching data into S3 objects, you can save &lt;strong&gt;significantly&lt;/strong&gt; on storage costs compared to storing many small files individually.&lt;/p&gt;
&lt;/blockquote&gt;
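&lt;p&gt;A back-of-the-envelope illustration of why batching matters (the record rate and batch size are made-up numbers): packing many small records into one S3 object divides the number of PUT requests, and S3 charges per request.&lt;/p&gt;

```python
# Illustrative request-count math for batching small records into S3 blocks.
records_per_day = 10_000 * 24    # assume one robot writes 10k records/hour
batch_size = 1_000               # records packed into a single S3 object

puts_unbatched = records_per_day              # one PUT per record
puts_batched = records_per_day // batch_size  # one PUT per batched block

print(puts_unbatched, puts_batched)  # 240000 vs 240 PUT requests per day
```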

&lt;p&gt;Here is an example &lt;code&gt;docker-compose.yml&lt;/code&gt; to run a ReductStore server that uses S3 as the remote backend and provisions buckets for robots:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;reductstore&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;reduct/store:latest&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;reductstore&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8383:8383"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# AWS credentials and S3 bucket configuration&lt;/span&gt;
      &lt;span class="na"&gt;RS_REMOTE_BACKEND_TYPE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;s3&lt;/span&gt;
      &lt;span class="na"&gt;RS_REMOTE_BUCKET&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;YOUR_S3_BUCKET_NAME&amp;gt;&lt;/span&gt;
      &lt;span class="na"&gt;RS_REMOTE_REGION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;YOUR_S3_REGION&amp;gt;&lt;/span&gt;
      &lt;span class="na"&gt;RS_REMOTE_ACCESS_KEY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;YOUR_AWS_ACCESS_KEY_ID&amp;gt;&lt;/span&gt;
      &lt;span class="na"&gt;RS_REMOTE_SECRET_KEY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;YOUR_AWS_SECRET_ACCESS_KEY&amp;gt;&lt;/span&gt;
      &lt;span class="na"&gt;RS_REMOTE_CACHE_PATH&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/data/cache&lt;/span&gt;
      &lt;span class="c1"&gt;# Bucket provisioning&lt;/span&gt;
      &lt;span class="na"&gt;RS_BUCKET_ROBOT_1_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;robot1-data&lt;/span&gt;
      &lt;span class="na"&gt;RS_BUCKET_ROBOT_2_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;robot2-data&lt;/span&gt;
      &lt;span class="c1"&gt;# .. additional buckets as needed&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./cache:/data/cache&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This starts a ReductStore server that writes to S3 automatically. There are many more configuration options available in the &lt;strong&gt;&lt;a href="https://www.reduct.store/docs/configuration" rel="noopener noreferrer"&gt;configuration documentation&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;Setting Up Replication&lt;/h2&gt;

&lt;p&gt;First, spin up a local ReductStore server on each robot, for example with Snap:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;snap &lt;span class="nb"&gt;install &lt;/span&gt;reductstore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That starts a ReductStore server on port &lt;code&gt;8383&lt;/code&gt; by default. Then you can use the &lt;strong&gt;&lt;a href="https://github.com/reductstore/reduct-cli" rel="noopener noreferrer"&gt;Reduct CLI&lt;/a&gt;&lt;/strong&gt; to set up replication from the robot to the cloud instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Point the CLI to the robot's local ReductStore&lt;/span&gt;
reduct-cli &lt;span class="nb"&gt;alias &lt;/span&gt;add &lt;span class="nb"&gt;local&lt;/span&gt; &lt;span class="nt"&gt;-L&lt;/span&gt; http://localhost:8383 &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;ROBOT_API_TOKEN&amp;gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Create a bucket for that robot&lt;/span&gt;
reduct-cli bucket create &lt;span class="nb"&gt;local&lt;/span&gt;/robot1-data

&lt;span class="c"&gt;# Create a replication task to the cloud&lt;/span&gt;
reduct-cli replica create &lt;span class="nb"&gt;local&lt;/span&gt;/robot1-to-cloud &lt;span class="se"&gt;\&lt;/span&gt;
    robot1-data &lt;span class="se"&gt;\&lt;/span&gt;
    https://&amp;lt;CLOUD_API_TOKEN&amp;gt;@&amp;lt;cloud-address&amp;gt;/robot1-data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a replication task called &lt;code&gt;robot1-to-cloud&lt;/code&gt; that copies all data from the robot's local &lt;code&gt;robot1-data&lt;/code&gt; bucket to the cloud instance. You can customize replication further by adding filters or rules. See the &lt;strong&gt;&lt;a href="https://www.reduct.store/docs/guides/data-replication" rel="noopener noreferrer"&gt;replication guide&lt;/a&gt;&lt;/strong&gt; for more details.&lt;/p&gt;

&lt;h2&gt;Storing Sensor Data&lt;/h2&gt;

&lt;p&gt;There are many ways to store data. For high-frequency sensor data like IMU readings, a common approach is to batch samples into 1-second files, while images can be stored as individual binary blobs (e.g., JPEG or PNG files). The following example uses the Python SDK to store 10,000 IMU samples as one CSV record and a camera image as a binary blob for a given timestamp:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;random&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;reduct&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt;


&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:8383&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api_token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;ROBOT_API_TOKEN&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;robot1-data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Current timestamp to index the data by time in ReductStore
&lt;/span&gt;        &lt;span class="n"&gt;timestamp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1_000_000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# microseconds
&lt;/span&gt;
        &lt;span class="c1"&gt;# Generate 10'000 IMU samples
&lt;/span&gt;        &lt;span class="n"&gt;rows&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10_000&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="n"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ts&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;timestamp&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# microseconds
&lt;/span&gt;                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_x&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;uniform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_y&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;uniform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_z&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;uniform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;8.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;10.0&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Convert to CSV (store 1 seconds of data per file)
&lt;/span&gt;        &lt;span class="n"&gt;csv&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ts,linear_acceleration_x,linear_acceleration_y,linear_acceleration_z&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ts&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_x&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_y&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_z&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;rows&lt;/span&gt;
            &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Write the IMU batch
&lt;/span&gt;        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;entry_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;imu_logs&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;csv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
            &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;labels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sensor&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;imu&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rows&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1000&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="n"&gt;content_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text/csv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# MIME type
&lt;/span&gt;        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Write one camera image
&lt;/span&gt;        &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;camera_image.png&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="n"&gt;entry_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;images&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
                &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;labels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sensor&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;camera&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
                &lt;span class="n"&gt;content_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;image/png&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;If you are considering storing all IMU data as individual records in a time series database (TSDB) like Timescale or InfluxDB, keep in mind that high-frequency sensors can overwhelm ingest: a single 1000 Hz IMU produces 86.4 million samples per day. Batching samples into files (e.g., one second of data per CSV file) is far more efficient to store and query.&lt;/p&gt;
&lt;/blockquote&gt;
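&lt;p&gt;A minimal sketch of that batching pattern, using only the Python standard library (the column names mirror the IMU fields used in this post; everything else is illustrative):&lt;/p&gt;

```python
import csv
import io

def batch_imu_to_csv(samples):
    """Serialize one second of IMU samples into a single CSV payload."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf,
        fieldnames=["ts", "linear_acceleration_x",
                    "linear_acceleration_y", "linear_acceleration_z"],
    )
    writer.writeheader()
    writer.writerows(samples)
    return buf.getvalue().encode()

# Simulate one second of 1000 Hz IMU data (nanosecond timestamps)
samples = [
    {"ts": 1_700_000_000_000_000_000 + i * 1_000_000,
     "linear_acceleration_x": 0.1,
     "linear_acceleration_y": 0.3,
     "linear_acceleration_z": -9.8}
    for i in range(1000)
]
payload = batch_imu_to_csv(samples)  # one record instead of 1000
```

&lt;p&gt;At 1000 Hz this turns a thousand writes per second into a single record per second, which is much cheaper to store and query.&lt;/p&gt;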

&lt;h2&gt;
  
  
  Querying Sensor Data Using ReductSelect&lt;a href="https://www.reduct.store/blog/distributed-storage-mobile-robotics#querying-sensor-data-using-reductselect" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;If your IMU data is stored as CSV, the &lt;strong&gt;ReductSelect extension&lt;/strong&gt; lets you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;extract only certain columns&lt;/li&gt;
&lt;li&gt;filter rows based on conditions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example: select columns and filter CSV rows where &lt;code&gt;acc_x &amp;gt; 1.9&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "#ext": {
        "select": {
            "csv": {"has_headers": True},
            "columns": [
                {"name": "ts", "as_label": "ts_ns"},
                {"name": "linear_acceleration_x", "as_label": "acc_x"},
                {"name": "linear_acceleration_y"},
                {"name": "linear_acceleration_z"},
            ],
        },
        "when": {"@acc_x": {"$gt": 1.9}},
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Python example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;reduct&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt;

&lt;span class="n"&gt;when&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="c1"&gt;# the JSON condition from above
&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://&amp;lt;cloud-address&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api_token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;TOKEN&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;robot1-data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;rec&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;imu_logs&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;when&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;when&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;rec&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_all&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This returns only the rows where &lt;code&gt;linear_acceleration_x &amp;gt; 1.9&lt;/code&gt;, along with the timestamp.&lt;/p&gt;
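&lt;p&gt;Each record returned by the query is still a CSV payload containing only the selected columns, so it can be decoded with the standard library. A sketch with a hand-made payload shaped like a filtered result (values are illustrative):&lt;/p&gt;

```python
import csv
import io

def rows_from_csv_payload(payload: bytes):
    """Parse one filtered CSV record into a list of dicts keyed by header."""
    return list(csv.DictReader(io.StringIO(payload.decode())))

# Example payload shaped like the filtered result above
payload = (
    b"ts,linear_acceleration_x,linear_acceleration_y,linear_acceleration_z\n"
    b"1633024800000,2.1,0.3,-9.8\n"
)
rows = rows_from_csv_payload(payload)
```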

&lt;h2&gt;
  
  
  Why This Setup Works Well for Robotics&lt;a href="https://www.reduct.store/blog/distributed-storage-mobile-robotics#why-this-setup-works-well-for-robotics" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;There are several advantages to using a specialized storage solution like ReductStore for mobile robotics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Robots can store data locally&lt;/strong&gt; and operate offline without network connectivity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic replication when connected&lt;/strong&gt; to avoid manual uploads and simplify data management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Selective replication&lt;/strong&gt; lets you control what data is sent to the cloud (i.e. decide on your reduction strategy) to save bandwidth and storage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Labels and timestamps&lt;/strong&gt; make it easy to organize and query sensor data later.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Store files of any type&lt;/strong&gt; (images, CSV, logs, MCAP) in a single system without needing separate storage solutions for each data type.&lt;/li&gt;
&lt;/ul&gt;
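&lt;p&gt;For example, the &lt;code&gt;sensor&lt;/code&gt; label attached at write time can drive both filtering and time-based downsampling in a single condition. A sketch using ReductStore's conditional query operators (the 1-second rate is an arbitrary choice):&lt;/p&gt;

```python
# Condition sketch per ReductStore's conditional query syntax:
# "&label" references a record label; "$each_t" downsamples by time.
when = {
    "&sensor": {"$eq": "camera"},  # only records labeled sensor=camera
    "$each_t": "1s",               # at most one record per second
}
```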

&lt;h2&gt;
  
  
  Next Steps&lt;a href="https://www.reduct.store/blog/distributed-storage-mobile-robotics#next-steps" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;ReductStore also integrates into robotics observability stacks such as the Canonical Observability Stack (COS) for robotics. You can visualize sensor data, logs, and metrics in Grafana dashboards alongside your other robot telemetry. You can find more details in our blog post &lt;strong&gt;&lt;a href="https://dev.to/reductstore/the-missing-database-for-robotics-is-out-4p4i"&gt;The Missing Database for Robotics Is Out&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;




&lt;p&gt;I hope you found this article helpful! If you have any questions or feedback, don't hesitate to reach out on our &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community&lt;/strong&gt;&lt;/a&gt; forum.&lt;/p&gt;

</description>
      <category>database</category>
      <category>ros</category>
      <category>robotics</category>
    </item>
    <item>
      <title>The Missing Database for Robotics Is Out</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Wed, 22 Oct 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/the-missing-database-for-robotics-is-out-4p4i</link>
      <guid>https://dev.to/reductstore/the-missing-database-for-robotics-is-out-4p4i</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5p4xxqhkx9pq86jm95d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5p4xxqhkx9pq86jm95d.png" alt="Img example" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Robotics teams today wrestle with data that grows faster than their infrastructure. Every robot generates streams of images, sensor readings, logs, and events in different formats. The result is fragmented data that is expensive to move and slow to analyze. Teams often fall back on generic cloud tools that are not built for robotics: they charge by the gigabyte when robotics data should be priced by the terabyte, hide raw data behind proprietary APIs, and make it hard for robots (and developers) to access their own data.&lt;/p&gt;

&lt;p&gt;ReductStore introduces a new category: a database purpose-built for robotics data pipelines. It is open, efficient, and developer-friendly. It lets teams store, query, and manage any time series of unstructured data directly from robots to the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Makes It a New Category&lt;a href="https://www.reduct.store/blog/database-for-robotics#what-makes-it-a-new-category" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;ReductStore treats robotics with the respect it deserves. It captures everything in its raw form and stores it with a time index and labels for flexible querying and management. It ingests and streams any type of data (images, sensor frames, logs, MCAP files, CSVs, JSON, etc.) without forcing developers to convert or reformat it.&lt;/p&gt;

&lt;p&gt;It works on robots and in the cloud using the same interface and SDKs (Python, C++, Rust, JavaScript, Go). This means developers can build data pipelines that run the same way on robots or in the cloud without needing to change code or learn new tools.&lt;/p&gt;

&lt;p&gt;Developers can run ReductStore on an edge device for local data capture and replicate to a cloud instance (with S3 backend) for cloud analytics or archiving.&lt;/p&gt;
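&lt;p&gt;As a sketch, an edge-to-cloud replication task boils down to a small settings object. The field names below follow ReductStore's replication settings; the hosts, tokens, bucket, and entry names are placeholders:&lt;/p&gt;

```python
# Sketch of replication settings for an edge-to-cloud task.
# Field names follow ReductStore's replication API; values are illustrative.
replication = {
    "src_bucket": "robot1-data",           # local bucket on the edge device
    "dst_bucket": "fleet-data",            # bucket on the cloud instance
    "dst_host": "https://cloud.example.com",
    "dst_token": "cloud-api-token",
    "entries": ["imu_logs", "images"],     # replicate only these entries
    "when": {"&sensor": {"$eq": "camera"}},  # optional reduction strategy
}
```

&lt;p&gt;Once the task is registered, the edge instance pushes matching records to the cloud bucket whenever a connection is available.&lt;/p&gt;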

&lt;blockquote&gt;
&lt;p&gt;It is the first and only database designed specifically for unstructured, time series robotics data.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Data Handling and Querying&lt;a href="https://www.reduct.store/blog/database-for-robotics#data-handling-and-querying" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Developers can work directly with data using simple queries and SDKs. The focus is on speed and flexibility.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. MCAP topic filtering&lt;a href="https://www.reduct.store/blog/database-for-robotics#1-mcap-topic-filtering" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;You can filter topics directly from multiple MCAP files stored in ReductStore without needing to download and reprocess everything locally.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;reduct&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt;

&lt;span class="c1"&gt;# Extract only the IMU topic from MCAP files
&lt;/span&gt;&lt;span class="n"&gt;ext&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ros&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;extract&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;topic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/imu/data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}},&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://test.reduct.store&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-robotics-data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;parts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;rec&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mcap-entry&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ext&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ext&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;blob&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;rec&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_all&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="n"&gt;rows&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ts&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;header&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stamp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sec&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1_000_000_000&lt;/span&gt;
                &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;header&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stamp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;nanosec&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_x&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;x&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_y&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;y&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_z&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;z&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This allows you to extract only the relevant topics from multiple bags. In this example, we extract only the IMU topic as a stream of JSON records, which would look like this:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;ts&lt;/th&gt;
&lt;th&gt;linear_acceleration_x&lt;/th&gt;
&lt;th&gt;linear_acceleration_y&lt;/th&gt;
&lt;th&gt;linear_acceleration_z&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1633024800000&lt;/td&gt;
&lt;td&gt;0.1&lt;/td&gt;
&lt;td&gt;0.3&lt;/td&gt;
&lt;td&gt;-9.8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1633024801000&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.1&lt;/td&gt;
&lt;td&gt;-9.7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  2. CSV/JSON field extraction and filtering&lt;a href="https://www.reduct.store/blog/database-for-robotics#2-csvjson-field-extraction-and-filtering" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;You can extract specific JSON fields or CSV columns when querying data. This lets you select only the information you need, for example, filtering and visualizing certain fields from streams of JSON or CSV sensor readings.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;io&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;reduct&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt;

&lt;span class="c1"&gt;# Select specific CSV columns and filter rows
&lt;/span&gt;&lt;span class="n"&gt;ext&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;select&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;csv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;has_headers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="c1"&gt;# Use "json": {}, for JSON data
&lt;/span&gt;        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;columns&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ts&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_x&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;as_label&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;acc_x&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_y&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;linear_acceleration_z&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;when&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$gt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$abs&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;@acc_x&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]},&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;]},&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://test.reduct.store&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-robotics-data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Loop over filtered CSV entries
&lt;/span&gt;    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;rec&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;csv_sensor_readings&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ext&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ext&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;blob&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;rec&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_all&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;csv_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BytesIO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The tabular result will only include the selected columns and rows that match the filter &lt;code&gt;abs(linear_acceleration_x) &amp;gt; 10&lt;/code&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;ts&lt;/th&gt;
&lt;th&gt;linear_acceleration_x&lt;/th&gt;
&lt;th&gt;linear_acceleration_y&lt;/th&gt;
&lt;th&gt;linear_acceleration_z&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1633024800000&lt;/td&gt;
&lt;td&gt;12.5&lt;/td&gt;
&lt;td&gt;0.3&lt;/td&gt;
&lt;td&gt;-9.8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1633024801000&lt;/td&gt;
&lt;td&gt;-15.2&lt;/td&gt;
&lt;td&gt;0.1&lt;/td&gt;
&lt;td&gt;-9.7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
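&lt;p&gt;The &lt;code&gt;when&lt;/code&gt; condition used above is equivalent to the following pure-Python predicate (a sketch only, to make the operator semantics concrete):&lt;/p&gt;

```python
def matches(acc_x):
    """Pure-Python equivalent of {"$gt": [{"$abs": ["@acc_x"]}, 10]}."""
    return abs(acc_x) > 10

# Rows from the table above pass; small accelerations are filtered out
results = [matches(12.5), matches(-15.2), matches(0.1)]
```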

&lt;h3&gt;
  
  
  3. Query any type of data&lt;a href="https://www.reduct.store/blog/database-for-robotics#3-query-any-type-of-data" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;ReductStore automatically batches small records and streams large ones for efficient storage and access. You can efficiently query any type of data, from lightweight telemetry to high-resolution images and point clouds.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;io&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;PIL&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Image&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;reduct&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt;

&lt;span class="c1"&gt;# Every 5 seconds, limit to 5 records
&lt;/span&gt;&lt;span class="n"&gt;when&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$each_t&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$limit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://test.reduct.store&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-robotics-data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;rec&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;camera_frames&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;when&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;when&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;blob&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;rec&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_all&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;io&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BytesIO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The example above retrieves camera frames at 5-second intervals. You can then process or visualize these images as needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs41awykv9yru7r8ybsw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs41awykv9yru7r8ybsw.png" alt="Query Images Example" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Browse petabytes of data&lt;a href="https://www.reduct.store/blog/database-for-robotics#4-browse-petabytes-of-data" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;ReductStore is designed to handle massive volumes of data. Its indexing and storage architecture allows you to efficiently browse data at scale without downloading everything locally.&lt;/p&gt;

&lt;p&gt;For example, you can quickly navigate records and preview your data directly in the ReductStore &lt;a href="https://www.reduct.store/docs/glossary#web-console" rel="noopener noreferrer"&gt;&lt;strong&gt;web console&lt;/strong&gt;&lt;/a&gt;, even when working with petabytes of robotics data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gbh4ko0yqdal2udvoxp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gbh4ko0yqdal2udvoxp.png" alt="Browse Large Datasets" width="800" height="650"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;You can build custom applications on top of ReductStore using its SDKs for Python, C++, Rust, JavaScript, and Go. This makes it easy to build data pipelines, create dashboards that work in the browser, or integrate with existing tools.&lt;/p&gt;
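As a small illustration of that SDK surface, here is a sketch of writing a record with labels using the Python client shown earlier in this post. The bucket and entry names match the earlier query example; the `score` label and the exact `write` signature are assumptions to verify against the reduct-py documentation.

```python
# Sketch: store a camera frame with metadata labels via the Python SDK.
# The "score" label is hypothetical; labels are what queries and
# replication rules can later filter on.


async def write_frame(blob: bytes, score: float) -> None:
    # Imported inside the function so this sketch can be read (and the
    # helper defined) without the reduct-py package installed.
    from reduct import Client

    async with Client("https://test.reduct.store") as client:
        bucket = await client.get_bucket("my-robotics-data")
        await bucket.write("camera_frames", blob, labels={"score": str(score)})
```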

&lt;h2&gt;
  
  
  Cloud Integration and Cost Savings&lt;a href="https://www.reduct.store/blog/database-for-robotics#cloud-integration-and-cost-savings" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;ReductStore connects robots and the cloud in a simple and flexible way. It works with S3-compatible storage and includes a robust replication system to transfer data from robots to the cloud (even when the network is unstable or intermittent), making it perfect for field robots that often go offline.&lt;/p&gt;

&lt;p&gt;Replication tasks can be configured to replicate only specific data based on labels or other criteria (for example, only replicate data when the confidence score is below a threshold, or &lt;strong&gt;replicate everything from a 10-minute window around a specific event&lt;/strong&gt;).&lt;/p&gt;
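As a sketch of what such a filter looks like, ReductStore's conditional query syntax references labels with an ampersand prefix and operators with a dollar prefix. The label names below are made up for illustration; check the replication documentation for the exact condition grammar.

```python
# Hypothetical replication condition: copy only records whose "score"
# label is below 0.5 and whose "camera" label equals "front".
# A dict like this is passed as the "when" condition of a replication task.
when = {
    "$and": [
        {"&score": {"$lt": 0.5}},
        {"&camera": {"$eq": "front"}},
    ]
}
```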

&lt;p&gt;In the cloud, by batching multiple records into single data blocks, ReductStore minimizes both the number of blobs and the number of API calls to S3. This design reduces storage and retrieval costs by leveraging S3's pricing model.&lt;/p&gt;
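To make the effect concrete, here is a rough back-of-envelope calculation. The request price is a typical published S3 Standard PUT rate, and the 32 MB block size is an assumption; treat every number as illustrative.

```python
import math

# Compare S3 PUT request costs for one million 10 KB records written
# individually versus batched into 32 MB blocks.
put_price_per_1000 = 0.005       # USD per 1,000 PUT requests (illustrative)
n_records = 1_000_000
record_size = 10 * 1024          # 10 KB per record
block_size = 32 * 1024 * 1024    # assumed batch block size

naive_puts = n_records           # one PUT per record
batched_puts = math.ceil(n_records * record_size / block_size)  # 306 blocks

naive_cost = naive_puts / 1000 * put_price_per_1000      # 5.00 USD
batched_cost = batched_puts / 1000 * put_price_per_1000  # about 0.0015 USD
```

Storage cost is unchanged by batching; the savings come from the request count, which drops by more than three orders of magnitude in this example.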

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdj8tgvb7zoiln9zoimgo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdj8tgvb7zoiln9zoimgo.png" alt="Diagram Cloud Integration" width="800" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This approach can deliver major savings when working with large volumes of robotics data.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Observability Stack Integration&lt;a href="https://www.reduct.store/blog/database-for-robotics#observability-stack-integration" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;ReductStore works with the tools robotics engineers already trust.&lt;/p&gt;

&lt;h3&gt;
  
  
  Foxglove Studio&lt;a href="https://www.reduct.store/blog/database-for-robotics#foxglove-studio" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Foxglove is an excellent tool for visualizing robotics data and debugging robots, built around the MCAP format.&lt;/p&gt;

&lt;p&gt;To share data from ReductStore to Foxglove, you can use the ReductStore web console (or the SDKs) to generate a &lt;a href="https://www.reduct.store/docs/glossary#query-link" rel="noopener noreferrer"&gt;&lt;strong&gt;query link&lt;/strong&gt;&lt;/a&gt; that Foxglove can open directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvr17w58ytt4cjd4069l1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvr17w58ytt4cjd4069l1.png" alt="ReductStore Query Link" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can then paste the query link into Foxglove Studio to visualize the data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F058x6bhrz4stf0fut1tj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F058x6bhrz4stf0fut1tj.png" alt="Foxglove Studio" width="800" height="547"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Grafana&lt;a href="https://www.reduct.store/blog/database-for-robotics#grafana" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Grafana is a popular open-source tool for creating dashboards and visualizing time-series data. You can connect Grafana to ReductStore using the ReductStore data source plugin, which allows you to query and visualize data stored in ReductStore.&lt;/p&gt;

&lt;p&gt;You can query data using labels such as localization coordinates, detected objects, or confidence scores:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjoqmnbn97pjsocud3zv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjoqmnbn97pjsocud3zv.png" alt="Grafana Query Labels" width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or you can query based on content, such as JSON files with sensor readings or other structured data:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmr8ivq30fhabtjrsbtv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmr8ivq30fhabtjrsbtv.png" alt="Grafana Query Content" width="800" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Canonical Observability Stack (COS)&lt;a href="https://www.reduct.store/blog/database-for-robotics#canonical-observability-stack-cos" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Canonical's COS (Canonical Observability Stack) for robotics is an end-to-end observability framework built on open-source tools such as Prometheus, Loki, Grafana, and Foxglove.&lt;/p&gt;

&lt;p&gt;The missing piece in this stack has always been a purpose-built system for storing and managing robotics data efficiently from robot to cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffg09arjzl4jeks920hmf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffg09arjzl4jeks920hmf.png" alt="Diagram Observability Stack Integration" width="800" height="742"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ReductStore closes that gap. It provides a data storage and streaming solution optimized for both edge and cloud environments, along with an agent that captures data directly from ROS and streams it into the observability pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fda5hez3zprc4mrw8vnov.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fda5hez3zprc4mrw8vnov.png" alt="COS with ReductStore" width="800" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing Thoughts&lt;a href="https://www.reduct.store/blog/database-for-robotics#closing-thoughts" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Robotics teams no longer need to choose between control and convenience. ReductStore gives full ownership of data from robot to cloud. It removes vendor lock-in, cuts costs, and keeps everything observable and connected. It is the new foundation for robotics data infrastructure (the missing database for robotics).&lt;/p&gt;

&lt;p&gt;If you are interested in comparing ReductStore with other databases (like MongoDB or InfluxDB), you can read our &lt;a href="https://www.reduct.store/whitepaper" rel="noopener noreferrer"&gt;&lt;strong&gt;white paper&lt;/strong&gt;&lt;/a&gt;, which goes deeper into the architecture and design choices.&lt;/p&gt;




&lt;p&gt;I hope you found this article helpful! If you have any questions or feedback, don't hesitate to reach out on our &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community&lt;/strong&gt;&lt;/a&gt; forum.&lt;/p&gt;

</description>
      <category>ros</category>
      <category>robotics</category>
    </item>
    <item>
      <title>Comparing Robotics Visualization Tools: RViz, Foxglove, Rerun</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Tue, 15 Jul 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/comparing-robotics-visualization-tools-rviz-foxglove-rerun-458n</link>
      <guid>https://dev.to/reductstore/comparing-robotics-visualization-tools-rviz-foxglove-rerun-458n</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbjhuzia3684xf0b3uz09.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbjhuzia3684xf0b3uz09.png" alt="Intro image" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In robotics development, effective visualization and analysis tools are essential for monitoring, debugging, and interpreting complex sensor data. Platforms like RViz, Foxglove, and Rerun play a key role at the visualization layer of the observability stack. They help developers interact with both live and recorded data. These tools rely on timely, well-structured access to the underlying data streams. That's where &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt; comes in. It handles the data logging, storage, and processing, with a focus on capturing high-volume time-series data efficiently. ReductStore aims to integrate with tools like RViz, Foxglove, and Rerun, supporting a complete observability pipeline: from raw data ingestion to actionable insights.&lt;/p&gt;

&lt;p&gt;Each visualization platform has its unique role in the development workflow. &lt;a href="https://wiki.ros.org/rviz" rel="noopener noreferrer"&gt;&lt;strong&gt;RViz (ROS Visualization) is the classic 3D visualization tool built for the ROS ecosystem&lt;/strong&gt;&lt;/a&gt;, widely used for real-time robot monitoring and debugging. &lt;a href="https://foxglove.dev/about" rel="noopener noreferrer"&gt;&lt;strong&gt;Foxglove is a modern data visualization and inspection platform for robotics and physical AI systems&lt;/strong&gt;&lt;/a&gt;, aiming to simplify how teams collect, visualize, analyze, and manage large volumes of diverse sensor data. &lt;a href="https://rerun.io/" rel="noopener noreferrer"&gt;&lt;strong&gt;Rerun is a lightweight, native desktop application focused on fast and efficient visualization of robotics data&lt;/strong&gt;&lt;/a&gt;, enabling developers to quickly explore and debug both live and recorded sensor streams with minimal setup.&lt;/p&gt;

&lt;p&gt;This article compares RViz, Foxglove, and Rerun across key criteria: pricing, cross-platform support, remote collaboration, user interface, extensibility, ROS integration, performance with large datasets, and visualization and analysis features. The goal is to help robotics developers choose the right tool for their specific needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Pricing&lt;/strong&gt; &lt;a href="https://www.reduct.store/blog/comparison-rviz-foxglove-rerun#pricing" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;RViz&lt;/strong&gt; and &lt;strong&gt;RViz 2&lt;/strong&gt; are part of the ROS ecosystem and released under the BSD 3-Clause License. This permissive open-source license allows free use, modification, and redistribution (including for commercial purposes), as long as the original copyright and license notices are preserved.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Foxglove&lt;/strong&gt; offers a free tier that includes core features for up to 3 users, 10 devices, and 10 GB of cloud storage. For larger teams or needs (e.g., extra users, storage, private extensions, enterprise integrations), paid subscriptions are available. Pricing is based on the number of users and storage volume, as well as usage and support level. There is also a free academic plan for qualified institutions, which includes more users and storage. Foxglove itself is proprietary software, though it is built on open protocols like MCAP and integrates with open-source ROS tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rerun&lt;/strong&gt; is fully open-source under both the MIT and Apache 2.0 licenses. There are no current paid plans for the open-source core. The project follows an open-core model: the core visualizer and SDK are free, while a commercial platform is in early access for teams needing cloud-based storage, collaboration tools, advanced analytics, and scalable CI/CD workflows. This commercial layer is designed to build on top of the open-source foundation.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Platform &amp;amp; Collaboration&lt;/strong&gt; &lt;a href="https://www.reduct.store/blog/comparison-rviz-foxglove-rerun#platform--collaboration" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;RViz&lt;/strong&gt; and &lt;strong&gt;RViz 2&lt;/strong&gt; are primarily developed for Linux, where they offer the most stable and reliable performance. RViz 2 also supports Windows and macOS as part of ROS 2, but these versions are less mature and less commonly used. They often require manual setup or compilation, though support continues to improve with newer ROS 2 releases.&lt;/p&gt;

&lt;p&gt;Both RViz versions are local desktop applications and are not designed for remote or multi-user use out of the box. Workarounds like SSH with X11 forwarding, VNC, or running RViz locally while connecting remotely to a ROS system are possible, but they are often fragile, require manual configuration, and may suffer from performance or latency issues depending on the network and hardware.&lt;/p&gt;

&lt;p&gt;To address these limitations, early tools like &lt;code&gt;ROS3D.js&lt;/code&gt; offered browser-based ROS 1 visualization, but they are now mostly unmaintained and incompatible with ROS 2. Modern web visualization is typically done with tools like Foxglove, Webviz, or custom WebSocket-based interfaces. Some cloud robotics platforms also offer remote ROS visualization, though they typically require extra integration work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Foxglove&lt;/strong&gt; runs on Windows, macOS, and Linux, available both as a native desktop app and in a web browser. This gives users the flexibility to work locally or remotely without installing software. The browser version supports multi-user collaboration, allowing teams to share layouts and stream live data securely in real time from any internet-connected device.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rerun&lt;/strong&gt; is a lightweight native desktop application for Windows, macOS, and Linux. It requires minimal setup and enables developers to quickly visualize and debug live or recorded sensor data without needing a browser or complex configuration. Although Rerun does not support multi-user or collaborative features, teams often share log files for offline review. This approach is usually more practical than using remote desktop tools. Rerun also integrates well into development workflows, such as Python environments, which typically require installing Rerun's SDKs and dependencies.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: All three tools support sharing of recorded data, such as rosbag files for RViz &amp;amp; RViz 2 (&lt;code&gt;.bag&lt;/code&gt; for ROS 1 and &lt;code&gt;.db3&lt;/code&gt;, &lt;code&gt;.mcap&lt;/code&gt; for ROS 2), &lt;code&gt;.mcap&lt;/code&gt; files for Foxglove, and &lt;code&gt;.rrd&lt;/code&gt; for Rerun. To support these workflows at scale, you can use &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt; solutions to manage continuous recording, indexing, and long-term storage of these file types across teams and infrastructure.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;User Interface&lt;/strong&gt; &lt;a href="https://www.reduct.store/blog/comparison-rviz-foxglove-rerun#user-interface" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;RViz&lt;/strong&gt; and &lt;strong&gt;RViz 2&lt;/strong&gt; have a powerful but somewhat dated interface that focuses more on functionality than modern design. The learning curve can be steep, especially for beginners, due to the complex layout and the need to manually configure displays, topics, coordinate frames, and tools. The interface is built around multiple panels and dialogs that require careful configuration. It lacks the visual polish and streamlined workflows of newer visualization tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Foxglove&lt;/strong&gt; features a modern, user-friendly interface with flexible dashboards and responsive controls. It is designed to be accessible to users at all experience levels, making it easier to explore, analyze, and share robotics data. The interface relies heavily on graphical elements instead of commands or configuration files, which lowers the entry barrier for users unfamiliar with ROS or robotics tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rerun&lt;/strong&gt; offers a clean and straightforward interface focused on efficient data visualization. It balances ease of use with core functionality, providing easy-to-navigate views without overwhelming users. The interface requires minimal setup and supports intuitive exploration of data streams and logs. However, it currently has fewer customization options than RViz or Foxglove.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Extensibility&lt;/strong&gt; &lt;a href="https://www.reduct.store/blog/comparison-rviz-foxglove-rerun#extensibility" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;RViz&lt;/strong&gt; (both ROS 1 and ROS 2) supports extensibility through C++ plugins, allowing users to develop and integrate custom visualizations, tools, and panels. This plugin architecture makes RViz highly adaptable across robotics domains such as perception, navigation, and manipulation. Many ROS packages include their own RViz plugins by default. However, developing and using plugins requires tight integration with the specific ROS environment. Plugins made for RViz in ROS 1 are not directly compatible with RViz 2; they often require modification or a complete rewrite.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Foxglove&lt;/strong&gt; offers extensibility through an Extensions SDK, which allows developers to build React-based visualizations using TypeScript. Extensions can be shared via an online registry and do not require recompilation. Foxglove also provides APIs and libraries in C++, Python, and Rust, primarily for working with the MCAP file format, enabling integration with ROS (both versions), WebSocket streams, and recorded sensor data. Foxglove's ecosystem also supports integration with popular robotics and simulation tools such as NVIDIA Isaac Sim, Velodyne LiDAR, and Jupyter Notebooks, either directly or via external bridges.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rerun&lt;/strong&gt; focuses on extensibility through SDKs and APIs, especially for Python and other programming environments. It does not support plugin-based customization or drag-and-drop extensions like RViz or Foxglove. Instead, it prioritizes programmatic data embedding and visualization, making it well-suited for users who prefer scripting and code-driven workflows.&lt;/p&gt;

&lt;p&gt;Rerun offers strong Python support, but its core is built with Rust and the egui GUI framework — technologies less familiar to many robotics developers. This can introduce a learning curve and limit low-level customization unless users are comfortable with Rust.&lt;/p&gt;

&lt;p&gt;Rerun does not offer a simple or dynamic plugin system or scripting layer similar to RViz's C++ plugins or Foxglove's TypeScript extensions. This limits rapid prototyping or quick third-party integration.&lt;/p&gt;

&lt;p&gt;Still, its APIs offer robust integration with diverse data sources, including ROS topics, sensor streams, and machine learning frameworks like TensorFlow and PyTorch. This makes Rerun a flexible tool for logging, visualizing, and debugging complex data pipelines.&lt;/p&gt;

&lt;p&gt;Rerun is best suited for developers who prefer programming-driven customization over GUI-based tools. It provides direct control over data ingestion and visualization, enabling highly tailored, dynamic workflows that can grow with project needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;ROS Integration&lt;/strong&gt; &lt;a href="https://www.reduct.store/blog/comparison-rviz-foxglove-rerun#ros-integration" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;RViz&lt;/strong&gt; is tightly integrated with ROS and supports direct interaction with live ROS topics. Originally developed for ROS 1, it was succeeded by &lt;strong&gt;RViz 2&lt;/strong&gt; for ROS 2, and it remains a core visualization tool in many robotics workflows. However, this deep integration limits RViz's usability outside the ROS ecosystem. Both versions depend on a fully functioning ROS environment and are not designed to run independently or handle non-ROS data without conversion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Foxglove&lt;/strong&gt; connects to live ROS systems using &lt;code&gt;foxglove_bridge&lt;/code&gt;, a WebSocket-based bridge designed for this purpose. It runs on the same network as the ROS system and streams real-time ROS messages to Foxglove over WebSocket. This architecture allows remote monitoring and interaction without installing ROS locally. Unlike RViz, Foxglove can be used without a full ROS setup.&lt;/p&gt;

&lt;p&gt;In addition to live data, Foxglove also supports opening and analyzing ROS bag files locally. This makes it easy to review recorded data, visualize topics, and troubleshoot issues offline, without needing an active ROS system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rerun&lt;/strong&gt; supports integration with both ROS 1 and ROS 2, enabling live topic visualization and recorded data inspection. For ROS 2, Rerun officially maintains basic example scripts, hosted on GitHub, that use Python (&lt;code&gt;rclpy&lt;/code&gt;) or C++ to subscribe to ROS 2 topics and forward selected data to the Rerun viewer. This is a user-defined bridge rather than a native plugin integration. ROS 1 integration is possible using custom nodes written in either C++ or Python (&lt;code&gt;rospy&lt;/code&gt;), but usually requires more manual setup. Unlike Foxglove, which uses standardized communication protocols like &lt;code&gt;foxglove_websocket&lt;/code&gt; via &lt;code&gt;foxglove_bridge&lt;/code&gt; (and optionally &lt;code&gt;rosbridge&lt;/code&gt;), Rerun ingests data directly through user-defined code and does not rely on ROS-specific bridge protocols. While Rerun avoids protocol-based bridging, it still requires users to write custom nodes that translate ROS messages into its API.&lt;/p&gt;
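A minimal bridge of that shape could look roughly like this. It is a sketch only: it assumes ROS 2 with rclpy, a recent Rerun Python SDK (where rr.Scalar is the scalar archetype), and an example /imu topic; adapt names and message types to your system.

```python
def entity_path(topic: str) -> str:
    # Rerun organizes data by entity path; a simple convention is to reuse
    # the ROS topic name without its leading slash.
    return topic.lstrip("/")


def main() -> None:
    # Imports live inside main() so the sketch can be read (and the helper
    # above exercised) without ROS 2 or Rerun installed.
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Imu
    import rerun as rr

    rr.init("ros2_rerun_bridge", spawn=True)  # opens the Rerun viewer

    class Bridge(Node):
        def __init__(self) -> None:
            super().__init__("rerun_bridge")
            self.create_subscription(Imu, "/imu", self.on_imu, 10)

        def on_imu(self, msg: Imu) -> None:
            # Forward a single field as a time series.
            rr.log(entity_path("/imu/linear_acceleration_x"),
                   rr.Scalar(msg.linear_acceleration.x))

    rclpy.init()
    rclpy.spin(Bridge())
```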

&lt;p&gt;Rerun is especially useful for visualizing time-synchronized multimodal data, such as sensor readings, 3D geometry, camera images, transforms, and trajectories. However, it currently lacks built-in support for certain ROS-specific features like interactive TF tree exploration, occupancy/grid map overlays, and full URDF-based robot model visualization. Community-maintained examples (e.g., the &lt;code&gt;urdf_loader&lt;/code&gt;) offer partial support for URDF rendering, but do not yet match RViz’s depth or interactivity.&lt;/p&gt;

&lt;p&gt;Rerun also cannot currently open ROS bag files directly (&lt;code&gt;.bag&lt;/code&gt; for ROS 1 or &lt;code&gt;.db3&lt;/code&gt; for ROS 2). Instead, users replay them with &lt;code&gt;rosbag play&lt;/code&gt; or &lt;code&gt;ros2 bag play&lt;/code&gt; and forward selected topics to Rerun using custom Python or C++ bridge nodes. This workflow offers flexibility and performance but requires additional configuration. Rerun uses its own &lt;code&gt;.rrd&lt;/code&gt; log format, which is optimized for high-throughput, time-seekable storage and streaming.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Performance with Large Data&lt;/strong&gt; &lt;a href="https://www.reduct.store/blog/comparison-rviz-foxglove-rerun#performance-with-large-data" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;RViz&lt;/strong&gt; is not fully optimized for very large datasets, such as dense point clouds, high-frequency topics, or long message histories. When visualizing large volumes of data, users may encounter performance issues like low frame rates, rendering lag, and high CPU or GPU usage. This happens because RViz continuously renders incoming ROS messages and stores message history in memory, which can quickly overwhelm system resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RViz 2&lt;/strong&gt; improves on this with better multithreading and more efficient message transport via DDS. These changes help boost performance and scalability in ROS 2 environments. However, RViz 2 still struggles with very dense or high-rate data streams, especially when rendering complex 3D data in real time, and these improvements do not fully solve the challenges of high-density visualization. To improve performance, users often reduce message history length, filter or downsample data, and disable non-essential displays.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Foxglove&lt;/strong&gt;, particularly its web version, can underperform RViz in high-data scenarios. Because it runs in a web browser, it's constrained by browser memory limits, single-threaded JavaScript execution, and limited access to hardware acceleration. As a result, visualizing large point clouds or streaming high-frequency topics may lead to lag, dropped frames, or browser instability. These limitations are especially evident when handling continuous 3D data or large bag files.&lt;/p&gt;

&lt;p&gt;Performance can vary depending on the use case and browser environment. The desktop application bypasses some browser limitations and can perform better. However, since it is built on Electron, it still carries the memory and resource-management overhead common to Electron-based apps, though these issues are generally less severe than in the web version. For lighter workloads, such as 2D plots or moderate-frequency telemetry, Foxglove often performs well and benefits from its accessible UI and cross-platform support.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rerun&lt;/strong&gt; is designed with high performance in mind for large-scale, multimodal data workflows. It is a native desktop application written in Rust and uses the modern WGPU rendering backend. This gives it direct access to system resources, helping it efficiently handle dense point clouds, long message histories, and high-frequency data streams. Behind the scenes, Rerun uses techniques such as memory-mapped I/O, zero-copy data handling, and intelligent batching to reduce latency and resource use.&lt;/p&gt;
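&lt;p&gt;The batching idea mentioned above can be illustrated with a toy logger. This is a conceptual sketch of the general technique, not Rerun's internals: samples are buffered and written in a single operation once a count or age threshold is reached, cutting per-message overhead.&lt;/p&gt;

```python
import time

class BatchingLogger:
    """Accumulate samples and flush them together once a size or age
    threshold is reached, instead of writing each sample individually."""

    def __init__(self, sink, max_batch=64, max_age_s=0.1):
        self.sink = sink            # callable that receives a list of samples
        self.max_batch = max_batch
        self.max_age_s = max_age_s
        self._buffer = []
        self._first_ts = None

    def log(self, sample):
        if not self._buffer:
            self._first_ts = time.monotonic()
        self._buffer.append(sample)
        # Flush when the batch is full or the oldest buffered sample is too old.
        if (len(self._buffer) >= self.max_batch
                or time.monotonic() - self._first_ts >= self.max_age_s):
            self.flush()

    def flush(self):
        if self._buffer:
            self.sink(self._buffer)   # one write for the whole batch
            self._buffer = []

batches = []
logger = BatchingLogger(batches.append, max_batch=3, max_age_s=10.0)
for i in range(7):
    logger.log(i)
logger.flush()
print(batches)  # prints [[0, 1, 2], [3, 4, 5], [6]]
```

The trade-off is latency: a sample can sit in the buffer for up to `max_age_s` before it becomes visible downstream.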

&lt;p&gt;Although only a few formal benchmarks compare Rerun with RViz or Foxglove, early community feedback and its architecture suggest that Rerun scales effectively with complex datasets. Performance can be further improved by filtering or downsampling data streams according to specific needs. Rerun is currently under active development to expand its capabilities for robotics visualization and analysis.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Best practices for handling large datasets include splitting data files by time or size (e.g., every 1–5 minutes), using separate files for different topic groups, and automatically deleting old files when disk space is low. Chunk compression can also save disk space more efficiently than whole-file compression, but this approach consumes more CPU and memory resources, representing a trade-off between storage and performance.&lt;/p&gt;
&lt;/blockquote&gt;
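&lt;p&gt;A rough sketch of the splitting-and-cleanup practice described in the note (the paths, the 5-minute bucket size, and the byte budget are illustrative, not tied to any particular recording tool):&lt;/p&gt;

```python
import os
import time

def rotated_path(directory, prefix, t=None):
    """Name recording files by wall-clock time so each file covers a
    bounded window (here: one file per 5-minute bucket)."""
    t = time.time() if t is None else t
    bucket = int(t // 300) * 300  # floor to a 5-minute boundary
    return os.path.join(directory, f"{prefix}_{bucket}.log")

def prune_oldest(directory, max_total_bytes):
    """Delete the oldest files until the directory fits the size budget."""
    files = sorted(
        (os.path.join(directory, f) for f in os.listdir(directory)),
        key=os.path.getmtime,
    )
    total = sum(os.path.getsize(f) for f in files)
    while files and total > max_total_bytes:
        oldest = files.pop(0)
        total -= os.path.getsize(oldest)
        os.remove(oldest)

print(rotated_path("/data/recordings", "camera", t=601))
# prints /data/recordings/camera_600.log
```

A writer that always opens `rotated_path(...)` for append gets time-based splitting for free, and a periodic `prune_oldest(...)` call enforces the disk budget.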

&lt;h2&gt;
  
  
  &lt;strong&gt;Analysis &amp;amp; Visualization&lt;/strong&gt; &lt;a href="https://www.reduct.store/blog/comparison-rviz-foxglove-rerun#analysis--visualization" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;RViz &amp;amp; RViz 2&lt;/strong&gt; &lt;a href="https://www.reduct.store/blog/comparison-rviz-foxglove-rerun#rviz--rviz-2" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Key Capabilities:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-Time Visualization &amp;amp; Bag File Support&lt;/strong&gt; : RViz and RViz 2 support real-time visualization by subscribing to live ROS topics. They also display data from recorded bag files (&lt;code&gt;.bag&lt;/code&gt; for ROS 1, &lt;code&gt;.db3&lt;/code&gt; and &lt;code&gt;.mcap&lt;/code&gt; for ROS 2), when those files are replayed using tools like &lt;code&gt;rosbag play&lt;/code&gt; or &lt;code&gt;ros2 bag play&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Format Support&lt;/strong&gt; : RViz visualizes a wide range of robot state information, including URDF robot models, coordinate transforms (TF), and various sensor data such as LIDAR, IMU, depth, and RGB cameras. It also supports odometry, localization, occupancy grid maps (used in SLAM), navigation data (paths, goals, trajectories), and interactive markers for user interaction. RViz 2 supports the same data types with ROS 2 message compatibility.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Interactive Markers&lt;/strong&gt; : These 3D UI elements enable users to manipulate objects within the visualization: setting navigation goals, adjusting robot end-effector positions, or dragging points for motion planning. Using them requires writing supporting ROS nodes and configuring interaction logic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configurable Interface&lt;/strong&gt; : Users can add, remove, and arrange panels, and customize display properties such as colors, shapes, and update rates for each data type. These configurations can be saved and reloaded using &lt;code&gt;.rviz&lt;/code&gt; files, streamlining repetitive workflows like navigation, debugging, or SLAM visualization. Multiple camera control modes (Orbit, FPS, Top-down) allow flexible 3D scene navigation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Plugin-Based Architecture&lt;/strong&gt; : Developers can extend RViz by creating custom visualizations and tools through C++ plugins. RViz 2 supports plugins too, built on a more modern and modular architecture.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsiuhsoi39k1qg9verjkh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsiuhsoi39k1qg9verjkh.png" alt="RViz" width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;a href="https://foxglove.dev/examples" rel="noopener noreferrer"&gt;Data from Mobile Robot Example&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited Analysis&lt;/strong&gt; : RViz and RViz 2 primarily serve visualization purposes and lack built-in tools for detailed message inspection, conditional logging, or advanced playback controls like pause, step, or speed adjustment. These features typically require external tools such as &lt;code&gt;rqt_bag&lt;/code&gt;, ROS CLI utilities, or third-party RViz plugins (e.g., &lt;code&gt;rosbag_panel&lt;/code&gt;). RViz also does not consistently warn about invalid data (e.g., NaNs or infinities), which can result in missing or misleading visuals. These tools are not designed for deep offline data analysis and are best used alongside more specialized logging or analysis solutions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No Time-Series Analysis&lt;/strong&gt; : RViz and RViz 2 do not support time-series plotting or statistical analysis. For these tasks, dedicated tools like &lt;code&gt;rqt_plot&lt;/code&gt;, PlotJuggler (with ROS 2 support), or external environments like Jupyter with Python are more appropriate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No Conditional Filtering&lt;/strong&gt; : RViz and RViz 2 display all incoming data without the ability to filter messages based on content or fields. Filtering must be performed upstream, often by custom ROS nodes. Some plugins or panels offer limited filtering but are not general solutions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No Topic Synchronization&lt;/strong&gt; : RViz and RViz 2 subscribe to each topic independently and display messages as they arrive. They do not synchronize data streams from different topics based on timestamps, which can cause misalignment or inconsistencies in time-sensitive visualizations (e.g., camera images, LIDAR scans, TF frames). Synchronization requires external tools like &lt;code&gt;message_filters&lt;/code&gt; or custom nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No Built-In Logging or Export&lt;/strong&gt; : RViz and RViz 2 cannot automatically export visualized data or record screencasts. Users are limited to manual screenshots unless using custom plugins or external tools to record sessions or extract data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited Multi-Robot Support&lt;/strong&gt; : While RViz can display data from multiple robots using namespaces, the interface is not designed for straightforward multi-robot workflows. RViz 2 includes minor improvements, but still lacks dedicated features for managing multiple robots simultaneously.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
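&lt;p&gt;The timestamp matching that external tools such as &lt;code&gt;message_filters&lt;/code&gt; provide can be sketched in plain Python. This toy version pairs messages from two streams whose stamps fall within a slop window and drops the rest; it is a conceptual illustration of approximate-time synchronization, not the ROS API:&lt;/p&gt;

```python
def sync_pairs(stream_a, stream_b, slop=0.05):
    """Pair (stamp, msg) tuples from two streams whose timestamps differ
    by at most `slop` seconds; unmatched messages are dropped, roughly
    what approximate-time synchronization does."""
    pairs = []
    i = j = 0
    a = sorted(stream_a)
    b = sorted(stream_b)
    while i != len(a) and j != len(b):
        ta, ma = a[i]
        tb, mb = b[j]
        if slop >= abs(ta - tb):
            pairs.append((ma, mb))
            i += 1
            j += 1
        elif tb > ta:
            i += 1   # a[i] is too old to ever match; drop it
        else:
            j += 1
    return pairs

images = [(0.00, "img0"), (0.10, "img1"), (0.20, "img2")]
scans = [(0.02, "scan0"), (0.21, "scan1")]
print(sync_pairs(images, scans, slop=0.05))
# prints [('img0', 'scan0'), ('img2', 'scan1')]
```

Note how `img1` is silently dropped because no scan arrives within 50 ms of it; this is exactly the kind of misalignment that goes unnoticed when each topic is rendered independently as it arrives.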

&lt;h3&gt;
  
  
  &lt;strong&gt;Foxglove&lt;/strong&gt; &lt;a href="https://www.reduct.store/blog/comparison-rviz-foxglove-rerun#foxglove" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Key Capabilities:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multi-Modal 3D Visualization&lt;/strong&gt; : Foxglove provides comprehensive 3D visualization for a variety of robotics data, including URDF robot models, TF trees, sensor streams (LIDAR, point clouds, camera feeds), occupancy grids, and navigation elements such as paths, goals, and costmaps. Users can interact with the scene in real time: rotating the view, toggling layers, and focusing on specific frames or topics. Multi-camera views, tooltips, and overlays enhance spatial understanding. Synchronized multi-viewports and flexible camera modes (free, fixed, follow-frame, sensor-aligned) make it possible to examine several spatial data streams side by side. All streams are synchronized through a shared timeline for consistent context across modalities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Topic Synchronization &amp;amp; Playback Timeline&lt;/strong&gt; : Foxglove offers a unified, timestamp-based timeline that synchronizes data from multiple topics. This ensures time-aligned playback of sensor streams like RGB images, depth, point clouds, IMU, and TFs, useful both in real time and with recorded data. The timeline includes playback controls such as pause, frame-by-frame stepping, variable speed, and bookmarks for quickly navigating to key events. This tight time synchronization is a major advantage over RViz, enabling clearer insights into system behavior.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Advanced Analysis &amp;amp; Time-Series Tools&lt;/strong&gt; : Foxglove offers a capable set of tools for offline analysis of recorded data. Users can inspect messages in detail, filter them by topic or namespace, and control playback through an integrated timeline with pause, step-by-step navigation, and adjustable speed. To view custom ROS 2 message types with full support, messages are best recorded in or converted to the MCAP format, although Foxglove can open other formats with some limitations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Modular &amp;amp; Configurable Interface&lt;/strong&gt; : The Foxglove UI is fully modular, allowing users to add, remove, duplicate, and rearrange panels such as 3D views, image feeds, message viewers, plots, diagnostics, and consoles. Each panel is highly configurable, with settings for color, scale, transparency, update rate, and filtering. Users can save layouts as JSON files, enabling reproducible setups, role-based dashboards, and fast task switching (e.g., from SLAM debugging to perception analysis). Layouts can be shared across teams or versioned over time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Custom Panels &amp;amp; Extensions&lt;/strong&gt; : Foxglove allows users to build custom panels using plugins, enabling specialized interfaces tailored to specific workflows. These panels are embedded directly into the Foxglove interface, keeping everything streamlined and centralized. This is particularly valuable for teams developing internal tools or dashboards for robotics development and testing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cloud &amp;amp; Collaboration&lt;/strong&gt; : Foxglove can be run locally or in the cloud. Its cloud features include shared dashboards, timeline comments, and real-time collaboration, enabling teams to jointly review logs or live data remotely. This makes it particularly useful for distributed development, remote testing, or asynchronous data reviews.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7ob7uf307o200pazpro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7ob7uf307o200pazpro.png" alt="Foxglove" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;a href="https://foxglove.dev/examples" rel="noopener noreferrer"&gt;Autonomous Robotic Manipulation Example&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited Real-Time 3D Interactivity&lt;/strong&gt; : Foxglove does not natively support interactive 3D markers like RViz. Users cannot directly manipulate objects in the 3D scene (e.g., setting goals, editing poses, or dragging elements) without building custom extensions. This limits Foxglove's out-of-the-box usability for real-time tasks such as motion planning, teleoperation, or interactive environment setup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited Advanced Features&lt;/strong&gt; : Foxglove currently lacks certain advanced features found in tools like PlotJuggler. For example, Foxglove does not yet support strict axis ratio locking — a critical feature for accurately visualizing spatial data where maintaining proportional relationships between axes is important. Additionally, Foxglove's built-in data transformation capabilities are limited compared to PlotJuggler's comprehensive suite of statistical and signal-processing tools, such as moving averages, derivatives, filtering, and custom mathematical expressions. These advanced features make PlotJuggler especially useful for detailed signal analysis and fine-grained data manipulation, often essential when debugging sensor data or control signals.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No Automated Anomaly Detection&lt;/strong&gt; : Foxglove does not include built-in automated validation or anomaly detection. It does not use ML models or rule-based systems to automatically flag issues. Instead, it offers detailed message introspection and customizable visualizations that enable users to manually identify irregularities such as NaNs, infinities, or out-of-range values. This hands-on approach requires user expertise but provides flexible, in-depth analysis without automated alerts.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Rerun&lt;/strong&gt; &lt;a href="https://www.reduct.store/blog/comparison-rviz-foxglove-rerun#rerun" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Key Capabilities:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-time &amp;amp; Recorded Data Visualization&lt;/strong&gt; : Rerun supports both live-streamed and recorded sensor data visualization with minimal latency. It ingests data via Rust- or Python-based logging SDKs, handling a wide range of robotics sensor modalities including 3D spatial data, camera imagery, numeric time-series, semantic segmentation maps, depth maps, annotations (bounding boxes, keypoints), and textual or categorical event data. Recorded datasets can be replayed with full timeline control for stepwise inspection or smooth playback, aiding in bug reproduction and model validation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Collaboration &amp;amp; Sharing Features&lt;/strong&gt; : Rerun streamlines collaborative workflows through data export and session sharing via &lt;code&gt;.rrd&lt;/code&gt; files. Teams can share recorded &lt;code&gt;.rrd&lt;/code&gt; files for offline inspection, annotate data using Annotation Context (which supports labeling via class IDs and color mapping), and use shared Recording IDs to log streams from multiple processes or machines into a unified session, as long as the Recording ID is set consistently at the time of logging. Note: merging previously recorded &lt;code&gt;.rrd&lt;/code&gt; files with different Recording IDs offline is currently not supported. Users can also export screenshots (for reports or dashboards) via the CLI or viewer options, depending on the version and available commands.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Customizable &amp;amp; Extensible UI&lt;/strong&gt; : The Rerun Viewer offers a modular, layout-aware interface tailored for tasks such as SLAM debugging, multi-sensor calibration, and performance profiling. Users can save and reload Blueprints — serialized UI configurations that preserve panel layouts, timelines, selected entities, and styling (e.g., color, transparency, size). A full styling hierarchy (override → store → default → fallback) makes it easy to customize visuals without modifying source data. Multiple synchronized views (3D scenes, timelines, 2D plots, raw data inspectors) support comprehensive analysis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rich 3D Visualization with Spatial Context&lt;/strong&gt; : Built on egui and WGPU, Rerun's 3D viewer efficiently renders large-scale scenes on consumer hardware. It uses an entity-path-based scene graph that reflects the hierarchical kinematic tree, allowing intuitive navigation and inspection of components, sensor frames, trajectories, bounding boxes, segmentation masks, dense point clouds, annotated images, 3D meshes, and time-series plots. Users can customize visual parameters (e.g., color maps, visibility, annotations, rendering modes) and navigate using orbit, zoom, and pan controls.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Flexible Time-Series &amp;amp; Event Logging&lt;/strong&gt; : Rerun supports synchronized timeline playback of multiple data streams, using both explicit (user-defined) and implicit (auto-derived) timestamps. It manages multiple time domains (logical/log time and timeline time) to accurately align heterogeneous data sources. Timeline controls include zooming, scrubbing, filtering by entity path or timeline, and detailed event inspection with metadata. Conditional filtering and selective visibility help isolate anomalies or relevant events in complex multi-agent or multi-sensor deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Programmable Data Access &amp;amp; Web Integration&lt;/strong&gt; : The Rerun SDK provides semantic logging primitives (e.g., &lt;code&gt;log_scalar&lt;/code&gt;, &lt;code&gt;log_image&lt;/code&gt;, &lt;code&gt;log_point_cloud&lt;/code&gt;, &lt;code&gt;log_text_entry&lt;/code&gt;, &lt;code&gt;log_tensor&lt;/code&gt;) that render automatically in the Viewer. Rerun uses Apache Arrow for efficient data handling, supporting advanced analysis with tools like Pandas and Jupyter. Direct export to formats like Parquet is supported via the API, making it suitable for both streaming visualization and offline batch analysis. The Viewer is also available as a React component, enabling seamless embedding within React applications and custom web dashboards, though integration with other JavaScript frameworks may require additional adaptation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Emerging Features&lt;/strong&gt; : Experimental capabilities include graph-based views for visualizing system architectures, connectivity, and agent interactions, extending Rerun's utility beyond traditional sensor data visualization into system design and research workflows.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
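&lt;p&gt;The override → store → default → fallback styling hierarchy mentioned above can be pictured as a simple lookup chain. This is a conceptual sketch with hypothetical names, not Rerun's actual API:&lt;/p&gt;

```python
def resolve_style(prop, override=None, store=None, default=None, fallback=None):
    """Return the first layer that defines `prop`, mirroring an
    override -> store -> default -> fallback precedence order."""
    for layer in (override, store, default, fallback):
        if layer and prop in layer:
            return layer[prop]
    return None

# A hypothetical lookup for one entity's point style:
fallback = {"color": "#ffffff", "radius": 1.0}   # viewer built-ins
default = {"color": "#808080"}                   # blueprint defaults
store = {"radius": 2.5}                          # logged with the data
override = {}                                    # nothing set in the UI

print(resolve_style("color", override, store, default, fallback))   # prints #808080
print(resolve_style("radius", override, store, default, fallback))  # prints 2.5
```

The appeal of such a chain is that UI tweaks (the override layer) never mutate the logged data; clearing the override simply re-exposes the stored or default value.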

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpkl31ajhsla9aed5yu8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpkl31ajhsla9aed5yu8.png" alt="Rerun" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;a href="https://rerun.io/examples" rel="noopener noreferrer"&gt;nuScenes Example&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No Built-In Advanced Analytics&lt;/strong&gt; : Rerun focuses primarily on visualization and lacks integrated statistical analysis, anomaly detection, or expression-based plotting features. In contrast, Foxglove provides richer analytics, including expression plots and integration with monitoring systems like Prometheus.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Not Optimized for Live Robot Control&lt;/strong&gt; : Although it supports real-time data streaming, Rerun is not designed for robot teleoperation or control input interaction. RViz and Foxglove offer more mature tools for monitoring and interacting with live robots.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No Native Support for Navigation and SLAM Maps&lt;/strong&gt; : Unlike RViz, Rerun does not natively visualize occupancy grids, costmaps, or SLAM results, limiting its utility for path planning or localization workflows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited Real-Time Collaboration&lt;/strong&gt; : While Rerun supports offline session sharing, it lacks live multi-user collaboration features such as synchronized remote views or cloud-hosted live sessions, which are available in Foxglove.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited Visualization of Large-Scale System Architectures&lt;/strong&gt; : Rerun's entity-based model focuses on spatial and temporal data but does not yet offer comprehensive tools for exploring complex system communication graphs or architecture diagrams interactively.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt; &lt;a href="https://www.reduct.store/blog/comparison-rviz-foxglove-rerun#conclusion" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;This article provided a detailed comparison of RViz, Foxglove, and Rerun, evaluating them across practical dimensions: pricing, platform and collaboration support, user interface, extensibility, ROS integration, performance with large datasets, and analysis and visualization capabilities. By outlining their strengths and limitations, we offer a clear perspective to help robotics engineers and developers choose the right tool for their specific needs.&lt;/p&gt;

&lt;p&gt;Choosing the right tool depends on your context: use RViz for real-time ROS development and interactive debugging, Foxglove for collaborative data analysis, time-synchronized playback, and remote team workflows, and Rerun for fast, developer-centric visualization of structured data in programmatic pipelines. In practice, many robotics teams find that combining these tools enables more effective development and validation across different stages of their workflows.&lt;/p&gt;




&lt;p&gt;We hope this comparison helps you make informed decisions and inspires you to keep exploring better tools and workflows. If you have questions, feedback, or insights to share, join the conversation on the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community Forum&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>robotics</category>
      <category>rviz</category>
      <category>foxglove</category>
      <category>rerun</category>
    </item>
    <item>
      <title>Getting Started with LeRobot</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Tue, 27 May 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/getting-started-with-lerobot-4i0a</link>
      <guid>https://dev.to/reductstore/getting-started-with-lerobot-4i0a</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fct5n78g9tpnlcaxt05nd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fct5n78g9tpnlcaxt05nd.png" alt="Intro image" width="800" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://huggingface.co/lerobot" rel="noopener noreferrer"&gt;&lt;strong&gt;LeRobot is an open-source project by Hugging Face&lt;/strong&gt;&lt;/a&gt; that makes it easy to explore the world of robotics with machine learning, even if you’ve never done anything like this before. It gives you pre-trained models, real-world data, and simple tools built with PyTorch, a popular machine learning framework. Whether you're just curious or ready to try your first robotics project, LeRobot is a great place to start.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You Need to Get Started&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#what-you-need-to-get-started" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;You can run everything in a simulation right from your browser — no robot, no installations, and no powerful computer needed. We’ll be using Google Colab, a free cloud-based coding environment.&lt;/p&gt;

&lt;p&gt;Here’s what you’ll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Google Account:&lt;/strong&gt; To use Colab, you need a Google account. If you use Gmail, you already have one. If not, you can &lt;a href="https://colab.research.google.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;create a Google account&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hugging Face Account:&lt;/strong&gt; LeRobot uses models and datasets hosted on Hugging Face. To access all features, you'll need to &lt;a href="https://huggingface.co/" rel="noopener noreferrer"&gt;&lt;strong&gt;create a Hugging Face account&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Preparation&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#preparation" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;To get started with LeRobot in Google Colab, first open Google Colab and sign in with your Google account. Once you're signed in, click the &lt;code&gt;New Notebook&lt;/code&gt; button to create a blank notebook — this is where you’ll run all your code.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; All the commands below are already written out in a &lt;a href="https://colab.research.google.com/gist/AnthonyCvn/f02f12ce113f0e2fcd773fd39d0e1dfa/getting-started-with-lerobot.ipynb" rel="noopener noreferrer"&gt;&lt;strong&gt;ready-made Google Colab notebook&lt;/strong&gt;&lt;/a&gt; you can use to follow along.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Step 1: Switch to GPU&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#step-1-switch-to-gpu" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;LeRobot can use a GPU (Graphics Processing Unit), which makes things run faster, especially for simulation and machine learning tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the Colab menu, click &lt;code&gt;Runtime&lt;/code&gt; &amp;gt; &lt;code&gt;Change runtime type&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under the &lt;code&gt;Hardware accelerator&lt;/code&gt;, select &lt;code&gt;GPU&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;code&gt;Save&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now your notebook is using a free GPU provided by Google. Note that GPU access in Colab is limited in time and resources, depending on whether you’re using the free or PRO version.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2: Clone the LeRobot Repository&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#step-2-clone-the-lerobot-repository" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;Run this command in a Colab code cell to download LeRobot from GitHub. This repository is public, so you don’t need a GitHub account to clone it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;git clone https://github.com/huggingface/lerobot.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A new folder named &lt;code&gt;lerobot&lt;/code&gt; will appear in the file browser on the left (click the folder icon to open it).&lt;/p&gt;

&lt;p&gt;For now, you can simply start with the &lt;code&gt;lerobot/examples&lt;/code&gt; folder. It contains ready-to-use scripts that let you try out real robot tasks using pre-trained models — no setup or deep knowledge needed.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Colab’s environment is temporary. If you restart the runtime, the files will be deleted and you’ll need to run the setup steps again. It’s best to keep these commands handy at the top of your notebook.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Step 3: Move into the LeRobot folder&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#step-3-move-into-the-lerobot-folder" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;Now that the LeRobot files are downloaded, we need to tell Python to work inside that folder. Run this command in a new cell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;%cd lerobot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This changes the current working directory to the &lt;code&gt;lerobot&lt;/code&gt; folder, where all the code and scripts are located.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 4: Install LeRobot and Its Dependencies&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#step-4-install-lerobot-and-its-dependencies" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;After cloning the repository and switching to the &lt;code&gt;lerobot&lt;/code&gt; folder, the next step is to install everything LeRobot needs to work. Run this command in a new cell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="s2"&gt;".[pusht]"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running it, LeRobot will be ready to use in your notebook. All necessary tools and libraries will be installed automatically.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you see any errors during installation, you may just need to install a missing library.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We recommend installing the &lt;code&gt;hf_xet&lt;/code&gt; library for faster and more reliable downloads from Hugging Face:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;hf_xet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tool helps speed up access to models and datasets, especially when loading large files in Colab.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running a Pre-Trained Model&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#running-a-pre-trained-model" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;LeRobot includes several pre-trained models, so you can try robot tasks without needing to train anything yourself. These models are already trained on specific tasks and ready to go.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;π0:&lt;/strong&gt; A powerful model that combines vision, language, and action. It’s designed for general robot tasks, for example, following instructions or reacting to what it sees.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;π0 FAST:&lt;/strong&gt; A faster, optimized version of the π0 model.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Diffusion Policy:&lt;/strong&gt; A model trained on the Push-T dataset, where a robot learns to push a T-shaped object toward a target.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;VQ-BeT:&lt;/strong&gt; Another model trained on the same Push-T task, but it uses a different architecture. You can run both and compare how they perform.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ACT:&lt;/strong&gt; A model trained for fine manipulation tasks that require high precision, like inserting objects or handling small parts.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By default, the example script runs the Diffusion Policy model on the Push-T task. To try it out, run this command in a code cell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;python examples/2_evaluate_pretrained_policy.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you run the command, LeRobot will automatically download the pre-trained model, set up a simulation environment, and run the robot as it tries to complete the task. Throughout the process, you’ll see messages showing what’s happening step-by-step. A short video will also be saved so you can see how the robot performed.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; When running dataset downloads or model loading multiple times in a row, you might occasionally encounter temporary access restrictions from Hugging Face. This is normal and part of their rate limiting to prevent abuse.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;What You’ll See in the Output&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As the model runs, Colab will print some logs in the output below the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{'observation.image': PolicyFeature(type=&amp;lt;FeatureType.VISUAL: 'VISUAL'&amp;gt;, shape=(3, 96, 96)), 'observation.state': PolicyFeature(type=&amp;lt;FeatureType.STATE: 'STATE'&amp;gt;, shape=(2,))}
Dict('agent_pos': Box(0.0, 512.0, (2,), float64), 'pixels': Box(0, 255, (96, 96, 3), uint8))
{'action': PolicyFeature(type=&amp;lt;FeatureType.ACTION: 'ACTION'&amp;gt;, shape=(2,))}
Box(0.0, 512.0, (2,), float32)
step=0 reward=np.float64(0.0) terminated=False
step=1 reward=np.float64(0.0) terminated=False
...
step=108 reward=np.float64(0.9727550736734778) terminated=False
step=109 reward=np.float64(0.9969248691240408) terminated=False
step=110 reward=np.float64(1.0) terminated=True
Success!
IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (680, 680) to (688, 688) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to 1 (risking incompatibility).
Video of the evaluation is available in 'outputs/eval/example_pusht_diffusion/rollout.mp4'.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s what they mean:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Observations:&lt;/strong&gt; What kind of data the robot receives, like the shape and type of images or sensor readings it expects.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Actions:&lt;/strong&gt; The format of the commands the robot will output to control its movements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reward:&lt;/strong&gt; A number that shows how well the robot is doing (higher = better).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Step-by-step info:&lt;/strong&gt; Shows progress, like step 108, reward 0.97, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Success or Failure:&lt;/strong&gt; Whether the robot completed the task. In our experiments, the same pre-trained model produced different results. It didn’t always complete the task successfully.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
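&lt;p&gt;The step lines follow a fixed pattern, so you can parse them if you want to plot the reward curve afterwards. A minimal sketch in plain Python, based on the log format shown above (the &lt;code&gt;parse_eval_log&lt;/code&gt; helper name is ours):&lt;/p&gt;

```python
import re

# Matches lines like: step=110 reward=np.float64(1.0) terminated=True
LINE = re.compile(r"step=(\d+) reward=np\.float64\(([\d.]+)\) terminated=(True|False)")

def parse_eval_log(lines):
    """Extract (step, reward, terminated) tuples from the evaluation output."""
    records = []
    for line in lines:
        m = LINE.match(line)
        if m:
            records.append((int(m.group(1)), float(m.group(2)), m.group(3) == "True"))
    return records

log = [
    "step=108 reward=np.float64(0.9727550736734778) terminated=False",
    "step=110 reward=np.float64(1.0) terminated=True",
    "Success!",
]
print(parse_eval_log(log))
```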

&lt;p&gt;You may also see a &lt;strong&gt;warning&lt;/strong&gt; about video resizing. It’s normal and doesn’t affect how the robot runs.&lt;/p&gt;
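&lt;p&gt;The resize warning comes from the video writer: frame dimensions must be divisible by &lt;code&gt;macro_block_size=16&lt;/code&gt;, so the 680×680 frames are rounded up to 688×688. The arithmetic is a simple round-up to the next multiple of 16 (the helper name is ours):&lt;/p&gt;

```python
def round_up(value, block=16):
    """Round a frame dimension up to the next multiple of `block` (ceiling division)."""
    return -(-value // block) * block

print(round_up(680))  # 688, matching the resize reported in the log
```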

&lt;p&gt;&lt;strong&gt;Where’s the Video?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The video is saved in &lt;code&gt;lerobot/outputs/eval/example_pusht_diffusion/rollout.mp4&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It shows the robot pushing the T-shaped object in simulation using the actions generated by the model. To download it, find the file in the file browser, click the three dots to the right of the filename, and select &lt;code&gt;Download&lt;/code&gt;. Then you can watch it with any video player.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qcssdzb2b4udiqod4dt.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qcssdzb2b4udiqod4dt.gif" alt="GIF" width="688" height="688"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Want to Try a Different Model?&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#want-to-try-a-different-model" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;You can switch from Diffusion Policy to VQ-BeT, which is trained on the same task. It’s a good way to explore how different models perform.&lt;/p&gt;

&lt;p&gt;Here’s how you can do it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the file browser, open the file &lt;code&gt;lerobot/examples/2_evaluate_pretrained_policy.py&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Double-click the file to open it in the editor pane on the right.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update the following lines in the script:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="mi"&gt;33&lt;/span&gt; &lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;lerobot.common.policies.vqbet.modeling_vqbet&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;VQBeTPolicy&lt;/span&gt;
&lt;span class="c1"&gt;# Optional: change output path to avoid overwriting results
&lt;/span&gt;&lt;span class="mi"&gt;36&lt;/span&gt; &lt;span class="n"&gt;output_directory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;outputs/eval/example_vqbet_pusht&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="mi"&gt;43&lt;/span&gt; &lt;span class="n"&gt;pretrained_policy_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;lerobot/vqbet_pusht&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="mi"&gt;47&lt;/span&gt; &lt;span class="n"&gt;policy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;VQBeTPolicy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_pretrained&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pretrained_policy_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;&lt;p&gt;Save the file by pressing &lt;code&gt;Ctrl+S&lt;/code&gt; (or &lt;code&gt;Cmd+S&lt;/code&gt; on Mac).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After saving, re-run the code cell that runs the script:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;python examples/2_evaluate_pretrained_policy.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will now evaluate the VQ-BeT model instead of the Diffusion Policy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Training a Model&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#training-a-model" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;LeRobot isn’t just for running pre-trained models, it also lets you try training one yourself. You can train the same type of model used by the official LeRobot team: the Diffusion Policy on the Push-T task.&lt;/p&gt;

&lt;p&gt;Since we’re using Google Colab, you have access to a free GPU, which is important because training on systems without a CUDA-enabled GPU can be very slow. For example, in our tests on a Mac with Apple Silicon (using the MPS backend), training took significantly longer: in one case, up to two hours to complete just 20 steps.&lt;/p&gt;

&lt;p&gt;By default, the training script runs for 5000 steps, which takes some time. In our case, the run took about an hour on Colab’s GPU. If you want to try it faster, you can reduce the steps to, say, 100. This will still give you a good idea of how training works.&lt;/p&gt;
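&lt;p&gt;You can estimate the shorter run from the numbers above: roughly an hour for 5000 steps works out to about 0.72 seconds per step, so 100 steps should finish in a minute or two. A back-of-the-envelope sketch (the timings are from our run and will vary):&lt;/p&gt;

```python
# Rough estimate based on our Colab run: ~1 hour for 5000 training steps.
full_run_seconds = 60 * 60
full_run_steps = 5000

seconds_per_step = full_run_seconds / full_run_steps
short_run_steps = 100
print(f"~{seconds_per_step:.2f} s/step, ~{seconds_per_step * short_run_steps:.0f} s for {short_run_steps} steps")
```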

&lt;p&gt;In the file &lt;code&gt;lerobot/examples/3_train_policy.py&lt;/code&gt;, find and change this line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="mi"&gt;42&lt;/span&gt; &lt;span class="n"&gt;training_steps&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run the training script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;python examples/3_train_policy.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will start training the Diffusion Policy model on the Push-T task using the &lt;code&gt;lerobot/pusht&lt;/code&gt; dataset.&lt;/p&gt;

&lt;p&gt;As the script runs, you’ll see lines like this in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;step: 0 loss: 1.161
step: 1 loss: 5.978
...
step: 4998 loss: 0.048
step: 4999 loss: 0.037
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each line shows the current training step and the corresponding loss value. A decreasing loss generally means the model is learning.&lt;/p&gt;
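&lt;p&gt;The pattern of a shrinking loss is the same one you’d see in any gradient-based training loop. As a toy illustration (not the LeRobot training code), gradient descent on a simple quadratic shows the loss falling toward zero step by step:&lt;/p&gt;

```python
# Toy gradient descent on f(w) = (w - 3)^2 to mimic the decreasing-loss pattern.
w, lr = 0.0, 0.1
for step in range(30):
    loss = (w - 3) ** 2   # current loss
    grad = 2 * (w - 3)    # gradient of the loss
    w -= lr * grad        # parameter update
    if step % 10 == 0:
        print(f"step: {step} loss: {loss:.3f}")
# w converges toward the minimum at 3, and the loss toward 0.
```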

&lt;p&gt;&lt;strong&gt;Where the Trained Model is Saved&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;LeRobot will save your trained model in &lt;code&gt;lerobot/outputs/train/example_pusht_diffusion&lt;/code&gt;. Inside the folder, you’ll find two files that represent your trained Diffusion Policy: one with the model’s weights and one with its settings. They will be used automatically when you run the model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Evaluating Your Trained Model&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#evaluating-your-trained-model" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Now let’s see your model in action.&lt;/p&gt;

&lt;p&gt;Open the file &lt;code&gt;lerobot/examples/2_evaluate_pretrained_policy.py&lt;/code&gt; and change the code so it loads your trained model instead of the pre-trained one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="mi"&gt;33&lt;/span&gt; &lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;lerobot.common.policies.diffusion.modeling_diffusion&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;DiffusionPolicy&lt;/span&gt;
&lt;span class="mi"&gt;36&lt;/span&gt; &lt;span class="n"&gt;output_directory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;outputs/eval/example_pusht_diffusion&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Comment out the old pretrained model path
&lt;/span&gt;&lt;span class="mi"&gt;43&lt;/span&gt; &lt;span class="c1"&gt;# pretrained_policy_path = "lerobot/diffusion_pusht"
# Use your newly trained model path instead
&lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt; &lt;span class="n"&gt;pretrained_policy_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;outputs/train/example_pusht_diffusion&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="mi"&gt;47&lt;/span&gt; &lt;span class="n"&gt;policy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;DiffusionPolicy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_pretrained&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pretrained_policy_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To avoid overwriting the previous video, give your video a new name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="mi"&gt;136&lt;/span&gt; &lt;span class="n"&gt;video_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;output_directory&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rollout_our_model.mp4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run the evaluation script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;python examples/2_evaluate_pretrained_policy.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What You’ll See&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The script will run your model in simulation and save a video to &lt;code&gt;lerobot/outputs/eval/example_pusht_diffusion/rollout_our_model.mp4&lt;/code&gt;, which you can open later to see how your model behaved.&lt;/p&gt;

&lt;p&gt;You’ll also see logs like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
step=297 reward=np.float64(0.0) terminated=False
step=298 reward=np.float64(0.0) terminated=False
step=299 reward=np.float64(0.0) terminated=False
Failure!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means the robot didn’t complete the task successfully. Even if you trained for 5000 steps, your model may still perform noticeably worse than the official pre-trained model. That’s normal: the LeRobot team trained their models with much more compute and fine-tuning. In comparison, your version might show less precise or more random movements. It’s a good first step, though, and shows the entire training and evaluation pipeline working end-to-end.&lt;/p&gt;

&lt;h2&gt;
  
  
  Downloading a Dataset&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#downloading-a-dataset" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;To train a model, we need one key ingredient: data. This includes video from the robot’s cameras, joint positions, and the actions it took over time.&lt;/p&gt;

&lt;p&gt;LeRobot makes this part easy. It comes with a growing collection of high-quality robot learning datasets you can download and explore with just a few lines of code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://huggingface.co/datasets?other=LeRobot" rel="noopener noreferrer"&gt;&lt;strong&gt;Browse all available datasets here&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To download and inspect a dataset, run this example script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!python examples/1_load_lerobot_dataset.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, this will download the &lt;code&gt;lerobot/aloha_mobile_cabinet&lt;/code&gt; dataset.&lt;/p&gt;

&lt;p&gt;But you’re not limited to just one. If you’d like to try the dataset used by the models in the previous section (DiffusionPolicy and VQ-BeT), open the script and change the &lt;code&gt;repo_id&lt;/code&gt; variable like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="mi"&gt;50&lt;/span&gt; &lt;span class="n"&gt;repo_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;lerobot/pusht&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then re-run the script. This will download the &lt;a href="https://huggingface.co/datasets/lerobot/pusht" rel="noopener noreferrer"&gt;&lt;strong&gt;Push-T dataset&lt;/strong&gt;&lt;/a&gt;, the same one used to train both models you just ran earlier. You’ll now have access to the raw data they were trained on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tip: Clean Up the Output&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The dataset script prints a lot of information, which can be overwhelming for beginners. To make things easier, you can comment out some of the verbose print lines.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To comment out multiple lines quickly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Windows/Linux:&lt;/strong&gt; Press &lt;code&gt;Ctrl + /&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;macOS:&lt;/strong&gt; Press &lt;code&gt;Cmd + /&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Suggested lines to comment out include:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="mi"&gt;38&lt;/span&gt;  &lt;span class="c1"&gt;# print("List of available datasets:")
&lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;  &lt;span class="c1"&gt;# pprint(lerobot.available_datasets)
&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;  &lt;span class="c1"&gt;# hub_api = HfApi()
&lt;/span&gt;&lt;span class="mi"&gt;43&lt;/span&gt;  &lt;span class="c1"&gt;# repo_ids = [info.id for info in hub_api.list_datasets(task_categories="robotics", tags=["LeRobot"])]
&lt;/span&gt;&lt;span class="mi"&gt;44&lt;/span&gt;  &lt;span class="c1"&gt;# pprint(repo_ids)
&lt;/span&gt;&lt;span class="mi"&gt;65&lt;/span&gt;  &lt;span class="c1"&gt;# print("Features:")
&lt;/span&gt;&lt;span class="mi"&gt;66&lt;/span&gt;  &lt;span class="c1"&gt;# pprint(ds_meta.features)
&lt;/span&gt;&lt;span class="mi"&gt;69&lt;/span&gt;  &lt;span class="c1"&gt;# print(ds_meta)
&lt;/span&gt;&lt;span class="mi"&gt;73&lt;/span&gt;  &lt;span class="c1"&gt;# dataset = LeRobotDataset(repo_id, episodes=[0, 10, 11, 23])
&lt;/span&gt;&lt;span class="mi"&gt;76&lt;/span&gt;  &lt;span class="c1"&gt;# print(f"Selected episodes: {dataset.episodes}")
&lt;/span&gt;&lt;span class="mi"&gt;77&lt;/span&gt;  &lt;span class="c1"&gt;# print(f"Number of episodes selected: {dataset.num_episodes}")
&lt;/span&gt;&lt;span class="mi"&gt;78&lt;/span&gt;  &lt;span class="c1"&gt;# print(f"Number of frames selected: {dataset.num_frames}")
&lt;/span&gt;&lt;span class="mi"&gt;82&lt;/span&gt;  &lt;span class="c1"&gt;# print(f"Number of episodes selected: {dataset.num_episodes}")
&lt;/span&gt;&lt;span class="mi"&gt;83&lt;/span&gt;  &lt;span class="c1"&gt;# print(f"Number of frames selected: {dataset.num_frames}")
&lt;/span&gt;&lt;span class="mi"&gt;86&lt;/span&gt;  &lt;span class="c1"&gt;# print(dataset.meta)
&lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt;  &lt;span class="c1"&gt;# print(dataset.hf_dataset)
&lt;/span&gt;&lt;span class="mi"&gt;111&lt;/span&gt; &lt;span class="c1"&gt;# pprint(dataset.features[camera_key])
&lt;/span&gt;&lt;span class="mi"&gt;113&lt;/span&gt; &lt;span class="c1"&gt;# pprint(dataset.features[camera_key])
&lt;/span&gt;&lt;span class="mi"&gt;119&lt;/span&gt; &lt;span class="c1"&gt;# delta_timestamps = {
#... all lines
&lt;/span&gt;&lt;span class="mi"&gt;148&lt;/span&gt; &lt;span class="c1"&gt;# break
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can always uncomment them later if you want a deeper look into the dataset structure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What’s Inside the Push-T Dataset?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once downloaded, you’ll see a summary like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Number of episodes:&lt;/strong&gt; 206. An episode is like one full attempt by the robot to complete a task, one round of practice.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Frames per episode (avg.):&lt;/strong&gt; ~124. Each episode is made up of about 124 images (or frames), showing what the robot saw over time as it moved and acted.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Recording speed:&lt;/strong&gt; 10 FPS. These images were recorded at 10 frames per second, like a slow-motion video. It lets you see how the robot moved step by step.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Camera views:&lt;/strong&gt; &lt;code&gt;observation.image&lt;/code&gt;. Each frame is taken from the robot’s camera, and labeled as &lt;code&gt;observation.image&lt;/code&gt; in the data. It’s what the robot sees.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Task description:&lt;/strong&gt; Push the T-shaped block onto the T-shaped target.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Image format:&lt;/strong&gt; Each image is stored as a PyTorch tensor (a data structure used in machine learning) with shape (3, 96, 96).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
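&lt;p&gt;The raw camera frames arrive as 96×96×3 &lt;code&gt;uint8&lt;/code&gt; images (the &lt;code&gt;pixels&lt;/code&gt; Box shown in the evaluation logs), while &lt;code&gt;observation.image&lt;/code&gt; has shape (3, 96, 96). The conversion is a channel transpose plus a rescale to [0, 1]; a sketch with NumPy standing in for PyTorch:&lt;/p&gt;

```python
import numpy as np

# A dummy camera frame shaped like the Push-T observations: (H, W, C), uint8.
frame = np.random.randint(0, 256, size=(96, 96, 3), dtype=np.uint8)

# Channel-first float image in [0, 1], matching observation.image's (3, 96, 96).
tensor = frame.transpose(2, 0, 1).astype(np.float32) / 255.0

print(tensor.shape, tensor.dtype)
```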

&lt;p&gt;LeRobot downloads the dataset into a hidden cache folder inside the Colab environment at &lt;code&gt;/root/.cache/huggingface/lerobot/lerobot/pusht/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This folder contains all the data files: observations, actions, metadata, and even video recordings. Since it’s hidden by default, follow these steps to access it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click the eye icon at the top of the file browser to show hidden folders like &lt;code&gt;.cache&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the folder icon with two dots just above the &lt;code&gt;lerobot&lt;/code&gt; folder.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4satqv0pbrp2qw4daeb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4satqv0pbrp2qw4daeb.png" alt="Folders" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;p&gt;Now navigate through the folders like this: &lt;code&gt;root&lt;/code&gt; &amp;gt; &lt;code&gt;.cache&lt;/code&gt; &amp;gt; &lt;code&gt;huggingface&lt;/code&gt; &amp;gt; &lt;code&gt;lerobot&lt;/code&gt; &amp;gt; &lt;code&gt;lerobot&lt;/code&gt; &amp;gt; &lt;code&gt;pusht&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To go back to the &lt;code&gt;lerobot&lt;/code&gt; folder, look for the &lt;code&gt;content&lt;/code&gt; folder (it’s at the same level as &lt;code&gt;root&lt;/code&gt;) and go inside.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Dataset Folder Structure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's what the folder structure typically looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lerobot/pusht
├── README.md
├── .cache/
├── data/
│   └── chunk-000/
│       ├── episode_000000.parquet
│       └── ...  # More episodes
├── meta/
├── videos/
│   └── chunk-000/
│       └── observation.image/
│           ├── episode_000000.mp4
│           └── ...  # More videos
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;README.md&lt;/code&gt;: A short file that explains what’s inside the dataset and what it’s for.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;data/&lt;/code&gt;: This folder contains one &lt;code&gt;.parquet&lt;/code&gt; file per episode, where the robot logs everything it experienced.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;meta/&lt;/code&gt;: This folder contains helpful background info that LeRobot uses to organize and analyze the data, such as episode descriptions, task goals, and performance stats.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;videos/&lt;/code&gt;: Short &lt;code&gt;.mp4&lt;/code&gt; videos showing the robot’s camera view during each episode. These are great if you want to see what the robot was doing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;.cache/&lt;/code&gt;: A hidden folder used by LeRobot internally.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Visualize a Dataset&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#visualize-a-dataset" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Once your dataset is loaded, it’s super helpful to see what the robot actually experienced. LeRobot comes with an easy-to-use, interactive visualization tool that runs right in your browser.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try the Built-in Viewer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can open it here: &lt;a href="https://huggingface.co/spaces/lerobot/visualize_dataset" rel="noopener noreferrer"&gt;&lt;strong&gt;Visualize Dataset (v2.0+ latest dataset format)&lt;/strong&gt;&lt;/a&gt; or use the older version: &lt;a href="https://huggingface.co/spaces/lerobot/visualize_dataset_v1.6" rel="noopener noreferrer"&gt;&lt;strong&gt;Visualize Dataset (v1.6 old dataset format)&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the viewer, just enter the name of a dataset, like &lt;code&gt;lerobot/pusht&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Can You See?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Watch each episode like a video from the robot’s point of view.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Explore graphs showing how the robot moved and what actions it took. For example, in the &lt;code&gt;lerobot/pusht&lt;/code&gt; dataset, the viewer displays Motor 0 and Motor 1 — both state and action — as four curves plotted over time. This allows you to see how the robot's decisions changed from frame to frame during each episode.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fziuy95474ovgo9qwiok8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fziuy95474ovgo9qwiok8.png" alt="Motors" width="774" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#next-steps" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;You’ve just taken your first steps into robotics and machine learning with LeRobot, so what can you do next?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Try different models and tasks:&lt;/strong&gt; LeRobot supports several models and scenarios. For more challenging examples, check out the &lt;code&gt;lerobot/examples/advanced&lt;/code&gt; folder.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Run your own experiment:&lt;/strong&gt; Once you’re familiar with the basic workflow, you can try a simple experiment: change the dataset slightly or load a new one. Even a small change, such as selecting a different set of episodes, will help you see how data affects the model’s behavior.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Grow your projects further:&lt;/strong&gt; As you work more with LeRobot and collect larger amounts of data, organizing and managing that data becomes important. This can feel overwhelming at first, but understanding the basics of data management will save you time and frustration later. We recommend checking out this beginner-friendly guide, &lt;a href="https://www.reduct.store/blog/store-robotic-data" rel="noopener noreferrer"&gt;&lt;strong&gt;How to Store and Manage Robotics Data&lt;/strong&gt;&lt;/a&gt;. It explains simple strategies for handling robot data efficiently. You don’t need to master this now, but keeping these ideas in mind will help you scale your experiments smoothly when you’re ready.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion&lt;a href="https://www.reduct.store/blog/hugging-face-lerobot#conclusion" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we saw how LeRobot lets you explore robotics and machine learning without needing a physical robot. You ran pre-trained models in simulation, worked with real robot data, and even trained a simple model — all within Colab.&lt;/p&gt;

&lt;p&gt;What many find surprising is how accessible this has become. Tasks that once required expensive hardware and deep expertise can now be done with just a browser and a few lines of code. Seeing a robot act based on what it sees is exciting, and you can go further by modifying, training, and evaluating models yourself. LeRobot is a great way to start new projects and dive into robotics.&lt;/p&gt;




&lt;p&gt;We hope this tutorial inspires you to keep exploring. If you have any questions or ideas to share, feel free to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community Forum&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>tutorials</category>
      <category>robotics</category>
      <category>lerobot</category>
    </item>
    <item>
      <title>Getting Started with MetriCal</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Tue, 13 May 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/getting-started-with-metrical-215i</link>
      <guid>https://dev.to/reductstore/getting-started-with-metrical-215i</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flp5tq3iz8hkst9ivlaag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flp5tq3iz8hkst9ivlaag.png" alt="Intro image" width="800" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sensor calibration&lt;/strong&gt; is the process of determining the precise mathematical parameters that describe how a sensor perceives or measures the physical world. By comparing sensor outputs to known reference values, we can correct measurement errors and ensure data from different sensors align accurately.&lt;/p&gt;

&lt;p&gt;There are two main categories of calibration parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Intrinsic parameters (Intrinsics):&lt;/strong&gt; These capture the internal characteristics of a sensor, such as lens distortion in cameras or bias and scaling errors in IMUs. Calibrating intrinsics helps eliminate built-in measurement errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Extrinsic parameters (Extrinsics):&lt;/strong&gt; These define a sensor's position and orientation relative to another sensor or the environment. Accurate extrinsics are essential for transforming and combining data from multiple sensors into a shared coordinate system.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;High-quality calibration is key to getting reliable, consistent data, which is critical for mapping, perception, and decision-making in robotics and autonomous systems. Recognizing this need, &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt; can be used to manage the entire calibration data pipeline — from raw inputs such as LiDAR scans and calibration images to the output files produced during processing (e.g., intrinsic/extrinsic parameters, transformation matrices). When used together with tools like &lt;a href="https://www.tangramvision.com/products/calibration/metrical" rel="noopener noreferrer"&gt;&lt;strong&gt;MetriCal&lt;/strong&gt;&lt;/a&gt;, which streamline the calibration of multimodal sensor data, ReductStore can help enable scalable, automated workflows across distributed systems by making it easy to collect, store, and manage sensor data directly at the edge. Calibration results can then be saved back to ReductStore for persistent access and reuse.&lt;/p&gt;
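&lt;p&gt;As a minimal sketch of that last step (the server address, bucket name, entry name, and token variable below are assumptions for illustration, not part of the MetriCal example), a calibration result file could be written to a ReductStore bucket over its HTTP API:&lt;/p&gt;

```shell
# Minimal sketch, assuming a ReductStore instance at localhost:8383 and an
# API token in $REDUCT_API_TOKEN; bucket and entry names are illustrative.
# ReductStore stores each record under a microsecond timestamp:
#   POST /api/v1/b/<bucket>/<entry>?ts=<unix-microseconds>
upload_calibration() {
  local file="$1"                    # e.g. a results file from a calibration run
  local ts
  ts=$(( $(date +%s) * 1000000 ))    # record timestamp in microseconds
  curl -s -X POST \
    -H "Authorization: Bearer ${REDUCT_API_TOKEN}" \
    --data-binary @"${file}" \
    "http://localhost:8383/api/v1/b/calibration/results?ts=${ts}"
}
```

&lt;p&gt;Calling &lt;code&gt;upload_calibration results.json&lt;/code&gt; would then store the result as a timestamped record that downstream systems can query by time range.&lt;/p&gt;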

&lt;h2&gt;What is MetriCal?&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.tangramvision.com/metrical/intro/" rel="noopener noreferrer"&gt;&lt;strong&gt;MetriCal is a calibration tool developed by Tangram Vision&lt;/strong&gt;&lt;/a&gt; for systems that include diverse types of sensors. It’s designed to handle real-world calibration scenarios and supports the simultaneous processing of data from cameras, LiDARs, and IMUs. MetriCal is suitable for both small-scale setups and larger, production-level environments, providing tools for precise and consistent multi-sensor calibration.&lt;/p&gt;

&lt;h3&gt;Key Features&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ROS Data Input:&lt;/strong&gt; Supports &lt;code&gt;.bag&lt;/code&gt; and &lt;code&gt;.mcap&lt;/code&gt; files (recommended)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automatic Extrinsics Estimation:&lt;/strong&gt; Computes sensor and target poses without requiring CAD models or manual setup&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unlimited Sensor Streams:&lt;/strong&gt; Supports an arbitrary number of input streams&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Broad Target Support:&lt;/strong&gt; Compatible with both 2D and 3D targets; includes a library of premade targets and supports multiple targets at once&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Modular Calibration Workflow:&lt;/strong&gt; Allows splitting the calibration process into multiple datasets and stages&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Detailed Diagnostics:&lt;/strong&gt; Provides visual and numerical feedback on data quality and calibration performance&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ROS Integration:&lt;/strong&gt; Outputs calibration results as a URDF file&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pixel-Level Corrections:&lt;/strong&gt; Generates lookup tables for single-camera undistortion and stereo rectification&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lightweight Deployment:&lt;/strong&gt; CPU-only operation; runs efficiently on compact devices like Intel NUCs or in the cloud&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;How MetriCal Works&lt;/h2&gt;

&lt;p&gt;MetriCal is structured as a CLI-based, fully scriptable pipeline designed to support reproducible workflows and automation. The core calibration process can be divided into the following stages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Preparation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The quality of calibration strongly depends on the choice of targets and the quality of the input data. It's important to select or build targets suited to your use case and follow MetriCal’s data capture guidelines to ensure the collected data meets the required quality standards.&lt;/p&gt;

&lt;p&gt;At this stage, you'll also prepare an &lt;strong&gt;object space file&lt;/strong&gt;, which describes all calibration targets and their properties.&lt;/p&gt;
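&lt;p&gt;For illustration only, such a file might describe a markerboard target along these lines. The property values are taken from the example later in this article, but the field names here are invented for readability and do not reflect MetriCal's actual schema; consult the MetriCal documentation for the real format:&lt;/p&gt;

```json
{
  "markerboard_target": {
    "grid": "7x7",
    "marker_size_m": 0.097,
    "checker_size_m": 0.125,
    "dictionary": "Aruco4x4_1000"
  }
}
```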

&lt;p&gt;&lt;strong&gt;2. Initialization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the dataset and configuration files are ready, MetriCal’s &lt;code&gt;init mode&lt;/code&gt; analyzes sensor observations to infer a raw input &lt;strong&gt;plex&lt;/strong&gt; — a description of the spatial, temporal, and semantic relationships within your perception system. It represents the physical system being calibrated and serves as the starting point for all further calibration steps.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you already have a plex with existing calibration results that you want to preserve, it can be used as a seed for an init plex.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;3. Calibration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;calibrate mode&lt;/code&gt;, MetriCal performs a full bundle adjustment to refine both the initial plex and the object space. It applies motion filtering to remove features affected by motion blur, rolling shutter, false detections, and other artifacts in images or point clouds.&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;.json&lt;/code&gt; cache file is created at this step. This file stores detected objects, allowing future runs to skip the detection process and complete faster.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The calibration data capture and detection process can also be visualized during this step.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;4. Diagnostics&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;MetriCal generates a detailed diagnostic report with color-coded charts summarizing calibration quality:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cyan&lt;/strong&gt; – spectacular&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Green&lt;/strong&gt; – good&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orange&lt;/strong&gt; – okay, but generally poor&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Red&lt;/strong&gt; – bad&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Visualization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;display mode&lt;/code&gt;, calibration results are visualized using &lt;a href="https://rerun.io/" rel="noopener noreferrer"&gt;&lt;strong&gt;Rerun, an open-source tool for multimodal data visualization&lt;/strong&gt;&lt;/a&gt;. It allows you to quickly verify the calibration quality before exporting.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Typically, the same dataset is used for visualization, but you can also use a different one if it has the same topic names.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;6. Export&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;shape mode&lt;/code&gt;, the optimized plex can be transformed into various configurations for use in deployed systems, for example, ROS URDFs or pixel-wise lookup tables.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;MetriCal also includes several additional modes to support advanced workflows: &lt;code&gt;completion mode&lt;/code&gt;, &lt;code&gt;consolidate object spaces mode&lt;/code&gt;, &lt;code&gt;pipeline mode&lt;/code&gt;, and &lt;code&gt;pretty print mode&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
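&lt;p&gt;Tying the stages together, a full run can be scripted end to end. The sketch below maps the stages above onto MetriCal's CLI modes; the mode names come from this article, but the argument order, flags, and file names are placeholders rather than verified MetriCal signatures:&lt;/p&gt;

```shell
# Sketch of the stage-to-mode mapping described above.
# DATA, OBJ, PLEX, and RESULTS are placeholder paths; real arguments may differ.
run_metrical_pipeline() {
  local DATA="observations/" OBJ="object_space.json"
  local PLEX="init_plex.json" RESULTS="results.json"
  metrical init "$DATA" "$OBJ"          # 2. infer the raw input plex
  metrical calibrate "$DATA" "$PLEX"    # 3. bundle adjustment; writes a .json cache
  metrical display "$DATA" "$RESULTS"   # 5. visualize results via Rerun
  metrical shape "$RESULTS"             # 6. export, e.g. to a ROS URDF
}
```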

&lt;h2&gt;MetriCal Example&lt;/h2&gt;

&lt;p&gt;To test MetriCal’s multi-sensor capabilities, we use the &lt;a href="https://gitlab.com/tangram-vision/platform/metrical/-/tree/main/examples/camera_lidar" rel="noopener noreferrer"&gt;&lt;strong&gt;official example featuring two cameras and a LiDAR&lt;/strong&gt;&lt;/a&gt;. The dataset contains synchronized observations from all three sensors, capturing a LiDAR circle target from different angles. This allows MetriCal to calculate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Intrinsics and poses for both cameras&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Extrinsics between each camera and the LiDAR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Target geometry and consistency across different views&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Installation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We installed MetriCal via Docker. For convenient access, define a shell function that wraps the Docker invocation, for example in &lt;code&gt;~/.zshrc&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# in ~/.zshrc
metrical() {
  docker run --rm --tty --init --user="$(id -u):$(id -g)" \
    --volume="$PATH/metrical/":"/datasets" \
    --volume=metrical-license-cache:/.cache/tangram-vision \
    --workdir="/datasets" \
    --add-host=host.docker.internal:host-gateway \
    tangramvision/cli:latest \
    --license="LICENSE KEY" \
    "$@";
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;blockquote&gt;
&lt;p&gt;MetriCal requires a license key.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can also install MetriCal natively on &lt;code&gt;Ubuntu&lt;/code&gt; or &lt;code&gt;Pop!_OS&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Calibration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After cloning the repository, download and unzip the example dataset (&lt;code&gt;.zip&lt;/code&gt;), then place the &lt;code&gt;observations&lt;/code&gt; folder into:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$PATH/metrical/examples/camera_lidar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, set the &lt;code&gt;LICENSE&lt;/code&gt; variable inside &lt;code&gt;metrical_alias.sh&lt;/code&gt;, located in the same directory.&lt;/p&gt;

&lt;p&gt;Once everything is configured, you can run the full calibration pipeline using the provided shell script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$PATH/metrical/examples/camera_lidar/camera_lidar_runner.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Visualization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To visualize the results, install Rerun via &lt;code&gt;pip&lt;/code&gt; and launch the Rerun server in a separate terminal tab.&lt;/p&gt;
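&lt;p&gt;For example (assuming Rerun's PyPI package and viewer command names, &lt;code&gt;rerun-sdk&lt;/code&gt; and &lt;code&gt;rerun&lt;/code&gt;; verify against your installed version):&lt;/p&gt;

```shell
# Sketch: install the Rerun viewer and start it in the background.
setup_rerun() {
  pip install rerun-sdk   # Rerun's PyPI package; provides the `rerun` CLI
  rerun &                 # launch the viewer, which listens for incoming data
}
```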

&lt;p&gt;Then, run the following command to display calibration results in &lt;code&gt;display mode&lt;/code&gt; and view the data in real time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;metrical display /datasets/examples/camera_lidar/observations /datasets/examples/camera_lidar/results.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bz7fl6ey1ryd7rjqi1f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bz7fl6ey1ryd7rjqi1f.png" alt="correction" width="701" height="646"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Understanding Results&lt;/h3&gt;

&lt;p&gt;During calibration, MetriCal produces charts and diagnostics that show the quality of the process and highlight areas that may need improvement.&lt;/p&gt;

&lt;h4&gt;Data Inputs (DI Section)&lt;/h4&gt;

&lt;p&gt;The Data Inputs section provides an overview of the input data and ensures that the dataset is appropriate for a successful calibration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Metrics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Calibration Inputs (DI-1):&lt;/strong&gt; Displays basic configuration parameters.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ DI-1 █ Calibration Inputs
+--------------------------------------+----------+
| Calibration Parameter                | Value    |
+--------------------------------------+----------+
| MetriCal Version                     | 13.2.1   |
+--------------------------------------+----------+
| Optimization Profile                 | Standard |
+--------------------------------------+----------+
| Camera Motion Threshold              | Disabled |
+--------------------------------------+----------+
| Lidar Motion Threshold               | Disabled |
+--------------------------------------+----------+
| Preserve Input Constraints           | Disabled |
+--------------------------------------+----------+
| Object Relative Extrinsics Inference | Enabled  |
+--------------------------------------+----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Object Space Descriptions (DI-2):&lt;/strong&gt; Describes the calibration targets (object spaces).
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ DI-2 █ Object Space Descriptions
+-------------+-------------------------+---------------------------------------+------------------------+
| Type        | UUID                    | Detector                              | Variance               |
+-------------+-------------------------+---------------------------------------+------------------------+
| Circle      | 34e6df7b...45d796bf     | - 0.6m radius                         | 1e-8, 1e-8, 1e-8       |
|             |                         | - 0.375m x offset                     |                        |
|             | Mutual Group A          | - 0.375m y offset                     |                        |
|             | |-- 24e6df7b...45d796bf | - 0m z offset                         |                        |
|             |                         | - 0.05m reflective tape width         |                        |
|             |                         | - Detect interior points: true        |                        |
+-------------+-------------------------+---------------------------------------+------------------------+
| Markerboard | 24e6df7b...45d796bf     | - 7x7 grid                            | 0.0002, 0.0002, 0.0002 |
|             |                         | - 0.097m markers                      |                        |
|             | Mutual Group A          | - 0.125m checkers (aka solid squares) |                        |
|             | |-- 34e6df7b...45d796bf | - Dictionary: Aruco4x4_1000           |                        |
|             |                         | - Marker IDs start at 0               |                        |
|             |                         | - Top-left corner is a Marker         |                        |
+-------------+-------------------------+---------------------------------------+------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Processed Observation Count (DI-3):&lt;/strong&gt; Shows how many observations were processed from the dataset.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ DI-3 █ Processed Observation Count
+----------------------------------+--------+-------------------+------------------------+-----------------------+
| Component                        | # read | # with detections | # after quality filter | # after motion filter |
+----------------------------------+--------+-------------------+------------------------+-----------------------+
| infra1_image_rect_raw (f7df04cc) |    283 |               276 |                    273 |                   273 |
+----------------------------------+--------+-------------------+------------------------+-----------------------+
| infra2_image_rect_raw (34ed8934) |    284 |               282 |                    278 |                   278 |
+----------------------------------+--------+-------------------+------------------------+-----------------------+
|      velodyne_points1 (38140838) |   2750 |              2026 |                   2026 |                  2026 |
+----------------------------------+--------+-------------------+------------------------+-----------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Camera FOV Coverage (DI-4):&lt;/strong&gt; Displays how well the calibration data covers the field of view (FOV) of each camera. Ideal coverage is characterized by minimal red cells, which represent areas without detected features.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6galt99iai13y8bncyp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6galt99iai13y8bncyp.png" alt="DI-4" width="761" height="768"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Detection Timeline (DI-5):&lt;/strong&gt; Displays when detections occurred across the dataset timeline. Each row corresponds to a different sensor, making it easier to check synchronization.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ DI-5 █ Detection Timeline
+----------------------------------+-------------------------------------------------------------------------------------------------+
|            Components            |          Detection Timeline (x axis is seconds elapsed since first observation)                 |
|                                  |          Every point on the timeline represents an observation with detected features.          |
+----------------------------------+-------------------------------------------------------------------------------------------------+
|                                  | ⡁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ 4.0 |
| infra1_image_rect_raw (f7df04cc) | ⠄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀     |
| infra2_image_rect_raw (34ed8934) | ⠂⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠁⠉⠉⠉⠁⠉⠀⠉⠉⠉⠉⠁⠁⠉⠀⠈⠉⠉⠉⠉⠉⠈⠉⠁⠀⠀⠈⠁⠈⠈⠉⠉⠉⠁⠁⠀⠉⠀⠈⠁⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠈⠈⠀⠉⠉⠉⠁⠉⠉⠈⠉⠉⠈⠉⠉⠈⠉⠁⠉⠉⠈⠉⠉⠀⠉⠁     |
| velodyne_points1 (38140838)      | ⡁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀     |
|                                  | ⠄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠁⠉⠉⠉⠁⠉⠀⠉⠉⠉⠉⠉⠀⠉⠁⠈⠉⠉⠉⠉⠉⠈⠉⠀⠀⠀⠀⠀⠉⠉⠉⠉⠉⠁⠉⠉⠉⠁⠀⠉⠉⠉⠁⠉⠈⠁⠉⠉⠉⠉⠉⠈⠁⠉⠉⠉⠉⠉⠉⠉⠉⠈⠁⠉⠉⠈⠉⠁⠉⠉⠈⠉⠉⠀⠉⠁     |
|                                  | ⠂⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀     |
|                                  | ⡉⠀⠀⠈⠀⠉⠈⠈⠀⠀⠈⠁⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠈⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠁⠉⠁     |
|                                  | ⠄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀     |
|                                  | ⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁⠈⠀⠁ 0.0 |
|                                  | 0.0                                                                                  269.1      |
+----------------------------------+-------------------------------------------------------------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;Camera Modeling (CM Section)&lt;/h4&gt;

&lt;p&gt;This section shows how well the camera models fit the actual calibration data — that is, how accurately the system understood the camera’s behavior based on the collected data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Metrics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Binned Reprojection Errors (CM-1):&lt;/strong&gt; A heatmap showing reprojection errors across the camera’s FOV. If certain areas show high error (orange or red), it could indicate problems with the camera model or lens distortion that isn't being captured correctly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7y93ja9g0rg5l0hlg5dw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7y93ja9g0rg5l0hlg5dw.png" alt="CM-1" width="758" height="696"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Stereo Pair Rectification Error (CM-2):&lt;/strong&gt; For multi-camera setups, this shows the stereo rectification error between camera pairs, indicating how well the cameras are aligned for stereo vision.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ CM-2 █ Stereo Pair Rectification Error
+---------------------------------------+--------------+-------+-------------------------------------------------------------------------------------+
| Stereo Pair                           | # Mutual Obs | RMSE  | Binned rectified error (px)                                                         |
+---------------------------------------+--------------+-------+-------------------------------------------------------------------------------------+
| Dominant eye:  infra1_image_rect_raw  | 155          | 0.742 | ⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ 3202.0 |
| Secondary eye: infra2_image_rect_raw  |              |       | ⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀        |
|                                       |              |       | ⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀        |
|                                       |              |       | ⣇⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀        |
|                                       |              |       | ⡇⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀        |
|                                       |              |       | ⡇⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀        |
|                                       |              |       | ⡇⢸⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀        |
|                                       |              |       | ⠇⠸⠀⠏⠹⠒⠖⠲⠒⠖⠲⠒⠦⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠀⠀ 0.0    |
|                                       |              |       | 0.0                                                                     7.0         |
+---------------------------------------+--------------+-------+-------------------------------------------------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;Extrinsics Info (EI Section)&lt;/h4&gt;

&lt;p&gt;This section focuses on the spatial relationships between components in the calibration setup. Accurate extrinsic calibration ensures that the relative positions and orientations of the sensors are well understood.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Metrics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Component Extrinsics Errors (EI-1):&lt;/strong&gt; Displays the extrinsic errors between each pair of components. If the errors are large, check whether all components are positioned and oriented correctly.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ EI-1 █ Component Extrinsics Errors
+--------------------------------------------+----------+----------+----------+----------+-----------+---------+
| Weighted Component Relative Extrinsic RMSE | X (m)    | Y (m)    | Z (m)    | Roll (°) | Pitch (°) | Yaw (°) |
| Rotation is Euler XYZ ext                  |          |          |          |          |           |         |
+--------------------------------------------+----------+----------+----------+----------+-----------+---------+
| To: infra1_image_rect_raw (f7df04cc),      | 2.254e-3 | 1.802e-3 | 3.780e-3 |    0.077 |     0.100 |   0.148 |
|    From: infra2_image_rect_raw (34ed8934)  |          |          |          |          |           |         |
+--------------------------------------------+----------+----------+----------+----------+-----------+---------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;IMU Preintegration Errors (EI-2):&lt;/strong&gt; Displays a summary of all IMU preintegration errors from the system. In this example, IMUs were not calibrated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Observed Camera Range of Motion (EI-3):&lt;/strong&gt; Shows how much motion was observed for each camera during the data collection. Sufficient motion is necessary to avoid projective compensation errors.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ EI-3 █ Observed Camera Range of Motion
+----------------------------------+--------+----------------------+--------------------+
| Camera                           | Z (m)  | Horizontal angle (°) | Vertical angle (°) |
+----------------------------------+--------+----------------------+--------------------+
| infra1_image_rect_raw (f7df04cc) | 6.308  | 127.081              | 63.801             |
+----------------------------------+--------+----------------------+--------------------+
| infra2_image_rect_raw (34ed8934) | 6.434  | 144.606              | 126.280            |
+----------------------------------+--------+----------------------+--------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;Calibrated Plex (CP Section)&lt;/h4&gt;

&lt;p&gt;This section displays the final results of the calibration, including the intrinsic and extrinsic parameters that can be used for updating the system configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Metrics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Camera Metrics (CP-1):&lt;/strong&gt; Contains the intrinsic parameters of each camera, such as focal length, principal point, and distortion parameters. Standard deviations indicate the uncertainty of each parameter.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ CP-1 █ Camera Metrics
+----------------------------------+-------------------------+-----------------------------------------+--------------------------------------+
| Camera                           | Specs                   | Projection Model                        | Distortion Model                     |
+----------------------------------+-------------------------+-----------------------------------------+--------------------------------------+
| infra1_image_rect_raw (f7df04cc) |  width (px)        848  |  Pinhole                                |      OpenCV Distortion               |
|                                  |  height (px)       480  |  f (px)      431.914 ±      0.224 (1σ)  |  k1   -1.574e-3 ±   8.715e-4 (1σ)    |
|                                  |  pixel pitch (um)  1    |  cx (px)     421.938 ±      0.395 (1σ)  |  k2       0.011 ±   1.876e-3 (1σ)    |
|                                  |                         |  cy (px)     230.592 ±      0.465 (1σ)  |  k3   -6.171e-3 ±   1.241e-3 (1σ)    |
|                                  |                         |                                         |  p1   -2.037e-3 ±   2.680e-4 (1σ)    |
|                                  |                         |                                         |  p2   -1.479e-3 ±   2.443e-4 (1σ)    |
|                                  |                         |                                         |                                      |
+----------------------------------+-------------------------+-----------------------------------------+--------------------------------------+
| infra2_image_rect_raw (34ed8934) |  width (px)        848  |  Pinhole                                |      OpenCV Distortion               |
|                                  |  height (px)       480  |  f (px)      429.085 ±      0.215 (1σ)  |  k1   -3.050e-4 ±   8.638e-4 (1σ)    |
|                                  |  pixel pitch (um)  1    |  cx (px)     421.203 ±      0.387 (1σ)  |  k2    1.517e-3 ±   1.809e-3 (1σ)    |
|                                  |                         |  cy (px)     230.821 ±      0.436 (1σ)  |  k3   -6.881e-4 ±   1.170e-3 (1σ)    |
|                                  |                         |                                         |  p1   -1.887e-3 ±   2.510e-4 (1σ)    |
|                                  |                         |                                         |  p2   -1.630e-3 ±   2.358e-4 (1σ)    |
|                                  |                         |                                         |                                      |
+----------------------------------+-------------------------+-----------------------------------------+--------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimized IMU Metrics (CP-2):&lt;/strong&gt; In this example, IMUs were not calibrated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Calibrated Extrinsics (CP-3):&lt;/strong&gt; Shows the minimum spanning tree of spatial constraints in the plex, highlighting only the most critical constraints needed to keep the structure intact.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ CP-3 █  Calibrated Extrinsics
+---------------------------+------------+-----------------+----------------------+---------------+---------------------+
| Final Extrinsics          | Subplex ID | Translation (m) | Diff from input (mm) | Rotation (°)  | Diff from input (°) |
| 'To' component is Origin  |            |                 |                      |               |                     |
| Rotation is Euler XYZ ext |            |                 |                      |               |                     |
+---------------------------+------------+-----------------+----------------------+---------------+---------------------+
| To: infra1_image_rect_raw | A          | X: 0.360        | ΔX: 359.862          | Roll: -85.208 | ΔRoll: -85.208      |
|     f7df04cc, RDF         |            | Y: 0.083        | ΔY: 82.722           | Pitch: -2.812 | ΔPitch: -2.812      |
| From: velodyne_points1    |            | Z: 0.048        | ΔZ: 48.451           | Yaw: 171.579  | ΔYaw: 171.579       |
|     38140838, Unknown     |            |                 |                      |               |                     |
+---------------------------+------------+-----------------+----------------------+---------------+---------------------+
| To: infra2_image_rect_raw | A          | X: 0.319        | ΔX: 318.513          | Roll: -85.317 | ΔRoll: -85.317      |
|     34ed8934, RDF         |            | Y: 0.086        | ΔY: 85.533           | Pitch: -2.717 | ΔPitch: -2.717      |
| From: velodyne_points1    |            | Z: 0.033        | ΔZ: 33.454           | Yaw: 171.470  | ΔYaw: 171.470       |
|     38140838, Unknown     |            |                 |                      |               |                     |
+---------------------------+------------+-----------------+----------------------+---------------+---------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Summary Statistics (SS Section)&lt;a href="https://www.reduct.store/blog/metrical-calibrate-camera-and-lidar#summary-statistics-ss-section" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;This section provides a high-level overview of the optimization process and the overall calibration quality.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Optimization Summary Statistics (SS-1):&lt;/strong&gt; Includes overall reprojection error and posterior variance, which indicates the calibration’s uncertainty.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ SS-1 █ Optimization Summary Statistics
+------------------------+----------+
| Optimized Object RMSE, | 0.206 px |
| based on all cameras   |          |
+------------------------+----------+
| Posterior Variance     | 0.731    |
+------------------------+----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Camera Summary Statistics (SS-2):&lt;/strong&gt; Summarizes the reprojection errors for each camera. An RMSE under 0.5 pixels is typically acceptable, and under 0.2 pixels is excellent.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ SS-2 █ Camera Summary Statistics
+----------------------------------+------------------------------------------+
| Camera                           | Reproj. RMSE, outliers downweighted (px) |
+----------------------------------+------------------------------------------+
| infra1_image_rect_raw (f7df04cc) | 0.209 px                                 |
+----------------------------------+------------------------------------------+
| infra2_image_rect_raw (34ed8934) | 0.204 px                                 |
+----------------------------------+------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
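The rough thresholds above (under 0.2 px excellent, under 0.5 px acceptable) can be made explicit with a small hypothetical helper — this is not part of MetriCal, just a sketch of the rule of thumb:

```python
def rmse_quality(rmse_px: float) -> str:
    """Classify a camera's reprojection RMSE using the rough thresholds above:
    under 0.2 px is excellent, under 0.5 px is acceptable."""
    if rmse_px < 0.2:
        return "excellent"
    if rmse_px < 0.5:
        return "acceptable"
    return "needs improvement"

# The SS-2 values above fall just over the "excellent" line
print(rmse_quality(0.209), rmse_quality(0.204))
```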



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LiDAR Summary Statistics (SS-3):&lt;/strong&gt; Shows the RMSE of various residual metrics: circle misalignment, interior points to plane error, paired 3D point error, and paired plane normal error.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;█ SS-3 █ LiDAR Summary Statistics
+-----------------------------+-------------------------------+------------------------------------+--------------------------+--------------------------+
| LiDAR                       | Circle misalignment RMSE with | Circle edge misalignment RMSE with | Interior point RMSE with | Plane normal difference, |
|                             | all cameras, outliers         | all cameras, outliers              | all cameras, outliers    | lidar-lidar, outliers    |
|                             | downweighted (m)              | downweighted (m)                   | downweighted (m)         | downweighted (deg)       |
+-----------------------------+-------------------------------+------------------------------------+--------------------------+--------------------------+
| velodyne_points1 (38140838) | 0.020 m                       | 0.028 m                            | 0.018 m                  | (n/a)                    |
+-----------------------------+-------------------------------+------------------------------------+--------------------------+--------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Data Diagnostics&lt;a href="https://www.reduct.store/blog/metrical-calibrate-camera-and-lidar#data-diagnostics" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;This section highlights potential issues with the calibration setup, data, or process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fji00y3klhhs0xffb4oip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fji00y3klhhs0xffb4oip.png" alt="diagnostics" width="748" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High-Risk Diagnostics:&lt;/strong&gt; Critical issues such as insufficient camera motion or missing required components must be addressed for successful calibration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Medium and Low-Risk Diagnostics:&lt;/strong&gt; Less critical issues, such as poor feature coverage, should still be monitored and corrected when possible to improve calibration quality.&lt;/p&gt;

&lt;h4&gt;
  
  
  Output Summary&lt;a href="https://www.reduct.store/blog/metrical-calibrate-camera-and-lidar#output-summary" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+------------------------+------------------------------------------------------------+
| Results JSON           | /datasets/camera_lidar/results.json                        |
+------------------------+------------------------------------------------------------+
| Calibrated Plex        | Run `jq .plex [results.json] &amp;gt; optimized_plex.json`        |
+------------------------+------------------------------------------------------------+
| Optimized Object Space | Run `jq .object_space [results.json] &amp;gt; optimized_obj.json` |
+------------------------+------------------------------------------------------------+
| Cached Detections JSON | /datasets/camera_lidar/observations.detections.json        |
+------------------------+------------------------------------------------------------+
| Report Path            | /datasets/camera_lidar/report.html                         |
+------------------------+------------------------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
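The bracketed `[results.json]` in the table is a placeholder for the results file listed in the first row. If `jq` is not available, the same split can be done with Python's standard library — a minimal sketch, assuming `results.json` has top-level `plex` and `object_space` keys as the `jq` filters imply:

```python
import json

def split_results(results_path,
                  plex_out="optimized_plex.json",
                  obj_out="optimized_obj.json"):
    """Equivalent to `jq .plex` and `jq .object_space` on the results file."""
    with open(results_path) as f:
        results = json.load(f)
    # Write the calibrated plex and the optimized object space separately
    with open(plex_out, "w") as f:
        json.dump(results["plex"], f, indent=2)
    with open(obj_out, "w") as f:
        json.dump(results["object_space"], f, indent=2)
```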



&lt;p&gt;The calibration process generates several output files, located in the &lt;code&gt;$PATH/metrical/examples/camera_lidar&lt;/code&gt; directory.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;init_plex.json:&lt;/strong&gt; A raw input plex from the &lt;code&gt;init mode&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;observations.detections.json:&lt;/strong&gt; Cached detections for faster reruns in &lt;code&gt;calibrate mode&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;results.json:&lt;/strong&gt; The main output file, containing calibrated plex and object space.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;report.html:&lt;/strong&gt; An HTML report summarizing calibration performance visually.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;results_urdf.xml:&lt;/strong&gt; A ROS-compatible URDF file that describes the spatial relationships between the two calibrated cameras and the LiDAR, enabling tools like &lt;code&gt;robot_state_publisher&lt;/code&gt; to publish real-time TF transforms based on these relationships.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
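To use the URDF with `robot_state_publisher`, a minimal ROS 1 launch file is enough — a hedged sketch, where the file path is an assumption based on the dataset directory above:

```xml
<launch>
  <!-- Load the calibrated URDF produced by MetriCal (path is an assumption) -->
  <param name="robot_description" textfile="/datasets/camera_lidar/results_urdf.xml"/>
  <!-- Publish TF transforms between the two cameras and the LiDAR -->
  <node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher"/>
</launch>
```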

&lt;h2&gt;
  
  
  Conclusion&lt;a href="https://www.reduct.store/blog/metrical-calibrate-camera-and-lidar#conclusion" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;MetriCal simplifies multimodal sensor calibration by offering a fully scriptable, CLI-based workflow with detailed diagnostics and seamless ROS integration. One of the key takeaways from working with this tool is that successful calibration depends heavily on the quality of the captured data. Carefully choosing calibration targets, ensuring sufficient sensor motion, and achieving full field-of-view coverage all have a major impact on the results. For those just starting out, prioritizing high-quality data capture and closely following the recommended guidelines is essential for obtaining reliable outcomes.&lt;/p&gt;




&lt;p&gt;We hope this tutorial provided a clear and practical introduction to using MetriCal for multi-sensor calibration. If you have any questions or comments, feel free to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community Forum&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>tutorials</category>
      <category>robotics</category>
      <category>ros</category>
    </item>
    <item>
      <title>How to Analyze ROS Bag Files and Build a Dataset for Machine Learning</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Wed, 30 Apr 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/how-to-analyze-ros-bag-files-and-build-a-dataset-for-machine-learning-1fn7</link>
      <guid>https://dev.to/reductstore/how-to-analyze-ros-bag-files-and-build-a-dataset-for-machine-learning-1fn7</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jnfkbfbmx7bgvs1xp0a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jnfkbfbmx7bgvs1xp0a.png" alt="Linear and Angular Velocities over Time" width="690" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How you work with real-world robot data depends on how ROS (Robot Operating System) messages are stored. In the article &lt;a href="https://www.reduct.store/blog/store-ros-topics#method-2-store-rosbag-data-in-time-series-object-storage" rel="noopener noreferrer"&gt;&lt;strong&gt;3 Ways to Store ROS Topics&lt;/strong&gt;&lt;/a&gt;, we explored several approaches — including storing compressed Rosbag files in time-series storage and storing topics as separate records.&lt;/p&gt;

&lt;p&gt;In this tutorial, we'll focus on the most common format: &lt;code&gt;.bag&lt;/code&gt; files recorded with Rosbag. These files contain valuable data on how a robot interacts with the world — such as odometry, camera frames, LiDAR, or IMU readings — and provide the foundation for analyzing the robot's behavior.&lt;/p&gt;

&lt;p&gt;You’ll learn how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extract motion data from &lt;code&gt;.bag&lt;/code&gt; files&lt;/li&gt;
&lt;li&gt;Create basic velocity features&lt;/li&gt;
&lt;li&gt;Train a classification model to recognize different types of robot movements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We'll use the &lt;code&gt;bagpy&lt;/code&gt; library to process &lt;code&gt;.bag&lt;/code&gt; files and apply basic machine learning techniques for classification.&lt;/p&gt;

&lt;p&gt;Although the examples in this tutorial use &lt;a href="http://ptak.felk.cvut.cz/darpa-subt/qualification_videos/spot/" rel="noopener noreferrer"&gt;&lt;strong&gt;data from a Boston Dynamics Spot robot&lt;/strong&gt;&lt;/a&gt; (performing movements like moving forward, sideways, and rotating), you can adapt the code for your recordings.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install Required Libraries&lt;a href="https://www.reduct.store/blog/boston-dynamic-example#install-required-libraries" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;numpy pandas matplotlib seaborn scikit-learn bagpy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Loading and Preprocessing Bag Files&lt;a href="https://www.reduct.store/blog/boston-dynamic-example#loading-and-preprocessing-bag-files" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let's create a function to load a &lt;code&gt;.bag&lt;/code&gt; file and extract velocity features.&lt;/p&gt;

&lt;p&gt;In our example, the odometry data is published under the &lt;code&gt;/spot/odometry&lt;/code&gt; topic. Make sure to specify the correct topic where your robot's motion data is recorded. Depending on your use case, you might find other features, such as accelerations or additional sensor data, more relevant for recognizing your robot's movements. For this task, we'll primarily focus on linear and angular velocities.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;bagpy&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;bagreader&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;process_bag_to_dataframe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bag_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/spot/odometry&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;target_label&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

    &lt;span class="c1"&gt;# Load a .bag file and generate a DataFrame with velocity features and a target label
&lt;/span&gt;
    &lt;span class="n"&gt;bag&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;bagreader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bag_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bag&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;message_by_topic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="c1"&gt;# Calculate linear and angular velocities
&lt;/span&gt;    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear_velocity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sqrt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.linear.x&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
                                    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.linear.y&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
                                    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.linear.z&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;angular_velocity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sqrt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.angular.x&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
                                     &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.angular.y&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
                                     &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.angular.z&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Assign target label
&lt;/span&gt;    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;target&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;target_label&lt;/span&gt;

    &lt;span class="c1"&gt;# Keep only relevant columns
&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.linear.x&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.linear.y&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.linear.z&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
               &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.angular.x&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.angular.y&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;twist.twist.angular.z&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
               &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear_velocity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;angular_velocity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;target&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Processing three different movement types&lt;a href="https://www.reduct.store/blog/boston-dynamic-example#processing-three-different-movement-types" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Let's use the &lt;code&gt;process_bag_to_dataframe&lt;/code&gt; function to load and process the data for each of the three movement types. Each movement type was recorded in a separate &lt;code&gt;.bag&lt;/code&gt; file, so we'll apply the function to each file individually, and then merge the results into a single DataFrame.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;df_forward&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;process_bag_to_dataframe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear_x.bag&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;target_label&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;   &lt;span class="c1"&gt;# moving forward
&lt;/span&gt;&lt;span class="n"&gt;df_sideways&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;process_bag_to_dataframe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear_y.bag&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;target_label&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# moving sideways
&lt;/span&gt;&lt;span class="n"&gt;df_rotation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;process_bag_to_dataframe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;rotation.bag&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;target_label&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# rotating
&lt;/span&gt;
&lt;span class="c1"&gt;# Combine all samples
&lt;/span&gt;&lt;span class="n"&gt;df_all&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;concat&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;df_forward&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;df_sideways&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;df_rotation&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;ignore_index&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Visualizing velocities&lt;a href="https://www.reduct.store/blog/boston-dynamic-example#visualizing-velocities" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;We can visualize the linear and angular velocities over time for each type of motion, as shown in the example for the forward movement. This will help us better understand how the velocities change during each specific motion.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;matplotlib.pyplot&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;plt&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;plot_velocities&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;figure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;figsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subplot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;plot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear_velocity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;color&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;#4B0082&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;title&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Linear Velocity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;xlabel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Time Step&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subplot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;plot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;angular_velocity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;color&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;#9A9E5E&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;title&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Angular Velocity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;xlabel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Time Step&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;suptitle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tight_layout&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;show&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nf"&gt;plot_velocities&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df_forward&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Forward Movement&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlo3pvru13i412dtfvdj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlo3pvru13i412dtfvdj.png" alt="Forward Movement" width="681" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Training and Evaluating Classification Models&lt;a href="https://www.reduct.store/blog/boston-dynamic-example#training-and-evaluating-classification-models" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We'll test several popular models, including Logistic Regression, Decision Tree, Random Forest, and Support Vector Machine, and tune their hyperparameters using &lt;code&gt;GridSearchCV&lt;/code&gt;. You can also experiment with other hyperparameters to optimize the models based on your specific data and requirements.&lt;/p&gt;

&lt;p&gt;To evaluate the classifier, we'll use the &lt;strong&gt;F1 Score&lt;/strong&gt; metric, which balances precision and recall and is especially useful for imbalanced datasets. However, you can also choose to evaluate using Accuracy, Precision, or Recall, depending on your needs.&lt;/p&gt;
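As a standalone illustration of that difference (toy labels, not the Spot data), a classifier that only ever predicts the majority class can still score high accuracy while macro-averaged F1 exposes the missed class:

```python
from sklearn.metrics import accuracy_score, f1_score

# Imbalanced toy labels: eight samples of class 1, two of class 2
y_true = [1, 1, 1, 1, 1, 1, 1, 1, 2, 2]
# A degenerate classifier that always predicts the majority class
y_pred = [1] * 10

acc = accuracy_score(y_true, y_pred)            # high despite missing class 2 entirely
f1 = f1_score(y_true, y_pred, average="macro")  # penalized by the missed class
print(acc, f1)
```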

&lt;p&gt;Now, let's prepare the data for training.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;X&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;df_all&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;target&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;axis&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;df_all&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;target&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The features &lt;code&gt;X&lt;/code&gt; consist of the velocity data, and the labels &lt;code&gt;y&lt;/code&gt; represent the different movement types.&lt;/p&gt;
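Before fitting, the data is typically held out into train and test sets. A minimal stratified-split sketch with toy stand-in data (in the tutorial, `X` and `y` come from the bag files above):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy stand-in for df_all: three movement classes, three samples each
df_all = pd.DataFrame({
    "linear_velocity":  [0.50, 0.55, 0.60, 0.05, 0.10, 0.08, 0.02, 0.01, 0.03],
    "angular_velocity": [0.02, 0.01, 0.03, 0.45, 0.50, 0.48, 0.40, 0.42, 0.44],
    "target":           [1, 1, 1, 2, 2, 2, 3, 3, 3],
})
X = df_all.drop("target", axis=1)
y = df_all["target"]

# Stratified split keeps each movement class's proportion in both sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1/3, stratify=y, random_state=42
)
```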

&lt;p&gt;Next, let’s define the scalers, models, and their respective hyperparameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.preprocessing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;StandardScaler&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MinMaxScaler&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;RobustScaler&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.linear_model&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;LogisticRegression&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.tree&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;DecisionTreeClassifier&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.ensemble&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;RandomForestClassifier&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.svm&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SVC&lt;/span&gt;

&lt;span class="c1"&gt;# Scalers
&lt;/span&gt;&lt;span class="n"&gt;scalers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Standard Scaler&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;StandardScaler&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;MinMax Scaler&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;MinMaxScaler&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Robust Scaler&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;RobustScaler&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Models
&lt;/span&gt;&lt;span class="n"&gt;models&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Logistic Regression&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;LogisticRegression&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max_iter&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Decision Tree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;DecisionTreeClassifier&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Random Forest&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;RandomForestClassifier&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;SVM&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;SVC&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;probability&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Hyperparameters for tuning
&lt;/span&gt;&lt;span class="n"&gt;parameters&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Logistic Regression&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;C&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mf"&gt;0.01&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;]},&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Decision Tree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;max_depth&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;]},&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Random Forest&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;n_estimators&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;]},&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;SVM&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;C&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;kernel&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;rbf&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's train and test the models using the defined scalers, models, and hyperparameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.model_selection&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;train_test_split&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;GridSearchCV&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;StratifiedKFold&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.metrics&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;f1_score&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;confusion_matrix&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run_classification&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;scaler_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

    &lt;span class="c1"&gt;# Split data into train and test sets
&lt;/span&gt;    &lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_test&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;train_test_split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;stratify&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;test_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Apply the chosen scaler to the data
&lt;/span&gt;    &lt;span class="n"&gt;scaler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scalers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;scaler_name&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;X_train&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scaler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit_transform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;X_test&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scaler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;transform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_test&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Set up hyperparameter grid and cross-validation
&lt;/span&gt;    &lt;span class="n"&gt;param_grid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;cv&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;StratifiedKFold&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n_splits&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;shuffle&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;grid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;GridSearchCV&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;models&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;param_grid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cv&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;cv&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;scoring&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;f1_weighted&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Get the best model from the grid search
&lt;/span&gt;    &lt;span class="n"&gt;best_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;best_estimator_&lt;/span&gt;

    &lt;span class="c1"&gt;# Make predictions on the test set
&lt;/span&gt;    &lt;span class="n"&gt;y_pred&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;best_model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_test&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Best parameters: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;best_params_&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;F1 Score: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;f1_score&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_pred&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;average&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;weighted&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_pred&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;best_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;columns&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After training and testing the models, we'll plot the confusion matrix to visualize how well our model is performing by comparing the predicted labels with the actual labels.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;seaborn&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sns&lt;/span&gt;

&lt;span class="c1"&gt;# Plot confusion matrix
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;plot_confusion_matrix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_pred&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

    &lt;span class="n"&gt;cm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;confusion_matrix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_pred&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;sns&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;heatmap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;annot&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;d&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cmap&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Purples&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;title&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Confusion Matrix&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;xlabel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Predicted&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ylabel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Actual&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;show&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After evaluating the model's performance, we’ll visualize the feature importance for the Decision Tree and Random Forest models to understand which features contribute the most to the model’s predictions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Feature importance for Decision Tree and Random Forest
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;plot_feature_importance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;best_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X_columns&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

    &lt;span class="n"&gt;importances&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;best_model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;feature_importances_&lt;/span&gt;
    &lt;span class="n"&gt;sorted_idx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;importances&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;argsort&lt;/span&gt;&lt;span class="p"&gt;()[::&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="n"&gt;sns&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;barplot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;importances&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;sorted_idx&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;X_columns&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;sorted_idx&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;color&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;#50208B&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;title&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Feature Importances&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;xticks&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rotation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tight_layout&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;show&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Training a Random Forest classifier&lt;a href="https://www.reduct.store/blog/boston-dynamic-example#training-a-random-forest-classifier" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Finally, let's apply the Random Forest classifier to our data and evaluate its performance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_pred&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;best_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X_columns&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;run_classification&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Random Forest&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;MinMax Scaler&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The optimal parameter for &lt;strong&gt;n_estimators&lt;/strong&gt; is &lt;strong&gt;100&lt;/strong&gt;, and the model achieved an &lt;strong&gt;F1 score&lt;/strong&gt; of &lt;strong&gt;0.976&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We’ll plot the &lt;strong&gt;confusion matrix&lt;/strong&gt; to assess the classifier's performance across different movement types. The diagonal elements represent the correctly classified instances, while the off-diagonal elements indicate the misclassifications.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;plot_confusion_matrix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_pred&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fikl864mb3auak88xqu69.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fikl864mb3auak88xqu69.png" alt="Confusion Matrix" width="530" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After evaluating the Random Forest model, we can check the &lt;strong&gt;Feature Importance&lt;/strong&gt; to see which velocity components were most important in distinguishing the movement types. This is especially useful for Decision Tree and Random Forest models, as they automatically rank features by their importance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;plot_feature_importance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;best_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X_columns&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfcosa4ckp4hq1uy6ak1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfcosa4ckp4hq1uy6ak1.png" alt="Feature Importance" width="590" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Best practices&lt;a href="https://www.reduct.store/blog/boston-dynamic-example#best-practices" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;When working with &lt;code&gt;.bag&lt;/code&gt; files and training machine learning models, these best practices can help you manage data more effectively and build better-performing models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Split large files:&lt;/strong&gt; If your &lt;code&gt;.bag&lt;/code&gt; files are too large, divide them into smaller episodes. This helps avoid memory issues and makes the files easier to process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Separate topics by type:&lt;/strong&gt; If your &lt;code&gt;.bag&lt;/code&gt; file includes both lightweight messages (like battery level) and large data streams (like images or LiDAR), store them in separate &lt;code&gt;.bag&lt;/code&gt; files. This separation can optimize performance and make your workflow simpler.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Randomized Search for tuning:&lt;/strong&gt; If tuning with &lt;code&gt;GridSearchCV&lt;/code&gt; is too slow, try &lt;code&gt;RandomizedSearchCV&lt;/code&gt; instead. It samples a fixed number of parameter combinations rather than exhaustively testing every one, which is usually much faster on large search spaces.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Try different algorithms:&lt;/strong&gt; Experiment with different algorithms to find what works best for your specific data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consider ensemble methods:&lt;/strong&gt; Techniques like bagging or boosting can improve accuracy by combining multiple models and leveraging their strengths.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explore deep learning:&lt;/strong&gt; If you have a large dataset and enough computing power, deep learning models can capture complex patterns that simpler models may miss.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prevent overfitting:&lt;/strong&gt; Make sure that your model generalizes well by splitting your dataset into training, validation, and test sets. Use cross-validation to evaluate your model’s performance more reliably.&lt;/li&gt;
&lt;/ul&gt;
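As a sketch of the Randomized Search suggestion above, the snippet below swaps `GridSearchCV` for `RandomizedSearchCV` on a Random Forest. The synthetic dataset and the parameter ranges are illustrative stand-ins, not tuned values from this tutorial.

```python
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Synthetic stand-in for the velocity features used in this tutorial.
X_demo, y_demo = make_classification(n_samples=200, n_features=6,
                                     n_informative=4, n_classes=3,
                                     random_state=0)

# Sample 10 random combinations instead of exhaustively searching a grid.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 200),   # any integer in [50, 200)
        "max_depth": [3, 5, 10, None],
    },
    n_iter=10,
    cv=3,
    scoring="f1_weighted",
    random_state=0,
)
search.fit(X_demo, y_demo)
print(search.best_params_)
```

Because `n_iter` caps the number of fits, the cost stays fixed even if you widen the parameter distributions, which is the main practical advantage over an exhaustive grid.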

&lt;h2&gt;
  
  
  Conclusion&lt;a href="https://www.reduct.store/blog/boston-dynamic-example#conclusion" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we walked through the process of handling robot movement data stored in &lt;code&gt;.bag&lt;/code&gt; files. We extracted key velocity features and used them to train machine learning models for classifying different types of robot movements.&lt;/p&gt;

&lt;p&gt;As a next step, you can experiment with various models, hyperparameters, or additional features to improve classification performance. You can also explore advanced techniques such as deep learning for more complex tasks.&lt;/p&gt;




&lt;p&gt;We hope this tutorial provided a clear starting point for processing robot data and building basic movement classification models. If you have any questions or comments, feel free to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community Forum&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>robotics</category>
      <category>ros</category>
      <category>tutorials</category>
    </item>
    <item>
      <title>How to Store and Manage ROS Data</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Wed, 02 Apr 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/how-to-store-and-manage-ros-data-1j45</link>
      <guid>https://dev.to/reductstore/how-to-store-and-manage-ros-data-1j45</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzm10zaxz5blgndzq1r82.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzm10zaxz5blgndzq1r82.png" alt="ROS 2 Data Storage Tutorial" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this tutorial, we will create a custom ROS 2 Humble package called &lt;strong&gt;&lt;code&gt;rosbag2reduct&lt;/code&gt;&lt;/strong&gt; that records incoming ROS 2 topics into MCAP bag files on a Raspberry Pi and automatically uploads those files to a ReductStore instance with metadata labels. We'll walk through setting up ROS 2 Humble on the Pi, interfacing a USB camera using the &lt;code&gt;v4l2_camera&lt;/code&gt; driver, deploying a lightweight YOLOv5 (nano) object detection node (using ONNX Runtime) to produce detection metadata, and implementing the &lt;code&gt;rosbag2reduct&lt;/code&gt; node to capture data and offload it. We will also cover installing ReductStore on the Pi, configuring replication of labeled data to a central storage on your laptop (using label-based filters via the web console). This end-to-end guide is structured with clear steps, code examples, and configuration snippets to help you build and deploy the system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites and Architecture&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#prerequisites-and-architecture" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Before beginning, ensure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hardware:&lt;/strong&gt; Raspberry Pi, a USB camera, and an internet connection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OS:&lt;/strong&gt; Ubuntu Server or Desktop (22.04 LTS or later) on the Raspberry Pi.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Laptop/PC:&lt;/strong&gt; A separate machine on the same network, to serve as a central data store.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this tutorial, we will set up the following architecture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3x5pdrcald3a16m0xvuu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3x5pdrcald3a16m0xvuu.png" alt="Tutorial Architecture" width="800" height="574"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;Architecture: Raspberry Pi with ROS 2, USB camera, YOLOv5n, and ReductStore; Laptop with ReductStore Web Console.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Raspberry Pi:&lt;/strong&gt; Running ROS 2 Humble, interfacing with a USB camera, and running a YOLOv5n object detection node with ONNX Runtime.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Laptop/PC:&lt;/strong&gt; Running a ReductStore instance (acting as a central data store) and ReductStore's Web Console for managing data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network:&lt;/strong&gt; The Pi and the laptop are on the same network. The Pi will upload data to the ReductStore instance on the laptop automatically.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  1. Install Ubuntu on Raspberry Pi&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#1-install-ubuntu-on-raspberry-pi" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;If not already done, flash Ubuntu for Raspberry Pi on a microSD card and boot your Pi. You can &lt;a href="https://ubuntu.com/download/raspberry-pi" rel="noopener noreferrer"&gt;&lt;strong&gt;download Ubuntu images for Raspberry Pi from the official site&lt;/strong&gt;&lt;/a&gt;. Ensure you have internet access and update the system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# On Raspberry Pi&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt upgrade &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Install ROS 2 Humble on Raspberry Pi&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#2-install-ros-2-humble-on-raspberry-pi" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Follow the steps below to set up &lt;strong&gt;ROS 2&lt;/strong&gt; on your Raspberry Pi. This guide assumes you are using a supported 64-bit Ubuntu release (22.04 for Humble, 24.04 for Jazzy). ROS 2 is available in multiple distributions, such as &lt;strong&gt;Jazzy&lt;/strong&gt; and &lt;strong&gt;Humble&lt;/strong&gt; (Noetic, by contrast, is a ROS 1 distribution). This tutorial uses &lt;strong&gt;Humble&lt;/strong&gt;, the LTS release for Ubuntu 22.04, which you can install via the commands below.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Setup Locale:&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Add ROS 2 apt Repository:&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Install ROS 2 Packages:&lt;/strong&gt; For a baseline, install ROS 2 base packages (or &lt;code&gt;ros-&amp;lt;distro&amp;gt;-desktop&lt;/code&gt; if you need GUI tools):&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Install Build Tools:&lt;/strong&gt; We will create custom packages, so install development tools:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
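&lt;p&gt;The four steps above correspond to the commands below, adapted from the official ROS 2 Humble installation guide for Ubuntu. Repository paths and signing keys occasionally change, so verify against the current documentation before running them:&lt;/p&gt;

```shell
# 1. Setup locale (ROS 2 requires a UTF-8 locale)
sudo apt update
sudo apt install locales -y
sudo locale-gen en_US en_US.UTF-8
sudo update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
export LANG=en_US.UTF-8

# 2. Add the ROS 2 apt repository and its signing key
sudo apt install software-properties-common curl -y
sudo add-apt-repository universe -y
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/ros2.list

# 3. Install ROS 2 base packages (use ros-humble-desktop instead for GUI tools)
sudo apt update
sudo apt install ros-humble-ros-base -y

# 4. Install build tools for creating custom packages
sudo apt install ros-dev-tools python3-colcon-common-extensions -y
```

&lt;p&gt;After installation, source &lt;code&gt;/opt/ros/humble/setup.bash&lt;/code&gt; in each shell (or add it to your shell startup) so the ROS 2 environment is available.&lt;/p&gt;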

&lt;h3&gt;
  
  
  3. Create a ROS 2 Workspace&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#3-create-a-ros-2-workspace" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Set up a workspace for our project (if you don't have one):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# On Raspberry Pi&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; ~/ros2_ws/src
&lt;span class="nb"&gt;cd&lt;/span&gt; ~/ros2_ws
colcon build &lt;span class="c"&gt;# just to initialize, will be empty initially&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add &lt;code&gt;source ~/ros2_ws/install/setup.bash&lt;/code&gt; to your &lt;code&gt;~/.bashrc&lt;/code&gt; so that the workspace is sourced on each new shell, or remember to source it in each terminal when using the workspace. We will add packages to this workspace in subsequent steps.&lt;/p&gt;
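&lt;p&gt;For example, to append the workspace overlay to your &lt;code&gt;~/.bashrc&lt;/code&gt; once and reload it:&lt;/p&gt;

```shell
# Append the overlay so every new shell sources it automatically
echo "source ~/ros2_ws/install/setup.bash" >> ~/.bashrc
source ~/.bashrc
```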

&lt;h2&gt;
  
  
  Setting up the USB Camera with &lt;code&gt;v4l2_camera&lt;/code&gt;&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#setting-up-the-usb-camera-with-v4l2_camera" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We'll use the &lt;code&gt;v4l2_camera&lt;/code&gt; ROS 2 package to interface with the USB camera via Video4Linux2. This package publishes images from any V4L2-compatible camera (most USB webcams) as ROS 2 image topics.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Install the &lt;code&gt;v4l2_camera&lt;/code&gt; Package&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#1-install-the-v4l2_camera-package" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;On the Raspberry Pi, install the driver node via apt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;ros-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;ROS_DISTRO&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="nt"&gt;-v4l2-camera&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This installs the &lt;code&gt;v4l2_camera&lt;/code&gt; node and its dependencies. Alternatively, you could build it from source, but the binary is available for Humble.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Connect and Verify the Camera&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#2-connect-and-verify-the-camera" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Plug in the USB camera to the Pi. Verify that it's recognized by listing video devices:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; /dev/video&lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="c"&gt;# You should see /dev/video0 (and possibly /dev/video1, etc. if multiple video capture interfaces are connected)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If &lt;code&gt;/dev/video0&lt;/code&gt; is present, the system sees at least one camera. You can also install &lt;code&gt;v4l-utils&lt;/code&gt; and run &lt;code&gt;v4l2-ctl --list-devices&lt;/code&gt; to see the camera name and capabilities, or &lt;code&gt;v4l2-ctl --list-formats-ext&lt;/code&gt; to see supported resolutions and formats.&lt;/p&gt;
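&lt;p&gt;A quick sketch of those checks (the exact output depends on your camera):&lt;/p&gt;

```shell
# Install the V4L2 command-line utilities
sudo apt install v4l-utils -y
# Show camera names and their /dev/video* nodes
v4l2-ctl --list-devices
# Show supported pixel formats and resolutions
v4l2-ctl --list-formats-ext
```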

&lt;h3&gt;
  
  
  3. Run the Camera Node&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#3-run-the-camera-node" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Launch the camera driver to start publishing images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# In a sourced ROS 2 environment on the Pi&lt;/span&gt;
ros2 run v4l2_camera v4l2_camera_node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, this node will open &lt;code&gt;/dev/video0&lt;/code&gt; and start publishing images to the &lt;code&gt;~/image_raw&lt;/code&gt; topic (type &lt;code&gt;sensor_msgs/Image&lt;/code&gt;) at a default resolution of 640x480 and pixel format YUYV converted to &lt;code&gt;rgb8&lt;/code&gt; (see &lt;a href="https://docs.ros.org/en/jazzy/p/v4l2_camera/" rel="noopener noreferrer"&gt;&lt;strong&gt;ROS 2 camera driver for Video4Linux2 Documentation&lt;/strong&gt;&lt;/a&gt;). You should see console output from the node indicating it opened the device and is streaming.&lt;/p&gt;

&lt;p&gt;Open a new terminal (with ROS sourced) on the Pi (or from a laptop connected to the ROS 2 network) and verify images are coming through, e.g., by running &lt;code&gt;rqt_image_view&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;ros-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;ROS_DISTRO&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="nt"&gt;-rqt-image-view&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="c"&gt;# if not installed&lt;/span&gt;
ros2 run rqt_image_view rqt_image_view
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In &lt;code&gt;rqt_image_view&lt;/code&gt;, select &lt;code&gt;/image_raw&lt;/code&gt; to view the camera feed. This confirms the camera setup is working.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You can adjust parameters by remapping or via ROS 2 parameters, e.g., to change resolution or device:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ros2 run v4l2_camera v4l2_camera_node &lt;span class="nt"&gt;--ros-args&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; image_size:&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"[1280,720]"&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; video_device:&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/dev/video0"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This would set the camera to 1280x720 resolution (if supported).&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying a Lightweight YOLOv5 Object Detection Node&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#deploying-a-lightweight-yolov5-object-detection-node" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Next, we set up an object detection node to analyze the camera images and output metadata (&lt;code&gt;object_detected&lt;/code&gt; and &lt;code&gt;confidence_score&lt;/code&gt;). We'll use &lt;strong&gt;YOLOv5n (Nano)&lt;/strong&gt; - the smallest YOLOv5 model ("only" 1.9 million parameters) which is ideal for resource-constrained devices (&lt;a href="https://github.com/ultralytics/yolov5/releases" rel="noopener noreferrer"&gt;&lt;strong&gt;see releases at ultralytics/yolov5&lt;/strong&gt;&lt;/a&gt;). We will run inference using the ONNX Runtime, which allows running the model without needing the full PyTorch framework on the Pi.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Install ONNX Runtime and Dependencies&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#1-install-onnx-runtime-and-dependencies" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;On the Raspberry Pi, install the ONNX Runtime Python package and OpenCV (for image processing):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;onnxruntime opencv-python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;(If &lt;code&gt;pip&lt;/code&gt; isn't available, use &lt;code&gt;sudo apt install python3-pip&lt;/code&gt; to install it. You may also install &lt;code&gt;numpy&lt;/code&gt; if not already present, as ONNX Runtime will likely need it.)&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. YOLOv5n ONNX Model&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#2-yolov5n-onnx-model" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;We need the YOLOv5n model in ONNX format. To obtain it, clone the YOLOv5 repository on a machine more powerful than the Pi and export the model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# On a PC or via Colab:&lt;/span&gt;
git clone https://github.com/ultralytics/yolov5.git
&lt;span class="nb"&gt;cd&lt;/span&gt; yolov5
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt &lt;span class="c"&gt;# includes PyTorch&lt;/span&gt;
python export.py &lt;span class="nt"&gt;--weights&lt;/span&gt; yolov5n.pt &lt;span class="nt"&gt;--include&lt;/span&gt; onnx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create &lt;code&gt;yolov5n.onnx&lt;/code&gt;. Transfer that file to your Raspberry Pi (e.g., via SCP).&lt;/p&gt;
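&lt;p&gt;For example, from the export machine (the username and hostname below are placeholders; substitute your Pi's):&lt;/p&gt;

```shell
# Copy the exported model into the workspace source directory on the Pi
scp yolov5n.onnx ubuntu@raspberrypi.local:~/ros2_ws/src/
```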

&lt;p&gt;For this tutorial, assume &lt;code&gt;yolov5n.onnx&lt;/code&gt; is now on the Raspberry Pi (e.g., placed in &lt;code&gt;~/ros2_ws/src&lt;/code&gt;).&lt;/p&gt;
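&lt;p&gt;Before wiring the model into a ROS node, it can help to sanity-check the preprocessing it expects: a &lt;code&gt;[1, 3, 640, 640]&lt;/code&gt; float32 blob scaled to 0-1. A minimal NumPy-only sketch (the node later uses &lt;code&gt;cv2.resize&lt;/code&gt; instead of this nearest-neighbor resize):&lt;/p&gt;

```python
import numpy as np

def to_blob(frame_rgb, size=640):
    """Turn an HxWx3 uint8 RGB frame into a [1, 3, size, size] float32 blob."""
    h, w, _ = frame_rgb.shape
    ys = np.arange(size) * h // size           # nearest-neighbor row indices
    xs = np.arange(size) * w // size           # nearest-neighbor column indices
    resized = frame_rgb[ys][:, xs]             # (size, size, 3)
    blob = resized.astype(np.float32) / 255.0  # scale 0-255 to 0-1
    blob = np.transpose(blob, (2, 0, 1))       # HWC to CHW
    return np.expand_dims(blob, axis=0)        # add batch dimension

# A synthetic 480x640 "camera frame" stands in for a real image
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
blob = to_blob(frame)
print(blob.shape, blob.dtype)  # (1, 3, 640, 640) float32
```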

&lt;h3&gt;
  
  
  3. Create a ROS 2 Package for the YOLO Node (optional)&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#3-create-a-ros-2-package-for-the-yolo-node-optional" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;You can integrate the YOLO inference in the same package as &lt;code&gt;rosbag2reduct&lt;/code&gt;, but for modularity, let's create a separate ROS 2 Python package called &lt;code&gt;yolo_detector&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In the workspace src directory, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/ros2_ws/src
ros2 pkg create &lt;span class="nt"&gt;--build-type&lt;/span&gt; ament_python yolo_detector &lt;span class="nt"&gt;--dependencies&lt;/span&gt; rclpy sensor_msgs std_msgs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create a &lt;code&gt;yolo_detector&lt;/code&gt; folder with a Python package structure. Edit &lt;code&gt;yolo_detector/package.xml&lt;/code&gt; to add dependencies for &lt;code&gt;opencv-python&lt;/code&gt; and &lt;code&gt;onnxruntime&lt;/code&gt; (since these are non-ROS dependencies, we list them for documentation; you might use &lt;code&gt;pip&lt;/code&gt; in the installation step rather than rosdep). For example, inside &lt;code&gt;&amp;lt;exec_depend&amp;gt;&lt;/code&gt; tags, add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;&amp;lt;exec_depend&amp;gt;onnxruntime&amp;lt;/exec_depend&amp;gt;&lt;/span&gt;
&lt;span class="s"&gt;&amp;lt;exec_depend&amp;gt;opencv-python&amp;lt;/exec_depend&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Implement the YOLO Detection Node&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#4-implement-the-yolo-detection-node" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Create a file &lt;code&gt;yolo_detector/yolo_detector/yolo_node.py&lt;/code&gt; with the following content:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;YoloDetectorNode (Python code)&lt;/strong&gt;&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;rclpy&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;rclpy.node&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Node&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sensor_msgs.msg&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Image&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;std_msgs.msg&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Float32&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;onnxruntime&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;ort&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;YoloDetectorNode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Node&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="nf"&gt;super&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;yolo_detector&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="c1"&gt;# Load the YOLOv5n ONNX model
&lt;/span&gt;        &lt;span class="n"&gt;model_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/path/to/yolov5n.onnx&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;  &lt;span class="c1"&gt;# TODO: update to actual path
&lt;/span&gt;        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ort&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;InferenceSession&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;providers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;CPUExecutionProvider&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_logger&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Loaded model &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;model_path&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Get model input details for preprocessing
&lt;/span&gt;        &lt;span class="n"&gt;model_inputs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_inputs&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;input_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;model_inputs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;input_shape&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;model_inputs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;  &lt;span class="c1"&gt;# e.g., [1, 3, 640, 640]
&lt;/span&gt;        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;img_height&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;input_shape&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;img_width&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;input_shape&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

        &lt;span class="c1"&gt;# Subscribers and publishers
&lt;/span&gt;        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;subscription&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_subscription&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Image&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/image_raw&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;image_callback&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pub_object&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_publisher&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;object_detected&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pub_conf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_publisher&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Float32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;confidence_score&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# If the model requires normalization factors or specific transformations, define them:
&lt;/span&gt;        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mean&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;  &lt;span class="c1"&gt;# YOLOv5 models assume 0-255 input, no mean subtraction
&lt;/span&gt;        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="mf"&gt;255.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;255.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;255.0&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;  &lt;span class="c1"&gt;# we'll scale 0-1 later by dividing by 255
&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;image_callback&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Image&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="c1"&gt;# Convert ROS Image message to OpenCV format (BGR array)
&lt;/span&gt;        &lt;span class="c1"&gt;# Assuming msg.encoding is 'rgb8' as provided by v4l2_camera default output
&lt;/span&gt;        &lt;span class="n"&gt;img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;frombuffer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;uint8&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;reshape&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;height&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="c1"&gt;# Convert RGB to BGR as YOLO model might expect BGR input (depending on training)
&lt;/span&gt;        &lt;span class="n"&gt;img_bgr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cvtColor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;COLOR_RGB2BGR&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Resize and pad image to model input shape (letterboxing if needed)
&lt;/span&gt;        &lt;span class="n"&gt;input_img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img_bgr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;img_width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;img_height&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="c1"&gt;# Convert to float32 and normalize 0-1
&lt;/span&gt;        &lt;span class="n"&gt;input_img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;input_img&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;astype&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;float32&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mf"&gt;255.0&lt;/span&gt;
        &lt;span class="c1"&gt;# transpose to [channels, height, width]
&lt;/span&gt;        &lt;span class="n"&gt;input_blob&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;transpose&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input_img&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="n"&gt;input_blob&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;expand_dims&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input_blob&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;axis&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# shape [1,3,H,W]
&lt;/span&gt;
        &lt;span class="c1"&gt;# Run inference
&lt;/span&gt;        &lt;span class="n"&gt;outputs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;input_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;input_blob&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;

        &lt;span class="c1"&gt;# Parse outputs to find the highest confidence detection (for simplicity)
&lt;/span&gt;        &lt;span class="c1"&gt;# YOLOv5 ONNX output typically includes [1, num_boxes, 85] array (for COCO: 4 box coords, 1 objness, 80 class scores)
&lt;/span&gt;        &lt;span class="n"&gt;detections&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="c1"&gt;# Filter by confidence threshold (e.g., 0.5)
&lt;/span&gt;        &lt;span class="n"&gt;conf_threshold&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;
        &lt;span class="n"&gt;best_label&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;none&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;best_conf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;detections&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;det&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;detections&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
                &lt;span class="n"&gt;obj_conf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;det&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
                &lt;span class="n"&gt;class_conf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;det&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;:]&lt;/span&gt;  &lt;span class="c1"&gt;# class confidences
&lt;/span&gt;                &lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;obj_conf&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;class_conf&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="n"&gt;class_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;argmax&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;class_conf&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;conf_threshold&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;best_conf&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                    &lt;span class="n"&gt;best_conf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;float&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;score&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                    &lt;span class="n"&gt;best_label&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;class_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# or use a class id-&amp;gt;name mapping
&lt;/span&gt;
        &lt;span class="c1"&gt;# Publish results
&lt;/span&gt;        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pub_object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;publish&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;best_label&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pub_conf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;publish&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Float32&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;best_conf&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_logger&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Detected: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;best_label&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; (&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;best_conf&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;rclpy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;node&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;YoloDetectorNode&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;rclpy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;spin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;destroy_node&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;rclpy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;shutdown&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt; This node subscribes to the camera images (&lt;code&gt;/image_raw&lt;/code&gt;), processes each frame through the YOLOv5n model, and publishes two topics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;object_detected&lt;/code&gt; (std_msgs/String): the class label (or ID) of the primary detected object (or &lt;code&gt;"none"&lt;/code&gt; if none above threshold).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;confidence_score&lt;/code&gt; (std_msgs/Float32): the confidence score of that detection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For simplicity, we take only the detection with the highest confidence above the threshold. In a real scenario, you would probably publish multiple detections with more information (bounding boxes, etc.), but metadata alone is enough for this tutorial.&lt;/p&gt;

&lt;p&gt;Make sure to adjust the &lt;code&gt;model_path&lt;/code&gt; to the actual location of your &lt;code&gt;yolov5n.onnx&lt;/code&gt;. Also note that without class name mapping, &lt;code&gt;best_label&lt;/code&gt; is currently the class index (as string). You can map this index to an actual label (e.g., using the COCO class list below).&lt;/p&gt;
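&lt;p&gt;A minimal sketch of such a mapping (the &lt;code&gt;COCO_NAMES&lt;/code&gt; list here is truncated for brevity; the full 80-class list is below, and &lt;code&gt;label_for&lt;/code&gt; is a helper name we made up):&lt;/p&gt;

```python
# Hypothetical helper: map a YOLO class index to a readable label.
# COCO_NAMES is truncated; fill in all 80 entries from the list below.
COCO_NAMES = ["person", "bicycle", "car", "motorcycle", "airplane"]

def label_for(class_id: int) -> str:
    """Return the class name for an index, falling back to the raw index."""
    if class_id in range(len(COCO_NAMES)):
        return COCO_NAMES[class_id]
    return str(class_id)

print(label_for(0))   # person
print(label_for(42))  # "42" (not in the truncated list above)
```

&lt;p&gt;In the detector callback, you would then set &lt;code&gt;best_label = label_for(class_id)&lt;/code&gt; instead of &lt;code&gt;str(class_id)&lt;/code&gt;.&lt;/p&gt;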

&lt;h1&gt;
  
  
  &lt;strong&gt;COCO Class List&lt;/strong&gt;
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;names&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;0&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;person&lt;/span&gt;
  &lt;span class="na"&gt;1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bicycle&lt;/span&gt;
  &lt;span class="na"&gt;2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;car&lt;/span&gt;
  &lt;span class="na"&gt;3&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;motorcycle&lt;/span&gt;
  &lt;span class="na"&gt;4&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;airplane&lt;/span&gt;
  &lt;span class="na"&gt;5&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bus&lt;/span&gt;
  &lt;span class="na"&gt;6&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;train&lt;/span&gt;
  &lt;span class="na"&gt;7&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;truck&lt;/span&gt;
  &lt;span class="na"&gt;8&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;boat&lt;/span&gt;
  &lt;span class="na"&gt;9&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;traffic light&lt;/span&gt;
  &lt;span class="na"&gt;10&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fire hydrant&lt;/span&gt;
  &lt;span class="na"&gt;11&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;stop sign&lt;/span&gt;
  &lt;span class="na"&gt;12&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;parking meter&lt;/span&gt;
  &lt;span class="na"&gt;13&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bench&lt;/span&gt;
  &lt;span class="na"&gt;14&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bird&lt;/span&gt;
  &lt;span class="na"&gt;15&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cat&lt;/span&gt;
  &lt;span class="na"&gt;16&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dog&lt;/span&gt;
  &lt;span class="na"&gt;17&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;horse&lt;/span&gt;
  &lt;span class="na"&gt;18&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sheep&lt;/span&gt;
  &lt;span class="na"&gt;19&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cow&lt;/span&gt;
  &lt;span class="na"&gt;20&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;elephant&lt;/span&gt;
  &lt;span class="na"&gt;21&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bear&lt;/span&gt;
  &lt;span class="na"&gt;22&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zebra&lt;/span&gt;
  &lt;span class="na"&gt;23&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;giraffe&lt;/span&gt;
  &lt;span class="na"&gt;24&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backpack&lt;/span&gt;
  &lt;span class="na"&gt;25&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;umbrella&lt;/span&gt;
  &lt;span class="na"&gt;26&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;handbag&lt;/span&gt;
  &lt;span class="na"&gt;27&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tie&lt;/span&gt;
  &lt;span class="na"&gt;28&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;suitcase&lt;/span&gt;
  &lt;span class="na"&gt;29&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frisbee&lt;/span&gt;
  &lt;span class="na"&gt;30&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;skis&lt;/span&gt;
  &lt;span class="na"&gt;31&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;snowboard&lt;/span&gt;
  &lt;span class="na"&gt;32&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sports ball&lt;/span&gt;
  &lt;span class="na"&gt;33&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kite&lt;/span&gt;
  &lt;span class="na"&gt;34&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;baseball bat&lt;/span&gt;
  &lt;span class="na"&gt;35&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;baseball glove&lt;/span&gt;
  &lt;span class="na"&gt;36&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;skateboard&lt;/span&gt;
  &lt;span class="na"&gt;37&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;surfboard&lt;/span&gt;
  &lt;span class="na"&gt;38&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tennis racket&lt;/span&gt;
  &lt;span class="na"&gt;39&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bottle&lt;/span&gt;
  &lt;span class="na"&gt;40&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;wine glass&lt;/span&gt;
  &lt;span class="na"&gt;41&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cup&lt;/span&gt;
  &lt;span class="na"&gt;42&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fork&lt;/span&gt;
  &lt;span class="na"&gt;43&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;knife&lt;/span&gt;
  &lt;span class="na"&gt;44&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;spoon&lt;/span&gt;
  &lt;span class="na"&gt;45&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bowl&lt;/span&gt;
  &lt;span class="na"&gt;46&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;banana&lt;/span&gt;
  &lt;span class="na"&gt;47&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apple&lt;/span&gt;
  &lt;span class="na"&gt;48&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sandwich&lt;/span&gt;
  &lt;span class="na"&gt;49&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;orange&lt;/span&gt;
  &lt;span class="na"&gt;50&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;broccoli&lt;/span&gt;
  &lt;span class="na"&gt;51&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;carrot&lt;/span&gt;
  &lt;span class="na"&gt;52&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hot dog&lt;/span&gt;
  &lt;span class="na"&gt;53&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pizza&lt;/span&gt;
  &lt;span class="na"&gt;54&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;donut&lt;/span&gt;
  &lt;span class="na"&gt;55&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cake&lt;/span&gt;
  &lt;span class="na"&gt;56&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;chair&lt;/span&gt;
  &lt;span class="na"&gt;57&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;couch&lt;/span&gt;
  &lt;span class="na"&gt;58&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;potted plant&lt;/span&gt;
  &lt;span class="na"&gt;59&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bed&lt;/span&gt;
  &lt;span class="na"&gt;60&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dining table&lt;/span&gt;
  &lt;span class="na"&gt;61&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;toilet&lt;/span&gt;
  &lt;span class="na"&gt;62&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tv&lt;/span&gt;
  &lt;span class="na"&gt;63&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;laptop&lt;/span&gt;
  &lt;span class="na"&gt;64&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mouse&lt;/span&gt;
  &lt;span class="na"&gt;65&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;remote&lt;/span&gt;
  &lt;span class="na"&gt;66&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keyboard&lt;/span&gt;
  &lt;span class="na"&gt;67&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cell phone&lt;/span&gt;
  &lt;span class="na"&gt;68&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;microwave&lt;/span&gt;
  &lt;span class="na"&gt;69&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;oven&lt;/span&gt;
  &lt;span class="na"&gt;70&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;toaster&lt;/span&gt;
  &lt;span class="na"&gt;71&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sink&lt;/span&gt;
  &lt;span class="na"&gt;72&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;refrigerator&lt;/span&gt;
  &lt;span class="na"&gt;73&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;book&lt;/span&gt;
  &lt;span class="na"&gt;74&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;clock&lt;/span&gt;
  &lt;span class="na"&gt;75&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vase&lt;/span&gt;
  &lt;span class="na"&gt;76&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;scissors&lt;/span&gt;
  &lt;span class="na"&gt;77&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;teddy bear&lt;/span&gt;
  &lt;span class="na"&gt;78&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hair drier&lt;/span&gt;
  &lt;span class="na"&gt;79&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;toothbrush&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, update the &lt;code&gt;setup.py&lt;/code&gt; entry points to include our node script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;setup(&lt;/span&gt;
    &lt;span class="s"&gt;...&lt;/span&gt;
    &lt;span class="s"&gt;entry_points={&lt;/span&gt;
        &lt;span class="s"&gt;'console_scripts'&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;
            &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;yolo_node&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;yolo_detector.yolo_node:main'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
        &lt;span class="pi"&gt;]&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;
    &lt;span class="err"&gt;},&lt;/span&gt;
&lt;span class="s"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. Build and Run the YOLO Node&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#5-build-and-run-the-yolo-node" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Add &lt;code&gt;onnxruntime&lt;/code&gt; and &lt;code&gt;opencv-python&lt;/code&gt; to your workspace's requirements (for example, list them in a &lt;code&gt;requirements.txt&lt;/code&gt; for the package and install them with &lt;code&gt;pip&lt;/code&gt;, since they are pip packages). For now, make sure they are installed via pip as shown earlier.&lt;/p&gt;
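&lt;p&gt;A sketch of that approach (the file name and package choices are ours; adjust to your setup):&lt;/p&gt;

```shell
# Hypothetical per-package requirements file (unpinned; add versions as needed)
printf 'onnxruntime\nopencv-python\nnumpy\n' > requirements.txt
cat requirements.txt
```

&lt;p&gt;Then run &lt;code&gt;pip install -r requirements.txt&lt;/code&gt; in the environment your ROS 2 nodes use.&lt;/p&gt;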

&lt;p&gt;Build the workspace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/ros2_ws
colcon build &lt;span class="nt"&gt;--packages-select&lt;/span&gt; yolo_detector
&lt;span class="nb"&gt;source install&lt;/span&gt;/local_setup.bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the YOLO detection node in a new terminal on the Pi:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ros2 run yolo_detector yolo_node.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see log output from the node whenever it processes an image (every frame or at least when something is detected, depending on your logging). The node will publish messages on &lt;code&gt;object_detected&lt;/code&gt; and &lt;code&gt;confidence_score&lt;/code&gt; topics.&lt;/p&gt;

&lt;p&gt;You can echo these topics in another terminal to verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ros2 topic &lt;span class="nb"&gt;echo&lt;/span&gt; /object_detected
ros2 topic &lt;span class="nb"&gt;echo&lt;/span&gt; /confidence_score
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, if a person is detected with 85% confidence, you should see messages like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;object&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0"&lt;/span&gt;
&lt;span class="na"&gt;confidence&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.85&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we have a camera streaming images and a detector outputting metadata. Next, we'll create the &lt;code&gt;rosbag2reduct&lt;/code&gt; package to record this data, handle file rotation, and upload to ReductStore.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the &lt;code&gt;rosbag2reduct&lt;/code&gt; Package&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#creating-the-rosbag2reduct-package" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Our &lt;code&gt;rosbag2reduct&lt;/code&gt; package will be a ROS 2 node that does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Subscribe to the relevant topics (e.g. &lt;code&gt;/image_raw&lt;/code&gt;, &lt;code&gt;object_detected&lt;/code&gt;, &lt;code&gt;confidence_score&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Record those topics into a bag file using the rosbag2 Python API (in MCAP format).&lt;/li&gt;
&lt;li&gt;Label each bag file with the latest detection metadata.&lt;/li&gt;
&lt;li&gt;After a fixed time interval (the bag rotation period), close the current bag, then &lt;strong&gt;upload it to ReductStore&lt;/strong&gt; with the Python client SDK, including the metadata as labels, and delete the local bag file.&lt;/li&gt;
&lt;li&gt;Start a new bag file and repeat.&lt;/li&gt;
&lt;/ul&gt;
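&lt;p&gt;The record, close, upload, delete cycle in the list above can be sketched as plain file handling (the &lt;code&gt;upload&lt;/code&gt; callback here is a placeholder for the ReductStore write implemented later):&lt;/p&gt;

```python
import os
import tempfile

def rotate_bags(records, cycles, upload):
    """Write records into a bag file per cycle, then upload and delete it."""
    uploaded = []
    tmpdir = tempfile.mkdtemp()
    for i in range(cycles):
        path = os.path.join(tmpdir, f"bag_{i}.mcap")
        with open(path, "wb") as bag:   # open a new bag
            for rec in records:         # record messages
                bag.write(rec)
        upload(path)                    # bag is closed: upload it
        uploaded.append(path)
        os.remove(path)                 # free local disk space
    return uploaded                     # the next cycle starts a new bag

paths = rotate_bags([b"frame"], cycles=2, upload=lambda p: None)
```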

&lt;p&gt;&lt;strong&gt;Warning:&lt;/strong&gt; writing to disk and then uploading is not optimal; we do it here for simplicity. In a real scenario, you would stream directly to ReductStore or separate storage topics by type. More on this in the &lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#best-practices-and-further-improvements" rel="noopener noreferrer"&gt;&lt;strong&gt;Best Practices and Further Improvements&lt;/strong&gt;&lt;/a&gt; section.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Create the Package Structure&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#1-create-the-package-structure" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Run the ROS 2 package creation command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/ros2_ws/src
ros2 pkg create &lt;span class="nt"&gt;--build-type&lt;/span&gt; ament_python rosbag2reduct &lt;span class="nt"&gt;--dependencies&lt;/span&gt; rclpy rosbag2_py sensor_msgs std_msgs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a new folder &lt;code&gt;rosbag2reduct&lt;/code&gt; with a basic Python package setup. Update &lt;code&gt;rosbag2reduct/package.xml&lt;/code&gt; to include &lt;code&gt;&amp;lt;exec_depend&amp;gt;reduct-py&amp;lt;/exec_depend&amp;gt;&lt;/code&gt; (ReductStore's Python SDK), since we'll use it. Also make sure &lt;code&gt;rosbag2_py&lt;/code&gt; is listed as a dependency (the command above included it). In &lt;code&gt;setup.py&lt;/code&gt;, add an entry point for the main node if desired (you can also run the Python file directly with &lt;code&gt;ros2 run&lt;/code&gt; as long as it's installed).&lt;/p&gt;
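&lt;p&gt;If you do add an entry point, the &lt;code&gt;setup.py&lt;/code&gt; fragment might look like this (the script name &lt;code&gt;recorder_node&lt;/code&gt; is our choice):&lt;/p&gt;

```python
# Hypothetical console_scripts entry for rosbag2reduct's setup.py
entry_points = {
    'console_scripts': [
        'recorder_node = rosbag2reduct.recorder_node:main',
    ],
}
```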

&lt;p&gt;After creation, the structure should look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ros2_ws/src/rosbag2reduct/
├── package.xml
├── setup.cfg
├── setup.py
├── resource/
│   └── rosbag2reduct
└── rosbag2reduct
    ├── __init__.py
    └── recorder_node.py   &lt;span class="c"&gt;# (we will create this)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Install ReductStore Python SDK and MCAP Support&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#2-install-reductstore-python-sdk-and-mcap-support" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Before coding, install the ReductStore client library (&lt;code&gt;reduct-py&lt;/code&gt;) in your Python environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;reduct-py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives us the &lt;code&gt;reduct&lt;/code&gt; module for interacting with a ReductStore server. (We will set up the actual ReductStore server on the Pi soon, but we can write the code first.)&lt;/p&gt;

&lt;p&gt;Also, install the MCAP storage format support for rosbag2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;ros-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;ROS_DISTRO&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="nt"&gt;-rosbag2-storage-mcap&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Implementing the &lt;code&gt;rosbag2reduct&lt;/code&gt; Node&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#3-implementing-the-rosbag2reduct-node" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Open a new file &lt;code&gt;rosbag2reduct/recorder_node.py&lt;/code&gt; and add the following code from the snippet below. This code defines the &lt;code&gt;Rosbag2ReductNode&lt;/code&gt; class, which is a ROS 2 node that subscribes to the camera images and metadata topics, records them into bag files, and uploads the bag files to ReductStore with metadata labels.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Rosbag2ReductNode (Python code)&lt;/strong&gt;
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;rclpy&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;rclpy.node&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Node&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;std_msgs.msg&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Float32&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sensor_msgs.msg&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Image&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;rclpy.serialization&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;serialize_message&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;rosbag2_py&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SequentialWriter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;StorageOptions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ConverterOptions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;TopicMetadata&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;shutil&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;reduct&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Bucket&lt;/span&gt;


&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Rosbag2ReductNode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Node&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="nf"&gt;super&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;rosbag2reduct&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bag_duration&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;60.0&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;storage_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;mcap&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;  &lt;span class="c1"&gt;# Or 'sqlite3'
&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;current_bag_index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;writer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;current_bag_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_open_new_bag&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_subscription&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Image&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/image_raw&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;_image_callback&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_subscription&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;object_detected&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;lambda&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_metadata_callback&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;object_detected&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_subscription&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Float32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;confidence_score&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;lambda&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_metadata_callback&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;confidence_score&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;last_object_detected&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;none&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;last_confidence&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;

        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_timer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bag_duration&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;_rotate_bag_timer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;reduct_url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://127.0.0.1:8383&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bucket_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pi_robot&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;entry_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rosbags&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;reduct_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;reduct_url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_event_loop&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;run_until_complete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_ensure_bucket&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_logger&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rosbag2reduct node initialized, writing to &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;storage_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; bags and uploading to ReductStore bucket &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bucket_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_open_new_bag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;writer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;SequentialWriter&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;bag_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rosbag_&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;strftime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;%Y%m%d_%H%M%S&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;_&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;current_bag_index&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;storage_options&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;StorageOptions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;uri&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;bag_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;storage_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;storage_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;converter_options&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ConverterOptions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;writer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;storage_options&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;converter_options&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;writer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_topic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;TopicMetadata&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/image_raw&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sensor_msgs/msg/Image&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;serialization_format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cdr&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;writer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_topic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;TopicMetadata&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;object_detected&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;std_msgs/msg/String&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;serialization_format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cdr&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;writer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_topic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;TopicMetadata&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;confidence_score&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;std_msgs/msg/Float32&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;serialization_format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cdr&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;writer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_topic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;TopicMetadata&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/image_raw&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sensor_msgs/msg/Image&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cdr&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
        &lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;writer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_topic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;TopicMetadata&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;object_detected&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;std_msgs/msg/String&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cdr&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
        &lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;writer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_topic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;TopicMetadata&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;confidence_score&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;std_msgs/msg/Float32&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cdr&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
        &lt;span class="p"&gt;))&lt;/span&gt;

        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_logger&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Opened new bag: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;bag_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;bag_name&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_image_callback&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Image&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;writer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/image_raw&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;serialize_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_clock&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;nanoseconds&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_metadata_callback&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;topic_name&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;topic_name&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;object_detected&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;last_object_detected&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;writer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;object_detected&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;serialize_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_clock&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;nanoseconds&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;topic_name&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;confidence_score&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;last_confidence&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;writer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;confidence_score&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;serialize_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_clock&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;nanoseconds&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_rotate_bag_timer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;object_label&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;last_object_detected&lt;/span&gt;
        &lt;span class="n"&gt;confidence_val&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;float&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;last_confidence&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;old_bag_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;current_bag_name&lt;/span&gt;

        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;current_bag_index&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;current_bag_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_open_new_bag&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_logger&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Uploading bag &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;old_bag_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; with metadata: object=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;object_label&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;, confidence=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;confidence_val&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_event_loop&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;run_until_complete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_upload_bag_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;old_bag_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;object_label&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;confidence_val&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

        &lt;span class="c1"&gt;# Clean up the bag directory if a file was uploaded
&lt;/span&gt;        &lt;span class="n"&gt;bag_dir&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;old_bag_name&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;isdir&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bag_dir&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;shutil&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;rmtree&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bag_dir&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_logger&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Deleted bag directory &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;bag_dir&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; after upload.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_logger&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;warn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Could not delete directory &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;bag_dir&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_logger&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;warn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Bag directory &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;bag_dir&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; not found.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_ensure_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;reduct_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bucket_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exist_ok&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_upload_bag_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;bag_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;object_label&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;confidence_val&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;bag_dir&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bag_name&lt;/span&gt;
        &lt;span class="n"&gt;mcap_file&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;

        &lt;span class="c1"&gt;# Find the first .mcap file in the bag directory
&lt;/span&gt;        &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;filename&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listdir&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bag_dir&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;endswith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;.mcap&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
                    &lt;span class="n"&gt;mcap_file&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bag_dir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                    &lt;span class="k"&gt;break&lt;/span&gt;
        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;FileNotFoundError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_logger&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Bag directory &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;bag_dir&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; does not exist.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt;

        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;mcap_file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_logger&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;warn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;No .mcap file found in &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;bag_dir&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; — skipping upload.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt;

        &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mcap_file&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;rb&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_logger&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error reading &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;mcap_file&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt;

        &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;reduct_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bucket_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;timestamp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;utcnow&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;isoformat&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;object&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;object_label&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;confidence&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;confidence_val&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;entry_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;labels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_logger&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Uploaded &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;mcap_file&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; to ReductStore with labels &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;labels&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;rclpy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;node&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Rosbag2ReductNode&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;rclpy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;spin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;KeyboardInterrupt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;pass&lt;/span&gt;
    &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;destroy_node&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;rclpy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;shutdown&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;pass&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break down key parts of this implementation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We use &lt;strong&gt;rosbag2's Python API&lt;/strong&gt;: &lt;code&gt;SequentialWriter&lt;/code&gt; to record messages. We specify MCAP as the storage format when opening the bag. We explicitly register three topics (&lt;code&gt;/image_raw&lt;/code&gt;, &lt;code&gt;object_detected&lt;/code&gt;, &lt;code&gt;confidence_score&lt;/code&gt;) with their message types, so we can write to them. In each subscription callback, we call &lt;code&gt;self.writer.write(topic_name, serialize_message(msg), timestamp)&lt;/code&gt; to append to the bag.&lt;/li&gt;
&lt;li&gt;We maintain &lt;code&gt;last_object_detected&lt;/code&gt; and &lt;code&gt;last_confidence&lt;/code&gt; variables to store the most recent detection metadata. The &lt;code&gt;_metadata_callback&lt;/code&gt; updates these whenever a message on those topics arrives, and writes the message to the bag as well.&lt;/li&gt;
&lt;li&gt;A ROS timer triggers &lt;code&gt;_rotate_bag_timer()&lt;/code&gt; every &lt;code&gt;self.bag_duration&lt;/code&gt; seconds (e.g., every 60 seconds). This function closes the current bag and opens a new one (by calling &lt;code&gt;_open_new_bag()&lt;/code&gt; which increments a bag index and starts a new file). We then proceed to upload the bag file (that we just closed) to ReductStore.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ReductStore upload:&lt;/strong&gt; Using &lt;code&gt;reduct-py&lt;/code&gt;, we ensure a bucket named &lt;code&gt;pi_robot&lt;/code&gt; exists in ReductStore (which we assume is running locally on the Pi at &lt;code&gt;127.0.0.1:8383&lt;/code&gt;). We then read the bag file into memory and use &lt;code&gt;bucket.write()&lt;/code&gt; to store it in the bucket under the &lt;code&gt;rosbags&lt;/code&gt; entry. We label each record with the detection metadata (e.g., &lt;code&gt;labels={"object": "1", "confidence": "0.85"}&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;After a successful upload, we delete the local &lt;code&gt;.mcap&lt;/code&gt; file to save space on the Pi (since it's now stored in ReductStore).&lt;/li&gt;
&lt;/ul&gt;
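ReductStore stores labels as strings, so the node converts numeric metadata before calling `bucket.write()`. A minimal sketch of that conversion (the helper name `make_labels` is hypothetical, not part of the tutorial's code):

```python
def make_labels(object_label, confidence):
    """Build string-valued labels for a ReductStore record.

    ReductStore labels are string key-value pairs, so numeric metadata
    (e.g., YOLO's confidence score) must be converted before upload.
    """
    return {
        "object": str(object_label),
        "confidence": f"{confidence:.2f}",
    }

# Example: a detection of class id 1 with 85% confidence
labels = make_labels(1, 0.85)
print(labels)  # {'object': '1', 'confidence': '0.85'}
```

The resulting dictionary is what gets passed as `labels=` to `bucket.write()`, and later drives the replication filters.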


&lt;p&gt;This is a basic implementation. In a real-world scenario, you will likely want to handle prediction and timestamping differently, add more metadata, and of course use a more efficient way to upload data directly to ReductStore.&lt;/p&gt;
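For example, instead of reading the whole bag into memory, the file can be read in fixed-size chunks. A stdlib-only sketch (reduct-py also supports streaming uploads; see its documentation for the exact API):

```python
import os
import tempfile


def read_in_chunks(path, chunk_size=1024 * 1024):
    """Yield a file's contents in fixed-size chunks instead of
    loading the whole bag file into memory at once."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk


# Demonstration with a small temporary file standing in for a .mcap bag
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"x" * 2_500_000)
tmp.close()
total = sum(len(c) for c in read_in_chunks(tmp.name))
print(total)  # 2500000
os.unlink(tmp.name)
```

On a memory-constrained Pi, this keeps peak memory usage bounded by the chunk size rather than the bag size.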

&lt;h3&gt;
  
  
  4. Build the &lt;code&gt;rosbag2reduct&lt;/code&gt; Package&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#4-build-the-rosbag2reduct-package" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Ensure that the &lt;code&gt;setup.py&lt;/code&gt; of &lt;code&gt;rosbag2reduct&lt;/code&gt; defines a console-script entry point, e.g.:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;entry_points={&lt;/span&gt;
    &lt;span class="s"&gt;'console_scripts'&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;
        &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;rosbag2reduct&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;rosbag2reduct.recorder_node:main'&lt;/span&gt;
    &lt;span class="pi"&gt;]&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;
&lt;span class="err"&gt;},&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will allow us to run &lt;code&gt;ros2 run rosbag2reduct rosbag2reduct&lt;/code&gt; to launch the node. Now build the package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/ros2_wscolcon build &lt;span class="nt"&gt;--packages-select&lt;/span&gt; rosbag2reduct
&lt;span class="nb"&gt;source install&lt;/span&gt;/local_setup.bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything compiles (installs) without errors, we're ready to run the system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing and Configuring ReductStore on the Raspberry Pi&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#installing-and-configuring-reductstore-on-the-raspberry-pi" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Before running the &lt;code&gt;rosbag2reduct&lt;/code&gt; node, we need a ReductStore server running on the Pi to accept uploads. ReductStore is a lightweight time-series object storage, perfect for edge devices. We will install it on the Pi and create a bucket for our bag files.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Install ReductStore on Raspberry Pi&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#1-install-reductstore-on-raspberry-pi" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The easiest way on Ubuntu is to use &lt;strong&gt;snap&lt;/strong&gt; or &lt;strong&gt;Docker&lt;/strong&gt;. We'll use snap for simplicity:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# On Raspberry Pi&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;snapd &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="c"&gt;# if snapd is not already installed&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;snap &lt;span class="nb"&gt;install &lt;/span&gt;reductstore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will install ReductStore from the Snap Store. The snap should set up ReductStore as a service listening on port 8383 by default. (If using a different OS or if snap isn't desired, you can use Docker: e.g., &lt;code&gt;docker run -d -p 8383:8383 -v ~/reduct_data:/data reduct/store:latest&lt;/code&gt; to run ReductStore in a container.)&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The database will listen on &lt;code&gt;http://0.0.0.0:8383&lt;/code&gt; (accessible to the LAN). Ensure this port is allowed through any firewall if you want external access.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Check that ReductStore is running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;snap services reductstore &lt;span class="c"&gt;# should show active&lt;/span&gt;
curl http://127.0.0.1:8383/api/v1/info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;curl&lt;/code&gt; command should return JSON info about the instance (like version, uptime, etc.).&lt;/p&gt;

&lt;h3&gt;
  
  
  2. (Optional) Configure ReductStore&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#2-optional-configure-reductstore" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;By default, ReductStore doesn't require authentication for local use (anonymous access is allowed). This is fine for our edge scenario on a local network. If you want to set up access tokens or adjust storage quotas, you can do so by following the &lt;a href="https://www.reduct.store/docs/guides/access-control" rel="noopener noreferrer"&gt;&lt;strong&gt;Access Control documentation&lt;/strong&gt;&lt;/a&gt;. For now, we'll use defaults.&lt;/p&gt;

&lt;p&gt;We will use the &lt;strong&gt;Web Console&lt;/strong&gt; to verify data and to set up replication later. The web console is accessible from a browser at the server's address (it's the same as the API endpoint). For example, on the Pi, open &lt;code&gt;http://&amp;lt;raspberrypi_ip&amp;gt;:8383&lt;/code&gt; in a browser - you should see the ReductStore Web Console interface (a simple GUI).&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Create a Bucket for ROS bag data&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#3-create-a-bucket-for-ros-bag-data" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Our &lt;code&gt;rosbag2reduct&lt;/code&gt; code will attempt to create a bucket named &lt;code&gt;"pi_robot"&lt;/code&gt;. We called &lt;code&gt;create_bucket("pi_robot", exist_ok=True)&lt;/code&gt; in the code, so the bucket will be created on first run if it doesn't exist. You can also create it manually via the web console:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Buckets&lt;/strong&gt; and create a new bucket named “pi_robot”. (You can set a quota, e.g., a FIFO quota to avoid filling up the disk on the Pi.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmmh5w7bck8c19c8gx6i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmmh5w7bck8c19c8gx6i.png" alt="ReductStore Web Console Bucket" width="800" height="505"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;ReductStore Web Console: Creating a bucket named "pi_robot" for storing ROS bags.&lt;/p&gt;

&lt;p&gt;Now, ReductStore is set up on the Pi and ready to accept data. We can run the complete system to record ROS data, detect objects, and upload to ReductStore.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running the Complete System&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#running-the-complete-system" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We have three ROS 2 nodes to run on the Raspberry Pi:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The camera driver (&lt;code&gt;v4l2_camera_node&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;The YOLO detection node (&lt;code&gt;YoloDetectorNode&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;The rosbag2reduct recorder/uploader node (&lt;code&gt;Rosbag2ReductNode&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's best to run each in its own terminal (or use a launch file to launch them together). For clarity, we'll do it step-by-step:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terminal 1:&lt;/strong&gt; Camera node&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Terminal 1 on Pi (source ROS 2 and workspace)&lt;/span&gt;
ros2 run v4l2_camera v4l2_camera_node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Terminal 2:&lt;/strong&gt; YOLO detection node&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Terminal 2 on Pi (source ROS 2 and workspace)&lt;/span&gt;
ros2 run yolo_detector yolo_node.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(If you set up the entry point, you could do &lt;code&gt;ros2 run yolo_detector yolo_detector&lt;/code&gt; or similar, but here we assume running the script directly.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terminal 3:&lt;/strong&gt; rosbag2reduct recorder node&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Terminal 3 on Pi (source ROS 2 and workspace)&lt;/span&gt;
ros2 run rosbag2reduct rosbag2reduct
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(This uses the console script entry point we defined. Alternatively &lt;code&gt;ros2 run rosbag2reduct recorder_node.py&lt;/code&gt; if not configured as an entry point.)&lt;/p&gt;

&lt;p&gt;Now monitor the outputs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The camera node should just stream (no text output unless error or warning).&lt;/li&gt;
&lt;li&gt;The YOLO node will log detections (as we coded with &lt;code&gt;get_logger().info&lt;/code&gt; on each detection).&lt;/li&gt;
&lt;li&gt;The rosbag2reduct node will log bag rotations and uploads. For example, you should see logs like “Opened new bag: rosbag_20230325_101500_0” and later “Uploading bag 0 with metadata: object=person, confidence=0.85” then “Uploaded rosbag_... to ReductStore with labels ...” etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let this run for a while. By default, every 60 seconds it will finalize a bag and upload it. If you want to trigger a rotation sooner (for testing), you can reduce &lt;code&gt;bag_duration&lt;/code&gt;; manually triggering the rotation would require exposing a ROS service, which is not implemented here.&lt;/p&gt;
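The bag naming seen in the logs ("rosbag_20230325_101500_0") can be sketched as a small helper; the function name and format below are assumed from the example log lines, not taken verbatim from the node:

```python
from datetime import datetime


def next_bag_name(index, now=None):
    """Build a time-stamped bag name like 'rosbag_20230325_101500_0'.

    The index increments on every rotation, so names stay unique even
    if two bags are opened within the same second.
    """
    now = now or datetime.now()
    return f"rosbag_{now.strftime('%Y%m%d_%H%M%S')}_{index}"


print(next_bag_name(0, datetime(2023, 3, 25, 10, 15, 0)))
# rosbag_20230325_101500_0
```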

&lt;h3&gt;
  
  
  1. Verify Data in ReductStore&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#1-verify-data-in-reductstore" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;On the Pi (or from any machine that can access the Pi's port 8383), open the ReductStore Web Console in a browser: &lt;strong&gt;&lt;code&gt;http://&amp;lt;raspberrypi_ip&amp;gt;:8383&lt;/code&gt;&lt;/strong&gt;. You should see the bucket “pi_robot”. Click it to see the list of entries.&lt;/p&gt;

&lt;p&gt;Each uploaded bag file appears as a &lt;strong&gt;record&lt;/strong&gt; in the &lt;code&gt;rosbags&lt;/code&gt; entry of the &lt;code&gt;pi_robot&lt;/code&gt; bucket. The record name is the bag file name, and you should be able to see its labels, e.g., &lt;code&gt;object: 1&lt;/code&gt; and &lt;code&gt;confidence: 0.85&lt;/code&gt;, attached to that record. You can also see the timestamp of each record (when it was uploaded).&lt;/p&gt;

&lt;p&gt;You have successfully set up the edge device to capture ROS data and push it to ReductStore with metadata labels!&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up Replication to a Central ReductStore (Laptop)&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#setting-up-replication-to-a-central-reductstore-laptop" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;With data being collected on the Pi, we likely want to aggregate it on a central server (e.g., your laptop or a cloud instance) for analysis or long-term storage.&lt;/p&gt;

&lt;p&gt;ReductStore's replication feature allows the Pi (source) to &lt;strong&gt;push&lt;/strong&gt; new records to another ReductStore instance (destination) in real-time, filtering by labels so that, for example, only important events (e.g., specific objects or high-confidence detections) are sent for long-term storage.&lt;/p&gt;


&lt;p&gt;Replication lets you stream only the data you need from the edge to the cloud or between edge devices. You can filter by label and push data without constant polling. If the device is offline or the destination is down, the data waits and replicates later.&lt;/p&gt;

&lt;p&gt;In this section, we'll:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run ReductStore on the laptop.&lt;/li&gt;
&lt;li&gt;Use the ReductStore Web Console to create a replication task on the Pi's instance that filters and forwards data to the laptop's instance based on labels.&lt;/li&gt;
&lt;li&gt;Verify that replication works.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  1. Install/Run ReductStore on Laptop&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#1-installrun-reductstore-on-laptop" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;On your laptop (assuming Ubuntu 22.04 or any system with Docker or Snap):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Option A: Docker&lt;/strong&gt; - run ReductStore in a container: &lt;code&gt;docker run -d -p 8383:8383 -v ~/reduct_data:/data reduct/store:latest&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Option B: Native&lt;/strong&gt; - you could similarly install via Snap (&lt;code&gt;sudo snap install reductstore&lt;/code&gt;) or use a binary. Docker is quick and easy.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After starting it, ensure you can access it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://127.0.0.1:8383/api/v1/info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should return JSON info as before, but for the laptop's instance.&lt;/p&gt;

&lt;p&gt;Open the web console on the laptop: &lt;code&gt;http://localhost:8383&lt;/code&gt; and keep it open for monitoring.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Create a bucket named “pi_robot” on the laptop as well, otherwise the replication task will fail (it needs the destination bucket to exist).&lt;/p&gt;
&lt;/blockquote&gt;


&lt;p&gt;Use provisioning to automate bucket creation and other setup steps in a real deployment. See &lt;a href="https://www.reduct.store/docs/configuration#provisioning" rel="noopener noreferrer"&gt;&lt;strong&gt;Configuration/Provisioning Documentation&lt;/strong&gt;&lt;/a&gt; for more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Networking:&lt;/strong&gt; Make sure your laptop is accessible from the Pi. If both are on the same LAN, you might use the laptop's IP (e.g., 192.168.x.x). If the laptop's ReductStore is in Docker, ensure the port 8383 is open (it is published in the run command above). For testing, you might temporarily disable firewall or ensure port 8383 is allowed.&lt;/p&gt;

&lt;p&gt;Find your laptop's IP address (e.g., &lt;code&gt;hostname -I&lt;/code&gt; on Linux) - let's say it's &lt;code&gt;192.168.1.100&lt;/code&gt; for example.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Configure Replication on the Pi via Web Console&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#2-configure-replication-on-the-pi-via-web-console" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;On the Pi's web console (&lt;strong&gt;&lt;code&gt;http://&amp;lt;raspberrypi_ip&amp;gt;:8383&lt;/code&gt;&lt;/strong&gt;), look for “Replications” and click the “+” button to add a replication task. We want to create a task that sends data to the laptop.&lt;/p&gt;

&lt;p&gt;Fill in the replication settings as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Source Bucket:&lt;/strong&gt; pi_robot (on Pi)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Destination URL:&lt;/strong&gt; &lt;code&gt;http://&amp;lt;laptop_ip&amp;gt;:8383&lt;/code&gt; (e.g., &lt;code&gt;http://192.168.1.100:8383&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Destination Bucket:&lt;/strong&gt; pi_robot (on laptop)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Replication Name:&lt;/strong&gt; (give it a name like “to_laptop”)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Entries:&lt;/strong&gt; &lt;code&gt;rosbags&lt;/code&gt; (or whatever entry name you used in &lt;code&gt;bucket.write()&lt;/code&gt;), or leave blank to replicate all entries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Filter Records&lt;/strong&gt; : add the "Exclude" filter rule for &lt;code&gt;object&lt;/code&gt; equals &lt;code&gt;"none"&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Start the replication task. The Pi's ReductStore will now start forwarding new records that meet the criteria to the laptop, in real-time. It's a push model from Pi to laptop, so the laptop doesn't need to know about the Pi or poll it - the Pi will push new records as they arrive.&lt;/p&gt;
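To make the effect of the "Exclude" rule concrete, here is a stdlib-only sketch of the predicate it applies; this is illustrative only, as the real filtering happens inside ReductStore's replication engine:

```python
def passes_filter(labels, exclude=None):
    """Return True if a record's labels survive an 'exclude' rule.

    A record is dropped (not replicated) when every key/value pair in
    `exclude` matches the record's labels; otherwise it is forwarded.
    """
    exclude = exclude or {"object": "none"}
    return not all(labels.get(k) == v for k, v in exclude.items())


print(passes_filter({"object": "person", "confidence": "0.85"}))  # True
print(passes_filter({"object": "none", "confidence": "0.10"}))    # False
```

With the rule from this tutorial, bags labeled `object: "none"` stay on the Pi, while every other record is pushed to the laptop.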

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosmxpp4394m3mtdohhym.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosmxpp4394m3mtdohhym.png" alt="ReductStore Web Console Replication" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;ReductStore Web Console: Setting up a replication task to forward data to a central storage.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Test Replication&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#3-test-replication" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Back on the Pi, ensure the &lt;code&gt;rosbag2reduct&lt;/code&gt; node is still running and creating new records. When the next bag file is uploaded on the Pi, if it meets the replication filter conditions, the Pi's ReductStore replication task will immediately send it to the laptop.&lt;/p&gt;

&lt;p&gt;On the &lt;strong&gt;laptop's web console&lt;/strong&gt; , open the “pi_robot” bucket. You should start seeing records appear that correspond to those on the Pi (with a slight delay for transfer). The labels should also be present. If you configured a filter (e.g., confidence &amp;gt; 0.8), try to produce a detection above that confidence on the Pi (point the camera at an easily recognized object or adjust threshold on the Pi code for testing).&lt;/p&gt;

&lt;p&gt;Records not meeting the condition will stay only on the Pi and will eventually be overwritten if the Pi's bucket has a FIFO quota set.&lt;/p&gt;

&lt;p&gt;You can also check the replication status on the Pi's web console; it may show the timestamp of the last replicated record, indicating that replication is working.&lt;/p&gt;

&lt;p&gt;At this point, we have a robust pipeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Raspberry Pi captures camera data and detection metadata.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;rosbag2reduct&lt;/code&gt; segments the data into time-based bags, labels them, and pushes to local storage.&lt;/li&gt;
&lt;li&gt;ReductStore on Pi retains recent data and automatically forwards critical data to the laptop's ReductStore based on labels.&lt;/li&gt;
&lt;li&gt;The laptop accumulates the forwarded data in its own ReductStore bucket, which you can browse or integrate with other systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Best Practices and Further Improvements&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#best-practices-and-further-improvements" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;This tutorial only scratches the surface of building a robust data acquisition and storage pipeline for robotics. Here are some best practices and further improvements to consider:&lt;/p&gt;

&lt;h3&gt;
  
  
  📊 Separate Storage Topics by Types&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#-separate-storage-topics-by-types" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Use separate storage streams for different topic categories (e.g., telemetry vs. raw sensor data) to optimize storage and retrieval. This allows you to apply different retention policies, access controls, or replication rules to each category.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;📖 See &lt;a href="https://www.reduct.store/blog/store-ros-topics" rel="noopener noreferrer"&gt;&lt;strong&gt;3 Ways to Store ROS Topics&lt;/strong&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;📘 Example: &lt;a href="https://www.reduct.store/blog/tutorials/ros/optimal-image-storage-solutions-for-ros-based-computer-vision" rel="noopener noreferrer"&gt;&lt;strong&gt;How to Store Images in ROS 2&lt;/strong&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here are some common data categories and their characteristics to consider:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;th&gt;Characteristics&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Lightweight telemetry&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;GPS, IMU, joint states, system status&lt;/td&gt;
&lt;td&gt;Low bandwidth, near real-time, useful for business analytics&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Downsampled sensor data&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Lower framerate/resolution camera or lidar data&lt;/td&gt;
&lt;td&gt;Mid-size, great for monitoring and incident triage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Full-resolution data&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Raw camera frames, high-fps lidar, depth maps&lt;/td&gt;
&lt;td&gt;High volume (up to 1TB/hour), needed for debugging or model retraining&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  🗜️ Compress High-Bandwidth Topics&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#%EF%B8%8F-compress-high-bandwidth-topics" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Use ROS 2's built-in compression mechanisms for image topics and large messages to reduce bandwidth and storage costs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🖼️ Example: Use &lt;code&gt;/image_raw/compressed&lt;/code&gt; instead of raw image streams&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🔐 Use Token-Based Authentication&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#-use-token-based-authentication" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;This is especially relevant for cloud storage or remote ReductStore instances. Use access tokens to secure data transfers and prevent unauthorized access.&lt;/p&gt;

&lt;h3&gt;
  
  
  ⚙️ Use Non-Blocking Operations&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#%EF%B8%8F-use-non-blocking-operations" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;When uploading large files or performing storage operations, use non-blocking (asynchronous) methods to avoid freezing your ROS 2 nodes. This keeps your system responsive and prevents dropped messages or missed frames.&lt;/p&gt;
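As a minimal illustration with Python's asyncio: the `upload()` coroutine below is a stand-in for a real reduct-py `bucket.write()` call, not the actual API, and the tick loop stands in for ongoing ROS callbacks:

```python
import asyncio


async def upload(name):
    # Stand-in for a real network upload; simulate transfer latency.
    await asyncio.sleep(0.1)
    return f"uploaded {name}"


async def main():
    # Fire-and-forget: schedule the upload as a background task instead
    # of awaiting it inline, so the control loop below keeps running.
    task = asyncio.create_task(upload("rosbag_0.mcap"))
    ticks = 0
    while not task.done():
        ticks += 1               # work that continues during the upload
        await asyncio.sleep(0.01)
    print(task.result())
    return ticks


asyncio.run(main())
```

The key point is that the loop keeps ticking while the upload is in flight; a blocking `write()` call in a ROS callback would instead stall the executor for the whole transfer.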

&lt;h3&gt;
  
  
  📉 Combine Downsampling with Replication&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#-combine-downsampling-with-replication" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;This is typical in ELT (Extract-Load-Transform) pipelines. The idea is to save everything locally (on the robot) at high resolution and framerate, then stream part of the data to the cloud or a central server at a lower resolution or lower frequency.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;➡️ Example: Store 1 FPS video in cloud, keep 30 FPS original on robot SSD&lt;/li&gt;
&lt;li&gt;🔄 See example in: &lt;a href="https://www.reduct.store/blog/daq-manufacture-system" rel="noopener noreferrer"&gt;&lt;strong&gt;Building a Data Acquisition System for Manufacturing&lt;/strong&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
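The 30 FPS local / 1 FPS cloud split can be sketched as a simple frame selector. This is purely illustrative; in practice the downsampling would happen in a ROS node or through replication settings:

```python
def select_for_cloud(frame_indices, local_fps=30, cloud_fps=1):
    """Keep every Nth frame for replication, with N = local_fps // cloud_fps.

    Everything is stored locally at full rate; only the selected
    subset is forwarded to the central server or cloud.
    """
    stride = local_fps // cloud_fps
    return [i for i in frame_indices if i % stride == 0]


frames = list(range(90))          # 3 seconds of 30 FPS video
print(select_for_cloud(frames))   # [0, 30, 60]
```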

&lt;h3&gt;
  
  
  ❄️ Offload Data to Cold Storage&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#%EF%B8%8F-offload-data-to-cold-storage" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;For long-term archiving, consider offloading data to cold storage to reduce costs while keeping data accessible.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;⚖️ Example: Keep 30 days of data locally, archive older data to Google Cloud Storage.&lt;/li&gt;
&lt;li&gt;📖 Guide: &lt;a href="https://www.reduct.store/docs/guides/cloud/saas" rel="noopener noreferrer"&gt;&lt;strong&gt;Deploy on ReductStore Cloud&lt;/strong&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion&lt;a href="https://www.reduct.store/blog/tutorial-store-ros-data#conclusion" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we set up a complete pipeline on a Raspberry Pi running ROS 2 Humble that captures camera data and AI output, records it to MCAP bag files with a custom Python node, and automatically offloads those files to a time-series object store. We added labels to each record (object ID detected and YOLO's confidence level). We configured replication to forward labeled data to a central instance on a laptop, filtering by labels to reduce bandwidth - for example, forwarding only "interesting" events (excluding "none" detections).&lt;/p&gt;

&lt;p&gt;The end-to-end system demonstrates how to build a robust data logging and centralized storage solution for robotics. Of course, this tutorial must be adapted to your specific use case and environment. Keep in mind that the pipeline is highly customizable and can be adapted to different scenarios (separating topics by type, using specific filters, adding more metadata, etc.).&lt;/p&gt;

&lt;p&gt;We hope this end-to-end tutorial helps you build your own ROS 2 data acquisition system. Happy hacking!&lt;/p&gt;




&lt;p&gt;Thanks for reading! I hope this article helps you build the right storage pipeline for your ROS data. If you have any questions or comments, feel free to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community Forum&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>ros</category>
      <category>robotics</category>
    </item>
    <item>
      <title>ReductStore vs. MinIO: Beyond Benchmarks</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Tue, 25 Mar 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/reductstore-vs-minio-beyond-benchmarks-1f4n</link>
      <guid>https://dev.to/reductstore/reductstore-vs-minio-beyond-benchmarks-1f4n</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45mptnig6tlhuqbb571d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45mptnig6tlhuqbb571d.png" alt="ReductStore with MinIO" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As data-driven applications evolve, the need for efficient storage solutions continues to grow. ReductStore and MinIO are two powerful solutions designed to handle massive amounts of unstructured data, but they serve different purposes.&lt;/p&gt;

&lt;p&gt;While ReductStore is optimized for time-series object storage with a focus on unstructured data such as sensor logs, images, and machine-generated data for robotics and IIoT, MinIO is a high-performance object storage system built for scalable, cloud-native applications with a focus on S3 compatibility and enterprise-wide storage needs.&lt;/p&gt;

&lt;p&gt;In this article, we'll explore the differences between ReductStore and MinIO, examine where each excels, and discuss how they can be used together to build a more comprehensive data storage solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding MinIO: The Scalable Cloud-Native Object Storage&lt;a href="https://www.reduct.store/blog/minio-reductstore-beyond-benchmarks#understanding-minio-the-scalable-cloud-native-object-storage" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;MinIO is an &lt;strong&gt;open-source object storage system&lt;/strong&gt; that provides a high-performance, S3-compatible API for storing and managing unstructured data. Designed to be lightweight yet highly scalable, it is often deployed in cloud-native environments, acting as a drop-in replacement for &lt;strong&gt;Amazon S3&lt;/strong&gt; or integrated into private and hybrid cloud infrastructures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features of MinIO&lt;a href="https://www.reduct.store/blog/minio-reductstore-beyond-benchmarks#key-features-of-minio" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon S3 API Compatibility:&lt;/strong&gt; Provides smooth integration with existing cloud storage applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High-Performance Object Storage:&lt;/strong&gt; Designed for fast throughput and large-scale workloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise Security &amp;amp; Compliance:&lt;/strong&gt; Provides encryption, access controls, and other security safeguards.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability &amp;amp; Redundancy:&lt;/strong&gt; Supports multi-node clusters with erasure coding and replication.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Cloud Deployment:&lt;/strong&gt; Works across private, hybrid, and public cloud infrastructures.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;MinIO is particularly suited for cloud-native storage, providing an efficient way to manage large datasets across distributed environments. It is commonly used in AI/ML pipelines, backup and disaster recovery, and data lakes where scalability and reliability are critical.&lt;/p&gt;

&lt;p&gt;Amazon S3 compatibility allows MinIO to integrate with existing cloud applications, reducing migration and operational challenges. Security and compliance measures, such as encryption and access controls, ensure that corporate data is protected.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The platform's ability to operate across multiple cloud environments makes it a preferred choice for organizations adopting hybrid and multi-cloud strategies.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  ReductStore: Purpose-Built for Storage and Streaming in Data Acquisition Systems&lt;a href="https://www.reduct.store/blog/minio-reductstore-beyond-benchmarks#reductstore-purpose-built-for-storage-and-streaming-in-data-acquisition-systems" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In a recent article we showed the performance benefits of ReductStore for time-series data, &lt;a href="https://www.reduct.store/blog/comparisons/computer-vision/iot/performance-comparison-reductstore-vs-minio" rel="noopener noreferrer"&gt;&lt;strong&gt;outperforming MinIO for multiple file sizes&lt;/strong&gt;&lt;/a&gt;. ReductStore excels with unstructured time-series data, whereas MinIO serves as a more general-purpose object storage solution.&lt;/p&gt;

&lt;p&gt;MinIO is highly optimized for AI/ML workloads, but not specifically for data acquisition systems (DAQ), where time-series data is key: particularly time-series data with large records, such as vibration and acoustic measurements in industrial applications, or camera and sensor logs (telemetry) in robotics. AI/ML often relies on unstructured time-series data as well, and when this data is the primary focus, ReductStore outperforms MinIO by a significant margin.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features of ReductStore&lt;a href="https://www.reduct.store/blog/minio-reductstore-beyond-benchmarks#key-features-of-reductstore" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Optimized for Time-Series Data:&lt;/strong&gt; Stores unstructured, sequential data efficiently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FIFO (First-In, First-Out) Quota System:&lt;/strong&gt; Automatically manages storage volume by replacing older data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low-Latency Retrieval:&lt;/strong&gt; Ensures fast access to historical data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batch Processing for High-Speed Ingestion:&lt;/strong&gt; Reduces network overhead and speeds up data acquisition.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lightweight HTTP(S) API Integration:&lt;/strong&gt; Designed for portability and easy connectivity in robotics and IIoT infrastructures.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ReductStore is particularly useful for automated industrial systems, robotics, or any autonomous system (such as self-driving cars) that require fast storage and retrieval of large volumes of sensor data, video streams, and device logs.&lt;/p&gt;

&lt;p&gt;In distributed setups - such as those with high latency or remote nodes - the tool supports data streaming (replication) from one node to another. This is ideal when edge devices need immediate access to information locally, while simultaneously &lt;a href="https://www.reduct.store/solutions/cloud" rel="noopener noreferrer"&gt;&lt;strong&gt;replicating that data to centralized servers or cloud storage&lt;/strong&gt;&lt;/a&gt; for broader analysis or backup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Differences: ReductStore vs. MinIO&lt;a href="https://www.reduct.store/blog/minio-reductstore-beyond-benchmarks#key-differences-reductstore-vs-minio" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Feature&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;MinIO&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Primary Use Case&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Time-series object storage for high-frequency, unstructured data (robotics/IIoT).&lt;/td&gt;
&lt;td&gt;Cloud-native object storage for scalable, S3-compatible workloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Data Type&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Sequential, time-indexed blobs (sensor streams, logs, images, event data).&lt;/td&gt;
&lt;td&gt;General-purpose unstructured data (files, backups, training datasets).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Performance&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Optimized for low-latency writes/reads of time-series data; batch ingestion.&lt;/td&gt;
&lt;td&gt;High-throughput for large-scale parallel workloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Deployment Focus&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Edge devices, lightweight data acquisition, and central time-series storage.&lt;/td&gt;
&lt;td&gt;Enterprise data centers, multi-cloud clusters, and hybrid environments.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Key Use Cases&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Data acquisition systems, robotics telemetry, industrial monitoring, ELT.&lt;/td&gt;
&lt;td&gt;Data lakes, AI/ML pipelines, backup/archival, multi-cloud file storage.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  When to Use ReductStore&lt;a href="https://www.reduct.store/blog/minio-reductstore-beyond-benchmarks#when-to-use-reductstore" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;ReductStore is ideal for applications that require real-time, high-speed data capture and retrieval. It works particularly well in domains such as manufacturing and autonomous systems (autonomous cars/robotics), especially when raw, time-stamped sensor data (such as logs, LiDAR scans, or event streams) is collected for later processing or AI training via an &lt;strong&gt;extract-load-transform (ELT) approach&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It's ideal for &lt;strong&gt;deploying robust data collection systems&lt;/strong&gt; on edge devices with limited memory. For example, a computer vision module with AI predictions can locally store images along with AI-generated labels. When memory runs low, the system automatically removes older data to make room for new information (FIFO quota). At the same time, it can stream selected records - filtered by AI labels - to centralized storage, providing efficient and reliable data flow from the edge to the cloud.&lt;/p&gt;
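
&lt;p&gt;As a rough illustration of the volume-based FIFO quota described above, here is a minimal, self-contained Python sketch. It is not the actual ReductStore implementation, and the quota and record sizes are made up; it only shows the eviction idea: when total stored bytes exceed the quota, the oldest records are dropped first.&lt;/p&gt;

```python
from collections import OrderedDict

class FifoQuota:
    """Conceptual sketch of a volume-based FIFO quota (not ReductStore's
    actual engine): once total stored bytes exceed the quota, the oldest
    records are evicted first."""

    def __init__(self, quota_bytes):
        self.quota_bytes = quota_bytes
        self.records = OrderedDict()  # timestamp to payload, insertion-ordered
        self.total_bytes = 0

    def write(self, timestamp, payload):
        self.records[timestamp] = payload
        self.total_bytes += len(payload)
        # Evict oldest records until we are back under the quota.
        while self.total_bytes > self.quota_bytes:
            _, oldest = self.records.popitem(last=False)
            self.total_bytes -= len(oldest)

store = FifoQuota(quota_bytes=100)
for t in range(10):
    store.write(t, b"x" * 30)  # each record is 30 bytes

print(sorted(store.records))  # [7, 8, 9]
print(store.total_bytes)      # 90
```

&lt;p&gt;In ReductStore itself, nothing like this needs to be hand-coded: the behavior is enabled per bucket through its quota settings, and the engine handles eviction automatically.&lt;/p&gt;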

&lt;h2&gt;
  
  
  When to Use MinIO&lt;a href="https://www.reduct.store/blog/minio-reductstore-beyond-benchmarks#when-to-use-minio" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;MinIO is the better choice for organizations requiring &lt;strong&gt;high-performance, scalable storage for enterprise applications&lt;/strong&gt;. It is widely used in AI and machine learning workflows, where large datasets must be stored and processed across distributed cloud environments. Enterprises looking to build private or hybrid cloud storage solutions will benefit from MinIO's &lt;strong&gt;S3-compatible API and multi-cloud deployment&lt;/strong&gt; capabilities.&lt;/p&gt;

&lt;p&gt;MinIO also excels in &lt;strong&gt;long-term storage and disaster recovery&lt;/strong&gt;, making it a preferred option for organizations that need to store large volumes of data while ensuring redundancy and resilience. Businesses in industries such as media, healthcare, and financial services can rely on MinIO's ability to securely store large files, backups, and archives while maintaining fast retrieval speeds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Combining ReductStore and MinIO for a Complete Solution&lt;a href="https://www.reduct.store/blog/minio-reductstore-beyond-benchmarks#combining-reductstore-and-minio-for-a-complete-solution" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;ReductStore and MinIO can be combined to create an efficient hybrid storage architecture.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;ReductStore is designed to store and manage high-frequency blob data - such as images, video frames, or sensor logs - using the local file system. This makes it easy to integrate with any backend that supports FUSE (Filesystem in Userspace).&lt;/p&gt;

&lt;p&gt;MinIO provides scalable, S3-compliant object storage that can be mounted using MinFS, a FUSE driver for MinIO. By mounting a MinIO bucket via MinFS, ReductStore can write and read data directly from MinIO, just like a regular filesystem.&lt;/p&gt;

&lt;p&gt;This setup allows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ReductStore to run at the edge or in the cloud&lt;/li&gt;
&lt;li&gt;MinIO to handle long-term blob storage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, they provide a flexible solution: ReductStore manages real-time ingestion and fast access, while MinIO provides persistent, scalable storage in the background.&lt;/p&gt;
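
&lt;p&gt;The key property of this setup is that the writing code does not change: any process that persists records as files under a directory transparently persists them to MinIO if that directory is a FUSE mount of a bucket. The Python sketch below illustrates this with a hypothetical directory layout and record naming; it is not ReductStore's on-disk format.&lt;/p&gt;

```python
import json
import tempfile
import time
from pathlib import Path

# Hypothetical data directory. In the hybrid setup described above, this
# path would be a FUSE mount backed by a MinIO bucket; the writing code
# below is identical either way, because FUSE exposes the bucket as a
# regular filesystem.
data_dir = Path(tempfile.mkdtemp()) / "bucket" / "camera"
data_dir.mkdir(parents=True)

def write_record(entry_dir, payload, labels):
    """Persist one timestamped blob plus its labels as plain files."""
    ts = time.time_ns() // 1000  # microsecond timestamp as the record key
    (entry_dir / f"{ts}.bin").write_bytes(payload)
    (entry_dir / f"{ts}.json").write_text(json.dumps(labels))
    return ts

ts = write_record(data_dir, b"frame-bytes", {"label": "anomaly"})
print((data_dir / f"{ts}.bin").read_bytes())  # b'frame-bytes'
```

&lt;p&gt;Whether the directory is local disk or a mounted bucket becomes a deployment decision rather than a code change.&lt;/p&gt;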

&lt;h2&gt;
  
  
  Conclusion&lt;a href="https://www.reduct.store/blog/minio-reductstore-beyond-benchmarks#conclusion" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;ReductStore is a strong choice for workloads that require real-time access to time-series blob data, whether at the edge or in the cloud. It provides low-latency ingestion and efficient storage for use cases in robotics and IIoT.&lt;/p&gt;

&lt;p&gt;MinIO is best suited for general purpose, cloud-native object storage. It works well for long-term archiving, backups, and applications that require S3-compliant access across distributed environments.&lt;/p&gt;

&lt;p&gt;In many cases, using both systems together is a practical solution: ReductStore handles the active, high-frequency data stream, while MinIO provides scalable and resilient storage for long-term retention.&lt;/p&gt;




&lt;p&gt;Thanks for reading, I hope this article helps you choose the right storage strategy for your data. If you have any questions or comments, feel free to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community Forum&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>database</category>
      <category>comparison</category>
    </item>
    <item>
      <title>ReductStore vs. TimescaleDB: How to Choose the Right Time-Series Database</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Thu, 06 Mar 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/reductstore-vs-timescaledb-how-to-choose-the-right-time-series-database-2018</link>
      <guid>https://dev.to/reductstore/reductstore-vs-timescaledb-how-to-choose-the-right-time-series-database-2018</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2fuippsc8zfrrs0nr1y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2fuippsc8zfrrs0nr1y.png" alt="ReductStore vs TimescaleDB" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the rapid growth of time-series data in AI, IoT, and industrial automation, choosing the right database solution can significantly impact performance, scalability, and efficiency. As we covered briefly in &lt;a href="https://www.reduct.store/whitepaper" rel="noopener noreferrer"&gt;&lt;strong&gt;our whitepaper&lt;/strong&gt;&lt;/a&gt;, ReductStore and TimescaleDB are two powerful but distinct solutions, each designed to handle time-series data in different ways. ReductStore specializes in unstructured time-series data, making it ideal for edge computing and large binary objects. TimescaleDB, on the other hand, is an extension of PostgreSQL, optimized for structured time-series data with robust querying capabilities. In this article, we'll explore the differences between ReductStore and TimescaleDB, examine their respective strengths, and discuss when to use each.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding TimescaleDB: The PostgreSQL-Based Time-Series Database&lt;a href="https://www.reduct.store/blog/timescaledb-reductstore#understanding-timescaledb-the-postgresql-based-time-series-database" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;TimescaleDB is a time-series database built as an extension of PostgreSQL, leveraging SQL's familiarity while adding optimizations for time-series workloads. Unlike traditional relational databases, TimescaleDB structures data into hypertables, which automatically partition data across multiple chunks, improving read and write performance.&lt;/p&gt;
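
&lt;p&gt;The chunking behind hypertables can be sketched in plain Python. This is a simplified model, not TimescaleDB's internals; the 7-day interval mirrors TimescaleDB's default chunk interval, and the rows are invented:&lt;/p&gt;

```python
from collections import defaultdict
from datetime import datetime, timedelta

CHUNK = timedelta(days=7)  # TimescaleDB's default chunk_time_interval is 7 days
EPOCH = datetime(2025, 1, 1)

def chunk_of(ts):
    """Map a timestamp to its chunk index, mimicking how a hypertable
    routes rows into fixed-width time partitions."""
    return int((ts - EPOCH) / CHUNK)

rows = [
    (datetime(2025, 1, 2), 20.5),
    (datetime(2025, 1, 6), 21.0),
    (datetime(2025, 1, 9), 19.8),
    (datetime(2025, 1, 20), 22.3),
]

chunks = defaultdict(list)
for ts, value in rows:
    chunks[chunk_of(ts)].append((ts, value))

# A query over a time range only needs to scan the matching chunks.
print({k: len(v) for k, v in chunks.items()})  # {0: 2, 1: 1, 2: 1}
```

&lt;p&gt;Because each chunk covers a known time range, TimescaleDB can prune chunks that fall outside a query's time filter instead of scanning the whole table.&lt;/p&gt;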

&lt;h3&gt;
  
  
  Key Features of TimescaleDB&lt;a href="https://www.reduct.store/blog/timescaledb-reductstore#key-features-of-timescaledb" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SQL Compatibility:&lt;/strong&gt; Allows users to run traditional SQL queries on structured time-series data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hypertables &amp;amp; Automatic Partitioning:&lt;/strong&gt; Improves storage efficiency and query performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compression &amp;amp; Data Retention Policies:&lt;/strong&gt; Reduces storage costs by compressing or discarding older data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full PostgreSQL Ecosystem:&lt;/strong&gt; Supports joins, indexing, and relational data integration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficient Querying:&lt;/strong&gt; Optimized for aggregations, rollups, and downsampling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extensive Analytical Capabilities:&lt;/strong&gt; Ideal for real-time monitoring, forecasting, and trend analysis.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability &amp;amp; Replication:&lt;/strong&gt; Supports distributed architectures for improved availability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because TimescaleDB is based on PostgreSQL, it's particularly useful for applications that blend time-series data with relational datasets. It excels in scenarios requiring frequent queries, detailed analytics, and structured data storage. With features such as full-text or vector search, it is hard to beat the performance that TimescaleDB offers for structured PostgreSQL time series data, especially when dealing with deep metadata queries. Like ReductStore, TimescaleDB is able to leverage on-premise and cloud storage to create a powerful, efficient and affordable solution, provided the dataset is structured. Because of this flexibility, performance, built-in SQL integration, and deep query capabilities, TimescaleDB is widely used in financial transactions, predictive maintenance, and smart city infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  ReductStore: Optimized for Unstructured Time-Series Data&lt;a href="https://www.reduct.store/blog/timescaledb-reductstore#reductstore-optimized-for-unstructured-time-series-data" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;While TimescaleDB shines in structured environments, ReductStore is purpose-built for high-speed, unstructured time-series data. It is designed for scenarios where time-series data consists of binary large objects (BLOBs), such as images, videos, and sensor logs. Unlike relational databases, ReductStore organizes data into buckets and entries, with specialized features for edge computing and industrial IoT applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features of ReductStore&lt;a href="https://www.reduct.store/blog/timescaledb-reductstore#key-features-of-reductstore" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Designed for Unstructured Data:&lt;/strong&gt; Efficiently stores and retrieves binary time-series objects.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Volume-Based FIFO (First-In, First-Out) Quota System:&lt;/strong&gt; Retains the most relevant and recent data with minimal configuration and prevents edge storage from being overrun.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batching &amp;amp; Low-Latency Retrieval:&lt;/strong&gt; Reduces network overhead, making it highly efficient for edge devices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lightweight HTTP API (and ReductStore SDKs):&lt;/strong&gt; Provides a simple interface for integrating with AI, robotics, and other IoT systems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimized for Large Data Objects:&lt;/strong&gt; Unlike TimescaleDB, ReductStore handles large records natively.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost-Effective Storage Management:&lt;/strong&gt; Combines efficient data retention policies with low-cost blob storage to keep costs down. (For more information, see our article on &lt;a href="https://www.reduct.store/blog/data-lakehouse-manufacturing" rel="noopener noreferrer"&gt;&lt;strong&gt;data lakehouses for manufacturing&lt;/strong&gt;&lt;/a&gt;.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ReductStore excels in AI, robotics, and industrial automation, where managing high-frequency unstructured time-series data is critical. It is particularly well-suited for environments requiring efficient storage and processing of large sensor data, video streams, or machine logs. Organizations developing autonomous systems, security monitoring applications, and predictive maintenance solutions could benefit significantly from ReductStore's ability to efficiently handle and prioritize large volumes of binary time-series data. Its flexibility in leveraging both on-premise and cloud storage solutions allows for seamless integration into a variety of industrial and AI-driven pipelines.&lt;/p&gt;

&lt;p&gt;One of ReductStore's standout features is its real-time FIFO (First-In, First-Out) quota system based on storage volume, which ensures optimal storage management by automatically replacing older data while retaining high-priority information. This capability is particularly valuable in edge computing, where storage constraints require careful storage management. &lt;a href="https://www.reduct.store/blog/comparisons/iot/reductstore-vs-timescaledb" rel="noopener noreferrer"&gt;&lt;strong&gt;Additionally, ReductStore's batching and iterator-based query approach significantly reduces latency overhead&lt;/strong&gt;&lt;/a&gt;, making it an efficient choice for high-frequency data retrieval. While TimescaleDB offers advanced SQL-based time-series indexing and partitioning, it is inherently optimized for structured datasets and relational metadata, making ReductStore the preferred choice for workloads involving continuous ingestion and rapid access to unstructured binary time-series data.&lt;/p&gt;
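
&lt;p&gt;To see why batching matters, here is an illustrative Python sketch. The per-request latency and record counts are made up; the point is only that fetching records in batches amortizes the fixed round-trip cost across many records instead of paying it once per record:&lt;/p&gt;

```python
def fetch_batched(records, batch_size):
    """Yield records in batches, so each network round trip carries many
    records instead of one (the idea behind batched reads; the cost model
    below is purely illustrative)."""
    batch = []
    for rec in records:
        batch.append(rec)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # flush the final partial batch

ROUND_TRIP_MS = 50  # hypothetical per-request latency

records = list(range(1000))
unbatched_cost = len(records) * ROUND_TRIP_MS            # one trip per record
batched_cost = sum(ROUND_TRIP_MS for _ in fetch_batched(records, 100))

print(unbatched_cost, batched_cost)  # 50000 500
```

&lt;p&gt;With 1,000 records and batches of 100, the fixed latency is paid 10 times instead of 1,000, which is why batching dominates for high-frequency retrieval.&lt;/p&gt;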

&lt;h2&gt;
  
  
  Key Differences: ReductStore vs. TimescaleDB&lt;a href="https://www.reduct.store/blog/timescaledb-reductstore#key-differences-reductstore-vs-timescaledb" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Feature&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;TimescaleDB&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Data Model&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Relational time-series database (PostgreSQL extension)&lt;/td&gt;
&lt;td&gt;Time-series object storage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Best For&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;SQL-based analytics and structured time-series data&lt;/td&gt;
&lt;td&gt;Fast data acquisition systems (primarily unstructured/binary data)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Schema&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Standard relational schema design&lt;/td&gt;
&lt;td&gt;Flat storage structure&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Transport Protocol&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;PostgreSQL wire protocol (TCP)&lt;/td&gt;
&lt;td&gt;HTTP/1 and HTTP/2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Query Language&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;SQL-based queries&lt;/td&gt;
&lt;td&gt;Conditional Query Language (HTTP-based)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Vertical/horizontal scaling via hypertables and distributed hypertables&lt;/td&gt;
&lt;td&gt;Optimized for edge computing and centralized cloud storage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Performance&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High ingest rates for structured data; optimized compression for analytics&lt;/td&gt;
&lt;td&gt;High-speed ingestion and retrieval for large binary data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Ideal Use Cases&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Real-time analytics, monitoring, IoT with structured data, financial workloads&lt;/td&gt;
&lt;td&gt;Industrial IoT, vibration/acoustic sensors, predictive maintenance, robotics, computer vision&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  When Both Solutions Might Work Together&lt;a href="https://www.reduct.store/blog/timescaledb-reductstore#when-both-solutions-might-work-together" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;While ReductStore and TimescaleDB specialize in different areas, there are scenarios where using both solutions together can provide the best of both worlds. For example, in an industrial IoT setting, TimescaleDB can be used to store structured metadata, such as device identifiers and timestamps, while ReductStore can manage the corresponding unstructured data, such as sensor images, audio logs, or vibration waveforms.&lt;/p&gt;

&lt;p&gt;Similarly, AI-driven applications might rely on ReductStore for raw data storage while using TimescaleDB for structured annotations and analytics. For instance, in predictive maintenance, TimescaleDB could store structured sensor readings like temperature and pressure logs, while ReductStore could handle infrared images, vibrational sensor data or audio recordings used for diagnosing issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  Which One Should You Choose?&lt;a href="https://www.reduct.store/blog/timescaledb-reductstore#which-one-should-you-choose" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;If your application requires structured, SQL-compatible time-series data with strong analytics and query performance, TimescaleDB is a great choice. It excels in scenarios like financial monitoring, IoT device management, and predictive analytics where structured data is key. Organizations that need deep historical analysis, trend forecasting, and regulatory compliance for long-term data storage often find TimescaleDB to be a perfect fit.&lt;/p&gt;

&lt;p&gt;On the other hand, if your workload involves unstructured time-series data with large binary objects, ReductStore is the better fit. It is optimized for high-speed ingestion, edge computing, and AI-driven applications, making it ideal for robotics, manufacturing, and high-frequency sensor data. Its ability to handle binary time-series data makes it a natural choice for applications in satellite imaging or industrial automation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts&lt;a href="https://www.reduct.store/blog/timescaledb-reductstore#final-thoughts" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Both ReductStore and TimescaleDB offer powerful solutions for managing time-series data, but their strengths lie in different areas. Understanding these differences allows you to choose the best fit for your specific use case—or even leverage both for a comprehensive data storage strategy.&lt;/p&gt;

&lt;p&gt;If your project deals with structured time-series data analytics, particularly if it involves SQL in any way, TimescaleDB is a great choice. If you work with large unstructured time-series datasets, ReductStore is the optimal solution. For businesses handling both types of data, integrating the two databases can offer a balanced, efficient, and scalable approach to time-series data management.&lt;/p&gt;




&lt;p&gt;If you have any questions or comments, feel free to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community Forum&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>database</category>
      <category>comparison</category>
    </item>
    <item>
      <title>ReductStore vs. MongoDB: Which One is Right for Your Data?</title>
      <dc:creator>AnthonyCvn</dc:creator>
      <pubDate>Fri, 21 Feb 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/reductstore-vs-mongodb-which-one-is-right-for-your-data-349d</link>
      <guid>https://dev.to/reductstore/reductstore-vs-mongodb-which-one-is-right-for-your-data-349d</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F496zmu3zjv9zul34aem0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F496zmu3zjv9zul34aem0.png" alt="ReductStore and MongoDB Comparison" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the rapid expansion of data-driven applications, choosing the right database for your workload has never been more crucial. As data complexity increases, so does the number of specialized solutions. ReductStore, as we've covered before, is a powerful &lt;a href="https://dev.to/anthonycvn/alternative-to-mongodb-for-blob-data-1l45-temp-slug-2170766"&gt;&lt;strong&gt;alternative for handling time series unstructured data&lt;/strong&gt;&lt;/a&gt;, but it's not the only player in the space. MongoDB, one of the most widely used NoSQL databases, also offers an effective solution for managing large-scale data. However, each has its own key areas of strength. In this article, we'll break down the differences between ReductStore and MongoDB and help you determine which is best suited for your needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding MongoDB: The NoSQL Powerhouse&lt;a href="https://www.reduct.store/blog/mongodb-reductstore#understanding-mongodb-the-nosql-powerhouse" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;MongoDB is a document-based NoSQL database designed for flexibility and scalability. Unlike relational databases, it doesn't enforce a strict schema, making it ideal for applications where data structures may evolve over time.&lt;/p&gt;

&lt;p&gt;One of MongoDB's biggest advantages is its JSON-like document model (BSON), which allows developers to store and retrieve complex, hierarchical data efficiently. Combined with horizontal scaling via sharding, MongoDB can handle massive datasets across multiple distributed nodes. This makes it a preferred choice for applications that demand real-time performance, fast read/write capabilities, and scalability.&lt;/p&gt;
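
&lt;p&gt;To make the document model concrete, here is a small pure-Python sketch (no pymongo required; the document fields are invented) of a JSON-like document and a highly simplified find()-style match on dotted field paths:&lt;/p&gt;

```python
# A JSON-like document as MongoDB would store it (BSON adds binary types
# and richer numerics on top of this structure).
device = {
    "device_id": "sensor-42",
    "location": {"site": "plant-a", "line": 3},
    "readings": [{"t": 1700000000, "temp_c": 71.5}],
    "tags": ["hvac", "critical"],
}

def matches(doc, query):
    """Tiny sketch of a find()-style filter: every key in the query must
    equal the (possibly dotted) field in the document."""
    for key, expected in query.items():
        value = doc
        for part in key.split("."):
            value = value.get(part) if isinstance(value, dict) else None
        if value != expected:
            return False
    return True

print(matches(device, {"location.site": "plant-a"}))  # True
print(matches(device, {"location.line": 4}))          # False
```

&lt;p&gt;Because nothing enforces a schema, a second device document could add or omit fields freely, which is exactly the flexibility that makes MongoDB attractive for evolving applications.&lt;/p&gt;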

&lt;h3&gt;
  
  
  Key Features of MongoDB&lt;a href="https://www.reduct.store/blog/mongodb-reductstore#key-features-of-mongodb" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Schema Flexibility:&lt;/strong&gt; MongoDB allows users to store data without a fixed schema, making it highly flexible for modern applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Indexing &amp;amp; Query Performance:&lt;/strong&gt; Supports powerful indexing options, including single-field, compound, multikey, geospatial, and text indexes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Replication &amp;amp; High Availability:&lt;/strong&gt; Replicates data across multiple servers to ensure redundancy and reliability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Horizontal Scaling:&lt;/strong&gt; Data can be sharded across multiple servers for performance optimization.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Aggregation Framework:&lt;/strong&gt; Enables complex data transformations using MongoDB Query Language and SQL-style queries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Built-in Load Balancing:&lt;/strong&gt; Ensures smooth performance even with high-volume transactions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For applications that involve real-time analytics, dynamic content management, or large-scale web and mobile applications, MongoDB's high write throughput and indexing capabilities make it a strong choice. However, while MongoDB can manage time-series data, it's not inherently optimized for binary large objects (BLOBs). GridFS mitigates this by handling files above 16 MB, but it is slower and cannot &lt;a href="https://www.reduct.store/solutions/cloud" rel="noopener noreferrer"&gt;&lt;strong&gt;optimize storage costs by storing blobs in a commodity cloud storage solution&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  ReductStore: The Specialist for Time Series Unstructured Data&lt;a href="https://www.reduct.store/blog/mongodb-reductstore#reductstore-the-specialist-for-time-series-unstructured-data" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Now that we've covered the basics of MongoDB, let's turn to &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;. While MongoDB is highly versatile, ReductStore is built specifically for time series unstructured data, making it a stronger choice for industrial IoT, computer vision, and edge computing applications. Unlike MongoDB, which is better suited to storing documents, ReductStore stores time-ordered binary data in a lightweight, object-store-based structure. This makes it a natural fit for machine-generated data, including sensor readings, video feeds, or other large-scale unstructured datasets.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features of ReductStore&lt;a href="https://www.reduct.store/blog/mongodb-reductstore#key-features-of-reductstore" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Optimized for Time-Series Data&lt;/strong&gt; : Purpose-built for handling time series unstructured data in a compact format.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High-Speed Data Ingestion&lt;/strong&gt; : Supports extremely fast write speeds for records larger than a few KB, making it ideal for data acquisition (DAQ) systems (e.g., vibration sensors, cameras, log files, etc.).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-Time FIFO Quota System&lt;/strong&gt; : Ensures storage is efficiently managed, preventing storage overflow on edge devices, while retaining the most recent and necessary data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batching &amp;amp; Low Latency&lt;/strong&gt; : Reduces network overhead for high-latency environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficient Storage Management&lt;/strong&gt; : In its &lt;a href="https://www.reduct.store/solutions/cloud" rel="noopener noreferrer"&gt;&lt;strong&gt;cloud solution&lt;/strong&gt;&lt;/a&gt;, ReductStore can leverage low-cost blob storage to reduce costs, storing large binary objects (BLOBs) in a scalable and efficient way.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One of ReductStore's key advantages is its real-time FIFO (First-In, First-Out) quota system based on storage volume, which ensures that older data is automatically replaced as needed. This makes it highly efficient for edge computing, where storage resources are often limited.&lt;/p&gt;
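&lt;p&gt;The volume-based FIFO behavior can be sketched in a few lines of plain Python. This is a conceptual model of the idea, not ReductStore's actual implementation:&lt;/p&gt;

```python
from collections import deque

class FifoQuota:
    """Toy model of a volume-based FIFO quota: once the total stored
    size exceeds the quota, the oldest records are evicted first."""

    def __init__(self, quota_bytes):
        self.quota = quota_bytes
        self.records = deque()  # (timestamp, size) pairs, oldest on the left
        self.total = 0

    def write(self, ts, size):
        self.records.append((ts, size))
        self.total += size
        # Evict the oldest records until we are back under the quota.
        while self.total > self.quota:
            _, old_size = self.records.popleft()
            self.total -= old_size

store = FifoQuota(quota_bytes=100)
for ts in range(5):
    store.write(ts, 40)  # five 40-byte records against a 100-byte quota

print([ts for ts, _ in store.records])  # only the newest records remain: [3, 4]
```

&lt;p&gt;The key point is that eviction is driven by storage volume rather than record age or count, which is what keeps an edge device's disk from overflowing regardless of how large individual records are.&lt;/p&gt;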

&lt;p&gt;Another key feature is data replication between edge devices and the cloud (or across edge devices) using label-based filtering. Each data record can be labeled with key-value pairs (such as AI labels), allowing replication tasks to automatically select and transfer only the relevant data. This approach provides flexibility to optimize bandwidth and storage costs, or to ensure that critical data is always accessible.&lt;/p&gt;
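&lt;p&gt;Label-based selection can be pictured as a simple predicate over each record's key-value labels. Again, this is a conceptual sketch, and the label names are invented for the example:&lt;/p&gt;

```python
# Each record carries key-value labels; a replication task forwards only
# records whose labels match the configured conditions.
records = [
    {"id": 1, "labels": {"anomaly": "true", "camera": "front"}},
    {"id": 2, "labels": {"anomaly": "false", "camera": "front"}},
    {"id": 3, "labels": {"anomaly": "true", "camera": "rear"}},
]

def should_replicate(record, conditions):
    """Replicate only if every configured label condition matches."""
    return all(record["labels"].get(k) == v for k, v in conditions.items())

# Sync only records an AI model has flagged as anomalous.
conditions = {"anomaly": "true"}
to_sync = [r["id"] for r in records if should_replicate(r, conditions)]
print(to_sync)  # [1, 3]
```

&lt;p&gt;Filtering at the source like this is what lets a deployment trade off bandwidth against completeness: everything stays on the edge device under its FIFO quota, while only the labeled subset is pushed upstream.&lt;/p&gt;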

&lt;p&gt;In addition, ReductStore optimizes data retrieval and storage efficiency by adapting to different data sizes and network conditions. For large records, it supports chunked downloads to avoid overwhelming system memory. In high-latency environments or when handling many small records, its batching capabilities minimize overhead and ensure faster writes and retrievals. Finally, rather than relying on traditional query mechanisms, ReductStore uses an iterative approach to efficiently navigate and retrieve unstructured time series data with minimal resource consumption.&lt;/p&gt;
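&lt;p&gt;The chunked-download idea is a generic streaming pattern, not something specific to ReductStore's API; it can be sketched as a generator that yields fixed-size pieces so a multi-gigabyte record never has to sit in memory at once:&lt;/p&gt;

```python
import io

def iter_chunks(stream, chunk_size=8192):
    """Yield a large record in fixed-size chunks instead of loading it whole."""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Example: a 20 KB "record" read in 8 KB chunks.
record = io.BytesIO(b"x" * 20480)
sizes = [len(c) for c in iter_chunks(record)]
print(sizes)  # [8192, 8192, 4096]
```

&lt;p&gt;Batching is the inverse trade-off: many small records are grouped into one request so that per-request latency is paid once per batch rather than once per record.&lt;/p&gt;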

&lt;h2&gt;
  
  
  Key Differences: ReductStore vs. MongoDB&lt;a href="https://www.reduct.store/blog/mongodb-reductstore#key-differences-reductstore-vs-mongodb" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Feature&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;MongoDB&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Data Model&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;NoSQL Document Store (BSON)&lt;/td&gt;
&lt;td&gt;Time-Series Object Storage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Best For&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Scalable web applications&lt;/td&gt;
&lt;td&gt;Fast data acquisition systems&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Schema&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Flexible, dynamic schema&lt;/td&gt;
&lt;td&gt;Flat storage structure&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Transport Protocol&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;TCP&lt;/td&gt;
&lt;td&gt;HTTP/1, HTTP/2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Query Language&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;MongoDB Query Language&lt;/td&gt;
&lt;td&gt;Conditional Query Language&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Horizontal scaling with sharding&lt;/td&gt;
&lt;td&gt;Optimized for edge computing and centralized cloud storage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Performance&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High read/write throughput for document storage&lt;/td&gt;
&lt;td&gt;High-speed ingestion and retrieval for large binary data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Ideal Use Cases&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Real-time analytics, content management, mobile/web apps, and Generative AI&lt;/td&gt;
&lt;td&gt;Industrial IoT, vibration / acoustic sensors, predictive maintenance, robotics, computer vision&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Real-World Applications and Use Cases&lt;a href="https://www.reduct.store/blog/mongodb-reductstore#real-world-applications-and-use-cases" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  When to Use MongoDB&lt;a href="https://www.reduct.store/blog/mongodb-reductstore#when-to-use-mongodb" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;MongoDB is ideally suited for web and mobile applications requiring a dynamic schema, high-speed queries, and distributed data storage. Other use cases include e-commerce platforms, which benefit from its flexible schema and fast search capabilities, and big data and analytics, where its aggregation framework makes it straightforward to store and analyze structured or semi-structured data. It is also a good fit for content management systems and generative AI, where its JSON-like BSON format allows efficient retrieval of user-generated content, blogs, or media files, as well as efficient creation of new content of this type.&lt;/p&gt;

&lt;h3&gt;
  
  
  When to Use ReductStore&lt;a href="https://www.reduct.store/blog/mongodb-reductstore#when-to-use-reductstore" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;ReductStore is an ideal solution for time series unstructured data from any source, including industrial IoT, edge computing devices, manufacturing machine or vibration sensors, computer vision, robotics, GPS, log files, and more.&lt;/p&gt;

&lt;p&gt;If your use case involves large amounts of sensor data, especially records with large file sizes, ReductStore offers the best performance and storage efficiency. Since disk space is often at a premium in edge and cloud locations, FIFO quotas and cost-effective blob storage are additional benefits. And because ReductStore's ingestion throughput is so high, you need fewer disks or edge devices in parallel to ingest data at speed.&lt;/p&gt;

&lt;p&gt;The ability to leverage cheaper blob storage in the cloud without taking a significant performance hit means that ReductStore offers the biggest bang for your buck. AI/ML applications are another area in which ReductStore excels: learning models that rely on time-series data can greatly benefit from its optimized ingestion and retrieval speeds.&lt;/p&gt;

&lt;p&gt;Robotics is another potential use case, where video files, positional data, logs, and sensor readings of various sizes need to be processed quickly. For larger file sizes, such as video and images, vibration sensor data, or audio, ReductStore's read and write efficiency is hard to beat.&lt;/p&gt;

&lt;h2&gt;
  
  
  Which One Should You Choose?&lt;a href="https://www.reduct.store/blog/mongodb-reductstore#which-one-should-you-choose" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;If you're looking for a general-purpose NoSQL database with flexibility, scalability, and strong query support, MongoDB is an excellent choice. It works well for applications that require structured yet schema-flexible data storage, such as e-commerce platforms, social media applications, and analytics dashboards.&lt;/p&gt;

&lt;p&gt;However, if your application deals primarily with time series unstructured data, especially in an AI-driven environment, ReductStore offers a more specialized, high-performance solution. Its FIFO quota system, efficient batching, and low-latency retrieval make it ideal for managing large-scale sensor data, image processing, and robotics applications.&lt;/p&gt;

&lt;p&gt;Ultimately, your choice will depend on the type of data you are working with and the performance requirements of your system. For flexible document storage, go with MongoDB. For efficient time series unstructured data storage, ReductStore is the better fit.&lt;/p&gt;

&lt;p&gt;By understanding the strengths and limitations of both, you can make an informed decision and ensure your data management strategy aligns with your application's needs.&lt;/p&gt;




&lt;p&gt;If you have any questions or comments, feel free to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community Forum&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>database</category>
      <category>comparison</category>
    </item>
  </channel>
</rss>
