<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alexey Timin</title>
    <description>The latest articles on DEV Community by Alexey Timin (@atimin).</description>
    <link>https://dev.to/atimin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F884066%2F98fa5b0f-55af-41f1-b9c8-f6700628ab4d.png</url>
      <title>DEV Community: Alexey Timin</title>
      <link>https://dev.to/atimin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/atimin"/>
    <language>en</language>
    <item>
      <title>ReductStore v1.19: Open Data Backbone for Robotics and ROS</title>
      <dc:creator>Alexey Timin</dc:creator>
      <pubDate>Wed, 08 Apr 2026 00:00:00 +0000</pubDate>
      <link>https://dev.to/atimin/reductstore-v119-open-data-backbone-for-robotics-and-ros-1efk</link>
      <guid>https://dev.to/atimin/reductstore-v119-open-data-backbone-for-robotics-and-ros-1efk</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcfkuziyyn90yrcvntry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcfkuziyyn90yrcvntry.png" alt="ReductStore v1.19.0 Released" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ReductStore &lt;a href="https://github.com/reductstore/reductstore/releases/tag/v1.19.0" rel="noopener noreferrer"&gt;&lt;strong&gt;1.19.0&lt;/strong&gt;&lt;/a&gt; is now available. This release extends the storage model for robotics and telemetry workloads and introduces new integration points for ROS and Zenoh.&lt;/p&gt;

&lt;p&gt;To download the latest release, visit the &lt;a href="https://www.reduct.store/download" rel="noopener noreferrer"&gt;&lt;strong&gt;Download Page&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;What's new in 1.19.0?&lt;/h2&gt;

&lt;p&gt;The first major change in v1.19 is licensing. &lt;a href="https://dev.to/anthonycvn/reductstore-core-adopts-apache-20-license-4j0k-temp-slug-3253510"&gt;&lt;strong&gt;ReductStore Core is now open source under Apache 2.0&lt;/strong&gt;&lt;/a&gt;, which makes the core database easier to evaluate, integrate, and extend in production systems.&lt;/p&gt;

&lt;p&gt;The second major change is the data model. ReductStore now supports hierarchical entry names, similar to ROS topics, and adds entry attachments for schemas and metadata. This makes it possible to represent structured robotics data without flattening topic hierarchies or moving context into external systems.&lt;/p&gt;

&lt;p&gt;The release also introduces a &lt;a href="https://www.reduct.store/docs/integrations/zenoh" rel="noopener noreferrer"&gt;&lt;strong&gt;native Zenoh API&lt;/strong&gt;&lt;/a&gt; for direct ingestion and querying over Zenoh, and &lt;a href="https://www.reduct.store/docs/reduct-bridge" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductBridge&lt;/strong&gt;&lt;/a&gt; for ROS1 and ROS2 integration.&lt;/p&gt;

&lt;h3&gt;Nested Data Model with Attachments&lt;/h3&gt;

&lt;p&gt;The new hierarchical data model lets you organize data in a path-based structure, similar to ROS topics, Zenoh key expressions, or MQTT topics. Instead of relying on a flat namespace, ReductStore can now store data in a form that matches the structure used by upstream systems.&lt;/p&gt;
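&lt;p&gt;To make the mapping concrete, here is a small Python sketch that normalizes a ROS topic name into a hierarchical entry path. The helper name is hypothetical and written for this article; it is not part of the ReductStore client API:&lt;/p&gt;

```python
# Hypothetical sketch: normalize a ROS topic name into a hierarchical
# entry path. Illustrative only; not part of the official client API.

def topic_to_entry_path(topic: str) -> str:
    """Map a ROS topic such as '/factory/line1/camera' to the
    entry path 'factory/line1/camera'."""
    segments = [s for s in topic.split("/") if s]
    if not segments:
        raise ValueError("empty topic name")
    return "/".join(segments)

print(topic_to_entry_path("/factory/line1/camera"))  # factory/line1/camera
```

&lt;p&gt;The resulting path can then be used directly as an entry name, so the stored hierarchy mirrors the topic tree of the upstream system.&lt;/p&gt;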

&lt;p&gt;Each entry can also include attachments for schemas and metadata. These attachments preserve the context required by downstream tooling without changing the record payload itself. For example, the &lt;a href="https://www.reduct.store/docs/extensions/official/ros-ext" rel="noopener noreferrer"&gt;&lt;strong&gt;ROS Extension&lt;/strong&gt;&lt;/a&gt; can use them to decode serialized ROS messages and export them to MCAP files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk616iksoj3aiertofuq5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk616iksoj3aiertofuq5.png" alt="Web Console with Nested Data Model" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Native Zenoh API&lt;/h3&gt;

&lt;p&gt;Zenoh is increasingly used in robotics and edge environments as a low-overhead protocol for distributed data exchange. With the &lt;a href="https://www.reduct.store/docs/integrations/zenoh" rel="noopener noreferrer"&gt;&lt;strong&gt;native Zenoh API&lt;/strong&gt;&lt;/a&gt;, ReductStore can participate directly in Zenoh-based systems without requiring an additional bridge or adapter.&lt;/p&gt;

&lt;p&gt;You can start ReductStore with the Zenoh API enabled using a minimal Docker configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pull reduct/store:v1.19.0docker run 
 &lt;span class="nt"&gt;--env&lt;/span&gt; &lt;span class="s2"&gt;"RS_ZENOH_ENABLED=ON"&lt;/span&gt; 
 &lt;span class="nt"&gt;--env&lt;/span&gt; &lt;span class="s2"&gt;"RS_ZENOH_CONFIG={}"&lt;/span&gt; 
 &lt;span class="nt"&gt;--env&lt;/span&gt; &lt;span class="s2"&gt;"RS_ZENOH_SUB_KEYEXPRS=**"&lt;/span&gt; 
 &lt;span class="nt"&gt;-p&lt;/span&gt; 8383:8383 &lt;span class="nt"&gt;-p&lt;/span&gt; 36597:36597 &lt;span class="nt"&gt;-p&lt;/span&gt; 7446:7446 reduct/store:v1.19.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once enabled, the API allows you to write data to ReductStore directly over Zenoh. If a sample includes a JSON attachment, ReductStore stores it as record labels:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;zenoh&lt;/span&gt;

&lt;span class="n"&gt;KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;factory/line1/camera&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;PAYLOAD&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;b&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;binary payload&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;LABELS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;robot&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;alpha&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ok&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;zenoh&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;zenoh&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Config&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;put&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;PAYLOAD&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;attachment&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LABELS&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also query data through Zenoh. For conditional queries, pass a &lt;code&gt;when&lt;/code&gt; expression in the query attachment. If you want all matching records returned individually, use &lt;code&gt;zenoh.ConsolidationMode.NONE&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;zenoh&lt;/span&gt;

&lt;span class="n"&gt;KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;factory/line1/when-query&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;CONSOLIDATION&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;zenoh&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ConsolidationMode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NONE&lt;/span&gt;
&lt;span class="n"&gt;attachment&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;when&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;amp;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$eq&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ok&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}}}).&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;zenoh&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;zenoh&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Config&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;replies&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="n"&gt;reply&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;reply&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;5.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;attachment&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;attachment&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;consolidation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;CONSOLIDATION&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;reply&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ok&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;reply&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;replies&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;reply&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_bytes&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Data ingested through Zenoh is stored in ReductStore like data written through the HTTP API, so it remains available for querying, replication, and downstream tools and extensions.&lt;/p&gt;

&lt;h3&gt;ReductBridge for ROS Integration&lt;/h3&gt;

&lt;p&gt;This release also introduces &lt;a href="https://github.com/reductstore/reduct-bridge" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductBridge&lt;/strong&gt;&lt;/a&gt;, a new project for integrating ReductStore with ROS1 and ROS2. ReductBridge automatically labels ROS messages and stores related schemas and metadata as attachments.&lt;/p&gt;

&lt;p&gt;This makes raw ROS payloads usable after ingestion: they can be decoded later with the ROS Extension or exported to MCAP files. ROS support is the first target, but the same pattern can be extended to operating system metrics, logs, and other telemetry to build a unified storage layer with consistent labeling and metadata.&lt;/p&gt;

&lt;h2&gt;What’s Next&lt;/h2&gt;

&lt;p&gt;The next area of work is data compression and storage efficiency, especially for robotics workloads where payload sizes and formats vary significantly. We plan to introduce backend compression optimized for ReductStore's storage model, where many records are packed into a single block.&lt;/p&gt;

&lt;p&gt;We also plan to store metadata in Parquet format. This should improve query efficiency, make metadata accessible without scanning the underlying data blocks, and simplify integration with analytics and data lake tooling.&lt;/p&gt;

&lt;h2&gt;Compatibility and Migration&lt;/h2&gt;

&lt;p&gt;Starting with v1.19, ReductStore Docker images no longer run as &lt;code&gt;root&lt;/code&gt; for security reasons. If you deploy with Docker, make sure the mounted data directory is writable by UID/GID &lt;code&gt;10001:10001&lt;/code&gt; before upgrading.&lt;/p&gt;
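&lt;p&gt;For a typical bind-mount deployment, the migration can be sketched as follows. The host path is a placeholder, and the container's default data path of &lt;code&gt;/data&lt;/code&gt; is assumed; adjust both to your environment:&lt;/p&gt;

```shell
# Hypothetical paths; adjust to your deployment. ReductStore v1.19+
# images run as UID/GID 10001:10001, so the mounted data directory
# must be writable by that user before the upgraded container starts.
sudo chown -R 10001:10001 /path/to/reduct-data

docker run -d \
 -v /path/to/reduct-data:/data \
 -p 8383:8383 reduct/store:v1.19.0
```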




&lt;p&gt;If you have questions or feedback, join the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community&lt;/strong&gt;&lt;/a&gt; forum.&lt;/p&gt;

&lt;p&gt;Thanks for using &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>news</category>
      <category>reductstore</category>
      <category>robotics</category>
      <category>ros</category>
    </item>
    <item>
      <title>ReductStore Core Adopts Apache 2.0 License</title>
      <dc:creator>Alexey Timin</dc:creator>
      <pubDate>Tue, 17 Mar 2026 00:00:00 +0000</pubDate>
      <link>https://dev.to/atimin/reductstore-core-adopts-apache-20-license-2lec</link>
      <guid>https://dev.to/atimin/reductstore-core-adopts-apache-20-license-2lec</guid>
      <description>&lt;p&gt;Hello, everyone!&lt;/p&gt;

&lt;p&gt;Starting with &lt;strong&gt;ReductStore v1.19&lt;/strong&gt;, we are changing how we license and package the project. From this version onward, ReductStore is split into two editions: &lt;strong&gt;&lt;a href="https://github.com/reductstore/reductstore" rel="noopener noreferrer"&gt;ReductStore Core&lt;/a&gt;&lt;/strong&gt; and &lt;strong&gt;ReductStore Pro&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Previously, ReductStore shipped as a single edition under &lt;strong&gt;BUSL-1.1 (Business Source License)&lt;/strong&gt;. With v1.19, the core database is open source under Apache 2.0, while Pro continues under commercial terms.&lt;/p&gt;

&lt;h2&gt;What's changing&lt;/h2&gt;

&lt;p&gt;With this release, &lt;strong&gt;&lt;a href="https://github.com/reductstore/reductstore" rel="noopener noreferrer"&gt;ReductStore Core&lt;/a&gt;&lt;/strong&gt; becomes the open-source foundation of the project and is now available under the &lt;strong&gt;Apache License 2.0&lt;/strong&gt;. It includes the database server and the core functionality that most users rely on for edge deployments and everyday data workflows.&lt;/p&gt;

&lt;p&gt;Alongside Core, we will continue offering &lt;strong&gt;ReductStore Pro&lt;/strong&gt;, which is distributed under a &lt;strong&gt;commercial license&lt;/strong&gt;. The Pro edition provides additional capabilities and support options for teams with more advanced requirements. See &lt;strong&gt;&lt;a href="https://www.reduct.store/pricing" rel="noopener noreferrer"&gt;Pricing&lt;/a&gt;&lt;/strong&gt; for a clear comparison of what is included in each edition.&lt;/p&gt;

&lt;h2&gt;Why we're doing this&lt;/h2&gt;

&lt;p&gt;We want to better support open source communities and make it easier to contribute to, integrate with, and build on top of &lt;strong&gt;&lt;a href="https://github.com/reductstore/reductstore" rel="noopener noreferrer"&gt;ReductStore Core&lt;/a&gt;&lt;/strong&gt;. Just as importantly, we want users to be able to run ReductStore on the edge without licensing restrictions for the core database use cases.&lt;/p&gt;

&lt;p&gt;At the same time, we want to keep a sustainable commercial model for companies that build more complex cloud setups or need advanced support and functionality around robotics and IIoT data formats. That is the role of ReductStore Pro.&lt;/p&gt;

&lt;h2&gt;What this means for you&lt;/h2&gt;

&lt;p&gt;If you use the core database functionality, you can freely adopt, integrate, and distribute &lt;strong&gt;&lt;a href="https://github.com/reductstore/reductstore" rel="noopener noreferrer"&gt;ReductStore Core&lt;/a&gt;&lt;/strong&gt; under the Apache 2.0 license. Existing workflows and upgrades remain straightforward.&lt;/p&gt;

&lt;p&gt;For teams that rely on advanced functionality or require commercial support, &lt;strong&gt;ReductStore Pro&lt;/strong&gt; remains available as the supported commercial edition. We will clearly document which features belong to Core and which are part of Pro in the documentation and release notes.&lt;/p&gt;

&lt;p&gt;If you have questions or want to discuss which edition fits your use case, please reach out on the &lt;strong&gt;&lt;a href="https://community.reduct.store/" rel="noopener noreferrer"&gt;ReductStore Community&lt;/a&gt;&lt;/strong&gt; forum or via our &lt;strong&gt;&lt;a href="https://www.reduct.store/contact" rel="noopener noreferrer"&gt;contact page&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>news</category>
      <category>reductstore</category>
      <category>opensource</category>
    </item>
    <item>
      <title>ReductStore v1.18.0 Released with Resilient Deployments and the Multi-entry API</title>
      <dc:creator>Alexey Timin</dc:creator>
      <pubDate>Thu, 05 Feb 2026 00:00:00 +0000</pubDate>
      <link>https://dev.to/atimin/reductstore-v1180-released-with-resilient-deployments-and-the-multi-entry-api-3coh</link>
      <guid>https://dev.to/atimin/reductstore-v1180-released-with-resilient-deployments-and-the-multi-entry-api-3coh</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48hovxrs663gjezd39is.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48hovxrs663gjezd39is.png" alt="ReductStore v1.18.0 Released" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are pleased to announce the release of the latest minor version of &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;, &lt;a href="https://github.com/reductstore/reductstore/releases/tag/v1.18.0" rel="noopener noreferrer"&gt;&lt;strong&gt;1.18.0&lt;/strong&gt;&lt;/a&gt;. ReductStore is a high-performance storage and streaming solution designed for storing and managing large volumes of historical data.&lt;/p&gt;

&lt;p&gt;To download the latest released version, please visit our &lt;a href="https://www.reduct.store/download" rel="noopener noreferrer"&gt;&lt;strong&gt;Download Page&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;What's new in 1.18.0?&lt;/h2&gt;

&lt;p&gt;In this release, we have added support for resilient deployments to build a more robust, fault-tolerant, and highly available ReductStore cluster. Now, you can implement hot-standby configurations, automatic failover, and seamless recovery to ensure uninterrupted service even in the face of hardware failures or network issues. You can also elastically scale read-only nodes to handle increased read workloads without impacting the performance of the primary nodes.&lt;/p&gt;

&lt;p&gt;Additionally, we have introduced a new Multi-entry API that allows you to efficiently manage and query multiple entries in a single request. This API is designed to optimize performance and reduce latency when working with large datasets, making it easier to retrieve and manipulate data in bulk.&lt;/p&gt;

&lt;h2&gt;Resilient Deployments&lt;/h2&gt;

&lt;p&gt;In ReductStore v1.18.0, resilient deployments are now a first-class feature. Using the &lt;code&gt;RS_INSTANCE_ROLE&lt;/code&gt; setting, you can build topologies that keep your ingestion endpoint available during node failures and scale reads independently from writes.&lt;/p&gt;

&lt;h3&gt;Hot standby (active-passive) for write availability&lt;/h3&gt;

&lt;p&gt;Run two nodes against the same backend (a shared filesystem or the same remote backend). Only one node is active at a time: the active node holds a lock file and refreshes it, while the standby waits and takes over when the lock becomes stale.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8f1f34bkyntt2anxr01.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8f1f34bkyntt2anxr01.webp" alt="ReductStore Hot standby deployment" width="800" height="793"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set &lt;code&gt;RS_INSTANCE_ROLE=PRIMARY&lt;/code&gt; for the active node and &lt;code&gt;RS_INSTANCE_ROLE=SECONDARY&lt;/code&gt; for the standby.&lt;/li&gt;
&lt;li&gt;Put both nodes behind a single virtual endpoint (load balancer / reverse proxy).&lt;/li&gt;
&lt;li&gt;Route traffic only to the node that returns &lt;code&gt;200 OK&lt;/code&gt; on &lt;code&gt;GET /api/v1/ready&lt;/code&gt; (the inactive node returns &lt;code&gt;503&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Tune failover behavior with &lt;code&gt;RS_LOCK_FILE_TTL&lt;/code&gt; (how long the standby waits) and &lt;code&gt;RS_LOCK_FILE_TIMEOUT&lt;/code&gt; (how long a node waits to acquire the lock).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To avoid split-brain writes, don’t run both nodes in &lt;code&gt;STANDALONE&lt;/code&gt; mode against the same dataset.&lt;/p&gt;
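&lt;p&gt;The takeover condition can be sketched in a few lines of Python. This is a simplified illustration of the lock-file semantics described above, not ReductStore's actual implementation:&lt;/p&gt;

```python
# Simplified sketch of the hot-standby takeover condition: the standby
# may take over once the active node has stopped refreshing the lock
# file for longer than the TTL (cf. RS_LOCK_FILE_TTL).

def lock_is_stale(last_refresh: float, now: float, ttl: float) -> bool:
    """Return True when the lock has not been refreshed within the TTL."""
    return (now - last_refresh) > ttl

# Active node refreshed the lock 3 s ago with a 10 s TTL: keep waiting.
print(lock_is_stale(last_refresh=100.0, now=103.0, ttl=10.0))  # False
# No refresh for 15 s: the lock is stale and the standby takes over.
print(lock_is_stale(last_refresh=100.0, now=115.0, ttl=10.0))  # True
```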

&lt;h3&gt;Read-only replicas for read scaling&lt;/h3&gt;

&lt;p&gt;Add one or more &lt;code&gt;REPLICA&lt;/code&gt; nodes to serve queries from the same dataset. Replicas never write and periodically refresh bucket metadata and indexes from the backend, so newly written data may appear with a small delay.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytiv5b84kh0cyzy1clox.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytiv5b84kh0cyzy1clox.webp" alt="ReductStore Read-only replicas" width="800" height="647"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Route writes to a dedicated ingestion node (or the active node in a hot-standby pair).&lt;/li&gt;
&lt;li&gt;Route reads to replicas to scale query workloads horizontally.&lt;/li&gt;
&lt;li&gt;Tune staleness with &lt;code&gt;RS_ENGINE_REPLICA_UPDATE_INTERVAL&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
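&lt;p&gt;A replica node can be sketched as a minimal Docker command. The image tag, host port, and update interval below are illustrative, and the shared-backend configuration is omitted:&lt;/p&gt;

```shell
# Hypothetical replica node; values are illustrative. The replica
# serves reads only and periodically refreshes its view of the
# shared backend (here every 5 seconds).
docker run -d \
 --env "RS_INSTANCE_ROLE=REPLICA" \
 --env "RS_ENGINE_REPLICA_UPDATE_INTERVAL=5" \
 -p 8384:8383 reduct/store:v1.18.0
```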

&lt;p&gt;For an end-to-end walkthrough (including S3-based standalone, active-passive, and replicas), see the &lt;strong&gt;&lt;a href="https://www.reduct.store/docs/integrations/s3" rel="noopener noreferrer"&gt;S3 Backend&lt;/a&gt;&lt;/strong&gt; tutorial. For architecture options and operational notes, see the &lt;strong&gt;&lt;a href="https://www.reduct.store/docs/guides/disaster-recovery" rel="noopener noreferrer"&gt;Disaster Recovery&lt;/a&gt;&lt;/strong&gt; guide.&lt;/p&gt;

&lt;h2&gt;Multi-entry API&lt;/h2&gt;

&lt;p&gt;The new &lt;strong&gt;Multi-entry API&lt;/strong&gt; makes it possible to work with multiple entries in a single request. In practice, this is most useful for &lt;strong&gt;querying&lt;/strong&gt;: instead of running one query per sensor/stream and merging results on the client, you can request all the entries you need at once and process a single result stream (each returned record includes its &lt;code&gt;entry&lt;/code&gt; name).&lt;/p&gt;

&lt;p&gt;Here is a Python example using &lt;code&gt;reduct-py&lt;/code&gt; to query multiple entries in one call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import asynciofrom reduct import Clientasync def main() -&amp;gt; None: async with Client("http://localhost:8383", api_token="my-token") as client: bucket = await client.get_bucket("my-bucket") # Query multiple entries in a single request (since ReductStore v1.18). # You can mix exact names and wildcards. entries = ["sensor-*", "camera"] async for record in bucket.query( entries, start="2026-02-05T10:00:00Z", stop="2026-02-05T10:05:00Z", when={"&amp;amp;score": {"$gte": 10}}, ): payload = await record.read_all() print(record.entry, record.timestamp, len(payload))if __name__ == " __main__": asyncio.run(main())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;What’s Next&lt;/h2&gt;

&lt;p&gt;We’re already working on the next improvements to make ReductStore easier to integrate into real-world data pipelines:&lt;/p&gt;

&lt;h3&gt;Native Zenoh API&lt;/h3&gt;

&lt;p&gt;Zenoh is becoming a common choice for data exchange in distributed, edge-first systems (robotics, industrial IoT, and telemetry). In upcoming releases, we plan to add a &lt;strong&gt;native Zenoh API&lt;/strong&gt; so ReductStore can join Zenoh networks seamlessly.&lt;/p&gt;

&lt;p&gt;This will make it easier to ingest and serve data directly through Zenoh—without custom bridges—so your storage layer fits naturally into existing Zenoh-based deployments.&lt;/p&gt;

&lt;h3&gt;Entry attachments (metadata)&lt;/h3&gt;

&lt;p&gt;Today, labels work well for filtering and replication, but many projects also need structured metadata tied to an entry itself: data format, schema version, units, encoding, calibration details, and other context.&lt;/p&gt;

&lt;p&gt;We plan to introduce &lt;strong&gt;attachments for entries&lt;/strong&gt;, allowing you to store and retrieve this kind of metadata alongside your data streams, making datasets more self-describing and easier to consume across teams and tools.&lt;/p&gt;




&lt;p&gt;I hope you find these new features useful. If you have any questions or feedback, don’t hesitate to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community&lt;/strong&gt;&lt;/a&gt; forum.&lt;/p&gt;

&lt;p&gt;Thanks for using &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>news</category>
    </item>
    <item>
      <title>ReductStore v1.17.0 Released with Query Links and S3 Storage Backend Support</title>
      <dc:creator>Alexey Timin</dc:creator>
      <pubDate>Tue, 21 Oct 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/reductstore-v1170-released-with-query-links-and-s3-storage-backend-support-447j</link>
      <guid>https://dev.to/reductstore/reductstore-v1170-released-with-query-links-and-s3-storage-backend-support-447j</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbm0m00zkjfcvyzs7dt6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbm0m00zkjfcvyzs7dt6.png" alt="ReductStore v1.17.0 Released" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are pleased to announce the release of the latest minor version of &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;, &lt;a href="https://github.com/reductstore/reductstore/releases/tag/v1.17.0" rel="noopener noreferrer"&gt;&lt;strong&gt;1.17.0&lt;/strong&gt;&lt;/a&gt;. ReductStore is a high-performance storage and streaming solution designed for storing and managing large volumes of historical data.&lt;/p&gt;

&lt;p&gt;To download the latest released version, please visit our &lt;a href="https://www.reduct.store/download" rel="noopener noreferrer"&gt;&lt;strong&gt;Download Page&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's new in 1.17.0?&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_17_0-released#whats-new-in-1170" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;This release includes several new features and enhancements. First, there are query links for simplified data access. Second, there is support for S3-compatible storage backends.&lt;/p&gt;

&lt;p&gt;These new features enhance the usability and flexibility of ReductStore for various use cases in the cloud and on-premises environments and make it easier to share and access data stored in the database.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔗 Query Links for Data Access&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_17_0-released#-query-links-for-data-access" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;ReductStore now supports &lt;strong&gt;&lt;a href="https://www.reduct.store/docs/glossary#query-link" rel="noopener noreferrer"&gt;query links&lt;/a&gt;&lt;/strong&gt;, enabling users to generate temporary, public URLs for specific data records — without requiring authentication. This makes it easier to share datasets with &lt;strong&gt;external collaborators&lt;/strong&gt; , embed links into dashboards, or integrate with &lt;strong&gt;third-party systems&lt;/strong&gt; that need read-only access to specific data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l5ism0xyqfqobolt3px.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l5ism0xyqfqobolt3px.webp" alt="Generate Query Links in ReductStore Web Console" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can create query links directly from the &lt;strong&gt;Web Console&lt;/strong&gt; (or any SDKs):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;strong&gt;Data Browser&lt;/strong&gt; page and select a record you want to share.&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;“Share record”&lt;/strong&gt; icon in the action panel.&lt;/li&gt;
&lt;li&gt;Configure an &lt;strong&gt;expiration time&lt;/strong&gt; to automatically revoke access after a defined period.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once generated, anyone with the link can access the selected record via a simple HTTP(S) request — no access token required. The link only has access to the specific query for which it was created, along with the creator's permissions. This provides a secure and convenient way to expose selected data for collaboration and analysis.&lt;/p&gt;

&lt;h3&gt;
  
  
  ☁️ S3-Compatible Storage Backend&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_17_0-released#%EF%B8%8F-s3-compatible-storage-backend" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;ReductStore now supports &lt;strong&gt;S3-compatible storage backends&lt;/strong&gt; , allowing you to use &lt;strong&gt;object storage&lt;/strong&gt; instead of a local file system for your underlying data. This update brings greater flexibility and scalability for managing large datasets in the cloud.&lt;/p&gt;

&lt;p&gt;Previously, ReductStore supported only local disk storage, and users had to mount S3 buckets as local disks via FUSE drivers. With this release, ReductStore can now natively integrate with S3-compatible backends — no additional software or mounting is required.&lt;/p&gt;

&lt;p&gt;This feature is designed with performance and &lt;strong&gt;cost optimization&lt;/strong&gt; in mind. ReductStore uses a local disk cache layer to speed up read and write operations, while batching multiple records into a single data block to reduce storage and retrieval costs. This approach works especially well with cost-efficient AWS S3 storage classes such as &lt;strong&gt;S3 Standard-IA&lt;/strong&gt; or &lt;strong&gt;S3 Glacier&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To run ReductStore with an S3-compatible backend, use the following environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -p 8383:8383 \
 -e RS_REMOTE_BACKEND_TYPE=s3 \
 -e RS_REMOTE_BUCKET=&amp;lt;YOUR_S3_BUCKET_NAME&amp;gt; \
 -e RS_REMOTE_REGION=&amp;lt;YOUR_S3_REGION&amp;gt; \
 -e RS_REMOTE_ACCESS_KEY=&amp;lt;YOUR_S3_ACCESS_KEY_ID&amp;gt; \
 -e RS_REMOTE_SECRET_KEY=&amp;lt;YOUR_S3_SECRET_ACCESS_KEY&amp;gt; \ 
 -e RS_REMOTE_CACHE_PATH=/data/cache \
 -e RS_LICENSE_PATH=&amp;lt;PATH_TO_YOUR_LICENSE_FILE&amp;gt; \ 
 -v ${PWD}/data:/data/cache \
 reduct/store:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Read more about configuring S3-compatible storage backend in the &lt;a href="https://www.reduct.store/docs/configuration#remote-backend-settings" rel="noopener noreferrer"&gt;&lt;strong&gt;documentation&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;info&lt;/p&gt;

&lt;p&gt;This feature requires a commercial license. Please see the &lt;strong&gt;&lt;a href="https://www.reduct.store/pricing" rel="noopener noreferrer"&gt;Pricing page&lt;/a&gt;&lt;/strong&gt; for more details.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Next&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_17_0-released#whats-next" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We’re continuing to develop new features to make ReductStore even more powerful and user-friendly. Here’s a preview of what’s coming in the next releases:&lt;/p&gt;

&lt;h3&gt;
  
  
  📦 Multiple Entries in a Single Request&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_17_0-released#-multiple-entries-in-a-single-request" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Currently, each write or query request must target a &lt;strong&gt;single entry&lt;/strong&gt;. This can be limiting when dealing with &lt;strong&gt;multiple entries&lt;/strong&gt; or dynamic lists of entries in your applications.&lt;/p&gt;

&lt;p&gt;In upcoming versions, ReductStore will support &lt;strong&gt;batch operations&lt;/strong&gt; across multiple entries within a single API request. This improvement will simplify integrations and reduce overhead for large-scale data ingestion and querying workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔒 Read-Only Mode for ReductStore&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_17_0-released#-read-only-mode-for-reductstore" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Like most databases, ReductStore currently requires &lt;strong&gt;exclusive access&lt;/strong&gt; to its data directory while running. As a result, running multiple instances on the same dataset—for load balancing or high availability—is not yet possible.&lt;/p&gt;

&lt;p&gt;To address this, we’re introducing a &lt;strong&gt;read-only mode&lt;/strong&gt; that will allow one writer instance* and multiple reader instances to access the same dataset concurrently. This approach will enable &lt;strong&gt;scalable read operations&lt;/strong&gt; and &lt;strong&gt;improved availability&lt;/strong&gt; without adding the complexity of clustering or replication mechanisms.&lt;/p&gt;




&lt;p&gt;I hope you find those new features useful. If you have any questions or feedback, don’t hesitate to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community&lt;/strong&gt;&lt;/a&gt; forum.&lt;/p&gt;

&lt;p&gt;Thanks for using &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>news</category>
    </item>
    <item>
      <title>Building a Resilient ReductStore Deployment with NGINX</title>
      <dc:creator>Alexey Timin</dc:creator>
      <pubDate>Sat, 13 Sep 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/building-a-resilient-reductstore-deployment-with-nginx-59jb</link>
      <guid>https://dev.to/reductstore/building-a-resilient-reductstore-deployment-with-nginx-59jb</guid>
      <description>&lt;p&gt;If you’re collecting high-rate sensor or video data at the edge and need zero-downtime ingestion and fault-tolerant querying, an &lt;strong&gt;&lt;a href="https://www.reduct.store/docs/guides/disaster-recovery#active-active-setup" rel="noopener noreferrer"&gt;active–active ReductStore setup&lt;/a&gt;&lt;/strong&gt; fronted by NGINX is a clean, practical pattern.&lt;/p&gt;

&lt;p&gt;This tutorial walks you through the &lt;strong&gt;&lt;a href="https://github.com/reductstore/nginx-resilient-setup" rel="noopener noreferrer"&gt;reference implementation&lt;/a&gt;&lt;/strong&gt;, explains the architecture, and shows production-grade NGINX snippets you can adapt.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We’ll Build&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#what-well-build" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We’ll set up a &lt;strong&gt;ReductStore cluster&lt;/strong&gt; with NGINX as a reverse proxy, separating the &lt;strong&gt;ingress&lt;/strong&gt; and &lt;strong&gt;egress&lt;/strong&gt; layers. This architecture allows for independent scaling of write and read workloads, ensuring high availability and performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fno4ylirej4b8tpfrhctg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fno4ylirej4b8tpfrhctg.png" alt="NGINX Resilient Deployment" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Ingress layer&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#ingress-layer" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;ingress layer&lt;/strong&gt; handles all writes and replicates data to the egress layer. Its nodes may have limited storage capacity, while they need only to handle writes and replicate data to the &lt;strong&gt;egress&lt;/strong&gt; nodes. It can use high-rate storage like NVMe SSDs or even RAM disks, depending on your data volume.&lt;/p&gt;

&lt;h3&gt;
  
  
  Egress layer&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#egress-layer" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;egress layer&lt;/strong&gt; handles all reads and serves data to clients. Its nodes are optimized for read performance and can use larger, slower storage like HDDs or cloud object storage. Each egress node holds a complete copy of the dataset, allowing for high availability and load balancing.&lt;/p&gt;

&lt;h3&gt;
  
  
  NGINX Load Balancer&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#nginx-load-balancer" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;NGINX&lt;/strong&gt; load balancer sits in front of both layers, exposing two stable endpoints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;http://&amp;lt;host&amp;gt;/ingress&lt;/code&gt; → load balances writes across ingress nodes&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;http://&amp;lt;host&amp;gt;/egress&lt;/code&gt; → load balances reads across egress nodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This separation allows you to scale each layer independently and ensures that writes and reads are handled optimally.&lt;/p&gt;

&lt;p&gt;It is also important to note that NGINX must maintain &lt;strong&gt;session affinity&lt;/strong&gt; (stickiness) for both ingress and egress requests to ensure that queries remain consistent and throughput is maximized.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Start&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#quick-start" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Clone the example and bring it up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/reductstore/nginx-resilient-setup
&lt;span class="nb"&gt;cd &lt;/span&gt;nginx-resilient-setupdocker 
compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will start two ingress nodes and two egress nodes with NGINX in front, all configured to replicate data between them. Check the docker compose file for details on how the nodes are set up.&lt;/p&gt;

&lt;p&gt;Now we need to write some data and verify that we can read it back.&lt;a href="https://www.reduct.store/download" rel="noopener noreferrer"&gt;&lt;strong&gt;Install the &lt;code&gt;reduct-cli&lt;/code&gt; tool&lt;/strong&gt;&lt;/a&gt; if you haven't already, then run the following commands to set up aliases for the ingress and egress endpoints:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;reduct-cli &lt;span class="nb"&gt;alias &lt;/span&gt;add ingress &lt;span class="nt"&gt;-L&lt;/span&gt; http://localhost:80/ingress &lt;span class="nt"&gt;--token&lt;/span&gt; secret
reduct-cli &lt;span class="nb"&gt;alias &lt;/span&gt;add egress &lt;span class="nt"&gt;-L&lt;/span&gt; http://localhost:80/egress &lt;span class="nt"&gt;--token&lt;/span&gt; secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then copy some data from our &lt;a href="https://play.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;Demo Server&lt;/strong&gt;&lt;/a&gt; to the ingress layer and read it back from the egress layer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Add demo server alias to the CLI&lt;/span&gt;
reduct-cli &lt;span class="nb"&gt;alias &lt;/span&gt;add play &lt;span class="nt"&gt;-L&lt;/span&gt; https://play.reduct.store &lt;span class="nt"&gt;--token&lt;/span&gt; reductstore
&lt;span class="c"&gt;# Copy data from the demo server to ingress&lt;/span&gt;
reduct-cli &lt;span class="nb"&gt;cp &lt;/span&gt;play/datasets ingress/bucket-1 &lt;span class="nt"&gt;--limit&lt;/span&gt; 1000
&lt;span class="c"&gt;# Read/export via egress&lt;/span&gt;
reduct-cli &lt;span class="nb"&gt;cp &lt;/span&gt;egress/bucket-1 ./export_folder &lt;span class="nt"&gt;--limit&lt;/span&gt; 1000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  NGINX Configuration&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#nginx-configuration" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Below is a distilled config you can adapt for open-source NGINX:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Upstreams
# Separate pools for ingress (writes) and egress (reads)
upstream reduct_ingress {
    ip_hash;   # stickiness for writes
    server ingress-1:8383 max_fails=3 fail_timeout=10s;
    server ingress-2:8383 max_fails=3 fail_timeout=10s;
    keepalive 64;
}

upstream reduct_egress {
    ip_hash;   # stickiness for queries
    server egress-1:8383 max_fails=3 fail_timeout=10s;
    server egress-2:8383 max_fails=3 fail_timeout=10s;
    keepalive 64;
}

server {
    listen 80;
    server_name _;

    client_max_body_size 512m;
    proxy_read_timeout 600s;
    proxy_send_timeout 600s;

    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;

    location /ingress/ {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://reduct_ingress/;
    }

    location /egress/ {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://reduct_egress/;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the config above, we define two upstream blocks: &lt;code&gt;reduct_ingress&lt;/code&gt; for handling write requests and &lt;code&gt;reduct_egress&lt;/code&gt; for handling read requests. Each block uses &lt;code&gt;ip_hash&lt;/code&gt; to ensure session affinity, which is crucial for maintaining consistent writes and reads.&lt;/p&gt;

&lt;h2&gt;
  
  
  ReductStore Configuration Notes&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#reductstore-configuration-notes" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The configuration between nodes of each layer is identical. To reach the desired architecture, you need to provision buckets and replication tasks for ingress nodes and buckets only for egress nodes. See the configuration files in the example repo for details.&lt;/p&gt;

&lt;h2&gt;
  
  
  Failure Drills&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#failure-drills" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;When the setup is running, you can simulate failures to see how it behaves:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Kill an ingress node&lt;/strong&gt; → writes continue via other ingress nodes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kill an egress node&lt;/strong&gt; → reads continue via other egress nodes; replication resyncs when it’s back.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simulate total ingress outage&lt;/strong&gt; → analysis continues on egress; for true ingestion continuity, pair with a pilot-light instance in another location.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Runbook&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#runbook" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Here’s a high-level runbook for deploying this architecture in production:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Provision ingress + egress ReductStore nodes&lt;/li&gt;
&lt;li&gt;Create buckets and replication tasks&lt;/li&gt;
&lt;li&gt;Expose &lt;code&gt;/ingress&lt;/code&gt; and &lt;code&gt;/egress&lt;/code&gt; via NGINX with &lt;code&gt;ip_hash&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Test with demo dataset&lt;/li&gt;
&lt;li&gt;Validate reads from egress&lt;/li&gt;
&lt;li&gt;Run failure drills&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  References&lt;a href="https://www.reduct.store/blog/nginx-resilient-deployment#references" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/reductstore/nginx-resilient-setup" rel="noopener noreferrer"&gt;NGINX Resilient Setup Example&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.reduct.store/docs/guides/disaster-recovery" rel="noopener noreferrer"&gt;Disaster Recovery Guide&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;I hope you find this article interesting and useful. If you have any questions or feedback, don’t hesitate to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community&lt;/strong&gt;&lt;/a&gt; forum.&lt;/p&gt;

</description>
      <category>tutorials</category>
      <category>nginx</category>
    </item>
    <item>
      <title>ReductStore v1.16.0 Released With New Extensions and Context Replication</title>
      <dc:creator>Alexey Timin</dc:creator>
      <pubDate>Sat, 30 Aug 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/reductstore-v1160-released-with-new-extensions-and-context-replication-562c</link>
      <guid>https://dev.to/reductstore/reductstore-v1160-released-with-new-extensions-and-context-replication-562c</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F034g7npet9bq7xya5pln.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F034g7npet9bq7xya5pln.webp" alt="ReductStore v1.16.0 Released" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are pleased to announce the release of the latest minor version of &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;, &lt;a href="https://github.com/reductstore/reductstore/releases/tag/v1.16.0" rel="noopener noreferrer"&gt;&lt;strong&gt;1.16.0&lt;/strong&gt;&lt;/a&gt;. ReductStore is a high-performance storage and streaming solution designed for storing and managing large volumes of historical data.&lt;/p&gt;

&lt;p&gt;To download the latest released version, please visit our &lt;a href="https://www.reduct.store/download" rel="noopener noreferrer"&gt;&lt;strong&gt;Download Page&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's new in 1.16.0?&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_16_0-released#whats-new-in-1160" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The v1.16.0 release introduces two new extensions designed to enhance data workflows for robotics and columnar data, along with support for replicating context records during queries.&lt;/p&gt;

&lt;h3&gt;
  
  
  Querying and Replicating Data with Context&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_16_0-released#querying-and-replicating-data-with-context" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;We’ve extended the conditional query syntax with &lt;strong&gt;&lt;a href="https://www.reduct.store/docs/conditional-query/directives" rel="noopener noreferrer"&gt;directives&lt;/a&gt;&lt;/strong&gt; that allow users to modify global query behavior. The first directives introduced are &lt;code&gt;#ctx_before&lt;/code&gt; and &lt;code&gt;#ctx_after&lt;/code&gt;, which enable the inclusion of context records that occur before or after each matching record in a query.&lt;/p&gt;

&lt;p&gt;This feature is particularly useful when analyzing specific events or conditions in your data, as it helps provide a clearer picture of the surrounding context. For instance, you can use these directives to include records from a few seconds before or after an anomaly or incident, aiding in root cause analysis or pattern recognition.&lt;/p&gt;

&lt;p&gt;Here’s an example of how to use the &lt;code&gt;#ctx_before&lt;/code&gt; directive in a query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"#ctx_before"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"5s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"&amp;amp;anomaly_score"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"$gt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.8&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This query returns all records with an anomaly score greater than 0.8, along with the context records that occurred within 5 seconds before each matching entry.&lt;/p&gt;

&lt;h3&gt;
  
  
  New ReductSelect Extension&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_16_0-released#new-reductselect-extension" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;ReductStore is fundamentally a blob storage system and does not allow direct manipulation of stored data. However, with its extension mechanism, we can introduce new capabilities while keeping the core system simple.&lt;/p&gt;

&lt;p&gt;The new &lt;a href="https://www.reduct.store/docs/extensions/official/select-ext" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductSelect&lt;/strong&gt;&lt;/a&gt; extension enables users to query and transform data stored in CSV or JSON formats, making it easier to build flexible and efficient data processing workflows.&lt;/p&gt;

&lt;p&gt;For example, the following query uses ReductSelect to extract specific columns from CSV data and filter rows using the same conditional syntax available in ReductStore's native query language:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ext"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"select"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"csv"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"has_headers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"columns"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"temperature"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"as_labels"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"temp"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"humidity"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"when"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"&amp;amp;temperature"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"$gt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This query selects the &lt;code&gt;temperature&lt;/code&gt; and &lt;code&gt;humidity&lt;/code&gt; columns from a CSV file, renames &lt;code&gt;temperature&lt;/code&gt; to &lt;code&gt;temp&lt;/code&gt;, and filters rows where the temperature is greater than 30°C.&lt;/p&gt;

&lt;p&gt;These simple transformations enable you to ingest structured data very quickly and retrieve only subsets of it for further processing and analysis.&lt;/p&gt;

&lt;h3&gt;
  
  
  New ReductROS Extension&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_16_0-released#new-reductros-extension" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Another exciting addition is the &lt;a href="https://www.reduct.store/docs/extensions/official/ros-ext" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductROS&lt;/strong&gt;&lt;/a&gt; extension, which provides tools for extracting and transforming data stored in ReductStore into formats compatible with the Robot Operating System (ROS).&lt;/p&gt;

&lt;p&gt;With this extension, you can extract data from MCAP files containing ROS 2 messages and convert it into JSON format, making it easier to analyze and visualize. It also supports transforming raw binary data—such as images—into more accessible formats like JPEG or base64 strings.&lt;/p&gt;

&lt;p&gt;For example, the following query extracts data from a ROS 2 topic and encodes the image payload as a JPEG:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "ext": {
    "ros": {
      "extract": {
        "topic": "/camera/image",
        "encode": { "data": "jpeg" }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ReductROS is still in active development, and we plan to expand its capabilities with support for additional ROS message types and more flexible extraction options in future releases. Stay tuned for updates!&lt;/p&gt;

&lt;h2&gt;
  
  
  What next?&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_16_0-released#what-next" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We are constantly working on improving ReductStore and adding new features to provide the best experience for our users. In the next release we plan to add new features and improvements, including:&lt;/p&gt;

&lt;h3&gt;
  
  
  Shareable Query Links&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_16_0-released#shareable-query-links" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;We are developing a feature that allows users to generate and share links to specific queries in ReductStore.&lt;/p&gt;

&lt;p&gt;This will simplify collaboration by enabling team members to access query results without needing direct access to the ReductStore instance. It will also allow users to download results directly via a link and support integration with external tools and platforms such as &lt;strong&gt;&lt;a href="https://foxglove.dev/" rel="noopener noreferrer"&gt;Foxglove&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;Integration with Grafana&lt;/h3&gt;

&lt;p&gt;We are also working on a &lt;strong&gt;&lt;a href="https://github.com/reductstore/reduct-grafana" rel="noopener noreferrer"&gt;Grafana plugin&lt;/a&gt;&lt;/strong&gt; that enables users to visualize and analyze data stored in ReductStore directly within Grafana dashboards.&lt;/p&gt;

&lt;p&gt;This integration will provide a seamless experience with Grafana’s powerful visualization tools, allowing you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build custom dashboards using data from ReductStore.&lt;/li&gt;
&lt;li&gt;Monitor your data streams and historical records in real time.&lt;/li&gt;
&lt;li&gt;Visualize labels and data output in JSON or CSV formats.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stay tuned for the first release—coming soon!&lt;/p&gt;




&lt;p&gt;I hope you find these new features useful. If you have any questions or feedback, don’t hesitate to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community&lt;/strong&gt;&lt;/a&gt; forum.&lt;/p&gt;

&lt;p&gt;Thanks for using &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>news</category>
    </item>
    <item>
      <title>ReductStore v1.15.0 Released With Extension API and Improved Web Console</title>
      <dc:creator>Alexey Timin</dc:creator>
      <pubDate>Wed, 07 May 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/reductstore-v1150-released-with-extension-api-and-improved-web-console-2gdk</link>
      <guid>https://dev.to/reductstore/reductstore-v1150-released-with-extension-api-and-improved-web-console-2gdk</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2qcxb0djg8he29zjn3ap.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2qcxb0djg8he29zjn3ap.webp" alt="ReductStore v1.15.0 Released" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are pleased to announce the release of the latest minor version of &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;, &lt;a href="https://github.com/reductstore/reductstore/releases/tag/v1.15.0" rel="noopener noreferrer"&gt;&lt;strong&gt;1.15.0&lt;/strong&gt;&lt;/a&gt;. ReductStore is a high-performance storage and streaming solution designed for storing and managing large volumes of historical data.&lt;/p&gt;

&lt;p&gt;To download the latest released version, please visit our &lt;a href="https://www.reduct.store/download" rel="noopener noreferrer"&gt;&lt;strong&gt;Download Page&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;What's new in 1.15.0?&lt;/h2&gt;

&lt;p&gt;This release includes several new features and enhancements: the Extension API, an improved Web Console, and new conditional query operators.&lt;/p&gt;

&lt;h3&gt;Extension API&lt;/h3&gt;

&lt;p&gt;ReductStore is a blob store: it knows nothing about the data it holds. We are determined to keep it that way, because it lets us ingest and query data in any format with optimal performance. However, we know that you sometimes need to process data or run special queries based on the original data format.&lt;/p&gt;

&lt;p&gt;For example, if you ingest JSON data, you may want to query only certain fields of the JSON object or use them in a query condition for filtering. The new Extension API makes this possible.&lt;/p&gt;

&lt;p&gt;The extension API is experimental and not yet documented. We are developing extensions for columnar data, CSV and MCAP formats. Once we have enough experience, we will document the API and publish the extensions so that you can build your own extensions for your data formats.&lt;/p&gt;

&lt;p&gt;For the most curious users, a demo extension that scales JPEG images on the fly can be found on GitHub: &lt;a href="https://github.com/reductstore/img-ext" rel="noopener noreferrer"&gt;https://github.com/reductstore/img-ext&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Improved Web Console&lt;/h3&gt;

&lt;p&gt;In the v1.14.0 release, we introduced the ability to browse data in the Web Console. This release includes two new features: the ability to upload files to the database and update labels in the Web Console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3fgh29hvhf19pnq2m1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3fgh29hvhf19pnq2m1d.png" alt="Update Labels in ReductStore Web Console" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The upload feature can be useful when you store artifacts, e.g. AI models or configuration files, in the storage and want to update them occasionally.&lt;/p&gt;

&lt;h3&gt;New Conditional Query Operators&lt;/h3&gt;

&lt;p&gt;We have expanded the set of conditional query operators with new ones that allow you to filter and aggregate data more effectively:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;$each_n&lt;/code&gt; - keeps only every N-th record in the result set.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;$each_t&lt;/code&gt; - keeps only one record within a given time period, in seconds.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;$limit&lt;/code&gt; - limits the number of records in the result set.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;$timestamp&lt;/code&gt; - allows you to filter records by timestamp.&lt;/li&gt;
&lt;/ul&gt;
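&lt;p&gt;As an illustration, the new operators can be written down as plain &lt;code&gt;when&lt;/code&gt; conditions. The shapes below follow the descriptions above; the combined form assumes top-level keys are ANDed together, so consult the Conditional Query documentation for the authoritative syntax:&lt;/p&gt;

```python
# Illustrative "when" conditions for the new operators.
each_nth = {"$each_n": 10}        # keep every 10th record
one_per_minute = {"$each_t": 60}  # keep one record per 60-second window
first_hundred = {"$limit": 100}   # cap the result set at 100 records

# Assumed shape: top-level keys are combined with a logical AND, so
# operators can be mixed with label filters in a single condition.
combined = {"&score": {"$gt": 0.5}, "$each_n": 10, "$limit": 100}
print(combined)
```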

&lt;p&gt;The &lt;code&gt;$timestamp&lt;/code&gt; operator can be particularly useful if you store timestamps and metadata in another database and want to retrieve blobs from ReductStore:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:8080&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my_bucket&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;start&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1231231081&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;end&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1231231085&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
  &lt;span class="n"&gt;when&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$in&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$timestamp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1231231081&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1231231082&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1231231083&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1231231084&lt;/span&gt;&lt;span class="p"&gt;,]&lt;/span&gt; &lt;span class="p"&gt;},):&lt;/span&gt; 
  &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_all&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can read more about the new operators in the &lt;a href="https://www.reduct.store/docs/conditional-query" rel="noopener noreferrer"&gt;&lt;strong&gt;Conditional Query&lt;/strong&gt;&lt;/a&gt; documentation.&lt;/p&gt;

&lt;h2&gt;What next?&lt;/h2&gt;

&lt;p&gt;We are constantly working to improve ReductStore and provide the best experience for our users. In the next few releases, we plan to add the following features and improvements:&lt;/p&gt;

&lt;h3&gt;Integration with ROS&lt;/h3&gt;

&lt;p&gt;ReductStore is a great solution for storing and managing large amounts of data in robotic applications. We are currently working on integrating ReductStore with ROS to provide a seamless experience for storing and retrieving data in ROS applications. We have started a new &lt;a href="https://github.com/reductstore/ros2-reduct-agent" rel="noopener noreferrer"&gt;&lt;strong&gt;ROS2 Agent&lt;/strong&gt;&lt;/a&gt; that allows you to store and retrieve data in ReductStore from ROS2 applications; it is designed to be easy to use and to integrate with existing ROS2 applications. We are also going to add support for the MCAP format with the new Extension API, which will allow you to retrieve data in its original format from MCAP files, filter topics, and more.&lt;/p&gt;

&lt;h3&gt;Golang SDK&lt;/h3&gt;

&lt;p&gt;Our big goal is to integrate Grafana by the end of 2025. This month we started work on the Golang SDK, which is the first step towards achieving this goal. The project is still in the early stages of development, but you can already check it out on GitHub: &lt;a href="https://github.com/reductstore/reduct-go" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Golang SDK&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;I hope you find these new features useful. If you have any questions or feedback, don’t hesitate to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community&lt;/strong&gt;&lt;/a&gt; forum.&lt;/p&gt;

&lt;p&gt;Thanks for using &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>news</category>
      <category>reductstore</category>
    </item>
    <item>
      <title>Data Acquisition System for Manufacturing: Shop Floor to Cloud</title>
      <dc:creator>Alexey Timin</dc:creator>
      <pubDate>Tue, 22 Apr 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/data-acquisition-system-for-manufacturing-shop-floor-to-cloud-1oc4</link>
      <guid>https://dev.to/reductstore/data-acquisition-system-for-manufacturing-shop-floor-to-cloud-1oc4</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbo9n2c7y2usglfn8dpxn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbo9n2c7y2usglfn8dpxn.png" alt="ReductStore on DAQ edge device" width="800" height="547"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As modern manufacturing becomes increasingly data-driven, the need for efficient data acquisition systems is more critical than ever. In my previous article, &lt;a href="https://dev.to/reductstore/building-a-data-acquisition-system-for-manufacturing-1an6"&gt;&lt;strong&gt;Building a Data Acquisition System for Manufacturing&lt;/strong&gt;&lt;/a&gt;, we discussed the challenges of data acquisition in manufacturing and how &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt; can help solve them. Here we will learn how to use ReductStore at the edge of the shop floor and stream data to the cloud.&lt;/p&gt;

&lt;h2&gt;DAQ Edge Device&lt;/h2&gt;

&lt;p&gt;The shop floor is the place where data is generated, and it is essential to have a reliable &lt;strong&gt;data acquisition (DAQ)&lt;/strong&gt; device that can collect and process this data. The DAQ device can process data locally or act as a FIFO buffer, sending data to long-term storage on premises or in the cloud.&lt;/p&gt;

&lt;p&gt;The DAQ device can be a Raspberry Pi with a USB stick or a powerful industrial computer with a 1TB or larger NVMe SSD. Despite the differences in hardware, your DAQ device should have the following capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data ingestion&lt;/strong&gt;: The DAQ device must be able to ingest data from various sources, such as sensors, PLCs, and other devices, with minimal latency and high throughput.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data storage&lt;/strong&gt;: The DAQ device must be able to store data locally and provide data retention policies to manage the amount of data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data replication&lt;/strong&gt;: The DAQ device must be able to replicate data to the cloud or local storage for long-term retention and analysis.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These features don't look like much, but the devil is in the details. If your data acquisition is very intensive, &lt;strong&gt;you can't store and replicate everything&lt;/strong&gt;. You need an intelligent data collection system that can filter and process data before sending it to long-term storage. You may also face connectivity issues, and your DAQ device should be able to handle them gracefully without losing data.&lt;/p&gt;

&lt;p&gt;Let's take a look at ReductStore and how it can help you build a modern data acquisition system for manufacturing.&lt;/p&gt;

&lt;h2&gt;Demo Setup&lt;/h2&gt;

&lt;p&gt;For demonstration purposes, we will build a simple data acquisition system that includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Source&lt;/strong&gt;: A simple data source that generates random data and writes it to ReductStore.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ReductStore&lt;/strong&gt;: A ReductStore instance running on the DAQ device that replicates data to a cloud instance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud Storage&lt;/strong&gt;: A cloud instance of ReductStore at &lt;a href="https://play.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;play.reduct.store&lt;/strong&gt;&lt;/a&gt; (API token: &lt;code&gt;reductstore&lt;/code&gt;) that stores the replicated data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7swt05mld0yqi939x1ww.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7swt05mld0yqi939x1ww.png" alt="ReductStore Demo DAQ Setup" width="800" height="680"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Provisioning ReductStore&lt;/h3&gt;

&lt;p&gt;For this demo, we need to create a bucket where we will ingest data and a replication task that will replicate data to the cloud. You can do this using the Web console, CLI or SDKs, but let's provision the resources we need using environment variables. Here is a simple &lt;code&gt;docker-compose.yml&lt;/code&gt; file that creates a ReductStore instance with a bucket and a replication task:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;reductstore&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;reduct/store:latest&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;reductstore&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;RS_DATA_PATH&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/data&lt;/span&gt;

      &lt;span class="na"&gt;RS_BUCKET_1_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;data&lt;/span&gt;
      &lt;span class="na"&gt;RS_BUCKET_1_QUOTA_TYPE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;FIFO&lt;/span&gt;
      &lt;span class="na"&gt;RS_BUCKET_1_QUOTA_SIZE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1GB&lt;/span&gt;

      &lt;span class="na"&gt;RS_REPLICATION_1_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;repl-task&lt;/span&gt;
      &lt;span class="na"&gt;RS_REPLICATION_1_SRC_BUCKET&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;data&lt;/span&gt;
      &lt;span class="na"&gt;RS_REPLICATION_1_DST_HOST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://play.reduct.store&lt;/span&gt;
      &lt;span class="na"&gt;RS_REPLICATION_1_DST_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;reductstore&lt;/span&gt;
      &lt;span class="na"&gt;RS_REPLICATION_1_DST_BUCKET&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
      &lt;span class="na"&gt;RS_REPLICATION_1_WHEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{"&amp;amp;score":&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{"$$gt":&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0.8}}'&lt;/span&gt; &lt;span class="c1"&gt;# we need $$ to escape $ in YAML&lt;/span&gt;

    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./data:/data&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;8383:8383&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;In this example we will create a bucket called &lt;code&gt;data&lt;/code&gt; with a &lt;a href="https://www.reduct.store/docs/glossary#fifo-quota" rel="noopener noreferrer"&gt;&lt;strong&gt;FIFO quota&lt;/strong&gt;&lt;/a&gt; of 1GB and a replication task called &lt;code&gt;repl-task&lt;/code&gt;, which will replicate data from the &lt;code&gt;data&lt;/code&gt; bucket to the &lt;code&gt;demo&lt;/code&gt; bucket on &lt;a href="https://play.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;our demo server&lt;/strong&gt;&lt;/a&gt;. The &lt;code&gt;RS_REPLICATION_1_WHEN&lt;/code&gt; environment variable contains a filter that replicates only data with a score greater than 0.8.&lt;/p&gt;

&lt;p&gt;This is a simple example, but you can use more complex filters to replicate data based on your needs.&lt;/p&gt;
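&lt;p&gt;For instance, a stricter condition might replicate only mid-range scores. This is a sketch under the same &lt;code&gt;$$&lt;/code&gt; escaping as above; the combined-operator shape within one label is an assumption, so check the conditional query documentation for your version:&lt;/p&gt;

```yaml
# Hypothetical variant: replicate only records whose score label is
# between 0.5 and 0.9 ($$ escapes $ for Docker Compose interpolation).
RS_REPLICATION_1_WHEN: '{"&score": {"$$gt": 0.5, "$$lt": 0.9}}'
```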

&lt;p&gt;Now you can start ReductStore with Docker Compose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;You can open the Web Console at &lt;code&gt;http://localhost:8383&lt;/code&gt; and check the status of the bucket and replication task.&lt;/p&gt;

&lt;h3&gt;Data Ingestion&lt;/h3&gt;

&lt;p&gt;Now we need to create a data source that will generate random data and write it to ReductStore.&lt;/p&gt;

&lt;p&gt;We provide several SDKs for different programming languages, but for this example we will use the Python SDK. Let's install it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install reduct-py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;Now we can create a simple data source. Create a file called &lt;code&gt;data_source.py&lt;/code&gt; and add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sleep&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;random&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;randbytes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;randint&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;reduct&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt;

&lt;span class="n"&gt;BLOB&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randbytes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# 100 KB of random bytes
&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;send_data&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://127.0.0.1:8383&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Get bucket
&lt;/span&gt;        &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# Generate score
&lt;/span&gt;            &lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mf"&gt;100.0&lt;/span&gt;

            &lt;span class="c1"&gt;# Write record
&lt;/span&gt;            &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;scored_data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;BLOB&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;labels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;score&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="p"&gt;)&lt;/span&gt;

            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Record with score &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;score&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; written&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
    &lt;span class="n"&gt;loop&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new_event_loop&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;loop&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run_until_complete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;send_data&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;As you can see, it's very simple. You can read more about working with the Python SDK in the &lt;a href="https://www.reduct.store/docs/getting-started/with-python" rel="noopener noreferrer"&gt;&lt;strong&gt;Quick Start With Python&lt;/strong&gt;&lt;/a&gt; guide.&lt;/p&gt;

&lt;p&gt;Let's run the data source:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python data_source.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;You can check if the data is ingested by opening the Web Console and checking the &lt;code&gt;data&lt;/code&gt; bucket.&lt;/p&gt;

&lt;h3&gt;Data Replication&lt;/h3&gt;

&lt;p&gt;Now we need to check that the data is replicated to the demo server. You can do this by opening the Web Console at &lt;code&gt;https://play.reduct.store&lt;/code&gt;, logging in with the &lt;code&gt;reductstore&lt;/code&gt; token, and checking the &lt;code&gt;demo&lt;/code&gt; bucket. You should find the &lt;code&gt;scored_data&lt;/code&gt; entry with the replicated data:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fty3e2rlk61d4rjrzzanz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fty3e2rlk61d4rjrzzanz.png" alt="Replicated Data in ReductStore Web Console" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should see the records with a score greater than 0.8, as we specified in the replication task.&lt;/p&gt;

&lt;p&gt;Let's simulate a network outage and see how ReductStore handles it. If you run the demo setup locally on your computer, you can simply disconnect from the internet for a while and see in the Web Console that the replication task has pending records. The replication task stores any pending records in a log to be sent later. When you restore the internet connection, the replication task automatically sends the pending records to the demo server.&lt;/p&gt;

&lt;p&gt;Now you don't need to worry about &lt;strong&gt;network outages or data loss&lt;/strong&gt;. ReductStore will take care of that for you.&lt;/p&gt;

&lt;h2&gt;Best Practices&lt;/h2&gt;

&lt;p&gt;The example we have provided is very simple and does not cover all aspects of building a modern data collection system for manufacturing. Here are some best practices to consider when building your own system based on ReductStore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security: Use secure connections (HTTPS) and authentication tokens to protect your data. Read more about access control in the &lt;a href="https://www.reduct.store/docs/guides/access-control" rel="noopener noreferrer"&gt;&lt;strong&gt;Access Control&lt;/strong&gt;&lt;/a&gt; guide.&lt;/li&gt;
&lt;li&gt;Data retention: Use the &lt;a href="https://www.reduct.store/docs/glossary#fifo-quota" rel="noopener noreferrer"&gt;&lt;strong&gt;FIFO quota&lt;/strong&gt;&lt;/a&gt; to limit the amount of data stored in ReductStore. This will help you avoid running out of disk space and keep your data fresh.&lt;/li&gt;
&lt;li&gt;Store data in the cloud: Use &lt;a href="https://www.reduct.store/solutions/cloud" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Cloud&lt;/strong&gt;&lt;/a&gt; to store your data in the cloud. It uses cloud object storage as a backend, which reduces the cost of storing data.&lt;/li&gt;
&lt;/ul&gt;
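&lt;p&gt;The FIFO quota mentioned above can be thought of as byte-based eviction: when a new record would exceed the quota, the oldest records are dropped first. The sketch below is a simplified stand-in for that behaviour, not ReductStore's actual implementation, and the 100-byte quota is purely illustrative.&lt;/p&gt;

```python
from collections import deque

QUOTA_BYTES = 100  # illustrative quota; real quotas are typically gigabytes

def write_with_quota(bucket: deque, payload: bytes, quota: int = QUOTA_BYTES):
    """Append a record, evicting the oldest ones until the new one fits."""
    while bucket and sum(len(p) for p in bucket) + len(payload) > quota:
        bucket.popleft()  # FIFO: the oldest record goes first

    bucket.append(payload)

bucket = deque()
for _ in range(5):
    write_with_quota(bucket, bytes(30))  # five 30-byte records, 100-byte quota
# Only the newest three records fit under the quota.
```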

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Building a modern and robust data acquisition system for manufacturing can be a challenging task. In this article, we have shown you how ReductStore can help you build a data acquisition system that can ingest, store, and replicate data from the shop floor to the cloud with minimal effort and maximum performance.&lt;/p&gt;

&lt;h2&gt;Resources&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.reduct.store/docs/getting-started" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Documentation&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/reductstore/building-a-data-acquisition-system-for-manufacturing-1an6"&gt;&lt;strong&gt;Building a Data Acquisition System for Manufacturing&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/reductstore/reductstore" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore GitHub&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/reductstore/daq-edge-example" rel="noopener noreferrer"&gt;&lt;strong&gt;Example On GitHub&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;We hope this article has provided you with valuable insights into building a modern data acquisition system for manufacturing. If you have any questions or comments, feel free to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community Forum&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>daq</category>
      <category>reductstore</category>
      <category>iot</category>
    </item>
    <item>
      <title>Building a Data Acquisition System for Manufacturing</title>
      <dc:creator>Alexey Timin</dc:creator>
      <pubDate>Mon, 17 Mar 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/building-a-data-acquisition-system-for-manufacturing-1an6</link>
      <guid>https://dev.to/reductstore/building-a-data-acquisition-system-for-manufacturing-1an6</guid>
      <description>&lt;p&gt;Large manufacturing plants generate vast amounts of data from machines and sensors. This data is valuable for monitoring machine health, predicting failures, and optimizing production. It also serves as a foundation for building industrial AI models for predictive maintenance, quality control, and process optimization.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;Data Acquisition (DAQ)&lt;/strong&gt; system collects this data, processes it, and stores it for further analysis. It typically consists of edge devices that gather real-time data, central servers or cloud storage for retention, and software that enables analytics and AI-driven insights.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkpm4q10ckx86omoiz17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkpm4q10ckx86omoiz17.png" alt="DAQ System based on ReductStore" width="800" height="689"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;small&gt;An example of a three-tier DAQ system based on ReductStore.&lt;/small&gt;&lt;/p&gt;

&lt;p&gt;Traditional automation solutions like SCADA and historians are complex, expensive, and not optimized for modern cloud-based AI applications. They often limit access to data, making it difficult for engineers and data scientists to develop machine learning models and gain actionable insights.&lt;/p&gt;

&lt;p&gt;In this article, we’ll explore the challenges of building a modern DAQ system for manufacturing and how &lt;strong&gt;&lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;ReductStore&lt;/a&gt;&lt;/strong&gt; can simplify the process and support &lt;strong&gt;ELT (Extract, Load, Transform) workflows&lt;/strong&gt; for advanced analytics and &lt;strong&gt;AI applications&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;The Role of DAQ Systems in Manufacturing&lt;/h2&gt;

&lt;p&gt;In the 21st century, manufacturing is becoming increasingly data-driven. While many factories are already equipped with automation systems, SCADA systems, and historians that collect data from machines and sensors, these solutions are limited when it comes to today's data needs.&lt;/p&gt;

&lt;p&gt;Traditional systems struggle with data-intensive sources such as vibration analysis and video surveillance, which generate &lt;strong&gt;large volumes of unstructured data&lt;/strong&gt;. They lack interfaces to efficiently access the &lt;strong&gt;history of the raw data&lt;/strong&gt;, which is essential for building advanced analytics and AI models. In addition, they often lack the flexibility to integrate with modern &lt;strong&gt;cloud-based storage&lt;/strong&gt; and analytics platforms.&lt;/p&gt;

&lt;p&gt;This is where a dedicated data acquisition (DAQ) system comes in. A DAQ system efficiently collects, processes, and stores manufacturing data, providing real-time access for optimization and predictive analytics. DAQ systems are more flexible and scalable than traditional automation solutions and can handle multiple data sources, from sensors and PLCs to cameras and vision systems.&lt;/p&gt;

&lt;p&gt;Importantly, implementing a DAQ system does not mean replacing the existing automation infrastructure. In many cases, DAQ systems are deployed alongside automation systems, complementing rather than replacing them.&lt;/p&gt;

&lt;h2&gt;Challenges of Building a DAQ System&lt;/h2&gt;

&lt;p&gt;DAQ systems are often perceived as simpler and more cost-effective than full-scale automation solutions. However, designing a robust DAQ system for manufacturing comes with its own set of challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Handling Massive Data Volume and Velocity&lt;/strong&gt; – Manufacturing plants generate vast amounts of high-frequency data from machines and sensors. Efficiently collecting, processing, and storing this data in real-time requires a scalable architecture.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ensuring Reliable Connectivity with Low Latency&lt;/strong&gt; – Edge devices must transmit data to central servers or cloud storage with minimal latency and high reliability, even in industrial environments with network constraints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Balancing Cost and Scalability&lt;/strong&gt; – As data volume grows, the DAQ system must scale efficiently without excessive costs, requiring optimized storage, compression techniques, and cloud integration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With these challenges in mind, let’s explore how ReductStore can simplify the process of building a DAQ system for manufacturing.&lt;/p&gt;

&lt;p&gt;A DAQ system can range from a single edge device near a machine to a complex infrastructure spanning multiple factories and cloud instances. Below is a high-level overview of its structure.&lt;/p&gt;

&lt;h2&gt;Shop Floor&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;shop floor&lt;/strong&gt; is where the manufacturing process takes place, and it is the closest point to the machines and sensors that generate data. Due to the real-time nature of manufacturing data, it is crucial to have edge devices positioned near the machines to collect and transmit data efficiently.&lt;/p&gt;

&lt;p&gt;Edge devices can either process data locally or act as &lt;strong&gt;FIFO buffers&lt;/strong&gt;, temporarily storing data before sending it to the next level of the system. Their role depends on system requirements, but their primary function is to ensure seamless data acquisition without overloading the network or losing critical information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddqprshbn16s0tiolotg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddqprshbn16s0tiolotg.png" alt="Shop Floor Edge Device" width="800" height="547"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;small&gt;An example of a DAQ edge device based on ReductStore.&lt;/small&gt;&lt;/p&gt;

&lt;h3&gt;Data Ingestion&lt;/h3&gt;

&lt;p&gt;Industrial data comes in a variety of formats and frequencies. A DAQ system must support acquisition from multiple sources such as &lt;strong&gt;sensors, PLCs, and cameras&lt;/strong&gt;. Because manufacturing environments rely on various industrial protocols such as OPC UA, IO-Link, and GenICam, the DAQ system must be able to collect data from these sources. For this purpose, edge devices are equipped with &lt;strong&gt;data connectors&lt;/strong&gt; that collect data from different sources and convert it into standardized formats, e.g. JSON, CSV, WAV, and JPEG.&lt;/p&gt;

&lt;p&gt;By using standard protocols, a DAQ system remains flexible and scalable, allowing integration with a variety of industrial devices. However, the raw data collected from these sources is often &lt;strong&gt;unstructured and storage-intensive&lt;/strong&gt;, requiring effective processing and storage strategies.&lt;/p&gt;

&lt;p&gt;ReductStore enables the storage of a history of unstructured data and provides an HTTP API and SDKs for multiple programming languages, allowing data connectors to write data in various formats directly to storage.&lt;/p&gt;
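&lt;p&gt;As an illustration of such a connector, the sketch below converts a raw sensor sample into a standardized JSON record with a timestamp and labels, ready to be written to storage over HTTP. The &lt;code&gt;to_record&lt;/code&gt; helper and the record layout are hypothetical, and the HTTP call itself is omitted.&lt;/p&gt;

```python
import json
import time

def to_record(sensor_id: str, value: float, unit: str) -> dict:
    """Normalize a raw sensor sample into a storage-ready record."""
    return {
        "timestamp_us": int(time.time() * 1_000_000),  # microsecond precision
        "payload": json.dumps({"value": value, "unit": unit}),
        "labels": {"sensor": sensor_id, "format": "json"},
    }

# A vibration reading normalized for ingestion.
record = to_record("vibration-01", 0.042, "g")
```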

&lt;h3&gt;Data Storage&lt;/h3&gt;

&lt;p&gt;Data storage in a DAQ system must handle diverse data types and support real-time ingestion. Additionally, edge devices have &lt;strong&gt;limited storage&lt;/strong&gt;, meaning the system cannot afford to stop data collection when storage is full.&lt;/p&gt;

&lt;p&gt;ReductStore solves these challenges by providing a unified data storage solution optimized for manufacturing environments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High-Speed Ingestion&lt;/strong&gt; – Designed for fast, real-time data collection from multiple sources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FIFO Quotas&lt;/strong&gt; – Automatically deletes old data when storage is full, ensuring continuous operation without manual intervention.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Labeling&lt;/strong&gt; – Enables structured organization by tagging data based on source, type, or timestamp, making retrieval and analysis more efficient.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Data labeling is particularly useful for structuring information. For instance, video data can be labeled with machine states (e.g., running, stopped, fault), making it easy to filter and retrieve condition-specific records.&lt;/p&gt;
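&lt;p&gt;The machine-state example above can be sketched as a simple label filter over stored records. The record layout here is illustrative only, not ReductStore's on-disk format or query API.&lt;/p&gt;

```python
# Each record carries labels; filtering by machine state is a simple match.
records = [
    {"ts": 100, "labels": {"state": "running"}},
    {"ts": 101, "labels": {"state": "fault"}},
    {"ts": 102, "labels": {"state": "running"}},
    {"ts": 103, "labels": {"state": "stopped"}},
]

def query_by_label(records, key, value):
    """Return records whose label `key` equals `value`."""
    return [r for r in records if r["labels"].get(key) == value]

# Retrieve only the video frames recorded during a fault.
faults = query_by_label(records, "state", "fault")
```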

&lt;h3&gt;Data Replication&lt;/h3&gt;

&lt;p&gt;In most cases, edge devices serve as temporary storage before data is transferred to a central server or cloud storage for long-term retention and further analysis.&lt;/p&gt;

&lt;p&gt;However, manufacturing environments present several connectivity challenges that must be addressed. One of the most common issues is network reliability. Edge devices may operate in remote locations with unstable or slow networks, making it difficult to maintain a continuous connection to the central server. In addition, &lt;strong&gt;limited bandwidth&lt;/strong&gt; and &lt;strong&gt;firewall restrictions&lt;/strong&gt; can further complicate data transfer. Security concerns often restrict direct access to edge devices, requiring a more sophisticated replication mechanism.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Although &lt;strong&gt;IoT protocols such as MQTT&lt;/strong&gt; can address many of these connectivity issues, using a &lt;strong&gt;database for streaming data&lt;/strong&gt; ensures persistence and reliability: data is stored on disk and can be replicated even after a prolonged loss of connectivity.&lt;/p&gt;

&lt;p&gt;ReductStore is designed to meet these industrial constraints with robust replication capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Push Replication&lt;/strong&gt; - Edge devices push data to the central server/cloud only when a connection is available, eliminating the need for a permanent connection and direct access to the edge device.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTTP/HTTPS Support&lt;/strong&gt; - Ensures compatibility with firewalls and industrial network configurations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batch Replication&lt;/strong&gt; - Sends multiple records in a single request, optimizing bandwidth usage and reducing network overhead.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conditional Replication&lt;/strong&gt; - Replicates only necessary data based on labels, timestamps, or other conditions, optimizing network traffic and storage costs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With ReductStore, setting up replication requires no custom logic - simply label your data, define replication conditions, and the system does the rest. This approach ensures efficient, cost-effective, and scalable data collection for manufacturing environments.&lt;/p&gt;
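&lt;p&gt;Conditionally replicating records boils down to evaluating a rule over each record's labels before pushing it, such as the "score greater than 0.8" rule used elsewhere in this article. The evaluator below is a simplified local stand-in for that idea; ReductStore's actual replication conditions are configured on the server, not implemented like this.&lt;/p&gt;

```python
def should_replicate(labels: dict, threshold: float = 0.8) -> bool:
    """Replicate only records whose `score` label exceeds the threshold."""
    try:
        # Labels are strings; records without a score never match.
        return float(labels.get("score", "nan")) > threshold
    except ValueError:
        return False

records = [{"score": "0.95"}, {"score": "0.5"}, {"other": "x"}]
replicated = [r for r in records if should_replicate(r)]
```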

&lt;h2&gt;Factory Data Store&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;factory storage system&lt;/strong&gt; is the next level of the DAQ architecture, responsible for aggregating data from multiple edge devices and providing structured access for data scientists and engineers.&lt;/p&gt;

&lt;p&gt;Unlike edge devices, which have limited &lt;strong&gt;processing power and storage&lt;/strong&gt;, the factory storage system can leverage more powerful hardware and larger storage capacity, allowing for &lt;strong&gt;longer data retention&lt;/strong&gt; and more advanced processing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxo20njg44tsq30r5gaz2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxo20njg44tsq30r5gaz2.png" alt="Factory Storage based on ReductStore" width="732" height="632"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;small&gt;An example of DAQ factory storage based on ReductStore.&lt;/small&gt;&lt;/p&gt;

&lt;h3&gt;Massive Data Volumes&lt;/h3&gt;

&lt;p&gt;The factory storage can be a &lt;strong&gt;dedicated server&lt;/strong&gt;, &lt;strong&gt;NAS&lt;/strong&gt;, or &lt;strong&gt;SAN&lt;/strong&gt; optimized for high-throughput data storage. Independent of the implementation, the storage system must be capable of storing &lt;strong&gt;TBs or even PBs&lt;/strong&gt; of data because of the high-frequency data generated by manufacturing processes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Large amounts of unstructured data are usually stored in a plain file system or object storage, which is not optimized for &lt;strong&gt;historical data access&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;ReductStore provides a scalable and efficient storage solution designed to handle massive amounts of manufacturing data while ensuring fast access to it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Indexing by time&lt;/strong&gt; - ReductStore indexes data by time, enabling fast retrieval of data over time intervals.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Separate storage for metadata&lt;/strong&gt; - ReductStore stores and manages metadata separately from data, allowing it to scale and handle more data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batching&lt;/strong&gt; - ReductStore batches data at the communication and file system level, optimizing data storage and access.&lt;/li&gt;
&lt;/ul&gt;
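&lt;p&gt;Time-based indexing makes interval queries a pair of binary searches over sorted timestamps. The toy sketch below shows the idea using the standard library's &lt;code&gt;bisect&lt;/code&gt; module; ReductStore's real index is, of course, more involved than a Python list.&lt;/p&gt;

```python
from bisect import bisect_left, bisect_right

# Record timestamps (microseconds) kept sorted, so lookups are O(log n).
timestamps = [1000, 2000, 3000, 4000, 5000]

def query_interval(ts_index, start, stop):
    """Return the index range [lo, hi) of records with ts in [start, stop).

    Assumes integer timestamps, as in the list above.
    """
    return bisect_left(ts_index, start), bisect_right(ts_index, stop - 1)

lo, hi = query_interval(timestamps, 2000, 4000)
selected = timestamps[lo:hi]  # records inside the interval
```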

&lt;h3&gt;Data Management&lt;/h3&gt;

&lt;p&gt;When we talked about the need for factory storage to handle massive amounts of data, we meant not only storing the data but also providing efficient data management, with access control, data retention policies, and data labeling.&lt;/p&gt;

&lt;p&gt;Data engineers and data scientists need to be able to manage the data for the following tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Retrieve data&lt;/strong&gt; - Efficiently search and retrieve data based on timestamps and labels. This is important for building analytics and AI models.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Labeling&lt;/strong&gt; - Label data based on source, type, or timestamps, enabling efficient search and conditional data retrieval.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Access Control&lt;/strong&gt; - Restrict access to sensitive data based on user roles and privileges.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With ReductStore, you can manage data efficiently and securely, ensuring that only authorized users have access to sensitive data. ReductStore provides a RESTful API and SDKs for multiple programming languages, enabling data engineers and data scientists to programmatically interact with the data store.&lt;/p&gt;

&lt;h2&gt;Cloud Storage and Processing&lt;/h2&gt;

&lt;p&gt;Factory storage can be further extended to a &lt;strong&gt;cloud-based data acquisition system&lt;/strong&gt;, providing additional scalability, redundancy, and advanced analytics capabilities. It allows data to be accessed from anywhere, enabling remote monitoring, predictive maintenance, and AI-driven insights.&lt;/p&gt;

&lt;p&gt;Building a cloud component of the DAQ system that can handle data from multiple plants requires a robust architecture that can scale horizontally and vertically. There are several key components to consider when designing a cloud-based data acquisition system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Ingestion&lt;/strong&gt; - The cloud component must be able to ingest data from multiple factories and edge devices, handling different data types and formats.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost-Effective Storage&lt;/strong&gt; - Cloud storage costs can escalate quickly with large volumes of data. The system must be optimized for cost-effective storage and retrieval.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Transformation&lt;/strong&gt; - Data from multiple sources must be transformed and aggregated for analytics, visualization, and AI applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated workflows&lt;/strong&gt; - The cloud component should support automated workflows to efficiently scale and manage the cloud infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's discuss these parts in more detail.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhnz25a2tco728c1dc1r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhnz25a2tco728c1dc1r.png" alt="Cloud Data Acquisition System" width="800" height="642"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;small&gt;An example of cloud-based DAQ storage based on ReductStore.&lt;/small&gt;&lt;/p&gt;

&lt;h3&gt;Data Ingestion&lt;/h3&gt;

&lt;p&gt;Many cloud platforms offer services for data ingestion, with many of them leveraging the Pub/Sub pattern using message brokers such as Kafka, MQTT, or AMQP. The general concept is to have a &lt;strong&gt;message broker&lt;/strong&gt; that receives data from edge devices and then transmits it to the cloud storage. Subsequently, data is transformed with AWS Lambda or Google Cloud Functions, and the aggregated data is stored in a database. This approach is referred to as &lt;strong&gt;ETL (Extract, Transform, Load)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;While this approach is widely used and highly scalable, it has some drawbacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Loss of raw data&lt;/strong&gt; - The raw data is transformed during the ETL process, resulting in the loss of the source of truth.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complexity&lt;/strong&gt; - Establishing and maintaining a message broker, data transformation, and cloud storage can be complex and time-consuming.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Costs&lt;/strong&gt; - Cloud services can be costly, particularly when dealing with substantial data volumes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Warning:&lt;/strong&gt; It is important to understand that during the ETL process the raw data is transformed, making the original irrecoverable. This precludes reverting to the raw data or applying a different transformation to historical data.&lt;/p&gt;

&lt;p&gt;ReductStore uses the &lt;strong&gt;ELT (Extract, Load, Transform)&lt;/strong&gt; approach, where data is extracted from edge devices and loaded directly into cloud storage. This approach greatly simplifies the ingestion process by leveraging ReductStore's replication mechanism.&lt;/p&gt;

&lt;h3&gt;Scalable Storage&lt;/h3&gt;

&lt;p&gt;The most common cloud storage solutions are &lt;strong&gt;object storage&lt;/strong&gt; services such as Amazon S3, Google Cloud Storage, and Azure Blob Storage. Object storage is many times cheaper than persistent disks and provides a scalable and durable solution for large volumes of data. Unfortunately, as mentioned earlier, it is not optimized for historical data access.&lt;/p&gt;

&lt;p&gt;ReductStore can use object storage as backend storage using &lt;strong&gt;FUSE drivers&lt;/strong&gt;. This allows you to store data in a scalable and cost-effective manner while maintaining fast access to historical data.&lt;/p&gt;

&lt;h3&gt;Data Transformation&lt;/h3&gt;

&lt;p&gt;Data transformation is a critical step in the cloud data acquisition system. It involves aggregating data from multiple sources, cleaning and normalizing data, and preparing it for analytics, visualization, and AI applications. Cloud platforms provide services such as AWS Lambda, Google Cloud Functions, and Azure Functions for data transformation; they are elastically scalable, but can be expensive.&lt;/p&gt;

&lt;p&gt;ReductStore provides a &lt;strong&gt;subscription mechanism&lt;/strong&gt; that allows you to subscribe to data changes and apply &lt;strong&gt;transformations on the fly&lt;/strong&gt;. With the ReductStore SDKs, you can write your own transformation functions in Python, JavaScript, or any other language.&lt;/p&gt;
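&lt;p&gt;The subscribe-and-transform pattern can be sketched as a callback applied to each incoming record. The &lt;code&gt;subscribe&lt;/code&gt; helper below is a hypothetical stand-in for consuming records through an SDK, not an actual ReductStore API.&lt;/p&gt;

```python
from typing import Callable

def subscribe(stream, transform: Callable, sink: list):
    """Apply `transform` to each incoming record and collect the result."""
    for record in stream:
        sink.append(transform(record))

# Example: normalize raw temperature readings on the fly.
incoming = [{"temp_raw": 2150}, {"temp_raw": 2230}]
out: list = []
subscribe(incoming, lambda r: {"temp_c": r["temp_raw"] / 100}, out)
```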

&lt;h3&gt;Automated Workflows&lt;/h3&gt;

&lt;p&gt;The cloud data acquisition system must support automated workflows to efficiently manage the cloud infrastructure. This includes scaling resources on demand, monitoring system health, and handling failures gracefully.&lt;/p&gt;

&lt;p&gt;ReductStore provides a &lt;strong&gt;&lt;a href="https://www.reduct.store/solutions/cloud" rel="noopener noreferrer"&gt;SaaS solution&lt;/a&gt;&lt;/strong&gt; that takes care of the storage infrastructure management, monitoring, and scaling so you can focus on building analytics and AI applications.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;In this article, we have explored the challenges of building a modern data acquisition system for manufacturing. Every DAQ system is unique and depends on the specific requirements of the manufacturing environment. It may include all of the components described or only a subset of them. Data transformation can be done on the edge devices, in factory storage, or in the cloud, depending on the use case, which changes the scale and complexity of the system. However, the key principles of efficient data acquisition, processing, and storage remain the same: your DAQ system should be designed to handle massive amounts of data and ensure reliable connectivity.&lt;/p&gt;

&lt;p&gt;With ReductStore, you can adopt the &lt;strong&gt;ELT approach&lt;/strong&gt;, simplify the process of building a manufacturing DAQ system, replicate data from edge devices to central servers or cloud storage, and manage stored data efficiently.&lt;/p&gt;




&lt;p&gt;We hope this article has provided you with valuable insights into building a modern data acquisition system for manufacturing. If you have any questions or comments, feel free to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community Forum&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>daq</category>
      <category>iot</category>
      <category>reductstore</category>
    </item>
    <item>
      <title>ReductStore v1.14.0 Released With Many Improvements</title>
      <dc:creator>Alexey Timin</dc:creator>
      <pubDate>Tue, 25 Feb 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/reductstore-v1140-released-with-many-improvements-35n9</link>
      <guid>https://dev.to/reductstore/reductstore-v1140-released-with-many-improvements-35n9</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fznfiuqyhn3l2r7fvba95.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fznfiuqyhn3l2r7fvba95.png" alt="ReductStore v1.14.0 Released" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are pleased to announce the release of the latest minor version of &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;, &lt;a href="https://github.com/reductstore/reductstore/releases/tag/v1.14.0" rel="noopener noreferrer"&gt;&lt;strong&gt;1.14.0&lt;/strong&gt;&lt;/a&gt;. ReductStore is a time series database designed for storing and managing large amounts of blob data.&lt;/p&gt;

&lt;p&gt;To download the latest released version, please visit our &lt;a href="https://www.reduct.store/download" rel="noopener noreferrer"&gt;&lt;strong&gt;Download Page&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;What's new in 1.14.0?&lt;/h2&gt;

&lt;p&gt;This release introduces several new features and enhancements, including new conditional query operators, I/O and replication settings, and data browsing in the Web console.&lt;/p&gt;

&lt;h3&gt;New Conditional Query Operators&lt;/h3&gt;

&lt;p&gt;In &lt;strong&gt;&lt;a href="https://dev.to/anthonycvn/reductstore-v1130-released-with-new-conditional-query-api-30kg-temp-slug-5241141"&gt;version 1.13&lt;/a&gt;&lt;/strong&gt; we introduced support for conditional queries, allowing you to filter data using labels in complex conditions. Now we have added several new conditional query operators that allow you to filter data more effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.reduct.store/docs/conditional-query/arithmetic-operators" rel="noopener noreferrer"&gt;Arithmetic Operators&lt;/a&gt;&lt;/strong&gt; perform arithmetic operations on labels in a query, such as addition and subtraction. This allows you to perform calculations on the fly when querying data and to use the results to filter records. For example, you can filter records where &lt;code&gt;score&lt;/code&gt; deviates from its mean by more than 10:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ "$gt": [{ "$abs": [{ "&amp;amp;score": { "$sub": "&amp;amp;mean_score" } }] }, 10 ]}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
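&lt;p&gt;To see what this condition selects, the check below replays it in plain Python over sample label values: a record passes when the absolute difference between its score and mean score exceeds 10. This local evaluator only illustrates the semantics; the real condition is evaluated server-side by ReductStore.&lt;/p&gt;

```python
def passes(labels: dict) -> bool:
    """Mirror the conditional query: |score - mean_score| greater than 10."""
    return abs(labels["score"] - labels["mean_score"]) > 10

samples = [
    {"score": 95, "mean_score": 80},  # deviation 15: selected
    {"score": 85, "mean_score": 80},  # deviation 5: filtered out
]
outliers = [s for s in samples if passes(s)]
```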




&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.reduct.store/docs/conditional-query/string-operators" rel="noopener noreferrer"&gt;String Operators&lt;/a&gt;&lt;/strong&gt; perform string operations on label values in a query, such as checking whether a string contains a substring, starts with it, or ends with it. For example, you can filter records where the &lt;code&gt;name&lt;/code&gt; label contains the substring &lt;code&gt;bottle&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ "&amp;amp;name": { "$contains": "bottle" }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
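&lt;p&gt;Locally, the &lt;code&gt;$contains&lt;/code&gt; check corresponds to a plain substring test on the label value, as in this illustrative client-side filter:&lt;/p&gt;

```python
records = [
    {"name": "water_bottle"},
    {"name": "glass_jar"},
    {"name": "bottle_cap"},
]

# Client-side equivalent of the `$contains` condition shown above.
matches = [r for r in records if "bottle" in r["name"]]
```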




&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.reduct.store/docs/conditional-query/misc-operators" rel="noopener noreferrer"&gt;Miscellaneous Operators&lt;/a&gt;&lt;/strong&gt; provide additional functionality like checking if a label exists in a record, casting a label value to a different type, or referencing a label value in a record. These can be useful if you want to filter records based on the presence of a label, explicitly cast a label value to another type, or explicitly reference a label value in a record.&lt;/p&gt;
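As a sketch of how these might combine with comparison operators, here is a condition that first checks that a label is present and then compares it. The `$exists` and `$and` operator names appear in the Miscellaneous and Logical Operators references respectively, but the exact operand shape shown here is my assumption; check the documentation before relying on it:

```python
import json

# Assumed shape: match only records that carry a "score" label at all,
# and whose score is greater than 10.
when = {
    "$and": [
        {"&score": {"$exists": True}},
        {"&score": {"$gt": 10}},
    ]
}

# The condition is plain JSON, so it can be sent to the HTTP API
# or passed as a dict to the SDKs.
print(json.dumps(when))
```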

&lt;p&gt;All these new operators are available in the latest version of ReductStore and its client SDKs. You can start using them right away to improve your data retrieval experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  I/O and Replication Settings&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_14_0-released#io-and-replication-settings" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Since version 1.14.0, ReductStore gives you more control over I/O and replication behaviour, which you can tune using environment variables.&lt;/p&gt;

&lt;p&gt;The following I/O settings are available:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;RS_IO_BATCH_MAX_SIZE&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;8MB&lt;/td&gt;
&lt;td&gt;Maximum size of a batch of records sent to the client.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;RS_IO_BATCH_MAX_RECORDS&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;85&lt;/td&gt;
&lt;td&gt;Maximum number of records in a batch sent to or received from the client.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;RS_IO_BATCH_MAX_METADATA_SIZE&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;8KB&lt;/td&gt;
&lt;td&gt;Maximum size of metadata in a batch of records sent to or received from the client.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;RS_IO_BATCH_TIMEOUT&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;5s&lt;/td&gt;
&lt;td&gt;Maximum time for a batch of records to be prepared and sent to the client. If the batch is not full, it will be sent after the timeout.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;RS_IO_BATCH_RECORD_TIMEOUT&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;1s&lt;/td&gt;
&lt;td&gt;Maximum time to wait for a record to be added to a batch. If the record is not added, the unfinished batch will be sent to the client.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The I/O settings let you control the size and number of records in a batch, the metadata size, and the timeouts for sending batches to the client, so you can tune your ReductStore instance to your use case and network conditions. Note that &lt;code&gt;RS_IO_BATCH_MAX_METADATA_SIZE&lt;/code&gt; and &lt;code&gt;RS_IO_BATCH_MAX_RECORDS&lt;/code&gt; also determine the size of the HTTP/1 headers used for batched requests, which matters if your instance is behind a reverse proxy or load balancer with header size limits.&lt;/p&gt;

&lt;p&gt;In addition, you can now configure replication settings using the following environment variables:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;RS_REPLICATION_TIMEOUT&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;5s&lt;/td&gt;
&lt;td&gt;Timeout for reconnection attempts to the target server.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;RS_REPLICATION_LOG_SIZE&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;1000000&lt;/td&gt;
&lt;td&gt;Maximum number of pending records in the replication log. The oldest records are overwritten when the limit is reached.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The replication settings allow you to control the timeout for attempts to reconnect to the target server and the maximum number of pending records in the replication log.&lt;/p&gt;
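All of these are ordinary environment variables, so they can be set however your deployment launches the server. A sketch with illustrative values (not recommendations):

```shell
# Illustrative values; the defaults from the tables above are usually fine.
export RS_IO_BATCH_MAX_SIZE=16MB        # allow larger batches on a fast local network
export RS_IO_BATCH_TIMEOUT=2s           # flush partially filled batches sooner
export RS_REPLICATION_TIMEOUT=10s       # tolerate a slower replication target
export RS_REPLICATION_LOG_SIZE=500000   # smaller pending-replication backlog

# Then start the server in this environment (the launch command depends on your
# install), e.g. the reductstore binary, or docker run with the same variables
# passed via -e flags.
```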

&lt;p&gt;Read more about all available settings in the &lt;a href="https://www.reduct.store/docs/configuration" rel="noopener noreferrer"&gt;&lt;strong&gt;Configuration Reference&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Browsing in Web Console&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_14_0-released#data-browsing-in-web-console" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;ReductStore has an embedded Web Console that allows you to manage your buckets, monitor data ingestion, and manage replication settings. Since version 1.14.0, the Web Console has a new feature that allows you to browse the data stored in your buckets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.reduct.store%2Fassets%2Fimages%2Fdata_browsing-07d4ee25c74615849916b029afc13d85.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.reduct.store%2Fassets%2Fimages%2Fdata_browsing-07d4ee25c74615849916b029afc13d85.webp" alt="Data Browsing in Web Console" width="800" height="683"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can easily view the records stored in your buckets, filter them by labels, and download the contents of the records. This feature can be useful if you want to quickly check the data stored in your buckets or debug problems related to data ingestion.&lt;/p&gt;

&lt;p&gt;You can try it right now with our &lt;strong&gt;&lt;a href="https://play.reduct.store/" rel="noopener noreferrer"&gt;demo server&lt;/a&gt;&lt;/strong&gt; (API token: &lt;strong&gt;reductstore&lt;/strong&gt;) or with our &lt;strong&gt;&lt;a href="https://www.reduct.store/solutions/cloud" rel="noopener noreferrer"&gt;SaaS offering&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next?&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_14_0-released#what-next" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We have very exciting plans for the future of ReductStore, including new features, enhancements and integrations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Extensions&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_14_0-released#extensions" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Firstly, we are working on an extension system that will let you extend ReductStore with custom plugins and integrations written in Rust. One of the first extensions we plan to release is a plugin for structured formats such as JSON, CSV and Parquet, which will let you store structured data in ReductStore and query it with SQL-like syntax.&lt;/p&gt;

&lt;p&gt;The goal is to make ReductStore more versatile and suitable for a wider range of use cases: not just blob storage, but an advanced way to store and query data with different structures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Web Console Improvements&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_14_0-released#web-console-improvements" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;We are also planning to improve the Web Console by adding more features for data management, diagnostics and monitoring. One of the features we are working on will let you upload data and update records directly from the Web Console without using the HTTP API.&lt;/p&gt;




&lt;p&gt;I hope you find those new features useful. If you have any questions or feedback, don’t hesitate to use the &lt;a href="https://community.reduct.store/signup" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community&lt;/strong&gt;&lt;/a&gt; forum.&lt;/p&gt;

&lt;p&gt;Thanks for using &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>news</category>
      <category>database</category>
      <category>reductstore</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Alexey Timin</dc:creator>
      <pubDate>Fri, 21 Feb 2025 16:29:50 +0000</pubDate>
      <link>https://dev.to/atimin/-1pcn</link>
      <guid>https://dev.to/atimin/-1pcn</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/reductstore" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__org__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F5829%2Fa0981feb-b10f-44b2-93aa-286aeaf3866d.jpg" alt="ReductStore" width="200" height="200"&gt;
      &lt;div class="ltag__link__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1126177%2F63cd6ff7-c3d0-428e-b2bb-d468e0aad279.jpeg" alt="" width="460" height="460"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/reductstore/reductstore-vs-mongodb-which-one-is-right-for-your-data-349d" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;ReductStore vs. MongoDB: Which One is Right for Your Data?&lt;/h2&gt;
      &lt;h3&gt;AnthonyCvn for ReductStore ・ Feb 21&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#database&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#comparison&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>database</category>
      <category>comparison</category>
    </item>
    <item>
      <title>ReductStore v1.13.0 Released With New Conditional Query API</title>
      <dc:creator>Alexey Timin</dc:creator>
      <pubDate>Thu, 05 Dec 2024 00:00:00 +0000</pubDate>
      <link>https://dev.to/reductstore/reductstore-v1130-released-with-new-conditional-query-api-3n0o</link>
      <guid>https://dev.to/reductstore/reductstore-v1130-released-with-new-conditional-query-api-3n0o</guid>
      <description>&lt;p&gt;We are pleased to announce the release of the latest minor version of &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;, &lt;a href="https://github.com/reductstore/reductstore/releases/tag/v1.13.0" rel="noopener noreferrer"&gt;&lt;strong&gt;1.13.0&lt;/strong&gt;&lt;/a&gt;. ReductStore is a time series database designed for storing and managing large amounts of blob data.&lt;/p&gt;

&lt;p&gt;To download the latest released version, please visit our &lt;a href="https://www.reduct.store/download" rel="noopener noreferrer"&gt;&lt;strong&gt;Download Page&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's new in 1.13.0?&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_13_0-released#whats-new-in-1130" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;This release introduces a new conditional query API that should significantly improve your experience when querying or removing records. Conditional queries let you combine logical and comparison operators to filter records by labels. Previously, you could only filter records by labels with the &lt;code&gt;include&lt;/code&gt; and &lt;code&gt;exclude&lt;/code&gt; options, which were limited to exact matches, so you had to classify your records in advance at the ingestion stage. Now you simply label your records with your metrics and use conditional queries to filter them by any condition you want.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conditional Query Syntax&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_13_0-released#conditional-query-syntax" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The conditional query syntax is inspired by the MongoDB query language and is based on the JSON format. The query consists of a set of conditions that are combined using logical operators. It can be written in simple object notation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"&amp;amp;label_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"$gt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;Or in array notation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; 
  &lt;/span&gt;&lt;span class="nl"&gt;"$any_of"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"&amp;amp;label_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"$gt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt; 
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"&amp;amp;label_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"$lt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt; 
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"&amp;amp;label_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"$eq"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;Here we refer to labels with the &lt;code&gt;&amp;amp;&lt;/code&gt; symbol followed by the label name. All operators start with the &lt;code&gt;$&lt;/code&gt; symbol followed by the operator name. Constant values are just JSON values. And that's it! You don't have to learn a new query language to start using conditional queries.&lt;/p&gt;

&lt;p&gt;The current version of the Conditional Query API supports all the logical and comparison operators you need to filter records by labels. For a complete list of supported operators, see the &lt;a href="https://www.reduct.store/docs/conditional-query/comparison-operators" rel="noopener noreferrer"&gt;&lt;strong&gt;Comparison Operators&lt;/strong&gt;&lt;/a&gt; and &lt;a href="https://www.reduct.store/docs/conditional-query/logical-operators" rel="noopener noreferrer"&gt;&lt;strong&gt;Logical Operators&lt;/strong&gt;&lt;/a&gt; sections of the documentation.&lt;/p&gt;
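Because a condition is just JSON, composing operators is plain data manipulation in any SDK language. A sketch mixing the object and array notations (labels and thresholds are illustrative; multiple top-level entries combine as AND, as in the object notation above):

```python
import json

# Top-level entries combine as AND; "$any_of" matches if any branch holds.
when = {
    "&status": {"$eq": "running"},
    "$any_of": [
        {"&score": {"$gt": 10}},
        {"&score": {"$lt": -10}},
    ],
}

print(json.dumps(when, indent=2))
```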

&lt;h3&gt;
  
  
  Real-world Example&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_13_0-released#real-world-example" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;But why would you use conditional queries? Let's look at a real-world example. Suppose you collect raw vibration data from a machine and store it in ReductStore. You can calculate vibration metrics such as RMS and crest factor and store them as labels, then filter records by those metrics. For example, say the machine is working when the RMS is greater than 10, and engine problems start to appear once the crest factor exceeds 2.5. To retrieve the raw data from periods when the machine was running and the crest factor was still below that threshold, you can write the following conditional query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"&amp;amp;rms"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"$gt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"&amp;amp;crest_factor"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"$lt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;2.5&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;With our Python SDK, the conditional query can be used like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;reduct&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Bucket&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="c1"&gt;# Create a client instance, then get or create a bucket
&lt;/span&gt;    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://127.0.0.1:8383&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api_token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vibration&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Query vibration data with RMS &amp;gt; 10 and crest factor &amp;lt; 2.5
&lt;/span&gt;        &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sensor-1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;when&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;amp;rms&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$gt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;amp;crest_factor&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$lt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;2.5&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;):&lt;/span&gt;

            &lt;span class="n"&gt;_raw_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_all&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;


&lt;span class="n"&gt;loop&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_event_loop&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;loop&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run_until_complete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;We have updated all of our official SDKs to support the new Conditional Query API for querying and removing data. Read our &lt;strong&gt;&lt;a href="https://www.reduct.store/docs/guides" rel="noopener noreferrer"&gt;Guides&lt;/a&gt;&lt;/strong&gt; to learn more about conditional queries and how to use them in your applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's next?&lt;a href="https://www.reduct.store/blog/news/reductstore-v1_13_0-released#what-next" rel="noopener noreferrer"&gt;​&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;This release introduces only the basic set of logical and comparison operators. We plan to add more operators and functionality to the Conditional Query API in future releases. It will also be available in the replication engine to filter records during the replication process.&lt;/p&gt;




&lt;p&gt;I hope you find those new features useful. If you have any questions or feedback, don’t hesitate to use the &lt;a href="https://community.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore Community&lt;/strong&gt;&lt;/a&gt; forum.&lt;/p&gt;

&lt;p&gt;Thanks for using &lt;a href="https://www.reduct.store/" rel="noopener noreferrer"&gt;&lt;strong&gt;ReductStore&lt;/strong&gt;&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>news</category>
      <category>database</category>
      <category>reductstore</category>
    </item>
  </channel>
</rss>
