<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: HiveMQ</title>
    <description>The latest articles on DEV Community by HiveMQ (@hivemq_).</description>
    <link>https://dev.to/hivemq_</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F448557%2F3f2a673a-5a38-4e0d-a94f-99ccb17584d4.png</url>
      <title>DEV Community: HiveMQ</title>
      <link>https://dev.to/hivemq_</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hivemq_"/>
    <language>en</language>
    <item>
      <title>Announcing HiveMQ Pulse: The Distributed Data Intelligence Platform</title>
      <dc:creator>HiveMQ</dc:creator>
      <pubDate>Tue, 18 Feb 2025 14:00:00 +0000</pubDate>
      <link>https://dev.to/hivemq/announcing-hivemq-pulse-the-distributed-data-intelligence-platform-2ak5</link>
      <guid>https://dev.to/hivemq/announcing-hivemq-pulse-the-distributed-data-intelligence-platform-2ak5</guid>
      <description>&lt;p&gt;&lt;strong&gt;Author of the blog: Gaurav Suman,Director of Product Marketing at HiveMQ&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Over the past several years, there has been a fundamental shift in the requirements for data management. The old approach—rigid, hierarchical structures and siloed systems—can no longer support the demands of modern, data-rich use cases. There’s an ever-increasing demand for productivity, plus the constant pressure of training and retaining the talent needed to deliver on the promises of Industry 4.0 and beyond. &lt;/p&gt;

&lt;p&gt;Over the last few weeks, we’ve been showcasing an approach that’s helping forward-thinking companies tackle these challenges: the Unified Namespace (UNS). We’ve learned &lt;a href="https://www.hivemq.com/blog/bringing-clarity-to-unified-namespace-approach/" rel="noopener noreferrer"&gt;how it should be understood&lt;/a&gt;, how to implement its &lt;a href="https://www.hivemq.com/blog/elements-of-the-unified-namespace/" rel="noopener noreferrer"&gt;foundational elements&lt;/a&gt;, and crucially, why a &lt;a href="https://www.hivemq.com/blog/why-mqtt-is-critical-building-unified-namespace/" rel="noopener noreferrer"&gt;single technology or protocol alone&lt;/a&gt; (even MQTT) can’t fulfill everything a UNS promises.&lt;/p&gt;

&lt;p&gt;HiveMQ has powered UNS implementations at some of the world’s largest energy, pharmaceutical, and logistics organizations. We’ve seen three persistent challenges: first, the difficulty of creating a truly unified, interoperable data model across diverse systems; second, the need for real-time on-location data insights that don’t require perfect connectivity or round trips to the cloud; and third, the pressure to ensure robust security, governance, and compliance at every step. Conventional bolt-on or proprietary platforms miss one or more of these needs, forcing enterprises to compromise.&lt;/p&gt;

&lt;p&gt;We kept hearing the same questions from our customers: Can we get a real-time view of our operations, keep control of our data, and make sure the right people have access to the right information—whether at the edge or in the cloud? The short answer is “yes,” and it’s why we’re excited to announce a &lt;a href="https://www.hivemq.com/products/hivemq-pulse" rel="noopener noreferrer"&gt;private preview of HiveMQ Pulse.&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is HiveMQ Pulse?
&lt;/h2&gt;

&lt;p&gt;HiveMQ Pulse is a real-time, distributed data intelligence platform that delivers the best of both worlds—bringing intelligence to the edge for immediate decision-making while ensuring centralized governance and interoperability across the entire data ecosystem. It allows enterprises to manage, transform, and govern distributed data while maintaining a single, structured view of the enterprise. By historicizing and governing in-flight messages, enabling real-time queries and compute tasks, and applying contextual information, Pulse ensures that data is not only available but also actionable for AI and other critical use cases.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/wporewUB0WE"&gt;
&lt;/iframe&gt;
 &lt;/p&gt;
&lt;h2&gt;
  
  
  Why Enterprises Need Distributed Data Intelligence
&lt;/h2&gt;

&lt;p&gt;Both IIoT and IoT data have increased in complexity with rapid growth in devices, legacy systems, and disparate data formats, making it harder for enterprises to unify their data in a structured, actionable way. Without a scalable, intelligent approach to managing this complexity, businesses risk falling behind.&lt;/p&gt;

&lt;p&gt;Faster access to insights means faster time-to-market and greater ability to make adjustments that prevent downtime, reduce costs, and ultimately increase profits. Research shows that businesses operating in real-time environments achieve 62% higher revenues and make decisions 30% faster, proving that the ability to process, analyze, and act on data as it’s generated directly translates to business success.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpr8ku3tdrfyyxei65muj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpr8ku3tdrfyyxei65muj.png" alt=" " width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s why data-driven decision-making is no longer optional—it’s a competitive necessity. Industry leaders recognize that adaptability is key, and organizations that successfully harness real-time intelligence see measurable advantages. This is where HiveMQ Pulse comes in—delivering the unified data management, real-time insights, distributed intelligence, and AI-readiness that enterprises need to succeed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcawt2g2bgydyvacxv4c9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcawt2g2bgydyvacxv4c9.png" alt=" " width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Unified Data Management
&lt;/h3&gt;

&lt;p&gt;The Unified Namespace pattern has gained significant traction, and &lt;a href="https://www.hivemq.com/products/mqtt-broker/" rel="noopener noreferrer"&gt;HiveMQ&lt;/a&gt; has worked with many customers to implement UNS at scale on our MQTT Platform. While MQTT is an ideal technology—and HiveMQ is the ideal broker—for transporting data reliably at scale, there is an absence of tooling for managing the metadata and context of a Unified Namespace. HiveMQ Pulse fills this gap by providing a structured way to manage UNS data transformations efficiently.&lt;/p&gt;

&lt;p&gt;Since a UNS approach ensures that all data across an enterprise is organized under a single, centralized data structure, many companies mistakenly assume a data warehouse or traditional enterprise data platform can serve as their UNS. &lt;a href="https://www.hivemq.com/blog/why-data-warehouse-cannot-be-the-unified-namespace/" rel="noopener noreferrer"&gt;This is not the case&lt;/a&gt;. Unlike traditional architectures where transformations and data modifications are performed at individual sites or edge gateways, a UNS enabled by HiveMQ Pulse allows for changes to be made at the namespace level. The underlying system then orchestrates where those changes need to take effect.&lt;/p&gt;

&lt;p&gt;For example, instead of manually modifying a data tag at a specific site, the user updates the Unified Namespace. The system then ensures the transformation occurs at the relevant location automatically, eliminating the need to interact with individual gateways. This declarative approach is core to HiveMQ Pulse, enabling it to autonomously determine the type and location of operations to execute when the namespace is modified.&lt;/p&gt;
&lt;h3&gt;
  
  
  Actionable Insights
&lt;/h3&gt;

&lt;p&gt;Actionable Insights are real-time analytics and calculations that occur as close to the data source as possible, rather than relying on cloud-based processing. This ensures immediate feedback and response, reducing latency and dependency on centralized infrastructure.&lt;/p&gt;

&lt;p&gt;For instance, an operational efficiency metric such as Overall Equipment Effectiveness (OEE) can be calculated on-site from existing performance, quality, and availability data without sending raw data to the cloud. This capability enables businesses to derive real-time insights and take corrective action instantly. The ability to generate and apply insights at the point of need is a fundamental shift in data intelligence.&lt;/p&gt;
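
&lt;p&gt;As a back-of-the-envelope illustration (the figures are made up, not from this article), the OEE product of the three factors can be computed locally in a couple of lines:&lt;/p&gt;

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness: the product of its three factors (each 0..1)."""
    return availability * performance * quality

# Example: 90% availability, 95% performance, 98% quality
score = oee(0.90, 0.95, 0.98)
print(round(score, 4))  # 0.8379
```

&lt;p&gt;The point is that a metric like this needs only values already present on-site, so it can be computed at the edge without a cloud round trip.&lt;/p&gt;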

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/xY-2zlmCwf8"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Distributed Intelligence
&lt;/h3&gt;

&lt;p&gt;Distributed Intelligence refers to the decentralization of computing power, ensuring that intelligence and decision-making occur at the appropriate edge locations rather than relying on a central processing hub. This architecture reduces bottlenecks, improves response times, and ensures resilience.&lt;/p&gt;

&lt;p&gt;With a distributed intelligence approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The Unified Namespace is defined in a centralized plane, but the execution of logic, transformations, and processing happens at the local nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Edge devices autonomously learn and execute tasks based on intelligence that is centrally defined but dynamically distributed, eliminating the need for explicit coordination.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, if a script for converting Celsius to Fahrenheit needs to run at a specific filling line, the system automatically determines where it should execute. The user does not need to specify the site or agent responsible for running it—the system handles the orchestration at the edge.&lt;/p&gt;
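
&lt;p&gt;A toy sketch of this idea, with hypothetical topic names and a plain dictionary standing in for the orchestration the platform performs:&lt;/p&gt;

```python
# A transformation defined once at the namespace level...
def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

# ...and looked up and applied by whichever local node owns the matching topic.
TRANSFORMS = {"site-a/filling-line-1/temperature": celsius_to_fahrenheit}

def apply_transform(topic, value):
    fn = TRANSFORMS.get(topic)
    return fn(value) if fn else value

print(apply_transform("site-a/filling-line-1/temperature", 100))  # 212.0
```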

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/-5KVX65v50A"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Powers AI
&lt;/h3&gt;

&lt;p&gt;Powering AI means allowing for easy AI model integration and ensuring that data is contextualized, structured, and accessible in a way that supports AI and machine learning applications. The key aspects of an AI-ready system include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rich metadata&lt;/strong&gt;: Every data point is tagged with units, context, and historical changes, ensuring AI models can properly interpret and analyze it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lookup capabilities&lt;/strong&gt;: Data entries include IDs and relationships that allow for easy traceability and contextual understanding.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Future AI integrations&lt;/strong&gt;: The system is designed to seamlessly integrate with AI models and automation tools by maintaining high data quality and integrity.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For instance, if a dataset containing machine sensor data is stored in a data lake, an AI model can quickly retrieve associated contextual information—such as units, location, equipment type, and past trends—to generate accurate predictions and recommendations.&lt;/p&gt;
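
&lt;p&gt;One way to picture such a metadata-rich data point (the field names and values are illustrative, not a HiveMQ schema):&lt;/p&gt;

```python
import json

# A sensor reading carrying units, context, and lineage alongside the raw value,
# so a downstream model can interpret it without out-of-band knowledge.
reading = {
    "id": "site-a/packaging/line-2/motor-3/temperature",
    "value": 71.4,
    "unit": "degC",
    "equipment_type": "servo motor",
    "location": "Site A / Packaging / Line 2",
    "timestamp": "2025-02-18T14:00:00Z",
}
print(json.dumps(reading, indent=2))
```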

&lt;h2&gt;
  
  
  Under the Hood: The Technology Behind HiveMQ Pulse
&lt;/h2&gt;

&lt;p&gt;HiveMQ Pulse is based on a distributed architecture that overlays an enterprise MQTT deployment. The strength of HiveMQ’s proven MQTT platform is now coupled with an agent-based architecture that captures and shares insights across the enterprise.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/24gEHa9Li5s"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;The HiveMQ Pulse distributed intelligence platform includes the following components in its deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pulse Server
&lt;/h3&gt;

&lt;p&gt;Manages information models, authorizes Agents, and orchestrates queries. Scales policy enforcement, enables high-throughput processing, and supports seamless OT-IT integration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pulse Agent
&lt;/h3&gt;

&lt;p&gt;Indexes and processes edge data with a distributed calculation engine. Filters, historicizes, and governs in-flight messages while enabling real-time queries and compute tasks. The agents can be deployed on/alongside standards-based endpoints and brokers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pulse Client
&lt;/h3&gt;

&lt;p&gt;Secure web app/GUI for managing data models, policies, and queries. Supports UNS modeling, structured data interaction, and simple dashboards for actionable insights.&lt;/p&gt;

&lt;p&gt;Together, these components deliver a seamless experience in which everyday users and the teams building and running advanced capabilities are served equally well, and in a distributed manner.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts: A New Era of Distributed Data Intelligence Begins Today
&lt;/h2&gt;

&lt;p&gt;As operations grow more complex, businesses need solutions that don’t just collect data but make it immediately useful—without sacrificing security, governance, or flexibility. HiveMQ Pulse is built to address exactly that need. By delivering real-time intelligence at the source, ensuring data sovereignty, and enabling seamless edge-to-cloud governance, Pulse represents a fundamental shift in how data is accessed, processed, and acted upon.&lt;/p&gt;

&lt;p&gt;With unified data management, actionable insights, distributed intelligence, and the ability to power AI use cases, HiveMQ Pulse is more than just a UNS enabler—it’s a new way to think about data architectures. It eliminates data silos, creates true IT-OT convergence, and empowers teams with real-time visibility, ensuring enterprises are ready for the AI-driven future without unnecessary complexity.&lt;/p&gt;

&lt;p&gt;Now is the time to move beyond bolted-on, centralized, and reactive data solutions. The industry is changing, and businesses that can act on insights faster, ensure resilience, and maintain control over their data will have the competitive edge.&lt;/p&gt;

&lt;h3&gt;
  
  
  Get Started with HiveMQ Pulse
&lt;/h3&gt;

&lt;p&gt;HiveMQ Pulse is now available in private preview. &lt;a href="https://www.hivemq.com/products/hivemq-pulse" rel="noopener noreferrer"&gt;Sign up today&lt;/a&gt; to explore how it can revolutionize your data strategy.&lt;/p&gt;

</description>
      <category>hivemq</category>
      <category>unifiednamespace</category>
      <category>iot</category>
    </item>
    <item>
      <title>Identifying, Acquiring and Integrating Plant-Floor Data for Smart Manufacturing</title>
      <dc:creator>HiveMQ</dc:creator>
      <pubDate>Thu, 22 Jun 2023 15:27:24 +0000</pubDate>
      <link>https://dev.to/hivemq_/identifying-acquiring-and-integrating-plant-floor-data-for-smart-manufacturing-3fc9</link>
      <guid>https://dev.to/hivemq_/identifying-acquiring-and-integrating-plant-floor-data-for-smart-manufacturing-3fc9</guid>
      <description>&lt;p&gt;The success of modern manufacturing enterprises relies heavily on the ability to collect, analyze, and act upon data. It is, therefore, crucial to pinpoint potential data sources and determine the most efficient methods for acquiring data from predominantly legacy systems to achieve desired outcomes. Ensuring a cost-effective, scalable, and replicable solution is essential. Additionally, it is vital to aggregate the collected data to a level where it can be seamlessly integrated with external enterprise systems.&lt;/p&gt;

&lt;p&gt;In manufacturing environments, a diverse range of data is generated from various sources on the plant floor, each carrying unique importance and objectives. To effectively manage this data, it is crucial to initially identify the available information and determine the appropriate methods for accessing it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This article serves as the second instalment in a six-part series titled &lt;a href="https://www.hivemq.com/comprehensive-guide-to-industrial-data-management-for-smart-manufacturing-iiot/" rel="noopener noreferrer"&gt;A Comprehensive Guide To Industrial Data Management for Smart Manufacturing&lt;/a&gt;, discussing a practical approach to help you begin implementing data management for smart manufacturing. In Part 1 of this series, &lt;a href="https://www.hivemq.com/articles/power-of-iot-data-management-in-smart-manufacturing/" rel="noopener noreferrer"&gt;The Power of Data Management in Driving Smart Manufacturing Success&lt;/a&gt;, we explored how to establish a well-thought-out strategy for harnessing the power of data in smart manufacturing.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Identifying Potential Data Sources for Smart Manufacturing
&lt;/h2&gt;

&lt;p&gt;Building upon this understanding, we can start identifying potential data sources for smart manufacturing implementation by examining the Computer Integrated Manufacturing (CIM) pyramid.&lt;/p&gt;

&lt;p&gt;This reference model, developed in the 1990s, provides a framework for implementing industrial automation. It focuses on collecting, coordinating, sharing, and transmitting data and information between various systems and sub-systems through software applications and communication networks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyf5rv9rf6fxqw1su79k6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyf5rv9rf6fxqw1su79k6.png" alt=" " width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below is a list of possible data sources for smart manufacturing implementation and the reasons why.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Programmable Logic Controllers (PLCs) at Level 2&lt;/li&gt;
&lt;li&gt;Supervisory Control and Data Acquisition Systems (SCADA) at Level 3&lt;/li&gt;
&lt;li&gt;Historians at Level 3&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Firstly, sensors, actuators, RTUs, CNCs, and other field equipment are not suitable as data sources, because they connect in large numbers to isolated networks over outdated protocols. Furthermore, since PLCs already collect and expose their data, there is no need to establish direct connections to these components.&lt;/p&gt;

&lt;p&gt;On the other hand, PLCs efficiently handle all the data from sensors and devices at lower levels, processing the information based on their respective scan time resolutions, which can be as low as ten milliseconds. This information is typically time-series data but can also include calculations and alarm data. In turn, PLCs make this information available to higher-level applications through communication protocols like Modbus, OPC DA, and OPC UA, making them great data sources.&lt;/p&gt;
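
&lt;p&gt;Values read from a PLC over a protocol like Modbus typically arrive as raw integer registers that must be scaled into engineering units. A minimal sketch, assuming a hypothetical 16-bit register mapped onto a 0 to 150 bar pressure range:&lt;/p&gt;

```python
def scale_register(raw, raw_min=0, raw_max=65535, eng_min=0.0, eng_max=150.0):
    """Linearly map a raw 16-bit Modbus register value onto an engineering range."""
    span = raw_max - raw_min
    return eng_min + (raw - raw_min) * (eng_max - eng_min) / span

# e.g. a pressure sensor wired as 0..65535 counts over 0..150 bar
print(round(scale_register(32768), 2))
```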

&lt;p&gt;SCADA and Historian systems do not gather all the data provided by the connected PLCs. Rather, they focus on collecting the most critical information and data with lower frequency. Initially designed as consumers of industrial data through an OPC client interface, SCADA and Historian systems have also evolved into producers of industrial data. They now fulfill both roles simultaneously by implementing an OPC UA server interface.&lt;/p&gt;

&lt;p&gt;Now that we’ve identified our potential data integration sources, let’s examine each source closely, considering their abilities, pros and cons, and connectivity and data-gathering alternatives.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating Data from Programmable Logic Controllers (PLCs)
&lt;/h2&gt;

&lt;p&gt;Within a large industrial facility, numerous PLCs of varying sizes and capabilities can be found, typically arranged in a hierarchical manner. The highest-level PLCs function as data concentrators and are ideal points for data acquisition. Data Concentration PLCs may sometimes relay their information to a standalone OPC UA server, which would then be used as the access point.&lt;/p&gt;

&lt;p&gt;In scenarios where you do not have this kind of hierarchical arrangement of PLCs, you’d need to connect to the primary PLC of each working cell, production line, or plant area to collect data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn38ah3shmo5vk84dqhpv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn38ah3shmo5vk84dqhpv.png" alt=" " width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pros of Integrating Data from PLCs
&lt;/h3&gt;

&lt;p&gt;PLCs have swift scanning capabilities that ensure a consistent stream of updated data originating from sensors and other production machinery, with a resolution starting at about tens of milliseconds. Most significantly, PLCs stand out for their reliability, stability, and deterministic nature. They guarantee minimal downtime, an essential feature for uninterrupted data collection, thereby avoiding compromised signal quality or disruptions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cons of Integrating Data from PLCs
&lt;/h3&gt;

&lt;p&gt;PLCs are situated at the lower levels of the automation pyramid, meaning data collection occurs close to the hardware with limited abstraction. Such a low level of abstraction introduces complexity in managing what could be thousands of process signals, calculations, and alarms. Moreover, tags may have inconsistent naming conventions across various plant areas, depending on the PLC vendor or the control logic developer. Part 3 of this series discusses contextualizing and normalizing this kind of data before integrating it with enterprise systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating Data from Supervisory Control and Data Acquisition (SCADA)
&lt;/h2&gt;

&lt;p&gt;SCADA systems can communicate with factory floor field devices using legacy communication protocols, OPC, or Fieldbus protocols. Typically, SCADA applications acquire data at intervals ranging from half a second to one minute. SCADA systems process thousands of signals, utilizing a standardized approach based on essential parameters for accurate information management, often implementing an OPC UA server interface. These parameters include the tag name, a description, the sampling time, a minimum and a maximum value, and engineering units. SCADA systems, therefore, make good access points for acquiring data with some semblance of a data model.&lt;/p&gt;
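
&lt;p&gt;The tag parameters listed above can be modelled as a simple record (a sketch, not any vendor’s actual schema):&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class ScadaTag:
    name: str              # tag name
    description: str
    sampling_time_s: float  # sampling time in seconds
    minimum: float
    maximum: float
    unit: str              # engineering units

tag = ScadaTag("FT-101", "Inlet flow", 1.0, 0.0, 500.0, "l/min")
print(tag.name, tag.unit)
```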

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh25qpalqdoyry1lequzh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh25qpalqdoyry1lequzh.png" alt=" " width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pros of Integrating Data from SCADA
&lt;/h3&gt;

&lt;p&gt;A SCADA system primarily functions as a data acquisition system, effectively serving as a robust data concentrator. It collects data from various field devices and PLCs, controlling different production lines or functional areas. With SCADA systems adopting a standardized approach to handling industrial data, often based on a common data model, it simplifies identifying and recognizing data streams required for smart manufacturing implementation.&lt;/p&gt;

&lt;p&gt;Furthermore, given their frequent need to interface with MES and ERP systems, they are typically already integrated into the plant or corporate network, which eases the process of transferring data to the enterprise level, thereby streamlining overall data management.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cons of Integrating Data from SCADA
&lt;/h3&gt;

&lt;p&gt;Compared to PLCs, SCADA systems tend to be less reliable due to several factors. Regular updates to SCADA systems often necessitate application restarts and reboots of the Windows operating system for security patches or installations. Additionally, the modular architecture of SCADA systems may lead to overloads that could disrupt communication tasks essential for data integration with enterprise applications.&lt;/p&gt;

&lt;p&gt;Further, direct connection to a SCADA system often requires communication via its API or SDK, which may demand substantial effort to maintain and update connectors for various SCADA systems while ensuring compatibility. As with PLCs, an alternative solution is to connect through an OPC UA server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating Data from Historians
&lt;/h2&gt;

&lt;p&gt;The Historian’s ability to interface with many common industrial protocols and Fieldbuses enables it to collect data from a wide range of plant-floor devices and systems and log it as time-series data. This data is crucial for tracking and analyzing the performance of machines and systems and detecting anomalies, which makes it a potential source for data integration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhhzl2lpyqrf36h6lle2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhhzl2lpyqrf36h6lle2.png" alt=" " width="800" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pros of Integrating Data from Historians
&lt;/h3&gt;

&lt;p&gt;Historians arrange time-series data using a hierarchical model associated with the asset. This model structures data like branches on a tree, creating an organized, logical system for pinpointing and accessing specific data points.&lt;/p&gt;

&lt;p&gt;Much like SCADA systems, Historians are often already integrated into the plant or corporate network, simplifying data transfer to the cloud and enhancing overall data management efficiency. However, unlike SCADA systems, Historians come with superior reliability features, such as built-in data buffering capabilities and store-and-forward mechanisms. This built-in resilience to network disruptions or unforeseen downtime ensures the data’s reliability, provided the source device is available.&lt;/p&gt;
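
&lt;p&gt;The store-and-forward idea can be sketched as a small buffer that drains in order once the uplink returns (a toy model, not any Historian’s actual implementation):&lt;/p&gt;

```python
from collections import deque

class StoreAndForward:
    """Buffer samples while the uplink is down; flush them in order on reconnect."""
    def __init__(self):
        self.buffer = deque()

    def record(self, sample, uplink):
        if uplink is None:
            self.buffer.append(sample)   # network down: keep the sample locally
        else:
            while self.buffer:
                uplink.append(self.buffer.popleft())  # drain the backlog first
            uplink.append(sample)

saf = StoreAndForward()
saf.record(1, None)   # offline
saf.record(2, None)   # offline
sent = []
saf.record(3, sent)   # back online: backlog flushes before the new sample
print(sent)  # [1, 2, 3]
```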

&lt;h3&gt;
  
  
  Cons of Integrating Data from Historians
&lt;/h3&gt;

&lt;p&gt;Historians are designed to optimize data sampling and storage, but they might only gather a portion of the data, limiting overall data availability. Additionally, data obtained from Historians is in its raw form and does not deliver a time-specific snapshot of an asset, which is an essential input for cloud-based analytic applications.&lt;/p&gt;

&lt;p&gt;As with PLCs and SCADA, directly connecting to a Historian using its SDK or API poses a maintenance challenge. Again, the alternative is to connect to it through an OPC UA server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating Plant-Floor Data to The Enterprise
&lt;/h2&gt;

&lt;p&gt;When gathering data from established industrial systems, you’re bound to accumulate information from diverse sources based on the advantages and disadvantages I’ve previously mentioned. Regardless of the device acting as a data source, a software layer must be in place that communicates with this source via its native protocol. This layer should be able to request tag and time-series data and make it accessible to higher levels and external systems through a single standardized interface, typically OPC UA.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fotdolspk8cccso42g920.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fotdolspk8cccso42g920.png" alt=" " width="800" height="731"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Direct connections to PLCs, CNCs, SCADA, Historians, etc., mean your Edge IT infrastructure has to set up and manage multiple protocol endpoints. This approach is not scalable, secure, or reliable. Instead, it’s preferable to use a connectivity platform to simplify the management of diverse interfaces and protocols, thereby enhancing data integration efficiency.&lt;/p&gt;

&lt;p&gt;KEPServerEX, for instance, could be an appropriate choice for efficient data management. It can interact with various devices and machines, irrespective of the manufacturer, and supports numerous communication protocols. Crucially, it exposes your plant-floor data for enterprise integration through a single interface, OPC UA, which significantly eases the process of integrating data into enterprise applications.&lt;/p&gt;

&lt;p&gt;Data in a standardized communication interface like OPC UA can be integrated into the enterprise network using a communication protocol like MQTT. MQTT is ideal for this stage of integration for several reasons:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: MQTT’s publish/subscribe model is scalable, especially when dealing with many devices, making it a good fit for extensive manufacturing setups or those expected to grow significantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud Integration&lt;/strong&gt;: MQTT is a popular choice for cloud integration due to its native support on many IoT platforms. If manufacturing data needs to be integrated with a cloud platform, converting it to MQTT can simplify this task.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-time Data Processing&lt;/strong&gt;: MQTT is suitable for real-time data processing thanks to its lightweight and real-time capabilities. This is advantageous in situations where immediate insights and swift decision-making are crucial.&lt;/p&gt;
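
&lt;p&gt;To make the handoff concrete, here is a sketch of mapping a plant-floor tag onto a UNS-style MQTT topic and JSON payload. The topic layout and field names are illustrative, and a client library such as Eclipse Paho would perform the actual publish:&lt;/p&gt;

```python
import json

def to_mqtt(site, area, line, tag, value, unit):
    """Map a plant-floor tag onto a UNS-style MQTT topic and a JSON payload."""
    topic = f"enterprise/{site}/{area}/{line}/{tag}"
    payload = json.dumps({"value": value, "unit": unit})
    return topic, payload

topic, payload = to_mqtt("site-a", "packaging", "line-2", "temperature", 71.4, "degC")
print(topic)  # enterprise/site-a/packaging/line-2/temperature
# An MQTT client would then publish it, e.g. client.publish(topic, payload)
```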

&lt;p&gt;To enhance the quality and speed of real-time insights, it’s imperative that we first contextualize, normalize, and model the acquired and aggregated data before integrating it. Furthermore, to establish the comprehensive context necessary for implementing smart manufacturing, it’s crucial to integrate your plant floor data with various platforms. These include Manufacturing Execution Systems (MES), Enterprise Resource Planning systems (ERP), and Laboratory Information Management Systems (LIMS), among others. As such, the necessity of introducing DataOps to operationalize data management at this layer becomes clear. This topic will be our main focus in Part 3 of this series.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we have navigated through identifying data sources and integration opportunities as a first step to implementing a data management strategy for smart manufacturing. We addressed identifying sources of plant-floor data and the procedures for data acquisition. We also highlighted the methods to integrate this data, making it accessible via a standardized interface for enterprise integration.&lt;/p&gt;

&lt;p&gt;In Part 3 of this six-part series titled &lt;a href="https://www.hivemq.com/comprehensive-guide-to-industrial-data-management-for-smart-manufacturing-iiot/" rel="noopener noreferrer"&gt;A Comprehensive Guide To Industrial Data Management for Smart Manufacturing&lt;/a&gt;, we delve into the methods of &lt;a href="https://www.hivemq.com/article/data-modelling-contextualization-normalization-for-smart-manufacturing/" rel="noopener noreferrer"&gt;transforming, standardizing, normalizing, and modelling the collected data&lt;/a&gt;. This process is crucial to ensure that the data can be correctly understood and interpreted, thus enhancing its quality and usefulness.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the Author&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/kudzaimanditereza" rel="noopener noreferrer"&gt;Kudzai Manditereza&lt;/a&gt; is a Developer Advocate at &lt;a href="https://www.hivemq.com/" rel="noopener noreferrer"&gt;HiveMQ&lt;/a&gt; and the Founder of Industry40.tv. He is the host of an IIoT Podcast and is involved in Industry4.0 research and educational efforts.&lt;/p&gt;

</description>
      <category>smartmanufacturing</category>
      <category>iot</category>
      <category>iiot</category>
    </item>
    <item>
      <title>The Power of Data Management in Driving Smart Manufacturing Success</title>
      <dc:creator>HiveMQ</dc:creator>
      <pubDate>Thu, 25 May 2023 05:00:00 +0000</pubDate>
      <link>https://dev.to/hivemq_/the-power-of-data-management-in-driving-smart-manufacturing-success-1klc</link>
      <guid>https://dev.to/hivemq_/the-power-of-data-management-in-driving-smart-manufacturing-success-1klc</guid>
      <description>&lt;p&gt;&lt;a href="https://www.hivemq.com/solutions/manufacturing/" rel="noopener noreferrer"&gt;Smart Manufacturing &lt;/a&gt;encompasses a diverse range of priorities and objectives for companies in the industry. For some, it’s about innovating customer service to stay ahead of emerging competitors, while others focus on improving quality and cost performance. Despite the differing objectives, every smart manufacturing pursuit emphasizes the vital role of data in realizing their desired outcomes.&lt;/p&gt;

&lt;p&gt;However, many businesses embark on their smart manufacturing or Industry 4.0 journey with isolated digital projects, lacking a comprehensive data management strategy. While these projects may produce positive results, they eventually give rise to various challenges that necessitate reevaluating the approach. Issues include a complex network of digital technologies, an array of disparate solutions that are challenging to scale, an uncoordinated and frequently inefficient digital infrastructure, and escalating costs associated with digital investments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;This article serves as the first installment in a six-part series that will explore how to establish a well-thought-out strategy for harnessing the power of data in smart manufacturing. By providing an overview of the importance of data management and its role in driving success, we lay the groundwork for the subsequent articles in this series, which will delve deeper into specific data management techniques.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Aligning Data Management with Business Objectives
&lt;/h2&gt;

&lt;p&gt;It is essential to emphasize that the key to successfully implementing a comprehensive data management strategy is ensuring that it aligns with and supports the overarching business strategy. In many manufacturing businesses, there’s a continuous effort to create consistency in both business and operational methods. This process started by introducing a shared ERP system at the company level and has since expanded to include process and manufacturing operations.&lt;/p&gt;

&lt;p&gt;A significant obstacle in promoting uniformity across operations lies in the vast diversity present among physical production facilities. Consequently, numerous companies are identifying clusters of similar operating technologies as a starting point for establishing a degree of consistency. Then they employ data-driven key performance indicators (KPIs) like Overall Equipment Effectiveness (OEE), Lead Time, Time to Resolution (TTR) for quality issues, Product Development Costs, and Supply Chain Cycle Time, among others, to measure the level of achievement for established objectives. &lt;/p&gt;
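&lt;p&gt;To make one of these KPIs concrete, here is a short sketch of the standard OEE decomposition (Availability × Performance × Quality); the input figures are invented for illustration:&lt;/p&gt;

```python
def oee(planned_min: float, downtime_min: float, ideal_cycle_min: float,
        total_count: int, good_count: int) -> float:
    """Overall Equipment Effectiveness = Availability * Performance * Quality."""
    run_time = planned_min - downtime_min
    availability = run_time / planned_min               # share of planned time actually run
    performance = (ideal_cycle_min * total_count) / run_time  # actual vs. ideal output rate
    quality = good_count / total_count                  # share of good units
    return availability * performance * quality

# Invented example: 500 planned minutes, 100 of downtime,
# 0.5 min ideal cycle time, 600 units produced, 540 of them good.
print(round(oee(500, 100, 0.5, 600, 540), 2))  # 0.54
```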

&lt;p&gt;However, it is important to recognize that implementing this measurement system can be more complex than initially anticipated. It goes way beyond developing a weekly or monthly dashboard. Rather, it involves bringing measurement closer to real-time and proactively driving actions that impact business performance. As previously highlighted, the data used for calculating and summarizing KPIs are often intricate, relying on information from numerous sources that do not normally talk to each other. Therefore, a well-orchestrated data management approach is essential in facilitating the automated calculation and aggregation of such KPIs across the organization. &lt;/p&gt;

&lt;p&gt;As you will discover in later parts of this series, it is critical that performance calculations utilize trusted data that represents a single source of truth and, whenever possible, is not created through a manual process subject to individual bias. This enables autonomous calculation and aggregation of data into a higher-level enterprise data structure: a Unified Namespace.&lt;/p&gt;

&lt;h2&gt;
  
  
  Addressing the Challenges of Plant-Floor Data
&lt;/h2&gt;

&lt;p&gt;As already established, plant-floor data can potentially drive significant business outcomes, but it often presents challenges due to its raw and unstructured nature. Originally designed for process control, this data can appear in various formats, such as machine, transactional, or time series data, and it lacks context and standardization. Simply put, plant-floor data is not immediately compatible with cloud-based enterprise applications. As a result, managing the integration of this complex data to derive meaningful insights from it can be daunting. It requires a data management approach that enforces standardization and repeatability across a manufacturing enterprise.&lt;/p&gt;

&lt;p&gt;In addition, data quality remains the foremost challenge for manufacturing companies’ analytics initiatives, with teams often dedicating a significant portion of their time to data preparation and cleaning. Common data quality issues in manufacturing operations include missing or incorrect data, inconsistent data, unsuitable formats, and duplicated data. These can be addressed by implementing standardized governance processes.&lt;/p&gt;

&lt;p&gt;Legacy industrial systems also pose several challenges for smart manufacturing analytics due to their outdated technologies, lack of connectivity, and resistance to change. Some of the key challenges include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration difficulties:&lt;/strong&gt; Integrating legacy systems with modern smart manufacturing technologies can be complex, time-consuming, and costly. These systems often lack standardized interfaces and use proprietary communication protocols, which makes integrating newer technologies challenging.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data access and compatibility:&lt;/strong&gt; Legacy systems often have limited data storage and retrieval capabilities. Extracting, processing, and integrating data from these systems into smart manufacturing platforms may require extensive data manipulation and transformation, which can be time-consuming and error-prone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resistance to change:&lt;/strong&gt; Organizations that rely on legacy systems may face resistance from employees accustomed to using the old systems. This can slow down the adoption of smart manufacturing technologies and hinder overall progress.&lt;/p&gt;

&lt;p&gt;A well-defined data management strategy can help alleviate these challenges by developing a systematic approach to integrating data from legacy systems with newer smart manufacturing technologies. This may involve using middleware, data connectors, or custom-built APIs to bridge the communication gap between systems and enable seamless data exchange. Further, data standardization and transformation can be enforced by establishing standardized data formats and structures to facilitate smooth data exchange and processing across different systems. &lt;/p&gt;
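&lt;p&gt;As a toy example of such standardization, the sketch below parses a fixed-width record from a hypothetical legacy controller into a standardized structure; the record layout and field names are invented for illustration:&lt;/p&gt;

```python
def parse_legacy_record(record: str) -> dict:
    """Parse a hypothetical fixed-width legacy record into a
    standardized dictionary. Layout (invented): machine id in
    columns 0-5, status in 6-9, numeric value in 10-17."""
    return {
        "machine_id": record[0:6].strip(),
        "status": record[6:10].strip(),
        "value": float(record[10:18]),
    }

# 18-character legacy record: "PRS001" + "RUN " + "  123.40"
print(parse_legacy_record("PRS001RUN   123.40"))
# {'machine_id': 'PRS001', 'status': 'RUN', 'value': 123.4}
```

In practice this kind of transformation would live in the middleware or data connector layer mentioned above, so every downstream consumer sees the same standardized shape.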

&lt;p&gt;By implementing a comprehensive data management strategy, organizations can overcome many of the challenges of legacy industrial systems and create a strong foundation for the successful adoption of smart manufacturing technologies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Leveraging Semantic Data Representation for Interoperability
&lt;/h2&gt;

&lt;p&gt;Implementing capabilities to support a manufacturing company throughout its production cycle is crucial. This includes determining what products to create, the required materials, the production location, specific operations and equipment, and the quality-critical process parameters. Additionally, analyzing past performance is essential for continuous improvement.&lt;/p&gt;

&lt;p&gt;To achieve this, it is vital to establish a robust data infrastructure, including accurate models for materials and product families, along with quality- and production-specific specifications. Furthermore, it is necessary to consider the production units themselves, ensuring a high degree of repeatability by defining machinery categories or types and setting up instances of those types. It is also important to connect a production unit’s performance specifications with material definitions and with quality, food safety, or other compliance-related factors.&lt;/p&gt;

&lt;p&gt;Having a strong data model is the foundation for building various tools and systems around it. If the data infrastructure is weak, developers might have to compensate for the shortcomings with additional code in the automation or reporting layers. However, if the data model surrounding materials, processes, production units, and personnel is well-structured, creating custom user experiences or reports becomes significantly easier due to the presence of logical structures. In summary, investing in a solid data model is essential for streamlining operations and supporting growth in the manufacturing industry.&lt;/p&gt;
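&lt;p&gt;A minimal sketch of such a model might look like the following; the class and field names are illustrative, not a prescribed schema:&lt;/p&gt;

```python
from dataclasses import dataclass, field

@dataclass
class MachineType:
    """A category of production unit, defined once and reused across sites."""
    name: str
    max_rate_per_min: float

@dataclass
class MachineInstance:
    """A concrete production unit: an instance of a type, placed at a site,
    linked to its quality-critical specifications."""
    machine_id: str
    machine_type: MachineType
    site: str
    quality_specs: dict = field(default_factory=dict)

# A machinery type defined once, then instantiated per production unit.
filler = MachineType("bottle_filler", max_rate_per_min=120.0)
unit = MachineInstance("F-07", filler, "plant1",
                       quality_specs={"fill_volume_ml": (495, 505)})
print(unit.machine_type.name)  # bottle_filler
```

Because instances share a type definition, reports and user experiences can be written once against the type and applied to every unit, which is exactly the repeatability argument made above.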

&lt;h2&gt;
  
  
  Enabling Seamless Exchange of Data Across the Enterprise
&lt;/h2&gt;

&lt;p&gt;At the heart of smart manufacturing lies data transfer between data producers and consumers for performance analysis and occasionally back to the producers for corrective measures. Smart manufacturing inherently demands integrating data from a diverse range of enterprise components, vendors, and domains in an easily manageable manner. As a result, the effectiveness of a smart manufacturing initiative is directly linked to the openness, flexibility, and scalability of your data exchange architecture.&lt;/p&gt;

&lt;p&gt;You can significantly enhance your smart manufacturing initiative by carefully crafting your data management strategy, particularly regarding data exchange capabilities. This will establish a strong foundation upon which you can continuously add and interchange components without constraints. In subsequent sections of this series, we will explore how the publish-subscribe architectural pattern for data exchange, in which data consumers are decoupled from data producers, simplifies data integration for smart manufacturing.&lt;/p&gt;
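&lt;p&gt;The decoupling idea can be sketched in a few lines of Python (an in-process toy, not an MQTT implementation): producers publish to a topic without knowing who consumes it, and consumers subscribe without knowing who produces.&lt;/p&gt;

```python
from collections import defaultdict

class TinyBroker:
    """Toy publish/subscribe hub: producers and consumers share only topic names."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

broker = TinyBroker()
received = []
# The consumer registers interest in a topic, not in a particular producer.
broker.subscribe("plant1/line4/temperature", received.append)
# The producer knows only the topic name, not who is listening.
broker.publish("plant1/line4/temperature", 72.5)
print(received)  # [72.5]
```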

&lt;p&gt;In addition, a key challenge in data integration is the disparity in data exchange methods between the Information Technology (IT) and Operations Technology (OT) domains. Messaging protocols and message formats vary between these domains. Therefore, a well-orchestrated data management strategy will enable you to choose a data transportation and messaging format that effectively spans both domains.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assessing Digital Capability and Maturity
&lt;/h2&gt;

&lt;p&gt;Before developing a comprehensive data management strategy for smart manufacturing, it is essential to assess your present digital capability and maturity level—which may vary significantly across sites and business functions—and your desired future state. Digital capability refers to the extent of digital technology available within your company, while digital maturity represents your organization’s readiness to utilize these technologies effectively. &lt;/p&gt;

&lt;p&gt;It is self-evident that embarking on a smart manufacturing journey demands a growing level of digital capability and maturity throughout the organization. Therefore, comprehending digital capability and maturity is vital as your company decides on the next steps for laying a data management foundation for smart manufacturing implementation. &lt;/p&gt;

&lt;p&gt;When assessing your manufacturing enterprise’s digital capability and maturity, it is crucial to examine various factors, comparing the current state to the ideal target state. These factors and guidelines can help manufacturing companies understand their current position and work towards enhancing their digital capabilities.&lt;/p&gt;

&lt;p&gt;For example, you can start by assessing whether data is being collected, stored, and shared effectively across the organization. Your target would be recording and historizing data from all devices and establishing common data definitions across plants. You can also evaluate how easily real-time and historical data can be accessed and ensure its accuracy. Aim for easy access to accurate data, minimizing manual collection or manipulation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This article explored the importance of effective data management in achieving business objectives through smart manufacturing. We delved into how it tackles the issue of diverse plant-floor data, promotes interoperability through semantic representation, and enables smooth data exchange across the entire enterprise. We also discussed methods for assessing your digital capabilities and maturity to formulate a strategic data management plan.&lt;/p&gt;

&lt;p&gt;Check out Part 2 on &lt;a href="https://www.hivemq.com/article/identify-acquire-integrate-plant-floor-data-smart-manufacturing/" rel="noopener noreferrer"&gt;Identifying, Acquiring and Integrating Plant-Floor Data for Smart Manufacturing&lt;/a&gt;, where we take a practical approach to help you begin implementing data management for smart manufacturing. We’ll guide you in identifying data sources and walk you through the process of acquiring and aggregating data from your manufacturing operations to facilitate the implementation of smart manufacturing.&lt;/p&gt;

&lt;h3&gt;
  
  
  About the Author
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/kudzaimanditereza" rel="noopener noreferrer"&gt;Kudzai Manditereza&lt;/a&gt; is a Developer Advocate at &lt;a href="https://www.hivemq.com/" rel="noopener noreferrer"&gt;HiveMQ&lt;/a&gt; and the Founder of Industry40.tv. He is the host of an IIoT Podcast and is involved in Industry4.0 research and educational efforts.&lt;/p&gt;

</description>
      <category>iiot</category>
      <category>data</category>
      <category>manufacturing</category>
    </item>
    <item>
      <title>Finding the Right MQTT Platform for IoT Data Movement</title>
      <dc:creator>HiveMQ</dc:creator>
      <pubDate>Thu, 23 Mar 2023 10:13:12 +0000</pubDate>
      <link>https://dev.to/hivemq_/finding-the-right-mqtt-platform-for-iot-data-movement-2d4c</link>
      <guid>https://dev.to/hivemq_/finding-the-right-mqtt-platform-for-iot-data-movement-2d4c</guid>
      <description>&lt;p&gt;Over the past several years, we’ve seen unprecedented growth in the connected world and data driven decision-making. Chances are you found this guide because you are trying to figure out how to effectively use data to build new connected products, achieve more efficient operations, or improve the customer experience. Successful IoT projects and digital transformation depend on having the right data in the right place at the right time.&lt;/p&gt;

&lt;p&gt;No matter what you want to do with your data, or whether you are an IT Architect, an Engineer, or a Head of Digital Transformation, you need a plan for putting the right technology infrastructure and IoT protocols in place for reliable, scalable, and secure data movement. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We’ve written a guide to help you make an informed purchase decision for a data movement platform that meets your technical and organizational needs.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Reading this guide will help you understand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The enterprise challenges and priorities for moving data&lt;/li&gt;
&lt;li&gt;Why MQTT is the de facto standard and ideal protocol for IoT&lt;/li&gt;
&lt;li&gt;The MQTT platform landscape today&lt;/li&gt;
&lt;li&gt;How to lay the right foundation for a data-driven enterprise&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://info.hivemq.com/mqtt-platform-buyers-guide" rel="noopener noreferrer"&gt;Get your copy of 2023 Buyer's Guide on MQTT Platform.&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://info.hivemq.com/mqtt-platform-buyers-guide" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffs91eqiy0a6b6jb8haa1.png" alt=" " width="800" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>mqtt</category>
      <category>iot</category>
      <category>iiot</category>
    </item>
    <item>
      <title>Send MQTT Messages in 10 Minutes Using HiveMQ Cloud Web Client</title>
      <dc:creator>HiveMQ</dc:creator>
      <pubDate>Tue, 14 Feb 2023 23:00:00 +0000</pubDate>
      <link>https://dev.to/hivemq_/send-mqtt-messages-in-10-minutes-using-hivemq-cloud-web-client-1d3j</link>
      <guid>https://dev.to/hivemq_/send-mqtt-messages-in-10-minutes-using-hivemq-cloud-web-client-1d3j</guid>
      <description>&lt;p&gt;** In 2022, we introduced a web client to help you easily connect to the HiveMQ Cloud broker. This blog will show you how to get started with MQTT project using this functionality and send messages in less than 10 minutes.**&lt;/p&gt;

&lt;h2&gt;
  
  
  The HiveMQ Web Client
&lt;/h2&gt;

&lt;p&gt;An &lt;a href="https://www.hivemq.com/blog/mqtt-essentials-part-3-client-broker-connection-establishment/" rel="noopener noreferrer"&gt;MQTT client&lt;/a&gt; is any device (from a microcontroller to a full-fledged server) that runs an MQTT library and connects to an MQTT broker over a network. The HiveMQ Cloud web client is a ready-to-use client within the HiveMQ Cloud user interface that helps you connect to your devices easily with the HiveMQ Cloud Broker.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Access the Web Client?
&lt;/h2&gt;

&lt;p&gt;Once you log in to your HiveMQ Cloud account and select the cluster of your choice, you can view the Web Client option on the top menu bar, as seen in the image below. You can access it by simply clicking on the navigation item.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhr29ypd8jzf8evjat2wj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhr29ypd8jzf8evjat2wj.png" alt="Menu bar showing the Web Client option" width="725" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Connecting to MQTT Broker / Cluster
&lt;/h2&gt;

&lt;p&gt;To connect the client securely, you must use your credentials, i.e., a username and a password. These credentials allow MQTT clients to access your MQTT broker. You can manage your credentials in the HiveMQ Cloud console under “Access Management.”&lt;/p&gt;

&lt;p&gt;If you already have credentials, simply enter them and click on the “Connect Client” button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F89qy6n2lud4zkjjqc6kn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F89qy6n2lud4zkjjqc6kn.png" alt="Client connection settings GUI" width="490" height="121"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you don’t have credentials, no worries; you can create your credentials in a single click using the “Connect With Generated Credentials” button. Once you click it, you will get a pop-up sharing the credentials. Copy and save this for later use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5my6w8d16acb0ilsrt52.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5my6w8d16acb0ilsrt52.png" alt="Auto connect Web Client GUI Pop-up" width="207" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clicking on the create and connect button automatically creates credentials and connects the client to the broker.&lt;/p&gt;

&lt;p&gt;Once connected, you will see “Web-Client Connected” displayed in green below the connection setting, as seen in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8mmkj7j2rso3hdsn40d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8mmkj7j2rso3hdsn40d.png" alt="Web Client connected as shown in the GUI" width="406" height="129"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Send Your First Message
&lt;/h2&gt;

&lt;p&gt;Now that you have successfully connected the web client to the HiveMQ Cloud broker, you are ready to send your first message.&lt;/p&gt;

&lt;p&gt;First, subscribe to a topic you want to receive messages from. You can use a specific topic name as displayed in the screenshot, or you can subscribe to all topics using the wildcard “#”.&lt;/p&gt;

&lt;p&gt;You can learn more about &lt;a href="https://www.hivemq.com/blog/mqtt-essentials-part-5-mqtt-topics-best-practices/" rel="noopener noreferrer"&gt;MQTT Topics, wildcards, and some best practices here&lt;/a&gt;.&lt;/p&gt;
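&lt;p&gt;To see how the multi-level “#” wildcard (and the single-level “+”) match topic names, here is a simplified matcher in Python — an illustration of the MQTT matching rules, not the broker’s actual code ($-prefixed topics are ignored for brevity):&lt;/p&gt;

```python
def topic_matches(topic_filter: str, topic: str) -> bool:
    """Simplified MQTT topic matching: '+' matches exactly one level,
    '#' (as the last level) matches all remaining levels."""
    f_levels = topic_filter.split("/")
    t_levels = topic.split("/")
    for i, level in enumerate(f_levels):
        if level == "#":
            return True  # '#' swallows everything from here on
        if i >= len(t_levels) or (level != "+" and level != t_levels[i]):
            return False
    return len(f_levels) == len(t_levels)

print(topic_matches("#", "my/test/topic"))           # True
print(topic_matches("my/+/topic", "my/test/topic"))  # True
print(topic_matches("my/test", "my/test/topic"))     # False
```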

&lt;p&gt;Next, select the &lt;a href="https://www.hivemq.com/blog/mqtt-essentials-part-6-mqtt-quality-of-service-levels/" rel="noopener noreferrer"&gt;quality of service (QoS)&lt;/a&gt; you want to receive messages with. If you do not have a specific use case where you need a higher QoS, you can keep the default QoS 0 setting.&lt;/p&gt;

&lt;p&gt;As you can see in the image, we create a topic subscription:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my/test/topic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once we hit the subscribe button, we can view the subscribed topic below as a list.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foh8ehy9sfha5gwxhh35l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foh8ehy9sfha5gwxhh35l.png" alt="Subscribing to a topic using the Web Client GUI" width="502" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To receive messages, we need to publish a message to this topic. Make sure to use a topic that matches the subscription you defined earlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my/test/topic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your message can contain any information you like. In our case, we wrote a simple “Hello,” as seen in the image below. But you can also send a JSON payload, or any other format that suits your needs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjhc9dxq6xlx9aqb8ka1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjhc9dxq6xlx9aqb8ka1.png" alt="Publishing messages using the Web Client GUI" width="474" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once complete, press the publish button. The message will be published to any client connected to your broker that has subscribed to this topic.&lt;/p&gt;

&lt;p&gt;You can view this message inside the web client with the corresponding topic, QoS, and timestamp below the publish button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh824u53dvji9q4h4nnow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh824u53dvji9q4h4nnow.png" alt="Publish button publishes any message contained in the topic" width="482" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also connect from any other device to your HiveMQ Cloud cluster using the same credentials and use it to subscribe to topics published from the Web Client. In our case, we use the MQTT CLI (Command Line Interface) to receive messages sent using the GUI. To learn more about the MQTT CLI, please check out the &lt;a href="https://console.hivemq.cloud/clients/mqtt-cli?uuid=97a20e736c554f6e8fdd15840f235a76&amp;amp;__hstc=184124345.9ec222da250c0a2391a8f1d5693d4452.1616495678251.1676450556558.1676452777609.2350&amp;amp;__hssc=184124345.2.1676452777609&amp;amp;__hsfp=920285691" rel="noopener noreferrer"&gt;HiveMQ Cloud getting started with MQTT CLI section&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiy4ivax97gyq13vdsfb3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiy4ivax97gyq13vdsfb3.png" alt="The screenshot shows MQTT CLI on a local device subscribing to same topic as the web client" width="573" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you’ve followed these steps, you should now be able to connect your web client to your HiveMQ Cloud cluster and use it to publish and receive messages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Started
&lt;/h2&gt;

&lt;p&gt;To access the HiveMQ Cloud Web Client, all you need to do is sign up for HiveMQ Cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://console.hivemq.cloud/?utm_source=HiveMQ+Cloud+Web+Client+Blog+on%20Dev+to&amp;amp;utm_medium=CTA+Button&amp;amp;utm_campaign=HiveMQ+Cloud&amp;amp;__hstc=184124345.9ec222da250c0a2391a8f1d5693d4452.1616495678251.1676450556558.1676452777609.2350&amp;amp;__hssc=184124345.2.1676452777609&amp;amp;__hsfp=920285691" rel="noopener noreferrer"&gt;Sign-up for HiveMQ Cloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Were you able to successfully follow these steps to connect your web client to your HiveMQ Cloud cluster? Did you have any problems along the way? Did you take any alternate steps?&lt;br&gt;
Let us know your thoughts in the comments.&lt;/p&gt;

</description>
      <category>gratitude</category>
    </item>
    <item>
      <title>How to Stream Data Between HiveMQ Cloud and Apache Kafka for Free</title>
      <dc:creator>HiveMQ</dc:creator>
      <pubDate>Thu, 09 Feb 2023 11:07:00 +0000</pubDate>
      <link>https://dev.to/hivemq_/how-to-stream-data-between-hivemq-cloud-and-apache-kafka-for-free-2nm7</link>
      <guid>https://dev.to/hivemq_/how-to-stream-data-between-hivemq-cloud-and-apache-kafka-for-free-2nm7</guid>
      <description>&lt;p&gt;According to a study published by Statista, IoT devices will produce 79 zettabytes of data in 2025, which will be a 483% increase from 2019. To put this number into perspective, if we store this information in smartphones with a storage of 128 GB each, we would need 617.1875 billion smartphones. Yet, without further processing, this data is worth almost nothing. Only by transforming and analyzing this data do you unlock the immense added value promised by the Internet of Things (IoT).&lt;/p&gt;

&lt;p&gt;A common question is how to actually process the data collected from IoT devices. There are several ways to do this, but one of the most compelling is &lt;strong&gt;using MQTT protocol to send IoT data via Apache Kafka for further processing in a system of your choice.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;MQTT and Apache Kafka are often used together to enhance the functionality of IoT and Machine-to-Machine communications. You commonly see them married in the following use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data collection&lt;/strong&gt;: MQTT is used to collect data from IoT devices and publish it to a Kafka broker, where it is processed, analyzed, and stored for future use.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-time processing&lt;/strong&gt;: Using MQTT and Kafka, organizations build real-time data processing pipelines that handle large amounts of incoming data from IoT devices.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The easiest way to process data from your IoT devices to your Kafka service is our newly introduced Kafka integration with HiveMQ Cloud.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this blog, we will show you the key capabilities of the Kafka-HiveMQ Cloud integration, how you can use it to stream your data, and walk you through how you can set it up.&lt;/p&gt;

&lt;h2&gt;
  
  
  The HiveMQ Cloud Kafka Integration
&lt;/h2&gt;

&lt;p&gt;Before we jump into the step-by-step instructions, let’s look at the benefits of the Kafka-HiveMQ Cloud integration. This simple configuration enables you to stream your data efficiently between your HiveMQ Cloud broker and your Kafka cluster for bidirectional message exchange without ongoing operational burden.&lt;/p&gt;

&lt;p&gt;There are five easy steps to ingest data from your IoT devices into the Apache Kafka service of your choice. These can be broadly divided into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connection Configuration parameters&lt;/li&gt;
&lt;li&gt;Topic mapping parameters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The connection configuration parameters help establish a secure connection between HiveMQ Cloud and your Apache Kafka cluster. The topic mappings let you set up the bidirectional data flow between your MQTT cluster and Apache Kafka.&lt;/p&gt;

&lt;p&gt;But first, you must find the Kafka extension in the “Integrations” tab inside your HiveMQ Cloud cluster.&lt;/p&gt;

&lt;p&gt;Note: If you are using the free version of HiveMQ Cloud for the first time, you can follow these instructions without adding any payment information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvguf9g1w2165h55hpcv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvguf9g1w2165h55hpcv.png" alt=" " width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you are ready to dive into the five steps:&lt;/p&gt;

&lt;p&gt;1. &lt;strong&gt;Connect HiveMQ Cloud with the Kafka service of your choice&lt;/strong&gt;: To connect, you need a list of bootstrap servers for your Kafka cluster so the integration can fetch the initial metadata about your Kafka cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx41ufrn741mxyuxae9q7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx41ufrn741mxyuxae9q7.png" alt=" " width="800" height="102"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2. &lt;strong&gt;Secure the connection&lt;/strong&gt;: Now you need to add your Kafka credentials. This helps ensure there is a secure connection between HiveMQ Cloud and Kafka.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8nqtfhgi2qcxn91mmx5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8nqtfhgi2qcxn91mmx5.png" alt=" " width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We offer two different SASL mechanisms for connection security.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5xas5gu7lg06n71uiqg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5xas5gu7lg06n71uiqg.png" alt=" " width="800" height="225"&gt;&lt;/a&gt;&lt;/p&gt;
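&lt;p&gt;For orientation, these settings correspond to standard Apache Kafka client properties. A rough, hypothetical example with placeholder values follows; the exact mechanisms on offer are shown in the dialog, and SASL/PLAIN is used here purely for illustration:&lt;/p&gt;

```properties
# Placeholder values - substitute your own cluster address and credentials
bootstrap.servers=your-cluster.example.com:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="kafka-user" \
  password="kafka-password";
```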

&lt;p&gt;3. &lt;strong&gt;Send data from HiveMQ to Kafka&lt;/strong&gt;: Once you set up and secure the connection, you can choose what data to forward from your IoT devices. This requires mapping topics from HiveMQ Cloud to your Kafka cluster. The source topic is the MQTT topic you want to send from your HiveMQ cluster. The destination topic is the Kafka topic that receives the messages your HiveMQ cluster sent.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2k8nmksft4vdstnecqhl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2k8nmksft4vdstnecqhl.png" alt=" " width="473" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4. &lt;strong&gt;Establish bidirectional communication&lt;/strong&gt;: For bidirectional communication between Kafka and HiveMQ, you configure the mapping from your Kafka cluster to HiveMQ Cloud in the same way you define the topic mapping from HiveMQ Cloud to your Kafka cluster. In this case, the source topic is the Kafka topic from which the integration reads messages. These messages are then published on the defined destination topic on your HiveMQ Cloud MQTT broker cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdf5cs6utiwkznbanqd5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdf5cs6utiwkznbanqd5.png" alt=" " width="486" height="407"&gt;&lt;/a&gt;&lt;/p&gt;
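&lt;p&gt;The reverse direction follows the same shape. A minimal sketch, again with in-memory stand-ins instead of real Kafka consumer and MQTT client objects; the names are assumptions for illustration:&lt;/p&gt;

```python
# Records consumed from a Kafka source topic are republished on an MQTT
# destination topic. The list below stands in for a real MQTT client.

mqtt_published = []  # (topic, payload) pairs, stands in for broker publishes

def mqtt_publish(topic, payload):
    mqtt_published.append((topic, payload))

def on_kafka_record(value, destination_topic="commands/all"):
    # Each consumed Kafka record becomes one MQTT publish on the
    # configured destination topic.
    mqtt_publish(destination_topic, value)

for record in [b"reboot", b"set-interval:30"]:
    on_kafka_record(record)

print(mqtt_published)
```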

&lt;p&gt;5. &lt;strong&gt;Enable the configuration&lt;/strong&gt;: You can start the data flow between your HiveMQ Cloud cluster and your Kafka cluster by selecting the “Enable” button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkawaqealv2tix9rh76xw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkawaqealv2tix9rh76xw.png" alt=" " width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you’ve followed these five steps, you should now be able to employ Apache Kafka with HiveMQ Cloud to use data from your IoT devices for bidirectional communication.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Started
&lt;/h2&gt;

&lt;p&gt;To access the Kafka-HiveMQ Cloud functionality for free, all you need to do is sign up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://console.hivemq.cloud/?utm_source=HiveMQ+Cloud+Kafka+Integration+Blog+Dev+to&amp;amp;utm_medium=CTA+Button&amp;amp;utm_campaign=HiveMQ+Cloud&amp;amp;__hstc=184124345.9ec222da250c0a2391a8f1d5693d4452.1616495678251.1675924597526.1675936167442.2321&amp;amp;__hssc=184124345.17.1675936167442&amp;amp;__hsfp=920285691" rel="noopener noreferrer"&gt;Sign-up for HiveMQ Cloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The integration is a lightweight version of our HiveMQ Enterprise Extension for Kafka and covers the most frequently requested use cases. If you are still missing functionality, don’t hesitate to reach out to us. We are always keen on direct user feedback.&lt;/p&gt;

</description>
      <category>selfhost</category>
      <category>webui</category>
      <category>howto</category>
    </item>
    <item>
      <title>MQTT Broker Comparison – Open Source Vs. Commercial Vs. Cloud-managed Vs. General Purpose</title>
      <dc:creator>HiveMQ</dc:creator>
      <pubDate>Fri, 19 Aug 2022 12:17:00 +0000</pubDate>
      <link>https://dev.to/hivemq_/mqtt-broker-comparison-which-is-the-best-for-your-iot-application-4cif</link>
      <guid>https://dev.to/hivemq_/mqtt-broker-comparison-which-is-the-best-for-your-iot-application-4cif</guid>
      <description>&lt;p&gt;MQTT brokers help implement the publish-subscribe communication model between devices and applications. The MQTT broker also helps implement rules and filters that help make the communications efficient and secure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where MQTT Brokers are Used
&lt;/h2&gt;

&lt;p&gt;The most common use cases for MQTT and MQTT brokers are in IoT applications. In fact, &lt;a href="https://dev.to/mqtt-essentials/"&gt;MQTT&lt;/a&gt; is the de facto standard for IoT use cases. It’s deployed in bandwidth- and resource-challenged environments where the clients have to be lightweight.&lt;/p&gt;

&lt;p&gt;Thanks to the efficient nature of the protocol, MQTT is deployed across a variety of verticals and use cases. For example, it helps car-sharing app &lt;a href="https://dev.to/case-studies/bmw-mobility-services/"&gt;ShareNow&lt;/a&gt; to give its users instant access to their cars, it helps &lt;a href="https://dev.to/case-studies/netflix/"&gt;Netflix&lt;/a&gt; certify devices that can use its software and it helps &lt;a href="https://dev.to/case-studies/matternet/"&gt;Matternet&lt;/a&gt; monitor autonomous drones delivering medical samples.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of MQTT Broker
&lt;/h2&gt;

&lt;p&gt;The MQTT specification lays out the functionality expected from an MQTT-based deployment, and that offers a common definition that communities and businesses can then use for building their applications.&lt;/p&gt;

&lt;p&gt;Currently, MQTT brokers are available in the following variants:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Types of MQTT Brokers&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Open Source&lt;/td&gt;
&lt;td&gt;HiveMQ CE, Mosquitto&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Commercial&lt;/td&gt;
&lt;td&gt;HiveMQ Professional and Enterprise, EMQ, VerneMQ&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloud (Managed)&lt;/td&gt;
&lt;td&gt;HiveMQ Cloud, CloudMQTT, AWS IoT Core, Azure IoT Hub&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;General-purpose brokers with MQTT support&lt;/td&gt;
&lt;td&gt;Solace PubSub+, IBM MQ, RabbitMQ, ActiveMQ&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Choosing the Best MQTT Broker
&lt;/h2&gt;

&lt;p&gt;An effective way to evaluate software technology is through Architectural Requirements (a.k.a. Non-Functional Requirements). An MQTT broker comparison based on these architectural requirements should give you insight into how to find the best MQTT broker for your needs.&lt;/p&gt;

&lt;p&gt;Note: This is a category-level view; HiveMQ’s Enterprise MQTT Broker is not the focus of this table.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
  &lt;tr&gt;
    &lt;th colspan="5"&gt;&lt;span&gt;Common Challenges by variants&lt;/span&gt;&lt;/th&gt;
  &lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
  &lt;tr&gt;
    &lt;td&gt;&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Open Source&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Commercial&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Cloud-Managed&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;General Purpose&lt;/span&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Scalability&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Limited scalability&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Do not scale to millions of devices and messages&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;Require support tickets to add capacity&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Unable to scale linearly and require massive step upgrades&lt;/span&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;&lt;span&gt;Security&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Limited options&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Lack support for the latest cipher suites for encryption&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Lack flexibility, e.g., turning off TLS for private networks to save compute and bandwidth&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Miss advanced security features like plug-ins and chaining of authentication / authorization logic&lt;/span&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;&lt;span&gt;Resilience&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Cannot cluster for higher availability&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Missing plugins for DB integration&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Lack reason codes and other core MQTT features, which sacrifices resolution times&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;
&lt;span&gt;- Master-slave architectures create long failover times&lt;/span&gt;&lt;br&gt;- Miss key features like Retained Messages, which hurts recovery times&lt;br&gt;
&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Agility&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Hard to manage when implemented in unfamiliar languages like Erlang&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Require restarting application when adding nodes&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Poor MQTT compliance makes interworking with other systems unpredictable&lt;/span&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;&lt;span&gt;Observability&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Very few meaningful metrics available&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;
&lt;span&gt;- Can’t query individual endpoints&lt;/span&gt;&lt;br&gt;&lt;br&gt;&lt;span&gt;- Actions/hops inside the cloud are a black box&lt;/span&gt;
&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Force a stack of closed management systems that hamper collaboration between systems&lt;/span&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;&lt;span&gt;Availability&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;No overload protection from overactive publishers&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;span&gt;Persist messages in memory and not on disk, causing data loss in many scenarios&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;/td&gt;
    &lt;td&gt;&lt;/td&gt;
  &lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These NFRs should form the foundation of an MQTT broker comparison, and it helps to have a deeper understanding of each of the architectural requirements:&lt;/p&gt;

&lt;h2&gt;
  
  
  Scalability
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What to look for&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Native support for the publish-subscribe pattern&lt;/td&gt;
&lt;td&gt;Efficiency of Fan-in/Fan-out pattern helps avoid spaghetti architecture and its complexity&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Linear &lt;a href="https://dev.to/blog/mqtt-broker-scalability-tests/"&gt;scalability&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;Helps avoid abrupt infrastructure costs when handling incremental growth&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;High number of topics and concurrent connections&lt;/td&gt;
&lt;td&gt;Helps teams prioritize the business logic over managing lower-level components&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Grow and shrink &lt;a href="https://dev.to/blog/clustering-mqtt-introduction-benefits/"&gt;cluster&lt;/a&gt; size at runtime without losing data&lt;/td&gt;
&lt;td&gt;Keeps availability and uptime commitments, whether scaling up or down&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Resilience
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What to look for&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Fault tolerance at multiple levels (broker, cluster, cloud)&lt;/td&gt;
&lt;td&gt;IoT environments are prone to network outages and disruptions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Masterless cluster architecture&lt;/td&gt;
&lt;td&gt;Master/slave architectures suffer from long recovery times, which hurts application availability and performance&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Agility
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What to look for&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Variety of deployment options - on-premise, cloud, fully-managed&lt;/td&gt;
&lt;td&gt;Helps right-size your deployment for different use cases while operating under the same technical principles&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Easy maintainability through a standards-based approach&lt;/td&gt;
&lt;td&gt;Ease of development that helps accelerate time to market&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Testability&lt;/td&gt;
&lt;td&gt;For quality and performance assurance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Support for multi-cloud strategy&lt;/td&gt;
&lt;td&gt;Helps avoid vendor lock-in and brings in the best features from multiple platforms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tested and packaged extensions for common enterprise systems&lt;/td&gt;
&lt;td&gt;Complex integrations like Apache Kafka can be very time-consuming to build and maintain in-house&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Expertise to certify extensions for enterprise use&lt;/td&gt;
&lt;td&gt;During deployment and support issues, a vendor-certified extension is one less problem for the enterprise&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Availability
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What to Look for&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Cluster Overload Protection&lt;/td&gt;
&lt;td&gt;Reduces the rate of incoming messages and connection requests from publishing clients that risk overloading a cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Built-in support for features like Retained Messages&lt;/td&gt;
&lt;td&gt;In real-life environments, a client needs the last known state to be productive&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Usability
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What to Look for&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;REST API&lt;/td&gt;
&lt;td&gt;For programmatic access to the broker&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;K8s Operator&lt;/td&gt;
&lt;td&gt;Your DevOps team can easily orchestrate, automate, and manage the lifecycle of multiple HiveMQ cluster deployments within Kubernetes (platform agnostic)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Wide support of libraries&lt;/td&gt;
&lt;td&gt;Helps your developers spend less time learning new coding languages and constructs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;


&lt;h3&gt;
  
  
  To know what to look for regarding Security, Observability, Extensibility, and more, read &lt;a href="https://www.hivemq.com/blog/mqtt-broker-comparison-iot-application/" rel="noopener noreferrer"&gt;MQTT Broker Comparison – Which is the Best for Your IoT Application?&lt;/a&gt;
&lt;/h3&gt;


&lt;h2&gt;
  
  
  Choosing the Best MQTT Broker
&lt;/h2&gt;

&lt;p&gt;From an architectural perspective, it’s clear that enterprises should choose an MQTT broker that scales without compromising on the security and resilience of the application. The ability to tweak the parameters of the broker and integrate with enterprise systems like Kafka can be very powerful for a business.&lt;/p&gt;

&lt;p&gt;While resilience, security, flexibility, and scalability are key, it’s important that the MQTT broker you choose is easy to use and manage, both manually and programmatically.&lt;/p&gt;

&lt;p&gt;HiveMQ Enterprise MQTT Broker has brought innovative features to mature businesses for their mission-critical applications. HiveMQ is 100% compliant with the MQTT 3.1.1 and 5.0 specifications, while offering highly specialized professional services and 24x7 support to 150+ IoT customers across the globe.&lt;/p&gt;

&lt;p&gt;See how &lt;a href="https://dev.to/hivemq/mqtt-broker/"&gt;HiveMQ Enterprise MQTT Broker&lt;/a&gt; stacks up against your enterprise criteria for deployment. &lt;a href="https://dev.to/contact/"&gt;Contact us&lt;/a&gt; today.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>MQTT: Retained Messages | Part 8</title>
      <dc:creator>HiveMQ</dc:creator>
      <pubDate>Wed, 06 Jul 2022 08:05:11 +0000</pubDate>
      <link>https://dev.to/hivemq_/mqtt-retained-messages-part-8-1kea</link>
      <guid>https://dev.to/hivemq_/mqtt-retained-messages-part-8-1kea</guid>
      <description>&lt;p&gt;In MQTT, the client that publishes a message has no guarantee that a subscribing client actually receives the message. The publishing client can only make sure that the message gets delivered safely to the broker. Basically, the same is true for a subscribing client. The client that connects and subscribes to topics has no guarantee on when the publishing client will publish a message in one of their topics of interest. It can take a few seconds, minutes, or hours for the publisher to send a new message in one of the subscribed topics. Until the next message is published, the subscribing client is totally in the dark about the current status of the topic. This situation is where retained messages come into play.&lt;/p&gt;

&lt;h2&gt;
  
  
  Retained Messages
&lt;/h2&gt;

&lt;p&gt;A retained message is a normal MQTT message with the retained flag set to true. The broker stores the last retained message and the corresponding QoS for that topic. Each client that subscribes to a topic pattern that matches the topic of the retained message receives the retained message immediately after they subscribe. The broker stores only one retained message per topic.&lt;/p&gt;

&lt;p&gt;If the subscribing client includes wildcards in the topic pattern they subscribe to, it receives a retained message even if the topic of the retained message is not an exact match. Here’s an example: Client A publishes a retained message to &lt;code&gt;myhome/livingroom/temperature&lt;/code&gt;. Sometime later, client B subscribes to &lt;code&gt;myhome/#&lt;/code&gt;. Client B receives the &lt;code&gt;myhome/livingroom/temperature&lt;/code&gt; retained message directly after subscribing to &lt;code&gt;myhome/#&lt;/code&gt;. Client B (the subscribing client) can see that the message is a retained message because the broker sends retained messages with the retained flag set to true. The client can decide how it wants to process the retained messages.&lt;/p&gt;
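&lt;p&gt;The behavior described above can be sketched in a few lines. This is a toy model of the broker’s retained-message store, not a real broker; the names are illustrative, and only the trailing multi-level wildcard is handled, for brevity:&lt;/p&gt;

```python
retained = {}  # topic mapped to payload: at most one retained message each

def publish(topic, payload, retain=False):
    # The broker stores only the last retained message per topic,
    # so a new retained publish replaces the previous one.
    if retain:
        retained[topic] = payload

def subscribe(filter_):
    # Replay retained messages whose topic matches the filter; this sketch
    # supports only an exact topic or a trailing "#" multi-level wildcard.
    if filter_.endswith("/#"):
        prefix = filter_[:-1]  # keep the trailing slash
        return {t: p for t, p in retained.items() if t.startswith(prefix)}
    return {filter_: retained[filter_]} if filter_ in retained else {}

publish("myhome/livingroom/temperature", b"21.5", retain=True)
publish("myhome/livingroom/temperature", b"22.0", retain=True)  # replaces the first
print(subscribe("myhome/#"))  # the new subscriber gets only the latest value
```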

&lt;p&gt;Retained messages help newly-subscribed clients get a status update immediately after they subscribe to a topic. The retained message eliminates the wait for the publishing clients to send the next update.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In other words, a retained message on a topic is the last known good value. The retained message doesn’t have to be the last value, but it must be the last message with the retained flag set to true.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It is important to understand that a retained message has nothing to do with persistent sessions. Once a retained message is stored by the broker, there’s only one way to remove it. Keep reading to find out how.&lt;/p&gt;




&lt;p&gt;To learn how to send or delete a retained MQTT message, &lt;a href="https://www.hivemq.com/blog/mqtt-essentials-part-8-retained-messages/" rel="noopener noreferrer"&gt;read this article&lt;/a&gt; and watch this video.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Ct5s4gXefn4"&gt;
&lt;/iframe&gt;
&lt;/p&gt;




&lt;p&gt;Get your copy of &lt;a href="https://www.hivemq.com/download-mqtt-ebook/?utm_source=content+syndication&amp;amp;utm_medium=devto&amp;amp;utm_campaign=MQTT+Essentials" rel="noopener noreferrer"&gt;MQTT Essentials eBook&lt;/a&gt; to understand the protocol in detail without you having to read the entire specification.&lt;/p&gt;




</description>
      <category>iot</category>
      <category>mqtt</category>
      <category>beginners</category>
    </item>
    <item>
      <title>MQTT: Persistent Session and Queuing Messages | Part 7</title>
      <dc:creator>HiveMQ</dc:creator>
      <pubDate>Wed, 06 Jul 2022 07:42:01 +0000</pubDate>
      <link>https://dev.to/hivemq_/mqtt-persistent-session-and-queuing-messages-part-7-4olo</link>
      <guid>https://dev.to/hivemq_/mqtt-persistent-session-and-queuing-messages-part-7-4olo</guid>
      <description>&lt;p&gt;In this post, we talk about persistent sessions and message queueing in MQTT.&lt;/p&gt;

&lt;h2&gt;
  
  
  Persistent Session
&lt;/h2&gt;

&lt;p&gt;To receive messages from an MQTT broker, a client connects to the broker and creates subscriptions to the topics in which it is interested. If the connection between the client and broker is interrupted during a non-persistent session, these topics are lost and the client needs to subscribe again on reconnect. &lt;/p&gt;

&lt;p&gt;Re-subscribing every time the connection is interrupted is a burden for constrained clients with limited resources. To avoid this problem, the client can request a persistent session when it connects to the broker. Persistent sessions save all information that is relevant for the client on the broker. The clientId that the client provides when it establishes a connection to the broker identifies the session.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s stored in a persistent session?
&lt;/h2&gt;

&lt;p&gt;In a persistent session, the broker stores the following information (even if the client is offline). When the client reconnects the information is available immediately.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Existence of a session (even if there are no subscriptions).&lt;/li&gt;
&lt;li&gt;All the subscriptions of the client.&lt;/li&gt;
&lt;li&gt;All messages in a Quality of Service (QoS) 1 or 2 flow that the client has not yet confirmed.&lt;/li&gt;
&lt;li&gt;All new QoS 1 or 2 messages that the client missed while offline.&lt;/li&gt;
&lt;li&gt;All QoS 2 messages received from the client that are not yet completely acknowledged.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How do you start or end a persistent session?
&lt;/h2&gt;

&lt;p&gt;When the client connects to the broker, it can request a persistent session. The client uses a cleanSession flag to tell the broker what kind of session it needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;When the clean session flag is set to true, the client does not want a persistent session. If the client disconnects for any reason, all information and messages that are queued from a previous persistent session are lost.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When the clean session flag is set to false, the broker creates a persistent session for the client. All information and messages are preserved until the next time that the client requests a clean session. If the clean session flag is set to false and the broker already has a session available for the client, it uses the existing session and delivers previously queued messages to the client.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
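&lt;p&gt;The two bullets above can be sketched as a small state machine. This is only an illustration of the cleanSession decision; the names and data structures are assumptions, not broker internals:&lt;/p&gt;

```python
sessions = {}  # clientId mapped to {"subscriptions": set, "queued": list}

def connect(client_id, clean_session):
    if clean_session:
        # Any previous persistent session and its queued messages are dropped.
        sessions.pop(client_id, None)
        return {"subscriptions": set(), "queued": []}
    # cleanSession=false: reuse the stored session if one exists.
    return sessions.setdefault(client_id, {"subscriptions": set(), "queued": []})

session = connect("sensor-42", clean_session=False)
session["subscriptions"].add("commands/sensor-42")
session["queued"].append(("commands/sensor-42", b"reboot"))  # missed while offline

resumed = connect("sensor-42", clean_session=False)
print(resumed["queued"])  # the queued message survives the reconnect

fresh = connect("sensor-42", clean_session=True)
print(fresh["queued"])  # a clean session starts empty
```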

&lt;p&gt;To understand in-depth on how an MQTT client knows if a session is already stored and to know some of the best practices while using persistent sessions, &lt;a href="https://www.hivemq.com/blog/mqtt-essentials-part-7-persistent-session-queuing-messages/" rel="noopener noreferrer"&gt;read this article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Watch this video to visually understand persistent sessions in MQTT.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/2ETj1fM7-ZA"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Get your copy of &lt;a href="https://www.hivemq.com/download-mqtt-ebook/?utm_source=content+syndication&amp;amp;utm_medium=devto&amp;amp;utm_campaign=MQTT+Essentials" rel="noopener noreferrer"&gt;MQTT Essentials eBook&lt;/a&gt; to understand the protocol in detail without you having to read the entire specification.&lt;/p&gt;

</description>
      <category>iot</category>
      <category>mqtt</category>
      <category>beginners</category>
    </item>
    <item>
      <title>MQTT: Quality of Service (QoS) Levels | Part 6</title>
      <dc:creator>HiveMQ</dc:creator>
      <pubDate>Wed, 06 Jul 2022 07:30:38 +0000</pubDate>
      <link>https://dev.to/hivemq_/mqtt-quality-of-service-qos-levels-part-6-2f5a</link>
      <guid>https://dev.to/hivemq_/mqtt-quality-of-service-qos-levels-part-6-2f5a</guid>
      <description>&lt;p&gt;In this post, we explain the different Quality of Service levels in MQTT.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Quality of Service (QoS)?
&lt;/h2&gt;

&lt;p&gt;The Quality of Service (QoS) level is an agreement between the sender of a message and the receiver of a message that defines the guarantee of delivery for a specific message. There are 3 QoS levels in MQTT:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;At most once (0)&lt;/li&gt;
&lt;li&gt;At least once (1)&lt;/li&gt;
&lt;li&gt;Exactly once (2)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you talk about QoS in MQTT, you need to consider the two sides of message delivery:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Message delivery from the publishing client to the broker.&lt;/li&gt;
&lt;li&gt;Message delivery from the broker to the subscribing client.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Why is Quality of Service (QoS) important?
&lt;/h2&gt;

&lt;p&gt;QoS is a key feature of the MQTT protocol. QoS gives the client the power to choose a level of service that matches its network reliability and application logic. Because MQTT manages the re-transmission of messages and guarantees delivery (even when the underlying transport is not reliable), QoS makes communication in unreliable networks a lot easier.&lt;/p&gt;

&lt;p&gt;There are 3 levels of QoS in MQTT:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;QoS 0 - at most once&lt;/li&gt;
&lt;li&gt;QoS 1 - at least once&lt;/li&gt;
&lt;li&gt;QoS 2 - exactly once&lt;/li&gt;
&lt;/ul&gt;
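&lt;p&gt;A toy simulation makes the difference between the first two levels tangible. This is not a real client, just an illustration of fire-and-forget versus retry-until-acknowledged; all names are assumptions:&lt;/p&gt;

```python
def deliver(qos, drop_first_attempt):
    # Simulate one message crossing an unreliable link.
    received = []
    acked = False
    attempts = 0
    while not acked:
        attempts += 1
        lost = drop_first_attempt and attempts == 1
        if not lost:
            received.append("msg")
            acked = True  # the acknowledgement reaches the sender
        if qos == 0:
            break  # QoS 0: fire and forget, never retransmit
    return received

print(deliver(qos=0, drop_first_attempt=True))   # message lost, no retry
print(deliver(qos=1, drop_first_attempt=True))   # retransmitted until acked
```

&lt;p&gt;QoS 1 can also deliver duplicates when an acknowledgement, rather than the message, is lost; QoS 2 adds a second handshake to rule that out.&lt;/p&gt;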

&lt;p&gt;To understand these QoS levels in detail, &lt;a href="https://www.hivemq.com/blog/mqtt-essentials-part-6-mqtt-quality-of-service-levels/" rel="noopener noreferrer"&gt;read this article&lt;/a&gt; or watch the video below.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/hvhtJORsE5Y"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Get your copy of &lt;a href="https://www.hivemq.com/download-mqtt-ebook/?utm_source=content+syndication&amp;amp;utm_medium=devto&amp;amp;utm_campaign=MQTT+Essentials" rel="noopener noreferrer"&gt;MQTT Essentials eBook&lt;/a&gt; to understand the protocol in detail without you having to read the entire specification.&lt;/p&gt;

</description>
      <category>iot</category>
      <category>mqtt</category>
      <category>beginners</category>
    </item>
    <item>
      <title>MQTT: Topics, Wildcards, &amp; Best Practices | Part 5</title>
      <dc:creator>HiveMQ</dc:creator>
      <pubDate>Wed, 06 Jul 2022 07:03:35 +0000</pubDate>
      <link>https://dev.to/hivemq_/mqtt-topics-wildcards-best-practices-part-5-87g</link>
      <guid>https://dev.to/hivemq_/mqtt-topics-wildcards-best-practices-part-5-87g</guid>
      <description>&lt;p&gt;In this post, we focus on MQTT topics, wildcards, and best practices. &lt;/p&gt;

&lt;h2&gt;
  
  
  MQTT Topics
&lt;/h2&gt;

&lt;p&gt;In MQTT, the word topic refers to a UTF-8 string that the broker uses to filter messages for each connected client. The topic consists of one or more topic levels. Each topic level is separated by a forward slash (topic level separator).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7l99r0hv9vr02lc6g078.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7l99r0hv9vr02lc6g078.png" alt=" " width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In comparison to a message queue, MQTT topics are very lightweight. The client does not need to create the desired topic before they publish or subscribe to it. The broker accepts each valid topic without any prior initialization.&lt;/p&gt;

&lt;p&gt;Here are some examples of topics:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myhome/groundfloor/livingroom/temperature
USA/California/San Francisco/Silicon Valley
5ff4a2ce-e485-40f4-826c-b1a5d81be9b6/status
Germany/Bavaria/car/2382340923453/latitude
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that each topic must contain at least one character and that the topic string permits spaces. Topics are case-sensitive. For example, myhome/temperature and MyHome/Temperature are two different topics. Additionally, the forward slash alone is a valid topic.&lt;/p&gt;

&lt;h2&gt;
  
  
  MQTT Wildcards
&lt;/h2&gt;

&lt;p&gt;When a client subscribes to a topic, it can subscribe to the exact topic of a published message or it can use wildcards to subscribe to multiple topics simultaneously. A wildcard can only be used to subscribe to topics, not to publish a message. There are two different kinds of wildcards: single-level and multi-level.&lt;/p&gt;

&lt;h2&gt;
  
  
  Single Level: +
&lt;/h2&gt;

&lt;p&gt;As the name suggests, a single-level wildcard replaces one topic level. The plus symbol represents a single-level wildcard in a topic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0ysum0q4evs5synkvce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0ysum0q4evs5synkvce.png" alt=" " width="800" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A topic matches a subscription that contains a single-level wildcard if it has an arbitrary string in place of the wildcard. For example, a subscription to myhome/groundfloor/+/temperature can produce the following results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpw9jc5umyxgcr163whjf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpw9jc5umyxgcr163whjf.png" alt=" " width="800" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi Level: #
&lt;/h2&gt;

&lt;p&gt;The multi-level wildcard covers many topic levels. The hash symbol represents the multi-level wildcard in the topic. For the broker to determine which topics match, the multi-level wildcard must be placed as the last character in the topic and preceded by a forward slash.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgexqs5smj8db6hzxz35m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgexqs5smj8db6hzxz35m.png" alt=" " width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F691v50y4l6sjkdn213vo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F691v50y4l6sjkdn213vo.png" alt=" " width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When a client subscribes to a topic with a multi-level wildcard, it receives all messages of a topic that begins with the pattern before the wildcard character, no matter how long or deep the topic is. If you specify only the multi-level wildcard as a topic (#), you receive all messages that are sent to the MQTT broker. If you expect high throughput, subscription with a multi-level wildcard alone is an anti-pattern (see the best practices below).&lt;/p&gt;
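&lt;p&gt;To make the wildcard rules above concrete, here is a minimal topic-matching sketch in Python. It is a hypothetical helper for illustration, not broker code, and covers only the + and # semantics described in this post:&lt;/p&gt;

```python
# Hypothetical sketch of MQTT topic-filter matching; illustrative only.
def topic_matches(subscription, topic):
    """Return True if `topic` matches the `subscription` filter."""
    sub_levels = subscription.split("/")
    top_levels = topic.split("/")
    while sub_levels:
        sub = sub_levels.pop(0)
        if sub == "#":
            # The multi-level wildcard must be the last filter level;
            # it then matches all remaining topic levels, however deep.
            return not sub_levels
        if not top_levels:
            return False  # the filter is deeper than the topic
        level = top_levels.pop(0)
        if sub != "+" and sub != level:
            return False  # '+' matches exactly one arbitrary level
    return not top_levels  # both filter and topic fully consumed

# A subscription to '#' alone matches every topic:
assert topic_matches("#", "myhome/groundfloor/livingroom/temperature")
```

&lt;p&gt;With this helper, myhome/groundfloor/+/temperature matches myhome/groundfloor/livingroom/temperature, but not a topic with a different number of levels.&lt;/p&gt;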

&lt;h2&gt;
  
  
  Topics beginning with $
&lt;/h2&gt;

&lt;p&gt;Generally, you can name your MQTT topics as you wish. However, there is one exception: topics that start with a $ symbol have a different purpose. These topics are not part of the subscription when you subscribe to the multi-level wildcard as a topic (#). The $-symbol topics are reserved for internal statistics of the MQTT broker. Clients cannot publish messages to these topics. At the moment, there is no official standardization for such topics. Commonly, $SYS/ is used for this kind of information, but broker implementations vary. One suggested structure for $SYS topics is in the MQTT GitHub wiki. Here are some examples:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$SYS/broker/clients/connected
$SYS/broker/clients/disconnected
$SYS/broker/clients/total
$SYS/broker/messages/sent
$SYS/broker/uptime
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
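
&lt;p&gt;A small, hypothetical sketch of the exclusion rule above: a wildcard at the first topic level never matches a topic that starts with $:&lt;/p&gt;

```python
# Hypothetical sketch: $-prefixed topics (such as $SYS statistics) are not
# delivered to subscriptions whose filter starts with a wildcard.
def visible_to_subscription(topic, subscription):
    first_filter_level = subscription.split("/")[0]
    if topic.startswith("$") and first_filter_level in ("#", "+"):
        return False  # e.g. '#' does not receive $SYS/broker/uptime
    return True       # the normal matching rules still apply afterwards
```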



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;These are the basics of MQTT message topics. As you can see, MQTT topics are dynamic and provide great flexibility. When you use wildcards in real-world applications, there are some challenges you should be aware of. We have collected the best practices that we have learned from working extensively with MQTT in various projects and are always open to suggestions or a discussion about these practices. &lt;/p&gt;

&lt;h2&gt;
  
  
  MQTT Best Practices When Using Wildcards
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Never use a leading forward slash&lt;/li&gt;
&lt;li&gt;Never use spaces in a topic&lt;/li&gt;
&lt;li&gt;Keep the MQTT topic short and concise&lt;/li&gt;
&lt;li&gt;Use only ASCII characters; avoid non-printable characters&lt;/li&gt;
&lt;li&gt;Embed a unique identifier or the Client Id into the topic&lt;/li&gt;
&lt;li&gt;Don’t subscribe to #&lt;/li&gt;
&lt;li&gt;Don’t forget extensibility&lt;/li&gt;
&lt;li&gt;Use specific topics, not general ones&lt;/li&gt;
&lt;/ul&gt;
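
&lt;p&gt;These practices can be turned into a small, hypothetical topic linter. Note that brokers accept many of the topics flagged here; these checks encode conventions, not protocol rules:&lt;/p&gt;

```python
# Hypothetical topic linter based on common MQTT best practices.
def lint_topic(topic):
    problems = []
    if topic.startswith("/"):
        problems.append("leading forward slash")
    if " " in topic:
        problems.append("contains spaces")
    if not all(ord(c) in range(33, 127) for c in topic):
        problems.append("non-printable or non-ASCII characters")
    if topic == "#":
        problems.append("subscribing to '#' alone is an anti-pattern")
    return problems  # an empty list means no issues were found

assert lint_topic("myhome/groundfloor/livingroom/temperature") == []
```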

&lt;p&gt;&lt;a href="https://www.hivemq.com/blog/mqtt-essentials-part-5-mqtt-topics-best-practices/" rel="noopener noreferrer"&gt;Read this article for the details of each best practice when using MQTT wildcards.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Watch this video to visually understand MQTT topics and wildcards.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/juq_l70Vg1w"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.hivemq.com/blog/mqtt-essentials-part-5-mqtt-topics-best-practices/" rel="noopener noreferrer"&gt;Click here to read the original post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Get your copy of the &lt;a href="https://www.hivemq.com/download-mqtt-ebook/?utm_source=content+syndication&amp;amp;utm_medium=devto&amp;amp;utm_campaign=MQTT+Essentials" rel="noopener noreferrer"&gt;MQTT Essentials eBook&lt;/a&gt; to understand the protocol in detail without having to read the entire specification.&lt;/p&gt;

</description>
      <category>iot</category>
      <category>mqtt</category>
      <category>beginners</category>
    </item>
    <item>
      <title>MQTT: How to Publish, Subscribe &amp; Unsubscribe | Part 4</title>
      <dc:creator>HiveMQ</dc:creator>
      <pubDate>Wed, 06 Jul 2022 06:28:07 +0000</pubDate>
      <link>https://dev.to/hivemq_/mqtt-how-to-publish-subscribe-unsubscribe-part-4-fen</link>
      <guid>https://dev.to/hivemq_/mqtt-how-to-publish-subscribe-unsubscribe-part-4-fen</guid>
      <description>&lt;p&gt;Earlier in this series, we covered the basics of the publish/subscribe model. In this post we delve into the specifics of publish/subscribe in the MQTT protocol. If you haven’t read about the basics of the publish/subscribe pattern yet, we strongly encourage you to read that post first.&lt;/p&gt;

&lt;h2&gt;
  
  
  Publish
&lt;/h2&gt;

&lt;p&gt;An MQTT client can publish messages as soon as it connects to a broker. MQTT utilizes topic-based filtering of the messages on the broker. Each message must contain a topic that the broker can use to forward the message to interested clients. Typically, each message has a payload which contains the data to transmit in byte format. MQTT is data-agnostic. The use case of the client determines how the payload is structured. The sending client (publisher) decides whether it wants to send binary data, text data, or even full-fledged XML or JSON.&lt;/p&gt;

&lt;p&gt;A PUBLISH message in MQTT has several attributes that we want to discuss in detail:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxk69iou7od79joswrtk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxk69iou7od79joswrtk.png" alt=" " width="800" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Packet Identifier&lt;/strong&gt; The packet identifier uniquely identifies a message as it flows between the client and broker. The packet identifier is only relevant for QoS levels greater than zero. The client library and/or the broker is responsible for setting this internal MQTT identifier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Topic Name&lt;/strong&gt; The topic name is a simple string that is hierarchically structured with forward slashes as delimiters. For example, “myhome/livingroom/temperature” or “Germany/Munich/Octoberfest/people”. For details on topics, see part 5 of MQTT Essentials.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;QoS&lt;/strong&gt; This number indicates the Quality of Service Level (QoS) of the message. There are three levels: 0, 1, and 2. The service level determines what kind of guarantee a message has for reaching the intended recipient (client or broker). For details on QoS, see part 6 of MQTT Essentials.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retain Flag&lt;/strong&gt; This flag defines whether the message is saved by the broker as the last known good value for a specified topic. When a new client subscribes to a topic, they receive the last message that is retained on that topic. For details on retained messages, see part 8 of MQTT Essentials.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Payload&lt;/strong&gt; This is the actual content of the message. MQTT is data-agnostic. It is possible to send images, text in any encoding, encrypted data, and virtually any data in binary format.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DUP Flag&lt;/strong&gt; This flag indicates that the message is a duplicate and was resent because the intended recipient (client or broker) did not acknowledge the original message. This is only relevant for QoS greater than 0. Usually, the resend/duplicate mechanism is handled by the MQTT client library or the broker as an implementation detail. For more information, see part 6 of MQTT Essentials.&lt;/p&gt;

&lt;p&gt;When a client sends a message to an MQTT broker for publication, the broker reads the message, acknowledges the message (according to the QoS Level), and processes the message. Processing by the broker includes determining which clients have subscribed to the topic and sending the message to them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu2bbsnzs0ps1xot6leoh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu2bbsnzs0ps1xot6leoh.png" alt=" " width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The client that initially publishes the message is only concerned about delivering the PUBLISH message to the broker. Once the broker receives the PUBLISH message, it is the responsibility of the broker to deliver the message to all subscribers. The publishing client does not get any feedback about whether anyone is interested in the published message or how many clients received the message from the broker.&lt;/p&gt;
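
&lt;p&gt;The fan-out described above can be sketched with a toy in-memory broker. This is purely illustrative (exact-match subscriptions only, no QoS handling, and not how HiveMQ or any real broker is implemented); it shows that the publisher hands the message to the broker and gets no feedback about subscribers:&lt;/p&gt;

```python
# Toy in-memory broker sketch: topic-based fan-out of PUBLISH messages.
class TinyBroker:
    def __init__(self):
        self.subscriptions = {}  # topic string mapped to a set of client ids
        self.inboxes = {}        # client id mapped to a list of messages

    def subscribe(self, client_id, topic):
        self.subscriptions.setdefault(topic, set()).add(client_id)
        self.inboxes.setdefault(client_id, [])

    def publish(self, topic, payload):
        # Forward to every subscriber of the topic. The publisher receives
        # no information about how many clients (if any) got the message.
        for client_id in self.subscriptions.get(topic, set()):
            self.inboxes[client_id].append((topic, payload))

broker = TinyBroker()
broker.subscribe("client-1", "myhome/livingroom/temperature")
broker.publish("myhome/livingroom/temperature", b"21.5")
assert broker.inboxes["client-1"] == [("myhome/livingroom/temperature", b"21.5")]
```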

&lt;h2&gt;
  
  
  Subscribe
&lt;/h2&gt;

&lt;p&gt;Publishing a message doesn’t make sense if no one ever receives it; in other words, if there are no clients subscribed to the topics of the messages. To receive messages on topics of interest, the client sends a SUBSCRIBE message to the MQTT broker. This SUBSCRIBE message is very simple: it contains a unique packet identifier and a list of subscriptions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Packet Identifier&lt;/strong&gt; The packet identifier uniquely identifies a message as it flows between the client and broker. The client library and/or the broker is responsible for setting this internal MQTT identifier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;List of Subscriptions&lt;/strong&gt; A SUBSCRIBE message can contain multiple subscriptions for a client. Each subscription is made up of a topic and a QoS level. The topic in the subscribe message can contain wildcards that make it possible to subscribe to a topic pattern rather than a specific topic. If there are overlapping subscriptions for one client, the broker delivers the message that has the highest QoS level for that topic.&lt;/p&gt;
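
&lt;p&gt;The overlapping-subscription rule can be sketched as follows. For brevity, this hypothetical helper compares filters by exact match; real filters may also contain wildcards:&lt;/p&gt;

```python
# Hypothetical sketch: pick the highest QoS among a client's overlapping
# subscriptions that match a topic (exact-match filters only).
def delivery_qos(subscriptions, topic):
    """subscriptions is a list of (topic_filter, qos) pairs for one client."""
    matching = [qos for (topic_filter, qos) in subscriptions if topic_filter == topic]
    if matching:
        return max(matching)
    return None  # no subscription matches; nothing is delivered

subs = [("myhome/temperature", 0), ("myhome/temperature", 2)]
assert delivery_qos(subs, "myhome/temperature") == 2
```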

&lt;h2&gt;
  
  
  Suback
&lt;/h2&gt;

&lt;p&gt;To confirm each subscription, the broker sends a SUBACK acknowledgement message to the client. This message contains the packet identifier of the original Subscribe message (to clearly identify the message) and a list of return codes. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfxwkanf3yl9gwlcibeg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfxwkanf3yl9gwlcibeg.png" alt=" " width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Packet Identifier&lt;/strong&gt; The packet identifier is a unique identifier used to identify a message. It is the same as in the SUBSCRIBE message.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Return Code&lt;/strong&gt; The broker sends one return code for each topic/QoS pair that it receives in the SUBSCRIBE message. For example, if the SUBSCRIBE message has five subscriptions, the SUBACK message contains five return codes. The return code acknowledges each topic and shows the QoS level that is granted by the broker. If the broker refuses a subscription, the SUBACK message contains a failure return code for that specific topic, for example, if the client has insufficient permission to subscribe to the topic or the topic is malformed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1uk3ecj4198shjk77m7x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1uk3ecj4198shjk77m7x.png" alt=" " width="800" height="593"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After a client successfully sends the SUBSCRIBE message and receives the SUBACK message, it gets every published message that matches a topic in the subscriptions that the SUBSCRIBE message contained.&lt;/p&gt;
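
&lt;p&gt;A minimal sketch of how a broker might assemble SUBACK return codes. The MQTT 3.1.1 values are 0x00, 0x01, and 0x02 for granted QoS and 0x80 for failure; the permission check here is a hypothetical placeholder:&lt;/p&gt;

```python
# Sketch: one SUBACK return code per subscription in the SUBSCRIBE message.
FAILURE = 0x80  # MQTT 3.1.1 failure return code

def suback_return_codes(subscriptions, permitted_topics):
    """subscriptions: list of (topic, requested_qos); permitted_topics: a set."""
    codes = []
    for topic, requested_qos in subscriptions:
        if topic in permitted_topics:
            codes.append(requested_qos)  # granted (a broker may also downgrade)
        else:
            codes.append(FAILURE)        # e.g. insufficient permission
    return codes

assert suback_return_codes([("a/b", 1), ("secret/x", 2)], {"a/b"}) == [1, 0x80]
```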

&lt;h2&gt;
  
  
  Unsubscribe
&lt;/h2&gt;

&lt;p&gt;The counterpart of the SUBSCRIBE message is the UNSUBSCRIBE message. This message deletes existing subscriptions of a client on the broker. The UNSUBSCRIBE message is similar to the SUBSCRIBE message and has a packet identifier and a list of topics.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08vclt6pogfmeqtbuwqx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08vclt6pogfmeqtbuwqx.png" alt=" " width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Packet Identifier&lt;/strong&gt; The packet identifier uniquely identifies a message as it flows between the client and broker. The client library and/or the broker is responsible for setting this internal MQTT identifier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;List of Topics&lt;/strong&gt; The list of topics can contain multiple topics from which the client wants to unsubscribe. It is only necessary to send the topic (without QoS). The broker unsubscribes the topic, regardless of the QoS level with which it was originally subscribed.&lt;/p&gt;
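
&lt;p&gt;A hypothetical client-session sketch of the rule above: UNSUBSCRIBE removes subscriptions by topic alone, regardless of the QoS they were granted with:&lt;/p&gt;

```python
# Sketch of a per-client subscription store; illustrative only.
class Session:
    def __init__(self):
        self.subscriptions = {}  # topic filter mapped to granted QoS

    def subscribe(self, topic, qos):
        self.subscriptions[topic] = qos

    def unsubscribe(self, topics):
        # One UNSUBSCRIBE message may carry several topics; no QoS is sent.
        for topic in topics:
            self.subscriptions.pop(topic, None)

session = Session()
session.subscribe("myhome/groundfloor/temperature", 2)
session.subscribe("myhome/groundfloor/brightness", 1)
session.unsubscribe(["myhome/groundfloor/temperature"])
assert session.subscriptions == {"myhome/groundfloor/brightness": 1}
```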

&lt;h2&gt;
  
  
  Unsuback
&lt;/h2&gt;

&lt;p&gt;To confirm the unsubscribe, the broker sends an UNSUBACK acknowledgement message to the client. This message contains only the packet identifier of the original UNSUBSCRIBE message (to clearly identify the message).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppdzhhbs32twodnhyyld.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppdzhhbs32twodnhyyld.png" alt=" " width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Packet Identifier&lt;/strong&gt; The packet identifier uniquely identifies the message. As already mentioned, this is the same packet identifier that is in the UNSUBSCRIBE message. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmacy49bqnd7n9pornb0a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmacy49bqnd7n9pornb0a.png" alt=" " width="800" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After receiving the UNSUBACK from the broker, the client can assume that the subscriptions in the UNSUBSCRIBE message are deleted.&lt;/p&gt;

&lt;p&gt;Watch this video to visually understand how to publish, subscribe, and unsubscribe in MQTT.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/t2b1CwQmDRY"&gt;
&lt;/iframe&gt;
 &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.hivemq.com/blog/mqtt-essentials-part-4-mqtt-publish-subscribe-unsubscribe/" rel="noopener noreferrer"&gt;Click here to read the original post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Get your copy of the &lt;a href="https://www.hivemq.com/download-mqtt-ebook/?utm_source=content+syndication&amp;amp;utm_medium=devto&amp;amp;utm_campaign=MQTT+Essentials" rel="noopener noreferrer"&gt;MQTT Essentials eBook&lt;/a&gt; to understand the protocol in detail without having to read the entire specification.&lt;/p&gt;

</description>
      <category>iot</category>
      <category>mqtt</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
