<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: swenhelge</title>
    <description>The latest articles on DEV Community by swenhelge (@swenhelge).</description>
    <link>https://dev.to/swenhelge</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F210900%2Ff3196b93-364d-4b30-bd5f-86cdc20da8ad.png</url>
      <title>DEV Community: swenhelge</title>
      <link>https://dev.to/swenhelge</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/swenhelge"/>
    <language>en</language>
    <item>
      <title>Securely connecting to Solace Cloud from Boomi</title>
      <dc:creator>swenhelge</dc:creator>
      <pubDate>Tue, 09 Feb 2021 17:25:28 +0000</pubDate>
      <link>https://dev.to/swenhelge/securely-connecting-to-solace-cloud-from-boomi-4506</link>
      <guid>https://dev.to/swenhelge/securely-connecting-to-solace-cloud-from-boomi-4506</guid>
      <description>&lt;p&gt;So you followed the "Getting started with Boomi and Solace" tutorial or played with the "Solace PubSub+ Connector" in your Boomi account.&lt;br&gt;
Everything is great - you can connect, produce and consume events using the PubSub+ cloud service.&lt;/p&gt;

&lt;p&gt;(Note - for the rest of this article we assume you have a Connection to Solace defined in your Boomi workspace, e.g. because you followed the tutorial above)&lt;/p&gt;

&lt;p&gt;But hang on - in the tutorial you used a plain TCP socket to connect to PubSub+. Wouldn't it be better to use TLS encryption?&lt;/p&gt;

&lt;p&gt;That's when you look at your PubSub+ Service in Solace Cloud and discover there's a TLS-encrypted endpoint - labelled "Secured SMF Host". This should be easy - just use the secure endpoint and you're done, right?&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1fprzGWo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/17svle1fi9nabeu8xj40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1fprzGWo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/17svle1fi9nabeu8xj40.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well, it isn't. Otherwise I wouldn't have written this post.&lt;/p&gt;

&lt;p&gt;Next you replaced the plain TCP connection URL with the secured URL (as I did in the screenshot below). Then you tested the connection again using the handy "Test Connection" button in the Boomi Connection Setup dialog. &lt;/p&gt;

&lt;p&gt;Looks good, right?&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--H2t7RGwY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/92eim3j5ks43tq92rp1m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--H2t7RGwY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/92eim3j5ks43tq92rp1m.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Oh no - not if you are using a cloud-hosted Atom!&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UfaigYBO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/39y442p48p1j1lia7bcw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UfaigYBO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/39y442p48p1j1lia7bcw.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the Connector tries to use the default Java trust store to verify the certificate presented by the PubSub+ service - and, for good reason, this access isn't allowed in the cloud.&lt;/p&gt;

&lt;p&gt;How do we fix this?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;obtain or create a suitable trust store&lt;/li&gt;
&lt;li&gt;make the trust store available on the Atom&lt;/li&gt;
&lt;li&gt;tell the Connector to use it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Luckily we can do this. Let's look at these steps.&lt;/p&gt;

&lt;h2&gt;Obtain a suitable Trust Store&lt;/h2&gt;

&lt;p&gt;PubSub+ Cloud services use a certificate signed by DigiCert, and the standard Java trust store includes the required Root CA certificate.&lt;br&gt;
You have two options here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the easy way&lt;/li&gt;
&lt;li&gt;the hard way&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's do easy first - as you can see the Connector attempts to use &lt;code&gt;cacerts&lt;/code&gt;. This is included in all Java installations - chances are high you have a few copies on your hard disk already. So just do a search and grab one of these. You can also obtain a copy from the OpenJDK repository - e.g. &lt;a href="http://hg.openjdk.java.net/jdk/jdk/raw-file/76072a077ee1/src/java.base/share/lib/security/cacerts"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you want to make the effort of creating a trust store containing only the Root CA the Connector needs to connect to Solace Cloud, there's always the hard way.&lt;/p&gt;

&lt;p&gt;First you need to download the root certificate from the Solace Cloud console in PEM format:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mfVRR9cE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/w946ahlovd4792ezg3cn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mfVRR9cE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/w946ahlovd4792ezg3cn.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Java requires the trust store in a Java Keystore format. To create the trust store in the required format you can follow steps 1 to 6 in &lt;a href="https://docs.oracle.com/cd/E35976_01/server.740/es_admin/src/tadm_ssl_convert_pem_to_jks.html"&gt;Converting PEM-format keys to JKS format&lt;/a&gt;.&lt;br&gt;
In the following screenshots I used the easy way; if you took the hard way, just replace &lt;code&gt;cacerts&lt;/code&gt; wherever you see it with the name of your trust store. &lt;/p&gt;
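&lt;p&gt;If you'd rather script the hard way than click through the linked guide, the same conversion can be done in a few lines of Java. This is a minimal sketch under stated assumptions - the class name, certificate alias and store password below are illustrative, and the JDK's &lt;code&gt;keytool -importcert&lt;/code&gt; achieves the same result:&lt;/p&gt;

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;

public class PemToJks {

    // Reads one PEM-encoded X.509 certificate and writes a JKS trust store
    // containing just that certificate.
    public static void convert(InputStream pemIn, OutputStream jksOut,
                               String alias, char[] storePass) throws Exception {
        Certificate rootCa = CertificateFactory.getInstance("X.509")
                .generateCertificate(pemIn);
        KeyStore trustStore = KeyStore.getInstance("JKS");
        trustStore.load(null, null);                   // start from an empty store
        trustStore.setCertificateEntry(alias, rootCa); // add the downloaded root CA
        trustStore.store(jksOut, storePass);           // write out the JKS file
    }
}
```

&lt;p&gt;Feed it the PEM file you downloaded from the Solace Cloud console and write the output to a file of your choice; the &lt;code&gt;changeit&lt;/code&gt; password is only the Java convention, so pick your own.&lt;/p&gt;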

&lt;h2&gt;Uploading the Trust Store to Boomi Atom(s)&lt;/h2&gt;

&lt;p&gt;Now you have a trust store, how do you deploy it to your Boomi environment?&lt;br&gt;
There are &lt;a href="https://community.boomi.com/s/article/atominstallationdirectoryreference"&gt;multiple folders&lt;/a&gt; that a Boomi process has access to.&lt;br&gt;
From that list, the &lt;code&gt;work&lt;/code&gt; and the &lt;code&gt;userlib&lt;/code&gt; locations stood out to me. Any process can write into the work directory, so that may be an option, but although it is marked as "Permanent" storage it reads like it is intended for temporary files. The other folder - userlib - is used to store "Custom Libraries". These are a way to add additional Java libraries to a Boomi environment, and they go through the same packaging, versioning and deployment cycle as Boomi processes themselves.&lt;br&gt;
Sounds like a good way to manage the rollout of our trust store.&lt;/p&gt;

&lt;p&gt;The steps to do this are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upload the trust store to the &lt;a href="https://help.boomi.com/bundle/integration/page/r-atm-Account_Library_Management.html"&gt;Account Library&lt;/a&gt; in your Boomi account&lt;/li&gt;
&lt;li&gt;Create and deploy a &lt;a href="https://help.boomi.com/bundle/integration/page/c-atm-Custom_Library_components.html"&gt;Custom Library&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Verify the trust store was uploaded to your Atom(s)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'll add some key screenshots below; detailed instructions can be found in the links.&lt;/p&gt;

&lt;p&gt;Here's how you add the "Account Library". Note that you are only allowed to upload JAR files - you need to add the &lt;code&gt;.jar&lt;/code&gt; extension to your trust store's file name. I have renamed mine to &lt;code&gt;cacerts.jar&lt;/code&gt;.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--i2ndybTa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bvdvngtg6w8dj2c5bn8d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i2ndybTa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bvdvngtg6w8dj2c5bn8d.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Then create the "Custom Library"; here's a screenshot of how to do that. The library I created is visible in the background - note how I added &lt;code&gt;cacerts.jar&lt;/code&gt; to it:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rxpEamLC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dwjwra2on8zyhkibn3nt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rxpEamLC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dwjwra2on8zyhkibn3nt.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you prefer, when creating the custom library you can associate it with the Connector by setting the library type to "Connector" and the connector type to "PubSub+ Connector". This will put the trust store in a dedicated sub-directory of &lt;code&gt;userlib&lt;/code&gt;:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oNSuAtxw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9lw01ns4npsx9b9dre7l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oNSuAtxw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9lw01ns4npsx9b9dre7l.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You then create a "Packaged Component" and deploy it to your environment. I'll skip this here and fast-forward to the result - the library was applied to the Atom:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mcRZzsm---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ow6ph2eqppbmo0plyu1z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mcRZzsm---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ow6ph2eqppbmo0plyu1z.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here is what it looks like if you set the Custom Library type to "Connector" - note the sub-directory that was created in &lt;code&gt;userlib&lt;/code&gt;:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ad8dV4yk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/x2xcg5ikqv2so6ler8fr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ad8dV4yk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/x2xcg5ikqv2so6ler8fr.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Set Trust Store used by the Connector&lt;/h2&gt;

&lt;p&gt;The last step is to let the Connector know to use our trust store.&lt;br&gt;
This is where the "Custom Properties" come into play. There is a property &lt;code&gt;SSL_TRUST_STORE&lt;/code&gt; we can set, and we point it to the file that was uploaded - remember, it's in the &lt;code&gt;userlib&lt;/code&gt; directory:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LzjqhAE5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b12d2pmf33nmrcccp0zt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LzjqhAE5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b12d2pmf33nmrcccp0zt.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you selected "Connector" as the Custom Library type, the trust store will be located in a sub-directory of userlib - you can look up the location in Atom Management (see the screenshot in the preceding section).&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Bel2agWh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hdzvtriist488mtt0c3e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Bel2agWh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hdzvtriist488mtt0c3e.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There's some information on these "Custom Properties" in the &lt;a href="https://help.boomi.com/bundle/connectors/page/int-Solace_PubSub_connection.html"&gt;Connector's documentation&lt;/a&gt; and a guide on the properties you can use in the &lt;a href="https://docs.solace.com/API-Developer-Online-Ref-Documentation/java/com/solacesystems/jcsmp/JCSMPProperties.html"&gt;Solace API documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now try "Test Connection" again, select an Atom in the Environment that you applied the "Custom Library" to and ...&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NNjgBi-F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fqdnrqy695bz4xeuhbpw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NNjgBi-F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fqdnrqy695bz4xeuhbpw.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ipaas</category>
      <category>boomi</category>
      <category>solace</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>Integration/Functional Testing with JUnit and Solace PubSub+</title>
      <dc:creator>swenhelge</dc:creator>
      <pubDate>Thu, 19 Nov 2020 10:36:19 +0000</pubDate>
      <link>https://dev.to/swenhelge/integration-functional-testing-with-junit-and-solace-pubsub-1065</link>
      <guid>https://dev.to/swenhelge/integration-functional-testing-with-junit-and-solace-pubsub-1065</guid>
      <description>&lt;p&gt;When developing event driven Java services for Solace PubSub+, at some stage you want to move from unit testing Java code to testing services against a broker.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/solace-iot-team/pubsubplus-container-junit"&gt;This repository&lt;/a&gt; demonstrates a base class for JUnit tests that launches a new PubSub+ instance in docker for use in tests.&lt;br&gt;
It utilises &lt;a href="https://www.testcontainers.org/"&gt;testcontainers&lt;/a&gt;, manages the container life cycle manually as there is no testcontainers wrapper (yet) for PubSub+ and follows the &lt;a href="https://www.testcontainers.org/test_framework_integration/manual_lifecycle_control/"&gt;singleton container pattern&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Around the time I put my original version of this together, spierepf (Peter-Frank Spierenburg) created an example using JUnit 5 - check it out here:&lt;br&gt;
&lt;a href="https://github.com/spierepf/solace-junit5-test"&gt;https://github.com/spierepf/solace-junit5-test&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Port Mapping and Ports Exposed by Container&lt;/h2&gt;

&lt;p&gt;When the container is created, the ports to map are provided. This tells testcontainers which of the container's internal ports it should expose. &lt;a href="https://www.testcontainers.org/features/networking/"&gt;testcontainers exposes random ports and maps these to the internal ports of the container&lt;/a&gt;.&lt;br&gt;
For example, the base class maps port 55555 (plain SMF), which may be exposed as any port. The base class logs the SMF port used, and the output will look similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INFO: Started Solace PubSub+ Docker Container, available on host [localhost], SMF port [32872]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So an SMF client needs to connect on port &lt;code&gt;32872&lt;/code&gt;.&lt;/p&gt;
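&lt;p&gt;For illustration, here is roughly what that looks like using the testcontainers API directly. This is a hedged sketch rather than the repository's actual code - the image tag, shared-memory size and admin environment variables are assumptions you should check against the PubSub+ container documentation:&lt;/p&gt;

```java
import org.testcontainers.containers.GenericContainer;

// Sketch: start a PubSub+ broker container and ask testcontainers
// for the random host port that maps to internal SMF port 55555.
public class PubSubPlusContainerSketch {
    public static void main(String[] args) {
        GenericContainer solace = new GenericContainer("solace/solace-pubsub-standard:latest")
                .withExposedPorts(55555)              // internal SMF port
                .withSharedMemorySize(1_000_000_000L) // the broker needs a large /dev/shm
                .withEnv("username_admin_globalaccesslevel", "admin")
                .withEnv("username_admin_password", "admin");
        solace.start();
        // Never hard-code 55555 in the client - resolve the mapped port at runtime.
        String smfHost = solace.getHost() + ":" + solace.getMappedPort(55555);
        System.out.println("SMF clients should connect to " + smfHost);
    }
}
```

&lt;p&gt;The key point is &lt;code&gt;getMappedPort(55555)&lt;/code&gt;: the host-side port is random, so the base class has to look it up and hand it to the tests.&lt;/p&gt;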

&lt;h2&gt;Base class - &lt;a href="https://github.com/solace-iot-team/pubsubplus-container-junit/blob/main/src/main/java/com/solace/junit/AbstractPubSubPlusTestCase.java"&gt;AbstractPubSubPlusTestCase&lt;/a&gt;&lt;/h2&gt;

&lt;p&gt;This class starts the docker container and logs a message before and after start-up.&lt;br&gt;
It exposes many of the plain services such as SMF, REST/HTTP, MQTT/TCP. The ports used by the container can be obtained from the class with &lt;code&gt;getXxxxx&lt;/code&gt; methods.&lt;br&gt;
The class also provides &lt;code&gt;getAdminUser&lt;/code&gt; and &lt;code&gt;getAdminPassword&lt;/code&gt; methods so SEMP v2 calls can be sent to configure the broker.&lt;/p&gt;

&lt;h2&gt;Example test - &lt;a href="https://github.com/solace-iot-team/pubsubplus-container-junit/blob/main/src/test/java/com/solace/junit/demo/SolaceIntegrationTest.java"&gt;SolaceIntegrationTest&lt;/a&gt;&lt;/h2&gt;

&lt;p&gt;This is an example test that extends the &lt;code&gt;AbstractPubSubPlusTestCase&lt;/code&gt; class.&lt;/p&gt;

&lt;p&gt;It includes some test cases that demonstrate how the base class can be used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Publish message using SMF&lt;/li&gt;
&lt;li&gt;Publish and subscribe to message using SMF&lt;/li&gt;
&lt;li&gt;Publish using REST/HTTP and subscribe using SMF&lt;/li&gt;
&lt;li&gt;Obtain a list of message VPNs from the broker using SEMPv2/HTTP&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>java</category>
      <category>junit</category>
      <category>solace</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>Event Mess to Event Mesh - Automating the Configuration of a Hybrid IoT Event Mesh with Ansible</title>
      <dc:creator>swenhelge</dc:creator>
      <pubDate>Tue, 08 Sep 2020 13:40:54 +0000</pubDate>
      <link>https://dev.to/swenhelge/event-mess-to-event-mesh-automating-the-configuration-of-a-hybrid-iot-event-mesh-with-ansible-4opp</link>
      <guid>https://dev.to/swenhelge/event-mess-to-event-mesh-automating-the-configuration-of-a-hybrid-iot-event-mesh-with-ansible-4opp</guid>
      <description>&lt;p&gt;In this part of the Solace and Ansible blog series, we will describe how to configure and reconfigure a complex Solace event mesh in an automated fashion with a repeatable, consistent approach. The use case discussed here is a globally distributed, hybrid IoT event mesh set up for an international shipping company. The company wants to enable a digital shipping eco-system that can, for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Analyse tracking and quality data such as geo-location, cooling, and shock of all their customers’ shipments in real time to create an overall better customer experience&lt;/li&gt;
&lt;li&gt;Include data streams from their own containers, 3rd party ships, ports, etc.&lt;/li&gt;
&lt;li&gt;Selectively and securely provide tracking and quality data to regional partners – the freight forwarders – so they can better serve their end-customers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s dive into the details of the project. In our approach, we have separated the deployment concerns into 1) &lt;strong&gt;infrastructure,&lt;/strong&gt; and 2) &lt;strong&gt;application-specific configuration&lt;/strong&gt; so they can be managed and applied independently:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure&lt;/strong&gt;: the Event Mesh topology which consists of ungoverned DMR (dynamic message routing) links and governed links – static bridges – to public cloud providers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application&lt;/strong&gt;: The use case specific resources such as client usernames, queues as well as attaching subscriptions to bridges – governed links – where events are allowed to flow to brokers in the public cloud&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Ansible tasks described in the &lt;a href="https://solace.com/blog/using-ansible-automate-config-pubsub-plus/"&gt;previous article&lt;/a&gt; give us the capability to safely configure both of these aspects.&lt;/p&gt;

&lt;h2&gt;Infrastructure – Event Mesh Topology&lt;/h2&gt;

&lt;p&gt;The topology we intend to set up is a fully connected, internal event mesh with governed connections to services hosted in Solace &lt;a href="https://solace.com/products/event-broker/cloud/"&gt;PubSub+ Event Broker: Cloud&lt;/a&gt;. This diagram illustrates our original test deployment of three on-premise nodes connected to two Solace Cloud Services.&lt;br&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dSR9AZMI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/08/automating-hybrid-iot-event-mesh-with-ansible_pic-01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dSR9AZMI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/08/automating-hybrid-iot-event-mesh-with-ansible_pic-01.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we want to grow the event mesh over time by adding additional nodes to the internal Mesh and bridges to cloud services, we are looking to create an Ansible playbook that can accommodate a growing inventory of brokers. Thus, the playbook can serve as a good starting point for your own deployment, possibly with only small adjustments.&lt;/p&gt;

&lt;p&gt;The full topology playbook can be found on &lt;a href="https://github.com/solace-iot-team/ansible-example-event-mesh/blob/master/topology.yml"&gt;GitHub&lt;/a&gt;. It has two main sections: creating the static bridges (light green in the diagram above) and the DMR cluster and internal links (dark green in the diagram above).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/solace-iot-team/ansible-example-event-mesh/blob/master/topology.yml"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NIrQH2VB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.githubassets.com/images/modules/logos_page/GitHub-Mark.png" width="100" height="100"&gt;&lt;strong&gt;Playbook to set up Event Mesh Topology: Static Bridges&lt;/strong&gt;&lt;strong&gt;by solace-iot-team&lt;/strong&gt;Open on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When setting up the topology, we only set up the bridges but do not attach subscriptions, i.e. we create the link, but no events will initially flow. Static bridges are backed by queues for guaranteed message propagation between brokers. Below is the sequence of the tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remove and re-create queues required by bridges&lt;/li&gt;
&lt;li&gt;Remove and re-create the bridges&lt;/li&gt;
&lt;li&gt;Add connection to the remote Message VPN&lt;/li&gt;
&lt;li&gt;Add trusted common names for TLS&lt;/li&gt;
&lt;li&gt;Enable the bridges&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is applied to all brokers in the mesh, regardless of broker location, as it links up the internal mesh brokers to the Solace Cloud services. There are two groups of hosts defined:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;internal_dmr&lt;/code&gt;: on-premise brokers that form the full mesh&lt;/li&gt;
&lt;li&gt;&lt;code&gt;public_cloud&lt;/code&gt;: the Solace Cloud brokers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The code snippet below shows how the bridges are removed and re-created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# remove existing bridges and re-create
- name: Remove Bridge
  loop: "{{ bridges|default([]) }}"
  solace_bridge:
    name: "{{ item.name }}"
    msg_vpn: "{{ msg_vpn }}"
    virtual_router: "{{ item.virtual_router }}"
    state: absent

- name: Create or update bridge
  loop: "{{ bridges|default([]) }}"
  solace_bridge:
    name: "{{ item.name }}"
    msg_vpn: "{{ msg_vpn }}"
    virtual_router: "{{ item.virtual_router }}"
    settings:
      enabled: false
      remoteAuthenticationBasicClientUsername: "{{item.remote_auth_basic_client_username}}"
      remoteAuthenticationBasicPassword: "{{ item.remote_auth_basic_password }}"
      remoteAuthenticationScheme: basic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that many parameters are set from variables defined in the inventory. We will look at the inventory further below.&lt;/p&gt;

&lt;p&gt;The second section takes care of creating the Solace DMR cluster. Messages flow freely between the nodes as all brokers are connected via internal DMR links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remove and re-upload the trusted CA’s certificate (this is required to link brokers via TLS)&lt;/li&gt;
&lt;li&gt;Remove the DMR cluster from all brokers and re-create&lt;/li&gt;
&lt;li&gt;Add DMR links&lt;/li&gt;
&lt;li&gt;Add connection to a remote broker to the links&lt;/li&gt;
&lt;li&gt;Add trusted common names for TLS to the links&lt;/li&gt;
&lt;li&gt;Enable the DMR links now that all configuration is done&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This code snippet shows how the DMR cluster is added to all brokers in the internal mesh. Again, note that most information comes from the inventory. This playbook is only applied to the “internal_dmr” group of brokers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Add DMR Cluster
    solace_dmr:
      name: "{{dmr_internal_cluster}}"
      state: present
      settings:
        tlsServerCertEnforceTrustedCommonNameEnabled: "{{enforce_trusted_common_name}}"
        tlsServerCertValidateDateEnabled: false
        tlsServerCertMaxChainDepth: 6
        authenticationBasicPassword: "{{dmr_cluster_password}}"

  - name: Add DMR Link
    solace_link:
      name: "{{item.remote_node}}"
      dmr: "{{dmr_internal_cluster}}"
      state: present
      settings:
        enabled: false
        authenticationBasicPassword: "{{dmr_cluster_password}}"
        span: "{{dmr_span}}"
        initiator: "{{item.initiator}}"
        transportTlsEnabled: false
    loop: "{{ dmr_links|default([]) }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As mentioned above, the inventory defines two groups of brokers: &lt;code&gt;internal_dmr&lt;/code&gt; and &lt;code&gt;public_cloud&lt;/code&gt;. For each host/broker we define the queues, bridges, DMR links and so on that should be created.&lt;/p&gt;

&lt;p&gt;Here’s an example of the configuration for an “internal_dmr” broker. See &lt;a href="https://github.com/solace-iot-team/ansible-example-event-mesh/blob/master/inventory/local-topology.yml"&gt;GitHub&lt;/a&gt; for the full inventory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/solace-iot-team/ansible-example-event-mesh/blob/master/inventory/local-topology.yml"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NIrQH2VB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.githubassets.com/images/modules/logos_page/GitHub-Mark.png" width="100" height="100"&gt;&lt;strong&gt;Internal DMR&lt;/strong&gt;&lt;strong&gt;by solace-iot-team&lt;/strong&gt;Open on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A couple of things to note:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A host contains two lists: “dmr_links” and “bridges”.&lt;/li&gt;
&lt;li&gt;While in the Solace broker the queues associated with a bridge are separate objects, we list the required queue(s) for a bridge underneath it.&lt;/li&gt;
&lt;li&gt;For each DMR link we need to set the initiator of the underlying connection. We will discuss the implications further below.&lt;/li&gt;
&lt;li&gt;The vars within this host group hold the common information for the DMR cluster such as the cluster name and password.&lt;/li&gt;
&lt;li&gt;Referring back to the “Add DMR Cluster” and “Add DMR Link” tasks in the snippet above:

&lt;ul&gt;
&lt;li&gt;The former uses the “vars” in the snippet below, for example,
&lt;code&gt;name: "{{dmr_internal_cluster}}"&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The latter loops through the DMR links defined for a host and uses information such as the connection initiator:
&lt;code&gt;loop: "{{ dmr_links|default([]) }}"&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;internal_dmr:
  hosts:
    macbook:
      ansible_connection: local
      msg_vpn: default
      host: localhost
      dmr_links:
      - name: internal-lenovo
        remote_node: v195858
        dmr_remote_address: 192.168.0.34:55555
        initiator: remote
        trusted_common_names:
        - name: "CA-CERT"
      bridges:
      - name: mac-partner-edge-01
        remote_auth_basic_client_username: solace-cloud-client
        remote_auth_basic_password: secret
        virtual_router: auto
        queues: 
        - name: mac-partner-edge-01_Queue
          owner: default
        remote_vpns: 
        - name: partner-edge-01
          location: mr2.messaging.solace.cloud:55555
          queue_binding: mac-partner-edge-01_Queue
          tls_enabled: false
        trusted_common_names:
          - name: "CA-CERT"
  vars:
    # use common connection parameters as we use the cloud API SEMP v2 proxy
    secure_connection: false
    port: 8080
    password: admin
    username: admin 
    dmr_internal_cluster: internal-cluster
    dmr_cluster_password: secret
    dmr_span: internal
    cert_content: |
        -----BEGIN CERTIFICATE-----
        -----END CERTIFICATE-----
    enforce_trusted_common_name: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why did we explicitly define the DMR links in the inventory? Arguably, we could write a playbook that simply links every broker to every other broker automatically. The reason is that we needed to plan out which node initiates the DMR connection – in our case some nodes cannot be reached from the outside, due to security policies preventing incoming connections. For links between these nodes and others, the “isolated” node needs to be the initiator of the connection. Configuring the event mesh may therefore require knowledge of the network infrastructure, which means you may need to plan and configure DMR links explicitly. This is especially true when multiple nodes have restricted inbound connectivity, in which case you need to work out how to connect each pair of such brokers.&lt;/p&gt;

&lt;p&gt;Now that we have the topology set up, let’s see how we can add the resources required by an application interacting with this topology.&lt;/p&gt;

&lt;h2&gt;Use Case-Specific Resources&lt;/h2&gt;

&lt;p&gt;An example similar to our actual use case is a global shipment company that tracks containers as they move across regions and continents. For internal processes, the company needs to be able to track any shipment from anywhere in the world. The company offers to propagate all shipping and tracking events to a Solace Cloud service for its partners, so they can process and analyse events in real time – but only the data that concerns shipments into or out of their region.&lt;br&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4v0tHAt1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/08/automating-hybrid-iot-event-mesh-with-ansible_pic-02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4v0tHAt1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/08/automating-hybrid-iot-event-mesh-with-ansible_pic-02.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tracking events are sent on a topic structure like &lt;code&gt;shipment/from/US/to/UK/{shipment_id}/scan/{location}&lt;/code&gt;. Mapping this use case onto our topology will make it clearer what we need to configure:&lt;br&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4AowzVb0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/08/automating-hybrid-iot-event-mesh-with-ansible_pic-03.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4AowzVb0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2020/08/automating-hybrid-iot-event-mesh-with-ansible_pic-03.png" alt=""&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
We have multiple clients that need to connect to different parts of the mesh:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Global Ops – can connect to any node on the internal mesh and receive all shipment tracking events.&lt;/li&gt;
&lt;li&gt;US-based partner – can only connect to the public cloud broker in the US and should only receive events concerning its region. We could enforce this via its Access Control List (ACL), but since the service is connected to the internal mesh via a bridge, we can enforce the rule on the bridge itself so that only relevant events are ever propagated to the broker servicing the partner.&lt;/li&gt;
&lt;li&gt;EU-based partner – can only connect to the EU broker. As above, we can ensure that only France- and UK-related events are ever propagated to the partner service.&lt;/li&gt;
&lt;li&gt;Tracking points within the shipment networks that emit the tracking events – they need to connect to a specific node in the mesh and be allowed to publish on any topic within the “shipment” namespace.&lt;/li&gt;
&lt;/ul&gt;
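&lt;p&gt;The regional filtering above relies on Solace topic wildcards: &lt;code&gt;*&lt;/code&gt; matches a single topic level and a trailing &lt;code&gt;&amp;gt;&lt;/code&gt; matches one or more remaining levels. A simplified matcher (it ignores SMF's prefix form of &lt;code&gt;*&lt;/code&gt;, e.g. &lt;code&gt;ab*&lt;/code&gt;) shows how a subscription like &lt;code&gt;shipment/from/*/to/UK/&amp;gt;&lt;/code&gt; selects only UK-bound events:&lt;/p&gt;

```python
# Simplified Solace topic matching: '*' matches exactly one level,
# a trailing '>' matches one or more remaining levels.

def topic_matches(subscription, topic):
    sub = subscription.split("/")
    levels = topic.split("/")
    for i, part in enumerate(sub):
        if part == ">" and i == len(sub) - 1:
            return len(levels) > i  # at least one level must remain
        if i >= len(levels):
            return False
        if part != "*" and part != levels[i]:
            return False
    return len(levels) == len(sub)

event = "shipment/from/US/to/UK/4711/scan/LHR"
topic_matches("shipment/from/*/to/UK/>", event)  # UK-bound: matched
topic_matches("shipment/from/FR/to/>", event)    # not from FR: filtered out
```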

&lt;p&gt;To enable this use case, we need to create:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ACL profiles&lt;/li&gt;
&lt;li&gt;Client Usernames&lt;/li&gt;
&lt;li&gt;Bridge subscriptions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An example playbook that performs exactly these steps can be found in &lt;a href="https://github.com/solace-iot-team/ansible-example-event-mesh/blob/master/shipment_setup.yml"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/solace-iot-team/ansible-example-event-mesh/blob/master/shipment_setup.yml"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NIrQH2VB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.githubassets.com/images/modules/logos_page/GitHub-Mark.png" width="100" height="100"&gt;&lt;strong&gt;Shipment Setup Example&lt;/strong&gt;&lt;strong&gt;by solace-iot-team&lt;/strong&gt;Open on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here are some examples from the playbook – as for the topology, you can see we use a lot of variables from the inventory file so we can easily add additional client usernames, subscriptions, and so on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Remove Client Usernames before removing the ACL Profile
  solace_client:
    name: "{{ item.name }}"
    msg_vpn: "{{ msg_vpn }}"
    state: absent
  loop: "{{ users | default([]) }} "

- name: Add ACL Profile
  solace_acl_profile:
    name: "{{ item.name }}"
    msg_vpn: "{{ msg_vpn }}"
    settings:
      clientConnectDefaultAction: "{{ item.client_connect_default }}"
  loop: "{{ acls | default([]) }} "

- name: Add ACL Publish Exception
  solace_acl_publish_exception:
    name: "{{ item.1.topic }}"
    acl_profile_name: "{{ item.0.name }}"
    msg_vpn: "{{ msg_vpn }}"
    topic_syntax: "{{ item.1.syntax }}"
  with_subelements: 
    - "{{ acls | default([]) }} "
    - publish_topic_exceptions
    - flags:
      skip_missing: true

- name: Add Client
  solace_client:
    name: "{{ item.name }}"
    msg_vpn: "{{ msg_vpn }}"
    settings:
      clientProfileName: "{{ item.client_profile }}"
      aclProfileName: "{{ item.acl_profile }}"
      password: "{{ item.password }}"
      enabled: "{{ item.enabled }}"
  loop: "{{ users | default([]) }} "

- name: add subscriptions
  solace_bridge_remote_subscription:
    name: "{{ item.1.name }}"
    msg_vpn: "{{ msg_vpn }}"
    bridge_name: "{{ item.0.name }}"
    virtual_router: "{{ item.0.virtual_router }}"
    deliver_always: true
  with_subelements: 
    - "{{ bridges | default([]) }}"
    - remote_subscriptions
    - flags:
      skip_missing: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that before removing an ACL, all client usernames using the ACL must be removed or updated to use a different ACL.&lt;/p&gt;

&lt;p&gt;The full playbook contains a few more tasks, but it is straightforward and its results can easily be changed by adjusting the inventory. The inventory defines only one host group – “shipment_demo”. For each broker in the mesh (cloud and on-premises), it specifies the configuration to apply. The snippet below shows the configuration for one internal broker and one cloud service. You can find the full inventory in &lt;a href="https://github.com/solace-iot-team/ansible-example-event-mesh/blob/master/inventory/local-shipment-setup.yml"&gt;GitHub&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/solace-iot-team/ansible-example-event-mesh/blob/master/inventory/local-shipment-setup.yml"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NIrQH2VB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.githubassets.com/images/modules/logos_page/GitHub-Mark.png" width="100" height="100"&gt;&lt;strong&gt;Local Shipment Setup&lt;/strong&gt;&lt;strong&gt;by solace-iot-team&lt;/strong&gt;Open on GitHub&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# internal broker
macbook:
  ansible_connection: local
  secure_connection: false
  port: 8080
  host: 192.168.0.8
  password: admin
  username: admin 
  msg_vpn: default
  bridges:
  - name: mac-partner-edge-01
    virtual_router: auto
    guaranteed_remote_subscriptions: 
    - name: "shipment/from/UK/to/&amp;gt;"
      queue: mac-partner-edge-01_Queue
    - name: "shipment/from/FR/to/&amp;gt;"
      queue: mac-partner-edge-01_Queue
    - name: "shipment/from/*/to/UK/&amp;gt;"
      queue: mac-partner-edge-01_Queue
    - name: "shipment/from/*/to/FR/&amp;gt;"
      queue: mac-partner-edge-01_Queue
  acls: 
  - name: shipping_publisher
    client_connect_default: allow
    publish_topic_exceptions:
    - topic: shipment/&amp;gt;
      syntax: smf
  - name: global_shipping_ops
    client_connect_default: allow
    subscribe_topic_exceptions:
    - topic: shipment/&amp;gt;
      syntax: smf
  users:
  - name: publisher
    password: HLP2
    acl_profile: shipping_publisher
    client_profile: default
    enabled: true
  - name: global_ops_europe
    password: wzC3B
    acl_profile: global_shipping_ops
    client_profile: default
    enabled: true
# partner broker in Europe
partner_edge_01_na:
  ansible_connection: local
  secure_connection: true
  port: 943
  host: mr2.messaging.solace.cloud
  password: nrkijp
  username: partner-edge-01-admin
  msg_vpn: partner-edge-01
  acls: 
  - name: partner-subscriber
    client_connect_default: allow
    subscribe_topic_exceptions:
    - topic: "shipment/&amp;gt;"
      syntax: smf
  users:
  - name: demo
    password: HLP2
    acl_profile: partner-subscriber
    client_profile: default
    enabled: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first thing to note is that we attach guaranteed message subscriptions to the bridge configuration on the internal broker so that messages for the EU partner can flow to the cloud service. We also declare ACLs on the internal broker so that publishers of shipment events can connect and send events, and consumers from the global ops applications can connect and receive them. The topic permissions are deliberately broad, allowing these clients to publish or consume messages on a wide range of topics.&lt;/p&gt;

&lt;p&gt;Having ensured relevant messages are propagated to the EU partner broker, we simply declare an ACL and a client username on this broker – we can even grant very permissive subscription privileges to the ACL as the bridge configuration gives us tight control over the events that are propagated to the partner broker.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this article we have seen how we can use the ansible-solace tasks to completely automate the setup of a use case utilizing a hybrid event mesh. We have discussed why it makes sense to separate event mesh topology configuration from the specific configuration required to support a use case with all its connecting apps. Creating generic playbooks that are driven by the inventory allows us to easily amend the event mesh topology, de-provision and re-provision use-case configurations, and support different use cases simply by creating new inventory files.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cicd</category>
      <category>ansible</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>Responsible for Solace PubSub+ DevOps?
Ansible modules for PubSub+ may make your life easier …</title>
      <dc:creator>swenhelge</dc:creator>
      <pubDate>Fri, 04 Sep 2020 09:15:21 +0000</pubDate>
      <link>https://dev.to/swenhelge/use-ansible-to-automate-the-configuration-of-solace-pubsub-event-broker-4ghi</link>
      <guid>https://dev.to/swenhelge/use-ansible-to-automate-the-configuration-of-solace-pubsub-event-broker-4ghi</guid>
      <description>&lt;p&gt;**&lt;br&gt;
Use Ansible to Automate the Configuration of Solace PubSub+ Event Broker**&lt;/p&gt;

&lt;p&gt;Managing and configuring a number of Solace PubSub+ event brokers – or even a single one – in a consistent and repeatable manner is a crucial requirement for production deployments.&lt;/p&gt;

&lt;p&gt;Though the &lt;a href="https://solace.com/products/event-broker/"&gt;PubSub+ Event Broker&lt;/a&gt; itself supports a number of configuration options such as a REST API, you still need to implement logic to manage a configuration, apply configurations consistently, and re-apply configurations in a safe manner. In addition, &lt;em&gt;removing&lt;/em&gt; a configuration is just as important during the development lifecycle of your project. This is where tools such as Ansible come in useful.&lt;/p&gt;

&lt;p&gt;For a few recent engagements we were looking into how we could address this, and we came across a number of open source projects set up by Solace colleagues and end users, such as this one: &lt;a href="https://solace.com/blog/automating-solace-configuration-management-semp-ansible/"&gt;Automating Solace configuration management using SEMP and Ansible&lt;/a&gt;. The most promising project we found was Mark Street’s &lt;a href="https://github.com/mkst/ansible-solace"&gt;ansible-solace&lt;/a&gt;. As he describes in his &lt;a href="https://medium.com/@streetster/using-ansible-to-configure-a-solace-appliance-or-vmr-6ce5383d27e7"&gt;blog&lt;/a&gt;, the project was developed for in-house production use, so it was sufficiently battle-hardened. The project defines a framework for creating Ansible tasks for the &lt;a href="https://docs.solace.com/API-Developer-Online-Ref-Documentation/swagger-ui/config/index.html"&gt;Solace PubSub+ SEMP API&lt;/a&gt; and in its initial version supported the most important resources on the broker. We required additional resources for our projects, such as Bridges, DMR Cluster configuration, and REST Delivery Points (RDP). Fortunately, adding tasks was pretty simple thanks to the foundation that Mark Street et al. had built.&lt;/p&gt;
&lt;h2&gt;
  
  
  What’s the current state of ansible-solace?
&lt;/h2&gt;

&lt;p&gt;It supports a majority of Message VPN and broker-level resources that are manageable via REST/SEMPv2.&lt;/p&gt;
&lt;h3&gt;
  
  
  Message VPN Level:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Message VPN&lt;/li&gt;
&lt;li&gt;ACL Profile&lt;/li&gt;
&lt;li&gt;Client Username&lt;/li&gt;
&lt;li&gt;Client Profile&lt;/li&gt;
&lt;li&gt;Queue and Queue Subscriptions&lt;/li&gt;
&lt;li&gt;Topic Endpoint&lt;/li&gt;
&lt;li&gt;Bridge&lt;/li&gt;
&lt;li&gt;Rest Delivery Point&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Broker Level:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Certificate Authority&lt;/li&gt;
&lt;li&gt;DMR Bridge and Link&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Note: Some of these resources are not available in Solace&lt;/em&gt; &lt;a href="https://solace.com/products/event-broker/cloud/"&gt;&lt;em&gt;PubSub+ Event Broker: Cloud&lt;/em&gt;&lt;/a&gt; &lt;em&gt;– such as Client Profile and the broker-level resources, so these respective tasks only work for self-managed deployments.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Getting Started with Ansible-Solace
&lt;/h2&gt;

&lt;p&gt;The quickest way to get started is to head over to the ansible-solace-samples GitHub repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/solace-iot-team/ansible-solace"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NIrQH2VB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.githubassets.com/images/modules/logos_page/GitHub-Mark.png" width="100" height="100"&gt;&lt;br&gt;
&lt;strong&gt;Sample projects using ansible-solace&lt;/strong&gt;&lt;strong&gt;by solace-iot-team&lt;/strong&gt;&lt;br&gt;
Follow the instructions in the README, clone it, and run the sample projects. Open on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to look at the code, the development repository lives here: &lt;a href="https://github.com/solace-iot-team/ansible-solace-modules"&gt;https://github.com/solace-iot-team/ansible-solace-modules&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Using the Ansible Tasks
&lt;/h2&gt;

&lt;p&gt;All tasks have common parameters describing which broker to connect to via SEMPv2 Administrator API. The defaults are set to work against a local broker with the standard admin username and password – e.g. deployed in docker following one of &lt;a href="https://docs.solace.com/Solace-SW-Broker-Set-Up/Docker-Containers/Set-Up-Docker-Container-Image.htm"&gt;Solace’s Docker Install Guides&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;These are the connection parameters:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;username: "{{ username }}"
password: "{{ password }}"
host: "{{ host }}"
port: "{{ port }}"
secure_connection: "{{ secure_connection }}"
timeout: 30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Username and password are those of the management user, e.g. admin on a locally installed broker. Port is the management port – 8080 by default for plain HTTP connections, or 943 for HTTPS in Solace PubSub+ Event Broker: Cloud. Set the secure connection parameter to “True” to switch to the HTTPS protocol.&lt;/p&gt;

&lt;p&gt;As usual, in Ansible you can use variables or facts to set these dynamically as required.&lt;/p&gt;
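&lt;p&gt;Under the hood these parameters end up as a SEMPv2 base URL. A rough sketch of that resolution (the exact URL shape is my assumption for illustration, not lifted from the module source):&lt;/p&gt;

```python
# Combine the task's connection parameters into a SEMPv2 config API base URL.
# Defaults match the local-broker defaults described above.

def semp_base_url(host="localhost", port=8080, secure_connection=False):
    scheme = "https" if secure_connection else "http"
    return f"{scheme}://{host}:{port}/SEMP/v2/config"

local_url = semp_base_url()
cloud_url = semp_base_url("remote-broker.com", 943, secure_connection=True)
```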

&lt;p&gt;Here’s a basic example to create a VPN:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Playbook to add a vpn named 'foo'
  hosts: localhost
  tasks:
  - name: Add 'foo' VPN
    solace_vpn:
      name: foo
      state: present # default state
      settings:
        enabled: true
        dmrEnabled: false
        eventTransactionCountThreshold:
          clearPercent: 66
          setPercent: 98
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is pretty straightforward, but there are a few things to note:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can request the state of the resource – &lt;strong&gt;present&lt;/strong&gt; or &lt;strong&gt;absent&lt;/strong&gt;. The Ansible tasks will choose the right SEMPv2 method (POST, PATCH, DELETE) accordingly.&lt;/li&gt;
&lt;li&gt;We have omitted the connection parameters for better legibility.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;settings&lt;/strong&gt; dictionary can pass in additional arguments to the POST/PATCH call.&lt;/li&gt;
&lt;/ul&gt;
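&lt;p&gt;The state handling in the first bullet boils down to a small decision table – sketched here for illustration (the actual modules are more involved, e.g. they also have to look up whether the resource currently exists):&lt;/p&gt;

```python
# Map the requested state plus the resource's current existence
# to the SEMPv2 method, mirroring the behaviour described above.

def choose_semp_method(state, exists):
    if state == "present":
        return "PATCH" if exists else "POST"
    if state == "absent":
        return "DELETE" if exists else None  # nothing to do
    raise ValueError(f"unknown state: {state}")
```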

&lt;p&gt;The last point is important as all tasks have the settings argument and you can pass in any parameters that the underlying REST resource supports. It is helpful to have the SEMPv2 developer documentation ready when writing your playbooks.&lt;/p&gt;

&lt;p&gt;Looking at the &lt;a href="https://docs.solace.com/API-Developer-Online-Ref-Documentation/swagger-ui/config/index.html#/msgVpn/createMsgVpn"&gt;create Message VPN resource operation&lt;/a&gt;, we discover the &lt;strong&gt;settings&lt;/strong&gt; in the example above and we can also find the other parameters we could set such as those enabling MQTT Connectivity for example.&lt;/p&gt;

&lt;p&gt;If we wanted to update the message VPN created by the playbook above to enable plain text MQTT, the playbook looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- 
name: Playbook to configure plain text MQTT on a vpn named 'foo'
hosts: localhost
tasks:
  - name: Add 'foo' VPN
    solace_vpn:
      name: foo
      state: present 
      settings:
        serviceMqttPlainTextEnabled: true
        serviceMqttPlainTextListen
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This playbook will preserve the state of the message VPN in all other aspects.&lt;/p&gt;

&lt;p&gt;What if we wanted to delete the message VPN? The playbook looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Playbook to delete a vpn named 'foo'
hosts: localhost
tasks:
  - name: Add 'foo' VPN
    solace_vpn:
      name: foo
      state: absent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, let’s see how we can connect to a remote broker via HTTPS with a non-default password to create a Message VPN:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- 
  name: Playbook to add a vpn named 'foo'
  hosts: localhost
  tasks:
  - name: Add 'foo' VPN
    solace_vpn:
      password: "secret!"
      host: remote-broker.com
      port: 943
      secure_connection: true
      name: foo
      state: present # default state
      settings:
        enabled: true
        dmrEnabled: false
        eventTransactionCountThreshold:
          clearPercent: 66
          setPercent: 98
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How to contribute?
&lt;/h2&gt;

&lt;p&gt;If you find anything that’s missing or could be improved, we value all contributions. You can start by forking our repository and we will be happy to incorporate your pull requests. As stated above, there’s a base class that makes it really easy to add additional modules. The easiest way is to clone an existing module and adjust it as needed. &lt;a href="https://github.com/solace-iot-team/ansible-solace-modules/blob/master/Development.md"&gt;Here’s a detailed guide on module development&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cicd</category>
      <category>ansible</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>PubSub+ - Configuring DMR using SEMPv2</title>
      <dc:creator>swenhelge</dc:creator>
      <pubDate>Tue, 21 Apr 2020 15:44:17 +0000</pubDate>
      <link>https://dev.to/solacedevs/pubsub-configuring-dmr-using-sempv2-361d</link>
      <guid>https://dev.to/solacedevs/pubsub-configuring-dmr-using-sempv2-361d</guid>
      <description>&lt;p&gt;Dynamic Message Routing (DMR) is a mechanism in Solace PubSub+ brokers to create a network or mesh of brokers within which events can flow freely between brokers. It is used in scenarios where events need to be distributed across sites such as in hybrid or multi-cloud or for horizontal scalability when single broker limits are exceeded.&lt;/p&gt;

&lt;p&gt;You can find more information on &lt;a href="https://docs.solace.com/Configuring-and-Managing/DMR.htm"&gt;DMR&lt;/a&gt; on the Solace documentation site.&lt;/p&gt;

&lt;p&gt;In this article I'll describe how to set up or join DMR clusters and establish links between brokers via the PubSub+ broker's REST-based SEMPv2 API.&lt;br&gt;
Two Postman collections illustrate how to create and tear down a DMR connection.&lt;/p&gt;

&lt;p&gt;Full documentation of the SEMPv2 API is available &lt;a href="https://docs.solace.com/SEMP/Using-SEMP.htm"&gt;online&lt;/a&gt;. We will need to use the Config API - whose Swagger API doc you can find &lt;a href="https://docs.solace.com/API-Developer-Online-Ref-Documentation/swagger-ui/config/index.html"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We will look at the scenario of connecting brokers across sites to facilitate data movement e.g. between on-premise and cloud. As a prerequisite you should have access to two PubSub+ brokers. You could use Solace Cloud services (with the exception of the "Free" plan), Standard Edition brokers deployed using Docker (&lt;a href="https://docs.solace.com/Configuring-and-Managing/SW-Broker-Specific-Config/Configuring-SW-Broker-Conn-Scale-Tier-Keys-All.htm"&gt;configured for the 1000 connection scaling tier&lt;/a&gt;) or a combination of both.&lt;/p&gt;

&lt;p&gt;Setting up DMR requires configuration on two resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Message VPN

&lt;ul&gt;
&lt;li&gt;Ensure the VPN is enabled for DMR. This should be the case with new deployments; it may be disabled after an upgrade from a previous version of the broker.&lt;/li&gt;
&lt;li&gt;Add bridges between the local VPN and remote VPN that you want to connect.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;DMR Cluster

&lt;ul&gt;
&lt;li&gt;Create a DMR cluster or join an existing cluster&lt;/li&gt;
&lt;li&gt;Create external or internal links to remote nodes&lt;/li&gt;
&lt;li&gt;Attach the address of the cloud broker to the link on the local broker as the local broker will establish the connection.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DMR links establish the physical connection between nodes, while DMR bridges connect the data channel and enable the flow of events between two Message VPNs.&lt;/p&gt;
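&lt;p&gt;For orientation, here is roughly what the bodies of the SEMPv2 calls for these resources look like. The field names follow my reading of the SEMPv2 config reference – verify them against the Swagger documentation linked above before relying on them:&lt;/p&gt;

```python
# Sketch of the SEMPv2 request bodies for the three resources involved.
# Field names are my best recollection of the config API - double-check
# them against the SEMPv2 Swagger documentation.

def dmr_cluster_body(name, password):
    # POST <base>/dmrClusters on the local broker
    return {
        "dmrClusterName": name,
        "enabled": True,
        "authenticationBasicEnabled": True,
        "authenticationBasicPassword": password,
    }

def dmr_link_body(remote_node, initiator_local=True):
    # POST <base>/dmrClusters/{clusterName}/links
    return {
        "remoteNodeName": remote_node,  # the remote broker's virtual router name
        "enabled": True,
        "initiator": "local" if initiator_local else "remote",
        "span": "external",             # link across sites
    }

def dmr_bridge_body(remote_node, remote_vpn):
    # POST <base>/msgVpns/{msgVpnName}/dmrBridges
    return {
        "remoteNodeName": remote_node,
        "remoteMsgVpnName": remote_vpn,
    }
```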

&lt;h2&gt;
  
  
  Gather information about the brokers
&lt;/h2&gt;

&lt;p&gt;First let's gather the information we need to connect to the SEMPv2 API and other data we need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Broker hostnames - in Solace Cloud you can find a service's host name on the &lt;code&gt;Status&lt;/code&gt; tab of your service - look for "Host Name".&lt;/li&gt;
&lt;li&gt;Broker SEMPv2 base path - host name, HTTP or HTTPS admin port. For a locally deployed broker it may look like &lt;code&gt;http://localhost:8080/SEMP/v2/&lt;/code&gt;. In cloud you can find the SEMPv2 base path on the &lt;code&gt;Manage&lt;/code&gt; tab in the cloud console, look at the &lt;code&gt;SEMP - REST API&lt;/code&gt; section.&lt;/li&gt;
&lt;li&gt;Names of the VPNs that you would like to connect&lt;/li&gt;
&lt;li&gt;Management username and password. Refer to your docker container setup or to your services' Management tab in the Cloud Console.

&lt;ul&gt;
&lt;li&gt;I'll use &lt;code&gt;admin/secret&lt;/code&gt; and &lt;code&gt;admin/cloud&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Primary router name / virtual router name of both brokers. In Solace Cloud you can find this on the &lt;code&gt;Status&lt;/code&gt; tab of your service - look for "Primary Router Name". For any broker you can use the following cURL command, replacing &lt;code&gt;&lt;a href="http://localhost:8080"&gt;http://localhost:8080&lt;/a&gt;&lt;/code&gt; with your broker URL and changing the &lt;code&gt;--user&lt;/code&gt; parameter to your admin username and password. I have added this call to the Postman collection as well.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="nt"&gt;--request&lt;/span&gt; POST &lt;span class="s1"&gt;'http://localhost:8080/SEMP'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--header&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/xml'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--user&lt;/span&gt; admin:secret  &lt;span class="se"&gt;\ &lt;/span&gt;                                 
&lt;span class="nt"&gt;--data-raw&lt;/span&gt; &lt;span class="s1"&gt;'&amp;lt;rpc&amp;gt;
        &amp;lt;show&amp;gt;
        &amp;lt;router-name&amp;gt;      

        &amp;lt;/router-name&amp;gt;
    &amp;lt;/show&amp;gt;
&amp;lt;/rpc&amp;gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
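&lt;p&gt;The reply comes back as SEMP v1 XML. A small sketch of extracting the router name from it – the reply shape below is illustrative only, so check it against your broker's actual response:&lt;/p&gt;

```python
# Pull the router name out of a SEMP v1 show reply.
# The sample reply shape is an assumption, not a captured response.
import xml.etree.ElementTree as ET

SAMPLE_REPLY = """
<rpc-reply>
  <rpc>
    <show>
      <router-name>
        <router-name>local-router</router-name>
      </router-name>
    </show>
  </rpc>
</rpc-reply>
"""

def extract_router_name(reply_xml):
    root = ET.fromstring(reply_xml)
    # take the innermost non-empty <router-name> text
    names = [e.text.strip() for e in root.iter("router-name")
             if e.text and e.text.strip()]
    return names[0] if names else None

router = extract_router_name(SAMPLE_REPLY)
```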



&lt;ul&gt;
&lt;li&gt;If using Solace cloud you need the following information about the cluster that is pre-configured on the service. You can find this on the &lt;code&gt;Status&lt;/code&gt; tab of your service

&lt;ul&gt;
&lt;li&gt;Cluster name&lt;/li&gt;
&lt;li&gt;Cluster password&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the Postman collections I'll use the following parameters for the brokers. I set these up in an environment so you can easily import the collection, adjust the environment and test against your brokers. &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;Local broker (docker)&lt;/th&gt;
&lt;th&gt;Solace Cloud Service&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Host Name&lt;/td&gt;
&lt;td&gt;localhost&lt;/td&gt;
&lt;td&gt;service.messaging.solace.cloud&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Base URL&lt;/td&gt;
&lt;td&gt;&lt;a href="http://localhost:8080/SEMP/v2/"&gt;http://localhost:8080/SEMP/v2/&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://service.messaging.solace.cloud:943/SEMP/v2/"&gt;https://service.messaging.solace.cloud:943/SEMP/v2/&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Message VPN&lt;/td&gt;
&lt;td&gt;default&lt;/td&gt;
&lt;td&gt;cloud-vpn&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Management username&lt;/td&gt;
&lt;td&gt;admin&lt;/td&gt;
&lt;td&gt;admin&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Management password&lt;/td&gt;
&lt;td&gt;secret&lt;/td&gt;
&lt;td&gt;cloud&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Primary Router Name&lt;/td&gt;
&lt;td&gt;local-router&lt;/td&gt;
&lt;td&gt;cloud-router&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DMR Cluster Name&lt;/td&gt;
&lt;td&gt;zone-1&lt;/td&gt;
&lt;td&gt;cloud-cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DMR Cluster Password&lt;/td&gt;
&lt;td&gt;cluster-1&lt;/td&gt;
&lt;td&gt;cloud-secret&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Setting up the DMR cluster and connection
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://documenter.getpostman.com/view/1386967/Szf81nz4"&gt;This&lt;/a&gt; Postman collection illustrates how to setup the DMR cluster and connection between local and cloud broker.&lt;br&gt;
Executing all the request in the order they appear in the collection - manually or by using Postman's "Run" dialog - results in a functional external link between the brokers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Removing the connection and the DMR cluster
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://documenter.getpostman.com/view/1386967/Szf81nz7"&gt;This&lt;/a&gt; collection removes the connection created by the previous collection.&lt;br&gt;
Executing all the request in the order they appear in the collection - manually or by using Postman's "Run" dialog - undos all configuration.&lt;/p&gt;

</description>
      <category>solace</category>
      <category>rest</category>
      <category>management</category>
      <category>broker</category>
    </item>
    <item>
      <title>Installing Photon OS on a Dell Edge Gateway 3001/3002</title>
      <dc:creator>swenhelge</dc:creator>
      <pubDate>Tue, 17 Sep 2019 13:18:48 +0000</pubDate>
      <link>https://dev.to/solacedevs/installing-photon-os-on-a-dell-edge-gateway-3001-3002-50dd</link>
      <guid>https://dev.to/solacedevs/installing-photon-os-on-a-dell-edge-gateway-3001-3002-50dd</guid>
      <description>&lt;p&gt;Installing Photon OS - actually any OS - on the Dell Edge Gateway 300x series is generally quite tricky as neither the 3001 nor 3002 offer a display port.&lt;br&gt;
Any OS installation therefore must be headless and automated.&lt;br&gt;
During a recent attempt to install Photon OS I found the instructions provided for the Edge Gateway 300x on the Photon OS site missed a bit of detail.&lt;br&gt;
Fortunately I got some very good instructions from a helpful person at VMware that got me nearly there with a few tweaks that I had to figure out.&lt;/p&gt;

&lt;p&gt;In this post I’ll provide a step-by-step guide to creating a bootable USB stick for an automated Photon OS installation and running it on the Dell Gateway.&lt;br&gt;
The Photon OS ISO includes configuration for both grub and MBR boot options. The instructions on the Photon OS site (&lt;a href="https://vmware.github.io/photon/assets/files/html/3.0/photon_installation/Installing-Photon-OS-on-Dell-300X.html"&gt;https://vmware.github.io/photon/assets/files/html/3.0/photon_installation/Installing-Photon-OS-on-Dell-300X.html&lt;/a&gt;) reference the grub-based approach.&lt;br&gt;
I’ll use the MBR approach instead - the basic steps are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a bootable USB media from the Photon OS ISO image&lt;/li&gt;
&lt;li&gt;Configure the boot menu for automated boot&lt;/li&gt;
&lt;li&gt;Configure kickstart for headless installation&lt;/li&gt;
&lt;li&gt;Add BIOS script to instruct Gateway to boot from USB&lt;/li&gt;
&lt;li&gt;Install Photon OS onto the Gateway using the USB media&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's best to use a USB stick with an indicator LED for disk activity as it will make it easier to monitor the installation process.&lt;/p&gt;
&lt;h2&gt;
  
  
  Create bootable USB media from Photon OS ISO
&lt;/h2&gt;

&lt;p&gt;Download the ISO file for your processor architecture (x86/64 for the Dell Edge GW) from the Photon OS site: &lt;a href="https://github.com/vmware/photon/wiki/Downloading-Photon-OS"&gt;https://github.com/vmware/photon/wiki/Downloading-Photon-OS&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the contents of the ISO onto a USB stick. The stick must be formatted as FAT and bootable via MBR.&lt;/p&gt;

&lt;p&gt;On Windows this can easily be achieved using Rufus: select “ISO file” under “Boot selection” and locate the Photon ISO file on your disk.&lt;br&gt;
Make sure the options are similar to those shown below. If you don’t see the “FAT32 (Default)” option for “File System”, go with whatever other FAT32 option is offered and use its default cluster size.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2LLu3sFQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/6n070f9b04l7ha26e3pq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2LLu3sFQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/6n070f9b04l7ha26e3pq.png" alt="rufus screen for creating bootable USB stick"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Configure the boot menu for automated start
&lt;/h2&gt;

&lt;p&gt;Open the syslinux.cfg file on the USB stick that you have prepared and verify it looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DEFAULT loadconfig

LABEL loadconfig
  CONFIG /isolinux/isolinux.cfg
  APPEND /isolinux/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file references &lt;code&gt;isolinux.cfg&lt;/code&gt; in the &lt;code&gt;isolinux&lt;/code&gt; folder. Open it in a text editor.&lt;br&gt;
The line &lt;code&gt;default vesamenu.c32&lt;/code&gt; in this file forces the install into graphical mode, even though the prompt and timeout values should ensure the boot runs automatically.&lt;br&gt;
Change the line to &lt;code&gt;default install&lt;/code&gt;; your file should look similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# D-I config version 2.0
include menu.cfg
default install
prompt 0
timeout 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The boot menu is now configured to auto-run the &lt;code&gt;install&lt;/code&gt; option.&lt;/p&gt;
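&lt;p&gt;If you prepare installation media like this more than once, the one-line change can be scripted instead of edited by hand. A minimal Python sketch (the helper and the commented-out path are purely illustrative, not part of the Photon tooling):&lt;/p&gt;

```python
from pathlib import Path

def enable_autorun(cfg_text):
    """Switch the isolinux boot menu from the graphical menu to the
    automated 'install' option."""
    return cfg_text.replace("default vesamenu.c32", "default install")

# Example usage against the mounted USB stick (path is hypothetical):
# cfg = Path("/mnt/usb/isolinux/isolinux.cfg")
# cfg.write_text(enable_autorun(cfg.read_text()))
```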

&lt;p&gt;The next steps will configure the &lt;code&gt;install&lt;/code&gt; option for automated installation using kickstart.&lt;br&gt;
Open &lt;code&gt;/isolinux/menu.cfg&lt;/code&gt; and change the &lt;code&gt;append&lt;/code&gt; line as below so that kickstart picks up a configuration file (accessible as a CD-ROM):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;menu hshift 7
menu width 61

menu title Photon installer boot menu
include stdmenu.cfg

default install
label install
    menu label ^Install
    menu default
    kernel vmlinuz
    append initrd=initrd.img root=/dev/ram0 loglevel=3 photon.media=cdrom ks=cdrom:/isolinux/sample_ks.cfg console=ttyS0,115200n8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configure kickstart for headless installation
&lt;/h2&gt;

&lt;p&gt;Now let's review &lt;code&gt;/isolinux/sample_ks.cfg&lt;/code&gt;, the file that configures the automated installation. The changes required in this file are also documented on the Photon OS site - it's the same file the grub boot option uses.&lt;br&gt;
The important part is to adjust the &lt;code&gt;postinstall&lt;/code&gt; section to enable remote SSH login for the root account so you can actually connect to the Gateway once the installation is finished.&lt;br&gt;
The &lt;code&gt;postinstall&lt;/code&gt; step simply patches &lt;code&gt;/etc/ssh/sshd_config&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Adjust the &lt;code&gt;hostname&lt;/code&gt; and &lt;code&gt;password.text&lt;/code&gt; as needed and take note of the values, as you will need them to connect to the Gateway after installation.&lt;br&gt;
Note that for the 300X series the &lt;code&gt;disk&lt;/code&gt; device name must be specified as &lt;code&gt;/dev/mmcblk0&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "hostname": "photon-machine",
    "password": 
        {
            "crypted": false,
            "text": "Secret!"
        },
    "disk": "/dev/mmcblk0",
    "partitions": [
                        {"mountpoint": "/", "size": 0, "filesystem": "ext4"},
                        {"mountpoint": "/boot", "size": 128, "filesystem": "ext4"},
                        {"mountpoint": "/root", "size": 128, "filesystem": "ext4"},
                        {"size": 512, "filesystem": "swap"}
                    ],
    "packagelist_file": "packages_minimal.json",
    "additional_packages": ["vim"],
    "postinstall": [
                        "#!/bin/sh",
                        "sed -i 's/PermitRootLogin no/PermitRootLogin yes/g' /etc/ssh/sshd_config"
                   ],
    "public_key": "&amp;lt;ssh-key-here&amp;gt;",
    "install_linux_esx": false
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
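&lt;p&gt;Because the kickstart file is plain JSON, a typo will make the headless installation fail with no screen to tell you why, so it's worth sanity-checking the file before booting. A small sketch of such a check (our own helper; the key list mirrors the sample above and is not a complete schema):&lt;/p&gt;

```python
import json

def check_kickstart(text):
    """Parse the kickstart JSON and verify the keys this installation relies on."""
    ks = json.loads(text)  # raises ValueError if the JSON is malformed
    for key in ("hostname", "password", "disk", "partitions", "postinstall"):
        assert key in ks, f"missing key: {key}"
    # On the 300X series the OS must be installed to the eMMC device
    assert ks["disk"] == "/dev/mmcblk0", "disk must be /dev/mmcblk0 on the 300X"
    return ks

# Example usage:
# with open("sample_ks.cfg") as f:
#     check_kickstart(f.read())
```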



&lt;p&gt;Make sure there is a &lt;code&gt;packages_minimal.json&lt;/code&gt; in the &lt;code&gt;isolinux&lt;/code&gt; folder and that it looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "packages": [
                 "minimal",
                 "linux",
                 "initramfs"
                 ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Add BIOS script to instruct Gateway to boot from USB
&lt;/h2&gt;

&lt;p&gt;In order to force the Gateway to boot from the USB stick we will need to add a file that instructs the BIOS to do so.&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;UsbInvocationScript.txt&lt;/code&gt; file in the root of the USB drive with the content below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;usb_disable_secure_boot noreset;
usb_one_time_boot usb;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Gateway Manual provides more information on this topic; see &lt;a href="https://www.dell.com/support/manuals/uk/en/ukbsdt1/dell-edge-gateway-3000-series/dell-edge_gateway-3003-install_manual/using-the-usb-invocation-script?guid=guid-bda5b4d9-1d1c-40df-81b9-c652a55f43a4&amp;amp;lang=en-us"&gt;Using USB invocation scripts&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install onto Gateway using the USB media
&lt;/h2&gt;

&lt;p&gt;Now you are ready to install the OS onto the Gateway. &lt;/p&gt;

&lt;p&gt;There are two visual indicators for the installation process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The USB stick's activity LED&lt;/li&gt;
&lt;li&gt;The Gateway's status LEDs - only two will be active: the power and network LEDs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Power off the Gateway by unplugging the network/PoE or power cable. Insert your USB stick and power up the Gateway again.&lt;/p&gt;

&lt;p&gt;I'll describe what to expect in general but will skip the details.&lt;br&gt;
If you want to understand in detail which Gateway LEDs you should expect to flash and in which sequence, have a look at the "Edge Gateway USB script utility User's Guide", accessible from &lt;a href="https://www.dell.com/support/manuals/uk/en/ukbsdt1/dell-edge-gateway-3000-series/dell-edge_gateway-3003-install_manual/using-the-usb-invocation-script?guid=guid-bda5b4d9-1d1c-40df-81b9-c652a55f43a4&amp;amp;lang=en-us"&gt;Using USB invocation scripts&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once you have powered up the Gateway, I'd recommend monitoring the installation closely.&lt;br&gt;
The following will happen:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Power and Network LED turn green&lt;/li&gt;
&lt;li&gt;The Gateway scans for &lt;code&gt;UsbInvocationScript.txt&lt;/code&gt; files on the USB interfaces.&lt;/li&gt;
&lt;li&gt;The USB stick's LED flashes once the Gateway has found our invocation script.&lt;/li&gt;
&lt;li&gt;The Gateway LEDs flash in different patterns as the BIOS script is executed. The flash patterns correspond to the commands in &lt;code&gt;UsbInvocationScript.txt&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;Eventually the power LED turns green and the network LED turns amber, and both stay on continuously.&lt;/li&gt;
&lt;li&gt;The activity LED of the USB stick flashes indicating that the Gateway boots from the stick and has started copying data during the installation.&lt;/li&gt;
&lt;li&gt;Eventually, after 5 to 10 minutes, both Gateway LEDs will turn green again. &lt;strong&gt;Unplug your USB stick as soon as that happens, otherwise the Gateway may reboot from the USB stick and kick off the installation again.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the installation is complete, look up the Gateway's IP address on your router. Depending on your router, you can identify it by the Gateway's MAC address or by the host name set in &lt;code&gt;/isolinux/sample_ks.cfg&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This is how it looks on my router, showing the IP, host name and MAC address:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--350kE7-O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/jjqsel8p6ch1tz560i9h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--350kE7-O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/jjqsel8p6ch1tz560i9h.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can now connect to the Gateway via SSH using the IP address above and the root password that you set in the &lt;code&gt;/isolinux/sample_ks.cfg&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh root@192.168.0.112
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>linux</category>
      <category>iot</category>
    </item>
    <item>
      <title>Designing, Documenting and Testing Event APIs for IoT Platforms</title>
      <dc:creator>swenhelge</dc:creator>
      <pubDate>Thu, 15 Aug 2019 18:42:21 +0000</pubDate>
      <link>https://dev.to/solacedevs/designing-documenting-and-testing-event-apis-for-iot-platforms-20j2</link>
      <guid>https://dev.to/solacedevs/designing-documenting-and-testing-event-apis-for-iot-platforms-20j2</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0pzcUya7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/03/DARK_Service-Mesh-Meet-Event-Mesh-.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0pzcUya7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/03/DARK_Service-Mesh-Meet-Event-Mesh-.png" alt=""&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;In this article we will dive deeper into how we designed and documented the Event APIs of our IoT reference platform that we outlined in our &lt;a href="https://dev.to/solacedevs/rapid-iot-prototyping-with-the-bosch-xdk110-and-an-event-mesh-374p-temp-slug-4334168"&gt;previous blog post about prototyping an IoT platform&lt;/a&gt;. We will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Describe the event flow between platform components&lt;/li&gt;
&lt;li&gt;Identify the core APIs for the use case&lt;/li&gt;
&lt;li&gt;Explain how we designed and documented Event APIs and provide examples and links to the full design and documentation.&lt;/li&gt;
&lt;li&gt;Briefly explain how we on-boarded and tested partner applications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Throughout the project we started to adopt &lt;a href="https://www.asyncapi.com/"&gt;AsyncAPI&lt;/a&gt;, so we will also briefly outline how AsyncAPI and Solace’s &lt;a href="https://solace.com/press-center/event-horizon/"&gt;Event Horizon&lt;/a&gt; initiative will eventually provide full, tool-supported Event API life-cycle management. The APIs and tooling described here can be found on &lt;a href="https://github.com/solace-iot-team/iot-platform-api-docs"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In our earlier post we introduced a reference platform for IoT that we built with several partners. Solace’s event mesh connects all the components of the platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://solace.com/wp-content/uploads/2019/08/iot-prototype-blog-post_05-1.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SpPkSSFO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/08/iot-prototype-blog-post_05-1-1024x484.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We had to ensure that all our and our partner’s contributions played together, and that development could be carried out independently by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Designing the contract between the components&lt;/li&gt;
&lt;li&gt;Documenting this contract&lt;/li&gt;
&lt;li&gt;On-boarding partners onto the event mesh&lt;/li&gt;
&lt;li&gt;Coming up with a way to independently test contributions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words, we had to design, document, and test event-driven APIs.&lt;br&gt;&lt;br&gt;
The diagram below depicts the event flow between our platform components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sensor data/telemetry flows from the device.&lt;/li&gt;
&lt;li&gt;This data is consumed/processed by visual analytics – Altair Panopticon.&lt;/li&gt;
&lt;li&gt;Telemetry is also fed into SL’s RTView-based real-time dashboards.&lt;/li&gt;
&lt;li&gt;Commands and configuration events flow from a custom user command and control app implemented in Dell Boomi to the device.&lt;/li&gt;
&lt;li&gt;Configuration events signal a status change – this is of interest to analytics systems. For example, setting devices into race mode indicates the start of a race on our slot-car track.&lt;/li&gt;
&lt;li&gt;Notifications and alerts are created based on data analytics and are consumed to create service cases in Salesforce.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://solace.com/wp-content/uploads/2019/08/iot-apis-blog-post_image-2.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lXAzK66z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/08/iot-apis-blog-post_image-2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looking at this use case, we identified three functional areas or APIs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Telemetry&lt;/strong&gt; – everything related to sensor data from the device&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Command &amp;amp; Configuration&lt;/strong&gt; – control and status messages from and to the device&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notification&lt;/strong&gt; – alert and notification events that can be consumed and used to generate actions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you can see, we separated the data and control communication with the device into two APIs. This enables us to do more fine-grained routing of events through the IoT Event Mesh.&lt;/p&gt;

&lt;p&gt;The actors in the diagram above fall into two different categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Distributed devices – the Bosch XDK110&lt;/li&gt;
&lt;li&gt;Platform services including analytics, device configuration, dashboards, and monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Compared to REST/HTTP, the roles of the actors in an Event API are harder to classify. In REST, a client initiates a request, a server provides the response, and this communication is always point-to-point. Event APIs are much more varied: they are inherently bi-directional, and they support publish/subscribe and request/reply message exchanges. This makes it harder to assign client and server roles.&lt;/p&gt;

&lt;p&gt;However, we still decided to identify the client side of an interaction and the service provider, and to describe the interaction from the client’s point of view, similar to REST APIs. We considered topics as resource identifiers and assigned the provider role to the actor that manages the data.&lt;/p&gt;

&lt;p&gt;In the case of device interaction, this means we identify the device as the client that pushes data to a telemetry resource, from which the provider consumes and processes the events. For configuration data, we consider the service provider to be the device or configuration management application, as it is responsible for maintaining configuration and notifying the client – the device – when changes occur.&lt;/p&gt;
&lt;h2&gt;
  
  
  How to design event-driven APIs?
&lt;/h2&gt;

&lt;p&gt;Expanding on the previous sections we identified the following aspects of an Event API that we would need to address:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Actors&lt;/strong&gt; – client and service provider&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Channels&lt;/strong&gt; these actors communicate on – topics or queues&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exchange patterns&lt;/strong&gt; – publish, subscribe, request-response&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Intent or verbs&lt;/strong&gt; – Is the event intended as a create, update, or delete? We considered it important to include the verb similarly to the HTTP request method.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Payloads&lt;/strong&gt; – the event data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Attributes&lt;/strong&gt; such as protocols, Quality of Service, security.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As mentioned previously, we decided to describe the interaction from the client’s point of view with a provider implementation mirroring this interaction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://solace.com/wp-content/uploads/2019/08/iot-apis-blog-post_image-3.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i9U05DsM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/08/iot-apis-blog-post_image-3.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Solace PubSub+ is a polyglot event broker and we leveraged a number of protocols for the showcase such as MQTT, REST, SMF (Solace API), and WebSockets based on the client’s or provider’s preferred technology/APIs. This is quite a powerful feature when it comes to rapid integration of many different technologies, but it has its drawbacks as well. The implication is that we can only rely on the event characteristics that all these protocols share in order to maintain interoperability.&lt;/p&gt;

&lt;p&gt;The typical approach to solving this would be to create a convention for a message envelope that encloses message headers as well as the payload. However, this has some fundamental drawbacks as well. Each application needs to parse the payload to extract the headers and only then apply processing rules. This is expensive not only in development and maintenance but also at runtime, because it typically results in each message being parsed multiple times in the system.&lt;/p&gt;

&lt;p&gt;An alternative approach is to make use of fine-grained topic structures to convey all required information at run time about the event. In practice, this requires that the event broker be able to support potentially many millions of topics – which is exactly one of the characteristics the Solace PubSub+ broker has. This is the approach we have chosen here.&lt;/p&gt;

&lt;p&gt;Essentially, the topic becomes the way to communicate meta information such as the verb (or operation), the resource identifier and resource hierarchy or the expected payload type.&lt;/p&gt;

&lt;p&gt;Therefore, we need to define conventions for the topic namespace and to design this namespace carefully. The way Solace PubSub+ implements topics helps us to create a flexible and extendable namespace. This &lt;a href="https://solace.com/blog/solace-topics-vs-kafka-topics/"&gt;blog post&lt;/a&gt; contrasts Solace’s topic implementation to a more traditional approach and explains it in more detail:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“With Solace, topics aren’t actually configured on the broker: they’re instead defined by the publishing application at publish time. They can be thought of as a property of a message, or metadata, and not like “writing to a file” or “sending to a destination.” Each and every message could be published with a unique topic.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is also why it’s easy to commission additional IoT devices into our solution that can communicate on dedicated topics immediately without additional broker configuration.&lt;/p&gt;

&lt;p&gt;We defined the following convention for topic names (you can find more details and examples in this &lt;a href="https://github.com/solace-iot-team/iot-platform-api-docs/blob/master/topic.md"&gt;GitHub project&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{method}/{representation}/{base-topic}/{version}/{resource-categorization}/{resource}/{id}/{aspect}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This convention is loosely based on typical REST resource identifiers and also incorporates some information that in the REST world is typically conveyed via HTTP headers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Method&lt;/strong&gt; – indicates clearly the action that the recipient of the event shall take such as “CREATE”, “UPDATE”, and “REPLACE”. Equivalent to HTTP verbs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Representation&lt;/strong&gt; – the data format used such as JSON or XML&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Base topic&lt;/strong&gt; – similar to the base URL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Version&lt;/strong&gt; – the API version&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource categorization&lt;/strong&gt; – this can be a hierarchical location, functional categorisation or similar&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource&lt;/strong&gt; – such as device or IoT gateway&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Id&lt;/strong&gt; – the unique resource Id&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Aspect&lt;/strong&gt; – such as metrics, status, configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As we standardised on JSON as the payload format and didn’t intend to introduce versioning, we omitted these items from the topics we defined.&lt;/p&gt;

&lt;p&gt;For a device emitting telemetry data the destination topic looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE/iot-control/bcw/solacebooth/handheld-solace/device/7bbb15d6-ca36-498a-9d52-7ddcef2a1d75/metrics
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
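&lt;p&gt;To make the convention concrete, here is how such a topic could be assembled in code, omitting the representation and version segments as described above (the helper and its argument names are ours, not part of any Solace API):&lt;/p&gt;

```python
def build_topic(method, base_topic, categorization, resource, resource_id, aspect):
    """Assemble a topic string following our convention; representation and
    version are omitted because we standardised on unversioned JSON."""
    parts = [method, base_topic] + list(categorization) + [resource, resource_id, aspect]
    return "/".join(parts)

# Rebuilds the telemetry destination topic shown in this post
topic = build_topic(
    "CREATE", "iot-control",
    ["bcw", "solacebooth", "handheld-solace"],
    "device", "7bbb15d6-ca36-498a-9d52-7ddcef2a1d75", "metrics",
)
```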



&lt;p&gt;Armed with all of this, we started describing our API design. We used AsyncAPI for this, an initiative similar to OpenAPI in the REST world.&lt;/p&gt;

&lt;p&gt;Let’s have a quick look at the channels and messages defined in the Telemetry API.&lt;/p&gt;

&lt;p&gt;It defines one channel that the client – the device – is supposed to publish events to (see the “publish” definition in the excerpt below). The intent of the client is to create a new telemetry event, hence the verb used is “CREATE”. You will recognize the topic design outlined above. The elements of the resource categorization and the device id are defined as parameters in the channel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;channels: CREATE/iot-event/{region}/{location}/{production-line}/device/{deviceId}/metrics: description: Topic to subscribe to connected events parameters: - $ref: '#/components/parameters/region' - $ref: '#/components/parameters/location' - $ref: '#/components/parameters/production-line' - $ref: '#/components/parameters/deviceId' publish: summary: device publishes sensor readings. each event represents a new (create) reading for the timestamp. operationId: publishTelemetry message: $ref: '#/components/messages/telemetryMessage'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;telemetryMessage&lt;/code&gt; transferred via this channel is referenced above; you can see how it is defined in the components section.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;components: messages: telemetryMessage: name: telemetryMessage title: Telemetry Message summary: Telemetry Message description: "Depending on the mode of the device, the telemetry message can contain all of the sensor readings or only a partial set. For units and ranges please see the datasheet for the XDK. You can find it on this site: https://xdk.bosch-connectivity.com" contentType: application/json payload: $ref: "#/components/schemas/telemetryEvent" schemas: telemetryEvent: type: array items: type: object title: Telemetry Reading/Sample
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The components section also defines schemas and parameters using JSON Schema.&lt;/p&gt;

&lt;p&gt;The second API we created was for the device to receive command and configuration events.&lt;/p&gt;

&lt;p&gt;It was important to use a topic hierarchy that allowed us to target either a specific device or a group of devices on a production line, a location, or region. To achieve this, we designed a set of topics (or channels) that follows our design convention and creates a unique topic for each of the devices and device groups.&lt;/p&gt;

&lt;p&gt;Here’s an excerpt from the channel hierarchy, which strips out elements of the categorisation one by one to address a higher-level group. You can see that a device subscribes to a number of topics; once again, these topics do not need to be defined on the broker upfront, and the broker supports millions of topics.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;channels: UPDATE/iot-control/{region}/{location}/{production-line}/device/{deviceId}/configuration: description: device subscribes to configuration events. each event represents an update to the existing configuration. parameters: - $ref: '#/components/parameters/region' - $ref: '#/components/parameters/location' - $ref: '#/components/parameters/production-line' - $ref: '#/components/parameters/deviceId' subscribe: summary: Subscribe to device specific configuration events operationId: subscribeDeviceConfiguration message: $ref: '#/components/messages/configurationMessage' UPDATE/iot-control/{region}/{location}/{production-line}/device/configuration: description: device subscribes to configuration events. each event represents an update to the existing configuration. parameters: - $ref: '#/components/parameters/region' - $ref: '#/components/parameters/location' - $ref: '#/components/parameters/production-line' subscribe: summary: Subscribe to production line specific configuration events operationId: subscribeProductionLineConfiguration message: $ref: '#/components/messages/configurationMessage'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
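&lt;p&gt;A subscriber or a test harness can match an incoming topic against these parameterized channel names by turning each placeholder into a capture group. A sketch of the idea in Python (our own helper, not part of the AsyncAPI tooling; the sample region and location values are made up):&lt;/p&gt;

```python
import re

def match_channel(channel, topic):
    """Match a concrete topic against a parameterized AsyncAPI channel name.
    Returns a dict of parameter values, or None if the topic does not match."""
    names = re.findall(r"\{([\w-]+)\}", channel)          # parameter names, in order
    pattern = re.sub(r"\{[\w-]+\}", "([^/]+)", channel)   # one group per parameter
    m = re.fullmatch(pattern, topic)
    if m is None:
        return None
    return dict(zip(names, m.groups()))

channel = "UPDATE/iot-control/{region}/{location}/{production-line}/device/{deviceId}/configuration"
params = match_channel(channel, "UPDATE/iot-control/emea/berlin/line-1/device/device-42/configuration")
# params maps region, location, production-line and deviceId to their values
```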



&lt;p&gt;You can find the full definitions for all three APIs on GitHub:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/solace-iot-team/iot-platform-api-docs/blob/master/api-definitions/iot-event.yml"&gt;Telemetry API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/solace-iot-team/iot-platform-api-docs/blob/master/api-definitions/iot-command-configuration.yml"&gt;Command &amp;amp; Configuration API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/solace-iot-team/iot-platform-api-docs/blob/master/api-definitions/notification-api.yml"&gt;Notification API&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are not very easy on the eye, so let’s see how we can create HTML or Markdown documentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Documentation
&lt;/h2&gt;

&lt;p&gt;We first started out handcrafting our documentation on our internal wiki. However, as AsyncAPI progressed over time, it became the best option for maintaining the API design and generating documentation as required. We have published the full documentation &lt;a href="https://github.com/solace-iot-team/iot-platform-api-docs"&gt;on GitHub&lt;/a&gt; in Markdown format so it renders really nicely.&lt;/p&gt;

&lt;p&gt;The AsyncAPI tooling is still in its early days; however, we found their template-based generator tool quite useful. It comes with HTML and Markdown templates.&lt;/p&gt;

&lt;p&gt;You can explore how to design APIs and create different documentation formats on the &lt;a href="https://playground.asyncapi.io/?utm_source=editor&amp;amp;utm_medium=editor-topbar&amp;amp;utm_campaign=playground"&gt;AsyncAPI playground&lt;/a&gt; for the upcoming 2.0.0 release of the specification.&lt;/p&gt;

&lt;p&gt;It loads an example API:&lt;br&gt;&lt;br&gt;
&lt;a href="https://solace.com/wp-content/uploads/2019/08/iot-apis-blog-post_image-4.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VUflWg6C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/08/iot-apis-blog-post_image-4-1024x461.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s load an API definition directly from our GitHub:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;copy this URL &lt;a href="https://raw.githubusercontent.com/solace-iot-team/iot-platform-api-docs/master/api-definitions/iot-event.yml"&gt;https://raw.githubusercontent.com/solace-iot-team/iot-platform-api-docs/master/api-definitions/iot-event.yml&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;paste this URL into the textbox that reads “Type the URL of the AsyncAPI …” and then click “Import document”:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://solace.com/wp-content/uploads/2019/08/iot-apis-blog-post_image-5.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OgXqGO2B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://solace.com/wp-content/uploads/2019/08/iot-apis-blog-post_image-5-1024x575.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can toggle the different views between HTML and markdown. The &lt;a href="https://github.com/asyncapi/generator"&gt;AsyncAPI Generator&lt;/a&gt; allows you to automate document generation as part of a build process in a similar manner to this web tool.&lt;/p&gt;

&lt;p&gt;To explore our three APIs in detail, start with the &lt;a href="https://github.com/solace-iot-team/iot-platform-api-docs"&gt;readme on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Onboarding
&lt;/h2&gt;

&lt;p&gt;We manually onboarded applications onto the PubSub+ platform. Onboarding of an API client or provider required the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an Access Control List (ACL) for each application: the ACL grants access to the topics that an application utilising one or more APIs needs to publish and subscribe to, as per the API definitions. A future enhancement could be to generate these ACL profiles directly from the AsyncAPI definition.&lt;/li&gt;
&lt;li&gt;Create a client username for the application being registered. At this time Solace supports username/password and client certificate authentication; OAuth support utilising an external Authorization Server will be released shortly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here at Solace we have things in the pipeline that will automate this process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Event Portal – will provide application lifecycle capabilities that provision the information above into a Solace Event Mesh. Read more in the “What’s next?” section below.&lt;/li&gt;
&lt;li&gt;A set of IoT APIs that simplify integration with Device Management platforms and provide capabilities to provision device types and devices into the Event Mesh. This will be released as part of our IoT Labs repository.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  API Testing
&lt;/h2&gt;

&lt;p&gt;As described in the &lt;a href="https://dev.to/solacedevs/rapid-iot-prototyping-with-the-bosch-xdk110-and-an-event-mesh-374p-temp-slug-4334168"&gt;previous blog post in this series&lt;/a&gt;, we mainly used MQTT Box to test APIs. In the future we may be able to use the AsyncAPI Generator to generate stub applications and build automated tests based on these – at the time of writing the code generation templates are at a lower level of maturity compared to the document generation, so pursuing this avenue was not deemed feasible for our project.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s next?
&lt;/h2&gt;

&lt;p&gt;Solace recently announced its &lt;a href="https://solace.com/press-center/event-horizon/"&gt;Event Horizon&lt;/a&gt; initiative, aiming to create a developer experience and ecosystem around Event APIs similar to the synchronous API/REST experience.&lt;/p&gt;

&lt;p&gt;The first step will be an Event Portal that enables the creation and discovery of events and the definition of event-driven application interfaces. Solace is also contributing to frameworks such as Spring Cloud Stream and AsyncAPI, adding code generators for Spring Cloud Stream, which will accelerate Event API implementation and aid in testing.&lt;/p&gt;

&lt;p&gt;In the IoT space we will add more open source capabilities for Device Management and IoT Platform integration. We will also evolve the IoT APIs described in this blog post as we add more prototypes and examples on other platforms such as Azure IoT Hub and Azure Sphere.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;A crucial factor in the development of our prototyping showcase was to design our event-driven IoT APIs upfront, similar to contract-first development with REST APIs. By making sure the topic structure/namespace we used was flexible and extendable, we avoided payload introspection and enabled fine-grained and flexible filtering for every subscriber.&lt;/p&gt;

&lt;p&gt;We followed a typical API management lifecycle to design and document our IoT APIs, through to on-boarding of applications and testing. AsyncAPI proved very useful in the design and documentation phases. As the Solace Event Portal, AsyncAPI and code generators evolve, the full API life-cycle will eventually be tool-supported.&lt;/p&gt;

&lt;p&gt;The next article (coming soon) will focus on the implementation of the custom RTOS C application we developed for the Bosch XDK110, which implements the device APIs described here.&lt;/p&gt;

&lt;p&gt;Stay tuned!&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://solace.com/blog/designing-event-apis-iot-platforms/"&gt;Designing, Documenting and Testing Event APIs for IoT Platforms&lt;/a&gt; appeared first on &lt;a href="https://solace.com"&gt;Solace&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>eventdriven</category>
      <category>iot</category>
      <category>asyncapi</category>
      <category>api</category>
    </item>
  </channel>
</rss>
